rtslib-fb-2.1.74/.gitignore

debian/changelog
dpkg-buildpackage.log
dpkg-buildpackage.version
*.swp
*.swo
build-stamp
build/*
debian/files
debian/python-rtslib.debhelper.log
debian/python-rtslib.substvars
debian/python-rtslib/
debian/rtslib-doc.debianebhelper.log
debian/rtslib-doc.substvars
debian/rtslib-doc/
debian/tmp/
dist/*
doc/*
*.pyc
debian/python-rtslib.substvars
debian/rtslib-doc.debhelper.log
debian/tmp/
rtslib-*

rtslib-fb-2.1.74/COPYING

                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

   "License" shall mean the terms and conditions for use, reproduction,
   and distribution as defined by Sections 1 through 9 of this document.

   "Licensor" shall mean the copyright owner or entity authorized by
   the copyright owner that is granting the License.

   "Legal Entity" shall mean the union of the acting entity and all
   other entities that control, are controlled by, or are under common
   control with that entity. For the purposes of this definition,
   "control" means (i) the power, direct or indirect, to cause the
   direction or management of such entity, whether by contract or
   otherwise, or (ii) ownership of fifty percent (50%) or more of the
   outstanding shares, or (iii) beneficial ownership of such entity.

   "You" (or "Your") shall mean an individual or Legal Entity
   exercising permissions granted by this License.

   "Source" form shall mean the preferred form for making modifications,
   including but not limited to software source code, documentation
   source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. 
Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. 
Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. rtslib-fb-2.1.74/Makefile000066400000000000000000000042751372067225600151530ustar00rootroot00000000000000# This file is part of RTSLib. # Copyright (c) 2011-2013 by Datera, Inc # # Licensed under the Apache License, Version 2.0 (the "License"); you may # not use this file except in compliance with the License. You may obtain # a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, WITHOUT # WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
See the
# License for the specific language governing permissions and limitations
# under the License.
#

PKGNAME = rtslib-fb
NAME = rtslib
GIT_BRANCH = $$(git branch | grep \* | tr -d \*)
VERSION = $$(basename $$(git describe --tags | tr - . | grep -o '[0-9].*$$'))

all:
	@echo "Usage:"
	@echo
	@echo "  make release    - Generates the release tarball."
	@echo
	@echo "  make clean      - Cleanup the local repository build files."
	@echo "  make cleanall   - Also remove dist/*"

clean:
	@rm -fv ${NAME}/*.pyc ${NAME}/*.html
	@rm -frv ${NAME}.egg-info MANIFEST build
	@rm -frv results
	@rm -frv ${PKGNAME}-*
	@echo "Finished cleanup."

cleanall: clean
	@rm -frv dist

release: build/release-stamp
build/release-stamp:
	@mkdir -p build
	@echo "Exporting the repository files..."
	@git archive ${GIT_BRANCH} --prefix ${PKGNAME}-${VERSION}/ \
		| (cd build; tar xfp -)
	@echo "Cleaning up the target tree..."
	@rm -f build/${PKGNAME}-${VERSION}/Makefile
	@rm -f build/${PKGNAME}-${VERSION}/.gitignore
	@echo "Fixing version string..."
	@sed -i "s/__version__ = .*/__version__ = '${VERSION}'/g" \
		build/${PKGNAME}-${VERSION}/${NAME}/__init__.py
	@find build/${PKGNAME}-${VERSION}/ -exec \
		touch -t $$(date -d @$$(git show -s --format="format:%at") \
		+"%Y%m%d%H%M.%S") {} \;
	@mkdir -p dist
	@cd build; tar -c --owner=0 --group=0 --numeric-owner \
		--format=gnu -b20 --quoting-style=escape \
		-f ../dist/${PKGNAME}-${VERSION}.tar \
		$$(find ${PKGNAME}-${VERSION} -type f | sort) \
		$$(find ${PKGNAME}-${VERSION} -type l | sort)
	@gzip -6 -n dist/${PKGNAME}-${VERSION}.tar
	@echo "Generated release tarball:"
	@echo "    $$(ls dist/${PKGNAME}-${VERSION}.tar.gz)"
	@touch build/release-stamp

rtslib-fb-2.1.74/README.md

rtslib-fb
=========

A Python object API for managing the Linux LIO kernel target
------------------------------------------------------------

rtslib-fb is an object-based Python library for configuring the LIO
generic SCSI target, present in 3.x Linux kernel
versions. It supports both Python 2 and Python 3, thanks to the python-six library.

rtslib-fb development
---------------------

rtslib-fb is licensed under the Apache 2.0 license. Contributions are welcome. Since rtslib-fb is used most often with targetcli-fb, the targetcli-fb mailing list should be used for rtslib-fb discussion.

* Mailing list: [targetcli-fb-devel](https://lists.fedorahosted.org/mailman/listinfo/targetcli-fb-devel)
* Source repo: [GitHub](https://github.com/open-iscsi/rtslib-fb)
* Bugs: [GitHub](https://github.com/open-iscsi/rtslib-fb/issues) or [Trac](https://fedorahosted.org/targetcli-fb/)
* Tarballs: [fedorahosted](https://fedorahosted.org/releases/t/a/targetcli-fb/)

Packages
--------

rtslib-fb is packaged for a number of Linux distributions including RHEL, [Fedora](https://apps.fedoraproject.org/packages/python-rtslib), openSUSE, Arch Linux, [Gentoo](https://packages.gentoo.org/packages/dev-python/rtslib-fb), and [Debian](https://tracker.debian.org/pkg/python-rtslib-fb).

"fb" -- "free branch"
---------------------

rtslib-fb is a fork of the "rtslib" code written by RisingTide Systems. The "-fb" suffix differentiates this version from the original. Please be sure to use either all "fb" versions of the targetcli components -- targetcli, rtslib, and configshell -- or all non-fb versions, since the two lines are no longer strictly compatible.

rtslib-fb-2.1.74/doc/getting_started.md

Getting started using the rtslib API
====================================

The rtslib API wraps the LIO kernel target's configfs-based userspace configuration with an object-based Python interface. Its operating model is that instantiating a Python object either creates the corresponding kernel object, if it doesn't already exist, or refers to the existing object if it does.
Also, note that the Python objects wrap LIO's configfs objects, but do no buffering or caching of values or properties. Setting a value on an object via rtslib modifies the LIO kernel configuration immediately -- there is no additional save or flush required.

Let's try it.

    > sudo python

Configuring LIO must be done by root, so run the Python REPL as root.

    >>> from rtslib import FileIOStorageObject
    >>> f = FileIOStorageObject("test1", "/tmp/test.img", 100000000)

FileIO storage objects enable a file to serve as a disk image. In this example, a few things are happening. First, since a backing file path is given and does not yet exist, rtslib creates the backing file at /tmp/test.img with a size of 100000000 bytes. Next, rtslib configures LIO to use the file to back a fileio storage object called "test1".

The storage object has a number of properties.

    >>> f.status
    'deactivated'

This shows that while the backstore has been registered, it hasn't yet been exported via a fabric. Now, let's create a fabric.

    >>> from rtslib import FabricModule, Target, TPG
    >>> iscsi = FabricModule("iscsi")

Fabric objects are singleton objects. The preferred way to obtain a reference to one is via the FabricModule factory method, as shown. Then, we can create a Target, an instance of that fabric.

    >>> target = Target(iscsi)

For other fabrics that are linked to actual hardware resources, we would also have needed to supply a valid "wwn" parameter that matched available hardware IDs. But for iscsi, we can omit this and rtslib will autogenerate one.

    >>> target.wwn
    'iqn.2003-01.org.linux-iscsi.localhost.x8664:sn.c11be18bebc3'

Next, we must create a TPG. iSCSI allows a single named target to have multiple independent configurations within it, divided into Target Port Groups, or TPGs. Usually one is enough, so we just need to create one, and then all further configuration will be on the TPG.
    >>> tpg = TPG(target, 1)

Our TPG needs to listen on a TCP port for incoming connections from initiators, so let's set that up.

    >>> from rtslib import NetworkPortal, NodeACL, LUN, MappedLUN
    >>> portal = NetworkPortal(tpg, "0.0.0.0", 3260)

Now LIO is listening on all IP addresses, on port 3260, the iSCSI default. But, we aren't exporting any LUNs yet!

    >>> lun = LUN(tpg, 0, f)

We've just assigned the FileIO storage object to the TPG. As we can see:

    >>> f.status
    'activated'

...the storage object is now active and linked to the TPG.

The final thing to configure is Node ACLs. There are some authentication modes where just assigning a LUN to a TPG will export it to all initiators (see the targetcli manpage for more info), but usually one creates individual permissions and LUN mappings for each initiator. LIO configures initiator access via initiator IQN, unlike some other targets, which grant access based on initiator IP address. (For open-iscsi, the autogenerated initiator IQN is in "/etc/iscsi/initiatorname.iscsi".)

    >>> nodeacl = NodeACL(tpg, "iqn.2004-03.com.example.foo:0987")

When we are using nodeacl-based authentication, i.e. when generate_node_acls is 0, we then need to map the TPG LUN to the nodeacl. This is handy in that each initiator can have its own view of the available LUNs.

    >>> mlun = MappedLUN(nodeacl, 5, lun)

Finding associated objects
--------------------------

All rtslib objects have properties to obtain other related objects. For example, if you have a TPG object called "tpg", then tpg.parent_target will be the Target that contains the tpg.

    >>> tpg
    >>> tpg.parent_target

The FabricModule object is then accessible from the target:

    >>> tpg.parent_target.fabric_module.name
    'iscsi'

Going down the hierarchy, an object can have multiple child objects of a given type, so a list is returned:

    >>> iscsi.targets

Actually it's a Python generator, list's less memory-intensive cousin.
If we really want a list, call:

    >>> list(iscsi.targets)
    []

Finding objects using RTSRoot()
-------------------------------

The RTSRoot object enables finding all rtslib objects of a type that are configured on the system.

    >>> root = rtslib.RTSRoot()
    >>> list(root.storage_objects)
    []

RTSRoot contains generators for all levels of objects. For example, if we want to obtain all NodeACL objects, instead of needing nested for loops to iterate through all fabrics, then all targets, then all tpgs, then all node acls, the root.node_acls generator iterates through NodeACLs wherever they are:

    >>> list(root.node_acls)
    [, ]

Other handy things to try
-------------------------

All objects have a dump() method, which outputs a dict of the object's current state.

    >>> mlun.dump()
    {'index': 5, 'tpg_lun': 0, 'write_protect': False}

Finally, rtslib is just a wrapper around LIO's configfs interface, which is usually mounted at /sys/kernel/config/target. Poking around there may also help to understand what's going on.

Other sample code
-----------------

While targetcli uses rtslib, it has a parallel configshell-based tree structure that may make it less helpful as a reference. Focus on 'rtsnode' objects -- these are references to rtslib objects, as described here. Also, the 'targetd' project (https://github.com/agrover/targetd) uses rtslib for its kernel target support; see 'targetd/block.py' for more examples of rtslib usage.

rtslib-fb-2.1.74/doc/saveconfig.json.5

.TH saveconfig.json 5
.SH NAME
.B saveconfig.json
\- Saved configuration file for rtslib-fb and LIO kernel target
.SH DESCRIPTION
.B /etc/target/saveconfig.json
is the default location for storing the configuration of the Linux
kernel target, also known as LIO. Since the target is in the kernel,
tools like
.B targetctl
or
.B targetcli
must be used to save and restore the configuration across reboots.
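Because the format is plain JSON, a saved configuration can be inspected or generated from any language. As a minimal sketch, Python's standard-library json module is enough; the configuration fragment below is hypothetical, following the layout this page documents, not output from a real system:

```python
import json

# A hypothetical minimal configuration fragment, following the layout
# documented in this man page (storage_objects / fabric_modules / targets).
config_text = """
{
  "storage_objects": [
    {"name": "test1", "plugin": "fileio",
     "dev": "/tmp/test.img", "size": 100000000}
  ],
  "fabric_modules": [],
  "targets": []
}
"""

config = json.loads(config_text)

# Every storage object must carry "name" and "plugin"; fileio objects
# must also carry "dev" (see the LAYOUT section).
for so in config["storage_objects"]:
    assert "name" in so and "plugin" in so
    if so["plugin"] == "fileio":
        assert "dev" in so

print(config["storage_objects"][0]["name"])  # prints "test1"
```

The same approach works in reverse: build the dict in code and write it out with json.dump() for targetctl to restore.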
.P
Generating or modifying this file by hand, or with other tools, is
also supported. This fills a gap for users whose needs are not met by
the targetcli configuration shell, who cannot use the rtslib Python
library, and yet also wish to avoid directly manipulating LIO's
configfs interface.
.SH OVERVIEW
The configuration file is in the "json" text format, which is both
human- and machine-readable. Its format is very closely modeled on the
layout and terminology that LIO uses. Attributes may be string,
boolean, or numeric values. All sizes are expressed in bytes.
.SH LAYOUT
.SS TOP LEVEL SUMMARY
.B storage_objects
describes mappings of resources on the local machine that can be used
to emulate block devices.
.P
.B fabric_modules
describes settings for LIO fabrics -- the hardware or software
protocols that transport SCSI commands -- such as iSCSI or Fibre
Channel over Ethernet (FCoE).
.P
.B targets
describes the SCSI target endpoints that export storage objects over a
fabric.
.SS storage_objects
All storage objects must contain
.I name
and
.I plugin
values. Each name must be unique for all storage objects of its plugin
type.
.P
.I plugin
must be one of:
.IR fileio ,
.IR block ,
.IR pscsi ,
or
.IR ramdisk .
.P
Objects with plugin value of
.I fileio
must also contain
.IR dev ,
which is the full path to the file that is backing the storage object.
Optional
.I fileio
attributes are
.I wwn
(string),
.I write_back
(boolean), and
.I size
(number). If the file given in
.I dev
does not exist, then
.I size
must be present, and a backing file of that size will be created.
.P
Objects with plugin value of
.I block
must also contain
.IR dev ,
which is the full path to the block device that is backing the storage
object. Optional
.I block
attributes are
.I wwn
(string),
.I write_back
(boolean), and
.I readonly
(boolean).
.P
Objects with plugin value of
.I pscsi
must also contain
.IR dev ,
which is the full path to the SCSI device that is backing the storage
object.
There are no optional attributes.
.P
Objects with plugin value of
.I ramdisk
must also contain
.IR size
(number), which is the size in bytes of the ramdisk. Optional
.I ramdisk
attributes are
.I wwn
(string), and
.I nullio
(boolean).
.P
All storage object definitions may also contain an
.I attributes
object. This contains LIO attribute values, all of which are also
optional. Please see LIO documentation for more information on these.
.SS fabric_modules
This section is limited to setting discovery authentication settings
for fabrics that support it (currently just iscsi). Objects here
should contain
.IR name
(e.g. "iscsi"),
.IR userid ,
.IR password ,
.IR mutual_userid ,
and
.I mutual_password
string values.
.SS targets
Target configuration is modeled on iSCSI, in which a named target can
contain multiple sub-configurations called Target Port Groups (TPGs).
LIO follows this model for describing configuration even for fabrics
that do not support TPGs.
.P
Objects in
.I targets
contain just three attributes:
.I wwn
is the world-wide name the target has been given. This may start with
"iqn", or "naa", for example.
.I fabric
is the name of the fabric module this target is exported over. Allowed
values for this depend on the system configuration, but examples
include "iscsi", "loopback", and "tcm_fc".
.I tpgs
is a list of objects describing 1 or more TPGs, described below.
.SS tpgs
TPG object attributes are all optional. Values not supplied will be
set to default values.
.I tag
(number) allows the tpg tag to be specified.
.I enable
(boolean, defaults to true) allows the TPG to be disabled or enabled.
.IR luns ,
.IR portals ,
and
.I node_acls
contain further lists of objects, described below. Finally,
.IR userid ,
.IR password ,
.IR mutual_userid ,
and
.I mutual_password
allow main-phase authentication values to be set for fabrics (like
iSCSI) that support TPG-level authentication. (Please see
.IR targetcli (8)
for details on TPG versus ACL-based authentication.)
Finally, TPGs can also contain optional
.I attributes
and
.I parameters
lists; see LIO documentation for more information.
.SS luns
This list of objects maps storage objects to the TPG.
.I index
is a TPG-unique number for the assignment, which may be used as the LU
number for fabrics that do not support ACL mappings.
.I storage_object
is a string linking back to a storage object, of the format
"/backstores/<plugin>/<name>", where <plugin> and <name> correspond to
a storage object defined under
.IR storage_objects .
.SS portals
Portals describe connection endpoints for iSCSI targets. Required
values are
.I ip_address
(string) and
.I port
(number).
.I iser
(boolean) is an optional value to enable iSER.
.I offload
(boolean) is an optional value to enable hardware offload.
.SS node_acls
This contains information about explicit initiator LUN mappings.
.I node_wwn
(string) must be present. Authentication may also be set on a per-ACL
basis with
.IR userid ,
.IR password ,
.IR mutual_userid ,
and
.IR mutual_password ,
similar to TPGs.
.I mapped_luns
is a list of mapped luns, described below. Finally, node_acls can
contain an optional
.I attributes
list.
.SS mapped_luns
Objects in
.I mapped_luns
contain three required attributes.
.I write_protect
(boolean) sets the write-protect status of the mapped LUN.
.I tpg_lun
(number) corresponds to an existing entry in the TPG's
.I luns
list.
.I index
is the LU number that the mapped LUN will claim.
.SH EXAMPLE CONFIGURATION
Since tools generate this file, one good way to understand its format
is to use a tool like
.B targetcli
to configure a target, then run
.BR saveconfig ,
and view the resulting json file.
.SH SEE ALSO
.BR targetcli (8),
.BR targetctl (8)
.SH FILES
.B /etc/target/saveconfig.json
.SH AUTHOR
Man page written by Andy Grover.
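As a concrete illustration of the layout described above, a minimal hypothetical saveconfig.json might look like the following (all names, paths, and IQNs are invented for the example and are not output from a real system):

```json
{
  "storage_objects": [
    {
      "name": "test1",
      "plugin": "fileio",
      "dev": "/tmp/test.img",
      "size": 100000000,
      "write_back": false
    }
  ],
  "fabric_modules": [],
  "targets": [
    {
      "wwn": "iqn.2003-01.org.example:target1",
      "fabric": "iscsi",
      "tpgs": [
        {
          "tag": 1,
          "enable": true,
          "luns": [
            { "index": 0, "storage_object": "/backstores/fileio/test1" }
          ],
          "portals": [
            { "ip_address": "0.0.0.0", "port": 3260 }
          ],
          "node_acls": [
            {
              "node_wwn": "iqn.2004-03.com.example.foo:0987",
              "mapped_luns": [
                { "index": 5, "tpg_lun": 0, "write_protect": false }
              ]
            }
          ]
        }
      ]
    }
  ]
}
```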
.SH REPORTING BUGS
Report bugs via
.br
or

rtslib-fb-2.1.74/doc/targetctl.8

.TH targetctl 8
.SH NAME
.B targetctl
\- Save and restore configuration of kernel target
.SH DESCRIPTION
.B targetctl
is a low-level script to save and restore the configuration of the
LIO kernel target, to and from a file in json format. It is not
normally meant to be used by end-users directly, but by system init
frameworks, or advanced end-users who are generating the configuration
file themselves and need a way to load the configuration without
relying on the
.B targetcli
configuration shell.
.SH USAGE
.B targetctl
must be invoked as root. Exit status will be 0 if successful, or
nonzero if an error was encountered.
.P
.B targetctl save [config-file]
.P
Saves the current configuration of the kernel target to a file in json
format. Since the file may contain cleartext passwords, the file's
permissions will be set to only allow root access. If
.B config-file
is not supplied,
.B targetctl
will use the default file location,
.BR /etc/target/saveconfig.json .
.P
.B targetctl restore [config-file]
.P
Removes any existing configuration and replaces it with the
configuration described in the file. See
.BR saveconfig.json (5)
for more details. If parts of the configuration could not be restored,
those parts will be noted in the error output, and the rest of the
configuration will still be applied.
.P
.B targetctl clear
.P
Removes any existing configuration from the running kernel target.
.P
.B targetctl --help
.P
Displays usage information.
.SH SEE ALSO
.BR targetcli (8),
.BR targetd (8),
.BR saveconfig.json (5)
.SH FILES
.B /etc/target/saveconfig.json
.P
.B /sys/kernel/config/target
.SH AUTHOR
Written by Andy Grover.
.br
Man page written by Andy Grover.
.SH REPORTING BUGS Report bugs via .br or rtslib-fb-2.1.74/rtslib/000077500000000000000000000000001372067225600150025ustar00rootroot00000000000000rtslib-fb-2.1.74/rtslib/__init__.py000066400000000000000000000030611372067225600171130ustar00rootroot00000000000000''' This file is part of RTSLib. Copyright (c) 2011-2013 by Datera, Inc Copyright (c) 2011-2014 by Red Hat, Inc. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. ''' if __name__ == "rtslib-fb": from warnings import warn warn("'rtslib' package name for rtslib-fb is deprecated, please" + " instead import 'rtslib_fb'", UserWarning, stacklevel=2) from .root import RTSRoot from .utils import RTSLibError, RTSLibBrokenLink, RTSLibNotInCFS from .utils import RTSLibALUANotSupported from .target import LUN, MappedLUN from .target import NodeACL, NetworkPortal, TPG, Target from .target import NodeACLGroup, MappedLUNGroup from .fabric import FabricModule from .tcm import FileIOStorageObject, BlockStorageObject from .tcm import PSCSIStorageObject, RDMCPStorageObject, UserBackedStorageObject from .tcm import StorageObjectFactory from .alua import ALUATargetPortGroup __version__ = '2.1.74' __author__ = "Jerome Martin " __url__ = 'http://github.com/open-iscsi/rtslib-fb' __description__ = 'API for Linux kernel SCSI target (aka LIO)' __license__ = 'Apache 2.0' rtslib-fb-2.1.74/rtslib/alua.py000066400000000000000000000364251372067225600163100ustar00rootroot00000000000000''' Implements the RTS ALUA Target Port Group class. This file is part of RTSLib. 
Copyright (c) 2016 by Red Hat, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''

from .node import CFSNode
from .utils import RTSLibError, RTSLibALUANotSupported, fread, fwrite

import six

alua_rw_params = ['alua_access_state', 'alua_access_status',
                  'alua_write_metadata', 'alua_access_type', 'preferred',
                  'nonop_delay_msecs', 'trans_delay_msecs',
                  'implicit_trans_secs', 'alua_support_offline',
                  'alua_support_standby', 'alua_support_transitioning',
                  'alua_support_active_nonoptimized',
                  'alua_support_unavailable', 'alua_support_active_optimized']

alua_ro_params = ['tg_pt_gp_id', 'members', 'alua_support_lba_dependent']

alua_types = ['None', 'Implicit', 'Explicit', 'Implicit and Explicit']

alua_statuses = ['None', 'Altered by Explicit STPG', 'Altered by Implicit ALUA']


class ALUATargetPortGroup(CFSNode):
    """
    ALUA Target Port Group interface
    """

    def __repr__(self):
        return "<ALUATargetPortGroup %s>" % self.name

    def __init__(self, storage_object, name, tag=None):
        """
        @param storage_object: backstore storage object to create ALUA group for
        @param name: name of ALUA group
        @param tag: target port group id. If not passed in, try to look up
                    existing ALUA TPG with the same name
        """
        if storage_object.alua_supported is False:
            raise RTSLibALUANotSupported("Backend does not support ALUA setup")

        # default_tg_pt_gp takes tag 1
        if tag is not None and (tag > 65535 or tag < 1):
            raise RTSLibError("The TPG Tag must be between 1 and 65535")

        super(ALUATargetPortGroup, self).__init__()
        self.name = name
        self.storage_object = storage_object
        self._path = "%s/alua/%s" % (storage_object.path, name)

        if tag is not None:
            try:
                self._create_in_cfs_ine('create')
            except OSError as msg:
                raise RTSLibError(msg)

            try:
                fwrite("%s/tg_pt_gp_id" % self._path, tag)
            except IOError as msg:
                self.delete()
                raise RTSLibError("Cannot set id to %d: %s" % (tag, str(msg)))
        else:
            try:
                self._create_in_cfs_ine('lookup')
            except OSError as msg:
                raise RTSLibError(msg)

    # Public

    def delete(self):
        """
        Delete ALUA TPG and unmap from LUNs
        """
        self._check_self()

        # default_tg_pt_gp is created by the kernel and cannot be deleted
        if self.name == "default_tg_pt_gp":
            raise RTSLibError("Can not delete default_tg_pt_gp")

        # This will reset the ALUA tpg to default_tg_pt_gp
        super(ALUATargetPortGroup, self).delete()

    def _get_alua_access_state(self):
        self._check_self()
        path = "%s/alua_access_state" % self.path
        return int(fread(path))

    def _set_alua_access_state(self, newstate):
        self._check_self()
        path = "%s/alua_access_state" % self.path
        try:
            fwrite(path, str(int(newstate)))
        except IOError as e:
            raise RTSLibError("Cannot change ALUA state: %s" % e)

    def _get_alua_access_status(self):
        self._check_self()
        path = "%s/alua_access_status" % self.path
        status = fread(path)
        return alua_statuses.index(status)

    def _set_alua_access_status(self, newstatus):
        self._check_self()
        path = "%s/alua_access_status" % self.path
        try:
            fwrite(path, str(int(newstatus)))
        except IOError as e:
            raise RTSLibError("Cannot change ALUA status: %s" % e)

    def _get_alua_access_type(self):
        self._check_self()
        path = "%s/alua_access_type" % self.path
        alua_type = fread(path)
        return alua_types.index(alua_type)

    def _set_alua_access_type(self, access_type):
        self._check_self()
        path = "%s/alua_access_type" % self.path
        try:
            fwrite(path, str(int(access_type)))
        except IOError as e:
            raise RTSLibError("Cannot change ALUA access type: %s" % e)

    def _get_preferred(self):
        self._check_self()
        path = "%s/preferred" % self.path
        return int(fread(path))

    def _set_preferred(self, pref):
        self._check_self()
        path = "%s/preferred" % self.path
        try:
            fwrite(path, str(int(pref)))
        except IOError as e:
            raise RTSLibError("Cannot set preferred: %s" % e)

    def _get_alua_write_metadata(self):
        self._check_self()
        path = "%s/alua_write_metadata" % self.path
        return int(fread(path))

    def _set_alua_write_metadata(self, pref):
        self._check_self()
        path = "%s/alua_write_metadata" % self.path
        try:
            fwrite(path, str(int(pref)))
        except IOError as e:
            raise RTSLibError("Cannot set alua_write_metadata: %s" % e)

    def _get_alua_support_active_nonoptimized(self):
        self._check_self()
        path = "%s/alua_support_active_nonoptimized" % self.path
        return int(fread(path))

    def _set_alua_support_active_nonoptimized(self, enabled):
        self._check_self()
        path = "%s/alua_support_active_nonoptimized" % self.path
        try:
            fwrite(path, str(int(enabled)))
        except IOError as e:
            raise RTSLibError("Cannot set alua_support_active_nonoptimized: %s" % e)

    def _get_alua_support_active_optimized(self):
        self._check_self()
        path = "%s/alua_support_active_optimized" % self.path
        return int(fread(path))

    def _set_alua_support_active_optimized(self, enabled):
        self._check_self()
        path = "%s/alua_support_active_optimized" % self.path
        try:
            fwrite(path, str(int(enabled)))
        except IOError as e:
            raise RTSLibError("Cannot set alua_support_active_optimized: %s" % e)

    def _get_alua_support_offline(self):
        self._check_self()
        path = "%s/alua_support_offline" % self.path
        return int(fread(path))

    def _set_alua_support_offline(self, enabled):
        self._check_self()
        path = "%s/alua_support_offline" % self.path
        try:
            fwrite(path, str(int(enabled)))
        except IOError as e:
            raise RTSLibError("Cannot set alua_support_offline: %s" % e)

    def _get_alua_support_unavailable(self):
        self._check_self()
        path = "%s/alua_support_unavailable" % self.path
        return int(fread(path))

    def _set_alua_support_unavailable(self, enabled):
        self._check_self()
        path = "%s/alua_support_unavailable" % self.path
        try:
            fwrite(path, str(int(enabled)))
        except IOError as e:
            raise RTSLibError("Cannot set alua_support_unavailable: %s" % e)

    def _get_alua_support_standby(self):
        self._check_self()
        path = "%s/alua_support_standby" % self.path
        return int(fread(path))

    def _set_alua_support_standby(self, enabled):
        self._check_self()
        path = "%s/alua_support_standby" % self.path
        try:
            fwrite(path, str(int(enabled)))
        except IOError as e:
            raise RTSLibError("Cannot set alua_support_standby: %s" % e)

    def _get_alua_support_transitioning(self):
        self._check_self()
        path = "%s/alua_support_transitioning" % self.path
        return int(fread(path))

    def _set_alua_support_transitioning(self, enabled):
        self._check_self()
        path = "%s/alua_support_transitioning" % self.path
        try:
            fwrite(path, str(int(enabled)))
        except IOError as e:
            raise RTSLibError("Cannot set alua_support_transitioning: %s" % e)

    def _get_alua_support_lba_dependent(self):
        self._check_self()
        path = "%s/alua_support_lba_dependent" % self.path
        return int(fread(path))

    def _get_members(self):
        self._check_self()
        path = "%s/members" % self.path
        member_list = []
        for member in fread(path).splitlines():
            lun_path = member.split("/")
            if len(lun_path) != 4:
                continue
            member_list.append({'driver': lun_path[0],
                                'target': lun_path[1],
                                'tpgt': int(lun_path[2].split("_", 1)[1]),
                                'lun': int(lun_path[3].split("_", 1)[1])})
        return member_list

    def _get_tg_pt_gp_id(self):
        self._check_self()
        path = "%s/tg_pt_gp_id" % self.path
        return int(fread(path))

    def _get_trans_delay_msecs(self):
        self._check_self()
        path = "%s/trans_delay_msecs" % self.path
        return int(fread(path))

    def _set_trans_delay_msecs(self, secs):
        self._check_self()
        path = "%s/trans_delay_msecs" % self.path
        try:
            fwrite(path, str(int(secs)))
        except IOError as e:
            raise RTSLibError("Cannot set trans_delay_msecs: %s" % e)

    def _get_implicit_trans_secs(self):
        self._check_self()
        path = "%s/implicit_trans_secs" % self.path
        return int(fread(path))

    def _set_implicit_trans_secs(self, secs):
        self._check_self()
        path = "%s/implicit_trans_secs" % self.path
        try:
            fwrite(path, str(int(secs)))
        except IOError as e:
            raise RTSLibError("Cannot set implicit_trans_secs: %s" % e)

    def _get_nonop_delay_msecs(self):
        self._check_self()
        path = "%s/nonop_delay_msecs" % self.path
        return int(fread(path))

    def _set_nonop_delay_msecs(self, delay):
        self._check_self()
        path = "%s/nonop_delay_msecs" % self.path
        try:
            fwrite(path, str(int(delay)))
        except IOError as e:
            raise RTSLibError("Cannot set nonop_delay_msecs: %s" % e)

    def dump(self):
        d = super(ALUATargetPortGroup, self).dump()
        d['name'] = self.name
        d['tg_pt_gp_id'] = self.tg_pt_gp_id
        for param in alua_rw_params:
            d[param] = getattr(self, param, None)
        return d

    alua_access_state = property(_get_alua_access_state,
                                 _set_alua_access_state,
                                 doc="Get or set ALUA state. "
                                     "0 = Active/optimized, "
                                     "1 = Active/non-optimized, "
                                     "2 = Standby, "
                                     "3 = Unavailable, "
                                     "4 = LBA Dependent, "
                                     "14 = Offline, "
                                     "15 = Transitioning")
    alua_access_type = property(_get_alua_access_type,
                                _set_alua_access_type,
                                doc="Get or set ALUA access type. "
                                    "1 = Implicit, 2 = Explicit, 3 = Both")
    alua_access_status = property(_get_alua_access_status,
                                  _set_alua_access_status,
                                  doc="Get or set ALUA access status. "
                                      "0 = None, "
                                      "1 = Altered by Explicit STPG, "
                                      "2 = Altered by Implicit ALUA")
    preferred = property(_get_preferred,
                         _set_preferred,
                         doc="Get or set preferred bit. 1 = Pref, 0 = Not-Pref")
    alua_write_metadata = property(_get_alua_write_metadata,
                                   _set_alua_write_metadata,
                                   doc="Get or set alua_write_metadata flag. "
                                       "enable (1) or disable (0)")
    tg_pt_gp_id = property(_get_tg_pt_gp_id,
                           doc="Get ALUA Target Port Group ID")
    members = property(_get_members,
                       doc="Get LUNs in Target Port Group")
    alua_support_active_nonoptimized = property(_get_alua_support_active_nonoptimized,
                                                _set_alua_support_active_nonoptimized,
                                                doc="Enable (1) or disable (0) "
                                                    "Active/non-optimized support")
    alua_support_active_optimized = property(_get_alua_support_active_optimized,
                                             _set_alua_support_active_optimized,
                                             doc="Enable (1) or disable (0) "
                                                 "Active/optimized support")
    alua_support_offline = property(_get_alua_support_offline,
                                    _set_alua_support_offline,
                                    doc="Enable (1) or disable (0) "
                                        "offline support")
    alua_support_unavailable = property(_get_alua_support_unavailable,
                                        _set_alua_support_unavailable,
                                        doc="enable (1) or disable (0) "
                                            "unavailable support")
    alua_support_standby = property(_get_alua_support_standby,
                                    _set_alua_support_standby,
                                    doc="enable (1) or disable (0) "
                                        "standby support")
    alua_support_lba_dependent = property(_get_alua_support_lba_dependent,
                                          doc="show lba_dependent support "
                                              "enabled (1) or disabled (0)")
    alua_support_transitioning = property(_get_alua_support_transitioning,
                                          _set_alua_support_transitioning,
                                          doc="enable (1) or disable (0) "
                                              "transitioning support")
    trans_delay_msecs = property(_get_trans_delay_msecs,
                                 _set_trans_delay_msecs,
                                 doc="msecs to delay state transition")
    implicit_trans_secs = property(_get_implicit_trans_secs,
                                   _set_implicit_trans_secs,
                                   doc="implicit transition time limit")
    nonop_delay_msecs = property(_get_nonop_delay_msecs,
                                 _set_nonop_delay_msecs,
                                 doc="msecs to delay IO when non-optimized")

    @classmethod
    def setup(cls, storage_obj, alua_tpg, err_func):
        name = alua_tpg['name']
        if name == 'default_tg_pt_gp':
            return

        alua_tpg_obj = cls(storage_obj, name, alua_tpg['tg_pt_gp_id'])
        for param, value in six.iteritems(alua_tpg):
            if param != 'name' and param != 'tg_pt_gp_id':
                try:
                    setattr(alua_tpg_obj, param, value)
                except:
                    raise RTSLibError("Could not set attribute '%s' for alua tpg '%s'"
                                      % (param, alua_tpg['name']))
rtslib-fb-2.1.74/rtslib/fabric.py
'''
This file is part of RTSLib.
Copyright (c) 2011-2013 by Datera, Inc
Copyright (c) 2011-2014 by Red Hat, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.


Description
-----------

Fabrics may differ in how fabric WWNs are represented, as well as
what capabilities they support.


Available parameters
--------------------

* features
  Lists the target fabric available features. Default value:
  ("discovery_auth", "acls", "auth", "nps")
  example: features = ("discovery_auth", "acls", "auth")
  example: features = () # no features supported

  Detail of features:

  * tpgts
    The target fabric module is using iSCSI-style target portal group tags.

  * discovery_auth
    The target fabric module supports a fabric-wide authentication for
    discovery.

  * acls
    The target's TPGTs support explicit initiator ACLs.

  * auth
    The target's TPGT's support per-TPG authentication, and the target's
    TPGT's ACLs support per-ACL initiator authentication. Fabrics that
    support auth must support acls.

  * nps
    The TPGTs support iSCSI-like IPv4/IPv6 network portals, using IP:PORT
    group names.

  * nexus
    The TPGTs have a 'nexus' attribute that contains the local initiator
    serial unit. This attribute must be set before being able to create
    any LUNs.

* wwn_types
  Sets the type of WWN expected by the target fabric. Defaults to 'free'.
  Usually a fabric will only support one type but iSCSI supports more.
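  As an illustrative sketch of how these parameters combine (the class
  name "ExampleFabricModule" and module name "tcm_example" are
  hypothetical, not part of rtslib):

  ```python
  # Hypothetical sketch only: how a fabric definition would declare the
  # parameters described in this section.
  class ExampleFabricModule:
      def __init__(self):
          self.name = "example"
          # supports initiator ACLs and per-ACL authentication
          self.features = ("acls", "auth")
          # expects NAA WWNs; the first entry is the "native" type
          self.wwn_types = ("naa",)
          # overrides the MODNAME_target_mod default
          self.kernel_module = "tcm_example"

  fm = ExampleFabricModule()
  ```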
  First entry is the "native" wwn type - i.e. if a wwn can be generated,
  it will be of this type.
  Example: wwn_types = ("eui",)
  Current valid types are:

  * free
    Freeform WWN.

  * iqn
    The fabric module targets are using iSCSI-type IQNs.

  * naa
    NAA FC or SAS address type WWN.

  * eui
    EUI-64. See http://en.wikipedia.org/wiki/MAC_address for info on this
    format.

  * unit_serial
    Disk-type unit serial.

* wwns
  This property returns an iterable (either generator or list) of valid
  target WWNs for the fabric, if WWNs should be chosen from existing
  fabric interfaces. The most common case for this is hardware-set WWNs.
  WWNs should conform to rtslib's normalized internal format: the wwn
  type (see above), a period, then the wwn with interstitial dividers
  like ':' removed.

* to_fabric_wwn()
  Converts WWNs from normalized format (see above) to whatever the
  kernel code expects when getting a wwn. Only needed if different from
  normalized format.

* kernel_module
  Sets the name of the kernel module implementing the fabric modules. If
  not specified, it will be assumed to be MODNAME_target_mod, where
  MODNAME is the name of the fabric module, from the fabrics list. Note
  that you must not specify any .ko or such extension here.
  Example: self.kernel_module = "my_module"

* _path
  Sets the path of the configfs group used by the fabric module.
  Defaults to the name of the module from the fabrics list.
  Example: self._path = "%s/%s" % (self.configfs_dir, "my_cfs_dir")
'''

from functools import partial
from glob import iglob as glob
import os

import six

from .node import CFSNode
from .utils import fread, fwrite, normalize_wwn, colonize
from .utils import RTSLibError, modprobe, ignored
from .target import Target
from .utils import _get_auth_attr, _set_auth_attr

version_attributes = set(["lio_version", "version"])
discovery_auth_attributes = set(["discovery_auth"])
target_names_excludes = version_attributes | discovery_auth_attributes


class _BaseFabricModule(CFSNode):
    '''
    Abstract Base class for Fabric Modules. It can load modules, provide
    information about them and handle the configfs housekeeping. After
    instantiation, whether or not the fabric module is loaded depends on
    if a method requiring it (i.e. accessing configfs) is used. This
    helps limit loaded kernel modules to just the fabrics in use.
    '''

    # FabricModule ABC private stuff
    def __init__(self, name):
        '''
        Instantiate a FabricModule object, according to the provided name.
        @param name: the name of the FabricModule object.
        @type name: str
        '''
        super(_BaseFabricModule, self).__init__()
        self.name = name
        self.spec_file = "N/A"
        self._path = "%s/%s" % (self.configfs_dir, self.name)
        self.features = ('discovery_auth', 'acls', 'auth', 'nps', 'tpgts')
        self.wwn_types = ('free',)
        self.kernel_module = "%s_target_mod" % self.name

    # FabricModule public stuff

    def _check_self(self):
        if not self.exists:
            try:
                self._create_in_cfs_ine('any')
            except RTSLibError:
                modprobe(self.kernel_module)
                self._create_in_cfs_ine('any')
        super(_BaseFabricModule, self)._check_self()

    def has_feature(self, feature):
        # Handle a renamed feature
        if feature == 'acls_auth':
            feature = 'auth'
        return feature in self.features

    def _list_targets(self):
        if self.exists:
            for wwn in os.listdir(self.path):
                if os.path.isdir("%s/%s" % (self.path, wwn)) and \
                        wwn not in target_names_excludes:
                    yield Target(self, self.from_fabric_wwn(wwn), 'lookup')

    def _get_version(self):
        if self.exists:
            for attr in version_attributes:
                path = "%s/%s" % (self.path, attr)
                if os.path.isfile(path):
                    return fread(path)
            else:
                raise RTSLibError("Can't find version for fabric module %s"
                                  % self.name)
        else:
            return None

    def to_normalized_wwn(self, wwn):
        '''
        Checks whether or not the provided WWN is valid for this fabric
        module according to the spec, and returns a tuple of our preferred
        string representation of the wwn, and what type it turned out to be.
        '''
        return normalize_wwn(self.wwn_types, wwn)

    def to_fabric_wwn(self, wwn):
        '''
        Some fabrics need WWNs in a format different than rtslib's
        internal format. These fabrics should override this method.
        '''
        return wwn

    def from_fabric_wwn(self, wwn):
        '''
        Converts from WWN format used in this fabric's LIO configfs to
        canonical format.
        Note: Do not call from wwns(). There's no guarantee fabric wwn
        format is the same as wherever wwns() is reading from.
        '''
        return wwn

    def needs_wwn(self):
        '''
        This fabric requires wwn to be specified when creating a target,
        it cannot be autogenerated.
        '''
        return self.wwns is not None

    def _assert_feature(self, feature):
        if not self.has_feature(feature):
            raise RTSLibError("Fabric module %s does not implement "
                              "the %s feature" % (self.name, feature))

    def clear_discovery_auth_settings(self):
        self._check_self()
        self._assert_feature('discovery_auth')
        self.discovery_mutual_password = ''
        self.discovery_mutual_userid = ''
        self.discovery_password = ''
        self.discovery_userid = ''
        self.discovery_enable_auth = False

    def _get_discovery_enable_auth(self):
        self._check_self()
        self._assert_feature('discovery_auth')
        path = "%s/discovery_auth/enforce_discovery_auth" % self.path
        value = fread(path)
        return bool(int(value))

    def _set_discovery_enable_auth(self, enable):
        self._check_self()
        self._assert_feature('discovery_auth')
        path = "%s/discovery_auth/enforce_discovery_auth" % self.path
        if int(enable):
            enable = 1
        else:
            enable = 0
        fwrite(path, "%s" % enable)

    def _get_discovery_authenticate_target(self):
        self._check_self()
        self._assert_feature('discovery_auth')
        path = "%s/discovery_auth/authenticate_target" % self.path
        return bool(int(fread(path)))

    def _get_wwns(self):
        '''
        Returns either iterable or None. None means fabric allows
        arbitrary WWNs.
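        To illustrate the normalized format described above (the port
        name value here is hypothetical), a hardware-set FC WWN read
        from sysfs would be normalized like this:

        ```python
        # Hypothetical value: a FC port name as sysfs reports it.
        raw = "0x21000024ff543abc"

        # rtslib's normalized format: wwn type, a period, then the wwn
        # with the leading "0x" and any ':' dividers removed.
        normalized = "naa." + raw[2:].replace(":", "")
        ```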
        '''
        return None

    def _get_disc_attr(self, *args, **kwargs):
        self._assert_feature('discovery_auth')
        return _get_auth_attr(self, *args, **kwargs)

    def _set_disc_attr(self, *args, **kwargs):
        self._assert_feature('discovery_auth')
        _set_auth_attr(self, *args, **kwargs)

    discovery_enable_auth = \
            property(_get_discovery_enable_auth,
                     _set_discovery_enable_auth,
                     doc="Set or get the discovery enable_auth flag.")
    discovery_authenticate_target = \
            property(_get_discovery_authenticate_target,
                     doc="Get the boolean discovery authenticate target flag.")
    discovery_userid = \
            property(partial(_get_disc_attr, attribute='discovery_auth/userid'),
                     partial(_set_disc_attr, attribute='discovery_auth/userid'),
                     doc="Set or get the initiator discovery userid.")
    discovery_password = \
            property(partial(_get_disc_attr, attribute='discovery_auth/password'),
                     partial(_set_disc_attr, attribute='discovery_auth/password'),
                     doc="Set or get the initiator discovery password.")
    discovery_mutual_userid = \
            property(partial(_get_disc_attr, attribute='discovery_auth/userid_mutual'),
                     partial(_set_disc_attr, attribute='discovery_auth/userid_mutual'),
                     doc="Set or get the mutual discovery userid.")
    discovery_mutual_password = \
            property(partial(_get_disc_attr, attribute='discovery_auth/password_mutual'),
                     partial(_set_disc_attr, attribute='discovery_auth/password_mutual'),
                     doc="Set or get the mutual discovery password.")
    targets = property(_list_targets,
                       doc="Get the list of target objects.")
    version = property(_get_version,
                       doc="Get the fabric module version string.")
    wwns = property(_get_wwns,
                    doc="iterable of WWNs present for this fabric")

    def setup(self, fm, err_func):
        '''
        Setup fabricmodule with settings from fm dict.
        '''
        for name, value in six.iteritems(fm):
            if name != 'name':
                try:
                    setattr(self, name, value)
                except:
                    err_func("Could not set fabric %s attribute '%s'"
                             % (fm['name'], name))

    def dump(self):
        d = super(_BaseFabricModule, self).dump()
        d['name'] = self.name
        if self.has_feature("discovery_auth"):
            for attr in ("userid", "password",
                         "mutual_userid", "mutual_password"):
                val = getattr(self, "discovery_" + attr, None)
                if val:
                    d["discovery_" + attr] = val
            d['discovery_enable_auth'] = self.discovery_enable_auth
        return d


class ISCSIFabricModule(_BaseFabricModule):

    def __init__(self):
        super(ISCSIFabricModule, self).__init__('iscsi')
        self.wwn_types = ('iqn', 'naa', 'eui')


class LoopbackFabricModule(_BaseFabricModule):
    def __init__(self):
        super(LoopbackFabricModule, self).__init__('loopback')
        self.features = ("nexus",)
        self.wwn_types = ('naa',)
        self.kernel_module = "tcm_loop"


class SBPFabricModule(_BaseFabricModule):
    def __init__(self):
        super(SBPFabricModule, self).__init__('sbp')
        self.features = ()
        self.wwn_types = ('eui',)
        self.kernel_module = "sbp_target"

    def to_fabric_wwn(self, wwn):
        return wwn[4:]

    def from_fabric_wwn(self, wwn):
        return "eui." + wwn

    # return 1st local guid (is unique) so our share is named uniquely
    @property
    def wwns(self):
        for fname in glob("/sys/bus/firewire/devices/fw*/is_local"):
            if bool(int(fread(fname))):
                guid_path = os.path.dirname(fname) + "/guid"
                yield "eui." + fread(guid_path)[2:]
                break


class Qla2xxxFabricModule(_BaseFabricModule):
    def __init__(self):
        super(Qla2xxxFabricModule, self).__init__('qla2xxx')
        self.features = ("acls",)
        self.wwn_types = ('naa',)
        self.kernel_module = "tcm_qla2xxx"

    def to_fabric_wwn(self, wwn):
        # strip 'naa.' and add colons
        return colonize(wwn[4:])

    def from_fabric_wwn(self, wwn):
        return "naa." + wwn.replace(":", "")

    @property
    def wwns(self):
        for wwn_file in glob("/sys/class/fc_host/host*/port_name"):
            with ignored(IOError):
                if not fread(os.path.dirname(wwn_file)+"/symbolic_name").startswith("fcoe"):
                    yield "naa." + fread(wwn_file)[2:]


class SRPTFabricModule(_BaseFabricModule):
    def __init__(self):
        super(SRPTFabricModule, self).__init__('srpt')
        self.features = ("acls",)
        self.wwn_types = ('ib',)
        self.kernel_module = "ib_srpt"

    def to_fabric_wwn(self, wwn):
        # strip 'ib.' and re-add '0x'
        return "0x" + wwn[3:]

    def from_fabric_wwn(self, wwn):
        return "ib." + wwn[2:]

    @property
    def wwns(self):
        for wwn_file in glob("/sys/class/infiniband/*/ports/*/gids/0"):
            yield "ib." + fread(wwn_file).replace(":", "")


class FCoEFabricModule(_BaseFabricModule):
    def __init__(self):
        super(FCoEFabricModule, self).__init__('tcm_fc')
        self.features = ("acls",)
        self.kernel_module = "tcm_fc"
        self.wwn_types = ('naa',)
        self._path = "%s/%s" % (self.configfs_dir, "fc")

    def to_fabric_wwn(self, wwn):
        # strip 'naa.' and add colons
        return colonize(wwn[4:])

    def from_fabric_wwn(self, wwn):
        return "naa." + wwn.replace(":", "")

    @property
    def wwns(self):
        for wwn_file in glob("/sys/class/fc_host/host*/port_name"):
            with ignored(IOError):
                if fread(os.path.dirname(wwn_file)+"/symbolic_name").startswith("fcoe"):
                    yield "naa." + fread(wwn_file)[2:]


class USBGadgetFabricModule(_BaseFabricModule):
    def __init__(self):
        super(USBGadgetFabricModule, self).__init__('usb_gadget')
        self.features = ("nexus",)
        self.wwn_types = ('naa',)
        self.kernel_module = "tcm_usb_gadget"


class VhostFabricModule(_BaseFabricModule):
    def __init__(self):
        super(VhostFabricModule, self).__init__('vhost')
        self.features = ("nexus", "acls", "tpgts")
        self.wwn_types = ('naa',)
        self.kernel_module = "tcm_vhost"


class XenPvScsiFabricModule(_BaseFabricModule):
    def __init__(self):
        super(XenPvScsiFabricModule, self).__init__('xen-pvscsi')
        self._path = "%s/%s" % (self.configfs_dir, 'xen-pvscsi')
        self.features = ("nexus", "tpgts")
        self.wwn_types = ('naa',)
        self.kernel_module = "xen-scsiback"


class IbmvscsisFabricModule(_BaseFabricModule):
    def __init__(self):
        super(IbmvscsisFabricModule, self).__init__('ibmvscsis')
        self.features = ()
        self.kernel_module = "ibmvscsis"

    @property
    def wwns(self):
        for wwn_file in glob("/sys/module/ibmvscsis/drivers/vio:ibmvscsis/*/devspec"):
            name = fread(wwn_file)
            yield name[name.find("@") + 1:]


fabric_modules = {
    "srpt": SRPTFabricModule,
    "iscsi": ISCSIFabricModule,
    "loopback": LoopbackFabricModule,
    "qla2xxx": Qla2xxxFabricModule,
    "sbp": SBPFabricModule,
    "tcm_fc": FCoEFabricModule,
    # "usb_gadget": USBGadgetFabricModule, # very rare, don't show
    "vhost": VhostFabricModule,
    "xen-pvscsi": XenPvScsiFabricModule,
    "ibmvscsis": IbmvscsisFabricModule,
}

#
# Maintain compatibility with existing FabricModule(fabricname) usage
# e.g. FabricModule('iscsi') returns an ISCSIFabricModule
#

class FabricModule(object):

    def __new__(cls, name):
        return fabric_modules[name]()

    @classmethod
    def all(cls):
        for mod in six.itervalues(fabric_modules):
            yield mod()

    @classmethod
    def list_registered_drivers(cls):
        try:
            return os.listdir('/sys/module/target_core_mod/holders')
        except OSError:
            return []
rtslib-fb-2.1.74/rtslib/node.py
'''
Implements the base CFSNode class and a few inherited variants.

This file is part of RTSLib.
Copyright (c) 2011-2013 by Datera, Inc
Copyright (c) 2011-2014 by Red Hat, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''

import os
import stat

from .utils import fread, fwrite, RTSLibError, RTSLibNotInCFS


class CFSNode(object):

    # Where is the configfs base LIO directory ?
    configfs_dir = '/sys/kernel/config/target'

    # CFSNode private stuff

    def __init__(self):
        self._path = self.configfs_dir

    def __eq__(self, other):
        return self._path == other._path

    def __ne__(self, other):
        return self._path != other._path

    def _get_path(self):
        return self._path

    def _create_in_cfs_ine(self, mode):
        '''
        Creates the configFS node if it does not already exist, depending
        on the mode.
        any -> makes sure it exists, also works if the node already
               does exist
        lookup -> make sure it does NOT exist
        create -> create the node which must not exist beforehand
        '''
        if mode not in ['any', 'lookup', 'create']:
            raise RTSLibError("Invalid mode: %s" % mode)
        if self.exists and mode == 'create':
            # ensure that self.path is not stale hba-only dir
            if os.path.samefile(os.path.dirname(self.path),
                                self.configfs_dir+'/core') \
                    and not next(os.walk(self.path))[1]:
                os.rmdir(self.path)
            else:
                raise RTSLibError("This %s already exists in configFS"
                                  % self.__class__.__name__)
        elif not self.exists and mode == 'lookup':
            raise RTSLibNotInCFS("No such %s in configfs: %s"
                                 % (self.__class__.__name__, self.path))
        if not self.exists:
            try:
                os.mkdir(self.path)
            except:
                raise RTSLibError("Could not create %s in configFS"
                                  % self.__class__.__name__)

    def _exists(self):
        return os.path.isdir(self.path)

    def _check_self(self):
        if not self.exists:
            raise RTSLibNotInCFS("This %s does not exist in configFS"
                                 % self.__class__.__name__)

    def _list_files(self, path, writable=None, readable=None):
        '''
        List files under a path depending on their owner's write permissions.
        @param path: The path under which the files are expected to be.
            If the path itself is not a directory, an empty list will be
            returned.
        @type path: str
        @param writable: If None (default), return all files despite their
            writability. If True, return only writable files. If False,
            return only non-writable files.
        @type writable: bool or None
        @param readable: If None (default), return all files despite their
            readability. If True, return only readable files. If False,
            return only non-readable files.
        @type readable: bool or None
        @return: List of file names filtered according to their read/write
            perms.
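        As a minimal standalone sketch of the owner-permission test
        performed below (using a temporary file rather than configfs):

        ```python
        # Sketch of the S_IRUSR/S_IWUSR check _list_files performs,
        # against a throwaway file instead of a configfs entry.
        import os
        import stat
        import tempfile

        fd, tmp = tempfile.mkstemp()
        os.close(fd)
        os.chmod(tmp, 0o400)  # owner read-only

        sres = os.stat(tmp)
        readable = (sres[stat.ST_MODE] & stat.S_IRUSR) == stat.S_IRUSR
        writable = (sres[stat.ST_MODE] & stat.S_IWUSR) == stat.S_IWUSR

        os.remove(tmp)
        ```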
        '''
        if not os.path.isdir(path):
            return []

        if writable is None and readable is None:
            names = os.listdir(path)
        else:
            names = []
            for name in os.listdir(path):
                sres = os.stat("%s/%s" % (path, name))
                if writable is not None:
                    if writable != ((sres[stat.ST_MODE] & stat.S_IWUSR) ==
                                    stat.S_IWUSR):
                        continue
                if readable is not None:
                    if readable != ((sres[stat.ST_MODE] & stat.S_IRUSR) ==
                                    stat.S_IRUSR):
                        continue
                names.append(name)
        names.sort()
        return names

    # CFSNode public stuff

    def list_parameters(self, writable=None, readable=None):
        '''
        @param writable: If None (default), return all parameters despite
            their writability. If True, return only writable parameters.
            If False, return only non-writable parameters.
        @type writable: bool or None
        @param readable: If None (default), return all parameters despite
            their readability. If True, return only readable parameters.
            If False, return only non-readable parameters.
        @type readable: bool or None
        @return: The list of existing RFC-3720 parameter names.
        '''
        self._check_self()
        path = "%s/param" % self.path
        return self._list_files(path, writable, readable)

    def list_attributes(self, writable=None, readable=None):
        '''
        @param writable: If None (default), return all files despite their
            writability. If True, return only writable files. If False,
            return only non-writable files.
        @type writable: bool or None
        @param readable: If None (default), return all files despite their
            readability. If True, return only readable files. If False,
            return only non-readable files.
        @type readable: bool or None
        @return: A list of existing attribute names as strings.
        '''
        self._check_self()
        path = "%s/attrib" % self.path
        return self._list_files(path, writable, readable)

    def set_attribute(self, attribute, value):
        '''
        Sets the value of a named attribute.
        The attribute must exist in configFS.
        @param attribute: The attribute's name. It is case-sensitive.
        @type attribute: string
        @param value: The attribute's value.
        @type value: string
        '''
        self._check_self()
        path = "%s/attrib/%s" % (self.path, str(attribute))
        if not os.path.isfile(path):
            raise RTSLibError("Cannot find attribute: %s"
                              % str(attribute))
        else:
            try:
                fwrite(path, "%s" % str(value))
            except Exception as e:
                raise RTSLibError("Cannot set attribute %s: %s"
                                  % (attribute, e))

    def get_attribute(self, attribute):
        '''
        @param attribute: The attribute's name. It is case-sensitive.
        @return: The named attribute's value, as a string.
        '''
        self._check_self()
        path = "%s/attrib/%s" % (self.path, str(attribute))
        if not os.path.isfile(path):
            raise RTSLibError("Cannot find attribute: %s" % attribute)
        else:
            return fread(path)

    def set_parameter(self, parameter, value):
        '''
        Sets the value of a named RFC-3720 parameter.
        The parameter must exist in configFS.
        @param parameter: The RFC-3720 parameter's name. It is
            case-sensitive.
        @type parameter: string
        @param value: The parameter's value.
        @type value: string
        '''
        self._check_self()
        path = "%s/param/%s" % (self.path, str(parameter))
        if not os.path.isfile(path):
            raise RTSLibError("Cannot find parameter: %s" % parameter)
        else:
            try:
                fwrite(path, "%s\n" % str(value))
            except Exception as e:
                raise RTSLibError("Cannot set parameter %s: %s"
                                  % (parameter, e))

    def get_parameter(self, parameter):
        '''
        @param parameter: The RFC-3720 parameter's name. It is
            case-sensitive.
        @type parameter: string
        @return: The named parameter value as a string.
        '''
        self._check_self()
        path = "%s/param/%s" % (self.path, str(parameter))
        if not os.path.isfile(path):
            raise RTSLibError("Cannot find RFC-3720 parameter: %s"
                              % parameter)
        else:
            return fread(path)

    def delete(self):
        '''
        If the underlying configFS object does not exist, this method does
        nothing. If the underlying configFS object exists, this method
        attempts to delete it.
        '''
        if self.exists:
            os.rmdir(self.path)

    path = property(_get_path,
                    doc="Get the configFS object path.")
    exists = property(_exists,
                      doc="Is True as long as the underlying configFS "
                          "object exists. If the underlying configFS "
                          "object gets deleted either by calling the "
                          "delete() method, or by any other means, it "
                          "will be False.")

    def dump(self):
        d = {}
        attrs = {}
        params = {}
        for item in self.list_attributes(writable=True, readable=True):
            try:
                attrs[item] = int(self.get_attribute(item))
            except ValueError:
                attrs[item] = self.get_attribute(item)
        if attrs:
            d['attributes'] = attrs
        for item in self.list_parameters(writable=True, readable=True):
            params[item] = self.get_parameter(item)
        if params:
            d['parameters'] = params
        return d


def _test():
    import doctest
    doctest.testmod()

if __name__ == "__main__":
    _test()
rtslib-fb-2.1.74/rtslib/root.py
'''
Implements the RTSRoot class.

This file is part of RTSLib.
Copyright (c) 2011-2013 by Datera, Inc
Copyright (c) 2011-2014 by Red Hat, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''

import os
import stat
import json
import glob
import errno
import shutil

from .node import CFSNode
from .target import Target
from .fabric import FabricModule
from .tcm import so_mapping, bs_cache, StorageObject
from .utils import RTSLibError, RTSLibALUANotSupported, modprobe, mount_configfs
from .utils import dict_remove, set_attributes
from .utils import fread, fwrite
from .alua import ALUATargetPortGroup

default_save_file = "/etc/target/saveconfig.json"


class RTSRoot(CFSNode):
    '''
    This is an interface to the root of the configFS object tree.
Is allows one to start browsing Target and StorageObjects, as well as helper methods to return arbitrary objects from the configFS tree. >>> import rtslib.root as root >>> rtsroot = root.RTSRoot() >>> rtsroot.path '/sys/kernel/config/target' >>> rtsroot.exists True >>> rtsroot.targets # doctest: +ELLIPSIS [...] >>> rtsroot.tpgs # doctest: +ELLIPSIS [...] >>> rtsroot.storage_objects # doctest: +ELLIPSIS [...] >>> rtsroot.network_portals # doctest: +ELLIPSIS [...] ''' # RTSRoot private stuff # this should match the kernel target driver default db dir _default_dbroot = "/var/target" # this is where the target DB is to be located (instead of the default) _preferred_dbroot = "/etc/target" def __init__(self): ''' Instantiate an RTSRoot object. Basically checks for configfs setup and base kernel modules (tcm) ''' super(RTSRoot, self).__init__() try: mount_configfs() except RTSLibError: modprobe('configfs') mount_configfs() try: self._create_in_cfs_ine('any') except RTSLibError: modprobe('target_core_mod') self._create_in_cfs_ine('any') self._set_dbroot() def _list_targets(self): self._check_self() for fabric_module in self.fabric_modules: for target in fabric_module.targets: yield target def _list_storage_objects(self): self._check_self() for so in StorageObject.all(): yield so def _list_alua_tpgs(self): self._check_self() for so in self.storage_objects: for a in so.alua_tpgs: yield a def _list_tpgs(self): self._check_self() for t in self.targets: for tpg in t.tpgs: yield tpg def _list_node_acls(self): self._check_self() for t in self.tpgs: for node_acl in t.node_acls: yield node_acl def _list_node_acl_groups(self): self._check_self() for t in self.tpgs: for nag in t.node_acl_groups: yield nag def _list_mapped_luns(self): self._check_self() for na in self.node_acls: for mlun in na.mapped_luns: yield mlun def _list_mapped_lun_groups(self): self._check_self() for nag in self.node_acl_groups: for mlg in nag.mapped_lun_groups: yield mlg def _list_network_portals(self): 
        self._check_self()
        for t in self.tpgs:
            for p in t.network_portals:
                yield p

    def _list_luns(self):
        self._check_self()
        for t in self.tpgs:
            for lun in t.luns:
                yield lun

    def _list_sessions(self):
        self._check_self()
        for na in self.node_acls:
            if na.session:
                yield na.session

    def _list_fabric_modules(self):
        self._check_self()
        for mod in FabricModule.all():
            yield mod

    def __str__(self):
        return "rtslib"

    def _set_dbroot(self):
        dbroot_path = self.path + "/dbroot"
        if not os.path.exists(dbroot_path):
            self._dbroot = self._default_dbroot
            return
        self._dbroot = fread(dbroot_path)
        if self._dbroot != self._preferred_dbroot:
            if len(FabricModule.list_registered_drivers()) != 0:
                # Writing to dbroot_path after drivers have been registered
                # will make the kernel emit this error:
                # db_root: cannot be changed: target drivers registered
                from warnings import warn
                warn("Cannot set dbroot to {}. Target drivers have already "
                     "been registered.".format(self._preferred_dbroot))
                return
            try:
                fwrite(dbroot_path, self._preferred_dbroot + "\n")
            except Exception:
                if not os.path.isdir(self._preferred_dbroot):
                    raise RTSLibError(
                        "Cannot set dbroot to {}. Please check if this "
                        "directory exists.".format(self._preferred_dbroot))
            self._dbroot = fread(dbroot_path)

    def _get_dbroot(self):
        return self._dbroot

    def _get_saveconf(self, so_path, save_file):
        '''
        Fetch the configuration of all the blocks and return the saved
        config updated with the current storageObject info and its related
        target configuration for the given storage object path.
        '''
        current = self.dump()

        try:
            with open(save_file, "r") as f:
                saveconf = json.loads(f.read())
        except IOError as e:
            if e.errno == errno.ENOENT:
                saveconf = {'storage_objects': [], 'targets': []}
            else:
                raise RTSLibError("Could not open %s" % save_file)

        fetch_cur_so = False
        fetch_cur_tg = False
        # Get the given block current storageObj configuration
        for sidx, sobj in enumerate(current.get('storage_objects', [])):
            if '/backstores/' + sobj['plugin'] + '/' + sobj['name'] == so_path:
                current_so = current['storage_objects'][sidx]
                fetch_cur_so = True
                break

        # Get the given block current target configuration
        if fetch_cur_so:
            for tidx, tobj in enumerate(current.get('targets', [])):
                if fetch_cur_tg:
                    break
                for luns in tobj.get('tpgs', []):
                    if fetch_cur_tg:
                        break
                    for lun in luns.get('luns', []):
                        if lun['storage_object'] == so_path:
                            current_tg = current['targets'][tidx]
                            fetch_cur_tg = True
                            break

        fetch_sav_so = False
        fetch_sav_tg = False
        # Get the given block storageObj from saved configuration
        for sidx, sobj in enumerate(saveconf.get('storage_objects', [])):
            if '/backstores/' + sobj['plugin'] + '/' + sobj['name'] == so_path:
                # Merge StorageObj
                if fetch_cur_so:
                    saveconf['storage_objects'][sidx] = current_so
                # Remove StorageObj
                else:
                    saveconf['storage_objects'].remove(
                        saveconf['storage_objects'][sidx])
                fetch_sav_so = True
                break

        # Get the given block target from saved configuration
        if fetch_sav_so:
            for tidx, tobj in enumerate(saveconf.get('targets', [])):
                if fetch_sav_tg:
                    break
                for luns in tobj.get('tpgs', []):
                    if fetch_sav_tg:
                        break
                    for lun in luns.get('luns', []):
                        if lun['storage_object'] == so_path:
                            # Merge target
                            if fetch_cur_tg:
                                saveconf['targets'][tidx] = current_tg
                            # Remove target
                            else:
                                saveconf['targets'].remove(
                                    saveconf['targets'][tidx])
                            fetch_sav_tg = True
                            break

        # Insert storageObj
        if fetch_cur_so and not fetch_sav_so:
            saveconf['storage_objects'].append(current_so)
        # Insert target
        if fetch_cur_tg and not fetch_sav_tg:
            saveconf['targets'].append(current_tg)

        return saveconf

    # RTSRoot public stuff

    def dump(self):
        '''
        Returns a dict representing the complete state of the target
        config, suitable for serialization/deserialization, and then
        handing to restore().
        '''
        d = super(RTSRoot, self).dump()
        d['storage_objects'] = [so.dump() for so in self.storage_objects]
        d['targets'] = [t.dump() for t in self.targets]
        d['fabric_modules'] = [f.dump() for f in self.fabric_modules
                               if f.has_feature("discovery_auth")
                               if f.discovery_enable_auth]
        return d

    def clear_existing(self, target=None, storage_object=None, confirm=False):
        '''
        Remove entire current configuration.
        '''
        if not confirm:
            raise RTSLibError("As a precaution, confirm=True needs to be set")

        # Targets depend on storage objects, delete them first.
for t in self.targets: # * Delete the single matching target if target=iqn.xxx was supplied # with restoreconfig command # * If only storage_object=blockx option is supplied then do not # delete any targets # * If restoreconfig was not supplied with neither target=iqn.xxx # nor storage_object=blockx then delete all targets if (not storage_object and not target) or (target and t.wwn == target): t.delete() if target: break for fm in (f for f in self.fabric_modules if f.has_feature("discovery_auth")): fm.clear_discovery_auth_settings() for so in self.storage_objects: # * Delete the single matching storage object if storage_object=blockx # was supplied with restoreconfig command # * If only target=iqn.xxx option is supplied then do not # delete any storage_object's # * If restoreconfig was not supplied with neither target=iqn.xxx # nor storage_object=blockx then delete all storage_object's if (not storage_object and not target) or (storage_object and so.name == storage_object): so.delete() if storage_object: break # If somehow some hbas still exist (no storage object within?) clean # them up too. if not (storage_object or target): for hba_dir in glob.glob("%s/core/*_*" % self.configfs_dir): os.rmdir(hba_dir) def restore(self, config, target=None, storage_object=None, clear_existing=False, abort_on_error=False): ''' Takes a dict generated by dump() and reconfigures the target to match. Returns list of non-fatal errors that were encountered. Will refuse to restore over an existing configuration unless clear_existing is True. 
''' if clear_existing: self.clear_existing(target, storage_object, confirm=True) elif any(self.storage_objects) or any(self.targets): if any(self.storage_objects): for config_so in config.get('storage_objects', []): for loaded_so in self.storage_objects: if config_so['name'] == loaded_so.name and \ config_so['plugin'] == loaded_so.plugin: raise RTSLibError("storageobject '%s:%s' exist not restoring" %(loaded_so.plugin, loaded_so.name)) if any(self.targets): for config_tg in config.get('targets', []): for loaded_tg in self.targets: if config_tg['wwn'] == loaded_tg.wwn: raise RTSLibError("target with wwn %s exist, not restoring" %(loaded_tg.wwn)) errors = [] if abort_on_error: def err_func(err_str): raise RTSLibError(err_str) else: def err_func(err_str): errors.append(err_str + ", skipped") for index, so in enumerate(config.get('storage_objects', [])): if 'name' not in so: err_func("'name' not defined in storage object %d" % index) continue # * Restore/load the single matching storage object if # storage_object=blockx was supplied with restoreconfig command # * In case if no storage_object was supplied but only target=iqn.xxx # was supplied then do not load any storage_object's # * If neither storage_object nor target option was supplied to # restoreconfig, then go ahead and load all storage_object's if (not storage_object and not target) or (storage_object and so['name'] == storage_object): try: so_cls = so_mapping[so['plugin']] except KeyError: err_func("'plugin' not defined or invalid in storageobject %s" % so['name']) if storage_object: break continue kwargs = so.copy() dict_remove(kwargs, ('exists', 'attributes', 'plugin', 'buffered_mode', 'alua_tpgs')) try: so_obj = so_cls(**kwargs) except Exception as e: err_func("Could not create StorageObject %s: %s" % (so['name'], e)) if storage_object: break continue # Custom err func to include block name def so_err_func(x): return err_func("Storage Object %s/%s: %s" % (so['plugin'], so['name'], x)) set_attributes(so_obj, 
so.get('attributes', {}), so_err_func) for alua_tpg in so.get('alua_tpgs', {}): try: ALUATargetPortGroup.setup(so_obj, alua_tpg, err_func) except RTSLibALUANotSupported: pass if storage_object: break # Don't need to create fabric modules for index, fm in enumerate(config.get('fabric_modules', [])): if 'name' not in fm: err_func("'name' not defined in fabricmodule %d" % index) continue for fm_obj in self.fabric_modules: if fm['name'] == fm_obj.name: fm_obj.setup(fm, err_func) break for index, t in enumerate(config.get('targets', [])): if 'wwn' not in t: err_func("'wwn' not defined in target %d" % index) continue # * Restore/load the single matching target if target=iqn.xxx was # supplied with restoreconfig command # * In case if no target was supplied but only storage_object=blockx # was supplied then do not load any targets # * If neither storage_object nor target option was supplied to # restoreconfig, then go ahead and load all targets if (not storage_object and not target) or (target and t['wwn'] == target): if 'fabric' not in t: err_func("target %s missing 'fabric' field" % t['wwn']) if target: break continue if t['fabric'] not in (f.name for f in self.fabric_modules): err_func("Unknown fabric '%s'" % t['fabric']) if target: break continue fm_obj = FabricModule(t['fabric']) # Instantiate target Target.setup(fm_obj, t, err_func) if target: break return errors def save_to_file(self, save_file=None, so_path=None): ''' Write the configuration in json format to a file. Save file defaults to '/etc/target/saveconfig.json'. 
        '''
        if not save_file:
            save_file = default_save_file

        if so_path:
            saveconf = self._get_saveconf(so_path, save_file)
        else:
            saveconf = self.dump()

        tmp_file = save_file + ".temp"

        mode = stat.S_IRUSR | stat.S_IWUSR  # 0o600
        umask = 0o777 ^ mode  # Prevents always downgrading umask to 0

        # For security, remove file with potentially elevated mode
        try:
            os.remove(tmp_file)
        except OSError:
            pass

        umask_original = os.umask(umask)
        # Even though the old file is first deleted, a race condition is still
        # possible. Including os.O_EXCL with os.O_CREAT in the flags will
        # prevent the file from being created if it exists due to a race
        try:
            fdesc = os.open(tmp_file, os.O_WRONLY | os.O_CREAT | os.O_EXCL,
                            mode)
        except OSError:
            raise RTSLibError("Could not open %s" % tmp_file)

        with os.fdopen(fdesc, 'w') as f:
            f.write(json.dumps(saveconf, sort_keys=True, indent=2))
            f.write("\n")
            f.flush()
            os.fsync(f.fileno())

        # copy along with permissions
        shutil.copy(tmp_file, save_file)
        os.umask(umask_original)
        os.remove(tmp_file)

    def restore_from_file(self, restore_file=None, clear_existing=True,
                          target=None, storage_object=None,
                          abort_on_error=False):
        '''
        Restore the configuration from a file in json format.
        Restore file defaults to '/etc/target/saveconfig.json'.

        Returns a list of non-fatal errors. If abort_on_error is set,
        it will raise the exception instead of continuing.
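The save path in `save_to_file` above is deliberately careful: it writes to a same-directory temp file created with mode 0600 (enforced through the umask and `O_EXCL`), fsyncs the data, and only then copies it over the real save file. A standalone sketch of that pattern, with the function name and paths made up for illustration:

```python
import json
import os
import shutil
import stat
import tempfile

def atomic_json_save(data, save_file):
    # Same pattern as RTSRoot.save_to_file(): temp file with 0600 mode,
    # O_EXCL to avoid create races, fsync before the final copy.
    tmp_file = save_file + ".temp"
    mode = stat.S_IRUSR | stat.S_IWUSR   # 0o600
    umask = 0o777 ^ mode                 # never widen the requested mode

    # Drop any stale temp file that may carry an elevated mode
    try:
        os.remove(tmp_file)
    except OSError:
        pass

    umask_original = os.umask(umask)
    try:
        # O_EXCL makes creation fail if the file reappeared via a race
        fdesc = os.open(tmp_file, os.O_WRONLY | os.O_CREAT | os.O_EXCL, mode)
        with os.fdopen(fdesc, 'w') as f:
            f.write(json.dumps(data, sort_keys=True, indent=2))
            f.write("\n")
            f.flush()
            os.fsync(f.fileno())
        shutil.copy(tmp_file, save_file)  # copies permission bits too
        os.remove(tmp_file)
    finally:
        os.umask(umask_original)

workdir = tempfile.mkdtemp()
path = os.path.join(workdir, "saveconfig.json")
atomic_json_save({'targets': []}, path)
```

The copy-then-remove dance (rather than `os.rename`) matches the original code; it keeps the destination's directory entry stable while still never exposing a partially written config.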
        '''
        if not restore_file:
            restore_file = default_save_file

        with open(restore_file, "r") as f:
            config = json.loads(f.read())

        return self.restore(config, target, storage_object,
                            clear_existing=clear_existing,
                            abort_on_error=abort_on_error)

    def invalidate_caches(self):
        '''
        Invalidate any caches used throughout the hierarchy
        '''
        bs_cache.clear()

    targets = property(_list_targets,
                       doc="Get the list of Target objects.")
    tpgs = property(_list_tpgs,
                    doc="Get the list of all the existing TPG objects.")
    node_acls = property(_list_node_acls,
                         doc="Get the list of all the existing NodeACL objects.")
    node_acl_groups = property(_list_node_acl_groups,
                               doc="Get the list of all the existing NodeACLGroup objects.")
    mapped_luns = property(_list_mapped_luns,
                           doc="Get the list of all the existing MappedLUN objects.")
    mapped_lun_groups = property(_list_mapped_lun_groups,
                                 doc="Get the list of all the existing MappedLUNGroup objects.")
    sessions = property(_list_sessions,
                        doc="Get the list of all the existing sessions.")
    network_portals = property(_list_network_portals,
                               doc="Get the list of all the existing Network Portal objects.")
    storage_objects = property(_list_storage_objects,
                               doc="Get the list of all the existing Storage objects.")
    luns = property(_list_luns,
                    doc="Get the list of all existing LUN objects.")
    fabric_modules = property(_list_fabric_modules,
                              doc="Get the list of all FabricModule objects.")
    alua_tpgs = property(_list_alua_tpgs,
                         doc="Get the list of all ALUA TPG objects.")
    dbroot = property(_get_dbroot,
                      doc="Get the target database root")


def _test():
    '''Run the doctests.'''
    import doctest
    doctest.testmod()

if __name__ == "__main__":
    _test()

rtslib-fb-2.1.74/rtslib/target.py

'''
Implements the RTS generic Target fabric classes.

This file is part of RTSLib.
Copyright (c) 2011-2013 by Datera, Inc.
Copyright (c) 2011-2014 by Red Hat, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
'''

import os
from glob import iglob as glob
from functools import partial
from six.moves import range
import uuid

from .node import CFSNode
from .utils import RTSLibBrokenLink, RTSLibError
from .utils import fread, fwrite, normalize_wwn, generate_wwn
from .utils import dict_remove, set_attributes, set_parameters, ignored
from .utils import _get_auth_attr, _set_auth_attr
from . import tcm
import six

auth_params = ('userid', 'password', 'mutual_userid', 'mutual_password')

class Target(CFSNode):
    '''
    This is an interface to Targets in configFS.
    A Target is identified by its wwn.
    To a Target is attached a list of TPG objects.
    '''

    # Target private stuff

    def __repr__(self):
        return "<Target %s/%s>" % (self.fabric_module.name, self.wwn)

    def __init__(self, fabric_module, wwn=None, mode='any'):
        '''
        @param fabric_module: The target's fabric module.
        @type fabric_module: FabricModule
        @param wwn: The optional Target's wwn. If no wwn is specified, one
        will be generated.
        @type wwn: string
        @param mode: An optional string containing the object creation mode:
            - I{'any'} means the configFS object will be either looked up
              or created.
            - I{'lookup'} means the object MUST already exist in configFS.
            - I{'create'} means the object must NOT already exist in configFS.
        @type mode: string
        @return: A Target object.
        '''
        super(Target, self).__init__()
        self.fabric_module = fabric_module
        fabric_module._check_self()

        if wwn is not None:
            # old versions used wrong NAA prefix, fixup
            if wwn.startswith("naa.6"):
                wwn = "naa.5" + wwn[5:]
            self.wwn, self.wwn_type = fabric_module.to_normalized_wwn(wwn)
        elif not fabric_module.wwns:
            self.wwn = generate_wwn(fabric_module.wwn_types[0])
            self.wwn_type = fabric_module.wwn_types[0]
        else:
            raise RTSLibError("Fabric cannot generate WWN but it was not given")

        # Checking is done, convert to format the fabric wants
        fabric_wwn = fabric_module.to_fabric_wwn(self.wwn)
        self._path = "%s/%s" % (self.fabric_module.path, fabric_wwn)
        self._create_in_cfs_ine(mode)

    def _list_tpgs(self):
        self._check_self()
        for tpg_dir in glob("%s/tpgt*" % self.path):
            tag = os.path.basename(tpg_dir).split('_')[1]
            tag = int(tag)
            yield TPG(self, tag, 'lookup')

    # Target public stuff

    def has_feature(self, feature):
        '''
        Whether or not this Target has a certain feature.
        '''
        return self.fabric_module.has_feature(feature)

    def delete(self):
        '''
        Recursively deletes a Target object.
        This will delete all attached TPG objects and then the Target itself.
        '''
        self._check_self()
        for tpg in self.tpgs:
            tpg.delete()
        super(Target, self).delete()

    tpgs = property(_list_tpgs, doc="Get the list of TPG for the Target.")

    @classmethod
    def setup(cls, fm_obj, t, err_func):
        '''
        Set up target objects based upon t dict, from saved config.
        Guard against missing or bad dict items, but keep going.
        Call 'err_func' for each error.
        '''
        if 'wwn' not in t:
            err_func("'wwn' not defined for Target")
            return

        try:
            t_obj = Target(fm_obj, t['wwn'])
        except RTSLibError as e:
            err_func("Could not create Target object: %s" % e)
            return

        for tpg in t.get('tpgs', []):
            TPG.setup(t_obj, tpg, err_func)

    def dump(self):
        d = super(Target, self).dump()
        d['wwn'] = self.wwn
        d['fabric'] = self.fabric_module.name
        d['tpgs'] = [tpg.dump() for tpg in self.tpgs]
        return d


class TPG(CFSNode):
    '''
    This is an interface to Target Portal Groups in configFS.
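Target.__init__ above silently rewrites WWNs that older rtslib versions stored with the wrong NAA prefix (`naa.6…`) to the corrected prefix (`naa.5…`) before normalization. The fixup in isolation, as a hypothetical helper (the function name is made up for illustration, not part of rtslib's API):

```python
def fixup_naa_prefix(wwn):
    # Old rtslib versions used the wrong NAA prefix; rewrite "naa.6..."
    # to "naa.5..." and leave every other WWN untouched.
    if wwn.startswith("naa.6"):
        return "naa.5" + wwn[5:]
    return wwn

print(fixup_naa_prefix("naa.6001405abcdef000"))           # naa.5001405abcdef000
print(fixup_naa_prefix("iqn.2003-01.org.linux-iscsi.x"))  # unchanged
```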
    A TPG is identified by its parent Target object and its TPG Tag.
    To a TPG object is attached a list of NetworkPortals.

    Targets without the 'tpgts' feature cannot have more than a single
    TPG, so attempts to create more will raise an exception.
    '''

    # TPG private stuff

    def __repr__(self):
        return "<TPG %d>" % self.tag

    def __init__(self, parent_target, tag=None, mode='any'):
        '''
        @param parent_target: The parent Target object of the TPG.
        @type parent_target: Target
        @param tag: The TPG Tag (TPGT).
        @type tag: int > 0
        @param mode: An optional string containing the object creation mode:
            - I{'any'} means the configFS object will be either looked up
              or created.
            - I{'lookup'} means the object MUST already exist in configFS.
            - I{'create'} means the object must NOT already exist in configFS.
        @type mode: string
        @return: A TPG object.
        '''
        super(TPG, self).__init__()

        if tag is None:
            tags = [tpg.tag for tpg in parent_target.tpgs]
            for index in range(1048576):
                if index not in tags and index > 0:
                    tag = index
                    break
            if tag is None:
                raise RTSLibError("Cannot find an available TPG Tag")
        else:
            tag = int(tag)
            if not tag > 0:
                raise RTSLibError("The TPG Tag must be >0")
        self._tag = tag

        if isinstance(parent_target, Target):
            self._parent_target = parent_target
        else:
            raise RTSLibError("Invalid parent Target")

        self._path = "%s/tpgt_%d" % (self.parent_target.path, self.tag)

        target_path = self.parent_target.path
        if not self.has_feature('tpgts') and not os.path.isdir(self._path):
            for filename in os.listdir(target_path):
                if filename.startswith("tpgt_") \
                        and os.path.isdir("%s/%s" % (target_path, filename)) \
                        and filename != "tpgt_%d" % self.tag:
                    raise RTSLibError("Target cannot have multiple TPGs")

        self._create_in_cfs_ine(mode)
        if self.has_feature('nexus') and not self._get_nexus():
            self._set_nexus()

    def _get_tag(self):
        return self._tag

    def _get_parent_target(self):
        return self._parent_target

    def _list_network_portals(self):
        self._check_self()
        if not self.has_feature('nps'):
            return
        for network_portal_dir in
os.listdir("%s/np" % self.path): (ip_address, port) = \ os.path.basename(network_portal_dir).rsplit(":", 1) port = int(port) yield NetworkPortal(self, ip_address, port, 'lookup') def _get_enable(self): self._check_self() path = "%s/enable" % self.path # If the TPG does not have the enable attribute, then it is always # enabled. if os.path.isfile(path): return bool(int(fread(path))) else: return True def _set_enable(self, boolean): ''' Enables or disables the TPG. If the TPG doesn't support the enable attribute, do nothing. ''' self._check_self() path = "%s/enable" % self.path if os.path.isfile(path) and (boolean != self._get_enable()): try: fwrite(path, str(int(boolean))) except IOError as e: raise RTSLibError("Cannot change enable state: %s" % e) def _get_nexus(self): ''' Gets the nexus initiator WWN, or None if the TPG does not have one. ''' self._check_self() if self.has_feature('nexus'): try: nexus_wwn = fread("%s/nexus" % self.path) except IOError: nexus_wwn = '' return nexus_wwn else: return None def _set_nexus(self, nexus_wwn=None): ''' Sets the nexus initiator WWN. Raises an exception if the nexus is already set or if the TPG does not use a nexus. 
''' self._check_self() if not self.has_feature('nexus'): raise RTSLibError("The TPG does not use a nexus") if self._get_nexus(): raise RTSLibError("The TPG's nexus initiator WWN is already set") fm = self.parent_target.fabric_module if nexus_wwn: nexus_wwn = fm.to_normalized_wwn(nexus_wwn)[0] else: # Nexus wwn type should match parent target nexus_wwn = generate_wwn(self.parent_target.wwn_type) fwrite("%s/nexus" % self.path, fm.to_fabric_wwn(nexus_wwn)) def _list_node_acls(self): self._check_self() if not self.has_feature('acls'): return node_acl_dirs = [os.path.basename(path) for path in os.listdir("%s/acls" % self.path)] for node_acl_dir in node_acl_dirs: fm = self.parent_target.fabric_module yield NodeACL(self, fm.from_fabric_wwn(node_acl_dir), 'lookup') def _list_node_acl_groups(self): self._check_self() if not self.has_feature('acls'): return names = set([]) for na in self.node_acls: tag = na.tag if tag: names.add(tag) return (NodeACLGroup(self, n) for n in names) def _list_luns(self): self._check_self() lun_dirs = [os.path.basename(path) for path in os.listdir("%s/lun" % self.path)] for lun_dir in lun_dirs: lun = lun_dir.split('_')[1] lun = int(lun) yield LUN(self, lun) def _control(self, command): self._check_self() path = "%s/control" % self.path fwrite(path, "%s\n" % str(command)) # TPG public stuff def has_feature(self, feature): ''' Whether or not this TPG has a certain feature. ''' return self.parent_target.has_feature(feature) def delete(self): ''' Recursively deletes a TPG object. This will delete all attached LUN, NetworkPortal and Node ACL objects and then the TPG itself. Before starting the actual deletion process, all sessions will be disconnected. ''' self._check_self() self.enable = False for acl in self.node_acls: acl.delete() for lun in self.luns: lun.delete() for portal in self.network_portals: portal.delete() super(TPG, self).delete() def node_acl(self, node_wwn, mode='any'): ''' Same as NodeACL() but without specifying the parent_tpg. 
''' self._check_self() return NodeACL(self, node_wwn=node_wwn, mode=mode) def network_portal(self, ip_address, port, mode='any'): ''' Same as NetworkPortal() but without specifying the parent_tpg. ''' self._check_self() return NetworkPortal(self, ip_address=ip_address, port=port, mode=mode) def lun(self, lun, storage_object=None, alias=None): ''' Same as LUN() but without specifying the parent_tpg. ''' self._check_self() return LUN(self, lun=lun, storage_object=storage_object, alias=alias) tag = property(_get_tag, doc="Get the TPG Tag as an int.") parent_target = property(_get_parent_target, doc="Get the parent Target object to which the " \ + "TPG is attached.") enable = property(_get_enable, _set_enable, doc="Get or set a boolean value representing the " \ + "enable status of the TPG. " \ + "True means the TPG is enabled, False means it is " \ + "disabled.") network_portals = property(_list_network_portals, doc="Get the list of NetworkPortal objects currently attached " \ + "to the TPG.") node_acls = property(_list_node_acls, doc="Get the list of NodeACL objects currently " \ + "attached to the TPG.") node_acl_groups = property(_list_node_acl_groups, doc="Get the list of NodeACL groups currently " \ + "attached to the TPG.") luns = property(_list_luns, doc="Get the list of LUN objects currently attached " \ + "to the TPG.") nexus = property(_get_nexus, _set_nexus, doc="Get or set (once) the TPG's Nexus is used.") chap_userid = property(partial(_get_auth_attr, attribute='auth/userid', ignore=True), partial(_set_auth_attr, attribute='auth/userid', ignore=True), doc="Set or get the initiator CHAP auth userid.") chap_password = property(partial(_get_auth_attr, attribute='auth/password', ignore=True), partial(_set_auth_attr, attribute='auth/password', ignore=True), doc="Set or get the initiator CHAP auth password.") chap_mutual_userid = property(partial(_get_auth_attr, attribute='auth/userid_mutual', ignore=True), partial(_set_auth_attr, 
attribute='auth/userid_mutual', ignore=True), doc="Set or get the initiator CHAP auth userid.") chap_mutual_password = property(partial(_get_auth_attr, attribute='auth/password_mutual', ignore=True), partial(_set_auth_attr, attribute='auth/password_mutual', ignore=True), doc="Set or get the initiator CHAP auth password.") def _get_authenticate_target(self): self._check_self() path = "%s/auth/authenticate_target" % self.path try: return bool(int(fread(path))) except: return None authenticate_target = property(_get_authenticate_target, doc="Get the boolean authenticate target flag.") @classmethod def setup(cls, t_obj, tpg, err_func): tpg_obj = cls(t_obj, tag=tpg.get("tag", None)) set_attributes(tpg_obj, tpg.get('attributes', {}), err_func) set_parameters(tpg_obj, tpg.get('parameters', {}), err_func) for lun in tpg.get('luns', []): LUN.setup(tpg_obj, lun, err_func) for p in tpg.get('portals', []): NetworkPortal.setup(tpg_obj, p, err_func) for acl in tpg.get('node_acls', []): NodeACL.setup(tpg_obj, acl, err_func) tpg_obj.enable = tpg.get('enable', True) dict_remove(tpg, ('luns', 'portals', 'node_acls', 'tag', 'attributes', 'parameters', 'enable')) for name, value in six.iteritems(tpg): if value: try: setattr(tpg_obj, name, value) except: err_func("Could not set tpg %s attribute '%s'" % (tpg_obj.tag, name)) def dump(self): d = super(TPG, self).dump() d['tag'] = self.tag d['enable'] = self.enable d['luns'] = [lun.dump() for lun in self.luns] d['portals'] = [portal.dump() for portal in self.network_portals] d['node_acls'] = [acl.dump() for acl in self.node_acls] if self.has_feature("auth"): for attr in auth_params: val = getattr(self, "chap_" + attr, None) if val: d["chap_" + attr] = val return d class LUN(CFSNode): ''' This is an interface to RTS Target LUNs in configFS. A LUN is identified by its parent TPG and LUN index. 
    '''

    MAX_TARGET_LUN = 65535

    # LUN private stuff

    def __repr__(self):
        return "<LUN %d (%s/%s)>" % (self.lun, self.storage_object.plugin,
                                     self.storage_object.name)

    def __init__(self, parent_tpg, lun=None, storage_object=None, alias=None):
        '''
        A LUN object can be instantiated in two ways:
            - B{Creation mode}: If I{storage_object} is specified, the
              underlying configFS object will be created with that parameter.
              No LUN with the same I{lun} index can pre-exist in the parent
              TPG in that mode, or instantiation will fail.
            - B{Lookup mode}: If I{storage_object} is not set, then the LUN
              will be bound to the existing configFS LUN object of the parent
              TPG having the specified I{lun} index. The underlying configFS
              object must already exist in that mode.

        @param parent_tpg: The parent TPG object.
        @type parent_tpg: TPG
        @param lun: The LUN index.
        @type lun: 0-65535
        @param storage_object: The storage object to be exported as a LUN.
        @type storage_object: StorageObject subclass
        @param alias: An optional parameter to manually specify the LUN alias.
        You probably do not need this.
        @type alias: string
        @return: A LUN object.
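When no index is given, LUN.__init__ picks the lowest free index in 0..65535 by scanning the indices already in use on the parent TPG (unlike TPG tags, LUN indices may be 0). That allocation logic in isolation, as a sketch with a made-up helper name:

```python
MAX_TARGET_LUN = 65535

def next_free_lun(used):
    # Same scan LUN.__init__ performs: return the first index in
    # 0..MAX_TARGET_LUN that no existing LUN of the TPG occupies.
    used = set(used)
    for index in range(MAX_TARGET_LUN + 1):
        if index not in used:
            return index
    raise ValueError("All LUNs 0-%d in use" % MAX_TARGET_LUN)

print(next_free_lun([0, 1, 3]))  # 2
print(next_free_lun([]))         # 0
```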
''' super(LUN, self).__init__() if isinstance(parent_tpg, TPG): self._parent_tpg = parent_tpg else: raise RTSLibError("Invalid parent TPG") if lun is None: luns = [l.lun for l in self.parent_tpg.luns] for index in range(self.MAX_TARGET_LUN+1): if index not in luns: lun = index break if lun is None: raise RTSLibError("All LUNs 0-%d in use" % self.MAX_TARGET_LUN) else: lun = int(lun) if lun < 0 or lun > self.MAX_TARGET_LUN: raise RTSLibError("LUN must be 0 to %d" % self.MAX_TARGET_LUN) self._lun = lun self._path = "%s/lun/lun_%d" % (self.parent_tpg.path, self.lun) if storage_object is None and alias is not None: raise RTSLibError("The alias parameter has no meaning " \ + "without the storage_object parameter") if storage_object is not None: self._create_in_cfs_ine('create') try: self._configure(storage_object, alias) except: self.delete() raise else: self._create_in_cfs_ine('lookup') def _configure(self, storage_object, alias): self._check_self() if alias is None: alias = str(uuid.uuid4())[-10:] else: alias = str(alias).strip() if '/' in alias: raise RTSLibError("Invalid alias: %s", alias) destination = "%s/%s" % (self.path, alias) if storage_object.exists: source = storage_object.path else: raise RTSLibError("storage_object does not exist in configFS") os.symlink(source, destination) def _get_alias(self): self._check_self() for path in os.listdir(self.path): if os.path.islink("%s/%s" % (self.path, path)): return os.path.basename(path) raise RTSLibBrokenLink("Broken LUN in configFS, no storage object") def _get_storage_object(self): self._check_self() alias_path = os.path.realpath("%s/%s" % (self.path, self.alias)) return tcm.StorageObject.so_from_path(alias_path) def _get_parent_tpg(self): return self._parent_tpg def _get_lun(self): return self._lun def _list_mapped_luns(self): self._check_self() tpg = self.parent_tpg if not tpg.has_feature('acls'): return for na in tpg.node_acls: for mlun in na.mapped_luns: if os.path.realpath("%s/%s" % (mlun.path, mlun.alias)) == 
self.path: yield mlun # pass through backends will not have setup all the default # ALUA structs in the kernel. If the kernel has been setup, # a user created group or default_tg_pt_gp will be returned. # If the kernel was not properly setup an empty string is # return in alua_tg_pt_gp. Writing to alua_tg_pt_gp will crash # older kernels and will return a -Exyz code in newer ones. def _get_alua_tg_pt_gp_name(self): self._check_self() storage_object = self._get_storage_object() if storage_object.alua_supported is False: return None path = "%s/alua_tg_pt_gp" % self.path try: info = fread(path) if not info: return None group_line = info.splitlines()[0] return group_line.split(':')[1].strip() except IOError as e: return None def _set_alua_tg_pt_gp_name(self, group_name): self._check_self() if not self._get_alua_tg_pt_gp_name(): return -1 path = "%s/alua_tg_pt_gp" % self.path try: fwrite(path, group_name) except IOError as e: return -1 return 0 # LUN public stuff def delete(self): ''' If the underlying configFS object does not exist, this method does nothing. If the underlying configFS object exists, this method attempts to delete it along with all MappedLUN objects referencing that LUN. 
''' self._check_self() for mlun in self.mapped_luns: mlun.delete() try: link = self.alias except RTSLibBrokenLink: pass else: if os.path.islink("%s/%s" % (self.path, link)): os.unlink("%s/%s" % (self.path, link)) super(LUN, self).delete() parent_tpg = property(_get_parent_tpg, doc="Get the parent TPG object.") lun = property(_get_lun, doc="Get the LUN index as an int.") storage_object = property(_get_storage_object, doc="Get the storage object attached to the LUN.") alias = property(_get_alias, doc="Get the LUN alias.") mapped_luns = property(_list_mapped_luns, doc="List all MappedLUN objects referencing this LUN.") alua_tg_pt_gp_name = property(_get_alua_tg_pt_gp_name, _set_alua_tg_pt_gp_name, doc="Get and Set the LUN's ALUA Target Port Group") @classmethod def setup(cls, tpg_obj, lun, err_func): if 'index' not in lun: err_func("'index' missing from a LUN in TPG %d" % tpg_obj.tag) return try: bs_name, so_name = lun['storage_object'].split('/')[2:] except: err_func("Malformed storage object field for LUN %d" % lun['index']) return for so in tcm.StorageObject.all(): if so_name == so.name and bs_name == so.plugin: match_so = so break else: err_func("Could not find matching StorageObject for LUN %d" % lun['index']) return try: lun_obj = cls(tpg_obj, lun['index'], storage_object=match_so, alias=lun.get('alias')) except (RTSLibError, KeyError): err_func("Creating TPG %d LUN index %d failed" % (tpg_obj.tag, lun['index'])) try: lun_obj.alua_tg_pt_gp_name = lun['alua_tg_pt_gp_name'] except KeyError: # alua_tg_pt_gp support not present in older versions pass def dump(self): d = super(LUN, self).dump() d['storage_object'] = "/backstores/%s/%s" % \ (self.storage_object.plugin, self.storage_object.name) d['index'] = self.lun d['alias'] = self.alias d['alua_tg_pt_gp_name'] = self.alua_tg_pt_gp_name return d class NetworkPortal(CFSNode): ''' This is an interface to NetworkPortals in configFS. 
    A NetworkPortal is identified by its IP and port, but here we also
    require the parent TPG, so instance objects represent both the
    NetworkPortal and its association to a TPG. This is necessary to get
    path information in order to create the portal in the proper configFS
    hierarchy.
    '''

    # NetworkPortal private stuff

    def __repr__(self):
        return "<NetworkPortal %s:%d>" % (self.ip_address, self.port)

    def __init__(self, parent_tpg, ip_address, port=3260, mode='any'):
        '''
        @param parent_tpg: The parent TPG object.
        @type parent_tpg: TPG
        @param ip_address: The ipv4/v6 IP address of the NetworkPortal.
        ipv6 addresses should be surrounded by '[]'.
        @type ip_address: string
        @param port: The optional (defaults to 3260) NetworkPortal TCP/IP port.
        @type port: int
        @param mode: An optional string containing the object creation mode:
            - I{'any'} means the configFS object will be either looked up
              or created.
            - I{'lookup'} means the object MUST already exist in configFS.
            - I{'create'} means the object must NOT already exist in configFS.
        @type mode: string
        @return: A NetworkPortal object.
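configFS names each portal directory `<ip_address>:<port>`. Because IPv6 addresses contain colons (and are kept bracketed), `TPG._list_network_portals` earlier in this file splits on the *last* colon only via `rsplit(":", 1)`. A standalone sketch of that parsing (the helper name is made up for illustration):

```python
def parse_portal_dir(name):
    # Split on the last ':' only, so bracketed IPv6 addresses survive.
    ip_address, port = name.rsplit(":", 1)
    return ip_address, int(port)

print(parse_portal_dir("192.168.0.5:3260"))  # ('192.168.0.5', 3260)
print(parse_portal_dir("[fe80::1]:3260"))    # ('[fe80::1]', 3260)
```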
        '''
        super(NetworkPortal, self).__init__()

        self._ip_address = str(ip_address)

        try:
            self._port = int(port)
        except ValueError:
            raise RTSLibError("Invalid port")

        if isinstance(parent_tpg, TPG):
            self._parent_tpg = parent_tpg
        else:
            raise RTSLibError("Invalid parent TPG")

        self._path = "%s/np/%s:%d" \
            % (self.parent_tpg.path, self.ip_address, self.port)

        try:
            self._create_in_cfs_ine(mode)
        except OSError as msg:
            raise RTSLibError(msg)

    def _get_ip_address(self):
        return self._ip_address

    def _get_port(self):
        return self._port

    def _get_parent_tpg(self):
        return self._parent_tpg

    def _get_iser(self):
        try:
            return bool(int(fread("%s/iser" % self.path)))
        except IOError:
            return False

    def _set_iser(self, boolean):
        path = "%s/iser" % self.path
        try:
            fwrite(path, str(int(boolean)))
        except IOError:
            # b/w compat: don't complain if iser entry is missing
            if os.path.isfile(path):
                raise RTSLibError("Cannot change iser")

    def _get_offload(self):
        try:
            # only offload at the moment is cxgbit
            return bool(int(fread("%s/cxgbit" % self.path)))
        except IOError:
            return False

    def _set_offload(self, boolean):
        path = "%s/cxgbit" % self.path
        try:
            fwrite(path, str(int(boolean)))
        except IOError:
            # b/w compat: don't complain if cxgbit entry is missing
            if os.path.isfile(path):
                raise RTSLibError("Cannot change offload")

    # NetworkPortal public stuff

    def delete(self):
        self.iser = False
        self.offload = False
        super(NetworkPortal, self).delete()

    parent_tpg = property(_get_parent_tpg,
            doc="Get the parent TPG object.")
    port = property(_get_port,
            doc="Get the NetworkPortal's TCP port as an int.")
    ip_address = property(_get_ip_address,
            doc="Get the NetworkPortal's IP address as a string.")
    iser = property(_get_iser, _set_iser,
            doc="Get or set a boolean value representing if this "
                + "NetworkPortal supports iSER.")
    offload = property(_get_offload, _set_offload,
            doc="Get or set a boolean value representing if this "
                + "NetworkPortal supports offload.")

    @classmethod
    def setup(cls, tpg_obj, p, err_func):
        if 'ip_address' not in p:
            err_func("'ip_address' field missing from a portal in TPG %d"
                     % tpg_obj.tag)
            return
        if 'port' not in p:
            err_func("'port' field missing from a portal in TPG %d"
                     % tpg_obj.tag)
            return

        try:
            np = cls(tpg_obj, p['ip_address'], p['port'])
            np.iser = p.get('iser', False)
            np.offload = p.get('offload', False)
        except (RTSLibError, KeyError) as e:
            err_func("Creating NetworkPortal object %s:%s failed: %s"
                     % (p['ip_address'], p['port'], e))

    def dump(self):
        d = super(NetworkPortal, self).dump()
        d['port'] = self.port
        d['ip_address'] = self.ip_address
        d['iser'] = self.iser
        d['offload'] = self.offload
        return d


class NodeACL(CFSNode):
    '''
    This is an interface to node ACLs in configFS.
    A NodeACL is identified by the initiator node wwn and parent TPG.
    '''

    # NodeACL private stuff

    def __repr__(self):
        return "<NodeACL %s>" % self.node_wwn

    def __init__(self, parent_tpg, node_wwn, mode='any'):
        '''
        @param parent_tpg: The parent TPG object.
        @type parent_tpg: TPG
        @param node_wwn: The wwn of the initiator node for which the ACL is
            created.
        @type node_wwn: string
        @param mode: An optional string containing the object creation mode:
            - I{'any'} means the configFS object will be either looked up
              or created.
            - I{'lookup'} means the object MUST already exist in configFS.
            - I{'create'} means the object must NOT already exist in configFS.
        @type mode: string
        @return: A NodeACL object.
        '''
        super(NodeACL, self).__init__()

        if isinstance(parent_tpg, TPG):
            self._parent_tpg = parent_tpg
        else:
            raise RTSLibError("Invalid parent TPG")

        fm = self.parent_tpg.parent_target.fabric_module
        self._node_wwn, self.wwn_type = normalize_wwn(fm.wwn_types, node_wwn)
        self._path = "%s/acls/%s" % (self.parent_tpg.path,
                                     fm.to_fabric_wwn(self.node_wwn))
        self._create_in_cfs_ine(mode)

    def _get_node_wwn(self):
        return self._node_wwn

    def _get_parent_tpg(self):
        return self._parent_tpg

    def _get_tcq_depth(self):
        self._check_self()
        path = "%s/cmdsn_depth" % self.path
        return fread(path)

    def _set_tcq_depth(self, depth):
        self._check_self()
        path = "%s/cmdsn_depth" % self.path
        try:
            fwrite(path, "%s" % depth)
        except IOError as msg:
            msg = msg[1]
            raise RTSLibError("Cannot set tcq_depth: %s" % str(msg))

    def _get_tag(self):
        self._check_self()
        try:
            tag = fread("%s/tag" % self.path)
            if tag:
                return tag
            return None
        except IOError:
            return None

    def _set_tag(self, tag_str):
        with ignored(IOError):
            if tag_str is None:
                fwrite("%s/tag" % self.path, 'NULL')
            else:
                fwrite("%s/tag" % self.path, tag_str)

    def _list_mapped_luns(self):
        self._check_self()
        for mapped_lun_dir in glob("%s/lun_*" % self.path):
            mapped_lun = int(os.path.basename(mapped_lun_dir).split("_")[1])
            yield MappedLUN(self, mapped_lun)

    def _get_session(self):
        try:
            lines = fread("%s/info" % self.path).splitlines()
        except IOError:
            return None

        if lines[0].startswith("No active"):
            return None

        session = {}

        for line in lines:
            if line.startswith("InitiatorName:"):
                session['parent_nodeacl'] = self
                session['connections'] = []
            elif line.startswith("InitiatorAlias:"):
                session['alias'] = line.split(":")[1].strip()
            elif line.startswith("LIO Session ID:"):
                session['id'] = int(line.split(":")[1].split()[0])
                session['type'] = \
                    line.split("SessionType:")[1].split()[0].strip()
            elif "TARG_SESS_STATE_" in line:
                session['state'] = line.split("_STATE_")[1].split()[0]
            elif "TARG_CONN_STATE_" in line:
                cid = int(line.split(":")[1].split()[0])
                cstate = line.split("_STATE_")[1].split()[0]
                session['connections'].append(dict(cid=cid, cstate=cstate))
            elif "Address" in line:
                session['connections'][-1]['address'] = line.split()[1]
                session['connections'][-1]['transport'] = line.split()[2]

        return session

    # NodeACL public stuff

    def has_feature(self, feature):
        '''
        Whether or not this NodeACL has a certain feature.
        '''
        return self.parent_tpg.has_feature(feature)

    def delete(self):
        '''
        Delete the NodeACL, including all MappedLUN objects.
        If the underlying configFS object does not exist, this method does
        nothing.
        '''
        self._check_self()
        for mapped_lun in self.mapped_luns:
            mapped_lun.delete()
        super(NodeACL, self).delete()

    def mapped_lun(self, mapped_lun, tpg_lun=None, write_protect=None):
        '''
        Same as MappedLUN() but without the parent_nodeacl parameter.
        '''
        self._check_self()
        return MappedLUN(self, mapped_lun=mapped_lun, tpg_lun=tpg_lun,
                         write_protect=write_protect)

    tcq_depth = property(_get_tcq_depth, _set_tcq_depth,
            doc="Set or get the TCQ depth for the initiator "
                + "sessions matching this NodeACL.")
    tag = property(_get_tag, _set_tag,
            doc="Set or get the NodeACL tag. If not supported, return None")
    parent_tpg = property(_get_parent_tpg,
            doc="Get the parent TPG object.")
    node_wwn = property(_get_node_wwn,
            doc="Get the node wwn.")
    mapped_luns = property(_list_mapped_luns,
            doc="Get the list of all MappedLUN objects in this NodeACL.")
    session = property(_get_session,
            doc="Gives a snapshot of the current session or C{None}")
    chap_userid = property(
            partial(_get_auth_attr, attribute='auth/userid'),
            partial(_set_auth_attr, attribute='auth/userid'),
            doc="Set or get the initiator CHAP auth userid.")
    chap_password = property(
            partial(_get_auth_attr, attribute='auth/password'),
            partial(_set_auth_attr, attribute='auth/password'),
            doc="Set or get the initiator CHAP auth password.")
    chap_mutual_userid = property(
            partial(_get_auth_attr, attribute='auth/userid_mutual'),
            partial(_set_auth_attr, attribute='auth/userid_mutual'),
            doc="Set or get the mutual CHAP auth userid.")
    chap_mutual_password = property(
            partial(_get_auth_attr, attribute='auth/password_mutual'),
            partial(_set_auth_attr, attribute='auth/password_mutual'),
            doc="Set or get the mutual CHAP auth password.")

    def _get_authenticate_target(self):
        self._check_self()
        path = "%s/auth/authenticate_target" % self.path
        return bool(int(fread(path)))

    authenticate_target = property(_get_authenticate_target,
            doc="Get the boolean authenticate target flag.")

    @classmethod
    def setup(cls, tpg_obj, acl, err_func):
        if 'node_wwn' not in acl:
            err_func("'node_wwn' missing in node_acl")
            return
        try:
            acl_obj = cls(tpg_obj, acl['node_wwn'])
        except RTSLibError as e:
            err_func("Error when creating NodeACL for %s: %s"
                     % (acl['node_wwn'], e))
            return
        set_attributes(acl_obj, acl.get('attributes', {}), err_func)
        for mlun in acl.get('mapped_luns', []):
            MappedLUN.setup(tpg_obj, acl_obj, mlun, err_func)

        dict_remove(acl, ('attributes', 'mapped_luns', 'node_wwn'))
        for name, value in six.iteritems(acl):
            if value:
                try:
                    setattr(acl_obj, name, value)
                except:
                    err_func("Could not set nodeacl %s attribute '%s'"
                             % (acl['node_wwn'], name))

    def dump(self):
        d = super(NodeACL, self).dump()
        d['node_wwn'] = self.node_wwn
        d['mapped_luns'] = [lun.dump() for lun in self.mapped_luns]
        if self.tag:
            d['tag'] = self.tag
        if self.has_feature("auth"):
            for attr in auth_params:
                val = getattr(self, "chap_" + attr, None)
                if val:
                    d["chap_" + attr] = val
        return d


class MappedLUN(CFSNode):
    '''
    This is an interface to RTS Target Mapped LUNs.
    A MappedLUN is a mapping of a TPG LUN to a specific initiator node, and
    is part of a NodeACL. It allows the initiator to actually access the TPG
    LUN if ACLs are enabled for the TPG. The initial TPG LUN will then be
    seen by the initiator node as the MappedLUN.
    '''

    MAX_LUN = 255

    # MappedLUN private stuff

    def __repr__(self):
        return "<MappedLUN %s lun %d -> tpg%d lun %d>" % \
            (self.parent_nodeacl.node_wwn, self.mapped_lun,
             self.parent_nodeacl.parent_tpg.tag, self.tpg_lun.lun)

    def __init__(self, parent_nodeacl, mapped_lun,
                 tpg_lun=None, write_protect=None, alias=None):
        '''
        A MappedLUN object can be instantiated in two ways:
            - B{Creation mode}: If I{tpg_lun} is specified, the underlying
              configFS object will be created with that parameter. No
              MappedLUN with the same I{mapped_lun} index can pre-exist in
              the parent NodeACL in that mode, or instantiation will fail.
            - B{Lookup mode}: If I{tpg_lun} is not set, then the MappedLUN
              will be bound to the existing configFS MappedLUN object of the
              parent NodeACL having the specified I{mapped_lun} index. The
              underlying configFS object must already exist in that mode.

        @param mapped_lun: The mapped LUN index.
        @type mapped_lun: int
        @param tpg_lun: The TPG LUN index to map, or directly a LUN object
            that belong to the same TPG as the parent NodeACL.
        @type tpg_lun: int or LUN
        @param write_protect: The write-protect flag value, defaults to False
            (write-protection disabled).
        @type write_protect: bool
        '''
        super(MappedLUN, self).__init__()

        if not isinstance(parent_nodeacl, NodeACL):
            raise RTSLibError("The parent_nodeacl parameter must be "
                              + "a NodeACL object")
        else:
            self._parent_nodeacl = parent_nodeacl
            if not parent_nodeacl.exists:
                raise RTSLibError("The parent_nodeacl does not exist")

        try:
            self._mapped_lun = int(mapped_lun)
        except ValueError:
            raise RTSLibError("The mapped_lun parameter must be an "
                              + "integer value")
        if self._mapped_lun < 0 or self._mapped_lun > self.MAX_LUN:
            raise RTSLibError("Mapped LUN must be 0 to %d" % self.MAX_LUN)

        self._path = "%s/lun_%d" % (self.parent_nodeacl.path, self.mapped_lun)

        if tpg_lun is None and write_protect is not None:
            raise RTSLibError("The write_protect parameter has no "
                              + "meaning without the tpg_lun parameter")

        if tpg_lun is not None:
            self._create_in_cfs_ine('create')
            try:
                self._configure(tpg_lun, write_protect, alias)
            except:
                self.delete()
                raise
        else:
            self._create_in_cfs_ine('lookup')

    def _configure(self, tpg_lun, write_protect, alias):
        self._check_self()

        if isinstance(tpg_lun, LUN):
            tpg_lun = tpg_lun.lun
        else:
            try:
                tpg_lun = int(tpg_lun)
            except ValueError:
                raise RTSLibError("The tpg_lun must be either an "
                                  + "integer or a LUN object")

        # Check that the tpg_lun exists in the TPG
        for lun in self.parent_nodeacl.parent_tpg.luns:
            if lun.lun == tpg_lun:
                tpg_lun = lun
                break
        if not (isinstance(tpg_lun, LUN) and tpg_lun):
            raise RTSLibError("LUN %s does not exist in this TPG"
                              % str(tpg_lun))

        if not alias:
            alias = str(uuid.uuid4())[-10:]
        os.symlink(tpg_lun.path, "%s/%s" % (self.path, alias))

        try:
            self.write_protect = int(write_protect) > 0
        except:
            self.write_protect = False

    def _get_alias(self):
        self._check_self()
        for path in os.listdir(self.path):
            if os.path.islink("%s/%s" % (self.path, path)):
                return os.path.basename(path)

        raise RTSLibBrokenLink("Broken LUN in configFS, no storage object")

    def _get_mapped_lun(self):
        return self._mapped_lun

    def _get_parent_nodeacl(self):
        return self._parent_nodeacl

    def _set_write_protect(self, write_protect):
        self._check_self()
        path = "%s/write_protect" % self.path
        if write_protect:
            fwrite(path, "1")
        else:
            fwrite(path, "0")

    def _get_write_protect(self):
        self._check_self()
        path = "%s/write_protect" % self.path
        return bool(int(fread(path)))

    def _get_tpg_lun(self):
        self._check_self()
        path = os.path.realpath("%s/%s" % (self.path, self.alias))
        for lun in self.parent_nodeacl.parent_tpg.luns:
            if lun.path == path:
                return lun

        raise RTSLibBrokenLink("Broken MappedLUN, no TPG LUN found")

    def _get_node_wwn(self):
        self._check_self()
        return self.parent_nodeacl.node_wwn

    # MappedLUN public stuff

    def delete(self):
        '''
        Delete the MappedLUN.
        '''
        self._check_self()
        try:
            lun_link = "%s/%s" % (self.path, self.alias)
        except RTSLibBrokenLink:
            pass
        else:
            if os.path.islink(lun_link):
                os.unlink(lun_link)
        super(MappedLUN, self).delete()

    mapped_lun = property(_get_mapped_lun,
            doc="Get the integer MappedLUN mapped_lun index.")
    parent_nodeacl = property(_get_parent_nodeacl,
            doc="Get the parent NodeACL object.")
    write_protect = property(_get_write_protect, _set_write_protect,
            doc="Get or set the boolean write protection.")
    tpg_lun = property(_get_tpg_lun,
            doc="Get the TPG LUN object the MappedLUN is pointing at.")
    node_wwn = property(_get_node_wwn,
            doc="Get the wwn of the node for which the TPG LUN is mapped.")
    alias = property(_get_alias,
            doc="Get the MappedLUN alias.")

    @classmethod
    def setup(cls, tpg_obj, acl_obj, mlun, err_func):
        if 'tpg_lun' not in mlun:
            err_func("'tpg_lun' not in a mapped_lun")
            return
        if 'index' not in mlun:
            err_func("'index' not in a mapped_lun")
            return

        # Mapped lun needs to correspond with already-created
        # TPG lun
        for lun in tpg_obj.luns:
            if lun.lun == mlun['tpg_lun']:
                tpg_lun_obj = lun
                break
        else:
            err_func("Could not find matching TPG LUN %d for MappedLUN %s"
                     % (mlun['tpg_lun'], mlun['index']))
            return

        try:
            mlun_obj = cls(acl_obj, mlun['index'], tpg_lun_obj,
                           mlun.get('write_protect'), mlun.get('alias'))
            mlun_obj.tag = mlun.get("tag", None)
        except (RTSLibError, KeyError):
            err_func("Creating MappedLUN object %d failed" % mlun['index'])

    def dump(self):
        d = super(MappedLUN, self).dump()
        d['write_protect'] = self.write_protect
        d['index'] = self.mapped_lun
        d['tpg_lun'] = self.tpg_lun.lun
        d['alias'] = self.alias
        return d


class Group(object):
    '''
    An abstract base class akin to CFSNode, but for classes that emulate a
    higher-level group object across the actual NodeACL configfs structure.
    '''
    def __init__(self, members_func):
        '''
        members_func is a function that takes a self argument and returns an
        iterator of the objects that the derived Group class is grouping.
        '''
        self._mem_func = members_func

    def _get_first_member(self):
        try:
            return next(self._mem_func(self))
        except StopIteration:
            raise IndexError("Group is empty")

    def _get_prop(self, prop):
        '''
        Helper fn to use with partial() to support getting a property value
        from the first member of the group. (All members of the group should
        be identical.)
        '''
        return getattr(self._get_first_member(), prop)

    def _set_prop(self, value, prop):
        '''
        Helper fn to use with partial() to support setting a property value
        in all members of the group.

        Caution: Arguments reversed! This is so partial() can be used on
        property name.
        '''
        for mem in self._mem_func(self):
            setattr(mem, prop, value)

    def list_attributes(self, writable=None, readable=None):
        return self._get_first_member().list_attributes(writable, readable)

    def list_parameters(self, writable=None, readable=None):
        return self._get_first_member().list_parameters(writable, readable)

    def set_attribute(self, attribute, value):
        for obj in self._mem_func(self):
            obj.set_attribute(attribute, value)

    def set_parameter(self, parameter, value):
        for obj in self._mem_func(self):
            obj.set_parameter(parameter, value)

    def get_attribute(self, attribute):
        return self._get_first_member().get_attribute(attribute)

    def get_parameter(self, parameter):
        return self._get_first_member().get_parameter(parameter)

    def delete(self):
        '''
        Delete all members of the group.
        '''
        for mem in self._mem_func(self):
            mem.delete()

    @property
    def exists(self):
        return any(self._mem_func(self))


def _check_group_name(name):
    # Since all WWNs have a '.' in them, let's avoid confusion.
    if '.' in name:
        raise RTSLibError("'.' not permitted in group names.")


class NodeACLGroup(Group):
    '''
    Allow a group of NodeACLs that share a tag to be managed collectively.
    '''
    def __repr__(self):
        return "<NodeACLGroup %s>" % self.name

    def __init__(self, parent_tpg, name):
        super(NodeACLGroup, self).__init__(NodeACLGroup._node_acls.fget)
        _check_group_name(name)
        self._name = name
        self._parent_tpg = parent_tpg

    def _get_name(self):
        return self._name

    def _set_name(self, name):
        _check_group_name(name)
        for na in self._node_acls:
            na.tag = name
        self._name = name

    @property
    def parent_tpg(self):
        '''
        Get the parent TPG object.
        '''
        return self._parent_tpg

    def add_acl(self, node_wwn):
        '''
        Add a WWN to the NodeACLGroup. If a NodeACL already exists for this
        WWN, its configuration will be changed to match the NodeACLGroup,
        except for its auth parameters, which can vary among group members.
        @param node_wwn: An initiator WWN
        @type node_wwn: string
        '''
        nacl = NodeACL(self.parent_tpg, node_wwn)

        if nacl in self._node_acls:
            return

        # if joining a group, take its config
        try:
            model = next(self._node_acls)
        except StopIteration:
            pass
        else:
            for mlun in nacl.mapped_luns:
                mlun.delete()

            for mlun in model.mapped_luns:
                MappedLUN(nacl, mlun.mapped_lun, mlun.tpg_lun,
                          mlun.write_protect)

            for item in model.list_attributes(writable=True):
                nacl.set_attribute(item, model.get_attribute(item))
            for item in model.list_parameters(writable=True):
                nacl.set_parameter(item, model.get_parameter(item))
        finally:
            nacl.tag = self.name

    def remove_acl(self, node_wwn):
        '''
        Remove a WWN from the NodeACLGroup.

        @param node_wwn: An initiator WWN
        @type node_wwn: string
        '''
        nacl = NodeACL(self.parent_tpg, node_wwn, mode='lookup')

        nacl.delete()

    @property
    def _node_acls(self):
        '''
        Gives access to the underlying NodeACLs within this group.
        '''
        for na in self.parent_tpg.node_acls:
            if na.tag == self.name:
                yield na

    @property
    def wwns(self):
        '''
        Give the Node WWNs of members of this group.
        '''
        return (na.node_wwn for na in self._node_acls)

    def has_feature(self, feature):
        '''
        Whether or not this NodeACL has a certain feature.
        '''
        return self._parent_tpg.has_feature(feature)

    @property
    def sessions(self):
        '''
        Yields any current sessions.
        '''
        for na in self._node_acls:
            session = na.session
            if session:
                yield session

    def mapped_lun_group(self, mapped_lun, tpg_lun=None, write_protect=None):
        '''
        Add a mapped lun to all group members.
        '''
        return MappedLUNGroup(self, mapped_lun=mapped_lun, tpg_lun=tpg_lun,
                              write_protect=write_protect)

    @property
    def mapped_lun_groups(self):
        '''
        Generates all MappedLUNGroup objects in this NodeACLGroup.
        '''
        try:
            first = self._get_first_member()
        except IndexError:
            return

        for mlun in first.mapped_luns:
            yield MappedLUNGroup(self, mlun.mapped_lun)

    name = property(_get_name, _set_name,
            doc="Get/set NodeACLGroup name.")

    def _get_chap(self, name):
        for na in self._node_acls:
            yield (na.node_wwn, getattr(na, "chap_" + name))

    def _set_chap(self, name, value, wwn):
        for na in self._node_acls:
            if not wwn:
                setattr(na, "chap_" + name, value)
            elif wwn == na.node_wwn:
                setattr(na, "chap_" + name, value)
                break

    def get_userids(self):
        '''
        Returns a (wwn, userid) tuple for each member of the group.
        '''
        return self._get_chap(name="userid")

    def set_userids(self, value, wwn=None):
        '''
        If wwn, set the userid for just that wwn, otherwise set it
        for all group members.
        '''
        return self._set_chap("userid", value, wwn)

    def get_passwords(self):
        '''
        Returns a (wwn, password) tuple for each member of the group.
        '''
        return self._get_chap(name="password")

    def set_passwords(self, value, wwn=None):
        '''
        If wwn, set the password for just that wwn, otherwise set it
        for all group members.
        '''
        return self._set_chap("password", value, wwn)

    def get_mutual_userids(self):
        '''
        Returns a (wwn, mutual_userid) tuple for each member of the group.
        '''
        return self._get_chap(name="mutual_userid")

    def set_mutual_userids(self, value, wwn=None):
        '''
        If wwn, set the mutual_userid for just that wwn, otherwise set it
        for all group members.
        '''
        return self._set_chap("mutual_userid", value, wwn)

    def get_mutual_passwords(self):
        '''
        Returns a (wwn, mutual_password) tuple for each member of the group.
        '''
        return self._get_chap(name="mutual_password")

    def set_mutual_passwords(self, value, wwn=None):
        '''
        If wwn, set the mutual_password for just that wwn, otherwise set it
        for all group members.
        '''
        return self._set_chap("mutual_password", value, wwn)

    tcq_depth = property(partial(Group._get_prop, prop="tcq_depth"),
            partial(Group._set_prop, prop="tcq_depth"),
            doc="Set or get the TCQ depth for the initiator "
                + "sessions matching this NodeACLGroup")
    authenticate_target = property(
            partial(Group._get_prop, prop="authenticate_target"),
            doc="Get the boolean authenticate target flag.")


class MappedLUNGroup(Group):
    '''
    Used with NodeACLGroup, this aggregates all MappedLUNs with the same LUN
    so that it can be configured across all members of the NodeACLGroup.
    '''
    def __repr__(self):
        return "<MappedLUNGroup %s:%d>" % (self._nag.name, self._mapped_lun)

    def __init__(self, nodeaclgroup, mapped_lun, *args, **kwargs):
        super(MappedLUNGroup, self).__init__(MappedLUNGroup._mapped_luns.fget)
        self._nag = nodeaclgroup
        self._mapped_lun = mapped_lun
        for na in self._nag._node_acls:
            MappedLUN(na, mapped_lun=mapped_lun, *args, **kwargs)

    @property
    def _mapped_luns(self):
        for na in self._nag._node_acls:
            for mlun in na.mapped_luns:
                if mlun.mapped_lun == self.mapped_lun:
                    yield mlun

    @property
    def mapped_lun(self):
        '''
        Get the integer MappedLUN mapped_lun index.
        '''
        return self._mapped_lun

    @property
    def parent_nodeaclgroup(self):
        '''
        Get the parent NodeACLGroup object.
        '''
        return self._nag

    write_protect = property(partial(Group._get_prop, prop="write_protect"),
            partial(Group._set_prop, prop="write_protect"),
            doc="Get or set the boolean write protection.")
    tpg_lun = property(partial(Group._get_prop, prop="tpg_lun"),
            doc="Get the TPG LUN object the MappedLUN is pointing at.")


def _test():
    from doctest import testmod
    testmod()

if __name__ == "__main__":
    _test()


rtslib-fb-2.1.74/rtslib/tcm.py

'''
Implements the RTS Target backstore and storage object classes.

This file is part of RTSLib.
Copyright (c) 2011-2013 by Datera, Inc
Copyright (c) 2011-2014 by Red Hat, Inc.
Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''

import os
import stat
import re
import glob
import resource

from six.moves import range

from .alua import ALUATargetPortGroup
from .node import CFSNode
from .utils import fread, fwrite, generate_wwn, RTSLibError, RTSLibNotInCFS
from .utils import convert_scsi_path_to_hctl, convert_scsi_hctl_to_path
from .utils import is_dev_in_use, get_blockdev_type
from .utils import get_size_for_blk_dev, get_size_for_disk_name


def storage_object_get_alua_support_attr(so):
    '''
    Helper function that can be called by passthrough type of backends.
    '''
    try:
        if int(so.get_attribute("alua_support")) == 1:
            return True
    except RTSLibError:
        pass
    # Default to false because older kernels will crash when
    # reading/writing to some ALUA files when ALUA was not
    # fully supported by pscsi and tcmu.
    return False


class StorageObject(CFSNode):
    '''
    This is an interface to storage objects in configFS. A StorageObject is
    identified by its backstore and its name.
    '''

    # StorageObject private stuff

    def __repr__(self):
        return "<%s %s/%s>" % (self.__class__.__name__, self.plugin, self.name)

    def __init__(self, name, mode, index=None):
        super(StorageObject, self).__init__()
        if "/" in name or " " in name or "\t" in name or "\n" in name:
            raise RTSLibError("A storage object's name cannot contain "
                              " /, newline or spaces/tabs")
        else:
            self._name = name
        self._backstore = _Backstore(name, type(self), mode, index)
        self._path = "%s/%s" % (self._backstore.path, self.name)
        self.plugin = self._backstore.plugin
        try:
            self._create_in_cfs_ine(mode)
        except:
            self._backstore.delete()
            raise

    def _configure(self, wwn=None):
        if not wwn:
            wwn = generate_wwn('unit_serial')
        self.wwn = wwn
        self._config_pr_aptpl()

    def __eq__(self, other):
        return self.plugin == other.plugin and self.name == other.name

    def __ne__(self, other):
        return not self == other

    def _config_pr_aptpl(self):
        """
        LIO actually *writes* pr aptpl info to the filesystem, so we need to
        read it in and squirt it back into configfs when we configure the
        storage object. BLEH.
        """
        from .root import RTSRoot
        aptpl_dir = "%s/pr" % RTSRoot().dbroot

        try:
            lines = fread("%s/aptpl_%s" % (aptpl_dir, self.wwn)).split()
        except:
            return

        if not lines[0].startswith("PR_REG_START:"):
            return

        reservations = []
        for line in lines:
            if line.startswith("PR_REG_START:"):
                res_list = []
            elif line.startswith("PR_REG_END:"):
                reservations.append(res_list)
            else:
                res_list.append(line.strip())

        for res in reservations:
            fwrite(self.path + "/pr/res_aptpl_metadata", ",".join(res))

    @classmethod
    def all(cls):
        for so_dir in glob.glob("%s/core/*_*/*" % cls.configfs_dir):
            if os.path.isdir(so_dir):
                yield cls.so_from_path(so_dir)

    @classmethod
    def so_from_path(cls, path):
        '''
        Build a StorageObject of the correct type from a configfs path.
        '''
        so_name = os.path.basename(path)
        so_type, so_index = path.split("/")[-2].rsplit("_", 1)
        return so_mapping[so_type](so_name, index=so_index)

    def _get_wwn(self):
        self._check_self()
        if self.is_configured():
            path = "%s/wwn/vpd_unit_serial" % self.path
            return fread(path).partition(":")[2].strip()
        else:
            raise RTSLibError("Cannot read a T10 WWN Unit Serial from "
                              + "an unconfigured StorageObject")

    def _set_wwn(self, wwn):
        self._check_self()
        if self.is_configured():
            path = "%s/wwn/vpd_unit_serial" % self.path
            fwrite(path, "%s\n" % wwn)
        else:
            raise RTSLibError("Cannot write a T10 WWN Unit Serial to "
                              + "an unconfigured StorageObject")

    def _set_udev_path(self, udev_path):
        self._check_self()
        path = "%s/udev_path" % self.path
        fwrite(path, "%s" % udev_path)

    def _get_udev_path(self):
        self._check_self()
        path = "%s/udev_path" % self.path
        udev_path = fread(path)
        if not udev_path and self._backstore.plugin == "fileio":
            udev_path = self._parse_info('File').strip()
        return udev_path

    def _get_version(self):
        return self._backstore.version

    def _get_name(self):
        return self._name

    def _enable(self):
        self._check_self()
        path = "%s/enable" % self.path
        fwrite(path, "1\n")

    def _control(self, command):
        self._check_self()
        path = "%s/control" % self.path
        fwrite(path, "%s" % str(command).strip())

    def _write_fd(self, contents):
        self._check_self()
        path = "%s/fd" % self.path
        fwrite(path, "%s" % str(contents).strip())

    def _parse_info(self, key):
        self._check_self()
        info = fread("%s/info" % self.path)
        try:
            return re.search(".*%s: ([^: ]+).*" % key,
                             ' '.join(info.split())).group(1)
        except AttributeError:
            return None

    def _get_status(self):
        self._check_self()
        return self._parse_info('Status').lower()

    def _gen_attached_luns(self):
        '''
        Fast scan of luns attached to a storage object. This is an order of
        magnitude faster than using root.luns and matching path on them.
        '''
        isdir = os.path.isdir
        islink = os.path.islink
        listdir = os.listdir
        realpath = os.path.realpath
        path = self.path
        from .root import RTSRoot
        from .target import LUN, TPG, Target
        from .fabric import target_names_excludes

        for base, fm in ((fm.path, fm) for fm in RTSRoot().fabric_modules
                         if fm.exists):
            for tgt_dir in listdir(base):
                if tgt_dir not in target_names_excludes:
                    tpgts_base = "%s/%s" % (base, tgt_dir)
                    for tpgt_dir in listdir(tpgts_base):
                        luns_base = "%s/%s/lun" % (tpgts_base, tpgt_dir)
                        if isdir(luns_base):
                            for lun_dir in listdir(luns_base):
                                links_base = "%s/%s" % (luns_base, lun_dir)
                                for lun_file in listdir(links_base):
                                    link = "%s/%s" % (links_base, lun_file)
                                    if islink(link) and realpath(link) == path:
                                        val = (tpgt_dir + "_" + lun_dir)
                                        val = val.split('_')
                                        target = Target(fm, tgt_dir)
                                        yield LUN(TPG(target, val[1]), val[3])

    def _list_attached_luns(self):
        '''
        Generates all luns attached to a storage object.
        '''
        self._check_self()
        for lun in self._gen_attached_luns():
            yield lun

    def _list_alua_tpgs(self):
        '''
        Generate all ALUA groups attach to a storage object.
        '''
        self._check_self()
        for tpg in os.listdir("%s/alua" % self.path):
            if self.alua_supported:
                yield ALUATargetPortGroup(self, tpg)

    def _get_alua_supported(self):
        '''
        Children should override if the backend did not always support ALUA
        '''
        self._check_self()
        return True

    # StorageObject public stuff

    def delete(self, save=False):
        '''
        Recursively deletes a StorageObject object.
        This will delete all attached LUNs currently using the StorageObject
        object, and then the StorageObject itself. The underlying file and
        block storages will not be touched, but all ramdisk data will be lost.
        '''
        self._check_self()

        for alua_tpg in self._list_alua_tpgs():
            if alua_tpg.name != 'default_tg_pt_gp':
                alua_tpg.delete()

        # If we are called after a configure error, we can skip this
        if self.is_configured():
            for lun in self._gen_attached_luns():
                if self.status != 'activated':
                    break
                else:
                    lun.delete()

        super(StorageObject, self).delete()
        self._backstore.delete()

        if save:
            from .root import RTSRoot, default_save_file
            RTSRoot().save_to_file(default_save_file,
                                   '/backstores/' + self.plugin
                                   + '/' + self._name)

    def is_configured(self):
        '''
        @return: True if the StorageObject is configured, else returns False
        '''
        self._check_self()
        path = "%s/enable" % self.path
        try:
            configured = fread(path)
        except IOError:
            # no 'enable' attribute means the object is always enabled
            return True

        return bool(int(configured))

    version = property(_get_version,
            doc="Get the version of the StorageObject's backstore")
    name = property(_get_name,
            doc="Get the StorageObject name as a string.")
    udev_path = property(_get_udev_path,
            doc="Get the StorageObject udev_path as a string.")
    wwn = property(_get_wwn, _set_wwn,
            doc="Get or set the StorageObject T10 WWN Serial as a string.")
    status = property(_get_status,
            doc="Get the storage object status, depending on whether or "
                + "not it is used by any LUN")
    attached_luns = property(_list_attached_luns,
            doc="Get the list of all LUN objects attached.")
    alua_tpgs = property(_list_alua_tpgs,
            doc="Get list of ALUA Target Port Groups attached.")
    alua_supported = property(_get_alua_supported,
            doc="Returns true if ALUA can be setup. False if not supported.")

    def dump(self):
        d = super(StorageObject, self).dump()
        d['name'] = self.name
        d['plugin'] = self.plugin
        d['alua_tpgs'] = [tpg.dump() for tpg in self.alua_tpgs]
        return d


class PSCSIStorageObject(StorageObject):
    '''
    An interface to configFS storage objects for pscsi backstore.
    '''

    # PSCSIStorageObject private stuff

    def __init__(self, name, dev=None, index=None):
        '''
        A PSCSIStorageObject can be instantiated in two ways:
            - B{Creation mode}: If I{dev} is specified, the underlying
              configFS object will be created with that parameter. No
              PSCSIStorageObject with the same I{name} can pre-exist in the
              parent PSCSIBackstore in that mode, or instantiation will fail.
            - B{Lookup mode}: If I{dev} is not set, then the
              PSCSIStorageObject will be bound to the existing configFS
              object in the parent PSCSIBackstore having the specified
              I{name}. The underlying configFS object must already exist in
              that mode, or instantiation will fail.

        @param name: The name of the PSCSIStorageObject.
        @type name: string
        @param dev: You have two choices:
            - Use the SCSI id of the device: I{dev="H:C:T:L"}.
            - Use the path to the SCSI device: I{dev="/path/to/dev"}.
        @type dev: string
        @return: A PSCSIStorageObject object.
        '''
        if dev is not None:
            super(PSCSIStorageObject, self).__init__(name, 'create', index)
            try:
                self._configure(dev)
            except:
                self.delete()
                raise
        else:
            super(PSCSIStorageObject, self).__init__(name, 'lookup', index)

    def _configure(self, dev):
        self._check_self()

        # Use H:C:T:L format or use the path given by the user.
try: # assume 'dev' is the path, try to get h:c:t:l values (hostid, channelid, targetid, lunid) = \ convert_scsi_path_to_hctl(dev) udev_path = dev.strip() except: # Oops, maybe 'dev' is in h:c:t:l format, try to get udev_path try: (hostid, channelid, targetid, lunid) = dev.split(':') hostid = int(hostid) channelid = int(channelid) targetid = int(targetid) lunid = int(lunid) except ValueError: raise RTSLibError("Cannot find SCSI device by " + "path, and dev " + "parameter not in H:C:T:L " + "format: %s" % dev) udev_path = convert_scsi_hctl_to_path(hostid, channelid, targetid, lunid) # -- now have all 5 values or have errored out -- if is_dev_in_use(udev_path): raise RTSLibError("Cannot configure StorageObject because " + "device %s (SCSI %d:%d:%d:%d) " % (udev_path, hostid, channelid, targetid, lunid) + "is already in use") self._control("scsi_host_id=%d," % hostid \ + "scsi_channel_id=%d," % channelid \ + "scsi_target_id=%d," % targetid \ + "scsi_lun_id=%d" % lunid) self._set_udev_path(udev_path) self._enable() super(PSCSIStorageObject, self)._configure() def _set_wwn(self, wwn): # pscsi doesn't support setting wwn pass def _get_model(self): self._check_self() info = fread("%s/info" % self.path) return str(re.search(".*Model:(.*)Rev:", ' '.join(info.split())).group(1)).strip() def _get_vendor(self): self._check_self() info = fread("%s/info" % self.path) return str(re.search(".*Vendor:(.*)Model:", ' '.join(info.split())).group(1)).strip() def _get_revision(self): self._check_self() return self._parse_info('Rev') def _get_channel_id(self): self._check_self() return int(self._parse_info('Channel ID')) def _get_target_id(self): self._check_self() return int(self._parse_info('Target ID')) def _get_lun(self): self._check_self() return int(self._parse_info('LUN')) def _get_host_id(self): self._check_self() return int(self._parse_info('Host ID')) def _get_alua_supported(self): self._check_self() return storage_object_get_alua_support_attr(self) # PSCSIStorageObject public 
stuff wwn = property(StorageObject._get_wwn, _set_wwn, doc="Get the StorageObject T10 WWN Unit Serial as a string." + " You cannot set it for pscsi-backed StorageObjects.") model = property(_get_model, doc="Get the SCSI device model string") vendor = property(_get_vendor, doc="Get the SCSI device vendor string") revision = property(_get_revision, doc="Get the SCSI device revision string") host_id = property(_get_host_id, doc="Get the SCSI device host id") channel_id = property(_get_channel_id, doc="Get the SCSI device channel id") target_id = property(_get_target_id, doc="Get the SCSI device target id") lun = property(_get_lun, doc="Get the SCSI device LUN") alua_supported = property(_get_alua_supported, doc="Returns true if ALUA can be setup. False if not supported.") def dump(self): d = super(PSCSIStorageObject, self).dump() d['dev'] = self.udev_path return d class RDMCPStorageObject(StorageObject): ''' An interface to configFS storage objects for rd_mcp backstore. ''' # RDMCPStorageObject private stuff def __init__(self, name, size=None, wwn=None, nullio=False, index=None): ''' A RDMCPStorageObject can be instantiated in two ways: - B{Creation mode}: If I{size} is specified, the underlying configFS object will be created with that parameter. No RDMCPStorageObject with the same I{name} can pre-exist in the parent Backstore in that mode, or instantiation will fail. - B{Lookup mode}: If I{size} is not set, then the RDMCPStorageObject will be bound to the existing configFS object in the parent Backstore having the specified I{name}. The underlying configFS object must already exist in that mode, or instantiation will fail. @param name: The name of the RDMCPStorageObject. @type name: string @param size: The size of the ramdrive to create, in bytes. @type size: int @param wwn: T10 WWN Unit Serial, will generate if None @type wwn: string @param nullio: If rd should be created w/o backing page store. @type nullio: boolean @return: A RDMCPStorageObject object. 
        '''
        if size is not None:
            super(RDMCPStorageObject, self).__init__(name, 'create', index)
            try:
                self._configure(size, wwn, nullio)
            except:
                self.delete()
                raise
        else:
            super(RDMCPStorageObject, self).__init__(name, 'lookup', index)

    def _configure(self, size, wwn, nullio):
        self._check_self()

        # convert to pages
        size = round(float(size)/resource.getpagesize())
        if size == 0:
            size = 1

        self._control("rd_pages=%d" % size)
        if nullio:
            self._control("rd_nullio=1")
        self._enable()

        super(RDMCPStorageObject, self)._configure(wwn)

    def _get_page_size(self):
        self._check_self()
        return int(self._parse_info("PAGES/PAGE_SIZE").split('*')[1])

    def _get_pages(self):
        self._check_self()
        return int(self._parse_info("PAGES/PAGE_SIZE").split('*')[0])

    def _get_size(self):
        self._check_self()
        size = self._get_page_size() * self._get_pages()
        return size

    def _get_nullio(self):
        self._check_self()
        # nullio not present before 3.10
        try:
            return bool(int(self._parse_info('nullio')))
        except AttributeError:
            return False

    # RDMCPStorageObject public stuff

    page_size = property(_get_page_size,
            doc="Get the ramdisk page size.")
    pages = property(_get_pages,
            doc="Get the ramdisk number of pages.")
    size = property(_get_size,
            doc="Get the ramdisk size in bytes.")
    nullio = property(_get_nullio,
            doc="Get the nullio status.")

    def dump(self):
        d = super(RDMCPStorageObject, self).dump()
        d['wwn'] = self.wwn
        d['size'] = self.size
        # only dump nullio if enabled
        if self.nullio:
            d['nullio'] = True
        return d


class FileIOStorageObject(StorageObject):
    '''
    An interface to configFS storage objects for fileio backstore.
    '''

    # FileIOStorageObject private stuff

    def __init__(self, name, dev=None, size=None, wwn=None,
                 write_back=False, aio=False, index=None):
        '''
        A FileIOStorageObject can be instantiated in two ways:
            - B{Creation mode}: If I{dev} and I{size} are specified, the
              underlying configFS object will be created with those parameters.
              No FileIOStorageObject with the same I{name} can pre-exist in the
              parent Backstore in that mode, or instantiation will fail.
            - B{Lookup mode}: If I{dev} and I{size} are not set, then the
              FileIOStorageObject will be bound to the existing configFS object
              in the parent Backstore having the specified I{name}. The
              underlying configFS object must already exist in that mode, or
              instantiation will fail.
        @param name: The name of the FileIOStorageObject.
        @type name: string
        @param dev: The path to the backend file or block device to be used.
            - Examples: I{dev="/dev/sda"}, I{dev="/tmp/myfile"}
            - The only block device type that is accepted is I{TYPE_DISK}, or
              partitions of a I{TYPE_DISK} device. For other device types, use
              pscsi.
        @type dev: string
        @param size: Size of the object, if not a block device
        @type size: int
        @param wwn: T10 WWN Unit Serial, will generate if None
        @type wwn: string
        @param write_back: Should we create the StorageObject with write
            caching enabled? Disabled by default
        @type write_back: bool
        @return: A FileIOStorageObject object.
        '''
        if dev is not None:
            super(FileIOStorageObject, self).__init__(name, 'create', index)
            try:
                self._configure(dev, size, wwn, write_back, aio)
            except:
                self.delete()
                raise
        else:
            super(FileIOStorageObject, self).__init__(name, 'lookup', index)

    def _configure(self, dev, size, wwn, write_back, aio):
        self._check_self()

        block_type = get_blockdev_type(dev)
        if block_type is None:  # a file
            if os.path.exists(dev) and not os.path.isfile(dev):
                raise RTSLibError("Path not to a file or block device")

            if size is None:
                raise RTSLibError("Path is to a file, size needed")

            self._control("fd_dev_name=%s,fd_dev_size=%d" % (dev, size))

        else:  # a block device
            # size is ignored but we can't raise an exception because
            # dump() saves it and thus restore() will call us with it.

            if block_type != 0:
                raise RTSLibError("Device is not a TYPE_DISK block device")

            if is_dev_in_use(dev):
                raise RTSLibError("Device %s is already in use" % dev)

            self._control("fd_dev_name=%s" % dev)

        if write_back:
            self.set_attribute("emulate_write_cache", 1)
            self._control("fd_buffered_io=%d" % write_back)

        if aio:
            self._control("fd_async_io=%d" % aio)

        self._set_udev_path(dev)
        self._enable()

        super(FileIOStorageObject, self)._configure(wwn)

    def _get_wb_enabled(self):
        self._check_self()
        return bool(int(self.get_attribute("emulate_write_cache")))

    def _get_size(self):
        self._check_self()
        if self.is_block:
            return (get_size_for_blk_dev(self._parse_info('File'))
                    * int(self._parse_info('SectorSize')))
        else:
            return int(self._parse_info('Size'))

    def _is_block(self):
        return get_blockdev_type(self.udev_path) is not None

    def _aio(self):
        self._check_self()
        info = fread("%s/info" % self.path)
        r = re.search(".*Async: ([^: ]+).*", ' '.join(info.split()))
        if not r:  # for backward compatibility with old kernels
            return False
        return bool(int(r.group(1)))

    # FileIOStorageObject public stuff

    write_back = property(_get_wb_enabled,
            doc="True if write-back, False if write-through (write cache disabled)")
    size = property(_get_size,
            doc="Get the current FileIOStorage size in bytes")
    is_block = property(_is_block,
            doc="True if FileIoStorage is backed by a block device instead of a file")
    aio = property(_aio,
            doc="True if asynchronous I/O is enabled")

    def dump(self):
        d = super(FileIOStorageObject, self).dump()
        d['write_back'] = self.write_back
        d['wwn'] = self.wwn
        d['dev'] = self.udev_path
        d['size'] = self.size
        d['aio'] = self.aio
        return d


class BlockStorageObject(StorageObject):
    '''
    An interface to configFS storage objects for block backstore.
    '''

    # BlockStorageObject private stuff

    def __init__(self, name, dev=None, wwn=None, readonly=False,
                 write_back=False, index=None):
        '''
        A BlockIOStorageObject can be instantiated in two ways:
            - B{Creation mode}: If I{dev} is specified, the underlying configFS
              object will be created with that parameter.
              No BlockIOStorageObject with the same I{name} can pre-exist in
              the parent Backstore in that mode.
            - B{Lookup mode}: If I{dev} is not set, then the
              BlockIOStorageObject will be bound to the existing configFS
              object in the parent Backstore having the specified I{name}.
              The underlying configFS object must already exist in that mode,
              or instantiation will fail.
        @param name: The name of the BlockIOStorageObject.
        @type name: string
        @param dev: The path to the backend block device to be used.
            - Example: I{dev="/dev/sda"}.
            - The only device type that is accepted is I{TYPE_DISK}. For other
              device types, use pscsi.
        @type dev: string
        @param wwn: T10 WWN Unit Serial, will generate if None
        @type wwn: string
        @return: A BlockIOStorageObject object.
        '''
        if dev is not None:
            super(BlockStorageObject, self).__init__(name, 'create', index)
            try:
                self._configure(dev, wwn, readonly)
            except:
                self.delete()
                raise
        else:
            super(BlockStorageObject, self).__init__(name, 'lookup', index)

    def _configure(self, dev, wwn, readonly):
        self._check_self()

        if get_blockdev_type(dev) != 0:
            raise RTSLibError("Device %s is not a TYPE_DISK block device" % dev)
        if is_dev_in_use(dev):
            raise RTSLibError("Cannot configure StorageObject because "
                              "device %s is already in use" % dev)

        self._set_udev_path(dev)
        self._control("udev_path=%s" % dev)
        self._control("readonly=%d" % readonly)
        self._enable()

        super(BlockStorageObject, self)._configure(wwn)

    def _get_major(self):
        self._check_self()
        return int(self._parse_info('Major'))

    def _get_minor(self):
        self._check_self()
        return int(self._parse_info('Minor'))

    def _get_size(self):
        # udev_path doesn't work here, what if LV gets renamed?
        return get_size_for_disk_name(self._parse_info('device')) * \
                int(self._parse_info('SectorSize'))

    def _get_wb_enabled(self):
        self._check_self()
        return bool(int(self.get_attribute("emulate_write_cache")))

    def _get_readonly(self):
        self._check_self()
        # 'readonly' not present before kernel 3.6
        try:
            return bool(int(self._parse_info('readonly')))
        except AttributeError:
            return False

    # BlockStorageObject public stuff

    major = property(_get_major,
            doc="Get the block device major number")
    minor = property(_get_minor,
            doc="Get the block device minor number")
    size = property(_get_size,
            doc="Get the block device size")
    write_back = property(_get_wb_enabled,
            doc="True if write-back, False if write-through (write cache disabled)")
    readonly = property(_get_readonly,
            doc="True if the device is read-only, False if read/write")

    def dump(self):
        d = super(BlockStorageObject, self).dump()
        d['write_back'] = self.write_back
        d['readonly'] = self.readonly
        d['wwn'] = self.wwn
        d['dev'] = self.udev_path
        return d


class UserBackedStorageObject(StorageObject):
    '''
    An interface to configFS storage objects for userspace-backed backstore.
    '''

    def __init__(self, name, config=None, size=None, wwn=None,
                 hw_max_sectors=None, control=None, index=None):
        '''
        @param name: The name of the UserBackedStorageObject.
        @type name: string
        @param config: user-handler-specific config string.
            - e.g. "rbd/machine1@snap4"
        @type config: string
        @param size: The size of the device to create, in bytes.
        @type size: int
        @param wwn: T10 WWN Unit Serial, will generate if None
        @type wwn: string
        @param hw_max_sectors: Max sectors per command limit to export to
            initiators.
        @type hw_max_sectors: int
        @param control: String of control=value tuples separated by a ','
            that will be passed to the kernel control file.
        @type control: string
        @return: A UserBackedStorageObject object.
        '''
        if size is not None:
            if config is None:
                raise RTSLibError("'size' and 'config' must be set when "
                                  "creating a new UserBackedStorageObject")
            if '/' not in config:
                raise RTSLibError("'config' must contain a '/' separating "
                                  "subtype from its configuration string")
            super(UserBackedStorageObject, self).__init__(name, 'create', index)
            try:
                self._configure(config, size, wwn, hw_max_sectors, control)
            except:
                self.delete()
                raise
        else:
            super(UserBackedStorageObject, self).__init__(name, 'lookup', index)

    def _configure(self, config, size, wwn, hw_max_sectors, control):
        self._check_self()

        if ':' in config:
            raise RTSLibError("':' not allowed in config string")
        self._control("dev_config=%s" % config)
        self._control("dev_size=%d" % size)
        if hw_max_sectors is not None:
            self._control("hw_max_sectors=%s" % hw_max_sectors)
        if control is not None:
            self._control(control)
        self._enable()

        super(UserBackedStorageObject, self)._configure(wwn)

    def _get_size(self):
        self._check_self()
        return int(self._parse_info('Size'))

    def _get_hw_max_sectors(self):
        self._check_self()
        return int(self._parse_info('HwMaxSectors'))

    def _get_control_tuples(self):
        self._check_self()
        tuples = []
        # 1. max_data_area_mb
        val = self._parse_info('MaxDataAreaMB')
        if val != "NULL":
            tuples.append("max_data_area_mb=%s" % val)
        val = self.get_attribute('hw_block_size')
        if val != "NULL":
            tuples.append("hw_block_size=%s" % val)
        # 3. add next ...

        return ",".join(tuples)

    def _get_config(self):
        self._check_self()
        val = self._parse_info('Config')
        if val == "NULL":
            return None
        return val

    def _get_alua_supported(self):
        self._check_self()
        return storage_object_get_alua_support_attr(self)

    hw_max_sectors = property(_get_hw_max_sectors,
            doc="Get the max sectors per command.")
    control_tuples = property(_get_control_tuples,
            doc="Get the comma separated string containing control=value tuples.")
    size = property(_get_size,
            doc="Get the size in bytes.")
    config = property(_get_config,
            doc="Get the TCMU config.")
    alua_supported = property(_get_alua_supported,
            doc="Returns true if ALUA can be setup. False if not supported.")

    def dump(self):
        d = super(UserBackedStorageObject, self).dump()
        d['wwn'] = self.wwn
        d['size'] = self.size
        d['config'] = self.config
        d['hw_max_sectors'] = self.hw_max_sectors
        d['control'] = self.control_tuples
        return d


class StorageObjectFactory(object):
    """
    Create a storage object based on a given path.
    Only works for file & block.
    """

    def __new__(cls, path):
        name = path.strip("/").replace("/", "-")
        if os.path.exists(path):
            s = os.stat(path)
            if stat.S_ISBLK(s.st_mode):
                return BlockStorageObject(name=name, dev=path)
            elif stat.S_ISREG(s.st_mode):
                return FileIOStorageObject(name=name, dev=path, size=s.st_size)

        raise RTSLibError("Can't create storageobject from path: %s" % path)


# Used to convert either dirprefix or plugin to the SO. Instead of two
# almost-identical dicts we just have some duplicate entries.
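The `_Backstore` class further below allocates the smallest unused backstore index by scanning the cached index values. A minimal standalone sketch of that allocation strategy (the `free_index` helper is illustrative, not part of rtslib):

```python
def free_index(used, limit=1048576):
    """Return the smallest non-negative index not in `used`.

    Mirrors the scan _Backstore performs over bs_cache.values();
    raises if the whole range is exhausted.
    """
    for index in range(limit):
        if index not in used:
            return index
    raise RuntimeError("No available backstore index")

print(free_index({0, 1, 3}))  # → 2, the first gap between existing indexes
```

A linear scan is fine here because backstore counts are small; the same loop in `_Backstore.__init__` also records the chosen index in `bs_cache` before creating the configfs entry.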
so_mapping = {
    "pscsi": PSCSIStorageObject,
    "rd_mcp": RDMCPStorageObject,
    "ramdisk": RDMCPStorageObject,
    "fileio": FileIOStorageObject,
    "iblock": BlockStorageObject,
    "block": BlockStorageObject,
    "user": UserBackedStorageObject,
}


bs_params = {
    PSCSIStorageObject: dict(name='pscsi'),
    RDMCPStorageObject: dict(name='ramdisk', alt_dirprefix='rd_mcp'),
    FileIOStorageObject: dict(name='fileio'),
    BlockStorageObject: dict(name='block', alt_dirprefix='iblock'),
    UserBackedStorageObject: dict(name='user'),
}

bs_cache = {}


class _Backstore(CFSNode):
    """
    Backstore is needed as a level in the configfs hierarchy, but otherwise
    useless. 1:1 so:backstore. Created by storageobject ctor before SO
    configfs entry.
    """

    def __init__(self, name, storage_object_cls, mode, index=None):
        super(_Backstore, self).__init__()
        self._so_cls = storage_object_cls
        self._plugin = bs_params[self._so_cls]['name']

        dirp = bs_params[self._so_cls].get("alt_dirprefix", self._plugin)

        # if the caller knows the index then skip the cache
        global bs_cache
        if index is None and not bs_cache:
            for dir in glob.iglob("%s/core/*_*/*/" % self.configfs_dir):
                parts = dir.split("/")
                bs_name = parts[-2]
                bs_dirp, bs_index = parts[-3].rsplit("_", 1)
                current_key = "%s/%s" % (bs_dirp, bs_name)
                bs_cache[current_key] = int(bs_index)

        self._lookup_key = "%s/%s" % (dirp, name)
        if index is None:
            self._index = bs_cache.get(self._lookup_key, None)
            if self._index is not None and mode == 'create':
                raise RTSLibError("Storage object %s/%s exists"
                                  % (self._plugin, name))
        else:
            self._index = int(index)

        if self._index is None:
            if mode == 'lookup':
                raise RTSLibNotInCFS("Storage object %s/%s not found"
                                     % (self._plugin, name))
            else:
                # Allocate new index value
                indexes = set(bs_cache.values())
                for index in range(1048576):
                    if index not in indexes:
                        self._index = index
                        bs_cache[self._lookup_key] = self._index
                        break
                else:
                    raise RTSLibError("No available backstore index")

        self._path = "%s/core/%s_%d" % (self.configfs_dir, dirp, self._index)

        try:
            self._create_in_cfs_ine(mode)
        except:
            if self._lookup_key in bs_cache:
                del bs_cache[self._lookup_key]
            raise

    def delete(self):
        super(_Backstore, self).delete()
        if self._lookup_key in bs_cache:
            del bs_cache[self._lookup_key]

    def _get_index(self):
        return self._index

    def _parse_info(self, key):
        self._check_self()
        info = fread("%s/hba_info" % self.path)
        try:
            return re.search(".*%s: ([^: ]+).*" % key,
                             ' '.join(info.split())).group(1)
        except AttributeError:
            return None

    def _get_version(self):
        self._check_self()
        return self._parse_info("version")

    def _get_plugin(self):
        self._check_self()
        return self._plugin

    def _get_name(self):
        self._check_self()
        return "%s%d" % (self.plugin, self.index)

    plugin = property(_get_plugin,
            doc="Get the backstore plugin name.")
    index = property(_get_index,
            doc="Get the backstore index as an int.")
    version = property(_get_version,
            doc="Get the Backstore plugin version string.")
    name = property(_get_name,
            doc="Get the backstore name.")


def _test():
    import doctest
    doctest.testmod()

if __name__ == "__main__":
    _test()


rtslib-fb-2.1.74/rtslib/utils.py

'''
Provides various utility functions.

This file is part of RTSLib.
Copyright (c) 2011-2013 by Datera, Inc
Copyright (c) 2011-2014 by Red Hat, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and
limitations under the License.
'''

import os
import re
import six
import socket
import stat
import subprocess
import uuid
from contextlib import contextmanager

import pyudev

_CONTEXT = pyudev.Context()

class RTSLibError(Exception):
    '''
    Generic rtslib error.
    '''
    pass

class RTSLibALUANotSupported(RTSLibError):
    '''
    Backend does not support ALUA.
    '''
    pass

class RTSLibBrokenLink(RTSLibError):
    '''
    Broken link in configfs, i.e. missing LUN storage object.
    '''
    pass

class RTSLibNotInCFS(RTSLibError):
    '''
    The underlying configfs object does not exist. Happens when
    calling methods of an object that is instantiated but has been
    deleted from configfs, or when trying to lookup an object that
    does not exist.
    '''
    pass

def fwrite(path, string):
    '''
    This function writes a string to a file, and takes care of
    opening it and closing it. If the file does not exist, it
    will be created.

    >>> from rtslib.utils import *
    >>> fwrite("/tmp/test", "hello")
    >>> fread("/tmp/test")
    'hello'

    @param path: The file to write to.
    @type path: string
    @param string: The string to write to the file.
    @type string: string
    '''
    with open(path, 'w') as file_fd:
        file_fd.write(str(string))

def fread(path):
    '''
    This function reads the contents of a file.
    It takes care of opening and closing it.

    >>> from rtslib.utils import *
    >>> fwrite("/tmp/test", "hello")
    >>> fread("/tmp/test")
    'hello'
    >>> fread("/tmp/notexistingfile") # doctest: +ELLIPSIS
    Traceback (most recent call last):
        ...
    IOError: [Errno 2] No such file or directory: '/tmp/notexistingfile'

    @param path: The path to the file to read from.
    @type path: string
    @return: A string containing the file's contents.
    '''
    with open(path, 'r') as file_fd:
        return file_fd.read().strip()

def is_dev_in_use(path):
    '''
    This function will check if the device or file referenced by path is
    already mounted or used as a storage object backend. It works by trying to
    open the path with O_EXCL flag, which will fail if someone else already
    did.

    Note that the file is closed before the function returns, so this does not
    guarantee the device will still be available after the check.

    @param path: path to the file of device to check
    @type path: string
    @return: A boolean, True if we cannot get exclusive descriptor on the path,
             False if we can.
    '''
    path = os.path.realpath(str(path))
    try:
        file_fd = os.open(path, os.O_EXCL|os.O_NDELAY|os.O_RDWR)
    except OSError:
        return True
    else:
        os.close(file_fd)
        return False

def _get_size_for_dev(device):
    '''
    @param device: the device
    @type device: pyudev.Device
    @return: the size in logical blocks, 0 if none found
    @rtype: int
    '''
    attributes = device.attributes
    try:
        sect_size = attributes.asint('size')
    except (KeyError, UnicodeDecodeError, ValueError):
        return 0

    if device['DEVTYPE'] == 'partition':
        attributes = device.parent.attributes

    try:
        logical_block_size = attributes.asint('queue/logical_block_size')
    except (KeyError, UnicodeDecodeError, ValueError):
        return 0

    return (sect_size * 512) // logical_block_size

def get_size_for_blk_dev(path):
    '''
    @param path: The path to a block device
    @type path: string
    @return: The size in logical blocks of the device
    @raises: DeviceNotFoundError if corresponding device not found
    @raises: EnvironmentError, ValueError in some situations
    '''
    device = pyudev.Device.from_device_file(_CONTEXT,
                                            os.path.realpath(str(path)))
    return _get_size_for_dev(device)

get_block_size = get_size_for_blk_dev

def get_size_for_disk_name(name):
    '''
    @param name: a kernel disk name, as found in /proc/partitions
    @type name: string
    @return: The size in logical blocks of a disk-type block device.
    @raises: DeviceNotFoundError
    '''

    # size is in 512-byte sectors, we want to return number of logical blocks
    def get_size(name):
        """
        :param str name: name of block device
        :raises DeviceNotFoundError: if device not found
        """
        device = pyudev.Device.from_name(_CONTEXT, 'block', name)
        return _get_size_for_dev(device)

    # Disk names can include '/' (e.g.
    # 'cciss/c0d0') but these are changed to
    # '!' when listed in /sys/block.
    # in pyudev 0.19 it should no longer be necessary to swap '/'s in name
    name = name.replace("/", "!")

    try:
        return get_size(name)
    except pyudev.DeviceNotFoundError:
        # Maybe it's a partition?
        m = re.search(r'^([a-z0-9_\-!]+?)(\d+)$', name)
        if m:
            # If disk name ends with a digit, Linux sticks a 'p' between it and
            # the partition number in the blockdev name.
            disk = m.groups()[0]
            if disk[-1] == 'p' and disk[-2].isdigit():
                disk = disk[:-1]
            return get_size(m.group())
        else:
            raise

def get_blockdev_type(path):
    '''
    This function returns a block device's type.
    Example: 0 is TYPE_DISK
    If no match is found, None is returned.

    >>> from rtslib.utils import *
    >>> get_blockdev_type("/dev/sda")
    0
    >>> get_blockdev_type("/dev/sr0")
    5
    >>> get_blockdev_type("/dev/scd0")
    5
    >>> get_blockdev_type("/dev/nodevicehere") is None
    True

    @param path: path to the block device
    @type path: string
    @return: An int for the block device type, or None if not a block device.
    '''
    try:
        device = pyudev.Device.from_device_file(_CONTEXT, path)
    except (pyudev.DeviceNotFoundError, EnvironmentError, ValueError):
        return None

    if device.subsystem != u'block':
        return None

    attributes = device.attributes

    disk_type = 0
    try:
        disk_type = attributes.asint('device/type')
    except (KeyError, UnicodeDecodeError, ValueError):
        pass
    return disk_type

get_block_type = get_blockdev_type

def convert_scsi_path_to_hctl(path):
    '''
    This function returns the SCSI ID in H:C:T:L form for the block
    device being mapped to the udev path specified.
    If no match is found, an RTSLibError is raised.

    >>> import rtslib.utils as utils
    >>> utils.convert_scsi_path_to_hctl('/dev/scd0')
    (2, 0, 0, 0)
    >>> utils.convert_scsi_path_to_hctl('/dev/sr0')
    (2, 0, 0, 0)
    >>> utils.convert_scsi_path_to_hctl('/dev/sda')
    (3, 0, 0, 0)
    >>> utils.convert_scsi_path_to_hctl('/dev/sda1')
    >>> utils.convert_scsi_path_to_hctl('/dev/sdb')
    (3, 0, 1, 0)
    >>> utils.convert_scsi_path_to_hctl('/dev/sdc')
    (3, 0, 2, 0)

    @param path: The udev path to the SCSI block device.
    @type path: string
    @return: An (host, controller, target, lun) tuple of integer
    values representing the SCSI ID of the device, or raise RTSLibError.
    '''
    try:
        path = os.path.realpath(path)
        device = pyudev.Device.from_device_file(_CONTEXT, path)
        parent = device.find_parent(subsystem='scsi')
        return tuple(int(data) for data in parent.sys_name.split(':'))
    except:
        raise RTSLibError("Could not convert scsi path to hctl")

def convert_scsi_hctl_to_path(host, controller, target, lun):
    '''
    This function returns a udev path pointing to the block device being
    mapped to the SCSI device that has the provided H:C:T:L.

    >>> import rtslib.utils as utils
    >>> utils.convert_scsi_hctl_to_path(0,0,0,0)
    ''
    >>> utils.convert_scsi_hctl_to_path(2,0,0,0) # doctest: +ELLIPSIS
    '/dev/s...0'
    >>> utils.convert_scsi_hctl_to_path(3,0,2,0)
    '/dev/sdc'

    @param host: The SCSI host id.
    @type host: int
    @param controller: The SCSI controller id.
    @type controller: int
    @param target: The SCSI target id.
    @type target: int
    @param lun: The SCSI Logical Unit Number.
    @type lun: int
    @return: A string for the canonical path to the device, or raise RTSLibError.
    '''
    try:
        host = int(host)
        controller = int(controller)
        target = int(target)
        lun = int(lun)
    except ValueError:
        raise RTSLibError(
            "The host, controller, target and lun parameter must be integers")

    hctl = [str(host), str(controller), str(target), str(lun)]
    try:
        scsi_device = pyudev.Device.from_name(_CONTEXT, 'scsi', ':'.join(hctl))
    except pyudev.DeviceNotFoundError:
        raise RTSLibError("Could not find path for SCSI hctl")

    devices = _CONTEXT.list_devices(
        subsystem='block',
        parent=scsi_device
    )

    path = next((dev.device_node for dev in devices), '')
    if path is None:
        raise RTSLibError("Could not find path for SCSI hctl")
    return path

def generate_wwn(wwn_type):
    '''
    Generates a random WWN of the specified type:
        - unit_serial: T10 WWN Unit Serial.
        - iqn: iSCSI IQN
        - naa: SAS NAA address
    @param wwn_type: The WWN address type.
    @type wwn_type: str
    @returns: A string containing the WWN.
    '''
    wwn_type = wwn_type.lower()

    if wwn_type == 'free':
        return str(uuid.uuid4())
    if wwn_type == 'unit_serial':
        return str(uuid.uuid4())
    elif wwn_type == 'iqn':
        localname = socket.gethostname().split(".")[0].replace("_", "")
        localarch = os.uname()[4].replace("_", "")
        prefix = "iqn.2003-01.org.linux-iscsi.%s.%s" % (localname, localarch)
        prefix = prefix.strip().lower()
        serial = "sn.%s" % str(uuid.uuid4())[24:]
        return "%s:%s" % (prefix, serial)
    elif wwn_type == 'naa':
        # see http://standards.ieee.org/develop/regauth/tut/fibre.pdf
        # 5 = IEEE registered
        # 001405 = OpenIB OUI (they let us use it I guess?)
        # rest = random
        return "naa.5001405" + uuid.uuid4().hex[-9:]
    elif wwn_type == 'eui':
        return "eui.001405" + uuid.uuid4().hex[-10:]
    else:
        raise ValueError("Unknown WWN type: %s" % wwn_type)

def colonize(str):
    '''
    helper function to add colons every 2 chars
    '''
    return ":".join(str[i:i+2] for i in range(0, len(str), 2))

def _cleanse_wwn(wwn_type, wwn):
    '''
    Some wwns may have alternate text representations. Adjust to our
    preferred representation.
    '''
    wwn = str(wwn.strip()).lower()

    if wwn_type in ('naa', 'eui', 'ib'):
        if wwn.startswith("0x"):
            wwn = wwn[2:]
        wwn = wwn.replace("-", "")
        wwn = wwn.replace(":", "")

        if not (wwn.startswith("naa.") or wwn.startswith("eui.")
                or wwn.startswith("ib.")):
            wwn = wwn_type + "." + wwn

    return wwn

def normalize_wwn(wwn_types, wwn):
    '''
    Take a WWN as given by the user and convert it to a standard text
    representation. Returns (normalized_wwn, wwn_type), or exception
    if invalid wwn.
    '''
    wwn_test = {
        'free': lambda wwn: True,
        'iqn': lambda wwn: \
            re.match(r"iqn\.[0-9]{4}-[0-1][0-9]\..*\..*", wwn) \
            and not re.search(' ', wwn) \
            and not re.search('_', wwn),
        'naa': lambda wwn: re.match(r"naa\.[125][0-9a-fA-F]{15}$", wwn),
        'eui': lambda wwn: re.match(r"eui\.[0-9a-f]{16}$", wwn),
        'ib': lambda wwn: re.match(r"ib\.[0-9a-f]{32}$", wwn),
        'unit_serial': lambda wwn: \
            re.match("[0-9A-Fa-f]{8}(-[0-9A-Fa-f]{4}){3}-[0-9A-Fa-f]{12}$", wwn),
    }

    for wwn_type in wwn_types:
        clean_wwn = _cleanse_wwn(wwn_type, wwn)
        found_type = wwn_test[wwn_type](clean_wwn)
        if found_type:
            break
    else:
        raise RTSLibError("WWN not valid as: %s" % ", ".join(wwn_types))

    return (clean_wwn, wwn_type)

def list_loaded_kernel_modules():
    '''
    List all currently loaded kernel modules
    '''
    return [line.split(" ")[0] for line in
            fread("/proc/modules").split('\n') if line]

def modprobe(module):
    '''
    Load the specified kernel module if needed.
    @param module: The name of the kernel module to be loaded.
    @type module: str
    '''
    if module in list_loaded_kernel_modules():
        return

    try:
        import kmod
    except ImportError:
        process = subprocess.Popen(("modprobe", module),
                                   stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE)
        (stdoutdata, stderrdata) = process.communicate()
        if process.returncode != 0:
            raise RTSLibError(stderrdata)
        return

    try:
        kmod.Kmod().modprobe(module)
    except kmod.error.KmodError:
        raise RTSLibError("Could not load module: %s" % module)

def mount_configfs():
    if not os.path.ismount("/sys/kernel/config"):
        cmdline = "mount -t configfs none /sys/kernel/config"
        process = subprocess.Popen(cmdline.split(),
                                   stdout=subprocess.PIPE,
                                   stderr=subprocess.PIPE)
        (stdoutdata, stderrdata) = process.communicate()
        if process.returncode != 0 and not os.path.ismount(
                "/sys/kernel/config"):
            raise RTSLibError("Cannot mount configfs")

def dict_remove(d, items):
    for item in items:
        if item in d:
            del d[item]

@contextmanager
def ignored(*exceptions):
    try:
        yield
    except exceptions:
        pass

#
# These two functions are meant to be used with functools.partial and
# properties.
#
# 'ignore=True' will silently return None if the attribute is not present.
# This is good for attributes only present in some kernel versions.
#
# All curried arguments should be keyword args.
#
# These should only be used for attributes that follow the convention of
# "NULL" having a special sentinel value, such as auth attributes, and
# that return a string.
#
def _get_auth_attr(self, attribute, ignore=False):
    self._check_self()
    path = "%s/%s" % (self.path, attribute)
    try:
        value = fread(path)
    except:
        if not ignore:
            raise
        return None
    if value == "NULL":
        return ''
    else:
        return value

# Auth params take the string "NULL" to unset the attribute
def _set_auth_attr(self, value, attribute, ignore=False):
    self._check_self()
    path = "%s/%s" % (self.path, attribute)
    value = value.strip()
    if value == "NULL":
        raise RTSLibError("'NULL' is not a permitted value")
    if len(value) > 255:
        raise RTSLibError("Value longer than maximum length of 255")
    if value == '':
        value = "NULL"
    try:
        fwrite(path, "%s" % value)
    except:
        if not ignore:
            raise

def set_attributes(obj, attr_dict, err_func):
    for name, value in six.iteritems(attr_dict):
        try:
            obj.set_attribute(name, value)
        except RTSLibError as e:
            err_func(str(e))

def set_parameters(obj, param_dict, err_func):
    for name, value in six.iteritems(param_dict):
        try:
            obj.set_parameter(name, value)
        except RTSLibError as e:
            err_func(str(e))

def _test():
    '''Run the doctests'''
    import doctest
    doctest.testmod()

if __name__ == "__main__":
    _test()


rtslib-fb-2.1.74/rtslib_fb -> rtslib
rtslib-fb-2.1.74/scripts/convert-to-json

#!/usr/bin/python3
'''
convert-to-json

This file is part of RTSLib-fb.
Copyright (c) 2013-2016 by Red Hat, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''

#
# A script to convert .lio format save files to json format.
#

import json
import re

def human_to_bytes(hsize, kilo=1024):
    '''
    This function converts human-readable amounts of bytes to bytes.

    It understands the following units :
        - I{B} or no unit present for Bytes
        - I{k}, I{K}, I{kB}, I{KB} for kB (kilobytes)
        - I{m}, I{M}, I{mB}, I{MB} for MB (megabytes)
        - I{g}, I{G}, I{gB}, I{GB} for GB (gigabytes)
        - I{t}, I{T}, I{tB}, I{TB} for TB (terabytes)

    Note: The definition of I{kilo} defaults to 1kB = 1024Bytes.
    Strictly speaking, those should not be called I{kB} but I{kiB}.
    You can override that with the optional kilo parameter.

    @param hsize: The human-readable version of the Bytes amount to convert
    @type hsize: string or int
    @param kilo: Optional base for the kilo prefix
    @type kilo: int
    @return: An int representing the human-readable string converted to bytes
    '''
    size = hsize.replace('i', '')
    size = size.lower()
    if not re.match("^[0-9\.]+[k|m|g|t]?[b]?$", size):
        raise Exception("Cannot interpret size, wrong format: %s" % hsize)

    size = size.rstrip('ib')

    units = ['k', 'm', 'g', 't']
    try:
        power = units.index(size[-1]) + 1
    except ValueError:
        power = 0
        size = int(size)
    else:
        try:
            size = int(size[:-1])
        except ValueError:
            size = int(float(size[:-1]))

    return size * (int(kilo) ** power)

def parse_yesno(val):
    if val == "yes":
        return 1
    elif val == "no":
        return 0
    else:
        try:
            return int(val)
        except:
            return val

def parse_attributes(txt, cur):
    attribs = {}
    while txt[cur] != "}":
        name = txt[cur]
        val = txt[cur+1]
        attribs[name] = parse_yesno(val)
        cur += 2
    return (cur+1, attribs)

def parse_fileio(txt, cur):
    so = dict(plugin="fileio")
    while txt[cur] != "}":
        if txt[cur] == "path":
            so["dev"] = txt[cur+1]
            cur += 2
            continue
        if txt[cur] == "size":
            so["size"] = human_to_bytes(txt[cur+1])
            cur += 2
            continue
        if txt[cur] == "buffered":
            # skip, recent LIO doesn't use for fileio
            cur += 2
            continue
        if txt[cur] == "attribute":
            cur, so["attributes"] = parse_attributes(txt, cur+2)
            continue
    return (cur+1, so)

def parse_block(txt, cur):
    so = dict(plugin="block")
    while txt[cur] != "}":
        if txt[cur] == "path":
            so["dev"] = txt[cur+1]
            cur += 2
            continue
        if txt[cur] == "attribute":
            cur, so["attributes"] = parse_attributes(txt, cur+2)
            continue
    return (cur+1, so)

def parse_ramdisk(txt, cur):
    so = dict(plugin="ramdisk")
    while txt[cur] != "}":
        if txt[cur] == "nullio":
            so["nullio"] = parse_yesno(txt[cur+1])
            cur += 2
            continue
        if txt[cur] == "size":
            so["size"] = human_to_bytes(txt[cur+1])
            cur += 2
            continue
        if txt[cur] == "attribute":
            cur, so["attributes"] = parse_attributes(txt, cur+2)
            continue
    return (cur+1, so)

so_types = {
    "fileio": parse_fileio,
    "rd_mcp": parse_ramdisk,
    "iblock": parse_block,
}

def parse_storage(txt, cur):
    name = txt[cur+3]
    ty = txt[cur+1]
    cur += 5
    (cur, d) = so_types[ty](txt, cur)
    d["name"] = name
    return (cur, d)

def parse_lun(txt, cur):
    index = int(txt[cur+1])
    plugin, name = txt[cur+3].split(":")
    return cur+4, dict(index=index, plugin=plugin, name=name)

def parse_mapped_lun(txt, cur):
    mlun = dict(index=txt[cur+1])
    cur += 3
    while txt[cur] != "}":
        if txt[cur] == "target_lun":
            mlun["tpg_lun"] = parse_yesno(txt[cur+1])
            cur += 2
            continue
        if txt[cur] == "write_protect":
            mlun["write_protect"] = bool(parse_yesno(txt[cur+1]))
            cur += 2
            continue
    return cur+1, mlun

def parse_acl(txt, cur):
    acl = dict(node_wwn=txt[cur+1])
    mapped_luns = []
    cur += 3
    while txt[cur] != "}":
        if txt[cur] == "attribute":
            cur, acl["attributes"] = parse_attributes(txt, cur+2)
            continue
        if txt[cur] == "auth":
            cur, auth = parse_attributes(txt, cur+2)
            if len(auth):
                acl["auth"] = auth
            continue
        if txt[cur] == "mapped_lun":
            cur, mlun = parse_mapped_lun(txt, cur)
            mapped_luns.append(mlun)

    acl["mapped_luns"] = mapped_luns
    return cur+1, acl

def parse_tpg(tag, txt, cur):
    if tag is None:
        tag = int(txt[cur+1])
        cur += 2
    tpg = dict(tag=tag)
    luns = []
    acls = []
    portals = []
    cur += 3

    while txt[cur] != "}":
        if txt[cur] == "enable":
            tpg["enable"] = parse_yesno(txt[cur+1])
            cur += 2
            continue
        if txt[cur] == "attribute":
            cur, tpg["attributes"] = parse_attributes(txt, cur+2)
            continue
        if txt[cur] == "parameter":
            cur, tpg["parameters"] = parse_attributes(txt, cur+2)
            continue
        if txt[cur] == "auth":
            cur, auth = parse_attributes(txt, cur+2)
            if len(auth):
                tpg["auth"] = auth
            continue
        if txt[cur] == "lun":
            cur, l = parse_lun(txt, cur)
            luns.append(l)
            continue
        if txt[cur] == "acl":
            cur, acl = parse_acl(txt, cur)
            acls.append(acl)
            continue
        if txt[cur] == "portal":
            ip, port = txt[cur+1].split(":")
            portal = dict(ip_address=ip, port=port)
            portals.append(portal)
            cur += 2
            continue

    if len(luns):
        tpg["luns"] = luns
    if len(acls):
        tpg["node_acls"] = acls
    if len(portals):
        tpg["portals"] = portals

    return cur+1, tpg

def parse_target(fabric, txt, cur):
    target = dict(wwn=txt[cur+1], fabric=fabric)
    tpgs = []
    tpgt = None

    # handle multiple tpgts
    if txt[cur+2] == "{":
        extra = 1
    else:
        extra = 0
        tpgt = int(txt[cur+3])

    cur += 2 + extra

    while txt[cur] != "}":
        cur, tpg = parse_tpg(tpgt, txt, cur)
        tpgs.append(tpg)

    target["tpgs"] = tpgs
    return cur+extra, target

def parse_fabric(txt, cur):
    fabric = txt[cur+1]
    cur += 3

    while txt[cur] != "}":
        if txt[cur] == "discovery_auth":
            cur, disco = parse_attributes(txt, cur+2)
            new_disco = {}
            if disco.get("enable"):
                new_disco["discovery_enable_auth"] = disco.get("enable")
            if disco.get("userid"):
                new_disco["discovery_userid"] = disco.get("userid")
            if disco.get("password"):
                new_disco["discovery_password"] = disco.get("password")
            if disco.get("mutual_userid"):
                new_disco["discovery_mutual_userid"] = disco.get("mutual_userid")
            if disco.get("mutual_password"):
                new_disco["discovery_mutual_password"] = disco.get("mutual_password")
            new_disco["name"] = "iscsi"
            fabs.append(new_disco)
            continue
        if txt[cur] == "target":
            cur, t = parse_target(fabric, txt, cur)
            targs.append(t)
            continue

    return cur

sos = []
fabs = []
targs = []

# a basic tokenizer that splits on whitespace and handles double quotes
def split(s):
    new_lst = []
    in_quotes = False
    new_str = []
    for c in s:
        if c not in " \n\t\"":
            new_str.append(c)
        elif c == '\"' and in_quotes == False:
            in_quotes = True
        elif c == '\"' and in_quotes == True:
            in_quotes = False
            if len(new_str) == 0:
                # don't include things that are set to '""'
                del new_lst[-1]
        elif in_quotes == True:
            # append ws if in quotes
            new_str.append(c)
        elif len(new_str):
            # not in quotes, break on ws if anything in new_str
            new_lst.append("".join(new_str))
            new_str = []
        else:
            pass  # drop ws
    return new_lst

def parse(txt, cur):
    cur = 0
    end = len(txt) - 1
    while cur != end:
        if txt[cur] == "storage":
            cur, d = parse_storage(txt, cur)
            sos.append(d)
        elif txt[cur] == "fabric":
            cur = parse_fabric(txt, cur)

with open("/etc/target/scsi_target.lio") as f:
    txt = f.read()

txt = split(txt)
cur = parse(txt, 0)

output = dict(storage_objects=sos, fabric_modules=fabs, targets=targs)

print(json.dumps(output, indent=2, sort_keys=True))

rtslib-fb-2.1.74/scripts/targetctl:

#!/usr/bin/python
'''
targetctl

This file is part of RTSLib.
Copyright (c) 2013 by Red Hat, Inc.

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
License for the specific language governing permissions and limitations
under the License.
'''

#
# A script to save/restore LIO configuration to/from a file in json format
#

from __future__ import print_function

from rtslib_fb import RTSRoot
import os
import sys

default_save_file = "/etc/target/saveconfig.json"
err = sys.stderr

def usage():
    print("syntax: %s save [file_to_save_to]" % sys.argv[0], file=err)
    print("        %s restore [file_to_restore_from]" % sys.argv[0], file=err)
    print("        %s clear" % sys.argv[0], file=err)
    print("  default file is: %s" % default_save_file, file=err)
    sys.exit(-1)

def save(to_file):
    RTSRoot().save_to_file(save_file=to_file)

def restore(from_file):
    try:
        errors = RTSRoot().restore_from_file(restore_file=from_file)
    except IOError:
        # Not an error if the restore file is not present
        print("No saved config file at %s, ok, exiting" % from_file)
        sys.exit(0)

    for error in errors:
        print(error, file=err)

def clear(unused):
    RTSRoot().clear_existing(confirm=True)

funcs = dict(save=save, restore=restore, clear=clear)

def main():
    if os.geteuid() != 0:
        print("Must run as root", file=err)
        sys.exit(-1)

    if len(sys.argv) < 2 or len(sys.argv) > 3:
        usage()

    if sys.argv[1] == "--help":
        usage()

    if sys.argv[1] not in funcs.keys():
        usage()

    savefile = default_save_file
    if len(sys.argv) == 3:
        savefile = os.path.expanduser(sys.argv[2])

    funcs[sys.argv[1]](savefile)

if __name__ == "__main__":
    main()

rtslib-fb-2.1.74/setup.py:

#! /usr/bin/env python
'''
This file is part of RTSLib.
Copyright (c) 2011-2013 by Datera, Inc

Licensed under the Apache License, Version 2.0 (the "License"); you may
not use this file except in compliance with the License. You may obtain
a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
'''

import os
import re

from setuptools import setup

# Get version without importing.
init_file_path = os.path.join(os.path.dirname(__file__), 'rtslib/__init__.py')
with open(init_file_path) as f:
    for line in f:
        match = re.match(r"__version__.*'([0-9.]+)'", line)
        if match:
            version = match.group(1)
            break
    else:
        raise Exception("Couldn't find version in setup.py")

setup(
    name = 'rtslib-fb',
    version = version,
    description = 'API for Linux kernel SCSI target (aka LIO)',
    license = 'Apache 2.0',
    maintainer = 'Andy Grover',
    maintainer_email = 'agrover@redhat.com',
    url = 'http://github.com/open-iscsi/rtslib-fb',
    packages = ['rtslib_fb', 'rtslib'],
    scripts = ['scripts/targetctl'],
    install_requires = [
        'pyudev >= 0.16.1',
        'six',
    ],
    classifiers = [
        "Programming Language :: Python",
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: Apache Software License",
    ],
)

rtslib-fb-2.1.74/systemd/README.md:

### Service file for rtslib-fb use with systemd

The systemd developers encourage upstream projects to ship and install a
service file, saving each systemd-based distribution from having to create
one.

In this directory is the systemd service file for rtslib-fb. However, it is
not currently installed by default.

rtslib-fb-2.1.74/systemd/target.service:

[Unit]
Description=Restore LIO kernel target configuration
Requires=sys-kernel-config.mount
After=sys-kernel-config.mount network.target local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/targetctl restore
ExecStop=/usr/bin/targetctl clear
SyslogIdentifier=target

[Install]
WantedBy=multi-user.target
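The unit file above uses plain INI-style syntax, so the fields that tie it to targetctl can be sanity-checked with Python's stdlib `configparser`. This is illustrative only: systemd has its own unit-file parser, and `configparser` merely happens to accept this simple unit.

```python
import configparser

# The body of systemd/target.service, verbatim from the repo.
unit_text = """\
[Unit]
Description=Restore LIO kernel target configuration
Requires=sys-kernel-config.mount
After=sys-kernel-config.mount network.target local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/bin/targetctl restore
ExecStop=/usr/bin/targetctl clear
SyslogIdentifier=target

[Install]
WantedBy=multi-user.target
"""

cp = configparser.ConfigParser()
cp.read_string(unit_text)

# Start/stop commands map onto the targetctl subcommands defined above.
print(cp["Service"]["ExecStart"])   # /usr/bin/targetctl restore
print(cp["Service"]["ExecStop"])    # /usr/bin/targetctl clear
```

Note that `Requires=sys-kernel-config.mount` encodes the same dependency that `mount_configfs()` in utils handles programmatically: configfs must be mounted before the saved configuration can be restored.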