pax_global_header00006660000000000000000000000064141747001360014515gustar00rootroot0000000000000052 comment=8eb38fd39f66082302b8c2819fc38f30e09d5e2f textfsm-1.1.3/000077500000000000000000000000001417470013600132115ustar00rootroot00000000000000textfsm-1.1.3/.gitignore000066400000000000000000000000331417470013600151750ustar00rootroot00000000000000*.pyc /dist/ /*.egg-info textfsm-1.1.3/COPYING000066400000000000000000000261361417470013600142540ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. textfsm-1.1.3/LICENSE.txt000066400000000000000000000261361417470013600150440ustar00rootroot00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. textfsm-1.1.3/MANIFEST.in000066400000000000000000000002021417470013600147410ustar00rootroot00000000000000include COPYING include *.md include *.txt recursive-include tests *.py recursive-include examples * recursive-include testdata * textfsm-1.1.3/README.md000066400000000000000000000034241417470013600144730ustar00rootroot00000000000000TextFSM ======= Python module which implements a template based state machine for parsing semi-formatted text. Originally developed to allow programmatic access to information returned from the command line interface (CLI) of networking devices. The engine takes two inputs - a template file, and text input (such as command responses from the CLI of a device) and returns a list of records that contains the data parsed from the text. A template file is needed for each uniquely structured text input. Some examples are provided with the code and users are encouraged to develop their own. By developing a pool of template files, scripts can call TextFSM to parse useful information from a variety of sources. It is also possible to use different templates on the same data in order to create different tables (or views). TextFSM was developed internally at Google and released under the Apache 2.0 licence for the benefit of the wider community. [**See documentation for more details.**](https://github.com/google/textfsm/wiki/TextFSM) Before contributing ------------------- If you are not a Google employee, our lawyers insist that you sign a Contributor Licence Agreement (CLA). If you are an individual writing original source code and you're sure you own the intellectual property, then you'll need to sign an [individual CLA](https://cla.developers.google.com/about/google-individual). Individual CLAs can be signed electronically. If you work for a company that wants to allow you to contribute your work, then you'll need to sign a [corporate CLA](https://cla.developers.google.com/clas). The Google CLA is based on Apache's. Note that unlike some projects (notably GNU projects), we do not require a transfer of copyright. You still own the patch. Sadly, even the smallest patch needs a CLA. textfsm-1.1.3/examples/000077500000000000000000000000001417470013600150275ustar00rootroot00000000000000textfsm-1.1.3/examples/cisco_bgp_summary_example000066400000000000000000000014401417470013600221710ustar00rootroot00000000000000BGP router identifier 192.0.2.70, local AS number 65550 BGP table version is 9, main routing table version 9 4 network entries using 468 bytes of memory 4 path entries using 208 bytes of memory 3/2 BGP path/bestpath attribute entries using 420 bytes of memory 1 BGP AS-PATH entries using 24 bytes of memory 1 BGP community entries using 24 bytes of memory 0 BGP route-map cache entries using 0 bytes of memory 0 BGP filter-list cache entries using 0 bytes of memory BGP using 1144 total bytes of memory BGP activity 12/4 prefixes, 12/4 paths, scan interval 5 secs Neighbor V AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/PfxRcd 192.0.2.77 4 65551 6965 1766 9 0 0 5w4d 1 192.0.2.78 4 65552 6965 1766 9 0 0 5w4d 10 textfsm-1.1.3/examples/cisco_bgp_summary_template000066400000000000000000000010661417470013600223550ustar00rootroot00000000000000# Carry down the local end information so that it is present on each row item. Value Filldown RouterID (\S+) Value Filldown LocalAS (\d+) Value RemoteAS (\d+) Value Required RemoteIP (\d+(\.\d+){3}) Value Uptime (\d+\S+) Value Received_V4 (\d+) Value Status (\D.*) Start ^BGP router identifier ${RouterID}, local AS number ${LocalAS} ^${RemoteIP}\s+\d+\s+${RemoteAS}(\s+\S+){5}\s+${Uptime}\s+${Received_V4} -> Record ^${RemoteIP}\s+\d+\s+${RemoteAS}(\s+\S+){5}\s+${Uptime}\s+${Status} -> Record # Last record is already recorded then skip doing so here. EOF textfsm-1.1.3/examples/cisco_ipv6_interface_example000066400000000000000000000040041417470013600225470ustar00rootroot00000000000000Dialer0 is up, line protocol is up IPv6 is enabled, link-local address is FE80::21B:2BFF:FECE:4EE3 No Virtual link-local address(es): Description: PPP Dialer Stateless address autoconfig enabled General-prefix in use for addressing Global unicast address(es): 2001:4567:1212:B2:21B:2BFF:FECE:4EE3, subnet is 2001:4567:1212:B2::/64 [EUI/CAL/PRE] valid lifetime 5041 preferred lifetime 5041 2001:4567:1111:56FF::1, subnet is 2001:4567:1111:56FF::1/128 [CAL/PRE] valid lifetime 5945 preferred lifetime 2344 Joined group address(es): FF02::1 FF02::2 FF02::1:FF00:1 FF02::1:FFCE:4EE3 MTU is 1500 bytes ICMP error messages limited to one every 100 milliseconds ICMP redirects are enabled ICMP unreachables are sent Input features: Access List Inbound access list IPV6-IN ND DAD is enabled, number of DAD attempts: 1 ND reachable time is 30000 milliseconds (using 21397) Hosts use stateless autoconfig for addresses. Vlan1 is up, line protocol is up IPv6 is enabled, link-local address is FE80::21B:2BFF:FECE:4EE3 No Virtual link-local address(es): Description: Local VLAN General-prefix in use for addressing Global unicast address(es): 2001:4567:1212:5600::1, subnet is 2001:4567:1212:5600::/64 [CAL/PRE] valid lifetime 5943 preferred lifetime 2342 Joined group address(es): FF02::1 FF02::2 FF02::1:2 FF02::1:FF00:1 FF02::1:FFCE:4EE3 FF05::1:3 MTU is 1500 bytes ICMP error messages limited to one every 100 milliseconds ICMP redirects are enabled ICMP unreachables are sent ND DAD is enabled, number of DAD attempts: 1 ND reachable time is 30000 milliseconds (using 26371) ND advertised reachable time is 0 (unspecified) ND advertised retransmit interval is 0 (unspecified) ND router advertisements are sent every 200 seconds ND router advertisements live for 1800 seconds ND advertised default router preference is Medium Hosts use stateless autoconfig for addresses. Hosts use DHCP to obtain other configuration. textfsm-1.1.3/examples/cisco_ipv6_interface_template000066400000000000000000000012041417470013600227260ustar00rootroot00000000000000Value Interface (\S+) Value Admin (\S+) Value Oper (\S+) Value Description (.*) Value LinkLocal (\S+) Value List Addresses (\S+) Value List Subnets (\S+) Value List GroupAddresses (\S+) Value Mtu (\d+) Start ^${Interface} is ${Admin}, line protocol is ${Oper} ^.*link-local address is ${LinkLocal} ^ Description: ${Description} ^ Global unicast address -> Unicast ^ Joined group address -> Multicast ^ MTU is ${Mtu} bytes -> Record Unicast ^ ${Addresses}, subnet is ${Subnets} ^ Joined group address -> Multicast ^ \S -> Start Multicast ^ ${GroupAddresses} ^ MTU is ${Mtu} bytes -> Record ^ \S -> Start textfsm-1.1.3/examples/cisco_version_example000066400000000000000000000032721417470013600213360ustar00rootroot00000000000000Cisco IOS Software, Catalyst 4500 L3 Switch Software (cat4500-ENTSERVICESK9-M), Version 12.2(31)SGA1, RELEASE SOFTWARE (fc3) Technical Support: http://www.cisco.com/techsupport Copyright (c) 1986-2007 by Cisco Systems, Inc. Compiled Fri 26-Jan-07 14:28 by kellythw Image text-base: 0x10000000, data-base: 0x118AD800 ROM: 12.2(31r)SGA Pod Revision 0, Force Revision 34, Gill Revision 20 router.abc uptime is 3 days, 13 hours, 53 minutes System returned to ROM by reload System restarted at 05:09:09 PDT Wed Apr 2 2008 System image file is "bootflash:cat4500-entservicesk9-mz.122-31.SGA1.bin" This product contains cryptographic features and is subject to United States and local country laws governing import, export, transfer and use. Delivery of Cisco cryptographic products does not imply third-party authority to import, export, distribute or use encryption. Importers, exporters, distributors and users are responsible for compliance with U.S. and local country laws. By using this product you agree to comply with applicable laws and regulations. If you are unable to comply with U.S. and local laws, return this product immediately. A summary of U.S. laws governing Cisco cryptographic products may be found at: http://www.cisco.com/wwl/export/crypto/tool/stqrg.html If you require further assistance please contact us by sending email to export@cisco.com. cisco WS-C4948-10GE (MPC8540) processor (revision 5) with 262144K bytes of memory. Processor board ID FOX111700ZP MPC8540 CPU at 667Mhz, Fixed Module Last reset from Reload 2 Virtual Ethernet interfaces 48 Gigabit Ethernet interfaces 2 Ten Gigabit Ethernet interfaces 511K bytes of non-volatile configuration memory. Configuration register is 0x2102 textfsm-1.1.3/examples/cisco_version_template000066400000000000000000000007251417470013600215160ustar00rootroot00000000000000Value Model (\S+) Value Memory (\S+) Value ConfigRegister (0x\S+) Value Uptime (.*) Value Version (.*?) Value ReloadReason (.*) Value ReloadTime (.*) Value ImageFile ([^"]+) Start ^Cisco IOS Software.*Version ${Version}, ^.*uptime is ${Uptime} ^System returned to ROM by ${ReloadReason} ^System restarted at ${ReloadTime} ^System image file is "${ImageFile}" ^cisco ${Model} .* with ${Memory} bytes of memory ^Configuration register is ${ConfigRegister} textfsm-1.1.3/examples/f10_ip_bgp_summary_example000066400000000000000000000015531417470013600221540ustar00rootroot00000000000000BGP router identifier 192.0.2.1, local AS number 65551 BGP table version is 173711, main routing table version 173711 255 network entrie(s) using 43260 bytes of memory 1114 paths using 75752 bytes of memory BGP-RIB over all using 76866 bytes of memory 23 BGP path attribute entrie(s) using 1472 bytes of memory 3 BGP AS-PATH entrie(s) using 137 bytes of memory 10 BGP community entrie(s) using 498 bytes of memory 2 BGP route-reflector cluster entrie(s) using 62 bytes of memory 6 neighbor(s) using 28128 bytes of memory Neighbor AS MsgRcvd MsgSent TblVer InQ OutQ Up/Down State/Pfx 10.10.10.10 65551 647 397 73711 0 (0) 10:37:12 5 10.10.100.1 65552 664 416 73711 0 (0) 10:38:27 0 10.100.10.9 65553 709 526 73711 0 (0) 07:55:38 1 textfsm-1.1.3/examples/f10_ip_bgp_summary_template000066400000000000000000000007061417470013600223330ustar00rootroot00000000000000Value Filldown RouterID (\d+(\.\d+){3}) Value Filldown LocalAS (\d+) Value RemoteAS (\d+) Value Required RemoteIP (\d+(\.\d+){3}) Value Uptime (\S+) Value Received_V4 (\d+) Value Received_V6 () Value Status (\D.*) Start ^BGP router identifier ${RouterID}, local AS number ${LocalAS} ^${RemoteIP}\s+${RemoteAS}(\s+\S+){5}\s+${Uptime}\s+${Received_V4} -> Next.Record ^${RemoteIP}\s+${RemoteAS}(\s+\S+){5}\s+${Uptime}\s+${Status} -> Next.Record EOF textfsm-1.1.3/examples/f10_version_example000066400000000000000000000017641417470013600206300ustar00rootroot00000000000000Force10 Networks Real Time Operating System Software Force10 Operating System Version: 1.0 Force10 Application Software Version: 7.7.1.1 Copyright (c) 1999-2008 by Force10 Networks, Inc. Build Time: Fri Sep 12 14:08:26 PDT 2008 Build Path: /sites/sjc/work/sw/build/special_build/Release/E7-7-1/SW/SRC router.abc uptime is 3 day(s), 2 hour(s), 3 minute(s) System image file is "flash://FTOS-EF-7.7.1.1.bin" Chassis Type: E1200 Control Processor: IBM PowerPC 750FX (Rev D2.2) with 536870912 bytes of memory. Route Processor 1: IBM PowerPC 750FX (Rev D2.2) with 1073741824 bytes of memory. Route Processor 2: IBM PowerPC 750FX (Rev D2.2) with 1073741824 bytes of memory. 128K bytes of non-volatile configuration memory. 1 Route Processor Module 9 Switch Fabric Module 1 48-port GE line card with SFP optics (EF) 7 4-port 10GE LAN/WAN PHY line card with XFP optics (EF) 1 FastEthernet/IEEE 802.3 interface(s) 48 GigabitEthernet/IEEE 802.3 interface(s) 28 Ten GigabitEthernet/IEEE 802.3 interface(s) textfsm-1.1.3/examples/f10_version_template000066400000000000000000000003561417470013600210040ustar00rootroot00000000000000Value Chassis (\S+) Value Model (.*) Value Software (.*) Value Image ([^"]*) Start ^Force10 Application Software Version: ${Software} ^Chassis Type: ${Chassis} -> Continue ^Chassis Type: ${Model} ^System image file is "${Image}" textfsm-1.1.3/examples/index000066400000000000000000000013571417470013600160670ustar00rootroot00000000000000 # First line is the header fields for columns and is mandatory. # Regular expressions are supported in all fields except the first. # Last field supports variable length command completion. # abc[[xyz]] is expanded to abc(x(y(z)?)?)?, regexp inside [[]] is not supported # Template, Hostname, Vendor, Command cisco_bgp_summary_template, .*, Cisco, sh[[ow]] ip bg[[p]] su[[mmary]] cisco_version_template, .*, Cisco, sh[[ow]] ve[[rsion]] f10_ip_bgp_summary_template, .*, Force10, sh[[ow]] ip bg[[p]] sum[[mary]] f10_version_template, .*, Force10, sh[[ow]] ve[[rsion]] juniper_bgp_summary_template, .*, Juniper, sh[[ow]] bg[[p]] su[[mmary]] juniper_version_template, .*, Juniper, sh[[ow]] ve[[rsion]] unix_ifcfg_template, hostname[abc].*, .*, ifconfig textfsm-1.1.3/examples/juniper_bgp_summary_example000066400000000000000000000013061417470013600225460ustar00rootroot00000000000000Groups: 3 Peers: 3 Down peers: 0 Table Tot Paths Act Paths Suppressed History Damp State Pending inet.0 947 310 0 0 0 0 inet6.0 849 807 0 0 0 0 Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn State|#Active/Received/Damped... 10.247.68.182 65550 131725 28179233 0 11 6w3d17h Establ inet.0: 4/5/1 inet6.0: 0/0/0 10.254.166.246 65550 136159 29104942 0 0 6w5d6h Establ inet.0: 0/0/0 inet6.0: 7/8/1 192.0.2.100 65551 1269381 1363320 0 1 9w5d6h 2/3/0 0/0/0 textfsm-1.1.3/examples/juniper_bgp_summary_template000066400000000000000000000016611417470013600227320ustar00rootroot00000000000000Value RemoteAS (\d+) Value RemoteIP (\S+) Value Uptime (.*[0-9h]) Value Active_V4 (\d+) Value Received_V4 (\d+) Value Accepted_V4 (\d+) Value Damped_V4 (\d+) Value Active_V6 (\d+) Value Received_V6 (\d+) Value Accepted_V6 (\d+) Value Damped_V6 (\d+) Value Status ([a-zA-Z]+) Start # New format IPv4 & IPv6 split across newlines. ^\s+inet.0: ${Active_V4}/${Received_V4}/${Damped_V4} ^\s+inet6.0: ${Active_V6}/${Received_V6}/${Damped_V6} ^ -> Continue.Record ^${RemoteIP}\s+${RemoteAS}(\s+\d+){4}\s+${Uptime}\s+${Status} ^${RemoteIP}\s+${RemoteAS}(\s+\d+){4}\s+${Uptime}\s+${Active_V4}/${Received_V4}/${Damped_V4}\s+${Active_V6}/${Received_V6}/${Damped_V6} -> Next.Record ^${RemoteIP}\s+${RemoteAS}(\s+\d+){4}\s+${Uptime}\s+${Active_V4}/${Received_V4}/${Accepted_V4}/${Damped_V4}\s+${Active_V6}/${Received_V6}/${Accepted_V6}/${Damped_V6} -> Next.Record ^${RemoteIP}\s+${RemoteAS}(\s+\d+){4}\s+${Uptime}\s+${Status} -> Next.Record textfsm-1.1.3/examples/juniper_version_example000066400000000000000000000005711417470013600217110ustar00rootroot00000000000000Hostname: router.abc Model: mx960 JUNOS Base OS boot [9.1S3.5] JUNOS Base OS Software Suite [9.1S3.5] JUNOS Kernel Software Suite [9.1S3.5] JUNOS Crypto Software Suite [9.1S3.5] JUNOS Packet Forwarding Engine Support (M/T Common) [9.1S3.5] JUNOS Packet Forwarding Engine Support (MX Common) [9.1S3.5] JUNOS Online Documentation [9.1S3.5] JUNOS Routing Software Suite [9.1S3.5] textfsm-1.1.3/examples/juniper_version_template000066400000000000000000000010731417470013600220670ustar00rootroot00000000000000Value Chassis (\S+) Value Required Model (\S+) Value Boot (.*) Value Base (.*) Value Kernel (.*) Value Crypto (.*) Value Documentation (.*) Value Routing (.*) Start # Support multiple chassis systems. ^\S+:$$ -> Continue.Record ^${Chassis}:$$ ^Model: ${Model} ^JUNOS Base OS boot \[${Boot}\] ^JUNOS Software Release \[${Base}\] ^JUNOS Base OS Software Suite \[${Base}\] ^JUNOS Kernel Software Suite \[${Kernel}\] ^JUNOS Crypto Software Suite \[${Crypto}\] ^JUNOS Online Documentation \[${Documentation}\] ^JUNOS Routing Software Suite \[${Routing}\] textfsm-1.1.3/examples/unix_ifcfg_example000066400000000000000000000012351417470013600206070ustar00rootroot00000000000000lo0: flags=8049 mtu 16384 inet6 ::1 prefixlen 128 inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 inet 127.0.0.1 netmask 0xff000000 en0: flags=8863 mtu 1500 ether 34:15:9e:27:45:e3 inet6 fe80::3615:9eff:fe27:45e3%en0 prefixlen 64 scopeid 0x4 inet6 2001:db8::3615:9eff:fe27:45e3 prefixlen 64 autoconf inet 192.0.2.215 netmask 0xfffffe00 broadcast 192.0.2.255 media: autoselect (1000baseT ) status: active en1: flags=8863 mtu 1500 ether 90:84:0d:f6:d1:55 media: () status: inactive textfsm-1.1.3/examples/unix_ifcfg_template000066400000000000000000000010031417470013600207600ustar00rootroot00000000000000Value Required Interface ([^:]+) Value MTU (\d+) Value State ((in)?active) Value MAC ([\d\w:]+) Value List Inet ([\d\.]+) Value List Netmask (\S+) # Don't match interface local (fe80::/10) - achieved with excluding '%'. Value List Inet6 ([^%]+) Value List Prefix (\d+) Start # Record interface record (if we have one). ^\S+:.* -> Continue.Record # Collect data for new interface. ^${Interface}:.* mtu ${MTU} ^\s+ether ${MAC} ^\s+inet6 ${Inet6} prefixlen ${Prefix} ^\s+inet ${Inet} netmask ${Netmask} textfsm-1.1.3/setup.cfg000066400000000000000000000001321417470013600150260ustar00rootroot00000000000000[metadata] description-file = README.md [aliases] test=pytest [bdist_wheel] universal=1 textfsm-1.1.3/setup.py000077500000000000000000000037731417470013600147400ustar00rootroot00000000000000#!/usr/bin/python # # Copyright 2017 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Setup script.""" from setuptools import setup, find_packages import textfsm # To use a consistent encoding from codecs import open from os import path here = path.abspath(path.dirname(__file__)) # Get the long description from the README file with open(path.join(here, 'README.md'), encoding="utf8") as f: long_description = f.read() setup(name='textfsm', maintainer='Google', maintainer_email='textfsm-dev@googlegroups.com', version=textfsm.__version__, description='Python module for parsing semi-structured text into python tables.', long_description=long_description, long_description_content_type='text/markdown', url='https://github.com/google/textfsm', license='Apache License, Version 2.0', classifiers=[ 'Development Status :: 5 - Production/Stable', 'Intended Audience :: Developers', 'License :: OSI Approved :: Apache Software License', 'Operating System :: OS Independent', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 3', 'Topic :: Software Development :: Libraries'], packages=['textfsm'], entry_points={ 'console_scripts': [ 'textfsm=textfsm.parser:main' ] }, include_package_data=True, package_data={'textfsm': ['../testdata/*']}, install_requires=['six', 'future'], ) textfsm-1.1.3/testdata/000077500000000000000000000000001417470013600150225ustar00rootroot00000000000000textfsm-1.1.3/testdata/clitable_templateA000066400000000000000000000001351417470013600205170ustar00rootroot00000000000000Value Key Col1 (.) Value Col2 (.) Value Col3 (.) Start ^${Col1} ${Col2} ${Col3} -> Record textfsm-1.1.3/testdata/clitable_templateB000066400000000000000000000001061417470013600205160ustar00rootroot00000000000000Value Key Col1 (.) Value Col4 (.) Start ^${Col1} ${Col4} -> Record textfsm-1.1.3/testdata/clitable_templateC000066400000000000000000000001311417470013600205150ustar00rootroot00000000000000Value Col1 (a) Value Col2 (.) Value Col3 (.) Start ^${Col1} ${Col2} ${Col3} -> Record textfsm-1.1.3/testdata/clitable_templateD000066400000000000000000000001311417470013600205160ustar00rootroot00000000000000Value Col1 (d) Value Col2 (.) Value Col3 (.) Start ^${Col1} ${Col2} ${Col3} -> Record textfsm-1.1.3/testdata/default_index000066400000000000000000000007301417470013600175600ustar00rootroot00000000000000# First line is header fields for columns # Regular expressions are supported in all fields except the first. # Last field supports variable length command completion. # abc[[xyz]] is expended to abc(x(y(z)?)?)?, regexp inside [[]] i not supported # Template, Hostname, Vendor, Command # clitable_templateA:clitable_templateB, .*, VendorA, sh[[ow]] ve[[rsion]] clitable_templateC, .*, VendorB, sh[[ow]] ve[[rsion]] clitable_templateD, .*, VendorA, sh[[ow]] in[[terfaces]] textfsm-1.1.3/testdata/nondefault_index000066400000000000000000000006011417470013600202700ustar00rootroot00000000000000# First line is header fields for columns # Regular expressions are supported in all fields except the first. # Last field supports variable length command completion. # abc[[xyz]] is expended to abc(x(y(z)?)?)?, regexp inside [[]] i not supported # Devicename, Vendor, Command # .*, VendorA, sh[[ow]] ve[[rsion]] .*, VendorB, sh[[ow]] ve[[rsion]] .*, VendorA, sh[[ow]] in[[terfaces]] textfsm-1.1.3/testdata/parseindex_index000066400000000000000000000007301417470013600202760ustar00rootroot00000000000000# First line is header fields for columns # Regular expressions are supported in all fields except the first. # Last field supports variable length command completion. # abc[[xyz]] is expended to abc(x(y(z)?)?)?, regexp inside [[]] i not supported # Template, Hostname, Vendor, Command # clitable_templateA:clitable_templateB, .*, VendorA, sh[[ow]] ve[[rsion]] clitable_templateC, .*, VendorB, sh[[ow]] ve[[rsion]] clitable_templateD, .*, VendorA, sh[[ow]] in[[terfaces]] textfsm-1.1.3/testdata/parseindexfail1_index000066400000000000000000000007641417470013600212220ustar00rootroot00000000000000# First line is header fields for columns # Regular expressions are supported in all fields except the first. # Last field supports variable length command completion. # abc[[xyz]] is expended to abc(x(y(z)?)?)?, regexp inside [[]] i not supported # # Type in header name. Templatebogus, Hostname, Vendor, Command # clitable_templateA:clitable_templateB, .*, VendorA, sh[[ow]] ve[[rsion]] clitable_templateC, .*, VendorB, sh[[ow]] ve[[rsion]] clitable_templateD, .*, VendorA, sh[[ow]] in[[terfaces]] textfsm-1.1.3/testdata/parseindexfail2_index000066400000000000000000000007561417470013600212240ustar00rootroot00000000000000# First line is header fields for columns # Regular expressions are supported in all fields except the first. # Last field supports variable length command completion. # abc[[xyz]] is expended to abc(x(y(z)?)?)?, regexp inside [[]] i not supported # # Column out of order Hostname, Template, Vendor, Command # clitable_templateA:clitable_templateB, .*, VendorA, sh[[ow]] ve[[rsion]] clitable_templateC, .*, VendorB, sh[[ow]] ve[[rsion]] clitable_templateD, .*, VendorA, sh[[ow]] in[[terfaces]] textfsm-1.1.3/testdata/parseindexfail3_index000066400000000000000000000010011417470013600212050ustar00rootroot00000000000000# First line is header fields for columns # Regular expressions are supported in all fields except the first. # Last field supports variable length command completion. # abc[[xyz]] is expended to abc(x(y(z)?)?)?, regexp inside [[]] i not supported # Template, Hostname, Vendor, Command # # Illegal regexp characters in column. clitable_templateA:clitable_templateB, .*, VendorA, sh[[ow]] ve[[rsion]] clitable_templateC, .*, [[VendorB, sh[[ow]] ve[[rsion]] clitable_templateD, .*, VendorA, sh[[ow]] in[[terfaces]] textfsm-1.1.3/tests/000077500000000000000000000000001417470013600143535ustar00rootroot00000000000000textfsm-1.1.3/tests/__init__.py000066400000000000000000000000001417470013600164520ustar00rootroot00000000000000textfsm-1.1.3/tests/clitable_test.py000077500000000000000000000271531417470013600175560ustar00rootroot00000000000000#!/usr/bin/python # # Copyright 2012 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. See the License for the specific language governing # permissions and limitations under the License. """Unittest for clitable script.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import copy import os import re import unittest from io import StringIO from textfsm import clitable from textfsm import copyable_regex_object class UnitTestIndexTable(unittest.TestCase): """Tests the IndexTable class.""" def testParseIndex(self): """Test reading and index and parsing to index and compiled tables.""" file_path = os.path.join('testdata', 'parseindex_index') indx = clitable.IndexTable(file_path=file_path) # Compare number of entries found in the index table. self.assertEqual(indx.index.size, 3) self.assertEqual(indx.index[2]['Template'], 'clitable_templateC') self.assertEqual(indx.index[3]['Template'], 'clitable_templateD') self.assertEqual(indx.index[1]['Command'], 'sh[[ow]] ve[[rsion]]') self.assertEqual(indx.index[1]['Hostname'], '.*') self.assertEqual(indx.compiled.size, 3) for col in ('Command', 'Vendor', 'Template', 'Hostname'): self.assertTrue(isinstance(indx.compiled[1][col], copyable_regex_object.CopyableRegexObject)) self.assertTrue(indx.compiled[1]['Hostname'].match('random string')) def _PreParse(key, value): if key == 'Template': return value.upper() return value def _PreCompile(key, value): if key in ('Template', 'Command'): return None return value self.assertEqual(indx.compiled.size, 3) indx = clitable.IndexTable(_PreParse, _PreCompile, file_path) self.assertEqual(indx.index[2]['Template'], 'CLITABLE_TEMPLATEC') self.assertEqual(indx.index[1]['Command'], 'sh[[ow]] ve[[rsion]]') self.assertTrue(isinstance(indx.compiled[1]['Hostname'], copyable_regex_object.CopyableRegexObject)) self.assertFalse(indx.compiled[1]['Command']) def testGetRowMatch(self): """Tests retreiving rows from table.""" file_path = os.path.join('testdata', 'parseindex_index') indx = clitable.IndexTable(file_path=file_path) self.assertEqual(1, indx.GetRowMatch({'Hostname': 'abc'})) self.assertEqual(2, indx.GetRowMatch({'Hostname': 'abc', 'Vendor': 'VendorB'})) def testCopy(self): """Tests copy of IndexTable object.""" file_path = os.path.join('testdata', 'parseindex_index') indx = clitable.IndexTable(file_path=file_path) copy.deepcopy(indx) class UnitTestCliTable(unittest.TestCase): """Tests the CliTable class.""" def setUp(self): super(UnitTestCliTable, self).setUp() clitable.CliTable.INDEX = {} self.clitable = clitable.CliTable('default_index', 'testdata') self.input_data = ('a b c\n' 'd e f\n') self.template = ('Value Key Col1 (.)\n' 'Value Col2 (.)\n' 'Value Col3 (.)\n' '\n' 'Start\n' ' ^${Col1} ${Col2} ${Col3} -> Record\n' '\n') self.template_file = StringIO(self.template) def testCompletion(self): """Tests '[[]]' syntax replacement.""" indx = clitable.CliTable() self.assertEqual('abc', re.sub(r'(\[\[.+?\]\])', indx._Completion, 'abc')) self.assertEqual('a(b(c)?)?', re.sub(r'(\[\[.+?\]\])', indx._Completion, 'a[[bc]]')) self.assertEqual('a(b(c)?)? de(f)?', re.sub(r'(\[\[.+?\]\])', indx._Completion, 'a[[bc]] de[[f]]')) def testRepeatRead(self): """Tests that index file is read only once at the class level.""" new_clitable = clitable.CliTable('default_index', 'testdata') self.assertEqual(self.clitable.index, new_clitable.index) def testCliCompile(self): """Tests PreParse and PreCompile.""" self.assertEqual('sh(o(w)?)? ve(r(s(i(o(n)?)?)?)?)?', self.clitable.index.index[1]['Command']) self.assertEqual(None, self.clitable.index.compiled[1]['Template']) self.assertTrue( self.clitable.index.compiled[1]['Command'].match('sho vers')) def testParseCmdItem(self): """Tests parsing data with a single specific template.""" t = self.clitable._ParseCmdItem(self.input_data, template_file=self.template_file) self.assertEqual(t.table, 'Col1, Col2, Col3\na, b, c\nd, e, f\n') def testParseCmd(self): """Tests parsing data with a mocked template.""" # Stub out the conversion of filename to file handle. self.clitable._TemplateNamesToFiles = lambda t: [self.template_file] self.clitable.ParseCmd(self.input_data, attributes={'Command': 'sh vers'}) self.assertEqual( self.clitable.table, 'Col1, Col2, Col3\na, b, c\nd, e, f\n') def testParseWithTemplate(self): """Tests parsing with an explicitly declared the template.""" self.clitable.ParseCmd(self.input_data, attributes={'Command': 'sh vers'}, templates='clitable_templateB') self.assertEqual( self.clitable.table, 'Col1, Col4\na, b\nd, e\n') def testParseCmdFromIndex(self): """Tests parsing with a template found in the index.""" self.clitable.ParseCmd(self.input_data, attributes={'Command': 'sh vers', 'Vendor': 'VendorB'}) self.assertEqual( self.clitable.table, 'Col1, Col2, Col3\na, b, c\n') self.clitable.ParseCmd(self.input_data, attributes={'Command': 'sh int', 'Vendor': 'VendorA'}) self.assertEqual( self.clitable.table, 'Col1, Col2, Col3\nd, e, f\n') self.assertRaises(clitable.CliTableError, self.clitable.ParseCmd, self.input_data, attributes={'Command': 'show vers', 'Vendor': 'bogus'}) self.assertRaises(clitable.CliTableError, self.clitable.ParseCmd, self.input_data, attributes={'Command': 'unknown command', 'Vendor': 'VendorA'}) def testParseWithMultiTemplates(self): """Tests that multiple matching templates extend the table.""" self.clitable.ParseCmd(self.input_data, attributes={'Command': 'sh ver', 'Vendor': 'VendorA'}) self.assertEqual( self.clitable.table, 'Col1, Col2, Col3, Col4\na, b, c, b\nd, e, f, e\n') self.clitable.ParseCmd(self.input_data, attributes={'Command': 'sh vers'}, templates='clitable_templateB:clitable_templateA') self.assertEqual( self.clitable.table, 'Col1, Col4, Col2, Col3\na, b, b, c\nd, e, e, f\n') self.assertRaises(IOError, self.clitable.ParseCmd, self.input_data, attributes={'Command': 'sh vers'}, templates='clitable_templateB:clitable_bogus') def testRequireCols(self): """Tests that CliTable expects a 'Template' row to be present.""" self.assertRaises(clitable.CliTableError, clitable.CliTable, 'nondefault_index', 'testdata') def testSuperKey(self): """Tests that superkey is derived from the template and is extensible.""" # Stub out the conversion of filename to file handle. self.clitable._TemplateNamesToFiles = lambda t: [self.template_file] self.clitable.ParseCmd(self.input_data, attributes={'Command': 'sh ver'}) self.assertEqual(self.clitable.superkey, ['Col1']) self.assertEqual( self.clitable.LabelValueTable(), '# LABEL Col1\n' 'a.Col2 b\n' 'a.Col3 c\n' 'd.Col2 e\n' 'd.Col3 f\n') self.clitable.AddKeys(['Col2']) self.assertEqual( self.clitable.LabelValueTable(), '# LABEL Col1.Col2\n' 'a.b.Col3 c\n' 'd.e.Col3 f\n') def testAddKey(self): """Tests that new keys are not duplicated and non-existant columns.""" self.assertEqual(self.clitable.superkey, []) # Stub out the conversion of filename to file handle. self.clitable._TemplateNamesToFiles = lambda t: [self.template_file] self.clitable.ParseCmd(self.input_data, attributes={'Command': 'sh ver'}) self.assertEqual(self.clitable.superkey, ['Col1']) self.clitable.AddKeys(['Col1', 'Col2', 'Col3']) self.assertEqual(self.clitable.superkey, ['Col1', 'Col2', 'Col3']) self.assertRaises(KeyError, self.clitable.AddKeys, ['Bogus']) def testKeyValue(self): """Tests retrieving row value that corresponds to the key.""" # Stub out the conversion of filename to file handle. self.clitable._TemplateNamesToFiles = lambda t: [self.template_file] self.clitable.ParseCmd(self.input_data, attributes={'Command': 'sh ver'}) self.assertEqual(self.clitable.KeyValue(), ['a']) self.clitable.row_index = 2 self.assertEqual(self.clitable.KeyValue(), ['d']) self.clitable.row_index = 1 self.clitable.AddKeys(['Col3']) self.assertEqual(self.clitable.KeyValue(), ['a', 'c']) # With no key it falls back to row number. self.clitable._keys = set() for rownum, row in enumerate(self.clitable, start=1): self.assertEqual(row.table.KeyValue(), ['%s' % rownum]) def testTableSort(self): """Tests sorting of table based on superkey.""" self.clitable._TemplateNamesToFiles = lambda t: [self.template_file] input_data2 = ('a e c\n' 'd b f\n') self.clitable.ParseCmd(self.input_data + input_data2, attributes={'Command': 'sh ver'}) self.assertEqual( self.clitable.table, 'Col1, Col2, Col3\na, b, c\nd, e, f\na, e, c\nd, b, f\n') self.clitable.sort() # Key was non-unique, columns outside of the key do not count. self.assertEqual( self.clitable.table, 'Col1, Col2, Col3\na, b, c\na, e, c\nd, e, f\nd, b, f\n') # Create a new table with no explicit key. self.template = ('Value Col1 (.)\n' 'Value Col2 (.)\n' 'Value Col3 (.)\n' '\n' 'Start\n' ' ^${Col1} ${Col2} ${Col3} -> Record\n' '\n') self.template_file = StringIO(self.template) self.clitable._TemplateNamesToFiles = lambda t: [self.template_file] self.clitable.ParseCmd(self.input_data + input_data2, attributes={'Command': 'sh ver'}) # Add a manual key. self.clitable.AddKeys(['Col2']) self.clitable.sort() self.assertEqual( self.clitable.table, 'Col1, Col2, Col3\na, b, c\nd, b, f\nd, e, f\na, e, c\n') # Clear the keys. self.clitable._keys = set() # With no key, sort based on whole row. self.clitable.sort() self.assertEqual( self.clitable.table, 'Col1, Col2, Col3\na, b, c\na, e, c\nd, b, f\nd, e, f\n') def testCopy(self): """Tests copying of clitable object.""" copy.deepcopy(self.clitable) if __name__ == '__main__': unittest.main() textfsm-1.1.3/tests/copyable_regex_object_test.py000077500000000000000000000023611417470013600223070ustar00rootroot00000000000000#!/usr/bin/python # # Copyright 2012 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. See the License for the specific language governing # permissions and limitations under the License. """Tests for copyable_regex_object.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import copy import unittest from textfsm import copyable_regex_object class CopyableRegexObjectTest(unittest.TestCase): def testCopyableRegexObject(self): obj1 = copyable_regex_object.CopyableRegexObject('fo*') self.assertTrue(obj1.match('foooo')) self.assertFalse(obj1.match('bar')) obj2 = copy.copy(obj1) self.assertTrue(obj2.match('foooo')) self.assertFalse(obj2.match('bar')) if __name__ == '__main__': unittest.main() textfsm-1.1.3/tests/terminal_test.py000077500000000000000000000150451417470013600176070ustar00rootroot00000000000000#!/usr/bin/python # # Copyright 2011 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # """Unittest for terminal module.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from builtins import range from builtins import object import sys import unittest from textfsm import terminal class TerminalTest(unittest.TestCase): def setUp(self): super(TerminalTest, self).setUp() self.environ_orig = terminal.os.environ self.open_orig = terminal.os.open self.terminal_orig = terminal.TerminalSize def tearDown(self): terminal.os.environ = self.environ_orig terminal.os.open = self.open_orig terminal.TerminalSize = self.terminal_orig def testAnsiCmd(self): self.assertEqual('\033[0m', terminal._AnsiCmd(['reset'])) self.assertEqual('\033[0m', terminal._AnsiCmd(['RESET'])) self.assertEqual('\033[0;32m', terminal._AnsiCmd(['reset', 'Green'])) self.assertRaises(ValueError, terminal._AnsiCmd, ['bogus']) self.assertRaises(ValueError, terminal._AnsiCmd, ['reset', 'bogus']) def testAnsiText(self): self.assertEqual('\033[0mhello world\033[0m', terminal.AnsiText('hello world')) self.assertEqual('\033[31mhello world\033[0m', terminal.AnsiText('hello world', ['red'])) self.assertEqual('\033[31;46mhello world', terminal.AnsiText( 'hello world', ['red', 'bg_cyan'], False)) def testStripAnsi(self): text = 'ansi length' self.assertEqual(text, terminal.StripAnsiText(text)) ansi_text = '\033[5;32;44mansi\033[0m length' self.assertEqual(text, terminal.StripAnsiText(ansi_text)) def testEncloseAnsi(self): text = 'ansi length' self.assertEqual(text, terminal.EncloseAnsiText(text)) ansi_text = '\033[5;32;44mansi\033[0m length' ansi_enclosed = '\001\033[5;32;44m\002ansi\001\033[0m\002 length' self.assertEqual(ansi_enclosed, terminal.EncloseAnsiText(ansi_text)) def testTerminalSize(self): # pylint: disable=unused-argument def StubOpen(args, *kwargs): raise IOError terminal.open = StubOpen terminal.os.environ = {} # Raise exceptions on ioctl and environ and assign a default. self.assertEqual((24, 80), terminal.TerminalSize()) terminal.os.environ = {'LINES': 'bogus', 'COLUMNS': 'bogus'} self.assertEqual((24, 80), terminal.TerminalSize()) # Still raise exception on ioctl and use environ. terminal.os.environ = {'LINES': '10', 'COLUMNS': '20'} self.assertEqual((10, 20), terminal.TerminalSize()) def testLineWrap(self): terminal.TerminalSize = lambda: (5, 11) text = '' self.assertEqual(text, terminal.LineWrap(text)) text = 'one line' self.assertEqual(text, terminal.LineWrap(text)) text = 'two\nlines' self.assertEqual(text, terminal.LineWrap(text)) text = 'one line that is too long' text2 = 'one line th\nat is too l\nong' self.assertEqual(text2, terminal.LineWrap(text)) # Counting ansi characters won't matter if there are none. self.assertEqual(text2, terminal.LineWrap(text, False)) text = 'one line \033[5;32;44mthat\033[0m is too long with ansi' text2 = 'one line \033[5;32;44mth\nat\033[0m is too l\nong with an\nsi' text3 = 'one line \033[\n5;32;44mtha\nt\033[0m is to\no long with\n ansi' # Ansi does not factor and the line breaks stay the same. self.assertEqual(text2, terminal.LineWrap(text, True)) # If we count the ansi escape as characters then the line breaks change. self.assertEqual(text3, terminal.LineWrap(text, False)) # False is implicit default. self.assertEqual(text3, terminal.LineWrap(text)) # Couple of edge cases where we split on token boundary. text4 = 'ooone line \033[5;32;44mthat\033[0m is too long with ansi' text5 = 'ooone line \033[5;32;44m\nthat\033[0m is too\n long with \nansi' self.assertEqual(text5, terminal.LineWrap(text4, True)) text6 = 'e line \033[5;32;44mthat\033[0m is too long with ansi' text7 = 'e line \033[5;32;44mthat\033[0m\n is too lon\ng with ansi' self.assertEqual(text7, terminal.LineWrap(text6, True)) def testIssue1(self): self.assertEqual(10, len(terminal.StripAnsiText('boembabies' '\033[0m'))) terminal.TerminalSize = lambda: (10, 10) text1 = terminal.LineWrap('\033[32m' + 'boembabies, ' * 10 + 'boembabies' + '\033[0m', omit_sgr=True) text2 = ('\033[32m' + terminal.LineWrap('boembabies, ' * 10 + 'boembabies') + '\033[0m') self.assertEqual(text1, text2) class FakeTerminal(object): def __init__(self): self.output = '' # pylint: disable=C6409 def write(self, text): self.output += text # pylint: disable=C6409 def CountLines(self): return len(self.output.splitlines()) def flush(self): pass class PagerTest(unittest.TestCase): def setUp(self): super(PagerTest, self).setUp() sys.stdout = FakeTerminal() self.get_ch_orig = terminal.Pager._GetCh terminal.Pager._GetCh = lambda self: 'q' self.ts_orig = terminal.TerminalSize terminal.TerminalSize = lambda: (24, 80) self.p = terminal.Pager() def tearDown(self): terminal.Pager._GetCh = self.get_ch_orig terminal.TerminalSize = self.ts_orig sys.stdout = sys.__stdout__ def testPager(self): self.assertEqual(terminal.TerminalSize()[0], self.p._cli_lines) self.p.Clear() self.assertEqual('', self.p._text) self.assertEqual(0, self.p._displayed) self.assertEqual(1, self.p._lastscroll) def testPage(self): txt = '' for i in range(100): txt += '%d a random line of text here\n' % i self.p._text = txt self.p.Page() self.assertEqual(terminal.TerminalSize()[0]+2, sys.stdout.CountLines()) sys.stdout.output = '' self.p = terminal.Pager() self.p._text = '' for i in range(10): self.p._text += 'a' * 100 + '\n' self.p.Page() self.assertEqual(20, sys.stdout.CountLines()) if __name__ == '__main__': unittest.main() textfsm-1.1.3/tests/textfsm_test.py000077500000000000000000000741151417470013600174710ustar00rootroot00000000000000#!/usr/bin/python # -*- coding: utf-8 -*- # # Copyright 2010 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. See the License for the specific language governing # permissions and limitations under the License. """Unittest for textfsm module.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from builtins import str import unittest from io import StringIO import textfsm class UnitTestFSM(unittest.TestCase): """Tests the FSM engine.""" def testFSMValue(self): # Check basic line is parsed. line = r'Value beer (\S+)' v = textfsm.TextFSMValue() v.Parse(line) self.assertEqual(v.name, 'beer') self.assertEqual(v.regex, r'(\S+)') self.assertEqual(v.template, r'(?P\S+)') self.assertFalse(v.options) # Test options line = r'Value Filldown,Required beer (\S+)' v = textfsm.TextFSMValue(options_class=textfsm.TextFSMOptions) v.Parse(line) self.assertEqual(v.name, 'beer') self.assertEqual(v.regex, r'(\S+)') self.assertEqual(v.OptionNames(), ['Filldown', 'Required']) # Multiple parenthesis. v = textfsm.TextFSMValue(options_class=textfsm.TextFSMOptions) v.Parse('Value Required beer (boo(hoo))') self.assertEqual(v.name, 'beer') self.assertEqual(v.regex, '(boo(hoo))') self.assertEqual(v.template, '(?Pboo(hoo))') self.assertEqual(v.OptionNames(), ['Required']) # regex must be bounded by parenthesis. self.assertRaises(textfsm.TextFSMTemplateError, v.Parse, 'Value beer (boo(hoo)))boo') self.assertRaises(textfsm.TextFSMTemplateError, v.Parse, 'Value beer boo(boo(hoo)))') self.assertRaises(textfsm.TextFSMTemplateError, v.Parse, 'Value beer (boo)hoo)') # Escaped parentheses don't count. v = textfsm.TextFSMValue(options_class=textfsm.TextFSMOptions) v.Parse(r'Value beer (boo\)hoo)') self.assertEqual(v.name, 'beer') self.assertEqual(v.regex, r'(boo\)hoo)') self.assertRaises(textfsm.TextFSMTemplateError, v.Parse, r'Value beer (boohoo\)') self.assertRaises(textfsm.TextFSMTemplateError, v.Parse, r'Value beer (boo)hoo\)') # Unbalanced parenthesis can exist if within square "[]" braces. v = textfsm.TextFSMValue(options_class=textfsm.TextFSMOptions) v.Parse('Value beer (boo[(]hoo)') self.assertEqual(v.name, 'beer') self.assertEqual(v.regex, '(boo[(]hoo)') # Escaped braces don't count. self.assertRaises(textfsm.TextFSMTemplateError, v.Parse, r'Value beer (boo\[)\]hoo)') # String function. v = textfsm.TextFSMValue(options_class=textfsm.TextFSMOptions) v.Parse('Value Required beer (boo(hoo))') self.assertEqual(str(v), 'Value Required beer (boo(hoo))') v = textfsm.TextFSMValue(options_class=textfsm.TextFSMOptions) v.Parse( r'Value Required,Filldown beer (bo\S+(hoo))') self.assertEqual(str(v), r'Value Required,Filldown beer (bo\S+(hoo))') def testFSMRule(self): # Basic line, no action line = ' ^A beer called ${beer}' r = textfsm.TextFSMRule(line) self.assertEqual(r.match, '^A beer called ${beer}') self.assertEqual(r.line_op, '') self.assertEqual(r.new_state, '') self.assertEqual(r.record_op, '') # Multiple matches line = ' ^A $hi called ${beer}' r = textfsm.TextFSMRule(line) self.assertEqual(r.match, '^A $hi called ${beer}') self.assertEqual(r.line_op, '') self.assertEqual(r.new_state, '') self.assertEqual(r.record_op, '') # Line with action. line = ' ^A beer called ${beer} -> Next' r = textfsm.TextFSMRule(line) self.assertEqual(r.match, '^A beer called ${beer}') self.assertEqual(r.line_op, 'Next') self.assertEqual(r.new_state, '') self.assertEqual(r.record_op, '') # Line with record. line = ' ^A beer called ${beer} -> Continue.Record' r = textfsm.TextFSMRule(line) self.assertEqual(r.match, '^A beer called ${beer}') self.assertEqual(r.line_op, 'Continue') self.assertEqual(r.new_state, '') self.assertEqual(r.record_op, 'Record') # Line with new state. line = ' ^A beer called ${beer} -> Next.NoRecord End' r = textfsm.TextFSMRule(line) self.assertEqual(r.match, '^A beer called ${beer}') self.assertEqual(r.line_op, 'Next') self.assertEqual(r.new_state, 'End') self.assertEqual(r.record_op, 'NoRecord') # Bad syntax tests. self.assertRaises(textfsm.TextFSMTemplateError, textfsm.TextFSMRule, ' ^A beer called ${beer} -> Next Next Next') self.assertRaises(textfsm.TextFSMTemplateError, textfsm.TextFSMRule, ' ^A beer called ${beer} -> Boo.hoo') self.assertRaises(textfsm.TextFSMTemplateError, textfsm.TextFSMRule, ' ^A beer called ${beer} -> Continue.Record $Hi') def testRulePrefixes(self): """Test valid and invalid rule prefixes.""" # Bad syntax tests. for prefix in (' ', '.^', ' \t', ''): f = StringIO('Value unused (.)\n\nStart\n' + prefix + 'A simple string.') self.assertRaises(textfsm.TextFSMTemplateError, textfsm.TextFSM, f) # Good syntax tests. for prefix in (' ^', ' ^', '\t^'): f = StringIO('Value unused (.)\n\nStart\n' + prefix + 'A simple string.') self.assertIsNotNone(textfsm.TextFSM(f)) def testImplicitDefaultRules(self): for line in (' ^A beer called ${beer} -> Record End', ' ^A beer called ${beer} -> End', ' ^A beer called ${beer} -> Next.NoRecord End', ' ^A beer called ${beer} -> Clear End', ' ^A beer called ${beer} -> Error "Hello World"'): r = textfsm.TextFSMRule(line) self.assertEqual(str(r), line) for line in (' ^A beer called ${beer} -> Next "Hello World"', ' ^A beer called ${beer} -> Record.Next', ' ^A beer called ${beer} -> Continue End', ' ^A beer called ${beer} -> Beer End'): self.assertRaises(textfsm.TextFSMTemplateError, textfsm.TextFSMRule, line) def testSpacesAroundAction(self): for line in (' ^Hello World -> Boo', ' ^Hello World -> Boo', ' ^Hello World -> Boo'): self.assertEqual( str(textfsm.TextFSMRule(line)), ' ^Hello World -> Boo') # A '->' without a leading space is considered part of the matching line. self.assertEqual(' A simple line-> Boo -> Next', str(textfsm.TextFSMRule(' A simple line-> Boo -> Next'))) def testParseFSMVariables(self): # Trivial template to initiate object. f = StringIO('Value unused (.)\n\nStart\n') t = textfsm.TextFSM(f) # Trivial entry buf = 'Value Filldown Beer (beer)\n\n' f = StringIO(buf) t._ParseFSMVariables(f) # Single variable with commented header. buf = '# Headline\nValue Filldown Beer (beer)\n\n' f = StringIO(buf) t._ParseFSMVariables(f) self.assertEqual(str(t._GetValue('Beer')), 'Value Filldown Beer (beer)') # Multiple variables. buf = ('# Headline\n' 'Value Filldown Beer (beer)\n' 'Value Required Spirits (whiskey)\n' 'Value Filldown Wine (claret)\n' '\n') t._line_num = 0 f = StringIO(buf) t._ParseFSMVariables(f) self.assertEqual(str(t._GetValue('Beer')), 'Value Filldown Beer (beer)') self.assertEqual( str(t._GetValue('Spirits')), 'Value Required Spirits (whiskey)') self.assertEqual(str(t._GetValue('Wine')), 'Value Filldown Wine (claret)') # Multiple variables. buf = ('# Headline\n' 'Value Filldown Beer (beer)\n' ' # A comment\n' 'Value Spirits ()\n' 'Value Filldown,Required Wine ((c|C)laret)\n' '\n') f = StringIO(buf) t._ParseFSMVariables(f) self.assertEqual(str(t._GetValue('Beer')), 'Value Filldown Beer (beer)') self.assertEqual( str(t._GetValue('Spirits')), 'Value Spirits ()') self.assertEqual(str(t._GetValue('Wine')), 'Value Filldown,Required Wine ((c|C)laret)') # Malformed variables. buf = 'Value Beer (beer) beer' f = StringIO(buf) self.assertRaises(textfsm.TextFSMTemplateError, t._ParseFSMVariables, f) buf = 'Value Filldown, Required Spirits ()' f = StringIO(buf) self.assertRaises(textfsm.TextFSMTemplateError, t._ParseFSMVariables, f) buf = 'Value filldown,Required Wine ((c|C)laret)' f = StringIO(buf) self.assertRaises(textfsm.TextFSMTemplateError, t._ParseFSMVariables, f) # Values that look bad but are OK. buf = ('# Headline\n' 'Value Filldown Beer (bee(r), (and) (M)ead$)\n' '# A comment\n' 'Value Spirits,and,some ()\n' 'Value Filldown,Required Wine ((c|C)laret)\n' '\n') f = StringIO(buf) t._ParseFSMVariables(f) self.assertEqual(str(t._GetValue('Beer')), 'Value Filldown Beer (bee(r), (and) (M)ead$)') self.assertEqual( str(t._GetValue('Spirits,and,some')), 'Value Spirits,and,some ()') self.assertEqual(str(t._GetValue('Wine')), 'Value Filldown,Required Wine ((c|C)laret)') # Variable name too long. buf = ('Value Filldown ' 'nametoolong_nametoolong_nametoolo_nametoolong_nametoolong ' '(beer)\n\n') f = StringIO(buf) self.assertRaises(textfsm.TextFSMTemplateError, t._ParseFSMVariables, f) def testParseFSMState(self): f = StringIO('Value Beer (.)\nValue Wine (\\w)\n\nStart\n') t = textfsm.TextFSM(f) # Fails as we already have 'Start' state. buf = 'Start\n ^.\n' f = StringIO(buf) self.assertRaises(textfsm.TextFSMTemplateError, t._ParseFSMState, f) # Remove start so we can test new Start state. t.states = {} # Single state. buf = '# Headline\nStart\n ^.\n\n' f = StringIO(buf) t._ParseFSMState(f) self.assertEqual(str(t.states['Start'][0]), ' ^.') try: _ = t.states['Start'][1] except IndexError: pass # Multiple states. buf = '# Headline\nStart\n ^.\n ^Hello World\n ^Last-[Cc]ha$$nge\n' f = StringIO(buf) t._line_num = 0 t.states = {} t._ParseFSMState(f) self.assertEqual(str(t.states['Start'][0]), ' ^.') self.assertEqual(str(t.states['Start'][1]), ' ^Hello World') self.assertEqual(t.states['Start'][1].line_num, 4) self.assertEqual(str(t.states['Start'][2]), ' ^Last-[Cc]ha$$nge') try: _ = t.states['Start'][3] except IndexError: pass t.states = {} # Malformed states. buf = 'St%art\n ^.\n ^Hello World\n' f = StringIO(buf) self.assertRaises(textfsm.TextFSMTemplateError, t._ParseFSMState, f) buf = 'Start\n^.\n ^Hello World\n' f = StringIO(buf) self.assertRaises(textfsm.TextFSMTemplateError, t._ParseFSMState, f) buf = ' Start\n ^.\n ^Hello World\n' f = StringIO(buf) self.assertRaises(textfsm.TextFSMTemplateError, t._ParseFSMState, f) # Multiple variables and substitution (depends on _ParseFSMVariables). buf = ('# Headline\nStart\n ^.${Beer}${Wine}.\n' ' ^Hello $Beer\n ^Last-[Cc]ha$$nge\n') f = StringIO(buf) t.states = {} t._ParseFSMState(f) self.assertEqual(str(t.states['Start'][0]), ' ^.${Beer}${Wine}.') self.assertEqual(str(t.states['Start'][1]), ' ^Hello $Beer') self.assertEqual(str(t.states['Start'][2]), ' ^Last-[Cc]ha$$nge') try: _ = t.states['Start'][3] except IndexError: pass t.states['bogus'] = [] # State name too long (>32 char). buf = 'rnametoolong_nametoolong_nametoolong_nametoolong_nametoolo\n ^.\n\n' f = StringIO(buf) self.assertRaises(textfsm.TextFSMTemplateError, t._ParseFSMState, f) def testInvalidStates(self): # 'Continue' should not accept a destination. self.assertRaises(textfsm.TextFSMTemplateError, textfsm.TextFSMRule, '^.* -> Continue Start') # 'Error' accepts a text string but "next' state does not. self.assertEqual(str(textfsm.TextFSMRule(' ^ -> Error "hi there"')), ' ^ -> Error "hi there"') self.assertRaises(textfsm.TextFSMTemplateError, textfsm.TextFSMRule, '^.* -> Next "Hello World"') def testRuleStartsWithCarrot(self): f = StringIO( 'Value Beer (.)\nValue Wine (\\w)\n\nStart\n A Simple line') self.assertRaises(textfsm.TextFSMTemplateError, textfsm.TextFSM, f) def testValidateFSM(self): # No Values. f = StringIO('\nNotStart\n') self.assertRaises(textfsm.TextFSMTemplateError, textfsm.TextFSM, f) # No states. f = StringIO('Value unused (.)\n\n') self.assertRaises(textfsm.TextFSMTemplateError, textfsm.TextFSM, f) # No 'Start' state. f = StringIO('Value unused (.)\n\nNotStart\n') self.assertRaises(textfsm.TextFSMTemplateError, textfsm.TextFSM, f) # Has 'Start' state with valid destination f = StringIO('Value unused (.)\n\nStart\n') t = textfsm.TextFSM(f) t.states['Start'] = [] t.states['Start'].append(textfsm.TextFSMRule('^.* -> Start')) t._ValidateFSM() # Invalid destination. t.states['Start'].append(textfsm.TextFSMRule('^.* -> bogus')) self.assertRaises(textfsm.TextFSMTemplateError, t._ValidateFSM) # Now valid again. t.states['bogus'] = [] t.states['bogus'].append(textfsm.TextFSMRule('^.* -> Start')) t._ValidateFSM() # Valid destination with options. t.states['bogus'] = [] t.states['bogus'].append(textfsm.TextFSMRule('^.* -> Next.Record Start')) t._ValidateFSM() # Error with and without messages string. t.states['bogus'] = [] t.states['bogus'].append(textfsm.TextFSMRule('^.* -> Error')) t._ValidateFSM() t.states['bogus'].append(textfsm.TextFSMRule('^.* -> Error "Boo hoo"')) t._ValidateFSM() def testTextFSM(self): # Trivial template buf = 'Value Beer (.*)\n\nStart\n ^\\w\n' buf_result = buf f = StringIO(buf) t = textfsm.TextFSM(f) self.assertEqual(str(t), buf_result) # Slightly more complex, multple vars. buf = 'Value A (.*)\nValue B (.*)\n\nStart\n ^\\w\n\nState1\n ^.\n' buf_result = buf f = StringIO(buf) t = textfsm.TextFSM(f) self.assertEqual(str(t), buf_result) def testParseText(self): # Trivial FSM, no records produced. tplt = 'Value unused (.)\n\nStart\n ^Trivial SFM\n' t = textfsm.TextFSM(StringIO(tplt)) data = 'Non-matching text\nline1\nline 2\n' self.assertFalse(t.ParseText(data)) # Matching. data = 'Matching text\nTrivial SFM\nline 2\n' self.assertFalse(t.ParseText(data)) # Simple FSM, One Variable no options. tplt = 'Value boo (.*)\n\nStart\n ^$boo -> Next.Record\n\nEOF\n' t = textfsm.TextFSM(StringIO(tplt)) # Matching one line. # Tests 'Next' & 'Record' actions. data = 'Matching text' result = t.ParseText(data) self.assertListEqual(result, [['Matching text']]) # Matching two lines. Reseting FSM before Parsing. t.Reset() data = 'Matching text\nAnd again' result = t.ParseText(data) self.assertListEqual(result, [['Matching text'], ['And again']]) # Two Variables and singular options. tplt = ('Value Required boo (one)\nValue Filldown hoo (two)\n\n' 'Start\n ^$boo -> Next.Record\n ^$hoo -> Next.Record\n\n' 'EOF\n') t = textfsm.TextFSM(StringIO(tplt)) # Matching two lines. Only one records returned due to 'Required' flag. # Tests 'Filldown' and 'Required' options. data = 'two\none' result = t.ParseText(data) self.assertListEqual(result, [['one', 'two']]) t = textfsm.TextFSM(StringIO(tplt)) # Matching two lines. Two records returned due to 'Filldown' flag. data = 'two\none\none' t.Reset() result = t.ParseText(data) self.assertListEqual(result, [['one', 'two'], ['one', 'two']]) # Multiple Variables and options. tplt = ('Value Required,Filldown boo (one)\n' 'Value Filldown,Required hoo (two)\n\n' 'Start\n ^$boo -> Next.Record\n ^$hoo -> Next.Record\n\n' 'EOF\n') t = textfsm.TextFSM(StringIO(tplt)) data = 'two\none\none' result = t.ParseText(data) self.assertListEqual(result, [['one', 'two'], ['one', 'two']]) def testParseTextToDicts(self): # Trivial FSM, no records produced. tplt = 'Value unused (.)\n\nStart\n ^Trivial SFM\n' t = textfsm.TextFSM(StringIO(tplt)) data = 'Non-matching text\nline1\nline 2\n' self.assertFalse(t.ParseText(data)) # Matching. data = 'Matching text\nTrivial SFM\nline 2\n' self.assertFalse(t.ParseText(data)) # Simple FSM, One Variable no options. tplt = 'Value boo (.*)\n\nStart\n ^$boo -> Next.Record\n\nEOF\n' t = textfsm.TextFSM(StringIO(tplt)) # Matching one line. # Tests 'Next' & 'Record' actions. data = 'Matching text' result = t.ParseTextToDicts(data) self.assertListEqual(result, [{'boo': 'Matching text'}]) # Matching two lines. Reseting FSM before Parsing. t.Reset() data = 'Matching text\nAnd again' result = t.ParseTextToDicts(data) self.assertListEqual(result, [{'boo': 'Matching text'}, {'boo': 'And again'}]) # Two Variables and singular options. tplt = ('Value Required boo (one)\nValue Filldown hoo (two)\n\n' 'Start\n ^$boo -> Next.Record\n ^$hoo -> Next.Record\n\n' 'EOF\n') t = textfsm.TextFSM(StringIO(tplt)) # Matching two lines. Only one records returned due to 'Required' flag. # Tests 'Filldown' and 'Required' options. data = 'two\none' result = t.ParseTextToDicts(data) self.assertListEqual(result, [{'hoo': 'two', 'boo': 'one'}]) t = textfsm.TextFSM(StringIO(tplt)) # Matching two lines. Two records returned due to 'Filldown' flag. data = 'two\none\none' t.Reset() result = t.ParseTextToDicts(data) self.assertListEqual( result, [{'hoo': 'two', 'boo': 'one'}, {'hoo': 'two', 'boo': 'one'}]) # Multiple Variables and options. tplt = ('Value Required,Filldown boo (one)\n' 'Value Filldown,Required hoo (two)\n\n' 'Start\n ^$boo -> Next.Record\n ^$hoo -> Next.Record\n\n' 'EOF\n') t = textfsm.TextFSM(StringIO(tplt)) data = 'two\none\none' result = t.ParseTextToDicts(data) self.assertListEqual( result, [{'hoo': 'two', 'boo': 'one'}, {'hoo': 'two', 'boo': 'one'}]) def testParseNullText(self): # Simple FSM, One Variable no options. tplt = 'Value boo (.*)\n\nStart\n ^$boo -> Next.Record\n\n' t = textfsm.TextFSM(StringIO(tplt)) # Null string data = '' result = t.ParseText(data) self.assertListEqual(result, []) def testReset(self): tplt = 'Value boo (.*)\n\nStart\n ^$boo -> Next.Record\n\nEOF\n' t = textfsm.TextFSM(StringIO(tplt)) data = 'Matching text' result1 = t.ParseText(data) t.Reset() result2 = t.ParseText(data) self.assertListEqual(result1, result2) tplt = ('Value boo (one)\nValue hoo (two)\n\n' 'Start\n ^$boo -> State1\n\n' 'State1\n ^$hoo -> Start\n\n' 'EOF') t = textfsm.TextFSM(StringIO(tplt)) data = 'one' t.ParseText(data) t.Reset() self.assertEqual(t._cur_state[0].match, '^$boo') self.assertEqual(t._GetValue('boo').value, None) self.assertEqual(t._GetValue('hoo').value, None) self.assertEqual(t._result, []) def testClear(self): # Clear Filldown variable. # Tests 'Clear'. tplt = ('Value Required boo (on.)\n' 'Value Filldown,Required hoo (tw.)\n\n' 'Start\n ^$boo -> Next.Record\n ^$hoo -> Next.Clear') t = textfsm.TextFSM(StringIO(tplt)) data = 'one\ntwo\nonE\ntwO' result = t.ParseText(data) self.assertListEqual(result, [['onE', 'two']]) # Clearall, with Filldown variable. # Tests 'Clearall'. tplt = ('Value Filldown boo (on.)\n' 'Value Filldown hoo (tw.)\n\n' 'Start\n ^$boo -> Next.Clearall\n' ' ^$hoo') t = textfsm.TextFSM(StringIO(tplt)) data = 'one\ntwo' result = t.ParseText(data) self.assertListEqual(result, [['', 'two']]) def testContinue(self): tplt = ('Value Required boo (on.)\n' 'Value Filldown,Required hoo (on.)\n\n' 'Start\n ^$boo -> Continue\n ^$hoo -> Continue.Record') t = textfsm.TextFSM(StringIO(tplt)) data = 'one\non0' result = t.ParseText(data) self.assertListEqual(result, [['one', 'one'], ['on0', 'on0']]) def testError(self): tplt = ('Value Required boo (on.)\n' 'Value Filldown,Required hoo (on.)\n\n' 'Start\n ^$boo -> Continue\n ^$hoo -> Error') t = textfsm.TextFSM(StringIO(tplt)) data = 'one' self.assertRaises(textfsm.TextFSMError, t.ParseText, data) tplt = ('Value Required boo (on.)\n' 'Value Filldown,Required hoo (on.)\n\n' 'Start\n ^$boo -> Continue\n ^$hoo -> Error "Hello World"') t = textfsm.TextFSM(StringIO(tplt)) self.assertRaises(textfsm.TextFSMError, t.ParseText, data) def testKey(self): tplt = ('Value Required boo (on.)\n' 'Value Required,Key hoo (on.)\n\n' 'Start\n ^$boo -> Continue\n ^$hoo -> Record') t = textfsm.TextFSM(StringIO(tplt)) self.assertTrue('Key' in t._GetValue('hoo').OptionNames()) self.assertTrue('Key' not in t._GetValue('boo').OptionNames()) def testList(self): tplt = ('Value List boo (on.)\n' 'Value hoo (tw.)\n\n' 'Start\n ^$boo\n ^$hoo -> Next.Record\n\n' 'EOF') t = textfsm.TextFSM(StringIO(tplt)) data = 'one\ntwo\non0\ntw0' result = t.ParseText(data) self.assertListEqual(result, [[['one'], 'two'], [['on0'], 'tw0']]) tplt = ('Value List,Filldown boo (on.)\n' 'Value hoo (on.)\n\n' 'Start\n ^$boo -> Continue\n ^$hoo -> Next.Record\n\n' 'EOF') t = textfsm.TextFSM(StringIO(tplt)) data = 'one\non0\non1' result = t.ParseText(data) self.assertEqual(result, ([[['one'], 'one'], [['one', 'on0'], 'on0'], [['one', 'on0', 'on1'], 'on1']])) tplt = ('Value List,Required boo (on.)\n' 'Value hoo (tw.)\n\n' 'Start\n ^$boo -> Continue\n ^$hoo -> Next.Record\n\n' 'EOF') t = textfsm.TextFSM(StringIO(tplt)) data = 'one\ntwo\ntw2' result = t.ParseText(data) self.assertListEqual(result, [[['one'], 'two']]) def testNestedMatching(self): """ Ensures that List-type values with nested regex capture groups are parsed correctly as a list of dictionaries. Additionaly, another value is used with the same group-name as one of the nested groups to ensure that there are no conflicts when the same name is used. """ tplt = ( # A nested group is called "name" r"Value List foo ((?P\w+):\s+(?P\d+)\s+(?P\w{2})\s*)" "\n" # A regular value is called "name" r"Value name (\w+)" # "${name}" here refers to the Value called "name" "\n\nStart\n" r" ^\s*${foo}" "\n" r" ^\s*${name}" "\n" r" ^\s*$$ -> Record" ) t = textfsm.TextFSM(StringIO(tplt)) # Julia should be parsed as "name" separately data = " Bob: 32 NC\n Alice: 27 NY\n Jeff: 45 CA\nJulia\n\n" result = t.ParseText(data) self.assertListEqual( result, ( [[[ {'name': 'Bob', 'age': '32', 'state': 'NC'}, {'name': 'Alice', 'age': '27', 'state': 'NY'}, {'name': 'Jeff', 'age': '45', 'state': 'CA'} ], 'Julia']] ) ) def testNestedNameConflict(self): tplt = ( # Two nested groups are called "name" r"Value List foo ((?P\w+)\s+(?P\w+):\s+(?P\d+)\s+(?P\w{2})\s*)" "\nStart\n" r"^\s*${foo}" "\n ^" r"\s*$$ -> Record" ) self.assertRaises(textfsm.TextFSMTemplateError, textfsm.TextFSM, StringIO(tplt)) def testGetValuesByAttrib(self): tplt = ('Value Required boo (on.)\n' 'Value Required,List hoo (on.)\n\n' 'Start\n ^$boo -> Continue\n ^$hoo -> Record') # Explicit default. t = textfsm.TextFSM(StringIO(tplt)) self.assertEqual(t.GetValuesByAttrib('List'), ['hoo']) self.assertEqual(t.GetValuesByAttrib('Filldown'), []) result = t.GetValuesByAttrib('Required') result.sort() self.assertListEqual(result, ['boo', 'hoo']) def testStateChange(self): # Sinple state change, no actions tplt = ('Value boo (one)\nValue hoo (two)\n\n' 'Start\n ^$boo -> State1\n\nState1\n ^$hoo -> Start\n\n' 'EOF') t = textfsm.TextFSM(StringIO(tplt)) data = 'one' t.ParseText(data) self.assertEqual(t._cur_state[0].match, '^$hoo') self.assertEqual('one', t._GetValue('boo').value) self.assertEqual(None, t._GetValue('hoo').value) self.assertEqual(t._result, []) # State change with actions. tplt = ('Value boo (one)\nValue hoo (two)\n\n' 'Start\n ^$boo -> Next.Record State1\n\n' 'State1\n ^$hoo -> Start\n\n' 'EOF') t = textfsm.TextFSM(StringIO(tplt)) data = 'one' t.ParseText(data) self.assertEqual(t._cur_state[0].match, '^$hoo') self.assertEqual(None, t._GetValue('boo').value) self.assertEqual(None, t._GetValue('hoo').value) self.assertEqual(t._result, [['one', '']]) def testEOF(self): # Implicit EOF. tplt = 'Value boo (.*)\n\nStart\n ^$boo -> Next\n' t = textfsm.TextFSM(StringIO(tplt)) data = 'Matching text' result = t.ParseText(data) self.assertListEqual(result, [['Matching text']]) # EOF explicitly suppressed in template. tplt = 'Value boo (.*)\n\nStart\n ^$boo -> Next\n\nEOF\n' t = textfsm.TextFSM(StringIO(tplt)) result = t.ParseText(data) self.assertListEqual(result, []) # Implicit EOF suppressed by argument. tplt = 'Value boo (.*)\n\nStart\n ^$boo -> Next\n' t = textfsm.TextFSM(StringIO(tplt)) result = t.ParseText(data, eof=False) self.assertListEqual(result, []) def testEnd(self): # End State, EOF is skipped. tplt = 'Value boo (.*)\n\nStart\n ^$boo -> End\n ^$boo -> Record\n' t = textfsm.TextFSM(StringIO(tplt)) data = 'Matching text A\nMatching text B' result = t.ParseText(data) self.assertListEqual(result, []) # End State, with explicit Record. tplt = 'Value boo (.*)\n\nStart\n ^$boo -> Record End\n' t = textfsm.TextFSM(StringIO(tplt)) result = t.ParseText(data) self.assertListEqual(result, [['Matching text A']]) # EOF state transition is followed by implicit End State. tplt = 'Value boo (.*)\n\nStart\n ^$boo -> EOF\n ^$boo -> Record\n' t = textfsm.TextFSM(StringIO(tplt)) result = t.ParseText(data) self.assertListEqual(result, [['Matching text A']]) def testInvalidRegexp(self): tplt = 'Value boo (.$*)\n\nStart\n ^$boo -> Next\n' self.assertRaises(textfsm.TextFSMTemplateError, textfsm.TextFSM, StringIO(tplt)) def testValidRegexp(self): """RegexObjects uncopyable in Python 2.6.""" tplt = 'Value boo (fo*)\n\nStart\n ^$boo -> Record\n' t = textfsm.TextFSM(StringIO(tplt)) data = 'f\nfo\nfoo\n' result = t.ParseText(data) self.assertListEqual(result, [['f'], ['fo'], ['foo']]) def testReEnteringState(self): """Issue 2. TextFSM should leave file pointer at top of template file.""" tplt = 'Value boo (.*)\n\nStart\n ^$boo -> Next Stop\n\nStop\n ^abc\n' output_text = 'one\ntwo' tmpl_file = StringIO(tplt) t = textfsm.TextFSM(tmpl_file) t.ParseText(output_text) t = textfsm.TextFSM(tmpl_file) t.ParseText(output_text) def testFillup(self): """Fillup should work ok.""" tplt = """Value Required Col1 ([^-]+) Value Fillup Col2 ([^-]+) Value Fillup Col3 ([^-]+) Start ^$Col1 -- -- -> Record ^$Col1 $Col2 -- -> Record ^$Col1 -- $Col3 -> Record ^$Col1 $Col2 $Col3 -> Record """ data = """ 1 -- B1 2 A2 -- 3 -- B3 """ t = textfsm.TextFSM(StringIO(tplt)) result = t.ParseText(data) self.assertListEqual( result, [['1', 'A2', 'B1'], ['2', 'A2', 'B3'], ['3', '', 'B3']]) class UnitTestUnicode(unittest.TestCase): """Tests the FSM engine.""" def testFSMValue(self): # Check basic line is parsed. line = 'Value beer (\\S+Δ)' v = textfsm.TextFSMValue() v.Parse(line) self.assertEqual(v.name, 'beer') self.assertEqual(v.regex, '(\\S+Δ)') self.assertEqual(v.template, '(?P\\S+Δ)') self.assertFalse(v.options) def testFSMRule(self): # Basic line, no action line = ' ^A beer called ${beer}Δ' r = textfsm.TextFSMRule(line) self.assertEqual(r.match, '^A beer called ${beer}Δ') self.assertEqual(r.line_op, '') self.assertEqual(r.new_state, '') self.assertEqual(r.record_op, '') def testTemplateValue(self): # Complex template, multiple vars and states with comments (no var options). buf = """# Header # Header 2 Value Beer (.*) Value Wine (\\w+) # An explanation with a unicode character Δ Start ^hi there ${Wine}. -> Next.Record State1 State1 ^\\wΔ ^$Beer .. -> Start # Some comments ^$$ -> Next ^$$ -> End End # Tail comment. """ buf_result = """Value Beer (.*) Value Wine (\\w+) Start ^hi there ${Wine}. -> Next.Record State1 State1 ^\\wΔ ^$Beer .. -> Start ^$$ -> Next ^$$ -> End """ f = StringIO(buf) t = textfsm.TextFSM(f) self.assertEqual(str(t), buf_result) if __name__ == '__main__': unittest.main() textfsm-1.1.3/tests/texttable_test.py000077500000000000000000000660551417470013600177770ustar00rootroot00000000000000#!/usr/bin/python # # Copyright 2012 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. See the License for the specific language governing # permissions and limitations under the License. """Unittest for text table.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals from builtins import range import unittest from io import StringIO from textfsm import terminal from textfsm import texttable def cmp(a, b): return (a > b) - (a < b) class UnitTestRow(unittest.TestCase): """Tests texttable.Row() class.""" def setUp(self): super(UnitTestRow, self).setUp() self.row = texttable.Row() self.row._keys = ['a', 'b', 'c'] self.row._values = ['1', '2', '3'] self.row._BuildIndex() def testRowBasicMethods(self): row = texttable.Row() # Setting columns (__setitem__). row['a'] = 'one' row['b'] = 'two' row['c'] = 'three' # Access a single column (__getitem__). self.assertEqual('one', row['a']) self.assertEqual('two', row['b']) self.assertEqual('three', row['c']) # Access multiple columns (__getitem__). self.assertEqual(['one', 'three'], row[('a', 'c')]) self.assertEqual(['two', 'three'], row[('b', 'c')]) # Access integer indexes (__getitem__). self.assertEqual('one', row[0]) self.assertEqual(['two', 'three'], row[1:]) # Test "get". self.assertEqual('one', row.get('a')) self.assertEqual('one', row.get('a', 'four')) self.assertEqual('four', row.get('d', 'four')) self.assertIsNone(row.get('d')) self.assertEqual(['one', 'three'], row.get(('a', 'c'), 'four')) self.assertEqual(['one', 'four'], row.get(('a', 'd'), 'four')) self.assertEqual(['one', None], row.get(('a', 'd'))) self.assertEqual('one', row.get(0, 'four')) self.assertEqual('four', row.get(3, 'four')) self.assertIsNone(row.get(3)) # Change existing column value. row['b'] = 'Two' self.assertEqual('Two', row['b']) # Length. self.assertEqual(3, len(row)) # Contains. self.assertTrue('two' not in row) self.assertTrue('Two' in row) # Iteration. self.assertEqual(['one', 'Two', 'three'], list(row)) def testRowPublicMethods(self): self.row.header = ('x', 'y', 'z') # Header should be set, values initialised to None. self.assertEqual(['x', 'y', 'z'], self.row.header) self.assertEqual(['1', '2', '3'], self.row.values) row = texttable.Row() row.header = ('x', 'y', 'z') self.assertEqual(['x', 'y', 'z'], row.header) self.assertEqual([None, None, None], row.values) def testSetValues(self): """Tests setting row values from 'From' method.""" # Set values from Dict. self.row._SetValues({'a': 'seven', 'b': 'eight', 'c': 'nine'}) self.assertEqual(['seven', 'eight', 'nine'], self.row._values) self.row._SetValues({'b': '8', 'a': '7', 'c': '9'}) self.assertEqual(['7', '8', '9'], self.row._values) # Converts integers to string equivalents. # Excess key/value pairs are ignored. self.row._SetValues({'a': 1, 'b': 2, 'c': 3, 'd': 4}) self.assertEqual(['1', '2', '3'], self.row._values) # Values can come from a list of equal length the the keys. self.row._SetValues((7, '8', 9)) self.assertEqual(['7', '8', '9'], self.row._values) # Or from a tuple of the same length. self.row._SetValues(('vb', 'coopers', 'squires')) self.assertEqual(['vb', 'coopers', 'squires'], self.row._values) # Raise error if list length is incorrect. self.assertRaises(TypeError, self.row._SetValues, ['seven', 'eight', 'nine', 'ten']) # Raise error if row object has mismatched header. row = texttable.Row() self.row._keys = ['a'] self.row._values = ['1'] self.assertRaises(TypeError, self.row._SetValues, row) # Raise error if assigning wrong data type. self.assertRaises(TypeError, row._SetValues, 'abc') def testHeader(self): """Tests value property.""" self.row.header = ('x', 'y', 'z') self.assertEqual(['x', 'y', 'z'], self.row.header) self.assertRaises(ValueError, self.row._SetHeader, ('a', 'b', 'c', 'd')) def testValue(self): """Tests value property.""" self.row.values = {'a': 'seven', 'b': 'eight', 'c': 'nine'} self.assertEqual(['seven', 'eight', 'nine'], self.row.values) self.row.values = (7, '8', 9) self.assertEqual(['7', '8', '9'], self.row.values) def testIndex(self): """Tests Insert and Index methods.""" self.assertEqual(1, self.row.index('b')) self.assertRaises(ValueError, self.row.index, 'bogus') # Insert element within row. self.row.Insert('black', 'white', 1) self.row.Insert('red', 'yellow', -1) self.assertEqual(['a', 'black', 'b', 'red', 'c'], self.row.header) self.assertEqual(['1', 'white', '2', 'yellow', '3'], self.row.values) self.assertEqual(1, self.row.index('black')) self.assertEqual(2, self.row.index('b')) self.assertRaises(IndexError, self.row.Insert, 'grey', 'gray', 6) self.assertRaises(IndexError, self.row.Insert, 'grey', 'gray', -7) class MyRow(texttable.Row): pass class UnitTestTextTable(unittest.TestCase): # pylint: disable=invalid-name def BasicTable(self): t = texttable.TextTable() t.header = ('a', 'b', 'c') t.Append(('1', '2', '3')) t.Append(('10', '20', '30')) return t def testFilter(self): old_table = self.BasicTable() filtered_table = old_table.Filter( function=lambda row: row['a'] == '10') self.assertEqual(1, filtered_table.size) def testFilterNone(self): t = texttable.TextTable() t.header = ('a', 'b', 'c') t.Append(('', '', [])) filtered_table = t.Filter() self.assertEqual(0, filtered_table.size) def testMap(self): old_table = self.BasicTable() filtered_table = old_table.Map( function=lambda row: row['a'] == '10' and row) self.assertEqual(1, filtered_table.size) def testCustomRow(self): table = texttable.TextTable() table.header = ('a', 'b', 'c') self.assertEqual(type(texttable.Row()), type(table[0])) table = texttable.TextTable(row_class=MyRow) self.assertEqual(MyRow, table.row_class) table.header = ('a', 'b', 'c') self.assertEqual(type(MyRow()), type(table[0])) def testTableRepr(self): self.assertEqual( "TextTable('a, b, c\\n1, 2, 3\\n10, 20, 30\\n')", repr(self.BasicTable())) def testTableStr(self): self.assertEqual('a, b, c\n1, 2, 3\n10, 20, 30\n', self.BasicTable().__str__()) def testTableSetRow(self): t = self.BasicTable() t.Append(('one', 'two', 'three')) self.assertEqual(['one', 'two', 'three'], t[3].values) self.assertEqual(3, t.size) def testTableRowTypes(self): t = self.BasicTable() t.Append(('one', ['two', None], None)) self.assertEqual(['one', ['two', 'None'], 'None'], t[3].values) self.assertEqual(3, t.size) def testTableRowDictWithInt(self): t = self.BasicTable() t.Append({'a': 1, 'b': 'two', 'c': 3}) self.assertEqual(['1', 'two', '3'], t[3].values) self.assertEqual(3, t.size) def testTableRowListWithInt(self): t = self.BasicTable() t.Append([1, 'two', 3]) self.assertEqual(['1', 'two', '3'], t[3].values) self.assertEqual(3, t.size) def testTableGetRow(self): t = self.BasicTable() self.assertEqual(['1', '2', '3'], t[1].values) self.assertEqual(['1', '3'], t[1][('a', 'c')]) self.assertEqual('3', t[1][('c')]) for rnum in range(t.size): self.assertEqual(rnum, t[rnum].row) def testTableRowWith(self): t = self.BasicTable() self.assertEqual(t.RowWith('a', '10'), t[2]) self.assertRaises(IndexError, t.RowWith, 'g', '5') def testContains(self): t = self.BasicTable() self.assertTrue('a' in t) self.assertFalse('x' in t) def testIteration(self): t = self.BasicTable() index = 0 for r in t: index += 1 self.assertEqual(r, t[index]) self.assertEqual(index, r.table._iterator) # Have we iterated over all entries. self.assertEqual(index, t.size) # The iterator count is reset. self.assertEqual(0, t._iterator) # Can we iterate repeatedly. index = 0 for r in t: index += 1 self.assertEqual(r, t[index]) index1 = 0 try: for r in t: index1 += 1 index2 = 0 self.assertEqual(index1, r.table._iterator) # Test nesting of iterations. for r2 in t: index2 += 1 self.assertEqual(index2, r2.table._iterator) # Preservation of outer iterator after 'break'. if index1 == 2 and index2 == 2: break if index1 == 2: # Restoration of initial iterator after exception. raise IndexError self.assertEqual(index1, r.table._iterator) except IndexError: pass # Have we iterated over all entries - twice. self.assertEqual(index, t.size) self.assertEqual(index2, t.size) # The iterator count is reset. self.assertEqual(0, t._iterator) def testCsvToTable(self): buf = """ # A comment a,b, c, d # Trim comment # Inline comment # 1,2,3,4 1,2,3,4 5, 6, 7, 8 10, 11 # More comments. """ f = StringIO(buf) t = texttable.TextTable() self.assertEqual(2, t.CsvToTable(f)) # pylint: disable=E1101 self.assertEqual(['a', 'b', 'c', 'd'], t.header.values) self.assertEqual(['1', '2', '3', '4'], t[1].values) self.assertEqual(['5', '6', '7', '8'], t[2].values) self.assertEqual(2, t.size) def testHeaderIndex(self): t = self.BasicTable() self.assertEqual('c', t.header[2]) self.assertEqual('a', t.header[0]) def testAppend(self): t = self.BasicTable() t.Append(['10', '20', '30']) self.assertEqual(3, t.size) self.assertEqual(['10', '20', '30'], t[3].values) t.Append(('100', '200', '300')) self.assertEqual(4, t.size) self.assertEqual(['100', '200', '300'], t[4].values) t.Append(t[1]) self.assertEqual(5, t.size) self.assertEqual(['1', '2', '3'], t[5].values) t.Append({'a': '11', 'b': '12', 'c': '13'}) self.assertEqual(6, t.size) self.assertEqual(['11', '12', '13'], t[6].values) # The row index and container table should be set on new rows. self.assertEqual(6, t[6].row) self.assertEqual(t[1].table, t[6].table) self.assertRaises(TypeError, t.Append, ['20', '30']) self.assertRaises(TypeError, t.Append, ('1', '2', '3', '4')) self.assertRaises(TypeError, t.Append, {'a': '11', 'b': '12', 'd': '13'}) def testDeleteRow(self): t = self.BasicTable() self.assertEqual(2, t.size) t.Remove(1) self.assertEqual(['10', '20', '30'], t[1].values) for row in t: self.assertEqual(row, t[row.row]) t.Remove(1) self.assertFalse(t.size) def testRowNumberandParent(self): t = self.BasicTable() t.Append(['10', '20', '30']) t.Remove(1) for rownum, row in enumerate(t, start=1): self.assertEqual(row.row, rownum) self.assertEqual(row.table, t) t2 = self.BasicTable() t.table = t2 for rownum, row in enumerate(t, start=1): self.assertEqual(row.row, rownum) self.assertEqual(row.table, t) def testAddColumn(self): t = self.BasicTable() t.AddColumn('Beer') # pylint: disable=E1101 self.assertEqual(['a', 'b', 'c', 'Beer'], t.header.values) self.assertEqual(['10', '20', '30', ''], t[2].values) t.AddColumn('Wine', default='Merlot', col_index=1) self.assertEqual(['a', 'Wine', 'b', 'c', 'Beer'], t.header.values) self.assertEqual(['10', 'Merlot', '20', '30', ''], t[2].values) t.AddColumn('Spirits', col_index=-2) self.assertEqual(['a', 'Wine', 'b', 'Spirits', 'c', 'Beer'], t.header.values) self.assertEqual(['10', 'Merlot', '20', '', '30', ''], t[2].values) self.assertRaises(IndexError, t.AddColumn, 'x', col_index=6) self.assertRaises(IndexError, t.AddColumn, 'x', col_index=-7) self.assertRaises(texttable.TableError, t.AddColumn, 'b') def testAddTable(self): t = self.BasicTable() t2 = self.BasicTable() t3 = t + t2 # pylint: disable=E1101 self.assertEqual(['a', 'b', 'c'], t3.header.values) self.assertEqual(['10', '20', '30'], t3[2].values) self.assertEqual(['10', '20', '30'], t3[4].values) self.assertEqual(4, t3.size) def testExtendTable(self): t2 = self.BasicTable() t2.AddColumn('Beer') t2[1]['Beer'] = 'Lager' t2[1]['three'] = 'three' t2.Append(('one', 'two', 'three', 'Stout')) t = self.BasicTable() # Explicit key, use first column. t.extend(t2, ('a',)) # pylint: disable=E1101 self.assertEqual(['a', 'b', 'c', 'Beer'], t.header.values) # Only new columns have updated values. self.assertEqual(['1', '2', '3', 'Lager'], t[1].values) # All rows are extended. self.assertEqual(['10', '20', '30', ''], t[2].values) # The third row of 't2', is not included as there is no matching # row with the same key in the first table 't'. self.assertEqual(2, t.size) # pylint: disable=E1101 t = self.BasicTable() # If a Key is non-unique (which is a soft-error), then the first instance # on the RHS is used for and applied to all non-unique entries on the LHS. t.Append(('1', '2b', '3b')) t2.Append(('1', 'two', '', 'Ale')) t.extend(t2, ('a',)) self.assertEqual(['1', '2', '3', 'Lager'], t[1].values) self.assertEqual(['1', '2b', '3b', 'Lager'], t[3].values) t = self.BasicTable() # No explicit key, row number is used as the key. t.extend(t2) self.assertEqual(['a', 'b', 'c', 'Beer'], t.header.values) # Since row is key we pick up new values from corresponding row number. self.assertEqual(['1', '2', '3', 'Lager'], t[1].values) # All rows are still extended. self.assertEqual(['10', '20', '30', ''], t[2].values) # The third/fourth row of 't2', is not included as there is no corresponding # row in the first table 't'. self.assertEqual(2, t.size) t = self.BasicTable() t.Append(('1', 'two', '3')) t.Append(('two', '1', 'three')) t2 = texttable.TextTable() t2.header = ('a', 'b', 'c', 'Beer') t2.Append(('1', 'two', 'three', 'Stout')) # Explicitly declare which columns constitute the key. # Sometimes more than one row is needed to define a unique key (superkey). t.extend(t2, ('a', 'b')) self.assertEqual(['a', 'b', 'c', 'Beer'], t.header.values) # key '1', '2' does not equal '1', 'two', so column unmatched. self.assertEqual(['1', '2', '3', ''], t[1].values) # '1', 'two' matches but 'two', '1' does not as order is important. self.assertEqual(['1', 'two', '3', 'Stout'], t[3].values) self.assertEqual(['two', '1', 'three', ''], t[4].values) self.assertEqual(4, t.size) # Expects a texttable as the argument. self.assertRaises(AttributeError, t.extend, ['a', 'list']) # All Key column Names must be valid. self.assertRaises(IndexError, t.extend, ['a', 'list'], ('a', 'bogus')) def testTableWithLabels(self): t = self.BasicTable() self.assertEqual( '# LABEL a\n1.b 2\n1.c 3\n10.b 20\n10.c 30\n', t.LabelValueTable()) self.assertEqual( '# LABEL a\n1.b 2\n1.c 3\n10.b 20\n10.c 30\n', t.LabelValueTable(['a'])) self.assertEqual( '# LABEL a.c\n1.3.b 2\n10.30.b 20\n', t.LabelValueTable(['a', 'c'])) self.assertEqual( '# LABEL a.c\n1.3.b 2\n10.30.b 20\n', t.LabelValueTable(['c', 'a'])) self.assertRaises(texttable.TableError, t.LabelValueTable, ['a', 'z']) def testTextJustify(self): t = texttable.TextTable() self.assertEqual([' a '], t._TextJustify('a', 6)) self.assertEqual([' a b '], t._TextJustify('a b', 6)) self.assertEqual([' a b '], t._TextJustify('a b', 6)) self.assertEqual([' a ', ' b '], t._TextJustify('a b', 3)) self.assertEqual([' a ', ' b '], t._TextJustify('a b', 3)) self.assertRaises(texttable.TableError, t._TextJustify, 'a', 2) self.assertRaises(texttable.TableError, t._TextJustify, 'a bb', 3) self.assertEqual([' a b '], t._TextJustify('a\tb', 6)) self.assertEqual([' a b '], t._TextJustify('a\t\tb', 6)) self.assertEqual([' a ', ' b '], t._TextJustify('a\nb\t', 6)) def testSmallestColSize(self): t = texttable.TextTable() self.assertEqual(1, t._SmallestColSize('a')) self.assertEqual(2, t._SmallestColSize('a bb')) self.assertEqual(4, t._SmallestColSize('a cccc bb')) self.assertEqual(0, t._SmallestColSize('')) self.assertEqual(1, t._SmallestColSize('a\tb')) self.assertEqual(1, t._SmallestColSize('a\nb\tc')) self.assertEqual(3, t._SmallestColSize('a\nbbb\n\nc')) # Check if _SmallestColSize is not influenced by ANSI colors. self.assertEqual( 3, t._SmallestColSize('bbb ' + terminal.AnsiText('bb', ['red']))) def testFormattedTableColor(self): # Test to sepcify the color defined in terminal.FG_COLOR_WORDS t = texttable.TextTable() t.header = ('LSP', 'Name') t.Append(('col1', 'col2')) for color_key in terminal.FG_COLOR_WORDS: t[0].color = terminal.FG_COLOR_WORDS[color_key] t.FormattedTable() self.assertEqual(sorted(t[0].color), sorted(terminal.FG_COLOR_WORDS[color_key])) for color_key in terminal.BG_COLOR_WORDS: t[0].color = terminal.BG_COLOR_WORDS[color_key] t.FormattedTable() self.assertEqual(sorted(t[0].color), sorted(terminal.BG_COLOR_WORDS[color_key])) def testFormattedTableColoredMultilineCells(self): t = texttable.TextTable() t.header = ('LSP', 'Name') t.Append((terminal.AnsiText('col1 boembabies', ['yellow']), 'col2')) t.Append(('col1', 'col2')) self.assertEqual( ' LSP Name \n' '====================\n' ' \033[33mcol1 col2 \n' ' boembabies\033[0m \n' '--------------------\n' ' col1 col2 \n', t.FormattedTable(width=20)) def testFormattedTableColoredCells(self): t = texttable.TextTable() t.header = ('LSP', 'Name') t.Append((terminal.AnsiText('col1', ['yellow']), 'col2')) t.Append(('col1', 'col2')) self.assertEqual( ' LSP Name \n' '============\n' ' \033[33mcol1\033[0m col2 \n' ' col1 col2 \n', t.FormattedTable()) def testFormattedTableColoredHeaders(self): t = texttable.TextTable() t.header = (terminal.AnsiText('LSP', ['yellow']), 'Name') t.Append(('col1', 'col2')) t.Append(('col1', 'col2')) self.assertEqual( ' \033[33mLSP\033[0m Name \n' '============\n' ' col1 col2 \n' ' col1 col2 \n', t.FormattedTable()) self.assertEqual( ' col1 col2 \n' ' col1 col2 \n', t.FormattedTable(display_header=False)) def testFormattedTable(self): # Basic table has a single whitespace on each side of the max cell width. t = self.BasicTable() self.assertEqual( ' a b c \n' '============\n' ' 1 2 3 \n' ' 10 20 30 \n', t.FormattedTable()) # An increase in a cell size (or header), increases the side of that column. t.AddColumn('Beer') self.assertEqual( ' a b c Beer \n' '==================\n' ' 1 2 3 \n' ' 10 20 30 \n', t.FormattedTable()) self.assertEqual( ' 1 2 3 \n' ' 10 20 30 \n', t.FormattedTable(display_header=False)) # Multiple words are on one line while space permits. t.Remove(1) t.Append(('', '', '', 'James Squire')) self.assertEqual( ' a b c Beer \n' '==========================\n' ' 10 20 30 \n' ' James Squire \n', t.FormattedTable()) # Or split across rows if not enough space. # A '---' divider is inserted to give a delimiter for multiline data. self.assertEqual( ' a b c Beer \n' '====================\n' ' 10 20 30 \n' '--------------------\n' ' James \n' ' Squire \n', t.FormattedTable(20)) # Not needed below the data if last line, is needed otherwise. t.Append(('1', '2', '3', '4')) self.assertEqual( ' a b c Beer \n' '====================\n' ' 10 20 30 \n' '--------------------\n' ' James \n' ' Squire \n' '--------------------\n' ' 1 2 3 4 \n', t.FormattedTable(20)) # Multiple multi line columms. t.Remove(3) t.Append(('', 'A small essay with a longword here', '1', '2')) self.assertEqual( ' a b c Beer \n' '==========================\n' ' 10 20 30 \n' '--------------------------\n' ' James \n' ' Squire \n' '--------------------------\n' ' A small 1 2 \n' ' essay \n' ' with a \n' ' longword \n' ' here \n', t.FormattedTable(26)) # Available space is added to multiline columns proportionaly # i.e. a column with twice as much text gets twice the space. self.assertEqual( ' a b c Beer \n' '=============================\n' ' 10 20 30 \n' '-----------------------------\n' ' James \n' ' Squire \n' '-----------------------------\n' ' A small 1 2 \n' ' essay with \n' ' a longword \n' ' here \n', t.FormattedTable(29)) # Display fails if the minimum size needed is not available. # These are both 1-char less than the minimum required. self.assertRaises(texttable.TableError, t.FormattedTable, 25) t.Remove(3) t.Remove(2) self.assertRaises(texttable.TableError, t.FormattedTable, 17) t.Append(('line\nwith\n\nbreaks', 'Line with\ttabs\t\t', 'line with lots of spaces.', '4')) t[0].color = ['yellow'] self.assertEqual( '\033[33m a b c Beer \n' '==============================\033[0m\n' ' 10 20 30 \n' '------------------------------\n' ' line Line line 4 \n' ' with with with \n' ' tabs lots of \n' ' breaks spaces. \n', t.FormattedTable(30)) t[0].color = None self.assertEqual( ' a b c Beer \n' '========================================\n' ' 10 20 30 \n' '----------------------------------------\n' ' line Line line with 4 \n' ' with with lots of \n' ' tabs spaces. \n' ' breaks \n', t.FormattedTable(40)) def testFormattedTable2(self): t = texttable.TextTable() t.header = ('Host', 'Interface', 'Admin', 'Oper', 'Proto', 'Address') t.Append(('DeviceA', 'lo0', 'up', 'up', '', [])) t.Append(('DeviceA', 'lo0.0', 'up', 'up', 'inet', ['127.0.0.1', '10.100.100.1'])) t.Append(('DeviceA', 'lo0.16384', 'up', 'up', 'inet', ['127.0.0.1'])) t[-2].color = ['red'] # pylint: disable=C6310 self.assertEqual( ' Host Interface Admin Oper Proto Address \n' '==============================================================\n' ' DeviceA lo0 up up \n' '--------------------------------------------------------------\n' '\033[31m DeviceA lo0.0 up up inet 127.0.0.1, \n' ' 10.100.100.1 \033[0m\n' '--------------------------------------------------------------\n' ' DeviceA lo0.16384 up up inet 127.0.0.1 \n', t.FormattedTable(62)) # Test with specific columns only self.assertEqual( ' Host Interface Admin Oper Address \n' '==========================================================\n' ' DeviceA lo0 up up \n' '\033[31m DeviceA lo0.0 up up 127.0.0.1, 10.100.100.1 \033[0m\n' ' DeviceA lo0.16384 up up 127.0.0.1 \n', t.FormattedTable(62, columns=['Host', 'Interface', 'Admin', 'Oper', 'Address'])) def testSortTable(self): # pylint: disable=invalid-name def MakeTable(): t = texttable.TextTable() t.header = ('Col1', 'Col2', 'Col3') t.Append(('lorem', 'ipsum', 'dolor')) t.Append(('ut', 'enim', 'ad')) t.Append(('duis', 'aute', 'irure')) return t # Test basic sort table = MakeTable() table.sort() self.assertEqual(['duis', 'aute', 'irure'], table[1].values) self.assertEqual(['lorem', 'ipsum', 'dolor'], table[2].values) self.assertEqual(['ut', 'enim', 'ad'], table[3].values) # Test with different key table = MakeTable() table.sort(key=lambda x: x['Col2']) self.assertEqual(['duis', 'aute', 'irure'], table[1].values) self.assertEqual(['ut', 'enim', 'ad'], table[2].values) self.assertEqual(['lorem', 'ipsum', 'dolor'], table[3].values) # Multiple keys. table = MakeTable() table.Append(('duis', 'aute', 'aute')) table.sort(key=lambda x: x['Col2', 'Col3']) self.assertEqual(['duis', 'aute', 'aute'], table[1].values) self.assertEqual(['duis', 'aute', 'irure'], table[2].values) # Test with custom compare # pylint: disable=C6409 def compare(a, b): # Compare from 2nd char of 1st col return cmp(a[0][1:], b[0][1:]) table = MakeTable() table.sort(cmp=compare) self.assertEqual(['lorem', 'ipsum', 'dolor'], table[1].values) self.assertEqual(['ut', 'enim', 'ad'], table[2].values) self.assertEqual(['duis', 'aute', 'irure'], table[3].values) # Set the key, so the 1st col compared is 'Col2'. table.sort(key=lambda x: x['Col2'], cmp=compare) self.assertEqual(['ut', 'enim', 'ad'], table[2].values) self.assertEqual(['lorem', 'ipsum', 'dolor'], table[1].values) self.assertEqual(['duis', 'aute', 'irure'], table[3].values) # Sort in reverse order. table.sort(key=lambda x: x['Col2'], reverse=True) self.assertEqual(['lorem', 'ipsum', 'dolor'], table[1].values) self.assertEqual(['ut', 'enim', 'ad'], table[2].values) self.assertEqual(['duis', 'aute', 'irure'], table[3].values) if __name__ == '__main__': unittest.main() textfsm-1.1.3/textfsm/000077500000000000000000000000001417470013600147035ustar00rootroot00000000000000textfsm-1.1.3/textfsm/__init__.py000066400000000000000000000007001417470013600170110ustar00rootroot00000000000000"""Template based text parser. This module implements a parser, intended to be used for converting human readable text, such as command output from a router CLI, into a list of records, containing values extracted from the input text. A simple template language is used to describe a state machine to parse a specific type of text input, returning a record of values for each input entity. """ from textfsm.parser import * __version__ = '1.1.2' textfsm-1.1.3/textfsm/clitable.py000077500000000000000000000314171417470013600170450ustar00rootroot00000000000000#!/usr/bin/python # # Copyright 2012 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. See the License for the specific language governing # permissions and limitations under the License. """GCLI Table - CLI data in TextTable format. Class that reads CLI output and parses into tabular format. Supports the use of index files to map TextFSM templates to device/command output combinations and store the data in a TextTable. Is the glue between an automated command scraping program (such as RANCID) and the TextFSM output parser. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import copy import os import re import threading from builtins import object # pylint: disable=redefined-builtin from builtins import str # pylint: disable=redefined-builtin import textfsm from textfsm import copyable_regex_object from textfsm import texttable class Error(Exception): """Base class for errors.""" class IndexTableError(Error): """General INdexTable error.""" class CliTableError(Error): """General CliTable error.""" class IndexTable(object): """Class that reads and stores comma-separated values as a TextTable. Stores a compiled regexp of the value for efficient matching. Includes functions to preprocess Columns (both compiled and uncompiled). Attributes: index: TextTable, the index file parsed into a texttable. compiled: TextTable, the table but with compiled regexp for each field. """ def __init__(self, preread=None, precompile=None, file_path=None): """Create new IndexTable object. Args: preread: func, Pre-processing, applied to each field as it is read. precompile: func, Pre-compilation, applied to each field before compiling. file_path: String, Location of file to use as input. """ self.index = None self.compiled = None if file_path: self._index_file = file_path self._index_handle = open(self._index_file, 'r') self._ParseIndex(preread, precompile) def __del__(self): """Close index handle.""" if hasattr(self, '_index_handle'): self._index_handle.close() def __len__(self): """Returns number of rows in table.""" return self.index.size def __copy__(self): """Returns a copy of an IndexTable object.""" clone = IndexTable() if hasattr(self, '_index_file'): # pylint: disable=protected-access clone._index_file = self._index_file clone._index_handle = self._index_handle clone.index = self.index clone.compiled = self.compiled return clone def __deepcopy__(self, memodict=None): """Returns a deepcopy of an IndexTable object.""" clone = IndexTable() if hasattr(self, '_index_file'): # pylint: disable=protected-access clone._index_file = copy.deepcopy(self._index_file) clone._index_handle = open(clone._index_file, 'r') clone.index = copy.deepcopy(self.index) clone.compiled = copy.deepcopy(self.compiled) return clone def _ParseIndex(self, preread, precompile): """Reads index file and stores entries in TextTable. For optimisation reasons, a second table is created with compiled entries. Args: preread: func, Pre-processing, applied to each field as it is read. precompile: func, Pre-compilation, applied to each field before compiling. Raises: IndexTableError: If the column headers has illegal column labels. """ self.index = texttable.TextTable() self.index.CsvToTable(self._index_handle) if preread: for row in self.index: for col in row.header: row[col] = preread(col, row[col]) self.compiled = copy.deepcopy(self.index) for row in self.compiled: for col in row.header: if precompile: row[col] = precompile(col, row[col]) if row[col]: row[col] = copyable_regex_object.CopyableRegexObject(row[col]) def GetRowMatch(self, attributes): """Returns the row number that matches the supplied attributes.""" for row in self.compiled: try: for key in attributes: # Silently skip attributes not present in the index file. # pylint: disable=E1103 if (key in row.header and row[key] and not row[key].match(attributes[key])): # This line does not match, so break and try next row. raise StopIteration() return row.row except StopIteration: pass return 0 class CliTable(texttable.TextTable): """Class that reads CLI output and parses into tabular format. Reads an index file and uses it to map command strings to templates. It then uses TextFSM to parse the command output (raw) into a tabular format. The superkey is the set of columns that contain data that uniquely defines the row, the key is the row number otherwise. This is typically gathered from the templates 'Key' value but is extensible. Attributes: raw: String, Unparsed command string from device/command. index_file: String, file where template/command mappings reside. template_dir: String, directory where index file and templates reside. """ # Parse each template index only once across all instances. # Without this, the regexes are parsed at every call to CliTable(). _lock = threading.Lock() INDEX = {} def synchronised(func): """Synchronisation decorator.""" # pylint: disable=E0213 def Wrapper(main_obj, *args, **kwargs): main_obj._lock.acquire() # pylint: disable=W0212 try: return func(main_obj, *args, **kwargs) # pylint: disable=E1102 finally: main_obj._lock.release() # pylint: disable=W0212 return Wrapper @synchronised def __init__(self, index_file=None, template_dir=None): """Create new CLiTable object. Args: index_file: String, file where template/command mappings reside. template_dir: String, directory where index file and templates reside. """ # pylint: disable=E1002 super(CliTable, self).__init__() self._keys = set() self.raw = None self.index_file = index_file self.template_dir = template_dir if index_file: self.ReadIndex(index_file) def ReadIndex(self, index_file=None): """Reads the IndexTable index file of commands and templates. Args: index_file: String, file where template/command mappings reside. Raises: CliTableError: A template column was not found in the table. """ self.index_file = index_file or self.index_file fullpath = os.path.join(self.template_dir, self.index_file) if self.index_file and fullpath not in self.INDEX: self.index = IndexTable(self._PreParse, self._PreCompile, fullpath) self.INDEX[fullpath] = self.index else: self.index = self.INDEX[fullpath] # Does the IndexTable have the right columns. if 'Template' not in self.index.index.header: # pylint: disable=E1103 raise CliTableError("Index file does not have 'Template' column.") def _TemplateNamesToFiles(self, template_str): """Parses a string of templates into a list of file handles.""" template_list = template_str.split(':') template_files = [] try: for tmplt in template_list: template_files.append( open(os.path.join(self.template_dir, tmplt), 'r')) except: for tmplt in template_files: tmplt.close() raise return template_files def ParseCmd(self, cmd_input, attributes=None, templates=None): """Creates a TextTable table of values from cmd_input string. Parses command output with template/s. If more than one template is found subsequent tables are merged if keys match (dropped otherwise). Args: cmd_input: String, Device/command response. attributes: Dict, attribute that further refine matching template. templates: String list of templates to parse with. If None, uses index Raises: CliTableError: A template was not found for the given command. """ # Store raw command data within the object. self.raw = cmd_input if not templates: # Find template in template index. row_idx = self.index.GetRowMatch(attributes) if row_idx: templates = self.index.index[row_idx]['Template'] else: raise CliTableError('No template found for attributes: "%s"' % attributes) template_files = self._TemplateNamesToFiles(templates) try: # Re-initialise the table. self.Reset() self._keys = set() self.table = self._ParseCmdItem(self.raw, template_file=template_files[0]) # Add additional columns from any additional tables. for tmplt in template_files[1:]: self.extend(self._ParseCmdItem(self.raw, template_file=tmplt), set(self._keys)) finally: for f in template_files: f.close() def _ParseCmdItem(self, cmd_input, template_file=None): """Creates Texttable with output of command. Args: cmd_input: String, Device response. template_file: File object, template to parse with. Returns: TextTable containing command output. Raises: CliTableError: A template was not found for the given command. """ # Build FSM machine from the template. fsm = textfsm.TextFSM(template_file) if not self._keys: self._keys = set(fsm.GetValuesByAttrib('Key')) # Pass raw data through FSM. table = texttable.TextTable() table.header = fsm.header # Fill TextTable from record entries. for record in fsm.ParseText(cmd_input): table.Append(record) return table def _PreParse(self, key, value): """Executed against each field of each row read from index table.""" if key == 'Command': return re.sub(r'(\[\[.+?\]\])', self._Completion, value) else: return value def _PreCompile(self, key, value): """Executed against each field of each row before compiling as regexp.""" if key == 'Template': return else: return value def _Completion(self, match): r"""Replaces double square brackets with variable length completion. Completion cannot be mixed with regexp matching or '\' characters i.e. '[[(\n)]] would become (\(n)?)?.' Args: match: A regex Match() object. Returns: String of the format '(a(b(c(d)?)?)?)?'. """ # Strip the outer '[[' & ']]' and replace with ()? regexp pattern. word = str(match.group())[2:-2] return '(' + ('(').join(word) + ')?' * len(word) def LabelValueTable(self, keys=None): """Return LabelValue with FSM derived keys.""" keys = keys or self.superkey # pylint: disable=E1002 return super(CliTable, self).LabelValueTable(keys) # pylint: disable=W0622 def sort(self, cmp=None, key=None, reverse=False): """Overrides sort func to use the KeyValue for the key.""" if not key and self._keys: key = self.KeyValue super(CliTable, self).sort(cmp=cmp, key=key, reverse=reverse) # pylint: enable=W0622 def AddKeys(self, key_list): """Mark additional columns as being part of the superkey. Supplements the Keys already extracted from the FSM template. Useful when adding new columns to existing tables. Note: This will impact attempts to further 'extend' the table as the superkey must be common between tables for successful extension. Args: key_list: list of header entries to be included in the superkey. Raises: KeyError: If any entry in list is not a valid header entry. """ for keyname in key_list: if keyname not in self.header: raise KeyError("'%s'" % keyname) self._keys = self._keys.union(set(key_list)) @property def superkey(self): """Returns a set of column names that together constitute the superkey.""" sorted_list = [] for header in self.header: if header in self._keys: sorted_list.append(header) return sorted_list def KeyValue(self, row=None): """Returns the super key value for the row.""" if not row: if self._iterator: # If we are inside an iterator use current row iteration. row = self[self._iterator] else: row = self.row # If no superkey then use row number. if not self.superkey: return ['%s' % row.row] sorted_list = [] for header in self.header: if header in self.superkey: sorted_list.append(row[header]) return sorted_list textfsm-1.1.3/textfsm/copyable_regex_object.py000077500000000000000000000023501417470013600215760ustar00rootroot00000000000000#!/usr/bin/python # # Copyright 2012 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. See the License for the specific language governing # permissions and limitations under the License. """Work around a regression in Python 2.6 that makes RegexObjects uncopyable.""" import re from builtins import object # pylint: disable=redefined-builtin class CopyableRegexObject(object): """Like a re.RegexObject, but can be copied.""" def __init__(self, pattern): self.pattern = pattern self.regex = re.compile(pattern) def match(self, *args, **kwargs): return self.regex.match(*args, **kwargs) def sub(self, *args, **kwargs): return self.regex.sub(*args, **kwargs) def __copy__(self): return CopyableRegexObject(self.pattern) def __deepcopy__(self, unused_memo): return self.__copy__() textfsm-1.1.3/textfsm/parser.py000077500000000000000000001017521417470013600165620ustar00rootroot00000000000000#!/usr/bin/env python # # Copyright 2010 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. See the License for the specific language governing # permissions and limitations under the License. """Template based text parser. This module implements a parser, intended to be used for converting human readable text, such as command output from a router CLI, into a list of records, containing values extracted from the input text. A simple template language is used to describe a state machine to parse a specific type of text input, returning a record of values for each input entity. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import getopt import inspect import re import string import sys from builtins import object # pylint: disable=redefined-builtin from builtins import str # pylint: disable=redefined-builtin from builtins import zip # pylint: disable=redefined-builtin import six class Error(Exception): """Base class for errors.""" class Usage(Exception): """Error in command line execution.""" class TextFSMError(Error): """Error in the FSM state execution.""" class TextFSMTemplateError(Error): """Errors while parsing templates.""" # The below exceptions are internal state change triggers # and not used as Errors. class FSMAction(Exception): """Base class for actions raised with the FSM.""" class SkipRecord(FSMAction): """Indicate a record is to be skipped.""" class SkipValue(FSMAction): """Indicate a value is to be skipped.""" class TextFSMOptions(object): """Class containing all valid TextFSMValue options. Each nested class here represents a TextFSM option. The format is "option". Each class may override any of the methods inside the OptionBase class. A user of this module can extend options by subclassing TextFSMOptionsBase, adding the new option class(es), then passing that new class to the TextFSM constructor with the 'option_class' argument. """ class OptionBase(object): """Factory methods for option class. Attributes: name: The name of the option. value: A TextFSMValue, the parent Value. """ def __init__(self, value): self.value = value @property def name(self): return self.__class__.__name__.replace('option', '') def OnCreateOptions(self): """Called after all options have been parsed for a Value.""" def OnClearVar(self): """Called when value has been cleared.""" def OnClearAllVar(self): """Called when a value has clearalled.""" def OnAssignVar(self): """Called when a matched value is being assigned.""" def OnGetValue(self): """Called when the value name is being requested.""" def OnSaveRecord(self): """Called just prior to a record being committed.""" @classmethod def ValidOptions(cls): """Returns a list of valid option names.""" valid_options = [] for obj_name in dir(cls): obj = getattr(cls, obj_name) if inspect.isclass(obj) and issubclass(obj, cls.OptionBase): valid_options.append(obj_name) return valid_options @classmethod def GetOption(cls, name): """Returns the class of the requested option name.""" return getattr(cls, name) class Required(OptionBase): """The Value must be non-empty for the row to be recorded.""" def OnSaveRecord(self): if not self.value.value: raise SkipRecord class Filldown(OptionBase): """Value defaults to the previous line's value.""" def OnCreateOptions(self): self._myvar = None def OnAssignVar(self): self._myvar = self.value.value def OnClearVar(self): self.value.value = self._myvar def OnClearAllVar(self): self._myvar = None class Fillup(OptionBase): """Like Filldown, but upwards until it finds a non-empty entry.""" def OnAssignVar(self): # If value is set, copy up the results table, until we # see a set item. if self.value.value: # Get index of relevant result column. value_idx = self.value.fsm.values.index(self.value) # Go up the list from the end until we see a filled value. # pylint: disable=protected-access for result in reversed(self.value.fsm._result): if result[value_idx]: # Stop when a record has this column already. break # Otherwise set the column value. result[value_idx] = self.value.value class Key(OptionBase): """Value constitutes part of the Key of the record.""" class List(OptionBase): r""" Value takes the form of a list. If the value regex contains nested match groups in the form (?Pregex), instead of adding a string to the list, we add a dictionary of the groups. Eg. Value List ((?P\w+)\s+(?P\d+)) would create results like: [{'name': 'Bob', 'age': 32}] Do not give nested groups the same name as other values in the template. """ def OnCreateOptions(self): self.OnClearAllVar() def OnAssignVar(self): # Nested matches will have more than one match group if self.value.compiled_regex.groups > 1: match = self.value.compiled_regex.match(self.value.value) else: match = None # If the List-value regex has match-groups defined, add the resulting # dict to the list. Otherwise, add the string that was matched if match and match.groupdict(): self._value.append(match.groupdict()) else: self._value.append(self.value.value) def OnClearVar(self): if 'Filldown' not in self.value.OptionNames(): self._value = [] def OnClearAllVar(self): self._value = [] def OnSaveRecord(self): self.value.value = list(self._value) class TextFSMValue(object): """A TextFSM value. A value has syntax like: 'Value Filldown,Required helloworld (.*)' Where 'Value' is a keyword. 'Filldown' and 'Required' are options. 'helloworld' is the value name. '(.*) is the regular expression to match in the input data. Attributes: compiled_regex: (regexp), Compiled regex for nested matching of List values. max_name_len: (int), maximum character length os a variable name. name: (str), Name of the value. options: (list), A list of current Value Options. regex: (str), Regex which the value is matched on. template: (str), regexp with named groups added. fsm: A TextFSMBase(), the containing FSM. value: (str), the current value. """ # The class which contains valid options. def __init__(self, fsm=None, max_name_len=48, options_class=None): """Initialise a new TextFSMValue.""" self.max_name_len = max_name_len self.name = None self.options = [] self.regex = None self.value = None self.fsm = fsm self._options_cls = options_class def AssignVar(self, value): """Assign a value to this Value.""" self.value = value # Call OnAssignVar on options. _ = [option.OnAssignVar() for option in self.options] def ClearVar(self): """Clear this Value.""" self.value = None # Call OnClearVar on options. _ = [option.OnClearVar() for option in self.options] def ClearAllVar(self): """Clear this Value.""" self.value = None # Call OnClearAllVar on options. _ = [option.OnClearAllVar() for option in self.options] def Header(self): """Fetch the header name of this Value.""" # Call OnGetValue on options. _ = [option.OnGetValue() for option in self.options] return self.name def OptionNames(self): """Returns a list of option names for this Value.""" return [option.name for option in self.options] def Parse(self, value): """Parse a 'Value' declaration. Args: value: String line from a template file, must begin with 'Value '. Raises: TextFSMTemplateError: Value declaration contains an error. """ value_line = value.split(' ') if len(value_line) < 3: raise TextFSMTemplateError('Expect at least 3 tokens on line.') if not value_line[2].startswith('('): # Options are present options = value_line[1] for option in options.split(','): self._AddOption(option) # Call option OnCreateOptions callbacks _ = [option.OnCreateOptions() for option in self.options] self.name = value_line[2] self.regex = ' '.join(value_line[3:]) else: # There were no valid options, so there are no options. # Treat this argument as the name. self.name = value_line[1] self.regex = ' '.join(value_line[2:]) if len(self.name) > self.max_name_len: raise TextFSMTemplateError( "Invalid Value name '%s' or name too long." % self.name) if self.regex[0]!='(' or self.regex[-1]!=')' or self.regex[-2]=='\\': raise TextFSMTemplateError( "Value '%s' must be contained within a '()' pair." % self.regex) try: compiled_regex = re.compile(self.regex) except re.error as e: raise TextFSMTemplateError(str(e)) self.template = re.sub(r'^\(', '(?P<%s>' % self.name, self.regex) # Compile and store the regex object only on List-type values for use in # nested matching if any([isinstance(x, TextFSMOptions.List) for x in self.options]): self.compiled_regex = compiled_regex def _AddOption(self, name): """Add an option to this Value. Args: name: (str), the name of the Option to add. Raises: TextFSMTemplateError: If option is already present or the option does not exist. """ # Check for duplicate option declaration if name in [option.name for option in self.options]: raise TextFSMTemplateError('Duplicate option "%s"' % name) # Create the option object try: option = self._options_cls.GetOption(name)(self) except AttributeError: raise TextFSMTemplateError('Unknown option "%s"' % name) self.options.append(option) def OnSaveRecord(self): """Called just prior to a record being committed.""" _ = [option.OnSaveRecord() for option in self.options] def __str__(self): """Prints out the FSM Value, mimic the input file.""" if self.options: return 'Value %s %s %s' % ( ','.join(self.OptionNames()), self.name, self.regex) else: return 'Value %s %s' % (self.name, self.regex) class CopyableRegexObject(object): """Like a re.RegexObject, but can be copied.""" def __init__(self, pattern): self.pattern = pattern self.regex = re.compile(pattern) def match(self, *args, **kwargs): return self.regex.match(*args, **kwargs) def sub(self, *args, **kwargs): return self.regex.sub(*args, **kwargs) def __copy__(self): return CopyableRegexObject(self.pattern) def __deepcopy__(self, unused_memo): return self.__copy__() class TextFSMRule(object): """A rule in each FSM state. A value has syntax like: ^ -> Next.Record State2 Where '' is a regular expression. 'Next' is a Line operator. 'Record' is a Record operator. 'State2' is the next State. Attributes: match: Regex to match this rule. regex: match after template substitution. line_op: Operator on input line on match. record_op: Operator on output record on match. new_state: Label to jump to on action regex_obj: Compiled regex for which the rule matches. line_num: Integer row number of Value. """ # Implicit default is '(regexp) -> Next.NoRecord' MATCH_ACTION = re.compile(r'(?P.*)(\s->(?P.*))') # The structure to the right of the '->'. LINE_OP = ('Continue', 'Next', 'Error') RECORD_OP = ('Clear', 'Clearall', 'Record', 'NoRecord') # Line operators. LINE_OP_RE = '(?P%s)' % '|'.join(LINE_OP) # Record operators. RECORD_OP_RE = '(?P%s)' % '|'.join(RECORD_OP) # Line operator with optional record operator. OPERATOR_RE = r'(%s(\.%s)?)' % (LINE_OP_RE, RECORD_OP_RE) # New State or 'Error' string. NEWSTATE_RE = r'(?P\w+|\".*\")' # Compound operator (line and record) with optional new state. ACTION_RE = re.compile(r'\s+%s(\s+%s)?$' % (OPERATOR_RE, NEWSTATE_RE)) # Record operator with optional new state. ACTION2_RE = re.compile(r'\s+%s(\s+%s)?$' % (RECORD_OP_RE, NEWSTATE_RE)) # Default operators with optional new state. ACTION3_RE = re.compile(r'(\s+%s)?$' % (NEWSTATE_RE)) def __init__(self, line, line_num=-1, var_map=None): """Initialise a new rule object. Args: line: (str), a template rule line to parse. line_num: (int), Optional line reference included in error reporting. var_map: Map for template (${var}) substitutions. Raises: TextFSMTemplateError: If 'line' is not a valid format for a Value entry. """ self.match = '' self.regex = '' self.regex_obj = None self.line_op = '' # Equivalent to 'Next'. self.record_op = '' # Equivalent to 'NoRecord'. self.new_state = '' # Equivalent to current state. self.line_num = line_num line = line.strip() if not line: raise TextFSMTemplateError('Null data in FSMRule. Line: %s' % self.line_num) # Is there '->' action present. match_action = self.MATCH_ACTION.match(line) if match_action: self.match = match_action.group('match') else: self.match = line # Replace ${varname} entries. self.regex = self.match if var_map: try: self.regex = string.Template(self.match).substitute(var_map) except (ValueError, KeyError): raise TextFSMTemplateError( "Duplicate or invalid variable substitution: '%s'. Line: %s." % (self.match, self.line_num)) try: # Work around a regression in Python 2.6 that makes RE Objects uncopyable. self.regex_obj = CopyableRegexObject(self.regex) except re.error: raise TextFSMTemplateError( "Invalid regular expression: '%s'. Line: %s." % (self.regex, self.line_num)) # No '->' present, so done. if not match_action: return # Attempt to match line.record operation. action_re = self.ACTION_RE.match(match_action.group('action')) if not action_re: # Attempt to match record operation. action_re = self.ACTION2_RE.match(match_action.group('action')) if not action_re: # Math implicit defaults with an optional new state. action_re = self.ACTION3_RE.match(match_action.group('action')) if not action_re: # Last attempt, match an optional new state only. raise TextFSMTemplateError("Badly formatted rule '%s'. Line: %s." % (line, self.line_num)) # We have an Line operator. if 'ln_op' in action_re.groupdict() and action_re.group('ln_op'): self.line_op = action_re.group('ln_op') # We have a record operator. if 'rec_op' in action_re.groupdict() and action_re.group('rec_op'): self.record_op = action_re.group('rec_op') # A new state was specified. if 'new_state' in action_re.groupdict() and action_re.group('new_state'): self.new_state = action_re.group('new_state') # Only 'Next' (or implicit 'Next') line operator can have a new_state. # But we allow error to have one as a warning message so we are left # checking that Continue does not. if self.line_op == 'Continue' and self.new_state: raise TextFSMTemplateError( "Action '%s' with new state %s specified. Line: %s." % (self.line_op, self.new_state, self.line_num)) # Check that an error message is present only with the 'Error' operator. if self.line_op != 'Error' and self.new_state: if not re.match(r'\w+', self.new_state): raise TextFSMTemplateError( 'Alphanumeric characters only in state names. Line: %s.' % (self.line_num)) def __str__(self): """Prints out the FSM Rule, mimic the input file.""" operation = '' if self.line_op and self.record_op: operation = '.' operation = '%s%s%s' % (self.line_op, operation, self.record_op) if operation and self.new_state: new_state = ' ' + self.new_state else: new_state = self.new_state # Print with implicit defaults. if not (operation or new_state): return ' %s' % self.match # Non defaults. return ' %s -> %s%s' % (self.match, operation, new_state) class TextFSM(object): """Parses template and creates Finite State Machine (FSM). Attributes: states: (str), Dictionary of FSMState objects. values: (str), List of FSMVariables. value_map: (map), For substituting values for names in the expressions. header: Ordered list of values. state_list: Ordered list of valid states. """ # Variable and State name length. MAX_NAME_LEN = 48 comment_regex = re.compile(r'^\s*#') state_name_re = re.compile(r'^(\w+)$') _DEFAULT_OPTIONS = TextFSMOptions def __init__(self, template, options_class=_DEFAULT_OPTIONS): """Initialises and also parses the template file.""" self._options_cls = options_class self.states = {} # Track order of state definitions. self.state_list = [] self.values = [] self.value_map = {} # Track where we are for error reporting. self._line_num = 0 # Run FSM in this state self._cur_state = None # Name of the current state. self._cur_state_name = None # Read and parse FSM definition. # Restore the file pointer once done. try: self._Parse(template) finally: template.seek(0) # Initialise starting data. self.Reset() def __str__(self): """Returns the FSM template, mimicing the input file.""" result = '\n'.join([str(value) for value in self.values]) result += '\n' for state in self.state_list: result += '\n%s\n' % state state_rules = '\n'.join([str(rule) for rule in self.states[state]]) if state_rules: result += state_rules + '\n' return result def Reset(self): """Preserves FSM but resets starting state and current record.""" # Current state is Start state. self._cur_state = self.states['Start'] self._cur_state_name = 'Start' # Clear table of results and current record. self._result = [] self._ClearAllRecord() @property def header(self): """Returns header.""" return self._GetHeader() def _GetHeader(self): """Returns header.""" header = [] for value in self.values: try: header.append(value.Header()) except SkipValue: continue return header def _GetValue(self, name): """Returns the TextFSMValue object natching the requested name.""" for value in self.values: if value.name == name: return value def _AppendRecord(self): """Adds current record to result if well formed.""" # If no Values then don't output. if not self.values: return cur_record = [] for value in self.values: try: value.OnSaveRecord() except SkipRecord: self._ClearRecord() return except SkipValue: continue # Build current record into a list. cur_record.append(value.value) # If no Values in template or whole record is empty then don't output. if len(cur_record) == (cur_record.count(None) + cur_record.count([])): return # Replace any 'None' entries with null string ''. while None in cur_record: cur_record[cur_record.index(None)] = '' self._result.append(cur_record) self._ClearRecord() def _Parse(self, template): """Parses template file for FSM structure. Args: template: Valid template file. Raises: TextFSMTemplateError: If template file syntax is invalid. """ if not template: raise TextFSMTemplateError('Null template.') # Parse header with Variables. self._ParseFSMVariables(template) # Parse States. while self._ParseFSMState(template): pass # Validate destination states. self._ValidateFSM() def _ParseFSMVariables(self, template): """Extracts Variables from start of template file. Values are expected as a contiguous block at the head of the file. These will be line separated from the State definitions that follow. Args: template: Valid template file, with Value definitions at the top. Raises: TextFSMTemplateError: If syntax or semantic errors are found. """ self.values = [] for line in template: self._line_num += 1 line = line.rstrip() # Blank line signifies end of Value definitions. if not line: return if not isinstance(line, six.string_types): line = line.decode('utf-8') # Skip commented lines. if self.comment_regex.match(line): continue if line.startswith('Value '): try: value = TextFSMValue( fsm=self, max_name_len=self.MAX_NAME_LEN, options_class=self._options_cls) value.Parse(line) except TextFSMTemplateError as error: raise TextFSMTemplateError('%s Line %s.' % (error, self._line_num)) if value.name in self.header: raise TextFSMTemplateError( "Duplicate declarations for Value '%s'. Line: %s." % (value.name, self._line_num)) try: self._ValidateOptions(value) except TextFSMTemplateError as error: raise TextFSMTemplateError('%s Line %s.' % (error, self._line_num)) self.values.append(value) self.value_map[value.name] = value.template # The line has text but without the 'Value ' prefix. elif not self.values: raise TextFSMTemplateError('No Value definitions found.') else: raise TextFSMTemplateError( 'Expected blank line after last Value entry. Line: %s.' % (self._line_num)) def _ValidateOptions(self, value): """Checks that combination of Options is valid.""" # Always passes in base class. pass def _ParseFSMState(self, template): """Extracts State and associated Rules from body of template file. After the Value definitions the remainder of the template is state definitions. The routine is expected to be called iteratively until no more states remain - indicated by returning None. The routine checks that the state names are a well formed string, do not clash with reserved names and are unique. Args: template: Valid template file after Value definitions have already been read. Returns: Name of the state parsed from file. None otherwise. Raises: TextFSMTemplateError: If any state definitions are invalid. """ if not template: return state_name = '' # Strip off extra white space lines (including comments). for line in template: self._line_num += 1 line = line.rstrip() if not isinstance(line, six.string_types): line = line.decode('utf-8') # First line is state definition if line and not self.comment_regex.match(line): # Ensure statename has valid syntax and is not a reserved word. if (not self.state_name_re.match(line) or len(line) > self.MAX_NAME_LEN or line in TextFSMRule.LINE_OP or line in TextFSMRule.RECORD_OP): raise TextFSMTemplateError("Invalid state name: '%s'. Line: %s" % (line, self._line_num)) state_name = line if state_name in self.states: raise TextFSMTemplateError("Duplicate state name: '%s'. Line: %s" % (line, self._line_num)) self.states[state_name] = [] self.state_list.append(state_name) break # Parse each rule in the state. for line in template: self._line_num += 1 line = line.rstrip() # Finish rules processing on blank line. if not line: break if not isinstance(line, six.string_types): line = line.decode('utf-8') if self.comment_regex.match(line): continue # A rule within a state, starts with 1 or 2 spaces, or a tab. if not line.startswith((' ^', ' ^', '\t^')): raise TextFSMTemplateError( "Missing white space or carat ('^') before rule. Line: %s" % self._line_num) self.states[state_name].append( TextFSMRule(line, self._line_num, self.value_map)) return state_name def _ValidateFSM(self): """Checks state names and destinations for validity. Each destination state must exist, be a valid name and not be a reserved name. There must be a 'Start' state and if 'EOF' or 'End' states are specified, they must be empty. Returns: True if FSM is valid. Raises: TextFSMTemplateError: If any state definitions are invalid. """ # Must have 'Start' state. if 'Start' not in self.states: raise TextFSMTemplateError("Missing state 'Start'.") # 'End/EOF' state (if specified) must be empty. if self.states.get('End'): raise TextFSMTemplateError("Non-Empty 'End' state.") if self.states.get('EOF'): raise TextFSMTemplateError("Non-Empty 'EOF' state.") # Remove 'End' state. if 'End' in self.states: del self.states['End'] self.state_list.remove('End') # Ensure jump states are all valid. for state in self.states: for rule in self.states[state]: if rule.line_op == 'Error': continue if not rule.new_state or rule.new_state in ('End', 'EOF'): continue if rule.new_state not in self.states: raise TextFSMTemplateError( "State '%s' not found, referenced in state '%s'" % (rule.new_state, state)) return True def ParseText(self, text, eof=True): """Passes CLI output through FSM and returns list of tuples. First tuple is the header, every subsequent tuple is a row. Args: text: (str), Text to parse with embedded newlines. eof: (boolean), Set to False if we are parsing only part of the file. Suppresses triggering EOF state. Raises: TextFSMError: An error occurred within the FSM. Returns: List of Lists. """ lines = [] if text: lines = text.splitlines() for line in lines: self._CheckLine(line) if self._cur_state_name in ('End', 'EOF'): break if self._cur_state_name != 'End' and 'EOF' not in self.states and eof: # Implicit EOF performs Next.Record operation. # Suppressed if Null EOF state is instantiated. self._AppendRecord() return self._result def ParseTextToDicts(self, *args, **kwargs): """Calls ParseText and turns the result into list of dicts. List items are dicts of rows, dict key is column header and value is column value. Args: text: (str), Text to parse with embedded newlines. eof: (boolean), Set to False if we are parsing only part of the file. Suppresses triggering EOF state. Raises: TextFSMError: An error occurred within the FSM. Returns: List of dicts. """ result_lists = self.ParseText(*args, **kwargs) result_dicts = [] for row in result_lists: result_dicts.append(dict(zip(self.header, row))) return result_dicts def _CheckLine(self, line): """Passes the line through each rule until a match is made. Args: line: A string, the current input line. """ for rule in self._cur_state: matched = self._CheckRule(rule, line) if matched: for value in matched.groupdict(): self._AssignVar(matched, value) if self._Operations(rule, line): # Not a Continue so check for state transition. if rule.new_state: if rule.new_state not in ('End', 'EOF'): self._cur_state = self.states[rule.new_state] self._cur_state_name = rule.new_state break def _CheckRule(self, rule, line): """Check a line against the given rule. This is a separate method so that it can be overridden by a debugging tool. Args: rule: A TextFSMRule(), the rule to check. line: A str, the line to check. Returns: A regex match object. """ return rule.regex_obj.match(line) def _AssignVar(self, matched, value): """Assigns variable into current record from a matched rule. If a record entry is a list then append, otherwise values are replaced. Args: matched: (regexp.match) Named group for each matched value. value: (str) The matched value. """ _value = self._GetValue(value) if _value is not None: _value.AssignVar(matched.group(value)) def _Operations(self, rule, line): """Operators on the data record. Operators come in two parts and are a '.' separated pair: Operators that effect the input line or the current state (line_op). 'Next' Get next input line and restart parsing (default). 'Continue' Keep current input line and continue resume parsing. 'Error' Unrecoverable input discard result and raise Error. Operators that affect the record being built for output (record_op). 'NoRecord' Does nothing (default) 'Record' Adds the current record to the result. 'Clear' Clears non-Filldown data from the record. 'Clearall' Clears all data from the record. Args: rule: FSMRule object. line: A string, the current input line. Returns: True if state machine should restart state with new line. Raises: TextFSMError: If Error state is encountered. """ # First process the Record operators. if rule.record_op == 'Record': self._AppendRecord() elif rule.record_op == 'Clear': # Clear record. self._ClearRecord() elif rule.record_op == 'Clearall': # Clear all record entries. self._ClearAllRecord() # Lastly process line operators. if rule.line_op == 'Error': if rule.new_state: raise TextFSMError('Error: %s. Rule Line: %s. Input Line: %s.' % (rule.new_state, rule.line_num, line)) raise TextFSMError('State Error raised. Rule Line: %s. Input Line: %s' % (rule.line_num, line)) elif rule.line_op == 'Continue': # Continue with current line without returning to the start of the state. return False # Back to start of current state with a new line. return True def _ClearRecord(self): """Remove non 'Filldown' record entries.""" _ = [value.ClearVar() for value in self.values] def _ClearAllRecord(self): """Remove all record entries.""" _ = [value.ClearAllVar() for value in self.values] def GetValuesByAttrib(self, attribute): """Returns the list of values that have a particular attribute.""" if attribute not in self._options_cls.ValidOptions(): raise ValueError("'%s': Not a valid attribute." % attribute) result = [] for value in self.values: if attribute in value.OptionNames(): result.append(value.name) return result def main(argv=None): """Validate text parsed with FSM or validate an FSM via command line.""" if argv is None: argv = sys.argv try: opts, args = getopt.getopt(argv[1:], 'h', ['help']) except getopt.error as msg: raise Usage(msg) for opt, _ in opts: if opt in ('-h', '--help'): print(__doc__) print(help_msg) return 0 if not args or len(args) > 4: raise Usage('Invalid arguments.') # If we have an argument, parse content of file and display as a template. # Template displayed will match input template, minus any comment lines. with open(args[0], 'r') as template: fsm = TextFSM(template) print('FSM Template:\n%s\n' % fsm) if len(args) > 1: # Second argument is file with example cli input. # Prints parsed tabular result. with open(args[1], 'r') as f: cli_input = f.read() table = fsm.ParseText(cli_input) print('FSM Table:') result = str(fsm.header) + '\n' for line in table: result += str(line) + '\n' print(result, end='') if len(args) > 2: # Compare tabular result with data in third file argument. # Exit value indicates if processed data matched expected result. with open(args[2], 'r') as f: ref_table = f.read() if ref_table != result: print('Data mis-match!') return 1 else: print('Data match!') if __name__ == '__main__': help_msg = '%s [--help] template [input_file [output_file]]\n' % sys.argv[0] try: sys.exit(main()) except Usage as err: print(err, file=sys.stderr) print('For help use --help', file=sys.stderr) sys.exit(2) except (IOError, TextFSMError, TextFSMTemplateError) as err: print(err, file=sys.stderr) sys.exit(2) textfsm-1.1.3/textfsm/terminal.py000077500000000000000000000344271417470013600171050ustar00rootroot00000000000000#!/usr/bin/python # # Copyright 2011 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # """Simple terminal related routines.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals try: # Import fails on Windows machines. import fcntl import termios import tty except (ImportError, ModuleNotFoundError): pass import getopt import os import re import struct import sys import time from builtins import object # pylint: disable=redefined-builtin from builtins import str # pylint: disable=redefined-builtin __version__ = '0.1.1' # ANSI, ISO/IEC 6429 escape sequences, SGR (Select Graphic Rendition) subset. SGR = { 'reset': 0, 'bold': 1, 'underline': 4, 'blink': 5, 'negative': 7, 'underline_off': 24, 'blink_off': 25, 'positive': 27, 'black': 30, 'red': 31, 'green': 32, 'yellow': 33, 'blue': 34, 'magenta': 35, 'cyan': 36, 'white': 37, 'fg_reset': 39, 'bg_black': 40, 'bg_red': 41, 'bg_green': 42, 'bg_yellow': 43, 'bg_blue': 44, 'bg_magenta': 45, 'bg_cyan': 46, 'bg_white': 47, 'bg_reset': 49, } # Provide a familar descriptive word for some ansi sequences. FG_COLOR_WORDS = {'black': ['black'], 'dark_gray': ['bold', 'black'], 'blue': ['blue'], 'light_blue': ['bold', 'blue'], 'green': ['green'], 'light_green': ['bold', 'green'], 'cyan': ['cyan'], 'light_cyan': ['bold', 'cyan'], 'red': ['red'], 'light_red': ['bold', 'red'], 'purple': ['magenta'], 'light_purple': ['bold', 'magenta'], 'brown': ['yellow'], 'yellow': ['bold', 'yellow'], 'light_gray': ['white'], 'white': ['bold', 'white']} BG_COLOR_WORDS = {'black': ['bg_black'], 'red': ['bg_red'], 'green': ['bg_green'], 'yellow': ['bg_yellow'], 'dark_blue': ['bg_blue'], 'purple': ['bg_magenta'], 'light_blue': ['bg_cyan'], 'grey': ['bg_white']} # Characters inserted at the start and end of ANSI strings # to provide hinting for readline and other clients. ANSI_START = '\001' ANSI_END = '\002' sgr_re = re.compile(r'(%s?\033\[\d+(?:;\d+)*m%s?)' % ( ANSI_START, ANSI_END)) class Error(Exception): """The base error class.""" class Usage(Error): """Command line format error.""" def _AnsiCmd(command_list): """Takes a list of SGR values and formats them as an ANSI escape sequence. Args: command_list: List of strings, each string represents an SGR value. e.g. 'fg_blue', 'bg_yellow' Returns: The ANSI escape sequence. Raises: ValueError: if a member of command_list does not map to a valid SGR value. """ if not isinstance(command_list, list): raise ValueError('Invalid list: %s' % command_list) # Checks that entries are valid SGR names. # No checking is done for sequences that are correct but 'nonsensical'. for sgr in command_list: if sgr.lower() not in SGR: raise ValueError('Invalid or unsupported SGR name: %s' % sgr) # Convert to numerical strings. command_str = [str(SGR[x.lower()]) for x in command_list] # Wrap values in Ansi escape sequence (CSI prefix & SGR suffix). return '\033[%sm' % (';'.join(command_str)) def AnsiText(text, command_list=None, reset=True): """Wrap text in ANSI/SGR escape codes. Args: text: String to encase in sgr escape sequence. command_list: List of strings, each string represents an sgr value. e.g. 'fg_blue', 'bg_yellow' reset: Boolean, if to add a reset sequence to the suffix of the text. Returns: String with sgr characters added. """ command_list = command_list or ['reset'] if reset: return '%s%s%s' % (_AnsiCmd(command_list), text, _AnsiCmd(['reset'])) else: return '%s%s' % (_AnsiCmd(command_list), text) def StripAnsiText(text): """Strip ANSI/SGR escape sequences from text.""" return sgr_re.sub('', text) def EncloseAnsiText(text): """Enclose ANSI/SGR escape sequences with ANSI_START and ANSI_END.""" return sgr_re.sub(lambda x: ANSI_START + x.group(1) + ANSI_END, text) def TerminalSize(): """Returns terminal length and width as a tuple.""" try: with open(os.ctermid()) as tty_instance: length_width = struct.unpack( 'hh', fcntl.ioctl(tty_instance.fileno(), termios.TIOCGWINSZ, '1234')) except (IOError, OSError, NameError): try: length_width = (int(os.environ['LINES']), int(os.environ['COLUMNS'])) except (ValueError, KeyError): length_width = (24, 80) return length_width def LineWrap(text, omit_sgr=False): """Break line to fit screen width, factoring in ANSI/SGR escape sequences. Args: text: String to line wrap. omit_sgr: Bool, to omit counting ANSI/SGR sequences in the length. Returns: Text with additional line wraps inserted for lines grater than the width. """ def _SplitWithSgr(text_line): """Tokenise the line so that the sgr sequences can be omitted.""" token_list = sgr_re.split(text_line) text_line_list = [] line_length = 0 for (index, token) in enumerate(token_list): # Skip null tokens. if token == '': continue if sgr_re.match(token): # Add sgr escape sequences without splitting or counting length. text_line_list.append(token) text_line = ''.join(token_list[index +1:]) else: if line_length + len(token) <= width: # Token fits in line and we count it towards overall length. text_line_list.append(token) line_length += len(token) text_line = ''.join(token_list[index +1:]) else: # Line splits part way through this token. # So split the token, form a new line and carry the remainder. text_line_list.append(token[:width - line_length]) text_line = token[width - line_length:] text_line += ''.join(token_list[index +1:]) break return (''.join(text_line_list), text_line) # We don't use textwrap library here as it insists on removing # trailing/leading whitespace (pre 2.6). (_, width) = TerminalSize() text = str(text) text_multiline = [] for text_line in text.splitlines(): # Is this a line that needs splitting? while ((omit_sgr and (len(StripAnsiText(text_line)) > width)) or (len(text_line) > width)): # If there are no sgr escape characters then do a straight split. if not omit_sgr: text_multiline.append(text_line[:width]) text_line = text_line[width:] else: (multiline_line, text_line) = _SplitWithSgr(text_line) text_multiline.append(multiline_line) if text_line: text_multiline.append(text_line) return '\n'.join(text_multiline) class Pager(object): """A simple text pager module. Supports paging of text on a terminal, somewhat like a simple 'more' or 'less', but in pure Python. The simplest usage: with open('file.txt') as f: s = f.read() Pager(s).Page() Particularly unique is the ability to sequentially feed new text into the pager: p = Pager() for line in socket.read(): p.Page(line) If done this way, the Page() method will block until either the line has been displayed, or the user has quit the pager. Currently supported keybindings are: - one line down - one line down b - one page up - one line up q - Quit the pager g - scroll to the end - one page down """ def __init__(self, text=None, delay=None): """Constructor. Args: text: A string, the text that will be paged through. delay: A boolean, if True will cause a slight delay between line printing for more obvious scrolling. """ self._text = text or '' self._delay = delay try: self._tty = open('/dev/tty') except IOError: # No TTY, revert to stdin self._tty = sys.stdin self.SetLines(None) self.Reset() def __del__(self): """Deconstructor, closes tty.""" if getattr(self, '_tty', sys.stdin) is not sys.stdin: self._tty.close() def Reset(self): """Reset the pager to the top of the text.""" self._displayed = 0 self._currentpagelines = 0 self._lastscroll = 1 self._lines_to_show = self._cli_lines def SetLines(self, lines): """Set number of screen lines. Args: lines: An int, number of lines. If None, use terminal dimensions. Raises: ValueError, TypeError: Not a valid integer representation. """ (self._cli_lines, self._cli_cols) = TerminalSize() if lines: self._cli_lines = int(lines) def Clear(self): """Clear the text and reset the pager.""" self._text = '' self.Reset() def Page(self, text=None, show_percent=None): """Page text. Continues to page through any text supplied in the constructor. Also, any text supplied to this method will be appended to the total text to be displayed. The method returns when all available text has been displayed to the user, or the user quits the pager. Args: text: A string, extra text to be paged. show_percent: A boolean, if True, indicate how much is displayed so far. If None, this behaviour is 'text is None'. Returns: A boolean. If True, more data can be displayed to the user. False implies that the user has quit the pager. """ if text is not None: self._text += text if show_percent is None: show_percent = text is None self._show_percent = show_percent text = LineWrap(self._text).splitlines() while True: # Get a list of new lines to display. self._newlines = text[self._displayed:self._displayed+self._lines_to_show] for line in self._newlines: sys.stdout.write(line + '\n') if self._delay and self._lastscroll > 0: time.sleep(0.005) self._displayed += len(self._newlines) self._currentpagelines += len(self._newlines) if self._currentpagelines >= self._lines_to_show: self._currentpagelines = 0 wish = self._AskUser() if wish == 'q': # Quit pager. return False elif wish == 'g': # Display till the end. self._Scroll(len(text) - self._displayed + 1) elif wish == '\r': # Enter, down a line. self._Scroll(1) elif wish == '\033[B': # Down arrow, down a line. self._Scroll(1) elif wish == '\033[A': # Up arrow, up a line. self._Scroll(-1) elif wish == 'b': # Up a page. self._Scroll(0 - self._cli_lines) else: # Next page. self._Scroll() if self._displayed >= len(text): break return True def _Scroll(self, lines=None): """Set attributes to scroll the buffer correctly. Args: lines: An int, number of lines to scroll. If None, scrolls by the terminal length. """ if lines is None: lines = self._cli_lines if lines < 0: self._displayed -= self._cli_lines self._displayed += lines if self._displayed < 0: self._displayed = 0 self._lines_to_show = self._cli_lines else: self._lines_to_show = lines self._lastscroll = lines def _AskUser(self): """Prompt the user for the next action. Returns: A string, the character entered by the user. """ if self._show_percent: progress = int(self._displayed*100 / (len(self._text.splitlines()))) progress_text = ' (%d%%)' % progress else: progress_text = '' question = AnsiText( 'Enter: next line, Space: next page, ' 'b: prev page, q: quit.%s' % progress_text, ['green']) sys.stdout.write(question) sys.stdout.flush() ch = self._GetCh() sys.stdout.write('\r%s\r' % (' '*len(question))) sys.stdout.flush() return ch def _GetCh(self): """Read a single character from the user. Returns: A string, the character read. """ fd = self._tty.fileno() old = termios.tcgetattr(fd) try: tty.setraw(fd) ch = self._tty.read(1) # Also support arrow key shortcuts (escape + 2 chars) if ord(ch) == 27: ch += self._tty.read(2) finally: termios.tcsetattr(fd, termios.TCSADRAIN, old) return ch def main(argv=None): """Routine to page text or determine window size via command line.""" if argv is None: argv = sys.argv try: opts, args = getopt.getopt(argv[1:], 'dhs', ['nodelay', 'help', 'size']) except getopt.error as msg: raise Usage(msg) # Print usage and return, regardless of presence of other args. for opt, _ in opts: if opt in ('-h', '--help'): print(__doc__) print(help_msg) return 0 isdelay = False for opt, _ in opts: # Prints the size of the terminal and returns. # Mutually exclusive to the paging of text and overrides that behaviour. if opt in ('-s', '--size'): print('Length: %d, Width: %d' % TerminalSize()) return 0 elif opt in ('-d', '--delay'): isdelay = True else: raise Usage('Invalid arguments.') # Page text supplied in either specified file or stdin. if len(args) == 1: with open(args[0], 'r') as f: fd = f.read() else: fd = sys.stdin.read() Pager(fd, delay=isdelay).Page() if __name__ == '__main__': help_msg = '%s [--help] [--size] [--nodelay] [input_file]\n' % sys.argv[0] try: sys.exit(main()) except Usage as err: print(err, file=sys.stderr) print('For help use --help', file=sys.stderr) sys.exit(2) textfsm-1.1.3/textfsm/texttable.py000077500000000000000000001007661417470013600172660ustar00rootroot00000000000000#!/usr/bin/python # # Copyright 2012 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or # implied. See the License for the specific language governing # permissions and limitations under the License. """A module to represent and manipulate tabular text data. A table of rows, indexed on row number. Each row is a ordered dictionary of row elements that maintains knowledge of the parent table and column headings. Tables can be created from CSV input and in-turn supports a number of display formats such as CSV and variable sized and justified rows. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function from __future__ import unicode_literals import copy from functools import cmp_to_key import textwrap from builtins import next # pylint: disable=redefined-builtin from builtins import object # pylint: disable=redefined-builtin from builtins import range # pylint: disable=redefined-builtin from builtins import str # pylint: disable=redefined-builtin from builtins import zip # pylint: disable=redefined-builtin import six from textfsm import terminal class Error(Exception): """Base class for errors.""" class TableError(Error): """Error in TextTable.""" class Row(dict): """Represents a table row. We implement this as an ordered dictionary. The order is the chronological order of data insertion. Methods are supplied to make it behave like a regular dict() and list(). Attributes: row: int, the row number in the container table. 0 is the header row. table: A TextTable(), the associated container table. """ def __init__(self, *args, **kwargs): super(Row, self).__init__(*args, **kwargs) self._keys = list() self._values = list() self.row = None self.table = None self._color = None self._index = {} def _BuildIndex(self): """Recreate the key index.""" self._index = {} for i, k in enumerate(self._keys): self._index[k] = i def __getitem__(self, column): """Support for [] notation. Args: column: Tuple of column names, or a (str) column name, or positional column number, 0-indexed. Returns: A list or string with column value(s). Raises: IndexError: The given column(s) were not found. """ if isinstance(column, (list, tuple)): ret = [] for col in column: ret.append(self[col]) return ret try: return self._values[self._index[column]] except (KeyError, TypeError, ValueError): pass # Perhaps we have a range like '1', ':-1' or '1:'. try: return self._values[column] except (IndexError, TypeError): pass raise IndexError('No such column "%s" in row.' % column) def __contains__(self, value): return value in self._values def __setitem__(self, column, value): for i in range(len(self)): if self._keys[i] == column: self._values[i] = value return # No column found, add a new one. self._keys.append(column) self._values.append(value) self._BuildIndex() def __iter__(self): return iter(self._values) def __len__(self): return len(self._keys) def __str__(self): ret = '' for v in self._values: ret += '%12s ' % v ret += '\n' return ret def __repr__(self): return '%s(%r)' % (self.__class__.__name__, str(self)) def get(self, column, default_value=None): """Get an item from the Row by column name. Args: column: Tuple of column names, or a (str) column name, or positional column number, 0-indexed. default_value: The value to use if the key is not found. Returns: A list or string with column value(s) or default_value if not found. """ if isinstance(column, (list, tuple)): ret = [] for col in column: ret.append(self.get(col, default_value)) return ret # Perhaps we have a range like '1', ':-1' or '1:'. try: return self._values[column] except (IndexError, TypeError): pass try: return self[column] except IndexError: return default_value def index(self, column): """Fetches the column number (0 indexed). Args: column: A string, column to fetch the index of. Returns: An int, the row index number. Raises: ValueError: The specified column was not found. """ for i, key in enumerate(self._keys): if key == column: return i raise ValueError('Column "%s" not found.' % column) def iterkeys(self): return iter(self._keys) def items(self): # TODO(harro): self.get(k) should work here but didn't ? return [(k, self.__getitem__(k)) for k in self._keys] def _GetValues(self): """Return the row's values.""" return self._values def _GetHeader(self): """Return the row's header.""" return self._keys def _SetHeader(self, values): """Set the row's header from a list.""" if self._values and len(values) != len(self._values): raise ValueError('Header values not equal to existing data width.') if not self._values: for _ in range(len(values)): self._values.append(None) self._keys = list(values) self._BuildIndex() def _SetColour(self, value_list): """Sets row's colour attributes to a list of values in terminal.SGR.""" if value_list is None: self._color = None return colors = [] for color in value_list: if color in terminal.SGR: colors.append(color) elif color in terminal.FG_COLOR_WORDS: colors += terminal.FG_COLOR_WORDS[color] elif color in terminal.BG_COLOR_WORDS: colors += terminal.BG_COLOR_WORDS[color] else: raise ValueError('Invalid colour specification.') self._color = list(set(colors)) def _GetColour(self): if self._color is None: return None return list(self._color) def _SetValues(self, values): """Set values from supplied dictionary or list. Args: values: A Row, dict indexed by column name, or list. Raises: TypeError: Argument is not a list or dict, or list is not equal row length or dictionary keys don't match. """ def _ToStr(value): """Convert individul list entries to string.""" if isinstance(value, (list, tuple)): result = [] for val in value: result.append(str(val)) return result else: return str(value) # Row with identical header can be copied directly. if isinstance(values, Row): if self._keys != values.header: raise TypeError('Attempt to append row with mismatched header.') self._values = copy.deepcopy(values.values) elif isinstance(values, dict): for key in self._keys: if key not in values: raise TypeError('Dictionary key mismatch with row.') for key in self._keys: self[key] = _ToStr(values[key]) elif isinstance(values, list) or isinstance(values, tuple): if len(values) != len(self._values): raise TypeError('Supplied list length != row length') for (index, value) in enumerate(values): self._values[index] = _ToStr(value) else: raise TypeError('Supplied argument must be Row, dict or list, not %s', type(values)) def Insert(self, key, value, row_index): """Inserts new values at a specified offset. Args: key: string for header value. value: string for a data value. row_index: Offset into row for data. Raises: IndexError: If the offset is out of bands. """ if row_index < 0: row_index += len(self) if not 0 <= row_index < len(self): raise IndexError('Index "%s" is out of bounds.' % row_index) new_row = Row() for idx in self.header: if self.index(idx) == row_index: new_row[key] = value new_row[idx] = self[idx] self._keys = new_row.header self._values = new_row.values del new_row self._BuildIndex() color = property(_GetColour, _SetColour, doc='Colour spec of this row') header = property(_GetHeader, _SetHeader, doc="List of row's headers.") values = property(_GetValues, _SetValues, doc="List of row's values.") class TextTable(object): """Class that provides data methods on a tabular format. Data is stored as a list of Row() objects. The first row is always present as the header row. Attributes: row_class: class, A class to use for the Row object. separator: str, field separator when printing table. """ def __init__(self, row_class=Row): """Initialises a new table. Args: row_class: A class to use as the row object. This should be a subclass of this module's Row() class. """ self.row_class = row_class self.separator = ', ' self.Reset() def Reset(self): self._row_index = 1 self._table = [[]] self._iterator = 0 # While loop row index def __repr__(self): return '%s(%r)' % (self.__class__.__name__, str(self)) def __str__(self): """Displays table with pretty formatting.""" return self.table def __incr__(self, incr=1): self._SetRowIndex(self._row_index +incr) def __contains__(self, name): """Whether the given column header name exists.""" return name in self.header def __getitem__(self, row): """Fetches the given row number.""" return self._table[row] def __iter__(self): """Iterator that excludes the header row.""" return next(self) def __next__(self): # Maintain a counter so a row can know what index it is. # Save the old value to support nested interations. old_iter = self._iterator try: for r in self._table[1:]: self._iterator = r.row yield r finally: # Recover the original index after loop termination or exit with break. self._iterator = old_iter def __add__(self, other): """Merges two with identical columns.""" new_table = copy.copy(self) for row in other: new_table.Append(row) return new_table def __copy__(self): """Copy table instance.""" new_table = self.__class__() # pylint: disable=protected-access new_table._table = [self.header] for row in self[1:]: new_table.Append(row) return new_table def Filter(self, function=None): """Construct Textable from the rows of which the function returns true. Args: function: A function applied to each row which returns a bool. If function is None, all rows with empty column values are removed. Returns: A new TextTable() Raises: TableError: When an invalid row entry is Append()'d """ flat = lambda x: x if isinstance(x, str) else ''.join([flat(y) for y in x]) if function is None: function = lambda row: bool(flat(row.values)) new_table = self.__class__() # pylint: disable=protected-access new_table._table = [self.header] for row in self: if function(row) is True: new_table.Append(row) return new_table def Map(self, function): """Applies the function to every row in the table. Args: function: A function applied to each row. Returns: A new TextTable() Raises: TableError: When transform is not invalid row entry. The transform must be compatible with Append(). """ new_table = self.__class__() # pylint: disable=protected-access new_table._table = [self.header] for row in self: filtered_row = function(row) if filtered_row: new_table.Append(filtered_row) return new_table # pylint: disable=W0622 def sort(self, cmp=None, key=None, reverse=False): """Sorts rows in the texttable. Args: cmp: func, non default sort algorithm to use. key: func, applied to each element before sorting. reverse: bool, reverse order of sort. """ def _DefaultKey(value): """Default key func is to create a list of all fields.""" result = [] for key in self.header: # Try sorting as numerical value if possible. try: result.append(float(value[key])) except ValueError: result.append(value[key]) return result key = key or _DefaultKey # Exclude header by copying table. new_table = self._table[1:] if cmp is not None: key = cmp_to_key(cmp) new_table.sort(key=key, reverse=reverse) # Regenerate the table with original header self._table = [self.header] self._table.extend(new_table) # Re-write the 'row' attribute of each row for index, row in enumerate(self._table): row.row = index # pylint: enable=W0622 def extend(self, table, keys=None): """Extends all rows in the texttable. The rows are extended with the new columns from the table. Args: table: A texttable, the table to extend this table by. keys: A set, the set of columns to use as the key. If None, the row index is used. Raises: IndexError: If key is not a valid column name. """ if keys: for k in keys: if k not in self._Header(): raise IndexError("Unknown key: '%s'", k) extend_with = [] for column in table.header: if column not in self.header: extend_with.append(column) if not extend_with: return for column in extend_with: self.AddColumn(column) if not keys: for row1, row2 in zip(self, table): for column in extend_with: row1[column] = row2[column] return for row1 in self: for row2 in table: for k in keys: if row1[k] != row2[k]: break else: for column in extend_with: row1[column] = row2[column] break def Remove(self, row): """Removes a row from the table. Args: row: int, the row number to delete. Must be >= 1, as the header cannot be removed. Raises: TableError: Attempt to remove nonexistent or header row. """ if row == 0 or row > self.size: raise TableError('Attempt to remove header row') new_table = [] # pylint: disable=E1103 for t_row in self._table: if t_row.row != row: new_table.append(t_row) if t_row.row > row: t_row.row -= 1 self._table = new_table def _Header(self): """Returns the header row.""" return self._table[0] def _GetRow(self, columns=None): """Returns the current row as a tuple.""" row = self._table[self._row_index] if columns: result = [] for col in columns: if col not in self.header: raise TableError('Column header %s not known in table.' % col) result.append(row[self.header.index(col)]) row = result return row def _SetRow(self, new_values, row=0): """Sets the current row to new list. Args: new_values: List|dict of new values to insert into row. row: int, Row to insert values into. Raises: TableError: If number of new values is not equal to row size. """ if not row: row = self._row_index if row > self.size: raise TableError('Entry %s beyond table size %s.' % (row, self.size)) self._table[row].values = new_values def _SetHeader(self, new_values): """Sets header of table to the given tuple. Args: new_values: Tuple of new header values. """ row = self.row_class() row.row = 0 for v in new_values: row[v] = v self._table[0] = row def _SetRowIndex(self, row): if not row or row > self.size: raise TableError('Entry %s beyond table size %s.' % (row, self.size)) self._row_index = row def _GetRowIndex(self): return self._row_index def _GetSize(self): """Returns number of rows in table.""" if not self._table: return 0 return len(self._table) - 1 def _GetTable(self): """Returns table, with column headers and separators. Returns: The whole table including headers as a string. Each row is joined by a newline and each entry by self.separator. """ result = [] # Avoid the global lookup cost on each iteration. lstr = str for row in self._table: result.append( '%s\n' % self.separator.join(lstr(v) for v in row)) return ''.join(result) def _SetTable(self, table): """Sets table, with column headers and separators.""" if not isinstance(table, TextTable): raise TypeError('Not an instance of TextTable.') self.Reset() self._table = copy.deepcopy(table._table) # pylint: disable=W0212 # Point parent table of each row back ourselves. for row in self: row.table = self def _SmallestColSize(self, text): """Finds the largest indivisible word of a string. ...and thus the smallest possible column width that can contain that word unsplit over rows. Args: text: A string of text potentially consisting of words. Returns: Integer size of the largest single word in the text. """ if not text: return 0 stripped = terminal.StripAnsiText(text) return max(len(word) for word in stripped.split()) def _TextJustify(self, text, col_size): """Formats text within column with white space padding. A single space is prefixed, and a number of spaces are added as a suffix such that the length of the resultant string equals the col_size. If the length of the text exceeds the column width available then it is split into words and returned as a list of string, each string contains one or more words padded to the column size. Args: text: String of text to format. col_size: integer size of column to pad out the text to. Returns: List of strings col_size in length. Raises: TableError: If col_size is too small to fit the words in the text. """ result = [] if '\n' in text: for paragraph in text.split('\n'): result.extend(self._TextJustify(paragraph, col_size)) return result wrapper = textwrap.TextWrapper(width=col_size-2, break_long_words=False, expand_tabs=False) try: text_list = wrapper.wrap(text) except ValueError: raise TableError('Field too small (minimum width: 3)') if not text_list: return [' '*col_size] for current_line in text_list: stripped_len = len(terminal.StripAnsiText(current_line)) ansi_color_adds = len(current_line) - stripped_len # +2 for white space on either side. if stripped_len + 2 > col_size: raise TableError('String contains words that do not fit in column.') result.append(' %-*s' % (col_size - 1 + ansi_color_adds, current_line)) return result def FormattedTable(self, width=80, force_display=False, ml_delimiter=True, color=True, display_header=True, columns=None): """Returns whole table, with whitespace padding and row delimiters. Args: width: An int, the max width we want the table to fit in. force_display: A bool, if set to True will display table when the table can't be made to fit to the width. ml_delimiter: A bool, if set to False will not display the multi-line delimiter. color: A bool. If true, display any colours in row.colour. display_header: A bool. If true, display header. columns: A list of str, show only columns with these names. Returns: A string. The tabled output. Raises: TableError: Width too narrow to display table. """ def _FilteredCols(): """Returns list of column names to display.""" if not columns: return self._Header().values return [col for col in self._Header().values if col in columns] # Largest is the biggest data entry in a column. largest = {} # Smallest is the same as above but with linewrap i.e. largest unbroken # word in the data stream. smallest = {} # largest == smallest for a column with a single word of data. # Initialise largest and smallest for all columns. for key in _FilteredCols(): largest[key] = 0 smallest[key] = 0 # Find the largest and smallest values. # Include Title line in equation. # pylint: disable=E1103 for row in self._table: for key, value in row.items(): if key not in _FilteredCols(): continue # Convert lists into a string. if isinstance(value, list): value = ', '.join(value) value = terminal.StripAnsiText(value) largest[key] = max(len(value), largest[key]) smallest[key] = max(self._SmallestColSize(value), smallest[key]) # pylint: enable=E1103 min_total_width = 0 multi_word = [] # Bump up the size of each column to include minimum pad. # Find all columns that can be wrapped (multi-line). # And the minimum width needed to display all columns (even if wrapped). for key in _FilteredCols(): # Each column is bracketed by a space on both sides. # So increase size required accordingly. largest[key] += 2 smallest[key] += 2 min_total_width += smallest[key] # If column contains data that 'could' be split over multiple lines. if largest[key] != smallest[key]: multi_word.append(key) # Check if we have enough space to display the table. if min_total_width > width and not force_display: raise TableError('Width too narrow to display table.') # We have some columns that may need wrapping over several lines. if multi_word: # Find how much space is left over for the wrapped columns to use. # Also find how much space we would need if they were not wrapped. # These are 'spare_width' and 'desired_width' respectively. desired_width = 0 spare_width = width - min_total_width for key in multi_word: spare_width += smallest[key] desired_width += largest[key] # Scale up the space we give each wrapped column. # Proportional to its size relative to 'desired_width' for all columns. # Rinse and repeat if we changed the wrap list in this iteration. # Once done we will have a list of columns that definitely need wrapping. done = False while not done: done = True for key in multi_word: # If we scale past the desired width for this particular column, # then give it its desired width and remove it from the wrapped list. if (largest[key] <= round((largest[key] / float(desired_width)) * spare_width)): smallest[key] = largest[key] multi_word.remove(key) spare_width -= smallest[key] desired_width -= largest[key] done = False # If we scale below the minimum width for this particular column, # then leave it at its minimum and remove it from the wrapped list. elif (smallest[key] >= round((largest[key] / float(desired_width)) * spare_width)): multi_word.remove(key) spare_width -= smallest[key] desired_width -= largest[key] done = False # Repeat the scaling algorithm with the final wrap list. # This time we assign the extra column space by increasing 'smallest'. for key in multi_word: smallest[key] = int(round((largest[key] / float(desired_width)) * spare_width)) total_width = 0 row_count = 0 result_dict = {} # Format the header lines and add to result_dict. # Find what the total width will be and use this for the ruled lines. # Find how many rows are needed for the most wrapped line (row_count). for key in _FilteredCols(): result_dict[key] = self._TextJustify(key, smallest[key]) if len(result_dict[key]) > row_count: row_count = len(result_dict[key]) total_width += smallest[key] # Store header in header_list, working down the wrapped rows. header_list = [] for row_idx in range(row_count): for key in _FilteredCols(): try: header_list.append(result_dict[key][row_idx]) except IndexError: # If no value than use whitespace of equal size. header_list.append(' '*smallest[key]) header_list.append('\n') # Format and store the body lines result_dict = {} body_list = [] # We separate multi line rows with a single line delimiter. prev_muli_line = False # Unless it is the first line in which there is already the header line. first_line = True for row in self: row_count = 0 for key, value in row.items(): if key not in _FilteredCols(): continue # Convert field contents to a string. if isinstance(value, list): value = ', '.join(value) # Store results in result_dict and take note of wrapped line count. result_dict[key] = self._TextJustify(value, smallest[key]) if len(result_dict[key]) > row_count: row_count = len(result_dict[key]) if row_count > 1: prev_muli_line = True # If current or prior line was multi-line then include delimiter. if not first_line and prev_muli_line and ml_delimiter: body_list.append('-'*total_width + '\n') if row_count == 1: # Our current line was not wrapped, so clear flag. prev_muli_line = False row_list = [] for row_idx in range(row_count): for key in _FilteredCols(): try: row_list.append(result_dict[key][row_idx]) except IndexError: # If no value than use whitespace of equal size. row_list.append(' '*smallest[key]) row_list.append('\n') if color and row.color is not None: body_list.append( terminal.AnsiText(''.join(row_list)[:-1], command_list=row.color)) body_list.append('\n') else: body_list.append(''.join(row_list)) first_line = False header = ''.join(header_list) + '='*total_width if color and self._Header().color is not None: header = terminal.AnsiText(header, command_list=self._Header().color) # Add double line delimiter between header and main body. if display_header: return '%s\n%s' % (header, ''.join(body_list)) return '%s' % ''.join(body_list) def LabelValueTable(self, label_list=None): """Returns whole table as rows of name/value pairs. One (or more) column entries are used for the row prefix label. The remaining columns are each displayed as a row entry with the prefix labels appended. Use the first column as the label if label_list is None. Args: label_list: A list of prefix labels to use. Returns: Label/Value formatted table. Raises: TableError: If specified label is not a column header of the table. """ label_list = label_list or self._Header()[0] # Ensure all labels are valid. for label in label_list: if label not in self._Header(): raise TableError('Invalid label prefix: %s.' % label) sorted_list = [] for header in self._Header(): if header in label_list: sorted_list.append(header) label_str = '# LABEL %s\n' % '.'.join(sorted_list) body = [] for row in self: # Some of the row values are pulled into the label, stored in label_prefix. label_prefix = [] value_list = [] for key, value in row.items(): if key in sorted_list: # Set prefix. label_prefix.append(value) else: value_list.append('%s %s' % (key, value)) body.append(''.join( ['%s.%s\n' % ('.'.join(label_prefix), v) for v in value_list])) return '%s%s' % (label_str, ''.join(body)) table = property(_GetTable, _SetTable, doc='Whole table') row = property(_GetRow, _SetRow, doc='Current row') header = property(_Header, _SetHeader, doc='List of header entries.') row_index = property(_GetRowIndex, _SetRowIndex, doc='Current row.') size = property(_GetSize, doc='Number of rows in table.') def RowWith(self, column, value): """Retrieves the first non header row with the column of the given value. Args: column: str, the name of the column to check. value: str, The value of the column to check. Returns: A Row() of the first row found, None otherwise. Raises: IndexError: The specified column does not exist. """ for row in self._table[1:]: if row[column] == value: return row return None def AddColumn(self, column, default='', col_index=-1): """Appends a new column to the table. Args: column: A string, name of the column to add. default: Default value for entries. Defaults to ''. col_index: Integer index for where to insert new column. Raises: TableError: Column name already exists. """ if column in self.table: raise TableError('Column %r already in table.' % column) if col_index == -1: self._table[0][column] = column for i in range(1, len(self._table)): self._table[i][column] = default else: self._table[0].Insert(column, column, col_index) for i in range(1, len(self._table)): self._table[i].Insert(column, default, col_index) def Append(self, new_values): """Adds a new row (list) to the table. Args: new_values: Tuple, dict, or Row() of new values to append as a row. Raises: TableError: Supplied tuple not equal to table width. """ newrow = self.NewRow() newrow.values = new_values self._table.append(newrow) def NewRow(self, value=''): """Fetches a new, empty row, with headers populated. Args: value: Initial value to set each row entry to. Returns: A Row() object. """ newrow = self.row_class() newrow.row = self.size + 1 newrow.table = self headers = self._Header() for header in headers: newrow[header] = value return newrow def CsvToTable(self, buf, header=True, separator=','): """Parses buffer into tabular format. Strips off comments (preceded by '#'). Optionally parses and indexes by first line (header). Args: buf: String file buffer containing CSV data. header: Is the first line of buffer a header. separator: String that CSV is separated by. Returns: int, the size of the table created. Raises: TableError: A parsing error occurred. """ self.Reset() header_row = self.row_class() if header: line = buf.readline() header_str = '' while not header_str: if not isinstance(line, six.string_types): line = line.decode('utf-8') # Remove comments. header_str = line.split('#')[0].strip() if not header_str: line = buf.readline() header_list = header_str.split(separator) header_length = len(header_list) for entry in header_list: entry = entry.strip() if entry in header_row: raise TableError('Duplicate header entry %r.' % entry) header_row[entry] = entry header_row.row = 0 self._table[0] = header_row # xreadlines would be better but not supported by StringIO for testing. for line in buf: if not isinstance(line, six.string_types): line = line.decode('utf-8') # Support commented lines, provide '#' is first character of line. if line.startswith('#'): continue lst = line.split(separator) lst = [l.strip() for l in lst] if header and len(lst) != header_length: # Silently drop illegal line entries continue if not header: header_row = self.row_class() header_length = len(lst) header_row.values = dict(zip(range(header_length), range(header_length))) self._table[0] = header_row header = True continue new_row = self.NewRow() new_row.values = lst header_row.row = self.size + 1 self._table.append(new_row) return self.size def index(self, name=None): """Returns index number of supplied column name. Args: name: string of column name. Raises: TableError: If name not found. Returns: Index of the specified header entry. """ try: return self.header.index(name) except ValueError: raise TableError('Unknown index name %s.' % name)