pyroute2-0.5.9/0000755000175000017500000000000013621220110013216 5ustar peetpeet00000000000000pyroute2-0.5.9/CHANGELOG.md0000644000175000017500000004642313621217775015067 0ustar peetpeet00000000000000Changelog ========= * 0.5.9 * ethtool: fix module setup * 0.5.8 * ethtool: initial support * tc: multimatch support * tc: meta support * tc: cake: add stats_app decoder * conntrack: filter * ndb.objects.interface: reload after setns * ndb.objects.route: create() dst syntax * ndb.objects.route: 'default' syntax * wireguard: basic testing * 0.5.7 * ndb.objects.netns: prototype * ndb: netns management * ndb: netns sources autoconnect (disabled by default) * wireguard: basic support * netns: fix FD leakage * * cli: Python3 fixes * iproute: support `route('append', ...)` * ipdb: fix routes cleanup on link down * * wiset: support "mark" ipset type * 0.5.6 * ndb.objects.route: multipath routes * ndb.objects.rule: basic support * ndb.objects.interface: veth fixed * ndb.source: fix source restart * ndb.log: logging setup * 0.5.5 * nftables: rules expressions * * netns: ns_pids * * ndb: wait() method * ndb: add extra logging, log state transitions * ndb: nested views, e.g. `ndb.interfaces['br0'].ports * cli: port pyroute2-cli to use NDB instead of IPDB * iproute: basic Windows support (proof of concept only) * remote: support mitogen proxy chains, support remote netns * 0.5.4 * iproute: basic SR-IOV support, virtual functions setup * ipdb: shutdown logging fixed * * nftables: fix regression (errata: previously mentioned ipset) * * netns: pushns() / popns() / dropns() calls * * 0.5.3 * bsd: parser improvements * ndb: PostgreSQL support * ndb: transactions commit/rollback * ndb: dependencies rollback * ipdb: IPv6 routes fix * * tcmsg: ematch support * tcmsg: flow filter * tcmsg: stats2 support improvements * ifinfmsg: GRE i/oflags, i/okey format fixed * * cli/ss2: improvements, tests * nlsocket: fix work on kernels < 3.2 * * 0.5.2 * ndb: read-only DB prototype * remote: support communication via stdio * general: fix async keyword -- Python 3.7 compatibility * * * iproute: support monitoring on BSD systems via PF_ROUTE * rtnl: support for SQL schema in message classes * nl80211: improvements * * * * netlink: support generators * 0.5.1 * ipdb: #310 -- route keying fix * ipdb: #483, #484 -- callback internals change * ipdb: #499 -- eventloop interface * ipdb: #500 -- fix non-default :: routes * netns: #448 -- API change: setns() doesn't remove FD * netns: #504 -- fix resource leakage * bsd: initial commits * 0.5.0 * ACHTUNG: ipdb commit logic is changed * ipdb: do not drop failed transactions * ipdb: #388 -- normalize IPv6 addresses * ipdb: #391 -- support both IPv4 and IPv6 default routes * ipdb: #392 -- fix MPLS route key reference * ipdb: #394 -- correctly work with route priorities * ipdb: #408 -- fix IPv6 routes in tables >= 256 * ipdb: #416 -- fix VRF interfaces creation * ipset: multiple improvements * tuntap: #469 -- support s390x arch * nlsocket: #443 -- fix socket methods resolve order for Python2 * netns: non-destructive `netns.create()` * 0.4.18 * ipdb: #379 [critical] -- routes in global commits * ipdb: #380 -- global commit with disabled plugins * ipdb: #381 -- exceptions fixed * ipdb: #382 -- manage dependent routes during interface commits * ipdb: #384 -- global `review()` * ipdb: #385 -- global `drop()` * netns: #383 -- support ppc64 * general: public API refactored (same signatures; to be documented) * 0.4.17 * req: #374 [critical] -- mode nla init * iproute: #378 [critical] -- fix 
`flush_routes()` to respect filters * ifinfmsg: #376 -- fix data plugins API to support pyinstaller * 0.4.16 * ipdb: race fixed: remove port/bridge * ipdb: #280 -- race fixed: port/bridge * ipdb: #302 -- ipaddr views: [ifname].ipaddr.ipv4, [ifname].ipaddr.ipv6 * ipdb: #357 -- allow bridge timings to have some delta * ipdb: #338 -- allow to fix interface objects from failed `create()` * rtnl: #336 -- fix vlan flags * iproute: #342 -- the match method takes any callable * nlsocket: #367 -- increase default SO_SNDBUF * ifinfmsg: support tuntap on armv6l, armv7l platforms * 0.4.15 * req: #365 -- full and short nla notation fixed, critical * iproute: #364 -- new method, `brport()` * ipdb: -- support bridge port options * 0.4.14 * event: new genl protocols set: VFS_DQUOT, acpi_event, thermal_event * ipdb: #310 -- fixed priority change on routes * ipdb: #349 -- fix setting ifalias on interfaces * ipdb: #353 -- mitigate kernel oops during bridge creation * ipdb: #354 -- allow to explicitly choose plugins to load * ipdb: #359 -- provide read-only context managers * rtnl: #336 -- vlan flags support * rtnl: #352 -- support interface type plugins * tc: #344 -- mirred action * tc: #346 -- connmark action * netlink: #358 -- memory optimization * config: #360 -- generic asyncio config * iproute: #362 -- allow to change or replace a qdisc * 0.4.13 * ipset: full rework of the IPSET_ATTR_DATA and IPSET_ATTR_ADT ACHTUNG: this commit may break API compatibility * ipset: hash:mac support * ipset: list:set support * ipdb: throw EEXIST when creating VLAN/VXLAN devs with the same ID, but under different names * tests: #329 -- include unit tests into the bundle * legal: E/// logo removed * 0.4.12 * ipdb: #314 -- let users choose RTNL groups IPDB listens to * ipdb: #321 -- isolate `net_ns_.*` setup in a separate code block * ipdb: #322 -- IPv6 updates on interfaces in DOWN state * ifinfmsg: allow absolute/relative paths in the net_ns_fd NLA * ipset: #323 -- support setting counters on ipset add * ipset: `headers()` command * ipset: revisions * ipset: #326 -- mark types * 0.4.11 * rtnl: #284 -- support vlan_flags * ipdb: #288 -- do not ignore link-local addresses * ipdb: #300 -- sort ip addresses * ipdb: #306 -- support net_ns_pid * ipdb: #307 -- fix IPv6 routes management * ipdb: #311 -- vlan interfaces address loading * iprsocket: #305 -- support NETLINK_LISTEN_ALL_NSID * 0.4.10 * devlink: fix fd leak on broken init * 0.4.9 * sock_diag: initial NETLINK_SOCK_DIAG support * rtnl: fix critical fd leak in the compat code * 0.4.8 * rtnl: compat proxying fix * 0.4.7 * rtnl: compat code is back * netns: custom netns path support * ipset: multiple improvements * 0.4.6 * ipdb: #278 -- fix initial ports mapping * ipset: #277 -- fix ADT attributes parsing * nl80211: #274, #275, #276 -- BSS-related fixes * 0.4.5 * ifinfmsg: GTP interfaces support * generic: devlink protocol support * generic: code cleanup * 0.4.4 * iproute: #262 -- `get_vlans()` fix * iproute: default mask 32 for IPv4 in `addr()` * rtmsg: #260 -- RTA_FLOW support * 0.4.3 * ipdb: #259 -- critical `Interface` class fix * benchmark: initial release * 0.4.2 * ipdb: event modules * ipdb: on-demand views * ipdb: rules management * ipdb: bridge controls * ipdb: #258 -- important Python compatibility fixes * netns: #257 -- pipe leak fix * netlink: support pickling for nlmsg * 0.4.1 * netlink: no buffer copying in the parser * netlink: parse NLA on demand * ipdb: #244 -- lwtunnel multipath fixes * iproute: #235 -- route types * docs updated * 0.4.0 * ACHTUNG: old kernels
compatibility code is dropped * ACHTUNG: IPDB uses two separate sockets for monitoring and commands * ipdb: #244 -- multipath lwtunnel * ipdb: #242 -- AF_MPLS routes * ipdb: #241, #234 -- fix create(..., reuse=True) * ipdb: #239 -- route encap and metrics fixed * ipdb: #238 -- generic port management * ipdb: #235 -- support route scope and type * ipdb: #230, #232 -- routes GC (work in progress) * rtnl: #245 -- do not fail if `/proc/net/psched` doesn't exist * rtnl: #233 -- support VRF interfaces (requires net-next) * 0.3.21 * ipdb: #231 -- return `ipdb.common` as deprecated * 0.3.20 * iproute: `vlan_filter()` * iproute: #229 -- FDB management * general: exceptions re-exported via the root module * 0.3.19 * rtmsg: #227 -- MPLS lwtunnel basic support * iproute: `route()` docs updated * general: #228 -- exceptions layout changed * package-rh: rpm subpackages * 0.3.18 * version bump -- include docs in the release tarball * 0.3.17 * tcmsg: qdiscs and filters as plugins * tcmsg: #223 -- tc clsact and bpf direct-action * tcmsg: plug, codel, choke, drr qdiscs * tests: CI in VMs (see civm project) * tests: xunit output * ifinfmsg: tuntap support in i386, i686 * ifinfmsg: #207 -- support vlan filters * examples: #226 -- included in the release tarball * ipdb: partial commits, initial support * 0.3.16 * ipdb: fix the multiple IPs in one commit case * rtnl: support veth peer attributes * netns: support 32bit i686 * netns: fix MIPS support * netns: fix tun/tap creation * netns: fix interface move between namespaces * tcmsg: support hfsc, fq_codel, codel qdiscs * nftables: initial support * netlink: dump/load messages to/from simple types * 0.3.15 * netns: #194 -- fix fd leak * iproute: #184 -- fix routes dump * rtnl: TCA_ACT_BPF support * rtnl: ipvlan support * rtnl: OVS support removed * iproute: rule() improved to support all NLAs * project supported by Ericsson * 0.3.14 * package-rh: spec fixed * package-rh: both licenses added * remote: fixed the setup.py record * 0.3.13 * package-rh: new rpm for Fedora and CentOS * remote: new draft of the remote protocol * netns: refactored using the new remote protocol * ipdb: gretap support * 0.3.12 * ipdb: new `Interface.wait_ip()` routine * ipdb: #175 -- fix `master` attribute cleanup * ipdb: #171 -- support multipath routes * ipdb: memory consumption improvements * rtmsg: MPLS support * rtmsg: RTA_VIA support * iwutil: #174 -- fix FREQ_FIXED flag * 0.3.11 * ipdb: #161 -- fix memory allocations * nlsocket: #161 -- remove monitor mode * 0.3.10 * rtnl: added BPF filters * rtnl: LWtunnel support in ifinfmsg * ipdb: support address attributes * ipdb: global transactions, initial version * ipdb: routes refactored to use key index (speed up) * config: eventlet support embedded (thanks to Angus Lees) * iproute: replace tc classes * iproute: flush_addr(), flush_rules() * iproute: rule() refactored * netns: proxy file objects (stdin, stdout, stderr) * 0.3.9 * root imports: #109, #135 -- `issubclass`, `isinstance` * iwutil: multiple improvements * iwutil: initial tests * proxy: correctly forward NetlinkError * iproute: neighbour tables support * iproute: #147, filters on dump calls * config: initial usage of `capabilities` * 0.3.8 * docs: inheritance diagrams * nlsocket: #126, #132 -- resource deallocation * arch: #128, #131 -- MIPS support * setup.py: #133 -- syntax error during install on Python2 * 0.3.7 * ipdb: new routing syntax * ipdb: sync interface movement between namespaces * ipdb: #125 -- fix route metrics * netns: new class NSPopen * netns: #119 -- i386 
syscall * netns: #122 -- return correct errno * netlink: #126 -- fix socket reuse * 0.3.6 * dhcp: initial release DHCPv4 * license: dual GPLv2+ and Apache v2.0 * ovs: port add/delete * macvlan, macvtap: basic support * vxlan: basic support * ipset: basic support * 0.3.5 * netns: #90 -- netns setns support * generic: #99 -- support custom basic netlink socket classes * proxy-ng: #106 -- provide more diagnostics * nl80211: initial nl80211 support, iwutil module added * 0.3.4 * ipdb: #92 -- route metrics support * ipdb: #85 -- broadcast address specification * ipdb, rtnl: #84 -- veth support * ipdb, rtnl: tuntap support * netns: #84 -- network namespaces support, NetNS class * rtnl: proxy-ng API * pypi: #91 -- embed docs into the tarball * 0.3.3 * ipdb: restart on error * generic: handle non-existing family case * [fix]: #80 -- Python 2.6 unicode vs -O bug workaround * 0.3.2 * simple socket architecture * all the protocols now are based on NetlinkSocket, see examples * rpc: deprecated * iocore: deprecated * iproute: single-threaded socket object * ipdb: restart on errors * rtnl: updated ifinfmsg policies * 0.3.1 * module structure refactored * new protocol: ipq * new protocol: nfnetlink / nf-queue * new protocol: generic * threadless sockets for all the protocols * 0.2.16 * prepare the transition to 0.3.x * 0.2.15 * ipdb: fr #63 -- interface settings freeze * ipdb: fr #50, #51 -- bridge & bond options (initial version) * RHEL7 support * [fix]: #52 -- HTB: correct rtab compilation * [fix]: #53 -- RHEL6.5 bridge races * [fix]: #55 -- IPv6 on bridges * [fix]: #58 -- vlans as bridge ports * [fix]: #59 -- threads sync in iocore * 0.2.14 * [fix]: #44 -- incorrect netlink exceptions proxying * [fix]: #45 -- multiple issues with device targets * [fix]: #46 -- consistent exceptions * ipdb: LinkedSet cascade updates fixed * ipdb: allow to reuse existing interface in `create()` * 0.2.13 * [fix]: #43 -- pipe leak in the main I/O loop * tests: integrate examples, import into tests * iocore: use own TimeoutException instead of Queue.Empty * iproute: default routing table = 254 * iproute: flush_routes() routine * iproute: fwmark parameter for rule() routine * iproute: destination and mask for rules * docs: netlink development guide * 0.2.12 * [fix]: #33 -- release resources only for bound sockets * [fix]: #37 -- fix commit targets * rtnl: HFSC support * rtnl: priomap fixed * 0.2.11 * ipdb: watchdogs to sync on RTNL events * ipdb: fix commit errors * generic: NLA operations, complement and intersection * docs: more autodocs in the code * tests: -W error: more strict testing now * tests: cover examples by the integration testing cycle * with -W error many resource leaks were fixed * 0.2.10 * ipdb: command chaining * ipdb: fix for RHEL6.5 Python "optimizations" * rtnl: support TCA_U32_ACT * [fix]: #32 -- NLA comparison * 0.2.9 * ipdb: support bridges and bonding interfaces on RHEL * ipdb: "shadow" interfaces (still in alpha state) * ipdb: minor fixes on routing and compat issues * ipdb: as a separate package (sub-module) * docs: include ipdb autodocs * rpc: include in setup.py * 0.2.8 * netlink: allow multiple NetlinkSocket allocation from one process * netlink: fix defragmentation for netlink-over-tcp * iocore: support forked IOCore and IOBroker as a separate process * ipdb: generic callbacks support * ipdb: routing support * rtnl: #30 -- support IFLA_INFO_DATA for bond interfaces * 0.2.7 * ipdb: use separate namespaces for utility functions and other stuff * ipdb: generic callbacks (see also 
IPDB.wait_interface()) * iocore: initial multipath support * iocore: use of 16byte uuid4 for packet ids * 0.2.6 * rpc: initial version, REQ/REP, PUSH/PULL * iocore: shared IOLoop * iocore: AddrPool usage * iproute: policing in FW filter * python3 compatibility issues fixed * 0.2.4 * python3 compatibility issues fixed, tests passed * 0.2.3 * [fix]: #28 -- bundle issue * 0.2.2 * iocore: new component * iocore: separate IOCore and IOBroker * iocore: change from peer-to-peer to flat addresses * iocore: REP/REQ, PUSH/PULL * iocore: support for UDP PUSH/PULL * iocore: AddrPool component for addresses and nonces * generic: allow multiple re-encoding * 0.1.12 * ipdb: transaction commit callbacks * iproute: delete root qdisc (@chantra) * iproute: netem qdisc management (@chantra) * 0.1.11 * netlink: get qdiscs for particular interface * netlink: IPRSocket threadless objects * rtnl: u32 policy setup * iproute: filter actions, such as `ok`, `drop` and so on * iproute: changed syntax of commands, `action` → `command` * tests: htb, tbf tests added * 0.1.10 * [fix]: #8 -- default route fix, routes filtering * [fix]: #9 -- add/delete route routine improved * [fix]: #10 -- shutdown sequence fixed * [fix]: #11 -- close IPC pipes on release() * [fix]: #12 -- stop service threads on release() * netlink: debug mode added to be used with GUI * ipdb: interface removal * ipdb: fail on transaction sync timeout * tests: R/O mode added, use `export PYROUTE2_TESTS_RO=True` * 0.1.9 * tests: all races fixed * ipdb: half-sync commit(): wait for IPs and ports lists update * netlink: use pipes for in-process communication * Python 2.6 compatibility issue: remove copy.deepcopy() usage * QPython 2.7 for Android: works * 0.1.8 * complete refactoring of class names * Python 2.6 compatibility issues * tests: code coverage, multiple code fixes * plugins: ptrace message source * packaging: RH package * 0.1.7 * ipdb: interface creation: dummy, bond, bridge, vlan * ipdb: if\_slaves interface obsoleted * ipdb: 'direct' mode * iproute: code refactored * examples: create() examples committed * 0.1.6 * netlink: tc ingress, sfq, tbf, htb, u32 partial support * ipdb: completely re-implemented transactional model (see docs) * generic: internal fields declaration API changed for nlmsg * tests: first unit tests committed * 0.1.5 * netlink: dedicated io buffering thread * netlink: messages reassembling * netlink: multi-uplink remote * netlink: masquerade remote requests * ipdb: represent interfaces hierarchy * iproute: decode VLAN info * 0.1.4 * netlink: remote netlink access * netlink: SSL/TLS server/client auth support * netlink: tcp and unix transports * docs: started sphinx docs * 0.1.3 * ipdb: context manager interface * ipdb: [fix] correctly handle ip addr changes in transaction * ipdb: [fix] make up()/down() methods transactional [#1] * iproute: mirror packets to 0 queue * iproute: [fix] handle primary ip address removal response * 0.1.2 * initial ipdb version * iproute fixes * 0.1.1 * initial release, iproute module pyroute2-0.5.9/LICENSE.Apache.v20000644000175000017500000002612413610051400015740 0ustar peetpeet00000000000000 Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. 
"Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files. "Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. 
Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. 
This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. Copyright 2016 Peter V. Saveliev Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. 
pyroute2-0.5.9/LICENSE.GPL.v20000644000175000017500000004325413610051400015204 0ustar peetpeet00000000000000 GNU GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1989, 1991 Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This General Public License applies to most of the Free Software Foundation's software and to any other program whose authors commit to using it. (Some other Free Software Foundation software is covered by the GNU Lesser General Public License instead.) You can apply it to your programs, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the software, or if you modify it. For example, if you distribute copies of such a program, whether gratis or for a fee, you must give the recipients all the rights that you have. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. We protect your rights with two steps: (1) copyright the software, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the software. Also, for each author's protection and ours, we want to make certain that everyone understands that there is no warranty for this free software. If the software is modified by someone else and passed on, we want its recipients to know that what they have is not the original, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that redistributors of a free program will individually obtain patent licenses, in effect making the program proprietary. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. The precise terms and conditions for copying, distribution and modification follow. GNU GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License applies to any program or other work which contains a notice placed by the copyright holder saying it may be distributed under the terms of this General Public License. The "Program", below, refers to any such program or work, and a "work based on the Program" means either the Program or any derivative work under copyright law: that is to say, a work containing the Program or a portion of it, either verbatim or with modifications and/or translated into another language. 
(Hereinafter, translation is included without limitation in the term "modification".) Each licensee is addressed as "you". Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running the Program is not restricted, and the output from the Program is covered only if its contents constitute a work based on the Program (independent of having been made by running the Program). Whether that is true depends on what the Program does. 1. You may copy and distribute verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and give any other recipients of the Program a copy of this License along with the Program. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Program or any portion of it, thus forming a work based on the Program, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) You must cause the modified files to carry prominent notices stating that you changed the files and the date of any change. b) You must cause any work that you distribute or publish, that in whole or in part contains or is derived from the Program or any part thereof, to be licensed as a whole at no charge to all third parties under the terms of this License. c) If the modified program normally reads commands interactively when run, you must cause it, when started running for such interactive use in the most ordinary way, to print or display an announcement including an appropriate copyright notice and a notice that there is no warranty (or else, saying that you provide a warranty) and that users may redistribute the program under these conditions, and telling the user how to view a copy of this License. (Exception: if the Program itself is interactive but does not normally print such an announcement, your work based on the Program is not required to print an announcement.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Program, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Program, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Program. In addition, mere aggregation of another work not based on the Program with the Program (or with a work based on the Program) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. 
You may copy and distribute the Program (or a work based on it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you also do one of the following: a) Accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, b) Accompany it with a written offer, valid for at least three years, to give any third party, for a charge no more than your cost of physically performing source distribution, a complete machine-readable copy of the corresponding source code, to be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange; or, c) Accompany it with the information you received as to the offer to distribute corresponding source code. (This alternative is allowed only for noncommercial distribution and only if you received the program in object code or executable form with such an offer, in accord with Subsection b above.) The source code for a work means the preferred form of the work for making modifications to it. For an executable work, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the executable. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. If distribution of executable or object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place counts as distribution of the source code, even though third parties are not compelled to copy the source along with the object code. 4. You may not copy, modify, sublicense, or distribute the Program except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense or distribute the Program is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 5. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Program or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Program (or any work based on the Program), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Program or works based on it. 6. Each time you redistribute the Program (or any work based on the Program), the recipient automatically receives a license from the original licensor to copy, distribute or modify the Program subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 7. 
If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Program at all. For example, if a patent license would not permit royalty-free redistribution of the Program by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Program. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system, which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 8. If the distribution and/or use of the Program is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Program under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 9. The Free Software Foundation may publish revised and/or new versions of the General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of this License, you may choose any version ever published by the Free Software Foundation. 10. If you wish to incorporate parts of the Program into other free programs whose distribution conditions are different, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. 
EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details. You should have received a copy of the GNU General Public License along with this program; if not, write to the Free Software Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA. Also add information on how to contact you by electronic and paper mail. If the program is interactive, make it output a short notice like this when it starts in an interactive mode: Gnomovision version 69, Copyright (C) year name of author Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'. This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details. The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, the commands you use may be called something other than `show w' and `show c'; they could even be mouse-clicks or menu items--whatever suits your program. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the program, if necessary. Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the program `Gnomovision' (which makes passes at compilers) written by James Hacker. , 1 April 1989 Ty Coon, President of Vice This General Public License does not permit incorporating your program into proprietary programs. 
If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. pyroute2-0.5.9/PKG-INFO0000644000175000017500000002216113621220110014315 0ustar peetpeet00000000000000Metadata-Version: 1.1 Name: pyroute2 Version: 0.5.9 Summary: Python Netlink library Home-page: https://github.com/svinota/pyroute2 Author: Peter V. Saveliev Author-email: peter@svinota.eu License: dual license GPLv2+ and Apache v2 Description: Pyroute2 ======== Pyroute2 is a pure Python **netlink** library. It requires only Python stdlib, no 3rd party libraries. The library was started as an RTNL protocol implementation, so the name is **pyroute2**, but now it supports many netlink protocols. Some supported netlink families and protocols: * **rtnl**, network settings --- addresses, routes, traffic controls * **nfnetlink** --- netfilter API: **ipset**, **nftables**, ... * **ipq** --- simplest userspace packet filtering, iptables QUEUE target * **devlink** --- manage and monitor devlink-enabled hardware * **generic** --- generic netlink families * **ethtool** --- low-level network interface setup * **nl80211** --- wireless functions API (basic support) * **taskstats** --- extended process statistics * **acpi_events** --- ACPI events monitoring * **thermal_events** --- thermal events monitoring * **VFS_DQUOT** --- disk quota events monitoring Latest important milestones: * 0.5.8 --- **Ethtool** support * 0.5.7 --- **WireGuard** support * 0.5.2 --- **PF_ROUTE** support on FreeBSD and OpenBSD Supported systems ----------------- Pyroute2 runs natively on Linux and emulates some limited subset of RTNL netlink API on BSD systems on top of PF_ROUTE notifications and standard system tools. Other platforms are not supported. The simplest usecase -------------------- The objects, provided by the library, are socket objects with an extended API. The additional functionality aims to: * Help to open/bind netlink sockets * Discover generic netlink protocols and multicast groups * Construct, encode and decode netlink and PF_ROUTE messages Maybe the simplest usecase is to monitor events. Disk quota events:: from pyroute2 import DQuotSocket # DQuotSocket automatically performs discovery and binding, # since it has no other functionality beside of the monitoring with DQuotSocket() as ds: for message in ds.get(): print(message) Get notifications about network settings changes with IPRoute:: from pyroute2 import IPRoute with IPRoute() as ipr: # With IPRoute objects you have to call bind() manually ipr.bind() for message in ipr.get(): print(message) RTNetlink examples ------------------ More samples you can read in the project documentation. Low-level **IPRoute** utility --- Linux network configuration. The **IPRoute** class is a 1-to-1 RTNL mapping. There are no implicit interface lookups and so on. 
Some examples:: from socket import AF_INET from pyroute2 import IPRoute # get access to the netlink socket ip = IPRoute() # no monitoring here -- thus no bind() # print interfaces print(ip.get_links()) # create VETH pair and move v0p1 to netns 'test' ip.link('add', ifname='v0p0', peer='v0p1', kind='veth') idx = ip.link_lookup(ifname='v0p1')[0] ip.link('set', index=idx, net_ns_fd='test') # bring v0p0 up and add an address idx = ip.link_lookup(ifname='v0p0')[0] ip.link('set', index=idx, state='up') ip.addr('add', index=idx, address='10.0.0.1', broadcast='10.0.0.255', prefixlen=24) # create a route with metrics ip.route('add', dst='172.16.0.0/24', gateway='10.0.0.10', metrics={'mtu': 1400, 'hoplimit': 16}) # create MPLS lwtunnel # $ sudo modprobe mpls_iptunnel ip.route('add', dst='172.16.0.0/24', oif=idx, encap={'type': 'mpls', 'labels': '200/300'}) # create MPLS route: push label # $ sudo modprobe mpls_router # $ sudo sysctl net.mpls.platform_labels=1024 ip.route('add', family=AF_MPLS, oif=idx, dst=0x200, newdst=[0x200, 0x300]) # create SEG6 tunnel encap mode # Kernel >= 4.10 ip.route('add', dst='2001:0:0:10::2/128', oif=idx, encap={'type': 'seg6', 'mode': 'encap', 'segs': '2000::5,2000::6'}) # create SEG6 tunnel inline mode # Kernel >= 4.10 ip.route('add', dst='2001:0:0:10::2/128', oif=idx, encap={'type': 'seg6', 'mode': 'inline', 'segs': ['2000::5', '2000::6']}) # create SEG6 tunnel with ip4ip6 encapsulation # Kernel >= 4.14 ip.route('add', dst='172.16.0.0/24', oif=idx, encap={'type': 'seg6', 'mode': 'encap', 'segs': '2000::5,2000::6'}) # release Netlink socket ip.close() The project contains several modules for different types of netlink messages, not only RTNL. Network namespace examples -------------------------- Network namespace manipulation:: from pyroute2 import netns # create netns netns.create('test') # list print(netns.listnetns()) # remove netns netns.remove('test') Create **veth** interfaces pair and move to **netns**:: from pyroute2 import IPRoute with IPRoute() as ipr: # create interface pair ipr.link('add', ifname='v0p0', kind='veth', peer='v0p1') # lookup the peer index idx = ipr.link_lookup(ifname='v0p1')[0] # move the peer to the 'test' netns: ipr.link('set', index='v0p1', net_ns_fd='test') List interfaces in some **netns**:: from pyroute2 import NetNS from pprint import pprint ns = NetNS('test') pprint(ns.get_links()) ns.close() More details and samples see in the documentation. 
Installation ------------ `make install` or `pip install pyroute2` Requirements ------------ Python >= 2.7 The pyroute2 testing framework requirements: * flake8 * coverage * nosetests * sphinx * netaddr Optional dependencies for testing: * eventlet * mitogen * bottle * team (http://libteam.org/) Links ----- * home: https://pyroute2.org/ * srcs: https://github.com/svinota/pyroute2 * bugs: https://github.com/svinota/pyroute2/issues * pypi: https://pypi.python.org/pypi/pyroute2 * docs: http://docs.pyroute2.org/ * list: https://groups.google.com/d/forum/pyroute2-dev Platform: UNKNOWN Classifier: License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+) Classifier: License :: OSI Approved :: Apache Software License Classifier: Programming Language :: Python Classifier: Topic :: Software Development :: Libraries :: Python Modules Classifier: Topic :: System :: Networking Classifier: Topic :: System :: Systems Administration Classifier: Operating System :: POSIX :: Linux Classifier: Intended Audience :: Developers Classifier: Intended Audience :: System Administrators Classifier: Intended Audience :: Telecommunications Industry Classifier: Programming Language :: Python :: 2.6 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Development Status :: 4 - Beta pyroute2-0.5.9/README.license.md0000644000175000017500000000051213610051400016116 0ustar peetpeet00000000000000The pyroute2 package has been dual licensed since 0.3.6 under two licenses: * GPL v2+ * Apache v2 This means that when writing derived code, or when including the library in a distribution, you are free to choose either license from the list above. The Apache v2 license was included to make the code compatible with the OpenStack project. pyroute2-0.5.9/README.make.md0000644000175000017500000000571713610051400015425 0ustar peetpeet00000000000000Makefile documentation ====================== The Makefile is used to automate pyroute2 deployment and test processes. Mostly, it is a collection of common commands. target: clean ------------- Clean up the repo directory: remove the built documentation, collected coverage data, compiled bytecode etc. target: docs ------------ Build documentation. Requires `sphinx`. target: test ------------ Run tests against the current code. Command line options: * python -- path to the Python to use * nosetests -- path to nosetests to use * wlevel -- the Python -W level * coverage -- set `coverage=html` to get coverage report * pdb -- set `pdb=true` to launch pdb on errors * module -- run only a specific test module * skip -- skip tests by pattern * loop -- number of test iterations for each module * report -- url to submit reports to (see tests/collector.py) * worker -- the worker id To run the full test cycle on the project, using a specific python and producing an HTML coverage report:: $ sudo make test python=python3 coverage=html To run a specific test module:: $ sudo make test module=general:test_ipdb.py:TestExplicit The module parameter syntax:: ## module=package[:test_file.py[:TestClass[.test_case]]] $ sudo make test module=lnst $ sudo make test module=general:test_ipr.py $ sudo make test module=general:test_ipdb.py:TestExplicit There are several test packages: * general -- common functional tests * eventlet -- Neutron compatibility tests * lnst -- LNST compatibility tests For each package a new Python instance is launched; keep that in mind since it affects the code coverage collection.
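The options above can also be combined; for instance, to collect an HTML coverage report for a single test package only (a hypothetical but typical combination of the documented options)::

    $ sudo make test python=python3 coverage=html module=general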
It is possible to skip tests by a pattern:: $ sudo make test skip=test_stress To run tests in a loop, use the loop parameter:: $ sudo make test loop=10 For every iteration the code will be packed again with `make dist` and checked against PEP8. All the statistic may be collected with a simple web-script, see `tests/collector.py` (requires the bottle framework). To retrieve the collected data one can use curl:: $ sudo make test report=http://localhost:8080/v1/report/ $ curl http://localhost:8080/v1/report/ | python -m json.tool target: dist ------------ Make Python distribution package. Command line options: * python -- the Python to use target: install --------------- Build and install the package into the system. Command line options: * python -- the Python to use * root -- root install directory * lib -- where to install lib files target: develop --------------- Build the package and deploy the egg-link with setuptools. No code will be deployed into the system directories, but instead the local package directory will be visible to the python. In that case one can change the code locally and immediately test it system-wide without running `make install`. * python -- the Python to use other targets ------------- Other targets are either utility targets to be used internally, or hooks for related projects. You can safely ignore them. pyroute2-0.5.9/README.md0000644000175000017500000001436213621213504014514 0ustar peetpeet00000000000000Pyroute2 ======== Pyroute2 is a pure Python **netlink** library. It requires only Python stdlib, no 3rd party libraries. The library was started as an RTNL protocol implementation, so the name is **pyroute2**, but now it supports many netlink protocols. Some supported netlink families and protocols: * **rtnl**, network settings --- addresses, routes, traffic controls * **nfnetlink** --- netfilter API: **ipset**, **nftables**, ... * **ipq** --- simplest userspace packet filtering, iptables QUEUE target * **devlink** --- manage and monitor devlink-enabled hardware * **generic** --- generic netlink families * **ethtool** --- low-level network interface setup * **nl80211** --- wireless functions API (basic support) * **taskstats** --- extended process statistics * **acpi_events** --- ACPI events monitoring * **thermal_events** --- thermal events monitoring * **VFS_DQUOT** --- disk quota events monitoring Latest important milestones: * 0.5.8 --- **Ethtool** support * 0.5.7 --- **WireGuard** support * 0.5.2 --- **PF_ROUTE** support on FreeBSD and OpenBSD Supported systems ----------------- Pyroute2 runs natively on Linux and emulates some limited subset of RTNL netlink API on BSD systems on top of PF_ROUTE notifications and standard system tools. Other platforms are not supported. The simplest usecase -------------------- The objects, provided by the library, are socket objects with an extended API. The additional functionality aims to: * Help to open/bind netlink sockets * Discover generic netlink protocols and multicast groups * Construct, encode and decode netlink and PF_ROUTE messages Maybe the simplest usecase is to monitor events. 
Disk quota events::

    from pyroute2 import DQuotSocket

    # DQuotSocket automatically performs discovery and binding,
    # since it has no other functionality besides monitoring
    with DQuotSocket() as ds:
        for message in ds.get():
            print(message)

Get notifications about network settings changes with IPRoute::

    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        # With IPRoute objects you have to call bind() manually
        ipr.bind()
        for message in ipr.get():
            print(message)

RTNetlink examples
------------------

You can find more samples in the project documentation.

Low-level **IPRoute** utility --- Linux network configuration. The
**IPRoute** class is a 1-to-1 RTNL mapping. There are no implicit
interface lookups and so on. Some examples::

    from pyroute2 import IPRoute
    # AF_MPLS is not provided by the socket module
    from pyroute2.common import AF_MPLS

    # get access to the netlink socket
    ip = IPRoute()
    # no monitoring here -- thus no bind()

    # print interfaces
    print(ip.get_links())

    # create VETH pair and move v0p1 to netns 'test'
    ip.link('add', ifname='v0p0', peer='v0p1', kind='veth')
    idx = ip.link_lookup(ifname='v0p1')[0]
    ip.link('set', index=idx, net_ns_fd='test')

    # bring v0p0 up and add an address
    idx = ip.link_lookup(ifname='v0p0')[0]
    ip.link('set', index=idx, state='up')
    ip.addr('add', index=idx, address='10.0.0.1',
            broadcast='10.0.0.255', prefixlen=24)

    # create a route with metrics
    ip.route('add', dst='172.16.0.0/24', gateway='10.0.0.10',
             metrics={'mtu': 1400, 'hoplimit': 16})

    # create MPLS lwtunnel
    # $ sudo modprobe mpls_iptunnel
    ip.route('add', dst='172.16.0.0/24', oif=idx,
             encap={'type': 'mpls', 'labels': '200/300'})

    # create MPLS route: push label
    # $ sudo modprobe mpls_router
    # $ sudo sysctl net.mpls.platform_labels=1024
    ip.route('add', family=AF_MPLS, oif=idx,
             dst=0x200, newdst=[0x200, 0x300])

    # create SEG6 tunnel encap mode
    # Kernel >= 4.10
    ip.route('add', dst='2001:0:0:10::2/128', oif=idx,
             encap={'type': 'seg6', 'mode': 'encap',
                    'segs': '2000::5,2000::6'})

    # create SEG6 tunnel inline mode
    # Kernel >= 4.10
    ip.route('add', dst='2001:0:0:10::2/128', oif=idx,
             encap={'type': 'seg6', 'mode': 'inline',
                    'segs': ['2000::5', '2000::6']})

    # create SEG6 tunnel with ip4ip6 encapsulation
    # Kernel >= 4.14
    ip.route('add', dst='172.16.0.0/24', oif=idx,
             encap={'type': 'seg6', 'mode': 'encap',
                    'segs': '2000::5,2000::6'})

    # release Netlink socket
    ip.close()

The project contains several modules for different types of netlink
messages, not only RTNL.

Network namespace examples
--------------------------

Network namespace manipulation::

    from pyroute2 import netns
    # create netns
    netns.create('test')
    # list
    print(netns.listnetns())
    # remove netns
    netns.remove('test')

Create a **veth** interface pair and move one end to a **netns**::

    from pyroute2 import IPRoute

    with IPRoute() as ipr:

        # create interface pair
        ipr.link('add', ifname='v0p0', kind='veth', peer='v0p1')

        # lookup the peer index
        idx = ipr.link_lookup(ifname='v0p1')[0]

        # move the peer to the 'test' netns
        ipr.link('set', index=idx, net_ns_fd='test')

List interfaces in some **netns**::

    from pyroute2 import NetNS
    from pprint import pprint

    ns = NetNS('test')
    pprint(ns.get_links())
    ns.close()

See the documentation for more details and samples.
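The same low-level API can also be used to read state back, not only to
change it. The following is a minimal sketch, assuming only the `get_addr()`
and `get_routes()` calls of **IPRoute**; the loopback label and the printed
attributes are just an illustration::

    from socket import AF_INET
    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        # IPv4 addresses on the loopback interface ('lo' is just an example)
        for msg in ipr.get_addr(family=AF_INET, label='lo'):
            print(msg.get_attr('IFA_ADDRESS'), msg['prefixlen'])

        # IPv4 routes; RTA_DST is None for the default route
        for msg in ipr.get_routes(family=AF_INET):
            print(msg.get_attr('RTA_DST'), msg.get_attr('RTA_GATEWAY'))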
Installation
------------

`make install` or `pip install pyroute2`

Requirements
------------

Python >= 2.7

The pyroute2 testing framework requirements:

* flake8
* coverage
* nosetests
* sphinx
* netaddr

Optional dependencies for testing:

* eventlet
* mitogen
* bottle
* team (http://libteam.org/)

Links
-----

* home: https://pyroute2.org/
* srcs: https://github.com/svinota/pyroute2
* bugs: https://github.com/svinota/pyroute2/issues
* pypi: https://pypi.python.org/pypi/pyroute2
* docs: http://docs.pyroute2.org/
* list: https://groups.google.com/d/forum/pyroute2-dev
pyroute2-0.5.9/README.report.md0000644000175000017500000000132413610051400016011 0ustar peetpeet00000000000000Report a bug
============

If you have issues, please report them to the project bug tracker:
https://github.com/svinota/pyroute2/issues

It is important to provide all the required information with your report:

* Linux kernel version
* Python version
* Specific environment, if used -- gevent, eventlet etc.

Sometimes specific system parameters need to be measured. There is code
to do that, e.g.::

    $ sudo make test-platform

Please keep in mind that this command will try to create and delete
different interface types.

It is also possible to run the test from your code::

    from pprint import pprint
    from pyroute2.config.test_platform import TestCapsRtnl
    pprint(TestCapsRtnl().collect())
pyroute2-0.5.9/cli/0000755000175000017500000000000013621220110013765 5ustar peetpeet00000000000000pyroute2-0.5.9/cli/pyroute2-cli0000755000175000017500000000223313610051400016253 0ustar peetpeet00000000000000#!/usr/bin/env python
import json
import argparse
from pyroute2 import Console
from pyroute2 import Server

argp = argparse.ArgumentParser()
for spec in (('-a', '[S] IP address to listen on'),
             ('-c', '[C] Command line to run'),
             ('-l', '[C,S] Log spec'),
             ('-m', 'set mode (C,S)'),
             ('-p', '[S] Port to listen on'),
             ('-r', '[C] Load rc file'),
             ('-s', '[C,S] Load sources from a json file')):
    argp.add_argument(spec[0], help=spec[1])
args = argp.parse_args()

commands = []
sources = None

if args.s:
    with open(args.s, 'r') as f:
        sources = json.loads(f.read())

if args.m in ('S', 'server'):
    if args.p:
        port = int(args.p)
    else:
        port = 8080
    server = Server(address=args.a or 'localhost',
                    port=port,
                    log=args.l,
                    sources=sources)
    server.serve_forever()
else:
    console = Console(log=args.l, sources=sources)
    if args.r:
        console.loadrc(args.r)
    if args.c:
        commands.append(args.c)
        console.interact(readfunc=lambda x: commands.pop(0))
    else:
        console.interact()
pyroute2-0.5.9/cli/ss20000644000175000017500000005021313610051400014422 0ustar peetpeet00000000000000#!/usr/bin/env python
# pyroute2 - ss2
# Copyright (C) 2018 Matthias Tafelmeier
# ss2 is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# ss2 is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
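# Illustrative invocations (a sketch based on the argparse options defined
# in prepare_args() below; flags can be combined, output is printed as JSON):
#
#   ./ss2 -t          # TCP sockets
#   ./ss2 -x -p       # UNIX sockets with the holding process context
#   ./ss2 -t -l -r    # listening TCP sockets, resolving host names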
import json import socket import re import os import argparse from socket import (AF_INET, AF_UNIX ) import collections import psutil from pyroute2 import DiagSocket from pyroute2.netlink.diag import (SS_ESTABLISHED, SS_SYN_SENT, SS_SYN_RECV, SS_FIN_WAIT1, SS_FIN_WAIT2, SS_TIME_WAIT, SS_CLOSE, SS_CLOSE_WAIT, SS_LAST_ACK, SS_LISTEN, SS_CLOSING, SS_ALL, SS_CONN) from pyroute2.netlink.diag import (UDIAG_SHOW_NAME, UDIAG_SHOW_VFS, UDIAG_SHOW_PEER) # UDIAG_SHOW_ICONS, # UDIAG_SHOW_RQLEN, # UDIAG_SHOW_MEMINFO class UserCtxtMap(collections.Mapping): _data = {} _sk_inode_re = re.compile(r"socket:\[(?P\d+)\]") _proc_sk_fd_cast = "/proc/%d/fd/%d" _BUILD_RECURS_PATH = ["inode", "usr", "pid", "fd"] def _parse_inode(self, sconn): sk_path = self._proc_sk_fd_cast % (sconn.pid, sconn.fd) inode = None sk_inode_raw = os.readlink(sk_path) inode = self._sk_inode_re.search(sk_inode_raw).group('ino') if not inode: raise RuntimeError("Unexpected kernel sk inode outline") return inode def __recurs_enter(self, _sk_inode=None, _sk_fd=None, _usr=None, _pid=None, _ctxt=None, _recurs_path=[]): step = _recurs_path.pop(0) if self._BUILD_RECURS_PATH[0] == step: if _sk_inode not in self._data.keys(): self._data[_sk_inode] = {} elif self._BUILD_RECURS_PATH[1] == step: if _usr not in self._data[_sk_inode].keys(): self._data[_sk_inode][_usr] = {} elif self._BUILD_RECURS_PATH[2] == step: if _pid not in self._data[_sk_inode][_usr].keys(): self._data[_sk_inode][_usr].__setitem__(_pid, _ctxt) elif self._BUILD_RECURS_PATH[3] == step: self._data[_sk_inode][_usr][_pid]["fds"].append(_sk_fd) # end recursion return else: raise RuntimeError("Unexpected step in recursion") # descend self.__recurs_enter(_sk_inode=_sk_inode, _sk_fd=_sk_fd, _usr=_usr, _pid=_pid, _ctxt=_ctxt, _recurs_path=_recurs_path) def _enter_item(self, usr, flow, ctxt): if not flow.pid: # corner case of eg anonnymous AddressFamily.AF_UNIX # sockets return sk_inode = int(self._parse_inode(flow)) sk_fd = flow.fd recurs_path = list(self._BUILD_RECURS_PATH) self.__recurs_enter(_sk_inode=sk_inode, _sk_fd=sk_fd, _usr=usr, _pid=flow.pid, _ctxt=ctxt, _recurs_path=recurs_path) def _build(self): for flow in psutil.net_connections(kind="all"): proc = psutil.Process(flow.pid) usr = proc.username() ctxt = {"cmd": proc.exe(), "full_cmd": proc.cmdline(), "fds": []} self._enter_item(usr, flow, ctxt) def __init__(self): self._build() def __getitem__(self, key): return self._data[key] def __len__(self): return len(self._data.keys()) def __delitem__(self, key): raise RuntimeError("Not implemented") def __iter__(self): raise RuntimeError("Not implemented") class Protocol(collections.Callable): class Resolver: @staticmethod def getHost(ip): try: data = socket.gethostbyaddr(ip) host = str(data[0]) return host except Exception: # gracefully return None def __init__(self, sk_states, fmt='json'): self._states = sk_states fmter = "_fmt_%s" % fmt self._fmt = getattr(self, fmter, None) def __call__(self, nl_diag_sk, args, usr_ctxt): raise RuntimeError('not implemented') def _fmt_json(self, refined_stats): return json.dumps(refined_stats, indent=4) class UNIX(Protocol): def __init__(self, sk_states=SS_CONN, _fmt='json'): super(UNIX, self).__init__(sk_states, fmt=_fmt) def __call__(self, nl_diag_sk, args, usr_ctxt): sstats = nl_diag_sk.get_sock_stats(states=self._states, family=AF_UNIX, show=(UDIAG_SHOW_NAME | UDIAG_SHOW_VFS | UDIAG_SHOW_PEER)) refined_stats = self._refine_diag_raw(sstats, usr_ctxt) printable = self._fmt(refined_stats) print(printable) def _refine_diag_raw(self, raw_stats, 
usr_ctxt): refined = {'UNIX': {'flows': []}} def vfs_cb(raw_val): out = {} out['inode'] = raw_val['udiag_vfs_ino'] out['dev'] = raw_val['udiag_vfs_dev'] return out k_idx = 0 val_idx = 1 cb_idx = 1 idiag_attr_refine_map = {'UNIX_DIAG_NAME': ('path_name', None), 'UNIX_DIAG_VFS': ('vfs', vfs_cb), 'UNIX_DIAG_PEER': ('peer_inode', None), 'UNIX_DIAG_SHUTDOWN': ('shutdown', None)} for raw_flow in raw_stats: vessel = {} vessel['inode'] = raw_flow['udiag_ino'] for attr in raw_flow['attrs']: attr_k = attr[k_idx] attr_val = attr[val_idx] k = idiag_attr_refine_map[attr_k][k_idx] cb = idiag_attr_refine_map[attr_k][cb_idx] if cb: attr_val = cb(attr_val) vessel[k] = attr_val refined['UNIX']['flows'].append(vessel) if usr_ctxt: for flow in refined['UNIX']['flows']: try: sk_inode = flow['inode'] flow['usr_ctxt'] = usr_ctxt[sk_inode] except KeyError: # might define sentinel val pass return refined class TCP(Protocol): INET_DIAG_MEMINFO = 1 INET_DIAG_INFO = 2 INET_DIAG_VEGASINFO = 3 INET_DIAG_CONG = 4 def __init__(self, sk_states=SS_CONN, _fmt='json'): super(TCP, self).__init__(sk_states, fmt=_fmt) IDIAG_EXT_FLAGS = [self.INET_DIAG_MEMINFO, self.INET_DIAG_INFO, self.INET_DIAG_VEGASINFO, self.INET_DIAG_CONG] self.ext_f = 0 for f in IDIAG_EXT_FLAGS: self.ext_f |= (1 << (f - 1)) def __call__(self, nl_diag_sk, args, usr_ctxt): sstats = nl_diag_sk.get_sock_stats(states=self._states, family=AF_INET, extensions=self.ext_f) refined_stats = self._refine_diag_raw(sstats, args.resolve, usr_ctxt) printable = self._fmt(refined_stats) print(printable) def _refine_diag_raw(self, raw_stats, do_resolve, usr_ctxt): refined = {'TCP': {'flows': []}} idiag_refine_map = {'src': 'idiag_src', 'dst': 'idiag_dst', 'src_port': 'idiag_sport', 'dst_port': 'idiag_dport', 'inode': 'idiag_inode', 'iface_idx': 'idiag_if', 'retrans': 'idiag_retrans'} for raw_flow in raw_stats: vessel = {} for k1, k2 in idiag_refine_map.items(): vessel[k1] = raw_flow[k2] for ext_bundle in raw_flow['attrs']: vessel = self._refine_extension(vessel, ext_bundle) refined['TCP']['flows'].append(vessel) if usr_ctxt: for flow in refined['TCP']['flows']: try: sk_inode = flow['inode'] flow['usr_ctxt'] = usr_ctxt[sk_inode] except KeyError: # might define sentinel val pass if do_resolve: for flow in refined['TCP']['flows']: src_host = Protocol.Resolver.getHost(flow['src']) if src_host: flow['src_host'] = src_host dst_host = Protocol.Resolver.getHost(flow['dst']) if dst_host: flow['dst_host'] = dst_host return refined def _refine_extension(self, vessel, raw_ext): k, content = raw_ext ext_refine_map = {'meminfo': {'r': 'idiag_rmem', 'w': 'idiag_wmem', 'f': 'idiag_fmem', 't': 'idiag_tmem'}} if k == 'INET_DIAG_MEMINFO': mem_k = 'meminfo' vessel[mem_k] = {} for k1, k2 in ext_refine_map[mem_k].items(): vessel[mem_k][k1] = content[k2] elif k == 'INET_DIAG_CONG': vessel['cong_algo'] = content elif k == 'INET_DIAG_INFO': vessel = self._refine_tcp_info(vessel, content) elif k == 'INET_DIAG_SHUTDOWN': pass return vessel # interim approach # tcpinfo call backs class InfoCbCore: # normalizer @staticmethod def rto_n_cb(key, value, **ctx): out = None if value != 3000000: out = value / 1000.0 return out @staticmethod def generic_1k_n_cb(key, value, **ctx): return value / 1000.0 # predicates @staticmethod def snd_thresh_p_cb(key, value, **ctx): if value < 0xFFFF: return value return None @staticmethod def rtt_p_cb(key, value, **ctx): tcp_info_raw = ctx['raw'] try: if tcp_info_raw['tcpv_enabled'] != 0 and \ tcp_info_raw['tcpv_rtt'] != 0x7fffffff: return tcp_info_raw['tcpv_rtt'] except 
KeyError: # ill practice, yet except quicker path pass return tcp_info_raw['tcpi_rtt'] / 1000.0 # converter @staticmethod def state_c_cb(key, value, **ctx): state_str_map = {SS_ESTABLISHED: "established", SS_SYN_SENT: "syn-sent", SS_SYN_RECV: "syn-recv", SS_FIN_WAIT1: "fin-wait-1", SS_FIN_WAIT2: "fin-wait-2", SS_TIME_WAIT: "time-wait", SS_CLOSE: "unconnected", SS_CLOSE_WAIT: "close-wait", SS_LAST_ACK: "last-ack", SS_LISTEN: "listening", SS_CLOSING: "closing"} return state_str_map[value] @staticmethod def opts_c_cb(key, value, **ctx): tcp_info_raw = ctx['raw'] # tcp_info opt flags TCPI_OPT_TIMESTAMPS = 1 TCPI_OPT_SACK = 2 TCPI_OPT_ECN = 8 out = [] opts = tcp_info_raw['tcpi_options'] if (opts & TCPI_OPT_TIMESTAMPS): out.append("ts") if (opts & TCPI_OPT_SACK): out.append("sack") if (opts & TCPI_OPT_ECN): out.append("ecn") return out def _refine_tcp_info(self, vessel, tcp_info_raw): ti = TCP.InfoCbCore info_refine_tabl = {'tcpi_state': ('state', ti.state_c_cb), 'tcpi_pmtu': ('pmtu', None), 'tcpi_retrans': ('retrans', None), 'tcpi_ato': ('ato', ti.generic_1k_n_cb), 'tcpi_rto': ('rto', ti.rto_n_cb), # TODO consider wscale baking 'tcpi_snd_wscale': ('snd_wscale', None), 'tcpi_rcv_wscale': ('rcv_wscale', None), # TODO bps baking 'tcpi_snd_mss': ('snd_mss', None), 'tcpi_snd_cwnd': ('snd_cwnd', None), 'tcpi_snd_ssthresh': ('snd_ssthresh', ti.snd_thresh_p_cb), # TODO consider rtt agglomeration - needs nesting 'tcpi_rtt': ('rtt', ti.rtt_p_cb), 'tcpi_rttvar': ('rttvar', ti.generic_1k_n_cb), 'tcpi_rcv_rtt': ('rcv_rtt', ti.generic_1k_n_cb), 'tcpi_rcv_space': ('rcv_space', None), 'tcpi_options': ('opts', ti.opts_c_cb), # unclear, NB not in use by iproute2 ss latest 'tcpi_last_data_sent': ('last_data_sent', None), 'tcpi_rcv_ssthresh': ('rcv_ssthresh', None), 'tcpi_rcv_ssthresh': ('rcv_ssthresh', None), 'tcpi_segs_in': ('segs_in', None), 'tcpi_segs_out': ('segs_out', None), 'tcpi_data_segs_in': ('data_segs_in', None), 'tcpi_data_segs_out': ('data_segs_out', None), 'tcpi_lost': ('lost', None), 'tcpi_notsent_bytes': ('notsent_bytes', None), 'tcpi_rcv_mss': ('rcv_mss', None), 'tcpi_pacing_rate': ('pacing_rate', None), 'tcpi_retransmits': ('retransmits', None), 'tcpi_min_rtt': ('min_rtt', None), 'tcpi_rwnd_limited': ('rwnd_limited', None), 'tcpi_max_pacing_rate': ('max_pacing_rate', None), 'tcpi_probes': ('probes', None), 'tcpi_reordering': ('reordering', None), 'tcpi_last_data_recv': ('last_data_recv', None), 'tcpi_bytes_received': ('bytes_received', None), 'tcpi_fackets': ('fackets', None), 'tcpi_last_ack_recv': ('last_ack_recv', None), 'tcpi_last_ack_sent': ('last_ack_sent', None), 'tcpi_unacked': ('unacked', None), 'tcpi_sacked': ('sacked', None), 'tcpi_bytes_acked': ('bytes_acked', None), 'tcpi_delivery_rate_app_limited': ('delivery_rate_app_limited', None), 'tcpi_delivery_rate': ('delivery_rate', None), 'tcpi_sndbuf_limited': ('sndbuf_limited', None), 'tcpi_ca_state': ('ca_state', None), 'tcpi_busy_time': ('busy_time', None), 'tcpi_total_retrans': ('total_retrans', None), 'tcpi_advmss': ('advmss', None), 'tcpi_backoff': (None, None), 'tcpv_enabled': (None, 'skip'), 'tcpv_rttcnt': (None, 'skip'), 'tcpv_rtt': (None, 'skip'), 'tcpv_minrtt': (None, 'skip'), # BBR 'bbr_bw_lo': ('bbr_bw_lo', None), 'bbr_bw_hi': ('bbr_bw_hi', None), 'bbr_min_rtt': ('bbr_min_rtt', None), 'bbr_pacing_gain': ('bbr_pacing_gain', None), 'bbr_cwnd_gain': ('bbr_cwnd_gain', None), # DCTCP 'dctcp_enabled': ('dctcp_enabled', None), 'dctcp_ce_state': ('dctcp_ce_state', None), 'dctcp_alpha': ('dctcp_alpha', None), 'dctcp_ab_ecn': 
('dctcp_ab_ecn', None), 'dctcp_ab_tot': ('dctcp_ab_tot', None)} k_idx = 0 cb_idx = 1 info_k = 'tcp_info' vessel[info_k] = {} # BUG - pyroute2 diag - seems always last info instance from kernel if type(tcp_info_raw) != str: for k, v in tcp_info_raw.items(): refined_k = info_refine_tabl[k][k_idx] cb = info_refine_tabl[k][cb_idx] refined_v = v if cb and cb == 'skip': continue elif cb: ctx = {'raw': tcp_info_raw} refined_v = cb(k, v, **ctx) vessel[info_k][refined_k] = refined_v return vessel def prepare_args(): parser = argparse.ArgumentParser(description=""" ss2 - socket statistics depictor meant as a complete and convenient surrogate for iproute2/misc/ss2""") parser.add_argument('-x', '--unix', help='Display Unix domain sockets.', action='store_true') parser.add_argument('-t', '--tcp', help='Display TCP sockets.', action='store_true') parser.add_argument('-l', '--listen', help='Display listening sockets.', action='store_true') parser.add_argument('-a', '--all', help='Display all sockets.', action='store_true') parser.add_argument('-p', '--process', help='show socket holding context', action='store_true') parser.add_argument('-r', '--resolve', help='resolve host names in addition', action='store_true') args = parser.parse_args() return args def run(args=None): if not args: args = prepare_args() _states = SS_CONN if args.listen: _states = (1 << SS_LISTEN) if args.all: _states = SS_ALL protocols = [] if args.tcp: protocols.append(TCP(sk_states=_states)) if args.unix: protocols.append(UNIX(sk_states=_states)) if not protocols: raise RuntimeError('not implemented - ss2 in fledging mode') _user_ctxt_map = None if args.process: _user_ctxt_map = UserCtxtMap() with DiagSocket() as ds: ds.bind() for p in protocols: p(ds, args, _user_ctxt_map) if __name__ == "__main__": run() pyroute2-0.5.9/docs/0000755000175000017500000000000013621220110014146 5ustar peetpeet00000000000000pyroute2-0.5.9/docs/html/0000755000175000017500000000000013621220110015112 5ustar peetpeet00000000000000pyroute2-0.5.9/docs/html/.buildinfo0000644000175000017500000000034613621220107017077 0ustar peetpeet00000000000000# Sphinx build info version 1 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. config: 9dabfc59a50e7a0755e91a82717a0a5a tags: 645f666f9bcd5a90fca523b33c5a78b7 pyroute2-0.5.9/docs/html/_images/0000755000175000017500000000000013621220110016516 5ustar peetpeet00000000000000pyroute2-0.5.9/docs/html/_images/inheritance-0c1b6bf9bf282e1cdae685cfce6a116f7a75a086.png0000644000175000017500000006147713621220106027476 0ustar peetpeet00000000000000PNG  IHDR=r}RbKGD IDATxk\׺05Q J4(ZmtxR"m=j-"VEQ^BzR*,nKFnBaNQ0!d͚5ϐ0O̚5$u -My &VzJovD۷oSw 0)?oyzz)a.==]! h[4 -D3򖇇Gbb~T!h- 0 svvNIIQtuu ƪU?~,_֯_nnnΝSQlAm-.X,ްaCPPPaaJ7tm333OOׯh[TM BBB0 +((@YXXܽ{X$0 kllJKK ˥FFFl6(?v옍w^^^ttt>Ĵxoڴu幹k֬0Cu%H0 sss*AAAYYYG&_x˗|ro\ӧ=233ʘLSSSLfRR<o; @y?_F?"tttnܸ1iҤ%_^UUenn:t萱qQQLѡP(ƍ{diizbaee͛7KKKB7n|]:jUWWo߾UVVhh --r˗/nnnk׮?zkA("&LXX^^ekkn@|$119s!F,{>N'HbW駟~sΞ=["q)))buww7QD 0㲑Qxx:;;]]]+**vze++EH$Gq8/xYWWWwww[[L&S\ӧǎ0Gl6{ڵgSSSBBBFFB,??_*/ԟB$)))**J,h46dɒ7VNNNvpp`XT*544488xر266IKK#DX |||srr8^xqԩ ,`0|B(""ۺ0 D9&-Pa My &@@I o$h[4WzÜH$ھ}Oe h[4 -My &@@I o$h[4 -My &@@I o$h[4 -My &@hxﯖ8B(,,CQ3o!҆>. !o@>oD˖-[p/zTZZUYYpd2J4iҾ}x<^{{[u W~~~,X@!|>BCqh?3Q. 
[binary data omitted: Sphinx-generated inheritance diagram images and their
HTML image maps under pyroute2-0.5.9/docs/html/_images/ --
inheritance-0c1b6bf9bf282e1cdae685cfce6a116f7a75a086.png (+ .png.map),
inheritance-3f838403cee8909e49dd6d8f69421e420ada5b44.png (+ .png.map),
inheritance-e3c5704fb116d5e1ca6c2d8505ff63c892fa04f2.png (+ .png.map),
inheritance-ed0c595ce08d01c093a63434be8747972b9d1183.png,
inheritance-f5d240e390c43059a31361af9d3718306b0969bd.png]
pyroute2-0.5.9/docs/html/_sources/0000755000175000017500000000000013621220110016734 5ustar peetpeet00000000000000pyroute2-0.5.9/docs/html/_sources/arch.rst.txt0000644000175000017500000001545013344172040021240 0ustar peetpeet00000000000000.. sockets: Module architecture ^^^^^^^^^^^^^^^^^^^ Sockets ======= The idea behind the pyroute2 framework is pretty simple. The library provides socket objects that have: * shortcuts to establish netlink connections * extra methods to run netlink queries * some magic to handle packet bursts * another magic to transparently mangle netlink messages In all other senses, a netlink socket is just an ordinary socket with `fileno()`, `recv()`, `sendto()` etc. Of course, one can use it in `poll()`. Here is an inheritance diagram of the netlink sockets provided by the library: .. inheritance-diagram:: pyroute2.iproute.linux.IPRoute pyroute2.iproute.linux.IPBatch pyroute2.iproute.linux.RawIPRoute pyroute2.iproute.bsd.IPRoute pyroute2.iproute.RemoteIPRoute pyroute2.iwutil.IW pyroute2.ipset.IPSet pyroute2.netlink.taskstats.TaskStats pyroute2.netlink.ipq.IPQSocket pyroute2.remote.RemoteSocket pyroute2.remote.shell.ShellIPR pyroute2.netns.nslink.NetNS :parts: 1 under the hood -------------- Let's assume we use an `IPRoute` object to get the interface list of the system:: from pyroute2 import IPRoute ipr = IPRoute() ipr.get_links() ipr.close() The `get_links()` method is provided by the `IPRouteMixin` class. It chooses the message to send (`ifinfmsg`), prepares the required fields and passes it to the next layer:: result.extend(self.nlm_request(msg, RTM_GETLINK, msg_flags)) The `nlm_request()` is a method of the `NetlinkMixin` class. It wraps the request/response pair in one method. The request is sent via `put()`, and the response is received with `get()`. These methods hide the asynchronous nature of the netlink protocol, where a response can come whenever -- neither the time nor the packet order is guaranteed. But one can use the `sequence_number` field of a netlink message to match responses, and the `put()`/`get()` pair does exactly that.
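To make the layering visible, the same round trip can be driven by hand with `nlm_request()`. This is only a sketch for illustration, not the recommended API; the message class and the flag constants below are the standard ones shipped with the library, but please verify them against your version::

    from pyroute2 import IPRoute
    from pyroute2.netlink import NLM_F_REQUEST, NLM_F_DUMP
    from pyroute2.netlink.rtnl import RTM_GETLINK
    from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg

    ipr = IPRoute()
    # build the dump request by hand
    msg = ifinfmsg()
    msg['family'] = 0          # AF_UNSPEC -- dump all the links
    # nlm_request() wraps put() + get() and matches the responses
    # to the request by the sequence number
    for link in ipr.nlm_request(msg, RTM_GETLINK,
                                NLM_F_REQUEST | NLM_F_DUMP):
        print(link.get_attr('IFLA_IFNAME'))
    ipr.close()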
cache thread ------------ Sometimes it is preferable to get incoming messages as soon as possible and parse them only when there is time for that. For that case the `NetlinkMixin` provides a possibility to start a dedicated cache thread that will collect and queue incoming messages as they arrive. The thread doesn't affect the socket behaviour: it will behave exactly in the same way; the only difference is that `recv()` will return messages already cached in userspace. To start the thread, one should call `bind()` with `async_cache=True`:: ipr = IPRoute() ipr.bind(async_cache=True) ... # do some stuff ipr.close() message mangling ---------------- An interesting feature of the `IPRSocketMixin` is the netlink proxy code that allows one to register callbacks for different message types. The callback API is simple. The callback must accept the message as binary data, and must return a dictionary with two keys, `verdict` and `data`. The verdict can be: * for `sendto()`: `forward`, `return` or `error` * for `recv()`: `forward` or `error` E.g.:: msg = ifinfmsg(data) msg.decode() ... # mangle msg msg.reset() msg.encode() return {'verdict': 'forward', 'data': msg.buf.getvalue()} The `error` verdict raises an exception from `data`. The `forward` verdict causes the `data` to be passed. The `return` verdict is valid only in `sendto()` callbacks and means that the `data` should not be passed to the kernel, but instead it must be returned to the user. This magic allows the library to transparently support ovs, teamd and tuntap calls via netlink. The corresponding callbacks transparently route the call to an external utility or to the `ioctl()` API. To see how to register callbacks, refer to the `IPRSocketMixin` init. The `_sproxy` serves the `sendto()` mangling, the `_rproxy` serves the `recv()` mangling. Later this API may become public. Netlink messages ================ To handle the data going through the sockets, the library uses different message classes. To create a custom message type, one should inherit: * `nlmsg` to create a netlink message class * `genlmsg` to create a generic netlink message class * `nla` to create a NLA class The messages hierarchy: .. inheritance-diagram:: pyroute2.netlink.rtnl.ndmsg.ndmsg pyroute2.netlink.rtnl.tcmsg.tcmsg pyroute2.netlink.rtnl.rtmsg.rtmsg pyroute2.netlink.rtnl.fibmsg.fibmsg pyroute2.netlink.rtnl.ifaddrmsg.ifaddrmsg pyroute2.netlink.rtnl.ifinfmsg.ifinfmsg pyroute2.netlink.rtnl.ifinfmsg.ifinfveth pyroute2.netlink.taskstats.taskstatsmsg pyroute2.netlink.taskstats.tcmd pyroute2.netlink.ctrlmsg pyroute2.netlink.nl80211.nl80211cmd pyroute2.netlink.nfnetlink.ipset.ipset_msg pyroute2.netlink.ipq.ipq_mode_msg pyroute2.netlink.ipq.ipq_packet_msg pyroute2.netlink.ipq.ipq_verdict_msg :parts: 1 PF_ROUTE messages ================= The PF_ROUTE socket is used to receive notifications from the BSD kernel. The PF_ROUTE messages: .. inheritance-diagram:: pyroute2.bsd.pf_route.freebsd.bsdmsg pyroute2.bsd.pf_route.freebsd.if_msg pyroute2.bsd.pf_route.freebsd.rt_msg_base pyroute2.bsd.pf_route.freebsd.ifa_msg_base pyroute2.bsd.pf_route.freebsd.ifma_msg_base pyroute2.bsd.pf_route.freebsd.if_announcemsg pyroute2.bsd.pf_route.rt_slot pyroute2.bsd.pf_route.rt_msg pyroute2.bsd.pf_route.ifa_msg pyroute2.bsd.pf_route.ifma_msg :parts: 1 IPDB ==== The `IPDB` module implements high-level logic to manage some of the system network settings.
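A minimal usage sketch -- the interface name and the address here are just placeholders::

    from pyroute2 import IPDB

    ipdb = IPDB()
    try:
        # every change is collected in a transaction and
        # applied on the context manager exit
        with ipdb.interfaces['eth0'] as i:
            i.add_ip('10.0.0.1/24')
            i.up()
    finally:
        ipdb.release()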
`IPDB` is completely agnostic to the netlink object's nature; the only requirement is that the netlink transport must provide the RTNL API. So, using proper mixin classes, one can create a custom RTNL-compatible transport. E.g., this way `IPDB` can work over `NetNS` objects, providing network management within some network namespace, while itself running in the main namespace. The `IPDB` architecture is not too complicated, but it implements some useful transaction magic; see the `commit()` methods of the `Transactional` objects. .. inheritance-diagram:: pyroute2.ipdb.main.IPDB pyroute2.ipdb.interfaces.Interface pyroute2.ipdb.linkedset.LinkedSet pyroute2.ipdb.linkedset.IPaddrSet pyroute2.ipdb.routes.NextHopSet pyroute2.ipdb.routes.Via pyroute2.ipdb.routes.Encap pyroute2.ipdb.routes.Metrics pyroute2.ipdb.routes.BaseRoute pyroute2.ipdb.routes.Route pyroute2.ipdb.routes.MPLSRoute pyroute2.ipdb.routes.RoutingTable pyroute2.ipdb.routes.MPLSTable pyroute2.ipdb.routes.RoutingTableSet pyroute2.ipdb.rules.Rule pyroute2.ipdb.rules.RulesDict :parts: 1 Internet protocols ================== Besides the netlink protocols, the library implements a limited set of supplementary internet protocols to play with. .. inheritance-diagram:: pyroute2.protocols.udpmsg pyroute2.protocols.ip4msg pyroute2.protocols.udp4_pseudo_header pyroute2.protocols.ethmsg pyroute2.dhcp.dhcp4msg.dhcp4msg :parts: 1 pyroute2-0.5.9/docs/html/_sources/changelog.rst.txt0000644000175000017500000004642313621220101022244 0ustar peetpeet00000000000000Changelog ========= * 0.5.9 * ethtool: fix module setup * 0.5.8 * ethtool: initial support * tc: multimatch support * tc: meta support * tc: cake: add stats_app decoder * conntrack: filter * ndb.objects.interface: reload after setns * ndb.objects.route: create() dst syntax * ndb.objects.route: 'default' syntax * wireguard: basic testing * 0.5.7 * ndb.objects.netns: prototype * ndb: netns management * ndb: netns sources autoconnect (disabled by default) * wireguard: basic support * netns: fix FD leakage * * cli: Python3 fixes * iproute: support `route('append', ...)` * ipdb: fix routes cleanup on link down * * wiset: support "mark" ipset type * 0.5.6 * ndb.objects.route: multipath routes * ndb.objects.rule: basic support * ndb.objects.interface: veth fixed * ndb.source: fix source restart * ndb.log: logging setup * 0.5.5 * nftables: rules expressions * * netns: ns_pids * * ndb: wait() method * ndb: add extra logging, log state transitions * ndb: nested views, e.g.
`ndb.interfaces['br0'].ports * cli: port pyroute2-cli to use NDB instead of IPDB * iproute: basic Windows support (proof of concept only) * remote: support mitogen proxy chains, support remote netns * 0.5.4 * iproute: basic SR-IOV support, virtual functions setup * ipdb: shutdown logging fixed * * nftables: fix regression (errata: previously mentioned ipset) * * netns: pushns() / popns() / dropns() calls * * 0.5.3 * bsd: parser improvements * ndb: PostgreSQL support * ndb: transactions commit/rollback * ndb: dependencies rollback * ipdb: IPv6 routes fix * * tcmsg: ematch support * tcmsg: flow filter * tcmsg: stats2 support improvements * ifinfmsg: GRE i/oflags, i/okey format fixed * * cli/ss2: improvements, tests * nlsocket: fix work on kernels < 3.2 * * 0.5.2 * ndb: read-only DB prototype * remote: support communication via stdio * general: fix async keyword -- Python 3.7 compatibility * * * iproute: support monitoring on BSD systems via PF_ROUTE * rtnl: support for SQL schema in message classes * nl80211: improvements * * * * netlink: support generators * 0.5.1 * ipdb: #310 -- route keying fix * ipdb: #483, #484 -- callback internals change * ipdb: #499 -- eventloop interface * ipdb: #500 -- fix non-default :: routes * netns: #448 -- API change: setns() doesn't remove FD * netns: #504 -- fix resource leakage * bsd: initial commits * 0.5.0 * ACHTUNG: ipdb commit logic is changed * ipdb: do not drop failed transactions * ipdb: #388 -- normalize IPv6 addresses * ipdb: #391 -- support both IPv4 and IPv6 default routes * ipdb: #392 -- fix MPLS route key reference * ipdb: #394 -- correctly work with route priorities * ipdb: #408 -- fix IPv6 routes in tables >= 256 * ipdb: #416 -- fix VRF interfaces creation * ipset: multiple improvements * tuntap: #469 -- support s390x arch * nlsocket: #443 -- fix socket methods resolve order for Python2 * netns: non-destructive `netns.create()` * 0.4.18 * ipdb: #379 [critical] -- routes in global commits * ipdb: #380 -- global commit with disabled plugins * ipdb: #381 -- exceptions fixed * ipdb: #382 -- manage dependent routes during interface commits * ipdb: #384 -- global `review()` * ipdb: #385 -- global `drop()` * netns: #383 -- support ppc64 * general: public API refactored (same signatures; to be documented) * 0.4.17 * req: #374 [critical] -- mode nla init * iproute: #378 [critical] -- fix `flush_routes()` to respect filters * ifinfmsg: #376 -- fix data plugins API to support pyinstaller * 0.4.16 * ipdb: race fixed: remove port/bridge * ipdb: #280 -- race fixed: port/bridge * ipdb: #302 -- ipaddr views: [ifname].ipaddr.ipv4, [ifname]ipaddr.ipv6 * ipdb: #357 -- allow bridge timings to have some delta * ipdb: #338 -- allow to fix interface objects from failed `create()` * rtnl: #336 -- fix vlan flags * iproute: #342 -- the match method takes any callable * nlsocket: #367 -- increase default SO_SNDBUF * ifinfmsg: support tuntap on armv6l, armv7l platforms * 0.4.15 * req: #365 -- full and short nla notation fixed, critical * iproute: #364 -- new method, `brport()` * ipdb: -- support bridge port options * 0.4.14 * event: new genl protocols set: VFS_DQUOT, acpi_event, thermal_event * ipdb: #310 -- fixed priority change on routes * ipdb: #349 -- fix setting ifalias on interfaces * ipdb: #353 -- mitigate kernel oops during bridge creation * ipdb: #354 -- allow to explicitly choose plugins to load * ipdb: #359 -- provide read-only context managers * rtnl: #336 -- vlan flags support * rtnl: #352 -- support interface type plugins * tc: #344 -- mirred action * tc: 
#346 -- connmark action * netlink: #358 -- memory optimization * config: #360 -- generic asyncio config * iproute: #362 -- allow to change or replace a qdisc * 0.4.13 * ipset: full rework of the IPSET_ATTR_DATA and IPSET_ATTR_ADT ACHTUNG: this commit may break API compatibility * ipset: hash:mac support * ipset: list:set support * ipdb: throw EEXIST when creates VLAN/VXLAN devs with same ID, but under different names * tests: #329 -- include unit tests into the bundle * legal: E/// logo removed * 0.4.12 * ipdb: #314 -- let users choose RTNL groups IPDB listens to * ipdb: #321 -- isolate `net_ns_.*` setup in a separate code block * ipdb: #322 -- IPv6 updates on interfaces in DOWN state * ifinfmsg: allow absolute/relative paths in the net_ns_fd NLA * ipset: #323 -- support setting counters on ipset add * ipset: `headers()` command * ipset: revisions * ipset: #326 -- mark types * 0.4.11 * rtnl: #284 -- support vlan_flags * ipdb: #288 -- do not inore link-local addresses * ipdb: #300 -- sort ip addresses * ipdb: #306 -- support net_ns_pid * ipdb: #307 -- fix IPv6 routes management * ipdb: #311 -- vlan interfaces address loading * iprsocket: #305 -- support NETLINK_LISTEN_ALL_NSID * 0.4.10 * devlink: fix fd leak on broken init * 0.4.9 * sock_diag: initial NETLINK_SOCK_DIAG support * rtnl: fix critical fd leak in the compat code * 0.4.8 * rtnl: compat proxying fix * 0.4.7 * rtnl: compat code is back * netns: custom netns path support * ipset: multiple improvements * 0.4.6 * ipdb: #278 -- fix initial ports mapping * ipset: #277 -- fix ADT attributes parsing * nl80211: #274, #275, #276 -- BSS-related fixes * 0.4.5 * ifinfmsg: GTP interfaces support * generic: devlink protocol support * generic: code cleanup * 0.4.4 * iproute: #262 -- `get_vlans()` fix * iproute: default mask 32 for IPv4 in `addr()` * rtmsg: #260 -- RTA_FLOW support * 0.4.3 * ipdb: #259 -- critical `Interface` class fix * benchmark: initial release * 0.4.2 * ipdb: event modules * ipdb: on-demand views * ipdb: rules management * ipdb: bridge controls * ipdb: #258 -- important Python compatibility fixes * netns: #257 -- pipe leak fix * netlink: support pickling for nlmsg * 0.4.1 * netlink: no buffer copying in the parser * netlink: parse NLA on demand * ipdb: #244 -- lwtunnel multipath fixes * iproute: #235 -- route types * docs updated * 0.4.0 * ACHTUNG: old kernels compatibility code is dropped * ACHTUNG: IPDB uses two separate sockets for monitoring and commands * ipdb: #244 -- multipath lwtunnel * ipdb: #242 -- AF_MPLS routes * ipdb: #241, #234 -- fix create(..., reuse=True) * ipdb: #239 -- route encap and metrics fixed * ipdb: #238 -- generic port management * ipdb: #235 -- support route scope and type * ipdb: #230, #232 -- routes GC (work in progress) * rtnl: #245 -- do not fail if `/proc/net/psched` doesn't exist * rtnl: #233 -- support VRF interfaces (requires net-next) * 0.3.21 * ipdb: #231 -- return `ipdb.common` as deprecated * 0.3.20 * iproute: `vlan_filter()` * iproute: #229 -- FDB management * general: exceptions re-exported via the root module * 0.3.19 * rtmsg: #227 -- MPLS lwtunnel basic support * iproute: `route()` docs updated * general: #228 -- exceptions layout changed * package-rh: rpm subpackages * 0.3.18 * version bump -- include docs in the release tarball * 0.3.17 * tcmsg: qdiscs and filters as plugins * tcmsg: #223 -- tc clsact and bpf direct-action * tcmsg: plug, codel, choke, drr qdiscs * tests: CI in VMs (see civm project) * tests: xunit output * ifinfmsg: tuntap support in i386, i686 * ifinfmsg: #207 -- 
support vlan filters * examples: #226 -- included in the release tarball * ipdb: partial commits, initial support * 0.3.16 * ipdb: fix the multiple IPs in one commit case * rtnl: support veth peer attributes * netns: support 32bit i686 * netns: fix MIPS support * netns: fix tun/tap creation * netns: fix interface move between namespaces * tcmsg: support hfsc, fq_codel, codel qdiscs * nftables: initial support * netlink: dump/load messages to/from simple types * 0.3.15 * netns: #194 -- fix fd leak * iproute: #184 -- fix routes dump * rtnl: TCA_ACT_BPF support * rtnl: ipvlan support * rtnl: OVS support removed * iproute: rule() improved to support all NLAs * project supported by Ericsson * 0.3.14 * package-rh: spec fixed * package-rh: both licenses added * remote: fixed the setup.py record * 0.3.13 * package-rh: new rpm for Fedora and CentOS * remote: new draft of the remote protocol * netns: refactored using the new remote protocol * ipdb: gretap support * 0.3.12 * ipdb: new `Interface.wait_ip()` routine * ipdb: #175 -- fix `master` attribute cleanup * ipdb: #171 -- support multipath routes * ipdb: memory consumption improvements * rtmsg: MPLS support * rtmsg: RTA_VIA support * iwutil: #174 -- fix FREQ_FIXED flag * 0.3.11 * ipdb: #161 -- fix memory allocations * nlsocket: #161 -- remove monitor mode * 0.3.10 * rtnl: added BPF filters * rtnl: LWtunnel support in ifinfmsg * ipdb: support address attributes * ipdb: global transactions, initial version * ipdb: routes refactored to use key index (speed up) * config: eventlet support embedded (thanks to Angus Lees) * iproute: replace tc classes * iproute: flush_addr(), flush_rules() * iproute: rule() refactored * netns: proxy file objects (stdin, stdout, stderr) * 0.3.9 * root imports: #109, #135 -- `issubclass`, `isinstance` * iwutil: multiple improvements * iwutil: initial tests * proxy: correctly forward NetlinkError * iproute: neighbour tables support * iproute: #147, filters on dump calls * config: initial usage of `capabilities` * 0.3.8 * docs: inheritance diagrams * nlsocket: #126, #132 -- resource deallocation * arch: #128, #131 -- MIPS support * setup.py: #133 -- syntax error during install on Python2 * 0.3.7 * ipdb: new routing syntax * ipdb: sync interface movement between namespaces * ipdb: #125 -- fix route metrics * netns: new class NSPopen * netns: #119 -- i386 syscall * netns: #122 -- return correct errno * netlink: #126 -- fix socket reuse * 0.3.6 * dhcp: initial release DHCPv4 * license: dual GPLv2+ and Apache v2.0 * ovs: port add/delete * macvlan, macvtap: basic support * vxlan: basic support * ipset: basic support * 0.3.5 * netns: #90 -- netns setns support * generic: #99 -- support custom basic netlink socket classes * proxy-ng: #106 -- provide more diagnostics * nl80211: initial nl80211 support, iwutil module added * 0.3.4 * ipdb: #92 -- route metrics support * ipdb: #85 -- broadcast address specification * ipdb, rtnl: #84 -- veth support * ipdb, rtnl: tuntap support * netns: #84 -- network namespaces support, NetNS class * rtnl: proxy-ng API * pypi: #91 -- embed docs into the tarball * 0.3.3 * ipdb: restart on error * generic: handle non-existing family case * [fix]: #80 -- Python 2.6 unicode vs -O bug workaround * 0.3.2 * simple socket architecture * all the protocols now are based on NetlinkSocket, see examples * rpc: deprecated * iocore: deprecated * iproute: single-threaded socket object * ipdb: restart on errors * rtnl: updated ifinfmsg policies * 0.3.1 * module structure refactored * new protocol: ipq * new protocol: 
nfnetlink / nf-queue * new protocol: generic * threadless sockets for all the protocols * 0.2.16 * prepare the transition to 0.3.x * 0.2.15 * ipdb: fr #63 -- interface settings freeze * ipdb: fr #50, #51 -- bridge & bond options (initial version) * RHEL7 support * [fix]: #52 -- HTB: correct rtab compilation * [fix]: #53 -- RHEL6.5 bridge races * [fix]: #55 -- IPv6 on bridges * [fix]: #58 -- vlans as bridge ports * [fix]: #59 -- threads sync in iocore * 0.2.14 * [fix]: #44 -- incorrect netlink exceptions proxying * [fix]: #45 -- multiple issues with device targets * [fix]: #46 -- consistent exceptions * ipdb: LinkedSet cascade updates fixed * ipdb: allow to reuse existing interface in `create()` * 0.2.13 * [fix]: #43 -- pipe leak in the main I/O loop * tests: integrate examples, import into tests * iocore: use own TimeoutException instead of Queue.Empty * iproute: default routing table = 254 * iproute: flush_routes() routine * iproute: fwmark parameter for rule() routine * iproute: destination and mask for rules * docs: netlink development guide * 0.2.12 * [fix]: #33 -- release resources only for bound sockets * [fix]: #37 -- fix commit targets * rtnl: HFSC support * rtnl: priomap fixed * 0.2.11 * ipdb: watchdogs to sync on RTNL events * ipdb: fix commit errors * generic: NLA operations, complement and intersection * docs: more autodocs in the code * tests: -W error: more strict testing now * tests: cover examples by the integration testing cycle * with -W error many resource leaks were fixed * 0.2.10 * ipdb: command chaining * ipdb: fix for RHEL6.5 Python "optimizations" * rtnl: support TCA_U32_ACT * [fix]: #32 -- NLA comparison * 0.2.9 * ipdb: support bridges and bonding interfaces on RHEL * ipdb: "shadow" interfaces (still in alpha state) * ipdb: minor fixes on routing and compat issues * ipdb: as a separate package (sub-module) * docs: include ipdb autodocs * rpc: include in setup.py * 0.2.8 * netlink: allow multiple NetlinkSocket allocation from one process * netlink: fix defragmentation for netlink-over-tcp * iocore: support forked IOCore and IOBroker as a separate process * ipdb: generic callbacks support * ipdb: routing support * rtnl: #30 -- support IFLA_INFO_DATA for bond interfaces * 0.2.7 * ipdb: use separate namespaces for utility functions and other stuff * ipdb: generic callbacks (see also IPDB.wait_interface()) * iocore: initial multipath support * iocore: use of 16byte uuid4 for packet ids * 0.2.6 * rpc: initial version, REQ/REP, PUSH/PULL * iocore: shared IOLoop * iocore: AddrPool usage * iproute: policing in FW filter * python3 compatibility issues fixed * 0.2.4 * python3 compatibility issues fixed, tests passed * 0.2.3 * [fix]: #28 -- bundle issue * 0.2.2 * iocore: new component * iocore: separate IOCore and IOBroker * iocore: change from peer-to-peer to flat addresses * iocore: REP/REQ, PUSH/PULL * iocore: support for UDP PUSH/PULL * iocore: AddrPool component for addresses and nonces * generic: allow multiple re-encoding * 0.1.12 * ipdb: transaction commit callbacks * iproute: delete root qdisc (@chantra) * iproute: netem qdisc management (@chantra) * 0.1.11 * netlink: get qdiscs for particular interface * netlink: IPRSocket threadless objects * rtnl: u32 policy setup * iproute: filter actions, such as `ok`, `drop` and so on * iproute: changed syntax of commands, `action` → `command` * tests: htb, tbf tests added * 0.1.10 * [fix]: #8 -- default route fix, routes filtering * [fix]: #9 -- add/delete route routine improved * [fix]: #10 -- shutdown sequence fixed * [fix]: 
#11 -- close IPC pipes on release() * [fix]: #12 -- stop service threads on release() * netlink: debug mode added to be used with GUI * ipdb: interface removal * ipdb: fail on transaction sync timeout * tests: R/O mode added, use `export PYROUTE2_TESTS_RO=True` * 0.1.9 * tests: all races fixed * ipdb: half-sync commit(): wait for IPs and ports lists update * netlink: use pipes for in-process communication * Python 2.6 compatibility issue: remove copy.deepcopy() usage * QPython 2.7 for Android: works * 0.1.8 * complete refactoring of class names * Python 2.6 compatibility issues * tests: code coverage, multiple code fixes * plugins: ptrace message source * packaging: RH package * 0.1.7 * ipdb: interface creation: dummy, bond, bridge, vlan * ipdb: if\_slaves interface obsoleted * ipdb: 'direct' mode * iproute: code refactored * examples: create() examples committed * 0.1.6 * netlink: tc ingress, sfq, tbf, htb, u32 partial support * ipdb: completely re-implemented transactional model (see docs) * generic: internal fields declaration API changed for nlmsg * tests: first unit tests committed * 0.1.5 * netlink: dedicated io buffering thread * netlink: messages reassembling * netlink: multi-uplink remote * netlink: masquerade remote requests * ipdb: represent interfaces hierarchy * iproute: decode VLAN info * 0.1.4 * netlink: remote netlink access * netlink: SSL/TLS server/client auth support * netlink: tcp and unix transports * docs: started sphinx docs * 0.1.3 * ipdb: context manager interface * ipdb: [fix] correctly handle ip addr changes in transaction * ipdb: [fix] make up()/down() methods transactional [#1] * iproute: mirror packets to 0 queue * iproute: [fix] handle primary ip address removal response * 0.1.2 * initial ipdb version * iproute fixes * 0.1.1 * initial release, iproute module pyroute2-0.5.9/docs/html/_sources/debug.rst.txt0000644000175000017500000001344013060273627021416 0ustar peetpeet00000000000000.. debug: Netlink debug howto ------------------- Dump data ========= Either run the required command via `strace`, or attach to the running process with `strace -p`. Use `-s {int}` argument to make sure that all the messages are dumped. The `-x` argument instructs `strace` to produce output in the hex format that can be passed to the pyroute2 decoder:: $ strace -e trace=network -x -s 16384 ip ro socket(PF_NETLINK, SOCK_RAW|SOCK_CLOEXEC, NETLINK_ROUTE) = 3 setsockopt(3, SOL_SOCKET, SO_SNDBUF, [32768], 4) = 0 setsockopt(3, SOL_SOCKET, SO_RCVBUF, [1048576], 4) = 0 bind(3, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 0 getsockname(3, {sa_family=AF_NETLINK, pid=28616, groups=00000000}, [12]) = 0 sendto(3, "\x28\x00\x00\x00\x1a\x00\x01\x03 [skip] ", 40, 0, NULL, 0) = 40 recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"\x3c\x00\x00\x00\x18 [skip]", 16384}], msg_controllen=0, msg_flags=0}, 0) = 480 socket(PF_LOCAL, SOCK_DGRAM|SOCK_CLOEXEC, 0) = 4 192.168.122.0/24 dev virbr0 proto kernel scope link src 192.168.122.1 recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"\x14\x00\x00\x00\x03 [skip]", 16384}], msg_controllen=0, msg_flags=0}, 0) = 20 +++ exited with 0 +++ Now you can copy `send…()` and `recv…()` buffer strings to a file. Strace compatibility note ========================= Starting with version 4.13, `strace` parses Netlink message headers and displays them in their parsed form instead of displaying the whole buffer in its raw form. 
The rest of the buffer is still shown, but due to it being incomplete, the method mentioned above doesn't work anymore. For the time being, the easiest workaround is probably to use an older strace version as it only depends on libc6. Decode data =========== The decoder is not provided with rpm or pip packages, so you should have a local git repo of the project:: $ git clone $ cd pyroute2 Now run the decoder:: $ export PYTHONPATH=`pwd` $ python tests/decoder/decoder.py E.g. for the route dump in the file `rt.dump` the command line should be:: $ python tests/decoder/decoder.py \ pyroute2.netlink.rtnl.rtmsg.rtmsg \ rt.dump **Why should I specify the message class?** Why there is no marshalling in the decoder script? 'Cause it is intended to be used with different netlink protocols, not only RTNL, but also nl80211, nfnetlink etc. There is no common marshalling for all the netlink protocols. **How to specify the message class?** All the netlink protocols are defined under `pyroute2/netlink/`, e.g. `rtmsg` module is `pyroute2/netlink/rtnl/rtmsg.py`. Thereafter you should specify the class inside the module, since there can be several classes. In the `rtmsg` case the line will be `pyroute.netlink.rtnl.rtmsg.rtmsg` or, more friendly to the bash autocomplete, `pyroute2/netlink/rtnl/rtmsg.rtmsg`. Notice, that the class you have to specify with dot anyways. **What is the data file format?** Rules are as follows: * The data dump should be in a hex format. Two possible variants are: `\\x00\\x01\\x02\\x03` or `00:01:02:03`. * There can be several packets in the same file. They should be of the same type. * Spaces and line ends are ignored, so you can format the dump as you want. * The `#` symbol starts a comment until the end of the line. * The `#!` symbols start a comment until the end of the file. Example:: # ifinfmsg headers # # nlmsg header \x84\x00\x00\x00 # length \x10\x00 # type \x05\x06 # flags \x49\x61\x03\x55 # sequence number \x00\x00\x00\x00 # pid # RTNL header \x00\x00 # ifi_family \x00\x00 # ifi_type \x00\x00\x00\x00 # ifi_index \x00\x00\x00\x00 # ifi_flags \x00\x00\x00\x00 # ifi_change # ... Compile data ============ Starting with 0.4.1, the library provides `BatchSocket` class, that only compiles and collects requests instead of sending them to the kernel. E.g., it is used by `IPBatch`, that combines `BatchSocket` with `IPRouteMixin`, providing RTNL compiler:: $ python3 Python 3.4.3 (default, Mar 31 2016, 20:42:37) [GCC 5.3.1 20151207 (Red Hat 5.3.1-2)] on linux Type "help", "copyright", "credits" or "license" for more information. # import all the stuff >>> from pyroute2 import IPBatch >>> from pyroute2.common import hexdump # create the compiler >>> ipb = IPBatch() # compile requests into one buffer >>> ipb.link("add", index=550, kind="dummy", ifname="test") >>> ipb.link("set", index=550, state="up") >>> ipb.addr("add", index=550, address="10.0.0.2", mask=24) # inspect the buffer >>> hexdump(ipb.batch) '3c:00:00:00:10:00:05:06:00:00:00:00:a2:7c:00:00:00:00:00:00: 26:02:00:00:00:00:00:00:00:00:00:00:09:00:03:00:74:65:73:74: 00:00:00:00:10:00:12:00:0a:00:01:00:64:75:6d:6d:79:00:00:00: 20:00:00:00:13:00:05:06:00:00:00:00:a2:7c:00:00:00:00:00:00: 26:02:00:00:01:00:00:00:01:00:00:00:28:00:00:00:14:00:05:06: 00:00:00:00:a2:7c:00:00:02:18:00:00:26:02:00:00:08:00:01:00: 0a:00:00:02:08:00:02:00:0a:00:00:02' # reset the buffer >>> ipb.reset() Pls notice, that in Python2 you should use `hexdump(str(ipb.batch))` instead of `hexdump(ipb.batch)`. 
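Since the dump format above is exactly what the decoder accepts, a compiled buffer can also be saved and inspected offline. A sketch only -- the file name is arbitrary, and all the messages written into one file should be of the same type::

    from pyroute2 import IPBatch
    from pyroute2.common import hexdump

    ipb = IPBatch()
    # compile only link requests, so the dump contains one message type
    ipb.link("add", index=550, kind="dummy", ifname="test")
    ipb.link("set", index=550, state="up")
    with open("links.dump", "w") as f:
        f.write(hexdump(ipb.batch))
    ipb.reset()

Then the file can be passed to the decoder described above::

    $ python tests/decoder/decoder.py \
        pyroute2.netlink.rtnl.ifinfmsg.ifinfmsg \
        links.dump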
The data, compiled by `IPBatch` can be used either to run batch requests, when one `send()` call sends several messages at once, or to produce binary buffers to test your own netlink parsers. Or just to dump some data to be sent later and probably even on another host:: >>> ipr = IPRoute() >>> ipr.sendto(ipb.batch, (0, 0)) The compiler always produces requests with `sequence_number == 0`, so if there will be any responses, they can be handled as broadcasts. pyroute2-0.5.9/docs/html/_sources/devcontribute.rst.txt0000644000175000017500000000140412717645345023212 0ustar peetpeet00000000000000.. devcontribute: Project contribution guide ========================== To contribute the code to the project, you can use the github instruments: issues and pull-requests. See more on the project github page: https://github.com/svinota/pyroute2 Requirements ++++++++++++ The code should comply with some requirements: * the library must work on Python >= 2.6 and 3.2. * the code must strictly comply with PEP8 (use `flake8`) * the `ctypes` usage must not break the library on SELinux Testing +++++++ To perform code tests, run `make test`. Details about the makefile parameters see in `README.make.md`. Links +++++ * flake8: https://pypi.python.org/pypi/flake8 * vim-flake8: https://github.com/nvie/vim-flake8 * nosetests: http://nose.readthedocs.org/en/latest/ pyroute2-0.5.9/docs/html/_sources/devgeneral.rst.txt0000644000175000017500000001166112717645345022457 0ustar peetpeet00000000000000.. devgeneral: It is easy to start to develop with pyroute2. In the simplest case one just uses the library as is, and please do not forget to file issues, bugs and feature requests to the project github page. If something should be changed in the library itself, and you think you can do it, this document should help a bit. Modules layout ============== The library consists of several significant parts, and every part has its own functionality:: NetlinkSocket: connects the library to the OS ↑ ↑ | | | ↓ | Marshal ←—→ Message classes | | ↓ NL utility classes: more or less user-friendly API NetlinkSocket and Marshal: :doc:`nlsocket` NetlinkSocket +++++++++++++ Notice, that it is possible to use a custom base class instead of `socket.socket`. Thus, one can transparently port this library to any different transport, or to use it with `eventlet` library, that is not happy with `socket.socket` objects, and so on. Marshal +++++++ A custom marshalling class can be required, if the protocol uses some different marshalling algo from usual netlink. Otherwise it is enough to use `register_policy` method of the `NetlinkSocket`:: # somewhere in a custom netlink class # dict key: message id, int # dict value: message class policy = {IPSET_CMD_PROTOCOL: ipset_msg, IPSET_CMD_LIST: ipset_msg} def __init__(self, ...): ... self.register_policy(policy) But if just matching is not enough, refer to the `Marshal` implementation. It is possible, e.g., to define the custom `fix_message` method to be run on every message, etc. A sample of such custom marshal can be found in the RTNL implementation: `pyroute2.netlink.rtnl`. Messages ++++++++ All the message classes hierarchy is built on the simple fact that the netlink message structure is recursive in that or other way. A usual way to implement messages is described in the netlink docs: :doc:`netlink`. The core module, `pyroute2.netlink`, provides base classes `nlmsg` and `nla`, as well as some other (`genlmsg`), and basic NLA types: `uint32`, `be32`, `ip4addr`, `l2addr` etc. 
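For instance, a minimal custom message class might look like the following. The class and attribute names here are invented for the example; only the base class and the NLA type names come from `pyroute2.netlink`::

    from pyroute2.netlink import nlmsg

    class mymsg(nlmsg):
        # the fixed-size part of the message: (name, struct format)
        fields = (('index', 'I'),
                  ('flags', 'I'))
        # the NLA map: (attribute name, NLA type)
        nla_map = (('MY_ATTR_UNSPEC', 'none'),
                   ('MY_ATTR_IFNAME', 'asciiz'),
                   ('MY_ATTR_COUNTER', 'uint32'),
                   ('MY_ATTR_ADDRESS', 'ip4addr'),
                   ('MY_ATTR_RAW', 'hex'))

A binary message can then be parsed the same way as in the mangling example above: instantiate the class over the data, call `decode()`, and read the attributes with `get_attr()`.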
One of the NLA types, `hex`, can be used to dump the NLA structure in the hex format -- it is useful for development. NL utility classes ++++++++++++++++++ They are based on different netlink sockets, such as `IPRsocket` (RTNL), `NL80211` (wireless), or just `NetlinkSocket` -- be it generic netlink or nfnetlink (see taskstats and ipset). Primarily, `pyroute2` is a netlink framework, so basic classes and low-level utilities are intended to return parsed netlink messages, not some user-friendly output. So be not surprised. But user-friendly modules are also possible and partly provided, such as `IPDB`. A list of low-level utility classes: * `IPRoute` [`pyroute2.iproute`], RTNL utility like ip/tc * `IPSet` [`pyroute2.ipset`], manipulate IP sets * `IW` [`pyroute2.iwutil`], basic nl80211 support * `NetNS` [`pyroute2.netns`], netns-enabled `IPRoute` * `TaskStats` [`pyroute2.netlink.taskstats`], taskstats utility High-level utilities: * `IPDB` [`pyroute2.ipdb`], async IP database Deferred imports ++++++++++++++++ The file `pyroute2/__init__.py` is a proxy for some modules, thus providing a fixed import address, like:: from pyroute2 import IPRoute ipr = IPRoute() ... ipr.close() But not only. Actually, `pyroute2/__init__.py` exports not classes and modules, but proxy objects, that load the actual code in the runtime. The rationale is simple: in that way we provide a possibility to use a custom base classes, see `examples/custom_socket_base.py`. Protocol debugging ++++++++++++++++++ The simplest way to start with some netlink protocol is to use a reference implementation. Lets say we wrote the `ipset_msg` class using the kernel code, and want to check how it works. So the ipset(8) utility will be used as a reference implementation:: $ sudo strace -e trace=network -f -x -s 4096 ipset list socket(PF_NETLINK, SOCK_RAW, NETLINK_NETFILTER) = 3 bind(3, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 0 getsockname(3, {sa_family=AF_NETLINK, pid=7009, groups=00000000}, [12]) = 0 sendto(3, "\x1c\x00\x00\x00\x01\x06\x01\x00\xe3\x95\... recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"\x1c\x00\x00\x00\x01\x06\x00\x00\xe3\... sendto(3, "\x1c\x00\x00\x00\x07\x06\x05\x03\xe4\x95\... recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(1)=[{"\x78\x00\x00\x00\x07\x06\x02\x00\xe4\... Here you can just copy packet strings from `sendto` and `recvmsg`, place in a file and use `scripts/decoder.py` to inspect them:: $ export PYTHONPATH=`pwd` $ python scripts/decoder.py \ pyroute2.netlink.nfnetlink.ipset.ipset_msg \ scripts/ipset_01.data See collected samples in the `scripts` directory. The script ignores spaces and allows multiple messages in the same file. pyroute2-0.5.9/docs/html/_sources/devmodules.rst.txt0000644000175000017500000000144612717645345022512 0ustar peetpeet00000000000000.. devmodules: Modules in progress =================== There are several modules in the very initial development state, and the help with them will be particularly valuable. You are more than just welcome to help with: .. automodule:: pyroute2.ipset :members: .. automodule:: pyroute2.iwutil :members: Network settings daemon -- pyrouted ----------------------------------- Pyrouted is a standalone project of a system service, that utilizes the `pyroute2` library. It consists of a daemon controlled by `systemd` and a CLI utility that communicates with the daemon via UNIX socket. 
* home: https://github.com/svinota/pyrouted * bugs: https://github.com/svinota/pyrouted/issues * pypi: https://pypi.python.org/pypi/pyrouted It is an extremely simple and basic network interface setup tool. pyroute2-0.5.9/docs/html/_sources/dhcp.rst.txt0000644000175000017500000000057512717645345021263 0ustar peetpeet00000000000000.. dhcp: DHCP support ============ DHCP support in `pyroute2` is in very initial state, so it is in the «Development section» yet. DHCP protocol has nothing to do with netlink, but `pyroute2` slowly moving from netlink-only library to some more general networking framework. .. automodule:: pyroute2.dhcp :members: .. automodule:: pyroute2.dhcp.dhcp4socket :members: pyroute2-0.5.9/docs/html/_sources/general.rst.txt0000644000175000017500000001436213621220101021727 0ustar peetpeet00000000000000Pyroute2 ======== Pyroute2 is a pure Python **netlink** library. It requires only Python stdlib, no 3rd party libraries. The library was started as an RTNL protocol implementation, so the name is **pyroute2**, but now it supports many netlink protocols. Some supported netlink families and protocols: * **rtnl**, network settings --- addresses, routes, traffic controls * **nfnetlink** --- netfilter API: **ipset**, **nftables**, ... * **ipq** --- simplest userspace packet filtering, iptables QUEUE target * **devlink** --- manage and monitor devlink-enabled hardware * **generic** --- generic netlink families * **ethtool** --- low-level network interface setup * **nl80211** --- wireless functions API (basic support) * **taskstats** --- extended process statistics * **acpi_events** --- ACPI events monitoring * **thermal_events** --- thermal events monitoring * **VFS_DQUOT** --- disk quota events monitoring Latest important milestones: * 0.5.8 --- **Ethtool** support * 0.5.7 --- **WireGuard** support * 0.5.2 --- **PF_ROUTE** support on FreeBSD and OpenBSD Supported systems ----------------- Pyroute2 runs natively on Linux and emulates some limited subset of RTNL netlink API on BSD systems on top of PF_ROUTE notifications and standard system tools. Other platforms are not supported. The simplest usecase -------------------- The objects, provided by the library, are socket objects with an extended API. The additional functionality aims to: * Help to open/bind netlink sockets * Discover generic netlink protocols and multicast groups * Construct, encode and decode netlink and PF_ROUTE messages Maybe the simplest usecase is to monitor events. Disk quota events:: from pyroute2 import DQuotSocket # DQuotSocket automatically performs discovery and binding, # since it has no other functionality beside of the monitoring with DQuotSocket() as ds: for message in ds.get(): print(message) Get notifications about network settings changes with IPRoute:: from pyroute2 import IPRoute with IPRoute() as ipr: # With IPRoute objects you have to call bind() manually ipr.bind() for message in ipr.get(): print(message) RTNetlink examples ------------------ More samples you can read in the project documentation. Low-level **IPRoute** utility --- Linux network configuration. The **IPRoute** class is a 1-to-1 RTNL mapping. There are no implicit interface lookups and so on. 
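In practice that means an interface has to be resolved to its index explicitly before it can be changed; a short sketch, with the interface name being just a placeholder::

    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        # explicit lookup: interface name -> index
        idx = ipr.link_lookup(ifname='eth0')[0]
        ipr.link('set', index=idx, mtu=1280)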
Some examples:: from socket import AF_INET from pyroute2 import IPRoute # get access to the netlink socket ip = IPRoute() # no monitoring here -- thus no bind() # print interfaces print(ip.get_links()) # create VETH pair and move v0p1 to netns 'test' ip.link('add', ifname='v0p0', peer='v0p1', kind='veth') idx = ip.link_lookup(ifname='v0p1')[0] ip.link('set', index=idx, net_ns_fd='test') # bring v0p0 up and add an address idx = ip.link_lookup(ifname='v0p0')[0] ip.link('set', index=idx, state='up') ip.addr('add', index=idx, address='10.0.0.1', broadcast='10.0.0.255', prefixlen=24) # create a route with metrics ip.route('add', dst='172.16.0.0/24', gateway='10.0.0.10', metrics={'mtu': 1400, 'hoplimit': 16}) # create MPLS lwtunnel # $ sudo modprobe mpls_iptunnel ip.route('add', dst='172.16.0.0/24', oif=idx, encap={'type': 'mpls', 'labels': '200/300'}) # create MPLS route: push label # $ sudo modprobe mpls_router # $ sudo sysctl net.mpls.platform_labels=1024 ip.route('add', family=AF_MPLS, oif=idx, dst=0x200, newdst=[0x200, 0x300]) # create SEG6 tunnel encap mode # Kernel >= 4.10 ip.route('add', dst='2001:0:0:10::2/128', oif=idx, encap={'type': 'seg6', 'mode': 'encap', 'segs': '2000::5,2000::6'}) # create SEG6 tunnel inline mode # Kernel >= 4.10 ip.route('add', dst='2001:0:0:10::2/128', oif=idx, encap={'type': 'seg6', 'mode': 'inline', 'segs': ['2000::5', '2000::6']}) # create SEG6 tunnel with ip4ip6 encapsulation # Kernel >= 4.14 ip.route('add', dst='172.16.0.0/24', oif=idx, encap={'type': 'seg6', 'mode': 'encap', 'segs': '2000::5,2000::6'}) # release Netlink socket ip.close() The project contains several modules for different types of netlink messages, not only RTNL. Network namespace examples -------------------------- Network namespace manipulation:: from pyroute2 import netns # create netns netns.create('test') # list print(netns.listnetns()) # remove netns netns.remove('test') Create **veth** interfaces pair and move to **netns**:: from pyroute2 import IPRoute with IPRoute() as ipr: # create interface pair ipr.link('add', ifname='v0p0', kind='veth', peer='v0p1') # lookup the peer index idx = ipr.link_lookup(ifname='v0p1')[0] # move the peer to the 'test' netns: ipr.link('set', index='v0p1', net_ns_fd='test') List interfaces in some **netns**:: from pyroute2 import NetNS from pprint import pprint ns = NetNS('test') pprint(ns.get_links()) ns.close() More details and samples see in the documentation. Installation ------------ `make install` or `pip install pyroute2` Requirements ------------ Python >= 2.7 The pyroute2 testing framework requirements: * flake8 * coverage * nosetests * sphinx * netaddr Optional dependencies for testing: * eventlet * mitogen * bottle * team (http://libteam.org/) Links ----- * home: https://pyroute2.org/ * srcs: https://github.com/svinota/pyroute2 * bugs: https://github.com/svinota/pyroute2/issues * pypi: https://pypi.python.org/pypi/pyroute2 * docs: http://docs.pyroute2.org/ * list: https://groups.google.com/d/forum/pyroute2-dev pyroute2-0.5.9/docs/html/_sources/generator.rst.txt0000644000175000017500000001070213311503520022277 0ustar peetpeet00000000000000Generators ---------- Problem ======= Until 0.5.2 Pyroute2 collected all the responses in a list and returned them at once. It may be ok as long as there is not so many objects to return. 
But let's say there are some thousands of routes:: $ ip ro | wc -l 315417 Now we use a script to retrieve the routes:: import sys from pyroute2 import config from pyroute2 import IPRoute config.nlm_generator = (sys.argv[1].lower() if len(sys.argv) > 1 else 'false') == 'true' with IPRoute() as ipr: for route in ipr.get_routes(): pass If the library collects all the routes in a list and returns the list, it may take a lot of memory:: $ /usr/bin/time -v python e.py false Command being timed: "python e.py false" User time (seconds): 30.42 System time (seconds): 3.63 Percent of CPU this job got: 99% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:34.09 Average shared text size (kbytes): 0 Average unshared data size (kbytes): 0 Average stack size (kbytes): 0 Average total size (kbytes): 0 Maximum resident set size (kbytes): 2416472 Average resident set size (kbytes): 0 Major (requiring I/O) page faults: 0 Minor (reclaiming a frame) page faults: 604787 Voluntary context switches: 9 Involuntary context switches: 688 Swaps: 0 File system inputs: 0 File system outputs: 0 Socket messages sent: 0 Socket messages received: 0 Signals delivered: 0 Page size (bytes): 4096 Exit status: 0 2416472 kbytes of RSS. Pretty much. Solution ======== Now we use generator to iterate the results:: $ /usr/bin/time -v python e.py true Command being timed: "python e.py true" User time (seconds): 18.48 System time (seconds): 0.99 Percent of CPU this job got: 99% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:19.49 Average shared text size (kbytes): 0 Average unshared data size (kbytes): 0 Average stack size (kbytes): 0 Average total size (kbytes): 0 Maximum resident set size (kbytes): 45132 Average resident set size (kbytes): 0 Major (requiring I/O) page faults: 0 Minor (reclaiming a frame) page faults: 433589 Voluntary context switches: 9 Involuntary context switches: 244 Swaps: 0 File system inputs: 0 File system outputs: 0 Socket messages sent: 0 Socket messages received: 0 Signals delivered: 0 Page size (bytes): 4096 Exit status: 0 45132 kbytes of RSS. That's the difference. Say we have a bit more routes:: $ ip ro | wc -l 678148 Without generators the script will simply run ot of memory. But with the generators:: $ /usr/bin/time -v python e.py true Command being timed: "python e.py true" User time (seconds): 39.63 System time (seconds): 2.78 Percent of CPU this job got: 99% Elapsed (wall clock) time (h:mm:ss or m:ss): 0:42.75 Average shared text size (kbytes): 0 Average unshared data size (kbytes): 0 Average stack size (kbytes): 0 Average total size (kbytes): 0 Maximum resident set size (kbytes): 45324 Average resident set size (kbytes): 0 Major (requiring I/O) page faults: 0 Minor (reclaiming a frame) page faults: 925560 Voluntary context switches: 11 Involuntary context switches: 121182 Swaps: 0 File system inputs: 0 File system outputs: 0 Socket messages sent: 0 Socket messages received: 0 Signals delivered: 0 Page size (bytes): 4096 Exit status: 0 Again, 45324 kbytes of RSS. Configuration ============= To turn the generator option on, one should set ``pyroute2.config.nlm_generator`` to ``True``. 
By default is ``False`` not to break existing projects.:: from pyroute2 import config from pyroute2 import IPRoute config.nlm_generator = True with IPRoute() as ipr: for route in ipr.get_routes(): handle(route) IPRoute and generators ====================== IPRoute objects will return generators only for methods that employ ``GET_...`` requests like ``get_routes()``, ``get_links()``, ``link('dump', ...)``, ``addr('dump', ...)``. Setters will work as usually to apply changes immediately. pyroute2-0.5.9/docs/html/_sources/index.rst.txt0000644000175000017500000000115313621021764021431 0ustar peetpeet00000000000000.. pyroute2 documentation master file Pyroute2 netlink library ======================== General information ------------------- .. toctree:: :maxdepth: 2 general changelog makefile report Usage ----- .. toctree:: :maxdepth: 2 usage iproute remote ndb ipdb wiset ipset netns wireguard Howtos ------ .. toctree:: :maxdepth: 2 mpls debug Development ----------- .. toctree:: :maxdepth: 2 devcontribute arch netlink nlsocket Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` pyroute2-0.5.9/docs/html/_sources/ipdb.rst.txt0000644000175000017500000000007613471017534021246 0ustar peetpeet00000000000000.. _ipdb: .. automodule:: pyroute2.ipdb.main :members: pyroute2-0.5.9/docs/html/_sources/iproute.rst.txt0000644000175000017500000000126613471017534022021 0ustar peetpeet00000000000000.. _iproute: IPRoute module ============== .. automodule:: pyroute2.iproute :members: BSD notes --------- .. automodule:: pyroute2.iproute.bsd IPRoute API ----------- .. automodule:: pyroute2.iproute.linux :members: Queueing disciplines -------------------- .. automodule:: pyroute2.netlink.rtnl.tcmsg.sched_drr :members: .. automodule:: pyroute2.netlink.rtnl.tcmsg.sched_choke :members: .. automodule:: pyroute2.netlink.rtnl.tcmsg.sched_clsact :members: .. automodule:: pyroute2.netlink.rtnl.tcmsg.sched_hfsc :members: .. automodule:: pyroute2.netlink.rtnl.tcmsg.sched_htb :members: Filters ------- .. automodule:: pyroute2.netlink.rtnl.tcmsg.cls_u32 pyroute2-0.5.9/docs/html/_sources/ipset.rst.txt0000644000175000017500000000012513160243337021444 0ustar peetpeet00000000000000.. ipset: IPSet module ============== .. automodule:: pyroute2.ipset :members: pyroute2-0.5.9/docs/html/_sources/makefile.rst.txt0000644000175000017500000000571713621220101022073 0ustar peetpeet00000000000000Makefile documentation ====================== Makefile is used to automate Pyroute2 deployment and test processes. Mostly, it is but a collection of common commands. target: clean ------------- Clean up the repo directory from the built documentation, collected coverage data, compiled bytecode etc. target: docs ------------ Build documentation. Requires `sphinx`. target: test ------------ Run tests against current code. 
Command line options: * python -- path to the Python to use * nosetests -- path to nosetests to use * wlevel -- the Python -W level * coverage -- set `coverage=html` to get coverage report * pdb -- set `pdb=true` to launch pdb on errors * module -- run only specific test module * skip -- skip tests by pattern * loop -- number of test iterations for each module * report -- url to submit reports to (see tests/collector.py) * worker -- the worker id To run the full test cycle on the project, using a specific python, making html coverage report:: $ sudo make test python=python3 coverage=html To run a specific test module:: $ sudo make test module=general:test_ipdb.py:TestExplicit The module parameter syntax:: ## module=package[:test_file.py[:TestClass[.test_case]]] $ sudo make test module=lnst $ sudo make test module=general:test_ipr.py $ sudo make test module=general:test_ipdb.py:TestExplicit There are several test packages: * general -- common functional tests * eventlet -- Neutron compatibility tests * lnst -- LNST compatibility tests For each package a new Python instance is launched, keep that in mind since it affects the code coverage collection. It is possible to skip tests by a pattern:: $ sudo make test skip=test_stress To run tests in a loop, use the loop parameter:: $ sudo make test loop=10 For every iteration the code will be packed again with `make dist` and checked against PEP8. All the statistic may be collected with a simple web-script, see `tests/collector.py` (requires the bottle framework). To retrieve the collected data one can use curl:: $ sudo make test report=http://localhost:8080/v1/report/ $ curl http://localhost:8080/v1/report/ | python -m json.tool target: dist ------------ Make Python distribution package. Command line options: * python -- the Python to use target: install --------------- Build and install the package into the system. Command line options: * python -- the Python to use * root -- root install directory * lib -- where to install lib files target: develop --------------- Build the package and deploy the egg-link with setuptools. No code will be deployed into the system directories, but instead the local package directory will be visible to the python. In that case one can change the code locally and immediately test it system-wide without running `make install`. * python -- the Python to use other targets ------------- Other targets are either utility targets to be used internally, or hooks for related projects. You can safely ignore them. pyroute2-0.5.9/docs/html/_sources/mpls.rst.txt0000644000175000017500000001022512717645345021311 0ustar peetpeet00000000000000.. mpls: MPLS howto ---------- Short introduction into Linux MPLS. Requirements: * kernel >= 4.4 * modules: `mpls_router`, `mpls_iptunnel` * `$ sudo sysctl net.mpls.platform_labels=$x`, where `$x` -- number of labels * `pyroute2` >= 0.4.0 MPLS labels =========== Possible label formats:: # int "dst": 20 # list of ints "newdst": [20] "newdst": [20, 30] # string "labels": "20/30" Any of these notations should be accepted by `pyroute2`, if not -- try another format and submit an issue to the project github page. The code is quite new, some issues are possible. 
Refer also to the test cases; they contain many usage samples: * `tests/general/test_ipr.py` * `tests/general/test_ipdb.py` IPRoute ======= MPLS routes ~~~~~~~~~~~ Label swap:: from pyroute2 import IPRoute from pyroute2.common import AF_MPLS ipr = IPRoute() # get the `eth0` interface's index: idx = ipr.link_lookup(ifname="eth0")[0] # create the request req = {"family": AF_MPLS, "oif": idx, "dst": 20, "newdst": [30]} # set up the route ipr.route("add", **req) Notice that `dst` is a single label, while `newdst` is a stack. Label push:: req = {"family": AF_MPLS, "oif": idx, "dst": 20, "newdst": [20, 30]} ipr.route("add", **req) One can also set up the `via` field:: from socket import AF_INET req = {"family": AF_MPLS, "oif": idx, "dst": 20, "newdst": [30], "via": {"family": AF_INET, "addr": "1.2.3.4"}} ipr.route("add", **req) MPLS lwtunnel ~~~~~~~~~~~~~ To inject IP packets into MPLS:: req = {"dst": "1.2.3.0/24", "oif": idx, "encap": {"type": "mpls", "labels": [202, 303]}} ipr.route("add", **req) IPDB ==== MPLS routes ~~~~~~~~~~~ The `IPDB` database also supports MPLS routes; they are reflected in `ipdb.routes.tables["mpls"]`:: >>> (ipdb ... .routes ... .add({"family": AF_MPLS, ... "oif": ipdb.interfaces["eth0"]["index"], ... "dst": 20, ... "newdst": [30]}) ... .commit()) >>> (ipdb ... .routes ... .add({"family": AF_MPLS, ... "oif": ipdb.interfaces["eth0"]["index"], ... "dst": 22, ... "newdst": [22, 42]}) ... .commit()) >>> ipdb.routes.tables["mpls"].keys() [20, 22] Please notice that there is only one MPLS routing table. Multipath MPLS:: with IPDB() as ipdb: (ipdb .routes .add({"family": AF_MPLS, "dst": 20, "multipath": [{"via": {"family": AF_INET, "addr": "10.0.0.2"}, "oif": ipdb.interfaces["eth0"]["index"], "newdst": [30]}, {"via": {"family": AF_INET, "addr": "10.0.0.3"}, "oif": ipdb.interfaces["eth0"]["index"], "newdst": [40]}]}) .commit()) MPLS lwtunnel ~~~~~~~~~~~~~ LWtunnel routes reside in common route tables:: with IPDB() as ipdb: (ipdb .routes .add({"dst": "1.2.3.0/24", "oif": ipdb.interfaces["eth0"]["index"], "encap": {"type": "mpls", "labels": [22]}}) .commit()) print(ipdb.routes["1.2.3.0/24"]) Multipath MPLS lwtunnel:: with IPDB() as ipdb: (ipdb .routes .add({"dst": "1.2.3.0/24", "table": 200, "multipath": [{"oif": ipdb.interfaces["eth0"]["index"], "gateway": "10.0.0.2", "encap": {"type": "mpls", "labels": [200, 300]}}, {"oif": ipdb.interfaces["eth1"]["index"], "gateway": "172.16.0.2", "encap": {"type": "mpls", "labels": [200, 300]}}]}) .commit()) print(ipdb.routes.tables[200]["1.2.3.0/24"]) pyroute2-0.5.9/docs/html/_sources/ndb.rst.txt0000644000175000017500000000032513504766061021073 0ustar peetpeet00000000000000.. ndb: NDB module ========== .. automodule:: pyroute2.ndb.main Reference --------- .. toctree:: :maxdepth: 2 ndb_objects ndb_interfaces ndb_schema ndb_sources ndb_debug *work in progress* pyroute2-0.5.9/docs/html/_sources/ndb_addresses.rst.txt0000644000175000017500000000013213471265347023130 0ustar peetpeet00000000000000.. ndbaddresses: IP addresses ============ .. automodule:: pyroute2.ndb.objects.address pyroute2-0.5.9/docs/html/_sources/ndb_debug.rst.txt0000644000175000017500000001125213616747507022242 0ustar peetpeet00000000000000Debug and logging ================= Logging ------- A simple way to set up stderr logging:: # to start logging on the DB init ndb = NDB(log='on') # ... or to start it in run time ndb.log('on') # ... the same as above, another syntax ndb.log.on # ... turn logging off ndb.log('off') # ...
or ndb.log.off It is also possible to set up logging to a file or to a syslog server:: # ndb.log('file_name.log') # ndb.log('syslog://server:port') Fetching DB data ---------------- By default, NDB starts with an in-memory SQLite3 database. In order to perform post mortem analysis it may be more useful to start the DB with a file-based DB or PostgreSQL as the backend. See more: :ref:`ndbschema` It is possible to dump all the DB data with `schema.export()`:: with NDB() as ndb: ndb.schema.export('stderr') # dump the DB to stderr ... ndb.schema.export('pr2_debug') # dump the DB to a file RTNL events ----------- All the loaded RTNL events may be stored in the DB. To turn that feature on, one should start NDB with the `rtnl_debug` option:: ndb = NDB(rtnl_debug=True) The events may be exported with the same `schema.export()`. Unlike ordinary tables, which are limited by the number of network objects in the system, the event log tables are not limited. Do not enable event logging in production, as it may exhaust all the memory. RTNL objects ------------ NDB creates RTNL objects on demand and doesn't keep them all the time. References to the created objects are linked to the view's `cache` set, e.g. `ndb.interfaces.cache`:: >>> ndb.interfaces.cache.keys() [(('target', u'localhost'), ('index', 2)), (('target', u'localhost'), ('index', 39615))] >>> [x['ifname'] for x in ndb.interfaces.cache.values()] [u'eth0', u't2'] Object states ------------- RTNL objects may be in several states: * invalid: the object does not exist in the system * system: the object exists both in the system and in NDB * setns: the existing object should be moved to another network namespace * remove: the existing object must be deleted from the system The state transitions are logged in the state log:: >>> from pyroute2 import NDB >>> ndb = NDB() >>> c = ndb.interfaces.create(ifname='t0', kind='dummy').commit() >>> c.state.events [ (1557752212.6703758, 'invalid'), (1557752212.6821117, 'system') ] The timestamps make it possible to correlate the state transitions with the NDB log and the RTNL events log, in case the latter was enabled. Object snapshots ---------------- Before running any commit, NDB marks all the related records in the DB with a random value in the `f_tflags` DB field (`tflags` object field), and stores all the marked records in the snapshot tables. In short, `commit()` is `snapshot() + apply() + revert() if failed`:: >>> nic = ndb.interfaces['t0'] >>> nic['state'] 'down' >>> nic['state'] = 'up' >>> snapshot = nic.snapshot() >>> ndb.schema.snapshots { 'addresses_139736119707256': , 'neighbours_139736119707256': , 'routes_139736119707256': , 'nh_139736119707256': , 'p2p_139736119707256': , 'ifinfo_bridge_139736119707256': , 'ifinfo_bond_139736119707256': , 'ifinfo_vlan_139736119707256': , 'ifinfo_vxlan_139736119707256': , 'ifinfo_gre_139736119707256': , 'ifinfo_vrf_139736119707256': , 'ifinfo_vti_139736119707256': , 'ifinfo_vti6_139736119707256': , 'interfaces_139736119707256': } >>> nic.apply() ... >>> nic['state'] 'up' >>> snapshot.apply(rollback=True) ... >>> nic['state'] 'down' Or the same, using `commit()` and `rollback()`:: >>> nic = ndb.interfaces['t0'] >>> nic['state'] 'down' >>> nic['state'] = 'up' >>> nic.commit() >>> nic['state'] 'up' >>> nic.rollback() >>> nic['state'] 'down' These snapshot tables hold the objects' state as it was before the changes were applied. pyroute2-0.5.9/docs/html/_sources/ndb_interfaces.rst.txt0000644000175000017500000000015113471265301023265 0ustar peetpeet00000000000000.. ndbinterfaces: Network interfaces ================== ..
automodule:: pyroute2.ndb.objects.interface pyroute2-0.5.9/docs/html/_sources/ndb_intro.rst.txt0000644000175000017500000000006213465341146022303 0ustar peetpeet00000000000000.. ndb_intro: .. automodule:: pyroute2.ndb.main pyroute2-0.5.9/docs/html/_sources/ndb_objects.rst.txt0000644000175000017500000000017113505113041022564 0ustar peetpeet00000000000000.. ndbobjects: NDB objects =========== .. automodule:: pyroute2.ndb.objects :members: :exclude-members: update pyroute2-0.5.9/docs/html/_sources/ndb_schema.rst.txt0000644000175000017500000000010713471042404022400 0ustar peetpeet00000000000000.. _ndbschema: Database ======== .. automodule:: pyroute2.ndb.schema pyroute2-0.5.9/docs/html/_sources/ndb_sources.rst.txt0000644000175000017500000000006513471017534022634 0ustar peetpeet00000000000000.. ndb_sources: .. automodule:: pyroute2.ndb.source pyroute2-0.5.9/docs/html/_sources/netlink.rst.txt0000644000175000017500000000005612717645345021773 0ustar peetpeet00000000000000.. netlink: .. automodule:: pyroute2.netlink pyroute2-0.5.9/docs/html/_sources/netns.rst.txt0000644000175000017500000000030513471017534021452 0ustar peetpeet00000000000000.. _netns: NetNS module ============ .. automodule:: pyroute2.netns :members: .. automodule:: pyroute2.netns.nslink :members: .. automodule:: pyroute2.netns.process.proxy :members: pyroute2-0.5.9/docs/html/_sources/nlsocket.rst.txt0000644000175000017500000000010612715075360022145 0ustar peetpeet00000000000000.. nlsocket: .. automodule:: pyroute2.netlink.nlsocket :members: pyroute2-0.5.9/docs/html/_sources/remote.rst.txt0000644000175000017500000000562613471017534021625 0ustar peetpeet00000000000000.. _remote: RemoteIPRoute ------------- Caveats ======= .. warning:: The class implies a serious performance penalty. Please consider other options if you expect high netlink traffic loads. .. warning:: The class requires the mitogen library, which should be installed separately: https://mitogen.readthedocs.io/en/latest/ .. warning:: Objects of this class implicitly spawn child processes. Beware. Here are some reasons why this class is not used as a general replacement for IPRoute for local RTNL and NetNS for local netns management: * The performance of the Python parser for the binary netlink protocol is not that good, and using such proxies makes it even worse. * Local IPRoute and NetNS access is the core functionality and must work with no additional libraries installed. Introduction ============ It is possible to run IPRoute instances remotely using the mitogen library. The remote node must have the same Python version installed, but no additional libraries are required there: all the code will be imported from the host where you start your script. The simplest case: run IPRoute on a remote Linux host via ssh (assuming the keys are deployed):: from pyroute2 import RemoteIPRoute rip = RemoteIPRoute(protocol='ssh', hostname='test01', username='ci') rip.get_links() # ... Indirect access =============== By building mitogen proxy chains you can access nodes indirectly:: import mitogen.master from pyroute2 import RemoteIPRoute broker = mitogen.master.Broker() router = mitogen.master.Router(broker) # login to the gateway gw = router.ssh(hostname='test-gateway', username='ci') # login from the gateway to the target node host = router.ssh(via=gw, hostname='test01', username='ci') rip = RemoteIPRoute(router=router, context=host) rip.get_links() # ...
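Whichever access mode is used, RemoteIPRoute spawns helper processes (see the caveats above), so it is worth releasing them explicitly when done. A minimal sketch, reusing the hypothetical `test01` host and `ci` user from the examples above and assuming the usual `close()` method of pyroute2 socket-like objects::

    from pyroute2 import RemoteIPRoute

    rip = RemoteIPRoute(protocol='ssh', hostname='test01', username='ci')
    try:
        # any IPRoute-like call works here
        for link in rip.get_links():
            print(link.get_attr('IFLA_IFNAME'))
    finally:
        # shut down the proxy and its child processes
        rip.close()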
Run with privileges =================== Running IPRoute with root permissions requires the mitogen sudo proxy:: import mitogen.master from pyroute2 import RemoteIPRoute broker = mitogen.master.Broker() router = mitogen.master.Router(broker) host = router.ssh(hostname='test01', username='ci') sudo = router.sudo(via=host, username='root') rip = RemoteIPRoute(router=router, context=sudo) rip.link('add', ifname='br0', kind='bridge') # ... Remote network namespaces ========================= You can also access remote network namespaces with the same RemoteIPRoute object:: import mitogen.master from pyroute2 import RemoteIPRoute broker = mitogen.master.Broker() router = mitogen.master.Router(broker) host = router.ssh(hostname='test01', username='ci') sudo = router.sudo(via=host, username='root') rip = RemoteIPRoute(router=router, context=sudo, netns='test-netns') rip.link('add', ifname='br0', kind='bridge') # ... pyroute2-0.5.9/docs/html/_sources/report.rst.txt0000644000175000017500000000132413621220101021621 0ustar peetpeet00000000000000Report a bug ============ In case you have issues, please report them to the project bug tracker: https://github.com/svinota/pyroute2/issues It is important to provide all the required information with your report: * Linux kernel version * Python version * Specific environment, if used -- gevent, eventlet etc. Sometimes specific system parameters need to be measured. There is code to do that, e.g.:: $ sudo make test-platform Please keep in mind that this command will try to create and delete different interface types. It is also possible to run the test from your code:: from pprint import pprint from pyroute2.config.test_platform import TestCapsRtnl pprint(TestCapsRtnl().collect()) pyroute2-0.5.9/docs/html/_sources/usage.rst.txt0000644000175000017500000000541313305322105021421 0ustar peetpeet00000000000000.. usage: Quickstart ========== Hello, world:: $ sudo pip install pyroute2 $ cat example.py from pyroute2 import IPRoute with IPRoute() as ipr: print([x.get_attr('IFLA_IFNAME') for x in ipr.get_links()]) $ python example.py ['lo', 'p6p1', 'wlan0', 'virbr0', 'virbr0-nic'] Sockets ------- At runtime pyroute2 socket objects behave as normal sockets. One can use them in poll/select, and one can call `recv()` and `sendmsg()`:: from pyroute2 import IPRoute # create RTNL socket ipr = IPRoute() # subscribe to broadcast messages ipr.bind() # wait for data (do not parse it) data = ipr.recv(65535) # parse received data messages = ipr.marshal.parse(data) # shortcut: recv() + parse() # # (under the hood there is much more, but for # simplicity it's enough to say so) # messages = ipr.get() But pyroute2 objects also have a lot of methods written to handle specific tasks:: from pyroute2 import IPRoute # RTNL interface with IPRoute() as ipr: # get devices list ipr.get_links() # get addresses ipr.get_addr() Resource release ---------------- Do not forget to release resources and close sockets. Also keep in mind that the real fd will be closed only when the Python GC collects the closed objects. Imports ------- The public API is exported by `pyroute2/__init__.py`. It is done so to provide a stable API that will not be affected by changes in the package layout. There may be significant layout changes between versions, but if a symbol is re-exported via `pyroute2/__init__.py`, it will be available with the same import signature. .. warning:: All other objects are also available for import, but they may change signatures in the next versions.
E.g.:: # Import a pyroute2 class directly. In the next versions # the import signature can be changed, e.g., NetNS from # pyroute2.netns.nslink can be moved somewhere else. # from pyroute2.netns.nslink import NetNS ns = NetNS('test') # Import the same class from the root module. This signature # will stay the same, any layout change is reflected in # the root module. # from pyroute2 import NetNS ns = NetNS('test') Special cases ============= eventlet -------- The eventlet environment conflicts in some way with socket objects, and pyroute2 provides some workarounds for that:: # import symbols # import eventlet from pyroute2 import NetNS from pyroute2.config.eventlet import eventlet_config # setup the environment eventlet.monkey_patch() eventlet_config() # run the code ns = NetNS('nsname') ns.get_routes() ... This may help, but not always. In general, the pyroute2 library is not eventlet-friendly. pyroute2-0.5.9/docs/html/_sources/wireguard.rst.txt0000644000175000017500000000014213621021764022310 0ustar peetpeet00000000000000.. _wireguard: WireGuard module ================ .. automodule:: pyroute2.netlink.generic.wireguard pyroute2-0.5.9/docs/html/_sources/wiset.rst.txt0000644000175000017500000000012313160243337021451 0ustar peetpeet00000000000000.. wiset: WiSet module ============ .. automodule:: pyroute2.wiset :members: pyroute2-0.5.9/docs/html/_static/0000755000175000017500000000000013621220110016540 5ustar peetpeet00000000000000pyroute2-0.5.9/docs/html/_static/ajax-loader.gif0000644000175000017500000000124113405433753021437 0ustar peetpeet00000000000000(binary GIF data omitted)pyroute2-0.5.9/docs/html/_static/basic.css0000644000175000017500000002524013621220107020344 0ustar peetpeet00000000000000/* * basic.css * ~~~~~~~~~ * * Sphinx stylesheet -- basic theme. * * :copyright: Copyright 2007-2018 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details.
* */ /* -- main layout ----------------------------------------------------------- */ div.clearer { clear: both; } /* -- relbar ---------------------------------------------------------------- */ div.related { width: 100%; font-size: 90%; } div.related h3 { display: none; } div.related ul { margin: 0; padding: 0 0 0 10px; list-style: none; } div.related li { display: inline; } div.related li.right { float: right; margin-right: 5px; } /* -- sidebar --------------------------------------------------------------- */ div.sphinxsidebarwrapper { padding: 10px 5px 0 10px; } div.sphinxsidebar { float: left; width: 230px; margin-left: -100%; font-size: 90%; word-wrap: break-word; overflow-wrap : break-word; } div.sphinxsidebar ul { list-style: none; } div.sphinxsidebar ul ul, div.sphinxsidebar ul.want-points { margin-left: 20px; list-style: square; } div.sphinxsidebar ul ul { margin-top: 0; margin-bottom: 0; } div.sphinxsidebar form { margin-top: 10px; } div.sphinxsidebar input { border: 1px solid #98dbcc; font-family: sans-serif; font-size: 1em; } div.sphinxsidebar #searchbox form.search { overflow: hidden; } div.sphinxsidebar #searchbox input[type="text"] { float: left; width: 80%; padding: 0.25em; box-sizing: border-box; } div.sphinxsidebar #searchbox input[type="submit"] { float: left; width: 20%; border-left: none; padding: 0.25em; box-sizing: border-box; } img { border: 0; max-width: 100%; } /* -- search page ----------------------------------------------------------- */ ul.search { margin: 10px 0 0 20px; padding: 0; } ul.search li { padding: 5px 0 5px 20px; background-image: url(file.png); background-repeat: no-repeat; background-position: 0 7px; } ul.search li a { font-weight: bold; } ul.search li div.context { color: #888; margin: 2px 0 0 30px; text-align: left; } ul.keywordmatches li.goodmatch a { font-weight: bold; } /* -- index page ------------------------------------------------------------ */ table.contentstable { width: 90%; margin-left: auto; margin-right: auto; } table.contentstable p.biglink { line-height: 150%; } a.biglink { font-size: 1.3em; } span.linkdescr { font-style: italic; padding-top: 5px; font-size: 90%; } /* -- general index --------------------------------------------------------- */ table.indextable { width: 100%; } table.indextable td { text-align: left; vertical-align: top; } table.indextable ul { margin-top: 0; margin-bottom: 0; list-style-type: none; } table.indextable > tbody > tr > td > ul { padding-left: 0em; } table.indextable tr.pcap { height: 10px; } table.indextable tr.cap { margin-top: 10px; background-color: #f2f2f2; } img.toggler { margin-right: 3px; margin-top: 3px; cursor: pointer; } div.modindex-jumpbox { border-top: 1px solid #ddd; border-bottom: 1px solid #ddd; margin: 1em 0 1em 0; padding: 0.4em; } div.genindex-jumpbox { border-top: 1px solid #ddd; border-bottom: 1px solid #ddd; margin: 1em 0 1em 0; padding: 0.4em; } /* -- domain module index --------------------------------------------------- */ table.modindextable td { padding: 2px; border-collapse: collapse; } /* -- general body styles --------------------------------------------------- */ div.body { min-width: 450px; max-width: 800px; } div.body p, div.body dd, div.body li, div.body blockquote { -moz-hyphens: auto; -ms-hyphens: auto; -webkit-hyphens: auto; hyphens: auto; } a.headerlink { visibility: hidden; } h1:hover > a.headerlink, h2:hover > a.headerlink, h3:hover > a.headerlink, h4:hover > a.headerlink, h5:hover > a.headerlink, h6:hover > a.headerlink, dt:hover > a.headerlink, 
caption:hover > a.headerlink, p.caption:hover > a.headerlink, div.code-block-caption:hover > a.headerlink { visibility: visible; } div.body p.caption { text-align: inherit; } div.body td { text-align: left; } .first { margin-top: 0 !important; } p.rubric { margin-top: 30px; font-weight: bold; } img.align-left, .figure.align-left, object.align-left { clear: left; float: left; margin-right: 1em; } img.align-right, .figure.align-right, object.align-right { clear: right; float: right; margin-left: 1em; } img.align-center, .figure.align-center, object.align-center { display: block; margin-left: auto; margin-right: auto; } .align-left { text-align: left; } .align-center { text-align: center; } .align-right { text-align: right; } /* -- sidebars -------------------------------------------------------------- */ div.sidebar { margin: 0 0 0.5em 1em; border: 1px solid #ddb; padding: 7px 7px 0 7px; background-color: #ffe; width: 40%; float: right; } p.sidebar-title { font-weight: bold; } /* -- topics ---------------------------------------------------------------- */ div.topic { border: 1px solid #ccc; padding: 7px 7px 0 7px; margin: 10px 0 10px 0; } p.topic-title { font-size: 1.1em; font-weight: bold; margin-top: 10px; } /* -- admonitions ----------------------------------------------------------- */ div.admonition { margin-top: 10px; margin-bottom: 10px; padding: 7px; } div.admonition dt { font-weight: bold; } div.admonition dl { margin-bottom: 0; } p.admonition-title { margin: 0px 10px 5px 0px; font-weight: bold; } div.body p.centered { text-align: center; margin-top: 25px; } /* -- tables ---------------------------------------------------------------- */ table.docutils { border: 0; border-collapse: collapse; } table.align-center { margin-left: auto; margin-right: auto; } table caption span.caption-number { font-style: italic; } table caption span.caption-text { } table.docutils td, table.docutils th { padding: 1px 8px 1px 5px; border-top: 0; border-left: 0; border-right: 0; border-bottom: 1px solid #aaa; } table.footnote td, table.footnote th { border: 0 !important; } th { text-align: left; padding-right: 5px; } table.citation { border-left: solid 1px gray; margin-left: 1px; } table.citation td { border-bottom: none; } /* -- figures --------------------------------------------------------------- */ div.figure { margin: 0.5em; padding: 0.5em; } div.figure p.caption { padding: 0.3em; } div.figure p.caption span.caption-number { font-style: italic; } div.figure p.caption span.caption-text { } /* -- field list styles ----------------------------------------------------- */ table.field-list td, table.field-list th { border: 0 !important; } .field-list ul { margin: 0; padding-left: 1em; } .field-list p { margin: 0; } .field-name { -moz-hyphens: manual; -ms-hyphens: manual; -webkit-hyphens: manual; hyphens: manual; } /* -- hlist styles ---------------------------------------------------------- */ table.hlist td { vertical-align: top; } /* -- other body styles ----------------------------------------------------- */ ol.arabic { list-style: decimal; } ol.loweralpha { list-style: lower-alpha; } ol.upperalpha { list-style: upper-alpha; } ol.lowerroman { list-style: lower-roman; } ol.upperroman { list-style: upper-roman; } dl { margin-bottom: 15px; } dd p { margin-top: 0px; } dd ul, dd table { margin-bottom: 10px; } dd { margin-top: 3px; margin-bottom: 10px; margin-left: 30px; } dt:target, span.highlighted { background-color: #fbe54e; } rect.highlighted { fill: #fbe54e; } dl.glossary dt { font-weight: bold; 
font-size: 1.1em; } .optional { font-size: 1.3em; } .sig-paren { font-size: larger; } .versionmodified { font-style: italic; } .system-message { background-color: #fda; padding: 5px; border: 3px solid red; } .footnote:target { background-color: #ffa; } .line-block { display: block; margin-top: 1em; margin-bottom: 1em; } .line-block .line-block { margin-top: 0; margin-bottom: 0; margin-left: 1.5em; } .guilabel, .menuselection { font-family: sans-serif; } .accelerator { text-decoration: underline; } .classifier { font-style: oblique; } abbr, acronym { border-bottom: dotted 1px; cursor: help; } /* -- code displays --------------------------------------------------------- */ pre { overflow: auto; overflow-y: hidden; /* fixes display issues on Chrome browsers */ } span.pre { -moz-hyphens: none; -ms-hyphens: none; -webkit-hyphens: none; hyphens: none; } td.linenos pre { padding: 5px 0px; border: 0; background-color: transparent; color: #aaa; } table.highlighttable { margin-left: 0.5em; } table.highlighttable td { padding: 0 0.5em 0 0.5em; } div.code-block-caption { padding: 2px 5px; font-size: small; } div.code-block-caption code { background-color: transparent; } div.code-block-caption + div > div.highlight > pre { margin-top: 0; } div.code-block-caption span.caption-number { padding: 0.1em 0.3em; font-style: italic; } div.code-block-caption span.caption-text { } div.literal-block-wrapper { padding: 1em 1em 0; } div.literal-block-wrapper div.highlight { margin: 0; } code.descname { background-color: transparent; font-weight: bold; font-size: 1.2em; } code.descclassname { background-color: transparent; } code.xref, a code { background-color: transparent; font-weight: bold; } h1 code, h2 code, h3 code, h4 code, h5 code, h6 code { background-color: transparent; } .viewcode-link { float: right; } .viewcode-back { float: right; font-family: sans-serif; } div.viewcode-block:target { margin: -1px -10px; padding: 0 10px; } /* -- math display ---------------------------------------------------------- */ img.math { vertical-align: middle; } div.body div.math p { text-align: center; } span.eqno { float: right; } span.eqno a.headerlink { position: relative; left: 0px; z-index: 1; } div.math:hover a.headerlink { visibility: visible; } /* -- printout stylesheet --------------------------------------------------- */ @media print { div.document, div.documentwrapper, div.bodywrapper { margin: 0 !important; width: 100%; } div.sphinxsidebar, div.related, div.footer, #top-link { display: none; } }pyroute2-0.5.9/docs/html/_static/classic.css0000644000175000017500000000000013305322105020667 0ustar peetpeet00000000000000pyroute2-0.5.9/docs/html/_static/comment-bright.png0000644000175000017500000000136413406216703022207 0ustar peetpeet00000000000000PNG  IHDRaIDATx<ߙm۶m۶qm۶m۶mM=D8tٍ\{56j>Qn~3sD{oS+ٻ؀=nnW?XumAHI%pHscYoo_{Z)48sڳۗ8YüYsj34s^#ǒtˋqkZܜwݿߵ>!8pVn{շ=n$p\^;=;wPIENDB`pyroute2-0.5.9/docs/html/_static/comment-close.png0000644000175000017500000000147513406216703022040 0ustar peetpeet00000000000000PNG  IHDRaIDATxm8$km۶m۶m۶m۶AMfp:O'e$Qq aO[B3U9Og+ł-81dw=7q1CKa~ ʏ lϕ]O4l!A@@wny^xa*;1uSWݦO<*7g>b~yޞ mN\(t:+tU&>9Z}Ok=wԈ=ehjo OSd̳m#(2ڮ&!Q&$|~\>&nMK<+W 7zɫ ?w!8_O ާ4& MS'/қ=rּ`V0!?t'$#'P`iawP?Dãqف.`Ž lZ%9A {EҺ !;e`fT]P]ZCDX2e)ןryOZs߂Ј {1<*Bx `(B42|k@=PAȚe; HͭU`B@(IϚR F"a(. 
|R*wZB/bZ fMQ+d!!065.9Eq+@3ىVSËd8;&KpHh0f;hY,]|Lcne!fKcJFiySOhמ%ws vaJ{ڣ;/S3 ?qcC\qHxsemk2n dt { margin-bottom: 1em; padding: 1em; border-left: solid 1px #ccc; border-bottom: solid 1px #ccc; border-radius: 10px; } pyroute2-0.5.9/docs/html/_static/default.css0000644000175000017500000000003413406216703020711 0ustar peetpeet00000000000000@import url("classic.css"); pyroute2-0.5.9/docs/html/_static/doctools.js0000644000175000017500000002214113406216703020742 0ustar peetpeet00000000000000/* * doctools.js * ~~~~~~~~~~~ * * Sphinx JavaScript utilities for all documentation. * * :copyright: Copyright 2007-2018 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details. * */ /** * select a different prefix for underscore */ $u = _.noConflict(); /** * make the code below compatible with browsers without * an installed firebug like debugger if (!window.console || !console.firebug) { var names = ["log", "debug", "info", "warn", "error", "assert", "dir", "dirxml", "group", "groupEnd", "time", "timeEnd", "count", "trace", "profile", "profileEnd"]; window.console = {}; for (var i = 0; i < names.length; ++i) window.console[names[i]] = function() {}; } */ /** * small helper function to urldecode strings */ jQuery.urldecode = function(x) { return decodeURIComponent(x).replace(/\+/g, ' '); }; /** * small helper function to urlencode strings */ jQuery.urlencode = encodeURIComponent; /** * This function returns the parsed url parameters of the * current request. Multiple values per key are supported, * it will always return arrays of strings for the value parts. */ jQuery.getQueryParameters = function(s) { if (typeof s === 'undefined') s = document.location.search; var parts = s.substr(s.indexOf('?') + 1).split('&'); var result = {}; for (var i = 0; i < parts.length; i++) { var tmp = parts[i].split('=', 2); var key = jQuery.urldecode(tmp[0]); var value = jQuery.urldecode(tmp[1]); if (key in result) result[key].push(value); else result[key] = [value]; } return result; }; /** * highlight a given string on a jquery object by wrapping it in * span elements with the given class name. 
*/ jQuery.fn.highlightText = function(text, className) { function highlight(node, addItems) { if (node.nodeType === 3) { var val = node.nodeValue; var pos = val.toLowerCase().indexOf(text); if (pos >= 0 && !jQuery(node.parentNode).hasClass(className) && !jQuery(node.parentNode).hasClass("nohighlight")) { var span; var isInSVG = jQuery(node).closest("body, svg, foreignObject").is("svg"); if (isInSVG) { span = document.createElementNS("http://www.w3.org/2000/svg", "tspan"); } else { span = document.createElement("span"); span.className = className; } span.appendChild(document.createTextNode(val.substr(pos, text.length))); node.parentNode.insertBefore(span, node.parentNode.insertBefore( document.createTextNode(val.substr(pos + text.length)), node.nextSibling)); node.nodeValue = val.substr(0, pos); if (isInSVG) { var bbox = span.getBBox(); var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect"); rect.x.baseVal.value = bbox.x; rect.y.baseVal.value = bbox.y; rect.width.baseVal.value = bbox.width; rect.height.baseVal.value = bbox.height; rect.setAttribute('class', className); var parentOfText = node.parentNode.parentNode; addItems.push({ "parent": node.parentNode, "target": rect}); } } } else if (!jQuery(node).is("button, select, textarea")) { jQuery.each(node.childNodes, function() { highlight(this, addItems); }); } } var addItems = []; var result = this.each(function() { highlight(this, addItems); }); for (var i = 0; i < addItems.length; ++i) { jQuery(addItems[i].parent).before(addItems[i].target); } return result; }; /* * backward compatibility for jQuery.browser * This will be supported until firefox bug is fixed. */ if (!jQuery.browser) { jQuery.uaMatch = function(ua) { ua = ua.toLowerCase(); var match = /(chrome)[ \/]([\w.]+)/.exec(ua) || /(webkit)[ \/]([\w.]+)/.exec(ua) || /(opera)(?:.*version|)[ \/]([\w.]+)/.exec(ua) || /(msie) ([\w.]+)/.exec(ua) || ua.indexOf("compatible") < 0 && /(mozilla)(?:.*? rv:([\w.]+)|)/.exec(ua) || []; return { browser: match[ 1 ] || "", version: match[ 2 ] || "0" }; }; jQuery.browser = {}; jQuery.browser[jQuery.uaMatch(navigator.userAgent).browser] = true; } /** * Small JavaScript module for the documentation. */ var Documentation = { init : function() { this.fixFirefoxAnchorBug(); this.highlightSearchWords(); this.initIndexTable(); if (DOCUMENTATION_OPTIONS.NAVIGATION_WITH_KEYS) { this.initOnKeyListeners(); } }, /** * i18n support */ TRANSLATIONS : {}, PLURAL_EXPR : function(n) { return n === 1 ? 0 : 1; }, LOCALE : 'unknown', // gettext and ngettext don't access this so that the functions // can safely bound to a different name (_ = Documentation.gettext) gettext : function(string) { var translated = Documentation.TRANSLATIONS[string]; if (typeof translated === 'undefined') return string; return (typeof translated === 'string') ? translated : translated[0]; }, ngettext : function(singular, plural, n) { var translated = Documentation.TRANSLATIONS[singular]; if (typeof translated === 'undefined') return (n == 1) ? singular : plural; return translated[Documentation.PLURALEXPR(n)]; }, addTranslations : function(catalog) { for (var key in catalog.messages) this.TRANSLATIONS[key] = catalog.messages[key]; this.PLURAL_EXPR = new Function('n', 'return +(' + catalog.plural_expr + ')'); this.LOCALE = catalog.locale; }, /** * add context elements like header anchor links */ addContextElements : function() { $('div[id] > :header:first').each(function() { $('\u00B6'). attr('href', '#' + this.id). attr('title', _('Permalink to this headline')). 
appendTo(this); }); $('dt[id]').each(function() { $('\u00B6'). attr('href', '#' + this.id). attr('title', _('Permalink to this definition')). appendTo(this); }); }, /** * workaround a firefox stupidity * see: https://bugzilla.mozilla.org/show_bug.cgi?id=645075 */ fixFirefoxAnchorBug : function() { if (document.location.hash && $.browser.mozilla) window.setTimeout(function() { document.location.href += ''; }, 10); }, /** * highlight the search words provided in the url in the text */ highlightSearchWords : function() { var params = $.getQueryParameters(); var terms = (params.highlight) ? params.highlight[0].split(/\s+/) : []; if (terms.length) { var body = $('div.body'); if (!body.length) { body = $('body'); } window.setTimeout(function() { $.each(terms, function() { body.highlightText(this.toLowerCase(), 'highlighted'); }); }, 10); $('') .appendTo($('#searchbox')); } }, /** * init the domain index toggle buttons */ initIndexTable : function() { var togglers = $('img.toggler').click(function() { var src = $(this).attr('src'); var idnum = $(this).attr('id').substr(7); $('tr.cg-' + idnum).toggle(); if (src.substr(-9) === 'minus.png') $(this).attr('src', src.substr(0, src.length-9) + 'plus.png'); else $(this).attr('src', src.substr(0, src.length-8) + 'minus.png'); }).css('display', ''); if (DOCUMENTATION_OPTIONS.COLLAPSE_INDEX) { togglers.click(); } }, /** * helper function to hide the search marks again */ hideSearchWords : function() { $('#searchbox .highlight-link').fadeOut(300); $('span.highlighted').removeClass('highlighted'); }, /** * make the url absolute */ makeURL : function(relativeURL) { return DOCUMENTATION_OPTIONS.URL_ROOT + '/' + relativeURL; }, /** * get the current relative url */ getCurrentURL : function() { var path = document.location.pathname; var parts = path.split(/\//); $.each(DOCUMENTATION_OPTIONS.URL_ROOT.split(/\//), function() { if (this === '..') parts.pop(); }); var url = parts.join('/'); return path.substring(url.lastIndexOf('/') + 1, path.length - 1); }, initOnKeyListeners: function() { $(document).keyup(function(event) { var activeElementType = document.activeElement.tagName; // don't navigate when in search box or textarea if (activeElementType !== 'TEXTAREA' && activeElementType !== 'INPUT' && activeElementType !== 'SELECT') { switch (event.keyCode) { case 37: // left var prevHref = $('link[rel="prev"]').prop('href'); if (prevHref) { window.location.href = prevHref; return false; } case 39: // right var nextHref = $('link[rel="next"]').prop('href'); if (nextHref) { window.location.href = nextHref; return false; } } } }); } }; // quick alias for translations _ = Documentation.gettext; $(document).ready(function() { Documentation.init(); }); pyroute2-0.5.9/docs/html/_static/documentation_options.js0000644000175000017500000000046513621220107023535 0ustar peetpeet00000000000000var DOCUMENTATION_OPTIONS = { URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'), VERSION: '0.5.9', LANGUAGE: 'None', COLLAPSE_INDEX: false, FILE_SUFFIX: '.html', HAS_SOURCE: true, SOURCELINK_SUFFIX: '.txt', NAVIGATION_WITH_KEYS: false, };pyroute2-0.5.9/docs/html/_static/down-pressed.png0000644000175000017500000000033613406216703021700 0ustar peetpeet00000000000000PNG  IHDRaIDATxc@J@lKf[^g%_  HK ĿD Ab3CGhr.x/`X Wʱ 2 eF+,.xEJ lAR $WT?0i)1maUIENDB`pyroute2-0.5.9/docs/html/_static/down.png0000644000175000017500000000031213406216703020227 0ustar peetpeet00000000000000PNG  IHDR7IDATxP@ @Iߗ`&"z6xK@kbϢxs]M :/+gPd2GeÐ~߸J_c S_ S%exdU](UH>&;4i$n3> 
ycdIENDB`pyroute2-0.5.9/docs/html/_static/file.png0000644000175000017500000000043613406216703020206 0ustar peetpeet00000000000000PNG  IHDRaIDATxR){l ۶f=@ :3~箄rX$AX-D ~ lj(P%8<<9:: PO&$ l~X&EW^4wQ}^ͣ i0/H/@F)Dzq+j[SU5h/oY G&Lfs|{3%U+S`AFIENDB`pyroute2-0.5.9/docs/html/_static/graphviz.css0000644000175000017500000000045313406216703021124 0ustar peetpeet00000000000000/* * graphviz.css * ~~~~~~~~~~~~ * * Sphinx stylesheet -- graphviz extension. * * :copyright: Copyright 2007-2018 by the Sphinx team, see AUTHORS. * :license: BSD, see LICENSE for details. * */ img.graphviz { border: 0; max-width: 100%; } object.graphviz { max-width: 100%; } pyroute2-0.5.9/docs/html/_static/jquery.js0000644000175000017500000102257713424212267020452 0ustar peetpeet00000000000000/*! * jQuery JavaScript Library v3.3.1-dfsg * https://jquery.com/ * * Includes Sizzle.js * https://sizzlejs.com/ * * Copyright JS Foundation and other contributors * Released under the MIT license * https://jquery.org/license * * Date: 2019-01-30T03:06Z */ ( function( global, factory ) { "use strict"; if ( typeof module === "object" && typeof module.exports === "object" ) { // For CommonJS and CommonJS-like environments where a proper `window` // is present, execute the factory and get jQuery. // For environments that do not have a `window` with a `document` // (such as Node.js), expose a factory as module.exports. // This accentuates the need for the creation of a real `window`. // e.g. var jQuery = require("jquery")(window); // See ticket #14549 for more info. module.exports = global.document ? factory( global, true ) : function( w ) { if ( !w.document ) { throw new Error( "jQuery requires a window with a document" ); } return factory( w ); }; } else { factory( global ); } // Pass this if window is not defined yet } )( typeof window !== "undefined" ? window : this, function( window, noGlobal ) { // Edge <= 12 - 13+, Firefox <=18 - 45+, IE 10 - 11, Safari 5.1 - 9+, iOS 6 - 9.1 // throw exceptions when non-strict code (e.g., ASP.NET 4.5) accesses strict mode // arguments.callee.caller (trac-13335). But as of jQuery 3.0 (2016), strict mode should be common // enough that all such attempts are guarded in a try block. var arr = []; var document = window.document; var getProto = Object.getPrototypeOf; var slice = arr.slice; var concat = arr.concat; var push = arr.push; var indexOf = arr.indexOf; var class2type = {}; var toString = class2type.toString; var hasOwn = class2type.hasOwnProperty; var fnToString = hasOwn.toString; var ObjectFunctionString = fnToString.call( Object ); var support = {}; var isFunction = function isFunction( obj ) { // Support: Chrome <=57, Firefox <=52 // In some browsers, typeof returns "function" for HTML elements // (i.e., `typeof document.createElement( "object" ) === "function"`). // We don't want to classify *any* DOM node as a function. 
return typeof obj === "function" && typeof obj.nodeType !== "number"; }; var isWindow = function isWindow( obj ) { return obj != null && obj === obj.window; }; var preservedScriptAttributes = { type: true, src: true, noModule: true }; function DOMEval( code, doc, node ) { doc = doc || document; var i, script = doc.createElement( "script" ); script.text = code; if ( node ) { for ( i in preservedScriptAttributes ) { if ( node[ i ] ) { script[ i ] = node[ i ]; } } } doc.head.appendChild( script ).parentNode.removeChild( script ); } function toType( obj ) { if ( obj == null ) { return obj + ""; } // Support: Android <=2.3 only (functionish RegExp) return typeof obj === "object" || typeof obj === "function" ? class2type[ toString.call( obj ) ] || "object" : typeof obj; } /* global Symbol */ // Defining this global in .eslintrc.json would create a danger of using the global // unguarded in another place, it seems safer to define global only for this module var version = "3.3.1", // Define a local copy of jQuery jQuery = function( selector, context ) { // The jQuery object is actually just the init constructor 'enhanced' // Need init if jQuery is called (just allow error to be thrown if not included) return new jQuery.fn.init( selector, context ); }, // Support: Android <=4.0 only // Make sure we trim BOM and NBSP rtrim = /^[\s\uFEFF\xA0]+|[\s\uFEFF\xA0]+$/g; jQuery.fn = jQuery.prototype = { // The current version of jQuery being used jquery: version, constructor: jQuery, // The default length of a jQuery object is 0 length: 0, toArray: function() { return slice.call( this ); }, // Get the Nth element in the matched element set OR // Get the whole matched element set as a clean array get: function( num ) { // Return all the elements in a clean array if ( num == null ) { return slice.call( this ); } // Return just the one element from the set return num < 0 ? this[ num + this.length ] : this[ num ]; }, // Take an array of elements and push it onto the stack // (returning the new matched element set) pushStack: function( elems ) { // Build a new jQuery matched element set var ret = jQuery.merge( this.constructor(), elems ); // Add the old object onto the stack (as a reference) ret.prevObject = this; // Return the newly-formed element set return ret; }, // Execute a callback for every element in the matched set. each: function( callback ) { return jQuery.each( this, callback ); }, map: function( callback ) { return this.pushStack( jQuery.map( this, function( elem, i ) { return callback.call( elem, i, elem ); } ) ); }, slice: function() { return this.pushStack( slice.apply( this, arguments ) ); }, first: function() { return this.eq( 0 ); }, last: function() { return this.eq( -1 ); }, eq: function( i ) { var len = this.length, j = +i + ( i < 0 ? len : 0 ); return this.pushStack( j >= 0 && j < len ? [ this[ j ] ] : [] ); }, end: function() { return this.prevObject || this.constructor(); }, // For internal use only. // Behaves like an Array's method, not like a jQuery method. 
push: push, sort: arr.sort, splice: arr.splice }; jQuery.extend = jQuery.fn.extend = function() { var options, name, src, copy, copyIsArray, clone, target = arguments[ 0 ] || {}, i = 1, length = arguments.length, deep = false; // Handle a deep copy situation if ( typeof target === "boolean" ) { deep = target; // Skip the boolean and the target target = arguments[ i ] || {}; i++; } // Handle case when target is a string or something (possible in deep copy) if ( typeof target !== "object" && !isFunction( target ) ) { target = {}; } // Extend jQuery itself if only one argument is passed if ( i === length ) { target = this; i--; } for ( ; i < length; i++ ) { // Only deal with non-null/undefined values if ( ( options = arguments[ i ] ) != null ) { // Extend the base object for ( name in options ) { src = target[ name ]; copy = options[ name ]; // Prevent never-ending loop if ( target === copy ) { continue; } // Recurse if we're merging plain objects or arrays if ( deep && copy && ( jQuery.isPlainObject( copy ) || ( copyIsArray = Array.isArray( copy ) ) ) ) { if ( copyIsArray ) { copyIsArray = false; clone = src && Array.isArray( src ) ? src : []; } else { clone = src && jQuery.isPlainObject( src ) ? src : {}; } // Never move original objects, clone them target[ name ] = jQuery.extend( deep, clone, copy ); // Don't bring in undefined values } else if ( copy !== undefined ) { target[ name ] = copy; } } } } // Return the modified object return target; }; jQuery.extend( { // Unique for each copy of jQuery on the page expando: "jQuery" + ( version + Math.random() ).replace( /\D/g, "" ), // Assume jQuery is ready without the ready module isReady: true, error: function( msg ) { throw new Error( msg ); }, noop: function() {}, isPlainObject: function( obj ) { var proto, Ctor; // Detect obvious negatives // Use toString instead of jQuery.type to catch host objects if ( !obj || toString.call( obj ) !== "[object Object]" ) { return false; } proto = getProto( obj ); // Objects with no prototype (e.g., `Object.create( null )`) are plain if ( !proto ) { return true; } // Objects with prototype are plain iff they were constructed by a global Object function Ctor = hasOwn.call( proto, "constructor" ) && proto.constructor; return typeof Ctor === "function" && fnToString.call( Ctor ) === ObjectFunctionString; }, isEmptyObject: function( obj ) { /* eslint-disable no-unused-vars */ // See https://github.com/eslint/eslint/issues/6125 var name; for ( name in obj ) { return false; } return true; }, // Evaluates a script in a global context globalEval: function( code ) { DOMEval( code ); }, each: function( obj, callback ) { var length, i = 0; if ( isArrayLike( obj ) ) { length = obj.length; for ( ; i < length; i++ ) { if ( callback.call( obj[ i ], i, obj[ i ] ) === false ) { break; } } } else { for ( i in obj ) { if ( callback.call( obj[ i ], i, obj[ i ] ) === false ) { break; } } } return obj; }, // Support: Android <=4.0 only trim: function( text ) { return text == null ? "" : ( text + "" ).replace( rtrim, "" ); }, // results is for internal usage only makeArray: function( arr, results ) { var ret = results || []; if ( arr != null ) { if ( isArrayLike( Object( arr ) ) ) { jQuery.merge( ret, typeof arr === "string" ? [ arr ] : arr ); } else { push.call( ret, arr ); } } return ret; }, inArray: function( elem, arr, i ) { return arr == null ? 
-1 : indexOf.call( arr, elem, i ); }, // Support: Android <=4.0 only, PhantomJS 1 only // push.apply(_, arraylike) throws on ancient WebKit merge: function( first, second ) { var len = +second.length, j = 0, i = first.length; for ( ; j < len; j++ ) { first[ i++ ] = second[ j ]; } first.length = i; return first; }, grep: function( elems, callback, invert ) { var callbackInverse, matches = [], i = 0, length = elems.length, callbackExpect = !invert; // Go through the array, only saving the items // that pass the validator function for ( ; i < length; i++ ) { callbackInverse = !callback( elems[ i ], i ); if ( callbackInverse !== callbackExpect ) { matches.push( elems[ i ] ); } } return matches; }, // arg is for internal usage only map: function( elems, callback, arg ) { var length, value, i = 0, ret = []; // Go through the array, translating each of the items to their new values if ( isArrayLike( elems ) ) { length = elems.length; for ( ; i < length; i++ ) { value = callback( elems[ i ], i, arg ); if ( value != null ) { ret.push( value ); } } // Go through every key on the object, } else { for ( i in elems ) { value = callback( elems[ i ], i, arg ); if ( value != null ) { ret.push( value ); } } } // Flatten any nested arrays return concat.apply( [], ret ); }, // A global GUID counter for objects guid: 1, // jQuery.support is not used in Core but other projects attach their // properties to it so it needs to exist. support: support } ); if ( typeof Symbol === "function" ) { jQuery.fn[ Symbol.iterator ] = arr[ Symbol.iterator ]; } // Populate the class2type map jQuery.each( "Boolean Number String Function Array Date RegExp Object Error Symbol".split( " " ), function( i, name ) { class2type[ "[object " + name + "]" ] = name.toLowerCase(); } ); function isArrayLike( obj ) { // Support: real iOS 8.2 only (not reproducible in simulator) // `in` check used to prevent JIT error (gh-2145) // hasOwn isn't used here due to false negatives // regarding Nodelist length in IE var length = !!obj && "length" in obj && obj.length, type = toType( obj ); if ( isFunction( obj ) || isWindow( obj ) ) { return false; } return type === "array" || length === 0 || typeof length === "number" && length > 0 && ( length - 1 ) in obj; } var Sizzle = /*! 
* Sizzle CSS Selector Engine v2.3.3 * https://sizzlejs.com/ * * Copyright jQuery Foundation and other contributors * Released under the MIT license * http://jquery.org/license * * Date: 2016-08-08 */ (function( window ) { var i, support, Expr, getText, isXML, tokenize, compile, select, outermostContext, sortInput, hasDuplicate, // Local document vars setDocument, document, docElem, documentIsHTML, rbuggyQSA, rbuggyMatches, matches, contains, // Instance-specific data expando = "sizzle" + 1 * new Date(), preferredDoc = window.document, dirruns = 0, done = 0, classCache = createCache(), tokenCache = createCache(), compilerCache = createCache(), sortOrder = function( a, b ) { if ( a === b ) { hasDuplicate = true; } return 0; }, // Instance methods hasOwn = ({}).hasOwnProperty, arr = [], pop = arr.pop, push_native = arr.push, push = arr.push, slice = arr.slice, // Use a stripped-down indexOf as it's faster than native // https://jsperf.com/thor-indexof-vs-for/5 indexOf = function( list, elem ) { var i = 0, len = list.length; for ( ; i < len; i++ ) { if ( list[i] === elem ) { return i; } } return -1; }, booleans = "checked|selected|async|autofocus|autoplay|controls|defer|disabled|hidden|ismap|loop|multiple|open|readonly|required|scoped", // Regular expressions // http://www.w3.org/TR/css3-selectors/#whitespace whitespace = "[\\x20\\t\\r\\n\\f]", // http://www.w3.org/TR/CSS21/syndata.html#value-def-identifier identifier = "(?:\\\\.|[\\w-]|[^\0-\\xa0])+", // Attribute selectors: http://www.w3.org/TR/selectors/#attribute-selectors attributes = "\\[" + whitespace + "*(" + identifier + ")(?:" + whitespace + // Operator (capture 2) "*([*^$|!~]?=)" + whitespace + // "Attribute values must be CSS identifiers [capture 5] or strings [capture 3 or capture 4]" "*(?:'((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\"|(" + identifier + "))|)" + whitespace + "*\\]", pseudos = ":(" + identifier + ")(?:\\((" + // To reduce the number of selectors needing tokenize in the preFilter, prefer arguments: // 1. quoted (capture 3; capture 4 or capture 5) "('((?:\\\\.|[^\\\\'])*)'|\"((?:\\\\.|[^\\\\\"])*)\")|" + // 2. simple (capture 6) "((?:\\\\.|[^\\\\()[\\]]|" + attributes + ")*)|" + // 3. 
anything else (capture 2) ".*" + ")\\)|)", // Leading and non-escaped trailing whitespace, capturing some non-whitespace characters preceding the latter rwhitespace = new RegExp( whitespace + "+", "g" ), rtrim = new RegExp( "^" + whitespace + "+|((?:^|[^\\\\])(?:\\\\.)*)" + whitespace + "+$", "g" ), rcomma = new RegExp( "^" + whitespace + "*," + whitespace + "*" ), rcombinators = new RegExp( "^" + whitespace + "*([>+~]|" + whitespace + ")" + whitespace + "*" ), rattributeQuotes = new RegExp( "=" + whitespace + "*([^\\]'\"]*?)" + whitespace + "*\\]", "g" ), rpseudo = new RegExp( pseudos ), ridentifier = new RegExp( "^" + identifier + "$" ), matchExpr = { "ID": new RegExp( "^#(" + identifier + ")" ), "CLASS": new RegExp( "^\\.(" + identifier + ")" ), "TAG": new RegExp( "^(" + identifier + "|[*])" ), "ATTR": new RegExp( "^" + attributes ), "PSEUDO": new RegExp( "^" + pseudos ), "CHILD": new RegExp( "^:(only|first|last|nth|nth-last)-(child|of-type)(?:\\(" + whitespace + "*(even|odd|(([+-]|)(\\d*)n|)" + whitespace + "*(?:([+-]|)" + whitespace + "*(\\d+)|))" + whitespace + "*\\)|)", "i" ), "bool": new RegExp( "^(?:" + booleans + ")$", "i" ), // For use in libraries implementing .is() // We use this for POS matching in `select` "needsContext": new RegExp( "^" + whitespace + "*[>+~]|:(even|odd|eq|gt|lt|nth|first|last)(?:\\(" + whitespace + "*((?:-\\d)?\\d*)" + whitespace + "*\\)|)(?=[^-]|$)", "i" ) }, rinputs = /^(?:input|select|textarea|button)$/i, rheader = /^h\d$/i, rnative = /^[^{]+\{\s*\[native \w/, // Easily-parseable/retrievable ID or TAG or CLASS selectors rquickExpr = /^(?:#([\w-]+)|(\w+)|\.([\w-]+))$/, rsibling = /[+~]/, // CSS escapes // http://www.w3.org/TR/CSS21/syndata.html#escaped-characters runescape = new RegExp( "\\\\([\\da-f]{1,6}" + whitespace + "?|(" + whitespace + ")|.)", "ig" ), funescape = function( _, escaped, escapedWhitespace ) { var high = "0x" + escaped - 0x10000; // NaN means non-codepoint // Support: Firefox<24 // Workaround erroneous numeric interpretation of +"0x" return high !== high || escapedWhitespace ? escaped : high < 0 ? // BMP codepoint String.fromCharCode( high + 0x10000 ) : // Supplemental Plane codepoint (surrogate pair) String.fromCharCode( high >> 10 | 0xD800, high & 0x3FF | 0xDC00 ); }, // CSS string/identifier serialization // https://drafts.csswg.org/cssom/#common-serializing-idioms rcssescape = /([\0-\x1f\x7f]|^-?\d)|^-$|[^\0-\x1f\x7f-\uFFFF\w-]/g, fcssescape = function( ch, asCodePoint ) { if ( asCodePoint ) { // U+0000 NULL becomes U+FFFD REPLACEMENT CHARACTER if ( ch === "\0" ) { return "\uFFFD"; } // Control characters and (dependent upon position) numbers get escaped as code points return ch.slice( 0, -1 ) + "\\" + ch.charCodeAt( ch.length - 1 ).toString( 16 ) + " "; } // Other potentially-special ASCII characters get backslash-escaped return "\\" + ch; }, // Used for iframes // See setDocument() // Removing the function wrapper causes a "Permission Denied" // error in IE unloadHandler = function() { setDocument(); }, disabledAncestor = addCombinator( function( elem ) { return elem.disabled === true && ("form" in elem || "label" in elem); }, { dir: "parentNode", next: "legend" } ); // Optimize for push.apply( _, NodeList ) try { push.apply( (arr = slice.call( preferredDoc.childNodes )), preferredDoc.childNodes ); // Support: Android<4.0 // Detect silently failing push.apply arr[ preferredDoc.childNodes.length ].nodeType; } catch ( e ) { push = { apply: arr.length ? 
// Leverage slice if possible function( target, els ) { push_native.apply( target, slice.call(els) ); } : // Support: IE<9 // Otherwise append directly function( target, els ) { var j = target.length, i = 0; // Can't trust NodeList.length while ( (target[j++] = els[i++]) ) {} target.length = j - 1; } }; } function Sizzle( selector, context, results, seed ) { var m, i, elem, nid, match, groups, newSelector, newContext = context && context.ownerDocument, // nodeType defaults to 9, since context defaults to document nodeType = context ? context.nodeType : 9; results = results || []; // Return early from calls with invalid selector or context if ( typeof selector !== "string" || !selector || nodeType !== 1 && nodeType !== 9 && nodeType !== 11 ) { return results; } // Try to shortcut find operations (as opposed to filters) in HTML documents if ( !seed ) { if ( ( context ? context.ownerDocument || context : preferredDoc ) !== document ) { setDocument( context ); } context = context || document; if ( documentIsHTML ) { // If the selector is sufficiently simple, try using a "get*By*" DOM method // (excepting DocumentFragment context, where the methods don't exist) if ( nodeType !== 11 && (match = rquickExpr.exec( selector )) ) { // ID selector if ( (m = match[1]) ) { // Document context if ( nodeType === 9 ) { if ( (elem = context.getElementById( m )) ) { // Support: IE, Opera, Webkit // TODO: identify versions // getElementById can match elements by name instead of ID if ( elem.id === m ) { results.push( elem ); return results; } } else { return results; } // Element context } else { // Support: IE, Opera, Webkit // TODO: identify versions // getElementById can match elements by name instead of ID if ( newContext && (elem = newContext.getElementById( m )) && contains( context, elem ) && elem.id === m ) { results.push( elem ); return results; } } // Type selector } else if ( match[2] ) { push.apply( results, context.getElementsByTagName( selector ) ); return results; // Class selector } else if ( (m = match[3]) && support.getElementsByClassName && context.getElementsByClassName ) { push.apply( results, context.getElementsByClassName( m ) ); return results; } } // Take advantage of querySelectorAll if ( support.qsa && !compilerCache[ selector + " " ] && (!rbuggyQSA || !rbuggyQSA.test( selector )) ) { if ( nodeType !== 1 ) { newContext = context; newSelector = selector; // qSA looks outside Element context, which is not what we want // Thanks to Andrew Dupont for this workaround technique // Support: IE <=8 // Exclude object elements } else if ( context.nodeName.toLowerCase() !== "object" ) { // Capture the context ID, setting it first if necessary if ( (nid = context.getAttribute( "id" )) ) { nid = nid.replace( rcssescape, fcssescape ); } else { context.setAttribute( "id", (nid = expando) ); } // Prefix every selector in the list groups = tokenize( selector ); i = groups.length; while ( i-- ) { groups[i] = "#" + nid + " " + toSelector( groups[i] ); } newSelector = groups.join( "," ); // Expand context for sibling selectors newContext = rsibling.test( selector ) && testContext( context.parentNode ) || context; } if ( newSelector ) { try { push.apply( results, newContext.querySelectorAll( newSelector ) ); return results; } catch ( qsaError ) { } finally { if ( nid === expando ) { context.removeAttribute( "id" ); } } } } } } // All others return select( selector.replace( rtrim, "$1" ), context, results, seed ); } /** * Create key-value caches of limited size * @returns {function(string, object)} 
Returns the Object data after storing it on itself with * property name the (space-suffixed) string and (if the cache is larger than Expr.cacheLength) * deleting the oldest entry */ function createCache() { var keys = []; function cache( key, value ) { // Use (key + " ") to avoid collision with native prototype properties (see Issue #157) if ( keys.push( key + " " ) > Expr.cacheLength ) { // Only keep the most recent entries delete cache[ keys.shift() ]; } return (cache[ key + " " ] = value); } return cache; } /** * Mark a function for special use by Sizzle * @param {Function} fn The function to mark */ function markFunction( fn ) { fn[ expando ] = true; return fn; } /** * Support testing using an element * @param {Function} fn Passed the created element and returns a boolean result */ function assert( fn ) { var el = document.createElement("fieldset"); try { return !!fn( el ); } catch (e) { return false; } finally { // Remove from its parent by default if ( el.parentNode ) { el.parentNode.removeChild( el ); } // release memory in IE el = null; } } /** * Adds the same handler for all of the specified attrs * @param {String} attrs Pipe-separated list of attributes * @param {Function} handler The method that will be applied */ function addHandle( attrs, handler ) { var arr = attrs.split("|"), i = arr.length; while ( i-- ) { Expr.attrHandle[ arr[i] ] = handler; } } /** * Checks document order of two siblings * @param {Element} a * @param {Element} b * @returns {Number} Returns less than 0 if a precedes b, greater than 0 if a follows b */ function siblingCheck( a, b ) { var cur = b && a, diff = cur && a.nodeType === 1 && b.nodeType === 1 && a.sourceIndex - b.sourceIndex; // Use IE sourceIndex if available on both nodes if ( diff ) { return diff; } // Check if b follows a if ( cur ) { while ( (cur = cur.nextSibling) ) { if ( cur === b ) { return -1; } } } return a ? 1 : -1; } /** * Returns a function to use in pseudos for input types * @param {String} type */ function createInputPseudo( type ) { return function( elem ) { var name = elem.nodeName.toLowerCase(); return name === "input" && elem.type === type; }; } /** * Returns a function to use in pseudos for buttons * @param {String} type */ function createButtonPseudo( type ) { return function( elem ) { var name = elem.nodeName.toLowerCase(); return (name === "input" || name === "button") && elem.type === type; }; } /** * Returns a function to use in pseudos for :enabled/:disabled * @param {Boolean} disabled true for :disabled; false for :enabled */ function createDisabledPseudo( disabled ) { // Known :disabled false positives: fieldset[disabled] > legend:nth-of-type(n+2) :can-disable return function( elem ) { // Only certain elements can match :enabled or :disabled // https://html.spec.whatwg.org/multipage/scripting.html#selector-enabled // https://html.spec.whatwg.org/multipage/scripting.html#selector-disabled if ( "form" in elem ) { // Check for inherited disabledness on relevant non-disabled elements: // * listed form-associated elements in a disabled fieldset // https://html.spec.whatwg.org/multipage/forms.html#category-listed // https://html.spec.whatwg.org/multipage/forms.html#concept-fe-disabled // * option elements in a disabled optgroup // https://html.spec.whatwg.org/multipage/forms.html#concept-option-disabled // All such elements have a "form" property. 
if ( elem.parentNode && elem.disabled === false ) { // Option elements defer to a parent optgroup if present if ( "label" in elem ) { if ( "label" in elem.parentNode ) { return elem.parentNode.disabled === disabled; } else { return elem.disabled === disabled; } } // Support: IE 6 - 11 // Use the isDisabled shortcut property to check for disabled fieldset ancestors return elem.isDisabled === disabled || // Where there is no isDisabled, check manually /* jshint -W018 */ elem.isDisabled !== !disabled && disabledAncestor( elem ) === disabled; } return elem.disabled === disabled; // Try to winnow out elements that can't be disabled before trusting the disabled property. // Some victims get caught in our net (label, legend, menu, track), but it shouldn't // even exist on them, let alone have a boolean value. } else if ( "label" in elem ) { return elem.disabled === disabled; } // Remaining elements are neither :enabled nor :disabled return false; }; } /** * Returns a function to use in pseudos for positionals * @param {Function} fn */ function createPositionalPseudo( fn ) { return markFunction(function( argument ) { argument = +argument; return markFunction(function( seed, matches ) { var j, matchIndexes = fn( [], seed.length, argument ), i = matchIndexes.length; // Match elements found at the specified indexes while ( i-- ) { if ( seed[ (j = matchIndexes[i]) ] ) { seed[j] = !(matches[j] = seed[j]); } } }); }); } /** * Checks a node for validity as a Sizzle context * @param {Element|Object=} context * @returns {Element|Object|Boolean} The input node if acceptable, otherwise a falsy value */ function testContext( context ) { return context && typeof context.getElementsByTagName !== "undefined" && context; } // Expose support vars for convenience support = Sizzle.support = {}; /** * Detects XML nodes * @param {Element|Object} elem An element or a document * @returns {Boolean} True iff elem is a non-HTML XML node */ isXML = Sizzle.isXML = function( elem ) { // documentElement is verified for cases where it doesn't yet exist // (such as loading iframes in IE - #4833) var documentElement = elem && (elem.ownerDocument || elem).documentElement; return documentElement ? documentElement.nodeName !== "HTML" : false; }; /** * Sets document-related variables once based on the current document * @param {Element|Object} [doc] An element or document object to use to set the document * @returns {Object} Returns the current document */ setDocument = Sizzle.setDocument = function( node ) { var hasCompare, subWindow, doc = node ? 
node.ownerDocument || node : preferredDoc; // Return early if doc is invalid or already selected if ( doc === document || doc.nodeType !== 9 || !doc.documentElement ) { return document; } // Update global variables document = doc; docElem = document.documentElement; documentIsHTML = !isXML( document ); // Support: IE 9-11, Edge // Accessing iframe documents after unload throws "permission denied" errors (jQuery #13936) if ( preferredDoc !== document && (subWindow = document.defaultView) && subWindow.top !== subWindow ) { // Support: IE 11, Edge if ( subWindow.addEventListener ) { subWindow.addEventListener( "unload", unloadHandler, false ); // Support: IE 9 - 10 only } else if ( subWindow.attachEvent ) { subWindow.attachEvent( "onunload", unloadHandler ); } } /* Attributes ---------------------------------------------------------------------- */ // Support: IE<8 // Verify that getAttribute really returns attributes and not properties // (excepting IE8 booleans) support.attributes = assert(function( el ) { el.className = "i"; return !el.getAttribute("className"); }); /* getElement(s)By* ---------------------------------------------------------------------- */ // Check if getElementsByTagName("*") returns only elements support.getElementsByTagName = assert(function( el ) { el.appendChild( document.createComment("") ); return !el.getElementsByTagName("*").length; }); // Support: IE<9 support.getElementsByClassName = rnative.test( document.getElementsByClassName ); // Support: IE<10 // Check if getElementById returns elements by name // The broken getElementById methods don't pick up programmatically-set names, // so use a roundabout getElementsByName test support.getById = assert(function( el ) { docElem.appendChild( el ).id = expando; return !document.getElementsByName || !document.getElementsByName( expando ).length; }); // ID filter and find if ( support.getById ) { Expr.filter["ID"] = function( id ) { var attrId = id.replace( runescape, funescape ); return function( elem ) { return elem.getAttribute("id") === attrId; }; }; Expr.find["ID"] = function( id, context ) { if ( typeof context.getElementById !== "undefined" && documentIsHTML ) { var elem = context.getElementById( id ); return elem ? [ elem ] : []; } }; } else { Expr.filter["ID"] = function( id ) { var attrId = id.replace( runescape, funescape ); return function( elem ) { var node = typeof elem.getAttributeNode !== "undefined" && elem.getAttributeNode("id"); return node && node.value === attrId; }; }; // Support: IE 6 - 7 only // getElementById is not reliable as a find shortcut Expr.find["ID"] = function( id, context ) { if ( typeof context.getElementById !== "undefined" && documentIsHTML ) { var node, i, elems, elem = context.getElementById( id ); if ( elem ) { // Verify the id attribute node = elem.getAttributeNode("id"); if ( node && node.value === id ) { return [ elem ]; } // Fall back on getElementsByName elems = context.getElementsByName( id ); i = 0; while ( (elem = elems[i++]) ) { node = elem.getAttributeNode("id"); if ( node && node.value === id ) { return [ elem ]; } } } return []; } }; } // Tag Expr.find["TAG"] = support.getElementsByTagName ? 
function( tag, context ) { if ( typeof context.getElementsByTagName !== "undefined" ) { return context.getElementsByTagName( tag ); // DocumentFragment nodes don't have gEBTN } else if ( support.qsa ) { return context.querySelectorAll( tag ); } } : function( tag, context ) { var elem, tmp = [], i = 0, // By happy coincidence, a (broken) gEBTN appears on DocumentFragment nodes too results = context.getElementsByTagName( tag ); // Filter out possible comments if ( tag === "*" ) { while ( (elem = results[i++]) ) { if ( elem.nodeType === 1 ) { tmp.push( elem ); } } return tmp; } return results; }; // Class Expr.find["CLASS"] = support.getElementsByClassName && function( className, context ) { if ( typeof context.getElementsByClassName !== "undefined" && documentIsHTML ) { return context.getElementsByClassName( className ); } }; /* QSA/matchesSelector ---------------------------------------------------------------------- */ // QSA and matchesSelector support // matchesSelector(:active) reports false when true (IE9/Opera 11.5) rbuggyMatches = []; // qSa(:focus) reports false when true (Chrome 21) // We allow this because of a bug in IE8/9 that throws an error // whenever `document.activeElement` is accessed on an iframe // So, we allow :focus to pass through QSA all the time to avoid the IE error // See https://bugs.jquery.com/ticket/13378 rbuggyQSA = []; if ( (support.qsa = rnative.test( document.querySelectorAll )) ) { // Build QSA regex // Regex strategy adopted from Diego Perini assert(function( el ) { // Select is set to empty string on purpose // This is to test IE's treatment of not explicitly // setting a boolean content attribute, // since its presence should be enough // https://bugs.jquery.com/ticket/12359 docElem.appendChild( el ).innerHTML = "" + ""; // Support: IE8, Opera 11-12.16 // Nothing should be selected when empty strings follow ^= or $= or *= // The test attribute must be unknown in Opera but "safe" for WinRT // https://msdn.microsoft.com/en-us/library/ie/hh465388.aspx#attribute_section if ( el.querySelectorAll("[msallowcapture^='']").length ) { rbuggyQSA.push( "[*^$]=" + whitespace + "*(?:''|\"\")" ); } // Support: IE8 // Boolean attributes and "value" are not treated correctly if ( !el.querySelectorAll("[selected]").length ) { rbuggyQSA.push( "\\[" + whitespace + "*(?:value|" + booleans + ")" ); } // Support: Chrome<29, Android<4.4, Safari<7.0+, iOS<7.0+, PhantomJS<1.9.8+ if ( !el.querySelectorAll( "[id~=" + expando + "-]" ).length ) { rbuggyQSA.push("~="); } // Webkit/Opera - :checked should return selected option elements // http://www.w3.org/TR/2011/REC-css3-selectors-20110929/#checked // IE8 throws error here and will not see later tests if ( !el.querySelectorAll(":checked").length ) { rbuggyQSA.push(":checked"); } // Support: Safari 8+, iOS 8+ // https://bugs.webkit.org/show_bug.cgi?id=136851 // In-page `selector#id sibling-combinator selector` fails if ( !el.querySelectorAll( "a#" + expando + "+*" ).length ) { rbuggyQSA.push(".#.+[+~]"); } }); assert(function( el ) { el.innerHTML = "" + ""; // Support: Windows 8 Native Apps // The type and name attributes are restricted during .innerHTML assignment var input = document.createElement("input"); input.setAttribute( "type", "hidden" ); el.appendChild( input ).setAttribute( "name", "D" ); // Support: IE8 // Enforce case-sensitivity of name attribute if ( el.querySelectorAll("[name=d]").length ) { rbuggyQSA.push( "name" + whitespace + "*[*^$|!~]?=" ); } // FF 3.5 - :enabled/:disabled and hidden elements (hidden elements 
are still enabled) // IE8 throws error here and will not see later tests if ( el.querySelectorAll(":enabled").length !== 2 ) { rbuggyQSA.push( ":enabled", ":disabled" ); } // Support: IE9-11+ // IE's :disabled selector does not pick up the children of disabled fieldsets docElem.appendChild( el ).disabled = true; if ( el.querySelectorAll(":disabled").length !== 2 ) { rbuggyQSA.push( ":enabled", ":disabled" ); } // Opera 10-11 does not throw on post-comma invalid pseudos el.querySelectorAll("*,:x"); rbuggyQSA.push(",.*:"); }); } if ( (support.matchesSelector = rnative.test( (matches = docElem.matches || docElem.webkitMatchesSelector || docElem.mozMatchesSelector || docElem.oMatchesSelector || docElem.msMatchesSelector) )) ) { assert(function( el ) { // Check to see if it's possible to do matchesSelector // on a disconnected node (IE 9) support.disconnectedMatch = matches.call( el, "*" ); // This should fail with an exception // Gecko does not error, returns false instead matches.call( el, "[s!='']:x" ); rbuggyMatches.push( "!=", pseudos ); }); } rbuggyQSA = rbuggyQSA.length && new RegExp( rbuggyQSA.join("|") ); rbuggyMatches = rbuggyMatches.length && new RegExp( rbuggyMatches.join("|") ); /* Contains ---------------------------------------------------------------------- */ hasCompare = rnative.test( docElem.compareDocumentPosition ); // Element contains another // Purposefully self-exclusive // As in, an element does not contain itself contains = hasCompare || rnative.test( docElem.contains ) ? function( a, b ) { var adown = a.nodeType === 9 ? a.documentElement : a, bup = b && b.parentNode; return a === bup || !!( bup && bup.nodeType === 1 && ( adown.contains ? adown.contains( bup ) : a.compareDocumentPosition && a.compareDocumentPosition( bup ) & 16 )); } : function( a, b ) { if ( b ) { while ( (b = b.parentNode) ) { if ( b === a ) { return true; } } } return false; }; /* Sorting ---------------------------------------------------------------------- */ // Document order sorting sortOrder = hasCompare ? function( a, b ) { // Flag for duplicate removal if ( a === b ) { hasDuplicate = true; return 0; } // Sort on method existence if only one input has compareDocumentPosition var compare = !a.compareDocumentPosition - !b.compareDocumentPosition; if ( compare ) { return compare; } // Calculate position if both inputs belong to the same document compare = ( a.ownerDocument || a ) === ( b.ownerDocument || b ) ? a.compareDocumentPosition( b ) : // Otherwise we know they are disconnected 1; // Disconnected nodes if ( compare & 1 || (!support.sortDetached && b.compareDocumentPosition( a ) === compare) ) { // Choose the first element that is related to our preferred document if ( a === document || a.ownerDocument === preferredDoc && contains(preferredDoc, a) ) { return -1; } if ( b === document || b.ownerDocument === preferredDoc && contains(preferredDoc, b) ) { return 1; } // Maintain original order return sortInput ? ( indexOf( sortInput, a ) - indexOf( sortInput, b ) ) : 0; } return compare & 4 ? -1 : 1; } : function( a, b ) { // Exit early if the nodes are identical if ( a === b ) { hasDuplicate = true; return 0; } var cur, i = 0, aup = a.parentNode, bup = b.parentNode, ap = [ a ], bp = [ b ]; // Parentless nodes are either documents or disconnected if ( !aup || !bup ) { return a === document ? -1 : b === document ? 1 : aup ? -1 : bup ? 1 : sortInput ? 
( indexOf( sortInput, a ) - indexOf( sortInput, b ) ) : 0; // If the nodes are siblings, we can do a quick check } else if ( aup === bup ) { return siblingCheck( a, b ); } // Otherwise we need full lists of their ancestors for comparison cur = a; while ( (cur = cur.parentNode) ) { ap.unshift( cur ); } cur = b; while ( (cur = cur.parentNode) ) { bp.unshift( cur ); } // Walk down the tree looking for a discrepancy while ( ap[i] === bp[i] ) { i++; } return i ? // Do a sibling check if the nodes have a common ancestor siblingCheck( ap[i], bp[i] ) : // Otherwise nodes in our document sort first ap[i] === preferredDoc ? -1 : bp[i] === preferredDoc ? 1 : 0; }; return document; }; Sizzle.matches = function( expr, elements ) { return Sizzle( expr, null, null, elements ); }; Sizzle.matchesSelector = function( elem, expr ) { // Set document vars if needed if ( ( elem.ownerDocument || elem ) !== document ) { setDocument( elem ); } // Make sure that attribute selectors are quoted expr = expr.replace( rattributeQuotes, "='$1']" ); if ( support.matchesSelector && documentIsHTML && !compilerCache[ expr + " " ] && ( !rbuggyMatches || !rbuggyMatches.test( expr ) ) && ( !rbuggyQSA || !rbuggyQSA.test( expr ) ) ) { try { var ret = matches.call( elem, expr ); // IE 9's matchesSelector returns false on disconnected nodes if ( ret || support.disconnectedMatch || // As well, disconnected nodes are said to be in a document // fragment in IE 9 elem.document && elem.document.nodeType !== 11 ) { return ret; } } catch (e) {} } return Sizzle( expr, document, null, [ elem ] ).length > 0; }; Sizzle.contains = function( context, elem ) { // Set document vars if needed if ( ( context.ownerDocument || context ) !== document ) { setDocument( context ); } return contains( context, elem ); }; Sizzle.attr = function( elem, name ) { // Set document vars if needed if ( ( elem.ownerDocument || elem ) !== document ) { setDocument( elem ); } var fn = Expr.attrHandle[ name.toLowerCase() ], // Don't get fooled by Object.prototype properties (jQuery #13807) val = fn && hasOwn.call( Expr.attrHandle, name.toLowerCase() ) ? fn( elem, name, !documentIsHTML ) : undefined; return val !== undefined ? val : support.attributes || !documentIsHTML ? elem.getAttribute( name ) : (val = elem.getAttributeNode(name)) && val.specified ? 
val.value : null; }; Sizzle.escape = function( sel ) { return (sel + "").replace( rcssescape, fcssescape ); }; Sizzle.error = function( msg ) { throw new Error( "Syntax error, unrecognized expression: " + msg ); }; /** * Document sorting and removing duplicates * @param {ArrayLike} results */ Sizzle.uniqueSort = function( results ) { var elem, duplicates = [], j = 0, i = 0; // Unless we *know* we can detect duplicates, assume their presence hasDuplicate = !support.detectDuplicates; sortInput = !support.sortStable && results.slice( 0 ); results.sort( sortOrder ); if ( hasDuplicate ) { while ( (elem = results[i++]) ) { if ( elem === results[ i ] ) { j = duplicates.push( i ); } } while ( j-- ) { results.splice( duplicates[ j ], 1 ); } } // Clear input after sorting to release objects // See https://github.com/jquery/sizzle/pull/225 sortInput = null; return results; }; /** * Utility function for retrieving the text value of an array of DOM nodes * @param {Array|Element} elem */ getText = Sizzle.getText = function( elem ) { var node, ret = "", i = 0, nodeType = elem.nodeType; if ( !nodeType ) { // If no nodeType, this is expected to be an array while ( (node = elem[i++]) ) { // Do not traverse comment nodes ret += getText( node ); } } else if ( nodeType === 1 || nodeType === 9 || nodeType === 11 ) { // Use textContent for elements // innerText usage removed for consistency of new lines (jQuery #11153) if ( typeof elem.textContent === "string" ) { return elem.textContent; } else { // Traverse its children for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { ret += getText( elem ); } } } else if ( nodeType === 3 || nodeType === 4 ) { return elem.nodeValue; } // Do not include comment or processing instruction nodes return ret; }; Expr = Sizzle.selectors = { // Can be adjusted by the user cacheLength: 50, createPseudo: markFunction, match: matchExpr, attrHandle: {}, find: {}, relative: { ">": { dir: "parentNode", first: true }, " ": { dir: "parentNode" }, "+": { dir: "previousSibling", first: true }, "~": { dir: "previousSibling" } }, preFilter: { "ATTR": function( match ) { match[1] = match[1].replace( runescape, funescape ); // Move the given value to match[3] whether quoted or unquoted match[3] = ( match[3] || match[4] || match[5] || "" ).replace( runescape, funescape ); if ( match[2] === "~=" ) { match[3] = " " + match[3] + " "; } return match.slice( 0, 4 ); }, "CHILD": function( match ) { /* matches from matchExpr["CHILD"] 1 type (only|nth|...) 2 what (child|of-type) 3 argument (even|odd|\d*|\d*n([+-]\d+)?|...) 4 xn-component of xn+y argument ([+-]?\d*n|) 5 sign of xn-component 6 x of xn-component 7 sign of y-component 8 y of y-component */ match[1] = match[1].toLowerCase(); if ( match[1].slice( 0, 3 ) === "nth" ) { // nth-* requires argument if ( !match[3] ) { Sizzle.error( match[0] ); } // numeric x and y parameters for Expr.filter.CHILD // remember that false/true cast respectively to 0/1 match[4] = +( match[4] ? 
match[5] + (match[6] || 1) : 2 * ( match[3] === "even" || match[3] === "odd" ) ); match[5] = +( ( match[7] + match[8] ) || match[3] === "odd" ); // other types prohibit arguments } else if ( match[3] ) { Sizzle.error( match[0] ); } return match; }, "PSEUDO": function( match ) { var excess, unquoted = !match[6] && match[2]; if ( matchExpr["CHILD"].test( match[0] ) ) { return null; } // Accept quoted arguments as-is if ( match[3] ) { match[2] = match[4] || match[5] || ""; // Strip excess characters from unquoted arguments } else if ( unquoted && rpseudo.test( unquoted ) && // Get excess from tokenize (recursively) (excess = tokenize( unquoted, true )) && // advance to the next closing parenthesis (excess = unquoted.indexOf( ")", unquoted.length - excess ) - unquoted.length) ) { // excess is a negative index match[0] = match[0].slice( 0, excess ); match[2] = unquoted.slice( 0, excess ); } // Return only captures needed by the pseudo filter method (type and argument) return match.slice( 0, 3 ); } }, filter: { "TAG": function( nodeNameSelector ) { var nodeName = nodeNameSelector.replace( runescape, funescape ).toLowerCase(); return nodeNameSelector === "*" ? function() { return true; } : function( elem ) { return elem.nodeName && elem.nodeName.toLowerCase() === nodeName; }; }, "CLASS": function( className ) { var pattern = classCache[ className + " " ]; return pattern || (pattern = new RegExp( "(^|" + whitespace + ")" + className + "(" + whitespace + "|$)" )) && classCache( className, function( elem ) { return pattern.test( typeof elem.className === "string" && elem.className || typeof elem.getAttribute !== "undefined" && elem.getAttribute("class") || "" ); }); }, "ATTR": function( name, operator, check ) { return function( elem ) { var result = Sizzle.attr( elem, name ); if ( result == null ) { return operator === "!="; } if ( !operator ) { return true; } result += ""; return operator === "=" ? result === check : operator === "!=" ? result !== check : operator === "^=" ? check && result.indexOf( check ) === 0 : operator === "*=" ? check && result.indexOf( check ) > -1 : operator === "$=" ? check && result.slice( -check.length ) === check : operator === "~=" ? ( " " + result.replace( rwhitespace, " " ) + " " ).indexOf( check ) > -1 : operator === "|=" ? result === check || result.slice( 0, check.length + 1 ) === check + "-" : false; }; }, "CHILD": function( type, what, argument, first, last ) { var simple = type.slice( 0, 3 ) !== "nth", forward = type.slice( -4 ) !== "last", ofType = what === "of-type"; return first === 1 && last === 0 ? // Shortcut for :nth-*(n) function( elem ) { return !!elem.parentNode; } : function( elem, context, xml ) { var cache, uniqueCache, outerCache, node, nodeIndex, start, dir = simple !== forward ? "nextSibling" : "previousSibling", parent = elem.parentNode, name = ofType && elem.nodeName.toLowerCase(), useCache = !xml && !ofType, diff = false; if ( parent ) { // :(first|last|only)-(child|of-type) if ( simple ) { while ( dir ) { node = elem; while ( (node = node[ dir ]) ) { if ( ofType ? node.nodeName.toLowerCase() === name : node.nodeType === 1 ) { return false; } } // Reverse direction for :only-* (if we haven't yet done so) start = dir = type === "only" && !start && "nextSibling"; } return true; } start = [ forward ? parent.firstChild : parent.lastChild ]; // non-xml :nth-child(...) 
stores cache data on `parent` if ( forward && useCache ) { // Seek `elem` from a previously-cached index // ...in a gzip-friendly way node = parent; outerCache = node[ expando ] || (node[ expando ] = {}); // Support: IE <9 only // Defend against cloned attroperties (jQuery gh-1709) uniqueCache = outerCache[ node.uniqueID ] || (outerCache[ node.uniqueID ] = {}); cache = uniqueCache[ type ] || []; nodeIndex = cache[ 0 ] === dirruns && cache[ 1 ]; diff = nodeIndex && cache[ 2 ]; node = nodeIndex && parent.childNodes[ nodeIndex ]; while ( (node = ++nodeIndex && node && node[ dir ] || // Fallback to seeking `elem` from the start (diff = nodeIndex = 0) || start.pop()) ) { // When found, cache indexes on `parent` and break if ( node.nodeType === 1 && ++diff && node === elem ) { uniqueCache[ type ] = [ dirruns, nodeIndex, diff ]; break; } } } else { // Use previously-cached element index if available if ( useCache ) { // ...in a gzip-friendly way node = elem; outerCache = node[ expando ] || (node[ expando ] = {}); // Support: IE <9 only // Defend against cloned attroperties (jQuery gh-1709) uniqueCache = outerCache[ node.uniqueID ] || (outerCache[ node.uniqueID ] = {}); cache = uniqueCache[ type ] || []; nodeIndex = cache[ 0 ] === dirruns && cache[ 1 ]; diff = nodeIndex; } // xml :nth-child(...) // or :nth-last-child(...) or :nth(-last)?-of-type(...) if ( diff === false ) { // Use the same loop as above to seek `elem` from the start while ( (node = ++nodeIndex && node && node[ dir ] || (diff = nodeIndex = 0) || start.pop()) ) { if ( ( ofType ? node.nodeName.toLowerCase() === name : node.nodeType === 1 ) && ++diff ) { // Cache the index of each encountered element if ( useCache ) { outerCache = node[ expando ] || (node[ expando ] = {}); // Support: IE <9 only // Defend against cloned attroperties (jQuery gh-1709) uniqueCache = outerCache[ node.uniqueID ] || (outerCache[ node.uniqueID ] = {}); uniqueCache[ type ] = [ dirruns, diff ]; } if ( node === elem ) { break; } } } } } // Incorporate the offset, then check against cycle size diff -= last; return diff === first || ( diff % first === 0 && diff / first >= 0 ); } }; }, "PSEUDO": function( pseudo, argument ) { // pseudo-class names are case-insensitive // http://www.w3.org/TR/selectors/#pseudo-classes // Prioritize by case sensitivity in case custom pseudos are added with uppercase letters // Remember that setFilters inherits from pseudos var args, fn = Expr.pseudos[ pseudo ] || Expr.setFilters[ pseudo.toLowerCase() ] || Sizzle.error( "unsupported pseudo: " + pseudo ); // The user may use createPseudo to indicate that // arguments are needed to create the filter function // just as Sizzle does if ( fn[ expando ] ) { return fn( argument ); } // But maintain support for old signatures if ( fn.length > 1 ) { args = [ pseudo, pseudo, "", argument ]; return Expr.setFilters.hasOwnProperty( pseudo.toLowerCase() ) ? markFunction(function( seed, matches ) { var idx, matched = fn( seed, argument ), i = matched.length; while ( i-- ) { idx = indexOf( seed, matched[i] ); seed[ idx ] = !( matches[ idx ] = matched[i] ); } }) : function( elem ) { return fn( elem, 0, args ); }; } return fn; } }, pseudos: { // Potentially complex pseudos "not": markFunction(function( selector ) { // Trim the selector passed to compile // to avoid treating leading and trailing // spaces as combinators var input = [], results = [], matcher = compile( selector.replace( rtrim, "$1" ) ); return matcher[ expando ] ? 
markFunction(function( seed, matches, context, xml ) { var elem, unmatched = matcher( seed, null, xml, [] ), i = seed.length; // Match elements unmatched by `matcher` while ( i-- ) { if ( (elem = unmatched[i]) ) { seed[i] = !(matches[i] = elem); } } }) : function( elem, context, xml ) { input[0] = elem; matcher( input, null, xml, results ); // Don't keep the element (issue #299) input[0] = null; return !results.pop(); }; }), "has": markFunction(function( selector ) { return function( elem ) { return Sizzle( selector, elem ).length > 0; }; }), "contains": markFunction(function( text ) { text = text.replace( runescape, funescape ); return function( elem ) { return ( elem.textContent || elem.innerText || getText( elem ) ).indexOf( text ) > -1; }; }), // "Whether an element is represented by a :lang() selector // is based solely on the element's language value // being equal to the identifier C, // or beginning with the identifier C immediately followed by "-". // The matching of C against the element's language value is performed case-insensitively. // The identifier C does not have to be a valid language name." // http://www.w3.org/TR/selectors/#lang-pseudo "lang": markFunction( function( lang ) { // lang value must be a valid identifier if ( !ridentifier.test(lang || "") ) { Sizzle.error( "unsupported lang: " + lang ); } lang = lang.replace( runescape, funescape ).toLowerCase(); return function( elem ) { var elemLang; do { if ( (elemLang = documentIsHTML ? elem.lang : elem.getAttribute("xml:lang") || elem.getAttribute("lang")) ) { elemLang = elemLang.toLowerCase(); return elemLang === lang || elemLang.indexOf( lang + "-" ) === 0; } } while ( (elem = elem.parentNode) && elem.nodeType === 1 ); return false; }; }), // Miscellaneous "target": function( elem ) { var hash = window.location && window.location.hash; return hash && hash.slice( 1 ) === elem.id; }, "root": function( elem ) { return elem === docElem; }, "focus": function( elem ) { return elem === document.activeElement && (!document.hasFocus || document.hasFocus()) && !!(elem.type || elem.href || ~elem.tabIndex); }, // Boolean properties "enabled": createDisabledPseudo( false ), "disabled": createDisabledPseudo( true ), "checked": function( elem ) { // In CSS3, :checked should return both checked and selected elements // http://www.w3.org/TR/2011/REC-css3-selectors-20110929/#checked var nodeName = elem.nodeName.toLowerCase(); return (nodeName === "input" && !!elem.checked) || (nodeName === "option" && !!elem.selected); }, "selected": function( elem ) { // Accessing this property makes selected-by-default // options in Safari work properly if ( elem.parentNode ) { elem.parentNode.selectedIndex; } return elem.selected === true; }, // Contents "empty": function( elem ) { // http://www.w3.org/TR/selectors/#empty-pseudo // :empty is negated by element (1) or content nodes (text: 3; cdata: 4; entity ref: 5), // but not by others (comment: 8; processing instruction: 7; etc.) 
// nodeType < 6 works because attributes (2) do not appear as children for ( elem = elem.firstChild; elem; elem = elem.nextSibling ) { if ( elem.nodeType < 6 ) { return false; } } return true; }, "parent": function( elem ) { return !Expr.pseudos["empty"]( elem ); }, // Element/input types "header": function( elem ) { return rheader.test( elem.nodeName ); }, "input": function( elem ) { return rinputs.test( elem.nodeName ); }, "button": function( elem ) { var name = elem.nodeName.toLowerCase(); return name === "input" && elem.type === "button" || name === "button"; }, "text": function( elem ) { var attr; return elem.nodeName.toLowerCase() === "input" && elem.type === "text" && // Support: IE<8 // New HTML5 attribute values (e.g., "search") appear with elem.type === "text" ( (attr = elem.getAttribute("type")) == null || attr.toLowerCase() === "text" ); }, // Position-in-collection "first": createPositionalPseudo(function() { return [ 0 ]; }), "last": createPositionalPseudo(function( matchIndexes, length ) { return [ length - 1 ]; }), "eq": createPositionalPseudo(function( matchIndexes, length, argument ) { return [ argument < 0 ? argument + length : argument ]; }), "even": createPositionalPseudo(function( matchIndexes, length ) { var i = 0; for ( ; i < length; i += 2 ) { matchIndexes.push( i ); } return matchIndexes; }), "odd": createPositionalPseudo(function( matchIndexes, length ) { var i = 1; for ( ; i < length; i += 2 ) { matchIndexes.push( i ); } return matchIndexes; }), "lt": createPositionalPseudo(function( matchIndexes, length, argument ) { var i = argument < 0 ? argument + length : argument; for ( ; --i >= 0; ) { matchIndexes.push( i ); } return matchIndexes; }), "gt": createPositionalPseudo(function( matchIndexes, length, argument ) { var i = argument < 0 ? argument + length : argument; for ( ; ++i < length; ) { matchIndexes.push( i ); } return matchIndexes; }) } }; Expr.pseudos["nth"] = Expr.pseudos["eq"]; // Add button/input type pseudos for ( i in { radio: true, checkbox: true, file: true, password: true, image: true } ) { Expr.pseudos[ i ] = createInputPseudo( i ); } for ( i in { submit: true, reset: true } ) { Expr.pseudos[ i ] = createButtonPseudo( i ); } // Easy API for creating new setFilters function setFilters() {} setFilters.prototype = Expr.filters = Expr.pseudos; Expr.setFilters = new setFilters(); tokenize = Sizzle.tokenize = function( selector, parseOnly ) { var matched, match, tokens, type, soFar, groups, preFilters, cached = tokenCache[ selector + " " ]; if ( cached ) { return parseOnly ? 
0 : cached.slice( 0 ); } soFar = selector; groups = []; preFilters = Expr.preFilter; while ( soFar ) { // Comma and first run if ( !matched || (match = rcomma.exec( soFar )) ) { if ( match ) { // Don't consume trailing commas as valid soFar = soFar.slice( match[0].length ) || soFar; } groups.push( (tokens = []) ); } matched = false; // Combinators if ( (match = rcombinators.exec( soFar )) ) { matched = match.shift(); tokens.push({ value: matched, // Cast descendant combinators to space type: match[0].replace( rtrim, " " ) }); soFar = soFar.slice( matched.length ); } // Filters for ( type in Expr.filter ) { if ( (match = matchExpr[ type ].exec( soFar )) && (!preFilters[ type ] || (match = preFilters[ type ]( match ))) ) { matched = match.shift(); tokens.push({ value: matched, type: type, matches: match }); soFar = soFar.slice( matched.length ); } } if ( !matched ) { break; } } // Return the length of the invalid excess // if we're just parsing // Otherwise, throw an error or return tokens return parseOnly ? soFar.length : soFar ? Sizzle.error( selector ) : // Cache the tokens tokenCache( selector, groups ).slice( 0 ); }; function toSelector( tokens ) { var i = 0, len = tokens.length, selector = ""; for ( ; i < len; i++ ) { selector += tokens[i].value; } return selector; } function addCombinator( matcher, combinator, base ) { var dir = combinator.dir, skip = combinator.next, key = skip || dir, checkNonElements = base && key === "parentNode", doneName = done++; return combinator.first ? // Check against closest ancestor/preceding element function( elem, context, xml ) { while ( (elem = elem[ dir ]) ) { if ( elem.nodeType === 1 || checkNonElements ) { return matcher( elem, context, xml ); } } return false; } : // Check against all ancestor/preceding elements function( elem, context, xml ) { var oldCache, uniqueCache, outerCache, newCache = [ dirruns, doneName ]; // We can't set arbitrary data on XML nodes, so they don't benefit from combinator caching if ( xml ) { while ( (elem = elem[ dir ]) ) { if ( elem.nodeType === 1 || checkNonElements ) { if ( matcher( elem, context, xml ) ) { return true; } } } } else { while ( (elem = elem[ dir ]) ) { if ( elem.nodeType === 1 || checkNonElements ) { outerCache = elem[ expando ] || (elem[ expando ] = {}); // Support: IE <9 only // Defend against cloned attroperties (jQuery gh-1709) uniqueCache = outerCache[ elem.uniqueID ] || (outerCache[ elem.uniqueID ] = {}); if ( skip && skip === elem.nodeName.toLowerCase() ) { elem = elem[ dir ] || elem; } else if ( (oldCache = uniqueCache[ key ]) && oldCache[ 0 ] === dirruns && oldCache[ 1 ] === doneName ) { // Assign to newCache so results back-propagate to previous elements return (newCache[ 2 ] = oldCache[ 2 ]); } else { // Reuse newcache so results back-propagate to previous elements uniqueCache[ key ] = newCache; // A match means we're done; a fail means we have to keep checking if ( (newCache[ 2 ] = matcher( elem, context, xml )) ) { return true; } } } } } return false; }; } function elementMatcher( matchers ) { return matchers.length > 1 ? 
function( elem, context, xml ) { var i = matchers.length; while ( i-- ) { if ( !matchers[i]( elem, context, xml ) ) { return false; } } return true; } : matchers[0]; } function multipleContexts( selector, contexts, results ) { var i = 0, len = contexts.length; for ( ; i < len; i++ ) { Sizzle( selector, contexts[i], results ); } return results; } function condense( unmatched, map, filter, context, xml ) { var elem, newUnmatched = [], i = 0, len = unmatched.length, mapped = map != null; for ( ; i < len; i++ ) { if ( (elem = unmatched[i]) ) { if ( !filter || filter( elem, context, xml ) ) { newUnmatched.push( elem ); if ( mapped ) { map.push( i ); } } } } return newUnmatched; } function setMatcher( preFilter, selector, matcher, postFilter, postFinder, postSelector ) { if ( postFilter && !postFilter[ expando ] ) { postFilter = setMatcher( postFilter ); } if ( postFinder && !postFinder[ expando ] ) { postFinder = setMatcher( postFinder, postSelector ); } return markFunction(function( seed, results, context, xml ) { var temp, i, elem, preMap = [], postMap = [], preexisting = results.length, // Get initial elements from seed or context elems = seed || multipleContexts( selector || "*", context.nodeType ? [ context ] : context, [] ), // Prefilter to get matcher input, preserving a map for seed-results synchronization matcherIn = preFilter && ( seed || !selector ) ? condense( elems, preMap, preFilter, context, xml ) : elems, matcherOut = matcher ? // If we have a postFinder, or filtered seed, or non-seed postFilter or preexisting results, postFinder || ( seed ? preFilter : preexisting || postFilter ) ? // ...intermediate processing is necessary [] : // ...otherwise use results directly results : matcherIn; // Find primary matches if ( matcher ) { matcher( matcherIn, matcherOut, context, xml ); } // Apply postFilter if ( postFilter ) { temp = condense( matcherOut, postMap ); postFilter( temp, [], context, xml ); // Un-match failing elements by moving them back to matcherIn i = temp.length; while ( i-- ) { if ( (elem = temp[i]) ) { matcherOut[ postMap[i] ] = !(matcherIn[ postMap[i] ] = elem); } } } if ( seed ) { if ( postFinder || preFilter ) { if ( postFinder ) { // Get the final matcherOut by condensing this intermediate into postFinder contexts temp = []; i = matcherOut.length; while ( i-- ) { if ( (elem = matcherOut[i]) ) { // Restore matcherIn since elem is not yet a final match temp.push( (matcherIn[i] = elem) ); } } postFinder( null, (matcherOut = []), temp, xml ); } // Move matched elements from seed to results to keep them synchronized i = matcherOut.length; while ( i-- ) { if ( (elem = matcherOut[i]) && (temp = postFinder ? indexOf( seed, elem ) : preMap[i]) > -1 ) { seed[temp] = !(results[temp] = elem); } } } // Add elements to results, through postFinder if defined } else { matcherOut = condense( matcherOut === results ? matcherOut.splice( preexisting, matcherOut.length ) : matcherOut ); if ( postFinder ) { postFinder( null, results, matcherOut, xml ); } else { push.apply( results, matcherOut ); } } }); } function matcherFromTokens( tokens ) { var checkContext, matcher, j, len = tokens.length, leadingRelative = Expr.relative[ tokens[0].type ], implicitRelative = leadingRelative || Expr.relative[" "], i = leadingRelative ? 
1 : 0, // The foundational matcher ensures that elements are reachable from top-level context(s) matchContext = addCombinator( function( elem ) { return elem === checkContext; }, implicitRelative, true ), matchAnyContext = addCombinator( function( elem ) { return indexOf( checkContext, elem ) > -1; }, implicitRelative, true ), matchers = [ function( elem, context, xml ) { var ret = ( !leadingRelative && ( xml || context !== outermostContext ) ) || ( (checkContext = context).nodeType ? matchContext( elem, context, xml ) : matchAnyContext( elem, context, xml ) ); // Avoid hanging onto element (issue #299) checkContext = null; return ret; } ]; for ( ; i < len; i++ ) { if ( (matcher = Expr.relative[ tokens[i].type ]) ) { matchers = [ addCombinator(elementMatcher( matchers ), matcher) ]; } else { matcher = Expr.filter[ tokens[i].type ].apply( null, tokens[i].matches ); // Return special upon seeing a positional matcher if ( matcher[ expando ] ) { // Find the next relative operator (if any) for proper handling j = ++i; for ( ; j < len; j++ ) { if ( Expr.relative[ tokens[j].type ] ) { break; } } return setMatcher( i > 1 && elementMatcher( matchers ), i > 1 && toSelector( // If the preceding token was a descendant combinator, insert an implicit any-element `*` tokens.slice( 0, i - 1 ).concat({ value: tokens[ i - 2 ].type === " " ? "*" : "" }) ).replace( rtrim, "$1" ), matcher, i < j && matcherFromTokens( tokens.slice( i, j ) ), j < len && matcherFromTokens( (tokens = tokens.slice( j )) ), j < len && toSelector( tokens ) ); } matchers.push( matcher ); } } return elementMatcher( matchers ); } function matcherFromGroupMatchers( elementMatchers, setMatchers ) { var bySet = setMatchers.length > 0, byElement = elementMatchers.length > 0, superMatcher = function( seed, context, xml, results, outermost ) { var elem, j, matcher, matchedCount = 0, i = "0", unmatched = seed && [], setMatched = [], contextBackup = outermostContext, // We must always have either seed elements or outermost context elems = seed || byElement && Expr.find["TAG"]( "*", outermost ), // Use integer dirruns iff this is the outermost matcher dirrunsUnique = (dirruns += contextBackup == null ? 1 : Math.random() || 0.1), len = elems.length; if ( outermost ) { outermostContext = context === document || context || outermost; } // Add elements passing elementMatchers directly to results // Support: IE<9, Safari // Tolerate NodeList properties (IE: "length"; Safari: ) matching elements by id for ( ; i !== len && (elem = elems[i]) != null; i++ ) { if ( byElement && elem ) { j = 0; if ( !context && elem.ownerDocument !== document ) { setDocument( elem ); xml = !documentIsHTML; } while ( (matcher = elementMatchers[j++]) ) { if ( matcher( elem, context || document, xml) ) { results.push( elem ); break; } } if ( outermost ) { dirruns = dirrunsUnique; } } // Track unmatched elements for set filters if ( bySet ) { // They will have gone through all possible matchers if ( (elem = !matcher && elem) ) { matchedCount--; } // Lengthen the array for every element, matched or not if ( seed ) { unmatched.push( elem ); } } } // `i` is now the count of elements visited above, and adding it to `matchedCount` // makes the latter nonnegative. matchedCount += i; // Apply set filters to unmatched elements // NOTE: This can be skipped if there are no unmatched elements (i.e., `matchedCount` // equals `i`), unless we didn't visit _any_ elements in the above loop because we have // no element matchers and no seed. 
// Incrementing an initially-string "0" `i` allows `i` to remain a string only in that // case, which will result in a "00" `matchedCount` that differs from `i` but is also // numerically zero. if ( bySet && i !== matchedCount ) { j = 0; while ( (matcher = setMatchers[j++]) ) { matcher( unmatched, setMatched, context, xml ); } if ( seed ) { // Reintegrate element matches to eliminate the need for sorting if ( matchedCount > 0 ) { while ( i-- ) { if ( !(unmatched[i] || setMatched[i]) ) { setMatched[i] = pop.call( results ); } } } // Discard index placeholder values to get only actual matches setMatched = condense( setMatched ); } // Add matches to results push.apply( results, setMatched ); // Seedless set matches succeeding multiple successful matchers stipulate sorting if ( outermost && !seed && setMatched.length > 0 && ( matchedCount + setMatchers.length ) > 1 ) { Sizzle.uniqueSort( results ); } } // Override manipulation of globals by nested matchers if ( outermost ) { dirruns = dirrunsUnique; outermostContext = contextBackup; } return unmatched; }; return bySet ? markFunction( superMatcher ) : superMatcher; } compile = Sizzle.compile = function( selector, match /* Internal Use Only */ ) { var i, setMatchers = [], elementMatchers = [], cached = compilerCache[ selector + " " ]; if ( !cached ) { // Generate a function of recursive functions that can be used to check each element if ( !match ) { match = tokenize( selector ); } i = match.length; while ( i-- ) { cached = matcherFromTokens( match[i] ); if ( cached[ expando ] ) { setMatchers.push( cached ); } else { elementMatchers.push( cached ); } } // Cache the compiled function cached = compilerCache( selector, matcherFromGroupMatchers( elementMatchers, setMatchers ) ); // Save selector and tokenization cached.selector = selector; } return cached; }; /** * A low-level selection function that works with Sizzle's compiled * selector functions * @param {String|Function} selector A selector or a pre-compiled * selector function built with Sizzle.compile * @param {Element} context * @param {Array} [results] * @param {Array} [seed] A set of elements to match against */ select = Sizzle.select = function( selector, context, results, seed ) { var i, tokens, token, type, find, compiled = typeof selector === "function" && selector, match = !seed && tokenize( (selector = compiled.selector || selector) ); results = results || []; // Try to minimize operations if there is only one selector in the list and no seed // (the latter of which guarantees us context) if ( match.length === 1 ) { // Reduce context if the leading compound selector is an ID tokens = match[0] = match[0].slice( 0 ); if ( tokens.length > 2 && (token = tokens[0]).type === "ID" && context.nodeType === 9 && documentIsHTML && Expr.relative[ tokens[1].type ] ) { context = ( Expr.find["ID"]( token.matches[0].replace(runescape, funescape), context ) || [] )[0]; if ( !context ) { return results; // Precompiled matchers will still verify ancestry, so step up a level } else if ( compiled ) { context = context.parentNode; } selector = selector.slice( tokens.shift().value.length ); } // Fetch a seed set for right-to-left matching i = matchExpr["needsContext"].test( selector ) ? 
0 : tokens.length; while ( i-- ) { token = tokens[i]; // Abort if we hit a combinator if ( Expr.relative[ (type = token.type) ] ) { break; } if ( (find = Expr.find[ type ]) ) { // Search, expanding context for leading sibling combinators if ( (seed = find( token.matches[0].replace( runescape, funescape ), rsibling.test( tokens[0].type ) && testContext( context.parentNode ) || context )) ) { // If seed is empty or no tokens remain, we can return early tokens.splice( i, 1 ); selector = seed.length && toSelector( tokens ); if ( !selector ) { push.apply( results, seed ); return results; } break; } } } } // Compile and execute a filtering function if one is not provided // Provide `match` to avoid retokenization if we modified the selector above ( compiled || compile( selector, match ) )( seed, context, !documentIsHTML, results, !context || rsibling.test( selector ) && testContext( context.parentNode ) || context ); return results; }; // One-time assignments // Sort stability support.sortStable = expando.split("").sort( sortOrder ).join("") === expando; // Support: Chrome 14-35+ // Always assume duplicates if they aren't passed to the comparison function support.detectDuplicates = !!hasDuplicate; // Initialize against the default document setDocument(); // Support: Webkit<537.32 - Safari 6.0.3/Chrome 25 (fixed in Chrome 27) // Detached nodes confoundingly follow *each other* support.sortDetached = assert(function( el ) { // Should return 1, but returns 4 (following) return el.compareDocumentPosition( document.createElement("fieldset") ) & 1; }); // Support: IE<8 // Prevent attribute/property "interpolation" // https://msdn.microsoft.com/en-us/library/ms536429%28VS.85%29.aspx if ( !assert(function( el ) { el.innerHTML = ""; return el.firstChild.getAttribute("href") === "#" ; }) ) { addHandle( "type|href|height|width", function( elem, name, isXML ) { if ( !isXML ) { return elem.getAttribute( name, name.toLowerCase() === "type" ? 1 : 2 ); } }); } // Support: IE<9 // Use defaultValue in place of getAttribute("value") if ( !support.attributes || !assert(function( el ) { el.innerHTML = ""; el.firstChild.setAttribute( "value", "" ); return el.firstChild.getAttribute( "value" ) === ""; }) ) { addHandle( "value", function( elem, name, isXML ) { if ( !isXML && elem.nodeName.toLowerCase() === "input" ) { return elem.defaultValue; } }); } // Support: IE<9 // Use getAttributeNode to fetch booleans when getAttribute lies if ( !assert(function( el ) { return el.getAttribute("disabled") == null; }) ) { addHandle( booleans, function( elem, name, isXML ) { var val; if ( !isXML ) { return elem[ name ] === true ? name.toLowerCase() : (val = elem.getAttributeNode( name )) && val.specified ? 
val.value : null; } }); } return Sizzle; })( window ); jQuery.find = Sizzle; jQuery.expr = Sizzle.selectors; // Deprecated jQuery.expr[ ":" ] = jQuery.expr.pseudos; jQuery.uniqueSort = jQuery.unique = Sizzle.uniqueSort; jQuery.text = Sizzle.getText; jQuery.isXMLDoc = Sizzle.isXML; jQuery.contains = Sizzle.contains; jQuery.escapeSelector = Sizzle.escape; var dir = function( elem, dir, until ) { var matched = [], truncate = until !== undefined; while ( ( elem = elem[ dir ] ) && elem.nodeType !== 9 ) { if ( elem.nodeType === 1 ) { if ( truncate && jQuery( elem ).is( until ) ) { break; } matched.push( elem ); } } return matched; }; var siblings = function( n, elem ) { var matched = []; for ( ; n; n = n.nextSibling ) { if ( n.nodeType === 1 && n !== elem ) { matched.push( n ); } } return matched; }; var rneedsContext = jQuery.expr.match.needsContext; function nodeName( elem, name ) { return elem.nodeName && elem.nodeName.toLowerCase() === name.toLowerCase(); }; var rsingleTag = ( /^<([a-z][^\/\0>:\x20\t\r\n\f]*)[\x20\t\r\n\f]*\/?>(?:<\/\1>|)$/i ); // Implement the identical functionality for filter and not function winnow( elements, qualifier, not ) { if ( isFunction( qualifier ) ) { return jQuery.grep( elements, function( elem, i ) { return !!qualifier.call( elem, i, elem ) !== not; } ); } // Single element if ( qualifier.nodeType ) { return jQuery.grep( elements, function( elem ) { return ( elem === qualifier ) !== not; } ); } // Arraylike of elements (jQuery, arguments, Array) if ( typeof qualifier !== "string" ) { return jQuery.grep( elements, function( elem ) { return ( indexOf.call( qualifier, elem ) > -1 ) !== not; } ); } // Filtered directly for both simple and complex selectors return jQuery.filter( qualifier, elements, not ); } jQuery.filter = function( expr, elems, not ) { var elem = elems[ 0 ]; if ( not ) { expr = ":not(" + expr + ")"; } if ( elems.length === 1 && elem.nodeType === 1 ) { return jQuery.find.matchesSelector( elem, expr ) ? [ elem ] : []; } return jQuery.find.matches( expr, jQuery.grep( elems, function( elem ) { return elem.nodeType === 1; } ) ); }; jQuery.fn.extend( { find: function( selector ) { var i, ret, len = this.length, self = this; if ( typeof selector !== "string" ) { return this.pushStack( jQuery( selector ).filter( function() { for ( i = 0; i < len; i++ ) { if ( jQuery.contains( self[ i ], this ) ) { return true; } } } ) ); } ret = this.pushStack( [] ); for ( i = 0; i < len; i++ ) { jQuery.find( selector, self[ i ], ret ); } return len > 1 ? jQuery.uniqueSort( ret ) : ret; }, filter: function( selector ) { return this.pushStack( winnow( this, selector || [], false ) ); }, not: function( selector ) { return this.pushStack( winnow( this, selector || [], true ) ); }, is: function( selector ) { return !!winnow( this, // If this is a positional/relative selector, check membership in the returned set // so $("p:first").is("p:last") won't return true for a doc with two "p". typeof selector === "string" && rneedsContext.test( selector ) ? 
jQuery( selector ) : selector || [], false ).length; } } ); // Initialize a jQuery object // A central reference to the root jQuery(document) var rootjQuery, // A simple way to check for HTML strings // Prioritize #id over to avoid XSS via location.hash (#9521) // Strict HTML recognition (#11290: must start with <) // Shortcut simple #id case for speed rquickExpr = /^(?:\s*(<[\w\W]+>)[^>]*|#([\w-]+))$/, init = jQuery.fn.init = function( selector, context, root ) { var match, elem; // HANDLE: $(""), $(null), $(undefined), $(false) if ( !selector ) { return this; } // Method init() accepts an alternate rootjQuery // so migrate can support jQuery.sub (gh-2101) root = root || rootjQuery; // Handle HTML strings if ( typeof selector === "string" ) { if ( selector[ 0 ] === "<" && selector[ selector.length - 1 ] === ">" && selector.length >= 3 ) { // Assume that strings that start and end with <> are HTML and skip the regex check match = [ null, selector, null ]; } else { match = rquickExpr.exec( selector ); } // Match html or make sure no context is specified for #id if ( match && ( match[ 1 ] || !context ) ) { // HANDLE: $(html) -> $(array) if ( match[ 1 ] ) { context = context instanceof jQuery ? context[ 0 ] : context; // Option to run scripts is true for back-compat // Intentionally let the error be thrown if parseHTML is not present jQuery.merge( this, jQuery.parseHTML( match[ 1 ], context && context.nodeType ? context.ownerDocument || context : document, true ) ); // HANDLE: $(html, props) if ( rsingleTag.test( match[ 1 ] ) && jQuery.isPlainObject( context ) ) { for ( match in context ) { // Properties of context are called as methods if possible if ( isFunction( this[ match ] ) ) { this[ match ]( context[ match ] ); // ...and otherwise set as attributes } else { this.attr( match, context[ match ] ); } } } return this; // HANDLE: $(#id) } else { elem = document.getElementById( match[ 2 ] ); if ( elem ) { // Inject the element directly into the jQuery object this[ 0 ] = elem; this.length = 1; } return this; } // HANDLE: $(expr, $(...)) } else if ( !context || context.jquery ) { return ( context || root ).find( selector ); // HANDLE: $(expr, context) // (which is just equivalent to: $(context).find(expr) } else { return this.constructor( context ).find( selector ); } // HANDLE: $(DOMElement) } else if ( selector.nodeType ) { this[ 0 ] = selector; this.length = 1; return this; // HANDLE: $(function) // Shortcut for document ready } else if ( isFunction( selector ) ) { return root.ready !== undefined ? 
root.ready( selector ) : // Execute immediately if ready is not present selector( jQuery ); } return jQuery.makeArray( selector, this ); }; // Give the init function the jQuery prototype for later instantiation init.prototype = jQuery.fn; // Initialize central reference rootjQuery = jQuery( document ); var rparentsprev = /^(?:parents|prev(?:Until|All))/, // Methods guaranteed to produce a unique set when starting from a unique set guaranteedUnique = { children: true, contents: true, next: true, prev: true }; jQuery.fn.extend( { has: function( target ) { var targets = jQuery( target, this ), l = targets.length; return this.filter( function() { var i = 0; for ( ; i < l; i++ ) { if ( jQuery.contains( this, targets[ i ] ) ) { return true; } } } ); }, closest: function( selectors, context ) { var cur, i = 0, l = this.length, matched = [], targets = typeof selectors !== "string" && jQuery( selectors ); // Positional selectors never match, since there's no _selection_ context if ( !rneedsContext.test( selectors ) ) { for ( ; i < l; i++ ) { for ( cur = this[ i ]; cur && cur !== context; cur = cur.parentNode ) { // Always skip document fragments if ( cur.nodeType < 11 && ( targets ? targets.index( cur ) > -1 : // Don't pass non-elements to Sizzle cur.nodeType === 1 && jQuery.find.matchesSelector( cur, selectors ) ) ) { matched.push( cur ); break; } } } } return this.pushStack( matched.length > 1 ? jQuery.uniqueSort( matched ) : matched ); }, // Determine the position of an element within the set index: function( elem ) { // No argument, return index in parent if ( !elem ) { return ( this[ 0 ] && this[ 0 ].parentNode ) ? this.first().prevAll().length : -1; } // Index in selector if ( typeof elem === "string" ) { return indexOf.call( jQuery( elem ), this[ 0 ] ); } // Locate the position of the desired element return indexOf.call( this, // If it receives a jQuery object, the first element is used elem.jquery ? elem[ 0 ] : elem ); }, add: function( selector, context ) { return this.pushStack( jQuery.uniqueSort( jQuery.merge( this.get(), jQuery( selector, context ) ) ) ); }, addBack: function( selector ) { return this.add( selector == null ? this.prevObject : this.prevObject.filter( selector ) ); } } ); function sibling( cur, dir ) { while ( ( cur = cur[ dir ] ) && cur.nodeType !== 1 ) {} return cur; } jQuery.each( { parent: function( elem ) { var parent = elem.parentNode; return parent && parent.nodeType !== 11 ? parent : null; }, parents: function( elem ) { return dir( elem, "parentNode" ); }, parentsUntil: function( elem, i, until ) { return dir( elem, "parentNode", until ); }, next: function( elem ) { return sibling( elem, "nextSibling" ); }, prev: function( elem ) { return sibling( elem, "previousSibling" ); }, nextAll: function( elem ) { return dir( elem, "nextSibling" ); }, prevAll: function( elem ) { return dir( elem, "previousSibling" ); }, nextUntil: function( elem, i, until ) { return dir( elem, "nextSibling", until ); }, prevUntil: function( elem, i, until ) { return dir( elem, "previousSibling", until ); }, siblings: function( elem ) { return siblings( ( elem.parentNode || {} ).firstChild, elem ); }, children: function( elem ) { return siblings( elem.firstChild ); }, contents: function( elem ) { if ( nodeName( elem, "iframe" ) ) { return elem.contentDocument; } // Support: IE 9 - 11 only, iOS 7 only, Android Browser <=4.3 only // Treat the template element as a regular one in browsers that // don't support it. 
if ( nodeName( elem, "template" ) ) { elem = elem.content || elem; } return jQuery.merge( [], elem.childNodes ); } }, function( name, fn ) { jQuery.fn[ name ] = function( until, selector ) { var matched = jQuery.map( this, fn, until ); if ( name.slice( -5 ) !== "Until" ) { selector = until; } if ( selector && typeof selector === "string" ) { matched = jQuery.filter( selector, matched ); } if ( this.length > 1 ) { // Remove duplicates if ( !guaranteedUnique[ name ] ) { jQuery.uniqueSort( matched ); } // Reverse order for parents* and prev-derivatives if ( rparentsprev.test( name ) ) { matched.reverse(); } } return this.pushStack( matched ); }; } ); var rnothtmlwhite = ( /[^\x20\t\r\n\f]+/g ); // Convert String-formatted options into Object-formatted ones function createOptions( options ) { var object = {}; jQuery.each( options.match( rnothtmlwhite ) || [], function( _, flag ) { object[ flag ] = true; } ); return object; } /* * Create a callback list using the following parameters: * * options: an optional list of space-separated options that will change how * the callback list behaves or a more traditional option object * * By default a callback list will act like an event callback list and can be * "fired" multiple times. * * Possible options: * * once: will ensure the callback list can only be fired once (like a Deferred) * * memory: will keep track of previous values and will call any callback added * after the list has been fired right away with the latest "memorized" * values (like a Deferred) * * unique: will ensure a callback can only be added once (no duplicate in the list) * * stopOnFalse: interrupt callings when a callback returns false * */ jQuery.Callbacks = function( options ) { // Convert options from String-formatted to Object-formatted if needed // (we check in cache first) options = typeof options === "string" ? 
createOptions( options ) : jQuery.extend( {}, options ); var // Flag to know if list is currently firing firing, // Last fire value for non-forgettable lists memory, // Flag to know if list was already fired fired, // Flag to prevent firing locked, // Actual callback list list = [], // Queue of execution data for repeatable lists queue = [], // Index of currently firing callback (modified by add/remove as needed) firingIndex = -1, // Fire callbacks fire = function() { // Enforce single-firing locked = locked || options.once; // Execute callbacks for all pending executions, // respecting firingIndex overrides and runtime changes fired = firing = true; for ( ; queue.length; firingIndex = -1 ) { memory = queue.shift(); while ( ++firingIndex < list.length ) { // Run callback and check for early termination if ( list[ firingIndex ].apply( memory[ 0 ], memory[ 1 ] ) === false && options.stopOnFalse ) { // Jump to end and forget the data so .add doesn't re-fire firingIndex = list.length; memory = false; } } } // Forget the data if we're done with it if ( !options.memory ) { memory = false; } firing = false; // Clean up if we're done firing for good if ( locked ) { // Keep an empty list if we have data for future add calls if ( memory ) { list = []; // Otherwise, this object is spent } else { list = ""; } } }, // Actual Callbacks object self = { // Add a callback or a collection of callbacks to the list add: function() { if ( list ) { // If we have memory from a past run, we should fire after adding if ( memory && !firing ) { firingIndex = list.length - 1; queue.push( memory ); } ( function add( args ) { jQuery.each( args, function( _, arg ) { if ( isFunction( arg ) ) { if ( !options.unique || !self.has( arg ) ) { list.push( arg ); } } else if ( arg && arg.length && toType( arg ) !== "string" ) { // Inspect recursively add( arg ); } } ); } )( arguments ); if ( memory && !firing ) { fire(); } } return this; }, // Remove a callback from the list remove: function() { jQuery.each( arguments, function( _, arg ) { var index; while ( ( index = jQuery.inArray( arg, list, index ) ) > -1 ) { list.splice( index, 1 ); // Handle firing indexes if ( index <= firingIndex ) { firingIndex--; } } } ); return this; }, // Check if a given callback is in the list. // If no argument is given, return whether or not list has callbacks attached. has: function( fn ) { return fn ? jQuery.inArray( fn, list ) > -1 : list.length > 0; }, // Remove all callbacks from the list empty: function() { if ( list ) { list = []; } return this; }, // Disable .fire and .add // Abort any current/pending executions // Clear all callbacks and values disable: function() { locked = queue = []; list = memory = ""; return this; }, disabled: function() { return !list; }, // Disable .fire // Also disable .add unless we have memory (since it would have no effect) // Abort any pending executions lock: function() { locked = queue = []; if ( !memory && !firing ) { list = memory = ""; } return this; }, locked: function() { return !!locked; }, // Call all callbacks with the given context and arguments fireWith: function( context, args ) { if ( !locked ) { args = args || []; args = [ context, args.slice ? 
args.slice() : args ]; queue.push( args ); if ( !firing ) { fire(); } } return this; }, // Call all the callbacks with the given arguments fire: function() { self.fireWith( this, arguments ); return this; }, // To know if the callbacks have already been called at least once fired: function() { return !!fired; } }; return self; }; function Identity( v ) { return v; } function Thrower( ex ) { throw ex; } function adoptValue( value, resolve, reject, noValue ) { var method; try { // Check for promise aspect first to privilege synchronous behavior if ( value && isFunction( ( method = value.promise ) ) ) { method.call( value ).done( resolve ).fail( reject ); // Other thenables } else if ( value && isFunction( ( method = value.then ) ) ) { method.call( value, resolve, reject ); // Other non-thenables } else { // Control `resolve` arguments by letting Array#slice cast boolean `noValue` to integer: // * false: [ value ].slice( 0 ) => resolve( value ) // * true: [ value ].slice( 1 ) => resolve() resolve.apply( undefined, [ value ].slice( noValue ) ); } // For Promises/A+, convert exceptions into rejections // Since jQuery.when doesn't unwrap thenables, we can skip the extra checks appearing in // Deferred#then to conditionally suppress rejection. } catch ( value ) { // Support: Android 4.0 only // Strict mode functions invoked without .call/.apply get global-object context reject.apply( undefined, [ value ] ); } } jQuery.extend( { Deferred: function( func ) { var tuples = [ // action, add listener, callbacks, // ... .then handlers, argument index, [final state] [ "notify", "progress", jQuery.Callbacks( "memory" ), jQuery.Callbacks( "memory" ), 2 ], [ "resolve", "done", jQuery.Callbacks( "once memory" ), jQuery.Callbacks( "once memory" ), 0, "resolved" ], [ "reject", "fail", jQuery.Callbacks( "once memory" ), jQuery.Callbacks( "once memory" ), 1, "rejected" ] ], state = "pending", promise = { state: function() { return state; }, always: function() { deferred.done( arguments ).fail( arguments ); return this; }, "catch": function( fn ) { return promise.then( null, fn ); }, // Keep pipe for back-compat pipe: function( /* fnDone, fnFail, fnProgress */ ) { var fns = arguments; return jQuery.Deferred( function( newDefer ) { jQuery.each( tuples, function( i, tuple ) { // Map tuples (progress, done, fail) to arguments (done, fail, progress) var fn = isFunction( fns[ tuple[ 4 ] ] ) && fns[ tuple[ 4 ] ]; // deferred.progress(function() { bind to newDefer or newDefer.notify }) // deferred.done(function() { bind to newDefer or newDefer.resolve }) // deferred.fail(function() { bind to newDefer or newDefer.reject }) deferred[ tuple[ 1 ] ]( function() { var returned = fn && fn.apply( this, arguments ); if ( returned && isFunction( returned.promise ) ) { returned.promise() .progress( newDefer.notify ) .done( newDefer.resolve ) .fail( newDefer.reject ); } else { newDefer[ tuple[ 0 ] + "With" ]( this, fn ? 
[ returned ] : arguments ); } } ); } ); fns = null; } ).promise(); }, then: function( onFulfilled, onRejected, onProgress ) { var maxDepth = 0; function resolve( depth, deferred, handler, special ) { return function() { var that = this, args = arguments, mightThrow = function() { var returned, then; // Support: Promises/A+ section 2.3.3.3.3 // https://promisesaplus.com/#point-59 // Ignore double-resolution attempts if ( depth < maxDepth ) { return; } returned = handler.apply( that, args ); // Support: Promises/A+ section 2.3.1 // https://promisesaplus.com/#point-48 if ( returned === deferred.promise() ) { throw new TypeError( "Thenable self-resolution" ); } // Support: Promises/A+ sections 2.3.3.1, 3.5 // https://promisesaplus.com/#point-54 // https://promisesaplus.com/#point-75 // Retrieve `then` only once then = returned && // Support: Promises/A+ section 2.3.4 // https://promisesaplus.com/#point-64 // Only check objects and functions for thenability ( typeof returned === "object" || typeof returned === "function" ) && returned.then; // Handle a returned thenable if ( isFunction( then ) ) { // Special processors (notify) just wait for resolution if ( special ) { then.call( returned, resolve( maxDepth, deferred, Identity, special ), resolve( maxDepth, deferred, Thrower, special ) ); // Normal processors (resolve) also hook into progress } else { // ...and disregard older resolution values maxDepth++; then.call( returned, resolve( maxDepth, deferred, Identity, special ), resolve( maxDepth, deferred, Thrower, special ), resolve( maxDepth, deferred, Identity, deferred.notifyWith ) ); } // Handle all other returned values } else { // Only substitute handlers pass on context // and multiple values (non-spec behavior) if ( handler !== Identity ) { that = undefined; args = [ returned ]; } // Process the value(s) // Default process is resolve ( special || deferred.resolveWith )( that, args ); } }, // Only normal processors (resolve) catch and reject exceptions process = special ? mightThrow : function() { try { mightThrow(); } catch ( e ) { if ( jQuery.Deferred.exceptionHook ) { jQuery.Deferred.exceptionHook( e, process.stackTrace ); } // Support: Promises/A+ section 2.3.3.3.4.1 // https://promisesaplus.com/#point-61 // Ignore post-resolution exceptions if ( depth + 1 >= maxDepth ) { // Only substitute handlers pass on context // and multiple values (non-spec behavior) if ( handler !== Thrower ) { that = undefined; args = [ e ]; } deferred.rejectWith( that, args ); } } }; // Support: Promises/A+ section 2.3.3.3.1 // https://promisesaplus.com/#point-57 // Re-resolve promises immediately to dodge false rejection from // subsequent errors if ( depth ) { process(); } else { // Call an optional hook to record the stack, in case of exception // since it's otherwise lost when execution goes async if ( jQuery.Deferred.getStackHook ) { process.stackTrace = jQuery.Deferred.getStackHook(); } window.setTimeout( process ); } }; } return jQuery.Deferred( function( newDefer ) { // progress_handlers.add( ... ) tuples[ 0 ][ 3 ].add( resolve( 0, newDefer, isFunction( onProgress ) ? onProgress : Identity, newDefer.notifyWith ) ); // fulfilled_handlers.add( ... ) tuples[ 1 ][ 3 ].add( resolve( 0, newDefer, isFunction( onFulfilled ) ? onFulfilled : Identity ) ); // rejected_handlers.add( ... ) tuples[ 2 ][ 3 ].add( resolve( 0, newDefer, isFunction( onRejected ) ? 
onRejected : Thrower ) ); } ).promise(); }, // Get a promise for this deferred // If obj is provided, the promise aspect is added to the object promise: function( obj ) { return obj != null ? jQuery.extend( obj, promise ) : promise; } }, deferred = {}; // Add list-specific methods jQuery.each( tuples, function( i, tuple ) { var list = tuple[ 2 ], stateString = tuple[ 5 ]; // promise.progress = list.add // promise.done = list.add // promise.fail = list.add promise[ tuple[ 1 ] ] = list.add; // Handle state if ( stateString ) { list.add( function() { // state = "resolved" (i.e., fulfilled) // state = "rejected" state = stateString; }, // rejected_callbacks.disable // fulfilled_callbacks.disable tuples[ 3 - i ][ 2 ].disable, // rejected_handlers.disable // fulfilled_handlers.disable tuples[ 3 - i ][ 3 ].disable, // progress_callbacks.lock tuples[ 0 ][ 2 ].lock, // progress_handlers.lock tuples[ 0 ][ 3 ].lock ); } // progress_handlers.fire // fulfilled_handlers.fire // rejected_handlers.fire list.add( tuple[ 3 ].fire ); // deferred.notify = function() { deferred.notifyWith(...) } // deferred.resolve = function() { deferred.resolveWith(...) } // deferred.reject = function() { deferred.rejectWith(...) } deferred[ tuple[ 0 ] ] = function() { deferred[ tuple[ 0 ] + "With" ]( this === deferred ? undefined : this, arguments ); return this; }; // deferred.notifyWith = list.fireWith // deferred.resolveWith = list.fireWith // deferred.rejectWith = list.fireWith deferred[ tuple[ 0 ] + "With" ] = list.fireWith; } ); // Make the deferred a promise promise.promise( deferred ); // Call given func if any if ( func ) { func.call( deferred, deferred ); } // All done! return deferred; }, // Deferred helper when: function( singleValue ) { var // count of uncompleted subordinates remaining = arguments.length, // count of unprocessed arguments i = remaining, // subordinate fulfillment data resolveContexts = Array( i ), resolveValues = slice.call( arguments ), // the master Deferred master = jQuery.Deferred(), // subordinate callback factory updateFunc = function( i ) { return function( value ) { resolveContexts[ i ] = this; resolveValues[ i ] = arguments.length > 1 ? slice.call( arguments ) : value; if ( !( --remaining ) ) { master.resolveWith( resolveContexts, resolveValues ); } }; }; // Single- and empty arguments are adopted like Promise.resolve if ( remaining <= 1 ) { adoptValue( singleValue, master.done( updateFunc( i ) ).resolve, master.reject, !remaining ); // Use .then() to unwrap secondary thenables (cf. gh-3000) if ( master.state() === "pending" || isFunction( resolveValues[ i ] && resolveValues[ i ].then ) ) { return master.then(); } } // Multiple arguments are aggregated like Promise.all array elements while ( i-- ) { adoptValue( resolveValues[ i ], updateFunc( i ), master.reject ); } return master.promise(); } } ); // These usually indicate a programmer mistake during development, // warn about them ASAP rather than swallowing them by default. 
var rerrorNames = /^(Eval|Internal|Range|Reference|Syntax|Type|URI)Error$/; jQuery.Deferred.exceptionHook = function( error, stack ) { // Support: IE 8 - 9 only // Console exists when dev tools are open, which can happen at any time if ( window.console && window.console.warn && error && rerrorNames.test( error.name ) ) { window.console.warn( "jQuery.Deferred exception: " + error.message, error.stack, stack ); } }; jQuery.readyException = function( error ) { window.setTimeout( function() { throw error; } ); }; // The deferred used on DOM ready var readyList = jQuery.Deferred(); jQuery.fn.ready = function( fn ) { readyList .then( fn ) // Wrap jQuery.readyException in a function so that the lookup // happens at the time of error handling instead of callback // registration. .catch( function( error ) { jQuery.readyException( error ); } ); return this; }; jQuery.extend( { // Is the DOM ready to be used? Set to true once it occurs. isReady: false, // A counter to track how many items to wait for before // the ready event fires. See #6781 readyWait: 1, // Handle when the DOM is ready ready: function( wait ) { // Abort if there are pending holds or we're already ready if ( wait === true ? --jQuery.readyWait : jQuery.isReady ) { return; } // Remember that the DOM is ready jQuery.isReady = true; // If a normal DOM Ready event fired, decrement, and wait if need be if ( wait !== true && --jQuery.readyWait > 0 ) { return; } // If there are functions bound, to execute readyList.resolveWith( document, [ jQuery ] ); } } ); jQuery.ready.then = readyList.then; // The ready event handler and self cleanup method function completed() { document.removeEventListener( "DOMContentLoaded", completed ); window.removeEventListener( "load", completed ); jQuery.ready(); } // Catch cases where $(document).ready() is called // after the browser event has already occurred. // Support: IE <=9 - 10 only // Older IE sometimes signals "interactive" too soon if ( document.readyState === "complete" || ( document.readyState !== "loading" && !document.documentElement.doScroll ) ) { // Handle it asynchronously to allow scripts the opportunity to delay ready window.setTimeout( jQuery.ready ); } else { // Use the handy event callback document.addEventListener( "DOMContentLoaded", completed ); // A fallback to window.onload, that will always work window.addEventListener( "load", completed ); } // Multifunctional method to get and set values of a collection // The value/s can optionally be executed if it's a function var access = function( elems, fn, key, value, chainable, emptyGet, raw ) { var i = 0, len = elems.length, bulk = key == null; // Sets many values if ( toType( key ) === "object" ) { chainable = true; for ( i in key ) { access( elems, fn, i, key[ i ], true, emptyGet, raw ); } // Sets one value } else if ( value !== undefined ) { chainable = true; if ( !isFunction( value ) ) { raw = true; } if ( bulk ) { // Bulk operations run against the entire set if ( raw ) { fn.call( elems, value ); fn = null; // ...except when executing function values } else { bulk = fn; fn = function( elem, key, value ) { return bulk.call( jQuery( elem ), value ); }; } } if ( fn ) { for ( ; i < len; i++ ) { fn( elems[ i ], key, raw ? value : value.call( elems[ i ], i, fn( elems[ i ], key ) ) ); } } } if ( chainable ) { return elems; } // Gets if ( bulk ) { return fn.call( elems ); } return len ? 
fn( elems[ 0 ], key ) : emptyGet; }; // Matches dashed string for camelizing var rmsPrefix = /^-ms-/, rdashAlpha = /-([a-z])/g; // Used by camelCase as callback to replace() function fcamelCase( all, letter ) { return letter.toUpperCase(); } // Convert dashed to camelCase; used by the css and data modules // Support: IE <=9 - 11, Edge 12 - 15 // Microsoft forgot to hump their vendor prefix (#9572) function camelCase( string ) { return string.replace( rmsPrefix, "ms-" ).replace( rdashAlpha, fcamelCase ); } var acceptData = function( owner ) { // Accepts only: // - Node // - Node.ELEMENT_NODE // - Node.DOCUMENT_NODE // - Object // - Any return owner.nodeType === 1 || owner.nodeType === 9 || !( +owner.nodeType ); }; function Data() { this.expando = jQuery.expando + Data.uid++; } Data.uid = 1; Data.prototype = { cache: function( owner ) { // Check if the owner object already has a cache var value = owner[ this.expando ]; // If not, create one if ( !value ) { value = {}; // We can accept data for non-element nodes in modern browsers, // but we should not, see #8335. // Always return an empty object. if ( acceptData( owner ) ) { // If it is a node unlikely to be stringify-ed or looped over // use plain assignment if ( owner.nodeType ) { owner[ this.expando ] = value; // Otherwise secure it in a non-enumerable property // configurable must be true to allow the property to be // deleted when data is removed } else { Object.defineProperty( owner, this.expando, { value: value, configurable: true } ); } } } return value; }, set: function( owner, data, value ) { var prop, cache = this.cache( owner ); // Handle: [ owner, key, value ] args // Always use camelCase key (gh-2257) if ( typeof data === "string" ) { cache[ camelCase( data ) ] = value; // Handle: [ owner, { properties } ] args } else { // Copy the properties one-by-one to the cache object for ( prop in data ) { cache[ camelCase( prop ) ] = data[ prop ]; } } return cache; }, get: function( owner, key ) { return key === undefined ? this.cache( owner ) : // Always use camelCase key (gh-2257) owner[ this.expando ] && owner[ this.expando ][ camelCase( key ) ]; }, access: function( owner, key, value ) { // In cases where either: // // 1. No key was specified // 2. A string key was specified, but no value provided // // Take the "read" path and allow the get method to determine // which value to return, respectively either: // // 1. The entire cache object // 2. The data stored at the key // if ( key === undefined || ( ( key && typeof key === "string" ) && value === undefined ) ) { return this.get( owner, key ); } // When the key is not a string, or both a key and value // are specified, set or extend (existing objects) with either: // // 1. An object of properties // 2. A key and value // this.set( owner, key, value ); // Since the "set" path can have two possible entry points // return the expected data based on which path was taken[*] return value !== undefined ? value : key; }, remove: function( owner, key ) { var i, cache = owner[ this.expando ]; if ( cache === undefined ) { return; } if ( key !== undefined ) { // Support array or space separated string of keys if ( Array.isArray( key ) ) { // If key is an array of keys... // We always set camelCase keys, so remove that. key = key.map( camelCase ); } else { key = camelCase( key ); // If a key with the spaces exists, use it. // Otherwise, create an array by matching non-whitespace key = key in cache ? 
[ key ] : ( key.match( rnothtmlwhite ) || [] ); } i = key.length; while ( i-- ) { delete cache[ key[ i ] ]; } } // Remove the expando if there's no more data if ( key === undefined || jQuery.isEmptyObject( cache ) ) { // Support: Chrome <=35 - 45 // Webkit & Blink performance suffers when deleting properties // from DOM nodes, so set to undefined instead // https://bugs.chromium.org/p/chromium/issues/detail?id=378607 (bug restricted) if ( owner.nodeType ) { owner[ this.expando ] = undefined; } else { delete owner[ this.expando ]; } } }, hasData: function( owner ) { var cache = owner[ this.expando ]; return cache !== undefined && !jQuery.isEmptyObject( cache ); } }; var dataPriv = new Data(); var dataUser = new Data(); // Implementation Summary // // 1. Enforce API surface and semantic compatibility with 1.9.x branch // 2. Improve the module's maintainability by reducing the storage // paths to a single mechanism. // 3. Use the same single mechanism to support "private" and "user" data. // 4. _Never_ expose "private" data to user code (TODO: Drop _data, _removeData) // 5. Avoid exposing implementation details on user objects (eg. expando properties) // 6. Provide a clear path for implementation upgrade to WeakMap in 2014 var rbrace = /^(?:\{[\w\W]*\}|\[[\w\W]*\])$/, rmultiDash = /[A-Z]/g; function getData( data ) { if ( data === "true" ) { return true; } if ( data === "false" ) { return false; } if ( data === "null" ) { return null; } // Only convert to a number if it doesn't change the string if ( data === +data + "" ) { return +data; } if ( rbrace.test( data ) ) { return JSON.parse( data ); } return data; } function dataAttr( elem, key, data ) { var name; // If nothing was found internally, try to fetch any // data from the HTML5 data-* attribute if ( data === undefined && elem.nodeType === 1 ) { name = "data-" + key.replace( rmultiDash, "-$&" ).toLowerCase(); data = elem.getAttribute( name ); if ( typeof data === "string" ) { try { data = getData( data ); } catch ( e ) {} // Make sure we set the data so it isn't changed later dataUser.set( elem, key, data ); } else { data = undefined; } } return data; } jQuery.extend( { hasData: function( elem ) { return dataUser.hasData( elem ) || dataPriv.hasData( elem ); }, data: function( elem, name, data ) { return dataUser.access( elem, name, data ); }, removeData: function( elem, name ) { dataUser.remove( elem, name ); }, // TODO: Now that all calls to _data and _removeData have been replaced // with direct calls to dataPriv methods, these can be deprecated. 
_data: function( elem, name, data ) { return dataPriv.access( elem, name, data ); }, _removeData: function( elem, name ) { dataPriv.remove( elem, name ); } } ); jQuery.fn.extend( { data: function( key, value ) { var i, name, data, elem = this[ 0 ], attrs = elem && elem.attributes; // Gets all values if ( key === undefined ) { if ( this.length ) { data = dataUser.get( elem ); if ( elem.nodeType === 1 && !dataPriv.get( elem, "hasDataAttrs" ) ) { i = attrs.length; while ( i-- ) { // Support: IE 11 only // The attrs elements can be null (#14894) if ( attrs[ i ] ) { name = attrs[ i ].name; if ( name.indexOf( "data-" ) === 0 ) { name = camelCase( name.slice( 5 ) ); dataAttr( elem, name, data[ name ] ); } } } dataPriv.set( elem, "hasDataAttrs", true ); } } return data; } // Sets multiple values if ( typeof key === "object" ) { return this.each( function() { dataUser.set( this, key ); } ); } return access( this, function( value ) { var data; // The calling jQuery object (element matches) is not empty // (and therefore has an element appears at this[ 0 ]) and the // `value` parameter was not undefined. An empty jQuery object // will result in `undefined` for elem = this[ 0 ] which will // throw an exception if an attempt to read a data cache is made. if ( elem && value === undefined ) { // Attempt to get data from the cache // The key will always be camelCased in Data data = dataUser.get( elem, key ); if ( data !== undefined ) { return data; } // Attempt to "discover" the data in // HTML5 custom data-* attrs data = dataAttr( elem, key ); if ( data !== undefined ) { return data; } // We tried really hard, but the data doesn't exist. return; } // Set the data... this.each( function() { // We always store the camelCased key dataUser.set( this, key, value ); } ); }, null, value, arguments.length > 1, null, true ); }, removeData: function( key ) { return this.each( function() { dataUser.remove( this, key ); } ); } } ); jQuery.extend( { queue: function( elem, type, data ) { var queue; if ( elem ) { type = ( type || "fx" ) + "queue"; queue = dataPriv.get( elem, type ); // Speed up dequeue by getting out quickly if this is just a lookup if ( data ) { if ( !queue || Array.isArray( data ) ) { queue = dataPriv.access( elem, type, jQuery.makeArray( data ) ); } else { queue.push( data ); } } return queue || []; } }, dequeue: function( elem, type ) { type = type || "fx"; var queue = jQuery.queue( elem, type ), startLength = queue.length, fn = queue.shift(), hooks = jQuery._queueHooks( elem, type ), next = function() { jQuery.dequeue( elem, type ); }; // If the fx queue is dequeued, always remove the progress sentinel if ( fn === "inprogress" ) { fn = queue.shift(); startLength--; } if ( fn ) { // Add a progress sentinel to prevent the fx queue from being // automatically dequeued if ( type === "fx" ) { queue.unshift( "inprogress" ); } // Clear up the last queue stop function delete hooks.stop; fn.call( elem, next, hooks ); } if ( !startLength && hooks ) { hooks.empty.fire(); } }, // Not public - generate a queueHooks object, or return the current one _queueHooks: function( elem, type ) { var key = type + "queueHooks"; return dataPriv.get( elem, key ) || dataPriv.access( elem, key, { empty: jQuery.Callbacks( "once memory" ).add( function() { dataPriv.remove( elem, [ type + "queue", key ] ); } ) } ); } } ); jQuery.fn.extend( { queue: function( type, data ) { var setter = 2; if ( typeof type !== "string" ) { data = type; type = "fx"; setter--; } if ( arguments.length < setter ) { return jQuery.queue( this[ 0 ], 
type ); } return data === undefined ? this : this.each( function() { var queue = jQuery.queue( this, type, data ); // Ensure a hooks for this queue jQuery._queueHooks( this, type ); if ( type === "fx" && queue[ 0 ] !== "inprogress" ) { jQuery.dequeue( this, type ); } } ); }, dequeue: function( type ) { return this.each( function() { jQuery.dequeue( this, type ); } ); }, clearQueue: function( type ) { return this.queue( type || "fx", [] ); }, // Get a promise resolved when queues of a certain type // are emptied (fx is the type by default) promise: function( type, obj ) { var tmp, count = 1, defer = jQuery.Deferred(), elements = this, i = this.length, resolve = function() { if ( !( --count ) ) { defer.resolveWith( elements, [ elements ] ); } }; if ( typeof type !== "string" ) { obj = type; type = undefined; } type = type || "fx"; while ( i-- ) { tmp = dataPriv.get( elements[ i ], type + "queueHooks" ); if ( tmp && tmp.empty ) { count++; tmp.empty.add( resolve ); } } resolve(); return defer.promise( obj ); } } ); var pnum = ( /[+-]?(?:\d*\.|)\d+(?:[eE][+-]?\d+|)/ ).source; var rcssNum = new RegExp( "^(?:([+-])=|)(" + pnum + ")([a-z%]*)$", "i" ); var cssExpand = [ "Top", "Right", "Bottom", "Left" ]; var isHiddenWithinTree = function( elem, el ) { // isHiddenWithinTree might be called from jQuery#filter function; // in that case, element will be second argument elem = el || elem; // Inline style trumps all return elem.style.display === "none" || elem.style.display === "" && // Otherwise, check computed style // Support: Firefox <=43 - 45 // Disconnected elements can have computed display: none, so first confirm that elem is // in the document. jQuery.contains( elem.ownerDocument, elem ) && jQuery.css( elem, "display" ) === "none"; }; var swap = function( elem, options, callback, args ) { var ret, name, old = {}; // Remember the old values, and insert the new ones for ( name in options ) { old[ name ] = elem.style[ name ]; elem.style[ name ] = options[ name ]; } ret = callback.apply( elem, args || [] ); // Revert the old values for ( name in options ) { elem.style[ name ] = old[ name ]; } return ret; }; function adjustCSS( elem, prop, valueParts, tween ) { var adjusted, scale, maxIterations = 20, currentValue = tween ? function() { return tween.cur(); } : function() { return jQuery.css( elem, prop, "" ); }, initial = currentValue(), unit = valueParts && valueParts[ 3 ] || ( jQuery.cssNumber[ prop ] ? "" : "px" ), // Starting value computation is required for potential unit mismatches initialInUnit = ( jQuery.cssNumber[ prop ] || unit !== "px" && +initial ) && rcssNum.exec( jQuery.css( elem, prop ) ); if ( initialInUnit && initialInUnit[ 3 ] !== unit ) { // Support: Firefox <=54 // Halve the iteration target value to prevent interference from CSS upper bounds (gh-2144) initial = initial / 2; // Trust units reported by jQuery.css unit = unit || initialInUnit[ 3 ]; // Iteratively approximate from a nonzero starting point initialInUnit = +initial || 1; while ( maxIterations-- ) { // Evaluate and update our best guess (doubling guesses that zero out). // Finish if the scale equals or crosses 1 (making the old*new product non-positive). 
jQuery.style( elem, prop, initialInUnit + unit ); if ( ( 1 - scale ) * ( 1 - ( scale = currentValue() / initial || 0.5 ) ) <= 0 ) { maxIterations = 0; } initialInUnit = initialInUnit / scale; } initialInUnit = initialInUnit * 2; jQuery.style( elem, prop, initialInUnit + unit ); // Make sure we update the tween properties later on valueParts = valueParts || []; } if ( valueParts ) { initialInUnit = +initialInUnit || +initial || 0; // Apply relative offset (+=/-=) if specified adjusted = valueParts[ 1 ] ? initialInUnit + ( valueParts[ 1 ] + 1 ) * valueParts[ 2 ] : +valueParts[ 2 ]; if ( tween ) { tween.unit = unit; tween.start = initialInUnit; tween.end = adjusted; } } return adjusted; } var defaultDisplayMap = {}; function getDefaultDisplay( elem ) { var temp, doc = elem.ownerDocument, nodeName = elem.nodeName, display = defaultDisplayMap[ nodeName ]; if ( display ) { return display; } temp = doc.body.appendChild( doc.createElement( nodeName ) ); display = jQuery.css( temp, "display" ); temp.parentNode.removeChild( temp ); if ( display === "none" ) { display = "block"; } defaultDisplayMap[ nodeName ] = display; return display; } function showHide( elements, show ) { var display, elem, values = [], index = 0, length = elements.length; // Determine new display value for elements that need to change for ( ; index < length; index++ ) { elem = elements[ index ]; if ( !elem.style ) { continue; } display = elem.style.display; if ( show ) { // Since we force visibility upon cascade-hidden elements, an immediate (and slow) // check is required in this first loop unless we have a nonempty display value (either // inline or about-to-be-restored) if ( display === "none" ) { values[ index ] = dataPriv.get( elem, "display" ) || null; if ( !values[ index ] ) { elem.style.display = ""; } } if ( elem.style.display === "" && isHiddenWithinTree( elem ) ) { values[ index ] = getDefaultDisplay( elem ); } } else { if ( display !== "none" ) { values[ index ] = "none"; // Remember what we're overwriting dataPriv.set( elem, "display", display ); } } } // Set the display of the elements in a second loop to avoid constant reflow for ( index = 0; index < length; index++ ) { if ( values[ index ] != null ) { elements[ index ].style.display = values[ index ]; } } return elements; } jQuery.fn.extend( { show: function() { return showHide( this, true ); }, hide: function() { return showHide( this ); }, toggle: function( state ) { if ( typeof state === "boolean" ) { return state ? this.show() : this.hide(); } return this.each( function() { if ( isHiddenWithinTree( this ) ) { jQuery( this ).show(); } else { jQuery( this ).hide(); } } ); } } ); var rcheckableType = ( /^(?:checkbox|radio)$/i ); var rtagName = ( /<([a-z][^\/\0>\x20\t\r\n\f]+)/i ); var rscriptType = ( /^$|^module$|\/(?:java|ecma)script/i ); // We have to close these tags to support XHTML (#13200) var wrapMap = { // Support: IE <=9 only option: [ 1, "" ], // XHTML parsers do not magically insert elements in the // same way that tag soup parsers do. So we cannot shorten // this by omitting or other required elements. thead: [ 1, "", "
" ], col: [ 2, "", "
" ], tr: [ 2, "", "
" ], td: [ 3, "", "
" ], _default: [ 0, "", "" ] }; // Support: IE <=9 only wrapMap.optgroup = wrapMap.option; wrapMap.tbody = wrapMap.tfoot = wrapMap.colgroup = wrapMap.caption = wrapMap.thead; wrapMap.th = wrapMap.td; function getAll( context, tag ) { // Support: IE <=9 - 11 only // Use typeof to avoid zero-argument method invocation on host objects (#15151) var ret; if ( typeof context.getElementsByTagName !== "undefined" ) { ret = context.getElementsByTagName( tag || "*" ); } else if ( typeof context.querySelectorAll !== "undefined" ) { ret = context.querySelectorAll( tag || "*" ); } else { ret = []; } if ( tag === undefined || tag && nodeName( context, tag ) ) { return jQuery.merge( [ context ], ret ); } return ret; } // Mark scripts as having already been evaluated function setGlobalEval( elems, refElements ) { var i = 0, l = elems.length; for ( ; i < l; i++ ) { dataPriv.set( elems[ i ], "globalEval", !refElements || dataPriv.get( refElements[ i ], "globalEval" ) ); } } var rhtml = /<|&#?\w+;/; function buildFragment( elems, context, scripts, selection, ignored ) { var elem, tmp, tag, wrap, contains, j, fragment = context.createDocumentFragment(), nodes = [], i = 0, l = elems.length; for ( ; i < l; i++ ) { elem = elems[ i ]; if ( elem || elem === 0 ) { // Add nodes directly if ( toType( elem ) === "object" ) { // Support: Android <=4.0 only, PhantomJS 1 only // push.apply(_, arraylike) throws on ancient WebKit jQuery.merge( nodes, elem.nodeType ? [ elem ] : elem ); // Convert non-html into a text node } else if ( !rhtml.test( elem ) ) { nodes.push( context.createTextNode( elem ) ); // Convert html into DOM nodes } else { tmp = tmp || fragment.appendChild( context.createElement( "div" ) ); // Deserialize a standard representation tag = ( rtagName.exec( elem ) || [ "", "" ] )[ 1 ].toLowerCase(); wrap = wrapMap[ tag ] || wrapMap._default; tmp.innerHTML = wrap[ 1 ] + jQuery.htmlPrefilter( elem ) + wrap[ 2 ]; // Descend through wrappers to the right content j = wrap[ 0 ]; while ( j-- ) { tmp = tmp.lastChild; } // Support: Android <=4.0 only, PhantomJS 1 only // push.apply(_, arraylike) throws on ancient WebKit jQuery.merge( nodes, tmp.childNodes ); // Remember the top-level container tmp = fragment.firstChild; // Ensure the created nodes are orphaned (#12392) tmp.textContent = ""; } } } // Remove wrapper from fragment fragment.textContent = ""; i = 0; while ( ( elem = nodes[ i++ ] ) ) { // Skip elements already in the context collection (trac-4087) if ( selection && jQuery.inArray( elem, selection ) > -1 ) { if ( ignored ) { ignored.push( elem ); } continue; } contains = jQuery.contains( elem.ownerDocument, elem ); // Append to fragment tmp = getAll( fragment.appendChild( elem ), "script" ); // Preserve script evaluation history if ( contains ) { setGlobalEval( tmp ); } // Capture executables if ( scripts ) { j = 0; while ( ( elem = tmp[ j++ ] ) ) { if ( rscriptType.test( elem.type || "" ) ) { scripts.push( elem ); } } } } return fragment; } ( function() { var fragment = document.createDocumentFragment(), div = fragment.appendChild( document.createElement( "div" ) ), input = document.createElement( "input" ); // Support: Android 4.0 - 4.3 only // Check state lost if the name is set (#11217) // Support: Windows Web Apps (WWA) // `name` and `type` must use .setAttribute for WWA (#14901) input.setAttribute( "type", "radio" ); input.setAttribute( "checked", "checked" ); input.setAttribute( "name", "t" ); div.appendChild( input ); // Support: Android <=4.1 only // Older WebKit doesn't clone checked state 
correctly in fragments support.checkClone = div.cloneNode( true ).cloneNode( true ).lastChild.checked; // Support: IE <=11 only // Make sure textarea (and checkbox) defaultValue is properly cloned div.innerHTML = ""; support.noCloneChecked = !!div.cloneNode( true ).lastChild.defaultValue; } )(); var documentElement = document.documentElement; var rkeyEvent = /^key/, rmouseEvent = /^(?:mouse|pointer|contextmenu|drag|drop)|click/, rtypenamespace = /^([^.]*)(?:\.(.+)|)/; function returnTrue() { return true; } function returnFalse() { return false; } // Support: IE <=9 only // See #13393 for more info function safeActiveElement() { try { return document.activeElement; } catch ( err ) { } } function on( elem, types, selector, data, fn, one ) { var origFn, type; // Types can be a map of types/handlers if ( typeof types === "object" ) { // ( types-Object, selector, data ) if ( typeof selector !== "string" ) { // ( types-Object, data ) data = data || selector; selector = undefined; } for ( type in types ) { on( elem, type, selector, data, types[ type ], one ); } return elem; } if ( data == null && fn == null ) { // ( types, fn ) fn = selector; data = selector = undefined; } else if ( fn == null ) { if ( typeof selector === "string" ) { // ( types, selector, fn ) fn = data; data = undefined; } else { // ( types, data, fn ) fn = data; data = selector; selector = undefined; } } if ( fn === false ) { fn = returnFalse; } else if ( !fn ) { return elem; } if ( one === 1 ) { origFn = fn; fn = function( event ) { // Can use an empty set, since event contains the info jQuery().off( event ); return origFn.apply( this, arguments ); }; // Use same guid so caller can remove using origFn fn.guid = origFn.guid || ( origFn.guid = jQuery.guid++ ); } return elem.each( function() { jQuery.event.add( this, types, fn, data, selector ); } ); } /* * Helper functions for managing events -- not part of the public interface. * Props to Dean Edwards' addEvent library for many of the ideas. */ jQuery.event = { global: {}, add: function( elem, types, handler, data, selector ) { var handleObjIn, eventHandle, tmp, events, t, handleObj, special, handlers, type, namespaces, origType, elemData = dataPriv.get( elem ); // Don't attach events to noData or text/comment nodes (but allow plain objects) if ( !elemData ) { return; } // Caller can pass in an object of custom data in lieu of the handler if ( handler.handler ) { handleObjIn = handler; handler = handleObjIn.handler; selector = handleObjIn.selector; } // Ensure that invalid selectors throw exceptions at attach time // Evaluate against documentElement in case elem is a non-element node (e.g., document) if ( selector ) { jQuery.find.matchesSelector( documentElement, selector ); } // Make sure that the handler has a unique ID, used to find/remove it later if ( !handler.guid ) { handler.guid = jQuery.guid++; } // Init the element's event structure and main handler, if this is the first if ( !( events = elemData.events ) ) { events = elemData.events = {}; } if ( !( eventHandle = elemData.handle ) ) { eventHandle = elemData.handle = function( e ) { // Discard the second event of a jQuery.event.trigger() and // when an event is called after a page has unloaded return typeof jQuery !== "undefined" && jQuery.event.triggered !== e.type ? 
jQuery.event.dispatch.apply( elem, arguments ) : undefined; }; } // Handle multiple events separated by a space types = ( types || "" ).match( rnothtmlwhite ) || [ "" ]; t = types.length; while ( t-- ) { tmp = rtypenamespace.exec( types[ t ] ) || []; type = origType = tmp[ 1 ]; namespaces = ( tmp[ 2 ] || "" ).split( "." ).sort(); // There *must* be a type, no attaching namespace-only handlers if ( !type ) { continue; } // If event changes its type, use the special event handlers for the changed type special = jQuery.event.special[ type ] || {}; // If selector defined, determine special event api type, otherwise given type type = ( selector ? special.delegateType : special.bindType ) || type; // Update special based on newly reset type special = jQuery.event.special[ type ] || {}; // handleObj is passed to all event handlers handleObj = jQuery.extend( { type: type, origType: origType, data: data, handler: handler, guid: handler.guid, selector: selector, needsContext: selector && jQuery.expr.match.needsContext.test( selector ), namespace: namespaces.join( "." ) }, handleObjIn ); // Init the event handler queue if we're the first if ( !( handlers = events[ type ] ) ) { handlers = events[ type ] = []; handlers.delegateCount = 0; // Only use addEventListener if the special events handler returns false if ( !special.setup || special.setup.call( elem, data, namespaces, eventHandle ) === false ) { if ( elem.addEventListener ) { elem.addEventListener( type, eventHandle ); } } } if ( special.add ) { special.add.call( elem, handleObj ); if ( !handleObj.handler.guid ) { handleObj.handler.guid = handler.guid; } } // Add to the element's handler list, delegates in front if ( selector ) { handlers.splice( handlers.delegateCount++, 0, handleObj ); } else { handlers.push( handleObj ); } // Keep track of which events have ever been used, for event optimization jQuery.event.global[ type ] = true; } }, // Detach an event or set of events from an element remove: function( elem, types, handler, selector, mappedTypes ) { var j, origCount, tmp, events, t, handleObj, special, handlers, type, namespaces, origType, elemData = dataPriv.hasData( elem ) && dataPriv.get( elem ); if ( !elemData || !( events = elemData.events ) ) { return; } // Once for each type.namespace in types; type may be omitted types = ( types || "" ).match( rnothtmlwhite ) || [ "" ]; t = types.length; while ( t-- ) { tmp = rtypenamespace.exec( types[ t ] ) || []; type = origType = tmp[ 1 ]; namespaces = ( tmp[ 2 ] || "" ).split( "." ).sort(); // Unbind all events (on this namespace, if provided) for the element if ( !type ) { for ( type in events ) { jQuery.event.remove( elem, type + types[ t ], handler, selector, true ); } continue; } special = jQuery.event.special[ type ] || {}; type = ( selector ? 
special.delegateType : special.bindType ) || type; handlers = events[ type ] || []; tmp = tmp[ 2 ] && new RegExp( "(^|\\.)" + namespaces.join( "\\.(?:.*\\.|)" ) + "(\\.|$)" ); // Remove matching events origCount = j = handlers.length; while ( j-- ) { handleObj = handlers[ j ]; if ( ( mappedTypes || origType === handleObj.origType ) && ( !handler || handler.guid === handleObj.guid ) && ( !tmp || tmp.test( handleObj.namespace ) ) && ( !selector || selector === handleObj.selector || selector === "**" && handleObj.selector ) ) { handlers.splice( j, 1 ); if ( handleObj.selector ) { handlers.delegateCount--; } if ( special.remove ) { special.remove.call( elem, handleObj ); } } } // Remove generic event handler if we removed something and no more handlers exist // (avoids potential for endless recursion during removal of special event handlers) if ( origCount && !handlers.length ) { if ( !special.teardown || special.teardown.call( elem, namespaces, elemData.handle ) === false ) { jQuery.removeEvent( elem, type, elemData.handle ); } delete events[ type ]; } } // Remove data and the expando if it's no longer used if ( jQuery.isEmptyObject( events ) ) { dataPriv.remove( elem, "handle events" ); } }, dispatch: function( nativeEvent ) { // Make a writable jQuery.Event from the native event object var event = jQuery.event.fix( nativeEvent ); var i, j, ret, matched, handleObj, handlerQueue, args = new Array( arguments.length ), handlers = ( dataPriv.get( this, "events" ) || {} )[ event.type ] || [], special = jQuery.event.special[ event.type ] || {}; // Use the fix-ed jQuery.Event rather than the (read-only) native event args[ 0 ] = event; for ( i = 1; i < arguments.length; i++ ) { args[ i ] = arguments[ i ]; } event.delegateTarget = this; // Call the preDispatch hook for the mapped type, and let it bail if desired if ( special.preDispatch && special.preDispatch.call( this, event ) === false ) { return; } // Determine handlers handlerQueue = jQuery.event.handlers.call( this, event, handlers ); // Run delegates first; they may want to stop propagation beneath us i = 0; while ( ( matched = handlerQueue[ i++ ] ) && !event.isPropagationStopped() ) { event.currentTarget = matched.elem; j = 0; while ( ( handleObj = matched.handlers[ j++ ] ) && !event.isImmediatePropagationStopped() ) { // Triggered event must either 1) have no namespace, or 2) have namespace(s) // a subset or equal to those in the bound event (both can have no namespace). 
if ( !event.rnamespace || event.rnamespace.test( handleObj.namespace ) ) { event.handleObj = handleObj; event.data = handleObj.data; ret = ( ( jQuery.event.special[ handleObj.origType ] || {} ).handle || handleObj.handler ).apply( matched.elem, args ); if ( ret !== undefined ) { if ( ( event.result = ret ) === false ) { event.preventDefault(); event.stopPropagation(); } } } } } // Call the postDispatch hook for the mapped type if ( special.postDispatch ) { special.postDispatch.call( this, event ); } return event.result; }, handlers: function( event, handlers ) { var i, handleObj, sel, matchedHandlers, matchedSelectors, handlerQueue = [], delegateCount = handlers.delegateCount, cur = event.target; // Find delegate handlers if ( delegateCount && // Support: IE <=9 // Black-hole SVG instance trees (trac-13180) cur.nodeType && // Support: Firefox <=42 // Suppress spec-violating clicks indicating a non-primary pointer button (trac-3861) // https://www.w3.org/TR/DOM-Level-3-Events/#event-type-click // Support: IE 11 only // ...but not arrow key "clicks" of radio inputs, which can have `button` -1 (gh-2343) !( event.type === "click" && event.button >= 1 ) ) { for ( ; cur !== this; cur = cur.parentNode || this ) { // Don't check non-elements (#13208) // Don't process clicks on disabled elements (#6911, #8165, #11382, #11764) if ( cur.nodeType === 1 && !( event.type === "click" && cur.disabled === true ) ) { matchedHandlers = []; matchedSelectors = {}; for ( i = 0; i < delegateCount; i++ ) { handleObj = handlers[ i ]; // Don't conflict with Object.prototype properties (#13203) sel = handleObj.selector + " "; if ( matchedSelectors[ sel ] === undefined ) { matchedSelectors[ sel ] = handleObj.needsContext ? jQuery( sel, this ).index( cur ) > -1 : jQuery.find( sel, this, null, [ cur ] ).length; } if ( matchedSelectors[ sel ] ) { matchedHandlers.push( handleObj ); } } if ( matchedHandlers.length ) { handlerQueue.push( { elem: cur, handlers: matchedHandlers } ); } } } } // Add the remaining (directly-bound) handlers cur = this; if ( delegateCount < handlers.length ) { handlerQueue.push( { elem: cur, handlers: handlers.slice( delegateCount ) } ); } return handlerQueue; }, addProp: function( name, hook ) { Object.defineProperty( jQuery.Event.prototype, name, { enumerable: true, configurable: true, get: isFunction( hook ) ? function() { if ( this.originalEvent ) { return hook( this.originalEvent ); } } : function() { if ( this.originalEvent ) { return this.originalEvent[ name ]; } }, set: function( value ) { Object.defineProperty( this, name, { enumerable: true, configurable: true, writable: true, value: value } ); } } ); }, fix: function( originalEvent ) { return originalEvent[ jQuery.expando ] ? 
originalEvent : new jQuery.Event( originalEvent ); }, special: { load: { // Prevent triggered image.load events from bubbling to window.load noBubble: true }, focus: { // Fire native event if possible so blur/focus sequence is correct trigger: function() { if ( this !== safeActiveElement() && this.focus ) { this.focus(); return false; } }, delegateType: "focusin" }, blur: { trigger: function() { if ( this === safeActiveElement() && this.blur ) { this.blur(); return false; } }, delegateType: "focusout" }, click: { // For checkbox, fire native event so checked state will be right trigger: function() { if ( this.type === "checkbox" && this.click && nodeName( this, "input" ) ) { this.click(); return false; } }, // For cross-browser consistency, don't fire native .click() on links _default: function( event ) { return nodeName( event.target, "a" ); } }, beforeunload: { postDispatch: function( event ) { // Support: Firefox 20+ // Firefox doesn't alert if the returnValue field is not set. if ( event.result !== undefined && event.originalEvent ) { event.originalEvent.returnValue = event.result; } } } } }; jQuery.removeEvent = function( elem, type, handle ) { // This "if" is needed for plain objects if ( elem.removeEventListener ) { elem.removeEventListener( type, handle ); } }; jQuery.Event = function( src, props ) { // Allow instantiation without the 'new' keyword if ( !( this instanceof jQuery.Event ) ) { return new jQuery.Event( src, props ); } // Event object if ( src && src.type ) { this.originalEvent = src; this.type = src.type; // Events bubbling up the document may have been marked as prevented // by a handler lower down the tree; reflect the correct value. this.isDefaultPrevented = src.defaultPrevented || src.defaultPrevented === undefined && // Support: Android <=2.3 only src.returnValue === false ? returnTrue : returnFalse; // Create target properties // Support: Safari <=6 - 7 only // Target should not be a text node (#504, #13143) this.target = ( src.target && src.target.nodeType === 3 ) ? 
src.target.parentNode : src.target; this.currentTarget = src.currentTarget; this.relatedTarget = src.relatedTarget; // Event type } else { this.type = src; } // Put explicitly provided properties onto the event object if ( props ) { jQuery.extend( this, props ); } // Create a timestamp if incoming event doesn't have one this.timeStamp = src && src.timeStamp || Date.now(); // Mark it as fixed this[ jQuery.expando ] = true; }; // jQuery.Event is based on DOM3 Events as specified by the ECMAScript Language Binding // https://www.w3.org/TR/2003/WD-DOM-Level-3-Events-20030331/ecma-script-binding.html jQuery.Event.prototype = { constructor: jQuery.Event, isDefaultPrevented: returnFalse, isPropagationStopped: returnFalse, isImmediatePropagationStopped: returnFalse, isSimulated: false, preventDefault: function() { var e = this.originalEvent; this.isDefaultPrevented = returnTrue; if ( e && !this.isSimulated ) { e.preventDefault(); } }, stopPropagation: function() { var e = this.originalEvent; this.isPropagationStopped = returnTrue; if ( e && !this.isSimulated ) { e.stopPropagation(); } }, stopImmediatePropagation: function() { var e = this.originalEvent; this.isImmediatePropagationStopped = returnTrue; if ( e && !this.isSimulated ) { e.stopImmediatePropagation(); } this.stopPropagation(); } }; // Includes all common event props including KeyEvent and MouseEvent specific props jQuery.each( { altKey: true, bubbles: true, cancelable: true, changedTouches: true, ctrlKey: true, detail: true, eventPhase: true, metaKey: true, pageX: true, pageY: true, shiftKey: true, view: true, "char": true, charCode: true, key: true, keyCode: true, button: true, buttons: true, clientX: true, clientY: true, offsetX: true, offsetY: true, pointerId: true, pointerType: true, screenX: true, screenY: true, targetTouches: true, toElement: true, touches: true, which: function( event ) { var button = event.button; // Add which for key events if ( event.which == null && rkeyEvent.test( event.type ) ) { return event.charCode != null ? event.charCode : event.keyCode; } // Add which for click: 1 === left; 2 === middle; 3 === right if ( !event.which && button !== undefined && rmouseEvent.test( event.type ) ) { if ( button & 1 ) { return 1; } if ( button & 2 ) { return 3; } if ( button & 4 ) { return 2; } return 0; } return event.which; } }, jQuery.event.addProp ); // Create mouseenter/leave events using mouseover/out and event-time checks // so that event delegation works in jQuery. // Do the same for pointerenter/pointerleave and pointerover/pointerout // // Support: Safari 7 only // Safari sends mouseenter too often; see: // https://bugs.chromium.org/p/chromium/issues/detail?id=470258 // for the description of the bug (it existed in older Chrome versions as well). jQuery.each( { mouseenter: "mouseover", mouseleave: "mouseout", pointerenter: "pointerover", pointerleave: "pointerout" }, function( orig, fix ) { jQuery.event.special[ orig ] = { delegateType: fix, bindType: fix, handle: function( event ) { var ret, target = this, related = event.relatedTarget, handleObj = event.handleObj; // For mouseenter/leave call the handler if related is outside the target. 
// NB: No relatedTarget if the mouse left/entered the browser window if ( !related || ( related !== target && !jQuery.contains( target, related ) ) ) { event.type = handleObj.origType; ret = handleObj.handler.apply( this, arguments ); event.type = fix; } return ret; } }; } ); jQuery.fn.extend( { on: function( types, selector, data, fn ) { return on( this, types, selector, data, fn ); }, one: function( types, selector, data, fn ) { return on( this, types, selector, data, fn, 1 ); }, off: function( types, selector, fn ) { var handleObj, type; if ( types && types.preventDefault && types.handleObj ) { // ( event ) dispatched jQuery.Event handleObj = types.handleObj; jQuery( types.delegateTarget ).off( handleObj.namespace ? handleObj.origType + "." + handleObj.namespace : handleObj.origType, handleObj.selector, handleObj.handler ); return this; } if ( typeof types === "object" ) { // ( types-object [, selector] ) for ( type in types ) { this.off( type, selector, types[ type ] ); } return this; } if ( selector === false || typeof selector === "function" ) { // ( types [, fn] ) fn = selector; selector = undefined; } if ( fn === false ) { fn = returnFalse; } return this.each( function() { jQuery.event.remove( this, types, fn, selector ); } ); } } ); var /* eslint-disable max-len */ // See https://github.com/eslint/eslint/issues/3229 rxhtmlTag = /<(?!area|br|col|embed|hr|img|input|link|meta|param)(([a-z][^\/\0>\x20\t\r\n\f]*)[^>]*)\/>/gi, /* eslint-enable */ // Support: IE <=10 - 11, Edge 12 - 13 only // In IE/Edge using regex groups here causes severe slowdowns. // See https://connect.microsoft.com/IE/feedback/details/1736512/ rnoInnerhtml = /<script|<style|<link/i, // checked="checked" or checked rchecked = /checked\s*(?:[^=]|=\s*.checked.)/i, rcleanScript = /^\s*<!(?:\[CDATA\[|--)|(?:\]\]|--)>\s*$/g; // Prefer a tbody over its parent table for containing new rows function manipulationTarget( elem, content ) { if ( nodeName( elem, "table" ) && nodeName( content.nodeType !== 11 ? content : content.firstChild, "tr" ) ) { return jQuery( elem ).children( "tbody" )[ 0 ] || elem; } return elem; } // Replace/restore the type attribute of script elements for safe DOM manipulation function disableScript( elem ) { elem.type = ( elem.getAttribute( "type" ) !== null ) + "/" + elem.type; return elem; } function restoreScript( elem ) { if ( ( elem.type || "" ).slice( 0, 5 ) === "true/" ) { elem.type = elem.type.slice( 5 ); } else { elem.removeAttribute( "type" ); } return elem; } function cloneCopyEvent( src, dest ) { var i, l, type, pdataOld, pdataCur, udataOld, udataCur, events; if ( dest.nodeType !== 1 ) { return; } // 1. Copy private data: events, handlers, etc. if ( dataPriv.hasData( src ) ) { pdataOld = dataPriv.access( src ); pdataCur = dataPriv.set( dest, pdataOld ); events = pdataOld.events; if ( events ) { delete pdataCur.handle; pdataCur.events = {}; for ( type in events ) { for ( i = 0, l = events[ type ].length; i < l; i++ ) { jQuery.event.add( dest, type, events[ type ][ i ] ); } } } } // 2. Copy user data if ( dataUser.hasData( src ) ) { udataOld = dataUser.access( src ); udataCur = jQuery.extend( {}, udataOld ); dataUser.set( dest, udataCur ); } } // Fix IE bugs, see support tests function fixInput( src, dest ) { var nodeName = dest.nodeName.toLowerCase(); // Fails to persist the checked state of a cloned checkbox or radio button.
if ( nodeName === "input" && rcheckableType.test( src.type ) ) { dest.checked = src.checked; // Fails to return the selected option to the default selected state when cloning options } else if ( nodeName === "input" || nodeName === "textarea" ) { dest.defaultValue = src.defaultValue; } } function domManip( collection, args, callback, ignored ) { // Flatten any nested arrays args = concat.apply( [], args ); var fragment, first, scripts, hasScripts, node, doc, i = 0, l = collection.length, iNoClone = l - 1, value = args[ 0 ], valueIsFunction = isFunction( value ); // We can't cloneNode fragments that contain checked, in WebKit if ( valueIsFunction || ( l > 1 && typeof value === "string" && !support.checkClone && rchecked.test( value ) ) ) { return collection.each( function( index ) { var self = collection.eq( index ); if ( valueIsFunction ) { args[ 0 ] = value.call( this, index, self.html() ); } domManip( self, args, callback, ignored ); } ); } if ( l ) { fragment = buildFragment( args, collection[ 0 ].ownerDocument, false, collection, ignored ); first = fragment.firstChild; if ( fragment.childNodes.length === 1 ) { fragment = first; } // Require either new content or an interest in ignored elements to invoke the callback if ( first || ignored ) { scripts = jQuery.map( getAll( fragment, "script" ), disableScript ); hasScripts = scripts.length; // Use the original fragment for the last item // instead of the first because it can end up // being emptied incorrectly in certain situations (#8070). for ( ; i < l; i++ ) { node = fragment; if ( i !== iNoClone ) { node = jQuery.clone( node, true, true ); // Keep references to cloned scripts for later restoration if ( hasScripts ) { // Support: Android <=4.0 only, PhantomJS 1 only // push.apply(_, arraylike) throws on ancient WebKit jQuery.merge( scripts, getAll( node, "script" ) ); } } callback.call( collection[ i ], node, i ); } if ( hasScripts ) { doc = scripts[ scripts.length - 1 ].ownerDocument; // Reenable scripts jQuery.map( scripts, restoreScript ); // Evaluate executable scripts on first document insertion for ( i = 0; i < hasScripts; i++ ) { node = scripts[ i ]; if ( rscriptType.test( node.type || "" ) && !dataPriv.access( node, "globalEval" ) && jQuery.contains( doc, node ) ) { if ( node.src && ( node.type || "" ).toLowerCase() !== "module" ) { // Optional AJAX dependency, but won't run scripts if not present if ( jQuery._evalUrl ) { jQuery._evalUrl( node.src ); } } else { DOMEval( node.textContent.replace( rcleanScript, "" ), doc, node ); } } } } } } return collection; } function remove( elem, selector, keepData ) { var node, nodes = selector ? 
jQuery.filter( selector, elem ) : elem, i = 0; for ( ; ( node = nodes[ i ] ) != null; i++ ) { if ( !keepData && node.nodeType === 1 ) { jQuery.cleanData( getAll( node ) ); } if ( node.parentNode ) { if ( keepData && jQuery.contains( node.ownerDocument, node ) ) { setGlobalEval( getAll( node, "script" ) ); } node.parentNode.removeChild( node ); } } return elem; } jQuery.extend( { htmlPrefilter: function( html ) { return html.replace( rxhtmlTag, "<$1></$2>" ); }, clone: function( elem, dataAndEvents, deepDataAndEvents ) { var i, l, srcElements, destElements, clone = elem.cloneNode( true ), inPage = jQuery.contains( elem.ownerDocument, elem ); // Fix IE cloning issues if ( !support.noCloneChecked && ( elem.nodeType === 1 || elem.nodeType === 11 ) && !jQuery.isXMLDoc( elem ) ) { // We eschew Sizzle here for performance reasons: https://jsperf.com/getall-vs-sizzle/2 destElements = getAll( clone ); srcElements = getAll( elem ); for ( i = 0, l = srcElements.length; i < l; i++ ) { fixInput( srcElements[ i ], destElements[ i ] ); } } // Copy the events from the original to the clone if ( dataAndEvents ) { if ( deepDataAndEvents ) { srcElements = srcElements || getAll( elem ); destElements = destElements || getAll( clone ); for ( i = 0, l = srcElements.length; i < l; i++ ) { cloneCopyEvent( srcElements[ i ], destElements[ i ] ); } } else { cloneCopyEvent( elem, clone ); } } // Preserve script evaluation history destElements = getAll( clone, "script" ); if ( destElements.length > 0 ) { setGlobalEval( destElements, !inPage && getAll( elem, "script" ) ); } // Return the cloned set return clone; }, cleanData: function( elems ) { var data, elem, type, special = jQuery.event.special, i = 0; for ( ; ( elem = elems[ i ] ) !== undefined; i++ ) { if ( acceptData( elem ) ) { if ( ( data = elem[ dataPriv.expando ] ) ) { if ( data.events ) { for ( type in data.events ) { if ( special[ type ] ) { jQuery.event.remove( elem, type ); // This is a shortcut to avoid jQuery.event.remove's overhead } else { jQuery.removeEvent( elem, type, data.handle ); } } } // Support: Chrome <=35 - 45+ // Assign undefined instead of using delete, see Data#remove elem[ dataPriv.expando ] = undefined; } if ( elem[ dataUser.expando ] ) { // Support: Chrome <=35 - 45+ // Assign undefined instead of using delete, see Data#remove elem[ dataUser.expando ] = undefined; } } } } } ); jQuery.fn.extend( { detach: function( selector ) { return remove( this, selector, true ); }, remove: function( selector ) { return remove( this, selector ); }, text: function( value ) { return access( this, function( value ) { return value === undefined ? 
jQuery.text( this ) : this.empty().each( function() { if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { this.textContent = value; } } ); }, null, value, arguments.length ); }, append: function() { return domManip( this, arguments, function( elem ) { if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { var target = manipulationTarget( this, elem ); target.appendChild( elem ); } } ); }, prepend: function() { return domManip( this, arguments, function( elem ) { if ( this.nodeType === 1 || this.nodeType === 11 || this.nodeType === 9 ) { var target = manipulationTarget( this, elem ); target.insertBefore( elem, target.firstChild ); } } ); }, before: function() { return domManip( this, arguments, function( elem ) { if ( this.parentNode ) { this.parentNode.insertBefore( elem, this ); } } ); }, after: function() { return domManip( this, arguments, function( elem ) { if ( this.parentNode ) { this.parentNode.insertBefore( elem, this.nextSibling ); } } ); }, empty: function() { var elem, i = 0; for ( ; ( elem = this[ i ] ) != null; i++ ) { if ( elem.nodeType === 1 ) { // Prevent memory leaks jQuery.cleanData( getAll( elem, false ) ); // Remove any remaining nodes elem.textContent = ""; } } return this; }, clone: function( dataAndEvents, deepDataAndEvents ) { dataAndEvents = dataAndEvents == null ? false : dataAndEvents; deepDataAndEvents = deepDataAndEvents == null ? dataAndEvents : deepDataAndEvents; return this.map( function() { return jQuery.clone( this, dataAndEvents, deepDataAndEvents ); } ); }, html: function( value ) { return access( this, function( value ) { var elem = this[ 0 ] || {}, i = 0, l = this.length; if ( value === undefined && elem.nodeType === 1 ) { return elem.innerHTML; } // See if we can take a shortcut and just use innerHTML if ( typeof value === "string" && !rnoInnerhtml.test( value ) && !wrapMap[ ( rtagName.exec( value ) || [ "", "" ] )[ 1 ].toLowerCase() ] ) { value = jQuery.htmlPrefilter( value ); try { for ( ; i < l; i++ ) { elem = this[ i ] || {}; // Remove element nodes and prevent memory leaks if ( elem.nodeType === 1 ) { jQuery.cleanData( getAll( elem, false ) ); elem.innerHTML = value; } } elem = 0; // If using innerHTML throws an exception, use the fallback method } catch ( e ) {} } if ( elem ) { this.empty().append( value ); } }, null, value, arguments.length ); }, replaceWith: function() { var ignored = []; // Make the changes, replacing each non-ignored context element with the new content return domManip( this, arguments, function( elem ) { var parent = this.parentNode; if ( jQuery.inArray( this, ignored ) < 0 ) { jQuery.cleanData( getAll( this ) ); if ( parent ) { parent.replaceChild( elem, this ); } } // Force callback invocation }, ignored ); } } ); jQuery.each( { appendTo: "append", prependTo: "prepend", insertBefore: "before", insertAfter: "after", replaceAll: "replaceWith" }, function( name, original ) { jQuery.fn[ name ] = function( selector ) { var elems, ret = [], insert = jQuery( selector ), last = insert.length - 1, i = 0; for ( ; i <= last; i++ ) { elems = i === last ? 
this : this.clone( true ); jQuery( insert[ i ] )[ original ]( elems ); // Support: Android <=4.0 only, PhantomJS 1 only // .get() because push.apply(_, arraylike) throws on ancient WebKit push.apply( ret, elems.get() ); } return this.pushStack( ret ); }; } ); var rnumnonpx = new RegExp( "^(" + pnum + ")(?!px)[a-z%]+$", "i" ); var getStyles = function( elem ) { // Support: IE <=11 only, Firefox <=30 (#15098, #14150) // IE throws on elements created in popups // FF meanwhile throws on frame elements through "defaultView.getComputedStyle" var view = elem.ownerDocument.defaultView; if ( !view || !view.opener ) { view = window; } return view.getComputedStyle( elem ); }; var rboxStyle = new RegExp( cssExpand.join( "|" ), "i" ); ( function() { // Executing both pixelPosition & boxSizingReliable tests require only one layout // so they're executed at the same time to save the second computation. function computeStyleTests() { // This is a singleton, we need to execute it only once if ( !div ) { return; } container.style.cssText = "position:absolute;left:-11111px;width:60px;" + "margin-top:1px;padding:0;border:0"; div.style.cssText = "position:relative;display:block;box-sizing:border-box;overflow:scroll;" + "margin:auto;border:1px;padding:1px;" + "width:60%;top:1%"; documentElement.appendChild( container ).appendChild( div ); var divStyle = window.getComputedStyle( div ); pixelPositionVal = divStyle.top !== "1%"; // Support: Android 4.0 - 4.3 only, Firefox <=3 - 44 reliableMarginLeftVal = roundPixelMeasures( divStyle.marginLeft ) === 12; // Support: Android 4.0 - 4.3 only, Safari <=9.1 - 10.1, iOS <=7.0 - 9.3 // Some styles come back with percentage values, even though they shouldn't div.style.right = "60%"; pixelBoxStylesVal = roundPixelMeasures( divStyle.right ) === 36; // Support: IE 9 - 11 only // Detect misreporting of content dimensions for box-sizing:border-box elements boxSizingReliableVal = roundPixelMeasures( divStyle.width ) === 36; // Support: IE 9 only // Detect overflow:scroll screwiness (gh-3699) div.style.position = "absolute"; scrollboxSizeVal = div.offsetWidth === 36 || "absolute"; documentElement.removeChild( container ); // Nullify the div so it wouldn't be stored in the memory and // it will also be a sign that checks already performed div = null; } function roundPixelMeasures( measure ) { return Math.round( parseFloat( measure ) ); } var pixelPositionVal, boxSizingReliableVal, scrollboxSizeVal, pixelBoxStylesVal, reliableMarginLeftVal, container = document.createElement( "div" ), div = document.createElement( "div" ); // Finish early in limited (non-browser) environments if ( !div.style ) { return; } // Support: IE <=9 - 11 only // Style of cloned element affects source element cloned (#8908) div.style.backgroundClip = "content-box"; div.cloneNode( true ).style.backgroundClip = ""; support.clearCloneStyle = div.style.backgroundClip === "content-box"; jQuery.extend( support, { boxSizingReliable: function() { computeStyleTests(); return boxSizingReliableVal; }, pixelBoxStyles: function() { computeStyleTests(); return pixelBoxStylesVal; }, pixelPosition: function() { computeStyleTests(); return pixelPositionVal; }, reliableMarginLeft: function() { computeStyleTests(); return reliableMarginLeftVal; }, scrollboxSize: function() { computeStyleTests(); return scrollboxSizeVal; } } ); } )(); function curCSS( elem, name, computed ) { var width, minWidth, maxWidth, ret, // Support: Firefox 51+ // Retrieving style before computed somehow // fixes an issue with getting wrong values // on 
detached elements style = elem.style; computed = computed || getStyles( elem ); // getPropertyValue is needed for: // .css('filter') (IE 9 only, #12537) // .css('--customProperty) (#3144) if ( computed ) { ret = computed.getPropertyValue( name ) || computed[ name ]; if ( ret === "" && !jQuery.contains( elem.ownerDocument, elem ) ) { ret = jQuery.style( elem, name ); } // A tribute to the "awesome hack by Dean Edwards" // Android Browser returns percentage for some values, // but width seems to be reliably pixels. // This is against the CSSOM draft spec: // https://drafts.csswg.org/cssom/#resolved-values if ( !support.pixelBoxStyles() && rnumnonpx.test( ret ) && rboxStyle.test( name ) ) { // Remember the original values width = style.width; minWidth = style.minWidth; maxWidth = style.maxWidth; // Put in the new values to get a computed value out style.minWidth = style.maxWidth = style.width = ret; ret = computed.width; // Revert the changed values style.width = width; style.minWidth = minWidth; style.maxWidth = maxWidth; } } return ret !== undefined ? // Support: IE <=9 - 11 only // IE returns zIndex value as an integer. ret + "" : ret; } function addGetHookIf( conditionFn, hookFn ) { // Define the hook, we'll check on the first run if it's really needed. return { get: function() { if ( conditionFn() ) { // Hook not needed (or it's not possible to use it due // to missing dependency), remove it. delete this.get; return; } // Hook needed; redefine it so that the support test is not executed again. return ( this.get = hookFn ).apply( this, arguments ); } }; } var // Swappable if display is none or starts with table // except "table", "table-cell", or "table-caption" // See here for display values: https://developer.mozilla.org/en-US/docs/CSS/display rdisplayswap = /^(none|table(?!-c[ea]).+)/, rcustomProp = /^--/, cssShow = { position: "absolute", visibility: "hidden", display: "block" }, cssNormalTransform = { letterSpacing: "0", fontWeight: "400" }, cssPrefixes = [ "Webkit", "Moz", "ms" ], emptyStyle = document.createElement( "div" ).style; // Return a css property mapped to a potentially vendor prefixed property function vendorPropName( name ) { // Shortcut for names that are not vendor prefixed if ( name in emptyStyle ) { return name; } // Check for vendor prefixed names var capName = name[ 0 ].toUpperCase() + name.slice( 1 ), i = cssPrefixes.length; while ( i-- ) { name = cssPrefixes[ i ] + capName; if ( name in emptyStyle ) { return name; } } } // Return a property mapped along what jQuery.cssProps suggests or to // a vendor prefixed property. function finalPropName( name ) { var ret = jQuery.cssProps[ name ]; if ( !ret ) { ret = jQuery.cssProps[ name ] = vendorPropName( name ) || name; } return ret; } function setPositiveNumber( elem, value, subtract ) { // Any relative (+/-) values have already been // normalized at this point var matches = rcssNum.exec( value ); return matches ? // Guard against undefined "subtract", e.g., when used as in cssHooks Math.max( 0, matches[ 2 ] - ( subtract || 0 ) ) + ( matches[ 3 ] || "px" ) : value; } function boxModelAdjustment( elem, dimension, box, isBorderBox, styles, computedVal ) { var i = dimension === "width" ? 1 : 0, extra = 0, delta = 0; // Adjustment may not be necessary if ( box === ( isBorderBox ? 
"border" : "content" ) ) { return 0; } for ( ; i < 4; i += 2 ) { // Both box models exclude margin if ( box === "margin" ) { delta += jQuery.css( elem, box + cssExpand[ i ], true, styles ); } // If we get here with a content-box, we're seeking "padding" or "border" or "margin" if ( !isBorderBox ) { // Add padding delta += jQuery.css( elem, "padding" + cssExpand[ i ], true, styles ); // For "border" or "margin", add border if ( box !== "padding" ) { delta += jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); // But still keep track of it otherwise } else { extra += jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); } // If we get here with a border-box (content + padding + border), we're seeking "content" or // "padding" or "margin" } else { // For "content", subtract padding if ( box === "content" ) { delta -= jQuery.css( elem, "padding" + cssExpand[ i ], true, styles ); } // For "content" or "padding", subtract border if ( box !== "margin" ) { delta -= jQuery.css( elem, "border" + cssExpand[ i ] + "Width", true, styles ); } } } // Account for positive content-box scroll gutter when requested by providing computedVal if ( !isBorderBox && computedVal >= 0 ) { // offsetWidth/offsetHeight is a rounded sum of content, padding, scroll gutter, and border // Assuming integer scroll gutter, subtract the rest and round down delta += Math.max( 0, Math.ceil( elem[ "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ) ] - computedVal - delta - extra - 0.5 ) ); } return delta; } function getWidthOrHeight( elem, dimension, extra ) { // Start with computed style var styles = getStyles( elem ), val = curCSS( elem, dimension, styles ), isBorderBox = jQuery.css( elem, "boxSizing", false, styles ) === "border-box", valueIsBorderBox = isBorderBox; // Support: Firefox <=54 // Return a confounding non-pixel value or feign ignorance, as appropriate. if ( rnumnonpx.test( val ) ) { if ( !extra ) { return val; } val = "auto"; } // Check for style in case a browser which returns unreliable values // for getComputedStyle silently falls back to the reliable elem.style valueIsBorderBox = valueIsBorderBox && ( support.boxSizingReliable() || val === elem.style[ dimension ] ); // Fall back to offsetWidth/offsetHeight when value is "auto" // This happens for inline elements with no explicit setting (gh-3571) // Support: Android <=4.1 - 4.3 only // Also use offsetWidth/offsetHeight for misreported inline dimensions (gh-3602) if ( val === "auto" || !parseFloat( val ) && jQuery.css( elem, "display", false, styles ) === "inline" ) { val = elem[ "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ) ]; // offsetWidth/offsetHeight provide border-box values valueIsBorderBox = true; } // Normalize "" and auto val = parseFloat( val ) || 0; // Adjust for the element's box model return ( val + boxModelAdjustment( elem, dimension, extra || ( isBorderBox ? "border" : "content" ), valueIsBorderBox, styles, // Provide the current computed size to request scroll gutter calculation (gh-3589) val ) ) + "px"; } jQuery.extend( { // Add in style property hooks for overriding the default // behavior of getting and setting a style property cssHooks: { opacity: { get: function( elem, computed ) { if ( computed ) { // We should always get a number back from opacity var ret = curCSS( elem, "opacity" ); return ret === "" ? 
"1" : ret; } } } }, // Don't automatically add "px" to these possibly-unitless properties cssNumber: { "animationIterationCount": true, "columnCount": true, "fillOpacity": true, "flexGrow": true, "flexShrink": true, "fontWeight": true, "lineHeight": true, "opacity": true, "order": true, "orphans": true, "widows": true, "zIndex": true, "zoom": true }, // Add in properties whose names you wish to fix before // setting or getting the value cssProps: {}, // Get and set the style property on a DOM Node style: function( elem, name, value, extra ) { // Don't set styles on text and comment nodes if ( !elem || elem.nodeType === 3 || elem.nodeType === 8 || !elem.style ) { return; } // Make sure that we're working with the right name var ret, type, hooks, origName = camelCase( name ), isCustomProp = rcustomProp.test( name ), style = elem.style; // Make sure that we're working with the right name. We don't // want to query the value if it is a CSS custom property // since they are user-defined. if ( !isCustomProp ) { name = finalPropName( origName ); } // Gets hook for the prefixed version, then unprefixed version hooks = jQuery.cssHooks[ name ] || jQuery.cssHooks[ origName ]; // Check if we're setting a value if ( value !== undefined ) { type = typeof value; // Convert "+=" or "-=" to relative numbers (#7345) if ( type === "string" && ( ret = rcssNum.exec( value ) ) && ret[ 1 ] ) { value = adjustCSS( elem, name, ret ); // Fixes bug #9237 type = "number"; } // Make sure that null and NaN values aren't set (#7116) if ( value == null || value !== value ) { return; } // If a number was passed in, add the unit (except for certain CSS properties) if ( type === "number" ) { value += ret && ret[ 3 ] || ( jQuery.cssNumber[ origName ] ? "" : "px" ); } // background-* props affect original clone's values if ( !support.clearCloneStyle && value === "" && name.indexOf( "background" ) === 0 ) { style[ name ] = "inherit"; } // If a hook was provided, use that value, otherwise just set the specified value if ( !hooks || !( "set" in hooks ) || ( value = hooks.set( elem, value, extra ) ) !== undefined ) { if ( isCustomProp ) { style.setProperty( name, value ); } else { style[ name ] = value; } } } else { // If a hook was provided get the non-computed value from there if ( hooks && "get" in hooks && ( ret = hooks.get( elem, false, extra ) ) !== undefined ) { return ret; } // Otherwise just get the value from the style object return style[ name ]; } }, css: function( elem, name, extra, styles ) { var val, num, hooks, origName = camelCase( name ), isCustomProp = rcustomProp.test( name ); // Make sure that we're working with the right name. We don't // want to modify the value if it is a CSS custom property // since they are user-defined. if ( !isCustomProp ) { name = finalPropName( origName ); } // Try prefixed name followed by the unprefixed name hooks = jQuery.cssHooks[ name ] || jQuery.cssHooks[ origName ]; // If a hook was provided get the computed value from there if ( hooks && "get" in hooks ) { val = hooks.get( elem, true, extra ); } // Otherwise, if a way to get the computed value exists, use that if ( val === undefined ) { val = curCSS( elem, name, styles ); } // Convert "normal" to computed value if ( val === "normal" && name in cssNormalTransform ) { val = cssNormalTransform[ name ]; } // Make numeric if forced or a qualifier was provided and val looks numeric if ( extra === "" || extra ) { num = parseFloat( val ); return extra === true || isFinite( num ) ? 
num || 0 : val; } return val; } } ); jQuery.each( [ "height", "width" ], function( i, dimension ) { jQuery.cssHooks[ dimension ] = { get: function( elem, computed, extra ) { if ( computed ) { // Certain elements can have dimension info if we invisibly show them // but it must have a current display style that would benefit return rdisplayswap.test( jQuery.css( elem, "display" ) ) && // Support: Safari 8+ // Table columns in Safari have non-zero offsetWidth & zero // getBoundingClientRect().width unless display is changed. // Support: IE <=11 only // Running getBoundingClientRect on a disconnected node // in IE throws an error. ( !elem.getClientRects().length || !elem.getBoundingClientRect().width ) ? swap( elem, cssShow, function() { return getWidthOrHeight( elem, dimension, extra ); } ) : getWidthOrHeight( elem, dimension, extra ); } }, set: function( elem, value, extra ) { var matches, styles = getStyles( elem ), isBorderBox = jQuery.css( elem, "boxSizing", false, styles ) === "border-box", subtract = extra && boxModelAdjustment( elem, dimension, extra, isBorderBox, styles ); // Account for unreliable border-box dimensions by comparing offset* to computed and // faking a content-box to get border and padding (gh-3699) if ( isBorderBox && support.scrollboxSize() === styles.position ) { subtract -= Math.ceil( elem[ "offset" + dimension[ 0 ].toUpperCase() + dimension.slice( 1 ) ] - parseFloat( styles[ dimension ] ) - boxModelAdjustment( elem, dimension, "border", false, styles ) - 0.5 ); } // Convert to pixels if value adjustment is needed if ( subtract && ( matches = rcssNum.exec( value ) ) && ( matches[ 3 ] || "px" ) !== "px" ) { elem.style[ dimension ] = value; value = jQuery.css( elem, dimension ); } return setPositiveNumber( elem, value, subtract ); } }; } ); jQuery.cssHooks.marginLeft = addGetHookIf( support.reliableMarginLeft, function( elem, computed ) { if ( computed ) { return ( parseFloat( curCSS( elem, "marginLeft" ) ) || elem.getBoundingClientRect().left - swap( elem, { marginLeft: 0 }, function() { return elem.getBoundingClientRect().left; } ) ) + "px"; } } ); // These hooks are used by animate to expand properties jQuery.each( { margin: "", padding: "", border: "Width" }, function( prefix, suffix ) { jQuery.cssHooks[ prefix + suffix ] = { expand: function( value ) { var i = 0, expanded = {}, // Assumes a single number if not a string parts = typeof value === "string" ? value.split( " " ) : [ value ]; for ( ; i < 4; i++ ) { expanded[ prefix + cssExpand[ i ] + suffix ] = parts[ i ] || parts[ i - 2 ] || parts[ 0 ]; } return expanded; } }; if ( prefix !== "margin" ) { jQuery.cssHooks[ prefix + suffix ].set = setPositiveNumber; } } ); jQuery.fn.extend( { css: function( name, value ) { return access( this, function( elem, name, value ) { var styles, len, map = {}, i = 0; if ( Array.isArray( name ) ) { styles = getStyles( elem ); len = name.length; for ( ; i < len; i++ ) { map[ name[ i ] ] = jQuery.css( elem, name[ i ], false, styles ); } return map; } return value !== undefined ? 
jQuery.style( elem, name, value ) : jQuery.css( elem, name ); }, name, value, arguments.length > 1 ); } } ); function Tween( elem, options, prop, end, easing ) { return new Tween.prototype.init( elem, options, prop, end, easing ); } jQuery.Tween = Tween; Tween.prototype = { constructor: Tween, init: function( elem, options, prop, end, easing, unit ) { this.elem = elem; this.prop = prop; this.easing = easing || jQuery.easing._default; this.options = options; this.start = this.now = this.cur(); this.end = end; this.unit = unit || ( jQuery.cssNumber[ prop ] ? "" : "px" ); }, cur: function() { var hooks = Tween.propHooks[ this.prop ]; return hooks && hooks.get ? hooks.get( this ) : Tween.propHooks._default.get( this ); }, run: function( percent ) { var eased, hooks = Tween.propHooks[ this.prop ]; if ( this.options.duration ) { this.pos = eased = jQuery.easing[ this.easing ]( percent, this.options.duration * percent, 0, 1, this.options.duration ); } else { this.pos = eased = percent; } this.now = ( this.end - this.start ) * eased + this.start; if ( this.options.step ) { this.options.step.call( this.elem, this.now, this ); } if ( hooks && hooks.set ) { hooks.set( this ); } else { Tween.propHooks._default.set( this ); } return this; } }; Tween.prototype.init.prototype = Tween.prototype; Tween.propHooks = { _default: { get: function( tween ) { var result; // Use a property on the element directly when it is not a DOM element, // or when there is no matching style property that exists. if ( tween.elem.nodeType !== 1 || tween.elem[ tween.prop ] != null && tween.elem.style[ tween.prop ] == null ) { return tween.elem[ tween.prop ]; } // Passing an empty string as a 3rd parameter to .css will automatically // attempt a parseFloat and fallback to a string if the parse fails. // Simple values such as "10px" are parsed to Float; // complex values such as "rotate(1rad)" are returned as-is. result = jQuery.css( tween.elem, tween.prop, "" ); // Empty strings, null, undefined and "auto" are converted to 0. return !result || result === "auto" ? 0 : result; }, set: function( tween ) { // Use step hook for back compat. // Use cssHook if its there. // Use .style if available and use plain properties where available. 
if ( jQuery.fx.step[ tween.prop ] ) { jQuery.fx.step[ tween.prop ]( tween ); } else if ( tween.elem.nodeType === 1 && ( tween.elem.style[ jQuery.cssProps[ tween.prop ] ] != null || jQuery.cssHooks[ tween.prop ] ) ) { jQuery.style( tween.elem, tween.prop, tween.now + tween.unit ); } else { tween.elem[ tween.prop ] = tween.now; } } } }; // Support: IE <=9 only // Panic based approach to setting things on disconnected nodes Tween.propHooks.scrollTop = Tween.propHooks.scrollLeft = { set: function( tween ) { if ( tween.elem.nodeType && tween.elem.parentNode ) { tween.elem[ tween.prop ] = tween.now; } } }; jQuery.easing = { linear: function( p ) { return p; }, swing: function( p ) { return 0.5 - Math.cos( p * Math.PI ) / 2; }, _default: "swing" }; jQuery.fx = Tween.prototype.init; // Back compat <1.8 extension point jQuery.fx.step = {}; var fxNow, inProgress, rfxtypes = /^(?:toggle|show|hide)$/, rrun = /queueHooks$/; function schedule() { if ( inProgress ) { if ( document.hidden === false && window.requestAnimationFrame ) { window.requestAnimationFrame( schedule ); } else { window.setTimeout( schedule, jQuery.fx.interval ); } jQuery.fx.tick(); } } // Animations created synchronously will run synchronously function createFxNow() { window.setTimeout( function() { fxNow = undefined; } ); return ( fxNow = Date.now() ); } // Generate parameters to create a standard animation function genFx( type, includeWidth ) { var which, i = 0, attrs = { height: type }; // If we include width, step value is 1 to do all cssExpand values, // otherwise step value is 2 to skip over Left and Right includeWidth = includeWidth ? 1 : 0; for ( ; i < 4; i += 2 - includeWidth ) { which = cssExpand[ i ]; attrs[ "margin" + which ] = attrs[ "padding" + which ] = type; } if ( includeWidth ) { attrs.opacity = attrs.width = type; } return attrs; } function createTween( value, prop, animation ) { var tween, collection = ( Animation.tweeners[ prop ] || [] ).concat( Animation.tweeners[ "*" ] ), index = 0, length = collection.length; for ( ; index < length; index++ ) { if ( ( tween = collection[ index ].call( animation, prop, value ) ) ) { // We're done with this property return tween; } } } function defaultPrefilter( elem, props, opts ) { var prop, value, toggle, hooks, oldfire, propTween, restoreDisplay, display, isBox = "width" in props || "height" in props, anim = this, orig = {}, style = elem.style, hidden = elem.nodeType && isHiddenWithinTree( elem ), dataShow = dataPriv.get( elem, "fxshow" ); // Queue-skipping animations hijack the fx hooks if ( !opts.queue ) { hooks = jQuery._queueHooks( elem, "fx" ); if ( hooks.unqueued == null ) { hooks.unqueued = 0; oldfire = hooks.empty.fire; hooks.empty.fire = function() { if ( !hooks.unqueued ) { oldfire(); } }; } hooks.unqueued++; anim.always( function() { // Ensure the complete handler is called before this completes anim.always( function() { hooks.unqueued--; if ( !jQuery.queue( elem, "fx" ).length ) { hooks.empty.fire(); } } ); } ); } // Detect show/hide animations for ( prop in props ) { value = props[ prop ]; if ( rfxtypes.test( value ) ) { delete props[ prop ]; toggle = toggle || value === "toggle"; if ( value === ( hidden ? 
"hide" : "show" ) ) { // Pretend to be hidden if this is a "show" and // there is still data from a stopped show/hide if ( value === "show" && dataShow && dataShow[ prop ] !== undefined ) { hidden = true; // Ignore all other no-op show/hide data } else { continue; } } orig[ prop ] = dataShow && dataShow[ prop ] || jQuery.style( elem, prop ); } } // Bail out if this is a no-op like .hide().hide() propTween = !jQuery.isEmptyObject( props ); if ( !propTween && jQuery.isEmptyObject( orig ) ) { return; } // Restrict "overflow" and "display" styles during box animations if ( isBox && elem.nodeType === 1 ) { // Support: IE <=9 - 11, Edge 12 - 15 // Record all 3 overflow attributes because IE does not infer the shorthand // from identically-valued overflowX and overflowY and Edge just mirrors // the overflowX value there. opts.overflow = [ style.overflow, style.overflowX, style.overflowY ]; // Identify a display type, preferring old show/hide data over the CSS cascade restoreDisplay = dataShow && dataShow.display; if ( restoreDisplay == null ) { restoreDisplay = dataPriv.get( elem, "display" ); } display = jQuery.css( elem, "display" ); if ( display === "none" ) { if ( restoreDisplay ) { display = restoreDisplay; } else { // Get nonempty value(s) by temporarily forcing visibility showHide( [ elem ], true ); restoreDisplay = elem.style.display || restoreDisplay; display = jQuery.css( elem, "display" ); showHide( [ elem ] ); } } // Animate inline elements as inline-block if ( display === "inline" || display === "inline-block" && restoreDisplay != null ) { if ( jQuery.css( elem, "float" ) === "none" ) { // Restore the original display value at the end of pure show/hide animations if ( !propTween ) { anim.done( function() { style.display = restoreDisplay; } ); if ( restoreDisplay == null ) { display = style.display; restoreDisplay = display === "none" ? "" : display; } } style.display = "inline-block"; } } } if ( opts.overflow ) { style.overflow = "hidden"; anim.always( function() { style.overflow = opts.overflow[ 0 ]; style.overflowX = opts.overflow[ 1 ]; style.overflowY = opts.overflow[ 2 ]; } ); } // Implement show/hide animations propTween = false; for ( prop in orig ) { // General show/hide setup for this element animation if ( !propTween ) { if ( dataShow ) { if ( "hidden" in dataShow ) { hidden = dataShow.hidden; } } else { dataShow = dataPriv.access( elem, "fxshow", { display: restoreDisplay } ); } // Store hidden/visible for toggle so `.stop().toggle()` "reverses" if ( toggle ) { dataShow.hidden = !hidden; } // Show elements before animating them if ( hidden ) { showHide( [ elem ], true ); } /* eslint-disable no-loop-func */ anim.done( function() { /* eslint-enable no-loop-func */ // The final step of a "hide" animation is actually hiding the element if ( !hidden ) { showHide( [ elem ] ); } dataPriv.remove( elem, "fxshow" ); for ( prop in orig ) { jQuery.style( elem, prop, orig[ prop ] ); } } ); } // Per-property setup propTween = createTween( hidden ? 
dataShow[ prop ] : 0, prop, anim ); if ( !( prop in dataShow ) ) { dataShow[ prop ] = propTween.start; if ( hidden ) { propTween.end = propTween.start; propTween.start = 0; } } } } function propFilter( props, specialEasing ) { var index, name, easing, value, hooks; // camelCase, specialEasing and expand cssHook pass for ( index in props ) { name = camelCase( index ); easing = specialEasing[ name ]; value = props[ index ]; if ( Array.isArray( value ) ) { easing = value[ 1 ]; value = props[ index ] = value[ 0 ]; } if ( index !== name ) { props[ name ] = value; delete props[ index ]; } hooks = jQuery.cssHooks[ name ]; if ( hooks && "expand" in hooks ) { value = hooks.expand( value ); delete props[ name ]; // Not quite $.extend, this won't overwrite existing keys. // Reusing 'index' because we have the correct "name" for ( index in value ) { if ( !( index in props ) ) { props[ index ] = value[ index ]; specialEasing[ index ] = easing; } } } else { specialEasing[ name ] = easing; } } } function Animation( elem, properties, options ) { var result, stopped, index = 0, length = Animation.prefilters.length, deferred = jQuery.Deferred().always( function() { // Don't match elem in the :animated selector delete tick.elem; } ), tick = function() { if ( stopped ) { return false; } var currentTime = fxNow || createFxNow(), remaining = Math.max( 0, animation.startTime + animation.duration - currentTime ), // Support: Android 2.3 only // Archaic crash bug won't allow us to use `1 - ( 0.5 || 0 )` (#12497) temp = remaining / animation.duration || 0, percent = 1 - temp, index = 0, length = animation.tweens.length; for ( ; index < length; index++ ) { animation.tweens[ index ].run( percent ); } deferred.notifyWith( elem, [ animation, percent, remaining ] ); // If there's more to do, yield if ( percent < 1 && length ) { return remaining; } // If this was an empty animation, synthesize a final progress notification if ( !length ) { deferred.notifyWith( elem, [ animation, 1, 0 ] ); } // Resolve the animation and report its conclusion deferred.resolveWith( elem, [ animation ] ); return false; }, animation = deferred.promise( { elem: elem, props: jQuery.extend( {}, properties ), opts: jQuery.extend( true, { specialEasing: {}, easing: jQuery.easing._default }, options ), originalProperties: properties, originalOptions: options, startTime: fxNow || createFxNow(), duration: options.duration, tweens: [], createTween: function( prop, end ) { var tween = jQuery.Tween( elem, animation.opts, prop, end, animation.opts.specialEasing[ prop ] || animation.opts.easing ); animation.tweens.push( tween ); return tween; }, stop: function( gotoEnd ) { var index = 0, // If we are going to the end, we want to run all the tweens // otherwise we skip this part length = gotoEnd ? 
animation.tweens.length : 0; if ( stopped ) { return this; } stopped = true; for ( ; index < length; index++ ) { animation.tweens[ index ].run( 1 ); } // Resolve when we played the last frame; otherwise, reject if ( gotoEnd ) { deferred.notifyWith( elem, [ animation, 1, 0 ] ); deferred.resolveWith( elem, [ animation, gotoEnd ] ); } else { deferred.rejectWith( elem, [ animation, gotoEnd ] ); } return this; } } ), props = animation.props; propFilter( props, animation.opts.specialEasing ); for ( ; index < length; index++ ) { result = Animation.prefilters[ index ].call( animation, elem, props, animation.opts ); if ( result ) { if ( isFunction( result.stop ) ) { jQuery._queueHooks( animation.elem, animation.opts.queue ).stop = result.stop.bind( result ); } return result; } } jQuery.map( props, createTween, animation ); if ( isFunction( animation.opts.start ) ) { animation.opts.start.call( elem, animation ); } // Attach callbacks from options animation .progress( animation.opts.progress ) .done( animation.opts.done, animation.opts.complete ) .fail( animation.opts.fail ) .always( animation.opts.always ); jQuery.fx.timer( jQuery.extend( tick, { elem: elem, anim: animation, queue: animation.opts.queue } ) ); return animation; } jQuery.Animation = jQuery.extend( Animation, { tweeners: { "*": [ function( prop, value ) { var tween = this.createTween( prop, value ); adjustCSS( tween.elem, prop, rcssNum.exec( value ), tween ); return tween; } ] }, tweener: function( props, callback ) { if ( isFunction( props ) ) { callback = props; props = [ "*" ]; } else { props = props.match( rnothtmlwhite ); } var prop, index = 0, length = props.length; for ( ; index < length; index++ ) { prop = props[ index ]; Animation.tweeners[ prop ] = Animation.tweeners[ prop ] || []; Animation.tweeners[ prop ].unshift( callback ); } }, prefilters: [ defaultPrefilter ], prefilter: function( callback, prepend ) { if ( prepend ) { Animation.prefilters.unshift( callback ); } else { Animation.prefilters.push( callback ); } } } ); jQuery.speed = function( speed, easing, fn ) { var opt = speed && typeof speed === "object" ? 
jQuery.extend( {}, speed ) : { complete: fn || !fn && easing || isFunction( speed ) && speed, duration: speed, easing: fn && easing || easing && !isFunction( easing ) && easing }; // Go to the end state if fx are off if ( jQuery.fx.off ) { opt.duration = 0; } else { if ( typeof opt.duration !== "number" ) { if ( opt.duration in jQuery.fx.speeds ) { opt.duration = jQuery.fx.speeds[ opt.duration ]; } else { opt.duration = jQuery.fx.speeds._default; } } } // Normalize opt.queue - true/undefined/null -> "fx" if ( opt.queue == null || opt.queue === true ) { opt.queue = "fx"; } // Queueing opt.old = opt.complete; opt.complete = function() { if ( isFunction( opt.old ) ) { opt.old.call( this ); } if ( opt.queue ) { jQuery.dequeue( this, opt.queue ); } }; return opt; }; jQuery.fn.extend( { fadeTo: function( speed, to, easing, callback ) { // Show any hidden elements after setting opacity to 0 return this.filter( isHiddenWithinTree ).css( "opacity", 0 ).show() // Animate to the value specified .end().animate( { opacity: to }, speed, easing, callback ); }, animate: function( prop, speed, easing, callback ) { var empty = jQuery.isEmptyObject( prop ), optall = jQuery.speed( speed, easing, callback ), doAnimation = function() { // Operate on a copy of prop so per-property easing won't be lost var anim = Animation( this, jQuery.extend( {}, prop ), optall ); // Empty animations, or finishing resolves immediately if ( empty || dataPriv.get( this, "finish" ) ) { anim.stop( true ); } }; doAnimation.finish = doAnimation; return empty || optall.queue === false ? this.each( doAnimation ) : this.queue( optall.queue, doAnimation ); }, stop: function( type, clearQueue, gotoEnd ) { var stopQueue = function( hooks ) { var stop = hooks.stop; delete hooks.stop; stop( gotoEnd ); }; if ( typeof type !== "string" ) { gotoEnd = clearQueue; clearQueue = type; type = undefined; } if ( clearQueue && type !== false ) { this.queue( type || "fx", [] ); } return this.each( function() { var dequeue = true, index = type != null && type + "queueHooks", timers = jQuery.timers, data = dataPriv.get( this ); if ( index ) { if ( data[ index ] && data[ index ].stop ) { stopQueue( data[ index ] ); } } else { for ( index in data ) { if ( data[ index ] && data[ index ].stop && rrun.test( index ) ) { stopQueue( data[ index ] ); } } } for ( index = timers.length; index--; ) { if ( timers[ index ].elem === this && ( type == null || timers[ index ].queue === type ) ) { timers[ index ].anim.stop( gotoEnd ); dequeue = false; timers.splice( index, 1 ); } } // Start the next in the queue if the last step wasn't forced. // Timers currently will call their complete callbacks, which // will dequeue but only if they were gotoEnd. if ( dequeue || !gotoEnd ) { jQuery.dequeue( this, type ); } } ); }, finish: function( type ) { if ( type !== false ) { type = type || "fx"; } return this.each( function() { var index, data = dataPriv.get( this ), queue = data[ type + "queue" ], hooks = data[ type + "queueHooks" ], timers = jQuery.timers, length = queue ? 
queue.length : 0; // Enable finishing flag on private data data.finish = true; // Empty the queue first jQuery.queue( this, type, [] ); if ( hooks && hooks.stop ) { hooks.stop.call( this, true ); } // Look for any active animations, and finish them for ( index = timers.length; index--; ) { if ( timers[ index ].elem === this && timers[ index ].queue === type ) { timers[ index ].anim.stop( true ); timers.splice( index, 1 ); } } // Look for any animations in the old queue and finish them for ( index = 0; index < length; index++ ) { if ( queue[ index ] && queue[ index ].finish ) { queue[ index ].finish.call( this ); } } // Turn off finishing flag delete data.finish; } ); } } ); jQuery.each( [ "toggle", "show", "hide" ], function( i, name ) { var cssFn = jQuery.fn[ name ]; jQuery.fn[ name ] = function( speed, easing, callback ) { return speed == null || typeof speed === "boolean" ? cssFn.apply( this, arguments ) : this.animate( genFx( name, true ), speed, easing, callback ); }; } ); // Generate shortcuts for custom animations jQuery.each( { slideDown: genFx( "show" ), slideUp: genFx( "hide" ), slideToggle: genFx( "toggle" ), fadeIn: { opacity: "show" }, fadeOut: { opacity: "hide" }, fadeToggle: { opacity: "toggle" } }, function( name, props ) { jQuery.fn[ name ] = function( speed, easing, callback ) { return this.animate( props, speed, easing, callback ); }; } ); jQuery.timers = []; jQuery.fx.tick = function() { var timer, i = 0, timers = jQuery.timers; fxNow = Date.now(); for ( ; i < timers.length; i++ ) { timer = timers[ i ]; // Run the timer and safely remove it when done (allowing for external removal) if ( !timer() && timers[ i ] === timer ) { timers.splice( i--, 1 ); } } if ( !timers.length ) { jQuery.fx.stop(); } fxNow = undefined; }; jQuery.fx.timer = function( timer ) { jQuery.timers.push( timer ); jQuery.fx.start(); }; jQuery.fx.interval = 13; jQuery.fx.start = function() { if ( inProgress ) { return; } inProgress = true; schedule(); }; jQuery.fx.stop = function() { inProgress = null; }; jQuery.fx.speeds = { slow: 600, fast: 200, // Default speed _default: 400 }; // Based off of the plugin by Clint Helfers, with permission. // https://web.archive.org/web/20100324014747/http://blindsignals.com/index.php/2009/07/jquery-delay/ jQuery.fn.delay = function( time, type ) { time = jQuery.fx ? 
jQuery.fx.speeds[ time ] || time : time; type = type || "fx"; return this.queue( type, function( next, hooks ) { var timeout = window.setTimeout( next, time ); hooks.stop = function() { window.clearTimeout( timeout ); }; } ); }; ( function() { var input = document.createElement( "input" ), select = document.createElement( "select" ), opt = select.appendChild( document.createElement( "option" ) ); input.type = "checkbox"; // Support: Android <=4.3 only // Default value for a checkbox should be "on" support.checkOn = input.value !== ""; // Support: IE <=11 only // Must access selectedIndex to make default options select support.optSelected = opt.selected; // Support: IE <=11 only // An input loses its value after becoming a radio input = document.createElement( "input" ); input.value = "t"; input.type = "radio"; support.radioValue = input.value === "t"; } )(); var boolHook, attrHandle = jQuery.expr.attrHandle; jQuery.fn.extend( { attr: function( name, value ) { return access( this, jQuery.attr, name, value, arguments.length > 1 ); }, removeAttr: function( name ) { return this.each( function() { jQuery.removeAttr( this, name ); } ); } } ); jQuery.extend( { attr: function( elem, name, value ) { var ret, hooks, nType = elem.nodeType; // Don't get/set attributes on text, comment and attribute nodes if ( nType === 3 || nType === 8 || nType === 2 ) { return; } // Fallback to prop when attributes are not supported if ( typeof elem.getAttribute === "undefined" ) { return jQuery.prop( elem, name, value ); } // Attribute hooks are determined by the lowercase version // Grab necessary hook if one is defined if ( nType !== 1 || !jQuery.isXMLDoc( elem ) ) { hooks = jQuery.attrHooks[ name.toLowerCase() ] || ( jQuery.expr.match.bool.test( name ) ? boolHook : undefined ); } if ( value !== undefined ) { if ( value === null ) { jQuery.removeAttr( elem, name ); return; } if ( hooks && "set" in hooks && ( ret = hooks.set( elem, value, name ) ) !== undefined ) { return ret; } elem.setAttribute( name, value + "" ); return value; } if ( hooks && "get" in hooks && ( ret = hooks.get( elem, name ) ) !== null ) { return ret; } ret = jQuery.find.attr( elem, name ); // Non-existent attributes return null, we normalize to undefined return ret == null ? 
undefined : ret; }, attrHooks: { type: { set: function( elem, value ) { if ( !support.radioValue && value === "radio" && nodeName( elem, "input" ) ) { var val = elem.value; elem.setAttribute( "type", value ); if ( val ) { elem.value = val; } return value; } } } }, removeAttr: function( elem, value ) { var name, i = 0, // Attribute names can contain non-HTML whitespace characters // https://html.spec.whatwg.org/multipage/syntax.html#attributes-2 attrNames = value && value.match( rnothtmlwhite ); if ( attrNames && elem.nodeType === 1 ) { while ( ( name = attrNames[ i++ ] ) ) { elem.removeAttribute( name ); } } } } ); // Hooks for boolean attributes boolHook = { set: function( elem, value, name ) { if ( value === false ) { // Remove boolean attributes when set to false jQuery.removeAttr( elem, name ); } else { elem.setAttribute( name, name ); } return name; } }; jQuery.each( jQuery.expr.match.bool.source.match( /\w+/g ), function( i, name ) { var getter = attrHandle[ name ] || jQuery.find.attr; attrHandle[ name ] = function( elem, name, isXML ) { var ret, handle, lowercaseName = name.toLowerCase(); if ( !isXML ) { // Avoid an infinite loop by temporarily removing this function from the getter handle = attrHandle[ lowercaseName ]; attrHandle[ lowercaseName ] = ret; ret = getter( elem, name, isXML ) != null ? lowercaseName : null; attrHandle[ lowercaseName ] = handle; } return ret; }; } ); var rfocusable = /^(?:input|select|textarea|button)$/i, rclickable = /^(?:a|area)$/i; jQuery.fn.extend( { prop: function( name, value ) { return access( this, jQuery.prop, name, value, arguments.length > 1 ); }, removeProp: function( name ) { return this.each( function() { delete this[ jQuery.propFix[ name ] || name ]; } ); } } ); jQuery.extend( { prop: function( elem, name, value ) { var ret, hooks, nType = elem.nodeType; // Don't get/set properties on text, comment and attribute nodes if ( nType === 3 || nType === 8 || nType === 2 ) { return; } if ( nType !== 1 || !jQuery.isXMLDoc( elem ) ) { // Fix name and attach hooks name = jQuery.propFix[ name ] || name; hooks = jQuery.propHooks[ name ]; } if ( value !== undefined ) { if ( hooks && "set" in hooks && ( ret = hooks.set( elem, value, name ) ) !== undefined ) { return ret; } return ( elem[ name ] = value ); } if ( hooks && "get" in hooks && ( ret = hooks.get( elem, name ) ) !== null ) { return ret; } return elem[ name ]; }, propHooks: { tabIndex: { get: function( elem ) { // Support: IE <=9 - 11 only // elem.tabIndex doesn't always return the // correct value when it hasn't been explicitly set // https://web.archive.org/web/20141116233347/http://fluidproject.org/blog/2008/01/09/getting-setting-and-removing-tabindex-values-with-javascript/ // Use proper attribute retrieval(#12072) var tabindex = jQuery.find.attr( elem, "tabindex" ); if ( tabindex ) { return parseInt( tabindex, 10 ); } if ( rfocusable.test( elem.nodeName ) || rclickable.test( elem.nodeName ) && elem.href ) { return 0; } return -1; } } }, propFix: { "for": "htmlFor", "class": "className" } } ); // Support: IE <=11 only // Accessing the selectedIndex property // forces the browser to respect setting selected // on the option // The getter ensures a default option is selected // when in an optgroup // eslint rule "no-unused-expressions" is disabled for this code // since it considers such accessions noop if ( !support.optSelected ) { jQuery.propHooks.selected = { get: function( elem ) { /* eslint no-unused-expressions: "off" */ var parent = elem.parentNode; if ( parent && parent.parentNode ) { 
parent.parentNode.selectedIndex; } return null; }, set: function( elem ) { /* eslint no-unused-expressions: "off" */ var parent = elem.parentNode; if ( parent ) { parent.selectedIndex; if ( parent.parentNode ) { parent.parentNode.selectedIndex; } } } }; } jQuery.each( [ "tabIndex", "readOnly", "maxLength", "cellSpacing", "cellPadding", "rowSpan", "colSpan", "useMap", "frameBorder", "contentEditable" ], function() { jQuery.propFix[ this.toLowerCase() ] = this; } ); // Strip and collapse whitespace according to HTML spec // https://infra.spec.whatwg.org/#strip-and-collapse-ascii-whitespace function stripAndCollapse( value ) { var tokens = value.match( rnothtmlwhite ) || []; return tokens.join( " " ); } function getClass( elem ) { return elem.getAttribute && elem.getAttribute( "class" ) || ""; } function classesToArray( value ) { if ( Array.isArray( value ) ) { return value; } if ( typeof value === "string" ) { return value.match( rnothtmlwhite ) || []; } return []; } jQuery.fn.extend( { addClass: function( value ) { var classes, elem, cur, curValue, clazz, j, finalValue, i = 0; if ( isFunction( value ) ) { return this.each( function( j ) { jQuery( this ).addClass( value.call( this, j, getClass( this ) ) ); } ); } classes = classesToArray( value ); if ( classes.length ) { while ( ( elem = this[ i++ ] ) ) { curValue = getClass( elem ); cur = elem.nodeType === 1 && ( " " + stripAndCollapse( curValue ) + " " ); if ( cur ) { j = 0; while ( ( clazz = classes[ j++ ] ) ) { if ( cur.indexOf( " " + clazz + " " ) < 0 ) { cur += clazz + " "; } } // Only assign if different to avoid unneeded rendering. finalValue = stripAndCollapse( cur ); if ( curValue !== finalValue ) { elem.setAttribute( "class", finalValue ); } } } } return this; }, removeClass: function( value ) { var classes, elem, cur, curValue, clazz, j, finalValue, i = 0; if ( isFunction( value ) ) { return this.each( function( j ) { jQuery( this ).removeClass( value.call( this, j, getClass( this ) ) ); } ); } if ( !arguments.length ) { return this.attr( "class", "" ); } classes = classesToArray( value ); if ( classes.length ) { while ( ( elem = this[ i++ ] ) ) { curValue = getClass( elem ); // This expression is here for better compressibility (see addClass) cur = elem.nodeType === 1 && ( " " + stripAndCollapse( curValue ) + " " ); if ( cur ) { j = 0; while ( ( clazz = classes[ j++ ] ) ) { // Remove *all* instances while ( cur.indexOf( " " + clazz + " " ) > -1 ) { cur = cur.replace( " " + clazz + " ", " " ); } } // Only assign if different to avoid unneeded rendering. finalValue = stripAndCollapse( cur ); if ( curValue !== finalValue ) { elem.setAttribute( "class", finalValue ); } } } } return this; }, toggleClass: function( value, stateVal ) { var type = typeof value, isValidValue = type === "string" || Array.isArray( value ); if ( typeof stateVal === "boolean" && isValidValue ) { return stateVal ? 
this.addClass( value ) : this.removeClass( value ); } if ( isFunction( value ) ) { return this.each( function( i ) { jQuery( this ).toggleClass( value.call( this, i, getClass( this ), stateVal ), stateVal ); } ); } return this.each( function() { var className, i, self, classNames; if ( isValidValue ) { // Toggle individual class names i = 0; self = jQuery( this ); classNames = classesToArray( value ); while ( ( className = classNames[ i++ ] ) ) { // Check each className given, space separated list if ( self.hasClass( className ) ) { self.removeClass( className ); } else { self.addClass( className ); } } // Toggle whole class name } else if ( value === undefined || type === "boolean" ) { className = getClass( this ); if ( className ) { // Store className if set dataPriv.set( this, "__className__", className ); } // If the element has a class name or if we're passed `false`, // then remove the whole classname (if there was one, the above saved it). // Otherwise bring back whatever was previously saved (if anything), // falling back to the empty string if nothing was stored. if ( this.setAttribute ) { this.setAttribute( "class", className || value === false ? "" : dataPriv.get( this, "__className__" ) || "" ); } } } ); }, hasClass: function( selector ) { var className, elem, i = 0; className = " " + selector + " "; while ( ( elem = this[ i++ ] ) ) { if ( elem.nodeType === 1 && ( " " + stripAndCollapse( getClass( elem ) ) + " " ).indexOf( className ) > -1 ) { return true; } } return false; } } ); var rreturn = /\r/g; jQuery.fn.extend( { val: function( value ) { var hooks, ret, valueIsFunction, elem = this[ 0 ]; if ( !arguments.length ) { if ( elem ) { hooks = jQuery.valHooks[ elem.type ] || jQuery.valHooks[ elem.nodeName.toLowerCase() ]; if ( hooks && "get" in hooks && ( ret = hooks.get( elem, "value" ) ) !== undefined ) { return ret; } ret = elem.value; // Handle most common string cases if ( typeof ret === "string" ) { return ret.replace( rreturn, "" ); } // Handle cases where value is null/undef or number return ret == null ? "" : ret; } return; } valueIsFunction = isFunction( value ); return this.each( function( i ) { var val; if ( this.nodeType !== 1 ) { return; } if ( valueIsFunction ) { val = value.call( this, i, jQuery( this ).val() ); } else { val = value; } // Treat null/undefined as ""; convert numbers to string if ( val == null ) { val = ""; } else if ( typeof val === "number" ) { val += ""; } else if ( Array.isArray( val ) ) { val = jQuery.map( val, function( value ) { return value == null ? "" : value + ""; } ); } hooks = jQuery.valHooks[ this.type ] || jQuery.valHooks[ this.nodeName.toLowerCase() ]; // If set returns undefined, fall back to normal setting if ( !hooks || !( "set" in hooks ) || hooks.set( this, val, "value" ) === undefined ) { this.value = val; } } ); } } ); jQuery.extend( { valHooks: { option: { get: function( elem ) { var val = jQuery.find.attr( elem, "value" ); return val != null ? val : // Support: IE <=10 - 11 only // option.text throws exceptions (#14686, #14858) // Strip and collapse whitespace // https://html.spec.whatwg.org/#strip-and-collapse-whitespace stripAndCollapse( jQuery.text( elem ) ); } }, select: { get: function( elem ) { var value, option, i, options = elem.options, index = elem.selectedIndex, one = elem.type === "select-one", values = one ? null : [], max = one ? index + 1 : options.length; if ( index < 0 ) { i = max; } else { i = one ? 

Module architecture

Sockets

The idea behind the pyroute2 framework is pretty simple. The library provides socket objects that have:

  • shortcuts to establish netlink connections
  • extra methods to run netlink queries
  • some magic to handle packet bursts
  • another magic to transparently mangle netlink messages

In every other sense any netlink socket is just an ordinary socket with fileno(), recv(), sendto() and so on. Of course, one can use it in poll().
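
For instance, a minimal sketch that polls an IPRoute socket for broadcast RTNL events (the timeout value below is arbitrary):

import select
from pyroute2 import IPRoute

ipr = IPRoute()
ipr.bind()  # subscribe to the default broadcast RTNL groups
poller = select.poll()
poller.register(ipr.fileno(), select.POLLIN)
# wait up to 10 seconds for an event, then parse whatever arrived
if poller.poll(10000):
    for msg in ipr.get():
        print(msg['header']['type'])
ipr.close()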

There is an inheritance diagram of netlink sockets, provided by the library:

Inheritance diagram of pyroute2.iproute.linux.IPRoute, pyroute2.iproute.linux.IPBatch, pyroute2.iproute.linux.RawIPRoute, pyroute2.iproute.bsd.IPRoute, pyroute2.iproute.RemoteIPRoute, pyroute2.iwutil.IW, pyroute2.ipset.IPSet, pyroute2.netlink.taskstats.TaskStats, pyroute2.netlink.ipq.IPQSocket, pyroute2.remote.RemoteSocket, pyroute2.remote.shell.ShellIPR, pyroute2.netns.nslink.NetNS

under the hood

Let’s assume we use an IPRoute object to get the interface list of the system:

from pyroute2 import IPRoute
ipr = IPRoute()
ipr.get_links()
ipr.close()

The get_links() method is provided by the IPRouteMixin class. It chooses the message to send (ifinfmsg), prepares required fields and passes it to the next layer:

result.extend(self.nlm_request(msg, RTM_GETLINK, msg_flags))

nlm_request() is a method of the NetlinkMixin class. It wraps the request/response pair in one call. The request is sent via put(), and the response is collected with get(). These methods hide the asynchronous nature of the netlink protocol, where a response can arrive at any time: neither the timing nor the packet order is guaranteed. But the sequence_number field of a netlink message can be used to match responses, and the put()/get() pair does exactly that.
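
As an illustration, roughly the same request can be issued by hand. This is only a sketch: the exact nlm_request() signature and the required message fields may differ between versions.

from pyroute2 import IPRoute
from pyroute2.netlink import NLM_F_DUMP, NLM_F_REQUEST
from pyroute2.netlink.rtnl import RTM_GETLINK
from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg

ipr = IPRoute()
msg = ifinfmsg()   # the message class used by get_links()
msg['index'] = 0   # dump all interfaces
# nlm_request() wraps put() + get(): it sends the request and collects
# the responses matched by sequence_number
links = ipr.nlm_request(msg, RTM_GETLINK, NLM_F_REQUEST | NLM_F_DUMP)
ipr.close()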

cache thread

Sometimes it is preferable to receive incoming messages as soon as possible and parse them only when there is time for that. For that case the NetlinkMixin makes it possible to start a dedicated cache thread that will collect and queue incoming messages as they arrive. The thread doesn't affect the socket behaviour: it behaves exactly in the same way; the only difference is that recv() will return a message already cached in userspace. To start the thread, call bind() with async_cache=True:

ipr = IPRoute()
ipr.bind(async_cache=True)
...  # do some stuff
ipr.close()

message mangling

An interesting feature of the IPRSocketMixin is the netlink proxy code, which allows registering callbacks for different message types. The callback API is simple. The callback must accept the message as binary data and must return a dictionary with two keys, verdict and data. The verdict can be:

  • for sendto(): forward, return or error
  • for recv(): forward or error

E.g.:

from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg

# a sendto() callback (the function name is arbitrary)
def my_callback(data):
    msg = ifinfmsg(data)
    msg.decode()
    ...  # mangle msg
    msg.reset()
    msg.encode()
    return {'verdict': 'forward',
            'data': msg.buf.getvalue()}

The error verdict raises the exception carried in data. The forward verdict passes the data on. The return verdict is valid only in sendto() callbacks and means that the data should not be passed to the kernel; instead it must be returned to the user.

This magic allows the library to transparently support ovs, teamd and tuntap calls via netlink. The corresponding callbacks route such calls to an external utility or to the ioctl() API.

To see how to register callbacks, refer to the IPRSocketMixin init. The _sproxy handles sendto() mangling, the _rproxy handles recv() mangling. Later this API may become public.

PF_ROUTE messages

PF_ROUTE socket is used to receive notifications from the BSD kernel. The PF_ROUTE messages:

Inheritance diagram of pyroute2.bsd.pf_route.freebsd.bsdmsg, pyroute2.bsd.pf_route.freebsd.if_msg, pyroute2.bsd.pf_route.freebsd.rt_msg_base, pyroute2.bsd.pf_route.freebsd.ifa_msg_base, pyroute2.bsd.pf_route.freebsd.ifma_msg_base, pyroute2.bsd.pf_route.freebsd.if_announcemsg, pyroute2.bsd.pf_route.rt_slot, pyroute2.bsd.pf_route.rt_msg, pyroute2.bsd.pf_route.ifa_msg, pyroute2.bsd.pf_route.ifma_msg

IPDB

The IPDB module implements high-level logic to manage some of the system network settings. It is completely agnostic to the netlink object's nature; the only requirement is that the netlink transport provides the RTNL API.

So, using the proper mixin classes, one can create a custom RTNL-compatible transport. E.g., this way IPDB can work over NetNS objects, providing network management within some network namespace, while IPDB itself runs in the main namespace.
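
A hedged sketch of that setup (creating a namespace requires the corresponding privileges; the name 'testns' is arbitrary):

from pyroute2 import IPDB, NetNS

# IPDB accepts any RTNL-compatible transport via the `nl` argument
ns = NetNS('testns')  # created if it does not exist yet
ipdb = IPDB(nl=ns)
...  # manage interfaces inside 'testns'
ipdb.release()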

The IPDB architecture is not too complicated, but it implements some useful transaction magic; see the commit() methods of the Transactional objects.
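
For example, a minimal sketch of the transactional API, assuming an interface named 'eth0' exists:

from pyroute2 import IPDB

ipdb = IPDB()
try:
    # changes are collected into a transaction and applied on commit();
    # the `with` block commits automatically on exit
    with ipdb.interfaces['eth0'] as i:
        i.add_ip('192.0.2.1/24')
        i.up()
finally:
    ipdb.release()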

Inheritance diagram of pyroute2.ipdb.main.IPDB, pyroute2.ipdb.interfaces.Interface, pyroute2.ipdb.linkedset.LinkedSet, pyroute2.ipdb.linkedset.IPaddrSet, pyroute2.ipdb.routes.NextHopSet, pyroute2.ipdb.routes.Via, pyroute2.ipdb.routes.Encap, pyroute2.ipdb.routes.Metrics, pyroute2.ipdb.routes.BaseRoute, pyroute2.ipdb.routes.Route, pyroute2.ipdb.routes.MPLSRoute, pyroute2.ipdb.routes.RoutingTable, pyroute2.ipdb.routes.MPLSTable, pyroute2.ipdb.routes.RoutingTableSet, pyroute2.ipdb.rules.Rule, pyroute2.ipdb.rules.RulesDict

Internet protocols

Besides the netlink protocols, the library implements a limited set of supplementary internet protocols to play with.

Inheritance diagram of pyroute2.protocols.udpmsg, pyroute2.protocols.ip4msg, pyroute2.protocols.udp4_pseudo_header, pyroute2.protocols.ethmsg, pyroute2.dhcp.dhcp4msg.dhcp4msg
pyroute2-0.5.9/docs/html/changelog.html0000644000175000017500000010625013621220106017740 0ustar peetpeet00000000000000 Changelog — pyroute2 0.5.9 documentation

Changelog

  • 0.5.9
    • ethtool: fix module setup
  • 0.5.8
  • 0.5.7
  • 0.5.6
    • ndb.objects.route: multipath routes
    • ndb.objects.rule: basic support
    • ndb.objects.interface: veth fixed
    • ndb.source: fix source restart
    • ndb.log: logging setup
  • 0.5.5
  • 0.5.4
  • 0.5.3
  • 0.5.2
  • 0.5.1
    • ipdb: #310 – route keying fix
    • ipdb: #483, #484 – callback internals change
    • ipdb: #499 – eventloop interface
    • ipdb: #500 – fix non-default :: routes
    • netns: #448 – API change: setns() doesn’t remove FD
    • netns: #504 – fix resource leakage
    • bsd: initial commits
  • 0.5.0
    • ACHTUNG: ipdb commit logic is changed
    • ipdb: do not drop failed transactions
    • ipdb: #388 – normalize IPv6 addresses
    • ipdb: #391 – support both IPv4 and IPv6 default routes
    • ipdb: #392 – fix MPLS route key reference
    • ipdb: #394 – correctly work with route priorities
    • ipdb: #408 – fix IPv6 routes in tables >= 256
    • ipdb: #416 – fix VRF interfaces creation
    • ipset: multiple improvements
    • tuntap: #469 – support s390x arch
    • nlsocket: #443 – fix socket methods resolve order for Python2
    • netns: non-destructive netns.create()
  • 0.4.18
    • ipdb: #379 [critical] – routes in global commits
    • ipdb: #380 – global commit with disabled plugins
    • ipdb: #381 – exceptions fixed
    • ipdb: #382 – manage dependent routes during interface commits
    • ipdb: #384 – global review()
    • ipdb: #385 – global drop()
    • netns: #383 – support ppc64
    • general: public API refactored (same signatures; to be documented)
  • 0.4.17
    • req: #374 [critical] – mode nla init
    • iproute: #378 [critical] – fix flush_routes() to respect filters
    • ifinfmsg: #376 – fix data plugins API to support pyinstaller
  • 0.4.16
    • ipdb: race fixed: remove port/bridge
    • ipdb: #280 – race fixed: port/bridge
    • ipdb: #302 – ipaddr views: [ifname].ipaddr.ipv4, [ifname].ipaddr.ipv6
    • ipdb: #357 – allow bridge timings to have some delta
    • ipdb: #338 – allow to fix interface objects from failed create()
    • rtnl: #336 – fix vlan flags
    • iproute: #342 – the match method takes any callable
    • nlsocket: #367 – increase default SO_SNDBUF
    • ifinfmsg: support tuntap on armv6l, armv7l platforms
  • 0.4.15
    • req: #365 – full and short nla notation fixed, critical
    • iproute: #364 – new method, brport()
    • ipdb: – support bridge port options
  • 0.4.14
    • event: new genl protocols set: VFS_DQUOT, acpi_event, thermal_event
    • ipdb: #310 – fixed priority change on routes
    • ipdb: #349 – fix setting ifalias on interfaces
    • ipdb: #353 – mitigate kernel oops during bridge creation
    • ipdb: #354 – allow to explicitly choose plugins to load
    • ipdb: #359 – provide read-only context managers
    • rtnl: #336 – vlan flags support
    • rtnl: #352 – support interface type plugins
    • tc: #344 – mirred action
    • tc: #346 – connmark action
    • netlink: #358 – memory optimization
    • config: #360 – generic asyncio config
    • iproute: #362 – allow to change or replace a qdisc
  • 0.4.13
    • ipset: full rework of the IPSET_ATTR_DATA and IPSET_ATTR_ADT ACHTUNG: this commit may break API compatibility
    • ipset: hash:mac support
    • ipset: list:set support
    • ipdb: throw EEXIST when creates VLAN/VXLAN devs with same ID, but under different names
    • tests: #329 – include unit tests into the bundle
    • legal: E/// logo removed
  • 0.4.12
    • ipdb: #314 – let users choose RTNL groups IPDB listens to
    • ipdb: #321 – isolate net_ns_.* setup in a separate code block
    • ipdb: #322 – IPv6 updates on interfaces in DOWN state
    • ifinfmsg: allow absolute/relative paths in the net_ns_fd NLA
    • ipset: #323 – support setting counters on ipset add
    • ipset: headers() command
    • ipset: revisions
    • ipset: #326 – mark types
  • 0.4.11
    • rtnl: #284 – support vlan_flags
    • ipdb: #288 – do not ignore link-local addresses
    • ipdb: #300 – sort ip addresses
    • ipdb: #306 – support net_ns_pid
    • ipdb: #307 – fix IPv6 routes management
    • ipdb: #311 – vlan interfaces address loading
    • iprsocket: #305 – support NETLINK_LISTEN_ALL_NSID
  • 0.4.10
    • devlink: fix fd leak on broken init
  • 0.4.9
    • sock_diag: initial NETLINK_SOCK_DIAG support
    • rtnl: fix critical fd leak in the compat code
  • 0.4.8
    • rtnl: compat proxying fix
  • 0.4.7
    • rtnl: compat code is back
    • netns: custom netns path support
    • ipset: multiple improvements
  • 0.4.6
    • ipdb: #278 – fix initial ports mapping
    • ipset: #277 – fix ADT attributes parsing
    • nl80211: #274, #275, #276 – BSS-related fixes
  • 0.4.5
    • ifinfmsg: GTP interfaces support
    • generic: devlink protocol support
    • generic: code cleanup
  • 0.4.4
    • iproute: #262 – get_vlans() fix
    • iproute: default mask 32 for IPv4 in addr()
    • rtmsg: #260 – RTA_FLOW support
  • 0.4.3
    • ipdb: #259 – critical Interface class fix
    • benchmark: initial release
  • 0.4.2
    • ipdb: event modules
    • ipdb: on-demand views
    • ipdb: rules management
    • ipdb: bridge controls
    • ipdb: #258 – important Python compatibility fixes
    • netns: #257 – pipe leak fix
    • netlink: support pickling for nlmsg
  • 0.4.1
    • netlink: no buffer copying in the parser
    • netlink: parse NLA on demand
    • ipdb: #244 – lwtunnel multipath fixes
    • iproute: #235 – route types
    • docs updated
  • 0.4.0
    • ACHTUNG: old kernels compatibility code is dropped
    • ACHTUNG: IPDB uses two separate sockets for monitoring and commands
    • ipdb: #244 – multipath lwtunnel
    • ipdb: #242 – AF_MPLS routes
    • ipdb: #241, #234 – fix create(…, reuse=True)
    • ipdb: #239 – route encap and metrics fixed
    • ipdb: #238 – generic port management
    • ipdb: #235 – support route scope and type
    • ipdb: #230, #232 – routes GC (work in progress)
    • rtnl: #245 – do not fail if /proc/net/psched doesn’t exist
    • rtnl: #233 – support VRF interfaces (requires net-next)
  • 0.3.21
    • ipdb: #231 – return ipdb.common as deprecated
  • 0.3.20
    • iproute: vlan_filter()
    • iproute: #229 – FDB management
    • general: exceptions re-exported via the root module
  • 0.3.19
    • rtmsg: #227 – MPLS lwtunnel basic support
    • iproute: route() docs updated
    • general: #228 – exceptions layout changed
    • package-rh: rpm subpackages
  • 0.3.18
    • version bump – include docs in the release tarball
  • 0.3.17
    • tcmsg: qdiscs and filters as plugins
    • tcmsg: #223 – tc clsact and bpf direct-action
    • tcmsg: plug, codel, choke, drr qdiscs
    • tests: CI in VMs (see civm project)
    • tests: xunit output
    • ifinfmsg: tuntap support in i386, i686
    • ifinfmsg: #207 – support vlan filters
    • examples: #226 – included in the release tarball
    • ipdb: partial commits, initial support
  • 0.3.16
    • ipdb: fix the multiple IPs in one commit case
    • rtnl: support veth peer attributes
    • netns: support 32bit i686
    • netns: fix MIPS support
    • netns: fix tun/tap creation
    • netns: fix interface move between namespaces
    • tcmsg: support hfsc, fq_codel, codel qdiscs
    • nftables: initial support
    • netlink: dump/load messages to/from simple types
  • 0.3.15
    • netns: #194 – fix fd leak
    • iproute: #184 – fix routes dump
    • rtnl: TCA_ACT_BPF support
    • rtnl: ipvlan support
    • rtnl: OVS support removed
    • iproute: rule() improved to support all NLAs
    • project supported by Ericsson
  • 0.3.14
    • package-rh: spec fixed
    • package-rh: both licenses added
    • remote: fixed the setup.py record
  • 0.3.13
    • package-rh: new rpm for Fedora and CentOS
    • remote: new draft of the remote protocol
    • netns: refactored using the new remote protocol
    • ipdb: gretap support
  • 0.3.12
    • ipdb: new Interface.wait_ip() routine
    • ipdb: #175 – fix master attribute cleanup
    • ipdb: #171 – support multipath routes
    • ipdb: memory consumption improvements
    • rtmsg: MPLS support
    • rtmsg: RTA_VIA support
    • iwutil: #174 – fix FREQ_FIXED flag
  • 0.3.11
    • ipdb: #161 – fix memory allocations
    • nlsocket: #161 – remove monitor mode
  • 0.3.10
    • rtnl: added BPF filters
    • rtnl: LWtunnel support in ifinfmsg
    • ipdb: support address attributes
    • ipdb: global transactions, initial version
    • ipdb: routes refactored to use key index (speed up)
    • config: eventlet support embedded (thanks to Angus Lees)
    • iproute: replace tc classes
    • iproute: flush_addr(), flush_rules()
    • iproute: rule() refactored
    • netns: proxy file objects (stdin, stdout, stderr)
  • 0.3.9
    • root imports: #109, #135 – issubclass, isinstance
    • iwutil: multiple improvements
    • iwutil: initial tests
    • proxy: correctly forward NetlinkError
    • iproute: neighbour tables support
    • iproute: #147, filters on dump calls
    • config: initial usage of capabilities
  • 0.3.8
    • docs: inheritance diagrams
    • nlsocket: #126, #132 – resource deallocation
    • arch: #128, #131 – MIPS support
    • setup.py: #133 – syntax error during install on Python2
  • 0.3.7
    • ipdb: new routing syntax
    • ipdb: sync interface movement between namespaces
    • ipdb: #125 – fix route metrics
    • netns: new class NSPopen
    • netns: #119 – i386 syscall
    • netns: #122 – return correct errno
    • netlink: #126 – fix socket reuse
  • 0.3.6
    • dhcp: initial release DHCPv4
    • license: dual GPLv2+ and Apache v2.0
    • ovs: port add/delete
    • macvlan, macvtap: basic support
    • vxlan: basic support
    • ipset: basic support
  • 0.3.5
    • netns: #90 – netns setns support
    • generic: #99 – support custom basic netlink socket classes
    • proxy-ng: #106 – provide more diagnostics
    • nl80211: initial nl80211 support, iwutil module added
  • 0.3.4
    • ipdb: #92 – route metrics support
    • ipdb: #85 – broadcast address specification
    • ipdb, rtnl: #84 – veth support
    • ipdb, rtnl: tuntap support
    • netns: #84 – network namespaces support, NetNS class
    • rtnl: proxy-ng API
    • pypi: #91 – embed docs into the tarball
  • 0.3.3
    • ipdb: restart on error
    • generic: handle non-existing family case
    • [fix]: #80 – Python 2.6 unicode vs -O bug workaround
  • 0.3.2
    • simple socket architecture
    • all the protocols now are based on NetlinkSocket, see examples
    • rpc: deprecated
    • iocore: deprecated
    • iproute: single-threaded socket object
    • ipdb: restart on errors
    • rtnl: updated ifinfmsg policies
  • 0.3.1
    • module structure refactored
    • new protocol: ipq
    • new protocol: nfnetlink / nf-queue
    • new protocol: generic
    • threadless sockets for all the protocols
  • 0.2.16
    • prepare the transition to 0.3.x
  • 0.2.15
    • ipdb: fr #63 – interface settings freeze
    • ipdb: fr #50, #51 – bridge & bond options (initial version)
    • RHEL7 support
    • [fix]: #52 – HTB: correct rtab compilation
    • [fix]: #53 – RHEL6.5 bridge races
    • [fix]: #55 – IPv6 on bridges
    • [fix]: #58 – vlans as bridge ports
    • [fix]: #59 – threads sync in iocore
  • 0.2.14
    • [fix]: #44 – incorrect netlink exceptions proxying
    • [fix]: #45 – multiple issues with device targets
    • [fix]: #46 – consistent exceptions
    • ipdb: LinkedSet cascade updates fixed
    • ipdb: allow to reuse existing interface in create()
  • 0.2.13
    • [fix]: #43 – pipe leak in the main I/O loop
    • tests: integrate examples, import into tests
    • iocore: use own TimeoutException instead of Queue.Empty
    • iproute: default routing table = 254
    • iproute: flush_routes() routine
    • iproute: fwmark parameter for rule() routine
    • iproute: destination and mask for rules
    • docs: netlink development guide
  • 0.2.12
    • [fix]: #33 – release resources only for bound sockets
    • [fix]: #37 – fix commit targets
    • rtnl: HFSC support
    • rtnl: priomap fixed
  • 0.2.11
    • ipdb: watchdogs to sync on RTNL events
    • ipdb: fix commit errors
    • generic: NLA operations, complement and intersection
    • docs: more autodocs in the code
    • tests: -W error: more strict testing now
    • tests: cover examples by the integration testing cycle
    • with -W error many resource leaks were fixed
  • 0.2.10
    • ipdb: command chaining
    • ipdb: fix for RHEL6.5 Python “optimizations”
    • rtnl: support TCA_U32_ACT
    • [fix]: #32 – NLA comparison
  • 0.2.9
    • ipdb: support bridges and bonding interfaces on RHEL
    • ipdb: “shadow” interfaces (still in alpha state)
    • ipdb: minor fixes on routing and compat issues
    • ipdb: as a separate package (sub-module)
    • docs: include ipdb autodocs
    • rpc: include in setup.py
  • 0.2.8
    • netlink: allow multiple NetlinkSocket allocation from one process
    • netlink: fix defragmentation for netlink-over-tcp
    • iocore: support forked IOCore and IOBroker as a separate process
    • ipdb: generic callbacks support
    • ipdb: routing support
    • rtnl: #30 – support IFLA_INFO_DATA for bond interfaces
  • 0.2.7
    • ipdb: use separate namespaces for utility functions and other stuff
    • ipdb: generic callbacks (see also IPDB.wait_interface())
    • iocore: initial multipath support
    • iocore: use of 16byte uuid4 for packet ids
  • 0.2.6
    • rpc: initial version, REQ/REP, PUSH/PULL
    • iocore: shared IOLoop
    • iocore: AddrPool usage
    • iproute: policing in FW filter
    • python3 compatibility issues fixed
  • 0.2.4
    • python3 compatibility issues fixed, tests passed
  • 0.2.3
    • [fix]: #28 – bundle issue
  • 0.2.2
    • iocore: new component
    • iocore: separate IOCore and IOBroker
    • iocore: change from peer-to-peer to flat addresses
    • iocore: REP/REQ, PUSH/PULL
    • iocore: support for UDP PUSH/PULL
    • iocore: AddrPool component for addresses and nonces
    • generic: allow multiple re-encoding
  • 0.1.12
    • ipdb: transaction commit callbacks
    • iproute: delete root qdisc (@chantra)
    • iproute: netem qdisc management (@chantra)
  • 0.1.11
    • netlink: get qdiscs for particular interface
    • netlink: IPRSocket threadless objects
    • rtnl: u32 policy setup
    • iproute: filter actions, such as ok, drop and so on
    • iproute: changed syntax of commands, actioncommand
    • tests: htb, tbf tests added
  • 0.1.10
    • [fix]: #8 – default route fix, routes filtering
    • [fix]: #9 – add/delete route routine improved
    • [fix]: #10 – shutdown sequence fixed
    • [fix]: #11 – close IPC pipes on release()
    • [fix]: #12 – stop service threads on release()
    • netlink: debug mode added to be used with GUI
    • ipdb: interface removal
    • ipdb: fail on transaction sync timeout
    • tests: R/O mode added, use export PYROUTE2_TESTS_RO=True
  • 0.1.9
    • tests: all races fixed
    • ipdb: half-sync commit(): wait for IPs and ports lists update
    • netlink: use pipes for in-process communication
    • Python 2.6 compatibility issue: remove copy.deepcopy() usage
    • QPython 2.7 for Android: works
  • 0.1.8
    • complete refactoring of class names
    • Python 2.6 compatibility issues
    • tests: code coverage, multiple code fixes
    • plugins: ptrace message source
    • packaging: RH package
  • 0.1.7
    • ipdb: interface creation: dummy, bond, bridge, vlan
    • ipdb: if_slaves interface obsoleted
    • ipdb: ‘direct’ mode
    • iproute: code refactored
    • examples: create() examples committed
  • 0.1.6
    • netlink: tc ingress, sfq, tbf, htb, u32 partial support
    • ipdb: completely re-implemented transactional model (see docs)
    • generic: internal fields declaration API changed for nlmsg
    • tests: first unit tests committed
  • 0.1.5
    • netlink: dedicated io buffering thread
    • netlink: messages reassembling
    • netlink: multi-uplink remote
    • netlink: masquerade remote requests
    • ipdb: represent interfaces hierarchy
    • iproute: decode VLAN info
  • 0.1.4
    • netlink: remote netlink access
    • netlink: SSL/TLS server/client auth support
    • netlink: tcp and unix transports
    • docs: started sphinx docs
  • 0.1.3
    • ipdb: context manager interface
    • ipdb: [fix] correctly handle ip addr changes in transaction
    • ipdb: [fix] make up()/down() methods transactional [#1]
    • iproute: mirror packets to 0 queue
    • iproute: [fix] handle primary ip address removal response
  • 0.1.2
    • initial ipdb version
    • iproute fixes
  • 0.1.1
    • initial release, iproute module
pyroute2-0.5.9/docs/html/debug.html0000644000175000017500000003413313621220106017077 0ustar peetpeet00000000000000 Netlink debug howto — pyroute2 0.5.9 documentation
pyroute2-0.5.9/docs/html/devcontribute.html0000644000175000017500000001512013621220106020661 0ustar peetpeet00000000000000 Project contribution guide — pyroute2 0.5.9 documentation

Project contribution guide

To contribute code to the project, you can use the GitHub instruments: issues and pull requests. See more on the project GitHub page: https://github.com/svinota/pyroute2

Requirements

The code should comply with some requirements:

  • the library must work on Python >= 2.6 and 3.2.
  • the code must strictly comply with PEP8 (use flake8)
  • the ctypes usage must not break the library on SELinux

Testing

To perform code tests, run make test. See README.make.md for details about the makefile parameters.

pyroute2-0.5.9/docs/html/devgeneral.html0000644000175000017500000003024513621220106020125 0ustar peetpeet00000000000000 Modules layout — pyroute2 0.5.9 documentation

It is easy to start developing with pyroute2. In the simplest case one just uses the library as is; please do not forget to file issues, bugs and feature requests on the project GitHub page.

If something should be changed in the library itself, and you think you can do it, this document should help a bit.

Modules layout

The library consists of several significant parts, and every part has its own functionality:

NetlinkSocket: connects the library to the OS
  ↑       ↑
  |       |
  |       ↓
  |     Marshal ←—→ Message classes
  |
  |
  ↓
NL utility classes: more or less user-friendly API

NetlinkSocket and Marshal: Base netlink socket and marshal

NetlinkSocket

Notice that it is possible to use a custom base class instead of socket.socket. Thus, one can transparently port this library to a different transport, or use it with the eventlet library, which is not happy with socket.socket objects, and so on.

Marshal

A custom marshalling class may be required if the protocol uses a marshalling algorithm different from the usual netlink one. Otherwise it is enough to use the register_policy method of the NetlinkSocket:

# somewhere in a custom netlink class

# dict key: message id, int
# dict value: message class
policy = {IPSET_CMD_PROTOCOL: ipset_msg,
          IPSET_CMD_LIST: ipset_msg}

def __init__(self, ...):
    ...
    self.register_policy(policy)

But if simple matching is not enough, refer to the Marshal implementation. It is possible, e.g., to define a custom fix_message method to be run on every message, etc. A sample of such a custom marshal can be found in the RTNL implementation: pyroute2.netlink.rtnl.
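
A hedged sketch of such a custom marshal; the message type id and the message class below are made up for illustration:

from pyroute2.netlink import nlmsg
from pyroute2.netlink.nlsocket import Marshal

MY_MSG_ID = 0x10  # hypothetical message type

class my_msg(nlmsg):
    # a trivial payload: one 32bit field
    fields = (('value', 'I'),)

class MyMarshal(Marshal):
    # message type -> message class
    msg_map = {MY_MSG_ID: my_msg}

    def fix_message(self, msg):
        # called for every parsed message; adjust headers, flags etc.
        pass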

Messages

The whole message class hierarchy is built on the simple fact that the netlink message structure is recursive in one way or another.

A usual way to implement messages is described in the netlink docs: Netlink.

The core module, pyroute2.netlink, provides base classes nlmsg and nla, as well as some other (genlmsg), and basic NLA types: uint32, be32, ip4addr, l2addr etc.

One of the NLA types, hex, can be used to dump the NLA structure in the hex format – it is useful for development.
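
A hedged sketch of a message declaration built from these base classes; all the names and NLA indices below are made up for illustration:

from pyroute2.netlink import nlmsg

class mymsg(nlmsg):
    # the fixed-size part of the message: (field name, struct format)
    fields = (('index', 'I'),
              ('flags', 'H'),
              ('reserved', 'H'))
    # the NLA map: (index, NLA name, NLA type)
    nla_map = ((0, 'MY_UNSPEC', 'none'),
               (1, 'MY_ADDRESS', 'ip4addr'),
               (2, 'MY_COUNTER', 'uint32'),
               (3, 'MY_RAW', 'hex'))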

NL utility classes

They are based on different netlink sockets, such as IPRsocket (RTNL), NL80211 (wireless), or just NetlinkSocket – be it generic netlink or nfnetlink (see taskstats and ipset).

Primarily, pyroute2 is a netlink framework, so the basic classes and low-level utilities are intended to return parsed netlink messages, not some user-friendly output. So do not be surprised.

But user-friendly modules are also possible and partly provided, such as IPDB.

A list of low-level utility classes:

  • IPRoute [pyroute2.iproute], RTNL utility like ip/tc
  • IPSet [pyroute2.ipset], manipulate IP sets
  • IW [pyroute2.iwutil], basic nl80211 support
  • NetNS [pyroute2.netns], netns-enabled IPRoute
  • TaskStats [pyroute2.netlink.taskstats], taskstats utility

High-level utilities:

  • IPDB [pyroute2.ipdb], async IP database

Deferred imports

The file pyroute2/__init__.py is a proxy for some modules, thus providing a fixed import address, like:

from pyroute2 import IPRoute
ipr = IPRoute()
...
ipr.close()

But that is not all. Actually, pyroute2/__init__.py exports not classes and modules, but proxy objects that load the actual code at runtime. The rationale is simple: this way it is possible to use custom base classes, see examples/custom_socket_base.py.

Protocol debugging

The simplest way to start with some netlink protocol is to use a reference implementation. Let's say we wrote the ipset_msg class using the kernel code and want to check how it works. So the ipset(8) utility will be used as a reference implementation:

$ sudo strace -e trace=network -f -x -s 4096 ipset list
socket(PF_NETLINK, SOCK_RAW, NETLINK_NETFILTER) = 3
bind(3, {sa_family=AF_NETLINK, pid=0, groups=00000000}, 12) = 0
getsockname(3, {sa_family=AF_NETLINK, pid=7009, groups=00000000}, [12]) = 0
sendto(3, "\x1c\x00\x00\x00\x01\x06\x01\x00\xe3\x95\...
recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000},
    msg_iov(1)=[{"\x1c\x00\x00\x00\x01\x06\x00\x00\xe3\...
sendto(3, "\x1c\x00\x00\x00\x07\x06\x05\x03\xe4\x95\...
recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000},
    msg_iov(1)=[{"\x78\x00\x00\x00\x07\x06\x02\x00\xe4\...

Here you can just copy the packet strings from sendto and recvmsg, place them in a file and use scripts/decoder.py to inspect them:

$ export PYTHONPATH=`pwd`
$ python scripts/decoder.py \
    pyroute2.netlink.nfnetlink.ipset.ipset_msg \
    scripts/ipset_01.data

See collected samples in the scripts directory. The script ignores spaces and allows multiple messages in the same file.

pyroute2-0.5.9/docs/html/devmodules.html

Modules in progress

There are several modules in a very early development state, and help with them will be particularly valuable. You are more than welcome to help with:

IPSet module

ipset support.

This module is tested with hash:ip, hash:net, list:set and several other ipset structures (like hash:net,iface). There is no guarantee that this module works with all available ipset types.

It supports almost all kernel commands (create, destroy, flush, rename, swap, test…)

class pyroute2.ipset.PortRange(begin, end, protocol=None)

A simple container for port range with optional protocol

Note that optional protocol parameter is not supported by all kernel ipset modules using ports. On the other hand, it’s sometimes mandatory to set it (like for hash:net,port ipsets)

Example:

import socket
from pyroute2.ipset import IPSet, PortRange

ipset = IPSet()
udp_proto = socket.getprotobyname("udp")
port_range = PortRange(1000, 2000, protocol=udp_proto)
ipset.create("foo", stype="hash:net,port")
ipset.add("foo", ("192.0.2.0/24", port_range), etype="net,port")
ipset.test("foo", ("192.0.2.0/24", port_range), etype="net,port")

class pyroute2.ipset.PortEntry(port, protocol=None)

A simple container for port entry with optional protocol

class pyroute2.ipset.IPSet(version=None, attr_revision=None, nfgen_family=2)

NFNetlink socket (family=NETLINK_NETFILTER).

Implements API to the ipset functionality.

headers(name, **kwargs)

Get the headers of the named ipset. It can be used to test whether an ipset exists, since it raises an error ("no such file or directory") when the set is missing.
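
A hedged sketch of such an existence check, assuming the error surfaces as a NetlinkError exception:

from pyroute2.ipset import IPSet
from pyroute2.netlink.exceptions import NetlinkError

ipset = IPSet()
try:
    ipset.headers("foo")       # raises if the set "foo" does not exist
    exists = True
except NetlinkError:
    exists = False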

get_proto_version(version=6)

Get supported protocol version by kernel.

The version parameter allows setting the mandatory (though apparently unused) IPSET_ATTR_PROTOCOL netlink attribute in the request.

list(*argv, **kwargs)

List installed ipsets. If name is provided, list the named ipset or return an empty list.

Be warned: netlink does not return an error if the given name does not exist; you will just receive an empty list.

destroy(name=None)

Destroy one ipset (when name is set) or all ipsets (when name is None).

create(name, stype='hash:ip', family=<AddressFamily.AF_INET: 2>, exclusive=True, counters=False, comment=False, maxelem=65536, forceadd=False, hashsize=None, timeout=None, bitmap_ports_range=None, size=None, skbinfo=False)

Create an ipset name of type stype, by default hash:ip.

Common ipset options are supported (a short sketch follows the list):

  • exclusive – if set, raise an error if the ipset exists
  • counters – enable data/packets counters
  • comment – enable comments capability
  • maxelem – max size of the ipset
  • forceadd – you should refer to the ipset manpage
  • hashsize – size of the hashtable (if any)
  • timeout – enable and set a default value for entries (if not None)
  • bitmap_ports_range – set the specified inclusive portrange for
    the bitmap ipset structure (0, 65536)
  • size – Size of the list:set, the default is 8
  • skbinfo – enable skbinfo capability
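
A short illustrative sketch combining a few of the options above (the set name and the values are arbitrary):

ipset = IPSet()
ipset.create("allowlist", stype="hash:ip",
             counters=True, comment=True,
             timeout=600, maxelem=131072)
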
add(name, entry, family=<AddressFamily.AF_INET: 2>, exclusive=True, comment=None, timeout=None, etype='ip', skbmark=None, skbprio=None, skbqueue=None, wildcard=False, **kwargs)

Add a member to the ipset.

etype is the entry type that you add to the ipset. It is related to the ipset type. For example, use “ip” for a hash:ip or bitmap:ip ipset.

When your ipset stores a tuple, like “hash:net,iface”, you must use a comma as the separator (etype=”net,iface”).

entry is a string for “ip” and “net” objects. For ipset with several dimensions, you must use a tuple (or a list) of objects.

The “port” type is specific, since you can use an integer or specialized containers like PortEntry and PortRange.

Examples:

ipset = IPSet()
ipset.create("foo", stype="hash:ip")
ipset.add("foo", "198.51.100.1", etype="ip")

ipset = IPSet()
ipset.create("bar", stype="bitmap:port",
             bitmap_ports_range=(1000, 2000))
ipset.add("bar", 1001, etype="port")
ipset.add("bar", PortRange(1500, 2000), etype="port")

ipset = IPSet()
import socket
protocol = socket.getprotobyname("tcp")
ipset.create("foobar", stype="hash:net,port")
port_entry = PortEntry(80, protocol=protocol)
ipset.add("foobar", ("198.51.100.0/24", port_entry),
          etype="net,port")

The wildcard option enables kernel wildcard matching on the interface name for net,iface entries.

delete(name, entry, family=<AddressFamily.AF_INET: 2>, exclusive=True, etype='ip')

Delete a member from the ipset.

See add() method for more information on etype.

test(name, entry, family=<AddressFamily.AF_INET: 2>, etype='ip')

Test if entry is part of an ipset

See add() method for more information on etype.

swap(set_a, set_b)

Swap two ipsets. They must have compatible content type.

flush(name=None)

Flush all ipsets. When name is set, flush only this ipset.

rename(name_src, name_dst)

Rename the ipset.

get_set_byname(name)

Get a set by its name

get_set_byindex(index)

Get a set by its index

get_supported_revisions(stype, family=<AddressFamily.AF_INET: 2>)

Return minimum and maximum of revisions supported by the kernel.

Each ipset module (like hash:net, hash:ip, etc) has several revisions. Newer revisions often have more features or better performance. Thanks to this call, you can ask the kernel for the list of supported revisions.

You can manually set/force the revision to use via the IPSet constructor (the attr_revision argument).

Example:

ipset = IPSet()
ipset.get_supported_revisions("hash:net")

ipset.get_supported_revisions("hash:net,port,net")

IW module

Experimental wireless module — nl80211 support.

Disclaimer

Unlike IPRoute, which is mostly usable though far from complete yet, the IW module is in a very initial state. Neither the module itself nor the message class covers the nl80211 functionality reasonably well. So if you’re going to use it, brace yourself — debug is coming.

Messages

nl80211 messages are defined here:

pyroute2/netlink/nl80211/__init__.py

Please notice NLAs of type hex. At this early development stage, hex allows one to inspect incoming data as a hex dump and, occasionally, even to make requests with such NLAs. But this is not a production-grade approach.

The type hex in the NLA definitions means that this particular NLA is not handled properly yet. If you want to use some NLA that is still defined as hex, please find out its specific type, patch the message class and submit your pull request on GitHub.

If you’re not familiar with NLA types, take a look at RTNL definitions:

pyroute2/netlink/rtnl/ndmsg.py

and so on.

Communication with the kernel

There are several methods of communication with the kernel.

  • sendto() — the lowest level, send raw binary data
  • put() — send a netlink message
  • nlm_request() — send a message, return the response
  • get() — get a netlink message
  • recv() — get raw binary data from the kernel

Usually there are no errors on put(). Any "permission denied" or "invalid value" errors are also returned from the kernel via netlink. So if you do put() but don't do get(), be prepared to miss errors.

The preferred method of communication is nlm_request(). It tracks the message ID and returns the corresponding response. In the case of errors nlm_request() raises an exception. To get a response for any operation with nl80211, use the flag NLM_F_ACK.
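
A hedged sketch of such a request, assuming the nl80211cmd class and the NL80211_NAMES mapping from the nl80211 module referenced above; the interface index is an example value:

from pyroute2.iwutil import IW
from pyroute2.netlink import NLM_F_REQUEST, NLM_F_ACK
from pyroute2.netlink.nl80211 import nl80211cmd, NL80211_NAMES

iw = IW()
msg = nl80211cmd()
msg['cmd'] = NL80211_NAMES['NL80211_CMD_GET_INTERFACE']
msg['attrs'] = [['NL80211_ATTR_IFINDEX', 3]]   # example ifindex
response = iw.nlm_request(msg,
                          msg_type=iw.prid,
                          msg_flags=NLM_F_REQUEST | NLM_F_ACK)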

Reverse it

If you’re too lazy to read the kernel sources, but still need something not implemented here, you can use reverse engineering on a reference implementation. E.g.:

# strace -e trace=network -f -x -s 4096 \
        iw phy phy0 interface add test type monitor

This will dump all the netlink traffic between the program iw and the kernel. The first three packets are the generic netlink protocol discovery; you can ignore them. All that follows is the nl80211 traffic:

sendmsg(3, {msg_name(12)={sa_family=AF_NETLINK, ... },
    msg_iov(1)=[{"\x30\x00\x00\x00\x1b\x00\x05 ...", 48}],
    msg_controllen=0, msg_flags=0}, 0) = 48
recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, ... },
    msg_iov(1)=[{"\x58\x00\x00\x00\x1b\x00\x00 ...", 16384}],
    msg_controllen=0, msg_flags=0}, 0) = 88
...

With -s 4096 you will get the full dump. Then copy the strings from msg_iov to a file, let’s say data, and run the decoder:

$ pwd
/home/user/Projects/pyroute2
$ export PYTHONPATH=`pwd`
$ python scripts/decoder.py pyroute2.netlink.nl80211.nl80211cmd data

You will get the session decoded:

{'attrs': [['NL80211_ATTR_WIPHY', 0],
           ['NL80211_ATTR_IFNAME', 'test'],
           ['NL80211_ATTR_IFTYPE', 6]],
 'cmd': 7,
 'header': {'flags': 5,
            'length': 48,
            'pid': 3292542647,
            'sequence_number': 1430426434,
            'type': 27},
 'reserved': 0,
 'version': 0}
{'attrs': [['NL80211_ATTR_IFINDEX', 23811],
           ['NL80211_ATTR_IFNAME', 'test'],
           ['NL80211_ATTR_WIPHY', 0],
           ['NL80211_ATTR_IFTYPE', 6],
           ['NL80211_ATTR_WDEV', 4],
           ['NL80211_ATTR_MAC', 'a4:4e:31:43:1c:7c'],
           ['NL80211_ATTR_GENERATION', '02:00:00:00']],
 'cmd': 7,
 'header': {'flags': 0,
            'length': 88,
            'pid': 3292542647,
            'sequence_number': 1430426434,
            'type': 27},
 'reserved': 0,
 'version': 1}

Now you know how to do a request and what you will get as a response. Sample collected data is in the scripts directory.

Submit changes

Please do not hesitate to submit the changes on github. Without your patches this module will not evolve.

class pyroute2.iwutil.IW(*argv, **kwarg)
del_interface(dev)

Delete a virtual interface

  • dev — device index
add_interface(ifname, iftype, dev=None, phy=0)

Create a virtual interface

  • ifname — name of the interface to create
  • iftype — interface type to create
  • dev — device index
  • phy — phy index

One should specify either dev (device index) or phy (phy index). If neither is specified, phy == 0 is assumed.

iftype can be an integer or a string:

  1. adhoc
  2. station
  3. ap
  4. ap_vlan
  5. wds
  6. monitor
  7. mesh_point
  8. p2p_client
  9. p2p_go
  10. p2p_device
  11. ocb
list_dev()

Get list of all wifi network interfaces

list_wiphy()

Get list of all phy devices

get_interfaces_dict()

Get interfaces dictionary

get_interfaces_dump()

Get interfaces dump

get_interface_by_phy(attr)

Get interface by phy ( use x.get_attr(‘NL80211_ATTR_WIPHY’) )

get_interface_by_ifindex(ifindex)

Get interface by ifindex ( use x.get_attr(‘NL80211_ATTR_IFINDEX’) )

get_stations(ifindex)

Get stations by ifindex

join_ibss(ifindex, ssid, freq, bssid=None, channel_fixed=False, width=None, center=None, center2=None)
Connect to network by ssid
  • ifindex - IFINDEX of the interface to perform the connection
  • ssid - Service set identification
  • freq - Frequency in MHz
  • bssid - The MAC address of target interface
  • channel_fixed - Boolean flag
  • width - Channel width
  • center - Central frequency of the 40/80/160 MHz channel
  • center2 - Center frequency of second segment if 80P80

If the flag of channel_fixed is True, one should specify both the width and center of the channel

width can be an integer or a string:

  1. 20_noht
  2. 20
  3. 40
  4. 80
  5. 80p80
  6. 160
  7. 5
  8. 10
leave_ibss(ifindex)

Leave the IBSS – the IBSS is determined by the network interface

authenticate(ifindex, bssid, ssid, freq, auth_type=0)

Send an Authentication management frame.

deauthenticate(ifindex, bssid, reason_code=1)

Send a Deauthentication management frame.

associate(ifindex, bssid, ssid, freq, info_elements=None)

Send an Association request frame.

disassociate(ifindex, bssid, reason_code=3)

Send a Disassociation management frame.

connect(ifindex, ssid, bssid=None)

Connect to the ap with ssid and bssid

disconnect(ifindex)

Disconnect the device

scan(ifindex, ssids=None, flush_cache=False)

Trigger scan and get results.

Triggering scan usually requires root, and can take a couple of seconds.

get_associated_bss(ifindex)

Returns the same info as scan() does, but only about the currently associated BSS.

Unlike scan(), it returns immediately and doesn’t require root.

Network settings daemon – pyrouted

Pyrouted is a standalone system service project that utilizes the pyroute2 library. It consists of a daemon controlled by systemd and a CLI utility that communicates with the daemon via a UNIX socket.

It is an extremely simple and basic network interface setup tool.

pyroute2-0.5.9/docs/html/dhcp.html

DHCP support

DHCP support in pyroute2 is in a very initial state, so it is still in the «Development» section. The DHCP protocol has nothing to do with netlink, but pyroute2 is slowly moving from a netlink-only library towards a more general networking framework.

DHCP protocol

The DHCP implementation here is far from complete, but already provides some basic functionality. Later it will be extended with IPv6 support and more DHCP options will be added.

Right now it is interesting mostly to developers rather than to users or system administrators. So, the development hints come first.

The packet structure description is intentionally implemented as for netlink packets. Later these two parsers, netlink and generic, can be merged, so the syntax is more or less compatible.

Packet fields

There are two big groups of items within any DHCP packet. First, there are BOOTP/DHCP packet fields, they’re defined with the fields attribute:

class dhcp4msg(msg):
    fields = ((name, format, policy),
              (name, format, policy),
              ...
              (name, format, policy))

The name can be any literal. The format should be specified as for the struct module, like B for uint8, i for int32, or >Q for big-endian uint64. There are also aliases defined, so one can write uint8, be16 and so on. Possible aliases can be seen in the pyroute2.protocols module.

The policy is a bit more complicated. If it is a number or a literal, it is treated as a default value to be encoded when no other value is given.

But when the policy is a dictionary, it can contain keys as follows:

'l2addr': {'format': '6B',
           'decode': ...,
           'encode': ...}

Keys encode and decode should contain filters to be used in decoding and encoding procedures. The encoding filter should accept the value from user’s definition and should return a value that can be packed using format. The decoding filter should accept a value, decoded according to format, and should return value that can be used by a user.

The struct module can not decode IP addresses etc., so they should be decoded as, e.g., 4s, and the further transformation from a 4-byte string to a string like ‘10.0.0.1’ is performed by a filter.
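
For example, a policy for an IPv4 address field along these lines may look as follows (the filters use only the stdlib socket module; the key name is illustrative):

from socket import inet_aton, inet_ntoa

ip4addr_policy = {'format': '4s',
                  'encode': lambda x: inet_aton(x),   # '10.0.0.1' -> 4 bytes
                  'decode': lambda x: inet_ntoa(x)}   # 4 bytes -> '10.0.0.1'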

DHCP options

DHCP options are described in a similar way:

options = ((code, name, format),
           (code, name, format),
           ...
           (code, name, format))

Code is a uint8 value, and name can be any string literal. Format is a string that must have a corresponding class inherited from pyroute2.dhcp.option. One can find these classes in pyroute2.dhcp (more generic) or in pyroute2.dhcp.dhcp4msg (IPv4-specific). The option class must reside within the dhcp message class.

Every option class can be decoded in two ways. If it has fixed width fields, it can be decoded with ordinary msg routines, and in this case it can look like that:

class client_id(option):
    fields = (('type', 'uint8'),
              ('key', 'l2addr'))

If it must be decoded by some custom rules, one can define the policy just like for the fields above:

class array8(option):
    policy = {'format': 'string',
              'encode': lambda x: array('B', x).tobytes(),
              'decode': lambda x: array('B', x).tolist()}

In the corresponding modules, like pyroute2.dhcp.dhcp4msg, one can define as many custom DHCP options as one needs. Just make sure that they are compatible with the DHCP server and that all codes fit into 1..254 (uint8) – the code 0 is used for padding and the code 255 marks the end of options.

class pyroute2.dhcp.option(content=None, buf=b'', offset=0, value=None, code=0)
class pyroute2.dhcp.dhcpmsg(content=None, buf=b'', offset=0, value=None)
class none(content=None, buf=b'', offset=0, value=None, code=0)
class be16(content=None, buf=b'', offset=0, value=None, code=0)
class be32(content=None, buf=b'', offset=0, value=None, code=0)
class uint8(content=None, buf=b'', offset=0, value=None, code=0)
class string(content=None, buf=b'', offset=0, value=None, code=0)
class array8(content=None, buf=b'', offset=0, value=None, code=0)
class client_id(content=None, buf=b'', offset=0, value=None, code=0)

IPv4 DHCP socket

class pyroute2.dhcp.dhcp4socket.DHCP4Socket(ifname, port=68)

Parameters:

  • ifname – interface name to work on

This raw socket binds to an interface and installs a BPF filter to get only its UDP port. It can be used in poll/select and also provides the context manager protocol, so it can be used in with statements.

It does not provide any DHCP state machine and does not inspect DHCP packets; that is totally up to you. No default values are provided here, except xid – the DHCP transaction ID. If xid is not provided, DHCP4Socket generates it for outgoing messages.

put(msg=None, dport=67)

Put DHCP message. Parameters:

  • msg – dhcp4msg instance
  • dport – DHCP server port

If msg is not provided, it is constructed as default BOOTREQUEST + DHCPDISCOVER.

Examples:

sock.put(dhcp4msg({'op': BOOTREQUEST,
                   'chaddr': 'ff:11:22:33:44:55',
                   'options': {'message_type': DHCPREQUEST,
                               'parameter_list': [1, 3, 6, 12, 15],
                               'requested_ip': '172.16.101.2',
                               'server_id': '172.16.101.1'}}))

The method returns the dhcp4msg that was sent, so one can get the xid (transaction id) and other details from it.

get()

Get the next incoming packet from the socket and try to decode it as IPv4 DHCP. No analysis is done here, only MAC/IPv4/UDP headers are stripped out, and the rest is interpreted as DHCP.
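
A hedged usage sketch combining put() and get(); the interface name is an example, and no retries or DHCP state machine are implemented:

from pyroute2.dhcp.dhcp4socket import DHCP4Socket

with DHCP4Socket('eth0') as sock:
    request = sock.put()   # default BOOTREQUEST + DHCPDISCOVER
    reply = sock.get()     # blocks until a DHCP packet arrives
    if reply['xid'] == request['xid']:
        print(reply)       # inspect the offer, if any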

pyroute2-0.5.9/docs/html/general.html

Pyroute2

Pyroute2 is a pure Python netlink library. It requires only Python stdlib, no 3rd party libraries. The library was started as an RTNL protocol implementation, so the name is pyroute2, but now it supports many netlink protocols. Some supported netlink families and protocols:

  • rtnl, network settings — addresses, routes, traffic controls
  • nfnetlink — netfilter API: ipset, nftables, …
  • ipq — simplest userspace packet filtering, iptables QUEUE target
  • devlink — manage and monitor devlink-enabled hardware
  • generic — generic netlink families
    • ethtool — low-level network interface setup
    • nl80211 — wireless functions API (basic support)
    • taskstats — extended process statistics
    • acpi_events — ACPI events monitoring
    • thermal_events — thermal events monitoring
    • VFS_DQUOT — disk quota events monitoring

Latest important milestones:

  • 0.5.8 — Ethtool support
  • 0.5.7 — WireGuard support
  • 0.5.2 — PF_ROUTE support on FreeBSD and OpenBSD

Supported systems

Pyroute2 runs natively on Linux and emulates some limited subset of RTNL netlink API on BSD systems on top of PF_ROUTE notifications and standard system tools.

Other platforms are not supported.

The simplest usecase

The objects, provided by the library, are socket objects with an extended API. The additional functionality aims to:

  • Help to open/bind netlink sockets
  • Discover generic netlink protocols and multicast groups
  • Construct, encode and decode netlink and PF_ROUTE messages

Maybe the simplest usecase is to monitor events. Disk quota events:

from pyroute2 import DQuotSocket
# DQuotSocket automatically performs discovery and binding,
# since it has no other functionality beside of the monitoring
with DQuotSocket() as ds:
    for message in ds.get():
        print(message)

Get notifications about network settings changes with IPRoute:

from pyroute2 import IPRoute
with IPRoute() as ipr:
    # With IPRoute objects you have to call bind() manually
    ipr.bind()
    for message in ipr.get():
        print(message)

Network namespace examples

Network namespace manipulation:

from pyroute2 import netns
# create netns
netns.create('test')
# list
print(netns.listnetns())
# remove netns
netns.remove('test')

Create veth interfaces pair and move to netns:

from pyroute2 import IPRoute

with IPRoute() as ipr:

    # create interface pair
    ipr.link('add',
             ifname='v0p0',
             kind='veth',
             peer='v0p1')

    # lookup the peer index
    idx = ipr.link_lookup(ifname='v0p1')[0]

    # move the peer to the 'test' netns:
    ipr.link('set',
             index=idx,
             net_ns_fd='test')

List interfaces in some netns:

from pyroute2 import NetNS
from pprint import pprint

ns = NetNS('test')
pprint(ns.get_links())
ns.close()

For more details and samples see the documentation.

Installation

make install or pip install pyroute2

Requirements

Python >= 2.7

The pyroute2 testing framework requirements:

  • flake8
  • coverage
  • nosetests
  • sphinx
  • netaddr

Optional dependencies for testing:

pyroute2-0.5.9/docs/html/generator.html

Generators

Problem

Until 0.5.2 pyroute2 collected all the responses in a list and returned them at once. That may be OK as long as there are not too many objects to return. But let's say there are hundreds of thousands of routes:

$ ip ro | wc -l
315417

Now we use a script to retrieve the routes:

import sys
from pyroute2 import config
from pyroute2 import IPRoute

config.nlm_generator = (sys.argv[1].lower()
                        if len(sys.argv) > 1
                        else 'false') == 'true'

with IPRoute() as ipr:
    for route in ipr.get_routes():
        pass

If the library collects all the routes in a list and returns the list, it may take a lot of memory:

$ /usr/bin/time -v python e.py false
    Command being timed: "python e.py false"
    User time (seconds): 30.42
    System time (seconds): 3.63
    Percent of CPU this job got: 99%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:34.09
    Average shared text size (kbytes): 0
    Average unshared data size (kbytes): 0
    Average stack size (kbytes): 0
    Average total size (kbytes): 0
    Maximum resident set size (kbytes): 2416472
    Average resident set size (kbytes): 0
    Major (requiring I/O) page faults: 0
    Minor (reclaiming a frame) page faults: 604787
    Voluntary context switches: 9
    Involuntary context switches: 688
    Swaps: 0
    File system inputs: 0
    File system outputs: 0
    Socket messages sent: 0
    Socket messages received: 0
    Signals delivered: 0
    Page size (bytes): 4096
    Exit status: 0

2416472 kbytes of RSS. Pretty much.

Solution

Now we use generator to iterate the results:

$ /usr/bin/time -v python e.py true
    Command being timed: "python e.py true"
    User time (seconds): 18.48
    System time (seconds): 0.99
    Percent of CPU this job got: 99%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:19.49
    Average shared text size (kbytes): 0
    Average unshared data size (kbytes): 0
    Average stack size (kbytes): 0
    Average total size (kbytes): 0
    Maximum resident set size (kbytes): 45132
    Average resident set size (kbytes): 0
    Major (requiring I/O) page faults: 0
    Minor (reclaiming a frame) page faults: 433589
    Voluntary context switches: 9
    Involuntary context switches: 244
    Swaps: 0
    File system inputs: 0
    File system outputs: 0
    Socket messages sent: 0
    Socket messages received: 0
    Signals delivered: 0
    Page size (bytes): 4096
    Exit status: 0

45132 kbytes of RSS. That’s the difference. Say we have a bit more routes:

$ ip ro | wc -l
678148

Without generators the script will simply run out of memory. But with the generators:

$ /usr/bin/time -v python e.py true
    Command being timed: "python e.py true"
    User time (seconds): 39.63
    System time (seconds): 2.78
    Percent of CPU this job got: 99%
    Elapsed (wall clock) time (h:mm:ss or m:ss): 0:42.75
    Average shared text size (kbytes): 0
    Average unshared data size (kbytes): 0
    Average stack size (kbytes): 0
    Average total size (kbytes): 0
    Maximum resident set size (kbytes): 45324
    Average resident set size (kbytes): 0
    Major (requiring I/O) page faults: 0
    Minor (reclaiming a frame) page faults: 925560
    Voluntary context switches: 11
    Involuntary context switches: 121182
    Swaps: 0
    File system inputs: 0
    File system outputs: 0
    Socket messages sent: 0
    Socket messages received: 0
    Signals delivered: 0
    Page size (bytes): 4096
    Exit status: 0

Again, 45324 kbytes of RSS.

Configuration

To turn the generator option on, one should set pyroute2.config.nlm_generator to True. By default it is False, so as not to break existing projects:

from pyroute2 import config
from pyroute2 import IPRoute

config.nlm_generator = True
with IPRoute() as ipr:
    for route in ipr.get_routes():
        handle(route)

IPRoute and generators

IPRoute objects will return generators only for methods that employ GET_... requests, like get_routes(), get_links(), link('dump', ...), addr('dump', ...). Setters will work as usual and apply changes immediately.

pyroute2-0.5.9/docs/html/ipdb.html

IPDB module

Warning

The IPDB module has design issues that may not be fixed. It is recommended to switch to NDB wherever it’s possible.

Basically, IPDB is a transactional database containing records that represent network stack objects. Any change in the database is not reflected immediately in the OS, but waits until commit() is called. One failed operation during commit() rolls back all the changes that have been made so far. Moreover, IPDB has a commit hooks API that allows you to roll back changes depending on your own function calls, e.g. when a host or a network becomes unreachable.

Limitations

One of the major issues with IPDB is its memory footprint. It proved not to be suitable for environments with thousands of routes or neighbours. Being a design issue, it could not be fixed, so a new module, NDB, was started to replace IPDB. IPDB is still more feature rich, but NDB is already faster and more stable.

IPDB, NDB, IPRoute

These modules use different approaches.

  • IPRoute just forwards requests to the kernel, and doesn’t wait for the system state. So it’s up to developer to check, whether the requested object is really set up or not.
  • IPDB is an asynchronously updated database that starts several additional threads by default. If your project’s policy doesn’t allow implicit threads, keep that in mind. But unlike IPRoute, IPDB ensures that the changes are reflected in the system.
  • NDB is like IPDB, and will obsolete it in the future. The difference is that IPDB creates Python object for every RTNL object, while NDB stores everything in an SQL DB, and creates objects on demand.

Being asynchronously updated, IPDB does sync on commit:

with IPDB() as ipdb:
    with ipdb.interfaces['eth0'] as i:
        i.up()
        i.add_ip('192.168.0.2/24')
        i.add_ip('192.168.0.3/24')
    # ---> <--- here you can expect `eth0` is up
    #           and has these two addresses, so
    #           the following code can rely on that

NB: In the example above `commit()` is implied with the `__exit__()` of the `with` statement.

IPDB and other software

IPDB is designed to be a non-exclusive network settings database. There may be several IPDB instances on the same OS, as well as other network management software, such as NetworkManager etc.

The IPDB transactions should not interfere with other software settings, unless they touch the same objects. E.g., if IPDB brings an interface up, while NM shuts it down, there will be a race condition.

An example:

# IPDB code                       #  NetworkManager at the same time:
ipdb.interfaces['eth0'].up()      #
ipdb.interfaces['eth0'].commit()  #  $ sudo nmcli con down eth0
# ---> <---
# The eth0 state here is undefined. Some of the commands
# above will fail

But as long as the software doesn’t touch the same objects, there will be no conflicts. Another example:

# IPDB code                         # At the same time, NetworkManager
with ipdb.interfaces['eth0'] as i:  # adds addresses:
    i.add_ip('172.16.254.2/24')     #  * 10.0.0.2/24
    i.add_ip('172.16.254.3/24')     #  * 10.0.0.3/24
# ---> <---
# At this point the eth0 interface will have all four addresses.
# If the IPDB transaction fails by some reason, only IPDB addresses
# will be rolled back.

There may be a need to prevent other software from changing the network settings. There is no locking at the kernel level, but IPDB can revert all the changes as soon as they appear on the interface:

# IPDB code
ipdb.interfaces['eth0'].freeze()
                                   # Here some other software tries to
                                   # add an address, or to remove the old
                                   # one
# ---> <---
# At this point the eth0 interface will have all the same settings as
# at the `freeze()` call moment. Newly added addresses will be removed,
# all the deleted addresses will be restored.
#
# Please notice, that an address removal may cause also a routes removal,
# and that is something that IPDB can neither prevent nor revert.

ipdb.interfaces['eth0'].unfreeze()

Quickstart

Simple tutorial:

from pyroute2 import IPDB
# several IPDB instances are supported within one process
ipdb = IPDB()

# commit is called automatically upon the exit from `with`
# statement
with ipdb.interfaces.eth0 as i:
    i.address = '00:11:22:33:44:55'
    i.ifname = 'bala'
    i.txqlen = 2000

# basic routing support
ipdb.routes.add({'dst': 'default',
                 'gateway': '10.0.0.1'}).commit()

# do not forget to shutdown IPDB
ipdb.release()

Please notice the ipdb.release() call at the end. Though it is not forced in an interactive python session for a better user experience, it is required in scripts to sync the IPDB state before exit.

IPDB supports functional-like syntax also:

from pyroute2 import IPDB
with IPDB() as ipdb:
    intf = (ipdb.interfaces['eth0']
            .add_ip('10.0.0.2/24')
            .add_ip('10.0.0.3/24')
            .set_address('00:11:22:33:44:55')
            .set_mtu(1460)
            .set_name('external')
            .commit())
    # ---> <--- here you have the interface reference with
    #           all the changes applied: renamed, added ipaddr,
    #           changed macaddr and mtu.
    ...  # some code

# pls notice, that the interface reference will not work
# outside of `with IPDB() ...`

Transaction modes

IPDB has several operating modes:

  • ‘implicit’ (default) – the first change starts an implicit
    transaction, that have to be committed
  • ‘explicit’ – you have to begin() a transaction prior to
    make any change

The default is to use implicit transaction. This behaviour can be changed in the future, so use ‘mode’ argument when creating IPDB instances.

The sample session with explicit transactions:

In [1]: from pyroute2 import IPDB
In [2]: ip = IPDB(mode='explicit')
In [3]: ifdb = ip.interfaces
In [4]: ifdb.tap0.begin()
    Out[4]: UUID('7a637a44-8935-4395-b5e7-0ce40d31d937')
In [5]: ifdb.tap0.up()
In [6]: ifdb.tap0.address = '00:11:22:33:44:55'
In [7]: ifdb.tap0.add_ip('10.0.0.1', 24)
In [8]: ifdb.tap0.add_ip('10.0.0.2', 24)
In [9]: ifdb.tap0.review()
    Out[9]:
    {'+ipaddr': set([('10.0.0.2', 24), ('10.0.0.1', 24)]),
     '-ipaddr': set([]),
     'address': '00:11:22:33:44:55',
     'flags': 4099}
In [10]: ifdb.tap0.commit()

Note that you can review() the current_tx transaction, and commit() or drop() it. Also, multiple transactions are supported; use the uuid returned by begin() to identify them.

Actually, a form like ‘ip.tap0.address’ is just eye-candy. The IPDB objects are dictionaries, so you can write the code above like this:

ipdb.interfaces['tap0'].down()
ipdb.interfaces['tap0']['address'] = '00:11:22:33:44:55'
...

Context managers

Transactional objects (interfaces, routes) can act as context managers in the same way as IPDB does itself:

with ipdb.interfaces.tap0 as i:
    i.address = '00:11:22:33:44:55'
    i.ifname = 'vpn'
    i.add_ip('10.0.0.1', 24)
    i.add_ip('10.0.0.2', 24)

On exit, the context manager will automatically commit() the transaction.

Read-only interface views

Using an interface as a context manager will start a transaction. Sometimes that is not what one needs. To avoid unnecessary transactions, and to avoid the risk of accidentally changing interface attributes, one can use read-only views:

with ipdb.interfaces[1].ro as iface:
    print(iface.ifname)
    print(iface.address)

The .ro view neither starts transactions nor allows changing anything; any attempt to do so raises a RuntimeError exception.

The same read-only views are available for routes and rules.

Create interfaces

IPDB can also create virtual interfaces:

with ipdb.create(kind='bridge', ifname='control') as i:
    i.add_port(ipdb.interfaces.eth1)
    i.add_port(ipdb.interfaces.eth2)
    i.add_ip('10.0.0.1/24')

The IPDB.create() call has the same syntax as IPRoute.link(‘add’, …), except you shouldn’t specify the ‘add’ command. Refer to IPRoute docs for details.

Please notice, that the interface object stays in the database even if there was an error during the interface creation. It is done so to make it possible to fix the interface object and try to run commit() again. Or you can drop the interface object with the .remove().commit() call.

IP address management

IP addresses on interfaces may be managed using add_ip() and del_ip():

with ipdb.interfaces['eth0'] as eth:
    eth.add_ip('10.0.0.1/24')
    eth.add_ip('10.0.0.2/24')
    eth.add_ip('2001:4c8:1023:108::39/64')
    eth.del_ip('172.16.12.5/24')

The address format may be either a string with ‘address/mask’ notation, or a pair of ‘address’, mask:

with ipdb.interfaces['eth0'] as eth:
    eth.add_ip('10.0.0.1', 24)
    eth.del_ip('172.16.12.5', 24)

The ipaddr attribute contains all the IP addresses of the interface, which are accessible in different ways. Getting an iterator from ipaddr gives you a sequence of tuples (‘address’, mask):

>>> for addr in ipdb.interfaces['eth0'].ipaddr:
...    print(addr)
...
('10.0.0.2', 24)
('10.0.0.1', 24)

Getting one IP from ipaddr returns a dict object with full spec:

>>> ipdb.interfaces['eth0'].ipaddr[0]:
    {'family': 2,
     'broadcast': None,
     'flags': 128,
     'address': '10.0.0.2',
     'prefixlen': 24,
     'local': '10.0.0.2'}
>>> ipdb.interfaces['eth0'].ipaddr['10.0.0.2/24']:
    {'family': 2,
     'broadcast': None,
     'flags': 128,
     'address': '10.0.0.2',
     'prefixlen': 24,
     'local': '10.0.0.2'}

The API is a bit weird, but it’s because of historical reasons. In the future it may be changed.

Another feature of the ipaddr attribute is views:

>>> ipdb.interfaces['eth0'].ipaddr.ipv4:
    (('10.0.0.2', 24), ('10.0.0.1', 24))
>>> ipdb.interfaces['eth0'].ipaddr.ipv6:
    (('2001:4c8:1023:108::39', 64),)

The views, as well as the ipaddr attribute itself are not supposed to be changed by user, but only by the internal API.

Bridge interfaces

Modern kernels provide the possibility to manage bridge interface properties such as STP, forward delay, ageing time etc. The names of these properties start with br_, like br_ageing_time, br_forward_delay, e.g.:

[x for x in dir(ipdb.interfaces.virbr0) if x.startswith('br_')]

Bridge ports

IPDB supports specific bridge port parameters, such as proxyarp, unicast/multicast flood, cost etc.:

with ipdb.interfaces['br-port0'] as p:
    p.brport_cost = 200
    p.brport_unicast_flood = 0
    p.brport_proxyarp = 0

Ports management

IPDB provides a uniform API to manage bridge, bond and vrf ports:

with ipdb.interfaces['br-int'] as br:
    br.add_port('veth0')
    br.add_port(ipdb.interfaces.veth1)
    br.add_port(700)
    br.del_port('veth2')

Both add_port() and del_port() accept three types of arguments:

  • ‘veth0’ – interface name as a string
  • ipdb.interfaces.veth1 – IPDB interface object
  • 700 – interface index, an integer

Routes management

IPDB has a simple yet useful routing management interface.

Create a route

To add a route, there is an easy to use syntax:

# spec as a dictionary
spec = {'dst': '172.16.1.0/24',
        'oif': 4,
        'gateway': '192.168.122.60',
        'metrics': {'mtu': 1400,
                    'advmss': 500}}

# pass spec as is
ipdb.routes.add(spec).commit()

# pass spec as kwargs
ipdb.routes.add(**spec).commit()

# use keyword arguments explicitly
ipdb.routes.add(dst='172.16.1.0/24', oif=4, ...).commit()

Please notice that the device can be specified with oif (output interface) or iif (input interface); the device keyword is not supported anymore.

More examples:

# specify table and priority
(ipdb.routes
 .add(dst='172.16.1.0/24',
      gateway='192.168.0.1',
      table=100,
      priority=10)
 .commit())

The priority field is what the iproute2 utility calls metric – see also below.

Get a route

To access and change the routes, one can use notations as follows:

# default table (254)
#
# change the route gateway and mtu
#
with ipdb.routes['172.16.1.0/24'] as route:
    route.gateway = '192.168.122.60'
    route.metrics.mtu = 1500

# access the default route
print(ipdb.routes['default'])

# change the default gateway
with ipdb.routes['default'] as route:
    route.gateway = '10.0.0.1'

By default, the path ipdb.routes reflects only the main routing table (254). But Linux supports many more routing tables, and so does IPDB:

In [1]: ipdb.routes.tables.keys()
Out[1]: [0, 254, 255]

In [2]: len(ipdb.routes.tables[255])
Out[2]: 11  # => 11 automatic routes in the table local

It is important to understand that routing table keys in IPDB are not only the destination prefix. The key consists of the ‘prefix/mask’ string and the route priority (if any):

In [1]: ipdb.routes.tables[254].idx.keys()
Out[1]:
[RouteKey(dst='default', table=254, family=2, ...),
 RouteKey(dst='172.17.0.0/16', table=254, ...),
 RouteKey(dst='172.16.254.0/24', table=254, ...),
 RouteKey(dst='192.168.122.0/24', table=254, ...),
 RouteKey(dst='fe80::/64', table=254, family=10, ...)]

But a routing table in IPDB allows several variants of the route spec. The simplest case is to retrieve a route by prefix, if there is only one match:

# get route by prefix
ipdb.routes['172.16.1.0/24']

# get route by a special name
ipdb.routes['default']

If there is more than one route that matches the spec, only the first one will be retrieved. One should iterate all the records and filter by a key to retrieve all matches:

# only one route will be retrieved
ipdb.routes['fe80::/64']

# get all routes by this prefix
[ x for x in ipdb.routes if x['dst'] == 'fe80::/64' ]

It is also possible to use dicts as specs:

# get IPv4 default route
ipdb.routes[{'dst': 'default', 'family': AF_INET}]

# get IPv6 default route
ipdb.routes[{'dst': 'default', 'family': AF_INET6}]

# get route by priority
ipdb.routes.table[100][{'dst': '10.0.0.0/24', 'priority': 10}]

While this notation returns one route, there is a method to get all the routes matching the spec:

# get all the routes from all the tables via some interface
ipdb.routes.filter({'oif': idx})

# get all IPv6 routes from some table
ipdb.routes.table[tnum].filter({'family': AF_INET6})

Route metrics

A special object is dedicated to route metrics, one can access it via route.metrics or route[‘metrics’]:

# these two statements are equal:
with ipdb.routes['172.16.1.0/24'] as route:
    route['metrics']['mtu'] = 1400

with ipdb.routes['172.16.1.0/24'] as route:
    route.metrics.mtu = 1400

Possible metrics are defined in rtmsg.py:rtmsg.metrics, e.g. RTAX_HOPLIMIT means hoplimit metric etc.

Multipath routing

Multipath nexthops are managed via route.add_nh() and route.del_nh() methods. They are available to review via route.multipath, but one should not directly add/remove/modify nexthops in route.multipath, as the changes will not be committed correctly.

To create a multipath route:

ipdb.routes.add({'dst': '172.16.232.0/24',
                 'multipath': [{'gateway': '172.16.231.2',
                                'hops': 2},
                               {'gateway': '172.16.231.3',
                                'hops': 1},
                               {'gateway': '172.16.231.4'}]}).commit()

To change a multipath route:

with ipdb.routes['172.16.232.0/24'] as r:
    r.add_nh({'gateway': '172.16.231.5'})
    r.del_nh({'gateway': '172.16.231.4'})

Another possible way is to create a normal route and turn it into multipath by add_nh():

# create a non-MP route with one gateway:
(ipdb
 .routes
 .add({'dst': '172.16.232.0/24',
       'gateway': '172.16.231.2'})
 .commit())

# turn it to become a MP route:
(ipdb
 .routes['172.16.232.0/24']
 .add_nh({'gateway': '172.16.231.3'})
 .commit())

# here the route will contain two NH records, with
# gateways 172.16.231.2 and 172.16.231.3

# remove one NH and turn the route to be a normal one
(ipdb
 .routes['172.16.232.0/24']
 .del_nh({'gateway': '172.16.231.2'})
 .commit())

# thereafter the traffic to 172.16.232.0/24 will go only
# via 172.16.231.3

Differences from the iproute2 syntax

For historical reasons, iproute2 uses names that differ from what the kernel uses. E.g., iproute2 uses weight for multipath route hops instead of hops, where weight == (hops + 1). Thus, a route created with hops == 2 will be listed by iproute2 as weight 3.

Another significant difference is metrics. The pyroute2 library uses the kernel naming scheme, where metrics means mtu, rtt, window etc. The iproute2 utility uses metric (not metrics) as a name for the priority field.

In examples:

# -------------------------------------------------------
# iproute2 command:
$ ip route add default \
    nexthop via 172.16.0.1 weight 2 \
    nexthop via 172.16.0.2 weight 9

# pyroute2 code:
(ipdb
 .routes
 .add({'dst': 'default',
       'multipath': [{'gateway': '172.16.0.1', 'hops': 1},
                      {'gateway': '172.16.0.2', 'hops': 8}]})
 .commit())

# -------------------------------------------------------
# iproute2 command:
$ ip route add default via 172.16.0.2 metric 200

# pyroute2 code:
(ipdb
 .routes
 .add({'dst': 'default',
       'gateway': '172.16.0.2',
       'priority': 200})
 .commit())

# -------------------------------------------------------
# iproute2 command:
$ ip route add default via 172.16.0.2 mtu 1460

# pyroute2 code:
(ipdb
 .routes
 .add({'dst': 'default',
       'gateway': '172.16.0.2',
       'metrics': {'mtu': 1460}})
 .commit())

Multipath default routes

Warning

As of the merge of kill_rtcache into the kernel and its release in ~3.6, weighted default routes no longer work in Linux.

Please refer to https://github.com/svinota/pyroute2/issues/171#issuecomment-149297244 for details.

Rules management

IPDB provides a basic IP rules management system.

Create a rule

Syntax is almost the same as for routes:

# rule spec
spec = {'src': '172.16.1.0/24',
        'table': 200,
        'priority': 15000}

ipdb.rules.add(spec).commit()

Get a rule

The way IPDB handles IP rules is almost the same as routes, but rule keys are more complicated – the Linux kernel doesn’t use keys for rules, but instead iterates all the records until the first one w/o any attribute mismatch.

The fields that the kernel uses to compare rules, IPDB uses as the key fields (see pyroute2/ipdb/rule.py:RuleKey)

There are also more ways to find a record, as with routes:

# 1. iterate all the records
for record in ipdb.rules:
    match(record)

# 2. an integer as the key matches the first
#    rule with that priority
ipdb.rules[32565]

# 3. a dict as the key returns the first match
#    for all the specified attrs
ipdb.rules[{'dst': '10.0.0.0/24', 'table': 200}]

Priorities

Thus, the rule priority is not a key, neither in the kernel nor in IPDB. One should not rely on priorities as keys; there may be several rules with the same priority, which often happens, e.g. on Android systems.

Persistence

There is no change operation for rule records in the kernel, so only add/del work. When IPDB changes a record, it effectively deletes the old one and creates a new one with the new parameters, but the object referring to the record stays the same. It also means that IPDB can not recognize the situation when someone else does the same. So if another program changes records by del/add operations, even another IPDB instance, the objects referring to them in IPDB will be recreated.

Performance issues

In the case of bursts of netlink broadcast messages, all the activity of pyroute2-based code in the async mode becomes suppressed to leave more CPU resources to the packet reader thread. So please be ready to cope with delays in the case of netlink broadcast storms. It also means that the IPDB state will be synchronized with the OS only after some delay.

The class API

class pyroute2.ipdb.main.IPDB(nl=None, mode='implicit', restart_on_error=None, nl_async=None, sndbuf=1048576, rcvbuf=1048576, nl_bind_groups=67372509, ignore_rtables=None, callbacks=None, sort_addresses=False, plugins=None)

The class that maintains information about network setup of the host. Monitoring netlink events allows it to react immediately. It uses no polling.

register_callback(callback, mode='post')

IPDB callbacks are routines executed on a RT netlink message arrival. There are two types of callbacks: “post” and “pre” callbacks.

“Post” callbacks are executed after the message is processed by IPDB and all corresponding objects are created or deleted. Using ipdb reference in “post” callbacks you will access the most up-to-date state of the IP database.

“Post” callbacks are executed asynchronously in separate threads. These threads can work as long as you want them to. Callback threads are joined occasionally, so for a short time there can exist stopped threads.

“Pre” callbacks are synchronous routines, executed before the message gets processed by IPDB. It gives you the way to patch arriving messages, but also places a restriction: until the callback exits, the main event IPDB loop is blocked.

Normally, only “post” callbacks are required. But in some specific cases “pre” also can be useful.

The routine, register_callback(), takes two arguments:
  • callback function
  • mode (optional, default=”post”)

The callback should be a routine, that accepts three arguments:

cb(ipdb, msg, action)

Arguments are:

  • ipdb is a reference to IPDB instance, that invokes
    the callback.
  • msg is a message arrived
  • action is just a msg[‘event’] field

E.g., to work on a new interface, you should catch action == ‘RTM_NEWLINK’ and with the interface index (arrived in msg[‘index’]) get it from IPDB:

index = msg['index']
interface = ipdb.interfaces[index]

eventqueue(qsize=8192, block=True, timeout=None)

Initializes the event queue and returns an event queue context manager. Once the context manager is initialized, events start to be collected, so it is possible to read the initial state from the system without losing last-moment changes, and once that is done, start processing events.

Example:

ipdb = IPDB()
with ipdb.eventqueue() as evq:
    my_state = ipdb.<needed_attribute>...
    for msg in evq:
        update_state_by_msg(my_state, msg)

eventloop(qsize=8192, block=True, timeout=None)

Event generator for simple cases when there is no need for initial state setup. Initialize event queue and yield events as they happen.

release()

Shut down the IPDB instance and sync the state. Since IPDB is asynchronous, some operations continue in the background, e.g. callbacks. So, prior to exiting the script, it is required to properly shut down IPDB.

The shutdown sequence is not forced in an interactive python session, since it is easier for users and there is enough time to sync the state. But for the scripts the release() call is required.

pyroute2-0.5.9/docs/html/iproute.html

IPRoute module

Classes

The RTNL API is provided by the class RTNL_API. It is a mixin class that works on top of any RTNL-compatible socket, so several classes with almost the same API are available:

  • IPRoute – simple RTNL API
  • NetNS – RTNL API in a network namespace
  • IPBatch – RTNL packet compiler
  • RemoteIPRoute – run RTNL remotely (no deployment required)

Responses as lists

The netlink socket implementation in the pyroute2 is agnostic to particular netlink protocols, and always returns a list of messages as the response to a request sent to the kernel:

with IPRoute() as ipr:

    # this request returns one match
    eth0 = ipr.link_lookup(ifname='eth0')
    len(eth0)  # -> 1, if exists, else 0

    # but this one may return several matches
    up = ipr.link_lookup(operstate='UP')
    len(up)  # -> k, where 0 <= k <= [interface count]

Thus, always expect a list in the response, running any IPRoute() netlink request.

NLMSG_ERROR responses

Some kernel subsystems return NLMSG_ERROR in response to any request. It is OK as long as nlmsg[“header”][“error”] is None. Otherwise an exception will be raised by the parser.

So if instead of an exception you get a NLMSG_ERROR message, it means error == 0, the same as $? == 0 in bash.
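
A hedged sketch of how a non-zero error code surfaces as an exception (the interface index is deliberately bogus):

from pyroute2 import IPRoute
from pyroute2.netlink.exceptions import NetlinkError

with IPRoute() as ipr:
    try:
        ipr.link('del', index=999999)   # no such interface
    except NetlinkError as e:
        print('kernel returned errno', e.code)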

How to work with messages

Every netlink message contains header, fields and NLAs (netlink attributes). Every NLA is a netlink message… (see “recursion”).

And the library provides parsed messages according to this scheme. Every RTNL message contains:

  • nlmsg[‘header’] – parsed header
  • nlmsg[‘attrs’] – NLA chain (parsed on demand)
  • 0 .. k data fields, e.g. nlmsg[‘flags’] etc.
  • nlmsg.header – the header fields spec
  • nlmsg.fields – the data fields spec
  • nlmsg.nla_map – NLA spec

An important parser feature is that NLAs are parsed on demand, when someone tries to access them. Otherwise the parser doesn’t waste CPU cycles.

The NLA chain is a list-like structure, not a dictionary. The netlink standard doesn’t require NLAs to be unique within one message:

{'attrs': [('IFLA_IFNAME', 'lo'),    # [1]
           ('IFLA_TXQLEN', 1),
           ('IFLA_OPERSTATE', 'UNKNOWN'),
           ('IFLA_LINKMODE', 0),
           ('IFLA_MTU', 65536),
           ('IFLA_GROUP', 0),
           ('IFLA_PROMISCUITY', 0),
           ('IFLA_NUM_TX_QUEUES', 1),
           ('IFLA_NUM_RX_QUEUES', 1),
           ('IFLA_CARRIER', 1),
           ...],
 'change': 0,
 'event': 'RTM_NEWLINK',             # [2]
 'family': 0,
 'flags': 65609,
 'header': {'error': None,           # [3]
            'flags': 2,
            'length': 1180,
            'pid': 28233,
            'sequence_number': 257,  # [4]
            'type': 16},             # [5]
 'ifi_type': 772,
 'index': 1}

 # [1] every NLA is parsed upon access
 # [2] this field is injected by the RTNL parser
 # [3] if not None, an exception will be raised
 # [4] more details in the netlink description
 # [5] 16 == RTM_NEWLINK

To access fields:

msg['index'] == 1

To access one NLA:

msg.get_attr('IFLA_CARRIER') == 1

When an NLA with the specified name is not present in the chain, get_attr() returns None. To get the list of all NLAs of that name, use get_attrs(). A real example with NLA hierarchy, take notice of get_attr() and get_attrs() usage:

# for macvlan interfaces there may be several
# IFLA_MACVLAN_MACADDR NLA provided, so use
# get_attrs() to get all the list, not only
# the first one

(msg
 .get_attr('IFLA_LINKINFO')           # one NLA
 .get_attr('IFLA_INFO_DATA')          # one NLA
 .get_attrs('IFLA_MACVLAN_MACADDR'))  # a list of

The protocol itself has no limit for number of NLAs of the same type in one message, that’s why we can not make a dictionary from them – unlike PF_ROUTE messages.

BSD notes

The library provides a very basic RTNL API for BSD systems via protocol emulation. Only getters are supported so far, no setters.

BSD employs PF_ROUTE sockets to send notifications about network object changes, but the protocol does not allow changing links/addresses/etc. like netlink does.

To change network settings one has to rely on system calls or external tools. Thus IPRoute on BSD systems is not as effective as on Linux, where all the changes are done via netlink.

The monitoring started with bind() is implemented as an implicit thread, started by the bind() call. This is done to have only one notification FD, used both for normal calls and for notifications. This allows using IPRoute objects in poll/select calls.

On Linux systems RTNL API is provided by the netlink protocol, so no implicit threads are started by default to monitor the system updates. IPRoute.bind(…) may start the async cache thread, but only when asked explicitly:

#
# Normal monitoring. Always starts monitoring thread on
# FreeBSD / OpenBSD, no threads on Linux.
#
with IPRoute() as ipr:
    ipr.bind()
    ...

#
# Monitoring with async cache. Always starts cache thread
# on Linux, ignored on FreeBSD / OpenBSD.
#
with IPRoute() as ipr:
    ipr.bind(async_cache=True)
    ...

On all the supported platforms, be it Linux or BSD, the IPRoute.recv(…) method returns valid netlink RTNL raw binary payload and IPRoute.get(…) returns parsed RTNL messages.
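
A hedged sketch of such a poll/select usage, the same on Linux and BSD (the endless loop is for illustration only):

import select

from pyroute2 import IPRoute

with IPRoute() as ipr:
    ipr.bind()
    poller = select.poll()
    poller.register(ipr, select.POLLIN)   # IPRoute provides fileno()
    while True:
        for fd, event in poller.poll():
            for msg in ipr.get():
                print(msg['event'])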

IPRoute API

class pyroute2.iproute.linux.RTNL_API(*argv, **kwarg)

RTNL_API should not be instantiated by itself. It is intended to be used as a mixin class. Following classes use RTNL_API:

  • IPRoute – RTNL API to the current network namespace
  • NetNS – RTNL API to another network namespace
  • IPBatch – RTNL compiler
  • ShellIPR – RTNL via standard I/O, runs IPRoute in a shell

It is an old-school API, that provides access to rtnetlink as is. It helps you to retrieve and change almost all the data, available through rtnetlink:

from pyroute2 import IPRoute
ipr = IPRoute()
# create an interface
ipr.link('add', ifname='brx', kind='bridge')
# lookup the index
dev = ipr.link_lookup(ifname='brx')[0]
# bring it down
ipr.link('set', index=dev, state='down')
# change the interface MAC address and rename it just for fun
ipr.link('set', index=dev,
         address='00:11:22:33:44:55',
         ifname='br-ctrl')
# add primary IP address
ipr.addr('add', index=dev,
         address='10.0.0.1', mask=24,
         broadcast='10.0.0.255')
# add secondary IP address
ipr.addr('add', index=dev,
         address='10.0.0.2', mask=24,
         broadcast='10.0.0.255')
# bring it up
ipr.link('set', index=dev, state='up')

dump()

Iterate all the objects – links, routes, addresses etc.

get_qdiscs(index=None)

Get all queue disciplines for all interfaces or for specified one.

get_filters(index=0, handle=0, parent=0)

Get filters for specified interface, handle and parent.

get_classes(index=0)

Get classes for specified interface.

get_vlans(**kwarg)

Dump available vlan info on bridge ports

get_links(*argv, **kwarg)

Get network interfaces.

By default returns all interfaces. Arguments vector can contain interface indices or a special keyword ‘all’:

ip.get_links()
ip.get_links('all')
ip.get_links(1, 2, 3)

interfaces = [1, 2, 3]
ip.get_links(*interfaces)
get_neighbours(family=<AddressFamily.AF_UNSPEC: 0>, match=None, **kwarg)

Dump ARP cache records.

The family keyword sets the family for the request: e.g. AF_INET or AF_INET6 for arp cache, AF_BRIDGE for fdb.

If other keyword arguments are not empty, they are used as a filter. Also, one can explicitly set a filter as a function with the match parameter.

Examples:

# get neighbours on the 3rd link:
ip.get_neighbours(ifindex=3)

# get a particular record by dst:
ip.get_neighbours(dst='172.16.0.1')

# get fdb records:
ip.get_neighbours(AF_BRIDGE)

# and filter them by a function:
ip.get_neighbours(AF_BRIDGE, match=lambda x: x['state'] == 2)
get_ntables(family=<AddressFamily.AF_UNSPEC: 0>)

Get neighbour tables

get_addr(family=<AddressFamily.AF_UNSPEC: 0>, match=None, **kwarg)

Dump addresses.

If family is not specified, both AF_INET and AF_INET6 addresses will be dumped:

# get all addresses
ip.get_addr()

It is possible to apply filters on the results:

# get addresses for the 2nd interface
ip.get_addr(index=2)

# get addresses with IFA_LABEL == 'eth0'
ip.get_addr(label='eth0')

# get all the subnet addresses on the interface, identified
# by broadcast address (should be explicitly specified upon
# creation)
ip.get_addr(index=2, broadcast='192.168.1.255')

A custom predicate can be used as a filter:

ip.get_addr(match=lambda x: x['index'] == 1)
get_rules(family=<AddressFamily.AF_UNSPEC: 0>, match=None, **kwarg)

Get all the rules. By default it returns all the rules. To explicitly request only the IPv4 rules use family=AF_INET.

Example:

ip.get_rules()                 # get all the rules for all families
ip.get_rules(family=AF_INET6)  # get only IPv6 rules
get_routes(family=255, match=None, **kwarg)

Get all routes. You can specify the table. There are 255 routing classes (tables), and the kernel returns all the routes on each request, so the routine filters the routes from the full output.

Example:

ip.get_routes()  # get all the routes for all families
ip.get_routes(family=AF_INET6)  # get only IPv6 routes
ip.get_routes(table=254)  # get routes from 254 table

The default family=255 is a hack. Despite the specs, the kernel returns only IPv4 routes for AF_UNSPEC family. But it returns all the routes for all the families if one uses an invalid value here. Hack but true. And let’s hope the kernel team will not fix this bug.

get_netns_info(list_proc=False)

A prototype method to list available netns and associated interfaces. A bit weird to have it here and not under pyroute2.netns, but it uses RTNL to get all the info.
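
A minimal usage sketch; the call returns parsed messages describing the namespaces:

for ns in ip.get_netns_info(list_proc=True):
    print(ns)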

get_default_routes(family=<AddressFamily.AF_UNSPEC: 0>, table=254)

Get default routes

Lookup interface index (indices) by first-level NLA value.

Example:

ip.link_lookup(address="52:54:00:9d:4e:3d")
ip.link_lookup(ifname="lo")
ip.link_lookup(operstate="UP")

Please note that link_lookup() returns a list, not a single value.

flush_routes(*argv, **kwarg)

Flush routes – purge route records from a table. Arguments are the same as for get_routes() routine. Actually, this routine implements a pipe from get_routes() to nlm_request().
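
Example (a sketch; the same filters as for get_routes() apply):

# flush all the IPv6 routes from the table 100:
ipr.flush_routes(family=socket.AF_INET6, table=100)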

flush_addr(*argv, **kwarg)

Flush IP addresses.

Examples:

# flush all addresses on the interface with index 2:
ipr.flush_addr(index=2)

# flush all addresses with IFA_LABEL='eth0':
ipr.flush_addr(label='eth0')
flush_rules(*argv, **kwarg)

Flush rules. Please keep in mind, that by default the function operates on all rules of all families. To work only on IPv4 rules, one should explicitly specify family=AF_INET.

Examples:

# flush all IPv4 rule with priorities above 5 and below 32000
ipr.flush_rules(family=AF_INET, priority=lambda x: 5 < x < 32000)

# flush all IPv6 rules that point to table 250:
ipr.flush_rules(family=socket.AF_INET6, table=250)
brport(command, **kwarg)

Set bridge port parameters. Example:

idx = ip.link_lookup(ifname='eth0')[0]
ip.brport("set", index=idx, unicast_flood=0, cost=200)
ip.brport("show", index=idx)

Possible keywords are NLA names for the protinfo_bridge class, without the prefix and in lowercase.

vlan_filter(command, **kwarg)

Vlan filters are another approach to support vlans in Linux. Before vlan filters were introduced, there was only one way to bridge vlans: one had to create vlan interfaces and then add them as ports:

        +------+      +----------+
net --> | eth0 | <--> | eth0.500 | <---+
        +------+      +----------+     |
                                       v
        +------+                    +-----+
net --> | eth1 |                    | br0 |
        +------+                    +-----+
                                       ^
        +------+      +----------+     |
net --> | eth2 | <--> | eth2.500 | <---+
        +------+      +----------+

It means that one has to create as many bridges as there are vlans. Vlan filters allow one to bridge the underlying interfaces together and create vlans directly on the bridge:

# v500 label shows which interfaces have vlan filter

        +------+ v500
net --> | eth0 | <-------+
        +------+         |
                         v
        +------+      +-----+    +---------+
net --> | eth1 | <--> | br0 |<-->| br0v500 |
        +------+      +-----+    +---------+
                         ^
        +------+ v500    |
net --> | eth2 | <-------+
        +------+

In this example vlan 500 will be allowed only on ports eth0 and eth2, though all three eth nics are bridged.

Some example code:

# create bridge
ip.link("add",
        ifname="br0",
        kind="bridge")

# attach a port
ip.link("set",
        index=ip.link_lookup(ifname="eth0")[0],
        master=ip.link_lookup(ifname="br0")[0])

# set vlan filter
ip.vlan_filter("add",
               index=ip.link_lookup(ifname="eth0")[0],
               vlan_info={"vid": 500})

# create vlan interface on the bridge
ip.link("add",
        ifname="br0v500",
        kind="vlan",
        link=ip.link_lookup(ifname="br0")[0],
        vlan_id=500)

# set all UP
ip.link("set",
        index=ip.link_lookup(ifname="br0")[0],
        state="up")
ip.link("set",
        index=ip.link_lookup(ifname="br0v500")[0],
        state="up")
ip.link("set",
        index=ip.link_lookup(ifname="eth0")[0],
        state="up")

# set IP address
ip.addr("add",
        index=ip.link_lookup(ifname="br0v500")[0],
        address="172.16.5.2",
        mask=24)

Now all the traffic to the network 172.16.5.0/24 will go to vlan 500, and only via ports that have such a vlan filter.

Required arguments for vlan_filter() are index and vlan_info. Vlan info struct:

{"vid": uint16,
 "flags": uint16}
More details:
  • kernel:Documentation/networking/switchdev.txt
  • pyroute2.netlink.rtnl.ifinfmsg:… vlan_info
One can specify flags as int or as a list of flag names:
  • master == 0x1
  • pvid == 0x2
  • untagged == 0x4
  • range_begin == 0x8
  • range_end == 0x10
  • brentry == 0x20

E.g.:

{"vid": 20,
 "flags": ["pvid", "untagged"]}

# is equal to
{"vid": 20,
 "flags": 6}

Commands:

add

Add vlan filter to a bridge port. Example:

ip.vlan_filter("add", index=2, vlan_info={"vid": 200})

del

Remove vlan filter from a bridge port. Example:

ip.vlan_filter("del", index=2, vlan_info={"vid": 200})
fdb(command, **kwarg)

Bridge forwarding database management.

More details:
  • kernel:Documentation/networking/switchdev.txt
  • pyroute2.netlink.rtnl.ndmsg

add

Add a new FDB record. Works in the same way as ARP cache management, but some additional NLAs can be used:

# simple FDB record
#
ip.fdb('add',
       ifindex=ip.link_lookup(ifname='br0')[0],
       lladdr='00:11:22:33:44:55',
       dst='10.0.0.1')

# specify vlan
# NB: vlan should exist on the device, use
# `vlan_filter()`
#
ip.fdb('add',
       ifindex=ip.link_lookup(ifname='br0')[0],
       lladdr='00:11:22:33:44:55',
       dst='10.0.0.1',
       vlan=200)

# specify vxlan id and port
# NB: works only for vxlan devices, use
# `link("add", kind="vxlan", ...)`
#
# if port is not specified, the default one is used
# by the kernel.
#
# if vni (vxlan id) is equal to the device vni,
# the kernel doesn't report it back
#
ip.fdb('add',
       ifindex=ip.link_lookup(ifname='vx500')[0],
       lladdr='00:11:22:33:44:55',
       dst='10.0.0.1',
       port=5678,
       vni=600)

append

Append a new FDB record. The same syntax as for add.

del

Remove an existing FDB record. The same syntax as for add.

dump

Dump all the FDB records. If any **kwarg is provided, results will be filtered:

# dump all the records
ip.fdb('dump')

# show only specific lladdr, dst, vlan etc.
ip.fdb('dump', lladdr='00:11:22:33:44:55')
ip.fdb('dump', dst='10.0.0.1')
ip.fdb('dump', vlan=200)
neigh(command, **kwarg)

Neighbours operations, same as ip neigh or bridge fdb

add

Add a neighbour record, e.g.:

from pyroute2 import IPRoute
from pyroute2.netlink.rtnl import ndmsg

ip = IPRoute()

# add a permanent record on veth0
idx = ip.link_lookup(ifname='veth0')[0]
ip.neigh('add',
         dst='172.16.45.1',
         lladdr='00:11:22:33:44:55',
         ifindex=idx,
         state=ndmsg.states['permanent'])

set

Set an existing record or create a new one, if it doesn’t exist. The same as above, but the command is “set”:

ip.neigh('set',
         dst='172.16.45.1',
         lladdr='00:11:22:33:44:55',
         ifindex=idx,
         state=ndmsg.states['permanent'])

change

Change an existing record; if the record doesn’t exist, the call fails.

del

Delete an existing record.
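
E.g., remove the record created above (a sketch):

ip.neigh('del',
         dst='172.16.45.1',
         ifindex=idx)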

dump

Dump all the records in the NDB:

ip.neigh('dump')

link(command, **kwarg)

Link operations.

Keywords to set up ifinfmsg fields:
  • index – interface index
  • family – AF_BRIDGE for bridge operations, otherwise 0
  • flags – device flags
  • change – change mask

All other keywords will be translated to NLA names, e.g. mtu -> IFLA_MTU, af_spec -> IFLA_AF_SPEC etc. You can provide a complete NLA structure or let filters do it for you. E.g., these pairs show equal statements:

# set device MTU
ip.link("set", index=x, mtu=1000)
ip.link("set", index=x, IFLA_MTU=1000)

# add vlan device
ip.link("add", ifname="test", kind="dummy")
ip.link("add", ifname="test",
        IFLA_LINKINFO={'attrs': [['IFLA_INFO_KIND', 'dummy']]})

Filters are implemented in the pyroute2.netlink.rtnl.req module. You can contribute your own if you miss shortcuts.

Commands:

add

To create an interface, one should specify the interface kind:

ip.link("add",
        ifname="test",
        kind="dummy")

The kind can be any of those supported by the kernel: dummy, bridge, bond etc. On modern kernels one can even specify the interface index:

ip.link("add",
        ifname="br-test",
        kind="bridge",
        index=2345)

Specific type notes:

► gre

Create GRE tunnel:

ip.link("add",
        ifname="grex",
        kind="gre",
        gre_local="172.16.0.1",
        gre_remote="172.16.0.101",
        gre_ttl=16)

The keyed GRE requires explicit iflags/oflags specification:

ip.link("add",
        ifname="grex",
        kind="gre",
        gre_local="172.16.0.1",
        gre_remote="172.16.0.101",
        gre_ttl=16,
        gre_ikey=10,
        gre_okey=10,
        gre_iflags=32,
        gre_oflags=32)

Support for GRE over IPv6 is also included; use kind=ip6gre and ip6gre_ as the prefix for its values.

► ipip

Create ipip tunnel:

ip.link("add",
        ifname="tun1",
        kind="ipip",
        ipip_local="172.16.0.1",
        ipip_remote="172.16.0.101",
        ipip_ttl=16)

Support for sit and ip6tnl is also included; use kind=sit and sit_ as prefix for sit tunnels, and kind=ip6tnl and ip6tnl_ prefix for ip6tnl tunnels.

► macvlan

Macvlan interfaces act like VLANs within the OS. The macvlan driver provides the ability to add several MAC addresses on one interface, where every MAC address is reflected by a virtual interface in the system.

In some setups macvlan interfaces can replace bridge interfaces, providing a simpler and at the same time higher-performance solution:

ip.link("add",
        ifname="mvlan0",
        kind="macvlan",
        link=ip.link_lookup(ifname="em1")[0],
        macvlan_mode="private").commit()

Several macvlan modes are available: “private”, “vepa”, “bridge”, “passthru”. Usually the default is “vepa”.

► macvtap

Almost the same as macvlan, but also creates a character tap device:

ip.link("add",
        ifname="mvtap0",
        kind="macvtap",
        link=ip.link_lookup(ifname="em1")[0],
        macvtap_mode="vepa").commit()

This will create a device file “/dev/tap%s” % index

► tuntap

Possible tuntap keywords:

  • mode — “tun” or “tap”
  • uid — integer
  • gid — integer
  • ifr — dict of tuntap flags (see ifinfmsg:… tuntap_data)

Create a tap interface:

ip.link("add",
        ifname="tap0",
        kind="tuntap",
        mode="tap")

Tun/tap interfaces are created using ioctl(), but the library provides a transparent way to manage them using netlink API.

► veth

To properly create a veth interface, one should also specify the peer, since veth interfaces are created in pairs:

# simple call
ip.link("add", ifname="v1p0", kind="veth", peer="v1p1")

# set up specific veth peer attributes
ip.link("add",
        ifname="v1p0",
        kind="veth",
        peer={"ifname": "v1p1",
              "net_ns_fd": "test_netns"})

► vlan

VLAN interfaces require additional parameters, vlan_id and link, where link is a master interface to create VLAN on:

ip.link("add",
        ifname="v100",
        kind="vlan",
        link=ip.link_lookup(ifname="eth0")[0],
        vlan_id=100)

It is also possible to create 802.1ad interfaces:

# create external vlan 802.1ad, s-tag
ip.link("add",
        ifname="v100s",
        kind="vlan",
        link=ip.link_lookup(ifname="eth0")[0],
        vlan_id=100,
        vlan_protocol=0x88a8)

# create internal vlan 802.1q, c-tag
ip.link("add",
        ifname="v200c",
        kind="vlan",
        link=ip.link_lookup(ifname="v100s")[0],
        vlan_id=200,
        vlan_protocol=0x8100)

► vrf

VRF interfaces (see linux/Documentation/networking/vrf.txt):

ip.link("add",
        ifname="vrf-foo",
        kind="vrf",
        vrf_table=42)

► vxlan

VXLAN interfaces are like VLAN ones, but require a few more parameters:

ip.link("add",
        ifname="vx101",
        kind="vxlan",
        vxlan_link=ip.link_lookup(ifname="eth0")[0],
        vxlan_id=101,
        vxlan_group='239.1.1.1',
        vxlan_ttl=16)

All possible vxlan parameters are listed in the module pyroute2.netlink.rtnl.ifinfmsg:… vxlan_data.

► ipoib

The IPoIB driver provides the ability to create several IP interfaces on one physical interface. IPoIB interfaces require the following parameter:

  • link – the master interface to create the IPoIB on

The following parameters can also be provided:

  • pkey – Infiniband partition key the IP interface is associated with
  • mode – underlying Infiniband transport mode, one of: ‘datagram’, ‘connected’
  • umcast – if set (1), multicast group membership for this interface is handled by user space

Example:

ip.link("add",
        ifname="ipoib1",
        kind="ipoib",
        link=ip.link_lookup(ifname="ib0")[0],
        pkey=10)

set

Set interface attributes:

# get interface index
x = ip.link_lookup(ifname="eth0")[0]
# put link down
ip.link("set", index=x, state="down")
# rename and set MAC addr
ip.link("set", index=x, address="00:11:22:33:44:55", name="bala")
# set MTU and TX queue length
ip.link("set", index=x, mtu=1000, txqlen=2000)
# bring link up
ip.link("set", index=x, state="up")

Keyword “state” is reserved. State can be “up” or “down”, it is a shortcut:

state="up":   flags=1, mask=1
state="down": flags=0, mask=0

SR-IOV virtual function setup:

# get PF index
x = ip.link_lookup(ifname="eth0")[0]
# setup macaddr
ip.link("set",
        index=x,                          # PF index
        vf={"vf": 0,                      # VF index
            "mac": "00:11:22:33:44:55"})  # address
# setup vlan
ip.link("set",
        index=x,           # PF index
        vf={"vf": 0,       # VF index
            "vlan": 100})  # the simplest case
# setup QinQ
ip.link("set",
        index=x,                           # PF index
        vf={"vf": 0,                       # VF index
            "vlan": [{"vlan": 100,         # vlan id
                      "proto": 0x88a8},    # 802.1ad
                     {"vlan": 200,         # vlan id
                      "proto": 0x8100}]})  # 802.1q

update

Almost the same as set, except that it uses different flags and a different message type. Mostly it does the same, but in some cases it differs. If you’re not sure which to use, use set.

del

Destroy the interface:

ip.link("del", index=ip.link_lookup(ifname="dummy0")[0])

dump

Dump info for all interfaces
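
E.g., print the names of all the interfaces (a sketch):

for link in ip.link('dump'):
    print(link.get_attr('IFLA_IFNAME'))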

get

Get specific interface info:

ip.link("get", index=ip.link_lookup(ifname="br0")[0])

Get extended attributes like SR-IOV setup:

ip.link("get", index=3, ext_mask=1)
addr(command, index=None, address=None, mask=None, family=None, scope=None, match=None, **kwarg)

Address operations

  • command – add, delete
  • index – device index
  • address – IPv4 or IPv6 address
  • mask – address mask
  • family – socket.AF_INET for IPv4 or socket.AF_INET6 for IPv6
  • scope – the address scope, see /etc/iproute2/rt_scopes
  • kwarg – dictionary, any ifaddrmsg field or NLA

Later the method signature will be changed to:

def addr(self, command, match=None, **kwarg):
    # the method body

So only keyword arguments (except for the command) will be accepted. The reason for this change is the unification of the API.

Example:

idx = 62
ip.addr('add', index=idx, address='10.0.0.1', mask=24)
ip.addr('add', index=idx, address='10.0.0.2', mask=24)

With more NLAs:

# explicitly set broadcast address
ip.addr('add', index=idx,
        address='10.0.0.3',
        broadcast='10.0.0.255',
        prefixlen=24)

# make the secondary address visible to ifconfig: add label
ip.addr('add', index=idx,
        address='10.0.0.4',
        broadcast='10.0.0.255',
        prefixlen=24,
        label='eth0:1')

Configure p2p address on an interface:

ip.addr('add', index=idx,
        address='10.1.1.2',
        mask=24,
        local='10.1.1.1')
tc(command, kind=None, index=0, handle=0, **kwarg)

“Swiss knife” for traffic control. With the method you can add, delete or modify qdiscs, classes and filters.

  • command – add or delete qdisc, class, filter.
  • kind – a string identifier – “sfq”, “htb”, “u32” and so on.
  • handle – integer or string

Command can be one of (“add”, “del”, “add-class”, “del-class”, “add-filter”, “del-filter”) (see commands dict in the code).

Handle notice: the traditional iproute2 notation, like “1:0”, actually represents two parts in one four-byte integer:

1:0    ->    0x10000
1:1    ->    0x10001
ff:0   ->   0xff0000
ffff:1 -> 0xffff0001
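
A tiny helper sketch, not part of pyroute2, that illustrates the mapping (the function name is hypothetical):

def tc_handle(spec):
    # "major:minor" in hex, iproute2 style -> 32-bit handle
    major, _, minor = spec.partition(":")
    return (int(major, 16) << 16) | int(minor or "0", 16)

assert tc_handle("1:0") == 0x10000
assert tc_handle("ffff:1") == 0xffff0001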

Target notice: if your target is a class/qdisc that applies an algorithm which can only be applied to the upstream traffic profile, but your keys variable explicitly references a match that is only relevant for upstream traffic, the kernel will reject the filter – unless you’re dealing with devices like IMQs.

For pyroute2 tc() you can use both forms: integer like 0xffff0000 or string like ‘ffff:0000’. By default, handle is 0, so you can add simple classless queues w/o need to specify handle. Ingress queue causes handle to be 0xffff0000.

So, to set up sfq queue on interface 1, the function call will be like that:

ip = IPRoute()
ip.tc("add", "sfq", 1)

Instead of string commands (“add”, “del”…), you can use also module constants, RTM_NEWQDISC, RTM_DELQDISC and so on:

ip = IPRoute()
flags = NLM_F_REQUEST | NLM_F_ACK | NLM_F_CREATE | NLM_F_EXCL
ip.tc((RTM_NEWQDISC, flags), "sfq", 1)

It should be noted that “change”, “change-class” and “change-filter” work like “replace”, “replace-class” and “replace-filter”, except they will fail if the node doesn’t exist (while it would have been created by “replace”). This is not the same behaviour as with “tc” where “change” can be used to modify the value of some options while leaving the others unchanged. However, as not all entities support this operation, we believe the “change” commands as implemented here are more useful.

Also available are the “modules” (returns the tc plugins dict) and “help” commands:

help(ip.tc("modules")["htb"])
print(ip.tc("help", "htb"))
route(command, **kwarg)

Route operations.

Keywords to set up rtmsg fields:

  • dst_len, src_len – destination and source mask (see dst below)
  • tos – type of service
  • table – routing table
  • proto – redirect, boot, static (see rt_proto)
  • scope – routing realm
  • type – unicast, local, etc. (see rt_type)

pyroute2/netlink/rtnl/rtmsg.py rtmsg.nla_map:

  • table – routing table to use (default: 254)
  • gateway – via address
  • prefsrc – preferred source IP address
  • dst – the same as prefix
  • iif – incoming traffic interface
  • oif – outgoing traffic interface

etc.

One can specify mask not as dst_len, but as a part of dst, e.g.: dst=”10.0.0.0/24”.

Commands:

add

Example:

ip.route("add", dst="10.0.0.0/24", gateway="192.168.0.1")

It is possible to set also route metrics. There are two ways to do so. The first is to use ‘raw’ NLA notation:

ip.route("add",
         dst="10.0.0.0",
         mask=24,
         gateway="192.168.0.1",
         metrics={"attrs": [["RTAX_MTU", 1400],
                            ["RTAX_HOPLIMIT", 16]]})

The second way is to use shortcuts, provided by IPRouteRequest class, which is applied to **kwarg automatically:

ip.route("add",
         dst="10.0.0.0/24",
         gateway="192.168.0.1",
         metrics={"mtu": 1400,
                  "hoplimit": 16})

More route() examples. Blackhole route:

ip.route("add",
         dst="10.0.0.0/24",
         type="blackhole")

Multipath route:

ip.route("add",
         dst="10.0.0.0/24",
         multipath=[{"gateway": "192.168.0.1", "hops": 2},
                    {"gateway": "192.168.0.2", "hops": 1},
                    {"gateway": "192.168.0.3"}])

MPLS lwtunnel on eth0:

idx = ip.link_lookup(ifname='eth0')[0]
ip.route("add",
         dst="10.0.0.0/24",
         oif=idx,
         encap={"type": "mpls",
                "labels": "200/300"})

MPLS multipath:

idx = ip.link_lookup(ifname='eth0')[0]
ip.route("add",
         dst="10.0.0.0/24",
         table=20,
         multipath=[{"gateway": "192.168.0.1",
                     "encap": {"type": "mpls",
                               "labels": 200}},
                    {"ifindex": idx,
                     "encap": {"type": "mpls",
                               "labels": 300}}])

MPLS target can be int, string, dict or list:

"labels": 300    # simple label
"labels": "300"  # the same
"labels": (200, 300)  # stacked
"labels": "200/300"   # the same

# explicit label definition
"labels": {"bos": 1,
           "label": 300,
           "tc": 0,
           "ttl": 16}

Create SEG6 tunnel encap mode (kernel >= 4.10):

ip.route('add',
         dst='2001:0:0:10::2/128',
         oif=idx,
         encap={'type': 'seg6',
                'mode': 'encap',
                'segs': '2000::5,2000::6'})

Create SEG6 tunnel inline mode (kernel >= 4.10):

ip.route('add',
         dst='2001:0:0:10::2/128',
         oif=idx,
         encap={'type': 'seg6',
                'mode': 'inline',
                'segs': ['2000::5', '2000::6']})

Create SEG6 tunnel inline mode with hmac (kernel >= 4.10):

ip.route('add',
         dst='2001:0:0:22::2/128',
         oif=idx,
         encap={'type': 'seg6',
                'mode': 'inline',
                'segs':'2000::5,2000::6,2000::7,2000::8',
                'hmac':0xf})

Create SEG6 tunnel with ip4ip6 encapsulation (kernel >= 4.14):

ip.route('add',
         dst='172.16.0.0/24',
         oif=idx,
         encap={'type': 'seg6',
                'mode': 'encap',
                'segs': '2000::5,2000::6'})

Create SEG6LOCAL tunnel End.DX4 action (kernel >= 4.14):

ip.route('add',
         dst='2001:0:0:10::2/128',
         oif=idx,
         encap={'type': 'seg6local',
                'action': 'End.DX4',
                'nh4': '172.16.0.10'})

Create SEG6LOCAL tunnel End.DT6 action (kernel >= 4.14):

ip.route('add',
         dst='2001:0:0:10::2/128',
         oif=idx,
         encap={'type': 'seg6local',
                'action': 'End.DT6',
                'table':'10'})

Create SEG6LOCAL tunnel End.B6 action (kernel >= 4.14):

ip.route('add',
         dst='2001:0:0:10::2/128',
         oif=idx,
         encap={'type': 'seg6local',
                'action': 'End.B6',
                'srh':{'segs': '2000::5,2000::6'}})

Create SEG6LOCAL tunnel End.B6 action with hmac (kernel >= 4.14):

ip.route('add',
         dst='2001:0:0:10::2/128',
         oif=idx,
         encap={'type': 'seg6local',
                'action': 'End.B6',
                'srh': {'segs': '2000::5,2000::6',
                        'hmac':0xf}})

change, replace, append

Commands change, replace and append have the same meanings as in ip-route(8): change modifies only an existing route, while replace creates a new one if there is no such route yet. append allows one to create an IPv6 multipath route.

del

Remove the route. The same syntax as for add.

get

Get route by spec.
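
E.g., get the route used to reach a host (a usage sketch):

ip.route('get', dst='10.0.0.2')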

dump

Dump all routes.

rule(command, *argv, **kwarg)

Rule operations

  • command — add, delete
  • table — 0 < table id < 253
  • priority — 0 < rule’s priority < 32766
  • action — type of rule, default ‘FR_ACT_NOP’ (see fibmsg.py)
  • rtscope — routing scope, default RT_SCOPE_UNIVERSE
    (RT_SCOPE_UNIVERSE|RT_SCOPE_SITE| RT_SCOPE_LINK|RT_SCOPE_HOST|RT_SCOPE_NOWHERE)
  • family — rule’s family (socket.AF_INET (default) or
    socket.AF_INET6)
  • src — IP source for Source Based (Policy Based) routing’s rule
  • dst — IP for Destination Based (Policy Based) routing’s rule
  • src_len — Mask for Source Based (Policy Based) routing’s rule
  • dst_len — Mask for Destination Based (Policy Based) routing’s
    rule
  • iifname — Input interface for Interface Based (Policy Based)
    routing’s rule
  • oifname — Output interface for Interface Based (Policy Based)
    routing’s rule

Route all packets via table 10:

# 32000: from all lookup 10
# ...
ip.rule('add', table=10, priority=32000)

Default action:

# 32001: from all lookup 11 unreachable
# ...
iproute.rule('add',
             table=11,
             priority=32001,
             action='FR_ACT_UNREACHABLE')

Use source address to choose a routing table:

# 32004: from 10.64.75.141 lookup 14
# ...
iproute.rule('add',
             table=14,
             priority=32004,
             src='10.64.75.141')

Use dst address to choose a routing table:

# 32005: from 10.64.75.141/24 lookup 15
# ...
iproute.rule('add',
             table=15,
             priority=32005,
             dst='10.64.75.141',
             dst_len=24)

Match fwmark:

# 32006: from 10.64.75.141 fwmark 0xa lookup 15
# ...
iproute.rule('add',
             table=15,
             priority=32006,
             dst='10.64.75.141',
             fwmark=10)
class pyroute2.iproute.linux.IPBatch(*argv, **kwarg)

Netlink requests compiler. Does not send any requests, but instead stores them in the internal binary buffer. The contents of the buffer can be used to send batch requests, to test custom netlink parsers and so on.

Uses RTNL_API and provides all the same API as normal IPRoute objects:

# create the batch compiler
ipb = IPBatch()
# compile requests into the internal buffer
ipb.link("add", index=550, ifname="test", kind="dummy")
ipb.link("set", index=550, state="up")
ipb.addr("add", index=550, address="10.0.0.2", mask=24)
# save the buffer
data = ipb.batch
# reset the buffer
ipb.reset()
...
# send the buffer
IPRoute().sendto(data, (0, 0))
class pyroute2.iproute.linux.IPRoute(*argv, **kwarg)

Regular ordinary utility class, see RTNL API for the list of methods.

class pyroute2.iproute.linux.RawIPRoute(*argv, **kwarg)

The same as IPRoute, but does not use the netlink proxy. Thus it can not manage e.g. tun/tap interfaces.

Queueing disciplines

drr

The qdisc doesn’t accept any parameters, but the class accepts the quantum parameter:

ip.tc('add', 'drr', interface, '1:')
ip.tc('add-class', 'drr', interface, '1:10', quantum=1600)
ip.tc('add-class', 'drr', interface, '1:20', quantum=1600)
class pyroute2.netlink.rtnl.tcmsg.sched_drr.options(data=None, offset=0, length=None, parent=None, init=None)
class pyroute2.netlink.rtnl.tcmsg.sched_drr.stats(data=None, offset=0, length=None, parent=None, init=None)
class pyroute2.netlink.rtnl.tcmsg.sched_drr.stats2(data=None, offset=0, length=None, parent=None, init=None)
class stats_app(data=None, offset=0, length=None, parent=None, init=None)

choke

Parameters:

  • limit (required) – int
  • bandwith (required) – str/int
  • min – int
  • max – int
  • avpkt – str/int, packet size
  • burst – int
  • probability – float
  • ecn – bool

Example:

ip.tc('add', 'choke', interface,
      limit=5500,
      bandwith="10mbit",
      ecn=True)
class pyroute2.netlink.rtnl.tcmsg.sched_choke.options(data=None, offset=0, length=None, parent=None, init=None)
class qopt(data=None, offset=0, length=None, parent=None, init=None)
class stab(data=None, offset=0, length=None, parent=None, init=None)
encode()

Encode the message into the binary buffer:

msg.encode()
sock.send(msg.data)

If you want to customize the encoding process, override the method:

class CustomMessage(nlmsg):

    def encode(self):
        ...  # do some custom data tuning
        nlmsg.encode(self)
class pyroute2.netlink.rtnl.tcmsg.sched_choke.stats(data=None, offset=0, length=None, parent=None, init=None)
class pyroute2.netlink.rtnl.tcmsg.sched_choke.stats2(data=None, offset=0, length=None, parent=None, init=None)
class stats_app(data=None, offset=0, length=None, parent=None, init=None)

clsact

The clsact qdisc provides a mechanism to attach integrated filter-action classifiers to an interface, either at ingress or egress, or both. The use case shown here is using a bpf program (implemented elsewhere) to direct the packet processing. The example also uses the direct-action feature to specify what to do with each packet (pass, drop, redirect, etc.).

BPF ingress/egress example using clsact qdisc:

# open_bpf_fd is outside the scope of pyroute2
#fd = open_bpf_fd()
eth0 = ip.link_lookup(ifname="eth0")[0]
ip.tc("add", "clsact", eth0)
# add the ingress filter
ip.tc("add-filter", "bpf", eth0, ":1", fd=fd, name="myprog",
      parent="ffff:fff2", classid=1, direct_action=True)
# add the egress filter
ip.tc("add-filter", "bpf", eth0, ":1", fd=fd, name="myprog",
      parent="ffff:fff3", classid=1, direct_action=True)

hfsc

Simple HFSC example:

eth0 = ip.link_lookup(ifname="eth0")[0]
ip.tc("add", "hfsc", eth0,
      handle="1:",
      default="1:1")
ip.tc("add-class", "hfsc", eth0,
      handle="1:1",
      parent="1:0"
      rsc={"m2": "5mbit"})

HFSC curve nla types:

  • rsc: real-time curve
  • fsc: link-share curve
  • usc: upper-limit curve
class pyroute2.netlink.rtnl.tcmsg.sched_hfsc.options_hfsc(data=None, offset=0, length=None, parent=None, init=None)
class pyroute2.netlink.rtnl.tcmsg.sched_hfsc.options_hfsc_class(data=None, offset=0, length=None, parent=None, init=None)
class hfsc_curve(data=None, offset=0, length=None, parent=None, init=None)
class pyroute2.netlink.rtnl.tcmsg.sched_hfsc.stats2(data=None, offset=0, length=None, parent=None, init=None)
class stats_app(data=None, offset=0, length=None, parent=None, init=None)

htb

TODO: list parameters

An example with the htb qdisc; let’s assume eth0 == 2:

#          u32 -->    +--> htb 1:10 --> sfq 10:0
#          |          |
#          |          |
# eth0 -- htb 1:0 -- htb 1:1
#          |          |
#          |          |
#          u32 -->    +--> htb 1:20 --> sfq 20:0

eth0 = 2
# add root queue 1:0
ip.tc("add", "htb", eth0, 0x10000, default=0x200000)

# root class 1:1
ip.tc("add-class", "htb", eth0, 0x10001,
      parent=0x10000,
      rate="256kbit",
      burst=1024 * 6)

# two branches: 1:10 and 1:20
ip.tc("add-class", "htb", eth0, 0x10010,
      parent=0x10001,
      rate="192kbit",
      burst=1024 * 6,
      prio=1)
ip.tc("add-class", "htb", eht0, 0x10020,
      parent=0x10001,
      rate="128kbit",
      burst=1024 * 6,
      prio=2)

# two leaves: 10:0 and 20:0
ip.tc("add", "sfq", eth0, 0x100000,
      parent=0x10010,
      perturb=10)
ip.tc("add", "sfq", eth0, 0x200000,
      parent=0x10020,
      perturb=10)

# two filters: one to load packets into 1:10 and the
# second to 1:20
ip.tc("add-filter", "u32", eth0,
      parent=0x10000,
      prio=10,
      protocol=socket.AF_INET,
      target=0x10010,
      keys=["0x0006/0x00ff+8", "0x0000/0xffc0+2"])
ip.tc("add-filter", "u32", eth0,
      parent=0x10000,
      prio=10,
      protocol=socket.AF_INET,
      target=0x10020,
      keys=["0x5/0xf+0", "0x10/0xff+33"])
class pyroute2.netlink.rtnl.tcmsg.sched_htb.stats(data=None, offset=0, length=None, parent=None, init=None)
class pyroute2.netlink.rtnl.tcmsg.sched_htb.qdisc_stats2(data=None, offset=0, length=None, parent=None, init=None)
class pyroute2.netlink.rtnl.tcmsg.sched_htb.class_stats2(data=None, offset=0, length=None, parent=None, init=None)
class stats_app(data=None, offset=0, length=None, parent=None, init=None)
class pyroute2.netlink.rtnl.tcmsg.sched_htb.options(data=None, offset=0, length=None, parent=None, init=None)
class htb_glob(data=None, offset=0, length=None, parent=None, init=None)
class htb_parms(data=None, offset=0, length=None, parent=None, init=None)

Filters

u32

Filters can take an action argument, which affects the packet behavior when the filter matches. Currently the gact, bpf, and police action types are supported, and can be attached to the u32 and bpf filter types:

# An action can be a simple string, which translates to a gact type
action = "drop"

# Or it can be an explicit type (these are equivalent)
action = dict(kind="gact", action="drop")

# There can also be a chain of actions, which depend on the return
# value of the previous action.
action = [
    dict(kind="bpf", fd=fd, name=name, action="ok"),
    dict(kind="police", rate="10kbit", burst=10240, limit=0),
    dict(kind="gact", action="ok"),
]

# Add the action to a u32 match-all filter
ip.tc("add", "htb", eth0, 0x10000, default=0x200000)
ip.tc("add-filter", "u32", eth0,
      parent=0x10000,
      prio=10,
      protocol=protocols.ETH_P_ALL,
      target=0x10020,
      keys=["0x0/0x0+0"],
      action=action)

# Add two more filters: One to send packets with a src address of
# 192.168.0.1/32 into 1:10 and the second to send packets  with a
# dst address of 192.168.0.0/24 into 1:20
ip.tc("add-filter", "u32", eth0,
    parent=0x10000,
    prio=10,
    protocol=socket.AF_INET,
    target=0x10010,
    keys=["0xc0a80001/0xffffffff+12"])
    # 0xc0a80001 = 192.168.0.1
    # 0xffffffff = 255.255.255.255 (/32)
    # 12 = Source network field bit offset

ip.tc("add-filter", "u32", eth0,
    parent=0x10000,
    prio=10,
    protocol=socket.AF_INET,
    target=0x10020,
    keys=["0xc0a80000/0xffffff00+16"])
    # 0xc0a80000 = 192.168.0.0
    # 0xffffff00 = 255.255.255.0 (/24)
    # 16 = Destination network field bit offset

IPSet module

ipset support.

This module is tested with hash:ip, hash:net, list:set and several other ipset structures (like hash:net,iface). There is no guarantee that this module works with all available ipset modules.

It supports almost all kernel commands (create, destroy, flush, rename, swap, test…)

class pyroute2.ipset.PortRange(begin, end, protocol=None)

A simple container for port range with optional protocol

Note that the optional protocol parameter is not supported by all kernel ipset modules using ports. On the other hand, it’s sometimes mandatory to set it (like for hash:net,port ipsets).

Example:

udp_proto = socket.getprotobyname("udp")
port_range = PortRange(1000, 2000, protocol=udp_proto)
ipset.create("foo", stype="hash:net,port")
ipset.add("foo", ("192.0.2.0/24", port_range), etype="net,port")
ipset.test("foo", ("192.0.2.0/24", port_range), etype="net,port")
class pyroute2.ipset.PortEntry(port, protocol=None)

A simple container for port entry with optional protocol

class pyroute2.ipset.IPSet(version=None, attr_revision=None, nfgen_family=2)

NFNetlink socket (family=NETLINK_NETFILTER).

Implements API to the ipset functionality.

headers(name, **kwargs)

Get the headers of the named ipset. It can be used to test whether an ipset exists, since the request fails with “no such file or directory” when it does not.
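
A minimal sketch, assuming an ipset named “foo” exists:

ipset = IPSet()
ipset.headers("foo")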

get_proto_version(version=6)

Get supported protocol version by kernel.

The version parameter allows one to set the mandatory (but unused?) IPSET_ATTR_PROTOCOL netlink attribute in the request.

list(*argv, **kwargs)

List installed ipsets. If name is provided, list the named ipset or return an empty list.

Be warned: netlink does not return an error if the given name does not exist, you will just receive an empty list.
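
Examples (a sketch):

ipset.list()       # list all the installed ipsets
ipset.list("foo")  # list only the ipset named "foo"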

destroy(name=None)

Destroy one ipset (when name is set) or all ipsets (when name is None).

create(name, stype='hash:ip', family=<AddressFamily.AF_INET: 2>, exclusive=True, counters=False, comment=False, maxelem=65536, forceadd=False, hashsize=None, timeout=None, bitmap_ports_range=None, size=None, skbinfo=False)

Create an ipset name of type stype, by default hash:ip.

Common ipset options are supported:

  • exclusive – if set, raise an error if the ipset exists
  • counters – enable data/packets counters
  • comment – enable comments capability
  • maxelem – max size of the ipset
  • forceadd – you should refer to the ipset manpage
  • hashsize – size of the hashtable (if any)
  • timeout – enable and set a default value for entries (if not None)
  • bitmap_ports_range – set the specified inclusive portrange for
    the bitmap ipset structure (0, 65536)
  • size – Size of the list:set, the default is 8
  • skbinfo – enable skbinfo capability
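
A usage sketch combining some of the options above:

ipset = IPSet()
ipset.create("foo",
             stype="hash:net",
             counters=True,
             comment=True,
             timeout=600)
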
add(name, entry, family=<AddressFamily.AF_INET: 2>, exclusive=True, comment=None, timeout=None, etype='ip', skbmark=None, skbprio=None, skbqueue=None, wildcard=False, **kwargs)

Add a member to the ipset.

etype is the entry type that you add to the ipset. It’s related to the ipset type. For example, use “ip” for a hash:ip or bitmap:ip ipset.

When your ipset stores a tuple, like “hash:net,iface”, you must use a comma as separator (etype=”net,iface”).

entry is a string for “ip” and “net” objects. For ipsets with several dimensions, you must use a tuple (or a list) of objects.

The “port” type is specific, since you can use an integer or specialized containers like PortEntry and PortRange.

Examples:

ipset = IPSet()
ipset.create("foo", stype="hash:ip")
ipset.add("foo", "198.51.100.1", etype="ip")

ipset = IPSet()
ipset.create("bar", stype="bitmap:port",
             bitmap_ports_range=(1000, 2000))
ipset.add("bar", 1001, etype="port")
ipset.add("bar", PortRange(1500, 2000), etype="port")

ipset = IPSet()
import socket
protocol = socket.getprotobyname("tcp")
ipset.create("foobar", stype="hash:net,port")
port_entry = PortEntry(80, protocol=protocol)
ipset.add("foobar", ("198.51.100.0/24", port_entry),
          etype="net,port")

The wildcard option enables kernel wildcard matching on the interface name for net,iface entries.

delete(name, entry, family=<AddressFamily.AF_INET: 2>, exclusive=True, etype='ip')

Delete a member from the ipset.

See add() method for more information on etype.

test(name, entry, family=<AddressFamily.AF_INET: 2>, etype='ip')

Test if entry is part of an ipset

See add() method for more information on etype.

swap(set_a, set_b)

Swap two ipsets. They must have compatible content type.

flush(name=None)

Flush all ipsets. When name is set, flush only this ipset.

rename(name_src, name_dst)

Rename the ipset.

get_set_byname(name)

Get a set by its name

get_set_byindex(index)

Get a set by its index

get_supported_revisions(stype, family=<AddressFamily.AF_INET: 2>)

Return minimum and maximum of revisions supported by the kernel.

Each ipset module (like hash:net, hash:ip, etc) has several revisions. Newer revisions often have more features or better performance. Thanks to this call, you can ask the kernel for the list of supported revisions.

You can manually set/force revisions used in IPSet constructor.

Example:

ipset = IPSet()
ipset.get_supported_revisions("hash:net")

ipset.get_supported_revisions("hash:net,port,net")

Makefile documentation

Makefile is used to automate Pyroute2 deployment and test processes. Mostly, it is but a collection of common commands.

target: clean

Clean up the repo directory from the built documentation, collected coverage data, compiled bytecode etc.

target: docs

Build documentation. Requires sphinx.

target: test

Run tests against current code. Command line options:

  • python – path to the Python to use
  • nosetests – path to nosetests to use
  • wlevel – the Python -W level
  • coverage – set coverage=html to get coverage report
  • pdb – set pdb=true to launch pdb on errors
  • module – run only specific test module
  • skip – skip tests by pattern
  • loop – number of test iterations for each module
  • report – url to submit reports to (see tests/collector.py)
  • worker – the worker id

To run the full test cycle on the project, using a specific python, making html coverage report:

$ sudo make test python=python3 coverage=html

To run a specific test module:

$ sudo make test module=general:test_ipdb.py:TestExplicit

The module parameter syntax:

## module=package[:test_file.py[:TestClass[.test_case]]]

$ sudo make test module=lnst
$ sudo make test module=general:test_ipr.py
$ sudo make test module=general:test_ipdb.py:TestExplicit

There are several test packages:

  • general – common functional tests
  • eventlet – Neutron compatibility tests
  • lnst – LNST compatibility tests

For each package a new Python instance is launched, keep that in mind since it affects the code coverage collection.

It is possible to skip tests by a pattern:

$ sudo make test skip=test_stress

To run tests in a loop, use the loop parameter:

$ sudo make test loop=10

For every iteration the code will be packed again with make dist and checked against PEP8.

All the statistics may be collected with a simple web script, see tests/collector.py (requires the bottle framework). To retrieve the collected data one can use curl:

$ sudo make test report=http://localhost:8080/v1/report/
$ curl http://localhost:8080/v1/report/ | python -m json.tool

target: dist

Make Python distribution package. Command line options:

  • python – the Python to use

target: install

Build and install the package into the system. Command line options:

  • python – the Python to use
  • root – root install directory
  • lib – where to install lib files

target: develop

Build the package and deploy the egg-link with setuptools. No code will be deployed into the system directories; instead the local package directory will be visible to Python. In that case one can change the code locally and immediately test it system-wide without running make install.

  • python – the Python to use

other targets

Other targets are either utility targets to be used internally, or hooks for related projects. You can safely ignore them.


MPLS howto

Short introduction into Linux MPLS. Requirements:

  • kernel >= 4.4
  • modules: mpls_router, mpls_iptunnel
  • $ sudo sysctl net.mpls.platform_labels=$x, where $x – number of labels
  • pyroute2 >= 0.4.0

MPLS labels

Possible label formats:

# int
"dst": 20

# list of ints
"newdst": [20]
"newdst": [20, 30]

# string
"labels": "20/30"

Any of these notations should be accepted by pyroute2, if not – try another format and submit an issue to the project github page. The code is quite new, some issues are possible.

Refer also to the test cases, there are many usage samples:

  • tests/general/test_ipr.py
  • tests/general/test_ipdb.py

IPRoute

MPLS routes

Label swap:

from pyroute2 import IPRoute
from pyroute2.common import AF_MPLS

ipr = IPRoute()
# get the `eth0` interface's index:
idx = ipr.link_lookup(ifname="eth0")[0]
# create the request
req = {"family": AF_MPLS,
       "oif": idx,
       "dst": 20,
       "newdst": [30]}
# set up the route
ipr.route("add", **req)

Notice that dst is a single label, while newdst is a stack. Label push:

req = {"family": AF_MPLS,
       "oif": idx,
       "dst": 20,
       "newdst": [20, 30]}
ipr.route("add", **req)

One can also set up the via field:

from socket import AF_INET

req = {"family": AF_MPLS,
       "oif": idx,
       "dst": 20,
       "newdst": [30],
       "via": {"family": AF_INET,
               "addr": "1.2.3.4"}}
ipr.route("add", **req)

MPLS lwtunnel

To inject IP packets into MPLS:

req = {"dst": "1.2.3.0/24",
       "oif": idx,
       "encap": {"type": "mpls",
                 "labels": [202, 303]}}
ipr.route("add", **req)

IPDB

MPLS routes

The IPDB database also supports MPLS routes, they are reflected in the ipdb.routes.tables[“mpls”]:

>>> (ipdb
...  .routes
...  .add({"family": AF_MPLS,
...        "oif": ipdb.interfaces["eth0"]["index"],
...        "dst": 20,
...        "newdst": [30]})
...  .commit())
<skip>
>>> (ipdb
...  .routes
...  .add({"family": AF_MPLS,
...        "oif": ipdb.interfaces["eth0"]["index"],
...        "dst": 22,
...        "newdst": [22, 42]})
...  .commit())
<skip>
>>> ipdb.routes.tables["mpls"].keys()
[20, 22]

Please notice that there is only one MPLS routing table.

Multipath MPLS:

with IPDB() as ipdb:
    (ipdb
     .routes
     .add({"family": AF_MPLS,
           "dst": 20,
           "multipath": [{"via": {"family": AF_INET,
                                  "addr": "10.0.0.2"},
                          "oif": ipdb.interfaces["eth0"]["index"],
                          "newdst": [30]},
                         {"via": {"family": AF_INET,
                                  "addr": "10.0.0.3"},
                          "oif": ipdb.interfaces["eth0"]["index"],
                          "newdst": [40]}]})
     .commit())

MPLS lwtunnel

LWtunnel routes reside in common route tables:

with IPDB() as ipdb:
    (ipdb
     .routes
     .add({"dst": "1.2.3.0/24",
           "oif": ipdb.interfaces["eth0"]["index"],
           "encap": {"type": "mpls",
                     "labels": [22]}})
     .commit())
    print(ipdb.routes["1.2.3.0/24"])

Multipath MPLS lwtunnel:

with IPDB() as ipdb:
    (ipdb
     .routes
     .add({"dst": "1.2.3.0/24",
           "table": 200,
           "multipath": [{"oif": ipdb.interfaces["eth0"]["index"],
                          "gateway": "10.0.0.2",
                          "encap": {"type": "mpls",
                                    "labels": [200, 300]}},
                         {"oif": ipdb.interfaces["eth1"]["index"],
                          "gateway": "172.16.0.2",
                          "encap": {"type": "mpls",
                                    "labels": [200, 300]}}]})
     .commit())
    print(ipdb.routes.tables[200]["1.2.3.0/24"])

NDB module

NDB is a high-level network management module. It allows one to manage interfaces, routes, addresses etc. of connected systems, containers and network namespaces.

NDB works with remote systems via ssh; in that case the mitogen module is required. It is also possible to connect OpenBSD and FreeBSD systems, but in read-only mode for now.

Quick start

Print the routing information in the CSV format:

with NDB() as ndb:
    for record in ndb.routes.summary(format='csv'):
        print(record)

Print all the interface names on the system:

with NDB() as ndb:
    print([x.ifname for x in ndb.interfaces.summary()])

Print IP addresses of interfaces in several network namespaces:

nslist = ['netns01',
          'netns02',
          'netns03']

with NDB() as ndb:
    for nsname in nslist:
        ndb.sources.add(netns=nsname)
    for record in ndb.interfaces.summary(format='json'):
        print(record)

Add an IP address on an interface:

with NDB() as ndb:
    with ndb.interfaces['eth0'] as i:
        i.ipaddr.create(address='10.0.0.1', prefixlen=24).commit()
        # ---> <---  NDB waits until the address actually
        #            becomes available

Change an interface property:

with NDB() as ndb:
    with ndb.interfaces['eth0'] as i:
        i['state'] = 'up'
        i['address'] = '00:11:22:33:44:55'
    # ---> <---  the commit() is called automatically by
    #            the context manager's __exit__()
Key NDB features:
  • Asynchronously updated database of RTNL objects
  • Data integrity
  • Multiple data sources – local, netns, remote
  • Fault tolerance and memory consumption limits
  • Transactions

IP addresses


Debug and logging

Logging

A simple way to set up stderr logging:

# to start logging on the DB init
ndb = NDB(log='on')

# ... or to start it in run time
ndb.log('on')

# ... the same as above, another syntax
ndb.log.on

# ... turn logging off
ndb.log('off')

# ... or
ndb.log.off

It is possible also to set up logging to a file or to a syslog server:

#
ndb.log('file_name.log')

#
ndb.log('syslog://server:port')

Fetching DB data

By default, NDB starts with an in-memory SQLite3 database. In order to perform post mortem analysis it may be more useful to start the DB with a file-based DB or PostgreSQL as the backend.

See more: Database

It is possible to dump all the DB data with schema.export():

with NDB() as ndb:
   ndb.schema.export('stderr')  # dump the DB to stderr
   ...
   ndb.schema.export('pr2_debug')  # dump the DB to a file

RTNL events

All the loaded RTNL events may be stored in the DB. To turn that feature on, one should start NDB with the debug option:

ndb = NDB(rtnl_debug=True)

The events may be exported with the same schema.export().

Unlike ordinary tables, which are limited by the number of network objects in the system, the event log tables are not limited. Do not enable event logging in production, it may exhaust all the memory.

RTNL objects

NDB creates RTNL objects on demand; it doesn’t keep them all the time. References to created objects are kept in the ndb.<view>.cache set:

>>> ndb.interfaces.cache.keys()
[(('target', u'localhost'), ('index', 2)),
 (('target', u'localhost'), ('index', 39615))]

>>> [x['ifname'] for x in ndb.interfaces.cache.values()]
[u'eth0', u't2']

Object states

RTNL objects may be in several states:

  • invalid: the object does not exist in the system
  • system: the object exists both in the system and in NDB
  • setns: the existing object should be moved to another network namespace
  • remove: the existing object must be deleted from the system

The state transitions are logged in the state log:

>>> from pyroute2 import NDB
>>> ndb = NDB()
>>> c = ndb.interfaces.create(ifname='t0', kind='dummy').commit()
>>> c.state.events
[
   (1557752212.6703758, 'invalid'),
   (1557752212.6821117, 'system')
]

The timestamps allow one to correlate the state transitions with the NDB log and the RTNL events log, in case the latter was enabled.

Object snapshots

Before running any commit, NDB marks all the related records in the DB with a random value in the f_tflags DB field (the tflags object field), and stores all the marked records in the snapshot tables. In short, commit() is snapshot() + apply(), plus revert() if the apply failed:

>>> nic = ndb.interfaces['t0']
>>> nic['state']
'down'
>>> nic['state'] = 'up'
>>> snapshot = nic.snapshot()
>>> ndb.schema.snapshots
{
   'addresses_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>,
   'neighbours_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>,
   'routes_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>,
   'nh_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>,
   'p2p_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>,
   'ifinfo_bridge_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>,
   'ifinfo_bond_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>,
   'ifinfo_vlan_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>,
   'ifinfo_vxlan_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>,
   'ifinfo_gre_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>,
   'ifinfo_vrf_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>,
   'ifinfo_vti_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>,
   'ifinfo_vti6_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>,
   'interfaces_139736119707256': <weakref at 0x7f16d8391a98; to 'Interface' at 0x7f16d9c708e0>
}
>>> nic.apply()
...
>>> nic['state']
'up'
>>> snapshot.apply(rollback=True)
...
>>> nic['state']
'down'

Or same:

>>> nic = ndb.interfaces['t0']
>>> nic['state']
'down'
>>> nic['state'] = 'up'
>>> nic.commit()
>>> nic['state']
'up'
>>> nic.rollback()
>>> nic['state']
'down'

These snapshot tables hold the objects’ state before the changes were applied.


Network interfaces

List interfaces

List interface keys:

with NDB(log='on') as ndb:
    for key in ndb.interfaces:
        print(key)

NDB views support some dict methods: items(), values(), keys():

with NDB(log='on') as ndb:
    for key, nic in ndb.interfaces.items():
        nic.set('state', 'up')
        nic.commit()

Get interface objects

The keys may be used as selectors to get interface objects:

with NDB(log='on') as ndb:
    for key in ndb.interfaces:
        print(ndb.interfaces[key])

Other possible selector formats are a dict() and a simple string; the latter means the interface name:

eth0 = ndb.interfaces['eth0']

Dict selectors are necessary to get interfaces by other properties:

wrk1_eth0 = ndb.interfaces[{'target': 'worker1.sample.com',
                            'ifname': 'eth0'}]

wrk2_eth0 = ndb.interfaces[{'target': 'worker2.sample.com',
                            'address': '52:54:00:22:a1:b7'}]

Change nic properties

Changing MTU and MAC address:

with NDB(log='on') as ndb:
    with ndb.interfaces['eth0'] as eth0:
        eth0['mtu'] = 1248
        eth0['address'] = '00:11:22:33:44:55'
    # --> <-- eth0.commit() is called by the context manager
# --> <-- ndb.close() is called by the context manager

One can change a property either using the assignment statement, or using the .set() routine:

# same code
with NDB(log='on') as ndb:
    with ndb.interfaces['eth0'] as eth0:
        eth0.set('mtu', 1248)
        eth0.set('address', '00:11:22:33:44:55')

The .set() routine returns the object itself, which makes chained calls possible:

# same as above
with NDB(log='on') as ndb:
    with ndb.interfaces['eth0'] as eth0:
        eth0.set('mtu', 1248).set('address', '00:11:22:33:44:55')

# or
with NDB(log='on') as ndb:
    with ndb.interfaces['eth0'] as eth0:
        (eth0
         .set('mtu', 1248)
         .set('address', '00:11:22:33:44:55'))

# or without the context manager, call commit() explicitly
with NDB(log='on') as ndb:
    (ndb
     .interfaces['eth0']
     .set('mtu', 1248)
     .set('address', '00:11:22:33:44:55')
     .commit())

Create virtual interfaces

Create a bridge and add a port, eth0:

(ndb
 .interfaces
 .create(ifname='br0', kind='bridge')
 .commit())

(ndb
 .interfaces['eth0']
 .set('master', ndb.interfaces['br0']['index'])
 .commit())
pyroute2-0.5.9/docs/html/ndb_intro.html0000644000175000017500000002336413621220107017774 0ustar peetpeet00000000000000 Quick start — pyroute2 0.5.9 documentation

NDB is a high level network management module. IT allows to manage interfaces, routes, addresses etc. of connected systems, containers and network namespaces.

NDB work with remote systems via ssh, in that case mitogen module is required. It is possible to connect also OpenBSD and FreeBSD systems, but in read-only mode for now.

Quick start

Print the routing infotmation in the CSV format:

with NDB() as ndb:
    for record in ndb.routes.summary(format='csv'):
        print(record)

Print all the interface names on the system:

with NDB() as ndb:
    print([x.ifname for x in ndb.interfaces.summary()])

Print IP addresses of interfaces in several network namespaces:

nslist = ['netns01',
          'netns02',
          'netns03']

with NDB() as ndb:
    for nsname in nslist:
        ndb.sources.add(netns=nsname)
    for record in ndb.interfaces.summary(format='json'):
        print(record)

Add an IP address on an interface:

with NDB() as ndb:
    with ndb.interfaces['eth0'] as i:
        i.ipaddr.create(address='10.0.0.1', prefixlen=24).commit()
        # ---> <---  NDB waits until the address actually
        #            becomes available

Change an interface property:

with NDB() as ndb:
    with ndb.interfaces['eth0'] as i:
        i['state'] = 'up'
        i['address'] = '00:11:22:33:44:55'
    # ---> <---  the commit() is called automatically by
    #            the context manager's __exit__()

Key NDB features:
  • Asynchronously updated database of RTNL objects
  • Data integrity
  • Multiple data sources – local, netns, remote
  • Fault tolerance and memory consumption limits
  • Transactions
pyroute2-0.5.9/docs/html/ndb_objects.html0000644000175000017500000004722013621220107020267 0ustar peetpeet00000000000000 NDB objects — pyroute2 0.5.9 documentation

NDB objects

Structure and API

The NDB objects are dictionary-like structures that represent network objects – interfaces, routes, addresses etc. In addition to the usual dictionary API they have some NDB-specific methods, see the RTNL_Object class description below.

The dictionary fields represent RTNL message fields and NLA names, and the objects are used as argument dictionaries to normal IPRoute methods like link() or route(). Thus everything described for the IPRoute methods is valid here as well.

See also: IPRoute module

E.g.:

# create a vlan interface with IPRoute
with IPRoute() as ipr:
    ipr.link("add",
             ifname="vlan1108",
             kind="vlan",
             link=ipr.link_lookup(ifname="eth0"),
             vlan_id=1108)

# same with NDB:
with NDB(log="stderr") as ndb:
    (ndb
     .interfaces
     .add(ifname="vlan1108",
          kind="vlan",
          link="eth0",
          vlan_id=1108)
     .commit())

Slightly simplifying: if a network object doesn't exist, NDB will run the corresponding RTNL method with the "add" argument; if it exists, with the "set" argument; and to remove an object NDB will call the method with the "del" argument.
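
For example, a sketch of the first two paths using only the calls shown above (the dummy interface name is just an illustration):

# the interface does not exist yet -- NDB goes the "add" way
(ndb
 .interfaces
 .create(ifname='dummy0', kind='dummy')
 .commit())

# the interface exists now -- the same commit() goes the "set" way
(ndb
 .interfaces['dummy0']
 .set('state', 'up')
 .commit())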

Accessing objects

NDB objects are grouped into “views”:

  • interfaces
  • addresses
  • routes
  • neighbours
  • rules
  • netns

Views are dictionary-like objects that accept strings or dict selectors:

# access eth0
ndb.interfaces["eth0"]

# access eth0 in the netns test01
ndb.sources.add(netns="test01")
ndb.interfaces[{"target": "test01", "ifname": "eth0"}]

# access a route to 10.4.0.0/24
ndb.routes["10.4.0.0/24"]

# same with a dict selector
ndb.routes[{"dst": "10.4.0.0", "dst_len": 24}]

Objects cache

NDB creates objects on demand; it doesn't create thousands of route objects for thousands of routes by default. An object is created only when accessed for the first time, and it stays in the cache as long as it has any uncommitted changes. To inspect the cached objects, use a view's .cache:

>>> ndb.interfaces.cache.keys()
[(('target', u'localhost'), ('tflags', 0), ('index', 1)),  # lo
 (('target', u'localhost'), ('tflags', 0), ('index', 5))]  # eth3

There is no asynchronous cache invalidation; the cache is cleaned up every time an object is accessed.
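
For example (an illustrative session; the index in the key depends on the system):

>>> eth0 = ndb.interfaces['eth0']     # the object is created on first access
>>> eth0['mtu'] = 9000                # an uncommitted change keeps it cached
>>> ndb.interfaces.cache.keys()
[(('target', u'localhost'), ('tflags', 0), ('index', 2))]
>>> eth0.commit()                     # after the commit the object may be dropped again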

class pyroute2.ndb.objects.RTNL_Object(view, key, iclass, ctxid=None, match_src=None, match_pairs=None)

Base RTNL object class.

table

The main reference table for the object. The SQL schema of the table is used to build the key and to verify the fields.

etable

The table where the object actually fetches the data from. It is not always equal to self.table; e.g., snapshot objects fetch the data from snapshot tables.

Read-only property.

key

The key of the object, used to fetch it from the DB.

show(**kwarg)

Return the object in a specified format. The format may be specified with the keyword argument format or in the ndb.config[‘show_format’].

TODO: document different formats

set(key, value)

Set a field specified by key to value, and return self. The method is useful for writing call chains like this:

(ndb
 .interfaces["eth0"]
 .set('mtu', 1200)
 .set('state', 'up')
 .set('address', '00:11:22:33:44:55')
 .commit())

snapshot(ctxid=None)

Create and return a snapshot of the object. The method creates corresponding SQL tables for the object itself and for detected dependencies.

The snapshot tables will be removed as soon as the snapshot gets collected by the GC.

complete_key(key)

Try to complete the object key based on the provided fields. E.g.:

>>> ndb.interfaces['eth0'].complete_key({"ifname": "eth0"})
{'ifname': 'eth0',
 'index': 2,
 'target': u'localhost',
 'tflags': 0}

It is an internal method and is not supposed to be used externally.

rollback(snapshot=None)

Try to roll back the object state using the snapshot provided as an argument, or using self.last_save.

commit()

Try to commit the pending changes. If the commit fails, automatically revert the state.

apply(rollback=False)

Create a snapshot and apply pending changes. Do not revert the changes in the case of an exception.

load_value(key, value)

Load a value and clean up the self.changed set if the loaded value matches the expectation.

load_sql(table=None, ctxid=None, set_state=True)

Load the data from the database.

load_rtnlmsg(target, event)

Check if the RTNL event matches the object and load the data from the database if it does.

pyroute2-0.5.9/docs/html/ndb_schema.html0000644000175000017500000003611713621220107020101 0ustar peetpeet00000000000000 Database — pyroute2 0.5.9 documentation

Database

Backends

NDB stores all the records in an SQL database. By default it uses the SQLite3 module, which is a part of the Python stdlib, so no extra packages are required:

# SQLite3 -- simple in-memory DB
ndb = NDB()

# SQLite3 -- file DB
ndb = NDB(db_provider='sqlite3', db_spec='test.db')

It is also possible to use a PostgreSQL database via the psycopg2 module:

# PostgreSQL -- local DB
ndb = NDB(db_provider='psycopg2',
          db_spec={'dbname': 'test'})

# PostgreSQL -- remote DB
ndb = NDB(db_provider='psycopg2',
          db_spec={'dbname': 'test',
                   'host': 'db1.example.com'})

SQL schema

A file-based SQLite3 DB or a PostgreSQL DB may be useful for inspecting the collected data. Here is an example schema:

rtnl=# \dt
            List of relations
 Schema |      Name       | Type  | Owner
--------+-----------------+-------+-------
 public | addresses       | table | root
 public | ifinfo_bond     | table | root
 public | ifinfo_bridge   | table | root
 public | ifinfo_gre      | table | root
 public | ifinfo_vlan     | table | root
 public | ifinfo_vrf      | table | root
 public | ifinfo_vti      | table | root
 public | ifinfo_vti6     | table | root
 public | ifinfo_vxlan    | table | root
 public | interfaces      | table | root
 public | neighbours      | table | root
 public | nh              | table | root
 public | routes          | table | root
 public | sources         | table | root
 public | sources_options | table | root
(15 rows)

rtnl=# select f_index, f_ifla_ifname from interfaces;
 f_index | f_ifla_ifname
---------+---------------
       1 | lo
       2 | eth0
      28 | ip_vti0
      31 | ip6tnl0
      32 | ip6_vti0
   36445 | br0
   11434 | dummy0
       3 | eth1
(8 rows)

rtnl=# select f_index, f_ifla_br_stp_state from ifinfo_bridge;
 f_index | f_ifla_br_stp_state
---------+---------------------
   36445 |                   0
(1 row)

There are also some useful views that join the ifinfo tables with the interfaces table:

rtnl=# \dv
       List of relations
 Schema |  Name  | Type | Owner
--------+--------+------+-------
 public | bond   | view | root
 public | bridge | view | root
 public | gre    | view | root
 public | vlan   | view | root
 public | vrf    | view | root
 public | vti    | view | root
 public | vti6   | view | root
 public | vxlan  | view | root
(8 rows)
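
The collected data can also be inspected directly from Python. A minimal sketch with the stdlib sqlite3 module, assuming NDB was started with db_spec='test.db' as shown above:

import sqlite3

with sqlite3.connect('test.db') as conn:
    # the column names follow the f_<nla name> convention shown above
    cursor = conn.execute('SELECT f_index, f_ifla_ifname FROM interfaces')
    for f_index, f_ifname in cursor:
        print(f_index, f_ifname)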
pyroute2-0.5.9/docs/html/ndb_sources.html0000644000175000017500000003023413621220107020316 0ustar peetpeet00000000000000 RTNL sources — pyroute2 0.5.9 documentation

RTNL sources

Local RTNL

The local RTNL source is a simple IPRoute instance. By default NDB starts with one local RTNL source named localhost:

>>> ndb = NDB()
>>> ndb.sources.details()
{'kind': u'local', u'nlm_generator': 1, 'target': u'localhost'}
>>> ndb.sources['localhost']
[running] <IPRoute {'nlm_generator': 1}>

The localhost RTNL source starts an additional async cache thread. The nlm_generator option means that the IPRoute object returns generators instead of collections, so IPRoute responses will not consume memory regardless of the number of RTNL objects:

>>> ndb.sources['localhost'].nl.link('dump')
<generator object _match at 0x7fa444961e10>
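
Being a generator, the dump can be consumed lazily, e.g. (get_attr() is the standard NLA accessor of netlink messages; the output is illustrative):

>>> for link in ndb.sources['localhost'].nl.link('dump'):
...     print(link.get_attr('IFLA_IFNAME'))
...
lo
eth0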

See also: IPRoute module

Network namespaces

There are two ways to connect additional sources to an NDB instance. One is to specify sources when creating an NDB object:

ndb = NDB(sources=[{'target': 'localhost'}, {'netns': 'test01'}])

Another way is to call ndb.sources.add() method:

ndb.sources.add(netns='test01')

The syntax {'target': 'localhost'} and {'netns': 'test01'} is the short form. The full form would be:

{'target': 'localhost', # the label for the DB
 'kind': 'local',       # use IPRoute class to start the source
 'nlm_generator': 1}    #

{'target': 'test01',    # the label
 'kind': 'netns',       # use NetNS class
 'netns': 'test01'}     #
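
Once a netns source is connected, objects from that namespace can be addressed with dict selectors using the target label, e.g. (a sketch; the names are illustrative):

with NDB() as ndb:
    ndb.sources.add(netns='test01')
    # access the loopback interface inside the netns
    lo = ndb.interfaces[{'target': 'test01', 'ifname': 'lo'}]
    print(lo['state'])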

See also: WireGuard module

Remote systems

It is also possible to connect to remote systems using SSH. In order to use this kind of source, the mitogen module must be installed. Remote sources use the RemoteIPRoute class. The short form:

ndb.sources.add(hostname='worker1.example.com')

A more extended form:

ndb.sources.add(**{'target': 'worker1.example.com',
                   'kind': 'remote',
                   'hostname': 'worker1.example.com',
                   'username': 'jenkins',
                   'check_host_keys': False})

See also: RemoteIPRoute

pyroute2-0.5.9/docs/html/netlink.html0000644000175000017500000011100413621220107017447 0ustar peetpeet00000000000000 Netlink — pyroute2 0.5.9 documentation
pyroute2-0.5.9/docs/html/netns.html0000644000175000017500000007653613621220107017156 0ustar peetpeet00000000000000 NetNS module — pyroute2 0.5.9 documentation

NetNS module

Netns management overview

Pyroute2 provides basic namespace management support. Here's a quick overview of typical netns tasks and the related pyroute2 tools.

Move an interface to a namespace

Though this task is not managed via the netns module, it should be mentioned here as well. To move an interface to a netns, one should provide the IFLA_NET_NS_FD nla in a 'set' link RTNL request. The nla is an open FD number that refers to an already created netns. The pyroute2 library also makes it possible to specify a netns name as a string instead of an FD number; in that case the library will try to look up the corresponding netns in the standard location.

Create veth and move the peer to a netns with IPRoute:

from pyroute2 import IPRoute
ipr = IPRoute()
ipr.link('add', ifname='v0p0', kind='veth', peer='v0p1')
idx = ipr.link_lookup(ifname='v0p1')[0]
ipr.link('set', index=idx, net_ns_fd='netns_name')

Create veth and move the peer to a netns with IPDB:

from pyroute2 import IPDB
ipdb = IPDB()
ipdb.create(ifname='v0p0', kind='veth', peer='v0p1').commit()
with ipdb.interfaces.v0p1 as i:
    i.net_ns_fd = 'netns_name'

Manage interfaces within a netns

This task can be done with NetNS objects. A NetNS object spawns a child and runs it within a netns, providing the same API as IPRoute does:

from pyroute2 import NetNS
ns = NetNS('netns_name')
# do some stuff within the netns
ns.close()

One can even start IPDB on the top of NetNS:

from pyroute2 import NetNS
from pyroute2 import IPDB
ns = NetNS('netns_name')
ipdb = IPDB(nl=ns)
# do some stuff within the netns
ipdb.release()
ns.close()

Spawn a process within a netns

For that purpose one can use the NSPopen API. It works just like the normal Popen, but starts the process within a netns. See the NSPopen section below for an example and caveats.

List, set, create and remove netns

These functions are described below. To use them, import the netns module:

from pyroute2 import netns
netns.listnetns()
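
A short round trip with the functions described below (typically requires root privileges; the namespace name is arbitrary):

from pyroute2 import netns

netns.create('ns-demo')        # create the namespace
print(netns.listnetns())       # 'ns-demo' should be listed now
netns.remove('ns-demo')        # clean it up again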

Please be aware that in order to run system calls the library uses the ctypes module. It can fail on platforms where SELinux is enforced. If the Python interpreter dumps core while loading this module, one can check the SELinux state with the getenforce command.

pyroute2.netns.listnetns(nspath=None)

List available network namespaces.

pyroute2.netns.ns_pids(nspath='/var/run/netns')

List pids in all netns

If a pid is in an unknown netns, it is not returned.

pyroute2.netns.pid_to_ns(pid=1, nspath='/var/run/netns')

Return the netns name that matches the given pid, or None otherwise.

pyroute2.netns.create(netns, libc=None)

Create a network namespace.

pyroute2.netns.remove(netns, libc=None)

Remove a network namespace.

pyroute2.netns.setns(netns, flags=64, libc=None)

Set netns for the current process.

The flags semantics are the same as for the open(2) call:

  • O_CREAT – create the netns if it doesn't exist
  • O_CREAT | O_EXCL – create the netns only if it doesn't exist, fail otherwise

Note that the "main" netns has no name, but you can still access it:

setns('foo')  # move to netns foo
setns('/proc/1/ns/net')  # go back to default netns

See also pushns()/popns()/dropns()

Changed in 0.5.1: the routine closes the ns fd if it’s not provided via arguments.

pyroute2.netns.pushns(newns=None, libc=None)

Save the current netns in order to return to it later. If newns is specified, change to it:

# --> the script in the "main" netns
netns.pushns("test")
# --> changed to "test", the "main" is saved
netns.popns()
# --> "test" is dropped, back to the "main"
pyroute2.netns.popns(libc=None)

Restore the previously saved netns.

pyroute2.netns.dropns(libc=None)

Discard the last namespace saved with pushns().

NetNS objects

A NetNS object is IPRoute-like. It runs in the main network namespace, but also creates a proxy process running in the required netns. All the netlink requests are done via that proxy process.

NetNS supports the standard IPRoute API, so it can be used instead of IPRoute, e.g., in IPDB:

# start the main network settings database:
ipdb_main = IPDB()
# start the same for a netns:
ipdb_test = IPDB(nl=NetNS('test'))

# create VETH
ipdb_main.create(ifname='v0p0', kind='veth', peer='v0p1').commit()

# move peer VETH into the netns
with ipdb_main.interfaces.v0p1 as veth:
    veth.net_ns_fd = 'test'

# please keep in mind, that netns move clears all the settings
# on a VETH interface pair, so one should run netns assignment
# as a separate operation only

# assign addresses
# please notice, that `v0p1` is already in the `test` netns,
# so should be accessed via `ipdb_test`
with ipdb_main.interfaces.v0p0 as veth:
    veth.add_ip('172.16.200.1/24')
    veth.up()
with ipdb_test.interfaces.v0p1 as veth:
    veth.add_ip('172.16.200.2/24')
    veth.up()

Please also review the test code under tests/test_netns.py for more examples.

By default, NetNS creates the requested netns if it doesn't exist, or uses the existing one. To control this behaviour, one can use flags as for the open(2) system call:

# create a new netns or fail, if it already exists
netns = NetNS('test', flags=os.O_CREAT | os.O_EXCL)

# create a new netns or use existing one
netns = NetNS('test', flags=os.O_CREAT)

# the same as above, the default behaviour
netns = NetNS('test')

To remove a network namespace:

from pyroute2 import NetNS
netns = NetNS('test')
netns.close()
netns.remove()

One should stop it first with close(), and only after that run remove().

class pyroute2.netns.nslink.NetNS(netns, flags=64)

NetNS is the IPRoute API with network namespace support.

Why not IPRoute?

The task of running netlink commands in one network namespace while being in another requires an architecture that differs too much from a simple netlink socket.

NetNS starts a proxy process in a network namespace and uses multiprocessing communication channels between the main and the proxy processes to route all recv() and sendto() requests/responses.

Any specific API calls?

Nope. NetNS supports everything that IPRoute does, and in the same way. It provides a fully socket-compatible API and can be used in poll/select as well.

The only difference is the close() call. In the case of NetNS it is mandatory to close the socket before exit.

NetNS and IPDB

It is possible to run IPDB with NetNS:

from pyroute2 import NetNS
from pyroute2 import IPDB

ip = IPDB(nl=NetNS('somenetns'))
...
ip.release()

Do not forget to call release() when the work is done. It will shut down NetNS instance as well.

remove()

Try to remove this network namespace from the system.

NSPopen

The NSPopen class has nothing to do with netlink at all, but it is required for reasonable network namespace support.

class pyroute2.netns.process.proxy.NSPopen(nsname, *argv, **kwarg)

A proxy class to run a Popen() object in some network namespace.

A sample that runs the ip ad command in the nsname network namespace:

import subprocess
from pyroute2 import NSPopen

nsp = NSPopen('nsname', ['ip', 'ad'], stdout=subprocess.PIPE)
print(nsp.communicate())
nsp.wait()
nsp.release()

The NSPopen class was intended to be a drop-in replacement for the Popen class, but there are still some important differences.

The NSPopen object implicitly spawns a child python process to be run in the background in a network namespace. The target process specified as the argument of the NSPopen will be started in its turn from this child. Thus all the fd numbers of the running NSPopen object are meaningless in the context of the main process. Trying to operate on them, one will get ‘Bad file descriptor’ in the best case or a system call working on a wrong file descriptor in the worst case. A possible solution would be to transfer file descriptors between the NSPopen object and the main process, but it is not implemented yet.

The process diagram for NSPopen('test', ['ip', 'ad']):

+---------------------+     +--------------+     +------------+
| main python process |<--->| child python |<--->| netns test |
| NSPopen()           |     | Popen()      |     | $ ip ad    |
+---------------------+     +--------------+     +------------+

As a workaround for the issue with file descriptors, some additional methods are available on file objects stdin, stdout and stderr. E.g., one can run fcntl calls:

from fcntl import F_GETFL
from pyroute2 import NSPopen
from subprocess import PIPE

proc = NSPopen('test', ['my_program'], stdout=PIPE)
flags = proc.stdout.fcntl(F_GETFL)

In that way one can use fcntl(), ioctl(), flock() and lockf() calls.
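
For example, putting the proxied stdout into non-blocking mode could look like this (a sketch, assuming the proxied fcntl() takes the same arguments as fcntl.fcntl() does for a regular file object):

from fcntl import F_GETFL, F_SETFL
from os import O_NONBLOCK

# read the current flags via the proxy, then set O_NONBLOCK on top of them
flags = proc.stdout.fcntl(F_GETFL)
proc.stdout.fcntl(F_SETFL, flags | O_NONBLOCK)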

Another additional method is release(), which can be used to explicitly stop the proxy process and release all the resources.

release()

Explicitly stop the proxy process and release all the resources. The NSPopen object can not be used after the release() call.

pyroute2-0.5.9/docs/html/nlsocket.html0000644000175000017500000006563413621220107017646 0ustar peetpeet00000000000000 Base netlink socket and marshal — pyroute2 0.5.9 documentation
pyroute2-0.5.9/docs/html/objects.inv (Sphinx inventory; zlib-compressed binary content omitted)
pyroute2-0.5.9/docs/html/py-modindex.html Python Module Index — pyroute2 0.5.9 documentation
pyroute2-0.5.9/docs/html/remote.html RemoteIPRoute — pyroute2 0.5.9 documentation

RemoteIPRoute

Caveats

Warning

The class implies a serious performance penalty. Please consider other options if you expect a high netlink traffic load.

Warning

The class requires the mitogen library that should be installed separately: https://mitogen.readthedocs.io/en/latest/

Warning

Objects of this class implicitly spawn child processes. Beware.

Here are some reasons why this class is not used as a general class instead of the specific IPRoute for local RTNL and NetNS for local netns management:

  • The performance of the Python parser for the binary netlink protocol is not so good, but using such proxies makes it even worse.
  • Local IPRoute and NetNS access is the core functionality and must work with no additional libraries installed.

Introduction

It is possible to run IPRoute instances remotely using the mitogen library. The remote node must have the same Python version installed, but no additional libraries are required there: all the code will be imported from the host where you start your script.

The simplest case: run IPRoute on a remote Linux host via SSH (assuming the keys are deployed):

from pyroute2 import RemoteIPRoute

rip = RemoteIPRoute(protocol='ssh',
                    hostname='test01',
                    username='ci')
rip.get_links()

# ...

Indirect access

By building mitogen proxy chains you can access nodes indirectly:

import mitogen.master
from pyroute2 import RemoteIPRoute

broker = mitogen.master.Broker()
router = mitogen.master.Router(broker)
# login to the gateway
gw = router.ssh(hostname='test-gateway',
                username='ci')
# login from the gateway to the target node
host = router.ssh(via=gw,
                  hostname='test01',
                  username='ci')

rip = RemoteIPRoute(router=router, context=host)

rip.get_links()

# ...

Run with privileges

Running IPRoute with root permissions requires the mitogen sudo proxy:

import mitogen.master
from pyroute2 import RemoteIPRoute

broker = mitogen.master.Broker()
router = mitogen.master.Router(broker)
host = router.ssh(hostname='test01', username='ci')
sudo = router.sudo(via=host, username='root')

rip = RemoteIPRoute(router=router, context=sudo)

rip.link('add', ifname='br0', kind='bridge')

# ...

Remote network namespaces

You can also access remote network namespaces with the same RemoteIPRoute object:

import mitogen.master
from pyroute2 import RemoteIPRoute

broker = mitogen.master.Broker()
router = mitogen.master.Router(broker)
host = router.ssh(hostname='test01', username='ci')
sudo = router.sudo(via=host, username='root')

rip = RemoteIPRoute(router=router, context=sudo, netns='test-netns')

rip.link('add', ifname='br0', kind='bridge')

# ...
pyroute2-0.5.9/docs/html/report.html0000644000175000017500000001402413621220107017322 0ustar peetpeet00000000000000 Report a bug — pyroute2 0.5.9 documentation

Report a bug

In the case you have issues, please report them to the project bug tracker: https://github.com/svinota/pyroute2/issues

It is important to provide all the required information with your report:

  • Linux kernel version
  • Python version
  • Specific environment, if used – gevent, eventlet etc.

Sometimes it is necessary to measure specific system parameters. There is code to do that, e.g.:

$ sudo make test-platform

Please keep in mind that this command will try to create and delete different interface types.

It is also possible to run the test from your code:

from pprint import pprint
from pyroute2.config.test_platform import TestCapsRtnl
pprint(TestCapsRtnl().collect())
pyroute2-0.5.9/docs/html/search.html0000644000175000017500000000767113621220107017266 0ustar peetpeet00000000000000 Search — pyroute2 0.5.9 documentation


pyroute2-0.5.9/docs/html/searchindex.js (generated Sphinx search index; minified content omitted)
3,24,25],speed:1,sphinx:[1,7,13],sql:[1,10,15,20],sqlite3:[17,21],src:[2,7,10,11],src_len:11,srh:11,ss2:1,ssh:[15,19,22,26],ssid:5,ssl:1,stab:11,stabl:[10,28],stack:[8,10,11,14],stage:5,stai:[10,20,28],standalon:5,standard:[7,11,24],start:[0,1,2,4,7,9,10,11,17,22,24,25,26],startswith:10,stat:[11,25],state:[1,2,5,6,7,10,11,15,18,19,20,24,25],statement:[6,10,11,18],station:5,statist:[7,13,30],stats2:[1,11],stats_app:[1,11],statu:8,stderr:[1,17,20,24],stdin:[1,24],stdio:1,stdlib:[7,21],stdout:[1,24],step:23,still:[1,2,5,10,24,25,30],stop:[1,10,24],store:[5,10,11,12,17,21,25,30],storm:[10,25],stp:10,str:[2,11],strace:[4,5,9],strang:30,stream:23,stress:25,strict:1,strictli:3,string:[2,4,5,6,10,11,12,14,18,20,23,24,25,30],strip:6,struct:[6,11,23,29],structur:[1,4,5,6,11,12,15,23,29],stuff:[0,1,2,24,30],stype:[5,12],sub:1,submit:[13,14],subnet:11,subpackag:1,subprocess:24,subscrib:28,subset:7,subsystem:[11,23],sudo:[4,7,10,13,14,26,27,28],suitabl:10,summari:[15,19],supplementari:0,support:[0,1,4,5,9,10,11,12,14,18,24,25,30],suppos:[10,20],suppress:[10,25],sure:[2,6,11],surpris:4,svinota:[1,3,5,7,10,27],swap:[5,8,12,14,30],swiss:11,switchdev:11,symbol:[2,28],sync:[1,10],synchron:10,syntax:[1,6,11,13,17,22],sys:8,syscal:1,sysctl:[7,14,23],syslog:17,system:[0,1,5,6,8,9,10,11,13,15,17,19,24,27],systemd:5,tabl:[1,10,11,14,17,20,21],tag:11,take:[1,5,8,10,11,23],tap0:[10,11],tap:[1,11],tarbal:1,target:[1,5,7,9,11,17,18,20,22,23,24,26],task:[24,28],taskstat:[0,4,7],taskstatsmsg:0,tbf:1,tca_act_bpf:1,tca_u32_act:1,tcmd:0,tcmsg:[0,1,11],tcp:[1,5,12],team:[7,11],teamd:0,temporari:30,termin:[23,25],test01:[20,22,26],test:[1,2,5,7,9,11,12,14,21,24,25,26,27,28,30],test_cas:13,test_fil:13,test_ipdb:[13,14],test_ipr:[13,14],test_list:30,test_netn:[11,24],test_platform:27,test_stress:13,testcapsrtnl:27,testclass:13,testexplicit:13,text:8,tflag:[17,20],tgfhcm9zc2vcawnozv9dj2vzdexhugx1c0jlbgxlpdm:29,than:[5,10,25],thank:[1,5,12],thei:[0,2,4,5,6,10,11,12,14,20,23,25,28,30],them:[0,2,4,5,8,10,11,13,17,23,24,27,28],thereaft:[2,10],thermal:7,thermal_ev:[1,7],thi:[0,1,4,5,6,8,10,11,12,22,23,24,25,26,27,28,30],thing:10,think:4,those:11,though:[5,10,11,23,24],thousand:[8,10,20],thread:[1,10,11,22,25],threadless:1,three:[5,10,11,23],through:[0,11],thu:[4,7,10,11,20,24],time:[0,1,2,8,10,11,17,20,25,29,30],timeout:[1,5,10,12,30],timeoutexcept:1,timestamp:17,tnum:10,tobyt:6,todo:[11,20],togeth:11,toler:[15,19],tolist:6,too:[0,5,24],tool:[5,7,11,13,24,30],top:[7,11,24],tos:11,total:[6,8],touch:10,trace:[2,4,5],track:5,tracker:27,tradit:11,traffic:[5,7,10,11,26],transact:[0,1,6,9,15,19],transfer:24,transform:6,transit:[1,17],translat:11,transpar:[0,4,11],transport:[0,1,4,11,25],treat:23,tree:23,tri:[10,11],trigger:5,tstamp:23,tstat:23,ttl:11,tun1:11,tun:[1,11],tune:11,tunnel:[7,11],tuntap:[0,1,11],tuntap_data:11,tupl:[5,10,12,23,25,30],turn:[8,10,17,24,25],tutori:10,two:[0,1,2,5,6,10,11,12,22,23,25],txqlen:[10,11],txt:11,type:[0,1,2,4,5,6,7,10,11,12,14,21,25,27,30],typic:24,typl:23,u32:1,udp4_pseudo_head:0,udp:[1,5,6,12],udp_proto:[5,12],udpmsg:0,uid:11,uint16:[11,23],uint32:[4,23],uint64:[6,23],uint8:[6,23],uint:23,umcast:11,unchang:11,undefin:10,under:[1,2,11,24,28],underli:11,understand:10,unfreez:10,unicast:[10,11],unicast_flood:11,unicod:1,unif:11,uniform:10,uniqu:[11,23],unit:1,univer:23,unix:[1,5],unknown:[11,24],unless:[10,11],unlik:[5,10,11,17],unnecessari:10,unpack:23,unreach:[10,11],unregist:25,unregister_callback:25,unregister_polici:25,unshar:8,unsign:23,untag:11,until:[2,8,10,15,19,23],unus:[5,12],updat:[1,10,11,15
,19,30],update_cont:30,update_dict_cont:30,update_state_by_msg:10,uplink:1,upon:[10,11,25],upper:11,upstream:11,url:[2,13],usabl:5,usag:[1,3,11,14,29],usc:11,use:[0,1,2,3,4,5,8,10,11,12,13,20,21,22,23,24,25,28,30],usecas:9,used:[0,1,2,4,5,6,11,12,13,18,20,23,24,25,26,27],useful:[0,4,10,11,17,20,21,23],user:[0,1,4,5,6,8,10,11],usernam:[22,26],userspac:[0,7,25],uses:[0,1,4,10,11,21,22,24],using:[0,1,4,5,6,10,11,12,13,18,20,22,25,26,30],usr:8,usual:[4,5,8,20,23],usus:11,utf:23,util:[0,1,5,7,10,11,13],uuid4:1,uuid:10,v0p0:[7,24],v0p1:[7,24],v100:11,v1p0:11,v1p1:11,v200c:11,v2fudfrvvhj5txlbzxjvr3jvc3nlqmljagu:29,v500:11,valid:[0,11,20],valu:[4,5,6,11,12,17,18,20,23,25,29,30],valuabl:5,variabl:[11,30],variant:[2,10],variou:30,vector:11,vepa:11,verdict:0,veri:[5,6,11,23,30],verifi:20,version:[1,2,5,12,23,26,27,28],veth0:[10,11],veth1:10,veth2:10,veth:[1,7,11,24],vfs_dquot:[1,7],via:[0,1,2,5,10,11,14,15,19,21,24,26,28],vid:11,view:[1,9,17,18,20,21],vim:3,virbr0:[2,10,28],virtual:[1,5,10,11,15],visibl:[11,13],vlan1108:20,vlan:[1,11,20,21],vlan_filt:[1,11],vlan_flag:1,vlan_id:[11,20],vlan_info:11,vlan_protocol:11,vni:11,voluntari:8,vpn:10,vrf:[1,10,11,21],vrf_tabl:11,vti6:21,vti:21,vx101:11,vx500:11,vxlan:[1,11,21],vxlan_data:11,vxlan_group:11,vxlan_id:11,vxlan_link:11,vxlan_ttl:11,wai:[0,4,5,6,10,11,17,22,23,24,25,28],wait:[1,10,15,19,24,28],wait_interfac:1,wait_ip:1,wall:8,want:[2,4,5,10,11,23,25,30],warn:[5,12,25,30],wast:11,watchdog:1,wds:5,weakref:17,web:13,weight:10,weird:[10,11],welcom:5,well:[4,10,20,23,24],were:[1,11,17,23],wg1:29,wgdevice_a_private_kei:29,what:[2,5,10,11],when:[0,1,2,5,6,9,10,11,12,20,22,23,24,28,30],whenev:0,where:[0,10,11,13,14,20,23,24,26],wherev:10,whether:10,which:[5,10,11,21,24],whole:2,why:[2,11,24,26],wide:13,width:[5,6],wifi:5,wildcard:[5,12],window:[1,10],wireguard:[1,7,9,22],wireless:[4,5,7,23],wiset:[1,9],within:[0,6,10,11],without:[5,8,11,13,18,30],witout:10,wlan0:28,wlevel:13,wlp3s0:23,work:[0,1,2,3,4,5,6,8,10,12,15,19,23,24,26,30],workaround:[1,2,24,25,28],worker1:[18,22],worker2:18,worker:13,world:28,wors:26,worst:24,would:[11,22,24],wrap:0,write:[6,10,20],written:28,wrk1_eth0:18,wrk2_eth0:18,wrong:24,wrote:4,x00:[2,4,5],x01:[2,4],x02:[2,4],x03:[2,4],x05:[2,4,5],x06:[2,4],x07:4,x10:2,x14:2,x18:2,x1a:2,x1b:5,x1c:4,x28:2,x30:5,x3c:2,x49:2,x55:2,x58:5,x61:2,x78:4,x84:2,x95:4,xe3:4,xe4:4,xid:6,xunit:1,yes:23,yet:[5,6,10,11,24],yield:10,you:[2,3,4,5,6,7,10,11,12,13,23,24,25,26,27,30],your:[2,5,10,11,12,23,26,27],yourself:5,zero:23},titles:["Module architecture","Changelog","Netlink debug howto","Project contribution guide","Modules layout","Modules in progress","DHCP support","Pyroute2","Generators","Pyroute2 netlink library","IPDB module","IPRoute module","IPSet module","Makefile documentation","MPLS howto","NDB module","IP addresses","Debug and logging","Network interfaces","Quick start","NDB objects","Database","RTNL sources","Netlink","NetNS module","Base netlink socket and marshal","RemoteIPRoute","Report a bug","Quickstart","WireGuard module","WiSet 
module"],titleterms:{"case":28,"class":[4,10,11,25],"default":10,"import":[4,28],The:[7,10],access:[20,26],address:[10,16],algo:23,api:[10,11,20],architectur:0,arrai:23,async:25,asynchron:25,attribut:23,backend:21,base:25,basic:23,bridg:10,bsd:11,bug:27,cach:[0,20],caveat:26,chang:[5,18],changelog:1,choke:11,clean:13,clsact:11,commun:5,compat:2,compil:2,configur:8,context:10,contribut:3,creat:[10,18,23,24],daemon:5,data:[2,17],databas:21,debug:[2,4,17],decod:[2,23],defer:4,develop:[9,13],dhcp:6,differ:10,disciplin:11,disclaim:5,dist:13,doc:13,document:13,doesn:25,drr:11,dump:2,encod:23,enobuf:25,event:17,eventlet:28,exampl:7,fetch:17,field:[6,23],filter:11,from:10,gener:[8,9],get:[10,18],guid:3,help:25,hfsc:11,hood:0,how:11,howto:[2,9,14],htb:11,indic:9,indirect:26,inform:9,instal:[7,13],interfac:[10,18,24],internet:0,introduct:26,ipdb:[0,10,14],iprout:[8,10,11,14],iproute2:10,ipset:[5,12],ipv4:6,issu:10,kernel:5,label:14,layout:4,librari:9,limit:10,link:[3,7],list:[11,18,24],local:22,log:17,lwtunnel:14,makefil:13,manag:[10,24],mangl:0,marshal:[4,25],messag:[0,4,5,11,23],metric:10,mode:10,modul:[0,4,5,10,11,12,15,24,29,30],move:24,mpl:14,multipath:10,namespac:[7,22,24,26],ndb:[10,15,20],netlink:[0,2,9,23,25],netlinksocket:4,netn:24,network:[5,7,18,22,26],nic:18,nla_map:23,nlmsg_error:11,note:[2,11],nspopen:24,object:[17,18,20,24],onli:10,option:6,other:[10,13],overview:24,packet:[6,23],pars:23,perform:10,persist:10,pf_rout:0,port:10,prioriti:10,privileg:26,problem:8,process:24,progress:5,project:3,properti:18,protocol:[0,4,6],pyrout:5,pyroute2:[7,9,23],queue:11,quick:[15,19],quickstart:[10,28],read:10,refer:15,releas:28,remot:[22,26],remoteiprout:26,remov:24,report:27,requir:[3,7],resourc:28,respons:11,revers:5,rout:[10,14],rtnetlink:7,rtnl:[17,22],rule:10,run:26,schema:21,send:23,set:[5,24],simplest:7,snapshot:17,socket:[0,6,25,28],softwar:10,solut:8,sourc:22,spawn:24,special:28,sql:21,start:[15,19],state:17,strace:2,structur:20,submit:5,support:[6,7],syntax:10,system:[7,22],tabl:9,target:13,test:[3,13],thread:0,transact:10,type:23,u32:11,under:0,usag:9,usecas:7,util:4,view:10,virtual:18,when:25,wireguard:29,wiset:30,within:24,work:11}})pyroute2-0.5.9/docs/html/usage.html0000644000175000017500000002755513621220107017130 0ustar peetpeet00000000000000 Quickstart — pyroute2 0.5.9 documentation

Quickstart

Hello, world:

$ sudo pip install pyroute2

$ cat example.py
from pyroute2 import IPRoute
with IPRoute() as ipr:
    print([x.get_attr('IFLA_IFNAME') for x in ipr.get_links()])

$ python example.py
['lo', 'p6p1', 'wlan0', 'virbr0', 'virbr0-nic']

Sockets

At runtime, pyroute2 socket objects behave like normal sockets. One can use them with poll/select, call recv() and sendmsg(), and so on:

from pyroute2 import IPRoute

# create RTNL socket
ipr = IPRoute()

# subscribe to broadcast messages
ipr.bind()

# wait for data (do not parse it)
data = ipr.recv(65535)

# parse received data
messages = ipr.marshal.parse(data)

# shortcut: recv() + parse()
#
# (under the hood there is much more going on, but for
# simplicity it's enough to put it this way)
#
messages = ipr.get()

But pyroute2 objects have a lot of methods, written to handle specific tasks:

from pyroute2 import IPRoute

# RTNL interface
with IPRoute() as ipr:

    # get devices list
    ipr.get_links()

    # get addresses
    ipr.get_addr()

Resource release

Do not forget to release resources and close sockets. Also keep in mind that the real file descriptor will be closed only when the Python GC collects the closed objects.
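
For example (a minimal sketch using only the IPRoute calls shown above; choose whichever form fits your code):

from pyroute2 import IPRoute

# explicit management: close() releases the underlying netlink socket
ipr = IPRoute()
try:
    ipr.get_links()
finally:
    ipr.close()

# or let the context manager close the socket for you
with IPRoute() as ipr:
    ipr.get_links()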

Imports

The public API is exported by pyroute2/__init__.py.

This is done to provide a stable API that is not affected by changes in the package layout. There may be significant layout changes between versions, but if a symbol is re-exported via pyroute2/__init__.py, it will be available with the same import signature.

Warning

All other objects are also available for import, but their import signatures may change in future versions.

E.g.:

# Import a pyroute2 class directly. In future versions
# the import signature can change, e.g. NetNS can be moved
# from pyroute2.netns.nslink somewhere else.
#
from pyroute2.netns.nslink import NetNS
ns = NetNS('test')

# Import the same class from the root module. This signature
# will stay the same; any layout change is reflected in
# the root module.
#
from pyroute2 import NetNS
ns = NetNS('test')

Special cases

eventlet

The eventlet environment conflicts in some ways with socket objects, and pyroute2 provides a workaround for that:

# import symbols
#
import eventlet
from pyroute2 import NetNS
from pyroute2.config.eventlet import eventlet_config

# setup the environment
eventlet.monkey_patch()
eventlet_config()

# run the code
ns = NetNS('nsname')
ns.get_routes()
...

This may help, but not always. In general, the pyroute2 library is not eventlet-friendly.

pyroute2-0.5.9/docs/html/wireguard.html0000644000175000017500000002733613621220107020012 0ustar peetpeet00000000000000 WireGuard module — pyroute2 0.5.9 documentation

WireGuard module

Usage:

# Imports
from pyroute2 import IPDB, WireGuard

IFNAME = 'wg1'

# Create a WireGuard interface
with IPDB() as ip:
    wg1 = ip.create(kind='wireguard', ifname=IFNAME)
    wg1.add_ip('10.0.0.1/24')
    wg1.up()
    wg1.commit()

# Create WireGuard object
wg = WireGuard()

# Add a WireGuard configuration + first peer
peer = {'public_key': 'TGFHcm9zc2VCaWNoZV9DJ2VzdExhUGx1c0JlbGxlPDM=',
        'endpoint_addr': '8.8.8.8',
        'endpoint_port': 8888,
        'persistent_keepalive': 15,
        'allowed_ips': ['10.0.0.0/24', '8.8.8.8/32']}
wg.set(IFNAME, private_key='RCdhcHJlc0JpY2hlLEplU2VyYWlzTGFQbHVzQm9ubmU=',
       fwmark=0x1337, listen_port=2525, peer=peer)

# Add second peer with preshared key
peer = {'public_key': 'RCdBcHJlc0JpY2hlLFZpdmVMZXNQcm9iaW90aXF1ZXM=',
        'preshared_key': 'Pz8/V2FudFRvVHJ5TXlBZXJvR3Jvc3NlQmljaGU/Pz8=',
        'endpoint_addr': '8.8.8.8',
        'endpoint_port': 9999,
        'persistent_keepalive': 25,
        'allowed_ips': ['::/0']}
wg.set(IFNAME, peer=peer)

# Delete second peer
peer = {'public_key': 'RCdBcHJlc0JpY2hlLFZpdmVMZXNQcm9iaW90aXF1ZXM=',
        'remove': True}
wg.set(IFNAME, peer=peer)

# Get information of the interface
wg.info(IFNAME)

# Get specific value from the interface
wg.info(IFNAME)[0].WGDEVICE_A_PRIVATE_KEY.value

NOTES:

  • The set() method requires only an interface name; all other arguments are optional (see the minimal example after these notes).

  • The peer structure is described as follows:

    struct peer_s {
        public_key:            # Base64 public key - required
        remove:                # Boolean - optional
        preshared_key:         # Base64 preshared key - optional
        endpoint_addr:         # IPv4 or IPv6 endpoint - optional
        endpoint_port:         # endpoint port - required only if endpoint_addr is set
        persistent_keepalive:  # time in seconds to send keep alive - optional
        allowed_ips:           # list of CIDRs allowed - optional
    }
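
A minimal hedged sketch based on the note above: only the interface name is mandatory for set(), so a single option can be updated on its own (the port number below is just an illustrative value):

# Update only the listen port of an existing WireGuard interface
wg = WireGuard()
wg.set(IFNAME, listen_port=12345)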
    
pyroute2-0.5.9/docs/html/wiset.html0000644000175000017500000005355313621220107017154 0ustar peetpeet00000000000000 WiSet module — pyroute2 0.5.9 documentation

WiSet module

WiSet module

High level ipset support.

While the IPSet module provides a direct netlink socket with low level functions, a WiSet object is built to map ipset objects from the kernel. It helps to add/remove entries, list the content, etc.

For example, adding an entry with a pyroute2.ipset.IPSet object implies setting a number of parameters:

>>> ipset = IPSet()
>>> ipset.add("foo", "1.2.3.4/24", etype="net")
>>> ipset.close()

Whereas when the set is handled through a WiSet object:

>>> wiset = load_ipset("foo")
>>> wiset.add("1.2.3.4/24")

Listing entries is also easier using WiSet, since it parses the netlink messages for you:

>>> wiset.content
{'1.2.3.0/24': IPStats(packets=None, bytes=None, comment=None,
                       timeout=None, skbmark=None, physdev=False)}
pyroute2.wiset.need_ipset_socket(fun)

Decorator to create netlink socket if needed.

In many of our helpers we need to open a netlink socket. This can be expensive for someone calling the functions many times: instead of having only one socket serving several requests, we would open it again and again.

This helper allows our functions to be flexible: the caller can pass an optional socket, or do nothing. In the latter case, this decorator will open a socket for the caller (and close it after the call).

It also helps to mix helpers. One helper can call another one, and the socket will be opened only once: we just have to pass the ipset variable along.

Note that all functions using this helper must use ipset as the variable name for the socket.
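
A hedged sketch of sharing one socket explicitly, using the WiSet(sock=...) argument and the get_ipset_socket() helper documented below (assuming, as with other pyroute2 sockets, that the returned socket provides close()):

>>> sock = get_ipset_socket()
>>> set_a = WiSet(name="foo", sock=sock)
>>> set_b = WiSet(name="bar", sock=sock)
>>> set_a.update_content()   # both sets reuse the same netlink socket
>>> set_b.update_content()
>>> sock.close()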

class pyroute2.wiset.IPStats
class pyroute2.wiset.WiSet(name=None, attr_type='hash:ip', family=<AddressFamily.AF_INET: 2>, sock=None, timeout=None, counters=False, comment=False, hashsize=None, revision=None, skbinfo=False)

Main high level ipset manipulation class.

Every high level ipset operation should be possible with this class; you probably don't need the other helpers of this module, except the tools to load data from the kernel (load_all_ipsets() and load_ipset()).

For example, you can create an ipset and add an entry to it just with:

>>> with WiSet(name="mysuperipset") as myset:
>>>    myset.create()             # add the ipset in the kernel
>>>    myset.add("198.51.100.1")  # add one IP to the set

Netlink sockets are opened and closed by the __enter__ and __exit__ functions, so you don't have to manage them manually if you use the "with" keyword.

If you want to manage it manually (for example for a long-running operation in a daemon), you can do the following:

>>> myset = WiSet(name="mysuperipset")
>>> myset.open_netlink()
>>> # do stuff
>>> myset.close_netlink()

You can also don’t initiate at all any netlink socket, this code will work:

>>> myset = WiSet(name="mysuperipset")
>>> myset.create()
>>> myset.destroy()

But do it very carefully: in that case, a netlink socket will be opened in the background for every operation. No socket will be leaked, but that can consume resources.

You can also instantiate WiSet objects with load_all_ipsets() and load_ipset():

>>> all_sets_dict = load_all_ipsets()
>>> one_set = load_ipset(name="myset")

Have a look at the content variable if you need the list of entries in the set.

Manually open a netlink socket. You can use "with WiSet()" instead.

Clone any opened netlink socket

Create an ipset object based on a parsed netlink message

Parameters:
  • ndmsg – the netlink message to parse
  • content (bool) – should we fill (and parse) the entries info (can be slow on a very large set)
update_dict_content(ndmsg)

Update a dictionary of statistics with values sent in the netlink message

Parameters: ndmsg (netlink message) – the netlink message
create(**kwargs)

Insert this Set in the kernel

Many options are set with python object attributes (like comments, counters, etc). For non-supported types, kwargs can be provided. See the IPSet module documentation for more information.
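
A hedged illustration, using only constructor options documented in the class signature above (counters and comment):

>>> myset = WiSet(name="foo", counters=True, comment=True)
>>> myset.create()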

destroy()

Destroy this ipset in the kernel list.

It does not delete this python object (any content or other stored values are kept in memory). This function will fail if the ipset is still referenced (for example in iptables rules); you have been warned.

add(entry, **kwargs)

Add an entry in this ipset.

If counters are enabled on the set, the value is reset by default when we add the element. Without this reset, the kernel sometimes stores old values, which can cause very strange behavior of the counters.

delete(entry, **kwargs)

Delete/remove an entry in this ipset

test(entry, **kwargs)

Test if an entry is in this ipset

test_list(entries, **kwargs)

Test if a list or a set of entries is in this ipset

Return a set of entries found in the IPSet
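
A hedged usage sketch of the documented test_list() call (the entries are arbitrary examples):

>>> wiset.test_list(["198.51.100.1", "198.51.100.2"])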

update_content()

Update the content dictionary with values from kernel

flush()

Flush entries of the ipset

content

Dictionary of entries in the set.

Keys are IP addresses (as strings), values are IPStats tuples.

insert_list(entries)

Just a small helper to reduce the number of loops in the main code.

replace_entries(new_list)

Replace the content of an ipset with a new list of entries.

This operation is like a flush() followed by adding all entries one by one, but this call is atomic: it creates a temporary ipset and swaps the content.

Parameters: new_list (list or set of basestring) – list of entries to add

keyword arguments dict

pyroute2.wiset.get_ipset_socket(**kwargs)

Get a socket that one can pass to several WiSet objects

pyroute2-0.5.9/examples/0000755000175000017500000000000013621220110015034 5ustar peetpeet00000000000000pyroute2-0.5.9/examples/README.md0000644000175000017500000000134513610051400016320 0ustar peetpeet00000000000000Examples ======== This directory contains code examples, written in the form of scripts. An example script, placed here, *must* meet following requirements: * Able to run as a script w/o any command line parameters. * Able to be imported as a module, running the code at import time. * An example script *must* clean up after itself all it created. * All allocated resources *must* be released. * Significant exceptions *must* be raised further, but *after* cleanup. * There *must* be corresponding test case in `tests/test_examples.py`. The goal is to keep examples tested and working with the current code base; to increase code coverage; to drop the dead code on time. Actually, thus examples become a part of the integration testing. pyroute2-0.5.9/examples/cli/0000755000175000017500000000000013621220110015603 5ustar peetpeet00000000000000pyroute2-0.5.9/examples/cli/test_address0000644000175000017500000000056413610051400020221 0ustar peetpeet00000000000000# # # interfaces create {ifname test01} kind dummy address 00:11:22:33:44:55 commit # ipaddr create {address 192.168.15.67, prefixlen 24} commit create {address 192.168.15.68, prefixlen 24} commit 192.168.15.68/24 remove commit pyroute2-0.5.9/examples/cli/test_comments_bang0000644000175000017500000000041113610051400021377 0ustar peetpeet00000000000000! ! Test comments start with ! ! interfaces ! ... tail comments ! ! ... indented comments ! create {ifname test01, kind dummy, address 00:11:22:33:44:55} commit ! test01 ! show ! remove commit pyroute2-0.5.9/examples/cli/test_comments_hash0000644000175000017500000000041113610051400021413 0ustar peetpeet00000000000000# # Test comments start with # # interfaces # ... tail comments # # ... indented comments # create {ifname test01, kind dummy, address 00:11:22:33:44:55} commit # test01 # show # remove commit pyroute2-0.5.9/examples/cli/test_comments_mixed0000644000175000017500000000042013610051400021576 0ustar peetpeet00000000000000# ! Test mixed comments, both ! and # # interfaces # ... tail comments ! # ... indented comments ! create {ifname test01, kind dummy, address 00:11:22:33:44:55} commit ! test01 # show ! remove commit pyroute2-0.5.9/examples/cli/test_dump_lo0000644000175000017500000000011013610051400020216 0ustar peetpeet00000000000000! ! Just dump the loopback interface. ! interfaces lo show pyroute2-0.5.9/examples/cli/test_ensure0000644000175000017500000000120413610051400020065 0ustar peetpeet00000000000000! ! Create a dummy interface with an address on it. ! Notice that the interface doesn't appear on the ! system before the commit call. ! interfaces add {ifname test01, kind dummy} commit test01 add_ip 172.16.189.5/24 state up commit ! Rollback any transaction that makes the address ! unavailable: ! ## # not implemented yet in the new version of cli # # ensure {reachable 172.16.189.5} # ! Try to remove the interface, the transaction ! must fail: (not, see comment above) ! interfaces test01 remove commit ! Here we check with external tools that the ! interface still exists. ! pyroute2-0.5.9/examples/create_bond.py0000644000175000017500000000162013610051400017654 0ustar peetpeet00000000000000''' Example: python ./examples/create_bond.py Creates bond interface. 
''' from pyroute2 import IPDB from pyroute2.common import uifname ip = IPDB() try: # create unique interface names p0 = uifname() p1 = uifname() ms = uifname() # The same scheme works for bridge interfaces too: you # can create a bridge interface and enslave some ports # to it just as below. ip.create(kind='dummy', ifname=p0).commit() ip.create(kind='dummy', ifname=p1).commit() with ip.create(kind='bond', ifname=ms) as i: # enslave two interfaces i.add_port(ip.interfaces[p0]) i.add_port(ip.interfaces[p1]) # make an example more scary: add IPs i.add_ip('10.251.0.1/24') i.add_ip('10.251.0.2/24') finally: for i in (p0, p1, ms): try: ip.interfaces[i].remove().commit() except: pass ip.release() pyroute2-0.5.9/examples/create_interface.py0000644000175000017500000000135613610051400020700 0ustar peetpeet00000000000000''' Example: python ./examples/create_interface.py Creates dummy interface. ''' from pyroute2 import IPDB from pyroute2.common import uifname ip = IPDB() try: # dummy, bridge and bond interfaces are created in the # same way # # uifname() function is used here only to generate a # unique name of the interface for the regression testing, # you can pick up any name # # details of this restriction are in the documentation # # possible kinds: # * bond # * bridge # * dummy # * vlan -- see /examples/create_vlan.py # dummy = ip.create(kind='dummy', ifname=uifname()) dummy.commit() finally: try: dummy.remove().commit() except: pass ip.release() pyroute2-0.5.9/examples/create_vlan.py0000644000175000017500000000202113610051400017666 0ustar peetpeet00000000000000''' Example: python ./examples/create_vlan.py [master] Master is an interface to add VLAN to, e.g. eth0 or tap0 or whatever else. Without parameters use tap0 as the default. ''' from pyroute2 import IPDB from pyroute2.common import uifname ip = IPDB() try: # unique interface names ms = uifname() vn = uifname() master = ip.create(ifname=ms, kind='dummy').commit() with ip.create(kind='vlan', ifname=vn, link=master, vlan_id=101) as i: # Arguments for ip.create() are executed before the transaction, # in the IPRoute.link('add', ...) call. If create() fails, the # interface became invalid and is not operable, you can safely # drop it. # # Here goes the rest of transaction. If it fails, the interface # continues to work, only failed transaction gets dropped. i.add_ip('10.251.0.1', 24) i.add_ip('10.251.0.2', 24) i.mtu = 1400 finally: try: ip.interfaces[ms].remove().commit() except: pass ip.release() pyroute2-0.5.9/examples/custom_socket_base.py0000644000175000017500000000402613610051400021266 0ustar peetpeet00000000000000### # # The `socket.socket` class is not sublcass-friendly, and sometimes it is # better to use a custom wrapper providing socket API, than the original # socket class. # # But some projects, that use pyroute2, already have their own solutions, # and providing the library-wide wrapper breaks the behaviour of these # projects. # # So we provide a way to define a custom `SocketBase` class, that will be # used as base for the `NetlinkSocket` # import types from socket import socket from functools import partial from pyroute2 import config from pyroute2 import netns from pyroute2 import NetNS ### # # socket.socket isn't very subclass-friendly, so wrap it instead. 
# class SocketWrapper(object): def __init__(self, *args, **kwargs): _socketmethods = ( 'bind', 'close', 'connect', 'connect_ex', 'listen', 'getpeername', 'getsockname', 'getsockopt', 'makefile', 'recv', 'recvfrom', 'recv_into', 'recvfrom_into', 'send', 'sendto', 'sendall', 'setsockopt', 'setblocking', 'settimeout', 'gettimeout', 'shutdown') _sock = kwargs.get('_sock', None) or socket(*args, **kwargs) self._sock = _sock print("Custom socket wrapper init done") def _forward(name, self, *args, **kwargs): print("Forward <%s> method" % name) return getattr(self._sock, name)(*args, **kwargs) for name in _socketmethods: f = partial(_forward, name) f.__name__ = name setattr(SocketWrapper, name, types.MethodType(f, self)) def fileno(self): # for some obscure reason, we can not implement `fileno()` # proxying as above, so just make a hardcore version return self._sock.fileno() def dup(self): return self.__class__(_sock=self._sock.dup()) config.SocketBase = SocketWrapper print(netns.listnetns()) ### # # Being run via the root module, real IPRoute import is postponed, # to inspect the code, refer to `pyroute2/__init__.py` # ns = NetNS('test') print(ns.get_addr()) ns.close() pyroute2-0.5.9/examples/devlink_list.py0000644000175000017500000000027513610051400020103 0ustar peetpeet00000000000000from pyroute2 import DL dl = DL() for q in dl.get_dump(): print('%s\t%s' % (q.get_attr('DEVLINK_ATTR_BUS_NAME'), q.get_attr('DEVLINK_ATTR_DEV_NAME'))) dl.close() pyroute2-0.5.9/examples/devlink_monitor.py0000644000175000017500000000012013610051400020604 0ustar peetpeet00000000000000from pyroute2.devlink import DL dl = DL(groups=~0) print(dl.get()) dl.close() pyroute2-0.5.9/examples/devlink_port_list.py0000644000175000017500000000041313610051400021141 0ustar peetpeet00000000000000from pyroute2 import DL dl = DL() for q in dl.get_port_dump(): print('%s\t%s\t%u' % (q.get_attr('DEVLINK_ATTR_BUS_NAME'), q.get_attr('DEVLINK_ATTR_DEV_NAME'), q.get_attr('DEVLINK_ATTR_PORT_INDEX'))) dl.close() pyroute2-0.5.9/examples/dhclient.py0000644000175000017500000000363313610051400017207 0ustar peetpeet00000000000000import sys import select from pprint import pprint from pyroute2.dhcp import BOOTREQUEST from pyroute2.dhcp import DHCPDISCOVER from pyroute2.dhcp import DHCPOFFER from pyroute2.dhcp import DHCPREQUEST from pyroute2.dhcp import DHCPACK from pyroute2.dhcp.dhcp4msg import dhcp4msg from pyroute2.dhcp.dhcp4socket import DHCP4Socket def req(s, poll, msg, expect): do_req = True xid = None while True: # get transaction id if do_req: xid = s.put(msg)['xid'] # wait for response events = poll.poll(2) for (fd, event) in events: response = s.get() if response['xid'] != xid: do_req = False continue if response['options']['message_type'] != expect: raise Exception("DHCP protocol error") return response do_req = True def action(ifname): s = DHCP4Socket(ifname) poll = select.poll() poll.register(s, select.POLLIN | select.POLLPRI) # DISCOVER discover = dhcp4msg({'op': BOOTREQUEST, 'chaddr': s.l2addr, 'options': {'message_type': DHCPDISCOVER, 'parameter_list': [1, 3, 6, 12, 15, 28]}}) reply = req(s, poll, discover, expect=DHCPOFFER) # REQUEST request = dhcp4msg({'op': BOOTREQUEST, 'chaddr': s.l2addr, 'options': {'message_type': DHCPREQUEST, 'requested_ip': reply['yiaddr'], 'server_id': reply['options']['server_id'], 'parameter_list': [1, 3, 6, 12, 15, 28]}}) reply = req(s, poll, request, expect=DHCPACK) pprint(reply) s.close() return reply if __name__ == '__main__': if len(sys.argv) > 1: ifname = sys.argv[1] else: ifname = 'eth0' 
action(ifname) pyroute2-0.5.9/examples/ethtool-ioctl_get_infos.py0000644000175000017500000000146013621076743022257 0ustar peetpeet00000000000000import sys from pyroute2.ethtool.ioctl import IoctlEthtool from pyroute2.ethtool.ioctl import NotSupportedError if len(sys.argv) != 2: raise Exception("USAGE: {0} IFNAME".format(sys.argv[0])) dev = IoctlEthtool(sys.argv[1]) print("=== Device cmd: ===") try: for name, value in dev.get_cmd().items(): print("\t{}: {}".format(name, value)) except NotSupportedError: print("Not supported by driver.\n") print("") print("=== Device feature: ===") for name, value, not_fixed, _, _ in dev.get_features(): value = "on" if value else "off" if not not_fixed: # I love double negations value += " [fixed]" print("\t{}: {}".format(name, value)) print("\n=== Device coalesce: ===") for name, value in dev.get_coalesce().items(): print("\t{}: {}".format(name, value)) pyroute2-0.5.9/examples/ethtool-netlink_get_infos.py0000644000175000017500000000076013621076743022613 0ustar peetpeet00000000000000import pprint import sys from pyroute2.netlink.generic.ethtool import NlEthtool if len(sys.argv) != 2: raise Exception("USAGE: {0} IFNAME".format(sys.argv[0])) IFNAME = sys.argv[1] eth = NlEthtool() print("kernel ok?:", eth.is_nlethtool_in_kernel()) pprint.pprint(eth.get_linkmode(IFNAME)) print("") pprint.pprint(eth.get_linkinfo(IFNAME)) print("") pprint.pprint(eth.get_stringset(IFNAME)) print("") pprint.pprint(eth.get_linkstate(IFNAME)) print("") pprint.pprint(eth.get_wol(IFNAME)) pyroute2-0.5.9/examples/ethtool_get_infos.py0000644000175000017500000000060713621076743021151 0ustar peetpeet00000000000000import sys from pyroute2.ethtool import Ethtool if len(sys.argv) != 2: raise Exception("USAGE: {0} IFNAME".format(sys.argv[0])) ethtool = Ethtool() ifname = sys.argv[1] print(ethtool.get_link_mode(ifname)) print(ethtool.get_link_info(ifname)) print(ethtool.get_strings_set(ifname)) print(ethtool.get_wol(ifname)) print(ethtool.get_features(ifname)) print(ethtool.get_coalesce(ifname)) pyroute2-0.5.9/examples/generic/0000755000175000017500000000000013621220110016450 5ustar peetpeet00000000000000pyroute2-0.5.9/examples/generic/Makefile0000644000175000017500000000023313610051400020110 0ustar peetpeet00000000000000obj-m += netl.o all: make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules clean: make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean pyroute2-0.5.9/examples/generic/netl.c0000644000175000017500000000675113610051400017571 0ustar peetpeet00000000000000/* * Generic netlink sample -- kernel module * Use `make` to compile and `insmod` to load the module * * Sergiy Lozovsky * Peter V. Saveliev * * Requires kernel 4.10+ */ #include /* Needed by all modules */ #include /* Needed for KERN_INFO */ #include /* Needed for the macros */ #include /* attributes (variables): the index in this enum is used as a reference for the type, * userspace application has to indicate the corresponding type * the policy is used for security considerations */ enum { EXMPL_NLA_UNSPEC, EXMPL_NLA_DATA, EXMPL_NLA_LEN, __EXMPL_NLA_MAX, }; /* ... 
and the same for commands */ enum { EXMPL_CMD_UNSPEC, EXMPL_CMD_MSG, }; /* attribute policy: defines which attribute has which type (e.g int, char * etc) * possible values defined in net/netlink.h */ static struct nla_policy exmpl_genl_policy[__EXMPL_NLA_MAX] = { [EXMPL_NLA_DATA] = { .type = NLA_NUL_STRING }, [EXMPL_NLA_LEN] = { .type = NLA_U32 }, }; #define VERSION_NR 1 static struct genl_family exmpl_gnl_family; static int get_length(struct sk_buff *request, struct genl_info *info) { struct sk_buff *reply; char *buffer; void *msg_head; if (info == NULL) return -EINVAL; if (!info->attrs[EXMPL_NLA_DATA]) return -EINVAL; /* get the data */ buffer = nla_data(info->attrs[EXMPL_NLA_DATA]); /* send a message back*/ /* allocate some memory, since the size is not yet known use NLMSG_GOODSIZE*/ reply = genlmsg_new(NLMSG_GOODSIZE, GFP_KERNEL); if (reply == NULL) return -ENOMEM; /* start the message */ msg_head = genlmsg_put_reply(reply, info, &exmpl_gnl_family, 0, info->genlhdr->cmd); if (msg_head == NULL) { return -ENOMEM; } /* add a EXMPL_LEN attribute -- report the data length */ if (0 != nla_put_u32(reply, EXMPL_NLA_LEN, strlen(buffer))) return -EINVAL; /* finalize the message */ genlmsg_end(reply, msg_head); /* send the message back */ if (0 != genlmsg_reply(reply, info)) return -EINVAL; return 0; } /* commands: mapping between commands and actual functions*/ static const struct genl_ops exmpl_gnl_ops_echo[] = { { .cmd = EXMPL_CMD_MSG, .policy = exmpl_genl_policy, .doit = get_length, }, }; /* family definition */ static struct genl_family exmpl_gnl_family __ro_after_init = { .name = "EXMPL_GENL", //the name of this family, used by userspace application .version = VERSION_NR, //version number .maxattr = __EXMPL_NLA_MAX - 1, .module = THIS_MODULE, .ops = exmpl_gnl_ops_echo, .n_ops = ARRAY_SIZE(exmpl_gnl_ops_echo), }; static int __init exmpl_gnl_init(void) { int rc; rc = genl_register_family(&exmpl_gnl_family); if (rc != 0) { printk(KERN_INFO "rkmod: genl_register_family failed %d\n", rc); return 1; } printk(KERN_INFO "Generic netlink example loaded, protocol version %d\n", VERSION_NR); return 0; } static void __exit exmpl_gnl_exit(void) { int ret; /*unregister the family*/ ret = genl_unregister_family(&exmpl_gnl_family); if(ret !=0){ printk("unregister family %i\n",ret); } } module_init(exmpl_gnl_init); module_exit(exmpl_gnl_exit); MODULE_LICENSE("GPL"); pyroute2-0.5.9/examples/generic/netl.py0000755000175000017500000000233013610051400017767 0ustar peetpeet00000000000000#!/usr/bin/env python import traceback from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink import genlmsg from pyroute2.netlink.generic import GenericNetlinkSocket RLINK_CMD_UNSPEC = 0 RLINK_CMD_REQ = 1 class rcmd(genlmsg): ''' Message class that will be used to communicate with the kernel module ''' nla_map = (('RLINK_ATTR_UNSPEC', 'none'), ('RLINK_ATTR_DATA', 'asciiz'), ('RLINK_ATTR_LEN', 'uint32')) class Rlink(GenericNetlinkSocket): def send_data(self, data): msg = rcmd() msg['cmd'] = RLINK_CMD_REQ msg['version'] = 1 msg['attrs'] = [('RLINK_ATTR_DATA', data)] ret = self.nlm_request(msg, self.prid, msg_flags=NLM_F_REQUEST)[0] return ret.get_attr('RLINK_ATTR_LEN') if __name__ == '__main__': try: # create protocol instance rlink = Rlink() rlink.bind('EXMPL_GENL', rcmd) # request a method print(rlink.send_data('x' * 65000)) except: # if there was an error, log it to the console traceback.print_exc() finally: # finally -- release the instance rlink.close() 
pyroute2-0.5.9/examples/ip_monitor.py0000644000175000017500000000027313610051400017571 0ustar peetpeet00000000000000''' Simplest example to monitor Netlink events with a Python script. ''' from pyroute2 import IPRSocket from pprint import pprint ip = IPRSocket() ip.bind() pprint(ip.get()) ip.close() pyroute2-0.5.9/examples/ipdb_autobr.py0000644000175000017500000000304713610051400017706 0ustar peetpeet00000000000000''' A sample of usage of IPDB generic callbacks ''' from pyroute2 import IPDB from pyroute2.common import uifname # unique interface names -- for the testing p0 = uifname() br0 = uifname() ### # # The callback receives three arguments: # 1. ipdb reference # 2. msg arrived # 3. action (actually, that's msg['event'] field) # # By default, callbacks are registered as 'post', # it means that they're executed after any other # action on a message arrival. # # More details read in pyroute2.ipdb # def cb(ipdb, msg, action): global p0 global br0 if action == 'RTM_NEWLINK' and \ msg.get_attr('IFLA_IFNAME', '') == p0: # get corresponding interface -- in the case of # post-callbacks it is created already interface = ipdb.interfaces[msg['index']] # add it as a port to the bridge ipdb.interfaces[br0].add_port(interface) try: ipdb.interfaces[br0].commit() except Exception: pass # create IPDB instance with IPDB() as ip: # create watchdogs wd0 = ip.watchdog(ifname=br0) wd1 = ip.watchdog(ifname=p0) # create bridge ip.create(kind='bridge', ifname=br0).commit() # wait the bridge to be created wd0.wait() # register callback cuid = ip.register_callback(cb) # create ports ip.create(kind='dummy', ifname=p0).commit() # sleep for interfaces wd1.wait() ip.unregister_callback(cuid) # cleanup for i in (p0, br0): try: ip.interfaces[i].remove().commit() except: pass pyroute2-0.5.9/examples/ipdb_chain.py0000644000175000017500000000141413610051400017470 0ustar peetpeet00000000000000# An example how to use command chaining in IPDB from pyroute2 import IPDB from pyroute2.common import uifname # unique names -- for the testing bo0 = uifname() p0 = uifname() p1 = uifname() with IPDB() as ip: # create bonding ip.create(ifname=bo0, kind='bond', bond_mode=2).commit() # create slave ports ip.create(ifname=p0, kind='dummy').commit() ip.create(ifname=p1, kind='dummy').commit() # set up bonding ip.interfaces[bo0].add_port(ip.interfaces[p0]).\ add_port(ip.interfaces[p1]).\ add_ip('172.16.0.1/24').\ add_ip('172.16.0.2/24').\ option('mtu', 1400).\ up().\ commit() for i in (p0, p1, bo0): try: ip.interfaces[i].remove().commit() except: pass pyroute2-0.5.9/examples/ipdb_routes.py0000644000175000017500000000103713610051400017730 0ustar peetpeet00000000000000from pyroute2 import IPDB from pyroute2.common import uifname p0 = uifname() ip = IPDB() # create dummy interface to host routes on ip.create(kind='dummy', ifname=p0).\ add_ip('172.16.1.1/24').\ up().\ commit() # create a route with ip.routes.add({'dst': '172.16.0.0/24', 'gateway': '172.16.1.2'}) as r: pass # modify it with ip.routes['172.16.0.0/24'] as r: r.gateway = '172.16.1.3' # cleanup with ip.routes['172.16.0.0/24'] as r: r.remove() ip.interfaces[p0].remove().commit() ip.release() pyroute2-0.5.9/examples/ipq.py0000644000175000017500000000060713610051400016204 0ustar peetpeet00000000000000from pyroute2.common import hexdump from pyroute2 import IPQSocket from pyroute2.netlink.ipq import NF_ACCEPT from dpkt.ip import IP ip = IPQSocket() ip.bind() try: while True: msg = ip.get()[0] print("\n") print(hexdump(msg.raw)) print(repr(IP(msg['payload']))) 
ip.verdict(msg['packet_id'], NF_ACCEPT) except: pass finally: ip.release() pyroute2-0.5.9/examples/ipset.py0000644000175000017500000000305513610051400016537 0ustar peetpeet00000000000000import socket from pyroute2.ipset import IPSet, PortRange, PortEntry ipset = IPSet() ipset.create("foo", stype="hash:ip") ipset.add("foo", "198.51.100.1", etype="ip") ipset.add("foo", "198.51.100.2", etype="ip") print(ipset.test("foo", "198.51.100.1")) # True print(ipset.test("foo", "198.51.100.10")) # False msg_list = ipset.list("foo") for msg in msg_list: for attr_data in (msg .get_attr('IPSET_ATTR_ADT') .get_attrs('IPSET_ATTR_DATA')): for attr_ip_from in attr_data.get_attrs('IPSET_ATTR_IP_FROM'): for ipv4 in attr_ip_from.get_attrs('IPSET_ATTR_IPADDR_IPV4'): print("- " + ipv4) ipset.destroy("foo") ipset.close() ipset = IPSet() ipset.create("bar", stype="bitmap:port", bitmap_ports_range=(1000, 2000)) ipset.add("bar", 1001, etype="port") ipset.add("bar", PortRange(1500, 2000), etype="port") print(ipset.test("bar", 1600, etype="port")) # True print(ipset.test("bar", 2600, etype="port")) # False ipset.destroy("bar") ipset.close() ipset = IPSet() protocol_tcp = socket.getprotobyname("tcp") ipset.create("foobar", stype="hash:net,port") port_entry_http = PortEntry(80, protocol=protocol_tcp) ipset.add("foobar", ("198.51.100.0/24", port_entry_http), etype="net,port") print(ipset.test("foobar", ("198.51.100.1", port_entry_http), etype="ip,port")) # True port_entry_https = PortEntry(443, protocol=protocol_tcp) print(ipset.test("foobar", ("198.51.100.1", port_entry_https), etype="ip,port")) # False ipset.destroy("foobar") ipset.close() pyroute2-0.5.9/examples/nl80211_interface_type.py0000644000175000017500000000072013610051400021475 0ustar peetpeet00000000000000import errno from pyroute2 import IW from pyroute2 import IPRoute from pyroute2.netlink.exceptions import NetlinkError # interface name to check ifname = 'lo' ip = IPRoute() iw = IW() index = ip.link_lookup(ifname=ifname)[0] try: iw.get_interface_by_ifindex(index) print("wireless interface") except NetlinkError as e: if e.code == errno.ENODEV: # 19 'No such device' print("not a wireless interface") finally: iw.close() ip.close() pyroute2-0.5.9/examples/nl80211_interfaces.py0000644000175000017500000000062113610051400020617 0ustar peetpeet00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- # from pyroute2.iwutil import IW iw = IW() for q in iw.get_interfaces_dump(): phyname = 'phy%i' % int(q.get_attr('NL80211_ATTR_WIPHY')) print('%s\t%s\t%s\t%s' % (q.get_attr('NL80211_ATTR_IFINDEX'), phyname, q.get_attr('NL80211_ATTR_IFNAME'), q.get_attr('NL80211_ATTR_MAC'))) iw.close() pyroute2-0.5.9/examples/nl80211_monitor.py0000644000175000017500000000016113610051400020162 0ustar peetpeet00000000000000from pyroute2 import IW # register IW to get all the messages iw = IW(groups=0xfff) print(iw.get()) iw.close() pyroute2-0.5.9/examples/nl80211_scan_dump.py0000644000175000017500000000464613610051400020460 0ustar peetpeet00000000000000#!/usr/bin/env python3 import sys import logging from pyroute2 import IPRoute from pyroute2.iwutil import IW from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink import NLM_F_DUMP from pyroute2.netlink.nl80211 import nl80211cmd from pyroute2.netlink.nl80211 import NL80211_NAMES logging.basicConfig(level=logging.DEBUG) logger = logging.getLogger("scandump") logger.setLevel(level=logging.INFO) # interface name to dump scan results ifname = sys.argv[1] iw = IW() ip = IPRoute() ifindex = ip.link_lookup(ifname=ifname)[0] ip.close() # 
CMD_GET_SCAN doesn't require root privileges. # Can use 'nmcli device wifi' or 'nmcli d w' to trigger a scan which will fill # the scan results cache for ~30 seconds. # See also 'iw dev $yourdev scan dump' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_GET_SCAN'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex]] scan_dump = iw.nlm_request(msg, msg_type=iw.prid, msg_flags=NLM_F_REQUEST | NLM_F_DUMP) for network in scan_dump: for attr in network['attrs']: if attr[0] == 'NL80211_ATTR_BSS': # handy debugging; see everything we captured for bss_attr in attr[1]['attrs']: logger.debug("bss attr=%r", bss_attr) bss = dict(attr[1]['attrs']) # NOTE: the contents of beacon and probe response frames may or # may not contain all these fields. Very likely there could be a # keyerror in the following code. Needs a bit more bulletproofing. # print like 'iw dev $dev scan dump" print("BSS {}" .format(bss['NL80211_BSS_BSSID'])) print("\tTSF: {0[VALUE]} ({0[TIME]})" .format(bss['NL80211_BSS_TSF'])) print("\tfreq: {}" .format(bss['NL80211_BSS_FREQUENCY'])) print("\tcapability: {}" .format(bss['NL80211_BSS_CAPABILITY']['CAPABILITIES'])) print("\tsignal: {0[VALUE]} {0[UNITS]}" .format(bss['NL80211_BSS_SIGNAL_MBM']['SIGNAL_STRENGTH'])) print("\tlast seen: {} ms ago" .format(bss['NL80211_BSS_SEEN_MS_AGO'])) ies = bss['NL80211_BSS_INFORMATION_ELEMENTS'] # Be VERY careful with the SSID! Can contain hostile input. print("\tSSID: {}".format(ies['SSID'].decode("utf8"))) # TODO more IE decodes iw.close() pyroute2-0.5.9/examples/nla_operators.py0000644000175000017500000000137413610051400020265 0ustar peetpeet00000000000000#!/usr/bin/python from pprint import pprint from pyroute2 import IPRoute from pyroute2 import IPDB from pyroute2.common import uifname # high-level interface ipdb = IPDB() interface = ipdb.create(ifname=uifname(), kind='dummy').\ commit().\ add_ip('172.16.0.1/24').\ add_ip('172.16.0.2/24').\ commit() # low-level interface just to get raw messages ip = IPRoute() a = [x for x in ip.get_addr() if x['index'] == interface['index']] print('\n8<--------------------- left operand') pprint(a[0]) print('\n8<--------------------- right operand') pprint(a[1]) print('\n8<--------------------- complement') pprint(a[0] - a[1]) print('\n8<--------------------- intersection') pprint(a[0] & a[1]) interface.remove().commit() ip.close() ipdb.release() pyroute2-0.5.9/examples/pmonitor.py0000644000175000017500000000063613610051400017264 0ustar peetpeet00000000000000''' Monitor process exit ''' from pyroute2 import TaskStats from pyroute2.common import hexdump pmask = '' with open('/proc/cpuinfo', 'r') as f: for line in f.readlines(): if line.startswith('processor'): pmask += ',' + line.split()[2] pmask = pmask[1:] ts = TaskStats() ts.register_mask(pmask) msg = ts.get()[0] print(hexdump(msg.raw)) print(msg) ts.deregister_mask(pmask) ts.release() pyroute2-0.5.9/examples/taskstats.py0000644000175000017500000000036613610051400017436 0ustar peetpeet00000000000000''' Simple taskstats sample. ''' import os from pyroute2 import TaskStats pid = os.getpid() ts = TaskStats() # bind is required in the case of generic netlink ts.bind() ret = ts.get_pid_stat(int(pid))[0] # parsed structure print(ret) ts.close() pyroute2-0.5.9/pyroute2/0000755000175000017500000000000013621220110015007 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/__init__.py0000644000175000017500000001207313610051400017125 0ustar peetpeet00000000000000## # # This module contains all the public symbols from the library. 
# try: import pkg_resources except ImportError: pass import sys import struct ## # # Windows platform specific: socket module monkey patching # # To use the library on Windows, run:: # pip install win-inet-pton # if sys.platform.startswith('win'): # noqa: E402 import win_inet_pton # noqa: F401 ## # # Logging setup # # See the history: # * https://github.com/svinota/pyroute2/issues/246 # * https://github.com/svinota/pyroute2/issues/255 # * https://github.com/svinota/pyroute2/issues/270 # * https://github.com/svinota/pyroute2/issues/573 # * https://github.com/svinota/pyroute2/issues/601 # from pyroute2.config import log ## # # Platform independent modules # from pyroute2.ipdb.exceptions import (DeprecationException, CommitException, CreateException, PartialCommitException) from pyroute2.netlink.exceptions import (NetlinkError, NetlinkDecodeError) from pyroute2.netlink.rtnl.req import (IPRouteRequest, IPLinkRequest) from pyroute2.iproute import (IPRoute, IPBatch, RawIPRoute, RemoteIPRoute) from pyroute2.netlink.rtnl.iprsocket import IPRSocket from pyroute2.ipdb.main import IPDB ## # # Linux specific code # if sys.platform.startswith('linux'): from pyroute2.ipset import IPSet from pyroute2.iwutil import IW from pyroute2.devlink import DL from pyroute2.conntrack import Conntrack from pyroute2.nftables.main import NFTables from pyroute2.netns.nslink import NetNS from pyroute2.netns.process.proxy import NSPopen from pyroute2.inotify.inotify_fd import Inotify from pyroute2.netlink.taskstats import TaskStats from pyroute2.netlink.nl80211 import NL80211 from pyroute2.netlink.devlink import DevlinkSocket from pyroute2.netlink.event.acpi_event import AcpiEventSocket from pyroute2.netlink.event.dquot import DQuotSocket from pyroute2.netlink.ipq import IPQSocket from pyroute2.netlink.diag import DiagSocket from pyroute2.netlink.generic import GenericNetlinkSocket from pyroute2.netlink.generic.wireguard import WireGuard from pyroute2.netlink.nfnetlink.nftsocket import NFTSocket from pyroute2.netlink.nfnetlink.nfctsocket import NFCTSocket from pyroute2.netns.manager import NetNSManager # # The NDB module has extra requirements that may not be present. # It is not the core functionality, so simply skip the import if # requirements are not met. # try: from pyroute2.ndb.main import NDB HAS_NDB = True except ImportError: HAS_NDB = False # # The Console class is a bit special, it tries to engage # modules from stdlib, that are sometimes stripped. Some # of them are optional, but some aren't. So catch possible # errors here. 
try: from pyroute2.cli.console import Console from pyroute2.cli.server import Server HAS_CONSOLE = True except ImportError: HAS_CONSOLE = False try: # probe, if the bytearray can be used in struct.unpack_from() struct.unpack_from('I', bytearray((1, 0, 0, 0)), 0) except: if sys.version_info[0] < 3: # monkeypatch for old Python versions log.warning('patching struct.unpack_from()') def wrapped(fmt, buf, offset=0): return struct._u_f_orig(fmt, str(buf), offset) struct._u_f_orig = struct.unpack_from struct.unpack_from = wrapped else: raise # re-export exceptions exceptions = [NetlinkError, NetlinkDecodeError, DeprecationException, CommitException, CreateException, PartialCommitException] # re-export classes classes = [IPRouteRequest, IPLinkRequest, IPRoute, IPBatch, RawIPRoute, RemoteIPRoute, IPRSocket, IPDB] if sys.platform.startswith('linux'): classes.extend([IPSet, IW, DL, Conntrack, NFTables, NetNS, NSPopen, TaskStats, NL80211, DevlinkSocket, AcpiEventSocket, DQuotSocket, IPQSocket, DiagSocket, GenericNetlinkSocket, WireGuard, NFTSocket, NFCTSocket, Inotify, NetNSManager]) if HAS_CONSOLE: classes.append(Console) classes.append(Server) else: log.warning("Couldn't import the Console class") if HAS_NDB: classes.append(NDB) else: log.warning("Couldn't import NDB") __all__ = [] __all__.extend([x.__name__ for x in exceptions]) __all__.extend([x.__name__ for x in classes]) try: __version__ = pkg_resources.get_distribution('pyroute2').version except: __version__ = 'unknown' pyroute2-0.5.9/pyroute2/arp.py0000644000175000017500000000501413610051400016145 0ustar peetpeet00000000000000from pyroute2.common import map_namespace # ARP protocol HARDWARE identifiers. ARPHRD_NETROM = 0 # from KA9Q: NET/ROM pseudo ARPHRD_ETHER = 1 # Ethernet 10Mbps ARPHRD_EETHER = 2 # Experimental Ethernet ARPHRD_AX25 = 3 # AX.25 Level 2 ARPHRD_PRONET = 4 # PROnet token ring ARPHRD_CHAOS = 5 # Chaosnet ARPHRD_IEEE802 = 6 # IEEE 802.2 Ethernet/TR/TB ARPHRD_ARCNET = 7 # ARCnet ARPHRD_APPLETLK = 8 # APPLEtalk ARPHRD_DLCI = 15 # Frame Relay DLCI ARPHRD_ATM = 19 # ATM ARPHRD_METRICOM = 23 # Metricom STRIP (new IANA id) ARPHRD_IEEE1394 = 24 # IEEE 1394 IPv4 - RFC 2734 ARPHRD_EUI64 = 27 # EUI-64 ARPHRD_INFINIBAND = 32 # InfiniBand # Dummy types for non ARP hardware ARPHRD_SLIP = 256 ARPHRD_CSLIP = 257 ARPHRD_SLIP6 = 258 ARPHRD_CSLIP6 = 259 ARPHRD_RSRVD = 260 # Notional KISS type ARPHRD_ADAPT = 264 ARPHRD_ROSE = 270 ARPHRD_X25 = 271 # CCITT X.25 ARPHRD_HWX25 = 272 # Boards with X.25 in firmware ARPHRD_PPP = 512 ARPHRD_CISCO = 513 # Cisco HDLC ARPHRD_HDLC = ARPHRD_CISCO ARPHRD_LAPB = 516 # LAPB ARPHRD_DDCMP = 517 # Digital's DDCMP protocol ARPHRD_RAWHDLC = 518 # Raw HDLC ARPHRD_TUNNEL = 768 # IPIP tunnel ARPHRD_TUNNEL6 = 769 # IP6IP6 tunnel ARPHRD_FRAD = 770 # Frame Relay Access Device ARPHRD_SKIP = 771 # SKIP vif ARPHRD_LOOPBACK = 772 # Loopback device ARPHRD_LOCALTLK = 773 # Localtalk device ARPHRD_FDDI = 774 # Fiber Distributed Data Interface ARPHRD_BIF = 775 # AP1000 BIF ARPHRD_SIT = 776 # sit0 device - IPv6-in-IPv4 ARPHRD_IPDDP = 777 # IP over DDP tunneller ARPHRD_IPGRE = 778 # GRE over IP ARPHRD_PIMREG = 779 # PIMSM register interface ARPHRD_HIPPI = 780 # High Performance Parallel Interface ARPHRD_ASH = 781 # Nexus 64Mbps Ash ARPHRD_ECONET = 782 # Acorn Econet ARPHRD_IRDA = 783 # Linux-IrDA # ARP works differently on different FC media .. 
so ARPHRD_FCPP = 784 # Point to point fibrechannel ARPHRD_FCAL = 785 # Fibrechannel arbitrated loop ARPHRD_FCPL = 786 # Fibrechannel public loop ARPHRD_FCFABRIC = 787 # Fibrechannel fabric # 787->799 reserved for fibrechannel media types ARPHRD_IEEE802_TR = 800 # Magic type ident for TR ARPHRD_IEEE80211 = 801 # IEEE 802.11 ARPHRD_IEEE80211_PRISM = 802 # IEEE 802.11 + Prism2 header ARPHRD_IEEE80211_RADIOTAP = 803 # IEEE 802.11 + radiotap header ARPHRD_MPLS_TUNNEL = 899 # MPLS Tunnel Interface ARPHRD_VOID = 0xFFFF # Void type, nothing is known ARPHRD_NONE = 0xFFFE # zero header length (ARPHRD_NAMES, ARPHRD_VALUES) = map_namespace("ARPHRD_", globals()) pyroute2-0.5.9/pyroute2/bsd/0000755000175000017500000000000013621220110015557 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/bsd/__init__.py0000644000175000017500000000000013610051400017660 0ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/bsd/pf_route/0000755000175000017500000000000013621220110017402 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/bsd/pf_route/__init__.py0000644000175000017500000001146013610051400021517 0ustar peetpeet00000000000000import socket import struct from pyroute2 import config from pyroute2.common import hexdump from pyroute2.netlink import nlmsg_base if config.uname[0] == 'OpenBSD': from pyroute2.bsd.pf_route.openbsd import (bsdmsg, if_msg, rt_msg_base, ifa_msg_base, ifma_msg_base, if_announcemsg, IFF_NAMES, IFF_VALUES) else: from pyroute2.bsd.pf_route.freebsd import (bsdmsg, if_msg, rt_msg_base, ifa_msg_base, ifma_msg_base, if_announcemsg, IFF_NAMES, IFF_VALUES) RTAX_MAX = 8 class rt_slot(nlmsg_base): __slots__ = () header = (('length', 'B'), ('family', 'B')) class rt_msg(rt_msg_base): __slots__ = () force_mask = False class hex(rt_slot): def decode(self): rt_slot.decode(self) length = self['header']['length'] self['value'] = hexdump(self.data[self.offset + 2: self.offset + length]) class rt_slot_ifp(rt_slot): def decode(self): rt_slot.decode(self) # # Structure # 0 1 2 3 4 5 6 7 # |-------+-------+-------+-------|-------+-------+-------+-------| # | len | fam | ifindex | ? | nlen | padding? | # |-------+-------+-------+-------|-------+-------+-------+-------| # | ... # | ... # # len -- sockaddr len # fam -- sockaddr family # ifindex -- interface index # ? -- no idea, probably again some sockaddr related info? # nlen -- device name length # padding? -- probably structure alignment # (self['index'], _, name_length) = struct.unpack('HBB', self.data[self.offset + 2: self.offset + 6]) self['ifname'] = self.data[self.offset + 8: self.offset + 8 + name_length] class rt_slot_addr(rt_slot): def decode(self): alen = {socket.AF_INET: 4, socket.AF_INET6: 16} rt_slot.decode(self) # # Yksinkertainen: only the sockaddr family (one byte) and the # network address. # # But for netmask it's completely screwed up. E.g.: # # ifconfig disc2 10.0.0.1 255.255.255.0 up # --> # ... NETMASK: 38:12:00:00:ff:00:00:00:00:00:00:... # # Why?! 
# family = self['header']['family'] length = self['header']['length'] if family in (socket.AF_INET, socket.AF_INET6): addrlen = alen.get(family, 0) data = self.data[self.offset + 4: self.offset + 4 + addrlen] self['address'] = socket.inet_ntop(family, data) else: # FreeBSD and OpenBSD use different approaches # FreeBSD: family == 0x12 # OpenBSD: family == 0x0 if self.parent.force_mask and family in (0x0, 0x12): data = self.data[self.offset + 4: self.offset + 8] data = data + b'\0' * (4 - len(data)) self['address'] = socket.inet_ntop(socket.AF_INET, data) else: self['raw'] = self.data[self.offset:self.offset + length] def decode(self): bsdmsg.decode(self) offset = self.sockaddr_offset for i in range(RTAX_MAX): if self['rtm_addrs'] & (1 << i): handler = getattr(self, self.ifa_slots[i][1]) slot = handler(self.data[offset:], parent=self) slot.decode() offset += slot['header']['length'] self[self.ifa_slots[i][0]] = slot class ifa_msg(ifa_msg_base, rt_msg): force_mask = True class ifma_msg(ifma_msg_base, rt_msg): pass __all__ = (bsdmsg, if_msg, rt_msg, ifa_msg, ifma_msg, if_announcemsg, IFF_NAMES, IFF_VALUES) pyroute2-0.5.9/pyroute2/bsd/pf_route/freebsd.py0000644000175000017500000000706013610051400021373 0ustar peetpeet00000000000000from pyroute2.common import map_namespace from pyroute2.netlink import nlmsg_base IFNAMSIZ = 16 IFF_UP = 0x1 IFF_BROADCAST = 0x2 IFF_DEBUG = 0x4 IFF_LOOPBACK = 0x8 IFF_POINTOPOINT = 0x10 IFF_DRV_RUNNING = 0x40 IFF_NOARP = 0x80 IFF_PROMISC = 0x100 IFF_ALLMULTI = 0x200 IFF_DRV_OACTIVE = 0x400 IFF_SIMPLEX = 0x800 IFF_LINK0 = 0x1000 IFF_LINK1 = 0x2000 IFF_LINK2 = 0x4000 IFF_MULTICAST = 0x8000 IFF_CANTCONFIG = 0x10000 IFF_PPROMISC = 0x20000 IFF_MONITOR = 0x40000 IFF_STATICARP = 0x80000 IFF_DYING = 0x200000 IFF_RENAMING = 0x400000 IFF_NOGROUP = 0x800000 (IFF_NAMES, IFF_VALUES) = map_namespace('IFF', globals()) class bsdmsg(nlmsg_base): __slots__ = () header = (('length', 'H'), ('version', 'B'), ('type', 'B')) class if_msg(bsdmsg): __slots__ = () fields = (('ifm_addrs', 'i'), ('ifm_flags', 'i'), ('ifm_index', 'H'), ('ifi_type', 'B'), ('ifi_physical', 'B'), ('ifi_addrlen', 'B'), ('ifi_hdrlen', 'B'), ('ifi_link_state', 'B'), ('ifi_vhid', 'B'), ('ifi_datalen', 'H'), ('ifi_mtu', 'I'), ('ifi_metric', 'I'), ('ifi_baudrate', 'Q'), ('ifi_ipackets', 'Q'), ('ifi_ierrors', 'Q'), ('ifi_opackets', 'Q'), ('ifi_oerrors', 'Q'), ('ifi_collisions', 'Q'), ('ifi_ibytes', 'Q'), ('ifi_obytes', 'Q'), ('ifi_imcasts', 'Q'), ('ifi_omcasts', 'Q'), ('ifi_iqdrops', 'Q'), ('ifi_oqdrops', 'Q'), ('ifi_noproto', 'Q'), ('ifi_hwassist', 'Q'), ('ifu_tt', 'Q'), ('ifu_tv1', 'Q'), ('ifu_tv2', 'Q')) class rt_msg_base(bsdmsg): __slots__ = () fields = (('rtm_index', 'I'), ('rtm_flags', 'i'), ('rtm_addrs', 'i'), ('rtm_pid', 'I'), ('rtm_seq', 'i'), ('rtm_errno', 'i'), ('rtm_fmask', 'i'), ('rtm_inits', 'I'), ('rmx_locks', 'I'), ('rmx_mtu', 'I'), ('rmx_hopcount', 'I'), ('rmx_expire', 'I'), ('rmx_recvpipe', 'I'), ('rmx_sendpipe', 'I'), ('rmx_ssthresh', 'I'), ('rmx_rtt', 'I'), ('rmx_rttvar', 'I'), ('rmx_pksent', 'I'), ('rmx_weight', 'I'), ('rmx_filler', '3I')) sockaddr_offset = 92 ifa_slots = {0: ('DST', 'rt_slot_addr'), 1: ('GATEWAY', 'rt_slot_addr'), 2: ('NETMASK', 'rt_slot_addr'), 3: ('GENMASK', 'hex'), 4: ('IFP', 'rt_slot_ifp'), 5: ('IFA', 'rt_slot_addr'), 6: ('AUTHOR', 'hex'), 7: ('BRD', 'rt_slot_addr')} class ifa_msg_base(bsdmsg): __slots__ = () fields = (('rtm_addrs', 'i'), ('ifam_flags', 'i'), ('ifam_index', 'H'), ('ifam_metric', 'i')) sockaddr_offset = 20 class ifma_msg_base(bsdmsg): __slots__ = () fields = 
(('rtm_addrs', 'i'), ('ifmam_flags', 'i'), ('ifmam_index', 'H')) sockaddr_offset = 16 class if_announcemsg(bsdmsg): __slots__ = () fields = (('ifan_index', 'H'), ('ifan_name', '%is' % IFNAMSIZ), ('ifan_what', 'H')) def decode(self): bsdmsg.decode(self) self['ifan_name'] = self['ifan_name'].strip(b'\0').decode('ascii') pyroute2-0.5.9/pyroute2/bsd/pf_route/openbsd.py0000644000175000017500000000772113610051400021417 0ustar peetpeet00000000000000from pyroute2.common import map_namespace from pyroute2.netlink import nlmsg_base IFNAMSIZ = 16 IFF_UP = 0x1 IFF_BROADCAST = 0x2 IFF_DEBUG = 0x4 IFF_LOOPBACK = 0x8 IFF_POINTOPOINT = 0x10 IFF_STATICARP = 0x20 IFF_RUNNING = 0x40 IFF_NOARP = 0x80 IFF_PROMISC = 0x100 IFF_ALLMULTI = 0x200 IFF_OACTIVE = 0x400 IFF_SIMPLEX = 0x800 IFF_LINK0 = 0x1000 IFF_LINK1 = 0x2000 IFF_LINK2 = 0x4000 IFF_MULTICAST = 0x8000 (IFF_NAMES, IFF_VALUES) = map_namespace('IFF', globals()) class bsdmsg(nlmsg_base): __slots__ = () header = (('length', 'H'), ('version', 'B'), ('type', 'B'), ('hdrlen', 'H')) class if_msg(bsdmsg): __slots__ = () fields = (('ifm_index', 'H'), ('ifm_tableid', 'H'), ('ifm_pad1', 'B'), ('ifm_pad2', 'B'), ('ifm_addrs', 'i'), ('ifm_flags', 'i'), ('ifm_xflags', 'i'), ('ifi_type', 'B'), ('ifi_addrlen', 'B'), ('ifi_hdrlen', 'B'), ('ifi_link_state', 'B'), ('ifi_mtu', 'I'), ('ifi_metric', 'I'), ('ifi_rdomain', 'I'), ('ifi_baudrate', 'Q'), ('ifi_ipackets', 'Q'), ('ifi_ierrors', 'Q'), ('ifi_opackets', 'Q'), ('ifi_oerrors', 'Q'), ('ifi_collisions', 'Q'), ('ifi_ibytes', 'Q'), ('ifi_obytes', 'Q'), ('ifi_imcasts', 'Q'), ('ifi_omcasts', 'Q'), ('ifi_iqdrops', 'Q'), ('ifi_oqdrops', 'Q'), ('ifi_noproto', 'Q'), ('ifi_capabilities', 'I'), ('ifu_sec', 'Q'), ('ifu_usec', 'I')) class rt_msg_base(bsdmsg): __slots__ = () fields = (('rtm_index', 'H'), ('rtm_tableid', 'H'), ('rtm_priority', 'B'), ('rtm_mpls', 'B'), ('rtm_addrs', 'i'), ('rtm_flags', 'i'), ('rtm_fmask', 'i'), ('rtm_pid', 'I'), ('rtm_seq', 'i'), ('rtm_errno', 'i'), ('rtm_inits', 'I'), ('rmx_pksent', 'Q'), ('rmx_expire', 'q'), ('rmx_locks', 'I'), ('rmx_mtu', 'I'), ('rmx_refcnt', 'I'), ('rmx_hopcount', 'I'), ('rmx_recvpipe', 'I'), ('rmx_sendpipe', 'I'), ('rmx_ssthresh', 'I'), ('rmx_rtt', 'I'), ('rmx_rttvar', 'I'), ('rmx_pad', 'I')) sockaddr_offset = 96 ifa_slots = {0: ('DST', 'rt_slot_addr'), 1: ('GATEWAY', 'rt_slot_addr'), 2: ('NETMASK', 'rt_slot_addr'), 3: ('GENMASK', 'hex'), 4: ('IFP', 'rt_slot_ifp'), 5: ('IFA', 'rt_slot_addr'), 6: ('AUTHOR', 'hex'), 7: ('BRD', 'rt_slot_addr'), 8: ('SRC', 'rt_slot_addr'), 9: ('SRCMASK', 'rt_slot_addr'), 10: ('LABEL', 'hex'), 11: ('BFD', 'hex'), 12: ('DNS', 'hex'), 13: ('STATIC', 'hex'), 14: ('SEARCH', 'hex')} class ifa_msg_base(bsdmsg): __slots__ = () fields = (('ifam_index', 'H'), ('ifam_tableid', 'H'), ('ifam_pad1', 'B'), ('ifam_pad2', 'B'), ('rtm_addrs', 'i'), ('ifam_flags', 'i'), ('ifam_metric', 'i')) sockaddr_offset = 24 class ifma_msg_base(bsdmsg): __slots__ = () fields = (('rtm_addrs', 'i'), ('ifmam_flags', 'i'), ('ifmam_index', 'H')) sockaddr_offset = 16 class if_announcemsg(bsdmsg): __slots__ = () fields = (('ifan_index', 'H'), ('ifan_what', 'H'), ('ifan_name', '%is' % IFNAMSIZ)) def decode(self): bsdmsg.decode(self) self['ifan_name'] = self['ifan_name'].strip(b'\0').decode('ascii') pyroute2-0.5.9/pyroute2/bsd/rtmsocket/0000755000175000017500000000000013621220110017572 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/bsd/rtmsocket/__init__.py0000644000175000017500000001072613610051400021713 0ustar peetpeet00000000000000import struct from socket import AF_ROUTE from socket 
import SOCK_RAW from socket import AF_INET from socket import AF_INET6 from pyroute2 import config from pyroute2.common import dqn2int from pyroute2.bsd.pf_route import (bsdmsg, if_msg, rt_msg, if_announcemsg, ifma_msg, ifa_msg) from pyroute2.netlink.rtnl.ifaddrmsg import ifaddrmsg from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg from pyroute2.netlink.rtnl.rtmsg import rtmsg from pyroute2.netlink.rtnl import (RTM_NEWLINK as RTNL_NEWLINK, RTM_DELLINK as RTNL_DELLINK, RTM_NEWADDR as RTNL_NEWADDR, RTM_DELADDR as RTNL_DELADDR, RTM_NEWROUTE as RTNL_NEWROUTE, RTM_DELROUTE as RTNL_DELROUTE) if config.uname[0] == 'OpenBSD': from pyroute2.bsd.rtmsocket.openbsd import (RTMSocketBase, RTM_ADD, RTM_NEWADDR) else: from pyroute2.bsd.rtmsocket.freebsd import (RTMSocketBase, RTM_ADD, RTM_NEWADDR) def convert_rt_msg(msg): ret = rtmsg() ret['header']['type'] = RTNL_NEWROUTE if \ msg['header']['type'] == RTM_ADD else \ RTNL_DELROUTE ret['family'] = msg['DST']['header']['family'] ret['attrs'] = [] if 'address' in msg['DST']: ret['attrs'].append(['RTA_DST', msg['DST']['address']]) if 'NETMASK' in msg and \ msg['NETMASK']['header']['family'] == ret['family']: ret['dst_len'] = dqn2int(msg['NETMASK']['address'], ret['family']) if 'GATEWAY' in msg: if msg['GATEWAY']['header']['family'] not in (AF_INET, AF_INET6): # interface routes, table 255 # discard for now return None ret['attrs'].append(['RTA_GATEWAY', msg['GATEWAY']['address']]) if 'IFA' in msg: ret['attrs'].append(['RTA_SRC', msg['IFA']['address']]) if 'IFP' in msg: ret['attrs'].append(['RTA_OIF', msg['IFP']['index']]) elif msg['rtm_index'] != 0: ret['attrs'].append(['RTA_OIF', msg['rtm_index']]) del ret['value'] return ret def convert_if_msg(msg): # discard this type for now return None def convert_ifa_msg(msg): ret = ifaddrmsg() ret['header']['type'] = RTNL_NEWADDR if \ msg['header']['type'] == RTM_NEWADDR else \ RTNL_DELADDR ret['index'] = msg['IFP']['index'] ret['family'] = msg['IFA']['header']['family'] ret['prefixlen'] = dqn2int(msg['NETMASK']['address'], ret['family']) ret['attrs'] = [['IFA_ADDRESS', msg['IFA']['address']], ['IFA_BROADCAST', msg['BRD']['address']], ['IFA_LABEL', msg['IFP']['ifname']]] del ret['value'] return ret def convert_ifma_msg(msg): # ignore for now return None def convert_if_announcemsg(msg): ret = ifinfmsg() ret['header']['type'] = RTNL_DELLINK if msg['ifan_what'] else RTNL_NEWLINK ret['index'] = msg['ifan_index'] ret['attrs'] = [['IFLA_IFNAME', msg['ifan_name']]] del ret['value'] return ret def convert_bsdmsg(msg): # ignore unknown messages return None convert = {rt_msg: convert_rt_msg, ifa_msg: convert_ifa_msg, if_msg: convert_if_msg, ifma_msg: convert_ifma_msg, if_announcemsg: convert_if_announcemsg, bsdmsg: convert_bsdmsg} class RTMSocket(RTMSocketBase): def __init__(self, output='pf_route'): self._sock = config.SocketBase(AF_ROUTE, SOCK_RAW) self._output = output def fileno(self): return self._sock.fileno() def get(self): msg = self._sock.recv(2048) _, _, msg_type = struct.unpack('HBB', msg[:4]) msg_class = self.msg_map.get(msg_type, None) if msg_class is not None: msg = msg_class(msg) msg.decode() if self._output == 'netlink': # convert messages to the Netlink format msg = convert[type(msg)](msg) return msg def close(self): self._sock.close() def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.close() __all__ = [RTMSocket, ] pyroute2-0.5.9/pyroute2/bsd/rtmsocket/freebsd.py0000644000175000017500000000317613610051400021567 0ustar peetpeet00000000000000 from pyroute2.bsd.pf_route 
import (if_msg, rt_msg, if_announcemsg, ifma_msg, ifa_msg) RTM_ADD = 0x1 # Add Route RTM_DELETE = 0x2 # Delete Route RTM_CHANGE = 0x3 # Change Metrics or flags RTM_GET = 0x4 # Report Metrics RTM_LOSING = 0x5 # Kernel Suspects Partitioning RTM_REDIRECT = 0x6 # Told to use different route RTM_MISS = 0x7 # Lookup failed on this address RTM_LOCK = 0x8 # Fix specified metrics RTM_RESOLVE = 0xb # Req to resolve dst to LL addr RTM_NEWADDR = 0xc # Address being added to iface RTM_DELADDR = 0xd # Address being removed from iface RTM_IFINFO = 0xe # Iface going up/down etc RTM_NEWMADDR = 0xf # Mcast group membership being added to if RTM_DELMADDR = 0x10 # Mcast group membership being deleted RTM_IFANNOUNCE = 0x11 # Iface arrival/departure RTM_IEEE80211 = 0x12 # IEEE80211 wireless event class RTMSocketBase(object): msg_map = {RTM_ADD: rt_msg, RTM_DELETE: rt_msg, RTM_CHANGE: rt_msg, RTM_GET: rt_msg, RTM_LOSING: rt_msg, RTM_REDIRECT: rt_msg, RTM_MISS: rt_msg, RTM_LOCK: rt_msg, RTM_RESOLVE: rt_msg, RTM_NEWADDR: ifa_msg, RTM_DELADDR: ifa_msg, RTM_IFINFO: if_msg, RTM_NEWMADDR: ifma_msg, RTM_DELMADDR: ifma_msg, RTM_IFANNOUNCE: if_announcemsg, RTM_IEEE80211: if_announcemsg} pyroute2-0.5.9/pyroute2/bsd/rtmsocket/openbsd.py0000644000175000017500000000326513610051400021606 0ustar peetpeet00000000000000from pyroute2.bsd.pf_route import (bsdmsg, if_msg, rt_msg, if_announcemsg, ifa_msg) RTM_ADD = 0x1 # Add Route RTM_DELETE = 0x2 # Delete Route RTM_CHANGE = 0x3 # Change Metrics or flags RTM_GET = 0x4 # Report Metrics RTM_LOSING = 0x5 # Kernel Suspects Partitioning RTM_REDIRECT = 0x6 # Told to use different route RTM_MISS = 0x7 # Lookup failed on this address RTM_LOCK = 0x8 # Fix specified metrics RTM_RESOLVE = 0xb # Req to resolve dst to LL addr RTM_NEWADDR = 0xc # Address being added to iface RTM_DELADDR = 0xd # Address being removed from iface RTM_IFINFO = 0xe # Iface going up/down etc RTM_IFANNOUNCE = 0xf # Iface arrival/departure RTM_DESYNC = 0x10 # route socket buffer overflow RTM_INVALIDATE = 0x10 # Invalidate cache of L2 route RTM_BFD = 0x12 # bidirectional forwarding detection RTM_PROPOSAL = 0x13 # proposal for netconfigd class RTMSocketBase(object): msg_map = {RTM_ADD: rt_msg, RTM_DELETE: rt_msg, RTM_CHANGE: rt_msg, RTM_GET: rt_msg, RTM_LOSING: rt_msg, RTM_REDIRECT: rt_msg, RTM_MISS: rt_msg, RTM_LOCK: rt_msg, RTM_RESOLVE: rt_msg, RTM_NEWADDR: ifa_msg, RTM_DELADDR: ifa_msg, RTM_IFINFO: if_msg, RTM_IFANNOUNCE: if_announcemsg, RTM_DESYNC: bsdmsg, RTM_INVALIDATE: bsdmsg, RTM_BFD: bsdmsg, RTM_PROPOSAL: bsdmsg} pyroute2-0.5.9/pyroute2/bsd/util.py0000644000175000017500000001734313610051400017120 0ustar peetpeet00000000000000''' Utility to parse ifconfig, netstat etc. PF_ROUTE may be effectively used only to get notifications. To fetch info from the system we have to use ioctl or external utilities. Maybe some day it will be ioctl. For now it's ifconfig and netstat. 
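A minimal usage sketch (hedged: real output depends on the host, and
``run()`` may return ``bytes`` on Python 3, so decode it before parsing)::

    from pyroute2.bsd.util import Route

    rt = Route()
    data = rt.run()
    if isinstance(data, bytes):
        data = data.decode('utf-8')
    for spec in rt.parse(data):   # netlink-compatible route dicts
        print(spec)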
''' import re import socket import subprocess class CMD(object): cmd = ['uname', '-s'] def __init__(self, cmd=None): if cmd is not None: self.cmd = cmd def run(self): ''' Run the command and get stdout ''' stdout = stderr = '' try: process = subprocess.Popen(self.cmd, stdout=subprocess.PIPE) (stdout, stderr) = process.communicate() except Exception: process.kill() finally: process.wait() return stdout class Route(CMD): cmd = ['netstat', '-rn'] def parse(self, data): ret = [] family = 0 for line in data.split('\n'): if line == 'Internet:': family = socket.AF_INET elif line == 'Internet6:': # do NOT support IPv6 routes yet break sl = line.split() if len(sl) < 4: continue if sl[0] == 'Destination': # create the field map fmap = dict([(x[1], x[0]) for x in enumerate(sl)]) if 'Netif' not in fmap: fmap['Netif'] = fmap['Iface'] continue route = {'family': family, 'attrs': []} # # RTA_DST dst = sl[fmap['Destination']] if dst != 'default': dst = dst.split('/') if len(dst) == 2: dst, dst_len = dst else: dst = dst[0] if family == socket.AF_INET: dst_len = 32 else: dst_len = 128 dst = dst.split('%') if len(dst) == 2: dst, _ = dst else: dst = dst[0] dst = '%s%s' % (dst, '.0' * (3 - dst.count('.'))) route['dst_len'] = int(dst_len) route['attrs'].append(['RTA_DST', dst]) # # RTA_GATEWAY gw = sl[fmap['Gateway']] if not gw.startswith('link') and not gw.find(':') >= 0: route['attrs'].append(['RTA_GATEWAY', sl[fmap['Gateway']]]) # # RTA_OIF -- do not resolve it here! just save route['ifname'] = sl[fmap['Netif']] ret.append(route) return ret class ARP(CMD): cmd = ['arp', '-an'] def parse(self, data): ret = [] f_dst = 1 f_addr = 3 f_ifname = 5 for line in data.split('\n'): sl = line.split() if not sl: continue if sl[0] == 'Host': f_dst = 0 f_addr = 1 f_ifname = 2 continue dst = sl[f_dst].strip('(').strip(')') addr = sl[f_addr].strip('(').strip(')') if addr == 'incomplete': continue ifname = sl[f_ifname] neighbour = {'ifindex': 0, 'ifname': ifname, 'family': 2, 'attrs': [['NDA_DST', dst], ['NDA_LLADDR', addr]]} ret.append(neighbour) return ret class Ifconfig(CMD): match = {'NR': re.compile(r'^\b').match} cmd = ['ifconfig', '-a'] def parse_line(self, line): ''' Dumb line parser: "key1 value1 key2 value2 something" -> {"key1": "value1", "key2": "value2"} ''' ret = {} cursor = 0 while cursor < (len(line) - 1): ret[line[cursor]] = line[cursor + 1] cursor += 2 return ret def parse(self, data): ''' Parse ifconfig output into netlink-compatible dicts:: from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg from pyroute2.bsd.util import Ifconfig def links() ifc = Ifconfig() data = ifc.run() for name, spec in ifc.parse(data)["links"].items(): yield ifinfmsg().load(spec) ''' ifname = None kind = None ret = {'links': {}, 'addrs': {}} idx = 0 info_data = {'attrs': None} for line in data.split('\n'): sl = line.split() pl = self.parse_line(sl) # type-specific if kind == 'gre' and 'inet' in sl and not info_data['attrs']: # first "inet" -- low-level addresses arrow = None try: arrow = sl.index('->') except ValueError: try: arrow = sl.index('-->') except ValueError: continue if arrow is not None: info_data['attrs'] = [('IFLA_GRE_LOCAL', sl[arrow - 1]), ('IFLA_GRE_REMOTE', sl[arrow + 1])] continue # first line -- ifname, flags, mtu if self.match['NR'](line): ifname = sl[0][:-1] kind = None idx += 1 ret['links'][ifname] = link = {'index': idx, 'attrs': []} ret['addrs'][ifname] = addrs = [] link['attrs'].append(['IFLA_IFNAME', ifname]) # if ifname[:3] == 'gre': kind = 'gre' info_data = {'attrs': []} linkinfo = {'attrs': 
[('IFLA_INFO_KIND', kind), ('IFLA_INFO_DATA', info_data)]} link['attrs'].append(['IFLA_LINKINFO', linkinfo]) # extract flags try: link['flags'] = int(sl[1].split('=')[1].split('<')[0]) except Exception: pass # extract MTU if 'mtu' in pl: link['attrs'].append(['IFLA_MTU', int(pl['mtu'])]) elif 'ether' in pl: link['attrs'].append(['IFLA_ADDRESS', pl['ether']]) elif 'lladdr' in pl: link['attrs'].append(['IFLA_ADDRESS', pl['lladdr']]) elif 'index' in pl: idx = int(pl['index']) link['index'] = int(pl['index']) elif 'inet' in pl: if ('netmask' not in pl) or \ ('inet' not in pl): print(pl) continue addr = {'index': idx, 'family': socket.AF_INET, 'prefixlen': bin(int(pl['netmask'], 16)).count('1'), 'attrs': [['IFA_ADDRESS', pl['inet']]]} if 'broadcast' in pl: addr['attrs'].append(['IFA_BROADCAST', pl['broadcast']]) addrs.append(addr) elif 'inet6' in pl: if ('prefixlen' not in pl) or \ ('inet6' not in pl): print(pl) continue addr = {'index': idx, 'family': socket.AF_INET6, 'prefixlen': int(pl['prefixlen']), 'attrs': [['IFA_ADDRESS', pl['inet6'].split('%')[0]]]} if 'scopeid' in pl: addr['scope'] = int(pl['scopeid'], 16) addrs.append(addr) return ret pyroute2-0.5.9/pyroute2/cli/0000755000175000017500000000000013621220110015556 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/cli/__init__.py0000644000175000017500000000033713610051400017674 0ustar peetpeet00000000000000 t_stmt = 1 t_dict = 2 t_comma = 3 t_end_of_dict = 7 t_end_of_sentence = 8 t_end_of_stream = 9 def change_pointer(f): f.__cli_cptr__ = True return f def show_result(f): f.__cli_publish__ = True return f pyroute2-0.5.9/pyroute2/cli/console.py0000644000175000017500000000650313610051400017600 0ustar peetpeet00000000000000import sys import code import socket import getpass from pyroute2 import NDB from pyroute2.cli.session import Session try: import readline HAS_READLINE = True except ImportError: HAS_READLINE = False class Console(code.InteractiveConsole): def __init__(self, stdout=None, log=None, sources=None): global HAS_READLINE self.db = NDB(log=log, sources=sources) self.db.config = {'show_format': 'json'} self.stdout = stdout or sys.stdout self.session = Session(self.db, self.stdout, self.set_prompt) self.matches = [] self.isatty = sys.stdin.isatty() self.prompt = '' self.set_prompt() code.InteractiveConsole.__init__(self) if HAS_READLINE: readline.parse_and_bind('tab: complete') readline.set_completer(self.completer) readline.set_completion_display_matches_hook(self.display) def close(self): self.db.close() def help(self): self.session.lprint("Built-in commands: \n" "exit\t-- exit cli\n" "ls\t-- list current namespace\n" ".\t-- print the current object\n" ".. 
or Ctrl-D\t-- one level up\n") def set_prompt(self, prompt=None): if self.isatty: if prompt is not None: self.prompt = '%s > ' % (prompt) else: self.prompt = '%s > ' % (self.session.ptr.__class__.__name__) self.prompt = '%s@%s : %s' % (getpass.getuser(), (socket .gethostname() .split('.')[0]), self.prompt) def loadrc(self, fname): with open(fname, 'r') as f: self.session.handle(f.read()) def interact(self, readfunc=None): if self.isatty and readfunc is None: self.session.lprint("pyroute2 cli prototype") if readfunc is None: readfunc = self.raw_input indent = 0 while True: try: text = readfunc(self.prompt) except EOFError: if self.session.stack: self.session.lprint() self.session.stack_pop() continue else: self.close() break except Exception: self.close() break try: indent = self.session.handle(text, indent) except: self.showtraceback() continue def completer(self, text, state): if state == 0: d = [x for x in dir(self.session.ptr) if x.startswith(text)] if isinstance(self.session.ptr, dict): keys = [str(y) for y in self.session.ptr.keys()] d.extend([x for x in keys if x.startswith(text)]) self.matches = d try: return self.matches[state] except: pass def display(self, line, matches, length): self.session.lprint() self.session.lprint(matches) self.session.lprint('%s%s' % (self.prompt, line), end='') if __name__ == '__main__': Console().interact() pyroute2-0.5.9/pyroute2/cli/parser.py0000644000175000017500000001245213610051400017432 0ustar peetpeet00000000000000import re import shlex from pyroute2.common import basestring from pyroute2.cli import (t_stmt, t_dict, t_comma, t_end_of_dict, t_end_of_sentence, t_end_of_stream) class Token(object): def __init__(self, lex, expect=tuple(), prohibit=tuple(), leaf=False): self.lex = lex self.leaf = leaf self.kind = 0 self.name = None self.argv = [] self.kwarg = {} self.parse() if expect and self.kind not in expect: raise SyntaxError('expected %s, got %s' % (expect, self.kind)) if prohibit and self.kind in prohibit: raise SyntaxError('unexpected %s' % (self.name, )) def convert(self, arg): if re.match('^[0-9]+$', arg): return int(arg) else: return arg def parse(self): # triage first = self.lex.get_token() self.name = first ## # no token # if first == '': self.kind = t_end_of_stream ## # dict, e.g. # # resource spec, function arguments:: # {arg1, arg2} # {key1 value1, key2 value2} # {key {skey1 value}} # elif first == '{': arg_name = None while True: nt = Token(self.lex, expect=(t_stmt, t_dict, t_comma, t_end_of_dict)) if arg_name is None: if nt.kind == t_dict: self.argv.append(nt.kwarg) elif nt.kind == t_comma: continue elif nt.kind == t_stmt: arg_name = nt.name else: if nt.kind in (t_end_of_dict, t_comma): self.argv.append(arg_name) elif nt.kind == t_stmt: self.kwarg[arg_name] = nt.name elif nt.kind == t_dict: self.kwarg[arg_name] = nt.kwarg arg_name = None if nt.kind == t_end_of_dict: self.kind = t_dict self.name = '%s %s' % (self.argv, self.kwarg) return ## # end of dict # elif first == '}': self.kind = t_end_of_dict ## # end of sentence # elif first == ';': self.kind = t_end_of_sentence ## # end of dict entry # elif first == ',': self.kind = t_comma ## # simple statement # # object name:: # name # # function call:: # func # func {arg1, arg2} # func {key1 value1, key2 value2} # else: self.name = self.convert(first) self.kind = t_stmt class Sentence(object): def __init__(self, text, indent=0, master=None): self.offset = 0 self.statements = [] self.text = text self.lex = shlex.shlex(text) self.lex.wordchars += '.:/' self.lex.commenters = '#!' 
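        # NB: '.', ':' and '/' are added to wordchars so that IP
        # addresses, prefixes and file paths survive as single tokens;
        # '#' and '!' start lexer comments (see the assignments above)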
self.lex.debug = False self.indent = indent if master: self.chain = master.chain else: self.chain = [] self.parse() def __iter__(self): for stmt in self.statements: yield stmt def parse(self): sentence = self while True: nt = Token(self.lex) if nt.kind == t_end_of_sentence: sentence = Sentence(None, self.indent, master=self) elif nt.kind == t_end_of_stream: return else: sentence.statements.append(nt) if sentence not in self.chain: self.chain.append(sentence) def __repr__(self): ret = '----\n' for s in self.statements: ret += '%i [%s] %s\n' % (self.indent, s.kind, s.name) ret += '\targv: %s\n' % (s.argv) ret += '\tkwarg: %s\n' % (s.kwarg) return ret class Parser(object): def __init__(self, stream): self.stream = stream self.indent = None self.sentences = [] self.parse() def parse(self): if hasattr(self.stream, 'readlines'): for text in self.stream.readlines(): self.parse_string(text) elif isinstance(self.stream, basestring): self.parse_string(self.stream) else: raise ValueError('unsupported stream') self.parsed = True def parse_string(self, text): # 1. get indentation indent = re.match(r'^([ \t]*)', text).groups(0)[0] spaces = [] # 2. sort it if indent: spaces = list(set(indent)) if len(spaces) > 1: raise SyntaxError('mixed indentation') if self.indent is None: self.indent = spaces[0] if self.indent != spaces[0]: raise SyntaxError('mixed indentation') sentence = Sentence(text, len(indent)) self.sentences.extend(sentence.chain) pyroute2-0.5.9/pyroute2/cli/server.py0000644000175000017500000000354513610051400017447 0ustar peetpeet00000000000000import json from pyroute2 import NDB from pyroute2.cli.session import Session try: from BaseHTTPServer import HTTPServer as HTTPServer from BaseHTTPServer import BaseHTTPRequestHandler except ImportError: from http.server import HTTPServer as HTTPServer from http.server import BaseHTTPRequestHandler class Handler(BaseHTTPRequestHandler): def do_error(self, code, reason): self.send_error(code, reason) self.end_headers() def do_POST(self): # # sanity checks: # # * path if self.path != '/v1/': return self.do_error(404, 'url not found') # * content length if 'Content-Length' not in self.headers: return self.do_error(500, 'Content-Length') # * content type if 'Content-Type' not in self.headers: return self.do_error(500, 'Content-Type') # content_length = int(self.headers['Content-Length']) content_type = self.headers['Content-Type'] data = self.rfile.read(content_length) if content_type == 'application/json': try: request = json.loads(data) except ValueError: return self.do_error(500, 'Incorrect JSON input') else: request = {'commands': [data]} session = Session(ndb=self.server.ndb, stdout=self.wfile) self.send_response(200) self.end_headers() for cmd in request['commands']: session.handle(cmd) class Server(HTTPServer): def __init__(self, address='localhost', port=8080, debug=None, sources=None): self.sessions = {} self.ndb = NDB(debug=debug, sources=sources) self.ndb.config = {'show_format': 'json'} HTTPServer.__init__(self, (address, port), Handler) pyroute2-0.5.9/pyroute2/cli/session.py0000644000175000017500000001331613610051400017621 0ustar peetpeet00000000000000from __future__ import print_function import sys from collections import namedtuple from pyroute2.common import basestring from pyroute2.cli import t_dict from pyroute2.cli import t_stmt from pyroute2.cli.parser import Parser class Session(object): def __init__(self, ndb, stdout=None, ptrname_callback=None): self.db = ndb self.ptr = self.db self._ptrname = None self._ptrname_callback = 
ptrname_callback self.stack = [] self.prompt = '' self.stdout = stdout or sys.stdout @property def ptrname(self): return self._ptrname @ptrname.setter def ptrname(self, name): self._ptrname = name if self._ptrname_callback is not None: self._ptrname_callback(name) def stack_pop(self): self.ptr, self.ptrname = self.stack.pop() return (self.ptr, self.ptrname) def lprint(self, text='', end='\n'): if not isinstance(text, basestring): text = str(text) self.stdout.write(text) if end: self.stdout.write(end) self.stdout.flush() def handle_statement(self, stmt, token): obj = None if stmt.name == 'exit': raise SystemExit() elif stmt.name == 'ls': self.lprint(dir(self.ptr)) elif stmt.name == '.': self.lprint(repr(self.ptr)) elif stmt.name == '..': if self.stack: self.ptr, self.ptrname = self.stack.pop() else: if stmt.kind == t_dict: obj = self.ptr[stmt.kwarg] elif stmt.kind == t_stmt: if isinstance(self.ptr, dict): try: obj = self.ptr.get(stmt.name, None) except Exception: pass if obj is None: obj = getattr(self.ptr, stmt.name, None) if hasattr(obj, '__call__'): try: nt = next(token) except StopIteration: nt = (namedtuple('Token', ('kind', 'argv', 'kwarg'))(t_dict, [], {})) if nt.kind != t_dict: raise TypeError('function arguments expected') try: ret = obj(*nt.argv, **nt.kwarg) if hasattr(obj, '__cli_cptr__'): obj = ret elif hasattr(obj, '__cli_publish__'): if hasattr(ret, 'generator') or hasattr(ret, 'next'): for line in ret: if isinstance(line, basestring): self.lprint(line) else: self.lprint(repr(line)) else: self.lprint(ret) return elif isinstance(ret, (bool, basestring, int, float)): self.lprint(ret) return else: return except: import traceback traceback.print_exc() return else: if isinstance(self.ptr, dict) and not isinstance(obj, dict): try: nt = next(token) if nt.kind == t_stmt: self.ptr[stmt.name] = nt.name elif nt.kind == t_dict and nt.argv: self.ptr[stmt.name] = nt.argv elif nt.kind == t_dict and nt.kwarg: self.ptr[stmt.name] = nt.kwarg else: raise TypeError('failed setting a key/value pair') return except NotImplementedError: raise KeyError() except StopIteration: pass if obj is None: raise KeyError() elif isinstance(obj, (basestring, int, float)): self.lprint(obj) else: self.stack.append((self.ptr, self.ptrname)) self.ptr = obj if hasattr(obj, 'key_repr'): self.ptrname = obj.key_repr() else: self.ptrname = stmt.name return True return def handle_sentence(self, sentence, indent): if sentence.indent < indent: if self.stack: self.ptr, self.ptrname = self.stack.pop() indent = sentence.indent iterator = iter(sentence) rcode = None rcounter = 0 try: for stmt in iterator: try: rcode = self.handle_statement(stmt, iterator) if rcode: rcounter += 1 except SystemExit: self.close() return except KeyError: self.lprint('object not found') rcode = False return indent except: import traceback traceback.print_exc() finally: if not rcode: for _ in range(rcounter): self.ptr, self.ptrname = self.stack.pop() return indent def handle(self, text, indent=0): parser = Parser(text) for sentence in parser.sentences: indent = self.handle_sentence(sentence, indent) return indent pyroute2-0.5.9/pyroute2/common.py0000644000175000017500000004166613610051400016670 0ustar peetpeet00000000000000# -*- coding: utf-8 -*- ''' Common utilities ''' import io import re import os import time import sys import errno import types import struct import socket import logging import threading log = logging.getLogger(__name__) try: basestring = basestring reduce = reduce file = file except NameError: basestring = (str, bytes) from 
functools import reduce reduce = reduce file = io.BytesIO AF_MPLS = 28 AF_PIPE = 255 # Right now AF_MAX == 40 DEFAULT_RCVBUF = 16384 _uuid32 = 0 # (singleton) the last uuid32 value saved to avoid collisions _uuid32_lock = threading.Lock() size_suffixes = {'b': 1, 'k': 1024, 'kb': 1024, 'm': 1024 * 1024, 'mb': 1024 * 1024, 'g': 1024 * 1024 * 1024, 'gb': 1024 * 1024 * 1024, 'kbit': 1024 / 8, 'mbit': 1024 * 1024 / 8, 'gbit': 1024 * 1024 * 1024 / 8} time_suffixes = {'s': 1, 'sec': 1, 'secs': 1, 'ms': 1000, 'msec': 1000, 'msecs': 1000, 'us': 1000000, 'usec': 1000000, 'usecs': 1000000} rate_suffixes = {'bit': 1, 'Kibit': 1024, 'kbit': 1000, 'mibit': 1024 * 1024, 'mbit': 1000000, 'gibit': 1024 * 1024 * 1024, 'gbit': 1000000000, 'tibit': 1024 * 1024 * 1024 * 1024, 'tbit': 1000000000000, 'Bps': 8, 'KiBps': 8 * 1024, 'KBps': 8000, 'MiBps': 8 * 1024 * 1024, 'MBps': 8000000, 'GiBps': 8 * 1024 * 1024 * 1024, 'GBps': 8000000000, 'TiBps': 8 * 1024 * 1024 * 1024 * 1024, 'TBps': 8000000000000} ## # General purpose # class View(object): ''' A read-only view of a dictionary object. ''' def __init__(self, src=None, path=None, constraint=lambda k, v: True): self.src = src if src is not None else {} if path is not None: path = path.split('/') for step in path: self.src = getattr(self.src, step) self.constraint = constraint def __getitem__(self, key): if key in self.keys(): return self.src[key] raise KeyError() def __setitem__(self, key, value): raise NotImplementedError() def __delitem__(self, key): raise NotImplementedError() def get(self, key, default=None): try: return self[key] except KeyError: return default def _filter(self): ret = [] for (key, value) in tuple(self.src.items()): try: if self.constraint(key, value): ret.append((key, value)) except Exception as e: log.error("view filter error: %s", e) return ret def keys(self): return [x[0] for x in self._filter()] def values(self): return [x[1] for x in self._filter()] def items(self): return self._filter() def __iter__(self): for key in self.keys(): yield key def __repr__(self): return repr(dict(self._filter())) class Namespace(object): def __init__(self, parent, override=None): self.parent = parent self.override = override or {} def __getattr__(self, key): if key in ('parent', 'override'): return object.__getattr__(self, key) elif key in self.override: return self.override[key] else: ret = getattr(self.parent, key) # ACHTUNG # # if the attribute we got with `getattr` # is a method, rebind it to the Namespace # object, so all subsequent getattrs will # go through the Namespace also. # if isinstance(ret, types.MethodType): ret = type(ret)(ret.__func__, self) return ret def __setattr__(self, key, value): if key in ('parent', 'override'): object.__setattr__(self, key, value) elif key in self.override: self.override[key] = value else: setattr(self.parent, key, value) class Dotkeys(dict): ''' This is a sick-minded hack of dict, intended to be an eye-candy. It allows to get dict's items by dot reference: ipdb["lo"] == ipdb.lo ipdb["eth0"] == ipdb.eth0 Obviously, it will not work for some cases, like unicode names of interfaces and so on. Beside of that, it introduces some complexity. But it simplifies live for old-school admins, who works with good old "lo", "eth0", and like that naming schemes. 
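    A small illustration (hypothetical keys, shown only to demonstrate
    the supported access styles)::

        d = Dotkeys()
        d['eth0'] = 'up'
        assert d.eth0 == 'up'    # dot access to dict items
        d.set_mtu(1500)          # equivalent to d['mtu'] = 1500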
''' __var_name = re.compile('^[a-zA-Z_]+[a-zA-Z_0-9]*$') def __dir__(self): return [i for i in self if type(i) == str and self.__var_name.match(i)] def __getattribute__(self, key, *argv): try: return dict.__getattribute__(self, key) except AttributeError as e: if key == '__deepcopy__': raise e elif key[:4] == 'set_': def set_value(value): self[key[4:]] = value return self return set_value elif key in self: return self[key] else: raise e def __setattr__(self, key, value): if key in self: self[key] = value else: dict.__setattr__(self, key, value) def __delattr__(self, key): if key in self: del self[key] else: dict.__delattr__(self, key) def map_namespace(prefix, ns, normalize=None): ''' Take the namespace prefix, list all constants and build two dictionaries -- straight and reverse mappings. E.g.: ## neighbor attributes NDA_UNSPEC = 0 NDA_DST = 1 NDA_LLADDR = 2 NDA_CACHEINFO = 3 NDA_PROBES = 4 (NDA_NAMES, NDA_VALUES) = map_namespace('NDA', globals()) Will lead to:: NDA_NAMES = {'NDA_UNSPEC': 0, ... 'NDA_PROBES': 4} NDA_VALUES = {0: 'NDA_UNSPEC', ... 4: 'NDA_PROBES'} The `normalize` parameter can be: - None — no name transformation will be done - True — cut the prefix and `lower()` the rest - lambda x: … — apply the function to every name ''' nmap = {None: lambda x: x, True: lambda x: x[len(prefix):].lower()} if not isinstance(normalize, types.FunctionType): normalize = nmap[normalize] by_name = dict([(normalize(i), ns[i]) for i in ns.keys() if i.startswith(prefix)]) by_value = dict([(ns[i], normalize(i)) for i in ns.keys() if i.startswith(prefix)]) return (by_name, by_value) def getbroadcast(addr, mask, family=socket.AF_INET): # 1. convert addr to int i = socket.inet_pton(family, addr) if family == socket.AF_INET: i = struct.unpack('>I', i)[0] a = 0xffffffff length = 32 elif family == socket.AF_INET6: i = struct.unpack('>QQ', i) i = i[0] << 64 | i[1] a = 0xffffffffffffffffffffffffffffffff length = 128 else: raise NotImplementedError('family not supported') # 2. calculate mask m = (a << length - mask) & a # 3. calculate default broadcast n = (i & m) | a >> mask # 4. convert it back to the normal address form if family == socket.AF_INET: n = struct.pack('>I', n) else: n = struct.pack('>QQ', n >> 64, n & (a >> 64)) return socket.inet_ntop(family, n) def dqn2int(mask, family=socket.AF_INET): ''' IPv4 dotted quad notation to int mask conversion ''' ret = 0 binary = socket.inet_pton(family, mask) for offset in range(len(binary) // 4): ret += bin(struct.unpack('I', binary[offset * 4: offset * 4 + 4])[0]).count('1') return ret def hexdump(payload, length=0): ''' Represent byte string as hex -- for debug purposes ''' if sys.version[0] == '3': return ':'.join('{0:02x}'.format(c) for c in payload[:length] or payload) else: return ':'.join('{0:02x}'.format(ord(c)) for c in payload[:length] or payload) def hexload(data): ret = ''.join(chr(int(x, 16)) for x in data.split(':')) if sys.version[0] == '3': return bytes(ret, 'ascii') else: return bytes(ret) def load_dump(f, meta=None): ''' Load a packet dump from an open file-like object. Supported dump formats: * strace hex dump (\\x00\\x00...) * pyroute2 hex dump (00:00:...) Simple markup is also supported. Any data from # or ; till the end of the string is a comment and ignored. Any data after . till EOF is ignored as well. With #! starts an optional code block. All the data in the code block will be read and returned via metadata dictionary. 
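    For example, a dump file containing just the line ``00:01:02:03``
    followed by a line with a single ``.`` decodes to four bytes. A short
    reading sketch (``meta['code']`` will carry the optional ``#!`` code
    block, or ``None``)::

        with open('packet.dump') as f:   # hypothetical file name
            meta = {}
            data = load_dump(f, meta)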
''' data = '' code = None for a in f.readlines(): if code is not None: code += a continue offset = 0 length = len(a) while offset < length: if a[offset] in (' ', '\t', '\n'): offset += 1 elif a[offset] == '#': if a[offset:offset + 2] == '#!': # read and save the code block; # do not parse it here code = '' break elif a[offset] == '.': return data elif a[offset] == '\\': # strace hex format data += chr(int(a[offset + 2:offset + 4], 16)) offset += 4 else: # pyroute2 hex format data += chr(int(a[offset:offset + 2], 16)) offset += 3 if isinstance(meta, dict): meta['code'] = code if sys.version[0] == '3': return bytes(data, 'iso8859-1') else: return data class AddrPool(object): ''' Address pool ''' cell = 0xffffffffffffffff def __init__(self, minaddr=0xf, maxaddr=0xffffff, reverse=False, release=False): self.cell_size = 0 # in bits mx = self.cell self.reverse = reverse self.release = release self.allocated = 0 if self.release and not isinstance(self.release, int): raise TypeError() self.ban = [] while mx: mx >>= 8 self.cell_size += 1 self.cell_size *= 8 # calculate, how many ints we need to bitmap all addresses self.cells = int((maxaddr - minaddr) / self.cell_size + 1) # initial array self.addr_map = [self.cell] self.minaddr = minaddr self.maxaddr = maxaddr self.lock = threading.RLock() def alloc(self): with self.lock: # gc self.ban: for item in tuple(self.ban): if item['counter'] == 0: self.free(item['addr']) self.ban.remove(item) else: item['counter'] -= 1 # iterate through addr_map base = 0 for cell in self.addr_map: if cell: # not allocated addr bit = 0 while True: if (1 << bit) & self.addr_map[base]: self.addr_map[base] ^= 1 << bit break bit += 1 ret = (base * self.cell_size + bit) if self.reverse: ret = self.maxaddr - ret else: ret = ret + self.minaddr if self.minaddr <= ret <= self.maxaddr: if self.release: self.free(ret, ban=self.release) self.allocated += 1 return ret else: self.free(ret) raise KeyError('no free address available') base += 1 # no free address available if len(self.addr_map) < self.cells: # create new cell to allocate address from self.addr_map.append(self.cell) return self.alloc() else: raise KeyError('no free address available') def locate(self, addr): if self.reverse: addr = self.maxaddr - addr else: addr -= self.minaddr base = addr // self.cell_size bit = addr % self.cell_size try: is_allocated = not self.addr_map[base] & (1 << bit) except IndexError: is_allocated = False return (base, bit, is_allocated) def setaddr(self, addr, value): if value not in ('free', 'allocated'): raise TypeError() with self.lock: base, bit, is_allocated = self.locate(addr) if value == 'free' and is_allocated: self.allocated -= 1 self.addr_map[base] |= 1 << bit elif value == 'allocated' and not is_allocated: self.allocated += 1 self.addr_map[base] &= ~(1 << bit) def free(self, addr, ban=0): with self.lock: if ban != 0: self.ban.append({'addr': addr, 'counter': ban}) else: base, bit, is_allocated = self.locate(addr) if len(self.addr_map) <= base: raise KeyError('address is not allocated') if self.addr_map[base] & (1 << bit): raise KeyError('address is not allocated') self.allocated -= 1 self.addr_map[base] ^= 1 << bit def _fnv1_python2(data): ''' FNV1 -- 32bit hash, python2 version @param data: input @type data: bytes @return: 32bit int hash @rtype: int See: http://www.isthe.com/chongo/tech/comp/fnv/index.html ''' hval = 0x811c9dc5 for i in range(len(data)): hval *= 0x01000193 hval ^= struct.unpack('B', data[i])[0] return hval & 0xffffffff def _fnv1_python3(data): ''' FNV1 -- 32bit hash, 
python3 version @param data: input @type data: bytes @return: 32bit int hash @rtype: int See: http://www.isthe.com/chongo/tech/comp/fnv/index.html ''' hval = 0x811c9dc5 for i in range(len(data)): hval *= 0x01000193 hval ^= data[i] return hval & 0xffffffff if sys.version[0] == '3': fnv1 = _fnv1_python3 else: fnv1 = _fnv1_python2 def uuid32(): ''' Return 32bit UUID, based on the current time and pid. @return: 32bit int uuid @rtype: int The uuid is guaranteed to be unique within one process. ''' global _uuid32 global _uuid32_lock with _uuid32_lock: candidate = _uuid32 while candidate == _uuid32: candidate = fnv1(struct.pack('QQ', int(time.time() * 1000000), os.getpid())) _uuid32 = candidate return candidate def uifname(): ''' Return a unique interface name based on a prime function @return: interface name @rtype: str ''' return 'pr%x' % uuid32() def map_exception(match, subst): ''' Decorator to map exception types ''' def wrapper(f): def decorated(*argv, **kwarg): try: f(*argv, **kwarg) except Exception as e: if match(e): raise subst(e) raise return decorated return wrapper def map_enoent(f): ''' Shortcut to map OSError(2) -> OSError(95) ''' return map_exception(lambda x: (isinstance(x, OSError) and x.errno == errno.ENOENT), lambda x: OSError(errno.EOPNOTSUPP, 'Operation not supported'))(f) def metaclass(mc): def wrapped(cls): nvars = {} skip = ['__dict__', '__weakref__'] slots = cls.__dict__.get('__slots__') if not isinstance(slots, (list, tuple)): slots = [slots] for k in slots: skip.append(k) for (k, v) in cls.__dict__.items(): if k not in skip: nvars[k] = v return mc(cls.__name__, cls.__bases__, nvars) return wrapped def failed_class(message): class FailedClass(object): def __init__(self, *argv, **kwarg): ret = RuntimeError(message) ret.feature_supported = False raise ret return FailedClass pyroute2-0.5.9/pyroute2/config/0000755000175000017500000000000013621220110016254 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/config/__init__.py0000644000175000017500000000155713610051400020377 0ustar peetpeet00000000000000import socket import platform import multiprocessing from distutils.version import LooseVersion SocketBase = socket.socket MpPipe = multiprocessing.Pipe MpQueue = multiprocessing.Queue MpProcess = multiprocessing.Process ipdb_nl_async = True nlm_generator = False nla_via_getattr = False async_qsize = 4096 commit_barrier = 0 gc_timeout = 60 db_transaction_limit = 10000 # save uname() on startup time: it is not so # highly possible that the kernel will be # changed in runtime, while calling uname() # every time is a bit expensive uname = platform.uname() machine = platform.machine() arch = platform.architecture()[0] kernel = LooseVersion(uname[2]).version[:3] AF_BRIDGE = getattr(socket, 'AF_BRIDGE', 7) AF_NETLINK = getattr(socket, 'AF_NETLINK', 16) data_plugins_pkgs = [] data_plugins_path = [] netns_path = ['/var/run/netns', '/var/run/docker/netns'] pyroute2-0.5.9/pyroute2/config/asyncio.py0000644000175000017500000000434113610051400020277 0ustar peetpeet00000000000000# # Author: Angus Lees # # Backported from a Neutron privsep proposal with the # permission of the author. 
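#
# A minimal usage sketch (assuming an eventlet-style green environment);
# asyncio_config() below replaces config.SocketBase and config.MpPipe
# with the wrappers defined in this module:
#
#     from pyroute2.config.asyncio import asyncio_config
#     asyncio_config()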
# from __future__ import absolute_import import functools import socket import types try: import cPickle as pickle except ImportError: import pickle from pyroute2 import config as config _socketmethods = ( 'bind', 'close', 'connect', 'connect_ex', 'listen', 'getpeername', 'getsockname', 'getsockopt', 'makefile', 'recv', 'recvfrom', 'recv_into', 'recvfrom_into', 'send', 'sendto', 'sendall', 'setsockopt', 'setblocking', 'settimeout', 'gettimeout', 'shutdown') def _forward(name, self, *args, **kwargs): return getattr(self._sock, name)(*args, **kwargs) class _SocketWrapper(object): """eventlet-monkeypatch friendly socket class""" def __init__(self, *args, **kwargs): _sock = kwargs.get('_sock', None) or socket.socket(*args, **kwargs) self._sock = _sock for name in _socketmethods: f = functools.partial(_forward, name) f.__name__ = name setattr(_SocketWrapper, name, types.MethodType(f, self)) def fileno(self): return self._sock.fileno() def dup(self): return self.__class__(_sock=self._sock.dup()) class _MpConnection(object): """Highly limited multiprocessing.Connection alternative""" def __init__(self, sock): sock.setblocking(True) self.sock = sock def fileno(self): return self.sock.fileno() def send(self, obj): pickle.dump(obj, self, protocol=-1) def write(self, s): self.sock.sendall(s) def recv(self): return pickle.load(self) def read(self, n): return self.sock.recv(n) def readline(self): buf = b'' c = None while c != b'\n': c = self.sock.recv(1) buf += c return buf def close(self): self.sock.close() def _MultiprocessingPipe(): """multiprocess.Pipe reimplementation that uses MpConnection wrapper""" s1, s2 = socket.socketpair() return (_MpConnection(s1), _MpConnection(s2)) def asyncio_config(): config.SocketBase = _SocketWrapper config.MpPipe = _MultiprocessingPipe config.ipdb_nl_async = False pyroute2-0.5.9/pyroute2/config/eventlet.py0000644000175000017500000000040113610051400020451 0ustar peetpeet00000000000000import logging from pyroute2.config.asyncio import asyncio_config log = logging.getLogger(__name__) log.warning("Please use pyroute2.config.asyncio.asyncio_config") log.warning("The eventlet module will be dropped soon ") eventlet_config = asyncio_config pyroute2-0.5.9/pyroute2/config/log.py0000644000175000017500000000103513610051400017410 0ustar peetpeet00000000000000import logging ## # Create the main logger # # Do NOT touch the root logger -- not to break basicConfig() etc # log = logging.getLogger('pyroute2') log.setLevel(0) log.addHandler(logging.NullHandler()) def debug(*argv, **kwarg): return log.debug(*argv, **kwarg) def info(*argv, **kwarg): return log.info(*argv, **kwarg) def warning(*argv, **kwarg): return log.warning(*argv, **kwarg) def error(*argv, **kwarg): return log.error(*argv, **kwarg) def critical(*argv, **kwarg): return log.critical(*argv, **kwarg) pyroute2-0.5.9/pyroute2/config/test_platform.py0000644000175000017500000001736413610051400021526 0ustar peetpeet00000000000000''' Platform tests to discover the system capabilities. ''' import os import sys import select import struct import threading from pyroute2 import config from pyroute2.common import uifname from pyroute2 import RawIPRoute from pyroute2.netlink.rtnl import RTMGRP_LINK class SkipTest(Exception): pass class TestCapsRtnl(object): ''' A minimal test set to collect the RTNL implementation capabilities. It uses raw RTNL sockets and doesn't run any proxy code, so no transparent helpers are executed -- e.g., it will not create bridge via `brctl`, if RTNL doesn't support it. 
A short developer's guide:: def test_whatever_else(self): code This test will create a capability record `whatever_else`. If the `code` fails, the `whatever_else` will be set to `False`. If it throws the `SkipTest` exception, the `whatever_else` will be set to `None`. Otherwise it will be set to whatever the test returns. To collect the capabilities:: tce = TestCapsExt() tce.collect() print(tce.capabilities) Collected capabilities are in the `TestCapsExt.capabilities` dictionary, you can use them directly or by setting the `config.capabilities` singletone:: from pyroute2 import config # ... tce.collect() config.capabilities = tce.capabilities ''' def __init__(self): self.capabilities = {} self.ifnames = [] self.rtm_newlink = {} self.rtm_dellink = {} self.rtm_events = {} self.cmd, self.cmdw = os.pipe() self.ip = None self.event = threading.Event() def __getitem__(self, key): return self.capabilities[key] def set_capability(self, key, value): ''' Set a capability. ''' self.capabilities[key] = value def ifname(self): ''' Register and return a new unique interface name to be used in a test. ''' ifname = uifname() self.ifnames.append(ifname) self.rtm_events[ifname] = threading.Event() self.rtm_newlink[ifname] = [] self.rtm_dellink[ifname] = [] return ifname def monitor(self): # The monitoring code to collect RTNL messages # asynchronously. # Do **NOT** run manually. # use a separate socket for monitoring ip = RawIPRoute() ip.bind(RTMGRP_LINK) poll = select.poll() poll.register(ip, select.POLLIN | select.POLLPRI) poll.register(self.cmd, select.POLLIN | select.POLLPRI) self.event.set() while True: events = poll.poll() for (fd, evt) in events: if fd == ip.fileno(): msgs = ip.get() for msg in msgs: name = msg.get_attr('IFLA_IFNAME') event = msg.get('event') if name not in self.rtm_events: continue if event == 'RTM_NEWLINK': self.rtm_events[name].set() self.rtm_newlink[name].append(msg) elif event == 'RTM_DELLINK': self.rtm_dellink[name].append(msg) else: ip.close() return def setup(self): # The setup procedure for a test. # Do **NOT** run manually. # create the raw socket self.ip = RawIPRoute() def teardown(self): # The teardown procedure for a test. # Do **NOT** run manually. # clear the collected interfaces for ifname in self.ifnames: self.rtm_events[ifname].wait() self.rtm_events[ifname].clear() if self.rtm_newlink.get(ifname): self.ip.link('del', index=self.rtm_newlink[ifname][0]['index']) self.ifnames = [] # close the socket self.ip.close() def collect(self): ''' Run the tests and collect the capabilities. They will be saved in the `TestCapsRtnl.capabilities` attribute. 
''' symbols = sorted(dir(self)) # start the monitoring thread mthread = threading.Thread(target=self.monitor) mthread.start() self.event.wait() # wait for the thread setup for name in symbols: if name.startswith('test_'): self.setup() try: ret = getattr(self, name)() if ret is None: ret = True self.set_capability(name[5:], ret) except SkipTest: self.set_capability(name[5:], None) except Exception: for ifname in self.ifnames: # cancel events queued for that test self.rtm_events[ifname].set() self.set_capability(name[5:], False) self.teardown() # stop the monitor os.write(self.cmdw, b'q') mthread.join() return self.capabilities def test_uname(self): ''' Return collected uname ''' return config.uname def test_python_version(self): ''' Return Python version ''' return sys.version def test_unpack_from(self): ''' Does unpack_from() support bytearray as the buffer ''' # probe unpack from try: struct.unpack_from('I', bytearray((1, 0, 0, 0)), 0) except: return False # works... but may it be monkey patched? if hasattr(struct, '_u_f_orig'): return False def test_create_dummy(self): ''' An obvious test: an ability to create dummy interfaces ''' self.ghost = self.ifname() self.ip.link('add', ifname=self.ghost, kind='dummy') def test_create_bridge(self): ''' Can the kernel create bridges via netlink? ''' self.ip.link('add', ifname=self.ifname(), kind='bridge') def test_create_bond(self): ''' Can the kernel create bonds via netlink? ''' self.ip.link('add', ifname=self.ifname(), kind='bond') def test_ghost_newlink_count(self): ''' A normal flow (req == request, brd == broadcast message):: (req) -> RTM_NEWLINK (brd) <- RTM_NEWLINK (req) -> RTM_DELLINK (brd) <- RTM_DELLINK But on old kernels you can encounter the following:: (req) -> RTM_NEWLINK (brd) <- RTM_NEWLINK (req) -> RTM_DELLINK (brd) <- RTM_DELLINK (brd) <- RTM_NEWLINK (!) false positive And that obviously can break the code that relies on broadcast updates, since it will see as a new interface is created immediately after it was destroyed. One can ignore RTM_NEWLINK for the same name that follows a normal RTM_DELLINK. To do that, one should be sure the message will come. Another question is how many messages to ignore. This is not a test s.str., but it should follow after the `test_create_dummy`. It counts, how many RTM_NEWLINK messages arrived during the `test_create_dummy`. The ghost newlink messages count will be the same for other interface types as well. 
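        A hedged sketch of how a consumer could use the collected value,
        assuming `config.capabilities` has been populated as shown in the
        class docstring::

            from pyroute2 import config

            ghost = config.capabilities.get('ghost_newlink_count', 0) or 0
            # after a clean RTM_DELLINK, ignore up to `ghost` trailing
            # RTM_NEWLINK broadcasts for the same interface name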
''' with open('/proc/version', 'r') as f: if int(f.read().split()[2][0]) > 2: # the issue is reported only for kernels 2.x return 0 # there is no guarantee it will come; it *may* come self.rtm_events[self.ghost].wait(0.5) return max(len(self.rtm_newlink.get(self.ghost, [])) - 1, 0) pyroute2-0.5.9/pyroute2/conntrack.py0000644000175000017500000001410513613574566017400 0ustar peetpeet00000000000000import socket from pyroute2.netlink.nfnetlink.nfctsocket import IP_CT_TCP_FLAG_TO_NAME from pyroute2.netlink.nfnetlink.nfctsocket import IPSBIT_TO_NAME from pyroute2.netlink.nfnetlink.nfctsocket import TCPF_TO_NAME from pyroute2.netlink.nfnetlink.nfctsocket import NFCTAttrTuple from pyroute2.netlink.nfnetlink.nfctsocket import NFCTSocket class NFCTATcpProtoInfo(object): def __init__(self, state, wscale_orig=None, wscale_reply=None, flags_orig=None, flags_reply=None): self.state = state self.wscale_orig = wscale_orig self.wscale_reply = wscale_reply self.flags_orig = flags_orig self.flags_reply = flags_reply def state_name(self): return ','.join([name for bit, name in TCPF_TO_NAME.items() if self.state & bit]) def flags_name(self, flags): if flags is None: return '' s = '' for bit, name in IP_CT_TCP_FLAG_TO_NAME.items(): if flags & bit: s += '{},'.format(name) return s[:-1] @classmethod def from_netlink(cls, ndmsg): cta_tcp = ndmsg.get_attr('CTA_PROTOINFO_TCP') state = cta_tcp.get_attr('CTA_PROTOINFO_TCP_STATE') # second argument is the mask returned by kernel but useless for us flags_orig, _ = cta_tcp.get_attr('CTA_PROTOINFO_TCP_FLAGS_ORIGINAL') flags_reply, _ = cta_tcp.get_attr('CTA_PROTOINFO_TCP_FLAGS_REPLY') return cls(state=state, flags_orig=flags_orig, flags_reply=flags_reply) def __repr__(self): return 'TcpInfo(state={}, orig_flags={}, reply_flags={})'.format( self.state_name(), self.flags_name(self.flags_orig), self.flags_name(self.flags_reply)) class ConntrackEntry(object): def __init__(self, family, tuple_orig, tuple_reply, cta_status, cta_timeout, cta_protoinfo, cta_mark, cta_id, cta_use): self.tuple_orig = NFCTAttrTuple.from_netlink(family, tuple_orig) self.tuple_reply = NFCTAttrTuple.from_netlink(family, tuple_reply) self.status = cta_status self.timeout = cta_timeout if self.tuple_orig.proto == socket.IPPROTO_TCP: self.protoinfo = NFCTATcpProtoInfo.from_netlink(cta_protoinfo) else: self.protoinfo = None self.mark = cta_mark self.id = cta_id self.use = cta_use def status_name(self): s = '' for bit, name in IPSBIT_TO_NAME.items(): if self.status & bit: s += '{},'.format(name) return s[:-1] def __repr__(self): s = 'Entry(orig={}, reply={}, status={}'.format( self.tuple_orig, self.tuple_reply, self.status_name()) if self.protoinfo is not None: s += ', protoinfo={}'.format(self.protoinfo) s += ')' return s class Conntrack(NFCTSocket): """ High level conntrack functions """ def stat(self): """ Return current statistics per CPU Same result than conntrack -S command but a list of dictionaries """ stats = [] for msg in super(Conntrack, self).stat(): stats.append({'cpu': msg['res_id']}) stats[-1].update((k[10:].lower(), v) for k, v in msg['attrs'] if k.startswith('CTA_STATS_')) return stats def count(self): """ Return current number of conntrack entries Same result than /proc/sys/net/netfilter/nf_conntrack_count file or conntrack -C command """ ndmsg = super(Conntrack, self).count() return ndmsg[0].get_attr('CTA_STATS_GLOBAL_ENTRIES') def conntrack_max_size(self): """ Return the max size of connection tracking table /proc/sys/net/netfilter/nf_conntrack_max """ ndmsg = super(Conntrack, 
self).conntrack_max_size() return ndmsg[0].get_attr('CTA_STATS_GLOBAL_MAX_ENTRIES') def delete(self, entry): if isinstance(entry, ConntrackEntry): tuple_orig = entry.tuple_orig elif isinstance(entry, NFCTAttrTuple): tuple_orig = entry else: raise NotImplementedError() self.entry('del', tuple_orig=tuple_orig) def dump_entries(self, mark=None, mark_mask=0xffffffff, tuple_orig=None, tuple_reply=None): """ Dump all entries from conntrack table with filters Filters can be only part of a conntrack tuple :param NFCTAttrTuple tuple_orig: filter on original tuple :param NFCTAttrTuple tuple_reply: filter on reply tuple Examples:: # Filter only on tcp connections for entry in ct.dump_entries(tuple_orig=NFCTAttrTuple( proto=socket.IPPROTO_TCP)): print("This entry is tcp: {}".format(entry)) # Filter only on icmp message to 8.8.8.8 for entry in ct.dump_entries(tuple_orig=NFCTAttrTuple( proto=socket.IPPROTO_ICMP, daddr='8.8.8.8')): print("This entry is icmp to 8.8.8.8: {}".format(entry)) """ for ndmsg in super(Conntrack, self).dump(mark=mark, mark_mask=mark_mask): if tuple_orig is not None and not tuple_orig.nla_eq( ndmsg['nfgen_family'], ndmsg.get_attr('CTA_TUPLE_ORIG')): continue if tuple_reply is not None and not tuple_reply.nla_eq( ndmsg['nfgen_family'], ndmsg.get_attr('CTA_TUPLE_REPLY')): continue yield ConntrackEntry( ndmsg['nfgen_family'], ndmsg.get_attr('CTA_TUPLE_ORIG'), ndmsg.get_attr('CTA_TUPLE_REPLY'), ndmsg.get_attr('CTA_STATUS'), ndmsg.get_attr('CTA_TIMEOUT'), ndmsg.get_attr('CTA_PROTOINFO'), ndmsg.get_attr('CTA_MARK'), ndmsg.get_attr('CTA_ID'), ndmsg.get_attr('CTA_USE'), ) pyroute2-0.5.9/pyroute2/devlink.py0000644000175000017500000000415713610051400017026 0ustar peetpeet00000000000000import logging from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink import NLM_F_DUMP from pyroute2.netlink.devlink import DevlinkSocket from pyroute2.netlink.devlink import devlinkcmd from pyroute2.netlink.devlink import DEVLINK_NAMES log = logging.getLogger(__name__) class DL(DevlinkSocket): def __init__(self, *argv, **kwarg): # get specific groups kwarg if 'groups' in kwarg: groups = kwarg['groups'] del kwarg['groups'] else: groups = None # get specific async kwarg if 'async' in kwarg: # FIXME # raise deprecation error after 0.5.3 # log.warning('use "async_cache" instead of "async", ' '"async" is a keyword from Python 3.7') kwarg['async_cache'] = kwarg.pop('async') if 'async_cache' in kwarg: async_cache = kwarg.pop('async_cache') else: async_cache = False # align groups with async_cache if groups is None: groups = ~0 if async_cache else 0 # continue with init super(DL, self).__init__(*argv, **kwarg) # do automatic bind # FIXME: unfortunately we can not omit it here try: self.bind(groups, async_cache=async_cache) except: # thanks to jtluka at redhat.com and the LNST # team for the fixed fd leak super(DL, self).close() raise def list(self): return self.get_dump() def get_dump(self): msg = devlinkcmd() msg['cmd'] = DEVLINK_NAMES['DEVLINK_CMD_GET'] return self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_DUMP) def port_list(self): return self.get_port_dump() def get_port_dump(self): msg = devlinkcmd() msg['cmd'] = DEVLINK_NAMES['DEVLINK_CMD_PORT_GET'] return self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_DUMP) pyroute2-0.5.9/pyroute2/dhcp/0000755000175000017500000000000013621220110015725 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/dhcp/__init__.py0000644000175000017500000002376013610051400020050 0ustar peetpeet00000000000000''' DHCP 
protocol ============= The DHCP implementation here is far from complete, but already provides some basic functionality. Later it will be extended with IPv6 support and more DHCP options will be added. Right now it can be interesting mostly to developers, but not users and/or system administrators. So, the development hints first. The packet structure description is intentionally implemented as for netlink packets. Later these two parsers, netlink and generic, can be merged, so the syntax is more or less compatible. Packet fields ------------- There are two big groups of items within any DHCP packet. First, there are BOOTP/DHCP packet fields, they're defined with the `fields` attribute:: class dhcp4msg(msg): fields = ((name, format, policy), (name, format, policy), ... (name, format, policy)) The `name` can be any literal. Format should be specified as for the struct module, like `B` for `uint8`, or `i` for `int32`, or `>Q` for big-endian uint64. There are also aliases defined, so one can write `uint8` or `be16`, or like that. Possible aliases can be seen in the `pyroute2.protocols` module. The `policy` is a bit complicated. It can be a number or literal, and it will mean that it is a default value, that should be encoded if no other value is given. But when the `policy` is a dictionary, it can contain keys as follows:: 'l2addr': {'format': '6B', 'decode': ..., 'encode': ...} Keys `encode` and `decode` should contain filters to be used in decoding and encoding procedures. The encoding filter should accept the value from user's definition and should return a value that can be packed using `format`. The decoding filter should accept a value, decoded according to `format`, and should return value that can be used by a user. The `struct` module can not decode IP addresses etc, so they should be decoded as `4s`, e.g. Further transformation from 4 bytes string to a string like '10.0.0.1' performs the filter. DHCP options ------------ DHCP options are described in a similar way:: options = ((code, name, format), (code, name, format), ... (code, name, format)) Code is a `uint8` value, name can be any string literal. Format is a string, that must have a corresponding class, inherited from `pyroute2.dhcp.option`. One can find these classes in `pyroute2.dhcp` (more generic) or in `pyroute2.dhcp.dhcp4msg` (IPv4-specific). The option class must reside within dhcp message class. Every option class can be decoded in two ways. If it has fixed width fields, it can be decoded with ordinary `msg` routines, and in this case it can look like that:: class client_id(option): fields = (('type', 'uint8'), ('key', 'l2addr')) If it must be decoded by some custom rules, one can define the policy just like for the fields above:: class array8(option): policy = {'format': 'string', 'encode': lambda x: array('B', x).tobytes(), 'decode': lambda x: array('B', x).tolist()} In the corresponding modules, like in `pyroute2.dhcp.dhcp4msg`, one can define as many custom DHCP options, as one need. Just be sure, that they are compatible with the DHCP server and all fit into 1..254 (`uint8`) -- the 0 code is used for padding and the code 255 is the end of options code. 
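As a quick illustration, a custom option can be added by subclassing an
existing message class and extending its `options` tuple (the class name
`my4msg` and the option name `site_token` below are made up for this
example; the code 224 is taken from the site-specific option range)::

    class my4msg(dhcp4msg):
        options = dhcp4msg.options + ((224, 'site_token', 'string'), )

The `string` format class is inherited from the parent message class, so
only the option tuple is required here; options with a custom binary
layout need their own option class, declared as described above.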
''' import sys import struct from array import array from pyroute2.common import basestring from pyroute2.protocols import msg BOOTREQUEST = 1 BOOTREPLY = 2 DHCPDISCOVER = 1 DHCPOFFER = 2 DHCPREQUEST = 3 DHCPDECLINE = 4 DHCPACK = 5 DHCPNAK = 6 DHCPRELEASE = 7 DHCPINFORM = 8 if not hasattr(array, 'tobytes'): # Python2 and Python3 versions of array differ, # but we need here a consistent API w/o warnings class array(array): tobytes = array.tostring class option(msg): code = 0 data_length = 0 policy = None value = None def __init__(self, content=None, buf=b'', offset=0, value=None, code=0): msg.__init__(self, content=content, buf=buf, offset=offset, value=value) self.code = code @property def length(self): if self.data_length is None: return None if self.data_length == 0: return 1 else: return self.data_length + 2 def encode(self): # pack code self.buf += struct.pack('B', self.code) if self.code in (0, 255): return self # save buf save = self.buf self.buf = b'' # pack data into the new buf if self.policy is not None: value = self.policy.get('encode', lambda x: x)(self.value) if self.policy['format'] == 'string': fmt = '%is' % len(value) else: fmt = self.policy['format'] if sys.version_info[0] == 3 and isinstance(value, str): value = value.encode('utf-8') self.buf = struct.pack(fmt, value) else: msg.encode(self) # get the length data = self.buf self.buf = save self.buf += struct.pack('B', len(data)) # attach the packed data self.buf += data return self def decode(self): self.data_length = struct.unpack('B', self.buf[self.offset + 1: self.offset + 2])[0] if self.policy is not None: if self.policy['format'] == 'string': fmt = '%is' % self.data_length else: fmt = self.policy['format'] value = struct.unpack(fmt, self.buf[self.offset + 2: self.offset + 2 + self.data_length]) if len(value) == 1: value = value[0] value = self.policy.get('decode', lambda x: x)(value) if isinstance(value, basestring) and \ self.policy['format'] == 'string': value = value[:value.find('\x00')] self.value = value else: # remember current offset as msg.decode() will advance it offset = self.offset # move past the code and option length bytes so that msg.decode() # starts parsing at the right location self.offset += 2 msg.decode(self) # restore offset so that dhcpmsg.decode() advances it correctly self.offset = offset return self class dhcpmsg(msg): options = () l2addr = None _encode_map = {} _decode_map = {} def _register_options(self): for option in self.options: code, name, fmt = option[:3] self._decode_map[code] =\ self._encode_map[name] = {'name': name, 'code': code, 'format': fmt} def decode(self): msg.decode(self) self._register_options() self['options'] = {} while self.offset < len(self.buf): code = struct.unpack('B', self.buf[self.offset:self.offset + 1])[0] if code == 0: self.offset += 1 continue if code == 255: return self # code is unknown -- bypass it if code not in self._decode_map: length = struct.unpack('B', self.buf[self.offset + 1: self.offset + 2])[0] self.offset += length + 2 continue # code is known, work on it option_class = getattr(self, self._decode_map[code]['format']) option = option_class(buf=self.buf, offset=self.offset) option.decode() self.offset += option.length if option.value is not None: value = option.value else: value = option self['options'][self._decode_map[code]['name']] = value return self def encode(self): msg.encode(self) self._register_options() # put message type options = self.get('options') or {'message_type': DHCPDISCOVER, 'parameter_list': [1, 3, 6, 12, 15, 28]} self.buf += 
self.uint8(code=53, value=options['message_type']).encode().buf self.buf += self.client_id({'type': 1, 'key': self['chaddr']}, code=61).encode().buf self.buf += self.string(code=60, value='pyroute2').encode().buf for (name, value) in options.items(): if name in ('message_type', 'client_id', 'vendor_id'): continue fmt = self._encode_map.get(name, {'format': None})['format'] if fmt is None: continue # name is known, ok option_class = getattr(self, fmt) if isinstance(value, dict): option = option_class(value, code=self._encode_map[name]['code']) else: option = option_class(code=self._encode_map[name]['code'], value=value) self.buf += option.encode().buf self.buf += self.none(code=255).encode().buf return self class none(option): pass class be16(option): policy = {'format': '>H'} class be32(option): policy = {'format': '>I'} class uint8(option): policy = {'format': 'B'} class string(option): policy = {'format': 'string'} class array8(option): policy = {'format': 'string', 'encode': lambda x: array('B', x).tobytes(), 'decode': lambda x: array('B', x).tolist()} class client_id(option): fields = (('type', 'uint8'), ('key', 'l2addr')) pyroute2-0.5.9/pyroute2/dhcp/dhcp4msg.py0000644000175000017500000000452513610051400020020 0ustar peetpeet00000000000000from socket import inet_pton from socket import inet_ntop from socket import AF_INET from pyroute2.dhcp import dhcpmsg from pyroute2.dhcp import option class dhcp4msg(dhcpmsg): # # https://www.ietf.org/rfc/rfc2131.txt # fields = (('op', 'uint8', 1), # request ('htype', 'uint8', 1), # ethernet ('hlen', 'uint8', 6), # ethernet addr len ('hops', 'uint8'), ('xid', 'uint32'), ('secs', 'uint16'), ('flags', 'uint16'), ('ciaddr', 'ip4addr'), ('yiaddr', 'ip4addr'), ('siaddr', 'ip4addr'), ('giaddr', 'ip4addr'), ('chaddr', 'l2paddr'), ('sname', '64s'), ('file', '128s'), ('cookie', '4s', b'c\x82Sc')) # # https://www.ietf.org/rfc/rfc2132.txt # options = ((0, 'pad', 'none'), (1, 'subnet_mask', 'ip4addr'), (2, 'time_offset', 'be32'), (3, 'router', 'ip4list'), (4, 'time_server', 'ip4list'), (5, 'ien_name_server', 'ip4list'), (6, 'name_server', 'ip4list'), (7, 'log_server', 'ip4list'), (8, 'cookie_server', 'ip4list'), (9, 'lpr_server', 'ip4list'), (50, 'requested_ip', 'ip4addr'), (51, 'lease_time', 'be32'), (53, 'message_type', 'uint8'), (54, 'server_id', 'ip4addr'), (55, 'parameter_list', 'array8'), (57, 'messagi_size', 'be16'), (58, 'renewal_time', 'be32'), (59, 'rebinding_time', 'be32'), (60, 'vendor_id', 'string'), (61, 'client_id', 'client_id'), (255, 'end', 'none')) class ip4addr(option): policy = {'format': '4s', 'encode': lambda x: inet_pton(AF_INET, x), 'decode': lambda x: inet_ntop(AF_INET, x)} class ip4list(option): policy = {'format': 'string', 'encode': lambda x: ''.join([inet_pton(AF_INET, i) for i in x]), 'decode': lambda x: [inet_ntop(AF_INET, x[i * 4:i * 4 + 4]) for i in range(len(x) // 4)]} pyroute2-0.5.9/pyroute2/dhcp/dhcp4socket.py0000644000175000017500000001045713610051400020523 0ustar peetpeet00000000000000''' IPv4 DHCP socket ================ ''' from pyroute2.common import AddrPool from pyroute2.protocols import udpmsg from pyroute2.protocols import udp4_pseudo_header from pyroute2.protocols import ethmsg from pyroute2.protocols import ip4msg from pyroute2.protocols.rawsocket import RawSocket from pyroute2.dhcp.dhcp4msg import dhcp4msg def listen_udp_port(port=68): # pre-scripted BPF code that matches UDP port bpf_code = [[40, 0, 0, 12], [21, 0, 8, 2048], [48, 0, 0, 23], [21, 0, 6, 17], [40, 0, 0, 20], [69, 4, 0, 8191], [177, 0, 0, 14], [72, 
0, 0, 16], [21, 0, 1, port], [6, 0, 0, 65535], [6, 0, 0, 0]] return bpf_code class DHCP4Socket(RawSocket): ''' Parameters: * ifname -- interface name to work on This raw socket binds to an interface and installs BPF filter to get only its UDP port. It can be used in poll/select and provides also the context manager protocol, so can be used in `with` statements. It does not provide any DHCP state machine, and does not inspect DHCP packets, it is totally up to you. No default values are provided here, except `xid` -- DHCP transaction ID. If `xid` is not provided, DHCP4Socket generates it for outgoing messages. ''' def __init__(self, ifname, port=68): RawSocket.__init__(self, ifname, listen_udp_port(port)) self.port = port # Create xid pool # # Every allocated xid will be released automatically after 1024 # alloc() calls, there is no need to call free(). Minimal xid == 16 self.xid_pool = AddrPool(minaddr=16, release=1024) def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.close() def put(self, msg=None, dport=67): ''' Put DHCP message. Parameters: * msg -- dhcp4msg instance * dport -- DHCP server port If `msg` is not provided, it is constructed as default BOOTREQUEST + DHCPDISCOVER. Examples:: sock.put(dhcp4msg({'op': BOOTREQUEST, 'chaddr': 'ff:11:22:33:44:55', 'options': {'message_type': DHCPREQUEST, 'parameter_list': [1, 3, 6, 12, 15], 'requested_ip': '172.16.101.2', 'server_id': '172.16.101.1'}})) The method returns dhcp4msg that was sent, so one can get from there `xid` (transaction id) and other details. ''' # DHCP layer dhcp = msg or dhcp4msg({'chaddr': self.l2addr}) # dhcp transaction id if dhcp['xid'] is None: dhcp['xid'] = self.xid_pool.alloc() data = dhcp.encode().buf # UDP layer udp = udpmsg({'sport': self.port, 'dport': dport, 'len': 8 + len(data)}) udph = udp4_pseudo_header({'dst': '255.255.255.255', 'len': 8 + len(data)}) udp['csum'] = self.csum(udph.encode().buf + udp.encode().buf + data) udp.reset() # IPv4 layer ip4 = ip4msg({'len': 20 + 8 + len(data), 'proto': 17, 'dst': '255.255.255.255'}) ip4['csum'] = self.csum(ip4.encode().buf) ip4.reset() # MAC layer eth = ethmsg({'dst': 'ff:ff:ff:ff:ff:ff', 'src': self.l2addr, 'type': 0x800}) data = eth.encode().buf +\ ip4.encode().buf +\ udp.encode().buf +\ data self.send(data) dhcp.reset() return dhcp def get(self): ''' Get the next incoming packet from the socket and try to decode it as IPv4 DHCP. No analysis is done here, only MAC/IPv4/UDP headers are stripped out, and the rest is interpreted as DHCP. ''' (data, addr) = self.recvfrom(4096) eth = ethmsg(buf=data).decode() ip4 = ip4msg(buf=data, offset=eth.offset).decode() udp = udpmsg(buf=data, offset=ip4.offset).decode() return dhcp4msg(buf=data, offset=udp.offset).decode() pyroute2-0.5.9/pyroute2/ethtool/0000755000175000017500000000000013621220110016465 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/ethtool/__init__.py0000644000175000017500000000006313621076743020622 0ustar peetpeet00000000000000from .ethtool import Ethtool __all__ = [Ethtool] pyroute2-0.5.9/pyroute2/ethtool/common.py0000644000175000017500000001504513621076743020361 0ustar peetpeet00000000000000from collections import namedtuple # from ethtool/ethtool-copy.h of ethtool repo DUPLEX_HALF = 0x0 DUPLEX_FULL = 0x1 DUPLEX_UNKNOWN = 0xff LINK_DUPLEX_NAMES = { DUPLEX_HALF: "Half", DUPLEX_FULL: "Full", DUPLEX_UNKNOWN: "Unknown" } # Which connector port. 
PORT_TP = 0x00 PORT_AUI = 0x01 PORT_MII = 0x02 PORT_FIBRE = 0x03 PORT_BNC = 0x04 PORT_DA = 0x05 PORT_NONE = 0xef PORT_OTHER = 0xff LINK_PORT_NAMES = { PORT_TP: "Twisted Pair", PORT_AUI: "AUI", PORT_MII: "MII", PORT_FIBRE: "FIBRE", PORT_BNC: "BNC", PORT_DA: "Direct Attach Copper", PORT_NONE: "NONE", PORT_OTHER: "Other", } # Which transceiver to use. XCVR_INTERNAL = 0x00 # PHY and MAC are in the same package XCVR_EXTERNAL = 0x01 # PHY and MAC are in different packages LINK_TRANSCEIVER_NAMES = { XCVR_INTERNAL: "Internal", XCVR_EXTERNAL: "External", } # Enable or disable autonegotiation. AUTONEG_DISABLE = 0x00 AUTONEG_ENABLE = 0x01 LINK_AUTONEG_NAMES = { AUTONEG_DISABLE: "off", AUTONEG_ENABLE: "on", } # MDI or MDI-X status/control - if MDI/MDI_X/AUTO is set then # the driver is required to renegotiate link ETH_TP_MDI_INVALID = 0x00 # status: unknown; control: unsupported ETH_TP_MDI = 0x01 # status: MDI; control: force MDI ETH_TP_MDI_X = 0x02 # status: MDI-X; control: force MDI-X ETH_TP_MDI_AUTO = 0x03 # control: auto-select LINK_TP_MDI_NAMES = { ETH_TP_MDI: "off", ETH_TP_MDI_X: "on", ETH_TP_MDI_AUTO: "auto", } LMBTypePort = 0 LMBTypeMode = 1 LMBTypeOther = -1 LinkModeBit = namedtuple('LinkModeBit', ('bit_index', 'name', 'type')) LinkModeBits = ( LinkModeBit(bit_index=0, name='10baseT/Half', type=LMBTypeMode), LinkModeBit(bit_index=1, name='10baseT/Full', type=LMBTypeMode), LinkModeBit(bit_index=2, name='100baseT/Half', type=LMBTypeMode), LinkModeBit(bit_index=3, name='100baseT/Full', type=LMBTypeMode), LinkModeBit(bit_index=4, name='1000baseT/Half', type=LMBTypeMode), LinkModeBit(bit_index=5, name='1000baseT/Full', type=LMBTypeMode), LinkModeBit(bit_index=6, name='Autoneg', type=LMBTypeOther), LinkModeBit(bit_index=7, name='TP', type=LMBTypePort), LinkModeBit(bit_index=8, name='AUI', type=LMBTypePort), LinkModeBit(bit_index=9, name='MII', type=LMBTypePort), LinkModeBit(bit_index=10, name='FIBRE', type=LMBTypePort), LinkModeBit(bit_index=11, name='BNC', type=LMBTypePort), LinkModeBit(bit_index=12, name='10000baseT/Full', type=LMBTypeMode), LinkModeBit(bit_index=13, name='Pause', type=LMBTypeOther), LinkModeBit(bit_index=14, name='Asym_Pause', type=LMBTypeOther), LinkModeBit(bit_index=15, name='2500baseX/Full', type=LMBTypeMode), LinkModeBit(bit_index=16, name='Backplane', type=LMBTypeOther), LinkModeBit(bit_index=17, name='1000baseKX/Full', type=LMBTypeMode), LinkModeBit(bit_index=18, name='10000baseKX4/Full', type=LMBTypeMode), LinkModeBit(bit_index=19, name='10000baseKR/Full', type=LMBTypeMode), LinkModeBit(bit_index=20, name='10000baseR_FEC', type=LMBTypeMode), LinkModeBit(bit_index=21, name='20000baseMLD2/Full', type=LMBTypeMode), LinkModeBit(bit_index=22, name='20000baseKR2/Full', type=LMBTypeMode), LinkModeBit(bit_index=23, name='40000baseKR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=24, name='40000baseCR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=25, name='40000baseSR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=26, name='40000baseLR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=27, name='56000baseKR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=28, name='56000baseCR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=29, name='56000baseSR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=30, name='56000baseLR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=31, name='25000baseCR/Full', type=LMBTypeMode), LinkModeBit(bit_index=32, name='25000baseKR/Full', type=LMBTypeMode), LinkModeBit(bit_index=33, name='25000baseSR/Full', type=LMBTypeMode), LinkModeBit(bit_index=34, 
name='50000baseCR2/Full', type=LMBTypeMode), LinkModeBit(bit_index=35, name='50000baseKR2/Full', type=LMBTypeMode), LinkModeBit(bit_index=36, name='100000baseKR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=37, name='100000baseSR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=38, name='100000baseCR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=39, name='100000baseLR4_ER4/Full', type=LMBTypeMode), LinkModeBit(bit_index=40, name='50000baseSR2/Full', type=LMBTypeMode), LinkModeBit(bit_index=41, name='1000baseX/Full', type=LMBTypeMode), LinkModeBit(bit_index=42, name='10000baseCR/Full', type=LMBTypeMode), LinkModeBit(bit_index=43, name='10000baseSR/Full', type=LMBTypeMode), LinkModeBit(bit_index=44, name='10000baseLR/Full', type=LMBTypeMode), LinkModeBit(bit_index=45, name='10000baseLRM/Full', type=LMBTypeMode), LinkModeBit(bit_index=46, name='10000baseER/Full', type=LMBTypeMode), LinkModeBit(bit_index=47, name='2500baseT/Full', type=LMBTypeMode), LinkModeBit(bit_index=48, name='5000baseT/Full', type=LMBTypeMode), LinkModeBit(bit_index=49, name='FEC_NONE', type=LMBTypeOther), LinkModeBit(bit_index=50, name='FEC_RS', type=LMBTypeOther), LinkModeBit(bit_index=51, name='FEC_BASER', type=LMBTypeOther), LinkModeBit(bit_index=52, name='50000baseKR/Full', type=LMBTypeMode), LinkModeBit(bit_index=53, name='50000baseSR/Full', type=LMBTypeMode), LinkModeBit(bit_index=54, name='50000baseCR/Full', type=LMBTypeMode), LinkModeBit(bit_index=55, name='50000baseLR_ER_FR/Full', type=LMBTypeMode), LinkModeBit(bit_index=56, name='50000baseDR/Full', type=LMBTypeMode), LinkModeBit(bit_index=57, name='100000baseKR2/Full', type=LMBTypeMode), LinkModeBit(bit_index=58, name='100000baseSR2/Full', type=LMBTypeMode), LinkModeBit(bit_index=59, name='100000baseCR2/Full', type=LMBTypeMode), LinkModeBit(bit_index=60, name='100000baseLR2_ER2_FR2/Full', type=LMBTypeMode), LinkModeBit(bit_index=61, name='100000baseDR2/Full', type=LMBTypeMode), LinkModeBit(bit_index=62, name='200000baseKR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=63, name='200000baseSR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=64, name='200000baseLR4_ER4_FR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=65, name='200000baseDR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=66, name='200000baseCR4/Full', type=LMBTypeMode), LinkModeBit(bit_index=67, name='100baseT1/Full', type=LMBTypeMode), LinkModeBit(bit_index=68, name='1000baseT1/Full', type=LMBTypeMode), ) LinkModeBits_by_index = {bit.bit_index: bit for bit in LinkModeBits} pyroute2-0.5.9/pyroute2/ethtool/ethtool.py0000644000175000017500000002643313621076743020552 0ustar peetpeet00000000000000from collections import namedtuple import logging from pyroute2.netlink.generic.ethtool import NlEthtool from pyroute2.ethtool.ioctl import IoctlEthtool from pyroute2.ethtool.common import LMBTypePort from pyroute2.ethtool.common import LMBTypeMode from pyroute2.netlink.exceptions import NetlinkError from pyroute2.ethtool.common import LinkModeBits_by_index from pyroute2.ethtool.common import LINK_DUPLEX_NAMES from pyroute2.ethtool.common import LINK_PORT_NAMES from pyroute2.ethtool.common import LINK_TRANSCEIVER_NAMES from pyroute2.ethtool.common import LINK_TP_MDI_NAMES from pyroute2.ethtool.ioctl import WAKE_NAMES from ctypes import c_uint32 from ctypes import c_uint16 INT32MINUS_UINT32 = c_uint32(-1).value INT16MINUS_UINT16 = c_uint16(-1).value log = logging.getLogger(__name__) EthtoolBitsetBit = namedtuple('EthtoolBitsetBit', ('index', 'name', 'enable', 'set')) class UseIoctl(Exception): pass 
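# UseIoctl is used internally as a control-flow signal: when the netlink
# ethtool API is unavailable, or a request fails with NetlinkError, the
# Ethtool class defined below falls back to the ioctl implementation.
#
# A minimal usage sketch of the high level API (the interface name
# 'eth0' is only an example)::
#
#     from pyroute2.ethtool import Ethtool
#
#     et = Ethtool()
#     mode = et.get_link_mode('eth0')
#     print(mode.speed, mode.duplex, mode.supported_modes)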
class EthtoolCoalesce(object): @staticmethod def from_ioctl(ioctl_coalesce): return {name: int(value) for name, value in ioctl_coalesce.items()} @staticmethod def to_ioctl(ioctl_coalesce, coalesce): for name, value in coalesce.items(): if ioctl_coalesce[name] != value: ioctl_coalesce[name] = value class EthtoolFeature(object): __slots__ = ('set', 'index', 'name', 'enable', 'available') def __init__(self, set, index, name, enable, available): self.set = set self.index = index self.name = name self.enable = enable self.available = available class EthtoolFeatures(namedtuple('EthtoolFeatures', ('features',))): @classmethod def from_ioctl(cls, features): return cls({name: EthtoolFeature(set, index, name, enable, available) for name, enable, available, set, index in features}) @staticmethod def to_ioctl(ioctl_features, eth_features): for feature in eth_features.features.values(): enable = ioctl_features[feature.name] if feature.enable == enable: continue ioctl_features[feature.name] = feature.enable class EthtoolWakeOnLan(namedtuple('EthtoolWolMode', ('modes', 'sopass'))): @classmethod def from_netlink(cls, nl_wol): nl_wol = nl_wol[0].get_attr('ETHTOOL_A_WOL_MODES') wol_modes = {} for mode in nl_wol.get_attr('ETHTOOL_A_BITSET_BITS')['attrs']: mode = mode[1] index = mode.get_attr('ETHTOOL_A_BITSET_BIT_INDEX') name = mode.get_attr('ETHTOOL_A_BITSET_BIT_NAME') enable = mode.get_attr('ETHTOOL_A_BITSET_BIT_VALUE') wol_modes[name] = EthtoolBitsetBit( index, name, True if enable is True else False, set=None) return EthtoolWakeOnLan(modes=wol_modes, sopass=None) @classmethod def from_ioctl(cls, wol_mode): dict_wol_modes = {} for bit_index, name in WAKE_NAMES.items(): if wol_mode.supported & bit_index: dict_wol_modes[name] = EthtoolBitsetBit( bit_index, name, wol_mode.wolopts & bit_index != 0, set=None) return EthtoolWakeOnLan(modes=dict_wol_modes, sopass=None) class EthtoolStringBit(namedtuple('EthtoolStringBit', ('set', 'index', 'name'))): @classmethod def from_netlink(cls, nl_string_sets): nl_string_sets = nl_string_sets[0] ethtool_strings_set = set() for i in nl_string_sets.get_attr('ETHTOOL_A_STRSET_STRINGSETS')['attrs']: i = i[1] set_id = i.get_attr('ETHTOOL_A_STRINGSET_ID') i = i.get_attr('ETHTOOL_A_STRINGSET_STRINGS') for i in i['attrs']: i = i[1] ethtool_strings_set.add( cls(set=set_id, index=i.get_attr('ETHTOOL_A_STRING_INDEX'), name=i.get_attr('ETHTOOL_A_STRING_VALUE')) ) return ethtool_strings_set @classmethod def from_ioctl(cls, string_sets): return {cls(i // 32, i % 32, string) for i, string in enumerate(string_sets)} class EthtoolLinkInfo(namedtuple('EthtoolLinkInfo', ( 'port', 'phyaddr', 'tp_mdix', 'tp_mdix_ctrl', 'transceiver'))): def __new__(cls, port, phyaddr, tp_mdix, tp_mdix_ctrl, transceiver): port = LINK_PORT_NAMES.get(port, None) transceiver = LINK_TRANSCEIVER_NAMES.get(transceiver, None) tp_mdix = LINK_TP_MDI_NAMES.get(tp_mdix, None) tp_mdix_ctrl = LINK_TP_MDI_NAMES.get(tp_mdix_ctrl, None) return super(EthtoolLinkInfo, cls).__new__( cls, port, phyaddr, tp_mdix, tp_mdix_ctrl, transceiver) @classmethod def from_ioctl(cls, link_settings): return cls( port=link_settings.port, phyaddr=link_settings.phy_address, tp_mdix=link_settings.eth_tp_mdix, tp_mdix_ctrl=link_settings.eth_tp_mdix_ctrl, transceiver=link_settings.transceiver, ) @classmethod def from_netlink(cls, nl_link_mode): nl_link_mode = nl_link_mode[0] return cls( port=nl_link_mode.get_attr('ETHTOOL_A_LINKINFO_PORT'), phyaddr=nl_link_mode.get_attr('ETHTOOL_A_LINKINFO_PHYADDR'), 
tp_mdix=nl_link_mode.get_attr('ETHTOOL_A_LINKINFO_TP_MDIX'), tp_mdix_ctrl=nl_link_mode.get_attr('ETHTOOL_A_LINKINFO_TP_MDIX_CTR'), transceiver=nl_link_mode.get_attr('ETHTOOL_A_LINKINFO_TRANSCEIVER'), ) class EthtoolLinkMode(namedtuple('EthtoolLinkMode', ( 'speed', 'duplex', 'autoneg', 'supported_ports', 'supported_modes'))): def __new__(cls, speed, duplex, autoneg, supported_ports, supported_modes): if speed == 0 or speed == INT32MINUS_UINT32 or speed == INT16MINUS_UINT16: speed = None duplex = LINK_DUPLEX_NAMES.get(duplex, None) return super(EthtoolLinkMode, cls).__new__( cls, speed, duplex, bool(autoneg), supported_ports, supported_modes) @classmethod def from_ioctl(cls, link_settings): map_supported, map_advertising, map_lp_advertising = \ IoctlEthtool.get_link_mode_masks(link_settings) bits_supported = IoctlEthtool.get_link_mode_bits(map_supported) supported_ports = [] supported_modes = [] for bit in bits_supported: if bit.type == LMBTypePort: supported_ports.append(bit.name) elif bit.type == LMBTypeMode: supported_modes.append(bit.name) return cls( speed=link_settings.speed, duplex=link_settings.duplex, autoneg=link_settings.autoneg, supported_ports=supported_ports, supported_modes=supported_modes, ) @classmethod def from_netlink(cls, nl_link_mode): nl_link_mode = nl_link_mode[0] supported_ports = [] supported_modes = [] for bitset_bit in nl_link_mode.get_attr('ETHTOOL_A_LINKMODES_OURS')\ .get_attr('ETHTOOL_A_BITSET_BITS')['attrs']: bitset_bit = bitset_bit[1] bit_index = bitset_bit.get_attr('ETHTOOL_A_BITSET_BIT_INDEX') bit_name = bitset_bit.get_attr('ETHTOOL_A_BITSET_BIT_NAME') bit_value = bitset_bit.get_attr('ETHTOOL_A_BITSET_BIT_VALUE') if bit_value is not True: continue bit = LinkModeBits_by_index[bit_index] if bit.name != bit_name: log.error("Bit name is not the same as the target: %s <> %s", bit.name, bit_name) continue if bit.type == LMBTypePort: supported_ports.append(bit.name) elif bit.type == LMBTypeMode: supported_modes.append(bit.name) return cls( speed=nl_link_mode.get_attr("ETHTOOL_A_LINKMODES_SPEED"), duplex=nl_link_mode.get_attr("ETHTOOL_A_LINKMODES_DUPLEX"), autoneg=nl_link_mode.get_attr("ETHTOOL_A_LINKMODES_AUTONEG"), supported_ports=supported_ports, supported_modes=supported_modes, ) class Ethtool(object): def __init__(self): self._with_ioctl = IoctlEthtool() self._with_nl = NlEthtool() self._is_nl_working = self._with_nl.is_nlethtool_in_kernel() def _nl_exec(self, f, with_netlink, *args, **kwargs): if with_netlink is None: with_netlink = self._is_nl_working if with_netlink is False: raise UseIoctl() try: return f(*args, **kwargs) except NetlinkError: raise UseIoctl() def get_link_mode(self, ifname, with_netlink=None): try: link_mode = self._nl_exec(self._with_nl.get_linkmode, with_netlink, ifname) link_mode = EthtoolLinkMode.from_netlink(link_mode) except UseIoctl: self._with_ioctl.change_ifname(ifname) link_settings = self._with_ioctl.get_link_settings() link_mode = EthtoolLinkMode.from_ioctl(link_settings) return link_mode def get_link_info(self, ifname, with_netlink=None): try: link_info = self._nl_exec(self._with_nl.get_linkinfo, with_netlink, ifname) link_info = EthtoolLinkInfo.from_netlink(link_info) except UseIoctl: self._with_ioctl.change_ifname(ifname) link_settings = self._with_ioctl.get_link_settings() link_info = EthtoolLinkInfo.from_ioctl(link_settings) return link_info def get_strings_set(self, ifname, with_netlink=None): try: stringsets = self._nl_exec(self._with_nl.get_stringset, with_netlink, ifname) return EthtoolStringBit.from_netlink(stringsets) 
except UseIoctl: self._with_ioctl.change_ifname(ifname) stringsets = self._with_ioctl.get_stringset() return EthtoolStringBit.from_ioctl(stringsets) def get_wol(self, ifname): nl_working = self._is_nl_working if nl_working is True: try: wol = self._with_nl.get_wol(ifname) return EthtoolWakeOnLan.from_netlink(wol) except NetlinkError: nl_working = False if nl_working is False: self._with_ioctl.change_ifname(ifname) wol_mode = self._with_ioctl.get_wol() return EthtoolWakeOnLan.from_ioctl(wol_mode) def get_features(self, ifname): self._with_ioctl.change_ifname(ifname) return EthtoolFeatures.from_ioctl(self._with_ioctl.get_features()) def set_features(self, ifname, features): self._with_ioctl.change_ifname(ifname) ioctl_features = self._with_ioctl.get_features() EthtoolFeatures.to_ioctl(ioctl_features, features) self._with_ioctl.set_features(ioctl_features) def get_coalesce(self, ifname): self._with_ioctl.change_ifname(ifname) return EthtoolCoalesce.from_ioctl(self._with_ioctl.get_coalesce()) def set_coalesce(self, ifname, coalesce): self._with_ioctl.change_ifname(ifname) ioctl_coalesce = self._with_ioctl.get_coalesce() EthtoolCoalesce.to_ioctl(ioctl_coalesce, coalesce) self._with_ioctl.set_coalesce(ioctl_coalesce) pyroute2-0.5.9/pyroute2/ethtool/ioctl.py0000644000175000017500000004330613621076743020204 0ustar peetpeet00000000000000import ctypes import socket import fcntl import errno from pyroute2.ethtool.common import LinkModeBits # ethtool/ethtool-copy.h IFNAMSIZ = 16 SIOCETHTOOL = 0x8946 ETHTOOL_GSET = 0x1 ETHTOOL_GCOALESCE = 0xe ETHTOOL_SCOALESCE = 0xf ETHTOOL_GSSET_INFO = 0x37 ETHTOOL_GWOL = 0x00000005 ETHTOOL_GFLAGS = 0x00000025 ETHTOOL_GFEATURES = 0x0000003a ETHTOOL_SFEATURES = 0x0000003b ETHTOOL_GLINKSETTINGS = 0x0000004c ETHTOOL_GSTRINGS = 0x0000001b ETH_GSTRING_LEN = 32 ETHTOOL_GRXCSUM = 0x00000014 ETHTOOL_SRXCSUM = 0x00000015 ETHTOOL_GTXCSUM = 0x00000016 ETHTOOL_STXCSUM = 0x00000017 ETHTOOL_GSG = 0x00000018 ETHTOOL_SSG = 0x00000019 ETHTOOL_GTSO = 0x0000001e ETHTOOL_STSO = 0x0000001f ETHTOOL_GUFO = 0x00000021 ETHTOOL_SUFO = 0x00000022 ETHTOOL_GGSO = 0x00000023 ETHTOOL_SGSO = 0x00000024 ETHTOOL_GGRO = 0x0000002b ETHTOOL_SGRO = 0x0000002c SOPASS_MAX = 6 ETH_SS_FEATURES = 4 ETH_FLAG_RXCSUM = (1 << 0) ETH_FLAG_TXCSUM = (1 << 1) ETH_FLAG_SG = (1 << 2) ETH_FLAG_TSO = (1 << 3) ETH_FLAG_UFO = (1 << 4) ETH_FLAG_GSO = (1 << 5) ETH_FLAG_GRO = (1 << 6) ETH_FLAG_TXVLAN = (1 << 7) ETH_FLAG_RXVLAN = (1 << 8) ETH_FLAG_LRO = (1 << 15) ETH_FLAG_NTUPLE = (1 << 27) ETH_FLAG_RXHASH = (1 << 28) ETH_FLAG_EXT_MASK = ETH_FLAG_LRO | ETH_FLAG_RXVLAN | ETH_FLAG_TXVLAN | \ ETH_FLAG_NTUPLE | ETH_FLAG_RXHASH SCHAR_MAX = 127 ETHTOOL_LINK_MODE_MASK_MAX_KERNEL_NU32 = SCHAR_MAX # Wake-On-Lan options. 
WAKE_PHY = (1 << 0) WAKE_UCAST = (1 << 1) WAKE_MCAST = (1 << 2) WAKE_BCAST = (1 << 3) WAKE_ARP = (1 << 4) WAKE_MAGIC = (1 << 5) WAKE_MAGICSECURE = (1 << 6) # only meaningful if WAKE_MAGIC WAKE_FILTER = (1 << 7) WAKE_NAMES = { WAKE_PHY: "phy", WAKE_UCAST: "ucast", WAKE_MCAST: "mcast", WAKE_BCAST: "bcast", WAKE_ARP: "arp", WAKE_MAGIC: "magic", WAKE_MAGICSECURE: "magic_secure", WAKE_FILTER: "filter", } class EthtoolError(Exception): pass class NotSupportedError(EthtoolError): pass class NoSuchDevice(EthtoolError): pass class DictStruct(ctypes.Structure): def __init__(self, *args, **kwargs): super(DictStruct, self).__init__(*args, **kwargs) self._fields_as_dict = { name: [lambda k: getattr(self, k), lambda k, v: setattr(self, k, v)] for name, ct in self._fields_} def __getitem__(self, key): return self._fields_as_dict[key][0](key) def __setitem__(self, key, value): return self._fields_as_dict[key][1](key, value) def __iter__(self): return iter(self._fields_as_dict) def items(self): for k, f in self._fields_as_dict.items(): getter, _ = f yield k, getter(k) def keys(self): return self._fields_as_dict.keys() def __contains__(self, key): return key in self._fields_as_dict class EthtoolWolInfo(DictStruct): _fields_ = [ ("cmd", ctypes.c_uint32), ("supported", ctypes.c_uint32), ("wolopts", ctypes.c_uint32), ("sopass", ctypes.c_uint8 * SOPASS_MAX), ] class EthtoolCmd(DictStruct): _pack_ = 1 _fields_ = [ ("cmd", ctypes.c_uint32), ("supported", ctypes.c_uint32), ("advertising", ctypes.c_uint32), ("speed", ctypes.c_uint16), ("duplex", ctypes.c_uint8), ("port", ctypes.c_uint8), ("phy_address", ctypes.c_uint8), ("transceiver", ctypes.c_uint8), ("autoneg", ctypes.c_uint8), ("mdio_support", ctypes.c_uint8), ("maxtxpkt", ctypes.c_uint32), ("maxrxpkt", ctypes.c_uint32), ("speed_hi", ctypes.c_uint16), ("eth_tp_mdix", ctypes.c_uint8), ("reserved2", ctypes.c_uint8), ("lp_advertising", ctypes.c_uint32), ("reserved", ctypes.c_uint32 * 2), ] class IoctlEthtoolLinkSettings(DictStruct): _pack_ = 1 _fields_ = [ ("cmd", ctypes.c_uint32), ("speed", ctypes.c_uint32), ("duplex", ctypes.c_uint8), ("port", ctypes.c_uint8), ("phy_address", ctypes.c_uint8), ("autoneg", ctypes.c_uint8), ("mdio_support", ctypes.c_uint8), ("eth_tp_mdix", ctypes.c_uint8), ("eth_tp_mdix_ctrl", ctypes.c_uint8), ("link_mode_masks_nwords", ctypes.c_int8), ("transceiver", ctypes.c_uint8), ("reserved1", ctypes.c_uint8 * 3), ("reserved", ctypes.c_uint32 * 7), ("link_mode_data", ctypes.c_uint32 * (3 * ETHTOOL_LINK_MODE_MASK_MAX_KERNEL_NU32)), ] class EthtoolCoalesce(DictStruct): _pack_ = 1 _fields_ = [ # ETHTOOL_{G,S}COALESCE ("cmd", ctypes.c_uint32), # How many usecs to delay an RX interrupt after # a packet arrives. If 0, only rx_max_coalesced_frames #is used. ("rx_coalesce_usecs", ctypes.c_uint32), # How many packets to delay an RX interrupt after # a packet arrives. If 0, only rx_coalesce_usecs is # used. It is illegal to set both usecs and max frames # to zero as this would cause RX interrupts to never be # generated. ("rx_max_coalesced_frames", ctypes.c_uint32), # Same as above two parameters, except that these values # apply while an IRQ is being serviced by the host. Not # all cards support this feature and the values are ignored # in that case. ("rx_coalesce_usecs_irq", ctypes.c_uint32), ("rx_max_coalesced_frames_irq", ctypes.c_uint32), # How many usecs to delay a TX interrupt after # a packet is sent. If 0, only tx_max_coalesced_frames # is used. 
("tx_coalesce_usecs", ctypes.c_uint32), # How many packets to delay a TX interrupt after # a packet is sent. If 0, only tx_coalesce_usecs is # used. It is illegal to set both usecs and max frames # to zero as this would cause TX interrupts to never be # generated. ("tx_max_coalesced_frames", ctypes.c_uint32), # Same as above two parameters, except that these values # apply while an IRQ is being serviced by the host. Not # all cards support this feature and the values are ignored # in that case. ("tx_coalesce_usecs_irq", ctypes.c_uint32), ("tx_max_coalesced_frames_irq", ctypes.c_uint32), # How many usecs to delay in-memory statistics # block updates. Some drivers do not have an in-memory # statistic block, and in such cases this value is ignored. # This value must not be zero. ("stats_block_coalesce_usecs", ctypes.c_uint32), # Adaptive RX/TX coalescing is an algorithm implemented by # some drivers to improve latency under low packet rates and # improve throughput under high packet rates. Some drivers # only implement one of RX or TX adaptive coalescing. Anything # not implemented by the driver causes these values to be # silently ignored. ("use_adaptive_rx_coalesce", ctypes.c_uint32), ("use_adaptive_tx_coalesce", ctypes.c_uint32), # When the packet rate (measured in packets per second) # is below pkt_rate_low, the {rx,tx}_*_low parameters are # used. ("pkt_rate_low", ctypes.c_uint32), ("rx_coalesce_usecs_low", ctypes.c_uint32), ("rx_max_coalesced_frames_low", ctypes.c_uint32), ("tx_coalesce_usecs_low", ctypes.c_uint32), ("tx_max_coalesced_frames_low", ctypes.c_uint32), # When the packet rate is below pkt_rate_high but above # pkt_rate_low (both measured in packets per second) the # normal {rx,tx}_* coalescing parameters are used. # When the packet rate is (measured in packets per second) # is above pkt_rate_high, the {rx,tx}_*_high parameters are # used. ("pkt_rate_high", ctypes.c_uint32), ("rx_coalesce_usecs_high", ctypes.c_uint32), ("rx_max_coalesced_frames_high", ctypes.c_uint32), ("tx_coalesce_usecs_high", ctypes.c_uint32), ("tx_max_coalesced_frames_high", ctypes.c_uint32), # How often to do adaptive coalescing packet rate sampling, # measured in seconds. Must not be zero. 
("rate_sample_interval", ctypes.c_uint32), ] class EthtoolValue(ctypes.Structure): _fields_ = [ ("cmd", ctypes.c_uint32), ("data", ctypes.c_uint32), ] class EthtoolSsetInfo(ctypes.Structure): _pack_ = 1 _fields_ = [ ("cmd", ctypes.c_uint32), ("reserved", ctypes.c_uint32), ("sset_mask", ctypes.c_uint64), ("data", ctypes.c_uint32), ] class EthtoolGstrings(ctypes.Structure): _fields_ = [ ("cmd", ctypes.c_uint32), ("string_set", ctypes.c_uint32), ("len", ctypes.c_uint32), # If you have more than 256 features on your NIC # they will not be seen by it ("strings", ctypes.c_ubyte * ETH_GSTRING_LEN * 256) ] class EthtoolGetFeaturesBlock(ctypes.Structure): _fields_ = [ ("available", ctypes.c_uint32), ("requested", ctypes.c_uint32), ("active", ctypes.c_uint32), ("never_changed", ctypes.c_uint32), ] class EthtoolSetFeaturesBlock(ctypes.Structure): _fields_ = [ ("changed", ctypes.c_uint32), ("active", ctypes.c_uint32), ] def div_round_up(n, d): return int(((n) + (d) - 1) / (d)) def feature_bits_to_blocks(n_bits): return div_round_up(n_bits, 32) class EthtoolGfeatures(ctypes.Structure): _fields_ = [ ("cmd", ctypes.c_uint32), ("size", ctypes.c_uint32), ("features", EthtoolGetFeaturesBlock * feature_bits_to_blocks(256)), ] class EthtoolSfeatures(ctypes.Structure): _fields_ = [ ("cmd", ctypes.c_uint32), ("size", ctypes.c_uint32), ("features", EthtoolSetFeaturesBlock * feature_bits_to_blocks(256)), ] class FeatureState(ctypes.Structure): _fields_ = [ ("off_flags", ctypes.c_uint32), ("features", EthtoolGfeatures), ] class IfReqData(ctypes.Union): _fields_ = [ ("ifr_data", ctypes.POINTER(EthtoolCmd)), ("coalesce", ctypes.POINTER(EthtoolCoalesce)), ("value", ctypes.POINTER(EthtoolValue)), ("sset_info", ctypes.POINTER(EthtoolSsetInfo)), ("gstrings", ctypes.POINTER(EthtoolGstrings)), ("gfeatures", ctypes.POINTER(EthtoolGfeatures)), ("sfeatures", ctypes.POINTER(EthtoolSfeatures)), ("glinksettings", ctypes.POINTER(IoctlEthtoolLinkSettings)), ("wolinfo", ctypes.POINTER(EthtoolWolInfo)), ] class IfReq(ctypes.Structure): _pack_ = 1 _anonymous_ = ("u",) _fields_ = [ ("ifr_name", ctypes.c_uint8 * IFNAMSIZ), ("u", IfReqData) ] class EthtoolFeaturesList(object): def __init__(self, cmd, stringsset): self._offsets = {} self._cmd = cmd self._cmd_set = EthtoolSfeatures(cmd=ETHTOOL_SFEATURES, size=cmd.size) self._gfeatures = cmd.features self._sfeatures = self._cmd_set.features feature_i = 0 for i, name in enumerate(stringsset): feature_i = i // 32 flag_bit = 1 << (i % 32) self._offsets[name] = (feature_i, flag_bit) while feature_i: feature_i -= 1 self._gfeatures[feature_i].active = \ self._sfeatures[feature_i].active self._gfeatures[feature_i].changed = 0 def is_available(self, name): feature_i, flag_bit = self._offsets[name] return self._gfeatures[feature_i].available & flag_bit != 0 def is_active(self, name): feature_i, flag_bit = self._offsets[name] return self._gfeatures[feature_i].active & flag_bit != 0 def is_requested(self, name): feature_i, flag_bit = self._offsets[name] return self._gfeatures[feature_i].requested & flag_bit != 0 def is_never_changed(self, name): feature_i, flag_bit = self._offsets[name] return self._gfeatures[feature_i].never_changed & flag_bit != 0 def __iter__(self): for name in self._offsets.keys(): feature_i, flag_bit = self._offsets[name] yield (name, self.get_value(name), self.is_available(name), feature_i, flag_bit) def keys(self): return self._offsets.keys() def __contains__(self, name): return name in self._offsets def __getitem__(self, key): return self.get_value(key) def __setitem__(self, 
key, value): return self.set_value(key, value) def get_value(self, name): return self.is_active(name) def set_value(self, name, value): if value not in (1, 0, True, False): raise ValueError("Need a boolean value") feature_i, flag_bit = self._offsets[name] if value: self._gfeatures[feature_i].active |= flag_bit self._sfeatures[feature_i].active |= flag_bit else: # active is ctypes.c_uint32 self._gfeatures[feature_i].active &= (flag_bit ^ 0xFFFF) self._sfeatures[feature_i].active &= (flag_bit ^ 0xFFFF) self._sfeatures[feature_i].changed |= flag_bit class IoctlEthtool(object): def __init__(self, ifname=None): self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) self.ifname = None self.ifreq = None if ifname is not None: self.change_ifname(ifname) def change_ifname(self, ifname): self.ifname = bytearray(ifname, 'utf-8') self.ifname.extend(b"\0" * (IFNAMSIZ - len(self.ifname))) self.ifreq = IfReq() self.ifreq.ifr_name = (ctypes.c_uint8 * IFNAMSIZ)(*self.ifname) def ioctl(self): try: if fcntl.ioctl(self.sock, SIOCETHTOOL, self.ifreq): raise NotSupportedError() except OSError as e: if e.errno == errno.ENOTSUP: raise NotSupportedError(self.ifname.decode("utf-8")) elif e.errno == errno.ENODEV: raise NoSuchDevice(self.ifname.decode("utf-8")) raise def get_stringset(self, set_id=ETH_SS_FEATURES, drvinfo_offset=0, null_terminate=1): sset_info = EthtoolSsetInfo(cmd=ETHTOOL_GSSET_INFO, reserved=0, sset_mask=1 << set_id) self.ifreq.sset_info = ctypes.pointer(sset_info) fcntl.ioctl(self.sock, SIOCETHTOOL, self.ifreq) if sset_info.sset_mask: length = sset_info.data else: length = 0 strings_found = [] gstrings = EthtoolGstrings(cmd=ETHTOOL_GSTRINGS, string_set=set_id, len=length) self.ifreq.gstrings = ctypes.pointer(gstrings) self.ioctl() for i in range(length): buf = '' for j in range(ETH_GSTRING_LEN): code = gstrings.strings[i][j] if code == 0: break buf += chr(code) strings_found.append(buf) return strings_found def get_features(self): stringsset = self.get_stringset() cmd = EthtoolGfeatures() cmd.cmd = ETHTOOL_GFEATURES cmd.size = feature_bits_to_blocks(len(stringsset)) self.ifreq.gfeatures = ctypes.pointer(cmd) self.ioctl() return EthtoolFeaturesList(cmd, stringsset) def set_features(self, features): self.ifreq.sfeatures = ctypes.pointer(features._cmd_set) return self.ioctl() def get_cmd(self): cmd = EthtoolCmd(cmd=ETHTOOL_GSET) self.ifreq.ifr_data = ctypes.pointer(cmd) self.ioctl() return cmd @staticmethod def get_link_mode_bits(map_bits): for bit in LinkModeBits: map_i = bit.bit_index // 32 map_bit = bit.bit_index % 32 if map_i >= len(map_bits): continue if map_bits[map_i] & (1 << map_bit): yield bit @staticmethod def get_link_mode_masks(ecmd): map_supported = [] map_advertising = [] map_lp_advertising = [] i = 0 while i != ecmd.link_mode_masks_nwords: map_supported.append(ecmd.link_mode_data[i]) i += 1 while i != ecmd.link_mode_masks_nwords * 2: map_advertising.append(ecmd.link_mode_data[i]) i += 1 while i != ecmd.link_mode_masks_nwords * 3: map_lp_advertising.append(ecmd.link_mode_data[i]) i += 1 return (map_supported, map_advertising, map_lp_advertising) def get_link_settings(self): ecmd = IoctlEthtoolLinkSettings() ecmd.cmd = ETHTOOL_GLINKSETTINGS self.ifreq.glinksettings = ctypes.pointer(ecmd) # Handshake with kernel to determine number of words for link # mode bitmaps. When requested number of bitmap words is not # the one expected by kernel, the latter returns the integer # opposite of what it is expecting. We request length 0 below # (aka. invalid bitmap length) to get this info. 
self.ioctl() # see above: we expect a strictly negative value from kernel. if ecmd.link_mode_masks_nwords >= 0 or \ ecmd.cmd != ETHTOOL_GLINKSETTINGS: raise NotSupportedError() # got the real ecmd.req.link_mode_masks_nwords, # now send the real request ecmd.link_mode_masks_nwords = -ecmd.link_mode_masks_nwords self.ioctl() if ecmd.link_mode_masks_nwords <= 0 or \ ecmd.cmd != ETHTOOL_GLINKSETTINGS: raise NotSupportedError() return ecmd def get_coalesce(self): cmd = EthtoolCoalesce(cmd=ETHTOOL_GCOALESCE) self.ifreq.coalesce = ctypes.pointer(cmd) self.ioctl() return cmd def set_coalesce(self, coalesce): coalesce.cmd = ETHTOOL_SCOALESCE self.ifreq.coalesce = ctypes.pointer(coalesce) self.ioctl() return def get_wol(self): cmd = EthtoolWolInfo(cmd=ETHTOOL_GWOL) self.ifreq.wolinfo = ctypes.pointer(cmd) self.ioctl() return cmd pyroute2-0.5.9/pyroute2/inotify/0000755000175000017500000000000013621220110016470 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/inotify/__init__.py0000644000175000017500000000000013610051400020571 0ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/inotify/inotify_fd.py0000644000175000017500000000500113610051400021172 0ustar peetpeet00000000000000 import os import select import socket import ctypes import ctypes.util import threading from pyroute2.inotify.inotify_msg import inotify_msg class Inotify(object): def __init__(self, libc=None, path=None): self.fd = None self.wd = {} self.ctlr, self.ctlw = os.pipe() self.path = set(path) self._poll = select.poll() self._poll.register(self.ctlr) self.lock = threading.RLock() self.libc = libc or ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True) def bind(self, *argv, **kwarg): with self.lock: if self.fd is not None: raise socket.error(22, 'Invalid argument') self.fd = self.libc.inotify_init() self._poll.register(self.fd) for path in self.path: self.register_path(path) def register_path(self, path, mask=0x100 | 0x200): os.stat(path) with self.lock: if path in self.wd: return if self.fd is not None: s_path = ctypes.create_string_buffer(path.encode('utf-8')) wd = (self .libc .inotify_add_watch(self.fd, ctypes.byref(s_path), mask)) self.wd[wd] = path self.path.add(path) def unregister_path(self): pass def get(self): # events = self._poll.poll() for fd, event in events: if fd == self.fd: data = os.read(self.fd, 4096) for msg in self.parse(data): yield msg else: yield def close(self): with self.lock: if self.fd is not None: os.write(self.ctlw, b'\0') for fd in (self.fd, self.ctlw, self.ctlr): if fd is not None: try: os.close(fd) self._poll.unregister(fd) except Exception: pass def parse(self, data): offset = 0 while offset <= len(data) - 16: # pick one header msg = inotify_msg(data, offset=offset) msg.decode() if msg['wd'] == 0: break msg['path'] = self.wd[msg['wd']] offset += msg.length yield msg pyroute2-0.5.9/pyroute2/inotify/inotify_msg.py0000644000175000017500000000077313610051400021402 0ustar peetpeet00000000000000import struct from pyroute2.netlink import nlmsg_base class inotify_msg(nlmsg_base): fields = (('wd', 'i'), ('mask', 'I'), ('cookie', 'I'), ('name_length', 'I')) def decode(self): super(inotify_msg, self).decode() name, = struct.unpack_from('%is' % self['name_length'], self.data, self.offset + 16) self['name'] = name.decode('utf-8').strip('\0') self.length = self['name_length'] + 16 pyroute2-0.5.9/pyroute2/ipdb/0000755000175000017500000000000013621220110015725 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/ipdb/__init__.py0000644000175000017500000000000013610051400020026 0ustar 
peetpeet00000000000000pyroute2-0.5.9/pyroute2/ipdb/exceptions.py0000644000175000017500000000035713610051400020467 0ustar peetpeet00000000000000 class DeprecationException(Exception): pass class CommitException(Exception): pass class CreateException(Exception): pass class PartialCommitException(Exception): pass class ShutdownException(Exception): pass pyroute2-0.5.9/pyroute2/ipdb/interfaces.py0000644000175000017500000015052513610051400020434 0ustar peetpeet00000000000000import time import errno import traceback from socket import AF_INET from socket import AF_INET6 from socket import inet_ntop from socket import inet_pton from socket import error as socket_error from pyroute2 import config from pyroute2.config import AF_BRIDGE from pyroute2.common import basestring from pyroute2.common import dqn2int from pyroute2.common import View from pyroute2.common import Dotkeys from pyroute2.netlink import rtnl from pyroute2.netlink.exceptions import NetlinkError from pyroute2.netlink.rtnl.ifinfmsg import IFF_MASK from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg from pyroute2.ipdb.transactional import Transactional from pyroute2.ipdb.transactional import with_transaction from pyroute2.ipdb.transactional import SYNC_TIMEOUT from pyroute2.ipdb.linkedset import LinkedSet from pyroute2.ipdb.exceptions import CreateException from pyroute2.ipdb.exceptions import CommitException from pyroute2.ipdb.exceptions import PartialCommitException supported_kinds = ('bridge', 'bond', 'tuntap', 'vxlan', 'gre', 'gretap', 'ip6gre', 'ip6gretap', 'macvlan', 'macvtap', 'ipvlan', 'vrf', 'vti') groups = rtnl.RTMGRP_LINK |\ rtnl.RTMGRP_NEIGH |\ rtnl.RTMGRP_IPV4_IFADDR |\ rtnl.RTMGRP_IPV6_IFADDR def _get_data_fields(): global supported_kinds ret = [] for data in supported_kinds: msg = ifinfmsg.ifinfo.data_map.get(data) if msg is not None: if getattr(msg, 'prefix', None) is not None: ret += [msg.nla2name(i[0]) for i in msg.nla_map] else: ret += [ifinfmsg.nla2name(i[0]) for i in msg.nla_map] return ret def _br_time_check(x, y): return abs(x - y) < 5 class Interface(Transactional): ''' Objects of this class represent network interface and all related objects: * addresses * (todo) neighbours * (todo) routes Interfaces provide transactional model and can act as context managers. Any attribute change implicitly starts a transaction. The transaction can be managed with three methods: * review() -- review changes * rollback() -- drop all the changes * commit() -- try to apply changes If anything will go wrong during transaction commit, it will be rolled back authomatically and an exception will be raised. Failed transaction review will be attached to the exception. 
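    A short example of the implicit transaction mode (the interface name
    and the address below are illustrative)::

        with ipdb.interfaces['eth0'] as i:
            i.add_ip('10.0.0.1/24')
            i['mtu'] = 1280
        # the transaction is committed on exit from the `with` block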
''' _fields_cmp = {'flags': lambda x, y: x & y & IFF_MASK == y & IFF_MASK, 'br_hello_time': _br_time_check, 'br_max_age': _br_time_check, 'br_ageing_time': _br_time_check, 'br_forward_delay': _br_time_check, 'br_mcast_membership_intvl': _br_time_check, 'br_mcast_querier_intvl': _br_time_check, 'br_mcast_query_intvl': _br_time_check, 'br_mcast_query_response_intvl': _br_time_check, 'br_mcast_startup_query_intvl': _br_time_check} _virtual_fields = ['ipdb_scope', 'ipdb_priority', 'vlans', 'ipaddr', 'ports', 'vlan_flags', 'net_ns_fd', 'net_ns_pid'] _fields = [ifinfmsg.nla2name(i[0]) for i in ifinfmsg.nla_map] for name in ('bridge_slave_data', ): data = getattr(ifinfmsg.ifinfo, name) _fields.extend([ifinfmsg.nla2name(i[0]) for i in data.nla_map]) _fields.append('index') _fields.append('flags') _fields.append('mask') _fields.append('change') _fields.append('kind') _fields.append('peer') _fields.append('vlan_id') _fields.append('vlan_protocol') _fields.append('bond_mode') _fields.extend(_get_data_fields()) _fields.extend(_virtual_fields) def __init__(self, ipdb, mode=None, parent=None, uid=None): ''' Parameters: * ipdb -- ipdb() reference * mode -- transaction mode ''' Transactional.__init__(self, ipdb, mode) self.cleanup = ('header', 'linkinfo', 'protinfo', 'af_spec', 'attrs', 'event', 'map', 'stats', 'stats64', 'change', '__align') self.ingress = None self.egress = None self.nlmsg = None self.errors = [] self.partial = False self._exception = None self._deferred_link = None self._tb = None self._linked_sets.add('ipaddr') self._linked_sets.add('ports') self._linked_sets.add('vlans') self._freeze = None self._delay_add_port = set() self._delay_del_port = set() # 8<----------------------------------- # local setup: direct state is required with self._direct_state: for i in ('change', 'mask'): del self[i] self['ipaddr'] = self.ipdb._ipaddr_set() self['ports'] = LinkedSet() self['vlans'] = LinkedSet() self['ipdb_priority'] = 0 # 8<----------------------------------- def __hash__(self): return self['index'] @property def if_master(self): ''' [property] Link to the parent interface -- if it exists ''' return self.get('master', None) def detach(self): self.ipdb.interfaces._detach(self['ifname'], self['index'], self.nlmsg) return self def freeze(self): if self._freeze is not None: raise RuntimeError("the interface is frozen already") dump = self.pick() def cb(ipdb, msg, action): if msg.get('index', -1) == dump['index']: try: # important: that's a rollback, so do not # try to revert changes in the case of failure self.commit(transaction=dump, commit_phase=2, commit_mask=2) except Exception: pass self._freeze = self.ipdb.register_callback(cb) return self def unfreeze(self): self.ipdb.unregister_callback(self._freeze) self._freeze = None return self def load(self, data): ''' Load the data from a dictionary to an existing transaction. Requires `commit()` call, or must be called from within a `with` statement. Sample:: data = json.loads(...) with ipdb.interfaces['dummy1'] as i: i.load(data) Sample, mode `explicit:: data = json.loads(...) 
i = ipdb.interfaces['dummy1'] i.begin() i.load(data) i.commit() ''' for key in data: if data[key] is None: continue if key == 'ipaddr': for addr in self['ipaddr']: self.del_ip(*addr) for addr in data[key]: if isinstance(addr, basestring): addr = (addr, ) self.add_ip(*addr) elif key == 'ports': for port in self['ports']: self.del_port(port) for port in data[key]: self.add_port(port) elif key == 'vlans': for vlan in self['vlans']: self.del_vlan(vlan) for vlan in data[key]: if vlan != 1: self.add_vlan(vlan) elif key in ('neighbours', 'family'): # ignore on load pass else: self[key] = data[key] return self def load_dict(self, data): ''' Update the interface info from a dictionary. This call always bypasses open transactions, loading changes directly into the interface data. ''' with self._direct_state: self.load(data) def load_netlink(self, dev): ''' Update the interface info from RTM_NEWLINK message. This call always bypasses open transactions, loading changes directly into the interface data. ''' global supported_kinds with self._direct_state: if self['ipdb_scope'] == 'locked': # do not touch locked interfaces return if self['ipdb_scope'] in ('shadow', 'create'): # ignore non-broadcast messages if dev['header']['sequence_number'] != 0: return # ignore ghost RTM_NEWLINK messages if (config.kernel[0] < 3) and \ (not dev.get_attr('IFLA_AF_SPEC')): return for (name, value) in dev.items(): self[name] = value for cell in dev['attrs']: # # Parse on demand # # At that moment, being not referenced, the # NLA is not decoded (yet). Calling # `__getitem__()` on nla_slot triggers the # NLA decoding, if the nla is referenced: # norm = ifinfmsg.nla2name(cell[0]) if norm not in self.cleanup: self[norm] = cell[1] # load interface kind linkinfo = dev.get_attr('IFLA_LINKINFO') if linkinfo is not None: kind = linkinfo.get_attr('IFLA_INFO_KIND') if kind is not None: self['kind'] = kind if kind == 'vlan': data = linkinfo.get_attr('IFLA_INFO_DATA') self['vlan_id'] = data.get_attr('IFLA_VLAN_ID') self['vlan_protocol'] = data\ .get_attr('IFLA_VLAN_PROTOCOL') self['vlan_flags'] = data\ .get_attr('IFLA_VLAN_FLAGS', {})\ .get('flags', 0) if kind in supported_kinds: data = linkinfo.get_attr('IFLA_INFO_DATA') or {} for nla in data.get('attrs', []): norm = ifinfmsg.nla2name(nla[0]) self[norm] = nla[1] # load vlans if dev['family'] == AF_BRIDGE: spec = dev.get_attr('IFLA_AF_SPEC') if spec is not None: vlans = spec.get_attrs('IFLA_BRIDGE_VLAN_INFO') vmap = {} for vlan in vlans: vmap[vlan['vid']] = vlan vids = set(vmap.keys()) # remove vids we do not have anymore for vid in (self['vlans'] - vids): self.del_vlan(vid) for vid in (vids - self['vlans']): self.add_vlan(vmap[vid]) protinfo = dev.get_attr('IFLA_PROTINFO') if protinfo is not None: for attr, value in protinfo['attrs']: attr = attr[5:].lower() self[attr] = value # the rest is possible only when interface # is used in IPDB, not standalone if self.ipdb is not None: self['ipaddr'] = self.ipdb.ipaddr[self['index']] self['neighbours'] = self.ipdb.neighbours[self['index']] # finally, cleanup all not needed for item in self.cleanup: if item in self: del self[item] # AF_BRIDGE messages for bridges contain # IFLA_MASTER == self.index, we should fix it if self.get('master', None) == self['index']: self['master'] = None self['ipdb_scope'] = 'system' def wait_ip(self, *argv, **kwarg): return self['ipaddr'].wait_ip(*argv, **kwarg) @with_transaction def add_ip(self, ip, mask=None, broadcast=None, anycast=None, scope=None): ''' Add IP address to an interface Address formats: with 
ipdb.interfaces.eth0 as i: i.add_ip('192.168.0.1', 24) i.add_ip('192.168.0.2/24') i.add_ip('192.168.0.3/255.255.255.0') i.add_ip('192.168.0.4/24', broadcast='192.168.0.255', scope=254) ''' family = 0 # split mask if mask is None: ip, mask = ip.split('/') if ip.find(':') > -1: family = AF_INET6 # normalize IPv6 format ip = inet_ntop(AF_INET6, inet_pton(AF_INET6, ip)) else: family = AF_INET if isinstance(mask, basestring): try: mask = int(mask, 0) except: mask = dqn2int(mask, family) # if it is a transaction or an interface update, apply the change self['ipaddr'].unlink((ip, mask)) request = {} if broadcast is not None: request['broadcast'] = broadcast if anycast is not None: request['anycast'] = anycast if scope is not None: request['scope'] = scope self['ipaddr'].add((ip, mask), raw=request) @with_transaction def del_ip(self, ip, mask=None): ''' Delete IP address from an interface ''' if mask is None: ip, mask = ip.split('/') if mask.find('.') > -1: mask = dqn2int(mask) else: mask = int(mask, 0) # normalize the address if ip.find(':') > -1: ip = inet_ntop(AF_INET6, inet_pton(AF_INET6, ip)) if (ip, mask) in self['ipaddr']: self['ipaddr'].unlink((ip, mask)) self['ipaddr'].remove((ip, mask)) @with_transaction def add_vlan(self, vlan, flags=None): if isinstance(vlan, dict): vid = vlan['vid'] else: vid = vlan vlan = {'vid': vlan, 'flags': 0} self['vlans'].unlink(vid) self['vlans'].add(vid, raw=(vlan, flags)) @with_transaction def del_vlan(self, vlan): if vlan in self['vlans']: self['vlans'].unlink(vlan) self['vlans'].remove(vlan) @with_transaction def add_port(self, port): ''' Add port to a bridge or bonding ''' ifindex = self._resolve_port(port) if not ifindex: self._delay_add_port.add(port) else: self['ports'].unlink(ifindex) self['ports'].add(ifindex) @with_transaction def del_port(self, port): ''' Remove port from a bridge or bonding ''' ifindex = self._resolve_port(port) if not ifindex: self._delay_del_port.add(port) else: self['ports'].unlink(ifindex) self['ports'].remove(ifindex) def reload(self): ''' Reload interface information ''' countdown = 3 while countdown: links = self.nl.get_links(self['index']) if links: self.load_netlink(links[0]) break else: countdown -= 1 time.sleep(1) return self def review(self): ret = super(Interface, self).review() last = self.current_tx if self['ipdb_scope'] == 'create': ret['+ipaddr'] = last['ipaddr'] ret['+ports'] = last['ports'] ret['+vlans'] = last['vlans'] del ret['ports'] del ret['ipaddr'] del ret['vlans'] if last._delay_add_port: ports = set(['*%s' % x for x in last._delay_add_port]) if '+ports' in ret: ret['+ports'] |= ports else: ret['+ports'] = ports if last._delay_del_port: ports = set(['*%s' % x for x in last._delay_del_port]) if '-ports' in ret: ret['-ports'] |= ports else: ret['-ports'] = ports return ret def _run(self, cmd, *argv, **kwarg): try: return cmd(*argv, **kwarg) except Exception as error: if self.partial: self.errors.append(error) return [] raise error def _resolve_port(self, port): # for now just a stupid resolver, will be # improved later with search by mac, etc. if isinstance(port, Interface): return port['index'] else: return self.ipdb.interfaces.get(port, {}).get('index', None) def commit(self, tid=None, transaction=None, commit_phase=1, commit_mask=0xff, newif=False): ''' Commit transaction. In the case of exception all changes applied during commit will be reverted. 
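        An explicit mode example, matching the pattern shown for `load()`
        (the values are illustrative)::

            i = ipdb.interfaces['eth0']
            i.begin()
            i.add_ip('10.0.0.2/24')
            i.commit()   # on failure the changes are reverted and an
                         # exception is raised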
''' if not commit_phase & commit_mask: return self error = None added = None removed = None drop = self.ipdb.txdrop notx = True init = None debug = {'traceback': None, 'transaction': None, 'next_stage': None} if tid or transaction: notx = False if tid: transaction = self.global_tx[tid] else: transaction = transaction or self.current_tx if transaction.partial: transaction.errors = [] with self._write_lock: # if the interface does not exist, create it first ;) if self['ipdb_scope'] != 'system': # a special case: transition "create" -> "remove" if transaction['ipdb_scope'] == 'remove' and \ self['ipdb_scope'] == 'create': self.invalidate() return self newif = True self.set_target('ipdb_scope', 'system') try: # 8<--------------------------------------------------- # link resolve if self._deferred_link: link_key, link_obj = self._deferred_link transaction[link_key] = self._resolve_port(link_obj) self._deferred_link = None # 8<---------------------------------------------------- # ACHTUNG: hack for old platforms if self['address'] == '00:00:00:00:00:00': with self._direct_state: self['address'] = None self['broadcast'] = None # 8<---------------------------------------------------- init = self.pick() try: request = {key: transaction[key] for key in filter(lambda x: x[:5] != 'bond_' and x[:7] != 'brport_' and x[:3] != 'br_', transaction)} for key in ('net_ns_fd', 'net_ns_pid'): if key in request: with self._direct_state: self[key] = None del request[key] self.nl.link('add', **request) except NetlinkError as x: # File exists if x.code == errno.EEXIST: # A bit special case, could be one of two cases: # # 1. A race condition between two different IPDB # processes # 2. An attempt to create dummy0, gre0, bond0 when # the corrseponding module is not loaded. Being # loaded, the module creates a default interface # by itself, causing the request to fail # # The exception in that case can cause the DB # inconsistence, since there can be queued not only # the interface creation, but also IP address # changes etc. # # So we ignore this particular exception and try to # continue, as it is created by us. # # 3. An attempt to create VLAN or VXLAN interface # with the same ID but under different name # # In that case we should forward error properly if self['kind'] in ('vlan', 'vxlan'): newif = x else: raise except Exception as e: if transaction.partial: transaction.errors.append(e) raise PartialCommitException() else: # If link('add', ...) raises an exception, no netlink # broadcast will be sent, and the object is unmodified. # After the exception forwarding, the object is ready # to repeat the commit() call. if drop and notx: self.drop(transaction.uid) raise if transaction['ipdb_scope'] == 'create' and commit_phase > 1: if self['index']: wd = self.ipdb.watchdog('RTM_DELLINK', ifname=self['ifname']) with self._direct_state: self['ipdb_scope'] = 'locked' self.nl.link('delete', index=self['index']) wd.wait() self.load_dict(transaction) return self elif newif: # Here we come only if a new interface is created # if commit_phase == 1 and not self.wait_target('ipdb_scope'): if drop and notx: self.drop(transaction.uid) self.invalidate() if isinstance(newif, Exception): raise newif else: raise CreateException() # Re-populate transaction.ipaddr to have a proper IP target # # The reason behind the code is that a new interface in the # "up" state will have automatic IPv6 addresses, that aren't # reflected in the transaction. This may cause a false IP # target mismatch and a commit failure. 
# # To avoid that, collect automatic addresses to the # transaction manually, since it is not yet properly linked. # for addr in self.ipdb.ipaddr[self['index']]: transaction['ipaddr'].add(addr) # Reload the interface data try: self.load_netlink(self.nl.link('get', **request)[0]) except Exception: pass # now we have our index and IP set and all other stuff snapshot = self.pick() # make snapshots of all dependent routes if commit_phase == 1 and hasattr(self.ipdb, 'routes'): self.routes = [] for record in self.ipdb.routes.filter({'oif': self['index']}): # For MPLS routes the key is an integer # They should match anyways if getattr(record['key'], 'table', None) != 255: self.routes.append((record['route'], record['route'].pick())) # resolve all delayed ports def resolve_ports(transaction, ports, callback, self, drop): def error(x): return KeyError('can not resolve port %s' % x) for port in tuple(ports): ifindex = self._resolve_port(port) if not ifindex: if transaction.partial: transaction.errors.append(error(port)) else: if drop: self.drop(transaction.uid) raise error(port) else: ports.remove(port) with transaction._direct_state: # ???? callback(ifindex) resolve_ports(transaction, transaction._delay_add_port, transaction.add_port, self, drop and notx) resolve_ports(transaction, transaction._delay_del_port, transaction.del_port, self, drop and notx) try: removed, added = snapshot // transaction run = transaction._run nl = transaction.nl # 8<--------------------------------------------- # Port vlans if removed['vlans'] or added['vlans']: self['vlans'].set_target(transaction['vlans']) for i in removed['vlans']: # remove vlan from the port run(nl.vlan_filter, 'del', index=self['index'], vlan_info=self['vlans'][i][0]) for i in added['vlans']: # add vlan to the port vinfo = transaction['vlans'][i][0] flags = transaction['vlans'][i][1] req = {'index': self['index'], 'vlan_info': vinfo} if flags == 'self': req['vlan_flags'] = flags # this request will NOT give echo, # so bypass the check with self._direct_state: self.add_vlan(vinfo['vid']) run(nl.vlan_filter, 'add', **req) self['vlans'].target.wait(SYNC_TIMEOUT) if not self['vlans'].target.is_set(): raise CommitException('vlans target is not set') # 8<--------------------------------------------- # Ports if removed['ports'] or added['ports']: self['ports'].set_target(transaction['ports']) for i in removed['ports']: # detach port if i in self.ipdb.interfaces: (self.ipdb.interfaces[i] .set_target('master', None) .mirror_target('master', 'link')) run(nl.link, 'update', index=i, master=0) else: transaction.errors.append(KeyError(i)) for i in added['ports']: # attach port if i in self.ipdb.interfaces: (self.ipdb.interfaces[i] .set_target('master', self['index']) .mirror_target('master', 'link')) run(nl.link, 'update', index=i, master=self['index']) else: transaction.errors.append(KeyError(i)) self['ports'].target.wait(SYNC_TIMEOUT) if self['ports'].target.is_set(): for msg in self.nl.get_vlans(index=self['index']): self.load_netlink(msg) else: raise CommitException('ports target is not set') # 1. wait for proper targets on ports # 2. wait for mtu sync # # the bridge mtu is set from the port, if the latter is smaller # the bond mtu sets the port mtu, if the latter is smaller # # FIXME: team interfaces? 
for i in list(added['ports']) + list(removed['ports']): port = self.ipdb.interfaces[i] # port update target = port._local_targets['master'] target.wait(SYNC_TIMEOUT) with port._write_lock: del port._local_targets['master'] del port._local_targets['link'] if not target.is_set(): raise CommitException('master target failed') if i in added['ports']: if port.if_master != self['index']: raise CommitException('master set failed') else: if port.if_master == self['index']: raise CommitException('master unset failed') # master update if self['kind'] == 'bridge' and self['mtu'] > port['mtu']: self.set_target('mtu', port['mtu']) self.wait_target('mtu') # 8<--------------------------------------------- # Interface changes request = {} brequest = {} prequest = {} # preseed requests with the interface kind request['kind'] = self['kind'] brequest['kind'] = self['kind'] wait_all = False for key in added: if (key not in self._virtual_fields) and (key != 'kind'): if key[:3] == 'br_': brequest[key] = added[key] elif key[:7] == 'brport_': prequest[key[7:]] = added[key] else: if key == 'address' and added[key] is not None: self[key] = added[key].lower() request[key] = added[key] # FIXME: flush the interface type so the next two conditions # will work correctly request['kind'] = None brequest['kind'] = None # apply changes only if there is something to apply if (self['kind'] == 'bridge') and \ any([brequest[item] is not None for item in brequest]): brequest['index'] = self['index'] brequest['kind'] = self['kind'] brequest['family'] = AF_BRIDGE wait_all = True run(nl.link, 'set', **brequest) if any([request[item] is not None for item in request]): request['index'] = self['index'] request['kind'] = self['kind'] if request.get('address', None) == '00:00:00:00:00:00': request.pop('address') request.pop('broadcast', None) wait_all = True run(nl.link, 'update', **request) # Yet another trick: setting ifalias doesn't cause # netlink updates if 'ifalias' in request: self.reload() if any([prequest[item] is not None for item in prequest]): prequest['index'] = self['index'] run(nl.brport, 'set', **prequest) if (wait_all) and (not transaction.partial): transaction.wait_all_targets() # 8<--------------------------------------------- # VLAN flags -- a dirty hack, pls do something with it if added.get('vlan_flags') is not None: run(nl.link, 'set', **{'kind': 'vlan', 'index': self['index'], 'vlan_flags': added['vlan_flags']}) # 8<--------------------------------------------- # IP address changes for _ in range(3): ip2add = transaction['ipaddr'] - self['ipaddr'] ip2remove = self['ipaddr'] - transaction['ipaddr'] if not ip2add and not ip2remove: break self['ipaddr'].set_target(transaction['ipaddr']) ### # Remove # # The promote_secondaries sysctl causes the kernel # to add secondary addresses back after the primary # address is removed. # # The library can not tell this from the result of # an external program. # # One simple way to work that around is to remove # secondaries first. rip = sorted(ip2remove, key=lambda x: self['ipaddr'][x]['flags'], reverse=True) # 8<-------------------------------------- for i in rip: # When you remove a primary IP addr, all the # subnetwork can be removed. 
In this case you # will fail, but it is OK, no need to roll back try: run(nl.addr, 'delete', self['index'], i[0], i[1]) except NetlinkError as x: # bypass only errno 99, # 'Cannot assign address' if x.code != errno.EADDRNOTAVAIL: raise except socket_error as x: # bypass illegal IP requests if isinstance(x.args[0], basestring) and \ x.args[0].startswith('illegal IP'): continue raise ### # Add addresses # 8<-------------------------------------- for i in ip2add: # Try to fetch additional address attributes try: kwarg = dict([k for k in transaction['ipaddr'][i].items() if k[0] in ('broadcast', 'anycast', 'scope')]) except KeyError: kwarg = None try: # feed the address to the OS run(nl.addr, 'add', self['index'], i[0], i[1], **kwarg if kwarg else {}) except NetlinkError as x: if x.code != errno.EEXIST: raise # 8<-------------------------------------- # some interfaces do not send IPv6 address # updates, when are down # # beside of that, bridge interfaces are # down by default, so they never send # address updates from beginning # # FIXME: # # that all is a dirtiest hack ever, pls do # something with it # if (not self['flags'] & 1) or hasattr(self.ipdb.nl, 'netns'): # 1. flush old IPv6 addresses for addr in list(self['ipaddr'].ipv6): self['ipaddr'].remove(addr) # 2. reload addresses for addr in self.nl.get_addr(index=self['index'], family=AF_INET6): self.ipdb.ipaddr._new(addr) # if there are tons of IPv6 addresses, it may take a # really long time, and that's bad, but it's broken in # the kernel :| # 8<-------------------------------------- self['ipaddr'].target.wait(SYNC_TIMEOUT) if self['ipaddr'].target.is_set(): break else: raise CommitException('ipaddr target is not set') # 8<--------------------------------------------- # Iterate callback chain for ch in self._commit_hooks: # An exception will rollback the transaction ch(self.dump(), snapshot.dump(), transaction.dump()) # 8<--------------------------------------------- # Move the interface to a netns if ('net_ns_fd' in added) or ('net_ns_pid' in added): request = {} for key in ('net_ns_fd', 'net_ns_pid'): if key in added: request[key] = added[key] request['index'] = self['index'] run(nl.link, 'update', **request) countdown = 10 while countdown: # wait until the interface will disappear # from the current network namespace -- # up to 1 second (make it configurable?) 
try: self.nl.get_links(self['index']) except NetlinkError as e: if e.code == errno.ENODEV: break raise except Exception: raise countdown -= 1 time.sleep(0.1) # 8<--------------------------------------------- # Interface removal if added.get('ipdb_scope') in ('shadow', 'remove'): wd = self.ipdb.watchdog('RTM_DELLINK', ifname=self['ifname']) with self._direct_state: self['ipdb_scope'] = 'locked' self.nl.link('delete', index=self['index']) wd.wait() with self._direct_state: self['ipdb_scope'] = 'shadow' # system-wide checks if commit_phase == 1: self.ipdb.ensure('run') if added.get('ipdb_scope') == 'remove': self.ipdb.interfaces._detach(None, self['index'], None) if notx: self.drop(transaction.uid) return self # 8<--------------------------------------------- # system-wide checks if commit_phase == 1: self.ipdb.ensure('run') # so far all's ok drop = True except Exception as e: error = e # log the error environment debug['traceback'] = traceback.format_exc() debug['transaction'] = transaction debug['next_stage'] = None # something went wrong: roll the transaction back if commit_phase == 1: if newif: drop = False try: self.commit(transaction=init if newif else snapshot, commit_phase=2, commit_mask=commit_mask, newif=newif) except Exception as i_e: debug['next_stage'] = i_e error = RuntimeError() else: # reload all the database -- it can take a long time, # but it is required since we have no idea, what is # the result of the failure links = self.nl.get_links() for link in links: self.ipdb.interfaces._new(link) links = self.nl.get_vlans() for link in links: self.ipdb.interfaces._new(link) for addr in self.nl.get_addr(): self.ipdb.ipaddr._new(addr) for key in ('ipaddr', 'ports', 'vlans'): self[key].clear_target() # raise partial commit exceptions if transaction.partial and transaction.errors: error = PartialCommitException('partial commit error') # drop only if required if drop and notx: # drop last transaction in any case self.drop(transaction.uid) # raise exception for failed transaction if error is not None: error.debug = debug raise error # restore dependent routes for successful rollback if commit_phase == 2: for route in self.routes: with route[0]._direct_state: route[0]['ipdb_scope'] = 'restore' try: route[0].commit(transaction=route[1], commit_phase=2, commit_mask=2) except RuntimeError as x: # RuntimeError is raised due to phase 2, so # an additional check is required if isinstance(x.cause, NetlinkError) and \ x.cause.code == errno.EEXIST: pass time.sleep(config.commit_barrier) # drop all collected errors, if any self.errors = [] return self def up(self): ''' Shortcut: change the interface state to 'up'. ''' self['state'] = 'up' return self def down(self): ''' Shortcut: change the interface state to 'down'. ''' self['state'] = 'down' return self def remove(self): ''' Mark the interface for removal ''' self['ipdb_scope'] = 'remove' return self def shadow(self): ''' Remove the interface from the OS, but leave it in the database. When one will try to re-create interface with the same name, all the old saved attributes will apply to the new interface, incl. MAC-address and even the interface index. Please be aware, that the interface index can be reused by OS while the interface is "in the shadow state", in this case re-creation will fail. 
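A minimal sketch (the interface name is illustrative; in explicit mode a `begin()` call is required first)::

            ipdb.interfaces['dummy0'].shadow().commit()
            # the OS interface is gone, but the record remains in
            # ipdb.interfaces with the saved attributes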
''' self['ipdb_scope'] = 'shadow' return self class InterfacesDict(Dotkeys): def __init__(self, ipdb): self.ipdb = ipdb self._event_map = {'RTM_NEWLINK': self._new, 'RTM_DELLINK': self._del} def _register(self): links = self.ipdb.nl.get_links() # iterate twice to map port/master relations for link in links: self._new(link, skip_master=True) for link in links: self._new(link) # load bridge vlan information links = self.ipdb.nl.get_vlans() for link in links: self._new(link) def add(self, kind, ifname, reuse=False, **kwarg): ''' Create new network interface ''' with self.ipdb.exclusive: # check for existing interface if ifname in self: if (self[ifname]['ipdb_scope'] == 'shadow') or reuse: device = self[ifname] kwarg['kind'] = kind device.load_dict(kwarg) if self[ifname]['ipdb_scope'] == 'shadow': with device._direct_state: device['ipdb_scope'] = 'create' device.begin() else: raise CreateException("interface %s exists" % ifname) else: device = self[ifname] = Interface(ipdb=self.ipdb, mode='snapshot') # delay link resolve? for key in kwarg: # any /.+link$/ attr if key[-4:] == 'link': if isinstance(kwarg[key], Interface): kwarg[key] = kwarg[key].get('index') or \ kwarg[key].get('ifname') if not isinstance(kwarg[key], int): device._deferred_link = (key, kwarg[key]) device._mode = self.ipdb.mode with device._direct_state: device['kind'] = kind device['index'] = kwarg.get('index', 0) device['ifname'] = ifname device['ipdb_scope'] = 'create' # set some specific attrs for attr in ('peer', 'uid', 'gid', 'ifr', 'mode', 'bond_mode', 'address'): if attr in kwarg: device[attr] = kwarg.pop(attr) device.begin() device.load(kwarg) return device def _del(self, msg): target = self.get(msg['index']) if target is None: return if msg['family'] == AF_BRIDGE: with target._direct_state: for vlan in tuple(target['vlans']): target.del_vlan(vlan) # check for freezed devices if getattr(target, '_freeze', None): with target._direct_state: target['ipdb_scope'] = 'shadow' return # check for locked devices if target.get('ipdb_scope') in ('locked', 'shadow'): return self._detach(None, msg['index'], msg) def _new(self, msg, skip_master=False): # check, if a record exists index = msg.get('index', None) ifname = msg.get_attr('IFLA_IFNAME', None) device = None cleanup = None # scenario #1: no matches for both: new interface # # scenario #2: ifname exists, index doesn't: # index changed # scenario #3: index exists, ifname doesn't: # name changed # scenario #4: both exist: assume simple update and # an optional name change if (index not in self) and (ifname not in self): # scenario #1, new interface device = \ self[index] = \ self[ifname] = Interface(ipdb=self.ipdb) elif (index not in self) and (ifname in self): # scenario #2, index change old_index = self[ifname]['index'] device = self[index] = self[ifname] if old_index in self: cleanup = old_index if old_index in self.ipdb.ipaddr: self.ipdb.ipaddr[index] = \ self.ipdb.ipaddr[old_index] del self.ipdb.ipaddr[old_index] if old_index in self.ipdb.neighbours: self.ipdb.neighbours[index] = \ self.ipdb.neighbours[old_index] del self.ipdb.neighbours[old_index] else: # scenario #3, interface rename # scenario #4, assume rename old_name = self[index]['ifname'] if old_name != ifname: # unlink old name cleanup = old_name device = self[ifname] = self[index] if index not in self.ipdb.ipaddr: self.ipdb.ipaddr[index] = self.ipdb._ipaddr_set() if index not in self.ipdb.neighbours: self.ipdb.neighbours[index] = LinkedSet() # update port references old_master = device.get('master', None) 
new_master = msg.get_attr('IFLA_MASTER') if old_master != new_master: if old_master in self: with self[old_master]._direct_state: if index in self[old_master]['ports']: self[old_master].del_port(index) if new_master in self and new_master != index: with self[new_master]._direct_state: self[new_master].add_port(index) if cleanup is not None: del self[cleanup] if skip_master: msg.strip('IFLA_MASTER') device.load_netlink(msg) if new_master is None: with device._direct_state: device['master'] = None def _detach(self, name, idx, msg=None): with self.ipdb.exclusive: if msg is not None: if msg['event'] == 'RTM_DELLINK' and \ msg['change'] != 0xffffffff: return if idx is None or idx < 1: target = self[name] idx = target['index'] else: target = self[idx] name = target['ifname'] # clean up port, if exists master = target.get('master', None) if master in self and target['index'] in self[master]['ports']: with self[master]._direct_state: self[master].del_port(target) self.pop(name, None) self.pop(idx, None) self.ipdb.ipaddr.pop(idx, None) self.ipdb.neighbours.pop(idx, None) with target._direct_state: target['ipdb_scope'] = 'detached' class AddressesDict(dict): def __init__(self, ipdb): self.ipdb = ipdb self._event_map = {'RTM_NEWADDR': self._new, 'RTM_DELADDR': self._del} def _register(self): for msg in self.ipdb.nl.get_addr(): self._new(msg) def reload(self): # Reload addresses from the kernel. # (This is a workaround to reorder primary and secondary addresses.) for k in self.keys(): self[k] = self.ipdb._ipaddr_set() for msg in self.ipdb.nl.get_addr(): self._new(msg) for idx in self.keys(): iff = self.ipdb.interfaces[idx] with iff._direct_state: iff['ipaddr'] = self[idx] def _new(self, msg): if msg['family'] == AF_INET: addr = msg.get_attr('IFA_LOCAL') elif msg['family'] == AF_INET6: addr = msg.get_attr('IFA_ADDRESS') else: return raw = {'local': msg.get_attr('IFA_LOCAL'), 'broadcast': msg.get_attr('IFA_BROADCAST'), 'address': msg.get_attr('IFA_ADDRESS'), 'flags': msg.get_attr('IFA_FLAGS') or msg.get('flags'), 'prefixlen': msg['prefixlen'], 'family': msg['family'], 'cacheinfo': msg.get_attr('IFA_CACHEINFO')} try: self[msg['index']].add(key=(addr, raw['prefixlen']), raw=raw) except: pass def _del(self, msg): if msg['family'] == AF_INET: addr = msg.get_attr('IFA_LOCAL') elif msg['family'] == AF_INET6: addr = msg.get_attr('IFA_ADDRESS') else: return try: self[msg['index']].remove((addr, msg['prefixlen'])) except: pass class NeighboursDict(dict): def __init__(self, ipdb): self.ipdb = ipdb self._event_map = {'RTM_NEWNEIGH': self._new, 'RTM_DELNEIGH': self._del} def _register(self): for msg in self.ipdb.nl.get_neighbours(): self._new(msg) def _new(self, msg): if msg['family'] == AF_BRIDGE: return try: (self[msg['ifindex']] .add(key=msg.get_attr('NDA_DST'), raw={'lladdr': msg.get_attr('NDA_LLADDR')})) except: pass def _del(self, msg): if msg['family'] == AF_BRIDGE: return try: (self[msg['ifindex']] .remove(msg.get_attr('NDA_DST'))) except: pass spec = [{'name': 'interfaces', 'class': InterfacesDict, 'kwarg': {}}, {'name': 'by_name', 'class': View, 'kwarg': {'path': 'interfaces', 'constraint': lambda k, v: isinstance(k, basestring)}}, {'name': 'by_index', 'class': View, 'kwarg': {'path': 'interfaces', 'constraint': lambda k, v: isinstance(k, int)}}, {'name': 'ipaddr', 'class': AddressesDict, 'kwarg': {}}, {'name': 'neighbours', 'class': NeighboursDict, 'kwarg': {}}] pyroute2-0.5.9/pyroute2/ipdb/linkedset.py0000644000175000017500000002235713610051400020274 0ustar peetpeet00000000000000''' ''' import struct 
import threading from collections import OrderedDict from socket import inet_pton from socket import AF_INET from socket import AF_INET6 from pyroute2.common import basestring class LinkedSet(set): ''' Utility class, used by `Interface` to track ip addresses and ports. Called "linked" as it automatically updates all instances, linked with it. Target filter is a function, that returns `True` if a set member should be counted in target checks (target methods see below), or `False` if it should be ignored. ''' def target_filter(self, x): return True def __init__(self, *argv, **kwarg): set.__init__(self, *argv, **kwarg) def _check_default_target(self): if self._ct is not None: if set(filter(self.target_filter, self)) == \ set(filter(self.target_filter, self._ct)): self._ct = None return True return False self.lock = threading.RLock() self.target = threading.Event() self.targets = {self.target: _check_default_target} self._ct = None self.raw = OrderedDict() self.links = [] self.exclusive = set() def __getitem__(self, key): return self.raw[key] def clear_target(self, target=None): with self.lock: if target is None: self._ct = None self.target.clear() else: target.clear() del self.targets[target] def set_target(self, value, ignore_state=False): ''' Set target state for the object and clear the target event. Once the target is reached, the event will be set, see also: `check_target()` Args: - value (set): the target state to compare with ''' with self.lock: if isinstance(value, (set, tuple, list)): self._ct = value self.target.clear() # immediately check, if the target already # reached -- otherwise you will miss the # target forever if not ignore_state: self.check_target() elif hasattr(value, '__call__'): new_target = threading.Event() self.targets[new_target] = value if not ignore_state: self.check_target() return new_target else: raise TypeError("target type not supported") def check_target(self): ''' Check the target state and set the target event in the case the state is reached. Called from mutators, `add()` and `remove()` ''' with self.lock: for evt in self.targets: if self.targets[evt](self): evt.set() def add(self, key, raw=None, cascade=False): ''' Add an item to the set and all connected instances, check the target state. Args: - key: any hashable object - raw (optional): raw representation of the object Raw representation is not required. It can be used, e.g., to store RTM_NEWADDR RTNL messages along with human-readable ip addr representation. ''' with self.lock: if cascade and (key in self.exclusive): return if key not in self: self.raw[key] = raw super(LinkedSet, self).add(key) for link in self.links: link.add(key, raw, cascade=True) self.check_target() def remove(self, key, raw=None, cascade=False): ''' Remove an item from the set and all connected instances, check the target state. ''' with self.lock: if cascade and (key in self.exclusive): return super(LinkedSet, self).remove(key) self.raw.pop(key, None) for link in self.links: if key in link: link.remove(key, cascade=True) self.check_target() def unlink(self, key): ''' Exclude key from cascade updates. ''' self.exclusive.add(key) def relink(self, key): ''' Do not ignore key on cascade updates. ''' self.exclusive.remove(key) def connect(self, link): ''' Connect a LinkedSet instance to this one. Connected sets will be updated together with this instance. 
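A small sketch of the cascade behaviour (values are illustrative)::

            a = LinkedSet()
            b = LinkedSet()
            a.connect(b)
            a.add(('10.0.0.1', 24))
            assert ('10.0.0.1', 24) in b   # additions cascade to b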
''' if not isinstance(link, LinkedSet): raise TypeError() self.links.append(link) def disconnect(self, link): self.links.remove(link) def __repr__(self): return repr(tuple(self)) class IPaddrSet(LinkedSet): ''' LinkedSet child class with different target filter. The filter ignores link local IPv6 addresses when sets and checks the target. The `wait_ip()` routine by default does not ignore link local IPv6 addresses, but it may be changed with the `ignore_link_local` argument. ''' @property def ipv4(self): ret = IPaddrSet() for x in self: if self[x]['family'] == AF_INET: ret.add(x, self[x]) return ret @property def ipv6(self): ret = IPaddrSet() for x in self: if self[x]['family'] == AF_INET6: ret.add(x, self[x]) return ret def wait_ip(self, net, mask=None, timeout=None, ignore_link_local=False): family = AF_INET6 if net.find(':') >= 0 else AF_INET alen = 32 if family == AF_INET else 128 net = inet_pton(family, net) if mask is None: mask = alen if family == AF_INET: net = struct.unpack('>I', net)[0] else: na, nb = struct.unpack('>QQ', net) net = (na << 64) | nb match = net & (((1 << mask) - 1) << (alen - mask)) def match_ip(ipset): for rnet, rmask in ipset: rfamily = AF_INET6 if rnet.find(':') >= 0 else AF_INET if family != rfamily: continue if family == AF_INET6 and \ ignore_link_local and \ rnet[:4] == 'fe80' and \ rmask == 64: continue rnet = inet_pton(family, rnet) if family == AF_INET: rnet = struct.unpack('>I', rnet)[0] else: rna, rnb = struct.unpack('>QQ', rnet) rnet = (rna << 64) | rnb if (rnet & (((1 << mask) - 1) << (alen - mask))) == match: return True return False target = self.set_target(match_ip) target.wait(timeout) ret = target.is_set() self.clear_target(target) return ret def __getitem__(self, key): if isinstance(key, (tuple, list)): return self.raw[key] elif isinstance(key, int): return self.raw[tuple(self.raw.keys())[key]] elif isinstance(key, basestring): key = key.split('/') key = (key[0], int(key[1])) return self.raw[key] else: TypeError('wrong key type') class SortedIPaddrSet(IPaddrSet): def __init__(self, *argv, **kwarg): super(SortedIPaddrSet, self).__init__(*argv, **kwarg) if argv and isinstance(argv[0], SortedIPaddrSet): # Re-initialize self.raw from argv[0].raw to preserve order: self.raw = OrderedDict(argv[0].raw) def __and__(self, other): nset = SortedIPaddrSet(self) return nset.__iand__(other) def __iand__(self, other): for key in self.raw: if key not in other: self.remove(key) return self def __rand__(self, other): return self.__and__(other) def __xor__(self, other): nset = SortedIPaddrSet(self) return nset.__ixor__(other) def __ixor__(self, other): if not isinstance(other, SortedIPaddrSet): return RuntimeError('SortedIPaddrSet instance required') xor_keys = set(self.raw.keys()) ^ set(other.raw.keys()) for key in xor_keys: if key in self: self.remove(key) else: self.add(key, raw=other.raw[key], cascade=False) return self def __rxor__(self, other): return self.__xor__(other) def __or__(self, other): nset = SortedIPaddrSet(self) return nset.__ior__(other) def __ior__(self, other): if not isinstance(other, SortedIPaddrSet): return RuntimeError('SortedIPaddrSet instance required') for key, value in other.raw.items(): if key not in self: self.add(key, raw=value, cascade=False) return self def __ror__(self, other): return self.__or__(other) def __sub__(self, other): nset = SortedIPaddrSet(self) return nset.__isub__(other) def __isub__(self, other): for key in other: if key in self: self.remove(key) return self def __iter__(self): return iter(self.raw) 
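# A usage sketch of the target API (an illustrative comment only, not part
# of the library; the addresses are made up). `wait_ip()` blocks until the
# set contains an address from the given network, or the timeout expires:
#
#     s = IPaddrSet()
#     s.add(('10.0.0.1', 24))
#     s.wait_ip('10.0.0.0', 24, timeout=1)       # -> True, already matched
#     s.wait_ip('192.168.0.0', 16, timeout=1)    # -> False, times out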
pyroute2-0.5.9/pyroute2/ipdb/main.py0000644000175000017500000013646013610051400017237 0ustar peetpeet00000000000000# -*- coding: utf-8 -*- ''' IPDB module =========== .. warning:: The IPDB module has design issues that may not be fixed. It is recommended to switch to NDB wherever it's possible. Basically, IPDB is a transactional database containing records that represent network stack objects. Any change in the database is not reflected immediately in the OS, but waits until `commit()` is called. One failed operation during `commit()` rolls back all the changes made so far. Moreover, IPDB has a commit hooks API that allows you to roll back changes depending on your own function calls, e.g. when a host or a network becomes unreachable. Limitations ----------- One of the major issues with IPDB is its memory footprint. It proved not to be suitable for environments with thousands of routes or neighbours. Being a design issue, it could not be fixed, so a new module was started, NDB, that aims to replace IPDB. IPDB is still more feature rich, but NDB is already faster and more stable. IPDB, NDB, IPRoute ------------------ These modules use different approaches. * IPRoute just forwards requests to the kernel and doesn't wait for the system state. So it's up to the developer to check whether the requested object is really set up or not. * IPDB is an asynchronously updated database that starts several additional threads by default. If your project's policy doesn't allow implicit threads, keep it in mind. But unlike IPRoute, IPDB ensures that the changes are reflected in the system. * NDB is like IPDB, and will obsolete it in the future. The difference is that IPDB creates a Python object for every RTNL object, while NDB stores everything in an SQL DB and creates objects on demand. Being asynchronously updated, IPDB does sync on commit:: with IPDB() as ipdb: with ipdb.interfaces['eth0'] as i: i.up() i.add_ip('192.168.0.2/24') i.add_ip('192.168.0.3/24') # ---> <--- here you can expect `eth0` is up # and has these two addresses, so # the following code can rely on that NB: *In the example above `commit()` is implied with the `__exit__()` of the `with` statement.* IPDB and other software ----------------------- IPDB is designed to be a non-exclusive network settings database. There may be several IPDB instances on the same OS, as well as other network management software, such as NetworkManager etc. The IPDB transactions should not interfere with other software settings, unless they touch the same objects. E.g., if IPDB brings an interface up while NM shuts it down, there will be a race condition. An example:: # IPDB code # NetworkManager at the same time: ipdb.interfaces['eth0'].up() # ipdb.interfaces['eth0'].commit() # $ sudo nmcli con down eth0 # ---> <--- # The eth0 state here is undefined. Some of the commands # above will fail But as long as the software doesn't touch the same objects, there will be no conflicts. Another example:: # IPDB code # At the same time, NetworkManager with ipdb.interfaces['eth0'] as i: # adds addresses: i.add_ip('172.16.254.2/24') # * 10.0.0.2/24 i.add_ip('172.16.254.3/24') # * 10.0.0.3/24 # ---> <--- # At this point the eth0 interface will have all four addresses. # If the IPDB transaction fails for some reason, only the IPDB addresses # will be rolled back. There may be a need to prevent other software from changing the network settings.
There is no locking at the kernel level, but IPDB can revert all the changes as soon as they appear on the interface:: # IPDB code ipdb.interfaces['eth0'].freeze() # Here some other software tries to # add an address, or to remove the old # one # ---> <--- # At this point the eth0 interface will have all the same settings as # at the `freeze()` call moment. Newly added addresses will be removed, # all the deleted addresses will be restored. # # Please notice, that an address removal may also cause a routes removal, # and that is the thing that IPDB can neither prevent, nor revert. ipdb.interfaces['eth0'].unfreeze() Quickstart ---------- Simple tutorial:: from pyroute2 import IPDB # several IPDB instances are supported within one process ipdb = IPDB() # commit is called automatically upon the exit from `with` # statement with ipdb.interfaces.eth0 as i: i.address = '00:11:22:33:44:55' i.ifname = 'bala' i.txqlen = 2000 # basic routing support ipdb.routes.add({'dst': 'default', 'gateway': '10.0.0.1'}).commit() # do not forget to shutdown IPDB ipdb.release() Please notice the `ipdb.release()` call at the end. Though it is not forced in an interactive python session for a better user experience, it is required in scripts to sync the IPDB state before exit. IPDB also supports a functional-like syntax:: from pyroute2 import IPDB with IPDB() as ipdb: intf = (ipdb.interfaces['eth0'] .add_ip('10.0.0.2/24') .add_ip('10.0.0.3/24') .set_address('00:11:22:33:44:55') .set_mtu(1460) .set_name('external') .commit()) # ---> <--- here you have the interface reference with # all the changes applied: renamed, added ipaddr, # changed macaddr and mtu. ... # some code # pls notice, that the interface reference will not work # outside of `with IPDB() ...` Transaction modes ----------------- IPDB has several operating modes: - 'implicit' (default) -- the first change starts an implicit transaction, that has to be committed - 'explicit' -- you have to begin() a transaction prior to making any change The default is to use implicit transactions. This behaviour can be changed in the future, so use the 'mode' argument when creating IPDB instances. A sample session with explicit transactions:: In [1]: from pyroute2 import IPDB In [2]: ip = IPDB(mode='explicit') In [3]: ifdb = ip.interfaces In [4]: ifdb.tap0.begin() Out[3]: UUID('7a637a44-8935-4395-b5e7-0ce40d31d937') In [5]: ifdb.tap0.up() In [6]: ifdb.tap0.address = '00:11:22:33:44:55' In [7]: ifdb.tap0.add_ip('10.0.0.1', 24) In [8]: ifdb.tap0.add_ip('10.0.0.2', 24) In [9]: ifdb.tap0.review() Out[8]: {'+ipaddr': set([('10.0.0.2', 24), ('10.0.0.1', 24)]), '-ipaddr': set([]), 'address': '00:11:22:33:44:55', 'flags': 4099} In [10]: ifdb.tap0.commit() Note that you can `review()` the `current_tx` transaction, and `commit()` or `drop()` it. Also, multiple transactions are supported; use the uuid returned by `begin()` to identify them. Actually, the form like 'ip.tap0.address' is just eye candy. The IPDB objects are dictionaries, so you can write the code above like this:: ipdb.interfaces['tap0'].down() ipdb.interfaces['tap0']['address'] = '00:11:22:33:44:55' ... Context managers ---------------- Transactional objects (interfaces, routes) can act as context managers in the same way as IPDB does itself:: with ipdb.interfaces.tap0 as i: i.address = '00:11:22:33:44:55' i.ifname = 'vpn' i.add_ip('10.0.0.1', 24) i.add_ip('10.0.0.2', 24) On exit, the context manager will automatically `commit()` the transaction.
Read-only interface views ------------------------- Using an interface as a context manager **will** start a transaction. Sometimes that is not what one needs. To avoid unnecessary transactions, and to avoid the risk of occasionally changing interface attributes, one can use read-only views:: with ipdb.interfaces[1].ro as iface: print(iface.ifname) print(iface.address) The `.ro` view neither starts transactions, nor allows changing anything, raising the `RuntimeError` exception instead. The same read-only views are available for routes and rules. Create interfaces ----------------- IPDB can also create virtual interfaces:: with ipdb.create(kind='bridge', ifname='control') as i: i.add_port(ipdb.interfaces.eth1) i.add_port(ipdb.interfaces.eth2) i.add_ip('10.0.0.1/24') The `IPDB.create()` call has the same syntax as `IPRoute.link('add', ...)`, except you shouldn't specify the `'add'` command. Refer to `IPRoute` docs for details. Please notice that the interface object stays in the database even if there was an error during the interface creation. It is done so to make it possible to fix the interface object and try to run `commit()` again. Or you can drop the interface object with the `.remove().commit()` call. IP address management --------------------- IP addresses on interfaces may be managed using `add_ip()` and `del_ip()`:: with ipdb.interfaces['eth0'] as eth: eth.add_ip('10.0.0.1/24') eth.add_ip('10.0.0.2/24') eth.add_ip('2001:4c8:1023:108::39/64') eth.del_ip('172.16.12.5/24') The address format may be either a string with `'address/mask'` notation, or a pair of `'address', mask`:: with ipdb.interfaces['eth0'] as eth: eth.add_ip('10.0.0.1', 24) eth.del_ip('172.16.12.5', 24) The `ipaddr` attribute contains all the IP addresses of the interface, which are accessible in different ways. Getting an iterator from `ipaddr` gives you a sequence of tuples `('address', mask)`:: >>> for addr in ipdb.interfaces['eth0'].ipaddr: ... print(addr) ... ('10.0.0.2', 24) ('10.0.0.1', 24) Getting one IP from `ipaddr` returns a dict object with the full spec: >>> ipdb.interfaces['eth0'].ipaddr[0]: {'family': 2, 'broadcast': None, 'flags': 128, 'address': '10.0.0.2', 'prefixlen': 24, 'local': '10.0.0.2'} >>> ipdb.interfaces['eth0'].ipaddr['10.0.0.2/24']: {'family': 2, 'broadcast': None, 'flags': 128, 'address': '10.0.0.2', 'prefixlen': 24, 'local': '10.0.0.2'} The API is a bit weird, but that is for historical reasons. In the future it may be changed. Another feature of the `ipaddr` attribute is views:: >>> ipdb.interfaces['eth0'].ipaddr.ipv4: (('10.0.0.2', 24), ('10.0.0.1', 24)) >>> ipdb.interfaces['eth0'].ipaddr.ipv6: (('2001:4c8:1023:108::39', 64),) The views, as well as the `ipaddr` attribute itself, are not supposed to be changed by the user, but only by the internal API. Bridge interfaces ----------------- Modern kernels provide the possibility to manage bridge interface properties such as STP, forward delay, ageing time etc.
Names of these properties start with `br_`, like `br_ageing_time`, `br_forward_delay` e.g.:: [x for x in dir(ipdb.interfaces.virbr0) if x.startswith('br_')] Bridge ports ------------ IPDB supports specific bridge port parameters, such as proxyarp, unicast/multicast flood, cost etc.:: with ipdb.interfaces['br-port0'] as p: p.brport_cost = 200 p.brport_unicast_flood = 0 p.brport_proxyarp = 0 Ports management ---------------- IPDB provides a uniform API to manage bridge, bond and vrf ports:: with ipdb.interfaces['br-int'] as br: br.add_port('veth0') br.add_port(ipdb.interfaces.veth1) br.add_port(700) br.del_port('veth2') Both `add_port()` and `del_port()` accept three types of arguments: * `'veth0'` -- interface name as a string * `ipdb.interfaces.veth1` -- IPDB interface object * `700` -- interface index, an integer Routes management ----------------- IPDB has a simple yet useful routing management interface. Create a route ~~~~~~~~~~~~~~ To add a route, there is an easy to use syntax:: # spec as a dictionary spec = {'dst': '172.16.1.0/24', 'oif': 4, 'gateway': '192.168.122.60', 'metrics': {'mtu': 1400, 'advmss': 500}} # pass spec as is ipdb.routes.add(spec).commit() # pass spec as kwargs ipdb.routes.add(**spec).commit() # use keyword arguments explicitly ipdb.routes.add(dst='172.16.1.0/24', oif=4, ...).commit() Please notice, that the device can be specified with `oif` (output interface) or `iif` (input interface), the `device` keyword is not supported anymore. More examples:: # specify table and priority (ipdb.routes .add(dst='172.16.1.0/24', gateway='192.168.0.1', table=100, priority=10) .commit()) The `priority` field is what the `iproute2` utility calls `metric` -- see also below. Get a route ~~~~~~~~~~~ To access and change the routes, one can use notations as follows:: # default table (254) # # change the route gateway and mtu # with ipdb.routes['172.16.1.0/24'] as route: route.gateway = '192.168.122.60' route.metrics.mtu = 1500 # access the default route print(ipdb.routes['default']) # change the default gateway with ipdb.routes['default'] as route: route.gateway = '10.0.0.1' By default, the path `ipdb.routes` reflects only the main routing table (254). But Linux supports much more routing tables, so does IPDB:: In [1]: ipdb.routes.tables.keys() Out[1]: [0, 254, 255] In [2]: len(ipdb.routes.tables[255]) Out[2]: 11 # => 11 automatic routes in the table local It is important to understand, that routing tables keys in IPDB are not only the destination prefix. The key consists of 'prefix/mask' string and the route priority (if any):: In [1]: ipdb.routes.tables[254].idx.keys() Out[1]: [RouteKey(dst='default', table=254, family=2, ...), RouteKey(dst='172.17.0.0/16', table=254, ...), RouteKey(dst='172.16.254.0/24', table=254, ...), RouteKey(dst='192.168.122.0/24', table=254, ...), RouteKey(dst='fe80::/64', table=254, family=10, ...)] But a routing table in IPDB allows several variants of the route spec. The simplest case is to retrieve a route by prefix, if there is only one match:: # get route by prefix ipdb.routes['172.16.1.0/24'] # get route by a special name ipdb.routes['default'] If there are more than one route that matches the spec, only the first one will be retrieved. 
One should iterate all the records and filter by a key to retrieve all matches:: # only one route will be retrieved ipdb.routes['fe80::/64'] # get all routes by this prefix [ x for x in ipdb.routes if x['dst'] == 'fe80::/64' ] It is also possible to use dicts as specs:: # get IPv4 default route ipdb.routes[{'dst': 'default', 'family': AF_INET}] # get IPv6 default route ipdb.routes[{'dst': 'default', 'family': AF_INET6}] # get route by priority ipdb.routes.table[100][{'dst': '10.0.0.0/24', 'priority': 10}] While this notation returns one route, there is a method to get all the routes matching the spec:: # get all the routes from all the tables via some interface ipdb.routes.filter({'oif': idx}) # get all IPv6 routes from some table ipdb.routes.table[tnum].filter({'family': AF_INET6}) Route metrics ~~~~~~~~~~~~~ A special object is dedicated to route metrics, one can access it via `route.metrics` or `route['metrics']`:: # these two statements are equal: with ipdb.routes['172.16.1.0/24'] as route: route['metrics']['mtu'] = 1400 with ipdb.routes['172.16.1.0/24'] as route: route.metrics.mtu = 1400 Possible metrics are defined in `rtmsg.py:rtmsg.metrics`, e.g. `RTAX_HOPLIMIT` means `hoplimit` metric etc. Multipath routing ~~~~~~~~~~~~~~~~~ Multipath nexthops are managed via `route.add_nh()` and `route.del_nh()` methods. They are available to review via `route.multipath`, but one should not directly add/remove/modify nexthops in `route.multipath`, as the changes will not be committed correctly. To create a multipath route:: ipdb.routes.add({'dst': '172.16.232.0/24', 'multipath': [{'gateway': '172.16.231.2', 'hops': 2}, {'gateway': '172.16.231.3', 'hops': 1}, {'gateway': '172.16.231.4'}]}).commit() To change a multipath route:: with ipdb.routes['172.16.232.0/24'] as r: r.add_nh({'gateway': '172.16.231.5'}) r.del_nh({'gateway': '172.16.231.4'}) Another possible way is to create a normal route and turn it into multipath by `add_nh()`:: # create a non-MP route with one gateway: (ipdb .routes .add({'dst': '172.16.232.0/24', 'gateway': '172.16.231.2'}) .commit()) # turn it to become a MP route: (ipdb .routes['172.16.232.0/24'] .add_nh({'gateway': '172.16.231.3'}) .commit()) # here the route will contain two NH records, with # gateways 172.16.231.2 and 172.16.231.3 # remove one NH and turn the route to be a normal one (ipdb .routes['172.16.232.0/24'] .del_nh({'gateway': '172.16.231.2'}) .commit()) # thereafter the traffic to 172.16.232.0/24 will go only # via 172.16.231.3 Differences from the iproute2 syntax ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ By historical reasons, `iproute2` uses names that differs from what the kernel uses. E.g., `iproute2` uses `weight` for multipath route hops instead of `hops`, where `weight == (hops + 1)`. Thus, a route created with `hops == 2` will be listed by `iproute2` as `weight 3`. Another significant difference is `metrics`. The `pyroute2` library uses the kernel naming scheme, where `metrics` means mtu, rtt, window etc. The `iproute2` utility uses `metric` (not `metrics`) as a name for the `priority` field. 
In examples:: # ------------------------------------------------------- # iproute2 command: $ ip route add default \\ nexthop via 172.16.0.1 weight 2 \\ nexthop via 172.16.0.2 weight 9 # pyroute2 code: (ipdb .routes .add({'dst': 'default', 'multipath': [{'gateway': '172.16.0.1', 'hops': 1}, {'gateway': '172.16.0.2', 'hops': 8}]}) .commit()) # ------------------------------------------------------- # iproute2 command: $ ip route add default via 172.16.0.2 metric 200 # pyroute2 code: (ipdb .routes .add({'dst': 'default', 'gateway': '172.16.0.2', 'priority': 200}) .commit()) # ------------------------------------------------------- # iproute2 command: $ ip route add default via 172.16.0.2 mtu 1460 # pyroute2 code: (ipdb .routes .add({'dst': 'default', 'gateway': '172.16.0.2', 'metrics': {'mtu': 1460}}) .commit()) Multipath default routes ~~~~~~~~~~~~~~~~~~~~~~~~ .. warning:: As of the merge of kill_rtcache into the kernel, and its release in ~3.6, weighted default routes no longer work in Linux. Please refer to https://github.com/svinota/pyroute2/issues/171#issuecomment-149297244 for details. Rules management ---------------- IPDB provides a basic IP rules management system. Create a rule ~~~~~~~~~~~~~ The syntax is almost the same as for routes:: # rule spec spec = {'src': '172.16.1.0/24', 'table': 200, 'priority': 15000} ipdb.rules.add(spec).commit() Get a rule ~~~~~~~~~~ The way IPDB handles IP rules is almost the same as for routes, but rule keys are more complicated -- the Linux kernel doesn't use keys for rules, but instead iterates all the records until the first one without any attribute mismatch. The fields that the kernel uses to compare rules, IPDB uses as the key fields (see `pyroute2/ipdb/rule.py:RuleKey`). There are also more ways to find a record, as with routes:: # 1. iterate all the records for record in ipdb.rules: match(record) # 2. an integer as the key matches the first # rule with that priority ipdb.rules[32565] # 3. a dict as the key returns the first match # for all the specified attrs ipdb.rules[{'dst': '10.0.0.0/24', 'table': 200}] Priorities ~~~~~~~~~~ Thus, the rule priority is **not** a key, neither in the kernel, nor in IPDB. One should **not** rely on priorities as keys: there may be several rules with the same priority, and it often happens, e.g. on Android systems. Persistence ~~~~~~~~~~~ There is no *change* operation for the rule records in the kernel, so only *add/del* work. When IPDB changes a record, it effectively deletes the old one and creates a new one with the new parameters, but the object referring to the record stays the same. That also means that IPDB can not recognize the situation when someone else does the same. So if there is another program changing records by *del/add* operations, even another IPDB instance, the referring objects in IPDB will be recreated. Performance issues ------------------ In the case of bursts of Netlink broadcast messages, all the activity of the pyroute2-based code in the async mode becomes suppressed to leave more CPU resources to the packet reader thread. So please be ready to cope with delays in the case of Netlink broadcast storms. It also means that the IPDB state will be synchronized with the OS only after some delay.
The class API ------------- ''' import sys import atexit import logging import traceback import threading import weakref try: import queue except ImportError: import Queue as queue # The module is called 'Queue' in Python2 from functools import partial from pprint import pprint from pyroute2 import config from pyroute2.common import uuid32 from pyroute2.common import basestring from pyroute2.iproute import IPRoute from pyroute2.netlink.rtnl import RTM_GETLINK, RTMGRP_DEFAULTS from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg from pyroute2.ipdb import rules from pyroute2.ipdb import routes from pyroute2.ipdb import interfaces from pyroute2.ipdb.routes import BaseRoute from pyroute2.ipdb.exceptions import ShutdownException from pyroute2.ipdb.transactional import SYNC_TIMEOUT from pyroute2.ipdb.linkedset import IPaddrSet from pyroute2.ipdb.linkedset import SortedIPaddrSet from pyroute2.ipdb.utils import test_reachable_icmp log = logging.getLogger(__name__) class Watchdog(object): def __init__(self, ipdb, action, kwarg): self.event = threading.Event() self.is_set = False self.ipdb = ipdb def cb(ipdb, msg, _action): if _action != action: return for key in kwarg: if (msg.get(key, None) != kwarg[key]) and \ (msg.get_attr(msg.name2nla(key)) != kwarg[key]): return self.is_set = True self.event.set() self.cb = cb # register callback prior to other things self.uuid = self.ipdb.register_callback(self.cb) def wait(self, timeout=SYNC_TIMEOUT): ret = self.event.wait(timeout=timeout) self.cancel() return ret def cancel(self): self.ipdb.unregister_callback(self.uuid) class _evq_context(object): ''' Context manager class for the event queue used by the event loop ''' def __init__(self, ipdb, qsize, block, timeout): self._ipdb = ipdb self._qsize = qsize self._block = block self._timeout = timeout def __enter__(self): # Context manager protocol self._ipdb._evq_lock.acquire() self._ipdb._evq = queue.Queue(maxsize=self._qsize) self._ipdb._evq_drop = 0 return self def __exit__(self, exc_type, exc_value, traceback): # Context manager protocol self._ipdb._evq = None self._ipdb._evq_drop = 0 self._ipdb._evq_lock.release() def __iter__(self): # Iterator protocol if not self._ipdb._evq: raise RuntimeError('eventqueue must be used ' 'as a context manager') return self def next(self): # Iterator protocol -- Python 2.x compatibility return self.__next__() def __next__(self): # Iterator protocol -- Python 3.x msg = self._ipdb._evq.get(self._block, self._timeout) self._ipdb._evq.task_done() if isinstance(msg, Exception): raise msg return msg class IPDB(object): ''' The class that maintains information about network setup of the host. Monitoring netlink events allows it to react immediately. It uses no polling. 
''' def __init__(self, nl=None, mode='implicit', restart_on_error=None, nl_async=None, sndbuf=1048576, rcvbuf=1048576, nl_bind_groups=RTMGRP_DEFAULTS, ignore_rtables=None, callbacks=None, sort_addresses=False, plugins=None): plugins = plugins or ['interfaces', 'routes', 'rules'] pmap = {'interfaces': interfaces, 'routes': routes, 'rules': rules} self.mode = mode self.txdrop = False self._stdout = sys.stdout self._ipaddr_set = SortedIPaddrSet if sort_addresses else IPaddrSet self._event_map = {} self._deferred = {} self._ensure = [] self._loaded = set() self._mthread = None self._nl_own = nl is None self._nl_async = config.ipdb_nl_async if nl_async is None else True self.mnl = None self.nl = nl self._sndbuf = sndbuf self._rcvbuf = rcvbuf self.nl_bind_groups = nl_bind_groups self._plugins = [pmap[x] for x in plugins if x in pmap] if isinstance(ignore_rtables, int): self._ignore_rtables = [ignore_rtables, ] elif isinstance(ignore_rtables, (list, tuple, set)): self._ignore_rtables = ignore_rtables else: self._ignore_rtables = [] self._stop = False # see also 'register_callback' self._post_callbacks = {} self._pre_callbacks = {} # local event queues # - callbacks event queue self._cbq = queue.Queue(maxsize=8192) self._cbq_drop = 0 # - users event queue self._evq = None self._evq_lock = threading.Lock() self._evq_drop = 0 # locks and events self.exclusive = threading.RLock() self._shutdown_lock = threading.Lock() # register callbacks # # examples:: # def cb1(ipdb, msg, event): # print(event, msg) # def cb2(...): # ... # # # default mode: post # IPDB(callbacks=[cb1, cb2]) # # specify the mode explicitly # IPDB(callbacks=[(cb1, 'pre'), (cb2, 'post')]) # for cba in callbacks or []: if not isinstance(cba, (tuple, list, set)): cba = (cba, ) self.register_callback(*cba) # load information self.restart_on_error = restart_on_error if \ restart_on_error is not None else nl is None # init the database self.initdb() # init the dir() cache self.__dir_cache__ = [i for i in self.__class__.__dict__.keys() if i[0] != '_'] self.__dir_cache__.extend(list(self._deferred.keys())) def cleanup(ref): ipdb_obj = ref() if (ipdb_obj is not None) and (not ipdb_obj._stop): ipdb_obj.release() atexit.register(cleanup, weakref.ref(self)) def __dir__(self): return self.__dir_cache__ def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.release() def _flush_db(self): def flush(idx): for key in tuple(idx.keys()): try: del idx[key] except KeyError: pass idx_list = [] if 'interfaces' in self._loaded: for (key, dev) in self.by_name.items(): try: # FIXME self.interfaces._detach(key, dev['index'], dev.nlmsg) except KeyError: pass idx_list.append(self.ipaddr) idx_list.append(self.neighbours) if 'routes' in self._loaded: idx_list.extend([self.routes.tables[x] for x in self.routes.tables.keys()]) if 'rules' in self._loaded: idx_list.append(self.rules) for idx in idx_list: flush(idx) def initdb(self): # flush all the DB objects with self.exclusive: # explicitly cleanup object references for event in tuple(self._event_map): del self._event_map[event] self._flush_db() # if the command socket is not provided, create it if self._nl_own: if self.nl is not None: self.nl.close() self.nl = IPRoute(sndbuf=self._sndbuf, rcvbuf=self._rcvbuf, async_qsize=0) # OBS: legacy design # setup monitoring socket if self.mnl is not None: self._flush_mnl() self.mnl.close() self.mnl = self.nl.clone() try: self.mnl.bind(groups=self.nl_bind_groups, async_cache=self._nl_async) except: self.mnl.close() if self._nl_own is None: 
self.nl.close() raise # explicitly cleanup references for key in tuple(self._deferred): del self._deferred[key] for module in self._plugins: if (module.groups & self.nl_bind_groups) != module.groups: continue for plugin in module.spec: self._deferred[plugin['name']] = module.spec if plugin['name'] in self._loaded: delattr(self, plugin['name']) self._loaded.remove(plugin['name']) # start service threads for tspec in (('_mthread', '_serve_main', 'IPDB main event loop'), ('_cthread', '_serve_cb', 'IPDB cb event loop')): tg = getattr(self, tspec[0], None) if not getattr(tg, 'is_alive', lambda: False)(): tx = threading.Thread(name=tspec[2], target=getattr(self, tspec[1])) setattr(self, tspec[0], tx) tx.setDaemon(True) tx.start() def __getattribute__(self, name): deferred = super(IPDB, self).__getattribute__('_deferred') if name in deferred: register = [] spec = deferred[name] for plugin in spec: obj = plugin['class'](self, **plugin['kwarg']) setattr(self, plugin['name'], obj) register.append(obj) self._loaded.add(plugin['name']) del deferred[plugin['name']] for obj in register: if hasattr(obj, '_register'): obj._register() if hasattr(obj, '_event_map'): for event in obj._event_map: if event not in self._event_map: self._event_map[event] = [] self._event_map[event].append(obj._event_map[event]) return super(IPDB, self).__getattribute__(name) def register_callback(self, callback, mode='post'): ''' IPDB callbacks are routines executed on a RT netlink message arrival. There are two types of callbacks: "post" and "pre" callbacks. ... "Post" callbacks are executed after the message is processed by IPDB and all corresponding objects are created or deleted. Using ipdb reference in "post" callbacks you will access the most up-to-date state of the IP database. "Post" callbacks are executed asynchronously in separate threads. These threads can work as long as you want them to. Callback threads are joined occasionally, so for a short time there can exist stopped threads. ... "Pre" callbacks are synchronous routines, executed before the message gets processed by IPDB. It gives you the way to patch arriving messages, but also places a restriction: until the callback exits, the main event IPDB loop is blocked. Normally, only "post" callbacks are required. But in some specific cases "pre" also can be useful. ... The routine, `register_callback()`, takes two arguments: - callback function - mode (optional, default="post") The callback should be a routine, that accepts three arguments:: cb(ipdb, msg, action) Arguments are: - **ipdb** is a reference to IPDB instance, that invokes the callback. 
- **msg** is a message arrived - **action** is just a msg['event'] field E.g., to work on a new interface, you should catch action == 'RTM_NEWLINK' and with the interface index (arrived in msg['index']) get it from IPDB:: index = msg['index'] interface = ipdb.interfaces[index] ''' lock = threading.Lock() def safe(*argv, **kwarg): with lock: callback(*argv, **kwarg) safe.hook = callback safe.lock = lock safe.uuid = uuid32() if mode == 'post': self._post_callbacks[safe.uuid] = safe elif mode == 'pre': self._pre_callbacks[safe.uuid] = safe else: raise KeyError('Unknown callback mode') return safe.uuid def unregister_callback(self, cuid, mode='post'): if mode == 'post': cbchain = self._post_callbacks elif mode == 'pre': cbchain = self._pre_callbacks else: raise KeyError('Unknown callback mode') safe = cbchain[cuid] with safe.lock: ret = cbchain.pop(cuid) return ret def eventqueue(self, qsize=8192, block=True, timeout=None): ''' Initializes event queue and returns event queue context manager. Once the context manager is initialized, events start to be collected, so it is possible to read initial state from the system witout losing last moment changes, and once that is done, start processing events. Example: ipdb = IPDB() with ipdb.eventqueue() as evq: my_state = ipdb.... for msg in evq: update_state_by_msg(my_state, msg) ''' return _evq_context(self, qsize, block, timeout) def eventloop(self, qsize=8192, block=True, timeout=None): """ Event generator for simple cases when there is no need for initial state setup. Initialize event queue and yield events as they happen. """ with self.eventqueue(qsize=qsize, block=block, timeout=timeout) as evq: for msg in evq: yield msg def release(self): ''' Shutdown IPDB instance and sync the state. Since IPDB is asyncronous, some operations continue in the background, e.g. callbacks. So, prior to exit the script, it is required to properly shutdown IPDB. The shutdown sequence is not forced in an interactive python session, since it is easier for users and there is enough time to sync the state. But for the scripts the `release()` call is required. ''' with self._shutdown_lock: if self._stop: log.warning("shutdown in progress") return self._stop = True self._cbq.put(ShutdownException("shutdown")) if self._mthread is not None: self._flush_mnl() self._mthread.join() if self.mnl is not None: self.mnl.close() self.mnl = None if self._nl_own: self.nl.close() self.nl = None self._flush_db() def _flush_mnl(self): if self.mnl is not None: # terminate the main loop for t in range(3): try: msg = ifinfmsg() msg['index'] = 1 msg.reset() self.mnl.put(msg, RTM_GETLINK) except Exception as e: log.error("shutdown error: %s", e) # Just give up. # We can not handle this case def create(self, kind, ifname, reuse=False, **kwarg): return self.interfaces.add(kind, ifname, reuse, **kwarg) def ensure(self, cmd='add', reachable=None, condition=None): if cmd == 'reset': self._ensure = [] elif cmd == 'run': for f in self._ensure: f() elif cmd == 'add': if isinstance(reachable, basestring): reachable = reachable.split(':') if len(reachable) == 1: f = partial(test_reachable_icmp, reachable[0]) else: raise NotImplementedError() self._ensure.append(f) else: if sys.stdin.isatty(): pprint(self._ensure, stream=self._stdout) elif cmd == 'print': pprint(self._ensure, stream=self._stdout) elif cmd == 'get': return self._ensure else: raise NotImplementedError() def items(self): # TODO: add support for filters? 
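        # Added note (an illustration, not part of the original module): the
        # generator below yields (key, object) pairs shaped roughly like
        #
        #     ('interfaces', 'eth0')              -> Interface object
        #     ('routes', 254, RouteKey(...))      -> Route object
        #
        # dump() further down folds these keys into a nested dict, and
        # review()/drop() walk the same pairs to reach open transactions.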
# iterate interfaces for ifname in getattr(self, 'by_name', {}): yield (('interfaces', ifname), self.interfaces[ifname]) # iterate routes for table in getattr(getattr(self, 'routes', None), 'tables', {}): for key, route in self.routes.tables[table].items(): yield (('routes', table, key), route) def dump(self): ret = {} for key, obj in self.items(): ptr = ret for step in key[:-1]: if step not in ptr: ptr[step] = {} ptr = ptr[step] ptr[key[-1]] = obj return ret def load(self, config, ptr=None): if ptr is None: ptr = self for key in config: obj = getattr(ptr, key, None) if obj is not None: if hasattr(obj, 'load'): obj.load(config[key]) else: self.load(config[key], ptr=obj) elif hasattr(ptr, 'add'): ptr.add(**config[key]) return self def review(self): ret = {} for key, obj in self.items(): ptr = ret try: rev = obj.review() except TypeError: continue for step in key[:-1]: if step not in ptr: ptr[step] = {} ptr = ptr[step] ptr[key[-1]] = rev if not ret: raise TypeError('no transaction started') return ret def drop(self): ok = False for key, obj in self.items(): try: obj.drop() except TypeError: continue ok = True if not ok: raise TypeError('no transaction started') def commit(self, transactions=None, phase=1): # what to commit: either from transactions argument, or from # started transactions on existing objects if transactions is None: # collect interface transactions txlist = [(x, x.current_tx) for x in getattr(self, 'by_name', {}).values() if x.local_tx.values()] # collect route transactions for table in getattr(getattr(self, 'routes', None), 'tables', {}).keys(): txlist.extend([(x, x.current_tx) for x in self.routes.tables[table] if x.local_tx.values()]) transactions = txlist snapshots = [] removed = [] tx_ipdb_prio = [] tx_main = [] tx_prio1 = [] tx_prio2 = [] tx_prio3 = [] for (target, tx) in transactions: # 8<------------------------------ # first -- explicit priorities if tx['ipdb_priority']: tx_ipdb_prio.append((target, tx)) continue # 8<------------------------------ # routes if isinstance(target, BaseRoute): tx_prio3.append((target, tx)) continue # 8<------------------------------ # intefaces kind = target.get('kind', None) if kind in ('vlan', 'vxlan', 'gre', 'tuntap', 'vti', 'vti6', 'vrf', 'xfrm'): tx_prio1.append((target, tx)) elif kind in ('bridge', 'bond'): tx_prio2.append((target, tx)) else: tx_main.append((target, tx)) # 8<------------------------------ # explicitly sorted transactions tx_ipdb_prio = sorted(tx_ipdb_prio, key=lambda x: x[1]['ipdb_priority'], reverse=True) # FIXME: this should be documented # # The final transactions order: # 1. any txs with ipdb_priority (sorted by that field) # # Then come default priorities (no ipdb_priority specified): # 2. all the rest # 3. vlan, vxlan, gre, tuntap, vti, vrf # 4. bridge, bond # 5. 
routes transactions = tx_ipdb_prio + tx_main + tx_prio1 + tx_prio2 + tx_prio3 try: for (target, tx) in transactions: if target['ipdb_scope'] == 'detached': continue if tx['ipdb_scope'] == 'remove': tx['ipdb_scope'] = 'shadow' removed.append((target, tx)) if phase == 1: s = (target, target.pick(detached=True)) snapshots.append(s) # apply the changes, but NO rollback -- only phase 1 target.commit(transaction=tx, commit_phase=phase, commit_mask=phase) # if the commit above fails, the next code # branch will run rollbacks except Exception: if phase == 1: # run rollbacks for ALL the collected transactions, # even successful ones self.fallen = transactions txs = filter(lambda x: not ('create' == x[0]['ipdb_scope'] == x[1]['ipdb_scope']), snapshots) self.commit(transactions=txs, phase=2) raise else: if phase == 1: for (target, tx) in removed: target['ipdb_scope'] = 'detached' target.detach() finally: if phase == 1: for (target, tx) in transactions: target.drop(tx.uid) return self def watchdog(self, wdops='RTM_NEWLINK', **kwarg): return Watchdog(self, wdops, kwarg) def _serve_cb(self): ### # Callbacks thread working on a dedicated event queue. ### while not self._stop: msg = self._cbq.get() self._cbq.task_done() if isinstance(msg, ShutdownException): return elif isinstance(msg, Exception): raise msg for cb in tuple(self._post_callbacks.values()): try: cb(self, msg, msg['event']) except: pass def _serve_main(self): ### # Main monitoring cycle. It gets messages from the # default iproute queue and updates objects in the # database. ### while not self._stop: try: messages = self.mnl.get() ## # Check it again # # NOTE: one should not run callbacks or # anything like that after setting the # _stop flag, since IPDB is not valid # anymore if self._stop: break except Exception as e: with self.exclusive: if self._evq: self._evq.put(e) return if self.restart_on_error: log.error('Restarting IPDB instance after ' 'error:\n%s', traceback.format_exc()) try: self.initdb() except: log.error('Error restarting DB:\n%s', traceback.format_exc()) return continue else: log.error('Emergency shutdown, cleanup manually') raise RuntimeError('Emergency shutdown') for msg in messages: # Run pre-callbacks # NOTE: pre-callbacks are synchronous for (cuid, cb) in tuple(self._pre_callbacks.items()): try: cb(self, msg, msg['event']) except: pass with self.exclusive: event = msg.get('event', None) if event in self._event_map: for func in self._event_map[event]: func(msg) # Post-callbacks try: self._cbq.put_nowait(msg) if self._cbq_drop: log.warning('dropped %d events', self._cbq_drop) self._cbq_drop = 0 except queue.Full: self._cbq_drop += 1 except Exception: log.error('Emergency shutdown, cleanup manually') raise RuntimeError('Emergency shutdown') # # Why not to put these two pieces of the code # it in a routine? 
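            # (Added note: both the callbacks queue above and the users event
            #  queue below are fed with put_nowait() and a drop counter, so a
            #  slow consumer can never block this netlink reader thread; the
            #  number of dropped events is reported via log.warning as soon as
            #  the queue accepts messages again.)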
# # TODO: run performance tests with routines # Users event queue if self._evq: try: self._evq.put_nowait(msg) if self._evq_drop: log.warning("dropped %d events", self._evq_drop) self._evq_drop = 0 except queue.Full: self._evq_drop += 1 except Exception: log.error('Emergency shutdown, cleanup manually') raise RuntimeError('Emergency shutdown') pyroute2-0.5.9/pyroute2/ipdb/routes.py0000644000175000017500000013267213610051400017635 0ustar peetpeet00000000000000import time import types import struct import logging import traceback import threading from collections import namedtuple from socket import AF_UNSPEC from socket import AF_INET6 from socket import AF_INET from socket import inet_pton from socket import inet_ntop from pyroute2.common import AF_MPLS from pyroute2.common import basestring from pyroute2.netlink import rtnl from pyroute2.netlink import nlmsg from pyroute2.netlink import nlmsg_base from pyroute2.netlink import NLM_F_MULTI from pyroute2.netlink import NLM_F_CREATE from pyroute2.netlink.rtnl import rt_type from pyroute2.netlink.rtnl import rt_proto from pyroute2.netlink.rtnl import encap_type from pyroute2.netlink.rtnl.rtmsg import rtmsg from pyroute2.netlink.rtnl.req import IPRouteRequest from pyroute2.netlink.rtnl.ifaddrmsg import IFA_F_SECONDARY from pyroute2.ipdb.exceptions import CommitException from pyroute2.ipdb.transactional import Transactional from pyroute2.ipdb.transactional import with_transaction from pyroute2.ipdb.transactional import SYNC_TIMEOUT from pyroute2.ipdb.linkedset import LinkedSet log = logging.getLogger(__name__) groups = rtnl.RTMGRP_IPV4_ROUTE |\ rtnl.RTMGRP_IPV6_ROUTE |\ rtnl.RTMGRP_MPLS_ROUTE IP6_RT_PRIO_USER = 1024 class Metrics(Transactional): _fields = [rtmsg.metrics.nla2name(i[0]) for i in rtmsg.metrics.nla_map] class Encap(Transactional): _fields = ['type', 'labels'] class Via(Transactional): _fields = ['family', 'addr'] class NextHopSet(LinkedSet): def __init__(self, prime=None): super(NextHopSet, self).__init__() prime = prime or [] for v in prime: self.add(v) def __sub__(self, vs): ret = type(self)() sub = set(self.raw.keys()) - set(vs.raw.keys()) for v in sub: ret.add(self[v], raw=self.raw[v]) return ret def __make_nh(self, prime): if isinstance(prime, BaseRoute): return prime.make_nh_key(prime) elif isinstance(prime, dict): if prime.get('family', None) == AF_MPLS: return MPLSRoute.make_nh_key(prime) else: return Route.make_nh_key(prime) elif isinstance(prime, tuple): return prime else: raise TypeError("unknown prime type %s" % type(prime)) def __getitem__(self, key): return self.raw[key] def __iter__(self): def NHIterator(): for x in tuple(self.raw.values()): yield x return NHIterator() def add(self, prime, raw=None, cascade=False): key = self.__make_nh(prime) req = key._required fields = key._fields skey = key[:req] + (None, ) * (len(fields) - req) if skey in self.raw: del self.raw[skey] return super(NextHopSet, self).add(key, raw=prime) def remove(self, prime, raw=None, cascade=False): key = self.__make_nh(prime) try: super(NextHopSet, self).remove(key) except KeyError as e: req = key._required fields = key._fields skey = key[:req] + (None, ) * (len(fields) - req) for rkey in tuple(self.raw.keys()): if skey == rkey[:req] + (None, ) * (len(fields) - req): break else: raise e super(NextHopSet, self).remove(rkey) class WatchdogMPLSKey(dict): def __init__(self, route): dict.__init__(self) self['oif'] = route['oif'] self['dst'] = [{'ttl': 0, 'bos': 1, 'tc': 0, 'label': route['dst']}] class WatchdogKey(dict): ''' Construct from a route a 
dictionary that could be used as a match for IPDB watchdogs. ''' def __init__(self, route): dict.__init__(self, [x for x in IPRouteRequest(route).items() if x[0] in ('dst', 'dst_len', 'src', 'src_len', 'tos', 'priority', 'gateway', 'table') and x[1]]) # Universal route key # Holds the fields that the kernel uses to uniquely identify routes. # IPv4 allows redundant routes with different 'tos' but IPv6 does not, # so 'tos' is used for IPv4 but not IPv6. # For reference, see fib_table_insert() in # https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/ipv4/fib_trie.c#n1147 # and fib6_add_rt2node() in # https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/net/ipv6/ip6_fib.c#n765 RouteKey = namedtuple('RouteKey', ('dst', 'table', 'family', 'priority', 'tos')) # IP multipath NH key IPNHKey = namedtuple('IPNHKey', ('gateway', 'encap', 'oif')) IPNHKey._required = 2 # MPLS multipath NH key MPLSNHKey = namedtuple('MPLSNHKey', ('newdst', 'via', 'oif')) MPLSNHKey._required = 2 def _normalize_ipaddr(x, y): if isinstance(y, basestring) and y.find(':') > -1: y = inet_ntop(AF_INET6, inet_pton(AF_INET6, y)) return x == y def _normalize_ipnet(x, y): # # x -- incoming value # y -- transaction value # if isinstance(y, basestring) and y.find(':') > -1: s = y.split('/') ip = inet_ntop(AF_INET6, inet_pton(AF_INET6, s[0])) if len(s) > 1: y = '%s/%s' % (ip, s[1]) else: y = ip return x == y class BaseRoute(Transactional): ''' Persistent transactional route object ''' _fields = [rtmsg.nla2name(i[0]) for i in rtmsg.nla_map] for key, _ in rtmsg.fields: _fields.append(key) _fields.append('removal') _virtual_fields = ['ipdb_scope', 'ipdb_priority'] _fields.extend(_virtual_fields) _linked_sets = ['multipath', ] _nested = [] _gctime = None cleanup = ('attrs', 'header', 'event', 'cacheinfo') _fields_cmp = {'src': _normalize_ipnet, 'dst': _normalize_ipnet, 'gateway': _normalize_ipaddr, 'prefsrc': _normalize_ipaddr} def __init__(self, ipdb, mode=None, parent=None, uid=None): Transactional.__init__(self, ipdb, mode, parent, uid) with self._direct_state: self['ipdb_priority'] = 0 @with_transaction def add_nh(self, prime): with self._write_lock: # if the multipath chain is empty, copy the current # nexthop as the first in the multipath if not self['multipath']: first = {} for key in ('oif', 'gateway', 'newdst'): if self[key]: first[key] = self[key] if first: if self['family']: first['family'] = self['family'] for key in ('encap', 'via', 'metrics'): if self[key] and any(self[key].values()): first[key] = self[key] self[key] = None self['multipath'].add(first) # cleanup key fields for key in ('oif', 'gateway', 'newdst'): self[key] = None # add the prime as NH if self['family'] == AF_MPLS: prime['family'] = AF_MPLS self['multipath'].add(prime) @with_transaction def del_nh(self, prime): with self._write_lock: if not self['multipath']: raise KeyError('attempt to delete nexthop from ' 'non-multipath route') nh = dict(prime) if self['family'] == AF_MPLS: nh['family'] = AF_MPLS self['multipath'].remove(nh) def load_netlink(self, msg): with self._direct_state: if self['ipdb_scope'] == 'locked': # do not touch locked interfaces return self['ipdb_scope'] = 'system' # IPv6 multipath via several devices (not networks) is a very # special case, since we get only the first hop notification. Ask # the kernel guys why. I've got no idea. 
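        # (Added background, as far as we can tell: older kernels report IPv6
        # ECMP nexthops as separate RTM_NEWROUTE notifications rather than one
        # RTA_MULTIPATH attribute, so the extra route dump below seems to be
        # the only way to recover the complete multipath set.)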
# # So load all the rest flags = msg.get('header', {}).get('flags', 0) family = msg.get('family', 0) clean_mp = True table = msg.get_attr('RTA_TABLE') or msg.get('table') dst = msg.get_attr('RTA_DST') # # It MAY be a multipath hop # if family == AF_INET6 and not msg.get_attr('RTA_MULTIPATH'): # # It is a notification about the route created # if flags == NLM_F_CREATE: # # This routine can significantly slow down the IPDB # instance, but I see no way around. Some are born # to endless night. # clean_mp = False msgs = self.nl.route('show', table=table, dst=dst, family=family) for nhmsg in msgs: nh = type(self)(ipdb=self.ipdb, parent=self) nh.load_netlink(nhmsg) with nh._direct_state: del nh['dst'] del nh['ipdb_scope'] del nh['ipdb_priority'] del nh['multipath'] del nh['metrics'] self.add_nh(nh) # # it IS a multipath hop loaded during IPDB init # elif flags == NLM_F_MULTI and self.get('dst'): nh = type(self)(ipdb=self.ipdb, parent=self) nh.load_netlink(msg) with nh._direct_state: del nh['dst'] del nh['ipdb_scope'] del nh['ipdb_priority'] del nh['multipath'] del nh['metrics'] self.add_nh(nh) return for (key, value) in msg.items(): self[key] = value # cleanup multipath NH if clean_mp: for nh in self['multipath']: self.del_nh(nh) for cell in msg['attrs']: # # Parse on demand # norm = rtmsg.nla2name(cell[0]) if norm in self.cleanup: continue value = cell[1] # normalize RTAX if norm == 'metrics': with self['metrics']._direct_state: for metric in tuple(self['metrics'].keys()): del self['metrics'][metric] for (rtax, rtax_value) in value['attrs']: rtax_norm = rtmsg.metrics.nla2name(rtax) self['metrics'][rtax_norm] = rtax_value elif norm == 'multipath': for record in value: nh = type(self)(ipdb=self.ipdb, parent=self) nh.load_netlink(record) with nh._direct_state: del nh['dst'] del nh['ipdb_scope'] del nh['ipdb_priority'] del nh['multipath'] del nh['metrics'] self['multipath'].add(nh) elif norm == 'encap': with self['encap']._direct_state: ret = [] # FIXME: should support encap_types other than MPLS try: for l in value.get_attr('MPLS_IPTUNNEL_DST'): ret.append(str(l['label'])) if ret: self['encap']['labels'] = '/'.join(ret) except AttributeError: pass elif norm == 'via': with self['via']._direct_state: self['via'] = value elif norm == 'newdst': self['newdst'] = [x['label'] for x in value] else: self[norm] = value if msg.get('family', 0) == AF_MPLS: dst = msg.get_attr('RTA_DST') if dst: dst = dst[0]['label'] else: if msg.get_attr('RTA_DST'): dst = '%s/%s' % (msg.get_attr('RTA_DST'), msg['dst_len']) else: dst = 'default' self['dst'] = dst # fix RTA_ENCAP_TYPE if needed if msg.get_attr('RTA_ENCAP'): if self['encap_type'] is not None: with self['encap']._direct_state: self['encap']['type'] = self['encap_type'] self['encap_type'] = None # or drop encap, if there is no RTA_ENCAP in msg elif self['encap'] is not None: self['encap_type'] = None with self['encap']._direct_state: self['encap'] = {} # drop metrics, if there is no RTA_METRICS in msg if not msg.get_attr('RTA_METRICS') and self['metrics'] is not None: with self['metrics']._direct_state: self['metrics'] = {} # same for via if not msg.get_attr('RTA_VIA') and self['via'] is not None: with self['via']._direct_state: self['via'] = {} # one hop -> multihop transition if not msg.get_attr('RTA_GATEWAY') and self['gateway'] is not None: self['gateway'] = None if 'oif' not in msg and \ not msg.get_attr('RTA_OIF') and \ self['oif'] is not None: self['oif'] = None # finally, cleanup all not needed for item in self.cleanup: if item in self: del self[item] def 
commit(self, tid=None, transaction=None, commit_phase=1, commit_mask=0xff): if not commit_phase & commit_mask: return self error = None drop = self.ipdb.txdrop devop = 'set' cleanup = [] # FIXME -- make a debug object debug = {'traceback': None, 'next_stage': None} notx = True if tid or transaction: notx = False if tid: transaction = self.global_tx[tid] else: transaction = transaction or self.current_tx # ignore global rollbacks on invalid routes if self['ipdb_scope'] == 'create' and commit_phase > 1: return # create a new route if self['ipdb_scope'] != 'system': devop = 'add' # work on an existing route snapshot = self.pick() added, removed = transaction // snapshot added.pop('ipdb_scope', None) removed.pop('ipdb_scope', None) try: # route set if self['family'] != AF_MPLS: cleanup = [any(snapshot['metrics'].values()) and not any(added.get('metrics', {}).values()), any(snapshot['encap'].values()) and not any(added.get('encap', {}).values())] if any(added.values()) or \ any(cleanup) or \ removed.get('multipath', None) or \ devop == 'add': # prepare multipath target sync wlist = [] if transaction['multipath']: mplen = len(transaction['multipath']) if mplen == 1: # set up local targets for nh in transaction['multipath']: for key in ('oif', 'gateway', 'newdst'): if nh.get(key, None): self.set_target(key, nh[key]) wlist.append(key) mpt = None else: def mpcheck(mpset): return len(mpset) == mplen mpt = self['multipath'].set_target(mpcheck, True) else: mpt = None # prepare the anchor key to catch *possible* route update old_key = self.make_key(self) new_key = self.make_key(transaction) if old_key != new_key: # assume we can not move routes between tables (yet ;) if self['family'] == AF_MPLS: route_index = self.ipdb.routes.tables['mpls'].idx else: route_index = (self.ipdb .routes .tables[self['table'] or 254] .idx) # re-link the route record if new_key in route_index: raise CommitException('route idx conflict') else: route_index[new_key] = {'key': new_key, 'route': self} # wipe the old key, if needed if old_key in route_index: del route_index[old_key] self.nl.route(devop, **transaction) # delete old record, if required if (old_key != new_key) and (devop == 'set'): req = dict(old_key._asdict()) # update the request with the scope. 
# # though the scope isn't a part of the # key, it is required for the correct # removal -- only if it is set req['scope'] = self.get('scope', 0) self.nl.route('del', **req) transaction.wait_all_targets() for key in ('metrics', 'via'): if transaction[key] and transaction[key]._targets: transaction[key].wait_all_targets() if mpt is not None: mpt.wait(SYNC_TIMEOUT) if not mpt.is_set(): raise CommitException('multipath target is not set') self['multipath'].clear_target(mpt) for key in wlist: self.wait_target(key) # route removal if (transaction['ipdb_scope'] in ('shadow', 'remove')) or\ ((transaction['ipdb_scope'] == 'create') and commit_phase == 2): if transaction['ipdb_scope'] == 'shadow': with self._direct_state: self['ipdb_scope'] = 'locked' # create watchdog wd = self.ipdb.watchdog('RTM_DELROUTE', **self.wd_key(snapshot)) for route in self.nl.route('delete', **snapshot): self.ipdb.routes.load_netlink(route) wd.wait() if transaction['ipdb_scope'] == 'shadow': with self._direct_state: self['ipdb_scope'] = 'shadow' # success, so it's safe to drop the transaction drop = True except Exception as e: error = e # prepare postmortem debug['traceback'] = traceback.format_exc() debug['error_stack'] = [] debug['next_stage'] = None if commit_phase == 1: try: self.commit(transaction=snapshot, commit_phase=2, commit_mask=commit_mask) except Exception as i_e: debug['next_stage'] = i_e error = RuntimeError() if drop and notx: self.drop(transaction.uid) if error is not None: error.debug = debug raise error self.ipdb.routes.gc() return self def remove(self): self['ipdb_scope'] = 'remove' return self def shadow(self): self['ipdb_scope'] = 'shadow' return self def detach(self): if self.get('family') == AF_MPLS: table = 'mpls' else: table = self.get('table', 254) del self.ipdb.routes.tables[table][self.make_key(self)] class Route(BaseRoute): _nested = ['encap', 'metrics'] wd_key = WatchdogKey @classmethod def make_encap(cls, encap): ''' Normalize encap object ''' labels = encap.get('labels', None) if isinstance(labels, (list, tuple, set)): labels = '/'.join(map(lambda x: str(x['label']) if isinstance(x, dict) else str(x), labels)) if not isinstance(labels, basestring): raise TypeError('labels struct not supported') return {'type': encap.get('type', 'mpls'), 'labels': labels} @classmethod def make_nh_key(cls, msg): ''' Construct from a netlink message a multipath nexthop key ''' values = [] if isinstance(msg, nlmsg_base): for field in IPNHKey._fields: v = msg.get_attr(msg.name2nla(field)) if field == 'encap': # 1. encap type if msg.get_attr('RTA_ENCAP_TYPE') != 1: # FIXME values.append(None) continue # 2. 
encap_type == 'mpls' v = '/'.join([str(x['label']) for x in v.get_attr('MPLS_IPTUNNEL_DST')]) elif v is None: v = msg.get(field, None) values.append(v) elif isinstance(msg, dict): for field in IPNHKey._fields: v = msg.get(field, None) if field == 'encap' and v and v['labels']: v = v['labels'] elif (field == 'encap') and \ (len(msg.get('multipath', []) or []) == 1): v = (tuple(msg['multipath'].raw.values())[0] .get('encap', {}) .get('labels', None)) elif field == 'encap': v = None elif (field == 'gateway') and \ (len(msg.get('multipath', []) or []) == 1) and \ not v: v = (tuple(msg['multipath'].raw.values())[0] .get('gateway', None)) if field == 'encap' and isinstance(v, (list, tuple, set)): v = '/'.join(map(lambda x: str(x['label']) if isinstance(x, dict) else str(x), v)) values.append(v) else: raise TypeError('prime not supported: %s' % type(msg)) return IPNHKey(*values) @classmethod def make_key(cls, msg): ''' Construct from a netlink message a key that can be used to locate the route in the table ''' values = [] if isinstance(msg, nlmsg_base): for field in RouteKey._fields: v = msg.get_attr(msg.name2nla(field)) if field == 'dst': if v is not None: v = '%s/%s' % (v, msg['dst_len']) else: v = 'default' elif field == 'tos' and msg.get('family') != AF_INET: # ignore tos field for non-IPv6 routes, # as it used as a key only there v = None elif v is None: v = msg.get(field, None) values.append(v) elif isinstance(msg, dict): for field in RouteKey._fields: v = msg.get(field, None) if field == 'dst' and \ isinstance(v, basestring) and \ v.find(':') > -1: v = v.split('/') ip = inet_ntop(AF_INET6, inet_pton(AF_INET6, v[0])) if len(v) > 1: v = '%s/%s' % (ip, v[1]) else: v = ip elif field == 'tos' and msg.get('family') != AF_INET: # ignore tos field for non-IPv6 routes, # as it used as a key only there v = None values.append(v) else: raise TypeError('prime not supported: %s' % type(msg)) return RouteKey(*values) def __setitem__(self, key, value): ret = value if (key in ('encap', 'metrics')) and isinstance(value, dict): # transactionals attach as is if type(value) in (Encap, Metrics): with self._direct_state: return Transactional.__setitem__(self, key, value) # check, if it exists already ret = Transactional.__getitem__(self, key) # it doesn't # (plain dict can be safely discarded) if (type(ret) == dict) or not ret: # bake transactionals in place if key == 'encap': ret = Encap(parent=self) elif key == 'metrics': ret = Metrics(parent=self) # attach transactional to the route with self._direct_state: Transactional.__setitem__(self, key, ret) # begin() works only if the transactional is attached if any(value.values()): if self._mode in ('implicit', 'explicit'): ret._begin(tid=self.current_tx.uid) [ret.__setitem__(k, v) for k, v in value.items() if v is not None] # corresponding transactional exists else: # set fields for k in ret: ret[k] = value.get(k, None) return elif key == 'multipath': cur = Transactional.__getitem__(self, key) if isinstance(cur, NextHopSet): # load entries vs = NextHopSet(value) for key in vs - cur: cur.add(key) for key in cur - vs: cur.remove(key) else: # drop any result of `update()` Transactional.__setitem__(self, key, NextHopSet(value)) return elif key == 'encap_type' and not isinstance(value, int): ret = encap_type.get(value, value) elif key == 'type' and not isinstance(value, int): ret = rt_type.get(value, value) elif key == 'proto' and not isinstance(value, int): ret = rt_proto.get(value, value) elif key == 'dst' and \ isinstance(value, basestring) and \ value in 
('0.0.0.0/0', '::/0'): ret = 'default' Transactional.__setitem__(self, key, ret) def __getitem__(self, key): ret = Transactional.__getitem__(self, key) if (key in ('encap', 'metrics', 'multipath')) and (ret is None): with self._direct_state: self[key] = [] if key == 'multipath' else {} ret = self[key] return ret class MPLSRoute(BaseRoute): wd_key = WatchdogMPLSKey _nested = ['via'] @classmethod def make_nh_key(cls, msg): ''' Construct from a netlink message a multipath nexthop key ''' return MPLSNHKey(newdst=tuple(msg['newdst']), via=msg.get('via', {}).get('addr', None), oif=msg.get('oif', None)) @classmethod def make_key(cls, msg): ''' Construct from a netlink message a key that can be used to locate the route in the table ''' ret = None if isinstance(msg, nlmsg): ret = msg.get_attr('RTA_DST') elif isinstance(msg, dict): ret = msg.get('dst', None) else: raise TypeError('prime not supported') if isinstance(ret, list): ret = ret[0]['label'] return ret def __setitem__(self, key, value): if key == 'via' and isinstance(value, dict): # replace with a new transactional if type(value) == Via: with self._direct_state: return BaseRoute.__setitem__(self, key, value) # or load the dict ret = BaseRoute.__getitem__(self, key) if not isinstance(ret, Via): ret = Via(parent=self) # attach new transactional -- replace any # non-Via object (may be a result of update()) with self._direct_state: BaseRoute.__setitem__(self, key, ret) # load value into the new object if any(value.values()): if self._mode in ('implicit', 'explicit'): ret._begin(tid=self.current_tx.uid) [ret.__setitem__(k, v) for k, v in value.items() if v is not None] else: # load value into existing object for k in ret: ret[k] = value.get(k, None) return elif key == 'multipath': cur = BaseRoute.__getitem__(self, key) if isinstance(cur, NextHopSet): # load entries vs = NextHopSet(value) for key in vs - cur: cur.add(key) for key in cur - vs: cur.remove(key) else: BaseRoute.__setitem__(self, key, NextHopSet(value)) else: BaseRoute.__setitem__(self, key, value) def __getitem__(self, key): with self._direct_state: ret = BaseRoute.__getitem__(self, key) if key == 'multipath' and ret is None: self[key] = [] ret = self[key] elif key == 'via' and ret is None: self[key] = {} ret = self[key] return ret class RoutingTable(object): route_class = Route def __init__(self, ipdb, prime=None): self.ipdb = ipdb self.lock = threading.Lock() self.idx = {} self.kdx = {} def __nogc__(self): return self.filter(lambda x: x['route']['ipdb_scope'] != 'gc') def __repr__(self): return repr([x['route'] for x in self.__nogc__()]) def __len__(self): return len(self.keys()) def __iter__(self): for record in self.__nogc__(): yield record['route'] def gc(self): now = time.time() for route in self.filter({'ipdb_scope': 'gc'}): if now - route['route']._gctime < 2: continue try: if not self.ipdb.nl.route('dump', **route['route']): raise with route['route']._direct_state: route['route']['ipdb_scope'] = 'system' except: del self.idx[route['key']] def keys(self, key='dst'): with self.lock: return [x['route'][key] for x in self.__nogc__()] def items(self): for key in self.keys(): yield (key, self[key]) def filter(self, target, oneshot=False): # if isinstance(target, types.FunctionType): return filter(target, [x for x in tuple(self.idx.values())]) if isinstance(target, basestring): target = {'dst': target} if not isinstance(target, dict): raise TypeError('target type not supported: %s' % type(target)) ret = [] for record in tuple(self.idx.values()): for key, value in 
tuple(target.items()): if (key not in record['route']) or \ (value != record['route'][key]): break else: ret.append(record) if oneshot: return ret return ret def describe(self, target, forward=False): # match the route by index -- a bit meaningless, # but for compatibility if isinstance(target, int): keys = [x['key'] for x in self.__nogc__()] return self.idx[keys[target]] # match the route by key if isinstance(target, (tuple, list)): # full match return self.idx[RouteKey(*target)] if isinstance(target, nlmsg): return self.idx[Route.make_key(target)] # match the route by filter ret = self.filter(target, oneshot=True) if ret: return ret[0] if not forward: raise KeyError('record not found') # match the route by dict spec if not isinstance(target, dict): raise TypeError('lookups can be done only with dict targets') # split masks if target.get('dst', '').find('/') >= 0: dst = target['dst'].split('/') target['dst'] = dst[0] target['dst_len'] = int(dst[1]) if target.get('src', '').find('/') >= 0: src = target['src'].split('/') target['src'] = src[0] target['src_len'] = int(src[1]) # load and return the route, if exists route = Route(self.ipdb) ret = self.ipdb.nl.get_routes(**target) if not ret: raise KeyError('record not found') route.load_netlink(ret[0]) return {'route': route, 'key': None} def __delitem__(self, key): with self.lock: item = self.describe(key, forward=False) del self.idx[self.route_class.make_key(item['route'])] def load(self, msg): key = self.route_class.make_key(msg) self[key] = msg return key def __setitem__(self, key, value): with self.lock: try: record = self.describe(key, forward=False) except KeyError: record = {'route': self.route_class(self.ipdb), 'key': None} if isinstance(value, nlmsg): record['route'].load_netlink(value) elif isinstance(value, self.route_class): record['route'] = value elif isinstance(value, dict): with record['route']._direct_state: record['route'].update(value) key = self.route_class.make_key(record['route']) if record['key'] is None: self.idx[key] = {'route': record['route'], 'key': key} else: self.idx[key] = record if record['key'] != key: del self.idx[record['key']] record['key'] = key def __getitem__(self, key): with self.lock: return self.describe(key, forward=False)['route'] def __contains__(self, key): try: with self.lock: self.describe(key, forward=False) return True except KeyError: return False class MPLSTable(RoutingTable): route_class = MPLSRoute def keys(self): return self.idx.keys() def describe(self, target, forward=False): # match by key if isinstance(target, int): return self.idx[target] # match by rtmsg if isinstance(target, rtmsg): return self.idx[self.route_class.make_key(target)] raise KeyError('record not found') class RoutingTableSet(object): def __init__(self, ipdb): self.ipdb = ipdb self._gctime = time.time() self.ignore_rtables = ipdb._ignore_rtables or [] self.tables = {254: RoutingTable(self.ipdb)} self._event_map = {'RTM_NEWROUTE': self.load_netlink, 'RTM_DELROUTE': self.load_netlink, 'RTM_NEWLINK': self.gc_mark_link, 'RTM_DELLINK': self.gc_mark_link, 'RTM_DELADDR': self.gc_mark_addr} def _register(self): for msg in self.ipdb.nl.get_routes(family=AF_INET, match={'family': AF_INET}): self.load_netlink(msg) for msg in self.ipdb.nl.get_routes(family=AF_INET6, match={'family': AF_INET6}): self.load_netlink(msg) for msg in self.ipdb.nl.get_routes(family=AF_MPLS, match={'family': AF_MPLS}): self.load_netlink(msg) def add(self, spec=None, **kwarg): ''' Create a route from a dictionary ''' spec = dict(spec or kwarg) gateway = 
spec.get('gateway') or '' dst = spec.get('dst') or '' if 'tos' not in spec: spec['tos'] = 0 if 'scope' not in spec: spec['scope'] = 0 if 'table' not in spec: spec['table'] = 254 if 'family' not in spec: if (dst.find(':') > -1) or (gateway.find(':') > -1): spec['family'] = AF_INET6 else: spec['family'] = AF_INET if not dst: raise ValueError('dst not specified') if isinstance(dst, basestring) and \ (dst not in ('', 'default')) and \ ('/' not in dst): if spec['family'] == AF_INET: spec['dst'] = dst + '/32' elif spec['family'] == AF_INET6: spec['dst'] = dst + '/128' if 'priority' not in spec: if spec['family'] == AF_INET6: spec['priority'] = IP6_RT_PRIO_USER else: spec['priority'] = None multipath = spec.pop('multipath', []) if spec.get('family', 0) == AF_MPLS: table = 'mpls' if table not in self.tables: self.tables[table] = MPLSTable(self.ipdb) route = MPLSRoute(self.ipdb) else: table = spec.get('table', 254) if table not in self.tables: self.tables[table] = RoutingTable(self.ipdb) route = Route(self.ipdb) route.update(spec) with route._direct_state: route['ipdb_scope'] = 'create' for nh in multipath: if 'encap' in nh: nh['encap'] = route.make_encap(nh['encap']) if table == 'mpls': nh['family'] = AF_MPLS route.add_nh(nh) route.begin() for (key, value) in spec.items(): if key == 'encap': route[key] = route.make_encap(value) else: route[key] = value self.tables[table][route.make_key(route)] = route return route def load_netlink(self, msg): ''' Loads an existing route from a rtmsg ''' if not isinstance(msg, rtmsg): return if msg['family'] == AF_MPLS: table = 'mpls' else: table = msg.get_attr('RTA_TABLE', msg['table']) if table in self.ignore_rtables: return now = time.time() if now - self._gctime > 5: self._gctime = now self.gc() # RTM_DELROUTE if msg['event'] == 'RTM_DELROUTE': try: # locate the record record = self.tables[table][msg] # delete the record if record['ipdb_scope'] not in ('locked', 'shadow'): del self.tables[table][msg] with record._direct_state: record['ipdb_scope'] = 'detached' except Exception as e: # just ignore this failure for now log.debug("delroute failed for %s", e) return # RTM_NEWROUTE if table not in self.tables: if table == 'mpls': self.tables[table] = MPLSTable(self.ipdb) else: self.tables[table] = RoutingTable(self.ipdb) self.tables[table].load(msg) def gc_mark_addr(self, msg): ## # Find invalid IPv4 route records after addr delete # # Example:: # $ sudo ip link add test0 type dummy # $ sudo ip link set dev test0 up # $ sudo ip addr add 172.18.0.5/24 dev test0 # $ sudo ip route add 10.1.2.0/24 via 172.18.0.1 # ... # $ sudo ip addr flush dev test0 # # The route {'dst': '10.1.2.0/24', 'gateway': '172.18.0.1'} # will stay in the routing table being removed from the system. # That's because the kernel doesn't send IPv4 route updates in # that case, so we have to calculate the update here -- or load # all the routes from scratch. The latter may be far too # expensive. # # See http://www.spinics.net/lists/netdev/msg254186.html for # background on this kernel behavior. # Simply ignore secondary addresses, as they don't matter if msg['flags'] & IFA_F_SECONDARY: return # When the primary address is removed, corresponding routes # may be silently discarded. But if promote_secondaries is set # to 1, the next secondary becomes a new primary, and routes # stay. There is no way to know here, whether promote_secondaries # was set at the moment of the address removal, so we have to # act as if it wasn't. 
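        # Worked example for the scenario above: 172.18.0.5/24 packs to
        # 0xac120005 and the /24 mask is 0xffffff00, so net == 0xac120000
        # (172.18.0.0).  The gateway 172.18.0.1 packs to 0xac120001, and
        # 0xac120001 & 0xac120000 == 0xac120000 == net, so routes via that
        # gateway are marked below with ipdb_scope = 'gc'.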
# Get the removed address: family = msg['family'] if family == AF_INET: addr = msg.get_attr('IFA_LOCAL') net = struct.unpack('>I', inet_pton(family, addr))[0] &\ (0xffffffff << (32 - msg['prefixlen'])) # now iterate all registered routes and mark those with # gateway from that network for record in self.filter({'family': family}): gw = record['route'].get('gateway') if gw: gwnet = struct.unpack('>I', inet_pton(family, gw))[0] & net if gwnet == net: with record['route']._direct_state: record['route']['ipdb_scope'] = 'gc' record['route']._gctime = time.time() elif family == AF_INET6: # Unlike IPv4, IPv6 route updates are sent after addr # delete, so no need to delete them here. pass else: # ignore not (IPv4 or IPv6) return def gc_mark_link(self, msg): ### # mark route records for GC after link delete # if msg['family'] != 0 or msg['state'] != 'down': return for record in self.filter({'oif': msg['index']}): with record['route']._direct_state: record['route']['ipdb_scope'] = 'gc' record['route']._gctime = time.time() for record in self.filter({'iif': msg['index']}): with record['route']._direct_state: record['route']['ipdb_scope'] = 'gc' record['route']._gctime = time.time() def gc(self): for table in self.tables.keys(): self.tables[table].gc() def remove(self, route, table=None): if isinstance(route, Route): table = route.get('table', 254) or 254 route = route.get('dst', 'default') else: table = table or 254 self.tables[table][route].remove() def filter(self, target): # FIXME: turn into generator! ret = [] for table in tuple(self.tables.values()): if table is not None: ret.extend(table.filter(target)) return ret def describe(self, spec, table=254): return self.tables[table].describe(spec) def get(self, dst, table=None): table = table or 254 return self.tables[table][dst] def keys(self, table=254, family=AF_UNSPEC): return [x['dst'] for x in self.tables[table] if (x.get('family') == family) or (family == AF_UNSPEC)] def has_key(self, key, table=254): return key in self.tables[table] def __contains__(self, key): return key in self.tables[254] def __getitem__(self, key): return self.get(key) def __setitem__(self, key, value): if key != value['dst']: raise ValueError("dst doesn't match key") return self.add(value) def __delitem__(self, key): return self.remove(key) def __repr__(self): return repr(self.tables[254]) spec = [{'name': 'routes', 'class': RoutingTableSet, 'kwarg': {}}] pyroute2-0.5.9/pyroute2/ipdb/rules.py0000644000175000017500000002333613610051400017442 0ustar peetpeet00000000000000import logging import traceback import threading from socket import AF_INET from socket import AF_INET6 from collections import namedtuple from pyroute2.netlink import rtnl from pyroute2.netlink.rtnl.fibmsg import fibmsg from pyroute2.netlink.rtnl.fibmsg import FR_ACT_NAMES from pyroute2.ipdb.exceptions import CommitException from pyroute2.ipdb.transactional import Transactional log = logging.getLogger(__name__) groups = rtnl.RTMGRP_IPV4_RULE |\ rtnl.RTMGRP_IPV6_RULE RuleKey = namedtuple('RuleKey', ('action', 'table', 'priority', 'iifname', 'oifname', 'fwmark', 'fwmask', 'family', 'goto', 'tun_id')) class Rule(Transactional): ''' Persistent transactional rule object ''' _fields = [fibmsg.nla2name(i[1]) for i in fibmsg.nla_map] for key, _ in fibmsg.fields: _fields.append(key) _fields.append('removal') _virtual_fields = ['ipdb_scope', 'ipdb_priority'] _fields.extend(_virtual_fields) cleanup = ('attrs', 'header', 'event', 'src_len', 'dst_len', 'res1', 'res2') @classmethod def make_key(cls, msg): values = [] if 
isinstance(msg, fibmsg): for field in RuleKey._fields: v = msg.get_attr(msg.name2nla(field)) if v is None: v = msg.get(field, 0) values.append(v) elif isinstance(msg, dict): for field in RuleKey._fields: values.append(msg.get(field, 0)) else: raise TypeError('prime not supported: %s' % type(msg)) return RuleKey(*values) def __init__(self, ipdb, mode=None, parent=None, uid=None): Transactional.__init__(self, ipdb, mode, parent, uid) with self._direct_state: self['ipdb_priority'] = 0 def load_netlink(self, msg): with self._direct_state: if self['ipdb_scope'] == 'locked': # do not touch locked interfaces return self['ipdb_scope'] = 'system' for (key, value) in msg.items(): self[key] = value # merge NLA for cell in msg['attrs']: # # Parse on demand # norm = fibmsg.nla2name(cell[0]) if norm in self.cleanup: continue self[norm] = cell[1] if msg.get_attr('FRA_DST'): dst = '%s/%s' % (msg.get_attr('FRA_DST'), msg['dst_len']) self['dst'] = dst if msg.get_attr('FRA_SRC'): src = '%s/%s' % (msg.get_attr('FRA_SRC'), msg['src_len']) self['src'] = src # finally, cleanup all not needed for item in self.cleanup: if item in self: del self[item] return self def commit(self, tid=None, transaction=None, commit_phase=1, commit_mask=0xff): if not commit_phase & commit_mask: return self error = None drop = self.ipdb.txdrop devop = 'set' debug = {'traceback': None, 'next_stage': None} notx = True if tid or transaction: notx = False if tid: transaction = self.global_tx[tid] else: transaction = transaction or self.current_tx # create a new route if self['ipdb_scope'] != 'system': devop = 'add' # work on an existing route snapshot = self.pick() added, removed = transaction // snapshot added.pop('ipdb_scope', None) removed.pop('ipdb_scope', None) try: # rule add/set if any(added.values()) or devop == 'add': old_key = self.make_key(self) new_key = self.make_key(transaction) if new_key != old_key: # check for the key conflict if new_key in self.ipdb.rules: raise CommitException('rule priority conflict') else: self.ipdb.rules[new_key] = self self.nl.rule('del', **old_key._asdict()) self.nl.rule('add', **transaction) else: if devop != 'add': with self._direct_state: self['ipdb_scope'] = 'locked' wd = self.ipdb.watchdog('RTM_DELRULE', **old_key._asdict()) self.nl.rule('del', **old_key._asdict()) wd.wait() with self._direct_state: self['ipdb_scope'] = 'reload' self.nl.rule('add', **transaction) transaction.wait_all_targets() # rule removal if (transaction['ipdb_scope'] in ('shadow', 'remove')) or\ ((transaction['ipdb_scope'] == 'create') and commit_phase == 2): if transaction['ipdb_scope'] == 'shadow': with self._direct_state: self['ipdb_scope'] = 'locked' # create watchdog key = self.make_key(snapshot) wd = self.ipdb.watchdog('RTM_DELRULE', **key._asdict()) self.nl.rule('del', **key._asdict()) wd.wait() if transaction['ipdb_scope'] == 'shadow': with self._direct_state: self['ipdb_scope'] = 'shadow' # everything ok drop = True except Exception as e: error = e # prepare postmortem debug['traceback'] = traceback.format_exc() debug['error_stack'] = [] debug['next_stage'] = None if commit_phase == 1: try: self.commit(transaction=snapshot, commit_phase=2, commit_mask=commit_mask) except Exception as i_e: debug['next_stage'] = i_e error = RuntimeError() if drop and notx: self.drop(transaction.uid) if error is not None: error.debug = debug raise error return self def remove(self): self['ipdb_scope'] = 'remove' return self def shadow(self): self['ipdb_scope'] = 'shadow' return self class RulesDict(dict): def __init__(self, ipdb): 
self.ipdb = ipdb self.lock = threading.Lock() self._event_map = {'RTM_NEWRULE': self.load_netlink, 'RTM_DELRULE': self.load_netlink} def _register(self): for msg in self.ipdb.nl.get_rules(family=AF_INET): self.load_netlink(msg) for msg in self.ipdb.nl.get_rules(family=AF_INET6): self.load_netlink(msg) def __getitem__(self, key): with self.lock: if isinstance(key, RuleKey): return super(RulesDict, self).__getitem__(key) elif isinstance(key, tuple): return super(RulesDict, self).__getitem__(RuleKey(*key)) elif isinstance(key, int): for k in self.keys(): if key == k[2]: return super(RulesDict, self).__getitem__(k) elif isinstance(key, dict): for v in self.values(): for k in key: if key[k] != v.get(k, None): break else: return v def add(self, spec=None, **kwarg): ''' Create a rule from a dictionary ''' spec = dict(spec or kwarg) # action and priority are parts of the key, so # they must be specified if 'priority' not in spec: spec['priority'] = 32000 if 'table' in spec: spec['action'] = FR_ACT_NAMES['FR_ACT_TO_TBL'] elif 'goto' in spec: spec['action'] = FR_ACT_NAMES['FR_ACT_GOTO'] if 'family' not in spec: spec['family'] = AF_INET rule = Rule(self.ipdb) rule.update(spec) # setup the scope with rule._direct_state: rule['ipdb_scope'] = 'create' # rule.begin() for (key, value) in spec.items(): rule[key] = value self[rule.make_key(spec)] = rule return rule def load_netlink(self, msg): if not isinstance(msg, fibmsg): return key = Rule.make_key(msg) # RTM_DELRULE if msg['event'] == 'RTM_DELRULE': try: # locate the record record = self[key] # delete the record if record['ipdb_scope'] not in ('locked', 'shadow'): del self[key] with record._direct_state: record['ipdb_scope'] = 'detached' except Exception as e: # just ignore this failure for now log.debug("delrule failed for %s", e) return # RTM_NEWRULE if key not in self: self[key] = Rule(self.ipdb) self[key].load_netlink(msg) return self[key] spec = [{'name': 'rules', 'class': RulesDict, 'kwarg': {}}] pyroute2-0.5.9/pyroute2/ipdb/transactional.py0000644000175000017500000004025713610051400021153 0ustar peetpeet00000000000000''' ''' import logging import threading from pyroute2.common import uuid32 from pyroute2.common import Dotkeys from pyroute2.ipdb.linkedset import LinkedSet from pyroute2.ipdb.exceptions import CommitException # How long should we wait on EACH commit() checkpoint: for ipaddr, # ports etc. That's not total commit() timeout. SYNC_TIMEOUT = 5 log = logging.getLogger(__name__) class State(object): def __init__(self, lock=None): self.lock = lock or threading.Lock() self.flag = 0 def acquire(self): self.lock.acquire() self.flag += 1 def release(self): if self.flag < 1: raise RuntimeError('release unlocked state') self.flag -= 1 self.lock.release() def is_set(self): return self.flag def __enter__(self): self.acquire() return self def __exit__(self, exc_type, exc_value, traceback): self.release() def update(f): def decorated(self, *argv, **kwarg): if self._mode == 'snapshot': # short-circuit with self._write_lock: return f(self, True, *argv, **kwarg) elif self._mode == 'readonly': raise RuntimeError('can not change readonly object') with self._write_lock: direct = self._direct_state.is_set() if not direct: # 1. 'implicit': begin transaction, if there is none if self._mode == 'implicit': if not self.current_tx: self.begin() # 2. 
require open transaction for 'explicit' type elif self._mode == 'explicit': if not self.current_tx: raise TypeError('start a transaction first') # do not support other modes else: raise TypeError('transaction mode not supported') # now that the transaction _is_ open return f(self, direct, *argv, **kwarg) decorated.__doc__ = f.__doc__ return decorated def with_transaction(f): def decorated(self, direct, *argv, **kwarg): if direct: f(self, *argv, **kwarg) else: transaction = self.current_tx f(transaction, *argv, **kwarg) return self return update(decorated) class Transactional(Dotkeys): ''' Utility class that implements common transactional logic. ''' _fields = [] _virtual_fields = [] _fields_cmp = {} _linked_sets = [] _nested = [] def __init__(self, ipdb=None, mode=None, parent=None, uid=None): # if ipdb is not None: self.nl = ipdb.nl self.ipdb = ipdb else: self.nl = None self.ipdb = None # self._parent = None if parent is not None: self._mode = mode or parent._mode self._parent = parent elif ipdb is not None: self._mode = mode or ipdb.mode else: self._mode = mode or 'implicit' # self.nlmsg = None self.uid = uid or uuid32() self.last_error = None self._commit_hooks = [] self._sids = [] self._ts = threading.local() self._snapshots = {} self.global_tx = {} self._targets = {} self._local_targets = {} self._write_lock = threading.RLock() self._direct_state = State(self._write_lock) self._linked_sets = self._linked_sets or set() # for i in self._fields: Dotkeys.__setitem__(self, i, None) @property def ro(self): return self.pick(detached=False, readonly=True) def register_commit_hook(self, hook): ''' ''' self._commit_hooks.append(hook) def unregister_commit_hook(self, hook): ''' ''' with self._write_lock: for cb in tuple(self._commit_hooks): if hook == cb: self._commit_hooks.pop(self._commit_hooks.index(cb)) ## # Object serialization: dump, pick def dump(self, not_none=True): ''' ''' with self._write_lock: res = {} for key in self: if self[key] is not None and key[0] != '_': if isinstance(self[key], Transactional): res[key] = self[key].dump() elif isinstance(self[key], LinkedSet): res[key] = tuple(self[key]) else: res[key] = self[key] return res def pick(self, detached=True, uid=None, parent=None, readonly=False): ''' Get a snapshot of the object. Can be of two types: * detached=True -- (default) "true" snapshot * detached=False -- keep ip addr set updated from OS Please note, that "updated" doesn't mean "in sync". The reason behind this logic is that snapshots can be used as transactions. 
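        A short usage sketch (the `ipdb` instance and the 'eth0' name are
        illustrative assumptions, not part of the original docs)::

            iface = ipdb.interfaces['eth0']
            frozen = iface.pick()                # detached snapshot
            live = iface.pick(detached=False)    # ip addr set stays linked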
''' with self._write_lock: res = self.__class__(ipdb=self.ipdb, mode='snapshot', parent=parent, uid=uid) for (key, value) in self.items(): if self[key] is not None: if key in self._fields: res[key] = self[key] for key in self._linked_sets: res[key] = type(self[key])(self[key]) if not detached: self[key].connect(res[key]) if readonly: res._mode = 'readonly' return res ## # Context management: enter, exit def __enter__(self): if self._mode == 'readonly': return self elif self._mode not in ('implicit', 'explicit'): raise TypeError('context managers require a transactional mode') if not self.current_tx: self.begin() return self def __exit__(self, exc_type, exc_value, traceback): # apply transaction only if there was no error if self._mode == 'readonly': return elif exc_type is None: try: self.commit() except Exception as e: self.last_error = e raise ## # Implicit object transfomations def __repr__(self): res = {} for i in tuple(self): if self[i] is not None: res[i] = self[i] return res.__repr__() ## # Object ops: +, -, /, ... def __sub__(self, vs): # create result res = {} with self._direct_state: # simple keys for key in self: if (key in self._fields): if ((key not in vs) or (self[key] != vs[key])): res[key] = self[key] for key in self._linked_sets: diff = type(self[key])(self[key] - vs[key]) if diff: res[key] = diff else: res[key] = set() for key in self._nested: res[key] = self[key] - vs[key] return res def __floordiv__(self, vs): left = {} right = {} with self._direct_state: with vs._direct_state: for key in set(tuple(self.keys()) + tuple(vs.keys())): if self.get(key, None) != vs.get(key, None): left[key] = self.get(key) right[key] = vs.get(key) continue if key not in self: right[key] = vs[key] elif key not in vs: left[key] = self[key] for key in self._linked_sets: ldiff = type(self[key])(self[key] - vs[key]) rdiff = type(vs[key])(vs[key] - self[key]) if ldiff: left[key] = ldiff else: left[key] = set() if rdiff: right[key] = rdiff else: right[key] = set() for key in self._nested: left[key], right[key] = self[key] // vs[key] return left, right ## # Methods to be overloaded def detach(self): pass def load(self, data): pass def commit(self, *args, **kwarg): pass def last_snapshot_id(self): return self._sids[-1] def invalidate(self): # on failure, invalidate the interface and detach it # from the parent # 0. obtain lock on IPDB, to avoid deadlocks # ... all the DB updates will wait with self.ipdb.exclusive: # 1. drop the IPRoute() link self.nl = None # 2. clean up ipdb self.detach() # 3. invalidate the interface with self._direct_state: for i in tuple(self.keys()): del self[i] self['ipdb_scope'] = 'invalid' # 4. 
the rest self._mode = 'invalid' ## # Snapshot methods def revert(self, sid): with self._write_lock: assert sid in self._snapshots self.local_tx[sid] = self._snapshots[sid] self.global_tx[sid] = self._snapshots[sid] self.current_tx = self._snapshots[sid] self._sids.remove(sid) del self._snapshots[sid] return self def snapshot(self, sid=None): ''' Create new snapshot ''' if self._parent: raise RuntimeError("Can't init snapshot from a nested object") if (self.ipdb is not None) and self.ipdb._stop: raise RuntimeError("Can't create snapshots on released IPDB") t = self.pick(detached=True, uid=sid) self._snapshots[t.uid] = t self._sids.append(t.uid) for key, value in t.items(): if isinstance(value, Transactional): value.snapshot(sid=t.uid) return t.uid def last_snapshot(self): if not self._sids: raise TypeError('create a snapshot first') return self._snapshots[self._sids[-1]] ## # Current tx def _set_current_tx(self, tx): with self._write_lock: self._ts.current = tx def _get_current_tx(self): ''' The current active transaction (thread-local) ''' with self._write_lock: if not hasattr(self._ts, 'current'): self._ts.current = None return self._ts.current current_tx = property(_get_current_tx, _set_current_tx) ## # Local tx registry def _get_local_tx(self): with self._write_lock: if not hasattr(self._ts, 'tx'): self._ts.tx = {} return self._ts.tx local_tx = property(_get_local_tx) ## # Transaction ops: begin, review, drop def begin(self): ''' Start new transaction ''' if self._parent is not None: self._parent.begin() else: return self._begin() def _begin(self, tid=None): if (self.ipdb is not None) and self.ipdb._stop: raise RuntimeError("Can't start transaction on released IPDB") t = self.pick(detached=False, uid=tid) self.local_tx[t.uid] = t self.global_tx[t.uid] = t if self.current_tx is None: self.current_tx = t for key, value in t.items(): if isinstance(value, Transactional): # start transaction on a nested object value._begin(tid=t.uid) # link transaction to own one t[key] = value.global_tx[t.uid] return t.uid def review(self, tid=None): ''' Review the changes made in the transaction `tid` or in the current active transaction (thread-local) ''' if self.current_tx is None: raise TypeError('start a transaction first') tid = tid or self.current_tx.uid if self.get('ipdb_scope') == 'create': if self.current_tx is not None: prime = self.current_tx else: log.warning('the "create" scope without transaction') prime = self return dict([(x[0], x[1]) for x in prime.items() if x[1] is not None]) with self._write_lock: added = self.global_tx[tid] - self removed = self - self.global_tx[tid] for key in self._linked_sets: added['-%s' % (key)] = removed[key] added['+%s' % (key)] = added[key] del added[key] return added def drop(self, tid=None): ''' Drop a transaction. If tid is not specified, drop the current one. 
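        A usage sketch (the interface name and the address are illustrative
        assumptions)::

            iface = ipdb.interfaces['eth0']
            iface.begin()                 # open a transaction
            iface.add_ip('10.0.0.1/24')
            iface.drop()                  # discard it; nothing is committed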
''' with self._write_lock: if tid is None: tx = self.current_tx if tx is None: raise TypeError("no transaction") else: tx = self.global_tx[tid] if self.current_tx == tx: self.current_tx = None # detach linked sets for key in self._linked_sets: if tx[key] in self[key].links: self[key].disconnect(tx[key]) for (key, value) in self.items(): if isinstance(value, Transactional): try: value.drop(tx.uid) except KeyError: pass # finally -- delete the transaction del self.local_tx[tx.uid] del self.global_tx[tx.uid] ## # Property ops: set/get/delete @update def __setitem__(self, direct, key, value): if not direct: # automatically set target on the active transaction, # which must be started prior to that call transaction = self.current_tx transaction[key] = value if value is not None: transaction._targets[key] = threading.Event() else: # set the item Dotkeys.__setitem__(self, key, value) # update on local targets with self._write_lock: if key in self._local_targets: func = self._fields_cmp.get(key, lambda x, y: x == y) if func(value, self._local_targets[key].value): self._local_targets[key].set() # cascade update on nested targets for tn in tuple(self.global_tx.values()): if (key in tn._targets) and (key in tn): if self._fields_cmp.\ get(key, lambda x, y: x == y)(value, tn[key]): tn._targets[key].set() @update def __delitem__(self, direct, key): # firstly set targets self[key] = None # then continue with delete if not direct: transaction = self.current_tx if key in transaction: del transaction[key] else: Dotkeys.__delitem__(self, key) def option(self, key, value): self[key] = value return self def unset(self, key): del self[key] return self def wait_all_targets(self): for key, target in self._targets.items(): if key not in self._virtual_fields: target.wait(SYNC_TIMEOUT) if not target.is_set(): raise CommitException('target %s is not set' % key) def wait_target(self, key, timeout=SYNC_TIMEOUT): self._local_targets[key].wait(SYNC_TIMEOUT) with self._write_lock: return self._local_targets.pop(key).is_set() def set_target(self, key, value): with self._write_lock: self._local_targets[key] = threading.Event() self._local_targets[key].value = value if self.get(key) == value: self._local_targets[key].set() return self def mirror_target(self, key_from, key_to): with self._write_lock: self._local_targets[key_to] = self._local_targets[key_from] return self def set(self, key, value): self[key] = value return self pyroute2-0.5.9/pyroute2/ipdb/utils.py0000644000175000017500000000042213610051400017437 0ustar peetpeet00000000000000import os import subprocess def test_reachable_icmp(host): with open(os.devnull, 'w') as devnull: return subprocess.check_call(['ping', '-c', '1', host], stdout=devnull, stderr=devnull) pyroute2-0.5.9/pyroute2/iproute/0000755000175000017500000000000013621220110016476 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/iproute/__init__.py0000644000175000017500000001231213610051400020610 0ustar peetpeet00000000000000# -*- coding: utf-8 -*- ''' Classes ------- The RTNL API is provided by the class `RTNL_API`. 
It is a mixin class that works on top of any RTNL-compatible socket, so several classes with almost the same API are available: * `IPRoute` -- simple RTNL API * `NetNS` -- RTNL API in a network namespace * `IPBatch` -- RTNL packet compiler * `RemoteIPRoute` -- run RTNL remotely (no deployment required) Responses as lists ------------------ The netlink socket implementation in the pyroute2 is agnostic to particular netlink protocols, and always returns a list of messages as the response to a request sent to the kernel:: with IPRoute() as ipr: # this request returns one match eth0 = ipr.link_lookup(ifname='eth0') len(eth0) # -> 1, if exists, else 0 # but that one returns a set of up = ipr.link_lookup(operstate='UP') len(up) # -> k, where 0 <= k <= [interface count] Thus, always expect a list in the response, running any `IPRoute()` netlink request. NLMSG_ERROR responses ~~~~~~~~~~~~~~~~~~~~~ Some kernel subsystems return `NLMSG_ERROR` in response to any request. It is OK as long as `nlmsg["header"]["error"] is None`. Otherwise an exception will be raised by the parser. So if instead of an exception you get a `NLMSG_ERROR` message, it means `error == 0`, the same as `$? == 0` in bash. How to work with messages ~~~~~~~~~~~~~~~~~~~~~~~~~ Every netlink message contains header, fields and NLAs (netlink attributes). Every NLA is a netlink message... (see "recursion"). And the library provides parsed messages according to this scheme. Every RTNL message contains: * `nlmsg['header']` -- parsed header * `nlmsg['attrs']` -- NLA chain (parsed on demand) * 0 .. k data fields, e.g. `nlmsg['flags']` etc. * `nlmsg.header` -- the header fields spec * `nlmsg.fields` -- the data fields spec * `nlmsg.nla_map` -- NLA spec An important parser feature is that NLAs are parsed on demand, when someone tries to access them. Otherwise the parser doesn't waste CPU cycles. The NLA chain is a list-like structure, not a dictionary. The netlink standard doesn't require NLAs to be unique within one message:: {'attrs': [('IFLA_IFNAME', 'lo'), # [1] ('IFLA_TXQLEN', 1), ('IFLA_OPERSTATE', 'UNKNOWN'), ('IFLA_LINKMODE', 0), ('IFLA_MTU', 65536), ('IFLA_GROUP', 0), ('IFLA_PROMISCUITY', 0), ('IFLA_NUM_TX_QUEUES', 1), ('IFLA_NUM_RX_QUEUES', 1), ('IFLA_CARRIER', 1), ...], 'change': 0, 'event': 'RTM_NEWLINK', # [2] 'family': 0, 'flags': 65609, 'header': {'error': None, # [3] 'flags': 2, 'length': 1180, 'pid': 28233, 'sequence_number': 257, # [4] 'type': 16}, # [5] 'ifi_type': 772, 'index': 1} # [1] every NLA is parsed upon access # [2] this field is injected by the RTNL parser # [3] if not None, an exception will be raised # [4] more details in the netlink description # [5] 16 == RTM_NEWLINK To access fields:: msg['index'] == 1 To access one NLA:: msg.get_attr('IFLA_CARRIER') == 1 When an NLA with the specified name is not present in the chain, `get_attr()` returns `None`. To get the list of all NLAs of that name, use `get_attrs()`. A real example with NLA hierarchy, take notice of `get_attr()` and `get_attrs()` usage:: # for macvlan interfaces there may be several # IFLA_MACVLAN_MACADDR NLA provided, so use # get_attrs() to get all the list, not only # the first one (msg .get_attr('IFLA_LINKINFO') # one NLA .get_attr('IFLA_INFO_DATA') # one NLA .get_attrs('IFLA_MACVLAN_MACADDR')) # a list of The protocol itself has no limit for number of NLAs of the same type in one message, that's why we can not make a dictionary from them -- unlike PF_ROUTE messages. 
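A short defensive sketch following the notes above; the fallback value
is only an illustration, `get_attr()` simply returns `None` when the
NLA is absent::

    with IPRoute() as ipr:
        for link in ipr.get_links():
            mtu = link.get_attr('IFLA_MTU')
            if mtu is None:      # the NLA may be missing in the chain
                mtu = 1500       # illustrative fallback only
            print(link.get_attr('IFLA_IFNAME'), mtu)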
''' import sys from pyroute2 import config from pyroute2.common import failed_class from pyroute2.iproute.linux import RTNL_API from pyroute2.iproute.linux import IPBatch # compatibility fix -- LNST: from pyroute2.netlink.rtnl import (RTM_GETLINK, RTM_NEWLINK, RTM_DELLINK, RTM_GETADDR, RTM_NEWADDR, RTM_DELADDR) try: from pyroute2.iproute.remote import RemoteIPRoute except ImportError: RemoteIPRoute = failed_class('missing mitogen library') if sys.platform.startswith('win'): from pyroute2.iproute.windows import IPRoute from pyroute2.iproute.windows import RawIPRoute elif config.uname[0][-3:] == 'BSD': from pyroute2.iproute.bsd import IPRoute from pyroute2.iproute.bsd import RawIPRoute else: from pyroute2.iproute.linux import IPRoute from pyroute2.iproute.linux import RawIPRoute classes = [RTNL_API, IPBatch, IPRoute, RawIPRoute, RemoteIPRoute] constants = [RTM_GETLINK, RTM_NEWLINK, RTM_DELLINK, RTM_GETADDR, RTM_NEWADDR, RTM_DELADDR] pyroute2-0.5.9/pyroute2/iproute/bsd.py0000644000175000017500000002457713610051400017641 0ustar peetpeet00000000000000''' The library provides very basic RTNL API for BSD systems via protocol emulation. Only getters are supported yet, no setters. BSD employs PF_ROUTE sockets to send notifications about network object changes, but the protocol doesn not allow changing links/addresses/etc like Netlink. To change network setting one have to rely on system calls or external tools. Thus IPRoute on BSD systems is not as effective as on Linux, where all the changes are done via Netlink. The monitoring started with `bind()` is implemented as an implicit thread, started by the `bind()` call. This is done to have only one notification FD, used both for normal calls and notifications. This allows to use IPRoute objects in poll/select calls. On Linux systems RTNL API is provided by the netlink protocol, so no implicit threads are started by default to monitor the system updates. `IPRoute.bind(...)` may start the async cache thread, but only when asked explicitly:: # # Normal monitoring. Always starts monitoring thread on # FreeBSD / OpenBSD, no threads on Linux. # with IPRoute() as ipr: ipr.bind() ... # # Monitoring with async cache. Always starts cache thread # on Linux, ignored on FreeBSD / OpenBSD. # with IPRoute() as ipr: ipr.bind(async_cache=True) ... On all the supported platforms, be it Linux or BSD, the `IPRoute.recv(...)` method returns valid netlink RTNL raw binary payload and `IPRoute.get(...)` returns parsed RTNL messages. 
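Since the object exposes a file descriptor via `fileno()`, it can be
used with poll/select directly. A minimal monitoring sketch (assuming
`bind()` started the monitoring thread; error handling omitted)::

    import select

    with IPRoute() as ipr:
        ipr.bind()
        while True:
            rlist, _, _ = select.select([ipr.fileno()], [], [], 1)
            if not rlist:
                continue
            for msg in ipr.get():
                print(msg['event'])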
''' import os import errno import struct import select import threading from pyroute2 import config from pyroute2.netlink import (NLM_F_REQUEST, NLM_F_DUMP, NLM_F_MULTI, NLMSG_DONE) from pyroute2.netlink.rtnl import (RTM_NEWLINK, RTM_GETLINK, RTM_NEWADDR, RTM_GETADDR, RTM_NEWROUTE, RTM_GETROUTE, RTM_NEWNEIGH, RTM_GETNEIGH) from pyroute2.bsd.rtmsocket import RTMSocket from pyroute2.bsd.pf_route import IFF_VALUES from pyroute2.netlink.rtnl.ifinfmsg import IFF_NAMES from pyroute2.bsd.util import (ARP, Route, Ifconfig) from pyroute2.netlink.rtnl.marshal import MarshalRtnl from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg from pyroute2.netlink.rtnl.ifaddrmsg import ifaddrmsg from pyroute2.netlink.rtnl.ndmsg import ndmsg from pyroute2.netlink.rtnl.rtmsg import rtmsg from pyroute2.common import AddrPool from pyroute2.common import Namespace from pyroute2.proxy import NetlinkProxy try: import queue except ImportError: import Queue as queue class IPRoute(object): def __init__(self, *argv, **kwarg): if 'ssh' in kwarg: self._ssh = ['ssh', kwarg.pop('ssh')] else: self._ssh = [] async_qsize = kwarg.get('async_qsize') self._ifc = Ifconfig(cmd=self._ssh + ['ifconfig', '-a']) self._arp = ARP(cmd=self._ssh + ['arp', '-an']) self._route = Route(cmd=self._ssh + ['netstat', '-rn']) self.marshal = MarshalRtnl() send_ns = Namespace(self, {'addr_pool': AddrPool(0x10000, 0x1ffff), 'monitor': False}) self._sproxy = NetlinkProxy(policy='return', nl=send_ns) self._mon_th = None self._rtm = None self._brd_socket = None self._pfdr, self._pfdw = os.pipe() # notify external poll/select self._ctlr, self._ctlw = os.pipe() # notify monitoring thread self._outq = queue.Queue(maxsize=async_qsize or config.async_qsize) self._system_lock = threading.Lock() self.closed = threading.Event() def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.close() def clone(self): return self def close(self, code=errno.ECONNRESET): with self._system_lock: if self.closed.is_set(): return if self._mon_th is not None: os.write(self._ctlw, b'\0') self._mon_th.join() self._rtm.close() if code > 0: self._outq.put(struct.pack('IHHQIQQ', 28, 2, 0, 0, code, 0, 0)) os.write(self._pfdw, b'\0') for ep in (self._pfdr, self._pfdw, self._ctlr, self._ctlw): try: os.close(ep) except OSError: pass self.closed.set() def bind(self, *argv, **kwarg): with self._system_lock: if self._mon_th is not None: return if self._ssh: return self._mon_th = threading.Thread(target=self._monitor_thread, name='PF_ROUTE monitoring') self._mon_th.setDaemon(True) self._mon_th.start() def _monitor_thread(self): # Monitoring thread to convert arriving PF_ROUTE data into # the netlink format, enqueue it and notify poll/select. self._rtm = RTMSocket(output='netlink') inputs = [self._rtm.fileno(), self._ctlr] outputs = [] while True: try: events, _, _ = select.select(inputs, outputs, inputs) except: continue for fd in events: if fd == self._ctlr: # Main thread <-> monitor thread protocol is # pretty simple: discard the data and terminate # the monitor thread. os.read(self._ctlr, 1) return else: # Read the data from the socket and queue it msg = self._rtm.get() if msg is not None: msg.encode() self._outq.put(msg.data) # Notify external poll/select os.write(self._pfdw, b'\0') def fileno(self): # Every time when some new data arrives, one should write # into self._pfdw one byte to kick possible poll/select. # # Resp. recv() discards one byte from self._pfdr each call. 
return self._pfdr def get(self): data = self.recv() return self.marshal.parse(data) def recv(self, bufsize=None): os.read(self._pfdr, 1) return self._outq.get() def getsockopt(self, *argv, **kwarg): return 1024 * 1024 def sendto_gate(self, msg, addr): # # handle incoming netlink requests # # sendto_gate() receives single RTNL messages as objects # cmd = msg['header']['type'] flags = msg['header']['flags'] seq = msg['header']['sequence_number'] # work only on dump requests for now if flags != NLM_F_REQUEST | NLM_F_DUMP: return # if cmd == RTM_GETLINK: rtype = RTM_NEWLINK ret = self.get_links() elif cmd == RTM_GETADDR: rtype = RTM_NEWADDR ret = self.get_addr() elif cmd == RTM_GETROUTE: rtype = RTM_NEWROUTE ret = self.get_routes() elif cmd == RTM_GETNEIGH: rtype = RTM_NEWNEIGH ret = self.get_neighbours() # # set response type and finalize the message for r in ret: r['header']['type'] = rtype r['header']['flags'] = NLM_F_MULTI r['header']['sequence_number'] = seq # r = type(msg)() r['header']['type'] = NLMSG_DONE r['header']['sequence_number'] = seq ret.append(r) data = b'' for r in ret: r.encode() data += r.data self._outq.put(data) os.write(self._pfdw, b'\0') # 8<--------------------------------------------------------------- # def dump(self): ''' Iterate all the objects -- links, routes, addresses etc. ''' for method in (self.get_links, self.get_addr, self.get_neighbours, self.get_routes): for msg in method(): yield msg # 8<--------------------------------------------------------------- def get_links(self, *argv, **kwarg): ret = [] data = self._ifc.run() parsed = self._ifc.parse(data) for name, spec in parsed['links'].items(): msg = ifinfmsg().load(spec) msg['header']['type'] = RTM_NEWLINK del msg['value'] flags = msg['flags'] new_flags = 0 for value, name in IFF_VALUES.items(): if value & flags and name in IFF_NAMES: new_flags |= IFF_NAMES[name] msg['flags'] = new_flags ret.append(msg) return ret def get_addr(self, *argv, **kwarg): ret = [] data = self._ifc.run() parsed = self._ifc.parse(data) for name, specs in parsed['addrs'].items(): for spec in specs: msg = ifaddrmsg().load(spec) msg['header']['type'] = RTM_NEWADDR del msg['value'] ret.append(msg) return ret def get_neighbours(self, *argv, **kwarg): ifc = self._ifc.parse(self._ifc.run()) arp = self._arp.parse(self._arp.run()) ret = [] for spec in arp: if spec['ifname'] not in ifc['links']: continue spec['ifindex'] = ifc['links'][spec['ifname']]['index'] msg = ndmsg().load(spec) msg['header']['type'] = RTM_NEWNEIGH del msg['value'] ret.append(msg) return ret def get_routes(self, *argv, **kwarg): ifc = self._ifc.parse(self._ifc.run()) rta = self._route.parse(self._route.run()) ret = [] for spec in rta: if spec['ifname'] not in ifc['links']: continue idx = ifc['links'][spec['ifname']]['index'] spec['attrs'].append(['RTA_OIF', idx]) msg = rtmsg().load(spec) msg['header']['type'] = RTM_NEWROUTE del msg['value'] ret.append(msg) return ret class RawIPRoute(IPRoute): pass pyroute2-0.5.9/pyroute2/iproute/linux.py0000644000175000017500000021237013616276541020243 0ustar peetpeet00000000000000# -*- coding: utf-8 -*- import os import types import logging from socket import AF_INET from socket import AF_INET6 from socket import AF_UNSPEC from functools import partial from pyroute2 import config from pyroute2.config import AF_BRIDGE from pyroute2.netlink import NLMSG_ERROR from pyroute2.netlink import NLM_F_ATOMIC from pyroute2.netlink import NLM_F_ROOT from pyroute2.netlink import NLM_F_REPLACE from pyroute2.netlink import NLM_F_REQUEST from 
pyroute2.netlink import NLM_F_ACK from pyroute2.netlink import NLM_F_DUMP from pyroute2.netlink import NLM_F_CREATE from pyroute2.netlink import NLM_F_EXCL from pyroute2.netlink import NLM_F_APPEND from pyroute2.netlink.rtnl import RTM_NEWADDR from pyroute2.netlink.rtnl import RTM_GETADDR from pyroute2.netlink.rtnl import RTM_DELADDR from pyroute2.netlink.rtnl import RTM_NEWLINK from pyroute2.netlink.rtnl import RTM_GETLINK from pyroute2.netlink.rtnl import RTM_DELLINK from pyroute2.netlink.rtnl import RTM_NEWQDISC from pyroute2.netlink.rtnl import RTM_GETQDISC from pyroute2.netlink.rtnl import RTM_DELQDISC from pyroute2.netlink.rtnl import RTM_NEWTFILTER from pyroute2.netlink.rtnl import RTM_GETTFILTER from pyroute2.netlink.rtnl import RTM_DELTFILTER from pyroute2.netlink.rtnl import RTM_NEWTCLASS from pyroute2.netlink.rtnl import RTM_GETTCLASS from pyroute2.netlink.rtnl import RTM_DELTCLASS from pyroute2.netlink.rtnl import RTM_NEWRULE from pyroute2.netlink.rtnl import RTM_GETRULE from pyroute2.netlink.rtnl import RTM_DELRULE from pyroute2.netlink.rtnl import RTM_NEWROUTE from pyroute2.netlink.rtnl import RTM_GETROUTE from pyroute2.netlink.rtnl import RTM_DELROUTE from pyroute2.netlink.rtnl import RTM_NEWNEIGH from pyroute2.netlink.rtnl import RTM_GETNEIGH from pyroute2.netlink.rtnl import RTM_DELNEIGH from pyroute2.netlink.rtnl import RTM_SETLINK from pyroute2.netlink.rtnl import RTM_GETNEIGHTBL from pyroute2.netlink.rtnl import RTM_GETNSID from pyroute2.netlink.rtnl import RTM_NEWNETNS from pyroute2.netlink.rtnl import TC_H_ROOT from pyroute2.netlink.rtnl import rt_type from pyroute2.netlink.rtnl import rt_scope from pyroute2.netlink.rtnl import rt_proto from pyroute2.netlink.rtnl.req import IPLinkRequest from pyroute2.netlink.rtnl.req import IPBridgeRequest from pyroute2.netlink.rtnl.req import IPBrPortRequest from pyroute2.netlink.rtnl.req import IPRouteRequest from pyroute2.netlink.rtnl.req import IPRuleRequest from pyroute2.netlink.rtnl.tcmsg import plugins as tc_plugins from pyroute2.netlink.rtnl.tcmsg import tcmsg from pyroute2.netlink.rtnl.rtmsg import rtmsg from pyroute2.netlink.rtnl import ndmsg from pyroute2.netlink.rtnl.ndtmsg import ndtmsg from pyroute2.netlink.rtnl.fibmsg import fibmsg from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg from pyroute2.netlink.rtnl.ifinfmsg import IFF_NOARP from pyroute2.netlink.rtnl.ifaddrmsg import ifaddrmsg from pyroute2.netlink.rtnl.iprsocket import IPRSocket from pyroute2.netlink.rtnl.iprsocket import IPBatchSocket from pyroute2.netlink.rtnl.riprsocket import RawIPRSocket from pyroute2.netlink.rtnl.nsidmsg import nsidmsg from pyroute2.netlink.rtnl.nsinfmsg import nsinfmsg from pyroute2.netlink.exceptions import SkipInode from pyroute2.common import AF_MPLS from pyroute2.common import basestring from pyroute2.common import getbroadcast DEFAULT_TABLE = 254 log = logging.getLogger(__name__) def transform_handle(handle): if isinstance(handle, basestring): (major, minor) = [int(x if x else '0', 16) for x in handle.split(':')] handle = (major << 8 * 2) | minor return handle class RTNL_API(object): ''' `RTNL_API` should not be instantiated by itself. It is intended to be used as a mixin class. Following classes use `RTNL_API`: * `IPRoute` -- RTNL API to the current network namespace * `NetNS` -- RTNL API to another network namespace * `IPBatch` -- RTNL compiler * `ShellIPR` -- RTNL via standard I/O, runs IPRoute in a shell It is an old-school API, that provides access to rtnetlink as is. 
It helps you to retrieve and change almost all the data, available through rtnetlink:: from pyroute2 import IPRoute ipr = IPRoute() # create an interface ipr.link('add', ifname='brx', kind='bridge') # lookup the index dev = ipr.link_lookup(ifname='brx')[0] # bring it down ipr.link('set', index=dev, state='down') # change the interface MAC address and rename it just for fun ipr.link('set', index=dev, address='00:11:22:33:44:55', ifname='br-ctrl') # add primary IP address ipr.addr('add', index=dev, address='10.0.0.1', mask=24, broadcast='10.0.0.255') # add secondary IP address ipr.addr('add', index=dev, address='10.0.0.2', mask=24, broadcast='10.0.0.255') # bring it up ipr.link('set', index=dev, state='up') ''' def __init__(self, *argv, **kwarg): if 'netns_path' in kwarg: self.netns_path = kwarg['netns_path'] else: self.netns_path = config.netns_path super(RTNL_API, self).__init__(*argv, **kwarg) if not self.nlm_generator: def _match(*argv, **kwarg): return tuple(self._genmatch(*argv, **kwarg)) self._genmatch = self._match self._match = _match def _match(self, match, msgs): # filtered results, the generator version for msg in msgs: if hasattr(match, '__call__'): if match(msg): yield msg elif isinstance(match, dict): matches = [] for key in match: KEY = msg.name2nla(key) if isinstance(match[key], types.FunctionType): if msg.get(key) is not None: matches.append(match[key](msg.get(key))) elif msg.get_attr(KEY) is not None: matches.append(match[key](msg.get_attr(KEY))) else: matches.append(False) else: matches.append(msg.get(key) == match[key] or msg.get_attr(KEY) == match[key]) if all(matches): yield msg # 8<--------------------------------------------------------------- # def dump(self): ''' Iterate all the objects -- links, routes, addresses etc. ''' for method in (self.get_links, self.get_addr, self.get_neighbours, self.get_routes, partial(self.get_rules, family=AF_INET), partial(self.get_rules, family=AF_INET6)): for msg in method(): yield msg # 8<--------------------------------------------------------------- # # Listing methods # def get_qdiscs(self, index=None): ''' Get all queue disciplines for all interfaces or for specified one. ''' msg = tcmsg() msg['family'] = AF_UNSPEC ret = self.nlm_request(msg, RTM_GETQDISC) if index is None: return ret else: return [x for x in ret if x['index'] == index] def get_filters(self, index=0, handle=0, parent=0): ''' Get filters for specified interface, handle and parent. ''' msg = tcmsg() msg['family'] = AF_UNSPEC msg['index'] = index msg['handle'] = handle msg['parent'] = parent return self.nlm_request(msg, RTM_GETTFILTER) def get_classes(self, index=0): ''' Get classes for specified interface. ''' msg = tcmsg() msg['family'] = AF_UNSPEC msg['index'] = index return self.nlm_request(msg, RTM_GETTCLASS) def get_vlans(self, **kwarg): ''' Dump available vlan info on bridge ports ''' # IFLA_EXT_MASK, extended info mask # # include/uapi/linux/rtnetlink.h # 1 << 0 => RTEXT_FILTER_VF # 1 << 1 => RTEXT_FILTER_BRVLAN # 1 << 2 => RTEXT_FILTER_BRVLAN_COMPRESSED # 1 << 3 => RTEXT_FILTER_SKIP_STATS # # maybe place it as mapping into ifinfomsg.py? # match = kwarg.get('match', None) or kwarg or None return self.link('dump', family=AF_BRIDGE, ext_mask=2, match=match) def get_links(self, *argv, **kwarg): ''' Get network interfaces. By default returns all interfaces. 
Arguments vector can contain interface indices or a special keyword 'all':: ip.get_links() ip.get_links('all') ip.get_links(1, 2, 3) interfaces = [1, 2, 3] ip.get_links(*interfaces) ''' result = [] links = argv or [0] if links[0] == 'all': # compat syntax links = [0] if links[0] == 0: cmd = 'dump' else: cmd = 'get' for index in links: kwarg['index'] = index result.extend(self.link(cmd, **kwarg)) return result def get_neighbours(self, family=AF_UNSPEC, match=None, **kwarg): ''' Dump ARP cache records. The `family` keyword sets the family for the request: e.g. `AF_INET` or `AF_INET6` for arp cache, `AF_BRIDGE` for fdb. If other keyword arguments not empty, they are used as filter. Also, one can explicitly set filter as a function with the `match` parameter. Examples:: # get neighbours on the 3rd link: ip.get_neighbours(ifindex=3) # get a particular record by dst: ip.get_neighbours(dst='172.16.0.1') # get fdb records: ip.get_neighbours(AF_BRIDGE) # and filter them by a function: ip.get_neighbours(AF_BRIDGE, match=lambda x: x['state'] == 2) ''' return self.neigh('dump', family=family, match=match or kwarg) def get_ntables(self, family=AF_UNSPEC): ''' Get neighbour tables ''' msg = ndtmsg() msg['family'] = family return self.nlm_request(msg, RTM_GETNEIGHTBL) def get_addr(self, family=AF_UNSPEC, match=None, **kwarg): ''' Dump addresses. If family is not specified, both AF_INET and AF_INET6 addresses will be dumped:: # get all addresses ip.get_addr() It is possible to apply filters on the results:: # get addresses for the 2nd interface ip.get_addr(index=2) # get addresses with IFA_LABEL == 'eth0' ip.get_addr(label='eth0') # get all the subnet addresses on the interface, identified # by broadcast address (should be explicitly specified upon # creation) ip.get_addr(index=2, broadcast='192.168.1.255') A custom predicate can be used as a filter:: ip.get_addr(match=lambda x: x['index'] == 1) ''' return self.addr('dump', family=family, match=match or kwarg) def get_rules(self, family=AF_UNSPEC, match=None, **kwarg): ''' Get all rules. By default return all rules. To explicitly request the IPv4 rules use `family=AF_INET`. Example:: ip.get_rules() # get all the rules for all families ip.get_rules(family=AF_INET6) # get only IPv6 rules ''' return self.rule((RTM_GETRULE, NLM_F_REQUEST | NLM_F_ROOT | NLM_F_ATOMIC), family=family, match=match or kwarg) def get_routes(self, family=255, match=None, **kwarg): ''' Get all routes. You can specify the table. There are 255 routing classes (tables), and the kernel returns all the routes on each request. So the routine filters routes from full output. Example:: ip.get_routes() # get all the routes for all families ip.get_routes(family=AF_INET6) # get only IPv6 routes ip.get_routes(table=254) # get routes from 254 table The default family=255 is a hack. Despite the specs, the kernel returns only IPv4 routes for AF_UNSPEC family. But it returns all the routes for all the families if one uses an invalid value here. Hack but true. And let's hope the kernel team will not fix this bug. ''' # get a particular route? 
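        # a plain string 'dst' means an exact lookup: a single
        # RTM_GETROUTE request for that destination; otherwise dump
        # the requested table(s) and filter the result via _match()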
if isinstance(kwarg.get('dst'), basestring): return self.route('get', dst=kwarg['dst']) else: return self.route('dump', family=family, match=match or kwarg) # 8<--------------------------------------------------------------- # 8<--------------------------------------------------------------- # # List NetNS info # def _dump_one_ns(self, path, registry): item = nsinfmsg() item['netnsid'] = 0xffffffff # default netnsid "unknown" nsfd = 0 info = nsidmsg() msg = nsidmsg() try: nsfd = os.open(path, os.O_RDONLY) item['inode'] = os.fstat(nsfd).st_ino # # if the inode is registered, skip it # if item['inode'] in registry: raise SkipInode() registry.add(item['inode']) # # request NETNSA_NSID # # may not work on older kernels ( <4.20 ?) # msg['attrs'] = [('NETNSA_FD', nsfd)] try: for info in self.nlm_request(msg, RTM_GETNSID, NLM_F_REQUEST): # response to nlm_request() is a list or a generator, # that's why loop item['netnsid'] = info.get_attr('NETNSA_NSID') break except Exception: pass item['attrs'] = [('NSINFO_PATH', path)] except OSError: raise SkipInode() finally: if nsfd > 0: os.close(nsfd) item['header']['type'] = RTM_NEWNETNS item['event'] = 'RTM_NEWNETNS' return item def _dump_dir(self, path, registry): for name in os.listdir(path): # strictly speaking, there is no need to use os.sep, # since the code is not portable outside of Linux nspath = '%s%s%s' % (path, os.sep, name) try: yield self._dump_one_ns(nspath, registry) except SkipInode: pass def _dump_proc(self, registry): for name in os.listdir('/proc'): try: int(name) except ValueError: continue try: yield self._dump_one_ns('/proc/%s/ns/net' % name, registry) except SkipInode: pass def get_netns_info(self, list_proc=False): ''' A prototype method to list available netns and associated interfaces. A bit weird to have it here and not under `pyroute2.netns`, but it uses RTNL to get all the info. ''' # # register all the ns inodes, not to repeat items in the output # registry = set() # # fetch veth peers # peers = {} for peer in self.get_links(): netnsid = peer.get_attr('IFLA_LINK_NETNSID') if netnsid is not None: if netnsid not in peers: peers[netnsid] = [] peers[netnsid].append(peer.get_attr('IFLA_IFNAME')) # # chain iterators: # # * one iterator for every item in self.path # * one iterator for /proc//ns/net # views = [] for path in self.netns_path: views.append(self._dump_dir(path, registry)) if list_proc: views.append(self._dump_proc(registry)) # # iterate all the items # for view in views: try: for item in view: # # remove uninitialized 'value' field # del item['value'] # # fetch peers for that ns # for peer in peers.get(item['netnsid'], []): item['attrs'].append(('NSINFO_PEER', peer)) yield item except OSError: pass # 8<--------------------------------------------------------------- # 8<--------------------------------------------------------------- # # Shortcuts # def get_default_routes(self, family=AF_UNSPEC, table=DEFAULT_TABLE): ''' Get default routes ''' # according to iproute2/ip/iproute.c:print_route() return [x for x in self.get_routes(family, table=table) if (x.get_attr('RTA_DST', None) is None and x['dst_len'] == 0)] def link_lookup(self, **kwarg): ''' Lookup interface index (indeces) by first level NLA value. Example:: ip.link_lookup(address="52:54:00:9d:4e:3d") ip.link_lookup(ifname="lo") ip.link_lookup(operstate="UP") Please note, that link_lookup() returns list, not one value. 
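        A defensive pattern, since a lookup may legitimately match
        nothing (the interface name here is only an illustration)::

            indices = ip.link_lookup(ifname='eth0')
            if not indices:
                raise KeyError('no such interface')
            index = indices[0]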
''' name = tuple(kwarg.keys())[0] value = kwarg[name] name = str(name).upper() if not name.startswith('IFLA_'): name = 'IFLA_%s' % (name) return [k['index'] for k in [i for i in self.get_links() if 'attrs' in i] if [l for l in k['attrs'] if l[0] == name and l[1] == value]] # 8<--------------------------------------------------------------- # 8<--------------------------------------------------------------- # # Shortcuts to flush RTNL objects # def flush_routes(self, *argv, **kwarg): ''' Flush routes -- purge route records from a table. Arguments are the same as for `get_routes()` routine. Actually, this routine implements a pipe from `get_routes()` to `nlm_request()`. ''' ret = [] for route in self.get_routes(*argv, **kwarg): self.put(route, msg_type=RTM_DELROUTE, msg_flags=NLM_F_REQUEST) ret.append(route) return ret def flush_addr(self, *argv, **kwarg): ''' Flush IP addresses. Examples:: # flush all addresses on the interface with index 2: ipr.flush_addr(index=2) # flush all addresses with IFA_LABEL='eth0': ipr.flush_addr(label='eth0') ''' flags = NLM_F_CREATE | NLM_F_EXCL | NLM_F_REQUEST ret = [] for addr in self.get_addr(*argv, **kwarg): self.put(addr, msg_type=RTM_DELADDR, msg_flags=flags) ret.append(addr) return ret def flush_rules(self, *argv, **kwarg): ''' Flush rules. Please keep in mind, that by default the function operates on **all** rules of **all** families. To work only on IPv4 rules, one should explicitly specify `family=AF_INET`. Examples:: # flush all IPv4 rule with priorities above 5 and below 32000 ipr.flush_rules(family=AF_INET, priority=lambda x: 5 < x < 32000) # flush all IPv6 rules that point to table 250: ipr.flush_rules(family=socket.AF_INET6, table=250) ''' flags = NLM_F_CREATE | NLM_F_EXCL | NLM_F_REQUEST ret = [] for rule in self.get_rules(*argv, **kwarg): self.put(rule, msg_type=RTM_DELRULE, msg_flags=flags) ret.append(rule) return ret # 8<--------------------------------------------------------------- # 8<--------------------------------------------------------------- # # Extensions to low-level functions # def brport(self, command, **kwarg): ''' Set bridge port parameters. Example:: idx = ip.link_lookup(ifname='eth0') ip.brport("set", index=idx, unicast_flood=0, cost=200) ip.brport("show", index=idx) Possible keywords are NLA names for the `protinfo_bridge` class, without the prefix and in lower letters. ''' if (command in ('dump', 'show')) and ('match' not in kwarg): match = kwarg else: match = kwarg.pop('match', None) flags_dump = NLM_F_REQUEST | NLM_F_DUMP flags_req = NLM_F_REQUEST | NLM_F_ACK commands = {'set': (RTM_SETLINK, flags_req), 'dump': (RTM_GETLINK, flags_dump), 'show': (RTM_GETLINK, flags_dump)} (command, msg_flags) = commands.get(command, command) msg = ifinfmsg() if command == RTM_GETLINK: msg['index'] = kwarg.get('index', 0) else: msg['index'] = kwarg.pop('index', 0) msg['family'] = AF_BRIDGE protinfo = IPBrPortRequest(kwarg) msg['attrs'].append(('IFLA_PROTINFO', protinfo, 0x8000)) ret = self.nlm_request(msg, msg_type=command, msg_flags=msg_flags) if match is not None: ret = self._match(match, ret) if not (command == RTM_GETLINK and self.nlm_generator): ret = tuple(ret) return ret def vlan_filter(self, command, **kwarg): ''' Vlan filters is another approach to support vlans in Linux. 
Before vlan filters were introduced, there was only one way to bridge vlans: one had to create vlan interfaces and then add them as ports:: +------+ +----------+ net --> | eth0 | <--> | eth0.500 | <---+ +------+ +----------+ | v +------+ +-----+ net --> | eth1 | | br0 | +------+ +-----+ ^ +------+ +----------+ | net --> | eth2 | <--> | eth2.500 | <---+ +------+ +----------+ It means that one has to create as many bridges, as there were vlans. Vlan filters allow to bridge together underlying interfaces and create vlans already on the bridge:: # v500 label shows which interfaces have vlan filter +------+ v500 net --> | eth0 | <-------+ +------+ | v +------+ +-----+ +---------+ net --> | eth1 | <--> | br0 |<-->| br0v500 | +------+ +-----+ +---------+ ^ +------+ v500 | net --> | eth2 | <-------+ +------+ In this example vlan 500 will be allowed only on ports `eth0` and `eth2`, though all three eth nics are bridged. Some example code:: # create bridge ip.link("add", ifname="br0", kind="bridge") # attach a port ip.link("set", index=ip.link_lookup(ifname="eth0")[0], master=ip.link_lookup(ifname="br0")[0]) # set vlan filter ip.vlan_filter("add", index=ip.link_lookup(ifname="eth0")[0], vlan_info={"vid": 500}) # create vlan interface on the bridge ip.link("add", ifname="br0v500", kind="vlan", link=ip.link_lookup(ifname="br0")[0], vlan_id=500) # set all UP ip.link("set", index=ip.link_lookup(ifname="br0")[0], state="up") ip.link("set", index=ip.link_lookup(ifname="br0v500")[0], state="up") ip.link("set", index=ip.link_lookup(ifname="eth0")[0], state="up") # set IP address ip.addr("add", index=ip.link_lookup(ifname="br0v500")[0], address="172.16.5.2", mask=24) Now all the traffic to the network 172.16.5.2/24 will go to vlan 500 only via ports that have such vlan filter. Required arguments for `vlan_filter()` -- `index` and `vlan_info`. Vlan info struct:: {"vid": uint16, "flags": uint16} More details: * kernel:Documentation/networking/switchdev.txt * pyroute2.netlink.rtnl.ifinfmsg:... vlan_info One can specify `flags` as int or as a list of flag names: * `master` == 0x1 * `pvid` == 0x2 * `untagged` == 0x4 * `range_begin` == 0x8 * `range_end` == 0x10 * `brentry` == 0x20 E.g.:: {"vid": 20, "flags": ["pvid", "untagged"]} # is equal to {"vid": 20, "flags": 6} Commands: **add** Add vlan filter to a bridge port. Example:: ip.vlan_filter("add", index=2, vlan_info={"vid": 200}) **del** Remove vlan filter from a bridge port. Example:: ip.vlan_filter("del", index=2, vlan_info={"vid": 200}) ''' flags_req = NLM_F_REQUEST | NLM_F_ACK commands = {'add': (RTM_SETLINK, flags_req), 'del': (RTM_DELLINK, flags_req)} kwarg['family'] = AF_BRIDGE kwarg['kwarg_filter'] = IPBridgeRequest (command, flags) = commands.get(command, command) return tuple(self.link((command, flags), **kwarg)) def fdb(self, command, **kwarg): ''' Bridge forwarding database management. More details: * kernel:Documentation/networking/switchdev.txt * pyroute2.netlink.rtnl.ndmsg **add** Add a new FDB record. 
Works in the same way as ARP cache management, but some additional NLAs can be used:: # simple FDB record # ip.fdb('add', ifindex=ip.link_lookup(ifname='br0')[0], lladdr='00:11:22:33:44:55', dst='10.0.0.1') # specify vlan # NB: vlan should exist on the device, use # `vlan_filter()` # ip.fdb('add', ifindex=ip.link_lookup(ifname='br0')[0], lladdr='00:11:22:33:44:55', dst='10.0.0.1', vlan=200) # specify vxlan id and port # NB: works only for vxlan devices, use # `link("add", kind="vxlan", ...)` # # if port is not specified, the default one is used # by the kernel. # # if vni (vxlan id) is equal to the device vni, # the kernel doesn't report it back # ip.fdb('add', ifindex=ip.link_lookup(ifname='vx500')[0] lladdr='00:11:22:33:44:55', dst='10.0.0.1', port=5678, vni=600) **append** Append a new FDB record. The same syntax as for **add**. **del** Remove an existing FDB record. The same syntax as for **add**. **dump** Dump all the FDB records. If any `**kwarg` is provided, results will be filtered:: # dump all the records ip.fdb('dump') # show only specific lladdr, dst, vlan etc. ip.fdb('dump', lladdr='00:11:22:33:44:55') ip.fdb('dump', dst='10.0.0.1') ip.fdb('dump', vlan=200) ''' kwarg['family'] = AF_BRIDGE # nud -> state if 'nud' in kwarg: kwarg['state'] = kwarg.pop('nud') if (command in ('add', 'del', 'append')) and \ not (kwarg.get('state', 0) & ndmsg.states['noarp']): # state must contain noarp in add / del / append kwarg['state'] = kwarg.pop('state', 0) | ndmsg.states['noarp'] # other assumptions if not kwarg.get('state', 0) & (ndmsg.states['permanent'] | ndmsg.states['reachable']): # permanent (default) or reachable kwarg['state'] |= ndmsg.states['permanent'] if not kwarg.get('flags', 0) & (ndmsg.flags['self'] | ndmsg.flags['master']): # self (default) or master kwarg['flags'] = kwarg.get('flags', 0) | ndmsg.flags['self'] # return self.neigh(command, **kwarg) # 8<--------------------------------------------------------------- # # General low-level configuration methods # def neigh(self, command, **kwarg): ''' Neighbours operations, same as `ip neigh` or `bridge fdb` **add** Add a neighbour record, e.g.:: from pyroute2 import IPRoute from pyroute2.netlink.rtnl import ndmsg # add a permanent record on veth0 idx = ip.link_lookup(ifname='veth0')[0] ip.neigh('add', dst='172.16.45.1', lladdr='00:11:22:33:44:55', ifindex=idx, state=ndmsg.states['permanent']) **set** Set an existing record or create a new one, if it doesn't exist. The same as above, but the command is "set":: ip.neigh('set', dst='172.16.45.1', lladdr='00:11:22:33:44:55', ifindex=idx, state=ndmsg.states['permanent']) **change** Change an existing record. If the record doesn't exist, fail. **del** Delete an existing record. 
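            The addressing keywords are the same as for **add**; a
            hedged example (values are only illustrative)::

                ip.neigh('del', dst='172.16.45.1', ifindex=idx)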
**dump** Dump all the records in the NDB:: ip.neigh('dump') ''' if (command == 'dump') and ('match' not in kwarg): match = kwarg else: match = kwarg.pop('match', None) flags_dump = NLM_F_REQUEST | NLM_F_DUMP flags_base = NLM_F_REQUEST | NLM_F_ACK flags_make = flags_base | NLM_F_CREATE | NLM_F_EXCL flags_append = flags_base | NLM_F_CREATE | NLM_F_APPEND flags_change = flags_base | NLM_F_REPLACE flags_replace = flags_change | NLM_F_CREATE commands = {'add': (RTM_NEWNEIGH, flags_make), 'set': (RTM_NEWNEIGH, flags_replace), 'replace': (RTM_NEWNEIGH, flags_replace), 'change': (RTM_NEWNEIGH, flags_change), 'del': (RTM_DELNEIGH, flags_make), 'remove': (RTM_DELNEIGH, flags_make), 'delete': (RTM_DELNEIGH, flags_make), 'dump': (RTM_GETNEIGH, flags_dump), 'append': (RTM_NEWNEIGH, flags_append)} (command, flags) = commands.get(command, command) if 'nud' in kwarg: kwarg['state'] = kwarg.pop('nud') msg = ndmsg.ndmsg() for field in msg.fields: msg[field[0]] = kwarg.pop(field[0], 0) msg['family'] = msg['family'] or AF_INET msg['attrs'] = [] # fix nud kwarg if isinstance(msg['state'], basestring): msg['state'] = ndmsg.states_a2n(msg['state']) for key in kwarg: nla = ndmsg.ndmsg.name2nla(key) if kwarg[key] is not None: msg['attrs'].append([nla, kwarg[key]]) ret = self.nlm_request(msg, msg_type=command, msg_flags=flags) if match is not None: ret = self._match(match, ret) if not (command == RTM_GETNEIGH and self.nlm_generator): ret = tuple(ret) return ret def link(self, command, **kwarg): ''' Link operations. Keywords to set up ifinfmsg fields: * index -- interface index * family -- AF_BRIDGE for bridge operations, otherwise 0 * flags -- device flags * change -- change mask All other keywords will be translated to NLA names, e.g. `mtu -> IFLA_MTU`, `af_spec -> IFLA_AF_SPEC` etc. You can provide a complete NLA structure or let filters do it for you. E.g., these pairs show equal statements:: # set device MTU ip.link("set", index=x, mtu=1000) ip.link("set", index=x, IFLA_MTU=1000) # add vlan device ip.link("add", ifname="test", kind="dummy") ip.link("add", ifname="test", IFLA_LINKINFO={'attrs': [['IFLA_INFO_KIND', 'dummy']]}) Filters are implemented in the `pyroute2.netlink.rtnl.req` module. You can contribute your own if you miss shortcuts. Commands: **add** To create an interface, one should specify the interface kind:: ip.link("add", ifname="test", kind="dummy") The kind can be any of those supported by kernel. It can be `dummy`, `bridge`, `bond` etc. On modern kernels one can specify even interface index:: ip.link("add", ifname="br-test", kind="bridge", index=2345) Specific type notes: ► gre Create GRE tunnel:: ip.link("add", ifname="grex", kind="gre", gre_local="172.16.0.1", gre_remote="172.16.0.101", gre_ttl=16) The keyed GRE requires explicit iflags/oflags specification:: ip.link("add", ifname="grex", kind="gre", gre_local="172.16.0.1", gre_remote="172.16.0.101", gre_ttl=16, gre_ikey=10, gre_okey=10, gre_iflags=32, gre_oflags=32) Support for GRE over IPv6 is also included; use `kind=ip6gre` and `ip6gre_` as the prefix for its values. ► ipip Create ipip tunnel:: ip.link("add", ifname="tun1", kind="ipip", ipip_local="172.16.0.1", ipip_remote="172.16.0.101", ipip_ttl=16) Support for sit and ip6tnl is also included; use `kind=sit` and `sit_` as prefix for sit tunnels, and `kind=ip6tnl` and `ip6tnl_` prefix for ip6tnl tunnels. ► macvlan Macvlan interfaces act like VLANs within OS. 
The macvlan driver provides an ability to add several MAC addresses on one interface, where every MAC address is reflected with a virtual interface in the system. In some setups macvlan interfaces can replace bridge interfaces, providing more simple and at the same time high-performance solution:: ip.link("add", ifname="mvlan0", kind="macvlan", link=ip.link_lookup(ifname="em1")[0], macvlan_mode="private").commit() Several macvlan modes are available: "private", "vepa", "bridge", "passthru". Ususally the default is "vepa". ► macvtap Almost the same as macvlan, but creates also a character tap device:: ip.link("add", ifname="mvtap0", kind="macvtap", link=ip.link_lookup(ifname="em1")[0], macvtap_mode="vepa").commit() Will create a device file `"/dev/tap%s" % index` ► tuntap Possible `tuntap` keywords: - `mode` — "tun" or "tap" - `uid` — integer - `gid` — integer - `ifr` — dict of tuntap flags (see ifinfmsg:... tuntap_data) Create a tap interface:: ip.link("add", ifname="tap0", kind="tuntap", mode="tap") Tun/tap interfaces are created using `ioctl()`, but the library provides a transparent way to manage them using netlink API. ► veth To properly create `veth` interface, one should specify `peer` also, since `veth` interfaces are created in pairs:: # simple call ip.link("add", ifname="v1p0", kind="veth", peer="v1p1") # set up specific veth peer attributes ip.link("add", ifname="v1p0", kind="veth", peer={"ifname": "v1p1", "net_ns_fd": "test_netns"}) ► vlan VLAN interfaces require additional parameters, `vlan_id` and `link`, where `link` is a master interface to create VLAN on:: ip.link("add", ifname="v100", kind="vlan", link=ip.link_lookup(ifname="eth0")[0], vlan_id=100) There is a possibility to create also 802.1ad interfaces:: # create external vlan 802.1ad, s-tag ip.link("add", ifname="v100s", kind="vlan", link=ip.link_lookup(ifname="eth0")[0], vlan_id=100, vlan_protocol=0x88a8) # create internal vlan 802.1q, c-tag ip.link("add", ifname="v200c", kind="vlan", link=ip.link_lookup(ifname="v100s")[0], vlan_id=200, vlan_protocol=0x8100) ► vrf VRF interfaces (see linux/Documentation/networking/vrf.txt):: ip.link("add", ifname="vrf-foo", kind="vrf", vrf_table=42) ► vxlan VXLAN interfaces are like VLAN ones, but require a bit more parameters:: ip.link("add", ifname="vx101", kind="vxlan", vxlan_link=ip.link_lookup(ifname="eth0")[0], vxlan_id=101, vxlan_group='239.1.1.1', vxlan_ttl=16) All possible vxlan parameters are listed in the module `pyroute2.netlink.rtnl.ifinfmsg:... vxlan_data`. ► ipoib IPoIB driver provides an ability to create several ip interfaces on one interface. IPoIB interfaces requires the following parameter: `link` : The master interface to create IPoIB on. The following parameters can also be provided: `pkey` : Inifiniband partition key the ip interface is associated with `mode` : Underlying infiniband transport mode. One of: ['datagram' ,'connected'] `umcast` : If set(1), multicast group membership for this interface is handled by user space. Example:: ip.link("add", ifname="ipoib1", kind="ipoib", link=ip.link_lookup(ifname="ib0")[0], pkey=10) **set** Set interface attributes:: # get interface index x = ip.link_lookup(ifname="eth0")[0] # put link down ip.link("set", index=x, state="down") # rename and set MAC addr ip.link("set", index=x, address="00:11:22:33:44:55", name="bala") # set MTU and TX queue length ip.link("set", index=x, mtu=1000, txqlen=2000) # bring link up ip.link("set", index=x, state="up") Keyword "state" is reserved. 
State can be "up" or "down", it is a shortcut:: state="up": flags=1, mask=1 state="down": flags=0, mask=0 SR-IOV virtual function setup:: # get PF index x = ip.link_lookup(ifname="eth0")[0] # setup macaddr ip.link("set", index=x, # PF index vf={"vf": 0, # VF index "mac": "00:11:22:33:44:55"}) # address # setup vlan ip.link("set", index=x, # PF index vf={"vf": 0, # VF index "vlan": 100}) # the simplest case # setup QinQ ip.link("set", index=x, # PF index vf={"vf": 0, # VF index "vlan": [{"vlan": 100, # vlan id "proto": 0x88a8}, # 802.1ad {"vlan": 200, # vlan id "proto": 0x8100}]}) # 802.1q **update** Almost the same as `set`, except it uses different flags and message type. Mostly does the same, but in some cases differs. If you're not sure what to use, use `set`. **del** Destroy the interface:: ip.link("del", index=ip.link_lookup(ifname="dummy0")[0]) **dump** Dump info for all interfaces **get** Get specific interface info:: ip.link("get", index=ip.link_lookup(ifname="br0")[0]) Get extended attributes like SR-IOV setup:: ip.link("get", index=3, ext_mask=1) ''' if (command == 'dump') and ('match' not in kwarg): match = kwarg else: match = kwarg.pop('match', None) if command[:4] == 'vlan': log.warning('vlan filters are managed via `vlan_filter()`') log.warning('this compatibility hack will be removed soon') return self.vlan_filter(command[5:], **kwarg) flags_dump = NLM_F_REQUEST | NLM_F_DUMP flags_req = NLM_F_REQUEST | NLM_F_ACK flags_create = flags_req | NLM_F_CREATE | NLM_F_EXCL commands = {'set': (RTM_NEWLINK, flags_req), 'update': (RTM_SETLINK, flags_create), 'add': (RTM_NEWLINK, flags_create), 'del': (RTM_DELLINK, flags_create), 'remove': (RTM_DELLINK, flags_create), 'delete': (RTM_DELLINK, flags_create), 'dump': (RTM_GETLINK, flags_dump), 'get': (RTM_GETLINK, NLM_F_REQUEST)} msg = ifinfmsg() # ifinfmsg fields # # ifi_family # ifi_type # ifi_index # ifi_flags # ifi_change # msg['family'] = kwarg.pop('family', 0) lrq = kwarg.pop('kwarg_filter', IPLinkRequest) (command, msg_flags) = commands.get(command, command) # index msg['index'] = kwarg.pop('index', 0) # flags flags = kwarg.pop('flags', 0) or 0 # change mask = kwarg.pop('mask', 0) or kwarg.pop('change', 0) or 0 # UP/DOWN shortcut if 'state' in kwarg: mask = 1 # IFF_UP mask if kwarg['state'].lower() == 'up': flags = 1 # 0 (down) or 1 (up) del kwarg['state'] # arp on/off shortcut if 'arp' in kwarg: mask |= IFF_NOARP if not kwarg.pop('arp'): flags |= IFF_NOARP msg['flags'] = flags msg['change'] = mask # apply filter kwarg = lrq(kwarg) # attach NLA for key in kwarg: nla = type(msg).name2nla(key) if kwarg[key] is not None: msg['attrs'].append([nla, kwarg[key]]) ret = self.nlm_request(msg, msg_type=command, msg_flags=msg_flags) if match is not None: ret = self._match(match, ret) if not (command == RTM_GETLINK and self.nlm_generator): ret = tuple(ret) return ret def addr(self, command, index=None, address=None, mask=None, family=None, scope=None, match=None, **kwarg): ''' Address operations * command -- add, delete * index -- device index * address -- IPv4 or IPv6 address * mask -- address mask * family -- socket.AF_INET for IPv4 or socket.AF_INET6 for IPv6 * scope -- the address scope, see /etc/iproute2/rt_scopes * kwarg -- dictionary, any ifaddrmsg field or NLA Later the method signature will be changed to:: def addr(self, command, match=None, **kwarg): # the method body So only keyword arguments (except of the command) will be accepted. The reason for this change is an unification of API. 
Example:: idx = 62 ip.addr('add', index=idx, address='10.0.0.1', mask=24) ip.addr('add', index=idx, address='10.0.0.2', mask=24) With more NLAs:: # explicitly set broadcast address ip.addr('add', index=idx, address='10.0.0.3', broadcast='10.0.0.255', prefixlen=24) # make the secondary address visible to ifconfig: add label ip.addr('add', index=idx, address='10.0.0.4', broadcast='10.0.0.255', prefixlen=24, label='eth0:1') Configure p2p address on an interface:: ip.addr('add', index=idx, address='10.1.1.2', mask=24, local='10.1.1.1') ''' if command in ('get', 'set'): return flags_dump = NLM_F_REQUEST | NLM_F_DUMP flags_create = NLM_F_REQUEST | NLM_F_ACK | NLM_F_CREATE | NLM_F_EXCL commands = {'add': (RTM_NEWADDR, flags_create), 'del': (RTM_DELADDR, flags_create), 'remove': (RTM_DELADDR, flags_create), 'delete': (RTM_DELADDR, flags_create), 'dump': (RTM_GETADDR, flags_dump)} (command, flags) = commands.get(command, command) # fetch args index = index or kwarg.pop('index', 0) family = family or kwarg.pop('family', None) prefixlen = mask or kwarg.pop('mask', 0) or kwarg.pop('prefixlen', 0) scope = scope or kwarg.pop('scope', 0) # move address to kwarg # FIXME: add deprecation notice if address: kwarg['address'] = address # try to guess family, if it is not forced if kwarg.get('address') and family is None: if address.find(":") > -1: family = AF_INET6 mask = mask or 128 else: family = AF_INET mask = mask or 32 # setup the message msg = ifaddrmsg() msg['index'] = index msg['family'] = family or 0 msg['prefixlen'] = prefixlen msg['scope'] = scope # inject IFA_LOCAL, if family is AF_INET and IFA_LOCAL is not set if family == AF_INET and \ kwarg.get('address') and \ kwarg.get('local') is None: kwarg['local'] = kwarg['address'] # patch broadcast, if needed if kwarg.get('broadcast') is True: kwarg['broadcast'] = getbroadcast(address, mask, family) # work on NLA for key in kwarg: nla = ifaddrmsg.name2nla(key) if kwarg[key] not in (None, ''): msg['attrs'].append([nla, kwarg[key]]) ret = self.nlm_request(msg, msg_type=command, msg_flags=flags, terminate=lambda x: x['header']['type'] == NLMSG_ERROR) if match: ret = self._match(match, ret) if not (command == RTM_GETADDR and self.nlm_generator): ret = tuple(ret) return ret def tc(self, command, kind=None, index=0, handle=0, **kwarg): ''' "Swiss knife" for traffic control. With the method you can add, delete or modify qdiscs, classes and filters. * command -- add or delete qdisc, class, filter. * kind -- a string identifier -- "sfq", "htb", "u32" and so on. * handle -- integer or string Command can be one of ("add", "del", "add-class", "del-class", "add-filter", "del-filter") (see `commands` dict in the code). Handle notice: traditional iproute2 notation, like "1:0", actually represents two parts in one four-bytes integer:: 1:0 -> 0x10000 1:1 -> 0x10001 ff:0 -> 0xff0000 ffff:1 -> 0xffff0001 Target notice: if your target is a class/qdisc that applies an algorithm that can only apply to upstream traffic profile, but your keys variable explicitly references a match that is only relevant for upstream traffic, the kernel will reject the filter. Unless you're dealing with devices like IMQs For pyroute2 tc() you can use both forms: integer like 0xffff0000 or string like 'ffff:0000'. By default, handle is 0, so you can add simple classless queues w/o need to specify handle. Ingress queue causes handle to be 0xffff0000. 
So, to set up sfq queue on interface 1, the function call will be like that:: ip = IPRoute() ip.tc("add", "sfq", 1) Instead of string commands ("add", "del"...), you can use also module constants, `RTM_NEWQDISC`, `RTM_DELQDISC` and so on:: ip = IPRoute() flags = NLM_F_REQUEST | NLM_F_ACK | NLM_F_CREATE | NLM_F_EXCL ip.tc((RTM_NEWQDISC, flags), "sfq", 1) It should be noted that "change", "change-class" and "change-filter" work like "replace", "replace-class" and "replace-filter", except they will fail if the node doesn't exist (while it would have been created by "replace"). This is not the same behaviour as with "tc" where "change" can be used to modify the value of some options while leaving the others unchanged. However, as not all entities support this operation, we believe the "change" commands as implemented here are more useful. Also available "modules" (returns tc plugins dict) and "help" commands:: help(ip.tc("modules")["htb"]) print(ip.tc("help", "htb")) ''' if command == 'set': return if command == 'modules': return tc_plugins if command == 'help': p = tc_plugins.get(kind) if p is not None and hasattr(p, '__doc__'): return p.__doc__ else: return 'No help available' flags_base = NLM_F_REQUEST | NLM_F_ACK flags_make = flags_base | NLM_F_CREATE | NLM_F_EXCL flags_change = flags_base | NLM_F_REPLACE flags_replace = flags_change | NLM_F_CREATE commands = {'add': (RTM_NEWQDISC, flags_make), 'del': (RTM_DELQDISC, flags_make), 'remove': (RTM_DELQDISC, flags_make), 'delete': (RTM_DELQDISC, flags_make), 'change': (RTM_NEWQDISC, flags_change), 'replace': (RTM_NEWQDISC, flags_replace), 'add-class': (RTM_NEWTCLASS, flags_make), 'del-class': (RTM_DELTCLASS, flags_make), 'change-class': (RTM_NEWTCLASS, flags_change), 'replace-class': (RTM_NEWTCLASS, flags_replace), 'add-filter': (RTM_NEWTFILTER, flags_make), 'del-filter': (RTM_DELTFILTER, flags_make), 'change-filter': (RTM_NEWTFILTER, flags_change), 'replace-filter': (RTM_NEWTFILTER, flags_replace)} if isinstance(command, int): command = (command, flags_make) command, flags = commands.get(command, command) msg = tcmsg() # transform handle, parent and target, if needed: handle = transform_handle(handle) for item in ('parent', 'target', 'default'): if item in kwarg and kwarg[item] is not None: kwarg[item] = transform_handle(kwarg[item]) msg['index'] = index msg['handle'] = handle opts = kwarg.get('opts', None) ## # # if kind in tc_plugins: p = tc_plugins[kind] msg['parent'] = kwarg.pop('parent', getattr(p, 'parent', 0)) if hasattr(p, 'fix_msg'): p.fix_msg(msg, kwarg) if kwarg: if command in (RTM_NEWTCLASS, RTM_DELTCLASS): opts = p.get_class_parameters(kwarg) else: opts = p.get_parameters(kwarg) else: msg['parent'] = kwarg.get('parent', TC_H_ROOT) if kind is not None: msg['attrs'].append(['TCA_KIND', kind]) if opts is not None: msg['attrs'].append(['TCA_OPTIONS', opts]) return tuple(self.nlm_request(msg, msg_type=command, msg_flags=flags)) def route(self, command, **kwarg): ''' Route operations. Keywords to set up rtmsg fields: * dst_len, src_len -- destination and source mask(see `dst` below) * tos -- type of service * table -- routing table * proto -- `redirect`, `boot`, `static` (see `rt_proto`) * scope -- routing realm * type -- `unicast`, `local`, etc. (see `rt_type`) `pyroute2/netlink/rtnl/rtmsg.py` rtmsg.nla_map: * table -- routing table to use (default: 254) * gateway -- via address * prefsrc -- preferred source IP address * dst -- the same as `prefix` * iif -- incoming traffic interface * oif -- outgoing traffic interface etc. 
One can specify mask not as `dst_len`, but as a part of `dst`, e.g.: `dst="10.0.0.0/24"`. Commands: **add** Example:: ip.route("add", dst="10.0.0.0/24", gateway="192.168.0.1") It is possible to set also route metrics. There are two ways to do so. The first is to use 'raw' NLA notation:: ip.route("add", dst="10.0.0.0", mask=24, gateway="192.168.0.1", metrics={"attrs": [["RTAX_MTU", 1400], ["RTAX_HOPLIMIT", 16]]}) The second way is to use shortcuts, provided by `IPRouteRequest` class, which is applied to `**kwarg` automatically:: ip.route("add", dst="10.0.0.0/24", gateway="192.168.0.1", metrics={"mtu": 1400, "hoplimit": 16}) ... More `route()` examples. Blackhole route:: ip.route("add", dst="10.0.0.0/24", type="blackhole") Multipath route:: ip.route("add", dst="10.0.0.0/24", multipath=[{"gateway": "192.168.0.1", "hops": 2}, {"gateway": "192.168.0.2", "hops": 1}, {"gateway": "192.168.0.3"}]) MPLS lwtunnel on eth0:: idx = ip.link_lookup(ifname='eth0')[0] ip.route("add", dst="10.0.0.0/24", oif=idx, encap={"type": "mpls", "labels": "200/300"}) MPLS multipath:: idx = ip.link_lookup(ifname='eth0')[0] ip.route("add", dst="10.0.0.0/24", table=20, multipath=[{"gateway": "192.168.0.1", "encap": {"type": "mpls", "labels": 200}}, {"ifindex": idx, "encap": {"type": "mpls", "labels": 300}}]) MPLS target can be int, string, dict or list:: "labels": 300 # simple label "labels": "300" # the same "labels": (200, 300) # stacked "labels": "200/300" # the same # explicit label definition "labels": {"bos": 1, "label": 300, "tc": 0, "ttl": 16} Create SEG6 tunnel encap mode (kernel >= 4.10):: ip.route('add', dst='2001:0:0:10::2/128', oif=idx, encap={'type': 'seg6', 'mode': 'encap', 'segs': '2000::5,2000::6'}) Create SEG6 tunnel inline mode (kernel >= 4.10):: ip.route('add', dst='2001:0:0:10::2/128', oif=idx, encap={'type': 'seg6', 'mode': 'inline', 'segs': ['2000::5', '2000::6']}) Create SEG6 tunnel inline mode with hmac (kernel >= 4.10):: ip.route('add', dst='2001:0:0:22::2/128', oif=idx, encap={'type': 'seg6', 'mode': 'inline', 'segs':'2000::5,2000::6,2000::7,2000::8', 'hmac':0xf}) Create SEG6 tunnel with ip4ip6 encapsulation (kernel >= 4.14):: ip.route('add', dst='172.16.0.0/24', oif=idx, encap={'type': 'seg6', 'mode': 'encap', 'segs': '2000::5,2000::6'}) Create SEG6LOCAL tunnel End.DX4 action (kernel >= 4.14):: ip.route('add', dst='2001:0:0:10::2/128', oif=idx, encap={'type': 'seg6local', 'action': 'End.DX4', 'nh4': '172.16.0.10'}) Create SEG6LOCAL tunnel End.DT6 action (kernel >= 4.14):: ip.route('add', dst='2001:0:0:10::2/128', oif=idx, encap={'type': 'seg6local', 'action': 'End.DT6', 'table':'10'}) Create SEG6LOCAL tunnel End.B6 action (kernel >= 4.14):: ip.route('add', dst='2001:0:0:10::2/128', oif=idx, encap={'type': 'seg6local', 'action': 'End.B6', 'srh':{'segs': '2000::5,2000::6'}}) Create SEG6LOCAL tunnel End.B6 action with hmac (kernel >= 4.14):: ip.route('add', dst='2001:0:0:10::2/128', oif=idx, encap={'type': 'seg6local', 'action': 'End.B6', 'srh': {'segs': '2000::5,2000::6', 'hmac':0xf}}) **change**, **replace**, **append** Commands `change`, `replace` and `append` have the same meanings as in ip-route(8): `change` modifies only existing route, while `replace` creates a new one, if there is no such route yet. `append` allows to create an IPv6 multipath route. **del** Remove the route. The same syntax as for **add**. **get** Get route by spec. **dump** Dump all routes. 
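        A short hedged example for **get** and **dump** (addresses are
        only illustrative)::

            # exact lookup of a single route
            ip.route('get', dst='10.0.0.1')

            # dump IPv6 routes only
            ip.route('dump', family=AF_INET6)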
''' # 8<---------------------------------------------------- # FIXME # flags should be moved to some more general place flags_dump = NLM_F_DUMP | NLM_F_REQUEST flags_base = NLM_F_REQUEST | NLM_F_ACK flags_make = flags_base | NLM_F_CREATE | NLM_F_EXCL flags_change = flags_base | NLM_F_REPLACE flags_replace = flags_change | NLM_F_CREATE flags_append = flags_base | NLM_F_CREATE | NLM_F_APPEND # 8<---------------------------------------------------- # transform kwarg if command in ('add', 'set', 'replace', 'change', 'append'): kwarg['proto'] = kwarg.get('proto', 'static') or 'static' kwarg['type'] = kwarg.get('type', 'unicast') or 'unicast' kwarg = IPRouteRequest(kwarg) if 'match' not in kwarg and command in ('dump', 'show'): match = kwarg else: match = kwarg.pop('match', None) callback = kwarg.pop('callback', None) commands = {'add': (RTM_NEWROUTE, flags_make), 'set': (RTM_NEWROUTE, flags_replace), 'replace': (RTM_NEWROUTE, flags_replace), 'change': (RTM_NEWROUTE, flags_change), 'append': (RTM_NEWROUTE, flags_append), 'del': (RTM_DELROUTE, flags_make), 'remove': (RTM_DELROUTE, flags_make), 'delete': (RTM_DELROUTE, flags_make), 'get': (RTM_GETROUTE, NLM_F_REQUEST), 'show': (RTM_GETROUTE, flags_dump), 'dump': (RTM_GETROUTE, flags_dump)} (command, flags) = commands.get(command, command) msg = rtmsg() # table is mandatory; by default == 254 # if table is not defined in kwarg, save it there # also for nla_attr: table = kwarg.get('table', 254) msg['table'] = table if table <= 255 else 252 msg['family'] = kwarg.pop('family', AF_INET) msg['scope'] = kwarg.pop('scope', rt_scope['universe']) msg['dst_len'] = kwarg.pop('dst_len', None) or kwarg.pop('mask', 0) msg['src_len'] = kwarg.pop('src_len', 0) msg['tos'] = kwarg.pop('tos', 0) msg['flags'] = kwarg.pop('flags', 0) msg['type'] = kwarg.pop('type', rt_type['unspec']) msg['proto'] = kwarg.pop('proto', rt_proto['unspec']) msg['attrs'] = [] if msg['family'] == AF_MPLS: for key in tuple(kwarg): if key not in ('dst', 'newdst', 'via', 'multipath', 'oif'): kwarg.pop(key) for key in kwarg: nla = rtmsg.name2nla(key) if nla == 'RTA_DST' and not kwarg[key]: continue if kwarg[key] is not None: msg['attrs'].append([nla, kwarg[key]]) # fix IP family, if needed if msg['family'] in (AF_UNSPEC, 255): if key in ('dst', 'src', 'gateway', 'prefsrc', 'newdst') \ and isinstance(kwarg[key], basestring): msg['family'] = AF_INET6 if kwarg[key].find(':') >= 0 \ else AF_INET elif key == 'multipath' and len(kwarg[key]) > 0: hop = kwarg[key][0] attrs = hop.get('attrs', []) for attr in attrs: if attr[0] == 'RTA_GATEWAY': msg['family'] = AF_INET6 if \ attr[1].find(':') >= 0 else AF_INET break ret = self.nlm_request(msg, msg_type=command, msg_flags=flags, callback=callback) if match: ret = self._match(match, ret) if not (command == RTM_GETROUTE and self.nlm_generator): ret = tuple(ret) return ret def rule(self, command, *argv, **kwarg): ''' Rule operations - command — add, delete - table — 0 < table id < 253 - priority — 0 < rule's priority < 32766 - action — type of rule, default 'FR_ACT_NOP' (see fibmsg.py) - rtscope — routing scope, default RT_SCOPE_UNIVERSE `(RT_SCOPE_UNIVERSE|RT_SCOPE_SITE|\ RT_SCOPE_LINK|RT_SCOPE_HOST|RT_SCOPE_NOWHERE)` - family — rule's family (socket.AF_INET (default) or socket.AF_INET6) - src — IP source for Source Based (Policy Based) routing's rule - dst — IP for Destination Based (Policy Based) routing's rule - src_len — Mask for Source Based (Policy Based) routing's rule - dst_len — Mask for Destination Based (Policy Based) routing's rule - iifname — 
Input interface for Interface Based (Policy Based) routing's rule - oifname — Output interface for Interface Based (Policy Based) routing's rule All packets route via table 10:: # 32000: from all lookup 10 # ... ip.rule('add', table=10, priority=32000) Default action:: # 32001: from all lookup 11 unreachable # ... iproute.rule('add', table=11, priority=32001, action='FR_ACT_UNREACHABLE') Use source address to choose a routing table:: # 32004: from 10.64.75.141 lookup 14 # ... iproute.rule('add', table=14, priority=32004, src='10.64.75.141') Use dst address to choose a routing table:: # 32005: from 10.64.75.141/24 lookup 15 # ... iproute.rule('add', table=15, priority=32005, dst='10.64.75.141', dst_len=24) Match fwmark:: # 32006: from 10.64.75.141 fwmark 0xa lookup 15 # ... iproute.rule('add', table=15, priority=32006, dst='10.64.75.141', fwmark=10) ''' if command == 'set': return flags_base = NLM_F_REQUEST | NLM_F_ACK flags_make = flags_base | NLM_F_CREATE | NLM_F_EXCL flags_dump = NLM_F_REQUEST | NLM_F_ROOT | NLM_F_ATOMIC commands = {'add': (RTM_NEWRULE, flags_make), 'del': (RTM_DELRULE, flags_make), 'remove': (RTM_DELRULE, flags_make), 'delete': (RTM_DELRULE, flags_make), 'dump': (RTM_GETRULE, flags_dump)} if isinstance(command, int): command = (command, flags_make) command, flags = commands.get(command, command) if argv: # this code block will be removed in some release log.error('rule(): positional parameters are deprecated') names = ['table', 'priority', 'action', 'family', 'src', 'src_len', 'dst', 'dst_len', 'fwmark', 'iifname', 'oifname'] kwarg.update(dict(zip(names, argv))) kwarg = IPRuleRequest(kwarg) msg = fibmsg() table = kwarg.get('table', 0) msg['table'] = table if table <= 255 else 252 for key in ('family', 'src_len', 'dst_len', 'action', 'tos', 'flags'): msg[key] = kwarg.pop(key, 0) msg['attrs'] = [] for key in kwarg: nla = fibmsg.name2nla(key) if kwarg[key] is not None: msg['attrs'].append([nla, kwarg[key]]) ret = self.nlm_request(msg, msg_type=command, msg_flags=flags) if 'match' in kwarg: ret = self._match(kwarg['match'], ret) if not (command == RTM_GETRULE and self.nlm_generator): ret = tuple(ret) return ret # 8<--------------------------------------------------------------- class IPBatch(RTNL_API, IPBatchSocket): ''' Netlink requests compiler. Does not send any requests, but instead stores them in the internal binary buffer. The contents of the buffer can be used to send batch requests, to test custom netlink parsers and so on. Uses `RTNL_API` and provides all the same API as normal `IPRoute` objects:: # create the batch compiler ipb = IPBatch() # compile requests into the internal buffer ipb.link("add", index=550, ifname="test", kind="dummy") ipb.link("set", index=550, state="up") ipb.addr("add", index=550, address="10.0.0.2", mask=24) # save the buffer data = ipb.batch # reset the buffer ipb.reset() ... # send the buffer IPRoute().sendto(data, (0, 0)) ''' pass class IPRoute(RTNL_API, IPRSocket): ''' Regular ordinary utility class, see RTNL API for the list of methods. ''' pass class RawIPRoute(RTNL_API, RawIPRSocket): ''' The same as `IPRoute`, but does not use the netlink proxy. Thus it can not manage e.g. tun/tap interfaces. 
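    Otherwise the usage is the same as with `IPRoute` -- a minimal
    read-only sketch::

        ipr = RawIPRoute()
        for link in ipr.get_links():
            print(link.get_attr("IFLA_IFNAME"))
        ipr.close()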
''' pass pyroute2-0.5.9/pyroute2/iproute/remote.py0000644000175000017500000001062113610051400020345 0ustar peetpeet00000000000000import os import errno import mitogen.core import mitogen.master import threading from pyroute2.remote import (Transport, Server, RemoteSocket) from pyroute2.iproute.linux import RTNL_API from pyroute2.netlink.rtnl.iprsocket import MarshalRtnl class Channel(object): def __init__(self, ch): self.ch = ch self._pfdr, self._pfdw = os.pipe() self.th = None self.closed = False self.lock = threading.RLock() self.shutdown_lock = threading.RLock() self.read = self._read_sync self.buf = '' def flush(self): pass def _read_sync(self, size): with self.lock: if self.buf: ret = self.buf[:size] self.buf = self.buf[size:] return ret ret = self.ch.get().unpickle() if len(ret) > size: self.buf = ret[size:] return ret[:size] def _read_async(self, size): with self.lock: return os.read(self._pfdr, size) def write(self, data): with self.lock: self.ch.send(data) return len(data) def start(self): with self.lock: if self.th is None: self.read = self._read_async self.th = threading.Thread(target=self._monitor_thread, name='Channel <%s> I/O' % self.ch) self.th.start() def fileno(self): return self._pfdr def close(self): with self.shutdown_lock: if not self.closed: os.close(self._pfdw) os.close(self._pfdr) if self.th is not None: self.th.join() self.closed = True if hasattr(self.ch, 'send'): self.ch.send(None) def _monitor_thread(self): while True: msg = self.ch.get().unpickle() if msg is None: raise EOFError() os.write(self._pfdw, msg) @mitogen.core.takes_router def MitogenServer(ch_out, netns, router): ch_in = mitogen.core.Receiver(router) ch_out.send(ch_in.to_sender()) trnsp_in = Transport(Channel(ch_in)) trnsp_in.file_obj.start() trnsp_out = Transport(Channel(ch_out)) return Server(trnsp_in, trnsp_out, netns) class RemoteIPRoute(RTNL_API, RemoteSocket): def __init__(self, *argv, **kwarg): self._argv = tuple(argv) self._kwarg = dict(kwarg) if 'router' in kwarg: self._mitogen_broker = None self._mitogen_router = kwarg.pop('router') else: self._mitogen_broker = mitogen.master.Broker() self._mitogen_router = mitogen.master.Router(self._mitogen_broker) netns = kwarg.get('netns', None) try: if 'context' in kwarg: context = kwarg['context'] else: protocol = kwarg.pop('protocol', 'local') context = getattr(self._mitogen_router, protocol)(*argv, **kwarg) ch_in = mitogen.core.Receiver(self._mitogen_router, respondent=context) self._mitogen_call = context.call_async(MitogenServer, ch_out=ch_in.to_sender(), netns=netns) ch_out = ch_in.get().unpickle() super(RemoteIPRoute, self).__init__(Transport(Channel(ch_in)), Transport(Channel(ch_out))) except Exception: if self._mitogen_broker is not None: self._mitogen_broker.shutdown() self._mitogen_broker.join() raise self.marshal = MarshalRtnl() def clone(self): return type(self)(*self._argv, **self._kwarg) def close(self, code=errno.ECONNRESET): with self.shutdown_lock: if not self.closed: super(RemoteIPRoute, self).close(code=code) self.closed = True try: self._mitogen_call.get() except mitogen.core.ChannelError: pass if self._mitogen_broker is not None: self._mitogen_broker.shutdown() self._mitogen_broker.join() pyroute2-0.5.9/pyroute2/iproute/windows.py0000644000175000017500000001520013610051400020542 0ustar peetpeet00000000000000''' ''' import os import ctypes from socket import AF_INET from pyroute2.netlink import (NLM_F_REQUEST, NLM_F_DUMP, NLM_F_MULTI, NLMSG_DONE) from pyroute2.netlink.rtnl import (RTM_NEWLINK, RTM_GETLINK, RTM_NEWADDR, RTM_GETADDR, 
RTM_NEWROUTE, RTM_GETROUTE, RTM_NEWNEIGH, RTM_GETNEIGH) from pyroute2.netlink.rtnl.marshal import MarshalRtnl from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg from pyroute2.netlink.rtnl.ifaddrmsg import ifaddrmsg from pyroute2.common import AddrPool from pyroute2.common import Namespace from pyroute2.common import dqn2int from pyroute2.proxy import NetlinkProxy MAX_ADAPTER_NAME_LENGTH = 256 MAX_ADAPTER_DESCRIPTION_LENGTH = 128 MAX_ADAPTER_ADDRESS_LENGTH = 8 class IP_ADDRESS_STRING(ctypes.Structure): pass PIP_ADDRESS_STRING = ctypes.POINTER(IP_ADDRESS_STRING) IP_ADDRESS_STRING._fields_ = [('Next', PIP_ADDRESS_STRING), ('IpAddress', ctypes.c_byte * 16), ('IpMask', ctypes.c_byte * 16), ('Context', ctypes.c_ulong)] class IP_ADAPTER_INFO(ctypes.Structure): pass PIP_ADAPTER_INFO = ctypes.POINTER(IP_ADAPTER_INFO) IP_ADAPTER_INFO._fields_ = [('Next', PIP_ADAPTER_INFO), ('ComboIndex', ctypes.c_ulong), ('AdapterName', ctypes.c_byte * (256 + 4)), ('Description', ctypes.c_byte * (128 + 4)), ('AddressLength', ctypes.c_uint), ('Address', ctypes.c_ubyte * 8), ('Index', ctypes.c_ulong), ('Type', ctypes.c_uint), ('DhcpEnabled', ctypes.c_uint), ('CurrentIpAddress', PIP_ADDRESS_STRING), ('IpAddressList', IP_ADDRESS_STRING), ('GatewayList', IP_ADDRESS_STRING), ('DhcpServer', IP_ADDRESS_STRING), ('HaveWins', ctypes.c_byte), ('PrimaryWinsServer', IP_ADDRESS_STRING), ('SecondaryWinsServer', IP_ADDRESS_STRING), ('LeaseObtained', ctypes.c_ulong), ('LeaseExpires', ctypes.c_ulong)] class IPRoute(object): def __init__(self, *argv, **kwarg): self.marshal = MarshalRtnl() send_ns = Namespace(self, {'addr_pool': AddrPool(0x10000, 0x1ffff), 'monitor': False}) self._sproxy = NetlinkProxy(policy='return', nl=send_ns) def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.close() def clone(self): return self def close(self, code=None): pass def bind(self, *argv, **kwarg): pass def getsockopt(self, *argv, **kwarg): return 1024 * 1024 def sendto_gate(self, msg, addr): # # handle incoming netlink requests # # sendto_gate() receives single RTNL messages as objects # cmd = msg['header']['type'] flags = msg['header']['flags'] seq = msg['header']['sequence_number'] # work only on dump requests for now if flags != NLM_F_REQUEST | NLM_F_DUMP: return # if cmd == RTM_GETLINK: rtype = RTM_NEWLINK ret = self.get_links() elif cmd == RTM_GETADDR: rtype = RTM_NEWADDR ret = self.get_addr() elif cmd == RTM_GETROUTE: rtype = RTM_NEWROUTE ret = self.get_routes() elif cmd == RTM_GETNEIGH: rtype = RTM_NEWNEIGH ret = self.get_neighbours() # # set response type and finalize the message for r in ret: r['header']['type'] = rtype r['header']['flags'] = NLM_F_MULTI r['header']['sequence_number'] = seq # r = type(msg)() r['header']['type'] = NLMSG_DONE r['header']['sequence_number'] = seq ret.append(r) data = b'' for r in ret: r.encode() data += r.data self._outq.put(data) os.write(self._pfdw, b'\0') def _GetAdaptersInfo(self): ret = {'interfaces': [], 'addresses': []} # prepare buffer buf = ctypes.create_string_buffer(15000) buf_len = ctypes.c_ulong(15000) (ctypes .windll .iphlpapi .GetAdaptersInfo(ctypes.byref(buf), ctypes.byref(buf_len))) adapter = IP_ADAPTER_INFO.from_address(ctypes.addressof(buf)) while True: mac = ':'.join(['%02x' % x for x in adapter.Address][:6]) ifname = ctypes.string_at(ctypes.addressof(adapter.AdapterName)) spec = {'index': adapter.Index, 'attrs': (['IFLA_ADDRESS', mac], ['IFLA_IFNAME', ifname])} msg = ifinfmsg().load(spec) del msg['value'] ret['interfaces'].append(msg) ipaddr = 
adapter.IpAddressList while True: addr = ctypes.string_at(ctypes.addressof(ipaddr.IpAddress)) mask = ctypes.string_at(ctypes.addressof(ipaddr.IpMask)) spec = {'index': adapter.Index, 'family': AF_INET, 'prefixlen': dqn2int(mask), 'attrs': (['IFA_ADDRESS', addr], ['IFA_LOCAL', addr], ['IFA_LABEL', ifname])} msg = ifaddrmsg().load(spec) del msg['value'] ret['addresses'].append(msg) if ipaddr.Next: ipaddr = ipaddr.Next.contents else: break if adapter.Next: adapter = adapter.Next.contents else: break return ret def get_links(self, *argv, **kwarg): return self._GetAdaptersInfo()['interfaces'] def get_addr(self, *argv, **kwarg): return self._GetAdaptersInfo()['addresses'] def get_neighbours(self, *argv, **kwarg): ret = [] return ret def get_routes(self, *argv, **kwarg): ret = [] return ret class RawIPRoute(IPRoute): pass pyroute2-0.5.9/pyroute2/ipset.py0000644000175000017500000006254213613573646016550 0ustar peetpeet00000000000000''' IPSet module ============ ipset support. This module is tested with hash:ip, hash:net, list:set and several other ipset structures (like hash:net,iface). There is no guarantee that this module is working with all available ipset modules. It supports almost all kernel commands (create, destroy, flush, rename, swap, test...) ''' import errno import socket from pyroute2.common import basestring from pyroute2.netlink import NLMSG_ERROR from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink import NLM_F_DUMP from pyroute2.netlink import NLM_F_ACK from pyroute2.netlink import NLM_F_EXCL from pyroute2.netlink import NETLINK_NETFILTER from pyroute2.netlink.exceptions import NetlinkError, IPSetError from pyroute2.netlink.nlsocket import NetlinkSocket from pyroute2.netlink.nfnetlink import NFNL_SUBSYS_IPSET from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_PROTOCOL from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_CREATE from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_DESTROY from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_SWAP from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_LIST from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_ADD from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_DEL from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_FLUSH from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_RENAME from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_TEST from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_TYPE from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_HEADER from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_GET_BYNAME from pyroute2.netlink.nfnetlink.ipset import IPSET_CMD_GET_BYINDEX from pyroute2.netlink.nfnetlink.ipset import ipset_msg from pyroute2.netlink.nfnetlink.ipset import IPSET_FLAG_WITH_COUNTERS from pyroute2.netlink.nfnetlink.ipset import IPSET_FLAG_WITH_COMMENT from pyroute2.netlink.nfnetlink.ipset import IPSET_FLAG_WITH_FORCEADD from pyroute2.netlink.nfnetlink.ipset import IPSET_FLAG_WITH_SKBINFO from pyroute2.netlink.nfnetlink.ipset import IPSET_FLAG_IFACE_WILDCARD from pyroute2.netlink.nfnetlink.ipset import IPSET_FLAG_PHYSDEV from pyroute2.netlink.nfnetlink.ipset import IPSET_DEFAULT_MAXELEM from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_PROTOCOL from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_FIND_TYPE from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_MAX_SETS from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_BUSY from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_EXIST_SETNAME2 from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_TYPE_MISMATCH 
from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_EXIST from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_INVALID_CIDR from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_INVALID_NETMASK from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_INVALID_FAMILY from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_TIMEOUT from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_REFERENCED from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_IPADDR_IPV4 from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_IPADDR_IPV6 from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_COUNTER from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_COMMENT from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_INVALID_MARKMASK from pyroute2.netlink.nfnetlink.ipset import IPSET_ERR_SKBINFO def _nlmsg_error(msg): return msg['header']['type'] == NLMSG_ERROR class PortRange(object): """ A simple container for port range with optional protocol Note that optional protocol parameter is not supported by all kernel ipset modules using ports. On the other hand, it's sometimes mandatory to set it (like for hash:net,port ipsets) Example:: udp_proto = socket.getprotobyname("udp") port_range = PortRange(1000, 2000, protocol=udp_proto) ipset.create("foo", stype="hash:net,port") ipset.add("foo", ("192.0.2.0/24", port_range), etype="net,port") ipset.test("foo", ("192.0.2.0/24", port_range), etype="net,port") """ def __init__(self, begin, end, protocol=None): self.begin = begin self.end = end self.protocol = protocol class PortEntry(object): """ A simple container for port entry with optional protocol """ def __init__(self, port, protocol=None): self.port = port self.protocol = protocol class IPSet(NetlinkSocket): ''' NFNetlink socket (family=NETLINK_NETFILTER). Implements API to the ipset functionality. ''' policy = {IPSET_CMD_PROTOCOL: ipset_msg, IPSET_CMD_LIST: ipset_msg, IPSET_CMD_TYPE: ipset_msg, IPSET_CMD_HEADER: ipset_msg, IPSET_CMD_GET_BYNAME: ipset_msg, IPSET_CMD_GET_BYINDEX: ipset_msg} attr_map = {'iface': 'IPSET_ATTR_IFACE', 'mark': 'IPSET_ATTR_MARK', 'set': 'IPSET_ATTR_NAME', 'mac': 'IPSET_ATTR_ETHER', 'port': 'IPSET_ATTR_PORT', ('ip_from', 1): 'IPSET_ATTR_IP_FROM', ('ip_from', 2): 'IPSET_ATTR_IP2', ('cidr', 1): 'IPSET_ATTR_CIDR', ('cidr', 2): 'IPSET_ATTR_CIDR2', ('ip_to', 1): 'IPSET_ATTR_IP_TO', ('ip_to', 2): 'IPSET_ATTR_IP2_TO'} def __init__(self, version=None, attr_revision=None, nfgen_family=2): super(IPSet, self).__init__(family=NETLINK_NETFILTER) policy = dict([(x | (NFNL_SUBSYS_IPSET << 8), y) for (x, y) in self.policy.items()]) self.register_policy(policy) self._nfgen_family = nfgen_family if version is None: msg = self.get_proto_version() version = msg[0].get_attr('IPSET_ATTR_PROTOCOL') self._proto_version = version self._attr_revision = attr_revision def request(self, msg, msg_type, msg_flags=NLM_F_REQUEST | NLM_F_DUMP, terminate=None): msg['nfgen_family'] = self._nfgen_family try: return self.nlm_request(msg, msg_type | (NFNL_SUBSYS_IPSET << 8), msg_flags, terminate=terminate) except NetlinkError as err: raise _IPSetError(err.code, cmd=msg_type) def headers(self, name, **kwargs): ''' Get headers of the named ipset. It can be used to test if one ipset exists, since it returns a no such file or directory. ''' return self._list_or_headers(IPSET_CMD_HEADER, name=name, **kwargs) def get_proto_version(self, version=6): ''' Get supported protocol version by kernel. version parameter allow to set mandatory (but unused?) IPSET_ATTR_PROTOCOL netlink attribute in the request. 
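        Example -- this mirrors what the constructor does when no version
        is passed to it::

            ipset = IPSet()
            msg = ipset.get_proto_version()
            version = msg[0].get_attr("IPSET_ATTR_PROTOCOL")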
''' msg = ipset_msg() msg['attrs'] = [['IPSET_ATTR_PROTOCOL', version]] return self.request(msg, IPSET_CMD_PROTOCOL) def list(self, *argv, **kwargs): ''' List installed ipsets. If `name` is provided, list the named ipset or return an empty list. Be warned: netlink does not return an error if given name does not exit, you will receive an empty list. ''' if len(argv): kwargs['name'] = argv[0] return self._list_or_headers(IPSET_CMD_LIST, **kwargs) def _list_or_headers(self, cmd, name=None, flags=None): msg = ipset_msg() msg['attrs'] = [['IPSET_ATTR_PROTOCOL', self._proto_version]] if name is not None: msg['attrs'].append(['IPSET_ATTR_SETNAME', name]) if flags is not None: msg['attrs'].append(['IPSET_ATTR_FLAGS', flags]) return self.request(msg, cmd) def destroy(self, name=None): ''' Destroy one (when name is set) or all ipset (when name is None) ''' msg = ipset_msg() msg['attrs'] = [['IPSET_ATTR_PROTOCOL', self._proto_version]] if name is not None: msg['attrs'].append(['IPSET_ATTR_SETNAME', name]) return self.request(msg, IPSET_CMD_DESTROY, msg_flags=NLM_F_REQUEST | NLM_F_ACK | NLM_F_EXCL, terminate=_nlmsg_error) def create(self, name, stype='hash:ip', family=socket.AF_INET, exclusive=True, counters=False, comment=False, maxelem=IPSET_DEFAULT_MAXELEM, forceadd=False, hashsize=None, timeout=None, bitmap_ports_range=None, size=None, skbinfo=False): ''' Create an ipset `name` of type `stype`, by default `hash:ip`. Common ipset options are supported: * exclusive -- if set, raise an error if the ipset exists * counters -- enable data/packets counters * comment -- enable comments capability * maxelem -- max size of the ipset * forceadd -- you should refer to the ipset manpage * hashsize -- size of the hashtable (if any) * timeout -- enable and set a default value for entries (if not None) * bitmap_ports_range -- set the specified inclusive portrange for the bitmap ipset structure (0, 65536) * size -- Size of the list:set, the default is 8 * skbinfo -- enable skbinfo capability ''' excl_flag = NLM_F_EXCL if exclusive else 0 msg = ipset_msg() cadt_flags = 0 if counters: cadt_flags |= IPSET_FLAG_WITH_COUNTERS if comment: cadt_flags |= IPSET_FLAG_WITH_COMMENT if forceadd: cadt_flags |= IPSET_FLAG_WITH_FORCEADD if skbinfo: cadt_flags |= IPSET_FLAG_WITH_SKBINFO if stype == 'bitmap:port' and bitmap_ports_range is None: raise ValueError('Missing value bitmap_ports_range') data = {"attrs": [["IPSET_ATTR_CADT_FLAGS", cadt_flags], ["IPSET_ATTR_MAXELEM", maxelem]]} if hashsize is not None: data['attrs'] += [["IPSET_ATTR_HASHSIZE", hashsize]] elif size is not None and stype == 'list:set': data['attrs'] += [['IPSET_ATTR_SIZE', size]] if timeout is not None: data['attrs'] += [["IPSET_ATTR_TIMEOUT", timeout]] if bitmap_ports_range is not None and stype == 'bitmap:port': # Set the bitmap range A bitmap type of set # can store up to 65536 entries if isinstance(bitmap_ports_range, PortRange): data['attrs'] += [['IPSET_ATTR_PORT_FROM', bitmap_ports_range.begin]] data['attrs'] += [['IPSET_ATTR_PORT_TO', bitmap_ports_range.end]] else: data['attrs'] += [['IPSET_ATTR_PORT_FROM', bitmap_ports_range[0]]] data['attrs'] += [['IPSET_ATTR_PORT_TO', bitmap_ports_range[1]]] if self._attr_revision is None: # Get the last revision supported by kernel revision = self.get_supported_revisions(stype)[1] else: revision = self._attr_revision msg['attrs'] = [['IPSET_ATTR_PROTOCOL', self._proto_version], ['IPSET_ATTR_SETNAME', name], ['IPSET_ATTR_TYPENAME', stype], ['IPSET_ATTR_FAMILY', family], ['IPSET_ATTR_REVISION', revision], 
["IPSET_ATTR_DATA", data]] return self.request(msg, IPSET_CMD_CREATE, msg_flags=NLM_F_REQUEST | NLM_F_ACK | excl_flag, terminate=_nlmsg_error) @staticmethod def _family_to_version(family): if family is not None: if family == socket.AF_INET: return 'IPSET_ATTR_IPADDR_IPV4' elif family == socket.AF_INET6: return 'IPSET_ATTR_IPADDR_IPV6' elif family == socket.AF_UNSPEC: return None raise TypeError('unknown family') def _entry_to_data_attrs(self, entry, etype, ip_version): attrs = [] ip_count = 0 if etype == 'set': attrs += [['IPSET_ATTR_NAME', entry]] return attrs # We support string (for one element, and for users calling this # function like a command line), and tupple/list if isinstance(entry, basestring): entry = entry.split(',') if isinstance(entry, (int, PortRange, PortEntry)): entry = [entry] for e, t in zip(entry, etype.split(',')): if t in ('ip', 'net'): ip_count += 1 if t == 'net': if '/' in e: e, cidr = e.split('/') attrs += [[self.attr_map[('cidr', ip_count)], int(cidr)]] elif '-' in e: e, to = e.split('-') attrs += [[self.attr_map[('ip_to', ip_count)], {'attrs': [[ip_version, to]]}]] attrs += [[self.attr_map[('ip_from', ip_count)], {'attrs': [[ip_version, e]]}]] elif t == "port": if isinstance(e, PortRange): attrs += [['IPSET_ATTR_PORT_FROM', e.begin]] attrs += [['IPSET_ATTR_PORT_TO', e.end]] if e.protocol is not None: attrs += [['IPSET_ATTR_PROTO', e.protocol]] elif isinstance(e, PortEntry): attrs += [['IPSET_ATTR_PORT', e.port]] if e.protocol is not None: attrs += [['IPSET_ATTR_PROTO', e.protocol]] else: attrs += [[self.attr_map[t], e]] else: attrs += [[self.attr_map[t], e]] return attrs def _add_delete_test(self, name, entry, family, cmd, exclusive, comment=None, timeout=None, etype="ip", packets=None, bytes=None, skbmark=None, skbprio=None, skbqueue=None, wildcard=False, physdev=False): excl_flag = NLM_F_EXCL if exclusive else 0 adt_flags = 0 if wildcard: adt_flags |= IPSET_FLAG_IFACE_WILDCARD if physdev: adt_flags |= IPSET_FLAG_PHYSDEV ip_version = self._family_to_version(family) data_attrs = self._entry_to_data_attrs(entry, etype, ip_version) if comment is not None: data_attrs += [["IPSET_ATTR_COMMENT", comment], ["IPSET_ATTR_CADT_LINENO", 0]] if timeout is not None: data_attrs += [["IPSET_ATTR_TIMEOUT", timeout]] if bytes is not None: data_attrs += [["IPSET_ATTR_BYTES", bytes]] if packets is not None: data_attrs += [["IPSET_ATTR_PACKETS", packets]] if skbmark is not None: data_attrs += [["IPSET_ATTR_SKBMARK", skbmark]] if skbprio is not None: data_attrs += [["IPSET_ATTR_SKBPRIO", skbprio]] if skbqueue is not None: data_attrs += [["IPSET_ATTR_SKBQUEUE", skbqueue]] if adt_flags: data_attrs += [["IPSET_ATTR_CADT_FLAGS", adt_flags]] msg = ipset_msg() msg['attrs'] = [['IPSET_ATTR_PROTOCOL', self._proto_version], ['IPSET_ATTR_SETNAME', name], ['IPSET_ATTR_DATA', {'attrs': data_attrs}]] return self.request(msg, cmd, msg_flags=NLM_F_REQUEST | NLM_F_ACK | excl_flag, terminate=_nlmsg_error) def add(self, name, entry, family=socket.AF_INET, exclusive=True, comment=None, timeout=None, etype="ip", skbmark=None, skbprio=None, skbqueue=None, wildcard=False, **kwargs): ''' Add a member to the ipset. etype is the entry type that you add to the ipset. It's related to the ipset type. For example, use "ip" for one hash:ip or bitmap:ip ipset. When your ipset store a tuple, like "hash:net,iface", you must use a comma a separator (etype="net,iface") entry is a string for "ip" and "net" objects. For ipset with several dimensions, you must use a tuple (or a list) of objects. 
"port" type is specific, since you can use integer of specialized containers like :class:`PortEntry` and :class:`PortRange` Examples:: ipset = IPSet() ipset.create("foo", stype="hash:ip") ipset.add("foo", "198.51.100.1", etype="ip") ipset = IPSet() ipset.create("bar", stype="bitmap:port", bitmap_ports_range=(1000, 2000)) ipset.add("bar", 1001, etype="port") ipset.add("bar", PortRange(1500, 2000), etype="port") ipset = IPSet() import socket protocol = socket.getprotobyname("tcp") ipset.create("foobar", stype="hash:net,port") port_entry = PortEntry(80, protocol=protocol) ipset.add("foobar", ("198.51.100.0/24", port_entry), etype="net,port") wildcard option enable kernel wildcard matching on interface name for net,iface entries. ''' return self._add_delete_test(name, entry, family, IPSET_CMD_ADD, exclusive, comment=comment, timeout=timeout, etype=etype, skbmark=skbmark, skbprio=skbprio, skbqueue=skbqueue, wildcard=wildcard, **kwargs) def delete(self, name, entry, family=socket.AF_INET, exclusive=True, etype="ip"): ''' Delete a member from the ipset. See :func:`add` method for more information on etype. ''' return self._add_delete_test(name, entry, family, IPSET_CMD_DEL, exclusive, etype=etype) def test(self, name, entry, family=socket.AF_INET, etype="ip"): ''' Test if entry is part of an ipset See :func:`add` method for more information on etype. ''' try: self._add_delete_test(name, entry, family, IPSET_CMD_TEST, False, etype=etype) return True except IPSetError as e: if e.code == IPSET_ERR_EXIST: return False raise e def swap(self, set_a, set_b): ''' Swap two ipsets. They must have compatible content type. ''' msg = ipset_msg() msg['attrs'] = [['IPSET_ATTR_PROTOCOL', self._proto_version], ['IPSET_ATTR_SETNAME', set_a], ['IPSET_ATTR_TYPENAME', set_b]] return self.request(msg, IPSET_CMD_SWAP, msg_flags=NLM_F_REQUEST | NLM_F_ACK, terminate=_nlmsg_error) def flush(self, name=None): ''' Flush all ipsets. When name is set, flush only this ipset. ''' msg = ipset_msg() msg['attrs'] = [['IPSET_ATTR_PROTOCOL', self._proto_version]] if name is not None: msg['attrs'].append(['IPSET_ATTR_SETNAME', name]) return self.request(msg, IPSET_CMD_FLUSH, msg_flags=NLM_F_REQUEST | NLM_F_ACK, terminate=_nlmsg_error) def rename(self, name_src, name_dst): ''' Rename the ipset. ''' msg = ipset_msg() msg['attrs'] = [['IPSET_ATTR_PROTOCOL', self._proto_version], ['IPSET_ATTR_SETNAME', name_src], ['IPSET_ATTR_TYPENAME', name_dst]] return self.request(msg, IPSET_CMD_RENAME, msg_flags=NLM_F_REQUEST | NLM_F_ACK, terminate=_nlmsg_error) def _get_set_by(self, cmd, value): # Check that IPSet version is supported if self._proto_version < 7: raise NotImplementedError() msg = ipset_msg() if cmd == IPSET_CMD_GET_BYNAME: msg['attrs'] = [['IPSET_ATTR_PROTOCOL', self._proto_version], ['IPSET_ATTR_SETNAME', value]] if cmd == IPSET_CMD_GET_BYINDEX: msg['attrs'] = [['IPSET_ATTR_PROTOCOL', self._proto_version], ['IPSET_ATTR_INDEX', value]] return self.request(msg, cmd) def get_set_byname(self, name): ''' Get a set by its name ''' return self._get_set_by(IPSET_CMD_GET_BYNAME, name) def get_set_byindex(self, index): ''' Get a set by its index ''' return self._get_set_by(IPSET_CMD_GET_BYINDEX, index) def get_supported_revisions(self, stype, family=socket.AF_INET): ''' Return minimum and maximum of revisions supported by the kernel. Each ipset module (like hash:net, hash:ip, etc) has several revisions. Newer revisions often have more features or more performances. Thanks to this call, you can ask the kernel the list of supported revisions. 
You can manually set/force revisions used in IPSet constructor. Example:: ipset = IPSet() ipset.get_supported_revisions("hash:net") ipset.get_supported_revisions("hash:net,port,net") ''' msg = ipset_msg() msg['attrs'] = [['IPSET_ATTR_PROTOCOL', self._proto_version], ['IPSET_ATTR_TYPENAME', stype], ['IPSET_ATTR_FAMILY', family]] response = self.request(msg, IPSET_CMD_TYPE, msg_flags=NLM_F_REQUEST | NLM_F_ACK, terminate=_nlmsg_error) min_revision = response[0].get_attr("IPSET_ATTR_PROTOCOL_MIN") max_revision = response[0].get_attr("IPSET_ATTR_REVISION") return min_revision, max_revision class _IPSetError(IPSetError): ''' Proxy class to not import all specifics ipset code in exceptions.py Out of the ipset module, a caller should use parent class instead ''' def __init__(self, code, msg=None, cmd=None): if code in self.base_map: msg = self.base_map[code] elif cmd in self.cmd_map: error_map = self.cmd_map[cmd] if code in error_map: msg = error_map[code] super(_IPSetError, self).__init__(code, msg) base_map = {IPSET_ERR_PROTOCOL: "Kernel error received:" " ipset protocol error", IPSET_ERR_INVALID_CIDR: "The value of the CIDR parameter of" " the IP address is invalid", IPSET_ERR_TIMEOUT: "Timeout cannot be used: set was created" " without timeout support", IPSET_ERR_IPADDR_IPV4: "An IPv4 address is expected, but" " not received", IPSET_ERR_IPADDR_IPV6: "An IPv6 address is expected, but" " not received", IPSET_ERR_COUNTER: "Packet/byte counters cannot be used:" " set was created without counter support", IPSET_ERR_COMMENT: "Comment string is too long!", IPSET_ERR_SKBINFO: "Skbinfo mapping cannot be used: " " set was created without skbinfo support"} c_map = {errno.EEXIST: "Set cannot be created: set with the same" " name already exists", IPSET_ERR_FIND_TYPE: "Kernel error received: " "set type not supported", IPSET_ERR_MAX_SETS: "Kernel error received: maximal number of" " sets reached, cannot create more.", IPSET_ERR_INVALID_NETMASK: "The value of the netmask parameter" " is invalid", IPSET_ERR_INVALID_MARKMASK: "The value of the markmask parameter" " is invalid", IPSET_ERR_INVALID_FAMILY: "Protocol family not supported by the" " set type"} destroy_map = {IPSET_ERR_BUSY: "Set cannot be destroyed: it is in use" " by a kernel component"} r_map = {IPSET_ERR_EXIST_SETNAME2: "Set cannot be renamed: a set with the" " new name already exists", IPSET_ERR_REFERENCED: "Set cannot be renamed: it is in use by" " another system"} s_map = {IPSET_ERR_EXIST_SETNAME2: "Sets cannot be swapped: the second set" " does not exist", IPSET_ERR_TYPE_MISMATCH: "The sets cannot be swapped: their type" " does not match"} a_map = {IPSET_ERR_EXIST: "Element cannot be added to the set: it's" " already added"} del_map = {IPSET_ERR_EXIST: "Element cannot be deleted from the set:" " it's not added"} cmd_map = {IPSET_CMD_CREATE: c_map, IPSET_CMD_DESTROY: destroy_map, IPSET_CMD_RENAME: r_map, IPSET_CMD_SWAP: s_map, IPSET_CMD_ADD: a_map, IPSET_CMD_DEL: del_map} pyroute2-0.5.9/pyroute2/iwutil.py0000644000175000017500000004633113610051400016707 0ustar peetpeet00000000000000# -*- coding: utf-8 -*- ''' IW module ========= Experimental wireless module — nl80211 support. Disclaimer ---------- Unlike IPRoute, which is mostly usable, though is far from complete yet, the IW module is in the very initial state. Neither the module itself, nor the message class cover the nl80211 functionality reasonably enough. So if you're going to use it, brace yourself — debug is coming. 
Messages -------- nl80211 messages are defined here:: pyroute2/netlink/nl80211/__init__.py Pls notice NLAs of type `hex`. On the early development stage `hex` allows to inspect incoming data as a hex dump and, occasionally, even make requests with such NLAs. But it's not a production way. The type `hex` in the NLA definitions means that this particular NLA is not handled yet properly. If you want to use some NLA which is defined as `hex` yet, pls find out a specific type, patch the message class and submit your pull request on github. If you're not familiar with NLA types, take a look at RTNL definitions:: pyroute2/netlink/rtnl/ndmsg.py and so on. Communication with the kernel ----------------------------- There are several methods of the communication with the kernel. * `sendto()` — lowest possible, send a raw binary data * `put()` — send a netlink message * `nlm_request()` — send a message, return the response * `get()` — get a netlink message * `recv()` — get a raw binary data from the kernel There are no errors on `put()` usually. Any `permission denied`, any `invalid value` errors are returned from the kernel with netlink also. So if you do `put()`, but don't do `get()`, be prepared to miss errors. The preferred method for the communication is `nlm_request()`. It tracks the message ID, returns the corresponding response. In the case of errors `nlm_request()` raises an exception. To get the response on any operation with nl80211, use flag `NLM_F_ACK`. Reverse it ---------- If you're too lazy to read the kernel sources, but still need something not implemented here, you can use reverse engineering on a reference implementation. E.g.:: # strace -e trace=network -f -x -s 4096 \\ iw phy phy0 interface add test type monitor Will dump all the netlink traffic between the program `iw` and the kernel. Three first packets are the generic netlink protocol discovery, you can ignore them. All that follows, is the nl80211 traffic:: sendmsg(3, {msg_name(12)={sa_family=AF_NETLINK, ... }, msg_iov(1)=[{"\\x30\\x00\\x00\\x00\\x1b\\x00\\x05 ...", 48}], msg_controllen=0, msg_flags=0}, 0) = 48 recvmsg(3, {msg_name(12)={sa_family=AF_NETLINK, ... }, msg_iov(1)=[{"\\x58\\x00\\x00\\x00\\x1b\\x00\\x00 ...", 16384}], msg_controllen=0, msg_flags=0}, 0) = 88 ... With `-s 4096` you will get the full dump. Then copy the strings from `msg_iov` to a file, let's say `data`, and run the decoder:: $ pwd /home/user/Projects/pyroute2 $ export PYTHONPATH=`pwd` $ python scripts/decoder.py pyroute2.netlink.nl80211.nl80211cmd data You will get the session decoded:: {'attrs': [['NL80211_ATTR_WIPHY', 0], ['NL80211_ATTR_IFNAME', 'test'], ['NL80211_ATTR_IFTYPE', 6]], 'cmd': 7, 'header': {'flags': 5, 'length': 48, 'pid': 3292542647, 'sequence_number': 1430426434, 'type': 27}, 'reserved': 0, 'version': 0} {'attrs': [['NL80211_ATTR_IFINDEX', 23811], ['NL80211_ATTR_IFNAME', 'test'], ['NL80211_ATTR_WIPHY', 0], ['NL80211_ATTR_IFTYPE', 6], ['NL80211_ATTR_WDEV', 4], ['NL80211_ATTR_MAC', 'a4:4e:31:43:1c:7c'], ['NL80211_ATTR_GENERATION', '02:00:00:00']], 'cmd': 7, 'header': {'flags': 0, 'length': 88, 'pid': 3292542647, 'sequence_number': 1430426434, 'type': 27}, 'reserved': 0, 'version': 1} Now you know, how to do a request and what you will get as a response. Sample collected data is in the `scripts` directory. Submit changes -------------- Please do not hesitate to submit the changes on github. Without your patches this module will not evolve. 
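Quick example
-------------

A minimal sketch of listing wireless interfaces; the names printed
depend, of course, on the system::

    from pyroute2 import IW

    iw = IW()
    for ifc in iw.get_interfaces_dump():
        print(ifc.get_attr('NL80211_ATTR_IFNAME'))
    iw.close()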
''' import logging from pyroute2.netlink import NLM_F_ACK from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink import NLM_F_DUMP from pyroute2.netlink.nl80211 import NL80211 from pyroute2.netlink.nl80211 import nl80211cmd from pyroute2.netlink.nl80211 import NL80211_NAMES from pyroute2.netlink.nl80211 import IFTYPE_NAMES from pyroute2.netlink.nl80211 import CHAN_WIDTH from pyroute2.netlink.nl80211 import BSS_STATUS_NAMES from pyroute2.netlink.nl80211 import SCAN_FLAGS_NAMES log = logging.getLogger(__name__) class IW(NL80211): def __init__(self, *argv, **kwarg): # get specific groups kwarg if 'groups' in kwarg: groups = kwarg['groups'] del kwarg['groups'] else: groups = None # get specific async kwarg if 'async' in kwarg: # FIXME # raise deprecation error after 0.5.3 # log.warning('use "async_cache" instead of "async", ' '"async" is a keyword from Python 3.7') kwarg['async_cache'] = kwarg.pop('async') if 'async_cache' in kwarg: async_cache = kwarg.pop('async_cache') else: async_cache = False # align groups with async_cache if groups is None: groups = ~0 if async_cache else 0 # continue with init super(IW, self).__init__(*argv, **kwarg) # do automatic bind # FIXME: unfortunately we can not omit it here self.bind(groups, async_cache=async_cache) def del_interface(self, dev): ''' Delete a virtual interface - dev — device index ''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_DEL_INTERFACE'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', dev]] self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_ACK) def add_interface(self, ifname, iftype, dev=None, phy=0): ''' Create a virtual interface - ifname — name of the interface to create - iftype — interface type to create - dev — device index - phy — phy index One should specify `dev` (device index) or `phy` (phy index). If no one specified, phy == 0. `iftype` can be integer or string: 1. adhoc 2. station 3. ap 4. ap_vlan 5. wds 6. monitor 7. mesh_point 8. p2p_client 9. p2p_go 10. p2p_device 11. 
ocb ''' # lookup the interface type iftype = IFTYPE_NAMES.get(iftype, iftype) assert isinstance(iftype, int) msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_NEW_INTERFACE'] msg['attrs'] = [['NL80211_ATTR_IFNAME', ifname], ['NL80211_ATTR_IFTYPE', iftype]] if dev is not None: msg['attrs'].append(['NL80211_ATTR_IFINDEX', dev]) elif phy is not None: msg['attrs'].append(['NL80211_ATTR_WIPHY', phy]) else: raise TypeError('no device specified') self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_ACK) def list_dev(self): ''' Get list of all wifi network interfaces ''' return self.get_interfaces_dump() def list_wiphy(self): ''' Get list of all phy devices ''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_GET_WIPHY'] return self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_DUMP) def _get_phy_name(self, attr): return 'phy%i' % attr.get_attr('NL80211_ATTR_WIPHY') def _get_frequency(self, attr): return attr.get_attr('NL80211_ATTR_WIPHY_FREQ') or 0 def get_interfaces_dict(self): ''' Get interfaces dictionary ''' ret = {} for wif in self.get_interfaces_dump(): chan_width = wif.get_attr('NL80211_ATTR_CHANNEL_WIDTH') freq = self._get_frequency(wif) if chan_width is not None else 0 wifname = wif.get_attr('NL80211_ATTR_IFNAME') ret[wifname] = [wif.get_attr('NL80211_ATTR_IFINDEX'), self._get_phy_name(wif), wif.get_attr('NL80211_ATTR_MAC'), freq, chan_width] return ret def get_interfaces_dump(self): ''' Get interfaces dump ''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_GET_INTERFACE'] return self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_DUMP) def get_interface_by_phy(self, attr): ''' Get interface by phy ( use x.get_attr('NL80211_ATTR_WIPHY') ) ''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_GET_INTERFACE'] msg['attrs'] = [['NL80211_ATTR_WIPHY', attr]] return self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_DUMP) def get_interface_by_ifindex(self, ifindex): ''' Get interface by ifindex ( use x.get_attr('NL80211_ATTR_IFINDEX') ''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_GET_INTERFACE'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex]] return self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST) def get_stations(self, ifindex): ''' Get stations by ifindex ''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_GET_STATION'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex]] return self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_DUMP) def join_ibss(self, ifindex, ssid, freq, bssid=None, channel_fixed=False, width=None, center=None, center2=None): ''' Connect to network by ssid - ifindex - IFINDEX of the interface to perform the connection - ssid - Service set identification - freq - Frequency in MHz - bssid - The MAC address of target interface - channel_fixed: Boolean flag - width - Channel width - center - Central frequency of the 40/80/160 MHz channel - center2 - Center frequency of second segment if 80P80 If the flag of channel_fixed is True, one should specify both the width and center of the channel `width` can be integer of string: 0. 20_noht 1. 20 2. 40 3. 80 4. 80p80 5. 160 6. 5 7. 
10 ''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_JOIN_IBSS'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex], ['NL80211_ATTR_SSID', ssid], ['NL80211_ATTR_WIPHY_FREQ', freq]] if channel_fixed: msg['attrs'].append(['NL80211_ATTR_FREQ_FIXED', None]) width = CHAN_WIDTH.get(width, width) assert isinstance(width, int) if width in [2, 3, 5] and center: msg['attrs'].append(['NL80211_ATTR_CHANNEL_WIDTH', width]) msg['attrs'].append(['NL80211_ATTR_CENTER_FREQ1', center]) elif width == 4 and center and center2: msg['attrs'].append(['NL80211_ATTR_CHANNEL_WIDTH', width]) msg['attrs'].append(['NL80211_ATTR_CENTER_FREQ1', center]) msg['attrs'].append(['NL80211_ATTR_CENTER_FREQ2', center2]) elif width in [0, 1, 6, 7]: msg['attrs'].append(['NL80211_ATTR_CHANNEL_WIDTH', width]) else: raise TypeError('No channel specified') if bssid is not None: msg['attrs'].append(['NL80211_ATTR_MAC', bssid]) self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_ACK) def leave_ibss(self, ifindex): ''' Leave the IBSS -- the IBSS is determined by the network interface ''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_LEAVE_IBSS'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex]] self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_ACK) def authenticate(self, ifindex, bssid, ssid, freq, auth_type=0): ''' Send an Authentication management frame. ''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_AUTHENTICATE'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex], ['NL80211_ATTR_MAC', bssid], ['NL80211_ATTR_SSID', ssid], ['NL80211_ATTR_WIPHY_FREQ', freq], ['NL80211_ATTR_AUTH_TYPE', auth_type]] self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_ACK) def deauthenticate(self, ifindex, bssid, reason_code=0x01): ''' Send a Deauthentication management frame. ''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_DEAUTHENTICATE'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex], ['NL80211_ATTR_MAC', bssid], ['NL80211_ATTR_REASON_CODE', reason_code]] self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_ACK) def associate(self, ifindex, bssid, ssid, freq, info_elements=None): ''' Send an Association request frame. ''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_ASSOCIATE'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex], ['NL80211_ATTR_MAC', bssid], ['NL80211_ATTR_SSID', ssid], ['NL80211_ATTR_WIPHY_FREQ', freq]] if info_elements is not None: msg['attrs'].append(['NL80211_ATTR_IE', info_elements]) self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_ACK) def disassociate(self, ifindex, bssid, reason_code=0x03): ''' Send a Disassociation management frame. 
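        Example -- a minimal sketch; the interface name and the BSSID
        below are placeholders::

            iw = IW()
            idx = iw.get_interfaces_dict()['wlan0'][0]
            iw.disassociate(idx, 'aa:bb:cc:dd:ee:ff')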
''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_DISASSOCIATE'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex], ['NL80211_ATTR_MAC', bssid], ['NL80211_ATTR_REASON_CODE', reason_code]] self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_ACK) def connect(self, ifindex, ssid, bssid=None): ''' Connect to the ap with ssid and bssid ''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_CONNECT'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex], ['NL80211_ATTR_SSID', ssid]] if bssid is not None: msg['attrs'].append(['NL80211_ATTR_MAC', bssid]) self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_ACK) def disconnect(self, ifindex): ''' Disconnect the device ''' msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_DISCONNECT'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex]] self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_ACK) def scan(self, ifindex, ssids=None, flush_cache=False): ''' Trigger scan and get results. Triggering scan usually requires root, and can take a couple of seconds. ''' # Prepare a second netlink socket to get the scan results. # The issue is that the kernel can send the results notification # before we get answer for the NL80211_CMD_TRIGGER_SCAN nsock = NL80211() nsock.bind() nsock.add_membership('scan') # send scan request msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_TRIGGER_SCAN'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex]] # If a list of SSIDs is provided, active scanning should be performed if ssids is not None: if isinstance(ssids, list): msg['attrs'].append(['NL80211_ATTR_SCAN_SSIDS', ssids]) scan_flags = 0 if flush_cache: # Flush the cache before scanning scan_flags |= SCAN_FLAGS_NAMES['NL80211_SCAN_FLAG_FLUSH'] msg['attrs'].append(['NL80211_ATTR_SCAN_FLAGS', scan_flags]) self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_ACK) # monitor the results notification on the secondary socket scanResultNotFound = True while scanResultNotFound: listMsg = nsock.get() for msg in listMsg: if msg["event"] == "NL80211_CMD_NEW_SCAN_RESULTS": scanResultNotFound = False break # close the secondary socket nsock.close() # request the results msg2 = nl80211cmd() msg2['cmd'] = NL80211_NAMES['NL80211_CMD_GET_SCAN'] msg2['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex]] return self.nlm_request(msg2, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_DUMP) def get_associated_bss(self, ifindex): ''' Returns the same info like scan() does, but only about the currently associated BSS. Unlike scan(), it returns immediately and doesn't require root. 
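        Example -- a minimal sketch, assuming `idx` is the interface
        index::

            iw = IW()
            bss = iw.get_associated_bss(idx)
            if bss is not None:
                status = (bss
                          .get_attr('NL80211_ATTR_BSS')
                          .get_attr('NL80211_BSS_STATUS'))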
''' # When getting scan results without triggering scan first, # you'll always get the information about currently associated BSS # # However, it may return other BSS, if last scan wasn't very # long time go msg = nl80211cmd() msg['cmd'] = NL80211_NAMES['NL80211_CMD_GET_SCAN'] msg['attrs'] = [['NL80211_ATTR_IFINDEX', ifindex]] res = self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_DUMP) for x in res: attr_bss = x.get_attr('NL80211_ATTR_BSS') if attr_bss is not None: status = attr_bss.get_attr('NL80211_BSS_STATUS') if status in (BSS_STATUS_NAMES['associated'], BSS_STATUS_NAMES['ibss_joined']): return x return None pyroute2-0.5.9/pyroute2/ndb/0000755000175000017500000000000013621220110015552 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/ndb/__init__.py0000644000175000017500000000000013610051400017653 0ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/ndb/events.py0000644000175000017500000000311513610051400017432 0ustar peetpeet00000000000000import time class SyncStart(Exception): pass class SchemaFlush(Exception): pass class SchemaReadLock(Exception): pass class SchemaReadUnlock(Exception): pass class SchemaGenericRequest(object): def __init__(self, response, *argv, **kwarg): self.response = response self.argv = argv self.kwarg = kwarg class MarkFailed(Exception): pass class DBMExitException(Exception): pass class ShutdownException(Exception): pass class InvalidateHandlerException(Exception): pass class State(object): events = None def __init__(self, prime=None, log=None): self.events = [] self.log = log if prime is not None: self.load(prime) def load(self, prime): self.events = [] for state in prime.events: self.events.append(state) def transition(self): if len(self.events) < 2: return None return (self.events[-2][1], self.events[-1][1]) def get(self): if not self.events: return None return self.events[-1][1] def set(self, state): if self.log is not None: self.log.debug(state) if self.events and self.events[-1][1] == state: self.events.pop() self.events.append((time.time(), state)) return state def __eq__(self, other): if not self.events: return False return self.events[-1][1] == other def __ne__(self, other): if not self.events: return True return self.events[-1][1] != other pyroute2-0.5.9/pyroute2/ndb/main.py0000644000175000017500000007536413621021705017100 0ustar peetpeet00000000000000''' NDB is a high level network management module. IT allows to manage interfaces, routes, addresses etc. of connected systems, containers and network namespaces. NDB work with remote systems via ssh, in that case `mitogen `_ module is required. It is possible to connect also OpenBSD and FreeBSD systems, but in read-only mode for now. 
Quick start ----------- Print the routing infotmation in the CSV format:: with NDB() as ndb: for record in ndb.routes.summary(format='csv'): print(record) Print all the interface names on the system:: with NDB() as ndb: print([x.ifname for x in ndb.interfaces.summary()]) Print IP addresses of interfaces in several network namespaces:: nslist = ['netns01', 'netns02', 'netns03'] with NDB() as ndb: for nsname in nslist: ndb.sources.add(netns=nsname) for record in ndb.interfaces.summary(format='json'): print(record) Add an IP address on an interface:: with NDB() as ndb: with ndb.interfaces['eth0'] as i: i.ipaddr.create(address='10.0.0.1', prefixlen=24).commit() # ---> <--- NDB waits until the address actually # becomes available Change an interface property:: with NDB() as ndb: with ndb.interfaces['eth0'] as i: i['state'] = 'up' i['address'] = '00:11:22:33:44:55' # ---> <--- the commit() is called authomatically by # the context manager's __exit__() Key NDB features: * Asynchronously updated database of RTNL objects * Data integrity * Multiple data sources -- local, netns, remote * Fault tolerance and memory consumtion limits * Transactions ''' import gc import sys import json import time import errno import atexit import sqlite3 import logging import threading import traceback import ctypes import ctypes.util from functools import partial from collections import OrderedDict from pyroute2 import config from pyroute2 import cli from pyroute2.common import basestring from pyroute2.ndb import schema from pyroute2.ndb.events import (SyncStart, MarkFailed, DBMExitException, ShutdownException, InvalidateHandlerException) from pyroute2.ndb.source import Source from pyroute2.ndb.objects.interface import Interface from pyroute2.ndb.objects.address import Address from pyroute2.ndb.objects.route import Route from pyroute2.ndb.objects.neighbour import Neighbour from pyroute2.ndb.objects.rule import Rule from pyroute2.ndb.objects.netns import NetNS from pyroute2.ndb.query import Query from pyroute2.ndb.report import (Report, Record) try: from urlparse import urlparse except ImportError: from urllib.parse import urlparse try: import queue except ImportError: import Queue as queue try: import psycopg2 except ImportError: psycopg2 = None log = logging.getLogger(__name__) def target_adapter(value): # # MPLS target adapter for SQLite3 # return json.dumps(value) sqlite3.register_adapter(list, target_adapter) class View(dict): ''' The View() object returns RTNL objects on demand:: ifobj1 = ndb.interfaces['eth0'] ifobj2 = ndb.interfaces['eth0'] # ifobj1 != ifobj2 ''' def __init__(self, ndb, table, match_src=None, match_pairs=None, chain=None, default_target='localhost'): self.ndb = ndb self.log = ndb.log.channel('view.%s' % table) self.table = table self.event = table # FIXME self.chain = chain self.cache = {} self.default_target = default_target self.constraints = {} self.match_src = match_src self.match_pairs = match_pairs self.classes = OrderedDict() self.classes['interfaces'] = Interface self.classes['addresses'] = Address self.classes['neighbours'] = Neighbour self.classes['routes'] = Route self.classes['rules'] = Rule self.classes['netns'] = NetNS def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): pass def constraint(self, key, value): if value is None: self.constraints.pop(key) else: self.constraints[key] = value return self def getmany(self, spec, table=None): return self.ndb.schema.get(table or self.table, spec) def getone(self, spec, table=None): for obj in 
self.getmany(spec, table): return obj def get(self, spec, table=None): try: return self.__getitem__(spec, table) except KeyError: return None @cli.change_pointer def create(self, *argspec, **kwspec): spec = self.classes[self.table].adjust_spec(kwspec or argspec[0]) if self.chain: spec['ndb_chain'] = self.chain spec['create'] = True return self[spec] @cli.change_pointer def add(self, *argspec, **kwspec): self.log.warning('''\n The name add() will be removed in future releases, use create() instead. If you believe that the idea to rename is wrong, please file your opinion to the project's bugtracker. The reason behind the rename is not to confuse interfaces.add() with bridge and bond port operations, that don't create any new interfaces but work on existing ones. ''') return self.create(*argspec, **kwspec) def wait(self, **spec): ret = None # install a limited events queue -- for a possible immediate reaction evq = queue.Queue(maxsize=100) def handler(evq, target, event): # ignore the "queue full" exception # # if we miss some events here, nothing bad happens: we just # load them from the DB after a timeout, falling back to # the DB polling # # the most important here is not to allocate too much memory try: evq.put_nowait((target, event)) except queue.Full: pass # hdl = partial(handler, evq) (self .ndb .register_handler(self .ndb .schema .classes[self.event], hdl)) # try: ret = self.__getitem__(spec) for key in spec: if ret[key] != spec[key]: ret = None break except KeyError: ret = None while ret is None: try: target, msg = evq.get(timeout=1) except queue.Empty: try: ret = self.__getitem__(spec) for key in spec: if ret[key] != spec[key]: ret = None raise KeyError() break except KeyError: continue # for key, value in spec.items(): if key == 'target' and value != target: break elif value not in (msg.get(key), msg.get_attr(msg.name2nla(key))): break else: while ret is None: try: ret = self.__getitem__(spec) except KeyError: time.sleep(0.1) # (self .ndb .unregister_handler(self .ndb .schema .classes[self.event], hdl)) del evq del hdl gc.collect() return ret def __getitem__(self, key, table=None): iclass = self.classes[table or self.table] key = iclass.adjust_spec(key) if self.match_src: match_src = [x for x in self.match_src] match_pairs = dict(self.match_pairs) else: match_src = [] match_pairs = {} if self.constraints: match_src.insert(0, self.constraints) for cskey, csvalue in self.constraints.items(): match_pairs[cskey] = cskey ret = iclass(self, key, match_src=match_src, match_pairs=match_pairs) # rtnl_object.key() returns a dcitionary that can not # be used as a cache key. Create here a tuple from it. # The key order guaranteed by the dictionary. cache_key = tuple(ret.key.items()) # Iterate all the cache to remove unused and clean # (without any started transaction) objects. 
for ckey in tuple(self.cache): # Skip the current cache_key to avoid extra # cache del/add records in the logs if ckey == cache_key: continue # The number of referrers must be > 1, the first # one is the cache itself rcount = len(gc.get_referrers(self.cache[ckey])) # The number of changed rtnl_object fields must # be 0 which means that no transaction is started ccount = len(self.cache[ckey].changed) if rcount == 1 and ccount == 0: self.log.debug('cache del %s' % (ckey, )) del self.cache[ckey] # Cache only existing objects if ret.state == 'system': if cache_key in self.cache: self.log.debug('cache hit %s' % (cache_key, )) # Explicitly get rid of the created object del ret # The object from the cache has already # registered callbacks, simply return it ret = self.cache[cache_key] return ret else: self.log.debug('cache add %s' % (cache_key, )) # Otherwise create a cache entry self.cache[cache_key] = ret ret.register() return ret def __setitem__(self, key, value): raise NotImplementedError() def __delitem__(self, key): raise NotImplementedError() def __iter__(self): return self.keys() def keys(self): for record in self.dump(): yield record def values(self): for key in self.keys(): yield self[key] def items(self): for key in self.keys(): yield (key, self[key]) @cli.show_result def count(self): return (self .ndb .schema .fetchone('SELECT count(*) FROM %s' % self.table))[0] def __len__(self): return self.count() def _keys(self, iclass): return (['target', 'tflags'] + self.ndb.schema.compiled[iclass.view or iclass.table]['names']) def _dump(self, match=None): iclass = self.classes[self.table] keys = self._keys(iclass) spec, values = self._match(match, iclass, keys, iclass.table_alias) if iclass.dump and iclass.dump_header: yield iclass.dump_header for record in (self .ndb .schema .fetch(iclass.dump + spec, values)): yield record else: yield tuple([iclass.nla2name(x) for x in keys]) for record in (self .ndb .schema .fetch('SELECT * FROM %s AS %s %s' % (iclass.view or iclass.table, iclass.table_alias, spec), values)): yield record def _csv(self, match=None, dump=None): if dump is None: dump = self._dump(match) for record in dump: row = [] for field in record: if isinstance(field, int): row.append('%i' % field) elif field is None: row.append('') else: row.append("'%s'" % field) yield ','.join(row) def _json(self, match=None, dump=None): if dump is None: dump = self._dump(match) fnames = next(dump) buf = [] yield '[' for record in dump: if buf: buf[-1] += ',' for line in buf: yield line buf = [] lines = json.dumps(dict(zip(fnames, record)), indent=4).split('\n') buf.append(' {') for line in sorted(lines[1:-1]): buf.append(' %s,' % line.split(',')[0]) buf[-1] = buf[-1][:-1] buf.append(' }') for line in buf: yield line yield ']' def _native(self, match=None, dump=None): if dump is None: dump = self._dump(match) fnames = next(dump) for record in dump: yield Record(fnames, record) def _details(self, match=None, dump=None, format=None): # get the raw dump generator and get the fields description if dump is None: dump = self._dump(match) fnames = next(dump) if format == 'json': yield '[' buf = [] # iterate all the records and yield a dict for every record for record in dump: obj = self[dict(zip(fnames, record))] if format == 'json': if buf: buf[-1] += ',' for line in buf: yield line buf = [] ret = OrderedDict() for key in sorted(obj): ret[key] = obj[key] lines = json.dumps(ret, indent=4).split('\n') for line in lines: buf.append(' %s' % line) else: yield dict(obj) if format == 'json': for line in buf: yield 
line yield ']' def _summary(self, match=None): iclass = self.classes[self.table] keys = self._keys(iclass) spec, values = self._match(match, iclass, keys, iclass.table_alias) if iclass.summary is not None: if iclass.summary_header is not None: yield iclass.summary_header for record in (self .ndb .schema .fetch(iclass.summary + spec, values)): yield record else: header = ('target', ) + self.ndb.schema.indices[iclass.table] yield tuple([iclass.nla2name(x) for x in header]) key_fields = ','.join(['f_%s' % x for x in header]) for record in (self .ndb .schema .fetch('SELECT %s FROM %s AS %s %s' % (key_fields, iclass.view or iclass.table, iclass.table_alias, spec), values)): yield record def _match(self, match, cls, keys, alias): values = [] match = match or {} if self.match_src and self.match_pairs: for l_key, r_key in self.match_pairs.items(): for src in self.match_src: try: match[l_key] = src[r_key] break except: pass if match: spec = ' WHERE ' conditions = [] for key, value in match.items(): keyc = [] if cls.name2nla(key) in keys: keyc.append(cls.name2nla(key)) if key in keys: keyc.append(key) if not keyc: raise KeyError('key %s not found' % key) if len(keyc) == 1: conditions.append('%s.f_%s = %s' % (alias, keyc[0], self.ndb.schema.plch)) values.append(value) elif len(keyc) == 2: conditions.append('(%s.f_%s = %s OR %s.f_%s = %s)' % (alias, keyc[0], self.ndb.schema.plch, alias, keyc[1], self.ndb.schema.plch)) values.append(value) values.append(value) spec = ' WHERE %s' % ' AND '.join(conditions) else: spec = '' return spec, values @cli.show_result def dump(self, *argv, **kwarg): fmt = kwarg.pop('format', kwarg.pop('fmt', self.ndb.config.get('show_format', 'native'))) if fmt == 'native': return Report(self._native(dump=self._dump(*argv, **kwarg))) elif fmt == 'csv': return Report(self._csv(dump=self._dump(*argv, **kwarg)), ellipsis=False) elif fmt == 'json': return Report(self._json(dump=self._dump(*argv, **kwarg)), ellipsis=False) else: raise ValueError('format not supported') @cli.show_result def summary(self, *argv, **kwarg): fmt = kwarg.pop('format', kwarg.pop('fmt', self.ndb.config.get('show_format', 'native'))) if fmt == 'native': return Report(self._native(dump=self._summary(*argv, **kwarg))) elif fmt == 'csv': return Report(self._csv(dump=self._summary(*argv, **kwarg)), ellipsis=False) elif fmt == 'json': return Report(self._json(dump=self._summary(*argv, **kwarg)), ellipsis=False) else: raise ValueError('format not supported') @cli.show_result def details(self, *argv, **kwarg): fmt = kwarg.pop('format', kwarg.pop('fmt', self.ndb.config.get('show_format', 'native'))) if fmt == 'native': return Report(self._details(*argv, **kwarg)) elif fmt == 'json': kwarg['format'] = 'json' return Report(self._details(*argv, **kwarg), ellipsis=False) else: raise ValueError('format not supported') class SourcesView(View): def __init__(self, ndb): super(SourcesView, self).__init__(ndb, 'sources') self.classes['sources'] = Source self.cache = {} self.lock = threading.Lock() def async_add(self, **spec): spec = dict(Source.defaults(spec)) self.cache[spec['target']] = Source(self.ndb, **spec).start() return self.cache[spec['target']] def add(self, **spec): spec = dict(Source.defaults(spec)) if 'event' not in spec: sync = True spec['event'] = threading.Event() else: sync = False self.cache[spec['target']] = Source(self.ndb, **spec).start() if sync: self.cache[spec['target']].event.wait() return self.cache[spec['target']] def remove(self, target, code=errno.ECONNRESET, sync=True): with self.lock: if target in 
self.cache: source = self.cache[target] source.close(code=code, sync=sync) return self.cache.pop(target) def keys(self): for key in self.cache: yield key def _keys(self, iclass): return ['target', 'kind'] def wait(self, **spec): raise NotImplementedError() def _summary(self, *argv, **kwarg): return self._dump(*argv, **kwarg) def __getitem__(self, key, table=None): if isinstance(key, basestring): target = key elif isinstance(key, dict) and 'target' in key.keys(): target = key['target'] else: raise ValueError('key format not supported') return self.cache[target] class Log(object): def __init__(self, log_id=None): self.logger = None self.state = False self.log_id = log_id or id(self) self.logger = logging.getLogger('pyroute2.ndb.%s' % self.log_id) self.main = self.channel('main') def __call__(self, target=None): if target is None: return self.logger is not None if self.logger is not None: for handler in tuple(self.logger.handlers): self.logger.removeHandler(handler) if target in ('off', False): if self.state: self.logger.setLevel(0) self.logger.addHandler(logging.NullHandler()) return if target in ('on', 'stderr'): handler = logging.StreamHandler() elif isinstance(target, basestring): url = urlparse(target) if not url.scheme and url.path: handler = logging.FileHandler(url.path) elif url.scheme == 'syslog': handler = logging.SysLogHandler(address=url.netloc.split(':')) else: raise ValueError('logging scheme not supported') else: handler = target fmt = '%(asctime)s %(levelname)8s %(name)s: %(message)s' formatter = logging.Formatter(fmt) handler.setFormatter(formatter) self.logger.addHandler(handler) self.logger.setLevel(logging.DEBUG) @property def on(self): self.__call__(target='on') @property def off(self): self.__call__(target='off') def channel(self, name): return logging.getLogger('pyroute2.ndb.%s.%s' % (self.log_id, name)) def debug(self, *argv, **kwarg): return self.main.debug(*argv, **kwarg) def info(self, *argv, **kwarg): return self.main.info(*argv, **kwarg) def warning(self, *argv, **kwarg): return self.main.warning(*argv, **kwarg) def error(self, *argv, **kwarg): return self.main.error(*argv, **kwarg) def critical(self, *argv, **kwarg): return self.main.critical(*argv, **kwarg) class ReadOnly(object): def __init__(self, ndb): self.ndb = ndb def __enter__(self): self.ndb.schema.allow_write(False) return self def __exit__(self, exc_type, exc_value, traceback): self.ndb.schema.allow_write(True) class DeadEnd(object): def put(self, *argv, **kwarg): raise ShutdownException('shutdown in progress') class EventQueue(object): def __init__(self, *argv, **kwarg): self._bypass = self._queue = queue.Queue(*argv, **kwarg) def put(self, *argv, **kwarg): return self._queue.put(*argv, **kwarg) def shutdown(self): self._queue = DeadEnd() def bypass(self, *argv, **kwarg): return self._bypass.put(*argv, **kwarg) def get(self, *argv, **kwarg): return self._bypass.get(*argv, **kwarg) def qsize(self): return self._bypass.qsize() class NDB(object): def __init__(self, sources=None, db_provider='sqlite3', db_spec=':memory:', rtnl_debug=False, log=False, auto_netns=False, libc=None): self.ctime = self.gctime = time.time() self.schema = None self.config = {} self.libc = libc or ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True) self.log = Log(log_id=id(self)) self.readonly = ReadOnly(self) self._auto_netns = auto_netns self._db = None self._dbm_thread = None self._dbm_ready = threading.Event() self._dbm_shutdown = threading.Event() self._global_lock = threading.Lock() self._event_map = None 
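# A bounded event queue: put() blocks the producer when the queue is
# full, which limits memory consumption during event bursts (see the
# EventQueue class above).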
self._event_queue = EventQueue(maxsize=100) # if log: self.log(log) # # fix sources prime if sources is None: sources = [{'target': 'localhost', 'kind': 'local', 'nlm_generator': 1}] if sys.platform.startswith('linux'): sources.append({'target': 'localhost/netns', 'kind': 'nsmanager'}) elif not isinstance(sources, (list, tuple)): raise ValueError('sources format not supported') self.sources = SourcesView(self) self._nl = sources self._db_provider = db_provider self._db_spec = db_spec self._db_rtnl_log = rtnl_debug atexit.register(self.close) self._dbm_ready.clear() self._dbm_autoload = set() self._dbm_thread = threading.Thread(target=self.__dbm__, name='NDB main loop') self._dbm_thread.setDaemon(True) self._dbm_thread.start() self._dbm_ready.wait() for event in tuple(self._dbm_autoload): event.wait() self._dbm_autoload = None self.interfaces = View(self, 'interfaces') self.addresses = View(self, 'addresses') self.routes = View(self, 'routes') self.neighbours = View(self, 'neighbours') self.rules = View(self, 'rules') self.netns = View(self, 'netns', default_target='localhost/netns') self.query = Query(self.schema) def _get_view(self, name, match_src=None, match_pairs=None, chain=None): return View(self, name, match_src, match_pairs, chain) def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.close() def register_handler(self, event, handler): if event not in self._event_map: self._event_map[event] = [] self._event_map[event].append(handler) def unregister_handler(self, event, handler): self._event_map[event].remove(handler) def execute(self, *argv, **kwarg): return self.schema.execute(*argv, **kwarg) def close(self): with self._global_lock: if self._dbm_shutdown.is_set(): return else: self._dbm_shutdown.set() if hasattr(atexit, 'unregister'): atexit.unregister(self.close) else: try: atexit._exithandlers.remove((self.close, (), {})) except ValueError: pass # shutdown the _dbm_thread self._event_queue.shutdown() self._event_queue.bypass(('localhost', (ShutdownException(), ))) self._dbm_thread.join() def __dbm__(self): def default_handler(target, event): if isinstance(event, Exception): raise event log.warning('unsupported event ignored: %s' % type(event)) def check_sources_started(self, _locals, target, event): _locals['countdown'] -= 1 if _locals['countdown'] == 0: self._dbm_ready.set() _locals = {'countdown': len(self._nl)} # init the events map event_map = {type(self._dbm_ready): [lambda t, x: x.set()], MarkFailed: [lambda t, x: (self .schema .mark(t, 1))], SyncStart: [partial(check_sources_started, self, _locals)]} self._event_map = event_map event_queue = self._event_queue if self._db_provider == 'sqlite3': self._db = sqlite3.connect(self._db_spec) elif self._db_provider == 'psycopg2': self._db = psycopg2.connect(**self._db_spec) self.schema = schema.init(self, self._db, self._db_provider, self._db_rtnl_log, id(threading.current_thread())) for spec in self._nl: spec['event'] = None self.sources.add(**spec) for (event, handlers) in self.schema.event_map.items(): for handler in handlers: self.register_handler(event, handler) stop = False while not stop: target, events = event_queue.get() try: if events is None: continue # if nlm_generator is True, an exception can come # here while iterating events for event in events: handlers = event_map.get(event.__class__, [default_handler, ]) for handler in tuple(handlers): try: handler(target, event) except InvalidateHandlerException: try: handlers.remove(handler) except: log.error('could not invalidate ' 'event 
handler:\n%s' % traceback.format_exc()) except ShutdownException: stop = True break except DBMExitException: return except: log.error('could not load event:\n%s\n%s' % (event, traceback.format_exc())) if time.time() - self.gctime > config.gc_timeout: self.gctime = time.time() except Exception as e: self.log.error('exception <%s> in source %s' % (e, target)) # restart the target try: self.sources[target].restart(reason=e) except KeyError: pass # release all the sources for target in tuple(self.sources.cache): source = self.sources.remove(target, sync=False) if source is not None and source.th is not None: source.shutdown.set() source.th.join() self.log.debug('flush DB for the target %s' % target) self.schema.flush(target) # close the database self.schema.commit() self.schema.close() pyroute2-0.5.9/pyroute2/ndb/objects/0000755000175000017500000000000013621220110017203 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/ndb/objects/__init__.py0000644000175000017500000006367413616355730021361 0ustar peetpeet00000000000000''' Structure and API ================= The NDB objects are dictionary-like structures that represent network objects -- interfaces, routes, addresses etc. In addition to the usual dictionary API they have some NDB-specific methods, see the `RTNL_Object` class description below. The dictionary fields represent RTNL messages fields and NLA names, and the objects are used as argument dictionaries to normal `IPRoute` methods like `link()` or `route()`. Thus everything described for the `IPRoute` methods is valid here as well. See also: :ref:`iproute` E.g.:: # create a vlan interface with IPRoute with IPRoute() as ipr: ipr.link("add", ifname="vlan1108", kind="vlan", link=ipr.link_lookup(ifname="eth0"), vlan_id=1108) # same with NDB: with NDB(log="stderr") as ndb: (ndb .interfaces .add(ifname="vlan1108", kind="vlan", link="eth0", vlan_id=1108) .commit()) Slightly simplifying, if a network object doesn't exist, NDB will run an RTNL method with "add" argument, if exists -- "set", and to remove an object NDB will call the method with "del" argument. Accessing objects ================= NDB objects are grouped into "views": * interfaces * addresses * routes * neighbours * rules * netns Views are dictionary-like objects that accept strings or dict selectors:: # access eth0 ndb.interfaces["eth0"] # access eth0 in the netns test01 ndb.sources.add(netns="test01") ndb.interfaces[{"target": "test01", "ifname": "eth0"}] # access a route to 10.4.0.0/24 ndb.routes["10.4.0.0/24"] # same with a dict selector ndb.routes[{"dst": "10.4.0.0", "dst_len": 24}] Objects cache ============= NDB create objects on demand, it doesn't create thousands of route objects for thousands of routes by default. The object is being created only when accessed for the first time, and stays in the cache as long as it has any not committed changes. To inspect cached objects, use views' `.cache`:: >>> ndb.interfaces.cache.keys() [(('target', u'localhost'), ('tflags', 0), ('index', 1)), # lo (('target', u'localhost'), ('tflags', 0), ('index', 5))] # eth3 There is no asynchronous cache invalidation, the cache is being cleaned up every time when an object is accessed. 
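A minimal sketch (assuming an `eth0` interface exists on the local system)::

    >>> eth0 = ndb.interfaces['eth0']   # the object enters the cache
    >>> eth0['mtu'] = 1280              # a pending change pins it in the cache
    >>> eth0.commit()                   # committed objects may be evicted later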
''' import json import time import errno import weakref import traceback import threading import collections from functools import partial from pyroute2 import cli from pyroute2.ndb.events import State from pyroute2.ndb.report import Record from pyroute2.netlink.exceptions import NetlinkError from pyroute2.ndb.events import InvalidateHandlerException class RTNL_Object(dict): ''' Base RTNL object class. ''' view = None # (optional) view to load values for the summary etc. utable = None # table to send updates to table_alias = '' key_extra_fields = [] fields_cmp = {} schema = None event_map = None state = None log = None summary = None summary_header = None dump = None dump_header = None errors = None msg_class = None reverse_update = None _table = None _apply_script = None _key = None _replace = None _replace_on_key_change = False # 8<------------------------------------------------------------ # # Documented public properties section # @property def table(self): ''' The main reference table for the object. The SQL schema of the table is used to build the key and to verify the fields. ''' return self._table @table.setter def table(self, value): self._table = value @property def etable(self): ''' The table where the object actually fetches the data from. It is not always equal `self.table`, e.g. snapshot objects fetch the data from snapshot tables. Read-only property. ''' if self.ctxid: return '%s_%s' % (self.table, self.ctxid) else: return self.table @property def key(self): ''' The key of the object, used to fetch it from the DB. ''' nkey = self._key or {} ret = collections.OrderedDict() for name in self.kspec: kname = self.iclass.nla2name(name) if kname in self: value = self[kname] if value is None and name in nkey: value = nkey[name] ret[name] = value if len(ret) < len(self.kspec): for name in self.key_extra_fields: kname = self.iclass.nla2name(name) if self.get(kname): ret[name] = self[kname] return ret @key.setter def key(self, k): if not isinstance(k, dict): return for key, value in k.items(): if value is not None: dict.__setitem__(self, self.iclass.nla2name(key), value) # # 8<------------------------------------------------------------ # def __init__(self, view, key, iclass, ctxid=None, match_src=None, match_pairs=None): self.view = view self.ndb = view.ndb self.sources = view.ndb.sources self.ctxid = ctxid self.schema = view.ndb.schema self.match_src = match_src or tuple() self.match_pairs = match_pairs or dict() self.changed = set() self.iclass = iclass self.utable = self.utable or self.table self.errors = [] self.log = self.ndb.log.channel('rtnl_object') self.log.debug('init') self.state = State() self.state.set('invalid') self.snapshot_deps = [] self.load_event = threading.Event() self.load_event.set() self.lock = threading.Lock() self.kspec = self.schema.compiled[self.table]['idx'] self.knorm = self.schema.compiled[self.table]['norm_idx'] self.spec = self.schema.compiled[self.table]['all_names'] self.names = self.schema.compiled[self.table]['norm_names'] self._apply_script = [] if isinstance(key, dict): self.chain = key.pop('ndb_chain', None) create = key.pop('create', False) else: self.chain = None create = False ckey = self.complete_key(key) if create and ckey is not None: raise KeyError('object exists') elif not create and ckey is None: raise KeyError('object does not exists') elif create: for name in key: self[name] = key[name] # FIXME -- merge with complete_key() if 'target' not in self: self.load_value('target', self.view.default_target) elif ctxid is None: self.key = ckey 
self.load_sql() else: self.key = ckey self.load_sql(table=self.table) @classmethod def adjust_spec(cls, spec): return spec @classmethod def nla2name(self, name): return self.msg_class.nla2name(name) @classmethod def name2nla(self, name): return self.msg_class.name2nla(name) def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.commit() def __hash__(self): return id(self) def __getitem__(self, key): if key in self.match_pairs: for src in self.match_src: try: return src[self.match_pairs[key]] except: pass return dict.__getitem__(self, key) def __setitem__(self, key, value): if self.state == 'system' and key in self.knorm: if self._replace_on_key_change: self.log.debug('prepare replace for key %s' % (self.key)) self._replace = type(self)(self.view, self.key) self.state.set('replace') else: raise ValueError('attempt to change a key field (%s)' % key) if key in ('net_ns_fd', 'net_ns_pid'): self.state.set('setns') if value != self.get(key, None): if key != 'target': self.changed.add(key) dict.__setitem__(self, key, value) def fields(self, *argv): Fields = collections.namedtuple('Fields', argv) return Fields(*[self[key] for key in argv]) def key_repr(self): return repr(self.key) @cli.change_pointer def create(self, **spec): spec['create'] = True spec['ndb_chain'] = self return self.view[spec] @cli.show_result def show(self, **kwarg): ''' Return the object in a specified format. The format may be specified with the keyword argument `format` or in the `ndb.config['show_format']`. TODO: document different formats ''' fmt = kwarg.pop('format', kwarg.pop('fmt', self.view.ndb.config.get('show_format', 'native'))) if fmt == 'native': return dict(self) else: out = collections.OrderedDict() for key in sorted(self): out[key] = self[key] return '%s\n' % json.dumps(out, indent=4, separators=(',', ': ')) def set(self, key, value): ''' Set a field specified by `key` to `value`, and return self. The method is useful to write call chains like that:: (ndb .interfaces["eth0"] .set('mtu', 1200) .set('state', 'up') .set('address', '00:11:22:33:44:55') .commit()) ''' self[key] = value return self def wtime(self, itn=1): return max(min(itn * 0.1, 1), self.view.ndb._event_queue.qsize() / 10) def register(self): # # Construct a weakref handler for events. # # If the referent doesn't exist, raise the # exception to remove the handler from the # chain. # def wr_handler(wr, fname, *argv): try: return getattr(wr(), fname)(*argv) except: # check if the weakref became invalid if wr() is None: raise InvalidateHandlerException() raise wr = weakref.ref(self) for event, fname in self.event_map.items(): # # Do not trust the implicit scope and pass the # weakref explicitly via partial # (self .ndb .register_handler(event, partial(wr_handler, wr, fname))) def snapshot(self, ctxid=None): ''' Create and return a snapshot of the object. The method creates corresponding SQL tables for the object itself and for detected dependencies. The snapshot tables will be removed as soon as the snapshot gets collected by the GC. ''' ctxid = ctxid or self.ctxid or id(self) if self._replace is None: key = self.key else: key = self._replace.key snp = type(self)(self.view, key, ctxid=ctxid) self.ndb.schema.save_deps(ctxid, weakref.ref(snp), self.iclass) snp.changed = set(self.changed) return snp def complete_key(self, key): ''' Try to complete the object key based on the provided fields. 
E.g.:: >>> ndb.interfaces['eth0'].complete_key({"ifname": "eth0"}) {'ifname': 'eth0', 'index': 2, 'target': u'localhost', 'tflags': 0} It is an internal method and is not supposed to be used externally. ''' self.log.debug('complete key %s from table %s' % (key, self.etable)) fetch = [] if isinstance(key, Record): key = key._as_dict() for name in self.kspec: if name not in key: fetch.append('f_%s' % name) if fetch: keys = [] values = [] for name, value in key.items(): nla_name = self.iclass.name2nla(name) if nla_name in self.spec: name = nla_name if value is not None and name in self.spec: keys.append('f_%s = %s' % (name, self.schema.plch)) values.append(value) spec = (self .ndb .schema .fetchone('SELECT %s FROM %s WHERE %s' % (' , '.join(fetch), self.etable, ' AND '.join(keys)), values)) if spec is None: self.log.debug('got none') return None for name, value in zip(fetch, spec): key[name[2:]] = value self.log.debug('got %s' % key) return key def rollback(self, snapshot=None): ''' Try to rollback the object state using the snapshot provided as an argument or using `self.last_save`. ''' if self._replace is not None: self.log.debug('rollback replace: %s :: %s' % (self.key, self._replace.key)) new_replace = type(self)(self.view, self.key) new_replace.state.set('remove') self.state.set('replace') self.update(self._replace) self._replace = new_replace self.log.debug('rollback: %s' % str(self.state.events)) snapshot = snapshot or self.last_save snapshot.state.set(self.state.get()) snapshot.apply(rollback=True) for link, snp in snapshot.snapshot_deps: link.rollback(snapshot=snp) return self def commit(self): ''' Try to commit the pending changes. If the commit fails, automatically revert the state. ''' if self.state == 'system' and \ not self.changed and \ not self._apply_script: return self if self.chain: self.chain.commit() self.log.debug('commit: %s' % str(self.state.events)) # Is it a new object? if self.state == 'invalid': # Save values, try to apply save = dict(self) try: return self.apply() except Exception as e_i: # Save the debug info e_i.trace = traceback.format_exc() # ACHTUNG! The routine doesn't clean up the system # # Drop all the values and rollback to the initial state for key in tuple(self.keys()): del self[key] for key in save: dict.__setitem__(self, key, save[key]) raise e_i # Continue with an existing object # The snapshot tables in the DB will be dropped as soon as the GC # collects the object. But in the case of an exception the `snp` # variable will be saved in the traceback, so the tables will be # available to debug. If the traceback will be saved somewhere then # the tables will never be dropped by the GC, so you can do it # manually by `ndb.schema.purge_snapshots()` -- to invalidate all # the snapshots and to drop the associated tables. 
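# For example (an illustration only, not executed here):
#
#     ndb.schema.purge_snapshots()   # invalidate snapshots, drop their tables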
# Apply the changes try: self.apply() except Exception as e_c: # Rollback in the case of any error try: self.rollback() except Exception as e_r: e_c.chain = [e_r] if hasattr(e_r, 'chain'): e_c.chain.extend(e_r.chain) e_r.chain = None raise finally: (self .last_save .state .set(self.state.get())) if self._replace is not None: self._replace = None return self def remove(self): with self.lock: self.state.set('remove') return self def check(self): state_map = (('invalid', 'system'), ('remove', 'invalid'), ('setns', 'invalid'), ('setns', 'system'), ('replace', 'system')) self.load_sql() self.log.debug('check: %s' % str(self.state.events)) if self.state.transition() not in state_map: self.log.debug('check state: False') return False if self.changed: self.log.debug('check changed: %s' % (self.changed)) return False self.log.debug('check: True') return True def make_req(self, prime): req = dict(prime) for key in self.changed: req[key] = self[key] return req def get_count(self): conditions = [] values = [] for name in self.kspec: conditions.append('f_%s = %s' % (name, self.schema.plch)) values.append(self.get(self.iclass.nla2name(name), None)) return (self .ndb .schema .fetchone(''' SELECT count(*) FROM %s WHERE %s ''' % (self.table, ' AND '.join(conditions)), values))[0] def hook_apply(self, method, **spec): pass def apply(self, rollback=False): ''' Create a snapshot and apply pending changes. Do not revert the changes in the case of an exception. ''' # Save the context if not rollback and self.state != 'invalid': self.last_save = self.snapshot() self.log.debug('apply: %s' % str(self.state.events)) self.load_event.clear() self._apply_script_snapshots = [] # Load the current state try: self.schema.commit() except: pass self.load_sql(set_state=False) if self.state == 'system' and self.get_count() == 0: state = self.state.set('invalid') else: state = self.state.get() # Create the request. 
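# idx_req below holds only the index (key) fields known for this object;
# make_req() then lays the changed fields on top, so the resulting
# request is essentially "key + pending changes".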
idx_req = dict([(x, self[self.iclass.nla2name(x)]) for x in self.schema.compiled[self.table]['idx'] if self.iclass.nla2name(x) in self]) req = self.make_req(idx_req) self.log.debug('apply req: %s' % str(req)) self.log.debug('apply idx_req: %s' % str(idx_req)) self.log.debug('apply state: %s' % state) method = None ignore = tuple() # if state in ('invalid', 'replace'): req = dict([x for x in self.items() if x[1] is not None]) for l_key, r_key in self.match_pairs.items(): for src in self.match_src: try: req[l_key] = src[r_key] break except: pass method = 'add' ignore = {errno.EEXIST: 'set'} elif state == 'system': method = 'set' elif state == 'setns': method = 'set' ignore = {errno.ENODEV: None} elif state == 'remove': method = 'del' req = idx_req ignore = {errno.ENODEV: None, # interfaces errno.ESRCH: None, # routes errno.EADDRNOTAVAIL: None} # addresses else: raise Exception('state transition not supported') for itn in range(20): try: self.log.debug('run %s (%s)' % (method, req)) (self .sources[self['target']] .api(self.api, method, **req)) (self .hook_apply(method, **req)) except NetlinkError as e: (self .log .debug('error: %s' % e)) if e.code in ignore: self.log.debug('ignore error %s for %s' % (e.code, self)) if ignore[e.code] is not None: self.log.debug('run fallback %s (%s)' % (ignore[e.code], req)) try: (self .sources[self['target']] .api(self.api, ignore[e.code], **req)) except NetlinkError: pass else: raise e wtime = self.wtime(itn) mqsize = self.view.ndb._event_queue.qsize() nq = self.schema.stats.get(self['target']) if nq is not None: nqsize = nq.qsize else: nqsize = 0 self.log.debug('stats: apply %s {' 'objid %s, wtime %s, ' 'mqsize %s, nqsize %s' '}' % (method, id(self), wtime, mqsize, nqsize)) if self.check(): self.log.debug('checked') break self.log.debug('check failed') self.load_event.wait(wtime) self.load_event.clear() else: self.log.debug('stats: %s apply %s fail' % (id(self), method)) raise Exception('lost sync in apply()') self.log.debug('stats: %s pass' % (id(self))) # if state == 'replace': self._replace.remove() self._replace.apply() # if rollback: # # Iterate all the snapshot tables and collect the diff for (table, cls) in self.view.classes.items(): if issubclass(type(self), cls) or \ issubclass(cls, type(self)): continue # comprare the tables diff = (self .ndb .schema .fetch(''' SELECT * FROM %s_%s EXCEPT SELECT * FROM %s ''' % (table, self.ctxid, table))) for record in diff: record = dict(zip((self .schema .compiled[table]['all_names']), record)) key = dict([x for x in record.items() if x[0] in self.schema.compiled[table]['idx']]) obj = self.view.get(key, table) obj.load_sql(ctxid=self.ctxid) obj.state.set('invalid') try: obj.apply() except Exception as e: self.errors.append((time.time(), obj, e)) else: for op, argv, kwarg in self._apply_script: ret = op(*argv, **kwarg) if isinstance(ret, Exception): raise ret elif ret is not None: self._apply_script_snapshots.append(ret) self._apply_script = [] return self def update(self, data): for key, value in data.items(): self.load_value(key, value) def load_value(self, key, value): ''' Load a value and clean up the `self.changed` set if the loaded value matches the expectation. ''' if key not in self.changed: dict.__setitem__(self, key, value) elif self.get(key) == value: self.changed.remove(key) elif key in self.fields_cmp and self.fields_cmp[key](self, value): self.changed.remove(key) def load_sql(self, table=None, ctxid=None, set_state=True): ''' Load the data from the database. 
''' if not self.key: return if table is None: if ctxid is None: table = self.etable else: table = '%s_%s' % (self.table, ctxid) keys = [] values = [] for name, value in self.key.items(): keys.append('f_%s = %s' % (name, self.schema.plch)) values.append(value) spec = (self .ndb .schema .fetchone('SELECT * FROM %s WHERE %s' % (table, ' AND '.join(keys)), values)) self.log.debug('load_sql: %s' % str(spec)) if set_state: with self.lock: if spec is None: if self.state != 'invalid': # No such object (anymore) self.state.set('invalid') self.changed = set() elif self.state not in ('remove', 'setns'): self.update(dict(zip(self.names, spec))) self.state.set('system') return spec def load_rtnlmsg(self, target, event): ''' Check if the RTNL event matches the object and load the data from the database if it does. ''' # TODO: partial match (object rename / restore) # ... # full match for name in self.knorm: value = self.get(name) if name == 'target': if value != target: return elif name == 'tflags': continue elif value != (event.get_attr(name) or event.get(name)): return self.log.debug('load_rtnl: %s' % str(event.get('header'))) if event['header'].get('type', 0) % 2: self.state.set('invalid') self.changed = set() else: self.load_sql() self.load_event.set() pyroute2-0.5.9/pyroute2/ndb/objects/address.py0000644000175000017500000000352513610051400021211 0ustar peetpeet00000000000000from pyroute2.ndb.objects import RTNL_Object from pyroute2.common import basestring from pyroute2.netlink.rtnl.ifaddrmsg import ifaddrmsg class Address(RTNL_Object): table = 'addresses' msg_class = ifaddrmsg api = 'addr' summary = ''' SELECT a.f_target, a.f_tflags, i.f_IFLA_IFNAME, a.f_IFA_ADDRESS, a.f_prefixlen FROM addresses AS a INNER JOIN interfaces AS i ON a.f_index = i.f_index AND a.f_target = i.f_target ''' table_alias = 'a' summary_header = ('target', 'tflags', 'ifname', 'address', 'prefixlen') reverse_update = {'table': 'addresses', 'name': 'addresses_f_tflags', 'field': 'f_tflags', 'sql': ''' UPDATE interfaces SET f_tflags = NEW.f_tflags WHERE f_index = NEW.f_index AND f_target = NEW.f_target; '''} def __init__(self, *argv, **kwarg): kwarg['iclass'] = ifaddrmsg self.event_map = {ifaddrmsg: "load_rtnlmsg"} super(Address, self).__init__(*argv, **kwarg) @classmethod def adjust_spec(cls, spec): if isinstance(spec, basestring): ret = {'target': 'localhost'} ret['address'], prefixlen = spec.split('/') ret['prefixlen'] = int(prefixlen) return ret return spec def key_repr(self): return '%s/%s %s/%s' % (self.get('target', ''), self.get('label', self.get('index', '')), self.get('local', self.get('address', '')), self.get('prefixlen', '')) pyroute2-0.5.9/pyroute2/ndb/objects/interface.py0000644000175000017500000003135213616555550021550 0ustar peetpeet00000000000000''' List interfaces =============== List interface keys:: with NDB(log='on') as ndb: for key in ndb.interfaces: print(key) NDB views support some dict methods: `items()`, `values()`, `keys()`:: with NDB(log='on') as ndb: for key, nic in ndb.interfaces.items(): nic.set('state', 'up') nic.commit() Get interface objects ===================== The keys may be used as selectors to get interface objects:: with NDB(log='on') as ndb: for key in ndb.interfaces: print(ndb.interfaces[key]) Also possible selector formats are `dict()` and simple string. 
The latter means the interface name:: eth0 = ndb.interfaces['eth0'] Dict selectors are necessary to get interfaces by other properties:: wrk1_eth0 = ndb.interfaces[{'target': 'worker1.sample.com', 'ifname': 'eth0'}] wrk2_eth0 = ndb.interfaces[{'target': 'worker2.sample.com', 'address': '52:54:00:22:a1:b7'}] Change nic properties ===================== Changing MTU and MAC address:: with NDB(log='on') as ndb: with ndb.interfaces['eth0'] as eth0: eth0['mtu'] = 1248 eth0['address'] = '00:11:22:33:44:55' # --> <-- eth0.commit() is called by the context manager # --> <-- ndb.close() is called by the context manager One can change a property either using the assignment statement, or using the `.set()` routine:: # same code with NDB(log='on') as ndb: with ndb.interfaces['eth0'] as eth0: eth0.set('mtu', 1248) eth0.set('address', '00:11:22:33:44:55') The `.set()` routine returns the object itself, that makes possible chain calls:: # same as above with NDB(log='on') as ndb: with ndb.interfaces['eth0'] as eth0: eth0.set('mtu', 1248).set('address', '00:11:22:33:44:55') # or with NDB(log='on') as ndb: with ndb.interfaces['eth0'] as eth0: (eth0 .set('mtu', 1248) .set('address', '00:11:22:33:44:55')) # or without the context manager, call commit() explicitly with NDB(log='on') as ndb: (ndb .interfaces['eth0'] .set('mtu', 1248) .set('address', '00:11:22:33:44:55') .commit()) Create virtual interfaces ========================= Create a bridge and add a port, `eth0`:: (ndb .interfaces .create(ifname='br0', kind='bridge') .commit()) (ndb .interfaces['eth0'] .set('master', ndb.interfaces['br0']['index']) .commit()) ''' import weakref import traceback from pyroute2.config import AF_BRIDGE from pyroute2.ndb.objects import RTNL_Object from pyroute2.common import basestring from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg from pyroute2.netlink.exceptions import NetlinkError def _cmp_master(self, value): if self['master'] == value: return True elif self['master'] == 0 and value is None: dict.__setitem__(self, 'master', None) return True return False class Interface(RTNL_Object): table = 'interfaces' msg_class = ifinfmsg api = 'link' key_extra_fields = ['IFLA_IFNAME'] summary = ''' SELECT a.f_target, a.f_tflags, a.f_index, a.f_IFLA_IFNAME, a.f_IFLA_ADDRESS, a.f_flags, a.f_IFLA_INFO_KIND FROM interfaces AS a ''' table_alias = 'a' summary_header = ('target', 'tflags', 'index', 'ifname', 'lladdr', 'flags', 'kind') fields_cmp = {'master': _cmp_master} def __init__(self, *argv, **kwarg): kwarg['iclass'] = ifinfmsg self.event_map = {ifinfmsg: "load_rtnlmsg"} dict.__setitem__(self, 'flags', 0) dict.__setitem__(self, 'state', 'unknown') if isinstance(argv[1], dict) and argv[1].get('create'): if 'ifname' not in argv[1]: raise Exception('specify at least ifname') super(Interface, self).__init__(*argv, **kwarg) @property def ipaddr(self): return (self .view .ndb ._get_view('addresses', chain=self, match_src=[weakref.proxy(self), {'index': self.get('index', 0), 'target': self['target']}], match_pairs={'index': 'index', 'target': 'target'})) @property def ports(self): return (self .view .ndb ._get_view('interfaces', chain=self, match_src=[weakref.proxy(self), {'index': self.get('index', 0), 'target': self['target']}], match_pairs={'master': 'index', 'target': 'target'})) @property def routes(self): return (self .view .ndb ._get_view('routes', chain=self, match_src=[weakref.proxy(self), {'index': self.get('index', 0), 'target': self['target']}], match_pairs={'oif': 'index', 'target': 'target'})) @property def neighbours(self): 
return (self .view .ndb ._get_view('neighbours', chain=self, match_src=[weakref.proxy(self), {'index': self.get('index', 0), 'target': self['target']}], match_pairs={'ifindex': 'index', 'target': 'target'})) def add_ip(self, spec): def do_add_ip(self, spec): try: self.ipaddr.create(spec).apply() except Exception as e_s: e_s.trace = traceback.format_stack() return e_s self._apply_script.append((do_add_ip, (self, spec), {})) return self def del_ip(self, spec): def do_del_ip(self, spec): try: ret = self.ipaddr[spec].remove().apply() except Exception as e_s: e_s.trace = traceback.format_stack() return e_s return ret.last_save self._apply_script.append((do_del_ip, (self, spec), {})) return self def add_port(self, spec): def do_add_port(self, spec): try: port = self.view[spec] assert port['target'] == self['target'] port['master'] = self['index'] port.apply() except Exception as e_s: e_s.trace = traceback.format_stack() return e_s return port.last_save self._apply_script.append((do_add_port, (self, spec), {})) return self def del_port(self, spec): def do_del_port(self, spec): try: port = self.view[spec] assert port['master'] == self['index'] assert port['target'] == self['target'] port['master'] = 0 port.apply() except Exception as e_s: e_s.trace = traceback.format_stack() return e_s return port.last_save self._apply_script.append((do_del_port, (self, spec), {})) return self def __setitem__(self, key, value): if key == 'peer': dict.__setitem__(self, key, value) elif key == 'target' and self.state == 'invalid': dict.__setitem__(self, key, value) elif key == 'net_ns_fd' and self.state == 'invalid': dict.__setitem__(self, 'target', value) elif key == 'target' and \ self.get('target') and \ self['target'] != value: super(Interface, self).__setitem__('net_ns_fd', value) else: super(Interface, self).__setitem__(key, value) def complete_key(self, key): if isinstance(key, dict): ret_key = key else: ret_key = {'target': 'localhost'} if isinstance(key, basestring): ret_key['ifname'] = key elif isinstance(key, int): ret_key['index'] = key return super(Interface, self).complete_key(ret_key) def snapshot(self, ctxid=None): # 1. make own snapshot snp = super(Interface, self).snapshot(ctxid=ctxid) # 2. collect dependencies and store in self.snapshot_deps for spec in (self .ndb .interfaces .getmany({'IFLA_MASTER': self['index']})): # bridge ports link = type(self)(self.view, spec) snp.snapshot_deps.append((link, link.snapshot())) for spec in (self .ndb .interfaces .getmany({'IFLA_LINK': self['index']})): # vlans & veth if self.get('link') != spec['index']: link = type(self)(self.view, spec) snp.snapshot_deps.append((link, link.snapshot())) # return the root node return snp def make_req(self, prime): req = super(Interface, self).make_req(prime) if self.state == 'system': # --> link('set', ...) 
req['master'] = self['master'] return req def apply(self, rollback=False, fallback=False): # translate string link references into numbers for key in ('link', ): if key in self and isinstance(self[key], basestring): self[key] = self.ndb.interfaces[self[key]]['index'] setns = self.state.get() is 'setns' try: super(Interface, self).apply(rollback) except NetlinkError as e: if e.code == 95 and \ 'master' in self and \ self.state == 'invalid': key = dict(self) key['create'] = True del key['master'] fb = type(self)(self.view, key) fb.register() fb.apply(rollback) fb.set('master', self['master']) fb.apply(rollback) del fb self.apply() else: raise if setns: self.load_value('target', self['net_ns_fd']) dict.__setitem__(self, 'net_ns_fd', None) spec = self.load_sql() if spec: self.state.set('system') return self def hook_apply(self, method, **spec): if method == 'set': if self['kind'] == 'bridge': keys = filter(lambda x: x.startswith('br_'), self.changed) if keys: req = {'index': self['index'], 'kind': 'bridge', 'family': AF_BRIDGE} for key in keys: req[key] = self[key] (self .sources[self['target']] .api(self.api, method, **req)) update = (self .sources[self['target']] .api(self.api, 'get', **{'index': self['index']})) self.ndb._event_queue.put((self['target'], update)) def load_sql(self, *argv, **kwarg): spec = super(Interface, self).load_sql(*argv, **kwarg) if spec: tname = 'ifinfo_%s' % self['kind'] if tname in self.schema.compiled: names = self.schema.compiled[tname]['norm_names'] spec = (self .ndb .schema .fetchone('SELECT * from %s WHERE f_index = %s' % (tname, self.schema.plch), (self['index'], ))) if spec: self.update(dict(zip(names, spec))) return spec def load_rtnlmsg(self, *argv, **kwarg): super(Interface, self).load_rtnlmsg(*argv, **kwarg) def key_repr(self): return '%s/%s' % (self.get('target', ''), self.get('ifname', self.get('index', ''))) pyroute2-0.5.9/pyroute2/ndb/objects/neighbour.py0000644000175000017500000000222613610051400021543 0ustar peetpeet00000000000000from pyroute2.ndb.objects import RTNL_Object from pyroute2.common import basestring from pyroute2.netlink.rtnl.ndmsg import ndmsg class Neighbour(RTNL_Object): table = 'neighbours' msg_class = ndmsg api = 'neigh' summary = ''' SELECT n.f_target, n.f_tflags, i.f_IFLA_IFNAME, n.f_NDA_LLADDR, n.f_NDA_DST FROM neighbours AS n INNER JOIN interfaces AS i ON n.f_ifindex = i.f_index AND n.f_target = i.f_target ''' table_alias = 'n' summary_header = ('target', 'flags', 'ifname', 'lladdr', 'neighbour') def __init__(self, *argv, **kwarg): kwarg['iclass'] = ndmsg self.event_map = {ndmsg: "load_rtnlmsg"} super(Neighbour, self).__init__(*argv, **kwarg) def complete_key(self, key): if isinstance(key, dict): ret_key = key else: ret_key = {'target': 'localhost'} if isinstance(key, basestring): ret_key['NDA_DST'] = key return super(Neighbour, self).complete_key(ret_key) pyroute2-0.5.9/pyroute2/ndb/objects/netns.py0000644000175000017500000000206713610051400020713 0ustar peetpeet00000000000000from pyroute2 import netns from pyroute2.common import basestring from pyroute2.ndb.objects import RTNL_Object from pyroute2.netlink.rtnl.nsinfmsg import nsinfmsg class NetNS(RTNL_Object): table = 'netns' msg_class = nsinfmsg table_alias = 'n' api = 'netns' def __init__(self, *argv, **kwarg): kwarg['iclass'] = nsinfmsg self.event_map = {nsinfmsg: "load_rtnlmsg"} super(NetNS, self).__init__(*argv, **kwarg) @classmethod def adjust_spec(cls, spec): if isinstance(spec, dict): ret_spec = spec else: ret_spec = {'target': 'localhost/netns'} if isinstance(spec, 
basestring): ret_spec['path'] = spec ret_spec['path'] = netns._get_netnspath(ret_spec['path']) return ret_spec def __setitem__(self, key, value): if self.state == 'system': raise ValueError('attempt to change a readonly object') if key == 'path': value = netns._get_netnspath(value) return super(NetNS, self).__setitem__(key, value) pyroute2-0.5.9/pyroute2/ndb/objects/route.py0000644000175000017500000001330113616576734020747 0ustar peetpeet00000000000000from pyroute2.ndb.objects import RTNL_Object from pyroute2.ndb.report import Record from pyroute2.common import basestring from pyroute2.netlink.rtnl.rtmsg import rtmsg from pyroute2.netlink.rtnl.rtmsg import nh _dump_rt = ['rt.f_%s' % x[0] for x in rtmsg.sql_schema()][:-2] _dump_nh = ['nh.f_%s' % x[0] for x in nh.sql_schema()][:-2] class Route(RTNL_Object): table = 'routes' msg_class = rtmsg api = 'route' summary = ''' SELECT rt.f_target, rt.f_tflags, rt.f_RTA_TABLE, rt.f_RTA_DST, rt.f_dst_len, rt.f_RTA_GATEWAY, nh.f_RTA_GATEWAY FROM routes AS rt LEFT JOIN nh ON rt.f_route_id = nh.f_route_id AND rt.f_target = nh.f_target ''' table_alias = 'rt' summary_header = ('target', 'tflags', 'table', 'dst', 'dst_len', 'gateway', 'nexthop') dump = ''' SELECT rt.f_target,rt.f_tflags,%s FROM routes AS rt LEFT JOIN nh AS nh ON rt.f_route_id = nh.f_route_id AND rt.f_target = nh.f_target ''' % ','.join(['%s' % x for x in _dump_rt + _dump_nh]) dump_header = (['target', 'tflags'] + [rtmsg.nla2name(x[5:]) for x in _dump_rt] + ['nh_%s' % nh.nla2name(x[5:]) for x in _dump_nh]) reverse_update = {'table': 'routes', 'name': 'routes_f_tflags', 'field': 'f_tflags', 'sql': ''' UPDATE interfaces SET f_tflags = NEW.f_tflags WHERE (f_index = NEW.f_RTA_OIF OR f_index = NEW.f_RTA_IIF) AND f_target = NEW.f_target; '''} _replace_on_key_change = True def __init__(self, *argv, **kwarg): kwarg['iclass'] = rtmsg self.event_map = {rtmsg: "load_rtnlmsg"} dict.__setitem__(self, 'multipath', []) super(Route, self).__init__(*argv, **kwarg) def complete_key(self, key): ret_key = {} if isinstance(key, basestring): ret_key['dst'] = key elif isinstance(key, (Record, tuple, list)): return super(Route, self).complete_key(key) elif isinstance(key, dict): ret_key.update(key) else: raise TypeError('unsupported key type') if 'target' not in ret_key: ret_key['target'] = 'localhost' if 'table' not in ret_key: ret_key['table'] = 254 if isinstance(ret_key.get('dst_len'), basestring): ret_key['dst_len'] = int(ret_key['dst_len']) if isinstance(ret_key.get('dst'), basestring): if ret_key.get('dst') == 'default': ret_key['dst'] = '' ret_key['dst_len'] = 0 elif '/' in ret_key['dst']: ret_key['dst'], ret_key['dst_len'] = ret_key['dst'].split('/') return super(Route, self).complete_key(ret_key) def make_req(self, prime): req = dict(prime) for key in self.changed: req[key] = self[key] if self['multipath']: req['multipath'] = self['multipath'] return req def __setitem__(self, key, value): if key in ('dst', 'src') and '/' in value: net, net_len = value.split('/') super(Route, self).__setitem__(key, net) super(Route, self).__setitem__('%s_len' % key, int(net_len)) else: super(Route, self).__setitem__(key, value) if key == 'multipath': self.changed.remove(key) def apply(self, rollback=False): if (self.get('table') == 255) and \ (self.get('family') == 10) and \ (self.get('proto') == 2): # skip automatic ipv6 routes with proto kernel return self else: return super(Route, self).apply(rollback) def load_sql(self, *argv, **kwarg): super(Route, self).load_sql(*argv, **kwarg) if not self.load_event.is_set(): return 
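# Reconcile multipath nexthops: fetch the nh table rows that belong to
# this route_id and sync them with the in-memory 'multipath' list,
# dropping stale entries and appending newly discovered nexthops.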
if 'nh_id' not in self and self.get('route_id') is not None: nhs = (self .schema .fetch('SELECT * FROM nh WHERE f_route_id = %s' % (self.schema.plch, ), (self['route_id'], ))) flush = False idx = 0 for nexthop in tuple(self['multipath']): if not isinstance(nexthop, NextHop): flush = True if not flush: try: spec = next(nhs) except StopIteration: flush = True for key, value in zip(nexthop.names, spec): if key in nexthop and value is None: continue else: nexthop.load_value(key, value) if flush: self['multipath'].pop(idx) continue idx += 1 for nexthop in nhs: key = {'route_id': self['route_id'], 'nh_id': nexthop[-1]} self['multipath'].append(NextHop(self.view, key)) class NextHop(Route): msg_class = nh table = 'nh' reverse_update = {'table': 'nh', 'name': 'nh_f_tflags', 'field': 'f_tflags', 'sql': ''' UPDATE routes SET f_tflags = NEW.f_tflags WHERE f_route_id = NEW.f_route_id; '''} pyroute2-0.5.9/pyroute2/ndb/objects/rule.py0000644000175000017500000000241513610051400020530 0ustar peetpeet00000000000000import collections from pyroute2.ndb.objects import RTNL_Object from pyroute2.netlink.rtnl.fibmsg import fibmsg class Rule(RTNL_Object): table = 'rules' msg_class = fibmsg api = 'rule' table_alias = 'n' _replace_on_key_change = True summary = ''' SELECT f_target, f_tflags, f_family, f_FRA_PRIORITY, f_action, f_FRA_TABLE FROM rules ''' summary_header = ('target', 'tflags', 'family', 'priority', 'action', 'table') def __init__(self, *argv, **kwarg): kwarg['iclass'] = fibmsg self._fields = [x[0] for x in fibmsg.fields] self.event_map = {fibmsg: "load_rtnlmsg"} super(Rule, self).__init__(*argv, **kwarg) def load_sql(self, *argv, **kwarg): spec = super(Rule, self).load_sql(*argv, **kwarg) if spec is None: return nkey = collections.OrderedDict() for name_norm, name_raw, value in zip(self.names, self.spec, spec): if name_raw in self.kspec: nkey[name_raw] = value if name_norm not in self._fields and value in (0, ''): dict.__setitem__(self, name_norm, None) self._key = nkey return spec pyroute2-0.5.9/pyroute2/ndb/query.py0000644000175000017500000001046713610051400017303 0ustar peetpeet00000000000000from pyroute2.ndb.report import Report class Query(object): def __init__(self, schema, fmt='raw'): self._schema = schema self._fmt = fmt def _formatter(self, cursor, fmt=None, header=None, transform=None): fmt = fmt or self._fmt if fmt == 'csv': if header: yield ','.join(header) for record in cursor: if transform: record = transform(record) if isinstance(record, (list, tuple)): yield ','.join([str(x) for x in record]) else: yield record elif fmt == 'raw': if header: yield header for record in cursor: if transform: record = transform(record) yield record else: raise TypeError('format not supported') def nodes(self, fmt=None): ''' Report all the nodes within the cluster. ''' header = ('nodename',) return Report(self._formatter(self._schema.fetch(''' SELECT DISTINCT f_target FROM interfaces '''), fmt, header)) def p2p_edges(self, fmt=None): ''' Report point to point edges within the cluster, like GRE or PPP interfaces. ''' header = ('left_node', 'right_node') return Report(self._formatter(self._schema.fetch(''' SELECT DISTINCT l.f_target, r.f_target FROM p2p AS l INNER JOIN p2p AS r ON l.f_p2p_local = r.f_p2p_remote AND l.f_target != r.f_target '''), fmt, header)) def l2_edges(self, fmt=None): ''' Report l2 links within the cluster, reconstructed from the ARP caches on the nodes. Works as follows: 1. for every node take the ARP cache 2. 
for every record in the cache reconstruct two triplets: * the interface index -> the local interface name * the neighbour lladdr -> the remote node and interface name Issues: does not filter out fake lladdr, so CARP interfaces produce fake l2 edges within the cluster. ''' header = ('left_node', 'left_ifname', 'left_lladdr', 'right_node', 'right_ifname', 'right_lladdr') return Report(self._formatter(self._schema.fetch(''' SELECT DISTINCT j.f_target, j.f_IFLA_IFNAME, j.f_IFLA_ADDRESS, d.f_target, d.f_IFLA_IFNAME, j.f_NDA_LLADDR FROM (SELECT n.f_target, i.f_IFLA_IFNAME, i.f_IFLA_ADDRESS, n.f_NDA_LLADDR FROM neighbours AS n INNER JOIN interfaces AS i ON n.f_target = i.f_target AND i.f_IFLA_ADDRESS != '00:00:00:00:00:00' AND n.f_ifindex = i.f_index) AS j INNER JOIN interfaces AS d ON j.f_NDA_LLADDR = d.f_IFLA_ADDRESS AND j.f_target != d.f_target '''), fmt, header)) def l3_edges(self, fmt=None): ''' Report l3 edges. For every address on every node look if it is used as a gateway on remote nodes. Such cases are reported as l3 edges. Issues: does not report routes (edges) via point to point connections like GRE where local addresses are used as gateways. To be fixed. ''' header = ('source_node', 'gateway_node', 'gateway_address', 'dst', 'dst_len') return Report(self._formatter(self._schema.fetch(''' SELECT DISTINCT r.f_target, a.f_target, a.f_IFA_ADDRESS, r.f_RTA_DST, r.f_dst_len FROM addresses AS a INNER JOIN routes AS r ON r.f_target != a.f_target AND r.f_RTA_GATEWAY = a.f_IFA_ADDRESS AND r.f_RTA_GATEWAY NOT IN (SELECT f_IFA_ADDRESS FROM addresses WHERE f_target = r.f_target) '''), fmt, header)) pyroute2-0.5.9/pyroute2/ndb/report.py0000644000175000017500000000365013610051400017445 0ustar peetpeet00000000000000from pyroute2.common import basestring MAX_REPORT_LINES = 100 class Record(object): def __init__(self, names, values): if len(names) != len(values): raise ValueError('names and values must have the same length') self._names = tuple(names) self._values = tuple(values) def __getitem__(self, key): idx = len(self._names) for i in reversed(self._names): idx -= 1 if i == key: return self._values[idx] def __setitem__(self, *argv, **kwarg): raise TypeError('immutable object') def __getattribute__(self, key): if key.startswith('_'): return object.__getattribute__(self, key) else: return self[key] def __setattr__(self, key, value): if not key.startswith('_'): raise TypeError('immutable object') return object.__setattr__(self, key, value) def __iter__(self): return iter(self._values) def __repr__(self): return repr(self._values) def __len__(self): return len(self._values) def _as_dict(self): ret = {} for key, value in zip(self._names, self._values): ret[key] = value return ret class Report(object): def __init__(self, generator, ellipsis=True): self.generator = generator self.ellipsis = ellipsis self.cached = [] def __iter__(self): return self.generator def __repr__(self): counter = 0 ret = [] for record in self.generator: if isinstance(record, basestring): ret.append(record) else: ret.append(repr(record)) ret.append('\n') counter += 1 if self.ellipsis and counter > MAX_REPORT_LINES: ret.append('(...)') break if ret: ret.pop() return ''.join(ret) pyroute2-0.5.9/pyroute2/ndb/schema.py0000644000175000017500000014207013616747522017421 0ustar peetpeet00000000000000''' Backends -------- NDB stores all the records in an SQL database. 
By default it uses the SQLite3 module, which is a part of the Python stdlib, so no extra packages are required:: # SQLite3 -- simple in-memory DB ndb = NDB() # SQLite3 -- file DB ndb = NDB(db_provider='sqlite3', db_spec='test.db') It is also possible to use a PostgreSQL database via psycopg2 module:: # PostgreSQL -- local DB ndb = NDB(db_provider='psycopg2', db_spec={'dbname': 'test'}) # PostgreSQL -- remote DB ndb = NDB(db_provider='psycopg2', db_spec={'dbname': 'test', 'host': 'db1.example.com'}) SQL schema ---------- A file based SQLite3 DB or PostgreSQL may be useful for inspection of the collected data. Here is an example schema:: rtnl=# \\dt List of relations Schema | Name | Type | Owner --------+-----------------+-------+------- public | addresses | table | root public | ifinfo_bond | table | root public | ifinfo_bridge | table | root public | ifinfo_gre | table | root public | ifinfo_vlan | table | root public | ifinfo_vrf | table | root public | ifinfo_vti | table | root public | ifinfo_vti6 | table | root public | ifinfo_vxlan | table | root public | interfaces | table | root public | neighbours | table | root public | nh | table | root public | routes | table | root public | sources | table | root public | sources_options | table | root (15 rows) rtnl=# select f_index, f_ifla_ifname from interfaces; f_index | f_ifla_ifname ---------+--------------- 1 | lo 2 | eth0 28 | ip_vti0 31 | ip6tnl0 32 | ip6_vti0 36445 | br0 11434 | dummy0 3 | eth1 (8 rows) rtnl=# select f_index, f_ifla_br_stp_state from ifinfo_bridge; f_index | f_ifla_br_stp_state ---------+--------------------- 36445 | 0 (1 row) There are also some useful views, that join `ifinfo` tables with `interfaces`:: rtnl=# \\dv List of relations Schema | Name | Type | Owner --------+--------+------+------- public | bond | view | root public | bridge | view | root public | gre | view | root public | vlan | view | root public | vrf | view | root public | vti | view | root public | vti6 | view | root public | vxlan | view | root (8 rows) ''' import sys import time import uuid import struct import random import sqlite3 import threading import traceback from functools import partial from collections import OrderedDict from socket import (AF_INET, inet_pton) from pyroute2 import config from pyroute2.config import AF_BRIDGE from pyroute2.common import uuid32 from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg from pyroute2.netlink.rtnl.ifaddrmsg import ifaddrmsg from pyroute2.netlink.rtnl.ndmsg import ndmsg from pyroute2.netlink.rtnl.rtmsg import rtmsg from pyroute2.netlink.rtnl.rtmsg import nh from pyroute2.netlink.rtnl.fibmsg import fibmsg from pyroute2.netlink.rtnl.p2pmsg import p2pmsg from pyroute2.netlink.rtnl.nsinfmsg import nsinfmsg # rtnl objects from pyroute2.ndb.objects.route import Route from pyroute2.ndb.objects.route import NextHop from pyroute2.ndb.objects.address import Address # events from pyroute2.ndb.events import SchemaGenericRequest try: import queue except ImportError: import Queue as queue MAX_ATTEMPTS = 5 ifinfo_names = ('bridge', 'bond', 'vlan', 'vxlan', 'gre', 'vrf', 'vti', 'vti6') supported_ifinfo = {x: ifinfmsg.ifinfo.data_map[x] for x in ifinfo_names} def publish(method): # # this wrapper will be published in the DBM thread # def _do_local(self, target, request): try: for item in method(self, *request.argv, **request.kwarg): request.response.put(item) request.response.put(StopIteration()) except Exception as e: request.response.put(e) # # this class will be used to map the requests # class 
SchemaRequest(SchemaGenericRequest): pass # # this method will replace the source one # def _do_dispatch(self, *argv, **kwarg): if self.thread == id(threading.current_thread()): # same thread, run method locally for item in method(self, *argv, **kwarg): yield item else: # another thread, run via message bus self._allow_read.wait() response = queue.Queue(maxsize=4096) request = SchemaRequest(response, *argv, **kwarg) self.ndb._event_queue.put((None, (request, ))) while True: item = response.get() if isinstance(item, StopIteration): return elif isinstance(item, Exception): raise item else: yield item # # announce the function so it will be published # _do_dispatch.publish = (SchemaRequest, _do_local) return _do_dispatch def publish_exec(method): # # this wrapper will be published in the DBM thread # def _do_local(self, target, request): try: (request .response .put(method(self, *request.argv, **request.kwarg))) except Exception as e: (request .response .put(e)) # # this class will be used to map the requests # class SchemaRequest(SchemaGenericRequest): pass # # this method will replace the source one # def _do_dispatch(self, *argv, **kwarg): if self.thread == id(threading.current_thread()): # same thread, run method locally return method(self, *argv, **kwarg) else: # another thread, run via message bus response = queue.Queue(maxsize=1) request = SchemaRequest(response, *argv, **kwarg) self.ndb._event_queue.put((None, (request, ))) ret = response.get() if isinstance(ret, Exception): raise ret else: return ret # # announce the function so it will be published # _do_dispatch.publish = (SchemaRequest, _do_local) return _do_dispatch class DBSchema(object): connection = None thread = None event_map = None key_defaults = None snapshots = None # : spec = OrderedDict() # main tables spec['interfaces'] = OrderedDict(ifinfmsg.sql_schema()) spec['addresses'] = OrderedDict(ifaddrmsg.sql_schema()) spec['neighbours'] = OrderedDict(ndmsg.sql_schema()) spec['routes'] = OrderedDict(rtmsg.sql_schema() + [(('route_id', ), 'TEXT UNIQUE'), (('gc_mark', ), 'INTEGER')]) spec['nh'] = OrderedDict(nh.sql_schema() + [(('route_id', ), 'TEXT'), (('nh_id', ), 'INTEGER')]) spec['rules'] = OrderedDict(fibmsg.sql_schema()) spec['netns'] = OrderedDict(nsinfmsg.sql_schema()) # additional tables spec['p2p'] = OrderedDict(p2pmsg.sql_schema()) classes = {'interfaces': ifinfmsg, 'addresses': ifaddrmsg, 'neighbours': ndmsg, 'routes': rtmsg, 'nh': nh, 'rules': fibmsg, 'netns': nsinfmsg, 'p2p': p2pmsg} # # OBS: field names MUST go in the same order as in the spec, # that's for the load_netlink() to work correctly -- it uses # one loop to fetch both index and row values # indices = {'interfaces': ('index', ), 'p2p': ('index', ), 'addresses': ('family', 'prefixlen', 'index', 'IFA_ADDRESS', 'IFA_LOCAL'), 'neighbours': ('ifindex', 'NDA_LLADDR'), 'routes': ('family', 'dst_len', 'tos', 'RTA_DST', 'RTA_PRIORITY', 'RTA_TABLE'), 'nh': ('route_id', 'nh_id'), 'netns': ('NSINFO_PATH', ), 'rules': ('family', 'dst_len', 'src_len', 'tos', 'action', 'flags', 'FRA_DST', 'FRA_SRC', 'FRA_IIFNAME', 'FRA_GOTO', 'FRA_PRIORITY', 'FRA_FWMARK', 'FRA_FLOW', 'FRA_TUN_ID', 'FRA_SUPPRESS_IFGROUP', 'FRA_SUPPRESS_PREFIXLEN', 'FRA_TABLE', 'FRA_FWMASK', 'FRA_OIFNAME', 'FRA_L3MDEV', 'FRA_UID_RANGE', 'FRA_PROTOCOL', 'FRA_IP_PROTO', 'FRA_SPORT_RANGE', 'FRA_DPORT_RANGE')} foreign_keys = {'addresses': [{'fields': ('f_target', 'f_tflags', 'f_index'), 'parent_fields': ('f_target', 'f_tflags', 'f_index'), 'parent': 'interfaces'}], 'neighbours': [{'fields': ('f_target', 
'f_tflags', 'f_ifindex'), 'parent_fields': ('f_target', 'f_tflags', 'f_index'), 'parent': 'interfaces'}], 'routes': [{'fields': ('f_target', 'f_tflags', 'f_RTA_OIF'), 'parent_fields': ('f_target', 'f_tflags', 'f_index'), 'parent': 'interfaces'}, {'fields': ('f_target', 'f_tflags', 'f_RTA_IIF'), 'parent_fields': ('f_target', 'f_tflags', 'f_index'), 'parent': 'interfaces'}], # # man kan not use f_tflags together with f_route_id # 'cause it breaks ON UPDATE CASCADE for interfaces # 'nh': [{'fields': ('f_route_id', ), 'parent_fields': ('f_route_id', ), 'parent': 'routes'}, {'fields': ('f_target', 'f_tflags', 'f_oif'), 'parent_fields': ('f_target', 'f_tflags', 'f_index'), 'parent': 'interfaces'}], # # additional tables # 'p2p': [{'fields': ('f_target', 'f_tflags', 'f_index'), 'parent_fields': ('f_target', 'f_tflags', 'f_index'), 'parent': 'interfaces'}]} # # load supported ifinfo # for (name, data) in supported_ifinfo.items(): name = 'ifinfo_%s' % name # # classes # classes[name] = data # # indices # indices[name] = ('index', ) # # spec # spec[name] = \ OrderedDict(data.sql_schema() + [(('index', ), 'BIGINT')]) # # foreign keys # foreign_keys[name] = \ [{'fields': ('f_target', 'f_tflags', 'f_index'), 'parent_fields': ('f_target', 'f_tflags', 'f_index'), 'parent': 'interfaces'}] def __init__(self, ndb, connection, mode, rtnl_log, tid): self.ndb = ndb # collect all the dispatched methods and publish them for name in dir(self): obj = getattr(self, name, None) if hasattr(obj, 'publish'): event, fbody = obj.publish self.ndb._event_map[event] = [partial(fbody, self)] self.mode = mode self.stats = {} self.thread = tid self.connection = connection self.rtnl_log = rtnl_log self.log = ndb.log.channel('schema') self.snapshots = {} self.key_defaults = {} self._cursor = None self._counter = 0 self._allow_read = threading.Event() self._allow_read.set() self._allow_write = threading.Event() self._allow_write.set() self.share_cursor() if self.mode == 'sqlite3': # SQLite3 self.connection.execute('PRAGMA foreign_keys = ON') self.plch = '?' 
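            #
            # `plch` is the DB API paramstyle placeholder that gets
            # interpolated into the SQL templates below: sqlite3 uses
            # the 'qmark' style ('?'), psycopg2 the 'format' style ('%s')
            #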
elif self.mode == 'psycopg2': # PostgreSQL self.plch = '%s' else: raise NotImplementedError('database provider not supported') self.gctime = self.ctime = time.time() # # compile request lines # self.compiled = {} for table in self.spec.keys(): self.compiled[table] = (self .compile_spec(table, self.spec[table], self.indices[table])) self.create_table(table) if table.startswith('ifinfo_'): idx = ('index', ) self.compiled[table[7:]] = self.merge_spec('interfaces', table, table[7:], idx) self.create_ifinfo_view(table) # # specific SQL code # if self.mode == 'sqlite3': template = ''' CREATE TRIGGER IF NOT EXISTS {name} BEFORE UPDATE OF {field} ON {table} FOR EACH ROW BEGIN {sql} END ''' elif self.mode == 'psycopg2': template = ''' CREATE OR REPLACE FUNCTION {name}() RETURNS trigger AS ${name}$ BEGIN {sql} RETURN NEW; END; ${name}$ LANGUAGE plpgsql; DROP TRIGGER IF EXISTS {name} ON {table}; CREATE TRIGGER {name} BEFORE UPDATE OF {field} ON {table} FOR EACH ROW EXECUTE PROCEDURE {name}(); ''' for cls in (NextHop, Route, Address): self.execute(template.format(**cls.reverse_update)) self.connection.commit() # # service tables # self.execute(''' DROP TABLE IF EXISTS sources_options ''') self.execute(''' DROP TABLE IF EXISTS sources ''') self.execute(''' CREATE TABLE IF NOT EXISTS sources (f_target TEXT PRIMARY KEY, f_kind TEXT NOT NULL) ''') self.execute(''' CREATE TABLE IF NOT EXISTS sources_options (f_target TEXT NOT NULL, f_name TEXT NOT NULL, f_type TEXT NOT NULL, f_value TEXT NOT NULL, FOREIGN KEY (f_target) REFERENCES sources(f_target) ON UPDATE CASCADE ON DELETE CASCADE) ''') def merge_spec(self, table1, table2, table, schema_idx): spec1 = self.compiled[table1] spec2 = self.compiled[table2] names = spec1['names'] + spec2['names'][:-1] all_names = spec1['all_names'] + spec2['all_names'][2:-1] norm_names = spec1['norm_names'] + spec2['norm_names'][2:-1] idx = ('target', 'tflags') + schema_idx f_names = ['f_%s' % x for x in all_names] f_set = ['f_%s = %s' % (x, self.plch) for x in all_names] f_idx = ['f_%s' % x for x in idx] f_idx_match = ['%s.%s = %s' % (table2, x, self.plch) for x in f_idx] plchs = [self.plch] * len(f_names) return {'names': names, 'all_names': all_names, 'norm_names': norm_names, 'idx': idx, 'fnames': ','.join(f_names), 'plchs': ','.join(plchs), 'fset': ','.join(f_set), 'knames': ','.join(f_idx), 'fidx': ' AND '.join(f_idx_match)} def compile_spec(self, table, schema_names, schema_idx): # e.g.: index, flags, IFLA_IFNAME # names = [] # # same + two internal fields # all_names = ['target', 'tflags'] # # norm_names = ['target', 'tflags'] bclass = self.classes.get(table) for name in schema_names: names.append(name[-1]) all_names.append(name[-1]) iclass = bclass if len(name) > 1: for step in name[:-1]: imap = dict(iclass.nla_map) iclass = getattr(iclass, imap[step]) norm_names.append(iclass.nla2name(name[-1])) # # escaped names: f_index, f_flags, f_IFLA_IFNAME # # the reason: words like "index" are keywords in SQL # and we can not use them; neither can we change the # C structure # f_names = ['f_%s' % x for x in all_names] # # set the fields # # e.g.: f_flags = ?, f_IFLA_IFNAME = ? # # there are different placeholders: # ? -- SQLite3 # %s -- PostgreSQL # so use self.plch here # f_set = ['f_%s = %s' % (x, self.plch) for x in all_names] # # the set of the placeholders to use in the INSERT statements # plchs = [self.plch] * len(f_names) # # the index schema; use target and tflags in every index # idx = ('target', 'tflags') + schema_idx # # the same, escaped: f_target, f_tflags etc. 
# f_idx = ['f_%s' % x for x in idx] # # normalized idx names # norm_idx = [iclass.nla2name(x) for x in idx] # # match the index fields, fully qualified # # interfaces.f_index = ?, interfaces.f_IFLA_IFNAME = ? # # the same issue with the placeholders # f_idx_match = ['%s.%s = %s' % (table, x, self.plch) for x in f_idx] return {'names': names, 'all_names': all_names, 'norm_names': norm_names, 'idx': idx, 'norm_idx': norm_idx, 'fnames': ','.join(f_names), 'plchs': ','.join(plchs), 'fset': ','.join(f_set), 'knames': ','.join(f_idx), 'fidx': ' AND '.join(f_idx_match)} @publish_exec def execute(self, *argv, **kwarg): if self._cursor: cursor = self._cursor else: cursor = self.connection.cursor() self._counter = config.db_transaction_limit + 1 try: # # FIXME: add logging # for _ in range(MAX_ATTEMPTS): try: cursor.execute(*argv, **kwarg) break except (sqlite3.InterfaceError, sqlite3.OperationalError) as e: self.log.debug('%s' % e) # # Retry on: # -- InterfaceError: Error binding parameter ... # -- OperationalError: SQL logic error # pass else: raise Exception('DB execute error: %s %s' % (argv, kwarg)) except Exception: self.connection.commit() if self._cursor: self._cursor = self.connection.cursor() raise finally: if self._counter > config.db_transaction_limit: self.connection.commit() # no performance optimisation yet self._counter = 0 return cursor def share_cursor(self): self._cursor = self.connection.cursor() self._counter = 0 def unshare_cursor(self): self._cursor = None self._counter = 0 self.connection.commit() def fetchone(self, *argv, **kwarg): for row in self.fetch(*argv, **kwarg): return row return None @publish_exec def wait_read(self, timeout=None): return self._allow_read.wait(timeout) @publish_exec def wait_write(self, timeout=None): return self._allow_write.wait(timeout) def allow_read(self, flag=True): if not flag: # block immediately... self._allow_read.clear() # ...then forward the request through the message bus # in the case of different threads, or simply run stage2 self._r_allow_read(flag) @publish_exec def _r_allow_read(self, flag): if flag: self._allow_read.set() else: self._allow_read.clear() def allow_write(self, flag=True): if not flag: self._allow_write.clear() self._r_allow_write(flag) @publish_exec def _r_allow_write(self, flag): if flag: self._allow_write.set() else: self._allow_write.clear() @publish def fetch(self, *argv, **kwarg): cursor = self.execute(*argv, **kwarg) while True: row_set = cursor.fetchmany() if not row_set: return for row in row_set: yield row @publish_exec def export(self, fname='stdout'): if fname in ('stdout', 'stderr'): f = getattr(sys, fname) else: f = open(fname, 'w') try: for table in self.spec.keys(): f.write('\ntable %s\n' % table) for record in (self .execute('SELECT * FROM %s' % table)): f.write(' '.join([str(x) for x in record])) f.write('\n') if self.rtnl_log: f.write('\ntable %s_log\n' % table) for record in (self .execute('SELECT * FROM %s_log' % table)): f.write(' '.join([str(x) for x in record])) f.write('\n') finally: if fname not in ('stdout', 'stderr'): f.close() @publish_exec def close(self): self.purge_snapshots() self.connection.commit() self.connection.close() @publish_exec def commit(self): self.connection.commit() def create_ifinfo_view(self, table, ctxid=None): iftable = 'interfaces' req = (('main.f_target', 'main.f_tflags') + tuple(['main.f_%s' % x[-1] for x in self.spec['interfaces'].keys()]) + tuple(['data.f_%s' % x[-1] for x in self.spec[table].keys()])[:-2]) # -> ... 
main.f_index, main.f_IFLA_IFNAME, ..., data.f_IFLA_BR_GC_TIMER if ctxid is not None: iftable = '%s_%s' % (iftable, ctxid) table = '%s_%s' % (table, ctxid) self.execute(''' DROP VIEW IF EXISTS %s ''' % table[7:]) self.execute(''' CREATE VIEW %s AS SELECT %s FROM %s AS main INNER JOIN %s AS data ON main.f_index = data.f_index AND main.f_target = data.f_target ''' % (table[7:], ','.join(req), iftable, table)) def create_table(self, table): req = ['f_target TEXT NOT NULL', 'f_tflags BIGINT NOT NULL DEFAULT 0'] fields = [] self.key_defaults[table] = {} for field in self.spec[table].items(): # # Why f_? # 'Cause there are attributes like 'index' and such # names may not be used in SQL statements # field = (field[0][-1], field[1]) fields.append('f_%s %s' % field) req.append('f_%s %s' % field) if field[1].strip().startswith('TEXT'): self.key_defaults[table][field[0]] = '' else: self.key_defaults[table][field[0]] = 0 if table in self.foreign_keys: for key in self.foreign_keys[table]: spec = ('(%s)' % ','.join(key['fields']), '%s(%s)' % (key['parent'], ','.join(key['parent_fields']))) req.append('FOREIGN KEY %s REFERENCES %s ' 'ON UPDATE CASCADE ' 'ON DELETE CASCADE ' % spec) # # make a unique index for compound keys on # the parent table # # https://sqlite.org/foreignkeys.html # if len(key['fields']) > 1: idxname = 'uidx_%s_%s' % (key['parent'], '_'.join(key['parent_fields'])) self.execute('CREATE UNIQUE INDEX ' 'IF NOT EXISTS %s ON %s' % (idxname, spec[1])) req = ','.join(req) req = ('CREATE TABLE IF NOT EXISTS ' '%s (%s)' % (table, req)) # self.execute('DROP TABLE IF EXISTS %s %s' # % (table, 'CASCADE' if self.mode == 'psycopg2' else '')) self.execute(req) index = ','.join(['f_target', 'f_tflags'] + ['f_%s' % x for x in self.indices[table]]) req = ('CREATE UNIQUE INDEX IF NOT EXISTS ' '%s_idx ON %s (%s)' % (table, table, index)) self.execute(req) # # create table for the transaction buffer: there go the system # updates while the transaction is not committed. 
# # w/o keys (yet) # # req = ['f_target TEXT NOT NULL', # 'f_tflags INTEGER NOT NULL DEFAULT 0'] # req = ','.join(req) # self.execute('CREATE TABLE IF NOT EXISTS ' # '%s_buffer (%s)' % (table, req)) # # create the log table, if required # if self.rtnl_log: req = ['f_tstamp BIGINT NOT NULL', 'f_target TEXT NOT NULL', 'f_event INTEGER NOT NULL'] + fields req = ','.join(req) self.execute('CREATE TABLE IF NOT EXISTS ' '%s_log (%s)' % (table, req)) def mark(self, target, mark): for table in self.spec: self.execute(''' UPDATE %s SET f_tflags = %s WHERE f_target = %s ''' % (table, self.plch, self.plch), (mark, target)) @publish_exec def flush(self, target): for table in self.spec: self.execute(''' DELETE FROM %s WHERE f_target = %s ''' % (table, self.plch), (target, )) @publish_exec def save_deps(self, ctxid, weak_ref, iclass): uuid = uuid32() obj = weak_ref() obj_k = obj.key idx = self.indices[obj.table] conditions = [] values = [] for key in idx: conditions.append('f_%s = %s' % (key, self.plch)) if key in obj_k: values.append(obj_k[key]) else: values.append(obj.get(iclass.nla2name(key))) # # save the old f_tflags value # tflags = (self .execute(''' SELECT f_tflags FROM %s WHERE %s ''' % (obj.table, ' AND '.join(conditions)), values) .fetchone()[0]) # # mark tflags for obj # self.execute(''' UPDATE %s SET f_tflags = %s WHERE %s ''' % (obj.utable, self.plch, ' AND '.join(conditions)), [uuid] + values) # # t_flags is used in foreign keys ON UPDATE CASCADE, so all # related records will be marked, now just copy the marked data # for table in self.spec: self.log.debug('create snapshot %s_%s' % (table, ctxid)) # # create the snapshot table # self.execute(''' CREATE TABLE IF NOT EXISTS %s_%s AS SELECT * FROM %s WHERE f_tflags IS NULL ''' % (table, ctxid, table)) # # copy the data -- is it possible to do it in one step? # self.execute(''' INSERT INTO %s_%s SELECT * FROM %s WHERE f_tflags = %s ''' % (table, ctxid, table, self.plch), [uuid]) if table.startswith('ifinfo_'): self.create_ifinfo_view(table, ctxid) # # unmark all the data # self.execute(''' UPDATE %s SET f_tflags = %s WHERE %s ''' % (obj.utable, self.plch, ' AND '.join(conditions)), [tflags] + values) for table in self.spec: self.execute(''' UPDATE %s_%s SET f_tflags = %s ''' % (table, ctxid, self.plch), [tflags]) self.snapshots['%s_%s' % (table, ctxid)] = weak_ref def purge_snapshots(self): for table in tuple(self.snapshots): for _ in range(MAX_ATTEMPTS): try: if table.startswith('ifinfo_'): try: self.execute('DROP VIEW %s' % table[7:]) except Exception: # GC collision? 
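                        # the view may have been dropped already by the
                        # periodic snapshot GC in load_netlink(); log the
                        # failure and continue dropping the table itself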
self.log.warning('purge_snapshots: %s' % traceback.format_exc()) if self.mode == 'sqlite3': self.execute('DROP TABLE %s' % table) elif self.mode == 'psycopg2': self.execute('DROP TABLE %s CASCADE' % table) del self.snapshots[table] break except sqlite3.OperationalError: # # Retry on: # -- OperationalError: database table is locked # time.sleep(random.random()) else: raise Exception('DB snapshot error') @publish def get(self, table, spec): # # Retrieve info from the DB # # ndb.interfaces.get({'ifname': 'eth0'}) # conditions = [] values = [] cls = self.classes[table] cspec = self.compiled[table] for key, value in spec.items(): if key not in cspec['all_names']: key = cls.name2nla(key) if key not in cspec['all_names']: raise KeyError('field name not found') conditions.append('f_%s = %s' % (key, self.plch)) values.append(value) req = 'SELECT * FROM %s WHERE %s' % (table, ' AND '.join(conditions)) for record in self.fetch(req, values): yield dict(zip(self.compiled[table]['all_names'], record)) def rtmsg_gc_mark(self, target, event, gc_mark=None): # if gc_mark is None: gc_clause = ' AND f_gc_mark IS NOT NULL' else: gc_clause = '' # # select all routes for that OIF where f_gc_mark is not null # key_fields = ','.join(['f_%s' % x for x in self.indices['routes']]) key_query = ' AND '.join(['f_%s = %s' % (x, self.plch) for x in self.indices['routes']]) routes = (self .execute(''' SELECT %s,f_RTA_GATEWAY FROM routes WHERE f_target = %s AND f_RTA_OIF = %s AND f_RTA_GATEWAY IS NOT NULL %s ''' % (key_fields, self.plch, self.plch, gc_clause), (target, event.get_attr('RTA_OIF'))) .fetchmany()) # # get the route's RTA_DST and calculate the network # addr = event.get_attr('RTA_DST') net = struct.unpack('>I', inet_pton(AF_INET, addr))[0] &\ (0xffffffff << (32 - event['dst_len'])) # # now iterate all the routes from the query above and # mark those with matching RTA_GATEWAY # for route in routes: # get route GW gw = route[-1] gwnet = struct.unpack('>I', inet_pton(AF_INET, gw))[0] & net if gwnet == net: (self .execute('UPDATE routes SET f_gc_mark = %s ' 'WHERE f_target = %s AND %s' % (self.plch, self.plch, key_query), (gc_mark, target) + route[:-1])) def load_nsinfmsg(self, target, event): # # check if there is corresponding source # netns_path = event.get_attr('NSINFO_PATH') if netns_path is None: self.log.debug('ignore %s %s' % (target, event)) return if self.ndb._auto_netns: if netns_path.find('/var/run/docker') > -1: source_name = 'docker/%s' % netns_path.split('/')[-1] else: source_name = 'netns/%s' % netns_path.split('/')[-1] if event['header'].get('type', 0) % 2: if source_name in self.ndb.sources.cache: self.ndb.sources.remove(source_name, code=108, sync=False) elif source_name not in self.ndb.sources.cache: sync_event = None if self.ndb._dbm_autoload and not self.ndb._dbm_ready.is_set(): sync_event = threading.Event() self.ndb._dbm_autoload.add(sync_event) self.log.debug('queued event %s' % sync_event) else: sync_event = None self.log.debug('starting netns source %s' % source_name) self.ndb.sources.async_add(target=source_name, netns=netns_path, persistent=False, event=sync_event) self.load_netlink('netns', target, event) def load_ndmsg(self, target, event): # # ignore events with ifindex == 0 # if event['ifindex'] == 0: return self.load_netlink('neighbours', target, event) def load_ifinfmsg(self, target, event): # # link goes down: flush all related routes # if not event['flags'] & 1: self.execute('DELETE FROM routes WHERE ' 'f_target = %s AND ' 'f_RTA_OIF = %s OR f_RTA_IIF = %s' % (self.plch, 
self.plch, self.plch), (target, event['index'], event['index'])) # # ignore wireless updates # if event.get_attr('IFLA_WIRELESS'): return # # AF_BRIDGE events # if event['family'] == AF_BRIDGE: # # bypass for now # return self.load_netlink('interfaces', target, event) # # load ifinfo, if exists # if not event['header'].get('type', 0) % 2: linkinfo = event.get_attr('IFLA_LINKINFO') if linkinfo is not None: iftype = linkinfo.get_attr('IFLA_INFO_KIND') table = 'ifinfo_%s' % iftype if iftype == 'gre': ifdata = linkinfo.get_attr('IFLA_INFO_DATA') local = ifdata.get_attr('IFLA_GRE_LOCAL') remote = ifdata.get_attr('IFLA_GRE_REMOTE') p2p = p2pmsg() p2p['index'] = event['index'] p2p['family'] = 2 p2p['attrs'] = [('P2P_LOCAL', local), ('P2P_REMOTE', remote)] self.load_netlink('p2p', target, p2p) elif iftype == 'veth': link = event.get_attr('IFLA_LINK') # for veth interfaces, IFLA_LINK points to # the peer -- but NOT in automatic updates if (not link) and \ (target in self.ndb.sources.keys()): self.log.debug('reload veth %s' % event['index']) update = (self .ndb .sources[target] .api('link', 'get', index=event['index'])) update = tuple(update)[0] return self.load_netlink('interfaces', target, update) if table in self.spec: ifdata = linkinfo.get_attr('IFLA_INFO_DATA') ifdata['header'] = {} ifdata['index'] = event['index'] self.load_netlink(table, target, ifdata) def load_rtmsg(self, target, event): mp = event.get_attr('RTA_MULTIPATH') # create an mp route if (not event['header']['type'] % 2) and mp: # # create key keys = ['f_target = %s' % self.plch] values = [target] for key in self.indices['routes']: keys.append('f_%s = %s' % (key, self.plch)) values.append(event.get(key) or event.get_attr(key)) # spec = 'WHERE %s' % ' AND '.join(keys) s_req = 'SELECT f_route_id FROM routes %s' % spec # # get existing route_id for route_id in self.execute(s_req, values).fetchall(): # # if exists route_id = route_id[0][0] # # flush all previous MP hops d_req = 'DELETE FROM nh WHERE f_route_id= %s' % self.plch self.execute(d_req, (route_id, )) break else: # # or create a new route_id route_id = str(uuid.uuid4()) # # set route_id on the route itself event['route_id'] = route_id self.load_netlink('routes', target, event) for idx in range(len(mp)): mp[idx]['header'] = {} # for load_netlink() mp[idx]['route_id'] = route_id # set route_id on NH mp[idx]['nh_id'] = idx # add NH number self.load_netlink('nh', target, mp[idx], 'routes') # # we're done with an MP-route, just exit return # # manage gc marks on related routes # # only for automatic routes: # - table 254 (main) # - proto 2 (kernel) # - scope 253 (link) elif (event.get_attr('RTA_TABLE') == 254) and \ (event['proto'] == 2) and \ (event['scope'] == 253) and \ (event['family'] == AF_INET): evt = event['header']['type'] # # set f_gc_mark = timestamp for "del" events # and clean it for "new" events # self.rtmsg_gc_mark(target, event, int(time.time()) if (evt % 2) else None) # # continue with load_netlink() # # # ... 
or work on a regular route self.load_netlink("routes", target, event) def log_netlink(self, table, target, event, ctable=None): # # RTNL Logs # fkeys = self.compiled[table]['names'] fields = ','.join(['f_tstamp', 'f_target', 'f_event'] + ['f_%s' % x for x in fkeys]) pch = ','.join([self.plch] * (len(fkeys) + 3)) values = [int(time.time() * 1000), target, event.get('header', {}).get('type', 0)] for field in fkeys: value = event.get_attr(field) or event.get(field) if value is None and field in self.indices[ctable or table]: value = self.key_defaults[table][field] values.append(value) self.execute('INSERT INTO %s_log (%s) VALUES (%s)' % (table, fields, pch), values) def load_netlink(self, table, target, event, ctable=None): # if self.rtnl_log: self.log_netlink(table, target, event, ctable) # # Update metrics # if 'stats' in event['header']: self.stats[target] = event['header']['stats'] # # Periodic jobs # if time.time() - self.gctime > config.gc_timeout: self.gctime = time.time() # clean dead snapshots after GC timeout for name, wref in tuple(self.snapshots.items()): if wref() is None: del self.snapshots[name] if name.startswith('ifinfo_'): self.execute('DROP VIEW %s' % name[7:]) self.execute('DROP TABLE %s' % name) # clean marked routes self.execute('DELETE FROM routes WHERE ' '(f_gc_mark + 5) < %s' % self.plch, (int(time.time()), )) # # The event type # if event['header'].get('type', 0) % 2: # # Delete an object # conditions = ['f_target = %s' % self.plch] values = [target] for key in self.indices[table]: conditions.append('f_%s = %s' % (key, self.plch)) value = event.get(key) or event.get_attr(key) if value is None: value = self.key_defaults[table][key] values.append(value) self.execute('DELETE FROM %s WHERE' ' %s' % (table, ' AND '.join(conditions)), values) else: # # Create or set an object # # field values values = [target, 0] # index values ivalues = [target, 0] compiled = self.compiled[table] # a map of sub-NLAs nodes = {} # fetch values (exc. the first two columns) for fname, ftype in self.spec[table].items(): node = event # if the field is located in a sub-NLA if len(fname) > 1: # see if we tried to get it already if fname[:-1] not in nodes: # descend for steg in fname[:-1]: node = node.get_attr(steg) if node is None: break nodes[fname[:-1]] = node # lookup the sub-NLA in the map node = nodes[fname[:-1]] # the event has no such sub-NLA if node is None: values.append(None) continue # NLA have priority value = node.get_attr(fname[-1]) if value is None: value = node.get(fname[-1]) if value is None and \ fname[-1] in self.compiled[table]['idx']: value = self.key_defaults[table][fname[-1]] if fname[-1] in compiled['idx']: ivalues.append(value) values.append(value) try: if self.mode == 'psycopg2': # # run UPSERT -- the DB provider must support it # (self .execute('INSERT INTO %s (%s) VALUES (%s) ' 'ON CONFLICT (%s) ' 'DO UPDATE SET %s WHERE %s' % (table, compiled['fnames'], compiled['plchs'], compiled['knames'], compiled['fset'], compiled['fidx']), (values + values + ivalues))) # elif self.mode == 'sqlite3': # # SQLite3 >= 3.24 actually has UPSERT, but ... # # We can not use here INSERT OR REPLACE as well, since # it drops (almost always) records with foreign key # dependencies. Maybe a bug in SQLite3, who knows. 
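                    # So emulate the upsert manually: SELECT count(*) by
                    # the index fields, then INSERT a new record if none
                    # was found, or UPDATE the existing one otherwise.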
# count = (self .execute(''' SELECT count(*) FROM %s WHERE %s ''' % (table, compiled['fidx']), ivalues) .fetchone())[0] if count == 0: self.execute(''' INSERT INTO %s (%s) VALUES (%s) ''' % (table, compiled['fnames'], compiled['plchs']), values) else: self.execute(''' UPDATE %s SET %s WHERE %s ''' % (table, compiled['fset'], compiled['fidx']), (values + ivalues)) else: raise NotImplementedError() # except Exception: # # A good question, what should we do here self.log.debug('load_netlink: %s %s %s' % (table, target, event)) self.log.error('load_netlink: %s' % traceback.format_exc()) def init(ndb, connection, mode, rtnl_log, tid): ret = DBSchema(ndb, connection, mode, rtnl_log, tid) ret.event_map = {ifinfmsg: [ret.load_ifinfmsg], ifaddrmsg: [partial(ret.load_netlink, 'addresses')], ndmsg: [ret.load_ndmsg], rtmsg: [ret.load_rtmsg], fibmsg: [partial(ret.load_netlink, 'rules')], nsinfmsg: [ret.load_nsinfmsg]} return ret pyroute2-0.5.9/pyroute2/ndb/source.py0000644000175000017500000003261313621021711017437 0ustar peetpeet00000000000000''' RTNL sources ============ Local RTNL ---------- Local RTNL source is a simple `IPRoute` instance. By default NDB starts with one local RTNL source names `localhost`:: >>> ndb = NDB() >>> ndb.sources.details() {'kind': u'local', u'nlm_generator': 1, 'target': u'localhost'} >>> ndb.sources['localhost'] [running] The `localhost` RTNL source starts an additional async cache thread. The `nlm_generator` option means that instead of collections the `IPRoute` object returns generators, so `IPRoute` responses will not consume memory regardless of the RTNL objects number:: >>> ndb.sources['localhost'].nl.link('dump') See also: :ref:`iproute` Network namespaces ------------------ There are two ways to connect additional sources to an NDB instance. One is to specify sources when creating an NDB object:: ndb = NDB(sources=[{'target': 'localhost'}, {'netns': 'test01'}]) Another way is to call `ndb.sources.add()` method:: ndb.sources.add(netns='test01') This syntax: `{target': 'localhost'}` and `{'netns': 'test01'}` is the short form. The full form would be:: {'target': 'localhost', # the label for the DB 'kind': 'local', # use IPRoute class to start the source 'nlm_generator': 1} # {'target': 'test01', # the label 'kind': 'netns', # use NetNS class 'netns': 'test01'} # See also: :ref:`netns` Remote systems -------------- It is possible also to connect to remote systems using SSH. In order to use this kind of sources it is required to install the `mitogen `_ module. The `remote` kind of sources uses the `RemoteIPRoute` class. The short form:: ndb.sources.add(hostname='worker1.example.com') In some more extended form:: ndb.sources.add(**{'target': 'worker1.example.com', 'kind': 'remote', 'hostname': 'worker1.example.com', 'username': 'jenkins', 'check_host_keys': False}) See also: :ref:`remote` ''' import sys import time import errno import socket import struct import threading from pyroute2 import IPRoute from pyroute2 import RemoteIPRoute from pyroute2.ndb.events import (SyncStart, SchemaReadLock, SchemaReadUnlock, ShutdownException, MarkFailed, State) from pyroute2.netlink.nlsocket import NetlinkMixin from pyroute2.netlink.exceptions import NetlinkError if sys.platform.startswith('linux'): from pyroute2.netns.nslink import NetNS from pyroute2.netns.manager import NetNSManager else: NetNS = None NetNSManager = None SOURCE_FAIL_PAUSE = 5 class Source(dict): ''' The RNTL source. 
The source that is used to init the object must comply to IPRoute API, must support the async_cache. If the source starts additional threads, they must be joined in the source.close() ''' table_alias = 'src' dump = None dump_header = None summary = None summary_header = None view = None table = 'sources' vmap = {'local': IPRoute, 'netns': NetNS, 'remote': RemoteIPRoute, 'nsmanager': NetNSManager} def __init__(self, ndb, **spec): self.th = None self.nl = None self.ndb = ndb self.evq = self.ndb._event_queue # the target id -- just in case self.target = spec.pop('target') self.kind = spec.pop('kind', 'local') self.persistent = spec.pop('persistent', True) self.event = spec.pop('event') if not self.event: self.event = SyncStart() # RTNL API self.nl_prime = self.vmap[self.kind] self.nl_kwarg = spec # self.shutdown = threading.Event() self.started = threading.Event() self.lock = threading.RLock() self.shutdown_lock = threading.Lock() self.started.clear() self.log = ndb.log.channel('sources.%s' % self.target) self.state = State(log=self.log) self.state.set('init') self.ndb.schema.execute(''' INSERT INTO sources (f_target, f_kind) VALUES (%s, %s) ''' % (self.ndb.schema.plch, self.ndb.schema.plch), (self.target, self.kind)) for key, value in spec.items(): vtype = 'int' if isinstance(value, int) else 'str' self.ndb.schema.execute(''' INSERT INTO sources_options (f_target, f_name, f_type, f_value) VALUES (%s, %s, %s, %s) ''' % (self.ndb.schema.plch, self.ndb.schema.plch, self.ndb.schema.plch, self.ndb.schema.plch), (self.target, key, vtype, value)) self.load_sql() @classmethod def defaults(cls, spec): ret = dict(spec) defaults = {} if 'hostname' in spec: defaults['kind'] = 'remote' defaults['protocol'] = 'ssh' defaults['target'] = spec['hostname'] if 'netns' in spec: defaults['kind'] = 'netns' defaults['target'] = spec['netns'] for key in defaults: if key not in ret: ret[key] = defaults[key] return ret def __repr__(self): if isinstance(self.nl_prime, NetlinkMixin): name = self.nl_prime.__class__.__name__ elif isinstance(self.nl_prime, type): name = self.nl_prime.__name__ return '[%s] <%s %s>' % (self.state.get(), name, self.nl_kwarg) @classmethod def nla2name(cls, name): return name @classmethod def name2nla(cls, name): return name def api(self, name, *argv, **kwarg): for _ in range(100): # FIXME make a constant with self.lock: try: return getattr(self.nl, name)(*argv, **kwarg) except (NetlinkError, AttributeError, ValueError, KeyError, TypeError, socket.error, struct.error): raise except Exception as e: print(type(e)) # probably the source is restarting self.log.debug('source api error: %s' % e) time.sleep(1) raise RuntimeError('api call failed') def receiver(self): # # The source thread routine -- get events from the # channel and forward them into the common event queue # # The routine exists on an event with error code == 104 # stop = False while not stop: with self.lock: if self.shutdown.is_set(): break if self.nl is not None: try: self.nl.close(code=0) except Exception as e: self.log.warning('source restart: %s' % e) try: self.state.set('connecting') if isinstance(self.nl_prime, type): spec = {} spec.update(self.nl_kwarg) if self.kind in ('nsmanager', ): spec['libc'] = self.ndb.libc self.nl = self.nl_prime(**spec) else: raise TypeError('source channel not supported') self.state.set('loading') # self.nl.bind(async_cache=True, clone_socket=True) # # Initial load -- enqueue the data # self.ndb.schema.allow_read(False) try: self.ndb.schema.flush(self.target) self.evq.put((self.target, 
self.nl.dump())) finally: self.ndb.schema.allow_read(True) self.started.set() self.shutdown.clear() self.state.set('running') if self.event is not None: self.evq.put((self.target, (self.event, ))) except Exception as e: self.started.set() self.state.set('failed') self.log.error('source error: %s %s' % (type(e), e)) try: self.evq.put((self.target, (MarkFailed(), ))) except ShutdownException: stop = True break if self.persistent: self.log.debug('sleeping before restart') self.shutdown.wait(SOURCE_FAIL_PAUSE) if self.shutdown.is_set(): self.log.debug('source shutdown') stop = True break else: self.event.set() return continue while not stop: try: msg = tuple(self.nl.get()) except Exception as e: self.log.error('source error: %s %s' % (type(e), e)) msg = None if not self.persistent: stop = True break code = 0 if msg and msg[0]['header']['error']: code = msg[0]['header']['error'].code if msg is None or code == errno.ECONNRESET: stop = True break self.ndb.schema._allow_write.wait() try: self.evq.put((self.target, msg)) except ShutdownException: stop = True break # thus we make sure that all the events from # this source are consumed by the main loop # in __dbm__() routine try: self.sync() self.log.debug('flush DB for the target') self.ndb.schema.flush(self.target) except ShutdownException: self.log.debug('shutdown handled by the main thread') pass self.state.set('stopped') def sync(self): self.log.debug('sync') sync = threading.Event() self.evq.put((self.target, (sync, ))) sync.wait() def start(self): # # Start source thread with self.lock: self.log.debug('starting the source') if (self.th is not None) and self.th.is_alive(): raise RuntimeError('source is running') self.th = (threading .Thread(target=self.receiver, name='NDB event source: %s' % (self.target))) self.th.start() return self def close(self, code=errno.ECONNRESET, sync=True): with self.shutdown_lock: if self.shutdown.is_set(): self.log.debug('already stopped') return self.log.info('source shutdown') self.shutdown.set() if self.nl is not None: try: self.nl.close(code=code) except Exception as e: self.log.error('source close: %s' % e) if sync: if self.th is not None: self.th.join() self.th = None else: self.log.debug('receiver thread missing') def restart(self, reason='unknown'): with self.lock: if not self.shutdown.is_set(): self.log.debug('restarting the source, reason <%s>' % (reason)) self.evq.put((self.target, (SchemaReadLock(), ))) try: self.close() if self.th: self.th.join() self.start() finally: self.evq.put((self.target, (SchemaReadUnlock(), ))) def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.close() def load_sql(self): # spec = self.ndb.schema.fetchone(''' SELECT * FROM sources WHERE f_target = %s ''' % self.ndb.schema.plch, (self.target, )) self['target'], self['kind'] = spec for spec in self.ndb.schema.fetch(''' SELECT * FROM sources_options WHERE f_target = %s ''' % self.ndb.schema.plch, (self.target, )): f_target, f_name, f_type, f_value = spec self[f_name] = int(f_value) if f_type == 'int' else f_value pyroute2-0.5.9/pyroute2/netlink/0000755000175000017500000000000013621220110016453 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/__init__.py0000644000175000017500000016645513617560120020622 0ustar peetpeet00000000000000''' Netlink ------- basics ====== General netlink packet structure:: nlmsg packet: header data Generic netlink message header:: nlmsg header: uint32 length uint16 type uint16 flags uint32 sequence number uint32 pid The `length` field is the length of all 
the packet, including data and header. The `type` field is used to distinguish different message types, commands etc. Please note, that there is no explicit protocol field -- you choose a netlink protocol, when you create a socket. The `sequence number` is very important. Netlink is an asynchronous protocol -- it means, that the packet order doesn't matter and is not guaranteed. But responses to a request are always marked with the same sequence number, so you can treat it as a cookie. Please keep in mind, that a netlink request can initiate a cascade of events, and netlink messages from these events can carry sequence number == 0. E.g., it is so when you remove a primary IP addr from an interface, when `promote_secondaries` sysctl is set. Beside of incapsulated headers and other protocol-specific data, netlink messages can carry NLA (netlink attributes). NLA structure is as follows:: NLA header: uint16 length uint16 type NLA data: data-specific struct # optional: NLA NLA ... So, NLA structures can be nested, forming a tree. Complete structure of a netlink packet:: nlmsg header: uint32 length uint16 type uint16 flags uint32 sequence number uint32 pid [ optional protocol-specific data ] [ optional NLA tree ] More information about netlink protocol you can find in the man pages. pyroute2 and netlink ==================== packets ~~~~~~~ To simplify the development, pyroute2 provides an easy way to describe packet structure. As an example, you can take the ifaddrmsg description -- `pyroute2/netlink/rtnl/ifaddrmsg.py`. To describe a packet, you need to inherit from `nlmsg` class:: from pyroute2.netlink import nlmsg class foo_msg(nlmsg): fields = ( ... ) nla_map = ( ... ) NLA are described in the same way, but the parent class should be `nla`, instead of `nlmsg`. And yes, it is important to use the proper parent class -- it affects the header structure. fields attribute ~~~~~~~~~~~~~~~~ The `fields` attribute describes the structure of the protocol-specific data. It is a tuple of tuples, where each member contains a field name and its data format. Field data format should be specified as for Python `struct` module. E.g., ifaddrmsg structure:: struct ifaddrmsg { __u8 ifa_family; __u8 ifa_prefixlen; __u8 ifa_flags; __u8 ifa_scope; __u32 ifa_index; }; should be described as follows:: class ifaddrmsg(nlmsg): fields = (('family', 'B'), ('prefixlen', 'B'), ('flags', 'B'), ('scope', 'B'), ('index', 'I')) Format strings are passed directly to the `struct` module, so you can use all the notations like `>I`, `16s` etc. All fields are parsed from the stream separately, so if you want to explicitly fix alignemt, as if it were C struct, use the `pack` attribute:: class tstats(nla): pack = 'struct' fields = (('version', 'H'), ('ac_exitcode', 'I'), ('ac_flag', 'B'), ...) Explicit padding bytes also can be used, when struct packing doesn't work well:: class ipq_mode_msg(nlmsg): pack = 'struct' fields = (('value', 'B'), ('__pad', '7x'), ('range', 'I'), ('__pad', '12x')) nla_map attribute ~~~~~~~~~~~~~~~~~ The `nla_map` attribute is a tuple of NLA descriptions. Each description is also a tuple in two different forms: either two fields, name and format, or three fields: type, name and format. Please notice, that the format field is a string name of corresponding NLA class:: class ifaddrmsg(nlmsg): ... nla_map = (('IFA_UNSPEC', 'hex'), ('IFA_ADDRESS', 'ipaddr'), ('IFA_LOCAL', 'ipaddr'), ...) This code will create mapping, where IFA_ADDRESS NLA will be of type 1 and IFA_LOCAL -- of type 2, etc. 
Both NLA will be decoded as IP addresses (class `ipaddr`). IFA_UNSPEC will be of type 0, and if it will be in the NLA tree, it will be just dumped in hex. NLA class names are should be specified as strings, since they are resolved in runtime. There are several pre-defined NLA types, that you will get with `nla` class: - `none` -- ignore this NLA - `flag` -- boolean flag NLA (no payload; NLA exists = True) - `uint8`, `uint16`, `uint32`, `uint64` -- unsigned int - `be8`, `be16`, `be32`, `be64` -- big-endian unsigned int - `ipaddr` -- IP address, IPv4 or IPv6 - `ip4addr` -- only IPv4 address type - `ip6addr` -- only IPv6 address type - `target` -- a univeral target (IPv4, IPv6, MPLS) - `l2addr` -- MAC address - `hex` -- hex dump as a string -- useful for debugging - `cdata` -- a binary data - `string` -- UTF-8 string - `asciiz` -- zero-terminated ASCII string, no decoding - `array` -- array of simple types (uint8, uint16 etc.) Please refer to `pyroute2/netlink/__init__.py` for details. You can also make your own NLA descriptions:: class ifaddrmsg(nlmsg): ... nla_map = (... ('IFA_CACHEINFO', 'cacheinfo'), ...) class cacheinfo(nla): fields = (('ifa_preferred', 'I'), ('ifa_valid', 'I'), ('cstamp', 'I'), ('tstamp', 'I')) Custom NLA descriptions should be defined in the same class, where they are used. Also, it is possible to use not autogenerated type numbers, as for ifaddrmsg, but specify them explicitly:: class iw_event(nla): ... nla_map = ((0x8B00, 'SIOCSIWCOMMIT', 'hex'), (0x8B01, 'SIOCGIWNAME', 'hex'), (0x8B02, 'SIOCSIWNWID', 'hex'), (0x8B03, 'SIOCGIWNWID', 'hex'), ...) Here you can see custom NLA type numbers -- 0x8B00, 0x8B01 etc. It is not permitted to mix these two forms in one class: you should use ether autogenerated type numbers (two fields tuples), or explicit numbers (three fields typles). array types ~~~~~~~~~~~ There are different array-like NLA types in the kernel, and some of them are covered by pyroute2. An array of simple type elements:: # declaration nla_map = (('NLA_TYPE', 'array(uint8)'), ...) # data layout +======+======+---------------------------- | len | type | uint8 | uint8 | uint 8 | ... +======+======+---------------------------- # decoded {'attrs': [['NLA_TYPE', (2, 3, 4, 5, ...)], ...], ...} An array of NLAs:: # declaration nla_map = (('NLA_TYPE', '*type'), ...) # data layout +=======+=======+-----------------------+-----------------------+-- | len | type* | len | type | payload | len | type | payload | ... +=======+=======+-----------------------+-----------------------+-- # type* -- in that case the type is OR'ed with NLA_F_NESTED # decoded {'attrs': [['NLA_TYPE', [payload, payload, ...]], ...], ...} parsed netlink message ~~~~~~~~~~~~~~~~~~~~~~ Netlink messages are represented by pyroute2 as dictionaries as follows:: {'header': {'pid': ..., 'length: ..., 'flags': ..., 'error': None, # if you are lucky 'type': ..., 'sequence_number': ...}, # fields attributes 'field_name1': value, ... 'field_nameX': value, # nla tree 'attrs': [['NLA_NAME1', value], ... ['NLA_NAMEX', value], ['NLA_NAMEY', {'field_name1': value, ... 'field_nameX': value, 'attrs': [['NLA_NAME.... 
]]}]]} As an example, a message from the wireless subsystem about new scan event:: {'index': 4, 'family': 0, '__align': 0, 'header': {'pid': 0, 'length': 64, 'flags': 0, 'error': None, 'type': 16, 'sequence_number': 0}, 'flags': 69699, 'ifi_type': 1, 'event': 'RTM_NEWLINK', 'change': 0, 'attrs': [['IFLA_IFNAME', 'wlp3s0'], ['IFLA_WIRELESS', {'attrs': [['SIOCGIWSCAN', '00:00:00:00:00:00:00:00:00:00:00:00']]}]]} One important detail is that NLA chain is represented as a list of elements `['NLA_TYPE', value]`, not as a dictionary. The reason is that though in the kernel *usually* NLA chain is a dictionary, the netlink protocol by itself doesn't require elements of each type to be unique. In a message there may be several NLA of the same type. encoding and decoding algo ~~~~~~~~~~~~~~~~~~~~~~~~~~ The message encoding works as follows: 1. Reserve space for the message header (if there is) 2. Iterate defined `fields`, encoding values with `struct.pack()` 3. Iterate NLA from the `attrs` field, looking up types in `nla_map` 4. Encode the header Since every NLA is also an `nlmsg` object, there is a recursion. The decoding process is a bit simpler: 1. Decode the header 2. Iterate `fields`, decoding values with `struct.unpack()` 3. Iterate NLA until the message ends If the `fields` attribute is an empty list, the step 2 will be skipped. The step 3 will be skipped in the case of the empty `nla_map`. If both attributes are empty lists, only the header will be encoded/decoded. create and send messages ~~~~~~~~~~~~~~~~~~~~~~~~ Using high-level interfaces like `IPRoute` or `IPDB`, you will never need to manually construct and send netlink messages. But in the case you really need it, it is simple as well. Having a description class, like `ifaddrmsg` from above, you need to: - instantiate it - fill the fields - encode the packet - send the encoded data The code:: from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink import NLM_F_ACK from pyroute2.netlink import NLM_F_CREATE from pyroute2.netlink import NLM_F_EXCL from pyroute2.iproute import RTM_NEWADDR from pyroute2.netlink.rtnl.ifaddrmsg import ifaddrmsg ## # add an addr to an interface # # create the message msg = ifaddrmsg() # fill the protocol-specific fields msg['index'] = index # index of the interface msg['family'] = AF_INET # address family msg['prefixlen'] = 24 # the address mask msg['scope'] = scope # see /etc/iproute2/rt_scopes # attach NLA -- it MUST be a list / mutable msg['attrs'] = [['IFA_LOCAL', '192.168.0.1'], ['IFA_ADDRESS', '192.162.0.1']] # fill generic netlink fields msg['header']['sequence_number'] = nonce # an unique seq number msg['header']['pid'] = os.getpid() msg['header']['type'] = RTM_NEWADDR msg['header']['flags'] = NLM_F_REQUEST |\\ NLM_F_ACK |\\ NLM_F_CREATE |\\ NLM_F_EXCL # encode the packet msg.encode() # send the buffer nlsock.sendto(msg.data, (0, 0)) Please notice, that NLA list *MUST* be mutable. 
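receive and decode messages
~~~~~~~~~~~~~~~~~~~~~~~~~~~

The reverse operation is just as simple. Below is a minimal sketch,
assuming the received buffer contains exactly one `ifaddrmsg` packet;
real replies may carry several messages, and the high-level sockets
of the library parse them for you::

    # receive the raw data
    data = nlsock.recv(16384)
    # create the message object over the buffer
    msg = ifaddrmsg(data)
    # parse the fields and the NLA tree
    msg.decode()
    # access protocol-specific fields and NLA
    print(msg['index'], msg.get_attr('IFA_ADDRESS'))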
''' import weakref import threading import traceback import logging import struct import types import sys import io import re from socket import inet_pton from socket import inet_ntop from socket import AF_INET from socket import AF_INET6 from socket import AF_UNSPEC from pyroute2.common import AF_MPLS from pyroute2.common import hexdump from pyroute2.common import basestring from pyroute2.netlink.exceptions import NetlinkError from pyroute2.netlink.exceptions import NetlinkDecodeError from pyroute2.netlink.exceptions import NetlinkNLADecodeError log = logging.getLogger(__name__) # make pep8 happy _ne = NetlinkError # reexport for compatibility _de = NetlinkDecodeError # class NotInitialized(Exception): pass _letters = re.compile('[A-Za-z]') _fmt_letters = re.compile('[^!><@=][!><@=]') ## # That's a hack for the code linter, which works under # Python3, see unicode reference in the code below if sys.version[0] == '3': unicode = str NLMSG_MIN_TYPE = 0x10 GENL_NAMSIZ = 16 # length of family name GENL_MIN_ID = NLMSG_MIN_TYPE GENL_MAX_ID = 1023 GENL_ADMIN_PERM = 0x01 GENL_CMD_CAP_DO = 0x02 GENL_CMD_CAP_DUMP = 0x04 GENL_CMD_CAP_HASPOL = 0x08 # # List of reserved static generic netlink identifiers: # GENL_ID_GENERATE = 0 GENL_ID_CTRL = NLMSG_MIN_TYPE # # Controller # CTRL_CMD_UNSPEC = 0x0 CTRL_CMD_NEWFAMILY = 0x1 CTRL_CMD_DELFAMILY = 0x2 CTRL_CMD_GETFAMILY = 0x3 CTRL_CMD_NEWOPS = 0x4 CTRL_CMD_DELOPS = 0x5 CTRL_CMD_GETOPS = 0x6 CTRL_CMD_NEWMCAST_GRP = 0x7 CTRL_CMD_DELMCAST_GRP = 0x8 CTRL_CMD_GETMCAST_GRP = 0x9 # unused CTRL_ATTR_UNSPEC = 0x0 CTRL_ATTR_FAMILY_ID = 0x1 CTRL_ATTR_FAMILY_NAME = 0x2 CTRL_ATTR_VERSION = 0x3 CTRL_ATTR_HDRSIZE = 0x4 CTRL_ATTR_MAXATTR = 0x5 CTRL_ATTR_OPS = 0x6 CTRL_ATTR_MCAST_GROUPS = 0x7 CTRL_ATTR_OP_UNSPEC = 0x0 CTRL_ATTR_OP_ID = 0x1 CTRL_ATTR_OP_FLAGS = 0x2 CTRL_ATTR_MCAST_GRP_UNSPEC = 0x0 CTRL_ATTR_MCAST_GRP_NAME = 0x1 CTRL_ATTR_MCAST_GRP_ID = 0x2 # Different Netlink families # NETLINK_ROUTE = 0 # Routing/device hook NETLINK_UNUSED = 1 # Unused number NETLINK_USERSOCK = 2 # Reserved for user mode socket protocols NETLINK_FIREWALL = 3 # Firewalling hook NETLINK_SOCK_DIAG = 4 # INET socket monitoring NETLINK_NFLOG = 5 # netfilter/iptables ULOG NETLINK_XFRM = 6 # ipsec NETLINK_SELINUX = 7 # SELinux event notifications NETLINK_ISCSI = 8 # Open-iSCSI NETLINK_AUDIT = 9 # auditing NETLINK_FIB_LOOKUP = 10 NETLINK_CONNECTOR = 11 NETLINK_NETFILTER = 12 # netfilter subsystem NETLINK_IP6_FW = 13 NETLINK_DNRTMSG = 14 # DECnet routing messages NETLINK_KOBJECT_UEVENT = 15 # Kernel messages to userspace NETLINK_GENERIC = 16 # leave room for NETLINK_DM (DM Events) NETLINK_SCSITRANSPORT = 18 # SCSI Transports # NLA flags NLA_F_NESTED = 1 << 15 NLA_F_NET_BYTEORDER = 1 << 14 # Netlink message flags values (nlmsghdr.flags) # NLM_F_REQUEST = 1 # It is request message. 
NLM_F_MULTI = 2 # Multipart message, terminated by NLMSG_DONE NLM_F_ACK = 4 # Reply with ack, with zero or error code NLM_F_ECHO = 8 # Echo this request # Modifiers to GET request NLM_F_ROOT = 0x100 # specify tree root NLM_F_MATCH = 0x200 # return all matching NLM_F_ATOMIC = 0x400 # atomic GET NLM_F_DUMP = (NLM_F_ROOT | NLM_F_MATCH) # Modifiers to NEW request NLM_F_REPLACE = 0x100 # Override existing NLM_F_EXCL = 0x200 # Do not touch, if it exists NLM_F_CREATE = 0x400 # Create, if it does not exist NLM_F_APPEND = 0x800 # Add to end of list NLMSG_NOOP = 0x1 # Nothing NLMSG_ERROR = 0x2 # Error NLMSG_DONE = 0x3 # End of a dump NLMSG_OVERRUN = 0x4 # Data lost NLMSG_CONTROL = 0xe # Custom message type for messaging control NLMSG_TRANSPORT = 0xf # Custom message type for NL as a transport NLMSG_MIN_TYPE = 0x10 # < 0x10: reserved control messages NLMSG_MAX_LEN = 0xffff # Max message length mtypes = {1: 'NLMSG_NOOP', 2: 'NLMSG_ERROR', 3: 'NLMSG_DONE', 4: 'NLMSG_OVERRUN'} IPRCMD_NOOP = 0 IPRCMD_STOP = 1 IPRCMD_ACK = 2 IPRCMD_ERR = 3 IPRCMD_REGISTER = 4 IPRCMD_RELOAD = 5 IPRCMD_ROUTE = 6 IPRCMD_CONNECT = 7 IPRCMD_DISCONNECT = 8 IPRCMD_SERVE = 9 IPRCMD_SHUTDOWN = 10 IPRCMD_SUBSCRIBE = 11 IPRCMD_UNSUBSCRIBE = 12 IPRCMD_PROVIDE = 13 IPRCMD_REMOVE = 14 IPRCMD_DISCOVER = 15 IPRCMD_UNREGISTER = 16 SOL_NETLINK = 270 NETLINK_ADD_MEMBERSHIP = 1 NETLINK_DROP_MEMBERSHIP = 2 NETLINK_PKTINFO = 3 NETLINK_BROADCAST_ERROR = 4 NETLINK_NO_ENOBUFS = 5 NETLINK_RX_RING = 6 NETLINK_TX_RING = 7 NETLINK_LISTEN_ALL_NSID = 8 clean_cbs = threading.local() # Cached results for some struct operations. # No cache invalidation required. cache_fmt = {} cache_hdr = {} cache_jit = {} class nlmsg_base(dict): ''' Netlink base class. You do not need to inherit it directly, unless you're inventing completely new protocol structure. Use nlmsg or nla classes. The class provides several methods, but often one need to customize only `decode()` and `encode()`. 
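    A minimal end-to-end illustration -- the message type and the NLA
    names here are purely hypothetical::

        class foo_msg(nlmsg):
            fields = (('index', 'I'),
                      ('flags', 'H'))
            nla_map = (('FOO_UNSPEC', 'none'),
                       ('FOO_IFNAME', 'asciiz'))

        msg = foo_msg()
        msg['index'] = 1
        msg['attrs'] = [['FOO_IFNAME', 'lo']]
        msg.encode()    # the binary packet is now in msg.data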
''' fields = tuple() header = tuple() pack = None # pack pragma cell_header = None align = 4 nla_map = {} # NLA mapping sql_constraints = {} sql_extra_fields = tuple() sql_extend = tuple() nla_flags = 0 # NLA flags value_map = {} is_nla = False prefix = None own_parent = False header_type = None # caches __compiled_nla = False __compiled_ft = False __t_nla_map = None __r_nla_map = None __slots__ = ( "_buf", "data", "chain", "offset", "length", "parent", "decoded", "_nla_init", "_nla_array", "_nla_flags", "value", "_ft_decode", "_r_value_map", "__weakref__" ) def msg_align(self, l): return (l + self.align - 1) & ~ (self.align - 1) def __init__(self, data=None, offset=0, length=None, parent=None, init=None): global cache_jit dict.__init__(self) for i in self.fields: self[i[0]] = 0 # FIXME: only for number values self._buf = None self.data = data or bytearray() self.offset = offset self.length = length or 0 self.chain = [self, ] if parent is not None: # some structures use parents, some not, # so don't create cycles without need self.parent = parent if self.own_parent else weakref.proxy(parent) else: self.parent = None self.decoded = False self._nla_init = init self._nla_array = False self._nla_flags = self.nla_flags self['attrs'] = [] self['value'] = NotInitialized self.value = NotInitialized # work only on non-empty mappings if self.nla_map and not self.__class__.__compiled_nla: self.compile_nla() # compile fast-track for particular types if id(self.__class__) in cache_jit: self._ft_decode = cache_jit[id(self.__class__)]['ft_decode'] else: self.compile_ft() self._r_value_map = dict([ (x[1], x[0]) for x in self.value_map.items() ]) if self.header: self['header'] = {} @classmethod def sql_schema(cls): ret = [] for field in cls.fields: if field[0][0] != '_': ret.append(((field[0], ), ' '.join(('BIGINT', cls.sql_constraints.get(field[0], ''))))) for nla in cls.nla_map: if isinstance(nla[0], basestring): nla_name = nla[0] nla_type = nla[1] else: nla_name = nla[1] nla_type = nla[2] nla_type = getattr(cls, nla_type, None) sql_type = getattr(nla_type, 'sql_type', None) if sql_type: sql_type = ' '.join((sql_type, cls.sql_constraints.get(nla_name, ''))) ret.append(((nla_name, ), sql_type)) for (fname, ftype) in cls.sql_extra_fields: if isinstance(fname, basestring): fname = (fname, ) ret.append((fname, ftype)) for (dcls, prefix) in cls.sql_extend: for fname, ftype in dcls.sql_schema(): ret.append(((prefix, ) + fname, ftype)) return ret @property def buf(self): logging.error('nlmsg.buf is deprecated:\n%s', ''.join(traceback.format_stack())) if self._buf is None: self._buf = io.BytesIO() self._buf.write(self.data[self.offset:self.length or None]) self._buf.seek(0) return self._buf def copy(self): ''' Return a decoded copy of the netlink message. Works correctly only if the message was encoded, or is received from the socket. 
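        E.g., a sketch::

            msg = ifinfmsg(data)     # data received from a socket
            snapshot = msg.copy()    # an independent, decoded copy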
''' ret = type(self)(data=self.data, offset=self.offset) ret.decode() return ret def reset(self, buf=None): self.data = bytearray() self.offset = 0 self.decoded = False def register_clean_cb(self, cb): global clean_cbs if self.parent is not None: return self.parent.register_clean_cb(cb) else: # get the msg_seq -- if applicable seq = self.get('header', {}).get('sequence_number', None) if seq is not None and seq not in clean_cbs.__dict__: clean_cbs.__dict__[seq] = [] # attach the callback clean_cbs.__dict__[seq].append(cb) def unregister_clean_cb(self): global clean_cbs seq = self.get('header', {}).get('sequence_number', None) msf = self.get('header', {}).get('flags', 0) if (seq is not None) and \ (not msf & NLM_F_REQUEST) and \ seq in clean_cbs.__dict__: for cb in clean_cbs.__dict__[seq]: try: cb() except: log.error('Cleanup callback fail: %s' % (cb)) log.error(traceback.format_exc()) del clean_cbs.__dict__[seq] def _strip_one(self, name): for i in tuple(self['attrs']): if i[0] == name: self['attrs'].remove(i) return self def strip(self, attrs): ''' Remove an NLA from the attrs chain. The `attrs` parameter can be either string, or iterable. In the latter case, will be stripped NLAs, specified in the provided list. ''' if isinstance(attrs, basestring): self._strip_one(attrs) else: for name in attrs: self._strip_one(name) return self def __ops(self, rvalue, op0, op1): lvalue = self.getvalue() res = self.__class__() for key in lvalue: if key not in ('header', 'attrs'): if op0 == '__sub__': # operator -, complement if (key not in rvalue) or (lvalue[key] != rvalue[key]): res[key] = lvalue[key] elif op0 == '__and__': # operator &, intersection if (key in rvalue) and (lvalue[key] == rvalue[key]): res[key] = lvalue[key] if 'attrs' in lvalue: res['attrs'] = [] for attr in lvalue['attrs']: if isinstance(attr[1], nla): diff = getattr(attr[1], op0)(rvalue.get_attr(attr[0])) if diff is not None: res['attrs'].append([attr[0], diff]) else: if op0 == '__sub__': # operator -, complement if rvalue.get_attr(attr[0]) != attr[1]: res['attrs'].append(attr) elif op0 == '__and__': # operator &, intersection if rvalue.get_attr(attr[0]) == attr[1]: res['attrs'].append(attr) if not len(res): return None else: if 'header' in res: del res['header'] if 'value' in res: del res['value'] if 'attrs' in res and not len(res['attrs']): del res['attrs'] return res def __sub__(self, rvalue): ''' Subjunction operation. ''' return self.__ops(rvalue, '__sub__', '__ne__') def __and__(self, rvalue): ''' Conjunction operation. ''' return self.__ops(rvalue, '__and__', '__eq__') def __ne__(self, rvalue): return not self.__eq__(rvalue) def __eq__(self, rvalue): ''' Having nla, we are able to use it in operations like:: if nla == 'some value': ... ''' lvalue = self.getvalue() if lvalue is self: for key in self: try: if key == 'attrs': for nla in self[key]: lv = self.get_attr(nla[0]) if isinstance(lv, dict): lv = nlmsg().setvalue(lv) rv = rvalue.get_attr(nla[0]) if isinstance(rv, dict): rv = nlmsg().setvalue(rv) # this strange condition means a simple thing: # None, 0, empty container and NotInitialized in # that context should be treated as equal. 
if (lv != rv) and not \ ((not lv or lv is NotInitialized) and (not rv or rv is NotInitialized)): return False else: lv = self.get(key) rv = rvalue.get(key) if (lv != rv) and not \ ((not lv or lv is NotInitialized) and (not rv or rv is NotInitialized)): return False except Exception: # on any error -- is not equal return False return True else: return lvalue == rvalue @classmethod def get_size(self): size = 0 for field in self.fields: size += struct.calcsize(field[1]) return size @classmethod def nla2name(self, name): ''' Convert NLA name into human-friendly name Example: IFLA_ADDRESS -> address Requires self.prefix to be set ''' return name[(name.find(self.prefix) + 1) * len(self.prefix):].lower() @classmethod def name2nla(self, name): ''' Convert human-friendly name into NLA name Example: address -> IFLA_ADDRESS Requires self.prefix to be set ''' name = name.upper() if name.find(self.prefix) == -1: name = "%s%s" % (self.prefix, name) return name def decode(self): ''' Decode the message. The message should have the `buf` attribute initialized. e.g.:: data = sock.recv(16384) msg = ifinfmsg(data) If you want to customize the decoding process, override the method, but don't forget to call parent's `decode()`:: class CustomMessage(nlmsg): def decode(self): nlmsg.decode(self) ... # do some custom data tuning ''' offset = self.offset global cache_hdr global clean_cbs # Decode the header if self.header is not None: ## # ~~ self['header'][name] = struct.unpack_from(...) # # Instead of `struct.unpack()` all the NLA headers, it is # much cheaper to cache decoded values. The resulting dict # will be not much bigger than some hundreds ov values. # # The code might look ugly, but line_profiler shows here # a notable performance gain. # # The chain is: # dict.get(key, None) or dict.set(unpack(key, ...)) or dict[key] # # If there is no such key in the dict, get() returns None, and # Python executes __setitem__(), which always return None, and # then dict[key] is returned. # # If the key exists, the statement after the first `or` is not # executed. if self.is_nla: key = tuple(self.data[offset:offset + 4]) self['header'] = cache_hdr.get(key, None) or \ (cache_hdr .__setitem__(key, dict(zip(('length', 'type'), struct.unpack_from('HH', self.data, offset))))) or \ cache_hdr[key] ## offset += 4 self.length = self['header']['length'] else: for name, fmt in self.header: self['header'][name] = struct.unpack_from(fmt, self.data, offset)[0] offset += struct.calcsize(fmt) # update length from header # it can not be less than 4 if 'header' in self: self.length = max(self['header']['length'], 4) # handle the array case if self._nla_array: self.setvalue([]) while offset < self.offset + self.length: cell = type(self)(data=self.data, offset=offset, parent=self) cell._nla_array = False if cell.cell_header is not None: cell.header = cell.cell_header cell.decode() self.value.append(cell) offset += (cell.length + 4 - 1) & ~ (4 - 1) else: self._ft_decode(self, offset) if clean_cbs.__dict__: self.unregister_clean_cb() self.decoded = True def encode(self): ''' Encode the message into the binary buffer:: msg.encode() sock.send(msg.data) If you want to customize the encoding process, override the method:: class CustomMessage(nlmsg): def encode(self): ... 
# do some custom data tuning nlmsg.encode(self) ''' offset = self.offset diff = 0 # reserve space for the header if self.header is not None: hsize = struct.calcsize(''.join([x[1] for x in self.header])) self.data.extend([0] * hsize) offset += hsize # handle the array case if self._nla_array: header_type = 1 for value in self.getvalue(): cell = type(self)(data=self.data, offset=offset, parent=self) cell._nla_array = False cell['header']['type'] = self.header_type or \ (header_type | self._nla_flags) header_type += 1 if cell.cell_header is not None: cell.header = cell.cell_header cell.setvalue(value) cell.encode() offset += (cell.length + 4 - 1) & ~ (4 - 1) elif self.getvalue() is not None: for name, fmt in self.fields: value = self[name] if fmt == 's': length = len(value) efmt = '%is' % (length) elif fmt == 'z': length = len(value) + 1 efmt = '%is' % (length) else: length = struct.calcsize(fmt) efmt = fmt self.data.extend([0] * length) # in python3 we should force it if sys.version[0] == '3': if isinstance(value, str): value = bytes(value, 'utf-8') elif isinstance(value, float): value = int(value) elif sys.version[0] == '2': if isinstance(value, unicode): value = value.encode('utf-8') try: if fmt[-1] == 'x': struct.pack_into(efmt, self.data, offset) elif type(value) in (list, tuple, set): struct.pack_into(efmt, self.data, offset, *value) else: struct.pack_into(efmt, self.data, offset, value) except struct.error: log.error(''.join(traceback.format_stack())) log.error(traceback.format_exc()) log.error("error pack: %s %s %s" % (efmt, value, type(value))) raise offset += length diff = ((offset + 4 - 1) & ~ (4 - 1)) - offset offset += diff self.data.extend([0] * diff) # write NLA chain if self.nla_map: offset = self.encode_nlas(offset) # calculate the size and write it if 'header' in self and self.header is not None: self.length = self['header']['length'] = (offset - self.offset - diff) offset = self.offset for name, fmt in self.header: struct.pack_into(fmt, self.data, offset, self['header'].get(name, 0)) offset += struct.calcsize(fmt) def setvalue(self, value): if isinstance(value, dict): self.update(value) if 'attrs' in value: self['attrs'] = [] for nla in value['attrs']: nlv = nlmsg_base() nlv.setvalue(nla[1]) self['attrs'].append([nla[0], nlv.getvalue()]) else: try: value = self._r_value_map.get(value, value) except TypeError: pass self['value'] = value self.value = value return self def get_encoded(self, attr, default=None): ''' Return the first encoded NLA by name ''' cells = [i[1] for i in self['attrs'] if i[0] == attr] if cells: return cells[0] def get_nested(self, *attrs): ''' Return nested NLA or None ''' pointer = self for attr in attrs: pointer = pointer.get_attr(attr) if pointer is None: return return pointer def get_attr(self, attr, default=None): ''' Return the first NLA with that name or None ''' try: attrs = self.get_attrs(attr) except KeyError: return default if attrs: return attrs[0] else: return default def get_attrs(self, attr): ''' Return attrs by name or an empty list ''' return [i[1] for i in self['attrs'] if i[0] == attr] def nla(self, attr=None, default=NotInitialized): ''' ''' if default is NotInitialized: response = nlmsg_base() del response['value'] del response['attrs'] response.value = None chain = self.get('attrs', []) if attr is not None: chain = [i.nla for i in chain if i.name == attr] else: chain = [i.nla for i in chain] if chain: for link in chain: link.chain = chain response = chain[0] return response def __getattribute__(self, key): try: return 
super(nlmsg_base, self).__getattribute__(key) except AttributeError: if ord(key[0]) < 90: return self.nla(key) raise AttributeError(key) def __getitem__(self, key): if isinstance(key, int): return self.chain[key] return dict.__getitem__(self, key) def __setstate__(self, state): return self.load(state) def __reduce__(self): return (type(self), (), self.dump()) def load(self, dump): ''' Load packet from a dict:: ipr = IPRoute() lo = ipr.link('dump', ifname='lo')[0] msg_type, msg_value = type(lo), lo.dump() ... lo = msg_type() lo.load(msg_value) The same methods -- `dump()`/`load()` -- implement the pickling protocol for the nlmsg class, see `__reduce__()` and `__setstate__()`. ''' if isinstance(dump, dict): for (k, v) in dump.items(): if k == 'header': self['header'].update(dump['header']) else: self[k] = v else: self.setvalue(dump) return self def dump(self): ''' Dump packet as a dict ''' a = self.getvalue() if isinstance(a, dict): ret = {} for (k, v) in a.items(): if k == 'header': ret['header'] = dict(a['header']) elif k == 'attrs': ret['attrs'] = attrs = [] for i in a['attrs']: if isinstance(i[1], nlmsg_base): attrs.append([i[0], i[1].dump()]) elif isinstance(i[1], set): attrs.append([i[0], tuple(i[1])]) else: attrs.append([i[0], i[1]]) else: ret[k] = v else: ret = a return ret def getvalue(self): ''' Atomic NLAs return their value in the 'value' field, not as a dictionary. Complex NLAs return whole dictionary. ''' if self.value != NotInitialized: # value decoded by custom decoder return self.value if 'value' in self and self['value'] != NotInitialized: # raw value got by generic decoder return self.value_map.get(self['value'], self['value']) return self @staticmethod def _ft_decode_zstring(self, offset): value, = struct.unpack_from('%is' % (self.length - 4), self.data, offset) self['value'] = value.strip(b'\0') @staticmethod def _ft_decode_string(self, offset): self['value'], = struct.unpack_from('%is' % (self.length - 4), self.data, offset) @staticmethod def _ft_decode_packed(self, offset): names = [] fmt = '' for field in self.fields: names.append(field[0]) fmt += field[1] value = struct.unpack_from(fmt, self.data, offset) values = list(value) for name in names: if name[0] != '_': self[name] = values.pop(0) # read NLA chain if self.nla_map: offset = (offset + 4 - 1) & ~ (4 - 1) try: self.decode_nlas(offset) except Exception as e: log.warning(traceback.format_exc()) raise NetlinkNLADecodeError(e) else: del self['attrs'] if self['value'] is NotInitialized: del self['value'] @staticmethod def _ft_decode_generic(self, offset): global cache_fmt for name, fmt in self.fields: ## # ~~ size = struct.calcsize(efmt) # # The use of the cache gives here a tiny performance # improvement, but it is an improvement anyways # size = cache_fmt.get(fmt, None) or \ cache_fmt.__setitem__(fmt, struct.calcsize(fmt)) or \ cache_fmt[fmt] ## value = struct.unpack_from(fmt, self.data, offset) offset += size if len(value) == 1: self[name] = value[0] else: self[name] = value # read NLA chain if self.nla_map: offset = (offset + 4 - 1) & ~ (4 - 1) try: self.decode_nlas(offset) except Exception as e: log.warning(traceback.format_exc()) raise NetlinkNLADecodeError(e) else: del self['attrs'] if self['value'] is NotInitialized: del self['value'] def compile_ft(self): global cache_jit if self.fields and self.fields[0][1] == 's': self._ft_decode = self._ft_decode_string elif self.fields and self.fields[0][1] == 'z': self._ft_decode = self._ft_decode_zstring elif self.pack == 'struct': self._ft_decode = 
self._ft_decode_packed else: self._ft_decode = self._ft_decode_generic cache_jit[id(self.__class__)] = {'ft_decode': self._ft_decode} def compile_nla(self): # clean up NLA mappings t_nla_map = {} r_nla_map = {} # fix nla flags nla_map = [] for item in self.nla_map: if not isinstance(item[-1], int): item = list(item) item.append(0) nla_map.append(item) # detect, whether we have pre-defined keys if not isinstance(nla_map[0][0], int): # create enumeration nla_types = enumerate((i[0] for i in nla_map)) # that's a little bit tricky, but to reduce # the required amount of code in modules, we have # to jump over the head zipped = [(k[1][0], k[0][0], k[0][1], k[0][2]) for k in zip(nla_map, nla_types)] else: zipped = nla_map for (key, name, nla_class, nla_flags) in zipped: # it is an array if nla_class[0] == '*': nla_class = nla_class[1:] nla_array = True else: nla_array = False # are there any init call in the string? lb = nla_class.find('(') rb = nla_class.find(')') if 0 < lb < rb: init = nla_class[lb + 1:rb] nla_class = nla_class[:lb] else: init = None # lookup NLA class if nla_class == 'recursive': nla_class = type(self) else: nla_class = getattr(self, nla_class) # update mappings prime = {'class': nla_class, 'type': key, 'name': name, 'nla_flags': nla_flags, 'nla_array': nla_array, 'init': init} t_nla_map[key] = r_nla_map[name] = prime self.__class__.__t_nla_map = t_nla_map self.__class__.__r_nla_map = r_nla_map self.__class__.__compiled_nla = True def encode_nlas(self, offset): ''' Encode the NLA chain. Should not be called manually, since it is called from `encode()` routine. ''' r_nla_map = self.__class__.__r_nla_map for i in range(len(self['attrs'])): cell = self['attrs'][i] if cell[0] in r_nla_map: prime = r_nla_map[cell[0]] msg_class = prime['class'] # is it a class or a function? if isinstance(msg_class, types.FunctionType): # if it is a function -- use it to get the class msg_class = msg_class(self) # encode NLA nla = msg_class(data=self.data, offset=offset, parent=self, init=prime['init']) nla._nla_flags |= prime['nla_flags'] if isinstance(cell, tuple) and len(cell) > 2: nla._nla_flags |= cell[2] nla._nla_array = prime['nla_array'] nla['header']['type'] = prime['type'] | nla._nla_flags nla.setvalue(cell[1]) try: nla.encode() except: raise else: nla.decoded = True self['attrs'][i] = nla_slot(prime['name'], nla) offset += (nla.length + 4 - 1) & ~ (4 - 1) return offset def decode_nlas(self, offset): ''' Decode the NLA chain. Should not be called manually, since it is called from `decode()` routine. ''' t_nla_map = self.__class__.__t_nla_map while offset - self.offset <= self.length - 4: nla = None # pick the length and the type (length, base_msg_type) = struct.unpack_from('HH', self.data, offset) # first two bits of msg_type are flags: msg_type = base_msg_type & ~(NLA_F_NESTED | NLA_F_NET_BYTEORDER) # rewind to the beginning length = min(max(length, 4), (self.length - offset + self.offset)) # we have a mapping for this NLA if msg_type in t_nla_map: prime = t_nla_map[msg_type] # get the class msg_class = t_nla_map[msg_type]['class'] # is it a class or a function? 
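                # compile_nla() keeps the mapping value as-is, so it may be
                # either an NLA class or a callable used as a factory that
                # picks the class at decode time. A hypothetical mapping,
                # for illustration only (names are not from this module):
                #
                #   nla_map = (('EXAMPLE_UNSPEC', 'none'),
                #              ('EXAMPLE_DATA', 'select_example'))
                #
                #   @staticmethod
                #   def select_example(self, *argv, **kwarg):
                #       # here `self` is the parent message instance
                #       return self.uint32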
if isinstance(msg_class, types.FunctionType): # if it is a function -- use it to get the class msg_class = msg_class(self, data=self.data, offset=offset) # decode NLA nla = msg_class(data=self.data, offset=offset, parent=self, length=length, init=prime['init']) nla._nla_array = prime['nla_array'] nla._nla_flags = base_msg_type & (NLA_F_NESTED | NLA_F_NET_BYTEORDER) name = prime['name'] else: name = 'UNKNOWN' nla = nla_base(data=self.data, offset=offset, length=length) self['attrs'].append(nla_slot(name, nla)) offset += (length + 4 - 1) & ~ (4 - 1) class nla_slot(object): __slots__ = ( "cell", ) def __init__(self, name, value): self.cell = (name, value) def try_to_decode(self): try: cell = self.cell[1] if not cell.decoded: cell.decode() return True except Exception: log.warning("decoding %s" % (self.cell[0])) log.warning(traceback.format_exc()) return False def get_value(self): cell = self.cell[1] if self.try_to_decode(): return cell.getvalue() else: return cell.data[cell.offset:cell.offset + cell.length] def get_flags(self): if self.try_to_decode(): return self.cell[1]._nla_flags return None @property def name(self): return self.cell[0] @property def value(self): return self.get_value() @property def nla(self): self.try_to_decode() return self.cell[1] def __getitem__(self, key): if key == 1: return self.get_value() elif key == 0: return self.cell[0] elif isinstance(key, slice): s = list(self.cell.__getitem__(key)) if self.cell[1] in s: s[s.index(self.cell[1])] = self.get_value() return s else: raise IndexError(key) def __repr__(self): if self.get_flags(): return repr((self.cell[0], self.get_value(), self.get_flags())) return repr((self.cell[0], self.get_value())) class nla_base(nlmsg_base): ''' The NLA base class. Use `nla_header` class as the header. ''' __slots__ = () is_nla = True header = (('length', 'H'), ('type', 'H')) class nlmsg_atoms(nlmsg_base): ''' A collection of base NLA types ''' __slots__ = () class none(nla_base): ''' 'none' type is used to skip decoding of NLA. You can also use 'hex' type to dump NLA's content. 
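        A hypothetical mapping using both types (attribute names are
        illustrative only)::

            nla_map = (('EXAMPLE_UNSPEC', 'none'),   # payload is skipped
                       ('EXAMPLE_BLOB', 'hex'))      # payload dumped as hex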
''' __slots__ = () def decode(self): nla_base.decode(self) self.value = None class flag(nla_base): ''' 'flag' type is used to denote attrs that have no payload ''' __slots__ = () fields = [] def decode(self): nla_base.decode(self) self.value = True class uint8(nla_base): __slots__ = () sql_type = 'INTEGER' fields = [('value', 'B')] class uint16(nla_base): __slots__ = () sql_type = 'INTEGER' fields = [('value', 'H')] class uint32(nla_base): __slots__ = () sql_type = 'INTEGER' fields = [('value', 'I')] class uint64(nla_base): __slots__ = () sql_type = 'INTEGER' fields = [('value', 'Q')] class int8(nla_base): __slots__ = () sql_type = 'INTEGER' fields = [('value', 'b')] class int16(nla_base): __slots__ = () sql_type = 'INTEGER' fields = [('value', 'h')] class int32(nla_base): __slots__ = () sql_type = 'INTEGER' fields = [('value', 'i')] class int64(nla_base): __slots__ = () sql_type = 'INTEGER' fields = [('value', 'q')] class be8(nla_base): __slots__ = () sql_type = 'INTEGER' fields = [('value', '>B')] class be16(nla_base): __slots__ = () sql_type = 'INTEGER' fields = [('value', '>H')] class be32(nla_base): __slots__ = () sql_type = 'INTEGER' fields = [('value', '>I')] class be64(nla_base): __slots__ = () sql_type = 'INTEGER' fields = [('value', '>Q')] class ipXaddr(nla_base): __slots__ = () sql_type = 'TEXT' fields = [('value', 's')] family = None def encode(self): self['value'] = inet_pton(self.family, self.value) nla_base.encode(self) def decode(self): nla_base.decode(self) self.value = inet_ntop(self.family, self['value']) class ip4addr(ipXaddr): ''' Explicit IPv4 address type class. ''' __slots__ = () family = AF_INET class ip6addr(ipXaddr): ''' Explicit IPv6 address type class. ''' __slots__ = () family = AF_INET6 class ipaddr(nla_base): ''' This class is used to decode IP addresses according to the family. Socket library currently supports only two families, AF_INET and AF_INET6. We do not specify here the string size, it will be calculated in runtime. ''' __slots__ = () sql_type = 'TEXT' fields = [('value', 's')] def encode(self): # use real provided family, not implicit if self.value.find(':') > -1: family = AF_INET6 else: family = AF_INET self['value'] = inet_pton(family, self.value) nla_base.encode(self) def decode(self): nla_base.decode(self) # use real provided family, not implicit if self.length > 8: family = AF_INET6 else: family = AF_INET self.value = inet_ntop(family, self['value']) class target(nla_base): ''' A universal target class. The target type depends on the msg family: * AF_INET: IPv4 addr, string: "127.0.0.1" * AF_INET6: IPv6 addr, string: "::1" * AF_MPLS: MPLS labels, 0 .. k: [{"label": 0x20, "ttl": 16}, ...] 
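    For AF_MPLS each label record is packed into one big-endian 32-bit
    word -- label in bits 31..12, tc in bits 11..9, bos in bit 8, ttl in
    bits 7..0 -- roughly::

        word = (record.get('label', 0) << 12) | \
               (record.get('tc', 0) << 9) | \
               ((1 if record.get('bos') else 0) << 8) | \
               record.get('ttl', 0)
        self['value'] += struct.pack('>I', word)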
''' __slots__ = () sql_type = 'TEXT' fields = [('value', 's')] family = None own_parent = True def get_family(self): if self.family is not None: return self.family pointer = self while pointer.parent is not None: pointer = pointer.parent return pointer.get('family', AF_UNSPEC) def encode(self): family = self.get_family() if family in (AF_INET, AF_INET6): self['value'] = inet_pton(family, self.value) elif family == AF_MPLS: self['value'] = b'' if isinstance(self.value, (set, list, tuple)): labels = self.value else: if 'label' in self: labels = [{'label': self.get('label', 0), 'tc': self.get('tc', 0), 'bos': self.get('bos', 0), 'ttl': self.get('ttl', 0)}] else: labels = [] for record in labels: label = (record.get('label', 0) << 12) |\ (record.get('tc', 0) << 9) |\ ((1 if record.get('bos') else 0) << 8) |\ record.get('ttl', 0) self['value'] += struct.pack('>I', label) else: raise TypeError('socket family not supported') nla_base.encode(self) def decode(self): nla_base.decode(self) family = self.get_family() if family in (AF_INET, AF_INET6): self.value = inet_ntop(family, self['value']) elif family == AF_MPLS: self.value = [] for i in range(len(self['value']) // 4): label = struct.unpack('>I', self['value'][i * 4:i * 4 + 4])[0] record = {'label': (label & 0xFFFFF000) >> 12, 'tc': (label & 0x00000E00) >> 9, 'bos': (label & 0x00000100) >> 8, 'ttl': label & 0x000000FF} self.value.append(record) else: raise TypeError('socket family not supported') class mpls_target(target): __slots__ = () family = AF_MPLS class l2addr(nla_base): ''' Decode MAC address. ''' __slots__ = () sql_type = 'TEXT' fields = [('value', '=6s')] def encode(self): self['value'] = struct.pack('BBBBBB', *[int(i, 16) for i in self.value.split(':')]) nla_base.encode(self) def decode(self): nla_base.decode(self) self.value = ':'.join('%02x' % (i) for i in struct.unpack('BBBBBB', self['value'])) class hex(nla_base): ''' Represent NLA's content with header as hex string. ''' __slots__ = () fields = [('value', 's')] def decode(self): nla_base.decode(self) self.value = hexdump(self['value']) class array(nla_base): ''' Array of simple data type ''' __slots__ = ( "_fmt", ) fields = [('value', 's')] own_parent = True @property def fmt(self): # try to get format from parent # work only with elementary types if getattr(self, "_fmt", None) is not None: return self._fmt try: fclass = getattr(self.parent, self._nla_init) self._fmt = fclass.fields[0][1] except Exception: self._fmt = self._nla_init return self._fmt def encode(self): fmt = '%s%i%s' % (self.fmt[:-1], len(self.value), self.fmt[-1:]) self['value'] = struct.pack(fmt, *self.value) nla_base.encode(self) def decode(self): nla_base.decode(self) data_length = len(self['value']) element_size = struct.calcsize(self.fmt) array_size = data_length // element_size trail = (data_length % element_size) or -data_length data = self['value'][:-trail] fmt = '%s%i%s' % (self.fmt[:-1], array_size, self.fmt[-1:]) self.value = struct.unpack(fmt, data) class cdata(nla_base): ''' Binary data ''' __slots__ = () fields = [('value', 's')] class string(nla_base): ''' UTF-8 string. 
''' __slots__ = () sql_type = 'TEXT' fields = [('value', 's')] def encode(self): if isinstance(self['value'], str) and sys.version[0] == '3': self['value'] = bytes(self['value'], 'utf-8') nla_base.encode(self) def decode(self): nla_base.decode(self) self.value = self['value'] if sys.version_info[0] >= 3: try: self.value = self.value.decode('utf-8') except UnicodeDecodeError: pass # Failed to decode, keep undecoded value class asciiz(string): ''' Zero-terminated string. ''' __slots__ = () # FIXME: move z-string hacks from general decode here? fields = [('value', 'z')] # FIXME: support NLA_FLAG and NLA_MSECS as well. # # aliases to support standard kernel attributes: # binary = cdata # NLA_BINARY nul_string = asciiz # NLA_NUL_STRING class nla(nla_base, nlmsg_atoms): ''' Main NLA class ''' __slots__ = () def decode(self): nla_base.decode(self) del self['header'] class nlmsg(nlmsg_atoms): ''' Main netlink message class ''' __slots__ = () header = (('length', 'I'), ('type', 'H'), ('flags', 'H'), ('sequence_number', 'I'), ('pid', 'I')) class genlmsg(nlmsg): ''' Generic netlink message ''' __slots__ = () fields = (('cmd', 'B'), ('version', 'B'), ('reserved', 'H')) class ctrlmsg(genlmsg): ''' Netlink control message ''' __slots__ = () # FIXME: to be extended nla_map = (('CTRL_ATTR_UNSPEC', 'none'), ('CTRL_ATTR_FAMILY_ID', 'uint16'), ('CTRL_ATTR_FAMILY_NAME', 'asciiz'), ('CTRL_ATTR_VERSION', 'uint32'), ('CTRL_ATTR_HDRSIZE', 'uint32'), ('CTRL_ATTR_MAXATTR', 'uint32'), ('CTRL_ATTR_OPS', '*ops'), ('CTRL_ATTR_MCAST_GROUPS', '*mcast_groups')) class ops(nla): __slots__ = () nla_map = (('CTRL_ATTR_OP_UNSPEC', 'none'), ('CTRL_ATTR_OP_ID', 'uint32'), ('CTRL_ATTR_OP_FLAGS', 'uint32')) class mcast_groups(nla): __slots__ = () nla_map = (('CTRL_ATTR_MCAST_GRP_UNSPEC', 'none'), ('CTRL_ATTR_MCAST_GRP_NAME', 'asciiz'), ('CTRL_ATTR_MCAST_GRP_ID', 'uint32')) pyroute2-0.5.9/pyroute2/netlink/devlink/0000755000175000017500000000000013621220110020107 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/devlink/__init__.py0000644000175000017500000001145413610051400022227 0ustar peetpeet00000000000000''' devlink module ============== ''' from pyroute2.common import map_namespace from pyroute2.netlink import genlmsg from pyroute2.netlink.generic import GenericNetlinkSocket from pyroute2.netlink.nlsocket import Marshal # devlink commands DEVLINK_CMD_UNSPEC = 0 DEVLINK_CMD_GET = 1 DEVLINK_CMD_SET = 2 DEVLINK_CMD_NEW = 3 DEVLINK_CMD_DEL = 4 DEVLINK_CMD_PORT_GET = 5 DEVLINK_CMD_PORT_SET = 6 DEVLINK_CMD_PORT_NEW = 7 DEVLINK_CMD_PORT_DEL = 8 DEVLINK_CMD_PORT_SPLIT = 9 DEVLINK_CMD_PORT_UNSPLIT = 10 DEVLINK_CMD_SB_GET = 11 DEVLINK_CMD_SB_SET = 12 DEVLINK_CMD_SB_NEW = 13 DEVLINK_CMD_SB_DEL = 14 DEVLINK_CMD_SB_POOL_GET = 15 DEVLINK_CMD_SB_POOL_SET = 16 DEVLINK_CMD_SB_POOL_NEW = 17 DEVLINK_CMD_SB_POOL_DEL = 18 DEVLINK_CMD_SB_PORT_POOL_GET = 19 DEVLINK_CMD_SB_PORT_POOL_SET = 20 DEVLINK_CMD_SB_PORT_POOL_NEW = 21 DEVLINK_CMD_SB_PORT_POOL_DEL = 22 DEVLINK_CMD_SB_TC_POOL_BIND_GET = 23 DEVLINK_CMD_SB_TC_POOL_BIND_SET = 24 DEVLINK_CMD_SB_TC_POOL_BIND_NEW = 25 DEVLINK_CMD_SB_TC_POOL_BIND_DEL = 26 DEVLINK_CMD_SB_OCC_SNAPSHOT = 27 DEVLINK_CMD_SB_OCC_MAX_CLEAR = 28 DEVLINK_CMD_MAX = DEVLINK_CMD_SB_OCC_MAX_CLEAR (DEVLINK_NAMES, DEVLINK_VALUES) = map_namespace('DEVLINK_CMD_', globals()) # port type DEVLINK_PORT_TYPE_NOTSET = 0 DEVLINK_PORT_TYPE_AUTO = 1 DEVLINK_PORT_TYPE_ETH = 2 DEVLINK_PORT_TYPE_IB = 3 # threshold type DEVLINK_SB_POOL_TYPE_INGRESS = 0 DEVLINK_SB_POOL_TYPE_EGRESS = 1 DEVLINK_SB_THRESHOLD_TO_ALPHA_MAX = 20 class 
devlinkcmd(genlmsg): prefix = 'DEVLINK_ATTR_' nla_map = (('DEVLINK_ATTR_UNSPEC', 'none'), ('DEVLINK_ATTR_BUS_NAME', 'asciiz'), ('DEVLINK_ATTR_DEV_NAME', 'asciiz'), ('DEVLINK_ATTR_PORT_INDEX', 'uint32'), ('DEVLINK_ATTR_PORT_TYPE', 'uint16'), ('DEVLINK_ATTR_PORT_DESIRED_TYPE', 'uint16'), ('DEVLINK_ATTR_PORT_NETDEV_IFINDEX', 'uint32'), ('DEVLINK_ATTR_PORT_NETDEV_NAME', 'asciiz'), ('DEVLINK_ATTR_PORT_IBDEV_NAME', 'asciiz'), ('DEVLINK_ATTR_PORT_SPLIT_COUNT', 'uint32'), ('DEVLINK_ATTR_PORT_SPLIT_GROUP', 'uint32'), ('DEVLINK_ATTR_SB_INDEX', 'uint32'), ('DEVLINK_ATTR_SB_SIZE', 'uint32'), ('DEVLINK_ATTR_SB_INGRESS_POOL_COUNT', 'uint16'), ('DEVLINK_ATTR_SB_EGRESS_POOL_COUNT', 'uint16'), ('DEVLINK_ATTR_SB_INGRESS_TC_COUNT', 'uint16'), ('DEVLINK_ATTR_SB_EGRESS_TC_COUNT', 'uint16'), ('DEVLINK_ATTR_SB_POOL_INDEX', 'uint16'), ('DEVLINK_ATTR_SB_POOL_TYPE', 'uint8'), ('DEVLINK_ATTR_SB_POOL_SIZE', 'uint32'), ('DEVLINK_ATTR_SB_POOL_THRESHOLD_TYPE', 'uint8'), ('DEVLINK_ATTR_SB_THRESHOLD', 'uint32'), ('DEVLINK_ATTR_SB_TC_INDEX', 'uint16'), ('DEVLINK_ATTR_SB_OCC_CUR', 'uint32'), ('DEVLINK_ATTR_SB_OCC_MAX', 'uint32')) class MarshalDevlink(Marshal): msg_map = {DEVLINK_CMD_UNSPEC: devlinkcmd, DEVLINK_CMD_GET: devlinkcmd, DEVLINK_CMD_SET: devlinkcmd, DEVLINK_CMD_NEW: devlinkcmd, DEVLINK_CMD_DEL: devlinkcmd, DEVLINK_CMD_PORT_GET: devlinkcmd, DEVLINK_CMD_PORT_SET: devlinkcmd, DEVLINK_CMD_PORT_NEW: devlinkcmd, DEVLINK_CMD_PORT_DEL: devlinkcmd, DEVLINK_CMD_PORT_SPLIT: devlinkcmd, DEVLINK_CMD_PORT_UNSPLIT: devlinkcmd, DEVLINK_CMD_SB_GET: devlinkcmd, DEVLINK_CMD_SB_SET: devlinkcmd, DEVLINK_CMD_SB_NEW: devlinkcmd, DEVLINK_CMD_SB_DEL: devlinkcmd, DEVLINK_CMD_SB_POOL_GET: devlinkcmd, DEVLINK_CMD_SB_POOL_SET: devlinkcmd, DEVLINK_CMD_SB_POOL_NEW: devlinkcmd, DEVLINK_CMD_SB_POOL_DEL: devlinkcmd, DEVLINK_CMD_SB_PORT_POOL_GET: devlinkcmd, DEVLINK_CMD_SB_PORT_POOL_SET: devlinkcmd, DEVLINK_CMD_SB_PORT_POOL_NEW: devlinkcmd, DEVLINK_CMD_SB_PORT_POOL_DEL: devlinkcmd, DEVLINK_CMD_SB_TC_POOL_BIND_GET: devlinkcmd, DEVLINK_CMD_SB_TC_POOL_BIND_SET: devlinkcmd, DEVLINK_CMD_SB_TC_POOL_BIND_NEW: devlinkcmd, DEVLINK_CMD_SB_TC_POOL_BIND_DEL: devlinkcmd, DEVLINK_CMD_SB_OCC_SNAPSHOT: devlinkcmd, DEVLINK_CMD_SB_OCC_MAX_CLEAR: devlinkcmd} def fix_message(self, msg): try: msg['event'] = DEVLINK_VALUES[msg['cmd']] except Exception: pass class DevlinkSocket(GenericNetlinkSocket): def __init__(self): GenericNetlinkSocket.__init__(self) self.marshal = MarshalDevlink() def bind(self, groups=0, **kwarg): GenericNetlinkSocket.bind(self, 'devlink', devlinkcmd, groups, None, **kwarg) pyroute2-0.5.9/pyroute2/netlink/diag/0000755000175000017500000000000013621220110017357 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/diag/__init__.py0000644000175000017500000002545613610051400021506 0ustar peetpeet00000000000000from struct import pack from socket import inet_ntop from socket import AF_UNIX from socket import AF_INET from socket import AF_INET6 from socket import IPPROTO_TCP from pyroute2.netlink import nlmsg from pyroute2.netlink import nla from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink import NLM_F_ROOT from pyroute2.netlink import NLM_F_MATCH from pyroute2.netlink import NETLINK_SOCK_DIAG from pyroute2.netlink.nlsocket import Marshal from pyroute2.netlink.nlsocket import NetlinkSocket SOCK_DIAG_BY_FAMILY = 20 SOCK_DESTROY = 21 # states SS_UNKNOWN = 0 SS_ESTABLISHED = 1 SS_SYN_SENT = 2 SS_SYN_RECV = 3 SS_FIN_WAIT1 = 4 SS_FIN_WAIT2 = 5 SS_TIME_WAIT = 6 SS_CLOSE = 7 SS_CLOSE_WAIT = 8 SS_LAST_ACK = 9 SS_LISTEN = 10 SS_CLOSING = 11 
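# The SS_* values enumerate kernel TCP socket states; SS_MAX and the
# SS_ALL / SS_CONN bitmasks derived below are meant to be passed as the
# `states` argument of DiagSocket.get_sock_stats(), and individual bits
# can be combined as well, e.g. (illustrative only):
#
#   states = (1 << SS_ESTABLISHED) | (1 << SS_LISTEN)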
SS_MAX = 12 SS_ALL = ((1 << SS_MAX) - 1) SS_CONN = (SS_ALL & ~((1 << SS_LISTEN) | (1 << SS_CLOSE) | (1 << SS_TIME_WAIT) | (1 << SS_SYN_RECV))) # multicast groups ids (for use with {add,drop}_membership) SKNLGRP_NONE = 0 SKNLGRP_INET_TCP_DESTROY = 1 SKNLGRP_INET_UDP_DESTROY = 2 SKNLGRP_INET6_TCP_DESTROY = 3 SKNLGRP_INET6_UDP_DESTROY = 4 class sock_diag_req(nlmsg): fields = (('sdiag_family', 'B'), ('sdiag_protocol', 'B')) UDIAG_SHOW_NAME = 0x01 UDIAG_SHOW_VFS = 0x02 UDIAG_SHOW_PEER = 0x04 UDIAG_SHOW_ICONS = 0x08 UDIAG_SHOW_RQLEN = 0x10 UDIAG_SHOW_MEMINFO = 0x20 class inet_addr_codec(nlmsg): def encode(self): # FIXME: add human-friendly API to specify IP addresses as str # (see also decode()) if self['idiag_src'] == 0: self['idiag_src'] = (0, 0, 0, 0) if self['idiag_dst'] == 0: self['idiag_dst'] = (0, 0, 0, 0) nlmsg.encode(self) def decode(self): nlmsg.decode(self) if self[self.ffname] == AF_INET: self['idiag_dst'] = inet_ntop(AF_INET, pack('>I', self['idiag_dst'][0])) self['idiag_src'] = inet_ntop(AF_INET, pack('>I', self['idiag_src'][0])) elif self[self.ffname] == AF_INET6: self['idiag_dst'] = inet_ntop(AF_INET6, pack('>IIII', *self['idiag_dst'])) self['idiag_src'] = inet_ntop(AF_INET6, pack('>IIII', *self['idiag_src'])) class inet_diag_req(inet_addr_codec): ffname = 'sdiag_family' fields = (('sdiag_family', 'B'), ('sdiag_protocol', 'B'), ('idiag_ext', 'B'), ('__pad', 'B'), ('idiag_states', 'I'), ('idiag_sport', '>H'), ('idiag_dport', '>H'), ('idiag_src', '>4I'), ('idiag_dst', '>4I'), ('idiag_if', 'I'), ('idiag_cookie', 'Q')) class inet_diag_msg(inet_addr_codec): ffname = 'idiag_family' fields = (('idiag_family', 'B'), ('idiag_state', 'B'), ('idiag_timer', 'B'), ('idiag_retrans', 'B'), ('idiag_sport', '>H'), ('idiag_dport', '>H'), ('idiag_src', '>4I'), ('idiag_dst', '>4I'), ('idiag_if', 'I'), ('idiag_cookie', 'Q'), ('idiag_expires', 'I'), ('idiag_rqueue', 'I'), ('idiag_wqueue', 'I'), ('idiag_uid', 'I'), ('idiag_inode', 'I')) nla_map = (('INET_DIAG_NONE', 'none'), ('INET_DIAG_MEMINFO', 'inet_diag_meminfo'), # FIXME: must be protocol specific? 
('INET_DIAG_INFO', 'tcp_info'), ('INET_DIAG_VEGASINFO', 'tcpvegas_info'), ('INET_DIAG_CONG', 'asciiz'), ('INET_DIAG_TOS', 'hex'), ('INET_DIAG_TCLASS', 'hex'), ('INET_DIAG_SKMEMINFO', 'hex'), ('INET_DIAG_SHUTDOWN', 'uint8'), ('INET_DIAG_DCTCPINFO', 'tcp_dctcp_info'), ('INET_DIAG_PROTOCOL', 'hex'), ('INET_DIAG_SKV6ONLY', 'uint8'), ('INET_DIAG_LOCALS', 'hex'), ('INET_DIAG_PEERS', 'hex'), ('INET_DIAG_PAD', 'hex'), ('INET_DIAG_MARK', 'hex'), ('INET_DIAG_BBRINFO', 'tcp_bbr_info'), ('INET_DIAG_CLASS_ID', 'hex'), ('INET_DIAG_MD5SIG', 'hex')) class inet_diag_meminfo(nla): fields = (('idiag_rmem', 'I'), ('idiag_wmem', 'I'), ('idiag_fmem', 'I'), ('idiag_tmem', 'I')) class tcpvegas_info(nla): fields = (('tcpv_enabled', 'I'), ('tcpv_rttcnt', 'I'), ('tcpv_rtt', 'I'), ('tcpv_minrtt', 'I')) class tcp_dctcp_info(nla): fields = (('dctcp_enabled', 'H'), ('dctcp_ce_state', 'H'), ('dctcp_alpha', 'I'), ('dctcp_ab_ecn', 'I'), ('dctcp_ab_tot', 'I')) class tcp_bbr_info(nla): fields = (('bbr_bw_lo', 'I'), ('bbr_bw_hi', 'I'), ('bbr_min_rtt', 'I'), ('bbr_pacing_gain', 'I'), ('bbr_cwnd_gain', 'I')) class tcp_info(nla): fields = (('tcpi_state', 'B'), ('tcpi_ca_state', 'B'), ('tcpi_retransmits', 'B'), ('tcpi_probes', 'B'), ('tcpi_backoff', 'B'), ('tcpi_options', 'B'), ('tcpi_snd_wscale', 'B'), # tcpi_rcv_wscale -- in decode() ('tcpi_delivery_rate_app_limited', 'B'), ('tcpi_rto', 'I'), ('tcpi_ato', 'I'), ('tcpi_snd_mss', 'I'), ('tcpi_rcv_mss', 'I'), ('tcpi_unacked', 'I'), ('tcpi_sacked', 'I'), ('tcpi_lost', 'I'), ('tcpi_retrans', 'I'), ('tcpi_fackets', 'I'), # Times ('tcpi_last_data_sent', 'I'), ('tcpi_last_ack_sent', 'I'), ('tcpi_last_data_recv', 'I'), ('tcpi_last_ack_recv', 'I'), # Metrics ('tcpi_pmtu', 'I'), ('tcpi_rcv_ssthresh', 'I'), ('tcpi_rtt', 'I'), ('tcpi_rttvar', 'I'), ('tcpi_snd_ssthresh', 'I'), ('tcpi_snd_cwnd', 'I'), ('tcpi_advmss', 'I'), ('tcpi_reordering', 'I'), ('tcpi_rcv_rtt', 'I'), ('tcpi_rcv_space', 'I'), ('tcpi_total_retrans', 'I'), ('tcpi_pacing_rate', 'Q'), ('tcpi_max_pacing_rate', 'Q'), ('tcpi_bytes_acked', 'Q'), ('tcpi_bytes_received', 'Q'), ('tcpi_segs_out', 'I'), ('tcpi_segs_in', 'I'), ('tcpi_notsent_bytes', 'I'), ('tcpi_min_rtt', 'I'), ('tcpi_data_segs_in', 'I'), ('tcpi_data_segs_out', 'I'), ('tcpi_delivery_rate', 'Q'), ('tcpi_busy_time', 'Q'), ('tcpi_rwnd_limited', 'Q'), ('tcpi_sndbuf_limited', 'Q')) def decode(self): # Fix tcpi_rcv_scale amd delivery_rate bit fields. 
# In the C: # # __u8 tcpi_snd_wscale : 4, tcpi_rcv_wscale : 4; # __u8 tcpi_delivery_rate_app_limited:1; # nla.decode(self) self['tcpi_rcv_wscale'] = self['tcpi_snd_wscale'] & 0xf self['tcpi_snd_wscale'] = self['tcpi_snd_wscale'] & 0xf0 >> 4 self['tcpi_delivery_rate_app_limited'] = \ self['tcpi_delivery_rate_app_limited'] & 0x80 >> 7 class unix_diag_req(nlmsg): fields = (('sdiag_family', 'B'), ('sdiag_protocol', 'B'), ('__pad', 'H'), ('udiag_states', 'I'), ('udiag_ino', 'I'), ('udiag_show', 'I'), ('udiag_cookie', 'Q')) class unix_diag_msg(nlmsg): fields = (('udiag_family', 'B'), ('udiag_type', 'B'), ('udiag_state', 'B'), ('__pad', 'B'), ('udiag_ino', 'I'), ('udiag_cookie', 'Q')) nla_map = (('UNIX_DIAG_NAME', 'asciiz'), ('UNIX_DIAG_VFS', 'unix_diag_vfs'), ('UNIX_DIAG_PEER', 'uint32'), ('UNIX_DIAG_ICONS', 'hex'), ('UNIX_DIAG_RQLEN', 'unix_diag_rqlen'), ('UNIX_DIAG_MEMINFO', 'hex'), ('UNIX_DIAG_SHUTDOWN', 'uint8')) class unix_diag_vfs(nla): fields = (('udiag_vfs_ino', 'I'), ('udiag_vfs_dev', 'I')) class unix_diag_rqlen(nla): fields = (('udiag_rqueue', 'I'), ('udiag_wqueue', 'I')) class MarshalDiag(Marshal): type_format = 'B' # The family goes after the nlmsg header, # IHHII = 4 + 2 + 2 + 4 + 4 = 16 bytes type_offset = 16 # Please notice that the SOCK_DIAG Marshal # uses not the nlmsg type, but sdiag_family # to choose the proper class msg_map = {AF_UNIX: unix_diag_msg, AF_INET: inet_diag_msg, AF_INET6: inet_diag_msg} # error type NLMSG_ERROR == 2 == AF_INET, # it doesn't work for DiagSocket that way, # so disable the error messages for now error_type = -1 class DiagSocket(NetlinkSocket): ''' Usage:: from pyroute2 import DiagSocket with DiagSocket() as ds: ds.bind() sstats = ds.get_sock_stats() ''' def __init__(self, fileno=None): super(DiagSocket, self).__init__(NETLINK_SOCK_DIAG) self.marshal = MarshalDiag() def get_sock_stats(self, family=AF_UNIX, states=SS_ALL, protocol=IPPROTO_TCP, extensions=0, show=(UDIAG_SHOW_NAME | UDIAG_SHOW_VFS | UDIAG_SHOW_PEER | UDIAG_SHOW_ICONS)): ''' Get sockets statistics. ACHTUNG: the get_sock_stats() signature will be changed before the next release, this one is a WIP-code! 
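        A hypothetical call for established IPv4 TCP sockets (parameter
        values are illustrative only)::

            from socket import AF_INET, IPPROTO_TCP

            with DiagSocket() as ds:
                ds.bind()
                stats = ds.get_sock_stats(family=AF_INET,
                                          states=SS_CONN,
                                          protocol=IPPROTO_TCP)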
''' if family == AF_UNIX: req = unix_diag_req() req['udiag_states'] = states req['udiag_show'] = show elif family in (AF_INET, AF_INET6): req = inet_diag_req() req['idiag_states'] = states req['sdiag_protocol'] = protocol req['idiag_ext'] = extensions else: raise NotImplementedError() req['sdiag_family'] = family return self.nlm_request(req, SOCK_DIAG_BY_FAMILY, NLM_F_REQUEST | NLM_F_ROOT | NLM_F_MATCH) pyroute2-0.5.9/pyroute2/netlink/event/0000755000175000017500000000000013621220110017574 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/event/__init__.py0000644000175000017500000000134013610051400021705 0ustar peetpeet00000000000000 from pyroute2.config import kernel from pyroute2.netlink.generic import GenericNetlinkSocket class EventSocket(GenericNetlinkSocket): marshal_class = None genl_family = None def __init__(self): GenericNetlinkSocket.__init__(self) self.marshal = self.marshal_class() if kernel[0] <= 2: self.bind(groups=0xffffff) else: self.bind() for group in self.mcast_groups: self.add_membership(group) def bind(self, groups=0, **kwarg): GenericNetlinkSocket.bind(self, self.genl_family, self.marshal_class.msg_map[0], groups, None, **kwarg) pyroute2-0.5.9/pyroute2/netlink/event/acpi_event.py0000644000175000017500000000173013610051400022266 0ustar peetpeet00000000000000''' ''' from pyroute2.netlink import nla from pyroute2.netlink import genlmsg from pyroute2.netlink.nlsocket import Marshal from pyroute2.netlink.event import EventSocket ACPI_GENL_CMD_UNSPEC = 0 ACPI_GENL_CMD_EVENT = 1 class acpimsg(genlmsg): nla_map = (('ACPI_GENL_ATTR_UNSPEC', 'none'), ('ACPI_GENL_ATTR_EVENT', 'acpiev')) class acpiev(nla): fields = (('device_class', '20s'), ('bus_id', '15s'), ('type', 'I'), ('data', 'I')) def decode(self): nla.decode(self) dc = self['device_class'] bi = self['bus_id'] self['device_class'] = dc[:dc.find(b'\x00')] self['bus_id'] = bi[:bi.find(b'\x00')] class MarshalAcpiEvent(Marshal): msg_map = {ACPI_GENL_CMD_UNSPEC: acpimsg, ACPI_GENL_CMD_EVENT: acpimsg} class AcpiEventSocket(EventSocket): marshal_class = MarshalAcpiEvent genl_family = 'acpi_event' pyroute2-0.5.9/pyroute2/netlink/event/dquot.py0000644000175000017500000000233213610051400021304 0ustar peetpeet00000000000000''' VFS_DQUOT module ================ Usage:: from pyroute2 import DQuotSocket ds = DQuotSocket() msgs = ds.get() Please notice, that `.get()` always returns a list, even if only one message arrived. 
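Since quota events arrive as they happen, a simple monitoring loop
could look like this (a sketch only)::

    while True:
        for msg in ds.get():
            print(msg.get_attr('QUOTA_NL_A_EXCESS_ID'))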
To get NLA values:: msg = msgs[0] uid = msg.get_attr('QUOTA_NL_A_EXCESS_ID') major = msg.get_attr('QUOTA_NL_A_DEV_MAJOR') minor = msg.get_attr('QUOTA_NL_A_DEV_MINOR') ''' from pyroute2.netlink import genlmsg from pyroute2.netlink.nlsocket import Marshal from pyroute2.netlink.event import EventSocket QUOTA_NL_C_UNSPEC = 0 QUOTA_NL_C_WARNING = 1 class dquotmsg(genlmsg): prefix = 'QUOTA_NL_A_' nla_map = (('QUOTA_NL_A_UNSPEC', 'none'), ('QUOTA_NL_A_QTYPE', 'uint32'), ('QUOTA_NL_A_EXCESS_ID', 'uint64'), ('QUOTA_NL_A_WARNING', 'uint32'), ('QUOTA_NL_A_DEV_MAJOR', 'uint32'), ('QUOTA_NL_A_DEV_MINOR', 'uint32'), ('QUOTA_NL_A_CAUSED_ID', 'uint64'), ('QUOTA_NL_A_PAD', 'uint64')) class MarshalDQuot(Marshal): msg_map = {QUOTA_NL_C_UNSPEC: dquotmsg, QUOTA_NL_C_WARNING: dquotmsg} class DQuotSocket(EventSocket): marshal_class = MarshalDQuot genl_family = 'VFS_DQUOT' pyroute2-0.5.9/pyroute2/netlink/event/thermal_event.py0000644000175000017500000000120413610051400023002 0ustar peetpeet00000000000000''' TODO: add THERMAL_GENL_ATTR_EVENT structure ''' from pyroute2.netlink import genlmsg from pyroute2.netlink.nlsocket import Marshal from pyroute2.netlink.event import EventSocket THERMAL_GENL_CMD_UNSPEC = 0 THERMAL_GENL_CMD_EVENT = 1 class thermal_msg(genlmsg): nla_map = (('THERMAL_GENL_ATTR_UNSPEC', 'none'), ('THERMAL_GENL_ATTR_EVENT', 'hex')) # to be done class MarshalThermalEvent(Marshal): msg_map = {THERMAL_GENL_CMD_UNSPEC: thermal_msg, THERMAL_GENL_CMD_EVENT: thermal_msg} class ThermalEventSocket(EventSocket): marshal_class = MarshalThermalEvent genl_family = 'thermal_event' pyroute2-0.5.9/pyroute2/netlink/exceptions.py0000644000175000017500000000200513610051400021205 0ustar peetpeet00000000000000import os class NetlinkError(Exception): ''' Base netlink error ''' def __init__(self, code, msg=None): msg = msg or os.strerror(code) super(NetlinkError, self).__init__(code, msg) self.code = code class NetlinkDecodeError(Exception): ''' Base decoding error class. Incapsulates underlying error for the following analysis ''' def __init__(self, exception): self.exception = exception class NetlinkHeaderDecodeError(NetlinkDecodeError): ''' The error occured while decoding a header ''' pass class NetlinkDataDecodeError(NetlinkDecodeError): ''' The error occured while decoding the message fields ''' pass class NetlinkNLADecodeError(NetlinkDecodeError): ''' The error occured while decoding NLA chain ''' pass class IPSetError(NetlinkError): ''' Netlink error with IPSet special error codes. Messages are imported from errcode.c ''' pass class SkipInode(Exception): pass pyroute2-0.5.9/pyroute2/netlink/generic/0000755000175000017500000000000013621220110020067 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/generic/__init__.py0000644000175000017500000000517313610051400022210 0ustar peetpeet00000000000000# -*- coding: utf-8 -*- ''' Generic netlink =============== Describe ''' import errno import logging from pyroute2.netlink import CTRL_CMD_GETFAMILY from pyroute2.netlink import GENL_ID_CTRL from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink import SOL_NETLINK from pyroute2.netlink import NETLINK_ADD_MEMBERSHIP from pyroute2.netlink import NETLINK_DROP_MEMBERSHIP from pyroute2.netlink import ctrlmsg from pyroute2.netlink.nlsocket import NetlinkSocket class GenericNetlinkSocket(NetlinkSocket): ''' Low-level socket interface. Provides all the usual socket does, can be used in poll/select, doesn't create any implicit threads. 
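    A typical subclass binds to a generic netlink family by name and
    supplies a message class to parse the replies with, roughly (the
    family and class names below are illustrative)::

        class MySocket(GenericNetlinkSocket):

            def bind(self, groups=0, **kwarg):
                GenericNetlinkSocket.bind(self, 'my_family', mymsg,
                                          groups, None, **kwarg)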
''' mcast_groups = {} def bind(self, proto, msg_class, groups=0, pid=None, **kwarg): ''' Bind the socket and performs generic netlink proto lookup. The `proto` parameter is a string, like "TASKSTATS", `msg_class` is a class to parse messages with. ''' NetlinkSocket.bind(self, groups, pid, **kwarg) self.marshal.msg_map[GENL_ID_CTRL] = ctrlmsg msg = self.discovery(proto) self.prid = msg.get_attr('CTRL_ATTR_FAMILY_ID') self.mcast_groups = \ dict([(x.get_attr('CTRL_ATTR_MCAST_GRP_NAME'), x.get_attr('CTRL_ATTR_MCAST_GRP_ID')) for x in msg.get_attr('CTRL_ATTR_MCAST_GROUPS', [])]) self.marshal.msg_map[self.prid] = msg_class def add_membership(self, group): self.setsockopt(SOL_NETLINK, NETLINK_ADD_MEMBERSHIP, self.mcast_groups[group]) def drop_membership(self, group): self.setsockopt(SOL_NETLINK, NETLINK_DROP_MEMBERSHIP, self.mcast_groups[group]) def discovery(self, proto): ''' Resolve generic netlink protocol -- takes a string as the only parameter, return protocol description ''' msg = ctrlmsg() msg['cmd'] = CTRL_CMD_GETFAMILY msg['version'] = 1 msg['attrs'].append(['CTRL_ATTR_FAMILY_NAME', proto]) msg['header']['type'] = GENL_ID_CTRL msg['header']['flags'] = NLM_F_REQUEST msg['header']['pid'] = self.pid msg.encode() self.sendto(msg.data, (0, 0)) msg = self.get()[0] err = msg['header'].get('error', None) if err is not None: if hasattr(err, 'code') and err.code == errno.ENOENT: logging.error('Generic netlink protocol %s not found' % proto) logging.error('Please check if the protocol module is loaded') raise err return msg pyroute2-0.5.9/pyroute2/netlink/generic/ethtool.py0000644000175000017500000001637413621076743022157 0ustar peetpeet00000000000000from pyroute2.netlink import genlmsg from pyroute2.netlink import nla from pyroute2.netlink import NLA_F_NESTED from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink.exceptions import NetlinkError from pyroute2.netlink.generic import GenericNetlinkSocket ETHTOOL_GENL_NAME = "ethtool" ETHTOOL_GENL_VERSION = 1 ETHTOOL_MSG_USER_NONE = 0 ETHTOOL_MSG_STRSET_GET = 1 ETHTOOL_MSG_LINKINFO_GET = 2 ETHTOOL_MSG_LINKINFO_SET = 3 ETHTOOL_MSG_LINKMODES_GET = 4 ETHTOOL_MSG_LINKMODES_SET = 5 ETHTOOL_MSG_LINKSTATE_GET = 6 ETHTOOL_MSG_DEBUG_GET = 7 ETHTOOL_MSG_DEBUG_SET = 8 ETHTOOL_MSG_WOL_GET = 9 ETHTOOL_MSG_WOL_SET = 10 class ethtoolheader(nla): nla_flags = NLA_F_NESTED nla_map = (('ETHTOOL_A_HEADER_UNSPEC', 'none'), ('ETHTOOL_A_HEADER_DEV_INDEX', 'uint32'), ('ETHTOOL_A_HEADER_DEV_NAME', 'asciiz'), ('ETHTOOL_A_HEADER_FLAGS', 'uint32')) class ethtoolbitset(nla): nla_flags = NLA_F_NESTED nla_map = (("ETHTOOL_A_BITSET_UNSPEC", 'none'), ("ETHTOOL_A_BITSET_NOMASK", "flag"), ("ETHTOOL_A_BITSET_SIZE", 'uint32'), ("ETHTOOL_A_BITSET_BITS", 'bitset_bits'), ("ETHTOOL_A_BITSET_VALUE", 'hex'), ("ETHTOOL_A_BITSET_MASK", 'hex')) class bitset_bits(nla): nla_flags = NLA_F_NESTED nla_map = (('ETHTOOL_A_BITSET_BIT_UNSPEC', 'none'), ('ETHTOOL_A_BITSET_BITS_BIT', 'bitset_bits_bit')) class bitset_bits_bit(nla): nla_flags = NLA_F_NESTED nla_map = (('ETHTOOL_A_BITSET_BIT_UNSPEC', 'none'), ('ETHTOOL_A_BITSET_BIT_INDEX', 'uint32'), ('ETHTOOL_A_BITSET_BIT_NAME', 'asciiz'), ('ETHTOOL_A_BITSET_BIT_VALUE', 'flag')) class ethtool_strset_msg(genlmsg): nla_map = (('ETHTOOL_A_STRSET_UNSPEC', 'none'), ('ETHTOOL_A_STRSET_HEADER', 'ethtoolheader'), ('ETHTOOL_A_STRSET_STRINGSETS', 'strings_sets'), ('ETHTOOL_A_STRSET_COUNTS_ONLY', 'flag')) ethtoolheader = ethtoolheader class strings_sets(nla): nla_flags = NLA_F_NESTED nla_map = (('ETHTOOL_A_STRINGSETS_UNSPEC', 'none'), 
('ETHTOOL_A_STRINGSETS_STRINGSET', 'string_set')) class string_set(nla): nla_flags = NLA_F_NESTED nla_map = (('ETHTOOL_A_STRINGSET_UNSPEC', 'none'), ('ETHTOOL_A_STRINGSET_ID', 'uint32'), ('ETHTOOL_A_STRINGSET_COUNT', 'uint32'), ('ETHTOOL_A_STRINGSET_STRINGS', 'stringset_strings')) class stringset_strings(nla): nla_flags = NLA_F_NESTED nla_map = (('ETHTOOL_A_STRINGS_UNSPEC', 'none'), ('ETHTOOL_A_STRINGS_STRING', 'strings_string')) class strings_string(nla): nla_flags = NLA_F_NESTED nla_map = (('ETHTOOL_A_STRING_UNSPEC', 'none'), ('ETHTOOL_A_STRING_INDEX', 'uint32'), ('ETHTOOL_A_STRING_VALUE', 'asciiz')) class ethtool_linkinfo_msg(genlmsg): nla_map = (('ETHTOOL_A_LINKINFO_UNSPEC', 'none'), ('ETHTOOL_A_LINKINFO_HEADER', 'ethtoolheader'), ('ETHTOOL_A_LINKINFO_PORT', 'uint8'), ('ETHTOOL_A_LINKINFO_PHYADDR', 'uint8'), ('ETHTOOL_A_LINKINFO_TP_MDIX', 'uint8'), ('ETHTOOL_A_LINKINFO_TP_MDIX_CTR', 'uint8'), ('ETHTOOL_A_LINKINFO_TRANSCEIVER', 'uint8')) ethtoolheader = ethtoolheader class ethtool_linkmode_msg(genlmsg): nla_map = (('ETHTOOL_A_LINKMODES_UNSPEC', 'none'), ('ETHTOOL_A_LINKMODES_HEADER', 'ethtoolheader'), ('ETHTOOL_A_LINKMODES_AUTONEG', 'uint8'), ('ETHTOOL_A_LINKMODES_OURS', 'ethtoolbitset'), ('ETHTOOL_A_LINKMODES_PEER', 'ethtoolbitset'), ('ETHTOOL_A_LINKMODES_SPEED', 'uint32'), ('ETHTOOL_A_LINKMODES_DUPLEX', 'uint8')) ethtoolheader = ethtoolheader ethtoolbitset = ethtoolbitset class ethtool_linkstate_msg(genlmsg): nla_map = (('ETHTOOL_A_LINKSTATE_UNSPEC', 'none'), ('ETHTOOL_A_LINKSTATE_HEADER', 'ethtoolheader'), ('ETHTOOL_A_LINKSTATE_LINK', 'uint8')) ethtoolheader = ethtoolheader class ethtool_wol_msg(genlmsg): nla_map = (('ETHTOOL_A_WOL_UNSPE', 'none'), ('ETHTOOL_A_WOL_HEADER', 'ethtoolheader'), ('ETHTOOL_A_WOL_MODES', 'ethtoolbitset'), ('ETHTOOL_A_WOL_SOPASS', 'hex')) ethtoolheader = ethtoolheader ethtoolbitset = ethtoolbitset class NlEthtool(GenericNetlinkSocket): def _do_request(self, msg, msg_flags=NLM_F_REQUEST): return self.nlm_request(msg, msg_type=self.prid, msg_flags=msg_flags) def is_nlethtool_in_kernel(self): try: self.bind(ETHTOOL_GENL_NAME, ethtool_linkinfo_msg) except NetlinkError: return False return True def _get_dev_header(self, ifname=None, ifindex=None): if ifindex is not None: return {'attrs': [['ETHTOOL_A_HEADER_DEV_INDEX', ifindex]]} elif ifname is not None: return {'attrs': [['ETHTOOL_A_HEADER_DEV_NAME', ifname]]} else: raise ValueError("Need ifname or ifindex") def get_linkinfo(self, ifname=None, ifindex=None): msg = ethtool_linkinfo_msg() msg["cmd"] = ETHTOOL_MSG_LINKINFO_GET msg['version'] = ETHTOOL_GENL_VERSION msg["attrs"].append(('ETHTOOL_A_LINKINFO_HEADER', self._get_dev_header(ifname, ifindex))) self.bind(ETHTOOL_GENL_NAME, ethtool_linkinfo_msg) return self._do_request(msg) def get_linkmode(self, ifname=None, ifindex=None): msg = ethtool_linkmode_msg() msg["cmd"] = ETHTOOL_MSG_LINKMODES_GET msg['version'] = ETHTOOL_GENL_VERSION msg["attrs"].append(('ETHTOOL_A_LINKMODES_HEADER', self._get_dev_header(ifname, ifindex))) self.bind(ETHTOOL_GENL_NAME, ethtool_linkmode_msg) return self._do_request(msg) def get_stringset(self, ifname=None, ifindex=None): msg = ethtool_strset_msg() msg["cmd"] = ETHTOOL_MSG_STRSET_GET msg['version'] = ETHTOOL_GENL_VERSION msg["attrs"].append(('ETHTOOL_A_STRSET_HEADER', self._get_dev_header(ifname, ifindex))) self.bind(ETHTOOL_GENL_NAME, ethtool_strset_msg) return self._do_request(msg) def get_linkstate(self, ifname=None, ifindex=None): msg = ethtool_linkstate_msg() msg["cmd"] = ETHTOOL_MSG_LINKSTATE_GET msg['version'] = 
ETHTOOL_GENL_VERSION msg["attrs"].append(('ETHTOOL_A_LINKSTATE_HEADER', self._get_dev_header(ifname, ifindex))) self.bind(ETHTOOL_GENL_NAME, ethtool_linkstate_msg) return self._do_request(msg) def get_wol(self, ifname=None, ifindex=None): msg = ethtool_wol_msg() msg["cmd"] = ETHTOOL_MSG_WOL_GET msg['version'] = ETHTOOL_GENL_VERSION msg["attrs"].append(('ETHTOOL_A_WOL_HEADER', self._get_dev_header(ifname, ifindex))) self.bind(ETHTOOL_GENL_NAME, ethtool_wol_msg) return self._do_request(msg) pyroute2-0.5.9/pyroute2/netlink/generic/wireguard.py0000644000175000017500000002761313621021764022461 0ustar peetpeet00000000000000''' Usage:: # Imports from pyroute2 import IPDB, WireGuard IFNAME = 'wg1' # Create a WireGuard interface with IPDB() as ip: wg1 = ip.create(kind='wireguard', ifname=IFNAME) wg1.add_ip('10.0.0.1/24') wg1.up() wg1.commit() # Create WireGuard object wg = WireGuard() # Add a WireGuard configuration + first peer peer = {'public_key': 'TGFHcm9zc2VCaWNoZV9DJ2VzdExhUGx1c0JlbGxlPDM=', 'endpoint_addr': '8.8.8.8', 'endpoint_port': 8888, 'persistent_keepalive': 15, 'allowed_ips': ['10.0.0.0/24', '8.8.8.8/32']} wg.set(IFNAME, private_key='RCdhcHJlc0JpY2hlLEplU2VyYWlzTGFQbHVzQm9ubmU=', fwmark=0x1337, listen_port=2525, peer=peer) # Add second peer with preshared key peer = {'public_key': 'RCdBcHJlc0JpY2hlLFZpdmVMZXNQcm9iaW90aXF1ZXM=', 'preshared_key': 'Pz8/V2FudFRvVHJ5TXlBZXJvR3Jvc3NlQmljaGU/Pz8=', 'endpoint_addr': '8.8.8.8', 'endpoint_port': 9999, 'persistent_keepalive': 25, 'allowed_ips': ['::/0']} wg.set(IFNAME, peer=peer) # Delete second peer peer = {'public_key': 'RCdBcHJlc0JpY2hlLFZpdmVMZXNQcm9iaW90aXF1ZXM=', 'remove': True} wg.set(IFNAME, peer=peer) # Get information of the interface wg.info(IFNAME) # Get specific value from the interface wg.info(IFNAME)[0].WGDEVICE_A_PRIVATE_KEY.value NOTES: * Using `set` method only requires an interface name. 
* The `peer` structure is described as follow:: struct peer_s { public_key: # Base64 public key - required remove: # Boolean - optional preshared_key: # Base64 preshared key - optional endpoint_addr: # IPv4 or IPv6 endpoint - optional endpoint_port : # endpoint Port - required only if endpoint_addr persistent_keepalive: # time in seconds to send keep alive - optional allowed_ips: # list of CIDRs allowed - optional } ''' from base64 import b64encode, b64decode from binascii import a2b_hex import errno import logging from socket import inet_ntoa, inet_aton, inet_pton, AF_INET, AF_INET6 from struct import pack, unpack from time import ctime from pyroute2.netlink import genlmsg from pyroute2.netlink import nla from pyroute2.netlink import NLM_F_ACK from pyroute2.netlink import NLM_F_DUMP from pyroute2.netlink import NLA_F_NESTED from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink.generic import GenericNetlinkSocket # Defines from uapi/wireguard.h WG_GENL_NAME = "wireguard" WG_GENL_VERSION = 1 WG_KEY_LEN = 32 # WireGuard Device commands WG_CMD_GET_DEVICE = 0 WG_CMD_SET_DEVICE = 1 # Wireguard Device attributes WGDEVICE_A_UNSPEC = 0 WGDEVICE_A_IFINDEX = 1 WGDEVICE_A_IFNAME = 2 WGDEVICE_A_PRIVATE_KEY = 3 WGDEVICE_A_PUBLIC_KEY = 4 WGDEVICE_A_FLAGS = 5 WGDEVICE_A_LISTEN_PORT = 6 WGDEVICE_A_FWMARK = 7 WGDEVICE_A_PEERS = 8 # WireGuard Device flags WGDEVICE_F_REPLACE_PEERS = 1 # WireGuard Allowed IP attributes WGALLOWEDIP_A_UNSPEC = 0 WGALLOWEDIP_A_FAMILY = 1 WGALLOWEDIP_A_IPADDR = 2 WGALLOWEDIP_A_CIDR_MASK = 3 # WireGuard Peer flags WGPEER_F_REMOVE_ME = 0 WGPEER_F_REPLACE_ALLOWEDIPS = 1 WGPEER_F_UPDATE_ONLY = 2 # Specific defines WG_MAX_PEERS = 1000 WG_MAX_ALLOWEDIPS = 1000 class wgmsg(genlmsg): prefix = 'WGDEVICE_A_' nla_map = (('WGDEVICE_A_UNSPEC', 'none'), ('WGDEVICE_A_IFINDEX', 'uint32'), ('WGDEVICE_A_IFNAME', 'asciiz'), ('WGDEVICE_A_PRIVATE_KEY', 'parse_wg_key'), ('WGDEVICE_A_PUBLIC_KEY', 'parse_wg_key'), ('WGDEVICE_A_FLAGS', 'uint32'), ('WGDEVICE_A_LISTEN_PORT', 'uint16'), ('WGDEVICE_A_FWMARK', 'uint32'), ('WGDEVICE_A_PEERS', '*wgdevice_peer')) class wgdevice_peer(nla): prefix = 'WGPEER_A_' nla_flags = NLA_F_NESTED nla_map = (('WGPEER_A_UNSPEC', 'none'), ('WGPEER_A_PUBLIC_KEY', 'parse_peer_key'), ('WGPEER_A_PRESHARED_KEY', 'parse_peer_key'), ('WGPEER_A_FLAGS', 'uint32'), ('WGPEER_A_ENDPOINT', 'parse_endpoint'), ('WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL', 'uint16'), ('WGPEER_A_LAST_HANDSHAKE_TIME', 'parse_handshake_time'), ('WGPEER_A_RX_BYTES', 'uint64'), ('WGPEER_A_TX_BYTES', 'uint64'), ('WGPEER_A_ALLOWEDIPS', '*wgpeer_allowedip'), ('WGPEER_A_PROTOCOL_VERSION', 'uint32')) class parse_peer_key(nla): fields = (('key', 's'), ) def decode(self): nla.decode(self) self['value'] = b64encode(self['value']) def encode(self): self['key'] = b64decode(self['value']) nla.encode(self) class parse_endpoint(nla): fields = (('family', 'H'), ('port', '>H'), ('addr4', '>I'), ('addr6', 's')) def decode(self): nla.decode(self) if self['family'] == AF_INET: self['addr'] = inet_ntoa(pack('>I', self['addr4'])) else: self['addr'] = inet_ntoa(AF_INET6, self['addr6']) del self['addr4'] del self['addr6'] def encode(self): if self['addr'].find(":") > -1: self['family'] = AF_INET6 self['addr4'] = 0 # Set to NULL self['addr6'] = inet_pton(AF_INET6, self['addr']) else: self['family'] = AF_INET self['addr4'] = unpack('>I', inet_aton(self['addr']))[0] self['addr6'] = b'\x00\x00\x00\x00\x00\x00\x00\x00' self['port'] = int(self['port']) nla.encode(self) class parse_handshake_time(nla): fields = (('tv_sec', 'Q'), 
('tv_nsec', 'Q')) def decode(self): nla.decode(self) self['latest handshake'] = ctime(self['tv_sec']) class wgpeer_allowedip(nla): prefix = 'WGALLOWEDIP_A_' nla_flags = NLA_F_NESTED nla_map = (('WGALLOWEDIP_A_UNSPEC', 'none'), ('WGALLOWEDIP_A_FAMILY', 'uint16'), ('WGALLOWEDIP_A_IPADDR', 'hex'), ('WGALLOWEDIP_A_CIDR_MASK', 'uint8')) def decode(self): nla.decode(self) if self.get_attr('WGALLOWEDIP_A_FAMILY') == AF_INET: pre = (self .get_attr('WGALLOWEDIP_A_IPADDR') .replace(':', '')) self['addr'] = inet_ntoa(a2b_hex(pre)) else: self['addr'] = (self .get_attr('WGALLOWEDIP_A_IPADDR')) wgaddr = self.get_attr('WGALLOWEDIP_A_CIDR_MASK') self['addr'] = '{0}/{1}'.format(self['addr'], wgaddr) class parse_wg_key(nla): fields = (('key', 's'), ) def decode(self): nla.decode(self) self['value'] = b64encode(self['value']) def encode(self): self['key'] = b64decode(self['value']) nla.encode(self) class WireGuard(GenericNetlinkSocket): def __init__(self): GenericNetlinkSocket.__init__(self) self.bind(WG_GENL_NAME, wgmsg) def info(self, interface): msg = wgmsg() msg['cmd'] = WG_CMD_GET_DEVICE msg['attrs'].append(['WGDEVICE_A_IFNAME', interface]) return self.nlm_request(msg, msg_type=self.prid, msg_flags=NLM_F_REQUEST | NLM_F_DUMP) def set(self, interface, listen_port=None, fwmark=None, private_key=None, peer=None): msg = wgmsg() msg['attrs'].append(['WGDEVICE_A_IFNAME', interface]) if private_key is not None: self._wg_test_key(private_key) msg['attrs'].append(['WGDEVICE_A_PRIVATE_KEY', private_key]) if listen_port is not None: msg['attrs'].append(['WGDEVICE_A_LISTEN_PORT', listen_port]) if fwmark is not None: msg['attrs'].append(['WGDEVICE_A_FWMARK', fwmark]) if peer is not None: self._wg_set_peer(msg, peer) # Message attributes msg['cmd'] = WG_CMD_SET_DEVICE msg['version'] = WG_GENL_VERSION msg['header']['type'] = self.prid msg['header']['flags'] = NLM_F_REQUEST | NLM_F_ACK msg['header']['pid'] = self.pid msg.encode() self.sendto(msg.data, (0, 0)) msg = self.get()[0] err = msg['header'].get('error', None) if err is not None: if hasattr(err, 'code') and err.code == errno.ENOENT: logging.error('Generic netlink protocol %s not found' % self.prid) logging.error('Please check if the protocol module is loaded') raise err return msg def _wg_test_key(self, key): try: if len(b64decode(key)) != WG_KEY_LEN: raise ValueError('Invalid WireGuard key length') except TypeError: raise ValueError('Failed to decode Base64 key') def _wg_set_peer(self, msg, peer): attrs = [] wg_peer = [{'attrs': attrs}] if 'public_key' not in peer: raise ValueError('Peer Public key required') # Check public key validity public_key = peer['public_key'] self._wg_test_key(public_key) attrs.append(['WGPEER_A_PUBLIC_KEY', public_key]) # If peer removal is set to True if 'remove' in peer and peer['remove']: attrs.append(['WGPEER_A_FLAGS', WGDEVICE_F_REPLACE_PEERS]) msg['attrs'].append(['WGDEVICE_A_PEERS', wg_peer]) return # Set Endpoint if 'endpoint_addr' in peer and 'endpoint_port' in peer: attrs.append(['WGPEER_A_ENDPOINT', {'addr': peer['endpoint_addr'], 'port': peer['endpoint_port']}]) # Set Preshared key if 'preshared_key' in peer: pkey = peer['preshared_key'] self._wg_test_key(pkey) attrs.append(['WGPEER_A_PRESHARED_KEY', pkey]) # Set Persistent Keepalive time if 'persistent_keepalive' in peer: keepalive = peer['persistent_keepalive'] attrs.append(['WGPEER_A_PERSISTENT_KEEPALIVE_INTERVAL', keepalive]) # Set Peer flags attrs.append(['WGPEER_A_FLAGS', WGPEER_F_UPDATE_ONLY]) # Set allowed IPs if 'allowed_ips' in peer: allowed_ips = 
self._wg_build_allowedips(peer['allowed_ips']) attrs.append(['WGPEER_A_ALLOWEDIPS', allowed_ips]) msg['attrs'].append(['WGDEVICE_A_PEERS', wg_peer]) def _wg_build_allowedips(self, allowed_ips): ret = [] for index, ip in enumerate(allowed_ips): allowed_ip = [] ret.append({'attrs': allowed_ip}) if ip.find("/") == -1: raise ValueError('No CIDR set in allowed ip #{}'.format(index)) addr, mask = ip.split('/') if addr.find(":") > -1: allowed_ip.append(['WGALLOWEDIP_A_FAMILY', AF_INET6]) allowed_ip.append(['WGALLOWEDIP_A_IPADDR', inet_pton(AF_INET6, addr)]) else: allowed_ip.append(['WGALLOWEDIP_A_FAMILY', AF_INET]) allowed_ip.append(['WGALLOWEDIP_A_IPADDR', inet_aton(addr)]) allowed_ip.append(['WGALLOWEDIP_A_CIDR_MASK', int(mask)]) return ret pyroute2-0.5.9/pyroute2/netlink/ipq/0000755000175000017500000000000013621220110017244 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/ipq/__init__.py0000644000175000017500000000657613610051400021375 0ustar peetpeet00000000000000''' IPQ -- userspace firewall ========================= Netlink family for dealing with `QUEUE` iptables target. All the packets routed to the target `QUEUE` should be handled by a userspace program and the program should response with a verdict. E.g., the verdict can be `NF_DROP` and in that case the packet will be silently dropped, or `NF_ACCEPT`, and the packet will be pass the rule. ''' from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink import nlmsg from pyroute2.netlink.nlsocket import NetlinkSocket from pyroute2.netlink.nlsocket import Marshal # constants IFNAMSIZ = 16 IPQ_MAX_PAYLOAD = 0x800 # IPQ messages IPQM_BASE = 0x10 IPQM_MODE = IPQM_BASE + 1 IPQM_VERDICT = IPQM_BASE + 2 IPQM_PACKET = IPQM_BASE + 3 # IPQ modes IPQ_COPY_NONE = 0 IPQ_COPY_META = 1 IPQ_COPY_PACKET = 2 # verdict types NF_DROP = 0 NF_ACCEPT = 1 NF_STOLEN = 2 NF_QUEUE = 3 NF_REPEAT = 4 NF_STOP = 5 class ipq_base_msg(nlmsg): def decode(self): nlmsg.decode(self) self['payload'] = self.buf.read(self['data_len']) def encode(self): init = self.buf.tell() nlmsg.encode(self) if 'payload' in self: self.buf.write(self['payload']) self.update_length(init) class ipq_packet_msg(ipq_base_msg): fields = (('packet_id', 'L'), ('mark', 'L'), ('timestamp_sec', 'l'), ('timestamp_usec', 'l'), ('hook', 'I'), ('indev_name', '%is' % IFNAMSIZ), ('outdev_name', '%is' % IFNAMSIZ), ('hw_protocol', '>H'), ('hw_type', 'H'), ('hw_addrlen', 'B'), ('hw_addr', '6B'), ('__pad', '9x'), ('data_len', 'I'), ('__pad', '4x')) class ipq_mode_msg(nlmsg): pack = 'struct' fields = (('value', 'B'), ('__pad', '7x'), ('range', 'I'), ('__pad', '12x')) class ipq_verdict_msg(ipq_base_msg): pack = 'struct' fields = (('value', 'I'), ('__pad', '4x'), ('id', 'L'), ('data_len', 'I'), ('__pad', '4x')) class MarshalIPQ(Marshal): msg_map = {IPQM_MODE: ipq_mode_msg, IPQM_VERDICT: ipq_verdict_msg, IPQM_PACKET: ipq_packet_msg} class IPQSocket(NetlinkSocket): ''' Low-level socket interface. Provides all the usual socket does, can be used in poll/select, doesn't create any implicit threads. ''' def bind(self, mode=IPQ_COPY_PACKET): ''' Bind the socket and performs IPQ mode configuration. The only parameter is mode, the default value is IPQ_COPY_PACKET (copy all the packet data). 
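
        A minimal usage sketch -- assuming an iptables rule with the
        `QUEUE` target is already routing packets to userspace; all the
        names used here are defined in this module::

            ipq = IPQSocket()
            ipq.bind()
            while True:
                packet = ipq.get()[0]
                # accept everything -- replace with real filtering logic
                ipq.verdict(packet['packet_id'], NF_ACCEPT)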
''' NetlinkSocket.bind(self, groups=0, pid=0) self.register_policy(MarshalIPQ.msg_map) msg = ipq_mode_msg() msg['value'] = mode msg['range'] = IPQ_MAX_PAYLOAD msg['header']['type'] = IPQM_MODE msg['header']['flags'] = NLM_F_REQUEST msg.encode() self.sendto(msg.data, (0, 0)) def verdict(self, seq, v): ''' Issue a verdict `v` for a packet `seq`. ''' msg = ipq_verdict_msg() msg['value'] = v msg['id'] = seq msg['data_len'] = 0 msg['header']['type'] = IPQM_VERDICT msg['header']['flags'] = NLM_F_REQUEST msg.encode() self.sendto(msg.buf.getvalue(), (0, 0)) pyroute2-0.5.9/pyroute2/netlink/nfnetlink/0000755000175000017500000000000013621220110020443 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/nfnetlink/__init__.py0000644000175000017500000000205113610051400022554 0ustar peetpeet00000000000000''' Nfnetlink ========= The support of nfnetlink families is now at the very beginning. So there is no public exports yet, but you can review the code. Work is in progress, stay tuned. nf-queue ++++++++ Netfilter protocol for NFQUEUE iptables target. ''' from pyroute2.netlink import nlmsg NFNL_SUBSYS_NONE = 0 NFNL_SUBSYS_CTNETLINK = 1 NFNL_SUBSYS_CTNETLINK_EXP = 2 NFNL_SUBSYS_QUEUE = 3 NFNL_SUBSYS_ULOG = 4 NFNL_SUBSYS_OSF = 5 NFNL_SUBSYS_IPSET = 6 NFNL_SUBSYS_ACCT = 7 NFNL_SUBSYS_CTNETLINK_TIMEOUT = 8 NFNL_SUBSYS_CTHELPER = 9 NFNL_SUBSYS_NFTABLES = 10 NFNL_SUBSYS_NFT_COMPAT = 11 NFNL_SUBSYS_COUNT = 12 # multicast group ids (for use with {add,drop}_membership) NFNLGRP_NONE = 0 NFNLGRP_CONNTRACK_NEW = 1 NFNLGRP_CONNTRACK_UPDATE = 2 NFNLGRP_CONNTRACK_DESTROY = 3 NFNLGRP_CONNTRACK_EXP_NEW = 4 NFNLGRP_CONNTRACK_EXP_UPDATE = 5 NFNLGRP_CONNTRACK_EXP_DESTROY = 6 NFNLGRP_NFTABLES = 7 NFNLGRP_ACCT_QUOTA = 8 NFNLGRP_NFTRACE = 9 class nfgen_msg(nlmsg): fields = (('nfgen_family', 'B'), ('version', 'B'), ('res_id', '!H')) pyroute2-0.5.9/pyroute2/netlink/nfnetlink/ipset.py0000644000175000017500000001700413613573646022175 0ustar peetpeet00000000000000from pyroute2.netlink import nla from pyroute2.netlink import NLA_F_NESTED from pyroute2.netlink import NLA_F_NET_BYTEORDER from pyroute2.netlink.nfnetlink import nfgen_msg from pyroute2.netlink.nfnetlink import NFNL_SUBSYS_IPSET IPSET_MAXNAMELEN = 32 IPSET_DEFAULT_MAXELEM = 65536 IPSET_CMD_NONE = 0 IPSET_CMD_PROTOCOL = 1 # Return protocol version IPSET_CMD_CREATE = 2 # Create a new (empty) set IPSET_CMD_DESTROY = 3 # Destroy a (empty) set IPSET_CMD_FLUSH = 4 # Remove all elements from a set IPSET_CMD_RENAME = 5 # Rename a set IPSET_CMD_SWAP = 6 # Swap two sets IPSET_CMD_LIST = 7 # List sets IPSET_CMD_SAVE = 8 # Save sets IPSET_CMD_ADD = 9 # Add an element to a set IPSET_CMD_DEL = 10 # Delete an element from a set IPSET_CMD_TEST = 11 # Test an element in a set IPSET_CMD_HEADER = 12 # Get set header data only IPSET_CMD_TYPE = 13 # 13: Get set type IPSET_CMD_GET_BYNAME = 14 # 14: Get set index by name IPSET_CMD_GET_BYINDEX = 15 # 15: Get set index by index # flags at command level (IPSET_ATTR_FLAGS) IPSET_FLAG_LIST_SETNAME = 1 << 1 IPSET_FLAG_LIST_HEADER = 1 << 2 IPSET_FLAG_SKIP_COUNTER_UPDATE = 1 << 3 IPSET_FLAG_SKIP_SUBCOUNTER_UPDATE = 1 << 4 IPSET_FLAG_MATCH_COUNTERS = 1 << 5 IPSET_FLAG_RETURN_NOMATCH = 1 << 7 # flags at cadt attribute (IPSET_ATTR_CADT_FLAGS) IPSET_FLAG_PHYSDEV = 1 << 1 IPSET_FLAG_NOMATCH = 1 << 2 IPSET_FLAG_WITH_COUNTERS = 1 << 3 IPSET_FLAG_WITH_COMMENT = 1 << 4 IPSET_FLAG_WITH_FORCEADD = 1 << 5 IPSET_FLAG_WITH_SKBINFO = 1 << 6 IPSET_FLAG_IFACE_WILDCARD = 1 << 7 IPSET_ERR_PROTOCOL = 4097 IPSET_ERR_FIND_TYPE = 4098 IPSET_ERR_MAX_SETS = 4099 
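
# Illustrative sketch, not part of the original module: on the wire every
# ipset command above gets combined with the nfnetlink subsystem id; this
# hypothetical helper is the inverse of what ipset_msg.get_data_type()
# does below when it masks the subsystem bits off the message type.
def _ipset_msg_type(cmd):
    # e.g. _ipset_msg_type(IPSET_CMD_LIST) == (6 << 8) | 7 == 0x607
    return (NFNL_SUBSYS_IPSET << 8) | cmd
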
IPSET_ERR_BUSY = 4100 IPSET_ERR_EXIST_SETNAME2 = 4101 IPSET_ERR_TYPE_MISMATCH = 4102 IPSET_ERR_EXIST = 4103 IPSET_ERR_INVALID_CIDR = 4104 IPSET_ERR_INVALID_NETMASK = 4105 IPSET_ERR_INVALID_FAMILY = 4106 IPSET_ERR_TIMEOUT = 4107 IPSET_ERR_REFERENCED = 4108 IPSET_ERR_IPADDR_IPV4 = 4109 IPSET_ERR_IPADDR_IPV6 = 4110 IPSET_ERR_COUNTER = 4111 IPSET_ERR_COMMENT = 4112 IPSET_ERR_INVALID_MARKMASK = 4113 IPSET_ERR_SKBINFO = 4114 IPSET_ERR_TYPE_SPECIFIC = 4352 class ipset_base(nla): class ipset_ip(nla): nla_flags = NLA_F_NESTED nla_map = (('IPSET_ATTR_UNSPEC', 'none'), ('IPSET_ATTR_IPADDR_IPV4', 'ip4addr', NLA_F_NET_BYTEORDER), ('IPSET_ATTR_IPADDR_IPV6', 'ip6addr', NLA_F_NET_BYTEORDER)) class ipset_msg(nfgen_msg): ''' Since the support just begins to be developed, many attrs are still in `hex` format -- just to dump the content. ''' nla_map = (('IPSET_ATTR_UNSPEC', 'none'), ('IPSET_ATTR_PROTOCOL', 'uint8'), ('IPSET_ATTR_SETNAME', 'asciiz'), ('IPSET_ATTR_TYPENAME', 'asciiz'), ('IPSET_ATTR_REVISION', 'uint8'), ('IPSET_ATTR_FAMILY', 'uint8'), ('IPSET_ATTR_FLAGS', 'be32'), ('IPSET_ATTR_DATA', 'get_data_type'), ('IPSET_ATTR_ADT', 'attr_adt'), ('IPSET_ATTR_LINENO', 'hex'), ('IPSET_ATTR_PROTOCOL_MIN', 'uint8'), ('IPSET_ATTR_INDEX', 'be16')) @staticmethod def get_data_type(self, *args, **kwargs): # create and list commands have specific attributes. See linux_ip_set.h # for more information (and/or lib/PROTOCOL of ipset sources) cmd = self['header']['type'] & ~(NFNL_SUBSYS_IPSET << 8) if cmd == IPSET_CMD_CREATE or cmd == IPSET_CMD_LIST: return self.cadt_data return self.ipset_generic.adt_data class ipset_generic(ipset_base): class adt_data(ipset_base): nla_flags = NLA_F_NESTED nla_map = ((0, 'IPSET_ATTR_UNSPEC', 'none'), (1, 'IPSET_ATTR_IP', 'ipset_ip'), (1, 'IPSET_ATTR_IP_FROM', 'ipset_ip'), (2, 'IPSET_ATTR_IP_TO', 'ipset_ip'), (3, 'IPSET_ATTR_CIDR', 'be8', NLA_F_NET_BYTEORDER), (4, 'IPSET_ATTR_PORT', 'be16', NLA_F_NET_BYTEORDER), (4, 'IPSET_ATTR_PORT_FROM', 'be16', NLA_F_NET_BYTEORDER), (5, 'IPSET_ATTR_PORT_TO', 'be16', NLA_F_NET_BYTEORDER), (6, 'IPSET_ATTR_TIMEOUT', 'be32', NLA_F_NET_BYTEORDER), (7, 'IPSET_ATTR_PROTO', 'be8', NLA_F_NET_BYTEORDER), (8, 'IPSET_ATTR_CADT_FLAGS', 'be32', NLA_F_NET_BYTEORDER), (9, 'IPSET_ATTR_CADT_LINENO', 'be32'), (10, 'IPSET_ATTR_MARK', 'be32', NLA_F_NET_BYTEORDER), (11, 'IPSET_ATTR_MARKMASK', 'be32', NLA_F_NET_BYTEORDER), (17, 'IPSET_ATTR_ETHER', 'l2addr'), (18, 'IPSET_ATTR_NAME', 'asciiz'), (19, 'IPSET_ATTR_NAMEREF', 'be32'), (20, 'IPSET_ATTR_IP2', 'ipset_ip'), (21, 'IPSET_ATTR_CIDR2', 'be8', NLA_F_NET_BYTEORDER), (22, 'IPSET_ATTR_IP2_TO', 'ipset_ip'), (23, 'IPSET_ATTR_IFACE', 'asciiz'), (24, 'IPSET_ATTR_BYTES', 'be64', NLA_F_NET_BYTEORDER), (25, 'IPSET_ATTR_PACKETS', 'be64', NLA_F_NET_BYTEORDER), (26, 'IPSET_ATTR_COMMENT', 'asciiz'), (27, 'IPSET_ATTR_SKBMARK', 'skbmark'), (28, 'IPSET_ATTR_SKBPRIO', 'skbprio'), (29, 'IPSET_ATTR_SKBQUEUE', 'be16', NLA_F_NET_BYTEORDER)) class skbmark(nla): nla_flags = NLA_F_NET_BYTEORDER fields = [('value', '>II')] class skbprio(nla): nla_flags = NLA_F_NET_BYTEORDER fields = [('value', '>HH')] class cadt_data(ipset_base): nla_flags = NLA_F_NESTED nla_map = ((0, 'IPSET_ATTR_UNSPEC', 'none'), (1, 'IPSET_ATTR_IP', 'ipset_ip'), (1, 'IPSET_ATTR_IP_FROM', 'ipset_ip'), (2, 'IPSET_ATTR_IP_TO', 'ipset_ip'), (3, 'IPSET_ATTR_CIDR', 'be8', NLA_F_NET_BYTEORDER), (4, 'IPSET_ATTR_PORT', 'be16', NLA_F_NET_BYTEORDER), (4, 'IPSET_ATTR_PORT_FROM', 'be16', NLA_F_NET_BYTEORDER), (5, 'IPSET_ATTR_PORT_TO', 'be16', NLA_F_NET_BYTEORDER), (6, 'IPSET_ATTR_TIMEOUT', 
'be32', NLA_F_NET_BYTEORDER), (7, 'IPSET_ATTR_PROTO', 'be8', NLA_F_NET_BYTEORDER), (8, 'IPSET_ATTR_CADT_FLAGS', 'be32', NLA_F_NET_BYTEORDER), (9, 'IPSET_ATTR_CADT_LINENO', 'be32'), (10, 'IPSET_ATTR_MARK', 'be32', NLA_F_NET_BYTEORDER), (11, 'IPSET_ATTR_MARKMASK', 'be32', NLA_F_NET_BYTEORDER), (17, 'IPSET_ATTR_GC', 'hex'), (18, 'IPSET_ATTR_HASHSIZE', 'be32', NLA_F_NET_BYTEORDER), (19, 'IPSET_ATTR_MAXELEM', 'be32', NLA_F_NET_BYTEORDER), (20, 'IPSET_ATTR_NETMASK', 'hex'), (21, 'IPSET_ATTR_PROBES', 'hex'), (22, 'IPSET_ATTR_RESIZE', 'hex'), (23, 'IPSET_ATTR_SIZE', 'be32', NLA_F_NET_BYTEORDER), (24, 'IPSET_ATTR_ELEMENTS', 'be32', NLA_F_NET_BYTEORDER), (25, 'IPSET_ATTR_REFERENCES', 'be32', NLA_F_NET_BYTEORDER), (26, 'IPSET_ATTR_MEMSIZE', 'be32', NLA_F_NET_BYTEORDER)) class attr_adt(ipset_generic): nla_flags = NLA_F_NESTED nla_map = ((7, 'IPSET_ATTR_DATA', 'adt_data'),) pyroute2-0.5.9/pyroute2/netlink/nfnetlink/nfctsocket.py0000644000175000017500000005446313613574566023230 0ustar peetpeet00000000000000""" NFCTSocket -- low level connection tracking API See also: pyroute2.conntrack """ import socket from pyroute2.netlink import NLMSG_ERROR from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink import NLM_F_DUMP from pyroute2.netlink import NLM_F_ACK from pyroute2.netlink import NLM_F_EXCL from pyroute2.netlink import NLM_F_CREATE from pyroute2.netlink import NETLINK_NETFILTER from pyroute2.netlink import nla from pyroute2.netlink.nlsocket import NetlinkSocket from pyroute2.netlink.nfnetlink import nfgen_msg from pyroute2.netlink.nfnetlink import NFNL_SUBSYS_CTNETLINK IPCTNL_MSG_CT_NEW = 0 IPCTNL_MSG_CT_GET = 1 IPCTNL_MSG_CT_DELETE = 2 IPCTNL_MSG_CT_GET_CTRZERO = 3 IPCTNL_MSG_CT_GET_STATS_CPU = 4 IPCTNL_MSG_CT_GET_STATS = 5 IPCTNL_MSG_CT_GET_DYING = 6 IPCTNL_MSG_CT_GET_UNCONFIRMED = 7 IPCTNL_MSG_MAX = 8 try: IP_PROTOCOLS = {num: name[8:] for name, num in vars(socket).items() if name.startswith("IPPROTO")} except (IOError, OSError): IP_PROTOCOLS = {} # Window scaling is advertised by the sender IP_CT_TCP_FLAG_WINDOW_SCALE = 0x01 # SACK is permitted by the sender IP_CT_TCP_FLAG_SACK_PERM = 0x02 # This sender sent FIN first IP_CT_TCP_FLAG_CLOSE_INIT = 0x04 # Be liberal in window checking IP_CT_TCP_FLAG_BE_LIBERAL = 0x08 # Has unacknowledged data IP_CT_TCP_FLAG_DATA_UNACKNOWLEDGED = 0x10 # The field td_maxack has been set IP_CT_TCP_FLAG_MAXACK_SET = 0x20 # From linux/include/net/tcp_states.h TCPF_ESTABLISHED = (1 << 1) TCPF_SYN_SENT = (1 << 2) TCPF_SYN_RECV = (1 << 3) TCPF_FIN_WAIT1 = (1 << 4) TCPF_FIN_WAIT2 = (1 << 5) TCPF_TIME_WAIT = (1 << 6) TCPF_CLOSE = (1 << 7) TCPF_CLOSE_WAIT = (1 << 8) TCPF_LAST_ACK = (1 << 9) TCPF_LISTEN = (1 << 10) TCPF_CLOSING = (1 << 11) TCPF_NEW_SYN_RECV = (1 << 12) TCPF_TO_NAME = { TCPF_ESTABLISHED: 'ESTABLISHED', TCPF_SYN_SENT: 'SYN_SENT', TCPF_SYN_RECV: 'SYN_RECV', TCPF_FIN_WAIT1: 'FIN_WAIT1', TCPF_FIN_WAIT2: 'FIN_WAIT2', TCPF_TIME_WAIT: 'TIME_WAIT', TCPF_CLOSE: 'CLOSE', TCPF_CLOSE_WAIT: 'CLOSE_WAIT', TCPF_LAST_ACK: 'LAST_ACK', TCPF_LISTEN: 'LISTEN', TCPF_CLOSING: 'CLOSING', TCPF_NEW_SYN_RECV: 'NEW_SYN_RECV', } # From include/uapi/linux/netfilter/nf_conntrack_common.h IPS_EXPECTED = (1 << 0) IPS_SEEN_REPLY = (1 << 1) IPS_ASSURED = (1 << 2) IPS_CONFIRMED = (1 << 3) IPS_SRC_NAT = (1 << 4) IPS_DST_NAT = (1 << 5) IPS_NAT_MASK = (IPS_DST_NAT | IPS_SRC_NAT) IPS_SEQ_ADJUST = (1 << 6) IPS_SRC_NAT_DONE = (1 << 7) IPS_DST_NAT_DONE = (1 << 8) IPS_NAT_DONE_MASK = (IPS_DST_NAT_DONE | IPS_SRC_NAT_DONE) IPS_DYING = (1 << 9) IPS_FIXED_TIMEOUT = (1 << 10) IPS_TEMPLATE = (1 << 11) 
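
# Hypothetical helper, not part of the original module: the IPS_* bits above
# make up the CTA_STATUS word of a conntrack entry; a quick check for a
# confirmed, NATted flow could look like this (IPS_NAT_MASK covers both the
# source and the destination NAT bits).
def _is_confirmed_nat(status):
    return bool(status & IPS_CONFIRMED) and bool(status & IPS_NAT_MASK)
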
IPS_UNTRACKED = (1 << 12) IPS_HELPER = (1 << 13) IPS_OFFLOAD = (1 << 14) IPS_UNCHANGEABLE_MASK = ( IPS_NAT_DONE_MASK | IPS_NAT_MASK | IPS_EXPECTED | IPS_CONFIRMED | IPS_DYING | IPS_SEQ_ADJUST | IPS_TEMPLATE | IPS_OFFLOAD) IPSBIT_TO_NAME = { IPS_EXPECTED: 'EXPECTED', IPS_SEEN_REPLY: 'SEEN_REPLY', IPS_ASSURED: 'ASSURED', IPS_CONFIRMED: 'CONFIRMED', IPS_SRC_NAT: 'SRC_NAT', IPS_DST_NAT: 'DST_NAT', IPS_SEQ_ADJUST: 'SEQ_ADJUST', IPS_SRC_NAT_DONE: 'SRC_NAT_DONE', IPS_DST_NAT_DONE: 'DST_NAT_DONE', IPS_DYING: 'DYING', IPS_FIXED_TIMEOUT: 'FIXED_TIMEOUT', IPS_TEMPLATE: 'TEMPLATE', IPS_UNTRACKED: 'UNTRACKED', IPS_HELPER: 'HELPER', IPS_OFFLOAD: 'OFFLOAD' } # From include/uapi/linux/netfilter/nf_conntrack_tcp.h IP_CT_TCP_FLAG_WINDOW_SCALE = 0x01 IP_CT_TCP_FLAG_SACK_PERM = 0x02 IP_CT_TCP_FLAG_CLOSE_INIT = 0x04 IP_CT_TCP_FLAG_BE_LIBERAL = 0x08 IP_CT_TCP_FLAG_DATA_UNACKNOWLEDGED = 0x10 IP_CT_TCP_FLAG_MAXACK_SET = 0x20 IP_CT_EXP_CHALLENGE_ACK = 0x40 IP_CT_TCP_SIMULTANEOUS_OPEN = 0x80 IP_CT_TCP_FLAG_TO_NAME = { IP_CT_TCP_FLAG_WINDOW_SCALE: 'WINDOW_SCALE', IP_CT_TCP_FLAG_SACK_PERM: 'SACK_PERM', IP_CT_TCP_FLAG_CLOSE_INIT: 'CLOSE_INIT', IP_CT_TCP_FLAG_BE_LIBERAL: 'BE_LIBERAL', IP_CT_TCP_FLAG_DATA_UNACKNOWLEDGED: 'DATA_UNACKNOWLEDGED', IP_CT_TCP_FLAG_MAXACK_SET: 'MAXACK_SET', IP_CT_EXP_CHALLENGE_ACK: 'CHALLENGE_ACK', IP_CT_TCP_SIMULTANEOUS_OPEN: 'SIMULTANEOUS_OPEN', } def terminate_single_msg(msg): return msg def terminate_error_msg(msg): return msg['header']['type'] == NLMSG_ERROR class nfct_stats(nfgen_msg): nla_map = ( ('CTA_STATS_GLOBAL_UNSPEC', 'none'), ('CTA_STATS_GLOBAL_ENTRIES', 'be32'), ('CTA_STATS_GLOBAL_MAX_ENTRIES', 'be32'), ) class nfct_stats_cpu(nfgen_msg): nla_map = ( ('CTA_STATS_UNSPEC', 'none'), ('CTA_STATS_SEARCHED', 'be32'), ('CTA_STATS_FOUND', 'be32'), ('CTA_STATS_NEW', 'be32'), ('CTA_STATS_INVALID', 'be32'), ('CTA_STATS_IGNORE', 'be32'), ('CTA_STATS_DELETE', 'be32'), ('CTA_STATS_DELETE_LIST', 'be32'), ('CTA_STATS_INSERT', 'be32'), ('CTA_STATS_INSERT_FAILED', 'be32'), ('CTA_STATS_DROP', 'be32'), ('CTA_STATS_EARLY_DROP', 'be32'), ('CTA_STATS_ERROR', 'be32'), ('CTA_STATS_SEARCH_RESTART', 'be32'), ) class nfct_msg(nfgen_msg): prefix = 'CTA_' nla_map = ( ('CTA_UNSPEC', 'none'), ('CTA_TUPLE_ORIG', 'cta_tuple'), ('CTA_TUPLE_REPLY', 'cta_tuple'), ('CTA_STATUS', 'be32'), ('CTA_PROTOINFO', 'cta_protoinfo'), ('CTA_HELP', 'asciiz'), ('CTA_NAT_SRC', 'cta_nat'), ('CTA_TIMEOUT', 'be32'), ('CTA_MARK', 'be32'), ('CTA_COUNTERS_ORIG', 'cta_counters'), ('CTA_COUNTERS_REPLY', 'cta_counters'), ('CTA_USE', 'be32'), ('CTA_ID', 'be32'), ('CTA_NAT_DST', 'cta_nat'), ('CTA_TUPLE_MASTER', 'cta_tuple'), ('CTA_SEQ_ADJ_ORIG', 'cta_nat_seq_adj'), ('CTA_SEQ_ADJ_REPLY', 'cta_nat_seq_adj'), ('CTA_SECMARK', 'be32'), ('CTA_ZONE', 'be16'), ('CTA_SECCTX', 'cta_secctx'), ('CTA_TIMESTAMP', 'cta_timestamp'), ('CTA_MARK_MASK', 'be32'), ('CTA_LABELS', 'cta_labels'), ('CTA_LABELS_MASK', 'cta_labels'), ('CTA_SYNPROXY', 'cta_synproxy'), ) @classmethod def create_from(cls, **kwargs): self = cls() for key, value in kwargs.items(): if isinstance(value, NFCTAttr): value = {'attrs': value.attrs()} if value is not None: self['attrs'].append([self.name2nla(key), value]) return self class cta_tuple(nla): nla_map = ( ('CTA_TUPLE_UNSPEC', 'none'), ('CTA_TUPLE_IP', 'cta_ip'), ('CTA_TUPLE_PROTO', 'cta_proto'), ) class cta_ip(nla): nla_map = ( ('CTA_IP_UNSPEC', 'none'), ('CTA_IP_V4_SRC', 'ip4addr'), ('CTA_IP_V4_DST', 'ip4addr'), ('CTA_IP_V6_SRC', 'ip6addr'), ('CTA_IP_V6_DST', 'ip6addr'), ) class cta_proto(nla): nla_map = ( ('CTA_PROTO_UNSPEC', 
'none'), ('CTA_PROTO_NUM', 'uint8'), ('CTA_PROTO_SRC_PORT', 'be16'), ('CTA_PROTO_DST_PORT', 'be16'), ('CTA_PROTO_ICMP_ID', 'be16'), ('CTA_PROTO_ICMP_TYPE', 'uint8'), ('CTA_PROTO_ICMP_CODE', 'uint8'), ('CTA_PROTO_ICMPV6_ID', 'be16'), ('CTA_PROTO_ICMPV6_TYPE', 'uint8'), ('CTA_PROTO_ICMPV6_CODE', 'uint8'), ) class cta_protoinfo(nla): nla_map = ( ('CTA_PROTOINFO_UNSPEC', 'none'), ('CTA_PROTOINFO_TCP', 'cta_protoinfo_tcp'), ('CTA_PROTOINFO_DCCP', 'cta_protoinfo_dccp'), ('CTA_PROTOINFO_SCTP', 'cta_protoinfo_sctp'), ) class cta_protoinfo_tcp(nla): nla_map = ( ('CTA_PROTOINFO_TCP_UNSPEC', 'none'), ('CTA_PROTOINFO_TCP_STATE', 'uint8'), ('CTA_PROTOINFO_TCP_WSCALE_ORIGINAL', 'uint8'), ('CTA_PROTOINFO_TCP_WSCALE_REPLY', 'uint8'), ('CTA_PROTOINFO_TCP_FLAGS_ORIGINAL', 'cta_tcp_flags'), ('CTA_PROTOINFO_TCP_FLAGS_REPLY', 'cta_tcp_flags'), ) class cta_tcp_flags(nla): fields = [('value', 'BB')] class cta_protoinfo_dccp(nla): nla_map = ( ('CTA_PROTOINFO_DCCP_UNSPEC', 'none'), ('CTA_PROTOINFO_DCCP_STATE', 'uint8'), ('CTA_PROTOINFO_DCCP_ROLE', 'uint8'), ('CTA_PROTOINFO_DCCP_HANDSHAKE_SEQ', 'be64'), ) class cta_protoinfo_sctp(nla): nla_map = ( ('CTA_PROTOINFO_SCTP_UNSPEC', 'none'), ('CTA_PROTOINFO_SCTP_STATE', 'uint8'), ('CTA_PROTOINFO_SCTP_VTAG_ORIGINAL', 'be32'), ('CTA_PROTOINFO_SCTP_VTAG_REPLY', 'be32'), ) class cta_nat(nla): nla_map = ( ('CTA_NAT_UNSPEC', 'none'), ('CTA_NAT_V4_MINIP', 'ip4addr'), ('CTA_NAT_V4_MAXIP', 'ip4addr'), ('CTA_NAT_PROTO', 'cta_protonat'), ('CTA_NAT_V6_MINIP', 'ip6addr'), ('CTA_NAT_V6_MAXIP', 'ip6addr'), ) class cta_protonat(nla): nla_map = ( ('CTA_PROTONAT_UNSPEC', 'none'), ('CTA_PROTONAT_PORT_MIN', 'be16'), ('CTA_PROTONAT_PORT_MAX', 'be16'), ) class cta_nat_seq_adj(nla): nla_map = ( ('CTA_NAT_SEQ_UNSPEC', 'none'), ('CTA_NAT_SEQ_CORRECTION_POS', 'be32'), ('CTA_NAT_SEQ_OFFSET_BEFORE', 'be32'), ('CTA_NAT_SEQ_OFFSET_AFTER', 'be32'), ) class cta_counters(nla): nla_map = ( ('CTA_COUNTERS_UNSPEC', 'none'), ('CTA_COUNTERS_PACKETS', 'be64'), ('CTA_COUNTERS_BYTES', 'be64'), ('CTA_COUNTERS32_PACKETS', 'be32'), ('CTA_COUNTERS32_BYTES', 'be32'), ) class cta_secctx(nla): nla_map = ( ('CTA_SECCTX_UNSPEC', 'none'), ('CTA_SECCTX_NAME', 'asciiz'), ) class cta_timestamp(nla): nla_map = ( ('CTA_TIMESTAMP_UNSPEC', 'none'), ('CTA_TIMESTAMP_START', 'be64'), ('CTA_TIMESTAMP_STOP', 'be64'), ) class cta_labels(nla): fields = [('value', 'QQ')] def encode(self): if not isinstance(self['value'], tuple): self['value'] = (self['value'] & 0xffffffffffffffff, self['value'] >> 64) nla.encode(self) def decode(self): nla.decode(self) if isinstance(self['value'], tuple): self['value'] = (self['value'][0] & 0xffffffffffffffff) | \ (self['value'][1] << 64) class cta_synproxy(nla): nla_map = ( ('CTA_SYNPROXY_UNSPEC', 'none'), ('CTA_SYNPROXY_ISN', 'be32'), ('CTA_SYNPROXY_ITS', 'be32'), ('CTA_SYNPROXY_TSOFF', 'be32'), ) class NFCTAttr(object): def attrs(self): return [] class NFCTAttrTuple(NFCTAttr): def __init__(self, family=socket.AF_INET, saddr=None, daddr=None, proto=None, sport=None, dport=None, icmp_id=None, icmp_type=None, icmp_code=None): self.saddr = saddr self.daddr = daddr self.proto = proto self.sport = sport self.dport = dport self.icmp_id = icmp_id self.icmp_type = icmp_type self.icmp_code = icmp_code self.family = family self._attr_ip, self._attr_icmp = { socket.AF_INET: ['CTA_IP_V4', 'CTA_PROTO_ICMP'], socket.AF_INET6: ['CTA_IP_V6', 'CTA_PROTO_ICMPV6'], }[self.family] def proto_name(self): return IP_PROTOCOLS.get(self.proto, None) def reverse(self): return NFCTAttrTuple( family=self.family, 
saddr=self.daddr, daddr=self.saddr, proto=self.proto, sport=self.dport, dport=self.sport, icmp_id=self.icmp_id, icmp_type=self.icmp_type, icmp_code=self.icmp_code, ) def attrs(self): cta_ip = [] cta_proto = [] cta_tuple = [] if self.saddr is not None: cta_ip.append([self._attr_ip + '_SRC', self.saddr]) if self.daddr is not None: cta_ip.append([self._attr_ip + '_DST', self.daddr]) if self.proto is not None: cta_proto.append(['CTA_PROTO_NUM', self.proto]) if self.sport is not None: cta_proto.append(['CTA_PROTO_SRC_PORT', self.sport]) if self.dport is not None: cta_proto.append(['CTA_PROTO_DST_PORT', self.dport]) if self.icmp_id is not None: cta_proto.append([self._attr_icmp + '_ID', self.icmp_id]) if self.icmp_type is not None: cta_proto.append([self._attr_icmp + '_TYPE', self.icmp_type]) if self.icmp_code is not None: cta_proto.append([self._attr_icmp + '_CODE', self.icmp_code]) if cta_ip: cta_tuple.append(['CTA_TUPLE_IP', {'attrs': cta_ip}]) if cta_proto: cta_tuple.append(['CTA_TUPLE_PROTO', {'attrs': cta_proto}]) return cta_tuple @classmethod def from_netlink(cls, family, ndmsg): cta_ip = ndmsg.get_attr('CTA_TUPLE_IP') cta_proto = ndmsg.get_attr('CTA_TUPLE_PROTO') kwargs = {'family': family} if family == socket.AF_INET: kwargs['saddr'] = cta_ip.get_attr('CTA_IP_V4_SRC') kwargs['daddr'] = cta_ip.get_attr('CTA_IP_V4_DST') elif family == socket.AF_INET6: kwargs['saddr'] = cta_ip.get_attr('CTA_IP_V6_SRC') kwargs['daddr'] = cta_ip.get_attr('CTA_IP_V6_DST') else: raise NotImplementedError(family) proto = cta_proto.get_attr('CTA_PROTO_NUM') kwargs['proto'] = proto if proto == socket.IPPROTO_ICMP: kwargs['icmp_id'] = cta_proto.get_attr('CTA_PROTO_ICMP_ID') kwargs['icmp_type'] = cta_proto.get_attr('CTA_PROTO_ICMP_TYPE') kwargs['icmp_code'] = cta_proto.get_attr('CTA_PROTO_ICMP_CODE') elif proto == socket.IPPROTO_ICMPV6: kwargs['icmp_id'] = cta_proto.get_attr('CTA_PROTO_ICMPV6_ID') kwargs['icmp_type'] = cta_proto.get_attr('CTA_PROTO_ICMPV6_TYPE') kwargs['icmp_code'] = cta_proto.get_attr('CTA_PROTO_ICMPV6_CODE') elif proto in (socket.IPPROTO_TCP, socket.IPPROTO_UDP): kwargs['sport'] = cta_proto.get_attr('CTA_PROTO_SRC_PORT') kwargs['dport'] = cta_proto.get_attr('CTA_PROTO_DST_PORT') return cls(**kwargs) def is_attr_match(self, other, attrname): l_attr = getattr(self, attrname) if l_attr is not None: r_attr = getattr(other, attrname) if l_attr != r_attr: return False return True def nla_eq(self, family, ndmsg): if self.family != family: return False test_attr = [] cta_ip = ndmsg.get_attr('CTA_TUPLE_IP') if family == socket.AF_INET: test_attr.append((self.saddr, cta_ip, 'CTA_IP_V4_SRC')) test_attr.append((self.daddr, cta_ip, 'CTA_IP_V4_DST')) elif family == socket.AF_INET6: test_attr.append((self.saddr, cta_ip, 'CTA_IP_V6_SRC')) test_attr.append((self.daddr, cta_ip, 'CTA_IP_V6_DST')) else: raise NotImplementedError(family) if self.proto is not None: cta_proto = ndmsg.get_attr('CTA_TUPLE_PROTO') if self.proto != cta_proto.get_attr('CTA_PROTO_NUM'): return False if self.proto == socket.IPPROTO_ICMP: (test_attr .append((self.icmp_id, cta_proto, 'CTA_PROTO_ICMP_ID'))) (test_attr .append((self.icmp_type, cta_proto, 'CTA_PROTO_ICMP_TYPE'))) (test_attr .append((self.icmp_code, cta_proto, 'CTA_PROTO_ICMP_CODE'))) elif self.proto == socket.IPPROTO_ICMPV6: (test_attr .append((self.icmp_id, cta_proto, 'CTA_PROTO_ICMPV6_ID'))) (test_attr .append((self.icmp_type, cta_proto, 'CTA_PROTO_ICMPV6_TYPE'))) (test_attr .append((self.icmp_code, cta_proto, 'CTA_PROTO_ICMPV6_CODE'))) elif self.proto in (socket.IPPROTO_TCP, 
socket.IPPROTO_UDP): (test_attr .append((self.sport, cta_proto, 'CTA_PROTO_SRC_PORT'))) (test_attr .append((self.dport, cta_proto, 'CTA_PROTO_DST_PORT'))) for val, ndmsg, attrname in test_attr: if val is not None and val != ndmsg.get_attr(attrname): return False return True def __ne__(self, other): return not self.__eq__(other) def __eq__(self, other): if not isinstance(other, self.__class__): raise NotImplementedError() if self.family != other.family: return False for attrname in ('saddr', 'daddr'): if not self.is_attr_match(other, attrname): return False if self.proto is not None: if self.proto != other.proto: return False if self.proto in (socket.IPPROTO_UDP, socket.IPPROTO_TCP): for attrname in ('sport', 'dport'): if not self.is_attr_match(other, attrname): return False elif self.proto in (socket.IPPROTO_ICMP, socket.IPPROTO_ICMPV6): for attrname in ('icmp_id', 'icmp_type', 'icmp_code'): if not self.is_attr_match(other, attrname): return False return True def __repr__(self): proto_name = self.proto_name() if proto_name is None: proto_name = 'UNKNOWN' if self.family == socket.AF_INET: r = 'IPv4(' elif self.family == socket.AF_INET6: r = 'IPv6(' else: r = 'Unkown[family={}]('.format(self.family) r += 'saddr={}, daddr={}, '.format(self.saddr, self.daddr) r += '{}('.format(proto_name) if self.proto in (socket.IPPROTO_ICMP, socket.IPPROTO_ICMPV6): r += 'id={}, type={}, code={}'.format(self.icmp_id, self.icmp_type, self.icmp_code) elif self.proto in (socket.IPPROTO_TCP, socket.IPPROTO_UDP): r += 'sport={}, dport={}'.format(self.sport, self.dport) return r + '))' class NFCTSocket(NetlinkSocket): policy = dict((k | (NFNL_SUBSYS_CTNETLINK << 8), v) for k, v in { IPCTNL_MSG_CT_NEW: nfct_msg, IPCTNL_MSG_CT_GET: nfct_msg, IPCTNL_MSG_CT_DELETE: nfct_msg, IPCTNL_MSG_CT_GET_CTRZERO: nfct_msg, IPCTNL_MSG_CT_GET_STATS_CPU: nfct_stats_cpu, IPCTNL_MSG_CT_GET_STATS: nfct_stats, IPCTNL_MSG_CT_GET_DYING: nfct_msg, IPCTNL_MSG_CT_GET_UNCONFIRMED: nfct_msg, }.items()) def __init__(self, nfgen_family=socket.AF_INET): super(NFCTSocket, self).__init__(family=NETLINK_NETFILTER) self.register_policy(self.policy) self._nfgen_family = nfgen_family def request(self, msg, msg_type, **kwargs): msg['nfgen_family'] = self._nfgen_family msg_type |= (NFNL_SUBSYS_CTNETLINK << 8) return self.nlm_request(msg, msg_type, **kwargs) def dump(self, mark=None, mark_mask=0xffffffff): msg = nfct_msg.create_from(mark=mark, mark_mask=mark_mask) return self.request(msg, IPCTNL_MSG_CT_GET, msg_flags=NLM_F_REQUEST | NLM_F_DUMP) def stat(self): return self.request(nfct_msg(), IPCTNL_MSG_CT_GET_STATS_CPU, msg_flags=NLM_F_REQUEST | NLM_F_DUMP) def count(self): return self.request(nfct_msg(), IPCTNL_MSG_CT_GET_STATS, msg_flags=NLM_F_REQUEST | NLM_F_DUMP, terminate=terminate_single_msg) def flush(self, mark=None, mark_mask=0xffffffff): msg = nfct_msg.create_from(mark=mark, mark_mask=mark_mask) return self.request(msg, IPCTNL_MSG_CT_DELETE, msg_flags=NLM_F_REQUEST | NLM_F_ACK, terminate=terminate_error_msg) def conntrack_max_size(self): return self.request(nfct_msg(), IPCTNL_MSG_CT_GET_STATS, msg_flags=NLM_F_REQUEST | NLM_F_DUMP, terminate=terminate_single_msg) def entry(self, cmd, **kwargs): """ Get or change a conntrack entry. 
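        `cmd` is one of 'add', 'set', 'get' or 'del'; deleting an entry
        requires at least `tuple_orig` or `tuple_reply` to be given.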
Examples:: # add an entry ct.entry('add', timeout=30, tuple_orig=NFCTAttrTuple( saddr='192.168.122.1', daddr='192.168.122.67', proto=6, sport=34857, dport=5599), tuple_reply=NFCTAttrTuple( saddr='192.168.122.67', daddr='192.168.122.1', proto=6, sport=5599, dport=34857)) # set mark=5 on the matching entry ct.entry('set', mark=5, tuple_orig=NFCTAttrTuple( saddr='192.168.122.1', daddr='192.168.122.67', proto=6, sport=34857, dport=5599)) # get an entry ct.entry('get', tuple_orig=NFCTAttrTuple( saddr='192.168.122.1', daddr='192.168.122.67', proto=6, sport=34857, dport=5599)) # delete an entry ct.entry('del', tuple_orig=NFCTAttrTuple( saddr='192.168.122.1', daddr='192.168.122.67', proto=6, sport=34857, dport=5599)) """ msg_type, msg_flags = { 'add': [IPCTNL_MSG_CT_NEW, NLM_F_ACK | NLM_F_EXCL | NLM_F_CREATE], 'set': [IPCTNL_MSG_CT_NEW, NLM_F_ACK], 'get': [IPCTNL_MSG_CT_GET, NLM_F_ACK], 'del': [IPCTNL_MSG_CT_DELETE, NLM_F_ACK], }[cmd] if msg_type == IPCTNL_MSG_CT_DELETE and \ not ('tuple_orig' in kwargs or 'tuple_reply' in kwargs): raise ValueError('Deletion requires a tuple at least') return self.request(nfct_msg.create_from(**kwargs), msg_type, msg_flags=NLM_F_REQUEST | msg_flags, terminate=terminate_error_msg) pyroute2-0.5.9/pyroute2/netlink/nfnetlink/nftsocket.py0000644000175000017500000005241113610051400023022 0ustar peetpeet00000000000000""" NFTSocket -- low level nftables API See also: pyroute2.nftables """ import threading from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink import NLM_F_DUMP from pyroute2.netlink import NETLINK_NETFILTER from pyroute2.netlink import nla from pyroute2.netlink.nlsocket import NetlinkSocket from pyroute2.netlink.nfnetlink import nfgen_msg from pyroute2.netlink.nfnetlink import NFNL_SUBSYS_NFTABLES NFT_MSG_NEWTABLE = 0 NFT_MSG_GETTABLE = 1 NFT_MSG_DELTABLE = 2 NFT_MSG_NEWCHAIN = 3 NFT_MSG_GETCHAIN = 4 NFT_MSG_DELCHAIN = 5 NFT_MSG_NEWRULE = 6 NFT_MSG_GETRULE = 7 NFT_MSG_DELRULE = 8 NFT_MSG_NEWSET = 9 NFT_MSG_GETSET = 10 NFT_MSG_DELSET = 11 NFT_MSG_NEWSETELEM = 12 NFT_MSG_GETSETELEM = 13 NFT_MSG_DELSETELEM = 14 NFT_MSG_NEWGEN = 15 NFT_MSG_GETGEN = 16 NFT_MSG_TRACE = 17 class nft_gen_msg(nfgen_msg): nla_map = (('NFTA_GEN_UNSPEC', 'none'), ('NFTA_GEN_ID', 'be32')) class nft_chain_msg(nfgen_msg): prefix = 'NFTA_CHAIN_' nla_map = (('NFTA_CHAIN_UNSPEC', 'none'), ('NFTA_CHAIN_TABLE', 'asciiz'), ('NFTA_CHAIN_HANDLE', 'be64'), ('NFTA_CHAIN_NAME', 'asciiz'), ('NFTA_CHAIN_HOOK', 'hook'), ('NFTA_CHAIN_POLICY', 'be32'), ('NFTA_CHAIN_USE', 'be32'), ('NFTA_CHAIN_TYPE', 'asciiz'), ('NFTA_CHAIN_COUNTERS', 'counters')) class counters(nla): nla_map = (('NFTA_COUNTER_UNSPEC', 'none'), ('NFTA_COUNTER_BYTES', 'be64'), ('NFTA_COUNTER_PACKETS', 'be64')) class hook(nla): nla_map = (('NFTA_HOOK_UNSPEC', 'none'), ('NFTA_HOOK_HOOKNUM', 'be32'), ('NFTA_HOOK_PRIORITY', 'be32'), ('NFTA_HOOK_DEV', 'asciiz')) class nft_map_uint8(nla): ops = {} fields = [('value', 'B')] def decode(self): nla.decode(self) self.value = self.ops.get(self['value']) class nft_map_be32(nft_map_uint8): fields = [('value', '>I')] class nft_map_be32_signed(nft_map_uint8): fields = [('value', '>i')] class nft_flags_be32(nla): fields = [('value', '>I')] ops = None def decode(self): nla.decode(self) self.value = frozenset(o for i, o in enumerate(self.ops) if self['value'] & 1 << i) class nat_flags(nla): class nat_range(nft_flags_be32): ops = ('NF_NAT_RANGE_MAP_IPS', 'NF_NAT_RANGE_PROTO_SPECIFIED', 'NF_NAT_RANGE_PROTO_RANDOM', 'NF_NAT_RANGE_PERSISTENT', 'NF_NAT_RANGE_PROTO_RANDOM_FULLY') class nft_regs(nla): 
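    # mixin used by the expression NLAs below: maps the numeric NFT_REG_*
    # register ids carried in sreg/dreg attributes to their symbolic names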
class regs(nft_map_be32): ops = {0x00: 'NFT_REG_VERDICT', 0x01: 'NFT_REG_1', 0x02: 'NFT_REG_2', 0x03: 'NFT_REG_3', 0x04: 'NFT_REG_4', 0x08: 'NFT_REG32_00', 0x09: 'MFT_REG32_01', 0x0a: 'NFT_REG32_02', 0x0b: 'NFT_REG32_03', 0x0c: 'NFT_REG32_04', 0x0d: 'NFT_REG32_05', 0x0e: 'NFT_REG32_06', 0x0f: 'NFT_REG32_07', 0x10: 'NFT_REG32_08', 0x11: 'NFT_REG32_09', 0x12: 'NFT_REG32_10', 0x13: 'NFT_REG32_11', 0x14: 'NFT_REG32_12', 0x15: 'NFT_REG32_13', 0x16: 'NFT_REG32_14', 0x17: 'NFT_REG32_15'} class nft_data(nla): class nfta_data(nla): nla_map = (('NFTA_DATA_UNSPEC', 'none'), ('NFTA_DATA_VALUE', 'cdata'), ('NFTA_DATA_VERDICT', 'verdict')) class verdict(nla): nla_map = (('NFTA_VERDICT_UNSPEC', 'none'), ('NFTA_VERDICT_CODE', 'verdict_code'), ('NFTA_VERDICT_CHAIN', 'asciiz')) class verdict_code(nft_map_be32_signed): ops = {0: 'NF_DROP', 1: 'NF_ACCEPT', 2: 'NF_STOLEN', 3: 'NF_QUEUE', 4: 'NF_REPEAT', 5: 'NF_STOP', -1: 'NFT_CONTINUE', -2: 'NFT_BREAK', -3: 'NFT_JUMP', -4: 'NFT_GOTO', -5: 'NFT_RETURN'} class nft_rule_msg(nfgen_msg): prefix = 'NFTA_RULE_' nla_map = (('NFTA_RULE_UNSPEC', 'none'), ('NFTA_RULE_TABLE', 'asciiz'), ('NFTA_RULE_CHAIN', 'asciiz'), ('NFTA_RULE_HANDLE', 'be64'), ('NFTA_RULE_EXPRESSIONS', '*rule_expr'), ('NFTA_RULE_COMPAT', 'hex'), ('NFTA_RULE_POSITION', 'be64'), ('NFTA_RULE_USERDATA', 'hex')) class rule_expr(nla): header_type = 1 nla_map = (('NFTA_EXPR_UNSPEC', 'none'), ('NFTA_EXPR_NAME', 'asciiz'), ('NFTA_EXPR_DATA', 'expr')) class nft_bitwise(nft_data, nft_regs): nla_map = (('NFTA_BITWISE_UNSPEC', 'none'), ('NFTA_BITWISE_SREG', 'regs'), ('NFTA_BITWISE_DREG', 'regs'), ('NFTA_BITWISE_LEN', 'be32'), ('NFTA_BITWISE_MASK', 'nfta_data'), ('NFTA_BITWISE_XOR', 'nfta_data')) class nft_byteorder(nft_regs): nla_map = (('NFTA_BYTEORDER_UNSPEC', 'none'), ('NFTA_BYTEORDER_SREG', 'regs'), ('NFTA_BYTEORDER_DREG', 'regs'), ('NFTA_BYTEORDER_OP', 'ops'), ('NFTA_BYTEORDER_LEN', 'be32'), ('NFTA_BYTEORDER_SIZE', 'be32')) class ops(nft_map_be32): ops = {0: 'NFT_BYTEORDER_NTOH', 1: 'NFT_BYTEORDER_HTON'} class nft_cmp(nft_data, nft_regs): nla_map = (('NFTA_CMP_UNSPEC', 'none'), ('NFTA_CMP_SREG', 'regs'), ('NFTA_CMP_OP', 'ops'), ('NFTA_CMP_DATA', 'nfta_data')) class ops(nft_map_be32): ops = {0: 'NFT_CMP_EQ', 1: 'NFT_CMP_NEQ', 2: 'NFT_CMP_LT', 3: 'NFT_CMP_LTE', 4: 'NFT_CMP_GT', 5: 'NFT_CMP_GTE'} class nft_match(nla): nla_map = (('NFTA_MATCH_UNSPEC', 'none'), ('NFTA_MATCH_NAME', 'asciiz'), ('NFTA_MATCH_REV', 'be32'), ('NFTA_MATCH_INFO', 'hex'), ('NFTA_MATCH_PROTOCOL', 'hex'), ('NFTA_MATCH_FLAGS', 'hex')) class nft_target(nla): nla_map = (('NFTA_TARGET_UNSPEC', 'none'), ('NFTA_TARGET_NAME', 'asciiz'), ('NFTA_TARGET_REV', 'be32'), ('NFTA_TARGET_INFO', 'hex'), ('NFTA_TARGET_PROTOCOL', 'hex'), ('NFTA_TARGET_FLAGS', 'hex')) class nft_counter(nla): nla_map = (('NFTA_COUNTER_UNSPEC', 'none'), ('NFTA_COUNTER_BYTES', 'be64'), ('NFTA_COUNTER_PACKETS', 'be64')) class nft_ct(nft_regs): nla_map = (('NFTA_CT_UNSPEC', 'none'), ('NFTA_CT_DREG', 'regs'), ('NFTA_CT_KEY', 'keys'), ('NFTA_CT_DIRECTION', 'uint8'), ('NFTA_CT_SREG', 'regs')) class keys(nft_map_be32): ops = {0x00: 'NFT_CT_STATE', 0x01: 'NFT_CT_DIRECTION', 0x02: 'NFT_CT_STATUS', 0x03: 'NFT_CT_MARK', 0x04: 'NFT_CT_SECMARK', 0x05: 'NFT_CT_EXPIRATION', 0x06: 'NFT_CT_HELPER', 0x07: 'NFT_CT_L3PROTOCOL', 0x08: 'NFT_CT_SRC', 0x09: 'NFT_CT_DST', 0x0a: 'NFT_CT_PROTOCOL', 0x0b: 'NFT_CT_PROTO_SRC', 0x0c: 'NFT_CT_PROTO_DST', 0x0d: 'NFT_CT_LABELS', 0x0e: 'NFT_CT_PKTS', 0x0f: 'NFT_CT_BYTES'} class nft_exthdr(nft_regs): nla_map = (('NFTA_EXTHDR_UNSPEC', 'none'), ('NFTA_EXTHDR_DREG', 
'regs'), ('NFTA_EXTHDR_TYPE', 'uint8'), ('NFTA_EXTHDR_OFFSET', 'be32'), ('NFTA_EXTHDR_LEN', 'be32'), ('NFTA_EXTHDR_FLAGS', 'exthdr_flags'), ('NFTA_EXTHDR_OP', 'exthdr_op'), ('NFTA_EXTHDR_SREG', 'regs')) class exthdr_flags(nft_flags_be32): ops = ('NFT_EXTHDR_F_PRESENT',) class exthdr_op(nft_map_be32): ops = {0: 'NFT_EXTHDR_OP_IPV6', 1: 'NFT_EXTHDR_OP_TCPOPT'} class nft_immediate(nft_data, nft_regs): nla_map = (('NFTA_IMMEDIATE_UNSPEC', 'none'), ('NFTA_IMMEDIATE_DREG', 'regs'), ('NFTA_IMMEDIATE_DATA', 'nfta_data')) class nft_limit(nla): nla_map = (('NFTA_LIMIT_UNSPEC', 'none'), ('NFTA_LIMIT_RATE', 'be64'), ('NFTA_LIMIT_UNIT', 'be64'), ('NFTA_LIMIT_BURST', 'be32'), ('NFTA_LIMIT_TYPE', 'types'), ('NFTA_LIMIT_FLAGS', 'be32')) # make flags type class types(nft_map_be32): ops = {0: 'NFT_LIMIT_PKTS', 1: 'NFT_LIMIT_PKT_BYTES'} class nft_log(nla): nla_map = (('NFTA_LOG_UNSPEC', 'none'), ('NFTA_LOG_GROUP', 'be32'), ('NFTA_LOG_PREFIX', 'asciiz'), ('NFTA_LOG_SNAPLEN', 'be32'), ('NFTA_LOG_QTHRESHOLD', 'be32'), ('NFTA_LOG_LEVEL', 'be32'), ('NFTA_LOG_FLAGS', 'be32')) class nft_lookup(nft_regs): nla_map = (('NFTA_LOOKUP_UNSPEC', 'none'), ('NFTA_LOOKUP_SET', 'asciiz'), ('NFTA_LOOKUP_SREG', 'regs'), ('NFTA_LOOKUP_DREG', 'regs'), ('NFTA_LOOKUP_SET_ID', 'be32'), ('NFTA_LOOKUP_FLAGS', 'lookup_flags')) class lookup_flags(nft_flags_be32): ops = ('NFT_LOOKUP_F_INV',) class nft_masq(nft_regs, nat_flags): nla_map = (('NFTA_MASQ_UNSPEC', 'none'), ('NFTA_MASQ_FLAGS', 'nat_range'), ('NFTA_MASQ_REG_PROTO_MIN', 'regs'), ('NFTA_MASQ_REG_PROTO_MAX', 'regs')) class nft_meta(nft_regs): nla_map = (('NFTA_META_UNSPEC', 'none'), ('NFTA_META_DREG', 'regs'), ('NFTA_META_KEY', 'meta_key'), ('NFTA_META_SREG', 'regs')) class meta_key(nft_map_be32): ops = {0: 'NFT_META_LEN', 1: 'NFT_META_PROTOCOL', 2: 'NFT_META_PRIORITY', 3: 'NFT_META_MARK', 4: 'NFT_META_IIF', 5: 'NFT_META_OIF', 6: 'NFT_META_IIFNAME', 7: 'NFT_META_OIFNAME', 8: 'NFT_META_IIFTYPE', 9: 'NFT_META_OIFTYPE', 10: 'NFT_META_SKUID', 11: 'NFT_META_SKGID', 12: 'NFT_META_NFTRACE', 13: 'NFT_META_RTCLASSID', 14: 'NFT_META_SECMARK', 15: 'NFT_META_NFPROTO', 16: 'NFT_META_L4PROTO', 17: 'NFT_META_BRI_IIFNAME', 18: 'NFT_META_BRI_OIFNAME', 19: 'NFT_META_PKTTYPE', 20: 'NFT_META_CPU', 21: 'NFT_META_IIFGROUP', 22: 'NFT_META_OIFGROUP', 23: 'NFT_META_CGROUP', 24: 'NFT_META_PRANDOM'} class nft_nat(nft_regs, nat_flags): nla_map = (('NFTA_NAT_UNSPEC', 'none'), ('NFTA_NAT_TYPE', 'types'), ('NFTA_NAT_FAMILY', 'be32'), ('NFTA_NAT_REG_ADDR_MIN', 'regs'), ('NFTA_NAT_REG_ADDR_MAX', 'regs'), ('NFTA_NAT_REG_PROTO_MIN', 'regs'), ('NFTA_NAT_REG_PROTO_MAX', 'regs'), ('NFTA_NAT_FLAGS', 'nat_range')) class types(nft_map_be32): ops = {0: 'NFT_NAT_SNAT', 1: 'NFT_NAT_DNAT'} class nft_payload(nft_regs): nla_map = (('NFTA_PAYLOAD_UNSPEC', 'none'), ('NFTA_PAYLOAD_DREG', 'regs'), ('NFTA_PAYLOAD_BASE', 'base_type'), ('NFTA_PAYLOAD_OFFSET', 'be32'), ('NFTA_PAYLOAD_LEN', 'be32'), ('NFTA_PAYLOAD_SREG', 'regs'), ('NFTA_PAYLOAD_CSUM_TYPE', 'csum_type'), ('NFTA_PAYLOAD_CSUM_OFFSET', 'be32')) class base_type(nft_map_be32): ops = {0: 'NFT_PAYLOAD_LL_HEADER', 1: 'NFT_PAYLOAD_NETWORK_HEADER', 2: 'NFT_PAYLOAD_TRANSPORT_HEADER'} class csum_type(nft_map_be32): ops = {0: 'NFT_PAYLOAD_CSUM_NONE', 1: 'NFT_PAYLOAD_CSUM_INET'} # RFC 791 class nft_queue(nla): nla_map = (('NFTA_QUEUE_UNSPEC', 'none'), ('NFTA_QUEUE_NUM', 'be16'), ('NFTA_QUEUE_TOTAL', 'be16'), ('NFTA_QUEUE_FLAGS', 'be16')) class nft_redir(nft_regs, nat_flags): nla_map = (('NFTA_REDIR_UNSPEC', 'none'), ('NFTA_REDIR_REG_PROTO_MIN', 'regs'), ('NFTA_REDIR_REG_PROTO_MAX', 
'regs'), ('NFTA_REDIR_FLAGS', 'nat_range')) class nft_reject(nla): nla_map = (('NFTA_REJECT_UNSPEC', 'none'), ('NFTA_REJECT_TYPE', 'types'), ('NFTA_REJECT_ICMP_CODE', 'codes')) class types(nft_map_be32): ops = {0: 'NFT_REJECT_ICMP_UNREACH', 1: 'NFT_REJECT_TCP_RST', 2: 'NFT_REJECT_ICMPX_UNREACH'} class codes(nft_map_uint8): ops = {0: 'NFT_REJECT_ICMPX_NO_ROUTE', 1: 'NFT_REJECT_ICMPX_PORT_UNREACH', 2: 'NFT_REJECT_ICMPX_HOST_UNREACH', 3: 'NFT_REJECT_ICMPX_ADMIN_PROHIBITED'} class nft_dynset(nft_regs): rule_expr = None nla_map = (('NFTA_QUEUE_UNSPEC', 'none'), ('NFTA_DYNSET_SET_NAME', 'asciiz'), ('NFTA_DYNSET_SET_ID', 'be32'), ('NFTA_DYNSET_OP', 'dynset_op'), ('NFTA_DYNSET_SREG_KEY', 'regs'), ('NFTA_DYNSET_SREG_DATA', 'regs'), ('NFTA_DYNSET_TIMEOUT', 'be64'), ('NFTA_DYNSET_EXPR', 'rule_expr'), ('NFTA_DYNSET_PAD', 'hex'), ('NFTA_DYNSET_FLAGS', 'dynset_flags')) class dynset_flags(nft_flags_be32): ops = ('NFT_DYNSET_F_INV',) class dynset_op(nft_map_be32): ops = {0: 'NFT_DYNSET_OP_ADD', 1: 'NFT_DYNSET_OP_UPDATE'} @staticmethod def expr(self, *argv, **kwarg): data_type = self.get_attr('NFTA_EXPR_NAME') expr = getattr(self, 'nft_%s' % data_type, self.hex) if hasattr(expr, 'rule_expr'): expr.rule_expr = self.__class__ return expr class nft_set_msg(nfgen_msg): nla_map = (('NFTA_SET_UNSPEC', 'none'), ('NFTA_SET_TABLE', 'asciiz'), ('NFTA_SET_NAME', 'asciiz'), ('NFTA_SET_FLAGS', 'set_flags'), ('NFTA_SET_KEY_TYPE', 'be32'), ('NFTA_SET_KEY_LEN', 'be32'), ('NFTA_SET_DATA_TYPE', 'be32'), ('NFTA_SET_DATA_LEN', 'be32'), ('NFTA_SET_POLICY', 'be32'), ('NFTA_SET_DESC', 'hex'), ('NFTA_SET_ID', 'be32'), ('NFTA_SET_TIMEOUT', 'be32'), ('NFTA_SET_GC_INTERVAL', 'be32'), ('NFTA_SET_USERDATA', 'hex'), ('NFTA_SET_PAD', 'hex'), ('NFTA_SET_OBJ_TYPE', 'be32')) class set_flags(nft_flags_be32): ops = ('NFT_SET_ANONYMOUS', 'NFT_SET_CONSTANT', 'NFT_SET_INTERVAL', 'NFT_SET_MAP', 'NFT_SET_TIMEOUT', 'NFT_SET_EVAL', 'NFT_SET_OBJECT') class nft_table_msg(nfgen_msg): prefix = 'NFTA_TABLE_' nla_map = (('NFTA_TABLE_UNSPEC', 'none'), ('NFTA_TABLE_NAME', 'asciiz'), ('NFTA_TABLE_FLAGS', 'be32'), ('NFTA_TABLE_USE', 'be32')) class NFTSocket(NetlinkSocket): ''' NFNetlink socket (family=NETLINK_NETFILTER). Implements API to the nftables functionality. 
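
    A minimal read-only sketch (all names as defined in this module; the
    higher level wrapper lives in `pyroute2.nftables`)::

        nft = NFTSocket()
        # dump the tables; request_get() adds the nftables subsystem id
        # and the NLM_F_DUMP flag for us
        for table in nft.request_get(nfgen_msg(), NFT_MSG_GETTABLE):
            print(table.get_attr('NFTA_TABLE_NAME'))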
''' policy = {NFT_MSG_NEWTABLE: nft_table_msg, NFT_MSG_GETTABLE: nft_table_msg, NFT_MSG_DELTABLE: nft_table_msg, NFT_MSG_NEWCHAIN: nft_chain_msg, NFT_MSG_GETCHAIN: nft_chain_msg, NFT_MSG_DELCHAIN: nft_chain_msg, NFT_MSG_NEWRULE: nft_rule_msg, NFT_MSG_GETRULE: nft_rule_msg, NFT_MSG_DELRULE: nft_rule_msg, NFT_MSG_NEWSET: nft_set_msg, NFT_MSG_GETSET: nft_set_msg, NFT_MSG_DELSET: nft_set_msg, NFT_MSG_NEWGEN: nft_gen_msg, NFT_MSG_GETGEN: nft_gen_msg} def __init__(self, version=1, attr_revision=0, nfgen_family=2): super(NFTSocket, self).__init__(family=NETLINK_NETFILTER) policy = dict([(x | (NFNL_SUBSYS_NFTABLES << 8), y) for (x, y) in self.policy.items()]) self.register_policy(policy) self._proto_version = version self._attr_revision = attr_revision self._nfgen_family = nfgen_family self._ts = threading.local() self._write_lock = threading.RLock() def begin(self): with self._write_lock: if hasattr(self._ts, 'data'): # transaction is already started return False self._ts.data = b'' self._ts.seqnum = (self.addr_pool.alloc(), # begin self.addr_pool.alloc(), # tx self.addr_pool.alloc()) # commit msg = nfgen_msg() msg['res_id'] = NFNL_SUBSYS_NFTABLES msg['header']['type'] = 0x10 msg['header']['flags'] = NLM_F_REQUEST msg['header']['sequence_number'] = self._ts.seqnum[0] msg.encode() self._ts.data += msg.data return True def commit(self): with self._write_lock: msg = nfgen_msg() msg['res_id'] = NFNL_SUBSYS_NFTABLES msg['header']['type'] = 0x11 msg['header']['flags'] = NLM_F_REQUEST msg['header']['sequence_number'] = self._ts.seqnum[2] msg.encode() self._ts.data += msg.data self.sendto(self._ts.data, (0, 0)) for seqnum in self._ts.seqnum: self.addr_pool.free(seqnum, ban=10) del self._ts.data def request_get(self, msg, msg_type, msg_flags=NLM_F_REQUEST | NLM_F_DUMP, terminate=None): ''' Read-only requests do not require transactions. Just run the request and get an answer. ''' msg['nfgen_family'] = self._nfgen_family return self.nlm_request(msg, msg_type | (NFNL_SUBSYS_NFTABLES << 8), msg_flags, terminate=terminate) def request_put(self, msg, msg_type, msg_flags=NLM_F_REQUEST): ''' Read-write requests. ''' one_shot = self.begin() msg['header']['type'] = (NFNL_SUBSYS_NFTABLES << 8) | msg_type msg['header']['flags'] = msg_flags msg['header']['sequence_number'] = self._ts.seqnum[1] msg['nfgen_family'] = self._nfgen_family msg.encode() self._ts.data += msg.data if one_shot: self.commit() def _command(self, msg_class, commands, cmd, kwarg, flags=NLM_F_REQUEST): cmd = commands[cmd] msg = msg_class() msg['attrs'] = [] # # a trick to pass keyword arguments as On rderedDict instance: # # ordered_args = OrderedDict() # ordered_args['arg1'] = value1 # ordered_args['arg2'] = value2 # ... 
# nft.rule('add', kwarg=ordered_args) # if 'kwarg' in kwarg: kwarg = kwarg['kwarg'] # for key, value in kwarg.items(): nla = msg_class.name2nla(key) msg['attrs'].append([nla, value]) # return self.request_put(msg, msg_type=cmd, msg_flags=flags) pyroute2-0.5.9/pyroute2/netlink/nl80211/0000755000175000017500000000000013621220110017460 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/nl80211/__init__.py0000644000175000017500000011411613610051400021577 0ustar peetpeet00000000000000''' NL80211 module ============== TODO ''' import struct import datetime from pyroute2.common import map_namespace from pyroute2.netlink import genlmsg from pyroute2.netlink.generic import GenericNetlinkSocket from pyroute2.netlink.nlsocket import Marshal from pyroute2.netlink import nla from pyroute2.netlink import nla_base # nl80211 commands NL80211_CMD_UNSPEC = 0 NL80211_CMD_GET_WIPHY = 1 NL80211_CMD_SET_WIPHY = 2 NL80211_CMD_NEW_WIPHY = 3 NL80211_CMD_DEL_WIPHY = 4 NL80211_CMD_GET_INTERFACE = 5 NL80211_CMD_SET_INTERFACE = 6 NL80211_CMD_NEW_INTERFACE = 7 NL80211_CMD_DEL_INTERFACE = 8 NL80211_CMD_GET_KEY = 9 NL80211_CMD_SET_KEY = 10 NL80211_CMD_NEW_KEY = 11 NL80211_CMD_DEL_KEY = 12 NL80211_CMD_GET_BEACON = 13 NL80211_CMD_SET_BEACON = 14 NL80211_CMD_START_AP = 15 NL80211_CMD_NEW_BEACON = NL80211_CMD_START_AP NL80211_CMD_STOP_AP = 16 NL80211_CMD_DEL_BEACON = NL80211_CMD_STOP_AP NL80211_CMD_GET_STATION = 17 NL80211_CMD_SET_STATION = 18 NL80211_CMD_NEW_STATION = 19 NL80211_CMD_DEL_STATION = 20 NL80211_CMD_GET_MPATH = 21 NL80211_CMD_SET_MPATH = 22 NL80211_CMD_NEW_MPATH = 23 NL80211_CMD_DEL_MPATH = 24 NL80211_CMD_SET_BSS = 25 NL80211_CMD_SET_REG = 26 NL80211_CMD_REQ_SET_REG = 27 NL80211_CMD_GET_MESH_CONFIG = 28 NL80211_CMD_SET_MESH_CONFIG = 29 NL80211_CMD_SET_MGMT_EXTRA_IE = 30 NL80211_CMD_GET_REG = 31 NL80211_CMD_GET_SCAN = 32 NL80211_CMD_TRIGGER_SCAN = 33 NL80211_CMD_NEW_SCAN_RESULTS = 34 NL80211_CMD_SCAN_ABORTED = 35 NL80211_CMD_REG_CHANGE = 36 NL80211_CMD_AUTHENTICATE = 37 NL80211_CMD_ASSOCIATE = 38 NL80211_CMD_DEAUTHENTICATE = 39 NL80211_CMD_DISASSOCIATE = 40 NL80211_CMD_MICHAEL_MIC_FAILURE = 41 NL80211_CMD_REG_BEACON_HINT = 42 NL80211_CMD_JOIN_IBSS = 43 NL80211_CMD_LEAVE_IBSS = 44 NL80211_CMD_TESTMODE = 45 NL80211_CMD_CONNECT = 46 NL80211_CMD_ROAM = 47 NL80211_CMD_DISCONNECT = 48 NL80211_CMD_SET_WIPHY_NETNS = 49 NL80211_CMD_GET_SURVEY = 50 NL80211_CMD_NEW_SURVEY_RESULTS = 51 NL80211_CMD_SET_PMKSA = 52 NL80211_CMD_DEL_PMKSA = 53 NL80211_CMD_FLUSH_PMKSA = 54 NL80211_CMD_REMAIN_ON_CHANNEL = 55 NL80211_CMD_CANCEL_REMAIN_ON_CHANNEL = 56 NL80211_CMD_SET_TX_BITRATE_MASK = 57 NL80211_CMD_REGISTER_FRAME = 58 NL80211_CMD_REGISTER_ACTION = NL80211_CMD_REGISTER_FRAME NL80211_CMD_FRAME = 59 NL80211_CMD_ACTION = NL80211_CMD_FRAME NL80211_CMD_FRAME_TX_STATUS = 60 NL80211_CMD_ACTION_TX_STATUS = NL80211_CMD_FRAME_TX_STATUS NL80211_CMD_SET_POWER_SAVE = 61 NL80211_CMD_GET_POWER_SAVE = 62 NL80211_CMD_SET_CQM = 63 NL80211_CMD_NOTIFY_CQM = 64 NL80211_CMD_SET_CHANNEL = 65 NL80211_CMD_SET_WDS_PEER = 66 NL80211_CMD_FRAME_WAIT_CANCEL = 67 NL80211_CMD_JOIN_MESH = 68 NL80211_CMD_LEAVE_MESH = 69 NL80211_CMD_UNPROT_DEAUTHENTICATE = 70 NL80211_CMD_UNPROT_DISASSOCIATE = 71 NL80211_CMD_NEW_PEER_CANDIDATE = 72 NL80211_CMD_GET_WOWLAN = 73 NL80211_CMD_SET_WOWLAN = 74 NL80211_CMD_START_SCHED_SCAN = 75 NL80211_CMD_STOP_SCHED_SCAN = 76 NL80211_CMD_SCHED_SCAN_RESULTS = 77 NL80211_CMD_SCHED_SCAN_STOPPED = 78 NL80211_CMD_SET_REKEY_OFFLOAD = 79 NL80211_CMD_PMKSA_CANDIDATE = 80 NL80211_CMD_TDLS_OPER = 81 NL80211_CMD_TDLS_MGMT = 82 
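
# Illustrative sketch, not part of the original module: the NL80211_CMD_*
# values above are plain generic netlink commands, used the same way as in
# the other genl modules of this package (cf. WireGuard.info()): put the
# command into an nl80211cmd message (defined further below) and send it
# through a bound nl80211 generic netlink socket, e.g. pyroute2's NL80211.
def _example_dump_interfaces(sock):
    # 'sock' is assumed to be a bound nl80211 GenericNetlinkSocket instance
    from pyroute2.netlink import NLM_F_REQUEST, NLM_F_DUMP
    msg = nl80211cmd()
    msg['cmd'] = NL80211_CMD_GET_INTERFACE
    return sock.nlm_request(msg,
                            msg_type=sock.prid,
                            msg_flags=NLM_F_REQUEST | NLM_F_DUMP)
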
NL80211_CMD_UNEXPECTED_FRAME = 83 NL80211_CMD_PROBE_CLIENT = 84 NL80211_CMD_REGISTER_BEACONS = 85 NL80211_CMD_UNEXPECTED_4ADDR_FRAME = 86 NL80211_CMD_SET_NOACK_MAP = 87 NL80211_CMD_CH_SWITCH_NOTIFY = 88 NL80211_CMD_START_P2P_DEVICE = 89 NL80211_CMD_STOP_P2P_DEVICE = 90 NL80211_CMD_CONN_FAILED = 91 NL80211_CMD_SET_MCAST_RATE = 92 NL80211_CMD_SET_MAC_ACL = 93 NL80211_CMD_RADAR_DETECT = 94 NL80211_CMD_GET_PROTOCOL_FEATURES = 95 NL80211_CMD_UPDATE_FT_IES = 96 NL80211_CMD_FT_EVENT = 97 NL80211_CMD_CRIT_PROTOCOL_START = 98 NL80211_CMD_CRIT_PROTOCOL_STOP = 99 NL80211_CMD_GET_COALESCE = 100 NL80211_CMD_SET_COALESCE = 101 NL80211_CMD_CHANNEL_SWITCH = 102 NL80211_CMD_VENDOR = 103 NL80211_CMD_SET_QOS_MAP = 104 NL80211_CMD_ADD_TX_TS = 105 NL80211_CMD_DEL_TX_TS = 106 NL80211_CMD_GET_MPP = 107 NL80211_CMD_JOIN_OCB = 108 NL80211_CMD_LEAVE_OCB = 109 NL80211_CMD_CH_SWITCH_STARTED_NOTIFY = 110 NL80211_CMD_TDLS_CHANNEL_SWITCH = 111 NL80211_CMD_TDLS_CANCEL_CHANNEL_SWITCH = 112 NL80211_CMD_WIPHY_REG_CHANGE = 113 NL80211_CMD_MAX = NL80211_CMD_WIPHY_REG_CHANGE (NL80211_NAMES, NL80211_VALUES) = map_namespace('NL80211_CMD_', globals()) NL80211_BSS_ELEMENTS_SSID = 0 NL80211_BSS_ELEMENTS_SUPPORTED_RATES = 1 NL80211_BSS_ELEMENTS_CHANNEL = 3 NL80211_BSS_ELEMENTS_TIM = 5 NL80211_BSS_ELEMENTS_RSN = 48 NL80211_BSS_ELEMENTS_EXTENDED_RATE = 50 NL80211_BSS_ELEMENTS_VENDOR = 221 BSS_MEMBERSHIP_SELECTOR_HT_PHY = 127 BSS_MEMBERSHIP_SELECTOR_VHT_PHY = 126 # interface types NL80211_IFTYPE_UNSPECIFIED = 0 NL80211_IFTYPE_ADHOC = 1 NL80211_IFTYPE_STATION = 2 NL80211_IFTYPE_AP = 3 NL80211_IFTYPE_AP_VLAN = 4 NL80211_IFTYPE_WDS = 5 NL80211_IFTYPE_MONITOR = 6 NL80211_IFTYPE_MESH_POINT = 7 NL80211_IFTYPE_P2P_CLIENT = 8 NL80211_IFTYPE_P2P_GO = 9 NL80211_IFTYPE_P2P_DEVICE = 10 NL80211_IFTYPE_OCB = 11 (IFTYPE_NAMES, IFTYPE_VALUES) = map_namespace('NL80211_IFTYPE_', globals(), normalize=True) # channel width NL80211_CHAN_WIDTH_20_NOHT = 0 # 20 MHz non-HT channel NL80211_CHAN_WIDTH_20 = 1 # 20 MHz HT channel NL80211_CHAN_WIDTH_40 = 2 # 40 MHz HT channel NL80211_CHAN_WIDTH_80 = 3 # 80 MHz channel NL80211_CHAN_WIDTH_80P80 = 4 # 80+80 MHz channel NL80211_CHAN_WIDTH_160 = 5 # 160 MHz channel NL80211_CHAN_WIDTH_5 = 6 # 5 MHz OFDM channel NL80211_CHAN_WIDTH_10 = 7 # 10 MHz OFDM channel (CHAN_WIDTH, WIDTH_VALUES) = map_namespace('NL80211_CHAN_WIDTH_', globals(), normalize=True) # BSS "status" NL80211_BSS_STATUS_AUTHENTICATED = 0 # Authenticated with this BS NL80211_BSS_STATUS_ASSOCIATED = 1 # Associated with this BSS NL80211_BSS_STATUS_IBSS_JOINED = 2 # Joined to this IBSS (BSS_STATUS_NAMES, BSS_STATUS_VALUES) = map_namespace('NL80211_BSS_STATUS_', globals(), normalize=True) NL80211_SCAN_FLAG_LOW_PRIORITY = 1 << 0 NL80211_SCAN_FLAG_FLUSH = 1 << 1 NL80211_SCAN_FLAG_AP = 1 << 2 NL80211_SCAN_FLAG_RANDOM_ADDR = 1 << 3 NL80211_SCAN_FLAG_FILS_MAX_CHANNEL_TIME = 1 << 4 NL80211_SCAN_FLAG_ACCEPT_BCAST_PROBE_RESP = 1 << 5 NL80211_SCAN_FLAG_OCE_PROBE_REQ_HIGH_TX_RATE = 1 << 6 NL80211_SCAN_FLAG_OCE_PROBE_REQ_DEFERRAL_SUPPRESSION = 1 << 7 (SCAN_FLAGS_NAMES, SCAN_FLAGS_VALUES) = map_namespace('NL80211_SCAN_FLAG_', globals()) NL80211_STA_FLAG_AUTHORIZED = 1 NL80211_STA_FLAG_SHORT_PREAMBLE = 2 NL80211_STA_FLAG_WME = 3 NL80211_STA_FLAG_MFP = 4 NL80211_STA_FLAG_AUTHENTICATED = 5 NL80211_STA_FLAG_TDLS_PEER = 6 NL80211_STA_FLAG_ASSOCIATED = 7 (STA_FLAG_NAMES, STA_FLAG_VALUES) = map_namespace('NL80211_STA_FLAG_', globals()) class nl80211cmd(genlmsg): prefix = 'NL80211_ATTR_' nla_map = (('NL80211_ATTR_UNSPEC', 'none'), ('NL80211_ATTR_WIPHY', 'uint32'), 
('NL80211_ATTR_WIPHY_NAME', 'asciiz'), ('NL80211_ATTR_IFINDEX', 'uint32'), ('NL80211_ATTR_IFNAME', 'asciiz'), ('NL80211_ATTR_IFTYPE', 'uint32'), ('NL80211_ATTR_MAC', 'l2addr'), ('NL80211_ATTR_KEY_DATA', 'hex'), ('NL80211_ATTR_KEY_IDX', 'hex'), ('NL80211_ATTR_KEY_CIPHER', 'uint32'), ('NL80211_ATTR_KEY_SEQ', 'hex'), ('NL80211_ATTR_KEY_DEFAULT', 'hex'), ('NL80211_ATTR_BEACON_INTERVAL', 'hex'), ('NL80211_ATTR_DTIM_PERIOD', 'hex'), ('NL80211_ATTR_BEACON_HEAD', 'hex'), ('NL80211_ATTR_BEACON_TAIL', 'hex'), ('NL80211_ATTR_STA_AID', 'hex'), ('NL80211_ATTR_STA_FLAGS', 'hex'), ('NL80211_ATTR_STA_LISTEN_INTERVAL', 'hex'), ('NL80211_ATTR_STA_SUPPORTED_RATES', 'hex'), ('NL80211_ATTR_STA_VLAN', 'hex'), ('NL80211_ATTR_STA_INFO', 'STAInfo'), ('NL80211_ATTR_WIPHY_BANDS', 'hex'), ('NL80211_ATTR_MNTR_FLAGS', 'hex'), ('NL80211_ATTR_MESH_ID', 'hex'), ('NL80211_ATTR_STA_PLINK_ACTION', 'hex'), ('NL80211_ATTR_MPATH_NEXT_HOP', 'hex'), ('NL80211_ATTR_MPATH_INFO', 'hex'), ('NL80211_ATTR_BSS_CTS_PROT', 'hex'), ('NL80211_ATTR_BSS_SHORT_PREAMBLE', 'hex'), ('NL80211_ATTR_BSS_SHORT_SLOT_TIME', 'hex'), ('NL80211_ATTR_HT_CAPABILITY', 'hex'), ('NL80211_ATTR_SUPPORTED_IFTYPES', 'hex'), ('NL80211_ATTR_REG_ALPHA2', 'hex'), ('NL80211_ATTR_REG_RULES', 'hex'), ('NL80211_ATTR_MESH_CONFIG', 'hex'), ('NL80211_ATTR_BSS_BASIC_RATES', 'hex'), ('NL80211_ATTR_WIPHY_TXQ_PARAMS', 'hex'), ('NL80211_ATTR_WIPHY_FREQ', 'uint32'), ('NL80211_ATTR_WIPHY_CHANNEL_TYPE', 'hex'), ('NL80211_ATTR_KEY_DEFAULT_MGMT', 'hex'), ('NL80211_ATTR_MGMT_SUBTYPE', 'hex'), ('NL80211_ATTR_IE', 'hex'), ('NL80211_ATTR_MAX_NUM_SCAN_SSIDS', 'uint8'), ('NL80211_ATTR_SCAN_FREQUENCIES', 'hex'), ('NL80211_ATTR_SCAN_SSIDS', '*string'), ('NL80211_ATTR_GENERATION', 'uint32'), ('NL80211_ATTR_BSS', 'bss'), ('NL80211_ATTR_REG_INITIATOR', 'hex'), ('NL80211_ATTR_REG_TYPE', 'hex'), ('NL80211_ATTR_SUPPORTED_COMMANDS', 'hex'), ('NL80211_ATTR_FRAME', 'hex'), ('NL80211_ATTR_SSID', 'string'), ('NL80211_ATTR_AUTH_TYPE', 'uint32'), ('NL80211_ATTR_REASON_CODE', 'uint16'), ('NL80211_ATTR_KEY_TYPE', 'hex'), ('NL80211_ATTR_MAX_SCAN_IE_LEN', 'uint16'), ('NL80211_ATTR_CIPHER_SUITES', 'hex'), ('NL80211_ATTR_FREQ_BEFORE', 'hex'), ('NL80211_ATTR_FREQ_AFTER', 'hex'), ('NL80211_ATTR_FREQ_FIXED', 'hex'), ('NL80211_ATTR_WIPHY_RETRY_SHORT', 'uint8'), ('NL80211_ATTR_WIPHY_RETRY_LONG', 'uint8'), ('NL80211_ATTR_WIPHY_FRAG_THRESHOLD', 'hex'), ('NL80211_ATTR_WIPHY_RTS_THRESHOLD', 'hex'), ('NL80211_ATTR_TIMED_OUT', 'hex'), ('NL80211_ATTR_USE_MFP', 'hex'), ('NL80211_ATTR_STA_FLAGS2', 'hex'), ('NL80211_ATTR_CONTROL_PORT', 'hex'), ('NL80211_ATTR_TESTDATA', 'hex'), ('NL80211_ATTR_PRIVACY', 'hex'), ('NL80211_ATTR_DISCONNECTED_BY_AP', 'hex'), ('NL80211_ATTR_STATUS_CODE', 'hex'), ('NL80211_ATTR_CIPHER_SUITES_PAIRWISE', 'hex'), ('NL80211_ATTR_CIPHER_SUITE_GROUP', 'hex'), ('NL80211_ATTR_WPA_VERSIONS', 'hex'), ('NL80211_ATTR_AKM_SUITES', 'hex'), ('NL80211_ATTR_REQ_IE', 'hex'), ('NL80211_ATTR_RESP_IE', 'hex'), ('NL80211_ATTR_PREV_BSSID', 'hex'), ('NL80211_ATTR_KEY', 'hex'), ('NL80211_ATTR_KEYS', 'hex'), ('NL80211_ATTR_PID', 'hex'), ('NL80211_ATTR_4ADDR', 'hex'), ('NL80211_ATTR_SURVEY_INFO', 'hex'), ('NL80211_ATTR_PMKID', 'hex'), ('NL80211_ATTR_MAX_NUM_PMKIDS', 'uint8'), ('NL80211_ATTR_DURATION', 'hex'), ('NL80211_ATTR_COOKIE', 'hex'), ('NL80211_ATTR_WIPHY_COVERAGE_CLASS', 'uint8'), ('NL80211_ATTR_TX_RATES', 'hex'), ('NL80211_ATTR_FRAME_MATCH', 'hex'), ('NL80211_ATTR_ACK', 'hex'), ('NL80211_ATTR_PS_STATE', 'hex'), ('NL80211_ATTR_CQM', 'hex'), ('NL80211_ATTR_LOCAL_STATE_CHANGE', 'hex'), ('NL80211_ATTR_AP_ISOLATE', 'hex'), 
('NL80211_ATTR_WIPHY_TX_POWER_SETTING', 'hex'), ('NL80211_ATTR_WIPHY_TX_POWER_LEVEL', 'hex'), ('NL80211_ATTR_TX_FRAME_TYPES', 'hex'), ('NL80211_ATTR_RX_FRAME_TYPES', 'hex'), ('NL80211_ATTR_FRAME_TYPE', 'hex'), ('NL80211_ATTR_CONTROL_PORT_ETHERTYPE', 'hex'), ('NL80211_ATTR_CONTROL_PORT_NO_ENCRYPT', 'hex'), ('NL80211_ATTR_SUPPORT_IBSS_RSN', 'hex'), ('NL80211_ATTR_WIPHY_ANTENNA_TX', 'hex'), ('NL80211_ATTR_WIPHY_ANTENNA_RX', 'hex'), ('NL80211_ATTR_MCAST_RATE', 'hex'), ('NL80211_ATTR_OFFCHANNEL_TX_OK', 'hex'), ('NL80211_ATTR_BSS_HT_OPMODE', 'hex'), ('NL80211_ATTR_KEY_DEFAULT_TYPES', 'hex'), ('NL80211_ATTR_MAX_REMAIN_ON_CHANNEL_DURATION', 'hex'), ('NL80211_ATTR_MESH_SETUP', 'hex'), ('NL80211_ATTR_WIPHY_ANTENNA_AVAIL_TX', 'uint32'), ('NL80211_ATTR_WIPHY_ANTENNA_AVAIL_RX', 'uint32'), ('NL80211_ATTR_SUPPORT_MESH_AUTH', 'hex'), ('NL80211_ATTR_STA_PLINK_STATE', 'hex'), ('NL80211_ATTR_WOWLAN_TRIGGERS', 'hex'), ('NL80211_ATTR_WOWLAN_TRIGGERS_SUPPORTED', 'hex'), ('NL80211_ATTR_SCHED_SCAN_INTERVAL', 'hex'), ('NL80211_ATTR_INTERFACE_COMBINATIONS', 'hex'), ('NL80211_ATTR_SOFTWARE_IFTYPES', 'hex'), ('NL80211_ATTR_REKEY_DATA', 'hex'), ('NL80211_ATTR_MAX_NUM_SCHED_SCAN_SSIDS', 'uint8'), ('NL80211_ATTR_MAX_SCHED_SCAN_IE_LEN', 'uint16'), ('NL80211_ATTR_SCAN_SUPP_RATES', 'hex'), ('NL80211_ATTR_HIDDEN_SSID', 'hex'), ('NL80211_ATTR_IE_PROBE_RESP', 'hex'), ('NL80211_ATTR_IE_ASSOC_RESP', 'hex'), ('NL80211_ATTR_STA_WME', 'hex'), ('NL80211_ATTR_SUPPORT_AP_UAPSD', 'hex'), ('NL80211_ATTR_ROAM_SUPPORT', 'hex'), ('NL80211_ATTR_SCHED_SCAN_MATCH', 'hex'), ('NL80211_ATTR_MAX_MATCH_SETS', 'uint8'), ('NL80211_ATTR_PMKSA_CANDIDATE', 'hex'), ('NL80211_ATTR_TX_NO_CCK_RATE', 'hex'), ('NL80211_ATTR_TDLS_ACTION', 'hex'), ('NL80211_ATTR_TDLS_DIALOG_TOKEN', 'hex'), ('NL80211_ATTR_TDLS_OPERATION', 'hex'), ('NL80211_ATTR_TDLS_SUPPORT', 'hex'), ('NL80211_ATTR_TDLS_EXTERNAL_SETUP', 'hex'), ('NL80211_ATTR_DEVICE_AP_SME', 'hex'), ('NL80211_ATTR_DONT_WAIT_FOR_ACK', 'hex'), ('NL80211_ATTR_FEATURE_FLAGS', 'hex'), ('NL80211_ATTR_PROBE_RESP_OFFLOAD', 'hex'), ('NL80211_ATTR_PROBE_RESP', 'hex'), ('NL80211_ATTR_DFS_REGION', 'hex'), ('NL80211_ATTR_DISABLE_HT', 'hex'), ('NL80211_ATTR_HT_CAPABILITY_MASK', 'hex'), ('NL80211_ATTR_NOACK_MAP', 'hex'), ('NL80211_ATTR_INACTIVITY_TIMEOUT', 'hex'), ('NL80211_ATTR_RX_SIGNAL_DBM', 'hex'), ('NL80211_ATTR_BG_SCAN_PERIOD', 'hex'), ('NL80211_ATTR_WDEV', 'uint64'), ('NL80211_ATTR_USER_REG_HINT_TYPE', 'hex'), ('NL80211_ATTR_CONN_FAILED_REASON', 'hex'), ('NL80211_ATTR_SAE_DATA', 'hex'), ('NL80211_ATTR_VHT_CAPABILITY', 'hex'), ('NL80211_ATTR_SCAN_FLAGS', 'uint32'), ('NL80211_ATTR_CHANNEL_WIDTH', 'uint32'), ('NL80211_ATTR_CENTER_FREQ1', 'uint32'), ('NL80211_ATTR_CENTER_FREQ2', 'uint32'), ('NL80211_ATTR_P2P_CTWINDOW', 'hex'), ('NL80211_ATTR_P2P_OPPPS', 'hex'), ('NL80211_ATTR_LOCAL_MESH_POWER_MODE', 'hex'), ('NL80211_ATTR_ACL_POLICY', 'hex'), ('NL80211_ATTR_MAC_ADDRS', 'hex'), ('NL80211_ATTR_MAC_ACL_MAX', 'hex'), ('NL80211_ATTR_RADAR_EVENT', 'hex'), ('NL80211_ATTR_EXT_CAPA', 'array(uint8)'), ('NL80211_ATTR_EXT_CAPA_MASK', 'array(uint8)'), ('NL80211_ATTR_STA_CAPABILITY', 'hex'), ('NL80211_ATTR_STA_EXT_CAPABILITY', 'hex'), ('NL80211_ATTR_PROTOCOL_FEATURES', 'hex'), ('NL80211_ATTR_SPLIT_WIPHY_DUMP', 'hex'), ('NL80211_ATTR_DISABLE_VHT', 'hex'), ('NL80211_ATTR_VHT_CAPABILITY_MASK', 'array(uint8)'), ('NL80211_ATTR_MDID', 'hex'), ('NL80211_ATTR_IE_RIC', 'hex'), ('NL80211_ATTR_CRIT_PROT_ID', 'hex'), ('NL80211_ATTR_MAX_CRIT_PROT_DURATION', 'hex'), ('NL80211_ATTR_PEER_AID', 'hex'), ('NL80211_ATTR_COALESCE_RULE', 'hex'), 
('NL80211_ATTR_CH_SWITCH_COUNT', 'hex'), ('NL80211_ATTR_CH_SWITCH_BLOCK_TX', 'hex'), ('NL80211_ATTR_CSA_IES', 'hex'), ('NL80211_ATTR_CSA_C_OFF_BEACON', 'hex'), ('NL80211_ATTR_CSA_C_OFF_PRESP', 'hex'), ('NL80211_ATTR_RXMGMT_FLAGS', 'hex'), ('NL80211_ATTR_STA_SUPPORTED_CHANNELS', 'hex'), ('NL80211_ATTR_STA_SUPPORTED_OPER_CLASSES', 'hex'), ('NL80211_ATTR_HANDLE_DFS', 'hex'), ('NL80211_ATTR_SUPPORT_5_MHZ', 'hex'), ('NL80211_ATTR_SUPPORT_10_MHZ', 'hex'), ('NL80211_ATTR_OPMODE_NOTIF', 'hex'), ('NL80211_ATTR_VENDOR_ID', 'hex'), ('NL80211_ATTR_VENDOR_SUBCMD', 'hex'), ('NL80211_ATTR_VENDOR_DATA', 'hex'), ('NL80211_ATTR_VENDOR_EVENTS', 'hex'), ('NL80211_ATTR_QOS_MAP', 'hex'), ('NL80211_ATTR_MAC_HINT', 'hex'), ('NL80211_ATTR_WIPHY_FREQ_HINT', 'hex'), ('NL80211_ATTR_MAX_AP_ASSOC_STA', 'hex'), ('NL80211_ATTR_TDLS_PEER_CAPABILITY', 'hex'), ('NL80211_ATTR_SOCKET_OWNER', 'hex'), ('NL80211_ATTR_CSA_C_OFFSETS_TX', 'hex'), ('NL80211_ATTR_MAX_CSA_COUNTERS', 'hex'), ('NL80211_ATTR_TDLS_INITIATOR', 'hex'), ('NL80211_ATTR_USE_RRM', 'hex'), ('NL80211_ATTR_WIPHY_DYN_ACK', 'hex'), ('NL80211_ATTR_TSID', 'hex'), ('NL80211_ATTR_USER_PRIO', 'hex'), ('NL80211_ATTR_ADMITTED_TIME', 'hex'), ('NL80211_ATTR_SMPS_MODE', 'hex'), ('NL80211_ATTR_OPER_CLASS', 'hex'), ('NL80211_ATTR_MAC_MASK', 'hex'), ('NL80211_ATTR_WIPHY_SELF_MANAGED_REG', 'hex'), ('NUM_NL80211_ATTR', 'hex')) class bss(nla): class elementsBinary(nla_base): def binary_rates(self, offset, length): init = offset string = "" while (offset - init) < length: byte, = struct.unpack_from('B', self.data, offset) r = byte & 0x7f if r == BSS_MEMBERSHIP_SELECTOR_VHT_PHY and byte & 0x80: string += "VHT" elif r == BSS_MEMBERSHIP_SELECTOR_HT_PHY and byte & 0x80: string += "HT" else: string += "%d.%d" % (r / 2, 5 * (r & 1)) offset += 1 string += "%s " % ("*" if byte & 0x80 else "") return string def binary_tim(self, offset): (count, period, bitmapc, bitmap0) = struct.unpack_from('BBBB', self.data, offset) return ("DTIM Count {0} DTIM Period {1} Bitmap Control 0x{2} " "Bitmap[0] 0x{3}".format(count, period, bitmapc, bitmap0)) def decode_nlas(self): return def decode(self): nla_base.decode(self) self.value = {} init = offset = self.offset + 4 while (offset - init) < (self.length - 4): (msg_type, length) = struct.unpack_from('BB', self.data, offset) if msg_type == NL80211_BSS_ELEMENTS_SSID: self.value["SSID"], = (struct .unpack_from('%is' % length, self.data, offset + 2)) if msg_type == NL80211_BSS_ELEMENTS_SUPPORTED_RATES: supported_rates = self.binary_rates(offset + 2, length) self.value["SUPPORTED_RATES"] = supported_rates if msg_type == NL80211_BSS_ELEMENTS_CHANNEL: channel, = struct.unpack_from('B', self.data, offset + 2) self.value["CHANNEL"] = channel if msg_type == NL80211_BSS_ELEMENTS_TIM: self.value["TRAFFIC INDICATION MAP"] = \ self.binary_tim(offset + 2) if msg_type == NL80211_BSS_ELEMENTS_RSN: self.value["RSN"], = (struct .unpack_from('%is' % length, self.data, offset + 2)) if msg_type == NL80211_BSS_ELEMENTS_EXTENDED_RATE: extended_rates = self.binary_rates(offset + 2, length) self.value["EXTENDED_RATES"] = extended_rates if msg_type == NL80211_BSS_ELEMENTS_VENDOR: # There may be multiple vendor IEs, create a list if "VENDOR" not in self.value.keys(): self.value["VENDOR"] = [] vendor_ie, = (struct.unpack_from('%is' % length, self.data, offset + 2)) self.value["VENDOR"].append(vendor_ie) offset += length + 2 class TSF(nla_base): """Timing Synchronization Function""" def decode(self): nla_base.decode(self) offset = self.offset + 4 self.value = {} tsf, = 
struct.unpack_from('Q', self.data, offset) self.value["VALUE"] = tsf # TSF is in microseconds self.value["TIME"] = datetime.timedelta(microseconds=tsf) class SignalMBM(nla_base): def decode(self): nla_base.decode(self) offset = self.offset + 4 self.value = {} ss, = struct.unpack_from('i', self.data, offset) self.value["VALUE"] = ss self.value["SIGNAL_STRENGTH"] = {"VALUE": ss / 100.0, "UNITS": "dBm"} class capability(nla_base): # iw scan.c WLAN_CAPABILITY_ESS = (1 << 0) WLAN_CAPABILITY_IBSS = (1 << 1) WLAN_CAPABILITY_CF_POLLABLE = (1 << 2) WLAN_CAPABILITY_CF_POLL_REQUEST = (1 << 3) WLAN_CAPABILITY_PRIVACY = (1 << 4) WLAN_CAPABILITY_SHORT_PREAMBLE = (1 << 5) WLAN_CAPABILITY_PBCC = (1 << 6) WLAN_CAPABILITY_CHANNEL_AGILITY = (1 << 7) WLAN_CAPABILITY_SPECTRUM_MGMT = (1 << 8) WLAN_CAPABILITY_QOS = (1 << 9) WLAN_CAPABILITY_SHORT_SLOT_TIME = (1 << 10) WLAN_CAPABILITY_APSD = (1 << 11) WLAN_CAPABILITY_RADIO_MEASURE = (1 << 12) WLAN_CAPABILITY_DSSS_OFDM = (1 << 13) WLAN_CAPABILITY_DEL_BACK = (1 << 14) WLAN_CAPABILITY_IMM_BACK = (1 << 15) # def decode_nlas(self): # return def decode(self): nla_base.decode(self) offset = self.offset + 4 self.value = {} capa, = struct.unpack_from('H', self.data, offset) self.value["VALUE"] = capa s = [] if capa & self.WLAN_CAPABILITY_ESS: s.append("ESS") if capa & self.WLAN_CAPABILITY_IBSS: s.append("IBSS") if capa & self.WLAN_CAPABILITY_CF_POLLABLE: s.append("CfPollable") if capa & self.WLAN_CAPABILITY_CF_POLL_REQUEST: s.append("CfPollReq") if capa & self.WLAN_CAPABILITY_PRIVACY: s.append("Privacy") if capa & self.WLAN_CAPABILITY_SHORT_PREAMBLE: s.append("ShortPreamble") if capa & self.WLAN_CAPABILITY_PBCC: s.append("PBCC") if capa & self.WLAN_CAPABILITY_CHANNEL_AGILITY: s.append("ChannelAgility") if capa & self.WLAN_CAPABILITY_SPECTRUM_MGMT: s.append("SpectrumMgmt") if capa & self.WLAN_CAPABILITY_QOS: s.append("QoS") if capa & self.WLAN_CAPABILITY_SHORT_SLOT_TIME: s.append("ShortSlotTime") if capa & self.WLAN_CAPABILITY_APSD: s.append("APSD") if capa & self.WLAN_CAPABILITY_RADIO_MEASURE: s.append("RadioMeasure") if capa & self.WLAN_CAPABILITY_DSSS_OFDM: s.append("DSSS-OFDM") if capa & self.WLAN_CAPABILITY_DEL_BACK: s.append("DelayedBACK") if capa & self.WLAN_CAPABILITY_IMM_BACK: s.append("ImmediateBACK") self.value['CAPABILITIES'] = " ".join(s) prefix = 'NL80211_BSS_' nla_map = (('__NL80211_BSS_INVALID', 'hex'), ('NL80211_BSS_BSSID', 'hex'), ('NL80211_BSS_FREQUENCY', 'uint32'), ('NL80211_BSS_TSF', 'TSF'), ('NL80211_BSS_BEACON_INTERVAL', 'uint16'), ('NL80211_BSS_CAPABILITY', 'capability'), ('NL80211_BSS_INFORMATION_ELEMENTS', 'elementsBinary'), ('NL80211_BSS_SIGNAL_MBM', 'SignalMBM'), ('NL80211_BSS_SIGNAL_UNSPEC', 'uint8'), ('NL80211_BSS_STATUS', 'uint32'), ('NL80211_BSS_SEEN_MS_AGO', 'uint32'), ('NL80211_BSS_BEACON_IES', 'elementsBinary'), ('NL80211_BSS_CHAN_WIDTH', 'uint32'), ('NL80211_BSS_BEACON_TSF', 'uint64'), ('NL80211_BSS_PRESP_DATA', 'hex'), ('NL80211_BSS_MAX', 'hex') ) class STAInfo(nla): class STAFlags(nla_base): ''' Decode the flags that may be set. 
See nl80211.h: struct nl80211_sta_flag_update, NL80211_STA_INFO_STA_FLAGS ''' def decode_nlas(self): return def decode(self): nla_base.decode(self) self.value = {} self.value["AUTHORIZED"] = False self.value["SHORT_PREAMBLE"] = False self.value["WME"] = False self.value["MFP"] = False self.value["AUTHENTICATED"] = False self.value["TDLS_PEER"] = False self.value["ASSOCIATED"] = False init = offset = self.offset + 4 while (offset - init) < (self.length - 4): (msg_type, length) = struct.unpack_from('BB', self.data, offset) mask, set_ = struct.unpack_from('II', self.data, offset + 2) if mask & NL80211_STA_FLAG_AUTHORIZED: if set_ & NL80211_STA_FLAG_AUTHORIZED: self.value["AUTHORIZED"] = True if mask & NL80211_STA_FLAG_SHORT_PREAMBLE: if set_ & NL80211_STA_FLAG_SHORT_PREAMBLE: self.value["SHORT_PREAMBLE"] = True if mask & NL80211_STA_FLAG_WME: if set_ & NL80211_STA_FLAG_WME: self.value["WME"] = True if mask & NL80211_STA_FLAG_MFP: if set_ & NL80211_STA_FLAG_MFP: self.value["MFP"] = True if mask & NL80211_STA_FLAG_AUTHENTICATED: if set_ & NL80211_STA_FLAG_AUTHENTICATED: self.value["AUTHENTICATED"] = True if mask & NL80211_STA_FLAG_TDLS_PEER: if set_ & NL80211_STA_FLAG_TDLS_PEER: self.value["TDLS_PEER"] = True if mask & NL80211_STA_FLAG_ASSOCIATED: if set_ & NL80211_STA_FLAG_ASSOCIATED: self.value["ASSOCIATED"] = True offset += length + 2 prefix = 'NL80211_STA_INFO_' nla_map = (('__NL80211_STA_INFO_INVALID', 'hex'), ('NL80211_STA_INFO_INACTIVE_TIME', 'uint32'), ('NL80211_STA_INFO_RX_BYTES', 'uint32'), ('NL80211_STA_INFO_TX_BYTES', 'uint32'), ('NL80211_STA_INFO_LLID', 'uint16'), ('NL80211_STA_INFO_PLID', 'uint16'), ('NL80211_STA_INFO_PLINK_STATE', 'uint8'), ('NL80211_STA_INFO_SIGNAL', 'int8'), ('NL80211_STA_INFO_TX_BITRATE', 'hex'), ('NL80211_STA_INFO_RX_PACKETS', 'uint32'), ('NL80211_STA_INFO_TX_PACKETS', 'uint32'), ('NL80211_STA_INFO_TX_RETRIES', 'uint32'), ('NL80211_STA_INFO_TX_FAILED', 'uint32'), ('NL80211_STA_INFO_SIGNAL_AVG', 'int8'), ('NL80211_STA_INFO_RX_BITRATE', 'hex'), ('NL80211_STA_INFO_BSS_PARAM', 'hex'), ('NL80211_STA_INFO_CONNECTED_TIME', 'uint32'), ('NL80211_STA_INFO_STA_FLAGS', 'STAFlags'), ('NL80211_STA_INFO_BEACON_LOSS', 'uint32'), ('NL80211_STA_INFO_T_OFFSET', 'int64'), ('NL80211_STA_INFO_LOCAL_PM', 'hex'), ('NL80211_STA_INFO_PEER_PM', 'hex'), ('NL80211_STA_INFO_NONPEER_PM', 'hex'), ('NL80211_STA_INFO_RX_BYTES64', 'uint64'), ('NL80211_STA_INFO_TX_BYTES64', 'uint64'), ('NL80211_STA_INFO_CHAIN_SIGNAL', 'string'), ('NL80211_STA_INFO_CHAIN_SIGNAL_AVG', 'string'), ('NL80211_STA_INFO_EXPECTED_THROUGHPUT', 'uint32'), ('NL80211_STA_INFO_RX_DROP_MISC', 'uint32'), ('NL80211_STA_INFO_BEACON_RX', 'uint64'), ('NL80211_STA_INFO_BEACON_SIGNAL_AVG', 'uint8'), ('NL80211_STA_INFO_TID_STATS', 'hex'), ('NL80211_STA_INFO_RX_DURATION', 'uint64'), ('NL80211_STA_INFO_PAD', 'hex'), ('NL80211_STA_INFO_MAX', 'hex') ) class MarshalNl80211(Marshal): msg_map = {NL80211_CMD_UNSPEC: nl80211cmd, NL80211_CMD_GET_WIPHY: nl80211cmd, NL80211_CMD_SET_WIPHY: nl80211cmd, NL80211_CMD_NEW_WIPHY: nl80211cmd, NL80211_CMD_DEL_WIPHY: nl80211cmd, NL80211_CMD_GET_INTERFACE: nl80211cmd, NL80211_CMD_SET_INTERFACE: nl80211cmd, NL80211_CMD_NEW_INTERFACE: nl80211cmd, NL80211_CMD_DEL_INTERFACE: nl80211cmd, NL80211_CMD_GET_KEY: nl80211cmd, NL80211_CMD_SET_KEY: nl80211cmd, NL80211_CMD_NEW_KEY: nl80211cmd, NL80211_CMD_DEL_KEY: nl80211cmd, NL80211_CMD_GET_BEACON: nl80211cmd, NL80211_CMD_SET_BEACON: nl80211cmd, NL80211_CMD_START_AP: nl80211cmd, NL80211_CMD_NEW_BEACON: nl80211cmd, NL80211_CMD_STOP_AP: nl80211cmd, 
NL80211_CMD_DEL_BEACON: nl80211cmd, NL80211_CMD_GET_STATION: nl80211cmd, NL80211_CMD_SET_STATION: nl80211cmd, NL80211_CMD_NEW_STATION: nl80211cmd, NL80211_CMD_DEL_STATION: nl80211cmd, NL80211_CMD_GET_MPATH: nl80211cmd, NL80211_CMD_SET_MPATH: nl80211cmd, NL80211_CMD_NEW_MPATH: nl80211cmd, NL80211_CMD_DEL_MPATH: nl80211cmd, NL80211_CMD_SET_BSS: nl80211cmd, NL80211_CMD_SET_REG: nl80211cmd, NL80211_CMD_REQ_SET_REG: nl80211cmd, NL80211_CMD_GET_MESH_CONFIG: nl80211cmd, NL80211_CMD_SET_MESH_CONFIG: nl80211cmd, NL80211_CMD_SET_MGMT_EXTRA_IE: nl80211cmd, NL80211_CMD_GET_REG: nl80211cmd, NL80211_CMD_GET_SCAN: nl80211cmd, NL80211_CMD_TRIGGER_SCAN: nl80211cmd, NL80211_CMD_NEW_SCAN_RESULTS: nl80211cmd, NL80211_CMD_SCAN_ABORTED: nl80211cmd, NL80211_CMD_REG_CHANGE: nl80211cmd, NL80211_CMD_AUTHENTICATE: nl80211cmd, NL80211_CMD_ASSOCIATE: nl80211cmd, NL80211_CMD_DEAUTHENTICATE: nl80211cmd, NL80211_CMD_DISASSOCIATE: nl80211cmd, NL80211_CMD_MICHAEL_MIC_FAILURE: nl80211cmd, NL80211_CMD_REG_BEACON_HINT: nl80211cmd, NL80211_CMD_JOIN_IBSS: nl80211cmd, NL80211_CMD_LEAVE_IBSS: nl80211cmd, NL80211_CMD_TESTMODE: nl80211cmd, NL80211_CMD_CONNECT: nl80211cmd, NL80211_CMD_ROAM: nl80211cmd, NL80211_CMD_DISCONNECT: nl80211cmd, NL80211_CMD_SET_WIPHY_NETNS: nl80211cmd, NL80211_CMD_GET_SURVEY: nl80211cmd, NL80211_CMD_NEW_SURVEY_RESULTS: nl80211cmd, NL80211_CMD_SET_PMKSA: nl80211cmd, NL80211_CMD_DEL_PMKSA: nl80211cmd, NL80211_CMD_FLUSH_PMKSA: nl80211cmd, NL80211_CMD_REMAIN_ON_CHANNEL: nl80211cmd, NL80211_CMD_CANCEL_REMAIN_ON_CHANNEL: nl80211cmd, NL80211_CMD_SET_TX_BITRATE_MASK: nl80211cmd, NL80211_CMD_REGISTER_FRAME: nl80211cmd, NL80211_CMD_REGISTER_ACTION: nl80211cmd, NL80211_CMD_FRAME: nl80211cmd, NL80211_CMD_ACTION: nl80211cmd, NL80211_CMD_FRAME_TX_STATUS: nl80211cmd, NL80211_CMD_ACTION_TX_STATUS: nl80211cmd, NL80211_CMD_SET_POWER_SAVE: nl80211cmd, NL80211_CMD_GET_POWER_SAVE: nl80211cmd, NL80211_CMD_SET_CQM: nl80211cmd, NL80211_CMD_NOTIFY_CQM: nl80211cmd, NL80211_CMD_SET_CHANNEL: nl80211cmd, NL80211_CMD_SET_WDS_PEER: nl80211cmd, NL80211_CMD_FRAME_WAIT_CANCEL: nl80211cmd, NL80211_CMD_JOIN_MESH: nl80211cmd, NL80211_CMD_LEAVE_MESH: nl80211cmd, NL80211_CMD_UNPROT_DEAUTHENTICATE: nl80211cmd, NL80211_CMD_UNPROT_DISASSOCIATE: nl80211cmd, NL80211_CMD_NEW_PEER_CANDIDATE: nl80211cmd, NL80211_CMD_GET_WOWLAN: nl80211cmd, NL80211_CMD_SET_WOWLAN: nl80211cmd, NL80211_CMD_START_SCHED_SCAN: nl80211cmd, NL80211_CMD_STOP_SCHED_SCAN: nl80211cmd, NL80211_CMD_SCHED_SCAN_RESULTS: nl80211cmd, NL80211_CMD_SCHED_SCAN_STOPPED: nl80211cmd, NL80211_CMD_SET_REKEY_OFFLOAD: nl80211cmd, NL80211_CMD_PMKSA_CANDIDATE: nl80211cmd, NL80211_CMD_TDLS_OPER: nl80211cmd, NL80211_CMD_TDLS_MGMT: nl80211cmd, NL80211_CMD_UNEXPECTED_FRAME: nl80211cmd, NL80211_CMD_PROBE_CLIENT: nl80211cmd, NL80211_CMD_REGISTER_BEACONS: nl80211cmd, NL80211_CMD_UNEXPECTED_4ADDR_FRAME: nl80211cmd, NL80211_CMD_SET_NOACK_MAP: nl80211cmd, NL80211_CMD_CH_SWITCH_NOTIFY: nl80211cmd, NL80211_CMD_START_P2P_DEVICE: nl80211cmd, NL80211_CMD_STOP_P2P_DEVICE: nl80211cmd, NL80211_CMD_CONN_FAILED: nl80211cmd, NL80211_CMD_SET_MCAST_RATE: nl80211cmd, NL80211_CMD_SET_MAC_ACL: nl80211cmd, NL80211_CMD_RADAR_DETECT: nl80211cmd, NL80211_CMD_GET_PROTOCOL_FEATURES: nl80211cmd, NL80211_CMD_UPDATE_FT_IES: nl80211cmd, NL80211_CMD_FT_EVENT: nl80211cmd, NL80211_CMD_CRIT_PROTOCOL_START: nl80211cmd, NL80211_CMD_CRIT_PROTOCOL_STOP: nl80211cmd, NL80211_CMD_GET_COALESCE: nl80211cmd, NL80211_CMD_SET_COALESCE: nl80211cmd, NL80211_CMD_CHANNEL_SWITCH: nl80211cmd, NL80211_CMD_VENDOR: nl80211cmd, NL80211_CMD_SET_QOS_MAP: 
nl80211cmd, NL80211_CMD_ADD_TX_TS: nl80211cmd, NL80211_CMD_DEL_TX_TS: nl80211cmd, NL80211_CMD_GET_MPP: nl80211cmd, NL80211_CMD_JOIN_OCB: nl80211cmd, NL80211_CMD_LEAVE_OCB: nl80211cmd, NL80211_CMD_CH_SWITCH_STARTED_NOTIFY: nl80211cmd, NL80211_CMD_TDLS_CHANNEL_SWITCH: nl80211cmd, NL80211_CMD_TDLS_CANCEL_CHANNEL_SWITCH: nl80211cmd, NL80211_CMD_WIPHY_REG_CHANGE: nl80211cmd} def fix_message(self, msg): try: msg['event'] = NL80211_VALUES[msg['cmd']] except Exception: pass class NL80211(GenericNetlinkSocket): def __init__(self): GenericNetlinkSocket.__init__(self) self.marshal = MarshalNl80211() def bind(self, groups=0, **kwarg): GenericNetlinkSocket.bind(self, 'nl80211', nl80211cmd, groups, None, **kwarg) pyroute2-0.5.9/pyroute2/netlink/nlsocket.py0000644000175000017500000011574313614525012020675 0ustar peetpeet00000000000000''' Base netlink socket and marshal =============================== All the netlink providers are derived from the socket class, so they provide normal socket API, including `getsockopt()`, `setsockopt()`, they can be used in poll/select I/O loops etc. asynchronous I/O ---------------- To run async reader thread, one should call `NetlinkSocket.bind(async_cache=True)`. In that case a background thread will be launched. The thread will automatically collect all the messages and store into a userspace buffer. .. note:: There is no need to turn on async I/O, if you don't plan to receive broadcast messages. ENOBUF and async I/O -------------------- When Netlink messages arrive faster than a program reads then from the socket, the messages overflow the socket buffer and one gets ENOBUF on `recv()`:: ... self.recv(bufsize) error: [Errno 105] No buffer space available One way to avoid ENOBUF, is to use async I/O. Then the library not only reads and buffers all the messages, but also re-prioritizes threads. Suppressing the parser activity, the library increases the response delay, but spares CPU to read and enqueue arriving messages as fast, as it is possible. With logging level DEBUG you can notice messages, that the library started to calm down the parser thread:: DEBUG:root:Packet burst: the reader thread priority is increased, beware of delays on netlink calls Counters: delta=25 qsize=25 delay=0.1 This state requires no immediate action, but just some more attention. When the delay between messages on the parser thread exceeds 1 second, DEBUG messages become WARNING ones:: WARNING:root:Packet burst: the reader thread priority is increased, beware of delays on netlink calls Counters: delta=2525 qsize=213536 delay=3 This state means, that almost all the CPU resources are dedicated to the reader thread. It doesn't mean, that the reader thread consumes 100% CPU -- it means, that the CPU is reserved for the case of more intensive bursts. The library will return to the normal state only when the broadcast storm will be over, and then the CPU will be 100% loaded with the parser for some time, when it will process all the messages queued so far. when async I/O doesn't help --------------------------- Sometimes, even turning async I/O doesn't fix ENOBUF. Mostly it means, that in this particular case the Python performance is not enough even to read and store the raw data from the socket. There is no workaround for such cases, except of using something *not* Python-based. One can still play around with SO_RCVBUF socket option, but it doesn't help much. 
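For reference, a minimal sketch that applies both mitigations discussed
above; the buffer size and the multicast group are illustrative only::

    from socket import SOL_SOCKET, SO_RCVBUF

    from pyroute2 import IPRSocket
    from pyroute2.netlink.rtnl import RTMGRP_LINK

    s = IPRSocket()
    # ask for a larger kernel-side receive buffer (the kernel may clamp it)
    s.setsockopt(SOL_SOCKET, SO_RCVBUF, 8 * 1024 * 1024)
    # start the userspace reader thread that drains the socket into a queue
    s.bind(groups=RTMGRP_LINK, async_cache=True)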
So keep it in mind, and if you expect massive broadcast Netlink storms, perform stress testing prior to deploy a solution in the production. classes ------- ''' import os import sys import time import errno import select import struct import logging import traceback import threading import collections from socket import SOCK_DGRAM from socket import MSG_PEEK from socket import SOL_SOCKET from socket import SO_RCVBUF from socket import SO_SNDBUF from pyroute2 import config from pyroute2.config import AF_NETLINK from pyroute2.common import AddrPool from pyroute2.common import DEFAULT_RCVBUF from pyroute2.netlink import nlmsg from pyroute2.netlink import mtypes from pyroute2.netlink import NLMSG_ERROR from pyroute2.netlink import NLMSG_DONE from pyroute2.netlink import NETLINK_ADD_MEMBERSHIP from pyroute2.netlink import NETLINK_DROP_MEMBERSHIP from pyroute2.netlink import NETLINK_GENERIC from pyroute2.netlink import NETLINK_LISTEN_ALL_NSID from pyroute2.netlink import NLM_F_DUMP from pyroute2.netlink import NLM_F_MULTI from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink import SOL_NETLINK from pyroute2.netlink.exceptions import NetlinkError from pyroute2.netlink.exceptions import NetlinkDecodeError from pyroute2.netlink.exceptions import NetlinkHeaderDecodeError try: from Queue import Queue except ImportError: from queue import Queue log = logging.getLogger(__name__) Stats = collections.namedtuple('Stats', ('qsize', 'delta', 'delay')) class Marshal(object): ''' Generic marshalling class ''' msg_map = {} type_offset = 4 type_format = 'H' error_type = NLMSG_ERROR debug = False def __init__(self): self.lock = threading.Lock() # one marshal instance can be used to parse one # message at once self.msg_map = self.msg_map or {} self.defragmentation = {} def parse(self, data, seq=None, callback=None): ''' Parse string data. At this moment all transport, except of the native Netlink is deprecated in this library, so we should not support any defragmentation on that level ''' offset = 0 result = [] # there must be at least one header in the buffer, # 'IHHII' == 16 bytes while offset <= len(data) - 16: # pick type and length length, = struct.unpack_from('I', data, offset) if length == 0: break error = None msg_type, = struct.unpack_from(self.type_format, data, offset + self.type_offset) if msg_type == self.error_type: code = abs(struct.unpack_from('i', data, offset + 16)[0]) if code > 0: error = NetlinkError(code) msg_class = self.msg_map.get(msg_type, nlmsg) msg = msg_class(data, offset=offset) try: msg.decode() msg['header']['error'] = error # try to decode encapsulated error message if error is not None: enc_type = struct.unpack_from('H', data, offset + 24)[0] enc_class = self.msg_map.get(enc_type, nlmsg) enc = enc_class(data, offset=offset + 20) enc.decode() msg['header']['errmsg'] = enc if callback and seq == msg['header']['sequence_number']: if callback(msg): offset += msg.length continue except NetlinkHeaderDecodeError as e: # in the case of header decoding error, # create an empty message msg = nlmsg() msg['header']['error'] = e except NetlinkDecodeError as e: msg['header']['error'] = e mtype = msg['header'].get('type', None) if mtype in (1, 2, 3, 4): msg['event'] = mtypes.get(mtype, 'none') self.fix_message(msg) offset += msg.length result.append(msg) return result def fix_message(self, msg): pass # 8<----------------------------------------------------------- # Singleton, containing possible modifiers to the NetlinkSocket # bind() call. 
# # Normally, you can open only one netlink connection for one # process, but there is a hack. Current PID_MAX_LIMIT is 2^22, # so we can use the rest to modify the pid field. # # See also libnl library, lib/socket.c:generate_local_port() sockets = AddrPool(minaddr=0x0, maxaddr=0x3ff, reverse=True) # 8<----------------------------------------------------------- class LockProxy(object): def __init__(self, factory, key): self.factory = factory self.refcount = 0 self.key = key self.internal = threading.Lock() self.lock = factory.klass() def acquire(self, *argv, **kwarg): with self.internal: self.refcount += 1 return self.lock.acquire() def release(self): with self.internal: self.refcount -= 1 if (self.refcount == 0) and (self.key != 0): try: del self.factory.locks[self.key] except KeyError: pass return self.lock.release() def __enter__(self): self.acquire() def __exit__(self, exc_type, exc_value, traceback): self.release() class LockFactory(object): def __init__(self, klass=threading.RLock): self.klass = klass self.locks = {0: LockProxy(self, 0)} def __enter__(self): self.locks[0].acquire() def __exit__(self, exc_type, exc_value, traceback): self.locks[0].release() def __getitem__(self, key): if key is None: key = 0 if key not in self.locks: self.locks[key] = LockProxy(self, key) return self.locks[key] def __delitem__(self, key): del self.locks[key] class NetlinkMixin(object): ''' Generic netlink socket ''' def __init__(self, family=NETLINK_GENERIC, port=None, pid=None, fileno=None, sndbuf=1048576, rcvbuf=1048576, all_ns=False, async_qsize=None, nlm_generator=None): # # That's a trick. Python 2 is not able to construct # sockets from an open FD. # # So raise an exception, if the major version is < 3 # and fileno is not None. # # Do NOT use fileno in a core pyroute2 functionality, # since the core should be both Python 2 and 3 # compatible. # super(NetlinkMixin, self).__init__() if fileno is not None and sys.version_info[0] < 3: raise NotImplementedError('fileno parameter is not supported ' 'on Python < 3.2') # 8<----------------------------------------- self.config = {'family': family, 'port': port, 'pid': pid, 'fileno': fileno, 'sndbuf': sndbuf, 'rcvbuf': rcvbuf, 'all_ns': all_ns, 'async_qsize': async_qsize, 'nlm_generator': nlm_generator} # 8<----------------------------------------- self.addr_pool = AddrPool(minaddr=0x000000ff, maxaddr=0x0000ffff) self.epid = None self.port = 0 self.fixed = True self.family = family self._fileno = fileno self._sndbuf = sndbuf self._rcvbuf = rcvbuf self.backlog = {0: []} self.callbacks = [] # [(predicate, callback, args), ...] 
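# async cache plumbing: `pthread` is the background reader thread started by
# `bind(async_cache=True)`, `buffer_queue` (created below) is where that
# thread puts the raw packets, and `qsize` keeps the last observed queue
# size for the "Packet burst" accounting done in `get()`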
self.pthread = None self.closed = False self.uname = config.uname self.capabilities = {'create_bridge': config.kernel > [3, 2, 0], 'create_bond': config.kernel > [3, 2, 0], 'create_dummy': True, 'provide_master': config.kernel[0] > 2} self.backlog_lock = threading.Lock() self.read_lock = threading.Lock() self.sys_lock = threading.RLock() self.change_master = threading.Event() self.lock = LockFactory() self._sock = None self._ctrl_read, self._ctrl_write = os.pipe() if async_qsize is None: async_qsize = config.async_qsize self.async_qsize = async_qsize if nlm_generator is None: nlm_generator = config.nlm_generator self.nlm_generator = nlm_generator self.buffer_queue = Queue(maxsize=async_qsize) self.qsize = 0 self.log = [] self.get_timeout = 30 self.get_timeout_exception = None self.all_ns = all_ns if pid is None: self.pid = os.getpid() & 0x3fffff self.port = port self.fixed = self.port is not None elif pid == 0: self.pid = os.getpid() else: self.pid = pid # 8<----------------------------------------- self.groups = 0 self.marshal = Marshal() # 8<----------------------------------------- if not nlm_generator: def nlm_request(*argv, **kwarg): return tuple(self._genlm_request(*argv, **kwarg)) def get(*argv, **kwarg): return tuple(self._genlm_get(*argv, **kwarg)) self._genlm_request = self.nlm_request self._genlm_get = self.get self.nlm_request = nlm_request self.get = get # Set defaults self.post_init() def post_init(self): pass def clone(self): return type(self)(**self.config) def close(self, code=errno.ECONNRESET): if code > 0 and self.pthread: self.buffer_queue.put(struct.pack('IHHQIQQ', 28, 2, 0, 0, code, 0, 0)) try: os.close(self._ctrl_write) os.close(self._ctrl_read) except OSError: # ignore the case when it is closed already pass def __enter__(self): return self def __exit__(self, exc_type, exc_value, traceback): self.close() def release(self): log.warning("The `release()` call is deprecated") log.warning("Use `close()` instead") self.close() def register_callback(self, callback, predicate=lambda x: True, args=None): ''' Register a callback to run on a message arrival. Callback is the function that will be called with the message as the first argument. Predicate is the optional callable object, that returns True or False. Upon True, the callback will be called. Upon False it will not. Args is a list or tuple of arguments. Simplest example, assume ipr is the IPRoute() instance:: # create a simplest callback that will print messages def cb(msg): print(msg) # register callback for any message: ipr.register_callback(cb) More complex example, with filtering:: # Set object's attribute after the message key def cb(msg, obj): obj.some_attr = msg["some key"] # Register the callback only for the loopback device, index 1: ipr.register_callback(cb, lambda x: x.get('index', None) == 1, (self, )) Please note: you do **not** need to register the default 0 queue to invoke callbacks on broadcast messages. Callbacks are iterated **before** messages get enqueued. ''' if args is None: args = [] self.callbacks.append((predicate, callback, args)) def unregister_callback(self, callback): ''' Remove the first reference to the function from the callback register ''' cb = tuple(self.callbacks) for cr in cb: if cr[1] == callback: self.callbacks.pop(cb.index(cr)) return def register_policy(self, policy, msg_class=None): ''' Register netlink encoding/decoding policy. 
Can be specified in two ways: `nlsocket.register_policy(MSG_ID, msg_class)` to register one particular rule, or `nlsocket.register_policy({MSG_ID1: msg_class})` to register several rules at once. E.g.:: policy = {RTM_NEWLINK: ifinfmsg, RTM_DELLINK: ifinfmsg, RTM_NEWADDR: ifaddrmsg, RTM_DELADDR: ifaddrmsg} nlsocket.register_policy(policy) One can call `register_policy()` as many times, as one want to -- it will just extend the current policy scheme, not replace it. ''' if isinstance(policy, int) and msg_class is not None: policy = {policy: msg_class} assert isinstance(policy, dict) for key in policy: self.marshal.msg_map[key] = policy[key] return self.marshal.msg_map def unregister_policy(self, policy): ''' Unregister policy. Policy can be: - int -- then it will just remove one policy - list or tuple of ints -- remove all given - dict -- remove policies by keys from dict In the last case the routine will ignore dict values, it is implemented so just to make it compatible with `get_policy_map()` return value. ''' if isinstance(policy, int): policy = [policy] elif isinstance(policy, dict): policy = list(policy) assert isinstance(policy, (tuple, list, set)) for key in policy: del self.marshal.msg_map[key] return self.marshal.msg_map def get_policy_map(self, policy=None): ''' Return policy for a given message type or for all message types. Policy parameter can be either int, or a list of ints. Always return dictionary. ''' if policy is None: return self.marshal.msg_map if isinstance(policy, int): policy = [policy] assert isinstance(policy, (list, tuple, set)) ret = {} for key in policy: ret[key] = self.marshal.msg_map[key] return ret def sendto(self, *argv, **kwarg): return self._sendto(*argv, **kwarg) def recv(self, *argv, **kwarg): return self._recv(*argv, **kwarg) def recv_into(self, *argv, **kwarg): return self._recv_into(*argv, **kwarg) def recv_ft(self, *argv, **kwarg): return self._recv(*argv, **kwarg) def async_recv(self): poll = select.poll() poll.register(self._sock, select.POLLIN | select.POLLPRI) poll.register(self._ctrl_read, select.POLLIN | select.POLLPRI) sockfd = self._sock.fileno() while True: events = poll.poll() for (fd, event) in events: if fd == sockfd: try: data = bytearray(64000) self._sock.recv_into(data, 64000) self.buffer_queue.put_nowait(data) except Exception as e: self.buffer_queue.put(e) return else: return def put(self, msg, msg_type, msg_flags=NLM_F_REQUEST, addr=(0, 0), msg_seq=0, msg_pid=None): ''' Construct a message from a dictionary and send it to the socket. Parameters: - msg -- the message in the dictionary format - msg_type -- the message type - msg_flags -- the message flags to use in the request - addr -- `sendto()` addr, default `(0, 0)` - msg_seq -- sequence number to use - msg_pid -- pid to use, if `None` -- use os.getpid() Example:: s = IPRSocket() s.bind() s.put({'index': 1}, RTM_GETLINK) s.get() s.close() Please notice, that the return value of `s.get()` can be not the result of `s.put()`, but any broadcast message. To fix that, use `msg_seq` -- the response must contain the same `msg['header']['sequence_number']` value. 
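A sketch of pairing the request with its response explicitly; `addr_pool`
is the same allocator that `nlm_request()` uses internally::

    seq = s.addr_pool.alloc()
    s.put({'index': 1}, RTM_GETLINK, msg_seq=seq)
    responses = list(s.get(msg_seq=seq))
    s.addr_pool.free(seq, ban=0xff)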
''' if msg_seq != 0: self.lock[msg_seq].acquire() try: if msg_seq not in self.backlog: self.backlog[msg_seq] = [] if not isinstance(msg, nlmsg): msg_class = self.marshal.msg_map[msg_type] msg = msg_class(msg) if msg_pid is None: msg_pid = self.epid or os.getpid() msg['header']['type'] = msg_type msg['header']['flags'] = msg_flags msg['header']['sequence_number'] = msg_seq msg['header']['pid'] = msg_pid self.sendto_gate(msg, addr) except: raise finally: if msg_seq != 0: self.lock[msg_seq].release() def sendto_gate(self, msg, addr): raise NotImplementedError() def get(self, bufsize=DEFAULT_RCVBUF, msg_seq=0, terminate=None, callback=None): ''' Get parsed messages list. If `msg_seq` is given, return only messages with that `msg['header']['sequence_number']`, saving all other messages into `self.backlog`. The routine is thread-safe. The `bufsize` parameter can be: - -1: bufsize will be calculated from the first 4 bytes of the network data - 0: bufsize will be calculated from SO_RCVBUF sockopt - int >= 0: just a bufsize ''' ctime = time.time() with self.lock[msg_seq]: if bufsize == -1: # get bufsize from the network data bufsize = struct.unpack("I", self.recv(4, MSG_PEEK))[0] elif bufsize == 0: # get bufsize from SO_RCVBUF bufsize = self.getsockopt(SOL_SOCKET, SO_RCVBUF) // 2 tmsg = None enough = False backlog_acquired = False try: while not enough: # 8<----------------------------------------------------------- # # This stage changes the backlog, so use mutex to # prevent side changes self.backlog_lock.acquire() backlog_acquired = True ## # Stage 1. BEGIN # # 8<----------------------------------------------------------- # # Check backlog and return already collected # messages. # if msg_seq == 0 and self.backlog[0]: # Zero queue. # # Load the backlog, if there is valid # content in it for msg in self.backlog[0]: yield msg self.backlog[0] = [] # And just exit break elif msg_seq != 0 and len(self.backlog.get(msg_seq, [])): # Any other msg_seq. # # Collect messages up to the terminator. # Terminator conditions: # * NLMSG_ERROR != 0 # * NLMSG_DONE # * terminate() function (if defined) # * not NLM_F_MULTI # # Please note, that if terminator not occured, # more `recv()` rounds CAN be required. for msg in tuple(self.backlog[msg_seq]): # Drop the message from the backlog, if any self.backlog[msg_seq].remove(msg) # If there is an error, raise exception if msg['header'].get('error', None) is not None: self.backlog[0].extend(self.backlog[msg_seq]) del self.backlog[msg_seq] # The loop is done raise msg['header']['error'] # If it is the terminator message, say "enough" # and requeue all the rest into Zero queue if terminate is not None: tmsg = terminate(msg) if isinstance(tmsg, nlmsg): yield msg if (msg['header']['type'] == NLMSG_DONE) or tmsg: # The loop is done enough = True # If it is just a normal message, append it to # the response if not enough: # finish the loop on single messages if not msg['header']['flags'] & NLM_F_MULTI: enough = True yield msg # Enough is enough, requeue the rest and delete # our backlog if enough: self.backlog[0].extend(self.backlog[msg_seq]) del self.backlog[msg_seq] break # Next iteration self.backlog_lock.release() backlog_acquired = False else: # Stage 1. END # # 8<------------------------------------------------------- # # Stage 2. BEGIN # # 8<------------------------------------------------------- # # Receive the data from the socket and put the messages # into the backlog # self.backlog_lock.release() backlog_acquired = False ## # # Control the timeout. 
We should not be within the # function more than TIMEOUT seconds. All the locks # MUST be released here. # if (msg_seq != 0) and \ (time.time() - ctime > self.get_timeout): # requeue already received for that msg_seq self.backlog[0].extend(self.backlog[msg_seq]) del self.backlog[msg_seq] # throw an exception if self.get_timeout_exception: raise self.get_timeout_exception() else: return # if self.read_lock.acquire(False): try: self.change_master.clear() # If the socket is free to read from, occupy # it and wait for the data # # This is a time consuming process, so all the # locks, except the read lock must be released data = self.recv_ft(bufsize) # Parse data msgs = self.marshal.parse(data, msg_seq, callback) # Reset ctime -- timeout should be measured # for every turn separately ctime = time.time() # current = self.buffer_queue.qsize() delta = current - self.qsize delay = 0 if delta > 10: delay = min(3, max(0.01, float(current) / 60000)) message = ("Packet burst: " "delta=%s qsize=%s delay=%s" % (delta, current, delay)) if delay < 1: log.debug(message) else: log.warning(message) time.sleep(delay) self.qsize = current # We've got the data, lock the backlog again with self.backlog_lock: for msg in msgs: msg['header']['stats'] = Stats(current, delta, delay) seq = msg['header']['sequence_number'] if seq not in self.backlog: if msg['header']['type'] == \ NLMSG_ERROR: # Drop orphaned NLMSG_ERROR # messages continue seq = 0 # 8<----------------------------------- # Callbacks section for cr in self.callbacks: try: if cr[0](msg): cr[1](msg, *cr[2]) except: # FIXME # # Usually such code formatting # means that the method should # be refactored to avoid such # indentation. # # Plz do something with it. # lw = log.warning lw("Callback fail: %s" % (cr)) lw(traceback.format_exc()) # 8<----------------------------------- self.backlog[seq].append(msg) # Now wake up other threads self.change_master.set() finally: # Finally, release the read lock: all data # processed self.read_lock.release() else: # If the socket is occupied and there is still no # data for us, wait for the next master change or # for a timeout self.change_master.wait(1) # 8<------------------------------------------------------- # # Stage 2. END # # 8<------------------------------------------------------- finally: if backlog_acquired: self.backlog_lock.release() def nlm_request(self, msg, msg_type, msg_flags=NLM_F_REQUEST | NLM_F_DUMP, terminate=None, callback=None): msg_seq = self.addr_pool.alloc() with self.lock[msg_seq]: retry_count = 0 while True: try: self.put(msg, msg_type, msg_flags, msg_seq=msg_seq) for msg in self.get(msg_seq=msg_seq, terminate=terminate, callback=callback): yield msg break except NetlinkError as e: if e.code != 16: raise if retry_count >= 30: raise print('Error 16, retry {}.'.format(retry_count)) time.sleep(0.3) retry_count += 1 continue except Exception: raise finally: # Ban this msg_seq for 0xff rounds # # It's a long story. Modern kernels for RTM_SET.* # operations always return NLMSG_ERROR(0) == success, # even not setting NLM_F_MULTI flag on other response # messages and thus w/o any NLMSG_DONE. So, how to detect # the response end? One can not rely on NLMSG_ERROR on # old kernels, but we have to support them too. Ty, we # just ban msg_seq for several rounds, and NLMSG_ERROR, # being received, will become orphaned and just dropped. # # Hack, but true. 
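# For reference, a minimal usage sketch of `nlm_request()` from the
# provider side; the values are illustrative, and an RTNL provider such
# as IPRSocket supplies the marshal that maps RTM_GETLINK to ifinfmsg:
#
#     from pyroute2 import IPRSocket
#     from pyroute2.netlink import NLM_F_DUMP, NLM_F_REQUEST
#     from pyroute2.netlink.rtnl import RTM_GETLINK
#
#     s = IPRSocket()
#     s.bind()
#     for msg in s.nlm_request({'index': 0}, RTM_GETLINK,
#                              NLM_F_REQUEST | NLM_F_DUMP):
#         print(msg.get_attr('IFLA_IFNAME'))
#     s.close()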
self.addr_pool.free(msg_seq, ban=0xff) class BatchAddrPool(object): def alloc(self, *argv, **kwarg): return 0 def free(self, *argv, **kwarg): pass class BatchBacklogQueue(list): def append(self, *argv, **kwarg): pass def pop(self, *argv, **kwarg): pass class BatchBacklog(dict): def __getitem__(self, key): return BatchBacklogQueue() def __setitem__(self, key, value): pass def __delitem__(self, key): pass class BatchSocket(NetlinkMixin): def post_init(self): self.backlog = BatchBacklog() self.addr_pool = BatchAddrPool() self._sock = None self.reset() def reset(self): self.batch = bytearray() def nlm_request(self, msg, msg_type, msg_flags=NLM_F_REQUEST | NLM_F_DUMP, terminate=None, callback=None): msg_seq = self.addr_pool.alloc() msg_pid = self.epid or os.getpid() msg['header']['type'] = msg_type msg['header']['flags'] = msg_flags msg['header']['sequence_number'] = msg_seq msg['header']['pid'] = msg_pid msg.data = self.batch msg.offset = len(self.batch) msg.encode() return [] def get(self, *argv, **kwarg): pass class NetlinkSocket(NetlinkMixin): def post_init(self): # recreate the underlying socket with self.sys_lock: if self._sock is not None: self._sock.close() self._sock = config.SocketBase(AF_NETLINK, SOCK_DGRAM, self.family, self._fileno) self.sendto_gate = self._gate # monkey patch recv_into on Python 2.6 if sys.version_info[0] == 2 and sys.version_info[1] < 7: # --> monkey patch the socket log.warning('patching socket.recv_into()') def patch(data, bsize): data[0:] = self._sock.recv(bsize) self._sock.recv_into = patch self.setsockopt(SOL_SOCKET, SO_SNDBUF, self._sndbuf) self.setsockopt(SOL_SOCKET, SO_RCVBUF, self._rcvbuf) if self.all_ns: self.setsockopt(SOL_NETLINK, NETLINK_LISTEN_ALL_NSID, 1) def __getattr__(self, attr): if attr in ('getsockname', 'getsockopt', 'makefile', 'setsockopt', 'setblocking', 'settimeout', 'gettimeout', 'shutdown', 'recvfrom', 'recvfrom_into', 'fileno'): return getattr(self._sock, attr) elif attr in ('_sendto', '_recv', '_recv_into'): return getattr(self._sock, attr.lstrip("_")) elif attr == "recv_ft": return self._sock.recv raise AttributeError(attr) def _gate(self, msg, addr): msg.reset() msg.encode() return self._sock.sendto(msg.data, addr) def bind(self, groups=0, pid=None, **kwarg): ''' Bind the socket to given multicast groups, using given pid. 
- If pid is None, use automatic port allocation - If pid == 0, use process' pid - If pid == , use the value instead of pid ''' if pid is not None: self.port = 0 self.fixed = True self.pid = pid or os.getpid() if 'async' in kwarg: # FIXME # raise deprecation error after 0.5.3 # log.warning('use "async_cache" instead of "async", ' '"async" is a keyword from Python 3.7') async_cache = kwarg.get('async_cache') or kwarg.get('async') self.groups = groups # if we have pre-defined port, use it strictly if self.fixed: self.epid = self.pid + (self.port << 22) self._sock.bind((self.epid, self.groups)) else: for port in range(1024): try: self.port = port self.epid = self.pid + (self.port << 22) self._sock.bind((self.epid, self.groups)) break except Exception: # create a new underlying socket -- on kernel 4 # one failed bind() makes the socket useless self.post_init() else: raise KeyError('no free address available') # all is OK till now, so start async recv, if we need if async_cache: def recv_plugin(*argv, **kwarg): data_in = self.buffer_queue.get() if isinstance(data_in, Exception): raise data_in else: return data_in def recv_into_plugin(data, *argv, **kwarg): data_in = self.buffer_queue.get() if isinstance(data_in, Exception): raise data_in else: data[:] = data_in return len(data_in) self._recv = recv_plugin self._recv_into = recv_into_plugin self.recv_ft = recv_plugin self.pthread = threading.Thread(name="Netlink async cache", target=self.async_recv) self.pthread.setDaemon(True) self.pthread.start() def add_membership(self, group): self.setsockopt(SOL_NETLINK, NETLINK_ADD_MEMBERSHIP, group) def drop_membership(self, group): self.setsockopt(SOL_NETLINK, NETLINK_DROP_MEMBERSHIP, group) def close(self, code=errno.ECONNRESET): ''' Correctly close the socket and free all resources. ''' with self.sys_lock: if self.closed: return self.closed = True if self.pthread: os.write(self._ctrl_write, b'exit') self.pthread.join() super(NetlinkSocket, self).close(code=code) # Common shutdown procedure self._sock.close() pyroute2-0.5.9/pyroute2/netlink/rtnl/0000755000175000017500000000000013621220110017432 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/rtnl/__init__.py0000644000175000017500000001422113610051400021545 0ustar peetpeet00000000000000''' RTNetlink: network setup ======================== RTNL is a netlink protocol, used to get and set information about different network objects -- addresses, routes, interfaces etc. RTNL protocol-specific data in messages depends on the object type. E.g., complete packet with the interface address information:: nlmsg header: + uint32 length + uint16 type + uint16 flags + uint32 sequence number + uint32 pid ifaddrmsg structure: + unsigned char ifa_family + unsigned char ifa_prefixlen + unsigned char ifa_flags + unsigned char ifa_scope + uint32 ifa_index [ optional NLA tree ] NLA for this kind of packets can be of type IFA_ADDRESS, IFA_LOCAL etc. -- please refer to the corresponding source. Other objects types require different structures, sometimes really complex. All these structures are described in sources. 
--------------------------- Module contents: ''' from pyroute2.common import map_namespace # RTnetlink multicast group flags (for use with bind()) RTMGRP_NONE = 0x0 RTMGRP_LINK = 0x1 RTMGRP_NOTIFY = 0x2 RTMGRP_NEIGH = 0x4 RTMGRP_TC = 0x8 RTMGRP_IPV4_IFADDR = 0x10 RTMGRP_IPV4_MROUTE = 0x20 RTMGRP_IPV4_ROUTE = 0x40 RTMGRP_IPV4_RULE = 0x80 RTMGRP_IPV6_IFADDR = 0x100 RTMGRP_IPV6_MROUTE = 0x200 RTMGRP_IPV6_ROUTE = 0x400 RTMGRP_IPV6_IFINFO = 0x800 RTMGRP_DECnet_IFADDR = 0x1000 RTMGRP_NOP2 = 0x2000 RTMGRP_DECnet_ROUTE = 0x4000 RTMGRP_DECnet_RULE = 0x8000 RTMGRP_NOP4 = 0x10000 RTMGRP_IPV6_PREFIX = 0x20000 RTMGRP_IPV6_RULE = 0x40000 RTMGRP_MPLS_ROUTE = 0x4000000 # multicast group ids (for use with {add,drop}_membership) RTNLGRP_NONE = 0 RTNLGRP_LINK = 1 RTNLGRP_NOTIFY = 2 RTNLGRP_NEIGH = 3 RTNLGRP_TC = 4 RTNLGRP_IPV4_IFADDR = 5 RTNLGRP_IPV4_MROUTE = 6 RTNLGRP_IPV4_ROUTE = 7 RTNLGRP_IPV4_RULE = 8 RTNLGRP_IPV6_IFADDR = 9 RTNLGRP_IPV6_MROUTE = 10 RTNLGRP_IPV6_ROUTE = 11 RTNLGRP_IPV6_IFINFO = 12 RTNLGRP_DECnet_IFADDR = 13 RTNLGRP_NOP2 = 14 RTNLGRP_DECnet_ROUTE = 15 RTNLGRP_DECnet_RULE = 16 RTNLGRP_NOP4 = 17 RTNLGRP_IPV6_PREFIX = 18 RTNLGRP_IPV6_RULE = 19 RTNLGRP_ND_USEROPT = 20 RTNLGRP_PHONET_IFADDR = 21 RTNLGRP_PHONET_ROUTE = 22 RTNLGRP_DCB = 23 RTNLGRP_IPV4_NETCONF = 24 RTNLGRP_IPV6_NETCONF = 25 RTNLGRP_MDB = 26 RTNLGRP_MPLS_ROUTE = 27 RTNLGRP_NSID = 28 RTNLGRP_MPLS_NETCONF = 29 RTNLGRP_IPV4_MROUTE_R = 30 RTNLGRP_IPV6_MROUTE_R = 31 # Types of messages # RTM_BASE = 16 RTM_NEWLINK = 16 RTM_DELLINK = 17 RTM_GETLINK = 18 RTM_SETLINK = 19 RTM_NEWADDR = 20 RTM_DELADDR = 21 RTM_GETADDR = 22 RTM_NEWROUTE = 24 RTM_DELROUTE = 25 RTM_GETROUTE = 26 RTM_NEWNEIGH = 28 RTM_DELNEIGH = 29 RTM_GETNEIGH = 30 RTM_NEWRULE = 32 RTM_DELRULE = 33 RTM_GETRULE = 34 RTM_NEWQDISC = 36 RTM_DELQDISC = 37 RTM_GETQDISC = 38 RTM_NEWTCLASS = 40 RTM_DELTCLASS = 41 RTM_GETTCLASS = 42 RTM_NEWTFILTER = 44 RTM_DELTFILTER = 45 RTM_GETTFILTER = 46 RTM_NEWACTION = 48 RTM_DELACTION = 49 RTM_GETACTION = 50 RTM_NEWPREFIX = 52 RTM_GETMULTICAST = 58 RTM_GETANYCAST = 62 RTM_NEWNEIGHTBL = 64 RTM_GETNEIGHTBL = 66 RTM_SETNEIGHTBL = 67 RTM_NEWNDUSEROPT = 68 RTM_NEWADDRLABEL = 72 RTM_DELADDRLABEL = 73 RTM_GETADDRLABEL = 74 RTM_GETDCB = 78 RTM_SETDCB = 79 RTM_NEWNETCONF = 80 RTM_DELNETCONF = 81 RTM_GETNETCONF = 82 RTM_NEWMDB = 84 RTM_DELMDB = 85 RTM_GETMDB = 86 RTM_NEWNSID = 88 RTM_DELNSID = 89 RTM_GETNSID = 90 RTM_NEWSTATS = 92 RTM_GETSTATS = 94 RTM_NEWCACHEREPORT = 96 # fake internal message types RTM_NEWNETNS = 500 RTM_DELNETNS = 501 RTM_GETNETNS = 502 (RTM_NAMES, RTM_VALUES) = map_namespace('RTM_', globals()) TC_H_INGRESS = 0xfffffff1 TC_H_CLSACT = TC_H_INGRESS TC_H_ROOT = 0xffffffff RTMGRP_DEFAULTS = RTMGRP_IPV4_IFADDR |\ RTMGRP_IPV6_IFADDR |\ RTMGRP_IPV4_ROUTE |\ RTMGRP_IPV6_ROUTE |\ RTMGRP_IPV4_RULE |\ RTMGRP_IPV6_RULE |\ RTMGRP_NEIGH |\ RTMGRP_LINK |\ RTMGRP_TC |\ RTMGRP_MPLS_ROUTE encap_type = {'unspec': 0, 'mpls': 1, 0: 'unspec', 1: 'mpls'} rtypes = {'RTN_UNSPEC': 0, 'RTN_UNICAST': 1, # Gateway or direct route 'RTN_LOCAL': 2, # Accept locally 'RTN_BROADCAST': 3, # Accept locally as broadcast # send as broadcast 'RTN_ANYCAST': 4, # Accept locally as broadcast, # but send as unicast 'RTN_MULTICAST': 5, # Multicast route 'RTN_BLACKHOLE': 6, # Drop 'RTN_UNREACHABLE': 7, # Destination is unreachable 'RTN_PROHIBIT': 8, # Administratively prohibited 'RTN_THROW': 9, # Not in this table 'RTN_NAT': 10, # Translate this address 'RTN_XRESOLVE': 11} # Use external resolver # normalized rt_type = dict([(x[0][4:].lower(), x[1]) for x in rtypes.items()] + 
[(x[1], x[0][4:].lower()) for x in rtypes.items()]) rtprotos = {'RTPROT_UNSPEC': 0, 'RTPROT_REDIRECT': 1, # Route installed by ICMP redirects; # not used by current IPv4 'RTPROT_KERNEL': 2, # Route installed by kernel 'RTPROT_BOOT': 3, # Route installed during boot 'RTPROT_STATIC': 4, # Route installed by administrator # Values of protocol >= RTPROT_STATIC are not # interpreted by kernel; # keep in sync with iproute2 ! 'RTPROT_GATED': 8, # gated 'RTPROT_RA': 9, # RDISC/ND router advertisements 'RTPROT_MRT': 10, # Merit MRT 'RTPROT_ZEBRA': 11, # Zebra 'RTPROT_BIRD': 12, # BIRD 'RTPROT_DNROUTED': 13, # DECnet routing daemon 'RTPROT_XORP': 14, # XORP 'RTPROT_NTK': 15, # Netsukuku 'RTPROT_DHCP': 16} # DHCP client # normalized rt_proto = dict([(x[0][7:].lower(), x[1]) for x in rtprotos.items()] + [(x[1], x[0][7:].lower()) for x in rtprotos.items()]) rtscopes = {'RT_SCOPE_UNIVERSE': 0, 'RT_SCOPE_SITE': 200, 'RT_SCOPE_LINK': 253, 'RT_SCOPE_HOST': 254, 'RT_SCOPE_NOWHERE': 255} # normalized rt_scope = dict([(x[0][9:].lower(), x[1]) for x in rtscopes.items()] + [(x[1], x[0][9:].lower()) for x in rtscopes.items()]) pyroute2-0.5.9/pyroute2/netlink/rtnl/errmsg.py0000644000175000017500000000023313610051400021303 0ustar peetpeet00000000000000from pyroute2.netlink import nlmsg class errmsg(nlmsg): ''' Custom message type Error ersatz-message ''' fields = (('code', 'i'), ) pyroute2-0.5.9/pyroute2/netlink/rtnl/fibmsg.py0000644000175000017500000000505213610051400021257 0ustar peetpeet00000000000000 from pyroute2.common import map_namespace from pyroute2.netlink import nlmsg from pyroute2.netlink import nla FR_ACT_UNSPEC = 0 FR_ACT_TO_TBL = 1 FR_ACT_GOTO = 2 FR_ACT_NOP = 3 FR_ACT_BLACKHOLE = 6 FR_ACT_UNREACHABLE = 7 FR_ACT_PROHIBIT = 8 (FR_ACT_NAMES, FR_ACT_VALUES) = map_namespace('FR_ACT', globals()) class fibmsg(nlmsg): ''' IP rule message C structure:: struct fib_rule_hdr { __u8 family; __u8 dst_len; __u8 src_len; __u8 tos; __u8 table; __u8 res1; /* reserved */ __u8 res2; /* reserved */ __u8 action; __u32 flags; }; ''' prefix = 'FRA_' fields = (('family', 'B'), ('dst_len', 'B'), ('src_len', 'B'), ('tos', 'B'), ('table', 'B'), ('res1', 'B'), ('res2', 'B'), ('action', 'B'), ('flags', 'I')) # fibmsg NLA numbers are not sequential, so # give them here explicitly nla_map = ((0, 'FRA_UNSPEC', 'none'), (1, 'FRA_DST', 'ipaddr'), (2, 'FRA_SRC', 'ipaddr'), (3, 'FRA_IIFNAME', 'asciiz'), (4, 'FRA_GOTO', 'uint32'), (6, 'FRA_PRIORITY', 'uint32'), (10, 'FRA_FWMARK', 'uint32'), (11, 'FRA_FLOW', 'uint32'), (12, 'FRA_TUN_ID', 'be64'), (13, 'FRA_SUPPRESS_IFGROUP', 'uint32'), (14, 'FRA_SUPPRESS_PREFIXLEN', 'uint32'), (15, 'FRA_TABLE', 'uint32'), (16, 'FRA_FWMASK', 'uint32'), (17, 'FRA_OIFNAME', 'asciiz'), (18, 'FRA_PAD', 'hex'), (19, 'FRA_L3MDEV', 'uint8'), (20, 'FRA_UID_RANGE', 'uid_range'), (21, 'FRA_PROTOCOL', 'uint8'), (22, 'FRA_IP_PROTO', 'uint8'), (23, 'FRA_SPORT_RANGE', 'port_range'), (24, 'FRA_DPORT_RANGE', 'port_range')) class fra_range(nla): __slots__ = () sql_type = 'TEXT' def encode(self): self['start'], self['end'] = self.value.split(':') nla.encode(self) def decode(self): nla.decode(self) self.value = '%s:%s' % (self['start'], self['end']) class uid_range(fra_range): fields = (('start', 'I'), ('end', 'I')) class port_range(fra_range): fields = (('start', 'H'), ('end', 'H')) pyroute2-0.5.9/pyroute2/netlink/rtnl/ifaddrmsg.py0000644000175000017500000000546613610051400021761 0ustar peetpeet00000000000000import socket from pyroute2.common import map_namespace from pyroute2.netlink import nlmsg from pyroute2.netlink 
import nla # address attributes # # Important comment: # For IPv4, IFA_ADDRESS is a prefix address, not a local interface # address. It makes no difference for normal interfaces, but # for point-to-point ones IFA_ADDRESS means DESTINATION address, # and the local address is supplied in IFA_LOCAL attribute. # IFA_F_SECONDARY = 0x01 # IFA_F_TEMPORARY IFA_F_SECONDARY IFA_F_NODAD = 0x02 IFA_F_OPTIMISTIC = 0x04 IFA_F_DADFAILED = 0x08 IFA_F_HOMEADDRESS = 0x10 IFA_F_DEPRECATED = 0x20 IFA_F_TENTATIVE = 0x40 IFA_F_PERMANENT = 0x80 IFA_F_MANAGETEMPADDR = 0x100 IFA_F_NOPREFIXROUTE = 0x200 IFA_F_MCAUTOJOIN = 0x400 IFA_F_STABLE_PRIVACY = 0x800 (IFA_F_NAMES, IFA_F_VALUES) = map_namespace('IFA_F', globals()) # 8<---------------------------------------------- IFA_F_TEMPORARY = IFA_F_SECONDARY IFA_F_NAMES['IFA_F_TEMPORARY'] = IFA_F_TEMPORARY IFA_F_VALUES6 = IFA_F_VALUES IFA_F_VALUES6[IFA_F_TEMPORARY] = 'IFA_F_TEMPORARY' # 8<---------------------------------------------- class ifaddrmsg(nlmsg): ''' IP address information C structure:: struct ifaddrmsg { unsigned char ifa_family; /* Address type */ unsigned char ifa_prefixlen; /* Prefixlength of address */ unsigned char ifa_flags; /* Address flags */ unsigned char ifa_scope; /* Address scope */ int ifa_index; /* Interface index */ }; ''' prefix = 'IFA_' sql_constraints = {'IFA_LOCAL': "NOT NULL DEFAULT ''"} fields = (('family', 'B'), ('prefixlen', 'B'), ('flags', 'B'), ('scope', 'B'), ('index', 'I')) nla_map = (('IFA_UNSPEC', 'hex'), ('IFA_ADDRESS', 'ipaddr'), ('IFA_LOCAL', 'ipaddr'), ('IFA_LABEL', 'asciiz'), ('IFA_BROADCAST', 'ipaddr'), ('IFA_ANYCAST', 'ipaddr'), ('IFA_CACHEINFO', 'cacheinfo'), ('IFA_MULTICAST', 'ipaddr'), ('IFA_FLAGS', 'uint32')) class cacheinfo(nla): fields = (('ifa_preferred', 'I'), ('ifa_valid', 'I'), ('cstamp', 'I'), ('tstamp', 'I')) @staticmethod def flags2names(flags, family=socket.AF_INET): if family == socket.AF_INET6: ifa_f_values = IFA_F_VALUES6 else: ifa_f_values = IFA_F_VALUES ret = [] for f in ifa_f_values: if f & flags: ret.append(ifa_f_values[f]) return ret @staticmethod def names2flags(flags): ret = 0 for f in flags: if f[0] == '!': f = f[1:] else: ret |= IFA_F_NAMES[f] return ret pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/0000755000175000017500000000000013621220110021234 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/__init__.py0000644000175000017500000012132013610051400023346 0ustar peetpeet00000000000000import os import sys import struct import pkgutil import logging import importlib from socket import AF_INET from socket import AF_INET6 from pyroute2 import config from pyroute2.config import AF_BRIDGE from pyroute2.common import map_namespace from pyroute2.common import basestring from pyroute2.netlink import nla from pyroute2.netlink import nlmsg from pyroute2.netlink import nlmsg_atoms from pyroute2.netlink.rtnl.iw_event import iw_event from pyroute2.netlink.rtnl.ifinfmsg.plugins import (bond, gtp, ipvlan, tuntap, vlan, vrf, vti, vti6, vxlan, xfrm, ipoib) log = logging.getLogger(__name__) # it's simpler to double constants here, than to change all the # module layout; but it is a subject of the future refactoring RTM_NEWLINK = 16 RTM_DELLINK = 17 # ## # # tuntap flags # IFT_TUN = 0x0001 IFT_TAP = 0x0002 IFT_NO_PI = 0x1000 IFT_ONE_QUEUE = 0x2000 IFT_VNET_HDR = 0x4000 IFT_TUN_EXCL = 0x8000 IFT_MULTI_QUEUE = 0x0100 IFT_ATTACH_QUEUE = 0x0200 IFT_DETACH_QUEUE = 0x0400 # read-only IFT_PERSIST = 0x0800 IFT_NOFILTER = 0x1000 ## # # normal flags # IFF_UP = 0x1 # interface is up IFF_BROADCAST = 
0x2 # broadcast address valid IFF_DEBUG = 0x4 # turn on debugging IFF_LOOPBACK = 0x8 # is a loopback net IFF_POINTOPOINT = 0x10 # interface is has p-p link IFF_NOTRAILERS = 0x20 # avoid use of trailers IFF_RUNNING = 0x40 # interface RFC2863 OPER_UP IFF_NOARP = 0x80 # no ARP protocol IFF_PROMISC = 0x100 # receive all packets IFF_ALLMULTI = 0x200 # receive all multicast packets IFF_MASTER = 0x400 # master of a load balancer IFF_SLAVE = 0x800 # slave of a load balancer IFF_MULTICAST = 0x1000 # Supports multicast IFF_PORTSEL = 0x2000 # can set media type IFF_AUTOMEDIA = 0x4000 # auto media select active IFF_DYNAMIC = 0x8000 # dialup device with changing addresses IFF_LOWER_UP = 0x10000 # driver signals L1 up IFF_DORMANT = 0x20000 # driver signals dormant IFF_ECHO = 0x40000 # echo sent packets (IFF_NAMES, IFF_VALUES) = map_namespace('IFF', globals()) IFF_MASK = IFF_UP |\ IFF_DEBUG |\ IFF_NOTRAILERS |\ IFF_NOARP |\ IFF_PROMISC |\ IFF_ALLMULTI IFF_VOLATILE = IFF_LOOPBACK |\ IFF_POINTOPOINT |\ IFF_BROADCAST |\ IFF_ECHO |\ IFF_MASTER |\ IFF_SLAVE |\ IFF_RUNNING |\ IFF_LOWER_UP |\ IFF_DORMANT ## # # gre flags # GRE_ACK = 0x0080 GRE_REC = 0x0700 GRE_STRICT = 0x0800 GRE_SEQ = 0x1000 GRE_KEY = 0x2000 GRE_ROUTING = 0x4000 GRE_CSUM = 0x8000 (GRE_NAMES, GRE_VALUES) = map_namespace('GRE_', globals()) ## # # vlan filter flags # BRIDGE_VLAN_INFO_MASTER = 0x1 # operate on bridge device BRIDGE_VLAN_INFO_PVID = 0x2 # ingress untagged BRIDGE_VLAN_INFO_UNTAGGED = 0x4 # egress untagged BRIDGE_VLAN_INFO_RANGE_BEGIN = 0x8 # range start BRIDGE_VLAN_INFO_RANGE_END = 0x10 # range end BRIDGE_VLAN_INFO_BRENTRY = 0x20 # global bridge vlan entry (BRIDGE_VLAN_NAMES, BRIDGE_VLAN_VALUES) = \ map_namespace('BRIDGE_VLAN_INFO', globals()) BRIDGE_FLAGS_MASTER = 1 BRIDGE_FLAGS_SELF = 2 (BRIDGE_FLAGS_NAMES, BRIDGE_FLAGS_VALUES) = \ map_namespace('BRIDGE_FLAGS', globals()) states = ('UNKNOWN', 'NOTPRESENT', 'DOWN', 'LOWERLAYERDOWN', 'TESTING', 'DORMANT', 'UP') state_by_name = dict(((i[1], i[0]) for i in enumerate(states))) state_by_code = dict(enumerate(states)) stats_names = ('rx_packets', 'tx_packets', 'rx_bytes', 'tx_bytes', 'rx_errors', 'tx_errors', 'rx_dropped', 'tx_dropped', 'multicast', 'collisions', 'rx_length_errors', 'rx_over_errors', 'rx_crc_errors', 'rx_frame_errors', 'rx_fifo_errors', 'rx_missed_errors', 'tx_aborted_errors', 'tx_carrier_errors', 'tx_fifo_errors', 'tx_heartbeat_errors', 'tx_window_errors', 'rx_compressed', 'tx_compressed') def load_plugins_by_path(path): plugins = {} files = set([x.split('.')[0] for x in filter(lambda x: x.endswith(('.py', '.pyc', '.pyo')), os.listdir(path)) if not x.startswith('_')]) sys.path.append(path) for name in files: try: module = __import__(name, globals(), locals(), [], 0) plugins[name] = getattr(module, name) except: pass sys.path.pop() return plugins def load_plugins_by_pkg(pkg): plugin_modules = { name: name.split('.')[-1] for loader, name, ispkg in pkgutil.iter_modules( path=pkg.__path__, prefix=pkg.__name__ + '.') } # Hack to make it compatible with pyinstaller # plugin loading will work with and without pyinstaller # Inspired on: # https://github.com/webcomics/dosage/blob/master/dosagelib/loader.py # see: https://github.com/pyinstaller/pyinstaller/issues/1905 importers = map(pkgutil.get_importer, pkg.__path__) toc = set() for importer in importers: if hasattr(importer, 'toc'): toc |= importer.toc for element in toc: if (element.startswith(pkg.__name__) and element != pkg.__name__): plugin_modules[element] = element.split('.')[-1] return { mod_name: 
getattr(importlib.import_module(mod_path), mod_name) for mod_path, mod_name in plugin_modules.items() if not mod_name.startswith('_') } data_plugins = {} for module in (bond, gtp, ipvlan, tuntap, vlan, vrf, vti, vti6, vxlan, xfrm, ipoib): name = module.__name__.split('.')[-1] data_plugins[name] = getattr(module, name) for pkg in config.data_plugins_pkgs: data_plugins.update(load_plugins_by_pkg(pkg)) for path in config.data_plugins_path: data_plugins.update(load_plugins_by_path(path)) class ifla_bridge_id(nla): fields = [('value', '=8s')] def encode(self): r_prio = struct.pack('H', self['prio']) r_addr = struct.pack('BBBBBB', *[int(i, 16) for i in self['addr'].split(':')]) self['value'] = r_prio + r_addr nla.encode(self) def decode(self): nla.decode(self) r_prio = self['value'][:2] r_addr = self['value'][2:] self.value = {'prio': struct.unpack('H', r_prio)[0], 'addr': ':'.join('%02x' % (i) for i in struct.unpack('BBBBBB', r_addr))} class protinfo_bridge(nla): prefix = 'IFLA_BRPORT_' nla_map = (('IFLA_BRPORT_UNSPEC', 'none'), ('IFLA_BRPORT_STATE', 'uint8'), ('IFLA_BRPORT_PRIORITY', 'uint16'), ('IFLA_BRPORT_COST', 'uint32'), ('IFLA_BRPORT_MODE', 'uint8'), ('IFLA_BRPORT_GUARD', 'uint8'), ('IFLA_BRPORT_PROTECT', 'uint8'), ('IFLA_BRPORT_FAST_LEAVE', 'uint8'), ('IFLA_BRPORT_LEARNING', 'uint8'), ('IFLA_BRPORT_UNICAST_FLOOD', 'uint8'), ('IFLA_BRPORT_PROXYARP', 'uint8'), ('IFLA_BRPORT_LEARNING_SYNC', 'uint8'), ('IFLA_BRPORT_PROXYARP_WIFI', 'uint8'), ('IFLA_BRPORT_ROOT_ID', 'br_id'), ('IFLA_BRPORT_BRIDGE_ID', 'br_id'), ('IFLA_BRPORT_DESIGNATED_PORT', 'uint16'), ('IFLA_BRPORT_DESIGNATED_COST', 'uint16'), ('IFLA_BRPORT_ID', 'uint16'), ('IFLA_BRPORT_NO', 'uint16'), ('IFLA_BRPORT_TOPOLOGY_CHANGE_ACK', 'uint8'), ('IFLA_BRPORT_CONFIG_PENDING', 'uint8'), ('IFLA_BRPORT_MESSAGE_AGE_TIMER', 'uint64'), ('IFLA_BRPORT_FORWARD_DELAY_TIMER', 'uint64'), ('IFLA_BRPORT_HOLD_TIMER', 'uint64'), ('IFLA_BRPORT_FLUSH', 'flag'), ('IFLA_BRPORT_MULTICAST_ROUTER', 'uint8'), ('IFLA_BRPORT_PAD', 'uint64'), ('IFLA_BRPORT_MCAST_FLOOD', 'uint8'), ('IFLA_BRPORT_MCAST_TO_UCAST', 'uint8'), ('IFLA_BRPORT_VLAN_TUNNEL', 'uint8'), ('IFLA_BRPORT_BCAST_FLOOD', 'uint8')) class br_id(ifla_bridge_id): pass class macvx_data(nla): nla_map = (('IFLA_MACVLAN_UNSPEC', 'none'), ('IFLA_MACVLAN_MODE', 'mode'), ('IFLA_MACVLAN_FLAGS', 'flags'), ('IFLA_MACVLAN_MACADDR_MODE', 'macaddr_mode'), ('IFLA_MACVLAN_MACADDR', 'l2addr'), ('IFLA_MACVLAN_MACADDR_DATA', 'macaddr_data'), ('IFLA_MACVLAN_MACADDR_COUNT', 'uint32')) class mode(nlmsg_atoms.uint32): value_map = {0: 'none', 1: 'private', 2: 'vepa', 4: 'bridge', 8: 'passthru', 16: 'source'} class flags(nlmsg_atoms.uint16): value_map = {0: 'none', 1: 'nopromisc'} class macaddr_mode(nlmsg_atoms.uint32): value_map = {0: 'add', 1: 'del', 2: 'flush', 3: 'set'} class macaddr_data(nla): nla_map = ((4, 'IFLA_MACVLAN_MACADDR', 'l2addr'), ) class iptnl_data(nla): nla_map = (('IFLA_IPIP_UNSPEC', 'none'), ('IFLA_IPIP_LINK', 'uint32'), ('IFLA_IPIP_LOCAL', 'ip4addr'), ('IFLA_IPIP_REMOTE', 'ip4addr'), ('IFLA_IPIP_TTL', 'uint8'), ('IFLA_IPIP_TOS', 'uint8'), ('IFLA_IPIP_ENCAP_LIMIT', 'uint8'), ('IFLA_IPIP_FLOWINFO', 'be32'), ('IFLA_IPIP_FLAGS', 'be16'), ('IFLA_IPIP_PROTO', 'uint8'), ('IFLA_IPIP_PMTUDISC', 'uint8'), ('IFLA_IPIP_6RD_PREFIX', 'ip6addr'), ('IFLA_IPIP_6RD_RELAY_PREFIX', 'ip4addr'), ('IFLA_IPIP_6RD_PREFIXLEN', 'uint16'), ('IFLA_IPIP_6RD_RELAY_PREFIXLEN', 'uint16'), ('IFLA_IPIP_ENCAP_TYPE', 'uint16'), ('IFLA_IPIP_ENCAP_FLAGS', 'uint16'), ('IFLA_IPIP_ENCAP_SPORT', 'be16'), ('IFLA_IPIP_ENCAP_DPORT', 'be16'), 
('IFLA_IPIP_COLLECT_METADATA', 'flag'), ('IFLA_IPIP_FWMARK', 'uint32')) class ifinfbase(object): ''' Network interface message. C structure:: struct ifinfomsg { unsigned char ifi_family; /* AF_UNSPEC */ unsigned short ifi_type; /* Device type */ int ifi_index; /* Interface index */ unsigned int ifi_flags; /* Device flags */ unsigned int ifi_change; /* change mask */ }; ''' prefix = 'IFLA_' # # Changed from PRIMARY KEY to NOT NULL to support multiple # targets in one table, so we can collect info from multiple # systems. # # To provide data integrity one should use foreign keys, # but when you create a foreign key using interfaces as # the parent table, create also a unique index on # the fields specified in the foreign key definition. # # E.g. # # CREATE TABLE interfaces (f_target TEXT NOT NULL, # f_index INTEGER NOT NULL, ...) # CREATE TABLE routes (f_target TEXT NOT NULL, # f_RTA_OIF INTEGER, ... # FOREIGN KEY (f_target, f_RTA_OIF) # REFERENCES interfaces(f_target, f_index)) # CREATE UNIQUE INDEX if_idx ON interfaces(f_target, f_index) # sql_constraints = {'index': 'NOT NULL'} sql_extra_fields = (('state', 'TEXT'), ) fields = (('family', 'B'), ('__align', 'x'), ('ifi_type', 'H'), ('index', 'i'), ('flags', 'I'), ('change', 'I')) nla_map = (('IFLA_UNSPEC', 'none'), ('IFLA_ADDRESS', 'l2addr'), ('IFLA_BROADCAST', 'l2addr'), ('IFLA_IFNAME', 'asciiz'), ('IFLA_MTU', 'uint32'), ('IFLA_LINK', 'uint32'), ('IFLA_QDISC', 'asciiz'), ('IFLA_STATS', 'ifstats'), ('IFLA_COST', 'hex'), ('IFLA_PRIORITY', 'hex'), ('IFLA_MASTER', 'uint32'), ('IFLA_WIRELESS', 'wireless'), ('IFLA_PROTINFO', 'protinfo'), ('IFLA_TXQLEN', 'uint32'), ('IFLA_MAP', 'ifmap'), ('IFLA_WEIGHT', 'hex'), ('IFLA_OPERSTATE', 'state'), ('IFLA_LINKMODE', 'uint8'), ('IFLA_LINKINFO', 'ifinfo'), ('IFLA_NET_NS_PID', 'uint32'), ('IFLA_IFALIAS', 'asciiz'), ('IFLA_NUM_VF', 'uint32'), ('IFLA_VFINFO_LIST', 'vflist'), ('IFLA_STATS64', 'ifstats64'), ('IFLA_VF_PORTS', 'hex'), ('IFLA_PORT_SELF', 'hex'), ('IFLA_AF_SPEC', 'af_spec'), ('IFLA_GROUP', 'uint32'), ('IFLA_NET_NS_FD', 'netns_fd'), ('IFLA_EXT_MASK', 'uint32'), ('IFLA_PROMISCUITY', 'uint32'), ('IFLA_NUM_TX_QUEUES', 'uint32'), ('IFLA_NUM_RX_QUEUES', 'uint32'), ('IFLA_CARRIER', 'uint8'), ('IFLA_PHYS_PORT_ID', 'hex'), ('IFLA_CARRIER_CHANGES', 'uint32'), ('IFLA_PHYS_SWITCH_ID', 'hex'), ('IFLA_LINK_NETNSID', 'int32'), ('IFLA_PHYS_PORT_NAME', 'asciiz'), ('IFLA_PROTO_DOWN', 'uint8'), ('IFLA_GSO_MAX_SEGS', 'uint32'), ('IFLA_GSO_MAX_SIZE', 'uint32'), ('IFLA_PAD', 'hex'), ('IFLA_XDP', 'hex'), ('IFLA_EVENT', 'hex'), ('IFLA_NEW_NETNSID', 'hex'), ('IFLA_IF_NETNSID', 'hex'), ('IFLA_CARRIER_UP_COUNT', 'uint32'), ('IFLA_CARRIER_DOWN_COUNT', 'uint32'), ('IFLA_NEW_IFINDEX', 'hex')) @staticmethod def flags2names(flags, mask=0xffffffff): ret = [] for flag in IFF_VALUES: if (flag & mask & flags) == flag: ret.append(IFF_VALUES[flag]) return ret @staticmethod def names2flags(flags): ret = 0 mask = 0 for flag in flags: if flag[0] == '!': flag = flag[1:] else: ret |= IFF_NAMES[flag] mask |= IFF_NAMES[flag] return (ret, mask) def encode(self): # convert flags if isinstance(self['flags'], (set, tuple, list)): self['flags'], self['change'] = self.names2flags(self['flags']) return super(ifinfbase, self).encode() class netns_fd(nla): fields = [('value', 'I')] netns_run_dir = '/var/run/netns' netns_fd = None def encode(self): # # There are two ways to specify netns # # 1. provide fd to an open file # 2. provide a file name # # In the first case, the value is passed to the kernel # as is. 
In the second case, the object opens appropriate # file from `self.netns_run_dir` and closes it upon # `__del__(self)` if isinstance(self.value, int): self['value'] = self.value else: if '/' in self.value: netns_path = self.value else: netns_path = '%s/%s' % (self.netns_run_dir, self.value) self.netns_fd = os.open(netns_path, os.O_RDONLY) self['value'] = self.netns_fd self.register_clean_cb(self.close) nla.encode(self) def close(self): if self.netns_fd is not None: os.close(self.netns_fd) class vflist(nla): nla_map = (('IFLA_VF_INFO_UNSPEC', 'none'), ('IFLA_VF_INFO', 'vfinfo')) class vfinfo(nla): prefix = 'IFLA_VF_' nla_map = (('IFLA_VF_UNSPEC', 'none'), ('IFLA_VF_MAC', 'vf_mac'), ('IFLA_VF_VLAN', 'vf_vlan'), ('IFLA_VF_TX_RATE', 'vf_tx_rate'), ('IFLA_VF_SPOOFCHK', 'vf_spoofchk'), ('IFLA_VF_LINK_STATE', 'vf_link_state'), ('IFLA_VF_RATE', 'vf_rate'), ('IFLA_VF_RSS_QUERY_EN', 'vf_rss_query_en'), ('IFLA_VF_STATS', 'vf_stats'), ('IFLA_VF_TRUST', 'vf_trust'), ('IFLA_VF_IB_NODE_GUID', 'vf_ib_node_guid'), ('IFLA_VF_IB_PORT_GUID', 'vf_ib_port_guid'), ('IFLA_VF_VLAN_LIST', 'vf_vlist')) class vf_ib_node_guid(nla): fields = (('vf', 'I'), ('ib_node_guid', '32B')) def decode(self): nla.decode(self) self['ib_node_guid'] = ':'.join([ '%02x' % x for x in self['ib_node_guid'][4:12][::-1]]) def encode(self): encoded_guid = self['ib_node_guid'].split(':')[::-1] self['ib_node_guid'] = ( [0] * 4 + [int(x, 16) for x in encoded_guid] + [0] * 20) nla.encode(self) class vf_ib_port_guid(nla): fields = (('vf', 'I'), ('ib_port_guid', '32B')) def decode(self): nla.decode(self) self['ib_port_guid'] = ':'.join([ '%02x' % x for x in self['ib_port_guid'][4:12][::-1]]) def encode(self): encoded_guid = self['ib_port_guid'].split(':')[::-1] self['ib_port_guid'] = ( [0] * 4 + [int(x, 16) for x in encoded_guid] + [0] * 20) nla.encode(self) class vf_mac(nla): fields = (('vf', 'I'), ('mac', '32B')) def decode(self): nla.decode(self) self['mac'] = ':'.join(['%02x' % x for x in self['mac'][:6]]) def encode(self): self['mac'] = ([int(x, 16) for x in self['mac'].split(':')] + [0] * 26) nla.encode(self) class vf_vlan(nla): fields = (('vf', 'I'), ('vlan', 'I'), ('qos', 'I')) class vf_tx_rate(nla): fields = (('vf', 'I'), ('tx_rate', 'I')) class vf_spoofchk(nla): fields = (('vf', 'I'), ('spoofchk', 'I')) class vf_link_state(nla): fields = (('vf', 'I'), ('link_state', 'I')) class vf_rate(nla): fields = (('vf', 'I'), ('min_tx_rate', 'I'), ('max_tx_rate', 'I')) class vf_rss_query_en(nla): fields = (('vf', 'I'), ('rss_query_en', 'I')) class vf_stats(nla): nla_map = (('IFLA_VF_STATS_RX_PACKETS', 'uint64'), ('IFLA_VF_STATS_TX_PACKETS', 'uint64'), ('IFLA_VF_STATS_RX_BYTES', 'uint64'), ('IFLA_VF_STATS_TX_BYTES', 'uint64'), ('IFLA_VF_STATS_BROADCAST', 'uint64'), ('IFLA_VF_STATS_MULTICAST', 'uint64'), ('IFLA_VF_STATS_PAD', 'uint64'), ('IFLA_VF_STATS_RX_DROPPED', 'uint64'), ('IFLA_VF_STATS_TX_DROPPED', 'uint64')) class vf_trust(nla): fields = (('vf', 'I'), ('trust', 'I')) class vf_vlist(nla): nla_map = (('IFLA_VF_VLAN_INFO_UNSPEC', 'none'), ('IFLA_VF_VLAN_INFO', 'ivvi')) class ivvi(nla): fields = (('vf', 'I'), ('vlan', 'I'), ('qos', 'I'), ('proto', '>H')) class wireless(iw_event): pass class state(nla): fields = (('value', 'B'), ) def encode(self): self['value'] = state_by_name[self.value] nla.encode(self) def decode(self): nla.decode(self) self.value = state_by_code[self['value']] class ifstats(nla): fields = [(i, 'I') for i in stats_names] class ifstats64(nla): fields = [(i, 'Q') for i in stats_names] class ifmap(nla): fields = (('mem_start', 
'Q'), ('mem_end', 'Q'), ('base_addr', 'Q'), ('irq', 'H'), ('dma', 'B'), ('port', 'B')) @staticmethod def protinfo(self, *argv, **kwarg): proto_map = {AF_BRIDGE: protinfo_bridge} return proto_map.get(self['family'], self.hex) class ifinfo(nla): prefix = 'IFLA_INFO_' nla_map = (('IFLA_INFO_UNSPEC', 'none'), ('IFLA_INFO_KIND', 'asciiz'), ('IFLA_INFO_DATA', 'info_data'), ('IFLA_INFO_XSTATS', 'hex'), ('IFLA_INFO_SLAVE_KIND', 'asciiz'), ('IFLA_INFO_SLAVE_DATA', 'info_slave_data')) @staticmethod def info_slave_data(self, *argv, **kwarg): ''' Return IFLA_INFO_SLAVE_DATA type based on IFLA_INFO_SLAVE_KIND. ''' kind = self.get_attr('IFLA_INFO_SLAVE_KIND') data_map = {'bridge': self.bridge_slave_data, 'bond': self.bond_slave_data} return data_map.get(kind, self.hex) class bridge_slave_data(protinfo_bridge): pass class bond_slave_data(nla): nla_map = (('IFLA_BOND_SLAVE_UNSPEC', 'none'), ('IFLA_BOND_SLAVE_STATE', 'uint8'), ('IFLA_BOND_SLAVE_MII_STATUS', 'uint8'), ('IFLA_BOND_SLAVE_LINK_FAILURE_COUNT', 'uint32'), ('IFLA_BOND_SLAVE_PERM_HWADDR', 'l2addr'), ('IFLA_BOND_SLAVE_QUEUE_ID', 'uint16'), ('IFLA_BOND_SLAVE_AD_AGGREGATOR_ID', 'uint16')) @staticmethod def info_data(self, *argv, **kwarg): ''' The function returns appropriate IFLA_INFO_DATA type according to IFLA_INFO_KIND info. Return 'hex' type for all unknown kind's and when the kind is not known. ''' kind = self.get_attr('IFLA_INFO_KIND') return self.data_map.get(kind, self.hex) class veth_data(nla): nla_map = (('VETH_INFO_UNSPEC', 'none'), ('VETH_INFO_PEER', 'info_peer')) @staticmethod def info_peer(self, *argv, **kwarg): return ifinfveth class ipip_data(iptnl_data): pass class sit_data(iptnl_data): nla_map = [(x[0].replace('IPIP', 'SIT'), x[1]) for x in iptnl_data.nla_map] class ip6tnl_data(nla): nla_map = (('IFLA_IP6TNL_UNSPEC', 'none'), ('IFLA_IP6TNL_LINK', 'uint32'), ('IFLA_IP6TNL_LOCAL', 'ip6addr'), ('IFLA_IP6TNL_REMOTE', 'ip6addr'), ('IFLA_IP6TNL_TTL', 'uint8'), ('IFLA_IP6TNL_TOS', 'uint8'), ('IFLA_IP6TNL_ENCAP_LIMIT', 'uint8'), ('IFLA_IP6TNL_FLOWINFO', 'be32'), ('IFLA_IP6TNL_FLAGS', 'be16'), ('IFLA_IP6TNL_PROTO', 'uint8'), ('IFLA_IP6TNL_PMTUDISC', 'uint8'), ('IFLA_IP6TNL_6RD_PREFIX', 'ip6addr'), ('IFLA_IP6TNL_6RD_RELAY_PREFIX', 'ip4addr'), ('IFLA_IP6TNL_6RD_PREFIXLEN', 'uint16'), ('IFLA_IP6TNL_6RD_RELAY_PREFIXLEN', 'uint16'), ('IFLA_IP6TNL_ENCAP_TYPE', 'uint16'), ('IFLA_IP6TNL_ENCAP_FLAGS', 'uint16'), ('IFLA_IP6TNL_ENCAP_SPORT', 'be16'), ('IFLA_IP6TNL_ENCAP_DPORT', 'be16'), ('IFLA_IP6TNL_COLLECT_METADATA', 'flag'), ('IFLA_IP6TNL_FWMARK', 'uint32')) class gre_data(nla): prefix = 'IFLA_' nla_map = (('IFLA_GRE_UNSPEC', 'none'), ('IFLA_GRE_LINK', 'uint32'), ('IFLA_GRE_IFLAGS', 'gre_flags'), ('IFLA_GRE_OFLAGS', 'gre_flags'), ('IFLA_GRE_IKEY', 'be32'), ('IFLA_GRE_OKEY', 'be32'), ('IFLA_GRE_LOCAL', 'ip4addr'), ('IFLA_GRE_REMOTE', 'ip4addr'), ('IFLA_GRE_TTL', 'uint8'), ('IFLA_GRE_TOS', 'uint8'), ('IFLA_GRE_PMTUDISC', 'uint8'), ('IFLA_GRE_ENCAP_LIMIT', 'uint8'), ('IFLA_GRE_FLOWINFO', 'be32'), ('IFLA_GRE_FLAGS', 'uint32'), ('IFLA_GRE_ENCAP_TYPE', 'uint16'), ('IFLA_GRE_ENCAP_FLAGS', 'uint16'), ('IFLA_GRE_ENCAP_SPORT', 'be16'), ('IFLA_GRE_ENCAP_DPORT', 'be16'), ('IFLA_GRE_COLLECT_METADATA', 'flag'), ('IFLA_GRE_IGNORE_DF', 'uint8'), ('IFLA_GRE_FWMARK', 'uint32')) class gre_flags(nla): fields = [('value', '>H')] def encode(self): # # for details see: url = 'https://github.com/svinota/pyroute2/issues/531' v = self.value for flag in GRE_VALUES: v &= ~flag if v != 0: log.warning('possibly incorrect GRE flags, ' 'see %s' % url) nla.encode(self) class 
ip6gre_data(nla): # Ostensibly the same as ip6gre_data except that local # and remote are ipv6 addrs. # As of Linux 4.8,IFLA_GRE_COLLECT_METADATA has not been # implemented for IPv6. # Linux uses the same enum names for v6 and v4 (in if_tunnel.h); # Here we name them IFLA_IP6GRE_xxx instead to avoid conflicts # with gre_data above. nla_map = (('IFLA_IP6GRE_UNSPEC', 'none'), ('IFLA_IP6GRE_LINK', 'uint32'), ('IFLA_IP6GRE_IFLAGS', 'uint16'), ('IFLA_IP6GRE_OFLAGS', 'uint16'), ('IFLA_IP6GRE_IKEY', 'be32'), ('IFLA_IP6GRE_OKEY', 'be32'), ('IFLA_IP6GRE_LOCAL', 'ip6addr'), ('IFLA_IP6GRE_REMOTE', 'ip6addr'), ('IFLA_IP6GRE_TTL', 'uint8'), ('IFLA_IP6GRE_TOS', 'uint8'), ('IFLA_IP6GRE_PMTUDISC', 'uint8'), ('IFLA_IP6GRE_ENCAP_LIMIT', 'uint8'), ('IFLA_IP6GRE_FLOWINFO', 'be32'), ('IFLA_IP6GRE_FLAGS', 'uint32'), ('IFLA_IP6GRE_ENCAP_TYPE', 'uint16'), ('IFLA_IP6GRE_ENCAP_FLAGS', 'uint16'), ('IFLA_IP6GRE_ENCAP_SPORT', 'be16'), ('IFLA_IP6GRE_ENCAP_DPORT', 'be16')) class macvlan_data(macvx_data): pass class macvtap_data(macvx_data): nla_map = [(x[0].replace('MACVLAN', 'MACVTAP'), x[1]) for x in macvx_data.nla_map] class bridge_data(nla): prefix = 'IFLA_' nla_map = (('IFLA_BR_UNSPEC', 'none'), ('IFLA_BR_FORWARD_DELAY', 'uint32'), ('IFLA_BR_HELLO_TIME', 'uint32'), ('IFLA_BR_MAX_AGE', 'uint32'), ('IFLA_BR_AGEING_TIME', 'uint32'), ('IFLA_BR_STP_STATE', 'uint32'), ('IFLA_BR_PRIORITY', 'uint16'), ('IFLA_BR_VLAN_FILTERING', 'uint8'), ('IFLA_BR_VLAN_PROTOCOL', 'be16'), ('IFLA_BR_GROUP_FWD_MASK', 'uint16'), ('IFLA_BR_ROOT_ID', 'br_id'), ('IFLA_BR_BRIDGE_ID', 'br_id'), ('IFLA_BR_ROOT_PORT', 'uint16'), ('IFLA_BR_ROOT_PATH_COST', 'uint32'), ('IFLA_BR_TOPOLOGY_CHANGE', 'uint8'), ('IFLA_BR_TOPOLOGY_CHANGE_DETECTED', 'uint8'), ('IFLA_BR_HELLO_TIMER', 'uint64'), ('IFLA_BR_TCN_TIMER', 'uint64'), ('IFLA_BR_TOPOLOGY_CHANGE_TIMER', 'uint64'), ('IFLA_BR_GC_TIMER', 'uint64'), ('IFLA_BR_GROUP_ADDR', 'l2addr'), ('IFLA_BR_FDB_FLUSH', 'flag'), ('IFLA_BR_MCAST_ROUTER', 'uint8'), ('IFLA_BR_MCAST_SNOOPING', 'uint8'), ('IFLA_BR_MCAST_QUERY_USE_IFADDR', 'uint8'), ('IFLA_BR_MCAST_QUERIER', 'uint8'), ('IFLA_BR_MCAST_HASH_ELASTICITY', 'uint32'), ('IFLA_BR_MCAST_HASH_MAX', 'uint32'), ('IFLA_BR_MCAST_LAST_MEMBER_CNT', 'uint32'), ('IFLA_BR_MCAST_STARTUP_QUERY_CNT', 'uint32'), ('IFLA_BR_MCAST_LAST_MEMBER_INTVL', 'uint64'), ('IFLA_BR_MCAST_MEMBERSHIP_INTVL', 'uint64'), ('IFLA_BR_MCAST_QUERIER_INTVL', 'uint64'), ('IFLA_BR_MCAST_QUERY_INTVL', 'uint64'), ('IFLA_BR_MCAST_QUERY_RESPONSE_INTVL', 'uint64'), ('IFLA_BR_MCAST_STARTUP_QUERY_INTVL', 'uint64'), ('IFLA_BR_NF_CALL_IPTABLES', 'uint8'), ('IFLA_BR_NF_CALL_IP6TABLES', 'uint8'), ('IFLA_BR_NF_CALL_ARPTABLES', 'uint8'), ('IFLA_BR_VLAN_DEFAULT_PVID', 'uint16'), ('IFLA_BR_PAD', 'uint64'), ('IFLA_BR_VLAN_STATS_ENABLED', 'uint8'), ('IFLA_BR_MCAST_STATS_ENABLED', 'uint8'), ('IFLA_BR_MCAST_IGMP_VERSION', 'uint8'), ('IFLA_BR_MCAST_MLD_VERSION', 'uint8')) class br_id(ifla_bridge_id): pass # IFLA_INFO_DATA plugin system prototype data_map = {'macvlan': macvlan_data, 'macvtap': macvtap_data, 'ipip': ipip_data, 'sit': sit_data, 'ip6tnl': ip6tnl_data, 'gre': gre_data, 'gretap': gre_data, 'ip6gre': ip6gre_data, 'ip6gretap': ip6gre_data, 'veth': veth_data, 'bridge': bridge_data} # expand supported interface types data_map.update(data_plugins) sql_extend = ((ifinfo, 'IFLA_LINKINFO'), ) @staticmethod def af_spec(self, *argv, **kwarg): specs = {0: self.af_spec_inet, AF_INET: self.af_spec_inet, AF_INET6: self.af_spec_inet, AF_BRIDGE: self.af_spec_bridge} return specs.get(self['family'], self.hex) class af_spec_bridge(nla): 
prefix = 'IFLA_BRIDGE_' # Bug-Url: https://github.com/svinota/pyroute2/issues/284 # resolve conflict with link()/flags # IFLA_BRIDGE_FLAGS is for compatibility, in nla dicts # IFLA_BRIDGE_VLAN_FLAGS overrides it nla_map = ((0, 'IFLA_BRIDGE_FLAGS', 'uint16'), (0, 'IFLA_BRIDGE_VLAN_FLAGS', 'vlan_flags'), (1, 'IFLA_BRIDGE_MODE', 'uint16'), (2, 'IFLA_BRIDGE_VLAN_INFO', 'vlan_info')) class vlan_flags(nla): fields = [('value', 'H')] def encode(self): # convert flags if isinstance(self['value'], basestring): self['value'] = BRIDGE_FLAGS_NAMES['BRIDGE_FLAGS_' + self['value'].upper()] nla.encode(self) class vlan_info(nla): fields = (('flags', 'H'), ('vid', 'H')) @staticmethod def flags2names(flags): ret = [] for flag in BRIDGE_VLAN_VALUES: if (flag & flags) == flag: ret.append(BRIDGE_VLAN_VALUES[flag]) return ret @staticmethod def names2flags(flags): ret = 0 for flag in flags: ret |= BRIDGE_VLAN_NAMES['BRIDGE_VLAN_INFO_' + flag.upper()] return ret def encode(self): # convert flags if isinstance(self['flags'], (set, tuple, list)): self['flags'] = self.names2flags(self['flags']) return super(nla, self).encode() class af_spec_inet(nla): nla_map = (('AF_UNSPEC', 'none'), ('AF_UNIX', 'hex'), ('AF_INET', 'inet'), ('AF_AX25', 'hex'), ('AF_IPX', 'hex'), ('AF_APPLETALK', 'hex'), ('AF_NETROM', 'hex'), ('AF_BRIDGE', 'hex'), ('AF_ATMPVC', 'hex'), ('AF_X25', 'hex'), ('AF_INET6', 'inet6')) class inet(nla): # ./include/linux/inetdevice.h: struct ipv4_devconf # ./include/uapi/linux/ip.h field_names = ('dummy', 'forwarding', 'mc_forwarding', 'proxy_arp', 'accept_redirects', 'secure_redirects', 'send_redirects', 'shared_media', 'rp_filter', 'accept_source_route', 'bootp_relay', 'log_martians', 'tag', 'arpfilter', 'medium_id', 'noxfrm', 'nopolicy', 'force_igmp_version', 'arp_announce', 'arp_ignore', 'promote_secondaries', 'arp_accept', 'arp_notify', 'accept_local', 'src_vmark', 'proxy_arp_pvlan', 'route_localnet', 'igmpv2_unsolicited_report_interval', 'igmpv3_unsolicited_report_interval') fields = [(i, 'I') for i in field_names] class inet6(nla): nla_map = (('IFLA_INET6_UNSPEC', 'none'), ('IFLA_INET6_FLAGS', 'uint32'), ('IFLA_INET6_CONF', 'ipv6_devconf'), ('IFLA_INET6_STATS', 'ipv6_stats'), ('IFLA_INET6_MCAST', 'hex'), ('IFLA_INET6_CACHEINFO', 'ipv6_cache_info'), ('IFLA_INET6_ICMP6STATS', 'icmp6_stats'), ('IFLA_INET6_TOKEN', 'ip6addr'), ('IFLA_INET6_ADDR_GEN_MODE', 'uint8')) class ipv6_devconf(nla): # ./include/uapi/linux/ipv6.h # DEVCONF_ field_names = ('forwarding', 'hop_limit', 'mtu', 'accept_ra', 'accept_redirects', 'autoconf', 'dad_transmits', 'router_solicitations', 'router_solicitation_interval', 'router_solicitation_delay', 'use_tempaddr', 'temp_valid_lft', 'temp_preferred_lft', 'regen_max_retry', 'max_desync_factor', 'max_addresses', 'force_mld_version', 'accept_ra_defrtr', 'accept_ra_pinfo', 'accept_ra_rtr_pref', 'router_probe_interval', 'accept_ra_rt_info_max_plen', 'proxy_ndp', 'optimistic_dad', 'accept_source_route', 'mc_forwarding', 'disable_ipv6', 'accept_dad', 'force_tllao', 'ndisc_notify') fields = [(i, 'I') for i in field_names] class ipv6_cache_info(nla): # ./include/uapi/linux/if_link.h: struct ifla_cacheinfo fields = (('max_reasm_len', 'I'), ('tstamp', 'I'), ('reachable_time', 'I'), ('retrans_time', 'I')) class ipv6_stats(nla): # ./include/uapi/linux/snmp.h field_names = ('num', 'inpkts', 'inoctets', 'indelivers', 'outforwdatagrams', 'outpkts', 'outoctets', 'inhdrerrors', 'intoobigerrors', 'innoroutes', 'inaddrerrors', 'inunknownprotos', 'intruncatedpkts', 'indiscards', 'outdiscards', 
'outnoroutes', 'reasmtimeout', 'reasmreqds', 'reasmoks', 'reasmfails', 'fragoks', 'fragfails', 'fragcreates', 'inmcastpkts', 'outmcastpkts', 'inbcastpkts', 'outbcastpkts', 'inmcastoctets', 'outmcastoctets', 'inbcastoctets', 'outbcastoctets', 'csumerrors', 'noectpkts', 'ect1pkts', 'ect0pkts', 'cepkts') fields = [(i, 'Q') for i in field_names] class icmp6_stats(nla): # ./include/uapi/linux/snmp.h field_names = ('num', 'inmsgs', 'inerrors', 'outmsgs', 'outerrors', 'csumerrors') fields = [(i, 'Q') for i in field_names] class ifinfmsg(ifinfbase, nlmsg): def decode(self): nlmsg.decode(self) if self['flags'] & 1: self['state'] = 'up' else: self['state'] = 'down' class ifinfveth(ifinfbase, nla): pass pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/compat.py0000644000175000017500000003556713610051400023113 0ustar peetpeet00000000000000import os import json import errno import select import struct import threading import subprocess from fcntl import ioctl from pyroute2 import config from pyroute2.common import map_enoent from pyroute2.netlink.rtnl import RTM_VALUES from pyroute2.netlink.rtnl.marshal import MarshalRtnl from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg from pyroute2.netlink.exceptions import NetlinkError from pyroute2.netlink.rtnl.riprsocket import RawIPRSocket # it's simpler to double constants here, than to change all the # module layout; but it is a subject of the future refactoring RTM_NEWLINK = 16 RTM_DELLINK = 17 # _BONDING_MASTERS = '/sys/class/net/bonding_masters' _BONDING_SLAVES = '/sys/class/net/%s/bonding/slaves' _BRIDGE_MASTER = '/sys/class/net/%s/brport/bridge/ifindex' _BONDING_MASTER = '/sys/class/net/%s/master/ifindex' IFNAMSIZ = 16 TUNDEV = '/dev/net/tun' if config.machine in ('i386', 'i686', 'x86_64', 's390x'): TUNSETIFF = 0x400454ca TUNSETPERSIST = 0x400454cb TUNSETOWNER = 0x400454cc TUNSETGROUP = 0x400454ce elif config.machine in ('ppc64', 'mips'): TUNSETIFF = 0x800454ca TUNSETPERSIST = 0x800454cb TUNSETOWNER = 0x800454cc TUNSETGROUP = 0x800454ce else: TUNSETIFF = None ## # # tuntap flags # IFT_TUN = 0x0001 IFT_TAP = 0x0002 IFT_NO_PI = 0x1000 IFT_ONE_QUEUE = 0x2000 IFT_VNET_HDR = 0x4000 IFT_TUN_EXCL = 0x8000 IFT_MULTI_QUEUE = 0x0100 IFT_ATTACH_QUEUE = 0x0200 IFT_DETACH_QUEUE = 0x0400 # read-only IFT_PERSIST = 0x0800 IFT_NOFILTER = 0x1000 def compat_fix_attrs(msg, nl): kind = None ifname = msg.get_attr('IFLA_IFNAME') # fix master if not nl.capabilities['provide_master']: master = compat_get_master(ifname) if master is not None: msg['attrs'].append(['IFLA_MASTER', master]) # fix linkinfo & kind li = msg.get_attr('IFLA_LINKINFO') if li is not None: kind = li.get_attr('IFLA_INFO_KIND') if kind is None: kind = get_interface_type(ifname) li['attrs'].append(['IFLA_INFO_KIND', kind]) elif 'attrs' in msg: kind = get_interface_type(ifname) msg['attrs'].append(['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', kind]]}]) else: return li = msg.get_attr('IFLA_LINKINFO') # fetch specific interface data if (kind in ('bridge', 'bond')) and \ [x for x in li['attrs'] if x[0] == 'IFLA_INFO_DATA']: if kind == 'bridge': t = '/sys/class/net/%s/bridge/%s' ifdata = ifinfmsg.ifinfo.bridge_data elif kind == 'bond': t = '/sys/class/net/%s/bonding/%s' ifdata = ifinfmsg.ifinfo.bond_data commands = [] for cmd, _ in ifdata.nla_map: try: with open(t % (ifname, ifdata.nla2name(cmd)), 'r') as f: value = f.read() if cmd == 'IFLA_BOND_MODE': value = value.split()[1] commands.append([cmd, int(value)]) except: pass if commands: li['attrs'].append(['IFLA_INFO_DATA', {'attrs': commands}]) def 
proxy_linkinfo(data, nl): marshal = MarshalRtnl() inbox = marshal.parse(data) data = b'' for msg in inbox: if msg['event'] == 'NLMSG_ERROR': data += msg.data continue # Sysfs operations can require root permissions, # but the script can be run under a normal user # Bug-Url: https://github.com/svinota/pyroute2/issues/113 try: compat_fix_attrs(msg, nl) except OSError: # We can safely ignore here any OSError. # In the worst case, we just return what we have got # from the kernel via netlink pass msg.reset() msg.encode() data += msg.data return {'verdict': 'forward', 'data': data} def proxy_setlink(imsg, nl): def get_interface(index): msg = nl.get_links(index)[0] try: kind = msg.get_attr('IFLA_LINKINFO').get_attr('IFLA_INFO_KIND') except AttributeError: kind = 'unknown' return {'ifname': msg.get_attr('IFLA_IFNAME'), 'master': msg.get_attr('IFLA_MASTER'), 'kind': kind} msg = ifinfmsg(imsg.data) msg.decode() forward = True kind = None infodata = None ifname = msg.get_attr('IFLA_IFNAME') or \ get_interface(msg['index'])['ifname'] linkinfo = msg.get_attr('IFLA_LINKINFO') if linkinfo: kind = linkinfo.get_attr('IFLA_INFO_KIND') infodata = linkinfo.get_attr('IFLA_INFO_DATA') if kind in ('bond', 'bridge') and infodata is not None: code = 0 # if kind == 'bond': func = compat_set_bond elif kind == 'bridge': func = compat_set_bridge # for (cmd, value) in infodata.get('attrs', []): cmd = infodata.nla2name(cmd) code = func(ifname, cmd, value) or code # if code: err = OSError() err.errno = code raise err # is it a port setup? master = msg.get_attr('IFLA_MASTER') if master is not None: if master == 0: # port delete # 1. get the current master iface = get_interface(msg['index']) master = get_interface(iface['master']) cmd = 'del' else: # port add # 1. get the master master = get_interface(master) cmd = 'add' # 2. manage the port forward_map = {'team': manage_team_port, 'bridge': compat_bridge_port, 'bond': compat_bond_port} if master['kind'] in forward_map: func = forward_map[master['kind']] forward = func(cmd, master['ifname'], ifname, nl) if forward is not None: return {'verdict': 'forward', 'data': imsg.data} def sync(f): ''' A decorator to wrap up external utility calls. A decorated function receives a netlink message as a parameter, and then: 1. Starts a monitoring thread 2. Performs the external call 3. Waits for a netlink event specified by `msg` 4. Joins the monitoring thread If the wrapped function raises an exception, the monitoring thread will be forced to stop via the control channel pipe. The exception will be then forwarded. 
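    A minimal usage sketch (the helper name below is hypothetical and only
    mirrors `compat_create_bridge()` further down; the decorator itself is
    the only thing taken from this module)::

        @sync
        def compat_example(msg):
            # perform the external call; the decorated function returns
            # only after the matching RTM_* event for the interface
            # named in `msg` is seen on a raw RTNL socket
            subprocess.check_call(['brctl', 'addbr',
                                   msg.get_attr('IFLA_IFNAME')])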
''' def monitor(event, ifname, cmd): with RawIPRSocket() as ipr: poll = select.poll() poll.register(ipr, select.POLLIN | select.POLLPRI) poll.register(cmd, select.POLLIN | select.POLLPRI) ipr.bind() while True: events = poll.poll() for (fd, event) in events: if fd == ipr.fileno(): msgs = ipr.get() for msg in msgs: if msg.get('event') == event and \ msg.get_attr('IFLA_IFNAME') == ifname: return else: return def decorated(msg): rcmd, cmd = os.pipe() t = threading.Thread(target=monitor, args=(RTM_VALUES[msg['header']['type']], msg.get_attr('IFLA_IFNAME'), rcmd)) t.start() ret = None try: ret = f(msg) except Exception: raise finally: os.write(cmd, b'q') t.join() os.close(rcmd) os.close(cmd) return ret return decorated def proxy_dellink(imsg, nl): orig_msg = ifinfmsg(imsg.data) orig_msg.decode() # get full interface description msg = nl.get_links(orig_msg['index'])[0] msg['header']['type'] = orig_msg['header']['type'] # get the interface kind kind = None li = msg.get_attr('IFLA_LINKINFO') if li is not None: kind = li.get_attr('IFLA_INFO_KIND') # team interfaces can be stopped by a normal RTM_DELLINK if kind == 'bond' and not nl.capabilities['create_bond']: return compat_del_bond(msg) elif kind == 'bridge' and not nl.capabilities['create_bridge']: return compat_del_bridge(msg) return {'verdict': 'forward', 'data': imsg.data} def proxy_newlink(imsg, nl): msg = ifinfmsg(imsg.data) msg.decode() kind = None # get the interface kind linkinfo = msg.get_attr('IFLA_LINKINFO') if linkinfo is not None: kind = [x[1] for x in linkinfo['attrs'] if x[0] == 'IFLA_INFO_KIND'] if kind: kind = kind[0] if kind == 'tuntap': return manage_tuntap(msg) elif kind == 'team': return manage_team(msg) elif kind == 'bond' and not nl.capabilities['create_bond']: return compat_create_bond(msg) elif kind == 'bridge' and not nl.capabilities['create_bridge']: return compat_create_bridge(msg) return {'verdict': 'forward', 'data': imsg.data} @map_enoent @sync def manage_team(msg): if msg['header']['type'] != RTM_NEWLINK: raise ValueError('wrong command type') config = {'device': msg.get_attr('IFLA_IFNAME'), 'runner': {'name': 'activebackup'}, 'link_watch': {'name': 'ethtool'}} with open(os.devnull, 'w') as fnull: subprocess.check_call(['teamd', '-d', '-n', '-c', json.dumps(config)], stdout=fnull, stderr=fnull) @map_enoent def manage_team_port(cmd, master, ifname, nl): with open(os.devnull, 'w') as fnull: subprocess.check_call(['teamdctl', master, 'port', 'remove' if cmd == 'del' else 'add', ifname], stdout=fnull, stderr=fnull) @sync def manage_tuntap(msg): if TUNSETIFF is None: raise NetlinkError(errno.EOPNOTSUPP, 'Arch not supported') if msg['header']['type'] != RTM_NEWLINK: raise NetlinkError(errno.EOPNOTSUPP, 'Unsupported event') ifru_flags = 0 linkinfo = msg.get_attr('IFLA_LINKINFO') infodata = linkinfo.get_attr('IFLA_INFO_DATA') flags = infodata.get_attr('IFTUN_IFR', None) if infodata.get_attr('IFTUN_MODE') == 'tun': ifru_flags |= IFT_TUN elif infodata.get_attr('IFTUN_MODE') == 'tap': ifru_flags |= IFT_TAP else: raise ValueError('invalid mode') if flags is not None: if flags['no_pi']: ifru_flags |= IFT_NO_PI if flags['one_queue']: ifru_flags |= IFT_ONE_QUEUE if flags['vnet_hdr']: ifru_flags |= IFT_VNET_HDR if flags['multi_queue']: ifru_flags |= IFT_MULTI_QUEUE ifr = msg.get_attr('IFLA_IFNAME') if len(ifr) > IFNAMSIZ: raise ValueError('ifname too long') ifr += (IFNAMSIZ - len(ifr)) * '\0' ifr = ifr.encode('ascii') ifr += struct.pack('H', ifru_flags) user = infodata.get_attr('IFTUN_UID') group = infodata.get_attr('IFTUN_GID') # 
fd = os.open(TUNDEV, os.O_RDWR) try: ioctl(fd, TUNSETIFF, ifr) if user is not None: ioctl(fd, TUNSETOWNER, user) if group is not None: ioctl(fd, TUNSETGROUP, group) ioctl(fd, TUNSETPERSIST, 1) except Exception: raise finally: os.close(fd) @sync def compat_create_bridge(msg): name = msg.get_attr('IFLA_IFNAME') with open(os.devnull, 'w') as fnull: subprocess.check_call(['brctl', 'addbr', name], stdout=fnull, stderr=fnull) @sync def compat_create_bond(msg): name = msg.get_attr('IFLA_IFNAME') with open(_BONDING_MASTERS, 'w') as f: f.write('+%s' % (name)) def compat_set_bond(name, cmd, value): # FIXME: join with bridge # FIXME: use internal IO, not bash t = 'echo %s >/sys/class/net/%s/bonding/%s' with open(os.devnull, 'w') as fnull: return subprocess.call(['bash', '-c', t % (value, name, cmd)], stdout=fnull, stderr=fnull) def compat_set_bridge(name, cmd, value): t = 'echo %s >/sys/class/net/%s/bridge/%s' with open(os.devnull, 'w') as fnull: return subprocess.call(['bash', '-c', t % (value, name, cmd)], stdout=fnull, stderr=fnull) @sync def compat_del_bridge(msg): name = msg.get_attr('IFLA_IFNAME') with open(os.devnull, 'w') as fnull: subprocess.check_call(['ip', 'link', 'set', 'dev', name, 'down']) subprocess.check_call(['brctl', 'delbr', name], stdout=fnull, stderr=fnull) @sync def compat_del_bond(msg): name = msg.get_attr('IFLA_IFNAME') subprocess.check_call(['ip', 'link', 'set', 'dev', name, 'down']) with open(_BONDING_MASTERS, 'w') as f: f.write('-%s' % (name)) def compat_bridge_port(cmd, master, port, nl): if nl.capabilities['create_bridge']: return True with open(os.devnull, 'w') as fnull: subprocess.check_call(['brctl', '%sif' % (cmd), master, port], stdout=fnull, stderr=fnull) def compat_bond_port(cmd, master, port, nl): if nl.capabilities['create_bond']: return True remap = {'add': '+', 'del': '-'} cmd = remap[cmd] with open(_BONDING_SLAVES % (master), 'w') as f: f.write('%s%s' % (cmd, port)) def compat_get_master(name): f = None for i in (_BRIDGE_MASTER, _BONDING_MASTER): try: try: f = open(i % (name)) except UnicodeEncodeError: # a special case with python3 on Ubuntu 14 f = open(i % (name.encode('utf-8'))) break except IOError: pass if f is not None: master = int(f.read()) f.close() return master def get_interface_type(name): ''' Utility function to get interface type. Unfortunately, we can not rely on RTNL or even ioctl(). RHEL doesn't support interface type in RTNL and doesn't provide extended (private) interface flags via ioctl(). Args: * name (str): interface name Returns: * False -- sysfs info unavailable * None -- type not known * str -- interface type: - 'bond' - 'bridge' ''' # FIXME: support all interface types? 
Right now it is # not needed try: ifattrs = os.listdir('/sys/class/net/%s/' % (name)) except OSError as e: if e.errno == 2: return 'unknown' else: raise if 'bonding' in ifattrs: return 'bond' elif 'bridge' in ifattrs: return 'bridge' else: return 'unknown' pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/plugins/0000755000175000017500000000000013621220110022715 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/plugins/__init__.py0000644000175000017500000000000013610051400025016 0ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/plugins/bond.py0000644000175000017500000000373213610051400024220 0ustar peetpeet00000000000000from pyroute2.netlink import nla class bond(nla): prefix = 'IFLA_' nla_map = (('IFLA_BOND_UNSPEC', 'none'), ('IFLA_BOND_MODE', 'uint8'), ('IFLA_BOND_ACTIVE_SLAVE', 'uint32'), ('IFLA_BOND_MIIMON', 'uint32'), ('IFLA_BOND_UPDELAY', 'uint32'), ('IFLA_BOND_DOWNDELAY', 'uint32'), ('IFLA_BOND_USE_CARRIER', 'uint8'), ('IFLA_BOND_ARP_INTERVAL', 'uint32'), ('IFLA_BOND_ARP_IP_TARGET', 'arp_ip_target'), ('IFLA_BOND_ARP_VALIDATE', 'uint32'), ('IFLA_BOND_ARP_ALL_TARGETS', 'uint32'), ('IFLA_BOND_PRIMARY', 'uint32'), ('IFLA_BOND_PRIMARY_RESELECT', 'uint8'), ('IFLA_BOND_FAIL_OVER_MAC', 'uint8'), ('IFLA_BOND_XMIT_HASH_POLICY', 'uint8'), ('IFLA_BOND_RESEND_IGMP', 'uint32'), ('IFLA_BOND_NUM_PEER_NOTIF', 'uint8'), ('IFLA_BOND_ALL_SLAVES_ACTIVE', 'uint8'), ('IFLA_BOND_MIN_LINKS', 'uint32'), ('IFLA_BOND_LP_INTERVAL', 'uint32'), ('IFLA_BOND_PACKETS_PER_SLAVE', 'uint32'), ('IFLA_BOND_AD_LACP_RATE', 'uint8'), ('IFLA_BOND_AD_SELECT', 'uint8'), ('IFLA_BOND_AD_INFO', 'ad_info'), ('IFLA_BOND_AD_ACTOR_SYS_PRIO', 'uint16'), ('IFLA_BOND_AD_USER_PORT_KEY', 'uint16'), ('IFLA_BOND_AD_ACTOR_SYSTEM', 'hex'), ('IFLA_BOND_TLB_DYNAMIC_LB', 'uint8')) class ad_info(nla): nla_map = (('IFLA_BOND_AD_INFO_UNSPEC', 'none'), ('IFLA_BOND_AD_INFO_AGGREGATOR', 'uint16'), ('IFLA_BOND_AD_INFO_NUM_PORTS', 'uint16'), ('IFLA_BOND_AD_INFO_ACTOR_KEY', 'uint16'), ('IFLA_BOND_AD_INFO_PARTNER_KEY', 'uint16'), ('IFLA_BOND_AD_INFO_PARTNER_MAC', 'l2addr')) class arp_ip_target(nla): fields = (('targets', '16I'), ) pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/plugins/gtp.py0000644000175000017500000000035113610051400024062 0ustar peetpeet00000000000000from pyroute2.netlink import nla class gtp(nla): nla_map = (('IFLA_GTP_UNSPEC', 'none'), ('IFLA_GTP_FD0', 'uint32'), ('IFLA_GTP_FD1', 'uint32'), ('IFLA_GTP_PDP_HASHSIZE', 'uint32')) pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/plugins/ipoib.py0000755000175000017500000000065713610051400024406 0ustar peetpeet00000000000000from pyroute2.netlink import nla from pyroute2.netlink import nlmsg_atoms class ipoib(nla): prefix = 'IFLA_IPOIB_' nla_map = (('IFLA_IPOIB_UNSPEC', 'none'), ('IFLA_IPOIB_PKEY', 'uint16'), ('IFLA_IPOIB_MODE', 'mode'), ('IFLA_IPOIB_UMCAST', 'uint16')) class mode(nlmsg_atoms.uint16): value_map = {0: 'datagram', 1: 'connected'} pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/plugins/ipvlan.py0000644000175000017500000000043513610051400024564 0ustar peetpeet00000000000000from pyroute2.netlink import nla class ipvlan(nla): nla_map = (('IFLA_IPVLAN_UNSPEC', 'none'), ('IFLA_IPVLAN_MODE', 'uint16')) modes = {0: 'IPVLAN_MODE_L2', 1: 'IPVLAN_MODE_L3', 'IPVLAN_MODE_L2': 0, 'IPVLAN_MODE_L3': 1} pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/plugins/tuntap.py0000644000175000017500000000112213610051400024600 0ustar peetpeet00000000000000from pyroute2.netlink import nla class tuntap(nla): ''' Fake data type ''' prefix = 'IFTUN_' nla_map = 
(('IFTUN_UNSPEC', 'none'), ('IFTUN_MODE', 'asciiz'), ('IFTUN_UID', 'uint32'), ('IFTUN_GID', 'uint32'), ('IFTUN_IFR', 'flags')) class flags(nla): fields = (('no_pi', 'B'), ('one_queue', 'B'), ('vnet_hdr', 'B'), ('tun_excl', 'B'), ('multi_queue', 'B'), ('persist', 'B'), ('nofilter', 'B')) pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/plugins/vlan.py0000644000175000017500000000143413610051400024233 0ustar peetpeet00000000000000from pyroute2.netlink import nla flags = {'reorder_hdr': 0x1, 'gvrp': 0x2, 'loose_binding': 0x4, 'mvrp': 0x8} class vlan(nla): prefix = 'IFLA_' nla_map = (('IFLA_VLAN_UNSPEC', 'none'), ('IFLA_VLAN_ID', 'uint16'), ('IFLA_VLAN_FLAGS', 'vlan_flags'), ('IFLA_VLAN_EGRESS_QOS', 'qos'), ('IFLA_VLAN_INGRESS_QOS', 'qos'), ('IFLA_VLAN_PROTOCOL', 'be16')) class vlan_flags(nla): fields = (('flags', 'I'), ('mask', 'I')) class qos(nla): nla_map = (('IFLA_VLAN_QOS_UNSPEC', 'none'), ('IFLA_VLAN_QOS_MAPPING', 'qos_mapping')) class qos_mapping(nla): fields = (('from', 'I'), ('to', 'I')) pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/plugins/vrf.py0000644000175000017500000000024113610051400024063 0ustar peetpeet00000000000000from pyroute2.netlink import nla class vrf(nla): prefix = 'IFLA_' nla_map = (('IFLA_VRF_UNSPEC', 'none'), ('IFLA_VRF_TABLE', 'uint32')) pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/plugins/vti.py0000644000175000017500000000052113610051400024071 0ustar peetpeet00000000000000from pyroute2.netlink import nla class vti(nla): prefix = 'IFLA_' nla_map = (('IFLA_VTI_UNSPEC', 'none'), ('IFLA_VTI_LINK', 'uint32'), ('IFLA_VTI_IKEY', 'be32'), ('IFLA_VTI_OKEY', 'be32'), ('IFLA_VTI_LOCAL', 'ip4addr'), ('IFLA_VTI_REMOTE', 'ip4addr')) pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/plugins/vti6.py0000644000175000017500000000060013610051400024155 0ustar peetpeet00000000000000from pyroute2.netlink import nla class vti6(nla): prefix = 'IFLA_' nla_map = (('IFLA_VTI_UNSPEC', 'none'), ('IFLA_VTI_LINK', 'uint32'), ('IFLA_VTI_IKEY', 'be32'), ('IFLA_VTI_OKEY', 'be32'), ('IFLA_VTI_LOCAL', 'ip6addr'), ('IFLA_VTI_REMOTE', 'ip6addr'), ('IFLA_VTI_FWMARK', 'uint32')) pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/plugins/vxlan.py0000644000175000017500000000312713610051400024424 0ustar peetpeet00000000000000from pyroute2.netlink import nla class vxlan(nla): prefix = 'IFLA_' nla_map = (('IFLA_VXLAN_UNSPEC', 'none'), ('IFLA_VXLAN_ID', 'uint32'), ('IFLA_VXLAN_GROUP', 'ip4addr'), ('IFLA_VXLAN_LINK', 'uint32'), ('IFLA_VXLAN_LOCAL', 'ip4addr'), ('IFLA_VXLAN_TTL', 'uint8'), ('IFLA_VXLAN_TOS', 'uint8'), ('IFLA_VXLAN_LEARNING', 'uint8'), ('IFLA_VXLAN_AGEING', 'uint32'), ('IFLA_VXLAN_LIMIT', 'uint32'), ('IFLA_VXLAN_PORT_RANGE', 'port_range'), ('IFLA_VXLAN_PROXY', 'uint8'), ('IFLA_VXLAN_RSC', 'uint8'), ('IFLA_VXLAN_L2MISS', 'uint8'), ('IFLA_VXLAN_L3MISS', 'uint8'), ('IFLA_VXLAN_PORT', 'be16'), ('IFLA_VXLAN_GROUP6', 'ip6addr'), ('IFLA_VXLAN_LOCAL6', 'ip6addr'), ('IFLA_VXLAN_UDP_CSUM', 'uint8'), ('IFLA_VXLAN_UDP_ZERO_CSUM6_TX', 'uint8'), ('IFLA_VXLAN_UDP_ZERO_CSUM6_RX', 'uint8'), ('IFLA_VXLAN_REMCSUM_TX', 'uint8'), ('IFLA_VXLAN_REMCSUM_RX', 'uint8'), ('IFLA_VXLAN_GBP', 'flag'), ('IFLA_VXLAN_REMCSUM_NOPARTIAL', 'flag'), ('IFLA_VXLAN_COLLECT_METADATA', 'uint8'), ('IFLA_VXLAN_LABEL', 'uint32'), ('IFLA_VXLAN_GPE', 'flag'), ('IFLA_VXLAN_TTL_INHERIT', 'flag'), ('IFLA_VXLAN_DF', 'uint8')) class port_range(nla): fields = (('low', '>H'), ('high', '>H')) pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/plugins/xfrm.py0000644000175000017500000000027413610051400024250 0ustar peetpeet00000000000000from pyroute2.netlink import nla 
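# Usage sketch (an illustration only, not part of this module): these
# IFLA_INFO_DATA plugins let kind-specific keyword arguments passed to
# IPRoute.link() be translated into the corresponding NLAs, e.g. for the
# vxlan plugin above:
#
#     from pyroute2 import IPRoute
#     ip = IPRoute()
#     idx = ip.link_lookup(ifname='eth0')[0]
#     ip.link('add', ifname='vx100', kind='vxlan',
#             vxlan_link=idx, vxlan_id=100, vxlan_group='239.1.1.1')
#
# for the xfrm plugin below, kind='xfrm' with xfrm_link / xfrm_if_id
# keywords would map to IFLA_XFRM_LINK / IFLA_XFRM_IF_ID in the same way.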
class xfrm(nla): nla_map = (('IFLA_XFRM_UNSPEC', 'none'), ('IFLA_XFRM_LINK', 'uint32'), ('IFLA_XFRM_IF_ID', 'uint32')) pyroute2-0.5.9/pyroute2/netlink/rtnl/ifinfmsg/proxy.py0000644000175000017500000001621113610051400022772 0ustar peetpeet00000000000000import os import json import errno import select import struct import threading import subprocess from fcntl import ioctl from pyroute2 import config from pyroute2.common import map_enoent from pyroute2.netlink.rtnl.ifinfmsg import IFT_TUN from pyroute2.netlink.rtnl.ifinfmsg import IFT_TAP from pyroute2.netlink.rtnl.ifinfmsg import IFT_NO_PI from pyroute2.netlink.rtnl.ifinfmsg import IFT_ONE_QUEUE from pyroute2.netlink.rtnl.ifinfmsg import IFT_VNET_HDR from pyroute2.netlink.rtnl.ifinfmsg import IFT_MULTI_QUEUE from pyroute2.netlink.rtnl.ifinfmsg import RTM_NEWLINK from pyroute2.netlink.rtnl import RTM_VALUES from pyroute2.netlink.rtnl.riprsocket import RawIPRSocket from pyroute2.netlink.exceptions import NetlinkError _BONDING_MASTERS = '/sys/class/net/bonding_masters' _BONDING_SLAVES = '/sys/class/net/%s/bonding/slaves' _BRIDGE_MASTER = '/sys/class/net/%s/brport/bridge/ifindex' _BONDING_MASTER = '/sys/class/net/%s/master/ifindex' IFNAMSIZ = 16 TUNDEV = '/dev/net/tun' PLATFORMS = ('i386', 'i686', 'x86_64', 'armv6l', 'armv7l', 's390x', 'aarch64') if config.machine in PLATFORMS: TUNSETIFF = 0x400454ca TUNSETPERSIST = 0x400454cb TUNSETOWNER = 0x400454cc TUNSETGROUP = 0x400454ce elif config.machine in ('ppc64', 'mips'): TUNSETIFF = 0x800454ca TUNSETPERSIST = 0x800454cb TUNSETOWNER = 0x800454cc TUNSETGROUP = 0x800454ce else: TUNSETIFF = None def sync(f): ''' A decorator to wrap up external utility calls. A decorated function receives a netlink message as a parameter, and then: 1. Starts a monitoring thread 2. Performs the external call 3. Waits for a netlink event specified by `msg` 4. Joins the monitoring thread If the wrapped function raises an exception, the monitoring thread will be forced to stop via the control channel pipe. The exception will be then forwarded. ''' def monitor(event, ifname, cmd): with RawIPRSocket() as ipr: poll = select.poll() poll.register(ipr, select.POLLIN | select.POLLPRI) poll.register(cmd, select.POLLIN | select.POLLPRI) ipr.bind() while True: events = poll.poll() for (fd, event) in events: if fd == ipr.fileno(): msgs = ipr.get() for msg in msgs: if msg.get('event') == event and \ msg.get_attr('IFLA_IFNAME') == ifname: return else: return def decorated(msg): rcmd, cmd = os.pipe() t = threading.Thread(target=monitor, args=(RTM_VALUES[msg['header']['type']], msg.get_attr('IFLA_IFNAME'), rcmd)) t.start() ret = None try: ret = f(msg) except Exception: raise finally: os.write(cmd, b'q') t.join() os.close(rcmd) os.close(cmd) return ret return decorated def proxy_setlink(msg, nl): def get_interface(index): msg = nl.get_links(index)[0] try: kind = msg.get_attr('IFLA_LINKINFO').get_attr('IFLA_INFO_KIND') except AttributeError: kind = 'unknown' return {'ifname': msg.get_attr('IFLA_IFNAME'), 'master': msg.get_attr('IFLA_MASTER'), 'kind': kind} forward = True # is it a port setup? master = msg.get_attr('IFLA_MASTER') if master is not None: if master == 0: # port delete # 1. get the current master iface = get_interface(msg['index']) master = get_interface(iface['master']) cmd = 'del' else: # port add # 1. get the master master = get_interface(master) cmd = 'add' ifname = msg.get_attr('IFLA_IFNAME') or \ get_interface(msg['index'])['ifname'] # 2. 
manage the port forward_map = {'team': manage_team_port} if master['kind'] in forward_map: func = forward_map[master['kind']] forward = func(cmd, master['ifname'], ifname, nl) if forward is not None: return {'verdict': 'forward', 'data': msg.data} def proxy_newlink(msg, nl): kind = None # get the interface kind linkinfo = msg.get_attr('IFLA_LINKINFO') if linkinfo is not None: kind = [x[1] for x in linkinfo['attrs'] if x[0] == 'IFLA_INFO_KIND'] if kind: kind = kind[0] if kind == 'tuntap': return manage_tuntap(msg) elif kind == 'team': return manage_team(msg) return {'verdict': 'forward', 'data': msg.data} @map_enoent @sync def manage_team(msg): if msg['header']['type'] != RTM_NEWLINK: raise ValueError('wrong command type') config = {'device': msg.get_attr('IFLA_IFNAME'), 'runner': {'name': 'activebackup'}, 'link_watch': {'name': 'ethtool'}} with open(os.devnull, 'w') as fnull: subprocess.check_call(['teamd', '-d', '-n', '-c', json.dumps(config)], stdout=fnull, stderr=fnull) @map_enoent def manage_team_port(cmd, master, ifname, nl): with open(os.devnull, 'w') as fnull: subprocess.check_call(['teamdctl', master, 'port', 'remove' if cmd == 'del' else 'add', ifname], stdout=fnull, stderr=fnull) @sync def manage_tuntap(msg): if TUNSETIFF is None: raise NetlinkError(errno.EOPNOTSUPP, 'Arch not supported') if msg['header']['type'] != RTM_NEWLINK: raise NetlinkError(errno.EOPNOTSUPP, 'Unsupported event') ifru_flags = 0 linkinfo = msg.get_attr('IFLA_LINKINFO') infodata = linkinfo.get_attr('IFLA_INFO_DATA') flags = infodata.get_attr('IFTUN_IFR', None) if infodata.get_attr('IFTUN_MODE') == 'tun': ifru_flags |= IFT_TUN elif infodata.get_attr('IFTUN_MODE') == 'tap': ifru_flags |= IFT_TAP else: raise ValueError('invalid mode') if flags is not None: if flags['no_pi']: ifru_flags |= IFT_NO_PI if flags['one_queue']: ifru_flags |= IFT_ONE_QUEUE if flags['vnet_hdr']: ifru_flags |= IFT_VNET_HDR if flags['multi_queue']: ifru_flags |= IFT_MULTI_QUEUE ifr = msg.get_attr('IFLA_IFNAME') if len(ifr) > IFNAMSIZ: raise ValueError('ifname too long') ifr += (IFNAMSIZ - len(ifr)) * '\0' ifr = ifr.encode('ascii') ifr += struct.pack('H', ifru_flags) user = infodata.get_attr('IFTUN_UID') group = infodata.get_attr('IFTUN_GID') # fd = os.open(TUNDEV, os.O_RDWR) try: ioctl(fd, TUNSETIFF, ifr) if user is not None: ioctl(fd, TUNSETOWNER, user) if group is not None: ioctl(fd, TUNSETGROUP, group) ioctl(fd, TUNSETPERSIST, 1) except Exception: raise finally: os.close(fd) pyroute2-0.5.9/pyroute2/netlink/rtnl/iprsocket.py0000644000175000017500000001670413610051400022021 0ustar peetpeet00000000000000import sys import errno import types from pyroute2 import config from pyroute2.common import Namespace from pyroute2.common import AddrPool from pyroute2.common import DEFAULT_RCVBUF from pyroute2.proxy import NetlinkProxy from pyroute2.netlink import NETLINK_ROUTE from pyroute2.netlink.nlsocket import NetlinkSocket from pyroute2.netlink.nlsocket import BatchSocket from pyroute2.netlink import rtnl from pyroute2.netlink.rtnl.marshal import MarshalRtnl if sys.platform.startswith('linux'): if config.kernel < [3, 3, 0]: from pyroute2.netlink.rtnl.ifinfmsg.compat import proxy_newlink from pyroute2.netlink.rtnl.ifinfmsg.compat import proxy_setlink from pyroute2.netlink.rtnl.ifinfmsg.compat import proxy_dellink from pyroute2.netlink.rtnl.ifinfmsg.compat import proxy_linkinfo else: from pyroute2.netlink.rtnl.ifinfmsg.proxy import proxy_newlink from pyroute2.netlink.rtnl.ifinfmsg.proxy import proxy_setlink class IPRSocketMixin(object): def 
__init__(self, *argv, **kwarg): if 'family' in kwarg: kwarg.pop('family') super(IPRSocketMixin, self).__init__(NETLINK_ROUTE, *argv[1:], **kwarg) self.marshal = MarshalRtnl() self._s_channel = None if sys.platform.startswith('linux'): self._gate = self._gate_linux self.sendto_gate = self._gate_linux send_ns = Namespace(self, {'addr_pool': AddrPool(0x10000, 0x1ffff), 'monitor': False}) self._sproxy = NetlinkProxy(policy='return', nl=send_ns) self._sproxy.pmap = {rtnl.RTM_NEWLINK: proxy_newlink, rtnl.RTM_SETLINK: proxy_setlink} if config.kernel < [3, 3, 0]: self._recv_ns = Namespace(self, {'addr_pool': AddrPool(0x20000, 0x2ffff), 'monitor': False}) self._sproxy.pmap[rtnl.RTM_DELLINK] = proxy_dellink # inject proxy hooks into recv() and... self.__recv = self._recv self._recv = self._p_recv # ... recv_into() self._recv_ft = self.recv_ft self.recv_ft = self._p_recv_ft def bind(self, groups=rtnl.RTMGRP_DEFAULTS, **kwarg): super(IPRSocketMixin, self).bind(groups, **kwarg) def _gate_linux(self, msg, addr): msg.reset() msg.encode() ret = self._sproxy.handle(msg) if ret is not None: if ret['verdict'] == 'forward': return self._sendto(ret['data'], addr) elif ret['verdict'] in ('return', 'error'): if self._s_channel is not None: return self._s_channel.send(ret['data']) else: msgs = self.marshal.parse(ret['data']) for msg in msgs: seq = msg['header']['sequence_number'] if seq in self.backlog: self.backlog[seq].append(msg) else: self.backlog[seq] = [msg] return len(ret['data']) else: ValueError('Incorrect verdict') return self._sendto(msg.data, addr) def _p_recv_ft(self, bufsize, flags=0): data = self._recv_ft(bufsize, flags) ret = proxy_linkinfo(data, self._recv_ns) if ret is not None: if ret['verdict'] in ('forward', 'error'): return ret['data'] else: ValueError('Incorrect verdict') return data def _p_recv(self, bufsize, flags=0): data = self.__recv(bufsize, flags) ret = proxy_linkinfo(data, self._recv_ns) if ret is not None: if ret['verdict'] in ('forward', 'error'): return ret['data'] else: ValueError('Incorrect verdict') return data class IPBatchSocket(IPRSocketMixin, BatchSocket): pass class IPRSocket(IPRSocketMixin, NetlinkSocket): ''' The simplest class, that connects together the netlink parser and a generic Python socket implementation. Provides method get() to receive the next message from netlink socket and parse it. It is just simple socket-like class, it implements no buffering or like that. It spawns no additional threads, leaving this up to developers. Please note, that netlink is an asynchronous protocol with non-guaranteed delivery. You should be fast enough to get all the messages in time. If the message flow rate is higher than the speed you parse them with, exceeding messages will be dropped. 
*Usage* Threadless RT netlink monitoring with blocking I/O calls: >>> from pyroute2 import IPRSocket >>> from pprint import pprint >>> s = IPRSocket() >>> s.bind() >>> pprint(s.get()) [{'attrs': [('RTA_TABLE', 254), ('RTA_DST', '2a00:1450:4009:808::1002'), ('RTA_GATEWAY', 'fe80:52:0:2282::1fe'), ('RTA_OIF', 2), ('RTA_PRIORITY', 0), ('RTA_CACHEINFO', {'rta_clntref': 0, 'rta_error': 0, 'rta_expires': 0, 'rta_id': 0, 'rta_lastuse': 5926, 'rta_ts': 0, 'rta_tsage': 0, 'rta_used': 1})], 'dst_len': 128, 'event': 'RTM_DELROUTE', 'family': 10, 'flags': 512, 'header': {'error': None, 'flags': 0, 'length': 128, 'pid': 0, 'sequence_number': 0, 'type': 25}, 'proto': 9, 'scope': 0, 'src_len': 0, 'table': 254, 'tos': 0, 'type': 1}] >>> ''' _brd_socket = None def bind(self, *argv, **kwarg): if kwarg.pop('clone_socket', False): self._brd_socket = self.clone() def get(self, bufsize=DEFAULT_RCVBUF, msg_seq=0, terminate=None, callback=None): if msg_seq == 0: return self._brd_socket.get(bufsize, msg_seq, terminate, callback) else: return super(IPRSocket, self).get(bufsize, msg_seq, terminate, callback) def close(self, code=errno.ECONNRESET): with self.sys_lock: self._brd_socket.close() return super(IPRSocket, self).close(code=code) self.get = types.MethodType(get, self) self.close = types.MethodType(close, self) kwarg['recursive'] = True return self._brd_socket.bind(*argv, **kwarg) else: return super(IPRSocket, self).bind(*argv, **kwarg) pyroute2-0.5.9/pyroute2/netlink/rtnl/iw_event.py0000644000175000017500000000730013610051400021626 0ustar peetpeet00000000000000from pyroute2.netlink import nla class iw_event(nla): nla_map = ((0xB00, 'SIOCSIWCOMMIT', 'hex'), (0xB01, 'SIOCGIWNAME', 'hex'), # Basic operations (0xB02, 'SIOCSIWNWID', 'hex'), (0xB03, 'SIOCGIWNWID', 'hex'), (0xB04, 'SIOCSIWFREQ', 'hex'), (0xB05, 'SIOCGIWFREQ', 'hex'), (0xB06, 'SIOCSIWMODE', 'hex'), (0xB07, 'SIOCGIWMODE', 'hex'), (0xB08, 'SIOCSIWSENS', 'hex'), (0xB09, 'SIOCGIWSENS', 'hex'), # Informative stuff (0xB0A, 'SIOCSIWRANGE', 'hex'), (0xB0B, 'SIOCGIWRANGE', 'hex'), (0xB0C, 'SIOCSIWPRIV', 'hex'), (0xB0D, 'SIOCGIWPRIV', 'hex'), (0xB0E, 'SIOCSIWSTATS', 'hex'), (0xB0F, 'SIOCGIWSTATS', 'hex'), # Spy support (statistics per MAC address - # used for Mobile IP support) (0xB10, 'SIOCSIWSPY', 'hex'), (0xB11, 'SIOCGIWSPY', 'hex'), (0xB12, 'SIOCSIWTHRSPY', 'hex'), (0xB13, 'SIOCGIWTHRSPY', 'hex'), # Access Point manipulation (0xB14, 'SIOCSIWAP', 'hex'), (0xB15, 'SIOCGIWAP', 'hex'), (0xB17, 'SIOCGIWAPLIST', 'hex'), (0xB18, 'SIOCSIWSCAN', 'hex'), (0xB19, 'SIOCGIWSCAN', 'hex'), # 802.11 specific support (0xB1A, 'SIOCSIWESSID', 'hex'), (0xB1B, 'SIOCGIWESSID', 'hex'), (0xB1C, 'SIOCSIWNICKN', 'hex'), (0xB1D, 'SIOCGIWNICKN', 'hex'), # Other parameters useful in 802.11 and # some other devices (0xB20, 'SIOCSIWRATE', 'hex'), (0xB21, 'SIOCGIWRATE', 'hex'), (0xB22, 'SIOCSIWRTS', 'hex'), (0xB23, 'SIOCGIWRTS', 'hex'), (0xB24, 'SIOCSIWFRAG', 'hex'), (0xB25, 'SIOCGIWFRAG', 'hex'), (0xB26, 'SIOCSIWTXPOW', 'hex'), (0xB27, 'SIOCGIWTXPOW', 'hex'), (0xB28, 'SIOCSIWRETRY', 'hex'), (0xB29, 'SIOCGIWRETRY', 'hex'), # Encoding stuff (scrambling, hardware security, WEP...) (0xB2A, 'SIOCSIWENCODE', 'hex'), (0xB2B, 'SIOCGIWENCODE', 'hex'), # Power saving stuff (power management, unicast # and multicast) (0xB2C, 'SIOCSIWPOWER', 'hex'), (0xB2D, 'SIOCGIWPOWER', 'hex'), # WPA : Generic IEEE 802.11 informatiom element # (e.g., for WPA/RSN/WMM). 
(0xB30, 'SIOCSIWGENIE', 'hex'), (0xB31, 'SIOCGIWGENIE', 'hex'), # WPA : IEEE 802.11 MLME requests (0xB16, 'SIOCSIWMLME', 'hex'), # WPA : Authentication mode parameters (0xB32, 'SIOCSIWAUTH', 'hex'), (0xB33, 'SIOCGIWAUTH', 'hex'), # WPA : Extended version of encoding configuration (0xB34, 'SIOCSIWENCODEEXT', 'hex'), (0xB35, 'SIOCGIWENCODEEXT', 'hex'), # WPA2 : PMKSA cache management (0xB36, 'SIOCSIWPMKSA', 'hex'), # Events s.str. (0xC00, 'IWEVTXDROP', 'hex'), (0xC01, 'IWEVQUAL', 'hex'), (0xC02, 'IWEVCUSTOM', 'hex'), (0xC03, 'IWEVREGISTERED', 'hex'), (0xC04, 'IWEVEXPIRED', 'hex'), (0xC05, 'IWEVGENIE', 'hex'), (0xC06, 'IWEVMICHAELMICFAILURE', 'hex'), (0xC07, 'IWEVASSOCREQIE', 'hex'), (0xC08, 'IWEVASSOCRESPIE', 'hex'), (0xC09, 'IWEVPMKIDCAND', 'hex')) pyroute2-0.5.9/pyroute2/netlink/rtnl/marshal.py0000644000175000017500000000371713610051400021445 0ustar peetpeet00000000000000from pyroute2.netlink import rtnl from pyroute2.netlink.nlsocket import Marshal from pyroute2.netlink.rtnl.tcmsg import tcmsg from pyroute2.netlink.rtnl.rtmsg import rtmsg from pyroute2.netlink.rtnl.ndmsg import ndmsg from pyroute2.netlink.rtnl.ndtmsg import ndtmsg from pyroute2.netlink.rtnl.nsidmsg import nsidmsg from pyroute2.netlink.rtnl.fibmsg import fibmsg from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg from pyroute2.netlink.rtnl.ifaddrmsg import ifaddrmsg class MarshalRtnl(Marshal): msg_map = {rtnl.RTM_NEWLINK: ifinfmsg, rtnl.RTM_DELLINK: ifinfmsg, rtnl.RTM_GETLINK: ifinfmsg, rtnl.RTM_SETLINK: ifinfmsg, rtnl.RTM_NEWADDR: ifaddrmsg, rtnl.RTM_DELADDR: ifaddrmsg, rtnl.RTM_GETADDR: ifaddrmsg, rtnl.RTM_NEWROUTE: rtmsg, rtnl.RTM_DELROUTE: rtmsg, rtnl.RTM_GETROUTE: rtmsg, rtnl.RTM_NEWRULE: fibmsg, rtnl.RTM_DELRULE: fibmsg, rtnl.RTM_GETRULE: fibmsg, rtnl.RTM_NEWNEIGH: ndmsg, rtnl.RTM_DELNEIGH: ndmsg, rtnl.RTM_GETNEIGH: ndmsg, rtnl.RTM_NEWQDISC: tcmsg, rtnl.RTM_DELQDISC: tcmsg, rtnl.RTM_GETQDISC: tcmsg, rtnl.RTM_NEWTCLASS: tcmsg, rtnl.RTM_DELTCLASS: tcmsg, rtnl.RTM_GETTCLASS: tcmsg, rtnl.RTM_NEWTFILTER: tcmsg, rtnl.RTM_DELTFILTER: tcmsg, rtnl.RTM_GETTFILTER: tcmsg, rtnl.RTM_NEWNEIGHTBL: ndtmsg, rtnl.RTM_GETNEIGHTBL: ndtmsg, rtnl.RTM_SETNEIGHTBL: ndtmsg, rtnl.RTM_NEWNSID: nsidmsg, rtnl.RTM_DELNSID: nsidmsg, rtnl.RTM_GETNSID: nsidmsg} def fix_message(self, msg): # FIXME: pls do something with it try: msg['event'] = rtnl.RTM_VALUES[msg['header']['type']] except: pass pyroute2-0.5.9/pyroute2/netlink/rtnl/ndmsg.py0000644000175000017500000000516713610051400021127 0ustar peetpeet00000000000000from pyroute2.common import map_namespace from pyroute2.netlink import nlmsg from pyroute2.netlink import nla # neighbor cache entry flags NTF_USE = 0x01 NTF_SELF = 0x02 NTF_MASTER = 0x04 NTF_PROXY = 0x08 NTF_EXT_LEARNED = 0x10 NTF_ROUTER = 0x80 # neighbor cache entry states NUD_INCOMPLETE = 0x01 NUD_REACHABLE = 0x02 NUD_STALE = 0x04 NUD_DELAY = 0x08 NUD_PROBE = 0x10 NUD_FAILED = 0x20 # dummy states NUD_NOARP = 0x40 NUD_PERMANENT = 0x80 NUD_NONE = 0x00 (NTF_NAMES, NTF_VALUES) = map_namespace('NTF_', globals()) (NUD_NAMES, NUD_VALUES) = map_namespace('NUD_', globals()) flags = dict([(x[0][4:].lower(), x[1]) for x in NTF_NAMES.items()]) states = dict([(x[0][4:].lower(), x[1]) for x in NUD_NAMES.items()]) def states_a2n(s): # parse state string ss = s.split(',') ret = 0 for state in ss: state = state.upper() if not state.startswith('NUD_'): state = 'NUD_' + state ret |= NUD_NAMES[state] return ret class ndmsg(nlmsg): ''' ARP cache update message C structure:: struct ndmsg { unsigned char ndm_family; int ndm_ifindex; /* Interface index */ __u16 
ndm_state; /* State */ __u8 ndm_flags; /* Flags */ __u8 ndm_type; }; Cache info structure:: struct nda_cacheinfo { __u32 ndm_confirmed; __u32 ndm_used; __u32 ndm_updated; __u32 ndm_refcnt; }; ''' __slots__ = () prefix = 'NDA_' sql_constraints = {'NDA_LLADDR': "NOT NULL DEFAULT ''"} fields = (('family', 'B'), ('__pad', '3x'), ('ifindex', 'i'), ('state', 'H'), ('flags', 'B'), ('ndm_type', 'B')) # Please note, that nla_map creates implicit # enumeration. In this case it will be: # # NDA_UNSPEC = 0 # NDA_DST = 1 # NDA_LLADDR = 2 # NDA_CACHEINFO = 3 # NDA_PROBES = 4 # ... # nla_map = (('NDA_UNSPEC', 'none'), ('NDA_DST', 'ipaddr'), ('NDA_LLADDR', 'l2addr'), ('NDA_CACHEINFO', 'cacheinfo'), ('NDA_PROBES', 'uint32'), ('NDA_VLAN', 'uint16'), ('NDA_PORT', 'be16'), ('NDA_VNI', 'uint32'), ('NDA_IFINDEX', 'uint32'), ('NDA_MASTER', 'uint32')) class cacheinfo(nla): __slots__ = () fields = (('ndm_confirmed', 'I'), ('ndm_used', 'I'), ('ndm_updated', 'I'), ('ndm_refcnt', 'I')) pyroute2-0.5.9/pyroute2/netlink/rtnl/ndtmsg.py0000644000175000017500000000456513610051400021314 0ustar peetpeet00000000000000 from pyroute2.netlink import nlmsg from pyroute2.netlink import nla class ndtmsg(nlmsg): ''' Neighbour table message ''' __slots__ = () fields = (('family', 'B'), ('__pad', '3x')) nla_map = (('NDTA_UNSPEC', 'none'), ('NDTA_NAME', 'asciiz'), ('NDTA_THRESH1', 'uint32'), ('NDTA_THRESH2', 'uint32'), ('NDTA_THRESH3', 'uint32'), ('NDTA_CONFIG', 'config'), ('NDTA_PARMS', 'parms'), ('NDTA_STATS', 'stats'), ('NDTA_GC_INTERVAL', 'uint64')) class config(nla): __slots__ = () fields = (('key_len', 'H'), ('entry_size', 'H'), ('entries', 'I'), ('last_flush', 'I'), # delta to now in msecs ('last_rand', 'I'), # delta to now in msecs ('hash_rnd', 'I'), ('hash_mask', 'I'), ('hash_chain_gc', 'I'), ('proxy_qlen', 'I')) class stats(nla): __slots__ = () fields = (('allocs', 'Q'), ('destroys', 'Q'), ('hash_grows', 'Q'), ('res_failed', 'Q'), ('lookups', 'Q'), ('hits', 'Q'), ('rcv_probes_mcast', 'Q'), ('rcv_probes_ucast', 'Q'), ('periodic_gc_runs', 'Q'), ('forced_gc_runs', 'Q')) class parms(nla): __slots__ = () nla_map = (('NDTPA_UNSPEC', 'none'), ('NDTPA_IFINDEX', 'uint32'), ('NDTPA_REFCNT', 'uint32'), ('NDTPA_REACHABLE_TIME', 'uint64'), ('NDTPA_BASE_REACHABLE_TIME', 'uint64'), ('NDTPA_RETRANS_TIME', 'uint64'), ('NDTPA_GC_STALETIME', 'uint64'), ('NDTPA_DELAY_PROBE_TIME', 'uint64'), ('NDTPA_QUEUE_LEN', 'uint32'), ('NDTPA_APP_PROBES', 'uint32'), ('NDTPA_UCAST_PROBES', 'uint32'), ('NDTPA_MCAST_PROBES', 'uint32'), ('NDTPA_ANYCAST_DELAY', 'uint64'), ('NDTPA_PROXY_DELAY', 'uint64'), ('NDTPA_PROXY_QLEN', 'uint32'), ('NDTPA_LOCKTIME', 'uint64'), ('NDTPA_QUEUE_LENBYTES', 'uint32')) pyroute2-0.5.9/pyroute2/netlink/rtnl/nsidmsg.py0000644000175000017500000000036413610051400021455 0ustar peetpeet00000000000000 from pyroute2.netlink.rtnl.rtgenmsg import rtgenmsg class nsidmsg(rtgenmsg): nla_map = (('NETNSA_NONE', 'none'), ('NETNSA_NSID', 'uint32'), ('NETNSA_PID', 'uint32'), ('NETNSA_FD', 'uint32')) pyroute2-0.5.9/pyroute2/netlink/rtnl/nsinfmsg.py0000644000175000017500000000106213610051400021631 0ustar peetpeet00000000000000from pyroute2.netlink import nlmsg from pyroute2.netlink import nlmsg_atoms class nsinfmsg(nlmsg): ''' Fake message type to represent network namespace information. This is a prototype, the NLA layout is subject to change without notification. 
''' __slots__ = () prefix = 'NSINFO_' fields = (('inode', 'I'), ('netnsid', 'I')) nla_map = (('NSINFO_UNSPEC', 'none'), ('NSINFO_PATH', 'string'), ('NSINFO_PEER', 'peer')) class peer(nlmsg_atoms.string): sql_type = None pyroute2-0.5.9/pyroute2/netlink/rtnl/p2pmsg.py0000644000175000017500000000057213610051400021222 0ustar peetpeet00000000000000from pyroute2.netlink import nlmsg class p2pmsg(nlmsg): ''' Fake message type to represent peer to peer connections, be it GRE or PPP ''' __slots__ = () prefix = 'P2P_' fields = (('index', 'I'), ('family', 'I')) nla_map = (('P2P_UNSPEC', 'none'), ('P2P_LOCAL', 'target'), ('P2P_REMOTE', 'target')) pyroute2-0.5.9/pyroute2/netlink/rtnl/req.py0000644000175000017500000006740313616276270020633 0ustar peetpeet00000000000000from socket import AF_INET from socket import AF_INET6 from pyroute2.common import AF_MPLS from pyroute2.common import basestring from pyroute2.netlink.rtnl import rt_type from pyroute2.netlink.rtnl import rt_proto from pyroute2.netlink.rtnl import rt_scope from pyroute2.netlink.rtnl import encap_type from pyroute2.netlink.rtnl.ifinfmsg import ifinfmsg from pyroute2.netlink.rtnl.ifinfmsg import protinfo_bridge from pyroute2.netlink.rtnl.ifinfmsg.plugins.vlan import flags as vlan_flags from pyroute2.netlink.rtnl.rtmsg import rtmsg from pyroute2.netlink.rtnl.rtmsg import nh as nh_header from pyroute2.netlink.rtnl.fibmsg import FR_ACT_NAMES encap_types = {'mpls': 1, AF_MPLS: 1, 'seg6': 5, 'bpf': 6, 'seg6local': 7} class IPRequest(dict): def __init__(self, obj=None): dict.__init__(self) if obj is not None: self.update(obj) def update(self, obj): if obj.get('family', None): self['family'] = obj['family'] for key in obj: if key == 'family': continue v = obj[key] if isinstance(v, dict): self[key] = dict((x for x in v.items() if x[1] is not None)) elif v is not None: self[key] = v class IPRuleRequest(IPRequest): def update(self, obj): super(IPRuleRequest, self).update(obj) # now fix the rest if 'family' not in self: self['family'] = AF_INET if ('priority' not in self) and ('FRA_PRIORITY' not in self): self['priority'] = 32000 if 'table' in self and 'action' not in self: self['action'] = 'to_tbl' for key in ('src_len', 'dst_len'): if self.get(key, None) is None and key[:3] in self: self[key] = {AF_INET6: 128, AF_INET: 32}[self['family']] def __setitem__(self, key, value): if key.startswith('ipdb_'): return if key in ('src', 'dst'): v = value.split('/') if len(v) == 2: value, self['%s_len' % key] = v[0], int(v[1]) elif key == 'action' and isinstance(value, basestring): value = (FR_ACT_NAMES .get(value, (FR_ACT_NAMES .get('FR_ACT_' + value.upper(), value)))) dict.__setitem__(self, key, value) class IPRouteRequest(IPRequest): ''' Utility class, that converts human-readable dictionary into RTNL route request. ''' resolve = {'encap_type': encap_type, 'type': rt_type, 'proto': rt_proto, 'scope': rt_scope} def __init__(self, obj=None): self._mask = [] IPRequest.__init__(self, obj) def encap_header(self, header): ''' Encap header transform. 
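The dicts shown below are what callers pass as the ``encap`` value of a route
request; this method converts them into the matching lwtunnel NLAs
(``MPLS_IPTUNNEL_DST`` in the MPLS case). A hedged usage sketch — the interface
name, addresses and index are illustrative, not taken from this module::

    from pyroute2 import IPRoute
    ipr = IPRoute()
    idx = ipr.link_lookup(ifname='eth0')[0]  # assumption: 'eth0' exists
    ipr.route('add', dst='10.0.0.0/24', oif=idx, gateway='10.1.0.2',
              encap={'type': 'mpls', 'labels': '200/300'})
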
Format samples: {'type': 'mpls', 'labels': '200/300'} {'type': AF_MPLS, 'labels': (200, 300)} {'type': 'mpls', 'labels': 200} {'type': AF_MPLS, 'labels': [{'bos': 0, 'label': 200, 'ttl': 16}, {'bos': 1, 'label': 300, 'ttl': 16}]} ''' if isinstance(header['type'], int) or \ (header['type'] in ('mpls', AF_MPLS)): ret = [] override_bos = True labels = header['labels'] if isinstance(labels, basestring): labels = labels.split('/') if not isinstance(labels, (tuple, list, set)): labels = (labels, ) for label in labels: if isinstance(label, dict): # dicts append intact override_bos = False ret.append(label) else: # otherwise construct label dict if isinstance(label, basestring): label = int(label) ret.append({'bos': 0, 'label': label}) # the last label becomes bottom-of-stack if override_bos: ret[-1]['bos'] = 1 return {'attrs': [['MPLS_IPTUNNEL_DST', ret]]} ''' Seg6 encap header transform. Format samples: {'type': 'seg6', 'mode': 'encap', 'segs': '2000::5,2000::6'} {'type': 'seg6', 'mode': 'encap' 'segs': '2000::5,2000::6', 'hmac': 1} ''' if header['type'] == 'seg6': # Init step ret = {} # Parse segs segs = header['segs'] # If they are in the form in_addr6,in_addr6 if isinstance(segs, basestring): # Create an array with the splitted values temp = segs.split(',') # Init segs segs = [] # Iterate over the values for seg in temp: # Discard empty string if seg != '': # Add seg to segs segs.append(seg) # Retrieve mode mode = header['mode'] # hmac is optional and contains the hmac key hmac = header.get('hmac', None) # Construct the new object ret = {'mode': mode, 'segs': segs} # If hmac is present convert to u32 if hmac: # Add to ret the hmac key ret['hmac'] = hmac & 0xffffffff # Done return the object return {'attrs': [['SEG6_IPTUNNEL_SRH', ret]]} ''' BPF encap header transform. Format samples: {'type': 'bpf', 'in': {'fd':4, 'name':'firewall'}} {'type': 'bpf', 'in' : {'fd':4, 'name':'firewall'}, 'out' : {'fd':5, 'name':'stats'}, 'xmit': {'fd':6, 'name':'vlan_push', 'headroom':4}} ''' if header['type'] == 'bpf': attrs = {} for key, value in header.items(): if key not in ['in', 'out', 'xmit']: continue obj = [['LWT_BPF_PROG_FD', value['fd']], ['LWT_BPF_PROG_NAME', value['name']]] if key == 'in': attrs['LWT_BPF_IN'] = {'attrs': obj} elif key == 'out': attrs['LWT_BPF_OUT'] = {'attrs': obj} elif key == 'xmit': attrs['LWT_BPF_XMIT'] = {'attrs': obj} if 'headroom' in value: attrs['LWT_BPF_XMIT_HEADROOM'] = value['headroom'] return {'attrs': attrs.items()} ''' Seg6 encap header transform. 
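These dicts describe a local SRv6 behaviour: ``action`` selects one of the
``SEG6_LOCAL_ACTION_*`` codes, and the extra keys depend on it — ``table`` for
End.T/End.DT6/End.DT4, ``nh6`` for End.X/End.DX6, ``nh4`` for End.DX4, ``oif``
for End.DX2, and ``srh`` (with an optional ``hmac``) for End.B6/End.B6.Encaps.
As with the other encap types, the dict is passed as the ``encap`` value of a
route request; a hedged sketch with illustrative values:
``ipr.route('add', dst='2001:db8:f00d::/64', oif=idx,
encap={'type': 'seg6local', 'action': 'End.DT6', 'table': 254})``.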
Format samples: {'type': 'seg6local', 'action': 'End.DT6', 'table': '10'} {'type': 'seg6local', 'action': 'End.B6', 'table': '10' 'srh': {'segs': '2000::5,2000::6'}} ''' if header['type'] == 'seg6local': # Init step ret = {} table = None nh4 = None nh6 = None iif = None # Actually not used oif = None srh = {} segs = [] hmac = None # Parse segs if srh: segs = header['srh']['segs'] # If they are in the form in_addr6,in_addr6 if isinstance(segs, basestring): # Create an array with the splitted values temp = segs.split(',') # Init segs segs = [] # Iterate over the values for seg in temp: # Discard empty string if seg != '': # Add seg to segs segs.append(seg) # hmac is optional and contains the hmac key hmac = header.get('hmac', None) # Retrieve action action = header['action'] if action == 'End.X': # Retrieve nh6 nh6 = header['nh6'] elif action == 'End.T': # Retrieve table and convert to u32 table = header['table'] & 0xffffffff elif action == 'End.DX2': # Retrieve oif and convert to u32 oif = header['oif'] & 0xffffffff elif action == 'End.DX6': # Retrieve nh6 nh6 = header['nh6'] elif action == 'End.DX4': # Retrieve nh6 nh4 = header['nh4'] elif action == 'End.DT6': # Retrieve table table = header['table'] elif action == 'End.DT4': # Retrieve table table = header['table'] elif action == 'End.B6': # Parse segs segs = header['srh']['segs'] # If they are in the form in_addr6,in_addr6 if isinstance(segs, basestring): # Create an array with the splitted values temp = segs.split(',') # Init segs segs = [] # Iterate over the values for seg in temp: # Discard empty string if seg != '': # Add seg to segs segs.append(seg) # hmac is optional and contains the hmac key hmac = header.get('hmac', None) srh['segs'] = segs # If hmac is present convert to u32 if hmac: # Add to ret the hmac key srh['hmac'] = hmac & 0xffffffff srh['mode'] = 'inline' elif action == 'End.B6.Encaps': # Parse segs segs = header['srh']['segs'] # If they are in the form in_addr6,in_addr6 if isinstance(segs, basestring): # Create an array with the splitted values temp = segs.split(',') # Init segs segs = [] # Iterate over the values for seg in temp: # Discard empty string if seg != '': # Add seg to segs segs.append(seg) # hmac is optional and contains the hmac key hmac = header.get('hmac', None) srh['segs'] = segs if hmac: # Add to ret the hmac key srh['hmac'] = hmac & 0xffffffff srh['mode'] = 'encap' # Construct the new object ret = [] ret.append(['SEG6_LOCAL_ACTION', {'value': action}]) if table: # Add the table to ret ret.append(['SEG6_LOCAL_TABLE', {'value': table}]) if nh4: # Add the nh4 to ret ret.append(['SEG6_LOCAL_NH4', {'value': nh4}]) if nh6: # Add the nh6 to ret ret.append(['SEG6_LOCAL_NH6', {'value': nh6}]) if iif: # Add the iif to ret ret.append(['SEG6_LOCAL_IIF', {'value': iif}]) if oif: # Add the oif to ret ret.append(['SEG6_LOCAL_OIF', {'value': oif}]) if srh: # Add the srh to ret ret.append(['SEG6_LOCAL_SRH', srh]) # Done return the object return {'attrs': ret} def mpls_rta(self, value): ret = [] if not isinstance(value, (list, tuple, set)): value = (value, ) for label in value: if isinstance(label, int): label = {'label': label, 'bos': 0} elif isinstance(label, basestring): label = {'label': int(label), 'bos': 0} elif not isinstance(label, dict): raise ValueError('wrong MPLS label') ret.append(label) if ret: ret[-1]['bos'] = 1 return ret def __setitem__(self, key, value): # skip virtual IPDB fields if key.startswith('ipdb_'): return # fix family if isinstance(value, basestring) and value.find(':') >= 0: self['family'] = 
AF_INET6 # work on the rest if key == 'family' and value == AF_MPLS: dict.__setitem__(self, 'family', value) dict.__setitem__(self, 'dst_len', 20) dict.__setitem__(self, 'table', 254) dict.__setitem__(self, 'type', 1) elif key == 'flags' and self.get('family', None) == AF_MPLS: return elif key in ('dst', 'src'): if isinstance(value, dict): dict.__setitem__(self, key, value) elif isinstance(value, int): dict.__setitem__(self, key, {'label': value, 'bos': 1}) elif value == '': # ignore empty values for src/dst return elif value != 'default': value = value.split('/') mask = None if len(value) == 1: dst = value[0] if '%s_len' % key not in self: if self.get('family', 0) == AF_INET: mask = 32 elif self.get('family', 0) == AF_INET6: mask = 128 else: self._mask.append('%s_len' % key) elif len(value) == 2: dst = value[0] mask = int(value[1]) else: raise ValueError('wrong address spec') dict.__setitem__(self, key, dst) if mask is not None: dict.__setitem__(self, '%s_len' % key, mask) elif key == 'newdst': dict.__setitem__(self, 'newdst', self.mpls_rta(value)) elif key in self.resolve.keys(): if isinstance(value, basestring): value = self.resolve[key][value] dict.__setitem__(self, key, value) elif key == 'encap': if isinstance(value, dict): # human-friendly form: # # 'encap': {'type': 'mpls', # 'labels': '200/300'} # # 'type' is mandatory if 'type' in value and 'labels' in value: dict.__setitem__(self, 'encap_type', encap_types.get(value['type'], value['type'])) dict.__setitem__(self, 'encap', self.encap_header(value)) # human-friendly form: # # 'encap': {'type': 'seg6', # 'mode': 'encap' # 'segs': '2000::5,2000::6'} # # 'encap': {'type': 'seg6', # 'mode': 'inline' # 'segs': '2000::5,2000::6' # 'hmac': 1} # # 'encap': {'type': 'seg6', # 'mode': 'encap' # 'segs': '2000::5,2000::6' # 'hmac': 0xf} # # 'encap': {'type': 'seg6', # 'mode': 'inline' # 'segs': ['2000::5', '2000::6']} # # 'type', 'mode' and 'segs' are mandatory if 'type' in value and 'mode' in value and 'segs' in value: dict.__setitem__(self, 'encap_type', encap_types.get(value['type'], value['type'])) dict.__setitem__(self, 'encap', self.encap_header(value)) elif 'type' in value and ('in' in value or 'out' in value or 'xmit' in value): dict.__setitem__(self, 'encap_type', encap_types.get(value['type'], value['type'])) dict.__setitem__(self, 'encap', self.encap_header(value)) # human-friendly form: # # 'encap': {'type': 'seg6local', # 'action': 'End'} # # 'encap': {'type': 'seg6local', # 'action': 'End.DT6', # 'table': '10'} # # 'encap': {'type': 'seg6local', # 'action': 'End.DX6', # 'nh6': '2000::5'} # # 'encap': {'type': 'seg6local', # 'action': 'End.B6' # 'srh': {'segs': '2000::5,2000::6', # 'hmac': 0xf}} # # 'type' and 'action' are mandatory elif 'type' in value and 'action' in value: dict.__setitem__(self, 'encap_type', encap_types.get(value['type'], value['type'])) dict.__setitem__(self, 'encap', self.encap_header(value)) # assume it is a ready-to-use NLA elif 'attrs' in value: dict.__setitem__(self, 'encap', value) elif key == 'via': # ignore empty RTA_VIA if isinstance(value, dict) and \ set(value.keys()) == set(('addr', 'family')) and \ value['family'] in (AF_INET, AF_INET6) and \ isinstance(value['addr'], basestring): dict.__setitem__(self, 'via', value) elif key == 'metrics': if 'attrs' in value: ret = value else: ret = {'attrs': []} for name in value: rtax = rtmsg.metrics.name2nla(name) ret['attrs'].append([rtax, value[name]]) if ret['attrs']: dict.__setitem__(self, 'metrics', ret) elif key == 'multipath': ret = [] for v in value: 
if 'attrs' in v: ret.append(v) continue nh = {'attrs': []} nh_fields = [x[0] for x in nh_header.fields] for name in nh_fields: nh[name] = v.get(name, 0) for name in v: if name in nh_fields or v[name] is None: continue if name == 'encap' and isinstance(v[name], dict): if v[name].get('type', None) is None or \ v[name].get('labels', None) is None: continue nh['attrs'].append(['RTA_ENCAP_TYPE', encap_types.get(v[name]['type'], v[name]['type'])]) nh['attrs'].append(['RTA_ENCAP', self.encap_header(v[name])]) elif name == 'newdst': nh['attrs'].append(['RTA_NEWDST', self.mpls_rta(v[name])]) else: rta = rtmsg.name2nla(name) nh['attrs'].append([rta, v[name]]) ret.append(nh) if ret: dict.__setitem__(self, 'multipath', ret) elif key == 'family': for d in self._mask: if d not in self: if value == AF_INET: dict.__setitem__(self, d, 32) elif value == AF_INET6: dict.__setitem__(self, d, 128) self._mask = [] dict.__setitem__(self, key, value) else: dict.__setitem__(self, key, value) class CBRequest(IPRequest): ''' FIXME ''' commands = None msg = None def __init__(self, *argv, **kwarg): self['commands'] = {'attrs': []} def __setitem__(self, key, value): if value is None: return if key in self.commands: self['commands']['attrs'].\ append([self.msg.name2nla(key), value]) else: dict.__setitem__(self, key, value) class IPBridgeRequest(IPRequest): def __setitem__(self, key, value): if key in ('vlan_info', 'mode', 'vlan_flags'): if 'IFLA_AF_SPEC' not in self: dict.__setitem__(self, 'IFLA_AF_SPEC', {'attrs': []}) nla = ifinfmsg.af_spec_bridge.name2nla(key) self['IFLA_AF_SPEC']['attrs'].append([nla, value]) else: dict.__setitem__(self, key, value) class IPBrPortRequest(dict): def __init__(self, obj=None): dict.__init__(self) dict.__setitem__(self, 'attrs', []) self.allowed = [x[0] for x in protinfo_bridge.nla_map] if obj is not None: self.update(obj) def update(self, obj): for key in obj: self[key] = obj[key] def __setitem__(self, key, value): key = protinfo_bridge.name2nla(key) if key in self.allowed: self['attrs'].append((key, value)) class IPLinkRequest(IPRequest): ''' Utility class, that converts human-readable dictionary into RTNL link request. 
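A hedged usage sketch (interface names are illustrative)::

    req = IPLinkRequest({'ifname': 'v0', 'kind': 'veth', 'peer': 'v0p1'})
    # common ifinfmsg NLAs such as 'ifname' are stored directly; once
    # 'kind' is known, kind-specific keys ('peer' for veth) are flushed
    # into IFLA_LINKINFO / IFLA_INFO_DATA (VETH_INFO_PEER here)

In practice the class is used indirectly, e.g. via
``IPRoute.link('add', ifname='v0', kind='veth', peer='v0p1')``.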
''' blacklist = ['carrier', 'carrier_changes', 'info_slave_kind'] # get common ifinfmsg NLAs common = [] for (key, _) in ifinfmsg.nla_map: common.append(key) common.append(key[len(ifinfmsg.prefix):].lower()) common.append('family') common.append('ifi_type') common.append('index') common.append('flags') common.append('change') def __init__(self, *argv, **kwarg): self.deferred = [] self.kind = None self.specific = {} self.linkinfo = None self._info_data = None IPRequest.__init__(self, *argv, **kwarg) if 'index' not in self: self['index'] = 0 @property def info_data(self): if self._info_data is None: info_data = ('IFLA_INFO_DATA', {'attrs': []}) self._info_data = info_data[1]['attrs'] self.linkinfo.append(info_data) return self._info_data def flush_deferred(self): # create IFLA_LINKINFO linkinfo = {'attrs': []} self.linkinfo = linkinfo['attrs'] dict.__setitem__(self, 'IFLA_LINKINFO', linkinfo) self.linkinfo.append(['IFLA_INFO_KIND', self.kind]) # load specific NLA names cls = ifinfmsg.ifinfo.data_map.get(self.kind, None) if cls is not None: prefix = cls.prefix or 'IFLA_' for nla, _ in cls.nla_map: self.specific[nla] = nla self.specific[nla[len(prefix):].lower()] = nla # flush deferred NLAs for (key, value) in self.deferred: if not self.set_specific(key, value): dict.__setitem__(self, key, value) self.deferred = [] def set_vf(self, spec): vflist = [] if not isinstance(spec, (list, tuple)): spec = (spec, ) for vf in spec: vfcfg = [] # pop VF index vfid = vf.pop('vf') # mandatory # pop VLAN spec vlan = vf.pop('vlan', None) # optional if isinstance(vlan, int): vfcfg.append(('IFLA_VF_VLAN', {'vf': vfid, 'vlan': vlan})) elif isinstance(vlan, dict): vlan['vf'] = vfid vfcfg.append(('IFLA_VF_VLAN', vlan)) elif isinstance(vlan, (list, tuple)): vlist = [] for vspec in vlan: vspec['vf'] = vfid vlist.append(('IFLA_VF_VLAN_INFO', vspec)) vfcfg.append(('IFLA_VF_VLAN_LIST', {'attrs': vlist})) # pop rate spec rate = vf.pop('rate', None) # optional if rate is not None: rate['vf'] = vfid vfcfg.append(('IFLA_VF_RATE', rate)) # create simple VF attrs for attr in vf: vfcfg.append((ifinfmsg.vflist.vfinfo.name2nla(attr), {'vf': vfid, attr: vf[attr]})) vflist.append(('IFLA_VF_INFO', {'attrs': vfcfg})) dict.__setitem__(self, 'IFLA_VFINFO_LIST', {'attrs': vflist}) def set_specific(self, key, value): # FIXME: vlan hack if self.kind == 'vlan' and key == 'vlan_flags': if isinstance(value, (list, tuple)): if len(value) == 2 and \ all((isinstance(x, int) for x in value)): value = {'flags': value[0], 'mask': value[1]} else: ret = 0 for x in value: ret |= vlan_flags.get(x, 1) value = {'flags': ret, 'mask': ret} elif isinstance(value, int): value = {'flags': value, 'mask': value} elif isinstance(value, basestring): value = vlan_flags.get(value, 1) value = {'flags': value, 'mask': value} elif not isinstance(value, dict): raise ValueError() # the kind is known: lookup the NLA if key in self.specific: self.info_data.append((self.specific[key], value)) return True elif key == 'peer' and self.kind == 'veth': # FIXME: veth hack if isinstance(value, dict): attrs = [] for k, v in value.items(): attrs.append([ifinfmsg.name2nla(k), v]) else: attrs = [['IFLA_IFNAME', value], ] nla = ['VETH_INFO_PEER', {'attrs': attrs}] self.info_data.append(nla) return True elif key == 'mode': # FIXME: ipvlan / tuntap / bond hack if self.kind == 'tuntap': nla = ['IFTUN_MODE', value] else: nla = ['IFLA_%s_MODE' % self.kind.upper(), value] self.info_data.append(nla) return True return False def __setitem__(self, key, value): # ignore blacklisted attributes 
if key in self.blacklist: return # there must be no "None" values in the request if value is None: return # all the values must be in ascii try: if isinstance(value, unicode): value = value.encode('ascii') except NameError: pass if key in ('kind', 'info_kind') and not self.kind: self.kind = value self.flush_deferred() elif key == 'vf': # SR-IOV virtual function setup self.set_vf(value) elif self.kind is None: if key in self.common: dict.__setitem__(self, key, value) else: self.deferred.append((key, value)) else: if not self.set_specific(key, value): dict.__setitem__(self, key, value) pyroute2-0.5.9/pyroute2/netlink/rtnl/riprsocket.py0000644000175000017500000000105413610051400022173 0ustar peetpeet00000000000000from pyroute2.netlink import rtnl from pyroute2.netlink import NETLINK_ROUTE from pyroute2.netlink.nlsocket import NetlinkSocket from pyroute2.netlink.rtnl.marshal import MarshalRtnl class RawIPRSocketMixin(object): def __init__(self, fileno=None): super(RawIPRSocketMixin, self).__init__(NETLINK_ROUTE, fileno=fileno) self.marshal = MarshalRtnl() def bind(self, groups=rtnl.RTMGRP_DEFAULTS, **kwarg): super(RawIPRSocketMixin, self).bind(groups, **kwarg) class RawIPRSocket(RawIPRSocketMixin, NetlinkSocket): pass pyroute2-0.5.9/pyroute2/netlink/rtnl/rtgenmsg.py0000644000175000017500000000020213610051400021626 0ustar peetpeet00000000000000 from pyroute2.netlink import nlmsg class rtgenmsg(nlmsg): fields = (('rtgen_family', 'B'), ('__pad', '3x')) pyroute2-0.5.9/pyroute2/netlink/rtnl/rtmsg.py0000644000175000017500000006202213610051400021144 0ustar peetpeet00000000000000import struct from socket import inet_ntop from socket import inet_pton from socket import AF_UNSPEC from socket import AF_INET from socket import AF_INET6 from pyroute2.common import AF_MPLS from pyroute2.common import hexdump from pyroute2.common import map_namespace from pyroute2.netlink import nlmsg from pyroute2.netlink import nla RTNH_F_DEAD = 1 RTNH_F_PERVASIVE = 2 RTNH_F_ONLINK = 4 RTNH_F_OFFLOAD = 8 RTNH_F_LINKDOWN = 16 (RTNH_F_NAMES, RTNH_F_VALUES) = map_namespace('RTNH_F', globals()) LWTUNNEL_ENCAP_NONE = 0 LWTUNNEL_ENCAP_MPLS = 1 LWTUNNEL_ENCAP_IP = 2 LWTUNNEL_ENCAP_ILA = 3 LWTUNNEL_ENCAP_IP6 = 4 LWTUNNEL_ENCAP_SEG6 = 5 LWTUNNEL_ENCAP_BPF = 6 LWTUNNEL_ENCAP_SEG6_LOCAL = 7 class nlflags(object): def encode(self): if isinstance(self['flags'], (set, tuple, list)): self['flags'] = self.names2flags(self['flags']) return super(nlflags, self).encode() def flags2names(self, flags=None): ret = [] for flag in RTNH_F_VALUES: if (flag & flags) == flag: ret.append(RTNH_F_VALUES[flag].lower()[7:]) return ret def names2flags(self, flags=None): ret = 0 for flag in flags or self['flags']: ret |= RTNH_F_NAMES['RTNH_F_' + flag.upper()] return ret class rtmsg_base(nlflags): ''' Route message C structure:: struct rtmsg { unsigned char rtm_family; /* Address family of route */ unsigned char rtm_dst_len; /* Length of destination */ unsigned char rtm_src_len; /* Length of source */ unsigned char rtm_tos; /* TOS filter */ unsigned char rtm_table; /* Routing table ID */ unsigned char rtm_protocol; /* Routing protocol; see below */ unsigned char rtm_scope; /* See below */ unsigned char rtm_type; /* See below */ unsigned int rtm_flags; }; ''' __slots__ = () prefix = 'RTA_' sql_constraints = {'RTA_TABLE': 'NOT NULL DEFAULT 0', 'RTA_DST': "NOT NULL DEFAULT ''", 'RTA_PRIORITY': 'NOT NULL DEFAULT 0'} fields = (('family', 'B'), ('dst_len', 'B'), ('src_len', 'B'), ('tos', 'B'), ('table', 'B'), ('proto', 'B'), ('scope', 'B'), ('type', 'B'), 
('flags', 'I')) nla_map = (('RTA_UNSPEC', 'none'), ('RTA_DST', 'target'), ('RTA_SRC', 'target'), ('RTA_IIF', 'uint32'), ('RTA_OIF', 'uint32'), ('RTA_GATEWAY', 'target'), ('RTA_PRIORITY', 'uint32'), ('RTA_PREFSRC', 'target'), ('RTA_METRICS', 'metrics'), ('RTA_MULTIPATH', '*get_nh'), ('RTA_PROTOINFO', 'uint32'), ('RTA_FLOW', 'uint32'), ('RTA_CACHEINFO', 'cacheinfo'), ('RTA_SESSION', 'hex'), ('RTA_MP_ALGO', 'hex'), ('RTA_TABLE', 'uint32'), ('RTA_MARK', 'uint32'), ('RTA_MFC_STATS', 'rta_mfc_stats'), ('RTA_VIA', 'rtvia'), ('RTA_NEWDST', 'target'), ('RTA_PREF', 'uint8'), ('RTA_ENCAP_TYPE', 'uint16'), ('RTA_ENCAP', 'encap_info'), ('RTA_EXPIRES', 'hex')) @staticmethod def encap_info(self, *argv, **kwarg): encap_type = None # Check, if RTA_ENCAP_TYPE is decoded already # for name, value in self['attrs']: if name == 'RTA_ENCAP_TYPE': encap_type = value break else: # No RTA_ENCAP_TYPE met, so iterate all the chain. # Ugly, but to do otherwise would be too complicated. # data = kwarg['data'] offset = kwarg['offset'] while offset < len(data): # Shift offset to the next NLA # NLA header: # # uint16 length # uint16 type # try: offset += struct.unpack('H', data[offset:offset + 2])[0] # 21 == RTA_ENCAP_TYPE # FIXME: should not be hardcoded if struct.unpack('H', data[offset + 2: offset + 4])[0] == 21: encap_type = struct.unpack('H', data[offset + 4: offset + 6])[0] break except: # in the case of any decoding error return self.hex break # return specific classes # return self.encaps.get(encap_type, self.hex) class mpls_encap_info(nla): __slots__ = () nla_map = (('MPLS_IPTUNNEL_UNSPEC', 'none'), ('MPLS_IPTUNNEL_DST', 'mpls_target'), ('MPLS_IPTUNNEL_TTL', 'uint8')) class seg6_encap_info(nla): __slots__ = () nla_map = (('SEG6_IPTUNNEL_UNSPEC', 'none'), ('SEG6_IPTUNNEL_SRH', 'ipv6_sr_hdr')) class ipv6_sr_hdr(nla): __slots__ = () fields = (('encapmode', 'I'), ('nexthdr', 'B'), ('hdrlen', 'B'), ('type', 'B'), ('segments_left', 'B'), ('first_segment', 'B'), ('flags', 'B'), ('reserved', 'H'), ('segs', 's'), # Potentially several type-length-value ('tlvs', 's')) # Corresponding values for seg6 encap modes SEG6_IPTUN_MODE_INLINE = 0 SEG6_IPTUN_MODE_ENCAP = 1 # Mapping string to nla value encapmodes = { "inline": SEG6_IPTUN_MODE_INLINE, "encap": SEG6_IPTUN_MODE_ENCAP } # Reverse mapping: mapping nla value to string r_encapmodes = {v: k for k, v in encapmodes.items()} # Nla value for seg6 type SEG6_TYPE = 4 # Flag value for hmac SR6_FLAG1_HMAC = 1 << 3 # Tlv value for hmac SR6_TLV_HMAC = 5 # Utility function to get the family from the msg def get_family(self): pointer = self while pointer.parent is not None: pointer = pointer.parent return pointer.get('family', AF_UNSPEC) def encode(self): # Retrieve the family family = self.get_family() # Seg6 can be applied only to IPv6 and IPv4 if family == AF_INET6 or family == AF_INET: # Get mode mode = self['mode'] # Get segs segs = self['segs'] # Get hmac hmac = self.get('hmac', None) # With "inline" mode there is not # encap into an outer IPv6 header if mode == "inline": # Add :: to segs segs.insert(0, "::") # Add mode to value self['encapmode'] = (self .encapmodes .get(mode, self.SEG6_IPTUN_MODE_ENCAP)) # Calculate srlen srhlen = 8 + 16 * len(segs) # If we are using hmac we have a tlv as trailer data if hmac: # Since we can use sha1 or sha256 srhlen += 40 # Calculate and set hdrlen self['hdrlen'] = (srhlen >> 3) - 1 # Add seg6 type self['type'] = self.SEG6_TYPE # Add segments left self['segments_left'] = len(segs) - 1 # Add fitst segment self['first_segment'] = len(segs) - 1 # 
If hmac is used we have to set the flags if hmac: # Add SR6_FLAG1_HMAC self['flags'] |= self.SR6_FLAG1_HMAC # Init segs self['segs'] = b'' # Iterate over segments for seg in segs: # Convert to network byte order and add to value self['segs'] += inet_pton(AF_INET6, seg) # Initialize tlvs self['tlvs'] = b'' # If hmac is used we have to properly init tlvs if hmac: # Put type self['tlvs'] += struct.pack('B', self.SR6_TLV_HMAC) # Put length -> 40-2 self['tlvs'] += struct.pack('B', 38) # Put reserved self['tlvs'] += struct.pack('H', 0) # Put hmac key self['tlvs'] += struct.pack('>I', hmac) # Put hmac self['tlvs'] += struct.pack('QQQQ', 0, 0, 0, 0) else: raise TypeError('Family %s not supported for seg6 tunnel' % family) # Finally encode as nla nla.encode(self) # Utility function to verify if hmac is present def has_hmac(self): # Useful during the decoding return self['flags'] & self.SR6_FLAG1_HMAC def decode(self): # Decode the data nla.decode(self) # Extract the encap mode self['mode'] = (self.r_encapmodes .get(self['encapmode'], "encap")) # Calculate offset of the segs offset = self.offset + 16 # Point the addresses addresses = self.data[offset:] # Extract the number of segs n_segs = self['segments_left'] + 1 # Init segs segs = [] # Move 128 bit in each step for i in range(n_segs): # Save the segment segs.append(inet_ntop(AF_INET6, addresses[i * 16:i * 16 + 16])) # Save segs self['segs'] = segs # Init tlvs self['tlvs'] = '' # If hmac is used if self.has_hmac(): # Point to the start of hmac hmac = addresses[n_segs * 16:n_segs * 16 + 40] # Save tlvs section self['tlvs'] = hexdump(hmac) # Show also the hmac key self['hmac'] = hexdump(hmac[4:8]) class bpf_encap_info(nla): __slots__ = () nla_map = (('LWT_BPF_UNSPEC', 'none'), ('LWT_BPF_IN', 'bpf_obj'), ('LWT_BPF_OUT', 'bpf_obj'), ('LWT_BPF_XMIT', 'bpf_obj'), ('LWT_BPF_XMIT_HEADROOM', 'uint32')) class bpf_obj(nla): __slots__ = () nla_map = (('LWT_BPF_PROG_UNSPEC', 'none'), ('LWT_BPF_PROG_FD', 'uint32'), ('LWT_BPF_PROG_NAME', 'asciiz')) class seg6local_encap_info(nla): __slots__ = () nla_map = (('SEG6_LOCAL_UNSPEC', 'none'), ('SEG6_LOCAL_ACTION', 'action'), ('SEG6_LOCAL_SRH', 'ipv6_sr_hdr'), ('SEG6_LOCAL_TABLE', 'table'), ('SEG6_LOCAL_NH4', 'nh4'), ('SEG6_LOCAL_NH6', 'nh6'), ('SEG6_LOCAL_IIF', 'iif'), ('SEG6_LOCAL_OIF', 'oif'), ('SEG6_LOCAL_BPF', 'none')) # Actually not used class ipv6_sr_hdr(nla): __slots__ = () fields = (('nexthdr', 'B'), ('hdrlen', 'B'), ('type', 'B'), ('segments_left', 'B'), ('first_segment', 'B'), ('flags', 'B'), ('reserved', 'H'), ('segs', 's'), # Potentially several type-length-value ('tlvs', 's')) # Corresponding values for seg6 encap modes SEG6_IPTUN_MODE_INLINE = 0 SEG6_IPTUN_MODE_ENCAP = 1 # Mapping string to nla value encapmodes = { "inline": SEG6_IPTUN_MODE_INLINE, "encap": SEG6_IPTUN_MODE_ENCAP } # Reverse mapping: mapping nla value to string r_encapmodes = {v: k for k, v in encapmodes.items()} # Nla value for seg6 type SEG6_TYPE = 4 # Flag value for hmac SR6_FLAG1_HMAC = 1 << 3 # Tlv value for hmac SR6_TLV_HMAC = 5 # Utility function to get the family from the msg def get_family(self): pointer = self while pointer.parent is not None: pointer = pointer.parent return pointer.get('family', AF_UNSPEC) def encode(self): # Retrieve the family family = self.get_family() # Seg6 can be applied only to IPv6 if family == AF_INET6: # Get mode mode = self['mode'] # Get segs segs = self['segs'] # Get hmac hmac = self.get('hmac', None) # With "inline" mode there is not # encap into an outer IPv6 header if mode == "inline": # Add :: 
to segs segs.insert(0, "::") # Add mode to value self['encapmode'] = (self .encapmodes .get(mode, self.SEG6_IPTUN_MODE_ENCAP)) # Calculate srlen srhlen = 8 + 16 * len(segs) # If we are using hmac we have a tlv as trailer data if hmac: # Since we can use sha1 or sha256 srhlen += 40 # Calculate and set hdrlen self['hdrlen'] = (srhlen >> 3) - 1 # Add seg6 type self['type'] = self.SEG6_TYPE # Add segments left self['segments_left'] = len(segs) - 1 # Add fitst segment self['first_segment'] = len(segs) - 1 # If hmac is used we have to set the flags if hmac: # Add SR6_FLAG1_HMAC self['flags'] |= self.SR6_FLAG1_HMAC # Init segs self['segs'] = b'' # Iterate over segments for seg in segs: # Convert to network byte order and add to value self['segs'] += inet_pton(family, seg) # Initialize tlvs self['tlvs'] = b'' # If hmac is used we have to properly init tlvs if hmac: # Put type self['tlvs'] += struct.pack('B', self.SR6_TLV_HMAC) # Put length -> 40-2 self['tlvs'] += struct.pack('B', 38) # Put reserved self['tlvs'] += struct.pack('H', 0) # Put hmac key self['tlvs'] += struct.pack('>I', hmac) # Put hmac self['tlvs'] += struct.pack('QQQQ', 0, 0, 0, 0) else: raise TypeError('Family %s not supported for seg6 tunnel' % family) # Finally encode as nla nla.encode(self) # Utility function to verify if hmac is present def has_hmac(self): # Useful during the decoding return self['flags'] & self.SR6_FLAG1_HMAC def decode(self): # Decode the data nla.decode(self) # Extract the encap mode self['mode'] = (self.r_encapmodes .get(self['encapmode'], "encap")) # Calculate offset of the segs offset = self.offset + 16 # Point the addresses addresses = self.data[offset:] # Extract the number of segs n_segs = self['segments_left'] + 1 # Init segs segs = [] # Move 128 bit in each step for i in range(n_segs): # Save the segment segs.append(inet_ntop(AF_INET6, addresses[i * 16:i * 16 + 16])) # Save segs self['segs'] = segs # Init tlvs self['tlvs'] = '' # If hmac is used if self.has_hmac(): # Point to the start of hmac hmac = addresses[n_segs * 16:n_segs * 16 + 40] # Save tlvs section self['tlvs'] = hexdump(hmac) # Show also the hmac key self['hmac'] = hexdump(hmac[4:8]) class table(nla): __slots__ = () # Table ID fields = (('value', 'I'),) class action(nla): __slots__ = () # Action fields = (('value', 'I'),) SEG6_LOCAL_ACTION_UNSPEC = 0 SEG6_LOCAL_ACTION_END = 1 SEG6_LOCAL_ACTION_END_X = 2 SEG6_LOCAL_ACTION_END_T = 3 SEG6_LOCAL_ACTION_END_DX2 = 4 SEG6_LOCAL_ACTION_END_DX6 = 5 SEG6_LOCAL_ACTION_END_DX4 = 6 SEG6_LOCAL_ACTION_END_DT6 = 7 SEG6_LOCAL_ACTION_END_DT4 = 8 SEG6_LOCAL_ACTION_END_B6 = 9 SEG6_LOCAL_ACTION_END_B6_ENCAP = 10 SEG6_LOCAL_ACTION_END_BM = 11 SEG6_LOCAL_ACTION_END_S = 12 SEG6_LOCAL_ACTION_END_AS = 13 SEG6_LOCAL_ACTION_END_AM = 14 SEG6_LOCAL_ACTION_END_BPF = 15 actions = {'End': SEG6_LOCAL_ACTION_END, 'End.X': SEG6_LOCAL_ACTION_END_X, 'End.T': SEG6_LOCAL_ACTION_END_T, 'End.DX2': SEG6_LOCAL_ACTION_END_DX2, 'End.DX6': SEG6_LOCAL_ACTION_END_DX6, 'End.DX4': SEG6_LOCAL_ACTION_END_DX4, 'End.DT6': SEG6_LOCAL_ACTION_END_DT6, 'End.DT4': SEG6_LOCAL_ACTION_END_DT4, 'End.B6': SEG6_LOCAL_ACTION_END_B6, 'End.B6.Encaps': SEG6_LOCAL_ACTION_END_B6_ENCAP, 'End.BM': SEG6_LOCAL_ACTION_END_BM, 'End.S': SEG6_LOCAL_ACTION_END_S, 'End.AS': SEG6_LOCAL_ACTION_END_AS, 'End.AM': SEG6_LOCAL_ACTION_END_AM, 'End.BPF': SEG6_LOCAL_ACTION_END_BPF} def encode(self): # Get action type and convert string to value action = self['value'] self['value'] = (self .actions .get(action, self.SEG6_LOCAL_ACTION_UNSPEC)) # Convert action type to u32 
self['value'] = self['value'] & 0xffffffff # Finally encode as nla nla.encode(self) class iif(nla): __slots__ = () # Index of the incoming interface fields = (('value', 'I'),) class oif(nla): __slots__ = () # Index of the outcoming interface fields = (('value', 'I'),) class nh4(nla): __slots__ = () # Nexthop of the IPv4 family fields = (('value', 's'),) def encode(self): # Convert to network byte order self['value'] = inet_pton(AF_INET, self['value']) # Finally encode as nla nla.encode(self) def decode(self): # Decode the data nla.decode(self) # Convert the packed IP address to its string representation self['value'] = inet_ntop(AF_INET, self['value']) class nh6(nla): __slots__ = () # Nexthop of the IPv6 family fields = (('value', 's'),) def encode(self): # Convert to network byte order self['value'] = inet_pton(AF_INET6, self['value']) # Finally encode as nla nla.encode(self) def decode(self): # Decode the data nla.decode(self) # Convert the packed IP address to its string representation self['value'] = inet_ntop(AF_INET6, self['value']) # # TODO: add here other lwtunnel types # encaps = {LWTUNNEL_ENCAP_MPLS: mpls_encap_info, LWTUNNEL_ENCAP_SEG6: seg6_encap_info, LWTUNNEL_ENCAP_BPF: bpf_encap_info, LWTUNNEL_ENCAP_SEG6_LOCAL: seg6local_encap_info} class rta_mfc_stats(nla): __slots__ = () fields = (('mfcs_packets', 'uint64'), ('mfcs_bytes', 'uint64'), ('mfcs_wrong_if', 'uint64')) class metrics(nla): __slots__ = () prefix = 'RTAX_' nla_map = (('RTAX_UNSPEC', 'none'), ('RTAX_LOCK', 'uint32'), ('RTAX_MTU', 'uint32'), ('RTAX_WINDOW', 'uint32'), ('RTAX_RTT', 'uint32'), ('RTAX_RTTVAR', 'uint32'), ('RTAX_SSTHRESH', 'uint32'), ('RTAX_CWND', 'uint32'), ('RTAX_ADVMSS', 'uint32'), ('RTAX_REORDERING', 'uint32'), ('RTAX_HOPLIMIT', 'uint32'), ('RTAX_INITCWND', 'uint32'), ('RTAX_FEATURES', 'uint32'), ('RTAX_RTO_MIN', 'uint32'), ('RTAX_INITRWND', 'uint32'), ('RTAX_QUICKACK', 'uint32')) @staticmethod def get_nh(self, *argv, **kwarg): return nh class rtvia(nla): __slots__ = () fields = (('value', 's'), ) def encode(self): family = self.get('family', AF_UNSPEC) if family in (AF_INET, AF_INET6): addr = inet_pton(family, self['addr']) else: raise TypeError('Family %s not supported for RTA_VIA' % family) self['value'] = struct.pack('H', family) + addr nla.encode(self) def decode(self): nla.decode(self) family = struct.unpack('H', self['value'][:2])[0] addr = self['value'][2:] if len(addr): if (family == AF_INET and len(addr) == 4) or \ (family == AF_INET6 and len(addr) == 16): addr = inet_ntop(family, addr) else: addr = hexdump(addr) self.value = {'family': family, 'addr': addr} class cacheinfo(nla): __slots__ = () fields = (('rta_clntref', 'I'), ('rta_lastuse', 'I'), ('rta_expires', 'i'), ('rta_error', 'I'), ('rta_used', 'I'), ('rta_id', 'I'), ('rta_ts', 'I'), ('rta_tsage', 'I')) class rtmsg(rtmsg_base, nlmsg): __slots__ = () def encode(self): if self.get('family') == AF_MPLS: # force fields self['dst_len'] = 20 self['table'] = 254 self['type'] = 1 # assert NLA types for n in self.get('attrs', []): if n[0] not in ('RTA_OIF', 'RTA_DST', 'RTA_VIA', 'RTA_NEWDST', 'RTA_MULTIPATH'): raise TypeError('Incorrect NLA type %s for AF_MPLS' % n[0]) super(rtmsg_base, self).encode() class nh(rtmsg_base, nla): __slots__ = () is_nla = False sql_constraints = {} cell_header = (('length', 'H'), ) fields = (('flags', 'B'), ('hops', 'B'), ('oif', 'i')) pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/0000755000175000017500000000000013621220110020547 5ustar 
peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/__init__.py0000644000175000017500000000715313610051400022670 0ustar peetpeet00000000000000import types from pyroute2.netlink import nlmsg from pyroute2.netlink import nla from pyroute2.netlink.rtnl.tcmsg import cls_fw from pyroute2.netlink.rtnl.tcmsg import cls_u32 from pyroute2.netlink.rtnl.tcmsg import cls_matchall from pyroute2.netlink.rtnl.tcmsg import cls_basic from pyroute2.netlink.rtnl.tcmsg import cls_flow from pyroute2.netlink.rtnl.tcmsg import sched_bpf from pyroute2.netlink.rtnl.tcmsg import sched_cake from pyroute2.netlink.rtnl.tcmsg import sched_choke from pyroute2.netlink.rtnl.tcmsg import sched_clsact from pyroute2.netlink.rtnl.tcmsg import sched_codel from pyroute2.netlink.rtnl.tcmsg import sched_drr from pyroute2.netlink.rtnl.tcmsg import sched_fq_codel from pyroute2.netlink.rtnl.tcmsg import sched_hfsc from pyroute2.netlink.rtnl.tcmsg import sched_htb from pyroute2.netlink.rtnl.tcmsg import sched_ingress from pyroute2.netlink.rtnl.tcmsg import sched_netem from pyroute2.netlink.rtnl.tcmsg import sched_pfifo_fast from pyroute2.netlink.rtnl.tcmsg import sched_plug from pyroute2.netlink.rtnl.tcmsg import sched_sfq from pyroute2.netlink.rtnl.tcmsg import sched_tbf from pyroute2.netlink.rtnl.tcmsg import sched_template plugins = {'plug': sched_plug, 'sfq': sched_sfq, 'clsact': sched_clsact, 'codel': sched_codel, 'fq_codel': sched_fq_codel, 'hfsc': sched_hfsc, 'htb': sched_htb, 'bpf': sched_bpf, 'tbf': sched_tbf, 'netem': sched_netem, 'fw': cls_fw, 'u32': cls_u32, 'matchall': cls_matchall, 'basic': cls_basic, 'flow': cls_flow, 'ingress': sched_ingress, 'pfifo_fast': sched_pfifo_fast, 'choke': sched_choke, 'drr': sched_drr, 'prio': sched_pfifo_fast, 'cake': sched_cake} class tcmsg(nlmsg): prefix = 'TCA_' fields = (('family', 'B'), ('pad1', 'B'), ('pad2', 'H'), ('index', 'i'), ('handle', 'I'), ('parent', 'I'), ('info', 'I')) nla_map = (('TCA_UNSPEC', 'none'), ('TCA_KIND', 'asciiz'), ('TCA_OPTIONS', 'get_options'), ('TCA_STATS', 'stats'), ('TCA_XSTATS', 'get_xstats'), ('TCA_RATE', 'hex'), ('TCA_FCNT', 'hex'), ('TCA_STATS2', 'get_stats2'), ('TCA_STAB', 'hex')) class stats(nla): fields = (('bytes', 'Q'), ('packets', 'I'), ('drop', 'I'), ('overlimits', 'I'), ('bps', 'I'), ('pps', 'I'), ('qlen', 'I'), ('backlog', 'I')) def get_plugin(self, plug, *argv, **kwarg): # get the plugin name kind = self.get_attr('TCA_KIND') # get the plugin implementation or the default one p = plugins.get(kind, sched_template) # get the interface interface = getattr(p, plug, getattr(sched_template, plug)) # if it is a method, run and return the result if isinstance(interface, types.FunctionType): return interface(self, *argv, **kwarg) else: return interface @staticmethod def get_stats2(self, *argv, **kwarg): return self.get_plugin('stats2', *argv, **kwarg) @staticmethod def get_xstats(self, *argv, **kwarg): return self.get_plugin('stats', *argv, **kwarg) @staticmethod def get_options(self, *argv, **kwarg): return self.get_plugin('options', *argv, **kwarg) pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/act_bpf.py0000644000175000017500000000175713610051400022533 0ustar peetpeet00000000000000from pyroute2.netlink import nla from pyroute2.netlink.rtnl.tcmsg.common import tc_actions class options(nla): nla_map = (('TCA_ACT_BPF_UNSPEC', 'none'), ('TCA_ACT_BPF_TM,', 'none'), ('TCA_ACT_BPF_PARMS', 'tca_act_bpf_parms'), ('TCA_ACT_BPF_OPS_LEN', 'uint16'), ('TCA_ACT_BPF_OPS', 'hex'), ('TCA_ACT_BPF_FD', 'uint32'), ('TCA_ACT_BPF_NAME', 'asciiz')) class 
tca_act_bpf_parms(nla): fields = (('index', 'I'), ('capab', 'I'), ('action', 'i'), ('refcnt', 'i'), ('bindcnt', 'i')) def get_parameters(kwarg): ret = {'attrs': []} if 'fd' in kwarg: ret['attrs'].append(['TCA_ACT_BPF_FD', kwarg['fd']]) if 'name' in kwarg: ret['attrs'].append(['TCA_ACT_BPF_NAME', kwarg['name']]) a = tc_actions[kwarg.get('action', 'drop')] ret['attrs'].append(['TCA_ACT_BPF_PARMS', {'action': a}]) return ret pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/act_connmark.py0000644000175000017500000000243613610051400023567 0ustar peetpeet00000000000000from pyroute2.netlink import nla from pyroute2.netlink import NLA_F_NESTED from pyroute2.netlink.rtnl.tcmsg.common import tc_actions """ connmark - netfilter connmark retriever action see tc-connmark(8) This filter restores the connection mark into the packet mark. Connection marks are typically handled by the CONNMARK iptables module. See iptables-extensions(8). There is no mandatory parameter, but you can specify the action, which defaults to 'pipe', and the conntrack zone (see the manual). """ class options(nla): nla_flags = NLA_F_NESTED nla_map = (('TCA_CONNMARK_UNSPEC', 'none'), ('TCA_CONNMARK_PARMS', 'tca_connmark_parms'), ('TCA_CONNMARK_TM', 'none'), ) class tca_connmark_parms(nla): fields = (('index', 'I'), ('capab', 'I'), ('action', 'i'), ('refcnt', 'i'), ('bindcnt', 'i'), ('zone', 'H'), ('__padding', 'H'), # XXX is there a better way to do this ? ) def get_parameters(kwarg): ret = {'attrs': []} parms = { 'action': tc_actions[kwarg.get('action', 'pipe')], 'zone': kwarg.get('zone', 0) } ret['attrs'].append(['TCA_CONNMARK_PARMS', parms]) return ret pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/act_gact.py0000644000175000017500000000135013610051400022667 0ustar peetpeet00000000000000from pyroute2.netlink import nla from pyroute2.netlink import NLA_F_NESTED from pyroute2.netlink.rtnl.tcmsg.common import tc_actions class options(nla): nla_flags = NLA_F_NESTED nla_map = (('TCA_GACT_UNSPEC', 'none'), ('TCA_GACT_TM', 'none'), ('TCA_GACT_PARMS', 'tca_gact_parms'), ('TCA_GACT_PROB', 'none')) class tca_gact_parms(nla): fields = (('index', 'I'), ('capab', 'I'), ('action', 'i'), ('refcnt', 'i'), ('bindcnt', 'i')) def get_parameters(kwarg): ret = {'attrs': []} a = tc_actions[kwarg.get('action', 'drop')] ret['attrs'].append(['TCA_GACT_PARMS', {'action': a}]) return ret pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/act_mirred.py0000644000175000017500000000341713610051400023241 0ustar peetpeet00000000000000from pyroute2.netlink import nla from pyroute2.netlink import NLA_F_NESTED from pyroute2.netlink.rtnl.tcmsg.common import tc_actions """ Mirred - mirror/redirect action see tc-mirred(8) Use like any other action, with the following parameters available: - direction (mandatory): ingress or egress - action (mandatory): mirror or redirect - ifindex (mandatory): destination interface for mirrored or redirected packets - index: explicit index for this action """ # see tc_mirred.h MIRRED_EACTIONS = { ("egress", "redirect"): 1, # redirect packet to egress ("egress", "mirror"): 2, # mirror packet to egress ("ingress", "redirect"): 3, # redirect packet to ingress ("ingress", "mirror"): 4, # mirror packet to ingress } class options(nla): nla_flags = NLA_F_NESTED nla_map = (('TCA_MIRRED_UNSPEC', 'none'), ('TCA_MIRRED_TM', 'none'), ('TCA_MIRRED_PARMS', 'tca_mirred_parms'), ) class tca_mirred_parms(nla): fields = (('index', 'I'), ('capab', 'I'), ('action', 'i'), ('refcnt', 'i'), ('bindcnt', 'i'), ('eaction', 'i'), ('ifindex', 'I'), ) def get_parameters(kwarg): 
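    # Sketch of the kwarg this function expects, based on the module
    # docstring above (the ifindex value is illustrative):
    #   {'direction': 'egress', 'action': 'mirror', 'ifindex': 5}
    # 'direction'/'action' pick the eaction code from MIRRED_EACTIONS,
    # 'ifindex' is the target device, and an optional integer 'index'
    # pins the action instance. With IPRoute.tc() such a spec would
    # typically be wrapped as action={'kind': 'mirred', ...}; treat that
    # exact call form as an assumption — kind dispatch is handled in
    # common_act, not here.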
ret = {'attrs': []} # direction, action and ifindex are mandatory parms = { 'eaction': MIRRED_EACTIONS[(kwarg['direction'], kwarg['action'])], 'ifindex': kwarg['ifindex'], } if 'index' in kwarg: parms['index'] = int(kwarg['index']) # From m_mirred.c if kwarg['action'] == 'redirect': parms['action'] = tc_actions['stolen'] else: # mirror parms['action'] = tc_actions['pipe'] ret['attrs'].append(['TCA_MIRRED_PARMS', parms]) return ret pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/act_police.py0000644000175000017500000000405113610051400023225 0ustar peetpeet00000000000000from pyroute2.netlink.rtnl.tcmsg.common import nla_plus_rtab from pyroute2.netlink.rtnl.tcmsg.common import get_rate_parameters actions = {'unspec': -1, # TC_POLICE_UNSPEC 'ok': 0, # TC_POLICE_OK 'reclassify': 1, # TC_POLICE_RECLASSIFY 'shot': 2, # TC_POLICE_SHOT 'drop': 2, # TC_POLICE_SHOT 'pipe': 3} # TC_POLICE_PIPE class options(nla_plus_rtab): nla_map = (('TCA_POLICE_UNSPEC', 'none'), ('TCA_POLICE_TBF', 'police_tbf'), ('TCA_POLICE_RATE', 'rtab'), ('TCA_POLICE_PEAKRATE', 'ptab'), ('TCA_POLICE_AVRATE', 'uint32'), ('TCA_POLICE_RESULT', 'uint32')) class police_tbf(nla_plus_rtab.parms): fields = (('index', 'I'), ('action', 'i'), ('limit', 'I'), ('burst', 'I'), ('mtu', 'I'), ('rate_cell_log', 'B'), ('rate___reserved', 'B'), ('rate_overhead', 'H'), ('rate_cell_align', 'h'), ('rate_mpu', 'H'), ('rate', 'I'), ('peak_cell_log', 'B'), ('peak___reserved', 'B'), ('peak_overhead', 'H'), ('peak_cell_align', 'h'), ('peak_mpu', 'H'), ('peak', 'I'), ('refcnt', 'i'), ('bindcnt', 'i'), ('capab', 'I')) class nla_plus_police(object): class police(options): pass def get_parameters(kwarg): # if no limit specified, set it to zero to make # the next call happy kwarg['limit'] = kwarg.get('limit', 0) tbfp = get_rate_parameters(kwarg) # create an alias -- while TBF uses 'buffer', rate # policy uses 'burst' tbfp['burst'] = tbfp['buffer'] # action resolver tbfp['action'] = actions[kwarg.get('action', 'reclassify')] return {'attrs': [['TCA_POLICE_TBF', tbfp], ['TCA_POLICE_RATE', True]]} pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/act_skbedit.py0000644000175000017500000000665613610051400023414 0ustar peetpeet00000000000000''' skbedit +++++++ Usage:: from pyroute2 import IPRoute # Assume you are working with eth1 interface IFNAME = "eth1" ipr = IPRoute() ifindex = ipr.link_lookup(ifname=IFNAME) # First create parent qdisc ipr.tc("add", "htb", index=ifindex, handle=0x10000) # Then add a matchall filter with skbedit action # Simple action example action = {"kind": "skbedit", "priority": 0x10001 # Also known as "1:1" in TC format } ipr.tc("add-filter", "matchall", index=ifindex, parent=0x10000, prio=1, action=action) # Extended action example action = {"kind": "skbedit", "priority": 0x10001, # Also known as "1:1" in TC format "mark": 0x1337, "mask": 0xFFFFFFFF, "ptype": "host" } ipr.tc("add-filter", "matchall", index=ifindex, parent=0x10000, prio=1, action=action) NOTES: Here is the list of all supported options:: - mark: integer - mask: integer - priority: integer - ptype: "host", "otherhost", "broadcast" or "multicast" - queue: integer ''' from pyroute2.netlink import nla from pyroute2.netlink.rtnl.tcmsg.common import tc_actions # Packet types defined in if_packet.h PACKET_HOST = 0 PACKET_BROADCAST = 1 PACKET_MULTICAST = 2 PACKET_OTHERHOST = 3 def convert_ptype(value): types = {'host': PACKET_HOST, 'otherhost': PACKET_OTHERHOST, 'broadcast': PACKET_BROADCAST, 'multicast': PACKET_MULTICAST, } res = types.get(value.lower()) if res is not None: return res raise 
ValueError('Invalid ptype specified! See tc-skbedit man ' 'page for valid values.') def get_parameters(kwarg): ret = {'attrs': []} attrs_map = (('priority', 'TCA_SKBEDIT_PRIORITY'), ('queue', 'TCA_SKBEDIT_QUEUE_MAPPING'), ('mark', 'TCA_SKBEDIT_MARK'), ('ptype', 'TCA_SKBEDIT_PTYPE'), ('mask', 'TCA_SKBEDIT_MASK'), ) # Assign TCA_SKBEDIT_PARMS first parms = {} parms['action'] = tc_actions['pipe'] ret['attrs'].append(['TCA_SKBEDIT_PARMS', parms]) for k, v in attrs_map: r = kwarg.get(k, None) if r is not None: if k == 'ptype': r = convert_ptype(r) ret['attrs'].append([v, r]) return ret class options(nla): nla_map = (('TCA_SKBEDIT_UNSPEC', 'none'), ('TCA_SKBEDIT_TM', 'tca_parse_tm'), ('TCA_SKBEDIT_PARMS', 'tca_parse_parms'), ('TCA_SKBEDIT_PRIORITY', 'uint32'), ('TCA_SKBEDIT_QUEUE_MAPPING', 'uint16'), ('TCA_SKBEDIT_MARK', 'uint32'), ('TCA_SKBEDIT_PAD', 'hex'), ('TCA_SKBEDIT_PTYPE', 'uint16'), ('TCA_SKBEDIT_MASK', 'uint32'), ('TCA_SKBEDIT_FLAGS', 'uint64'), ) class tca_parse_parms(nla): # As described in tc_mpls.h, it uses # generic TC action fields fields = (('index', 'I'), ('capab', 'I'), ('action', 'i'), ('refcnt', 'i'), ('bindcnt', 'i'), ) class tca_parse_tm(nla): # See struct tcf_t fields = (('install', 'Q'), ('lastuse', 'Q'), ('expires', 'Q'), ('firstuse', 'Q'), ) pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/act_vlan.py0000644000175000017500000000276513610051400022724 0ustar peetpeet00000000000000from pyroute2.netlink import nla from pyroute2.netlink.rtnl.tcmsg.common import tc_actions from socket import htons v_actions = {'pop': 1, 'push': 2, 'modify': 3} class options(nla): nla_map = (('TCA_VLAN_UNSPEC', 'none'), ('TCA_VLAN_TM', 'none'), ('TCA_VLAN_PARMS', 'tca_vlan_parms'), ('TCA_VLAN_PUSH_VLAN_ID', 'uint16'), ('TCA_VLAN_PUSH_VLAN_PROTOCOL', 'uint16'), ('TCA_VLAN_PAD', 'none'), ('TCA_VLAN_PUSH_VLAN_PRIORITY', 'uint8')) class tca_vlan_parms(nla): fields = (('index', 'I'), ('capab', 'I'), ('action', 'i'), ('refcnt', 'i'), ('bindcnt', 'i'), ('v_action', 'i'),) def get_parameters(kwarg): ret = {'attrs': []} parms = {'v_action': v_actions[kwarg['v_action']]} parms['action'] = tc_actions[kwarg.get('action', 'pipe')] ret['attrs'].append(['TCA_VLAN_PARMS', parms]) # Vlan id compulsory for "push" and "modify" if kwarg['v_action'] in ['push', 'modify']: ret['attrs'].append(['TCA_VLAN_PUSH_VLAN_ID', kwarg['id']]) if 'priority' in kwarg: ret['attrs'].append(['TCA_VLAN_PUSH_VLAN_PRIORITY', kwarg['priority']]) if kwarg.get('protocol', '802.1Q') == '802.1ad': ret['attrs'].append(['TCA_VLAN_PUSH_VLAN_PROTOCOL', htons(0x88a8)]) else: ret['attrs'].append(['TCA_VLAN_PUSH_VLAN_PROTOCOL', htons(0x8100)]) return ret pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/cls_basic.py0000644000175000017500000002075013621021750023060 0ustar peetpeet00000000000000''' basic +++++ Basic filter has multiple types supports. 
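Each entry of the ``match`` list selects one ematch kind — ``ipset``, ``cmp``
or ``meta`` in the examples below — plus an optional ``relation`` ('and'/'or')
to chain expressions and an optional ``inverse`` flag.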
Examples with ipset matches:: # Prepare a simple match on an ipset at index 0 src # (the first ipset name that appears when running `ipset list`) match = [{"kind": "ipset", index": 0, "mode": "src"}] ip.tc("add-filter", "basic", ifb0, parent=0x10000, classid=0x10010, match=match) # The same match but inverted, simply add inverse flag match = [{"kind": "ipset", "index": 0, "mode": "src", "inverse": True}] ip.tc("add-filter", "basic", ifb0, parent=0x10000, classid=0x10010, match=match) # Still one ipset but with multiple dimensions: # comma separated list of modes match = [{"kind": "ipset", "index": 0, "mode": "src,dst"}] ip.tc("add-filter", "basic", ifb0, parent=0x10000, classid=0x10010, match=match) # Now let's add multiple expressions (ipset 0 src and ipset 1 src) match = [{"kind": "ipset", "index": 0, "mode": "src", "relation": "and"}, {"kind": "ipset", "index": 1, "mode": "src"}] ip.tc("add-filter", "basic", ifb0, parent=0x10000, classid=0x10010, match=match) # The same works with OR (ipset 0 src or ipset 1 src) match = [{"kind": "ipset", "index": 0, "mode": "src", "relation": "OR"}, {"kind": "ipset", "index": 1, "mode": "src"}] ip.tc("add-filter", "basic", ifb0, parent=0x10000, classid=0x10010, match=match) Examples with cmp matches:: # Repeating the example given in the man page match = [{"kind": "cmp", "layer": 2, "opnd": "gt", "align": "u16", "offset": 3, "mask": 0xff00, "value": 20}] ip.tc("add-filter", "basic", ifb0, parent=0x10000, classid=0x10010, match=match) # Now, the same example but with variations # - use layer name instead of enum # - use operand sign instead of name match = [{"kind": "cmp", "layer": "transport", "opnd": ">","align": "u16", "offset": 3, "mask": 0xff00, "value": 20}] ip.tc("add-filter", "basic", ifb0, parent=0x10000, classid=0x10010, match=match) # Again, the same example with all possible keywords even if they are # ignored match = [{"kind": "cmp", "layer": "tcp", "opnd": ">", "align": "u16", "offset": 3, "mask": 0xff00, "value": 20, "trans": False}] ip.tc("add-filter", "basic", ifb0, parent=0x10000, classid=0x10010, match=match) # Another example, we want to work at the link layer # and filter incoming packets matching hwaddr 00:DE:AD:C0:DE:00 # OSI model tells us that the source hwaddr is at offset 0 of # the link layer. 
# Size of hwaddr is 6-bytes in length, so I use an u32 then an u16 # to do the complete match match = [{"kind": "cmp", "layer": "link", "opnd": "eq", "align": "u32", "offset": 0, "mask": 0xffffffff, "value": 0x00DEADC0, "relation": "and"}, {"kind": "cmp", "layer": "link", "opnd": "eq", "align": "u16", "offset": 4, "mask": 0xffff, "value": 0xDE00}] ip.tc("add-filter", "basic", ifb0, parent=0x10000, classid=0x10010, match=match) # As the man page says, here are the different key-value pairs you can use: # "layer": "link" or "eth" or 0 # "layer": "network" or "ip" or 1 # "layer": "transport" or "tcp" or 2 # "opnd": "eq" or "=" or 0 # "opnd": "gt" or ">" or 1 # "opnd": "lt" or "<" or 2 # "align": "u8" or "u16" or "u32" # "trans": True or False # "offset", "mask" and "value": any integer Examples with meta matches:: # Repeating the example given in the man page match = [{"kind": "meta", "object":{"kind": "nfmark", "opnd": "gt"}, "value": 24, "relation": "and"}, {"kind": "meta", "object":{"kind": "tcindex", "opnd": "eq"}, "value": 0xf0, "mask": 0xf0}] ip.tc("add-filter", "basic", ifb0, parent=0x10000, classid=0x10010, match=match) # Now, the same example but with variations # - use operand sign instead of name match = [{"kind": "meta", "object":{"kind": "nfmark", "opnd": ">"}, "value": 24, "relation": "and"}, {"kind": "meta", "object":{"kind": "tcindex", "opnd": "="}, "value": 0xf0, "mask": 0xf0}] ip.tc("add-filter", "basic", ifb0, parent=0x10000, classid=0x10010, match=match) # Another example given by the tc helper # meta(indev shift 1 eq "ppp") match = [{"kind": "meta", "object":{"kind": "dev", "opnd": "eq", "shift": 1}, "value": "ppp"}] ip.tc("add-filter", "basic", ifb0, parent=0x10000, classid=0x10010, match=match) # Another example, drop every packets arriving on ifb0 match = [{"kind": "meta", "object":{"kind": "dev", "opnd": "eq"}, "value": "ifb0"}] ip.tc("add-filter", "basic", ifb0, parent=0x10000, classid=0x10010, match=match, action="drop") # As the man page says, here are the different key-value pairs you can use: # "opnd": "eq" or "=" or 0 # "opnd": "gt" or ">" or 1 # "opnd": "lt" or "<" or 2 # "shift": any integer between 0 and 255 included # "kind" object: see `tc filter add dev iface basic match 'meta(list)'` result # "value": any string if kind matches 'dev' or 'sk_bound_if', # any integer otherwise NOTES: When not specified, `inverse` flag is set to False. Do not specify `relation` keyword on the last expression or if there is only one expression. `relation` can be written using multiple format: "and", "AND", "&&", "or", "OR", "||" You can combine multiple different types of ematch. 
Here is an example:: match = [{"kind": "cmp", "layer": 2, "opnd": "eq", "align": "u32", "offset": 0, "value": 32, "relation": "&&"}, {"kind": "meta", "object":{"kind": "vlan_tag", "opnd": "eq"}, "value": 100, "relation": "||"}, {"kind": "ipset", "index": 0, "mode": "src", "inverse": True} ] ''' import struct from socket import htons from pyroute2 import protocols from pyroute2.netlink import nla from pyroute2.netlink.rtnl.tcmsg.common_act import get_tca_action from pyroute2.netlink.rtnl.tcmsg.common_act import tca_act_prio from pyroute2.netlink.rtnl.tcmsg.common_ematch import get_tcf_ematches from pyroute2.netlink.rtnl.tcmsg.common_ematch import nla_plus_tcf_ematch_opt def fix_msg(msg, kwarg): msg['info'] = htons(kwarg.get('protocol', protocols.ETH_P_ALL) & 0xffff) |\ ((kwarg.get('prio', 0) << 16) & 0xffff0000) def get_parameters(kwarg): ret = {'attrs': []} attrs_map = ( ('classid', 'TCA_BASIC_CLASSID'), ) if kwarg.get('match'): ret['attrs'].append(['TCA_BASIC_EMATCHES', get_tcf_ematches(kwarg)]) if kwarg.get('action'): ret['attrs'].append(['TCA_BASIC_ACT', get_tca_action(kwarg)]) for k, v in attrs_map: r = kwarg.get(k, None) if r is not None: ret['attrs'].append([v, r]) return ret class options(nla): nla_map = (('TCA_BASIC_UNSPEC', 'none'), ('TCA_BASIC_CLASSID', 'uint32'), ('TCA_BASIC_EMATCHES', 'parse_basic_ematch_tree'), ('TCA_BASIC_ACT', 'tca_act_prio'), ('TCA_BASIC_POLICE', 'hex'), ) class parse_basic_ematch_tree(nla): nla_map = (('TCA_EMATCH_TREE_UNSPEC', 'none'), ('TCA_EMATCH_TREE_HDR', 'tcf_parse_header'), ('TCA_EMATCH_TREE_LIST', '*tcf_parse_list'), ) class tcf_parse_header(nla): fields = (('nmatches', 'H'), ('progid', 'H'), ) class tcf_parse_list(nla, nla_plus_tcf_ematch_opt): fields = (('matchid', 'H'), ('kind', 'H'), ('flags', 'H'), ('pad', 'H'), ('opt', 's'), ) def decode(self): nla.decode(self) size = 0 for field in self.fields + self.header: if 'opt' in field: # Ignore this field as it a hack used to brain encoder continue size += struct.calcsize(field[1]) start = self.offset + size end = self.offset + self.length data = self.data[start:end] self['opt'] = self.parse_ematch_options(self, data) tca_act_prio = tca_act_prio pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/cls_flow.py0000644000175000017500000001154113610051400022735 0ustar peetpeet00000000000000''' flow ++++ Flow filter supports two types of modes:: - map - hash # Prepare a Qdisc with fq-codel ip.tc("add", "fq_codel", ifb0, parent=0x10001, handle=0x10010) # Create flow filter with hash mode # Single: keys = "src" # Multi (comma separated list of keys): keys = "src,nfct-src" ip.tc("add-filter", "flow", ifb0, mode="hash", keys=keys, divisor=1024, perturb=60, handle=0x10, baseclass=0x10010, parent=0x10001) # Create flow filter with map mode # Simple map dst with no OP: ip.tc("add-filter", "flow", ifb0, mode="map", key="dst", divisor=1024, handle=10 baseclass=0x10010) # Same filter with xor OP: ops = [{"op": "xor", "num": 0xFF}] ip.tc("add-filter", "flow", ifb0, mode="map", key="dst", divisor=1024, handle=10 baseclass=0x10010, ops=ops) # Complex one with addend OP (incl. minus support): ops = [{"op": "addend", "num": '-192.168.0.0'}] ip.tc("add-filter", "flow", ifb0, mode="map", key="dst", divisor=1024, handle=10 baseclass=0x10010, ops=ops) # Example with multiple OPS: ops = [{"op": "and", "num": 0xFF}, {"op": "rshift", "num": 4}] ip.tc("add-filter", "flow", ifb0, mode="map", key="dst", divisor=1024, handle=10 baseclass=0x10010, ops=ops) NOTES: When using `map` mode, use the keyword `key` to pass a key. 
When using `hash` mode, use the keyword `keys` to pass a key even if there is only one key. In `map` mode, the `num` parameter in `OPS` is always an integer unless if you use the OP `addend`, which can be a string IPv4 address. You can also add a minus sign at the begining of the `num` value even if it is an IPv4 address. ''' from socket import htons from pyroute2 import protocols from pyroute2.netlink import nla from pyroute2.netlink.rtnl.tcmsg.common import get_tca_ops from pyroute2.netlink.rtnl.tcmsg.common import get_tca_mode from pyroute2.netlink.rtnl.tcmsg.common import get_tca_keys from pyroute2.netlink.rtnl.tcmsg.common import tc_flow_keys from pyroute2.netlink.rtnl.tcmsg.common import tc_flow_modes from pyroute2.netlink.rtnl.tcmsg.common_act import get_tca_action from pyroute2.netlink.rtnl.tcmsg.common_act import tca_act_prio def fix_msg(msg, kwarg): msg['info'] = htons(kwarg.get('protocol', protocols.ETH_P_ALL) & 0xffff) |\ ((kwarg.get('prio', 0) << 16) & 0xffff0000) def get_parameters(kwarg): ret = {'attrs': []} attrs_map = (('baseclass', 'TCA_FLOW_BASECLASS'), ('divisor', 'TCA_FLOW_DIVISOR'), ('perturb', 'TCA_FLOW_PERTURB'), ) if kwarg.get('mode'): ret['attrs'].append(['TCA_FLOW_MODE', get_tca_mode(kwarg)]) if kwarg.get('mode') == 'hash': ret['attrs'].append(['TCA_FLOW_KEYS', get_tca_keys(kwarg, 'keys')]) if kwarg.get('mode') == 'map': ret['attrs'].append(['TCA_FLOW_KEYS', get_tca_keys(kwarg, 'key')]) # Check for OPS presence if 'ops' in kwarg: get_tca_ops(kwarg, ret['attrs']) if kwarg.get('action'): ret['attrs'].append(['TCA_FLOW_ACT', get_tca_action(kwarg)]) for k, v in attrs_map: r = kwarg.get(k, None) if r is not None: ret['attrs'].append([v, r]) return ret class options(nla): nla_map = (('TCA_FLOW_UNSPEC', 'none'), ('TCA_FLOW_KEYS', 'tca_parse_keys'), ('TCA_FLOW_MODE', 'tca_parse_mode'), ('TCA_FLOW_BASECLASS', 'uint32'), ('TCA_FLOW_RSHIFT', 'uint32'), ('TCA_FLOW_ADDEND', 'uint32'), ('TCA_FLOW_MASK', 'uint32'), ('TCA_FLOW_XOR', 'uint32'), ('TCA_FLOW_DIVISOR', 'uint32'), ('TCA_FLOW_ACT', 'tca_act_prio'), ('TCA_FLOW_POLICE', 'hex'), ('TCA_FLOW_EMATCHES', 'hex'), ('TCA_FLOW_PERTURB', 'uint32'), ) class tca_parse_mode(nla): fields = (('flow_mode', 'I'), ) def decode(self): nla.decode(self) for key, value in tc_flow_modes.items(): if self['flow_mode'] == value: self['flow_mode'] = key break def encode(self): self['flow_mode'] = self['value'] nla.encode(self) class tca_parse_keys(nla): fields = (('flow_keys', 'I'), ) def decode(self): nla.decode(self) keys = '' for key, value in tc_flow_keys.items(): if value & self['flow_keys']: keys = '{0},{1}'.format(keys, key) self['flow_keys'] = keys.strip(',') def encode(self): self['flow_keys'] = self['value'] nla.encode(self) tca_act_prio = tca_act_prio pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/cls_fw.py0000644000175000017500000000272013610051400022401 0ustar peetpeet00000000000000from socket import htons from pyroute2 import protocols from pyroute2.netlink import nla from pyroute2.netlink.rtnl.tcmsg.act_police import nla_plus_police from pyroute2.netlink.rtnl.tcmsg.act_police import get_parameters \ as ap_parameters from pyroute2.netlink.rtnl.tcmsg.common_act import tca_act_prio from pyroute2.netlink.rtnl.tcmsg.common_act import get_tca_action def fix_msg(msg, kwarg): msg['info'] = htons(kwarg.get('protocol', protocols.ETH_P_ALL) & 0xffff) |\ ((kwarg.get('prio', 0) << 16) & 0xffff0000) def get_parameters(kwarg): ret = {'attrs': []} attrs_map = ( ('classid', 'TCA_FW_CLASSID'), # ('police', 'TCA_FW_POLICE'), # Handled in ap_parameters 
('indev', 'TCA_FW_INDEV'), ('mask', 'TCA_FW_MASK'), ) if kwarg.get('rate'): ret['attrs'].append(['TCA_FW_POLICE', ap_parameters(kwarg)]) if kwarg.get('action'): ret['attrs'].append(['TCA_FW_ACT', get_tca_action(kwarg)]) for k, v in attrs_map: r = kwarg.get(k, None) if r is not None: ret['attrs'].append([v, r]) return ret class options(nla, nla_plus_police): nla_map = (('TCA_FW_UNSPEC', 'none'), ('TCA_FW_CLASSID', 'uint32'), ('TCA_FW_POLICE', 'police'), # TODO string? ('TCA_FW_INDEV', 'hex'), # TODO string ('TCA_FW_ACT', 'tca_act_prio'), ('TCA_FW_MASK', 'uint32')) tca_act_prio = tca_act_prio pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/cls_matchall.py0000644000175000017500000000201313610051400023545 0ustar peetpeet00000000000000from socket import htons from pyroute2 import protocols from pyroute2.netlink import nla from pyroute2.netlink.rtnl.tcmsg.common_act import get_tca_action from pyroute2.netlink.rtnl.tcmsg.common_act import tca_act_prio def fix_msg(msg, kwarg): msg['info'] = htons(kwarg.get('protocol', protocols.ETH_P_ALL) & 0xffff) |\ ((kwarg.get('prio', 0) << 16) & 0xffff0000) def get_parameters(kwarg): ret = {'attrs': []} attrs_map = ( ('classid', 'TCA_MATCHALL_CLASSID'), ('flags', 'TCA_MATCHALL_FLAGS') ) if kwarg.get('action'): ret['attrs'].append(['TCA_MATCHALL_ACT', get_tca_action(kwarg)]) for k, v in attrs_map: r = kwarg.get(k, None) if r is not None: ret['attrs'].append([v, r]) return ret class options(nla): nla_map = (('TCA_MATCHALL_UNSPEC', 'none'), ('TCA_MATCHALL_CLASSID', 'be32'), ('TCA_MATCHALL_ACT', 'tca_act_prio'), ('TCA_MATCHALL_FLAGS', 'be32')) tca_act_prio = tca_act_prio pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/cls_u32.py0000644000175000017500000002011213610051400022371 0ustar peetpeet00000000000000''' u32 +++ Filters can take an `action` argument, which affects the packet behavior when the filter matches. Currently the gact, bpf, and police action types are supported, and can be attached to the u32 and bpf filter types:: # An action can be a simple string, which translates to a gact type action = "drop" # Or it can be an explicit type (these are equivalent) action = dict(kind="gact", action="drop") # There can also be a chain of actions, which depend on the return # value of the previous action. 
action = [ dict(kind="bpf", fd=fd, name=name, action="ok"), dict(kind="police", rate="10kbit", burst=10240, limit=0), dict(kind="gact", action="ok"), ] # Add the action to a u32 match-all filter ip.tc("add", "htb", eth0, 0x10000, default=0x200000) ip.tc("add-filter", "u32", eth0, parent=0x10000, prio=10, protocol=protocols.ETH_P_ALL, target=0x10020, keys=["0x0/0x0+0"], action=action) # Add two more filters: One to send packets with a src address of # 192.168.0.1/32 into 1:10 and the second to send packets with a # dst address of 192.168.0.0/24 into 1:20 ip.tc("add-filter", "u32", eth0, parent=0x10000, prio=10, protocol=socket.AF_INET, target=0x10010, keys=["0xc0a80001/0xffffffff+12"]) # 0xc0a800010 = 192.168.0.1 # 0xffffffff = 255.255.255.255 (/32) # 12 = Source network field bit offset ip.tc("add-filter", "u32", eth0, parent=0x10000, prio=10, protocol=socket.AF_INET, target=0x10020, keys=["0xc0a80000/0xffffff00+16"]) # 0xc0a80000 = 192.168.0.0 # 0xffffff00 = 255.255.255.0 (/24) # 16 = Destination network field bit offset ''' import struct from socket import htons from pyroute2.netlink import nla from pyroute2.netlink import nlmsg from pyroute2.netlink.rtnl.tcmsg.common_act import get_tca_action from pyroute2.netlink.rtnl.tcmsg.common_act import tca_act_prio from pyroute2.netlink.rtnl.tcmsg.act_police import nla_plus_police from pyroute2.netlink.rtnl.tcmsg.act_police import get_parameters \ as ap_parameters def fix_msg(msg, kwarg): msg['info'] = htons(kwarg.get('protocol', 0) & 0xffff) |\ ((kwarg.get('prio', 0) << 16) & 0xffff0000) def get_parameters(kwarg): ret = {'attrs': []} if kwarg.get('rate'): ret['attrs'].append(['TCA_U32_POLICE', ap_parameters(kwarg)]) elif kwarg.get('action'): ret['attrs'].append(['TCA_U32_ACT', get_tca_action(kwarg)]) ret['attrs'].append(['TCA_U32_CLASSID', kwarg['target']]) ret['attrs'].append(['TCA_U32_SEL', {'keys': kwarg['keys']}]) return ret class options(nla, nla_plus_police): nla_map = (('TCA_U32_UNSPEC', 'none'), ('TCA_U32_CLASSID', 'uint32'), ('TCA_U32_HASH', 'uint32'), ('TCA_U32_LINK', 'hex'), ('TCA_U32_DIVISOR', 'uint32'), ('TCA_U32_SEL', 'u32_sel'), ('TCA_U32_POLICE', 'police'), ('TCA_U32_ACT', 'tca_act_prio'), ('TCA_U32_INDEV', 'hex'), ('TCA_U32_PCNT', 'u32_pcnt'), ('TCA_U32_MARK', 'u32_mark')) tca_act_prio = tca_act_prio class u32_sel(nla): fields = (('flags', 'B'), ('offshift', 'B'), ('nkeys', 'B'), ('__align', 'x'), ('offmask', '>H'), ('off', 'H'), ('offoff', 'h'), ('hoff', 'h'), ('hmask', '>I')) class u32_key(nlmsg): header = None fields = (('key_mask', '>I'), ('key_val', '>I'), ('key_off', 'i'), ('key_offmask', 'i')) def encode(self): ''' Key sample:: 'keys': ['0x0006/0x00ff+8', '0x0000/0xffc0+2', '0x5/0xf+0', '0x10/0xff+33'] => 00060000/00ff0000 + 8 05000000/0f00ffc0 + 0 00100000/00ff0000 + 32 ''' def cut_field(key, separator): ''' split a field from the end of the string ''' field = '0' pos = key.find(separator) new_key = key if pos > 0: field = key[pos + 1:] new_key = key[:pos] return (new_key, field) # 'header' array to pack keys to header = [(0, 0) for i in range(256)] keys = [] # iterate keys and pack them to the 'header' for key in self['keys']: # TODO tags: filter (key, nh) = cut_field(key, '@') # FIXME: do not ignore nh (key, offset) = cut_field(key, '+') offset = int(offset, 0) # a little trick: if you provide /00ff+8, that # really means /ff+9, so we should take it into # account (key, mask) = cut_field(key, '/') if mask[:2] == '0x': mask = mask[2:] while True: if mask[:2] == '00': offset += 1 mask = mask[2:] else: break mask = '0x' + 
mask mask = int(mask, 0) value = int(key, 0) bits = 24 if mask == 0 and value == 0: key = self.u32_key(data=self.data) key['key_off'] = offset key['key_mask'] = mask key['key_val'] = value keys.append(key) for bmask in struct.unpack('4B', struct.pack('>I', mask)): if bmask > 0: bvalue = (value & (bmask << bits)) >> bits header[offset] = (bvalue, bmask) offset += 1 bits -= 8 # recalculate keys from 'header' key = None value = 0 mask = 0 for offset in range(256): (bvalue, bmask) = header[offset] if bmask > 0 and key is None: key = self.u32_key(data=self.data) key['key_off'] = offset key['key_mask'] = 0 key['key_val'] = 0 bits = 24 if key is not None and bits >= 0: key['key_mask'] |= bmask << bits key['key_val'] |= bvalue << bits bits -= 8 if (bits < 0 or offset == 255): keys.append(key) key = None assert keys self['nkeys'] = len(keys) # FIXME: do not hardcode flags :) self['flags'] = 1 nla.encode(self) offset = self.offset + 20 # 4 bytes header + 16 bytes fields for key in keys: key.offset = offset key.encode() offset += 16 # keys haven't header self.length = offset - self.offset struct.pack_into('H', self.data, self.offset, offset - self.offset) def decode(self): nla.decode(self) offset = self.offset + 16 self['keys'] = [] nkeys = self['nkeys'] while nkeys: key = self.u32_key(data=self.data, offset=offset) key.decode() offset += 16 self['keys'].append(key) nkeys -= 1 class u32_mark(nla): fields = (('val', 'I'), ('mask', 'I'), ('success', 'I')) class u32_pcnt(nla): fields = (('rcnt', 'Q'), ('rhit', 'Q'), ('kcnts', 'Q')) pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/common.py0000644000175000017500000002573313610051400022425 0ustar peetpeet00000000000000import re import os import struct import logging from math import log as logfm from socket import inet_aton from pyroute2 import config from pyroute2.common import size_suffixes from pyroute2.common import time_suffixes from pyroute2.common import rate_suffixes from pyroute2.common import basestring from pyroute2.netlink import nla log = logging.getLogger(__name__) LINKLAYER_UNSPEC = 0 LINKLAYER_ETHERNET = 1 LINKLAYER_ATM = 2 ATM_CELL_SIZE = 53 ATM_CELL_PAYLOAD = 48 TCA_ACT_MAX_PRIO = 32 TIME_UNITS_PER_SEC = 1000000 try: with open('/proc/net/psched', 'r') as psched: [t2us, us2t, clock_res, wee] = [int(i, 16) for i in psched.read().split()] clock_factor = float(clock_res) / TIME_UNITS_PER_SEC tick_in_usec = float(t2us) / us2t * clock_factor except IOError as e: if config.uname[0] == 'Linux': log.warning("tcmsg: %s", e) log.warning("the tc subsystem functionality is limited") clock_res = 0 clock_factor = 1 tick_in_usec = 1 wee = 1000 _first_letter = re.compile('[^0-9]+') def get_hz(): if clock_res == 1000000: return wee else: return os.environ.get('HZ', 1000) def get_by_suffix(value, default, func): if not isinstance(value, basestring): return value pos = _first_letter.search(value) if pos is None: suffix = default else: pos = pos.start() value, suffix = value[:pos], value[pos:] value = int(value) return func(value, suffix) def get_size(size): return get_by_suffix(size, 'b', lambda x, y: x * size_suffixes[y]) def get_time(lat): return get_by_suffix(lat, 'ms', lambda x, y: (x * TIME_UNITS_PER_SEC) / time_suffixes[y]) def get_rate(rate): return get_by_suffix(rate, 'bit', lambda x, y: (x * rate_suffixes[y]) / 8) def time2tick(time): # The code is ported from tc utility return int(time) * tick_in_usec def calc_xmittime(rate, size): # The code is ported from tc utility return int(time2tick(TIME_UNITS_PER_SEC * (float(size) / rate))) def percent2u32(pct): 
'''xlate a percentage to an uint32 value 0% -> 0 100% -> 2**32 - 1''' return int((2 ** 32 - 1) * pct / 100) def red_eval_ewma(qmin, burst, avpkt): # The code is ported from tc utility wlog = 1 W = 0.5 a = float(burst) + 1 - float(qmin) / avpkt assert a >= 1 while wlog < 32: wlog += 1 W /= 2 if (a <= (1 - pow(1 - W, burst)) / W): return wlog return -1 def red_eval_P(qmin, qmax, probability): # The code is ported from tc utility i = qmax - qmin assert i > 0 probability /= i for i in range(32): if probability > 1: break probability *= 2 return i def red_eval_idle_damping(Wlog, avpkt, bps): # The code is ported from tc utility xmit_time = calc_xmittime(bps, avpkt) lW = -logfm(1.0 - 1.0 / (1 << Wlog)) / xmit_time maxtime = 31.0 / lW sbuf = [] for clog in range(32): if (maxtime / (1 << clog) < 512): break if clog >= 32: return -1, sbuf for i in range(255): sbuf.append((i << clog) * lW) if sbuf[i] > 31: sbuf[i] = 31 sbuf.append(31) return clog, sbuf def get_rate_parameters(kwarg): # rate and burst are required rate = get_rate(kwarg['rate']) burst = kwarg['burst'] # if peak, mtu is required peak = get_rate(kwarg.get('peak', 0)) mtu = kwarg.get('mtu', 0) if peak: assert mtu # limit OR latency is required limit = kwarg.get('limit', None) latency = get_time(kwarg.get('latency', None)) assert limit is not None or latency is not None # calculate limit from latency if limit is None: rate_limit = rate * float(latency) /\ TIME_UNITS_PER_SEC + burst if peak: peak_limit = peak * float(latency) /\ TIME_UNITS_PER_SEC + mtu if rate_limit > peak_limit: rate_limit = peak_limit limit = rate_limit return {'rate': int(rate), 'mtu': mtu, 'buffer': calc_xmittime(rate, burst), 'limit': int(limit)} tc_flow_keys = {'src': 0x01, 'dst': 0x02, 'proto': 0x04, 'proto-src': 0x08, 'proto-dst': 0x10, 'iif': 0x20, 'priority': 0x40, 'mark': 0x80, 'nfct': 0x0100, 'nfct-src': 0x0200, 'nfct-dst': 0x0400, 'nfct-proto-src': 0x0800, 'nfct-proto-dst': 0x1000, 'rt-classid': 0x2000, 'sk-uid': 0x4000, 'sk-gid': 0x8000, 'vlan-tag': 0x010000, 'rxhash': 0x020000, } def get_tca_keys(kwarg, name): if name not in kwarg: raise ValueError('Missing attribute: {0}'.format(name)) res = 0 keys = kwarg[name] if name == 'hash': keys = keys.split(',') for key, value in tc_flow_keys.items(): if key in keys: res |= value return res tc_flow_modes = {'map': 0, 'hash': 1, } def get_tca_mode(kwarg): if 'mode' not in kwarg: raise ValueError('Missing attribute: mode') for key, value in tc_flow_modes.items(): if key == kwarg['mode']: return value raise ValueError('Unknown flow mode {0}'.format(kwarg['mode'])) def get_tca_ops(kwarg, attrs): xor_value = 0 mask_value = 0 addend_value = 0 rshift_value = 0 for elem in kwarg['ops']: op = elem['op'] num = elem['num'] if op == 'and': mask_value = num attrs.append(['TCA_FLOW_XOR', xor_value]) attrs.append(['TCA_FLOW_MASK', mask_value]) elif op == 'or': if mask_value == 0: mask_value = (~num + 1) & 0xFFFFFFFF xor_value = num attrs.append(['TCA_FLOW_XOR', xor_value]) attrs.append(['TCA_FLOW_MASK', mask_value]) elif op == 'xor': if mask_value == 0: mask_value = 0xFFFFFFFF xor_value = num attrs.append(['TCA_FLOW_XOR', xor_value]) attrs.append(['TCA_FLOW_MASK', mask_value]) elif op == 'rshift': rshift_value = num attrs.append(['TCA_FLOW_RSHIFT', rshift_value]) elif op == 'addend': # Check if an IP was specified if type(num) == str and len(num.split('.')) == 4: if num.startswith('-'): inverse = True else: inverse = False ip = num.strip('-') # Convert IP to uint32 ip = inet_aton(ip) ip = struct.unpack('>I', ip)[0] if inverse: ip 
= (~ip + 1) & 0xFFFFFFFF addend_value = ip else: addend_value = num attrs.append(['TCA_FLOW_ADDEND', addend_value]) tc_actions = {'unspec': -1, # TC_ACT_UNSPEC 'ok': 0, # TC_ACT_OK 'reclassify': 1, # TC_ACT_RECLASSIFY 'shot': 2, # TC_ACT_SHOT 'drop': 2, # TC_ACT_SHOT 'pipe': 3, # TC_ACT_PIPE 'stolen': 4, # TC_ACT_STOLEN 'queued': 5, # TC_ACT_QUEUED 'repeat': 6, # TC_ACT_REPEAT 'redirect': 7, # TC_ACT_REDIRECT } class nla_plus_rtab(nla): class parms(nla): def adjust_size(self, size, mpu, linklayer): # The current code is ported from tc utility if size < mpu: size = mpu if linklayer == LINKLAYER_ATM: cells = size / ATM_CELL_PAYLOAD if size % ATM_CELL_PAYLOAD > 0: cells += 1 size = cells * ATM_CELL_SIZE return size def calc_rtab(self, kind): # The current code is ported from tc utility rtab = [] mtu = self.get('mtu', 0) or 1600 cell_log = self['%s_cell_log' % (kind)] mpu = self['%s_mpu' % (kind)] rate = self.get(kind, 'rate') # calculate cell_log if cell_log == 0: while (mtu >> cell_log) > 255: cell_log += 1 # fill up the table for i in range(256): size = self.adjust_size((i + 1) << cell_log, mpu, LINKLAYER_ETHERNET) rtab.append(calc_xmittime(rate, size)) self['%s_cell_align' % (kind)] = -1 self['%s_cell_log' % (kind)] = cell_log return rtab def encode(self): self.rtab = None self.ptab = None if self.get('rate', False): self.rtab = self.calc_rtab('rate') if self.get('peak', False): self.ptab = self.calc_rtab('peak') if self.get('ceil', False): self.ctab = self.calc_rtab('ceil') nla.encode(self) class rtab(nla): fields = (('value', 's'), ) own_parent = True def encode(self): parms = self.parent.get_encoded('TCA_TBF_PARMS') or \ self.parent.get_encoded('TCA_HTB_PARMS') or \ self.parent.get_encoded('TCA_POLICE_TBF') if parms is not None: self.value = getattr(parms, self.__class__.__name__) self['value'] = struct.pack('I' * 256, *(int(x) for x in self.value)) nla.encode(self) def decode(self): nla.decode(self) parms = self.parent.get_attr('TCA_TBF_PARMS') or \ self.parent.get_attr('TCA_HTB_PARMS') or \ self.parent.get_attr('TCA_POLICE_TBF') if parms is not None: rtab = struct.unpack('I' * (len(self['value']) / 4), self['value']) self.value = rtab setattr(parms, self.__class__.__name__, rtab) class ptab(rtab): pass class ctab(rtab): pass class stats2(nla): nla_map = (('TCA_STATS_UNSPEC', 'none'), ('TCA_STATS_BASIC', 'basic'), ('TCA_STATS_RATE_EST', 'rate_est'), ('TCA_STATS_QUEUE', 'queue'), ('TCA_STATS_APP', 'stats_app')) class basic(nla): fields = (('bytes', 'Q'), ('packets', 'I')) class rate_est(nla): fields = (('bps', 'I'), ('pps', 'I')) class queue(nla): fields = (('qlen', 'I'), ('backlog', 'I'), ('drops', 'I'), ('requeues', 'I'), ('overlimits', 'I')) class stats_app(nla.hex): pass pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/common_act.py0000644000175000017500000000445013610051400023245 0ustar peetpeet00000000000000from pyroute2.netlink import nla from pyroute2.netlink.rtnl.tcmsg.common import stats2 from pyroute2.netlink.rtnl.tcmsg.common import TCA_ACT_MAX_PRIO from pyroute2.netlink.rtnl.tcmsg import act_gact from pyroute2.netlink.rtnl.tcmsg import act_bpf from pyroute2.netlink.rtnl.tcmsg import act_police from pyroute2.netlink.rtnl.tcmsg import act_mirred from pyroute2.netlink.rtnl.tcmsg import act_connmark from pyroute2.netlink.rtnl.tcmsg import act_vlan from pyroute2.netlink.rtnl.tcmsg import act_skbedit plugins = {'gact': act_gact, 'bpf': act_bpf, 'police': act_police, 'mirred': act_mirred, 'connmark': act_connmark, 'vlan': act_vlan, 'skbedit': act_skbedit, } class 
nla_plus_tca_act_opt(object): @staticmethod def get_act_options(self, *argv, **kwarg): kind = self.get_attr('TCA_ACT_KIND') if kind in plugins: return plugins[kind].options return self.hex class tca_act_prio(nla): nla_map = tuple([('TCA_ACT_PRIO_%i' % x, 'tca_act') for x in range(TCA_ACT_MAX_PRIO)]) class tca_act(nla, nla_plus_tca_act_opt): nla_map = (('TCA_ACT_UNSPEC', 'none'), ('TCA_ACT_KIND', 'asciiz'), ('TCA_ACT_OPTIONS', 'get_act_options'), ('TCA_ACT_INDEX', 'hex'), ('TCA_ACT_STATS', 'stats2')) stats2 = stats2 def get_act_parms(kwarg): if 'kind' not in kwarg: raise Exception('action requires "kind" parameter') if kwarg['kind'] in plugins: return plugins[kwarg['kind']].get_parameters(kwarg) else: return [] # All filters can use any act type, this is a generic parser for all def get_tca_action(kwarg): ret = {'attrs': []} act = kwarg.get('action', 'drop') # convert simple action='..' to kwarg style if isinstance(act, str): act = {'kind': 'gact', 'action': act} # convert single dict action to first entry in a list of actions acts = act if isinstance(act, list) else [act] for i, act in enumerate(acts, start=1): opt = {'attrs': [['TCA_ACT_KIND', act['kind']], ['TCA_ACT_OPTIONS', get_act_parms(act)]]} ret['attrs'].append(['TCA_ACT_PRIO_%d' % i, opt]) return ret pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/common_ematch.py0000644000175000017500000000643213621021750023750 0ustar peetpeet00000000000000from pyroute2.netlink.rtnl.tcmsg import em_cmp from pyroute2.netlink.rtnl.tcmsg import em_ipset from pyroute2.netlink.rtnl.tcmsg import em_meta plugins = { # 0: em_container, 1: em_cmp, # 2: em_nbyte, # 3: em_u32, 4: em_meta, # 5: em_text, # 6: em_vlan, # 7: em_canid, 8: em_ipset, # 9: em_ipt, } plugins_translate = { 'container': 0, 'cmp': 1, 'nbyte': 2, 'u32': 3, 'meta': 4, 'text': 5, 'vlan': 6, 'canid': 7, 'ipset': 8, 'ipt': 9, } TCF_EM_REL_END = 0 TCF_EM_REL_AND = 1 TCF_EM_REL_OR = 2 TCF_EM_INVERSE_MASK = 4 RELATIONS_DICT = {'and': TCF_EM_REL_AND, 'AND': TCF_EM_REL_AND, '&&': TCF_EM_REL_AND, 'or': TCF_EM_REL_OR, 'OR': TCF_EM_REL_OR, '||': TCF_EM_REL_OR} class nla_plus_tcf_ematch_opt(object): @staticmethod def parse_ematch_options(self, *argv, **kwarg): if 'kind' not in self: raise ValueError('ematch requires "kind" parameter') kind = self['kind'] if kind in plugins: ret = plugins[kind].data(data=argv[0]) ret.decode() return ret return self.hex def get_ematch_parms(kwarg): if 'kind' not in kwarg: raise ValueError('ematch requires "kind" parameter') if kwarg['kind'] in plugins: return plugins[kwarg['kind']].get_parameters(kwarg) else: return [] def get_tcf_ematches(kwarg): ret = {'attrs': []} matches = [] header = {'nmatches': 0, 'progid': 0} # Get the number of expressions expr_count = len(kwarg['match']) header['nmatches'] = expr_count # Load plugin and transfer data for i in range(0, expr_count): match = {'matchid': 0, 'kind': None, 'flags': 0, 'pad': 0, 'opt': None} cur_match = kwarg['match'][i] # Translate string kind into numeric kind kind = plugins_translate[cur_match['kind']] match['kind'] = kind data = plugins[kind].data() data.setvalue(cur_match) data.encode() # Add ematch encoded data match['opt'] = data.data # Safety check if i == expr_count - 1 and 'relation' in cur_match: raise ValueError('Could not set a relation to the last expression') if i < expr_count - 1 and 'relation' not in cur_match: raise ValueError('You must specify a relation for every expression' ' except the last one') # Set relation to flags if 'relation' in cur_match: relation = cur_match['relation'] if relation in 
RELATIONS_DICT: match['flags'] |= RELATIONS_DICT.get(relation) else: raise ValueError('Unknown relation {0}'.format(relation)) else: match['flags'] = TCF_EM_REL_END # Handle inverse flag if 'inverse' in cur_match: if cur_match['inverse']: match['flags'] |= TCF_EM_INVERSE_MASK # Append new match to list of matches matches.append(match) ret['attrs'].append(['TCA_EMATCH_TREE_HDR', header]) ret['attrs'].append(['TCA_EMATCH_TREE_LIST', matches]) return ret pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/em_cmp.py0000644000175000017500000000553313617335621022412 0ustar peetpeet00000000000000from pyroute2.netlink import nla TCF_EM_OPND_EQ = 0 TCF_EM_OPND_GT = 1 TCF_EM_OPND_LT = 2 OPERANDS_DICT = {TCF_EM_OPND_EQ: ('eq', '='), TCF_EM_OPND_GT: ('gt', '>'), TCF_EM_OPND_LT: ('lt', '<')} # align types TCF_EM_ALIGN_U8 = 1 TCF_EM_ALIGN_U16 = 2 TCF_EM_ALIGN_U32 = 4 ALIGNS_DICT = {TCF_EM_ALIGN_U8: 'u8', TCF_EM_ALIGN_U16: 'u16', TCF_EM_ALIGN_U32: 'u32'} # layer types TCF_LAYER_LINK = 0 TCF_LAYER_NETWORK = 1 TCF_LAYER_TRANSPORT = 2 LAYERS_DICT = {TCF_LAYER_LINK: ('link', 'eth'), TCF_LAYER_NETWORK: ('network', 'ip'), TCF_LAYER_TRANSPORT: ('transport', 'tcp')} # see tc_em_cmp.h TCF_EM_CMP_TRANS = 1 class data(nla): fields = (('val', 'I'), ('mask', 'I'), ('off', 'H'), ('align_flags', 'B'), ('layer_opnd', 'B') ) def decode(self): self.header = None self.length = 24 nla.decode(self) self['align'] = self['align_flags'] & 0x0F self['flags'] = (self['align_flags'] & 0xF0) >> 4 self['layer'] = self['layer_opnd'] & 0x0F self['opnd'] = (self['layer_opnd'] & 0xF0) >> 4 del self['layer_opnd'] del self['align_flags'] # Perform translation for readability with nldecap self['layer'] = 'TCF_LAYER_{}'.format(LAYERS_DICT[self['layer']][0])\ .upper() self['align'] = 'TCF_EM_ALIGN_{}'.format(ALIGNS_DICT[self['align']])\ .upper() self['opnd'] = 'TCF_EM_OPND_{}'.format(OPERANDS_DICT[self['opnd']][0])\ .upper() def encode(self): # Set default values self['layer_opnd'] = 0 self['align_flags'] = 0 # Build align_flags byte if 'trans' in self: self['align_flags'] = TCF_EM_CMP_TRANS << 4 for k, v in ALIGNS_DICT.items(): if self['align'].lower() == v: self['align_flags'] |= k break # Build layer_opnd byte if isinstance(self['opnd'], int): self['layer_opnd'] = self['opnd'] << 4 else: for k, v in OPERANDS_DICT.items(): if self['opnd'].lower() in v: self['layer_opnd'] = k << 4 break # Layer code if isinstance(self['layer'], int): self['layer_opnd'] |= self['layer'] else: for k, v in LAYERS_DICT.items(): if self['layer'].lower() in v: self['layer_opnd'] |= k break self['off'] = self.get('offset', 0) self['val'] = self.get('value', 0) nla.encode(self) # Patch NLA structure self['header']['length'] -= 4 self.data = self.data[4:] pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/em_ipset.py0000644000175000017500000000320513610051400022730 0ustar peetpeet00000000000000from pyroute2.netlink import nlmsg_base # see em_ipset.c IPSET_DIM = { 'IPSET_DIM_ZERO': 0, 'IPSET_DIM_ONE': 1, 'IPSET_DIM_TWO': 2, 'IPSET_DIM_THREE': 3, 'IPSET_DIM_MAX': 6, } TCF_IPSET_MODE_DST = 0 TCF_IPSET_MODE_SRC = 2 def get_parameters(kwarg): ret = {'attrs': []} attrs_map = ( ('matchid', 'TCF_EM_MATCHID'), ('kind', 'TCF_EM_KIND'), ('flags', 'TCF_EM_FLAGS'), ('pad', 'TCF_EM_PAD') ) for k, v in attrs_map: r = kwarg.get(k, None) if r is not None: ret['attrs'].append([v, r]) return ret class data(nlmsg_base): fields = (('ip_set_index', 'H'), ('ip_set_dim', 'B'), ('ip_set_flags', 'B'), ) def encode(self): flags, dim = self._get_ip_set_parms() self['ip_set_index'] = self['index'] 
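# Note: ip_set_dim is the number of comma-separated modes given in 'mode'
# (e.g. 'src,dst' -> 2), and ip_set_flags packs one TCF_IPSET_MODE_* value
# per dimension; both come from _get_ip_set_parms() called just above.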
self['ip_set_dim'] = dim self['ip_set_flags'] = flags nlmsg_base.encode(self) def _get_ip_set_parms(self): flags = 0 dim = 0 mode = self['mode'] # Split to get dimension modes = mode.split(',') dim = len(modes) if dim > IPSET_DIM['IPSET_DIM_MAX']: raise ValueError('IPSet dimension could not be greater than {0}'. format(IPSET_DIM['IPSET_DIM_MAX'])) for i in range(0, dim): if modes[i] == 'dst': flags |= TCF_IPSET_MODE_DST << i elif modes[i] == 'src': flags |= TCF_IPSET_MODE_SRC << i else: raise ValueError('Unknown IP set mode "{0}"'.format(modes[i])) return (flags, dim) pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/em_meta.py0000644000175000017500000001345213617560120022552 0ustar peetpeet00000000000000from pyroute2.netlink import nla from struct import pack, unpack TCF_EM_OPND_EQ = 0 TCF_EM_OPND_GT = 1 TCF_EM_OPND_LT = 2 OPERANDS_DICT = {TCF_EM_OPND_EQ: ('eq', '='), TCF_EM_OPND_GT: ('gt', '>'), TCF_EM_OPND_LT: ('lt', '<')} # meta types TCF_META_TYPE_VAR = 0 TCF_META_TYPE_INT = 1 TCF_META_ID_MASK = 0x7FF TCF_META_TYPE_MASK = 0xF << 12 # see tc_em_meta.h META_ID = { 'value': 0, 'random': 1, 'loadavg_0': 2, 'loadavg_1': 3, 'loadavg_2': 4, 'dev': 5, 'priority': 6, 'protocol': 7, 'pkttype': 8, 'pktlen': 9, 'datalen': 10, 'maclen': 11, 'nfmark': 12, 'tcindex': 13, 'rtclassid': 14, 'rtiif': 15, 'sk_family': 16, 'sk_state': 17, 'sk_reuse': 18, 'sk_bound_if': 19, 'sk_refcnt': 20, 'sk_shutdown': 21, 'sk_proto': 22, 'sk_type': 23, 'sk_rcvbuf': 24, 'sk_rmem_alloc': 25, 'sk_wmem_alloc': 26, 'sk_omem_alloc': 27, 'sk_wmem_queued': 28, 'sk_rcv_qlen': 29, 'sk_snd_qlen': 30, 'sk_err_qlen': 31, 'sk_forward_allocs': 32, 'sk_sndbuf': 33, 'sk_allocs': 34, 'sk_route_caps': 35, 'sk_hash': 36, 'sk_lingertime': 37, 'sk_ack_backlog': 38, 'sk_max_ack_backlog': 39, 'sk_prio': 40, 'sk_rcvlowat': 41, 'sk_rcvtimeo': 42, 'sk_sndtimeo': 43, 'sk_sendmsg_off': 44, 'sk_write_pending': 45, 'vlan_tag': 46, 'rxhash': 47, } strings_meta = ('dev', 'sk_bound_if') class data(nla): nla_map = (('TCA_EM_META_UNSPEC', 'none'), ('TCA_EM_META_HDR', 'tca_em_meta_header_parse'), ('TCA_EM_META_LVALUE', 'uint32'), ('TCA_EM_META_RVALUE', 'hex') ) def decode(self): self.header = None self.length = 24 nla.decode(self) # Patch to have a better view in nldecap attrs = dict(self['attrs']) rvalue = attrs.get('TCA_EM_META_RVALUE') meta_hdr = attrs.get('TCA_EM_META_HDR') meta_id = meta_hdr['id'] rvalue = bytearray.fromhex(rvalue.replace(':', '')) if meta_id == 'TCF_META_TYPE_VAR': rvalue.decode('utf-8') if meta_id == 'TCF_META_TYPE_INT': rvalue = unpack('> 12 if self['id'] == TCF_META_TYPE_VAR: self['id'] = 'TCF_META_TYPE_VAR' elif self['id'] == TCF_META_TYPE_INT: self['id'] = 'TCF_META_TYPE_INT' else: pass self['kind'] &= TCF_META_ID_MASK for k, v in META_ID.items(): if self['kind'] == v: self['kind'] = 'TCF_META_ID_{}'.format(k.upper()) fmt = 'TCF_EM_OPND_{}'.format(OPERANDS_DICT[self['opnd']][0] .upper()) self['opnd'] = fmt del self['pad'] def encode(self): if not isinstance(self['kind'], str): raise ValueError("kind' keywords must be set!") kind = self['kind'].lower() if kind in strings_meta: self['id'] = TCF_META_TYPE_VAR else: self['id'] = TCF_META_TYPE_INT self['id'] <<= 12 for k, v in META_ID.items(): if kind == k: self['kind'] = self['id'] | v break if isinstance(self['opnd'], str): for k, v in OPERANDS_DICT.items(): if self['opnd'].lower() in v: self['opnd'] = k break # Perform sanity checks on 'shift' value if isinstance(self['shift'], str): # If it fails, it will raise a ValueError # which is what we want self['shift'] = int(self['shift']) if 
not 0 <= self['shift'] <= 255: raise ValueError("'shift' value must be between" "0 and 255 included!") nla.encode(self) pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_bpf.py0000644000175000017500000000513613610051400023045 0ustar peetpeet00000000000000''' ''' from socket import htons from pyroute2.netlink import nla from pyroute2.netlink.rtnl import TC_H_ROOT from pyroute2.netlink import NLA_F_NESTED from pyroute2.protocols import ETH_P_ALL from pyroute2.netlink.rtnl.tcmsg.common import stats2 from pyroute2.netlink.rtnl.tcmsg.common import TCA_ACT_MAX_PRIO from pyroute2.netlink.rtnl.tcmsg.common_act import get_tca_action from pyroute2.netlink.rtnl.tcmsg.common_act import nla_plus_tca_act_opt from pyroute2.netlink.rtnl.tcmsg.act_police import nla_plus_police from pyroute2.netlink.rtnl.tcmsg.act_police import get_parameters \ as ap_parameters parent = TC_H_ROOT TCA_BPF_FLAG_ACT_DIRECT = 1 def fix_msg(msg, kwarg): msg['info'] = htons(kwarg.pop('protocol', ETH_P_ALL) & 0xffff) |\ ((kwarg.pop('prio', 0) << 16) & 0xffff0000) def get_parameters(kwarg): ret = {'attrs': []} attrs_map = ( # ('action', 'TCA_BPF_ACT'), # ('police', 'TCA_BPF_POLICE'), ('classid', 'TCA_BPF_CLASSID'), ('fd', 'TCA_BPF_FD'), ('name', 'TCA_BPF_NAME'), ('flags', 'TCA_BPF_FLAGS'), ) act = kwarg.get('action') if act: ret['attrs'].append(['TCA_BPF_ACT', get_tca_action(kwarg)]) if kwarg.get('rate'): ret['attrs'].append(['TCA_BPF_POLICE', ap_parameters(kwarg)]) kwarg['flags'] = kwarg.get('flags', 0) if kwarg.get('direct_action', False): kwarg['flags'] |= TCA_BPF_FLAG_ACT_DIRECT for k, v in attrs_map: r = kwarg.get(k, None) if r is not None: ret['attrs'].append([v, r]) return ret class options(nla, nla_plus_police): nla_map = (('TCA_BPF_UNSPEC', 'none'), ('TCA_BPF_ACT', 'bpf_act'), ('TCA_BPF_POLICE', 'police'), ('TCA_BPF_CLASSID', 'uint32'), ('TCA_BPF_OPS_LEN', 'uint32'), ('TCA_BPF_OPS', 'uint32'), ('TCA_BPF_FD', 'uint32'), ('TCA_BPF_NAME', 'asciiz'), ('TCA_BPF_FLAGS', 'uint32')) class bpf_act(nla): nla_flags = NLA_F_NESTED nla_map = tuple([('TCA_ACT_PRIO_%i' % x, 'tca_act_bpf') for x in range(TCA_ACT_MAX_PRIO)]) class tca_act_bpf(nla, nla_plus_tca_act_opt): nla_map = (('TCA_ACT_UNSPEC', 'none'), ('TCA_ACT_KIND', 'asciiz'), ('TCA_ACT_OPTIONS', 'get_act_options'), ('TCA_ACT_INDEX', 'hex'), ('TCA_ACT_STATS', 'get_stats2')) @staticmethod def get_stats2(self, *argv, **kwarg): return stats2 pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_cake.py0000644000175000017500000003243613613574566023236 0ustar peetpeet00000000000000''' cake ++++ Usage: # Imports from pyroute2 import IPRoute # Add cake with 2048kbit of bandwidth capacity with IPRoute() as ipr: # Get interface index index = ipr.link_lookup(ifname='lo') ipr.tc('add', kind='cake', index=index, bandwidth='2048kbit') # Same with 15mbit of bandwidth capacity with IPRoute() as ipr: # Get interface index index = ipr.link_lookup(ifname='lo') ipr.tc('add', kind='cake', index=index, bandwidth='15mbit') # If you don't know the bandwidth capacity, use autorate with IPRoute() as ipr: # Get interface index index = ipr.link_lookup(ifname='lo') ipr.tc('add', kind='cake', index=index, bandwidth='unlimited', autorate=True) # If you want to tune ATM properties use: # atm_mode=False # For no ATM tuning # atm_mode=True # For ADSL tuning # atm_mode='ptm' # For VDSL2 tuning with IPRoute() as ipr: # Get interface index index = ipr.link_lookup(ifname='lo') ipr.tc('add', kind='cake', index=index, bandwidth='unlimited', autorate=True, atm_mode=True) # Complex example which has no-sense with IPRoute() as 
ipr: # Get interface index index = ipr.link_lookup(ifname='lo') ipr.tc('add', kind='cake', index=index, bandwidth='unlimited', autorate=True, nat=True, rtt='interplanetary', target=10000, flow_mode='dsthost', diffserv_mode='precedence', fwmark=0x1337) NOTES: Here is the list of all supported options with their values: - ack_filter: False, True or 'aggressive' (False by default) - atm_mode: False, True or 'ptm' (False by default) - autorate: False or True (False by default) - bandwidth: any integer, 'N[kbit|mbit|gbit]' or 'unlimited' - diffserv_mode: 'diffserv3', 'diffserv4', 'diffserv8', 'besteffort', 'precedence' ('diffserv3' by default) - ingress: False or True (False by default) - overhead: any integer between -64 and 256 inclusive (0 by default) - flow_mode: 'flowblind', 'srchost', 'dsthost', 'hosts', 'flows', 'dual-srchost', 'dual-dsthost', 'triple-isolate' ('triple-isolate' by default) - fwmark: any integer (0 by default) - memlimit: any integer (by default, calculated based on the bandwidth and RTT settings) - mpu: any integer between 0 and 256 inclusive (0 by default) - nat: False or True (False by default) - raw: False or True (True by default) - rtt: any integer or 'datacentre', 'lan', 'metro', 'regional', 'internet', 'oceanic', 'satellite', 'interplanetary' ('internet' by default) - split_gso: False or True (True by default) - target: any integer (5000 by default) - wash: False or True (False by default) ''' from pyroute2.netlink import nla from pyroute2.netlink.rtnl import TC_H_ROOT # Defines from pkt_sched.h CAKE_FLOW_NONE = 0 CAKE_FLOW_SRC_IP = 1 CAKE_FLOW_DST_IP = 2 CAKE_FLOW_HOSTS = 3 CAKE_FLOW_FLOWS = 4 CAKE_FLOW_DUAL_SRC = 5 CAKE_FLOW_DUAL_DST = 6 CAKE_FLOW_TRIPLE = 7 CAKE_DIFFSERV_DIFFSERV3 = 0 CAKE_DIFFSERV_DIFFSERV4 = 1 CAKE_DIFFSERV_DIFFSERV8 = 2 CAKE_DIFFSERV_BESTEFFORT = 3 CAKE_DIFFSERV_PRECEDENCE = 4 CAKE_ACK_NONE = 0 CAKE_ACK_FILTER = 1 CAKE_ACK_AGGRESSIVE = 2 CAKE_ATM_NONE = 0 CAKE_ATM_ATM = 1 CAKE_ATM_PTM = 2 TCA_CAKE_MAX_TINS = 8 def fix_msg(msg, kwarg): if 'parent' not in kwarg: msg['parent'] = TC_H_ROOT def convert_bandwidth(value): types = [('kbit', 1000), ('mbit', 1000000), ('gbit', 1000000000) ] if 'unlimited' == value: return 0 try: # Value is passed as an int x = int(value) return x >> 3 except ValueError: value = value.lower() for t, mul in types: if len(value.split(t)) == 2: x = int(value.split(t)[0]) * mul return x >> 3 raise ValueError('Invalid bandwidth value. Specify either an integer, ' '"unlimited" or a value with "kbit", "mbit" or ' '"gbit" appended') def convert_rtt(value): types = {'datacentre': 100, 'lan': 1000, 'metro': 10000, 'regional': 30000, 'internet': 100000, 'oceanic': 300000, 'satellite': 1000000, 'interplanetary': 3600000000, } try: # Value is passed as an int x = int(value) return x except ValueError: rtt = types.get(value.lower()) if rtt is not None: return rtt raise ValueError('Invalid rtt value. 
Specify either an integer (us), ' 'or datacentre, lan, metro, regional, internet, ' 'oceanic or interplanetary.') def convert_atm(value): if isinstance(value, bool): if not value: return CAKE_ATM_NONE else: return CAKE_ATM_ATM else: if value == 'ptm': return CAKE_ATM_PTM raise ValueError('Invalid ATM value!') def convert_flowmode(value): modes = {'flowblind': CAKE_FLOW_NONE, 'srchost': CAKE_FLOW_SRC_IP, 'dsthost': CAKE_FLOW_DST_IP, 'hosts': CAKE_FLOW_HOSTS, 'flows': CAKE_FLOW_FLOWS, 'dual-srchost': CAKE_FLOW_DUAL_SRC, 'dual-dsthost': CAKE_FLOW_DUAL_DST, 'triple-isolate': CAKE_FLOW_TRIPLE, } res = modes.get(value.lower()) if res: return res raise ValueError('Invalid flow mode specified! See tc-cake man ' 'page for valid values.') def convert_diffserv(value): modes = {'diffserv3': CAKE_DIFFSERV_DIFFSERV3, 'diffserv4': CAKE_DIFFSERV_DIFFSERV4, 'diffserv8': CAKE_DIFFSERV_DIFFSERV8, 'besteffort': CAKE_DIFFSERV_BESTEFFORT, 'precedence': CAKE_DIFFSERV_PRECEDENCE, } res = modes.get(value.lower()) if res is not None: return res raise ValueError('Invalid diffserv mode specified! See tc-cake man ' 'page for valid values.') def convert_ackfilter(value): if isinstance(value, bool): if not value: return CAKE_ACK_NONE else: return CAKE_ACK_FILTER else: if value == 'aggressive': return CAKE_ACK_AGGRESSIVE raise ValueError('Invalid ACK filter!') def check_range(name, value, start, end): if not isinstance(value, int): raise ValueError('{} value must be an integer'.format(name)) if not start <= value <= end: raise ValueError('{0} value must be between {1} and {2} ' 'inclusive.'.format(name, start, end)) def get_parameters(kwarg): ret = {'attrs': []} attrs_map = (('ack_filter', 'TCA_CAKE_ACK_FILTER'), ('atm_mode', 'TCA_CAKE_ATM'), ('autorate', 'TCA_CAKE_AUTORATE'), ('bandwidth', 'TCA_CAKE_BASE_RATE64'), ('diffserv_mode', 'TCA_CAKE_DIFFSERV_MODE'), ('ingress', 'TCA_CAKE_INGRESS'), ('overhead', 'TCA_CAKE_OVERHEAD'), ('flow_mode', 'TCA_CAKE_FLOW_MODE'), ('fwmark', 'TCA_CAKE_FWMARK'), ('memlimit', 'TCA_CAKE_MEMORY'), ('mpu', 'TCA_CAKE_MPU'), ('nat', 'TCA_CAKE_NAT'), ('raw', 'TCA_CAKE_RAW'), ('rtt', 'TCA_CAKE_RTT'), ('split_gso', 'TCA_CAKE_SPLIT_GSO'), ('target', 'TCA_CAKE_TARGET'), ('wash', 'TCA_CAKE_WASH'), ) for k, v in attrs_map: r = kwarg.get(k, None) if r is not None: if k == 'bandwidth': r = convert_bandwidth(r) elif k == 'rtt': r = convert_rtt(r) elif k == 'atm_mode': r = convert_atm(r) elif k == 'flow_mode': r = convert_flowmode(r) elif k == 'diffserv_mode': r = convert_diffserv(r) elif k == 'ack_filter': r = convert_ackfilter(r) elif k == 'mpu': check_range(k, r, 0, 256) elif k == 'overhead': check_range(k, r, -64, 256) ret['attrs'].append([v, r]) return ret class options(nla): nla_map = (('TCA_CAKE_UNSPEC', 'none'), ('TCA_CAKE_PAD', 'uint64'), ('TCA_CAKE_BASE_RATE64', 'uint64'), ('TCA_CAKE_DIFFSERV_MODE', 'uint32'), ('TCA_CAKE_ATM', 'uint32'), ('TCA_CAKE_FLOW_MODE', 'uint32'), ('TCA_CAKE_OVERHEAD', 'int32'), ('TCA_CAKE_RTT', 'uint32'), ('TCA_CAKE_TARGET', 'uint32'), ('TCA_CAKE_AUTORATE', 'uint32'), ('TCA_CAKE_MEMORY', 'uint32'), ('TCA_CAKE_NAT', 'uint32'), ('TCA_CAKE_RAW', 'uint32'), ('TCA_CAKE_WASH', 'uint32'), ('TCA_CAKE_MPU', 'uint32'), ('TCA_CAKE_INGRESS', 'uint32'), ('TCA_CAKE_ACK_FILTER', 'uint32'), ('TCA_CAKE_SPLIT_GSO', 'uint32'), ('TCA_CAKE_FWMARK', 'uint32'), ) def encode(self): # Set default Auto-Rate value if not self.get_attr('TCA_CAKE_AUTORATE'): self['attrs'].append(['TCA_CAKE_AUTORATE', 0]) nla.encode(self) class stats2(nla): nla_map = (('TCA_STATS_UNSPEC', 'none'), ('TCA_STATS_BASIC', 
'basic'), ('TCA_STATS_RATE_EST', 'rate_est'), ('TCA_STATS_QUEUE', 'queue'), ('TCA_STATS_APP', 'stats_app')) class basic(nla): fields = (('bytes', 'Q'), ('packets', 'I')) class rate_est(nla): fields = (('bps', 'I'), ('pps', 'I')) class queue(nla): fields = (('qlen', 'I'), ('backlog', 'I'), ('drops', 'I'), ('requeues', 'I'), ('overlimits', 'I')) class stats_app(nla): nla_map = (('__TCA_CAKE_STATS_INVALID', 'none'), ('TCA_CAKE_STATS_PAD', 'hex'), ('TCA_CAKE_STATS_CAPACITY_ESTIMATE64', 'uint64'), ('TCA_CAKE_STATS_MEMORY_LIMIT', 'uint32'), ('TCA_CAKE_STATS_MEMORY_USED', 'uint32'), ('TCA_CAKE_STATS_AVG_NETOFF', 'uint32'), ('TCA_CAKE_STATS_MAX_NETLEN', 'uint32'), ('TCA_CAKE_STATS_MAX_ADJLEN', 'uint32'), ('TCA_CAKE_STATS_MIN_NETLEN', 'uint32'), ('TCA_CAKE_STATS_MIN_ADJLEN', 'uint32'), ('TCA_CAKE_STATS_TIN_STATS', 'tca_parse_tins'), ('TCA_CAKE_STATS_DEFICIT', 'uint32'), ('TCA_CAKE_STATS_COBALT_COUNT', 'uint32'), ('TCA_CAKE_STATS_DROPPING', 'uint32'), ('TCA_CAKE_STATS_DROP_NEXT_US', 'uint32'), ('TCA_CAKE_STATS_P_DROP', 'uint32'), ('TCA_CAKE_STATS_BLUE_TIMER_US', 'uint32'), ) class tca_parse_tins(nla): nla_map = tuple([('TCA_CAKE_TIN_STATS_%i' % x, 'tca_parse_tin_stats') for x in range(TCA_CAKE_MAX_TINS)]) class tca_parse_tin_stats(nla): nla_map = (('__TCA_CAKE_TIN_STATS_INVALID', 'none'), ('TCA_CAKE_TIN_STATS_PAD', 'hex'), ('TCA_CAKE_TIN_STATS_SENT_PACKETS', 'uint32'), ('TCA_CAKE_TIN_STATS_SENT_BYTES64', 'uint64'), ('TCA_CAKE_TIN_STATS_DROPPED_PACKETS', 'uint32'), ('TCA_CAKE_TIN_STATS_DROPPED_BYTES64', 'uint64'), ('TCA_CAKE_TIN_STATS_ACKS_DROPPED_PACKETS', 'uint32'), ('TCA_CAKE_TIN_STATS_ACKS_DROPPED_BYTES64', 'uint64'), ('TCA_CAKE_TIN_STATS_ECN_MARKED_PACKETS', 'uint32'), ('TCA_CAKE_TIN_STATS_ECN_MARKED_BYTES64', 'uint64'), ('TCA_CAKE_TIN_STATS_BACKLOG_PACKETS', 'uint32'), ('TCA_CAKE_TIN_STATS_BACKLOG_BYTES', 'uint32'), ('TCA_CAKE_TIN_STATS_THRESHOLD_RATE64', 'uint64'), ('TCA_CAKE_TIN_STATS_TARGET_US', 'uint32'), ('TCA_CAKE_TIN_STATS_INTERVAL_US', 'uint32'), ('TCA_CAKE_TIN_STATS_WAY_INDIRECT_HITS', 'uint32'), ('TCA_CAKE_TIN_STATS_WAY_MISSES', 'uint32'), ('TCA_CAKE_TIN_STATS_WAY_COLLISIONS', 'uint32'), ('TCA_CAKE_TIN_STATS_PEAK_DELAY_US', 'uint32'), ('TCA_CAKE_TIN_STATS_AVG_DELAY_US', 'uint32'), ('TCA_CAKE_TIN_STATS_BASE_DELAY_US', 'uint32'), ('TCA_CAKE_TIN_STATS_SPARSE_FLOWS', 'uint32'), ('TCA_CAKE_TIN_STATS_BULK_FLOWS', 'uint32'), ('TCA_CAKE_TIN_STATS_UNRESPONSIVE_FLOWS', 'uint32'), ('TCA_CAKE_TIN_STATS_MAX_SKBLEN', 'uint32'), ('TCA_CAKE_TIN_STATS_FLOW_QUANTUM', 'uint32'), ) pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_choke.py0000644000175000017500000000722413610051400023367 0ustar peetpeet00000000000000''' choke +++++ Parameters: * `limit` (required) -- int * `bandwith` (required) -- str/int * `min` -- int * `max` -- int * `avpkt` -- str/int, packet size * `burst` -- int * `probability` -- float * `ecn` -- bool Example:: ip.tc('add', 'choke', interface, limit=5500, bandwith="10mbit", ecn=True) ''' import struct import logging from pyroute2.netlink import nla from pyroute2.netlink.rtnl import TC_H_ROOT from pyroute2.netlink.rtnl.tcmsg.common import get_rate from pyroute2.netlink.rtnl.tcmsg.common import get_size from pyroute2.netlink.rtnl.tcmsg.common import red_eval_ewma from pyroute2.netlink.rtnl.tcmsg.common import red_eval_P from pyroute2.netlink.rtnl.tcmsg.common import red_eval_idle_damping from pyroute2.netlink.rtnl.tcmsg.common import stats2 as c_stats2 log = logging.getLogger(__name__) parent = TC_H_ROOT def get_parameters(kwarg): # The code is ported from iproute2 avpkt = 1000 
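# Note: avpkt above and probability below are only initial defaults ported
# from iproute2; both are re-read from kwarg a few lines further down
# (avpkt via get_size(), probability directly).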
probability = 0.02 opt = {'limit': kwarg['limit'], # required 'qth_min': kwarg.get('min', 0), 'qth_max': kwarg.get('max', 0), 'Wlog': 0, 'Plog': 0, 'Scell_log': 0, 'flags': 1 if kwarg.get('ecn') else 0} rate = get_rate(kwarg['bandwith']) # required burst = kwarg.get('burst', 0) avpkt = get_size(kwarg.get('avpkt', 1000)) probability = kwarg.get('probability', 0.02) if not opt['qth_max']: opt['qth_max'] = opt['limit'] // 4 if not opt['qth_min']: opt['qth_min'] = opt['qth_max'] // 3 if not burst: burst = (2 * opt['qth_min'] + opt['qth_max']) // 3 if opt['qth_max'] > opt['limit']: raise Exception('max is larger than limit') if opt['qth_min'] >= opt['qth_max']: raise Exception('min is not smaller than max') # Wlog opt['Wlog'] = red_eval_ewma(opt['qth_min'] * avpkt, burst, avpkt) if opt['Wlog'] < 0: raise Exception('failed to calculate EWMA') elif opt['Wlog'] > 10: log.warning('choke: burst %s seems to be too large' % burst) # Plog opt['Plog'] = red_eval_P(opt['qth_min'] * avpkt, opt['qth_max'] * avpkt, probability) if opt['Plog'] < 0: raise Exception('choke: failed to calculate probability') # Scell_log, stab opt['Scell_log'], stab = red_eval_idle_damping(opt['Wlog'], avpkt, rate) if opt['Scell_log'] < 0: raise Exception('choke: failed to calculate idle damping table') return {'attrs': [['TCA_CHOKE_PARMS', opt], ['TCA_CHOKE_STAB', stab], ['TCA_CHOKE_MAX_P', int(probability * pow(2, 32))]]} class options(nla): nla_map = (('TCA_CHOKE_UNSPEC', 'none'), ('TCA_CHOKE_PARMS', 'qopt'), ('TCA_CHOKE_STAB', 'stab'), ('TCA_CHOKE_MAX_P', 'uint32')) class qopt(nla): fields = (('limit', 'I'), ('qth_min', 'I'), ('qth_max', 'I'), ('Wlog', 'B'), ('Plog', 'B'), ('Scell_log', 'B'), ('flags', 'B')) class stab(nla): fields = (('value', 's'), ) def encode(self): self['value'] = struct.pack('B' * 256, *(int(x) for x in self.value)) nla.encode(self) class stats(nla): fields = (('early', 'I'), ('pdrop', 'I'), ('other', 'I'), ('marked', 'I'), ('matched', 'I')) class stats2(c_stats2): class stats_app(stats): pass pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_clsact.py0000644000175000017500000000176213610051400023550 0ustar peetpeet00000000000000''' clsact ++++++ The clsact qdisc provides a mechanism to attach integrated filter-action classifiers to an interface, either at ingress or egress, or both. The use case shown here is using a bpf program (implemented elsewhere) to direct the packet processing. The example also uses the direct-action feature to specify what to do with each packet (pass, drop, redirect, etc.). 
BPF ingress/egress example using clsact qdisc:: # open_bpf_fd is outside the scope of pyroute2 #fd = open_bpf_fd() eth0 = ip.get_links(ifname="eth0")[0] ip.tc("add", "clsact", eth0) # add ingress clsact ip.tc("add-filter", "bpf", idx, ":1", fd=fd, name="myprog", parent="ffff:fff2", classid=1, direct_action=True) # add egress clsact ip.tc("add-filter", "bpf", idx, ":1", fd=fd, name="myprog", parent="ffff:fff3", classid=1, direct_action=True) ''' from pyroute2.netlink.rtnl import TC_H_CLSACT parent = TC_H_CLSACT def fix_msg(msg, kwarg): msg['handle'] = 0xffff0000 pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_codel.py0000644000175000017500000000320413610051400023356 0ustar peetpeet00000000000000import logging from pyroute2.netlink import nla from pyroute2.netlink.rtnl import TC_H_ROOT from pyroute2.netlink.rtnl.tcmsg.common import get_time from pyroute2.netlink.rtnl.tcmsg.common import stats2 as c_stats2 log = logging.getLogger(__name__) parent = TC_H_ROOT def get_parameters(kwarg): # # ACHTUNG: experimental code # # Parameters naming scheme WILL be changed in next releases # ret = {'attrs': []} transform = {'cdl_limit': lambda x: x, 'cdl_ecn': lambda x: x, 'cdl_target': get_time, 'cdl_ce_threshold': get_time, 'cdl_interval': get_time} for key in transform.keys(): if key in kwarg: log.warning('codel parameters naming will be changed ' 'in next releases (%s)' % key) ret['attrs'].append(['TCA_CODEL_%s' % key[4:].upper(), transform[key](kwarg[key])]) return ret class options(nla): nla_map = (('TCA_CODEL_UNSPEC', 'none'), ('TCA_CODEL_TARGET', 'uint32'), ('TCA_CODEL_LIMIT', 'uint32'), ('TCA_CODEL_INTERVAL', 'uint32'), ('TCA_CODEL_ECN', 'uint32'), ('TCA_CODEL_CE_THRESHOLD', 'uint32')) class stats(nla): fields = (('maxpacket', 'I'), ('count', 'I'), ('lastcount', 'I'), ('ldelay', 'I'), ('drop_next', 'I'), ('drop_overlimit', 'I'), ('ecn_mark', 'I'), ('dropping', 'I'), ('ce_mark', 'I')) class stats2(c_stats2): class stats_app(stats): pass pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_drr.py0000644000175000017500000000140213610051400023055 0ustar peetpeet00000000000000''' drr +++ The qdisc doesn't accept any parameters, but the class accepts `quantum` parameter:: ip.tc('add', 'drr', interface, '1:') ip.tc('add-class', 'drr', interface, '1:10', quantum=1600) ip.tc('add-class', 'drr', interface, '1:20', quantum=1600) ''' from pyroute2.netlink import nla from pyroute2.netlink.rtnl import TC_H_ROOT from pyroute2.netlink.rtnl.tcmsg.common import stats2 as c_stats2 parent = TC_H_ROOT def get_class_parameters(kwarg): return {'attrs': [['TCA_DRR_QUANTUM', kwarg.get('quantum', 0)]]} class options(nla): nla_map = (('TCA_DRR_UNSPEC', 'none'), ('TCA_DRR_QUANTUM', 'uint32')) class stats(nla): fields = (('deficit', 'I'), ) class stats2(c_stats2): class stats_app(stats): pass pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_fq_codel.py0000644000175000017500000000543513610051400024054 0ustar peetpeet00000000000000import logging from pyroute2.netlink import nla from pyroute2.netlink.rtnl import TC_H_ROOT from pyroute2.netlink.rtnl.tcmsg.common import stats2 from pyroute2.netlink.rtnl.tcmsg.common import get_time from pyroute2.netlink.rtnl import RTM_NEWQDISC from pyroute2.netlink.rtnl import RTM_DELQDISC log = logging.getLogger(__name__) parent = TC_H_ROOT def get_parameters(kwarg): # # ACHTUNG: experimental code # # Parameters naming scheme WILL be changed in next releases # ret = {'attrs': []} transform = {'fqc_limit': lambda x: x, 'fqc_flows': lambda x: x, 'fqc_quantum': lambda x: x, 'fqc_ecn': lambda x: x, 
'fqc_target': get_time, 'fqc_ce_threshold': get_time, 'fqc_interval': get_time} for key in transform.keys(): if key in kwarg: log.warning('fq_codel parameters naming will be changed ' 'in next releases (%s)' % key) ret['attrs'].append(['TCA_FQ_CODEL_%s' % key[4:].upper(), transform[key](kwarg[key])]) return ret class options(nla): nla_map = (('TCA_FQ_CODEL_UNSPEC', 'none'), ('TCA_FQ_CODEL_TARGET', 'uint32'), ('TCA_FQ_CODEL_LIMIT', 'uint32'), ('TCA_FQ_CODEL_INTERVAL', 'uint32'), ('TCA_FQ_CODEL_ECN', 'uint32'), ('TCA_FQ_CODEL_FLOWS', 'uint32'), ('TCA_FQ_CODEL_QUANTUM', 'uint32'), ('TCA_FQ_CODEL_CE_THRESHOLD', 'uint32'), ('TCA_FQ_CODEL_DROP_BATCH_SIZE', 'uint32'), ('TCA_FQ_CODEL_MEMORY_LIMIT', 'uint32')) class qdisc_stats(nla): fields = (('type', 'I'), ('maxpacket', 'I'), ('drop_overlimit', 'I'), ('ecn_mark', 'I'), ('new_flow_count', 'I'), ('new_flows_len', 'I'), ('old_flows_len', 'I'), ('ce_mark', 'I'), ('memory_usage', 'I'), ('drop_overmemory', 'I')) class class_stats(nla): fields = (('type', 'I'), ('deficit', 'i'), ('ldelay', 'I'), ('count', 'I'), ('lastcount', 'I'), ('dropping', 'I'), ('drop_next', 'i')) class qdisc_stats2(stats2): class stats_app(qdisc_stats): pass class class_stats2(stats2): class stats_app(class_stats): pass def stats2(msg, *argv, **kwarg): if msg['header']['type'] in (RTM_NEWQDISC, RTM_DELQDISC): return qdisc_stats2 else: return class_stats2 # To keep the compatibility with TCA_XSTATS def stats(msg, *argv, **kwarg): if msg['header']['type'] in (RTM_NEWQDISC, RTM_DELQDISC): return qdisc_stats else: return class_stats pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_hfsc.py0000644000175000017500000000450313610051400023216 0ustar peetpeet00000000000000''' hfsc ++++ Simple HFSC example:: eth0 = ip.get_links(ifname="eth0")[0] ip.tc("add", "hfsc", eth0, handle="1:", default="1:1") ip.tc("add-class", "hfsc", eth0, handle="1:1", parent="1:0" rsc={"m2": "5mbit"}) HFSC curve nla types: * `rsc`: real-time curve * `fsc`: link-share curve * `usc`: upper-limit curve ''' from pyroute2.netlink.rtnl.tcmsg.common import stats2 as c_stats2 from pyroute2.netlink.rtnl.tcmsg.common import get_rate from pyroute2.netlink.rtnl.tcmsg.common import get_time from pyroute2.netlink import nla from pyroute2.netlink.rtnl import RTM_NEWQDISC from pyroute2.netlink.rtnl import RTM_DELQDISC from pyroute2.netlink.rtnl import TC_H_ROOT parent = TC_H_ROOT def get_parameters(kwarg): defcls = kwarg.get('default', kwarg.get('defcls', 0x10)) defcls &= 0xffff return {'defcls': defcls} def get_class_parameters(kwarg): ret = {'attrs': []} for key in ('rsc', 'fsc', 'usc'): if key in kwarg: ret['attrs'].append(['TCA_HFSC_%s' % key.upper(), {'m1': get_rate(kwarg[key].get('m1', 0)), 'd': get_time(kwarg[key].get('d', 0)), 'm2':get_rate(kwarg[key].get('m2', 0))}]) return ret class options_hfsc(nla): fields = (('defcls', 'H'),) # default class class options_hfsc_class(nla): nla_map = (('TCA_HFSC_UNSPEC', 'none'), ('TCA_HFSC_RSC', 'hfsc_curve'), # real-time curve ('TCA_HFSC_FSC', 'hfsc_curve'), # link-share curve ('TCA_HFSC_USC', 'hfsc_curve')) # upper-limit curve class hfsc_curve(nla): fields = (('m1', 'I'), # slope of the first segment in bps ('d', 'I'), # x-projection of the first segment in us ('m2', 'I')) # slope of the second segment in bps def options(msg, *argv, **kwarg): if msg['header']['type'] in (RTM_NEWQDISC, RTM_DELQDISC): return options_hfsc else: return options_hfsc_class class stats2(c_stats2): class stats_app(nla): fields = (('work', 'Q'), # total work done ('rtwork', 'Q'), # total work done by 
real-time criteria ('period', 'I'), # current period ('level', 'I')) # class level in hierarchy pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_htb.py0000644000175000017500000001370513610051400023054 0ustar peetpeet00000000000000''' htb +++ TODO: list parameters An example with htb qdisc, lets assume eth0 == 2:: # u32 --> +--> htb 1:10 --> sfq 10:0 # | | # | | # eth0 -- htb 1:0 -- htb 1:1 # | | # | | # u32 --> +--> htb 1:20 --> sfq 20:0 eth0 = 2 # add root queue 1:0 ip.tc("add", "htb", eth0, 0x10000, default=0x200000) # root class 1:1 ip.tc("add-class", "htb", eth0, 0x10001, parent=0x10000, rate="256kbit", burst=1024 * 6) # two branches: 1:10 and 1:20 ip.tc("add-class", "htb", eth0, 0x10010, parent=0x10001, rate="192kbit", burst=1024 * 6, prio=1) ip.tc("add-class", "htb", eht0, 0x10020, parent=0x10001, rate="128kbit", burst=1024 * 6, prio=2) # two leaves: 10:0 and 20:0 ip.tc("add", "sfq", eth0, 0x100000, parent=0x10010, perturb=10) ip.tc("add", "sfq", eth0, 0x200000, parent=0x10020, perturb=10) # two filters: one to load packets into 1:10 and the # second to 1:20 ip.tc("add-filter", "u32", eth0, parent=0x10000, prio=10, protocol=socket.AF_INET, target=0x10010, keys=["0x0006/0x00ff+8", "0x0000/0xffc0+2"]) ip.tc("add-filter", "u32", eth0, parent=0x10000, prio=10, protocol=socket.AF_INET, target=0x10020, keys=["0x5/0xf+0", "0x10/0xff+33"]) ''' from pyroute2.netlink.rtnl.tcmsg.common import get_hz from pyroute2.netlink.rtnl.tcmsg.common import get_rate from pyroute2.netlink.rtnl.tcmsg.common import calc_xmittime from pyroute2.netlink.rtnl.tcmsg.common import nla_plus_rtab from pyroute2.netlink.rtnl.tcmsg.common import stats2 from pyroute2.netlink.rtnl import RTM_NEWQDISC from pyroute2.netlink.rtnl import RTM_DELQDISC from pyroute2.netlink import nla from pyroute2.netlink.rtnl import TC_H_ROOT parent = TC_H_ROOT def get_class_parameters(kwarg): prio = kwarg.get('prio', 0) mtu = kwarg.get('mtu', 1600) mpu = kwarg.get('mpu', 0) overhead = kwarg.get('overhead', 0) quantum = kwarg.get('quantum', 0) rate = get_rate(kwarg.get('rate', None)) ceil = get_rate(kwarg.get('ceil', 0)) or rate burst = kwarg.get('burst', None) or \ kwarg.get('maxburst', None) or \ kwarg.get('buffer', None) if rate is not None: if burst is None: burst = rate / get_hz() + mtu burst = calc_xmittime(rate, burst) cburst = kwarg.get('cburst', None) or \ kwarg.get('cmaxburst', None) or \ kwarg.get('cbuffer', None) if ceil is not None: if cburst is None: cburst = ceil / get_hz() + mtu cburst = calc_xmittime(ceil, cburst) return {'attrs': [['TCA_HTB_PARMS', {'buffer': burst, 'cbuffer': cburst, 'quantum': quantum, 'prio': prio, 'rate': rate, 'ceil': ceil, 'ceil_overhead': overhead, 'rate_overhead': overhead, 'rate_mpu': mpu, 'ceil_mpu': mpu}], ['TCA_HTB_RTAB', True], ['TCA_HTB_CTAB', True]]} def get_parameters(kwarg): rate2quantum = kwarg.get('r2q', 0xa) version = kwarg.get('version', 3) defcls = kwarg.get('default', 0x10) return {'attrs': [['TCA_HTB_INIT', {'debug': 0, 'defcls': defcls, 'direct_pkts': 0, 'rate2quantum': rate2quantum, 'version': version}]]} def fix_msg(msg, kwarg): if not kwarg: opts = get_parameters({}) msg['attrs'].append(['TCA_OPTIONS', opts]) # The tokens and ctokens are badly defined in the kernel structure # as unsigned int instead of signed int. 
(cf net/sched/sch_htb.c # in linux source) class stats(nla): fields = (('lends', 'I'), ('borrows', 'I'), ('giants', 'I'), ('tokens', 'i'), ('ctokens', 'i')) class qdisc_stats2(stats2): nla_map = (('TCA_STATS_UNSPEC', 'none'), ('TCA_STATS_BASIC', 'basic'), ('TCA_STATS_RATE_EST', 'rate_est'), ('TCA_STATS_QUEUE', 'queue')) class class_stats2(stats2): class stats_app(stats): pass def stats2(msg, *argv, **kwarg): if msg['header']['type'] in (RTM_NEWQDISC, RTM_DELQDISC): return qdisc_stats2 else: return class_stats2 class options(nla_plus_rtab): nla_map = (('TCA_HTB_UNSPEC', 'none'), ('TCA_HTB_PARMS', 'htb_parms'), ('TCA_HTB_INIT', 'htb_glob'), ('TCA_HTB_CTAB', 'ctab'), ('TCA_HTB_RTAB', 'rtab')) class htb_glob(nla): fields = (('version', 'I'), ('rate2quantum', 'I'), ('defcls', 'I'), ('debug', 'I'), ('direct_pkts', 'I')) class htb_parms(nla_plus_rtab.parms): fields = (('rate_cell_log', 'B'), ('rate___reserved', 'B'), ('rate_overhead', 'H'), ('rate_cell_align', 'h'), ('rate_mpu', 'H'), ('rate', 'I'), ('ceil_cell_log', 'B'), ('ceil___reserved', 'B'), ('ceil_overhead', 'H'), ('ceil_cell_align', 'h'), ('ceil_mpu', 'H'), ('ceil', 'I'), ('buffer', 'I'), ('cbuffer', 'I'), ('quantum', 'I'), ('level', 'I'), ('prio', 'I')) pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_ingress.py0000644000175000017500000000032713610051400023745 0ustar peetpeet00000000000000from pyroute2.netlink import nla from pyroute2.netlink.rtnl import TC_H_INGRESS parent = TC_H_INGRESS def fix_msg(msg, kwarg): msg['handle'] = 0xffff0000 class options(nla): fields = (('value', 'I'), ) pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_netem.py0000644000175000017500000001007013610051400023377 0ustar peetpeet00000000000000from pyroute2.netlink import nla from pyroute2.netlink.rtnl import TC_H_ROOT from pyroute2.netlink.rtnl.tcmsg.common import time2tick from pyroute2.netlink.rtnl.tcmsg.common import percent2u32 parent = TC_H_ROOT def get_parameters(kwarg): delay = time2tick(kwarg.get('delay', 0)) # in microsecond limit = kwarg.get('limit', 1000) # fifo limit (packets) see netem.c:230 loss = percent2u32(kwarg.get('loss', 0)) # int percentage gap = kwarg.get('gap', 0) duplicate = kwarg.get('duplicate', 0) jitter = time2tick(kwarg.get('jitter', 0)) # in microsecond opts = { 'delay': delay, 'limit': limit, 'loss': loss, 'gap': gap, 'duplicate': duplicate, 'jitter': jitter, 'attrs': [] } # correlation (delay, loss, duplicate) delay_corr = percent2u32(kwarg.get('delay_corr', 0)) loss_corr = percent2u32(kwarg.get('loss_corr', 0)) dup_corr = percent2u32(kwarg.get('dup_corr', 0)) if delay_corr or loss_corr or dup_corr: # delay_corr requires that both jitter and delay are != 0 if delay_corr and not (delay and jitter): raise Exception('delay correlation requires delay' ' and jitter to be set') # loss correlation and loss if loss_corr and not loss: raise Exception('loss correlation requires loss to be set') # duplicate correlation and duplicate if dup_corr and not duplicate: raise Exception('duplicate correlation requires ' 'duplicate to be set') opts['attrs'].append(['TCA_NETEM_CORR', {'delay_corr': delay_corr, 'loss_corr': loss_corr, 'dup_corr': dup_corr}]) # reorder (probability, correlation) prob_reorder = percent2u32(kwarg.get('prob_reorder', 0)) corr_reorder = percent2u32(kwarg.get('corr_reorder', 0)) if prob_reorder != 0: # gap defaults to 1 if equal to 0 if gap == 0: opts['gap'] = gap = 1 opts['attrs'].append(['TCA_NETEM_REORDER', {'prob_reorder': prob_reorder, 'corr_reorder': corr_reorder}]) else: if gap != 0: raise Exception('gap can only 
be set when prob_reorder is set') elif corr_reorder != 0: raise Exception('corr_reorder can only be set when ' 'prob_reorder is set') # corrupt (probability, correlation) prob_corrupt = percent2u32(kwarg.get('prob_corrupt', 0)) corr_corrupt = percent2u32(kwarg.get('corr_corrupt', 0)) if prob_corrupt: opts['attrs'].append(['TCA_NETEM_CORRUPT', {'prob_corrupt': prob_corrupt, 'corr_corrupt': corr_corrupt}]) elif corr_corrupt != 0: raise Exception('corr_corrupt can only be set when ' 'prob_corrupt is set') # TODO # delay distribution (dist_size, dist_data) return opts class options(nla): nla_map = (('TCA_NETEM_UNSPEC', 'none'), ('TCA_NETEM_CORR', 'netem_corr'), ('TCA_NETEM_DELAY_DIST', 'none'), ('TCA_NETEM_REORDER', 'netem_reorder'), ('TCA_NETEM_CORRUPT', 'netem_corrupt'), ('TCA_NETEM_LOSS', 'none'), ('TCA_NETEM_RATE', 'none')) fields = (('delay', 'I'), ('limit', 'I'), ('loss', 'I'), ('gap', 'I'), ('duplicate', 'I'), ('jitter', 'I')) class netem_corr(nla): '''correlation''' fields = (('delay_corr', 'I'), ('loss_corr', 'I'), ('dup_corr', 'I')) class netem_reorder(nla): '''reorder has probability and correlation''' fields = (('prob_reorder', 'I'), ('corr_reorder', 'I')) class netem_corrupt(nla): '''corruption has probability and correlation''' fields = (('prob_corrupt', 'I'), ('corr_corrupt', 'I')) pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_pfifo_fast.py0000644000175000017500000000034513610051400024413 0ustar peetpeet00000000000000from pyroute2.netlink import nla from pyroute2.netlink.rtnl import TC_H_ROOT parent = TC_H_ROOT class options(nla): fields = (('bands', 'i'), ('priomap', '16B')) def get_parameters(kwarg): return kwarg pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_plug.py0000644000175000017500000000071713610051400023245 0ustar peetpeet00000000000000from pyroute2.netlink import nla from pyroute2.netlink.rtnl import TC_H_ROOT parent = TC_H_ROOT actions = {'TCQ_PLUG_BUFFER': 0, 'TCQ_PLUG_RELEASE_ONE': 1, 'TCQ_PLUG_RELEASE_INDEFINITE': 2, 'TCQ_PLUG_LIMIT': 3} def get_parameters(kwarg): return {'action': actions.get(kwarg.get('action', 0), 0), 'limit': kwarg.get('limit', 0)} class options(nla): fields = (('action', 'i'), ('limit', 'I')) pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_sfq.py0000644000175000017500000000520213610051400023061 0ustar peetpeet00000000000000from pyroute2.netlink.rtnl.tcmsg.common import get_size from pyroute2.netlink.rtnl.tcmsg.common import red_eval_ewma from pyroute2.netlink.rtnl.tcmsg.common import red_eval_P from pyroute2.netlink import nla from pyroute2.netlink.rtnl import TC_H_ROOT parent = TC_H_ROOT TC_RED_ECN = 1 TC_RED_HARDDROP = 2 TC_RED_ADAPTATIVE = 4 def get_parameters(kwarg): kwarg['quantum'] = get_size(kwarg.get('quantum', 0)) kwarg['perturb_period'] = kwarg.get('perturb', 0) or \ kwarg.get('perturb_period', 0) limit = kwarg['limit'] = kwarg.get('limit', 0) or \ kwarg.get('redflowlimit', 0) qth_min = kwarg.get('min', 0) qth_max = kwarg.get('max', 0) avpkt = kwarg.get('avpkt', 1000) probability = kwarg.get('probability', 0.02) ecn = kwarg.get('ecn', False) harddrop = kwarg.get('harddrop', False) kwarg['flags'] = kwarg.get('flags', 0) if ecn: kwarg['flags'] |= TC_RED_ECN if harddrop: kwarg['flags'] |= TC_RED_HARDDROP if kwarg.get('redflowlimit'): qth_max = qth_max or limit / 4 qth_min = qth_min or qth_max / 3 kwarg['burst'] = kwarg['burst'] or \ (2 * qth_min + qth_max) / (3 * avpkt) assert limit > qth_max assert qth_max > qth_min kwarg['qth_min'] = qth_min kwarg['qth_max'] = qth_max kwarg['Wlog'] = red_eval_ewma(qth_min, kwarg['burst'], 
avpkt) kwarg['Plog'] = red_eval_P(qth_min, qth_max, probability) assert kwarg['Wlog'] >= 0 assert kwarg['Plog'] >= 0 kwarg['max_P'] = int(probability * pow(2, 23)) return kwarg class options_sfq_v0(nla): fields = (('quantum', 'I'), ('perturb_period', 'i'), ('limit', 'I'), ('divisor', 'I'), ('flows', 'I')) class options_sfq_v1(nla): fields = (('quantum', 'I'), ('perturb_period', 'i'), ('limit_v0', 'I'), ('divisor', 'I'), ('flows', 'I'), ('depth', 'I'), ('headdrop', 'I'), ('limit_v1', 'I'), ('qth_min', 'I'), ('qth_max', 'I'), ('Wlog', 'B'), ('Plog', 'B'), ('Scell_log', 'B'), ('flags', 'B'), ('max_P', 'I'), ('prob_drop', 'I'), ('forced_drop', 'I'), ('prob_mark', 'I'), ('forced_mark', 'I'), ('prob_mark_head', 'I'), ('forced_mark_head', 'I')) def options(*argv, **kwarg): if kwarg.get('length', 0) >= options_sfq_v1.get_size(): return options_sfq_v1 else: return options_sfq_v0 pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_tbf.py0000644000175000017500000000227113610051400023046 0ustar peetpeet00000000000000from pyroute2.netlink.rtnl import TC_H_ROOT from pyroute2.netlink.rtnl.tcmsg.common import get_rate_parameters from pyroute2.netlink.rtnl.tcmsg.common import nla_plus_rtab parent = TC_H_ROOT def get_parameters(kwarg): parms = get_rate_parameters(kwarg) # fill parameters return {'attrs': [['TCA_TBF_PARMS', parms], ['TCA_TBF_RTAB', True]]} class options(nla_plus_rtab): nla_map = (('TCA_TBF_UNSPEC', 'none'), ('TCA_TBF_PARMS', 'tbf_parms'), ('TCA_TBF_RTAB', 'rtab'), ('TCA_TBF_PTAB', 'ptab')) class tbf_parms(nla_plus_rtab.parms): fields = (('rate_cell_log', 'B'), ('rate___reserved', 'B'), ('rate_overhead', 'H'), ('rate_cell_align', 'h'), ('rate_mpu', 'H'), ('rate', 'I'), ('peak_cell_log', 'B'), ('peak___reserved', 'B'), ('peak_overhead', 'H'), ('peak_cell_align', 'h'), ('peak_mpu', 'H'), ('peak', 'I'), ('limit', 'I'), ('buffer', 'I'), ('mtu', 'I')) pyroute2-0.5.9/pyroute2/netlink/rtnl/tcmsg/sched_template.py0000644000175000017500000000233613610051400024110 0ustar peetpeet00000000000000''' Template sched file. All the tcmsg plugins should be registered in `__init__.py`, see the `plugins` dict. All the methods, variables and classes are optional, but the naming scheme is fixed. ''' from pyroute2.netlink.rtnl.tcmsg import common from pyroute2.netlink import nla from pyroute2.netlink.rtnl import TC_H_ROOT # if you define the `parent` variable, it will be used # as the default parent value if no other value is # provided in the call options parent = TC_H_ROOT def fix_msg(kwarg, msg): ''' This method it called for all types -- classes, qdiscs and filters. Can be used to fix some `msg` fields. ''' pass def get_parameters(kwarg): ''' Called for qdiscs and filters. Should return the structure to be embedded as the qdisc parameters (`TCA_OPTIONS`). ''' return None def get_class_parameters(kwarg): ''' The same as above, but called only for classes. ''' return None class options(nla.hex): ''' The `TCA_OPTIONS` struct, by default not decoded. ''' pass class stats(nla.hex): ''' The struct to decode `TCA_XSTATS`. ''' pass class stats2(common.stats2): ''' The struct to decode `TCA_STATS2`. ''' pass pyroute2-0.5.9/pyroute2/netlink/taskstats/0000755000175000017500000000000013621220110020474 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netlink/taskstats/__init__.py0000644000175000017500000001365513610051400022621 0ustar peetpeet00000000000000''' TaskStats module ================ All that you should know about TaskStats, is that you should not use it. 
But if you have to, ok:: import os from pyroute2 import TaskStats ts = TaskStats() ts.get_pid_stat(os.getpid()) It is not implemented normally yet, but some methods are already usable. ''' from pyroute2.netlink import NLM_F_REQUEST from pyroute2.netlink import nla from pyroute2.netlink import genlmsg from pyroute2.netlink.generic import GenericNetlinkSocket TASKSTATS_CMD_UNSPEC = 0 # Reserved TASKSTATS_CMD_GET = 1 # user->kernel request/get-response TASKSTATS_CMD_NEW = 2 class tcmd(genlmsg): nla_map = (('TASKSTATS_CMD_ATTR_UNSPEC', 'none'), ('TASKSTATS_CMD_ATTR_PID', 'uint32'), ('TASKSTATS_CMD_ATTR_TGID', 'uint32'), ('TASKSTATS_CMD_ATTR_REGISTER_CPUMASK', 'asciiz'), ('TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK', 'asciiz')) class tstats(nla): pack = "struct" fields = (('version', 'H'), # 2 ('ac_exitcode', 'I'), # 4 ('ac_flag', 'B'), # 1 ('ac_nice', 'B'), # 1 --- 10 ('cpu_count', 'Q'), # 8 ('cpu_delay_total', 'Q'), # 8 ('blkio_count', 'Q'), # 8 ('blkio_delay_total', 'Q'), # 8 ('swapin_count', 'Q'), # 8 ('swapin_delay_total', 'Q'), # 8 ('cpu_run_real_total', 'Q'), # 8 ('cpu_run_virtual_total', 'Q'), # 8 ('ac_comm', '32s'), # 32 +++ 112 ('ac_sched', 'B'), # 1 ('__ac_pad', '3x'), # 3 # (the ac_uid field is aligned(8), so we add more padding) ('__implicit_pad', '4x'), # 4 ('ac_uid', 'I'), # 4 +++ 120 ('ac_gid', 'I'), # 4 ('ac_pid', 'I'), # 4 ('ac_ppid', 'I'), # 4 ('ac_btime', 'I'), # 4 +++ 136 ('ac_etime', 'Q'), # 8 +++ 144 ('ac_utime', 'Q'), # 8 ('ac_stime', 'Q'), # 8 ('ac_minflt', 'Q'), # 8 ('ac_majflt', 'Q'), # 8 ('coremem', 'Q'), # 8 ('virtmem', 'Q'), # 8 ('hiwater_rss', 'Q'), # 8 ('hiwater_vm', 'Q'), # 8 ('read_char', 'Q'), # 8 ('write_char', 'Q'), # 8 ('read_syscalls', 'Q'), # 8 ('write_syscalls', 'Q'), # 8 ('read_bytes', 'Q'), # ... ('write_bytes', 'Q'), ('cancelled_write_bytes', 'Q'), ('nvcsw', 'Q'), ('nivcsw', 'Q'), ('ac_utimescaled', 'Q'), ('ac_stimescaled', 'Q'), ('cpu_scaled_run_real_total', 'Q')) def decode(self): nla.decode(self) self['ac_comm'] = self['ac_comm'][:self['ac_comm'].find('\0')] class taskstatsmsg(genlmsg): nla_map = (('TASKSTATS_TYPE_UNSPEC', 'none'), ('TASKSTATS_TYPE_PID', 'uint32'), ('TASKSTATS_TYPE_TGID', 'uint32'), ('TASKSTATS_TYPE_STATS', 'stats'), ('TASKSTATS_TYPE_AGGR_PID', 'aggr_pid'), ('TASKSTATS_TYPE_AGGR_TGID', 'aggr_tgid')) class stats(tstats): pass # FIXME: optimize me! class aggr_id(nla): nla_map = (('TASKSTATS_TYPE_UNSPEC', 'none'), ('TASKSTATS_TYPE_PID', 'uint32'), ('TASKSTATS_TYPE_TGID', 'uint32'), ('TASKSTATS_TYPE_STATS', 'stats')) class stats(tstats): pass class aggr_pid(aggr_id): pass class aggr_tgid(aggr_id): pass class TaskStats(GenericNetlinkSocket): def __init__(self): GenericNetlinkSocket.__init__(self) def bind(self): GenericNetlinkSocket.bind(self, 'TASKSTATS', taskstatsmsg) def get_pid_stat(self, pid): ''' Get taskstats for a process. Pid should be an integer. ''' msg = tcmd() msg['cmd'] = TASKSTATS_CMD_GET msg['version'] = 1 msg['attrs'].append(['TASKSTATS_CMD_ATTR_PID', pid]) return self.nlm_request(msg, self.prid, msg_flags=NLM_F_REQUEST) def _register_mask(self, cmd, mask): msg = tcmd() msg['cmd'] = TASKSTATS_CMD_GET msg['version'] = 1 msg['attrs'].append([cmd, mask]) # there is no response to this request self.put(msg, self.prid, msg_flags=NLM_F_REQUEST) def register_mask(self, mask): ''' Start the accounting for a processors by a mask. 
Mask is a string, e.g.:: 0,1 -- first two CPUs 0-4,6-10 -- CPUs from 0 to 4 and from 6 to 10 Though the kernel has a procedure, that cleans up accounting, when it is not used, it is recommended to run deregister_mask() before process exit. ''' self._register_mask('TASKSTATS_CMD_ATTR_REGISTER_CPUMASK', mask) def deregister_mask(self, mask): ''' Stop the accounting. ''' self._register_mask('TASKSTATS_CMD_ATTR_DEREGISTER_CPUMASK', mask) pyroute2-0.5.9/pyroute2/netns/0000755000175000017500000000000013621220110016136 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netns/__init__.py0000644000175000017500000002363513617005357020303 0ustar peetpeet00000000000000''' Netns management overview ========================= Pyroute2 provides basic namespaces management support. Here's a quick overview of typical netns tasks and related pyroute2 tools. Move an interface to a namespace -------------------------------- Though this task is managed not via `netns` module, it should be mentioned here as well. To move an interface to a netns, one should provide IFLA_NET_NS_FD nla in a set link RTNL request. The nla is an open FD number, that refers to already created netns. The pyroute2 library provides also a possibility to specify not a FD number, but a netns name as a string. In that case the library will try to lookup the corresponding netns in the standard location. Create veth and move the peer to a netns with IPRoute:: from pyroute2 import IPRoute ipr = IPRoute() ipr.link('add', ifname='v0p0', kind='veth', peer='v0p1') idx = ipr.link_lookup(ifname='v0p1')[0] ipr.link('set', index=idx, net_ns_fd='netns_name') Create veth and move the peer to a netns with IPDB:: from pyroute2 import IPDB ipdb = IPDB() ipdb.create(ifname='v0p0', kind='veth', peer='v0p1').commit() with ipdb.interfaces.v0p1 as i: i.net_ns_fd = 'netns_name' Manage interfaces within a netns -------------------------------- This task can be done with `NetNS` objects. A `NetNS` object spawns a child and runs it within a netns, providing the same API as `IPRoute` does:: from pyroute2 import NetNS ns = NetNS('netns_name') # do some stuff within the netns ns.close() One can even start `IPDB` on the top of `NetNS`:: from pyroute2 import NetNS from pyroute2 import IPDB ns = NetNS('netns_name') ipdb = IPDB(nl=ns) # do some stuff within the netns ipdb.release() ns.close() Spawn a process within a netns ------------------------------ For that purpose one can use `NSPopen` API. It works just as normal `Popen`, but starts a process within a netns. List, set, create and remove netns ---------------------------------- These functions are described below. To use them, import `netns` module:: from pyroute2 import netns netns.listnetns() Please be aware, that in order to run system calls the library uses `ctypes` module. It can fail on platforms where SELinux is enforced. If the Python interpreter, loading this module, dumps the core, one can check the SELinux state with `getenforce` command. ''' import io import os import os.path import errno import ctypes import ctypes.util import pickle import struct import traceback from pyroute2 import config from pyroute2.common import basestring try: file = file except NameError: file = io.IOBase # FIXME: arch reference __NR = {'x86_': {'64bit': 308}, 'i386': {'32bit': 346}, 'i686': {'32bit': 346}, 'mips': {'32bit': 4344, '64bit': 5303}, # FIXME: NABI32? 'armv': {'32bit': 375}, 'aarc': {'32bit': 375, '64bit': 268}, # FIXME: EABI vs. OABI? 
'ppc6': {'64bit': 350}, 's390': {'64bit': 339}} __NR_setns = __NR.get(config.machine[:4], {}).get(config.arch, 308) CLONE_NEWNET = 0x40000000 MNT_DETACH = 0x00000002 MS_BIND = 4096 MS_REC = 16384 MS_SHARED = 1 << 20 NETNS_RUN_DIR = '/var/run/netns' __saved_ns = [] def _get_netnspath(name): netnspath = name dirname = os.path.dirname(name) if not dirname: netnspath = '%s/%s' % (NETNS_RUN_DIR, name) if hasattr(netnspath, 'encode'): netnspath = netnspath.encode('ascii') return netnspath def listnetns(nspath=None): ''' List available network namespaces. ''' if nspath: nsdir = nspath else: nsdir = NETNS_RUN_DIR try: return os.listdir(nsdir) except OSError as e: if e.errno == errno.ENOENT: return [] else: raise def _get_ns_by_inode(nspath=NETNS_RUN_DIR): ''' Return a dict with inode as key and namespace name as value ''' ns_by_dev_inode = {} for ns_name in listnetns(nspath=nspath): ns_path = os.path.join(nspath, ns_name) st = os.stat(ns_path) if st.st_dev not in ns_by_dev_inode: ns_by_dev_inode[st.st_dev] = {} ns_by_dev_inode[st.st_dev][st.st_ino] = ns_name return ns_by_dev_inode def ns_pids(nspath=NETNS_RUN_DIR): ''' List pids in all netns If a pid is in a unknown netns do not return it ''' result = {} ns_by_dev_inode = _get_ns_by_inode(nspath) for pid in os.listdir('/proc'): if not pid.isdigit(): continue try: st = os.stat(os.path.join('/proc', pid, 'ns', 'net')) except OSError as e: if e.errno in (errno.EACCES, errno.ENOENT): continue raise try: ns_name = ns_by_dev_inode[st.st_dev][st.st_ino] except KeyError: continue if ns_name not in result: result[ns_name] = [] result[ns_name].append(int(pid)) return result def pid_to_ns(pid=1, nspath=NETNS_RUN_DIR): ''' Return netns name which matches the given pid, None otherwise ''' try: st = os.stat(os.path.join('/proc', str(pid), 'ns', 'net')) ns_by_dev_inode = _get_ns_by_inode(nspath) return ns_by_dev_inode[st.st_dev][st.st_ino] except OSError as e: if e.errno in (errno.EACCES, errno.ENOENT): return None raise except KeyError: return None def _create(netns, libc=None): libc = libc or ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True) netnspath = _get_netnspath(netns) netnsdir = os.path.dirname(netnspath) # init netnsdir try: os.mkdir(netnsdir) except OSError as e: if e.errno != errno.EEXIST: raise # this code is ported from iproute2 done = False while libc.mount(b'', netnsdir, b'none', MS_SHARED | MS_REC, None) != 0: if done: raise OSError(ctypes.get_errno(), 'share rundir failed', netns) if libc.mount(netnsdir, netnsdir, b'none', MS_BIND | MS_REC, None) != 0: raise OSError(ctypes.get_errno(), 'mount rundir failed', netns) done = True # create mountpoint os.close(os.open(netnspath, os.O_RDONLY | os.O_CREAT | os.O_EXCL, 0)) # unshare if libc.unshare(CLONE_NEWNET) < 0: raise OSError(ctypes.get_errno(), 'unshare failed', netns) # bind the namespace if libc.mount(b'/proc/self/ns/net', netnspath, b'none', MS_BIND, None) < 0: raise OSError(ctypes.get_errno(), 'mount failed', netns) def create(netns, libc=None): ''' Create a network namespace. 
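    A minimal usage sketch (assumes the process has the privileges
    required to create network namespaces; 'test_ns' is an arbitrary
    name)::

        from pyroute2 import netns

        netns.create('test_ns')
        assert 'test_ns' in netns.listnetns()
        netns.remove('test_ns')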
''' rctl, wctl = os.pipe() pid = os.fork() if pid == 0: # child error = None try: _create(netns, libc) except Exception as e: error = e error.tb = traceback.format_exc() msg = pickle.dumps(error) os.write(wctl, struct.pack('I', len(msg))) os.write(wctl, msg) os._exit(0) else: # parent msglen = struct.unpack('I', os.read(rctl, 4))[0] error = pickle.loads(os.read(rctl, msglen)) os.close(rctl) os.close(wctl) os.waitpid(pid, 0) if error is not None: raise error def remove(netns, libc=None): ''' Remove a network namespace. ''' libc = libc or ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True) netnspath = _get_netnspath(netns) libc.umount2(netnspath, MNT_DETACH) os.unlink(netnspath) def setns(netns, flags=os.O_CREAT, libc=None): ''' Set netns for the current process. The flags semantics is the same as for the `open(2)` call: - O_CREAT -- create netns, if doesn't exist - O_CREAT | O_EXCL -- create only if doesn't exist Note that "main" netns has no name. But you can access it with:: setns('foo') # move to netns foo setns('/proc/1/ns/net') # go back to default netns See also `pushns()`/`popns()`/`dropns()` Changed in 0.5.1: the routine closes the ns fd if it's not provided via arguments. ''' newfd = False libc = libc or ctypes.CDLL(ctypes.util.find_library('c'), use_errno=True) if isinstance(netns, basestring): netnspath = _get_netnspath(netns) if os.path.basename(netns) in listnetns(os.path.dirname(netns)): if flags & (os.O_CREAT | os.O_EXCL) == (os.O_CREAT | os.O_EXCL): raise OSError(errno.EEXIST, 'netns exists', netns) else: if flags & os.O_CREAT: create(netns, libc=libc) nsfd = os.open(netnspath, os.O_RDONLY) newfd = True elif isinstance(netns, file): nsfd = netns.fileno() elif isinstance(netns, int): nsfd = netns else: raise RuntimeError('netns should be a string or an open fd') error = libc.syscall(__NR_setns, nsfd, CLONE_NEWNET) if newfd: os.close(nsfd) if error != 0: raise OSError(ctypes.get_errno(), 'failed to open netns', netns) def pushns(newns=None, libc=None): ''' Save the current netns in order to return to it later. If newns is specified, change to it:: # --> the script in the "main" netns netns.pushns("test") # --> changed to "test", the "main" is saved netns.popns() # --> "test" is dropped, back to the "main" ''' global __saved_ns __saved_ns.append(os.open('/proc/self/ns/net', os.O_RDONLY)) if newns is not None: setns(newns, libc=libc) def popns(libc=None): ''' Restore the previously saved netns. 
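    A guarded usage sketch ('test' is an arbitrary netns name; with the
    default flags it is created on `pushns()` if it does not exist yet)::

        netns.pushns('test')
        try:
            # ... do the work inside the 'test' netns ...
            pass
        finally:
            netns.popns()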
''' global __saved_ns fd = __saved_ns.pop() try: setns(fd, libc=libc) except Exception: __saved_ns.append(fd) raise os.close(fd) def dropns(libc=None): ''' Discard the last saved with `pushns()` namespace ''' global __saved_ns fd = __saved_ns.pop() try: os.close(fd) except Exception: pass pyroute2-0.5.9/pyroute2/netns/manager.py0000644000175000017500000000675613610051400020142 0ustar peetpeet00000000000000import errno from pyroute2 import netns from pyroute2 import Inotify from pyroute2 import IPRoute from pyroute2.netlink.rtnl import RTM_NEWNETNS from pyroute2.netlink.rtnl import RTM_DELNETNS from pyroute2.netlink.rtnl.nsinfmsg import nsinfmsg from pyroute2.netlink.exceptions import NetlinkError from pyroute2.netlink.exceptions import SkipInode class NetNSManager(Inotify): def __init__(self, libc=None, path=None): path = set(path or []) super(NetNSManager, self).__init__(libc, path) if not self.path: for d in ['/var/run/netns', '/var/run/docker/netns']: try: self.register_path(d) except OSError: pass self.ipr = IPRoute() self.registry = {} self.update() def update(self): self.ipr.netns_path = self.path for info in self.ipr.get_netns_info(): self.registry[info.get_attr('NSINFO_PATH')] = info def get(self): for msg in super(NetNSManager, self).get(): info = nsinfmsg() if msg is None: info['header']['error'] = NetlinkError(errno.ECONNRESET) info['header']['type'] = RTM_DELNETNS info['event'] = 'RTM_DELNETNS' yield info return path = '{path}/{name}'.format(**msg) info['header']['error'] = None if path not in self.registry: self.update() if path in self.registry: info.load(self.registry[path]) else: info['attrs'] = [('NSINFO_PATH', path)] del info['value'] if msg['mask'] & 0x200: info['header']['type'] = RTM_DELNETNS info['event'] = 'RTM_DELNETNS' elif not msg['mask'] & 0x100: continue yield info def close(self, code=None): self.ipr.close() super(NetNSManager, self).close() def create(self, path): netnspath = netns._get_netnspath(path) try: netns.create(netnspath, self.libc) except OSError as e: raise NetlinkError(e.errno) info = self.ipr._dump_one_ns(netnspath, set()) info['header']['type'] = RTM_NEWNETNS info['event'] = 'RTM_NEWNETNS' del info['value'] return info, def remove(self, path): netnspath = netns._get_netnspath(path) info = None try: info = self.ipr._dump_one_ns(netnspath, set()) except SkipInode: raise NetlinkError(errno.EEXIST) info['header']['type'] = RTM_DELNETNS info['event'] = 'RTM_DELNETNS' del info['value'] try: netns.remove(netnspath, self.libc) except OSError as e: raise NetlinkError(e.errno) return info, def netns(self, cmd, *argv, **kwarg): path = kwarg.get('path', kwarg.get('NSINFO_PATH')) if path is None: raise ValueError('netns spec is required') netnspath = netns._get_netnspath(path) if cmd == 'add': return self.create(netnspath) elif cmd == 'del': return self.remove(netnspath) elif cmd not in ('get', 'set'): raise ValueError('method not supported') for item in self.dump(): if item.get_attr('NSINFO_PATH') == netnspath: return (item, ) return tuple() def dump(self): return self.ipr.get_netns_info() pyroute2-0.5.9/pyroute2/netns/nslink.py0000644000175000017500000001442213617617476020047 0ustar peetpeet00000000000000''' NetNS objects ============= A NetNS object is IPRoute-like. It runs in the main network namespace, but also creates a proxy process running in the required netns. All the netlink requests are done via that proxy process. 
NetNS supports standard IPRoute API, so can be used instead of IPRoute, e.g., in IPDB:: # start the main network settings database: ipdb_main = IPDB() # start the same for a netns: ipdb_test = IPDB(nl=NetNS('test')) # create VETH ipdb_main.create(ifname='v0p0', kind='veth', peer='v0p1').commit() # move peer VETH into the netns with ipdb_main.interfaces.v0p1 as veth: veth.net_ns_fd = 'test' # please keep in mind, that netns move clears all the settings # on a VETH interface pair, so one should run netns assignment # as a separate operation only # assign addresses # please notice, that `v0p1` is already in the `test` netns, # so should be accessed via `ipdb_test` with ipdb_main.interfaces.v0p0 as veth: veth.add_ip('172.16.200.1/24') veth.up() with ipdb_test.interfaces.v0p1 as veth: veth.add_ip('172.16.200.2/24') veth.up() Please review also the test code, under `tests/test_netns.py` for more examples. By default, NetNS creates requested netns, if it doesn't exist, or uses existing one. To control this behaviour, one can use flags as for `open(2)` system call:: # create a new netns or fail, if it already exists netns = NetNS('test', flags=os.O_CREAT | os.O_EXCL) # create a new netns or use existing one netns = NetNS('test', flags=os.O_CREAT) # the same as above, the default behaviour netns = NetNS('test') To remove a network namespace:: from pyroute2 import NetNS netns = NetNS('test') netns.close() netns.remove() One should stop it first with `close()`, and only after that run `remove()`. ''' import os import errno import signal import atexit import logging from functools import partial from pyroute2.netlink.rtnl.iprsocket import MarshalRtnl from pyroute2.iproute import RTNL_API from pyroute2.netns import setns from pyroute2.netns import remove from pyroute2.remote import Server from pyroute2.remote import Transport from pyroute2.remote import RemoteSocket log = logging.getLogger(__name__) class FD(object): def __init__(self, fd): self.fd = fd for name in ('read', 'write', 'close'): setattr(self, name, partial(getattr(os, name), self.fd)) def fileno(self): return self.fd def flush(self): return None class NetNS(RTNL_API, RemoteSocket): ''' NetNS is the IPRoute API with network namespace support. **Why not IPRoute?** The task to run netlink commands in some network namespace, being in another network namespace, requires the architecture, that differs too much from a simple Netlink socket. NetNS starts a proxy process in a network namespace and uses `multiprocessing` communication channels between the main and the proxy processes to route all `recv()` and `sendto()` requests/responses. **Any specific API calls?** Nope. `NetNS` supports all the same, that `IPRoute` does, in the same way. It provides full `socket`-compatible API and can be used in poll/select as well. The only difference is the `close()` call. In the case of `NetNS` it is **mandatory** to close the socket before exit. **NetNS and IPDB** It is possible to run IPDB with NetNS:: from pyroute2 import NetNS from pyroute2 import IPDB ip = IPDB(nl=NetNS('somenetns')) ... ip.release() Do not forget to call `release()` when the work is done. It will shut down `NetNS` instance as well. 
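    A minimal standalone sketch (the netns 'test' is created if it does
    not exist yet, see the flags description above)::

        from pyroute2 import NetNS

        ns = NetNS('test')
        print(ns.get_links())    # the same API as IPRoute provides
        ns.close()               # mandatory before exit
        ns.remove()              # optionally drop the netns itself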
''' def __init__(self, netns, flags=os.O_CREAT): self.netns = netns self.flags = flags trnsp_in, self.remote_trnsp_out = [Transport(FD(x)) for x in os.pipe()] self.remote_trnsp_in, trnsp_out = [Transport(FD(x)) for x in os.pipe()] self.child = os.fork() if self.child == 0: # child process trnsp_in.close() trnsp_out.close() trnsp_in.file_obj.close() trnsp_out.file_obj.close() try: setns(self.netns, self.flags) except OSError as e: (self .remote_trnsp_out .send({'stage': 'init', 'error': e})) os._exit(e.errno) except Exception as e: (self. remote_trnsp_out .send({'stage': 'init', 'error': OSError(errno.ECOMM, str(e), self.netns)})) os._exit(255) try: Server(self.remote_trnsp_in, self.remote_trnsp_out) finally: os._exit(0) try: self.remote_trnsp_in.close() self.remote_trnsp_out.close() super(NetNS, self).__init__(trnsp_in, trnsp_out) except Exception: self.close() raise atexit.register(self.close) self.marshal = MarshalRtnl() def clone(self): return type(self)(self.netns, self.flags) def _cleanup_atexit(self): if hasattr(atexit, 'unregister'): atexit.unregister(self.close) else: try: atexit._exithandlers.remove((self.close, (), {})) except ValueError: pass def close(self, code=errno.ECONNRESET): self._cleanup_atexit() try: super(NetNS, self).close(code=code) except: # something went wrong, force server shutdown try: self.trnsp_out.send({'stage': 'shutdown'}) except Exception: pass log.error('forced shutdown procedure, clean up netns manually') try: os.kill(self.child, signal.SIGTERM) os.waitpid(self.child, 0) except OSError: pass def post_init(self): pass def remove(self): ''' Try to remove this network namespace from the system. ''' remove(self.netns) pyroute2-0.5.9/pyroute2/netns/process/0000755000175000017500000000000013621220110017614 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/netns/process/__init__.py0000644000175000017500000000325013610051400021727 0ustar peetpeet00000000000000import fcntl import types import subprocess from pyroute2.common import file def _map_api(api, obj): for attr_name in dir(obj): attr = getattr(obj, attr_name) api[attr_name] = {'api': None} api[attr_name]['callable'] = hasattr(attr, '__call__') api[attr_name]['doc'] = attr.__doc__ \ if hasattr(attr, '__doc__') else None class MetaPopen(type): ''' API definition for NSPopen. All this stuff is required to make `help()` function happy. ''' def __init__(cls, *argv, **kwarg): super(MetaPopen, cls).__init__(*argv, **kwarg) # copy docstrings and create proxy slots cls.api = {} _map_api(cls.api, subprocess.Popen) for fname in ('stdin', 'stdout', 'stderr'): m = {} cls.api[fname] = {'callable': False, 'api': m} _map_api(m, file) for ename in ('fcntl', 'ioctl', 'flock', 'lockf'): m[ename] = {'api': None, 'callable': True, 'doc': getattr(fcntl, ename).__doc__} def __dir__(cls): return list(cls.api.keys()) + ['release'] def __getattribute__(cls, key): try: return type.__getattribute__(cls, key) except AttributeError: attr = getattr(subprocess.Popen, key) if isinstance(attr, (types.MethodType, types.FunctionType)): def proxy(*argv, **kwarg): return attr(*argv, **kwarg) proxy.__doc__ = attr.__doc__ proxy.__objclass__ = cls return proxy else: return attr pyroute2-0.5.9/pyroute2/netns/process/proxy.py0000644000175000017500000002253713610051400021362 0ustar peetpeet00000000000000''' NSPopen ======= The `NSPopen` class has nothing to do with netlink at all, but it is required to have a reasonable network namespace support. 
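A quick usage sketch (the netns 'nsname' must already exist, or pass
`flags=os.O_CREAT` to create it; see the `NSPopen` docstring below)::

    from subprocess import PIPE
    from pyroute2 import NSPopen

    nsp = NSPopen('nsname', ['ip', 'link', 'show'], stdout=PIPE)
    print(nsp.stdout.read())    # stdout is a proxied file object
    nsp.wait()
    nsp.release()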
''' import sys import fcntl import types import atexit import threading import subprocess from pyroute2 import config from pyroute2.netns import setns from pyroute2.common import metaclass from pyroute2.netns.process import MetaPopen def _handle(result): if result['code'] == 500: raise result['data'] elif result['code'] == 200: return result['data'] else: raise TypeError('unsupported return code') def _make_fcntl(prime, target): def func(*argv, **kwarg): return target(prime.fileno(), *argv, **kwarg) return func def _make_func(target): def func(*argv, **kwarg): return target(*argv, **kwarg) return func def _make_property(name): def func(self): return getattr(self.prime, name) return property(func) class NSPopenFile(object): def __init__(self, prime): self.prime = prime for aname in dir(prime): if aname.startswith('_'): continue target = getattr(prime, aname) if isinstance(target, (types.BuiltinMethodType, types.MethodType)): func = _make_func(target) func.__name__ = aname func.__doc__ = getattr(target, '__doc__', '') setattr(self, aname, func) del func else: setattr(self.__class__, aname, _make_property(aname)) for fname in ('fcntl', 'ioctl', 'flock', 'lockf'): target = getattr(fcntl, fname) func = _make_fcntl(prime, target) func.__name__ = fname func.__doc__ = getattr(target, '__doc__', '') setattr(self, fname, func) del func def NSPopenServer(nsname, flags, channel_in, channel_out, argv, kwarg): # set netns try: setns(nsname, flags=flags) except Exception as e: channel_out.put(e) return # create the Popen object child = subprocess.Popen(*argv, **kwarg) for fname in ['stdout', 'stderr', 'stdin']: obj = getattr(child, fname) if obj is not None: fproxy = NSPopenFile(obj) setattr(child, fname, fproxy) # send the API map channel_out.put(None) while True: # synchronous mode # 1. get the command from the API try: call = channel_in.get() except: (et, ev, tb) = sys.exc_info() try: channel_out.put({'code': 500, 'data': ev}) except: pass break # 2. stop? if call['name'] == 'release': break # 3. run the call try: # get the object namespace ns = call.get('namespace') obj = child if ns: for step in ns.split('.'): obj = getattr(obj, step) attr = getattr(obj, call['name']) if isinstance(attr, (types.MethodType, types.FunctionType, types.BuiltinMethodType)): result = attr(*call['argv'], **call['kwarg']) else: result = attr channel_out.put({'code': 200, 'data': result}) except: (et, ev, tb) = sys.exc_info() channel_out.put({'code': 500, 'data': ev}) child.wait() class ObjNS(object): ns = None def __enter__(self): pass def __exit__(self, exc_type, exc_value, traceback): pass def __getattribute__(self, key): try: return object.__getattribute__(self, key) except AttributeError: with self.lock: if self.released: raise RuntimeError('the object is released') if (self.api.get(key) and self.api[key]['callable']): def proxy(*argv, **kwarg): self.channel_out.put({'name': key, 'argv': argv, 'namespace': self.ns, 'kwarg': kwarg}) return _handle(self.channel_in.get()) if key in self.api: proxy.__doc__ = self.api[key]['doc'] return proxy else: if key in ('stdin', 'stdout', 'stderr'): objns = ObjNS() objns.ns = key objns.api = self.api.get(key, {}).get('api', {}) objns.channel_out = self.channel_out objns.channel_in = self.channel_in objns.released = self.released objns.lock = self.lock return objns else: self.channel_out.put({'name': key, 'namespace': self.ns}) return _handle(self.channel_in.get()) @metaclass(MetaPopen) class NSPopen(ObjNS): ''' A proxy class to run `Popen()` object in some network namespace. 
Sample to run `ip ad` command in `nsname` network namespace:: nsp = NSPopen('nsname', ['ip', 'ad'], stdout=subprocess.PIPE) print(nsp.communicate()) nsp.wait() nsp.release() The `NSPopen` class was intended to be a drop-in replacement for the `Popen` class, but there are still some important differences. The `NSPopen` object implicitly spawns a child python process to be run in the background in a network namespace. The target process specified as the argument of the `NSPopen` will be started in its turn from this child. Thus all the fd numbers of the running `NSPopen` object are meaningless in the context of the main process. Trying to operate on them, one will get 'Bad file descriptor' in the best case or a system call working on a wrong file descriptor in the worst case. A possible solution would be to transfer file descriptors between the `NSPopen` object and the main process, but it is not implemented yet. The process' diagram for `NSPopen('test', ['ip', 'ad'])`:: +---------------------+ +--------------+ +------------+ | main python process |<--->| child python |<--->| netns test | | NSPopen() | | Popen() | | $ ip ad | +---------------------+ +--------------+ +------------+ As a workaround for the issue with file descriptors, some additional methods are available on file objects `stdin`, `stdout` and `stderr`. E.g., one can run fcntl calls:: from fcntl import F_GETFL from pyroute2 import NSPopen from subprocess import PIPE proc = NSPopen('test', ['my_program'], stdout=PIPE) flags = proc.stdout.fcntl(F_GETFL) In that way one can use `fcntl()`, `ioctl()`, `flock()` and `lockf()` calls. Another additional method is `release()`, which can be used to explicitly stop the proxy process and release all the resources. ''' def __init__(self, nsname, *argv, **kwarg): ''' The only differences from the `subprocess.Popen` init are: * `nsname` -- network namespace name * `flags` keyword argument All other arguments are passed directly to `subprocess.Popen`. Flags usage samples. Create a network namespace, if it doesn't exist yet:: import os nsp = NSPopen('nsname', ['command'], flags=os.O_CREAT) Create a network namespace only if it doesn't exist, otherwise fail and raise an exception:: import os nsp = NSPopen('nsname', ['command'], flags=os.O_CREAT | os.O_EXCL) ''' # create a child self.nsname = nsname if 'flags' in kwarg: self.flags = kwarg.pop('flags') else: self.flags = 0 self.channel_out = config.MpQueue() self.channel_in = config.MpQueue() self.lock = threading.Lock() self.released = False self.server = config.MpProcess(target=NSPopenServer, args=(self.nsname, self.flags, self.channel_out, self.channel_in, argv, kwarg)) # start the child and check the status self.server.start() response = self.channel_in.get() if isinstance(response, Exception): self.server.join() raise response else: atexit.register(self.release) def release(self): ''' Explicitly stop the proxy process and release all the resources. The `NSPopen` object can not be used after the `release()` call. 
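        A sketch of the expected behaviour ('nsname' must be an
        existing netns)::

            nsp = NSPopen('nsname', ['ip', 'ad'], stdout=subprocess.PIPE)
            nsp.communicate()
            nsp.wait()
            nsp.release()
            # any further access, e.g. nsp.poll(), raises
            # RuntimeError: the object is released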
''' with self.lock: if self.released: return self.released = True self.channel_out.put({'name': 'release'}) self.channel_out.close() self.channel_in.close() self.server.join() def __dir__(self): return list(self.api.keys()) + ['release'] pyroute2-0.5.9/pyroute2/nftables/0000755000175000017500000000000013621220110016605 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/nftables/__init__.py0000644000175000017500000000000013610051400020706 0ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/nftables/expressions.py0000644000175000017500000000470513610051400021551 0ustar peetpeet00000000000000import struct import socket from collections import OrderedDict ## # Utility functions ## def get_mask(addr): if not addr: return [None, None] ret = addr.split('/') if len(ret) == 2: return ret else: return [addr, None] ## # Expressions generators ## def genex(name, kwarg): exp_data = [] for key, value in kwarg.items(): exp_data.append(('NFTA_%s_%s' % (name.upper(), key.upper()), value)) return {'attrs': [('NFTA_EXPR_NAME', name), ('NFTA_EXPR_DATA', {'attrs': exp_data})]} def verdict(code): kwarg = OrderedDict() kwarg['dreg'] = 0 # NFT_REG_VERDICT kwarg['data'] = {'attrs': [('NFTA_DATA_VERDICT', {'attrs': [('NFTA_VERDICT_CODE', code)]})]} return [genex('immediate', kwarg), ] def ipv4addr(src=None, dst=None): if not src and not dst: raise ValueError('must be at least one of src, dst') ret = [] # get masks src, src_mask = get_mask(src) dst, dst_mask = get_mask(dst) # load address(es) into NFT_REG_1 kwarg = OrderedDict() kwarg['dreg'] = 1 # save to NFT_REG_1 kwarg['base'] = 1 # NFT_PAYLOAD_NETWORK_HEADER kwarg['offset'] = 12 if src else 16 kwarg['len'] = 8 if (src and dst) else 4 ret.append(genex('payload', kwarg)) # run bitwise with masks -- if provided if src_mask or dst_mask: mask = b'' if src: if not src_mask: src_mask = '32' src_mask = int('1' * int(src_mask), 2) mask += struct.pack('I', src_mask) if dst: if not dst_mask: dst_mask = '32' dst_mask = int('1' * int(dst_mask), 2) mask += struct.pack('I', dst_mask) xor = '\x00' * len(mask) kwarg = OrderedDict() kwarg['sreg'] = 1 # read from NFT_REG_1 kwarg['dreg'] = 1 # save to NFT_REG_1 kwarg['len'] = 8 if (src and dst) else 4 kwarg['mask'] = {'attrs': [('NFTA_DATA_VALUE', mask)]} kwarg['xor'] = {'attrs': [('NFTA_DATA_VALUE', xor)]} ret.append(genex('bitwise', kwarg)) # run cmp packed = b'' if src: packed += socket.inet_aton(src) if dst: packed += socket.inet_aton(dst) kwarg = OrderedDict() kwarg['sreg'] = 1 # read from NFT_REG_1 kwarg['op'] = 0 # NFT_CMP_EQ kwarg['data'] = {'attrs': [('NFTA_DATA_VALUE', packed)]} ret.append(genex('cmp', kwarg)) return ret pyroute2-0.5.9/pyroute2/nftables/main.py0000644000175000017500000000617713610051400020120 0ustar peetpeet00000000000000''' ''' from pyroute2.netlink.nfnetlink import nfgen_msg from pyroute2.netlink.nfnetlink.nftsocket import \ (NFTSocket, nft_table_msg, nft_chain_msg, nft_rule_msg, NFT_MSG_NEWTABLE, NFT_MSG_GETTABLE, NFT_MSG_DELTABLE, NFT_MSG_NEWCHAIN, NFT_MSG_GETCHAIN, NFT_MSG_DELCHAIN, NFT_MSG_NEWRULE, NFT_MSG_GETRULE, NFT_MSG_DELRULE, NFT_MSG_GETSET) class NFTables(NFTSocket): # TODO: documentation # TODO: tests # TODO: dump()/load() with support for json and xml def get_tables(self): return self.request_get(nfgen_msg(), NFT_MSG_GETTABLE) def get_chains(self): return self.request_get(nfgen_msg(), NFT_MSG_GETCHAIN) def get_rules(self): return self.request_get(nfgen_msg(), NFT_MSG_GETRULE) def get_sets(self): return self.request_get(nfgen_msg(), NFT_MSG_GETSET) # # The nft API is in the prototype 
stage and may be # changed until the release. The planned release for # the API is 0.5.2 # def table(self, cmd, **kwarg): ''' Example:: nft.table('add', name='test0') ''' commands = {'add': NFT_MSG_NEWTABLE, 'del': NFT_MSG_DELTABLE} # fix default kwargs if 'flags' not in kwarg: kwarg['flags'] = 0 return self._command(nft_table_msg, commands, cmd, kwarg) def chain(self, cmd, **kwarg): ''' Example:: # # default policy 'drop' for input # nft.chain('add', table='test0', name='test_chain0', hook='input', type='filter', policy=0) ''' commands = {'add': NFT_MSG_NEWCHAIN, 'del': NFT_MSG_DELCHAIN} # TODO: cover all 6 hooks hooks = {'input': 1, 'forward': 2, 'output': 3} if 'hook' in kwarg: kwarg['hook'] = {'attrs': [['NFTA_HOOK_HOOKNUM', hooks[kwarg['hook']]], ['NFTA_HOOK_PRIORITY', 0]]} if 'type' not in kwarg: kwarg['type'] = 'filter' return self._command(nft_chain_msg, commands, cmd, kwarg) def rule(self, cmd, **kwarg): ''' Example:: from pyroute2.nftables.expressions import ipv4addr, verdict # # allow all traffic from 192.168.0.0/24 # nft.rule('add', table='test0', chain='test_chain0', expressions=(ipv4addr(src='192.168.0.0/24'), verdict(code=1))) ''' # TODO: more operations commands = {'add': NFT_MSG_NEWRULE, 'del': NFT_MSG_DELRULE} if 'expressions' in kwarg: expressions = [] for exp in kwarg['expressions']: expressions.extend(exp) kwarg['expressions'] = expressions # FIXME: flags!!! return self._command(nft_rule_msg, commands, cmd, kwarg, flags=3585) pyroute2-0.5.9/pyroute2/nftables/rule.py0000644000175000017500000000742513610051400020140 0ustar peetpeet00000000000000from pyroute2.nftables.parser.parser import nfta_nla_parser, conv_map_tuple from pyroute2.nftables.parser.expr import ( get_expression_from_netlink, get_expression_from_dict) NAME_2_NFPROTO = { "unspec": 0, "inet": 1, "ipv4": 2, "arp": 3, "netdev": 5, "bridge": 7, "ipv6": 10, "decnet": 12, } NFPROTO_2_NAME = {v: k for k, v in NAME_2_NFPROTO.items()} class NFTRule(nfta_nla_parser): conv_maps = ( conv_map_tuple('family', 'nfgen_family', 'family', 'nfproto'), conv_map_tuple('table', 'NFTA_RULE_TABLE', 'table', 'raw'), conv_map_tuple('chain', 'NFTA_RULE_CHAIN', 'chain', 'raw'), conv_map_tuple('handle', 'NFTA_RULE_HANDLE', 'handle', 'raw'), conv_map_tuple('expressions', 'NFTA_RULE_EXPRESSIONS', 'expr', 'expressions_list'), conv_map_tuple('compat', 'NFTA_RULE_COMPAT', 'compat', 'raw'), conv_map_tuple('position', 'NFTA_RULE_POSITION', 'position', 'raw'), conv_map_tuple('userdata', 'NFTA_RULE_USERDATA', 'userdata', 'user_data'), conv_map_tuple('rule_id', 'NFTA_RULE_ID', 'rule_id', 'raw'), conv_map_tuple('position_id', 'NFTA_RULE_POSITION_ID', 'position_id', 'raw'), ) @classmethod def from_netlink(cls, ndmsg): obj = super(NFTRule, cls).from_netlink(ndmsg) obj.family = cls.cparser_nfproto.from_netlink( ndmsg['nfgen_family']) return obj class cparser_user_data(object): def __init__(self, udata_type, value): self.type = udata_type self.value = value @classmethod def from_netlink(cls, userdata): userdata = [int(d, 16) for d in userdata.split(':')] udata_type = userdata[0] udata_len = userdata[1] udata_value = ''.join([chr(d) for d in userdata[2:udata_len + 2]]) if udata_type == 0: # 0 == COMMENT return cls('comment', udata_value) raise NotImplementedError("userdata type: {0}".format(udata_type)) @staticmethod def to_netlink(udata): if udata.type == 'comment': userdata = '00:' else: raise NotImplementedError( "userdata type: {0}".format(udata.type)) userdata += "%0.2X:" % len(udata.value) userdata += ':'.join(["%0.2X" % ord(d) for d in 
udata.value]) return userdata @staticmethod def to_dict(udata): # Currently nft command to not export userdata to dict return None if udata.type == "comment": return {"type": "comment", "value": udata.value} raise NotImplementedError("userdata type: {0}".format(udata.type)) @classmethod def from_dict(cls, d): # See to_dict() method return None class cparser_expressions_list(object): @staticmethod def from_netlink(expressions): return [get_expression_from_netlink(e) for e in expressions] @staticmethod def to_netlink(expressions): return [e.to_netlink() for e in expressions] @staticmethod def from_dict(expressions): return [get_expression_from_dict(e) for e in expressions] @staticmethod def to_dict(expressions): return [e.to_dict() for e in expressions] class cparser_nfproto(object): @staticmethod def from_netlink(val): return NFPROTO_2_NAME[val] @staticmethod def to_netlink(val): return NAME_2_NFPROTO[val] @staticmethod def from_dict(val): return val @staticmethod def to_dict(val): return val pyroute2-0.5.9/pyroute2/protocols/0000755000175000017500000000000013621220110017033 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/protocols/__init__.py0000644000175000017500000002117713610051400021156 0ustar peetpeet00000000000000import struct from socket import inet_ntop from socket import inet_pton from socket import AF_INET from pyroute2.common import basestring # # IEEE = 802.3 Ethernet magic constants. The frame sizes omit # the preamble and FCS/CRC (frame check sequence). # ETH_ALEN = 6 # Octets in one ethernet addr ETH_HLEN = 14 # Total octets in header. ETH_ZLEN = 60 # Min. octets in frame sans FCS ETH_DATA_LEN = 1500 # Max. octets in payload ETH_FRAME_LEN = 1514 # Max. octets in frame sans FCS ETH_FCS_LEN = 4 # Octets in the FCS # # These are the defined Ethernet Protocol ID's. # ETH_P_LOOP = 0x0060 # Ethernet Loopback packet ETH_P_PUP = 0x0200 # Xerox PUP packet ETH_P_PUPAT = 0x0201 # Xerox PUP Addr Trans packet ETH_P_IP = 0x0800 # Internet Protocol packet ETH_P_X25 = 0x0805 # CCITT X.25 ETH_P_ARP = 0x0806 # Address Resolution packet ETH_P_BPQ = 0x08FF # G8BPQ AX.25 Ethernet Packet # ^^^ [ NOT AN OFFICIALLY REGISTERED ID ] ETH_P_IEEEPUP = 0x0a00 # Xerox IEEE802.3 PUP packet ETH_P_IEEEPUPAT = 0x0a01 # Xerox IEEE802.3 PUP Addr Trans packet ETH_P_DEC = 0x6000 # DEC Assigned proto ETH_P_DNA_DL = 0x6001 # DEC DNA Dump/Load ETH_P_DNA_RC = 0x6002 # DEC DNA Remote Console ETH_P_DNA_RT = 0x6003 # DEC DNA Routing ETH_P_LAT = 0x6004 # DEC LAT ETH_P_DIAG = 0x6005 # DEC Diagnostics ETH_P_CUST = 0x6006 # DEC Customer use ETH_P_SCA = 0x6007 # DEC Systems Comms Arch ETH_P_TEB = 0x6558 # Trans Ether Bridging ETH_P_RARP = 0x8035 # Reverse Addr Res packet ETH_P_ATALK = 0x809B # Appletalk DDP ETH_P_AARP = 0x80F3 # Appletalk AARP ETH_P_8021Q = 0x8100 # = 802.1Q VLAN Extended Header ETH_P_IPX = 0x8137 # IPX over DIX ETH_P_IPV6 = 0x86DD # IPv6 over bluebook ETH_P_PAUSE = 0x8808 # IEEE Pause frames. See = 802.3 = 31B ETH_P_SLOW = 0x8809 # Slow Protocol. 
See = 802.3ad = 43B ETH_P_WCCP = 0x883E # Web-cache coordination protocol # defined in draft-wilson-wrec-wccp-v2-00.txt ETH_P_PPP_DISC = 0x8863 # PPPoE discovery messages ETH_P_PPP_SES = 0x8864 # PPPoE session messages ETH_P_MPLS_UC = 0x8847 # MPLS Unicast traffic ETH_P_MPLS_MC = 0x8848 # MPLS Multicast traffic ETH_P_ATMMPOA = 0x884c # MultiProtocol Over ATM ETH_P_LINK_CTL = 0x886c # HPNA, wlan link local tunnel ETH_P_ATMFATE = 0x8884 # Frame-based ATM Transport over Ethernet ETH_P_PAE = 0x888E # Port Access Entity (IEEE = 802.1X) ETH_P_AOE = 0x88A2 # ATA over Ethernet ETH_P_8021AD = 0x88A8 # = 802.1ad Service VLAN ETH_P_802_EX1 = 0x88B5 # = 802.1 Local Experimental = 1. ETH_P_TIPC = 0x88CA # TIPC ETH_P_8021AH = 0x88E7 # = 802.1ah Backbone Service Tag ETH_P_1588 = 0x88F7 # IEEE = 1588 Timesync ETH_P_FCOE = 0x8906 # Fibre Channel over Ethernet ETH_P_TDLS = 0x890D # TDLS ETH_P_FIP = 0x8914 # FCoE Initialization Protocol ETH_P_QINQ1 = 0x9100 # deprecated QinQ VLAN # ^^^ [ NOT AN OFFICIALLY REGISTERED ID ] ETH_P_QINQ2 = 0x9200 # deprecated QinQ VLAN # ^^^ [ NOT AN OFFICIALLY REGISTERED ID ] ETH_P_QINQ3 = 0x9300 # deprecated QinQ VLAN # ^^^ [ NOT AN OFFICIALLY REGISTERED ID ] ETH_P_EDSA = 0xDADA # Ethertype DSA # ^^^ [ NOT AN OFFICIALLY REGISTERED ID ] ETH_P_AF_IUCV = 0xFBFB # IBM af_iucv # ^^^ [ NOT AN OFFICIALLY REGISTERED ID ] # # Non DIX types. Won't clash for = 1500 types. # ETH_P_802_3 = 0x0001 # Dummy type for = 802.3 frames ETH_P_AX25 = 0x0002 # Dummy protocol id for AX.25 ETH_P_ALL = 0x0003 # Every packet (be careful!!!) ETH_P_802_2 = 0x0004 # = 802.2 frames ETH_P_SNAP = 0x0005 # Internal only ETH_P_DDCMP = 0x0006 # DEC DDCMP: Internal only ETH_P_WAN_PPP = 0x0007 # Dummy type for WAN PPP frames*/ ETH_P_PPP_MP = 0x0008 # Dummy type for PPP MP frames ETH_P_LOCALTALK = 0x0009 # Localtalk pseudo type ETH_P_CAN = 0x000C # Controller Area Network ETH_P_PPPTALK = 0x0010 # Dummy type for Atalk over PPP*/ ETH_P_TR_802_2 = 0x0011 # = 802.2 frames ETH_P_MOBITEX = 0x0015 # Mobitex (kaz@cafe.net) ETH_P_CONTROL = 0x0016 # Card specific control frames ETH_P_IRDA = 0x0017 # Linux-IrDA ETH_P_ECONET = 0x0018 # Acorn Econet ETH_P_HDLC = 0x0019 # HDLC frames ETH_P_ARCNET = 0x001A # = 1A for ArcNet :-) ETH_P_DSA = 0x001B # Distributed Switch Arch. 
ETH_P_TRAILER = 0x001C # Trailer switch tagging ETH_P_PHONET = 0x00F5 # Nokia Phonet frames ETH_P_IEEE802154 = 0x00F6 # IEEE802.15.4 frame ETH_P_CAIF = 0x00F7 # ST-Ericsson CAIF protocol class msg(dict): buf = None data_len = None fields = () _fields_names = () types = {'uint8': 'B', 'uint16': 'H', 'uint32': 'I', 'be16': '>H', 'ip4addr': {'format': '4s', 'decode': lambda x: inet_ntop(AF_INET, x), 'encode': lambda x: [inet_pton(AF_INET, x)]}, 'l2addr': {'format': '6B', 'decode': lambda x: ':'.join(['%x' % i for i in x]), 'encode': lambda x: [int(i, 16) for i in x.split(':')]}, 'l2paddr': {'format': '6B10s', 'decode': lambda x: ':'.join(['%x' % i for i in x[:6]]), 'encode': lambda x: [int(i, 16) for i in x.split(':')] + [10 * b'\x00']}} def __init__(self, content=None, buf=b'', offset=0, value=None): content = content or {} dict.__init__(self, content) self.buf = buf self.offset = offset self.value = value self._register_fields() def _register_fields(self): self._fields_names = tuple([x[0] for x in self.fields]) def _get_routine(self, mode, fmt): fmt = self.types.get(fmt, fmt) if isinstance(fmt, dict): return (fmt['format'], fmt.get(mode, lambda x: x)) else: return (fmt, lambda x: x) def reset(self): self.buf = b'' def decode(self): self._register_fields() for field in self.fields: name, sfmt = field[:2] fmt, routine = self._get_routine('decode', sfmt) size = struct.calcsize(fmt) value = struct.unpack(fmt, self.buf[self.offset: self.offset + size]) if len(value) == 1: value = value[0] if isinstance(value, basestring) and sfmt[-1] == 's': value = value[:value.find(b'\x00')] self[name] = routine(value) self.offset += size return self def encode(self): self._register_fields() for field in self.fields: name, fmt = field[:2] default = b'\x00' if len(field) <= 2 else field[2] fmt, routine = self._get_routine('encode', fmt) # special case: string if fmt == 'string': self.buf += routine(self[name])[0] else: size = struct.calcsize(fmt) if self[name] is None: if not isinstance(default, basestring): self.buf += struct.pack(fmt, default) else: self.buf += default * (size // len(default)) else: value = routine(self[name]) if not isinstance(value, (set, tuple, list)): value = [value] self.buf += struct.pack(fmt, *value) return self def __getitem__(self, key): try: return dict.__getitem__(self, key) except KeyError: if key in self._fields_names: return None raise class ethmsg(msg): fields = (('dst', 'l2addr'), ('src', 'l2addr'), ('type', 'be16')) class ip6msg(msg): fields = (('version', 'uint8', 6 << 4), ('_flow0', 'uint8'), ('_flow1', 'uint8'), ('_flow2', 'uint8'), ('plen', 'uin16'), ('next_header', 'uint8'), ('hop_limit', 'uint8'), ('src', 'ip6addr'), ('dst', 'ip6addr')) class ip4msg(msg): fields = (('verlen', 'uint8', 0x45), ('dsf', 'uint8'), ('len', 'be16'), ('id', 'be16'), ('flags', 'uint16'), ('ttl', 'uint8', 128), ('proto', 'uint8'), ('csum', 'be16'), ('src', 'ip4addr'), ('dst', 'ip4addr')) class udp4_pseudo_header(msg): fields = (('src', 'ip4addr'), ('dst', 'ip4addr'), ('pad', 'uint8'), ('proto', 'uint8', 17), ('len', 'be16')) class udpmsg(msg): fields = (('sport', 'be16'), ('dport', 'be16'), ('len', 'be16'), ('csum', 'be16')) pyroute2-0.5.9/pyroute2/protocols/icmp.py0000644000175000017500000000062013610051400020335 0ustar peetpeet00000000000000from pyroute2.protocols import msg class icmpmsg(msg): fields = [('type', 'uint8'), ('code', 'uint8'), ('csum', 'be32')] class icmp_router_adv(icmpmsg): fields = icmpmsg.fields + [('addrs_num', 'uint8'), ('alen', 'uint8'), ('lifetime', 'be32'), ('addrs', 
'routers')] pyroute2-0.5.9/pyroute2/protocols/rawsocket.py0000644000175000017500000000766513610051400021427 0ustar peetpeet00000000000000import struct from ctypes import Structure from ctypes import addressof from ctypes import string_at from ctypes import sizeof from ctypes import c_ushort from ctypes import c_ubyte from ctypes import c_uint from ctypes import c_void_p from socket import socket from socket import htons from socket import AF_PACKET from socket import SOCK_RAW from socket import SOL_SOCKET from socket import error from socket import errno from pyroute2 import IPRoute ETH_P_ALL = 3 SO_ATTACH_FILTER = 26 SO_DETACH_FILTER = 27 total_filter = [[0x06, 0, 0, 0]] class sock_filter(Structure): _fields_ = [('code', c_ushort), # u16 ('jt', c_ubyte), # u8 ('jf', c_ubyte), # u8 ('k', c_uint)] # u32 class sock_fprog(Structure): _fields_ = [('len', c_ushort), ('filter', c_void_p)] def compile_bpf(code): ProgramType = sock_filter * len(code) program = ProgramType(*[sock_filter(*line) for line in code]) sfp = sock_fprog(len(code), addressof(program[0])) return string_at(addressof(sfp), sizeof(sfp)), program class RawSocket(socket): ''' This raw socket binds to an interface and optionally installs a BPF filter. When created, the socket's buffer is cleared to remove packets that arrived before bind() or the BPF filter is installed. Doing so requires calling recvfrom() which may raise an exception if the interface is down. In order to allow creating the socket when the interface is down, the ENETDOWN exception is caught and discarded. ''' fprog = None def __init__(self, ifname, bpf=None): self.ifname = ifname # lookup the interface details with IPRoute() as ip: for link in ip.get_links(): if link.get_attr('IFLA_IFNAME') == ifname: break else: raise IOError(2, 'Link not found') self.l2addr = link.get_attr('IFLA_ADDRESS') self.ifindex = link['index'] # bring up the socket socket.__init__(self, AF_PACKET, SOCK_RAW, htons(ETH_P_ALL)) socket.bind(self, (self.ifname, ETH_P_ALL)) if bpf: self.clear_buffer() fstring, self.fprog = compile_bpf(bpf) socket.setsockopt(self, SOL_SOCKET, SO_ATTACH_FILTER, fstring) else: self.clear_buffer(remove_total_filter=True) def clear_buffer(self, remove_total_filter=False): # there is a window of time after the socket has been created and # before bind/attaching a filter where packets can be queued onto the # socket buffer # see comments in function set_kernel_filter() in libpcap's # pcap-linux.c. libpcap sets a total filter which does not match any # packet. 
It then clears what is already in the socket # before setting the desired filter total_fstring, prog = compile_bpf(total_filter) socket.setsockopt(self, SOL_SOCKET, SO_ATTACH_FILTER, total_fstring) self.setblocking(0) while True: try: self.recvfrom(0) except error as e: if e.args[0] == errno.ENETDOWN: # we only get this exception once per down event # there may be more packets left to clean pass elif e.args[0] in [errno.EAGAIN, errno.EWOULDBLOCK]: break else: raise self.setblocking(1) if remove_total_filter: # total_fstring ignored socket.setsockopt(self, SOL_SOCKET, SO_DETACH_FILTER, total_fstring) def csum(self, data): if len(data) % 2: data += b'\x00' csum = sum([struct.unpack('>H', data[x * 2:x * 2 + 2])[0] for x in range(len(data) // 2)]) csum = (csum >> 16) + (csum & 0xffff) csum += csum >> 16 return ~csum & 0xffff pyroute2-0.5.9/pyroute2/proxy.py0000644000175000017500000000451013610051400016544 0ustar peetpeet00000000000000''' Netlink proxy engine ''' import errno import struct import logging import traceback import threading from pyroute2.netlink.exceptions import NetlinkError log = logging.getLogger(__name__) class NetlinkProxy(object): ''' Proxy schemes:: User -> NetlinkProxy -> Kernel | <---------+ User <- NetlinkProxy <- Kernel ''' def __init__(self, policy='forward', nl=None, lock=None): self.nl = nl self.lock = lock or threading.Lock() self.pmap = {} self.policy = policy def handle(self, msg): # # match the packet # ptype = msg['header']['type'] plugin = self.pmap.get(ptype, None) if plugin is not None: with self.lock: try: ret = plugin(msg, self.nl) if ret is None: # # The packet is terminated in the plugin, # return the NLMSG_ERR == 0 # # FIXME: optimize # newmsg = struct.pack('IHH', 40, 2, 0) newmsg += msg.data[8:16] newmsg += struct.pack('I', 0) # nlmsgerr struct alignment newmsg += b'\0' * 20 return {'verdict': self.policy, 'data': newmsg} else: return ret except Exception as e: log.error(''.join(traceback.format_stack())) log.error(traceback.format_exc()) # errmsg if isinstance(e, (OSError, IOError)): code = e.errno elif isinstance(e, NetlinkError): code = e.code else: code = errno.ECOMM newmsg = struct.pack('HH', 2, 0) newmsg += msg.data[8:16] newmsg += struct.pack('I', code) newmsg += msg.data newmsg = struct.pack('I', len(newmsg) + 4) + newmsg return {'verdict': 'error', 'data': newmsg} return None pyroute2-0.5.9/pyroute2/remote/0000755000175000017500000000000013621220110016302 5ustar peetpeet00000000000000pyroute2-0.5.9/pyroute2/remote/__init__.py0000644000175000017500000002662313610051400020426 0ustar peetpeet00000000000000import os import errno import atexit import pickle import select import struct import logging import signal import threading import traceback from io import BytesIO from socket import SOL_SOCKET from socket import SO_RCVBUF from pyroute2 import config from pyroute2 import netns as netnsmod from pyroute2.netlink.nlsocket import NetlinkMixin if config.uname[0][-3:] == 'BSD': from pyroute2.iproute.bsd import IPRoute else: from pyroute2.iproute.linux import IPRoute try: import queue except ImportError: import Queue as queue log = logging.getLogger(__name__) class Transport(object): ''' A simple transport protocols to send objects between two end-points. Requires an open file-like object at init. 
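    Each packet on the wire is an 8-byte header -- two native-order uint32
    values: the total length (header included) and a zero pad -- followed by
    the pickled object.  A minimal round-trip sketch over an in-memory
    buffer (illustrative only; the real transports wrap pipes or stdio
    streams):

    >>> from io import BytesIO
    >>> buf = BytesIO()
    >>> Transport(buf).send({'stage': 'command', 'error': None})
    >>> _ = buf.seek(0)
    >>> Transport(buf).recv_cmd()['stage']
    'command'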
''' def __init__(self, file_obj): self.file_obj = file_obj self.lock = threading.Lock() self.cmd_queue = queue.Queue() self.brd_queue = queue.Queue() self.run = True def fileno(self): return self.file_obj.fileno() def send(self, obj): dump = BytesIO() pickle.dump(obj, dump) packet = struct.pack("II", len(dump.getvalue()) + 8, 0) packet += dump.getvalue() self.file_obj.write(packet) self.file_obj.flush() def __recv(self): length, offset = struct.unpack("II", self.file_obj.read(8)) dump = BytesIO() dump.write(self.file_obj.read(length - 8)) dump.seek(0) ret = pickle.load(dump) return ret def _m_recv(self, own_queue, other_queue, check): while self.run: if self.lock.acquire(False): try: try: ret = own_queue.get(False) if ret is None: continue else: return ret except queue.Empty: pass ret = self.__recv() if not check(ret['stage']): other_queue.put(ret) else: other_queue.put(None) return ret finally: self.lock.release() else: ret = None try: ret = own_queue.get(timeout=1) except queue.Empty: pass if ret is not None: return ret def recv(self): return self._m_recv(self.brd_queue, self.cmd_queue, lambda x: x == 'broadcast') def recv_cmd(self): return self._m_recv(self.cmd_queue, self.brd_queue, lambda x: x != 'broadcast') def close(self): self.run = False class ProxyChannel(object): def __init__(self, channel, stage): self.target = channel self.stage = stage def send(self, data): return self.target.send({'stage': self.stage, 'data': data, 'error': None}) def Server(trnsp_in, trnsp_out, netns=None): def stop_server(signum, frame): Server.run = False Server.run = True signal.signal(signal.SIGTERM, stop_server) try: if netns is not None: netnsmod.setns(netns) ipr = IPRoute() lock = ipr._sproxy.lock ipr._s_channel = ProxyChannel(trnsp_out, 'broadcast') except Exception as e: trnsp_out.send({'stage': 'init', 'error': e}) return 255 inputs = [ipr.fileno(), trnsp_in.fileno()] broadcasts = {ipr.fileno(): ipr} outputs = [] # all is OK so far trnsp_out.send({'stage': 'init', 'uname': config.uname, 'error': None}) # 8<------------------------------------------------------------- while Server.run: try: events, _, _ = select.select(inputs, outputs, inputs) except: continue for fd in events: if fd in broadcasts: sock = broadcasts[fd] bufsize = sock.getsockopt(SOL_SOCKET, SO_RCVBUF) // 2 with lock: error = None data = None try: data = sock.recv(bufsize) except Exception as e: error = e error.tb = traceback.format_exc() trnsp_out.send({'stage': 'broadcast', 'data': data, 'error': error}) elif fd == trnsp_in.fileno(): cmd = trnsp_in.recv_cmd() if cmd['stage'] == 'shutdown': ipr.close() data = struct.pack('IHHQIQQ', 28, 2, 0, 0, 104, 0, 0) trnsp_out.send({'stage': 'broadcast', 'data': data, 'error': None}) return elif cmd['stage'] == 'reconstruct': error = None try: msg = cmd['argv'][0]() msg.load(pickle.loads(cmd['argv'][1])) ipr.sendto_gate(msg, cmd['argv'][2]) except Exception as e: error = e error.tb = traceback.format_exc() trnsp_out.send({'stage': 'reconstruct', 'error': error, 'return': None, 'cookie': cmd['cookie']}) elif cmd['stage'] == 'command': error = None try: ret = getattr(ipr, cmd['name'])(*cmd['argv'], **cmd['kwarg']) if cmd['name'] == 'bind' and \ ipr._brd_socket is not None: inputs.append(ipr._brd_socket.fileno()) broadcasts[ipr._brd_socket.fileno()] = \ ipr._brd_socket except Exception as e: ret = None error = e error.tb = traceback.format_exc() trnsp_out.send({'stage': 'command', 'error': error, 'return': ret, 'cookie': cmd['cookie']}) class RemoteSocket(NetlinkMixin): trnsp_in = None trnsp_out 
= None remote_trnsp_in = None remote_trnsp_out = None def __init__(self, trnsp_in, trnsp_out): super(RemoteSocket, self).__init__() self.trnsp_in = trnsp_in self.trnsp_out = trnsp_out self.cmdlock = threading.Lock() self.shutdown_lock = threading.RLock() self.closed = False init = self.trnsp_in.recv_cmd() if init['stage'] != 'init': raise TypeError('incorrect protocol init') if init['error'] is not None: raise init['error'] else: self.uname = init['uname'] atexit.register(self.close) self.sendto_gate = self._gate def _gate(self, msg, addr): with self.cmdlock: self.trnsp_out.send({'stage': 'reconstruct', 'cookie': None, 'name': None, 'argv': [type(msg), pickle.dumps(msg.dump()), addr], 'kwarg': None}) ret = self.trnsp_in.recv_cmd() if ret['error'] is not None: raise ret['error'] return ret['return'] def recv(self, bufsize, flags=0): msg = None while True: msg = self.trnsp_in.recv() if msg is None: raise EOFError() if msg['stage'] == 'signal': os.kill(os.getpid(), msg['data']) else: break if msg['error'] is not None: raise msg['error'] return msg['data'] def _cleanup_atexit(self): if hasattr(atexit, 'unregister'): atexit.unregister(self.close) else: try: atexit._exithandlers.remove((self.close, (), {})) except ValueError: pass def close(self, code=errno.ECONNRESET): with self.shutdown_lock: if not self.closed: super(RemoteSocket, self).close() self.closed = True self._cleanup_atexit() self.trnsp_out.send({'stage': 'shutdown'}) # send loopback nlmsg to terminate possible .get() if code > 0 and self.remote_trnsp_out is not None: data = struct.pack('IHHQIQQ', 28, 2, 0, 0, code, 0, 0) self.remote_trnsp_out.send({'stage': 'broadcast', 'data': data, 'error': None}) with self.trnsp_in.lock: pass transport_objs = (self.trnsp_out, self.trnsp_in, self.remote_trnsp_in, self.remote_trnsp_out) # Stop the transport objects. for trnsp in transport_objs: try: if hasattr(trnsp, 'close'): trnsp.close() except Exception: pass # Close the file descriptors. 
for trnsp in transport_objs: try: trnsp.file_obj.close() except Exception: pass def proxy(self, cmd, *argv, **kwarg): with self.cmdlock: self.trnsp_out.send({'stage': 'command', 'cookie': None, 'name': cmd, 'argv': argv, 'kwarg': kwarg}) ret = self.trnsp_in.recv_cmd() if ret['error'] is not None: raise ret['error'] return ret['return'] def fileno(self): return self.trnsp_in.fileno() def bind(self, *argv, **kwarg): if 'async' in kwarg: # FIXME # raise deprecation error after 0.5.3 # log.warning('use "async_cache" instead of "async", ' '"async" is a keyword from Python 3.7') del kwarg['async'] # do not work with async servers kwarg['async_cache'] = False return self.proxy('bind', *argv, **kwarg) def send(self, *argv, **kwarg): return self.proxy('send', *argv, **kwarg) def sendto(self, *argv, **kwarg): return self.proxy('sendto', *argv, **kwarg) def getsockopt(self, *argv, **kwarg): return self.proxy('getsockopt', *argv, **kwarg) def setsockopt(self, *argv, **kwarg): return self.proxy('setsockopt', *argv, **kwarg) def _sendto(self, *argv, **kwarg): return self.sendto(*argv, **kwarg) def _recv(self, *argv, **kwarg): return self.recv(*argv, **kwarg) pyroute2-0.5.9/pyroute2/remote/__main__.py0000644000175000017500000000021213610051400020371 0ustar peetpeet00000000000000import sys from pyroute2.remote import Server from pyroute2.remote import Transport Server(Transport(sys.stdin), Transport(sys.stdout)) pyroute2-0.5.9/pyroute2/remote/shell.py0000644000175000017500000000417213610051400017771 0ustar peetpeet00000000000000import errno import struct import atexit import logging import subprocess from pyroute2.remote import Transport from pyroute2.remote import RemoteSocket from pyroute2.iproute import RTNL_API from pyroute2.netlink.rtnl.iprsocket import MarshalRtnl log = logging.getLogger(__name__) class ShellIPR(RTNL_API, RemoteSocket): def __init__(self, target): self.target = target cmd = '%s python -m pyroute2.remote' % target self.shell = subprocess.Popen(cmd.split(), bufsize=0, stdin=subprocess.PIPE, stdout=subprocess.PIPE) trnsp_in = Transport(self.shell.stdout) trnsp_out = Transport(self.shell.stdin) try: super(ShellIPR, self).__init__(trnsp_in, trnsp_out) except Exception: self.close() raise atexit.register(self.close) self.marshal = MarshalRtnl() def clone(self): return type(self)(self.target) def _cleanup_atexit(self): if hasattr(atexit, 'unregister'): atexit.unregister(self.close) else: try: atexit._exithandlers.remove((self.close, (), {})) except ValueError: pass def close(self, code=errno.ECONNRESET): self._cleanup_atexit() # something went wrong, force server shutdown try: self.trnsp_out.send({'stage': 'shutdown'}) if code > 0: data = {'stage': 'broadcast', 'data': struct.pack('IHHQIQQ', 28, 2, 0, 0, code, 0, 0), 'error': None} self.trnsp_in.brd_queue.put(data) except Exception: pass # force cleanup command channels for close in (self.trnsp_in.close, self.trnsp_out.close): try: close() except Exception: pass # Maybe already closed in remote.Client.close self.shell.kill() self.shell.wait() def post_init(self): pass pyroute2-0.5.9/pyroute2/wiset.py0000644000175000017500000004441213613574566016555 0ustar peetpeet00000000000000''' WiSet module ============ High level ipset support. When :doc:`ipset` is providing a direct netlink socket with low level functions, a :class:`WiSet` object is built to map ipset objects from kernel. It helps to add/remove entries, list content, etc. 
For example, adding an entry with :class:`pyroute2.ipset.IPSet` object implies to set a various number of parameters: >>> ipset = IPSet() >>> ipset.add("foo", "1.2.3.4/24", etype="net") >>> ipset.close() When they are discovered by a :class:`WiSet`: >>> wiset = load_ipset("foo") >>> wiset.add("1.2.3.4/24") Listing entries is also easier using :class:`WiSet`, since it parses for you netlink messages: >>> wiset.content {'1.2.3.0/24': IPStats(packets=None, bytes=None, comment=None, timeout=None, skbmark=None, physdev=False)} ''' import errno import uuid from collections import namedtuple from inspect import getcallargs from socket import AF_INET from pyroute2 import IPSet from pyroute2.common import basestring from pyroute2.netlink.exceptions import IPSetError from pyroute2.netlink.nfnetlink.ipset import IPSET_FLAG_WITH_COUNTERS from pyroute2.netlink.nfnetlink.ipset import IPSET_FLAG_WITH_COMMENT from pyroute2.netlink.nfnetlink.ipset import IPSET_FLAG_WITH_SKBINFO from pyroute2.netlink.nfnetlink.ipset import IPSET_FLAG_PHYSDEV from pyroute2.netlink.nfnetlink.ipset import IPSET_FLAG_IFACE_WILDCARD # Debug variable to detect netlink socket leaks COUNT = {"count": 0} def need_ipset_socket(fun): """ Decorator to create netlink socket if needed. In many of our helpers, we need to open a netlink socket. This can be expensive for someone using many times the functions: instead to have only one socket and use several requests, we will open it again and again. This helper allow our functions to be flexible: the caller can pass an optional socket, or do nothing. In this last case, this decorator will open a socket for the caller (and close it after call) It also help to mix helpers. One helper can call another one: the socket will be opened only once. We just have to pass the ipset variable. Note that all functions using this helper *must* use ipset as variable name for the socket. """ def wrap(*args, **kwargs): callargs = getcallargs(fun, *args, **kwargs) if callargs["sock"] is None: # This variable is used only to debug leak in tests COUNT['count'] += 1 with IPSet() as sock: callargs["sock"] = sock # We must pop kwargs here, else the function will receive # a dict of dict if "kwargs" in callargs: callargs.update(callargs.pop("kwargs")) return fun(**callargs) # pylint:disable=star-args return fun(*args, **kwargs) return wrap class IPStats(namedtuple("IPStats", ["packets", "bytes", "comment", "timeout", "skbmark", "physdev", "wildcard"])): __slots__ = () def __new__(cls, packets, bytes, comment, timeout, skbmark, physdev=False, wildcard=False): return super(IPStats, cls).__new__(cls, packets, bytes, comment, timeout, skbmark, physdev=physdev, wildcard=wildcard) # pylint: disable=too-many-instance-attributes class WiSet(object): """ Main high level ipset manipulation class. Every high level ipset operation should be possible with this class, you probably don't need other helpers of this module, except tools to load data from kernel (:func:`load_all_ipsets` and :func:`load_ipset`) For example, you can create and an entry in a ipset just with: >>> with WiSet(name="mysuperipset") as myset: >>> myset.create() # add the ipset in the kernel >>> myset.add("198.51.100.1") # add one IP to the set Netlink sockets are opened by __enter__ and __exit__ function, so you don't have to manage it manually if you use the "with" keyword. 
If you want to manage it manually (for example for long operation in a daemon), you can do the following: >>> myset = WiSet(name="mysuperipset") >>> myset.open_netlink() >>> # do stuff >>> myset.close_netlink() You can also don't initiate at all any netlink socket, this code will work: >>> myset = WiSet(name="mysuperipset") >>> myset.create() >>> myset.destroy() But do it very carefully. In that case, a netlink socket will be opened in background for any operation. No socket will be leaked, but that can consume resources. You can also instantiate WiSet objects with :func:`load_all_ipsets` and :func:`load_ipset`: >>> all_sets_dict = load_all_ipsets() >>> one_set = load_ipset(name="myset") Have a look on content variable if you need list of entries in the Set. """ # pylint: disable=too-many-arguments def __init__(self, name=None, attr_type='hash:ip', family=AF_INET, sock=None, timeout=None, counters=False, comment=False, hashsize=None, revision=None, skbinfo=False): self.name = name self.hashsize = hashsize self._attr_type = None self.entry_type = None self.attr_type = attr_type self.family = family self._content = None self.sock = sock self.timeout = timeout self.counters = counters self.comment = comment self.revision = revision self.index = None self.skbinfo = skbinfo def open_netlink(self): """ Open manually a netlink socket. You can use "with WiSet()" instead """ if self.sock is None: self.sock = IPSet() def close_netlink(self): """ Clone any opened netlink socket """ if self.sock is not None: self.sock.close() self.sock = None @property def attr_type(self): return self._attr_type @attr_type.setter def attr_type(self, value): self._attr_type = value self.entry_type = value.split(":", 1)[1] def __enter__(self): self.open_netlink() return self def __exit__(self, exc_type, exc_value, traceback): self.close_netlink() @classmethod def from_netlink(cls, ndmsg, content=False): """ Create a ipset objects based on a parsed netlink message :param ndmsg: the netlink message to parse :param content: should we fill (and parse) entries info (can be slow on very large set) :type content: bool """ self = cls() self.attr_type = ndmsg.get_attr("IPSET_ATTR_TYPENAME") self.name = ndmsg.get_attr("IPSET_ATTR_SETNAME") self.hashsize = ndmsg.get_attr("IPSET_ATTR_HASHSIZE") self.family = ndmsg.get_attr("IPSET_ATTR_FAMILY") self.revision = ndmsg.get_attr("IPSET_ATTR_REVISION") self.index = ndmsg.get_attr("IPSET_ATTR_INDEX") data = ndmsg.get_attr("IPSET_ATTR_DATA") self.timeout = data.get_attr("IPSET_ATTR_TIMEOUT") flags = data.get_attr("IPSET_ATTR_CADT_FLAGS") if flags is not None: self.counters = bool(flags & IPSET_FLAG_WITH_COUNTERS) self.comment = bool(flags & IPSET_FLAG_WITH_COMMENT) self.skbinfo = bool(flags & IPSET_FLAG_WITH_SKBINFO) if content: self.update_dict_content(ndmsg) return self def update_dict_content(self, ndmsg): """ Update a dictionary statistics with values sent in netlink message :param ndmsg: the netlink message :type ndmsg: netlink message """ family = "IPSET_ATTR_IPADDR_IPV4" ip_attr = "IPSET_ATTR_IP_FROM" if self._content is None: self._content = {} timeout = None entries = ndmsg.get_attr("IPSET_ATTR_ADT").get_attrs("IPSET_ATTR_DATA") for entry in entries: key = "" for parse_type in self.entry_type.split(","): if parse_type == "ip": ip = entry.get_attr(ip_attr).get_attr(family) key += ip elif parse_type == "net": ip = entry.get_attr(ip_attr).get_attr(family) key += ip cidr = entry.get_attr("IPSET_ATTR_CIDR") if cidr is not None: key += "/{0}".format(cidr) elif parse_type == "iface": key 
+= entry.get_attr("IPSET_ATTR_IFACE") elif parse_type == "set": key += entry.get_attr("IPSET_ATTR_NAME") elif parse_type == "mark": key += str(hex(entry.get_attr("IPSET_ATTR_MARK"))) key += "," key = key.strip(",") if self.timeout is not None: timeout = entry.get_attr("IPSET_ATTR_TIMEOUT") skbmark = entry.get_attr("IPSET_ATTR_SKBMARK") if skbmark is not None: # Convert integer to hex for mark/mask # Only display mask if != 0xffffffff if skbmark[1] != (2**32 - 1): skbmark = "/".join([str(hex(mark)) for mark in skbmark]) else: skbmark = str(hex(skbmark[0])) entry_flag_parsed = {"physdev": False} flags = entry.get_attr("IPSET_ATTR_CADT_FLAGS") if flags is not None: entry_flag_parsed["physdev"] = bool(flags & IPSET_FLAG_PHYSDEV) entry_flag_parsed["wildcard"] = bool(flags & IPSET_FLAG_IFACE_WILDCARD) value = IPStats(packets=entry.get_attr("IPSET_ATTR_PACKETS"), bytes=entry.get_attr("IPSET_ATTR_BYTES"), comment=entry.get_attr("IPSET_ATTR_COMMENT"), skbmark=skbmark, timeout=timeout, **entry_flag_parsed) self._content[key] = value def create(self, **kwargs): """ Insert this Set in the kernel Many options are set with python object attributes (like comments, counters, etc). For non-supported type, kwargs are provided. See :doc:`ipset` documentation for more information. """ create_ipset(self.name, stype=self.attr_type, family=self.family, sock=self.sock, timeout=self.timeout, comment=self.comment, counters=self.counters, hashsize=self.hashsize, skbinfo=self.skbinfo, **kwargs) def destroy(self): """ Destroy this ipset in the kernel list. It does not delete this python object (any content or other stored values are keep in memory). This function will fail if the ipset is still referenced (by example in iptables rules), you have been warned. """ destroy_ipset(self.name, sock=self.sock) def add(self, entry, **kwargs): """ Add an entry in this ipset. If counters are enabled on the set, reset by default the value when we add the element. Without this reset, kernel sometimes store old values and can add very strange behavior on counters. """ if isinstance(entry, dict): kwargs.update(entry) entry = kwargs.pop("entry") if self.counters: kwargs["packets"] = kwargs.pop("packets", 0) kwargs["bytes"] = kwargs.pop("bytes", 0) skbmark = kwargs.get("skbmark") if isinstance(skbmark, basestring): skbmark = skbmark.split('/') mark = int(skbmark[0], 16) try: mask = int(skbmark[1], 16) except IndexError: mask = int("0xffffffff", 16) kwargs["skbmark"] = (mark, mask) add_ipset_entry(self.name, entry, etype=self.entry_type, sock=self.sock, **kwargs) def delete(self, entry, **kwargs): """ Delete/remove an entry in this ipset """ delete_ipset_entry(self.name, entry, etype=self.entry_type, sock=self.sock, **kwargs) def test(self, entry, **kwargs): """ Test if an entry is in this ipset """ return test_ipset_entry(self.name, entry, etype=self.entry_type, sock=self.sock, **kwargs) def test_list(self, entries, **kwargs): """ Test if a list of a set of entries is in this ipset Return a set of entries found in the IPSet """ return test_ipset_entries(self.name, entries, etype=self.entry_type, sock=self.sock, **kwargs) def update_content(self): """ Update the content dictionary with values from kernel """ self._content = {} update_wiset_content(self, sock=self.sock) def flush(self): """ Flush entries of the ipset """ flush_ipset(self.name, sock=self.sock) @property def content(self): """ Dictionary of entries in the set. Keys are IP addresses (as string), values are IPStats tuples. 
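        For example (illustrative; packets/bytes are None unless the set
        was created with counters=True):

        >>> for entry, stats in wiset.content.items():
        ...     print(entry, stats.packets, stats.bytes)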
""" if self._content is None: self.update_content() return self._content def insert_list(self, entries): """ Just a small helper to reduce the number of loops in main code. """ for entry in entries: self.add(entry) def replace_entries(self, new_list): """ Replace the content of an ipset with a new list of entries. This operation is like a flush() and adding all entries one by one. But this call is atomic: it creates a temporary ipset and swap the content. :param new_list: list of entries to add :type new_list: list or :py:class:`set` of basestring or of keyword arguments dict """ temp_name = str(uuid.uuid4())[0:8] # Get a copy of ourself temp = load_ipset(self.name, sock=self.sock) temp.name = temp_name temp.sock = self.sock temp.create() temp.insert_list(new_list) swap_ipsets(self.name, temp_name, sock=self.sock) temp.destroy() @need_ipset_socket def create_ipset(name, stype=None, family=AF_INET, exclusive=False, sock=None, **kwargs): """ Create an ipset. """ sock.create(name, stype=stype, family=family, exclusive=exclusive, **kwargs) @need_ipset_socket def load_all_ipsets(content=False, sock=None, inherit_sock=False, prefix=None): """ List all ipset as WiSet objects. Get full ipset data from kernel and parse it in WiSet objects. Result is a dictionary with ipset names as keys, and WiSet objects as values. :param content: parse the list of entries and fill it in WiSet content dictionary :type content: bool :param inherit_sock: use the netlink sock passed in ipset arg to fill WiSets sock :type inherit_sock: bool :param prefix: filter out all ipset with a name not beginning by this prefix :type prefix: str or None """ res = {} for myset in sock.list(): # on large sets, we can receive data in several messages name = myset.get_attr("IPSET_ATTR_SETNAME") if prefix is not None and not name.startswith(prefix): continue if name not in res: wiset = WiSet.from_netlink(myset, content=content) if inherit_sock: wiset.sock = sock res[wiset.name] = wiset elif content: res[wiset.name].update_dict_content(myset) return res @need_ipset_socket def load_ipset(name, content=False, sock=None, inherit_sock=False): """ Get one ipset as WiSet object Helper to get current WiSet object. More efficient that :func:`load_all_ipsets` since the kernel does the filtering itself. Return None if the ipset does not exist :param name: name of the ipset :type name: str :param content: parse or not content and statistics on entries :type content: bool :param inherit_sock: use the netlink sock passed in ipset arg to fill WiSet sock :type inherit_sock: bool """ res = None for msg in sock.list(name=name): if res is None: res = WiSet.from_netlink(msg, content=content) if inherit_sock: res.sock = sock elif content: res.update_dict_content(msg) return res @need_ipset_socket def update_wiset_content(wiset, sock=None): """ Update content/statistics of a wiset. You should never call yourself this function. It is only a helper to use the :func:`need_ipset_socket` decorator out of WiSet object. """ for msg in sock.list(name=wiset.name): wiset.update_dict_content(msg) @need_ipset_socket def destroy_ipset(name, sock=None): """ Remove an ipset in the kernel. 
""" sock.destroy(name) @need_ipset_socket def add_ipset_entry(name, entry, sock=None, **kwargs): """ Add an entry """ sock.add(name, entry, **kwargs) @need_ipset_socket def delete_ipset_entry(name, entry, sock=None, **kwargs): """ Remove one entry """ sock.delete(name, entry, **kwargs) @need_ipset_socket def test_ipset_exist(name, sock=None): """ Test if the given ipset exist """ try: sock.headers(name) return True except IPSetError as e: if e.code == errno.ENOENT: return False raise @need_ipset_socket def test_ipset_entry(name, entry, sock=None, **kwargs): """ Test if an entry is in one ipset """ return sock.test(name, entry, **kwargs) @need_ipset_socket def test_ipset_entries(name, entries, sock=None, **kwargs): """ Test a list (or a set) of entries. """ res = set() for entry in entries: if sock.test(name, entry, **kwargs): res.add(entry) return res @need_ipset_socket def flush_ipset(name, sock=None): """ Flush all ipset content """ sock.flush(name) @need_ipset_socket def swap_ipsets(name_a, name_b, sock=None): """ Swap the content of ipset a and b. ipsets must have compatible content. """ sock.swap(name_a, name_b) def get_ipset_socket(**kwargs): """ Get a socket that one can pass to several WiSet objects """ return IPSet(**kwargs) pyroute2-0.5.9/setup.ini0000644000175000017500000000007213621220101015056 0ustar peetpeet00000000000000[setup] version=0.5 release=0.5.9 setuplib=distutils.core pyroute2-0.5.9/setup.py0000644000175000017500000000623213621217360014750 0ustar peetpeet00000000000000#!/usr/bin/env python import os try: import configparser except ImportError: import ConfigParser as configparser # When one runs pip install from the git repo, the setup.ini # doesn't exist. But we still have here a full git repo with # all the git log and with the Makefile. # # So just try to use it. try: os.stat('setup.ini') except: os.system('make force-version') config = configparser.ConfigParser() config.read('setup.ini') module = __import__(config.get('setup', 'setuplib'), globals(), locals(), ['setup'], 0) setup = getattr(module, 'setup') readme = open("README.md", "r") setup(name='pyroute2', version=config.get('setup', 'release'), description='Python Netlink library', author='Peter V. 
Saveliev',
      author_email='peter@svinota.eu',
      url='https://github.com/svinota/pyroute2',
      license='dual license GPLv2+ and Apache v2',
      packages=['pyroute2',
                'pyroute2.bsd',
                'pyroute2.bsd.rtmsocket',
                'pyroute2.bsd.pf_route',
                'pyroute2.cli',
                'pyroute2.config',
                'pyroute2.dhcp',
                'pyroute2.ipdb',
                'pyroute2.iproute',
                'pyroute2.ndb',
                'pyroute2.ndb.objects',
                'pyroute2.netns',
                'pyroute2.netns.process',
                'pyroute2.inotify',
                'pyroute2.ethtool',
                'pyroute2.netlink',
                'pyroute2.netlink.generic',
                'pyroute2.netlink.ipq',
                'pyroute2.netlink.nfnetlink',
                'pyroute2.netlink.rtnl',
                'pyroute2.netlink.rtnl.ifinfmsg',
                'pyroute2.netlink.rtnl.ifinfmsg.plugins',
                'pyroute2.netlink.rtnl.tcmsg',
                'pyroute2.netlink.taskstats',
                'pyroute2.netlink.nl80211',
                'pyroute2.netlink.devlink',
                'pyroute2.netlink.diag',
                'pyroute2.netlink.event',
                'pyroute2.nftables',
                'pyroute2.protocols',
                'pyroute2.remote'],
      scripts=['./cli/ss2', './cli/pyroute2-cli'],
      classifiers=['License :: OSI Approved :: GNU General Public ' +
                   'License v2 or later (GPLv2+)',
                   'License :: OSI Approved :: Apache Software License',
                   'Programming Language :: Python',
                   'Topic :: Software Development :: Libraries :: ' +
                   'Python Modules',
                   'Topic :: System :: Networking',
                   'Topic :: System :: Systems Administration',
                   'Operating System :: POSIX :: Linux',
                   'Intended Audience :: Developers',
                   'Intended Audience :: System Administrators',
                   'Intended Audience :: Telecommunications Industry',
                   'Programming Language :: Python :: 2.6',
                   'Programming Language :: Python :: 2.7',
                   'Programming Language :: Python :: 3',
                   'Development Status :: 4 - Beta'],
      long_description=readme.read())
pyroute2-0.5.9/tests/0000755000175000017500000000000013621220110014360 5ustar peetpeet00000000000000pyroute2-0.5.9/tests/unit/0000755000175000017500000000000013621220110015337 5ustar peetpeet00000000000000pyroute2-0.5.9/tests/unit/__pycache__/0000755000175000017500000000000013621220110017547 5ustar peetpeet00000000000000pyroute2-0.5.9/tests/unit/__pycache__/test_common.cpython-36.pyc0000644000175000017500000001066013336747553024553 0ustar peetpeet00000000000000pyroute2-0.5.9/tests/unit/__pycache__/test_common.cpython-37.pyc0000644000175000017500000001063413430547273024553 0ustar peetpeet00000000000000pyroute2-0.5.9/tests/unit/test_common.py0000644000175000017500000000716413610051400020252 0ustar peetpeet00000000000000from pyroute2.common import AddrPool
from pyroute2.common import hexdump
from pyroute2.common import hexload
from pyroute2.common import uuid32
from pyroute2.common import uifname
from pyroute2.common import dqn2int


class TestAddrPool(object):

    def test_alloc_aligned(self):
        ap = AddrPool(minaddr=1, maxaddr=1024)
        for i in range(1024):
            ap.alloc()
        try:
            ap.alloc()
        except KeyError:
            pass

    def test_alloc_odd(self):
        ap = AddrPool(minaddr=1, maxaddr=1020)
        for i in range(1020):
            ap.alloc()
        try:
            ap.alloc()
        except KeyError:
            pass

    def test_reverse(self):
        ap = AddrPool(minaddr=1, maxaddr=1024, reverse=True)
        for i in range(512):
            assert ap.alloc() > ap.alloc()

    def test_free(self):
        ap = AddrPool(minaddr=1, maxaddr=1024)
        f = ap.alloc()
        ap.free(f)

    def test_free_fail(self):
        ap = AddrPool(minaddr=1, maxaddr=1024)
        try:
            ap.free(0)
        except KeyError:
            pass

    def test_free_reverse_fail(self):
        ap = AddrPool(minaddr=1, maxaddr=1024, reverse=True)
        try:
            ap.free(0)
        except KeyError:
            pass

    def test_locate(self):
        ap = AddrPool()
        f = ap.alloc()
        base1, bit1, is_allocated1 = ap.locate(f)
        base2, bit2, is_allocated2 = ap.locate(f + 1)
        assert base1 == base2
        assert bit2 == bit1 + 1
        assert is_allocated1
        assert not is_allocated2
        assert ap.allocated == 1

    def test_setaddr_allocated(self):
        ap = AddrPool()
        f = ap.alloc()
        base, bit, is_allocated = ap.locate(f + 1)
        assert not is_allocated
        assert ap.allocated == 1
        ap.setaddr(f + 1, 'allocated')
        base, bit, is_allocated = ap.locate(f + 1)
        assert is_allocated
        assert ap.allocated == 2
        ap.free(f + 1)
        base, bit, is_allocated = ap.locate(f + 1)
        assert not is_allocated
        assert ap.allocated == 1

    def test_setaddr_free(self):
        ap = AddrPool()
        f = ap.alloc()
        base, bit, is_allocated = ap.locate(f + 1)
        assert not is_allocated
        assert ap.allocated == 1
        ap.setaddr(f + 1, 'free')
        base, bit, is_allocated = ap.locate(f + 1)
        assert not is_allocated
        assert ap.allocated == 1
        ap.setaddr(f, 'free')
        base, bit, is_allocated = ap.locate(f)
        assert not is_allocated
        assert ap.allocated == 0
        try:
            ap.free(f)
        except KeyError:
            pass


class TestCommon(object):

    def test_hexdump(self):
        binary = b'abcdef5678'
        dump1 = hexdump(binary)
        dump2 = hexdump(binary, length=6)
        assert len(dump1) == 29
        assert len(dump2) == 17
        assert dump1[2] == dump1[-3] == dump2[2] == dump2[-3] == ':'
        assert hexload(dump1) == binary
        assert hexload(dump2) == binary[:6]

    def test_uuid32(self):
        uA = uuid32()
        uB = uuid32()
        prime = __builtins__.get('long', int)
        assert isinstance(uA, prime)
        assert isinstance(uB, prime)
        assert uA != uB
        assert uA < 0x100000000
        assert uB < 0x100000000

    def test_dqn2int(self):
        assert dqn2int('255.255.255.0') == 24
        assert dqn2int('255.240.0.0') == 12
        assert dqn2int('255.0.0.0') == 8

    def test_uifname(self):
        nA = uifname()
        nB = uifname()
        assert nA != nB
        assert int(nA[2:], 16) != int(nB[2:], 16)
pyroute2-0.5.9/tests/unit/test_common.pyc0000644000175000017500000001305713447446632020432 0ustar peetpeet00000000000000