sshuttle-0.76/0000700000175000017500000000000012646642532013531 5ustar brianbrian00000000000000sshuttle-0.76/setup.cfg0000600000175000017500000000007312646642532015354 0ustar brianbrian00000000000000[egg_info] tag_build = tag_date = 0 tag_svn_revision = 0 sshuttle-0.76/tox.ini0000600000175000017500000000031712631165026015037 0ustar brianbrian00000000000000[tox] downloadcache = {toxworkdir}/cache/ envlist = py27, py35, [testenv] basepython = py27: python2.7 py35: python3.5 commands = py.test deps = pytest mock setuptools>=17.1 sshuttle-0.76/run0000700000175000017500000000016512644310521014251 0ustar brianbrian00000000000000#!/bin/sh if python3.5 -V 2>/dev/null; then exec python3 -m "sshuttle" "$@" else exec python -m "sshuttle" "$@" fi sshuttle-0.76/requirements.txt0000600000175000017500000000001712646623547017022 0ustar brianbrian00000000000000setuptools_scm sshuttle-0.76/PKG-INFO0000600000175000017500000000454112646642532014634 0ustar brianbrian00000000000000Metadata-Version: 1.1 Name: sshuttle Version: 0.76 Summary: Full-featured" VPN over an SSH tunnel Home-page: https://github.com/sshuttle/sshuttle Author: Brian May Author-email: brian@linuxpenguins.xyz License: GPL2+ Description: sshuttle: where transparent proxy meets VPN meets ssh ===================================================== As far as I know, sshuttle is the only program that solves the following common case: - Your client machine (or router) is Linux, FreeBSD, or MacOS. - You have access to a remote network via ssh. - You don't necessarily have admin access on the remote network. - The remote network has no VPN, or only stupid/complex VPN protocols (IPsec, PPTP, etc). Or maybe you *are* the admin and you just got frustrated with the awful state of VPN tools. - You don't want to create an ssh port forward for every single host/port on the remote network. - You hate openssh's port forwarding because it's randomly slow and/or stupid. - You can't use openssh's PermitTunnel feature because it's disabled by default on openssh servers; plus it does TCP-over-TCP, which has terrible performance (see below). Obtaining sshuttle ------------------ - From PyPI:: pip install sshuttle - Clone:: git clone https://github.com/sshuttle/sshuttle.git ./setup.py install Documentation ------------- The documentation for the stable version is available at: http://sshuttle.readthedocs.org/ The documentation for the latest development version is available at: http://sshuttle.readthedocs.org/en/latest/ Keywords: ssh vpn Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: Intended Audience :: End Users/Desktop Classifier: License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+) Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3.5 Classifier: Topic :: System :: Networking sshuttle-0.76/.travis.yml0000600000175000017500000000020612625010735015631 0ustar brianbrian00000000000000language: python python: - 2.7 - 3.5 - pypy install: - travis_retry pip install -q pytest mock script: - PYTHONPATH=. py.test sshuttle-0.76/LICENSE0000600000175000017500000006126112553364742014547 0ustar brianbrian00000000000000 GNU LIBRARY GENERAL PUBLIC LICENSE Version 2, June 1991 Copyright (C) 1991 Free Software Foundation, Inc. 675 Mass Ave, Cambridge, MA 02139, USA Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. 
[This is the first released version of the library GPL. It is numbered 2 because it goes with version 2 of the ordinary GPL.] Preamble The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public Licenses are intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users. This license, the Library General Public License, applies to some specially designated Free Software Foundation software, and to any other libraries whose authors decide to use it. You can use it for your libraries, too. When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs; and that you know you can do these things. To protect your rights, we need to make restrictions that forbid anyone to deny you these rights or to ask you to surrender the rights. These restrictions translate to certain responsibilities for you if you distribute copies of the library, or if you modify it. For example, if you distribute copies of the library, whether gratis or for a fee, you must give the recipients all the rights that we gave you. You must make sure that they, too, receive or can get the source code. If you link a program with the library, you must provide complete object files to the recipients so that they can relink them with the library, after making changes to the library and recompiling it. And you must show them these terms so they know their rights. Our method of protecting your rights has two steps: (1) copyright the library, and (2) offer you this license which gives you legal permission to copy, distribute and/or modify the library. Also, for each distributor's protection, we want to make certain that everyone understands that there is no warranty for this free library. If the library is modified by someone else and passed on, we want its recipients to know that what they have is not the original version, so that any problems introduced by others will not reflect on the original authors' reputations. Finally, any free program is threatened constantly by software patents. We wish to avoid the danger that companies distributing free software will individually obtain patent licenses, thus in effect transforming the program into proprietary software. To prevent this, we have made it clear that any patent must be licensed for everyone's free use or not licensed at all. Most GNU software, including some libraries, is covered by the ordinary GNU General Public License, which was designed for utility programs. This license, the GNU Library General Public License, applies to certain designated libraries. This license is quite different from the ordinary one; be sure to read it in full, and don't assume that anything in it is the same as in the ordinary license. The reason we have a separate public license for some libraries is that they blur the distinction we usually make between modifying or adding to a program and simply using it. Linking a program with a library, without changing the library, is in some sense simply using the library, and is analogous to running a utility program or application program. 
However, in a textual and legal sense, the linked executable is a combined work, a derivative of the original library, and the ordinary General Public License treats it as such. Because of this blurred distinction, using the ordinary General Public License for libraries did not effectively promote software sharing, because most developers did not use the libraries. We concluded that weaker conditions might promote sharing better. However, unrestricted linking of non-free programs would deprive the users of those programs of all benefit from the free status of the libraries themselves. This Library General Public License is intended to permit developers of non-free programs to use free libraries, while preserving your freedom as a user of such programs to change the free libraries that are incorporated in them. (We have not seen how to achieve this as regards changes in header files, but we have achieved it as regards changes in the actual functions of the Library.) The hope is that this will lead to faster development of free libraries. The precise terms and conditions for copying, distribution and modification follow. Pay close attention to the difference between a "work based on the library" and a "work that uses the library". The former contains code derived from the library, while the latter only works together with the library. Note that it is possible for a library to be covered by the ordinary General Public License rather than by this special one. GNU LIBRARY GENERAL PUBLIC LICENSE TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION 0. This License Agreement applies to any software library which contains a notice placed by the copyright holder or other authorized party saying it may be distributed under the terms of this Library General Public License (also called "this License"). Each licensee is addressed as "you". A "library" means a collection of software functions and/or data prepared so as to be conveniently linked with application programs (which use some of those functions and data) to form executables. The "Library", below, refers to any such software library or work which has been distributed under these terms. A "work based on the Library" means either the Library or any derivative work under copyright law: that is to say, a work containing the Library or a portion of it, either verbatim or with modifications and/or translated straightforwardly into another language. (Hereinafter, translation is included without limitation in the term "modification".) "Source code" for a work means the preferred form of the work for making modifications to it. For a library, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the library. Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running a program using the Library is not restricted, and output from such a program is covered only if its contents constitute a work based on the Library (independent of the use of the Library in a tool for writing it). Whether that is true depends on what the Library does and what the program that uses the Library does. 1. 
You may copy and distribute verbatim copies of the Library's complete source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and distribute a copy of this License along with the Library. You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee. 2. You may modify your copy or copies of the Library or any portion of it, thus forming a work based on the Library, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions: a) The modified work must itself be a software library. b) You must cause the files modified to carry prominent notices stating that you changed the files and the date of any change. c) You must cause the whole of the work to be licensed at no charge to all third parties under the terms of this License. d) If a facility in the modified Library refers to a function or a table of data to be supplied by an application program that uses the facility, other than as an argument passed when the facility is invoked, then you must make a good faith effort to ensure that, in the event an application does not supply such function or table, the facility still operates, and performs whatever part of its purpose remains meaningful. (For example, a function in a library to compute square roots has a purpose that is entirely well-defined independent of the application. Therefore, Subsection 2d requires that any application-supplied function or table used by this function must be optional: if the application does not supply it, the square root function must still compute square roots.) These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Library, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Library, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it. Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Library. In addition, mere aggregation of another work not based on the Library with the Library (or with a work based on the Library) on a volume of a storage or distribution medium does not bring the other work under the scope of this License. 3. You may opt to apply the terms of the ordinary GNU General Public License instead of this License to a given copy of the Library. To do this, you must alter all the notices that refer to this License, so that they refer to the ordinary GNU General Public License, version 2, instead of to this License. (If a newer version than version 2 of the ordinary GNU General Public License has appeared, then you can specify that version instead if you wish.) Do not make any other change in these notices. 
Once this change is made in a given copy, it is irreversible for that copy, so the ordinary GNU General Public License applies to all subsequent copies and derivative works made from that copy. This option is useful when you wish to copy part of the code of the Library into a program that is not a library. 4. You may copy and distribute the Library (or a portion or derivative of it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange. If distribution of object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place satisfies the requirement to distribute the source code, even though third parties are not compelled to copy the source along with the object code. 5. A program that contains no derivative of any portion of the Library, but is designed to work with the Library by being compiled or linked with it, is called a "work that uses the Library". Such a work, in isolation, is not a derivative work of the Library, and therefore falls outside the scope of this License. However, linking a "work that uses the Library" with the Library creates an executable that is a derivative of the Library (because it contains portions of the Library), rather than a "work that uses the library". The executable is therefore covered by this License. Section 6 states terms for distribution of such executables. When a "work that uses the Library" uses material from a header file that is part of the Library, the object code for the work may be a derivative work of the Library even though the source code is not. Whether this is true is especially significant if the work can be linked without the Library, or if the work is itself a library. The threshold for this to be true is not precisely defined by law. If such an object file uses only numerical parameters, data structure layouts and accessors, and small macros and small inline functions (ten lines or less in length), then the use of the object file is unrestricted, regardless of whether it is legally a derivative work. (Executables containing this object code plus portions of the Library will still fall under Section 6.) Otherwise, if the work is a derivative of the Library, you may distribute the object code for the work under the terms of Section 6. Any executables containing that work also fall under Section 6, whether or not they are linked directly with the Library itself. 6. As an exception to the Sections above, you may also compile or link a "work that uses the Library" with the Library to produce a work containing portions of the Library, and distribute that work under terms of your choice, provided that the terms permit modification of the work for the customer's own use and reverse engineering for debugging such modifications. You must give prominent notice with each copy of the work that the Library is used in it and that the Library and its use are covered by this License. You must supply a copy of this License. If the work during execution displays copyright notices, you must include the copyright notice for the Library among them, as well as a reference directing the user to the copy of this License. 
Also, you must do one of these things: a) Accompany the work with the complete corresponding machine-readable source code for the Library including whatever changes were used in the work (which must be distributed under Sections 1 and 2 above); and, if the work is an executable linked with the Library, with the complete machine-readable "work that uses the Library", as object code and/or source code, so that the user can modify the Library and then relink to produce a modified executable containing the modified Library. (It is understood that the user who changes the contents of definitions files in the Library will not necessarily be able to recompile the application to use the modified definitions.) b) Accompany the work with a written offer, valid for at least three years, to give the same user the materials specified in Subsection 6a, above, for a charge no more than the cost of performing this distribution. c) If distribution of the work is made by offering access to copy from a designated place, offer equivalent access to copy the above specified materials from the same place. d) Verify that the user has already received a copy of these materials or that you have already sent this user a copy. For an executable, the required form of the "work that uses the Library" must include any data and utility programs needed for reproducing the executable from it. However, as a special exception, the source code distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable. It may happen that this requirement contradicts the license restrictions of other proprietary libraries that do not normally accompany the operating system. Such a contradiction means you cannot use both them and the Library together in an executable that you distribute. 7. You may place library facilities that are a work based on the Library side-by-side in a single library together with other library facilities not covered by this License, and distribute such a combined library, provided that the separate distribution of the work based on the Library and of the other library facilities is otherwise permitted, and provided that you do these two things: a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities. This must be distributed under the terms of the Sections above. b) Give prominent notice with the combined library of the fact that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work. 8. You may not copy, modify, sublicense, link with, or distribute the Library except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, link with, or distribute the Library is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance. 9. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Library or its derivative works. These actions are prohibited by law if you do not accept this License. 
Therefore, by modifying or distributing the Library (or any work based on the Library), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Library or works based on it. 10. Each time you redistribute the Library (or any work based on the Library), the recipient automatically receives a license from the original licensor to copy, distribute, link with or modify the Library subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties to this License. 11. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Library at all. For example, if a patent license would not permit royalty-free redistribution of the Library by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Library. If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply, and the section as a whole is intended to apply in other circumstances. It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice. This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License. 12. If the distribution and/or use of the Library is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Library under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License. 13. The Free Software Foundation may publish revised and/or new versions of the Library General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Library specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. 
If the Library does not specify a license version number, you may choose any version ever published by the Free Software Foundation. 14. If you wish to incorporate parts of the Library into other free programs whose distribution conditions are incompatible with these, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally. NO WARRANTY 15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. END OF TERMS AND CONDITIONS Appendix: How to Apply These Terms to Your New Libraries If you develop a new library, and you want it to be of the greatest possible use to the public, we recommend making it free software that everyone can redistribute and change. You can do so by permitting redistribution under these terms (or, alternatively, under the terms of the ordinary General Public License). To apply these terms, attach the following notices to the library. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found. Copyright (C) This library is free software; you can redistribute it and/or modify it under the terms of the GNU Library General Public License as published by the Free Software Foundation; either version 2 of the License, or (at your option) any later version. This library is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU Library General Public License for more details. You should have received a copy of the GNU Library General Public License along with this library; if not, write to the Free Software Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA. Also add information on how to contact you by electronic and paper mail. You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the library, if necessary. 
Here is a sample; alter the names: Yoyodyne, Inc., hereby disclaims all copyright interest in the library `Frob' (a library for tweaking knobs) written by James Random Hacker. , 1 April 1990 Ty Coon, President of Vice That's all there is to it! sshuttle-0.76/sshuttle/0000700000175000017500000000000012646642532015404 5ustar brianbrian00000000000000sshuttle-0.76/sshuttle/version.py0000600000175000017500000000016312646642532017445 0ustar brianbrian00000000000000# coding: utf-8 # file generated by setuptools_scm # don't change, don't track in version control version = '0.76' sshuttle-0.76/sshuttle/helpers.py0000600000175000017500000000401712633621104017410 0ustar brianbrian00000000000000import sys import socket import errno logprefix = '' verbose = 0 def log(s): global logprefix try: sys.stdout.flush() if s.find("\n") != -1: prefix = logprefix s = s.rstrip("\n") for line in s.split("\n"): sys.stderr.write(prefix + line + "\n") prefix = "---> " else: sys.stderr.write(logprefix + s) sys.stderr.flush() except IOError: # this could happen if stderr gets forcibly disconnected, eg. because # our tty closes. That sucks, but it's no reason to abort the program. pass def debug1(s): if verbose >= 1: log(s) def debug2(s): if verbose >= 2: log(s) def debug3(s): if verbose >= 3: log(s) class Fatal(Exception): pass def resolvconf_nameservers(): l = [] for line in open('/etc/resolv.conf'): words = line.lower().split() if len(words) >= 2 and words[0] == 'nameserver': l.append(family_ip_tuple(words[1])) return l def resolvconf_random_nameserver(): l = resolvconf_nameservers() if l: if len(l) > 1: # don't import this unless we really need it import random random.shuffle(l) return l[0] else: return (socket.AF_INET, '127.0.0.1') def islocal(ip, family): sock = socket.socket(family) try: try: sock.bind((ip, 0)) except socket.error as e: if e.args[0] == errno.EADDRNOTAVAIL: return False # not a local IP else: raise finally: sock.close() return True # it's a local IP, or there would have been an error def family_ip_tuple(ip): if ':' in ip: return (socket.AF_INET6, ip) else: return (socket.AF_INET, ip) def family_to_string(family): if family == socket.AF_INET6: return "AF_INET6" elif family == socket.AF_INET: return "AF_INET" else: return str(family) sshuttle-0.76/sshuttle/options.py0000600000175000017500000001515512633621104017446 0ustar brianbrian00000000000000"""Command-line options parser. With the help of an options spec string, easily parse command-line options. 
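An optspec, as consumed by the Options class below, looks roughly like this
(an illustrative example only; this exact spec is not the one sshuttle uses):

    myprog [options...] <args>
    --
    v,verbose    increase verbosity (may be repeated)
    p,port=      TCP port to listen on [8080]

Lines before the '--' are usage synopses. After it, each line names one or
more comma-separated flags (a trailing '=' means the flag takes a value),
followed by a description, with an optional [default] in brackets at the end.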
""" import sys import os import textwrap import getopt import re import struct class OptDict: def __init__(self): self._opts = {} def __setitem__(self, k, v): if k.startswith('no-') or k.startswith('no_'): k = k[3:] v = not v self._opts[k] = v def __getitem__(self, k): if k.startswith('no-') or k.startswith('no_'): return not self._opts[k[3:]] return self._opts[k] def __getattr__(self, k): return self[k] def _default_onabort(msg): sys.exit(97) def _intify(v): try: vv = int(v or '') if str(vv) == v: return vv except ValueError: pass return v def _atoi(v): try: return int(v or 0) except ValueError: return 0 def _remove_negative_kv(k, v): if k.startswith('no-') or k.startswith('no_'): return k[3:], not v return k, v def _remove_negative_k(k): return _remove_negative_kv(k, None)[0] def _tty_width(): if not hasattr(sys.stderr, "fileno"): return _atoi(os.environ.get('WIDTH')) or 70 s = struct.pack("HHHH", 0, 0, 0, 0) try: import fcntl import termios s = fcntl.ioctl(sys.stderr.fileno(), termios.TIOCGWINSZ, s) except (IOError, ImportError): return _atoi(os.environ.get('WIDTH')) or 70 (ysize, xsize, ypix, xpix) = struct.unpack('HHHH', s) return xsize or 70 class Options: """Option parser. When constructed, two strings are mandatory. The first one is the command name showed before error messages. The second one is a string called an optspec that specifies the synopsis and option flags and their description. For more information about optspecs, consult the bup-options(1) man page. Two optional arguments specify an alternative parsing function and an alternative behaviour on abort (after having output the usage string). By default, the parser function is getopt.gnu_getopt, and the abort behaviour is to exit the program. """ def __init__(self, optspec, optfunc=getopt.gnu_getopt, onabort=_default_onabort): self.optspec = optspec self._onabort = onabort self.optfunc = optfunc self._aliases = {} self._shortopts = 'h?' self._longopts = ['help'] self._hasparms = {} self._defaults = {} self._usagestr = self._gen_usage() def _gen_usage(self): out = [] lines = self.optspec.strip().split('\n') lines.reverse() first_syn = True while lines: l = lines.pop() if l == '--': break out.append('%s: %s\n' % (first_syn and 'usage' or ' or', l)) first_syn = False out.append('\n') last_was_option = False while lines: l = lines.pop() if l.startswith(' '): out.append('%s%s\n' % (last_was_option and '\n' or '', l.lstrip())) last_was_option = False elif l: (flags, extra) = l.split(' ', 1) extra = extra.strip() if flags.endswith('='): flags = flags[:-1] has_parm = 1 else: has_parm = 0 g = re.search(r'\[([^\]]*)\]$', extra) if g: defval = g.group(1) else: defval = None flagl = flags.split(',') flagl_nice = [] for _f in flagl: f, dvi = _remove_negative_kv(_f, _intify(defval)) self._aliases[f] = _remove_negative_k(flagl[0]) self._hasparms[f] = has_parm self._defaults[f] = dvi if len(f) == 1: self._shortopts += f + (has_parm and ':' or '') flagl_nice.append('-' + f) else: f_nice = re.sub(r'\W', '_', f) self._aliases[f_nice] = _remove_negative_k(flagl[0]) self._longopts.append(f + (has_parm and '=' or '')) self._longopts.append('no-' + f) flagl_nice.append('--' + _f) flags_nice = ', '.join(flagl_nice) if has_parm: flags_nice += ' ...' 
prefix = ' %-20s ' % flags_nice argtext = '\n'.join(textwrap.wrap(extra, width=_tty_width(), initial_indent=prefix, subsequent_indent=' ' * 28)) out.append(argtext + '\n') last_was_option = True else: out.append('\n') last_was_option = False return ''.join(out).rstrip() + '\n' def usage(self, msg=""): """Print usage string to stderr and abort.""" sys.stderr.write(self._usagestr) e = self._onabort and self._onabort(msg) or None if e: raise e def fatal(self, s): """Print an error message to stderr and abort with usage string.""" msg = 'error: %s\n' % s sys.stderr.write(msg) return self.usage(msg) def parse(self, args): """Parse a list of arguments and return (options, flags, extra). In the returned tuple, "options" is an OptDict with known options, "flags" is a list of option flags that were used on the command-line, and "extra" is a list of positional arguments. """ try: (flags, extra) = self.optfunc( args, self._shortopts, self._longopts) except getopt.GetoptError as e: self.fatal(e) opt = OptDict() for k, v in self._defaults.items(): k = self._aliases[k] opt[k] = v for (k, v) in flags: k = k.lstrip('-') if k in ('h', '?', 'help'): self.usage() if k.startswith('no-'): k = self._aliases[k[3:]] v = 0 else: k = self._aliases[k] if not self._hasparms[k]: assert(v == '') v = (opt._opts.get(k) or 0) + 1 else: v = _intify(v) opt[k] = v for (f1, f2) in self._aliases.items(): opt[f1] = opt._opts.get(f2) return (opt, flags, extra) sshuttle-0.76/sshuttle/ssh.py0000600000175000017500000000734512633667261016570 0ustar brianbrian00000000000000import sys import os import re import socket import zlib import imp import subprocess as ssubprocess import sshuttle.helpers as helpers from sshuttle.helpers import debug2 def readfile(name): tokens = name.split(".") f = None token = tokens[0] token_name = [token] token_str = ".".join(token_name) try: f, pathname, description = imp.find_module(token_str) for token in tokens[1:]: module = imp.load_module(token_str, f, pathname, description) if f is not None: f.close() token_name.append(token) token_str = ".".join(token_name) f, pathname, description = imp.find_module( token, module.__path__) if f is not None: contents = f.read() else: contents = "" finally: if f is not None: f.close() return contents.encode("UTF8") def empackage(z, name, data=None): if not data: data = readfile(name) content = z.compress(data) content += z.flush(zlib.Z_SYNC_FLUSH) return b'%s\n%d\n%s' % (name.encode("ASCII"), len(content), content) def connect(ssh_cmd, rhostport, python, stderr, options): portl = [] if (rhostport or '').count(':') > 1: if rhostport.count(']') or rhostport.count('['): result = rhostport.split(']') rhost = result[0].strip('[') if len(result) > 1: result[1] = result[1].strip(':') if result[1] is not '': portl = ['-p', str(int(result[1]))] # can't disambiguate IPv6 colons and a port number. pass the hostname # through. 
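        # A rough sketch of how the different host/port forms end up being
        # handled here (examples only, reconstructed from the branches in
        # this function; these hostnames are purely illustrative):
        #   'example.com'         -> rhost='example.com', portl=[]
        #   'example.com:2222'    -> rhost='example.com', portl=['-p', '2222']
        #   '[2001:db8::1]:2222'  -> rhost='2001:db8::1', portl=['-p', '2222']
        #   '2001:db8::1'         -> rhost left untouched; with a bare IPv6
        #                            address a trailing port cannot be told
        #                            apart from the address itself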
else: rhost = rhostport else: # IPv4 l = (rhostport or '').split(':', 1) rhost = l[0] if len(l) > 1: portl = ['-p', str(int(l[1]))] if rhost == '-': rhost = None z = zlib.compressobj(1) content = readfile('sshuttle.assembler') optdata = ''.join("%s=%r\n" % (k, v) for (k, v) in list(options.items())) optdata = optdata.encode("UTF8") content2 = (empackage(z, 'sshuttle') + empackage(z, 'sshuttle.cmdline_options', optdata) + empackage(z, 'sshuttle.helpers') + empackage(z, 'sshuttle.ssnet') + empackage(z, 'sshuttle.hostwatch') + empackage(z, 'sshuttle.server') + b"\n") pyscript = r""" import sys; verbosity=%d; stdin=getattr(sys.stdin,"buffer",sys.stdin); exec(compile(stdin.read(%d), "assembler.py", "exec")) """ % (helpers.verbose or 0, len(content)) pyscript = re.sub(r'\s+', ' ', pyscript.strip()) if not rhost: # ignore the --python argument when running locally; we already know # which python version works. argv = [sys.argv[1], '-c', pyscript] else: if ssh_cmd: sshl = ssh_cmd.split(' ') else: sshl = ['ssh'] if python: pycmd = "'%s' -c '%s'" % (python, pyscript) else: pycmd = ("P=python3.5; $P -V 2>/dev/null || P=python; " "exec \"$P\" -c '%s'") % pyscript argv = (sshl + portl + [rhost, '--', pycmd]) (s1, s2) = socket.socketpair() def setup(): # runs in the child process s2.close() s1a, s1b = os.dup(s1.fileno()), os.dup(s1.fileno()) s1.close() debug2('executing: %r\n' % argv) p = ssubprocess.Popen(argv, stdin=s1a, stdout=s1b, preexec_fn=setup, close_fds=True, stderr=stderr) os.close(s1a) os.close(s1b) s2.sendall(content) s2.sendall(content2) return p, s2 sshuttle-0.76/sshuttle/hostwatch.py0000600000175000017500000001646512643354145017775 0ustar brianbrian00000000000000import time import socket import re import select import errno import os import sys import platform import subprocess as ssubprocess import sshuttle.helpers as helpers from sshuttle.helpers import log, debug1, debug2, debug3 POLL_TIME = 60 * 15 NETSTAT_POLL_TIME = 30 CACHEFILE = os.path.expanduser('~/.sshuttle.hosts') _nmb_ok = True _smb_ok = True hostnames = {} queue = {} try: null = open('/dev/null', 'wb') except IOError as e: log('warning: %s\n' % e) null = os.popen("sh -c 'while read x; do :; done'", 'wb', 4096) def _is_ip(s): return re.match(r'\d+\.\d+\.\d+\.\d+$', s) def write_host_cache(): tmpname = '%s.%d.tmp' % (CACHEFILE, os.getpid()) try: f = open(tmpname, 'wb') for name, ip in sorted(hostnames.items()): f.write(('%s,%s\n' % (name, ip)).encode("ASCII")) f.close() os.chmod(tmpname, 0o600) os.rename(tmpname, CACHEFILE) finally: try: os.unlink(tmpname) except: pass def read_host_cache(): try: f = open(CACHEFILE) except IOError as e: if e.errno == errno.ENOENT: return else: raise for line in f: words = line.strip().split(',') if len(words) == 2: (name, ip) = words name = re.sub(r'[^-\w]', '-', name).strip() ip = re.sub(r'[^0-9.]', '', ip).strip() if name and ip: found_host(name, ip) def found_host(hostname, ip): hostname = re.sub(r'\..*', '', hostname) hostname = re.sub(r'[^-\w]', '_', hostname) if (ip.startswith('127.') or ip.startswith('255.') or hostname == 'localhost'): return oldip = hostnames.get(hostname) if oldip != ip: hostnames[hostname] = ip debug1('Found: %s: %s\n' % (hostname, ip)) sys.stdout.write('%s,%s\n' % (hostname, ip)) write_host_cache() def _check_etc_hosts(): debug2(' > hosts\n') for line in open('/etc/hosts'): line = re.sub(r'#.*', '', line) words = line.strip().split() if not words: continue ip = words[0] names = words[1:] if _is_ip(ip): debug3('< %s %r\n' % (ip, names)) for n in names: check_host(n) 
found_host(n, ip) def _check_revdns(ip): debug2(' > rev: %s\n' % ip) try: r = socket.gethostbyaddr(ip) debug3('< %s\n' % r[0]) check_host(r[0]) found_host(r[0], ip) except socket.herror: pass def _check_dns(hostname): debug2(' > dns: %s\n' % hostname) try: ip = socket.gethostbyname(hostname) debug3('< %s\n' % ip) check_host(ip) found_host(hostname, ip) except socket.gaierror: pass def _check_netstat(): debug2(' > netstat\n') argv = ['netstat', '-n'] try: p = ssubprocess.Popen(argv, stdout=ssubprocess.PIPE, stderr=null) content = p.stdout.read().decode("ASCII") p.wait() except OSError as e: log('%r failed: %r\n' % (argv, e)) return for ip in re.findall(r'\d+\.\d+\.\d+\.\d+', content): debug3('< %s\n' % ip) check_host(ip) def _check_smb(hostname): return global _smb_ok if not _smb_ok: return argv = ['smbclient', '-U', '%', '-L', hostname] debug2(' > smb: %s\n' % hostname) try: p = ssubprocess.Popen(argv, stdout=ssubprocess.PIPE, stderr=null) lines = p.stdout.readlines() p.wait() except OSError as e: log('%r failed: %r\n' % (argv, e)) _smb_ok = False return lines.reverse() # junk at top while lines: line = lines.pop().strip() if re.match(r'Server\s+', line): break # server list section: # Server Comment # ------ ------- while lines: line = lines.pop().strip() if not line or re.match(r'-+\s+-+', line): continue if re.match(r'Workgroup\s+Master', line): break words = line.split() hostname = words[0].lower() debug3('< %s\n' % hostname) check_host(hostname) # workgroup list section: # Workgroup Master # --------- ------ while lines: line = lines.pop().strip() if re.match(r'-+\s+', line): continue if not line: break words = line.split() (workgroup, hostname) = (words[0].lower(), words[1].lower()) debug3('< group(%s) -> %s\n' % (workgroup, hostname)) check_host(hostname) check_workgroup(workgroup) if lines: assert(0) def _check_nmb(hostname, is_workgroup, is_master): return global _nmb_ok if not _nmb_ok: return argv = ['nmblookup'] + ['-M'] * is_master + ['--', hostname] debug2(' > n%d%d: %s\n' % (is_workgroup, is_master, hostname)) try: p = ssubprocess.Popen(argv, stdout=ssubprocess.PIPE, stderr=null) lines = p.stdout.readlines() rv = p.wait() except OSError as e: log('%r failed: %r\n' % (argv, e)) _nmb_ok = False return if rv: log('%r returned %d\n' % (argv, rv)) return for line in lines: m = re.match(r'(\d+\.\d+\.\d+\.\d+) (\w+)<\w\w>\n', line) if m: g = m.groups() (ip, name) = (g[0], g[1].lower()) debug3('< %s -> %s\n' % (name, ip)) if is_workgroup: _enqueue(_check_smb, ip) else: found_host(name, ip) check_host(name) def check_host(hostname): if _is_ip(hostname): _enqueue(_check_revdns, hostname) else: _enqueue(_check_dns, hostname) _enqueue(_check_smb, hostname) _enqueue(_check_nmb, hostname, False, False) def check_workgroup(hostname): _enqueue(_check_nmb, hostname, True, False) _enqueue(_check_nmb, hostname, True, True) def _enqueue(op, *args): t = (op, args) if queue.get(t) is None: queue[t] = 0 def _stdin_still_ok(timeout): r, w, x = select.select([sys.stdin.fileno()], [], [], timeout) if r: b = os.read(sys.stdin.fileno(), 4096) if not b: return False return True def hw_main(seed_hosts): if helpers.verbose >= 2: helpers.logprefix = 'HH: ' else: helpers.logprefix = 'hostwatch: ' debug1('Starting hostwatch with Python version %s\n' % platform.python_version()) read_host_cache() _enqueue(_check_etc_hosts) _enqueue(_check_netstat) check_host('localhost') check_host(socket.gethostname()) check_workgroup('workgroup') check_workgroup('-') for h in seed_hosts: check_host(h) while 1: now = 
time.time() for t, last_polled in list(queue.items()): (op, args) = t if not _stdin_still_ok(0): break maxtime = POLL_TIME if op == _check_netstat: maxtime = NETSTAT_POLL_TIME if now - last_polled > maxtime: queue[t] = time.time() op(*args) try: sys.stdout.flush() except IOError: break # FIXME: use a smarter timeout based on oldest last_polled if not _stdin_still_ok(1): break sshuttle-0.76/sshuttle/__main__.py0000600000175000017500000000014412646546200017472 0ustar brianbrian00000000000000"""Coverage.py's main entry point.""" import sys from sshuttle.cmdline import main sys.exit(main()) sshuttle-0.76/sshuttle/assembler.py0000600000175000017500000000162312633666547017747 0ustar brianbrian00000000000000import sys import zlib import imp z = zlib.decompressobj() while 1: name = stdin.readline().strip() if name: name = name.decode("ASCII") nbytes = int(stdin.readline()) if verbosity >= 2: sys.stderr.write('server: assembling %r (%d bytes)\n' % (name, nbytes)) content = z.decompress(stdin.read(nbytes)) module = imp.new_module(name) parent, _, parent_name = name.rpartition(".") if parent != "": setattr(sys.modules[parent], parent_name, module) code = compile(content, name, "exec") exec(code, module.__dict__) sys.modules[name] = module else: break sys.stderr.flush() sys.stdout.flush() import sshuttle.helpers sshuttle.helpers.verbose = verbosity import sshuttle.cmdline_options as options from sshuttle.server import main main(options.latency_control) sshuttle-0.76/sshuttle/linux.py0000600000175000017500000000356012633621104017107 0ustar brianbrian00000000000000import socket import subprocess as ssubprocess from sshuttle.helpers import log, debug1, Fatal, family_to_string def nonfatal(func, *args): try: func(*args) except Fatal as e: log('error: %s\n' % e) def ipt_chain_exists(family, table, name): if family == socket.AF_INET6: cmd = 'ip6tables' elif family == socket.AF_INET: cmd = 'iptables' else: raise Exception('Unsupported family "%s"' % family_to_string(family)) argv = [cmd, '-t', table, '-nL'] p = ssubprocess.Popen(argv, stdout=ssubprocess.PIPE) for line in p.stdout: if line.startswith(b'Chain %s ' % name.encode("ASCII")): return True rv = p.wait() if rv: raise Fatal('%r returned %d' % (argv, rv)) def ipt(family, table, *args): if family == socket.AF_INET6: argv = ['ip6tables', '-t', table] + list(args) elif family == socket.AF_INET: argv = ['iptables', '-t', table] + list(args) else: raise Exception('Unsupported family "%s"' % family_to_string(family)) debug1('>> %s\n' % ' '.join(argv)) rv = ssubprocess.call(argv) if rv: raise Fatal('%r returned %d' % (argv, rv)) _no_ttl_module = False def ipt_ttl(family, *args): global _no_ttl_module if not _no_ttl_module: # we avoid infinite loops by generating server-side connections # with ttl 42. This makes the client side not recapture those # connections, in case client == server. 
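        # Illustratively, the resulting invocation looks something like this
        # (an assumed example; the exact chain name and subnet come from the
        # firewall method that calls us, which is not shown here):
        #   iptables -t nat -A <chain> -j REDIRECT --dest 1.2.3.0/24 \
        #            -p tcp --to-ports 12300 -m ttl ! --ttl 42
        # so packets we generated ourselves (ttl 42) are never redirected again.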
try: argsplus = list(args) + ['-m', 'ttl', '!', '--ttl', '42'] ipt(family, *argsplus) except Fatal: ipt(family, *args) # we only get here if the non-ttl attempt succeeds log('sshuttle: warning: your iptables is missing ' 'the ttl module.\n') _no_ttl_module = True else: ipt(family, *args) sshuttle-0.76/sshuttle/firewall.py0000600000175000017500000002173512633676222017574 0ustar brianbrian00000000000000import errno import socket import signal import sshuttle.ssyslog as ssyslog import sys import os import platform import traceback from sshuttle.helpers import debug1, debug2, Fatal from sshuttle.methods import get_auto_method, get_method HOSTSFILE = '/etc/hosts' def rewrite_etc_hosts(hostmap, port): BAKFILE = '%s.sbak' % HOSTSFILE APPEND = '# sshuttle-firewall-%d AUTOCREATED' % port old_content = '' st = None try: old_content = open(HOSTSFILE).read() st = os.stat(HOSTSFILE) except IOError as e: if e.errno == errno.ENOENT: pass else: raise if old_content.strip() and not os.path.exists(BAKFILE): os.link(HOSTSFILE, BAKFILE) tmpname = "%s.%d.tmp" % (HOSTSFILE, port) f = open(tmpname, 'w') for line in old_content.rstrip().split('\n'): if line.find(APPEND) >= 0: continue f.write('%s\n' % line) for (name, ip) in sorted(hostmap.items()): f.write('%-30s %s\n' % ('%s %s' % (ip, name), APPEND)) f.close() if st is not None: os.chown(tmpname, st.st_uid, st.st_gid) os.chmod(tmpname, st.st_mode) else: os.chown(tmpname, 0, 0) os.chmod(tmpname, 0o600) os.rename(tmpname, HOSTSFILE) def restore_etc_hosts(port): rewrite_etc_hosts({}, port) # Isolate function that needs to be replaced for tests def setup_daemon(): if os.getuid() != 0: raise Fatal('you must be root (or enable su/sudo) to set the firewall') # don't disappear if our controlling terminal or stdout/stderr # disappears; we still have to clean up. signal.signal(signal.SIGHUP, signal.SIG_IGN) signal.signal(signal.SIGPIPE, signal.SIG_IGN) signal.signal(signal.SIGTERM, signal.SIG_IGN) signal.signal(signal.SIGINT, signal.SIG_IGN) # ctrl-c shouldn't be passed along to me. When the main sshuttle dies, # I'll die automatically. os.setsid() # because of limitations of the 'su' command, the *real* stdin/stdout # are both attached to stdout initially. Clone stdout into stdin so we # can read from it. os.dup2(1, 0) return sys.stdin, sys.stdout # This is some voodoo for setting up the kernel's transparent # proxying stuff. If subnets is empty, we just delete our sshuttle rules; # otherwise we delete it, then make them from scratch. # # This code is supposed to clean up after itself by deleting its rules on # exit. In case that fails, it's not the end of the world; future runs will # supercede it in the transproxy list, at least, so the leftover rules # are hopefully harmless. def main(method_name, syslog): stdin, stdout = setup_daemon() hostmap = {} debug1('firewall manager: Starting firewall with Python version %s\n' % platform.python_version()) if method_name == "auto": method = get_auto_method() else: method = get_method(method_name) if syslog: ssyslog.start_syslog() ssyslog.stderr_to_syslog() debug1('firewall manager: ready method name %s.\n' % method.name) stdout.write('READY %s\n' % method.name) stdout.flush() # we wait until we get some input before creating the rules. That way, # sshuttle can launch us as early as possible (and get sudo password # authentication as early in the startup process as possible). 
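    # The conversation we expect on stdin, as parsed step by step below,
    # looks roughly like this (a sketch reconstructed from this function,
    # not a formal protocol description):
    #   ROUTES
    #   <family>,<width>,<exclude>,<ip>      (one line per subnet)
    #   NSLIST
    #   <family>,<ip>                        (one line per DNS server)
    #   PORTS <port_v6>,<port_v4>,<dnsport_v6>,<dnsport_v4>
    #   GO <udp>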
line = stdin.readline(128) if not line: return # parent died; nothing to do subnets = [] if line != 'ROUTES\n': raise Fatal('firewall: expected ROUTES but got %r' % line) while 1: line = stdin.readline(128) if not line: raise Fatal('firewall: expected route but got %r' % line) elif line.startswith("NSLIST\n"): break try: (family, width, exclude, ip) = line.strip().split(',', 3) except: raise Fatal('firewall: expected route or NSLIST but got %r' % line) subnets.append((int(family), int(width), bool(int(exclude)), ip)) debug2('firewall manager: Got subnets: %r\n' % subnets) nslist = [] if line != 'NSLIST\n': raise Fatal('firewall: expected NSLIST but got %r' % line) while 1: line = stdin.readline(128) if not line: raise Fatal('firewall: expected nslist but got %r' % line) elif line.startswith("PORTS "): break try: (family, ip) = line.strip().split(',', 1) except: raise Fatal('firewall: expected nslist or PORTS but got %r' % line) nslist.append((int(family), ip)) debug2('firewall manager: Got partial nslist: %r\n' % nslist) debug2('firewall manager: Got nslist: %r\n' % nslist) if not line.startswith('PORTS '): raise Fatal('firewall: expected PORTS but got %r' % line) _, _, ports = line.partition(" ") ports = ports.split(",") if len(ports) != 4: raise Fatal('firewall: expected 4 ports but got %n' % len(ports)) port_v6 = int(ports[0]) port_v4 = int(ports[1]) dnsport_v6 = int(ports[2]) dnsport_v4 = int(ports[3]) assert(port_v6 >= 0) assert(port_v6 <= 65535) assert(port_v4 >= 0) assert(port_v4 <= 65535) assert(dnsport_v6 >= 0) assert(dnsport_v6 <= 65535) assert(dnsport_v4 >= 0) assert(dnsport_v4 <= 65535) debug2('firewall manager: Got ports: %d,%d,%d,%d\n' % (port_v6, port_v4, dnsport_v6, dnsport_v4)) line = stdin.readline(128) if not line: raise Fatal('firewall: expected GO but got %r' % line) elif not line.startswith("GO "): raise Fatal('firewall: expected GO but got %r' % line) _, _, udp = line.partition(" ") udp = bool(int(udp)) debug2('firewall manager: Got udp: %r\n' % udp) subnets_v6 = [i for i in subnets if i[0] == socket.AF_INET6] nslist_v6 = [i for i in nslist if i[0] == socket.AF_INET6] subnets_v4 = [i for i in subnets if i[0] == socket.AF_INET] nslist_v4 = [i for i in nslist if i[0] == socket.AF_INET] try: debug1('firewall manager: setting up.\n') if len(subnets_v6) > 0 or len(nslist_v6) > 0: debug2('firewall manager: setting up IPv6.\n') method.setup_firewall( port_v6, dnsport_v6, nslist_v6, socket.AF_INET6, subnets_v6, udp) if len(subnets_v4) > 0 or len(nslist_v4) > 0: debug2('firewall manager: setting up IPv4.\n') method.setup_firewall( port_v4, dnsport_v4, nslist_v4, socket.AF_INET, subnets_v4, udp) stdout.write('STARTED\n') try: stdout.flush() except IOError: # the parent process died for some reason; he's surely been loud # enough, so no reason to report another error return # Now we wait until EOF or any other kind of exception. We need # to stay running so that we don't need a *second* password # authentication at shutdown time - that cleanup is important! 
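        # What may arrive while we block here (as handled in the loop below):
        #   'HOST <name>,<ip>'  -> record the mapping and rewrite /etc/hosts
        #   any other line      -> handed to method.firewall_command(); if the
        #                          method does not recognise it, that's fatal
        #   EOF                 -> the main sshuttle process went away, so we
        #                          fall through to the cleanup code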
while 1: line = stdin.readline(128) if line.startswith('HOST '): (name, ip) = line[5:].strip().split(',', 1) hostmap[name] = ip debug2('firewall manager: setting up /etc/hosts.\n') rewrite_etc_hosts(hostmap, port_v6 or port_v4) elif line: if not method.firewall_command(line): raise Fatal('firewall: expected command, got %r' % line) else: break finally: try: debug1('firewall manager: undoing changes.\n') except: pass try: if len(subnets_v6) > 0 or len(nslist_v6) > 0: debug2('firewall manager: undoing IPv6 changes.\n') method.restore_firewall(port_v6, socket.AF_INET6, udp) except: try: debug1("firewall manager: " "Error trying to undo IPv6 firewall.\n") for line in traceback.format_exc().splitlines(): debug1("---> %s\n" % line) except: pass try: if len(subnets_v4) > 0 or len(nslist_v4) > 0: debug2('firewall manager: undoing IPv4 changes.\n') method.restore_firewall(port_v4, socket.AF_INET, udp) except: try: debug1("firewall manager: " "Error trying to undo IPv4 firewall.\n") for line in traceback.format_exc().splitlines(): debug1("firewall manager: ---> %s\n" % line) except: pass try: debug2('firewall manager: undoing /etc/hosts changes.\n') restore_etc_hosts(port_v6 or port_v4) except: try: debug1("firewall manager: " "Error trying to undo /etc/hosts changes.\n") for line in traceback.format_exc().splitlines(): debug1("firewall manager: ---> %s\n" % line) except: pass sshuttle-0.76/sshuttle/ssnet.py0000600000175000017500000004371012644310521017105 0ustar brianbrian00000000000000import struct import socket import errno import select import os from sshuttle.helpers import log, debug1, debug2, debug3, Fatal MAX_CHANNEL = 65535 # these don't exist in the socket module in python 2.3! SHUT_RD = 0 SHUT_WR = 1 SHUT_RDWR = 2 HDR_LEN = 8 CMD_EXIT = 0x4200 CMD_PING = 0x4201 CMD_PONG = 0x4202 CMD_TCP_CONNECT = 0x4203 CMD_TCP_STOP_SENDING = 0x4204 CMD_TCP_EOF = 0x4205 CMD_TCP_DATA = 0x4206 CMD_ROUTES = 0x4207 CMD_HOST_REQ = 0x4208 CMD_HOST_LIST = 0x4209 CMD_DNS_REQ = 0x420a CMD_DNS_RESPONSE = 0x420b CMD_UDP_OPEN = 0x420c CMD_UDP_DATA = 0x420d CMD_UDP_CLOSE = 0x420e cmd_to_name = { CMD_EXIT: 'EXIT', CMD_PING: 'PING', CMD_PONG: 'PONG', CMD_TCP_CONNECT: 'TCP_CONNECT', CMD_TCP_STOP_SENDING: 'TCP_STOP_SENDING', CMD_TCP_EOF: 'TCP_EOF', CMD_TCP_DATA: 'TCP_DATA', CMD_ROUTES: 'ROUTES', CMD_HOST_REQ: 'HOST_REQ', CMD_HOST_LIST: 'HOST_LIST', CMD_DNS_REQ: 'DNS_REQ', CMD_DNS_RESPONSE: 'DNS_RESPONSE', CMD_UDP_OPEN: 'UDP_OPEN', CMD_UDP_DATA: 'UDP_DATA', CMD_UDP_CLOSE: 'UDP_CLOSE', } NET_ERRS = [errno.ECONNREFUSED, errno.ETIMEDOUT, errno.EHOSTUNREACH, errno.ENETUNREACH, errno.EHOSTDOWN, errno.ENETDOWN] def _add(l, elem): if elem not in l: l.append(elem) def _fds(l): out = [] for i in l: try: out.append(i.fileno()) except AttributeError: out.append(i) out.sort() return out def _nb_clean(func, *args): try: return func(*args) except OSError as e: if e.errno not in (errno.EWOULDBLOCK, errno.EAGAIN): raise else: debug3('%s: err was: %s\n' % (func.__name__, e)) return None def _try_peername(sock): try: pn = sock.getpeername() if pn: return '%s:%s' % (pn[0], pn[1]) except socket.error as e: if e.args[0] not in (errno.ENOTCONN, errno.ENOTSOCK): raise return 'unknown' _swcount = 0 class SockWrapper: def __init__(self, rsock, wsock, connect_to=None, peername=None): global _swcount _swcount += 1 debug3('creating new SockWrapper (%d now exist)\n' % _swcount) self.exc = None self.rsock = rsock self.wsock = wsock self.shut_read = self.shut_write = False self.buf = [] self.connect_to = connect_to self.peername = peername or 
_try_peername(self.rsock) self.try_connect() def __del__(self): global _swcount _swcount -= 1 debug1('%r: deleting (%d remain)\n' % (self, _swcount)) if self.exc: debug1('%r: error was: %s\n' % (self, self.exc)) def __repr__(self): if self.rsock == self.wsock: fds = '#%d' % self.rsock.fileno() else: fds = '#%d,%d' % (self.rsock.fileno(), self.wsock.fileno()) return 'SW%s:%s' % (fds, self.peername) def seterr(self, e): if not self.exc: self.exc = e self.nowrite() self.noread() def try_connect(self): if self.connect_to and self.shut_write: self.noread() self.connect_to = None if not self.connect_to: return # already connected self.rsock.setblocking(False) debug3('%r: trying connect to %r\n' % (self, self.connect_to)) try: self.rsock.connect(self.connect_to) # connected successfully (Linux) self.connect_to = None except socket.error as e: debug3('%r: connect result: %s\n' % (self, e)) if e.args[0] == errno.EINVAL: # this is what happens when you call connect() on a socket # that is now connected but returned EINPROGRESS last time, # on BSD, on python pre-2.5.1. We need to use getsockopt() # to get the "real" error. Later pythons do this # automatically, so this code won't run. realerr = self.rsock.getsockopt(socket.SOL_SOCKET, socket.SO_ERROR) e = socket.error(realerr, os.strerror(realerr)) debug3('%r: fixed connect result: %s\n' % (self, e)) if e.args[0] in [errno.EINPROGRESS, errno.EALREADY]: pass # not connected yet elif e.args[0] == 0: # connected successfully (weird Linux bug?) # Sometimes Linux seems to return EINVAL when it isn't # invalid. This *may* be caused by a race condition # between connect() and getsockopt(SO_ERROR) (ie. it # finishes connecting in between the two, so there is no # longer an error). However, I'm not sure of that. # # I did get at least one report that the problem went away # when we added this, however. self.connect_to = None elif e.args[0] == errno.EISCONN: # connected successfully (BSD) self.connect_to = None elif e.args[0] in NET_ERRS + [errno.EACCES, errno.EPERM]: # a "normal" kind of error self.connect_to = None self.seterr(e) else: raise # error we've never heard of?! barf completely. def noread(self): if not self.shut_read: debug2('%r: done reading\n' % self) self.shut_read = True # self.rsock.shutdown(SHUT_RD) # doesn't do anything anyway def nowrite(self): if not self.shut_write: debug2('%r: done writing\n' % self) self.shut_write = True try: self.wsock.shutdown(SHUT_WR) except socket.error as e: self.seterr('nowrite: %s' % e) def too_full(self): return False # fullness is determined by the socket's select() state def uwrite(self, buf): if self.connect_to: return 0 # still connecting self.wsock.setblocking(False) try: return _nb_clean(os.write, self.wsock.fileno(), buf) except OSError as e: if e.errno == errno.EPIPE: debug1('%r: uwrite: got EPIPE\n' % self) self.nowrite() return 0 else: # unexpected error... stream is dead self.seterr('uwrite: %s' % e) return 0 def write(self, buf): assert(buf) return self.uwrite(buf) def uread(self): if self.connect_to: return None # still connecting if self.shut_read: return self.rsock.setblocking(False) try: return _nb_clean(os.read, self.rsock.fileno(), 65536) except OSError as e: self.seterr('uread: %s' % e) return b'' # unexpected error... 
we'll call it EOF def fill(self): if self.buf: return rb = self.uread() if rb: self.buf.append(rb) if rb == b'': # empty string means EOF; None means temporarily empty self.noread() def copy_to(self, outwrap): if self.buf and self.buf[0]: wrote = outwrap.write(self.buf[0]) self.buf[0] = self.buf[0][wrote:] while self.buf and not self.buf[0]: self.buf.pop(0) if not self.buf and self.shut_read: outwrap.nowrite() class Handler: def __init__(self, socks=None, callback=None): self.ok = True self.socks = socks or [] if callback: self.callback = callback def pre_select(self, r, w, x): for i in self.socks: _add(r, i) def callback(self, sock): log('--no callback defined-- %r\n' % self) (r, w, x) = select.select(self.socks, [], [], 0) for s in r: v = s.recv(4096) if not v: log('--closed-- %r\n' % self) self.socks = [] self.ok = False class Proxy(Handler): def __init__(self, wrap1, wrap2): Handler.__init__(self, [wrap1.rsock, wrap1.wsock, wrap2.rsock, wrap2.wsock]) self.wrap1 = wrap1 self.wrap2 = wrap2 def pre_select(self, r, w, x): if self.wrap1.shut_write: self.wrap2.noread() if self.wrap2.shut_write: self.wrap1.noread() if self.wrap1.connect_to: _add(w, self.wrap1.rsock) elif self.wrap1.buf: if not self.wrap2.too_full(): _add(w, self.wrap2.wsock) elif not self.wrap1.shut_read: _add(r, self.wrap1.rsock) if self.wrap2.connect_to: _add(w, self.wrap2.rsock) elif self.wrap2.buf: if not self.wrap1.too_full(): _add(w, self.wrap1.wsock) elif not self.wrap2.shut_read: _add(r, self.wrap2.rsock) def callback(self, sock): self.wrap1.try_connect() self.wrap2.try_connect() self.wrap1.fill() self.wrap2.fill() self.wrap1.copy_to(self.wrap2) self.wrap2.copy_to(self.wrap1) if self.wrap1.buf and self.wrap2.shut_write: self.wrap1.buf = [] self.wrap1.noread() if self.wrap2.buf and self.wrap1.shut_write: self.wrap2.buf = [] self.wrap2.noread() if (self.wrap1.shut_read and self.wrap2.shut_read and not self.wrap1.buf and not self.wrap2.buf): self.ok = False self.wrap1.nowrite() self.wrap2.nowrite() class Mux(Handler): def __init__(self, rsock, wsock): Handler.__init__(self, [rsock, wsock]) self.rsock = rsock self.wsock = wsock self.new_channel = self.got_dns_req = self.got_routes = None self.got_udp_open = self.got_udp_data = self.got_udp_close = None self.got_host_req = self.got_host_list = None self.channels = {} self.chani = 0 self.want = 0 self.inbuf = b'' self.outbuf = [] self.fullness = 0 self.too_full = False self.send(0, CMD_PING, b'chicken') def next_channel(self): # channel 0 is special, so we never allocate it for timeout in range(1024): self.chani += 1 if self.chani > MAX_CHANNEL: self.chani = 1 if not self.channels.get(self.chani): return self.chani def amount_queued(self): total = 0 for b in self.outbuf: total += len(b) return total def check_fullness(self): if self.fullness > 32768: if not self.too_full: self.send(0, CMD_PING, b'rttest') self.too_full = True # ob = [] # for b in self.outbuf: # (s1,s2,c) = struct.unpack('!ccH', b[:4]) # ob.append(c) # log('outbuf: %d %r\n' % (self.amount_queued(), ob)) def send(self, channel, cmd, data): assert isinstance(data, bytes) assert len(data) <= 65535 p = struct.pack('!ccHHH', b'S', b'S', channel, cmd, len(data)) + data self.outbuf.append(p) debug2(' > channel=%d cmd=%s len=%d (fullness=%d)\n' % (channel, cmd_to_name.get(cmd, hex(cmd)), len(data), self.fullness)) self.fullness += len(data) def got_packet(self, channel, cmd, data): debug2('< channel=%d cmd=%s len=%d\n' % (channel, cmd_to_name.get(cmd, hex(cmd)), len(data))) if cmd == CMD_PING: self.send(0, CMD_PONG, 
data) elif cmd == CMD_PONG: debug2('received PING response\n') self.too_full = False self.fullness = 0 elif cmd == CMD_EXIT: self.ok = False elif cmd == CMD_TCP_CONNECT: assert(not self.channels.get(channel)) if self.new_channel: self.new_channel(channel, data) elif cmd == CMD_DNS_REQ: assert(not self.channels.get(channel)) if self.got_dns_req: self.got_dns_req(channel, data) elif cmd == CMD_UDP_OPEN: assert(not self.channels.get(channel)) if self.got_udp_open: self.got_udp_open(channel, data) elif cmd == CMD_ROUTES: if self.got_routes: self.got_routes(data) else: raise Exception('got CMD_ROUTES without got_routes?') elif cmd == CMD_HOST_REQ: if self.got_host_req: self.got_host_req(data) else: raise Exception('got CMD_HOST_REQ without got_host_req?') elif cmd == CMD_HOST_LIST: if self.got_host_list: self.got_host_list(data) else: raise Exception('got CMD_HOST_LIST without got_host_list?') else: callback = self.channels.get(channel) if not callback: log('warning: closed channel %d got cmd=%s len=%d\n' % (channel, cmd_to_name.get(cmd, hex(cmd)), len(data))) else: callback(cmd, data) def flush(self): self.wsock.setblocking(False) if self.outbuf and self.outbuf[0]: wrote = _nb_clean(os.write, self.wsock.fileno(), self.outbuf[0]) debug2('mux wrote: %r/%d\n' % (wrote, len(self.outbuf[0]))) if wrote: self.outbuf[0] = self.outbuf[0][wrote:] while self.outbuf and not self.outbuf[0]: self.outbuf[0:1] = [] def fill(self): self.rsock.setblocking(False) try: b = _nb_clean(os.read, self.rsock.fileno(), 32768) except OSError as e: raise Fatal('other end: %r' % e) # log('<<< %r\n' % b) if b == b'': # EOF self.ok = False if b: self.inbuf += b def handle(self): self.fill() # log('inbuf is: (%d,%d) %r\n' # % (self.want, len(self.inbuf), self.inbuf)) while 1: if len(self.inbuf) >= (self.want or HDR_LEN): (s1, s2, channel, cmd, datalen) = \ struct.unpack('!ccHHH', self.inbuf[:HDR_LEN]) assert(s1 == b'S') assert(s2 == b'S') self.want = datalen + HDR_LEN if self.want and len(self.inbuf) >= self.want: data = self.inbuf[HDR_LEN:self.want] self.inbuf = self.inbuf[self.want:] self.want = 0 self.got_packet(channel, cmd, data) else: break def pre_select(self, r, w, x): _add(r, self.rsock) if self.outbuf: _add(w, self.wsock) def callback(self, sock): (r, w, x) = select.select([self.rsock], [self.wsock], [], 0) if self.rsock in r: self.handle() if self.outbuf and self.wsock in w: self.flush() class MuxWrapper(SockWrapper): def __init__(self, mux, channel): SockWrapper.__init__(self, mux.rsock, mux.wsock) self.mux = mux self.channel = channel self.mux.channels[channel] = self.got_packet self.socks = [] debug2('new channel: %d\n' % channel) def __del__(self): self.nowrite() SockWrapper.__del__(self) def __repr__(self): return 'SW%r:Mux#%d' % (self.peername, self.channel) def noread(self): if not self.shut_read: debug2('%r: done reading\n' % self) self.shut_read = True self.mux.send(self.channel, CMD_TCP_STOP_SENDING, b'') self.maybe_close() def nowrite(self): if not self.shut_write: debug2('%r: done writing\n' % self) self.shut_write = True self.mux.send(self.channel, CMD_TCP_EOF, b'') self.maybe_close() def maybe_close(self): if self.shut_read and self.shut_write: debug2('%r: closing connection\n' % self) # remove the mux's reference to us. The python garbage collector # will then be able to reap our object. 
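            # Leaving a None placeholder here (rather than deleting the key)
            # means a late packet for this channel falls through to the
            # "closed channel" warning in Mux.got_packet(), and next_channel()
            # treats the slot as free for reuse.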
self.mux.channels[self.channel] = None def too_full(self): return self.mux.too_full def uwrite(self, buf): if self.mux.too_full: return 0 # too much already enqueued if len(buf) > 2048: buf = buf[:2048] self.mux.send(self.channel, CMD_TCP_DATA, buf) return len(buf) def uread(self): if self.shut_read: return b'' # EOF else: return None # no data available right now def got_packet(self, cmd, data): if cmd == CMD_TCP_EOF: self.noread() elif cmd == CMD_TCP_STOP_SENDING: self.nowrite() elif cmd == CMD_TCP_DATA: self.buf.append(data) else: raise Exception('unknown command %d (%d bytes)' % (cmd, len(data))) def connect_dst(family, ip, port): debug2('Connecting to %s:%d\n' % (ip, port)) outsock = socket.socket(family) outsock.setsockopt(socket.SOL_IP, socket.IP_TTL, 42) return SockWrapper(outsock, outsock, connect_to=(ip, port), peername = '%s:%d' % (ip, port)) def runonce(handlers, mux): r = [] w = [] x = [] to_remove = [s for s in handlers if not s.ok] for h in to_remove: handlers.remove(h) for s in handlers: s.pre_select(r, w, x) debug2('Waiting: %d r=%r w=%r x=%r (fullness=%d/%d)\n' % (len(handlers), _fds(r), _fds(w), _fds(x), mux.fullness, mux.too_full)) (r, w, x) = select.select(r, w, x) debug2(' Ready: %d r=%r w=%r x=%r\n' % (len(handlers), _fds(r), _fds(w), _fds(x))) ready = r + w + x did = {} for h in handlers: for s in h.socks: if s in ready: h.callback(s) did[s] = 1 for s in ready: if s not in did: raise Fatal('socket %r was not used by any handler' % s) sshuttle-0.76/sshuttle/methods/0000700000175000017500000000000012646642532017047 5ustar brianbrian00000000000000sshuttle-0.76/sshuttle/methods/tproxy.py0000600000175000017500000002377312645402010020764 0ustar brianbrian00000000000000import struct from sshuttle.helpers import family_to_string from sshuttle.linux import ipt, ipt_ttl, ipt_chain_exists from sshuttle.methods import BaseMethod from sshuttle.helpers import debug1, debug3, Fatal recvmsg = None try: # try getting recvmsg from python import socket as pythonsocket getattr(pythonsocket.socket, "recvmsg") socket = pythonsocket recvmsg = "python" except AttributeError: # try getting recvmsg from socket_ext library try: import socket_ext getattr(socket_ext.socket, "recvmsg") socket = socket_ext recvmsg = "socket_ext" except ImportError: import socket IP_TRANSPARENT = 19 IP_ORIGDSTADDR = 20 IP_RECVORIGDSTADDR = IP_ORIGDSTADDR SOL_IPV6 = 41 IPV6_ORIGDSTADDR = 74 IPV6_RECVORIGDSTADDR = IPV6_ORIGDSTADDR if recvmsg == "python": def recv_udp(listener, bufsize): debug3('Accept UDP python using recvmsg.\n') data, ancdata, msg_flags, srcip = listener.recvmsg( 4096, socket.CMSG_SPACE(24)) dstip = None family = None for cmsg_level, cmsg_type, cmsg_data in ancdata: if cmsg_level == socket.SOL_IP and cmsg_type == IP_ORIGDSTADDR: family, port = struct.unpack('=HH', cmsg_data[0:4]) port = socket.htons(port) if family == socket.AF_INET: start = 4 length = 4 else: raise Fatal("Unsupported socket type '%s'" % family) ip = socket.inet_ntop(family, cmsg_data[start:start + length]) dstip = (ip, port) break elif cmsg_level == SOL_IPV6 and cmsg_type == IPV6_ORIGDSTADDR: family, port = struct.unpack('=HH', cmsg_data[0:4]) port = socket.htons(port) if family == socket.AF_INET6: start = 8 length = 16 else: raise Fatal("Unsupported socket type '%s'" % family) ip = socket.inet_ntop(family, cmsg_data[start:start + length]) dstip = (ip, port) break return (srcip, dstip, data) elif recvmsg == "socket_ext": def recv_udp(listener, bufsize): debug3('Accept UDP using socket_ext recvmsg.\n') srcip, data, adata, flags = 
listener.recvmsg( (bufsize,), socket.CMSG_SPACE(24)) dstip = None family = None for a in adata: if a.cmsg_level == socket.SOL_IP and a.cmsg_type == IP_ORIGDSTADDR: family, port = struct.unpack('=HH', a.cmsg_data[0:4]) port = socket.htons(port) if family == socket.AF_INET: start = 4 length = 4 else: raise Fatal("Unsupported socket type '%s'" % family) ip = socket.inet_ntop( family, a.cmsg_data[start:start + length]) dstip = (ip, port) break elif a.cmsg_level == SOL_IPV6 and a.cmsg_type == IPV6_ORIGDSTADDR: family, port = struct.unpack('=HH', a.cmsg_data[0:4]) port = socket.htons(port) if family == socket.AF_INET6: start = 8 length = 16 else: raise Fatal("Unsupported socket type '%s'" % family) ip = socket.inet_ntop( family, a.cmsg_data[start:start + length]) dstip = (ip, port) break return (srcip, dstip, data[0]) else: def recv_udp(listener, bufsize): debug3('Accept UDP using recvfrom.\n') data, srcip = listener.recvfrom(bufsize) return (srcip, None, data) class Method(BaseMethod): def get_supported_features(self): result = super(Method, self).get_supported_features() result.ipv6 = True if recvmsg is None: result.udp = False result.dns = False else: result.udp = True result.dns = True return result def get_tcp_dstip(self, sock): return sock.getsockname() def recv_udp(self, udp_listener, bufsize): srcip, dstip, data = recv_udp(udp_listener, bufsize) if not dstip: debug1( "-- ignored UDP from %r: " "couldn't determine destination IP address\n" % (srcip,)) return None return srcip, dstip, data def send_udp(self, sock, srcip, dstip, data): if not srcip: debug1( "-- ignored UDP to %r: " "couldn't determine source IP address\n" % (dstip,)) return sender = socket.socket(sock.family, socket.SOCK_DGRAM) sender.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) sender.setsockopt(socket.SOL_IP, IP_TRANSPARENT, 1) sender.bind(srcip) sender.sendto(data, dstip) sender.close() def setup_tcp_listener(self, tcp_listener): tcp_listener.setsockopt(socket.SOL_IP, IP_TRANSPARENT, 1) def setup_udp_listener(self, udp_listener): udp_listener.setsockopt(socket.SOL_IP, IP_TRANSPARENT, 1) if udp_listener.v4 is not None: udp_listener.v4.setsockopt( socket.SOL_IP, IP_RECVORIGDSTADDR, 1) if udp_listener.v6 is not None: udp_listener.v6.setsockopt(SOL_IPV6, IPV6_RECVORIGDSTADDR, 1) def setup_firewall(self, port, dnsport, nslist, family, subnets, udp): if family not in [socket.AF_INET, socket.AF_INET6]: raise Exception( 'Address family "%s" unsupported by tproxy method' % family_to_string(family)) table = "mangle" def _ipt(*args): return ipt(family, table, *args) def _ipt_ttl(*args): return ipt_ttl(family, table, *args) mark_chain = 'sshuttle-m-%s' % port tproxy_chain = 'sshuttle-t-%s' % port divert_chain = 'sshuttle-d-%s' % port # basic cleanup/setup of chains self.restore_firewall(port, family, udp) _ipt('-N', mark_chain) _ipt('-F', mark_chain) _ipt('-N', divert_chain) _ipt('-F', divert_chain) _ipt('-N', tproxy_chain) _ipt('-F', tproxy_chain) _ipt('-I', 'OUTPUT', '1', '-j', mark_chain) _ipt('-I', 'PREROUTING', '1', '-j', tproxy_chain) _ipt('-A', divert_chain, '-j', 'MARK', '--set-mark', '1') _ipt('-A', divert_chain, '-j', 'ACCEPT') _ipt('-A', tproxy_chain, '-m', 'socket', '-j', divert_chain, '-m', 'tcp', '-p', 'tcp') if udp: _ipt('-A', tproxy_chain, '-m', 'socket', '-j', divert_chain, '-m', 'udp', '-p', 'udp') for f, ip in [i for i in nslist if i[0] == family]: _ipt('-A', mark_chain, '-j', 'MARK', '--set-mark', '1', '--dest', '%s/32' % ip, '-m', 'udp', '-p', 'udp', '--dport', '53') _ipt('-A', tproxy_chain, '-j', 
'TPROXY', '--tproxy-mark', '0x1/0x1', '--dest', '%s/32' % ip, '-m', 'udp', '-p', 'udp', '--dport', '53', '--on-port', str(dnsport)) for f, swidth, sexclude, snet \ in sorted(subnets, key=lambda s: s[1], reverse=True): if sexclude: _ipt('-A', mark_chain, '-j', 'RETURN', '--dest', '%s/%s' % (snet, swidth), '-m', 'tcp', '-p', 'tcp') _ipt('-A', tproxy_chain, '-j', 'RETURN', '--dest', '%s/%s' % (snet, swidth), '-m', 'tcp', '-p', 'tcp') else: _ipt('-A', mark_chain, '-j', 'MARK', '--set-mark', '1', '--dest', '%s/%s' % (snet, swidth), '-m', 'tcp', '-p', 'tcp') _ipt('-A', tproxy_chain, '-j', 'TPROXY', '--tproxy-mark', '0x1/0x1', '--dest', '%s/%s' % (snet, swidth), '-m', 'tcp', '-p', 'tcp', '--on-port', str(port)) if udp: if sexclude: _ipt('-A', mark_chain, '-j', 'RETURN', '--dest', '%s/%s' % (snet, swidth), '-m', 'udp', '-p', 'udp') _ipt('-A', tproxy_chain, '-j', 'RETURN', '--dest', '%s/%s' % (snet, swidth), '-m', 'udp', '-p', 'udp') else: _ipt('-A', mark_chain, '-j', 'MARK', '--set-mark', '1', '--dest', '%s/%s' % (snet, swidth), '-m', 'udp', '-p', 'udp') _ipt('-A', tproxy_chain, '-j', 'TPROXY', '--tproxy-mark', '0x1/0x1', '--dest', '%s/%s' % (snet, swidth), '-m', 'udp', '-p', 'udp', '--on-port', str(port)) def restore_firewall(self, port, family, udp): if family not in [socket.AF_INET, socket.AF_INET6]: raise Exception( 'Address family "%s" unsupported by tproxy method' % family_to_string(family)) table = "mangle" def _ipt(*args): return ipt(family, table, *args) def _ipt_ttl(*args): return ipt_ttl(family, table, *args) mark_chain = 'sshuttle-m-%s' % port tproxy_chain = 'sshuttle-t-%s' % port divert_chain = 'sshuttle-d-%s' % port # basic cleanup/setup of chains if ipt_chain_exists(family, table, mark_chain): _ipt('-D', 'OUTPUT', '-j', mark_chain) _ipt('-F', mark_chain) _ipt('-X', mark_chain) if ipt_chain_exists(family, table, tproxy_chain): _ipt('-D', 'PREROUTING', '-j', tproxy_chain) _ipt('-F', tproxy_chain) _ipt('-X', tproxy_chain) if ipt_chain_exists(family, table, divert_chain): _ipt('-F', divert_chain) _ipt('-X', divert_chain) sshuttle-0.76/sshuttle/methods/pf.py0000600000175000017500000002344712642665324020042 0ustar brianbrian00000000000000import os import sys import re import socket import struct import subprocess as ssubprocess from fcntl import ioctl from ctypes import c_char, c_uint8, c_uint16, c_uint32, Union, Structure, \ sizeof, addressof, memmove from sshuttle.helpers import debug1, debug2, debug3, Fatal, family_to_string from sshuttle.methods import BaseMethod def pfctl(args, stdin=None): argv = ['pfctl'] + list(args.split(" ")) debug1('>> %s\n' % ' '.join(argv)) p = ssubprocess.Popen(argv, stdin=ssubprocess.PIPE, stdout=ssubprocess.PIPE, stderr=ssubprocess.PIPE) o = p.communicate(stdin) if p.returncode: raise Fatal('%r returned %d' % (argv, p.returncode)) return o _pf_context = {'started_by_sshuttle': False, 'Xtoken': None} _pf_fd = None class OsDefs(object): def __init__(self, platform=None): if platform is None: platform = sys.platform self.platform = platform # This are some classes and functions used to support pf in yosemite. 
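        # On Darwin the pf_state_xport union below carries an extra 32-bit
        # 'spi' member, so pfioc_natlook is larger there; the DIOC* ioctl
        # numbers computed further down are derived from these structure
        # sizes and therefore differ between platforms.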
if platform == 'darwin': class pf_state_xport(Union): _fields_ = [("port", c_uint16), ("call_id", c_uint16), ("spi", c_uint32)] else: class pf_state_xport(Union): _fields_ = [("port", c_uint16), ("call_id", c_uint16)] class pf_addr(Structure): class _pfa(Union): _fields_ = [("v4", c_uint32), # struct in_addr ("v6", c_uint32 * 4), # struct in6_addr ("addr8", c_uint8 * 16), ("addr16", c_uint16 * 8), ("addr32", c_uint32 * 4)] _fields_ = [("pfa", _pfa)] _anonymous_ = ("pfa",) class pfioc_natlook(Structure): _fields_ = [("saddr", pf_addr), ("daddr", pf_addr), ("rsaddr", pf_addr), ("rdaddr", pf_addr), ("sxport", pf_state_xport), ("dxport", pf_state_xport), ("rsxport", pf_state_xport), ("rdxport", pf_state_xport), ("af", c_uint8), # sa_family_t ("proto", c_uint8), ("proto_variant", c_uint8), ("direction", c_uint8)] self.pfioc_natlook = pfioc_natlook # sizeof(struct pfioc_rule) self.pfioc_rule = c_char * \ (3104 if platform == 'darwin' else 3040) # sizeof(struct pfioc_pooladdr) self.pfioc_pooladdr = c_char * 1136 self.MAXPATHLEN = 1024 self.DIOCNATLOOK = ( (0x40000000 | 0x80000000) | ((sizeof(pfioc_natlook) & 0x1fff) << 16) | ((ord('D')) << 8) | (23)) self.DIOCCHANGERULE = ( (0x40000000 | 0x80000000) | ((sizeof(self.pfioc_rule) & 0x1fff) << 16) | ((ord('D')) << 8) | (26)) self.DIOCBEGINADDRS = ( (0x40000000 | 0x80000000) | ((sizeof(self.pfioc_pooladdr) & 0x1fff) << 16) | ((ord('D')) << 8) | (51)) self.PF_CHANGE_ADD_TAIL = 2 self.PF_CHANGE_GET_TICKET = 6 self.PF_PASS = 0 self.PF_RDR = 8 self.PF_OUT = 2 osdefs = OsDefs() def pf_get_dev(): global _pf_fd if _pf_fd is None: _pf_fd = os.open('/dev/pf', os.O_RDWR) return _pf_fd def pf_query_nat(family, proto, src_ip, src_port, dst_ip, dst_port): [proto, family, src_port, dst_port] = [ int(v) for v in [proto, family, src_port, dst_port]] packed_src_ip = socket.inet_pton(family, src_ip) packed_dst_ip = socket.inet_pton(family, dst_ip) assert len(packed_src_ip) == len(packed_dst_ip) length = len(packed_src_ip) pnl = osdefs.pfioc_natlook() pnl.proto = proto pnl.direction = osdefs.PF_OUT pnl.af = family memmove(addressof(pnl.saddr), packed_src_ip, length) memmove(addressof(pnl.daddr), packed_dst_ip, length) pnl.sxport.port = socket.htons(src_port) pnl.dxport.port = socket.htons(dst_port) ioctl(pf_get_dev(), osdefs.DIOCNATLOOK, (c_char * sizeof(pnl)).from_address(addressof(pnl))) ip = socket.inet_ntop( pnl.af, (c_char * length).from_address(addressof(pnl.rdaddr)).raw) port = socket.ntohs(pnl.rdxport.port) return (ip, port) def pf_add_anchor_rule(type, name): ACTION_OFFSET = 0 POOL_TICKET_OFFSET = 8 ANCHOR_CALL_OFFSET = 1040 RULE_ACTION_OFFSET = 3068 if osdefs.platform == 'darwin' else 2968 pr = osdefs.pfioc_rule() ppa = osdefs.pfioc_pooladdr() ioctl(pf_get_dev(), osdefs.DIOCBEGINADDRS, ppa) memmove(addressof(pr) + POOL_TICKET_OFFSET, ppa[4:8], 4) # pool_ticket memmove(addressof(pr) + ANCHOR_CALL_OFFSET, name, min(osdefs.MAXPATHLEN, len(name))) # anchor_call = name memmove(addressof(pr) + RULE_ACTION_OFFSET, struct.pack('I', type), 4) # rule.action = type memmove(addressof(pr) + ACTION_OFFSET, struct.pack( 'I', osdefs.PF_CHANGE_GET_TICKET), 4) # action = PF_CHANGE_GET_TICKET ioctl(pf_get_dev(), osdefs.DIOCCHANGERULE, pr) memmove(addressof(pr) + ACTION_OFFSET, struct.pack( 'I', osdefs.PF_CHANGE_ADD_TAIL), 4) # action = PF_CHANGE_ADD_TAIL ioctl(pf_get_dev(), osdefs.DIOCCHANGERULE, pr) class Method(BaseMethod): def get_tcp_dstip(self, sock): pfile = self.firewall.pfile peer = sock.getpeername() proxy = sock.getsockname() argv = (sock.family, socket.IPPROTO_TCP, 
peer[0].encode("ASCII"), peer[1], proxy[0].encode("ASCII"), proxy[1]) out_line = b"QUERY_PF_NAT %d,%d,%s,%d,%s,%d\n" % argv pfile.write(out_line) pfile.flush() in_line = pfile.readline() debug2(out_line.decode("ASCII") + ' > ' + in_line.decode("ASCII")) if in_line.startswith(b'QUERY_PF_NAT_SUCCESS '): (ip, port) = in_line[21:].split(b',') return (ip.decode("ASCII"), int(port)) return sock.getsockname() def setup_firewall(self, port, dnsport, nslist, family, subnets, udp): tables = [] translating_rules = [] filtering_rules = [] if family != socket.AF_INET: raise Exception( 'Address family "%s" unsupported by pf method_name' % family_to_string(family)) if udp: raise Exception("UDP not supported by pf method_name") if len(subnets) > 0: includes = [] # If a given subnet is both included and excluded, list the # exclusion first; the table will ignore the second, opposite # definition for f, swidth, sexclude, snet in sorted( subnets, key=lambda s: (s[1], s[2]), reverse=True): includes.append(b"%s%s/%d" % (b"!" if sexclude else b"", snet.encode("ASCII"), swidth)) tables.append( b'table {%s}' % b','.join(includes)) translating_rules.append( b'rdr pass on lo0 proto tcp ' b'to -> 127.0.0.1 port %r' % port) filtering_rules.append( b'pass out route-to lo0 inet proto tcp ' b'to keep state') if len(nslist) > 0: tables.append( b'table {%s}' % b','.join([ns[1].encode("ASCII") for ns in nslist])) translating_rules.append( b'rdr pass on lo0 proto udp to ' b' port 53 -> 127.0.0.1 port %r' % dnsport) filtering_rules.append( b'pass out route-to lo0 inet proto udp to ' b' port 53 keep state') rules = b'\n'.join(tables + translating_rules + filtering_rules) \ + b'\n' assert isinstance(rules, bytes) debug3("rules:\n" + rules.decode("ASCII")) pf_status = pfctl('-s all')[0] if b'\nrdr-anchor "sshuttle" all\n' not in pf_status: pf_add_anchor_rule(osdefs.PF_RDR, b"sshuttle") if b'\nanchor "sshuttle" all\n' not in pf_status: pf_add_anchor_rule(osdefs.PF_PASS, b"sshuttle") pfctl('-a sshuttle -f /dev/stdin', rules) if osdefs.platform == "darwin": o = pfctl('-E') _pf_context['Xtoken'] = \ re.search(b'Token : (.+)', o[1]).group(1) elif b'INFO:\nStatus: Disabled' in pf_status: pfctl('-e') _pf_context['started_by_sshuttle'] = True def restore_firewall(self, port, family, udp): if family != socket.AF_INET: raise Exception( 'Address family "%s" unsupported by pf method_name' % family_to_string(family)) if udp: raise Exception("UDP not supported by pf method_name") pfctl('-a sshuttle -F all') if osdefs.platform == "darwin": if _pf_context['Xtoken'] is not None: pfctl('-X %s' % _pf_context['Xtoken'].decode("ASCII")) elif _pf_context['started_by_sshuttle']: pfctl('-d') def firewall_command(self, line): if line.startswith('QUERY_PF_NAT '): try: dst = pf_query_nat(*(line[13:].split(','))) sys.stdout.write('QUERY_PF_NAT_SUCCESS %s,%r\n' % dst) except IOError as e: sys.stdout.write('QUERY_PF_NAT_FAILURE %s\n' % e) sys.stdout.flush() return True else: return False sshuttle-0.76/sshuttle/methods/nat.py0000600000175000017500000000635412633631571020212 0ustar brianbrian00000000000000import socket from sshuttle.helpers import family_to_string from sshuttle.linux import ipt, ipt_ttl, ipt_chain_exists, nonfatal from sshuttle.methods import BaseMethod class Method(BaseMethod): # We name the chain based on the transproxy port number so that it's # possible to run multiple copies of sshuttle at the same time. 
Of course, # the multiple copies shouldn't have overlapping subnets, or only the most- # recently-started one will win (because we use "-I OUTPUT 1" instead of # "-A OUTPUT"). def setup_firewall(self, port, dnsport, nslist, family, subnets, udp): # only ipv4 supported with NAT if family != socket.AF_INET: raise Exception( 'Address family "%s" unsupported by nat method_name' % family_to_string(family)) if udp: raise Exception("UDP not supported by nat method_name") table = "nat" def _ipt(*args): return ipt(family, table, *args) def _ipt_ttl(*args): return ipt_ttl(family, table, *args) chain = 'sshuttle-%s' % port # basic cleanup/setup of chains self.restore_firewall(port, family, udp) _ipt('-N', chain) _ipt('-F', chain) _ipt('-I', 'OUTPUT', '1', '-j', chain) _ipt('-I', 'PREROUTING', '1', '-j', chain) # create new subnet entries. Note that we're sorting in a very # particular order: we need to go from most-specific (largest # swidth) to least-specific, and at any given level of specificity, # we want excludes to come first. That's why the columns are in # such a non- intuitive order. for f, swidth, sexclude, snet \ in sorted(subnets, key=lambda s: s[1], reverse=True): if sexclude: _ipt('-A', chain, '-j', 'RETURN', '--dest', '%s/%s' % (snet, swidth), '-p', 'tcp') else: _ipt_ttl('-A', chain, '-j', 'REDIRECT', '--dest', '%s/%s' % (snet, swidth), '-p', 'tcp', '--to-ports', str(port)) for f, ip in [i for i in nslist if i[0] == family]: _ipt_ttl('-A', chain, '-j', 'REDIRECT', '--dest', '%s/32' % ip, '-p', 'udp', '--dport', '53', '--to-ports', str(dnsport)) def restore_firewall(self, port, family, udp): # only ipv4 supported with NAT if family != socket.AF_INET: raise Exception( 'Address family "%s" unsupported by nat method_name' % family_to_string(family)) if udp: raise Exception("UDP not supported by nat method_name") table = "nat" def _ipt(*args): return ipt(family, table, *args) def _ipt_ttl(*args): return ipt_ttl(family, table, *args) chain = 'sshuttle-%s' % port # basic cleanup/setup of chains if ipt_chain_exists(family, table, chain): nonfatal(_ipt, '-D', 'OUTPUT', '-j', chain) nonfatal(_ipt, '-D', 'PREROUTING', '-j', chain) nonfatal(_ipt, '-F', chain) _ipt('-X', chain) sshuttle-0.76/sshuttle/methods/__init__.py0000600000175000017500000000560612633630540021161 0ustar brianbrian00000000000000import os import importlib import socket import struct import errno from sshuttle.helpers import Fatal, debug3 def original_dst(sock): try: SO_ORIGINAL_DST = 80 SOCKADDR_MIN = 16 sockaddr_in = sock.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, SOCKADDR_MIN) (proto, port, a, b, c, d) = struct.unpack('!HHBBBB', sockaddr_in[:8]) # FIXME: decoding is IPv4 only. 
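        # SO_ORIGINAL_DST returns a sockaddr_in here: family, port, then the
        # four IPv4 address octets, unpacked as '!HHBBBB'; any other family
        # would need different decoding, hence the assert below.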
assert(socket.htons(proto) == socket.AF_INET) ip = '%d.%d.%d.%d' % (a, b, c, d) return (ip, port) except socket.error as e: if e.args[0] == errno.ENOPROTOOPT: return sock.getsockname() raise class Features(object): pass class BaseMethod(object): def __init__(self, name): self.firewall = None self.name = name def set_firewall(self, firewall): self.firewall = firewall def get_supported_features(self): result = Features() result.ipv6 = False result.udp = False result.dns = True return result def get_tcp_dstip(self, sock): return original_dst(sock) def recv_udp(self, udp_listener, bufsize): debug3('Accept UDP using recvfrom.\n') data, srcip = udp_listener.recvfrom(bufsize) return (srcip, None, data) def send_udp(self, sock, srcip, dstip, data): if srcip is not None: Fatal("Method %s send_udp does not support setting srcip to %r" % (self.name, srcip)) sock.sendto(data, dstip) def setup_tcp_listener(self, tcp_listener): pass def setup_udp_listener(self, udp_listener): pass def assert_features(self, features): avail = self.get_supported_features() for key in ["udp", "dns", "ipv6"]: if getattr(features, key) and not getattr(avail, key): raise Fatal( "Feature %s not supported with method %s.\n" % (key, self.name)) def setup_firewall(self, port, dnsport, nslist, family, subnets, udp): raise NotImplementedError() def restore_firewall(self, port, family, udp): raise NotImplementedError() def firewall_command(self, line): return False def _program_exists(name): paths = (os.getenv('PATH') or os.defpath).split(os.pathsep) for p in paths: fn = '%s/%s' % (p, name) if os.path.exists(fn): return not os.path.isdir(fn) and os.access(fn, os.X_OK) def get_method(method_name): module = importlib.import_module("sshuttle.methods.%s" % method_name) return module.Method(method_name) def get_auto_method(): if _program_exists('iptables'): method_name = "nat" elif _program_exists('pfctl'): method_name = "pf" else: raise Fatal( "can't find either iptables or pfctl; check your PATH") return get_method(method_name) sshuttle-0.76/sshuttle/__init__.py0000600000175000017500000000000012645364747017515 0ustar brianbrian00000000000000sshuttle-0.76/sshuttle/stresstest.py0000700000175000017500000000503412633621104020172 0ustar brianbrian00000000000000#!/usr/bin/env python import socket import select import struct import time listener = socket.socket() listener.bind(('127.0.0.1', 0)) listener.listen(500) servers = [] clients = [] remain = {} NUMCLIENTS = 50 count = 0 while 1: if len(clients) < NUMCLIENTS: c = socket.socket() c.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) c.bind(('0.0.0.0', 0)) c.connect(listener.getsockname()) count += 1 if count >= 16384: count = 1 print('cli CREATING %d' % count) b = struct.pack('I', count) + 'x' * count remain[c] = count print('cli >> %r' % len(b)) c.send(b) c.shutdown(socket.SHUT_WR) clients.append(c) r = [listener] time.sleep(0.1) else: r = [listener] + servers + clients print('select(%d)' % len(r)) r, w, x = select.select(r, [], [], 5) assert(r) for i in r: if i == listener: s, addr = listener.accept() servers.append(s) elif i in servers: b = i.recv(4096) print('srv << %r' % len(b)) if i not in remain: assert(len(b) >= 4) want = struct.unpack('I', b[:4])[0] b = b[4:] # i.send('y'*want) else: want = remain[i] if want < len(b): print('weird wanted %d bytes, got %d: %r' % (want, len(b), b)) assert(want >= len(b)) want -= len(b) remain[i] = want if not b: # EOF if want: print('weird: eof but wanted %d more' % want) assert(want == 0) i.close() servers.remove(i) del remain[i] else: 
print('srv >> %r' % len(b)) i.send('y' * len(b)) if not want: i.shutdown(socket.SHUT_WR) elif i in clients: b = i.recv(4096) print('cli << %r' % len(b)) want = remain[i] if want < len(b): print('weird wanted %d bytes, got %d: %r' % (want, len(b), b)) assert(want >= len(b)) want -= len(b) remain[i] = want if not b: # EOF if want: print('weird: eof but wanted %d more' % want) assert(want == 0) i.close() clients.remove(i) del remain[i] listener.accept() sshuttle-0.76/sshuttle/ssyslog.py0000600000175000017500000000056012633621104017450 0ustar brianbrian00000000000000import sys import os import subprocess as ssubprocess _p = None def start_syslog(): global _p _p = ssubprocess.Popen(['logger', '-p', 'daemon.notice', '-t', 'sshuttle'], stdin=ssubprocess.PIPE) def stderr_to_syslog(): sys.stdout.flush() sys.stderr.flush() os.dup2(_p.stdin.fileno(), 2) sshuttle-0.76/sshuttle/tests/0000700000175000017500000000000012646642532016546 5ustar brianbrian00000000000000sshuttle-0.76/sshuttle/tests/test_methods_pf.py0000600000175000017500000002321212642665324022311 0ustar brianbrian00000000000000import pytest from mock import Mock, patch, call, ANY import socket from sshuttle.methods import get_method from sshuttle.helpers import Fatal from sshuttle.methods.pf import OsDefs def test_get_supported_features(): method = get_method('pf') features = method.get_supported_features() assert not features.ipv6 assert not features.udp assert features.dns @patch('sshuttle.helpers.verbose', new=3) def test_get_tcp_dstip(): sock = Mock() sock.getpeername.return_value = ("127.0.0.1", 1024) sock.getsockname.return_value = ("127.0.0.2", 1025) sock.family = socket.AF_INET firewall = Mock() firewall.pfile.readline.return_value = \ b"QUERY_PF_NAT_SUCCESS 127.0.0.3,1026\n" method = get_method('pf') method.set_firewall(firewall) assert method.get_tcp_dstip(sock) == ('127.0.0.3', 1026) assert sock.mock_calls == [ call.getpeername(), call.getsockname(), ] assert firewall.mock_calls == [ call.pfile.write(b'QUERY_PF_NAT 2,6,127.0.0.1,1024,127.0.0.2,1025\n'), call.pfile.flush(), call.pfile.readline() ] def test_recv_udp(): sock = Mock() sock.recvfrom.return_value = "11111", "127.0.0.1" method = get_method('pf') result = method.recv_udp(sock, 1024) assert sock.mock_calls == [call.recvfrom(1024)] assert result == ("127.0.0.1", None, "11111") def test_send_udp(): sock = Mock() method = get_method('pf') method.send_udp(sock, None, "127.0.0.1", "22222") assert sock.mock_calls == [call.sendto("22222", "127.0.0.1")] def test_setup_tcp_listener(): listener = Mock() method = get_method('pf') method.setup_tcp_listener(listener) assert listener.mock_calls == [] def test_setup_udp_listener(): listener = Mock() method = get_method('pf') method.setup_udp_listener(listener) assert listener.mock_calls == [] def test_assert_features(): method = get_method('pf') features = method.get_supported_features() method.assert_features(features) features.udp = True with pytest.raises(Fatal): method.assert_features(features) features.ipv6 = True with pytest.raises(Fatal): method.assert_features(features) @patch('sshuttle.methods.pf.osdefs', OsDefs('darwin')) @patch('sshuttle.methods.pf.sys.stdout') @patch('sshuttle.methods.pf.ioctl') @patch('sshuttle.methods.pf.pf_get_dev') def test_firewall_command_darwin(mock_pf_get_dev, mock_ioctl, mock_stdout): method = get_method('pf') assert not method.firewall_command("somthing") command = "QUERY_PF_NAT %d,%d,%s,%d,%s,%d\n" % ( socket.AF_INET, socket.IPPROTO_TCP, "127.0.0.1", 1025, "127.0.0.2", 1024) assert 
method.firewall_command(command) assert mock_pf_get_dev.mock_calls == [call()] assert mock_ioctl.mock_calls == [ call(mock_pf_get_dev(), 0xc0544417, ANY), ] assert mock_stdout.mock_calls == [ call.write('QUERY_PF_NAT_SUCCESS 0.0.0.0,0\n'), call.flush(), ] @patch('sshuttle.methods.pf.osdefs', OsDefs('notdarwin')) @patch('sshuttle.methods.pf.sys.stdout') @patch('sshuttle.methods.pf.ioctl') @patch('sshuttle.methods.pf.pf_get_dev') def test_firewall_command_notdarwin(mock_pf_get_dev, mock_ioctl, mock_stdout): method = get_method('pf') assert not method.firewall_command("somthing") command = "QUERY_PF_NAT %d,%d,%s,%d,%s,%d\n" % ( socket.AF_INET, socket.IPPROTO_TCP, "127.0.0.1", 1025, "127.0.0.2", 1024) assert method.firewall_command(command) assert mock_pf_get_dev.mock_calls == [call()] assert mock_ioctl.mock_calls == [ call(mock_pf_get_dev(), 0xc04c4417, ANY), ] assert mock_stdout.mock_calls == [ call.write('QUERY_PF_NAT_SUCCESS 0.0.0.0,0\n'), call.flush(), ] def pfctl(args, stdin=None): if args == '-s all': return (b'INFO:\nStatus: Disabled\nanother mary had a little lamb\n', b'little lamb\n') if args == '-E': return (b'\n', b'Token : abcdefg\n') return None @patch('sshuttle.helpers.verbose', new=3) @patch('sshuttle.methods.pf.osdefs', OsDefs('darwin')) @patch('sshuttle.methods.pf.pfctl') @patch('sshuttle.methods.pf.ioctl') @patch('sshuttle.methods.pf.pf_get_dev') def test_setup_firewall_darwin(mock_pf_get_dev, mock_ioctl, mock_pfctl): mock_pfctl.side_effect = pfctl method = get_method('pf') assert method.name == 'pf' with pytest.raises(Exception) as excinfo: method.setup_firewall( 1024, 1026, [(10, u'2404:6800:4004:80c::33')], 10, [(10, 64, False, u'2404:6800:4004:80c::'), (10, 128, True, u'2404:6800:4004:80c::101f')], True) assert str(excinfo.value) \ == 'Address family "AF_INET6" unsupported by pf method_name' assert mock_pf_get_dev.mock_calls == [] assert mock_ioctl.mock_calls == [] assert mock_pfctl.mock_calls == [] with pytest.raises(Exception) as excinfo: method.setup_firewall( 1025, 1027, [(2, u'1.2.3.33')], 2, [(2, 24, False, u'1.2.3.0'), (2, 32, True, u'1.2.3.66')], True) assert str(excinfo.value) == 'UDP not supported by pf method_name' assert mock_pf_get_dev.mock_calls == [] assert mock_ioctl.mock_calls == [] assert mock_pfctl.mock_calls == [] method.setup_firewall( 1025, 1027, [(2, u'1.2.3.33')], 2, [(2, 24, False, u'1.2.3.0'), (2, 32, True, u'1.2.3.66')], False) assert mock_ioctl.mock_calls == [ call(mock_pf_get_dev(), 0xC4704433, ANY), call(mock_pf_get_dev(), 0xCC20441A, ANY), call(mock_pf_get_dev(), 0xCC20441A, ANY), call(mock_pf_get_dev(), 0xC4704433, ANY), call(mock_pf_get_dev(), 0xCC20441A, ANY), call(mock_pf_get_dev(), 0xCC20441A, ANY), ] assert mock_pfctl.mock_calls == [ call('-s all'), call('-a sshuttle -f /dev/stdin', b'table {!1.2.3.66/32,1.2.3.0/24}\n' b'table {1.2.3.33}\n' b'rdr pass on lo0 proto tcp ' b'to -> 127.0.0.1 port 1025\n' b'rdr pass on lo0 proto udp ' b'to port 53 -> 127.0.0.1 port 1027\n' b'pass out route-to lo0 inet proto tcp ' b'to keep state\n' b'pass out route-to lo0 inet proto udp ' b'to port 53 keep state\n'), call('-E'), ] mock_pf_get_dev.reset_mock() mock_ioctl.reset_mock() mock_pfctl.reset_mock() method.restore_firewall(1025, 2, False) assert mock_ioctl.mock_calls == [] assert mock_pfctl.mock_calls == [ call('-a sshuttle -F all'), call("-X abcdefg"), ] mock_pf_get_dev.reset_mock() mock_pfctl.reset_mock() mock_ioctl.reset_mock() @patch('sshuttle.helpers.verbose', new=3) @patch('sshuttle.methods.pf.osdefs', OsDefs('notdarwin')) 
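# The non-Darwin variant expects the smaller pfioc_rule (3040 bytes), which
# yields different DIOCCHANGERULE ioctl numbers than the Darwin test above,
# and plain 'pfctl -e'/'-d' instead of the token-based '-E'/'-X' handling.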
@patch('sshuttle.methods.pf.pfctl') @patch('sshuttle.methods.pf.ioctl') @patch('sshuttle.methods.pf.pf_get_dev') def test_setup_firewall_notdarwin(mock_pf_get_dev, mock_ioctl, mock_pfctl): mock_pfctl.side_effect = pfctl method = get_method('pf') assert method.name == 'pf' with pytest.raises(Exception) as excinfo: method.setup_firewall( 1024, 1026, [(10, u'2404:6800:4004:80c::33')], 10, [(10, 64, False, u'2404:6800:4004:80c::'), (10, 128, True, u'2404:6800:4004:80c::101f')], True) assert str(excinfo.value) \ == 'Address family "AF_INET6" unsupported by pf method_name' assert mock_pf_get_dev.mock_calls == [] assert mock_ioctl.mock_calls == [] assert mock_pfctl.mock_calls == [] with pytest.raises(Exception) as excinfo: method.setup_firewall( 1025, 1027, [(2, u'1.2.3.33')], 2, [(2, 24, False, u'1.2.3.0'), (2, 32, True, u'1.2.3.66')], True) assert str(excinfo.value) == 'UDP not supported by pf method_name' assert mock_pf_get_dev.mock_calls == [] assert mock_ioctl.mock_calls == [] assert mock_pfctl.mock_calls == [] method.setup_firewall( 1025, 1027, [(2, u'1.2.3.33')], 2, [(2, 24, False, u'1.2.3.0'), (2, 32, True, u'1.2.3.66')], False) assert mock_ioctl.mock_calls == [ call(mock_pf_get_dev(), 0xC4704433, ANY), call(mock_pf_get_dev(), 0xCBE0441A, ANY), call(mock_pf_get_dev(), 0xCBE0441A, ANY), call(mock_pf_get_dev(), 0xC4704433, ANY), call(mock_pf_get_dev(), 0xCBE0441A, ANY), call(mock_pf_get_dev(), 0xCBE0441A, ANY), ] assert mock_pfctl.mock_calls == [ call('-s all'), call('-a sshuttle -f /dev/stdin', b'table {!1.2.3.66/32,1.2.3.0/24}\n' b'table {1.2.3.33}\n' b'rdr pass on lo0 proto tcp ' b'to -> 127.0.0.1 port 1025\n' b'rdr pass on lo0 proto udp ' b'to port 53 -> 127.0.0.1 port 1027\n' b'pass out route-to lo0 inet proto tcp ' b'to keep state\n' b'pass out route-to lo0 inet proto udp ' b'to port 53 keep state\n'), call('-e'), ] mock_pf_get_dev.reset_mock() mock_ioctl.reset_mock() mock_pfctl.reset_mock() method.restore_firewall(1025, 2, False) assert mock_ioctl.mock_calls == [] assert mock_pfctl.mock_calls == [ call('-a sshuttle -F all'), call("-d"), ] mock_pf_get_dev.reset_mock() mock_pfctl.reset_mock() mock_ioctl.reset_mock() sshuttle-0.76/sshuttle/tests/test_helpers.py0000600000175000017500000001366512633671203021627 0ustar brianbrian00000000000000from mock import patch, call import sys import io import socket import sshuttle.helpers @patch('sshuttle.helpers.logprefix', new='prefix: ') @patch('sshuttle.helpers.sys.stdout') @patch('sshuttle.helpers.sys.stderr') def test_log(mock_stderr, mock_stdout): sshuttle.helpers.log("message") sshuttle.helpers.log("abc") sshuttle.helpers.log("message 1\n") sshuttle.helpers.log("message 2\nline2\nline3\n") sshuttle.helpers.log("message 3\nline2\nline3") assert mock_stdout.mock_calls == [ call.flush(), call.flush(), call.flush(), call.flush(), call.flush(), ] assert mock_stderr.mock_calls == [ call.write('prefix: message'), call.flush(), call.write('prefix: abc'), call.flush(), call.write('prefix: message 1\n'), call.flush(), call.write('prefix: message 2\n'), call.write('---> line2\n'), call.write('---> line3\n'), call.flush(), call.write('prefix: message 3\n'), call.write('---> line2\n'), call.write('---> line3\n'), call.flush(), ] @patch('sshuttle.helpers.logprefix', new='prefix: ') @patch('sshuttle.helpers.verbose', new=1) @patch('sshuttle.helpers.sys.stdout') @patch('sshuttle.helpers.sys.stderr') def test_debug1(mock_stderr, mock_stdout): sshuttle.helpers.debug1("message") assert mock_stdout.mock_calls == [ call.flush(), ] assert mock_stderr.mock_calls 
== [ call.write('prefix: message'), call.flush(), ] @patch('sshuttle.helpers.logprefix', new='prefix: ') @patch('sshuttle.helpers.verbose', new=0) @patch('sshuttle.helpers.sys.stdout') @patch('sshuttle.helpers.sys.stderr') def test_debug1_nop(mock_stderr, mock_stdout): sshuttle.helpers.debug1("message") assert mock_stdout.mock_calls == [] assert mock_stderr.mock_calls == [] @patch('sshuttle.helpers.logprefix', new='prefix: ') @patch('sshuttle.helpers.verbose', new=2) @patch('sshuttle.helpers.sys.stdout') @patch('sshuttle.helpers.sys.stderr') def test_debug2(mock_stderr, mock_stdout): sshuttle.helpers.debug2("message") assert mock_stdout.mock_calls == [ call.flush(), ] assert mock_stderr.mock_calls == [ call.write('prefix: message'), call.flush(), ] @patch('sshuttle.helpers.logprefix', new='prefix: ') @patch('sshuttle.helpers.verbose', new=1) @patch('sshuttle.helpers.sys.stdout') @patch('sshuttle.helpers.sys.stderr') def test_debug2_nop(mock_stderr, mock_stdout): sshuttle.helpers.debug2("message") assert mock_stdout.mock_calls == [] assert mock_stderr.mock_calls == [] @patch('sshuttle.helpers.logprefix', new='prefix: ') @patch('sshuttle.helpers.verbose', new=3) @patch('sshuttle.helpers.sys.stdout') @patch('sshuttle.helpers.sys.stderr') def test_debug3(mock_stderr, mock_stdout): sshuttle.helpers.debug3("message") assert mock_stdout.mock_calls == [ call.flush(), ] assert mock_stderr.mock_calls == [ call.write('prefix: message'), call.flush(), ] @patch('sshuttle.helpers.logprefix', new='prefix: ') @patch('sshuttle.helpers.verbose', new=2) @patch('sshuttle.helpers.sys.stdout') @patch('sshuttle.helpers.sys.stderr') def test_debug3_nop(mock_stderr, mock_stdout): sshuttle.helpers.debug3("message") assert mock_stdout.mock_calls == [] assert mock_stderr.mock_calls == [] @patch('sshuttle.helpers.open', create=True) def test_resolvconf_nameservers(mock_open): mock_open.return_value = io.StringIO(u""" # Generated by NetworkManager search pri nameserver 192.168.1.1 nameserver 192.168.2.1 nameserver 192.168.3.1 nameserver 192.168.4.1 nameserver 2404:6800:4004:80c::1 nameserver 2404:6800:4004:80c::2 nameserver 2404:6800:4004:80c::3 nameserver 2404:6800:4004:80c::4 """) ns = sshuttle.helpers.resolvconf_nameservers() assert ns == [ (2, u'192.168.1.1'), (2, u'192.168.2.1'), (2, u'192.168.3.1'), (2, u'192.168.4.1'), (10, u'2404:6800:4004:80c::1'), (10, u'2404:6800:4004:80c::2'), (10, u'2404:6800:4004:80c::3'), (10, u'2404:6800:4004:80c::4') ] @patch('sshuttle.helpers.open', create=True) def test_resolvconf_random_nameserver(mock_open): mock_open.return_value = io.StringIO(u""" # Generated by NetworkManager search pri nameserver 192.168.1.1 nameserver 192.168.2.1 nameserver 192.168.3.1 nameserver 192.168.4.1 nameserver 2404:6800:4004:80c::1 nameserver 2404:6800:4004:80c::2 nameserver 2404:6800:4004:80c::3 nameserver 2404:6800:4004:80c::4 """) ns = sshuttle.helpers.resolvconf_random_nameserver() assert ns in [ (2, u'192.168.1.1'), (2, u'192.168.2.1'), (2, u'192.168.3.1'), (2, u'192.168.4.1'), (10, u'2404:6800:4004:80c::1'), (10, u'2404:6800:4004:80c::2'), (10, u'2404:6800:4004:80c::3'), (10, u'2404:6800:4004:80c::4') ] def test_islocal(): assert sshuttle.helpers.islocal("127.0.0.1", socket.AF_INET) assert not sshuttle.helpers.islocal("192.0.2.1", socket.AF_INET) assert sshuttle.helpers.islocal("::1", socket.AF_INET6) assert not sshuttle.helpers.islocal("2001:db8::1", socket.AF_INET6) def test_family_ip_tuple(): assert sshuttle.helpers.family_ip_tuple("127.0.0.1") \ == (socket.AF_INET, "127.0.0.1") assert 
sshuttle.helpers.family_ip_tuple("192.168.2.6") \ == (socket.AF_INET, "192.168.2.6") assert sshuttle.helpers.family_ip_tuple("::1") \ == (socket.AF_INET6, "::1") assert sshuttle.helpers.family_ip_tuple("2404:6800:4004:80c::1") \ == (socket.AF_INET6, "2404:6800:4004:80c::1") def test_family_to_string(): assert sshuttle.helpers.family_to_string(socket.AF_INET) == "AF_INET" assert sshuttle.helpers.family_to_string(socket.AF_INET6) == "AF_INET6" if sys.version_info < (3, 0): expected = "1" assert sshuttle.helpers.family_to_string(socket.AF_UNIX) == "1" else: expected = 'AddressFamily.AF_UNIX' assert sshuttle.helpers.family_to_string(socket.AF_UNIX) == expected sshuttle-0.76/sshuttle/tests/test_methods_nat.py0000600000175000017500000001166112633622633022467 0ustar brianbrian00000000000000import pytest from mock import Mock, patch, call import socket import struct from sshuttle.helpers import Fatal from sshuttle.methods import get_method def test_get_supported_features(): method = get_method('nat') features = method.get_supported_features() assert not features.ipv6 assert not features.udp assert features.dns def test_get_tcp_dstip(): sock = Mock() sock.getsockopt.return_value = struct.pack( '!HHBBBB', socket.ntohs(socket.AF_INET), 1024, 127, 0, 0, 1) method = get_method('nat') assert method.get_tcp_dstip(sock) == ('127.0.0.1', 1024) assert sock.mock_calls == [call.getsockopt(0, 80, 16)] def test_recv_udp(): sock = Mock() sock.recvfrom.return_value = "11111", "127.0.0.1" method = get_method('nat') result = method.recv_udp(sock, 1024) assert sock.mock_calls == [call.recvfrom(1024)] assert result == ("127.0.0.1", None, "11111") def test_send_udp(): sock = Mock() method = get_method('nat') method.send_udp(sock, None, "127.0.0.1", "22222") assert sock.mock_calls == [call.sendto("22222", "127.0.0.1")] def test_setup_tcp_listener(): listener = Mock() method = get_method('nat') method.setup_tcp_listener(listener) assert listener.mock_calls == [] def test_setup_udp_listener(): listener = Mock() method = get_method('nat') method.setup_udp_listener(listener) assert listener.mock_calls == [] def test_assert_features(): method = get_method('nat') features = method.get_supported_features() method.assert_features(features) features.udp = True with pytest.raises(Fatal): method.assert_features(features) features.ipv6 = True with pytest.raises(Fatal): method.assert_features(features) def test_firewall_command(): method = get_method('nat') assert not method.firewall_command("somthing") @patch('sshuttle.methods.nat.ipt') @patch('sshuttle.methods.nat.ipt_ttl') @patch('sshuttle.methods.nat.ipt_chain_exists') def test_setup_firewall(mock_ipt_chain_exists, mock_ipt_ttl, mock_ipt): mock_ipt_chain_exists.return_value = True method = get_method('nat') assert method.name == 'nat' with pytest.raises(Exception) as excinfo: method.setup_firewall( 1024, 1026, [(10, u'2404:6800:4004:80c::33')], 10, [(10, 64, False, u'2404:6800:4004:80c::'), (10, 128, True, u'2404:6800:4004:80c::101f')], True) assert str(excinfo.value) \ == 'Address family "AF_INET6" unsupported by nat method_name' assert mock_ipt_chain_exists.mock_calls == [] assert mock_ipt_ttl.mock_calls == [] assert mock_ipt.mock_calls == [] with pytest.raises(Exception) as excinfo: method.setup_firewall( 1025, 1027, [(2, u'1.2.3.33')], 2, [(2, 24, False, u'1.2.3.0'), (2, 32, True, u'1.2.3.66')], True) assert str(excinfo.value) == 'UDP not supported by nat method_name' assert mock_ipt_chain_exists.mock_calls == [] assert mock_ipt_ttl.mock_calls == [] assert 
mock_ipt.mock_calls == [] method.setup_firewall( 1025, 1027, [(2, u'1.2.3.33')], 2, [(2, 24, False, u'1.2.3.0'), (2, 32, True, u'1.2.3.66')], False) assert mock_ipt_chain_exists.mock_calls == [ call(2, 'nat', 'sshuttle-1025') ] assert mock_ipt_ttl.mock_calls == [ call(2, 'nat', '-A', 'sshuttle-1025', '-j', 'REDIRECT', '--dest', u'1.2.3.0/24', '-p', 'tcp', '--to-ports', '1025'), call(2, 'nat', '-A', 'sshuttle-1025', '-j', 'REDIRECT', '--dest', u'1.2.3.33/32', '-p', 'udp', '--dport', '53', '--to-ports', '1027') ] assert mock_ipt.mock_calls == [ call(2, 'nat', '-D', 'OUTPUT', '-j', 'sshuttle-1025'), call(2, 'nat', '-D', 'PREROUTING', '-j', 'sshuttle-1025'), call(2, 'nat', '-F', 'sshuttle-1025'), call(2, 'nat', '-X', 'sshuttle-1025'), call(2, 'nat', '-N', 'sshuttle-1025'), call(2, 'nat', '-F', 'sshuttle-1025'), call(2, 'nat', '-I', 'OUTPUT', '1', '-j', 'sshuttle-1025'), call(2, 'nat', '-I', 'PREROUTING', '1', '-j', 'sshuttle-1025'), call(2, 'nat', '-A', 'sshuttle-1025', '-j', 'RETURN', '--dest', u'1.2.3.66/32', '-p', 'tcp') ] mock_ipt_chain_exists.reset_mock() mock_ipt_ttl.reset_mock() mock_ipt.reset_mock() method.restore_firewall(1025, 2, False) assert mock_ipt_chain_exists.mock_calls == [ call(2, 'nat', 'sshuttle-1025') ] assert mock_ipt_ttl.mock_calls == [] assert mock_ipt.mock_calls == [ call(2, 'nat', '-D', 'OUTPUT', '-j', 'sshuttle-1025'), call(2, 'nat', '-D', 'PREROUTING', '-j', 'sshuttle-1025'), call(2, 'nat', '-F', 'sshuttle-1025'), call(2, 'nat', '-X', 'sshuttle-1025') ] mock_ipt_chain_exists.reset_mock() mock_ipt_ttl.reset_mock() mock_ipt.reset_mock() sshuttle-0.76/sshuttle/tests/test_firewall.py0000600000175000017500000000541412633621104021756 0ustar brianbrian00000000000000from mock import Mock, patch, call import io import sshuttle.firewall def setup_daemon(): stdin = io.StringIO(u"""ROUTES 2,24,0,1.2.3.0 2,32,1,1.2.3.66 10,64,0,2404:6800:4004:80c:: 10,128,1,2404:6800:4004:80c::101f NSLIST 2,1.2.3.33 10,2404:6800:4004:80c::33 PORTS 1024,1025,1026,1027 GO 1 HOST 1.2.3.3,existing """) stdout = Mock() return stdin, stdout def test_rewrite_etc_hosts(tmpdir): orig_hosts = tmpdir.join("hosts.orig") orig_hosts.write("1.2.3.3 existing\n") new_hosts = tmpdir.join("hosts") orig_hosts.copy(new_hosts) hostmap = { 'myhost': '1.2.3.4', 'myotherhost': '1.2.3.5', } with patch('sshuttle.firewall.HOSTSFILE', new=str(new_hosts)): sshuttle.firewall.rewrite_etc_hosts(hostmap, 10) with new_hosts.open() as f: line = f.readline() s = line.split() assert s == ['1.2.3.3', 'existing'] line = f.readline() s = line.split() assert s == ['1.2.3.4', 'myhost', '#', 'sshuttle-firewall-10', 'AUTOCREATED'] line = f.readline() s = line.split() assert s == ['1.2.3.5', 'myotherhost', '#', 'sshuttle-firewall-10', 'AUTOCREATED'] line = f.readline() assert line == "" with patch('sshuttle.firewall.HOSTSFILE', new=str(new_hosts)): sshuttle.firewall.restore_etc_hosts(10) assert orig_hosts.computehash() == new_hosts.computehash() @patch('sshuttle.firewall.rewrite_etc_hosts') @patch('sshuttle.firewall.setup_daemon') @patch('sshuttle.firewall.get_method') def test_main(mock_get_method, mock_setup_daemon, mock_rewrite_etc_hosts): stdin, stdout = setup_daemon() mock_setup_daemon.return_value = stdin, stdout mock_get_method("not_auto").name = "test" mock_get_method.reset_mock() sshuttle.firewall.main("not_auto", False) assert mock_rewrite_etc_hosts.mock_calls == [ call({'1.2.3.3': 'existing'}, 1024), call({}, 1024), ] assert stdout.mock_calls == [ call.write('READY test\n'), call.flush(), call.write('STARTED\n'), call.flush() ] 
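    # One setup_firewall()/restore_firewall() pair is expected per address
    # family: IPv6 on port 1024 (DNS port 1026) and IPv4 on port 1025
    # (DNS port 1027), matching the PORTS line fed in through setup_daemon().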
assert mock_setup_daemon.mock_calls == [call()] assert mock_get_method.mock_calls == [ call('not_auto'), call().setup_firewall( 1024, 1026, [(10, u'2404:6800:4004:80c::33')], 10, [(10, 64, False, u'2404:6800:4004:80c::'), (10, 128, True, u'2404:6800:4004:80c::101f')], True), call().setup_firewall( 1025, 1027, [(2, u'1.2.3.33')], 2, [(2, 24, False, u'1.2.3.0'), (2, 32, True, u'1.2.3.66')], True), call().restore_firewall(1024, 10, True), call().restore_firewall(1025, 2, True), ] sshuttle-0.76/sshuttle/tests/test_methods_tproxy.py0000600000175000017500000002714212633622614023252 0ustar brianbrian00000000000000from mock import Mock, patch, call from sshuttle.methods import get_method @patch("sshuttle.methods.tproxy.recvmsg") def test_get_supported_features_recvmsg(mock_recvmsg): method = get_method('tproxy') features = method.get_supported_features() assert features.ipv6 assert features.udp assert features.dns @patch("sshuttle.methods.tproxy.recvmsg", None) def test_get_supported_features_norecvmsg(): method = get_method('tproxy') features = method.get_supported_features() assert features.ipv6 assert not features.udp assert not features.dns def test_get_tcp_dstip(): sock = Mock() sock.getsockname.return_value = ('127.0.0.1', 1024) method = get_method('tproxy') assert method.get_tcp_dstip(sock) == ('127.0.0.1', 1024) assert sock.mock_calls == [call.getsockname()] @patch("sshuttle.methods.tproxy.recv_udp") def test_recv_udp(mock_recv_udp): mock_recv_udp.return_value = ("127.0.0.1", "127.0.0.2", "11111") sock = Mock() method = get_method('tproxy') result = method.recv_udp(sock, 1024) assert sock.mock_calls == [] assert mock_recv_udp.mock_calls == [call(sock, 1024)] assert result == ("127.0.0.1", "127.0.0.2", "11111") @patch("sshuttle.methods.socket.socket") def test_send_udp(mock_socket): sock = Mock() method = get_method('tproxy') method.send_udp(sock, "127.0.0.2", "127.0.0.1", "2222222") assert sock.mock_calls == [] assert mock_socket.mock_calls == [ call(sock.family, 2), call().setsockopt(1, 2, 1), call().setsockopt(0, 19, 1), call().bind('127.0.0.2'), call().sendto("2222222", '127.0.0.1'), call().close() ] def test_setup_tcp_listener(): listener = Mock() method = get_method('tproxy') method.setup_tcp_listener(listener) assert listener.mock_calls == [ call.setsockopt(0, 19, 1) ] def test_setup_udp_listener(): listener = Mock() method = get_method('tproxy') method.setup_udp_listener(listener) assert listener.mock_calls == [ call.setsockopt(0, 19, 1), call.v4.setsockopt(0, 20, 1), call.v6.setsockopt(41, 74, 1) ] def test_assert_features(): method = get_method('tproxy') features = method.get_supported_features() method.assert_features(features) def test_firewall_command(): method = get_method('tproxy') assert not method.firewall_command("somthing") @patch('sshuttle.methods.tproxy.ipt') @patch('sshuttle.methods.tproxy.ipt_ttl') @patch('sshuttle.methods.tproxy.ipt_chain_exists') def test_setup_firewall(mock_ipt_chain_exists, mock_ipt_ttl, mock_ipt): mock_ipt_chain_exists.return_value = True method = get_method('tproxy') assert method.name == 'tproxy' # IPV6 method.setup_firewall( 1024, 1026, [(10, u'2404:6800:4004:80c::33')], 10, [(10, 64, False, u'2404:6800:4004:80c::'), (10, 128, True, u'2404:6800:4004:80c::101f')], True) assert mock_ipt_chain_exists.mock_calls == [ call(10, 'mangle', 'sshuttle-m-1024'), call(10, 'mangle', 'sshuttle-t-1024'), call(10, 'mangle', 'sshuttle-d-1024') ] assert mock_ipt_ttl.mock_calls == [] assert mock_ipt.mock_calls == [ call(10, 'mangle', '-D', 'OUTPUT', '-j', 
'sshuttle-m-1024'), call(10, 'mangle', '-F', 'sshuttle-m-1024'), call(10, 'mangle', '-X', 'sshuttle-m-1024'), call(10, 'mangle', '-D', 'PREROUTING', '-j', 'sshuttle-t-1024'), call(10, 'mangle', '-F', 'sshuttle-t-1024'), call(10, 'mangle', '-X', 'sshuttle-t-1024'), call(10, 'mangle', '-F', 'sshuttle-d-1024'), call(10, 'mangle', '-X', 'sshuttle-d-1024'), call(10, 'mangle', '-N', 'sshuttle-m-1024'), call(10, 'mangle', '-F', 'sshuttle-m-1024'), call(10, 'mangle', '-N', 'sshuttle-d-1024'), call(10, 'mangle', '-F', 'sshuttle-d-1024'), call(10, 'mangle', '-N', 'sshuttle-t-1024'), call(10, 'mangle', '-F', 'sshuttle-t-1024'), call(10, 'mangle', '-I', 'OUTPUT', '1', '-j', 'sshuttle-m-1024'), call(10, 'mangle', '-I', 'PREROUTING', '1', '-j', 'sshuttle-t-1024'), call(10, 'mangle', '-A', 'sshuttle-d-1024', '-j', 'MARK', '--set-mark', '1'), call(10, 'mangle', '-A', 'sshuttle-d-1024', '-j', 'ACCEPT'), call(10, 'mangle', '-A', 'sshuttle-t-1024', '-m', 'socket', '-j', 'sshuttle-d-1024', '-m', 'tcp', '-p', 'tcp'), call(10, 'mangle', '-A', 'sshuttle-t-1024', '-m', 'socket', '-j', 'sshuttle-d-1024', '-m', 'udp', '-p', 'udp'), call(10, 'mangle', '-A', 'sshuttle-m-1024', '-j', 'MARK', '--set-mark', '1', '--dest', u'2404:6800:4004:80c::33/32', '-m', 'udp', '-p', 'udp', '--dport', '53'), call(10, 'mangle', '-A', 'sshuttle-t-1024', '-j', 'TPROXY', '--tproxy-mark', '0x1/0x1', '--dest', u'2404:6800:4004:80c::33/32', '-m', 'udp', '-p', 'udp', '--dport', '53', '--on-port', '1026'), call(10, 'mangle', '-A', 'sshuttle-m-1024', '-j', 'RETURN', '--dest', u'2404:6800:4004:80c::101f/128', '-m', 'tcp', '-p', 'tcp'), call(10, 'mangle', '-A', 'sshuttle-t-1024', '-j', 'RETURN', '--dest', u'2404:6800:4004:80c::101f/128', '-m', 'tcp', '-p', 'tcp'), call(10, 'mangle', '-A', 'sshuttle-m-1024', '-j', 'RETURN', '--dest', u'2404:6800:4004:80c::101f/128', '-m', 'udp', '-p', 'udp'), call(10, 'mangle', '-A', 'sshuttle-t-1024', '-j', 'RETURN', '--dest', u'2404:6800:4004:80c::101f/128', '-m', 'udp', '-p', 'udp'), call(10, 'mangle', '-A', 'sshuttle-m-1024', '-j', 'MARK', '--set-mark', '1', '--dest', u'2404:6800:4004:80c::/64', '-m', 'tcp', '-p', 'tcp'), call(10, 'mangle', '-A', 'sshuttle-t-1024', '-j', 'TPROXY', '--tproxy-mark', '0x1/0x1', '--dest', u'2404:6800:4004:80c::/64', '-m', 'tcp', '-p', 'tcp', '--on-port', '1024'), call(10, 'mangle', '-A', 'sshuttle-m-1024', '-j', 'MARK', '--set-mark', '1', '--dest', u'2404:6800:4004:80c::/64', '-m', 'udp', '-p', 'udp'), call(10, 'mangle', '-A', 'sshuttle-t-1024', '-j', 'TPROXY', '--tproxy-mark', '0x1/0x1', '--dest', u'2404:6800:4004:80c::/64', '-m', 'udp', '-p', 'udp', '--on-port', '1024') ] mock_ipt_chain_exists.reset_mock() mock_ipt_ttl.reset_mock() mock_ipt.reset_mock() method.restore_firewall(1025, 10, True) assert mock_ipt_chain_exists.mock_calls == [ call(10, 'mangle', 'sshuttle-m-1025'), call(10, 'mangle', 'sshuttle-t-1025'), call(10, 'mangle', 'sshuttle-d-1025') ] assert mock_ipt_ttl.mock_calls == [] assert mock_ipt.mock_calls == [ call(10, 'mangle', '-D', 'OUTPUT', '-j', 'sshuttle-m-1025'), call(10, 'mangle', '-F', 'sshuttle-m-1025'), call(10, 'mangle', '-X', 'sshuttle-m-1025'), call(10, 'mangle', '-D', 'PREROUTING', '-j', 'sshuttle-t-1025'), call(10, 'mangle', '-F', 'sshuttle-t-1025'), call(10, 'mangle', '-X', 'sshuttle-t-1025'), call(10, 'mangle', '-F', 'sshuttle-d-1025'), call(10, 'mangle', '-X', 'sshuttle-d-1025') ] mock_ipt_chain_exists.reset_mock() mock_ipt_ttl.reset_mock() mock_ipt.reset_mock() # IPV4 method.setup_firewall( 1025, 1027, [(2, u'1.2.3.33')], 2, [(2, 24, False, 
u'1.2.3.0'), (2, 32, True, u'1.2.3.66')], True) assert mock_ipt_chain_exists.mock_calls == [ call(2, 'mangle', 'sshuttle-m-1025'), call(2, 'mangle', 'sshuttle-t-1025'), call(2, 'mangle', 'sshuttle-d-1025') ] assert mock_ipt_ttl.mock_calls == [] assert mock_ipt.mock_calls == [ call(2, 'mangle', '-D', 'OUTPUT', '-j', 'sshuttle-m-1025'), call(2, 'mangle', '-F', 'sshuttle-m-1025'), call(2, 'mangle', '-X', 'sshuttle-m-1025'), call(2, 'mangle', '-D', 'PREROUTING', '-j', 'sshuttle-t-1025'), call(2, 'mangle', '-F', 'sshuttle-t-1025'), call(2, 'mangle', '-X', 'sshuttle-t-1025'), call(2, 'mangle', '-F', 'sshuttle-d-1025'), call(2, 'mangle', '-X', 'sshuttle-d-1025'), call(2, 'mangle', '-N', 'sshuttle-m-1025'), call(2, 'mangle', '-F', 'sshuttle-m-1025'), call(2, 'mangle', '-N', 'sshuttle-d-1025'), call(2, 'mangle', '-F', 'sshuttle-d-1025'), call(2, 'mangle', '-N', 'sshuttle-t-1025'), call(2, 'mangle', '-F', 'sshuttle-t-1025'), call(2, 'mangle', '-I', 'OUTPUT', '1', '-j', 'sshuttle-m-1025'), call(2, 'mangle', '-I', 'PREROUTING', '1', '-j', 'sshuttle-t-1025'), call(2, 'mangle', '-A', 'sshuttle-d-1025', '-j', 'MARK', '--set-mark', '1'), call(2, 'mangle', '-A', 'sshuttle-d-1025', '-j', 'ACCEPT'), call(2, 'mangle', '-A', 'sshuttle-t-1025', '-m', 'socket', '-j', 'sshuttle-d-1025', '-m', 'tcp', '-p', 'tcp'), call(2, 'mangle', '-A', 'sshuttle-t-1025', '-m', 'socket', '-j', 'sshuttle-d-1025', '-m', 'udp', '-p', 'udp'), call(2, 'mangle', '-A', 'sshuttle-m-1025', '-j', 'MARK', '--set-mark', '1', '--dest', u'1.2.3.33/32', '-m', 'udp', '-p', 'udp', '--dport', '53'), call(2, 'mangle', '-A', 'sshuttle-t-1025', '-j', 'TPROXY', '--tproxy-mark', '0x1/0x1', '--dest', u'1.2.3.33/32', '-m', 'udp', '-p', 'udp', '--dport', '53', '--on-port', '1027'), call(2, 'mangle', '-A', 'sshuttle-m-1025', '-j', 'RETURN', '--dest', u'1.2.3.66/32', '-m', 'tcp', '-p', 'tcp'), call(2, 'mangle', '-A', 'sshuttle-t-1025', '-j', 'RETURN', '--dest', u'1.2.3.66/32', '-m', 'tcp', '-p', 'tcp'), call(2, 'mangle', '-A', 'sshuttle-m-1025', '-j', 'RETURN', '--dest', u'1.2.3.66/32', '-m', 'udp', '-p', 'udp'), call(2, 'mangle', '-A', 'sshuttle-t-1025', '-j', 'RETURN', '--dest', u'1.2.3.66/32', '-m', 'udp', '-p', 'udp'), call(2, 'mangle', '-A', 'sshuttle-m-1025', '-j', 'MARK', '--set-mark', '1', '--dest', u'1.2.3.0/24', '-m', 'tcp', '-p', 'tcp'), call(2, 'mangle', '-A', 'sshuttle-t-1025', '-j', 'TPROXY', '--tproxy-mark', '0x1/0x1', '--dest', u'1.2.3.0/24', '-m', 'tcp', '-p', 'tcp', '--on-port', '1025'), call(2, 'mangle', '-A', 'sshuttle-m-1025', '-j', 'MARK', '--set-mark', '1', '--dest', u'1.2.3.0/24', '-m', 'udp', '-p', 'udp'), call(2, 'mangle', '-A', 'sshuttle-t-1025', '-j', 'TPROXY', '--tproxy-mark', '0x1/0x1', '--dest', u'1.2.3.0/24', '-m', 'udp', '-p', 'udp', '--on-port', '1025') ] mock_ipt_chain_exists.reset_mock() mock_ipt_ttl.reset_mock() mock_ipt.reset_mock() method.restore_firewall(1025, 2, True) assert mock_ipt_chain_exists.mock_calls == [ call(2, 'mangle', 'sshuttle-m-1025'), call(2, 'mangle', 'sshuttle-t-1025'), call(2, 'mangle', 'sshuttle-d-1025') ] assert mock_ipt_ttl.mock_calls == [] assert mock_ipt.mock_calls == [ call(2, 'mangle', '-D', 'OUTPUT', '-j', 'sshuttle-m-1025'), call(2, 'mangle', '-F', 'sshuttle-m-1025'), call(2, 'mangle', '-X', 'sshuttle-m-1025'), call(2, 'mangle', '-D', 'PREROUTING', '-j', 'sshuttle-t-1025'), call(2, 'mangle', '-F', 'sshuttle-t-1025'), call(2, 'mangle', '-X', 'sshuttle-t-1025'), call(2, 'mangle', '-F', 'sshuttle-d-1025'), call(2, 'mangle', '-X', 'sshuttle-d-1025') ] mock_ipt_chain_exists.reset_mock() 
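    # Legend for the chain names asserted above (per listening port N):
    #   sshuttle-m-N: hooked into mangle OUTPUT; MARKs locally generated
    #                 packets for included subnets with fwmark 1
    #   sshuttle-t-N: hooked into mangle PREROUTING; TPROXY rules redirect
    #                 marked traffic to the local listener via --on-port
    #   sshuttle-d-N: reached through "-m socket"; marks and ACCEPTs packets
    #                 that already belong to a diverted (established) socket
    # Excluded subnets get RETURN rules; included subnets get MARK/TPROXY.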
mock_ipt_ttl.reset_mock() mock_ipt.reset_mock() sshuttle-0.76/sshuttle/client.py0000600000175000017500000005324212646564323017245 0ustar brianbrian00000000000000import socket import errno import re import signal import time import subprocess as ssubprocess import sshuttle.helpers as helpers import os import sshuttle.ssnet as ssnet import sshuttle.ssh as ssh import sshuttle.ssyslog as ssyslog import sys import platform from sshuttle.ssnet import SockWrapper, Handler, Proxy, Mux, MuxWrapper from sshuttle.helpers import log, debug1, debug2, debug3, Fatal, islocal, \ resolvconf_nameservers from sshuttle.methods import get_method, Features _extra_fd = os.open('/dev/null', os.O_RDONLY) def got_signal(signum, frame): log('exiting on signal %d\n' % signum) sys.exit(1) _pidname = None def check_daemon(pidfile): global _pidname _pidname = os.path.abspath(pidfile) try: oldpid = open(_pidname).read(1024) except IOError as e: if e.errno == errno.ENOENT: return # no pidfile, ok else: raise Fatal("can't read %s: %s" % (_pidname, e)) if not oldpid: os.unlink(_pidname) return # invalid pidfile, ok oldpid = int(oldpid.strip() or 0) if oldpid <= 0: os.unlink(_pidname) return # invalid pidfile, ok try: os.kill(oldpid, 0) except OSError as e: if e.errno == errno.ESRCH: os.unlink(_pidname) return # outdated pidfile, ok elif e.errno == errno.EPERM: pass else: raise raise Fatal("%s: sshuttle is already running (pid=%d)" % (_pidname, oldpid)) def daemonize(): if os.fork(): os._exit(0) os.setsid() if os.fork(): os._exit(0) outfd = os.open(_pidname, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o666) try: os.write(outfd, b'%d\n' % os.getpid()) finally: os.close(outfd) os.chdir("/") # Normal exit when killed, or try/finally won't work and the pidfile won't # be deleted. signal.signal(signal.SIGTERM, got_signal) si = open('/dev/null', 'r+') os.dup2(si.fileno(), 0) os.dup2(si.fileno(), 1) si.close() def daemon_cleanup(): try: os.unlink(_pidname) except OSError as e: if e.errno == errno.ENOENT: pass else: raise class MultiListener: def __init__(self, type=socket.SOCK_STREAM, proto=0): self.v6 = socket.socket(socket.AF_INET6, type, proto) self.v4 = socket.socket(socket.AF_INET, type, proto) def setsockopt(self, level, optname, value): if self.v6: self.v6.setsockopt(level, optname, value) if self.v4: self.v4.setsockopt(level, optname, value) def add_handler(self, handlers, callback, method, mux): socks = [] if self.v6: socks.append(self.v6) if self.v4: socks.append(self.v4) handlers.append( Handler( socks, lambda sock: callback(sock, method, mux, handlers) ) ) def listen(self, backlog): if self.v6: self.v6.listen(backlog) if self.v4: try: self.v4.listen(backlog) except socket.error as e: # on some systems v4 bind will fail if the v6 suceeded, # in this case the v6 socket will receive v4 too. 
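                # (dual-stack note: when the v6 socket was bound without
                #  IPV6_V6ONLY set, it already owns the port and will
                #  typically accept v4 connections as v4-mapped addresses,
                #  which is why dropping the v4 socket here is safe)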
if e.errno == errno.EADDRINUSE and self.v6: self.v4 = None else: raise e def bind(self, address_v6, address_v4): if address_v6 and self.v6: self.v6.bind(address_v6) else: self.v6 = None if address_v4 and self.v4: self.v4.bind(address_v4) else: self.v4 = None def print_listening(self, what): if self.v6: listenip = self.v6.getsockname() debug1('%s listening on %r.\n' % (what, listenip)) debug2('%s listening with %r.\n' % (what, self.v6)) if self.v4: listenip = self.v4.getsockname() debug1('%s listening on %r.\n' % (what, listenip)) debug2('%s listening with %r.\n' % (what, self.v4)) class FirewallClient: def __init__(self, method_name): self.auto_nets = [] python_path = os.path.dirname(os.path.dirname(__file__)) argvbase = ([sys.executable, sys.argv[0]] + ['-v'] * (helpers.verbose or 0) + ['--method', method_name] + ['--firewall']) if ssyslog._p: argvbase += ['--syslog'] argv_tries = [ ['sudo', '-p', '[local sudo] Password: ', ('PYTHONPATH=%s' % python_path), '--'] + argvbase, argvbase ] # we can't use stdin/stdout=subprocess.PIPE here, as we normally would, # because stupid Linux 'su' requires that stdin be attached to a tty. # Instead, attach a *bidirectional* socket to its stdout, and use # that for talking in both directions. (s1, s2) = socket.socketpair() def setup(): # run in the child process s2.close() e = None if os.getuid() == 0: argv_tries = argv_tries[-1:] # last entry only for argv in argv_tries: try: if argv[0] == 'su': sys.stderr.write('[local su] ') self.p = ssubprocess.Popen(argv, stdout=s1, preexec_fn=setup) e = None break except OSError as e: pass self.argv = argv s1.close() if sys.version_info < (3, 0): # python 2.7 self.pfile = s2.makefile('wb+') else: # python 3.5 self.pfile = s2.makefile('rwb') if e: log('Spawning firewall manager: %r\n' % self.argv) raise Fatal(e) line = self.pfile.readline() self.check() if line[0:5] != b'READY': raise Fatal('%r expected READY, got %r' % (self.argv, line)) method_name = line[6:-1] self.method = get_method(method_name.decode("ASCII")) self.method.set_firewall(self) def setup(self, subnets_include, subnets_exclude, nslist, redirectport_v6, redirectport_v4, dnsport_v6, dnsport_v4, udp): self.subnets_include = subnets_include self.subnets_exclude = subnets_exclude self.nslist = nslist self.redirectport_v6 = redirectport_v6 self.redirectport_v4 = redirectport_v4 self.dnsport_v6 = dnsport_v6 self.dnsport_v4 = dnsport_v4 self.udp = udp def check(self): rv = self.p.poll() if rv: raise Fatal('%r returned %d' % (self.argv, rv)) def start(self): self.pfile.write(b'ROUTES\n') for (family, ip, width) in self.subnets_include + self.auto_nets: self.pfile.write(b'%d,%d,0,%s\n' % (family, width, ip.encode("ASCII"))) for (family, ip, width) in self.subnets_exclude: self.pfile.write(b'%d,%d,1,%s\n' % (family, width, ip.encode("ASCII"))) self.pfile.write(b'NSLIST\n') for (family, ip) in self.nslist: self.pfile.write(b'%d,%s\n' % (family, ip.encode("ASCII"))) self.pfile.write( b'PORTS %d,%d,%d,%d\n' % (self.redirectport_v6, self.redirectport_v4, self.dnsport_v6, self.dnsport_v4)) udp = 0 if self.udp: udp = 1 self.pfile.write(b'GO %d\n' % udp) self.pfile.flush() line = self.pfile.readline() self.check() if line != b'STARTED\n': raise Fatal('%r expected STARTED, got %r' % (self.argv, line)) def sethostip(self, hostname, ip): assert(not re.search(b'[^-\w]', hostname)) assert(not re.search(b'[^0-9.]', ip)) self.pfile.write(b'HOST %s,%s\n' % (hostname, ip)) self.pfile.flush() def done(self): self.pfile.close() rv = self.p.wait() if rv: raise Fatal('cleanup: 
%r returned %d' % (self.argv, rv)) dnsreqs = {} udp_by_src = {} def expire_connections(now, mux): remove = [] for chan, timeout in dnsreqs.items(): if timeout < now: debug3('expiring dnsreqs channel=%d\n' % chan) remove.append(chan) del mux.channels[chan] for chan in remove: del dnsreqs[chan] debug3('Remaining DNS requests: %d\n' % len(dnsreqs)) remove = [] for peer, (chan, timeout) in udp_by_src.items(): if timeout < now: debug3('expiring UDP channel channel=%d peer=%r\n' % (chan, peer)) mux.send(chan, ssnet.CMD_UDP_CLOSE, b'') remove.append(peer) del mux.channels[chan] for peer in remove: del udp_by_src[peer] debug3('Remaining UDP channels: %d\n' % len(udp_by_src)) def onaccept_tcp(listener, method, mux, handlers): global _extra_fd try: sock, srcip = listener.accept() except socket.error as e: if e.args[0] in [errno.EMFILE, errno.ENFILE]: debug1('Rejected incoming connection: too many open files!\n') # free up an fd so we can eat the connection os.close(_extra_fd) try: sock, srcip = listener.accept() sock.close() finally: _extra_fd = os.open('/dev/null', os.O_RDONLY) return else: raise dstip = method.get_tcp_dstip(sock) debug1('Accept TCP: %s:%r -> %s:%r.\n' % (srcip[0], srcip[1], dstip[0], dstip[1])) if dstip[1] == sock.getsockname()[1] and islocal(dstip[0], sock.family): debug1("-- ignored: that's my address!\n") sock.close() return chan = mux.next_channel() if not chan: log('warning: too many open channels. Discarded connection.\n') sock.close() return mux.send(chan, ssnet.CMD_TCP_CONNECT, b'%d,%s,%d' % (sock.family, dstip[0].encode("ASCII"), dstip[1])) outwrap = MuxWrapper(mux, chan) handlers.append(Proxy(SockWrapper(sock, sock), outwrap)) expire_connections(time.time(), mux) def udp_done(chan, data, method, sock, dstip): (src, srcport, data) = data.split(b",", 2) srcip = (src, int(srcport)) debug3('doing send from %r to %r\n' % (srcip, dstip,)) method.send_udp(sock, srcip, dstip, data) def onaccept_udp(listener, method, mux, handlers): now = time.time() t = method.recv_udp(listener, 4096) if t is None: return srcip, dstip, data = t debug1('Accept UDP: %r -> %r.\n' % (srcip, dstip,)) if srcip in udp_by_src: chan, timeout = udp_by_src[srcip] else: chan = mux.next_channel() mux.channels[chan] = lambda cmd, data: udp_done( chan, data, method, listener, dstip=srcip) mux.send(chan, ssnet.CMD_UDP_OPEN, b"%d" % listener.family) udp_by_src[srcip] = chan, now + 30 hdr = b"%s,%d," % (dstip[0].encode("ASCII"), dstip[1]) mux.send(chan, ssnet.CMD_UDP_DATA, hdr + data) expire_connections(now, mux) def dns_done(chan, data, method, sock, srcip, dstip, mux): debug3('dns_done: channel=%d src=%r dst=%r\n' % (chan, srcip, dstip)) del mux.channels[chan] del dnsreqs[chan] method.send_udp(sock, srcip, dstip, data) def ondns(listener, method, mux, handlers): now = time.time() t = method.recv_udp(listener, 4096) if t is None: return srcip, dstip, data = t debug1('DNS request from %r to %r: %d bytes\n' % (srcip, dstip, len(data))) chan = mux.next_channel() dnsreqs[chan] = now + 30 mux.send(chan, ssnet.CMD_DNS_REQ, data) mux.channels[chan] = lambda cmd, data: dns_done( chan, data, method, listener, srcip=dstip, dstip=srcip, mux=mux) expire_connections(now, mux) def _main(tcp_listener, udp_listener, fw, ssh_cmd, remotename, python, latency_control, dns_listener, seed_hosts, auto_nets, daemon): debug1('Starting client with Python version %s\n' % platform.python_version()) method = fw.method handlers = [] if helpers.verbose >= 1: helpers.logprefix = 'c : ' else: helpers.logprefix = 'client: ' debug1('connecting 
to server...\n') try: (serverproc, serversock) = ssh.connect( ssh_cmd, remotename, python, stderr=ssyslog._p and ssyslog._p.stdin, options=dict(latency_control=latency_control)) except socket.error as e: if e.args[0] == errno.EPIPE: raise Fatal("failed to establish ssh session (1)") else: raise mux = Mux(serversock, serversock) handlers.append(mux) expected = b'SSHUTTLE0001' try: v = 'x' while v and v != b'\0': v = serversock.recv(1) v = 'x' while v and v != b'\0': v = serversock.recv(1) initstring = serversock.recv(len(expected)) except socket.error as e: if e.args[0] == errno.ECONNRESET: raise Fatal("failed to establish ssh session (2)") else: raise rv = serverproc.poll() if rv: raise Fatal('server died with error code %d' % rv) if initstring != expected: raise Fatal('expected server init string %r; got %r' % (expected, initstring)) log('Connected.\n') sys.stdout.flush() if daemon: daemonize() log('daemonizing (%s).\n' % _pidname) def onroutes(routestr): if auto_nets: for line in routestr.strip().split(b'\n'): (family, ip, width) = line.split(b',', 2) family = int(family) width = int(width) ip = ip.decode("ASCII") if family == socket.AF_INET6 and tcp_listener.v6 is None: debug2("Ignored auto net %d/%s/%d\n" % (family, ip, width)) if family == socket.AF_INET and tcp_listener.v4 is None: debug2("Ignored auto net %d/%s/%d\n" % (family, ip, width)) else: debug2("Adding auto net %d/%s/%d\n" % (family, ip, width)) fw.auto_nets.append((family, ip, width)) # we definitely want to do this *after* starting ssh, or we might end # up intercepting the ssh connection! # # Moreover, now that we have the --auto-nets option, we have to wait # for the server to send us that message anyway. Even if we haven't # set --auto-nets, we might as well wait for the message first, then # ignore its contents. 
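    # (For reference: the CMD_ROUTES payload built by the server is a
    #  newline-separated list of "family,ip,width" records taken from its
    #  routing table, e.g. b"2,192.168.42.0,24" for an IPv4 /24; onroutes()
    #  above splits each line on ',' and, with --auto-nets, records the
    #  subnet in fw.auto_nets.)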
mux.got_routes = None fw.start() mux.got_routes = onroutes def onhostlist(hostlist): debug2('got host list: %r\n' % hostlist) for line in hostlist.strip().split(): if line: name, ip = line.split(b',', 1) fw.sethostip(name, ip) mux.got_host_list = onhostlist tcp_listener.add_handler(handlers, onaccept_tcp, method, mux) if udp_listener: udp_listener.add_handler(handlers, onaccept_udp, method, mux) if dns_listener: dns_listener.add_handler(handlers, ondns, method, mux) if seed_hosts is not None: debug1('seed_hosts: %r\n' % seed_hosts) mux.send(0, ssnet.CMD_HOST_REQ, str.encode('\n'.join(seed_hosts))) while 1: rv = serverproc.poll() if rv: raise Fatal('server died with error code %d' % rv) ssnet.runonce(handlers, mux) if latency_control: mux.check_fullness() def main(listenip_v6, listenip_v4, ssh_cmd, remotename, python, latency_control, dns, nslist, method_name, seed_hosts, auto_nets, subnets_include, subnets_exclude, daemon, pidfile): if daemon: try: check_daemon(pidfile) except Fatal as e: log("%s\n" % e) return 5 debug1('Starting sshuttle proxy.\n') fw = FirewallClient(method_name) # Get family specific subnet lists if dns: nslist += resolvconf_nameservers() subnets = subnets_include + subnets_exclude # we don't care here subnets_v6 = [i for i in subnets if i[0] == socket.AF_INET6] nslist_v6 = [i for i in nslist if i[0] == socket.AF_INET6] subnets_v4 = [i for i in subnets if i[0] == socket.AF_INET] nslist_v4 = [i for i in nslist if i[0] == socket.AF_INET] # Check features available avail = fw.method.get_supported_features() required = Features() if listenip_v6 == "auto": if avail.ipv6: listenip_v6 = ('::1', 0) else: listenip_v6 = None required.ipv6 = len(subnets_v6) > 0 or len(nslist_v6) > 0 \ or listenip_v6 is not None required.udp = avail.udp required.dns = len(nslist) > 0 fw.method.assert_features(required) if required.ipv6 and listenip_v6 is None: raise Fatal("IPv6 required but not listening.") # display features enabled debug1("IPv6 enabled: %r\n" % required.ipv6) debug1("UDP enabled: %r\n" % required.udp) debug1("DNS enabled: %r\n" % required.dns) # bind to required ports if listenip_v4 == "auto": listenip_v4 = ('127.0.0.1', 0) if listenip_v6 and listenip_v6[1] and listenip_v4 and listenip_v4[1]: # if both ports given, no need to search for a spare port ports = [0, ] else: # if at least one port missing, we have to search ports = range(12300, 9000, -1) # search for free ports and try to bind last_e = None redirectport_v6 = 0 redirectport_v4 = 0 bound = False debug2('Binding redirector:') for port in ports: debug2(' %d' % port) tcp_listener = MultiListener() tcp_listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) if required.udp: udp_listener = MultiListener(socket.SOCK_DGRAM) udp_listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) else: udp_listener = None if listenip_v6 and listenip_v6[1]: lv6 = listenip_v6 redirectport_v6 = lv6[1] elif listenip_v6: lv6 = (listenip_v6[0], port) redirectport_v6 = port else: lv6 = None redirectport_v6 = 0 if listenip_v4 and listenip_v4[1]: lv4 = listenip_v4 redirectport_v4 = lv4[1] elif listenip_v4: lv4 = (listenip_v4[0], port) redirectport_v4 = port else: lv4 = None redirectport_v4 = 0 try: tcp_listener.bind(lv6, lv4) if udp_listener: udp_listener.bind(lv6, lv4) bound = True break except socket.error as e: if e.errno == errno.EADDRINUSE: last_e = e else: raise e debug2('\n') if not bound: assert(last_e) raise last_e tcp_listener.listen(10) tcp_listener.print_listening("TCP redirector") if udp_listener: 
udp_listener.print_listening("UDP redirector") bound = False if required.dns: # search for spare port for DNS debug2('Binding DNS:') ports = range(12300, 9000, -1) for port in ports: debug2(' %d' % port) dns_listener = MultiListener(socket.SOCK_DGRAM) if listenip_v6: lv6 = (listenip_v6[0], port) dnsport_v6 = port else: lv6 = None dnsport_v6 = 0 if listenip_v4: lv4 = (listenip_v4[0], port) dnsport_v4 = port else: lv4 = None dnsport_v4 = 0 try: dns_listener.bind(lv6, lv4) bound = True break except socket.error as e: if e.errno == errno.EADDRINUSE: last_e = e else: raise e debug2('\n') dns_listener.print_listening("DNS") if not bound: assert(last_e) raise last_e else: dnsport_v6 = 0 dnsport_v4 = 0 dns_listener = None # Last minute sanity checks. # These should never fail. # If these do fail, something is broken above. if len(subnets_v6) > 0: assert required.ipv6 if redirectport_v6 == 0: raise Fatal("IPv6 subnets defined but not listening") if len(nslist_v6) > 0: assert required.dns assert required.ipv6 if dnsport_v6 == 0: raise Fatal("IPv6 ns servers defined but not listening") if len(subnets_v4) > 0: if redirectport_v4 == 0: raise Fatal("IPv4 subnets defined but not listening") if len(nslist_v4) > 0: if dnsport_v4 == 0: raise Fatal("IPv4 ns servers defined but not listening") # setup method specific stuff on listeners fw.method.setup_tcp_listener(tcp_listener) if udp_listener: fw.method.setup_udp_listener(udp_listener) if dns_listener: fw.method.setup_udp_listener(dns_listener) # start the firewall fw.setup(subnets_include, subnets_exclude, nslist, redirectport_v6, redirectport_v4, dnsport_v6, dnsport_v4, required.udp) # start the client process try: return _main(tcp_listener, udp_listener, fw, ssh_cmd, remotename, python, latency_control, dns_listener, seed_hosts, auto_nets, daemon) finally: try: if daemon: # it's not our child anymore; can't waitpid fw.p.returncode = 0 fw.done() finally: if daemon: daemon_cleanup() sshuttle-0.76/sshuttle/cmdline.py0000600000175000017500000002054612646546316017405 0ustar brianbrian00000000000000import sys import re import socket import sshuttle.helpers as helpers import sshuttle.options as options import sshuttle.client as client import sshuttle.firewall as firewall import sshuttle.hostwatch as hostwatch import sshuttle.ssyslog as ssyslog from sshuttle.helpers import family_ip_tuple, log, Fatal # 1.2.3.4/5 or just 1.2.3.4 def parse_subnet4(s): m = re.match(r'(\d+)(?:\.(\d+)\.(\d+)\.(\d+))?(?:/(\d+))?$', s) if not m: raise Fatal('%r is not a valid IP subnet format' % s) (a, b, c, d, width) = m.groups() (a, b, c, d) = (int(a or 0), int(b or 0), int(c or 0), int(d or 0)) if width is None: width = 32 else: width = int(width) if a > 255 or b > 255 or c > 255 or d > 255: raise Fatal('%d.%d.%d.%d has numbers > 255' % (a, b, c, d)) if width > 32: raise Fatal('*/%d is greater than the maximum of 32' % width) return(socket.AF_INET, '%d.%d.%d.%d' % (a, b, c, d), width) # 1:2::3/64 or just 1:2::3 def parse_subnet6(s): m = re.match(r'(?:([a-fA-F\d:]+))?(?:/(\d+))?$', s) if not m: raise Fatal('%r is not a valid IP subnet format' % s) (net, width) = m.groups() if width is None: width = 128 else: width = int(width) if width > 128: raise Fatal('*/%d is greater than the maximum of 128' % width) return(socket.AF_INET6, net, width) # Subnet file, supporting empty lines and hash-started comment lines def parse_subnet_file(s): try: handle = open(s, 'r') except OSError: raise Fatal('Unable to open subnet file: %s' % s) raw_config_lines = handle.readlines() config_lines = [] for 
line_no, line in enumerate(raw_config_lines): line = line.strip() if len(line) == 0: continue if line[0] == '#': continue config_lines.append(line) return config_lines # list of: # 1.2.3.4/5 or just 1.2.3.4 # 1:2::3/64 or just 1:2::3 def parse_subnets(subnets_str): subnets = [] for s in subnets_str: if ':' in s: subnet = parse_subnet6(s) else: subnet = parse_subnet4(s) subnets.append(subnet) return subnets # 1.2.3.4:567 or just 1.2.3.4 or just 567 def parse_ipport4(s): s = str(s) m = re.match(r'(?:(\d+)\.(\d+)\.(\d+)\.(\d+))?(?::)?(?:(\d+))?$', s) if not m: raise Fatal('%r is not a valid IP:port format' % s) (a, b, c, d, port) = m.groups() (a, b, c, d, port) = (int(a or 0), int(b or 0), int(c or 0), int(d or 0), int(port or 0)) if a > 255 or b > 255 or c > 255 or d > 255: raise Fatal('%d.%d.%d.%d has numbers > 255' % (a, b, c, d)) if port > 65535: raise Fatal('*:%d is greater than the maximum of 65535' % port) if a is None: a = b = c = d = 0 return ('%d.%d.%d.%d' % (a, b, c, d), port) # [1:2::3]:456 or [1:2::3] or 456 def parse_ipport6(s): s = str(s) m = re.match(r'(?:\[([^]]*)])?(?::)?(?:(\d+))?$', s) if not m: raise Fatal('%s is not a valid IP:port format' % s) (ip, port) = m.groups() (ip, port) = (ip or '::', int(port or 0)) return (ip, port) def parse_list(list): return re.split(r'[\s,]+', list.strip()) if list else [] optspec = """ sshuttle [-l [ip:]port] [-r [username@]sshserver[:port]] sshuttle --firewall sshuttle --hostwatch -- l,listen= transproxy to this ip address and port number H,auto-hosts scan for remote hostnames and update local /etc/hosts N,auto-nets automatically determine subnets to route dns capture local DNS requests and forward to the remote DNS server ns-hosts= capture and forward remote DNS requests to the following servers method= auto, nat, tproxy or pf python= path to python interpreter on the remote server r,remote= ssh hostname (and optional username) of remote sshuttle server x,exclude= exclude this subnet (can be used more than once) X,exclude-from= exclude the subnets in a file (whitespace separated) v,verbose increase debug message verbosity V,version print the sshuttle version number and exit e,ssh-cmd= the command to use to connect to the remote [ssh] seed-hosts= with -H, use these hostnames for initial scan (comma-separated) no-latency-control sacrifice latency to improve bandwidth benchmarks wrap= restart counting channel numbers after this number (for testing) disable-ipv6 disables ipv6 support D,daemon run in the background as a daemon s,subnets= file where the subnets are stored, instead of on the command line syslog send log messages to syslog (default if you use --daemon) pidfile= pidfile name (only if using --daemon) [./sshuttle.pid] server (internal use only) firewall (internal use only) hostwatch (internal use only) """ def main(): o = options.Options(optspec) (opt, flags, extra) = o.parse(sys.argv[1:]) if opt.version: from sshuttle.version import version print(version) return 0 if opt.daemon: opt.syslog = 1 if opt.wrap: import sshuttle.ssnet as ssnet ssnet.MAX_CHANNEL = int(opt.wrap) helpers.verbose = opt.verbose or 0 try: if opt.firewall: if len(extra) != 0: o.fatal('exactly zero arguments expected') return firewall.main(opt.method, opt.syslog) elif opt.hostwatch: return hostwatch.hw_main(extra) else: if len(extra) < 1 and not opt.auto_nets and not opt.subnets: o.fatal('at least one subnet, subnet file, or -N expected') includes = extra excludes = ['127.0.0.0/8'] for k, v in flags: if k in ('-x', '--exclude'): excludes.append(v) if k in 
('-X', '--exclude-from'): excludes += open(v).read().split() remotename = opt.remote if remotename == '' or remotename == '-': remotename = None nslist = [family_ip_tuple(ns) for ns in parse_list(opt.ns_hosts)] if opt.seed_hosts and not opt.auto_hosts: o.fatal('--seed-hosts only works if you also use -H') if opt.seed_hosts: sh = re.split(r'[\s,]+', (opt.seed_hosts or "").strip()) elif opt.auto_hosts: sh = [] else: sh = None if opt.subnets: includes = parse_subnet_file(opt.subnets) if not opt.method: method_name = "auto" elif opt.method in ["auto", "nat", "tproxy", "pf"]: method_name = opt.method else: o.fatal("method_name %s not supported" % opt.method) if opt.listen: ipport_v6 = None ipport_v4 = None list = opt.listen.split(",") for ip in list: if '[' in ip and ']' in ip: ipport_v6 = parse_ipport6(ip) else: ipport_v4 = parse_ipport4(ip) else: # parse_ipport4('127.0.0.1:0') ipport_v4 = "auto" # parse_ipport6('[::1]:0') ipport_v6 = "auto" if not opt.disable_ipv6 else None if opt.syslog: ssyslog.start_syslog() ssyslog.stderr_to_syslog() return_code = client.main(ipport_v6, ipport_v4, opt.ssh_cmd, remotename, opt.python, opt.latency_control, opt.dns, nslist, method_name, sh, opt.auto_nets, parse_subnets(includes), parse_subnets(excludes), opt.daemon, opt.pidfile) if return_code == 0: log('Normal exit code, exiting...') else: log('Abnormal exit code detected, failing...' % return_code) return return_code except Fatal as e: log('fatal: %s\n' % e) return 99 except KeyboardInterrupt: log('\n') log('Keyboard interrupt: exiting.\n') return 1 sshuttle-0.76/sshuttle/server.py0000600000175000017500000002461112643354145017267 0ustar brianbrian00000000000000import re import struct import socket import traceback import time import sys import os import platform import sshuttle.ssnet as ssnet import sshuttle.helpers as helpers import sshuttle.hostwatch as hostwatch import subprocess as ssubprocess from sshuttle.ssnet import Handler, Proxy, Mux, MuxWrapper from sshuttle.helpers import log, debug1, debug2, debug3, Fatal, \ resolvconf_random_nameserver def _ipmatch(ipstr): if ipstr == b'default': ipstr = b'0.0.0.0/0' m = re.match(b'^(\d+(\.\d+(\.\d+(\.\d+)?)?)?)(?:/(\d+))?$', ipstr) if m: g = m.groups() ips = g[0] width = int(g[4] or 32) if g[1] is None: ips += b'.0.0.0' width = min(width, 8) elif g[2] is None: ips += b'.0.0' width = min(width, 16) elif g[3] is None: ips += b'.0' width = min(width, 24) ips = ips.decode("ASCII") return (struct.unpack('!I', socket.inet_aton(ips))[0], width) def _ipstr(ip, width): if width >= 32: return ip else: return "%s/%d" % (ip, width) def _maskbits(netmask): if not netmask: return 32 for i in range(32): if netmask[0] & _shl(1, i): return 32 - i return 0 def _shl(n, bits): return n * int(2 ** bits) def _list_routes(): # FIXME: IPv4 only argv = ['netstat', '-rn'] p = ssubprocess.Popen(argv, stdout=ssubprocess.PIPE) routes = [] for line in p.stdout: cols = re.split(b'\s+', line) ipw = _ipmatch(cols[0]) if not ipw: continue # some lines won't be parseable; never mind maskw = _ipmatch(cols[2]) # linux only mask = _maskbits(maskw) # returns 32 if maskw is null width = min(ipw[1], mask) ip = ipw[0] & _shl(_shl(1, width) - 1, 32 - width) routes.append( (socket.AF_INET, socket.inet_ntoa(struct.pack('!I', ip)), width)) rv = p.wait() if rv != 0: log('WARNING: %r returned %d\n' % (argv, rv)) log('WARNING: That prevents --auto-nets from working.\n') return routes def list_routes(): for (family, ip, width) in _list_routes(): if not ip.startswith('0.') and not ip.startswith('127.'): 
yield (family, ip, width) def _exc_dump(): exc_info = sys.exc_info() return ''.join(traceback.format_exception(*exc_info)) def start_hostwatch(seed_hosts): s1, s2 = socket.socketpair() pid = os.fork() if not pid: # child rv = 99 try: try: s2.close() os.dup2(s1.fileno(), 1) os.dup2(s1.fileno(), 0) s1.close() rv = hostwatch.hw_main(seed_hosts) or 0 except Exception: log('%s\n' % _exc_dump()) rv = 98 finally: os._exit(rv) s1.close() return pid, s2 class Hostwatch: def __init__(self): self.pid = 0 self.sock = None class DnsProxy(Handler): def __init__(self, mux, chan, request): Handler.__init__(self, []) self.timeout = time.time() + 30 self.mux = mux self.chan = chan self.tries = 0 self.request = request self.peers = {} self.try_send() def try_send(self): if self.tries >= 3: return self.tries += 1 family, peer = resolvconf_random_nameserver() sock = socket.socket(family, socket.SOCK_DGRAM) sock.setsockopt(socket.SOL_IP, socket.IP_TTL, 42) sock.connect((peer, 53)) self.peers[sock] = peer debug2('DNS: sending to %r (try %d)\n' % (peer, self.tries)) try: sock.send(self.request) self.socks.append(sock) except socket.error as e: if e.args[0] in ssnet.NET_ERRS: # might have been spurious; try again. # Note: these errors sometimes are reported by recv(), # and sometimes by send(). We have to catch both. debug2('DNS send to %r: %s\n' % (peer, e)) self.try_send() return else: log('DNS send to %r: %s\n' % (peer, e)) return def callback(self, sock): peer = self.peers[sock] try: data = sock.recv(4096) except socket.error as e: self.socks.remove(sock) del self.peers[sock] if e.args[0] in ssnet.NET_ERRS: # might have been spurious; try again. # Note: these errors sometimes are reported by recv(), # and sometimes by send(). We have to catch both. debug2('DNS recv from %r: %s\n' % (peer, e)) self.try_send() return else: log('DNS recv from %r: %s\n' % (peer, e)) return debug2('DNS response: %d bytes\n' % len(data)) self.mux.send(self.chan, ssnet.CMD_DNS_RESPONSE, data) self.ok = False class UdpProxy(Handler): def __init__(self, mux, chan, family): sock = socket.socket(family, socket.SOCK_DGRAM) Handler.__init__(self, [sock]) self.timeout = time.time() + 30 self.mux = mux self.chan = chan self.sock = sock if family == socket.AF_INET: self.sock.setsockopt(socket.SOL_IP, socket.IP_TTL, 42) def send(self, dstip, data): debug2('UDP: sending to %r port %d\n' % dstip) try: self.sock.sendto(data, dstip) except socket.error as e: log('UDP send to %r port %d: %s\n' % (dstip[0], dstip[1], e)) return def callback(self, sock): try: data, peer = sock.recvfrom(4096) except socket.error as e: log('UDP recv from %r port %d: %s\n' % (peer[0], peer[1], e)) return debug2('UDP response: %d bytes\n' % len(data)) hdr = "%s,%r," % (peer[0], peer[1]) self.mux.send(self.chan, ssnet.CMD_UDP_DATA, hdr + data) def main(latency_control): debug1('Starting server with Python version %s\n' % platform.python_version()) if helpers.verbose >= 1: helpers.logprefix = ' s: ' else: helpers.logprefix = 'server: ' debug1('latency control setting = %r\n' % latency_control) routes = list(list_routes()) debug1('available routes:\n') for r in routes: debug1(' %d/%s/%d\n' % r) # synchronization header sys.stdout.write('\0\0SSHUTTLE0001') sys.stdout.flush() handlers = [] mux = Mux(socket.fromfd(sys.stdin.fileno(), socket.AF_INET, socket.SOCK_STREAM), socket.fromfd(sys.stdout.fileno(), socket.AF_INET, socket.SOCK_STREAM)) handlers.append(mux) routepkt = b'' for r in routes: routepkt += b'%d,%s,%d\n' % (r[0], r[1].encode("ASCII"), r[2]) mux.send(0, 
ssnet.CMD_ROUTES, routepkt) hw = Hostwatch() hw.leftover = b'' def hostwatch_ready(sock): assert(hw.pid) content = hw.sock.recv(4096) if content: lines = (hw.leftover + content).split(b'\n') if lines[-1]: # no terminating newline: entry isn't complete yet! hw.leftover = lines.pop() lines.append('') else: hw.leftover = b'' mux.send(0, ssnet.CMD_HOST_LIST, b'\n'.join(lines)) else: raise Fatal('hostwatch process died') def got_host_req(data): if not hw.pid: (hw.pid, hw.sock) = start_hostwatch(data.strip().split()) handlers.append(Handler(socks=[hw.sock], callback=hostwatch_ready)) mux.got_host_req = got_host_req def new_channel(channel, data): (family, dstip, dstport) = data.split(b',', 2) family = int(family) dstport = int(dstport) outwrap = ssnet.connect_dst(family, dstip, dstport) handlers.append(Proxy(MuxWrapper(mux, channel), outwrap)) mux.new_channel = new_channel dnshandlers = {} def dns_req(channel, data): debug2('Incoming DNS request channel=%d.\n' % channel) h = DnsProxy(mux, channel, data) handlers.append(h) dnshandlers[channel] = h mux.got_dns_req = dns_req udphandlers = {} def udp_req(channel, cmd, data): debug2('Incoming UDP request channel=%d, cmd=%d\n' % (channel, cmd)) if cmd == ssnet.CMD_UDP_DATA: (dstip, dstport, data) = data.split(",", 2) dstport = int(dstport) debug2('is incoming UDP data. %r %d.\n' % (dstip, dstport)) h = udphandlers[channel] h.send((dstip, dstport), data) elif cmd == ssnet.CMD_UDP_CLOSE: debug2('is incoming UDP close\n') h = udphandlers[channel] h.ok = False del mux.channels[channel] def udp_open(channel, data): debug2('Incoming UDP open.\n') family = int(data) mux.channels[channel] = lambda cmd, data: udp_req(channel, cmd, data) if channel in udphandlers: raise Fatal('UDP connection channel %d already open' % channel) else: h = UdpProxy(mux, channel, family) handlers.append(h) udphandlers[channel] = h mux.got_udp_open = udp_open while mux.ok: if hw.pid: assert(hw.pid > 0) (rpid, rv) = os.waitpid(hw.pid, os.WNOHANG) if rpid: raise Fatal( 'hostwatch exited unexpectedly: code 0x%04x\n' % rv) ssnet.runonce(handlers, mux) if latency_control: mux.check_fullness() if dnshandlers: now = time.time() remove = [] for channel, h in dnshandlers.items(): if h.timeout < now or not h.ok: debug3('expiring dnsreqs channel=%d\n' % channel) remove.append(channel) h.ok = False for channel in remove: del dnshandlers[channel] if udphandlers: remove = [] for channel, h in udphandlers.items(): if not h.ok: debug3('expiring UDP channel=%d\n' % channel) remove.append(channel) h.ok = False for channel in remove: del udphandlers[channel] sshuttle-0.76/CHANGES.rst0000600000175000017500000000074412646642324015341 0ustar brianbrian00000000000000Release 0.76 (Jan 17, 2016) =========================== * Add option to disable IPv6 support. * Update documentation. * Move documentation, including man page, to Sphinx. * Use setuptools-scm for automatic versioning. Release 0.75 (Jan 12, 2016) =========================== * Revert change that broke sshuttle entry point. Release 0.74 (Jan 10, 2016) =========================== * Add CHANGES.rst file. * Numerous bug fixes. * Python 3.5 fixes. * PF fixes, especially for BSD. sshuttle-0.76/README.rst0000600000175000017500000000246312646642365015233 0ustar brianbrian00000000000000sshuttle: where transparent proxy meets VPN meets ssh ===================================================== As far as I know, sshuttle is the only program that solves the following common case: - Your client machine (or router) is Linux, FreeBSD, or MacOS. 
- You have access to a remote network via ssh. - You don't necessarily have admin access on the remote network. - The remote network has no VPN, or only stupid/complex VPN protocols (IPsec, PPTP, etc). Or maybe you *are* the admin and you just got frustrated with the awful state of VPN tools. - You don't want to create an ssh port forward for every single host/port on the remote network. - You hate openssh's port forwarding because it's randomly slow and/or stupid. - You can't use openssh's PermitTunnel feature because it's disabled by default on openssh servers; plus it does TCP-over-TCP, which has terrible performance (see below). Obtaining sshuttle ------------------ - From PyPI:: pip install sshuttle - Clone:: git clone https://github.com/sshuttle/sshuttle.git ./setup.py install Documentation ------------- The documentation for the stable version is available at: http://sshuttle.readthedocs.org/ The documentation for the latest development version is available at: http://sshuttle.readthedocs.org/en/latest/ sshuttle-0.76/.gitignore0000600000175000017500000000010212645402010015474 0ustar brianbrian00000000000000sshuttle/version.py *.pyc *~ *.8 /.do_built /.do_built.dir /.redo sshuttle-0.76/docs/0000700000175000017500000000000012646642532014461 5ustar brianbrian00000000000000sshuttle-0.76/docs/manpage.rst0000600000175000017500000002402612646641513016627 0ustar brianbrian00000000000000sshuttle ======== Synopsis -------- **sshuttle** [*options*] [**-r** *[username@]sshserver[:port]*] \<*subnets* ...\> Description ----------- :program:`sshuttle` allows you to create a VPN connection from your machine to any remote server that you can connect to via ssh, as long as that server has python 2.3 or higher. To work, you must have root access on the local machine, but you can have a normal account on the server. It's valid to run :program:`sshuttle` more than once simultaneously on a single client machine, connecting to a different server every time, so you can be on more than one VPN at once. If run on a router, :program:`sshuttle` can forward traffic for your entire subnet to the VPN. Options ------- .. program:: sshuttle .. option:: subnets A list of subnets to route over the VPN, in the form ``a.b.c.d[/width]``. Valid examples are 1.2.3.4 (a single IP address), 1.2.3.4/32 (equivalent to 1.2.3.4), 1.2.3.0/24 (a 24-bit subnet, ie. with a 255.255.255.0 netmask), and 0/0 ('just route everything through the VPN'). .. option:: -l, --listen=[ip:]port Use this ip address and port number as the transparent proxy port. By default :program:`sshuttle` finds an available port automatically and listens on IP 127.0.0.1 (localhost), so you don't need to override it, and connections are only proxied from the local machine, not from outside machines. If you want to accept connections from other machines on your network (ie. to run :program:`sshuttle` on a router) try enabling IP Forwarding in your kernel, then using ``--listen 0.0.0.0:0``. For the tproxy method this can be an IPv6 address. Use this option twice if required, to provide both IPv4 and IPv6 addresses. .. option:: -H, --auto-hosts Scan for remote hostnames and update the local /etc/hosts file with matching entries for as long as the VPN is open. This is nicer than changing your system's DNS (/etc/resolv.conf) settings, for several reasons. First, hostnames are added without domain names attached, so you can ``ssh thatserver`` without worrying if your local domain matches the remote one. 
Second, if you :program:`sshuttle` into more than one VPN at a time, it's impossible to use more than one DNS server at once anyway, but :program:`sshuttle` correctly merges /etc/hosts entries between all running copies. Third, if you're only routing a few subnets over the VPN, you probably would prefer to keep using your local DNS server for everything else. .. option:: -N, --auto-nets In addition to the subnets provided on the command line, ask the server which subnets it thinks we should route, and route those automatically. The suggestions are taken automatically from the server's routing table. .. option:: --dns Capture local DNS requests and forward to the remote DNS server. .. option:: --python Specify the name/path of the remote python interpreter. The default is just ``python``, which means to use the default python interpreter on the remote system's PATH. .. option:: -r, --remote=[username@]sshserver[:port] The remote hostname and optional username and ssh port number to use for connecting to the remote server. For example, example.com, testuser@example.com, testuser@example.com:2222, or example.com:2244. .. option:: -x, --exclude=subnet Explicitly exclude this subnet from forwarding. The format of this option is the same as the ```` option. To exclude more than one subnet, specify the ``-x`` option more than once. You can say something like ``0/0 -x 1.2.3.0/24`` to forward everything except the local subnet over the VPN, for example. .. option:: -X, --exclude-from=file Exclude the subnets specified in a file, one subnet per line. Useful when you have lots of subnets to exclude. .. option:: -v, --verbose Print more information about the session. This option can be used more than once for increased verbosity. By default, :program:`sshuttle` prints only error messages. .. option:: -e, --ssh-cmd The command to use to connect to the remote server. The default is just ``ssh``. Use this if your ssh client is in a non-standard location or you want to provide extra options to the ssh command, for example, ``-e 'ssh -v'``. .. option:: --seed-hosts A comma-separated list of hostnames to use to initialize the :option:`--auto-hosts` scan algorithm. :option:`--auto-hosts` does things like poll local SMB servers for lists of local hostnames, but can speed things up if you use this option to give it a few names to start from. .. option:: --no-latency-control Sacrifice latency to improve bandwidth benchmarks. ssh uses really big socket buffers, which can overload the connection if you start doing large file transfers, thus making all your other sessions inside the same tunnel go slowly. Normally, :program:`sshuttle` tries to avoid this problem using a "fullness check" that allows only a certain amount of outstanding data to be buffered at a time. But on high-bandwidth links, this can leave a lot of your bandwidth underutilized. It also makes :program:`sshuttle` seem slow in bandwidth benchmarks (benchmarks rarely test ping latency, which is what :program:`sshuttle` is trying to control). This option disables the latency control feature, maximizing bandwidth usage. Use at your own risk. .. option:: -D, --daemon Automatically fork into the background after connecting to the remote server. Implies :option:`--syslog`. .. option:: --syslog after connecting, send all log messages to the :manpage:`syslog(3)` service instead of stderr. This is implicit if you use :option:`--daemon`. .. option:: --pidfile=pidfilename when using :option:`--daemon`, save :program:`sshuttle`'s pid to *pidfilename*. 
The default is ``sshuttle.pid`` in the current directory. .. option:: --disable-ipv6 If using the tproxy method, this will disable IPv6 support. .. option:: --firewall (internal use only) run the firewall manager. This is the only part of :program:`sshuttle` that must run as root. If you start :program:`sshuttle` as a non-root user, it will automatically run ``sudo`` or ``su`` to start the firewall manager, but the core of :program:`sshuttle` still runs as a normal user. .. option:: --hostwatch (internal use only) run the hostwatch daemon. This process runs on the server side and collects hostnames for the :option:`--auto-hosts` option. Using this option by itself makes it a lot easier to debug and test the :option:`--auto-hosts` feature. Examples -------- Test locally by proxying all local connections, without using ssh:: $ sshuttle -v 0/0 Starting sshuttle proxy. Listening on ('0.0.0.0', 12300). [local sudo] Password: firewall manager ready. c : connecting to server... s: available routes: s: 192.168.42.0/24 c : connected. firewall manager: starting transproxy. c : Accept: 192.168.42.106:50035 -> 192.168.42.121:139. c : Accept: 192.168.42.121:47523 -> 77.141.99.22:443. ...etc... ^C firewall manager: undoing changes. KeyboardInterrupt c : Keyboard interrupt: exiting. c : SW#8:192.168.42.121:47523: deleting c : SW#6:192.168.42.106:50035: deleting Test connection to a remote server, with automatic hostname and subnet guessing:: $ sshuttle -vNHr example.org Starting sshuttle proxy. Listening on ('0.0.0.0', 12300). firewall manager ready. c : connecting to server... s: available routes: s: 77.141.99.0/24 c : connected. c : seed_hosts: [] firewall manager: starting transproxy. hostwatch: Found: testbox1: 1.2.3.4 hostwatch: Found: mytest2: 5.6.7.8 hostwatch: Found: domaincontroller: 99.1.2.3 c : Accept: 192.168.42.121:60554 -> 77.141.99.22:22. ^C firewall manager: undoing changes. c : Keyboard interrupt: exiting. c : SW#6:192.168.42.121:60554: deleting Discussion ---------- When it starts, :program:`sshuttle` creates an ssh session to the server specified by the ``-r`` option. If ``-r`` is omitted, it will start both its client and server locally, which is sometimes useful for testing. After connecting to the remote server, :program:`sshuttle` uploads its (python) source code to the remote end and executes it there. Thus, you don't need to install :program:`sshuttle` on the remote server, and there are never :program:`sshuttle` version conflicts between client and server. Unlike most VPNs, :program:`sshuttle` forwards sessions, not packets. That is, it uses kernel transparent proxying (`iptables REDIRECT` rules on Linux) to capture outgoing TCP sessions, then creates entirely separate TCP sessions out to the original destination at the other end of the tunnel. Packet-level forwarding (eg. using the tun/tap devices on Linux) seems elegant at first, but it results in several problems, notably the 'tcp over tcp' problem. The tcp protocol depends fundamentally on packets being dropped in order to implement its congestion control agorithm; if you pass tcp packets through a tcp-based tunnel (such as ssh), the inner tcp packets will never be dropped, and so the inner tcp stream's congestion control will be completely broken, and performance will be terrible. Thus, packet-based VPNs (such as IPsec and openvpn) cannot use tcp-based encrypted streams like ssh or ssl, and have to implement their own encryption from scratch, which is very complex and error prone. 
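Session forwarding, by contrast, is mechanically simple. When a
transparent-proxy rule (such as `iptables REDIRECT`) steers an outgoing TCP
connection into the local :program:`sshuttle` listener, the kernel still
remembers where the connection was originally headed; :program:`sshuttle`
reads that address back and asks the server end to open an ordinary
connection to it. A minimal sketch of recovering the original destination on
Linux (an illustration only, not :program:`sshuttle`'s exact code)::

    import socket
    import struct

    SO_ORIGINAL_DST = 80  # from linux/netfilter_ipv4.h

    def original_dst(sock):
        # For a connection captured by an iptables REDIRECT rule, getsockopt
        # returns the pre-rewrite sockaddr_in: 2-byte family, 2-byte port
        # (network order), 4-byte address, 8 bytes of padding.
        raw = sock.getsockopt(socket.SOL_IP, SO_ORIGINAL_DST, 16)
        port, a, b, c, d = struct.unpack('!2xH4B8x', raw)
        return ('%d.%d.%d.%d' % (a, b, c, d), port)

The far end then simply connects to that address, so each side of the tunnel
carries a normal, independent TCP session.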
:program:`sshuttle`'s simplicity comes from the fact that it can safely use the existing ssh encrypted tunnel without incurring a performance penalty. It does this by letting the client-side kernel manage the incoming tcp stream, and the server-side kernel manage the outgoing tcp stream; there is no need for congestion control to be shared between the two separate streams, so a tcp-based tunnel is fine. See Also -------- :manpage:`ssh(1)`, :manpage:`python(1)` sshuttle-0.76/docs/usage.rst0000600000175000017500000000732212646635443016330 0ustar brianbrian00000000000000Usage ===== - Forward all traffic:: sshuttle -r username@sshserver 0.0.0.0/0 - By default sshuttle will automatically choose a method to use. Override with the ``--method=`` parameter. - There is a shortcut for 0.0.0.0/0 for those that value their wrists:: sshuttle -r username@sshserver 0/0 - If you would also like your DNS queries to be proxied through the DNS server of the server you are connect to:: sshuttle --dns -r username@sshserver 0/0 The above is probably what you want to use to prevent local network attacks such as Firesheep and friends. (You may be prompted for one or more passwords; first, the local password to become root using sudo, and then the remote ssh password. Or you might have sudo and ssh set up to not require passwords, in which case you won't be prompted at all.) Usage Notes ----------- That's it! Now your local machine can access the remote network as if you were right there. And if your "client" machine is a router, everyone on your local network can make connections to your remote network. You don't need to install sshuttle on the remote server; the remote server just needs to have python available. sshuttle will automatically upload and run its source code to the remote python interpreter. This creates a transparent proxy server on your local machine for all IP addresses that match 0.0.0.0/0. (You can use more specific IP addresses if you want; use any number of IP addresses or subnets to change which addresses get proxied. Using 0.0.0.0/0 proxies *everything*, which is interesting if you don't trust the people on your local network.) Any TCP session you initiate to one of the proxied IP addresses will be captured by sshuttle and sent over an ssh session to the remote copy of sshuttle, which will then regenerate the connection on that end, and funnel the data back and forth through ssh. Fun, right? A poor man's instant VPN, and you don't even have to have admin access on the server. Additional information for TPROXY --------------------------------- TPROXY is the only method that supports full support of IPv6 and UDP. There are some things you need to consider for TPROXY to work: - The following commands need to be run first as root. This only needs to be done once after booting up:: ip route add local default dev lo table 100 ip rule add fwmark 1 lookup 100 ip -6 route add local default dev lo table 100 ip -6 rule add fwmark 1 lookup 100 - The ``--auto-nets`` feature does not detect IPv6 routes automatically. Add IPv6 routes manually. e.g. by adding ``'::/0'`` to the end of the command line. - The client needs to be run as root. e.g.:: sudo SSH_AUTH_SOCK="$SSH_AUTH_SOCK" $HOME/tree/sshuttle.tproxy/sshuttle --method=tproxy ... - You may need to exclude the IP address of the server you are connecting to. Otherwise sshuttle may attempt to intercept the ssh packets, which will not work. Use the ``--exclude`` parameter for this. 
- Similarly, UDP return packets (including DNS) could get intercepted and bounced back. This is the case if you have a broad subnet such as ``0.0.0.0/0`` or ``::/0`` that includes the IP address of the client. Use the ``--exclude`` parameter for this. - You need the ``--method=tproxy`` parameter, as above. - The routes for the outgoing packets must already exist. For example, if your connection does not have IPv6 support, no IPv6 routes will exist, IPv6 packets will not be generated and sshuttle cannot intercept them:: telnet -6 www.google.com 80 Trying 2404:6800:4001:805::1010... telnet: Unable to connect to remote host: Network is unreachable Add some dummy routes to external interfaces. Make sure they get removed however after sshuttle exits. sshuttle-0.76/docs/support.rst0000600000175000017500000000037612646567551016746 0ustar brianbrian00000000000000Support ======= Mailing list: * Subscribe by sending a message to * List archives are at: http://groups.google.com/group/sshuttle Issue tracker and pull requests at github: * https://github.com/sshuttle/sshuttle sshuttle-0.76/docs/make.bat0000600000175000017500000001506112646565313016074 0ustar brianbrian00000000000000@ECHO OFF REM Command file for Sphinx documentation if "%SPHINXBUILD%" == "" ( set SPHINXBUILD=sphinx-build ) set BUILDDIR=_build set ALLSPHINXOPTS=-d %BUILDDIR%/doctrees %SPHINXOPTS% . set I18NSPHINXOPTS=%SPHINXOPTS% . if NOT "%PAPER%" == "" ( set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS% set I18NSPHINXOPTS=-D latex_paper_size=%PAPER% %I18NSPHINXOPTS% ) if "%1" == "" goto help if "%1" == "help" ( :help echo.Please use `make ^` where ^ is one of echo. html to make standalone HTML files echo. dirhtml to make HTML files named index.html in directories echo. singlehtml to make a single large HTML file echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. devhelp to make HTML files and a Devhelp project echo. epub to make an epub echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. text to make text files echo. man to make manual pages echo. texinfo to make Texinfo files echo. gettext to make PO message catalogs echo. changes to make an overview over all changed/added/deprecated items echo. xml to make Docutils-native XML files echo. pseudoxml to make pseudoxml-XML files for display purposes echo. linkcheck to check all external links for integrity echo. doctest to run all doctests embedded in the documentation if enabled goto end ) if "%1" == "clean" ( for /d %%i in (%BUILDDIR%\*) do rmdir /q /s %%i del /q /s %BUILDDIR%\* goto end ) %SPHINXBUILD% 2> nul if errorlevel 9009 ( echo. echo.The 'sphinx-build' command was not found. Make sure you have Sphinx echo.installed, then set the SPHINXBUILD environment variable to point echo.to the full path of the 'sphinx-build' executable. Alternatively you echo.may add the Sphinx directory to PATH. echo. echo.If you don't have Sphinx installed, grab it from echo.http://sphinx-doc.org/ exit /b 1 ) if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% %BUILDDIR%/html if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/html. goto end ) if "%1" == "dirhtml" ( %SPHINXBUILD% -b dirhtml %ALLSPHINXOPTS% %BUILDDIR%/dirhtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/dirhtml. 
goto end ) if "%1" == "singlehtml" ( %SPHINXBUILD% -b singlehtml %ALLSPHINXOPTS% %BUILDDIR%/singlehtml if errorlevel 1 exit /b 1 echo. echo.Build finished. The HTML pages are in %BUILDDIR%/singlehtml. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% %BUILDDIR%/pickle if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% %BUILDDIR%/json if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% %BUILDDIR%/htmlhelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in %BUILDDIR%/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% %BUILDDIR%/qthelp if errorlevel 1 exit /b 1 echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in %BUILDDIR%/qthelp, like this: echo.^> qcollectiongenerator %BUILDDIR%\qthelp\sshuttle.qhcp echo.To view the help file: echo.^> assistant -collectionFile %BUILDDIR%\qthelp\sshuttle.ghc goto end ) if "%1" == "devhelp" ( %SPHINXBUILD% -b devhelp %ALLSPHINXOPTS% %BUILDDIR%/devhelp if errorlevel 1 exit /b 1 echo. echo.Build finished. goto end ) if "%1" == "epub" ( %SPHINXBUILD% -b epub %ALLSPHINXOPTS% %BUILDDIR%/epub if errorlevel 1 exit /b 1 echo. echo.Build finished. The epub file is in %BUILDDIR%/epub. goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex if errorlevel 1 exit /b 1 echo. echo.Build finished; the LaTeX files are in %BUILDDIR%/latex. goto end ) if "%1" == "latexpdf" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex cd %BUILDDIR%/latex make all-pdf cd %BUILDDIR%/.. echo. echo.Build finished; the PDF files are in %BUILDDIR%/latex. goto end ) if "%1" == "latexpdfja" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% %BUILDDIR%/latex cd %BUILDDIR%/latex make all-pdf-ja cd %BUILDDIR%/.. echo. echo.Build finished; the PDF files are in %BUILDDIR%/latex. goto end ) if "%1" == "text" ( %SPHINXBUILD% -b text %ALLSPHINXOPTS% %BUILDDIR%/text if errorlevel 1 exit /b 1 echo. echo.Build finished. The text files are in %BUILDDIR%/text. goto end ) if "%1" == "man" ( %SPHINXBUILD% -b man %ALLSPHINXOPTS% %BUILDDIR%/man if errorlevel 1 exit /b 1 echo. echo.Build finished. The manual pages are in %BUILDDIR%/man. goto end ) if "%1" == "texinfo" ( %SPHINXBUILD% -b texinfo %ALLSPHINXOPTS% %BUILDDIR%/texinfo if errorlevel 1 exit /b 1 echo. echo.Build finished. The Texinfo files are in %BUILDDIR%/texinfo. goto end ) if "%1" == "gettext" ( %SPHINXBUILD% -b gettext %I18NSPHINXOPTS% %BUILDDIR%/locale if errorlevel 1 exit /b 1 echo. echo.Build finished. The message catalogs are in %BUILDDIR%/locale. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% %BUILDDIR%/changes if errorlevel 1 exit /b 1 echo. echo.The overview file is in %BUILDDIR%/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% %BUILDDIR%/linkcheck if errorlevel 1 exit /b 1 echo. echo.Link check complete; look for any errors in the above output ^ or in %BUILDDIR%/linkcheck/output.txt. goto end ) if "%1" == "doctest" ( %SPHINXBUILD% -b doctest %ALLSPHINXOPTS% %BUILDDIR%/doctest if errorlevel 1 exit /b 1 echo. echo.Testing of doctests in the sources finished, look at the ^ results in %BUILDDIR%/doctest/output.txt. 
goto end ) if "%1" == "xml" ( %SPHINXBUILD% -b xml %ALLSPHINXOPTS% %BUILDDIR%/xml if errorlevel 1 exit /b 1 echo. echo.Build finished. The XML files are in %BUILDDIR%/xml. goto end ) if "%1" == "pseudoxml" ( %SPHINXBUILD% -b pseudoxml %ALLSPHINXOPTS% %BUILDDIR%/pseudoxml if errorlevel 1 exit /b 1 echo. echo.Build finished. The pseudo-XML files are in %BUILDDIR%/pseudoxml. goto end ) :end sshuttle-0.76/docs/Makefile0000600000175000017500000001516212646565313016131 0ustar brianbrian00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # User-friendly check for sphinx-build ifeq ($(shell which $(SPHINXBUILD) >/dev/null 2>&1; echo $$?), 1) $(error The '$(SPHINXBUILD)' command was not found. Make sure you have Sphinx installed, then set the SPHINXBUILD environment variable to point to the full path of the '$(SPHINXBUILD)' executable. Alternatively you can add the directory with the executable to your PATH. If you don't have Sphinx installed, grab it from http://sphinx-doc.org/) endif # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . # the i18n builder cannot share the environment and doctrees with the others I18NSPHINXOPTS = $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp devhelp epub latex latexpdf text man changes linkcheck doctest gettext help: @echo "Please use \`make <target>' where <target> is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " latexpdfja to make LaTeX files and run them through platex/dvipdfmx" @echo " text to make text files" @echo " man to make manual pages" @echo " texinfo to make Texinfo files" @echo " info to make Texinfo files and run them through makeinfo" @echo " gettext to make PO message catalogs" @echo " changes to make an overview of all changed/added/deprecated items" @echo " xml to make Docutils-native XML files" @echo " pseudoxml to make pseudoxml-XML files for display purposes" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files."
htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/sshuttle.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/sshuttle.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) $(BUILDDIR)/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/sshuttle" @echo "# ln -s $(BUILDDIR)/devhelp $$HOME/.local/share/devhelp/sshuttle" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make' in that directory to run these through (pdf)latex" \ "(use \`make latexpdf' here to do that automatically)." latexpdf: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through pdflatex..." $(MAKE) -C $(BUILDDIR)/latex all-pdf @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." latexpdfja: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo "Running LaTeX files through platex and dvipdfmx..." $(MAKE) -C $(BUILDDIR)/latex all-pdf-ja @echo "pdflatex finished; the PDF files are in $(BUILDDIR)/latex." text: $(SPHINXBUILD) -b text $(ALLSPHINXOPTS) $(BUILDDIR)/text @echo @echo "Build finished. The text files are in $(BUILDDIR)/text." man: $(SPHINXBUILD) -b man $(ALLSPHINXOPTS) $(BUILDDIR)/man @echo @echo "Build finished. The manual pages are in $(BUILDDIR)/man." texinfo: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo @echo "Build finished. The Texinfo files are in $(BUILDDIR)/texinfo." @echo "Run \`make' in that directory to run these through makeinfo" \ "(use \`make info' here to do that automatically)." info: $(SPHINXBUILD) -b texinfo $(ALLSPHINXOPTS) $(BUILDDIR)/texinfo @echo "Running Texinfo files through makeinfo..." make -C $(BUILDDIR)/texinfo info @echo "makeinfo finished; the Info files are in $(BUILDDIR)/texinfo." gettext: $(SPHINXBUILD) -b gettext $(I18NSPHINXOPTS) $(BUILDDIR)/locale @echo @echo "Build finished. The message catalogs are in $(BUILDDIR)/locale." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." xml: $(SPHINXBUILD) -b xml $(ALLSPHINXOPTS) $(BUILDDIR)/xml @echo @echo "Build finished. The XML files are in $(BUILDDIR)/xml." pseudoxml: $(SPHINXBUILD) -b pseudoxml $(ALLSPHINXOPTS) $(BUILDDIR)/pseudoxml @echo @echo "Build finished. The pseudo-XML files are in $(BUILDDIR)/pseudoxml." 
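A minimal sketch of how the Sphinx Makefile above is typically invoked to build this documentation. This is an editorial example rather than part of the source tree; it assumes ``sphinx-build`` and ``setuptools_scm`` are available and that you work from a git checkout, since ``docs/conf.py`` below derives the version via ``setuptools_scm.get_version()``, which needs the repository metadata::

    # Build the HTML docs from a git checkout of sshuttle (assumed layout).
    pip install sphinx setuptools_scm
    cd docs
    make html      # output is written to _build/html, per BUILDDIR above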
sshuttle-0.76/docs/installation.rst0000600000175000017500000000024312646567475017725 0ustar brianbrian00000000000000Installation ============ - From PyPI:: pip install sshuttle - Clone:: git clone https://github.com/sshuttle/sshuttle.git ./setup.py install sshuttle-0.76/docs/requirements.rst0000600000175000017500000000235412646636604017747 0ustar brianbrian00000000000000Requirements ============ Client side Requirements ------------------------ - sudo, or root access on your client machine. (The server doesn't need admin access.) - Python 2.7 or Python 3.5. Linux with NAT method ~~~~~~~~~~~~~~~~~~~~~ Supports: * IPv4 TCP * IPv4 DNS Requires: * iptables DNAT, REDIRECT, and ttl modules. Linux with TPROXY method ~~~~~~~~~~~~~~~~~~~~~~~~ Supports: * IPv4 TCP * IPv4 UDP (requires ``recvmsg`` - see below) * IPv4 DNS (requires ``recvmsg`` - see below) * IPv6 TCP * IPv6 UDP (requires ``recvmsg`` - see below) * IPv6 DNS (requires ``recvmsg`` - see below) .. _PyXAPI: http://www.pps.univ-paris-diderot.fr/~ylg/PyXAPI/ Full UDP or DNS support with the TPROXY method requires the ``recvmsg()`` syscall. This is not available in Python 2, but it is available in Python 3.5 and later. Under Python 2 it may be sufficient to install PyXAPI_ to get the ``recvmsg()`` function. MacOS with PF method ~~~~~~~~~~~~~~~~~~~~ Supports: * IPv4 TCP * IPv4 DNS Requires: * The pfctl command. Server side Requirements ------------------------ Python 2.7 or Python 3.5. Additional Suggested Software ----------------------------- - You may want to use autossh, available in various package management systems. sshuttle-0.76/docs/conf.py0000600000175000017500000002017412646631610015761 0ustar brianbrian00000000000000#!/usr/bin/env python3 # -*- coding: utf-8 -*- # # sshuttle documentation build configuration file, created by # sphinx-quickstart on Sun Jan 17 12:13:47 2016. # # This file is execfile()d with the current directory set to its # containing dir. # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. # import sys # import os # If extensions (or modules to document with autodoc) are in another directory, # add these directories to sys.path here. If the directory is relative to the # documentation root, use os.path.abspath to make it absolute, like shown here. # sys.path.insert(0, os.path.abspath('.')) # -- General configuration ------------------------------------------------ # If your documentation needs a minimal Sphinx version, state it here. # needs_sphinx = '1.0' # Add any Sphinx extension module names here, as strings. They can be # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom # ones. extensions = [ 'sphinx.ext.todo', ] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. # source_encoding = 'utf-8-sig' # The master toctree document. master_doc = 'index' # General information about the project. project = 'sshuttle' copyright = '2016, Brian May' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. # # The short X.Y version. from setuptools_scm import get_version version = get_version(root="..") # The full version, including alpha/beta/rc tags.
release = version # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. # language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: # today = '' # Else, today_fmt is used as the format for a strftime call. # today_fmt = '%B %d, %Y' # List of patterns, relative to source directory, that match files and # directories to ignore when looking for source files. exclude_patterns = ['_build'] # The reST default role (used for this markup: `text`) to use for all # documents. # default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. # add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). # add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. # show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'sphinx' # A list of ignored prefixes for module index sorting. # modindex_common_prefix = [] # If true, keep warnings as "system message" paragraphs in the built documents. # keep_warnings = False # -- Options for HTML output ---------------------------------------------- # The theme to use for HTML and HTML Help pages. See the documentation for # a list of builtin themes. html_theme = 'default' # Theme options are theme-specific and customize the look and feel of a theme # further. For a list of options available for each theme, see the # documentation. # html_theme_options = {} # Add any paths that contain custom themes here, relative to this directory. # html_theme_path = [] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". # html_title = None # A shorter title for the navigation bar. Default is the same as html_title. # html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. # html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. # html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # Add any extra paths that contain custom files (such as robots.txt or # .htaccess) here, relative to this directory. These files are copied # directly to the root of the documentation. # html_extra_path = [] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. # html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. # html_use_smartypants = True # Custom sidebar templates, maps document names to template names. # html_sidebars = {} # Additional templates that should be rendered to pages, maps page names to # template names. # html_additional_pages = {} # If false, no module index is generated. # html_domain_indices = True # If false, no index is generated. # html_use_index = True # If true, the index is split into individual pages for each letter. # html_split_index = False # If true, links to the reST sources are added to the pages. 
# html_show_sourcelink = True # If true, "Created using Sphinx" is shown in the HTML footer. Default is True. # html_show_sphinx = True # If true, "(C) Copyright ..." is shown in the HTML footer. Default is True. # html_show_copyright = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. # html_use_opensearch = '' # This is the file name suffix for HTML files (e.g. ".xhtml"). # html_file_suffix = None # Output file base name for HTML help builder. htmlhelp_basename = 'sshuttledoc' # -- Options for LaTeX output --------------------------------------------- latex_elements = { # The paper size ('letterpaper' or 'a4paper'). # 'papersize': 'letterpaper', # The font size ('10pt', '11pt' or '12pt'). # 'pointsize': '10pt', # Additional stuff for the LaTeX preamble. # 'preamble': '', } # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, # author, documentclass [howto, manual, or own class]). latex_documents = [ ('index', 'sshuttle.tex', 'sshuttle documentation', 'Brian May', 'manual'), ] # The name of an image file (relative to this directory) to place at the top of # the title page. # latex_logo = None # For "manual" documents, if this is true, then toplevel headings are parts, # not chapters. # latex_use_parts = False # If true, show page references after internal links. # latex_show_pagerefs = False # If true, show URL addresses after external links. # latex_show_urls = False # Documents to append as an appendix to all manuals. # latex_appendices = [] # If false, no module index is generated. # latex_domain_indices = True # -- Options for manual page output --------------------------------------- # One entry per manual page. List of tuples # (source start file, name, description, authors, manual section). man_pages = [ ('manpage', 'sshuttle', 'sshuttle documentation', ['Brian May'], 1) ] # If true, show URL addresses after external links. # man_show_urls = False # -- Options for Texinfo output ------------------------------------------- # Grouping the document tree into Texinfo files. List of tuples # (source start file, target name, title, author, # dir menu entry, description, category) texinfo_documents = [ ('index', 'sshuttle', 'sshuttle documentation', 'Brian May', 'sshuttle', 'A transparent proxy-based VPN using ssh', 'Miscellaneous'), ] # Documents to append as an appendix to all manuals. # texinfo_appendices = [] # If false, no module index is generated. # texinfo_domain_indices = True # How to display URL addresses: 'footnote', 'no', or 'inline'. # texinfo_show_urls = 'footnote' # If true, do not generate a @detailmenu in the "Top" node's menu. # texinfo_no_detailmenu = False sshuttle-0.76/docs/trivia.rst0000600000175000017500000000367412646567567016543 0ustar brianbrian00000000000000Useless Trivia ============== This section written by the original author, Avery Pennarun . Back in 1998 (12 years ago! Yikes!), I released the first version of `Tunnel Vision `_, a semi-intelligent VPN client for Linux. Unfortunately, I made two big mistakes: I implemented the key exchange myself (oops), and I ended up doing TCP-over-TCP (double oops). The resulting program worked okay - and people used it for years - but the performance was always a bit funny. And nobody ever found any security flaws in my key exchange, either, but that doesn't mean anything. 
:) The same year, dcoombs and I also released Fast Forward, a proxy server supporting transparent proxying. Among other things, we used it for automatically splitting traffic across more than one Internet connection (a tool we called "Double Vision"). I was still in university at the time. A couple years after that, one of my professors was working with some graduate students on the technology that would eventually become `Slipstream Internet Acceleration `_. He asked me to do a contract for him to build an initial prototype of a transparent proxy server for mobile networks. The idea was similar to sshuttle: if you reassemble and then disassemble the TCP packets, you can reduce latency and improve performance vs. just forwarding the packets over a plain VPN or mobile network. (It's unlikely that any of my code has persisted in the Slipstream product today, but the concept is still pretty cool. I'm still horrified that people use plain TCP on complex mobile networks with crazily variable latency, for which it was never really intended.) That project I did for Slipstream was what first gave me the idea to merge the concepts of Fast Forward, Double Vision, and Tunnel Vision into a single program that was the best of all worlds. And here we are, at last, 10 years later. You're welcome. sshuttle-0.76/docs/index.rst0000600000175000017500000000053512646641060016322 0ustar brianbrian00000000000000sshuttle: where transparent proxy meets VPN meets ssh ===================================================== Contents: .. toctree:: :maxdepth: 2 overview requirements installation usage Man Page how-it-works support trivia changes Indices and tables ================== * :ref:`genindex` * :ref:`search` sshuttle-0.76/docs/overview.rst0000600000175000017500000000147612646566773017107 0ustar brianbrian00000000000000Overview ======== As far as I know, sshuttle is the only program that solves the following common case: - Your client machine (or router) is Linux, FreeBSD, or MacOS. - You have access to a remote network via ssh. - You don't necessarily have admin access on the remote network. - The remote network has no VPN, or only stupid/complex VPN protocols (IPsec, PPTP, etc). Or maybe you *are* the admin and you just got frustrated with the awful state of VPN tools. - You don't want to create an ssh port forward for every single host/port on the remote network. - You hate openssh's port forwarding because it's randomly slow and/or stupid. - You can't use openssh's PermitTunnel feature because it's disabled by default on openssh servers; plus it does TCP-over-TCP, which has terrible performance (see below). sshuttle-0.76/docs/how-it-works.rst0000600000175000017500000000355012646567526017603 0ustar brianbrian00000000000000How it works ============ sshuttle is not exactly a VPN, and not exactly port forwarding. It's kind of both, and kind of neither. It's like a VPN, since it can forward every port on an entire network, not just ports you specify. Conveniently, it lets you use the "real" IP addresses of each host rather than faking port numbers on localhost. On the other hand, the way it *works* is more like ssh port forwarding than a VPN. Normally, a VPN forwards your data one packet at a time, and doesn't care about individual connections; ie. it's "stateless" with respect to the traffic. sshuttle is the opposite of stateless; it tracks every single connection. You could compare sshuttle to something like the old `Slirp `_ program, which was a userspace TCP/IP implementation that did something similar. 
But it operated on a packet-by-packet basis on the client side, reassembling the packets on the server side. That worked okay back in the "real live serial port" days, because serial ports had predictable latency and buffering. But you can't safely just forward TCP packets over a TCP session (like ssh), because TCP's performance depends fundamentally on packet loss; it *must* experience packet loss in order to know when to slow down! At the same time, the outer TCP session (ssh, in this case) is a reliable transport, which means that what you forward through the tunnel *never* experiences packet loss. The ssh session itself experiences packet loss, of course, but TCP fixes it up and ssh (and thus you) never know the difference. But neither does your inner TCP session, and extremely screwy performance ensues. sshuttle assembles the TCP stream locally, multiplexes it statefully over an ssh session, and disassembles it back into packets at the other end. So it never ends up doing TCP-over-TCP. It's just data-over-TCP, which is safe. sshuttle-0.76/docs/changes.rst0000600000175000017500000000006112646624155016623 0ustar brianbrian00000000000000Changelog --------- .. include:: ../CHANGES.rst sshuttle-0.76/MANIFEST.in0000600000175000017500000000037712646622104015271 0ustar brianbrian00000000000000include *.txt include *.rst include *.py include MANIFEST.in include LICENSE include run include tox.ini recursive-include docs *.bat recursive-include docs *.py recursive-include docs *.rst recursive-include docs Makefile recursive-include sshuttle *.py sshuttle-0.76/sshuttle.egg-info/0000700000175000017500000000000012646642532017076 5ustar brianbrian00000000000000sshuttle-0.76/sshuttle.egg-info/dependency_links.txt0000600000175000017500000000000112646642532023146 0ustar brianbrian00000000000000 sshuttle-0.76/sshuttle.egg-info/SOURCES.txt0000600000175000017500000000210512646642532020762 0ustar brianbrian00000000000000.gitignore .travis.yml CHANGES.rst LICENSE MANIFEST.in README.rst VERSION.txt requirements.txt run setup.py tox.ini docs/Makefile docs/changes.rst docs/conf.py docs/how-it-works.rst docs/index.rst docs/installation.rst docs/make.bat docs/manpage.rst docs/overview.rst docs/requirements.rst docs/support.rst docs/trivia.rst docs/usage.rst sshuttle/__init__.py sshuttle/__main__.py sshuttle/assembler.py sshuttle/client.py sshuttle/cmdline.py sshuttle/firewall.py sshuttle/helpers.py sshuttle/hostwatch.py sshuttle/linux.py sshuttle/options.py sshuttle/server.py sshuttle/ssh.py sshuttle/ssnet.py sshuttle/ssyslog.py sshuttle/stresstest.py sshuttle/version.py sshuttle.egg-info/PKG-INFO sshuttle.egg-info/SOURCES.txt sshuttle.egg-info/dependency_links.txt sshuttle.egg-info/entry_points.txt sshuttle.egg-info/top_level.txt sshuttle/methods/__init__.py sshuttle/methods/nat.py sshuttle/methods/pf.py sshuttle/methods/tproxy.py sshuttle/tests/test_firewall.py sshuttle/tests/test_helpers.py sshuttle/tests/test_methods_nat.py sshuttle/tests/test_methods_pf.py sshuttle/tests/test_methods_tproxy.pysshuttle-0.76/sshuttle.egg-info/PKG-INFO0000600000175000017500000000454112646642532020201 0ustar brianbrian00000000000000Metadata-Version: 1.1 Name: sshuttle Version: 0.76 Summary: Full-featured" VPN over an SSH tunnel Home-page: https://github.com/sshuttle/sshuttle Author: Brian May Author-email: brian@linuxpenguins.xyz License: GPL2+ Description: sshuttle: where transparent proxy meets VPN meets ssh ===================================================== As far as I know, sshuttle is the only program that solves the 
following common case: - Your client machine (or router) is Linux, FreeBSD, or MacOS. - You have access to a remote network via ssh. - You don't necessarily have admin access on the remote network. - The remote network has no VPN, or only stupid/complex VPN protocols (IPsec, PPTP, etc). Or maybe you *are* the admin and you just got frustrated with the awful state of VPN tools. - You don't want to create an ssh port forward for every single host/port on the remote network. - You hate openssh's port forwarding because it's randomly slow and/or stupid. - You can't use openssh's PermitTunnel feature because it's disabled by default on openssh servers; plus it does TCP-over-TCP, which has terrible performance (see below). Obtaining sshuttle ------------------ - From PyPI:: pip install sshuttle - Clone:: git clone https://github.com/sshuttle/sshuttle.git ./setup.py install Documentation ------------- The documentation for the stable version is available at: http://sshuttle.readthedocs.org/ The documentation for the latest development version is available at: http://sshuttle.readthedocs.org/en/latest/ Keywords: ssh vpn Platform: UNKNOWN Classifier: Development Status :: 5 - Production/Stable Classifier: Intended Audience :: Developers Classifier: Intended Audience :: End Users/Desktop Classifier: License :: OSI Approved :: GNU General Public License v2 or later (GPLv2+) Classifier: Operating System :: OS Independent Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3.5 Classifier: Topic :: System :: Networking sshuttle-0.76/sshuttle.egg-info/entry_points.txt0000600000175000017500000000006412646642532022376 0ustar brianbrian00000000000000[console_scripts] sshuttle = sshuttle.cmdline:main sshuttle-0.76/sshuttle.egg-info/top_level.txt0000600000175000017500000000001112646642532021622 0ustar brianbrian00000000000000sshuttle sshuttle-0.76/VERSION.txt0000600000175000017500000000000512645053336015411 0ustar brianbrian000000000000000.75 sshuttle-0.76/setup.py0000700000175000017500000000401212646546371015247 0ustar brianbrian00000000000000#!/usr/bin/env python # Copyright 2012-2014 Brian May # # This file is part of python-tldap. # # python-tldap is free software: you can redistribute it and/or modify # it under the terms of the GNU General Public License as published by # the Free Software Foundation, either version 3 of the License, or # (at your option) any later version. # # python-tldap is distributed in the hope that it will be useful, # but WITHOUT ANY WARRANTY; without even the implied warranty of # MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the # GNU General Public License for more details. # # You should have received a copy of the GNU General Public License # along with python-tldap If not, see . 
from setuptools import setup, find_packages def version_scheme(version): from setuptools_scm.version import guess_next_dev_version version = guess_next_dev_version(version) return version.lstrip("v") setup( name="sshuttle", use_scm_version={ 'write_to': "sshuttle/version.py", 'version_scheme': version_scheme, }, setup_requires=['setuptools_scm'], # version=version, url='https://github.com/sshuttle/sshuttle', author='Brian May', author_email='brian@linuxpenguins.xyz', description='Full-featured VPN over an SSH tunnel', packages=find_packages(), license="GPL2+", long_description=open('README.rst').read(), classifiers=[ "Development Status :: 5 - Production/Stable", "Intended Audience :: Developers", "Intended Audience :: End Users/Desktop", "License :: OSI Approved :: " "GNU General Public License v2 or later (GPLv2+)", "Operating System :: OS Independent", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3.5", "Topic :: System :: Networking", ], entry_points={ 'console_scripts': [ 'sshuttle = sshuttle.cmdline:main', ], }, tests_require=['pytest', 'mock'], keywords="ssh vpn", )
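A brief usage sketch for the ``setup.py`` above (an editorial example, not part of the distribution). It assumes a git checkout, since ``use_scm_version`` makes ``setuptools_scm`` derive the version from the repository metadata and write it to ``sshuttle/version.py``::

    # Install from a git clone; the console_scripts entry point defined above
    # installs an `sshuttle` command wired to sshuttle.cmdline:main.
    git clone https://github.com/sshuttle/sshuttle.git
    cd sshuttle
    pip install .
    sshuttle --help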