pax_global_header 0000666 0000000 0000000 00000000064 12217055661 0014517 g ustar 00root root 0000000 0000000 52 comment=b9255b7ef6dace6ff50a7c0425f437650724674e
obfsproxy-0.2.3/ 0000775 0000000 0000000 00000000000 12217055661 0013554 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.3/ChangeLog 0000664 0000000 0000000 00000002705 12217055661 0015332 0 ustar 00root root 0000000 0000000 Changes in version 0.2.3 - 2013-09-11
- Use the new pyptlib API (>= pyptlib-0.0.4). Patch by Ximin Luo.
- Add support for sending the pluggable transport name to Tor (using
the Extended ORPort) so that it can be considered in the statistics.
- Remove licenses of dependencies from the LICENSE file. (They were
moved to be with browser bundle packaging scripts.)
- Fix a bug in the SOCKS code. An assertion would trigger if
the SOCKS destination sent traffic before obfsproxy did.
Fixes #9239.
- Add a --version switch. Fixes #9255.
Changes in version 0.2.2 - 2013-04-15
- Fix a bug where the CLI compatibility patch that was introduced
in 0.2.1 was placed in the wrong place, making it useless when
obfsproxy gets installed. Patch by Lunar.
- Add dependencies to the setup script.
- Update the HOWTO to use pip.
Changes in version 0.2.1 - 2013-04-08
- Rename project from "pyobfsproxy" to "obfsproxy"!
- Add licenses of dependencies to the LICENSE file.
- Add support for logging exceptions to logfiles.
- Add shared secret support to obfs2.
- Add support for per-connection SOCKS arguments.
- Add a setup script for py2exe.
- Slightly improve the executable script.
- Improve command line interface compatibility between C-obfsproxy
and Python-obfsproxy by supporting the "--managed" switch.
Changes in version 0.0.2 - 2013-02-17
- Add some more files to the MANIFEST.in.
Changes in version 0.0.1 - 2013-02-15
- Initial release.
obfsproxy-0.2.3/INSTALL 0000664 0000000 0000000 00000000456 12217055661 0014612 0 ustar 00root root 0000000 0000000 Just run: # python setup.py install
You will need to run the above command as root. It will install
obfsproxy somewhere in your $PATH. If you don't want that, you can
try to run
$ python setup.py install --user
as your regular user, and setup.py will install obfsproxy somewhere
in your home directory.
obfsproxy-0.2.3/LICENSE 0000664 0000000 0000000 00000002757 12217055661 0014574 0 ustar 00root root 0000000 0000000 This is the license of the obfsproxy software.
Copyright 2013 George Kadianakis
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the names of the copyright owners nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
obfsproxy-0.2.3/PKG-INFO 0000664 0000000 0000000 00000000402 12217055661 0014645 0 ustar 00root root 0000000 0000000 Metadata-Version: 1.0
Name: obfsproxy
Version: 0.2.3
Summary: A pluggable transport proxy written in Python
Home-page: UNKNOWN
Author: asn
Author-email: asn@torproject.org
License: BSD
Description: UNKNOWN
Keywords: tor,obfuscation,twisted
Platform: UNKNOWN
obfsproxy-0.2.3/README 0000664 0000000 0000000 00000000475 12217055661 0014442 0 ustar 00root root 0000000 0000000 Obfsproxy is a pluggable transport proxy written in Python.
See doc/HOWTO.txt for installation instructions.
If you want to write a pluggable transport, see the code of already
existing transports in obfsproxy/transports/. Unfortunately, a coding
guide for pluggable transport authors does not exist at the moment!
obfsproxy-0.2.3/TODO 0000664 0000000 0000000 00000000560 12217055661 0014245 0 ustar 00root root 0000000 0000000 * Write more transports.
* Write more docs (architecture document, HACKING, etc.)
* Improve the integration testers (especially add better debugging
support for when a test fails)
* Kill all the XXXs in the code.
* Convert all the leftover camelCases to underscore_naming.
* Implement a SOCKS client, so that Obfsproxy can send its data
through a SOCKS proxy. obfsproxy-0.2.3/bin/ 0000775 0000000 0000000 00000000000 12217055661 0014324 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.3/bin/obfsproxy 0000775 0000000 0000000 00000000701 12217055661 0016303 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
import sys, os
# Forcefully add the root directory of the project to our path.
# http://www.py2exe.org/index.cgi/WhereAmI
if hasattr(sys, "frozen"):
dir_of_executable = os.path.dirname(sys.executable)
else:
dir_of_executable = os.path.dirname(__file__)
path_to_project_root = os.path.abspath(os.path.join(dir_of_executable, '..'))
sys.path.insert(0, path_to_project_root)
from obfsproxy.pyobfsproxy import run
run()
obfsproxy-0.2.3/doc/ 0000775 0000000 0000000 00000000000 12217055661 0014321 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.3/doc/HOWTO.txt 0000664 0000000 0000000 00000006061 12217055661 0015765 0 ustar 00root root 0000000 0000000 This is a short guide on how to setup an obfsproxy obfs2/obfs3 bridge
on a Debian/Ubuntu system.
Step 0: Install Python
To use obfsproxy you will need Python (>= 2.7) and pip. If you use
Debian testing (or unstable), or a version of Ubuntu newer than
Oneiric, this is easy:
# apt-get install python2.7 python-pip
Step 1: Install Tor
You will also need a development version of Tor. To do this, you
should use the following guide to install tor and
deb.torproject.org-keyring:
https://www.torproject.org/docs/debian.html.en#development
You need Tor 0.2.4.x because it knows how to automatically report
your obfsproxy address to BridgeDB.
Step 2: Install obfsproxy
If you have pip, installing obfsproxy and its dependencies should be
a matter of a single command:
$ pip install obfsproxy
Step 3: Setup Tor
Now set up Tor. Edit your /etc/tor/torrc to add:
SocksPort 0
ORPort 443 # or some other port if you already run a webserver/skype
BridgeRelay 1
Exitpolicy reject *:*
## CHANGEME_1 -> provide a nickname for your bridge, can be anything you like
#Nickname CHANGEME_1
## CHANGEME_2 -> provide some email address so we can contact you if there's a problem
#ContactInfo CHANGEME_2
ServerTransportPlugin obfs2,obfs3 exec /usr/local/bin/obfsproxy managed
Don't forget to uncomment and edit the CHANGEME fields.
Step 4: Launch Tor and verify that it bootstraps
Restart Tor to use the new configuration file. (Preface with sudo if
needed.)
# service tor restart
Now check /var/log/tor/log and you should see something like this:
Nov 05 16:40:45.000 [notice] We now have enough directory information to build circuits.
Nov 05 16:40:45.000 [notice] Bootstrapped 80%: Connecting to the Tor network.
Nov 05 16:40:46.000 [notice] Bootstrapped 85%: Finishing handshake with first hop.
Nov 05 16:40:46.000 [notice] Bootstrapped 90%: Establishing a Tor circuit.
Nov 05 16:40:48.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Nov 05 16:40:48.000 [notice] Bootstrapped 100%: Done.
If Tor is earlier in the bootstrapping phase, wait until it gets to 100%.
Step 5: Set up port forwarding if needed
If you're behind a NAT/firewall, you'll need to make your bridge
reachable from the outside world — both on the ORPort and the
obfsproxy port. The ORPort is whatever you defined in Step 3
above. To find your obfsproxy port, check your Tor logs for two
lines similar to these:
Oct 05 20:00:41.000 [notice] Registered server transport 'obfs2' at '0.0.0.0:26821'
Oct 05 20:00:42.000 [notice] Registered server transport 'obfs3' at '0.0.0.0:40172'
The last number in each line, in this case 26821 and 40172, is the
TCP port number that you need to forward through your
firewall. (This port is randomly chosen the first time Tor starts,
but Tor will cache and reuse the same number in future runs.) If you
want to change the number, use Tor 0.2.4.7-alpha or later, and set
"ServerTransportListenAddr obfs2 0.0.0.0:26821" in your torrc.
obfsproxy-0.2.3/doc/obfs2/ 0000775 0000000 0000000 00000000000 12217055661 0015334 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.3/doc/obfs2/obfs2-protocol-spec.txt 0000664 0000000 0000000 00000007132 12217055661 0021702 0 ustar 00root root 0000000 0000000 obfs2 (The Twobfuscator)
0. Protocol overview
This is a protocol obfuscation layer for TCP protocols. Its purpose
is to keep a third party from telling what protocol is in use based
on message contents. It is based on brl's ssh obfuscation protocol.
It does not provide authentication or data integrity. It does not
hide data lengths. It is more suitable for providing a layer of
obfuscation for an existing authenticated protocol, like SSH or TLS.
The protocol has two phases: in the first phase, the parties
establish keys. In the second, the parties exchange superenciphered
traffic.
1. Primitives, notation, and constants.
H(x) is SHA256 of x.
H^n(x) is H(x) called iteratively n times.
E(K,s) is the AES-CTR-128 encryption of s using K as key.
x | y is the concatenation of x and y.
UINT32(n) is the 4 byte value of n in big-endian (network) order.
SR(n) is n bytes of strong random data.
WR(n) is n bytes of weaker random data.
"xyz" is the ASCII characters 'x', 'y', and 'z', not NUL-terminated.
s[:n] is the first n bytes of s.
s[n:] is the last n bytes of s.
MAGIC_VALUE is 0x2BF5CA7E
SEED_LENGTH is 16
MAX_PADDING is 8192
HASH_ITERATIONS is 100000
KEYLEN is the length of the key used by E(K,s) -- that is, 16.
IVLEN is the length of the IV used by E(K,s) -- that is, 16.
HASHLEN is the length of the output of H() -- that is, 32.
MAC(s, x) = H(s | x | s)
A "byte" is an 8-bit octet.
We require that HASHLEN >= KEYLEN + IVLEN
2. Key establishment phase.
The party who opens the connection is the 'initiator'; the one who
accepts it is the 'responder'. Each begins by generating a seed
and a padding key as follows. The initiator generates:
INIT_SEED = SR(SEED_LENGTH)
INIT_PAD_KEY = MAC("Initiator obfuscation padding", INIT_SEED)[:KEYLEN]
And the responder generates:
RESP_SEED = SR(SEED_LENGTH)
RESP_PAD_KEY = MAC("Responder obfuscation padding", RESP_SEED)[:KEYLEN]
Each then generates a random number PADLEN in range from 0 through
MAX_PADDING (inclusive).
The initiator then sends:
INIT_SEED | E(INIT_PAD_KEY, UINT32(MAGIC_VALUE) | UINT32(PADLEN) | WR(PADLEN))
and the responder sends:
RESP_SEED | E(RESP_PAD_KEY, UINT32(MAGIC_VALUE) | UINT32(PADLEN) | WR(PADLEN))
Upon receiving the SEED from the other party, each party derives
the other party's padding key value as above, and decrypts the next
8 bytes of the key establishment message. If the MAGIC_VALUE does
not match, or the PADLEN value is greater than MAX_PADDING, the
party receiving it should close the connection immediately.
Otherwise, it should read the remaining PADLEN bytes of padding data
and discard them.
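The key-establishment message described above can be sketched in Python using only the standard library. This is an illustrative sketch, not the obfsproxy implementation: the AES-CTR encryption step E() is deliberately omitted (it needs a crypto library), so the function below returns the seed, the derived padding key, and the still-unencrypted message body; the function name build_initiator_handshake is our own.

```python
import os
import struct
import random
import hashlib

MAGIC_VALUE = 0x2BF5CA7E
SEED_LENGTH = 16
MAX_PADDING = 8192
KEYLEN = 16

def mac(s, x):
    # MAC(s, x) = H(s | x | s), with H = SHA256
    return hashlib.sha256(s + x + s).digest()

def build_initiator_handshake():
    """Build the initiator's key-establishment message (E() step omitted)."""
    init_seed = os.urandom(SEED_LENGTH)                      # SR(SEED_LENGTH)
    init_pad_key = mac(b"Initiator obfuscation padding", init_seed)[:KEYLEN]
    padlen = random.randint(0, MAX_PADDING)                  # 0..MAX_PADDING inclusive
    body = struct.pack("!II", MAGIC_VALUE, padlen) + os.urandom(padlen)
    # On the wire this would be: init_seed | E(init_pad_key, body)
    return init_seed, init_pad_key, body
```

The responder's message is built identically, substituting "Responder obfuscation padding" and RESP_SEED.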
Additional keys are then derived as:
INIT_SECRET = MAC("Initiator obfuscated data", INIT_SEED|RESP_SEED)
RESP_SECRET = MAC("Responder obfuscated data", INIT_SEED|RESP_SEED)
INIT_KEY = INIT_SECRET[:KEYLEN]
INIT_IV = INIT_SECRET[KEYLEN:]
RESP_KEY = RESP_SECRET[:KEYLEN]
RESP_IV = RESP_SECRET[KEYLEN:]
The INIT_KEY value keys a stream cipher used to encrypt values from
initiator to responder thereafter. The stream cipher's IV is
INIT_IV. The RESP_KEY value keys a stream cipher used to encrypt
values from responder to initiator thereafter. That stream cipher's
IV is RESP_IV.
3. Shared-secret extension
Optionally, if the client and server share a secret value SECRET,
they can replace the MAC function with:
MAC(s,x) = H^n(s | x | H(SECRET) | s)
where n = HASH_ITERATIONS.
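The iterated, secret-keyed MAC above can be sketched as follows. This is an illustrative sketch (the function name keyed_mac and the n parameter default are ours); the spec's n is HASH_ITERATIONS = 100000, which is slow by design, so a caller demonstrating the scheme may pass a smaller n.

```python
import hashlib

HASH_ITERATIONS = 100000  # spec value; deliberately expensive

def keyed_mac(s, x, secret, n=HASH_ITERATIONS):
    """MAC(s, x) = H^n(s | x | H(SECRET) | s), with H = SHA256."""
    digest = s + x + hashlib.sha256(secret).digest() + s
    for _ in range(n):
        digest = hashlib.sha256(digest).digest()
    return digest
```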
obfsproxy-0.2.3/doc/obfs2/obfs2-threat-model.txt 0000664 0000000 0000000 00000006411 12217055661 0021475 0 ustar 00root root 0000000 0000000 Threat model for the obfs2 obfuscation protocol
George Kadianakis
Nick Mathewson
0. Abstract
We discuss the intended threat model for the 'obfs2' protocol
obfuscator, its limitations, and its implications for the protocol
design.
The 'obfs2' protocol is based on Bruce Leidl's obfuscated SSH layer,
and is documented in the 'doc/protocol-spec.txt' file in the obfsproxy
distribution.
1. Adversary capabilities and non-capabilities
We assume a censor with limited per-connection resources.
The adversary controls the infrastructure of the network within and
at the edges of her jurisdiction, and she can potentially monitor,
block, alter, and inject traffic anywhere within this region.
However, the adversary's computational resources are limited.
Specifically, the adversary does not have the resources in her
censorship infrastructure to store very much long-term information
about any given IP or connection.
The adversary also holds a blacklist of network protocols, which she
is interested in blocking. We assume that the adversary does not have
a complete list of specific IPs running that protocol, though
preventing this is out-of-scope.
2. The adversary's goals
The censor wants to ban particular encrypted protocols or
applications, and is willing to tolerate some collateral damage, but
is not willing to ban all encrypted traffic entirely.
3. Goals of obfs2
Currently, most attackers in the category described above implement
their censorship with one or more firewalls that look for protocol
signatures and block protocols matching those signatures. These
signatures are typically in the form of static strings to be matched
or regular expressions to be evaluated, over a packet or TCP flow.
obfs2 attempts to counter the above attack by removing content
signatures from network traffic. obfs2 encrypts the traffic stream
with a stream cipher, which results in the traffic looking uniformly
random.
4. Non-goals of obfs2
obfs2 was designed as a proof-of-concept for Tor's pluggable
transport system: it is simple, usable and easily implementable. It
does _not_ try to protect against more sophisticated adversaries.
obfs2 does not try to protect against non-content protocol
fingerprints, like the packet size or timing.
obfs2 does not try to protect against attackers capable of measuring
traffic entropy.
obfs2 (in its default configuration) does not try to protect against
Deep Packet Inspection machines that expect the obfs2 protocol and
have the resources to run it. Such machines can trivially retrieve
the decryption key off the traffic stream and use it to decrypt obfs2
and detect the Tor protocol.
obfs2 assumes that the underlying protocol provides (or does not
need!) integrity, confidentiality, and authentication; it provides
none of those on its own.
In other words, obfs2 does not try to protect against anything other
than fingerprintable TCP content patterns.
That said, obfs2 is not useless. It protects against many real-life
Tor traffic detection methods currently deployed, since most of them
currently use static SSL handshake strings as signatures.
obfsproxy-0.2.3/doc/obfs3/ 0000775 0000000 0000000 00000000000 12217055661 0015335 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.3/doc/obfs3/obfs3-protocol-spec.txt 0000664 0000000 0000000 00000013716 12217055661 0021711 0 ustar 00root root 0000000 0000000 obfs3 (The Threebfuscator)
0. Protocol overview
This is a protocol obfuscation layer for TCP protocols. Its
purpose is to keep a third party from telling what protocol is in
use based on message contents.
Like obfs2, it does not provide authentication or data integrity.
It does not hide data lengths. It is more suitable for providing a
layer of obfuscation for an existing authenticated protocol, like
SSH or TLS.
Like obfs2, the protocol has two phases: in the first phase, the
parties establish keys. In the second, the parties exchange
superenciphered traffic.
1. Motivation
The first widely used obfuscation protocol for Tor was obfs2. obfs2
encrypted traffic using a key that was negotiated during the
protocol.
obfs2 did not use a robust cryptographic key exchange, and the key
could be retrieved by any passive adversary who monitored the
initial handshake of obfs2.
People believe that the easiest way to block obfs2 would be to
retrieve the key, decrypt the first bytes of the handshake, and
look for redundancy on the handshake message.
To defend against this attack, obfs3 negotiates keys using an
anonymous Diffie Hellman key exchange. This is done so that a
passive adversary would not be able to retrieve the obfs3 session
key.
Unfortunately, traditional DH (over subgroups of Z_p* or over
Elliptic Curves) does not fit our threat model since its public
keys are distinguishable from random strings of the same size. For
this reason, a custom DH protocol was proposed that offers public
keys that look like random strings. The UniformDH scheme was
proposed by Ian Goldberg in:
https://lists.torproject.org/pipermail/tor-dev/2012-December/004245.html
2. Primitives, notation, and constants.
E(K,s) is the AES-CTR-128 encryption of s using K as key.
x | y is the concatenation of x and y.
WR(n) is n bytes of weaker random data.
"xyz" is the ASCII characters 'x', 'y', and 'z', not NULL-terminated.
s[:n] is the first n bytes of s.
s[n:] is the last n bytes of s.
MAX_PADDING is 8194
KEYLEN is the length of the key used by E(K,s) -- that is, 16.
COUNTERLEN is the length of the counter used by AES-CTR-128 -- that is, 16.
HMAC(k,m) is HMAC-SHA256(k,m) with 'k' being the key, and 'm' the
message.
A "byte" is an 8-bit octet.
3. UniformDH
The UniformDH Diffie-Hellman scheme uses group 5 from RFC3526. It's
a 1536-bit MODP group.
To pick a private UniformDH key, we pick a random 1536-bit number,
and make it even by setting its low bit to 0. Let x be that private
key, and X = g^x (mod p).
The other party computes private and public keys, y and Y, in the
same manner.
When someone sends her public key to the other party, she randomly
decides whether to send X or p-X. This makes the public key
negligibly different from a uniform 1536-bit string.
When a party wants to calculate the shared secret, she
raises the foreign public key to her private key. Note that both
(p-Y)^x = Y^x (mod p) and (p-X)^y = X^y (mod p), since x and y are
even.
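The UniformDH procedure above can be sketched with stdlib Python big-integer arithmetic. This is an illustrative sketch, not the obfsproxy implementation: the function names keypair and shared_secret are ours, and the prime below is the RFC 3526 group 5 (1536-bit MODP) value that the spec references.

```python
import os
import random

# RFC 3526 group 5: 1536-bit MODP prime, generator g = 2.
P = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD129024E088A67CC74"
    "020BBEA63B139B22514A08798E3404DDEF9519B3CD3A431B302B0A6DF25F1437"
    "4FE1356D6D51C245E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3DC2007CB8A163BF05"
    "98DA48361C55D39A69163FA8FD24CF5F83655D23DCA3AD961C62F356208552BB"
    "9ED529077096966D670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF",
    16)
G = 2

def keypair():
    """Generate a UniformDH keypair: even private x, public key sent as X or p-X."""
    x = int.from_bytes(os.urandom(192), "big") & ~1   # 1536-bit, low bit cleared
    X = pow(G, x, P)
    sent = X if random.getrandbits(1) else P - X      # coin flip hides which form is sent
    return x, sent

def shared_secret(my_priv, their_sent):
    # (p-Y)^x == Y^x (mod p) because x is even, so either received form works.
    return pow(their_sent, my_priv, P)
```

Both parties arrive at the same shared secret regardless of which of X or p-X each side chose to transmit.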
3. Key establishment phase.
The party who opens the connection is the 'initiator'; the one who
accepts it is the 'responder'. Each begins by generating a
UniformDH keypair, and a random number PADLEN in [0, MAX_PADDING/2].
Both parties then send:
PUB_KEY | WR(PADLEN)
After retrieving the public key of the other end, each party
completes the DH key exchange and generates a shared-secret for the
session (named SHARED_SECRET). Using that shared-secret each party
derives its encryption keys as follows:
INIT_SECRET = HMAC(SHARED_SECRET, "Initiator obfuscated data")
RESP_SECRET = HMAC(SHARED_SECRET, "Responder obfuscated data")
INIT_KEY = INIT_SECRET[:KEYLEN]
INIT_COUNTER = INIT_SECRET[KEYLEN:]
RESP_KEY = RESP_SECRET[:KEYLEN]
RESP_COUNTER = RESP_SECRET[KEYLEN:]
The INIT_KEY value keys a block cipher (in CTR mode) used to
encrypt values from initiator to responder thereafter. The counter
mode's initial counter value is INIT_COUNTER. The RESP_KEY value
keys a block cipher (in CTR mode) used to encrypt values from
responder to initiator thereafter. That counter mode's initial
counter value is RESP_COUNTER.
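The key-derivation step above is a direct application of HMAC-SHA256 and can be sketched as follows; the function name derive_session_keys and the dict layout are ours, introduced for illustration.

```python
import hmac
import hashlib

KEYLEN = 16

def derive_session_keys(shared_secret):
    """Derive the obfs3 session keys and CTR counters from SHARED_SECRET."""
    def h(msg):
        return hmac.new(shared_secret, msg, hashlib.sha256).digest()
    init_secret = h(b"Initiator obfuscated data")
    resp_secret = h(b"Responder obfuscated data")
    return {
        "INIT_KEY": init_secret[:KEYLEN], "INIT_COUNTER": init_secret[KEYLEN:],
        "RESP_KEY": resp_secret[:KEYLEN], "RESP_COUNTER": resp_secret[KEYLEN:],
    }
```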
After the handshake is complete, when the initiator wants to send
application-layer data for the first time, she generates another
random number PADLEN2 in [0, MAX_PADDING/2], and sends:
WR(PADLEN2) | HMAC(SHARED_SECRET, "Initiator magic") | E(INIT_KEY, DATA)
When the responder wants to send application-layer data for the
first time, she likewise generates her own random number PADLEN2 in
[0, MAX_PADDING/2], and sends:
WR(PADLEN2) | HMAC(SHARED_SECRET, "Responder magic") | E(RESP_KEY, DATA)
After a party receives the public key from the other end, it needs
to find out where the padding stops and where the application-layer
data starts. To do so, every time she receives network data, the
receiver tries to find the magic HMAC string in the data between
the public key and the end of the newly received data. After
spotting the magic string, she knows where the application-layer
data starts and she can start decrypting it.
If a party has scanned more than MAX_PADDING bytes and the magic
string has not yet been found, the party MUST close the connection.
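The receiver's scan for the magic HMAC delimiter can be sketched as below. This is an illustrative sketch under our own naming (find_data_start); it returns the offset where application-layer data begins, None if more data must be buffered, and raises once more than MAX_PADDING bytes have been scanned without a match, per the MUST-close rule above.

```python
import hmac
import hashlib

MAX_PADDING = 8194

def find_data_start(shared_secret, received, magic_label):
    """Return the offset of application data after the magic HMAC, or None."""
    magic = hmac.new(shared_secret, magic_label, hashlib.sha256).digest()
    idx = received.find(magic)
    if idx == -1:
        if len(received) > MAX_PADDING + len(magic):
            raise ConnectionError("no magic string within MAX_PADDING; close connection")
        return None  # keep buffering; magic may arrive in a later read
    return idx + len(magic)
```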
After the initiator sends the magic string and the first chunk of
application-layer data, she can send additional application-layer
data simply by encrypting it with her encryption key, and without
prepending any magic strings:
E(INIT_KEY, DATA)
Similarly, the responder sends additional application-layer data by
encrypting it with her encryption key:
E(RESP_KEY, DATA)
4. Acknowledgments
The idea of using a hash of the shared secret as the delimiter
between the padding and the data was suggested by Philipp Winter.
Ian Goldberg suggested the UniformDH scheme and helped a lot with
reviewing the protocol specification.
obfsproxy-0.2.3/doc/obfs3/obfs3-threat-model.txt 0000664 0000000 0000000 00000000612 12217055661 0021474 0 ustar 00root root 0000000 0000000 Threat model for the obfs3 obfuscation protocol
The threat model of obfs3 is identical to the threat model of obfs2,
with an added goal:
obfs3 offers protection against passive Deep Packet Inspection
machines that expect the obfs3 protocol. Such machines should not be
able to verify the existence of the obfs3 protocol without launching
an active attack against its handshake.
obfsproxy-0.2.3/obfsproxy/ 0000775 0000000 0000000 00000000000 12217055661 0015607 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.3/obfsproxy/__init__.py 0000664 0000000 0000000 00000000134 12217055661 0017716 0 ustar 00root root 0000000 0000000 from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
obfsproxy-0.2.3/obfsproxy/_version.py 0000664 0000000 0000000 00000000641 12217055661 0020006 0 ustar 00root root 0000000 0000000
# This file was generated by 'versioneer.py' (0.7+) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.
version_version = '0.2.3'
version_full = '557bdebe2cf21908101d3d8b61734f4be81105ff'
def get_versions(default={}, verbose=False):
return {'version': version_version, 'full': version_full}
obfsproxy-0.2.3/obfsproxy/common/ 0000775 0000000 0000000 00000000000 12217055661 0017077 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.3/obfsproxy/common/__init__.py 0000664 0000000 0000000 00000000000 12217055661 0021176 0 ustar 00root root 0000000 0000000 obfsproxy-0.2.3/obfsproxy/common/aes.py 0000664 0000000 0000000 00000001251 12217055661 0020220 0 ustar 00root root 0000000 0000000 #!/usr/bin/python
# -*- coding: utf-8 -*-
""" This module is a convenience wrapper for the AES cipher in CTR mode. """
from Crypto.Cipher import AES
from Crypto.Util import Counter
class AES_CTR_128(object):
"""An AES-CTR-128 PyCrypto wrapper."""
def __init__(self, key, iv):
"""Initialize AES with the given key and IV."""
assert(len(key) == 16)
assert(len(iv) == 16)
self.ctr = Counter.new(128, initial_value=long(iv.encode('hex'), 16))
self.cipher = AES.new(key, AES.MODE_CTR, counter=self.ctr)
def crypt(self, data):
"""
Encrypt or decrypt 'data'.
"""
return self.cipher.encrypt(data)
obfsproxy-0.2.3/obfsproxy/common/argparser.py 0000664 0000000 0000000 00000000462 12217055661 0021441 0 ustar 00root root 0000000 0000000 import argparse
import sys
"""
Overrides argparse.ArgumentParser so that it emits error messages to
stdout instead of stderr.
"""
class MyArgumentParser(argparse.ArgumentParser):
def _print_message(self, message, fd=None):
if message:
fd = sys.stdout
fd.write(message)
obfsproxy-0.2.3/obfsproxy/common/heartbeat.py 0000664 0000000 0000000 00000006143 12217055661 0021414 0 ustar 00root root 0000000 0000000 """heartbeat code"""
import datetime
import socket # for socket.inet_pton()
import obfsproxy.common.log as logging
log = logging.get_obfslogger()
def get_integer_from_ip_str(ip_str):
"""
Given an IP address in string format in ip_str, return its
packed binary representation (the raw inet_pton() bytes), which
serves as a canonical key for the unique-IP set.
Throws ValueError if the IP address string was invalid.
"""
try:
return socket.inet_pton(socket.AF_INET, ip_str)
except socket.error:
pass
try:
return socket.inet_pton(socket.AF_INET6, ip_str)
except socket.error:
pass
# Down here, both inet_pton()s failed.
raise ValueError("Invalid IP address string")
class Heartbeat(object):
"""
Represents obfsproxy's heartbeat.
It keeps stats on a number of things that the obfsproxy operator
might be interested in, and every now and then it reports them in
the logs.
'unique_ips': A Python set that contains unique IPs (in integer
form) that have connected to obfsproxy.
"""
def __init__(self):
self.n_connections = 0
self.started = datetime.datetime.now()
self.last_reset = self.started
self.unique_ips = set()
def register_connection(self, ip_str):
"""Register a new connection."""
self.n_connections += 1
self._register_ip(ip_str)
def _register_ip(self, ip_str):
"""
See if 'ip_str' has connected to obfsproxy before. If not, add
it to the list of unique IPs.
"""
ip = get_integer_from_ip_str(ip_str)
if ip not in self.unique_ips:
self.unique_ips.add(ip)
def reset_stats(self):
"""Reset stats."""
self.n_connections = 0
self.unique_ips = set()
self.last_reset = datetime.datetime.now()
def say_uptime(self):
"""Log uptime information."""
now = datetime.datetime.now()
delta = now - self.started
uptime_days = delta.days
uptime_hours = delta.seconds // 3600
uptime_minutes = (delta.seconds // 60) % 60
if uptime_days:
log.info("Heartbeat: obfsproxy's uptime is %d day(s), %d hour(s) and %d minute(s)." % \
(uptime_days, uptime_hours, uptime_minutes))
else:
log.info("Heartbeat: obfsproxy's uptime is %d hour(s) and %d minute(s)." % \
(uptime_hours, uptime_minutes))
def say_stats(self):
"""Log connection stats."""
now = datetime.datetime.now()
reset_delta = now - self.last_reset
log.info("Heartbeat: During the last %d hour(s) we saw %d connection(s)" \
" from %d unique address(es)." % \
(round(float(reset_delta.seconds)/3600) + reset_delta.days*24, self.n_connections,
len(self.unique_ips)))
# Reset stats every 24 hours.
if (reset_delta.days > 0):
log.debug("Resetting heartbeat.")
self.reset_stats()
def talk(self):
"""Do a heartbeat."""
self.say_uptime()
self.say_stats()
# A heartbeat singleton.
heartbeat = Heartbeat()
obfsproxy-0.2.3/obfsproxy/common/hmac_sha256.py 0000664 0000000 0000000 00000000346 12217055661 0021454 0 ustar 00root root 0000000 0000000 import hashlib
import hmac
def hmac_sha256_digest(key, msg):
"""
Return the HMAC-SHA256 message authentication code of the message
'msg' with key 'key'.
"""
return hmac.new(key, msg, hashlib.sha256).digest()
obfsproxy-0.2.3/obfsproxy/common/log.py 0000664 0000000 0000000 00000006611 12217055661 0020236 0 ustar 00root root 0000000 0000000 """obfsproxy logging code"""
import logging
import sys
from twisted.python import log
def get_obfslogger():
""" Return the current ObfsLogger instance """
return OBFSLOGGER
class ObfsLogger(object):
"""
Maintain state of logging options specified with command line arguments
Attributes:
safe_logging: Boolean value indicating if we should scrub addresses
before logging
obfslogger: Our logging instance
"""
def __init__(self):
self.safe_logging = True
observer = log.PythonLoggingObserver('obfslogger')
observer.start()
# Create the default log handler that logs to stdout.
self.obfslogger = logging.getLogger('obfslogger')
self.default_handler = logging.StreamHandler(sys.stdout)
self.set_formatter(self.default_handler)
self.obfslogger.addHandler(self.default_handler)
self.obfslogger.propagate = False
def set_formatter(self, handler):
"""Given a log handler, plug our custom formatter to it."""
formatter = logging.Formatter("%(asctime)s [%(levelname)s] %(message)s")
handler.setFormatter(formatter)
def set_log_file(self, filename):
"""Set up our logger so that it starts logging to file in 'filename' instead."""
# remove the default handler, and add the FileHandler:
self.obfslogger.removeHandler(self.default_handler)
log_handler = logging.FileHandler(filename)
self.set_formatter(log_handler)
self.obfslogger.addHandler(log_handler)
def set_log_severity(self, sev_string):
"""Update our minimum logging severity to 'sev_string'."""
# Turn it into a numeric level that logging understands first.
numeric_level = getattr(logging, sev_string.upper(), None)
self.obfslogger.setLevel(numeric_level)
def disable_logs(self):
"""Disable all logging."""
logging.disable(logging.CRITICAL)
def set_no_safe_logging(self):
""" Disable safe_logging """
self.safe_logging = False
def safe_addr_str(self, address):
"""
Unless safe_logging is False, we return '[scrubbed]' instead
of the address parameter. If safe_logging is false, then we
return the address itself.
"""
if self.safe_logging:
return '[scrubbed]'
else:
return address
def debug(self, msg, *args, **kwargs):
""" Class wrapper around debug logging method """
self.obfslogger.debug(msg, *args, **kwargs)
def warning(self, msg, *args, **kwargs):
""" Class wrapper around warning logging method """
self.obfslogger.warning(msg, *args, **kwargs)
def info(self, msg, *args, **kwargs):
""" Class wrapper around info logging method """
self.obfslogger.info(msg, *args, **kwargs)
def error(self, msg, *args, **kwargs):
""" Class wrapper around error logging method """
self.obfslogger.error(msg, *args, **kwargs)
def critical(self, msg, *args, **kwargs):
""" Class wrapper around critical logging method """
self.obfslogger.critical(msg, *args, **kwargs)
def exception(self, msg, *args, **kwargs):
""" Class wrapper around exception logging method """
self.obfslogger.exception(msg, *args, **kwargs)
""" Global variable that will track our Obfslogger instance """
OBFSLOGGER = ObfsLogger()
obfsproxy-0.2.3/obfsproxy/common/rand.py 0000664 0000000 0000000 00000000156 12217055661 0020377 0 ustar 00root root 0000000 0000000 import os
def random_bytes(n):
""" Returns n bytes of strong random data. """
return os.urandom(n)
obfsproxy-0.2.3/obfsproxy/common/serialize.py 0000664 0000000 0000000 00000001171 12217055661 0021440 0 ustar 00root root 0000000 0000000 """Helper functions to go from integers to binary data and back."""
import struct
def htonl(n):
"""
Convert integer in 'n' from host-byte order to network-byte order.
"""
return struct.pack('!I', n)
def ntohl(bs):
"""
Convert the 4-byte string 'bs' from network-byte order to a host-byte-order integer.
"""
return struct.unpack('!I', bs)[0]
def htons(n):
"""
Convert integer in 'n' from host-byte order to network-byte order.
"""
return struct.pack('!h', n)
def ntohs(bs):
"""
Convert the 2-byte string 'bs' from network-byte order to a host-byte-order integer.
"""
return struct.unpack('!h', bs)[0]
obfsproxy-0.2.3/obfsproxy/managed/ 0000775 0000000 0000000 00000000000 12217055661 0017203 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.3/obfsproxy/managed/__init__.py 0000664 0000000 0000000 00000000000 12217055661 0021302 0 ustar 00root root 0000000 0000000 obfsproxy-0.2.3/obfsproxy/managed/client.py 0000664 0000000 0000000 00000003443 12217055661 0021037 0 ustar 00root root 0000000 0000000 #!/usr/bin/python
# -*- coding: utf-8 -*-
from twisted.internet import reactor, error
import obfsproxy.network.launch_transport as launch_transport
import obfsproxy.transports.transports as transports
import obfsproxy.common.log as logging
from pyptlib.client import ClientTransportPlugin
from pyptlib.config import EnvError
import pprint
log = logging.get_obfslogger()
def do_managed_client():
"""Start the managed-proxy protocol as a client."""
should_start_event_loop = False
ptclient = ClientTransportPlugin()
try:
ptclient.init(transports.transports.keys())
    except EnvError as err:
log.warning("Client managed-proxy protocol failed (%s)." % err)
return
log.debug("pyptlib gave us the following data:\n'%s'", pprint.pformat(ptclient.getDebugData()))
for transport in ptclient.getTransports():
try:
addrport = launch_transport.launch_transport_listener(transport, None, 'socks', None)
except transports.TransportNotFound:
log.warning("Could not find transport '%s'" % transport)
ptclient.reportMethodError(transport, "Could not find transport.")
continue
except error.CannotListenError:
log.warning("Could not set up listener for '%s'." % transport)
ptclient.reportMethodError(transport, "Could not set up listener.")
continue
should_start_event_loop = True
log.debug("Successfully launched '%s' at '%s'" % (transport, log.safe_addr_str(str(addrport))))
ptclient.reportMethodSuccess(transport, "socks4", addrport, None, None)
ptclient.reportMethodsEnd()
if should_start_event_loop:
log.info("Starting up the event loop.")
reactor.run()
else:
log.info("No transports launched. Nothing to do.")
# obfsproxy-0.2.3/obfsproxy/managed/server.py
#!/usr/bin/python
# -*- coding: utf-8 -*-
from twisted.internet import reactor, error
from pyptlib.server import ServerTransportPlugin
from pyptlib.config import EnvError
import obfsproxy.transports.transports as transports
import obfsproxy.network.launch_transport as launch_transport
import obfsproxy.common.log as logging
import pprint
log = logging.get_obfslogger()
def do_managed_server():
"""Start the managed-proxy protocol as a server."""
should_start_event_loop = False
ptserver = ServerTransportPlugin()
try:
ptserver.init(transports.transports.keys())
    except EnvError as err:
log.warning("Server managed-proxy protocol failed (%s)." % err)
return
log.debug("pyptlib gave us the following data:\n'%s'", pprint.pformat(ptserver.getDebugData()))
ext_orport = ptserver.config.getExtendedORPort()
authcookie = ptserver.config.getAuthCookieFile()
orport = ptserver.config.getORPort()
for transport, transport_bindaddr in ptserver.getBindAddresses().items():
try:
if ext_orport:
addrport = launch_transport.launch_transport_listener(transport,
transport_bindaddr,
'ext_server',
ext_orport,
authcookie)
else:
addrport = launch_transport.launch_transport_listener(transport,
transport_bindaddr,
'server',
orport)
except transports.TransportNotFound:
log.warning("Could not find transport '%s'" % transport)
ptserver.reportMethodError(transport, "Could not find transport.")
continue
except error.CannotListenError:
log.warning("Could not set up listener for '%s'." % transport)
ptserver.reportMethodError(transport, "Could not set up listener.")
continue
should_start_event_loop = True
log.debug("Successfully launched '%s' at '%s'" % (transport, log.safe_addr_str(str(addrport))))
ptserver.reportMethodSuccess(transport, addrport, None)
ptserver.reportMethodsEnd()
if should_start_event_loop:
log.info("Starting up the event loop.")
reactor.run()
else:
log.info("No transports launched. Nothing to do.")
# obfsproxy-0.2.3/obfsproxy/network/__init__.py (empty)

# obfsproxy-0.2.3/obfsproxy/network/buffer.py
class Buffer(object):
"""
A Buffer is a simple FIFO buffer. You write() stuff to it, and you
read() them back. You can also peek() or drain() data.
"""
def __init__(self, data=''):
"""
Initialize a buffer with 'data'.
"""
self.buffer = bytes(data)
def read(self, n=-1):
"""
Read and return 'n' bytes from the buffer.
If 'n' is negative, read and return the whole buffer.
If 'n' is larger than the size of the buffer, read and return
the whole buffer.
"""
if (n < 0) or (n > len(self.buffer)):
the_whole_buffer = self.buffer
self.buffer = bytes('')
return the_whole_buffer
data = self.buffer[:n]
self.buffer = self.buffer[n:]
return data
def write(self, data):
"""
Append 'data' to the buffer.
"""
self.buffer = self.buffer + data
def peek(self, n=-1):
"""
Return 'n' bytes from the buffer, without draining them.
If 'n' is negative, return the whole buffer.
If 'n' is larger than the size of the buffer, return the whole
buffer.
"""
if (n < 0) or (n > len(self.buffer)):
return self.buffer
return self.buffer[:n]
def drain(self, n=-1):
"""
Drain 'n' bytes from the buffer.
If 'n' is negative, drain the whole buffer.
If 'n' is larger than the size of the buffer, drain the whole
buffer.
"""
if (n < 0) or (n > len(self.buffer)):
self.buffer = bytes('')
return
self.buffer = self.buffer[n:]
return
def __len__(self):
"""Returns length of buffer. Used in len()."""
return len(self.buffer)
def __nonzero__(self):
"""
Returns True if the buffer is non-empty.
Used in truth-value testing.
"""
        return len(self.buffer) > 0
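The read/peek/drain semantics above can be illustrated with a standalone sketch on plain byte strings (mirroring the class's behavior, not importing it):

```python
buf = b'hello world'

def peek(b, n=-1):
    # Like Buffer.peek(): a negative or out-of-range 'n' returns the whole buffer.
    return b if (n < 0 or n > len(b)) else b[:n]

def read(b, n=-1):
    # Like Buffer.read(), but returns (data, remaining) instead of mutating.
    if n < 0 or n > len(b):
        return b, b''
    return b[:n], b[n:]

assert peek(buf, 5) == b'hello'        # peek does not consume
assert peek(buf) == b'hello world'     # negative n peeks everything
data, rest = read(buf, 5)
assert data == b'hello' and rest == b' world'
data, rest = read(rest, 100)           # n larger than buffer: take it all
assert data == b' world' and rest == b''
```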
# obfsproxy-0.2.3/obfsproxy/network/extended_orport.py
import os
from twisted.internet import reactor
import obfsproxy.common.log as logging
import obfsproxy.common.serialize as srlz
import obfsproxy.common.hmac_sha256 as hmac_sha256
import obfsproxy.common.rand as rand
import obfsproxy.network.network as network
log = logging.get_obfslogger()
# Authentication states:
STATE_WAIT_FOR_AUTH_TYPES = 1
STATE_WAIT_FOR_SERVER_NONCE = 2
STATE_WAIT_FOR_AUTH_RESULTS = 3
STATE_WAIT_FOR_OKAY = 4
STATE_OPEN = 5
# Authentication protocol parameters:
AUTH_PROTOCOL_HEADER_LEN = 4
# Safe-cookie authentication parameters:
AUTH_SERVER_TO_CLIENT_CONST = "ExtORPort authentication server-to-client hash"
AUTH_CLIENT_TO_SERVER_CONST = "ExtORPort authentication client-to-server hash"
AUTH_NONCE_LEN = 32
AUTH_HASH_LEN = 32
# Extended ORPort commands:
# Transport-to-Bridge
EXT_OR_CMD_TB_DONE = 0x0000
EXT_OR_CMD_TB_USERADDR = 0x0001
EXT_OR_CMD_TB_TRANSPORT = 0x0002
# Bridge-to-Transport
EXT_OR_CMD_BT_OKAY = 0x1000
EXT_OR_CMD_BT_DENY = 0x1001
EXT_OR_CMD_BT_CONTROL = 0x1002
# Authentication cookie parameters
AUTH_COOKIE_LEN = 32
AUTH_COOKIE_HEADER_LEN = 32
AUTH_COOKIE_FILE_LEN = AUTH_COOKIE_LEN + AUTH_COOKIE_HEADER_LEN
AUTH_COOKIE_HEADER = "! Extended ORPort Auth Cookie !\x0a"
def _read_auth_cookie(cookie_path):
"""
Read an Extended ORPort authentication cookie from 'cookie_path' and return it.
    Throws CouldNotReadCookie if we couldn't read the cookie.
"""
# Check if file exists.
if not os.path.exists(cookie_path):
raise CouldNotReadCookie("'%s' doesn't exist" % cookie_path)
# Check its size and make sure it's correct before opening.
auth_cookie_file_size = os.path.getsize(cookie_path)
if auth_cookie_file_size != AUTH_COOKIE_FILE_LEN:
raise CouldNotReadCookie("Cookie '%s' is the wrong size (%i bytes instead of %d)" % \
(cookie_path, auth_cookie_file_size, AUTH_COOKIE_FILE_LEN))
    try:
        with open(cookie_path, 'rb') as f:
            header = f.read(AUTH_COOKIE_HEADER_LEN) # first 32 bytes are the header
            if header != AUTH_COOKIE_HEADER:
                raise CouldNotReadCookie("Corrupted cookie file header '%s'." % header)
            return f.read(AUTH_COOKIE_LEN) # next 32 bytes should be the cookie.
    except IOError as exc:
        raise CouldNotReadCookie("Unable to read '%s' (%s)" % (cookie_path, exc))
class ExtORPortProtocol(network.GenericProtocol):
"""
Represents a connection to the Extended ORPort. It begins by
completing the Extended ORPort authentication, then sending some
Extended ORPort commands, and finally passing application-data
like it would do to an ORPort.
Specifically, after completing the Extended ORPort authentication
we send a USERADDR command with the address of our client, a
TRANSPORT command with the name of the pluggable transport, and a
DONE command to signal that we are done with the Extended ORPort
protocol. Then we wait for an OKAY command back from the server to
start sending application-data.
Attributes:
    state: The protocol state the connection is currently in.
ext_orport_addr: The address of the Extended ORPort.
peer_addr: The address of the client, in the other side of the
circuit, that connected to our downstream side.
cookie_file: Path to the Extended ORPort authentication cookie.
client_nonce: A random nonce used in the Extended ORPort
authentication protocol.
client_hash: Our hash which is used to verify our knowledge of the
authentication cookie in the Extended ORPort Authentication
protocol.
"""
def __init__(self, circuit, ext_orport_addr, cookie_file, peer_addr, transport_name):
self.state = STATE_WAIT_FOR_AUTH_TYPES
self.name = "ext_%s" % hex(id(self))
self.ext_orport_addr = ext_orport_addr
self.peer_addr = peer_addr
self.cookie_file = cookie_file
self.client_nonce = rand.random_bytes(AUTH_NONCE_LEN)
self.client_hash = None
self.transport_name = transport_name
network.GenericProtocol.__init__(self, circuit)
def connectionMade(self):
pass
def dataReceived(self, data_rcvd):
"""
We got some data, process it according to our current state.
"""
self.buffer.write(data_rcvd)
if self.state == STATE_WAIT_FOR_AUTH_TYPES:
try:
self._handle_auth_types()
except NeedMoreData:
return
            except UnsupportedAuthTypes as err:
log.warning("Extended ORPort Cookie Authentication failed: %s" % err)
self.close()
return
self.state = STATE_WAIT_FOR_SERVER_NONCE
if self.state == STATE_WAIT_FOR_SERVER_NONCE:
try:
self._handle_server_nonce_and_hash()
except NeedMoreData:
return
except (CouldNotReadCookie, RcvdInvalidAuth) as err:
log.warning("Extended ORPort Cookie Authentication failed: %s" % err)
self.close()
return
self.state = STATE_WAIT_FOR_AUTH_RESULTS
if self.state == STATE_WAIT_FOR_AUTH_RESULTS:
try:
self._handle_auth_results()
except NeedMoreData:
return
            except AuthFailed as err:
log.warning("Extended ORPort Cookie Authentication failed: %s" % err)
self.close()
return
# We've finished the Extended ORPort authentication
# protocol. Now send all the Extended ORPort commands we
# want to send.
try:
self._send_ext_orport_commands()
except CouldNotWriteExtCommand:
self.close()
return
self.state = STATE_WAIT_FOR_OKAY
if self.state == STATE_WAIT_FOR_OKAY:
try:
self._handle_okay()
except NeedMoreData:
return
except ExtORPortProtocolFailed as err:
log.warning("Extended ORPort Cookie Authentication failed: %s" % err)
self.close()
return
self.state = STATE_OPEN
if self.state == STATE_OPEN:
# We are done with the Extended ORPort protocol, we now
# treat the Extended ORPort as a normal ORPort.
if not self.circuit.circuitIsReady():
self.circuit.setUpstreamConnection(self)
self.circuit.dataReceived(self.buffer, self)
def _send_ext_orport_commands(self):
"""
Send all the Extended ORPort commands we want to send.
Throws CouldNotWriteExtCommand.
"""
# Send the actual IP address of our client to the Extended
# ORPort, then signal that we are done and that we want to
# start transferring application-data.
self._write_ext_orport_command(EXT_OR_CMD_TB_USERADDR, '%s:%s' % (self.peer_addr.host, self.peer_addr.port))
self._write_ext_orport_command(EXT_OR_CMD_TB_TRANSPORT, '%s' % self.transport_name)
self._write_ext_orport_command(EXT_OR_CMD_TB_DONE, '')
def _handle_auth_types(self):
"""
Read authentication types that the server supports, select
one, and send it to the server.
Throws NeedMoreData and UnsupportedAuthTypes.
"""
if len(self.buffer) < 2:
raise NeedMoreData('Not enough data')
data = self.buffer.peek()
if '\x00' not in data: # haven't received EndAuthTypes yet
log.debug("%s: Got some auth types data but no EndAuthTypes yet." % self.name)
raise NeedMoreData('Not EndAuthTypes.')
# Drain all data up to (and including) the EndAuthTypes.
log.debug("%s: About to drain %d bytes from %d." % \
(self.name, data.index('\x00')+1, len(self.buffer)))
data = self.buffer.read(data.index('\x00')+1)
if '\x01' not in data:
raise UnsupportedAuthTypes("%s: Could not find supported auth type (%s)." % (self.name, repr(data)))
# Send back chosen auth type.
self.write("\x01") # Static, since we only support auth type '1' atm.
# Since we are doing the safe-cookie protocol, now send our
# nonce.
# XXX This will need to be refactored out of this function in
# the future, when we have more than one auth types.
self.write(self.client_nonce)
def _handle_server_nonce_and_hash(self):
"""
Get the server's nonce and hash, validate them and send our own hash.
Throws NeedMoreData and RcvdInvalidAuth and CouldNotReadCookie.
"""
if len(self.buffer) < AUTH_HASH_LEN + AUTH_NONCE_LEN:
raise NeedMoreData('Need more data')
server_hash = self.buffer.read(AUTH_HASH_LEN)
server_nonce = self.buffer.read(AUTH_NONCE_LEN)
auth_cookie = _read_auth_cookie(self.cookie_file)
proper_server_hash = hmac_sha256.hmac_sha256_digest(auth_cookie,
AUTH_SERVER_TO_CLIENT_CONST + self.client_nonce + server_nonce)
log.debug("%s: client_nonce: %s\nserver_nonce: %s\nserver_hash: %s\nproper_server_hash: %s\n" % \
(self.name, repr(self.client_nonce), repr(server_nonce), repr(server_hash), repr(proper_server_hash)))
if proper_server_hash != server_hash:
raise RcvdInvalidAuth("%s: Invalid server hash. Authentication failed." % (self.name))
client_hash = hmac_sha256.hmac_sha256_digest(auth_cookie,
AUTH_CLIENT_TO_SERVER_CONST + self.client_nonce + server_nonce)
# Send our hash.
self.write(client_hash)
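The hash derivation above boils down to two HMAC-SHA256 computations keyed with the shared cookie; here is a standalone sketch, assuming hmac_sha256.hmac_sha256_digest is plain HMAC-SHA256 (as its name suggests):

```python
import hashlib
import hmac
import os

AUTH_SERVER_TO_CLIENT_CONST = b"ExtORPort authentication server-to-client hash"
AUTH_CLIENT_TO_SERVER_CONST = b"ExtORPort authentication client-to-server hash"

def hmac_sha256_digest(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()

cookie = os.urandom(32)        # the shared secret from the cookie file
client_nonce = os.urandom(32)
server_nonce = os.urandom(32)

# Each side proves knowledge of the cookie by hashing both nonces with
# a direction-specific constant.
server_hash = hmac_sha256_digest(
    cookie, AUTH_SERVER_TO_CLIENT_CONST + client_nonce + server_nonce)
client_hash = hmac_sha256_digest(
    cookie, AUTH_CLIENT_TO_SERVER_CONST + client_nonce + server_nonce)

assert len(server_hash) == 32 and len(client_hash) == 32
assert server_hash != client_hash  # the constants keep the directions distinct
```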
def _handle_auth_results(self):
"""
Get the authentication results. See if the authentication
succeeded or failed, and take appropriate actions.
Throws NeedMoreData and AuthFailed.
"""
if len(self.buffer) < 1:
raise NeedMoreData("Not enough data for body.")
result = self.buffer.read(1)
if result != '\x01':
raise AuthFailed("%s: Authentication failed (%s)!" % (self.name, repr(result)))
log.debug("%s: Authentication successful!" % self.name)
def _handle_okay(self):
"""
We've sent a DONE command to the Extended ORPort and we
now check if the Extended ORPort liked it or not.
Throws NeedMoreData and ExtORPortProtocolFailed.
"""
cmd, _ = self._get_ext_orport_command(self.buffer)
if cmd != EXT_OR_CMD_BT_OKAY:
raise ExtORPortProtocolFailed("%s: Unexpected command received (%d) after sending DONE." % (self.name, cmd))
def _get_ext_orport_command(self, buf):
"""
Reads an Extended ORPort command from 'buf'. Returns (command,
body) if it was well-formed, where 'command' is the Extended
ORPort command type, and 'body' is its body.
Throws NeedMoreData.
"""
if len(buf) < AUTH_PROTOCOL_HEADER_LEN:
raise NeedMoreData("Not enough data for header.")
header = buf.peek(AUTH_PROTOCOL_HEADER_LEN)
cmd = srlz.ntohs(header[:2])
bodylen = srlz.ntohs(header[2:4])
if (bodylen > len(buf) - AUTH_PROTOCOL_HEADER_LEN): # Not all here yet
raise NeedMoreData("Not enough data for body.")
# We have a whole command. Drain the header.
        buf.drain(AUTH_PROTOCOL_HEADER_LEN)
body = buf.read(bodylen)
return (cmd, body)
def _write_ext_orport_command(self, command, body):
"""
Serialize 'command' and 'body' to an Extended ORPort command
and send it to the Extended ORPort.
Throws CouldNotWriteExtCommand
"""
payload = ''
if len(body) > 65535: # XXX split instead of quitting?
log.warning("Obfsproxy was asked to send Extended ORPort command with more than "
"65535 bytes of body. This is not supported by the Extended ORPort "
"protocol. Please file a bug.")
raise CouldNotWriteExtCommand("Too large body.")
if command > 65535:
raise CouldNotWriteExtCommand("Not supported command type.")
payload += srlz.htons(command)
payload += srlz.htons(len(body))
payload += body # body might be absent (empty string)
self.write(payload)
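The wire format written above (and parsed by _get_ext_orport_command) is a 4-byte header — a 16-bit command and a 16-bit body length, both big-endian — followed by the body. A standalone round-trip:

```python
import struct

EXT_OR_CMD_TB_USERADDR = 0x0001

def pack_ext_orport_cmd(command, body):
    # 2-byte command + 2-byte body length (big-endian), then the body.
    return struct.pack('!HH', command, len(body)) + body

def unpack_ext_orport_cmd(buf):
    command, bodylen = struct.unpack('!HH', buf[:4])
    return command, buf[4:4 + bodylen]

wire = pack_ext_orport_cmd(EXT_OR_CMD_TB_USERADDR, b'198.51.100.7:40000')
cmd, body = unpack_ext_orport_cmd(wire)
assert cmd == EXT_OR_CMD_TB_USERADDR
assert body == b'198.51.100.7:40000'
assert len(wire) == 4 + len(body)  # fixed header plus payload
```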
class ExtORPortClientFactory(network.StaticDestinationClientFactory):
def __init__(self, circuit, cookie_file, peer_addr, transport_name):
self.circuit = circuit
self.peer_addr = peer_addr
self.cookie_file = cookie_file
self.transport_name = transport_name
self.name = "fact_ext_c_%s" % hex(id(self))
def buildProtocol(self, addr):
return ExtORPortProtocol(self.circuit, addr, self.cookie_file, self.peer_addr, self.transport_name)
class ExtORPortServerFactory(network.StaticDestinationClientFactory):
def __init__(self, ext_or_addrport, ext_or_cookie_file, transport_name, transport_class):
self.ext_or_host = ext_or_addrport[0]
self.ext_or_port = ext_or_addrport[1]
self.cookie_file = ext_or_cookie_file
self.transport_name = transport_name
self.transport_class = transport_class
self.name = "fact_ext_s_%s" % hex(id(self))
def startFactory(self):
log.debug("%s: Starting up Extended ORPort server factory." % self.name)
def buildProtocol(self, addr):
log.debug("%s: New connection from %s:%d." % (self.name, log.safe_addr_str(addr.host), addr.port))
circuit = network.Circuit(self.transport_class())
# XXX instantiates a new factory for each client
clientFactory = ExtORPortClientFactory(circuit, self.cookie_file, addr, self.transport_name)
reactor.connectTCP(self.ext_or_host, self.ext_or_port, clientFactory)
return network.StaticDestinationProtocol(circuit, 'server', addr)
# XXX Exceptions need more thought and work. Most of these can be generalized.
class RcvdInvalidAuth(Exception): pass
class AuthFailed(Exception): pass
class UnsupportedAuthTypes(Exception): pass
class ExtORPortProtocolFailed(Exception): pass
class CouldNotWriteExtCommand(Exception): pass
class CouldNotReadCookie(Exception): pass
class NeedMoreData(Exception): pass
# obfsproxy-0.2.3/obfsproxy/network/launch_transport.py
import obfsproxy.network.network as network
import obfsproxy.transports.transports as transports
import obfsproxy.network.socks as socks
import obfsproxy.network.extended_orport as extended_orport
from twisted.internet import reactor
def launch_transport_listener(transport, bindaddr, role, remote_addrport, ext_or_cookie_file=None):
"""
Launch a listener for 'transport' in role 'role' (socks/client/server/ext_server).
If 'bindaddr' is set, then listen on bindaddr. Otherwise, listen
on an ephemeral port on localhost.
'remote_addrport' is the TCP/IP address of the other end of the
circuit. It's not used if we are in 'socks' role.
'ext_or_cookie_file' is the filesystem path where the Extended
ORPort Authentication cookie is stored. It's only used in
'ext_server' mode.
Return a tuple (addr, port) representing where we managed to bind.
Throws obfsproxy.transports.transports.TransportNotFound if the
transport could not be found.
Throws twisted.internet.error.CannotListenError if the listener
could not be set up.
"""
transport_class = transports.get_transport_class(transport, role)
listen_host = bindaddr[0] if bindaddr else 'localhost'
listen_port = int(bindaddr[1]) if bindaddr else 0
if role == 'socks':
factory = socks.SOCKSv4Factory(transport_class)
elif role == 'ext_server':
assert(remote_addrport and ext_or_cookie_file)
factory = extended_orport.ExtORPortServerFactory(remote_addrport, ext_or_cookie_file, transport, transport_class)
else:
assert(remote_addrport)
factory = network.StaticDestinationServerFactory(remote_addrport, role, transport_class)
addrport = reactor.listenTCP(listen_port, factory, interface=listen_host)
return (addrport.getHost().host, addrport.getHost().port)
# obfsproxy-0.2.3/obfsproxy/network/network.py
from twisted.internet import reactor
from twisted.internet.protocol import Protocol, Factory
import obfsproxy.common.log as logging
import obfsproxy.common.heartbeat as heartbeat
import obfsproxy.network.buffer as obfs_buf
import obfsproxy.transports.base as base
log = logging.get_obfslogger()
"""
Networking subsystem:
A "Connection" is a bidirectional communications channel, usually
backed by a network socket. For example, the communication channel
between tor and obfsproxy is a 'connection'. In the code, it's
represented by a Twisted's twisted.internet.protocol.Protocol.
A 'Circuit' is a pair of connections, referred to as the 'upstream'
and 'downstream' connections. The upstream connection of a circuit
communicates in cleartext with the higher-level program that wishes to
make use of our obfuscation service. The downstream connection
communicates in an obfuscated fashion with the remote peer that the
higher-level client wishes to contact. In the code, it's represented
by the custom Circuit class.
The diagram below might help demonstrate the relationship between
connections and circuits:
                               Downstream

       'Circuit C'   'Connection CD'    'Connection SD'   'Circuit S'
                     +-----------+          +-----------+
   Upstream     +----|Obfsproxy c|----------|Obfsproxy s|----+     Upstream
                |    +-----------+     ^    +-----------+    |
    'Connection CU'                    |               'Connection SU'
     +------------+                Sent over           +--------------+
     | Tor Client |                 the net            |  Tor Bridge  |
     +------------+                                    +--------------+
In the above diagram, "Obfsproxy c" is the client-side obfsproxy, and
"Obfsproxy s" is the server-side obfsproxy. "Connection CU" is the
Client's Upstream connection, the communication channel between tor
and obfsproxy. "Connection CD" is the Client's Downstream connection,
the communication channel between obfsproxy and the remote peer. These
two connections form the client's circuit "Circuit C".
A 'listener' is a listening socket bound to a particular obfuscation
protocol, represented using Twisted's t.i.p.Factory. Connecting to a
listener creates one connection of a circuit, and causes this program
to initiate the other connection (possibly after receiving in-band
instructions about where to connect to). A listener is said to be a
'client' listener if connecting to it creates the upstream connection,
and a 'server' listener if connecting to it creates the downstream
connection.
There are two kinds of client listeners: a 'simple' client listener
always connects to the same remote peer every time it needs to
initiate a downstream connection; a 'socks' client listener can be
told to connect to an arbitrary remote peer using the SOCKS protocol.
"""
class Circuit(Protocol):
"""
A Circuit holds a pair of connections. The upstream connection and
the downstream. The circuit proxies data from one connection to
the other.
Attributes:
transport: the pluggable transport we should use to
obfuscate traffic on this circuit.
downstream: the downstream connection
upstream: the upstream connection
"""
def __init__(self, transport):
self.transport = transport # takes a transport
self.downstream = None # takes a connection
self.upstream = None # takes a connection
self.closed = False # True if the circuit is closed.
self.name = "circ_%s" % hex(id(self))
def setDownstreamConnection(self, conn):
"""
Set the downstream connection of a circuit.
"""
log.debug("%s: Setting downstream connection (%s)." % (self.name, conn.name))
assert(not self.downstream)
self.downstream = conn
if self.circuitIsReady():
self.circuitCompleted(self.upstream)
def setUpstreamConnection(self, conn):
"""
Set the upstream connection of a circuit.
"""
log.debug("%s: Setting upstream connection (%s)." % (self.name, conn.name))
assert(not self.upstream)
self.upstream = conn
if self.circuitIsReady():
self.circuitCompleted(self.downstream)
def circuitIsReady(self):
"""
Return True if the circuit is completed.
"""
return self.downstream and self.upstream
def circuitCompleted(self, conn_to_flush):
"""
Circuit was just completed; that is, its endpoints are now
connected. Do all the things we have to do now.
"""
log.debug("%s: Circuit completed." % self.name)
# Call the transport-specific handshake method since this is a
# good time to perform a handshake.
self.transport.handshake(self)
# Do a dummy dataReceived on the initiating connection in case
# it has any buffered data that must be flushed to the network.
#
# (We use callLater because we want to return back to the
# event loop so that our handshake() messages get sent to the
# network immediately.)
reactor.callLater(0.01, conn_to_flush.dataReceived, '')
def dataReceived(self, data, conn):
"""
We received 'data' on 'conn'. Pass the data to our transport,
and then proxy it to the other side. # XXX 'data' is a buffer.
Requires both downstream and upstream connections to be set.
"""
assert(self.downstream and self.upstream)
assert((conn is self.downstream) or (conn is self.upstream))
try:
if conn is self.downstream:
log.debug("%s: downstream: Received %d bytes." % (self.name, len(data)))
self.transport.receivedDownstream(data, self)
else:
log.debug("%s: upstream: Received %d bytes." % (self.name, len(data)))
self.transport.receivedUpstream(data, self)
        except base.PluggableTransportError as err: # Our transport didn't like that data.
log.info("%s: %s: Closing circuit." % (self.name, str(err)))
self.close()
def close(self, reason=None, side=None):
"""
Tear down the circuit. The reason for the torn down circuit is given in
'reason' and 'side' tells us where it happened: either upstream or
downstream.
"""
if self.closed:
return # NOP if already closed
log.debug("%s: Tearing down circuit." % self.name)
self.closed = True
if self.downstream:
self.downstream.close()
if self.upstream:
self.upstream.close()
self.transport.circuitDestroyed(self, reason, side)
class GenericProtocol(Protocol, object):
"""
Generic obfsproxy connection. Contains useful methods and attributes.
Attributes:
circuit: The circuit object this connection belongs to.
buffer: Buffer that holds data that can't be proxied right
away. This can happen because the circuit is not yet
complete, or because the pluggable transport needs more
data before deciding what to do.
"""
def __init__(self, circuit):
self.circuit = circuit
self.buffer = obfs_buf.Buffer()
self.closed = False # True if connection is closed.
def connectionLost(self, reason):
log.debug("%s: Connection was lost (%s)." % (self.name, reason.getErrorMessage()))
self.close()
def connectionFailed(self, reason):
log.debug("%s: Connection failed to connect (%s)." % (self.name, reason.getErrorMessage()))
self.close()
def write(self, buf):
"""
Write 'buf' to the underlying transport.
"""
log.debug("%s: Writing %d bytes." % (self.name, len(buf)))
self.transport.write(buf)
def close(self, also_close_circuit=True):
"""
Close the connection.
"""
if self.closed:
return # NOP if already closed
log.debug("%s: Closing connection." % self.name)
self.closed = True
self.transport.loseConnection()
if also_close_circuit:
self.circuit.close()
class StaticDestinationProtocol(GenericProtocol):
"""
Represents a connection to a static destination (as opposed to a
SOCKS connection).
Attributes:
mode: 'server' or 'client'
circuit: The circuit this connection belongs to.
buffer: Buffer that holds data that can't be proxied right
away. This can happen because the circuit is not yet
complete, or because the pluggable transport needs more
data before deciding what to do.
"""
def __init__(self, circuit, mode, peer_addr):
self.mode = mode
self.peer_addr = peer_addr
self.name = "conn_%s" % hex(id(self))
GenericProtocol.__init__(self, circuit)
def connectionMade(self):
"""
Callback for when a connection is successfully established.
Find the connection's direction in the circuit, and register
it in our circuit.
"""
# Find the connection's direction and register it in the circuit.
if self.mode == 'client' and not self.circuit.upstream:
log.debug("%s: connectionMade (client): " \
"Setting it as upstream on our circuit." % self.name)
self.circuit.setUpstreamConnection(self)
elif self.mode == 'client':
log.debug("%s: connectionMade (client): " \
"Setting it as downstream on our circuit." % self.name)
self.circuit.setDownstreamConnection(self)
elif self.mode == 'server' and not self.circuit.downstream:
log.debug("%s: connectionMade (server): " \
"Setting it as downstream on our circuit." % self.name)
# Gather some statistics for our heartbeat.
heartbeat.heartbeat.register_connection(self.peer_addr.host)
self.circuit.setDownstreamConnection(self)
elif self.mode == 'server':
log.debug("%s: connectionMade (server): " \
"Setting it as upstream on our circuit." % self.name)
self.circuit.setUpstreamConnection(self)
def dataReceived(self, data):
"""
We received some data from the network. See if we have a
        complete circuit, and pass the data to it so that it gets proxied.
XXX: Can also be called with empty 'data' because of
Circuit.setDownstreamConnection(). Document or split function.
"""
if (not self.buffer) and (not data):
log.debug("%s: dataReceived called without a reason.", self.name)
return
# Add the received data to the buffer.
self.buffer.write(data)
# Circuit is not fully connected yet, nothing to do here.
if not self.circuit.circuitIsReady():
log.debug("%s: Incomplete circuit; cached %d bytes." % (self.name, len(data)))
return
self.circuit.dataReceived(self.buffer, self)
class StaticDestinationClientFactory(Factory):
"""
Created when our listener receives a client connection. Makes the
connection that connects to the other end of the circuit.
"""
def __init__(self, circuit, mode):
self.circuit = circuit
self.mode = mode
self.name = "fact_c_%s" % hex(id(self))
def buildProtocol(self, addr):
return StaticDestinationProtocol(self.circuit, self.mode, addr)
def startedConnecting(self, connector):
log.debug("%s: Client factory started connecting." % self.name)
def clientConnectionLost(self, connector, reason):
pass # connectionLost event is handled on the Protocol.
def clientConnectionFailed(self, connector, reason):
log.debug("%s: Connection failed (%s)." % (self.name, reason.getErrorMessage()))
self.circuit.close()
class StaticDestinationServerFactory(Factory):
"""
Represents a listener. Upon receiving a connection, it creates a
circuit and tries to establish the other side of the circuit. It
then listens for data to obfuscate and proxy.
Attributes:
remote_host: The IP/DNS information of the host on the other side
of the circuit.
    remote_port: The TCP port of the host on the other side of the circuit.
mode: 'server' or 'client'
transport: the pluggable transport we should use to
obfuscate traffic on this connection.
"""
def __init__(self, remote_addrport, mode, transport_class):
self.remote_host = remote_addrport[0]
self.remote_port = int(remote_addrport[1])
self.mode = mode
self.transport_class = transport_class
self.name = "fact_s_%s" % hex(id(self))
assert(self.mode == 'client' or self.mode == 'server')
def startFactory(self):
log.debug("%s: Starting up static destination server factory." % self.name)
def buildProtocol(self, addr):
log.debug("%s: New connection from %s:%d." % (self.name, log.safe_addr_str(addr.host), addr.port))
circuit = Circuit(self.transport_class())
# XXX instantiates a new factory for each client
clientFactory = StaticDestinationClientFactory(circuit, self.mode)
reactor.connectTCP(self.remote_host, self.remote_port, clientFactory)
return StaticDestinationProtocol(circuit, self.mode, addr)
# obfsproxy-0.2.3/obfsproxy/network/socks.py
import csv
from twisted.protocols import socks
from twisted.internet.protocol import Factory
import obfsproxy.common.log as logging
import obfsproxy.network.network as network
import obfsproxy.transports.base as base
log = logging.get_obfslogger()
def split_socks_args(args_str):
"""
Given a string containing the SOCKS arguments (delimited by
semicolons, and with semicolons and backslashes escaped), parse it
and return a list of the unescaped SOCKS arguments.
"""
    return next(csv.reader([args_str], delimiter=';', escapechar='\\'))
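For example, a per-connection SOCKS argument string uses ';' as the delimiter, with literal semicolons and backslashes escaped by a backslash (the argument names below are made up for illustration):

```python
import csv

def split_socks_args(args_str):
    # One-row CSV parse: ';' delimits arguments, '\' escapes ';' and '\'.
    return next(csv.reader([args_str], delimiter=';', escapechar='\\'))

assert split_socks_args('shared-secret=rahasia;rate=1000') == \
    ['shared-secret=rahasia', 'rate=1000']
assert split_socks_args('key=a\\;b') == ['key=a;b']  # escaped semicolon
```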
class MySOCKSv4Outgoing(socks.SOCKSv4Outgoing, network.GenericProtocol):
"""
Represents a downstream connection from the SOCKS server to the
destination.
It monkey-patches socks.SOCKSv4Outgoing, because we need to pass
our data to the pluggable transport before proxying them
(Twisted's socks module did not support that).
Attributes:
circuit: The circuit this connection belongs to.
buffer: Buffer that holds data that can't be proxied right
away. This can happen because the circuit is not yet
complete, or because the pluggable transport needs more
data before deciding what to do.
"""
def __init__(self, socksProtocol):
"""
Constructor.
'socksProtocol' is a 'SOCKSv4Protocol' object.
"""
self.name = "socks_down_%s" % hex(id(self))
self.socksProtocol = socksProtocol
network.GenericProtocol.__init__(self, socksProtocol.circuit)
super(MySOCKSv4Outgoing, self).__init__(socksProtocol)
def dataReceived(self, data):
log.debug("%s: Received %d bytes:\n%s" \
% (self.name, len(data), str(data)))
# If the circuit was not set up, set it up now.
if not self.circuit.circuitIsReady():
self.socksProtocol.set_up_circuit()
self.buffer.write(data)
self.circuit.dataReceived(self.buffer, self)
def close(self): # XXX code duplication
"""
Close the connection.
"""
if self.closed:
return # NOP if already closed
log.debug("%s: Closing connection." % self.name)
self.transport.loseConnection()
self.closed = True
def connectionLost(self, reason):
network.GenericProtocol.connectionLost(self, reason)
# Monkey patches socks.SOCKSv4Outgoing with our own class.
socks.SOCKSv4Outgoing = MySOCKSv4Outgoing
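The patching trick above can be demonstrated in isolation: rebinding a name inside another module makes every later lookup of that name (e.g. by a factory building protocols) produce the subclass instead. A minimal sketch with a stand-in module, not the real Twisted classes:

```python
import types

class Original(object):
    def greet(self):
        return "original"

class Patched(Original):
    def greet(self):
        # call up to the original behaviour, then extend it
        return "patched " + Original.greet(self)

# Stand-in for the 'socks' module being patched.
fake_socks = types.ModuleType("fake_socks")
fake_socks.SOCKSv4Outgoing = Original

fake_socks.SOCKSv4Outgoing = Patched  # the monkey-patch

# Code that looks the class up through the module now gets Patched.
print(fake_socks.SOCKSv4Outgoing().greet())  # prints "patched original"
```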
class SOCKSv4Protocol(socks.SOCKSv4, network.GenericProtocol):
"""
Represents an upstream connection from a SOCKS client to our SOCKS
server.
It overrides socks.SOCKSv4 because py-obfsproxy's connections need
to have a circuit and obfuscate traffic before proxying it.
"""
def __init__(self, circuit):
self.name = "socks_up_%s" % hex(id(self))
network.GenericProtocol.__init__(self, circuit)
socks.SOCKSv4.__init__(self)
def dataReceived(self, data):
"""
Received some 'data'. It might be SOCKS handshake data, or
actual upstream traffic. Figure out which it is, and either
complete the SOCKS handshake or proxy the traffic.
"""
# SOCKS handshake not completed yet: let the overridden socks
# module complete the handshake.
if not self.otherConn:
log.debug("%s: Received SOCKS handshake data." % self.name)
return socks.SOCKSv4.dataReceived(self, data)
log.debug("%s: Received %d bytes:\n%s" \
% (self.name, len(data), str(data)))
self.buffer.write(data)
"""
If we came here with an incomplete circuit, it means that we
finished the SOCKS handshake and connected downstream. Set up
our circuit and start proxying traffic.
"""
if not self.circuit.circuitIsReady():
self.set_up_circuit()
self.circuit.dataReceived(self.buffer, self)
def set_up_circuit(self):
"""
Set the upstream/downstream SOCKS connections on the circuit.
"""
assert(self.otherConn)
self.circuit.setDownstreamConnection(self.otherConn)
self.circuit.setUpstreamConnection(self)
def authorize(self, code, server, port, user):
"""
(Overridden)
Accept or reject a SOCKS client that wants to connect to
'server':'port', with the SOCKS4 username 'user'.
"""
if not user: # No SOCKS arguments were specified.
return True
# If the client sent us SOCKS arguments, we must parse them
# and send them to the appropriate transport.
log.debug("Got '%s' as SOCKS arguments." % user)
try:
socks_args = split_socks_args(user)
except csv.Error, err:
log.warning("split_socks_args failed (%s)" % str(err))
return False
try:
self.circuit.transport.handle_socks_args(socks_args)
except base.SOCKSArgsError:
return False # Transports should log the issue themselves
return True
def connectionLost(self, reason):
network.GenericProtocol.connectionLost(self, reason)
class SOCKSv4Factory(Factory):
"""
A SOCKSv4 factory.
"""
def __init__(self, transport_class):
# XXX self.logging = log
self.transport_class = transport_class
self.name = "socks_fact_%s" % hex(id(self))
def startFactory(self):
log.debug("%s: Starting up SOCKS server factory." % self.name)
def buildProtocol(self, addr):
log.debug("%s: New connection." % self.name)
circuit = network.Circuit(self.transport_class())
return SOCKSv4Protocol(circuit)
obfsproxy-0.2.3/obfsproxy/pyobfsproxy.py

#!/usr/bin/python
# -*- coding: utf-8 -*-
"""
This is the command line interface to py-obfsproxy.
It is designed to be a drop-in replacement for the obfsproxy executable.
Currently, not all of the obfsproxy command line options have been implemented.
"""
import sys
import obfsproxy.network.launch_transport as launch_transport
import obfsproxy.transports.transports as transports
import obfsproxy.common.log as logging
import obfsproxy.common.argparser as argparser
import obfsproxy.common.heartbeat as heartbeat
import obfsproxy.managed.server as managed_server
import obfsproxy.managed.client as managed_client
from obfsproxy import __version__
from pyptlib.config import checkClientMode
from twisted.internet import task # for LoopingCall
log = logging.get_obfslogger()
def set_up_cli_parsing():
"""Set up our CLI parser. Register our arguments and options and
query individual transports to register their own external-mode
arguments."""
parser = argparser.MyArgumentParser(
description='py-obfsproxy: A pluggable transports proxy written in Python')
subparsers = parser.add_subparsers(title='supported transports', dest='name')
parser.add_argument('-v', '--version', action='version', version=__version__)
parser.add_argument('--log-file', help='set logfile')
parser.add_argument('--log-min-severity',
choices=['error', 'warning', 'info', 'debug'],
help='set minimum logging severity (default: %(default)s)')
parser.add_argument('--no-log', action='store_true', default=False,
help='disable logging')
parser.add_argument('--no-safe-logging', action='store_true',
default=False,
help='disable safe (scrubbed address) logging')
# Managed mode is a subparser for now because there are no
# optional subparsers: bugs.python.org/issue9253
subparsers.add_parser("managed", help="managed mode")
# Add a subparser for each transport. Also add a
# transport-specific function to later validate the parsed
# arguments.
for transport, transport_class in transports.transports.items():
subparser = subparsers.add_parser(transport, help='%s help' % transport)
transport_class['base'].register_external_mode_cli(subparser)
subparser.set_defaults(validation_function=transport_class['base'].validate_external_mode_cli)
return parser
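The subparser layout built above reduces to a few lines of plain argparse. `--shared-secret` here is just an illustrative per-transport option, not the real obfs2 registration code:

```python
import argparse

parser = argparse.ArgumentParser(description='subparser demo')
subparsers = parser.add_subparsers(title='supported transports', dest='name')

# One subcommand per mode/transport, mirroring set_up_cli_parsing().
subparsers.add_parser('managed', help='managed mode')
obfs2 = subparsers.add_parser('obfs2', help='obfs2 help')
obfs2.add_argument('--shared-secret')  # hypothetical transport option

args = parser.parse_args(['obfs2', '--shared-secret', 's3kr3t'])
print(args.name, args.shared_secret)  # prints "obfs2 s3kr3t"
```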
def do_managed_mode():
"""This function starts obfsproxy's managed-mode functionality."""
if checkClientMode():
log.info('Entering client managed-mode.')
managed_client.do_managed_client()
else:
log.info('Entering server managed-mode.')
managed_server.do_managed_server()
def do_external_mode(args):
"""This function starts obfsproxy's external-mode functionality."""
assert(args)
assert(args.name)
assert(args.name in transports.transports)
from twisted.internet import reactor
launch_transport.launch_transport_listener(args.name, args.listen_addr, args.mode, args.dest, args.ext_cookie_file)
log.info("Launched '%s' listener at '%s:%s' for transport '%s'." % \
(args.mode, log.safe_addr_str(args.listen_addr[0]), args.listen_addr[1], args.name))
reactor.run()
def consider_cli_args(args):
"""Check out parsed CLI arguments and take the appropriate actions."""
if args.log_file:
log.set_log_file(args.log_file)
if args.log_min_severity:
log.set_log_severity(args.log_min_severity)
if args.no_log:
log.disable_logs()
if args.no_safe_logging:
log.set_no_safe_logging()
# validate:
if (args.name == 'managed') and (not args.log_file) and (args.log_min_severity):
log.error("obfsproxy in managed-proxy mode can only log to a file!")
sys.exit(1)
elif (args.name == 'managed') and (not args.log_file):
# managed proxies without a logfile must not log at all.
log.disable_logs()
def pyobfsproxy():
"""Actual pyobfsproxy entry-point."""
parser = set_up_cli_parsing()
args = parser.parse_args()
consider_cli_args(args)
log.warning('Obfsproxy (version: %s) starting up.' % (__version__))
log.debug('argv: ' + str(sys.argv))
log.debug('args: ' + str(args))
# Fire up our heartbeat.
l = task.LoopingCall(heartbeat.heartbeat.talk)
l.start(3600.0, now=False) # do heartbeat every hour
# Initiate obfsproxy.
if (args.name == 'managed'):
do_managed_mode()
else:
# Pass parsed arguments to the appropriate transports so that
# they can initialize and setup themselves. Exit if the
# provided arguments were corrupted.
# XXX use exceptions
if not args.validation_function(args):
sys.exit(1)
do_external_mode(args)
def run():
"""Fake entry-point so that we can log unhandled exceptions."""
# Pyobfsproxy's CLI uses "managed" whereas C-obfsproxy uses
# "--managed" to configure managed-mode. Python obfsproxy can't
# recognize "--managed" because it uses argparse subparsers and
# http://bugs.python.org/issue9253 is not yet solved. This is a crazy
# hack to maintain CLI compatibility between the two versions: we
# simply replace "--managed" with "managed" in place in the argument
# list.
if len(sys.argv) > 1 and '--managed' in sys.argv:
for n, arg in enumerate(sys.argv):
if arg == '--managed':
sys.argv[n] = 'managed'
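The substitution can be factored into a pure function to show exactly what the hack does to the argument vector:

```python
def rewrite_managed(argv):
    # Map C-obfsproxy's "--managed" switch to the "managed"
    # subcommand that the argparse subparsers expect.
    return ['managed' if arg == '--managed' else arg for arg in argv]

print(rewrite_managed(['obfsproxy', '--managed']))
# -> ['obfsproxy', 'managed']
print(rewrite_managed(['obfsproxy', 'obfs2']))
# -> ['obfsproxy', 'obfs2']  (unrelated argv is left alone)
```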
try:
pyobfsproxy()
except Exception, e:
log.exception(e)
raise
if __name__ == '__main__':
run()
obfsproxy-0.2.3/obfsproxy/test/__init__.py
obfsproxy-0.2.3/obfsproxy/test/int_tests/obfsproxy_tester.py

#!/usr/bin/python
from twisted.internet import reactor
from twisted.internet import protocol
import os
import logging
import subprocess
import threading
obfsproxy_env = {}
obfsproxy_env.update(os.environ)
class ObfsproxyProcess(protocol.ProcessProtocol):
"""
Represents the behavior of an obfsproxy process.
"""
def __init__(self):
self.stdout_data = ''
self.stderr_data = ''
self.name = 'obfs_%s' % hex(id(self))
def connectionMade(self):
pass
def outReceived(self, data):
"""Got data in stdout."""
logging.debug('%s: outReceived got %d bytes of data.' % (self.name, len(data)))
self.stdout_data += data
def errReceived(self, data):
"""Got data in stderr."""
logging.debug('%s: errReceived got %d bytes of data.' % (self.name, len(data)))
self.stderr_data += data
def inConnectionLost(self):
"""stdin closed."""
logging.debug('%s: stdin closed' % self.name)
def outConnectionLost(self):
"""stdout closed."""
logging.debug('%s: outConnectionLost, stdout closed!' % self.name)
# XXX Fail the test if stdout is not clean.
if self.stdout_data != '':
logging.warning('%s: stdout is not clean: %s' % (self.name, self.stdout_data))
def errConnectionLost(self):
"""stderr closed."""
logging.debug('%s: errConnectionLost, stderr closed!' % self.name)
def processExited(self, reason):
"""Process exited."""
logging.debug('%s: processExited, status %s' % (self.name, str(reason.value.exitCode)))
def processEnded(self, reason):
"""Process ended."""
logging.debug('%s: processEnded, status %s' % (self.name, str(reason.value.exitCode)))
def kill(self):
"""Kill the process."""
logging.debug('%s: killing' % self.name)
self.transport.signalProcess('KILL')
self.transport.loseConnection()
class Obfsproxy(object):
def __init__(self, *args, **kwargs):
# Fix up our argv
argv = []
argv.extend(('python', '../../../../bin/obfsproxy', '--log-min-severity=warning'))
# Extend hardcoded argv with user-specified options.
if len(args) == 1 and (isinstance(args[0], list) or
isinstance(args[0], tuple)):
argv.extend(args[0])
else:
argv.extend(args)
# Launch obfsproxy
self.obfs_process = ObfsproxyProcess()
reactor.spawnProcess(self.obfs_process, 'python', args=argv,
env=obfsproxy_env)
logging.debug('spawnProcess with %s' % str(argv))
def kill(self):
"""Kill the obfsproxy process."""
self.obfs_process.kill()
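The argv normalization in `Obfsproxy.__init__` (accept either a single list/tuple or loose string arguments) can be isolated as a pure function; the hardcoded prefix mirrors the constructor above:

```python
def build_argv(*args):
    argv = ['python', '../../../../bin/obfsproxy', '--log-min-severity=warning']
    # A single list/tuple argument is spliced in; otherwise the
    # positional arguments themselves are the extra options.
    if len(args) == 1 and isinstance(args[0], (list, tuple)):
        argv.extend(args[0])
    else:
        argv.extend(args)
    return argv

print(build_argv('obfs2', 'client'))
print(build_argv(['obfs2', 'client']))  # same result as the call above
```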
obfsproxy-0.2.3/obfsproxy/test/int_tests/pits.py

#!/usr/bin/python
import sys
import logging
import time
import socket
import collections
from twisted.internet import task, reactor, defer
import pits_network as network
import pits_connections as conns
import obfsproxy_tester
import pits_transcript as transcript
def usage():
print "PITS usage:\n\tpits.py test_case.pits"
CLIENT_OBFSPORT = 42000 # XXX maybe randomize?
SERVER_OBFSPORT = 62000
class PITS(object):
"""
The PITS system. It executes the commands written in test case
files.
Attributes:
'transcript', is the PITS transcript. It's being written while
the tests are running.
'inbound_listener', the inbound PITS listener.
'client_obfs' and 'server_obfs', the client and server obfsproxy processes.
"""
def __init__(self):
# Set up the transcript
self.transcript = transcript.Transcript()
# Set up connection handler
self.conn_handler = conns.PITSConnectionHandler()
# Set up our fake network:
# -> CLIENT_OBFSPORT -> SERVER_OBFSPORT ->
# Set up PITS inbound listener
self.inbound_factory = network.PITSInboundFactory(self.transcript, self.conn_handler)
self.inbound_listener = reactor.listenTCP(0, self.inbound_factory, interface='localhost')
self.client_obfs = None
self.server_obfs = None
logging.debug("PITS initialized.")
def get_pits_inbound_address(self):
"""Return the address of the PITS inbound listener."""
return self.inbound_listener.getHost()
def launch_obfsproxies(self, obfs_client_args, obfs_server_args):
"""Launch client and server obfsproxies with the given cli arguments."""
# Set up client Obfsproxy.
self.client_obfs = obfsproxy_tester.Obfsproxy(*obfs_client_args)
# Set up server Obfsproxy.
self.server_obfs = obfsproxy_tester.Obfsproxy(*obfs_server_args)
time.sleep(1)
def pause(self, tokens):
"""Read a parse command."""
if len(tokens) > 1:
raise InvalidCommand("Too big pause line.")
if not tokens[0].isdigit():
raise InvalidCommand("Invalid pause argument (%s)." % tokens[0])
time.sleep(int(tokens[0]))
def init_conn(self, tokens):
"""Read a connection establishment command."""
if len(tokens) > 1:
raise InvalidCommand("Too big init connection line.")
# Before firing up the connection, register its identifier to
# the PITS subsystem.
self.inbound_factory.register_new_identifier(tokens[0])
# Create outbound socket. tokens[0] is its identifier.
factory = network.PITSOutboundFactory(tokens[0], self.transcript, self.conn_handler)
reactor.connectTCP('127.0.0.1', CLIENT_OBFSPORT, factory)
def transmit(self, tokens, direction):
"""Read a transmit command."""
if len(tokens) < 2:
raise InvalidCommand("Too small transmit line.")
identifier = tokens[0]
data = " ".join(tokens[1:]) # concatenate rest of the line
data = data.decode('string_escape') # unescape string
try:
self.conn_handler.send_data_through_conn(identifier, direction, data)
except conns.NoSuchConn, err:
logging.warning("Wanted to send some data, but I can't find '%s' connection with id '%s'." % \
(direction, identifier))
# XXX note it to transcript
logging.debug("Sending '%s' from '%s' socket '%s'." % (data, direction, identifier))
def eof(self, tokens, direction):
"""Read a transmit EOF command."""
if len(tokens) > 1:
raise InvalidCommand("Too big EOF line.")
identifier = tokens[0]
try:
self.conn_handler.close_conn(identifier, direction)
except conns.NoSuchConn, err:
logging.warning("Wanted to EOF, but I can't find '%s' connection with id '%s'." % \
(direction, identifier))
# XXX note it to transcript
logging.debug("Sending EOF from '%s' socket '%s'." % (identifier, direction))
def do_command(self, line):
"""
Parse command from 'line'.
Throws InvalidCommand.
"""
logging.debug("Parsing %s" % repr(line))
line = line.rstrip()
if line == '': # Ignore blank lines
return
tokens = line.split(" ")
if len(tokens) < 2:
raise InvalidCommand("Too few tokens: '%s'." % line)
if tokens[0] == 'P':
self.pause(tokens[1:])
elif tokens[0] == '!':
self.init_conn(tokens[1:])
elif tokens[0] == '>':
self.transmit(tokens[1:], 'outbound')
elif tokens[0] == '<':
self.transmit(tokens[1:], 'inbound')
elif tokens[0] == '*':
self.eof(tokens[1:], 'inbound')
elif tokens[0] == '#': # comment
pass
else:
logging.warning("Unknown token in line: '%s'" % line)
def cleanup(self):
logging.debug("Cleanup.")
self.inbound_listener.stopListening()
self.client_obfs.kill()
self.server_obfs.kill()
class TestReader(object):
"""
Read and execute a test case from a file.
Attributes:
'script', is the text of the test case file.
'test_case_line', is a generator that yields the next line of the test case file.
'pits', is the PITS system responsible for this test case.
'assertTrue', is a function pointer to a unittest.assertTrue
function that should be used to validate this test.
"""
def __init__(self, test_assertTrue_func, fname):
self.assertTrue = test_assertTrue_func
self.script = open(fname).read()
self.test_case_line = self.test_case_line_gen()
self.pits = PITS()
def test_case_line_gen(self):
"""Yield the next line of the test case file."""
for line in self.script.split('\n'):
yield line
def do_test(self, obfs_client_args, obfs_server_args):
"""
Start a test case with obfsproxies with the given arguments.
"""
# Launch the obfsproxies
self.pits.launch_obfsproxies(obfs_client_args, obfs_server_args)
# We call _do_command() till we read the whole test case
# file. After we read the file, we call
# transcript.test_was_success() to verify the test run.
d = task.deferLater(reactor, 0.2, self._do_command)
return d
def _do_command(self):
"""
Read and execute another command from the test case file.
If the test case file is over, verify that the test was successful.
"""
try:
line = self.test_case_line.next()
except StopIteration: # Test case is over.
return self.assertTrue(self.pits.transcript.test_was_success(self.script))
time.sleep(0.3)
self.pits.do_command(line)
# 0.4 seconds should be enough time for the network operations to complete,
# so that we can move to the next command.
d = task.deferLater(reactor, 0.4, self._do_command)
return d
def cleanup(self):
self.pits.cleanup()
class InvalidCommand(Exception): pass
obfsproxy-0.2.3/obfsproxy/test/int_tests/pits_connections.py

import logging
"""
Code that keeps track of the connections of PITS.
"""
def remove_key(d, key):
"""
Return a dictionary identical to 'd' but with 'key' (and its
value) removed.
"""
r = dict(d)
del r[key]
return r
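A quick self-contained check of `remove_key`'s contract (the input dictionary is left untouched):

```python
def remove_key(d, key):
    # same helper as above, repeated so the demo is self-contained
    r = dict(d)
    del r[key]
    return r

d = {'id1': 'conn1', 'id2': 'conn2'}
print(remove_key(d, 'id1'))  # -> {'id2': 'conn2'}
print(d)                     # the original still has both keys
```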
class PITSConnectionHandler(object):
"""
Responsible for managing PITS connections.
Attributes:
'active_outbound_conns', is a dictionary mapping outbound connection identifiers to their connection objects.
'active_inbound_conns', is a dictionary mapping inbound connection identifiers to their connection objects.
"""
def __init__(self):
# { "id1" : , "id2": }
self.active_outbound_conns = {}
# { "id1" : , "id2": }
self.active_inbound_conns = {}
def register_conn(self, conn, identifier, direction):
"""
Register connection 'conn' with 'identifier'. 'direction' is
either "inbound" or "outbound".
"""
if direction == 'inbound':
self.active_inbound_conns[identifier] = conn
logging.debug("active_inbound_conns: %s" % str(self.active_inbound_conns))
elif direction == 'outbound':
self.active_outbound_conns[identifier] = conn
logging.debug("active_outbound_conns: %s" % str(self.active_outbound_conns))
def unregister_conn(self, identifier, direction):
"""
Unregister the connection registered under 'identifier'. 'direction' is
either "inbound" or "outbound".
"""
if direction == 'inbound':
self.active_inbound_conns = remove_key(self.active_inbound_conns, identifier)
logging.debug("active_inbound_conns: %s" % str(self.active_inbound_conns))
elif direction == 'outbound':
self.active_outbound_conns = remove_key(self.active_outbound_conns, identifier)
logging.debug("active_outbound_conns: %s" % str(self.active_outbound_conns))
def find_conn(self, identifier, direction):
"""
Find connection with 'identifier'. 'direction' is either
"inbound" or "outbound".
Raises NoSuchConn.
"""
conn = None
try:
if direction == 'inbound':
conn = self.active_inbound_conns[identifier]
elif direction == 'outbound':
conn = self.active_outbound_conns[identifier]
except KeyError:
logging.warning("find_conn: Could not find '%s' connection with identifier '%s'" %
(direction, identifier))
raise NoSuchConn()
logging.debug("Found '%s' conn with identifier '%s': '%s'" % (direction, identifier, conn))
return conn
def send_data_through_conn(self, identifier, direction, data):
"""
Send 'data' through connection with 'identifier'.
"""
try:
conn = self.find_conn(identifier, direction)
except NoSuchConn:
logging.warning("send_data_through_conn: Could not find '%s' connection "
"with identifier '%s'" % (direction, identifier))
raise
conn.write(data)
def close_conn(self, identifier, direction):
"""
Send EOF through connection with 'identifier'.
"""
try:
conn = self.find_conn(identifier, direction)
except NoSuchConn:
logging.warning("close_conn: Could not find '%s' connection "
"with identifier '%s'" % (direction, identifier))
raise
conn.close()
class NoSuchConn(Exception): pass
obfsproxy-0.2.3/obfsproxy/test/int_tests/pits_design.txt

Pyobfsproxy integration test suite (PITS)
Overview
Obfsproxy needs an automated and robust way of testing its pluggable
transports. While unit tests are certainly helpful, integration
tests provide realistic testing scenarios for network daemons like
obfsproxy.
Motivation
Obfsproxy needs to be tested on how well it can proxy traffic from
one side to its other side. A basic integration test would be to
transfer a string from one side and see if it arrives intact on the
other side.
A more involved integration test is the "timeline tests" of
Stegotorus, developed by Zack Weinberg. Stegotorus integration tests
are configurable: you pass them a script file that defines the
behavior of the integration test connections. This allows
customizable connection establishment and tear down, and the ability
to send arbitrary data through the integration test connections.
That's good enough, but sometimes bugs appear on more complex
network interactions. For this reason, PITS was developed which has
support for:
+ multiple network connections
+ flexible connection behavior
+ automated test case generation
The integration tests should also be cross-platform so that they can
be ran on Microsoft Windows.
Design
+-----------+ +-----------+
|-------->| client |<-------------------->| server |<--------|
| |----->| obfsproxy |<-------------------->| obfsproxy |<-----| |
| | |-->| |<-------------------->| |<--| | |
| | | +-----------+ +-----------+ | | |
| | | | | |
v v v v v v
+---------------+ +---------------+
| PITS outbound | | PITS inbound |
+---------------+ +---------------+
^ |
| |
| v
+---------------+ +---------------+
|Test case file |<------------------------------>|Transcript file|
+---------------+ +---------------+
PITS does integration tests by reading a user-provided test case
file which contains a description of the test that PITS should
perform.
A basic PITS test case usually involves launching two obfsproxies as
in the typical obfuscated bridge client-server scenario, exchanging
some data between them and finally checking if both sides received
the proper data.
A basic PITS test case usually involves opening a listening socket
(which in the case of a client-side obfsproxy, emulates the
server-side obfsproxy), and a number of outbound connections (which in
the case of a client-side obfsproxy, emulate the connections from the
Tor client).
Test case files contain instructions for the sockets of PITS. Through
test case files, PITS can be configured to perform the following
actions:
+ Open and close connections
+ Send arbitrary data through connections
+ Pause connections
While conducting the tests, the PITS inbound and outbound sockets
record the data they sent and receive in a 'transcript'; after the
test is over, the transcript and test case file are post-processed
and compared with each other to check whether the intended
conversation was performed successfully.
Test case files
The test case file format is line-oriented; each line is a command,
and the first character of the line is a directive followed by a
number of arguments.
Valid commands are:
# comment line - note that # _only_ introduces a comment at the beginning
of a line; elsewhere, it's either a syntax error or part
of an argument
P number - pause test-case execution for |number| milliseconds
! <id> - initiate connection with identifier <id>
* <id> - Close connection <id> (through inbound socket)
> <id> <data> - transmit <data> on <id> through outbound socket
< <id> <data> - transmit <data> on <id> through inbound socket
Trailing whitespace is ignored.
Test cases have to close all established connections explicitly,
otherwise the test won't be validated correctly.
Transcript files
Inbound and outbound sockets log received data to a transcript
file. The transcript file format is similar to the test case format:
! <id> - connection established on inbound socket
> <id> <data> - received <data> on inbound socket
< <id> <data> - received <data> on outbound socket.
* <id> - connection destroyed on inbound socket
Test case results
After a test case is completed and the transcript file is written,
PITS needs to evaluate whether the test case was successful; that is,
whether the transcript file correctly describes the test case.
Because of the properties of TCP, the following post-processing
happens to validate the transcript file with the test case file:
a) Both files are segregated: all the traffic and events of inbound
sockets are put on top, and the traffic and events of outbound
sockets are put on the bottom.
(This happens because TCP can't guarantee order of event arrival in
one direction relative to the order of event arrival in the other
direction.)
b) In both files, for each socket identifier, we concatenate all its
traffic in a single 'transmit' directive. In the end, we place the
transmit line below the events (session establishment, etc.).
(This happens because TCP is a stream protocol.)
c) We string compare the transcript and test-case files.
XXX document any unexpected behaviors or untestable cases caused by
the above postprocessing.
Acknowledgements
The script file format and the basic idea of PITS are concepts of
Zack Weinberg. They were implemented as part of Stegotorus:
https://gitweb.torproject.org/stegotorus.git/blob/HEAD:/src/test/tltester.cc
obfsproxy-0.2.3/obfsproxy/test/int_tests/pits_network.py

from twisted.internet.protocol import Protocol, Factory, ClientFactory
from twisted.internet import reactor, error, address, tcp
import logging
class GenericProtocol(Protocol):
"""
Generic PITS connection. Contains useful methods and attributes.
"""
def __init__(self, identifier, direction, transcript, conn_handler):
self.identifier = identifier
self.direction = direction
self.transcript = transcript
self.conn_handler = conn_handler
self.closed = False
self.conn_handler.register_conn(self, self.identifier, self.direction)
# If it's inbound, note the connection establishment to the transcript.
if self.direction == 'inbound':
self.transcript.write('! %s' % self.identifier)
logging.debug("Registered '%s' connection with identifier %s" % (direction, identifier))
def connectionLost(self, reason):
logging.debug("%s: Connection was lost (%s)." % (self.name, reason.getErrorMessage()))
# If it's inbound, note the connection fail to the transcript.
if self.direction == 'inbound':
self.transcript.write('* %s' % self.identifier)
self.close()
def connectionFailed(self, reason):
logging.warning("%s: Connection failed to connect (%s)." % (self.name, reason.getErrorMessage()))
# XXX Note connection fail to transcript?
self.close()
def dataReceived(self, data):
logging.debug("'%s' connection '%s' received %s" % (self.direction, self.identifier, repr(data)))
# Note data to the transcript.
symbol = '>' if self.direction == 'inbound' else '<'
self.transcript.write('%s %s %s' % (symbol, self.identifier, data.encode("string_escape")))
def write(self, buf):
"""
Write 'buf' to the underlying transport.
"""
logging.debug("Connection '%s' writing %s" % (self.identifier, repr(buf)))
self.transport.write(buf)
def close(self):
"""
Close the connection.
"""
if self.closed: return # NOP if already closed
logging.debug("%s: Closing connection." % self.name)
self.transport.loseConnection()
self.conn_handler.unregister_conn(self.identifier, self.direction)
self.closed = True
class OutboundConnection(GenericProtocol):
def __init__(self, identifier, transcript, conn_handler):
self.name = "out_%s_%s" % (identifier, hex(id(self)))
GenericProtocol.__init__(self, identifier, 'outbound', transcript, conn_handler)
class InboundConnection(GenericProtocol):
def __init__(self, identifier, transcript, conn_handler):
self.name = "in_%s_%s" % (identifier, hex(id(self)))
GenericProtocol.__init__(self, identifier, 'inbound', transcript, conn_handler)
class PITSOutboundFactory(Factory):
"""
Outbound PITS factory.
"""
def __init__(self, identifier, transcript, conn_handler):
self.transcript = transcript
self.conn_handler = conn_handler
self.identifier = identifier
self.name = "out_factory_%s" % hex(id(self))
def buildProtocol(self, addr):
# New outbound connection.
return OutboundConnection(self.identifier, self.transcript, self.conn_handler)
def startFactory(self):
logging.debug("%s: Started up PITS outbound listener." % self.name)
def stopFactory(self):
logging.debug("%s: Shutting down PITS outbound listener." % self.name)
def startedConnecting(self, connector):
logging.debug("%s: Client factory started connecting." % self.name)
def clientConnectionLost(self, connector, reason):
logging.debug("%s: Connection lost (%s)." % (self.name, reason.getErrorMessage()))
def clientConnectionFailed(self, connector, reason):
logging.debug("%s: Connection failed (%s)." % (self.name, reason.getErrorMessage()))
class PITSInboundFactory(Factory):
"""
Inbound PITS factory
"""
def __init__(self, transcript, conn_handler):
self.transcript = transcript
self.conn_handler = conn_handler
self.name = "in_factory_%s" % hex(id(self))
# List with all the identifiers observed while parsing the
# test case file so far.
self.identifiers_seen = []
# The number of identifiers used so far to name incoming
# connections. Normally it should be smaller than the length
# of 'identifiers_seen'.
self.identifiers_used_n = 0
def buildProtocol(self, addr):
# New inbound connection.
identifier = self._get_identifier_for_new_conn()
return InboundConnection(identifier, self.transcript, self.conn_handler)
def register_new_identifier(self, identifier):
"""Register new connection identifier."""
if identifier in self.identifiers_seen:
# The identifier was already in our list. Broken test case
# or broken PITS.
logging.warning("Tried to register identifier '%s' more than once (list: %s)."
"Maybe your test case is broken, or this could be a bug." %
(identifier, self.identifiers_seen))
return
self.identifiers_seen.append(identifier)
def _get_identifier_for_new_conn(self):
"""
We got a new incoming connection. Find the next identifier
that we should use, and return it.
"""
# BUG len(identifiers_seen) == 0 , identifiers_used == 0
# NORMAL len(identifiers_seen) == 1, identifiers_used == 0
# BUG len(identifiers_seen) == 2, identifiers_used == 3
if (self.identifiers_used_n >= len(self.identifiers_seen)):
logging.warning("Not enough identifiers for new connection (%d, %s)" %
(self.identifiers_used_n, str(self.identifiers_seen)))
assert(False)
identifier = self.identifiers_seen[self.identifiers_used_n]
self.identifiers_used_n += 1
return identifier
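The factory's identifier bookkeeping can be modelled as a small self-contained class (the names here are illustrative, not from the source):

```python
class IdentifierPool(object):
    """Hand out registered identifiers to connections in FIFO order."""
    def __init__(self):
        self.seen = []   # identifiers parsed from the test case so far
        self.used = 0    # how many have been assigned to connections
    def register(self, identifier):
        if identifier in self.seen:
            return       # duplicate registration: ignore, as above
        self.seen.append(identifier)
    def next_for_conn(self):
        assert self.used < len(self.seen), "not enough identifiers"
        identifier = self.seen[self.used]
        self.used += 1
        return identifier

pool = IdentifierPool()
pool.register('c1')
pool.register('c2')
print(pool.next_for_conn())  # prints "c1"
print(pool.next_for_conn())  # prints "c2"
```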
def startFactory(self):
logging.debug("%s: Started up PITS inbound listener." % self.name)
def stopFactory(self):
logging.debug("%s: Shutting down PITS inbound listener." % self.name)
# XXX here we should close all existing connections
# ---- obfsproxy-0.2.3/obfsproxy/test/int_tests/pits_transcript.py ----
import collections
import logging
import pits
class Transcript(object):
"""
Manages the PITS transcript. Also contains the functions that
verify the transcript against the test case file.
Attributes:
'text', the transcript text.
"""
def __init__(self):
self.text = ''
def write(self, data):
"""Write 'data' to transcript."""
self.text += data
self.text += '\n'
def get(self):
return self.text
def test_was_success(self, original_script):
"""
Validate transcript against test case file. Return True if the
test was successful and False otherwise.
"""
postprocessed_script = self._postprocess(original_script)
postprocessed_transcript = self._postprocess(self.text)
# Log the results
log_func = logging.debug if postprocessed_script == postprocessed_transcript else logging.warning
log_func("postprocessed_script:\n'%s'" % postprocessed_script)
log_func("postprocessed_transcript:\n'%s'" % postprocessed_transcript)
return postprocessed_script == postprocessed_transcript
def _postprocess(self, script):
"""
Post-process a (trans)script, according to the instructions of
the "Test case results" section.
Return the postprocessed string.
Assume correctly formatted script file.
"""
logging.debug("Postprocessing:\n%s" % script)
postprocessed = ''
outbound_events = [] # Events of the outbound connections
inbound_events = [] # Events of the inbound connections
# Data towards outbound connections ( -> )
outbound_data = collections.OrderedDict()
# Data towards inbound connections ( -> )
inbound_data = collections.OrderedDict()
for line in script.split('\n'):
line = line.rstrip()
if line == '':
continue
tokens = line.split(" ")
if tokens[0] == 'P' or tokens[0] == '#': # Ignore
continue
elif tokens[0] == '!': # Count '!' as inbound event
inbound_events.append(line)
elif tokens[0] == '*': # Count '*' as outbound event
outbound_events.append(line)
elif tokens[0] == '>': # Data towards inbound socket
if not tokens[1] in inbound_data:
inbound_data[tokens[1]] = ''
inbound_data[tokens[1]] += ' '.join(tokens[2:])
elif tokens[0] == '<': # Data towards outbound socket
if not tokens[1] in outbound_data:
outbound_data[tokens[1]] = ''
outbound_data[tokens[1]] += ' '.join(tokens[2:])
"""
Inbound-related events and traffic go on top, the rest go to
the bottom. Event lines go on top, transmit lines on bottom.
"""
# Inbound lines
postprocessed += '\n'.join(inbound_events)
postprocessed += '\n'
for identifier, data in inbound_data.items():
postprocessed += '> %s %s\n' % (identifier, data)
# Outbound lines
postprocessed += '\n'.join(outbound_events)
postprocessed += '\n'
for identifier, data in outbound_data.items():
postprocessed += '< %s %s\n' % (identifier, data)
return postprocessed
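The ordering rules above can be made concrete with a standalone sketch — a hypothetical re-implementation of the postprocessing step, handling only the token types described (`!`, `*`, `>`, `<`, with `P` and `#` ignored), not the actual obfsproxy code:

```python
import collections

def postprocess(script):
    # Inbound events and data go on top, outbound below;
    # event lines before transmit lines, per the rules above.
    inbound_events, outbound_events = [], []
    inbound_data = collections.OrderedDict()
    outbound_data = collections.OrderedDict()
    for line in script.split('\n'):
        line = line.rstrip()
        if not line:
            continue
        tokens = line.split(' ')
        if tokens[0] in ('P', '#'):      # pauses and comments are ignored
            continue
        elif tokens[0] == '!':           # inbound event
            inbound_events.append(line)
        elif tokens[0] == '*':           # outbound event
            outbound_events.append(line)
        elif tokens[0] == '>':           # data towards inbound socket
            inbound_data.setdefault(tokens[1], '')
            inbound_data[tokens[1]] += ' '.join(tokens[2:])
        elif tokens[0] == '<':           # data towards outbound socket
            outbound_data.setdefault(tokens[1], '')
            outbound_data[tokens[1]] += ' '.join(tokens[2:])
    out = '\n'.join(inbound_events) + '\n'
    for ident, data in inbound_data.items():
        out += '> %s %s\n' % (ident, data)
    out += '\n'.join(outbound_events) + '\n'
    for ident, data in outbound_data.items():
        out += '< %s %s\n' % (ident, data)
    return out
```

Note how multiple `>` or `<` lines for the same identifier get concatenated into a single line, which is what makes transcripts comparable regardless of how the data was segmented on the wire.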
# ---- obfsproxy-0.2.3/obfsproxy/test/int_tests/test_case.pits ----
# Sample test case
! one
> one ABC
< one DEF
< one HIJ
> one KLM
P 1
! two
! three
* one
> two 123
> two 456
* three
< two 789
* two
# ---- obfsproxy-0.2.3/obfsproxy/test/int_tests/test_case_simple.pits ----
! one
> one ABC
! two
< two DFG
* one
* two
# ---- obfsproxy-0.2.3/obfsproxy/test/int_tests/test_pits.py ----
import os
import logging
import twisted.trial.unittest
import pits
class PITSTest(twisted.trial.unittest.TestCase):
def setUp(self):
pass
def tearDown(self):
self.treader.cleanup()
def _doTest(self, transport_name, test_case_file):
self.treader = pits.TestReader(self.assertTrue, test_case_file)
return self.treader.do_test(
('%s' % transport_name,
'client',
'127.0.0.1:%d' % pits.CLIENT_OBFSPORT,
'--dest=127.0.0.1:%d' % pits.SERVER_OBFSPORT),
('%s' % transport_name,
'server',
'127.0.0.1:%d' % pits.SERVER_OBFSPORT,
'--dest=127.0.0.1:%d' % self.treader.pits.get_pits_inbound_address().port))
def _doTest_shared_secret(self, transport_name, test_case_file):
self.treader = pits.TestReader(self.assertTrue, test_case_file)
return self.treader.do_test(
('%s' % transport_name,
'client',
'127.0.0.1:%d' % pits.CLIENT_OBFSPORT,
'--shared-secret=test',
"--ss-hash-iterations=50",
'--dest=127.0.0.1:%d' % pits.SERVER_OBFSPORT),
('%s' % transport_name,
'server',
'127.0.0.1:%d' % pits.SERVER_OBFSPORT,
'--shared-secret=test',
"--ss-hash-iterations=50",
'--dest=127.0.0.1:%d' % self.treader.pits.get_pits_inbound_address().port))
# XXX This is pretty ridiculous. Find a smarter way to make up for the
# absence of load_tests().
def test_dummy_1(self):
return self._doTest("dummy", "../test_case.pits")
def test_dummy_2(self):
return self._doTest("dummy", "../test_case_simple.pits")
def test_obfs2_1(self):
return self._doTest("obfs2", "../test_case.pits")
def test_obfs2_2(self):
return self._doTest("obfs2", "../test_case_simple.pits")
def test_obfs2_shared_secret_1(self):
return self._doTest_shared_secret("obfs2", "../test_case.pits")
def test_obfs2_shared_secret_2(self):
return self._doTest_shared_secret("obfs2", "../test_case_simple.pits")
def test_obfs3_1(self):
return self._doTest("obfs3", "../test_case.pits")
def test_obfs3_2(self):
return self._doTest("obfs3", "../test_case_simple.pits")
if __name__ == '__main__':
from unittest import main
main()
# ---- obfsproxy-0.2.3/obfsproxy/test/test_aes.py ----
import unittest
from Crypto.Cipher import AES
from Crypto.Util import Counter
import obfsproxy.common.aes as aes
import twisted.trial.unittest
class testAES_CTR_128_NIST(twisted.trial.unittest.TestCase):
def _helper_test_vector(self, input_block, output_block, plaintext, ciphertext):
self.assertEqual(long(input_block.encode('hex'), 16), self.ctr.next_value())
ct = self.cipher.encrypt(plaintext)
self.assertEqual(ct, ciphertext)
# XXX how do we extract the keystream out of the AES object?
def test_nist(self):
# Prepare the cipher
key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6\xab\xf7\x15\x88\x09\xcf\x4f\x3c"
iv = "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
self.ctr = Counter.new(128, initial_value=long(iv.encode('hex'), 16))
self.cipher = AES.new(key, AES.MODE_CTR, counter=self.ctr)
input_block = "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
output_block = "\xec\x8c\xdf\x73\x98\x60\x7c\xb0\xf2\xd2\x16\x75\xea\x9e\xa1\xe4"
plaintext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
ciphertext = "\x87\x4d\x61\x91\xb6\x20\xe3\x26\x1b\xef\x68\x64\x99\x0d\xb6\xce"
self._helper_test_vector(input_block, output_block, plaintext, ciphertext)
input_block = "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xff\x00"
output_block = "\x36\x2b\x7c\x3c\x67\x73\x51\x63\x18\xa0\x77\xd7\xfc\x50\x73\xae"
plaintext = "\xae\x2d\x8a\x57\x1e\x03\xac\x9c\x9e\xb7\x6f\xac\x45\xaf\x8e\x51"
ciphertext = "\x98\x06\xf6\x6b\x79\x70\xfd\xff\x86\x17\x18\x7b\xb9\xff\xfd\xff"
self._helper_test_vector(input_block, output_block, plaintext, ciphertext)
input_block = "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xff\x01"
output_block = "\x6a\x2c\xc3\x78\x78\x89\x37\x4f\xbe\xb4\xc8\x1b\x17\xba\x6c\x44"
plaintext = "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11\xe5\xfb\xc1\x19\x1a\x0a\x52\xef"
ciphertext = "\x5a\xe4\xdf\x3e\xdb\xd5\xd3\x5e\x5b\x4f\x09\x02\x0d\xb0\x3e\xab"
self._helper_test_vector(input_block, output_block, plaintext, ciphertext)
input_block = "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xff\x02"
output_block = "\xe8\x9c\x39\x9f\xf0\xf1\x98\xc6\xd4\x0a\x31\xdb\x15\x6c\xab\xfe"
plaintext = "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17\xad\x2b\x41\x7b\xe6\x6c\x37\x10"
ciphertext = "\x1e\x03\x1d\xda\x2f\xbe\x03\xd1\x79\x21\x70\xa0\xf3\x00\x9c\xee"
self._helper_test_vector(input_block, output_block, plaintext, ciphertext)
class testAES_CTR_128_simple(twisted.trial.unittest.TestCase):
def test_encrypt_decrypt_small_ASCII(self):
"""
Validate that decryption and encryption work as intended on a small ASCII string.
"""
self.key = "\xe3\xb0\xc4\x42\x98\xfc\x1c\x14\x9a\xfb\xf4\xc8\x99\x6f\xb9\x24"
self.iv = "\x27\xae\x41\xe4\x64\x9b\x93\x4c\xa4\x95\x99\x1b\x78\x52\xb8\x55"
test_string = "This unittest kills fascists."
cipher1 = aes.AES_CTR_128(self.key, self.iv)
cipher2 = aes.AES_CTR_128(self.key, self.iv)
ct = cipher1.crypt(test_string)
pt = cipher2.crypt(ct)
self.assertEqual(test_string, pt)
if __name__ == '__main__':
unittest.main()
# ---- obfsproxy-0.2.3/obfsproxy/test/test_buffer.py ----
import unittest
import obfsproxy.network.buffer as obfs_buf
import twisted.trial.unittest
class testBuffer(twisted.trial.unittest.TestCase):
def setUp(self):
self.test_string = "No pop no style, I strictly roots."
self.buf = obfs_buf.Buffer(self.test_string)
def test_totalread(self):
tmp = self.buf.read(-1)
self.assertEqual(tmp, self.test_string)
def test_byte_by_byte(self):
"""Read one byte at a time."""
for i in xrange(len(self.test_string)):
self.assertEqual(self.buf.read(1), self.test_string[i])
def test_bigread(self):
self.assertEqual(self.buf.read(666), self.test_string)
def test_peek(self):
tmp = self.buf.peek(-1)
self.assertEqual(tmp, self.test_string)
self.assertEqual(self.buf.read(-1), self.test_string)
def test_drain(self):
tmp = self.buf.drain(-1) # drain everything
self.assertIsNone(tmp) # check non-existent retval
self.assertEqual(self.buf.read(-1), '') # it should be empty.
self.assertEqual(len(self.buf), 0)
def test_drain2(self):
tmp = self.buf.drain(len(self.test_string)-1) # drain everything but a byte
self.assertIsNone(tmp) # check non-existent retval
self.assertEqual(self.buf.peek(-1), '.') # peek at last character
self.assertEqual(len(self.buf), 1) # length must be 1
if __name__ == '__main__':
unittest.main()
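The semantics these tests exercise can be summarized by a minimal sketch. This is not the real `obfsproxy.network.buffer.Buffer` implementation, just a hypothetical stand-in that mirrors the tested behaviour (negative sizes mean "everything", `drain` discards and returns `None`):

```python
class Buffer(object):
    """Minimal FIFO byte buffer sketch matching the tested semantics."""

    def __init__(self, data=''):
        self._data = data

    def read(self, n=-1):
        # Remove and return up to n bytes; n < 0 means all of them.
        if n < 0:
            n = len(self._data)
        out, self._data = self._data[:n], self._data[n:]
        return out

    def peek(self, n=-1):
        # Like read(), but leave the buffer untouched.
        return self._data if n < 0 else self._data[:n]

    def drain(self, n=-1):
        # Discard up to n bytes; deliberately returns None.
        self.read(n)

    def __len__(self):
        return len(self._data)
```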
# ---- obfsproxy-0.2.3/obfsproxy/test/test_socks.py ----
import obfsproxy.network.socks as socks
import twisted.trial.unittest
class test_SOCKS(twisted.trial.unittest.TestCase):
def test_socks_args_splitting(self):
socks_args = socks.split_socks_args("monday=blue;tuesday=grey;wednesday=too;thursday=don\\;tcareabout\\\\you;friday=i\\;minlove")
self.assertListEqual(socks_args, ["monday=blue", "tuesday=grey", "wednesday=too", "thursday=don;tcareabout\\you", "friday=i;minlove"])
socks_args = socks.split_socks_args("monday=blue")
self.assertListEqual(socks_args, ["monday=blue"])
socks_args = socks.split_socks_args("monday=;tuesday=grey")
self.assertListEqual(socks_args, ["monday=", "tuesday=grey"])
socks_args = socks.split_socks_args("\\;=\\;;\\\\=\\;")
self.assertListEqual(socks_args, [";=;", "\\=;"])
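The escaping rules these assertions encode — arguments separated by `;`, with `\;` and `\\` as escapes — can be sketched with a hypothetical standalone re-implementation (not the actual `socks.split_socks_args`):

```python
def split_socks_args(args_str):
    """Split on ';', treating a backslash as an escape for the next char."""
    args = []
    current = []
    it = iter(args_str)
    for ch in it:
        if ch == '\\':
            # Escaped character: keep the next char literally.
            current.append(next(it, ''))
        elif ch == ';':
            # Unescaped separator: finish the current argument.
            args.append(''.join(current))
            current = []
        else:
            current.append(ch)
    args.append(''.join(current))
    return args
```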
# ---- obfsproxy-0.2.3/obfsproxy/test/tester.py ----
#!/usr/bin/python
"""@package tester.py.in
Integration tests for obfsproxy.
The obfsproxy binary is assumed to exist in the current working
directory, and you need to have Python 2.6 or better (but not 3).
You need to be able to make connections to arbitrary high-numbered
TCP ports on the loopback interface.
"""
import difflib
import errno
import multiprocessing
import Queue
import re
import signal
import socket
import struct
import subprocess
import time
import traceback
import unittest
import sys,os
def diff(label, expected, received):
"""
Helper: generate unified-format diffs between two named strings.
Pythonic escaped-string syntax is used for unprintable characters.
"""
if expected == received:
return ""
else:
return (label + "\n"
+ "\n".join(s.encode("string_escape")
for s in
difflib.unified_diff(expected.split("\n"),
received.split("\n"),
"expected", "received",
lineterm=""))
+ "\n")
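For reference, the core of this helper is `difflib.unified_diff` from the standard library; a minimal standalone use (without the string-escaping wrapper above) looks like:

```python
import difflib

expected = "line one\nline two"
received = "line one\nline 2"

# unified_diff yields header lines, hunk markers, and +/- change lines.
report = "\n".join(difflib.unified_diff(expected.split("\n"),
                                        received.split("\n"),
                                        "expected", "received",
                                        lineterm=""))
```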
class Obfsproxy(subprocess.Popen):
"""
Helper: Run obfsproxy instances and confirm that they have
completed without any errors.
"""
def __init__(self, *args, **kwargs):
"""Spawns obfsproxy with 'args'"""
argv = ["bin/obfsproxy", "--no-log"]
if len(args) == 1 and (isinstance(args[0], list) or
isinstance(args[0], tuple)):
argv.extend(args[0])
else:
argv.extend(args)
subprocess.Popen.__init__(self, argv,
stdin=open("/dev/null", "r"),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
**kwargs)
severe_error_re = re.compile(r"\[(?:warn|err(?:or)?)\]")
def check_completion(self, label, force_stderr):
"""
Checks the output and exit status of obfsproxy to see if
everything went fine.
Returns an empty string if the test was good, otherwise it
returns a report that should be printed to the user.
"""
if self.poll() is None:
self.send_signal(signal.SIGINT)
(out, err) = self.communicate()
report = ""
def indent(s):
return "| " + "\n| ".join(s.strip().split("\n"))
# exit status should be zero
if self.returncode > 0:
report += label + " exit code: %d\n" % self.returncode
elif self.returncode < 0:
report += label + " killed: signal %d\n" % -self.returncode
# there should be nothing on stdout
if out != "":
report += label + " stdout:\n%s\n" % indent(out)
# there will be debugging messages on stderr, but there should be
# no [warn], [err], or [error] messages.
if force_stderr or self.severe_error_re.search(err):
report += label + " stderr:\n%s\n" % indent(err)
return report
def stop(self):
"""Terminates obfsproxy."""
if self.poll() is None:
self.terminate()
def connect_with_retry(addr):
"""
Helper: Repeatedly try to connect to the specified server socket
until either it succeeds or one full second has elapsed. (Surely
there is a better way to do this?)
"""
retry = 0
while True:
try:
return socket.create_connection(addr)
except socket.error, e:
if e.errno != errno.ECONNREFUSED: raise
if retry == 20: raise
retry += 1
time.sleep(0.05)
SOCKET_TIMEOUT = 2.0
class ReadWorker(object):
"""
Helper: In a separate process (to avoid deadlock), listen on a
specified socket. The first time something connects to that socket,
read all available data, stick it in a string, and post the string
to the output queue. Then close both sockets and exit.
"""
@staticmethod
def work(address, oq):
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(address)
listener.listen(1)
(conn, remote) = listener.accept()
listener.close()
conn.settimeout(SOCKET_TIMEOUT)
data = ""
try:
while True:
chunk = conn.recv(4096)
if chunk == "": break
data += chunk
except socket.timeout:
pass
except Exception, e:
data += "|RECV ERROR: " + str(e)
conn.close()
oq.put(data)
def __init__(self, address):
self.oq = multiprocessing.Queue()
self.worker = multiprocessing.Process(target=self.work,
args=(address, self.oq))
self.worker.start()
def get(self):
"""
Get a chunk of data from the ReadWorker's queue.
"""
rv = self.oq.get(timeout=SOCKET_TIMEOUT+0.1)
self.worker.join()
return rv
def stop(self):
if self.worker.is_alive(): self.worker.terminate()
# Right now this is a direct translation of the former int_test.sh
# (except that I have fleshed out the SOCKS test a bit).
# It will be made more general and parametric Real Soon.
ENTRY_PORT = 4999
SERVER_PORT = 5000
EXIT_PORT = 5001
#
# Test base classes. They do _not_ inherit from unittest.TestCase
# so that they are not scanned directly for test functions (some of
# them do provide test functions, but not in a usable state without
# further code from subclasses).
#
class DirectTest(object):
def setUp(self):
self.output_reader = ReadWorker(("127.0.0.1", EXIT_PORT))
self.obfs_server = Obfsproxy(self.server_args)
time.sleep(0.1)
self.obfs_client = Obfsproxy(self.client_args)
self.input_chan = connect_with_retry(("127.0.0.1", ENTRY_PORT))
self.input_chan.settimeout(SOCKET_TIMEOUT)
def tearDown(self):
self.obfs_client.stop()
self.obfs_server.stop()
self.output_reader.stop()
self.input_chan.close()
def test_direct_transfer(self):
# Open a server and a simple client (in the same process) and
# transfer a file. Then check whether the output is the same
# as the input.
self.input_chan.sendall(TEST_FILE)
time.sleep(2)
try:
output = self.output_reader.get()
except Queue.Empty:
output = ""
self.input_chan.close()
report = diff("errors in transfer:", TEST_FILE, output)
report += self.obfs_client.check_completion("obfsproxy client (%s)" % self.transport, report!="")
report += self.obfs_server.check_completion("obfsproxy server (%s)" % self.transport, report!="")
if report != "":
self.fail("\n" + report)
#
# Concrete test classes specialize the above base classes for each protocol.
#
class DirectDummy(DirectTest, unittest.TestCase):
transport = "dummy"
server_args = ("dummy", "server",
"127.0.0.1:%d" % SERVER_PORT,
"--dest=127.0.0.1:%d" % EXIT_PORT)
client_args = ("dummy", "client",
"127.0.0.1:%d" % ENTRY_PORT,
"--dest=127.0.0.1:%d" % SERVER_PORT)
class DirectObfs2(DirectTest, unittest.TestCase):
transport = "obfs2"
server_args = ("obfs2", "server",
"127.0.0.1:%d" % SERVER_PORT,
"--dest=127.0.0.1:%d" % EXIT_PORT)
client_args = ("obfs2", "client",
"127.0.0.1:%d" % ENTRY_PORT,
"--dest=127.0.0.1:%d" % SERVER_PORT)
class DirectObfs2_ss(DirectTest, unittest.TestCase):
transport = "obfs2"
server_args = ("obfs2", "server",
"127.0.0.1:%d" % SERVER_PORT,
"--shared-secret=test",
"--ss-hash-iterations=50",
"--dest=127.0.0.1:%d" % EXIT_PORT)
client_args = ("obfs2", "client",
"127.0.0.1:%d" % ENTRY_PORT,
"--shared-secret=test",
"--ss-hash-iterations=50",
"--dest=127.0.0.1:%d" % SERVER_PORT)
class DirectB64(DirectTest, unittest.TestCase):
transport = "b64"
server_args = ("b64", "server",
"127.0.0.1:%d" % SERVER_PORT,
"--dest=127.0.0.1:%d" % EXIT_PORT)
client_args = ("b64", "client",
"127.0.0.1:%d" % ENTRY_PORT,
"--dest=127.0.0.1:%d" % SERVER_PORT)
class DirectObfs3(DirectTest, unittest.TestCase):
transport = "obfs3"
server_args = ("obfs3", "server",
"127.0.0.1:%d" % SERVER_PORT,
"--dest=127.0.0.1:%d" % EXIT_PORT)
client_args = ("obfs3", "client",
"127.0.0.1:%d" % ENTRY_PORT,
"--dest=127.0.0.1:%d" % SERVER_PORT)
TEST_FILE = """\
THIS IS A TEST FILE. IT'S USED BY THE INTEGRATION TESTS.
THIS IS A TEST FILE. IT'S USED BY THE INTEGRATION TESTS.
THIS IS A TEST FILE. IT'S USED BY THE INTEGRATION TESTS.
THIS IS A TEST FILE. IT'S USED BY THE INTEGRATION TESTS.
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
In obfuscatory age geeky warfare did I wage
For hiding bits from nasty censors' sight
I was hacker to my set in that dim dark age of net
And I hacked from noon till three or four at night
Then a rival from Helsinki said my protocol was dinky
So I flamed him with a condescending laugh,
Saying his designs for stego might as well be made of lego
And that my bikeshed was prettier by half.
But Claude Shannon saw my shame. From his noiseless channel came
A message sent with not a wasted byte
"There are nine and sixty ways to disguise communiques
And RATHER MORE THAN ONE OF THEM IS RIGHT"
(apologies to Rudyard Kipling.)
"""
if __name__ == '__main__':
unittest.main()
# ---- obfsproxy-0.2.3/obfsproxy/test/transports/test_b64.py ----
import unittest
import twisted.trial.unittest
import obfsproxy.transports.b64 as b64
class test_b64_splitting(twisted.trial.unittest.TestCase):
def _helper_splitter(self, string, expected_chunks):
chunks = b64._get_b64_chunks_from_str(string)
self.assertEqual(chunks, expected_chunks)
def test_1(self):
string = "on==the==left==hand==side=="
expected = ["on==", "the==", "left==", "hand==", "side=="]
self._helper_splitter(string, expected)
def test_2(self):
string = "on=the=left=hand=side="
expected = ["on=", "the=", "left=", "hand=", "side="]
self._helper_splitter(string, expected)
def test_3(self):
string = "on==the=left==hand=side=="
expected = ["on==", "the=", "left==", "hand=", "side=="]
self._helper_splitter(string, expected)
def test_4(self):
string = "on==the==left=hand=side"
expected = ["on==", "the==", "left=", "hand=", "side"]
self._helper_splitter(string, expected)
def test_5(self):
string = "onthelefthandside=="
expected = ["onthelefthandside=="]
self._helper_splitter(string, expected)
def test_6(self):
string = "onthelefthandside"
expected = ["onthelefthandside"]
self._helper_splitter(string, expected)
def test_7(self):
string = "onthelefthandside="
expected = ["onthelefthandside="]
self._helper_splitter(string, expected)
def test_8(self):
string = "side=="
expected = ["side=="]
self._helper_splitter(string, expected)
def test_9(self):
string = "side="
expected = ["side="]
self._helper_splitter(string, expected)
def test_10(self):
string = "side"
expected = ["side"]
self._helper_splitter(string, expected)
if __name__ == '__main__':
unittest.main()
# ---- obfsproxy-0.2.3/obfsproxy/test/transports/test_obfs3_dh.py ----
import unittest
import twisted.trial.unittest
import obfsproxy.transports.obfs3_dh as obfs3_dh
class test_uniform_dh(twisted.trial.unittest.TestCase):
def test_uniform_dh(self):
alice = obfs3_dh.UniformDH()
bob = obfs3_dh.UniformDH()
alice_pub = alice.get_public()
bob_pub = bob.get_public()
alice_secret = alice.get_secret(bob_pub)
bob_secret = bob.get_secret(alice_pub)
self.assertEqual(alice_secret, bob_secret)
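The symmetry asserted here is the basic Diffie-Hellman property. A toy sketch of that property using only the stdlib — with a small demo prime, not UniformDH's fixed, much larger MODP group, and without the public-key obfuscation that makes UniformDH "uniform":

```python
import random

# Demo parameters only -- far too small for real use.
p = 2 ** 127 - 1   # a Mersenne prime
g = 5

alice_priv = random.randrange(2, p - 1)
bob_priv = random.randrange(2, p - 1)

# Each side publishes g^priv mod p.
alice_pub = pow(g, alice_priv, p)
bob_pub = pow(g, bob_priv, p)

# Each side combines its own private key with the peer's public key;
# both arrive at g^(alice_priv * bob_priv) mod p.
alice_secret = pow(bob_pub, alice_priv, p)
bob_secret = pow(alice_pub, bob_priv, p)
```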
if __name__ == '__main__':
unittest.main()
# ---- obfsproxy-0.2.3/obfsproxy/transports/__init__.py (empty) ----
# ---- obfsproxy-0.2.3/obfsproxy/transports/b64.py ----
#!/usr/bin/python
# -*- coding: utf-8 -*-
""" This module contains an implementation of the 'b64' transport. """
from obfsproxy.transports.base import BaseTransport
import base64
import obfsproxy.common.log as logging
log = logging.get_obfslogger()
def _get_b64_chunks_from_str(string):
"""
Given a 'string' of concatenated base64 objects, return a list
with the objects.
Assumes that the objects are well-formed base64 strings. Also
assumes that the padding character of base64 is '='.
"""
chunks = []
while True:
pad_loc = string.find('=')
if pad_loc < 0 or pad_loc == len(string)-1 or pad_loc == len(string)-2:
# If there is no padding, or it's the last chunk: append
# it to chunks and return.
chunks.append(string)
return chunks
if pad_loc != len(string)-1 and string[pad_loc+1] == '=': # double padding
pad_loc += 1
# Append the object to the chunks, and prepare the string for
# the next iteration.
chunks.append(string[:pad_loc+1])
string = string[pad_loc+1:]
return chunks
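To make the boundary cases concrete, here is a self-contained copy of the splitting logic above with a couple of worked inputs (same assumptions: well-formed base64, `=` as the padding character):

```python
def split_b64_chunks(string):
    """Split concatenated base64 objects on their '='/'==' padding."""
    chunks = []
    while True:
        pad_loc = string.find('=')
        if pad_loc < 0 or pad_loc >= len(string) - 2:
            # No padding left, or the padding ends the string:
            # this is the final chunk.
            chunks.append(string)
            return chunks
        if string[pad_loc + 1] == '=':  # double padding '=='
            pad_loc += 1
        # Cut the chunk off after its padding and keep going.
        chunks.append(string[:pad_loc + 1])
        string = string[pad_loc + 1:]
```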
class B64Transport(BaseTransport):
"""
Implements the b64 protocol. A protocol that encodes data with
base64 before pushing them to the network.
"""
def receivedDownstream(self, data, circuit):
"""
Got data from downstream; relay them upstream.
"""
decoded_data = ''
# TCP is a stream protocol: the data we received might contain
# more than one b64 chunk. We should inspect the data and
# split it into multiple chunks.
b64_chunks = _get_b64_chunks_from_str(data.peek())
# Now b64-decode each chunk and append it to our decoded
# data.
for chunk in b64_chunks:
try:
decoded_data += base64.b64decode(chunk)
except TypeError:
log.info("We got corrupted b64 ('%s')." % chunk)
return
data.drain()
circuit.upstream.write(decoded_data)
def receivedUpstream(self, data, circuit):
"""
Got data from upstream; relay them downstream.
"""
circuit.downstream.write(base64.b64encode(data.read()))
return
class B64Client(B64Transport):
pass
class B64Server(B64Transport):
pass
# ---- obfsproxy-0.2.3/obfsproxy/transports/base.py ----
#!/usr/bin/python
# -*- coding: utf-8 -*-
import pyptlib.util
import obfsproxy.common.log as logging
import argparse
log = logging.get_obfslogger()
"""
This module contains BaseTransport, a pluggable transport skeleton class.
"""
def addrport(string):
"""
Receive a string in 'addr:port' form and return the (addr, port) tuple.
Used during argparse CLI parsing.
"""
try:
return pyptlib.util.parse_addr_spec(string)
except ValueError, err:
raise argparse.ArgumentTypeError(err)
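The actual parsing is delegated to `pyptlib.util.parse_addr_spec`. A simplified, hypothetical stand-in for the common IPv4 "host:port" case (the real function also handles other address forms and does more validation) could look like:

```python
def parse_addr_spec(spec):
    """Parse 'host:port' into (host, port); simplified illustration only."""
    # Split on the last ':' so a missing port is caught.
    host, sep, port = spec.rpartition(':')
    if not sep or not host or not port.isdigit():
        raise ValueError("Bad address spec: %r" % spec)
    return host, int(port)
```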
class BaseTransport(object):
"""
The BaseTransport class is a skeleton class for pluggable transports.
It contains callbacks that your pluggable transports should
override and customize.
"""
def __init__(self):
pass
def handshake(self, circuit):
"""
The Circuit 'circuit' was completed, and this is a good time
to do your transport-specific handshake on its downstream side.
"""
pass
def circuitDestroyed(self, circuit, reason, side):
"""
Circuit 'circuit' was torn down.
Both connections of the circuit are closed when this callback triggers.
"""
pass
def receivedDownstream(self, data, circuit):
"""
Received 'data' in the downstream side of 'circuit'.
'data' is an obfsproxy.network.buffer.Buffer.
"""
pass
def receivedUpstream(self, data, circuit):
"""
Received 'data' in the upstream side of 'circuit'.
'data' is an obfsproxy.network.buffer.Buffer.
"""
pass
def handle_socks_args(self, args):
"""
'args' is a list of k=v strings that serve as configuration
parameters to the pluggable transport.
"""
pass
@classmethod
def register_external_mode_cli(cls, subparser):
"""
Given an argparse ArgumentParser in 'subparser', register
some default external-mode CLI arguments.
Transports with more complex CLI are expected to override this
function.
"""
subparser.add_argument('mode', choices=['server', 'ext_server', 'client', 'socks'])
subparser.add_argument('listen_addr', type=addrport)
subparser.add_argument('--dest', type=addrport, help='Destination address')
subparser.add_argument('--ext-cookie-file', type=str,
help='Filesystem path where the Extended ORPort authentication cookie is stored.')
@classmethod
def validate_external_mode_cli(cls, args):
"""
Given the parsed CLI arguments in 'args', validate them and
make sure they make sense. Return True if they are kosher,
otherwise return False.
Override for your own needs.
"""
# If we are not 'socks', we need to have a static destination
# to send our data to.
if (args.mode != 'socks') and (not args.dest):
log.error("'client' and 'server' modes need a destination address.")
return False
if (args.mode != 'ext_server') and args.ext_cookie_file:
log.error("No need for --ext-cookie-file if not an ext_server.")
return False
if (args.mode == 'ext_server') and (not args.ext_cookie_file):
log.error("You need to specify --ext-cookie-file as an ext_server.")
return False
return True
class PluggableTransportError(Exception): pass
class SOCKSArgsError(Exception): pass
# ---- obfsproxy-0.2.3/obfsproxy/transports/dummy.py ----
#!/usr/bin/python
# -*- coding: utf-8 -*-
""" This module contains an implementation of the 'dummy' transport. """
from obfsproxy.transports.base import BaseTransport
class DummyTransport(BaseTransport):
"""
Implements the dummy protocol. A protocol that simply proxies data
without obfuscating them.
"""
def receivedDownstream(self, data, circuit):
"""
Got data from downstream; relay them upstream.
"""
circuit.upstream.write(data.read())
def receivedUpstream(self, data, circuit):
"""
Got data from upstream; relay them downstream.
"""
circuit.downstream.write(data.read())
class DummyClient(DummyTransport):
"""
DummyClient is a client for the 'dummy' protocol.
Since this protocol is so simple, the client and the server are identical and both just trivially subclass DummyTransport.
"""
pass
class DummyServer(DummyTransport):
"""
DummyServer is a server for the 'dummy' protocol.
Since this protocol is so simple, the client and the server are identical and both just trivially subclass DummyTransport.
"""
pass
# ---- obfsproxy-0.2.3/obfsproxy/transports/obfs2.py ----
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""
The obfs2 module implements the obfs2 protocol.
"""
import random
import hashlib
import argparse
import obfsproxy.common.aes as aes
import obfsproxy.common.serialize as srlz
import obfsproxy.common.rand as rand
import obfsproxy.transports.base as base
import obfsproxy.common.log as logging
log = logging.get_obfslogger()
MAGIC_VALUE = 0x2BF5CA7E
SEED_LENGTH = 16
MAX_PADDING = 8192
HASH_ITERATIONS = 100000
KEYLEN = 16 # is the length of the key used by E(K,s) -- that is, 16.
IVLEN = 16 # is the length of the IV used by E(K,s) -- that is, 16.
ST_WAIT_FOR_KEY = 0
ST_WAIT_FOR_PADDING = 1
ST_OPEN = 2
def h(x):
""" H(x) is SHA256 of x. """
hasher = hashlib.sha256()
hasher.update(x)
return hasher.digest()
def hn(x, n):
""" H^n(x) is H(x) called iteratively n times. """
data = x
for _ in xrange(n):
data = h(data)
return data
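Since obfs2's key schedule leans on this iterated hash, a self-contained check of the H/H^n relationship (stdlib only, mirroring the two functions above):

```python
import hashlib

def h(x):
    """H(x) is SHA256 of x."""
    return hashlib.sha256(x).digest()

def hn(x, n):
    """H^n(x) is H applied iteratively n times."""
    data = x
    for _ in range(n):
        data = h(data)
    return data
```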
class Obfs2Transport(base.BaseTransport):
"""
Obfs2Transport implements the obfs2 protocol.
"""
def __init__(self):
"""Initialize the obfs2 pluggable transport."""
# Check if the shared_secret class attribute was instantiated
# by external-mode code. If not, instantiate it now.
if not hasattr(self, 'shared_secret'):
self.shared_secret = None
# If external-mode code did not specify the number of hash
# iterations, just use the default.
if not hasattr(self, 'ss_hash_iterations'):
self.ss_hash_iterations = HASH_ITERATIONS
if self.shared_secret:
log.debug("Starting obfs2 with shared secret: %s" % self.shared_secret)
# Our state.
self.state = ST_WAIT_FOR_KEY
if self.we_are_initiator:
self.initiator_seed = rand.random_bytes(SEED_LENGTH) # Initiator's seed.
self.responder_seed = None # Responder's seed.
else:
self.initiator_seed = None # Initiator's seed.
self.responder_seed = rand.random_bytes(SEED_LENGTH) # Responder's seed
# Shared secret seed.
self.secret_seed = None
# Crypto to encrypt outgoing data.
self.send_crypto = None
# Crypto to encrypt outgoing padding.
self.send_padding_crypto = None
# Crypto to decrypt incoming data.
self.recv_crypto = None
# Crypto to decrypt incoming padding.
self.recv_padding_crypto = None
# Number of padding bytes left to read.
self.padding_left_to_read = 0
# If it's True, it means that we received upstream data before
# we had the chance to set up our crypto (after receiving the
# handshake). This means that when we set up our crypto, we
# must remember to push the cached upstream data downstream.
self.pending_data_to_send = False
@classmethod
def register_external_mode_cli(cls, subparser):
subparser.add_argument('--shared-secret', type=str, help='Shared secret')
# This is a hidden CLI argument for use by the integration
# tests: so that they don't do an insane amount of hash
# iterations.
subparser.add_argument('--ss-hash-iterations', type=int, help=argparse.SUPPRESS)
super(Obfs2Transport, cls).register_external_mode_cli(subparser)
@classmethod
def validate_external_mode_cli(cls, args):
if args.shared_secret:
cls.shared_secret = args.shared_secret
if args.ss_hash_iterations:
cls.ss_hash_iterations = args.ss_hash_iterations
super(Obfs2Transport, cls).validate_external_mode_cli(args)
def handle_socks_args(self, args):
log.debug("obfs2: Got '%s' as SOCKS arguments." % args)
# A shared secret might already be set if obfsproxy is in
# external-mode and both a cli shared-secret was specified
# _and_ a SOCKS per-connection shared secret.
if self.shared_secret:
log.notice("obfs2: Hm. Weird configuration. A shared secret "
"was specified twice. I will keep the one "
"supplied by the SOCKS arguments.")
if len(args) != 1:
err_msg = "obfs2: Wrong number of SOCKS arguments (%d) (%s)" % (len(args), str(args))
log.warning(err_msg)
raise base.SOCKSArgsError(err_msg)
if not args[0].startswith("shared-secret="):
err_msg = "obfs2: SOCKS arg is not correctly formatted (%s)" % args[0]
log.warning(err_msg)
raise base.SOCKSArgsError(err_msg)
self.shared_secret = args[0][len("shared-secret="):]
def handshake(self, circuit):
"""
Do the obfs2 handshake:
SEED | E_PAD_KEY( UINT32(MAGIC_VALUE) | UINT32(PADLEN) | WR(PADLEN) )
"""
# Generate keys for outgoing padding.
self.send_padding_crypto = \
self._derive_padding_crypto(self.initiator_seed if self.we_are_initiator else self.responder_seed,
self.send_pad_keytype)
padding_length = random.randint(0, MAX_PADDING)
seed = self.initiator_seed if self.we_are_initiator else self.responder_seed
handshake_message = seed + self.send_padding_crypto.crypt(srlz.htonl(MAGIC_VALUE) +
srlz.htonl(padding_length) +
rand.random_bytes(padding_length))
log.debug("obfs2 handshake: %s queued %d bytes (padding_length: %d).",
"initiator" if self.we_are_initiator else "responder",
len(handshake_message), padding_length)
circuit.downstream.write(handshake_message)
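The layout produced by handshake() above can be sketched standalone with struct. The MAGIC_VALUE and SEED_LENGTH values below are assumptions that follow the obfs2 spec; the real constants live at the top of this module, outside this excerpt:

```python
import os
import struct

# Assumed values, matching the obfs2 spec; check the module's own
# constants before relying on them.
MAGIC_VALUE = 0x2BF5CA7E
SEED_LENGTH = 16

def build_plaintext_handshake(seed, padding_length):
    """Lay out SEED | UINT32(MAGIC) | UINT32(PADLEN) | WR(PADLEN).

    This is the handshake *before* the E_PAD_KEY encryption step that
    handshake() applies to everything after the seed.
    """
    return (seed
            + struct.pack('!I', MAGIC_VALUE)       # htonl(MAGIC_VALUE)
            + struct.pack('!I', padding_length)    # htonl(padding_length)
            + os.urandom(padding_length))          # WR(PADLEN)

msg = build_plaintext_handshake(os.urandom(SEED_LENGTH), 5)
```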
def receivedUpstream(self, data, circuit):
"""
Got data from upstream. We need to obfuscate it and proxy it downstream.
"""
if not self.send_crypto:
log.debug("Got upstream data before doing handshake. Caching.")
self.pending_data_to_send = True
return
log.debug("obfs2 receivedUpstream: Transmitting %d bytes.", len(data))
# Encrypt and proxy them.
circuit.downstream.write(self.send_crypto.crypt(data.read()))
def receivedDownstream(self, data, circuit):
"""
Got data from downstream. We need to de-obfuscate it and
proxy it upstream.
"""
log_prefix = "obfs2 receivedDownstream" # used in logs
if self.state == ST_WAIT_FOR_KEY:
log.debug("%s: Waiting for key." % log_prefix)
if len(data) < SEED_LENGTH + 8:
log.debug("%s: Not enough bytes for key (%d)." % (log_prefix, len(data)))
return data # incomplete
if self.we_are_initiator:
self.responder_seed = data.read(SEED_LENGTH)
else:
self.initiator_seed = data.read(SEED_LENGTH)
# Now that we got the other seed, let's set up our crypto.
self.send_crypto = self._derive_crypto(self.send_keytype)
self.recv_crypto = self._derive_crypto(self.recv_keytype)
self.recv_padding_crypto = \
self._derive_padding_crypto(self.responder_seed if self.we_are_initiator else self.initiator_seed,
self.recv_pad_keytype)
# XXX maybe faster with a single d() instead of two.
magic = srlz.ntohl(self.recv_padding_crypto.crypt(data.read(4)))
padding_length = srlz.ntohl(self.recv_padding_crypto.crypt(data.read(4)))
log.debug("%s: Got %d bytes of handshake data (padding_length: %d, magic: %s)" % \
(log_prefix, len(data), padding_length, hex(magic)))
if magic != MAGIC_VALUE:
raise base.PluggableTransportError("obfs2: Corrupted magic value '%s'" % hex(magic))
if padding_length > MAX_PADDING:
raise base.PluggableTransportError("obfs2: Too big padding length '%s'" % padding_length)
self.padding_left_to_read = padding_length
self.state = ST_WAIT_FOR_PADDING
while self.padding_left_to_read:
if not data: return
n_to_drain = self.padding_left_to_read
if (self.padding_left_to_read > len(data)):
n_to_drain = len(data)
data.drain(n_to_drain)
self.padding_left_to_read -= n_to_drain
log.debug("%s: Consumed %d bytes of padding, %d still to come (%d).",
log_prefix, n_to_drain, self.padding_left_to_read, len(data))
self.state = ST_OPEN
log.debug("%s: Processing %d bytes of application data.",
log_prefix, len(data))
if self.pending_data_to_send:
log.debug("%s: We got pending data to send and our crypto is ready. Pushing!" % log_prefix)
self.receivedUpstream(circuit.upstream.buffer, circuit) # XXX touching guts of network.py
self.pending_data_to_send = False
circuit.upstream.write(self.recv_crypto.crypt(data.read()))
def _derive_crypto(self, pad_string): # XXX consider secret_seed
"""
Derive and return an obfs2 key using the pad string in 'pad_string'.
"""
secret = self.mac(pad_string,
self.initiator_seed + self.responder_seed,
self.shared_secret)
return aes.AES_CTR_128(secret[:KEYLEN], secret[KEYLEN:])
def _derive_padding_crypto(self, seed, pad_string): # XXX consider secret_seed
"""
Derive and return an obfs2 padding key using the pad string in 'pad_string'.
"""
secret = self.mac(pad_string,
seed,
self.shared_secret)
return aes.AES_CTR_128(secret[:KEYLEN], secret[KEYLEN:])
def mac(self, s, x, secret):
"""
obfs2 regular MAC: MAC(s, x) = H(s | x | s)
Optionally, if the client and server share a secret value SECRET,
they can replace the MAC function with:
MAC(s,x) = H^n(s | x | H(SECRET) | s)
where n = HASH_ITERATIONS.
"""
if secret:
secret_hash = h(secret)
return hn(s + x + secret_hash + s, self.ss_hash_iterations)
else:
return h(s + x + s)
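The helpers h() and hn() are defined elsewhere in this module; the sketch below is a hypothetical standalone version of the MAC, assuming h is SHA-256 and hn is the n-fold iterated hash, as the obfs2 spec prescribes:

```python
import hashlib

def h(x):
    # Plain SHA-256 digest.
    return hashlib.sha256(x).digest()

def hn(x, n):
    # n-fold iterated hash: H^n(x).
    for _ in range(n):
        x = h(x)
    return x

def mac(s, x, secret=None, iterations=100000):
    if secret:
        # Keyed variant: H^n(s | x | H(SECRET) | s)
        return hn(s + x + h(secret) + s, iterations)
    return h(s + x + s)  # regular variant: H(s | x | s)
```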
class Obfs2Client(Obfs2Transport):
"""
Obfs2Client is a client for the obfs2 protocol.
The client and server differ in terms of their padding strings.
"""
def __init__(self):
self.send_pad_keytype = 'Initiator obfuscation padding'
self.recv_pad_keytype = 'Responder obfuscation padding'
self.send_keytype = "Initiator obfuscated data"
self.recv_keytype = "Responder obfuscated data"
self.we_are_initiator = True
Obfs2Transport.__init__(self)
class Obfs2Server(Obfs2Transport):
"""
Obfs2Server is a server for the obfs2 protocol.
The client and server differ in terms of their padding strings.
"""
def __init__(self):
self.send_pad_keytype = 'Responder obfuscation padding'
self.recv_pad_keytype = 'Initiator obfuscation padding'
self.send_keytype = "Responder obfuscated data"
self.recv_keytype = "Initiator obfuscated data"
self.we_are_initiator = False
Obfs2Transport.__init__(self)
obfsproxy-0.2.3/obfsproxy/transports/obfs3.py
#!/usr/bin/python
# -*- coding: utf-8 -*-
"""
The obfs3 module implements the obfs3 protocol.
"""
import random
import obfsproxy.common.aes as aes
import obfsproxy.transports.base as base
import obfsproxy.transports.obfs3_dh as obfs3_dh
import obfsproxy.common.log as logging
import obfsproxy.common.hmac_sha256 as hmac_sha256
import obfsproxy.common.rand as rand
log = logging.get_obfslogger()
MAX_PADDING = 8194
PUBKEY_LEN = 192
KEYLEN = 16 # length (in bytes) of the key used by E(K,s)
HASHLEN = 32 # length of output of sha256
ST_WAIT_FOR_KEY = 0 # Waiting for public key from the other party
ST_SEARCHING_MAGIC = 1 # Waiting for magic strings from the other party
ST_OPEN = 2 # obfs3 handshake is complete. Sending application data.
class Obfs3Transport(base.BaseTransport):
"""
Obfs3Transport implements the obfs3 protocol.
"""
def __init__(self):
"""Initialize the obfs3 pluggable transport."""
# Our state.
self.state = ST_WAIT_FOR_KEY
# Uniform-DH object
self.dh = obfs3_dh.UniformDH()
# DH shared secret
self.shared_secret = None
# Bytes of padding scanned so far.
self.scanned_padding = 0
# Last padding bytes scanned.
self.last_padding_chunk = ''
# Magic value that the other party is going to send
# (initialized after deriving shared secret)
self.other_magic_value = None
# Crypto to encrypt outgoing data.
self.send_crypto = None
# Crypto to decrypt incoming data.
self.recv_crypto = None
# Buffer for the first data, Tor is trying to send but can't right now
# because we have to handle the DH handshake first.
self.queued_data = ''
# Attributes below are filled by classes that inherit Obfs3Transport.
self.send_keytype = None
self.recv_keytype = None
self.send_magic_const = None
self.recv_magic_const = None
self.we_are_initiator = None
def handshake(self, circuit):
"""
Do the obfs3 handshake:
PUBKEY | WR(PADLEN)
"""
padding_length = random.randint(0, MAX_PADDING/2)
handshake_message = self.dh.get_public() + rand.random_bytes(padding_length)
log.debug("obfs3 handshake: %s queued %d bytes (padding_length: %d) (public key: %s).",
"initiator" if self.we_are_initiator else "responder",
len(handshake_message), padding_length, repr(self.dh.get_public()))
circuit.downstream.write(handshake_message)
def receivedUpstream(self, data, circuit):
"""
Got data from upstream. We need to obfuscate it and proxy it downstream.
"""
if not self.send_crypto:
log.debug("Got upstream data before doing handshake. Caching.")
self.queued_data += data.read()
return
message = self.send_crypto.crypt(data.read())
log.debug("obfs3 receivedUpstream: Transmitting %d bytes.", len(message))
# Proxy encrypted message.
circuit.downstream.write(message)
def receivedDownstream(self, data, circuit):
"""
Got data from downstream. We need to de-obfuscate it and
proxy it upstream.
"""
if self.state == ST_WAIT_FOR_KEY: # Looking for the other peer's pubkey
self._read_handshake(data, circuit)
if self.state == ST_SEARCHING_MAGIC: # Looking for the magic string
self._scan_for_magic(data)
if self.state == ST_OPEN: # Handshake is done. Just decrypt and read application data.
log.debug("obfs3 receivedDownstream: Processing %d bytes of application data." %
len(data))
circuit.upstream.write(self.recv_crypto.crypt(data.read()))
def _read_handshake(self, data, circuit):
"""
Read handshake message, parse the other peer's public key and
set up our crypto.
"""
log_prefix = "obfs3:_read_handshake()"
if len(data) < PUBKEY_LEN:
log.debug("%s: Not enough bytes for key (%d)." % (log_prefix, len(data)))
return
log.debug("%s: Got %d bytes of handshake data (waiting for key)." % (log_prefix, len(data)))
# Get the public key from the handshake message, do the DH and
# get the shared secret.
other_pubkey = data.read(PUBKEY_LEN)
try:
self.shared_secret = self.dh.get_secret(other_pubkey)
except ValueError:
raise base.PluggableTransportError("obfs3: Corrupted public key '%s'" % repr(other_pubkey))
log.debug("Got public key: %s.\nGot shared secret: %s" %
(repr(other_pubkey), repr(self.shared_secret)))
# Set up our crypto.
self.send_crypto = self._derive_crypto(self.send_keytype)
self.recv_crypto = self._derive_crypto(self.recv_keytype)
self.other_magic_value = hmac_sha256.hmac_sha256_digest(self.shared_secret,
self.recv_magic_const)
# Send our magic value to the remote end and append the queued outgoing data.
# Padding is prepended so that the server does not just send the 32-byte magic
# in a single TCP segment.
padding_length = random.randint(0, MAX_PADDING/2)
magic = hmac_sha256.hmac_sha256_digest(self.shared_secret, self.send_magic_const)
message = rand.random_bytes(padding_length) + magic + self.send_crypto.crypt(self.queued_data)
self.queued_data = ''
log.debug("%s: Transmitting %d bytes (with magic)." % (log_prefix, len(message)))
circuit.downstream.write(message)
self.state = ST_SEARCHING_MAGIC
def _scan_for_magic(self, data):
"""
Scan 'data' for the magic string. If found, drain it and all
the padding before it. Then open the connection.
"""
log_prefix = "obfs3:_scan_for_magic()"
log.debug("%s: Searching for magic." % log_prefix)
assert(self.other_magic_value)
chunk = data.peek()
index = chunk.find(self.other_magic_value)
if index < 0:
if (len(data) > MAX_PADDING+HASHLEN):
raise base.PluggableTransportError("obfs3: Too much padding (%d)!" % len(data))
log.debug("%s: Did not find magic this time (%d)." % (log_prefix, len(data)))
return
index += len(self.other_magic_value)
log.debug("%s: Found magic. Draining %d bytes." % (log_prefix, index))
data.drain(index)
self.state = ST_OPEN
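The scan above can be restated on a plain bytes buffer (the real code works on obfsproxy's Buffer object with peek()/drain()); a hedged standalone sketch:

```python
def scan_for_magic(buf, magic, max_padding=8194, hash_len=32):
    """Return the number of bytes to drain (padding + magic), or None
    if the magic string has not fully arrived yet."""
    idx = buf.find(magic)
    if idx < 0:
        # Keep waiting, unless the peer has already sent more than the
        # maximum possible padding plus the 32-byte magic.
        if len(buf) > max_padding + hash_len:
            raise ValueError("obfs3: Too much padding (%d)!" % len(buf))
        return None
    return idx + len(magic)
```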
def _derive_crypto(self, pad_string):
"""
Derive and return an obfs3 key using the pad string in 'pad_string'.
"""
secret = hmac_sha256.hmac_sha256_digest(self.shared_secret, pad_string)
return aes.AES_CTR_128(secret[:KEYLEN], secret[KEYLEN:])
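Assuming hmac_sha256_digest is standard HMAC-SHA256 (the stdlib hmac module provides the same construction), the derivation above splits the 32-byte digest into an AES-128 key and an initial counter block:

```python
import hmac
import hashlib

def derive_key_and_ctr(shared_secret, pad_string):
    # HMAC-SHA256 yields 32 bytes: the first 16 become the AES-128 key,
    # the last 16 the initial CTR-mode counter block.
    digest = hmac.new(shared_secret, pad_string, hashlib.sha256).digest()
    return digest[:16], digest[16:]

key, ctr = derive_key_and_ctr(b"example shared secret",
                              b"Initiator obfuscated data")
```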
class Obfs3Client(Obfs3Transport):
"""
Obfs3Client is a client for the obfs3 protocol.
The client and server differ in terms of their padding strings.
"""
def __init__(self):
Obfs3Transport.__init__(self)
self.send_keytype = "Initiator obfuscated data"
self.recv_keytype = "Responder obfuscated data"
self.send_magic_const = "Initiator magic"
self.recv_magic_const = "Responder magic"
self.we_are_initiator = True
class Obfs3Server(Obfs3Transport):
"""
Obfs3Server is a server for the obfs3 protocol.
The client and server differ in terms of their padding strings.
"""
def __init__(self):
Obfs3Transport.__init__(self)
self.send_keytype = "Responder obfuscated data"
self.recv_keytype = "Initiator obfuscated data"
self.send_magic_const = "Responder magic"
self.recv_magic_const = "Initiator magic"
self.we_are_initiator = False
obfsproxy-0.2.3/obfsproxy/transports/obfs3_dh.py
import binascii
import obfsproxy.common.rand as rand
def int_to_bytes(lvalue, width):
fmt = '%%.%dx' % (2*width)
return binascii.unhexlify(fmt % (lvalue & ((1L<<8*width)-1)))
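For reference, a Python 3 friendly restatement of int_to_bytes (the 1L literal above is Python 2 only); it masks the value down to 'width' bytes and zero-pads on the left:

```python
import binascii

def int_to_bytes(lvalue, width):
    # Mask to 'width' bytes, format as zero-padded hex, then unhexlify.
    fmt = '%%.%dx' % (2 * width)
    return binascii.unhexlify(fmt % (lvalue & ((1 << (8 * width)) - 1)))
```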
class UniformDH:
"""
This is a class that implements a DH handshake that uses public
keys that are indistinguishable from 192-byte random strings.
The idea (and even the implementation) was suggested by Ian
Goldberg in:
https://lists.torproject.org/pipermail/tor-dev/2012-December/004245.html
https://lists.torproject.org/pipermail/tor-dev/2012-December/004248.html
Attributes:
mod, the modulus of our DH group.
g, the generator of our DH group.
group_len, the size of the group in bytes.
priv_str, a byte string representing our DH private key.
priv, our DH private key as an integer.
pub_str, a byte string representing our DH public key.
pub, our DH public key as an integer.
shared_secret, our DH shared secret.
"""
# 1536-bit MODP Group from RFC3526
mod = int(
"""FFFFFFFF FFFFFFFF C90FDAA2 2168C234 C4C6628B 80DC1CD1
29024E08 8A67CC74 020BBEA6 3B139B22 514A0879 8E3404DD
EF9519B3 CD3A431B 302B0A6D F25F1437 4FE1356D 6D51C245
E485B576 625E7EC6 F44C42E9 A637ED6B 0BFF5CB6 F406B7ED
EE386BFB 5A899FA5 AE9F2411 7C4B1FE6 49286651 ECE45B3D
C2007CB8 A163BF05 98DA4836 1C55D39A 69163FA8 FD24CF5F
83655D23 DCA3AD96 1C62F356 208552BB 9ED52907 7096966D
670C354E 4ABC9804 F1746C08 CA237327 FFFFFFFF FFFFFFFF""".replace(' ','').replace('\n','').replace('\t',''), 16)
g = 2
group_len = 192 # bytes (1536-bits)
def __init__(self):
# Generate private key
self.priv_str = rand.random_bytes(self.group_len)
self.priv = int(binascii.hexlify(self.priv_str), 16)
# Make the private key even
flip = self.priv % 2
self.priv -= flip
# Generate public key
self.pub = pow(self.g, self.priv, self.mod)
if flip == 1:
self.pub = self.mod - self.pub
self.pub_str = int_to_bytes(self.pub, self.group_len)
self.shared_secret = None
def get_public(self):
return self.pub_str
def get_secret(self, their_pub_str):
"""
Given the public key of the other party as a string of bytes,
calculate our shared secret.
This might raise a ValueError since 'their_pub_str' is
attacker controlled.
"""
their_pub = int(binascii.hexlify(their_pub_str), 16)
self.shared_secret = pow(their_pub, self.priv, self.mod)
return int_to_bytes(self.shared_secret, self.group_len)
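The even-exponent flip trick in __init__ can be shown end to end. The sketch below (hypothetical names, same RFC 3526 group as the class above) lets two parties complete the exchange and checks that they derive the same secret: because both private keys are even, (p - g^a)^b == (g^a)^b (mod p), so publishing either x or p - x does not change the result.

```python
import os
import binascii

# 1536-bit MODP group from RFC 3526 (same group as the class above).
MOD = int(
    "FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
    "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
    "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
    "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
    "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
    "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
    "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
    "670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF", 16)
G = 2
GROUP_LEN = 192  # bytes

def _to_bytes(n, width):
    return binascii.unhexlify(('%%.%dx' % (2 * width)) % n)

class MiniUniformDH(object):
    def __init__(self):
        self.priv = int(binascii.hexlify(os.urandom(GROUP_LEN)), 16)
        flip = self.priv % 2
        self.priv -= flip               # private key is made even
        pub = pow(G, self.priv, MOD)
        if flip:                        # publish either x or p - x
            pub = MOD - pub
        self.pub = _to_bytes(pub, GROUP_LEN)

    def secret(self, their_pub):
        theirs = int(binascii.hexlify(their_pub), 16)
        # Even exponent kills the sign ambiguity introduced by the flip.
        return _to_bytes(pow(theirs, self.priv, MOD), GROUP_LEN)

alice, bob = MiniUniformDH(), MiniUniformDH()
```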
obfsproxy-0.2.3/obfsproxy/transports/transports.py
# XXX modulify transports and move this to a single import
import obfsproxy.transports.dummy as dummy
import obfsproxy.transports.b64 as b64
import obfsproxy.transports.obfs2 as obfs2
import obfsproxy.transports.obfs3 as obfs3
transports = { 'dummy' : {'base': dummy.DummyTransport, 'client' : dummy.DummyClient, 'server' : dummy.DummyServer },
'b64' : {'base': b64.B64Transport, 'client' : b64.B64Client, 'server' : b64.B64Server },
'obfs2' : {'base': obfs2.Obfs2Transport, 'client' : obfs2.Obfs2Client, 'server' : obfs2.Obfs2Server },
'obfs3' : {'base': obfs3.Obfs3Transport, 'client' : obfs3.Obfs3Client, 'server' : obfs3.Obfs3Server } }
def get_transport_class(name, role):
# Rewrite equivalent roles.
if role == 'socks':
role = 'client'
elif role == 'ext_server':
role = 'server'
# Find the correct class
if (name in transports) and (role in transports[name]):
return transports[name][role]
else:
raise TransportNotFound
class TransportNotFound(Exception): pass
obfsproxy-0.2.3/setup.py
#!/usr/bin/env python
import sys
from setuptools import setup, find_packages
import versioneer
versioneer.versionfile_source = 'obfsproxy/_version.py'
versioneer.versionfile_build = 'obfsproxy/_version.py'
versioneer.tag_prefix = 'obfsproxy-' # tags are like 1.2.0
versioneer.parentdir_prefix = 'obfsproxy-' # dirname like 'myproject-1.2.0'
setup(
name = "obfsproxy",
author = "asn",
author_email = "asn@torproject.org",
description = ("A pluggable transport proxy written in Python"),
license = "BSD",
keywords = ['tor', 'obfuscation', 'twisted'],
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
packages = find_packages(),
entry_points = {
'console_scripts': [
'obfsproxy = obfsproxy.pyobfsproxy:run'
]
},
install_requires = [
'setuptools',
'PyCrypto',
'Twisted',
'argparse',
'pyptlib >= 0.0.4'
],
)
obfsproxy-0.2.3/setup_py2exe.py
#!/usr/bin/env python
from distutils.core import setup
import py2exe
import os
topdir = "py2exe_bundle"
build_path = os.path.join(topdir, "build")
dist_path = os.path.join(topdir, "dist")
setup(
console=["bin/obfsproxy"],
zipfile="obfsproxy.zip",
options={
"build": {"build_base": build_path},
"py2exe": {
"includes": ["twisted", "pyptlib", "Crypto"],
"dist_dir": dist_path,
}
}
)
obfsproxy-0.2.3/versioneer.py
#! /usr/bin/python
"""versioneer.py
(like a rocketeer, but for versions)
* https://github.com/warner/python-versioneer
* Brian Warner
* License: Public Domain
* Version: 0.7+
This file helps distutils-based projects manage their version number by just
creating version-control tags.
For developers who work from a VCS-generated tree (e.g. 'git clone' etc),
each 'setup.py version', 'setup.py build', 'setup.py sdist' will compute a
version number by asking your version-control tool about the current
checkout. The version number will be written into a generated _version.py
file of your choosing, where it can be included by your __init__.py
For users who work from a VCS-generated tarball (e.g. 'git archive'), it will
compute a version number by looking at the name of the directory created when
the tarball is unpacked. This conventionally includes both the name of the
project and a version number.
For users who work from a tarball built by 'setup.py sdist', it will get a
version number from a previously-generated _version.py file.
As a result, loading code directly from the source tree will not result in a
real version. If you want real versions from VCS trees (where you frequently
update from the upstream repository, or do new development), you will need to
do a 'setup.py version' after each update, and load code from the build/
directory.
You need to provide this code with a few configuration values:
versionfile_source:
A project-relative pathname into which the generated version strings
should be written. This is usually a _version.py next to your project's
main __init__.py file. If your project uses src/myproject/__init__.py,
this should be 'src/myproject/_version.py'. This file should be checked
in to your VCS as usual: the copy created below by 'setup.py
update_files' will include code that parses expanded VCS keywords in
generated tarballs. The 'build' and 'sdist' commands will replace it with
a copy that has just the calculated version string.
versionfile_build:
Like versionfile_source, but relative to the build directory instead of
the source directory. These will differ when your setup.py uses
'package_dir='. If you have package_dir={'myproject': 'src/myproject'},
then you will probably have versionfile_build='myproject/_version.py' and
versionfile_source='src/myproject/_version.py'.
tag_prefix: a string, like 'PROJECTNAME-', which appears at the start of all
VCS tags. If your tags look like 'myproject-1.2.0', then you
should use tag_prefix='myproject-'. If you use unprefixed tags
like '1.2.0', this should be an empty string.
parentdir_prefix: a string, frequently the same as tag_prefix, which
appears at the start of all unpacked tarball filenames. If
your tarball unpacks into 'myproject-1.2.0', this should
be 'myproject-'.
To use it:
1: include this file in the top level of your project
2: make the following changes to the top of your setup.py:
import versioneer
versioneer.versionfile_source = 'src/myproject/_version.py'
versioneer.versionfile_build = 'myproject/_version.py'
versioneer.tag_prefix = '' # tags are like 1.2.0
versioneer.parentdir_prefix = 'myproject-' # dirname like 'myproject-1.2.0'
3: add the following arguments to the setup() call in your setup.py:
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
4: run 'setup.py update_files', which will create _version.py, and will
append the following to your __init__.py:
from _version import __version__
5: modify your MANIFEST.in to include versioneer.py
6: add both versioneer.py and the generated _version.py to your VCS
"""
import os, sys, re
from distutils.core import Command
from distutils.command.sdist import sdist as _sdist
from distutils.command.build import build as _build
versionfile_source = None
versionfile_build = None
tag_prefix = None
parentdir_prefix = None
VCS = "git"
IN_LONG_VERSION_PY = False
LONG_VERSION_PY = '''
IN_LONG_VERSION_PY = True
# This file helps to compute a version number in source trees obtained from
# git-archive tarball (such as those provided by githubs download-from-tag
# feature). Distribution tarballs (build by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
# This file is released into the public domain. Generated by
# versioneer-0.7+ (https://github.com/warner/python-versioneer)
# these strings will be replaced by git during git-archive
git_refnames = "%(DOLLAR)sFormat:%%d%(DOLLAR)s"
git_full = "%(DOLLAR)sFormat:%%H%(DOLLAR)s"
import subprocess
import sys
def run_command(args, cwd=None, verbose=False):
try:
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen(args, stdout=subprocess.PIPE, cwd=cwd)
except EnvironmentError:
e = sys.exc_info()[1]
if verbose:
print("unable to run %%s" %% args[0])
print(e)
return None
stdout = p.communicate()[0].strip()
if sys.version >= '3':
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %%s (error)" %% args[0])
return None
return stdout
import sys
import re
import os.path
def get_expanded_variables(versionfile_source):
# the code embedded in _version.py can just fetch the value of these
# variables. When used from setup.py, we don't want to import
# _version.py, so we do it with a regexp instead. This function is not
# used from _version.py.
variables = {}
try:
for line in open(versionfile_source,"r").readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["full"] = mo.group(1)
except EnvironmentError:
pass
return variables
def versions_from_expanded_variables(variables, tag_prefix, verbose=False):
refnames = variables["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("variables are unexpanded, not using")
return {} # unexpanded, so not in an unpacked git-archive tarball
refs = set([r.strip() for r in refnames.strip("()").split(",")])
for ref in list(refs):
if not re.search(r'\d', ref):
if verbose:
print("discarding '%%s', no digits" %% ref)
refs.discard(ref)
# Assume all version tags have a digit. git's %%d expansion
# behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us
# distinguish between branches and tags. By ignoring refnames
# without digits, we filter out many common branch names like
# "release" and "stabilization", as well as "HEAD" and "master".
if verbose:
print("remaining refs: %%s" %% ",".join(sorted(refs)))
for ref in sorted(refs):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %%s" %% r)
return { "version": r,
"full": variables["full"].strip() }
# no suitable tags, so we use the full revision id
if verbose:
print("no suitable tags, using full revision id")
return { "version": variables["full"].strip(),
"full": variables["full"].strip() }
def versions_from_vcs(tag_prefix, versionfile_source, verbose=False):
# this runs 'git' from the root of the source tree. That either means
# someone ran a setup.py command (and this code is in versioneer.py, so
# IN_LONG_VERSION_PY=False, thus the containing directory is the root of
# the source tree), or someone ran a project-specific entry point (and
# this code is in _version.py, so IN_LONG_VERSION_PY=True, thus the
# containing directory is somewhere deeper in the source tree). This only
# gets called if the git-archive 'subst' variables were *not* expanded,
# and _version.py hasn't already been rewritten with a short version
# string, meaning we're inside a checked out source tree.
try:
here = os.path.abspath(__file__)
except NameError:
# some py2exe/bbfreeze/non-CPython implementations don't do __file__
return {} # not always correct
# versionfile_source is the relative path from the top of the source tree
# (where the .git directory might live) to this file. Invert this to find
# the root from __file__.
root = here
if IN_LONG_VERSION_PY:
for i in range(len(versionfile_source.split("/"))):
root = os.path.dirname(root)
else:
root = os.path.dirname(here)
if not os.path.exists(os.path.join(root, ".git")):
if verbose:
print("no .git in %%s" %% root)
return {}
GIT = "git"
if sys.platform == "win32":
GIT = "git.cmd"
stdout = run_command([GIT, "describe", "--tags", "--dirty", "--always"],
cwd=root)
if stdout is None:
return {}
if not stdout.startswith(tag_prefix):
if verbose:
print("tag '%%s' doesn't start with prefix '%%s'" %% (stdout, tag_prefix))
return {}
tag = stdout[len(tag_prefix):]
stdout = run_command([GIT, "rev-parse", "HEAD"], cwd=root)
if stdout is None:
return {}
full = stdout.strip()
if tag.endswith("-dirty"):
full += "-dirty"
return {"version": tag, "full": full}
def versions_from_parentdir(parentdir_prefix, versionfile_source, verbose=False):
if IN_LONG_VERSION_PY:
# We're running from _version.py. If it's from a source tree
# (execute-in-place), we can work upwards to find the root of the
# tree, and then check the parent directory for a version string. If
# it's in an installed application, there's no hope.
try:
here = os.path.abspath(__file__)
except NameError:
# py2exe/bbfreeze/non-CPython don't have __file__
return {} # without __file__, we have no hope
# versionfile_source is the relative path from the top of the source
# tree to _version.py. Invert this to find the root from __file__.
root = here
for i in range(len(versionfile_source.split("/"))):
root = os.path.dirname(root)
else:
# we're running from versioneer.py, which means we're running from
# the setup.py in a source tree. sys.argv[0] is setup.py in the root.
here = os.path.abspath(sys.argv[0])
root = os.path.dirname(here)
# Source tarballs conventionally unpack into a directory that includes
# both the project name and a version string.
dirname = os.path.basename(root)
if not dirname.startswith(parentdir_prefix):
if verbose:
print("guessing rootdir is '%%s', but '%%s' doesn't start with prefix '%%s'" %%
(root, dirname, parentdir_prefix))
return None
return {"version": dirname[len(parentdir_prefix):], "full": ""}
tag_prefix = "%(TAG_PREFIX)s"
parentdir_prefix = "%(PARENTDIR_PREFIX)s"
versionfile_source = "%(VERSIONFILE_SOURCE)s"
def get_versions(default={"version": "unknown", "full": ""}, verbose=False):
variables = { "refnames": git_refnames, "full": git_full }
ver = versions_from_expanded_variables(variables, tag_prefix, verbose)
if not ver:
ver = versions_from_vcs(tag_prefix, versionfile_source, verbose)
if not ver:
ver = versions_from_parentdir(parentdir_prefix, versionfile_source,
verbose)
if not ver:
ver = default
return ver
'''
import subprocess
import sys
def run_command(args, cwd=None, verbose=False):
try:
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen(args, stdout=subprocess.PIPE, cwd=cwd)
except EnvironmentError:
e = sys.exc_info()[1]
if verbose:
print("unable to run %s" % args[0])
print(e)
return None
stdout = p.communicate()[0].strip()
if sys.version >= '3':
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %s (error)" % args[0])
return None
return stdout
import sys
import re
import os.path
def get_expanded_variables(versionfile_source):
# the code embedded in _version.py can just fetch the value of these
# variables. When used from setup.py, we don't want to import
# _version.py, so we do it with a regexp instead. This function is not
# used from _version.py.
variables = {}
try:
for line in open(versionfile_source,"r").readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["full"] = mo.group(1)
except EnvironmentError:
pass
return variables
def versions_from_expanded_variables(variables, tag_prefix, verbose=False):
refnames = variables["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("variables are unexpanded, not using")
return {} # unexpanded, so not in an unpacked git-archive tarball
refs = set([r.strip() for r in refnames.strip("()").split(",")])
for ref in list(refs):
if not re.search(r'\d', ref):
if verbose:
print("discarding '%s', no digits" % ref)
refs.discard(ref)
# Assume all version tags have a digit. git's %d expansion
# behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us
# distinguish between branches and tags. By ignoring refnames
# without digits, we filter out many common branch names like
# "release" and "stabilization", as well as "HEAD" and "master".
if verbose:
print("remaining refs: %s" % ",".join(sorted(refs)))
for ref in sorted(refs):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %s" % r)
return { "version": r,
"full": variables["full"].strip() }
# no suitable tags, so we use the full revision id
if verbose:
print("no suitable tags, using full revision id")
return { "version": variables["full"].strip(),
"full": variables["full"].strip() }
def versions_from_vcs(tag_prefix, versionfile_source, verbose=False):
    # this runs 'git' from the root of the source tree. That either means
    # someone ran a setup.py command (and this code is in versioneer.py, so
    # IN_LONG_VERSION_PY=False, thus the containing directory is the root of
    # the source tree), or someone ran a project-specific entry point (and
    # this code is in _version.py, so IN_LONG_VERSION_PY=True, thus the
    # containing directory is somewhere deeper in the source tree). This only
    # gets called if the git-archive 'subst' variables were *not* expanded,
    # and _version.py hasn't already been rewritten with a short version
    # string, meaning we're inside a checked out source tree.

    try:
        here = os.path.abspath(__file__)
    except NameError:
        # some py2exe/bbfreeze/non-CPython implementations don't do __file__
        return {} # not always correct

    # versionfile_source is the relative path from the top of the source tree
    # (where the .git directory might live) to this file. Invert this to find
    # the root from __file__.
    root = here
    if IN_LONG_VERSION_PY:
        for i in range(len(versionfile_source.split("/"))):
            root = os.path.dirname(root)
    else:
        root = os.path.dirname(here)
    if not os.path.exists(os.path.join(root, ".git")):
        if verbose:
            print("no .git in %s" % root)
        return {}

    GIT = "git"
    if sys.platform == "win32":
        GIT = "git.cmd"
    stdout = run_command([GIT, "describe", "--tags", "--dirty", "--always"],
                         cwd=root)
    if stdout is None:
        return {}
    if not stdout.startswith(tag_prefix):
        if verbose:
            print("tag '%s' doesn't start with prefix '%s'" % (stdout, tag_prefix))
        return {}
    tag = stdout[len(tag_prefix):]
    stdout = run_command([GIT, "rev-parse", "HEAD"], cwd=root)
    if stdout is None:
        return {}
    full = stdout.strip()
    if tag.endswith("-dirty"):
        full += "-dirty"
    return {"version": tag, "full": full}
def versions_from_parentdir(parentdir_prefix, versionfile_source, verbose=False):
    if IN_LONG_VERSION_PY:
        # We're running from _version.py. If it's from a source tree
        # (execute-in-place), we can work upwards to find the root of the
        # tree, and then check the parent directory for a version string. If
        # it's in an installed application, there's no hope.
        try:
            here = os.path.abspath(__file__)
        except NameError:
            # py2exe/bbfreeze/non-CPython don't have __file__
            return {} # without __file__, we have no hope
        # versionfile_source is the relative path from the top of the source
        # tree to _version.py. Invert this to find the root from __file__.
        root = here
        for i in range(len(versionfile_source.split("/"))):
            root = os.path.dirname(root)
    else:
        # we're running from versioneer.py, which means we're running from
        # the setup.py in a source tree. sys.argv[0] is setup.py in the root.
        here = os.path.abspath(sys.argv[0])
        root = os.path.dirname(here)

    # Source tarballs conventionally unpack into a directory that includes
    # both the project name and a version string.
    dirname = os.path.basename(root)
    if not dirname.startswith(parentdir_prefix):
        if verbose:
            print("guessing rootdir is '%s', but '%s' doesn't start with prefix '%s'" %
                  (root, dirname, parentdir_prefix))
        return None
    return {"version": dirname[len(parentdir_prefix):], "full": ""}
import sys

def do_vcs_install(versionfile_source, ipy):
    GIT = "git"
    if sys.platform == "win32":
        GIT = "git.cmd"
    run_command([GIT, "add", "versioneer.py"])
    run_command([GIT, "add", versionfile_source])
    run_command([GIT, "add", ipy])
    present = False
    try:
        f = open(".gitattributes", "r")
        for line in f.readlines():
            if line.strip().startswith(versionfile_source):
                if "export-subst" in line.strip().split()[1:]:
                    present = True
        f.close()
    except EnvironmentError:
        pass
    if not present:
        f = open(".gitattributes", "a+")
        f.write("%s export-subst\n" % versionfile_source)
        f.close()
        run_command([GIT, "add", ".gitattributes"])
SHORT_VERSION_PY = """
# This file was generated by 'versioneer.py' (0.7+) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.

version_version = '%(version)s'
version_full = '%(full)s'
def get_versions(default={}, verbose=False):
    return {'version': version_version, 'full': version_full}
"""
DEFAULT = {"version": "unknown", "full": "unknown"}
def versions_from_file(filename):
    versions = {}
    try:
        f = open(filename)
    except EnvironmentError:
        return versions
    for line in f.readlines():
        mo = re.match("version_version = '([^']+)'", line)
        if mo:
            versions["version"] = mo.group(1)
        mo = re.match("version_full = '([^']+)'", line)
        if mo:
            versions["full"] = mo.group(1)
    f.close()
    return versions
def write_to_version_file(filename, versions):
    f = open(filename, "w")
    f.write(SHORT_VERSION_PY % versions)
    f.close()
    print("set %s to '%s'" % (filename, versions["version"]))
def get_best_versions(versionfile, tag_prefix, parentdir_prefix,
                      default=DEFAULT, verbose=False):
    # returns dict with two keys: 'version' and 'full'
    #
    # extract version from first of _version.py, 'git describe', parentdir.
    # This is meant to work for developers using a source checkout, for users
    # of a tarball created by 'setup.py sdist', and for users of a
    # tarball/zipball created by 'git archive' or github's download-from-tag
    # feature.

    variables = get_expanded_variables(versionfile_source)
    if variables:
        ver = versions_from_expanded_variables(variables, tag_prefix)
        if ver:
            if verbose: print("got version from expanded variable %s" % ver)
            return ver

    ver = versions_from_file(versionfile)
    if ver:
        if verbose: print("got version from file %s %s" % (versionfile, ver))
        return ver

    ver = versions_from_vcs(tag_prefix, versionfile_source, verbose)
    if ver:
        if verbose: print("got version from git %s" % ver)
        return ver

    ver = versions_from_parentdir(parentdir_prefix, versionfile_source, verbose)
    if ver:
        if verbose: print("got version from parentdir %s" % ver)
        return ver
if verbose: print("got version from default %s" % ver)
return default
def get_versions(default=DEFAULT, verbose=False):
    assert versionfile_source is not None, "please set versioneer.versionfile_source"
    assert tag_prefix is not None, "please set versioneer.tag_prefix"
    assert parentdir_prefix is not None, "please set versioneer.parentdir_prefix"
    return get_best_versions(versionfile_source, tag_prefix, parentdir_prefix,
                             default=default, verbose=verbose)

def get_version(verbose=False):
    return get_versions(verbose=verbose)["version"]
class cmd_version(Command):
    description = "report generated version string"
    user_options = []
    boolean_options = []
    def initialize_options(self):
        pass
    def finalize_options(self):
        pass
    def run(self):
        ver = get_version(verbose=True)
        print("Version is currently: %s" % ver)

class cmd_build(_build):
    def run(self):
        versions = get_versions(verbose=True)
        _build.run(self)
        # now locate _version.py in the new build/ directory and replace it
        # with an updated value
        target_versionfile = os.path.join(self.build_lib, versionfile_build)
        print("UPDATING %s" % target_versionfile)
        os.unlink(target_versionfile)
        f = open(target_versionfile, "w")
        f.write(SHORT_VERSION_PY % versions)
        f.close()
class cmd_sdist(_sdist):
    def run(self):
        versions = get_versions(verbose=True)
        self._versioneer_generated_versions = versions
        # unless we update this, the command will keep using the old version
        self.distribution.metadata.version = versions["version"]
        return _sdist.run(self)

    def make_release_tree(self, base_dir, files):
        _sdist.make_release_tree(self, base_dir, files)
        # now locate _version.py in the new base_dir directory (remembering
        # that it may be a hardlink) and replace it with an updated value
        target_versionfile = os.path.join(base_dir, versionfile_source)
        print("UPDATING %s" % target_versionfile)
        os.unlink(target_versionfile)
        f = open(target_versionfile, "w")
        f.write(SHORT_VERSION_PY % self._versioneer_generated_versions)
        f.close()
INIT_PY_SNIPPET = """
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
"""
class cmd_update_files(Command):
    description = "modify __init__.py and create _version.py"
    user_options = []
    boolean_options = []
    def initialize_options(self):
        pass
    def finalize_options(self):
        pass
    def run(self):
        ipy = os.path.join(os.path.dirname(versionfile_source), "__init__.py")
        print(" creating %s" % versionfile_source)
        f = open(versionfile_source, "w")
        f.write(LONG_VERSION_PY % {"DOLLAR": "$",
                                   "TAG_PREFIX": tag_prefix,
                                   "PARENTDIR_PREFIX": parentdir_prefix,
                                   "VERSIONFILE_SOURCE": versionfile_source,
                                   })
        f.close()
        try:
            old = open(ipy, "r").read()
        except EnvironmentError:
            old = ""
        if INIT_PY_SNIPPET not in old:
            print(" appending to %s" % ipy)
            f = open(ipy, "a")
            f.write(INIT_PY_SNIPPET)
            f.close()
        else:
            print(" %s unmodified" % ipy)
        do_vcs_install(versionfile_source, ipy)
def get_cmdclass():
    return {'version': cmd_version,
            'update_files': cmd_update_files,
            'build': cmd_build,
            'sdist': cmd_sdist,
            }
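
# A project's setup.py typically wires these commands in roughly as follows
# (a sketch; the module path and prefixes below are hypothetical, not taken
# from this source tree):
#
#   import versioneer
#   versioneer.versionfile_source = "obfsproxy/_version.py"
#   versioneer.versionfile_build = "obfsproxy/_version.py"
#   versioneer.tag_prefix = "obfsproxy-"
#   versioneer.parentdir_prefix = "obfsproxy-"
#
#   from distutils.core import setup
#   setup(name="obfsproxy",
#         version=versioneer.get_version(),
#         cmdclass=versioneer.get_cmdclass())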