==> obfsproxy-0.2.13/ChangeLog <==
Changes in version 0.2.13 - 2014-12-31:
- Correctly handle the ScrambleSuit password being missing entirely
when running in managed mode. Patch by Yawning Angel. Fixes #13587.
- Make ScrambleSuit servers cache the HMAC of their own UniformDH messages.
Fixes #14038.
- Improve handling of failures during command line parsing.
Patch by Colin Teberg. Fixes #9823.
Changes in version 0.2.12 - 2014-07-22:
- Add txsocksx and parsley as dependencies in py2exe. Fixes bug #12381.
Changes in version 0.2.11 - 2014-07-16:
- Write a 'server_password' file with the scramblesuit password to
make it easier for bridge operators to find their password. Patch
by Philipp Winter. Fixes #10887.
- Add --password-file switch for providing scramblesuit passwords.
Patch by irregulator. Fixes #8040.
- Fix bug to prevent denial-of-service attacks targeting the
authentication mechanism. Fixes #11092. The problem was pointed
out by Yawning Angel.
- Improve scramblesuit spec and fix several issues. Fixes bug #10893.
Thanks to Yawning Angel.
- Improve scramblesuit's packet morphing algorithm. Fixes bug #10991.
Thanks to Yawning Angel.
- When in external mode, only call the setup() method of the
transport we are going to launch, not of all transports.
Changes in version 0.2.10 - 2014-06-05:
- Don't set the transport's circuit to None if we are closing
a circuit; it was causing problems in obfs3. Fixes #11820.
- Make sure that we don't try to do networking operations on a
connection that should be closed. Fixes #11769.
- Only print ScrambleSuit disclaimer on startup (instead of
printing it for every connection). Fixes #11768.
- Log the pyptlib version in startup. Patch by John Giannelos. Fixes #9878.
Changes in version 0.2.9 - 2014-05-01:
- Support connecting over an HTTPS CONNECT proxy. Patch by Yawning Angel.
Fixes #11409.
- Support connecting over SOCKS4(a) and SOCKS5. Based on the patch by
Arturo Filastò with changes by Yawning Angel. Fixes #8956.
- Fix the shebang of the obfsproxy executable so that it
explicitly picks python2. Makes obfsproxy work on platforms like
Arch Linux where python3 is the default. Fixes part of #11190.
Patch by Yawning Angel.
- The AES-CTR counter of obfs2 and obfs3 now wraps around to 0.
Since the counter value was derived from the protocol, there
was an unlikely chance that PyCrypto would raise an OverflowError
exception. Spotted by Yawning Angel. Fixes #11611.
Changes in version 0.2.8 - 2014-03-28:
- Fix a bug in the SOCKS5 server code. An exception would be raised on systems
with Python < 2.7.4. Patch by Yawning Angel. Fixes #11329.
- Obfsproxy can now resolve bridge addresses that were provided as
DNS hostnames. Fix suggested by Yawning Angel. Resolves #10316.
Changes in version 0.2.7 - 2014-03-15
- Support SOCKS5 instead of SOCKS4. Patch by Yawning Angel. Fixes #9221.
- Fix a scramblesuit bug that makes bridges reject a session
ticket connection from already seen clients. Diagnosed and patched
by Yawning Angel. Fixes #11100.
- obfs3 now uses twisted.internet.threads.deferToThread to process
the key exchange outside of the main event loop.
Patch by Yawning Angel. Fixes #11015.
- Support gmpy2 if it is available in addition to gmpy.
Patch by Yawning Angel.
Changes in version 0.2.6 - 2014-02-03
- Stop having 'gmpy' as a hard dependency by removing it from setup.py.
Now gmpy is only used if it was already installed on the system.
Changes in version 0.2.5 - 2014-02-03
- Use gmpy's modular exponentiation function since it's more efficient.
Fixes #10031 and adds gmpy as a dependency. Patch by Philipp Winter.
- Add a transport method called setup() that gets called on obfsproxy
startup and can be used by transports for expensive initializations.
Patch by David Stainton.
- Add a transport method called get_public_server_options() that allows
transports to filter server-side options that should not be announced
to BridgeDB (because they might leak filesystem paths etc.) .
Patch by David Stainton. Fixes #10243.
- Make the circuit an attribute of the transport, rather than passing it
as a method argument. Patch by Ximin Luo. Fixes #10342.
- Rename the handshake() method to circuitConnected().
Patch by Ximin Luo.
- Add ScrambleSuit as transport protocol. Fixes #10598.
Changes in version 0.2.4 - 2013-09-30
- Make pluggable transports aware of where they should store state
in the filesystem. Also introduce --data-dir CLI switch to specify
the path in external mode. Fixes #9815. Patch by Philipp Winter.
- Pass server-side parameters (like shared-secrets) from Tor to the
transports. Fixes #8979.
Changes in version 0.2.3 - 2013-09-11
- Use the new pyptlib API (>= pyptlib-0.0.4). Patch by Ximin Luo.
- Add support for sending the pluggable transport name to Tor (using
the Extended ORPort) so that it can be considered in the statistics.
- Remove licenses of dependencies from the LICENSE file. (They were
moved to be with browser bundle packaging scripts.)
- Fix a bug in the SOCKS code. An assertion would trigger if
the SOCKS destination sent traffic before obfsproxy did.
Fixes #9239.
- Add a --version switch. Fixes #9255.
Changes in version 0.2.2 - 2013-04-15
- Fix a bug where the CLI compatibility patch that was introduced
in 0.2.1 was placed in the wrong place, making it useless when
obfsproxy gets installed. Patch by Lunar.
- Add dependencies to the setup script.
- Update the HOWTO to use pip.
Changes in version 0.2.1 - 2013-04-08
- Rename project from "pyobfsproxy" to "obfsproxy"!
- Add licenses of dependencies to the LICENSE file.
- Add support for logging exceptions to logfiles.
- Add shared secret support to obfs2.
- Add support for per-connection SOCKS arguments.
- Add a setup script for py2exe.
- Slightly improve the executable script.
- Improve command line interface compatibility between C-obfsproxy
and Python-obfsproxy by supporting the "--managed" switch.
Changes in version 0.0.2 - 2013-02-17
- Add some more files to the MANIFEST.in.
Changes in version 0.0.1 - 2013-02-15
- Initial release.
==> obfsproxy-0.2.13/INSTALL <==
Just run:
# python setup.py install
You will need to run the above command as root. It will install
obfsproxy somewhere in your $PATH. If you don't want that, you can
try to run
$ python setup.py install --user
as your regular user, and setup.py will install obfsproxy somewhere
in your home directory.
==> obfsproxy-0.2.13/LICENSE <==
This is the license of the obfsproxy software.
Copyright 2013 George Kadianakis
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the names of the copyright owners nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
==> obfsproxy-0.2.13/PKG-INFO <==
Metadata-Version: 1.0
Name: obfsproxy
Version: 0.2.13
Summary: A pluggable transport proxy written in Python
Home-page: UNKNOWN
Author: asn
Author-email: asn@torproject.org
License: BSD
Description: UNKNOWN
Keywords: tor,obfuscation,twisted
Platform: UNKNOWN
==> obfsproxy-0.2.13/README <==
Obfsproxy is a pluggable transport proxy written in Python.
See doc/HOWTO.txt for installation instructions.
If you want to write a pluggable transport, see the code of the
already existing transports in obfsproxy/transports/. Unfortunately,
a coding guide for pluggable transport authors does not exist at the
moment!
==> obfsproxy-0.2.13/TODO <==
* Write more transports.
* Write more docs (architecture document, HACKING, etc.)
* Improve the integration testers (especially add better debugging
support for when a test fails)
* Kill all the XXXs in the code.
* Convert all the leftover camelCases to underscore_naming.
* Implement a SOCKS client, so that Obfsproxy can send its data
through a SOCKS proxy.

==> obfsproxy-0.2.13/bin/obfsproxy <==
#!/usr/bin/env python2
import sys, os
# Forcefully add the root directory of the project to our path.
# http://www.py2exe.org/index.cgi/WhereAmI
if hasattr(sys, "frozen"):
dir_of_executable = os.path.dirname(sys.executable)
else:
dir_of_executable = os.path.dirname(__file__)
path_to_project_root = os.path.abspath(os.path.join(dir_of_executable, '..'))
sys.path.insert(0, path_to_project_root)
from obfsproxy.pyobfsproxy import run
run()
==> obfsproxy-0.2.13/doc/HOWTO.txt <==
This is a short guide on how to set up an obfsproxy obfs2/obfs3 bridge
on a Debian/Ubuntu system.
Step 0: Install Python
To use obfsproxy you will need Python (>= 2.7) and pip. If you use
Debian testing (or unstable), or a version of Ubuntu newer than
Oneiric, this is easy:
# apt-get install python2.7 python-pip python-dev build-essential libgmp-dev
Step 1: Install Tor
You will also need a development version of Tor. To do this, you
should use the following guide to install tor and
deb.torproject.org-keyring:
https://www.torproject.org/docs/debian.html.en#development
You need Tor 0.2.4.x because it knows how to automatically report
your obfsproxy address to BridgeDB.
Step 2: Install obfsproxy
If you have pip, installing obfsproxy and its dependencies should be
a matter of a single command:
$ pip install obfsproxy
Step 3: Setup Tor
Now setup Tor. Edit your /etc/tor/torrc to add:
SocksPort 0
ORPort 443 # or some other port if you already run a webserver/skype
BridgeRelay 1
Exitpolicy reject *:*
## CHANGEME_1 -> provide a nickname for your bridge, can be anything you like
#Nickname CHANGEME_1
## CHANGEME_2 -> provide some email address so we can contact you if there's a problem
#ContactInfo CHANGEME_2
ServerTransportPlugin obfs2,obfs3 exec /usr/local/bin/obfsproxy managed
Don't forget to uncomment and edit the CHANGEME fields.
Step 4: Launch Tor and verify that it bootstraps
Restart Tor to use the new configuration file. (Preface with sudo if
needed.)
# service tor restart
Now check /var/log/tor/log and you should see something like this:
Nov 05 16:40:45.000 [notice] We now have enough directory information to build circuits.
Nov 05 16:40:45.000 [notice] Bootstrapped 80%: Connecting to the Tor network.
Nov 05 16:40:46.000 [notice] Bootstrapped 85%: Finishing handshake with first hop.
Nov 05 16:40:46.000 [notice] Bootstrapped 90%: Establishing a Tor circuit.
Nov 05 16:40:48.000 [notice] Tor has successfully opened a circuit. Looks like client functionality is working.
Nov 05 16:40:48.000 [notice] Bootstrapped 100%: Done.
If Tor is earlier in the bootstrapping phase, wait until it gets to 100%.
Step 5: Set up port forwarding if needed
If you're behind a NAT/firewall, you'll need to make your bridge
reachable from the outside world — both on the ORPort and the
obfsproxy port. The ORPort is whatever you defined in step two
above. To find your obfsproxy port, check your Tor logs for two
lines similar to these:
Oct 05 20:00:41.000 [notice] Registered server transport 'obfs2' at '0.0.0.0:26821'
Oct 05 20:00:42.000 [notice] Registered server transport 'obfs3' at '0.0.0.0:40172'
The last numbers in these lines, in this case 26821 and 40172, are the
TCP port numbers that you need to forward through your
firewall. (Each port is randomly chosen the first time Tor starts,
but Tor will cache and reuse the same number in future runs.) If you
want to change the number, use Tor 0.2.4.7-alpha or later, and set
"ServerTransportListenAddr obfs2 0.0.0.0:26821" in your torrc.
==> obfsproxy-0.2.13/doc/obfs2/obfs2-protocol-spec.txt <==
obfs2 (The Twobfuscator)
0. Protocol overview
This is a protocol obfuscation layer for TCP protocols. Its purpose
is to keep a third party from telling what protocol is in use based
on message contents. It is based on brl's ssh obfuscation protocol.
It does not provide authentication or data integrity. It does not
hide data lengths. It is more suitable for providing a layer of
obfuscation for an existing authenticated protocol, like SSH or TLS.
The protocol has two phases: in the first phase, the parties
establish keys. In the second, the parties exchange superenciphered
traffic.
1. Primitives, notation, and constants.
H(x) is SHA256 of x.
H^n(x) is H(x) called iteratively n times.
E(K,s) is the AES-CTR-128 encryption of s using K as key.
x | y is the concatenation of x and y.
UINT32(n) is the 4 byte value of n in big-endian (network) order.
SR(n) is n bytes of strong random data.
WR(n) is n bytes of weaker random data.
"xyz" is the ASCII characters 'x', 'y', and 'z', not NUL-terminated.
s[:n] is the first n bytes of s.
s[n:] is all bytes of s except the first n.
MAGIC_VALUE is 0x2BF5CA7E
SEED_LENGTH is 16
MAX_PADDING is 8192
HASH_ITERATIONS is 100000
KEYLEN is the length of the key used by E(K,s) -- that is, 16.
IVLEN is the length of the IV used by E(K,s) -- that is, 16.
HASHLEN is the length of the output of H() -- that is, 32.
MAC(s, x) = H(s | x | s)
A "byte" is an 8-bit octet.
We require that HASHLEN >= KEYLEN + IVLEN
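For concreteness, the sketch below renders these primitives in
Python using only the standard library. It is illustrative, not
normative; the constant and function names simply mirror the
definitions above.

# Illustrative Python sketch of the obfs2 primitives (not normative).
import hashlib
import os

SEED_LENGTH = 16
MAX_PADDING = 8192
HASH_ITERATIONS = 100000
KEYLEN = IVLEN = 16
HASHLEN = 32
MAGIC_VALUE = 0x2BF5CA7E

def H(x):
    # H(x) is SHA256 of x.
    return hashlib.sha256(x).digest()

def H_n(x, n):
    # H^n(x) is H(x) called iteratively n times.
    for _ in range(n):
        x = H(x)
    return x

def MAC(s, x):
    # MAC(s, x) = H(s | x | s)
    return H(s + x + s)

def SR(n):
    # n bytes of strong random data.
    return os.urandom(n)

assert HASHLEN >= KEYLEN + IVLEN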
2. Key establishment phase.
The party who opens the connection is the 'initiator'; the one who
accepts it is the 'responder'. Each begins by generating a seed
and a padding key as follows. The initiator generates:
INIT_SEED = SR(SEED_LENGTH)
INIT_PAD_KEY = MAC("Initiator obfuscation padding", INIT_SEED)[:KEYLEN]
And the responder generates:
RESP_SEED = SR(SEED_LENGTH)
RESP_PAD_KEY = MAC("Responder obfuscation padding", RESP_SEED)[:KEYLEN]
Each then generates a random number PADLEN in range from 0 through
MAX_PADDING (inclusive).
The initiator then sends:
INIT_SEED | E(INIT_PAD_KEY, UINT32(MAGIC_VALUE) | UINT32(PADLEN) | WR(PADLEN))
and the responder sends:
RESP_SEED | E(RESP_PAD_KEY, UINT32(MAGIC_VALUE) | UINT32(PADLEN) | WR(PADLEN))
Upon receiving the SEED from the other party, each party derives
the other party's padding key value as above, and decrypts the next
8 bytes of the key establishment message. If the MAGIC_VALUE does
not match, or the PADLEN value is greater than MAX_PADDING, the
party receiving it should close the connection immediately.
Otherwise, it should read the remaining PADLEN bytes of padding data
and discard them.
Additional keys are then derived as:
INIT_SECRET = MAC("Initiator obfuscated data", INIT_SEED|RESP_SEED)
RESP_SECRET = MAC("Responder obfuscated data", INIT_SEED|RESP_SEED)
INIT_KEY = INIT_SECRET[:KEYLEN]
INIT_IV = INIT_SECRET[KEYLEN:]
RESP_KEY = RESP_SECRET[:KEYLEN]
RESP_IV = RESP_SECRET[KEYLEN:]
The INIT_KEY value keys a stream cipher used to encrypt values from
initiator to responder thereafter. The stream cipher's IV is
INIT_IV. The RESP_KEY value keys a stream cipher used to encrypt
values from responder to initiator thereafter. That stream cipher's
IV is RESP_IV.
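Continuing the sketch from Section 1 (and reusing its MAC() and
KEYLEN), the additional key material could be derived as follows;
the IV slices work out because HASHLEN >= KEYLEN + IVLEN:

# Sketch of obfs2 session-key derivation, reusing MAC() from above.
def derive_obfs2_keys(init_seed, resp_seed):
    init_secret = MAC(b"Initiator obfuscated data", init_seed + resp_seed)
    resp_secret = MAC(b"Responder obfuscated data", init_seed + resp_seed)
    return {
        "init_key": init_secret[:KEYLEN], "init_iv": init_secret[KEYLEN:],
        "resp_key": resp_secret[:KEYLEN], "resp_iv": resp_secret[KEYLEN:],
    }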
3. Shared-secret extension
Optionally, if the client and server share a secret value SECRET,
they can replace the MAC function with:
MAC(s,x) = H^n(s | x | H(SECRET) | s)
where n = HASH_ITERATIONS.
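In code, the extension only swaps out the MAC function; a minimal
sketch reusing H() and H_n() from the earlier block:

# Sketch of the shared-secret MAC variant.
def keyed_MAC(s, x, secret):
    # MAC(s, x) = H^n(s | x | H(SECRET) | s), with n = HASH_ITERATIONS
    return H_n(s + x + H(secret) + s, HASH_ITERATIONS)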
==> obfsproxy-0.2.13/doc/obfs2/obfs2-threat-model.txt <==
Threat model for the obfs2 obfuscation protocol
George Kadianakis
Nick Mathewson
0. Abstract
We discuss the intended threat model for the 'obfs2' protocol
obfuscator, its limitations, and its implications for the protocol
design.
The 'obfs2' protocol is based on Bruce Leidl's obfuscated SSH layer,
and is documented in the 'doc/obfs2/obfs2-protocol-spec.txt' file in
the obfsproxy distribution.
1. Adversary capabilities and non-capabilities
We assume a censor with limited per-connection resources.
The adversary controls the infrastructure of the network within and
at the edges of her jurisdiction, and she can potentially monitor,
block, alter, and inject traffic anywhere within this region.
However, the adversary's computational resources are limited.
Specifically, the adversary does not have the resources in her
censorship infrastructure to store very much long-term information
about any given IP or connection.
The adversary also holds a blacklist of network protocols, which she
is interested in blocking. We assume that the adversary does not have
a complete list of specific IPs running that protocol, though
preventing this is out-of-scope.
2. The adversary's goals
The censor wants to ban particular encrypted protocols or
applications, and is willing to tolerate some collateral damage, but
is not willing to ban all encrypted traffic entirely.
3. Goals of obfs2
Currently, most attackers in the category described above implement
their censorship with one or more firewalls that look for protocol
signatures and block protocols matching those signatures. These
signatures are typically in the form of static strings to be matched
or regular expressions to be evaluated, over a packet or TCP flow.
obfs2 attempts to counter the above attack by removing content
signatures from network traffic. obfs2 encrypts the traffic stream
with a stream cipher, which results in the traffic looking uniformly
random.
4. Non-goals of obfs2
obfs2 was designed as a proof-of-concept for Tor's pluggable
transport system: it is simple, usable and easily implementable. It
does _not_ try to protect against more sophisticated adversaries.
obfs2 does not try to protect against non-content protocol
fingerprints, like the packet size or timing.
obfs2 does not try to protect against attackers capable of measuring
traffic entropy.
obfs2 (in its default configuration) does not try to protect against
Deep Packet Inspection machines that expect the obfs2 protocol and
have the resources to run it. Such machines can trivially retrieve
the decryption key off the traffic stream and use it to decrypt obfs2
and detect the Tor protocol.
obfs2 assumes that the underlying protocol provides (or does not
need!) integrity, confidentiality, and authentication; it provides
none of those on its own.
In other words, obfs2 does not try to protect against anything other
than fingerprintable TLS content patterns.
That said, obfs2 is not useless. It protects against many real-life
Tor traffic detection methods currently deployed, since most of them
currently use static SSL handshake strings as signatures.
==> obfsproxy-0.2.13/doc/obfs3/obfs3-protocol-spec.txt <==
obfs3 (The Threebfuscator)
0. Protocol overview
This is a protocol obfuscation layer for TCP protocols. Its
purpose is to keep a third party from telling what protocol is in
use based on message contents.
Like obfs2, it does not provide authentication or data integrity.
It does not hide data lengths. It is more suitable for providing a
layer of obfuscation for an existing authenticated protocol, like
SSH or TLS.
Like obfs2, the protocol has two phases: in the first phase, the
parties establish keys. In the second, the parties exchange
superenciphered traffic.
1. Motivation
The first widely used obfuscation protocol for Tor was obfs2. obfs2
encrypted traffic using a key that was negotiated during the
protocol.
obfs2 did not use a robust cryptographic key exchange, and the key
could be retrieved by any passive adversary who monitored the
initial handshake of obfs2.
People believe that the easiest way to block obfs2 would be to
retrieve the key, decrypt the first bytes of the handshake, and
look for redundancy on the handshake message.
To defend against this attack, obfs3 negotiates keys using an
anonymous Diffie Hellman key exchange. This is done so that a
passive adversary would not be able to retrieve the obfs3 session
key.
Unfortunately, traditional DH (over subgroups of Z_p* or over
Elliptic Curves) does not fit our threat model since its public
keys are distinguishable from random strings of the same size. For
this reason, a custom DH protocol was proposed that offers public
keys that look like random strings. The UniformDH scheme was
proposed by Ian Goldberg in:
https://lists.torproject.org/pipermail/tor-dev/2012-December/004245.html
2. Primitives, notation, and constants.
E(K,s) is the AES-CTR-128 encryption of s using K as key.
x | y is the concatenation of x and y.
WR(n) is n bytes of weaker random data.
"xyz" is the ASCII characters 'x', 'y', and 'z', not NULL-terminated.
s[:n] is the first n bytes of s.
s[n:] is all bytes of s except the first n.
MAX_PADDING is 8194
KEYLEN is the length of the key used by E(K,s) -- that is, 16.
COUNTERLEN is the length of the counter used by AES-CTR-128 -- that is, 16.
HMAC(k,m) is HMAC-SHA256(k,m) with 'k' being the key, and 'm' the
message.
A "byte" is an 8-bit octet.
3. UniformDH
The UniformDH Diffie-Hellman scheme uses group 5 from RFC3526. It's
a 1536-bit MODP group.
To pick a private UniformDH key, we pick a random 1536-bit number,
and make it even by setting its low bit to 0. Let x be that private
key, and X = g^x (mod p).
The other party computes private and public keys, y and Y, in the
same manner.
When someone sends her public key to the other party, she randomly
decides whether to send X or p-X. This makes the public key
negligibly different from a uniform 1536-bit string.
When a party wants to calculate the shared secret, she
raises the foreign public key to her private key. Note that both
(p-Y)^x = Y^x (mod p) and (p-X)^y = X^y (mod p), since x and y are
even.
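A minimal sketch of the scheme, assuming the 1536-bit MODP group 5
modulus from RFC 3526 with generator g = 2. This is illustrative
only; a real implementation must also serialize public keys as
fixed-length 192-byte strings and use constant-time arithmetic.

# Sketch of UniformDH key generation and shared-secret computation.
import binascii
import os

# 1536-bit MODP group 5 prime from RFC 3526; the generator is 2.
P = int("FFFFFFFFFFFFFFFFC90FDAA22168C234C4C6628B80DC1CD1"
        "29024E088A67CC74020BBEA63B139B22514A08798E3404DD"
        "EF9519B3CD3A431B302B0A6DF25F14374FE1356D6D51C245"
        "E485B576625E7EC6F44C42E9A637ED6B0BFF5CB6F406B7ED"
        "EE386BFB5A899FA5AE9F24117C4B1FE649286651ECE45B3D"
        "C2007CB8A163BF0598DA48361C55D39A69163FA8FD24CF5F"
        "83655D23DCA3AD961C62F356208552BB9ED529077096966D"
        "670C354E4ABC9804F1746C08CA237327FFFFFFFFFFFFFFFF", 16)
G = 2

def uniformdh_keypair():
    # Pick a random 1536-bit number and make it even (clear the low bit).
    x = int(binascii.hexlify(os.urandom(192)), 16) & ~1
    X = pow(G, x, P)
    # Randomly send X or p-X so the public key looks uniform.
    public = X if ord(os.urandom(1)) & 1 else P - X
    return x, public

def uniformdh_shared_secret(x, Y):
    # (p-Y)^x = Y^x (mod p) because x is even.
    return pow(Y, x, P)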
4. Key establishment phase.
The party who opens the connection is the 'initiator'; the one who
accepts it is the 'responder'. Each begins by generating a
UniformDH keypair, and a random number PADLEN in [0, MAX_PADDING/2].
Both parties then send:
PUB_KEY | WR(PADLEN)
After retrieving the public key of the other end, each party
completes the DH key exchange and generates a shared-secret for the
session (named SHARED_SECRET). Using that shared-secret each party
derives its encryption keys as follows:
INIT_SECRET = HMAC(SHARED_SECRET, "Initiator obfuscated data")
RESP_SECRET = HMAC(SHARED_SECRET, "Responder obfuscated data")
INIT_KEY = INIT_SECRET[:KEYLEN]
INIT_COUNTER = INIT_SECRET[KEYLEN:]
RESP_KEY = RESP_SECRET[:KEYLEN]
RESP_COUNTER = RESP_SECRET[KEYLEN:]
The INIT_KEY value keys a block cipher (in CTR mode) used to
encrypt values from initiator to responder thereafter. The counter
mode's initial counter value is INIT_COUNTER. The RESP_KEY value
keys a block cipher (in CTR mode) used to encrypt values from
responder to initiator thereafter. That counter mode's initial
counter value is RESP_COUNTER.
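A sketch of this key schedule using the standard library's HMAC;
here SHARED_SECRET is assumed to be the raw UniformDH output encoded
as a byte string.

# Sketch of the obfs3 key schedule.
import hashlib
import hmac

KEYLEN = COUNTERLEN = 16

def obfs3_keys(shared_secret):
    init_secret = hmac.new(shared_secret, b"Initiator obfuscated data",
                           hashlib.sha256).digest()
    resp_secret = hmac.new(shared_secret, b"Responder obfuscated data",
                           hashlib.sha256).digest()
    return (init_secret[:KEYLEN], init_secret[KEYLEN:],  # INIT_KEY, INIT_COUNTER
            resp_secret[:KEYLEN], resp_secret[KEYLEN:])  # RESP_KEY, RESP_COUNTER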
After the handshake is complete, when the initiator wants to send
application-layer data for the first time, she generates another
random number PADLEN2 in [0, MAX_PADDING/2], and sends:
WR(PADLEN2) | HMAC(SHARED_SECRET, "Initiator magic") | E(INIT_KEY, DATA)
When the responder wants to send application-layer data for the
first time, she sends:
WR(PADLEN2) | HMAC(SHARED_SECRET, "Responder magic") | E(RESP_KEY, DATA)
After a party receives the public key from the other end, it needs
to find out where the padding stops and where the application-layer
data starts. To do so, every time she receives network data, the
receiver tries to find the magic HMAC string in the data between
the public key and the end of the newly received data. After
spotting the magic string, she knows where the application-layer
data starts and she can start decrypting it.
If a party has scanned more than MAX_PADDING bytes and the magic
string has not yet been found, the party MUST close the connection.
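The scan can be expressed compactly; the sketch below assumes buf
accumulates everything received after the peer's public key, and the
function name is illustrative rather than taken from any implementation.

# Sketch of the receiver-side scan for the magic HMAC string.
MAX_PADDING = 8194

def find_app_data_offset(buf, magic):
    # Returns the offset where application-layer data starts, or None
    # if the magic string has not arrived yet.  Raises once more than
    # MAX_PADDING bytes have been scanned without finding it.
    idx = buf.find(magic)
    if idx >= 0:
        return idx + len(magic)
    if len(buf) > MAX_PADDING:
        raise ValueError("magic string not found; MUST close connection")
    return None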
After the initiator sends the magic string and the first chunk of
application-layer data, she can send additional application-layer
data simply by encrypting it with her encryption key, and without
prepending any magic strings:
E(INIT_KEY, DATA)
Similarly, the responder sends additional application-layer data by
encrypting it with her encryption key:
E(RESP_KEY, DATA)
5. Acknowledgments
The idea of using a hash of the shared secret as the delimiter
between the padding and the data was suggested by Philipp Winter.
Ian Goldberg suggested the UniformDH scheme and helped a lot with
reviewing the protocol specification.
==> obfsproxy-0.2.13/doc/obfs3/obfs3-threat-model.txt <==
Threat model for the obfs3 obfuscation protocol
The threat model of obfs3 is identical to the threat model of obfs2,
with an added goal:
obfs3 offers protection against passive Deep Packet Inspection
machines that expect the obfs3 protocol. Such machines should not be
able to verify the existence of the obfs3 protocol without launching
an active attack against its handshake.
==> obfsproxy-0.2.13/doc/scramblesuit/ChangeLog <==
2014-01-19 - Changes in version 2014.01.b:
- More unit tests and several minor bug fixes.
- Sanitise shared secret if the user got it slightly wrong.
2014-01-09 - Changes in version 2014.01.a:
- Update API to be compatible with recent obfsproxy changes.
- Improve argument parsing.
2013-11-18 - Changes in version 2013.11.a:
- Revert UniformDH group size back to 1536 bits to have less of a timing
distinguisher at the cost of having less effective security. Note that
this also breaks compatibility with version 2013.10.a!
- Add the config option "USE_IAT_OBFUSCATION" which can be used to disable
inter-arrival time obfuscation. This would mean more throughput at the
cost of being slightly more detectable.
- Add a fast FIFO buffer implementation.
- Refactored plenty of code.
- Add this ChangeLog file.
2013-10-02 - Changes in version 2013.10.a:
- First public release of ScrambleSuit.
==> obfsproxy-0.2.13/doc/scramblesuit/scramblesuit-spec.txt <==
ScrambleSuit Protocol Specification
Philipp Winter
0. Preliminaries
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119.
1. Overview
ScrambleSuit is a pluggable transport protocol for the obfsproxy
obfuscation framework [0]. Its entire payload is computationally
indistinguishable from randomness, it modifies its flow signature to foil
simple statistical classifiers and it employs authenticated encryption to
disguise the transported protocol.
For the motivation, a protocol overview, the threat model and an
evaluation, please refer to the original research paper [1]. This protocol
specification discusses a subset of the research paper in greater detail to
facilitate alternative implementations of the protocol. Besides, this
specification is intended to be updated if necessary whereas the research
paper will remain as is.
2. Authentication
There exist two ways for a client to authenticate itself towards a
ScrambleSuit server. First, by redeeming a session ticket. Second, by
conducting a UniformDH handshake. While a valid session ticket might not
always be available, a client is always able to conduct a UniformDH
handshake. Both authentication mechanisms rely on a previously shared
secret without which authentication cannot succeed. Requiring a shared
secret should thwart active probing attacks.
As stated in the research paper [1], a server only replies to a client if
the client can prove knowledge of the shared secret. As long as clients
cannot prove knowledge of the shared secret, servers MUST NOT reply. If
authentication did not succeed after 1532 bytes have been received, the
server SHOULD stop processing incoming data to prevent denial-of-service
attacks. The server MAY close the TCP connection. Alternatively, the
server MAY proceed to accept data but it SHOULD stop buffering or
processing the data, thus effectively ignoring the client.
2.1 UniformDH Handshake
A client can authenticate itself towards a ScrambleSuit server by
conducting a UniformDH handshake. UniformDH was originally proposed in the
obfs3 protocol specification [2]. ScrambleSuit uses obfs3's 1536-bit
UniformDH handshake. Note that in order for a UniformDH handshake to
succeed, both parties MUST share a 160-bit secret k_B which is exchanged
out-of-band over Tor's BridgeDB component. ScrambleSuit bridges
automatically publish their k_B key.
A UniformDH handshake consists of two messages: one from the client to the
server and one from the server to the client. The diagram below
illustrates the handshake. After the randomly chosen 192-byte UniformDH
public key X, random padding P_C is appended. The length of the padding
must be randomly chosen from {0..1308} bytes. After the padding, a 16-byte
mark M_C is appended which is defined as:
M = HMAC-SHA256-128(k_B, X)
The mark is used to easily locate the MAC which is the last element of the
client's handshake message. The 16-byte MAC is defined as:
MAC = HMAC-SHA256-128(k_B, X | P_C | M_C | E)
The variable E is a string representation of the current Unix epoch divided
by 3600. It represents the amount of hours which have passed since the
epoch. It is used by the client and the server to prove liveness. For
example, the Unix timestamp 1378768086 would map to E = 1378768086 / 3600 =
"382991". While the client MUST determine E, the server can simply echo
the client's E in its response.
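For illustration, a client-side sketch of assembling
X | P_C | M_C | MAC. The helper hmac_sha256_128() is an assumed name
for HMAC-SHA256 truncated to 128 bits, matching HMAC-SHA256-128 above.

# Sketch of building the client's UniformDH handshake message.
import hashlib
import hmac
import os
import random
import time

def hmac_sha256_128(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()[:16]

def build_client_message(k_B, X):
    P_C = os.urandom(random.randint(0, 1308))      # random padding
    M_C = hmac_sha256_128(k_B, X)                  # mark to locate the MAC
    E = str(int(time.time()) // 3600).encode()     # hours since the epoch
    mac = hmac_sha256_128(k_B, X + P_C + M_C + E)
    return X + P_C + M_C + mac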
The server's handshake message is created analogously to the client's.
After conducting UniformDH, a client and server agreed on a 192-byte random
number. This random number is then hashed using SHA256 to obtain the
256-bit master key k_t. Session keys are then derived from k_t as
discussed in Section 2.3.
Client Server Legend:
| X | P_C | M_C | MAC(X | P_C | M_C | E) | X: client public key
| ---------------------------------------> | Y: server public key
| Y | P_S | M_S | MAC(Y | P_S | M_S | E) | P_{C,S}: padding
| <--------------------------------------- | M_{C,S}: mark to locate MAC
| AEnc(k_t+1 | T_t+1) | E: approximate timestamp
| <--------------------------------------- | k_t+1: future master key
| AEnc(Tor traffic) | T_t+1: future ticket
| <--------------------------------------> |
Immediately after the handshake succeeded, the server proceeds to issue and
send a new session ticket T_t+1 together with the according master key
k_t+1. Session tickets are discussed in Section 2.2. This tuple can then
be used by the client to authenticate itself the next time it connects to
the server. After the newly issued ticket, encrypted and authenticated Tor
traffic is finally exchanged between the client and the server.
2.2 Session Ticket Handshake
Alternatively to UniformDH, implementations SHOULD support session tickets.
A client can authenticate itself towards a ScrambleSuit server by redeeming
a 112-byte session ticket T. Such a ticket contains the master key k_t and
is encrypted and authenticated using keys only known to the server. The
structure of a session ticket is discussed in Section 5.1. If a valid
session ticket is available, a client SHOULD redeem it rather than conduct
a UniformDH handshake.
The handshake consists of one single message which is sent by the client to
the server. The diagram below illustrates the handshake. After the
112-byte session ticket, random padding P is appended. The padding must be
uniformly chosen from {0..1388} bytes. After the padding, a 16-byte mark M
is appended which is defined as:
M = HMAC-SHA256-128(k_sh, T)
The mark is used to easily locate the MAC which is the last part of the
handshake. k_sh is the 256-bit HMAC key which is used by the client to
authenticate outgoing data. It is derived from k_t (which is embedded in
the ticket) as described in Section 2.3. The MAC is defined as:
MAC = HMAC-SHA256-128(k_sh, T | P | M | E)
The variable E is a string representation of the current Unix epoch divided
by 3600. It represents the amount of hours which have passed since the
epoch. It is used by the client and the server to prove liveness. For
example, the Unix timestamp 1378768086 would map to E = 1378768086 / 3600 =
"382991". While the client MUST determine E, the server can simply echo
the client's E in its response.
Client Server Legend:
| T | P | M | MAC(T | P | M | E) | T: session ticket
| -------------------------------> | P: random padding
| AEnc(k_t+1 | T_t+1) | M: mark to locate the MAC
| <------------------------------- | E: approximate timestamp
| AEnc(Tor traffic) | k_t+1: future master key
| <------------------------------> | T_t+1: future ticket
The server is initially unable to distinguish between a session ticket
handshake and a UniformDH handshake as both handshakes are computationally
indistinguishable from randomness. Therefore, it first tries to
opportunistically decrypt the session ticket T after verifying its MAC. If
the ticket's MAC (which should not be confused with the handshake message's
MAC) is valid and the ticket can be decrypted and is not yet expired, the
server then verifies the MAC which is built over T | P | M | E. If this
MAC is valid, the handshake succeeded. The server, like the client, then
proceeds to derive session keys from the 256-bit master key as described in
Section 2.3.
After a ticket handshake succeeded, the server replies by issuing a new
session ticket T_t+1 together with the according master key k_t+1. The
tuple can then be used by the client to authenticate itself the next time.
2.3 Key Derivation
After authenticating either by redeeming a ticket or by running UniformDH,
a client and server will have a shared 256-bit master key. Overall, 144
bytes of key material is derived from the master key using HKDF based on
SHA256. For expansion, the master key is used as HKDF's PRK and the empty
string as HKDF's "info" argument.
The 144-byte output is used as follows. The byte offsets are in decimal.
Bytes 000:031 - 256-bit AES-CTR session key to send data.
Bytes 032:039 - 64-bit AES-CTR IV to send data.
Bytes 040:071 - 256-bit AES-CTR session key to receive data.
Bytes 072:079 - 64-bit AES-CTR IV to receive data.
Bytes 080:111 - 256-bit HMAC-SHA256-128 key to send data.
Bytes 112:143 - 256-bit HMAC-SHA256-128 key to receive data.
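A sketch of the expand step (HKDF-Expand from RFC 5869, instantiated
with SHA256), treating the 32-byte master key directly as the PRK as
described above. The dictionary keys are illustrative names only.

# Sketch of stretching the master key into the 144-byte layout above.
import hashlib
import hmac
import struct

def hkdf_sha256_expand(prk, length=144, info=b""):
    okm, t, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + struct.pack("B", i),
                     hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

def scramblesuit_session_keys(master_key):
    okm = hkdf_sha256_expand(master_key)
    return {
        "send_aes_key":  okm[0:32],   "send_aes_iv":   okm[32:40],
        "recv_aes_key":  okm[40:72],  "recv_aes_iv":   okm[72:80],
        "send_hmac_key": okm[80:112], "recv_hmac_key": okm[112:144],
    }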
3. Header Format
ScrambleSuit defines a 21-byte message header which contains the
transported data. After authentication, all data is transported by
encrypting it, authenticating it, and wrapping it in ScrambleSuit messages
whose header is depicted below.
+----------+------------+--------------+--------+------------+------------+
| 16 bytes | 2 bytes | 2 bytes | 1 byte | (optional) | (optional) |
| MAC | Total len. | Payload len. | Flags | Payload | Padding |
+----------+------------+--------------+--------+------------+------------+
\_ Plain _/ \____________ Encrypted and authenticated __________________/
The 16-byte MAC refers to HMAC-SHA256-128 which is keyed by a dedicated
HMAC key which is derived from the session's master key (see Section 2.3).
The MAC authenticates the remainder of the message. In accordance with the
encrypt-then-MAC principle, the MAC is built over the already-encrypted
remainder of the message.
The 2-byte total length refers to the overall length of the message
excluding the header whereas the 2-byte payload length refers to the
payload only. The difference between total length and payload length is
padding which is used for packet length obfuscation. Note that both fields
can be set to 0 which results in an empty protocol message. ScrambleSuit's
maximum message length is 1448 bytes. Excluding the header, this results in
1427 bytes for the transported data.
The 1-byte flag field is used for protocol signalling. Below, all defined
flags along with their semantics are explained.
Flag name | Bit # | Description
----------------+-------+--------------------------------------------------
FLAG_PAYLOAD | 1 | The entire payload consists of encrypted
| | application data which must be forwarded to the
| | application.
----------------+-------+--------------------------------------------------
FLAG_NEW_TICKET | 2 | The payload holds a newly issued session ticket
| | and master key. The format is:
| | 32-byte master key | 112-byte ticket
----------------+-------+--------------------------------------------------
FLAG_PRNG_SEED | 3 | The payload holds the PRNG seed which is used to
| | derive obfuscation distributions. The format is:
| | 32-byte PRNG seed
----------------+-------+--------------------------------------------------
Finally, a ScrambleSuit message contains the transported data which is
followed by padding. Padding MUST always be discarded. Since padding is
always encrypted, client and server MAY simply pad with 0 bytes.
When ScrambleSuit protocol messages are received, the receiver first MUST
validate the MAC. The receiver may only process messages if the MAC is
valid. If the MAC is invalid, the TCP connection MUST be terminated
immediately.
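A framing sketch follows; encrypt stands in for the sender's AES-CTR
session cipher, and hmac_sha256_128() is the same assumed helper as
in the handshake sketch above.

# Sketch of assembling one ScrambleSuit message (encrypt-then-MAC).
import hashlib
import hmac
import struct

HDR_LEN, MAX_MSG, FLAG_PAYLOAD = 21, 1448, 1

def hmac_sha256_128(key, msg):
    return hmac.new(key, msg, hashlib.sha256).digest()[:16]

def make_message(encrypt, send_hmac_key, payload, padlen=0):
    assert HDR_LEN + len(payload) + padlen <= MAX_MSG
    header = struct.pack("!HHB", len(payload) + padlen, len(payload),
                         FLAG_PAYLOAD)
    # Everything but the MAC is encrypted; padding MAY be 0 bytes.
    blob = encrypt(header + payload + b"\x00" * padlen)
    return hmac_sha256_128(send_hmac_key, blob) + blob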
4. Protocol Polymorphism
Implementations SHOULD implement protocol polymorphism whose purpose is to
modify ScrambleSuit's flow signature. In particular, the packet length
distribution and the distribution of inter-arrival times are modified.
To alter these two flow signatures, implementations maintain two discrete
probability distributions from which random samples are drawn. These
random samples dictate specific inter-arrival times and packet lengths.
Both probability distributions are generated based on a random 256-bit PRNG
seed which is unique for every ScrambleSuit server. Servers communicate
their seed to clients in a dedicated protocol message whose FLAG_PRNG_SEED
bit is set. The client then extracts the PRNG seed and derives its own
probability distributions.
4.1 Deriving Probability Distributions
Probability distributions SHOULD be derived from the 256-bit seed using a
cryptographically secure PRNG. After the CSPRNG was seeded, the amount of
bins for the respective probability distribution must be determined.
Depending on the CSPRNG's output, the amount SHOULD be uniformly chosen
from {1..100}. The exact way how the CSPRNG's output is used is up to the
implementation.
After the amount of bins has been determined, every bin is assigned a value
together with a corresponding probability which is in the interval ]0, 1].
The probability of all bins sums up to 1. Again, the exact way how the
CSPRNG's output is used is up to the implementation.
For the packet length distribution, all values SHOULD be in {21..1448}.
For the inter-arrival time distribution, all values SHOULD be in the
interval [0, 0.01].
Since the distributions are generated randomly, it is possible that they
cause particularly bad throughput. To prevent this, implementations MAY
trade off obfuscation for additional throughput by carefully tuning the
above parameters.
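One possible derivation is sketched below. NOTE: random.Random (a
Mersenne Twister) is only a stand-in for readability; the spec calls
for a cryptographically secure PRNG and deliberately leaves the exact
construction to the implementation.

# Sketch of deriving a discrete distribution from the PRNG seed.
import hashlib
import random

def derive_distribution(seed, values):
    prng = random.Random(hashlib.sha256(seed).digest())
    n_bins = prng.randint(1, 100)            # amount of bins in {1..100}
    weights = [prng.random() for _ in range(n_bins)]
    total = sum(weights)
    bins = prng.sample(values, n_bins)
    # Probabilities are in ]0, 1] and sum to 1.
    return [(v, w / total) for v, w in zip(bins, weights)]

# Example: a packet length distribution over {21..1448}.
# dist = derive_distribution(seed, range(21, 1449))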
4.2 Packet Length Obfuscation
In general, ScrambleSuit transmits MTU-sized segments as long as there is
enough data in the send buffer. Packet length obfuscation only kicks in
once the send buffer is almost processed and a segment smaller than the MTU
would have to be sent.
Instead of simply flushing the send buffer, a random sample from the
discrete packet length probability distribution is drawn. Padding messages
are then appended so that the size of the last segment in the burst equals
the freshly drawn sample.
4.3 Inter-arrival Time Obfuscation
To obfuscate inter-arrival times, implementations could maintain a
dedicated send buffer. As long as there is data in the send buffer, random
samples from the inter-arrival time distribution are drawn. The thread
processing the send buffer is then paused for the duration of the freshly
drawn sample until the next MTU-sized chunk is written to the wire. This
process is repeated until the send buffer is empty.
Note that inter-arrival time obfuscation has a drastic impact on
throughput. As a result, implementations MAY implement packet length
obfuscation but ignore inter-arrival time obfuscation.
5. Session Tickets
ScrambleSuit employs a subset of RFC 5077 [3] as its session ticket
mechanism. In a nutshell, clients can redeem session tickets to
authenticate themselves and bootstrap a ScrambleSuit connection. This
section discusses the format of session tickets and how servers manage
them.
5.1 Session Ticket Structure
Session tickets contain a server's state with the most important element
being the 32-byte master key. The state structure is encrypted using
AES-CBC with a 16-byte key and authenticated using HMAC-SHA256. Refer to
Section 5.2 for how the server manages this key pair. The basic structure of a
112-byte session ticket is depicted below:
+----------+----------+----------+
| 16 bytes | 64 bytes | 32 bytes |
| IV | E(state) | HMAC |
+----------+----------+----------+
The 16-byte IV is used for AES-CBC, MUST come from a CSPRNG and MUST be
different for every session ticket. The 64-byte encrypted state is
described below. The 32-byte HMAC authenticates the ticket. It is defined
as follows:
HMAC = HMAC-SHA256(k, IV | E(state))
Servers MUST verify the HMAC before attempting to decrypt the state.
E(state), the 64-byte encrypted server state, has the following structure
in its decrypted form:
+------------+------------+------------+----------+
| 4 bytes | 18 bytes | 32 bytes | 10 bytes |
| Issue date | Identifier | Master key | Padding |
+------------+------------+------------+----------+
The 4-byte issue date is a Unix epoch and specifies when the ticket was
issued by the server. The 18-byte identifier contains the ASCII string
"ScrambleSuitTicket". It is checked by the server in order to make sure
that the ticket was decrypted successfully. The 32-byte master key is used
to derive session keys as described in Section 2.3. The 10-byte padding is
used to pad the entire structure to 64 bytes, a multiple of AES' block size.
The padding is ignored and it MAY consist of 0 bytes.
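A sketch of ticket issuance, using PyCrypto's AES in CBC mode (the
library used elsewhere in this code base). aes_key is the server's
16-byte encryption key and hmac_key its 32-byte authentication key;
the function name is illustrative.

# Sketch of issuing a 112-byte session ticket.
import hashlib
import hmac
import os
import struct
import time

from Crypto.Cipher import AES

def issue_ticket(master_key, aes_key, hmac_key):
    state = (struct.pack("!I", int(time.time()))   # 4-byte issue date
             + b"ScrambleSuitTicket"               # 18-byte identifier
             + master_key                          # 32-byte master key
             + b"\x00" * 10)                       # padding -> 64 bytes
    iv = os.urandom(16)                            # MUST come from a CSPRNG
    enc_state = AES.new(aes_key, AES.MODE_CBC, iv).encrypt(state)
    mac = hmac.new(hmac_key, iv + enc_state, hashlib.sha256).digest()
    return iv + enc_state + mac                    # 16 + 64 + 32 = 112 bytes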
5.2 Session Ticket Management
Session tickets are encrypted and authenticated with a pair of keys only
known to the server. As a result, tickets are computationally
indistinguishable from randomness and opaque to clients as well as passive
observers.
For encryption, AES-CBC with a 16-byte key is used. For authentication,
HMAC-SHA256 with a 32-byte key is used. The server has to make sure that
the two keys are stored safely. Furthermore, the server SHOULD regularly
rotate its keys. A reasonable key rotation interval would be once a week.
At any given point in time, the server SHOULD have a current, valid key
pair as well as the previous, superseded key pair. The current key pair
SHOULD be used to issue and verify new tickets. The superseded key pair
SHOULD be used to verify tickets which cannot be verified with the current
key pair. The superseded key pair further SHOULD NOT be used to issue new
tickets.
References
[0] https://www.torproject.org/projects/obfsproxy.html.en
[1] http://www.cs.kau.se/philwint/pdf/wpes2013.pdf
[2] https://gitweb.torproject.org/pluggable-transports/obfsproxy.git/blob/HEAD:/doc/obfs3/obfs3-protocol-spec.txt
[3] https://tools.ietf.org/html/rfc5077
==> obfsproxy-0.2.13/obfsproxy/__init__.py <==
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
==> obfsproxy-0.2.13/obfsproxy/_version.py <==
# This file was generated by 'versioneer.py' (0.7+) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.
version_version = '0.2.13'
version_full = '01a5d50179af4adf28195ce6a926c735eede6b06'
def get_versions(default={}, verbose=False):
return {'version': version_version, 'full': version_full}
==> obfsproxy-0.2.13/obfsproxy/common/__init__.py <==

==> obfsproxy-0.2.13/obfsproxy/common/aes.py <==
#!/usr/bin/python
# -*- coding: utf-8 -*-
""" This module is a convenience wrapper for the AES cipher in CTR mode. """
from Crypto.Cipher import AES
from Crypto.Util import Counter
class AES_CTR_128(object):
"""An AES-CTR-128 PyCrypto wrapper."""
def __init__(self, key, iv, counter_wraparound=False):
"""Initialize AES with the given key and IV.
If counter_wraparound is set to True, the AES-CTR counter will
wraparound to 0 when it overflows.
"""
assert(len(key) == 16)
assert(len(iv) == 16)
self.ctr = Counter.new(128, initial_value=long(iv.encode('hex'), 16),
allow_wraparound=counter_wraparound)
self.cipher = AES.new(key, AES.MODE_CTR, counter=self.ctr)
def crypt(self, data):
"""
Encrypt or decrypt 'data'.
"""
return self.cipher.encrypt(data)
==> obfsproxy-0.2.13/obfsproxy/common/argparser.py <==
import argparse
import sys
"""
Overrides argparse.ArgumentParser so that it emits error messages to
stdout instead of stderr.
"""
class MyArgumentParser(argparse.ArgumentParser):
def _print_message(self, message, fd=None):
if message:
fd = sys.stdout
fd.write(message)
==> obfsproxy-0.2.13/obfsproxy/common/heartbeat.py <==
"""heartbeat code"""
import datetime
import socket # for socket.inet_pton()
import obfsproxy.common.log as logging
log = logging.get_obfslogger()
def get_integer_from_ip_str(ip_str):
"""
Given an IP address in string format in ip_str, return its packed
binary representation. (Despite this function's name, inet_pton()
returns a packed byte string, not an integer; callers only use the
value for uniqueness checks.)
Throws ValueError if the IP address string was invalid.
"""
try:
return socket.inet_pton(socket.AF_INET, ip_str)
except socket.error:
pass
try:
return socket.inet_pton(socket.AF_INET6, ip_str)
except socket.error:
pass
# Down here, both inet_pton()s failed.
raise ValueError("Invalid IP address string")
class Heartbeat(object):
"""
Represents obfsproxy's heartbeat.
It keeps stats on a number of things that the obfsproxy operator
might be interested in, and every now and then it reports them in
the logs.
'unique_ips': A Python set that contains unique IPs (in integer
form) that have connected to obfsproxy.
"""
def __init__(self):
self.n_connections = 0
self.started = datetime.datetime.now()
self.last_reset = self.started
self.unique_ips = set()
def register_connection(self, ip_str):
"""Register a new connection."""
self.n_connections += 1
self._register_ip(ip_str)
def _register_ip(self, ip_str):
"""
See if 'ip_str' has connected to obfsproxy before. If not, add
it to the list of unique IPs.
"""
ip = get_integer_from_ip_str(ip_str)
if ip not in self.unique_ips:
self.unique_ips.add(ip)
def reset_stats(self):
"""Reset stats."""
self.n_connections = 0
self.unique_ips = set()
self.last_reset = datetime.datetime.now()
def say_uptime(self):
"""Log uptime information."""
now = datetime.datetime.now()
delta = now - self.started
uptime_days = delta.days
uptime_hours = round(float(delta.seconds)/3600)
uptime_minutes = round(float(delta.seconds)/60)%60
if uptime_days:
log.info("Heartbeat: obfsproxy's uptime is %d day(s), %d hour(s) and %d minute(s)." % \
(uptime_days, uptime_hours, uptime_minutes))
else:
log.info("Heartbeat: obfsproxy's uptime is %d hour(s) and %d minute(s)." % \
(uptime_hours, uptime_minutes))
def say_stats(self):
"""Log connection stats."""
now = datetime.datetime.now()
reset_delta = now - self.last_reset
log.info("Heartbeat: During the last %d hour(s) we saw %d connection(s)" \
" from %d unique address(es)." % \
(round(float(reset_delta.seconds)/3600) + reset_delta.days*24, self.n_connections,
len(self.unique_ips)))
# Reset stats every 24 hours.
if (reset_delta.days > 0):
log.debug("Resetting heartbeat.")
self.reset_stats()
def talk(self):
"""Do a heartbeat."""
self.say_uptime()
self.say_stats()
# A heartbeat singleton.
heartbeat = Heartbeat()
==> obfsproxy-0.2.13/obfsproxy/common/hmac_sha256.py <==
import hashlib
import hmac
def hmac_sha256_digest(key, msg):
"""
Return the HMAC-SHA256 message authentication code of the message
'msg' with key 'key'.
"""
return hmac.new(key, msg, hashlib.sha256).digest()
==> obfsproxy-0.2.13/obfsproxy/common/log.py <==
"""obfsproxy logging code"""
import logging
import sys
from twisted.python import log
def get_obfslogger():
""" Return the current ObfsLogger instance """
return OBFSLOGGER
class ObfsLogger(object):
"""
Maintain state of logging options specified with command line arguments
Attributes:
safe_logging: Boolean value indicating if we should scrub addresses
before logging
obfslogger: Our logging instance
"""
def __init__(self):
self.safe_logging = True
observer = log.PythonLoggingObserver('obfslogger')
observer.start()
# Create the default log handler that logs to stdout.
self.obfslogger = logging.getLogger('obfslogger')
self.default_handler = logging.StreamHandler(sys.stdout)
self.set_formatter(self.default_handler)
self.obfslogger.addHandler(self.default_handler)
self.obfslogger.propagate = False
def set_formatter(self, handler):
"""Given a log handler, plug our custom formatter to it."""
formatter = logging.Formatter("%(asctime)s [%(levelname)s] %(message)s")
handler.setFormatter(formatter)
def set_log_file(self, filename):
"""Set up our logger so that it starts logging to file in 'filename' instead."""
# remove the default handler, and add the FileHandler:
self.obfslogger.removeHandler(self.default_handler)
log_handler = logging.FileHandler(filename)
self.set_formatter(log_handler)
self.obfslogger.addHandler(log_handler)
def set_log_severity(self, sev_string):
"""Update our minimum logging severity to 'sev_string'."""
# Turn it into a numeric level that logging understands first.
numeric_level = getattr(logging, sev_string.upper(), None)
self.obfslogger.setLevel(numeric_level)
def disable_logs(self):
"""Disable all logging."""
logging.disable(logging.CRITICAL)
def set_no_safe_logging(self):
""" Disable safe_logging """
self.safe_logging = False
def safe_addr_str(self, address):
"""
Unless safe_logging is False, we return '[scrubbed]' instead
of the address parameter. If safe_logging is false, then we
return the address itself.
"""
if self.safe_logging:
return '[scrubbed]'
else:
return address
def debug(self, msg, *args, **kwargs):
""" Class wrapper around debug logging method """
self.obfslogger.debug(msg, *args, **kwargs)
def warning(self, msg, *args, **kwargs):
""" Class wrapper around warning logging method """
self.obfslogger.warning(msg, *args, **kwargs)
def info(self, msg, *args, **kwargs):
""" Class wrapper around info logging method """
self.obfslogger.info(msg, *args, **kwargs)
def error(self, msg, *args, **kwargs):
""" Class wrapper around error logging method """
self.obfslogger.error(msg, *args, **kwargs)
def critical(self, msg, *args, **kwargs):
""" Class wrapper around critical logging method """
self.obfslogger.critical(msg, *args, **kwargs)
def exception(self, msg, *args, **kwargs):
""" Class wrapper around exception logging method """
self.obfslogger.exception(msg, *args, **kwargs)
""" Global variable that will track our Obfslogger instance """
OBFSLOGGER = ObfsLogger()
==> obfsproxy-0.2.13/obfsproxy/common/modexp.py <==
try:
from gmpy2 import mpz as mpz
except ImportError:
try:
from gmpy import mpz as mpz
except ImportError:
def mpz( x ):
return x
pass
def powMod( x, y, mod ):
"""
(Efficiently) Calculate and return `x' to the power of `y' mod `mod'.
If possible, the three numbers are converted to GMPY's bignum
representation which speeds up exponentiation. If GMPY is not installed,
built-in exponentiation is used.
"""
x = mpz(x)
y = mpz(y)
mod = mpz(mod)
return pow(x, y, mod)
==> obfsproxy-0.2.13/obfsproxy/common/rand.py <==
import os
def random_bytes(n):
""" Returns n bytes of strong random data. """
return os.urandom(n)
==> obfsproxy-0.2.13/obfsproxy/common/serialize.py <==
"""Helper functions to go from integers to binary data and back."""
import struct
def htonl(n):
"""
Convert integer in 'n' from host-byte order to network-byte order.
"""
return struct.pack('!I', n)
def ntohl(bs):
"""
Convert the 4 bytes in 'bs' from network-byte order to a host integer.
"""
return struct.unpack('!I', bs)[0]
def htons(n):
"""
Convert integer in 'n' from host-byte order to network-byte order.
"""
return struct.pack('!h', n)
def ntohs(bs):
"""
Convert the 2 bytes in 'bs' from network-byte order to a host integer.
"""
return struct.unpack('!h', bs)[0]
==> obfsproxy-0.2.13/obfsproxy/common/transport_config.py <==
# -*- coding: utf-8 -*-
"""
Provides a class which represents a pluggable transport's configuration.
"""
class TransportConfig( object ):
"""
This class embeds configuration options for pluggable transport modules.
The options are set by obfsproxy and then passed to the transport's class
constructor. The pluggable transport might want to use these options but
does not have to. An example of such an option is the state location which
can be used by the pluggable transport to store persistent information.
"""
def __init__( self ):
"""
Initialise a `TransportConfig' object.
"""
self.stateLocation = None
self.serverTransportOptions = None
# True if we are client, False if not.
self.weAreClient = None
# True if we are in external mode. False otherwise.
self.weAreExternal = None
# Information about the outgoing SOCKS/HTTP proxy we need to
# connect to. See pyptlib.client_config.parseProxyURI().
self.proxy = None
def setProxy( self, proxy ):
"""
Set the given 'proxy'.
"""
self.proxy = proxy
def setStateLocation( self, stateLocation ):
"""
Set the given `stateLocation'.
"""
self.stateLocation = stateLocation
def getStateLocation( self ):
"""
Return the stored `stateLocation'.
"""
return self.stateLocation
def setServerTransportOptions( self, serverTransportOptions ):
"""
Set the given `serverTransportOptions'.
"""
self.serverTransportOptions = serverTransportOptions
def getServerTransportOptions( self ):
"""
Return the stored `serverTransportOptions'.
"""
return self.serverTransportOptions
def setListenerMode( self, mode ):
if mode == "client" or mode == "socks":
self.weAreClient = True
elif mode == "server" or mode == "ext_server":
self.weAreClient = False
else:
raise ValueError("Invalid listener mode: %s" % mode)
def setObfsproxyMode( self, mode ):
if mode == "external":
self.weAreExternal = True
elif mode == "managed":
self.weAreExternal = False
else:
raise ValueError("Invalid obfsproxy mode: %s" % mode)
def __str__( self ):
"""
Return a string representation of the `TransportConfig' instance.
"""
return str(vars(self))
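# Illustrative sketch (hypothetical values): this mirrors how the managed-mode
# code fills in a TransportConfig before handing it to a transport's setup()
# method.
if __name__ == '__main__':
    cfg = TransportConfig()
    cfg.setStateLocation('/tmp/obfs-state')  # hypothetical state directory
    cfg.setListenerMode('socks')
    cfg.setObfsproxyMode('managed')
    print cfg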
obfsproxy-0.2.13/obfsproxy/managed/ 0000775 0000000 0000000 00000000000 12570034732 0017262 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.13/obfsproxy/managed/__init__.py 0000664 0000000 0000000 00000000000 12570034732 0021361 0 ustar 00root root 0000000 0000000 obfsproxy-0.2.13/obfsproxy/managed/client.py 0000664 0000000 0000000 00000006214 12570034732 0021115 0 ustar 00root root 0000000 0000000 #!/usr/bin/python
# -*- coding: utf-8 -*-
from twisted.internet import reactor, error
import obfsproxy.network.launch_transport as launch_transport
import obfsproxy.network.network as network
import obfsproxy.transports.transports as transports
import obfsproxy.transports.base as base
import obfsproxy.common.log as logging
import obfsproxy.common.transport_config as transport_config
from pyptlib.client import ClientTransportPlugin
from pyptlib.config import EnvError
import pprint
log = logging.get_obfslogger()
def do_managed_client():
"""Start the managed-proxy protocol as a client."""
should_start_event_loop = False
ptclient = ClientTransportPlugin()
try:
ptclient.init(transports.transports.keys())
except EnvError, err:
log.warning("Client managed-proxy protocol failed (%s)." % err)
return
log.debug("pyptlib gave us the following data:\n'%s'", pprint.pformat(ptclient.getDebugData()))
# Apply the proxy settings if any
proxy = ptclient.config.getProxy()
if proxy:
# Make sure that we have all the necessary dependencies
try:
network.ensure_outgoing_proxy_dependencies()
except network.OutgoingProxyDepsFailure, err:
ptclient.reportProxyError(str(err))
return
ptclient.reportProxySuccess()
for transport in ptclient.getTransports():
# Will hold configuration parameters for the pluggable transport module.
pt_config = transport_config.TransportConfig()
pt_config.setStateLocation(ptclient.config.getStateLocation())
pt_config.setListenerMode("socks")
pt_config.setObfsproxyMode("managed")
pt_config.setProxy(proxy)
# Call setup() method for this transport.
transport_class = transports.get_transport_class(transport, 'socks')
try:
transport_class.setup(pt_config)
except base.TransportSetupFailed, err:
log.warning("Transport '%s' failed during setup()." % transport)
ptclient.reportMethodError(transport, "setup() failed: %s." % (err))
continue
try:
addrport = launch_transport.launch_transport_listener(transport, None, 'socks', None, pt_config)
except transports.TransportNotFound:
log.warning("Could not find transport '%s'" % transport)
ptclient.reportMethodError(transport, "Could not find transport.")
continue
except error.CannotListenError, e:
error_msg = "Could not set up listener (%s:%s) for '%s' (%s)." % \
(e.interface, e.port, transport, e.socketError[1])
log.warning(error_msg)
ptclient.reportMethodError(transport, error_msg)
continue
should_start_event_loop = True
log.debug("Successfully launched '%s' at '%s'" % (transport, log.safe_addr_str(str(addrport))))
ptclient.reportMethodSuccess(transport, "socks5", addrport, None, None)
ptclient.reportMethodsEnd()
if should_start_event_loop:
log.info("Starting up the event loop.")
reactor.run()
else:
log.info("No transports launched. Nothing to do.")
obfsproxy-0.2.13/obfsproxy/managed/server.py 0000664 0000000 0000000 00000012245 12570034732 0021146 0 ustar 00root root 0000000 0000000 #!/usr/bin/python
# -*- coding: utf-8 -*-
from twisted.internet import reactor, error
from pyptlib.server import ServerTransportPlugin
from pyptlib.config import EnvError
import obfsproxy.transports.transports as transports
import obfsproxy.transports.base as base
import obfsproxy.network.launch_transport as launch_transport
import obfsproxy.common.log as logging
import obfsproxy.common.transport_config as transport_config
import pprint
log = logging.get_obfslogger()
def do_managed_server():
"""Start the managed-proxy protocol as a server."""
should_start_event_loop = False
ptserver = ServerTransportPlugin()
try:
ptserver.init(transports.transports.keys())
except EnvError, err:
log.warning("Server managed-proxy protocol failed (%s)." % err)
return
log.debug("pyptlib gave us the following data:\n'%s'", pprint.pformat(ptserver.getDebugData()))
ext_orport = ptserver.config.getExtendedORPort()
authcookie = ptserver.config.getAuthCookieFile()
orport = ptserver.config.getORPort()
server_transport_options = ptserver.config.getServerTransportOptions()
for transport, transport_bindaddr in ptserver.getBindAddresses().items():
# Will hold configuration parameters for the pluggable transport module.
pt_config = transport_config.TransportConfig()
pt_config.setStateLocation(ptserver.config.getStateLocation())
if ext_orport:
pt_config.setListenerMode("ext_server")
else:
pt_config.setListenerMode("server")
pt_config.setObfsproxyMode("managed")
transport_options = ""
if server_transport_options and transport in server_transport_options:
transport_options = server_transport_options[transport]
pt_config.setServerTransportOptions(transport_options)
        # Call setup() method for this transport.
transport_class = transports.get_transport_class(transport, 'server')
try:
transport_class.setup(pt_config)
except base.TransportSetupFailed, err:
log.warning("Transport '%s' failed during setup()." % transport)
ptserver.reportMethodError(transport, "setup() failed: %s." % (err))
continue
try:
if ext_orport:
addrport = launch_transport.launch_transport_listener(transport,
transport_bindaddr,
'ext_server',
ext_orport,
pt_config,
ext_or_cookie_file=authcookie)
else:
addrport = launch_transport.launch_transport_listener(transport,
transport_bindaddr,
'server',
orport,
pt_config)
except transports.TransportNotFound:
log.warning("Could not find transport '%s'" % transport)
ptserver.reportMethodError(transport, "Could not find transport.")
continue
except error.CannotListenError, e:
error_msg = "Could not set up listener (%s:%s) for '%s' (%s)." % \
(e.interface, e.port, transport, e.socketError[1])
log.warning(error_msg)
ptserver.reportMethodError(transport, error_msg)
continue
should_start_event_loop = True
extra_log = "" # Include server transport options in the log message if we got 'em
if transport_options:
extra_log = " (server transport options: '%s')" % str(transport_options)
log.debug("Successfully launched '%s' at '%s'%s" % (transport, log.safe_addr_str(str(addrport)), extra_log))
# Invoke the transport-specific get_public_server_options()
# method to potentially filter the server transport options
# that should be passed on to Tor and eventually to BridgeDB.
public_options_dict = transport_class.get_public_server_options(transport_options)
public_options_str = None
# If the transport filtered its options:
if public_options_dict:
optlist = []
for k, v in public_options_dict.items():
optlist.append("%s=%s" % (k,v))
public_options_str = ",".join(optlist)
log.debug("do_managed_server: sending only public_options to tor: %s" % public_options_str)
# Report success for this transport.
# If public_options_str is None then all of the
# transport options from ptserver are used instead.
ptserver.reportMethodSuccess(transport, addrport, public_options_str)
ptserver.reportMethodsEnd()
if should_start_event_loop:
log.info("Starting up the event loop.")
reactor.run()
else:
log.info("No transports launched. Nothing to do.")
obfsproxy-0.2.13/obfsproxy/network/ 0000775 0000000 0000000 00000000000 12570034732 0017357 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.13/obfsproxy/network/__init__.py 0000664 0000000 0000000 00000000000 12570034732 0021456 0 ustar 00root root 0000000 0000000 obfsproxy-0.2.13/obfsproxy/network/buffer.py 0000664 0000000 0000000 00000003716 12570034732 0021211 0 ustar 00root root 0000000 0000000 class Buffer(object):
"""
    A Buffer is a simple FIFO buffer. You write() data into it, and you
    read() it back. You can also peek() at or drain() the data.
"""
def __init__(self, data=''):
"""
Initialize a buffer with 'data'.
"""
self.buffer = bytes(data)
def read(self, n=-1):
"""
Read and return 'n' bytes from the buffer.
If 'n' is negative, read and return the whole buffer.
If 'n' is larger than the size of the buffer, read and return
the whole buffer.
"""
if (n < 0) or (n > len(self.buffer)):
the_whole_buffer = self.buffer
self.buffer = bytes('')
return the_whole_buffer
data = self.buffer[:n]
self.buffer = self.buffer[n:]
return data
def write(self, data):
"""
Append 'data' to the buffer.
"""
self.buffer = self.buffer + data
def peek(self, n=-1):
"""
Return 'n' bytes from the buffer, without draining them.
If 'n' is negative, return the whole buffer.
If 'n' is larger than the size of the buffer, return the whole
buffer.
"""
if (n < 0) or (n > len(self.buffer)):
return self.buffer
return self.buffer[:n]
def drain(self, n=-1):
"""
Drain 'n' bytes from the buffer.
If 'n' is negative, drain the whole buffer.
If 'n' is larger than the size of the buffer, drain the whole
buffer.
"""
if (n < 0) or (n > len(self.buffer)):
self.buffer = bytes('')
return
self.buffer = self.buffer[n:]
return
def __len__(self):
"""Returns length of buffer. Used in len()."""
return len(self.buffer)
def __nonzero__(self):
"""
Returns True if the buffer is non-empty.
Used in truth-value testing.
"""
        return len(self.buffer) > 0
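# Minimal usage sketch (illustrative): write() appends data, peek() inspects
# it without consuming, read() and drain() consume it.
if __name__ == '__main__':
    b = Buffer('hello')
    assert b.peek(2) == 'he'
    assert b.read(2) == 'he'
    b.drain(1)
    assert b.read() == 'lo'
    assert len(b) == 0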
obfsproxy-0.2.13/obfsproxy/network/extended_orport.py 0000664 0000000 0000000 00000034663 12570034732 0023152 0 ustar 00root root 0000000 0000000 import os
from twisted.internet import reactor
import obfsproxy.common.log as logging
import obfsproxy.common.serialize as srlz
import obfsproxy.common.hmac_sha256 as hmac_sha256
import obfsproxy.common.rand as rand
import obfsproxy.network.network as network
log = logging.get_obfslogger()
# Authentication states:
STATE_WAIT_FOR_AUTH_TYPES = 1
STATE_WAIT_FOR_SERVER_NONCE = 2
STATE_WAIT_FOR_AUTH_RESULTS = 3
STATE_WAIT_FOR_OKAY = 4
STATE_OPEN = 5
# Authentication protocol parameters:
AUTH_PROTOCOL_HEADER_LEN = 4
# Safe-cookie authentication parameters:
AUTH_SERVER_TO_CLIENT_CONST = "ExtORPort authentication server-to-client hash"
AUTH_CLIENT_TO_SERVER_CONST = "ExtORPort authentication client-to-server hash"
AUTH_NONCE_LEN = 32
AUTH_HASH_LEN = 32
# Extended ORPort commands:
# Transport-to-Bridge
EXT_OR_CMD_TB_DONE = 0x0000
EXT_OR_CMD_TB_USERADDR = 0x0001
EXT_OR_CMD_TB_TRANSPORT = 0x0002
# Bridge-to-Transport
EXT_OR_CMD_BT_OKAY = 0x1000
EXT_OR_CMD_BT_DENY = 0x1001
EXT_OR_CMD_BT_CONTROL = 0x1002
# Authentication cookie parameters
AUTH_COOKIE_LEN = 32
AUTH_COOKIE_HEADER_LEN = 32
AUTH_COOKIE_FILE_LEN = AUTH_COOKIE_LEN + AUTH_COOKIE_HEADER_LEN
AUTH_COOKIE_HEADER = "! Extended ORPort Auth Cookie !\x0a"
def _read_auth_cookie(cookie_path):
"""
Read an Extended ORPort authentication cookie from 'cookie_path' and return it.
Throw CouldNotReadCookie if we couldn't read the cookie.
"""
# Check if file exists.
if not os.path.exists(cookie_path):
raise CouldNotReadCookie("'%s' doesn't exist" % cookie_path)
# Check its size and make sure it's correct before opening.
auth_cookie_file_size = os.path.getsize(cookie_path)
if auth_cookie_file_size != AUTH_COOKIE_FILE_LEN:
raise CouldNotReadCookie("Cookie '%s' is the wrong size (%i bytes instead of %d)" % \
(cookie_path, auth_cookie_file_size, AUTH_COOKIE_FILE_LEN))
try:
        with open(cookie_path, 'rb', 0) as f:
header = f.read(AUTH_COOKIE_HEADER_LEN) # first 32 bytes are the header
if header != AUTH_COOKIE_HEADER:
raise CouldNotReadCookie("Corrupted cookie file header '%s'." % header)
            return f.read(AUTH_COOKIE_LEN) # next 32 bytes should be the cookie.
except IOError, exc:
raise CouldNotReadCookie("Unable to read '%s' (%s)" % (cookie_path, exc))
class ExtORPortProtocol(network.GenericProtocol):
"""
Represents a connection to the Extended ORPort. It begins by
completing the Extended ORPort authentication, then sending some
Extended ORPort commands, and finally passing application-data
like it would do to an ORPort.
Specifically, after completing the Extended ORPort authentication
we send a USERADDR command with the address of our client, a
TRANSPORT command with the name of the pluggable transport, and a
DONE command to signal that we are done with the Extended ORPort
protocol. Then we wait for an OKAY command back from the server to
start sending application-data.
Attributes:
state: The protocol state the connections is currently at.
ext_orport_addr: The address of the Extended ORPort.
peer_addr: The address of the client, in the other side of the
circuit, that connected to our downstream side.
cookie_file: Path to the Extended ORPort authentication cookie.
client_nonce: A random nonce used in the Extended ORPort
authentication protocol.
client_hash: Our hash which is used to verify our knowledge of the
authentication cookie in the Extended ORPort Authentication
protocol.
"""
def __init__(self, circuit, ext_orport_addr, cookie_file, peer_addr, transport_name):
self.state = STATE_WAIT_FOR_AUTH_TYPES
self.name = "ext_%s" % hex(id(self))
self.ext_orport_addr = ext_orport_addr
self.peer_addr = peer_addr
self.cookie_file = cookie_file
self.client_nonce = rand.random_bytes(AUTH_NONCE_LEN)
self.client_hash = None
self.transport_name = transport_name
network.GenericProtocol.__init__(self, circuit)
def connectionMade(self):
pass
def dataReceived(self, data_rcvd):
"""
We got some data, process it according to our current state.
"""
if self.closed:
log.debug("%s: ExtORPort dataReceived called while closed. Ignoring.", self.name)
return
self.buffer.write(data_rcvd)
if self.state == STATE_WAIT_FOR_AUTH_TYPES:
try:
self._handle_auth_types()
except NeedMoreData:
return
except UnsupportedAuthTypes, err:
log.warning("Extended ORPort Cookie Authentication failed: %s" % err)
self.close()
return
self.state = STATE_WAIT_FOR_SERVER_NONCE
if self.state == STATE_WAIT_FOR_SERVER_NONCE:
try:
self._handle_server_nonce_and_hash()
except NeedMoreData:
return
except (CouldNotReadCookie, RcvdInvalidAuth) as err:
log.warning("Extended ORPort Cookie Authentication failed: %s" % err)
self.close()
return
self.state = STATE_WAIT_FOR_AUTH_RESULTS
if self.state == STATE_WAIT_FOR_AUTH_RESULTS:
try:
self._handle_auth_results()
except NeedMoreData:
return
except AuthFailed, err:
log.warning("Extended ORPort Cookie Authentication failed: %s" % err)
self.close()
return
# We've finished the Extended ORPort authentication
# protocol. Now send all the Extended ORPort commands we
# want to send.
try:
self._send_ext_orport_commands()
except CouldNotWriteExtCommand:
self.close()
return
self.state = STATE_WAIT_FOR_OKAY
if self.state == STATE_WAIT_FOR_OKAY:
try:
self._handle_okay()
except NeedMoreData:
return
except ExtORPortProtocolFailed as err:
log.warning("Extended ORPort Cookie Authentication failed: %s" % err)
self.close()
return
self.state = STATE_OPEN
if self.state == STATE_OPEN:
# We are done with the Extended ORPort protocol, we now
# treat the Extended ORPort as a normal ORPort.
if not self.circuit.circuitIsReady():
self.circuit.setUpstreamConnection(self)
self.circuit.dataReceived(self.buffer, self)
def _send_ext_orport_commands(self):
"""
Send all the Extended ORPort commands we want to send.
Throws CouldNotWriteExtCommand.
"""
# Send the actual IP address of our client to the Extended
# ORPort, then signal that we are done and that we want to
# start transferring application-data.
self._write_ext_orport_command(EXT_OR_CMD_TB_USERADDR, '%s:%s' % (self.peer_addr.host, self.peer_addr.port))
self._write_ext_orport_command(EXT_OR_CMD_TB_TRANSPORT, '%s' % self.transport_name)
self._write_ext_orport_command(EXT_OR_CMD_TB_DONE, '')
def _handle_auth_types(self):
"""
Read authentication types that the server supports, select
one, and send it to the server.
Throws NeedMoreData and UnsupportedAuthTypes.
"""
if len(self.buffer) < 2:
raise NeedMoreData('Not enough data')
data = self.buffer.peek()
if '\x00' not in data: # haven't received EndAuthTypes yet
log.debug("%s: Got some auth types data but no EndAuthTypes yet." % self.name)
raise NeedMoreData('Not EndAuthTypes.')
# Drain all data up to (and including) the EndAuthTypes.
log.debug("%s: About to drain %d bytes from %d." % \
(self.name, data.index('\x00')+1, len(self.buffer)))
data = self.buffer.read(data.index('\x00')+1)
if '\x01' not in data:
raise UnsupportedAuthTypes("%s: Could not find supported auth type (%s)." % (self.name, repr(data)))
# Send back chosen auth type.
self.write("\x01") # Static, since we only support auth type '1' atm.
# Since we are doing the safe-cookie protocol, now send our
# nonce.
# XXX This will need to be refactored out of this function in
# the future, when we have more than one auth types.
self.write(self.client_nonce)
def _handle_server_nonce_and_hash(self):
"""
Get the server's nonce and hash, validate them and send our own hash.
Throws NeedMoreData and RcvdInvalidAuth and CouldNotReadCookie.
"""
if len(self.buffer) < AUTH_HASH_LEN + AUTH_NONCE_LEN:
raise NeedMoreData('Need more data')
server_hash = self.buffer.read(AUTH_HASH_LEN)
server_nonce = self.buffer.read(AUTH_NONCE_LEN)
auth_cookie = _read_auth_cookie(self.cookie_file)
proper_server_hash = hmac_sha256.hmac_sha256_digest(auth_cookie,
AUTH_SERVER_TO_CLIENT_CONST + self.client_nonce + server_nonce)
log.debug("%s: client_nonce: %s\nserver_nonce: %s\nserver_hash: %s\nproper_server_hash: %s\n" % \
(self.name, repr(self.client_nonce), repr(server_nonce), repr(server_hash), repr(proper_server_hash)))
if proper_server_hash != server_hash:
raise RcvdInvalidAuth("%s: Invalid server hash. Authentication failed." % (self.name))
client_hash = hmac_sha256.hmac_sha256_digest(auth_cookie,
AUTH_CLIENT_TO_SERVER_CONST + self.client_nonce + server_nonce)
# Send our hash.
self.write(client_hash)
def _handle_auth_results(self):
"""
Get the authentication results. See if the authentication
succeeded or failed, and take appropriate actions.
Throws NeedMoreData and AuthFailed.
"""
if len(self.buffer) < 1:
raise NeedMoreData("Not enough data for body.")
result = self.buffer.read(1)
if result != '\x01':
raise AuthFailed("%s: Authentication failed (%s)!" % (self.name, repr(result)))
log.debug("%s: Authentication successful!" % self.name)
def _handle_okay(self):
"""
We've sent a DONE command to the Extended ORPort and we
now check if the Extended ORPort liked it or not.
Throws NeedMoreData and ExtORPortProtocolFailed.
"""
cmd, _ = self._get_ext_orport_command(self.buffer)
if cmd != EXT_OR_CMD_BT_OKAY:
raise ExtORPortProtocolFailed("%s: Unexpected command received (%d) after sending DONE." % (self.name, cmd))
def _get_ext_orport_command(self, buf):
"""
Reads an Extended ORPort command from 'buf'. Returns (command,
body) if it was well-formed, where 'command' is the Extended
ORPort command type, and 'body' is its body.
Throws NeedMoreData.
"""
if len(buf) < AUTH_PROTOCOL_HEADER_LEN:
raise NeedMoreData("Not enough data for header.")
header = buf.peek(AUTH_PROTOCOL_HEADER_LEN)
cmd = srlz.ntohs(header[:2])
bodylen = srlz.ntohs(header[2:4])
if (bodylen > len(buf) - AUTH_PROTOCOL_HEADER_LEN): # Not all here yet
raise NeedMoreData("Not enough data for body.")
# We have a whole command. Drain the header.
        buf.drain(AUTH_PROTOCOL_HEADER_LEN)
body = buf.read(bodylen)
return (cmd, body)
def _write_ext_orport_command(self, command, body):
"""
Serialize 'command' and 'body' to an Extended ORPort command
and send it to the Extended ORPort.
Throws CouldNotWriteExtCommand
"""
payload = ''
if len(body) > 65535: # XXX split instead of quitting?
log.warning("Obfsproxy was asked to send Extended ORPort command with more than "
"65535 bytes of body. This is not supported by the Extended ORPort "
"protocol. Please file a bug.")
raise CouldNotWriteExtCommand("Too large body.")
if command > 65535:
raise CouldNotWriteExtCommand("Not supported command type.")
payload += srlz.htons(command)
payload += srlz.htons(len(body))
payload += body # body might be absent (empty string)
self.write(payload)
class ExtORPortClientFactory(network.StaticDestinationClientFactory):
def __init__(self, circuit, cookie_file, peer_addr, transport_name):
self.circuit = circuit
self.peer_addr = peer_addr
self.cookie_file = cookie_file
self.transport_name = transport_name
self.name = "fact_ext_c_%s" % hex(id(self))
def buildProtocol(self, addr):
return ExtORPortProtocol(self.circuit, addr, self.cookie_file, self.peer_addr, self.transport_name)
class ExtORPortServerFactory(network.StaticDestinationClientFactory):
def __init__(self, ext_or_addrport, ext_or_cookie_file, transport_name, transport_class, pt_config):
self.ext_or_host = ext_or_addrport[0]
self.ext_or_port = ext_or_addrport[1]
self.cookie_file = ext_or_cookie_file
self.transport_name = transport_name
self.transport_class = transport_class
self.pt_config = pt_config
self.name = "fact_ext_s_%s" % hex(id(self))
def startFactory(self):
log.debug("%s: Starting up Extended ORPort server factory." % self.name)
def buildProtocol(self, addr):
log.debug("%s: New connection from %s:%d." % (self.name, log.safe_addr_str(addr.host), addr.port))
circuit = network.Circuit(self.transport_class())
# XXX instantiates a new factory for each client
clientFactory = ExtORPortClientFactory(circuit, self.cookie_file, addr, self.transport_name)
reactor.connectTCP(self.ext_or_host, self.ext_or_port, clientFactory)
return network.StaticDestinationProtocol(circuit, 'server', addr)
# XXX Exceptions need more thought and work. Most of these can be generalized.
class RcvdInvalidAuth(Exception): pass
class AuthFailed(Exception): pass
class UnsupportedAuthTypes(Exception): pass
class ExtORPortProtocolFailed(Exception): pass
class CouldNotWriteExtCommand(Exception): pass
class CouldNotReadCookie(Exception): pass
class NeedMoreData(Exception): pass
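# Wire-format sketch (illustrative): an Extended ORPort command is a 2-byte
# command type and a 2-byte body length, both in network byte order, followed
# by the body, exactly as _write_ext_orport_command() serializes it.
if __name__ == '__main__':
    payload = srlz.htons(EXT_OR_CMD_TB_TRANSPORT) + srlz.htons(5) + 'obfs3'
    assert payload == '\x00\x02\x00\x05obfs3'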
obfsproxy-0.2.13/obfsproxy/network/http.py 0000664 0000000 0000000 00000011475 12570034732 0020720 0 ustar 00root root 0000000 0000000 from base64 import b64encode
from twisted.internet.error import ConnectError
from twisted.internet.interfaces import IStreamClientEndpoint
from twisted.internet.protocol import ClientFactory
from twisted.internet.defer import Deferred
from twisted.web.http import HTTPClient
from zope.interface import implementer
import obfsproxy.common.log as logging
"""
HTTP CONNECT Client:
Next up on the list of things one would expect Twisted to provide, but does
not, is an endpoint for outgoing connections through an HTTP CONNECT proxy.
Limitations:
* Only Basic Authentication is supported (RFC2617).
"""
log = logging.get_obfslogger()
# Create the body of the RFC2617 Basic Authentication 'Authorization' header.
def _makeBasicAuth(username, password):
if username and password:
return "Basic " + b64encode(username + ':' + password)
elif username or password:
raise ValueError("expecting both a username *and* password")
else:
return None
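# For example (illustrative): _makeBasicAuth('user', 'pass') returns
# "Basic dXNlcjpwYXNz", i.e. "Basic " plus base64("user:pass").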
class HTTPConnectClient(HTTPClient):
deferred = None
host = None
port = None
proxy_addr = None
auth = None
instance_factory = None
instance = None
def __init__(self, deferred, host, port, proxy_addr, auth, instance_factory):
self.deferred = deferred
self.host = host
self.port = port
self.proxy_addr = proxy_addr
self.auth = auth
self.instance_factory = instance_factory
def connectionMade(self):
log.debug("HTTPConnectClient: Proxy connection established: %s:%d",
log.safe_addr_str(self.proxy_addr.host), self.proxy_addr.port)
self.sendCommand("CONNECT", "%s:%d" % (self.host, self.port))
if self.auth:
self.sendHeader("Proxy-Authorization", self.auth)
self.endHeaders()
def connectionLost(self, reason):
if self.instance:
self.instance.connectionLost(reason)
else:
            # Some HTTP proxies (e.g. polipo) are rude and opt to close the
            # connection instead of sending a status code indicating failure.
self.onConnectionError(ConnectError("Proxy connection closed during setup"))
def handleEndHeaders(self):
log.info("HTTPConnectClient: Connected to %s:%d via %s:%d",
log.safe_addr_str(self.host), self.port,
log.safe_addr_str(self.proxy_addr.host), self.proxy_addr.port)
self.setRawMode()
self.instance = self.instance_factory.buildProtocol(self.proxy_addr)
self.instance.makeConnection(self.transport)
self.deferred.callback(self.instance)
tmp = self.clearLineBuffer()
if tmp:
self.instance.dataReceived(tmp)
def handleStatus(self, version, status, message):
if status != "200":
self.onConnectionError(ConnectError("Proxy returned status: %s" % status))
def rawDataReceived(self, data):
log.debug("HTTPConnectClient: Received %d bytes of proxied data", len(data))
if self.instance:
self.instance.dataReceived(data)
else:
raise RuntimeError("HTTPConnectClient.rawDataReceived() called with no instance")
def onConnectionError(self, reason):
if self.deferred:
log.warning("HTTPConnectClient: Connect error: %s", reason)
self.deferred.errback(reason)
self.deferred = None
self.transport.loseConnection()
class HTTPConnectClientFactory(ClientFactory):
deferred = None
host = None
port = None
auth = None
instance_factory = None
def __init__(self, host, port, auth, instance_factory):
self.deferred = Deferred()
self.host = host
self.port = port
self.auth = auth
self.instance_factory = instance_factory
def buildProtocol(self, addr):
proto = HTTPConnectClient(self.deferred, self.host, self.port, addr, self.auth, self.instance_factory)
return proto
def startedConnecting(self, connector):
        self.instance_factory.startedConnecting(connector)
def clientConnectionFailed(self, connector, reason):
self.instance_factory.clientConnectionFailed(connector, reason)
def clientConnectionLost(self, connector, reason):
self.instance_factory.clientConnectionLost(connector, reason)
@implementer(IStreamClientEndpoint)
class HTTPConnectClientEndpoint(object):
host = None
port = None
endpoint = None
auth = None
def __init__(self, host, port, endpoint, username=None, password=None):
self.host = host
self.port = port
self.endpoint = endpoint
self.auth = _makeBasicAuth(username, password)
def connect(self, instance_factory):
f = HTTPConnectClientFactory(self.host, self.port, self.auth, instance_factory)
d = self.endpoint.connect(f)
d.addCallback(lambda proto: f.deferred)
return d
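# Usage sketch (hypothetical addresses): wrap a TCP endpoint pointing at the
# proxy, then connect() with an ordinary protocol factory, as with any other
# Twisted stream client endpoint.
#
#     from twisted.internet import reactor
#     from twisted.internet.endpoints import TCP4ClientEndpoint
#     from twisted.internet.protocol import Factory, Protocol
#     proxy_ep = TCP4ClientEndpoint(reactor, '127.0.0.1', 8080)
#     ep = HTTPConnectClientEndpoint('bridge.example.com', 443, proxy_ep)
#     d = ep.connect(Factory.forProtocol(Protocol))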
obfsproxy-0.2.13/obfsproxy/network/launch_transport.py 0000664 0000000 0000000 00000003751 12570034732 0023325 0 ustar 00root root 0000000 0000000 import obfsproxy.network.network as network
import obfsproxy.transports.transports as transports
import obfsproxy.network.socks as socks
import obfsproxy.network.extended_orport as extended_orport
from twisted.internet import reactor
def launch_transport_listener(transport, bindaddr, role, remote_addrport, pt_config, ext_or_cookie_file=None):
"""
Launch a listener for 'transport' in role 'role' (socks/client/server/ext_server).
If 'bindaddr' is set, then listen on bindaddr. Otherwise, listen
on an ephemeral port on localhost.
'remote_addrport' is the TCP/IP address of the other end of the
circuit. It's not used if we are in 'socks' role.
'pt_config' contains configuration options (such as the state location)
which are of interest to the pluggable transport.
'ext_or_cookie_file' is the filesystem path where the Extended
ORPort Authentication cookie is stored. It's only used in
'ext_server' mode.
Return a tuple (addr, port) representing where we managed to bind.
Throws obfsproxy.transports.transports.TransportNotFound if the
transport could not be found.
Throws twisted.internet.error.CannotListenError if the listener
could not be set up.
"""
transport_class = transports.get_transport_class(transport, role)
listen_host = bindaddr[0] if bindaddr else 'localhost'
listen_port = int(bindaddr[1]) if bindaddr else 0
if role == 'socks':
factory = socks.OBFSSOCKSv5Factory(transport_class, pt_config)
elif role == 'ext_server':
assert(remote_addrport and ext_or_cookie_file)
factory = extended_orport.ExtORPortServerFactory(remote_addrport, ext_or_cookie_file, transport, transport_class, pt_config)
else:
assert(remote_addrport)
factory = network.StaticDestinationServerFactory(remote_addrport, role, transport_class, pt_config)
addrport = reactor.listenTCP(listen_port, factory, interface=listen_host)
return (addrport.getHost().host, addrport.getHost().port)
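# Usage sketch (hypothetical arguments, mirroring managed/client.py): launch a
# client-side SOCKS listener for a transport on an ephemeral localhost port.
#
#     addr, port = launch_transport_listener('obfs3', None, 'socks', None, pt_config)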
obfsproxy-0.2.13/obfsproxy/network/network.py 0000664 0000000 0000000 00000042701 12570034732 0021426 0 ustar 00root root 0000000 0000000 from twisted.internet import reactor
from twisted.internet.protocol import Protocol, Factory
import obfsproxy.common.log as logging
import obfsproxy.common.heartbeat as heartbeat
import obfsproxy.network.buffer as obfs_buf
import obfsproxy.transports.base as base
log = logging.get_obfslogger()
"""
Networking subsystem:
A "Connection" is a bidirectional communications channel, usually
backed by a network socket. For example, the communication channel
between tor and obfsproxy is a 'connection'. In the code, it's
represented by a Twisted's twisted.internet.protocol.Protocol.
A 'Circuit' is a pair of connections, referred to as the 'upstream'
and 'downstream' connections. The upstream connection of a circuit
communicates in cleartext with the higher-level program that wishes to
make use of our obfuscation service. The downstream connection
communicates in an obfuscated fashion with the remote peer that the
higher-level client wishes to contact. In the code, it's represented
by the custom Circuit class.
The diagram below might help demonstrate the relationship between
connections and circuits:
                               Downstream

 'Circuit C'     'Connection CD'         'Connection SD'   'Circuit S'
                  +-----------+           +-----------+
  Upstream   +----|Obfsproxy c|-----------|Obfsproxy s|----+   Upstream
             |    +-----------+     ^     +-----------+    |
 'Connection CU'                    |                  'Connection SU'
   +------------+               Sent over            +--------------+
   | Tor Client |               the net              |  Tor Bridge  |
   +------------+                                    +--------------+
In the above diagram, "Obfsproxy c" is the client-side obfsproxy, and
"Obfsproxy s" is the server-side obfsproxy. "Connection CU" is the
Client's Upstream connection, the communication channel between tor
and obfsproxy. "Connection CD" is the Client's Downstream connection,
the communication channel between obfsproxy and the remote peer. These
two connections form the client's circuit "Circuit C".
A 'listener' is a listening socket bound to a particular obfuscation
protocol, represented using Twisted's t.i.p.Factory. Connecting to a
listener creates one connection of a circuit, and causes this program
to initiate the other connection (possibly after receiving in-band
instructions about where to connect to). A listener is said to be a
'client' listener if connecting to it creates the upstream connection,
and a 'server' listener if connecting to it creates the downstream
connection.
There are two kinds of client listeners: a 'simple' client listener
always connects to the same remote peer every time it needs to
initiate a downstream connection; a 'socks' client listener can be
told to connect to an arbitrary remote peer using the SOCKS protocol.
"""
class Circuit(Protocol):
"""
A Circuit holds a pair of connections. The upstream connection and
the downstream. The circuit proxies data from one connection to
the other.
Attributes:
transport: the pluggable transport we should use to
obfuscate traffic on this circuit.
downstream: the downstream connection
upstream: the upstream connection
"""
def __init__(self, transport):
self.transport = transport # takes a transport
self.downstream = None # takes a connection
self.upstream = None # takes a connection
self.closed = False # True if the circuit is closed.
self.name = "circ_%s" % hex(id(self))
def setDownstreamConnection(self, conn):
"""
Set the downstream connection of a circuit.
"""
log.debug("%s: Setting downstream connection (%s)." % (self.name, conn.name))
assert(not self.downstream)
self.downstream = conn
if self.circuitIsReady():
self.circuitCompleted(self.upstream)
def setUpstreamConnection(self, conn):
"""
Set the upstream connection of a circuit.
"""
log.debug("%s: Setting upstream connection (%s)." % (self.name, conn.name))
assert(not self.upstream)
self.upstream = conn
if self.circuitIsReady():
self.circuitCompleted(self.downstream)
def circuitIsReady(self):
"""
Return True if the circuit is completed.
"""
return self.downstream and self.upstream
def circuitCompleted(self, conn_to_flush):
"""
Circuit was just completed; that is, its endpoints are now
connected. Do all the things we have to do now.
"""
if self.closed:
log.debug("%s: Completed circuit while closed. Ignoring.", self.name)
return
log.debug("%s: Circuit completed." % self.name)
# Set us as the circuit of our pluggable transport instance.
self.transport.circuit = self
# Call the transport-specific circuitConnected method since
# this is a good time to perform a handshake.
self.transport.circuitConnected()
# Do a dummy dataReceived on the initiating connection in case
# it has any buffered data that must be flushed to the network.
#
# (We use callLater because we want to return back to the
# event loop so that any messages we send in circuitConnected get sent
# to the network immediately.)
reactor.callLater(0.01, conn_to_flush.dataReceived, '')
def dataReceived(self, data, conn):
"""
We received 'data' on 'conn'. Pass the data to our transport,
and then proxy it to the other side. # XXX 'data' is a buffer.
Requires both downstream and upstream connections to be set.
"""
if self.closed:
log.debug("%s: Calling circuit's dataReceived while closed. Ignoring.", self.name)
return
assert(self.downstream and self.upstream)
assert((conn is self.downstream) or (conn is self.upstream))
try:
if conn is self.downstream:
log.debug("%s: downstream: Received %d bytes." % (self.name, len(data)))
self.transport.receivedDownstream(data)
else:
log.debug("%s: upstream: Received %d bytes." % (self.name, len(data)))
self.transport.receivedUpstream(data)
except base.PluggableTransportError, err: # Our transport didn't like that data.
log.info("%s: %s: Closing circuit." % (self.name, str(err)))
self.close()
def close(self, reason=None, side=None):
"""
Tear down the circuit. The reason for the torn down circuit is given in
'reason' and 'side' tells us where it happened: either upstream or
downstream.
"""
if self.closed:
return # NOP if already closed
log.debug("%s: Tearing down circuit." % self.name)
self.closed = True
if self.downstream:
self.downstream.close()
if self.upstream:
self.upstream.close()
self.transport.circuitDestroyed(reason, side)
class GenericProtocol(Protocol, object):
"""
Generic obfsproxy connection. Contains useful methods and attributes.
Attributes:
circuit: The circuit object this connection belongs to.
buffer: Buffer that holds data that can't be proxied right
away. This can happen because the circuit is not yet
complete, or because the pluggable transport needs more
data before deciding what to do.
"""
def __init__(self, circuit):
self.circuit = circuit
self.buffer = obfs_buf.Buffer()
self.closed = False # True if connection is closed.
def connectionLost(self, reason):
log.debug("%s: Connection was lost (%s)." % (self.name, reason.getErrorMessage()))
self.close()
def connectionFailed(self, reason):
log.debug("%s: Connection failed to connect (%s)." % (self.name, reason.getErrorMessage()))
self.close()
def write(self, buf):
"""
Write 'buf' to the underlying transport.
"""
if self.closed:
log.debug("%s: Calling write() while connection is closed. Ignoring.", self.name)
return
log.debug("%s: Writing %d bytes." % (self.name, len(buf)))
self.transport.write(buf)
def close(self, also_close_circuit=True):
"""
Close the connection.
"""
if self.closed:
return # NOP if already closed
log.debug("%s: Closing connection." % self.name)
self.closed = True
self.transport.loseConnection()
if also_close_circuit:
self.circuit.close()
class StaticDestinationProtocol(GenericProtocol):
"""
Represents a connection to a static destination (as opposed to a
SOCKS connection).
Attributes:
mode: 'server' or 'client'
circuit: The circuit this connection belongs to.
buffer: Buffer that holds data that can't be proxied right
away. This can happen because the circuit is not yet
complete, or because the pluggable transport needs more
data before deciding what to do.
"""
def __init__(self, circuit, mode, peer_addr):
self.mode = mode
self.peer_addr = peer_addr
self.name = "conn_%s" % hex(id(self))
GenericProtocol.__init__(self, circuit)
def connectionMade(self):
"""
Callback for when a connection is successfully established.
Find the connection's direction in the circuit, and register
it in our circuit.
"""
# Find the connection's direction and register it in the circuit.
if self.mode == 'client' and not self.circuit.upstream:
log.debug("%s: connectionMade (client): " \
"Setting it as upstream on our circuit." % self.name)
self.circuit.setUpstreamConnection(self)
elif self.mode == 'client':
log.debug("%s: connectionMade (client): " \
"Setting it as downstream on our circuit." % self.name)
self.circuit.setDownstreamConnection(self)
elif self.mode == 'server' and not self.circuit.downstream:
log.debug("%s: connectionMade (server): " \
"Setting it as downstream on our circuit." % self.name)
# Gather some statistics for our heartbeat.
heartbeat.heartbeat.register_connection(self.peer_addr.host)
self.circuit.setDownstreamConnection(self)
elif self.mode == 'server':
log.debug("%s: connectionMade (server): " \
"Setting it as upstream on our circuit." % self.name)
self.circuit.setUpstreamConnection(self)
def dataReceived(self, data):
"""
We received some data from the network. See if we have a
        complete circuit, and pass the data to it so that it gets proxied.
XXX: Can also be called with empty 'data' because of
Circuit.setDownstreamConnection(). Document or split function.
"""
if self.closed:
log.debug("%s: dataReceived called while closed. Ignoring.", self.name)
return
if (not self.buffer) and (not data):
log.debug("%s: dataReceived called without a reason.", self.name)
return
# Add the received data to the buffer.
self.buffer.write(data)
# Circuit is not fully connected yet, nothing to do here.
if not self.circuit.circuitIsReady():
log.debug("%s: Incomplete circuit; cached %d bytes." % (self.name, len(data)))
return
self.circuit.dataReceived(self.buffer, self)
class StaticDestinationClientFactory(Factory):
"""
Created when our listener receives a client connection. Makes the
connection that connects to the other end of the circuit.
"""
def __init__(self, circuit, mode):
self.circuit = circuit
self.mode = mode
self.name = "fact_c_%s" % hex(id(self))
def buildProtocol(self, addr):
return StaticDestinationProtocol(self.circuit, self.mode, addr)
def startedConnecting(self, connector):
log.debug("%s: Client factory started connecting." % self.name)
def clientConnectionLost(self, connector, reason):
pass # connectionLost event is handled on the Protocol.
def clientConnectionFailed(self, connector, reason):
log.debug("%s: Connection failed (%s)." % (self.name, reason.getErrorMessage()))
self.circuit.close()
class StaticDestinationServerFactory(Factory):
"""
Represents a listener. Upon receiving a connection, it creates a
circuit and tries to establish the other side of the circuit. It
then listens for data to obfuscate and proxy.
Attributes:
remote_host: The IP/DNS information of the host on the other side
of the circuit.
    remote_port: The TCP port of the host on the other side of the circuit.
mode: 'server' or 'client'
transport: the pluggable transport we should use to
obfuscate traffic on this connection.
pt_config: an object containing config options for the transport.
"""
def __init__(self, remote_addrport, mode, transport_class, pt_config):
self.remote_host = remote_addrport[0]
self.remote_port = int(remote_addrport[1])
self.mode = mode
self.transport_class = transport_class
self.pt_config = pt_config
self.name = "fact_s_%s" % hex(id(self))
assert(self.mode == 'client' or self.mode == 'server')
def startFactory(self):
log.debug("%s: Starting up static destination server factory." % self.name)
def buildProtocol(self, addr):
log.debug("%s: New connection from %s:%d." % (self.name, log.safe_addr_str(addr.host), addr.port))
circuit = Circuit(self.transport_class())
# XXX instantiates a new factory for each client
clientFactory = StaticDestinationClientFactory(circuit, self.mode)
if self.pt_config.proxy:
create_proxy_client(self.remote_host, self.remote_port,
self.pt_config.proxy,
clientFactory)
else:
reactor.connectTCP(self.remote_host, self.remote_port, clientFactory)
return StaticDestinationProtocol(circuit, self.mode, addr)
def create_proxy_client(host, port, proxy_spec, instance):
"""
host:
the host of the final destination
port:
the port number of the final destination
proxy_spec:
the address of the proxy server as a urlparse.SplitResult
instance:
is the instance to be associated with the endpoint
    Returns a deferred that will fire when the connection to the proxy server has been established.
"""
# Inline import so that txsocksx is an optional dependency.
from twisted.internet.endpoints import HostnameEndpoint
from txsocksx.client import SOCKS4ClientEndpoint, SOCKS5ClientEndpoint
from obfsproxy.network.http import HTTPConnectClientEndpoint
TCPPoint = HostnameEndpoint(reactor, proxy_spec.hostname, proxy_spec.port)
username = proxy_spec.username
password = proxy_spec.password
# Do some logging
log.debug("Connecting via %s proxy %s:%d",
proxy_spec.scheme, log.safe_addr_str(proxy_spec.hostname), proxy_spec.port)
if username or password:
log.debug("Using %s:%s as the proxy credentials",
log.safe_addr_str(username), log.safe_addr_str(password))
if proxy_spec.scheme in ["socks4a", "socks5"]:
if proxy_spec.scheme == "socks4a":
if username:
assert(password == None)
SOCKSPoint = SOCKS4ClientEndpoint(host, port, TCPPoint, user=username)
else:
SOCKSPoint = SOCKS4ClientEndpoint(host, port, TCPPoint)
elif proxy_spec.scheme == "socks5":
if username and password:
SOCKSPoint = SOCKS5ClientEndpoint(host, port, TCPPoint,
methods={'login': (username, password)})
else:
assert(username == None and password == None)
SOCKSPoint = SOCKS5ClientEndpoint(host, port, TCPPoint)
d = SOCKSPoint.connect(instance)
return d
elif proxy_spec.scheme == "http":
if username and password:
HTTPPoint = HTTPConnectClientEndpoint(host, port, TCPPoint,
username, password)
else:
assert(username == None and password == None)
HTTPPoint = HTTPConnectClientEndpoint(host, port, TCPPoint)
d = HTTPPoint.connect(instance)
return d
else:
# Should *NEVER* happen
raise RuntimeError("Invalid proxy scheme %s" % proxy_spec.scheme)
def ensure_outgoing_proxy_dependencies():
"""Make sure that we have the necessary dependencies to connect to
outgoing HTTP/SOCKS proxies.
Raises OutgoingProxyDepsFailure in case of error.
"""
# We can't connect to outgoing proxies without txsocksx.
try:
import txsocksx
except ImportError:
raise OutgoingProxyDepsFailure("We don't have txsocksx. Can't do proxy. Please install txsocksx.")
# We also need a recent version of twisted ( >= twisted-13.2.0)
import twisted
from twisted.python import versions
if twisted.version < versions.Version('twisted', 13, 2, 0):
raise OutgoingProxyDepsFailure("Outdated version of twisted (%s). Please upgrade to >= twisted-13.2.0" % twisted.version.short())
class OutgoingProxyDepsFailure(Exception): pass
obfsproxy-0.2.13/obfsproxy/network/socks.py 0000664 0000000 0000000 00000014124 12570034732 0021055 0 ustar 00root root 0000000 0000000 import csv
from twisted.internet import reactor, protocol
import obfsproxy.common.log as logging
import obfsproxy.network.network as network
import obfsproxy.network.socks5 as socks5
import obfsproxy.transports.base as base
log = logging.get_obfslogger()
def _split_socks_args(args_str):
"""
Given a string containing the SOCKS arguments (delimited by
semicolons, and with semicolons and backslashes escaped), parse it
and return a list of the unescaped SOCKS arguments.
"""
return csv.reader([args_str], delimiter=';', escapechar='\\').next()
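# For example (illustrative): semicolons delimit arguments, and ';' or '\'
# characters inside an argument are backslash-escaped.
#
#     >>> _split_socks_args('shared-secret=rahasia;secrets-file=/tmp/blob')
#     ['shared-secret=rahasia', 'secrets-file=/tmp/blob']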
class OBFSSOCKSv5Outgoing(socks5.SOCKSv5Outgoing, network.GenericProtocol):
"""
Represents a downstream connection from the SOCKS server to the
destination.
It subclasses socks5.SOCKSv5Outgoing, so that data can be passed to the
pluggable transport before proxying.
Attributes:
circuit: The circuit this connection belongs to.
buffer: Buffer that holds data that can't be proxied right
away. This can happen because the circuit is not yet
complete, or because the pluggable transport needs more
data before deciding what to do.
"""
name = None
def __init__(self, socksProtocol):
"""
Constructor.
'socksProtocol' is a 'SOCKSv5Protocol' object.
"""
self.name = "socks_down_%s" % hex(id(self))
self.socks = socksProtocol
network.GenericProtocol.__init__(self, socksProtocol.circuit)
return super(OBFSSOCKSv5Outgoing, self).__init__(socksProtocol)
def connectionMade(self):
self.socks.set_up_circuit(self)
        # XXX: The transport should be doing this after handshaking, since it
        # calls self.socks.sendReply(). When this changes to defer sending the
        # reply back, set self.socks.otherConn here.
super(OBFSSOCKSv5Outgoing, self).connectionMade()
def dataReceived(self, data):
log.debug("%s: Recived %d bytes." % (self.name, len(data)))
assert self.circuit.circuitIsReady()
self.buffer.write(data)
self.circuit.dataReceived(self.buffer, self)
class OBFSSOCKSv5OutgoingFactory(protocol.Factory):
"""
A OBFSSOCKSv5OutgoingFactory, used only when connecting via a proxy
"""
def __init__(self, socksProtocol):
self.socks = socksProtocol
def buildProtocol(self, addr):
return OBFSSOCKSv5Outgoing(self.socks)
def clientConnectionFailed(self, connector, reason):
self.socks.transport.loseConnection()
def clientConnectionLost(self, connector, reason):
self.socks.transport.loseConnection()
class OBFSSOCKSv5Protocol(socks5.SOCKSv5Protocol, network.GenericProtocol):
"""
Represents an upstream connection from a SOCKS client to our SOCKS
server.
It overrides socks5.SOCKSv5Protocol because py-obfsproxy's connections need
to have a circuit and obfuscate traffic before proxying it.
"""
def __init__(self, circuit, pt_config):
self.name = "socks_up_%s" % hex(id(self))
self.pt_config = pt_config
network.GenericProtocol.__init__(self, circuit)
socks5.SOCKSv5Protocol.__init__(self)
def connectionLost(self, reason):
network.GenericProtocol.connectionLost(self, reason)
def processEstablishedData(self, data):
assert self.circuit.circuitIsReady()
self.buffer.write(data)
self.circuit.dataReceived(self.buffer, self)
def processRfc1929Auth(self, uname, passwd):
"""
Handle the Pluggable Transport variant of RFC1929 Username/Password
authentication.
"""
# The Tor PT spec jams the per session arguments into the UNAME/PASSWD
# fields, and uses this to pass arguments to the pluggable transport.
# Per the RFC, it's not possible to have 0 length passwords, so tor sets
# the length to 1 and the first byte to NUL when passwd doesn't actually
# contain data. Recombine the two fields if appropriate.
args = uname
if len(passwd) > 1 or ord(passwd[0]) != 0:
args += passwd
# Arguments are a CSV string with Key=Value pairs. The transport is
# responsible for dealing with the K=V format, but the SOCKS code is
# currently expected to de-CSV the args.
#
# XXX: This really should also handle converting the K=V pairs into a
# dict.
try:
split_args = _split_socks_args(args)
        except csv.Error, err:
log.warning("split_socks_args failed (%s)" % str(err))
return False
# Pass the split up list to the transport.
try:
self.circuit.transport.handle_socks_args(split_args)
except base.SOCKSArgsError:
# Transports should log the issue themselves
return False
return True
def connectClass(self, addr, port, klass, *args):
"""
Instantiate the outgoing connection.
        This is overridden so that our sub-classed SOCKSv5Outgoing gets created,
and a proxy is optionally used for the outgoing connection.
"""
if self.pt_config.proxy:
instance = OBFSSOCKSv5OutgoingFactory(self)
return network.create_proxy_client(addr, port, self.pt_config.proxy, instance)
else:
return protocol.ClientCreator(reactor, OBFSSOCKSv5Outgoing, self).connectTCP(addr, port)
def set_up_circuit(self, otherConn):
self.circuit.setDownstreamConnection(otherConn)
self.circuit.setUpstreamConnection(self)
class OBFSSOCKSv5Factory(protocol.Factory):
"""
A SOCKSv5 factory.
"""
def __init__(self, transport_class, pt_config):
# XXX self.logging = log
self.transport_class = transport_class
self.pt_config = pt_config
self.name = "socks_fact_%s" % hex(id(self))
def startFactory(self):
log.debug("%s: Starting up SOCKS server factory." % self.name)
def buildProtocol(self, addr):
log.debug("%s: New connection." % self.name)
circuit = network.Circuit(self.transport_class())
return OBFSSOCKSv5Protocol(circuit, self.pt_config)
obfsproxy-0.2.13/obfsproxy/network/socks5.py 0000664 0000000 0000000 00000043231 12570034732 0021143 0 ustar 00root root 0000000 0000000 from twisted.internet import reactor, protocol, error
from twisted.python import compat
import obfsproxy.common.log as logging
import socket
import struct
log = logging.get_obfslogger()
"""
SOCKS5 Server:
This is a SOCKS5 server. There are many others like it but this one is mine.
It is compliant with RFC 1928 and RFC 1929, with the following limitations:
* GSSAPI Autentication is not supported
* BIND/UDP_ASSOCIATE are not implemented, and will return a CommandNotSupported
SOCKS5 error, and close the connection.
"""
#
# SOCKS5 Constants
#
_SOCKS_VERSION = 0x05
_SOCKS_AUTH_NO_AUTHENTICATION_REQUIRED = 0x00
_SOCKS_AUTH_GSSAPI = 0x01
_SOCKS_AUTH_USERNAME_PASSWORD = 0x02
_SOCKS_AUTH_NO_ACCEPTABLE_METHODS = 0xFF
_SOCKS_CMD_CONNECT = 0x01
_SOCKS_CMD_BIND = 0x02
_SOCKS_CMD_UDP_ASSOCIATE = 0x03
_SOCKS_ATYP_IP_V4 = 0x01
_SOCKS_ATYP_DOMAINNAME = 0x03
_SOCKS_ATYP_IP_V6 = 0x04
_SOCKS_RSV = 0x00
_SOCKS_RFC1929_VER = 0x01
_SOCKS_RFC1929_SUCCESS = 0x00
_SOCKS_RFC1929_FAIL = 0x01
# This is a compatibility layer for twisted.internet.error.UnsupportedAddressFamily
# which was added in twisted-12.1.0.
# Defining this function should make older Twisted run properly (sorry for the kludge!)
if not hasattr(error, "UnsupportedAddressFamily"):
class UnsupportedAddressFamily(Exception):
""" AKA EAFNOSUPPORT """
pass
error.UnsupportedAddressFamily = UnsupportedAddressFamily
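# Handshake sketch (per RFC 1928, illustrative byte values): the client opens
# with a version/method-select message and the server picks one method:
#
#   client -> server: \x05 \x01 \x00   (version 5, 1 method, NO AUTHENTICATION)
#   server -> client: \x05 \x00        (version 5, chosen method 0x00)
#
# after which the client sends its CONNECT request.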
class SOCKSv5Reply(object):
"""
SOCKSv5 reply codes
"""
    __slots__ = ['Succeeded', 'GeneralFailure', 'ConnectionNotAllowed',
'NetworkUnreachable', 'HostUnreachable', 'ConnectionRefused',
'TTLExpired', 'CommandNotSupported', 'AddressTypeNotSupported']
Succeeded = 0x00
GeneralFailure = 0x01
ConnectionNotAllowed = 0x02
NetworkUnreachable = 0x03
HostUnreachable = 0x04
ConnectionRefused = 0x05
TTLExpired = 0x06
CommandNotSupported = 0x07
AddressTypeNotSupported = 0x08
class SOCKSv5Outgoing(protocol.Protocol):
socks = None
def __init__(self, socks):
self.socks = socks
def connectionMade(self):
self.socks.otherConn = self
try:
atype, addr, port = self.getRawBoundAddr()
self.socks.sendReply(SOCKSv5Reply.Succeeded, addr, port, atype)
except:
self.socks.sendReply(SOCKSv5Reply.GeneralFailure)
def connectionLost(self, reason):
self.socks.transport.loseConnection()
def dataReceived(self, data):
self.socks.write(data)
def write(self, data):
self.transport.write(data)
def getRawBoundAddr(self):
host = self.transport.getHost()
port = host.port
af = socket.getaddrinfo(host.host, port, 0, socket.SOCK_STREAM, socket.IPPROTO_TCP, socket.AI_NUMERICHOST | socket.AI_NUMERICSERV)[0][0]
raw_addr = compat.inet_pton(af, host.host)
if af == socket.AF_INET:
atype = _SOCKS_ATYP_IP_V4
elif af == socket.AF_INET6:
atype = _SOCKS_ATYP_IP_V6
else:
raise ValueError("Invalid Address Family")
return (atype, raw_addr, port)
class SOCKSv5Protocol(protocol.Protocol):
"""
Represents an upstream connection from a SOCKS client to our SOCKS server.
"""
buf = None
state = None
authMethod = None
otherConn = None
# State values
ST_INIT = 0
ST_READ_METHODS = 1
ST_AUTHENTICATING = 2
ST_READ_REQUEST = 3
ST_CONNECTING = 4
ST_ESTABLISHED = 5
# Authentication methods
ACCEPTABLE_AUTH_METHODS = [
_SOCKS_AUTH_USERNAME_PASSWORD,
_SOCKS_AUTH_NO_AUTHENTICATION_REQUIRED
]
AUTH_METHOD_VTABLE = {
_SOCKS_AUTH_USERNAME_PASSWORD:
(lambda self: self.processRfc1929Request()),
_SOCKS_AUTH_NO_AUTHENTICATION_REQUIRED:
(lambda self: self.processNoAuthRequired()),
}
# Commands
ACCEPTABLE_CMDS = [
_SOCKS_CMD_CONNECT,
]
def __init__(self, reactor=reactor):
self.reactor = reactor
self.state = self.ST_INIT
def connectionMade(self):
self.buf = _ByteBuffer()
self.otherConn = None
self.state = self.ST_READ_METHODS
self.authMethod = _SOCKS_AUTH_NO_ACCEPTABLE_METHODS
def connectionLost(self, reason):
if self.otherConn:
self.otherConn.transport.loseConnection()
def dataReceived(self, data):
if self.state == self.ST_ESTABLISHED:
self.processEstablishedData(data)
return
self.buf.add(data)
if self.state == self.ST_READ_METHODS:
self.processMethodSelect()
elif self.state == self.ST_AUTHENTICATING:
self.processAuthentication()
elif self.state == self.ST_READ_REQUEST:
self.processRequest()
elif self.state == self.ST_CONNECTING:
# This only happens when the client is busted
log.error("Client sent data before receiving response")
self.transport.loseConnection()
else:
log.error("Invalid state in SOCKS5 Server: '%d'" % self.state)
self.transport.loseConnection()
def processEstablishedData(self, data):
assert self.otherConn
self.otherConn.write(data)
def processMethodSelect(self):
"""
Parse Version Identifier/Method Selection Message, and send a response
"""
msg = self.buf.peek()
if len(msg) < 2:
return
ver = msg.get_uint8()
nmethods = msg.get_uint8()
if ver != _SOCKS_VERSION:
log.error("Invalid SOCKS version: '%d'" % ver)
self.transport.loseConnection()
return
if nmethods == 0:
log.error("No Authentication method(s) present")
self.transport.loseConnection()
return
if len(msg) < nmethods:
return
# Select the best method
methods = msg.get(nmethods)
for method in self.ACCEPTABLE_AUTH_METHODS:
if chr(method) in methods:
self.authMethod = method
break
if self.authMethod == _SOCKS_AUTH_NO_ACCEPTABLE_METHODS:
log.error("No Acceptable Authentication Methods")
self.authMethod = _SOCKS_AUTH_NO_ACCEPTABLE_METHODS
# Ensure there is no trailing garbage
if len(msg) > 0:
log.error("Peer sent trailing garbage after method select")
self.transport.loseConnection()
return
self.buf.clear()
# Send Method Selection Message
msg = _ByteBuffer()
msg.add_uint8(_SOCKS_VERSION)
msg.add_uint8(self.authMethod)
self.transport.write(str(msg))
if self.authMethod == _SOCKS_AUTH_NO_ACCEPTABLE_METHODS:
self.transport.loseConnection()
return
self.state = self.ST_AUTHENTICATING
def processAuthentication(self):
"""
Handle client data when authenticating
"""
if self.authMethod in self.AUTH_METHOD_VTABLE:
self.AUTH_METHOD_VTABLE[self.authMethod](self)
else:
# Should *NEVER* happen
log.error("Peer sent data when we failed to negotiate auth")
self.buf.clear()
self.transport.loseConnection()
def processRfc1929Request(self):
"""
Handle RFC1929 Username/Password authentication requests
"""
msg = self.buf.peek()
if len(msg) < 2:
return
# Parse VER, ULEN
ver = msg.get_uint8()
ulen = msg.get_uint8()
if ver != _SOCKS_RFC1929_VER:
log.error("Invalid RFC1929 version: '%d'" % ver)
self.sendRfc1929Reply(False)
return
if ulen == 0:
log.error("Username length is 0")
self.sendRfc1929Reply(False)
return
        # Parse UNAME
if len(msg) < ulen:
return
uname = msg.get(ulen)
# Parse PLEN
if len(msg) < 1:
return
plen = msg.get_uint8()
if len(msg) < plen:
return
if plen == 0:
log.error("Password length is 0")
self.sendRfc1929Reply(False)
return
passwd = msg.get(plen)
# Ensure there is no trailing garbage
if len(msg) > 0:
log.error("Peer sent trailing garbage after RFC1929 auth")
self.transport.loseConnection()
return
self.buf.clear()
if not self.processRfc1929Auth(uname, passwd):
self.sendRfc1929Reply(False)
else:
self.sendRfc1929Reply(True)
def processRfc1929Auth(self, uname, passwd):
"""
Handle the RFC1929 Username/Password received from the client
"""
return False
def sendRfc1929Reply(self, success):
"""
Send a RFC1929 Username/Password Authentication response
"""
msg = _ByteBuffer()
msg.add_uint8(_SOCKS_RFC1929_VER)
if success:
msg.add_uint8(_SOCKS_RFC1929_SUCCESS)
self.transport.write(str(msg))
self.state = self.ST_READ_REQUEST
else:
msg.add_uint8(_SOCKS_RFC1929_FAIL)
self.transport.write(str(msg))
self.transport.loseConnection()
def processNoAuthRequired(self):
"""
Handle the RFC1928 No Authentication Required
"""
self.state = self.ST_READ_REQUEST
self.processRequest()
def processRequest(self):
"""
Parse the client request, and set up the TCP/IP connection
"""
msg = self.buf.peek()
if len(msg) < 4:
return
# Parse VER, CMD, RSV, ATYP
ver = msg.get_uint8()
cmd = msg.get_uint8()
rsv = msg.get_uint8()
atyp = msg.get_uint8()
if ver != _SOCKS_VERSION:
log.error("Invalid SOCKS version: '%d'" % ver)
self.sendReply(SOCKSv5Reply.GeneralFailure)
return
if cmd not in self.ACCEPTABLE_CMDS:
log.error("Invalid SOCKS command: '%d'" % cmd)
self.sendReply(SOCKSv5Reply.CommandNotSupported)
return
if rsv != _SOCKS_RSV:
log.error("Invalid SOCKS RSV: '%d'" % rsv)
self.sendReply(SOCKSv5Reply.GeneralFailure)
return
# Deal with the address
addr = None
if atyp == _SOCKS_ATYP_IP_V4:
if len(msg) < 4:
return
addr = socket.inet_ntoa(msg.get(4))
elif atyp == _SOCKS_ATYP_IP_V6:
if len(msg) < 16:
return
addr = compat.inet_ntop(socket.AF_INET6, msg.get(16))
elif atyp == _SOCKS_ATYP_DOMAINNAME:
if len(msg) < 1:
return
alen = msg.get_uint8()
if alen == 0:
log.error("Domain name length is 0")
self.sendReply(SOCKSv5Reply.GeneralFailure)
return
if len(msg) < alen:
return
addr = msg.get(alen)
else:
log.error("Invalid SOCKS address type: '%d'" % atyp)
self.sendReply(SOCKSv5Reply.AddressTypeNotSupported)
return
# Deal with the port
if len(msg) < 2:
return
port = msg.get_uint16(True)
# Ensure there is no trailing garbage
if len(msg) > 0:
log.error("Peer sent trailing garbage after request")
self.transport.loseConnection()
return
self.buf.clear()
if cmd == _SOCKS_CMD_CONNECT:
self.processCmdConnect(addr, port)
elif cmd == _SOCKS_CMD_BIND:
self.processCmdBind(addr, port)
elif cmd == _SOCKS_CMD_UDP_ASSOCIATE:
self.processCmdUdpAssociate(addr, port)
else:
# Should *NEVER* happen
log.error("Unimplemented command received")
self.transport.loseConnection()
def processCmdConnect(self, addr, port):
"""
Open a TCP/IP connection to the peer
"""
d = self.connectClass(addr, port, SOCKSv5Outgoing, self)
d.addErrback(self.handleCmdConnectFailure)
self.state = self.ST_CONNECTING
def connectClass(self, addr, port, klass, *args):
return protocol.ClientCreator(self.reactor, klass, *args).connectTCP(addr, port)
def handleCmdConnectFailure(self, failure):
log.error("CMD CONNECT: %s" % failure.getErrorMessage())
# Map common twisted errors to SOCKS error codes
if failure.type == error.NoRouteError:
self.sendReply(SOCKSv5Reply.NetworkUnreachable)
elif failure.type == error.ConnectionRefusedError:
self.sendReply(SOCKSv5Reply.ConnectionRefused)
elif failure.type == error.TCPTimedOutError or failure.type == error.TimeoutError:
self.sendReply(SOCKSv5Reply.TTLExpired)
elif failure.type == error.UnsupportedAddressFamily:
self.sendReply(SOCKSv5Reply.AddressTypeNotSupported)
elif failure.type == error.ConnectError:
# Twisted doesn't have an exception defined for EHOSTUNREACH,
# so the failure is a ConnectError. Try to catch this case
# and send a better reply, but fall back to a GeneralFailure.
reply = SOCKSv5Reply.GeneralFailure
try:
import errno
if hasattr(errno, "EHOSTUNREACH"):
if failure.value.osError == errno.EHOSTUNREACH:
reply = SOCKSv5Reply.HostUnreachable
if hasattr(errno, "WSAEHOSTUNREACH"):
if failure.value.osError == errno.WSAEHOSTUNREACH:
reply = SOCKSv5Reply.HostUnreachable
except Exception:
pass
self.sendReply(reply)
else:
self.sendReply(SOCKSv5Reply.GeneralFailure)
failure.trap(error.NoRouteError, error.ConnectionRefusedError,
error.TCPTimedOutError, error.TimeoutError,
error.UnsupportedAddressFamily, error.ConnectError)
def processCmdBind(self, addr, port):
self.sendReply(SOCKSv5Reply.CommandNotSupported)
def processCmdUdpAssociate(self, addr, port):
self.sendReply(SOCKSv5Reply.CommandNotSupported)
def sendReply(self, reply, addr=struct.pack("!I", 0), port=0, atype=_SOCKS_ATYP_IP_V4):
"""
Send a reply to the request, and complete circuit setup
"""
msg = _ByteBuffer()
msg.add_uint8(_SOCKS_VERSION)
msg.add_uint8(reply)
msg.add_uint8(_SOCKS_RSV)
msg.add_uint8(atype)
msg.add(addr)
msg.add_uint16(port, True)
self.transport.write(str(msg))
if reply == SOCKSv5Reply.Succeeded:
self.state = self.ST_ESTABLISHED
else:
self.transport.loseConnection()
class SOCKSv5Factory(protocol.Factory):
"""
A SOCKSv5 Factory.
"""
def buildProtocol(self, addr):
return SOCKSv5Protocol(reactor)
class _ByteBuffer(bytearray):
"""
A byte buffer, based on bytearray. get_* always reads from the head
(and is destructive), and add_* appends to the tail.
"""
def add_uint8(self, val):
"""Append a uint8_t to the tail of the buffer."""
self.extend(struct.pack("B", val))
def get_uint8(self):
"""Destructively read a uint8_t from the head of the buffer."""
return self.pop(0)
def add_uint16(self, val, htons=False):
"""
Append a uint16_t to the tail of the buffer.
Args:
val (int): The uint16_t to append.
Kwargs:
htons (bool): Convert to network byte order?
"""
if htons:
self.extend(struct.pack("!H", val))
else:
self.extend(struct.pack("H", val))
def get_uint16(self, ntohs=False):
"""
Destructively read a uint16_t from the head of the buffer
Kwargs:
ntohs (bool): Convert from network byte order?
"""
# Casting to string to work around http://bugs.python.org/issue10212
tmp_string = str(self[0:2])
if ntohs:
ret = struct.unpack("!H", tmp_string)[0]
else:
ret = struct.unpack("H", tmp_string)[0]
del self[0:2]
return ret
def add_uint32(self, val, htonl=False):
"""
Append a uint32_t to the tail of the buffer.
Args:
val (int): The uint32_t to append.
Kwargs:
htonl (bool): Convert to network byte order?
"""
if htonl:
self.extend(struct.pack("!I", val))
else:
self.extend(struct.pack("I", val))
def get_uint32(self, ntohl=False):
"""
Destructively read a uint32_t from the head of the buffer
Kwargs:
ntohl (bool): Convert from network byte order?
"""
# Casting to string to work around http://bugs.python.org/issue10212
tmp_string = str(self[0:4])
if ntohl:
ret = struct.unpack("!I", tmp_string)[0]
else:
ret = struct.unpack("I", tmp_string)[0]
del self[0:4]
return ret
def add(self, val):
"""Append bytes to the tail of the buffer."""
self.extend(val)
def get(self, length):
"""
Destructively read bytes from the head of the buffer
Args:
length (int): The number of bytes to read.
"""
ret = self[0:length]
del self[0:length]
return str(ret)
def peek(self):
"""Clone the buffer."""
ret = _ByteBuffer()
ret[:] = self
return ret
def clear(self):
"""Clear the contents of the buffer."""
del self[0:]
def __repr__(self):
return self.decode('ISO-8859-1')
obfsproxy-0.2.13/obfsproxy/pyobfsproxy.py 0000775 0000000 0000000 00000017111 12570034732 0020650 0 ustar 00root root 0000000 0000000 #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
This is the command line interface to py-obfsproxy.
It is designed to be a drop-in replacement for the obfsproxy executable.
Currently, not all of the obfsproxy command line options have been implemented.
"""
import sys
import obfsproxy.network.launch_transport as launch_transport
import obfsproxy.network.network as network
import obfsproxy.transports.transports as transports
import obfsproxy.common.log as logging
import obfsproxy.common.argparser as argparser
import obfsproxy.common.heartbeat as heartbeat
import obfsproxy.common.transport_config as transport_config
import obfsproxy.managed.server as managed_server
import obfsproxy.managed.client as managed_client
from obfsproxy import __version__
try:
from pyptlib import __version__ as pyptlibversion
except Exception:
pass
from pyptlib.config import checkClientMode
from pyptlib.client_config import parseProxyURI
from twisted.internet import task # for LoopingCall
log = logging.get_obfslogger()
def set_up_cli_parsing():
"""Set up our CLI parser. Register our arguments and options and
query individual transports to register their own external-mode
arguments."""
parser = argparser.MyArgumentParser(
description='py-obfsproxy: A pluggable transports proxy written in Python')
subparsers = parser.add_subparsers(title='supported transports', dest='name')
parser.add_argument('-v', '--version', action='version', version=__version__)
parser.add_argument('--log-file', help='set logfile')
parser.add_argument('--log-min-severity',
choices=['error', 'warning', 'info', 'debug'],
help='set minimum logging severity (default: %(default)s)')
parser.add_argument('--no-log', action='store_true', default=False,
help='disable logging')
parser.add_argument('--no-safe-logging', action='store_true',
default=False,
help='disable safe (scrubbed address) logging')
parser.add_argument('--data-dir', help='where persistent information should be stored.',
default=None)
parser.add_argument('--proxy', action='store', dest='proxy',
help='Outgoing proxy (<proxy_type>://[<user_name>][:<password>][@]<ip>:<port>)')
# Managed mode is a subparser for now because argparse does not
# support optional subparsers: bugs.python.org/issue9253
subparsers.add_parser("managed", help="managed mode")
# Add a subparser for each transport. Also add a
# transport-specific function to later validate the parsed
# arguments.
for transport, transport_class in transports.transports.items():
subparser = subparsers.add_parser(transport, help='%s help' % transport)
transport_class['base'].register_external_mode_cli(subparser)
subparser.set_defaults(validation_function=transport_class['base'].validate_external_mode_cli)
return parser
def do_managed_mode():
"""This function starts obfsproxy's managed-mode functionality."""
if checkClientMode():
log.info('Entering client managed-mode.')
managed_client.do_managed_client()
else:
log.info('Entering server managed-mode.')
managed_server.do_managed_server()
def do_external_mode(args):
"""This function starts obfsproxy's external-mode functionality."""
assert(args)
assert(args.name)
assert(args.name in transports.transports)
from twisted.internet import reactor
pt_config = transport_config.TransportConfig()
pt_config.setStateLocation(args.data_dir)
pt_config.setListenerMode(args.mode)
pt_config.setObfsproxyMode("external")
if args.proxy: # Set outgoing proxy settings if we have them
proxy = parseProxyURI(args.proxy)
pt_config.setProxy(proxy)
# Run setup() method.
run_transport_setup(pt_config, args.name)
launch_transport.launch_transport_listener(args.name, args.listen_addr, args.mode, args.dest, pt_config, args.ext_cookie_file)
log.info("Launched '%s' listener at '%s:%s' for transport '%s'." % \
(args.mode, log.safe_addr_str(args.listen_addr[0]), args.listen_addr[1], args.name))
reactor.run()
def consider_cli_args(args):
"""Check out parsed CLI arguments and take the appropriate actions."""
if args.log_file:
log.set_log_file(args.log_file)
if args.log_min_severity:
log.set_log_severity(args.log_min_severity)
if args.no_log:
log.disable_logs()
if args.no_safe_logging:
log.set_no_safe_logging()
# validate:
if (args.name == 'managed') and (not args.log_file) and (args.log_min_severity):
log.error("obfsproxy in managed-proxy mode can only log to a file!")
sys.exit(1)
elif (args.name == 'managed') and (not args.log_file):
# managed proxies without a logfile must not log at all.
log.disable_logs()
if args.proxy:
# CLI proxy is only supported in external mode.
if args.name == 'managed':
log.error("Don't set the proxy using the CLI in managed mode. " \
"Use the managed-proxy configuration protocol instead!")
sys.exit(1)
# Check if we have the necessary dependencies
# (the function will raise an exception if not)
network.ensure_outgoing_proxy_dependencies()
# Make sure that the proxy URI parses smoothly.
try:
proxy = parseProxyURI(args.proxy)
except Exception as e:
log.error("Failed to parse proxy specifier: %s", e)
sys.exit(1)
def run_transport_setup(pt_config, transport_name):
"""Run the setup() method for our transports."""
for transport, transport_class in transports.transports.items():
if transport == transport_name:
transport_class['base'].setup(pt_config)
def pyobfsproxy():
"""Actual pyobfsproxy entry-point."""
parser = set_up_cli_parsing()
args = parser.parse_args()
consider_cli_args(args)
log.warning('Obfsproxy (version: %s) starting up.' % (__version__))
try:
log.warning('Pyptlib version: %s' % pyptlibversion)
except Exception:
pass
log.debug('argv: ' + str(sys.argv))
log.debug('args: ' + str(args))
# Fire up our heartbeat.
l = task.LoopingCall(heartbeat.heartbeat.talk)
l.start(3600.0, now=False) # do heartbeat every hour
# Initiate obfsproxy.
if (args.name == 'managed'):
do_managed_mode()
else:
# Pass parsed arguments to the appropriate transports so that
# they can initialize and setup themselves. Exit if the
# provided arguments were corrupted.
try:
args.validation_function(args)
except ValueError, err:
log.error(err)
sys.exit(1)
do_external_mode(args)
def run():
"""Fake entry-point so that we can log unhandled exceptions."""
# Pyobfsproxy's CLI uses "managed" whereas C-obfsproxy uses
# "--managed" to configure managed-mode. Python obfsproxy can't
# recognize "--managed" because it uses argparse subparsers and
# http://bugs.python.org/issue9253 is not yet solved. This is a crazy
# hack to maintain CLI compatibility between the two versions. We
# basically replace "--managed" with "managed" in-place in the
# argument list.
if len(sys.argv) > 1 and '--managed' in sys.argv:
for n, arg in enumerate(sys.argv):
if arg == '--managed':
sys.argv[n] = 'managed'
try:
pyobfsproxy()
except Exception, e:
log.exception(e)
raise
if __name__ == '__main__':
run()
obfsproxy-0.2.13/obfsproxy/test/ 0000775 0000000 0000000 00000000000 12570034732 0016645 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.13/obfsproxy/test/__init__.py 0000664 0000000 0000000 00000000000 12570034732 0020744 0 ustar 00root root 0000000 0000000 obfsproxy-0.2.13/obfsproxy/test/int_tests/ 0000775 0000000 0000000 00000000000 12570034732 0020661 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.13/obfsproxy/test/int_tests/pits_design.txt 0000664 0000000 0000000 00000014500 12570034732 0023732 0 ustar 00root root 0000000 0000000 Pyobfsproxy integration test suite (PITS)
THIS IS UNIMPLEMENTED. IT'S JUST A DESIGN DOC.
Overview
Obfsproxy needs an automated and robust way of testing its pluggable
transports. While unit tests are certainly helpful, integration
tests provide realistic testing scenarios for network daemons like
obfsproxy.
Motivation
Obfsproxy needs to be tested on how well it can proxy traffic from
one side to its other side. A basic integration test would be to
transfer a string from one side and see if it arrives intact on the
other side.
A more involved example is the "timeline tests" of Stegotorus,
developed by Zack Weinberg. Stegotorus integration tests
are configurable: you pass them a script file that defines the
behavior of the integration test connections. This allows
customizable connection establishment and tear down, and the ability
to send arbitrary data through the integration test connections.
That's good enough, but some bugs only appear in more complex
network interactions. For this reason, PITS was developed with
support for:
+ multiple network connections
+ flexible connection behavior
+ automated test case generation
The integration tests should also be cross-platform so that they can
be run on Microsoft Windows.
Design
+-----------+ +-----------+
|-------->| client |<-------------------->| server |<--------|
| |----->| obfsproxy |<-------------------->| obfsproxy |<-----| |
| | |-->| |<-------------------->| |<--| | |
| | | +-----------+ +-----------+ | | |
| | | | | |
v v v v v v
+---------------+ +---------------+
| PITS outbound | | PITS inbound |
+---------------+ +---------------+
^ |
| |
| v
+---------------+ +---------------+
|Test case file |<------------------------------>|Transcript file|
+---------------+ +---------------+
PITS performs integration tests by reading a user-provided test case
file, which contains a description of the test that PITS should
carry out.
A basic PITS test case usually involves launching two obfsproxies as
in the typical obfuscated bridge client-server scenario, exchanging
some data between them and finally checking if both sides received
the proper data.
Concretely, a test case involves opening a listening socket (which,
in the case of a client-side obfsproxy, emulates the server-side
obfsproxy) and a number of outbound connections (which, in the case
of a client-side obfsproxy, emulate the connections from the Tor
client).
Test case files contain instructions for the sockets of PITS. Through
test case files, PITS can be configured to perform the following
actions:
+ Open and close connections
+ Send arbitrary data through connections
+ Pause connections
While conducting the tests, the PITS inbound and outbound sockets
record the data they send and receive in a 'transcript'; after the
test is over, the transcript and test case file are post-processed
and compared with each other to check whether the intended
conversation was performed successfully.
Test case files
The test case file format is line-oriented; each line is a command,
and the first character of the line is a directive followed by a
number of arguments.
Valid commands are:
# comment line - note that # _only_ introduces a comment at the beginning
of a line; elsewhere, it's either a syntax error or part
of an argument
P number - pause test-case execution for |number| milliseconds
! n - initiate connection with identifier |n|
* n - close connection |n| (through inbound socket)
> n text - transmit |text| on connection |n| through outbound socket
< n text - transmit |text| on connection |n| through inbound socket
Trailing whitespace is ignored.
Test cases have to close all established connections explicitly,
otherwise the test won't be validated correctly.
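As an illustration (hypothetical, since PITS is unimplemented), a
test case that opens one connection, transmits one payload in each
direction and then closes the connection could look like this:
# open connection 1, exchange one payload per direction, close it
! 1
> 1 hello from the outbound side
< 1 hello from the inbound side
* 1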
Transcript files
Inbound and outbound sockets log received data to a transcript
file. The transcript file format is similar to the test case format:
! n - connection |n| established on inbound socket
> n text - |text| received on connection |n| on inbound socket
< n text - |text| received on connection |n| on outbound socket
* n - connection |n| destroyed on inbound socket
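For the hypothetical test case above, a successful run would yield a
transcript along these lines:
! 1
> 1 hello from the outbound side
< 1 hello from the inbound side
* 1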
Test case results
After a test case is completed and the transcript file is written,
PITS needs to evaluate whether the test case was successful; that is,
whether the transcript file correctly describes the test case.
Because of the properties of TCP, the following post-processing
happens to validate the transcript file with the test case file:
a) Both files are segregated: all the traffic and events of inbound
sockets are put on top, and the traffic and events of outbound
sockets are put on the bottom.
(This happens because TCP can't guarantee order of event arrival in
one direction relative to the order of event arrival in the other
direction.)
b) In both files, for each socket identifier, we concatenate all its
traffic in a single 'transmit' directive. In the end, we place the
transmit line below the events (session establishment, etc.).
(This happens because TCP is a stream protocol.)
c) We string compare the transcript and test-case files.
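As a hypothetical illustration, after steps a) and b) both the test
case and the transcript shown earlier would reduce to the same
normalized form, roughly:
! 1
* 1
> 1 hello from the outbound side
< 1 hello from the inbound side
and the string comparison in step c) would then succeed.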
XXX document any unexpected behaviors or untestable cases caused by
the above postprocessing.
Acknowledgements
The script file format and the basic idea of PITS are concepts of
Zack Weinberg. They were implemented as part of Stegotorus:
https://gitweb.torproject.org/stegotorus.git/blob/HEAD:/src/test/tltester.cc
obfsproxy-0.2.13/obfsproxy/test/test_aes.py 0000664 0000000 0000000 00000006457 12570034732 0021042 0 ustar 00root root 0000000 0000000 import unittest
from Crypto.Cipher import AES
from Crypto.Util import Counter
import obfsproxy.common.aes as aes
import twisted.trial.unittest
class testAES_CTR_128_NIST(twisted.trial.unittest.TestCase):
def _helper_test_vector(self, input_block, output_block, plaintext, ciphertext):
self.assertEqual(long(input_block.encode('hex'), 16), self.ctr.next_value())
ct = self.cipher.encrypt(plaintext)
self.assertEqual(ct, ciphertext)
# XXX how do we extract the keystream out of the AES object?
def test_nist(self):
# Prepare the cipher
key = "\x2b\x7e\x15\x16\x28\xae\xd2\xa6\xab\xf7\x15\x88\x09\xcf\x4f\x3c"
iv = "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
self.ctr = Counter.new(128, initial_value=long(iv.encode('hex'), 16))
self.cipher = AES.new(key, AES.MODE_CTR, counter=self.ctr)
input_block = "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xfe\xff"
output_block = "\xec\x8c\xdf\x73\x98\x60\x7c\xb0\xf2\xd2\x16\x75\xea\x9e\xa1\xe4"
plaintext = "\x6b\xc1\xbe\xe2\x2e\x40\x9f\x96\xe9\x3d\x7e\x11\x73\x93\x17\x2a"
ciphertext = "\x87\x4d\x61\x91\xb6\x20\xe3\x26\x1b\xef\x68\x64\x99\x0d\xb6\xce"
self._helper_test_vector(input_block, output_block, plaintext, ciphertext)
input_block = "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xff\x00"
output_block = "\x36\x2b\x7c\x3c\x67\x73\x51\x63\x18\xa0\x77\xd7\xfc\x50\x73\xae"
plaintext = "\xae\x2d\x8a\x57\x1e\x03\xac\x9c\x9e\xb7\x6f\xac\x45\xaf\x8e\x51"
ciphertext = "\x98\x06\xf6\x6b\x79\x70\xfd\xff\x86\x17\x18\x7b\xb9\xff\xfd\xff"
self._helper_test_vector(input_block, output_block, plaintext, ciphertext)
input_block = "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xff\x01"
output_block = "\x6a\x2c\xc3\x78\x78\x89\x37\x4f\xbe\xb4\xc8\x1b\x17\xba\x6c\x44"
plaintext = "\x30\xc8\x1c\x46\xa3\x5c\xe4\x11\xe5\xfb\xc1\x19\x1a\x0a\x52\xef"
ciphertext = "\x5a\xe4\xdf\x3e\xdb\xd5\xd3\x5e\x5b\x4f\x09\x02\x0d\xb0\x3e\xab"
self._helper_test_vector(input_block, output_block, plaintext, ciphertext)
input_block = "\xf0\xf1\xf2\xf3\xf4\xf5\xf6\xf7\xf8\xf9\xfa\xfb\xfc\xfd\xff\x02"
output_block = "\xe8\x9c\x39\x9f\xf0\xf1\x98\xc6\xd4\x0a\x31\xdb\x15\x6c\xab\xfe"
plaintext = "\xf6\x9f\x24\x45\xdf\x4f\x9b\x17\xad\x2b\x41\x7b\xe6\x6c\x37\x10"
ciphertext = "\x1e\x03\x1d\xda\x2f\xbe\x03\xd1\x79\x21\x70\xa0\xf3\x00\x9c\xee"
self._helper_test_vector(input_block, output_block, plaintext, ciphertext)
class testAES_CTR_128_simple(twisted.trial.unittest.TestCase):
def test_encrypt_decrypt_small_ASCII(self):
"""
Validate that decryption and encryption work as intended on a small ASCII string.
"""
self.key = "\xe3\xb0\xc4\x42\x98\xfc\x1c\x14\x9a\xfb\xf4\xc8\x99\x6f\xb9\x24"
self.iv = "\x27\xae\x41\xe4\x64\x9b\x93\x4c\xa4\x95\x99\x1b\x78\x52\xb8\x55"
test_string = "This unittest kills fascists."
cipher1 = aes.AES_CTR_128(self.key, self.iv)
cipher2 = aes.AES_CTR_128(self.key, self.iv)
ct = cipher1.crypt(test_string)
pt = cipher2.crypt(ct)
self.assertEqual(test_string, pt)
if __name__ == '__main__':
unittest.main()
obfsproxy-0.2.13/obfsproxy/test/test_buffer.py 0000664 0000000 0000000 00000002652 12570034732 0021534 0 ustar 00root root 0000000 0000000 import unittest
import obfsproxy.network.buffer as obfs_buf
import twisted.trial.unittest
class testBuffer(twisted.trial.unittest.TestCase):
def setUp(self):
self.test_string = "No pop no style, I strictly roots."
self.buf = obfs_buf.Buffer(self.test_string)
def test_totalread(self):
tmp = self.buf.read(-1)
self.assertEqual(tmp, self.test_string)
def test_byte_by_byte(self):
"""Read one byte at a time."""
for i in xrange(len(self.test_string)):
self.assertEqual(self.buf.read(1), self.test_string[i])
def test_bigread(self):
self.assertEqual(self.buf.read(666), self.test_string)
def test_peek(self):
tmp = self.buf.peek(-1)
self.assertEqual(tmp, self.test_string)
self.assertEqual(self.buf.read(-1), self.test_string)
def test_drain(self):
tmp = self.buf.drain(-1) # drain everything
self.assertIsNone(tmp) # check non-existent retval
self.assertEqual(self.buf.read(-1), '') # it should be empty.
self.assertEqual(len(self.buf), 0)
def test_drain2(self):
tmp = self.buf.drain(len(self.test_string)-1) # drain everything but a byte
self.assertIsNone(tmp) # check non-existent retval
self.assertEqual(self.buf.peek(-1), '.') # peek at last character
self.assertEqual(len(self.buf), 1) # length must be 1
if __name__ == '__main__':
unittest.main()
obfsproxy-0.2.13/obfsproxy/test/test_obfs3_dh.py 0000664 0000000 0000000 00000012611 12570034732 0021746 0 ustar 00root root 0000000 0000000 import unittest
import time
import obfsproxy.transports.obfs3_dh as obfs3_dh
import twisted.trial.unittest
from twisted.python import log
class testUniformDH_KAT(twisted.trial.unittest.TestCase):
#
# Test keypair x/X:
#
# The test vector specifies "... 756e" for x but this forces the UniformDH
# code to return p - X as the public key, and more importantly that's what
# the original material I used ends with.
#
_x = int(
"""6f59 2d67 6f53 6874 746f 2068 6e6b 776f
2073 6874 2065 6167 6574 202e 6f59 2d67
6f53 6874 746f 2068 7369 7420 6568 6720
7461 2e65 5920 676f 532d 746f 6f68 6874
6920 2073 6874 2065 656b 2079 6e61 2064
7567 7261 6964 6e61 6f20 2066 6874 2065
6167 6574 202e 6150 7473 202c 7270 7365
6e65 2c74 6620 7475 7275 2c65 6120 6c6c
6120 6572 6f20 656e 6920 206e 6f59 2d67
6f53 6874 746f 2e68 4820 2065 6e6b 776f
2073 6877 7265 2065 6874 2065 6c4f 2064
6e4f 7365 6220 6f72 656b 7420 7268 756f""".replace(' ','').replace('\n',''), 16)
_X = int(
"""76a3 d17d 5c55 b03e 865f a3e8 2679 90a7
24ba a24b 0bdd 0cc4 af93 be8d e30b e120
d553 3c91 bf63 ef92 3b02 edcb 84b7 4438
3f7d e232 cca6 eb46 d07c ad83 dcaa 317f
becb c68c a13e 2c40 19e6 a365 3106 7450
04ae cc0b e1df f0a7 8733 fb0e 7d5c b7c4
97ca b77b 1331 bf34 7e5f 3a78 47aa 0bc0
f4bc 6414 6b48 407f ed7b 931d 1697 2d25
fb4d a5e6 dc07 4ce2 a58d aa8d e762 4247
cdf2 ebe4 e4df ec6d 5989 aac7 78c8 7559
d321 3d60 40d4 111c e3a2 acae 19f9 ee15
3250 9e03 7f69 b252 fdc3 0243 cbbc e9d0""".replace(' ','').replace('\n',''), 16)
#
# Test keypair y/Y
#
_y = int(
"""7365 6220 6f72 656b 7420 7268 756f 6867
6f20 2066 6c6f 2c64 6120 646e 7720 6568
6572 5420 6568 2079 6873 6c61 206c 7262
6165 206b 6874 6f72 6775 2068 6761 6961
2e6e 4820 2065 6e6b 776f 2073 6877 7265
2065 6854 7965 6820 7661 2065 7274 646f
6520 7261 6874 7327 6620 6569 646c 2c73
6120 646e 7720 6568 6572 5420 6568 2079
7473 6c69 206c 7274 6165 2064 6874 6d65
202c 6e61 2064 6877 2079 6f6e 6f20 656e
6320 6e61 6220 6865 6c6f 2064 6854 6d65
6120 2073 6854 7965 7420 6572 6461 0a2e""".replace(' ','').replace('\n',''), 16)
_Y = int(
"""d04e 156e 554c 37ff d7ab a749 df66 2350
1e4f f446 6cb1 2be0 5561 7c1a 3687 2237
36d2 c3fd ce9e e0f9 b277 7435 0849 112a
a5ae b1f1 2681 1c9c 2f3a 9cb1 3d2f 0c3a
7e6f a2d3 bf71 baf5 0d83 9171 534f 227e
fbb2 ce42 27a3 8c25 abdc 5ba7 fc43 0111
3a2c b206 9c9b 305f aac4 b72b f21f ec71
578a 9c36 9bca c84e 1a7d cf07 54e3 42f5
bc8f e491 7441 b882 5443 5e2a baf2 97e9
3e1e 5796 8672 d45b d7d4 c8ba 1bc3 d314
889b 5bc3 d3e4 ea33 d4f2 dfdd 34e5 e5a7
2ff2 4ee4 6316 d475 7dad 0936 6a0b 66b3""".replace(' ','').replace('\n',''), 16)
#
# Shared secret: x + Y/y + X
#
_xYyX = int(
"""78af af5f 457f 1fdb 832b ebc3 9764 4a33
038b e9db a10c a2ce 4a07 6f32 7f3a 0ce3
151d 477b 869e e7ac 4677 5529 2ad8 a77d
b9bd 87ff bbc3 9955 bcfb 03b1 5838 88c8
fd03 7834 ff3f 401d 463c 10f8 99aa 6378
4451 40b7 f838 6a7d 509e 7b9d b19b 677f
062a 7a1a 4e15 0960 4d7a 0839 ccd5 da61
73e1 0afd 9eab 6dda 7453 9d60 493c a37f
a5c9 8cd9 640b 409c d8bb 3be2 bc51 36fd
42e7 64fc 3f3c 0ddb 8db3 d87a bcf2 e659
8d2b 101b ef7a 56f5 0ebc 658f 9df1 287d
a813 5954 3e77 e4a4 cfa7 598a 4152 e4c0""".replace(' ','').replace('\n',''), 16)
def __init__(self, methodName='runTest'):
self._x_str = obfs3_dh.int_to_bytes(self._x, 192)
self._X_str = obfs3_dh.int_to_bytes(self._X, 192)
self._y_str = obfs3_dh.int_to_bytes(self._y, 192)
self._Y_str = obfs3_dh.int_to_bytes(self._Y, 192)
self._xYyX_str = obfs3_dh.int_to_bytes(self._xYyX, 192)
twisted.trial.unittest.TestCase.__init__(self, methodName)
def test_odd_key(self):
dh_x = obfs3_dh.UniformDH(self._x_str)
self.assertEqual(self._x_str, dh_x.priv_str)
self.assertEqual(self._X_str, dh_x.get_public())
def test_even_key(self):
dh_y = obfs3_dh.UniformDH(self._y_str)
self.assertEqual(self._y_str, dh_y.priv_str)
self.assertEqual(self._Y_str, dh_y.get_public())
def test_exchange(self):
dh_x = obfs3_dh.UniformDH(self._x_str)
dh_y = obfs3_dh.UniformDH(self._y_str)
xY = dh_x.get_secret(dh_y.get_public())
yX = dh_y.get_secret(dh_x.get_public())
self.assertEqual(self._xYyX_str, xY)
self.assertEqual(self._xYyX_str, yX)
class testUniformDH_Benchmark(twisted.trial.unittest.TestCase):
def test_benchmark(self):
start = time.clock()
for i in range(0, 1000):
dh_x = obfs3_dh.UniformDH()
dh_y = obfs3_dh.UniformDH()
xY = dh_x.get_secret(dh_y.get_public())
yX = dh_y.get_secret(dh_x.get_public())
self.assertEqual(xY, yX)
end = time.clock()
taken = (end - start) / 1000 / 2
log.msg("Generate + Exchange: %f sec" % taken)
if __name__ == '__main__':
unittest.main()
obfsproxy-0.2.13/obfsproxy/test/test_socks.py 0000664 0000000 0000000 00000001476 12570034732 0021410 0 ustar 00root root 0000000 0000000 import obfsproxy.network.socks as socks
import twisted.trial.unittest
class test_SOCKS(twisted.trial.unittest.TestCase):
def test_socks_args_splitting(self):
socks_args = socks._split_socks_args("monday=blue;tuesday=grey;wednesday=too;thursday=don\\;tcareabout\\\\you;friday=i\\;minlove")
self.assertListEqual(socks_args, ["monday=blue", "tuesday=grey", "wednesday=too", "thursday=don;tcareabout\\you", "friday=i;minlove"])
socks_args = socks._split_socks_args("monday=blue")
self.assertListEqual(socks_args, ["monday=blue"])
socks_args = socks._split_socks_args("monday=;tuesday=grey")
self.assertListEqual(socks_args, ["monday=", "tuesday=grey"])
socks_args = socks._split_socks_args("\\;=\\;;\\\\=\\;")
self.assertListEqual(socks_args, [";=;", "\\=;"])
obfsproxy-0.2.13/obfsproxy/test/test_socks5.py 0000664 0000000 0000000 00000034421 12570034732 0021471 0 ustar 00root root 0000000 0000000 from twisted.internet import defer, error
from twisted.trial import unittest
from twisted.test import proto_helpers
from twisted.python.failure import Failure
from obfsproxy.network import socks5
import binascii
import struct
class SOCKSv5Protocol_testMethodSelect(unittest.TestCase):
proto = None
tr = None
def _sendMsg(self, msg):
msg = binascii.unhexlify(msg)
self.proto.dataReceived(msg)
def _recvMsg(self, expected):
self.assertEqual(self.tr.value(), binascii.unhexlify(expected))
self.tr.clear()
def setUp(self):
factory = socks5.SOCKSv5Factory()
self.proto = factory.buildProtocol(('127.0.0.1', 0))
self.tr = proto_helpers.StringTransportWithDisconnection()
self.tr.protocol = self.proto
self.proto.makeConnection(self.tr)
def test_InvalidVersion(self):
"""
Test Method Select message containing an invalid VER.
"""
# VER = 03, NMETHODS = 01, METHODS = ['00']
self._sendMsg("030100")
self.assertFalse(self.tr.connected)
def test_InvalidNMethods(self):
"""
Test Method Select message containing no methods.
"""
# VER = 05, NMETHODS = 00
self._sendMsg("0500")
self.assertFalse(self.tr.connected)
def test_NoAuth(self):
"""
Test Method Select message containing NO AUTHENTICATION REQUIRED.
"""
# VER = 05, NMETHODS = 01, METHODS = ['00']
self._sendMsg("050100")
# VER = 05, METHOD = 00
self._recvMsg("0500")
self.assertEqual(self.proto.authMethod, socks5._SOCKS_AUTH_NO_AUTHENTICATION_REQUIRED)
self.assertEqual(self.proto.state, self.proto.ST_AUTHENTICATING)
self.assertTrue(self.tr.connected)
# Send the first byte of the request to prod it into ST_READ_REQUEST
self._sendMsg("05")
self.assertEqual(self.proto.state, self.proto.ST_READ_REQUEST)
def test_UsernamePasswd(self):
"""
Test Method Select message containing USERNAME/PASSWORD.
"""
# VER = 05, NMETHODS = 01, METHODS = ['02']
self._sendMsg("050102")
# VER = 05, METHOD = 02
self._recvMsg("0502")
self.assertEqual(self.proto.authMethod, socks5._SOCKS_AUTH_USERNAME_PASSWORD)
self.assertEqual(self.proto.state, self.proto.ST_AUTHENTICATING)
self.assertTrue(self.tr.connected)
def test_Both(self):
"""
Test Method Select message containing both NO AUTHENTICATION REQUIRED
and USERNAME/PASSWORD.
"""
# VER = 05, NMETHODS = 02, METHODS = [00, 02]
self._sendMsg("05020002")
# VER = 05, METHOD = 02
self._recvMsg("0502")
self.assertEqual(self.proto.authMethod, socks5._SOCKS_AUTH_USERNAME_PASSWORD)
self.assertEqual(self.proto.state, self.proto.ST_AUTHENTICATING)
self.assertTrue(self.tr.connected)
def test_Unknown(self):
"""
Test Method Select message containing an unknown auth method.
"""
# VER = 05, NMETHODS = 01, METHODS = [01]
self._sendMsg("050101")
# VER = 05, METHOD = ff
self._recvMsg("05ff")
self.assertEqual(self.proto.authMethod, socks5._SOCKS_AUTH_NO_ACCEPTABLE_METHODS)
self.assertFalse(self.tr.connected)
def test_BothUnknown(self):
"""
Test Method Select message containing supported and unknown methods.
"""
# VER = 05, NMETHODS = 03, METHODS = [00, 02, ff]
self._sendMsg("05030002ff")
# VER = 05, METHOD = 02
self._recvMsg("0502")
self.assertEqual(self.proto.authMethod, socks5._SOCKS_AUTH_USERNAME_PASSWORD)
self.assertEqual(self.proto.state, self.proto.ST_AUTHENTICATING)
self.assertTrue(self.tr.connected)
def test_TrailingGarbage(self):
"""
Test Method Select message with an impatient client.
"""
# VER = 05, NMETHODS = 01, METHODS = ['00'], Garbage= deadbabe
self._sendMsg("050100deadbabe")
self.assertFalse(self.tr.connected)
class SOCKSv5Protocol_testRfc1929Auth(unittest.TestCase):
proto = None
tr = None
def _sendMsg(self, msg):
msg = binascii.unhexlify(msg)
self.proto.dataReceived(msg)
def _recvMsg(self, expected):
self.assertEqual(self.tr.value(), binascii.unhexlify(expected))
self.tr.clear()
def _processAuthTrue(self, uname, passwd):
self.assertEqual(uname, "ABCDE")
self.assertEqual(passwd, "abcde")
return True
def _processAuthFalse(self, uname, passwd):
self.assertEqual(uname, "ABCDE")
self.assertEqual(passwd, "abcde")
return False
def setUp(self):
factory = socks5.SOCKSv5Factory()
self.proto = factory.buildProtocol(('127.0.0.1', 0))
self.tr = proto_helpers.StringTransportWithDisconnection()
self.tr.protocol = self.proto
self.proto.makeConnection(self.tr)
# Get things to where the next step is the client sends the auth message
self._sendMsg("050102")
self._recvMsg("0502")
def test_InvalidVersion(self):
"""
Test auth request containing an invalid VER.
"""
# VER = 03, ULEN = 5, UNAME = "ABCDE", PLEN = 5, PASSWD = "abcde"
self._sendMsg("03054142434445056162636465")
# VER = 01, STATUS = 01
self._recvMsg("0101")
self.assertFalse(self.tr.connected)
def test_InvalidUlen(self):
"""
Test auth request with an invalid ULEN.
"""
# VER = 01, ULEN = 0, UNAME = "", PLEN = 5, PASSWD = "abcde"
self._sendMsg("0100056162636465")
# VER = 01, STATUS = 01
self._recvMsg("0101")
self.assertFalse(self.tr.connected)
def test_InvalidPlen(self):
"""
Test auth request with an invalid PLEN.
"""
# VER = 01, ULEN = 5, UNAME = "ABCDE", PLEN = 0, PASSWD = ""
self._sendMsg("0105414243444500")
# VER = 01, STATUS = 01
self._recvMsg("0101")
self.assertFalse(self.tr.connected)
def test_ValidAuthSuccess(self):
"""
Test auth request that is valid and successful at authenticating.
"""
self.proto.processRfc1929Auth = self._processAuthTrue
# VER = 01, ULEN = 5, UNAME = "ABCDE", PLEN = 5, PASSWD = "abcde"
self._sendMsg("01054142434445056162636465")
# VER = 01, STATUS = 00
self._recvMsg("0100")
self.assertEqual(self.proto.state, self.proto.ST_READ_REQUEST)
self.assertTrue(self.tr.connected)
def test_ValidAuthFailure(self):
"""
Test auth request that is valid and failed at authenticating.
"""
self.proto.processRfc1929Auth = self._processAuthFalse
# VER = 01, ULEN = 5, UNAME = "ABCDE", PLEN = 5, PASSWD = "abcde"
self._sendMsg("01054142434445056162636465")
# VER = 01, STATUS = 01
self._recvMsg("0101")
self.assertFalse(self.tr.connected)
def test_TrailingGarbage(self):
"""
Test auth request with an impatient client.
"""
# VER = 01, ULEN = 5, UNAME = "ABCDE", PLEN = 5, PASSWD = "abcde", Garbage = deadbabe
self._sendMsg("01054142434445056162636465deadbabe")
self.assertFalse(self.tr.connected)
class SOCKSv5Protocol_testRequest(unittest.TestCase):
proto = None
tr = None
connectDeferred = None
def _sendMsg(self, msg):
msg = binascii.unhexlify(msg)
self.proto.dataReceived(msg)
def _recvMsg(self, expected):
self.assertEqual(self.tr.value(), binascii.unhexlify(expected))
self.tr.clear()
def _recvFailureResponse(self, expected):
# VER = 05, REP = expected, RSV = 00, ATYPE = 01, BND.ADDR = 0.0.0.0, BND.PORT = 0000
fail_msg = "05" + binascii.hexlify(chr(expected)) + "0001000000000000"
self._recvMsg(fail_msg)
def _connectClassIPv4(self, addr, port, klass, *args):
self.assertEqual(addr, "127.0.0.1")
self.assertEqual(port, 9050)
self.connectDeferred = defer.Deferred()
self.connectDeferred.addCallback(self._connectIPv4)
return self.connectDeferred
def _connectIPv4(self, unused):
self.proto.sendReply(socks5.SOCKSv5Reply.Succeeded, struct.pack("!I", 0x7f000001), 9050)
def _connectClassIPv6(self, addr, port, klass, *args):
self.assertEqual(addr, "102:304:506:708:90a:b0c:d0e:f10")
self.assertEqual(port, 9050)
self.connectDeferred = defer.Deferred()
self.connectDeferred.addCallback(self._connectIPv6)
return self.connectDeferred
def _connectIPv6(self, unused):
addr = binascii.unhexlify("0102030405060708090a0b0c0d0e0f10")
self.proto.sendReply(socks5.SOCKSv5Reply.Succeeded, addr, 9050, socks5._SOCKS_ATYP_IP_V6)
def _connectClassDomainname(self, addr, port, klass, *args):
self.assertEqual(addr, "example.com")
self.assertEqual(port, 9050)
self.connectDeferred = defer.Deferred()
self.connectDeferred.addCallback(self._connectIPv4)
return self.connectDeferred
def setUp(self):
factory = socks5.SOCKSv5Factory()
self.proto = factory.buildProtocol(('127.0.0.1', 0))
self.tr = proto_helpers.StringTransportWithDisconnection()
self.tr.protocol = self.proto
self.proto.makeConnection(self.tr)
self.connectDeferred = None
# Get things to where the next step is the client sends the auth message
self._sendMsg("050100")
self._recvMsg("0500")
def test_InvalidVersion(self):
"""
Test Request with an invalid VER.
"""
# VER = 03, CMD = 01, RSV = 00, ATYPE = 01, DST.ADDR = 127.0.0.1, DST.PORT = 9050
self._sendMsg("030100017f000001235a")
self._recvFailureResponse(socks5.SOCKSv5Reply.GeneralFailure)
self.assertFalse(self.tr.connected)
def test_InvalidCommand(self):
"""
Test Request with an invalid CMD.
"""
# VER = 05, CMD = 05, RSV = 00, ATYPE = 01, DST.ADDR = 127.0.0.1, DST.PORT = 9050
self._sendMsg("050500017f000001235a")
self._recvFailureResponse(socks5.SOCKSv5Reply.CommandNotSupported)
self.assertFalse(self.tr.connected)
def test_InvalidRsv(self):
"""
Test Request with an invalid RSV.
"""
# VER = 05, CMD = 01, RSV = 30, ATYPE = 01, DST.ADDR = 127.0.0.1, DST.PORT = 9050
self._sendMsg("050130017f000001235a")
self._recvFailureResponse(socks5.SOCKSv5Reply.GeneralFailure)
self.assertFalse(self.tr.connected)
def test_InvalidAtyp(self):
"""
Test Request with an invalid ATYP.
"""
# VER = 05, CMD = 01, RSV = 00, ATYPE = 05, DST.ADDR = 127.0.0.1, DST.PORT = 9050
self._sendMsg("050100057f000001235a")
self._recvFailureResponse(socks5.SOCKSv5Reply.AddressTypeNotSupported)
self.assertFalse(self.tr.connected)
def test_CmdBind(self):
"""
Test Request with a BIND CMD.
"""
# VER = 05, CMD = 02, RSV = 00, ATYPE = 01, DST.ADDR = 127.0.0.1, DST.PORT = 9050
self._sendMsg("050200017f000001235a")
self._recvFailureResponse(socks5.SOCKSv5Reply.CommandNotSupported)
self.assertFalse(self.tr.connected)
def test_CmdUdpAssociate(self):
"""
Test Request with a UDP ASSOCIATE CMD.
"""
# VER = 05, CMD = 03, RSV = 00, ATYPE = 01, DST.ADDR = 127.0.0.1, DST.PORT = 9050
self._sendMsg("050300017f000001235a")
self._recvFailureResponse(socks5.SOCKSv5Reply.CommandNotSupported)
self.assertFalse(self.tr.connected)
def test_CmdConnectIPv4(self):
"""
Test Successful Request with an IPv4 CONNECT.
"""
self.proto.connectClass = self._connectClassIPv4
# VER = 05, CMD = 01, RSV = 00, ATYPE = 01, DST.ADDR = 127.0.0.1, DST.PORT = 9050
self._sendMsg("050100017f000001235a")
self.connectDeferred.callback(self)
# VER = 05, REP = 00, RSV = 00, ATYPE = 01, BND.ADDR = 127.0.0.1, BND.PORT = 9050
self._recvMsg("050000017f000001235a")
self.assertEqual(self.proto.state, self.proto.ST_ESTABLISHED)
self.assertTrue(self.tr.connected)
def test_CmdConnectIPv6(self):
"""
Test Successful Request with an IPv6 CONNECT.
"""
self.proto.connectClass = self._connectClassIPv6
# VER = 05, CMD = 01, RSV = 00, ATYPE = 04, DST.ADDR = 0102:0304:0506:0708:090a:0b0c:0d0e:0f10, DST.PORT = 9050
self._sendMsg("050100040102030405060708090a0b0c0d0e0f10235a")
self.connectDeferred.callback(self)
# VER = 05, REP = 00, RSV = 00, ATYPE = 04, BND.ADDR = 0102:0304:0506:0708:090a:0b0c:0d0e:0f10, BND.PORT = 9050
self._recvMsg("050000040102030405060708090a0b0c0d0e0f10235a")
self.assertEqual(self.proto.state, self.proto.ST_ESTABLISHED)
self.assertTrue(self.tr.connected)
def test_CmdConnectDomainName(self):
"""
Test Successful Request with a DOMAINNAME CONNECT.
"""
self.proto.connectClass = self._connectClassDomainname
# VER = 05, CMD = 01, RSV = 00, ATYPE = 03, DST.ADDR = example.com, DST.PORT = 9050
self._sendMsg("050100030b6578616d706c652e636f6d235a")
self.connectDeferred.callback(self)
# VER = 05, REP = 00, RSV = 00, ATYPE = 01, BND.ADDR = 127.0.0.1, BND.PORT = 9050
self._recvMsg("050000017f000001235a")
self.assertEqual(self.proto.state, self.proto.ST_ESTABLISHED)
self.assertTrue(self.tr.connected)
def test_TrailingGarbage(self):
"""
Test request with an impatient client.
"""
# VER = 05, CMD = 01, RSV = 00, ATYPE = 01, DST.ADDR = 127.0.0.1, DST.PORT = 9050, Garbage = deadbabe
self._sendMsg("050100017f000001235adeadbabe")
self.assertFalse(self.tr.connected)
def test_CmdConnectErrback(self):
"""
Test Unsuccessful Request with an IPv4 CONNECT.
"""
self.proto.connectClass = self._connectClassIPv4
# VER = 05, CMD = 01, RSV = 00, ATYPE = 01, DST.ADDR = 127.0.0.1, DST.PORT = 9050
self._sendMsg("050100017f000001235a")
self.connectDeferred.errback(Failure(error.ConnectionRefusedError("Foo")))
self._recvFailureResponse(socks5.SOCKSv5Reply.ConnectionRefused)
self.assertFalse(self.tr.connected)
obfsproxy-0.2.13/obfsproxy/test/tester.py 0000664 0000000 0000000 00000027764 12570034732 0020545 0 ustar 00root root 0000000 0000000 #!/usr/bin/python
"""@package tester.py.in
Integration tests for obfsproxy.
The obfsproxy binary is assumed to exist in the current working
directory, and you need to have Python 2.6 or better (but not 3).
You need to be able to make connections to arbitrary high-numbered
TCP ports on the loopback interface.
"""
import difflib
import errno
import multiprocessing
import Queue
import re
import signal
import socket
import struct
import subprocess
import time
import traceback
import unittest
import sys,os
import tempfile
import shutil
def diff(label, expected, received):
"""
Helper: generate unified-format diffs between two named strings.
Pythonic escaped-string syntax is used for unprintable characters.
"""
if expected == received:
return ""
else:
return (label + "\n"
+ "\n".join(s.encode("string_escape")
for s in
difflib.unified_diff(expected.split("\n"),
received.split("\n"),
"expected", "received",
lineterm=""))
+ "\n")
class Obfsproxy(subprocess.Popen):
"""
Helper: Run obfsproxy instances and confirm that they have
completed without any errors.
"""
def __init__(self, *args, **kwargs):
"""Spawns obfsproxy with 'args'"""
argv = ["bin/obfsproxy", "--no-log"]
if len(args) == 1 and (isinstance(args[0], list) or
isinstance(args[0], tuple)):
argv.extend(args[0])
else:
argv.extend(args)
subprocess.Popen.__init__(self, argv,
stdin=open("/dev/null", "r"),
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
**kwargs)
severe_error_re = re.compile(r"\[(?:warn|err(?:or)?)\]")
def check_completion(self, label, force_stderr):
"""
Checks the output and exit status of obfsproxy to see if
everything went fine.
Returns an empty string if the test was good, otherwise it
returns a report that should be printed to the user.
"""
if self.poll() is None:
self.send_signal(signal.SIGINT)
(out, err) = self.communicate()
report = ""
def indent(s):
return "| " + "\n| ".join(s.strip().split("\n"))
# exit status should be zero
if self.returncode > 0:
report += label + " exit code: %d\n" % self.returncode
elif self.returncode < 0:
report += label + " killed: signal %d\n" % -self.returncode
# there should be nothing on stdout
if out != "":
report += label + " stdout:\n%s\n" % indent(out)
# there will be debugging messages on stderr, but there should be
# no [warn], [err], or [error] messages.
if force_stderr or self.severe_error_re.search(err):
report += label + " stderr:\n%s\n" % indent(err)
return report
def stop(self):
"""Terminates obfsproxy."""
if self.poll() is None:
self.terminate()
def connect_with_retry(addr):
"""
Helper: Repeatedly try to connect to the specified server socket
until either it succeeds or one full second has elapsed. (Surely
there is a better way to do this?)
"""
retry = 0
while True:
try:
return socket.create_connection(addr)
except socket.error, e:
if e.errno != errno.ECONNREFUSED: raise
if retry == 20: raise
retry += 1
time.sleep(0.05)
SOCKET_TIMEOUT = 2.0
class ReadWorker(object):
"""
Helper: In a separate process (to avoid deadlock), listen on a
specified socket. The first time something connects to that socket,
read all available data, stick it in a string, and post the string
to the output queue. Then close both sockets and exit.
"""
@staticmethod
def work(address, oq):
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(address)
listener.listen(1)
(conn, remote) = listener.accept()
listener.close()
conn.settimeout(SOCKET_TIMEOUT)
data = ""
try:
while True:
chunk = conn.recv(4096)
if chunk == "": break
data += chunk
except socket.timeout:
pass
except Exception, e:
data += "|RECV ERROR: " + e
conn.close()
oq.put(data)
def __init__(self, address):
self.oq = multiprocessing.Queue()
self.worker = multiprocessing.Process(target=self.work,
args=(address, self.oq))
self.worker.start()
def get(self):
"""
Get a chunk of data from the ReadWorker's queue.
"""
rv = self.oq.get(timeout=SOCKET_TIMEOUT+0.1)
self.worker.join()
return rv
def stop(self):
if self.worker.is_alive(): self.worker.terminate()
# Right now this is a direct translation of the former int_test.sh
# (except that I have fleshed out the SOCKS test a bit).
# It will be made more general and parametric Real Soon.
ENTRY_PORT = 4999
SERVER_PORT = 5000
EXIT_PORT = 5001
#
# Test base classes. They do _not_ inherit from unittest.TestCase
# so that they are not scanned directly for test functions (some of
# them do provide test functions, but not in a usable state without
# further code from subclasses).
#
class DirectTest(object):
def setUp(self):
self.output_reader = ReadWorker(("127.0.0.1", EXIT_PORT))
self.obfs_server = Obfsproxy(self.server_args)
time.sleep(0.1)
self.obfs_client = Obfsproxy(self.client_args)
self.input_chan = connect_with_retry(("127.0.0.1", ENTRY_PORT))
self.input_chan.settimeout(SOCKET_TIMEOUT)
def tearDown(self):
self.obfs_client.stop()
self.obfs_server.stop()
self.output_reader.stop()
self.input_chan.close()
def test_direct_transfer(self):
# Open a server and a simple client (in the same process) and
# transfer a file. Then check whether the output is the same
# as the input.
self.input_chan.sendall(TEST_FILE)
time.sleep(2)
try:
output = self.output_reader.get()
except Queue.Empty:
output = ""
self.input_chan.close()
report = diff("errors in transfer:", TEST_FILE, output)
report += self.obfs_client.check_completion("obfsproxy client (%s)" % self.transport, report!="")
report += self.obfs_server.check_completion("obfsproxy server (%s)" % self.transport, report!="")
if report != "":
self.fail("\n" + report)
#
# Concrete test classes specialize the above base classes for each protocol.
#
class DirectDummy(DirectTest, unittest.TestCase):
transport = "dummy"
server_args = ("dummy", "server",
"127.0.0.1:%d" % SERVER_PORT,
"--dest=127.0.0.1:%d" % EXIT_PORT)
client_args = ("dummy", "client",
"127.0.0.1:%d" % ENTRY_PORT,
"--dest=127.0.0.1:%d" % SERVER_PORT)
class DirectObfs2(DirectTest, unittest.TestCase):
transport = "obfs2"
server_args = ("obfs2", "server",
"127.0.0.1:%d" % SERVER_PORT,
"--dest=127.0.0.1:%d" % EXIT_PORT)
client_args = ("obfs2", "client",
"127.0.0.1:%d" % ENTRY_PORT,
"--dest=127.0.0.1:%d" % SERVER_PORT)
class DirectObfs2_ss(DirectTest, unittest.TestCase):
transport = "obfs2"
server_args = ("obfs2", "server",
"127.0.0.1:%d" % SERVER_PORT,
"--shared-secret=test",
"--ss-hash-iterations=50",
"--dest=127.0.0.1:%d" % EXIT_PORT)
client_args = ("obfs2", "client",
"127.0.0.1:%d" % ENTRY_PORT,
"--shared-secret=test",
"--ss-hash-iterations=50",
"--dest=127.0.0.1:%d" % SERVER_PORT)
class DirectB64(DirectTest, unittest.TestCase):
transport = "b64"
server_args = ("b64", "server",
"127.0.0.1:%d" % SERVER_PORT,
"--dest=127.0.0.1:%d" % EXIT_PORT)
client_args = ("b64", "client",
"127.0.0.1:%d" % ENTRY_PORT,
"--dest=127.0.0.1:%d" % SERVER_PORT)
class DirectObfs3(DirectTest, unittest.TestCase):
transport = "obfs3"
server_args = ("obfs3", "server",
"127.0.0.1:%d" % SERVER_PORT,
"--dest=127.0.0.1:%d" % EXIT_PORT)
client_args = ("obfs3", "client",
"127.0.0.1:%d" % ENTRY_PORT,
"--dest=127.0.0.1:%d" % SERVER_PORT)
class DirectScrambleSuit(DirectTest, unittest.TestCase):
transport = "scramblesuit"
def setUp(self):
# First, we need to create data directories for ScrambleSuit. It uses
# them to store persistent information such as session tickets and the
# server's long-term keys.
self.tmpdir_srv = tempfile.mkdtemp(prefix="server")
self.tmpdir_cli = tempfile.mkdtemp(prefix="client")
self.server_args = ("--data-dir=%s" % self.tmpdir_srv,
"scramblesuit", "server",
"127.0.0.1:%d" % SERVER_PORT,
"--password=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
"--dest=127.0.0.1:%d" % EXIT_PORT)
self.client_args = ("--data-dir=%s" % self.tmpdir_cli,
"scramblesuit", "client",
"127.0.0.1:%d" % ENTRY_PORT,
"--password=AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA",
"--dest=127.0.0.1:%d" % SERVER_PORT)
# Now, the remaining setup steps can be done.
super(DirectScrambleSuit, self).setUp()
def tearDown(self):
# First, let the parent class shut down the test.
super(DirectScrambleSuit, self).tearDown()
# Now, we can clean up after ourselves.
shutil.rmtree(self.tmpdir_srv)
shutil.rmtree(self.tmpdir_cli)
TEST_FILE = """\
THIS IS A TEST FILE. IT'S USED BY THE INTEGRATION TESTS.
THIS IS A TEST FILE. IT'S USED BY THE INTEGRATION TESTS.
THIS IS A TEST FILE. IT'S USED BY THE INTEGRATION TESTS.
THIS IS A TEST FILE. IT'S USED BY THE INTEGRATION TESTS.
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
"Can entropy ever be reversed?"
"THERE IS AS YET INSUFFICIENT DATA FOR A MEANINGFUL ANSWER."
In obfuscatory age geeky warfare did I wage
For hiding bits from nasty censors' sight
I was hacker to my set in that dim dark age of net
And I hacked from noon till three or four at night
Then a rival from Helsinki said my protocol was dinky
So I flamed him with a condescending laugh,
Saying his designs for stego might as well be made of lego
And that my bikeshed was prettier by half.
But Claude Shannon saw my shame. From his noiseless channel came
A message sent with not a wasted byte
"There are nine and sixty ways to disguise communiques
And RATHER MORE THAN ONE OF THEM IS RIGHT"
(apologies to Rudyard Kipling.)
"""
if __name__ == '__main__':
unittest.main()
obfsproxy-0.2.13/obfsproxy/test/transports/ 0000775 0000000 0000000 00000000000 12570034732 0021064 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.13/obfsproxy/test/transports/__init__.py 0000664 0000000 0000000 00000000000 12570034732 0023163 0 ustar 00root root 0000000 0000000 obfsproxy-0.2.13/obfsproxy/test/transports/test_b64.py 0000664 0000000 0000000 00000003557 12570034732 0023102 0 ustar 00root root 0000000 0000000 import unittest
import twisted.trial.unittest
import obfsproxy.transports.b64 as b64
class test_b64_splitting(twisted.trial.unittest.TestCase):
def _helper_splitter(self, string, expected_chunks):
chunks = b64._get_b64_chunks_from_str(string)
self.assertEqual(chunks, expected_chunks)
def test_1(self):
string = "on==the==left==hand==side=="
expected = ["on==", "the==", "left==", "hand==", "side=="]
self._helper_splitter(string, expected)
def test_2(self):
string = "on=the=left=hand=side="
expected = ["on=", "the=", "left=", "hand=", "side="]
self._helper_splitter(string, expected)
def test_3(self):
string = "on==the=left==hand=side=="
expected = ["on==", "the=", "left==", "hand=", "side=="]
self._helper_splitter(string, expected)
def test_4(self):
string = "on==the==left=hand=side"
expected = ["on==", "the==", "left=", "hand=", "side"]
self._helper_splitter(string, expected)
def test_5(self):
string = "onthelefthandside=="
expected = ["onthelefthandside=="]
self._helper_splitter(string, expected)
def test_6(self):
string = "onthelefthandside"
expected = ["onthelefthandside"]
self._helper_splitter(string, expected)
def test_7(self):
string = "onthelefthandside="
expected = ["onthelefthandside="]
self._helper_splitter(string, expected)
def test_8(self):
string = "side=="
expected = ["side=="]
self._helper_splitter(string, expected)
def test_9(self):
string = "side="
expected = ["side="]
self._helper_splitter(string, expected)
def test_10(self):
string = "side"
expected = ["side"]
self._helper_splitter(string, expected)
if __name__ == '__main__':
unittest.main()
obfsproxy-0.2.13/obfsproxy/test/transports/test_obfs3_dh.py 0000664 0000000 0000000 00000001021 12570034732 0024156 0 ustar 00root root 0000000 0000000 import unittest
import twisted.trial.unittest
import obfsproxy.transports.obfs3_dh as obfs3_dh
class test_uniform_dh(twisted.trial.unittest.TestCase):
def test_uniform_dh(self):
alice = obfs3_dh.UniformDH()
bob = obfs3_dh.UniformDH()
alice_pub = alice.get_public()
bob_pub = bob.get_public()
alice_secret = alice.get_secret(bob_pub)
bob_secret = bob.get_secret(alice_pub)
self.assertEqual(alice_secret, bob_secret)
if __name__ == '__main__':
unittest.main()
obfsproxy-0.2.13/obfsproxy/test/transports/test_scramblesuit.py 0000664 0000000 0000000 00000041400 12570034732 0025171 0 ustar 00root root 0000000 0000000 import unittest
import os
import base64
import shutil
import tempfile
import Crypto.Hash.SHA256
import Crypto.Hash.HMAC
import obfsproxy.common.log as logging
import obfsproxy.network.buffer as obfs_buf
import obfsproxy.common.transport_config as transport_config
import obfsproxy.transports.base as base
import obfsproxy.transports.scramblesuit.state as state
import obfsproxy.transports.scramblesuit.util as util
import obfsproxy.transports.scramblesuit.const as const
import obfsproxy.transports.scramblesuit.mycrypto as mycrypto
import obfsproxy.transports.scramblesuit.uniformdh as uniformdh
import obfsproxy.transports.scramblesuit.scramblesuit as scramblesuit
import obfsproxy.transports.scramblesuit.message as message
import obfsproxy.transports.scramblesuit.ticket as ticket
import obfsproxy.transports.scramblesuit.packetmorpher as packetmorpher
import obfsproxy.transports.scramblesuit.probdist as probdist
# Disable all logging as it would yield plenty of warning and error
# messages.
log = logging.get_obfslogger()
log.disable_logs()
class CryptoTest( unittest.TestCase ):
"""
The HKDF test cases are taken from the appendix of RFC 5869:
https://tools.ietf.org/html/rfc5869
"""
def setUp( self ):
pass
def extract( self, salt, ikm ):
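# HKDF-Extract per RFC 5869: PRK = HMAC-Hash(salt, IKM).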
return Crypto.Hash.HMAC.new(salt, ikm, Crypto.Hash.SHA256).digest()
def runHKDF( self, ikm, salt, info, prk, okm ):
myprk = self.extract(salt, ikm)
self.failIf(myprk != prk)
myokm = mycrypto.HKDF_SHA256(myprk, info).expand()
self.failUnless(myokm in okm)
def test1_HKDF_TestCase1( self ):
ikm = "0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b".decode('hex')
salt = "000102030405060708090a0b0c".decode('hex')
info = "f0f1f2f3f4f5f6f7f8f9".decode('hex')
prk = ("077709362c2e32df0ddc3f0dc47bba6390b6c73bb50f9c3122e" + \
"c844ad7c2b3e5").decode('hex')
okm = ("3cb25f25faacd57a90434f64d0362f2a2d2d0a90cf1a5a4c5db" + \
"02d56ecc4c5bf34007208d5b887185865").decode('hex')
self.runHKDF(ikm, salt, info, prk, okm)
def test2_HKDF_TestCase2( self ):
ikm = ("000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c" + \
"1d1e1f202122232425262728292a2b2c2d2e2f30313233343536373839" + \
"3a3b3c3d3e3f404142434445464748494a4b4c4d4e4f").decode('hex')
salt =("606162636465666768696a6b6c6d6e6f707172737475767778797a7b7c" + \
"7d7e7f808182838485868788898a8b8c8d8e8f90919293949596979899" + \
"9a9b9c9d9e9fa0a1a2a3a4a5a6a7a8a9aaabacadaeaf").decode('hex')
info =("b0b1b2b3b4b5b6b7b8b9babbbcbdbebfc0c1c2c3c4c5c6c7c8c9cacbcc" + \
"cdcecfd0d1d2d3d4d5d6d7d8d9dadbdcdddedfe0e1e2e3e4e5e6e7e8e9" + \
"eaebecedeeeff0f1f2f3f4f5f6f7f8f9fafbfcfdfeff").decode('hex')
prk = ("06a6b88c5853361a06104c9ceb35b45cef760014904671014a193f40c1" + \
"5fc244").decode('hex')
okm = ("b11e398dc80327a1c8e7f78c596a49344f012eda2d4efad8a050cc4c19" + \
"afa97c59045a99cac7827271cb41c65e590e09da3275600c2f09b83677" + \
"93a9aca3db71cc30c58179ec3e87c14c01d5c1" + \
"f3434f1d87").decode('hex')
self.runHKDF(ikm, salt, info, prk, okm)
def test3_HKDF_TestCase3( self ):
ikm = "0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b0b".decode('hex')
salt = ""
info = ""
prk = ("19ef24a32c717b167f33a91d6f648bdf96596776afdb6377a" + \
"c434c1c293ccb04").decode('hex')
okm = ("8da4e775a563c18f715f802a063c5a31b8a11f5c5ee1879ec" + \
"3454e5f3c738d2d9d201395faa4b61a96c8").decode('hex')
self.runHKDF(ikm, salt, info, prk, okm)
def test4_HKDF_TestCase4( self ):
self.assertRaises(ValueError,
mycrypto.HKDF_SHA256, "x" * 40, length=(32*255)+1)
self.assertRaises(ValueError,
mycrypto.HKDF_SHA256, "tooShort")
# Accidental re-use should raise an exception.
hkdf = mycrypto.HKDF_SHA256("x" * 40)
hkdf.expand()
self.assertRaises(base.PluggableTransportError, hkdf.expand)
def test4_CSPRNG( self ):
self.failIf(mycrypto.strongRandom(10) == mycrypto.strongRandom(10))
self.failIf(len(mycrypto.strongRandom(100)) != 100)
def test5_AES( self ):
plain = "this is a test"
key = os.urandom(16)
iv = os.urandom(8)
crypter1 = mycrypto.PayloadCrypter()
crypter1.setSessionKey(key, iv)
crypter2 = mycrypto.PayloadCrypter()
crypter2.setSessionKey(key, iv)
cipher = crypter1.encrypt(plain)
self.failIf(cipher == plain)
self.failUnless(crypter2.decrypt(cipher) == plain)
def test6_HMAC_SHA256_128( self ):
self.assertRaises(AssertionError, mycrypto.HMAC_SHA256_128,
"x" * (const.SHARED_SECRET_LENGTH - 1), "test")
self.failUnless(len(mycrypto.HMAC_SHA256_128("x" * \
const.SHARED_SECRET_LENGTH, "test")) == 16)
class UniformDHTest( unittest.TestCase ):
def setUp( self ):
weAreServer = True
self.udh = uniformdh.new("A" * const.SHARED_SECRET_LENGTH, weAreServer)
def test1_createHandshake( self ):
handshake = self.udh.createHandshake()
self.failUnless((const.PUBLIC_KEY_LENGTH +
const.MARK_LENGTH +
const.HMAC_SHA256_128_LENGTH) <= len(handshake) <=
(const.MARK_LENGTH +
const.HMAC_SHA256_128_LENGTH +
const.MAX_PADDING_LENGTH))
def test2_receivePublicKey( self ):
buf = obfs_buf.Buffer(self.udh.createHandshake())
def callback( masterKey ):
self.failUnless(len(masterKey) == const.MASTER_KEY_LENGTH)
self.failUnless(self.udh.receivePublicKey(buf, callback) == True)
publicKey = self.udh.getRemotePublicKey()
self.failUnless(len(publicKey) == const.PUBLIC_KEY_LENGTH)
def test3_invalidHMAC( self ):
# Make the HMAC invalid.
handshake = self.udh.createHandshake()
if handshake[-1] != 'a':
handshake = handshake[:-1] + 'a'
else:
handshake = handshake[:-1] + 'b'
buf = obfs_buf.Buffer(handshake)
self.failIf(self.udh.receivePublicKey(buf, lambda x: x) == True)
def test4_extractPublicKey( self ):
# Create UniformDH authentication message.
sharedSecret = "A" * const.SHARED_SECRET_LENGTH
realEpoch = util.getEpoch
# Try three valid and one invalid epoch value.
for epoch in util.expandedEpoch() + ["000000"]:
udh = uniformdh.new(sharedSecret, True)
util.getEpoch = lambda: epoch
authMsg = udh.createHandshake()
util.getEpoch = realEpoch
buf = obfs_buf.Buffer()
buf.write(authMsg)
if epoch == "000000":
self.assertFalse(udh.extractPublicKey(buf))
else:
self.assertTrue(udh.extractPublicKey(buf))
class UtilTest( unittest.TestCase ):
def test1_isValidHMAC( self ):
self.failIf(util.isValidHMAC("A" * const.HMAC_SHA256_128_LENGTH,
"B" * const.HMAC_SHA256_128_LENGTH,
"X" * const.SHA256_LENGTH) == True)
self.failIf(util.isValidHMAC("A" * const.HMAC_SHA256_128_LENGTH,
"A" * const.HMAC_SHA256_128_LENGTH,
"X" * const.SHA256_LENGTH) == False)
def test2_locateMark( self ):
self.failIf(util.locateMark("D", "ABC") != None)
hmac = "X" * const.HMAC_SHA256_128_LENGTH
mark = "A" * const.MARK_LENGTH
payload = mark + hmac
self.failIf(util.locateMark(mark, payload) == None)
self.failIf(util.locateMark(mark, payload[:-1]) != None)
def test3_sanitiseBase32( self ):
self.failUnless(util.sanitiseBase32("abc") == "ABC")
self.failUnless(util.sanitiseBase32("ABC1XYZ") == "ABCIXYZ")
self.failUnless(util.sanitiseBase32("ABC1XYZ0") == "ABCIXYZO")
def test4_setStateLocation( self ):
name = (const.TRANSPORT_NAME).lower()
# Check whether the function creates non-existent directories.
d = tempfile.mkdtemp()
util.setStateLocation(d)
self.failUnless(const.STATE_LOCATION == "%s/%s/" % (d, name))
self.failUnless(os.path.exists("%s/%s/" % (d, name)))
# Nothing should change if we pass "None".
util.setStateLocation(None)
self.failUnless(const.STATE_LOCATION == "%s/%s/" % (d, name))
shutil.rmtree(d)
def test5_getEpoch( self ):
e = util.getEpoch()
self.failUnless(isinstance(e, basestring))
def test7_readFromFile( self ):
# Read from a non-existent file.
self.failUnless(util.readFromFile(tempfile.mktemp()) == None)
# Read file where we (hopefully) don't have permissions.
self.failUnless(util.readFromFile("/etc/shadow") == None)
class StateTest( unittest.TestCase ):
def setUp( self ):
const.STATE_LOCATION = tempfile.mkdtemp()
self.stateFile = os.path.join(const.STATE_LOCATION, const.SERVER_STATE_FILE)
self.state = state.State()
def tearDown( self ):
try:
shutil.rmtree(const.STATE_LOCATION)
except OSError:
pass
def test1_genState( self ):
self.state.genState()
self.failUnless(os.path.exists(self.stateFile))
def test2_loadState( self ):
# load() should create the state file if it doesn't exist yet.
self.failIf(os.path.exists(self.stateFile))
self.failUnless(isinstance(state.load(), state.State))
self.failUnless(os.path.exists(self.stateFile))
def test3_replay( self ):
key = "A" * const.HMAC_SHA256_128_LENGTH
self.state.genState()
self.state.registerKey(key)
self.failUnless(self.state.isReplayed(key))
self.failIf(self.state.isReplayed("B" * const.HMAC_SHA256_128_LENGTH))
def test4_ioerrorFail( self ):
def fake_open(name, mode):
raise IOError()
self.state.genState()
import __builtin__
real_open = __builtin__.open
__builtin__.open = fake_open
# Make state.load() fail
self.assertRaises(SystemExit, state.load)
# Make State.writeState() fail.
self.assertRaises(SystemExit, self.state.genState)
__builtin__.open = real_open
class MockArgs( object ):
uniformDHSecret = sharedSecret = ext_cookie_file = dest = None
mode = 'socks'
class ScrambleSuitTransportTest( unittest.TestCase ):
def setUp( self ):
config = transport_config.TransportConfig( )
config.state_location = const.STATE_LOCATION
args = MockArgs( )
suit = scramblesuit.ScrambleSuitTransport
suit.weAreServer = False
self.suit = suit
self.args = args
self.config = config
self.validSecret = base64.b32encode( 'A' * const.SHARED_SECRET_LENGTH )
self.invalidSecret = 'a' * const.SHARED_SECRET_LENGTH
self.statefile = tempfile.mkdtemp()
def tearDown( self ):
try:
shutil.rmtree(self.statefile)
except OSError:
pass
def test1_validateExternalModeCli( self ):
"""Test with valid scramblesuit args and valid obfsproxy args."""
self.args.uniformDHSecret = self.validSecret
self.assertTrue(
super( scramblesuit.ScrambleSuitTransport,
self.suit ).validate_external_mode_cli( self.args ))
self.assertIsNone( self.suit.validate_external_mode_cli( self.args ) )
def test2_validateExternalModeCli( self ):
"""Test with invalid scramblesuit args and valid obfsproxy args."""
self.args.uniformDHSecret = self.invalidSecret
with self.assertRaises( base.PluggableTransportError ):
self.suit.validate_external_mode_cli( self.args )
def test3_get_public_server_options( self ):
transCfg = transport_config.TransportConfig()
transCfg.setStateLocation(self.statefile)
scramblesuit.ScrambleSuitTransport.setup(transCfg)
options = scramblesuit.ScrambleSuitTransport.get_public_server_options("")
self.failUnless("password" in options)
d = { "password": "3X5BIA2MIHLZ55UV4VAEGKZIQPPZ4QT3" }
options = scramblesuit.ScrambleSuitTransport.get_public_server_options(d)
self.failUnless("password" in options)
self.failUnless(options["password"] == "3X5BIA2MIHLZ55UV4VAEGKZIQPPZ4QT3")
class MessageTest( unittest.TestCase ):
def test1_createProtocolMessages( self ):
# An empty message consists only of a header.
self.failUnless(len(message.createProtocolMessages("")[0]) == \
const.HDR_LENGTH)
msg = message.createProtocolMessages('X' * const.MPU)
self.failUnless((len(msg) == 1) and (len(msg[0]) == const.MTU))
msg = message.createProtocolMessages('X' * (const.MPU + 1))
self.failUnless((len(msg) == 2) and \
(len(msg[0]) == const.MTU) and \
(len(msg[1]) == (const.HDR_LENGTH + 1)))
def test2_getFlagNames( self ):
self.failUnless(message.getFlagNames(0) == "Undefined")
self.failUnless(message.getFlagNames(1) == "PAYLOAD")
self.failUnless(message.getFlagNames(2) == "NEW_TICKET")
self.failUnless(message.getFlagNames(4) == "PRNG_SEED")
def test3_isSane( self ):
self.failUnless(message.isSane(0, 0, const.FLAG_NEW_TICKET) == True)
self.failUnless(message.isSane(const.MPU, const.MPU,
const.FLAG_PRNG_SEED) == True)
self.failUnless(message.isSane(const.MPU + 1, 0,
const.FLAG_PAYLOAD) == False)
self.failUnless(message.isSane(0, 0, 1234) == False)
self.failUnless(message.isSane(0, 1, const.FLAG_PAYLOAD) == False)
def test4_ProtocolMessage( self ):
flags = [const.FLAG_NEW_TICKET,
const.FLAG_PAYLOAD,
const.FLAG_PRNG_SEED]
self.assertRaises(base.PluggableTransportError,
message.ProtocolMessage, "1", paddingLen=const.MPU)
class TicketTest( unittest.TestCase ):
def setUp( self ):
const.STATE_LOCATION = tempfile.mkdtemp()
self.stateFile = os.path.join(const.STATE_LOCATION, const.SERVER_STATE_FILE)
self.state = state.State()
self.state.genState()
def tearDown( self ):
try:
shutil.rmtree(const.STATE_LOCATION)
except OSError:
pass
def test1_authentication( self ):
ss = scramblesuit.ScrambleSuitTransport()
ss.srvState = self.state
realEpoch = util.getEpoch
# Try three valid and one invalid epoch value.
for epoch in util.expandedEpoch() + ["000000"]:
util.getEpoch = lambda: epoch
# Prepare ticket message.
blurb = ticket.issueTicketAndKey(self.state)
rawTicket = blurb[const.MASTER_KEY_LENGTH:]
masterKey = blurb[:const.MASTER_KEY_LENGTH]
ss.deriveSecrets(masterKey)
ticketMsg = ticket.createTicketMessage(rawTicket, ss.recvHMAC)
util.getEpoch = realEpoch
buf = obfs_buf.Buffer()
buf.write(ticketMsg)
if epoch == "000000":
self.assertFalse(ss.receiveTicket(buf))
else:
self.assertTrue(ss.receiveTicket(buf))
class PacketMorpher( unittest.TestCase ):
def test1_calcPadding( self ):
def checkDistribution( dist ):
pm = packetmorpher.new(dist)
for i in xrange(0, const.MTU + 2):
padLen = pm.calcPadding(i)
self.assertTrue(const.HDR_LENGTH <= \
padLen < \
(const.MTU + const.HDR_LENGTH))
# Test randomly generated distributions.
for i in xrange(0, 100):
checkDistribution(None)
# Test border-case distributions.
checkDistribution(probdist.new(lambda: 0))
checkDistribution(probdist.new(lambda: 1))
checkDistribution(probdist.new(lambda: const.MTU))
checkDistribution(probdist.new(lambda: const.MTU + 1))
def test2_getPadding( self ):
pm = packetmorpher.new()
sendCrypter = mycrypto.PayloadCrypter()
sendCrypter.setSessionKey("A" * 32, "A" * 8)
sendHMAC = "A" * 32
for i in xrange(0, const.MTU + 2):
padLen = len(pm.getPadding(sendCrypter, sendHMAC, i))
self.assertTrue(const.HDR_LENGTH <= padLen < const.MTU + \
const.HDR_LENGTH)
if __name__ == '__main__':
unittest.main()
obfsproxy-0.2.13/obfsproxy/transports/ 0000775 0000000 0000000 00000000000 12570034732 0020105 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.13/obfsproxy/transports/__init__.py 0000664 0000000 0000000 00000000000 12570034732 0022204 0 ustar 00root root 0000000 0000000 obfsproxy-0.2.13/obfsproxy/transports/b64.py 0000664 0000000 0000000 00000004605 12570034732 0021057 0 ustar 00root root 0000000 0000000 #!/usr/bin/python
# -*- coding: utf-8 -*-
""" This module contains an implementation of the 'b64' transport. """
from obfsproxy.transports.base import BaseTransport
import base64
import obfsproxy.common.log as logging
log = logging.get_obfslogger()
def _get_b64_chunks_from_str(string):
"""
Given a 'string' of concatenated base64 objects, return a list
with the objects.
Assumes that the objects are well-formed base64 strings. Also
assumes that the padding character of base64 is '='.
"""
chunks = []
while True:
pad_loc = string.find('=')
if pad_loc < 0 or pad_loc == len(string)-1 or pad_loc == len(string)-2:
# If there is no padding, or it's the last chunk: append
# it to chunks and return.
chunks.append(string)
return chunks
if pad_loc != len(string)-1 and string[pad_loc+1] == '=': # double padding
pad_loc += 1
# Append the object to the chunks, and prepare the string for
# the next iteration.
chunks.append(string[:pad_loc+1])
string = string[pad_loc+1:]
return chunks
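# Illustrative sketch (not part of the original module): splitting two
# concatenated base64 objects.  base64("a") == "YQ==" (double padding) and
# base64("foo") == "Zm9v" (no padding), so:
#
#   _get_b64_chunks_from_str("YQ==Zm9v") == ["YQ==", "Zm9v"]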
class B64Transport(BaseTransport):
"""
Implements the b64 protocol. A protocol that encodes data with
base64 before pushing them to the network.
"""
def __init__(self):
super(B64Transport, self).__init__()
def receivedDownstream(self, data):
"""
Got data from downstream; relay them upstream.
"""
decoded_data = ''
# TCP is a stream protocol: the data we received might contain
# more than one b64 chunk. We should inspect the data and
# split it into multiple chunks.
b64_chunks = _get_b64_chunks_from_str(data.peek())
# Now b64 decode each chunk and append it to the our decoded
# data.
for chunk in b64_chunks:
try:
decoded_data += base64.b64decode(chunk)
except TypeError:
log.info("We got corrupted b64 ('%s')." % chunk)
return
data.drain()
self.circuit.upstream.write(decoded_data)
def receivedUpstream(self, data):
"""
Got data from upstream; relay them downstream.
"""
self.circuit.downstream.write(base64.b64encode(data.read()))
return
class B64Client(B64Transport):
pass
class B64Server(B64Transport):
pass
obfsproxy-0.2.13/obfsproxy/transports/base.py 0000664 0000000 0000000 00000012733 12570034732 0021377 0 ustar 00root root 0000000 0000000 #!/usr/bin/python
# -*- coding: utf-8 -*-
import pyptlib.util
import obfsproxy.common.log as logging
import argparse
log = logging.get_obfslogger()
"""
This module contains BaseTransport, a pluggable transport skeleton class.
"""
def addrport(string):
"""
Receive '<addr>:<port>' and return (<addr>, <port>).
Used during argparse CLI parsing.
"""
try:
return pyptlib.util.parse_addr_spec(string, resolve=True)
except ValueError, err:
raise argparse.ArgumentTypeError(err)
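# Usage sketch (assumed behaviour of pyptlib.util.parse_addr_spec): a spec
# like "127.0.0.1:4444" is parsed into the tuple ("127.0.0.1", 4444), with
# hostnames resolved to IP addresses because resolve=True.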
class BaseTransport(object):
"""
The BaseTransport class is a skeleton class for pluggable transports.
It contains callbacks that your pluggable transports should
override and customize.
Attributes:
circuit: Circuit object. This is set just before circuitConnected is called.
"""
def __init__(self):
"""
Initialize transport. This is called right after TCP connect.
Subclass overrides should still call this via super().
"""
self.name = "tran_%s" % hex(id(self))
self.circuit = None
@classmethod
def setup(cls, pt_config):
"""
Receive the Pluggable Transport Config, perform setup tasks
and save state in class attributes.
Called at obfsproxy startup.
Raise TransportSetupFailed if something goes wrong.
"""
@classmethod
def get_public_server_options(cls, transport_options):
"""
By default all server transport options are passed to BridgeDB.
If the transport server wishes to prevent some server
transport options from being added to the BridgeDB then
the transport may override this method and return a
transport_options dict with the keys/values to be distributed.
get_public_server_options receives the transport_options argument which
is a dict of server transport options... for example:
A torrc could specify multiple server transport options:
ServerTransportPlugin bananaphone exec /usr/local/bin/obfsproxy --log-min-severity=debug --log-file=/var/log/tor/obfsproxy.log managed
ServerTransportOptions bananaphone corpus=/opt/bananaphone-corpora/pg29468.txt encodingSpec=words,sha1,4 modelName=markov order=1
But if the transport wishes to only pass the encodingSpec to
the BridgeDB then get_public_server_options can be overridden like this:
@classmethod
def get_public_server_options(cls, transport_options):
return dict(encodingSpec = transport_options['encodingSpec'])
In this example the get_public_server_options receives the transport_options dict:
{'corpus': '/opt/bananaphone-corpora/pg29468.txt', 'modelName': 'markov', 'order': '1', 'encodingSpec': 'words,sha1,4'}
"""
return None
def circuitConnected(self):
"""
Our circuit was completed, and this is a good time to do your
transport-specific handshake on its downstream side.
"""
def circuitDestroyed(self, reason, side):
"""
Our circuit was torn down.
Both connections of the circuit are closed when this callback triggers.
"""
def receivedDownstream(self, data):
"""
Received 'data' in the downstream side of our circuit.
'data' is an obfsproxy.network.buffer.Buffer.
"""
def receivedUpstream(self, data):
"""
Received 'data' in the upstream side of our circuit.
'data' is an obfsproxy.network.buffer.Buffer.
"""
def handle_socks_args(self, args):
"""
'args' is a list of k=v strings that serve as configuration
parameters to the pluggable transport.
"""
@classmethod
def register_external_mode_cli(cls, subparser):
"""
Given an argparse ArgumentParser in 'subparser', register
some default external-mode CLI arguments.
Transports with more complex CLI are expected to override this
function.
"""
subparser.add_argument('mode', choices=['server', 'ext_server', 'client', 'socks'])
subparser.add_argument('listen_addr', type=addrport)
subparser.add_argument('--dest', type=addrport, help='Destination address')
subparser.add_argument('--ext-cookie-file', type=str,
help='Filesystem path where the Extended ORPort authentication cookie is stored.')
@classmethod
def validate_external_mode_cli(cls, args):
"""
Given the parsed CLI arguments in 'args', validate them and
make sure they make sense. Return True if they are kosher,
otherwise return False.
Override for your own needs.
"""
err = None
# If we are not 'socks', we need to have a static destination
# to send our data to.
if (args.mode != 'socks') and (not args.dest):
err = "'client' and 'server' modes need a destination address."
elif (args.mode != 'ext_server') and args.ext_cookie_file:
err = "No need for --ext-cookie-file if not an ext_server."
elif (args.mode == 'ext_server') and (not args.ext_cookie_file):
err = "You need to specify --ext-cookie-file as an ext_server."
if not err: # We didn't encounter any errors during validation
return True
else: # Ugh, something failed.
raise ValueError(err)
class PluggableTransportError(Exception): pass
class SOCKSArgsError(Exception): pass
class TransportSetupFailed(Exception): pass
obfsproxy-0.2.13/obfsproxy/transports/dummy.py 0000664 0000000 0000000 00000002457 12570034732 0021622 0 ustar 00root root 0000000 0000000 #!/usr/bin/python
# -*- coding: utf-8 -*-
""" This module contains an implementation of the 'dummy' transport. """
from obfsproxy.transports.base import BaseTransport
class DummyTransport(BaseTransport):
"""
Implements the dummy protocol. A protocol that simply proxies data
without obfuscating them.
"""
def __init__(self):
"""
If you override __init__, you ought to call the super method too.
"""
super(DummyTransport, self).__init__()
def receivedDownstream(self, data):
"""
Got data from downstream; relay them upstream.
"""
self.circuit.upstream.write(data.read())
def receivedUpstream(self, data):
"""
Got data from upstream; relay them downstream.
"""
self.circuit.downstream.write(data.read())
class DummyClient(DummyTransport):
"""
DummyClient is a client for the 'dummy' protocol.
Since this protocol is so simple, the client and the server are identical and both just trivially subclass DummyTransport.
"""
pass
class DummyServer(DummyTransport):
"""
DummyServer is a server for the 'dummy' protocol.
Since this protocol is so simple, the client and the server are identical and both just trivially subclass DummyTransport.
"""
pass
obfsproxy-0.2.13/obfsproxy/transports/obfs2.py 0000664 0000000 0000000 00000027603 12570034732 0021502 0 ustar 00root root 0000000 0000000 #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
The obfs2 module implements the obfs2 protocol.
"""
import random
import hashlib
import argparse
import sys
import obfsproxy.common.aes as aes
import obfsproxy.common.serialize as srlz
import obfsproxy.common.rand as rand
import obfsproxy.transports.base as base
import obfsproxy.common.log as logging
log = logging.get_obfslogger()
MAGIC_VALUE = 0x2BF5CA7E
SEED_LENGTH = 16
MAX_PADDING = 8192
HASH_ITERATIONS = 100000
KEYLEN = 16 # is the length of the key used by E(K,s) -- that is, 16.
IVLEN = 16 # is the length of the IV used by E(K,s) -- that is, 16.
ST_WAIT_FOR_KEY = 0
ST_WAIT_FOR_PADDING = 1
ST_OPEN = 2
def h(x):
""" H(x) is SHA256 of x. """
hasher = hashlib.sha256()
hasher.update(x)
return hasher.digest()
def hn(x, n):
""" H^n(x) is H(x) called iteratively n times. """
data = x
for _ in xrange(n):
data = h(data)
return data
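# Sketch: hn() merely chains h(), so hn(x, 2) == h(h(x)) and hn(x, 0) == x.
# Iterating the hash HASH_ITERATIONS times makes brute-forcing the shared
# secret considerably more expensive.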
class Obfs2Transport(base.BaseTransport):
"""
Obfs2Transport implements the obfs2 protocol.
"""
def __init__(self):
"""Initialize the obfs2 pluggable transport."""
super(Obfs2Transport, self).__init__()
# Check if the shared_secret class attribute was already
# instantiated. If not, instantiate it now.
if not hasattr(self, 'shared_secret'):
self.shared_secret = None
# If external-mode code did not specify the number of hash
# iterations, just use the default.
if not hasattr(self, 'ss_hash_iterations'):
self.ss_hash_iterations = HASH_ITERATIONS
if self.shared_secret:
log.debug("Starting obfs2 with shared secret: %s" % self.shared_secret)
# Our state.
self.state = ST_WAIT_FOR_KEY
if self.we_are_initiator:
self.initiator_seed = rand.random_bytes(SEED_LENGTH) # Initiator's seed.
self.responder_seed = None # Responder's seed.
else:
self.initiator_seed = None # Initiator's seed.
self.responder_seed = rand.random_bytes(SEED_LENGTH) # Responder's seed
# Shared secret seed.
self.secret_seed = None
# Crypto to encrypt outgoing data.
self.send_crypto = None
# Crypto to encrypt outgoing padding.
self.send_padding_crypto = None
# Crypto to decrypt incoming data.
self.recv_crypto = None
# Crypto to decrypt incoming padding.
self.recv_padding_crypto = None
# Number of padding bytes left to read.
self.padding_left_to_read = 0
# If it's True, it means that we received upstream data before
# we had the chance to set up our crypto (after receiving the
# handshake). This means that when we set up our crypto, we
# must remember to push the cached upstream data downstream.
self.pending_data_to_send = False
@classmethod
def setup(cls, transport_config):
"""Setup the obfs2 pluggable transport."""
cls.we_are_initiator = transport_config.weAreClient
# Check for shared-secret in the server transport options.
transport_options = transport_config.getServerTransportOptions()
if transport_options and "shared-secret" in transport_options:
log.debug("Setting shared-secret from server transport options: '%s'", transport_options["shared-secret"])
cls.shared_secret = transport_options["shared-secret"]
@classmethod
def register_external_mode_cli(cls, subparser):
subparser.add_argument('--shared-secret', type=str, help='Shared secret')
# This is a hidden CLI argument for use by the integration
# tests: so that they don't do an insane amount of hash
# iterations.
subparser.add_argument('--ss-hash-iterations', type=int, help=argparse.SUPPRESS)
super(Obfs2Transport, cls).register_external_mode_cli(subparser)
@classmethod
def validate_external_mode_cli(cls, args):
if args.shared_secret:
cls.shared_secret = args.shared_secret
if args.ss_hash_iterations:
cls.ss_hash_iterations = args.ss_hash_iterations
try:
super(Obfs2Transport, cls).validate_external_mode_cli(args)
except ValueError, err:
log.error(err)
sys.exit(1)
def handle_socks_args(self, args):
log.debug("obfs2: Got '%s' as SOCKS arguments." % args)
# A shared secret might already be set if obfsproxy is in
# external-mode and both a cli shared-secret was specified
# _and_ a SOCKS per-connection shared secret.
if self.shared_secret:
log.notice("obfs2: Hm. Weird configuration. A shared secret "
"was specified twice. I will keep the one "
"supplied by the SOCKS arguments.")
if len(args) != 1:
err_msg = "obfs2: Too many SOCKS arguments (%d) (%s)" % (len(args), str(args))
log.warning(err_msg)
raise base.SOCKSArgsError(err_msg)
if not args[0].startswith("shared-secret="):
err_msg = "obfs2: SOCKS arg is not correctly formatted (%s)" % args[0]
log.warning(err_msg)
raise base.SOCKSArgsError(err_msg)
self.shared_secret = args[0][14:]
def circuitConnected(self):
"""
Do the obfs2 handshake:
SEED | E_PAD_KEY( UINT32(MAGIC_VALUE) | UINT32(PADLEN) | WR(PADLEN) )
"""
# Generate keys for outgoing padding.
self.send_padding_crypto = \
self._derive_padding_crypto(self.initiator_seed if self.we_are_initiator else self.responder_seed,
self.send_pad_keytype)
padding_length = random.randint(0, MAX_PADDING)
seed = self.initiator_seed if self.we_are_initiator else self.responder_seed
handshake_message = seed + self.send_padding_crypto.crypt(srlz.htonl(MAGIC_VALUE) +
srlz.htonl(padding_length) +
rand.random_bytes(padding_length))
log.debug("obfs2 handshake: %s queued %d bytes (padding_length: %d).",
"initiator" if self.we_are_initiator else "responder",
len(handshake_message), padding_length)
self.circuit.downstream.write(handshake_message)
def receivedUpstream(self, data):
"""
Got data from upstream. We need to obfuscate them and proxy them downstream.
"""
if not self.send_crypto:
log.debug("Got upstream data before doing handshake. Caching.")
self.pending_data_to_send = True
return
log.debug("obfs2 receivedUpstream: Transmitting %d bytes.", len(data))
# Encrypt and proxy them.
self.circuit.downstream.write(self.send_crypto.crypt(data.read()))
def receivedDownstream(self, data):
"""
Got data from downstream. We need to de-obfuscate them and
proxy them upstream.
"""
log_prefix = "obfs2 receivedDownstream" # used in logs
if self.state == ST_WAIT_FOR_KEY:
log.debug("%s: Waiting for key." % log_prefix)
if len(data) < SEED_LENGTH + 8:
log.debug("%s: Not enough bytes for key (%d)." % (log_prefix, len(data)))
return data # incomplete
if self.we_are_initiator:
self.responder_seed = data.read(SEED_LENGTH)
else:
self.initiator_seed = data.read(SEED_LENGTH)
# Now that we got the other seed, let's set up our crypto.
self.send_crypto = self._derive_crypto(self.send_keytype)
self.recv_crypto = self._derive_crypto(self.recv_keytype)
self.recv_padding_crypto = \
self._derive_padding_crypto(self.responder_seed if self.we_are_initiator else self.initiator_seed,
self.recv_pad_keytype)
# XXX maybe faster with a single d() instead of two.
magic = srlz.ntohl(self.recv_padding_crypto.crypt(data.read(4)))
padding_length = srlz.ntohl(self.recv_padding_crypto.crypt(data.read(4)))
log.debug("%s: Got %d bytes of handshake data (padding_length: %d, magic: %s)" % \
(log_prefix, len(data), padding_length, hex(magic)))
if magic != MAGIC_VALUE:
raise base.PluggableTransportError("obfs2: Corrupted magic value '%s'" % hex(magic))
if padding_length > MAX_PADDING:
raise base.PluggableTransportError("obfs2: Too big padding length '%s'" % padding_length)
self.padding_left_to_read = padding_length
self.state = ST_WAIT_FOR_PADDING
while self.padding_left_to_read:
if not data: return
n_to_drain = self.padding_left_to_read
if (self.padding_left_to_read > len(data)):
n_to_drain = len(data)
data.drain(n_to_drain)
self.padding_left_to_read -= n_to_drain
log.debug("%s: Consumed %d bytes of padding, %d still to come (%d).",
log_prefix, n_to_drain, self.padding_left_to_read, len(data))
self.state = ST_OPEN
log.debug("%s: Processing %d bytes of application data.",
log_prefix, len(data))
if self.pending_data_to_send:
log.debug("%s: We got pending data to send and our crypto is ready. Pushing!" % log_prefix)
self.receivedUpstream(self.circuit.upstream.buffer) # XXX touching guts of network.py
self.pending_data_to_send = False
self.circuit.upstream.write(self.recv_crypto.crypt(data.read()))
def _derive_crypto(self, pad_string): # XXX consider secret_seed
"""
Derive and return an obfs2 key using the pad string in 'pad_string'.
"""
secret = self.mac(pad_string,
self.initiator_seed + self.responder_seed,
self.shared_secret)
return aes.AES_CTR_128(secret[:KEYLEN], secret[KEYLEN:],
counter_wraparound=True)
def _derive_padding_crypto(self, seed, pad_string): # XXX consider secret_seed
"""
Derive and return an obfs2 padding key using the pad string in 'pad_string'.
"""
secret = self.mac(pad_string,
seed,
self.shared_secret)
return aes.AES_CTR_128(secret[:KEYLEN], secret[KEYLEN:],
counter_wraparound=True)
def mac(self, s, x, secret):
"""
obfs2 regular MAC: MAC(s, x) = H(s | x | s)
Optionally, if the client and server share a secret value SECRET,
they can replace the MAC function with:
MAC(s,x) = H^n(s | x | H(SECRET) | s)
where n = HASH_ITERATIONS.
"""
if secret:
secret_hash = h(secret)
return hn(s + x + secret_hash + s, self.ss_hash_iterations)
else:
return h(s + x + s)
class Obfs2Client(Obfs2Transport):
"""
Obfs2Client is a client for the obfs2 protocol.
The client and server differ in terms of their padding strings.
"""
def __init__(self):
self.send_pad_keytype = 'Initiator obfuscation padding'
self.recv_pad_keytype = 'Responder obfuscation padding'
self.send_keytype = "Initiator obfuscated data"
self.recv_keytype = "Responder obfuscated data"
Obfs2Transport.__init__(self)
class Obfs2Server(Obfs2Transport):
"""
Obfs2Server is a server for the obfs2 protocol.
The client and server differ in terms of their padding strings.
"""
def __init__(self):
self.send_pad_keytype = 'Responder obfuscation padding'
self.recv_pad_keytype = 'Initiator obfuscation padding'
self.send_keytype = "Responder obfuscated data"
self.recv_keytype = "Initiator obfuscated data"
Obfs2Transport.__init__(self)
obfsproxy-0.2.13/obfsproxy/transports/obfs3.py 0000664 0000000 0000000 00000022324 12570034732 0021476 0 ustar 00root root 0000000 0000000 #!/usr/bin/python
# -*- coding: utf-8 -*-
"""
The obfs3 module implements the obfs3 protocol.
"""
import random
import obfsproxy.common.aes as aes
import obfsproxy.transports.base as base
import obfsproxy.transports.obfs3_dh as obfs3_dh
import obfsproxy.common.log as logging
import obfsproxy.common.hmac_sha256 as hmac_sha256
import obfsproxy.common.rand as rand
from twisted.internet import threads
log = logging.get_obfslogger()
MAX_PADDING = 8194
PUBKEY_LEN = 192
KEYLEN = 16 # is the length of the key used by E(K,s) -- that is, 16.
HASHLEN = 32 # length of output of sha256
ST_WAIT_FOR_KEY = 0 # Waiting for public key from the other party
ST_WAIT_FOR_HANDSHAKE = 1 # Waiting for the DH handshake
ST_SEARCHING_MAGIC = 2 # Waiting for magic strings from the other party
ST_OPEN = 3 # obfs3 handshake is complete. Sending application data.
class Obfs3Transport(base.BaseTransport):
"""
Obfs3Transport implements the obfs3 protocol.
"""
def __init__(self):
"""Initialize the obfs3 pluggable transport."""
super(Obfs3Transport, self).__init__()
# Our state.
self.state = ST_WAIT_FOR_KEY
# Uniform-DH object
self.dh = obfs3_dh.UniformDH()
# DH shared secret
self.shared_secret = None
# Bytes of padding scanned so far.
self.scanned_padding = 0
# Last padding bytes scanned.
self.last_padding_chunk = ''
# Magic value that the other party is going to send
# (initialized after deriving shared secret)
self.other_magic_value = None
# Crypto to encrypt outgoing data.
self.send_crypto = None
# Crypto to decrypt incoming data.
self.recv_crypto = None
# Buffer for the first data that Tor is trying to send but can't send right now
# because we have to handle the DH handshake first.
self.queued_data = ''
# Attributes below are filled by classes that inherit Obfs3Transport.
self.send_keytype = None
self.recv_keytype = None
self.send_magic_const = None
self.recv_magic_const = None
self.we_are_initiator = None
def circuitConnected(self):
"""
Do the obfs3 handshake:
PUBKEY | WR(PADLEN)
"""
padding_length = random.randint(0, MAX_PADDING/2)
handshake_message = self.dh.get_public() + rand.random_bytes(padding_length)
log.debug("obfs3 handshake: %s queued %d bytes (padding_length: %d) (public key: %s).",
"initiator" if self.we_are_initiator else "responder",
len(handshake_message), padding_length, repr(self.dh.get_public()))
self.circuit.downstream.write(handshake_message)
def receivedUpstream(self, data):
"""
Got data from upstream. We need to obfuscate them and proxy them downstream.
"""
if not self.send_crypto:
log.debug("Got upstream data before doing handshake. Caching.")
self.queued_data += data.read()
return
message = self.send_crypto.crypt(data.read())
log.debug("obfs3 receivedUpstream: Transmitting %d bytes.", len(message))
# Proxy encrypted message.
self.circuit.downstream.write(message)
def receivedDownstream(self, data):
"""
Got data from downstream. We need to de-obfuscate them and
proxy them upstream.
"""
if self.state == ST_WAIT_FOR_KEY: # Looking for the other peer's pubkey
self._read_handshake(data)
if self.state == ST_WAIT_FOR_HANDSHAKE: # Doing the exp mod
return
if self.state == ST_SEARCHING_MAGIC: # Looking for the magic string
self._scan_for_magic(data)
if self.state == ST_OPEN: # Handshake is done. Just decrypt and read application data.
log.debug("obfs3 receivedDownstream: Processing %d bytes of application data." %
len(data))
self.circuit.upstream.write(self.recv_crypto.crypt(data.read()))
def _read_handshake(self, data):
"""
Read handshake message, parse the other peer's public key and
schedule the key exchange for execution outside of the event loop.
"""
log_prefix = "obfs3:_read_handshake()"
if len(data) < PUBKEY_LEN:
log.debug("%s: Not enough bytes for key (%d)." % (log_prefix, len(data)))
return
log.debug("%s: Got %d bytes of handshake data (waiting for key)." % (log_prefix, len(data)))
# Get the public key from the handshake message, do the DH and
# get the shared secret.
other_pubkey = data.read(PUBKEY_LEN)
# Do the UniformDH handshake asynchronously
self.d = threads.deferToThread(self.dh.get_secret, other_pubkey)
self.d.addCallback(self._read_handshake_post_dh, other_pubkey, data)
self.d.addErrback(self._uniform_dh_errback, other_pubkey)
self.state = ST_WAIT_FOR_HANDSHAKE
def _uniform_dh_errback(self, failure, other_pubkey):
"""
Errback for the deferred UniformDH key exchange: the exchange failed
(e.g., because the public key was corrupted), so close the circuit and
log the problem.
"""
self.circuit.close()
e = failure.trap(ValueError)
log.warning("obfs3: Corrupted public key '%s'" % repr(other_pubkey))
def _read_handshake_post_dh(self, shared_secret, other_pubkey, data):
"""
Set up the crypto from the calculated shared secret, and complete the
obfs3 handshake.
"""
self.shared_secret = shared_secret
log_prefix = "obfs3:_read_handshake_post_dh()"
log.debug("Got public key: %s.\nGot shared secret: %s" %
(repr(other_pubkey), repr(self.shared_secret)))
# Set up our crypto.
self.send_crypto = self._derive_crypto(self.send_keytype)
self.recv_crypto = self._derive_crypto(self.recv_keytype)
self.other_magic_value = hmac_sha256.hmac_sha256_digest(self.shared_secret,
self.recv_magic_const)
# Send our magic value to the remote end and append the queued outgoing data.
# Padding is prepended so that the server does not just send the 32-byte magic
# in a single TCP segment.
padding_length = random.randint(0, MAX_PADDING/2)
magic = hmac_sha256.hmac_sha256_digest(self.shared_secret, self.send_magic_const)
message = rand.random_bytes(padding_length) + magic + self.send_crypto.crypt(self.queued_data)
self.queued_data = ''
log.debug("%s: Transmitting %d bytes (with magic)." % (log_prefix, len(message)))
self.circuit.downstream.write(message)
self.state = ST_SEARCHING_MAGIC
if len(data) > 0:
log.debug("%s: Processing %d bytes of handshake data remaining after key." % (log_prefix, len(data)))
self._scan_for_magic(data)
def _scan_for_magic(self, data):
"""
Scan 'data' for the magic string. If found, drain it and all
the padding before it. Then open the connection.
"""
log_prefix = "obfs3:_scan_for_magic()"
log.debug("%s: Searching for magic." % log_prefix)
assert(self.other_magic_value)
chunk = data.peek()
index = chunk.find(self.other_magic_value)
if index < 0:
if (len(data) > MAX_PADDING+HASHLEN):
raise base.PluggableTransportError("obfs3: Too much padding (%d)!" % len(data))
log.debug("%s: Did not find magic this time (%d)." % (log_prefix, len(data)))
return
index += len(self.other_magic_value)
log.debug("%s: Found magic. Draining %d bytes." % (log_prefix, index))
data.drain(index)
self.state = ST_OPEN
if len(data) > 0:
log.debug("%s: Processing %d bytes of application data remaining after magic." % (log_prefix, len(data)))
self.circuit.upstream.write(self.recv_crypto.crypt(data.read()))
def _derive_crypto(self, pad_string):
"""
Derive and return an obfs3 key using the pad string in 'pad_string'.
"""
secret = hmac_sha256.hmac_sha256_digest(self.shared_secret, pad_string)
return aes.AES_CTR_128(secret[:KEYLEN], secret[KEYLEN:],
counter_wraparound=True)
class Obfs3Client(Obfs3Transport):
"""
Obfs3Client is a client for the obfs3 protocol.
The client and server differ in terms of their padding strings.
"""
def __init__(self):
Obfs3Transport.__init__(self)
self.send_keytype = "Initiator obfuscated data"
self.recv_keytype = "Responder obfuscated data"
self.send_magic_const = "Initiator magic"
self.recv_magic_const = "Responder magic"
self.we_are_initiator = True
class Obfs3Server(Obfs3Transport):
"""
Obfs3Server is a server for the obfs3 protocol.
The client and server differ in terms of their padding strings.
"""
def __init__(self):
Obfs3Transport.__init__(self)
self.send_keytype = "Responder obfuscated data"
self.recv_keytype = "Initiator obfuscated data"
self.send_magic_const = "Responder magic"
self.recv_magic_const = "Initiator magic"
self.we_are_initiator = False
obfsproxy-0.2.13/obfsproxy/transports/obfs3_dh.py 0000664 0000000 0000000 00000006260 12570034732 0022152 0 ustar 00root root 0000000 0000000 import binascii
import obfsproxy.common.rand as rand
import obfsproxy.common.modexp as modexp
def int_to_bytes(lvalue, width):
fmt = '%%.%dx' % (2*width)
return binascii.unhexlify(fmt % (lvalue & ((1L<<8*width)-1)))
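# Sketch: int_to_bytes() left-pads with zero bytes, e.g.
# int_to_bytes(0xff, 4) == '\x00\x00\x00\xff'.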
class UniformDH:
"""
This is a class that implements a DH handshake that uses public
keys that are indistinguishable from 192-byte random strings.
The idea (and even the implementation) was suggested by Ian
Goldberg in:
https://lists.torproject.org/pipermail/tor-dev/2012-December/004245.html
https://lists.torproject.org/pipermail/tor-dev/2012-December/004248.html
Attributes:
mod, the modulus of our DH group.
g, the generator of our DH group.
group_len, the size of the group in bytes.
priv_str, a byte string representing our DH private key.
priv, our DH private key as an integer.
pub_str, a byte string representing our DH public key.
pub, our DH public key as an integer.
shared_secret, our DH shared secret.
"""
# 1536-bit MODP Group from RFC3526
mod = int(
"""FFFFFFFF FFFFFFFF C90FDAA2 2168C234 C4C6628B 80DC1CD1
29024E08 8A67CC74 020BBEA6 3B139B22 514A0879 8E3404DD
EF9519B3 CD3A431B 302B0A6D F25F1437 4FE1356D 6D51C245
E485B576 625E7EC6 F44C42E9 A637ED6B 0BFF5CB6 F406B7ED
EE386BFB 5A899FA5 AE9F2411 7C4B1FE6 49286651 ECE45B3D
C2007CB8 A163BF05 98DA4836 1C55D39A 69163FA8 FD24CF5F
83655D23 DCA3AD96 1C62F356 208552BB 9ED52907 7096966D
670C354E 4ABC9804 F1746C08 CA237327 FFFFFFFF FFFFFFFF""".replace(' ','').replace('\n','').replace('\t',''), 16)
g = 2
group_len = 192 # bytes (1536-bits)
def __init__(self, private_key = None):
# Generate private key
if private_key != None:
if len(private_key) != self.group_len:
raise ValueError("private_key is a invalid length (Expected %d, got %d)" % (group_len, len(private_key)))
self.priv_str = private_key
else:
self.priv_str = rand.random_bytes(self.group_len)
self.priv = int(binascii.hexlify(self.priv_str), 16)
# Make the private key even
flip = self.priv % 2
self.priv -= flip
# Generate public key
#
# Note: Always generate both valid public keys, and then pick to avoid
# leaking timing information about which key was chosen.
pub = modexp.powMod(self.g, self.priv, self.mod)
pub_p_sub_X = self.mod - pub
if flip == 1:
self.pub = pub_p_sub_X
else:
self.pub = pub
self.pub_str = int_to_bytes(self.pub, self.group_len)
self.shared_secret = None
def get_public(self):
return self.pub_str
def get_secret(self, their_pub_str):
"""
Given the public key of the other party as a string of bytes,
calculate our shared secret.
This might raise a ValueError since 'their_pub_str' is
attacker controlled.
"""
their_pub = int(binascii.hexlify(their_pub_str), 16)
self.shared_secret = modexp.powMod(their_pub, self.priv, self.mod)
return int_to_bytes(self.shared_secret, self.group_len)
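# Minimal usage sketch (not part of the original module): both parties
# derive the same 192-byte shared secret from each other's public keys.
if __name__ == "__main__":
    alice = UniformDH()
    bob = UniformDH()
    assert alice.get_secret(bob.get_public()) == \
           bob.get_secret(alice.get_public())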
obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/ 0000775 0000000 0000000 00000000000 12570034732 0022602 5 ustar 00root root 0000000 0000000 obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/__init__.py 0000664 0000000 0000000 00000000000 12570034732 0024701 0 ustar 00root root 0000000 0000000 obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/const.py 0000664 0000000 0000000 00000007125 12570034732 0024307 0 ustar 00root root 0000000 0000000 """
This module defines constant values for the ScrambleSuit protocol.
While some values can be changed, in general they should not; if you change
them anyway, be careful, because the protocol can easily break.
"""
# Length, in bytes, of the key of the HMAC which is used to authenticate tickets.
TICKET_HMAC_KEY_LENGTH = 32
# Length of the AES key used to encrypt tickets in bytes.
TICKET_AES_KEY_LENGTH = 16
# Length of the IV for AES-CBC which is used to encrypt tickets in bytes.
TICKET_AES_CBC_IV_LENGTH = 16
# Directory where long-lived information is stored. It defaults to the current
# directory but is later set by `setStateLocation()' in util.py.
STATE_LOCATION = ""
# Contains a ready-to-use bridge descriptor (in managed mode) or simply the
# server's bind address together with the password (in external mode).
PASSWORD_FILE = "server_password"
# Divisor (in seconds) for the Unix epoch used to defend against replay
# attacks.
EPOCH_GRANULARITY = 3600
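# Sketch (assuming util.getEpoch() divides the Unix time by this value):
# with the default granularity, two handshakes within the same hour map to
# the same epoch string, e.g. str(int(time.time()) / 3600) in Python 2.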
# Flags which can be set in a ScrambleSuit protocol message.
FLAG_PAYLOAD = (1 << 0)
FLAG_NEW_TICKET = (1 << 1)
FLAG_PRNG_SEED = (1 << 2)
# Length of ScrambleSuit's header in bytes.
HDR_LENGTH = 16 + 2 + 2 + 1
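# (16-byte HMAC-SHA256-128 + 2-byte total length field + 2-byte payload
# length field + 1-byte flags field; see message.py for the framing.)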
# Length of the HMAC-SHA256-128 digest in bytes.
HMAC_SHA256_128_LENGTH = 16
# Whether or not to use inter-arrival time obfuscation. Disabling this option
# makes the transported protocol more identifiable but increases throughput a
# lot.
USE_IAT_OBFUSCATION = False
# Key rotation time for session ticket keys in seconds.
KEY_ROTATION_TIME = 60 * 60 * 24 * 7
# Length, in bytes, of the mark used to easily locate the HMAC
# authenticating handshake messages.
MARK_LENGTH = 16
# The master key's length in bytes.
MASTER_KEY_LENGTH = 32
# Maximum amount of seconds a packet is delayed due to inter-arrival time
# obfuscation.
MAX_PACKET_DELAY = 0.01
# The maximum amount of padding to be appended to handshake data.
MAX_PADDING_LENGTH = 1500
# The maximum length of a handshake in bytes (UniformDH as well as session
# tickets).
MAX_HANDSHAKE_LENGTH = MAX_PADDING_LENGTH + \
MARK_LENGTH + \
HMAC_SHA256_128_LENGTH
# Length of ScrambleSuit's MTU in bytes. Note that this is *not* the link MTU
# which is probably 1500.
MTU = 1448
# Maximum payload unit of a ScrambleSuit message in bytes.
MPU = MTU - HDR_LENGTH
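# Worked out: with HDR_LENGTH = 21 this yields an MPU of 1448 - 21 = 1427
# bytes of payload per ScrambleSuit message.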
# The minimum number of distinct bins for probability distributions.
MIN_BINS = 1
# The maximum number of distinct bins for probability distributions.
MAX_BINS = 100
# Length of a UniformDH public key in bytes.
PUBLIC_KEY_LENGTH = 192
# Length of the PRNG seed used to generate probability distributions in bytes.
PRNG_SEED_LENGTH = 32
# File which holds the server's state information.
SERVER_STATE_FILE = "server_state.cpickle"
# Life time of session tickets in seconds.
SESSION_TICKET_LIFETIME = KEY_ROTATION_TIME
# SHA256's digest length in bytes.
SHA256_LENGTH = 32
# The length of the UniformDH shared secret in bytes. It should be a multiple
# of 5 bytes since outside ScrambleSuit it is encoded in Base32. That way, we
# can avoid padding which might confuse users.
SHARED_SECRET_LENGTH = 20
# States which are used for the protocol state machine.
ST_WAIT_FOR_AUTH = 0
ST_AUTH_FAILED = 1
ST_CONNECTED = 2
# File which holds the client's session tickets.
CLIENT_TICKET_FILE = "session_ticket.yaml"
# Static validation string embedded in all tickets. Must be a multiple of 16
# bytes due to AES' block size.
TICKET_IDENTIFIER = "ScrambleSuitTicket"
# Length of a session ticket in bytes.
TICKET_LENGTH = 112
# The protocol name which is used in log messages.
TRANSPORT_NAME = "ScrambleSuit"
obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/fifobuf.py 0000664 0000000 0000000 00000006245 12570034732 0024603 0 ustar 00root root 0000000 0000000 """
Provides an interface for a fast FIFO buffer.
The interface implements only 'read()', 'write()' and 'len()'. The
implementation below is a modified version of the code originally written by
Ben Timby: http://ben.timby.com/?p=139
"""
try:
from cStringIO import StringIO
except ImportError:
from StringIO import StringIO
MAX_BUFFER = 1024**2*4
class Buffer( object ):
"""
Implements a fast FIFO buffer.
Internally, the buffer consists of a list of StringIO objects. New
StringIO objects are added and deleted as data is written to and read from
the FIFO buffer.
"""
def __init__( self, max_size=MAX_BUFFER ):
"""
Initialise a Buffer object.
"""
self.buffers = []
self.max_size = max_size
self.read_pos = 0
self.write_pos = 0
def write( self, data ):
"""
Write `data' to the FIFO buffer.
If necessary, a new internal buffer is created.
"""
# Add a StringIO buffer if none exists yet.
if not self.buffers:
self.buffers.append(StringIO())
self.write_pos = 0
lastBuf = self.buffers[-1]
lastBuf.seek(self.write_pos)
lastBuf.write(data)
# If we are over the limit, a new internal buffer is created.
if lastBuf.tell() >= self.max_size:
lastBuf = StringIO()
self.buffers.append(lastBuf)
self.write_pos = lastBuf.tell()
def read( self, length=-1 ):
"""
Read `length' elements of the FIFO buffer.
Drained data is automatically deleted.
"""
read_buf = StringIO()
remaining = length
while True:
if not self.buffers:
break
firstBuf = self.buffers[0]
firstBuf.seek(self.read_pos)
read_buf.write(firstBuf.read(remaining))
self.read_pos = firstBuf.tell()
if length == -1:
# We did not limit the read, we exhausted the buffer, so delete
# it. Keep reading from the remaining buffers.
del self.buffers[0]
self.read_pos = 0
else:
# We limited the read so either we exhausted the buffer or not.
remaining = length - read_buf.tell()
if remaining > 0:
# Exhausted, remove buffer, read more. Keep reading from
# remaining buffers.
del self.buffers[0]
self.read_pos = 0
else:
# Did not exhaust buffer, but read all that was requested.
# Break to stop reading and return data of requested
# length.
break
return read_buf.getvalue()
def __len__(self):
"""
Return the length of the Buffer object.
"""
length = 0
for buf in self.buffers:
# Jump to the end of the internal buffer.
buf.seek(0, 2)
if buf == self.buffers[0]:
length += buf.tell() - self.read_pos
else:
length += buf.tell()
return length
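# Minimal usage sketch (not part of the original module): FIFO semantics.
if __name__ == "__main__":
    buf = Buffer()
    buf.write("hello")
    assert buf.read(2) == "he"  # drained in FIFO order
    assert len(buf) == 3        # "llo" is still buffered
    assert buf.read() == "llo"  # read() without argument drains the rest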
obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/message.py 0000664 0000000 0000000 00000015716 12570034732 0024612 0 ustar 00root root 0000000 0000000 """
This module provides code to handle ScrambleSuit protocol messages.
The exported classes and functions provide interfaces to handle protocol
messages, check message headers for validity and create protocol messages out
of application data.
"""
import obfsproxy.common.log as logging
import obfsproxy.common.serialize as pack
import obfsproxy.transports.base as base
import mycrypto
import const
log = logging.get_obfslogger()
def createProtocolMessages( data, flags=const.FLAG_PAYLOAD ):
"""
Create protocol messages out of the given payload.
The given `data' is turned into a list of protocol messages with the given
`flags' set. The list is then returned. If possible, all messages fill
the MTU.
"""
messages = []
while len(data) > const.MPU:
messages.append(ProtocolMessage(data[:const.MPU], flags=flags))
data = data[const.MPU:]
messages.append(ProtocolMessage(data, flags=flags))
log.debug("Created %d protocol messages." % len(messages))
return messages
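# Sketch: with const.MPU = 1427, a 1428-byte payload is fragmented into one
# full 1448-byte message (header + MPU payload) plus one 22-byte message
# carrying the single remaining byte (cf. MessageTest in the test suite).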
def getFlagNames( flags ):
"""
Return the flag name encoded in the integer `flags' as string.
This function is only useful for printing easy-to-read flag names in debug
log messages.
"""
if flags == 1:
return "PAYLOAD"
elif flags == 2:
return "NEW_TICKET"
elif flags == 4:
return "PRNG_SEED"
else:
return "Undefined"
def isSane( totalLen, payloadLen, flags ):
"""
Verifies whether the given header fields are sane.
The values of the fields `totalLen', `payloadLen' and `flags' are checked
for their sanity. If they are in the expected range, `True' is returned.
If any of these fields has an invalid value, `False' is returned.
"""
def isFine( length ):
"""
Check if the given length is fine.
"""
return True if (0 <= length <= const.MPU) else False
log.debug("Message header: totalLen=%d, payloadLen=%d, flags"
"=%s" % (totalLen, payloadLen, getFlagNames(flags)))
validFlags = [
const.FLAG_PAYLOAD,
const.FLAG_NEW_TICKET,
const.FLAG_PRNG_SEED,
]
return isFine(totalLen) and \
isFine(payloadLen) and \
totalLen >= payloadLen and \
(flags in validFlags)
class ProtocolMessage( object ):
"""
Represents a ScrambleSuit protocol message.
This class provides methods to deal with protocol messages. The methods
make it possible to add padding as well as to encrypt and authenticate
protocol messages.
"""
def __init__( self, payload="", paddingLen=0, flags=const.FLAG_PAYLOAD ):
"""
Initialises a ProtocolMessage object.
"""
payloadLen = len(payload)
if (payloadLen + paddingLen) > const.MPU:
raise base.PluggableTransportError("No overly long messages.")
self.totalLen = payloadLen + paddingLen
self.payloadLen = payloadLen
self.payload = payload
self.flags = flags
def encryptAndHMAC( self, crypter, hmacKey ):
"""
Encrypt and authenticate this protocol message.
This protocol message is encrypted using `crypter' and authenticated
using `hmacKey'. Finally, the encrypted message prepended by a
HMAC-SHA256-128 is returned and ready to be sent over the wire.
"""
encrypted = crypter.encrypt(pack.htons(self.totalLen) +
pack.htons(self.payloadLen) +
chr(self.flags) + self.payload +
(self.totalLen - self.payloadLen) * '\0')
hmac = mycrypto.HMAC_SHA256_128(hmacKey, encrypted)
return hmac + encrypted
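# Resulting wire format (from the code above):
#   HMAC-SHA256-128 (16 bytes) ||
#   AES-CTR( totalLen (2) || payloadLen (2) || flags (1) || payload || padding )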
def addPadding( self, paddingLen ):
"""
Add padding to this protocol message.
Padding is added to this protocol message. The exact amount is
specified by `paddingLen'.
"""
# The padding must not exceed the message size.
if (self.totalLen + paddingLen) > const.MPU:
raise base.PluggableTransportError("Can't pad more than the MTU.")
if paddingLen == 0:
return
log.debug("Adding %d bytes of padding to %d-byte message." %
(paddingLen, const.HDR_LENGTH + self.totalLen))
self.totalLen += paddingLen
def __len__( self ):
"""
Return the length of this protocol message.
"""
return const.HDR_LENGTH + self.totalLen
# Alias class name in order to provide a more intuitive API.
new = ProtocolMessage
class MessageExtractor( object ):
"""
Extracts ScrambleSuit protocol messages out of an encrypted stream.
"""
def __init__( self ):
"""
Initialise a new MessageExtractor object.
"""
self.recvBuf = ""
self.totalLen = None
self.payloadLen = None
self.flags = None
def extract( self, data, aes, hmacKey ):
"""
Extracts (i.e., decrypts and authenticates) protocol messages.
The raw `data' coming directly from the wire is decrypted using `aes'
and authenticated using `hmacKey'. The payload is then returned as
unencrypted protocol messages. In case of invalid headers or HMACs, an
exception is raised.
"""
self.recvBuf += data
msgs = []
# Keep trying to unpack as long as there is at least a header.
while len(self.recvBuf) >= const.HDR_LENGTH:
# If necessary, extract the header fields.
if self.totalLen == self.payloadLen == self.flags == None:
self.totalLen = pack.ntohs(aes.decrypt(self.recvBuf[16:18]))
self.payloadLen = pack.ntohs(aes.decrypt(self.recvBuf[18:20]))
self.flags = ord(aes.decrypt(self.recvBuf[20]))
if not isSane(self.totalLen, self.payloadLen, self.flags):
raise base.PluggableTransportError("Invalid header.")
# Parts of the message are still on the wire; waiting.
if (len(self.recvBuf) - const.HDR_LENGTH) < self.totalLen:
break
rcvdHMAC = self.recvBuf[0:const.HMAC_SHA256_128_LENGTH]
vrfyHMAC = mycrypto.HMAC_SHA256_128(hmacKey,
self.recvBuf[const.HMAC_SHA256_128_LENGTH:
(self.totalLen + const.HDR_LENGTH)])
if rcvdHMAC != vrfyHMAC:
raise base.PluggableTransportError("Invalid message HMAC.")
# Decrypt the message and remove it from the input buffer.
extracted = aes.decrypt(self.recvBuf[const.HDR_LENGTH:
(self.totalLen + const.HDR_LENGTH)])[:self.payloadLen]
msgs.append(ProtocolMessage(payload=extracted, flags=self.flags))
self.recvBuf = self.recvBuf[const.HDR_LENGTH + self.totalLen:]
# Protocol message processed; now reset length fields.
self.totalLen = self.payloadLen = self.flags = None
return msgs
obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/mycrypto.py 0000664 0000000 0000000 00000010754 12570034732 0025051 0 ustar 00root root 0000000 0000000 """
This module provides cryptographic functions not implemented in PyCrypto.
The implemented algorithms include HKDF-SHA256, HMAC-SHA256-128, (CS)PRNGs and
an interface for encryption and decryption using AES in counter mode.
"""
import Crypto.Hash.SHA256
import Crypto.Hash.HMAC
import Crypto.Util.Counter
import Crypto.Cipher.AES
import obfsproxy.transports.base as base
import obfsproxy.common.log as logging
import math
import os
import const
log = logging.get_obfslogger()
class HKDF_SHA256( object ):
"""
Implements HKDF using SHA256: https://tools.ietf.org/html/rfc5869
This class only implements the `expand' but not the `extract' stage since
the provided PRK already exhibits strong entropy.
"""
def __init__( self, prk, info="", length=32 ):
"""
Initialise a HKDF_SHA256 object.
"""
self.hashLen = const.SHA256_LENGTH
if length > (self.hashLen * 255):
raise ValueError("The OKM's length cannot be larger than %d." %
(self.hashLen * 255))
if len(prk) < self.hashLen:
raise ValueError("The PRK must be at least %d bytes in length "
"(%d given)." % (self.hashLen, len(prk)))
self.N = math.ceil(float(length) / self.hashLen)
self.prk = prk
self.info = info
self.length = length
self.ctr = 1
self.T = ""
def expand( self ):
"""
Return the expanded output key material.
The output key material is calculated based on the given PRK, info and
L.
"""
tmp = ""
# Prevent the accidental re-use of output keying material.
if len(self.T) > 0:
raise base.PluggableTransportError("HKDF-SHA256 OKM must not "
"be re-used by application.")
while self.length > len(self.T):
tmp = Crypto.Hash.HMAC.new(self.prk, tmp + self.info +
chr(self.ctr),
Crypto.Hash.SHA256).digest()
self.T += tmp
self.ctr += 1
return self.T[:self.length]
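# Usage sketch (hypothetical values): derive 64 bytes of output keying
# material from a 32-byte PRK.
#   okm = HKDF_SHA256("k" * 32, info="example", length=64).expand()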
def HMAC_SHA256_128( key, msg ):
"""
Return the HMAC-SHA256-128 of the given `msg' authenticated by `key'.
"""
assert(len(key) >= const.SHARED_SECRET_LENGTH)
h = Crypto.Hash.HMAC.new(key, msg, Crypto.Hash.SHA256)
# Return HMAC truncated to 128 out of 256 bits.
return h.digest()[:16]
def strongRandom( size ):
"""
Return `size' bytes of strong randomness suitable for cryptographic use.
"""
return os.urandom(size)
class PayloadCrypter:
"""
Provides methods to encrypt data using AES in counter mode.
This class provides methods to set a session key as well as an
initialisation vector and to encrypt and decrypt data.
"""
def __init__( self ):
"""
Initialise a PayloadCrypter object.
"""
log.debug("Initialising AES-CTR instance.")
self.sessionKey = None
self.crypter = None
self.counter = None
def setSessionKey( self, key, iv ):
"""
Set AES' session key and the initialisation vector for counter mode.
The given `key' and `iv' are used as 256-bit AES key and as 128-bit
initialisation vector for counter mode. Both, the key as well as the
IV must come from a CSPRNG.
"""
self.sessionKey = key
# Our 128-bit counter has the following format:
# [ 64-bit static and random IV ] [ 64-bit incrementing counter ]
# Counter wrapping is not allowed which makes it possible to transfer
# 2^64 * 16 bytes of data while avoiding counter reuse. That amount is
# effectively out of reach given today's networking performance.
log.debug("Setting IV for AES-CTR.")
self.counter = Crypto.Util.Counter.new(64,
prefix = iv,
initial_value = 1,
allow_wraparound = False)
log.debug("Setting session key for AES-CTR.")
self.crypter = Crypto.Cipher.AES.new(key, Crypto.Cipher.AES.MODE_CTR,
counter=self.counter)
def encrypt( self, data ):
"""
Encrypts the given `data' using AES in counter mode.
"""
return self.crypter.encrypt(data)
# Encryption equals decryption in AES-CTR.
decrypt = encrypt
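# Minimal usage sketch (not part of the original module): AES-CTR is
# symmetric, so a second crypter with the same key/IV decrypts.
if __name__ == "__main__":
    sender, receiver = PayloadCrypter(), PayloadCrypter()
    sender.setSessionKey("k" * 32, "i" * 8)    # 256-bit key, 64-bit IV prefix
    receiver.setSessionKey("k" * 32, "i" * 8)
    assert receiver.decrypt(sender.encrypt("attack at dawn")) == "attack at dawn"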
obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/packetmorpher.py 0000664 0000000 0000000 00000006222 12570034732 0026022 0 ustar 00root root 0000000 0000000 """
Provides code to morph a chunk of data to a given probability distribution.
The class provides an interface to morph a network packet's length to a
previously generated probability distribution. The packet lengths of the
morphed network data should then match the probability distribution.
"""
import random
import message
import probdist
import const
import obfsproxy.common.log as logging
log = logging.get_obfslogger()
class PacketMorpher( object ):
"""
Implements methods to morph data to a target probability distribution.
This class is used to modify ScrambleSuit's packet length distribution on
the wire. The class provides a method to determine the padding for packets
smaller than the MTU.
"""
def __init__( self, dist=None ):
"""
Initialise the packet morpher with the given distribution `dist'.
If `dist' is `None', a new discrete probability distribution is
generated randomly.
"""
if dist:
self.dist = dist
else:
self.dist = probdist.new(lambda: random.randint(const.HDR_LENGTH,
const.MTU))
def getPadding( self, sendCrypter, sendHMAC, dataLen ):
"""
Based on the burst's size, return a ready-to-send padding blurb.
"""
padLen = self.calcPadding(dataLen)
assert const.HDR_LENGTH <= padLen < (const.MTU + const.HDR_LENGTH), \
"Invalid padding length %d." % padLen
        # We need two padding messages if the padding exceeds the MTU; the
        # 700-byte length of the first message is an arbitrary split point.
if padLen > const.MTU:
padMsgs = [message.new("", paddingLen=700 - const.HDR_LENGTH),
message.new("", paddingLen=padLen - 700 - \
const.HDR_LENGTH)]
else:
padMsgs = [message.new("", paddingLen=padLen - const.HDR_LENGTH)]
blurbs = [msg.encryptAndHMAC(sendCrypter, sendHMAC) for msg in padMsgs]
return "".join(blurbs)
def calcPadding( self, dataLen ):
"""
Based on `dataLen', determine and return a burst's padding.
ScrambleSuit morphs the last packet in a burst, i.e., packets which
don't fill the link's MTU. This is done by drawing a random sample
from our probability distribution which is used to determine and return
the padding for such packets. This effectively gets rid of Tor's
586-byte signature.
"""
# The `is' and `should-be' length of the burst's last packet.
dataLen = dataLen % const.MTU
sampleLen = self.dist.randomSample()
# Now determine the padding length which is in {0..MTU-1}.
if sampleLen >= dataLen:
padLen = sampleLen - dataLen
else:
padLen = (const.MTU - dataLen) + sampleLen
if padLen < const.HDR_LENGTH:
padLen += const.MTU
log.debug("Morphing the last %d-byte packet to %d bytes by adding %d "
"bytes of padding." %
(dataLen % const.MTU, sampleLen, padLen))
return padLen
# Alias class name in order to provide a more intuitive API.
new = PacketMorpher
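
# Editorial sketch, not part of the original module: exercise calcPadding()
# with a fixed distribution that always samples 1000 bytes, so the arithmetic
# above is easy to follow. The expected values assume ScrambleSuit's usual
# const.MTU of 1448 and const.HDR_LENGTH of 21.
if __name__ == "__main__":
    morpher = PacketMorpher(dist=probdist.new(lambda: 1000))
    # A 400-byte trailing packet is padded up to the 1000-byte sample.
    print "[+] calcPadding(400) = %d (expecting 600)." % \
          morpher.calcPadding(400)
    # A 1200-byte trailing packet exceeds the sample, so padding reaches
    # into the next MTU-sized packet: (1448 - 1200) + 1000 = 1248.
    print "[+] calcPadding(1200) = %d (expecting 1248)." % \
          morpher.calcPadding(1200)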
obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/probdist.py 0000664 0000000 0000000 00000005344 12570034732 0025010 0 ustar 00root root 0000000 0000000 """
This module provides code to generate and sample probability distributions.
The class RandProbDist provides an interface to randomly generate probability
distributions. Random samples can then be drawn from these distributions.
"""
import random
import const
import obfsproxy.common.log as logging
log = logging.get_obfslogger()
class RandProbDist:
"""
Provides code to generate, sample and dump probability distributions.
"""
def __init__( self, genSingleton, seed=None ):
"""
Initialise a discrete probability distribution.
The parameter `genSingleton' is expected to be a function which yields
singletons for the probability distribution. The optional `seed' can
be used to seed the PRNG so that the probability distribution is
generated deterministically.
"""
self.prng = random if (seed is None) else random.Random(seed)
self.sampleList = []
self.dist = self.genDistribution(genSingleton)
self.dumpDistribution()
def genDistribution( self, genSingleton ):
"""
Generate a discrete probability distribution.
The parameter `genSingleton' is a function which is used to generate
singletons for the probability distribution.
"""
dist = {}
        # Number of distinct bins, i.e., packet lengths or inter-arrival times.
bins = self.prng.randint(const.MIN_BINS, const.MAX_BINS)
# Cumulative probability of all bins.
cumulProb = 0
for _ in xrange(bins):
prob = self.prng.uniform(0, (1 - cumulProb))
cumulProb += prob
singleton = genSingleton()
dist[singleton] = prob
self.sampleList.append((cumulProb, singleton,))
        # Assign the remaining probability mass to one final singleton.
        dist[genSingleton()] = (1 - cumulProb)
return dist
def dumpDistribution( self ):
"""
Dump the probability distribution using the logging object.
Only probabilities > 0.01 are dumped.
"""
log.debug("Dumping probability distribution.")
for singleton in self.dist.iterkeys():
# We are not interested in tiny probabilities.
if self.dist[singleton] > 0.01:
log.debug("P(%s) = %.3f" %
(str(singleton), self.dist[singleton]))
def randomSample( self ):
"""
Draw and return a random sample from the probability distribution.
"""
assert len(self.sampleList) > 0
rand = random.random()
for cumulProb, singleton in self.sampleList:
if rand <= cumulProb:
return singleton
return self.sampleList[-1][1]
# Alias class name in order to provide a more intuitive API.
new = RandProbDist
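
# Editorial sketch, not part of the original module: two distributions built
# from the same seed are identical, which is how a ScrambleSuit client
# reproduces the server's packet length distribution from the PRNG seed.
if __name__ == "__main__":
    seed = "demo seed"
    prngA, prngB = random.Random(seed), random.Random(seed)
    # 1448 is just an MTU-like upper bound for the demo singletons.
    distA = RandProbDist(lambda: prngA.randint(0, 1448), seed=seed)
    distB = RandProbDist(lambda: prngB.randint(0, 1448), seed=seed)
    assert distA.dist == distB.dist
    print "[+] Seeded distributions are reproducible; sample: %d." % \
          distA.randomSample()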
obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/replay.py 0000664 0000000 0000000 00000004645 12570034732 0024461 0 ustar 00root root 0000000 0000000 """
This module implements a mechanism to protect against replay attacks.
The replay protection mechanism is based on a dictionary which caches
previously observed keys. New keys can be added to the dictionary and existing
ones can be queried. A pruning mechanism deletes expired keys from the
dictionary.
"""
import time
import const
import obfsproxy.common.log as logging
log = logging.get_obfslogger()
class Tracker( object ):
"""
Implement methods to keep track of replayed keys.
This class provides methods to add new keys (elements), check whether keys
are already present in the dictionary and to prune the lookup table.
"""
def __init__( self ):
"""
Initialise a `Tracker' object.
"""
self.table = dict()
def addElement( self, element ):
"""
Add the given `element' to the lookup table.
"""
if self.isPresent(element):
raise LookupError("Element already present in table.")
# The key is a HMAC and the value is the current Unix timestamp.
self.table[element] = int(time.time())
def isPresent( self, element ):
"""
Check if the given `element' is already present in the lookup table.
Return `True' if `element' is already in the lookup table and `False'
otherwise.
"""
log.debug("Looking for existing element in size-%d lookup table." %
len(self.table))
# Prune the replay table before looking up the given `element'. This
# could be done more efficiently, e.g. by pruning every n minutes and
# only checking the timestamp of this particular element.
self.prune()
return (element in self.table)
def prune( self ):
"""
Delete expired elements from the lookup table.
        Keys whose Unix timestamps are older than `const.EPOCH_GRANULARITY'
        seconds are removed from the lookup table.
"""
log.debug("Pruning the replay table.")
deleteList = []
now = int(time.time())
for element in self.table.iterkeys():
if (now - self.table[element]) > const.EPOCH_GRANULARITY:
deleteList.append(element)
# We can't delete from a dictionary while iterating over it; therefore
# this construct.
for elem in deleteList:
log.debug("Deleting expired element.")
del self.table[elem]
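
# Editorial sketch, not part of the original module: typical Tracker usage.
# Adding the same key twice raises LookupError, which is how replayed
# handshake HMACs are caught.
if __name__ == "__main__":
    tracker = Tracker()
    tracker.addElement("fake 16-byte HMAC")
    assert tracker.isPresent("fake 16-byte HMAC")
    try:
        tracker.addElement("fake 16-byte HMAC")
    except LookupError:
        print "[+] Replayed element was rejected as expected."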
obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/scramblesuit.py 0000664 0000000 0000000 00000065174 12570034732 0025666 0 ustar 00root root 0000000 0000000 """
The scramblesuit module implements the ScrambleSuit obfuscation protocol.
The paper discussing the design and evaluation of the ScrambleSuit pluggable
transport protocol is available here:
http://www.cs.kau.se/philwint/scramblesuit/
"""
from twisted.internet import reactor
import obfsproxy.transports.base as base
import obfsproxy.common.log as logging
import random
import base64
import yaml
import argparse
import probdist
import mycrypto
import message
import const
import util
import packetmorpher
import ticket
import uniformdh
import state
import fifobuf
log = logging.get_obfslogger()
class ReadPassFile(argparse.Action):
def __call__(self, parser, namespace, values, option_string=None):
with open(values) as f:
setattr(namespace, self.dest, f.readline().strip())
class ScrambleSuitTransport( base.BaseTransport ):
"""
Implement the ScrambleSuit protocol.
    This class implements the ScrambleSuit protocol. A large part of the
    protocol's functionality is outsourced to separate modules.
"""
def __init__( self ):
"""
Initialise a ScrambleSuitTransport object.
"""
log.debug("Initialising %s." % const.TRANSPORT_NAME)
super(ScrambleSuitTransport, self).__init__()
# Load the server's persistent state from file.
if self.weAreServer:
self.srvState = state.load()
# Initialise the protocol's state machine.
log.debug("Switching to state ST_WAIT_FOR_AUTH.")
self.protoState = const.ST_WAIT_FOR_AUTH
# Buffer for outgoing data.
self.sendBuf = ""
# Buffer for inter-arrival time obfuscation.
self.choppingBuf = fifobuf.Buffer()
# AES instances to decrypt incoming and encrypt outgoing data.
self.sendCrypter = mycrypto.PayloadCrypter()
self.recvCrypter = mycrypto.PayloadCrypter()
# Packet morpher to modify the protocol's packet length distribution.
self.pktMorpher = packetmorpher.new(self.srvState.pktDist
if self.weAreServer else None)
# Inter-arrival time morpher to obfuscate inter arrival times.
self.iatMorpher = self.srvState.iatDist if self.weAreServer else \
probdist.new(lambda: random.random() %
const.MAX_PACKET_DELAY)
# Used to extract protocol messages from encrypted data.
self.protoMsg = message.MessageExtractor()
# Used by the server-side: `True' if the ticket is already
# decrypted but not yet authenticated.
self.decryptedTicket = False
# If we are in external mode we should already have a shared
# secret set up because of validate_external_mode_cli().
if self.weAreExternal:
assert(self.uniformDHSecret)
if self.weAreClient and not self.weAreExternal:
# As a client in managed mode, we get the shared secret
# from callback `handle_socks_args()' per-connection. Set
# the shared secret to None for now.
self.uniformDHSecret = None
self.uniformdh = uniformdh.new(self.uniformDHSecret, self.weAreServer)
@classmethod
def setup( cls, transportConfig ):
"""
Called once when obfsproxy starts.
"""
log.error("\n\n################################################\n"
"Do NOT rely on ScrambleSuit for strong security!\n"
"################################################\n")
util.setStateLocation(transportConfig.getStateLocation())
cls.weAreClient = transportConfig.weAreClient
cls.weAreServer = not cls.weAreClient
cls.weAreExternal = transportConfig.weAreExternal
# If we are server and in managed mode, we should get the
# shared secret from the server transport options.
if cls.weAreServer and not cls.weAreExternal:
cfg = transportConfig.getServerTransportOptions()
if cfg and "password" in cfg:
try:
cls.uniformDHSecret = base64.b32decode(util.sanitiseBase32(
cfg["password"]))
except (TypeError, AttributeError) as error:
raise base.TransportSetupFailed(
"Password could not be base32 decoded (%s)" % error)
cls.uniformDHSecret = cls.uniformDHSecret.strip()
if cls.weAreServer:
if not hasattr(cls, "uniformDHSecret"):
log.debug("Using fallback password for descriptor file.")
srv = state.load()
cls.uniformDHSecret = srv.fallbackPassword
            if len(cls.uniformDHSecret) != const.SHARED_SECRET_LENGTH:
                raise base.TransportSetupFailed(
                    "Wrong password length (%d instead of %d)"
                    % (len(cls.uniformDHSecret), const.SHARED_SECRET_LENGTH))
if not const.STATE_LOCATION:
raise base.TransportSetupFailed(
"No state location set. If you are using external mode, " \
"please set it using the --data-dir switch.")
state.writeServerPassword(cls.uniformDHSecret)
@classmethod
def get_public_server_options( cls, transportOptions ):
"""
Return ScrambleSuit's BridgeDB parameters, i.e., the shared secret.
As a fallback mechanism, we return an automatically generated password
if the bridge operator did not use `ServerTransportOptions'.
"""
log.debug("Tor's transport options: %s" % str(transportOptions))
if not "password" in transportOptions:
log.warning("No password found in transport options (use Tor's " \
"`ServerTransportOptions' to set your own password)." \
" Using automatically generated password instead.")
srv = state.load()
transportOptions = {"password":
base64.b32encode(srv.fallbackPassword)}
cls.uniformDHSecret = srv.fallbackPassword
return transportOptions
def deriveSecrets( self, masterKey ):
"""
Derive various session keys from the given `masterKey'.
The argument `masterKey' is used to derive two session keys and nonces
for AES-CTR and two HMAC keys. The derivation is done using
HKDF-SHA256.
"""
assert len(masterKey) == const.MASTER_KEY_LENGTH
log.debug("Deriving session keys from %d-byte master key." %
len(masterKey))
# We need key material for two symmetric AES-CTR keys, nonces and
# HMACs. In total, this equals 144 bytes of key material.
hkdf = mycrypto.HKDF_SHA256(masterKey, "", (32 * 4) + (8 * 2))
okm = hkdf.expand()
assert len(okm) >= ((32 * 4) + (8 * 2))
# Set AES-CTR keys and nonces for our two AES instances.
self.sendCrypter.setSessionKey(okm[0:32], okm[32:40])
self.recvCrypter.setSessionKey(okm[40:72], okm[72:80])
# Set the keys for the two HMACs protecting our data integrity.
self.sendHMAC = okm[80:112]
self.recvHMAC = okm[112:144]
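        # Summary of the 144-byte OKM layout consumed above, from the
        # client's perspective (the server swaps directions below):
        #   okm[  0: 32] outgoing AES-CTR key   okm[ 32: 40] outgoing nonce
        #   okm[ 40: 72] incoming AES-CTR key   okm[ 72: 80] incoming nonce
        #   okm[ 80:112] outgoing HMAC key      okm[112:144] incoming HMAC key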
if self.weAreServer:
self.sendHMAC, self.recvHMAC = self.recvHMAC, self.sendHMAC
self.sendCrypter, self.recvCrypter = self.recvCrypter, \
self.sendCrypter
def circuitConnected( self ):
"""
Initiate a ScrambleSuit handshake.
This method is only relevant for clients since servers never initiate
handshakes. If a session ticket is available, it is redeemed.
Otherwise, a UniformDH handshake is conducted.
"""
# The server handles the handshake passively.
if self.weAreServer:
return
# The preferred authentication mechanism is a session ticket.
bridge = self.circuit.downstream.transport.getPeer()
storedTicket = ticket.findStoredTicket(bridge)
if storedTicket is not None:
log.debug("Redeeming stored session ticket.")
(masterKey, rawTicket) = storedTicket
self.deriveSecrets(masterKey)
self.circuit.downstream.write(ticket.createTicketMessage(rawTicket,
self.sendHMAC))
# We switch to ST_CONNECTED opportunistically since we don't know
# yet whether the server accepted the ticket.
log.debug("Switching to state ST_CONNECTED.")
self.protoState = const.ST_CONNECTED
self.flushSendBuffer()
# Conduct an authenticated UniformDH handshake if there's no ticket.
else:
if self.uniformDHSecret is None:
log.warning("A UniformDH password is not set, most likely " \
"a missing 'password' argument.")
self.circuit.close()
return
log.debug("No session ticket to redeem. Running UniformDH.")
self.circuit.downstream.write(self.uniformdh.createHandshake())
def sendRemote( self, data, flags=const.FLAG_PAYLOAD ):
"""
Send data to the remote end after a connection was established.
The given `data' is first encapsulated in protocol messages. Then, the
protocol message(s) are sent over the wire. The argument `flags'
specifies the protocol message flags with the default flags signalling
payload.
"""
log.debug("Processing %d bytes of outgoing data." % len(data))
# Wrap the application's data in ScrambleSuit protocol messages.
messages = message.createProtocolMessages(data, flags=flags)
blurb = "".join([msg.encryptAndHMAC(self.sendCrypter,
self.sendHMAC) for msg in messages])
# Flush data chunk for chunk to obfuscate inter-arrival times.
if const.USE_IAT_OBFUSCATION:
if len(self.choppingBuf) == 0:
self.choppingBuf.write(blurb)
reactor.callLater(self.iatMorpher.randomSample(),
self.flushPieces)
else:
# flushPieces() is still busy processing the chopping buffer.
self.choppingBuf.write(blurb)
else:
padBlurb = self.pktMorpher.getPadding(self.sendCrypter,
self.sendHMAC,
len(blurb))
self.circuit.downstream.write(blurb + padBlurb)
def flushPieces( self ):
"""
Write the application data in chunks to the wire.
The cached data is sent over the wire in chunks. After every write
call, control is given back to the Twisted reactor so it has a chance
to flush the data. Shortly thereafter, this function is called again
to write the next chunk of data. The delays in between subsequent
write calls are controlled by the inter-arrival time obfuscator.
"""
# Drain and send an MTU-sized chunk from the chopping buffer.
if len(self.choppingBuf) > const.MTU:
self.circuit.downstream.write(self.choppingBuf.read(const.MTU))
# Drain and send whatever is left in the output buffer.
else:
blurb = self.choppingBuf.read()
padBlurb = self.pktMorpher.getPadding(self.sendCrypter,
self.sendHMAC,
len(blurb))
self.circuit.downstream.write(blurb + padBlurb)
return
reactor.callLater(self.iatMorpher.randomSample(), self.flushPieces)
def processMessages( self, data ):
"""
Acts on extracted protocol messages based on header flags.
After the incoming `data' is decrypted and authenticated, this method
processes the received data based on the header flags. Payload is
written to the local application, new tickets are stored, or keys are
added to the replay table.
"""
if (data is None) or (len(data) == 0):
return
# Try to extract protocol messages from the encrypted blurb.
msgs = self.protoMsg.extract(data, self.recvCrypter, self.recvHMAC)
if (msgs is None) or (len(msgs) == 0):
return
for msg in msgs:
# Forward data to the application.
if msg.flags == const.FLAG_PAYLOAD:
self.circuit.upstream.write(msg.payload)
# Store newly received ticket.
elif self.weAreClient and (msg.flags == const.FLAG_NEW_TICKET):
assert len(msg.payload) == (const.TICKET_LENGTH +
const.MASTER_KEY_LENGTH)
peer = self.circuit.downstream.transport.getPeer()
ticket.storeNewTicket(msg.payload[0:const.MASTER_KEY_LENGTH],
msg.payload[const.MASTER_KEY_LENGTH:
const.MASTER_KEY_LENGTH +
const.TICKET_LENGTH], peer)
# Use the PRNG seed to generate the same probability distributions
# as the server. That's where the polymorphism comes from.
elif self.weAreClient and (msg.flags == const.FLAG_PRNG_SEED):
assert len(msg.payload) == const.PRNG_SEED_LENGTH
log.debug("Obtained PRNG seed.")
prng = random.Random(msg.payload)
pktDist = probdist.new(lambda: prng.randint(const.HDR_LENGTH,
const.MTU),
seed=msg.payload)
self.pktMorpher = packetmorpher.new(pktDist)
self.iatMorpher = probdist.new(lambda: prng.random() %
const.MAX_PACKET_DELAY,
seed=msg.payload)
else:
log.warning("Invalid message flags: %d." % msg.flags)
def flushSendBuffer( self ):
"""
Flush the application's queued data.
The application could have sent data while we were busy authenticating
the remote machine. This method flushes the data which could have been
queued in the meanwhile in `self.sendBuf'.
"""
if len(self.sendBuf) == 0:
log.debug("Send buffer is empty; nothing to flush.")
return
# Flush the buffered data, the application is so eager to send.
log.debug("Flushing %d bytes of buffered application data." %
len(self.sendBuf))
self.sendRemote(self.sendBuf)
self.sendBuf = ""
def receiveTicket( self, data ):
"""
Extract and verify a potential session ticket.
The given `data' is treated as a session ticket. The ticket is being
decrypted and authenticated (yes, in that order). If all these steps
succeed, `True' is returned. Otherwise, `False' is returned.
"""
if len(data) < (const.TICKET_LENGTH + const.MARK_LENGTH +
const.HMAC_SHA256_128_LENGTH):
return False
potentialTicket = data.peek()
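        # Layout of the ticket authentication message parsed below:
        #   112-byte ticket | random padding | 16-byte mark | 16-byte HMAC
        # The mark, an HMAC over the ticket, lets us locate the trailing
        # HMAC without knowing the padding length.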
# Now try to decrypt and parse the ticket. We need the master key
# inside to verify the HMAC in the next step.
if not self.decryptedTicket:
newTicket = ticket.decrypt(potentialTicket[:const.TICKET_LENGTH],
self.srvState)
            if newTicket is not None and newTicket.isValid():
self.deriveSecrets(newTicket.masterKey)
self.decryptedTicket = True
else:
return False
# First, find the mark to efficiently locate the HMAC.
mark = mycrypto.HMAC_SHA256_128(self.recvHMAC,
potentialTicket[:const.TICKET_LENGTH])
index = util.locateMark(mark, potentialTicket)
        if index is None:
return False
# Now, verify if the HMAC is valid.
existingHMAC = potentialTicket[index + const.MARK_LENGTH:
index + const.MARK_LENGTH +
const.HMAC_SHA256_128_LENGTH]
authenticated = False
for epoch in util.expandedEpoch():
myHMAC = mycrypto.HMAC_SHA256_128(self.recvHMAC,
potentialTicket[0:index + \
const.MARK_LENGTH] + epoch)
if util.isValidHMAC(myHMAC, existingHMAC, self.recvHMAC):
authenticated = True
break
log.debug("HMAC invalid. Trying next epoch value.")
if not authenticated:
log.warning("Could not verify the authentication message's HMAC.")
return False
# Do nothing if the ticket is replayed. Immediately closing the
# connection would be suspicious.
if self.srvState.isReplayed(existingHMAC):
log.warning("The HMAC was already present in the replay table.")
return False
data.drain(index + const.MARK_LENGTH + const.HMAC_SHA256_128_LENGTH)
log.debug("Adding the HMAC authenticating the ticket message to the " \
"replay table: %s." % existingHMAC.encode('hex'))
self.srvState.registerKey(existingHMAC)
log.debug("Switching to state ST_CONNECTED.")
self.protoState = const.ST_CONNECTED
return True
def receivedUpstream( self, data ):
"""
Sends data to the remote machine or queues it to be sent later.
Depending on the current protocol state, the given `data' is either
        directly sent to the remote machine or queued. The buffer is then
        flushed once a connection is established.
"""
if self.protoState == const.ST_CONNECTED:
self.sendRemote(data.read())
# Buffer data we are not ready to transmit yet.
else:
self.sendBuf += data.read()
log.debug("Buffered %d bytes of outgoing data." %
len(self.sendBuf))
def sendTicketAndSeed( self ):
"""
Send a session ticket and the PRNG seed to the client.
This method is only called by the server after successful
authentication. Finally, the server's send buffer is flushed.
"""
log.debug("Sending a new session ticket and the PRNG seed to the " \
"client.")
self.sendRemote(ticket.issueTicketAndKey(self.srvState),
flags=const.FLAG_NEW_TICKET)
self.sendRemote(self.srvState.prngSeed,
flags=const.FLAG_PRNG_SEED)
self.flushSendBuffer()
def receivedDownstream( self, data ):
"""
Receives and processes data coming from the remote machine.
The incoming `data' is dispatched depending on the current protocol
state and whether we are the client or the server. The data is either
payload or authentication data.
"""
if self.weAreServer and (self.protoState == const.ST_AUTH_FAILED):
self.drainedHandshake += len(data)
data.drain(len(data))
if self.drainedHandshake > self.srvState.closingThreshold:
log.info("Terminating connection after having received >= %d"
" bytes because client could not "
"authenticate." % self.srvState.closingThreshold)
self.circuit.close()
return
elif self.weAreServer and (self.protoState == const.ST_WAIT_FOR_AUTH):
# First, try to interpret the incoming data as session ticket.
if self.receiveTicket(data):
log.debug("Ticket authentication succeeded.")
self.sendTicketAndSeed()
# Second, interpret the data as a UniformDH handshake.
elif self.uniformdh.receivePublicKey(data, self.deriveSecrets,
self.srvState):
# Now send the server's UniformDH public key to the client.
handshakeMsg = self.uniformdh.createHandshake(srvState=
self.srvState)
log.debug("Sending %d bytes of UniformDH handshake and "
"session ticket." % len(handshakeMsg))
self.circuit.downstream.write(handshakeMsg)
log.debug("UniformDH authentication succeeded.")
log.debug("Switching to state ST_CONNECTED.")
self.protoState = const.ST_CONNECTED
self.sendTicketAndSeed()
elif len(data) > const.MAX_HANDSHAKE_LENGTH:
self.protoState = const.ST_AUTH_FAILED
self.drainedHandshake = len(data)
data.drain(self.drainedHandshake)
log.info("No successful authentication after having " \
"received >= %d bytes. Now ignoring client." % \
const.MAX_HANDSHAKE_LENGTH)
return
else:
log.debug("Authentication unsuccessful so far. "
"Waiting for more data.")
return
elif self.weAreClient and (self.protoState == const.ST_WAIT_FOR_AUTH):
if not self.uniformdh.receivePublicKey(data, self.deriveSecrets):
log.debug("Unable to finish UniformDH handshake just yet.")
return
log.debug("UniformDH authentication succeeded.")
log.debug("Switching to state ST_CONNECTED.")
self.protoState = const.ST_CONNECTED
self.flushSendBuffer()
if self.protoState == const.ST_CONNECTED:
self.processMessages(data.read())
@classmethod
def register_external_mode_cli( cls, subparser ):
"""
        Register CLI arguments for passing a UniformDH secret to ScrambleSuit.
        Two mutually exclusive options are made available over the command
        line interface: one to specify the shared secret directly and one to
        read it from a file.
"""
passArgs = subparser.add_mutually_exclusive_group(required=True)
passArgs.add_argument("--password",
type=str,
help="Shared secret for UniformDH",
dest="uniformDHSecret")
passArgs.add_argument("--password-file",
type=str,
help="File containing shared secret for UniformDH",
action=ReadPassFile,
dest="uniformDHSecret")
super(ScrambleSuitTransport, cls).register_external_mode_cli(subparser)
@classmethod
def validate_external_mode_cli( cls, args ):
"""
Assign the given command line arguments to local variables.
"""
uniformDHSecret = None
try:
uniformDHSecret = base64.b32decode(util.sanitiseBase32(
args.uniformDHSecret))
except (TypeError, AttributeError) as error:
log.error(error.message)
raise base.PluggableTransportError("Given password '%s' is not " \
"valid Base32! Run 'generate_password.py' to generate " \
"a good password." % args.uniformDHSecret)
parentalApproval = super(
ScrambleSuitTransport, cls).validate_external_mode_cli(args)
if not parentalApproval:
# XXX not very descriptive nor helpful, but the parent class only
# returns a boolean without telling us what's wrong.
raise base.PluggableTransportError(
"Pluggable Transport args invalid: %s" % args )
if uniformDHSecret:
rawLength = len(uniformDHSecret)
if rawLength != const.SHARED_SECRET_LENGTH:
                raise base.PluggableTransportError(
                    "The UniformDH password must be %d bytes in length, "
                    "but %d bytes are given."
                    % (const.SHARED_SECRET_LENGTH, rawLength))
else:
cls.uniformDHSecret = uniformDHSecret
def handle_socks_args( self, args ):
"""
Receive arguments `args' passed over a SOCKS connection.
The SOCKS authentication mechanism is (ab)used to pass arguments to
pluggable transports. This method receives these arguments and parses
them. As argument, we only expect a UniformDH shared secret.
"""
log.debug("Received the following arguments over SOCKS: %s." % args)
if len(args) != 1:
raise base.SOCKSArgsError("Too many SOCKS arguments "
"(expected 1 but got %d)." % len(args))
# The ScrambleSuit specification defines that the shared secret is
# called "password".
if not args[0].startswith("password="):
raise base.SOCKSArgsError("The SOCKS argument must start with "
"`password='.")
# A shared secret might already be set if obfsproxy is in external
# mode.
if self.uniformDHSecret:
log.warning("A UniformDH password was already specified over "
"the command line. Using the SOCKS secret instead.")
try:
self.uniformDHSecret = base64.b32decode(util.sanitiseBase32(
args[0].split('=')[1].strip()))
except TypeError as error:
log.error(error.message)
raise base.PluggableTransportError("Given password '%s' is not " \
"valid Base32! Run 'generate_password.py' to generate " \
"a good password." % args[0].split('=')[1].strip())
rawLength = len(self.uniformDHSecret)
if rawLength != const.SHARED_SECRET_LENGTH:
raise base.PluggableTransportError("The UniformDH password "
"must be %d bytes in length but %d bytes are given." %
(const.SHARED_SECRET_LENGTH, rawLength))
self.uniformdh = uniformdh.new(self.uniformDHSecret, self.weAreServer)
class ScrambleSuitClient( ScrambleSuitTransport ):
"""
Extend the ScrambleSuit class.
"""
def __init__( self ):
"""
Initialise a ScrambleSuitClient object.
"""
ScrambleSuitTransport.__init__(self)
class ScrambleSuitServer( ScrambleSuitTransport ):
"""
Extend the ScrambleSuit class.
"""
def __init__( self ):
"""
Initialise a ScrambleSuitServer object.
"""
ScrambleSuitTransport.__init__(self)
obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/state.py 0000664 0000000 0000000 00000014120 12570034732 0024272 0 ustar 00root root 0000000 0000000 """
Provide a way to store the server's state information on disk.
The server possesses state information which should persist across runs. This
includes key material to encrypt and authenticate session tickets, replay
tables and PRNG seeds. This module provides methods to load, store and
generate such state information.
"""
import os
import sys
import time
import cPickle
import random
import const
import replay
import mycrypto
import probdist
import base64
import obfsproxy.common.log as logging
log = logging.get_obfslogger()
def load( ):
"""
Load the server's state object from file.
The server's state file is loaded and the state object returned. If no
state file is found, a new one is created and returned.
"""
stateFile = os.path.join(const.STATE_LOCATION, const.SERVER_STATE_FILE)
log.info("Attempting to load the server's state file from `%s'." %
stateFile)
if not os.path.exists(stateFile):
log.info("The server's state file does not exist (yet).")
state = State()
state.genState()
return state
try:
with open(stateFile, 'r') as fd:
stateObject = cPickle.load(fd)
except IOError as err:
log.error("Error reading server state file from `%s': %s" %
(stateFile, err))
sys.exit(1)
return stateObject
def writeServerPassword( password ):
"""
Dump our ScrambleSuit server descriptor to file.
The file should make it easy for bridge operators to obtain copy &
pasteable server descriptors.
"""
assert len(password) == const.SHARED_SECRET_LENGTH
assert const.STATE_LOCATION != ""
passwordFile = os.path.join(const.STATE_LOCATION, const.PASSWORD_FILE)
log.info("Writing server password to file `%s'." % passwordFile)
password_str = "# You are supposed to give this password to your clients to append it to their Bridge line"
password_str = "# For example: Bridge scramblesuit 192.0.2.1:5555 EXAMPLEFINGERPRINTNOTREAL password=EXAMPLEPASSWORDNOTREAL"
password_str = "# Here is your password:"
password_str = "password=%s\n" % base64.b32encode(password)
try:
with open(passwordFile, 'w') as fd:
fd.write(password_str)
except IOError as err:
log.error("Error writing password file to `%s': %s" %
(passwordFile, err))
class State( object ):
"""
Implement a state class which stores the server's state.
This class makes it possible to store state information on disk. It
provides methods to generate and write state information.
"""
def __init__( self ):
"""
Initialise a `State' object.
"""
self.prngSeed = None
self.keyCreation = None
self.hmacKey = None
self.aesKey = None
self.oldHmacKey = None
self.oldAesKey = None
        # Replay tracker shared by both authentication mechanisms.
        self.replayTracker = None
self.pktDist = None
self.iatDist = None
self.fallbackPassword = None
self.closingThreshold = None
def genState( self ):
"""
Populate all the local variables with values.
"""
log.info("Generating parameters for the server's state file.")
# PRNG seed for the client to reproduce the packet and IAT morpher.
self.prngSeed = mycrypto.strongRandom(const.PRNG_SEED_LENGTH)
# HMAC and AES key used to encrypt and authenticate tickets.
self.hmacKey = mycrypto.strongRandom(const.TICKET_HMAC_KEY_LENGTH)
self.aesKey = mycrypto.strongRandom(const.TICKET_AES_KEY_LENGTH)
self.keyCreation = int(time.time())
# The previous HMAC and AES keys.
self.oldHmacKey = None
self.oldAesKey = None
# Replay dictionary for both authentication mechanisms.
self.replayTracker = replay.Tracker()
# Distributions for packet lengths and inter arrival times.
prng = random.Random(self.prngSeed)
self.pktDist = probdist.new(lambda: prng.randint(const.HDR_LENGTH,
const.MTU),
seed=self.prngSeed)
self.iatDist = probdist.new(lambda: prng.random() %
const.MAX_PACKET_DELAY,
seed=self.prngSeed)
# Fallback UniformDH shared secret. Only used if the bridge operator
# did not set `ServerTransportOptions'.
self.fallbackPassword = os.urandom(const.SHARED_SECRET_LENGTH)
# Unauthenticated connections are closed after having received the
# following amount of bytes.
self.closingThreshold = prng.randint(const.MAX_HANDSHAKE_LENGTH,
const.MAX_HANDSHAKE_LENGTH * 5)
self.writeState()
def isReplayed( self, hmac ):
"""
Check if `hmac' is present in the replay table.
Return `True' if the given `hmac' is present in the replay table and
`False' otherwise.
"""
assert self.replayTracker is not None
log.debug("Querying if HMAC is present in the replay table.")
return self.replayTracker.isPresent(hmac)
def registerKey( self, hmac ):
"""
Add the given `hmac' to the replay table.
"""
assert self.replayTracker is not None
log.debug("Adding a new HMAC to the replay table.")
self.replayTracker.addElement(hmac)
# We must write the data to disk immediately so that other ScrambleSuit
# connections can share the same state.
self.writeState()
def writeState( self ):
"""
Write the state object to a file using the `cPickle' module.
"""
stateFile = os.path.join(const.STATE_LOCATION, const.SERVER_STATE_FILE)
log.debug("Writing server's state file to `%s'." %
stateFile)
try:
with open(stateFile, 'w') as fd:
cPickle.dump(self, fd)
except IOError as err:
log.error("Error writing state file to `%s': %s" %
(stateFile, err))
sys.exit(1)
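
# Editorial sketch, not part of the original module: create or load the
# persistent state and print some of the generated parameters. Note that
# load() writes a fresh state file to const.STATE_LOCATION if none exists
# yet, so only run this where that side effect is acceptable.
if __name__ == "__main__":
    srvState = load()
    print "[+] PRNG seed: %d bytes." % len(srvState.prngSeed)
    print "[+] Unauthenticated connections are closed after %d bytes." % \
          srvState.closingThreshold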
obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/ticket.py 0000664 0000000 0000000 00000032410 12570034732 0024437 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
"""
This module provides a session ticket mechanism.
The implemented mechanism is a subset of session tickets as proposed for
TLS in RFC 5077.
The format of a 112-byte ticket is:
+------------+------------------+--------------+
| 16-byte IV | 64-byte E(state) | 32-byte HMAC |
+------------+------------------+--------------+
The 64-byte encrypted state contains:
+-------------------+--------------------+--------------------+-------------+
| 4-byte issue date | 18-byte identifier | 32-byte master key | 10-byte pad |
+-------------------+--------------------+--------------------+-------------+
"""
import os
import time
import const
import yaml
import struct
import random
import datetime
from Crypto.Cipher import AES
from Crypto.Hash import HMAC
from Crypto.Hash import SHA256
from twisted.internet.address import IPv4Address
import obfsproxy.common.log as logging
import mycrypto
import util
import state
log = logging.get_obfslogger()
def createTicketMessage( rawTicket, HMACKey ):
"""
Create and return a ready-to-be-sent ticket authentication message.
Pseudo-random padding and a mark are added to `rawTicket' and the result is
then authenticated using `HMACKey' as key for a HMAC. The resulting
authentication message is then returned.
"""
assert len(rawTicket) == const.TICKET_LENGTH
assert len(HMACKey) == const.TICKET_HMAC_KEY_LENGTH
# Subtract the length of the ticket to make the handshake on
# average as long as a UniformDH handshake message.
padding = mycrypto.strongRandom(random.randint(0,
const.MAX_PADDING_LENGTH -
const.TICKET_LENGTH))
mark = mycrypto.HMAC_SHA256_128(HMACKey, rawTicket)
hmac = mycrypto.HMAC_SHA256_128(HMACKey, rawTicket + padding +
mark + util.getEpoch())
return rawTicket + padding + mark + hmac
def issueTicketAndKey( srvState ):
"""
Issue a new session ticket and append it to the according master key.
The parameter `srvState' contains the key material and is passed on to
`SessionTicket'. The returned ticket and key are ready to be wrapped into
a protocol message with the flag FLAG_NEW_TICKET set.
"""
log.info("Issuing new session ticket and master key.")
masterKey = mycrypto.strongRandom(const.MASTER_KEY_LENGTH)
newTicket = (SessionTicket(masterKey, srvState)).issue()
return masterKey + newTicket
def storeNewTicket( masterKey, ticket, bridge ):
"""
Store a new session ticket and the according master key for future use.
This method is only called by clients. The given data, `masterKey',
`ticket' and `bridge', is YAMLed and stored in the global ticket
dictionary. If there already is a ticket for the given `bridge', it is
overwritten.
"""
assert len(masterKey) == const.MASTER_KEY_LENGTH
assert len(ticket) == const.TICKET_LENGTH
ticketFile = const.STATE_LOCATION + const.CLIENT_TICKET_FILE
log.debug("Storing newly received ticket in `%s'." % ticketFile)
# Add a new (key, ticket) tuple with the given bridge as hash key.
tickets = dict()
content = util.readFromFile(ticketFile)
if (content is not None) and (len(content) > 0):
tickets = yaml.safe_load(content)
# We also store a timestamp so we later know if our ticket already expired.
tickets[str(bridge)] = [int(time.time()), masterKey, ticket]
util.writeToFile(yaml.dump(tickets), ticketFile)
def findStoredTicket( bridge ):
"""
Retrieve a previously stored ticket from the ticket dictionary.
The global ticket dictionary is loaded and the given `bridge' is used to
look up the ticket and the master key. If the ticket dictionary does not
exist (yet) or the ticket data could not be found, `None' is returned.
"""
assert bridge
ticketFile = const.STATE_LOCATION + const.CLIENT_TICKET_FILE
log.debug("Attempting to read master key and ticket from file `%s'." %
ticketFile)
# Load the ticket hash table from file.
yamlBlurb = util.readFromFile(ticketFile)
if (yamlBlurb is None) or (len(yamlBlurb) == 0):
return None
tickets = yaml.safe_load(yamlBlurb)
try:
timestamp, masterKey, ticket = tickets[str(bridge)]
except KeyError:
log.info("Found no ticket for bridge `%s'." % str(bridge))
return None
# We can remove the ticket now since we are about to redeem it.
log.debug("Deleting ticket since it is about to be redeemed.")
del tickets[str(bridge)]
util.writeToFile(yaml.dump(tickets), ticketFile)
# If our ticket is expired, we can't redeem it.
ticketAge = int(time.time()) - timestamp
if ticketAge > const.SESSION_TICKET_LIFETIME:
log.warning("We did have a ticket but it already expired %s ago." %
str(datetime.timedelta(seconds=
(ticketAge - const.SESSION_TICKET_LIFETIME))))
return None
return (masterKey, ticket)
def checkKeys( srvState ):
"""
Check whether the key material for session tickets must be rotated.
The key material (i.e., AES and HMAC keys for session tickets) contained in
`srvState' is checked if it needs to be rotated. If so, the old keys are
stored and new ones are created.
"""
assert (srvState.hmacKey is not None) and \
(srvState.aesKey is not None) and \
(srvState.keyCreation is not None)
if (int(time.time()) - srvState.keyCreation) > const.KEY_ROTATION_TIME:
log.info("Rotating server key material for session tickets.")
# Save expired keys to be able to validate old tickets.
srvState.oldAesKey = srvState.aesKey
srvState.oldHmacKey = srvState.hmacKey
# Create new key material...
srvState.aesKey = mycrypto.strongRandom(const.TICKET_AES_KEY_LENGTH)
srvState.hmacKey = mycrypto.strongRandom(const.TICKET_HMAC_KEY_LENGTH)
srvState.keyCreation = int(time.time())
# ...and save it to disk.
srvState.writeState()
def decrypt( ticket, srvState ):
"""
Decrypts, verifies and returns the given `ticket'.
The key material used to verify the ticket is contained in `srvState'.
First, the HMAC over the ticket is verified. If it is valid, the ticket is
decrypted. Finally, a `ProtocolState()' object containing the master key
and the ticket's issue date is returned. If any of these steps fail,
`None' is returned.
"""
assert (ticket is not None) and (len(ticket) == const.TICKET_LENGTH)
assert (srvState.hmacKey is not None) and (srvState.aesKey is not None)
log.debug("Attempting to decrypt and verify ticket.")
checkKeys(srvState)
# Verify the ticket's authenticity before decrypting.
    # The first 80 bytes cover the 16-byte IV plus the 64-byte encrypted state.
    hmac = HMAC.new(srvState.hmacKey, ticket[0:80], digestmod=SHA256).digest()
if util.isValidHMAC(hmac, ticket[80:const.TICKET_LENGTH],
srvState.hmacKey):
aesKey = srvState.aesKey
else:
if srvState.oldHmacKey is None:
return None
# Was the HMAC created using the rotated key material?
oldHmac = HMAC.new(srvState.oldHmacKey, ticket[0:80],
digestmod=SHA256).digest()
if util.isValidHMAC(oldHmac, ticket[80:const.TICKET_LENGTH],
srvState.oldHmacKey):
aesKey = srvState.oldAesKey
else:
return None
# Decrypt the ticket to extract the state information.
aes = AES.new(aesKey, mode=AES.MODE_CBC,
IV=ticket[0:const.TICKET_AES_CBC_IV_LENGTH])
plainTicket = aes.decrypt(ticket[const.TICKET_AES_CBC_IV_LENGTH:80])
issueDate = struct.unpack('I', plainTicket[0:4])[0]
identifier = plainTicket[4:22]
masterKey = plainTicket[22:54]
if not (identifier == const.TICKET_IDENTIFIER):
log.error("The ticket's HMAC is valid but the identifier is invalid. "
"The ticket could be corrupt.")
return None
return ProtocolState(masterKey, issueDate=issueDate)
class ProtocolState( object ):
"""
Defines a ScrambleSuit protocol state contained in a session ticket.
A protocol state is essentially a master key which can then be used by the
server to derive session keys. Besides, a state object contains an issue
date which specifies the expiry date of a ticket. This class contains
methods to check the expiry status of a ticket and to dump it in its raw
form.
"""
    def __init__( self, masterKey, issueDate=None ):
"""
The constructor of the `ProtocolState' class.
The four class variables are initialised.
"""
self.identifier = const.TICKET_IDENTIFIER
self.masterKey = masterKey
        # A default argument of int(time.time()) would be evaluated only
        # once, at import time, so the timestamp is taken here instead.
        self.issueDate = int(time.time()) if issueDate is None else issueDate
# Pad to multiple of 16 bytes to match AES' block size.
self.pad = "\0\0\0\0\0\0\0\0\0\0"
def isValid( self ):
"""
Verifies the expiry date of the object's issue date.
If the expiry date is not yet reached and the protocol state is still
valid, `True' is returned. If the protocol state has expired, `False'
is returned.
"""
assert self.issueDate
lifetime = int(time.time()) - self.issueDate
if lifetime > const.SESSION_TICKET_LIFETIME:
log.debug("The ticket is invalid and expired %s ago." %
str(datetime.timedelta(seconds=
(lifetime - const.SESSION_TICKET_LIFETIME))))
return False
log.debug("The ticket is still valid for %s." %
str(datetime.timedelta(seconds=
(const.SESSION_TICKET_LIFETIME - lifetime))))
return True
def __repr__( self ):
"""
Return a raw string representation of the object's protocol state.
The length of the returned representation is exactly 64 bytes; a
multiple of AES' 16-byte block size. That makes it suitable to be
encrypted using AES-CBC.
"""
return struct.pack('I', self.issueDate) + self.identifier + \
self.masterKey + self.pad
class SessionTicket( object ):
"""
Encrypts and authenticates an encapsulated `ProtocolState()' object.
This class implements a session ticket which can be redeemed by clients.
The class contains methods to initialise and issue session tickets.
"""
def __init__( self, masterKey, srvState ):
"""
The constructor of the `SessionTicket()' class.
The class variables are initialised and the validity of the symmetric
keys for the session tickets is checked.
"""
assert (masterKey is not None) and \
len(masterKey) == const.MASTER_KEY_LENGTH
checkKeys(srvState)
# Initialisation vector for AES-CBC.
self.IV = mycrypto.strongRandom(const.TICKET_AES_CBC_IV_LENGTH)
# The server's (encrypted) protocol state.
self.state = ProtocolState(masterKey)
# AES and HMAC keys to encrypt and authenticate the ticket.
self.symmTicketKey = srvState.aesKey
self.hmacTicketKey = srvState.hmacKey
def issue( self ):
"""
Returns a ready-to-use session ticket after prior initialisation.
After the `SessionTicket()' class was initialised with a master key,
this method encrypts and authenticates the protocol state and returns
the final result which is ready to be sent over the wire.
"""
self.state.issueDate = int(time.time())
# Encrypt the protocol state.
aes = AES.new(self.symmTicketKey, mode=AES.MODE_CBC, IV=self.IV)
state = repr(self.state)
assert (len(state) % AES.block_size) == 0
cryptedState = aes.encrypt(state)
# Authenticate the encrypted state and the IV.
hmac = HMAC.new(self.hmacTicketKey,
self.IV + cryptedState, digestmod=SHA256).digest()
finalTicket = self.IV + cryptedState + hmac
log.debug("Returning %d-byte ticket." % len(finalTicket))
return finalTicket
# Alias class name in order to provide a more intuitive API.
new = SessionTicket
# Give ScrambleSuit server operators a way to manually issue new session
# tickets for out-of-band distribution.
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser()
parser.add_argument("ip_addr", type=str, help="The IPv4 address of the "
"%s server." % const.TRANSPORT_NAME)
parser.add_argument("tcp_port", type=int, help="The TCP port of the %s "
"server." % const.TRANSPORT_NAME)
parser.add_argument("ticket_file", type=str, help="The file, the newly "
"issued ticket is written to.")
args = parser.parse_args()
print "[+] Loading server state file."
serverState = state.load()
print "[+] Generating new session ticket."
masterKey = mycrypto.strongRandom(const.MASTER_KEY_LENGTH)
ticket = SessionTicket(masterKey, serverState).issue()
print "[+] Writing new session ticket to `%s'." % args.ticket_file
tickets = dict()
server = IPv4Address('TCP', args.ip_addr, args.tcp_port)
tickets[str(server)] = [int(time.time()), masterKey, ticket]
util.writeToFile(yaml.dump(tickets), args.ticket_file)
print "[+] Success."
obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/uniformdh.py 0000664 0000000 0000000 00000016222 12570034732 0025152 0 ustar 00root root 0000000 0000000 """
This module implements a class to deal with Uniform Diffie-Hellman handshakes.
The class `UniformDH' is used by the server as well as by the client to handle
the Uniform Diffie-Hellman handshake used by ScrambleSuit.
"""
import const
import random
import binascii
import Crypto.Hash.SHA256
import util
import mycrypto
import obfsproxy.transports.obfs3_dh as obfs3_dh
import obfsproxy.transports.base as base
import obfsproxy.common.log as logging
log = logging.get_obfslogger()
class UniformDH( object ):
"""
Provide methods to deal with Uniform Diffie-Hellman handshakes.
The class provides methods to extract public keys and to generate public
keys wrapped in a valid UniformDH handshake.
"""
def __init__( self, sharedSecret, weAreServer ):
"""
Initialise a UniformDH object.
"""
# `True' if we are the server; `False' otherwise.
self.weAreServer = weAreServer
# The shared UniformDH secret.
self.sharedSecret = sharedSecret
# Cache a UniformDH public key until it's added to the replay table.
self.remotePublicKey = None
# Uniform Diffie-Hellman object (implemented in obfs3_dh.py).
self.udh = None
# Used by the server so it can simply echo the client's epoch.
self.echoEpoch = None
def getRemotePublicKey( self ):
"""
Return the cached remote UniformDH public key.
"""
return self.remotePublicKey
def receivePublicKey( self, data, callback, srvState=None ):
"""
Extract the public key and invoke a callback with the master secret.
First, the UniformDH public key is extracted out of `data'. Then, the
shared master secret is computed and `callback' is invoked with the
master secret as argument. If any of this fails, `False' is returned.
"""
# Extract the public key sent by the remote host.
remotePublicKey = self.extractPublicKey(data, srvState)
if not remotePublicKey:
return False
if self.weAreServer:
self.remotePublicKey = remotePublicKey
# As server, we need a DH object; as client, we already have one.
self.udh = obfs3_dh.UniformDH()
assert self.udh is not None
try:
uniformDHSecret = self.udh.get_secret(remotePublicKey)
except ValueError:
raise base.PluggableTransportError("Corrupted public key.")
        # First, hash the 1536-bit UniformDH secret to obtain the master key.
masterKey = Crypto.Hash.SHA256.new(uniformDHSecret).digest()
# Second, session keys are now derived from the master key.
callback(masterKey)
return True
def extractPublicKey( self, data, srvState=None ):
"""
Extract and return a UniformDH public key out of `data'.
Before the public key is touched, the HMAC is verified. If the HMAC is
invalid or some other error occurs, `False' is returned. Otherwise,
the public key is returned. The extracted data is finally drained from
the given `data' object.
"""
assert self.sharedSecret is not None
# Do we already have the minimum amount of data?
if len(data) < (const.PUBLIC_KEY_LENGTH + const.MARK_LENGTH +
const.HMAC_SHA256_128_LENGTH):
return False
log.debug("Attempting to extract the remote machine's UniformDH "
"public key out of %d bytes of data." % len(data))
handshake = data.peek()
# First, find the mark to efficiently locate the HMAC.
publicKey = handshake[:const.PUBLIC_KEY_LENGTH]
mark = mycrypto.HMAC_SHA256_128(self.sharedSecret, publicKey)
index = util.locateMark(mark, handshake)
        if index is None:
return False
# Now that we know where the authenticating HMAC is: verify it.
hmacStart = index + const.MARK_LENGTH
existingHMAC = handshake[hmacStart:
(hmacStart + const.HMAC_SHA256_128_LENGTH)]
authenticated = False
for epoch in util.expandedEpoch():
myHMAC = mycrypto.HMAC_SHA256_128(self.sharedSecret,
handshake[0 : hmacStart] + epoch)
if util.isValidHMAC(myHMAC, existingHMAC, self.sharedSecret):
self.echoEpoch = epoch
authenticated = True
break
log.debug("HMAC invalid. Trying next epoch value.")
if not authenticated:
log.warning("Could not verify the authentication message's HMAC.")
return False
        # Do nothing if the handshake message is replayed. Immediately
        # closing the connection would be suspicious.
if srvState is not None and srvState.isReplayed(existingHMAC):
log.warning("The HMAC was already present in the replay table.")
return False
data.drain(index + const.MARK_LENGTH + const.HMAC_SHA256_128_LENGTH)
if srvState is not None:
log.debug("Adding the HMAC authenticating the UniformDH message " \
"to the replay table: %s." % existingHMAC.encode('hex'))
srvState.registerKey(existingHMAC)
return handshake[:const.PUBLIC_KEY_LENGTH]
def createHandshake( self, srvState=None ):
"""
Create and return a ready-to-be-sent UniformDH handshake.
The returned handshake data includes the public key, pseudo-random
padding, the mark and the HMAC. If a UniformDH object has not been
initialised yet, a new instance is created.
"""
assert self.sharedSecret is not None
log.debug("Creating UniformDH handshake message.")
if self.udh is None:
self.udh = obfs3_dh.UniformDH()
publicKey = self.udh.get_public()
assert (const.MAX_PADDING_LENGTH - const.PUBLIC_KEY_LENGTH) >= 0
# Subtract the length of the public key to make the handshake on
# average as long as a redeemed ticket. That should thwart statistical
# length-based attacks.
padding = mycrypto.strongRandom(random.randint(0,
const.MAX_PADDING_LENGTH -
const.PUBLIC_KEY_LENGTH))
# Add a mark which enables efficient location of the HMAC.
mark = mycrypto.HMAC_SHA256_128(self.sharedSecret, publicKey)
if self.echoEpoch is None:
epoch = util.getEpoch()
else:
epoch = self.echoEpoch
log.debug("Echoing epoch rather than recreating it.")
# Authenticate the handshake including the current approximate epoch.
mac = mycrypto.HMAC_SHA256_128(self.sharedSecret,
publicKey + padding + mark + epoch)
if self.weAreServer and (srvState is not None):
log.debug("Adding the HMAC authenticating the server's UniformDH "
"message to the replay table: %s." % mac.encode('hex'))
srvState.registerKey(mac)
return publicKey + padding + mark + mac
# Alias class name in order to provide a more intuitive API.
new = UniformDH
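
# Editorial sketch, not part of the original module: build the byte layout of
# a UniformDH handshake by hand (public key | padding | mark | HMAC) and then
# locate the mark again, mirroring what extractPublicKey() does. The shared
# secret and the "public key" are random stand-ins for the demo.
if __name__ == "__main__":
    import os
    secret = os.urandom(const.SHARED_SECRET_LENGTH)
    publicKey = os.urandom(const.PUBLIC_KEY_LENGTH)
    padding = os.urandom(random.randint(0, const.MAX_PADDING_LENGTH -
                                        const.PUBLIC_KEY_LENGTH))
    mark = mycrypto.HMAC_SHA256_128(secret, publicKey)
    mac = mycrypto.HMAC_SHA256_128(secret, publicKey + padding + mark +
                                   util.getEpoch())
    handshake = publicKey + padding + mark + mac
    assert util.locateMark(mark, handshake) == len(publicKey) + len(padding)
    print "[+] Mark located at byte %d." % (len(publicKey) + len(padding))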
obfsproxy-0.2.13/obfsproxy/transports/scramblesuit/util.py 0000664 0000000 0000000 00000012256 12570034732 0024137 0 ustar 00root root 0000000 0000000 """
This module implements several commonly used utility functions.
The implemented functions can be used to swap variables, write and read data
from files and to convert a number to raw text.
"""
import obfsproxy.common.log as logging
import os
import time
import const
import mycrypto
log = logging.get_obfslogger()
def setStateLocation( stateLocation ):
"""
Set the constant `STATE_LOCATION' to the given `stateLocation'.
The variable `stateLocation' determines where persistent information (such
    as the server's key material) is stored. If `stateLocation' is `None',
    it defaults to the current directory. In general, however, it should be a
subdirectory of Tor's data directory.
"""
if stateLocation is None:
return
if not stateLocation.endswith('/'):
stateLocation += '/'
# To be polite, we create a subdirectory inside wherever we are asked to
# store data in.
stateLocation += (const.TRANSPORT_NAME).lower() + '/'
# ...and if it does not exist yet, we attempt to create the full
# directory path.
if not os.path.exists(stateLocation):
log.info("Creating directory path `%s'." % stateLocation)
os.makedirs(stateLocation)
log.debug("Setting the state location to `%s'." % stateLocation)
const.STATE_LOCATION = stateLocation
def isValidHMAC( hmac1, hmac2, key ):
"""
Compares `hmac1' and `hmac2' after HMACing them again using `key'.
The arguments `hmac1' and `hmac2' are compared. If they are equal, `True'
is returned and otherwise `False'. To prevent timing attacks, double HMAC
verification is used meaning that the two arguments are HMACed again before
(variable-time) string comparison. The idea is taken from:
https://www.isecpartners.com/blog/2011/february/double-hmac-verification.aspx
"""
assert len(hmac1) == len(hmac2)
# HMAC the arguments again to prevent timing attacks.
doubleHmac1 = mycrypto.HMAC_SHA256_128(key, hmac1)
doubleHmac2 = mycrypto.HMAC_SHA256_128(key, hmac2)
if doubleHmac1 != doubleHmac2:
return False
log.debug("The computed HMAC is valid.")
return True
def locateMark( mark, payload ):
"""
Locate the given `mark' in `payload' and return its index.
The `mark' is placed before the HMAC of a ScrambleSuit authentication
mechanism and makes it possible to efficiently locate the HMAC. If the
`mark' could not be found, `None' is returned.
"""
index = payload.find(mark, 0, const.MAX_PADDING_LENGTH + const.MARK_LENGTH)
if index < 0:
log.debug("Could not find the mark just yet.")
return None
if (len(payload) - index - const.MARK_LENGTH) < \
const.HMAC_SHA256_128_LENGTH:
log.debug("Found the mark but the HMAC is still incomplete.")
return None
log.debug("Successfully located the mark.")
return index
def getEpoch( ):
"""
Return the Unix epoch divided by a constant as string.
This function returns a coarse-grained version of the Unix epoch. The
seconds passed since the epoch are divided by the constant
`EPOCH_GRANULARITY'.
"""
return str(int(time.time()) / const.EPOCH_GRANULARITY)
def expandedEpoch( ):
"""
Return [epoch, epoch-1, epoch+1].
"""
epoch = int(getEpoch())
return [str(epoch), str(epoch - 1), str(epoch + 1)]
def writeToFile( data, fileName ):
"""
Writes the given `data' to the file specified by `fileName'.
If an error occurs, the function logs an error message but does not throw
an exception or return an error code.
"""
log.debug("Opening `%s' for writing." % fileName)
try:
with open(fileName, "wb") as desc:
desc.write(data)
except IOError as err:
log.error("Error writing to `%s': %s." % (fileName, err))
def readFromFile( fileName, length=-1 ):
"""
    Read `length' bytes from the file specified by `fileName'.
If `length' equals -1 (the default), the entire file is read and the
content returned. If an error occurs, the function logs an error message
but does not throw an exception or return an error code.
"""
data = None
if not os.path.exists(fileName):
log.debug("File `%s' does not exist (yet?)." % fileName)
return None
log.debug("Opening `%s' for reading." % fileName)
try:
with open(fileName, "rb") as desc:
data = desc.read(length)
except IOError as err:
log.error("Error reading from `%s': %s." % (fileName, err))
return data
def sanitiseBase32( data ):
"""
Try to sanitise a Base32 string if it's slightly wrong.
ScrambleSuit's shared secret might be distributed verbally which could
cause mistakes. This function fixes simple mistakes, e.g., when a user
noted "1" rather than "I".
"""
data = data.upper()
if "1" in data:
log.info("Found a \"1\" in Base32-encoded \"%s\". Assuming " \
"it's actually \"I\"." % data)
data = data.replace("1", "I")
if "0" in data:
log.info("Found a \"0\" in Base32-encoded \"%s\". Assuming " \
"it's actually \"O\"." % data)
data = data.replace("0", "O")
return data
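
# Editorial sketch, not part of the original module: the epoch helpers and
# the Base32 sanitiser in action.
if __name__ == "__main__":
    # Neighbouring epochs give HMAC verification a grace period of roughly
    # EPOCH_GRANULARITY seconds around epoch boundaries.
    print "[+] Expanded epoch: %s." % ", ".join(expandedEpoch())
    # "1" and "0" are common transcription mistakes for "I" and "O".
    assert sanitiseBase32("mfrgg1b0") == "MFRGGIBO"
    print "[+] sanitiseBase32() fixed the transcribed secret."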
obfsproxy-0.2.13/obfsproxy/transports/transports.py 0000664 0000000 0000000 00000002537 12570034732 0022705 0 ustar 00root root 0000000 0000000 # XXX modulify transports and move this to a single import
import obfsproxy.transports.dummy as dummy
import obfsproxy.transports.b64 as b64
import obfsproxy.transports.obfs2 as obfs2
import obfsproxy.transports.obfs3 as obfs3
import obfsproxy.transports.scramblesuit.scramblesuit as scramblesuit
transports = { 'dummy' : {'base': dummy.DummyTransport, 'client' : dummy.DummyClient, 'server' : dummy.DummyServer },
'b64' : {'base': b64.B64Transport, 'client' : b64.B64Client, 'server' : b64.B64Server },
'obfs2' : {'base': obfs2.Obfs2Transport, 'client' : obfs2.Obfs2Client, 'server' : obfs2.Obfs2Server },
'scramblesuit' : {'base': scramblesuit.ScrambleSuitTransport,
'client':scramblesuit.ScrambleSuitClient,
'server':scramblesuit.ScrambleSuitServer },
'obfs3' : {'base': obfs3.Obfs3Transport, 'client' : obfs3.Obfs3Client, 'server' : obfs3.Obfs3Server } }
def get_transport_class(name, role):
# Rewrite equivalent roles.
if role == 'socks':
role = 'client'
elif role == 'ext_server':
role = 'server'
# Find the correct class
if (name in transports) and (role in transports[name]):
return transports[name][role]
else:
raise TransportNotFound
class TransportNotFound(Exception): pass
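# Illustrative usage: 'socks' is rewritten to 'client', so both lookups
# below return obfs3.Obfs3Client, while an unknown transport or role
# raises TransportNotFound:
#
#   >>> get_transport_class('obfs3', 'socks') is get_transport_class('obfs3', 'client')
#   True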
obfsproxy-0.2.13/setup.py 0000664 0000000 0000000 00000002025 12570034732 0015344 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
import sys
from setuptools import setup, find_packages
import versioneer
versioneer.versionfile_source = 'obfsproxy/_version.py'
versioneer.versionfile_build = 'obfsproxy/_version.py'
versioneer.tag_prefix = 'obfsproxy-' # tags are like 1.2.0
versioneer.parentdir_prefix = 'obfsproxy-' # dirname like 'myproject-1.2.0'
setup(
name = "obfsproxy",
author = "asn",
author_email = "asn@torproject.org",
description = ("A pluggable transport proxy written in Python"),
license = "BSD",
keywords = ['tor', 'obfuscation', 'twisted'],
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
packages = find_packages(),
entry_points = {
'console_scripts': [
'obfsproxy = obfsproxy.pyobfsproxy:run'
]
},
install_requires = [
'setuptools',
'PyCrypto',
'Twisted',
'argparse',
'pyptlib >= 0.0.6',
'pyyaml'
],
extras_require = {
'SOCKS': ["txsocksx"]
}
)
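# Note (illustrative): the 'SOCKS' extra above keeps txsocksx optional;
# "pip install 'obfsproxy[SOCKS]'" pulls it in, while a plain
# "pip install obfsproxy" skips it.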
obfsproxy-0.2.13/setup_py2exe.py 0000664 0000000 0000000 00000000727 12570034732 0016647 0 ustar 00root root 0000000 0000000 #!/usr/bin/env python
from distutils.core import setup
import py2exe
import os
topdir = "py2exe_bundle"
build_path = os.path.join(topdir, "build")
dist_path = os.path.join(topdir, "dist")
setup(
console=["bin/obfsproxy"],
zipfile="obfsproxy.zip",
options={
"build": {"build_base": build_path},
"py2exe": {
"includes": ["twisted", "pyptlib", "Crypto", "parsley", "txsocksx"],
"dist_dir": dist_path,
}
}
)
obfsproxy-0.2.13/versioneer.py 0000664 0000000 0000000 00000062247 12570034732 0016401 0 ustar 00root root 0000000 0000000 #! /usr/bin/python
"""versioneer.py
(like a rocketeer, but for versions)
* https://github.com/warner/python-versioneer
* Brian Warner
* License: Public Domain
* Version: 0.7+
This file helps distutils-based projects manage their version number by just
creating version-control tags.
For developers who work from a VCS-generated tree (e.g. 'git clone' etc),
each 'setup.py version', 'setup.py build', 'setup.py sdist' will compute a
version number by asking your version-control tool about the current
checkout. The version number will be written into a generated _version.py
file of your choosing, where it can be included by your __init__.py.
For users who work from a VCS-generated tarball (e.g. 'git archive'), it will
compute a version number by looking at the name of the directory created when
the tarball is unpacked. This conventionally includes both the name of the
project and a version number.
For users who work from a tarball built by 'setup.py sdist', it will get a
version number from a previously-generated _version.py file.
As a result, loading code directly from the source tree will not result in a
real version. If you want real versions from VCS trees (where you frequently
update from the upstream repository, or do new development), you will need to
do a 'setup.py version' after each update, and load code from the build/
directory.
You need to provide this code with a few configuration values:
versionfile_source:
A project-relative pathname into which the generated version strings
should be written. This is usually a _version.py next to your project's
main __init__.py file. If your project uses src/myproject/__init__.py,
this should be 'src/myproject/_version.py'. This file should be checked
in to your VCS as usual: the copy created below by 'setup.py
update_files' will include code that parses expanded VCS keywords in
generated tarballs. The 'build' and 'sdist' commands will replace it with
a copy that has just the calculated version string.
versionfile_build:
Like versionfile_source, but relative to the build directory instead of
the source directory. These will differ when your setup.py uses
'package_dir='. If you have package_dir={'myproject': 'src/myproject'},
then you will probably have versionfile_build='myproject/_version.py' and
versionfile_source='src/myproject/_version.py'.
tag_prefix: a string, like 'PROJECTNAME-', which appears at the start of all
VCS tags. If your tags look like 'myproject-1.2.0', then you
should use tag_prefix='myproject-'. If you use unprefixed tags
like '1.2.0', this should be an empty string.
parentdir_prefix: a string, frequently the same as tag_prefix, which
appears at the start of all unpacked tarball filenames. If
your tarball unpacks into 'myproject-1.2.0', this should
be 'myproject-'.
To use it:
1: include this file in the top level of your project
2: make the following changes to the top of your setup.py:
import versioneer
versioneer.versionfile_source = 'src/myproject/_version.py'
versioneer.versionfile_build = 'myproject/_version.py'
versioneer.tag_prefix = '' # tags are like 1.2.0
versioneer.parentdir_prefix = 'myproject-' # dirname like 'myproject-1.2.0'
3: add the following arguments to the setup() call in your setup.py:
version=versioneer.get_version(),
cmdclass=versioneer.get_cmdclass(),
4: run 'setup.py update_files', which will create _version.py, and will
append the following to your __init__.py:
from _version import __version__
5: modify your MANIFEST.in to include versioneer.py
6: add both versioneer.py and the generated _version.py to your VCS
"""
import os, sys, re
from distutils.core import Command
from distutils.command.sdist import sdist as _sdist
from distutils.command.build import build as _build
versionfile_source = None
versionfile_build = None
tag_prefix = None
parentdir_prefix = None
VCS = "git"
IN_LONG_VERSION_PY = False
LONG_VERSION_PY = '''
IN_LONG_VERSION_PY = True
# This file helps to compute a version number in source trees obtained from
# a git-archive tarball (such as those provided by github's download-from-tag
# feature). Distribution tarballs (built by setup.py sdist) and build
# directories (produced by setup.py build) will contain a much shorter file
# that just contains the computed version number.
# This file is released into the public domain. Generated by
# versioneer-0.7+ (https://github.com/warner/python-versioneer)
# these strings will be replaced by git during git-archive
git_refnames = "%(DOLLAR)sFormat:%%d%(DOLLAR)s"
git_full = "%(DOLLAR)sFormat:%%H%(DOLLAR)s"
import subprocess
import sys
def run_command(args, cwd=None, verbose=False):
try:
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen(args, stdout=subprocess.PIPE, cwd=cwd)
except EnvironmentError:
e = sys.exc_info()[1]
if verbose:
print("unable to run %%s" %% args[0])
print(e)
return None
stdout = p.communicate()[0].strip()
if sys.version >= '3':
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %%s (error)" %% args[0])
return None
return stdout
import sys
import re
import os.path
def get_expanded_variables(versionfile_source):
# the code embedded in _version.py can just fetch the value of these
# variables. When used from setup.py, we don't want to import
# _version.py, so we do it with a regexp instead. This function is not
# used from _version.py.
variables = {}
try:
for line in open(versionfile_source,"r").readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["full"] = mo.group(1)
except EnvironmentError:
pass
return variables
def versions_from_expanded_variables(variables, tag_prefix, verbose=False):
refnames = variables["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("variables are unexpanded, not using")
return {} # unexpanded, so not in an unpacked git-archive tarball
refs = set([r.strip() for r in refnames.strip("()").split(",")])
for ref in list(refs):
if not re.search(r'\d', ref):
if verbose:
print("discarding '%%s', no digits" %% ref)
refs.discard(ref)
# Assume all version tags have a digit. git's %%d expansion
# behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us
# distinguish between branches and tags. By ignoring refnames
# without digits, we filter out many common branch names like
# "release" and "stabilization", as well as "HEAD" and "master".
if verbose:
print("remaining refs: %%s" %% ",".join(sorted(refs)))
for ref in sorted(refs):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %%s" %% r)
return { "version": r,
"full": variables["full"].strip() }
# no suitable tags, so we use the full revision id
if verbose:
print("no suitable tags, using full revision id")
return { "version": variables["full"].strip(),
"full": variables["full"].strip() }
def versions_from_vcs(tag_prefix, versionfile_source, verbose=False):
# this runs 'git' from the root of the source tree. That either means
# someone ran a setup.py command (and this code is in versioneer.py, so
# IN_LONG_VERSION_PY=False, thus the containing directory is the root of
# the source tree), or someone ran a project-specific entry point (and
# this code is in _version.py, so IN_LONG_VERSION_PY=True, thus the
# containing directory is somewhere deeper in the source tree). This only
# gets called if the git-archive 'subst' variables were *not* expanded,
# and _version.py hasn't already been rewritten with a short version
# string, meaning we're inside a checked out source tree.
try:
here = os.path.abspath(__file__)
except NameError:
# some py2exe/bbfreeze/non-CPython implementations don't do __file__
return {} # not always correct
# versionfile_source is the relative path from the top of the source tree
# (where the .git directory might live) to this file. Invert this to find
# the root from __file__.
root = here
if IN_LONG_VERSION_PY:
for i in range(len(versionfile_source.split("/"))):
root = os.path.dirname(root)
else:
root = os.path.dirname(here)
if not os.path.exists(os.path.join(root, ".git")):
if verbose:
print("no .git in %%s" %% root)
return {}
GIT = "git"
if sys.platform == "win32":
GIT = "git.cmd"
stdout = run_command([GIT, "describe", "--tags", "--dirty", "--always"],
cwd=root)
if stdout is None:
return {}
if not stdout.startswith(tag_prefix):
if verbose:
print("tag '%%s' doesn't start with prefix '%%s'" %% (stdout, tag_prefix))
return {}
tag = stdout[len(tag_prefix):]
stdout = run_command([GIT, "rev-parse", "HEAD"], cwd=root)
if stdout is None:
return {}
full = stdout.strip()
if tag.endswith("-dirty"):
full += "-dirty"
return {"version": tag, "full": full}
def versions_from_parentdir(parentdir_prefix, versionfile_source, verbose=False):
if IN_LONG_VERSION_PY:
# We're running from _version.py. If it's from a source tree
# (execute-in-place), we can work upwards to find the root of the
# tree, and then check the parent directory for a version string. If
# it's in an installed application, there's no hope.
try:
here = os.path.abspath(__file__)
except NameError:
# py2exe/bbfreeze/non-CPython don't have __file__
return {} # without __file__, we have no hope
# versionfile_source is the relative path from the top of the source
# tree to _version.py. Invert this to find the root from __file__.
root = here
for i in range(len(versionfile_source.split("/"))):
root = os.path.dirname(root)
else:
# we're running from versioneer.py, which means we're running from
# the setup.py in a source tree. sys.argv[0] is setup.py in the root.
here = os.path.abspath(sys.argv[0])
root = os.path.dirname(here)
# Source tarballs conventionally unpack into a directory that includes
# both the project name and a version string.
dirname = os.path.basename(root)
if not dirname.startswith(parentdir_prefix):
if verbose:
print("guessing rootdir is '%%s', but '%%s' doesn't start with prefix '%%s'" %%
(root, dirname, parentdir_prefix))
return None
return {"version": dirname[len(parentdir_prefix):], "full": ""}
tag_prefix = "%(TAG_PREFIX)s"
parentdir_prefix = "%(PARENTDIR_PREFIX)s"
versionfile_source = "%(VERSIONFILE_SOURCE)s"
def get_versions(default={"version": "unknown", "full": ""}, verbose=False):
variables = { "refnames": git_refnames, "full": git_full }
ver = versions_from_expanded_variables(variables, tag_prefix, verbose)
if not ver:
ver = versions_from_vcs(tag_prefix, versionfile_source, verbose)
if not ver:
ver = versions_from_parentdir(parentdir_prefix, versionfile_source,
verbose)
if not ver:
ver = default
return ver
'''
import subprocess
import sys
def run_command(args, cwd=None, verbose=False):
try:
# remember shell=False, so use git.cmd on windows, not just git
p = subprocess.Popen(args, stdout=subprocess.PIPE, cwd=cwd)
except EnvironmentError:
e = sys.exc_info()[1]
if verbose:
print("unable to run %s" % args[0])
print(e)
return None
stdout = p.communicate()[0].strip()
if sys.version >= '3':
stdout = stdout.decode()
if p.returncode != 0:
if verbose:
print("unable to run %s (error)" % args[0])
return None
return stdout
import sys
import re
import os.path
def get_expanded_variables(versionfile_source):
# the code embedded in _version.py can just fetch the value of these
# variables. When used from setup.py, we don't want to import
# _version.py, so we do it with a regexp instead. This function is not
# used from _version.py.
variables = {}
try:
for line in open(versionfile_source,"r").readlines():
if line.strip().startswith("git_refnames ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["refnames"] = mo.group(1)
if line.strip().startswith("git_full ="):
mo = re.search(r'=\s*"(.*)"', line)
if mo:
variables["full"] = mo.group(1)
except EnvironmentError:
pass
return variables
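# Illustrative example (hypothetical contents): in an unpacked git-archive
# tarball, the expanded line in _version.py might read
#   git_refnames = " (HEAD, obfsproxy-0.2.13, master)"
# so get_expanded_variables() returns both "refnames" and "full", and
# versions_from_expanded_variables() below filters the refnames down to the
# tag, yielding {"version": "0.2.13", "full": "<expanded hash>"} for
# tag_prefix "obfsproxy-".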
def versions_from_expanded_variables(variables, tag_prefix, verbose=False):
refnames = variables["refnames"].strip()
if refnames.startswith("$Format"):
if verbose:
print("variables are unexpanded, not using")
return {} # unexpanded, so not in an unpacked git-archive tarball
refs = set([r.strip() for r in refnames.strip("()").split(",")])
for ref in list(refs):
if not re.search(r'\d', ref):
if verbose:
print("discarding '%s', no digits" % ref)
refs.discard(ref)
# Assume all version tags have a digit. git's %d expansion
# behaves like git log --decorate=short and strips out the
# refs/heads/ and refs/tags/ prefixes that would let us
# distinguish between branches and tags. By ignoring refnames
# without digits, we filter out many common branch names like
# "release" and "stabilization", as well as "HEAD" and "master".
if verbose:
print("remaining refs: %s" % ",".join(sorted(refs)))
for ref in sorted(refs):
# sorting will prefer e.g. "2.0" over "2.0rc1"
if ref.startswith(tag_prefix):
r = ref[len(tag_prefix):]
if verbose:
print("picking %s" % r)
return { "version": r,
"full": variables["full"].strip() }
# no suitable tags, so we use the full revision id
if verbose:
print("no suitable tags, using full revision id")
return { "version": variables["full"].strip(),
"full": variables["full"].strip() }
def versions_from_vcs(tag_prefix, versionfile_source, verbose=False):
# this runs 'git' from the root of the source tree. That either means
# someone ran a setup.py command (and this code is in versioneer.py, so
# IN_LONG_VERSION_PY=False, thus the containing directory is the root of
# the source tree), or someone ran a project-specific entry point (and
# this code is in _version.py, so IN_LONG_VERSION_PY=True, thus the
# containing directory is somewhere deeper in the source tree). This only
# gets called if the git-archive 'subst' variables were *not* expanded,
# and _version.py hasn't already been rewritten with a short version
# string, meaning we're inside a checked out source tree.
try:
here = os.path.abspath(__file__)
except NameError:
# some py2exe/bbfreeze/non-CPython implementations don't do __file__
return {} # not always correct
# versionfile_source is the relative path from the top of the source tree
# (where the .git directory might live) to this file. Invert this to find
# the root from __file__.
root = here
if IN_LONG_VERSION_PY:
for i in range(len(versionfile_source.split("/"))):
root = os.path.dirname(root)
else:
root = os.path.dirname(here)
if not os.path.exists(os.path.join(root, ".git")):
if verbose:
print("no .git in %s" % root)
return {}
GIT = "git"
if sys.platform == "win32":
GIT = "git.cmd"
stdout = run_command([GIT, "describe", "--tags", "--dirty", "--always"],
cwd=root)
if stdout is None:
return {}
if not stdout.startswith(tag_prefix):
if verbose:
print("tag '%s' doesn't start with prefix '%s'" % (stdout, tag_prefix))
return {}
tag = stdout[len(tag_prefix):]
stdout = run_command([GIT, "rev-parse", "HEAD"], cwd=root)
if stdout is None:
return {}
full = stdout.strip()
if tag.endswith("-dirty"):
full += "-dirty"
return {"version": tag, "full": full}
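# Illustrative example (hypothetical output): if 'git describe --tags
# --dirty --always' prints "obfsproxy-0.2.13-dirty", the tag prefix is
# stripped and "-dirty" is propagated to the full revision id, giving
# {"version": "0.2.13-dirty", "full": "<HEAD sha>-dirty"}.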
def versions_from_parentdir(parentdir_prefix, versionfile_source, verbose=False):
if IN_LONG_VERSION_PY:
# We're running from _version.py. If it's from a source tree
# (execute-in-place), we can work upwards to find the root of the
# tree, and then check the parent directory for a version string. If
# it's in an installed application, there's no hope.
try:
here = os.path.abspath(__file__)
except NameError:
# py2exe/bbfreeze/non-CPython don't have __file__
return {} # without __file__, we have no hope
# versionfile_source is the relative path from the top of the source
# tree to _version.py. Invert this to find the root from __file__.
root = here
for i in range(len(versionfile_source.split("/"))):
root = os.path.dirname(root)
else:
# we're running from versioneer.py, which means we're running from
# the setup.py in a source tree. sys.argv[0] is setup.py in the root.
here = os.path.abspath(sys.argv[0])
root = os.path.dirname(here)
# Source tarballs conventionally unpack into a directory that includes
# both the project name and a version string.
dirname = os.path.basename(root)
if not dirname.startswith(parentdir_prefix):
if verbose:
print("guessing rootdir is '%s', but '%s' doesn't start with prefix '%s'" %
(root, dirname, parentdir_prefix))
return None
return {"version": dirname[len(parentdir_prefix):], "full": ""}
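# Illustrative example: a tarball unpacked into "obfsproxy-0.2.13/" matches
# parentdir_prefix "obfsproxy-" and yields {"version": "0.2.13", "full": ""};
# a directory whose name lacks the prefix yields None instead.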
import sys
def do_vcs_install(versionfile_source, ipy):
GIT = "git"
if sys.platform == "win32":
GIT = "git.cmd"
run_command([GIT, "add", "versioneer.py"])
run_command([GIT, "add", versionfile_source])
run_command([GIT, "add", ipy])
present = False
try:
f = open(".gitattributes", "r")
for line in f.readlines():
if line.strip().startswith(versionfile_source):
if "export-subst" in line.strip().split()[1:]:
present = True
f.close()
except EnvironmentError:
pass
if not present:
f = open(".gitattributes", "a+")
f.write("%s export-subst\n" % versionfile_source)
f.close()
run_command([GIT, "add", ".gitattributes"])
SHORT_VERSION_PY = """
# This file was generated by 'versioneer.py' (0.7+) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.
version_version = '%(version)s'
version_full = '%(full)s'
def get_versions(default={}, verbose=False):
return {'version': version_version, 'full': version_full}
"""
DEFAULT = {"version": "unknown", "full": "unknown"}
def versions_from_file(filename):
versions = {}
try:
f = open(filename)
except EnvironmentError:
return versions
for line in f.readlines():
mo = re.match("version_version = '([^']+)'", line)
if mo:
versions["version"] = mo.group(1)
mo = re.match("version_full = '([^']+)'", line)
if mo:
versions["full"] = mo.group(1)
return versions
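# Illustrative example: a _version.py generated from SHORT_VERSION_PY above
# and containing
#
#   version_version = '0.2.13'
#   version_full = '<full revision id>'
#
# parses to {"version": "0.2.13", "full": "<full revision id>"}.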
def write_to_version_file(filename, versions):
f = open(filename, "w")
f.write(SHORT_VERSION_PY % versions)
f.close()
print("set %s to '%s'" % (filename, versions["version"]))
def get_best_versions(versionfile, tag_prefix, parentdir_prefix,
default=DEFAULT, verbose=False):
# returns dict with two keys: 'version' and 'full'
#
# extract version from first of _version.py, 'git describe', parentdir.
# This is meant to work for developers using a source checkout, for users
# of a tarball created by 'setup.py sdist', and for users of a
# tarball/zipball created by 'git archive' or github's download-from-tag
# feature.
variables = get_expanded_variables(versionfile_source)
if variables:
ver = versions_from_expanded_variables(variables, tag_prefix)
if ver:
if verbose: print("got version from expanded variable %s" % ver)
return ver
ver = versions_from_file(versionfile)
if ver:
if verbose: print("got version from file %s %s" % (versionfile, ver))
return ver
ver = versions_from_vcs(tag_prefix, versionfile_source, verbose)
if ver:
if verbose: print("got version from git %s" % ver)
return ver
ver = versions_from_parentdir(parentdir_prefix, versionfile_source, verbose)
if ver:
if verbose: print("got version from parentdir %s" % ver)
return ver
if verbose: print("got version from default %s" % default)
return default
def get_versions(default=DEFAULT, verbose=False):
assert versionfile_source is not None, "please set versioneer.versionfile_source"
assert tag_prefix is not None, "please set versioneer.tag_prefix"
assert parentdir_prefix is not None, "please set versioneer.parentdir_prefix"
return get_best_versions(versionfile_source, tag_prefix, parentdir_prefix,
default=default, verbose=verbose)
def get_version(verbose=False):
return get_versions(verbose=verbose)["version"]
class cmd_version(Command):
description = "report generated version string"
user_options = []
boolean_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
ver = get_version(verbose=True)
print("Version is currently: %s" % ver)
class cmd_build(_build):
def run(self):
versions = get_versions(verbose=True)
_build.run(self)
# now locate _version.py in the new build/ directory and replace it
# with an updated value
target_versionfile = os.path.join(self.build_lib, versionfile_build)
print("UPDATING %s" % target_versionfile)
os.unlink(target_versionfile)
f = open(target_versionfile, "w")
f.write(SHORT_VERSION_PY % versions)
f.close()
class cmd_sdist(_sdist):
def run(self):
versions = get_versions(verbose=True)
self._versioneer_generated_versions = versions
# unless we update this, the command will keep using the old version
self.distribution.metadata.version = versions["version"]
return _sdist.run(self)
def make_release_tree(self, base_dir, files):
_sdist.make_release_tree(self, base_dir, files)
# now locate _version.py in the new base_dir directory (remembering
# that it may be a hardlink) and replace it with an updated value
target_versionfile = os.path.join(base_dir, versionfile_source)
print("UPDATING %s" % target_versionfile)
os.unlink(target_versionfile)
f = open(target_versionfile, "w")
f.write(SHORT_VERSION_PY % self._versioneer_generated_versions)
f.close()
INIT_PY_SNIPPET = """
from ._version import get_versions
__version__ = get_versions()['version']
del get_versions
"""
class cmd_update_files(Command):
description = "modify __init__.py and create _version.py"
user_options = []
boolean_options = []
def initialize_options(self):
pass
def finalize_options(self):
pass
def run(self):
ipy = os.path.join(os.path.dirname(versionfile_source), "__init__.py")
print(" creating %s" % versionfile_source)
f = open(versionfile_source, "w")
f.write(LONG_VERSION_PY % {"DOLLAR": "$",
"TAG_PREFIX": tag_prefix,
"PARENTDIR_PREFIX": parentdir_prefix,
"VERSIONFILE_SOURCE": versionfile_source,
})
f.close()
try:
old = open(ipy, "r").read()
except EnvironmentError:
old = ""
if INIT_PY_SNIPPET not in old:
print(" appending to %s" % ipy)
f = open(ipy, "a")
f.write(INIT_PY_SNIPPET)
f.close()
else:
print(" %s unmodified" % ipy)
do_vcs_install(versionfile_source, ipy)
def get_cmdclass():
return {'version': cmd_version,
'update_files': cmd_update_files,
'build': cmd_build,
'sdist': cmd_sdist,
}