pax_global_header00006660000000000000000000000064123635063670014525gustar00rootroot0000000000000052 comment=3c4a3b4904120015d6995face2052123f8ed2a55 flashproxy-1.7/000077500000000000000000000000001236350636700135735ustar00rootroot00000000000000flashproxy-1.7/.gitattributes000066400000000000000000000000101236350636700164550ustar00rootroot00000000000000* -text flashproxy-1.7/.gitignore000066400000000000000000000001011236350636700155530ustar00rootroot00000000000000*.pyc # built by setup*.py /build /dist /*.egg-info /py2exe-tmp flashproxy-1.7/ChangeLog000066400000000000000000000323061236350636700153510ustar00rootroot00000000000000Changes in version 1.7 o Made the badge color reflect what's going on when it encounters a network error and tries to reconnect. Fixes bug 11400. o Renamed facilitator programs: facilitator → fp-facilitator facilitator.cgi → fp-registrar.cgi facilitator-email-poller → fp-registrar-email facilitator-reg → fp-reg-decrypt facilitator-reg-daemon → fp-reg-decryptd o Fixed a bug in the browser proxy which caused it to stop accepting new connections once it had failed 5 previous connections. o Updated the Tor Browser detection for the Firefox 24.0 User-Agent string. Patch by Arlo Breault. Fixes bug 11290. Changes in version 1.6 o Allowed the --port-forwarding option to work when the remote port number is given as 0. o Fixed registration on Mac OS X when the REMOTE address had an empty host part. A specification of ":9000", for example, would try to register "[]:9000". o Fixed registration on Windows with flashproxy-reg-appspot and flashproxy-reg-email. The certificate pinning code used a Python NamedTemporaryFile, which is not reopenable on Windows. Changes in version 1.5 o Add manpages for the facilitator and nodejs proxy, automatically generated by help2man. o Have nodejs flashproxy take GNU-style long command-line options. o Automate much of the configuration tasks involved in installing the facilitator using GNU autotools. See facilitator/INSTALL for details on the new process. Also move some common code here into the common python module. Patch by Ximin Luo. Fixes bug 9974. o Move common code to a separate flashproxy-common python module. Also split out some build scripts so distro packagers have an easier time. Patch by Ximin Luo. Fixes bug 6810. o Enabled binary frames (avoiding the overhead of base64 encoding) for Firefox 11 and later. Patch by Arlo Breault. Fixes bug 9069. o Removed a Python 2.7–dependent reference in flashproxy-reg-appspot. Changes in version 1.4 o Allowed websocket-server to gracefully handle SIGTERM. o Makefiles that install now obey DESTDIR to install relative to a different root. o Added a new observed Google public key pin for flashproxy-reg-email. o New --transport options in the client programs allow you to inform the facilitator that you want to receive connections of a certain kind. Transports other than the default "websocket" are experimental. Patch by George Kadianakis and David Fifield. Part of bug 9349. o Proxies now send a list of transport protocols they support (currently only "websocket"). This will allow the facilitator to assign proxies to clients that use matching transports. Patch by George Kadianakis. Part of bug 9349. o Allowed the facilitator to handle layered transports. For example, a client that register with the transport "obfs3|websocket" will receive a connection from a proxy using websocket, and will be connected to a relay that has an obfs3 server behind a websocket front end. 
Patch by Ximin Luo and George Kadianakis. Fixes bug 9349. o Changed to use the pluggable transport method name "flashproxy" rather than "websocket". Both names are equivalent and "websocket" continues to work. The reason for this change is to reduce confusion with a transport that simply makes a WebSocket connection to a "websocket" bridge, without receiving an inbound connection from a flash proxy. The default argument to the --transport option continues to be "websocket", because that option controls which particular protocol flash proxies should use to connect to you, and is distinct from the transport method name used by Tor. o Rearranged some files in the source tree. Facilitator documentation is now under facilitator/doc. The App Engine source code is under facilitator/appengine. The directory containing other ways to use the proxy moved from modules to proxy/modules. Patch by Ximin Luo. Fixes bug 9668. Changes in version 1.3 o Added a new observed Google public key pin. Changes in version 1.2 o The facilitator daemons have a --privdrop-user option that causes them to change to another user ID after reading keys and opening log files. facilitator-howto.txt shows how to configure them to use an unprivileged facilitator-nobody user. Patch by Alexandre Allaire and David Fifield. Fixes bug 8424. o Proxies now send the list of clients they are currently serving in their facilitator polling requests. This is meant to enable the facilitator to estimate the level of service each client is getting. Proxies send a protocol revision number "r=1" to signify the change. o The managed transport method name "flashproxy" is now recognized as a synonym for "websocket". o The badge localization now understands language subtags such as "ru-RU". Fixes bug 8828. o Language tags for badge localization are now case-insensitive. Patch by Eduardo Stalinho. Fixes bug 8829. o The badge localization is taken from the JavaScript property window.navigator.language when possible. Patch by Arlo Breault. Fixes bug 8827. o Proxies now attempt to connect to the client first, and only connect to the relay after the client connection is successful. This is meant to reduce the number of connections to the relay when clients haven't set up port forwarding. Introduced bug 9009, later fixed. o A proxy no longer contacts the facilitator when it is given the "client" and "relay" parameters. It serves the one given client and then stops. Patch by Arlo Breault. Fixes bug 9006. o facilitator-email-poller ignores messages received a long time ago. This is to fix the situation where facilitator-email-poller stops running for some reason, comes back after some hours, and then flushes a lot of no-longer-relevant registrations out to proxies. Patch by Sukhbir Singh and David Fifield. Fixes bug 8285. o New --port-forwarding and friends options enable flashproxy-client to invoke tor-fw-helper to forward ports automatically. Patch by Arlo Breault and David Fifield. Fixes bug 9033. o The flash proxy, in debug mode, now hides potentially sensistive information like IP addresses. Patch by Arlo Breault. Fixes bug 9170. o The new modules/nodejs allows running a standalone flash proxy (outside a browser) under Node.js. Patch by Arlo Breault. Fixes bug 7944. o Registration helpers have a new --unsafe-logging option and helpers don't log IP addresses by default. Patch by Arlo Breault. Fixes bug 9185. o Certificate pins now match against the public keys of intermediate certificates, not only those of leaves. 
This will help with flashproxy-reg-appspot, whose leaf key was often changing. It also allows us to copy pin digests directly from the Chromium source code. Patch by David Fifield. Fixes bug 9167. Changes in version 1.1 o Programs that use certificate pins now take a --disable-pin option that causes pins to be ignored. Changes in version 1.0 o The facilitator runs on a new domain name fp-facilitator.org. Fixes bug 7160. o Fixed badge rendering for a certain combination of Chrome and AdBlock Plus. Patch by Arlo Breault. Fixes bug 8300. o websocket-server sends the new TRANSPORT command of the extended OR port protocol to identify incoming connections as websocket. o There is now a 10-second HTTP request timeout in websocket-server. Fixes bug 8626. o The new --facilitator-pubkey option of flashproxy-client lets you configure a different facilitator public key, if you're using one other than the one at fp-facilitator.org. Patch by Arlo Breault. Fixes bug 8800. o The badge now has a "lang" parameter for localization. Translations exist for en, de, and ru. Patch by Peter Bourgelais. o Made facilitator-email-poller reconnect after some SSL and socket errors. Patch by Alexandre Allaire and David Fifield. Fixes bug 8284. o Added flashproxy-reg-url to the py2exe instructions in setup.py; this lack meant that flashproxy-reg-url was missing from Windows bundles. Patch by Arlo Breault. Fixes bug 8840. o Enabled HTTP Strict Transport Security (HSTS) on the facilitator. Patch by Eduardo Stalinho. Fixes bug 8772. o Added a new "appspot" registration method, which is now the first registration method tried, ahead of "email". "appspot" sends registrations through Google App Engine. Patch by Arlo Breault and David Fifield. Fixes bug 8860. Changes in version 0.12 o The new flashproxy-reg-url program prints a URL which, when requested, causes an address to be registered with the facilitator. You can use this program if the other registration methods are blocked: pass the URL to a third party and ask them to request it. Patch by Alexandre Allaire. Fixes bug 7559. o The new websocket-server program is the server transport plugin that flash proxies talk to. It replaces the third-party websockify program that was used formerly. It works as a managed proxy and supports the extended ORPort protocol. Fixes bug 7620. o Added a line of JavaScript that you can use to put a proxy badge on MediaWiki sites that allow custom JavaScript. Follow the instructions in modules/mediawiki/custom.js. Contributed by Sathyanarayanan Gunasekaran. o Make flashproxy-client ignore errors in opening listeners, as long as at least one local and one remote listener can be opened. A user reported a problem with listening on IPv6, while being able to listen on IPv4. Fixes bug 8319. o The facilitator now returns a check-back-in parameter in its response, telling proxies how often to poll. Fixes bug 8171. Patch by Alexandre Allaire. o Updated the Tor Browser check to match the behavior of new Tor Browsers. Patch by Alexandre Allaire and Arlo Breault. Fixes bug 8434. Changes in version 0.11 o Added -4 and -6 options to flashproxy-client and flashproxy-reg-http. (The options already existed in flashproxy-reg-email.) These options cause registrations helpers to use IPv4 or IPv6 only. Fixes bug 7622. Patch by Jorge Couchet. o The facilitator now gives only IPv4 clients to proxies requesting over IPv4, and IPv6 clients to proxies requesting over IPv6. 
This is to avoid the situation where an IPv4-only proxy is given an IPv6 address it cannot connect to. Fixes bug 6124. Patch by Jorge Couchet and David Fifield. o The proxy now accepts a cookierequired parameter that controls whether users have to explicitly state their desire to be a proxy. The page at http://crypto.stanford.edu/flashproxy/options.html allows changing user preference. o Proxies now poll for clients every 60 seconds rather than 10 seconds, and do not begin to poll immediately upon beginning to run. o There are new alpha Tor Browser Bundles for download at https://people.torproject.org/~dcf/flashproxy/. Changes in version 0.10 o Fixed a bug in flashproxy-client that made it susceptible to a denial of service (program crash) when receiving large WebSocket messages made up of many small fragmented frames. o Made the facilitator hand out more proxies by default, reducing a client's need to re-register. Changes in version 0.9 o There are executable Windows packages of the client programs, so that the programs can be run without Python being installed. Fixes bug 7283. Alexandre Allaire and David Fifield. o There are now man pages for the client programs (flashproxy-client, flashproxy-reg-email, and flashproxy-reg-http). Fixes bug 6453. Alexandre Allaire. o The proxy now tries to determine whether it is running in Tor Browser, and disables itself if so. Fixes bug 6293. Patch by Jorge Couchet. Changes in version 0.8 o flashproxy-client now operates as a managed proxy by default. This means that there is no longer a need to start flashproxy-client separately from Tor. Use a "ClientTransportPlugin websocket exec" line as in the included torrc. To use flashproxy-client as an external proxy (the way it worked before), use the --external option. Fixes bug 7016. o The proxy badge does more intelligent parsing of the boolean "debug" parameter. "0", "false", and other values are now interpreted as false and do not activate debug mode. Formerly any non-empty value was interpreted as true. Fixes bug 7110. Patch by Alexandre Allaire. o Fixed a runtime error in flashproxy-client on Windows: AttributeError: 'module' object has no attribute 'IPPROTO_IPV6' Fixes bug 7147. Patch by Alexandre Allaire. o Fixed an exception that happened in Windows in flashproxy-reg-email in reading the trusted CA list. The exception message was: Failed to register: [Errno 185090050] _ssl.c:340: error:0B084002:x509 certificate routines:X509_load_cert_crl_file:system lib Fixes bug 7271. Patch by Alexandre Allaire. o Fixed an exception that happened on Windows in flashproxy-client, relating to the use of nonblocking sockets: Socket error writing to local: '[Errno 10035] A non-blocking socket operation could not be completed immediately' Fixes bug 7272. Patch by Alexandre Allaire. flashproxy-1.7/LICENSE000066400000000000000000000021241236350636700145770ustar00rootroot00000000000000This is the license of the flash proxy software. Copyright 2011-2013 David Fifield Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. 
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. flashproxy-1.7/Makefile000066400000000000000000000057361236350636700152460ustar00rootroot00000000000000# Makefile for a self-contained binary distribution of flashproxy-client. # # This builds two zipball targets, dist and dist-exe, for POSIX and Windows # respectively. Both can be extracted and run in-place by the end user. # (PGP-signed forms also exist, sign and sign-exe.) # # If you are a distro packager, instead see the separate build scripts for each # source component, all of which have an `install` target: # - client: Makefile.client # - common: setup-common.py # - facilitator: facilitator/{configure.ac,Makefile.am} # # It is possible to build dist-exe on GNU/Linux by using wine to install # the windows versions of Python, py2exe, and m2crypto, then running # `make PYTHON="wine python" dist-exe`. PACKAGE = flashproxy-client VERSION = $(shell sh version.sh) DISTNAME = $(PACKAGE)-$(VERSION) THISFILE = $(lastword $(MAKEFILE_LIST)) PYTHON = python MAKE_CLIENT = $(MAKE) -f Makefile.client PYTHON="$(PYTHON)" # don't rebuild man pages due to VCS giving spurious timestamps, see #9940 REBUILD_MAN = 0 # all is N/A for a binary package, but include for completeness all: install: $(MAKE_CLIENT) DESTDIR=$(DESTDIR) REBUILD_MAN=$(REBUILD_MAN) install $(PYTHON) setup-common.py install $(if $(DESTDIR),--root=$(DESTDIR)) DISTDIR = dist/$(DISTNAME) $(DISTDIR): Makefile.client setup-common.py $(THISFILE) mkdir -p $(DISTDIR) $(MAKE_CLIENT) DESTDIR=$(DISTDIR) bindir=/ docdir=/ man1dir=/doc/ \ REBUILD_MAN="$(REBUILD_MAN)" install $(PYTHON) setup-common.py build_py -d $(DISTDIR) dist/%.zip: dist/% cd dist && zip -q -r -9 "$(@:dist/%=%)" "$(<:dist/%=%)" dist/%.zip.asc: dist/%.zip rm -f "$@" gpg --sign --detach-sign --armor "$<" gpg --verify "$@" "$<" dist: force-dist $(DISTDIR).zip sign: force-dist $(DISTDIR).zip.asc PY2EXE_TMPDIR = py2exe-tmp export PY2EXE_TMPDIR $(PY2EXE_TMPDIR): setup-client-exe.py $(PYTHON) setup-client-exe.py py2exe -q DISTDIR_W32 = $(DISTDIR)-win32 # below, we override DST_SCRIPT and DST_MAN1 for windows $(DISTDIR_W32): $(PY2EXE_TMPDIR) $(THISFILE) mkdir -p $(DISTDIR_W32) $(MAKE_CLIENT) DESTDIR=$(DISTDIR_W32) bindir=/ docdir=/ man1dir=/doc/ \ DST_SCRIPT= DST_MAN1='$$(SRC_MAN1)' \ REBUILD_MAN="$(REBUILD_MAN)" install cp -t $(DISTDIR_W32) $(PY2EXE_TMPDIR)/dist/* dist-exe: force-dist-exe $(DISTDIR_W32).zip sign-exe: force-dist-exe $(DISTDIR_W32).zip.asc # clean is N/A for a binary package, but include for completeness clean: distclean distclean: $(MAKE_CLIENT) clean $(PYTHON) setup-common.py clean --all rm -rf dist $(PY2EXE_TMPDIR) test: check check: $(MAKE_CLIENT) check $(PYTHON) setup-common.py test test-full: test cd facilitator && \ { test -x ./config.status && ./config.status || \ { test -x ./configure || ./autogen.sh; } && ./configure; } \ && make && PYTHONPATH=.. 
make check cd proxy && make test force-dist: rm -rf $(DISTDIR) $(DISTDIR).zip force-dist-exe: rm -rf $(DISTDIR_W32) $(DISTDIR_W32).zip $(PY2EXE_TMPDIR) .PHONY: all dist sign dist-exe sign-exe clean distclean test check test-full force-dist force-dist-exe flashproxy-1.7/Makefile.client000066400000000000000000000052641236350636700165170ustar00rootroot00000000000000# Makefile for a source distribution of flashproxy-client. # # This package is not self-contained and the build products may require other # dependencies to function; it is given as a reference for distro packagers. PACKAGE = flashproxy-client VERSION = $(shell sh version.sh) DESTDIR = THISFILE = $(lastword $(MAKEFILE_LIST)) PYTHON = python # GNU command variables # see http://www.gnu.org/prep/standards/html_node/Command-Variables.html INSTALL = install INSTALL_DATA = $(INSTALL) -m 644 INSTALL_PROGRAM = $(INSTALL) INSTALL_SCRIPT = $(INSTALL) # GNU directory variables # see http://www.gnu.org/prep/standards/html_node/Directory-Variables.html prefix = /usr/local exec_prefix = $(prefix) bindir = $(exec_prefix)/bin datarootdir = $(prefix)/share datadir = $(datarootdir) sysconfdir = $(prefix)/etc docdir = $(datarootdir)/doc/$(PACKAGE) mandir = $(datarootdir)/man man1dir = $(mandir)/man1 srcdir = . SRC_MAN1 = doc/flashproxy-client.1.txt doc/flashproxy-reg-appspot.1.txt doc/flashproxy-reg-email.1.txt doc/flashproxy-reg-http.1.txt doc/flashproxy-reg-url.1.txt SRC_SCRIPT = flashproxy-client flashproxy-reg-appspot flashproxy-reg-email flashproxy-reg-http flashproxy-reg-url SRC_DOC = README LICENSE ChangeLog torrc SRC_ALL = $(SRC_SCRIPT) $(SRC_DOC) $(SRC_MAN1) DST_MAN1 = $(SRC_MAN1:%.1.txt=%.1) DST_SCRIPT = $(SRC_SCRIPT) DST_DOC = $(SRC_DOC) DST_ALL = $(DST_SCRIPT) $(DST_DOC) $(DST_MAN1) TEST_PY = flashproxy-client-test.py REBUILD_MAN = 1 all: $(DST_ALL) $(THISFILE) %.1: %.1.txt ifeq ($(REBUILD_MAN),0) @echo "warning: $@ *may* be out-of-date; if so then rm and re-checkout from VCS or force a re-build with REBUILD_MAN=1" else rm -f $@ a2x --no-xmllint --xsltproc-opts "--stringparam man.th.title.max.length 24" -d manpage -f manpage $< endif install: all mkdir -p $(DESTDIR)$(bindir) for i in $(DST_SCRIPT); do $(INSTALL_SCRIPT) "$$i" $(DESTDIR)$(bindir); done mkdir -p $(DESTDIR)$(docdir) for i in $(DST_DOC); do $(INSTALL_DATA) "$$i" $(DESTDIR)$(docdir); done mkdir -p $(DESTDIR)$(man1dir) for i in $(DST_MAN1); do $(INSTALL_DATA) "$$i" $(DESTDIR)$(man1dir); done uninstall: for i in $(notdir $(DST_SCRIPT)); do rm $(DESTDIR)$(bindir)/"$$i"; done for i in $(notdir $(DST_DOC)); do rm $(DESTDIR)$(docdir)/"$$i"; done for i in $(notdir $(DST_MAN1)); do rm $(DESTDIR)$(man1dir)/"$$i"; done clean: rm -f *.pyc distclean: clean maintainer-clean: distclean rm -f $(DST_MAN1) # TODO(infinity0): eventually do this as part of 'check' once we have a decent # overrides file in place that filters out false-positives pylint: $(SRC_SCRIPT) pylint -E $^ check: $(THISFILE) for i in $(TEST_PY); do $(PYTHON) "$$i"; done .PHONY: all install uninstall clean distclean maintainer-clean check pylint flashproxy-1.7/README000066400000000000000000000076351236350636700144660ustar00rootroot00000000000000== Quick start for users You must have a version of Tor that supports pluggable transports. This means version 0.2.3.2-alpha or later. 
All the flashproxy programs and source code can be downloaded this way:

  git clone https://git.torproject.org/flashproxy.git

But as a user you only need these files:

  https://gitweb.torproject.org/flashproxy.git/blob_plain/HEAD:/flashproxy-client
  https://gitweb.torproject.org/flashproxy.git/blob_plain/HEAD:/torrc

You must be able to receive TCP connections; unfortunately this means that you cannot be behind NAT. See the section "Using a public client transport plugin" below to try out the system even behind NAT.

Run Tor using the included torrc file:

  $ tor -f torrc

By default the transport plugin listens on Internet-facing TCP port 9000. If you have to use a different port (to get through a firewall, for example), edit the ClientTransportPlugin line of the torrc to give a different port number:

  ClientTransportPlugin flashproxy exec ./flashproxy-client --register :0 :8888

If the flashproxy-client program is in a different directory (after being installed, for example), use the full path in the ClientTransportPlugin line:

  ClientTransportPlugin flashproxy exec /usr/local/bin/flashproxy-client --register

You should receive a flash proxy connection within about 60 seconds. See "Troubleshooting" below if it doesn't work.

== Overview

This is a set of tools that make it possible to connect to Tor through a browser-based proxy running on another computer. The flash proxy can be run just by opening a web page in a browser. Flash proxies are one of several pluggable transports for Tor.

There are five main parts.

1. The Tor client, running on someone's localhost.
2. A client transport plugin, which is a program that waits for connections from a flash proxy and connects them to the Tor client.
3. A flash proxy, which is a JavaScript program running in someone's web browser.
4. A facilitator, which is a server that keeps a list of clients that want a connection and assigns those addresses to proxies.
5. A Tor relay running a server transport plugin capable of receiving WebSocket connections.

The purpose of this project is to create many ephemeral bridge IP addresses, with the goal of outpacing a censor's ability to block them. Rather than increasing the number of bridges at static addresses, we aim to make existing bridges reachable by a larger and changing pool of addresses.

== Demonstration page

This page has a description of the project; viewing it also turns your computer into a flash proxy as long as the page is open.

  http://crypto.stanford.edu/flashproxy/

== Troubleshooting

Make sure someone is viewing http://crypto.stanford.edu/flashproxy/, or another web page with a flash proxy badge on it.

You can add the --log option to the ClientTransportPlugin command line in order to save debugging log messages.

If tor hangs at 10% with these messages:

  [notice] Bootstrapped 10%: Finishing handshake with directory server.
  [notice] no known bridge descriptors running yet; stalling as a last resort

you can try deleting the files in ~/.tor and /var/lib/tor, and then restarting tor.

If tor apparently hangs here:

  [notice] Bootstrapped 50%: Loading relay descriptors.
  [notice] new bridge descriptor '...' (fresh)

wait a few minutes. It can take a while to download relay descriptors.

If you suspect that the facilitator has lost your client registration, you can re-register:

  $ flashproxy-reg-email
  $ flashproxy-reg-http

== How to run a relay

Proxies talk to a relay running the websocket pluggable transport.
Source code and documentation for the server transport plugin are in the Git repository at https://git.torproject.org/pluggable-transports/websocket.git. == How to put a flash proxy badge on a web page Paste in this HTML where you want the badge to appear: flashproxy-1.7/doc/000077500000000000000000000000001236350636700143405ustar00rootroot00000000000000flashproxy-1.7/doc/design.txt000066400000000000000000000254121236350636700163560ustar00rootroot00000000000000Design of flash proxies 0. Problem statement Provide access to the Tor network for users behind a restrictive firewall that blocks direct access to all Tor relays and bridges. 1. Overview and background We assume the existence of an adversary powerful enough to enumerate and block all public and non-public (bridge) relays. For users facing such an adversary, we assume there exists a subset of reachable hosts that themselves can reach the Tor network. We call this subset the unrestricted Internet. A browser-based proxy (flash proxy), running in a web page in the unrestricted Internet, proxies connections between the restricted Internet and the Tor network. These proxies are expected to be temporary and short-lived, but their number will be great enough that they can't all be blocked effectively. The implementation of a browser-based proxy using WebSocket is complicated by restrictions that prevent it being a straightforward proxy. Chief among these is the lack of listening sockets. WebSocket can only initiate outgoing connections, not receive incoming ones. The flash proxy can only connect to external hosts by connecting directly to them. Another, but less important, restriction is that browser-based networking does not provide low-level socket access such as control of source address. 2. Components Conceptually, each flash proxy is nothing more than a simple proxy, which accepts connections from a client and forwards data to a server. But because of the limited networking facilities available to an in-browser application, several other pieces are needed. 1. Tor client: with a ClientTransportPlugin config option to allow it to use the flashproxy transport client. 2. Client transport plugin: Runs on the same computer as the Tor client. On startup, it registers with the facilitator to inform that it is waiting for a connection from a flash proxy. When this is received, it starts proxying data between it and the local Tor client. 3. Flash proxy: Runs in someone's browser, in an uncensored region of the Internet. The flash proxy first connects to the facilitator to get a client registration. It then makes two outgoing connections, one to a Tor relay and one to a waiting Tor client, and starts proxying data between them. 4. Facilitator: Keeps track of client registrations and hands them out to clients. It is capable of receiving client registrations in a variety of ways. It sends registrations to flash proxies over HTTP. The facilitator is responsible for matching clients to proxies in a reasonable manner. 5. Tor relay: with a ServerTransportPlugin config option to allow it to use the flashproxy transport server. 6. Server transport plugin: Waits for a connection from a flash proxy and proxies data between it and the local Tor relay. 3. Protocols The numbers refer to the same components as in sect 2 above. Arrows indicate the direction of the initial TCP connection. 1>2. Pluggable transport, client-side. See core tor docs for details. 2>4. Secure rendezvous using a variety of custom methods; see facilitator-howto.txt for details. 
This must be very hard to censor, e.g. using a popular web service over HTTPS. 3>4. Custom protocol specific to flashproxy, where each flashproxy polls a facilitator for client registrations. 2<3. WebSocket. This must be very hard to censor, which may require additional transformations to the underlying data stream. Note that this stream is controlled by the source client, not the flash proxy; in a plain flashproxy-only channel, it is as described in websocket-transport.txt. 5<3. WebSocket. 5>6. Pluggable transport, server-side. See core tor docs for details. 4. Sample session 1. The restricted Tor user starts the client transport plugin. 2. The client transport plugin notifies the facilitator that it needs a connection. 3. The restricted user starts Tor, which connects to the client transport plugin. 4. An unrestricted user opens the web page containing the flash proxy. 5. The flash proxy connects to the facilitator and asks for a client. 6. The facilitator sends one of its client registrations to the proxy. 7. The flash proxy connects to a Tor relay and to the waiting client transport plugin. 8. The client transport plugin receives the flash proxy's connection and begins relaying data between it and the Tor relay. Later, the flash proxy may go offline. Assuming that another flash proxy is available, it will receive the same client's address from the facilitator, and the local Tor client will reconnect to the client through it. 5. Behavior of the Tor client The Tor client must be configured to make its connections through a local proxy (the client transport plugin). This configuration is sufficient: ClientTransportPlugin flashproxy socks4 127.0.0.1:9001 UseBridges 1 Bridge flashproxy 0.0.1.0:1 LearnCircuitBuildTimeout 0 The address given for the "Bridge" option is actually irrelevant. The client transport plugin will ignore it and connect (through the flash proxy) to a Tor relay. The Tor client does not have control of its first hop. 6. Behavior of the client transport plugin The client transport plugin serves two purposes: It sends a registration message to the facilitator and it carries data between a flash proxy and the local Tor client. On startup, the client transport plugin sends a registration message to the facilitator, informing the facilitator that it is waiting for a connection. If the client transport plugin obfuscates its connections using pluggable transports, then it also appends the listening address of its transports to the registration message. The facilitator will later hand this registration to a flash proxy. The registration message is an HTTP POST request of the form: POST / HTTP/1.0 client=[
<address>]:<port>[&client-transport=<transport>][
client=[<address>]:<port>[&client-transport=<transport>] ...]

Where 'transport' is the name of the pluggable transport that is listening on <address>:<port>. The default flashproxy transport is named 'websocket'.

For example a registration message might look like this:

client=1.2.3.4:9000
client=1.2.3.4:10000&client-transport=obfs3|websocket

The facilitator sends a 200 reply if the registration was successful and an error status otherwise.

If the transport plugin omits the [<address>
] part, the facilitator will automatically fill it in based on the HTTP client address, which means the transport plugin doesn't have to know its external address.

The client transport plugin solves the impedance mismatch between the Tor client and the flash proxy, both of which want to make outgoing connections to the other. The transport plugin sits in between, listens for connections from both ends, and matches them together. The remote socket listens on port 9000 and the local on port 9001. On the local side, it acts as a SOCKS proxy (albeit one that always goes to the same destination).

7. Behavior of the flash proxy

The flash proxy polls the facilitator for client registrations. When it receives a registration, it opens one connection to the given Tor relay, one to the given client, and begins proxying data between them.

The proxy asks the facilitator for a registration with an HTTP GET request:

GET /?r=<rev>&client=<address>:<port>&transport=<transport> HTTP/1.0

The 'r' parameter is the protocol revision number (should be '1' for now). The 'client' parameter carries the IP address of a flashproxy client. The client parameter can repeat to report multiple connected clients. The 'transport' parameter may be repeated zero or many times and signals the outer transports that this flashproxy supports. (See section 10 for a discussion of inner and outer transports.) For example:

GET /?r=1&client=7.1.43.21:9999&client=1.2.3.4:9000&transport=webrtc&transport=websocket HTTP/1.0

The response code is 200 and the body looks like this:

client=<address>:<port>&client-transport=<transport>&relay=<address>:<port>&relay-transport=<transport>

For example:

client=1.2.3.4:2000&client-transport=websocket&relay=10.10.10:9902&relay-transport=websocket

As with the request, the response transports are actually outer transports; inner transports are not the proxy's concern and therefore not given.

If the value for the client parameter is empty, it means that there are no client registrations for this proxy.

The flash proxy may serve more than one relay–client pair at once.

8. Behavior of the facilitator

The facilitator is an HTTP server that handles client POST registrations and proxy GET requests according to the formats given above. The facilitator listens on port 9002.

In the current implementation, the facilitator forgets a client registration after giving it to a flash proxy. The client must re-register if it wants another connection later.

9. Behavior of the Tor relay

The Tor relay requires no special configuration.

10. Inner and outer transports

The client can talk to the relay using not only the Tor protocol, but any transport protocol implemented by e.g. another pluggable transport that sits between tor and the flashproxy PT. For the facilitator to match a client with a relay that understands it, flashproxy-client must be given the name of the transport protocol, via the --transport option. This is divided into two parts, the inner and outer transport, written like "inner|outer" or just "outer" if the inner transport is the plain Tor protocol.

The inner transport is the protocol that the non-flashproxy parts of the client and relay talk to each other with, and must be the same for each connected pair. Beyond that, the semantics of the transport are opaque to flashproxy; it does not know or care.

The outer transports are the protocols that the browser proxy uses to talk to the client and relay, and may be different for each. The proxy un-applies the outer transport of the client so that only the inner traffic remains, then re-applies the outer transport of the relay to this and sends it to the relay; and vice versa for traffic going in the opposite direction. Diagram:

client <======outer-C=======> proxy <======outer-S=======> relay
       <=======inner=========-------========inner========>

Currently the only supported outer transport is "websocket", but we will also add support for newer technologies such as WebRTC. (We have also seen third-party proxies running outside the browser on NodeJS that can open plain TCP connections, so that the outer transport is effectively just "tcp", although this is not currently recognised by the facilitator.)
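The following is a minimal Python 2 sketch of the registration and polling formats described in sections 6 and 7. The facilitator address fp.example.com:9002 and the surrounding plumbing are placeholders; the real programs additionally encrypt registrations to the facilitator's public key for most registration methods, register over HTTPS or indirect channels, and handle errors.

    # Sketch of the facilitator wire protocol (sections 6-8).
    import urllib
    import urllib2

    FACILITATOR = "http://fp.example.com:9002/"   # placeholder address

    def register(addr, port, transport="websocket"):
        # Client registration (section 6): form-encoded POST body, e.g.
        # client=1.2.3.4:9000&client-transport=websocket
        body = urllib.urlencode([("client", "%s:%d" % (addr, port)),
                                 ("client-transport", transport)])
        return urllib2.urlopen(FACILITATOR, body).getcode()

    def poll(transports=("websocket",)):
        # Proxy poll (section 7): GET /?r=1&transport=...; the response body
        # carries client= and relay= parameters, or an empty client= value
        # if no registration is waiting.
        query = urllib.urlencode([("r", "1")] + [("transport", t) for t in transports])
        return urllib2.urlopen(FACILITATOR + "?" + query).read()

    if __name__ == "__main__":
        print register("1.2.3.4", 9000)   # expect 200 on success
        print poll()

A real proxy would also repeat a client= parameter in its poll for every client it is currently serving (the r=1 protocol revision), which this sketch omits.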
flashproxy-1.7/doc/flashproxy-client.1000066400000000000000000000114721236350636700201020ustar00rootroot00000000000000'\" t .\" Title: flashproxy-client .\" Author: [FIXME: author] [see http://docbook.sf.net/el/author] .\" Generator: DocBook XSL Stylesheets v1.78.1 .\" Date: 05/07/2014 .\" Manual: \ \& .\" Source: \ \& .\" Language: English .\" .TH "FLASHPROXY\-CLIENT" "1" "05/07/2014" "\ \&" "\ \&" .\" ----------------------------------------------------------------- .\" * Define some portability stuff .\" ----------------------------------------------------------------- .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .\" http://bugs.debian.org/507673 .\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- .\" disable hyphenation .nh .\" disable justification (adjust text to left margin only) .ad l .\" ----------------------------------------------------------------- .\" * MAIN CONTENT STARTS HERE * .\" ----------------------------------------------------------------- .SH "NAME" flashproxy-client \- The flash proxy client transport plugin .SH "SYNOPSIS" .sp \fBflashproxy\-client\fR \fB\-\-register\fR [\fIOPTIONS\fR] [\fILOCAL\fR][:\fIPORT\fR] [\fIREMOTE\fR][:\fIPORT\fR] .SH "DESCRIPTION" .sp Wait for connections on a local and a remote port\&. When any pair of connections exists, data is ferried between them until one side is closed\&. By default \fILOCAL\fR is localhost addresses on port 9001 and \fIREMOTE\fR is all addresses on port 9000\&. .sp The local connection acts as a SOCKS4a proxy, but the host and port in the SOCKS request are ignored and the local connection is always linked to a remote connection\&. .sp By default, runs as a managed proxy: informs a parent Tor process of support for the "flashproxy" or "websocket" pluggable transport\&. In managed mode, the \fILOCAL\fR port is chosen arbitrarily instead of defaulting to 9001; however this can be overridden by including a \fILOCAL\fR port in the command\&. This is the way the program should be invoked in a torrc ClientTransportPlugin "exec" line\&. Use the \fB\-\-external\fR option to run as an external proxy that does not interact with Tor\&. .sp If any of the \fB\-\-register\fR, \fB\-\-register\-addr\fR, or \fB\-\-register\-methods\fR options are used, then your IP address will be sent to the facilitator so that proxies can connect to you\&. You need to register in some way in order to get any service\&. The \fB\-\-facilitator\fR option allows controlling which facilitator is used; if omitted, it uses a public default\&. .SH "OPTIONS" .PP \fB\-4\fR .RS 4 Registration helpers use IPv4\&. .RE .PP \fB\-6\fR .RS 4 Registration helpers use IPv6\&. .RE .PP \fB\-\-daemon\fR .RS 4 Daemonize (Unix only)\&. .RE .PP \fB\-\-external\fR .RS 4 Be an external proxy (don\(cqt interact with Tor using environment variables and stdout)\&. .RE .PP \fB\-f\fR, \fB\-\-facilitator\fR=\fIURL\fR .RS 4 Advertise willingness to receive connections to URL\&. .RE .PP \fB\-\-facilitator\-pubkey\fR=\fIFILENAME\fR .RS 4 Encrypt registrations to the given PEM\-formatted public key (default built\-in)\&. .RE .PP \fB\-h\fR, \fB\-\-help\fR .RS 4 Display a help message and exit\&. 
.RE .PP \fB\-l\fR, \fB\-\-log\fR=\fIFILENAME\fR .RS 4 Write log to \fIFILENAME\fR (default is stdout)\&. .RE .PP \fB\-\-pidfile\fR=\fIFILENAME\fR .RS 4 Write PID to \fIFILENAME\fR after daemonizing\&. .RE .PP \fB\-\-port\-forwarding\fR .RS 4 Attempt to forward \fIREMOTE\fR port\&. .RE .PP \fB\-\-port\-forwarding\-helper\fR=\fIPROGRAM\fR .RS 4 Use the given \fIPROGRAM\fR to forward ports (default "tor\-fw\-helper")\&. Implies \fB\-\-port\-forwarding\fR\&. .RE .PP \fB\-\-port\-forwarding\-external\fR=\fIPORT\fR .RS 4 Forward the external \fIPORT\fR to \fIREMOTE\fR on the local host (default same as REMOTE)\&. Implies \fB\-\-port\-forwarding\fR\&. .RE .PP \fB\-r\fR, \fB\-\-register\fR .RS 4 Register with the facilitator\&. .RE .PP \fB\-\-register\-addr\fR=\fIADDR\fR .RS 4 Register the given address (in case it differs from \fIREMOTE\fR)\&. Implies \fB\-\-register\fR\&. .RE .PP \fB\-\-register\-methods\fR=\fIMETHOD\fR[,\fIMETHOD\fR] .RS 4 Register using the given comma\-separated list of methods\&. Implies \fB\-\-register\fR\&. Possible methods are: appspot, email, http\&. Default is "appspot,email,http"\&. .RE .PP \fB\-\-transport\fR=\fITRANSPORT\fR .RS 4 Registrations include the fact that you intend to use the given \fITRANSPORT\fR (default "websocket")\&. .RE .PP \fB\-\-unsafe\-logging\fR .RS 4 Don\(cqt scrub IP addresses from logs\&. .RE .SH "SEE ALSO" .sp \fBhttp://crypto\&.stanford\&.edu/flashproxy/\fR .SH "BUGS" .sp Please report using \fBhttps://trac\&.torproject\&.org/projects/tor\fR\&. flashproxy-1.7/doc/flashproxy-client.1.txt000066400000000000000000000067521236350636700207250ustar00rootroot00000000000000// This file is asciidoc source code. // To generate manpages, use the a2x command i.e. // a2x --no-xmllint -d manpage -f manpage flashproxy-client.1.txt // see http://www.methods.co.nz/asciidoc/userguide.html#X1 FLASHPROXY-CLIENT(1) ==================== NAME ---- flashproxy-client - The flash proxy client transport plugin SYNOPSIS -------- **flashproxy-client** **--register** [__OPTIONS__] [__LOCAL__][:__PORT__] [__REMOTE__][:__PORT__] DESCRIPTION ----------- Wait for connections on a local and a remote port. When any pair of connections exists, data is ferried between them until one side is closed. By default __LOCAL__ is localhost addresses on port 9001 and __REMOTE__ is all addresses on port 9000. The local connection acts as a SOCKS4a proxy, but the host and port in the SOCKS request are ignored and the local connection is always linked to a remote connection. By default, runs as a managed proxy: informs a parent Tor process of support for the "flashproxy" or "websocket" pluggable transport. In managed mode, the __LOCAL__ port is chosen arbitrarily instead of defaulting to 9001; however this can be overridden by including a __LOCAL__ port in the command. This is the way the program should be invoked in a torrc ClientTransportPlugin "exec" line. Use the **--external** option to run as an external proxy that does not interact with Tor. If any of the **--register**, **--register-addr**, or **--register-methods** options are used, then your IP address will be sent to the facilitator so that proxies can connect to you. You need to register in some way in order to get any service. The **--facilitator** option allows controlling which facilitator is used; if omitted, it uses a public default. OPTIONS ------- **-4**:: Registration helpers use IPv4. **-6**:: Registration helpers use IPv6. **--daemon**:: Daemonize (Unix only). 
**--external**:: Be an external proxy (don't interact with Tor using environment variables and stdout). **-f**, **--facilitator**=__URL__:: Advertise willingness to receive connections to URL. **--facilitator-pubkey**=__FILENAME__:: Encrypt registrations to the given PEM-formatted public key (default built-in). **-h**, **--help**:: Display a help message and exit. **-l**, **--log**=__FILENAME__:: Write log to __FILENAME__ (default is stdout). **--pidfile**=__FILENAME__:: Write PID to __FILENAME__ after daemonizing. **--port-forwarding**:: Attempt to forward __REMOTE__ port. **--port-forwarding-helper**=__PROGRAM__:: Use the given __PROGRAM__ to forward ports (default "tor-fw-helper"). Implies **--port-forwarding**. **--port-forwarding-external**=__PORT__:: Forward the external __PORT__ to __REMOTE__ on the local host (default same as REMOTE). Implies **--port-forwarding**. **-r**, **--register**:: Register with the facilitator. **--register-addr**=__ADDR__:: Register the given address (in case it differs from __REMOTE__). Implies **--register**. **--register-methods**=__METHOD__[,__METHOD__]:: Register using the given comma-separated list of methods. Implies **--register**. Possible methods are: appspot, email, http. Default is "appspot,email,http". **--transport**=__TRANSPORT__:: Registrations include the fact that you intend to use the given __TRANSPORT__ (default "websocket"). **--unsafe-logging**:: Don't scrub IP addresses from logs. SEE ALSO -------- **http://crypto.stanford.edu/flashproxy/** BUGS ---- Please report using **https://trac.torproject.org/projects/tor**. flashproxy-1.7/doc/flashproxy-reg-appspot.1000066400000000000000000000055641236350636700210720ustar00rootroot00000000000000'\" t .\" Title: flashproxy-reg-appspot .\" Author: [FIXME: author] [see http://docbook.sf.net/el/author] .\" Generator: DocBook XSL Stylesheets v1.78.1 .\" Date: 05/07/2014 .\" Manual: \ \& .\" Source: \ \& .\" Language: English .\" .TH "FLASHPROXY\-REG\-APPSPOT" "1" "05/07/2014" "\ \&" "\ \&" .\" ----------------------------------------------------------------- .\" * Define some portability stuff .\" ----------------------------------------------------------------- .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .\" http://bugs.debian.org/507673 .\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- .\" disable hyphenation .nh .\" disable justification (adjust text to left margin only) .ad l .\" ----------------------------------------------------------------- .\" * MAIN CONTENT STARTS HERE * .\" ----------------------------------------------------------------- .SH "NAME" flashproxy-reg-appspot \- Register with a facilitator through Google App Engine\&. .SH "SYNOPSIS" .sp \fBflashproxy\-reg\-appspot\fR [\fIOPTIONS\fR] [\fIREMOTE\fR][:\fIPORT\fR] .SH "DESCRIPTION" .sp Register with a flash proxy facilitator through a Google App Engine app\&. By default the remote address registered is ":9000" (the external IP address is guessed)\&. It requires https://www\&.google\&.com/ not to be blocked\&. .sp This program uses a trick to talk to App Engine, even though appspot\&.com may be blocked\&. 
The IP address and Server Name Indication of the request are for www\&.google\&.com, but the Host header inside the request is for an appspot\&.com subdomain\&. .sp Requires the \fBflashproxy\-reg\-url\fR program\&. .SH "OPTIONS" .PP \fB\-4\fR .RS 4 Name lookups use only IPv4\&. .RE .PP \fB\-6\fR .RS 4 Name lookups use only IPv6\&. .RE .PP \fB\-\-disable\-pin\fR .RS 4 Don\(cqt check the server\(cqs public key against a list of known pins\&. You can use this if the server\(cqs public key has changed and this program hasn\(cqt been updated yet\&. .RE .PP \fB\-\-facilitator\-pubkey\fR=\fIFILENAME\fR .RS 4 Encrypt registrations to the given PEM\-formatted public key (default built\-in)\&. .RE .PP \fB\-h\fR, \fB\-\-help\fR .RS 4 Display help message and exit\&. .RE .PP \fB\-\-transport\fR=\fITRANSPORT\fR .RS 4 Registrations include the fact that you intend to use the given \fITRANSPORT\fR (default "websocket")\&. .RE .PP \fB\-\-unsafe\-logging\fR .RS 4 Don\(cqt scrub IP addresses from logs\&. .RE .SH "SEE ALSO" .sp \fBhttp://crypto\&.stanford\&.edu/flashproxy/\fR .SH "BUGS" .sp Please report using \fBhttps://trac\&.torproject\&.org/projects/tor\fR\&. flashproxy-1.7/doc/flashproxy-reg-appspot.1.txt000066400000000000000000000035551236350636700217060ustar00rootroot00000000000000// This file is asciidoc source code. // To generate manpages, use the a2x command. // This one has a long name, if you don't change the // default length parameter it will be truncated, use: // a2x --no-xmllint --xsltproc-opts "--stringparam man.th.title.max.length 24" -d manpage -f manpage flashproxy-reg-appspot.1.txt FLASHPROXY-REG-APPSPOT(1) ========================= NAME ---- flashproxy-reg-appspot - Register with a facilitator through Google App Engine. SYNOPSIS -------- **flashproxy-reg-appspot** [__OPTIONS__] [__REMOTE__][:__PORT__] DESCRIPTION ----------- Register with a flash proxy facilitator through a Google App Engine app. By default the remote address registered is ":9000" (the external IP address is guessed). It requires https://www.google.com/ not to be blocked. This program uses a trick to talk to App Engine, even though appspot.com may be blocked. The IP address and Server Name Indication of the request are for www.google.com, but the Host header inside the request is for an appspot.com subdomain. Requires the **flashproxy-reg-url** program. OPTIONS ------- **-4**:: Name lookups use only IPv4. **-6**:: Name lookups use only IPv6. **--disable-pin**:: Don't check the server's public key against a list of known pins. You can use this if the server's public key has changed and this program hasn't been updated yet. **--facilitator-pubkey**=__FILENAME__:: Encrypt registrations to the given PEM-formatted public key (default built-in). **-h**, **--help**:: Display help message and exit. **--transport**=__TRANSPORT__:: Registrations include the fact that you intend to use the given __TRANSPORT__ (default "websocket"). **--unsafe-logging**:: Don't scrub IP addresses from logs. SEE ALSO -------- **http://crypto.stanford.edu/flashproxy/** BUGS ---- Please report using **https://trac.torproject.org/projects/tor**. 
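As a rough illustration of the trick described in flashproxy-reg-appspot(1), the sketch below (Python 2.7.9+, for SNI support in the standard ssl module) makes a TLS connection whose TCP destination and SNI name are www.google.com while the HTTP Host header names an App Engine app. The app name fp-reg.appspot.com and the request path are placeholders; the real program builds the registration path with flashproxy-reg-url, uses M2Crypto, and pins Google's public key.

    # Sketch of the SNI/Host-header split used by flashproxy-reg-appspot.
    # The appspot host name below is hypothetical.
    import socket
    import ssl

    FRONT = "www.google.com"        # TCP destination and TLS SNI
    APP = "fp-reg.appspot.com"      # placeholder appspot host, sent only in Host:

    ctx = ssl.create_default_context()
    sock = socket.create_connection((FRONT, 443))
    tls = ctx.wrap_socket(sock, server_hostname=FRONT)
    tls.sendall("GET / HTTP/1.1\r\nHost: %s\r\nConnection: close\r\n\r\n" % APP)
    chunks = []
    while True:
        data = tls.recv(4096)
        if not data:
            break
        chunks.append(data)
    tls.close()
    print "".join(chunks)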
flashproxy-1.7/doc/flashproxy-reg-email.1000066400000000000000000000065251236350636700204710ustar00rootroot00000000000000'\" t .\" Title: flashproxy-reg-email .\" Author: [FIXME: author] [see http://docbook.sf.net/el/author] .\" Generator: DocBook XSL Stylesheets v1.78.1 .\" Date: 05/07/2014 .\" Manual: \ \& .\" Source: \ \& .\" Language: English .\" .TH "FLASHPROXY\-REG\-EMAIL" "1" "05/07/2014" "\ \&" "\ \&" .\" ----------------------------------------------------------------- .\" * Define some portability stuff .\" ----------------------------------------------------------------- .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .\" http://bugs.debian.org/507673 .\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- .\" disable hyphenation .nh .\" disable justification (adjust text to left margin only) .ad l .\" ----------------------------------------------------------------- .\" * MAIN CONTENT STARTS HERE * .\" ----------------------------------------------------------------- .SH "NAME" flashproxy-reg-email \- Register with a facilitator using the email method .SH "SYNOPSIS" .sp \fBflashproxy\-reg\-email\fR [\fIOPTIONS\fR] [\fIREMOTE\fR][:\fIPORT\fR] .SH "DESCRIPTION" .sp Register with a flash proxy facilitator through email\&. Makes a STARTTLS connection to an SMTP server and sends mail with a client IP address to a designated address\&. By default the remote address registered is ":9000" (the external IP address is guessed based on the SMTP server\(cqs response)\&. .sp Using an SMTP server or email address other than the defaults will not work unless you have made special arrangements to connect them to a facilitator\&. .sp The email address is not polled continually\&. After running the program, it may take up to a minute for the registration to be recognized\&. .sp This program requires the M2Crypto library for Python\&. .SH "OPTIONS" .PP \fB\-4\fR .RS 4 Name lookups use only IPv4\&. .RE .PP \fB\-6\fR .RS 4 Name lookups use only IPv6\&. .RE .PP \fB\-d\fR, \fB\-\-debug\fR .RS 4 Enable debugging output (Python smtplib messages)\&. .RE .PP \fB\-\-disable\-pin\fR .RS 4 Don\(cqt check the server\(cqs public key against a list of known pins\&. You can use this if the server\(cqs public key has changed and this program hasn\(cqt been updated yet\&. .RE .PP \fB\-e\fR, \fB\-\-email\fR=\fIADDRESS\fR .RS 4 Send mail to \fIADDRESS\fR (default is "flashproxyreg\&.a@gmail\&.com")\&. .RE .PP \fB\-\-facilitator\-pubkey\fR=\fIFILENAME\fR .RS 4 Encrypt registrations to the given PEM\-formatted public key (default built\-in)\&. .RE .PP \fB\-h\fR, \fB\-\-help\fR .RS 4 Display help message and exit\&. .RE .PP \fB\-s\fR, \fB\-\-smtp\fR=\fIHOST\fR[:\fIPORT\fR] .RS 4 Use the given SMTP server (default is "gmail\-smtp\-in\&.l\&.google\&.com:25")\&. .RE .PP \fB\-\-transport\fR=\fITRANSPORT\fR .RS 4 Registrations include the fact that you intend to use the given \fITRANSPORT\fR (default "websocket")\&. .RE .PP \fB\-\-unsafe\-logging\fR .RS 4 Don\(cqt scrub IP addresses from logs\&. .RE .SH "SEE ALSO" .sp \fBhttp://crypto\&.stanford\&.edu/flashproxy/\fR .SH "BUGS" .sp Please report using \fBhttps://trac\&.torproject\&.org/projects/tor\fR\&. 
flashproxy-1.7/doc/flashproxy-reg-email.1.txt000066400000000000000000000044221236350636700213010ustar00rootroot00000000000000// This file is asciidoc source code. // To generate manpages, use the a2x command. // This one has a long name, if you don't change the // default length parameter it will be truncated, use: // a2x --no-xmllint --xsltproc-opts "--stringparam man.th.title.max.length 23" -d manpage -f manpage flashproxy-reg-email.1.txt FLASHPROXY-REG-EMAIL(1) ======================= NAME ---- flashproxy-reg-email - Register with a facilitator using the email method SYNOPSIS -------- **flashproxy-reg-email** [__OPTIONS__] [__REMOTE__][:__PORT__] DESCRIPTION ----------- Register with a flash proxy facilitator through email. Makes a STARTTLS connection to an SMTP server and sends mail with a client IP address to a designated address. By default the remote address registered is ":9000" (the external IP address is guessed based on the SMTP server's response). Using an SMTP server or email address other than the defaults will not work unless you have made special arrangements to connect them to a facilitator. The email address is not polled continually. After running the program, it may take up to a minute for the registration to be recognized. This program requires the M2Crypto library for Python. OPTIONS ------- **-4**:: Name lookups use only IPv4. **-6**:: Name lookups use only IPv6. **-d**, **--debug**:: Enable debugging output (Python smtplib messages). **--disable-pin**:: Don't check the server's public key against a list of known pins. You can use this if the server's public key has changed and this program hasn't been updated yet. **-e**, **--email**=__ADDRESS__:: Send mail to __ADDRESS__ (default is "flashproxyreg.a@gmail.com"). **--facilitator-pubkey**=__FILENAME__:: Encrypt registrations to the given PEM-formatted public key (default built-in). **-h**, **--help**:: Display help message and exit. **-s**, **--smtp**=__HOST__[:__PORT__]:: Use the given SMTP server (default is "gmail-smtp-in.l.google.com:25"). **--transport**=__TRANSPORT__:: Registrations include the fact that you intend to use the given __TRANSPORT__ (default "websocket"). **--unsafe-logging**:: Don't scrub IP addresses from logs. SEE ALSO -------- **http://crypto.stanford.edu/flashproxy/** BUGS ---- Please report using **https://trac.torproject.org/projects/tor**. 
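The core of the email method described in flashproxy-reg-email(1) is an SMTP session with STARTTLS. The sketch below (Python 2, standard smtplib) shows only that much; the sender address and message format are illustrative, since the real flashproxy-reg-email encrypts the registration to the facilitator's public key and also pins the SMTP server's TLS certificate.

    # Sketch of the STARTTLS SMTP delivery used by flashproxy-reg-email.
    # The sender address and body format here are illustrative only.
    import smtplib
    from email.mime.text import MIMEText

    SMTP_HOST = "gmail-smtp-in.l.google.com"
    SMTP_PORT = 25
    TO_ADDR = "flashproxyreg.a@gmail.com"

    def send_registration(encrypted_reg_base64):
        msg = MIMEText(encrypted_reg_base64)
        msg["To"] = TO_ADDR
        smtp = smtplib.SMTP(SMTP_HOST, SMTP_PORT)
        try:
            smtp.starttls()
            smtp.sendmail("nobody@invalid", [TO_ADDR], msg.as_string())
        finally:
            smtp.quit()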
flashproxy-1.7/doc/flashproxy-reg-http.1000066400000000000000000000043341236350636700203550ustar00rootroot00000000000000'\" t .\" Title: flashproxy-reg-http .\" Author: [FIXME: author] [see http://docbook.sf.net/el/author] .\" Generator: DocBook XSL Stylesheets v1.78.1 .\" Date: 05/07/2014 .\" Manual: \ \& .\" Source: \ \& .\" Language: English .\" .TH "FLASHPROXY\-REG\-HTTP" "1" "05/07/2014" "\ \&" "\ \&" .\" ----------------------------------------------------------------- .\" * Define some portability stuff .\" ----------------------------------------------------------------- .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .\" http://bugs.debian.org/507673 .\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- .\" disable hyphenation .nh .\" disable justification (adjust text to left margin only) .ad l .\" ----------------------------------------------------------------- .\" * MAIN CONTENT STARTS HERE * .\" ----------------------------------------------------------------- .SH "NAME" flashproxy-reg-http \- Register with a facilitator using the HTTP method .SH "SYNOPSIS" .sp \fBflashproxy\-reg\-http\fR [\fIOPTIONS\fR] [\fIREMOTE\fR][:\fIPORT\fR] .SH "DESCRIPTION" .sp Register with a flash proxy facilitator using an HTTP POST\&. By default the remote address registered is ":9000"\&. .SH "OPTIONS" .PP \fB\-4\fR .RS 4 Name lookups use only IPv4\&. .RE .PP \fB\-6\fR .RS 4 Name lookups use only IPv6\&. .RE .PP \fB\-f\fR, \fB\-\-facilitator\fR=\fIURL\fR .RS 4 Register with the given facilitator (default "https://fp\-facilitator\&.org/")\&. .RE .PP \fB\-h\fR, \fB\-\-help\fR .RS 4 Display help message and exit\&. .RE .PP \fB\-\-transport\fR=\fITRANSPORT\fR .RS 4 Registrations include the fact that you intend to use the given \fITRANSPORT\fR (default "websocket")\&. .RE .PP \fB\-\-unsafe\-logging\fR .RS 4 Don\(cqt scrub IP addresses from logs\&. .RE .SH "SEE ALSO" .sp \fBhttp://crypto\&.stanford\&.edu/flashproxy/\fR .SH "BUGS" .sp Please report using \fBhttps://trac\&.torproject\&.org/projects/tor\fR\&. flashproxy-1.7/doc/flashproxy-reg-http.1.txt000066400000000000000000000024041236350636700211670ustar00rootroot00000000000000// This file is asciidoc source code. // To generate manpages, use the a2x command. // This one has a long name, if you don't change the // default length parameter it will be truncated, use: // a2x --no-xmllint --xsltproc-opts "--stringparam man.th.title.max.length 22" -d manpage -f manpage flashproxy-reg-http.1.txt FLASHPROXY-REG-HTTP(1) ====================== NAME ---- flashproxy-reg-http - Register with a facilitator using the HTTP method SYNOPSIS -------- **flashproxy-reg-http** [__OPTIONS__] [__REMOTE__][:__PORT__] DESCRIPTION ----------- Register with a flash proxy facilitator using an HTTP POST. By default the remote address registered is ":9000". OPTIONS ------- **-4**:: Name lookups use only IPv4. **-6**:: Name lookups use only IPv6. **-f**, **--facilitator**=__URL__:: Register with the given facilitator (default "https://fp-facilitator.org/"). **-h**, **--help**:: Display help message and exit. **--transport**=__TRANSPORT__:: Registrations include the fact that you intend to use the given __TRANSPORT__ (default "websocket"). 
**--unsafe-logging**:: Don't scrub IP addresses from logs. SEE ALSO -------- **http://crypto.stanford.edu/flashproxy/** BUGS ---- Please report using **https://trac.torproject.org/projects/tor**. flashproxy-1.7/doc/flashproxy-reg-url.1000066400000000000000000000053321236350636700201770ustar00rootroot00000000000000'\" t .\" Title: flashproxy-reg-url .\" Author: [FIXME: author] [see http://docbook.sf.net/el/author] .\" Generator: DocBook XSL Stylesheets v1.78.1 .\" Date: 05/07/2014 .\" Manual: \ \& .\" Source: \ \& .\" Language: English .\" .TH "FLASHPROXY\-REG\-URL" "1" "05/07/2014" "\ \&" "\ \&" .\" ----------------------------------------------------------------- .\" * Define some portability stuff .\" ----------------------------------------------------------------- .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .\" http://bugs.debian.org/507673 .\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" ----------------------------------------------------------------- .\" * set default formatting .\" ----------------------------------------------------------------- .\" disable hyphenation .nh .\" disable justification (adjust text to left margin only) .ad l .\" ----------------------------------------------------------------- .\" * MAIN CONTENT STARTS HERE * .\" ----------------------------------------------------------------- .SH "NAME" flashproxy-reg-url \- Register with a facilitator using an indirect URL .SH "SYNOPSIS" .sp \fBflashproxy\-reg\-url\fR [\fIOPTIONS\fR] \fIREMOTE\fR[:\fIPORT\fR] .SH "DESCRIPTION" .sp Print a URL, which, when retrieved, will cause the client address \fIREMOTE\fR[:\fIPORT\fR] to be registered with the flash proxy facilitator\&. The default \fIPORT\fR is 9000\&. .SH "OPTIONS" .PP \fB\-f\fR, \fB\-\-facilitator\fR=\fIURL\fR .RS 4 Register with the given facilitator (default "https://fp\-facilitator\&.org/")\&. .RE .PP \fB\-\-facilitator\-pubkey\fR=\fIFILENAME\fR .RS 4 Encrypt registrations to the given PEM\-formatted public key (default built\-in)\&. .RE .PP \fB\-h\fR, \fB\-\-help\fR .RS 4 Display help message and exit\&. .RE .PP \fB\-\-transport\fR=\fITRANSPORT\fR .RS 4 Registrations include the fact that you intend to use the given \fITRANSPORT\fR (default "websocket")\&. .RE .SH "EXAMPLE" .sp Say you wish to register 192\&.0\&.2\&.1:9000\&. Run .sp .if n \{\ .RS 4 .\} .nf \&./flashproxy\-reg\-url 192\&.0\&.2\&.1:9000 .fi .if n \{\ .RE .\} .sp The program should output a long string looking something like .sp https://fp\-facilitator\&.org/reg/0labtDob545HeKpLZ8LqGeOi\-OK7HXoQvfQzj0P2pjh1NrCKNDaPe91zo\&.\&.\&. .sp Copy this string and paste it into any URL fetching website or program\&. Once the URL is retrieved your address will be registered with the facilitator\&. .SH "SEE ALSO" .sp \fBhttp://crypto\&.stanford\&.edu/flashproxy/\fR .SH "BUGS" .sp Please report using \fBhttps://trac\&.torproject\&.org/projects/tor\fR\&. flashproxy-1.7/doc/flashproxy-reg-url.1.txt000066400000000000000000000033771236350636700210240ustar00rootroot00000000000000// This file is asciidoc source code. // To generate manpages, use the a2x command. 
// This one has a long name, if you don't change the // default length parameter it will be truncated, use: // a2x --no-xmllint --xsltproc-opts "--stringparam man.th.title.max.length 23" -d manpage -f manpage flashproxy-reg-url.1.txt FLASHPROXY-REG-URL(1) ===================== NAME ---- flashproxy-reg-url - Register with a facilitator using an indirect URL SYNOPSIS -------- **flashproxy-reg-url** [__OPTIONS__] __REMOTE__[:__PORT__] DESCRIPTION ----------- Print a URL, which, when retrieved, will cause the client address __REMOTE__[:__PORT__] to be registered with the flash proxy facilitator. The default __PORT__ is 9000. OPTIONS ------- **-f**, **--facilitator**=__URL__:: Register with the given facilitator (default "https://fp-facilitator.org/"). **--facilitator-pubkey**=__FILENAME__:: Encrypt registrations to the given PEM-formatted public key (default built-in). **-h**, **--help**:: Display help message and exit. **--transport**=__TRANSPORT__:: Registrations include the fact that you intend to use the given __TRANSPORT__ (default "websocket"). EXAMPLE ------- Say you wish to register 192.0.2.1:9000. Run ................................... ./flashproxy-reg-url 192.0.2.1:9000 ................................... The program should output a long string looking something like https://fp-facilitator.org/reg/0labtDob545HeKpLZ8LqGeOi-OK7HXoQvfQzj0P2pjh1NrCKNDaPe91zo\... Copy this string and paste it into any URL fetching website or program. Once the URL is retrieved your address will be registered with the facilitator. SEE ALSO -------- **http://crypto.stanford.edu/flashproxy/** BUGS ---- Please report using **https://trac.torproject.org/projects/tor**. flashproxy-1.7/experiments/000077500000000000000000000000001236350636700161365ustar00rootroot00000000000000flashproxy-1.7/experiments/README000066400000000000000000000035321236350636700170210ustar00rootroot00000000000000This directory contains scripts for testing and benchmarking the flash proxy. == Preparation You need to have installed certain software before running the tests. Firefox 8.0.1 socat Wget Python thttpd websockify socat, Wget, and Python are easily installed on most GNU/Linux distributions. thttpd can be compiled from the packages at http://acme.com/software/thttpd/. websockify is from https://github.com/kanaka/websockify/. The old Firefox is from http://download.mozilla.org/?product=firefox-8.0.1&os=linux&lang=en-US. Before compiling thttpd, increade IDLE_READ_TIMEOUT in config.h to a high value (several thousand). This is because some tests wait a long time between making a connection and sending an HTTP request. Firefox versions 9 and 10 will not work; these versions have a change to the -no-remote option that prevents the tests from running. This is supposed to be fixed with a -new-instance option in version 12. You need to create some dedicated Firefox profiles. Create profiles named flashexp1 and flashexp2 by running firefox -ProfileManager -no-remote Start the browsers with firefox -P flashexp1 -no-remote & firefox -P flashexp2 -no-remote & and in each one, set this about:config variable: browser.link.open_newwindow=1 (default is 3) This allows the scripts to clear the contents of a tab and replace them with another page. I personally run these tests in an Arch Linux VM. useradd -m user passwd user pacman -Sy pacman -Su pacman -S firefox socat python2 xorg xorg-xinit xterm flashplugin gcc make Download thttpd, compile it (you have to rename the getline function to avoid a naming conflict), and install it in /usr/local/bin. 
Symlink /usr/bin/python to /usr/bin/python2. Also you have to install the ttf-ms-fonts package from the AUR for text to show up in Flash Player. Add a window manager, run "startx", and you should be set. flashproxy-1.7/experiments/client-extract.py000077500000000000000000000023031236350636700214370ustar00rootroot00000000000000#!/usr/bin/env python import datetime import getopt import re import sys def usage(f = sys.stdout): print >> f, """\ Usage: %s [INPUTFILE] Extract client connections from a facilitator log. Each output line is date\tcount\n where count is the number of client requests in that hour. -h, --help show this help. """ % sys.argv[0] opts, args = getopt.gnu_getopt(sys.argv[1:], "h", ["help"]) for o, a in opts: if o == "-h" or o == "--help": usage() sys.exit() if len(args) == 0: input_file = sys.stdin elif len(args) == 1: input_file = open(args[0]) else: usage() sys.exit() prev_output = None count = 0.0 for line in input_file: m = re.match(r'^(\d+-\d+-\d+ \d+:\d+:\d+) client', line) if not m: continue date_str, = m.groups() date = datetime.datetime.strptime(date_str, "%Y-%m-%d %H:%M:%S") count += 1 rounded_date = date.replace(minute=0, second=0, microsecond=0) prev_output = prev_output or rounded_date if prev_output is None or rounded_date != prev_output: avg = float(count) print date.strftime("%Y-%m-%d %H:%M:%S") + "\t" + "%.2f" % avg prev_output = rounded_date count = 0.0 flashproxy-1.7/experiments/client-graph.py000077500000000000000000000044301236350636700210710ustar00rootroot00000000000000#!/usr/bin/env python # Makes a graph of flash proxy client counts from a facilitator log. import datetime import getopt import re import sys import matplotlib import matplotlib.pyplot as plt import numpy as np START_DATE = datetime.datetime(2012, 12, 15) def usage(f = sys.stdout): print >> f, """\ Usage: %s -o OUTPUT [INPUTFILE] Makes a graph of flash proxy counts from a facilitator log. -h, --help show this help. 
-o, --output=OUTPUT output file name (required).\ """ % sys.argv[0] output_file_name = None opts, args = getopt.gnu_getopt(sys.argv[1:], "ho:", ["help", "output="]) for o, a in opts: if o == "-h" or o == "--help": usage() sys.exit() elif o == "-o" or o == "--output": output_file_name = a if not output_file_name: usage() sys.exit() if len(args) == 0: input_file = sys.stdin elif len(args) == 1: input_file = open(args[0]) else: usage() sys.exit() def format_date(d, pos=None): d = matplotlib.dates.num2date(d) return d.strftime("%B %d") def timedelta_to_seconds(delta): return delta.days * (24 * 60 * 60) + delta.seconds + delta.microseconds / 1000000.0 prev_output = None count = 0 data = [] for line in input_file: m = re.match(r'^(\d+-\d+-\d+ \d+:\d+:\d+) client', line) if not m: continue date_str, = m.groups() date = datetime.datetime.strptime(date_str, "%Y-%m-%d %H:%M:%S") if date < START_DATE: continue count += 1 rounded_date = date.replace(minute=0, second=0, microsecond=0) prev_output = prev_output or rounded_date if prev_output is None or rounded_date != prev_output: delta = timedelta_to_seconds(date - prev_output) # avg = float(count) / delta avg = float(count) data.append((date, avg)) print date, avg prev_output = rounded_date count = 0 data = np.array(data) fig = plt.figure() ax = fig.add_axes([0.10, 0.30, 0.88, 0.60]) ax.set_ylabel(u"Number of clients", fontsize=8) fig.set_size_inches((8, 3)) ax.tick_params(direction="out", top="off", right="off") ax.set_frame_on(False) ax.xaxis.set_major_formatter(matplotlib.ticker.FuncFormatter(format_date)) fig.autofmt_xdate() plt.fill_between(data[:,0], data[:,1], linewidth=0, color="black") fig.savefig(output_file_name) flashproxy-1.7/experiments/client-graph.r000066400000000000000000000004151236350636700206760ustar00rootroot00000000000000library(ggplot2) x <- read.delim("client.dat", header=FALSE, col.names=c("date", "count"), colClasses=c("POSIXct", "numeric")) png("client-count.png", width=720, height=480) qplot(date, data=x, geom="bar", weight=count, binwidth=86400, ylab="client requests per day") flashproxy-1.7/experiments/common.sh000066400000000000000000000024041236350636700177620ustar00rootroot00000000000000# This file contains common variables and subroutines used by the experiment # scripts. FLASHPROXY_DIR="$(dirname $BASH_SOURCE)/.." FIREFOX=firefox SOCAT=socat WEBSOCKIFY=websockify THTTPD=thttpd TOR=tor visible_sleep() { N="$1" echo -n "sleep $N" while [ "$N" -gt 0 ]; do sleep 1 N=$((N-1)) echo -ne "\rsleep $N " done echo -ne "\n" } ensure_browser_started() { local PROFILE="$1" ("$FIREFOX" -P "$PROFILE" -remote "ping()" || ("$FIREFOX" -P "$PROFILE" -no-remote & visible_sleep 5)) 2>/dev/null } browser_clear() { local PROFILE="$1" ("$FIREFOX" -P "$PROFILE" -remote "ping()" && "$FIREFOX" -P "$PROFILE" -remote "openurl(about:blank)" &) 2>/dev/null } browser_goto() { local PROFILE="$1" local URL="$2" ensure_browser_started "$PROFILE" "$FIREFOX" -P "$PROFILE" -remote "openurl($URL)" 2>/dev/null } # Run a command and get the "real" part of time(1) output as a number of # seconds. real_time() { # Make a spare copy of stderr (fd 2). exec 3>&2 # Point the subcommand's stderr to our copy (fd 3), and extract the # original stderr (fd 2) output of time. (time -p eval "$@" 2>&3) |& tail -n 3 | head -n 1 | awk '{print $2}' } # Repeat a subcommand N times. 
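# Usage: repeat N COMMAND [ARGS]...
# For example, switching-all.sh calls it as
#   repeat $NUM_ITERATIONS ./local-http-constant.sh "local-http-constant-$DATE.log"
# to run one of the measurement scripts N times in a row.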
repeat() { local N N="$1" shift while [ $N -gt 0 ]; do eval "$@" N=$((N-1)) done } flashproxy-1.7/experiments/exercise/000077500000000000000000000000001236350636700177455ustar00rootroot00000000000000flashproxy-1.7/experiments/exercise/exercise.sh000077500000000000000000000013651236350636700221200ustar00rootroot00000000000000#!/bin/bash # This script registers with the flash proxy facilitator, tries to download # check.torproject.org, and saves a timestamped log file. FLASHPROXY_DIR="$HOME/flashproxy" TOR="$HOME/tor/src/or/tor" LOCAL_PORT=1080 REMOTE_PORT=7070 declare -a PIDS_TO_KILL stop() { if [ -n "${PIDS_TO_KILL[*]}" ]; then echo "Kill pids ${PIDS_TO_KILL[@]}." kill "${PIDS_TO_KILL[@]}" fi exit } trap stop EXIT date cd "$FLASHPROXY_DIR" ./flashproxy-client --external --register ":$LOCAL_PORT" ":$REMOTE_PORT" & PIDS_TO_KILL+=($!) sleep 20 "$TOR" ClientTransportPlugin "flashproxy socks4 127.0.0.1:$LOCAL_PORT" UseBridges 1 Bridge "flashproxy 0.0.1.0:1" & PIDS_TO_KILL+=($!) sleep 60 curl --retry 5 --socks4a 127.0.0.1:9050 http://check.torproject.org/ flashproxy-1.7/experiments/exercise/flashproxy-exercise.sh000077500000000000000000000004151236350636700243100ustar00rootroot00000000000000#!/bin/sh # Usage (for example in crontab for hourly tests): # 0 * * * * cd /path/flashproxy-exercise && ./flashproxy-exercise.sh LOGDIR=log DATE=$(date +"%Y-%m-%d-%H:%M") LOG="$LOGDIR/log-$DATE" mkdir -p "$LOGDIR" (./exercise.sh &> "$LOG") || cat "$LOG" flashproxy-1.7/experiments/facilitator-graph.py000077500000000000000000000043041236350636700221140ustar00rootroot00000000000000#!/usr/bin/env python # Makes a graph of flash proxy counts from a facilitator log. import datetime import getopt import re import sys import matplotlib import matplotlib.pyplot as plt import numpy as np POLL_INTERVAL = 10.0 def usage(f = sys.stdout): print >> f, """\ Usage: %s -o OUTPUT [INPUTFILE] Makes a graph of flash proxy counts from a facilitator log. -h, --help show this help. 
-o, --output=OUTPUT output file name (required).\ """ % sys.argv[0] output_file_name = None opts, args = getopt.gnu_getopt(sys.argv[1:], "ho:", ["help", "output="]) for o, a in opts: if o == "-h" or o == "--help": usage() sys.exit() elif o == "-o" or o == "--output": output_file_name = a if not output_file_name: usage() sys.exit() if len(args) == 0: input_file = sys.stdin elif len(args) == 1: input_file = open(args[0]) else: usage() sys.exit() def format_date(d, pos=None): d = matplotlib.dates.num2date(d) return d.strftime("%B %d") def timedelta_to_seconds(delta): return delta.days * (24 * 60 * 60) + delta.seconds + delta.microseconds / 1000000.0 prev_output = None count = 0 data = [] for line in input_file: m = re.match(r'^(\d+-\d+-\d+ \d+:\d+:\d+) proxy gets', line) if not m: continue date_str, = m.groups() date = datetime.datetime.strptime(date_str, "%Y-%m-%d %H:%M:%S") count += 1 rounded_date = date.replace(minute=0, second=0, microsecond=0) prev_output = prev_output or rounded_date if prev_output is None or rounded_date != prev_output: delta = timedelta_to_seconds(date - prev_output) avg = float(count) / delta * POLL_INTERVAL data.append((date, avg)) print date, avg prev_output = rounded_date count = 0 data = np.array(data) fig = plt.figure() ax = fig.add_axes([0.10, 0.30, 0.88, 0.60]) ax.set_ylabel(u"Number of proxies", fontsize=8) fig.set_size_inches((8, 3)) ax.tick_params(direction="out", top="off", right="off") ax.set_frame_on(False) ax.xaxis.set_major_formatter(matplotlib.ticker.FuncFormatter(format_date)) fig.autofmt_xdate() plt.fill_between(data[:,0], data[:,1], linewidth=0, color="black") fig.savefig(output_file_name) flashproxy-1.7/experiments/proxy-extract.py000077500000000000000000000047011236350636700213460ustar00rootroot00000000000000#!/usr/bin/env python import datetime import getopt import re import sys def usage(f = sys.stdout): print >> f, """\ Usage: %s [INPUTFILE] Extract proxy connections from a facilitator log. Each output line is date\tcount\n where count is the approximate poll interval in effect at date. -h, --help show this help. """ % sys.argv[0] opts, args = getopt.gnu_getopt(sys.argv[1:], "h", ["help"]) for o, a in opts: if o == "-h" or o == "--help": usage() sys.exit() if len(args) == 0: input_file = sys.stdin elif len(args) == 1: input_file = open(args[0]) else: usage() sys.exit() def timedelta_to_seconds(delta): return delta.days * (24 * 60 * 60) + delta.seconds + delta.microseconds / 1000000.0 # commit 49de7bf689ee989997a1edbf2414a7bdbc2164f9 # Author: David Fifield # Date: Thu Jan 3 21:01:39 2013 -0800 # # Bump poll interval from 10 s to 60 s. # # commit 69d429db12cedc90dac9ccefcace80c86af7eb51 # Author: David Fifield # Date: Tue Jan 15 14:02:02 2013 -0800 # # Increase facilitator_poll_interval from 1 m to 10 m. BEGIN_60S = datetime.datetime(2013, 1, 3, 21, 0, 0) BEGIN_600S = datetime.datetime(2013, 1, 15, 14, 0, 0) # Proxies refresh themselves once a day, so interpolate across a day when the # polling interval historically changed. 
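# For example, a log entry 12 hours after BEGIN_60S gets an interpolated
# interval of 0.5 * (60 - 10) + 10 = 35 seconds.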
def get_poll_interval(date): if date < BEGIN_60S: return 10 elif BEGIN_60S <= date < BEGIN_60S + datetime.timedelta(1): return timedelta_to_seconds(date-BEGIN_60S) / timedelta_to_seconds(datetime.timedelta(1)) * (60-10) + 10 elif date < BEGIN_600S: return 60 elif BEGIN_600S <= date < BEGIN_600S + datetime.timedelta(1): return timedelta_to_seconds(date-BEGIN_600S) / timedelta_to_seconds(datetime.timedelta(1)) * (600-60) + 60 else: return 600 prev_output = None count = 0.0 for line in input_file: m = re.match(r'^(\d+-\d+-\d+ \d+:\d+:\d+) proxy gets', line) if not m: continue date_str, = m.groups() date = datetime.datetime.strptime(date_str, "%Y-%m-%d %H:%M:%S") count += get_poll_interval(date) rounded_date = date.replace(minute=0, second=0, microsecond=0) prev_output = prev_output or rounded_date if prev_output is None or rounded_date != prev_output: avg = float(count) / 10.0 print date.strftime("%Y-%m-%d %H:%M:%S") + "\t" + "%.2f" % avg prev_output = rounded_date count = 0.0 flashproxy-1.7/experiments/proxy-graph.r000066400000000000000000000004231236350636700206000ustar00rootroot00000000000000library(ggplot2) x <- read.delim("proxy.dat", header=FALSE, col.names=c("date", "interval"), colClasses=c("POSIXct", "numeric")) png("proxy-count.png", width=720, height=480) qplot(date, data=x, geom="bar", weight=interval/10, binwidth=86400, ylab="proxy requests per day") flashproxy-1.7/experiments/switching/000077500000000000000000000000001236350636700201355ustar00rootroot00000000000000flashproxy-1.7/experiments/switching/local-http-alternating.sh000077500000000000000000000033401236350636700250510ustar00rootroot00000000000000#!/bin/bash # Usage: ./local-http-alternating.sh [OUTPUT_FILENAME] # # Tests a download over alternating flash proxies. If OUTPUT_FILENAME is # supplied, appends the time measurement to that file. . ../common.sh PROFILE_1=flashexp1 PROFILE_2=flashexp2 PROXY_URL="http://127.0.0.1:8000/embed.html?facilitator=127.0.0.1:9002&ratelimit=off" DATA_FILE_NAME="$FLASHPROXY_DIR/dump" OUTPUT_FILENAME="$1" # Declare an array. declare -a PIDS_TO_KILL stop() { browser_clear "$PROFILE_1" browser_clear "$PROFILE_2" if [ -n "${PIDS_TO_KILL[*]}" ]; then echo "Kill pids ${PIDS_TO_KILL[@]}." kill "${PIDS_TO_KILL[@]}" fi echo "Delete data file." rm -f "$DATA_FILE_NAME" exit } trap stop EXIT echo "Create data file." dd if=/dev/null of="$DATA_FILE_NAME" bs=1M seek=500 2>/dev/null || exit echo "Start web server." "$THTTPD" -D -d "$FLASHPROXY_DIR" -p 8000 & PIDS_TO_KILL+=($!) echo "Start facilitator." "$FLASHPROXY_DIR"/facilitator -d --relay 127.0.0.1:8000 >/dev/null & PIDS_TO_KILL+=($!) visible_sleep 5 echo "Start client transport plugin." "$FLASHPROXY_DIR"/flashproxy-client --register --facilitator 127.0.0.1:9002 >/dev/null & PIDS_TO_KILL+=($!) visible_sleep 1 echo "Start browsers." ensure_browser_started "$PROFILE_1" ensure_browser_started "$PROFILE_2" ./proxy-loop.sh "$PROXY_URL" "$PROFILE_1" "$PROFILE_2" >/dev/null 2>&1 & PIDS_TO_KILL+=($!) visible_sleep 2 echo "Start socat." "$SOCAT" TCP-LISTEN:2000,reuseaddr,fork SOCKS4A:127.0.0.1:dummy:0,socksport=9001 & PIDS_TO_KILL+=($!) 
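# Give socat a moment to open its listening port before the timed download starts.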
visible_sleep 2 if [ -n "$OUTPUT_FILENAME" ]; then real_time wget http://127.0.0.1:2000/dump --wait=0 --waitretry=0 -t 1000 -O /dev/null >> "$OUTPUT_FILENAME" else real_time wget http://127.0.0.1:2000/dump --wait=0 --waitretry=0 -t 1000 -O /dev/null fi flashproxy-1.7/experiments/switching/local-http-constant.sh000077500000000000000000000032561236350636700244000ustar00rootroot00000000000000#!/bin/bash # Usage: ./local-http-constant.sh [OUTPUT_FILENAME] # # Tests a download over an uninterrupted flash proxy. If OUTPUT_FILENAME # is supplied, appends the time measurement to that file. . ../common.sh PROFILE_1=flashexp1 PROFILE_2=flashexp2 PROXY_URL="http://127.0.0.1:8000/embed.html?facilitator=127.0.0.1:9002&ratelimit=off" DATA_FILE_NAME="$FLASHPROXY_DIR/dump" OUTPUT_FILENAME="$1" # Declare an array. declare -a PIDS_TO_KILL stop() { browser_clear "$PROFILE_1" browser_clear "$PROFILE_2" if [ -n "${PIDS_TO_KILL[*]}" ]; then echo "Kill pids ${PIDS_TO_KILL[@]}." kill "${PIDS_TO_KILL[@]}" fi echo "Delete data file." rm -f "$DATA_FILE_NAME" exit } trap stop EXIT echo "Create data file." dd if=/dev/null of="$DATA_FILE_NAME" bs=1M seek=500 2>/dev/null || exit echo "Start web server." "$THTTPD" -D -d "$FLASHPROXY_DIR" -p 8000 & PIDS_TO_KILL+=($!) echo "Start websockify." "$WEBSOCKIFY" -v 8001 127.0.0.1:8000 >/dev/null & PIDS_TO_KILL+=($!) echo "Start facilitator." "$FLASHPROXY_DIR"/facilitator -d --relay 127.0.0.1:8001 >/dev/null & PIDS_TO_KILL+=($!) visible_sleep 5 echo "Start client transport plugin." "$FLASHPROXY_DIR"/flashproxy-client --register --facilitator 127.0.0.1:9002 >/dev/null & PIDS_TO_KILL+=($!) visible_sleep 1 echo "Start browser." browser_goto "$PROFILE_1" "$PROXY_URL" echo "Start socat." "$SOCAT" TCP-LISTEN:2000,reuseaddr,fork SOCKS4A:127.0.0.1:dummy:0,socksport=9001 & PIDS_TO_KILL+=($!) visible_sleep 2 if [ -n "$OUTPUT_FILENAME" ]; then real_time wget http://127.0.0.1:2000/dump --wait=0 --waitretry=0 -t 1000 -O /dev/null >> "$OUTPUT_FILENAME" else real_time wget http://127.0.0.1:2000/dump --wait=0 --waitretry=0 -t 1000 -O /dev/null fi flashproxy-1.7/experiments/switching/proxy-loop.sh000077500000000000000000000016061236350636700226270ustar00rootroot00000000000000#!/bin/bash # Runs overlapping flash proxy instances in a loop. # Usage: /proxy-loop.sh PROFILE1 PROFILE2 # The profiles need to have the open_newwindow configuration option set # properly. See ../README. # browser.link.open_newwindow=1 (default is 3) . ../common.sh URL=$1 PROFILE_1=$2 PROFILE_2=$3 # OVERLAP must be at most half of PERIOD. PERIOD=10 OVERLAP=2 ensure_browser_started "$PROFILE_1" browser_clear "$PROFILE_1" ensure_browser_started "$PROFILE_2" browser_clear "$PROFILE_2" sleep 1 while true; do echo "1 on" firefox -P "$PROFILE_1" -remote "openurl($URL)" sleep $OVERLAP echo "2 off" firefox -P "$PROFILE_2" -remote "openurl(about:blank)" sleep $(($PERIOD - (2 * $OVERLAP))) echo "2 on" firefox -P "$PROFILE_2" -remote "openurl($URL)" sleep $OVERLAP echo "1 off" firefox -P "$PROFILE_1" -remote "openurl(about:blank)" sleep $(($PERIOD - (2 * $OVERLAP))) done flashproxy-1.7/experiments/switching/remote-tor-alternating.sh000077500000000000000000000032171236350636700251020ustar00rootroot00000000000000#!/bin/bash # Usage: ./remote-tor-alternating.sh [OUTPUT_FILENAME] # # Tests a Tor download over alternating flash proxies. If OUTPUT_FILENAME is # supplied, appends the time measurement to that file. . 
../common.sh PROFILE_1=flashexp1 PROFILE_2=flashexp2 PROXY_URL="http://127.0.0.1:8000/embed.html?facilitator=127.0.0.1:9002&ratelimit=off" DATA_FILE_NAME="$FLASHPROXY_DIR/dump" OUTPUT_FILENAME="$1" # Declare an array. declare -a PIDS_TO_KILL stop() { browser_clear "$PROFILE_1" browser_clear "$PROFILE_2" if [ -n "${PIDS_TO_KILL[*]}" ]; then echo "Kill pids ${PIDS_TO_KILL[@]}." kill "${PIDS_TO_KILL[@]}" fi echo "Delete data file." rm -f "$DATA_FILE_NAME" exit } trap stop EXIT echo "Start web server." "$THTTPD" -D -d "$FLASHPROXY_DIR" -p 8000 & PIDS_TO_KILL+=($!) echo "Start facilitator." "$FLASHPROXY_DIR"/facilitator -d --relay tor1.bamsoftware.com:9901 >/dev/null & PIDS_TO_KILL+=($!) visible_sleep 15 echo "Start client transport plugin." "$FLASHPROXY_DIR"/flashproxy-client --register --facilitator 127.0.0.1:9002 >/dev/null & PIDS_TO_KILL+=($!) visible_sleep 1 echo "Start Tor." "$TOR" -f "$FLASHPROXY_DIR"/torrc & PIDS_TO_KILL+=($!) echo "Start browsers." ensure_browser_started "$PROFILE_1" ensure_browser_started "$PROFILE_2" ./proxy-loop.sh "$PROXY_URL" "$PROFILE_1" "$PROFILE_2" >/dev/null 2>&1 & PIDS_TO_KILL+=($!) # Let Tor bootstrap. visible_sleep 15 repeat_download() { until torify wget http://torperf.torproject.org/.5mbfile --wait=0 --waitretry=0 -c -t 1000 -O "$DATA_FILE_NAME"; do echo "retrying" done } if [ -n "$OUTPUT_FILENAME" ]; then real_time repeat_download >> "$OUTPUT_FILENAME" else real_time repeat_download fi flashproxy-1.7/experiments/switching/remote-tor-constant.sh000077500000000000000000000030041236350636700244150ustar00rootroot00000000000000#!/bin/bash # Usage: ./remote-tor-constant.sh [OUTPUT_FILENAME] # # Tests a Tor download over an uninterrupted flash proxy. If OUTPUT_FILENAME is # supplied, appends the time measurement to that file. . ../common.sh PROFILE_1=flashexp1 PROFILE_2=flashexp2 PROXY_URL="http://127.0.0.1:8000/embed.html?facilitator=127.0.0.1:9002&ratelimit=off" DATA_FILE_NAME="$FLASHPROXY_DIR/dump" OUTPUT_FILENAME="$1" # Declare an array. declare -a PIDS_TO_KILL stop() { browser_clear "$PROFILE_1" if [ -n "${PIDS_TO_KILL[*]}" ]; then echo "Kill pids ${PIDS_TO_KILL[@]}." kill "${PIDS_TO_KILL[@]}" fi echo "Delete data file." rm -f "$DATA_FILE_NAME" exit } trap stop EXIT echo "Start web server." "$THTTPD" -D -d "$FLASHPROXY_DIR" -p 8000 & PIDS_TO_KILL+=($!) echo "Start facilitator." "$FLASHPROXY_DIR"/facilitator -d --relay tor1.bamsoftware.com:9901 >/dev/null & PIDS_TO_KILL+=($!) visible_sleep 15 echo "Start client transport plugin." "$FLASHPROXY_DIR"/flashproxy-client --register --facilitator 127.0.0.1:9002 >/dev/null & PIDS_TO_KILL+=($!) visible_sleep 1 echo "Start Tor." "$TOR" -f "$FLASHPROXY_DIR"/torrc & PIDS_TO_KILL+=($!) echo "Start browsers." browser_goto "$PROFILE_1" "$PROXY_URL" # Let Tor bootstrap. visible_sleep 15 if [ -n "$OUTPUT_FILENAME" ]; then real_time torify wget http://torperf.torproject.org/.5mbfile --wait=0 --waitretry=0 -c -t 1000 -O "$DATA_FILE_NAME" >> "$OUTPUT_FILENAME" else real_time torify wget http://torperf.torproject.org/.5mbfile --wait=0 --waitretry=0 -c -t 1000 -O "$DATA_FILE_NAME" fi flashproxy-1.7/experiments/switching/remote-tor-direct.sh000077500000000000000000000016151236350636700240440ustar00rootroot00000000000000#!/bin/bash # Usage: ./remote-tor-direct.sh [OUTPUT_FILENAME] # # Tests a Tor download without using a flash proxy. If OUTPUT_FILENAME is # supplied, appends the time measurement to that file. . ../common.sh DATA_FILE_NAME="$FLASHPROXY_DIR/dump" OUTPUT_FILENAME="$1" # Declare an array. 
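# PIDS_TO_KILL collects the PIDs of the background processes started below, so
# that the EXIT trap (stop) can kill them all on the way out.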
declare -a PIDS_TO_KILL stop() { if [ -n "${PIDS_TO_KILL[*]}" ]; then echo "Kill pids ${PIDS_TO_KILL[@]}." kill "${PIDS_TO_KILL[@]}" fi echo "Delete data file." rm -f "$DATA_FILE_NAME" exit } trap stop EXIT echo "Start Tor." "$TOR" -f torrc.bridge & PIDS_TO_KILL+=($!) # Let Tor bootstrap. visible_sleep 15 if [ -n "$OUTPUT_FILENAME" ]; then real_time torify wget http://torperf.torproject.org/.5mbfile --wait=0 --waitretry=0 -c -t 1000 -O "$DATA_FILE_NAME" >> "$OUTPUT_FILENAME" else real_time torify wget http://torperf.torproject.org/.5mbfile --wait=0 --waitretry=0 -c -t 1000 -O "$DATA_FILE_NAME" fi flashproxy-1.7/experiments/switching/switching-all.sh000077500000000000000000000020501236350636700232360ustar00rootroot00000000000000#!/bin/bash # Usage: ./switching-all.sh [-n NUM_ITERATIONS] # # Runs the switching experiment scripts several times and stores the results in # log files # local-http-constant-DATE.log # local-http-alternating-DATE.log # remote-tor-constant-DATE.log # remote-tor-alternating-DATE.log # where DATE is the current date. . ../common.sh NUM_ITERATIONS=1 while getopts "n:" OPTNAME; do if [ "$OPTNAME" == n ]; then NUM_ITERATIONS="$OPTARG" fi done DATE="$(date --iso)" > "local-http-constant-$DATE.log" repeat $NUM_ITERATIONS ./local-http-constant.sh "local-http-constant-$DATE.log" > "local-http-alternating-$DATE.log" repeat $NUM_ITERATIONS ./local-http-alternating.sh "local-http-alternating-$DATE.log" > "remote-tor-direct-$DATE.log" repeat $NUM_ITERATIONS ./remote-tor-direct.sh "remote-tor-direct-$DATE.log" > "remote-tor-constant-$DATE.log" repeat $NUM_ITERATIONS ./remote-tor-constant.sh "remote-tor-constant-$DATE.log" > "remote-tor-alternating-$DATE.log" repeat $NUM_ITERATIONS ./remote-tor-alternating.sh "remote-tor-alternating-$DATE.log" flashproxy-1.7/experiments/switching/torrc.bridge000066400000000000000000000002301236350636700224370ustar00rootroot00000000000000# This configuration file causes a direct Tor connection to use the same bridge # used by a flash proxy. UseBridges 1 Bridge tor1.bamsoftware.com:9001 flashproxy-1.7/experiments/throughput/000077500000000000000000000000001236350636700203475ustar00rootroot00000000000000flashproxy-1.7/experiments/throughput/httpget.py000077500000000000000000000014631236350636700224070ustar00rootroot00000000000000#!/usr/bin/env python # A simple HTTP downloader that discards what it downloads and prints the time # taken to download. We use this rather than "time wget" because the latter # includes time taken to establish (and possibly retry) the connection. import getopt import sys import time import urllib2 BLOCK_SIZE = 65536 label = None opts, args = getopt.gnu_getopt(sys.argv[1:], "l:") for o, a in opts: if o == "-l": label = a try: stream = urllib2.urlopen(args[0], timeout=100) start_time = time.time() while stream.read(BLOCK_SIZE): pass end_time = time.time() if label: print "%s %.3f" % (label, end_time - start_time) else: print "%.3f" % (end_time - start_time) except: if label: print "%s error" % label else: print "error" flashproxy-1.7/experiments/throughput/throughput-all.sh000077500000000000000000000001021236350636700236560ustar00rootroot00000000000000#!/bin/bash for n in $(seq 1 50); do ./throughput.sh -n $n done flashproxy-1.7/experiments/throughput/throughput.sh000077500000000000000000000056241236350636700231260ustar00rootroot00000000000000#!/bin/bash # Usage: ./throughput.sh [-n NUM_CLIENTS] # # Tests the raw throughput of a single proxy. 
This script starts a web # server serving swfcat.swf and a large data file, starts a facilitator, # client transport plugin, and socat shim, and then starts multiple # downloads through the proxy at once. Results are saved in a file # called results-NUM_CLIENTS-DATE, where DATE is the current date. # plain socks ws ws plain # httpget <---> socat <---> flashproxy-client <---> flashproxy <---> websockify <---> thttpd # 2000 9001 9000 8001 8000 . ../common.sh NUM_CLIENTS=1 while getopts "n:" OPTNAME; do if [ "$OPTNAME" == n ]; then NUM_CLIENTS="$OPTARG" fi done PROFILE=flashexp1 PROXY_URL="http://127.0.0.1:8000/embed.html?facilitator=127.0.0.1:9002&max_clients=$NUM_CLIENTS&ratelimit=off&facilitator_poll_interval=1.0" DATA_FILE_NAME="$FLASHPROXY_DIR/dump" RESULTS_FILE_NAME="results-$NUM_CLIENTS-$(date --iso)" # Declare an array. declare -a PIDS_TO_KILL stop() { browser_clear "$PROFILE" if [ -n "${PIDS_TO_KILL[*]}" ]; then echo "Kill pids ${PIDS_TO_KILL[@]}." kill "${PIDS_TO_KILL[@]}" fi echo "Delete data file." rm -f "$DATA_FILE_NAME" exit } trap stop EXIT echo "Create data file." dd if=/dev/null of="$DATA_FILE_NAME" bs=1M seek=10 2>/dev/null || exit echo "Start web server." "$THTTPD" -D -d "$FLASHPROXY_DIR" -p 8000 & PIDS_TO_KILL+=($!) echo "Start websockify." "$WEBSOCKIFY" -v 8001 127.0.0.1:8000 >/dev/null & PIDS_TO_KILL+=($!) echo "Start facilitator." "$FLASHPROXY_DIR"/facilitator -d --relay 127.0.0.1:8001 127.0.0.1 9002 >/dev/null & PIDS_TO_KILL+=($!) visible_sleep 1 echo "Start client transport plugin." "$FLASHPROXY_DIR"/flashproxy-client >/dev/null & PIDS_TO_KILL+=($!) visible_sleep 1 echo "Start browser." browser_goto "$PROFILE" "$PROXY_URL" visible_sleep 2 # Create sufficiently many client registrations. i=0 while [ $i -lt $NUM_CLIENTS ]; do echo -ne "\rRegister client $((i + 1))." echo $'POST / HTTP/1.0\r\n\r\nclient=127.0.0.1:9000' | socat STDIN TCP-CONNECT:127.0.0.1:9002 sleep 1 i=$((i + 1)) done echo visible_sleep 2 echo "Start socat." "$SOCAT" TCP-LISTEN:2000,fork,reuseaddr SOCKS4A:127.0.0.1:dummy:0,socksport=9001 & PIDS_TO_KILL+=($!) visible_sleep 1 > "$RESULTS_FILE_NAME" # Proxied downloads. declare -a WAIT_PIDS i=0 while [ $i -lt $NUM_CLIENTS ]; do echo "Start downloader $((i + 1))." ./httpget.py -l proxy http://127.0.0.1:2000/dump >> "$RESULTS_FILE_NAME" & WAIT_PIDS+=($!) i=$((i + 1)) done for pid in "${WAIT_PIDS[@]}"; do wait "$pid" done unset WAIT_PIDS # Direct downloads. declare -a WAIT_PIDS i=0 while [ $i -lt $NUM_CLIENTS ]; do echo "Start downloader $((i + 1))." ./httpget.py -l direct http://127.0.0.1:8000/dump >> "$RESULTS_FILE_NAME" & WAIT_PIDS+=($!) 
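	# As in the proxied pass above, record the download's PID so the wait
	# loop below blocks until every direct download has finished.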
i=$((i + 1)) done for pid in "${WAIT_PIDS[@]}"; do wait "$pid" done unset WAIT_PIDS flashproxy-1.7/facilitator/000077500000000000000000000000001236350636700160745ustar00rootroot00000000000000flashproxy-1.7/facilitator/.gitignore000066400000000000000000000007041236350636700200650ustar00rootroot00000000000000# files build by autogen.sh /aclocal.m4 /autom4te.cache /configure /depcomp /install-sh /missing /test-driver /Makefile.in # files built by ./configure /init.d/fp-facilitator /init.d/fp-registrar-email /init.d/fp-reg-decryptd /Makefile /config.status /config.log # files built by make /examples/fp-facilitator.conf /doc/*.1 # files for binary-distribution /flashproxy-facilitator-*.tar.* # files output by test-driver test*.log *test.log *test.trs flashproxy-1.7/facilitator/HACKING000066400000000000000000000015501236350636700170640ustar00rootroot00000000000000== Running from source checkout In order to run the code directly from a source checkout, you must make sure it can find the flashproxy module, located in the top-level directory of the source checkout, which is probably the parent directory. You have two options: 1. Install it in "development mode", see [1] flashproxy# python setup-common.py develop This process is reversible too: flashproxy# python setup-common.py develop --uninstall The disadvantage is that other programs (such as a system-installed flashproxy, or other checkouts in another directory) will see this development copy, rather than a more appropriate copy. 2. Export PYTHONPATH when you need to run $ export PYTHONPATH=.. $ make && make check The disadvantage is that you need to do this every shell session. [1] http://pythonhosted.org/distribute/setuptools.html#development-mode flashproxy-1.7/facilitator/INSTALL000066400000000000000000000025621236350636700171320ustar00rootroot00000000000000Install the dependencies. # apt-get install help2man make openssl python-m2crypto # apt-get install automake autoconf # if running from git # apt-get install apache2 You may use a different webserver, but currently we only provide an apache2 site config example, so you will need to adapt this to the correct syntax. # apt-get install flashproxy-common If your distro does not have flashproxy-common, you can install it directly from the top-level source directory: flashproxy# python setup-common.py install --record install.log \ --single-version-externally-managed Configure and install. $ ./autogen.sh # if running from git or ./configure doesn't otherwise exist $ ./configure --localstatedir=/var/local --enable-initscripts && make # make pre-install install post-install This installs fp-registrar.cgi, fp-facilitator, fp-registrar-email, fp-reg-decryptd, and fp-reg-decrypt to /usr/local/bin. It also installs System V init files to /etc/init.d/. The pre/post-install scripts create a user for the daemon to as, and sets up the initscripts in the default system runlevels. They also generate a RSA key in /usr/local/etc/flashproxy/reg-daemon.{key,pub}. Uninstall. # make pre-remove uninstall post-remove This will leave behind some config files (e.g. secret keys and passwords). 
To get rid of those too, run this instead: # make pre-purge uninstall post-purge flashproxy-1.7/facilitator/Makefile.am000066400000000000000000000132101236350636700201250ustar00rootroot00000000000000# our own variables fpfacilitatoruser = @fpfacilitatoruser@ initconfdir = @initconfdir@ cgibindir = @cgibindir@ # unfortunately sysvinit does not support having initscripts in /usr/local/etc # yet, so we have to hard code a path here. :( initscriptdir = /etc/init.d exampledir = $(docdir)/examples appenginedir = $(pkgdatadir)/appengine pkgconfdir = $(sysconfdir)/flashproxy appengineconfdir = $(pkgconfdir)/reg-appspot PYENV = PYTHONPATH='$(srcdir):$(PYTHONPATH)'; export PYTHONPATH; # automake PLVs dist_bin_SCRIPTS = fp-facilitator fp-registrar-email fp-reg-decryptd fp-reg-decrypt man1_MANS = $(dist_bin_SCRIPTS:%=doc/%.1) dist_cgibin_SCRIPTS = fp-registrar.cgi if DO_INITSCRIPTS initscript_names = fp-facilitator fp-registrar-email fp-reg-decryptd initscript_SCRIPTS = $(initscript_names:%=init.d/%) dist_initconf_DATA = $(initscript_names:%=default/%) endif dist_doc_DATA = doc/appspot-howto.txt doc/facilitator-design.txt doc/email-howto.txt doc/http-howto.txt doc/server-howto.txt README dist_example_DATA = examples/fp-facilitator.conf examples/reg-email.pass examples/facilitator-relays pkgconf_DATA = examples/facilitator-relays dist_appengine_DATA = appengine/app.yaml appengine/config.go appengine/fp-reg.go appengineconf_DATA = appengine/config.go CLEANFILES = examples/fp-facilitator.conf $(man1_MANS) EXTRA_DIST = examples/fp-facilitator.conf.in mkman.sh mkman.inc HACKING $(TESTS) TESTS = fp-facilitator-test.py # see http://www.gnu.org/software/automake/manual/html_node/Parallel-Test-Harness.html#index-TEST_005fEXTENSIONS TEST_EXTENSIONS = .py PY_LOG_COMPILER = $(PYTHON) AM_TESTS_ENVIRONMENT = $(PYENV) AM_PY_LOG_FLAGS = # AC_CONFIG_FILES doesn't fully-expand directory variables # see http://www.gnu.org/software/automake/manual/automake.html#Scripts subst_vars = sed -e 's,[@]cgibindir[@],$(cgibindir),g' # our own targets doc/%.1: % mkman.sh mkman.inc Makefile # mkdir needed for out-of-source build $(MKDIR_P) $$(dirname "$@") { $(PYENV) $(PYTHON) "$<" --help; } \ | { $(PYENV) $(srcdir)/mkman.sh "$<" $(VERSION) > "$@"; } examples/fp-facilitator.conf: examples/fp-facilitator.conf.in Makefile # mkdir needed for out-of-source build $(MKDIR_P) $$(dirname "$@") $(subst_vars) "$<" > "$@" pylint: $(dist_bin_SCRIPTS) pylint -E $^ install-data-local: $(INSTALL_DATA) -m 600 -t $(DESTDIR)$(pkgconfdir) $(srcdir)/examples/reg-email.pass uninstall-local: rm $(DESTDIR)$(pkgconfdir)/reg-email.pass # The {pre,post}-{install,remove} targets are just given as reference, and # ought to be separate scripts as part of your distro's installation process. # They are intentionally not linked to the install target since they require # root access and *must not be run* for fake/staged installs, e.g. when giving # non-standard directories to ./configure or DESTDIR to make. 
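# As documented in INSTALL, a typical sequence run as root on the target
# machine itself is:
#   make pre-install install post-install
# and, to remove again:
#   make pre-remove uninstall post-remove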
pre-install: meta-install-sanity install-user post-install: meta-install-sanity install-secrets install-symlinks install-daemon pre-remove: meta-install-sanity remove-daemon remove-symlinks post-remove: meta-install-sanity pre-purge: pre-remove remove-secrets remove-daemon-data post-purge: post-remove remove-user meta-install-sanity: test "x$(DESTDIR)" = "x" || { echo >&2 \ "don't run {pre,post}-{install,remove} when DESTDIR is set"; false; } install-user: id -u ${fpfacilitatoruser} >/dev/null 2>&1 || { \ which adduser >/dev/null 2>&1 && \ adduser --quiet \ --system \ --group \ --disabled-password \ --home ${pkgconfdir} \ --no-create-home \ --shell /bin/false \ ${fpfacilitatoruser} || \ useradd \ --system \ --home ${pkgconfdir} \ -M \ --shell /bin/false \ ${fpfacilitatoruser} ; } remove-user: : # deluser does actually remove the group as well id -u ${fpfacilitatoruser} >/dev/null 2>&1 && { \ which deluser >/dev/null 2>&1 && \ deluser --quiet \ --system \ ${fpfacilitatoruser} || \ userdel \ ${fpfacilitatoruser} ; } || true install-secrets: test -f ${pkgconfdir}/reg-daemon.key || { \ install -m 600 /dev/null ${pkgconfdir}/reg-daemon.key && \ openssl genrsa 2048 | tee ${pkgconfdir}/reg-daemon.key | \ openssl rsa -pubout > ${pkgconfdir}/reg-daemon.pub; } remove-secrets: for i in reg-daemon.key reg-daemon.pub; do \ rm -f ${pkgconfdir}/$$i; \ done install-symlinks: for i in fp-reg.go app.yaml; do \ $(LN_S) -f ${appenginedir}/$$i ${appengineconfdir}/$$i; \ done remove-symlinks: for i in fp-reg.go app.yaml; do \ rm -f ${appengineconfdir}/$$i; \ done # initscripts: assume that if the user wanted to install them, then they also # wanted to configure them, and that the system supports them. if this isn't the # case then either (a) they are doing a staged install for another system and # shouldn't be running {pre,post}-{install,remove} or (b) they shouldn't have # told us to install initscripts for their system that doesn't support it. install-daemon: if DO_INITSCRIPTS # initscripts use these directories for logs and runtime data mkdir -p ${localstatedir}/log mkdir -p ${localstatedir}/run for i in ${initscript_names}; do \ update-rc.d $$i defaults; \ invoke-rc.d $$i start; \ done endif remove-daemon: if DO_INITSCRIPTS # we don't rm created directories since they might be system-managed for i in ${initscript_names}; do \ invoke-rc.d $$i stop; \ update-rc.d $$i remove; \ done endif remove-daemon-data: if DO_INITSCRIPTS for i in ${initscript_names}; do \ rm -f ${localstatedir}/log/$$i.log* \ rm -f ${localstatedir}/run/$$i.pid \ done endif .PHONY: pre-install post-install pre-remove post-remove pre-purge post-purge .PHONY: install-user install-secrets install-symlinks install-daemon .PHONY: remove-user remove-secrets remove-symlinks remove-daemon .PHONY: pylint flashproxy-1.7/facilitator/README000066400000000000000000000031551236350636700167600ustar00rootroot00000000000000This package contains files needed to run a flashproxy facilitator. Normal users who just want to bypass censorship, should use the flashproxy-client package instead. For instructions on building/installing this package from source, see INSTALL. (This should only be necessary if your distro does not already integrate this package into its repositories.) The flashproxy config directory is installation-dependant, usually at /etc/flashproxy or /usr/local/etc/flashproxy. You are strongly recommended to keep this on encrypted storage. 
The main backends, fp-facilitator and fp-reg-decryptd, are installed as system services, and you should be able to configure them in the normal place for your system (e.g. /etc/default/fp-facilitator for a Debian-based system using initscripts). You probably need to at least set RUN_DAEMON=yes to enable the services. Each installation has its own public-private keypair, stored in the flashproxy config directory. You will need to securely distribute the public key (reg-daemon.pub) to your users - e.g. by publishing it somewhere, signed by your own PGP key. There are three supported helper rendezvous methods: HTTP, email, and appspot. Each helper method may require additional manual configuration and might also depend on other helper methods; see the corresponding doc/x-howto.txt for more details. At a very minimum, you must configure and enable the HTTP method, since that also serves the browser proxies. For suggestions on configuring a dedicated facilitator machine, see doc/server-howto.txt. For documentation on the design of the facilitator components, see doc/facilitator-design.txt. flashproxy-1.7/facilitator/appengine/000077500000000000000000000000001236350636700200425ustar00rootroot00000000000000flashproxy-1.7/facilitator/appengine/app.yaml000066400000000000000000000002761236350636700215130ustar00rootroot00000000000000# override this with appcfg.py -A $YOUR_APP_ID application: facilitator-registration-example version: 1 runtime: go api_version: go1 handlers: - url: /.* script: _go_app secure: always flashproxy-1.7/facilitator/appengine/config.go000066400000000000000000000010471236350636700216400ustar00rootroot00000000000000/* This is the server-side code that runs on Google App Engine for the "appspot" registration method. See doc/appspot-howto.txt for more details about setting up an application, and advice on running one. To upload a new version: $ torify ~/go_appengine/appcfg.py --no_cookies -A $YOUR_APP_ID update . */ package fp_reg // host:port/basepath of the facilitator you want to register with // for example, fp-facilitator.org or example.com:12345/facilitator // https:// and /reg/ will be prepended and appended respectively. 
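// With FP_FACILITATOR = "fp-facilitator.example.com" (a placeholder host),
// regHandler in fp-reg.go forwards a request for /reg/<blob> to
// https://fp-facilitator.example.com/reg/<blob>.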
const FP_FACILITATOR = "" flashproxy-1.7/facilitator/appengine/fp-reg.go000066400000000000000000000024411236350636700215520ustar00rootroot00000000000000package fp_reg import ( "io" "net" "net/http" "path" "appengine" "appengine/urlfetch" ) func robotsTxtHandler(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", "text/plain; charset=utf-8") w.Write([]byte("User-agent: *\nDisallow:\n")) } func ipHandler(w http.ResponseWriter, r *http.Request) { remoteAddr := r.RemoteAddr if net.ParseIP(remoteAddr).To4() == nil { remoteAddr = "[" + remoteAddr + "]" } w.Header().Set("Content-Type", "text/plain; charset=utf-8") w.Write([]byte(remoteAddr)) } func regHandler(w http.ResponseWriter, r *http.Request) { dir, blob := path.Split(path.Clean(r.URL.Path)) if dir != "/reg/" { http.NotFound(w, r) return } client := urlfetch.Client(appengine.NewContext(r)) resp, err := client.Get("https://" + FP_FACILITATOR + "/reg/" + blob) if err != nil { http.Error(w, err.Error(), http.StatusInternalServerError) return } for key, values := range resp.Header { for _, value := range values { w.Header().Add(key, value) } } w.WriteHeader(resp.StatusCode) io.Copy(w, resp.Body) } func init() { http.HandleFunc("/robots.txt", robotsTxtHandler) http.HandleFunc("/ip", ipHandler) http.HandleFunc("/reg/", regHandler) if FP_FACILITATOR == "" { panic("FP_FACILITATOR empty; did you forget to edit config.go?") } } flashproxy-1.7/facilitator/autogen.sh000077500000000000000000000000311236350636700200670ustar00rootroot00000000000000#!/bin/sh autoreconf -if flashproxy-1.7/facilitator/configure.ac000066400000000000000000000040641236350636700203660ustar00rootroot00000000000000AC_PREREQ([2.68]) AC_INIT([flashproxy-facilitator], [1.7]) AM_INIT_AUTOMAKE([-Wall foreign]) AC_ARG_VAR(fpfacilitatoruser, [the user/group for the facilitator to run as]) fpfacilitatoruser="${fpfacilitatoruser:-fp-facilitator}" # check that we want to install initscripts. don't bother checking that they # are supported, since we might be doing a staged install on a different system. # disabled by default since it ignores ${prefix} so `make distcheck` would fail AC_ARG_ENABLE([initscripts], [AS_HELP_STRING([--enable-initscripts], [install and configure sysvinit-style initscripts (default no)])], [do_initscripts=yes], [do_initscripts=]) AM_CONDITIONAL([DO_INITSCRIPTS], [test "x$do_initscripts" = xyes]) AC_ARG_VAR(initconfdir, [directory for initscripts configuration, if enabled]) # Try to detect the appropriate conf dir. Several systems have both /etc/default # and /etc/sysconfig but latter is always primary. 
if test "x$do_initscripts" = xyes; then if test "x$initconfdir" = x; then AC_CHECK_FILE(/etc/conf.d, [initconfdir='$(sysconfdir)/conf.d}'], [# Gentoo/Arch AC_CHECK_FILE(/etc/sysconfig, [initconfdir='$(sysconfdir)/sysconfig'], [# RedHat/Fedora/Slax/Mandriva/SuSE AC_CHECK_FILE(/etc/default, [initconfdir='$(sysconfdir)/default'], [# Debian/Ubuntu AC_MSG_ERROR([could not determine system initscripts config dir; please set initconfdir manually.])])])]) fi fi # Try to detect cgi-bin directory, falling back to $(libexec) if not found # from http://wiki.apache.org/httpd/DistrosDefaultLayout AC_ARG_VAR(cgibindir, [directory for CGI executables]) if test "x$cgibindir" = x; then AC_CHECK_FILE(/usr/lib/cgi-bin, [cgibindir='$(libdir)/cgi-bin'], [ AC_CHECK_FILE(/var/www/cgi-bin, [cgibindir='/var/www/cgi-bin'], [ AC_CHECK_FILE(/srv/httpd/cgi-bin, [cgibindir='/srv/httpd/cgi-bin'], [ AC_MSG_WARN([could not determine system CGI executables dir, using \$(libexecdir); set cgibindir to override.]) cgibindir='$(libexecdir)' ])])]) fi AC_PROG_LN_S AM_PATH_PYTHON AC_CONFIG_FILES([Makefile init.d/fp-facilitator init.d/fp-registrar-email init.d/fp-reg-decryptd]) AC_OUTPUT flashproxy-1.7/facilitator/default/000077500000000000000000000000001236350636700175205ustar00rootroot00000000000000flashproxy-1.7/facilitator/default/fp-facilitator000066400000000000000000000005541236350636700223530ustar00rootroot00000000000000# Change to "yes" to run the service. RUN_DAEMON="no" # Uncomment this to log potentially sensitive information from your users. # This may be useful for debugging or diagnosing functional problems, but # should be avoided in most other cases. #UNSAFE_LOGGING="yes" # Set the port for this service to listen on. # If not set, uses the default (9002). #PORT=9002 flashproxy-1.7/facilitator/default/fp-reg-decryptd000066400000000000000000000005541236350636700224430ustar00rootroot00000000000000# Change to "yes" to run the service. RUN_DAEMON="no" # Uncomment this to log potentially sensitive information from your users. # This may be useful for debugging or diagnosing functional problems, but # should be avoided in most other cases. #UNSAFE_LOGGING="yes" # Set the port for this service to listen on. # If not set, uses the default (9003). #PORT=9003 flashproxy-1.7/facilitator/default/fp-registrar-email000066400000000000000000000004131236350636700231330ustar00rootroot00000000000000# Change to "yes" to run the service. RUN_DAEMON="no" # Uncomment this to log potentially sensitive information from your users. # This may be useful for debugging or diagnosing functional problems, but # should be avoided in most other cases. #UNSAFE_LOGGING="yes" flashproxy-1.7/facilitator/doc/000077500000000000000000000000001236350636700166415ustar00rootroot00000000000000flashproxy-1.7/facilitator/doc/appspot-howto.txt000066400000000000000000000060441236350636700222320ustar00rootroot00000000000000These are instructions for how to set up a Google App Engine application for the appspot rendezvous method (flashproxy-reg-appspot). It requires the HTTP rendezvous to be available, so you should set that up first and ensure it is working correctly, or find someone else's to use. If you choose the latter, note that it is *their* reg-daemon.pub that your users must give to flashproxy-reg-appspot. For more information about Google App Engine, see the links at the bottom of this document. You are strongly recommended to create a Google account dedicated for this purpose, rather than a personal or organisation account. 
See email-howto.txt for how to do that. Download the SDK: https://developers.google.com/appengine/downloads#Google_App_Engine_SDK_for_Go This guide was written for version 1.8.9 of the SDK. Find your facilitator appengine installation, probably in reg-appspot/ in your flashproxy config dir. Edit config.go to point to the address of the HTTP facilitator. Follow the directions to register a new application: https://developers.google.com/appengine/docs/go/gettingstarted/uploading Enter an application ID and create the application. To run locally using the development server: $ ~/go_appengine/goapp serve reg-appspot/ You are advised to do this on a non-production machine, away from the main facilitator. Use the appcfg.py program to upload the program. It should look something like this: $ torify ./go_appengine/goapp --no_cookies -A update reg-appspot/ 07:25 PM Host: appengine.google.com 07:25 PM Application: application-id; version: 1 07:25 PM Starting update of app: application-id, version: 1 07:25 PM Getting current resource limits. Email: xxx@gmail.com Password for xxx@gmail.com: 07:26 PM Scanning files on local disk. 07:26 PM Cloning 2 application files. 07:26 PM Uploading 1 files and blobs. 07:26 PM Uploaded 1 files and blobs 07:26 PM Compilation starting. 07:26 PM Compilation: 1 files left. 07:26 PM Compilation completed. 07:26 PM Starting deployment. 07:26 PM Checking if deployment succeeded. 07:26 PM Deployment successful. 07:26 PM Checking if updated app version is serving. 07:26 PM Completed update of app: application-id, version: 1 The --no_cookies flag stops authentication cookies from being written to disk, in ~/.appcfg_cookies. We recommend this for security, since no long-running services need this password, only the update process above which is run once. However, if this reasoning doesn't apply to you (e.g. if your fp-registrar-email uses the same account, so that the password is already on the disk) *and* you find yourself running update a lot for some reason, then you may at your own risk omit it for convenience. Once logged in, you can disable logging for the application. Click "Logs" on the left panel. Under "Total Logs Storage", click "Change Settings". Enter "0" in the "days of logs" box and click "Save Settings". General links: https://developers.google.com/appengine/ https://developers.google.com/appengine/docs/whatisgoogleappengine https://developers.google.com/appengine/docs/go/gettingstarted/ flashproxy-1.7/facilitator/doc/email-howto.txt000066400000000000000000000102151236350636700216260ustar00rootroot00000000000000These are instructions for setting up an email account for use with the email rendezvous (fp-registrar-email / flashproxy-reg-email). You are strongly advised to use an email account dedicated for this purpose. If your email provider supports it, we advise you to use an app-specific password rather than your account password. Once you have an email address and the password for it, you should add this information to reg-email.pass in your flashproxy config directory. For your security, this file should be on encrypted storage. The following section provides some instructions on how to set up a new Google account whilst revealing as little information to Google as is feasible. == Creating a Google account securely These instructions were current as of May 2013. You may have trouble if you are using Tor to create the account, for two reasons. The first is that exit nodes are a source of abuse and Google is more suspicious of them. 
The second is that Gmail is suspicious and can lock you out of the account when your IP address is changing. While setting up the account, use a single node in your torrc ExitNodes configuration. Choose a U.S. exit node, one with low bandwidth. Go to https://mail.google.com/. Allow JavaScript to run (even from youtube.com; it seems to be necessary). Click the "CREATE AN ACCOUNT" button. Enter the account details. You don't need to fill in "Your current email address". Enter a mobile phone number for later activation of two-factor authentication. Solve the captcha. Click "Next Step". You may have to do a phone SMS verification here. At this point the Gmail account is created. If you are pushed into joining Google+, close everything out and go back to https://mail.google.com/. Log out of the account and then back in again. There will be new text in the lower right reading "Last account activity". Click "Details" and turn off the unusual activity alerts. This will keep you from getting locked out when you come from different IP addresses. At this point you should remove the temporary ExitNodes configuration from torrc. Add a filter to prevent registrations from being marked as spam. Click on the gear icon and select "Settings". Select "Filters" then "Create a new filter". For "Has the words" type "in:spam", then "Create filter with this search". There will be a warning that filters using "in:" will never match incoming mail; this appears to be false and you can just click OK. Check "Never send it to Spam" and click "Create filter". Enable IMAP. Click the gear icon, then "Settings", then "Forwarding and POP/IMAP". * Disable POP * Enable IMAP * Auto-Expunge on Click "Save Changes". Enable two-factor authentication. We do this not so much for the two-factor, but because it allows creating an independent password that is used only for IMAP and does not have access to the web interface of Gmail. Two-factor authentication also enables you to set up a Google Authenticator one-time password token and decouple the account from the phone number. Click the email address in the upper right, then "Account". Click "Security". By "2-step verification" click "Setup". Click through until it lets you set up. The phone number you provided when the account was created will be automatically filled in. Choose "Text message (SMS)" then click "Send code". Get your text message, type it in, and hit "Verify". Uncheck "Trust this computer" on the next screen. Finally "Confirm". Now set up a Google Authenticator secret and. Under "Primary way you receive codes", click "Switch to app". Choose "BlackBerry" and "Continue". Copy the secret key to a file. Use a program such as https://github.com/tadeck/onetimepass to generate a verification code and click "Verify and Save". Now you can remove the phone number if you wish by clicking "Remove" next to it. Under "Backup codes", click "Print or download", and save the codes to a file so you can log in if all else fails. Still on the 2-step verification page, click the "App-specific passwords" tab and the "Manage application-specific passwords" button. Under "Select app", select "Custom" and enter "IMAP" for the name. Click "Generate". Store the password in reg-email.pass, as mentioned in the introduction. 
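Optionally, before giving the new password to fp-registrar-email, you can
confirm that IMAP logins with it actually work. The following Python 2 sketch
is only an illustration (the account name and password are placeholders); it
assumes Gmail's IMAP server at imap.gmail.com port 993, the default noted in
examples/reg-email.pass:

    #!/usr/bin/env python
    # Quick check that the app-specific password is accepted over IMAP.
    # Replace the placeholders below with your own account and password.
    import imaplib
    imap = imaplib.IMAP4_SSL("imap.gmail.com", 993)
    imap.login("flashproxyreg.a@gmail.com", "your-app-specific-password")
    # Selecting the inbox mirrors what the poller needs to do.
    print imap.select("INBOX")
    imap.logout()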
flashproxy-1.7/facilitator/doc/facilitator-design.txt000066400000000000000000000043721236350636700231600ustar00rootroot00000000000000The main fp-facilitator program is a backend server that is essentially a dynamic database of client addresses, as well as helper programs that receive client registrations from the Internet over various means and pass them to the backend. There are three supported helper rendezvous methods: HTTP, email, and appspot. fp-reg-decrypt is a simple program that forwards its standard input to a local fp-reg-decryptd process. It is used by other components as a utility, but is also useful for debugging and testing. fp-reg-decryptd accepts connections containing encrypted client registrations and forwards them to the facilitator. It exists as a process of its own so that only one program requires access to the facilitator's private key. The HTTP rendezvous uses an HTTP server and a CGI program. The HTTP server is responsible for speaking TLS and invoking the CGI program. The CGI program receives client registrations and proxy requests for clients, parses them, and forwards them to the backend. We use Apache 2 as the HTTP server. The CGI script is fp-registrar.cgi. Currently this is also the only method for accepting browser proxy registrations, so you must enable this method, otherwise your clients will not be served. For the HTTP rendezvous, there are two formats you may use for a client registration - plain vs. (end-to-end) encrypted. Direct registrations (e.g. flashproxy-reg-http) can use the plain format over HTTPS, which provides transport encryption; but if you proxy registrations through another service (e.g. reg-appspot), you must use the end-to-end format. On the client side, you may use flashproxy-reg-url to generate registration URLs for the end-to-end encrypted format. The email rendezvous uses the helper program fp-registrar-email. Clients use the flashproxy-reg-email program to send an encrypted message to a Gmail address. The poller constantly checks for new messages and forwards them to fp-reg-decrypt. The appspot rendezvous uses Google's appengine platform as a proxy for the HTTP method, either yours or that of another facilitator. It takes advantage of the fact that a censor cannot distinguish between a TLS connection to appspot.com or google.com, since the IPs are the same, and it is highly unlikely that anyone will try to block the latter. flashproxy-1.7/facilitator/doc/http-howto.txt000066400000000000000000000033771236350636700215310ustar00rootroot00000000000000These are instructions for how to set up an Apache Web Server for handling the HTTP client registration method (fp-registrar.cgi / flashproxy-reg-http / flashproxy-reg-url), as well as for browser proxies to poll and receive a client to serve. Unfortunately we only had time to give commands specific to the Debian distribution of Apache; other distributions may need to tweak some things, e.g. a2enmod, a2ensite only exist on Debian. == HTTP server setup Apache is the web server that runs the CGI program. # apt-get install apache2 libapache2-mod-evasive # a2enmod ssl headers Edit /etc/apache2/ports.conf and comment out the port 80 configuration. # NameVirtualHost *:80 # Listen 80 Copy examples/fp-facilitator.conf to /etc/apache2/sites-available/ or wherever is appropriate for your Apache2 installation, then edit it as per the instructions given in that file itself. Link the configured site into sites-enabled. 
# a2ensite fp-facilitator.conf === HTTPS setup The HTTP server should serve only over HTTPS and not unencrypted HTTP. You will need a certificate and private key from a certificate authority. An article on making a certificate signing request and getting it signed is here: http://www.debian-administration.org/articles/284 This is the basic command to generate a CSR. $ openssl req -new -nodes -out fp-facilitator.csr.pem The instructions below assume you have an offline private key in fp-facilitator.key.pem and a certificate in fp-facilitator.crt.pem. Make a file containing both the private key and a certificate. $ cat fp-facilitator.key.pem fp-facilitator.crt.pem > fp-facilitator.pem $ chmod 400 fp-facilitator.pem Copy the new fp-facilitator.pem to the facilitator server as /etc/apache2/fp-facilitator.pem. # /etc/init.d/apache2 restart flashproxy-1.7/facilitator/doc/server-howto.txt000066400000000000000000000027561236350636700220600ustar00rootroot00000000000000This document describes how to configure a server running the facilitator on Debian 7. It is not necessary to make things work, but gives you some added security, and is a good reference if you want to create a dedicated VM for a facilitator from scratch. We will use the domain name fp-facilitator.example.com. == Basic and security setup Install some essential packages and configure a firewall. # cat >/etc/apt/apt.conf.d/90suggests< # Update this with your hostname! ServerName fp-facilitator.example.com DocumentRoot /dev/null ScriptAliasMatch ^(.*) @cgibindir@/fp-registrar.cgi$1 # Non-Debian distros will need to tweak the log dir too # Only log errors by default, to protect sensitive information. CustomLog /dev/null common #CustomLog ${APACHE_LOG_DIR}/fp-access.log common ErrorLog ${APACHE_LOG_DIR}/fp-error.log LogLevel warn # requires mod_ssl SSLEngine on # Manually install your certificate to the following location. SSLCertificateFile /etc/apache2/fp-facilitator.pem # If you got an intermediate certificate, uncomment the following line # and install the certificate to that location too. #SSLCertificateChainFile /etc/apache2/fp-intermediate.pem # requires mod_headers Header add Strict-Transport-Security "max-age=15768000" flashproxy-1.7/facilitator/examples/reg-email.pass000066400000000000000000000007761236350636700224560ustar00rootroot00000000000000# This file should contain "[] " on a single line, # separated by whitespace. If is omitted, it defaults to # imap.( domain):993. # # If your email provider supports it, we advise you to use an app-specific # password rather than your account password; see email-howto.txt in this # package's documentation for details on how to do this. # #imap.gmail.com:993 flashproxyreg.a@gmail.com topsecret11!one #flashproxyreg.a@gmail.com passwords with spaces are ok too flashproxy-1.7/facilitator/fp-facilitator000077500000000000000000000465061236350636700207410ustar00rootroot00000000000000#!/usr/bin/env python """ The flashproxy facilitator. """ import SocketServer import getopt import os import socket import sys import threading import time from collections import defaultdict from flashproxy import fac from flashproxy import proc from flashproxy.reg import Transport, Endpoint from flashproxy.util import parse_addr_spec, format_addr, canonical_ip LISTEN_ADDRESS = "127.0.0.1" DEFAULT_LISTEN_PORT = 9002 DEFAULT_RELAY_PORT = 9001 DEFAULT_LOG_FILENAME = "fp-facilitator.log" # Tell proxies to poll for clients every POLL_INTERVAL seconds. 
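# This value is echoed back to proxies in the CHECK-BACK-IN field of GET
# responses (see get_check_back_in_for_proxy below).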
POLL_INTERVAL = 600 # Don't indulge clients for more than this many seconds. CLIENT_TIMEOUT = 1.0 # Buffer no more than this many bytes when trying to read a line. READLINE_MAX_LENGTH = 10240 MAX_PROXIES_PER_CLIENT = 5 DEFAULT_OUTER_TRANSPORTS = ["websocket"] LOG_DATE_FORMAT = "%Y-%m-%d %H:%M:%S" class UnknownTransport(Exception): pass class options(object): listen_port = DEFAULT_LISTEN_PORT log_filename = DEFAULT_LOG_FILENAME log_file = sys.stdout relay_filename = None daemonize = True pid_filename = None privdrop_username = None safe_logging = True outer_transports = DEFAULT_OUTER_TRANSPORTS def usage(f = sys.stdout): print >> f, """\ Usage: %(progname)s -r RELAY Flash proxy facilitator: Register client addresses and serve them out again. Listen on 127.0.0.1 and port PORT (by default %(port)d). -d, --debug don't daemonize, log to stdout. -h, --help show this help. -l, --log FILENAME write log to FILENAME (default \"%(log)s\"). -p, --port PORT listen on PORT (default %(port)d). --pidfile FILENAME write PID to FILENAME after daemonizing. --privdrop-user USER switch UID and GID to those of USER. -r, --relay-file RELAY learn relays from FILE. --outer-transports TRANSPORTS comma-sep list of outer transports to accept proxies for (by default %(outer-transports)s) --unsafe-logging don't scrub IP addresses from logs.\ """ % { "progname": sys.argv[0], "port": DEFAULT_LISTEN_PORT, "log": DEFAULT_LOG_FILENAME, "outer-transports": ",".join(DEFAULT_OUTER_TRANSPORTS) } def safe_str(s): """Return "[scrubbed]" if options.safe_logging is true, and s otherwise.""" if options.safe_logging: return "[scrubbed]" else: return s log_lock = threading.Lock() def log(msg): with log_lock: print >> options.log_file, (u"%s %s" % (time.strftime(LOG_DATE_FORMAT), msg)).encode("UTF-8") options.log_file.flush() class Endpoints(object): """ Tracks endpoints (either client/server) and the transports they support. """ matchingLock = threading.Condition() def __init__(self, af, maxserve=float("inf")): self.af = af self._maxserve = maxserve self._endpoints = {} # address -> transport self._indexes = defaultdict(lambda: defaultdict(set)) # outer -> inner -> [ addresses ] self._served = {} # address -> num_times_served self._cv = threading.Condition() def getNumEndpoints(self): """:returns: the number of endpoints known to us.""" with self._cv: return len(self._endpoints) def getNumUnservedEndpoints(self): """:returns: the number of unserved endpoints known to us.""" with self._cv: return len(filter(lambda t: t == 0, self._served.itervalues())) def addEndpoint(self, addr, transport): """Add an endpoint. :param addr: Address of endpoint, usage-dependent. :param list transports: List of transports. :returns: False if the address is already known, in which case no update is made to its supported transports, else True. """ transport = Transport.parse(transport) with self._cv: if addr in self._endpoints: return False inner, outer = transport self._endpoints[addr] = transport self._served[addr] = 0 self._indexes[outer][inner].add(addr) self._cv.notify() return True def delEndpoint(self, addr): """Forget an endpoint. :param addr: Address of endpoint, usage-dependent. :returns: False if the address was already forgotten, else True. 
""" with self._cv: if addr not in self._endpoints: return False inner, outer = self._endpoints[addr] self._indexes[outer][inner].remove(addr) # TODO(infinity0): maybe delete empty bins del self._served[addr] del self._endpoints[addr] self._cv.notify() return True def _findInnerForOuter(self, *supported_outer): """Find all endpoint addresses that support any of the given outer transports. Results are grouped by the inner transport. :returns: { inner: [addr] }, where each address supports some outer transport from supported_outer. """ inners = defaultdict(set) for outer in set(supported_outer) & set(self._indexes.iterkeys()): for inner, addrs in self._indexes[outer].iteritems(): if addrs: # don't add empty bins, to avoid false-positive key checks inners[inner].update(addrs) return inners def _serveReg(self, addrpool): """ :param list addrpool: List of candidate addresses. :returns: An Endpoint whose address is from the given pool. The serve counter for that address is also incremented, and if it hits self._maxserve the endpoint is removed from this collection. :raises: KeyError if any address is not registered with this collection """ if not addrpool: raise ValueError("gave empty address pool") prio_addr = min(addrpool, key=lambda a: self._served[a]) assert self._served[prio_addr] < self._maxserve self._served[prio_addr] += 1 transport = self._endpoints[prio_addr] if self._served[prio_addr] == self._maxserve: self.delEndpoint(prio_addr) return Endpoint(prio_addr, transport) EMPTY_MATCH = (None, None) @staticmethod def match(ptsClient, ptsServer, supported_outer): """ :returns: A tuple (client Reg, server Reg) arbitrarily selected from the available endpoints that can satisfy supported_outer. """ if ptsClient.af != ptsServer.af: raise ValueError("address family not equal") if ptsServer._maxserve < float("inf"): raise ValueError("servers mustn't run out") # need to operate on both structures # so hold both locks plus a pair-wise lock with Endpoints.matchingLock, ptsClient._cv, ptsServer._cv: server_inner = ptsServer._findInnerForOuter(*supported_outer) client_inner = ptsClient._findInnerForOuter(*supported_outer) both = set(server_inner.keys()) & set(client_inner.keys()) if not both: return Endpoints.EMPTY_MATCH # find a client to serve client_pool = [addr for inner in both for addr in client_inner[inner]] assert len(client_pool) client_reg = ptsClient._serveReg(client_pool) # find a server to serve that has the same inner transport inner = client_reg.transport.inner assert inner in server_inner and len(server_inner[inner]) server_reg = ptsServer._serveReg(server_inner[inner]) # assume servers never run out return (client_reg, server_reg) class Handler(SocketServer.StreamRequestHandler): def __init__(self, *args, **kwargs): self.deadline = time.time() + CLIENT_TIMEOUT # Buffer for readline. self.buffer = "" SocketServer.StreamRequestHandler.__init__(self, *args, **kwargs) def recv(self): timeout = self.deadline - time.time() self.connection.settimeout(timeout) return self.connection.recv(1024) def readline(self): # A line already buffered? 
i = self.buffer.find("\n") if i >= 0: line = self.buffer[:i+1] self.buffer = self.buffer[i+1:] return line auxbuf = [] buflen = len(self.buffer) while True: data = self.recv() if not data: if self.buffer or auxbuf: raise socket.error("readline: stream does not end with a newline") else: return "" i = data.find("\n") if i >= 0: line = self.buffer + "".join(auxbuf) + data[:i+1] self.buffer = data[i+1:] return line else: auxbuf.append(data) buflen += len(data) if buflen >= READLINE_MAX_LENGTH: raise socket.error("readline: refusing to buffer %d bytes (last read was %d bytes)" % (buflen, len(data))) @proc.catch_epipe def handle(self): num_lines = 0 while True: try: line = self.readline() if not line: break num_lines += 1 except socket.error, e: log("socket error after reading %d lines: %s" % (num_lines, str(e))) break if not self.handle_line(line): break def handle_line(self, line): if not (len(line) > 0 and line[-1] == '\n'): raise ValueError("No newline at end of string returned by readline") try: command, params = fac.parse_transaction(line[:-1]) except ValueError, e: return self.error("fac.parse_transaction: %s" % e) if command == "GET": return self.do_GET(params) elif command == "PUT": return self.do_PUT(params) else: self.send_error() return False def send_ok(self): print >> self.wfile, "OK" def send_error(self): print >> self.wfile, "ERROR" def error(self, log_msg): log(log_msg) self.send_error() return False # Handle a GET request (got flashproxy poll; need to return a proper client registration) # Example: GET FROM="3.3.3.3:3333" PROXY-TRANSPORT="websocket" PROXY-TRANSPORT="webrtc" def do_GET(self, params): proxy_spec = fac.param_first("FROM", params) if proxy_spec is None: return self.error(u"GET missing FROM param") try: proxy_addr = canonical_ip(*parse_addr_spec(proxy_spec, defport=0)) except ValueError, e: return self.error(u"syntax error in proxy address %s: %s" % (safe_str(repr(proxy_spec)), safe_str(repr(str(e))))) transport_list = fac.param_getlist("PROXY-TRANSPORT", params) if not transport_list: return self.error(u"GET missing PROXY-TRANSPORT param") try: client_reg, relay_reg = get_match_for_proxy(proxy_addr, transport_list) except Exception, e: return self.error(u"error getting reg for proxy address %s: %s" % (safe_str(repr(proxy_spec)), safe_str(repr(str(e))))) check_back_in = get_check_back_in_for_proxy(proxy_addr) if client_reg: log(u"proxy (%s) gets client '%s' (supported transports: %s) (num relays: %s) (remaining regs: %d/%d)" % (safe_str(repr(proxy_spec)), safe_str(repr(client_reg.addr)), transport_list, num_relays(), num_unhandled_regs(), num_regs())) print >> self.wfile, fac.render_transaction("OK", ("CLIENT", format_addr(client_reg.addr)), ("CLIENT-TRANSPORT", client_reg.transport.outer), ("RELAY", format_addr(relay_reg.addr)), ("RELAY-TRANSPORT", relay_reg.transport.outer), ("CHECK-BACK-IN", str(check_back_in))) else: log(u"proxy (%s) gets none" % safe_str(repr(proxy_spec))) print >> self.wfile, fac.render_transaction("NONE", ("CHECK-BACK-IN", str(check_back_in))) return True # Handle a PUT request (client made a registration request; register it.) 
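# A registration is accepted only if its outer transport is one of
# options.outer_transports (i.e. something our relays can speak); anything
# else gets an ERROR response.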
# Example: PUT CLIENT="1.1.1.1:5555" TRANSPORT="obfs3|websocket" def do_PUT(self, params): # Check out if we recognize the transport in this registration request transport_spec = fac.param_first("TRANSPORT", params) if transport_spec is None: return self.error(u"PUT missing TRANSPORT param") transport = Transport.parse(transport_spec) # See if we have relays that support this transport if transport.outer not in options.outer_transports: return self.error(u"Unrecognized transport: %s" % transport.outer) client_spec = fac.param_first("CLIENT", params) if client_spec is None: return self.error(u"PUT missing CLIENT param") try: reg = Endpoint.parse(client_spec, transport) except (UnknownTransport, ValueError) as e: # XXX should we throw a better error message to the client? Is it possible? return self.error(u"syntax error in %s: %s" % (safe_str(repr(client_spec)), safe_str(repr(str(e))))) try: ok = put_reg(reg) except Exception, e: return self.error(u"error putting reg %s: %s" % (safe_str(repr(client_spec)), safe_str(repr(str(e))))) if ok: log(u"client %s (transports: %s) (remaining regs: %d/%d)" % (safe_str(unicode(reg)), reg.transport, num_unhandled_regs(), num_regs())) else: log(u"client %s (already present) (transports: %s) (remaining regs: %d/%d)" % (safe_str(unicode(reg)), reg.transport, num_unhandled_regs(), num_regs())) self.send_ok() return True finish = proc.catch_epipe(SocketServer.StreamRequestHandler.finish) class Server(SocketServer.ThreadingMixIn, SocketServer.TCPServer): allow_reuse_address = True # Addresses are plain tuples (str(host), int(port)) CLIENTS = { socket.AF_INET: Endpoints(af=socket.AF_INET, maxserve=MAX_PROXIES_PER_CLIENT), socket.AF_INET6: Endpoints(af=socket.AF_INET6, maxserve=MAX_PROXIES_PER_CLIENT) } RELAYS = { socket.AF_INET: Endpoints(af=socket.AF_INET), socket.AF_INET6: Endpoints(af=socket.AF_INET6) } def num_relays(): """Return the total number of relays.""" return sum(pts.getNumEndpoints() for pts in RELAYS.itervalues()) def num_regs(): """Return the total number of registrations.""" return sum(pts.getNumEndpoints() for pts in CLIENTS.itervalues()) def num_unhandled_regs(): """Return the total number of unhandled registrations.""" return sum(pts.getNumUnservedEndpoints() for pts in CLIENTS.itervalues()) def addr_af(addr_str): """Return the address family for an address string. This is a plain string, not a tuple, and IPv6 addresses are not bracketed.""" addrs = socket.getaddrinfo(addr_str, 0, 0, socket.SOCK_STREAM, socket.IPPROTO_TCP, socket.AI_NUMERICHOST) return addrs[0][0] def get_match_for_proxy(proxy_addr, transport_list): af = addr_af(proxy_addr[0]) try: return Endpoints.match(CLIENTS[af], RELAYS[af], transport_list) except ValueError as e: raise UnknownTransport("Could not find registration for transport list: %s: %s" % (transport_list, e)) def get_check_back_in_for_proxy(proxy_addr): """Get a CHECK-BACK-IN interval suitable for this proxy.""" return POLL_INTERVAL def put_reg(reg): """Add a registration.""" af = addr_af(reg.addr[0]) return CLIENTS[af].addEndpoint(reg.addr, reg.transport) def parse_relay_file(servers, fp): """Parse a file containing Tor relays that we can point proxies to. Throws ValueError on a parsing error. 
Each line contains a transport chain and an address, for example obfs2|websocket 1.4.6.1:4123 :returns: number of relays added """ n = 0 for line in fp.readlines(): line = line.strip("\n") if not line or line.startswith('#'): continue try: transport_spec, addr_spec = line.strip().split() except ValueError, e: raise ValueError("Wrong line format: %s." % repr(line)) addr = parse_addr_spec(addr_spec, defport=DEFAULT_RELAY_PORT) transport = Transport.parse(transport_spec) if transport.outer not in options.outer_transports: raise ValueError(u"Unrecognized transport: %s" % transport) af = addr_af(addr[0]) servers[af].addEndpoint(addr, transport) n += 1 return n def main(): opts, args = getopt.gnu_getopt(sys.argv[1:], "dhl:p:r:", [ "debug", "help", "log=", "port=", "pidfile=", "privdrop-user=", "relay-file=", "unsafe-logging", ]) for o, a in opts: if o == "-d" or o == "--debug": options.daemonize = False options.log_filename = None elif o == "-h" or o == "--help": usage() sys.exit() elif o == "-l" or o == "--log": options.log_filename = a elif o == "-p" or o == "--port": options.listen_port = int(a) elif o == "--pidfile": options.pid_filename = a elif o == "--privdrop-user": options.privdrop_username = a elif o == "-r" or o == "--relay-file": options.relay_filename = a elif o == "--outer-transports": options.outer_transports = a.split(",") elif o == "--unsafe-logging": options.safe_logging = False if not options.relay_filename: print >> sys.stderr, """\ The -r option is required. Give it the name of a file containing relay transports and addresses. -r HOST[:PORT] Example file contents: obfs2|websocket 1.4.6.1:4123\ """ sys.exit(1) try: with open(options.relay_filename) as fp: n = parse_relay_file(RELAYS, fp) if not n: raise ValueError("file contained no relays") except ValueError as e: print >> sys.stderr, u"Could not parse file %s: %s" % (repr(options.relay_filename), str(e)) sys.exit(1) # Setup log file if options.log_filename: options.log_file = open(options.log_filename, "a") # Send error tracebacks to the log. sys.stderr = options.log_file else: options.log_file = sys.stdout addrinfo = socket.getaddrinfo(LISTEN_ADDRESS, options.listen_port, 0, socket.SOCK_STREAM, socket.IPPROTO_TCP)[0] server = Server(addrinfo[4], Handler) log(u"start on %s" % format_addr(addrinfo[4])) log(u"using IPv4 relays %s" % str(RELAYS[socket.AF_INET]._endpoints)) log(u"using IPv6 relays %s" % str(RELAYS[socket.AF_INET6]._endpoints)) if options.daemonize: log(u"daemonizing") pid = os.fork() if pid != 0: if options.pid_filename: f = open(options.pid_filename, "w") print >> f, pid f.close() sys.exit(0) if options.privdrop_username is not None: log(u"dropping privileges to those of user %s" % options.privdrop_username) try: proc.drop_privs(options.privdrop_username) except BaseException, e: print >> sys.stderr, "Can't drop privileges:", str(e) sys.exit(1) try: server.serve_forever() except KeyboardInterrupt: sys.exit(0) if __name__ == "__main__": main() flashproxy-1.7/facilitator/fp-facilitator-test.py000077500000000000000000000322741236350636700223420ustar00rootroot00000000000000#!/usr/bin/env python from cStringIO import StringIO import os import socket import subprocess import tempfile import sys import time import unittest from flashproxy import fac from flashproxy.reg import Transport, Endpoint from flashproxy.util import format_addr # Import the facilitator program as a module. 
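# The program file is named "fp-facilitator" (no .py extension, and a dash in
# the name), so a normal import statement cannot find it; imp.load_source
# loads it by path instead, with bytecode generation temporarily disabled.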
import imp dont_write_bytecode = sys.dont_write_bytecode sys.dont_write_bytecode = True facilitator = imp.load_source("fp-facilitator", os.path.join(os.path.dirname(__file__), "fp-facilitator")) Endpoints = facilitator.Endpoints parse_relay_file = facilitator.parse_relay_file sys.dont_write_bytecode = dont_write_bytecode del dont_write_bytecode del facilitator FACILITATOR_HOST = "127.0.0.1" FACILITATOR_PORT = 39002 # diff port to not conflict with production service FACILITATOR_ADDR = (FACILITATOR_HOST, FACILITATOR_PORT) CLIENT_TP = "websocket" RELAY_TP = "websocket" PROXY_TPS = ["websocket", "webrtc"] def gimme_socket(host, port): addrinfo = socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM, socket.IPPROTO_TCP)[0] s = socket.socket(addrinfo[0], addrinfo[1], addrinfo[2]) s.settimeout(10.0) s.connect(addrinfo[4]) return s class EndpointsTest(unittest.TestCase): def setUp(self): self.pts = Endpoints(af=socket.AF_INET) def test_addEndpoints_twice(self): self.pts.addEndpoint("A", "a|b|p") self.assertFalse(self.pts.addEndpoint("A", "zzz")) self.assertEquals(self.pts._endpoints["A"], Transport("a|b", "p")) def test_delEndpoints_twice(self): self.pts.addEndpoint("A", "a|b|p") self.assertTrue(self.pts.delEndpoint("A")) self.assertFalse(self.pts.delEndpoint("A")) self.assertEquals(self.pts._endpoints.get("A"), None) def test_Endpoints_indexing(self): self.assertEquals(self.pts._indexes.get("p"), None) # test defaultdict works as expected self.assertEquals(self.pts._indexes["p"]["a|b"], set("")) self.pts.addEndpoint("A", "a|b|p") self.assertEquals(self.pts._indexes["p"]["a|b"], set("A")) self.pts.addEndpoint("B", "a|b|p") self.assertEquals(self.pts._indexes["p"]["a|b"], set("AB")) self.pts.delEndpoint("A") self.assertEquals(self.pts._indexes["p"]["a|b"], set("B")) self.pts.delEndpoint("B") self.assertEquals(self.pts._indexes["p"]["a|b"], set("")) def test_serveReg_maxserve_infinite_roundrobin(self): # case for servers, they never exhaust self.pts.addEndpoint("A", "a|p") self.pts.addEndpoint("B", "a|p") self.pts.addEndpoint("C", "a|p") for i in xrange(64): # 64 is infinite ;) served = set() served.add(self.pts._serveReg("ABC").addr) served.add(self.pts._serveReg("ABC").addr) served.add(self.pts._serveReg("ABC").addr) self.assertEquals(served, set("ABC")) def test_serveReg_maxserve_finite_exhaustion(self): # case for clients, we don't want to keep serving them self.pts = Endpoints(af=socket.AF_INET, maxserve=5) self.pts.addEndpoint("A", "a|p") self.pts.addEndpoint("B", "a|p") self.pts.addEndpoint("C", "a|p") # test getNumUnservedEndpoints whilst we're at it self.assertEquals(self.pts.getNumUnservedEndpoints(), 3) served = set() served.add(self.pts._serveReg("ABC").addr) self.assertEquals(self.pts.getNumUnservedEndpoints(), 2) served.add(self.pts._serveReg("ABC").addr) self.assertEquals(self.pts.getNumUnservedEndpoints(), 1) served.add(self.pts._serveReg("ABC").addr) self.assertEquals(self.pts.getNumUnservedEndpoints(), 0) self.assertEquals(served, set("ABC")) for i in xrange(5-2): served = set() served.add(self.pts._serveReg("ABC").addr) served.add(self.pts._serveReg("ABC").addr) served.add(self.pts._serveReg("ABC").addr) self.assertEquals(served, set("ABC")) remaining = set("ABC") remaining.remove(self.pts._serveReg(remaining).addr) self.assertRaises(KeyError, self.pts._serveReg, "ABC") remaining.remove(self.pts._serveReg(remaining).addr) self.assertRaises(KeyError, self.pts._serveReg, "ABC") remaining.remove(self.pts._serveReg(remaining).addr) self.assertRaises(KeyError, self.pts._serveReg, 
"ABC") self.assertEquals(remaining, set()) self.assertEquals(self.pts.getNumUnservedEndpoints(), 0) def test_match_normal(self): self.pts.addEndpoint("A", "a|p") self.pts2 = Endpoints(af=socket.AF_INET) self.pts2.addEndpoint("B", "a|p") self.pts2.addEndpoint("C", "b|p") self.pts2.addEndpoint("D", "a|q") expected = (Endpoint("A", Transport("a","p")), Endpoint("B", Transport("a","p"))) empty = Endpoints.EMPTY_MATCH self.assertEquals(expected, Endpoints.match(self.pts, self.pts2, ["p"])) self.assertEquals(empty, Endpoints.match(self.pts, self.pts2, ["x"])) def test_match_unequal_client_server(self): self.pts.addEndpoint("A", "a|p") self.pts2 = Endpoints(af=socket.AF_INET) self.pts2.addEndpoint("B", "a|q") expected = (Endpoint("A", Transport("a","p")), Endpoint("B", Transport("a","q"))) empty = Endpoints.EMPTY_MATCH self.assertEquals(expected, Endpoints.match(self.pts, self.pts2, ["p", "q"])) self.assertEquals(empty, Endpoints.match(self.pts, self.pts2, ["p"])) self.assertEquals(empty, Endpoints.match(self.pts, self.pts2, ["q"])) self.assertEquals(empty, Endpoints.match(self.pts, self.pts2, ["x"])) def test_match_raw_server(self): self.pts.addEndpoint("A", "p") self.pts2 = Endpoints(af=socket.AF_INET) self.pts2.addEndpoint("B", "p") expected = (Endpoint("A", Transport("","p")), Endpoint("B", Transport("","p"))) empty = Endpoints.EMPTY_MATCH self.assertEquals(expected, Endpoints.match(self.pts, self.pts2, ["p"])) self.assertEquals(empty, Endpoints.match(self.pts, self.pts2, ["x"])) def test_match_many_inners(self): self.pts.addEndpoint("A", "a|p") self.pts.addEndpoint("B", "b|p") self.pts.addEndpoint("C", "p") self.pts2 = Endpoints(af=socket.AF_INET) self.pts2.addEndpoint("D", "a|p") self.pts2.addEndpoint("E", "b|p") self.pts2.addEndpoint("F", "p") # this test ensures we have a sane policy for selecting between inners pools expected = set() expected.add((Endpoint("A", Transport("a","p")), Endpoint("D", Transport("a","p")))) expected.add((Endpoint("B", Transport("b","p")), Endpoint("E", Transport("b","p")))) expected.add((Endpoint("C", Transport("","p")), Endpoint("F", Transport("","p")))) result = set() result.add(Endpoints.match(self.pts, self.pts2, ["p"])) result.add(Endpoints.match(self.pts, self.pts2, ["p"])) result.add(Endpoints.match(self.pts, self.pts2, ["p"])) empty = Endpoints.EMPTY_MATCH self.assertEquals(expected, result) self.assertEquals(empty, Endpoints.match(self.pts, self.pts2, ["x"])) self.assertEquals(empty, Endpoints.match(self.pts, self.pts2, ["x"])) self.assertEquals(empty, Endpoints.match(self.pts, self.pts2, ["x"])) def test_match_exhaustion(self): self.pts.addEndpoint("A", "p") self.pts2 = Endpoints(af=socket.AF_INET, maxserve=2) self.pts2.addEndpoint("B", "p") Endpoints.match(self.pts2, self.pts, ["p"]) Endpoints.match(self.pts2, self.pts, ["p"]) empty = Endpoints.EMPTY_MATCH self.assertTrue("B" not in self.pts2._endpoints) self.assertTrue("B" not in self.pts2._indexes["p"][""]) self.assertEquals(empty, Endpoints.match(self.pts2, self.pts, ["p"])) class FacilitatorTest(unittest.TestCase): def test_parse_relay_file(self): fp = StringIO() fp.write("websocket 0.0.1.0:1\n") fp.flush() fp.seek(0) af = socket.AF_INET servers = { af: Endpoints(af=af) } parse_relay_file(servers, fp) self.assertEquals(servers[af]._endpoints, {('0.0.1.0', 1): Transport('', 'websocket')}) class FacilitatorProcTest(unittest.TestCase): IPV4_CLIENT_ADDR = ("1.1.1.1", 9000) IPV6_CLIENT_ADDR = ("[11::11]", 9000) IPV4_PROXY_ADDR = ("2.2.2.2", 13000) IPV6_PROXY_ADDR = ("[22::22]", 13000) IPV4_RELAY_ADDR 
= ("0.0.1.0", 1) IPV6_RELAY_ADDR = ("[0:0::1:0]", 1) def gimme_socket(self): return gimme_socket(FACILITATOR_HOST, FACILITATOR_PORT) def setUp(self): self.relay_file = tempfile.NamedTemporaryFile() self.relay_file.write("%s %s\n" % (RELAY_TP, format_addr(self.IPV4_RELAY_ADDR))) self.relay_file.write("%s %s\n" % (RELAY_TP, format_addr(self.IPV6_RELAY_ADDR))) self.relay_file.flush() self.relay_file.seek(0) fn = os.path.join(os.path.dirname(__file__), "./fp-facilitator") self.process = subprocess.Popen(["python", fn, "-d", "-p", str(FACILITATOR_PORT), "-r", self.relay_file.name, "-l", "/dev/null"]) time.sleep(0.1) def tearDown(self): ret = self.process.poll() if ret is not None: raise Exception("facilitator subprocess exited unexpectedly with status %d" % ret) self.process.terminate() def test_timeout(self): """Test that the socket will not accept slow writes indefinitely. Successive sends should not reset the timeout counter.""" s = self.gimme_socket() time.sleep(0.3) s.send("w") time.sleep(0.3) s.send("w") time.sleep(0.3) s.send("w") time.sleep(0.3) s.send("w") time.sleep(0.3) self.assertRaises(socket.error, s.send, "w") def test_readline_limit(self): """Test that reads won't buffer indefinitely.""" s = self.gimme_socket() buflen = 0 try: while buflen + 1024 < 200000: s.send("X" * 1024) buflen += 1024 # TODO(dcf1): sometimes no error is raised, and this test fails self.fail("should have raised a socket error") except socket.error: pass def test_af_v4_v4(self): """Test that IPv4 proxies can get IPv4 clients.""" fac.put_reg(FACILITATOR_ADDR, self.IPV4_CLIENT_ADDR, CLIENT_TP) fac.put_reg(FACILITATOR_ADDR, self.IPV6_CLIENT_ADDR, CLIENT_TP) reg = fac.get_reg(FACILITATOR_ADDR, self.IPV4_PROXY_ADDR, PROXY_TPS) self.assertEqual(reg["client"], format_addr(self.IPV4_CLIENT_ADDR)) def test_af_v4_v6(self): """Test that IPv4 proxies do not get IPv6 clients.""" fac.put_reg(FACILITATOR_ADDR, self.IPV6_CLIENT_ADDR, CLIENT_TP) reg = fac.get_reg(FACILITATOR_ADDR, self.IPV4_PROXY_ADDR, PROXY_TPS) self.assertEqual(reg["client"], "") def test_af_v6_v4(self): """Test that IPv6 proxies do not get IPv4 clients.""" fac.put_reg(FACILITATOR_ADDR, self.IPV4_CLIENT_ADDR, CLIENT_TP) reg = fac.get_reg(FACILITATOR_ADDR, self.IPV6_PROXY_ADDR, PROXY_TPS) self.assertEqual(reg["client"], "") def test_af_v6_v6(self): """Test that IPv6 proxies can get IPv6 clients.""" fac.put_reg(FACILITATOR_ADDR, self.IPV4_CLIENT_ADDR, CLIENT_TP) fac.put_reg(FACILITATOR_ADDR, self.IPV6_CLIENT_ADDR, CLIENT_TP) reg = fac.get_reg(FACILITATOR_ADDR, self.IPV6_PROXY_ADDR, PROXY_TPS) self.assertEqual(reg["client"], format_addr(self.IPV6_CLIENT_ADDR)) def test_fields(self): """Test that facilitator responses contain all the required fields.""" fac.put_reg(FACILITATOR_ADDR, self.IPV4_CLIENT_ADDR, CLIENT_TP) reg = fac.get_reg(FACILITATOR_ADDR, self.IPV4_PROXY_ADDR, PROXY_TPS) self.assertEqual(reg["client"], format_addr(self.IPV4_CLIENT_ADDR)) self.assertEqual(reg["client-transport"], CLIENT_TP) self.assertEqual(reg["relay"], format_addr(self.IPV4_RELAY_ADDR)) self.assertEqual(reg["relay-transport"], RELAY_TP) self.assertGreater(int(reg["check-back-in"]), 0) # def test_same_proxy(self): # """Test that the same proxy doesn't get the same client when asking # twice.""" # self.fail() # # def test_num_clients(self): # """Test that the same proxy can pick up up to five different clients but # no more. 
Test that a proxy ceasing to handle a client allows the proxy # to handle another, different client.""" # self.fail() # # def test_num_proxies(self): # """Test that a single client is handed out to five different proxies but # no more. Test that a proxy ceasing to handle a client reduces its count # so another proxy can handle it.""" # self.fail() # # def test_proxy_timeout(self): # """Test that a proxy ceasing to connect for some time period causes that # proxy's clients to be unhandled by that proxy.""" # self.fail() # # def test_localhost_only(self): # """Test that the facilitator doesn't listen on any external # addresses.""" # self.fail() # # def test_hostname(self): # """Test that the facilitator rejects hostnames.""" # self.fail() if __name__ == "__main__": unittest.main() flashproxy-1.7/facilitator/fp-reg-decrypt000077500000000000000000000032261236350636700206550ustar00rootroot00000000000000#!/usr/bin/env python """ Forwards encrypted client registrations to a running fp-reg-decryptd. """ import getopt import socket import sys CONNECT_ADDRESS = "127.0.0.1" DEFAULT_CONNECT_PORT = 9003 class options(object): connect_port = DEFAULT_CONNECT_PORT def usage(f = sys.stdout): print >> f, """\ Usage: %(progname)s Reads a base64-encoded encrypted client registration from stdin and feeds it to a local fp-reg-decryptd process. Returns 0 if the registration was successful, 1 otherwise. -h, --help show this help. -p, --port PORT connect to PORT (default %(port)d).\ """ % { "progname": sys.argv[0], "port": DEFAULT_CONNECT_PORT, } def main(): opts, args = getopt.gnu_getopt(sys.argv[1:], "hp:", [ "help", "port=", ]) for o, a in opts: if o == "-h" or o == "--help": usage() sys.exit() elif o == "-p" or o == "--port": options.connect_port = int(a) if len(args) != 0: usage(sys.stderr) sys.exit(1) addrinfo = socket.getaddrinfo(CONNECT_ADDRESS, options.connect_port, 0, socket.SOCK_STREAM, socket.IPPROTO_TCP)[0] s = socket.socket(addrinfo[0], addrinfo[1], addrinfo[2]) s.connect(addrinfo[4]) sent = 0 while True: data = sys.stdin.read(1024) if data == "": mod = sent % 4 if mod != 0: s.sendall((4 - mod) * "=") break s.sendall(data) sent += len(data) s.shutdown(socket.SHUT_WR) data = s.recv(1024) if data.strip() == "OK": sys.exit(0) else: sys.exit(1) if __name__ == "__main__": main() flashproxy-1.7/facilitator/fp-reg-decryptd000077500000000000000000000156201236350636700210220ustar00rootroot00000000000000#!/usr/bin/env python """ Accepts encrypted client registrations and forwards them to the facilitator. """ import SocketServer import getopt import os import socket import sys import threading import time from flashproxy import fac from flashproxy import proc from flashproxy.util import format_addr from M2Crypto import RSA # Generating an RSA keypair for use by this program: # openssl genrsa -out /etc/flashproxy/reg-daemon.key 2048 # chmod 600 /etc/flashproxy/reg-daemon.key LISTEN_ADDRESS = "127.0.0.1" DEFAULT_LISTEN_PORT = 9003 FACILITATOR_ADDR = ("127.0.0.1", 9002) DEFAULT_LOG_FILENAME = "fp-reg-decryptd.log" # Don't indulge clients for more than this many seconds. CLIENT_TIMEOUT = 1.0 # Buffer no more than this many bytes per connection. 
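# (A single registration is one base64-encoded RSA-2048 block, only a few
# hundred bytes, so this limit is generous.)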
MAX_LENGTH = 40 * 1024 LOG_DATE_FORMAT = "%Y-%m-%d %H:%M:%S" class options(object): key_filename = None listen_port = DEFAULT_LISTEN_PORT log_filename = DEFAULT_LOG_FILENAME log_file = sys.stdout daemonize = True pid_filename = None privdrop_username = None safe_logging = True def usage(f = sys.stdout): print >> f, """\ Usage: %(progname)s --key=KEYFILE Facilitator-side daemon that reads base64-encoded encrypted client registrations and registers them with a local facilitator. This program exists on its own in order to isolate the reading of key material in a single process. -d, --debug don't daemonize, log to stdout. -h, --help show this help. -k, --key=KEYFILE read the private key from KEYFILE (required). -l, --log FILENAME write log to FILENAME (default \"%(log)s\"). -p, --port PORT listen on PORT (default %(port)d). --pidfile FILENAME write PID to FILENAME after daemonizing. --privdrop-user USER switch UID and GID to those of USER. --unsafe-logging don't scrub IP addresses from logs.\ """ % { "progname": sys.argv[0], "log": DEFAULT_LOG_FILENAME, "port": DEFAULT_LISTEN_PORT, } def safe_str(s): """Return "[scrubbed]" if options.safe_logging is true, and s otherwise.""" if options.safe_logging: return "[scrubbed]" else: return s log_lock = threading.Lock() def log(msg): log_lock.acquire() try: print >> options.log_file, (u"%s %s" % (time.strftime(LOG_DATE_FORMAT), msg)).encode("UTF-8") options.log_file.flush() finally: log_lock.release() class Handler(SocketServer.StreamRequestHandler): def __init__(self, *args, **kwargs): self.deadline = time.time() + CLIENT_TIMEOUT self.buffer = "" SocketServer.StreamRequestHandler.__init__(self, *args, **kwargs) def recv(self): timeout = self.deadline - time.time() self.connection.settimeout(timeout) return self.connection.recv(1024) def read_input(self): while True: data = self.recv() if not data: break self.buffer += data buflen = len(self.buffer) if buflen > MAX_LENGTH: raise socket.error("refusing to buffer %d bytes (last read was %d bytes)" % (buflen, len(data))) return self.buffer @proc.catch_epipe def handle(self): try: b64_ciphertext = self.read_input() except socket.error, e: log("socket error reading input: %s" % str(e)) return try: ciphertext = b64_ciphertext.decode("base64") plaintext = rsa.private_decrypt(ciphertext, RSA.pkcs1_oaep_padding) for client_reg in fac.read_client_registrations(plaintext): log(u"registering %s" % safe_str(format_addr(client_reg.addr))) if not fac.put_reg(FACILITATOR_ADDR, client_reg.addr, client_reg.transport): print >> self.wfile, "FAIL" break else: print >> self.wfile, "OK" except Exception, e: log("error registering: %s" % str(e)) print >> self.wfile, "FAIL" raise finish = proc.catch_epipe(SocketServer.StreamRequestHandler.finish) class Server(SocketServer.ThreadingMixIn, SocketServer.TCPServer): allow_reuse_address = True def main(): global rsa opts, args = getopt.gnu_getopt(sys.argv[1:], "dhk:l:p:", ["debug", "help", "key=", "log=", "port=", "pidfile=", "privdrop-user=", "unsafe-logging"]) for o, a in opts: if o == "-d" or o == "--debug": options.daemonize = False options.log_filename = None elif o == "-h" or o == "--help": usage() sys.exit() elif o == "-k" or o == "--key": options.key_filename = a elif o == "-l" or o == "--log": options.log_filename = a elif o == "-p" or o == "--pass": options.listen_port = int(a) elif o == "--pidfile": options.pid_filename = a elif o == "--privdrop-user": options.privdrop_username = a elif o == "--unsafe-logging": options.safe_logging = False if len(args) != 0: 
usage(sys.stderr) sys.exit(1) # Load the private key. if options.key_filename is None: print >> sys.stderr, "The --key option is required." sys.exit(1) try: key_file = open(options.key_filename) except Exception, e: print >> sys.stderr, "Failed to open private key file \"%s\": %s." % (options.key_filename, str(e)) sys.exit(1) try: if not proc.check_perms(key_file.fileno()): print >> sys.stderr, "Refusing to run with group- or world-readable private key file. Try" print >> sys.stderr, "\tchmod 600 %s" % options.key_filename sys.exit(1) rsa = RSA.load_key_string(key_file.read()) finally: key_file.close() if options.log_filename: options.log_file = open(options.log_filename, "a") # Send error tracebacks to the log. sys.stderr = options.log_file else: options.log_file = sys.stdout addrinfo = socket.getaddrinfo(LISTEN_ADDRESS, options.listen_port, 0, socket.SOCK_STREAM, socket.IPPROTO_TCP)[0] server = Server(addrinfo[4], Handler) log(u"start on %s" % format_addr(addrinfo[4])) if options.daemonize: log(u"daemonizing") pid = os.fork() if pid != 0: if options.pid_filename: f = open(options.pid_filename, "w") print >> f, pid f.close() sys.exit(0) if options.privdrop_username is not None: log(u"dropping privileges to those of user %s" % options.privdrop_username) try: proc.drop_privs(options.privdrop_username) except BaseException, e: print >> sys.stderr, "Can't drop privileges:", str(e) sys.exit(1) try: server.serve_forever() except KeyboardInterrupt: sys.exit(0) if __name__ == "__main__": main() flashproxy-1.7/facilitator/fp-registrar-email000077500000000000000000000340641236350636700215230ustar00rootroot00000000000000#!/usr/bin/env python """ Polls a mailbox for new registrations and forwards them using fp-reg-decrypt. """ import calendar import datetime import email import email.utils import getopt import imaplib import math import os import re import socket import ssl import stat import sys import tempfile import time from flashproxy import fac from flashproxy import keys from flashproxy import proc from flashproxy.util import parse_addr_spec from hashlib import sha1 from M2Crypto import SSL # TODO(infinity0): we only support gmail so this is OK for now. in the future, # could maybe do an MX lookup and try to guess the imap server from that. DEFAULT_IMAP_HOST = "imap.gmail.com" DEFAULT_IMAP_PORT = 993 DEFAULT_LOG_FILENAME = "fp-registrar-email.log" POLL_INTERVAL = 60 # Ignore message older than this many seconds old, or newer than this many # seconds in the future. REGISTRATION_AGE_LIMIT = 30 * 60 FACILITATOR_ADDR = ("127.0.0.1", 9002) LOG_DATE_FORMAT = "%Y-%m-%d %H:%M:%S" class options(object): password_filename = None log_filename = DEFAULT_LOG_FILENAME log_file = sys.stdout daemonize = True pid_filename = None privdrop_username = None safe_logging = True imaplib_debug = False use_certificate_pin = True # Like socket.create_connection in that it tries resolving different address # families, but doesn't connect the socket. 
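# The caller connects it itself; IMAP4_SSL_REQUIRED.open below wraps the
# returned socket in an M2Crypto SSL.Connection before connecting.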
def create_socket(address, timeout = None, source_address = None): host, port = address addrs = socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM) if not addrs: raise socket.error("getaddrinfo returns an empty list") err = None for addr in addrs: try: s = socket.socket(addr[0], addr[1], addr[2]) if timeout is not None and type(timeout) == float: s.settimeout(timeout) if source_address is not None: s.bind(source_address) return s except Exception, e: err = e raise err class IMAP4_SSL_REQUIRED(imaplib.IMAP4_SSL): """A subclass of of IMAP4_SSL that uses ssl_version=ssl.PROTOCOL_TLSv1 and cert_reqs=ssl.CERT_REQUIRED.""" def open(self, host = "", port = imaplib.IMAP4_SSL_PORT): ctx = SSL.Context("tlsv1") ctx.set_verify(SSL.verify_peer, 3) ret = ctx.load_verify_locations(self.certfile) assert ret == 1 self.host = host self.port = port self.sock = create_socket((self.host, self.port)) self.sslobj = SSL.Connection(ctx, self.sock) self.sslobj.connect((self.host, self.port)) self.file = self.sslobj.makefile('rb') def usage(f = sys.stdout): print >> f, """\ Usage: %(progname)s --pass=PASSFILE Facilitator-side helper for the flashproxy-reg-email rendezvous. Polls an IMAP server for email messages with client registrations, deletes them, and forwards the registrations to the facilitator. -d, --debug don't daemonize, log to stdout. --disable-pin don't check server public key against a known pin. -h, --help show this help. --imaplib-debug show raw IMAP messages (will include email password). -l, --log FILENAME write log to FILENAME (default \"%(log)s\"). -p, --pass=PASSFILE use the email/password contained in PASSFILE. This file should contain "[] " on a single line, separated by whitespace. If is omitted, it defaults to imap.( domain):993. --pidfile FILENAME write PID to FILENAME after daemonizing. --privdrop-user USER switch UID and GID to those of USER. --unsafe-logging don't scrub email password and IP addresses from logs.\ """ % { "progname": sys.argv[0], "log": DEFAULT_LOG_FILENAME, } def safe_str(s): """Return "[scrubbed]" if options.safe_logging is true, and s otherwise.""" if options.safe_logging: return "[scrubbed]" else: return s def log(msg): print >> options.log_file, (u"%s %s" % (time.strftime(LOG_DATE_FORMAT), msg)).encode("UTF-8") options.log_file.flush() def main(): opts, args = getopt.gnu_getopt(sys.argv[1:], "de:hi:l:p:", [ "debug", "disable-pin", "email=", "help", "imap=", "imaplib-debug", "log=", "pass=", "pidfile=", "privdrop-user=", "unsafe-logging", ]) for o, a in opts: if o == "-d" or o == "--debug": options.daemonize = False options.log_filename = None elif o == "--disable-pin": options.use_certificate_pin = False elif o == "-h" or o == "--help": usage() sys.exit() if o == "--imaplib-debug": options.imaplib_debug = True elif o == "-l" or o == "--log": options.log_filename = a elif o == "-p" or o == "--pass": options.password_filename = a elif o == "--pidfile": options.pid_filename = a elif o == "--privdrop-user": options.privdrop_username = a elif o == "--unsafe-logging": options.safe_logging = False if len(args) != 0: usage(sys.stderr) sys.exit(1) # Load the email password. if options.password_filename is None: print >> sys.stderr, "The --pass option is required." 
sys.exit(1) try: password_file = open(options.password_filename) except Exception, e: print >> sys.stderr, """\ Failed to open password file "%s": %s.\ """ % (options.password_filename, str(e)) sys.exit(1) try: if not proc.check_perms(password_file.fileno()): print >> sys.stderr, "Refusing to run with group- or world-readable password file. Try" print >> sys.stderr, "\tchmod 600 %s" % options.password_filename sys.exit(1) for (lineno0, line) in enumerate(password_file.readlines()): line = line.strip("\n") if not line or line.startswith('#'): continue # we do this stricter regex match because passwords might have spaces in res = re.match(r"(?:(\S+)\s)?(\S+@\S+)\s(.+)", line) if not res: raise ValueError("could not find email or password on line %s" % (lineno0+1)) (imap_addr_spec, email_addr, email_password) = res.groups() imap_addr = parse_addr_spec( imap_addr_spec or "", DEFAULT_IMAP_HOST, DEFAULT_IMAP_PORT) break else: raise ValueError("no email line found") except Exception, e: print >> sys.stderr, """\ Failed to parse password file "%s": %s. Syntax is [] . """ % (options.password_filename, str(e)) sys.exit(1) finally: password_file.close() if options.log_filename: options.log_file = open(options.log_filename, "a") # Send error tracebacks to the log. sys.stderr = options.log_file else: options.log_file = sys.stdout if options.daemonize: log(u"daemonizing") pid = os.fork() if pid != 0: if options.pid_filename: f = open(options.pid_filename, "w") print >> f, pid f.close() sys.exit(0) if options.privdrop_username is not None: log(u"dropping privileges to those of user %s" % options.privdrop_username) try: proc.drop_privs(options.privdrop_username) except BaseException, e: print >> sys.stderr, "Can't drop privileges:", str(e) sys.exit(1) if options.imaplib_debug: imaplib.Debug = 4 login_limit = RateLimit() while True: try: imap = imap_login(imap_addr, email_addr, email_password) try: imap_loop(imap) except imaplib.IMAP4.error: imap.close() imap.logout() except (imaplib.IMAP4.error, ssl.SSLError, SSL.SSLError, socket.error), e: # Try again after a disconnection. log(u"lost server connection: %s" % str(e)) except KeyboardInterrupt: break # Don't reconnect too fast. t = login_limit.time_to_wait() if t > 0: log(u"waiting %.2f seconds before logging in again" % t) time.sleep(t) log(u"closing") imap.close() imap.logout() def message_get_date(msg): """Get the datetime when the message was received by reading the X-Received header, relative to UTC. 
Returns None on error.""" x_received = msg["X-Received"] if x_received is None: log(u"X-Received is missing") return None try: _, date_str = x_received.rsplit(";", 1) date_str = date_str.strip() except ValueError: log(u"can't parse X-Received %s" % repr(x_received)) return None date_tuple = email.utils.parsedate_tz(date_str) if date_tuple is None: log(u"can't parse X-Received date string %s" % repr(date_str)) return None timestamp_utc = calendar.timegm(date_tuple[:8] + (0,)) - date_tuple[9] return datetime.datetime.utcfromtimestamp(timestamp_utc) def message_ok(msg): date = message_get_date(msg) if date is not None: now = datetime.datetime.utcnow() age = time.mktime(now.utctimetuple()) - time.mktime(date.utctimetuple()) if age > REGISTRATION_AGE_LIMIT: log(u"message dated %s UTC is too old: %d seconds" % (date, age)) return False if -age > REGISTRATION_AGE_LIMIT: log(u"message dated %s UTC is from the future: %d seconds" % (date, -age)) return False return True def handle_message(msg): try: if fac.put_reg_proc(["fp-reg-decrypt"], msg.get_payload()): log(u"registered client") else: log(u"failed to register client") except Exception, e: log(u"error registering client") raise def truncate_repr(s, n): if not isinstance(s, basestring): s = repr(s) if len(s) > n: return repr(s[:n]) + "[...]" else: return repr(s) def check_imap_return(typ, data): if typ != "OK": raise imaplib.IMAP4.abort("Got type \"%s\": %s" % (typ, truncate_repr(data, 100))) def imap_get_uid(imap, index): typ, data = imap.fetch(str(index), "(UID)") if data[0] is None: return None check_imap_return(typ, data) # This grepping for the UID is bogus, but imaplib doesn't properly break up # the list of name-value pairs for us. m = re.match(r'^\d+\s+\(.*\bUID\s+(\d+)\b.*\)\s*$', data[0]) if m is None: raise imaplib.IMAP4.abort("Can't find UID in %s" % repr(data[0])) return m.group(1) # Gmail's IMAP folders are funny: they are not real folders, but actually views # of messages having a particular label. INBOX consists of messages having the # INBOX label, for example. Deleting a message from a folder just removes its # label, but the message itself continues to exist in "[Gmail]/All Mail". # https://support.google.com/mail/bin/answer.py?answer=78755 # http://gmailblog.blogspot.com/2008/10/new-in-labs-advanced-imap-controls.html # To really delete a message, you must copy it to "[Gmail]/Trash" and then # delete it from there. Messages in Trash are deleted automatically after 30 # days, but we do it immediately. def imap_loop(imap): while True: # Copy all messages to Trash, and work on them from there. This is a # failsafe so that messages will eventually be deleted if we are not # able to retrieve them. This act of copying also deletes from All Mail. typ, data = imap.select("[Gmail]/All Mail") check_imap_return(typ, data) imap.copy("1:*", "[Gmail]/Trash") typ, data = imap.select("[Gmail]/Trash") check_imap_return(typ, data) exists = int(data[0]) if exists > 0: while True: # Grab message 1 on each iteration; remaining messages shift down so # the next message we process is also message 1. 
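# We then operate on it by UID, so the FETCH and STORE below keep referring
# to the same message even if the mailbox changes in the meantime.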
uid = imap_get_uid(imap, "1") if uid is None: break typ, data = imap.uid("FETCH", uid, "(BODY[])") check_imap_return(typ, data) msg_text = data[0][1] typ, data = imap.uid("STORE", uid, "+FLAGS", "\\Deleted") check_imap_return(typ, data) typ, data = imap.expunge() check_imap_return(typ, data) try: msg = email.message_from_string(msg_text) if message_ok(msg): handle_message(msg) except Exception, e: log("Error processing message, deleting anyway: %s" % str(e)) time.sleep(POLL_INTERVAL) def imap_login(imap_addr, email_addr, email_password): """Make an IMAP connection, check the certificate and public key, and log in.""" with keys.temp_cert(keys.PIN_GOOGLE_CA_CERT) as ca_certs_file: imap = IMAP4_SSL_REQUIRED( imap_addr[0], imap_addr[1], None, ca_certs_file.name) if options.use_certificate_pin: keys.check_certificate_pin(imap.ssl(), keys.PIN_GOOGLE_PUBKEY_SHA1) log(u"logging in as %s" % email_addr) imap.login(email_addr, email_password) return imap class RateLimit(object): INITIAL_INTERVAL = 1.0 # These constants are chosen to reach a steady state of one attempt every # ten minutes, assuming a new failing attempt after each penalty interval. MAX_INTERVAL = 10 * 60 MULTIPLIER = 2.0 DECAY = math.log(MULTIPLIER) / MAX_INTERVAL def __init__(self): self.time_last = time.time() self.interval = self.INITIAL_INTERVAL def time_to_wait(self): now = time.time() delta = now - self.time_last # Discount time already served. wait = max(self.interval - delta, 0) self.time_last = now self.interval = self.interval * math.exp(-self.DECAY * delta) * self.MULTIPLIER return wait if __name__ == "__main__": main() flashproxy-1.7/facilitator/fp-registrar.cgi000077500000000000000000000065031236350636700211740ustar00rootroot00000000000000#!/usr/bin/env python import cgi import os import socket import sys import urllib from flashproxy import fac FACILITATOR_ADDR = ("127.0.0.1", 9002) def output_status(status): print """\ Status: %d\r \r""" % status def exit_error(status): output_status(status) sys.exit() # Send a base64-encoded client address to the registration daemon. def send_url_reg(reg): # Translate from url-safe base64 alphabet to the standard alphabet. reg = reg.replace('-', '+').replace('_', '/') return fac.put_reg_proc(["fp-reg-decrypt"], reg) method = os.environ.get("REQUEST_METHOD") remote_addr = (os.environ.get("REMOTE_ADDR"), None) path_info = os.environ.get("PATH_INFO") or "/" if not method or not remote_addr[0]: exit_error(400) # Print the HEAD part of a URL-based registration response, or exit with an # error if appropriate. def url_reg(reg): try: if send_url_reg(reg): output_status(204) else: exit_error(400) except Exception: exit_error(500) def do_head(): path_parts = [x for x in path_info.split("/") if x] if len(path_parts) == 2 and path_parts[0] == "reg": url_reg(path_parts[1]) else: exit_error(400) def do_get(): """Parses flashproxy polls. Example: GET /?r=1&client=7.1.43.21&client=1.2.3.4&transport=webrtc&transport=websocket """ fs = cgi.FieldStorage() path_parts = [x for x in path_info.split("/") if x] if len(path_parts) == 2 and path_parts[0] == "reg": url_reg(path_parts[1]) elif len(path_parts) == 0: # Check for recent enough flash proxy protocol. r = fs.getlist("r") if len(r) != 1 or r[0] != "1": exit_error(400) # 'transports' (optional) can be repeated and carries # transport names. 
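# Proxies that do not send any transport parameter are assumed to support
# only websocket.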
transport_list = fs.getlist("transport") if not transport_list: transport_list = ["websocket"] try: reg = fac.get_reg(FACILITATOR_ADDR, remote_addr, transport_list) or "" except Exception: exit_error(500) # Allow XMLHttpRequest from any domain. http://www.w3.org/TR/cors/. print """\ Status: 200\r Content-Type: application/x-www-form-urlencoded\r Cache-Control: no-cache\r Access-Control-Allow-Origin: *\r \r""" sys.stdout.write(urllib.urlencode(reg)) else: exit_error(400) def do_post(): """Parse client registration.""" if path_info != "/": exit_error(400) # We treat sys.stdin as being a bunch of newline-separated query strings. I # think that this is technically a violation of the # application/x-www-form-urlencoded content-type the client likely used, but # it at least matches the standard multiline registration format used by # fp-reg-decryptd. try: regs = list(fac.read_client_registrations(sys.stdin.read(), defhost=remote_addr[0])) except ValueError: exit_error(400) for reg in regs: # XXX need to link these registrations together, so that # when one is answerered (or errors) the rest are invalidated. if not fac.put_reg(FACILITATOR_ADDR, reg.addr, reg.transport): exit_error(500) print """\ Status: 200\r \r""" if method == "HEAD": do_head() elif method == "GET": do_get() elif method == "POST": do_post() else: exit_error(405) flashproxy-1.7/facilitator/init.d/000077500000000000000000000000001236350636700172615ustar00rootroot00000000000000flashproxy-1.7/facilitator/init.d/fp-facilitator.in000077500000000000000000000066731236350636700225340ustar00rootroot00000000000000#! /bin/sh ### BEGIN INIT INFO # Provides: fp-facilitator # Required-Start: $remote_fs $syslog # Required-Stop: $remote_fs $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Flash proxy facilitator # Description: Debian init script for the flash proxy facilitator. ### END INIT INFO # # Author: David Fifield # # Based on /etc/init.d/skeleton from Debian 6. PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/bin DESC="Flash proxy facilitator" NAME=fp-facilitator prefix=@prefix@ exec_prefix=@exec_prefix@ PIDFILE=@localstatedir@/run/$NAME.pid LOGFILE=@localstatedir@/log/$NAME.log CONFDIR=@sysconfdir@/flashproxy RELAYFILE=$CONFDIR/facilitator-relays PRIVDROP_USER=@fpfacilitatoruser@ DAEMON=@bindir@/$NAME DAEMON_ARGS="--relay-file $RELAYFILE --log $LOGFILE --pidfile $PIDFILE --privdrop-user $PRIVDROP_USER" DEFAULTSFILE=@sysconfdir@/default/$NAME # Exit if the package is not installed [ -x "$DAEMON" ] || exit 0 # Read configuration variable file if it is present [ -r "$DEFAULTSFILE" ] && . "$DEFAULTSFILE" . /lib/init/vars.sh . /lib/lsb/init-functions [ "$UNSAFE_LOGGING" = "yes" ] && DAEMON_ARGS="$DAEMON_ARGS --unsafe-logging" [ -n "$PORT" ] && DAEMON_ARGS="$DAEMON_ARGS --port $PORT" # # Function that starts the daemon/service # do_start() { # Return # 0 if daemon has been started # 1 if daemon was already running # 2 if daemon could not be started start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \ || return 1 start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \ $DAEMON_ARGS \ || return 2 } # # Function that stops the daemon/service # do_stop() { # Return # 0 if daemon has been stopped # 1 if daemon was already stopped # 2 if daemon could not be stopped # other if a failure occurred start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE RETVAL="$?" 
[ "$RETVAL" = 2 ] && return 2 # Wait for children to finish too if this is a daemon that forks # and if the daemon is only ever run from this initscript. # If the above conditions are not satisfied then add some other code # that waits for the process to drop all resources that could be # needed by services started subsequently. A last resort is to # sleep for some time. start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON [ "$?" = 2 ] && return 2 rm -f $PIDFILE return "$RETVAL" } case "$1" in start) if [ "$RUN_DAEMON" != "yes" ]; then log_action_msg "Not starting $DESC (Disabled in $DEFAULTSFILE)." exit 0 fi [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME" do_start case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; stop) [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME" do_stop case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; status) status_of_proc -p "$PIDFILE" "$DAEMON" "$NAME" && exit 0 || exit $? ;; restart|force-reload) log_daemon_msg "Restarting $DESC" "$NAME" do_stop case "$?" in 0|1) do_start case "$?" in 0) log_end_msg 0 ;; 1) log_end_msg 1 ;; # Old process is still running *) log_end_msg 1 ;; # Failed to start esac ;; *) # Failed to stop log_end_msg 1 ;; esac ;; *) echo "Usage: $0 {start|stop|status|restart|force-reload}" >&2 exit 3 ;; esac : flashproxy-1.7/facilitator/init.d/fp-reg-decryptd.in000077500000000000000000000067101236350636700226140ustar00rootroot00000000000000#! /bin/sh ### BEGIN INIT INFO # Provides: fp-reg-decryptd # Required-Start: $remote_fs $syslog # Required-Stop: $remote_fs $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Flash proxy local registration daemon. # Description: Debian init script for the flash proxy local registration daemon. ### END INIT INFO # # Author: David Fifield # # Based on /etc/init.d/skeleton from Debian 6. PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/bin DESC="Flash proxy local registration daemon" NAME=fp-reg-decryptd prefix=@prefix@ exec_prefix=@exec_prefix@ PIDFILE=@localstatedir@/run/$NAME.pid LOGFILE=@localstatedir@/log/$NAME.log CONFDIR=@sysconfdir@/flashproxy PRIVDROP_USER=@fpfacilitatoruser@ DAEMON=@bindir@/$NAME DAEMON_ARGS="--key $CONFDIR/reg-daemon.key --log $LOGFILE --pidfile $PIDFILE --privdrop-user $PRIVDROP_USER" DEFAULTSFILE=@sysconfdir@/default/$NAME # Exit if the package is not installed [ -x "$DAEMON" ] || exit 0 # Read configuration variable file if it is present [ -r "$DEFAULTSFILE" ] && . "$DEFAULTSFILE" [ "$UNSAFE_LOGGING" = "yes" ] && DAEMON_ARGS="$DAEMON_ARGS --unsafe-logging" [ -n "$PORT" ] && DAEMON_ARGS="$DAEMON_ARGS --port $PORT" . /lib/init/vars.sh . /lib/lsb/init-functions # # Function that starts the daemon/service # do_start() { # Return # 0 if daemon has been started # 1 if daemon was already running # 2 if daemon could not be started start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \ || return 1 start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \ $DAEMON_ARGS \ || return 2 } # # Function that stops the daemon/service # do_stop() { # Return # 0 if daemon has been stopped # 1 if daemon was already stopped # 2 if daemon could not be stopped # other if a failure occurred start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE RETVAL="$?" 
[ "$RETVAL" = 2 ] && return 2 # Wait for children to finish too if this is a daemon that forks # and if the daemon is only ever run from this initscript. # If the above conditions are not satisfied then add some other code # that waits for the process to drop all resources that could be # needed by services started subsequently. A last resort is to # sleep for some time. start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON [ "$?" = 2 ] && return 2 rm -f $PIDFILE return "$RETVAL" } case "$1" in start) if [ "$RUN_DAEMON" != "yes" ]; then log_action_msg "Not starting $DESC (Disabled in $DEFAULTSFILE)." exit 0 fi [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME" do_start case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; stop) [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME" do_stop case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; status) status_of_proc -p "$PIDFILE" "$DAEMON" "$NAME" && exit 0 || exit $? ;; restart|force-reload) log_daemon_msg "Restarting $DESC" "$NAME" do_stop case "$?" in 0|1) do_start case "$?" in 0) log_end_msg 0 ;; 1) log_end_msg 1 ;; # Old process is still running *) log_end_msg 1 ;; # Failed to start esac ;; *) # Failed to stop log_end_msg 1 ;; esac ;; *) echo "Usage: $0 {start|stop|status|restart|force-reload}" >&2 exit 3 ;; esac : flashproxy-1.7/facilitator/init.d/fp-registrar-email.in000077500000000000000000000066161236350636700233170ustar00rootroot00000000000000#! /bin/sh ### BEGIN INIT INFO # Provides: fp-registrar-email # Required-Start: $remote_fs $syslog # Required-Stop: $remote_fs $syslog # Default-Start: 2 3 4 5 # Default-Stop: 0 1 6 # Short-Description: Flash proxy email rendezvous poller # Description: Debian init script for the flash proxy email rendezvous poller. ### END INIT INFO # # Author: David Fifield # # Based on /etc/init.d/skeleton from Debian 6. PATH=/sbin:/usr/sbin:/bin:/usr/bin:/usr/local/bin DESC="Flash proxy email rendezvous poller" NAME=fp-registrar-email prefix=@prefix@ exec_prefix=@exec_prefix@ PIDFILE=@localstatedir@/run/$NAME.pid LOGFILE=@localstatedir@/log/$NAME.log CONFDIR=@sysconfdir@/flashproxy PRIVDROP_USER=@fpfacilitatoruser@ DAEMON=@bindir@/$NAME DAEMON_ARGS="--pass $CONFDIR/reg-email.pass --log $LOGFILE --pidfile $PIDFILE --privdrop-user $PRIVDROP_USER" DEFAULTSFILE=@sysconfdir@/default/$NAME # Exit if the package is not installed [ -x "$DAEMON" ] || exit 0 # Read configuration variable file if it is present [ -r "$DEFAULTSFILE" ] && . "$DEFAULTSFILE" . /lib/init/vars.sh . /lib/lsb/init-functions [ "$UNSAFE_LOGGING" = "yes" ] && DAEMON_ARGS="$DAEMON_ARGS --unsafe-logging" # # Function that starts the daemon/service # do_start() { # Return # 0 if daemon has been started # 1 if daemon was already running # 2 if daemon could not be started start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON --test > /dev/null \ || return 1 start-stop-daemon --start --quiet --pidfile $PIDFILE --exec $DAEMON -- \ $DAEMON_ARGS \ || return 2 } # # Function that stops the daemon/service # do_stop() { # Return # 0 if daemon has been stopped # 1 if daemon was already stopped # 2 if daemon could not be stopped # other if a failure occurred start-stop-daemon --stop --quiet --retry=TERM/30/KILL/5 --pidfile $PIDFILE RETVAL="$?" 
[ "$RETVAL" = 2 ] && return 2 # Wait for children to finish too if this is a daemon that forks # and if the daemon is only ever run from this initscript. # If the above conditions are not satisfied then add some other code # that waits for the process to drop all resources that could be # needed by services started subsequently. A last resort is to # sleep for some time. start-stop-daemon --stop --quiet --oknodo --retry=0/30/KILL/5 --exec $DAEMON [ "$?" = 2 ] && return 2 rm -f $PIDFILE return "$RETVAL" } case "$1" in start) if [ "$RUN_DAEMON" != "yes" ]; then log_action_msg "Not starting $DESC (Disabled in $DEFAULTSFILE)." exit 0 fi [ "$VERBOSE" != no ] && log_daemon_msg "Starting $DESC" "$NAME" do_start case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; stop) [ "$VERBOSE" != no ] && log_daemon_msg "Stopping $DESC" "$NAME" do_stop case "$?" in 0|1) [ "$VERBOSE" != no ] && log_end_msg 0 ;; 2) [ "$VERBOSE" != no ] && log_end_msg 1 ;; esac ;; status) status_of_proc -p "$PIDFILE" "$DAEMON" "$NAME" && exit 0 || exit $? ;; restart|force-reload) log_daemon_msg "Restarting $DESC" "$NAME" do_stop case "$?" in 0|1) do_start case "$?" in 0) log_end_msg 0 ;; 1) log_end_msg 1 ;; # Old process is still running *) log_end_msg 1 ;; # Failed to start esac ;; *) # Failed to stop log_end_msg 1 ;; esac ;; *) echo "Usage: $0 {start|stop|status|restart|force-reload}" >&2 exit 3 ;; esac : flashproxy-1.7/facilitator/mkman.inc000077700000000000000000000000001236350636700216762../mkman.incustar00rootroot00000000000000flashproxy-1.7/facilitator/mkman.sh000077700000000000000000000000001236350636700214002../mkman.shustar00rootroot00000000000000flashproxy-1.7/flashproxy-client000077500000000000000000001305501236350636700172000ustar00rootroot00000000000000#!/usr/bin/env python """ The flashproxy client transport plugin. """ import argparse import BaseHTTPServer import array import base64 import cStringIO import flashproxy import os import os.path import select import socket import struct import subprocess import sys import threading import time import traceback from flashproxy.util import parse_addr_spec, addr_family, format_addr, safe_str, safe_format_addr from flashproxy.reg import DEFAULT_TRANSPORT from hashlib import sha1 try: import numpy except ImportError: numpy = None # Default local port in managed mode (choose one arbitrarily). DEFAULT_LOCAL_PORT_MANAGED = 0 # Default local port in external mode. DEFAULT_LOCAL_PORT_EXTERNAL = 9001 DEFAULT_REMOTE_PORT = 9000 DEFAULT_REGISTER_METHODS = ["appspot", "email", "http"] DEFAULT_PORT_FORWARDING_HELPER = "tor-fw-helper" # We will re-register if we have fewer than this many waiting proxies. The # facilitator may choose to ignore our requests. DESIRED_NUMBER_OF_PROXIES = 3 # We accept up to this many bytes from a socket not yet matched with a partner # before disconnecting it. 
UNCONNECTED_BUFFER_LIMIT = 10240 LOG_DATE_FORMAT = "%Y-%m-%d %H:%M:%S" class options(object): local_addrs = [] remote_addrs = [] register_addr = None managed = True daemonize = False log_filename = None log_file = sys.stdout pid_filename = None port_forwarding = False port_forwarding_helper = DEFAULT_PORT_FORWARDING_HELPER port_forwarding_external = None register = False register_commands = [] # registration options address_family = socket.AF_UNSPEC transport = DEFAULT_TRANSPORT safe_logging = True facilitator_url = None facilitator_pubkey_filename = None log_lock = threading.Lock() def log(msg): with log_lock: print >> options.log_file, (u"%s %s" % (time.strftime(LOG_DATE_FORMAT), msg)).encode("UTF-8") options.log_file.flush() def format_sockaddr(sockaddr): host, port = socket.getnameinfo(sockaddr, socket.NI_NUMERICHOST | socket.NI_NUMERICSERV) port = int(port) return format_addr((host, port)) def safe_format_sockaddr(sockaddr): return safe_str(format_sockaddr(sockaddr)) def safe_format_peername(s): try: return safe_format_sockaddr(s.getpeername()) except socket.error, e: return "" def apply_mask_numpy(payload, mask_key): if len(payload) == 0: return "" payload_a = numpy.frombuffer(payload, dtype="|u4", count=len(payload)//4) m, = numpy.frombuffer(mask_key, dtype="|u4", count=1) result = numpy.bitwise_xor(payload_a, m).tostring() i = len(payload) // 4 * 4 if i < len(payload): remainder = [] while i < len(payload): remainder.append(chr(ord(payload[i]) ^ ord(mask_key[i % 4]))) i += 1 result = result + "".join(remainder) return result def apply_mask_py(payload, mask_key): result = array.array("B", payload) m = array.array("B", mask_key) i = 0 while i < len(result) - 7: result[i] ^= m[0] result[i+1] ^= m[1] result[i+2] ^= m[2] result[i+3] ^= m[3] result[i+4] ^= m[0] result[i+5] ^= m[1] result[i+6] ^= m[2] result[i+7] ^= m[3] i += 8 while i < len(result): result[i] ^= m[i%4] i += 1 return result.tostring() if numpy is not None: apply_mask = apply_mask_numpy else: apply_mask = apply_mask_py class WebSocketFrame(object): def __init__(self): self.fin = False self.opcode = None self.payload = None def is_control(self): return (self.opcode & 0x08) != 0 class WebSocketMessage(object): def __init__(self): self.opcode = None self.payload = None def is_control(self): return (self.opcode & 0x08) != 0 class WebSocketDecoder(object): """RFC 6455 section 5 is about the WebSocket framing format.""" # Raise an exception rather than buffer anything larger than this. MAX_MESSAGE_LENGTH = 1024 * 1024 class MaskingError(ValueError): pass def __init__(self, use_mask = False): """use_mask should be True for server-to-client sockets, and False for client-to-server sockets.""" self.use_mask = use_mask # Per-frame state. self.buf = "" # Per-message state. self.message_buf = "" self.message_opcode = None def feed(self, data): self.buf += data def read_frame(self): """Read a frame from the internal buffer, if one is available. Returns a WebSocketFrame object, or None if there are no complete frames to read.""" # RFC 6255 section 5.2. 
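# The base frame header is two bytes (RFC 6455 section 5.2):
#   byte 0: FIN flag (0x80) plus a 4-bit opcode,
#   byte 1: MASK flag (0x80) plus a 7-bit payload length, where a length of
#           126 means a 16-bit extended length follows and 127 means a 64-bit
#           extended length follows; a 4-byte masking key follows the length
#           when MASK is set.
# The unpacking below follows this layout.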
if len(self.buf) < 2: return None offset = 0 b0, b1 = struct.unpack_from(">BB", self.buf, offset) offset += 2 fin = (b0 & 0x80) != 0 opcode = b0 & 0x0f frame_masked = (b1 & 0x80) != 0 payload_len = b1 & 0x7f if payload_len == 126: if len(self.buf) < offset + 2: return None payload_len, = struct.unpack_from(">H", self.buf, offset) offset += 2 elif payload_len == 127: if len(self.buf) < offset + 8: return None payload_len, = struct.unpack_from(">Q", self.buf, offset) offset += 8 if frame_masked: if not self.use_mask: # "A client MUST close a connection if it detects a masked # frame." raise self.MaskingError("Got masked payload from server") if len(self.buf) < offset + 4: return None mask_key = self.buf[offset:offset+4] offset += 4 else: if self.use_mask: # "The server MUST close the connection upon receiving a frame # that is not masked." raise self.MaskingError("Got unmasked payload from client") mask_key = None if payload_len > self.MAX_MESSAGE_LENGTH: raise ValueError("Refusing to buffer payload of %d bytes" % payload_len) if len(self.buf) < offset + payload_len: return None if mask_key: payload = apply_mask(self.buf[offset:offset+payload_len], mask_key) else: payload = self.buf[offset:offset+payload_len] self.buf = self.buf[offset+payload_len:] frame = WebSocketFrame() frame.fin = fin frame.opcode = opcode frame.payload = payload return frame def read_message(self): """Read a complete message. If the opcode is 1, the payload is decoded from a UTF-8 binary string to a unicode string. If a control frame is read while another fragmented message is in progress, the control frame is returned as a new message immediately. Returns None if there is no complete frame to be read.""" # RFC 6455 section 5.4 is about fragmentation. while True: frame = self.read_frame() if frame is None: return None # "Control frames (see Section 5.5) MAY be injected in the middle of # a fragmented message. Control frames themselves MUST NOT be # fragmented." 
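# For example (cf. test_interleaved_control in flashproxy-client-test.py),
# the unmasked byte stream
#   "\x01\x03Hel" "\x89\x04PING" "\x80\x02lo"
# yields a PING control message first, and then the complete text message
# u"Hello" once the final continuation fragment arrives.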
if frame.is_control(): if not frame.fin: raise ValueError("Control frame (opcode %d) has FIN bit clear" % frame.opcode) message = WebSocketMessage() message.opcode = frame.opcode message.payload = frame.payload return message if self.message_opcode is None: if frame.opcode == 0: raise ValueError("First frame has opcode 0") self.message_opcode = frame.opcode else: if frame.opcode != 0: raise ValueError("Non-first frame has nonzero opcode %d" % frame.opcode) if len(self.message_buf) + len(frame.payload) > self.MAX_MESSAGE_LENGTH: raise ValueError("Refusing to buffer payload of %d bytes" % (len(self.message_buf) + len(frame.payload))) self.message_buf += frame.payload if frame.fin: break message = WebSocketMessage() message.opcode = self.message_opcode message.payload = self.message_buf self.postprocess_message(message) self.message_opcode = None self.message_buf = "" return message def postprocess_message(self, message): if message.opcode == 1: message.payload = message.payload.decode("utf-8") return message class WebSocketEncoder(object): def __init__(self, use_mask = False): self.use_mask = use_mask def encode_frame(self, opcode, payload): if opcode >= 16: raise ValueError("Opcode of %d is >= 16" % opcode) length = len(payload) if self.use_mask: mask_key = os.urandom(4) payload = apply_mask(payload, mask_key) mask_bit = 0x80 else: mask_key = "" mask_bit = 0x00 if length < 126: len_b, len_ext = length, "" elif length < 0x10000: len_b, len_ext = 126, struct.pack(">H", length) elif length < 0x10000000000000000: len_b, len_ext = 127, struct.pack(">Q", length) else: raise ValueError("payload length of %d is too long" % length) return chr(0x80 | opcode) + chr(mask_bit | len_b) + len_ext + mask_key + payload def encode_message(self, opcode, payload): if opcode == 1: payload = payload.encode("utf-8") return self.encode_frame(opcode, payload) # WebSocket implementations generally support text (opcode 1) messages, which # are UTF-8-encoded text. Not all support binary (opcode 2) messages. During the # WebSocket handshake, we use the "base64" value of the Sec-WebSocket-Protocol # header field to indicate that text frames should encoded UTF-8-encoded # base64-encoded binary data. Binary messages are always interpreted verbatim, # but text messages are rejected if "base64" was not negotiated. # # The idea here is that browsers that know they don't support binary messages # can negotiate "base64" with both endpoints and still reliably transport binary # data. Those that know they can support binary messages can just use binary # messages in the straightforward way. class WebSocketBinaryDecoder(object): def __init__(self, protocols, use_mask = False): self.dec = WebSocketDecoder(use_mask) self.base64 = "base64" in protocols def feed(self, data): self.dec.feed(data) def read(self): """Returns None when there are currently no data to be read. Returns "" when a close message is received.""" while True: message = self.dec.read_message() if message is None: return None elif message.opcode == 1: if not self.base64: raise ValueError("Received text message on decoder incapable of base64") payload = base64.b64decode(message.payload) if payload: return payload elif message.opcode == 2: if message.payload: return message.payload elif message.opcode == 8: return "" # Ignore all other opcodes. 
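# For example, with protocols = ["base64"], a text (opcode 1) message whose
# payload is u"aGVsbG8=" is returned as the binary string "hello"; a binary
# (opcode 2) message is returned verbatim; and a close (opcode 8) message is
# returned as "".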
return None class WebSocketBinaryEncoder(object): def __init__(self, protocols, use_mask = False): self.enc = WebSocketEncoder(use_mask) self.base64 = "base64" in protocols def encode(self, data): if self.base64: return self.enc.encode_message(1, base64.b64encode(data)) else: return self.enc.encode_message(2, data) def listen_socket(addr): """Return a socket listening on the given address.""" addrinfo = socket.getaddrinfo(addr[0], addr[1], 0, socket.SOCK_STREAM, socket.IPPROTO_TCP)[0] s = socket.socket(addrinfo[0], addrinfo[1], addrinfo[2]) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) if addrinfo[0] == socket.AF_INET6 and socket.has_ipv6: # Set the IPV6_V6ONLY socket option, otherwise some operating systems # will listen on an IPv4 address as well as IPv6 by default. For # example, "::" will listen on both "::" and "0.0.0.0", and "::1" will # listen on both "::1" and "127.0.0.1". See # https://trac.torproject.org/projects/tor/ticket/4760. try: s.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 1) except AttributeError: # Python 2.7.3 on Windows does not define IPPROTO_IPV6; see # http://bugs.python.org/issue6926. IPV6_V6ONLY is the default # behavior on Windows anyway, so we can skip the setsockopt. pass except socket.error: # Seen on Windows XP: # socket.error: [Errno 109] Protocol not available pass s.bind(addr) s.listen(10) return s # How long to wait for a WebSocket request on the remote socket. It is limited # to avoid Slowloris-like attacks. WEBSOCKET_REQUEST_TIMEOUT = 2.0 # This subclass of BaseHTTPRequestHandler is essentially a means of parsing an # HTTP request. class WebSocketRequestHandler(BaseHTTPServer.BaseHTTPRequestHandler): def __init__(self, request_text, fd): self.rfile = cStringIO.StringIO(request_text) self.wfile = fd.makefile() self.error = False self.raw_requestline = self.rfile.readline() self.parse_request() def log_message(self, *args): pass def send_error(self, code, message = None): BaseHTTPServer.BaseHTTPRequestHandler.send_error(self, code, message) self.error = True MAGIC_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11" def handle_websocket_request(fd): try: request_text = fd.recv(10 * 1024) except socket.error, e: log(u"Socket error while receiving WebSocket request: %s" % repr(str(e))) return None handler = WebSocketRequestHandler(request_text, fd) if handler.error or not hasattr(handler, "path"): return None method = handler.command path = handler.path headers = handler.headers # See RFC 6455 section 4.2.1 for this sequence of checks. # # 1. An HTTP/1.1 or higher GET request, including a "Request-URI"... if method != "GET": handler.send_error(405) return None if path != "/": handler.send_error(404) return None # 2. A |Host| header field containing the server's authority. # We deliberately skip this test. # 3. An |Upgrade| header field containing the value "websocket", treated as # an ASCII case-insensitive value. upgrade = headers.get("upgrade") if upgrade is None: handler.send_error(400) return None if "websocket" not in [x.strip().lower() for x in upgrade.split(",")]: handler.send_error(400) return None # 4. A |Connection| header field that includes the token "Upgrade", treated # as an ASCII case-insensitive value. connection = headers.get("connection") if connection is None: handler.send_error(400) return None if "upgrade" not in [x.strip().lower() for x in connection.split(",")]: handler.send_error(400) return None # 5. A |Sec-WebSocket-Key| header field with a base64-encoded value that, # when decoded, is 16 bytes in length. 
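# For example, the test suite uses base64.b64encode("0123456789ABCDEF")
# ("MDEyMzQ1Njc4OUFCQ0RFRg=="), which decodes to exactly 16 bytes. The
# Sec-WebSocket-Accept value sent back below is
# base64.b64encode(sha1(key + MAGIC_GUID).digest()), where key is the
# base64 string from the header itself, not its decoded bytes.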
try: key = headers.get("sec-websocket-key") if len(base64.b64decode(key)) != 16: raise TypeError("Sec-WebSocket-Key must be 16 bytes") except TypeError: handler.send_error(400) return None # 6. A |Sec-WebSocket-Version| header field, with a value of 13. We also # allow 8 from draft-ietf-hybi-thewebsocketprotocol-10. version = headers.get("sec-websocket-version") KNOWN_VERSIONS = ["8", "13"] if version not in KNOWN_VERSIONS: # "If this version does not match a version understood by the server, # the server MUST abort the WebSocket handshake described in this # section and instead send an appropriate HTTP error code (such as 426 # Upgrade Required) and a |Sec-WebSocket-Version| header field # indicating the version(s) the server is capable of understanding." handler.send_response(426) handler.send_header("Sec-WebSocket-Version", ", ".join(KNOWN_VERSIONS)) handler.end_headers() return None # 7. Optionally, an |Origin| header field. # 8. Optionally, a |Sec-WebSocket-Protocol| header field, with a list of # values indicating which protocols the client would like to speak, ordered # by preference. protocols_str = headers.get("sec-websocket-protocol") if protocols_str is None: protocols = [] else: protocols = [x.strip().lower() for x in protocols_str.split(",")] # 9. Optionally, a |Sec-WebSocket-Extensions| header field... # 10. Optionally, other header fields... # See RFC 6455 section 4.2.2, item 5 for these steps. # 1. A Status-Line with a 101 response code as per RFC 2616. handler.send_response(101) # 2. An |Upgrade| header field with value "websocket" as per RFC 2616. handler.send_header("Upgrade", "websocket") # 3. A |Connection| header field with value "Upgrade". handler.send_header("Connection", "Upgrade") # 4. A |Sec-WebSocket-Accept| header field. The value of this header field # is constructed by concatenating /key/, defined above in step 4 in Section # 4.2.2, with the string "258EAFA5-E914-47DA-95CA-C5AB0DC85B11", taking the # SHA-1 hash of this concatenated value to obtain a 20-byte value and # base64-encoding (see Section 4 of [RFC4648]) this 20-byte hash. accept_key = base64.b64encode(sha1(key + MAGIC_GUID).digest()) handler.send_header("Sec-WebSocket-Accept", accept_key) # 5. Optionally, a |Sec-WebSocket-Protocol| header field, with a value # /subprotocol/ as defined in step 4 in Section 4.2.2. if "base64" in protocols: handler.send_header("Sec-WebSocket-Protocol", "base64") # 6. Optionally, a |Sec-WebSocket-Extensions| header field... handler.end_headers() return protocols def grab_string(s, pos): """Grab a NUL-terminated string from the given string, starting at the given offset. Return (pos, str) tuple, or (pos, None) on error.""" i = pos while i < len(s): if s[i] == '\0': return (i + 1, s[pos:i]) i += 1 return pos, None # http://ftp.icm.edu.pl/packages/socks/socks4/SOCKS4.protocol # https://en.wikipedia.org/wiki/SOCKS#SOCKS4a def parse_socks_request(data): """Parse the 8-byte SOCKS header at the beginning of data. Returns a (dest, port) tuple. 
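    For example (as exercised by flashproxy-client-test.py):
      parse_socks_request("\x04\x01\x99\x99\x01\x02\x03\x04userid\x00")
        returns ("1.2.3.4", 0x9999), and
      parse_socks_request("\x04\x01\x99\x99\x00\x00\x00\x01userid\x00abc\x00")
        returns ("abc", 0x9999) (the SOCKS4a hostname form).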
Raises ValueError on error.""" try: ver, cmd, dport, o1, o2, o3, o4 = struct.unpack(">BBHBBBB", data[:8]) except struct.error: raise ValueError("Couldn't unpack SOCKS4 header") if ver != 4: raise ValueError("Wrong SOCKS version (%d)" % ver) if cmd != 1: raise ValueError("Wrong SOCKS command (%d)" % cmd) pos, userid = grab_string(data, 8) if userid is None: raise ValueError("Couldn't read userid") if o1 == 0 and o2 == 0 and o3 == 0 and o4 != 0: pos, dest = grab_string(data, pos) if dest is None: raise ValueError("Couldn't read destination") else: dest = "%d.%d.%d.%d" % (o1, o2, o3, o4) return dest, dport def handle_socks_request(fd): try: addr = fd.getpeername() data = fd.recv(100) except socket.error, e: log(u"Socket error from SOCKS-pending: %s" % repr(str(e))) return False try: dest_addr = parse_socks_request(data) except ValueError, e: log(u"Error parsing SOCKS request: %s." % str(e)) # Error reply. fd.sendall(struct.pack(">BBHBBBB", 0, 91, 0, 0, 0, 0, 0)) return False log(u"Got SOCKS request for %s." % safe_format_addr(dest_addr)) fd.sendall(struct.pack(">BBHBBBB", 0, 90, dest_addr[1], 127, 0, 0, 1)) # Note we throw away the requested address and port. return True def report_pending(): log(u"locals (%d): %s" % (len(locals), [safe_format_peername(x) for x in locals])) log(u"remotes (%d): %s" % (len(remotes), [safe_format_peername(x) for x in remotes])) def forward_ports(pairs): """Attempt to forward all given pairs (external, internal) pairs of ports using port_forwarding_helper.""" command = [options.port_forwarding_helper] basename = os.path.basename(command[0]) for external, internal in pairs: command += ["-p", "%d:%d" % (external, internal)] try: log(u"Running port forwarding command: %s" % " ".join(command)) p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() except OSError, e: log(u"Error running %s: %s" % (basename, str(e))) return False for line in stdout.splitlines(): log(u"%s: %s" % (basename, line)) for line in stderr.splitlines(): log(u"%s: %s" % (basename, line)) if p.returncode != 0: log("%s exited with status %d." % (basename, p.returncode)) return False return True def forward_listeners(listeners): """Attempt to forward the ports belonging to the given listening sockets. Non-IPv4 addresses are ignored. If options.port_forwarding_external is not None, only the first IPv4 address in the list will be forwarded.""" forward_list = [] for listener in remote_listen: host, port = socket.getnameinfo(listener.getsockname(), socket.NI_NUMERICHOST | socket.NI_NUMERICSERV) port = int(port) af = addr_family(host) if af != socket.AF_INET: # I guess tor-fw-helper can only handle IPv4. log(u"Not forwarding to %s because it is not an IPv4 address." % format_addr((host, port))) continue if options.port_forwarding_external is not None: forward_list.append((options.port_forwarding_external, port)) # A fixed external address means we can forward only one port. break else: forward_list.append((port, port)) forward_ports(forward_list) register_condvar = threading.Condition() # register_flag true means registration_thread_func should register at its next # opportunity. 
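# A sketch of the flow (helper names come from build_register_command below;
# the exact arguments depend on the options given): proxy_loop() calls
# register(), which sets this flag; registration_thread_func() wakes up and,
# if fewer than DESIRED_NUMBER_OF_PROXIES remotes are waiting, runs one of
# options.register_commands with the address spec appended, e.g.
#   flashproxy-reg-http --transport websocket :9000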
register_flag = False def register(): global register_flag if not options.register: return with register_condvar: register_flag = True register_condvar.notify() def register_using_command(command): basename = os.path.basename(command[0]) try: log(u"Running command: %s" % " ".join(command)) p = subprocess.Popen(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE) stdout, stderr = p.communicate() except OSError, e: log(u"Error running %s: %s" % (basename, str(e))) return False for line in stdout.splitlines(): log(u"%s: %s" % (basename, line)) for line in stderr.splitlines(): log(u"%s: %s" % (basename, line)) if p.returncode != 0: log("%s exited with status %d." % (basename, p.returncode)) return False return True def register_one(): spec = format_addr(options.register_addr) log(u"Trying to register \"%s\"." % spec) for command in options.register_commands: if register_using_command(command + [spec]): break else: log(u"All registration commands failed.") def registration_thread_func(): global register_flag while True: with register_condvar: while not register_flag: register_condvar.wait() register_flag = False if len(unlinked_remotes) < DESIRED_NUMBER_OF_PROXIES: register_one() def proxy_chunk_local_to_remote(local, remote, data = None): if data is None: try: data = local.recv(65536) except socket.error, e: # Can be "Connection reset by peer". log(u"Socket error from local: %s" % repr(str(e))) remote.close() return False if not data: log(u"EOF from local %s." % safe_format_peername(local)) local.close() remote.close() return False else: try: remote.send_chunk(data) except socket.error, e: log(u"Socket error writing to remote: %s" % repr(str(e))) local.close() return False return True def proxy_chunk_remote_to_local(remote, local, data = None): if data is None: try: data = remote.recv(65536) except socket.error, e: # Can be "Connection reset by peer". log(u"Socket error from remote: %s" % repr(str(e))) local.close() return False if not data: log(u"EOF from remote %s." % safe_format_peername(remote)) remote.close() local.close() return False else: remote.dec.feed(data) while True: try: data = remote.dec.read() except (WebSocketDecoder.MaskingError, ValueError), e: log(u"WebSocket decode error from remote: %s" % repr(str(e))) remote.close() local.close() return False if data is None: break elif not data: log(u"WebSocket close from remote %s." % safe_format_peername(remote)) remote.close() local.close() return False try: local.send_chunk(data) except socket.error, e: log(u"Socket error writing to local: %s" % repr(str(e))) remote.close() return False return True def receive_unlinked(fd, label): """Receive and buffer data on a socket that has not been linked yet. Returns True iff there was no error and the socket may still be used; otherwise, the socket will be closed before returning.""" try: data = fd.recv(1024) except socket.error, e: log(u"Socket error from %s: %s" % (label, repr(str(e)))) fd.close() return False if not data: log(u"EOF from unlinked %s %s with %d bytes buffered." % (label, safe_format_peername(fd), len(fd.buf))) fd.close() return False else: log(u"Data from unlinked %s %s (%d bytes)." % (label, safe_format_peername(fd), len(data))) fd.buf += data if len(fd.buf) >= UNCONNECTED_BUFFER_LIMIT: log(u"Refusing to buffer more than %d bytes from %s %s." 
% (UNCONNECTED_BUFFER_LIMIT, label, safe_format_peername(fd))) fd.close() return False return True def match_proxies(): while unlinked_remotes and unlinked_locals: remote = unlinked_remotes.pop(0) local = unlinked_locals.pop(0) log(u"Linking %s and %s." % (safe_format_peername(local), safe_format_peername(remote))) remote.partner = local local.partner = remote if remote.buf: if not proxy_chunk_remote_to_local(remote, local, remote.buf): remotes.remove(remote) locals.remove(local) register() return if local.buf: if not proxy_chunk_local_to_remote(local, remote, local.buf): remotes.remove(remote) locals.remove(local) return class TimeoutSocket(object): def __init__(self, fd): self.fd = fd self.birthday = time.time() def age(self): return time.time() - self.birthday def __getattr__(self, name): return getattr(self.fd, name) class RemoteSocket(object): def __init__(self, fd, protocols): self.fd = fd self.buf = "" self.partner = None self.dec = WebSocketBinaryDecoder(protocols, use_mask = True) self.enc = WebSocketBinaryEncoder(protocols, use_mask = False) def send_chunk(self, data): self.sendall(self.enc.encode(data)) def __getattr__(self, name): return getattr(self.fd, name) class LocalSocket(object): def __init__(self, fd): self.fd = fd self.buf = "" self.partner = None def send_chunk(self, data): self.sendall(data) def __getattr__(self, name): return getattr(self.fd, name) def proxy_loop(): while True: rset = remote_listen + local_listen + websocket_pending + socks_pending + locals + remotes rset, _, _ = select.select(rset, [], [], WEBSOCKET_REQUEST_TIMEOUT) for fd in rset: if fd in remote_listen: remote_c, addr = fd.accept() log(u"Remote connection from %s." % safe_format_sockaddr(addr)) websocket_pending.append(TimeoutSocket(remote_c)) elif fd in local_listen: local_c, addr = fd.accept() log(u"Local connection from %s." % safe_format_sockaddr(addr)) socks_pending.append(local_c) register() elif fd in websocket_pending: log(u"Data from WebSocket-pending %s." % safe_format_peername(fd)) protocols = handle_websocket_request(fd) if protocols is not None: wrapped = RemoteSocket(fd, protocols) remotes.append(wrapped) unlinked_remotes.append(wrapped) else: fd.close() register() websocket_pending.remove(fd) report_pending() elif fd in socks_pending: log(u"SOCKS request from %s." % safe_format_peername(fd)) if handle_socks_request(fd): wrapped = LocalSocket(fd) locals.append(wrapped) unlinked_locals.append(wrapped) else: fd.close() socks_pending.remove(fd) report_pending() elif fd in remotes: local = fd.partner if local: if not proxy_chunk_remote_to_local(fd, local): remotes.remove(fd) locals.remove(local) register() else: if not receive_unlinked(fd, "remote"): remotes.remove(fd) unlinked_remotes.remove(fd) register() report_pending() elif fd in locals: remote = fd.partner if remote: if not proxy_chunk_local_to_remote(fd, remote): remotes.remove(remote) locals.remove(fd) else: if not receive_unlinked(fd, "local"): locals.remove(fd) unlinked_locals.remove(fd) report_pending() match_proxies() while websocket_pending: pending = websocket_pending[0] if pending.age() < WEBSOCKET_REQUEST_TIMEOUT: break log(u"Expired remote connection from %s." % safe_format_peername(pending)) pending.close() websocket_pending.pop(0) report_pending() def build_register_command(method): # sys.path[0] usually contains the directory the script is located in. # py2exe overwrites this for bundled executables. 
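# For example (an illustrative result, not a literal transcript): with the
# default options, build_register_command("email") returns roughly
#   [os.path.join(script_dir, "flashproxy-reg-email"), "--transport", "websocket"]
# and register_one() appends the address spec before running it.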
if getattr(sys, "frozen", False): script_dir = os.path.dirname(sys.executable) else: script_dir = sys.path[0] if not script_dir: # Maybe the script was read from stdin; in any case don't guess at the directory. raise ValueError("Can't find executable directory for registration helpers") if method not in ("http", "appspot", "email"): raise ValueError("Unknown registration method \"%s\"" % method) args = [] # facilitator selection if method == "http": if options.facilitator_url is not None: args += ["-f", options.facilitator_url] else: if options.facilitator_pubkey_filename is not None: args += ["--facilitator-pubkey", options.facilitator_pubkey_filename] # options shared by every registration helper. if options.address_family == socket.AF_INET: args += ["-4"] elif options.address_family == socket.AF_INET6: args += ["-6"] if options.transport is not None: args += ["--transport", options.transport] if not options.safe_logging: args += ["--unsafe-logging"] prog = os.path.join(script_dir, "flashproxy-reg-%s" % method) return [prog] + args def pt_escape(s): result = [] for c in s: if c == "\n": result.append("\\n") elif c == "\\": result.append("\\\\") elif 0 < ord(c) < 128: result.append(c) else: result.append("\\x%02x" % ord(c)) return "".join(result) def pt_line(keyword, *args): log(keyword + " " + " ".join(pt_escape(x) for x in args)) print keyword, " ".join(pt_escape(x) for x in args) sys.stdout.flush() def pt_cmethoderror(msg): pt_line("CMETHOD-ERROR", msg) sys.exit(1) def pt_get_client_transports(known, wildcard = None): result = [] if os.environ.get("TOR_PT_CLIENT_TRANSPORTS") == "*": if wildcard is None: wildcard = known return wildcard for method in os.environ.get("TOR_PT_CLIENT_TRANSPORTS", "").split(","): if method in known: result.append(method) return result def pt_setup_managed(): TOR_PT_MANAGED_TRANSPORT_VER = os.environ.get("TOR_PT_MANAGED_TRANSPORT_VER") if TOR_PT_MANAGED_TRANSPORT_VER is None: pt_line("VERSION-ERROR", "no-version") print >> sys.stderr, """\ No TOR_PT_MANAGED_TRANSPORT_VER found in environment. If you are running flashproxy-client from the command line and not from a ClientTransportPlugin configuration line, you must use the --external option.\ """ sys.exit(1) for ver in TOR_PT_MANAGED_TRANSPORT_VER.split(","): if ver == "1": pt_line("VERSION", ver) break else: pt_line("VERSION-ERROR", "no-version") sys.exit(1) client_transports = pt_get_client_transports(["flashproxy", "websocket"], ["flashproxy"]) if not client_transports: pt_cmethods_done() sys.exit(1) return client_transports def pt_cmethod(method_name, addr): pt_line("CMETHOD", method_name, "socks4", format_sockaddr(addr)) def pt_cmethods_done(): pt_line("CMETHODS", "DONE") def main(): global remote_listen, local_listen global locals, remotes global websocket_pending, socks_pending global unlinked_locals, unlinked_remotes parser = argparse.ArgumentParser( usage="%(prog)s --register [OPTIONS] [LOCAL][:PORT] [REMOTE][:PORT]", formatter_class=argparse.RawDescriptionHelpFormatter, description="""\ Wait for connections on a local and a remote port. When any pair of connections exists, data is ferried between them until one side is closed. The local connection acts as a SOCKS4a proxy, but the host and port in the SOCKS request are ignored and the local connection is always linked to a remote connection. By default, runs as a managed proxy: informs a parent Tor process of support for the "flashproxy" or "websocket" pluggable transport. 
In managed mode, the LOCAL port is chosen arbitrarily instead of the default; this can be overridden by including a LOCAL port in the command. This is the way the program should be invoked in a torrc ClientTransportPlugin "exec" line. Use the --external option to run as an external proxy that does not interact with Tor. If any of the --register, --register-addr, or --register-methods options are used, then your IP address will be sent to the facilitator so that proxies can connect to you. You need to register in some way in order to get any service. The --facilitator option allows controlling which facilitator is used; if omitted, it uses a public default.""", epilog="""\ The -4, -6, --unsafe-logging, --transport and --facilitator-pubkey options are propagated to the child registration helpers. For backwards compatilibility, the --facilitator option is also propagated to the http registration helper. If you need to pass more options, use TODO #9976.""") flashproxy.util.add_module_opts(parser) flashproxy.reg.add_module_opts(parser) parser.add_argument("-f", "--facilitator", metavar="URL", help="register with the facilitator at this URL, default %(default)s. " "This is passed to the http registration ONLY.") # specific opts and args parser.add_argument("--daemon", help="daemonize (Unix only).", action="store_true") parser.add_argument("--external", help="be an external (non-managed) proxy - don't interact with Tor " "using environment variables and stdout.", action="store_true") parser.add_argument("-l", "--log", metavar="FILENAME", help="write log to FILENAME (default stderr).") parser.add_argument("--pidfile", metavar="FILENAME", help="write PID to FILENAME after daemonizing.") parser.add_argument("--port-forwarding", help="attempt to forward REMOTE port.", action="store_true") parser.add_argument("--port-forwarding-helper", metavar="PROGRAM", help="use the given PROGRAM to forward ports, default %s. Implies " "--port-forwarding." % DEFAULT_PORT_FORWARDING_HELPER) parser.add_argument("--port-forwarding-external", metavar="PORT", help="forward the external PORT to REMOTE on the local host, default " "same as the REMOTE. Implies --port-forwarding.", type=int) parser.add_argument("-r", "--register", help="register with the facilitator.", action="store_true") parser.add_argument("--register-addr", metavar="ADDR", help="register the given address (in case it differs from REMOTE). " "Implies --register.") parser.add_argument("--register-methods", metavar="METHOD[,METHOD...]", help="register using the given comma-separated list of methods. " "Implies --register. Possible methods are appspot,email,http. Default " "is %s." % ",".join(DEFAULT_REGISTER_METHODS), type=lambda x: None if x is None else x.split(",") if x else []) parser.add_argument("local_addr", metavar="LOCAL:PORT", help="local addr+port to listen on, default all localhost addresses on " "port %s. In managed mode, the port is chosen arbitrarily if not given." 
% DEFAULT_LOCAL_PORT_EXTERNAL, default="", nargs="?") parser.add_argument("remote_addr", metavar="REMOTE:PORT", help="remote addr+port to listen on, default all addresses on port %s" % DEFAULT_REMOTE_PORT, default="", nargs="?") ns = parser.parse_args(sys.argv[1:]) # set registration options options.address_family = ns.address_family options.transport = ns.transport options.safe_logging = not ns.unsafe_logging options.facilitator_url = ns.facilitator options.facilitator_pubkey_filename = ns.facilitator_pubkey options.managed = not ns.external # do registration if any of the register options were set do_register = (ns.register or ns.register_addr is not None or ns.register_methods is not None) # do port forwarding if any of the port-forwarding options were set do_port_forwarding = (ns.port_forwarding or ns.port_forwarding_helper is not None or ns.port_forwarding_external is not None) options.log_filename = ns.log options.daemonize = ns.daemon options.pid_filename = ns.pidfile if options.log_filename: options.log_file = open(options.log_filename, "a") # Send error tracebacks to the log. sys.stderr = options.log_file else: options.log_file = sys.stderr if options.managed: method_names = pt_setup_managed() else: method_names = ["flashproxy"] if options.managed: default_local_port = DEFAULT_LOCAL_PORT_MANAGED else: default_local_port = DEFAULT_LOCAL_PORT_EXTERNAL default_remote_port = DEFAULT_REMOTE_PORT local_addr = parse_addr_spec(ns.local_addr, defhost="", defport=default_local_port) remote_addr = parse_addr_spec(ns.remote_addr, defhost="", defport=default_remote_port) if local_addr[0]: options.local_addrs.append(local_addr) else: options.local_addrs.append(("127.0.0.1", local_addr[1])) # Listen on both IPv4 and IPv6 if no host is given, unless we are in # managed mode. if not options.managed and socket.has_ipv6: options.local_addrs.append(("::1", local_addr[1])) if remote_addr[0]: options.remote_addrs.append(remote_addr) else: options.remote_addrs.append(("0.0.0.0", remote_addr[1])) if socket.has_ipv6: options.remote_addrs.append(("::", remote_addr[1])) # Determine registration info if requested. options.register = do_register register_addr_spec = ns.register_addr register_methods = ns.register_methods if not register_methods: register_methods = DEFAULT_REGISTER_METHODS for method in register_methods: options.register_commands.append(build_register_command(method)) options.port_forwarding = do_port_forwarding options.port_forwarding_helper = ns.port_forwarding_helper or DEFAULT_PORT_FORWARDING_HELPER options.port_forwarding_external = ns.port_forwarding_external # Remote sockets, accepting remote WebSocket connections from proxies. remote_listen = [] for addr in options.remote_addrs: try: listen = listen_socket(addr) except socket.error, e: log(u"Failed to listen remote on %s: %s." % (addr, str(e))) continue remote_listen.append(listen) log(u"Listening remote on %s." % format_sockaddr(listen.getsockname())) if options.register_addr is None: host, port = socket.getnameinfo(listen.getsockname(), socket.NI_NUMERICHOST | socket.NI_NUMERICSERV) port = int(port) if not remote_addr[0]: # Make host part blank (not 0.0.0.0) if unspecified. 
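# (That way the registration carries a spec like ":9000" with the host left
# blank, and the registrar fills in the missing host from the address it sees
# the registration arrive from -- see defhost in fp-registrar.cgi's do_post().
# A literal "0.0.0.0" would be registered as-is, which is useless to proxies.)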
host = "" options.register_addr = parse_addr_spec(register_addr_spec or ":", host, port) if not remote_listen: log(u"Failed to open any remote listeners, quitting.") pt_cmethoderror("Failed to open any remote listeners.") # Local sockets, accepting SOCKS requests from localhost local_listen = [] for addr in options.local_addrs: for method_name in method_names: try: listen = listen_socket(addr) except socket.error, e: log(u"Failed to listen local on %s: %s." % (addr, str(e))) continue local_listen.append(listen) log(u"Listening local on %s." % format_sockaddr(listen.getsockname())) if options.managed: pt_cmethod(method_name, listen.getsockname()) if not local_listen: log(u"Failed to open any local listeners, quitting.") pt_cmethoderror("Failed to open any local listeners.") if options.managed: pt_cmethods_done() # Attempt to forward ports if requested. if options.port_forwarding: forward_listeners(remote_listen) # New remote sockets waiting to finish their WebSocket negotiation. websocket_pending = [] # Remote connection sockets. remotes = [] # Remotes not yet linked with a local. This is a subset of remotes. unlinked_remotes = [] # New local sockets waiting to finish their SOCKS negotiation. socks_pending = [] # Local Tor sockets, after SOCKS negotiation. locals = [] # Locals not yet linked with a remote. This is a subset of remotes. unlinked_locals = [] if options.daemonize: log(u"Daemonizing.") pid = os.fork() if pid != 0: if options.pid_filename: f = open(options.pid_filename, "w") print >> f, pid f.close() sys.exit(0) if options.register: registration_thread = threading.Thread(target=registration_thread_func, name="register") registration_thread.daemon = True registration_thread.start() register() try: proxy_loop() except Exception: exc = traceback.format_exc() log("".join(exc)) if __name__ == "__main__": main() flashproxy-1.7/flashproxy-client-test.py000077500000000000000000000353431236350636700206100ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- import base64 import cStringIO import httplib import socket import subprocess import sys import unittest try: from hashlib import sha1 except ImportError: # Python 2.4 uses this name. from sha import sha as sha1 # Special tricks to load a module whose filename contains a dash and doesn't end # in ".py". 
import imp dont_write_bytecode = sys.dont_write_bytecode sys.dont_write_bytecode = True fp_client = imp.load_source("fp_client", "flashproxy-client") parse_socks_request = fp_client.parse_socks_request handle_websocket_request = fp_client.handle_websocket_request WebSocketDecoder = fp_client.WebSocketDecoder WebSocketEncoder = fp_client.WebSocketEncoder sys.dont_write_bytecode = dont_write_bytecode del dont_write_bytecode del fp_client LOCAL_ADDRESS = ("127.0.0.1", 40000) REMOTE_ADDRESS = ("127.0.0.1", 40001) class TestSocks(unittest.TestCase): def test_parse_socks_request_empty(self): self.assertRaises(ValueError, parse_socks_request, "") def test_parse_socks_request_short(self): self.assertRaises(ValueError, parse_socks_request, "\x04\x01\x99\x99\x01\x02\x03\x04") def test_parse_socks_request_ip_userid_missing(self): dest, port = parse_socks_request("\x04\x01\x99\x99\x01\x02\x03\x04\x00") dest, port = parse_socks_request("\x04\x01\x99\x99\x01\x02\x03\x04\x00userid") self.assertEqual((dest, port), ("1.2.3.4", 0x9999)) def test_parse_socks_request_ip(self): dest, port = parse_socks_request("\x04\x01\x99\x99\x01\x02\x03\x04userid\x00") self.assertEqual((dest, port), ("1.2.3.4", 0x9999)) def test_parse_socks_request_hostname_missing(self): self.assertRaises(ValueError, parse_socks_request, "\x04\x01\x99\x99\x00\x00\x00\x01userid\x00") self.assertRaises(ValueError, parse_socks_request, "\x04\x01\x99\x99\x00\x00\x00\x01userid\x00abc") def test_parse_socks_request_hostname(self): dest, port = parse_socks_request("\x04\x01\x99\x99\x00\x00\x00\x01userid\x00abc\x00") class DummySocket(object): def __init__(self, read_fd, write_fd): self.read_fd = read_fd self.write_fd = write_fd self.readp = 0 def read(self, *args, **kwargs): self.read_fd.seek(self.readp, 0) data = self.read_fd.read(*args, **kwargs) self.readp = self.read_fd.tell() return data def readline(self, *args, **kwargs): self.read_fd.seek(self.readp, 0) data = self.read_fd.readline(*args, **kwargs) self.readp = self.read_fd.tell() return data def recv(self, size, *args, **kwargs): return self.read(size) def write(self, data): self.write_fd.seek(0, 2) self.write_fd.write(data) def send(self, data, *args, **kwargs): return self.write(data) def sendall(self, data, *args, **kwargs): return self.write(data) def makefile(self, *args, **kwargs): return self def dummy_socketpair(): f1 = cStringIO.StringIO() f2 = cStringIO.StringIO() return (DummySocket(f1, f2), DummySocket(f2, f1)) class HTTPRequest(object): def __init__(self): self.method = "GET" self.path = "/" self.headers = {} def transact_http(req): l, r = dummy_socketpair() r.send("%s %s HTTP/1.0\r\n" % (req.method, req.path)) for k, v in req.headers.items(): r.send("%s: %s\r\n" % (k, v)) r.send("\r\n") protocols = handle_websocket_request(l) resp = httplib.HTTPResponse(r) resp.begin() return resp, protocols class TestHandleWebSocketRequest(unittest.TestCase): DEFAULT_KEY = "0123456789ABCDEF" DEFAULT_KEY_BASE64 = base64.b64encode(DEFAULT_KEY) MAGIC_GUID = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11" @staticmethod def default_req(): req = HTTPRequest() req.method = "GET" req.path = "/" req.headers["Upgrade"] = "websocket" req.headers["Connection"] = "Upgrade" req.headers["Sec-WebSocket-Key"] = TestHandleWebSocketRequest.DEFAULT_KEY_BASE64 req.headers["Sec-WebSocket-Version"] = "13" return req def assert_ok(self, req): resp, protocols = transact_http(req) self.assertEqual(resp.status, 101) self.assertEqual(resp.getheader("Upgrade").lower(), "websocket") 
self.assertEqual(resp.getheader("Connection").lower(), "upgrade") self.assertEqual(resp.getheader("Sec-WebSocket-Accept"), base64.b64encode(sha1(self.DEFAULT_KEY_BASE64 + self.MAGIC_GUID).digest())) self.assertEqual(protocols, []) def assert_not_ok(self, req): resp, protocols = transact_http(req) self.assertEqual(resp.status // 100, 4) self.assertEqual(protocols, None) def test_default(self): req = self.default_req() self.assert_ok(req) def test_missing_upgrade(self): req = self.default_req() del req.headers["Upgrade"] self.assert_not_ok(req) def test_missing_connection(self): req = self.default_req() del req.headers["Connection"] self.assert_not_ok(req) def test_case_insensitivity(self): """Test that the values of the Upgrade and Connection headers are case-insensitive.""" req = self.default_req() req.headers["Upgrade"] = req.headers["Upgrade"].lower() self.assert_ok(req) req.headers["Upgrade"] = req.headers["Upgrade"].upper() self.assert_ok(req) req.headers["Connection"] = req.headers["Connection"].lower() self.assert_ok(req) req.headers["Connection"] = req.headers["Connection"].upper() self.assert_ok(req) def test_bogus_key(self): req = self.default_req() req.headers["Sec-WebSocket-Key"] = base64.b64encode(self.DEFAULT_KEY[:-1]) self.assert_not_ok(req) req.headers["Sec-WebSocket-Key"] = "///" self.assert_not_ok(req) def test_versions(self): req = self.default_req() req.headers["Sec-WebSocket-Version"] = "13" self.assert_ok(req) req.headers["Sec-WebSocket-Version"] = "8" self.assert_ok(req) req.headers["Sec-WebSocket-Version"] = "7" self.assert_not_ok(req) req.headers["Sec-WebSocket-Version"] = "9" self.assert_not_ok(req) del req.headers["Sec-WebSocket-Version"] self.assert_not_ok(req) def test_protocols(self): req = self.default_req() req.headers["Sec-WebSocket-Protocol"] = "base64" resp, protocols = transact_http(req) self.assertEqual(resp.status, 101) self.assertEqual(protocols, ["base64"]) self.assertEqual(resp.getheader("Sec-WebSocket-Protocol"), "base64") req = self.default_req() req.headers["Sec-WebSocket-Protocol"] = "cat" resp, protocols = transact_http(req) self.assertEqual(resp.status, 101) self.assertEqual(protocols, ["cat"]) self.assertEqual(resp.getheader("Sec-WebSocket-Protocol"), None) req = self.default_req() req.headers["Sec-WebSocket-Protocol"] = "cat, base64" resp, protocols = transact_http(req) self.assertEqual(resp.status, 101) self.assertEqual(protocols, ["cat", "base64"]) self.assertEqual(resp.getheader("Sec-WebSocket-Protocol"), "base64") def read_frames(dec): frames = [] while True: frame = dec.read_frame() if frame is None: break frames.append((frame.fin, frame.opcode, frame.payload)) return frames def read_messages(dec): messages = [] while True: message = dec.read_message() if message is None: break messages.append((message.opcode, message.payload)) return messages class TestWebSocketDecoder(unittest.TestCase): def test_rfc(self): """Test samples from RFC 6455 section 5.7.""" TESTS = [ ("\x81\x05\x48\x65\x6c\x6c\x6f", False, [(True, 1, "Hello")], [(1, u"Hello")]), ("\x81\x85\x37\xfa\x21\x3d\x7f\x9f\x4d\x51\x58", True, [(True, 1, "Hello")], [(1, u"Hello")]), ("\x01\x03\x48\x65\x6c\x80\x02\x6c\x6f", False, [(False, 1, "Hel"), (True, 0, "lo")], [(1, u"Hello")]), ("\x89\x05\x48\x65\x6c\x6c\x6f", False, [(True, 9, "Hello")], [(9, u"Hello")]), ("\x8a\x85\x37\xfa\x21\x3d\x7f\x9f\x4d\x51\x58", True, [(True, 10, "Hello")], [(10, u"Hello")]), ("\x82\x7e\x01\x00" + "\x00" * 256, False, [(True, 2, "\x00" * 256)], [(2, "\x00" * 256)]), 
("\x82\x7f\x00\x00\x00\x00\x00\x01\x00\x00" + "\x00" * 65536, False, [(True, 2, "\x00" * 65536)], [(2, "\x00" * 65536)]), ("\x82\x7f\x00\x00\x00\x00\x00\x01\x00\x03" + "ABCD" * 16384 + "XYZ", False, [(True, 2, "ABCD" * 16384 + "XYZ")], [(2, "ABCD" * 16384 + "XYZ")]), ] for data, use_mask, expected_frames, expected_messages in TESTS: dec = WebSocketDecoder(use_mask = use_mask) dec.feed(data) actual_frames = read_frames(dec) self.assertEqual(actual_frames, expected_frames) dec = WebSocketDecoder(use_mask = use_mask) dec.feed(data) actual_messages = read_messages(dec) self.assertEqual(actual_messages, expected_messages) dec = WebSocketDecoder(use_mask = not use_mask) dec.feed(data) self.assertRaises(WebSocketDecoder.MaskingError, dec.read_frame) def test_empty_feed(self): """Test that the decoder can handle a zero-byte feed.""" dec = WebSocketDecoder() self.assertEqual(dec.read_frame(), None) dec.feed("") self.assertEqual(dec.read_frame(), None) dec.feed("\x81\x05H") self.assertEqual(dec.read_frame(), None) dec.feed("ello") self.assertEqual(read_frames(dec), [(True, 1, u"Hello")]) def test_empty_frame(self): """Test that a frame may contain a zero-byte payload.""" dec = WebSocketDecoder() dec.feed("\x81\x00") self.assertEqual(read_frames(dec), [(True, 1, u"")]) dec.feed("\x82\x00") self.assertEqual(read_frames(dec), [(True, 2, "")]) def test_empty_message(self): """Test that a message may have a zero-byte payload.""" dec = WebSocketDecoder() dec.feed("\x01\x00\x00\x00\x80\x00") self.assertEqual(read_messages(dec), [(1, u"")]) dec.feed("\x02\x00\x00\x00\x80\x00") self.assertEqual(read_messages(dec), [(2, "")]) def test_interleaved_control(self): """Test that control messages interleaved with fragmented messages are returned.""" dec = WebSocketDecoder() dec.feed("\x89\x04PING\x01\x03Hel\x8a\x04PONG\x80\x02lo\x89\x04PING") self.assertEqual(read_messages(dec), [(9, "PING"), (10, "PONG"), (1, u"Hello"), (9, "PING")]) def test_fragmented_control(self): """Test that illegal fragmented control messages cause an error.""" dec = WebSocketDecoder() dec.feed("\x09\x04PING") self.assertRaises(ValueError, dec.read_message) def test_zero_opcode(self): """Test that it is an error for the first frame in a message to have an opcode of 0.""" dec = WebSocketDecoder() dec.feed("\x80\x05Hello") self.assertRaises(ValueError, dec.read_message) dec = WebSocketDecoder() dec.feed("\x00\x05Hello") self.assertRaises(ValueError, dec.read_message) def test_nonzero_opcode(self): """Test that every frame after the first must have a zero opcode.""" dec = WebSocketDecoder() dec.feed("\x01\x01H\x01\x02el\x80\x02lo") self.assertRaises(ValueError, dec.read_message) dec = WebSocketDecoder() dec.feed("\x01\x01H\x00\x02el\x01\x02lo") self.assertRaises(ValueError, dec.read_message) def test_utf8(self): """Test that text frames (opcode 1) are decoded from UTF-8.""" text = u"Hello World or Καλημέρα κόσμε or こんにちは 世界 or \U0001f639" utf8_text = text.encode("utf-8") dec = WebSocketDecoder() dec.feed("\x81" + chr(len(utf8_text)) + utf8_text) self.assertEqual(read_messages(dec), [(1, text)]) def test_wrong_utf8(self): """Test that failed UTF-8 decoding causes an error.""" TESTS = [ "\xc0\x41", # Non-shortest form. "\xc2", # Unfinished sequence. 
] for test in TESTS: dec = WebSocketDecoder() dec.feed("\x81" + chr(len(test)) + test) self.assertRaises(ValueError, dec.read_message) def test_overly_large_payload(self): """Test that large payloads are rejected.""" dec = WebSocketDecoder() dec.feed("\x82\x7f\x00\x00\x00\x00\x01\x00\x00\x00") self.assertRaises(ValueError, dec.read_frame) class TestWebSocketEncoder(unittest.TestCase): def test_length(self): """Test that payload lengths are encoded using the smallest number of bytes.""" TESTS = [(0, 0), (125, 0), (126, 2), (65535, 2), (65536, 8)] for length, encoded_length in TESTS: enc = WebSocketEncoder(use_mask = False) eframe = enc.encode_frame(2, "\x00" * length) self.assertEqual(len(eframe), 1 + 1 + encoded_length + length) enc = WebSocketEncoder(use_mask = True) eframe = enc.encode_frame(2, "\x00" * length) self.assertEqual(len(eframe), 1 + 1 + encoded_length + 4 + length) def test_roundtrip(self): TESTS = [ (1, u"Hello world"), (1, u"Hello \N{WHITE SMILING FACE}"), ] for opcode, payload in TESTS: for use_mask in (False, True): enc = WebSocketEncoder(use_mask = use_mask) enc_message = enc.encode_message(opcode, payload) dec = WebSocketDecoder(use_mask = use_mask) dec.feed(enc_message) self.assertEqual(read_messages(dec), [(opcode, payload)]) def format_address(addr): return "%s:%d" % addr class TestConnectionLimit(unittest.TestCase): def setUp(self): self.p = subprocess.Popen(["./flashproxy-client", format_address(LOCAL_ADDRESS), format_address(REMOTE_ADDRESS)]) def tearDown(self): self.p.terminate() # def test_remote_limit(self): # """Test that the client transport plugin limits the number of remote # connections that it will accept.""" # for i in range(5): # s = socket.create_connection(REMOTE_ADDRESS, 2) # self.assertRaises(socket.error, socket.create_connection, REMOTE_ADDRESS) if __name__ == "__main__": unittest.main() flashproxy-1.7/flashproxy-reg-appspot000077500000000000000000000107561236350636700201700ustar00rootroot00000000000000#!/usr/bin/env python """Register with a facilitator through Google App Engine.""" import argparse import flashproxy import httplib import socket import sys import urlparse import urllib2 from flashproxy.keys import PIN_GOOGLE_CA_CERT, PIN_GOOGLE_PUBKEY_SHA1, check_certificate_pin, ensure_M2Crypto, temp_cert from flashproxy.reg import build_reg_b64enc from flashproxy.util import parse_addr_spec, safe_str, safe_format_addr try: from M2Crypto import SSL except ImportError: # Defer the error reporting so that --help works even without M2Crypto. pass # The domain to which requests appear to go. FRONT_DOMAIN = "www.google.com" # The value of the Host header within requests. TARGET_DOMAIN = "fp-reg-a.appspot.com" # Like socket.create_connection in that it tries resolving different address # families, but doesn't connect the socket. def create_socket(address, timeout = None): host, port = address addrs = socket.getaddrinfo(host, port, 0, socket.SOCK_STREAM) if not addrs: raise socket.error("getaddrinfo returns an empty list") err = None for addr in addrs: try: s = socket.socket(addr[0], addr[1], addr[2]) if timeout is not None and type(timeout) == float: s.settimeout(timeout) return s except Exception, e: err = e raise err # Certificate validation and pinning for urllib2. Inspired by # http://web.archive.org/web/20110125104752/http://www.muchtooscrawled.com/2010/03/https-certificate-verification-in-python-with-urllib2/. 
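# The subclass below is plugged into urllib2 via
#   urllib2.build_opener(PinHTTPSHandler()).open(req)
# (see urlopen() further down), so every request to the front domain is made
# over an M2Crypto SSL connection that is verified against the pinned Google
# CA certificate and then checked against PIN_GOOGLE_PUBKEY_SHA1.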
class PinHTTPSConnection(httplib.HTTPSConnection): def connect(self): sock = create_socket((self.host, self.port), self.timeout) if self._tunnel_host: self.sock = sock self._tunnel() ctx = SSL.Context("tlsv1") ctx.set_verify(SSL.verify_peer, 3) with temp_cert(PIN_GOOGLE_CA_CERT) as ca_filename: ret = ctx.load_verify_locations(ca_filename) assert ret == 1 self.sock = SSL.Connection(ctx, sock) self.sock.connect((self.host, self.port)) check_certificate_pin(self.sock, PIN_GOOGLE_PUBKEY_SHA1) class PinHTTPSHandler(urllib2.HTTPSHandler): def https_open(self, req): return self.do_open(PinHTTPSConnection, req) def urlopen(url): req = urllib2.Request(url) req.add_header("Host", TARGET_DOMAIN) opener = urllib2.build_opener(PinHTTPSHandler()) return opener.open(req) def get_external_ip(): f = urlopen(urlparse.urlunparse(("https", FRONT_DOMAIN, "/ip", "", "", ""))) try: return f.read().strip() finally: f.close() parser = argparse.ArgumentParser( usage="%(prog)s [OPTIONS] [REMOTE][:PORT]", description="Register with a facilitator through a Google App Engine app. " "If only the external port is given, the remote server guesses our " "external address.") flashproxy.util.add_module_opts(parser) flashproxy.keys.add_module_opts(parser) flashproxy.reg.add_registration_args(parser) options = parser.parse_args(sys.argv[1:]) flashproxy.util.enforce_address_family(options.address_family) remote_addr = options.remote_addr ensure_M2Crypto() if not remote_addr[0]: try: ip = get_external_ip() except urllib2.HTTPError, e: print >> sys.stderr, "Status code was %d, not 200" % e.code sys.exit(1) except urllib2.URLError, e: print >> sys.stderr, "Failed to get external IP address: %s" % str(e.reason) sys.exit(1) except Exception, e: print >> sys.stderr, "Failed to get external IP address: %s" % str(e) sys.exit(1) try: remote_addr = parse_addr_spec(ip, *remote_addr) except ValueError, e: print >> sys.stderr, "Error parsing external IP address %s: %s" % (safe_str(repr(ip)), str(e)) sys.exit(1) try: reg = build_reg_b64enc(remote_addr, options.transport, urlsafe=True) url = urlparse.urljoin(urlparse.urlunparse(("https", FRONT_DOMAIN, "/", "", "", "")), "reg/" + reg) except Exception, e: print >> sys.stderr, "Error generating URL: %s" % str(e) sys.exit(1) try: http = urlopen(url) except urllib2.HTTPError, e: print >> sys.stderr, "Status code was %d, not 200" % e.code sys.exit(1) except urllib2.URLError, e: print >> sys.stderr, "Failed to register: %s" % str(e.reason) sys.exit(1) except Exception, e: print >> sys.stderr, "Failed to register: %s" % str(e) sys.exit(1) http.close() print "Registered \"%s\" with %s." % (safe_format_addr(remote_addr), TARGET_DOMAIN) flashproxy-1.7/flashproxy-reg-email000077500000000000000000000102261236350636700175610ustar00rootroot00000000000000#!/usr/bin/env python """Register with a facilitator using the email method.""" import argparse import flashproxy import os import re import smtplib import sys from flashproxy.keys import PIN_GOOGLE_CA_CERT, PIN_GOOGLE_PUBKEY_SHA1, check_certificate_pin, ensure_M2Crypto, temp_cert from flashproxy.reg import build_reg_b64enc from flashproxy.util import parse_addr_spec, format_addr, safe_format_addr try: from M2Crypto import SSL except ImportError: # Defer the error reporting so that --help works even without M2Crypto. pass DEFAULT_EMAIL_ADDRESS = "flashproxyreg.a@gmail.com" # dig MX gmail.com DEFAULT_SMTP = ("gmail-smtp-in.l.google.com", 25) # Use this to prevent Python smtplib from guessing and leaking our hostname. 
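# "[127.0.0.1]" is a syntactically valid EHLO address literal (RFC 5321) that
# reveals nothing about the machine we are actually running on.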
EHLO_FQDN = "[127.0.0.1]" FROM_EMAIL_ADDRESS = "nobody@localhost" parser = argparse.ArgumentParser( usage="%(prog)s [OPTIONS] [REMOTE][:PORT]", description="Register with a flash proxy facilitator through email. Makes " "a STARTTLS connection to an SMTP server and sends mail with a client IP " "address to a designated address. If only the external port is given, the " "external address is guessed from the SMTP EHLO response.", epilog="Using an SMTP server or email address other than the defaults will " "not work unless you have made special arrangements to connect them to a " "facilitator.") flashproxy.util.add_module_opts(parser) flashproxy.keys.add_module_opts(parser) flashproxy.reg.add_registration_args(parser) # specific opts parser.add_argument("-e", "--email", metavar="ADDRESS", help="send mail to ADDRESS, default %(default)s.", default=DEFAULT_EMAIL_ADDRESS) parser.add_argument("-s", "--smtp", metavar="HOST[:PORT]", help="use the given SMTP server, default %s." % format_addr(DEFAULT_SMTP), default="", type=lambda x: parse_addr_spec(x, *DEFAULT_SMTP)) parser.add_argument("-d", "--debug", help="enable debugging output (Python smtplib messages).", action="store_true") options = parser.parse_args(sys.argv[1:]) flashproxy.util.enforce_address_family(options.address_family) ensure_M2Crypto() smtp = smtplib.SMTP(options.smtp[0], options.smtp[1], EHLO_FQDN) if options.debug: smtp.set_debuglevel(1) try: ctx = SSL.Context("tlsv1") ctx.set_verify(SSL.verify_peer, 3) with temp_cert(PIN_GOOGLE_CA_CERT) as ca_filename: # We roll our own initial EHLO/STARTTLS because smtplib.SMTP.starttls # doesn't allow enough certificate validation. code, msg = smtp.docmd("EHLO", EHLO_FQDN) if code != 250: raise ValueError("Got code %d after EHLO" % code) code, msg = smtp.docmd("STARTTLS") if code != 220: raise ValueError("Got code %d after STARTTLS" % code) ret = ctx.load_verify_locations(ca_filename) assert ret == 1 smtp.sock = SSL.Connection(ctx, smtp.sock) smtp.sock.setup_ssl() smtp.sock.set_connect_state() smtp.sock.connect_ssl() smtp.file = smtp.sock.makefile() check_certificate_pin(smtp.sock, PIN_GOOGLE_PUBKEY_SHA1) smtp.ehlo(EHLO_FQDN) if not options.remote_addr[0]: # Grep the EHLO response for our public IP address. m = re.search(r'at your service, \[([0-9a-fA-F.:]+)\]', smtp.ehlo_resp) if not m: raise ValueError("Could not guess external IP address from EHLO response") spec = m.group(1) if ":" in spec: # Guess IPv6. spec = "[" + spec + "]" options.remote_addr = parse_addr_spec(spec, *options.remote_addr) body = build_reg_b64enc(options.remote_addr, options.transport) # Add a random subject to keep Gmail from threading everything. rand_string = os.urandom(5).encode("hex") smtp.sendmail(options.email, options.email, """\ To: %(to_addr)s\r From: %(from_addr)s\r Subject: client reg %(rand_string)s\r \r %(body)s """ % { "to_addr": options.email, "from_addr": FROM_EMAIL_ADDRESS, "rand_string": rand_string, "body": body, }) smtp.quit() except Exception, e: print >> sys.stderr, "Failed to register: %s" % str(e) sys.exit(1) print "Registered \"%s\" with %s." 
% (safe_format_addr(options.remote_addr), options.email) flashproxy-1.7/flashproxy-reg-http000077500000000000000000000034241236350636700174530ustar00rootroot00000000000000#!/usr/bin/env python """Register with a facilitator using the HTTP method.""" import argparse import flashproxy import sys import urllib2 from flashproxy.util import format_addr, parse_addr_spec, safe_format_addr from flashproxy.reg import DEFAULT_FACILITATOR_URL, DEFAULT_REMOTE, DEFAULT_TRANSPORT, build_reg parser = argparse.ArgumentParser( usage="%(prog)s [OPTIONS] [REMOTE][:PORT]", description="Register with a flash proxy facilitator using an HTTP POST. " "If only the external port is given, the remote server guesses our " "external address.") flashproxy.util.add_module_opts(parser) parser.add_argument("--transport", metavar="TRANSPORT", help="register using the given transport, default %(default)s.", default=DEFAULT_TRANSPORT) parser.add_argument("remote_addr", metavar="ADDR:PORT", help="external addr+port to register, default %s" % format_addr(DEFAULT_REMOTE), default="", nargs="?", type=lambda x: parse_addr_spec(x, *DEFAULT_REMOTE)) parser.add_argument("-f", "--facilitator", metavar="URL", help="register with the given facilitator, default %(default)s.", default=DEFAULT_FACILITATOR_URL) options = parser.parse_args(sys.argv[1:]) flashproxy.util.enforce_address_family(options.address_family) body = build_reg(options.remote_addr, options.transport) try: http = urllib2.urlopen(options.facilitator, body, 10) except urllib2.HTTPError, e: print >> sys.stderr, "Status code was %d, not 200" % e.code sys.exit(1) except urllib2.URLError, e: print >> sys.stderr, "Failed to register: %s" % str(e.reason) sys.exit(1) except Exception, e: print >> sys.stderr, "Failed to register: %s" % str(e) sys.exit(1) http.close() print "Registered \"%s\" with %s." % (safe_format_addr(options.remote_addr), options.facilitator) flashproxy-1.7/flashproxy-reg-url000077500000000000000000000017651236350636700173040ustar00rootroot00000000000000#!/usr/bin/env python """Register with a facilitator using an indirect URL.""" import argparse import flashproxy import sys import urlparse from flashproxy.keys import ensure_M2Crypto from flashproxy.reg import DEFAULT_FACILITATOR_URL, build_reg_b64enc parser = argparse.ArgumentParser( usage="%(prog)s [OPTIONS] REMOTE[:PORT]", description="Print a URL, which, when retrieved, will cause the input " "client address to be registered with the flash proxy facilitator.") flashproxy.reg.add_registration_args(parser) parser.add_argument("-f", "--facilitator", metavar="URL", help="register with the given facilitator, default %(default)s.", default=DEFAULT_FACILITATOR_URL) options = parser.parse_args(sys.argv[1:]) ensure_M2Crypto() if not options.remote_addr[0]: print >> sys.stderr, "An IP address (not just a port) is required." 
sys.exit(1) reg = build_reg_b64enc(options.remote_addr, options.transport, urlsafe=True) print urlparse.urljoin(options.facilitator, "reg/" + reg) flashproxy-1.7/flashproxy/000077500000000000000000000000001236350636700157725ustar00rootroot00000000000000flashproxy-1.7/flashproxy/__init__.py000066400000000000000000000000001236350636700200710ustar00rootroot00000000000000flashproxy-1.7/flashproxy/fac.py000066400000000000000000000202171236350636700170770ustar00rootroot00000000000000import socket import subprocess import urlparse from flashproxy import reg from flashproxy.util import parse_addr_spec, format_addr DEFAULT_CLIENT_TRANSPORT = "websocket" def read_client_registrations(body, defhost=None, defport=None): """Yield client registrations (as Endpoints) from an encoded registration message body. The message format is one registration per line, with each line being encoded as application/x-www-form-urlencoded. The key "client" is required and contains the client address and port (perhaps filled in by defhost and defport). The key "client-transport" is optional and defaults to "websocket". Example: client=1.2.3.4:9000&client-transport=websocket client=1.2.3.4:9090&client-transport=obfs3|websocket """ for line in body.splitlines(): qs = urlparse.parse_qs(line, keep_blank_values=True, strict_parsing=True) # Get the unique value associated with the given key in qs. If the key # is absent or appears more than once, raise ValueError. def get_unique(key, default=None): try: vals = qs[key] except KeyError: if default is None: raise ValueError("missing %r key" % key) vals = (default,) if len(vals) != 1: raise ValueError("more than one %r key" % key) return vals[0] addr = parse_addr_spec(get_unique("client"), defhost, defport) transport = get_unique("client-transport", DEFAULT_CLIENT_TRANSPORT) yield reg.Endpoint(addr, transport) def skip_space(pos, line): """Skip a (possibly empty) sequence of space characters (the ASCII character '\x20' exactly). Returns a pair (pos, num_skipped).""" begin = pos while pos < len(line) and line[pos] == "\x20": pos += 1 return pos, pos - begin TOKEN_CHARS = set("abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_-") def get_token(pos, line): begin = pos while pos < len(line) and line[pos] in TOKEN_CHARS: pos += 1 if begin == pos: raise ValueError("No token found at position %d" % pos) return pos, line[begin:pos] def get_quoted_string(pos, line): chars = [] if not (pos < len(line) and line[pos] == '"'): raise ValueError("Expected '\"' at beginning of quoted string.") pos += 1 while pos < len(line) and line[pos] != '"': if line[pos] == '\\': pos += 1 if not (pos < len(line)): raise ValueError("End of line after backslash in quoted string") chars.append(line[pos]) pos += 1 if not (pos < len(line) and line[pos] == '"'): raise ValueError("Expected '\"' at end of quoted string.") pos += 1 return pos, "".join(chars) def parse_transaction(line): """A transaction is a command followed by zero or more key-value pairs. Like so: COMMAND KEY="VALUE" KEY="\"ESCAPED\" VALUE" Values must be quoted. Any byte value may be escaped with a backslash. Returns a pair: (COMMAND, ((KEY1, VALUE1), (KEY2, VALUE2), ...)). 
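    Example (see flashproxy/test/test_fac.py for the authoritative cases):

    >>> parse_transaction('COMMAND X="ABC" Y="DEF"')
    ('COMMAND', (('X', 'ABC'), ('Y', 'DEF')))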
""" pos = 0 pos, skipped = skip_space(pos, line) pos, command = get_token(pos, line) pairs = [] while True: pos, skipped = skip_space(pos, line) if not (pos < len(line)): break if skipped == 0: raise ValueError("Expected space before key-value pair") pos, key = get_token(pos, line) if not (pos < len(line) and line[pos] == '='): raise ValueError("No '=' found after key") pos += 1 pos, value = get_quoted_string(pos, line) pairs.append((key, value)) return command, tuple(pairs) def param_first(key, params): """Search 'params' for 'key' and return the first value that occurs. If 'key' was not found, return None.""" for k, v in params: if key == k: return v return None def param_getlist(key, params): """Search 'params' for 'key' and return a list with its values. If 'key' did not appear in 'params', return the empty list.""" result = [] for k, v in params: if key == k: result.append(v) return result def quote_string(s): chars = [] for c in s: if c == "\\": c = "\\\\" elif c == "\"": c = "\\\"" chars.append(c) return "\"" + "".join(chars) + "\"" def render_transaction(command, *params): parts = [command] for key, value in params: parts.append("%s=%s" % (key, quote_string(value))) return " ".join(parts) def fac_socket(facilitator_addr): return socket.create_connection(facilitator_addr, 1.0).makefile() def transact(f, command, *params): transaction = render_transaction(command, *params) print >> f, transaction f.flush() line = f.readline() if not (len(line) > 0 and line[-1] == '\n'): raise ValueError("No newline at end of string returned by facilitator") return parse_transaction(line[:-1]) def put_reg(facilitator_addr, client_addr, transport): """Send a registration to the facilitator using a one-time socket. Returns true iff the command was successful. transport is a transport string such as "websocket" or "obfs3|websocket".""" f = fac_socket(facilitator_addr) params = [("CLIENT", format_addr(client_addr))] params.append(("TRANSPORT", transport)) try: command, params = transact(f, "PUT", *params) finally: f.close() return command == "OK" def get_reg(facilitator_addr, proxy_addr, proxy_transport_list): """ Get a client registration for proxy proxy_addr from the facilitator at facilitator_addr using a one-time socket. proxy_transport_list is a list containing the transport names that the flashproxy supports. Returns a dict with keys "client", "client-transport", "relay", and "relay-transport" if successful, or a dict with the key "client" mapped to the value "" if there are no registrations available for proxy_addr. Raises an exception otherwise.""" f = fac_socket(facilitator_addr) # Form a list (in transact() format) with the transports that we # should send to the facilitator. Then pass that list to the # transact() function. # For example, PROXY-TRANSPORT=obfs2 PROXY-TRANSPORT=obfs3. 
transports = [("PROXY-TRANSPORT", tp) for tp in proxy_transport_list] try: command, params = transact(f, "GET", ("FROM", format_addr(proxy_addr)), *transports) finally: f.close() response = {} check_back_in = param_first("CHECK-BACK-IN", params) if check_back_in is not None: try: float(check_back_in) except ValueError: raise ValueError("Facilitator returned non-numeric polling interval.") response["check-back-in"] = check_back_in if command == "NONE": response["client"] = "" return response elif command == "OK": client_spec = param_first("CLIENT", params) client_transport = param_first("CLIENT-TRANSPORT", params) relay_spec = param_first("RELAY", params) relay_transport = param_first("RELAY-TRANSPORT", params) if not client_spec: raise ValueError("Facilitator did not return CLIENT") if not client_transport: raise ValueError("Facilitator did not return CLIENT-TRANSPORT") if not relay_spec: raise ValueError("Facilitator did not return RELAY") if not relay_transport: raise ValueError("Facilitator did not return RELAY-TRANSPORT") # Check the syntax returned by the facilitator. client = parse_addr_spec(client_spec) relay = parse_addr_spec(relay_spec) response["client"] = format_addr(client) response["client-transport"] = client_transport response["relay"] = format_addr(relay) response["relay-transport"] = relay_transport return response else: raise ValueError("Facilitator response was not \"OK\"") def put_reg_proc(args, data): """Attempt to add a registration by running a program.""" p = subprocess.Popen(args, stdin=subprocess.PIPE) stdout, stderr = p.communicate(data) return p.returncode == 0 flashproxy-1.7/flashproxy/keys.py000066400000000000000000000116751236350636700173310ustar00rootroot00000000000000import base64 import errno import os import sys import tempfile from hashlib import sha1 try: import M2Crypto from M2Crypto import BIO, RSA except ImportError: # Defer the error so that the main program gets a chance to print help text M2Crypto = None class options(object): disable_pin = True def add_module_opts(parser): parser.add_argument("--disable-pin", help="disable all certificate pinning " "checks", action="store_true",) old_parse = parser.parse_args def parse_args(namespace): options.disable_pin = namespace.disable_pin return namespace parser.parse_args = lambda *a, **kw: parse_args(old_parse(*a, **kw)) # We trust no other CA certificate than this. 
# # To find the certificate to copy here, # $ strace openssl s_client -connect FRONT_DOMAIN:443 -verify 10 -CApath /etc/ssl/certs 2>&1 | grep /etc/ssl/certs # stat("/etc/ssl/certs/XXXXXXXX.0", {st_mode=S_IFREG|0644, st_size=YYYY, ...}) = 0 PIN_GOOGLE_CA_CERT = """\ subject=/C=US/O=Equifax/OU=Equifax Secure Certificate Authority issuer=/C=US/O=Equifax/OU=Equifax Secure Certificate Authority -----BEGIN CERTIFICATE----- MIIDIDCCAomgAwIBAgIENd70zzANBgkqhkiG9w0BAQUFADBOMQswCQYDVQQGEwJV UzEQMA4GA1UEChMHRXF1aWZheDEtMCsGA1UECxMkRXF1aWZheCBTZWN1cmUgQ2Vy dGlmaWNhdGUgQXV0aG9yaXR5MB4XDTk4MDgyMjE2NDE1MVoXDTE4MDgyMjE2NDE1 MVowTjELMAkGA1UEBhMCVVMxEDAOBgNVBAoTB0VxdWlmYXgxLTArBgNVBAsTJEVx dWlmYXggU2VjdXJlIENlcnRpZmljYXRlIEF1dGhvcml0eTCBnzANBgkqhkiG9w0B AQEFAAOBjQAwgYkCgYEAwV2xWGcIYu6gmi0fCG2RFGiYCh7+2gRvE4RiIcPRfM6f BeC4AfBONOziipUEZKzxa1NfBbPLZ4C/QgKO/t0BCezhABRP/PvwDN1Dulsr4R+A cJkVV5MW8Q+XarfCaCMczE1ZMKxRHjuvK9buY0V7xdlfUNLjUA86iOe/FP3gx7kC AwEAAaOCAQkwggEFMHAGA1UdHwRpMGcwZaBjoGGkXzBdMQswCQYDVQQGEwJVUzEQ MA4GA1UEChMHRXF1aWZheDEtMCsGA1UECxMkRXF1aWZheCBTZWN1cmUgQ2VydGlm aWNhdGUgQXV0aG9yaXR5MQ0wCwYDVQQDEwRDUkwxMBoGA1UdEAQTMBGBDzIwMTgw ODIyMTY0MTUxWjALBgNVHQ8EBAMCAQYwHwYDVR0jBBgwFoAUSOZo+SvSspXXR9gj IBBPM5iQn9QwHQYDVR0OBBYEFEjmaPkr0rKV10fYIyAQTzOYkJ/UMAwGA1UdEwQF MAMBAf8wGgYJKoZIhvZ9B0EABA0wCxsFVjMuMGMDAgbAMA0GCSqGSIb3DQEBBQUA A4GBAFjOKer89961zgK5F7WF0bnj4JXMJTENAKaSbn+2kmOeUJXRmm/kEd5jhW6Y 7qj/WsjTVbJmcVfewCHrPSqnI0kBBIZCe/zuf6IWUrVnZ9NA2zsmWLIodz2uFHdh 1voqZiegDfqnc1zqcPGUIWVEX/r87yloqaKHee9570+sB3c4 -----END CERTIFICATE----- """ # SHA-1 digest of expected public keys. Any of these is valid. See # http://www.imperialviolet.org/2011/05/04/pinning.html for the reason behind # hashing the public key, not the entire certificate. PIN_GOOGLE_PUBKEY_SHA1 = ( # https://src.chromium.org/viewvc/chrome/trunk/src/net/http/transport_security_state_static.h?revision=209003&view=markup # kSPKIHash_Google1024 "\x40\xc5\x40\x1d\x6f\x8c\xba\xf0\x8b\x00\xed\xef\xb1\xee\x87\xd0\x05\xb3\xb9\xcd", # kSPKIHash_GoogleG2 "\x43\xda\xd6\x30\xee\x53\xf8\xa9\x80\xca\x6e\xfd\x85\xf4\x6a\xa3\x79\x90\xe0\xea", ) def check_certificate_pin(sock, cert_pubkey): if options.disable_pin: return found = [] for cert in sock.get_peer_cert_chain(): pubkey_der = cert.get_pubkey().as_der() pubkey_digest = sha1(pubkey_der).digest() if pubkey_digest in cert_pubkey: break found.append(pubkey_digest) else: found = "(" + ", ".join(x.encode("hex") for x in found) + ")" expected = "(" + ", ".join(x.encode("hex") for x in cert_pubkey) + ")" raise ValueError("Public key does not match pin: got %s but expected any of %s" % (found, expected)) def get_state_dir(): """Get a directory where we can put temporary files. 
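    The directory, if any, comes from the TOR_PT_STATE_LOCATION environment
    variable (set by Tor for managed pluggable transports) and is created if
    it does not already exist.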
Returns None if any suitable temporary directory will do.""" pt_dir = os.environ.get("TOR_PT_STATE_LOCATION") if pt_dir is None: return None try: os.makedirs(pt_dir) except OSError, e: if e.errno != errno.EEXIST: raise return pt_dir class temp_cert(object): """Implements a with-statement over raw certificate data.""" def __init__(self, certdata): fd, self.path = tempfile.mkstemp(prefix="fp-cert-temp-", dir=get_state_dir(), suffix=".crt") os.write(fd, certdata) os.close(fd) def __enter__(self): return self.path def __exit__(self, type, value, traceback): os.unlink(self.path) def get_pubkey(defaultkeybytes, overridefn=None): if overridefn is not None: return RSA.load_pub_key(overridefn) else: return RSA.load_pub_key_bio(BIO.MemoryBuffer(defaultkeybytes)) def pubkey_b64enc(plaintext, pubkey, urlsafe=False): ciphertext = pubkey.public_encrypt(plaintext, RSA.pkcs1_oaep_padding) if urlsafe: return base64.urlsafe_b64encode(ciphertext) else: return ciphertext.encode("base64") def ensure_M2Crypto(): if M2Crypto is None: print >> sys.stderr, """\ This program requires the M2Crypto library, which is not installed. You can install it using one of the packages at http://chandlerproject.org/Projects/MeTooCrypto#Downloads. On Debian-like systems, use the command "apt-get install python-m2crypto".\ """ sys.exit(1) flashproxy-1.7/flashproxy/proc.py000066400000000000000000000027521236350636700173150ustar00rootroot00000000000000import errno import os import socket import stat import pwd DEFAULT_CLIENT_TRANSPORT = "websocket" # Return true iff the given fd is readable, writable, and executable only by its # owner. def check_perms(fd): mode = os.fstat(fd)[0] return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0 # Drop privileges by switching ID to that of the given user. # http://stackoverflow.com/questions/2699907/dropping-root-permissions-in-python/2699996#2699996 # https://www.securecoding.cert.org/confluence/display/seccode/POS36-C.+Observe+correct+revocation+order+while+relinquishing+privileges # https://www.securecoding.cert.org/confluence/display/seccode/POS37-C.+Ensure+that+privilege+relinquishment+is+successful def drop_privs(username): uid = pwd.getpwnam(username).pw_uid gid = pwd.getpwnam(username).pw_gid os.setgroups([]) os.setgid(gid) os.setuid(uid) try: os.setuid(0) except OSError: pass else: raise AssertionError("setuid(0) succeeded after attempting to drop privileges") # A decorator to ignore "broken pipe" errors. def catch_epipe(fn): def ret(self, *args): try: return fn(self, *args) except socket.error, e: try: err_num = e.errno except AttributeError: # Before Python 2.6, exception can be a pair. 
err_num, errstr = e except: raise if err_num != errno.EPIPE: raise return ret flashproxy-1.7/flashproxy/reg.py000066400000000000000000000063271236350636700171310ustar00rootroot00000000000000import urllib from collections import namedtuple from flashproxy.keys import get_pubkey, pubkey_b64enc from flashproxy.util import parse_addr_spec, format_addr DEFAULT_REMOTE = ("", 9000) DEFAULT_FACILITATOR_URL = "https://fp-facilitator.org/" DEFAULT_TRANSPORT = "websocket" # Default facilitator pubkey owned by the operator of DEFAULT_FACILITATOR_URL DEFAULT_FACILITATOR_PUBKEY_PEM = """\ -----BEGIN PUBLIC KEY----- MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA44Mt8c599/4N2fgu6ppN oatPW1GOgZxxObljFtEy0OWM1eHB35OOn+Kn9MxNHTRxVWwCEi0HYxWNVs2qrXxV 84LmWBz6A65d2qBlgltgLXusiXLrpwxVmJeO+GfmbF8ur0U9JSYxA20cGW/kujNg XYDGQxO1Gvxq2lHK2LQmBpkfKEE1DMFASmIvlHDQgDj3XBb5lYeOsHZmg16UrGAq 1UH238hgJITPGLXBtwLtJkYbrATJvrEcmvI7QSm57SgYGpaB5ZdCbJL5bag5Pgt6 M5SDDYYY4xxEPzokjFJfCQv+kcyAnzERNMQ9kR41ePTXG62bpngK5iWGeJ5XdkxG gwIDAQAB -----END PUBLIC KEY----- """ class options(object): transport = DEFAULT_TRANSPORT facilitator_pubkey = None def add_module_opts(parser): parser.add_argument("--transport", metavar="TRANSPORT", help="register using the given transport, default %(default)s.", default=DEFAULT_TRANSPORT) parser.add_argument("--facilitator-pubkey", metavar="FILENAME", help=("encrypt registrations to the given PEM-formatted public " "key file (default built-in).")) old_parse = parser.parse_args def parse_args(namespace): options.transport = namespace.transport options.facilitator_pubkey = namespace.facilitator_pubkey return namespace parser.parse_args = lambda *a, **kw: parse_args(old_parse(*a, **kw)) def add_registration_args(parser): add_module_opts(parser) parser.add_argument("remote_addr", metavar="ADDR:PORT", help="external addr+port to register, default %s" % format_addr(DEFAULT_REMOTE), default="", nargs="?", type=lambda x: parse_addr_spec(x, *DEFAULT_REMOTE)) def build_reg(addr, transport): return urllib.urlencode(( ("client", format_addr(addr)), ("client-transport", transport), )) def build_reg_b64enc(addr, transport, urlsafe=False): pubkey = get_pubkey(DEFAULT_FACILITATOR_PUBKEY_PEM, options.facilitator_pubkey) return pubkey_b64enc(build_reg(addr, transport), pubkey, urlsafe=urlsafe) class Transport(namedtuple("Transport", "inner outer")): @classmethod def parse(cls, transport): if isinstance(transport, cls): return transport elif type(transport) == str: if "|" in transport: inner, outer = transport.rsplit("|", 1) else: inner, outer = "", transport return cls(inner, outer) else: raise ValueError("could not parse transport: %s" % transport) def __init__(self, inner, outer): if not outer: raise ValueError("outer (proxy) part of transport must be non-empty: %s" % str(self)) def __str__(self): return "%s|%s" % (self.inner, self.outer) if self.inner else self.outer class Endpoint(namedtuple("Endpoint", "addr transport")): @classmethod def parse(cls, spec, transport, defhost = None, defport = None): host, port = parse_addr_spec(spec, defhost, defport) return cls((host, port), Transport.parse(transport)) flashproxy-1.7/flashproxy/test/000077500000000000000000000000001236350636700167515ustar00rootroot00000000000000flashproxy-1.7/flashproxy/test/__init__.py000066400000000000000000000000001236350636700210500ustar00rootroot00000000000000flashproxy-1.7/flashproxy/test/test_fac.py000066400000000000000000000107241236350636700211170ustar00rootroot00000000000000#!/usr/bin/env python import unittest from flashproxy.fac import 
parse_transaction, read_client_registrations class ParseTransactionTest(unittest.TestCase): def test_empty_string(self): self.assertRaises(ValueError, parse_transaction, "") def test_correct(self): self.assertEqual(parse_transaction("COMMAND"), ("COMMAND", ())) self.assertEqual(parse_transaction("COMMAND X=\"\""), ("COMMAND", (("X", ""),))) self.assertEqual(parse_transaction("COMMAND X=\"ABC\""), ("COMMAND", (("X", "ABC"),))) self.assertEqual(parse_transaction("COMMAND X=\"\\A\\B\\C\""), ("COMMAND", (("X", "ABC"),))) self.assertEqual(parse_transaction("COMMAND X=\"\\\\\\\"\""), ("COMMAND", (("X", "\\\""),))) self.assertEqual(parse_transaction("COMMAND X=\"ABC\" Y=\"DEF\""), ("COMMAND", (("X", "ABC"), ("Y", "DEF")))) self.assertEqual(parse_transaction("COMMAND KEY-NAME=\"ABC\""), ("COMMAND", (("KEY-NAME", "ABC"),))) self.assertEqual(parse_transaction("COMMAND KEY_NAME=\"ABC\""), ("COMMAND", (("KEY_NAME", "ABC"),))) def test_missing_command(self): self.assertRaises(ValueError, parse_transaction, "X=\"ABC\"") self.assertRaises(ValueError, parse_transaction, " X=\"ABC\"") def test_missing_space(self): self.assertRaises(ValueError, parse_transaction, "COMMAND/X=\"ABC\"") self.assertRaises(ValueError, parse_transaction, "COMMAND X=\"ABC\"Y=\"DEF\"") def test_bad_quotes(self): self.assertRaises(ValueError, parse_transaction, "COMMAND X=\"") self.assertRaises(ValueError, parse_transaction, "COMMAND X=\"ABC") self.assertRaises(ValueError, parse_transaction, "COMMAND X=\"ABC\" Y=\"ABC") self.assertRaises(ValueError, parse_transaction, "COMMAND X=\"ABC\\") def test_truncated(self): self.assertRaises(ValueError, parse_transaction, "COMMAND X=") def test_newline(self): self.assertRaises(ValueError, parse_transaction, "COMMAND X=\"ABC\" \nY=\"DEF\"") class ReadClientRegistrationsTest(unittest.TestCase): def testSingle(self): l = list(read_client_registrations("")) self.assertEqual(len(l), 0) l = list(read_client_registrations("client=1.2.3.4:1111")) self.assertEqual(len(l), 1) self.assertEqual(l[0].addr, ("1.2.3.4", 1111)) l = list(read_client_registrations("client=1.2.3.4:1111\n")) self.assertEqual(len(l), 1) self.assertEqual(l[0].addr, ("1.2.3.4", 1111)) l = list(read_client_registrations("foo=bar&client=1.2.3.4:1111&baz=quux")) self.assertEqual(len(l), 1) self.assertEqual(l[0].addr, ("1.2.3.4", 1111)) l = list(read_client_registrations("foo=b%3dar&client=1.2.3.4%3a1111")) self.assertEqual(len(l), 1) self.assertEqual(l[0].addr, ("1.2.3.4", 1111)) l = list(read_client_registrations("client=%5b1::2%5d:3333")) self.assertEqual(len(l), 1) self.assertEqual(l[0].addr, ("1::2", 3333)) def testDefaultAddress(self): l = list(read_client_registrations("client=:1111&transport=websocket", defhost="1.2.3.4")) self.assertEqual(l[0].addr, ("1.2.3.4", 1111)) l = list(read_client_registrations("client=1.2.3.4:&transport=websocket", defport=1111)) self.assertEqual(l[0].addr, ("1.2.3.4", 1111)) def testDefaultTransport(self): l = list(read_client_registrations("client=1.2.3.4:1111")) self.assertEqual(l[0].transport, "websocket") def testMultiple(self): l = list(read_client_registrations("client=1.2.3.4:1111&foo=bar\nfoo=bar&client=5.6.7.8:2222")) self.assertEqual(len(l), 2) self.assertEqual(l[0].addr, ("1.2.3.4", 1111)) self.assertEqual(l[1].addr, ("5.6.7.8", 2222)) l = list(read_client_registrations("client=1.2.3.4:1111&foo=bar\nfoo=bar&client=%5b1::2%5d:3333")) self.assertEqual(len(l), 2) self.assertEqual(l[0].addr, ("1.2.3.4", 1111)) self.assertEqual(l[1].addr, ("1::2", 3333)) def testInvalid(self): # Missing 
"client". with self.assertRaises(ValueError): list(read_client_registrations("foo=bar")) # More than one "client". with self.assertRaises(ValueError): list(read_client_registrations("client=1.2.3.4:1111&foo=bar&client=5.6.7.8:2222")) # Single client with bad syntax. with self.assertRaises(ValueError): list(read_client_registrations("client=1.2.3.4,1111")) if __name__ == "__main__": unittest.main() flashproxy-1.7/flashproxy/test/test_keys.py000066400000000000000000000014741236350636700213430ustar00rootroot00000000000000import os.path import unittest from flashproxy.keys import PIN_GOOGLE_CA_CERT, PIN_GOOGLE_PUBKEY_SHA1, check_certificate_pin, temp_cert class TempCertTest(unittest.TestCase): def test_temp_cert_success(self): fn = None with temp_cert(PIN_GOOGLE_CA_CERT) as ca_filename: self.assertTrue(os.path.exists(ca_filename)) with open(ca_filename) as f: lines = f.readlines() self.assertIn("-----BEGIN CERTIFICATE-----\n", lines) self.assertFalse(os.path.exists(ca_filename)) def test_temp_cert_raise(self): fn = None try: with temp_cert(PIN_GOOGLE_CA_CERT) as ca_filename: raise ValueError() self.fail() except ValueError: self.assertFalse(os.path.exists(ca_filename)) flashproxy-1.7/flashproxy/test/test_reg.py000066400000000000000000000016501236350636700211410ustar00rootroot00000000000000#!/usr/bin/env python import unittest from flashproxy.reg import Transport class TransportTest(unittest.TestCase): def test_transport_parse(self): self.assertEquals(Transport.parse("a"), Transport("", "a")) self.assertEquals(Transport.parse("|a"), Transport("", "a")) self.assertEquals(Transport.parse("a|b|c"), Transport("a|b","c")) self.assertEquals(Transport.parse(Transport("a|b","c")), Transport("a|b","c")) self.assertRaises(ValueError, Transport, "", "") self.assertRaises(ValueError, Transport, "a", "") self.assertRaises(ValueError, Transport.parse, "") self.assertRaises(ValueError, Transport.parse, "|") self.assertRaises(ValueError, Transport.parse, "a|") self.assertRaises(ValueError, Transport.parse, ["a"]) self.assertRaises(ValueError, Transport.parse, [Transport("a", "b")]) if __name__ == "__main__": unittest.main() flashproxy-1.7/flashproxy/test/test_util.py000066400000000000000000000105201236350636700213350ustar00rootroot00000000000000#!/usr/bin/env python import socket import unittest from flashproxy.util import parse_addr_spec, canonical_ip, addr_family, format_addr class ParseAddrSpecTest(unittest.TestCase): def test_ipv4(self): self.assertEqual(parse_addr_spec("192.168.0.1:9999"), ("192.168.0.1", 9999)) def test_ipv6(self): self.assertEqual(parse_addr_spec("[12::34]:9999"), ("12::34", 9999)) def test_defhost_defport_ipv4(self): self.assertEqual(parse_addr_spec("192.168.0.2:8888", defhost="192.168.0.1", defport=9999), ("192.168.0.2", 8888)) self.assertEqual(parse_addr_spec("192.168.0.2:", defhost="192.168.0.1", defport=9999), ("192.168.0.2", 9999)) self.assertEqual(parse_addr_spec("192.168.0.2", defhost="192.168.0.1", defport=9999), ("192.168.0.2", 9999)) self.assertEqual(parse_addr_spec(":8888", defhost="192.168.0.1", defport=9999), ("192.168.0.1", 8888)) self.assertEqual(parse_addr_spec(":", defhost="192.168.0.1", defport=9999), ("192.168.0.1", 9999)) self.assertEqual(parse_addr_spec("", defhost="192.168.0.1", defport=9999), ("192.168.0.1", 9999)) def test_defhost_defport_ipv6(self): self.assertEqual(parse_addr_spec("[1234::2]:8888", defhost="1234::1", defport=9999), ("1234::2", 8888)) self.assertEqual(parse_addr_spec("[1234::2]:", defhost="1234::1", defport=9999), ("1234::2", 9999)) 
self.assertEqual(parse_addr_spec("[1234::2]", defhost="1234::1", defport=9999), ("1234::2", 9999)) self.assertEqual(parse_addr_spec(":8888", defhost="1234::1", defport=9999), ("1234::1", 8888)) self.assertEqual(parse_addr_spec(":", defhost="1234::1", defport=9999), ("1234::1", 9999)) self.assertEqual(parse_addr_spec("", defhost="1234::1", defport=9999), ("1234::1", 9999)) def test_empty_defaults(self): self.assertEqual(parse_addr_spec("192.168.0.2:8888"), ("192.168.0.2", 8888)) self.assertEqual(parse_addr_spec("", defhost="", defport=0), ("", 0)) self.assertEqual(parse_addr_spec(":8888", defhost=""), ("", 8888)) self.assertRaises(ValueError, parse_addr_spec, ":8888") self.assertEqual(parse_addr_spec("192.168.0.2", defport=0), ("192.168.0.2", 0)) self.assertRaises(ValueError, parse_addr_spec, "192.168.0.2") def test_canonical_ip_noresolve(self): """Test that canonical_ip does not do DNS resolution by default.""" self.assertRaises(ValueError, canonical_ip, *parse_addr_spec("example.com:80")) class AddrFamilyTest(unittest.TestCase): def test_ipv4(self): self.assertEqual(addr_family("1.2.3.4"), socket.AF_INET) def test_ipv6(self): self.assertEqual(addr_family("1:2::3:4"), socket.AF_INET6) def test_name(self): self.assertRaises(socket.gaierror, addr_family, "localhost") class FormatAddrTest(unittest.TestCase): def test_none_none(self): self.assertRaises(ValueError, format_addr, (None, None)) def test_none_port(self): self.assertEqual(format_addr((None, 1234)), ":1234") def test_none_invalid(self): self.assertRaises(ValueError, format_addr, (None, "string")) def test_empty_none(self): self.assertRaises(ValueError, format_addr, ("", None)) def test_empty_port(self): self.assertEqual(format_addr(("", 1234)), ":1234") def test_empty_invalid(self): self.assertRaises(ValueError, format_addr, ("", "string")) def test_ipv4_none(self): self.assertEqual(format_addr(("1.2.3.4", None)), "1.2.3.4") def test_ipv4_port(self): self.assertEqual(format_addr(("1.2.3.4", 1234)), "1.2.3.4:1234") def test_ipv4_invalid(self): self.assertRaises(ValueError, format_addr, ("1.2.3.4", "string")) def test_ipv6_none(self): self.assertEqual(format_addr(("1:2::3:4", None)), "[1:2::3:4]") def test_ipv6_port(self): self.assertEqual(format_addr(("1:2::3:4", 1234)), "[1:2::3:4]:1234") def test_ipv6_invalid(self): self.assertRaises(ValueError, format_addr, ("1:2::3:4", "string")) def test_name_none(self): self.assertEqual(format_addr(("localhost", None)), "localhost") def test_name_port(self): self.assertEqual(format_addr(("localhost", 1234)), "localhost:1234") def test_name_invalid(self): self.assertRaises(ValueError, format_addr, ("localhost", "string")) if __name__ == "__main__": unittest.main() flashproxy-1.7/flashproxy/util.py000066400000000000000000000141551236350636700173270ustar00rootroot00000000000000import re import socket _old_socket_getaddrinfo = socket.getaddrinfo class options(object): safe_logging = True address_family = socket.AF_UNSPEC def add_module_opts(parser): parser.add_argument("-4", help="name lookups use only IPv4.", action="store_const", const=socket.AF_INET, dest="address_family") parser.add_argument("-6", help="name lookups use only IPv6.", action="store_const", const=socket.AF_INET6, dest="address_family") parser.add_argument("--unsafe-logging", help="don't scrub IP addresses and other sensitive information from " "logs.", action="store_true") old_parse = parser.parse_args def parse_args(namespace): options.safe_logging = not namespace.unsafe_logging options.address_family = namespace.address_family or 
socket.AF_UNSPEC return namespace parser.parse_args = lambda *a, **kw: parse_args(old_parse(*a, **kw)) def enforce_address_family(address_family): """Force all future name lookups to use the given address family.""" if address_family != socket.AF_UNSPEC: def getaddrinfo_replacement(host, port, family, *args, **kwargs): return _old_socket_getaddrinfo(host, port, options.address_family, *args, **kwargs) socket.getaddrinfo = getaddrinfo_replacement def safe_str(s): """Return "[scrubbed]" if options.safe_logging is true, and s otherwise.""" if options.safe_logging: return "[scrubbed]" else: return s def safe_format_addr(addr): return safe_str(format_addr(addr)) def parse_addr_spec(spec, defhost = None, defport = None): """Parse a host:port specification and return a 2-tuple ("host", port) as understood by the Python socket functions. >>> parse_addr_spec("192.168.0.1:9999") ('192.168.0.1', 9999) If defhost or defport are given and not None, the respective parts of the specification may be omitted, and will be filled in with the defaults. If defhost or defport are omitted or None, the respective parts of the specification must be given, or else a ValueError will be raised. >>> parse_addr_spec("192.168.0.2:8888", defhost="192.168.0.1", defport=9999) ('192.168.0.2', 8888) >>> parse_addr_spec(":8888", defhost="192.168.0.1", defport=9999) ('192.168.0.1', 8888) >>> parse_addr_spec("192.168.0.2", defhost="192.168.0.1", defport=9999) ('192.168.0.2', 9999) >>> parse_addr_spec("192.168.0.2:", defhost="192.168.0.1", defport=9999) ('192.168.0.2', 9999) >>> parse_addr_spec(":", defhost="192.168.0.1", defport=9999) ('192.168.0.1', 9999) >>> parse_addr_spec("", defhost="192.168.0.1", defport=9999) ('192.168.0.1', 9999) >>> parse_addr_spec(":") Traceback (most recent call last): [..] ValueError: Bad address specification ":" >>> parse_addr_spec(":", "", 0) ('', 0) IPv6 addresses must be enclosed in square brackets.""" host = None port = None af = 0 m = None # IPv6 syntax. if not m: m = re.match(ur'^\[(.+)\]:(\d*)$', spec) if m: host, port = m.groups() af = socket.AF_INET6 if not m: m = re.match(ur'^\[(.+)\]$', spec) if m: host, = m.groups() af = socket.AF_INET6 # IPv4/hostname/port-only syntax. if not m: try: host, port = spec.split(":", 1) except ValueError: host = spec if re.match(ur'^[\d.]+$', host): af = socket.AF_INET else: af = 0 host = host or defhost port = port or defport if host is None or port is None: raise ValueError("Bad address specification \"%s\"" % spec) return host, int(port) def resolve_to_ip(host, port, af=0, gai_flags=0): """Resolves a host string to an IP address in canonical format. Note: in many cases this is not necessary since the consumer of the address can probably accept host names directly. :param: host string to resolve; may be a DNS name or an IP address. :param: port of the host :param: af address family, default unspecified. set to socket.AF_INET or socket.AF_INET6 to force IPv4 or IPv6 name resolution. :returns: (IP address in canonical format, port) """ # Forward-resolve the name into an addrinfo struct. Real DNS resolution is # done only if resolve is true; otherwise the address must be numeric. try: addrs = socket.getaddrinfo(host, port, af, 0, 0, gai_flags) except socket.gaierror, e: raise ValueError("Bad host or port: \"%s\" \"%s\": %s" % (host, port, str(e))) if not addrs: raise ValueError("Bad host or port: \"%s\" \"%s\"" % (host, port)) # Convert the result of socket.getaddrinfo (which is a 2-tuple for IPv4 and # a 4-tuple for IPv6) into a (host, port) 2-tuple. 
host, port = socket.getnameinfo(addrs[0][4], socket.NI_NUMERICHOST | socket.NI_NUMERICSERV) return host, int(port) def canonical_ip(host, port, af=0): """Convert an IP address to a canonical format. Identical to resolve_to_ip, except that the host param must already be an IP address.""" return resolve_to_ip(host, port, af, gai_flags=socket.AI_NUMERICHOST) def addr_family(ip): """Return the address family of an IP address. Raises socket.gaierror if ip is not a numeric IP.""" addrs = socket.getaddrinfo(ip, 0, 0, socket.SOCK_STREAM, socket.IPPROTO_TCP, socket.AI_NUMERICHOST) return addrs[0][0] def format_addr(addr): host, port = addr host_str = u"" port_str = u"" if not (host is None or host == ""): # Numeric IPv6 address? try: af = addr_family(host) except socket.gaierror, e: af = 0 if af == socket.AF_INET6: host_str = u"[%s]" % host else: host_str = u"%s" % host if port is not None: port = int(port) if not (0 < port <= 65535): raise ValueError("port must be between 1 and 65535 (is %d)" % port) port_str = u":%d" % port if not host_str and not port_str: raise ValueError("host and port may not both be None") return u"%s%s" % (host_str, port_str) flashproxy-1.7/mkman.inc000066400000000000000000000003601236350636700153700ustar00rootroot00000000000000[REPORTING BUGS] .sp Please report using \fBhttps://trac\&.torproject\&.org/projects/tor\fR\&. [SEE ALSO] .sp \fBhttp://crypto\&.stanford\&.edu/flashproxy/\fR .sp \fBhttps://www\&.torproject\&.org/docs/pluggable\-transports\&.html\&.en\fR flashproxy-1.7/mkman.sh000077500000000000000000000031641236350636700152410ustar00rootroot00000000000000#!/bin/sh # Wrapper around help2man that takes input from stdin. set -o errexit # Read a python program's description from the first paragraph of its docstring. get_description() { PYTHONPATH=".:$PYTHONPATH" python - "./$1" </dev/null 2>&1; then size() { stat -c "%s" "$@"; } else size() { stat -f "%z" "$@"; } fi prog="$1" ver="$2" name="${3:-$(get_description "$1")}" progname="$(basename "$prog")" # Prepare a temporary executable file that just dumps its own contents. trap 'rm -rf .tmp.$$' EXIT INT TERM shebang="#!/usr/bin/tail -n+2" mkdir -p ".tmp.$$" { echo "$shebang" cat } > ".tmp.$$/$progname" test $(size ".tmp.$$/$progname") -gt $((${#shebang} + 1)) || { echo >&2 "no input received; abort"; exit 1; } chmod +x ".tmp.$$/$progname" help2man ".tmp.$$/$progname" --help-option="-q" \ --name="$name" --version-string="$ver" \ --no-info --include "$(dirname "$0")/mkman.inc" \ | help2man_fixup flashproxy-1.7/proxy/000077500000000000000000000000001236350636700147545ustar00rootroot00000000000000flashproxy-1.7/proxy/Makefile000066400000000000000000000003421236350636700164130ustar00rootroot00000000000000LANGS = de en pt ru all: $(addprefix badge-, $(addsuffix .png, $(LANGS))) test: ./flashproxy-test.js badge-%.png: badge.xcf (cat badge-export-lang.scm; echo '(export "$*") (gimp-quit 0)') | gimp -i -b - .PHONY: all test flashproxy-1.7/proxy/README000066400000000000000000000011211236350636700156270ustar00rootroot00000000000000The proxy directory contains the flash proxy JavaScript proxy program and associated HTML and media files. End users don't have to do anything with these files. They are meant to be installed on a centralized web server and then accessed through a browser. The modules subdirectory contains modules and plugins for making flash proxies work with other systems such as web publishing platforms. See a collection of modules for other platforms at https://github.com/glamrock/cupcake. 
For a plugin for Mozilla Firefox, see https://addons.mozilla.org/en-us/firefox/addon/tor-flashproxy-badge/. flashproxy-1.7/proxy/badge-de.png000066400000000000000000000004321236350636700171110ustar00rootroot00000000000000PNG  IHDRPsHIDATXWQ {k;>Vf/iZ㉨V8{Tl%Rj2W6]3x; eXuWxUk؝Zn{)OƯ}V!wyLƑhD'!I3T"9B'>p"t$jP˔8xq>|[8߁?ml#XsIENDB`flashproxy-1.7/proxy/badge-en.png000066400000000000000000000003761236350636700171320ustar00rootroot00000000000000PNG  IHDRPsHIDATXV0˟^lty=0 )bE2)ME rB#hh9eҭ2='K#(%b@΁Z7Ӎ5nX6Ym|2{Dlf953-vUߝٚ#N:yafh~"]?([xOk5rϸ >E_IENDB`flashproxy-1.7/proxy/badge-export-lang.scm000066400000000000000000000020451236350636700207610ustar00rootroot00000000000000; This is a Gimp script-fu script that selects and exports the appropriate ; language layers from an input XCF containing multiple layers. (define xcf-filename "badge.xcf") (define (export lang) (let* ((image (car (gimp-file-load RUN-NONINTERACTIVE xcf-filename xcf-filename))) (shine-layer (car (gimp-image-get-layer-by-name image "shine"))) (text-layer (car (gimp-image-get-layer-by-name image (string-append "text-" lang)))) (output-filename (string-append "badge-" lang ".png"))) ; Turn off all layers. (for-each (lambda (x) (gimp-item-set-visible x FALSE)) (vector->list (cadr (gimp-image-get-layers image)))) ; Except the shine and the wanted text. (gimp-item-set-visible shine-layer TRUE) (gimp-item-set-visible text-layer TRUE) (gimp-image-merge-visible-layers image CLIP-TO-IMAGE) (file-png-save RUN-NONINTERACTIVE image (car (gimp-image-get-active-layer image)) output-filename output-filename FALSE 9 FALSE FALSE FALSE FALSE FALSE) )) flashproxy-1.7/proxy/badge-pt.png000066400000000000000000000005441236350636700171500ustar00rootroot00000000000000PNG  IHDRPsH+IDATXJ1H}Jtt!!t+ Qj{d43)T$޹sL'iEl,w*f/FY #Ւ5{o|M:+`m{` <%} gpZ+ )0vT?ه P$0hƽr1)P84-$K$m$mrp|:ܔڸTjw}Ovgn})0VOIuSrJjՍ9دa}*%׷ZeHZ6^6?xv3vv1pTsGIENDB`flashproxy-1.7/proxy/badge-ru.png000066400000000000000000000006031236350636700171470ustar00rootroot00000000000000PNG  IHDRPsHJIDATXVAN1 GRQ'^QSx)gNZN-.i匓؎Xk$"HU+%#%0K=c`'e̢;kT MOA=% u 2~Gr# \a xtܸ^Oř6Hsӯ'1uHk|2x72|b[;){=CYa 63 Gi s٫ʪ6U /Kz#7GG$5>W=wQՆuM$m]^-('======>? + (Ptext-en     EP]Pq@>======>?  (Ptext-pt     DP\ JPp @>=== # . ==>?  (Ptext-ru      P  @P 2 @>===. ===>?  (Pshine      P^P&Foooo(flashproxy-1.7/proxy/embed.html000066400000000000000000000032601236350636700167170ustar00rootroot00000000000000 flashproxy-1.7/proxy/flashproxy-test.js000077500000000000000000000275171236350636700205050ustar00rootroot00000000000000#!/usr/bin/env node /* To run this test program, install nodejs (apt-get install nodejs). 
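   Then run it from the proxy directory as "./flashproxy-test.js" (the
   "make test" target here does the same); pass -v for verbose per-test
   output.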
*/ var VERBOSE = false; if (process.argv.indexOf("-v") >= 0) VERBOSE = true; var num_tests = 0; var num_failed = 0; var window = {location: {search: "?"}}; var document = {cookie: ""}; var fs = require("fs"); var data = fs.readFileSync("./flashproxy.js", "utf-8"); eval(data); function objects_equal(a, b) { if ((a === null) != (b === null)) return false; if (typeof a != typeof b) return false; if (typeof a != "object") return a == b; for (var k in a) { if (!objects_equal(a[k], b[k])) return false; } for (var k in b) { if (!objects_equal(a[k], b[k])) return false; } return true; } var top = true; function announce(test_name) { if (VERBOSE) { if (!top) console.log(); console.log(test_name); } top = false; } function pass(test) { num_tests++; if (VERBOSE) console.log("PASS " + repr(test)); } function fail(test, expected, actual) { num_tests++; num_failed++; console.log("FAIL " + repr(test) + " expected: " + repr(expected) + " actual: " + repr(actual)); } function test_build_url() { var TESTS = [ { args: ["http", "example.com"], expected: "http://example.com" }, { args: ["http", "example.com", 80], expected: "http://example.com" }, { args: ["http", "example.com", 81], expected: "http://example.com:81" }, { args: ["https", "example.com", 443], expected: "https://example.com" }, { args: ["https", "example.com", 444], expected: "https://example.com:444" }, { args: ["http", "example.com", 80, "/"], expected: "http://example.com/" }, { args: ["http", "example.com", 80, "/test?k=%#v"], expected: "http://example.com/test%3Fk%3D%25%23v" }, { args: ["http", "example.com", 80, "/test", []], expected: "http://example.com/test?" }, { args: ["http", "example.com", 80, "/test", [["k", "%#v"]]], expected: "http://example.com/test?k=%25%23v" }, { args: ["http", "example.com", 80, "/test", [["a", "b"], ["c", "d"]]], expected: "http://example.com/test?a=b&c=d" }, { args: ["http", "1.2.3.4"], expected: "http://1.2.3.4" }, { args: ["http", "1:2::3:4"], expected: "http://[1:2::3:4]" }, { args: ["http", "bog][us"], expected: "http://bog%5D%5Bus" }, { args: ["http", "bog:u]s"], expected: "http://bog%3Au%5Ds" }, ]; announce("test_build_url"); for (var i = 0; i < TESTS.length; i++) { var test = TESTS[i]; var actual; actual = build_url.apply(undefined, test.args); if (objects_equal(actual, test.expected)) pass(test.args); else fail(test.args, test.expected, actual); } } /* This test only checks that things work for strings formatted like document.cookie. Browsers maintain several properties about this string, for example cookie names are unique with no trailing whitespace. See http://www.ietf.org/rfc/rfc2965.txt for the grammar. 
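   For example, "a=b; c=d" is expected to parse to { a: "b", c: "d" }, values
   are URI-decoded ("key=%26%20" gives { key: "& " }), and a bare name with no
   "=" makes the whole parse return null.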
*/ function test_parse_cookie_string() { var TESTS = [ { cs: "", expected: { } }, { cs: "a=b", expected: { a: "b"} }, { cs: "a=b=c", expected: { a: "b=c"} }, { cs: "a=b; c=d", expected: { a: "b", c: "d" } }, { cs: "a=b ; c=d", expected: { a: "b", c: "d" } }, { cs: "a= b", expected: {a: "b" } }, { cs: "a=", expected: { a: "" } }, { cs: "key", expected: null }, { cs: "key=%26%20", expected: { key: "& " } }, { cs: "a=\"\"", expected: { a: "\"\"" } }, ]; announce("test_parse_cookie_string"); for (var i = 0; i < TESTS.length; i++) { var test = TESTS[i]; var actual; actual = parse_cookie_string(test.cs); if (objects_equal(actual, test.expected)) pass(test.cs); else fail(test.cs, test.expected, actual); } } function test_parse_query_string() { var TESTS = [ { qs: "", expected: { } }, { qs: "a=b", expected: { a: "b" } }, { qs: "a=b=c", expected: { a: "b=c" } }, { qs: "a=b&c=d", expected: { a: "b", c: "d" } }, { qs: "client=&relay=1.2.3.4%3A9001", expected: { client: "", relay: "1.2.3.4:9001" } }, { qs: "a=b%26c=d", expected: { a: "b&c=d" } }, { qs: "a%3db=d", expected: { "a=b": "d" } }, { qs: "a=b+c%20d", expected: { "a": "b c d" } }, { qs: "a=b+c%2bd", expected: { "a": "b c+d" } }, { qs: "a+b=c", expected: { "a b": "c" } }, { qs: "a=b+c+d", expected: { a: "b c d" } }, /* First appearance wins. */ { qs: "a=b&c=d&a=e", expected: { a: "b", c: "d" } }, { qs: "a", expected: { a: "" } }, { qs: "=b", expected: { "": "b" } }, { qs: "&a=b", expected: { "": "", a: "b" } }, { qs: "a=b&", expected: { "": "", a: "b" } }, { qs: "a=b&&c=d", expected: { "": "", a: "b", c: "d" } }, ]; announce("test_parse_query_string"); for (var i = 0; i < TESTS.length; i++) { var test = TESTS[i]; var actual; actual = parse_query_string(test.qs); if (objects_equal(actual, test.expected)) pass(test.qs); else fail(test.qs, test.expected, actual); } } function test_get_param_boolean() { var TESTS = [ { qs: "param=true", expected: true }, { qs: "param", expected: true }, { qs: "param=", expected: true }, { qs: "param=1", expected: true }, { qs: "param=0", expected: false }, { qs: "param=false", expected: false }, { qs: "param=unexpected", expected: null }, { qs: "pram=true", expected: false }, ]; announce("test_get_param_boolean"); for (var i = 0; i < TESTS.length; i++) { var test = TESTS[i]; var actual; var query; query = parse_query_string(test.qs); actual = get_param_boolean(query, "param", false); if (objects_equal(actual, test.expected)) pass(test.qs); else fail(test.qs, test.expected, actual); } } function test_parse_addr_spec() { var TESTS = [ { spec: "", expected: null }, { spec: "3.3.3.3:4444", expected: { host: "3.3.3.3", port: 4444 } }, { spec: "3.3.3.3", expected: null }, { spec: "3.3.3.3:0x1111", expected: null }, { spec: "3.3.3.3:-4444", expected: null }, { spec: "3.3.3.3:65536", expected: null }, { spec: "[1:2::a:f]:4444", expected: { host: "1:2::a:f", port: 4444 } }, { spec: "[1:2::a:f]", expected: null }, { spec: "[1:2::a:f]:0x1111", expected: null }, { spec: "[1:2::a:f]:-4444", expected: null }, { spec: "[1:2::a:f]:65536", expected: null }, { spec: "[1:2::ffff:1.2.3.4]:4444", expected: { host: "1:2::ffff:1.2.3.4", port: 4444 } }, ]; announce("test_parse_addr_spec"); for (var i = 0; i < TESTS.length; i++) { var test = TESTS[i]; var actual; actual = parse_addr_spec(test.spec); if (objects_equal(actual, test.expected)) pass(test.spec); else fail(test.spec, test.expected, actual); } } function test_get_param_addr() { var DEFAULT = { host: "1.1.1.1", port: 2222 }; var TESTS = [ { query: { }, expected: DEFAULT }, { 
query: { addr: "3.3.3.3:4444" }, expected: { host: "3.3.3.3", port: 4444 } }, { query: { x: "3.3.3.3:4444" }, expected: DEFAULT }, { query: { addr: "---" }, expected: null }, ]; announce("test_get_param_addr"); for (var i = 0; i < TESTS.length; i++) { var test = TESTS[i]; var actual; actual = get_param_addr(test.query, "addr", DEFAULT); if (objects_equal(actual, test.expected)) pass(test.query); else fail(test.query, test.expected, actual); } } function test_lang_keys() { var TESTS = [ { code: "de", expected: ["de"] }, { code: "DE", expected: ["de"] }, { code: "de-at", expected: ["de-at", "de"] }, { code: "de-AT", expected: ["de-at", "de"] }, ]; for (var i = 0; i < TESTS.length; i++) { var test = TESTS[i]; var actual = lang_keys(test.code); var j, k; k = 0; for (j = 0; j < test.expected.length; j++) { for (; k < actual.length; k++) { if (test.expected[j] === actual[k]) break; } if (k === actual.length) fail(test.code, test.expected, actual) } } } function test_have_websocket_binary_frames() { var TESTS = [ { ua: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:10.0.2) Gecko/20100101 Firefox/10.0.2", expected: false }, { ua: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:11.0) Gecko/20100101 Firefox/11.0", expected: true }, { ua: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.110 Safari/537.36", expected: true }, { ua: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/536.30.1 (KHTML, like Gecko) Version/6.0.5 Safari/536.30.1", expected: true }, { expected: false }, // no userAgent ]; var _navigator = window.navigator; for (var i = 0; i < TESTS.length; i++) { var test = TESTS[i]; window.navigator = { userAgent: test.ua }; var actual = have_websocket_binary_frames(); if (objects_equal(actual, test.expected)) pass(test.ua); else fail(test.ua, test.expected, actual); } window.navigator = _navigator; } function test_is_likely_tor_browser() { var TESTS = [ { ua: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:10.0.2) Gecko/20100101 Firefox/10.0.2", expected: false }, { ua: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.8; rv:11.0) Gecko/20100101 Firefox/11.0", expected: false }, { ua: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/27.0.1453.110 Safari/537.36", expected: false }, { ua: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_4) AppleWebKit/536.30.1 (KHTML, like Gecko) Version/6.0.5 Safari/536.30.1", expected: false }, { ua: "Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101 Firefox/10.0", expected: true }, { ua: "Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/20100101 Firefox/17.0", expected: true }, { ua: "Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101 Firefox/24.0", expected: true }, { expected: false }, // no userAgent ]; var _navigator = window.navigator; window.navigator = { mimeTypes: [] }; for (var i = 0; i < TESTS.length; i++) { var test = TESTS[i]; window.navigator.userAgent = test.ua; var actual = is_likely_tor_browser(); if (objects_equal(actual, test.expected)) pass(test.ua); else fail(test.ua, test.expected, actual); } window.navigator = _navigator; } test_build_url(); test_parse_cookie_string(); test_parse_query_string(); test_get_param_boolean(); test_parse_addr_spec(); test_get_param_addr(); test_lang_keys(); test_have_websocket_binary_frames(); test_is_likely_tor_browser(); if (num_failed == 0) process.exit(0); else process.exit(1); flashproxy-1.7/proxy/flashproxy.js000066400000000000000000001077221236350636700175220ustar00rootroot00000000000000/* Query 
string parameters. These change how the program runs from the outside. * For example: * http://www.example.com/embed.html?facilitator=http://127.0.0.1:9002&debug=1 * * cookierequired=0|1 * If true, the proxy will disable itself if the user has not explicitly opted * in by setting a cookie through the options page. If absent or false, the proxy * will run unless the user has explicitly opted out. * * lang= * Display language of the badge, as an IETF language tag. * * facilitator_poll_interval= * How often to poll the facilitator, in seconds. The default is * DEFAULT_FACILITATOR_POLL_INTERVAL. There is a sanity-check minimum of 1.0 s. * * initial_facilitator_poll_interval= * How long to wait before polling the facilitator the first time, in seconds. * DEFAULT_INITIAL_FACILITATOR_POLL_INTERVAL. * * max_clients= * How many clients to serve concurrently. The default is * DEFAULT_MAX_NUM_CLIENTS. * * ratelimit=()?|off * What rate to limit all proxy traffic combined to. The special value "off" * disables the limit. The default is DEFAULT_RATE_LIMIT. There is a * sanity-check minimum of "10K". * * facilitator=https://host:port/ * The URL of the facilitator CGI script. By default it is * DEFAULT_FACILITATOR_URL. * * debug=0|1 * If true, show verbose terminal-like output instead of the badge. The values * "1", "true", and the empty string "" all enable debug mode. Any other value * uses the normal badge display. * * client=: * The address of the client to connect to. The proxy normally receives this * information from the facilitator. When this option is used, the facilitator * query is not done. The "relay" parameter must be given as well. * * relay=: * The address of the relay to connect to. The proxy normally receives this * information from the facilitator. When this option is used, the facilitator * query is not done. The "client" parameter must be given as well. */ /* WebSocket links. * * The WebSocket Protocol * https://tools.ietf.org/html/rfc6455 * * The WebSocket API * http://dev.w3.org/html5/websockets/ * * MDN page with browser compatibility * https://developer.mozilla.org/en/WebSockets * * Implementation tests (including tests of binary messages) * http://autobahn.ws/testsuite/reports/clients/index.html */ var DEFAULT_FACILITATOR_URL = DEFAULT_FACILITATOR_URL || "https://fp-facilitator.org/"; /* Start two connections because some versions of Tor make two PT connections: https://lists.torproject.org/pipermail/tor-dev/2012-December/004221.html https://trac.torproject.org/projects/tor/ticket/7733 */ var CONNECTIONS_PER_CLIENT = 2 var DEFAULT_MAX_NUM_CLIENTS = DEFAULT_MAX_NUM_CLIENTS || 10; var DEFAULT_INITIAL_FACILITATOR_POLL_INTERVAL = DEFAULT_INITIAL_FACILITATOR_POLL_INTERVAL || 60.0; var DEFAULT_FACILITATOR_POLL_INTERVAL = DEFAULT_FACILITATOR_POLL_INTERVAL || 3600.0; var MIN_FACILITATOR_POLL_INTERVAL = 10.0; /* Bytes per second. Set to undefined to disable limit. */ var DEFAULT_RATE_LIMIT = DEFAULT_RATE_LIMIT || undefined; var MIN_RATE_LIMIT = 10 * 1024; var RATE_LIMIT_HISTORY = 5.0; /* Name of cookie that controls opt-in/opt-out. */ var OPT_IN_COOKIE = "flashproxy-allow"; /* Firefox before version 11.0 uses the name MozWebSocket. Whether the global variable WebSocket is defined indicates whether WebSocket is supported at all. 
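   Binary frames are used only where the browser is known to handle them; see
   have_websocket_binary_frames() and its cases in flashproxy-test.js, which
   expect support in Firefox 11.0 and later and in current WebKit-based
   browsers.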
*/ var WebSocket = window.WebSocket || window.MozWebSocket; var query = parse_query_string(window.location.search.substr(1)); var DEBUG = get_param_boolean(query, "debug", false); var SAFE_LOGGING = !get_param_boolean(query, "unsafe_logging", false); var debug_div; /* HEADLESS is true if we are running not in a browser with a DOM. */ var HEADLESS = typeof(document) === "undefined"; var cookies; if (HEADLESS) { cookies = {}; } else { cookies = parse_cookie_string(document.cookie); if (DEBUG) { debug_div = document.createElement("pre"); debug_div.className = "debug"; } } function puts(s) { if (DEBUG) { s = new Date().toISOString() + " | " + s; /* This shows up in the Web Console in Firefox and F12 developer tools in Internet Explorer. */ (console.debug || console.log).call(console, s); if (debug_div) { var at_bottom; /* http://www.w3.org/TR/cssom-view/#element-scrolling-members */ at_bottom = (debug_div.scrollTop + debug_div.clientHeight === debug_div.scrollHeight); debug_div.appendChild(document.createTextNode(s + "\n")); if (at_bottom) debug_div.scrollTop = debug_div.scrollHeight; } } } /* Parse a cookie data string (usually document.cookie). The return type is an object mapping cookies names to values. Returns null on error. http://www.w3.org/TR/DOM-Level-2-HTML/html.html#ID-8747038 */ function parse_cookie_string(cookies) { var strings; var result; result = {}; if (cookies) strings = cookies.split(";"); else strings = []; for (var i = 0; i < strings.length; i++) { var string = strings[i]; var j, name, value; j = string.indexOf("="); if (j === -1) { return null; } name = decodeURIComponent(string.substr(0, j).trim()); value = decodeURIComponent(string.substr(j + 1).trim()); if (!(name in result)) result[name] = value; } return result; } /* Parse a URL query string or application/x-www-form-urlencoded body. The return type is an object mapping string keys to string values. By design, this function doesn't support multiple values for the same named parameter, for example "a=1&a=2&a=3"; the first definition always wins. Returns null on error. Always decodes from UTF-8, not any other encoding. http://dev.w3.org/html5/spec/Overview.html#url-encoded-form-data */ function parse_query_string(qs) { var strings; var result; result = {}; if (qs) strings = qs.split("&"); else strings = []; for (var i = 0; i < strings.length; i++) { var string = strings[i]; var j, name, value; j = string.indexOf("="); if (j === -1) { name = string; value = ""; } else { name = string.substr(0, j); value = string.substr(j + 1); } name = decodeURIComponent(name.replace(/\+/g, " ")); value = decodeURIComponent(value.replace(/\+/g, " ")); if (!(name in result)) result[name] = value; } return result; } /* params is a list of (key, value) 2-tuples. */ function build_query_string(params) { var parts = []; for (var i = 0; i < params.length; i++) { parts.push(encodeURIComponent(params[i][0]) + "=" + encodeURIComponent(params[i][1])); } return parts.join("&"); } var DEFAULT_PORTS = { http: 80, https: 443 } /* Build an escaped URL string from unescaped components. Only scheme and host are required. See RFC 3986, section 3. */ function build_url(scheme, host, port, path, params) { var parts = [] parts.push(encodeURIComponent(scheme)); parts.push("://"); /* If it contains a colon but no square brackets, treat it like an IPv6 address. 
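For example, a host of "2001:db8::1" with port 9000 and path "/" becomes "ws://[2001:db8::1]:9000/" for a "ws" scheme, whereas a hostname without colons is simply percent-encoded.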
*/ if (host.match(/:/) && !host.match(/[[\]]/)) { parts.push("["); parts.push(host); parts.push("]"); } else { parts.push(encodeURIComponent(host)); } if (port !== undefined && port !== DEFAULT_PORTS[scheme]) { parts.push(":"); parts.push(encodeURIComponent(port.toString())); } if (path !== undefined && path !== "") { if (!path.match(/^\//)) path = "/" + path; /* Slash is significant so we must protect it from encodeURIComponent, while still encoding question mark and number sign. RFC 3986, section 3.3: "The path is terminated by the first question mark ('?') or number sign ('#') character, or by the end of the URI. ... A path consists of a sequence of path segments separated by a slash ('/') character." */ path = path.replace(/[^\/]+/, function(m) { return encodeURIComponent(m); }); parts.push(path); } if (params !== undefined) { parts.push("?"); parts.push(build_query_string(params)); } return parts.join(""); } /* Get an object value and return it as a boolean. True values are "1", "true", and "". False values are "0" and "false". Any other value causes the function to return null (effectively false). Returns default_val if param is not a key. The empty string is true so that URLs like http://example.com/?debug will enable debug mode. */ function get_param_boolean(query, param, default_val) { var val; val = query[param]; if (val === undefined) return default_val; else if (val === "true" || val === "1" || val === "") return true; else if (val === "false" || val === "0") return false; else return null; } /* Get an object value and return it as a string. Returns default_val if param is not a key. */ function get_param_string(query, param, default_val) { var val; val = query[param]; if (val === undefined) return default_val; else return val; } /* Get an object value and parse it as an address spec. Returns default_val if param is not a key. Returns null on a parsing error. */ function get_param_addr(query, param, default_val) { var val; val = query[param]; if (val === undefined) return default_val; else return parse_addr_spec(val); } /* Get an object value and parse it as an integer. Returns default_val if param is not a key. Return null on a parsing error. */ function get_param_integer(query, param, default_val) { var spec; var val; spec = query[param]; if (spec === undefined) { return default_val; } else if (!spec.match(/^-?[0-9]+/)) { return null; } else { val = parseInt(spec, 10); if (isNaN(val)) return null; else return val; } } /* Get an object value and parse it as a real number. Returns default_val if param is not a key. Return null on a parsing error. */ function get_param_number(query, param, default_val) { var spec; var val; spec = query[param]; if (spec === undefined) { return default_val; } else { val = Number(spec); if (isNaN(val)) return null; else return val; } } /* Get a floating-point number of seconds from a time specification. The only time specification format is a decimal number of seconds. Returns null on error. */ function get_param_timespec(query, param, default_val) { return get_param_number(query, param, default_val); } /* Parse a count of bytes. A suffix of "k", "m", or "g" (or uppercase) does what you would think. Returns null on error. 
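For example, "100" yields 100, "10K" yields 10240, and "1.5m" yields 1572864.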
*/ function parse_byte_count(spec) { var UNITS = { k: 1024, m: 1024 * 1024, g: 1024 * 1024 * 1024, K: 1024, M: 1024 * 1024, G: 1024 * 1024 * 1024 }; var count, units; var matches; matches = spec.match(/^(\d+(?:\.\d*)?)(\w*)$/); if (matches === null) return null; count = Number(matches[1]); if (isNaN(count)) return null; if (matches[2] === "") { units = 1; } else { units = UNITS[matches[2]]; if (units === null) return null; } return count * Number(units); } /* Get an object value and parse it as a byte count. Example byte counts are "100" and "1.3m". Returns default_val if param is not a key. Return null on a parsing error. */ function get_param_byte_count(query, param, default_val) { var spec; spec = query[param]; if (spec === undefined) return default_val; else return parse_byte_count(spec); } /* Return an array of the user's preferred IETF language tags, in descending order of priority. Return an empty array in case of no preference. */ function get_langs() { var param, result; result = []; param = get_param_string(query, "lang"); if (param !== undefined) result.push(param); /* https://developer.mozilla.org/en/docs/DOM/window.navigator.language */ if (window.navigator.language) result.push(window.navigator.language); return result; } /* Parse an address in the form "host:port". Returns an Object with keys "host" (String) and "port" (int). Returns null on error. */ function parse_addr_spec(spec) { var m, host, port; m = null; /* IPv6 syntax. */ if (!m) m = spec.match(/^\[([\0-9a-fA-F:.]+)\]:([0-9]+)$/); /* IPv4 syntax. */ if (!m) m = spec.match(/^([0-9.]+):([0-9]+)$/); if (!m) return null; host = m[1]; port = parseInt(m[2], 10); if (isNaN(port) || port < 0 || port > 65535) return null; return { host: host, port: port } } function format_addr(addr) { return addr.host + ":" + addr.port; } /* Does the WebSocket implementation in this browser support binary frames? (RFC 6455 section 5.6.) If not, we have to use base64-encoded text frames. It is assumed that the client and relay endpoints always support binary frames. */ function have_websocket_binary_frames() { var BROWSERS = [ { idString: "Chrome", verString: "Chrome", version: 16 }, { idString: "Safari", verString: "Version", version: 6 }, { idString: "Firefox", verString: "Firefox", version: 11 } ]; var ua; ua = window.navigator.userAgent; if (!ua) return false; for (var i = 0; i < BROWSERS.length; i++) { var matches, reg; reg = "\\b" + BROWSERS[i].idString + "\\b"; if (!ua.match(new RegExp(reg, "i"))) continue; reg = "\\b" + BROWSERS[i].verString + "\\/(\\d+)"; matches = ua.match(new RegExp(reg, "i")); return matches !== null && Number(matches[1]) >= BROWSERS[i].version; } return false; } function make_websocket(addr) { var url; var ws; url = build_url("ws", addr.host, addr.port, "/"); if (have_websocket_binary_frames()) ws = new WebSocket(url); else ws = new WebSocket(url, "base64"); /* "User agents can use this as a hint for how to handle incoming binary data: if the attribute is set to 'blob', it is safe to spool it to disk, and if it is set to 'arraybuffer', it is likely more efficient to keep the data in memory." */ ws.binaryType = "arraybuffer"; return ws; } function FlashProxy() { if (HEADLESS) { /* No badge. 
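There is no DOM to attach one to when running headless, e.g. under Node.js.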
*/ } else if (DEBUG) { this.badge_elem = debug_div; } else { this.badge = new Badge(); this.badge_elem = this.badge.elem; } if (this.badge_elem) this.badge_elem.setAttribute("id", "flashproxy-badge"); this.proxy_pairs = []; this.start = function() { var client_addr; var relay_addr; var rate_limit_bytes; this.fac_url = get_param_string(query, "facilitator", DEFAULT_FACILITATOR_URL); this.max_num_clients = get_param_integer(query, "max_clients", DEFAULT_MAX_NUM_CLIENTS); if (this.max_num_clients === null || this.max_num_clients < 0) { puts("Error: max_clients must be a nonnegative integer."); this.die(); return; } this.initial_facilitator_poll_interval = get_param_timespec(query, "initial_facilitator_poll_interval", DEFAULT_INITIAL_FACILITATOR_POLL_INTERVAL); if (this.initial_facilitator_poll_interval === null || this.initial_facilitator_poll_interval < 0) { puts("Error: initial_facilitator_poll_interval must be a nonnegative number."); this.die(); return; } this.facilitator_poll_interval = get_param_timespec(query, "facilitator_poll_interval"); if (this.facilitator_poll_interval !== undefined && (this.facilitator_poll_interval === null || this.facilitator_poll_interval < MIN_FACILITATOR_POLL_INTERVAL)) { puts("Error: facilitator_poll_interval must be a nonnegative number at least " + MIN_FACILITATOR_POLL_INTERVAL + "."); this.die(); return; } if (query["ratelimit"] === "off") rate_limit_bytes = undefined; else rate_limit_bytes = get_param_byte_count(query, "ratelimit", DEFAULT_RATE_LIMIT); if (rate_limit_bytes === undefined) { this.rate_limit = new DummyRateLimit(); } else if (rate_limit_bytes === null || rate_limit_bytes < MIN_FACILITATOR_POLL_INTERVAL) { puts("Error: ratelimit must be a nonnegative number at least " + MIN_RATE_LIMIT + "."); this.die(); return; } else { this.rate_limit = new BucketRateLimit(rate_limit_bytes * RATE_LIMIT_HISTORY, RATE_LIMIT_HISTORY); } client_addr = get_param_addr(query, "client"); if (client_addr === null) { puts("Error: can't parse \"client\" parameter."); this.die(); return; } relay_addr = get_param_addr(query, "relay"); if (relay_addr === null) { puts("Error: can't parse \"relay\" parameter."); this.die(); return; } if (client_addr !== undefined && relay_addr !== undefined) { this.begin_proxy(client_addr, relay_addr); return; } else if (client_addr !== undefined) { puts("Error: the \"client\" parameter requires \"relay\" also.") this.die(); return; } else if (relay_addr !== undefined) { puts("Error: the \"relay\" parameter requires \"client\" also.") this.die(); return; } puts("Starting; will contact facilitator in " + this.initial_facilitator_poll_interval + " seconds."); setTimeout(this.proxy_main.bind(this), this.initial_facilitator_poll_interval * 1000); }; this.proxy_main = function() { var params; var base_url, url; var xhr; if (this.proxy_pairs.length >= this.max_num_clients * CONNECTIONS_PER_CLIENT) { setTimeout(this.proxy_main.bind(this), this.facilitator_poll_interval * 1000); return; } /* Flash proxy protocol revision. */ params = [["r", "1"]]; params.push(["transport", "websocket"]); /* Clients we're currently handling. */ for (var i = 0; i < this.proxy_pairs.length; i++) params.push(["client", format_addr(this.proxy_pairs[i].client_addr)]); base_url = this.fac_url.replace(/\?.*/, ""); url = base_url + "?" 
+ build_query_string(params); xhr = new XMLHttpRequest(); try { xhr.open("GET", url); } catch (err) { /* An exception happens here when, for example, NoScript allows the domain on which the proxy badge runs, but not the domain to which it's trying to make the HTTP request. The exception message is like "Component returned failure code: 0x805e0006 [nsIXMLHttpRequest.open]" on Firefox. */ puts("Facilitator: exception while connecting: " + repr(err.message) + "."); return; } xhr.responseType = "text"; xhr.onreadystatechange = function() { if (xhr.readyState === xhr.DONE) { if (xhr.status === 200) { this.fac_complete(xhr.responseText); } else { puts("Facilitator: can't connect: got status " + repr(xhr.status) + " and status text " + repr(xhr.statusText) + "."); } } }.bind(this); /* Remove query string if scrubbing. */ if (SAFE_LOGGING) puts("Facilitator: connecting to " + base_url + "."); else puts("Facilitator: connecting to " + url + "."); xhr.send(null); }; this.fac_complete = function(text) { var response; var client_addr; var relay_addr; var poll_interval; response = parse_query_string(text); if (this.facilitator_poll_interval) { poll_interval = this.facilitator_poll_interval; } else { poll_interval = get_param_integer(response, "check-back-in", DEFAULT_FACILITATOR_POLL_INTERVAL); if (poll_interval === null) { puts("Error: can't parse polling interval from facilitator, " + repr(poll_interval) + "."); poll_interval = DEFAULT_FACILITATOR_POLL_INTERVAL; } if (poll_interval < MIN_FACILITATOR_POLL_INTERVAL) poll_interval = MIN_FACILITATOR_POLL_INTERVAL; } puts("Next check in " + repr(poll_interval) + " seconds."); setTimeout(this.proxy_main.bind(this), poll_interval * 1000); if (!response.client) { puts("No clients."); return; } client_addr = parse_addr_spec(response.client); if (client_addr === null) { puts("Error: can't parse client spec " + safe_repr(response.client) + "."); return; } if (!response.relay) { puts("Error: missing relay in response."); return; } relay_addr = parse_addr_spec(response.relay); if (relay_addr === null) { puts("Error: can't parse relay spec " + safe_repr(response.relay) + "."); return; } puts("Facilitator: got client:" + safe_repr(client_addr) + " " + "relay:" + safe_repr(relay_addr) + "."); this.begin_proxy(client_addr, relay_addr); }; this.begin_proxy = function(client_addr, relay_addr) { for (var i=0; i 0) this.proxy_pairs.pop().close(); }; this.disable = function() { puts("Disabling."); this.cease_operation(); if (this.badge) this.badge.disable(); }; this.die = function() { puts("Dying."); this.cease_operation(); if (this.badge) this.badge.die(); }; } /* An instance of a client-relay connection. */ function ProxyPair(client_addr, relay_addr, rate_limit) { var MAX_BUFFER = 10 * 1024 * 1024; function log(s) { if (!SAFE_LOGGING) { s = format_addr(client_addr) + '|' + format_addr(relay_addr) + ' : ' + s } puts(s) } this.client_addr = client_addr; this.relay_addr = relay_addr; this.rate_limit = rate_limit; this.c2r_schedule = []; this.r2c_schedule = []; this.running = true; this.flush_timeout_id = null; /* This callback function can be overridden by external callers. */ this.cleanup_callback = function() { }; this.connect = function() { log("Client: connecting."); this.client_s = make_websocket(this.client_addr); /* Try to connect to the client first (since that is more likely to fail) and only after that try to connect to the relay. 
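The relay WebSocket is only created in client_onopen_callback, after the client connection has opened.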
*/ this.client_s.label = "Client"; this.client_s.onopen = this.client_onopen_callback; this.client_s.onclose = this.onclose_callback; this.client_s.onerror = this.onerror_callback; this.client_s.onmessage = this.onmessage_client_to_relay; }; this.client_onopen_callback = function(event) { var ws = event.target; log(ws.label + ": connected."); log("Relay: connecting."); this.relay_s = make_websocket(this.relay_addr); this.relay_s.label = "Relay"; this.relay_s.onopen = this.relay_onopen_callback; this.relay_s.onclose = this.onclose_callback; this.relay_s.onerror = this.onerror_callback; this.relay_s.onmessage = this.onmessage_relay_to_client; }.bind(this); this.relay_onopen_callback = function(event) { var ws = event.target; log(ws.label + ": connected."); }.bind(this); this.maybe_cleanup = function() { if (this.running && is_closed(this.client_s) && is_closed(this.relay_s)) { this.running = false; this.cleanup_callback(); return true; } return false; } this.onclose_callback = function(event) { var ws = event.target; log(ws.label + ": closed."); this.flush(); if (this.maybe_cleanup()) { puts("Complete."); } }.bind(this); this.onerror_callback = function(event) { var ws = event.target; log(ws.label + ": error."); this.close(); // we can't rely on onclose_callback to cleanup, since one common error // case is when the client fails to connect and the relay never starts. // in that case close() is a NOP and onclose_callback is never called. this.maybe_cleanup(); }.bind(this); this.onmessage_client_to_relay = function(event) { this.c2r_schedule.push(event.data); this.flush(); }.bind(this); this.onmessage_relay_to_client = function(event) { this.r2c_schedule.push(event.data); this.flush(); }.bind(this); function is_open(ws) { return ws !== undefined && ws.readyState === WebSocket.OPEN; } function is_closed(ws) { return ws === undefined || ws.readyState === WebSocket.CLOSED; } this.close = function() { if (!is_closed(this.client_s)) this.client_s.close(); if (!is_closed(this.relay_s)) this.relay_s.close(); }; /* Send as much data as the rate limit currently allows. */ this.flush = function() { var busy; if (this.flush_timeout_id) clearTimeout(this.flush_timeout_id); this.flush_timeout_id = null; busy = true; while (busy && !this.rate_limit.is_limited()) { var chunk; busy = false; if (is_open(this.client_s) && this.client_s.bufferedAmount < MAX_BUFFER && this.r2c_schedule.length > 0) { chunk = this.r2c_schedule.shift(); this.rate_limit.update(chunk.length); this.client_s.send(chunk); busy = true; } if (is_open(this.relay_s) && this.relay_s.bufferedAmount < MAX_BUFFER && this.c2r_schedule.length > 0) { chunk = this.c2r_schedule.shift(); this.rate_limit.update(chunk.length); this.relay_s.send(chunk); busy = true; } } if (is_closed(this.relay_s) && !is_closed(this.client_s) && this.client_s.bufferedAmount === 0 && this.r2c_schedule.length === 0) { log("Client: closing."); this.client_s.close(); } if (is_closed(this.client_s) && !is_closed(this.relay_s) && this.relay_s.bufferedAmount === 0 && this.c2r_schedule.length === 0) { log("Relay: closing."); this.relay_s.close(); } if (this.r2c_schedule.length > 0 || (is_open(this.client_s) && this.client_s.bufferedAmount > 0) || this.c2r_schedule.length > 0 || (is_open(this.relay_s) && this.relay_s.bufferedAmount > 0)) this.flush_timeout_id = setTimeout(this.flush.bind(this), this.rate_limit.when() * 1000); }; } function BucketRateLimit(capacity, time) { this.amount = 0.0; /* capacity / time is the rate we are aiming for. 
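For example, a "ratelimit" of "30K" (30720 bytes/s) with RATE_LIMIT_HISTORY of 5.0 gives capacity = 153600 bytes and time = 5.0 s; the bucket drains at capacity / time bytes per second, and is_limited() reports true while amount exceeds capacity.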
*/ this.capacity = capacity; this.time = time; this.last_update = new Date(); this.age = function() { var now; var delta; now = new Date(); delta = (now - this.last_update) / 1000.0; this.last_update = now; this.amount -= delta * this.capacity / this.time; if (this.amount < 0.0) this.amount = 0.0; }; this.update = function(n) { this.age(); this.amount += n; return this.amount <= this.capacity; }; /* How many seconds in the future will the limit expire? */ this.when = function() { this.age(); return (this.amount - this.capacity) / (this.capacity / this.time); } this.is_limited = function() { this.age(); return this.amount > this.capacity; } } /* A rate limiter that never limits. */ function DummyRateLimit(capacity, time) { this.update = function(n) { return true; }; this.when = function() { return 0.0; } this.is_limited = function() { return false; } } var HTML_ESCAPES = { "&": "amp", "<": "lt", ">": "gt", "'": "apos", "\"": "quot" }; function escape_html(s) { return s.replace(/&<>'"/, function(x) { return HTML_ESCAPES[x] }); } var LOCALIZATIONS = { "en": { filename: "badge-en.png", text: "Internet Freedom" }, "de": { filename: "badge-de.png", text: "Internetfreiheit" }, "pt": { filename: "badge-pt.png", text: "Internet Livre" }, "ru": { filename: "badge-ru.png", text: "Свобода Интернета" } }; var DEFAULT_LOCALIZATION = { filename: "badge.png", text: "Internet Freedom" }; /* Return an array of progressively less specific language tags, canonicalized for lookup in LOCALIZATIONS. */ function lang_keys(code) { code = code.toLowerCase(); var result = [code]; var m = code.match(/^(\w+)/); if (m !== null) { result.push(m[0]); } return result; } /* Return an object with "filename" and "text" keys appropriate for the given array of language codes. Returns a default value if there is no localization for any of the codes. */ function get_badge_localization(langs) { for (var i = 0; i < langs.length; i++) { var tags = lang_keys(langs[i]); for (var j = 0; j < tags.length; j++) { var localization = LOCALIZATIONS[tags[j]]; if (localization !== undefined) return localization; } } return DEFAULT_LOCALIZATION; } /* The usual embedded HTML badge. The "elem" member is a DOM element that can be included elsewhere. */ function Badge() { /* Number of proxy pairs currently connected. 
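The element starts with the "idle" class, proxy_begin() switches it to "active", and proxy_end() returns it to "idle" once the count drops back to zero; disable() and die() set "disabled" and "dead" respectively.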
*/ this.num_proxy_pairs = 0; var table, tr, td, a, img; table = document.createElement("table"); tr = document.createElement("tr"); table.appendChild(tr); td = document.createElement("td"); tr.appendChild(td); a = document.createElement("a"); a.setAttribute("href", "options.html"); a.setAttribute("target", "_blank"); td.appendChild(a); img = document.createElement("img"); var localization = get_badge_localization(get_langs()); img.setAttribute("src", localization.filename); img.setAttribute("alt", localization.text); a.appendChild(img); this.elem = table; this.elem.className = "idle"; this.proxy_begin = function() { this.num_proxy_pairs++; this.elem.className = "active"; }; this.proxy_end = function() { this.num_proxy_pairs--; if (this.num_proxy_pairs <= 0) { this.elem.className = "idle"; } } this.disable = function() { this.elem.className = "disabled"; } this.die = function() { this.elem.className = "dead"; } } function quote(s) { return "\"" + s.replace(/([\\\"])/g, "\\$1") + "\""; } function maybe_quote(s) { if (!/^[a-zA-Z_]\w*$/.test(s)) return quote(s); else return s; } function repr(x) { if (x === null) { return "null"; } else if (typeof x === "undefined") { return "undefined"; } else if (typeof x === "object") { var elems = []; for (var k in x) elems.push(maybe_quote(k) + ": " + repr(x[k])); return "{ " + elems.join(", ") + " }"; } else if (typeof x === "string") { return quote(x); } else { return x.toString(); } } function safe_repr(s) { return SAFE_LOGGING ? "[scrubbed]" : repr(s); } /* Do we seem to be running in Tor Browser? Check the user-agent string and for no listing of supported MIME types. */ var TBB_UAS = [ "Mozilla/5.0 (Windows NT 6.1; rv:10.0) Gecko/20100101 Firefox/10.0", "Mozilla/5.0 (Windows NT 6.1; rv:17.0) Gecko/20100101 Firefox/17.0", "Mozilla/5.0 (Windows NT 6.1; rv:24.0) Gecko/20100101 Firefox/24.0", ]; function is_likely_tor_browser() { return TBB_UAS.indexOf(window.navigator.userAgent) > -1 && (window.navigator.mimeTypes && window.navigator.mimeTypes.length === 0); } /* Are circumstances such that we should self-disable and not be a proxy? We take a best-effort guess as to whether this device runs on a battery or the data transfer might be expensive. http://www.zytrax.com/tech/web/mobile_ids.html http://googlewebmastercentral.blogspot.com/2011/03/mo-better-to-also-detect-mobile-user.html http://search.cpan.org/~cmanley/Mobile-UserAgent-1.05/lib/Mobile/UserAgent.pm */ function flashproxy_should_disable() { var ua; /* https://trac.torproject.org/projects/tor/ticket/6293 */ if (is_likely_tor_browser()) { puts("Disable because running in Tor Browser."); return true; } ua = window.navigator.userAgent; if (ua) { var UA_LIST = [ /\bmobile\b/i, /\bandroid\b/i, /\bopera mobi\b/i, ]; for (var i = 0; i < UA_LIST.length; i++) { var re = UA_LIST[i]; if (ua.match(re)) { puts("Disable because User-Agent matches mobile pattern " + re + "."); return true; } } if (ua.match(/\bsafari\b/i) && !ua.match(/\bchrome\b/i) && !ua.match(/\bversion\/[6789]\./i)) { /* Disable before Safari 6.0 because it doesn't have the hybi/RFC type of WebSockets. */ puts("Disable because User-Agent is Safari before 6.0."); return true; } } if (!WebSocket) { /* No WebSocket support. */ puts("Disable because of no WebSocket support."); return true; } var flashproxy_allow = get_param_boolean(cookies, OPT_IN_COOKIE); var cookierequired = get_param_boolean(query, "cookierequired", false); /* flashproxy_allow may be true, false, or undefined. 
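It is normally set (to true or false) through the options page linked from the badge, and is undefined when no such cookie is present.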
If undefined, only disable if the cookierequired param is also set. */ if (flashproxy_allow === false) { puts("Disable because of cookie opt-out."); return true; } else if (cookierequired && !flashproxy_allow) { puts("Disable because of cookie required and no opt-in."); return true; } return false; } function flashproxy_badge_new() { var fp; fp = new FlashProxy(); if (flashproxy_should_disable()) fp.disable(); return fp; } function flashproxy_badge_insert() { var fp; var e; fp = flashproxy_badge_new(); /* http://intertwingly.net/blog/2006/11/10/Thats-Not-Write for this trick to insert right after the flashproxy-1.7/setup-client-exe.py000077500000000000000000000011361236350636700173440ustar00rootroot00000000000000#!/usr/bin/python """Setup file for the flashproxy-common python module.""" from distutils.core import setup import os import py2exe build_path = os.path.join(os.environ["PY2EXE_TMPDIR"], "build") dist_path = os.path.join(os.environ["PY2EXE_TMPDIR"], "dist") setup( console=["flashproxy-client", "flashproxy-reg-appspot", "flashproxy-reg-email", "flashproxy-reg-http", "flashproxy-reg-url"], zipfile="py2exe-flashproxy.zip", options={ "build": { "build_base": build_path }, "py2exe": { "includes": ["M2Crypto"], "dist_dir": dist_path } } ) flashproxy-1.7/setup-common.py000077500000000000000000000031551236350636700166020ustar00rootroot00000000000000#!/usr/bin/env python """Setup file for the flashproxy-common python module. To build/install a self-contained binary distribution of flashproxy-client (which integrates this module within it), see Makefile. """ # Note to future developers: # # We place flashproxy-common in the same directory as flashproxy-client for # convenience, so that it's possible to run the client programs directly from # a source checkout without needing to set PYTHONPATH. This works OK currently # because flashproxy-client does not contain python modules, only programs, and # therefore doesn't conflict with the flashproxy-common module. # # If we ever need to have a python module specific to flashproxy-client, the # natural thing would be to add a setup.py for it. That is the reason why this # file is called setup-common.py instead. However, there are still issues that # arise from having two setup*.py files in the same directory, which is an # unfortunate limitation of python's setuptools. # # See discussion on #6810 for more details. import subprocess import sys from setuptools import setup, find_packages p = subprocess.Popen(["sh", "version.sh"], stdout=subprocess.PIPE) output, _ = p.communicate() assert p.poll() == 0 version = output.strip() setup( name = "flashproxy-common", author = "dcf", author_email = "dcf@torproject.org", description = ("Common code for flashproxy"), license = "BSD", keywords = ['tor', 'flashproxy'], packages = find_packages(exclude=['*.test']), test_suite='flashproxy.test', version = version, install_requires = [ 'setuptools', 'M2Crypto', ], ) flashproxy-1.7/torrc000066400000000000000000000013571236350636700146550ustar00rootroot00000000000000## Configuration file for Tor over flash proxies. ## Usage: ## tor -f torrc UseBridges 1 # The address and port are ignored by the client transport plugin. 
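# The dummy addresses 0.0.1.0:1 through :5 differ only so that Tor treats each
# line as a separate bridge; all five lines use the same fingerprint.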
Bridge flashproxy 0.0.1.0:1 4D6C0DF6DEC9398A4DEF07084F3CD395A96DD2AD Bridge flashproxy 0.0.1.0:2 4D6C0DF6DEC9398A4DEF07084F3CD395A96DD2AD Bridge flashproxy 0.0.1.0:3 4D6C0DF6DEC9398A4DEF07084F3CD395A96DD2AD Bridge flashproxy 0.0.1.0:4 4D6C0DF6DEC9398A4DEF07084F3CD395A96DD2AD Bridge flashproxy 0.0.1.0:5 4D6C0DF6DEC9398A4DEF07084F3CD395A96DD2AD # Change the second number here (9000) to the number of a port that can # receive connections from the Internet (the port for which you # configured port forwarding). ClientTransportPlugin flashproxy exec ./flashproxy-client --register :0 :9000 flashproxy-1.7/version.sh000077500000000000000000000002351236350636700156170ustar00rootroot00000000000000#!/bin/sh # Read version from the ChangeLog to avoid repeating in multiple build scripts sed -ne 's/^Changes .* version \(..*\)$/\1/g;tx b :x p;q' ChangeLog