magic-wormhole-0.12.0/.coveragerc

# -*- mode: conf -*-

[run]
# only record trace data for wormhole.*
source = wormhole
# and don't trace the test files themselves, or Versioneer's stuff
omit =
    src/wormhole/test/*
    src/wormhole/_version.py

# This allows 'coverage combine' to correlate the tracing data built while
# running tests in multiple tox virtualenvs. To take advantage of this
# properly, use "coverage erase" before tox, "coverage run --parallel-mode"
# inside tox to avoid overwriting the output data (by writing it into
# .coverage-XYZ instead of just .coverage), and run "coverage combine"
# afterwards.

[paths]
source =
    src/
    .tox/*/lib/python*/site-packages/
    .tox/pypy*/site-packages/

magic-wormhole-0.12.0/LICENSE

The MIT License (MIT)

Copyright (c) 2015 Brian Warner

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

magic-wormhole-0.12.0/MANIFEST.in

include versioneer.py
include src/wormhole/_version.py
include LICENSE README.md NEWS.md
recursive-include docs *.md *.rst *.dot
include docs/wormhole.1 docs/Makefile docs/conf.py
include docs/state-machines/Makefile
include .coveragerc tox.ini snapcraft.yaml
include misc/windows-build.cmd
include misc/*.py misc/web/*.html misc/web/*.js misc/web/*.css
include pyi/*

magic-wormhole-0.12.0/NEWS.md

User-visible changes in "magic-wormhole":

## Release 0.12.0 (04-Apr-2020)

* A command like `wormhole send /dev/fd0` can send the contents of the named
  block device (USB stick, SD card, floppy, etc), resulting in a plain file
  on the other side. (#323)
* Change "accept this file?" default answer from no to yes. (#327 #330 #331)
* Actually use tempfile for large directory transfers. This fixes a
  five-year-old bug which prevented transfers of directories larger than
  available RAM, by finally really building the temporary zipfile on disk.
  (#379)
* Accept 'wss' for TLS-protected relay connections, which default to port
  443 if no other port is provided. A future release will change the public
  relay to use TLS.
  (#144)
* Drop support for python3.4
* Stall `--verify` long enough to send the verifier. This fixes a deadlock
  when both sides use `--verify` and the receiver uses tab-completion: the
  sender sees the verifier and waits for the user to confirm, but the
  receiver cannot show the verifier (enabling that confirmation) until the
  sender approves the transfer. (#349)

This release also includes an incomplete implementation of the new "Dilation" API (see ticket #312 for details). In the future this will enable restarting interrupted transfers, tolerating changes in network address, bidirectional transfers in a long-running GUI/daemon process, and more. The protocol is not finalized, nor is it backward compatible with the old "Transit" protocol yet, so there is no CLI access so far. The code is present and tested to make sure it doesn't regress and for ease of development, but intrepid folks who want to try it out will need to write a client first (and be aware that the protocol may change out from under them). A future release will add compatibility negotiation with old clients and start using the new protocol.

PRs and tickets addressed in this release: #144 #312 #318 #321 #323 #327 #330 #331 #332 #339 #349 #361 #365 #368 #367 #378 #379.

Thanks to the many contributors of bugs, patches, and other help with this release:

* Adam Spiers aka @aspiers
* Евгений Протозанов aka @WeirdCarrotMonster
* Edward Betts aka @EdwardBetts
* Jacek Politowski aka @jpolnetpl
* Julian Stecklina aka @blitz
* Jürgen Gmach aka @jugmac00
* Louis Wilson aka @louiswins
* Miro Hrončok aka @hroncok
* Moritz Schlichting aka @morrieinmaas
* Shea Polansky aka @Phyxius
* @sneakypete81

## Release 0.11.2 (13-Nov-2018)

Rerelease to fix the long description on PyPI. Thanks to Marius Gedminas for tracking down the problem and providing the fix. (#316)

## Release 0.11.1 (13-Nov-2018)

* Fix `python -m wormhole` on py2. (#315)

Thanks to Marius Gedminas, FreddieHo, and Jakub Wilk for patches and bug reports in this release.

## Release 0.11.0 (16-Oct-2018)

* Python-3.7 compatibility was fixed. (#306)
* Support for Python-3.4 on Windows has been dropped. py3.4 is still supported on unix-like operating systems.
* The client version is now sent to the mailbox server for each connection. I strive to have the client share as little information as possible, but I think this will help me improve the protocol by giving me a better idea of client-upgrade adoption rates. (#293)

Packaging changes:

* We moved the Rendezvous Server (now named the "Mailbox Server") out to a separate package and repository named `magic-wormhole-mailbox-server`. We still import it for tests. Use `pip install magic-wormhole-mailbox-server` to run your own server. (#240)
* The code is now formatted to be PEP8 compliant. (#296)
* The Dockerfile was removed: after the Mailbox Server was moved out, I don't think it was relevant. (#295)

Thanks to Andreas `Baeumla` Bäuml, Marius `mgedmin` Gedminas, Ofek `ofek` Lev, Thomas `ThomasWaldmann` Waldmann, and Vasudev `copyninja` Kamath for patches and bug reports in this release.

## Release 0.10.5 (14-Feb-2018)

* Upgrade to newer python-spake2, to improve startup speed by not computing
  blinding factors for unused parameter sets. On a Raspberry Pi 3, this
  reduces "wormhole --version" time from ~19s to 7s.
* Fix a concurrency bug that could cause a crash if the server responded too
  quickly.
  (#280)

## Release 0.10.4 (28-Jan-2018)

Minor client changes:

* accept `$WORMHOLE_RELAY_URL` and `$WORMHOLE_TRANSIT_HELPER` environment variables, in addition to command-line arguments (#256)
* fix --tor-control-port=, which was completely broken before. If you use --tor but not --tor-control-port=, we'll try the default control ports before falling back to the default SOCKS port (#252)
* fix more directory-separator pathname problems, especially for bash-on-windows (#251)
* change `send` output format to make copy-paste easier (#266, #267)

We also moved the docs to readthedocs (https://magic-wormhole.readthedocs.io/), rather than pointing folks at the GitHub rendered markdown files. This should encourage us to write more instructional text in the future.

Finally, we removed the Transit Relay server code from the `magic-wormhole` package and repository. It now lives in a separate repository named `magic-wormhole-transit-relay`, and we only import it for tests. If you'd like to run a transit relay, you'll want to use `pip install magic-wormhole-transit-relay`.

Thanks to meejah, Jonathan "jml" Lange, Alex Gaynor, David "dharrigan" Harrigan, and Jaye "jtdoepke" Doepke, for patches and bug reports in this release.

## Release 0.10.3 (12-Sep-2017)

Minor client changes:

* `wormhole help` should behave like `wormhole --help` (#61)
* accept unicode pathnames (although bugs likely remain) (#223)
* reject invalid codes (with space, or non-numeric prefix) at entry (#212)
* docs improvements (#225, #249)

Server changes:

* `wormhole-server start` adds `--relay-database-path` and `--stats-json-path` (#186)
* accept `--websocket-protocol-option=` (#196, #197)
* increase RLIMIT_NOFILE to allow more simultaneous client connections (#238)
* "crowded" mailboxes now deliver an error to clients, so they should give up instead of reconnecting (#211)
* construct relay DB more safely (#189)

In addition, the snapcraft packaging was updated (#202), and `setup.py` now properly marks the dependency on `attrs` (#248).

Thanks to cclauss, Buckaroo9, JP Calderone, Pablo Oliveira, Leo Arias, Johan Lindskogen, lanzelot1989, CottonEaster, Chandan Rai, Jaakko Luttinen, Alex Gaynor, and Quentin Hibon for patches and bug reports fixed in this release.

## Release 0.10.2 (26-Jun-2017)

WebSocket connection errors are now reported properly. Previous versions crashed with an unhelpful `automat._core.NoTransition` exception when the TCP connection was established but WebSocket negotiation could not complete (e.g. the URL path was incorrect and the server reported a 404, or we connected to an SMTP or other non-HTTP server). (#180)

The unit test suite should now pass: a CLI-version advertisement issue caused the 0.10.1 release tests to fail.

Thanks to Fabien "fdev31" Devaux for bug reports addressed in this release.

## Release 0.10.1 (26-Jun-2017)

Server-only: the rendezvous server no longer advertises a CLI version unless specifically requested (by passing --advertise-version= to `wormhole-server start`). The public server no longer does this, so e.g. 0.10.0 clients will not emit a warning about the server recommending the 0.9.2 release. This feature was useful when the only way to use magic-wormhole was to install the CLI tool with pip; however, now that 0.9.1 is in Debian Stretch (and we hope to maintain compatibility with it), the nag-you-to-upgrade messages probably do more harm than good. (#179)

No user-visible client-side changes.

Thanks to ilovezfs and JP Calderone for bug reports addressed in this release.
## Release 0.10.0 (24-Jun-2017)

The client-side code was completely rewritten, with proper Automat state machines. The only immediately user-visible consequence is that restarting the rendezvous server no longer terminates all waiting clients, so server upgrades won't be quite so traumatic. In the future, this will also support "Journaled Mode" (see docs/journal.md for details). (#42, #68)

The programmatic API has changed (see docs/api.md). Stability is not promised until we reach 1.0, but this should be close, at least for the non-Transit portions.

`wormhole send DIRECTORY` can now handle larger (>2GB) directories. However the entire zipfile is built in-RAM before transmission, so the maximum size is still limited by available memory (follow #58 for progress on fixing this). (#138)

`wormhole rx --output-file=` for a pre-existing file will now overwrite the file (noisily), instead of terminating with an error. (#73)

We now test on py3.6. Support for py3.3 was dropped. Magic-wormhole should now work on NetBSD. (#158)

Added a Dockerfile to build a rendezvous/transit-relay server. (#149)

`wormhole-server --disallow-list` instructs the rendezvous server to not honor "list nameplates" requests, effectively disabling tab-completion of the initial numeric portion of the wormhole code, but also making DoS attacks slightly easier to detect. (#53, #150)

`wormhole send --ignore-unsendable-files` will skip things that cannot be sent (mostly dangling symlinks and files for which you do not have read permission, but possibly also unix-domain sockets, device nodes, and pipes). (#112, #161)

`txtorcon` is now required by default, so the `magic-wormhole[tor]` "extra" was removed, and a simple `pip install magic-wormhole` should provide tor-based transport as long as Tor itself is available. Also, Tor works on py3 now. (#136, #174)

`python -m wormhole` is an alternative way to run the CLI tool. (#159)

`wormhole send` might handle non-ascii (unicode) filenames better now. (#157)

Thanks to Alex Gaynor, Atul Varma, dkg, JP Calderone, Kenneth Reitz, Kurt Rose, maxalbert, meejah, midnightmagic, Robert Foss, Shannon Mulloy, and Shirley Kotian, for patches and bug reports in this release cycle. A special thanks to Glyph, Mark Williams, and the whole #twisted crew at PyCon for help with the transition to Automat.

## Release 0.9.2 (16-Jan-2017)

Tor support was rewritten. `wormhole send`, `wormhole receive`, `wormhole ssh invite`, and `wormhole ssh accept` all now accept three Tor-related arguments:

* `--tor`: use Tor for all connections, and hide all IP addresses
* `--launch-tor`: launch a new Tor process instead of using an existing one
* `--tor-control-port=`: use a specific control port, instead of using the default

If Tor is already running on your system (either as an OS-installed package, or because the [TorBrowser](https://www.torproject.org/projects/torbrowser.html) application is running), simply adding `--tor` should be sufficient. If Tor is installed but not running, you may need to use both, e.g. `wormhole send --tor --launch-tor`. See docs/tor.md for more details.

Note that Tor support must be requested at install time (with `pip install magic-wormhole[tor]`), and only works on python2.7 (not py3). (#64, #97)

The relay and transit URLs were changed to point at the project's official domain name (magic-wormhole.io). The servers themselves are identical (only the domain name changed, not the IP address), so this release is fully compatible with previous releases.
A packaging file for "snapcraft.io" is now included. (#131)

`wormhole receive` now reminds you that tab-completion is available, if you didn't use the Tab key while entering the code. (#15)

`wormhole receive` should work on cygwin now (a problem with the readline-completion library caused a failure on previous releases). (#111)

Thanks to Atul Varma, Leo Arias, Daniel Kahn Gillmor, Christopher Wood, Kostin Anagnostopoulos, Martin Falatic, and Joey Hess for patches and bug reports in this cycle.

## Release 0.9.1 (01-Jan-2017)

The `wormhole` client's `--transit-helper=` argument can now include a "relay priority" via a numerical `priority=` field, e.g. `--transit-helper tcp:example.org:12345:priority=2.5`. Clients exchange transit relay suggestions, then try to use the highest-priority relay first, falling back to others after a few seconds if necessary. Direct connections are always preferred to a relay. Clients running 0.9.0 or earlier will ignore priorities, and unmarked relay arguments have an implicit priority of 0. (#103)

Other changes:

* clients now tolerate duplicate peer messages: in the future, this will help clients recover from intermittent rendezvous connections (#121)
* rendezvous server: ensure release() and close() are idempotent (from different connections), also for lost-connection recovery (#118)
* transit server: respect --blur-usage= by not logging connections
* README: note py3.6 compatibility

Thanks to xloem, kneufeld, and meejah for their help this cycle.

## Release 0.9.0 (24-Dec-2016)

This release fixes an important "Transit Relay" bug that would have prevented future versions from using non-default relay servers. It is now easier to run `wormhole` as a subprocess beneath some other program (the long term goal is to provide a nice API, but even with one, there will be programs written in languages without Wormhole bindings that may find it most convenient to use a subprocess).

* fix `--transit-helper=`: Older versions had a bug that broke file/directory transfers when the two sides offered different transit-relay servers. This was fixed by deduplicating relay hints and adding a new kind of relay handshake. Clients running 0.9.0 or higher now require a transit-relay server running 0.9.0 or higher. (#115)
* `wormhole receive`: reject transfers when the target does not appear to have enough space (not available on windows) (#91)
* CLI: emit pacifier message when key-verification is slow (#29)
* add `--appid=` so wrapping scripts can use a distinct value (#113)
* `wormhole send`: flush output after displaying code, for use in scripts (#108)
* CLI: print progress messages to stderr, not stdout (#99)
* add basic man(1) pages (#69)

Many thanks to patch submitters for this release: Joey Hess, Jared Anderson, Antoine Beaupré, and to everyone testing and filing issues on Github.

## Release 0.8.2 (08-Dec-2016)

* CLI: add new "wormhole ssh invite" and "wormhole ssh accept" commands, to
  facilitate appending your `~/.ssh/id_*.pub` key into a
  suitably-permissioned remote `~/.ssh/authorized_keys` file. These commands
  are experimental: the syntax might be changed in the future, or they might
  be removed altogether.
* CLI: "wormhole recv" and "wormhole recieve" are now accepted as aliases for "wormhole receive", to help bad spelers :) * CLI: improve display of abbreviated file sizes * CLI: don't print traceback upon "normal" errors * CLI: when target file already exists, don't reveal that fact to the sender, just say "transfer rejected" * magic-wormhole now depends upon `Twisted[tls]`, which will cause pyOpenSSL and the `cryptography` package to be installed. This should prevent a warning about the "service_identity" module not being available. * other smaller internal changes Thanks to everyone who submitted patches in this release cycle: anarcat, Ofekmeister, Tom Lowenthal, meejah, dreid, and dkg. And thanks to the many bug reporters on Github! ## Release 0.8.1 (27-Jul-2016) This release contains mostly minor changes. The most noticeable is that long-lived wormholes should be more reliable now. Previously, if you run `wormhole send` but your peer doesn't run their `receive` for several hours, a NAT/firewall box on either side could stop forwarding traffic for the idle connection (without sending a FIN or RST to properly close the socket), causing both sides to hang forever and never actually connect. Now both sides send periodic keep-alive messages to prevent this. In addition, by switching to "Click" for argument parsing, we now have short command aliases: `wormhole tx` does the same thing as `wormhole send`, and `wormhole rx` is an easier-to-spell equivalent of `wormhole receive`. Other changes: * CLI: move most arguments to be attached to the subcommand (new: `wormhole send --verify`) rather than on the "wormhole" command (old: `wormhole --verify send`). Four arguments remain on the "wormhole" command: `--relay-url=`, `--transit-helper=`, `--dump-timing=`, and `--version`. * docs: add links to PyCon2016 presentation * reject wormhole-codes with spaces with a better error message * magic-wormhole ought to work on windows now * code-input tab-completion should work on stock OS-X python (with libedit) * sending a directory should restore file permissions correctly * server changes: * expire channels after two hours, not 3 days * prune channels more accurately * improve munin plugins for server monitoring Many thanks to the folks who contributed to this release, during the PyCon sprints and afterwards: higs4281, laharah, Chris Wolfe, meejah, wsanchez, Kurt Neufeld, and Francois Marier. ## Release 0.8.0 (28-May-2016) This release is completely incompatible with the previous 0.7.6 release. Clients using 0.7.6 or earlier will not even notice clients using 0.8.0 or later. * Overhaul client-server websocket protocol, client-client PAKE messages, per-message encryption-key derivation, relay-server database schema, SPAKE2 key-derivation, and public relay URLs. Add version fields and unknown-message tolerance to most protocol steps. * Hopefully this will provide forward-compatibility with future protocol changes. I have several on my list, and the version fields should make it possible to add these without a flag day (at worst a "flag month"). * User-visible changes are minimal, although some operations should be faster because we no longer need to wait for ACKs before proceeding. 
* API changes: `.send_data()/.get_data()` became `.send()/.get()`, neither takes a phase= argument (the Wormhole is now a record pipe); `.get_verifier()` became `.verify()` (and waits to receive the key-confirmation message before firing its Deferred); wormholes are constructed with a function call instead of a class constructor; `close()` always waits for server ack of outbound messages. Note that the API remains unstable until 1.0.0.
* misc/munin/ contains plugins for relay server operators

## Release 0.7.6 (08-May-2016)

* Switch to "tqdm" for nicer CLI progress bars.
* Fail better when input-code is interrupted (prompt user to hit Return, rather than hanging forever)
* Close channel upon error more reliably.
* Explain WrongPasswordError better.
* (internal): improve --dump-timing instrumentation and rendering.

Compatibility: this remains compatible with 0.7.x, and 0.8.x is still expected to break compatibility.

## Release 0.7.5 (20-Apr-2016)

* The CLI tools now use the Twisted-based library exclusively.
* The blocking-flavor "Transit" library has been removed. Transit is the bulk-transfer protocol used by send-file/send-directory. Upcoming protocol improvements (performance and connectivity) proved too difficult to implement in a blocking fashion, so for now if you want Transit, use Twisted.
* The Twisted-flavor "Wormhole" library now uses WebSockets to connect, rather than HTTP. The blocking-flavor library continues to use HTTP. "Wormhole" is the one-message-at-a-time relay-based protocol, and is used to set up Transit for the send-file and send-directory modes of the CLI tool.
* Twisted-flavor input_code() now does readline-based code entry, with tab completion.
* The package now installs two executables: "wormhole" (for send and receive), and "wormhole-server" (to start and manage the relay servers). These may be re-merged in a future release.

Compatibility:

* This release remains compatible with the previous ones. The next major release (0.8.x) will probably break compatibility.

Packaging:

* magic-wormhole now depends upon "Twisted" and "autobahn" (for WebSockets). Autobahn pulls in txaio, but we don't support it yet (a future version of magic-wormhole might).
* To work around a bug in autobahn, we also (temporarily) depend upon "pytrie". This dependency will be removed when the next autobahn release is available.

## Release 0.7.0 (28-Mar-2016)

* `wormhole send DIRNAME/` used to deal very badly with the trailing slash (sending a directory with an empty name). This is now fixed.
* Preliminary Tor support was added. Install `magic-wormhole[tor]`, make sure you have a Tor executable on your $PATH, and run `wormhole --tor send`. This will launch a new Tor process. Do not use this in anger/fear until it has been tested more carefully. This feature is likely to be unstable for a while, and lacks tests.
* The relay now prunes unused channels properly.
* Added --dump-timing= to record timeline of events, for debugging and performance improvements. You can combine timing data from both sides to see where the delays are happening. The server now returns timestamps in its responses, to measure round-trip delays. A web-based visualization tool was added in `misc/dump-timing.py`.
* twisted.transit was not properly handling multiple records received in a
  single chunk. Some producer/consumer helper methods were added. You can
  now run e.g. `wormhole --twisted send` to force the use of the Twisted
  implementation.
* The Twisted wormhole now uses a persistent connection for all relay messages, which should be slightly faster.
* Add `--no-listen` to prevent Transit from listening for inbound connections (or advertising any addresses): this is only useful for testing.
* The tests now collect code coverage information, and upload it to https://codecov.io/github/warner/magic-wormhole?ref=master .

## Release 0.6.3 (29-Feb-2016)

Mostly internal changes:

* twisted.transit was added, so Twisted-based applications can use it now. This includes Producer/Consumer -based flow control. The Transit protocol and API are documented in docs/transit.md .
* The transit relay server can blur filesizes, rounding them to some roughly-logarithmic interval.
* Use --relay-helper="" to disable use of the transit relay entirely, limiting the file transfer to direct connections.
* The new --hide-progress option disables the progress bar.
* Made some windows-compatibility fixes, but not all tests pass yet.

## Release 0.6.2 (12-Jan-2016)

* the server can now "blur" usage information: this turns off HTTP logging, and rounds timestamps to coarse intervals
* `wormhole server usage` now shows Transit usage too, not just Rendezvous

## Release 0.6.1 (03-Dec-2015)

* `wormhole` can now send/receive entire directories. They are zipped before transport.
* Python 3 is now supported for async (Twisted) library use, requiring at least Twisted-15.5.0.
* A bug was fixed which prevented py3-based clients from using the relay transit server (not used if the two sides can reach each other directly).
* The `--output-file=` argument was finally implemented, which allows the receiver to override the filename that it writes. This may help scripted usage.
* Support for Python-2.6 was removed, since the recent Twisted-15.5.0 removed it too. It might still work, but is no longer automatically tested.
* The transit relay now implements proper flow control (Producer/Consumer), so it won't buffer the entire file when the sender can push data faster than the receiver can accept it. The sender should now throttle down to the receiver's maximum rate.

## Release 0.6.0 (23-Nov-2015)

* Add key-confirmation message so "wormhole send" doesn't hang when the receiver mistypes the code.
* Fix `wormhole send --text -` to read the text message from stdin. `wormhole receive >outfile` works, but currently appends an extra newline, which may be removed in a future release.
* Arrange for 0.4.0 senders to print an error message when connecting to a current (0.5.0) server, instead of an ugly stack trace. Unfortunately 0.4.0 receivers still display the traceback, since they don't check the welcome message before using a missing API. 0.5.0 and 0.6.0 will do better.
* Improve channel deallocation upon error.
* Inform the server of our "mood" when the connection closes, so it can track the rate of successful/unsuccessful transfers. The server DB now stores a summary of each transfer (waiting time and reported outcome).
* Rename (and deprecate) one server API (the non-EventSource form of "get"), leaving it in place until after the next release. 0.5.0 clients should interoperate with both the 0.6.0 server and 0.6.0 clients, but eventually they'll stop working.

## Release 0.5.0 (07-Oct-2015)

* Change the CLI to merge send-file with send-text, and receive-file with receive-text. Add confirmation before accepting a file.
* Change the remote server API significantly, breaking compatibility with
  0.4.0 peers. Fix EventSource to match W3C spec and real browser behavior.
* Add py3 (3.3, 3.4, 3.5) compatibility for blocking calls (but not Twisted).
* internals
  * Introduce Channel and ChannelManager to factor out the HTTP/EventSource technology in use (making room for WebSocket or Tor in the future).
  * Change app-visible API to allow multiple message phases.
  * Change most API arguments from bytes to unicode strings (appid, URLs, wormhole code, derive_key purpose string, message phase). Derived keys are bytes, of course.
* Add proper unit tests.

## Release 0.4.0 (22-Sep-2015)

This changes the protocol (to a symmetric form), breaking compatibility with 0.3.0 peers. Now both blocking-style and Twisted-style use a symmetric protocol, and the two sides do not need to figure out (ahead of time) which one goes first. The internal layout was rearranged, so applications that import wormhole must be updated.

## Release 0.3.0 (24-Jun-2015)

Add preliminary Twisted support, only for symmetric endpoints (no initiator/receiver distinction). Lacks code-entry tab-completion. May still leave timers lingering.

Add test suite (only for Twisted, so far).

Use a sqlite database for Relay server state, to survive reboots with less data loss.

Add "--advertise-version=" to "wormhole relay start", to override the version we recommend to clients.

## Release 0.2.0 (10-Apr-2015)

Initial release: supports blocking/synchronous asymmetric endpoints (Initiator on one side, Receiver on the other). Codes can be generated by Initiator, or created externally and passed into both (as long as they start with digits: NNN-anything).

magic-wormhole-0.12.0/PKG-INFO

Metadata-Version: 2.1
Name: magic-wormhole
Version: 0.12.0
Summary: Securely transfer data between computers
Home-page: https://github.com/warner/magic-wormhole
Author: Brian Warner
Author-email: warner-magic-wormhole@lothar.com
License: MIT
Description: # Magic Wormhole

    [![PyPI](http://img.shields.io/pypi/v/magic-wormhole.svg)](https://pypi.python.org/pypi/magic-wormhole)
    [![Build Status](https://travis-ci.org/warner/magic-wormhole.svg?branch=master)](https://travis-ci.org/warner/magic-wormhole)
    [![Windows Build Status](https://ci.appveyor.com/api/projects/status/mfnn5rsyfnrq576a/branch/master?svg=true)](https://ci.appveyor.com/project/warner/magic-wormhole)
    [![codecov.io](https://codecov.io/github/warner/magic-wormhole/coverage.svg?branch=master)](https://codecov.io/github/warner/magic-wormhole?branch=master)
    [![Docs](https://readthedocs.org/projects/magic-wormhole/badge/?version=latest)](https://magic-wormhole.readthedocs.io)

    Get things from one computer to another, safely.

    This package provides a library and a command-line tool named `wormhole`, which makes it possible to get arbitrary-sized files and directories (or short pieces of text) from one computer to another. The two endpoints are identified by using identical "wormhole codes": in general, the sending machine generates and displays the code, which must then be typed into the receiving machine.

    The codes are short and human-pronounceable, using a phonetically-distinct wordlist. The receiving side offers tab-completion on the codewords, so usually only a few characters must be typed. Wormhole codes are single-use and do not need to be memorized.

    * PyCon 2016 presentation: [Slides](http://www.lothar.com/~warner/MagicWormhole-PyCon2016.pdf), [Video](https://youtu.be/oFrTqQw0_3c)

    For complete documentation, please see https://magic-wormhole.readthedocs.io or the docs/ subdirectory.
    ## License, Compatibility

    Magic-Wormhole is released under the MIT license, see the `LICENSE` file for details.

    This library is compatible with Python 3.5 and higher (tested against 3.5, 3.6, 3.7, and 3.8). It also still works with Python 2.7.

    ## Packaging, Installation

    Magic Wormhole packages are included in many operating systems.

    [![Packaging status](https://repology.org/badge/vertical-allrepos/magic-wormhole.svg)](https://repology.org/project/magic-wormhole/versions)

    To install it without an OS package, follow the [Installation docs](https://magic-wormhole.readthedocs.io/en/latest/welcome.html#installation).

Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 2
Classifier: Programming Language :: Python :: 2.7
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.5
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Topic :: Security :: Cryptography
Classifier: Topic :: System :: Networking
Classifier: Topic :: System :: Systems Administration
Classifier: Topic :: Utilities
Description-Content-Type: text/markdown
Provides-Extra: dilate
Provides-Extra: dev

magic-wormhole-0.12.0/README.md

# Magic Wormhole

[![PyPI](http://img.shields.io/pypi/v/magic-wormhole.svg)](https://pypi.python.org/pypi/magic-wormhole)
[![Build Status](https://travis-ci.org/warner/magic-wormhole.svg?branch=master)](https://travis-ci.org/warner/magic-wormhole)
[![Windows Build Status](https://ci.appveyor.com/api/projects/status/mfnn5rsyfnrq576a/branch/master?svg=true)](https://ci.appveyor.com/project/warner/magic-wormhole)
[![codecov.io](https://codecov.io/github/warner/magic-wormhole/coverage.svg?branch=master)](https://codecov.io/github/warner/magic-wormhole?branch=master)
[![Docs](https://readthedocs.org/projects/magic-wormhole/badge/?version=latest)](https://magic-wormhole.readthedocs.io)

Get things from one computer to another, safely.

This package provides a library and a command-line tool named `wormhole`, which makes it possible to get arbitrary-sized files and directories (or short pieces of text) from one computer to another. The two endpoints are identified by using identical "wormhole codes": in general, the sending machine generates and displays the code, which must then be typed into the receiving machine.

The codes are short and human-pronounceable, using a phonetically-distinct wordlist. The receiving side offers tab-completion on the codewords, so usually only a few characters must be typed. Wormhole codes are single-use and do not need to be memorized.

* PyCon 2016 presentation: [Slides](http://www.lothar.com/~warner/MagicWormhole-PyCon2016.pdf), [Video](https://youtu.be/oFrTqQw0_3c)

For complete documentation, please see https://magic-wormhole.readthedocs.io or the docs/ subdirectory.

## License, Compatibility

Magic-Wormhole is released under the MIT license, see the `LICENSE` file for details.

This library is compatible with Python 3.5 and higher (tested against 3.5, 3.6, 3.7, and 3.8). It also still works with Python 2.7.

## Packaging, Installation

Magic Wormhole packages are included in many operating systems.
[![Packaging status](https://repology.org/badge/vertical-allrepos/magic-wormhole.svg)](https://repology.org/project/magic-wormhole/versions)

To install it without an OS package, follow the [Installation docs](https://magic-wormhole.readthedocs.io/en/latest/welcome.html#installation).

magic-wormhole-0.12.0/docs/Makefile

# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line.
SPHINXOPTS    =
SPHINXBUILD   = sphinx-build
SPHINXPROJ    = Magic-Wormhole
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

magic-wormhole-0.12.0/docs/api.md

# The Magic-Wormhole API

This library provides a mechanism to securely transfer small amounts of data between two computers. Both machines must be connected to the internet, but they do not need to have public IP addresses or know how to contact each other ahead of time.

Security and connectivity are provided by means of a "wormhole code": a short string that is transcribed from one machine to the other by the users at the keyboard. This works in conjunction with a baked-in "rendezvous server" that relays information from one machine to the other.

The "Wormhole" object provides a secure record pipe between any two programs that use the same wormhole code (and are configured with the same application ID and rendezvous server). Each side can send multiple messages to the other, but the encrypted data for all messages must pass through (and be temporarily stored on) the rendezvous server, which is a shared resource. For this reason, larger data (including bulk file transfers) should use the Transit class instead. The Wormhole can be used to create a Transit object for this purpose. In the future, Transit will be deprecated, and this functionality will be incorporated directly as a "dilated wormhole".

A quick example:

```python
import wormhole
from twisted.internet.defer import inlineCallbacks

@inlineCallbacks
def go():
    w = wormhole.create(appid, relay_url, reactor)
    w.allocate_code()
    code = yield w.get_code()
    print("code: %s" % code)
    w.send_message(b"outbound data")
    inbound = yield w.get_message()
    yield w.close()
```

## Modes

The API comes in two flavors: Delegated and Deferred. Controlling the Wormhole and sending data is identical in both, but they differ in how inbound data and events are delivered to the application.

In Delegated mode, the Wormhole is given a "delegate" object, on which certain methods will be called when information is available (e.g. when the code is established, or when data messages are received). In Deferred mode, the Wormhole object has methods which return Deferreds that will fire at these same times.
Delegated mode:

```python
class MyDelegate:
    def wormhole_got_code(self, code):
        print("code: %s" % code)
    def wormhole_got_message(self, msg): # called for each message
        print("got data, %d bytes" % len(msg))

w = wormhole.create(appid, relay_url, reactor, delegate=MyDelegate())
w.allocate_code()
```

Deferred mode:

```python
w = wormhole.create(appid, relay_url, reactor)
w.allocate_code()

def print_code(code):
    print("code: %s" % code)
w.get_code().addCallback(print_code)

def received(msg):
    print("got data, %d bytes" % len(msg))
w.get_message().addCallback(received) # gets exactly one message
```

## Application Identifier

Applications using this library must provide an "application identifier", a simple string that distinguishes one application from another. To ensure uniqueness, use a domain name. To use multiple apps for a single domain, append a URL-like slash and path, like `example.com/app1`. This string must be the same on both clients, otherwise they will not see each other. The invitation codes are scoped to the app-id. Note that the app-id must be unicode, not bytes, so on python2 use `u"appid"`.

Distinct app-ids reduce the size of the connection-id numbers. If fewer than ten Wormholes are active for a given app-id, the connection-id will only need to contain a single digit, even if some other app-id is currently using thousands of concurrent sessions.

## Rendezvous Servers

The library depends upon a "rendezvous server", which is a service (on a public IP address) that delivers small encrypted messages from one client to the other. This must be the same for both clients, and is generally baked-in to the application source code or default config.

This library includes the URL of a public rendezvous server run by the author. Application developers can use this one, or they can run their own (see the https://github.com/warner/magic-wormhole-mailbox-server repository) and configure their clients to use it instead.

The URL of the public rendezvous server is passed as a unicode string. Note that because the server actually speaks WebSockets, the URL starts with `ws:` instead of `http:`.

## Wormhole Parameters

All wormholes must be created with at least three parameters:

* `appid`: a (unicode) string
* `relay_url`: a (unicode) string
* `reactor`: the Twisted reactor object

In addition to these three, the `wormhole.create()` function takes several optional arguments (see the sketch below for how they fit together):

* `delegate`: provide a Delegate object to enable "delegated mode", or pass None (the default) to get "deferred mode"
* `journal`: provide a Journal object to enable journaled mode. See journal.md for details. Note that journals only work with delegated mode, not with deferred mode.
* `tor_manager`: to enable Tor support, create a `wormhole.TorManager` instance and pass it here. This will hide the client's IP address by proxying all connections (rendezvous and transit) through Tor. It also enables connecting to Onion-service transit hints, and (in the future) will enable the creation of Onion-services for transit purposes.
* `timing`: this accepts a DebugTiming instance, mostly for internal diagnostic purposes, to record the transmit/receive timestamps for all messages. The `wormhole --dump-timing=` feature uses this to build a JSON-format data bundle, and the `misc/dump-timing.py` tool can build a scrollable timing diagram from these bundles.
* `welcome_handler`: this is a function that will be called when the Rendezvous Server's "welcome" message is received. It is used to display important server messages in an application-specific way.
* `versions`: this can accept a dictionary (JSON-encodable) of data that will be made available to the peer via the `got_versions` event. This data is delivered before any data messages, and can be used to indicate peer capabilities.
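As a rough illustration of how these fit together, here is a hypothetical setup; the app-id, relay URL, and `versions` payload are invented placeholders, not values shipped with this library:

```python
import wormhole
from twisted.internet import reactor

appid = u"example.com/my-app"                  # hypothetical application ID
relay_url = u"ws://relay.example.com:4000/v1"  # placeholder rendezvous server URL

# Deferred mode (no delegate), advertising an app-specific capability
# dictionary that the peer will later receive from its get_versions() Deferred.
w = wormhole.create(appid, relay_url, reactor,
                    versions={"my-app": {"transfer-v2": True}})
```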
## Code Management

Each wormhole connection is defined by a shared secret "wormhole code". These codes can be created by humans offline (by picking a unique number and some secret words), but are more commonly generated by asking the library to make one. In the "bin/wormhole" file-transfer tool, the default behavior is for the sender's program to create the code, and for the receiver to type it in.

The code is a (unicode) string in the form `NNN-code-words`. The numeric "NNN" prefix is the "channel id" or "nameplate", and is a short integer allocated by talking to the rendezvous server. The rest is a randomly-generated selection from the PGP wordlist, providing a default of 16 bits of entropy. The initiating program should display this code to the user, who should transcribe it to the receiving user, who gives it to their local Wormhole object by calling `set_code()`. The receiving program can also use `input_code()` to use a readline-based input function: this offers tab completion of allocated channel-ids and known codewords.

The Wormhole object has three APIs for generating or accepting a code:

* `w.allocate_code(length=2)`: this contacts the Rendezvous Server, allocates a short numeric nameplate, chooses a configurable number of random words, then assembles them into the code
* `w.set_code(code)`: this accepts the complete code as an argument
* `helper = w.input_code()`: this facilitates interactive entry of the code, with tab-completion. The helper object has methods to return a list of viable completions for whatever portion of the code has been entered so far. A convenience wrapper is provided to attach this to the `rlcompleter` function of libreadline.

No matter which mode is used, the `w.get_code()` Deferred (or `delegate.wormhole_got_code(code)` callback) will fire when the code is known. `get_code` is clearly necessary for `allocate_code`, since there's no other way to learn what code was created, but it may be useful in other modes for consistency.

The code-entry Helper object has the following API:

* `refresh_nameplates()`: requests an updated list of nameplates from the Rendezvous Server. These form the first portion of the wormhole code (e.g. "4" in "4-purple-sausages"). Note that they are unicode strings (so "4", not 4). The Helper will get the response in the background, and calls to `get_nameplate_completions()` after the response will use the new list. Calling this after `h.choose_nameplate` will raise `AlreadyChoseNameplateError`.
* `matches = h.get_nameplate_completions(prefix)`: returns (synchronously) a set of completions for the given nameplate prefix, along with the hyphen that always follows the nameplate (and separates the nameplate from the rest of the code). For example, if the server reports nameplates 1, 12, 13, 24, and 170 are in use, `get_nameplate_completions("1")` will return `{"1-", "12-", "13-", "170-"}`. You may want to sort these before displaying them to the user. Raises `AlreadyChoseNameplateError` if called after `h.choose_nameplate`.
* `h.choose_nameplate(nameplate)`: accepts a string with the chosen nameplate. May only be called once, after which `AlreadyChoseNameplateError` is raised. (In the future, this might return a Deferred that fires (with None) when the nameplate's wordlist is known, which happens after the nameplate is claimed, requiring a roundtrip to the server.)
* `d = h.when_wordlist_is_available()`: return a Deferred that fires (with None) when the wordlist is known. This can be used to block a readline frontend which has just called `h.choose_nameplate()` until the resulting wordlist is known, which can improve the tab-completion behavior.
* `matches = h.get_word_completions(prefix)`: return (synchronously) a set of completions for the given words prefix. This will include a trailing hyphen if more words are expected. The possible completions depend upon the wordlist in use for the previously-claimed nameplate, so calling this before `choose_nameplate` will raise `MustChooseNameplateFirstError`. Calling this after `h.choose_words()` will raise `AlreadyChoseWordsError`. Given a prefix like "su", this returns a set of strings which are potential matches (e.g. `{"supportive-", "surrender-", "suspicious-"}`). The prefix should not include the nameplate, but *should* include whatever words and hyphens have been typed so far (the default wordlist uses alternate lists, where even-numbered words have three syllables, and odd-numbered words have two, so the completions depend upon how many words are present, not just the partial last word). E.g. `get_word_completions("pr")` will return `{"processor-", "provincial-", "proximate-"}`, while `get_word_completions("opulent-pr")` will return `{"opulent-preclude", "opulent-prefer", "opulent-preshrunk", "opulent-printer", "opulent-prowler"}` (note the lack of a trailing hyphen, because the wordlist is expecting a code of length two). If the wordlist is not yet known, this returns an empty set. All return values will `.startswith(prefix)`. The frontend is responsible for sorting the results before display.
* `h.choose_words(words)`: call this when the user is finished typing in the code. It does not return anything, but will cause the Wormhole's `w.get_code()` (or corresponding delegate) to fire, and triggers the wormhole connection process. This accepts a string like "purple-sausages", without the nameplate. It must be called after `h.choose_nameplate()` or `MustChooseNameplateFirstError` will be raised. May only be called once, after which `AlreadyChoseWordsError` is raised.

The `input_with_completion` wrapper is a function that knows how to use the code-entry helper to do tab completion of wormhole codes:

```python
from wormhole import create, input_with_completion

w = create(appid, relay_url, reactor)
input_with_completion("Wormhole code:", w.input_code(), reactor)
d = w.get_code()
```

This helper runs python's (raw) `input()` function inside a thread, since `input()` normally blocks.

The two machines participating in the wormhole setup are not distinguished: it doesn't matter which one goes first, and both use the same Wormhole constructor function. However if `w.allocate_code()` is used, only one side should use it.

Providing an invalid nameplate (which is easily caused by cut-and-paste errors that include an extra space at the beginning, or which copy the words but not the number) will raise a `KeyFormatError`, either in `w.set_code(code)` or in `h.choose_nameplate()`.
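To make that division of labor concrete, here is a minimal sketch of the two roles, assuming `appid`, `relay_url`, and the imports from the quick example above (error handling omitted):

```python
from twisted.internet.defer import inlineCallbacks

@inlineCallbacks
def sender(reactor):
    w = wormhole.create(appid, relay_url, reactor)
    w.allocate_code()                 # this side invents the code...
    code = yield w.get_code()
    print("tell your peer to type: %s" % code)
    w.send_message(b"hello")
    yield w.close()

@inlineCallbacks
def receiver(reactor, code):
    w = wormhole.create(appid, relay_url, reactor)
    w.set_code(code)                  # ...and this side types it back in
    msg = yield w.get_message()
    print("received %d bytes" % len(msg))
    yield w.close()
```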
## Offline Codes

In most situations, the "sending" or "initiating" side will call `w.allocate_code()` and display the resulting code. The sending human reads it and speaks, types, performs charades, or otherwise transmits the code to the receiving human. The receiving human then types it into the receiving computer, where it either calls `w.set_code()` (if the code is passed in via argv) or `w.input_code()` (for interactive entry).

Usually one machine generates the code, and a pair of humans transcribes it to the second machine (so `w.allocate_code()` on one side, and `w.set_code()` or `w.input_code()` on the other). But it is also possible for the humans to generate the code offline, perhaps at a face-to-face meeting, and then take the code back to their computers. In this case, `w.set_code()` will be used on both sides. It is unlikely that the humans will restrict themselves to a pre-established wordlist when manually generating codes, so the completion feature of `w.input_code()` is not helpful.

When the humans create an invitation code out-of-band, they are responsible for choosing an unused channel-ID (simply picking a random 3-or-more digit number is probably enough), and some random words. Dice, coin flips, shuffled cards, or repeated sampling of a high-resolution stopwatch are all useful techniques. The invitation code uses the same format either way: channel-ID, a hyphen, and an arbitrary string. There is no need to encode the sampled random values (e.g. by using the Diceware wordlist) unless that makes it easier to transcribe: e.g. rolling 6 dice could result in a code like "913-166532", and flipping 16 coins could result in "123-HTTHHHTTHTTHHTHH".
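In code, the offline case is simply `set_code()` on both machines, using whatever string the humans agreed upon (this sketch reuses the dice-generated code and the `appid`/`relay_url` setup from the earlier examples):

```python
# both sides run the same thing: no allocate_code(), no input_code()
w = wormhole.create(appid, relay_url, reactor)
w.set_code(u"913-166532")  # channel-ID, hyphen, arbitrary secret string
```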
## Welcome Messages

The first message sent by the rendezvous server is a "welcome" message (a dictionary). This is sent as soon as the client connects to the server, generally before the code is established. Clients should use `d=w.get_welcome()` to get and process the `motd` key (and maybe `current_cli_version`) inside the welcome message.

The welcome message serves three main purposes:

* notify users about important server changes, such as CAPTCHA requirements driven by overload, or donation requests
* enable future protocol negotiation between clients and the server
* advise users of the CLI tools (`wormhole send`) to upgrade to a new version

There are three keys currently defined for the welcome message, all of which are optional (the welcome message omits "error" and "motd" unless the server operator needs to signal a problem).

* `motd`: if this key is present, it will be a string with embedded newlines. The client should display this string to the user, including a note that it comes from the magic-wormhole Rendezvous Server and that server's URL.
* `error`: if present, the server has decided it cannot service this client. The string will be wrapped in a `WelcomeError` (which is a subclass of `WormholeError`), and all API calls will signal errors (pending Deferreds will errback). The rendezvous connection will be closed.
* `current_cli_version`: if present, the server is advising instances of the CLI tools (the `wormhole` command included in the python distribution) that there is a newer release available, thus users should upgrade if they can, because more features will be available if both clients are running the same version. The CLI tools compare this string against their `__version__` and can print a short message to stderr if an upgrade is warranted.

There is currently no facility in the server to actually send `motd`, but a static `error` string can be included by running the server with `--signal-error=MESSAGE`.

The main idea of `error` is to allow the server to cleanly inform the client about some necessary action it didn't take. The server currently sends the welcome message as soon as the client connects (even before it receives the "claim" request), but a future server could wait for a required client message and signal an error (via the Welcome message) if it didn't see this extra message before the CLAIM arrived.

This could enable changes to the protocol, e.g. requiring a CAPTCHA or proof-of-work token when the server is under DoS attack. The new server would send the current requirements in an initial message (which old clients would ignore). New clients would be required to send the token before their "claim" message. If the server sees "claim" before "token", it knows that the client is too old to know about this protocol, and it could send a "welcome" with an `error` field containing instructions (explaining to the user that the server is under attack, and they must either upgrade to a client that can speak the new protocol, or wait until the attack has passed). Either case is better than an opaque exception later when the required message fails to arrive.

(Note that the server can also send an explicit ERROR message at any time, and the client should react with a ServerError. Versions 0.9.2 and earlier of the library did not pay attention to the ERROR message, hence the server should deliver errors in a WELCOME message if at all possible.)

The `error` field is handled internally by the Wormhole object. The other fields can be processed by the application, by using `d=w.get_welcome()`. The Deferred will fire with the full welcome dictionary, so any other keys that a future server might send will be available to it.

Applications which need to display `motd` or an upgrade message, and wish to do so before using stdin/stdout for interactive code entry (`w.input_code()`) should wait for `get_welcome()` before starting the entry process:

```python
@inlineCallbacks
def go():
    w = wormhole.create(appid, relay_url, reactor)
    welcome = yield w.get_welcome()
    if "motd" in welcome:
        print(welcome["motd"])
    input_with_completion("Wormhole code:", w.input_code(), reactor)
    ...
```

## Verifier

For extra protection against guessing attacks, Wormhole can provide a "Verifier". This is a moderate-length series of bytes (a SHA256 hash) that is derived from the supposedly-shared session key. If desired, both sides can display this value, and the humans can manually compare them before allowing the rest of the protocol to proceed. If they do not match, then the two programs are not talking to each other (they may both be talking to a man-in-the-middle attacker), and the protocol should be abandoned.

Deferred-mode applications can wait for `d=w.get_verifier()`: the Deferred it returns will fire with the verifier. You can turn this into hex or Base64 to print it, or render it as ASCII-art, etc. Asking the wormhole object for the verifier does not affect the flow of the protocol.

To benefit from verification, applications must refrain from sending any data (with `w.send_message(data)`) until after the verifiers are approved by the user. In addition, applications must queue or otherwise ignore incoming (received) messages until that point. However once the verifiers are confirmed, previously-received messages can be considered valid and processed as usual.
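As a sketch of that discipline in Deferred mode (the confirmation prompt is a hypothetical application-supplied function, not part of this library):

```python
import binascii
from twisted.internet.defer import inlineCallbacks

@inlineCallbacks
def go_verified(w, ask_user_to_confirm):
    verifier = yield w.get_verifier()
    print("verifier: %s" % binascii.hexlify(verifier).decode("ascii"))
    ok = yield ask_user_to_confirm()   # humans compare hex strings out-of-band
    if not ok:
        yield w.close()                # mismatch: possible MitM, abandon
        return
    w.send_message(b"sent only after the humans approved the verifiers")
```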
## Events

As the wormhole connection is established, several events may be dispatched to the application. In Delegated mode, these are dispatched by calling functions on the delegate object. In Deferred mode, the application retrieves Deferred objects from the wormhole, and event dispatch is performed by firing those Deferreds. Most applications will only use `code`, `received`, and `close`.

* code (`code = yield w.get_code()` / `dg.wormhole_got_code(code)`): fired when the wormhole code is established, either after `w.allocate_code()` finishes the generation process, or when the Input Helper returned by `w.input_code()` has been told `h.choose_words()`, or immediately after `w.set_code(code)` is called. This is most useful after calling `w.allocate_code()`, to show the generated code to the user so they can transcribe it to their peer.
* key (`yield w.get_unverified_key()` / `dg.wormhole_got_unverified_key(key)`): fired (with the raw master SPAKE2 key) when the key-exchange process has completed and a purported shared key is established. At this point we do not know that anyone else actually shares this key: the peer may have used the wrong code, or may have disappeared altogether. To wait for proof that the key is shared, wait for `get_verifier` instead. This event is really only useful for detecting that the initiating peer has disconnected after leaving the initial PAKE message, to display a pacifying message to the user.
* verifier (`verifier = yield w.get_verifier()` / `dg.wormhole_got_verifier(verifier)`): fired when the key-exchange process has completed and a valid VERSION message has arrived. The "verifier" is a byte string with a hash of the shared session key; clients can compare them (probably as hex) to ensure that they're really talking to each other, and not to a man-in-the-middle. When `get_verifier` happens, this side knows that *someone* has used the correct wormhole code; if someone used the wrong code, the VERSION message cannot be decrypted, and the wormhole will be closed instead.
* versions (`versions = yield w.get_versions()` / `dg.wormhole_got_versions(versions)`): fired when the VERSION message arrives from the peer. This fires just after `verified`, but delivers the "app_versions" data (as passed into `wormhole.create(versions=)`) instead of the verifier string. This is mostly a hack to make room for forwards-compatible changes to the CLI file-transfer protocol, which sends a request in the first message (rather than merely sending the abilities of each side).
* received (`yield w.get_message()` / `dg.wormhole_got_message(msg)`): fired each time a data message arrives from the peer, with the bytestring that the peer passed into `w.send_message(msg)`. This is the primary data-transfer API.
* closed (`yield w.close()` / `dg.wormhole_closed(result)`): fired when `w.close()` has finished shutting down the wormhole, which means all nameplates and mailboxes have been deallocated, and the WebSocket connection has been closed. This also fires if an internal error occurs (specifically WrongPasswordError, which indicates that an invalid encrypted message was received), which also shuts everything down. The `result` value is an exception (or Failure) object if the wormhole closed badly, or a string like "happy" if it had no problems before shutdown.

## Sending Data

The main purpose of a Wormhole is to send data. At any point after construction, callers can invoke `w.send_message(msg)`. This will queue the message if necessary, but (if all goes well) will eventually result in the peer getting a `received` event and the data being delivered to the application, as in the sketch below.
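For example, a two-round negotiation in Deferred mode; the message contents here are invented, only the send/get pairing matters:

```python
from twisted.internet.defer import inlineCallbacks

@inlineCallbacks
def negotiate(w):
    # round 1: exchange capability offers
    w.send_message(b'{"abilities": ["v1", "v2"]}')
    their_offer = yield w.get_message()    # exactly one event per peer send
    # round 2: reveal the outcome of the negotiation
    w.send_message(b'{"use": "v2"}')
    their_choice = yield w.get_message()
```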
Since Wormhole provides an ordered record pipe, each call to `w.send_message` will result in exactly one `received` event on the far side. Records are not split, merged, dropped, or reordered.

Each side can do an arbitrary number of `send_message()` calls. The Wormhole is not meant as a long-term communication channel, but some protocols work better if they can exchange an initial pair of messages (perhaps offering some set of negotiable capabilities), and then follow up with a second pair (to reveal the results of the negotiation). The Rendezvous Server does not currently enforce any particular limits on number of messages, size of messages, or rate of transmission, but in general clients are expected to send fewer than a dozen messages, of no more than perhaps 20kB in size (remember that all these messages are temporarily stored in a SQLite database on the server). A future version of the protocol may make these limits more explicit, and will allow clients to ask for greater capacity when they connect (probably by passing additional "mailbox attribute" parameters with the `allocate`/`claim`/`open` messages).

For bulk data transfer, see "transit.md", or the "Dilation" section below.

## Closing

When the application is done with the wormhole, it should call `w.close()`, and wait for a `closed` event. This ensures that all server-side resources are released (allowing the nameplate to be re-used by some other client), and all network sockets are shut down.

In Deferred mode, this just means waiting for the Deferred returned by `w.close()` to fire. In Delegated mode, this means calling `w.close()` (which doesn't return anything) and waiting for the delegate's `wormhole_closed()` method to be called.

`w.close()` will errback (with some form of `WormholeError`) if anything went wrong with the process, such as:

* `WelcomeError`: the server told us to signal an error, probably because the client is too old to understand some new protocol feature
* `ServerError`: the server rejected something we did
* `LonelyError`: we didn't hear from the other side, so no key was established
* `WrongPasswordError`: we received at least one incorrectly-encrypted message. This probably indicates that the other side used a different wormhole code than we did, perhaps because of a typo, or maybe an attacker tried to guess your code and failed.

If the wormhole was happy at the time it was closed, the `w.close()` Deferred will callback (probably with the string "happy", but this may change in the future).

## Serialization

(NOTE: this section is speculative: this code has not yet been written)

Wormhole objects can be serialized. This can be useful for apps which save their own state before shutdown, and restore it when they next start up again.

The `w.serialize()` method returns a dictionary which can be JSON-encoded into a unicode string (most applications will probably want to UTF-8-encode this into a bytestring before saving on disk somewhere). To restore a Wormhole, call `wormhole.from_serialized(data, reactor, delegate)`. This will return a wormhole in roughly the same state as was serialized (of course all the network connections will be disconnected).

Serialization only works for delegated-mode wormholes (since Deferreds point at functions, which cannot be serialized easily). It also only works for "non-dilated" wormholes (see below).

To ensure correct behavior, serialization should probably only be done in "journaled mode". See journal.md for details.

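If that API materializes as described, usage might look like the following sketch (speculative, mirroring the prose above; the state-file name is arbitrary):

```python
import json

# before shutdown: persist the (delegated-mode) wormhole's state
with open("wormhole.state", "wb") as f:
    f.write(json.dumps(w.serialize()).encode("utf-8"))

# on next startup: restore it
with open("wormhole.state", "rb") as f:
    saved = json.loads(f.read().decode("utf-8"))
w = wormhole.from_serialized(saved, reactor, delegate)
```
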
If you use serialization, be careful to never use the same partial wormhole object twice.

## Dilation

(NOTE: this API is still in development)

To send bulk data, or anything more than a handful of messages, a Wormhole can be "dilated" into a form that uses a direct TCP connection between the two endpoints.

All wormholes start out "undilated". In this state, all messages are queued on the Rendezvous Server for the lifetime of the wormhole, and server-imposed number/size/rate limits apply. Calling `w.dilate()` initiates the dilation process, and eventually yields a set of Endpoints. Once dilated, the usual `.send_message()`/`.get_message()` APIs are disabled (TODO: really?), and these endpoints can be used to establish multiple (encrypted) "subchannel" connections to the other side.

Each subchannel behaves like a regular Twisted `ITransport`, so they can be glued to the Protocol instance of your choice. They also implement the IConsumer/IProducer interfaces.

These subchannels are *durable*: as long as the processes on both sides keep running, the subchannel will survive the network connection being dropped. For example, a file transfer can be started from a laptop, then while it is running, the laptop can be closed, moved to a new wifi network, opened back up, and the transfer will resume from the new IP address.

What's good about a non-dilated wormhole?

* setup is faster: no delay while it tries to make a direct connection
* works with "journaled mode", allowing progress to be made even when both sides are never online at the same time, by serializing the wormhole

What's good about dilated wormholes?

* they support bulk data transfer
* you get flow control (backpressure), and IProducer/IConsumer
* throughput is faster: no store-and-forward step

Use non-dilated wormholes when your application only needs to exchange a couple of messages, for example to set up public keys or provision access tokens. Use a dilated wormhole to move files.

Dilated wormholes can provide multiple "subchannels": these are multiplexed through the single (encrypted) TCP connection. Each subchannel is a separate stream (offering IProducer/IConsumer for flow control), and is opened and closed independently.

A special "control channel" is available to both sides so they can coordinate how they use the subchannels.

The `d = w.dilate()` Deferred fires with a triple of Endpoints:

```python
d = w.dilate()
def _dilated(res):
    (control_channel_ep, subchannel_client_ep, subchannel_server_ep) = res
d.addCallback(_dilated)
```

The `control_channel_ep` endpoint is a client-style endpoint, so both sides will connect to it with `ep.connect(factory)`. This endpoint is single-use: calling `.connect()` a second time will fail. The control channel is symmetric: it doesn't matter which side is the application-level client/server or initiator/responder, if the application even has such concepts. The two applications can use the control channel to negotiate who goes first, if necessary.

The subchannel endpoints are *not* symmetric: for each subchannel, one side must listen as a server, and the other must connect as a client. Subchannels can be established by either side at any time. This supports e.g. bidirectional file transfer, where either user of a GUI app can drop files into the "wormhole" whenever they like.

The `subchannel_client_ep` on one side is used to connect to the other side's `subchannel_server_ep`, and vice versa. The client endpoint is reusable.
The server endpoint is single-use: `.listen(factory)` may only be called once.

Applications are under no obligation to use subchannels: for many use cases, the control channel is enough.

To use subchannels, once the wormhole is dilated and the endpoints are available, the listening-side application should attach a listener to the `subchannel_server_ep` endpoint:

```python
from twisted.internet.protocol import Factory

def _dilated(res):
    (control_channel_ep, subchannel_client_ep, subchannel_server_ep) = res
    f = Factory.forProtocol(MyListeningProtocol)
    subchannel_server_ep.listen(f)
```

When the connecting-side application wants to connect to that listening protocol, it should use `.connect()` with a suitable connecting protocol factory:

```python
def _connect():
    f = Factory.forProtocol(MyConnectingProtocol)
    subchannel_client_ep.connect(f)
```

For a bidirectional file-transfer application, both sides will establish a listening protocol. Later, if/when the user drops a file on the application window, that side will initiate a connection, use the resulting subchannel to transfer the single file, and then close the subchannel.

```python
from twisted.internet.protocol import Factory, Protocol
from twisted.protocols import basic

INITIAL, DATA = "INITIAL", "DATA"  # illustrative state markers

class FileSendingProtocol(Protocol):
    def __init__(self, metadata, filename):
        self.file_metadata = metadata
        self.file_name = filename
    def connectionMade(self):
        self.transport.write(self.file_metadata)
        sender = basic.FileSender()
        f = open(self.file_name, "rb")
        d = sender.beginFileTransfer(f, self.transport)
        d.addBoth(self._done, f)
    def _done(self, res, f):
        self.transport.loseConnection()
        f.close()

class FileSenderFactory(Factory):
    def __init__(self, metadata, filename):
        self.metadata = metadata
        self.filename = filename
    def buildProtocol(self, addr):
        return FileSendingProtocol(self.metadata, self.filename)

def _send(metadata, filename):
    subchannel_client_ep.connect(FileSenderFactory(metadata, filename))

class FileReceivingProtocol(Protocol):
    state = INITIAL
    def dataReceived(self, data):
        if self.state == INITIAL:
            self.state = DATA
            metadata = parse(data)  # parse() is application-specific
            self.f = open(metadata.filename, "wb")
        else:
            # local file writes are blocking, so don't bother with IConsumer
            self.f.write(data)
    def connectionLost(self, reason):
        self.f.close()

def _dilated(res):
    (control_channel_ep, subchannel_client_ep, subchannel_server_ep) = res
    f = Factory.forProtocol(FileReceivingProtocol)
    subchannel_server_ep.listen(f)
```

## Bytes, Strings, Unicode, and Python 3

All cryptographically-sensitive parameters are passed as bytes ("str" in python2, "bytes" in python3):

* verifier string
* data in/out
* transit records in/out

Other (human-facing) values are always unicode ("unicode" in python2, "str" in python3):

* wormhole code
* relay URL
* transit URLs
* transit connection hints (e.g. "host:port")
* application identifier
* derived-key "purpose" string: `w.derive_key(PURPOSE, LENGTH)`

## Full API list

action                            | Deferred-Mode             | Delegated-Mode
--------------------------------- | ------------------------- | --------------
.                                 | d=w.get_welcome()         | dg.wormhole_got_welcome(welcome)
w.allocate_code()                 |                           |
h=w.input_code()                  |                           |
w.set_code(code)                  |                           |
.                                 | d=w.get_code()            | dg.wormhole_got_code(code)
.                                 | d=w.get_unverified_key()  | dg.wormhole_got_unverified_key(key)
.                                 | d=w.get_verifier()        | dg.wormhole_got_verifier(verifier)
.                                 | d=w.get_versions()        | dg.wormhole_got_versions(versions)
key=w.derive_key(purpose, length) |                           |
w.send_message(msg)               |                           |
.                                 | d=w.get_message()         | dg.wormhole_got_message(msg)
w.close()                         |                           | dg.wormhole_closed(result)
.                                 | d=w.close()               |

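As a worked example of the Deferred-mode column, a minimal sender-side session might look like this (a sketch with error handling omitted; `appid`, `relay_url`, and the payload are application-specific):

```python
from twisted.internet.defer import inlineCallbacks
import wormhole

@inlineCallbacks
def send_one_message(reactor, appid, relay_url, payload):
    w = wormhole.create(appid, relay_url, reactor)
    w.allocate_code()
    code = yield w.get_code()
    print("tell your peer to use code: %s" % code)
    w.send_message(payload)        # payload is bytes
    reply = yield w.get_message()  # wait for the peer's response
    result = yield w.close()       # "happy" if all went well
    print("closed: %s, reply: %r" % (result, reply))
```
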
magic-wormhole-0.12.0/docs/attacks.md000066400000000000000000000111521400712516500174110ustar00rootroot00000000000000
# Known Vulnerabilities

## Low-probability Man-In-The-Middle Attacks

By default, wormhole codes contain 16 bits of entropy. If an attacker can intercept your network connection (either by owning your network, or owning the rendezvous server), they can attempt an attack. They will have a one-in-65536 chance of successfully guessing your code, allowing them to pose as your intended partner. If they succeed, they can turn around and immediately start a new wormhole (using the same code), allowing your partner to connect to them instead of you. By passing, observing, and possibly modifying messages between these two connections, they could perform an MitM (Man In The Middle) attack.

If the server refused to re-use the same channel id (aka "nameplate") right away (issue #31), a network attacker would be unable to set up the second connection, cutting this attack in half. An attacker who controls the server would not be affected.

Basic probability tells us that peers will see a large number of WrongPasswordErrors before the attacker has a useful chance of successfully guessing any wormhole code. You should expect to see about 32000 failures before they have a 50% chance of being successful. If you see many failures, and think someone is trying to guess your codes, you can use e.g. `wormhole send --code-length=4` to make a longer code (reducing their chances significantly).

Of course, an attacker who learns your secret wormhole code directly (because you delivered it over an insecure channel) can perform this attack with 100% reliability.

## DoS Attack on the Rendezvous Server

Wormhole codes can be so short because they implicitly contain a common rendezvous server URL (any two applications that use magic-wormhole should be configured to use the same server). As a result, successful operation depends upon both clients being able to contact that server, making it a SPOF (single point of failure). In particular, grumpy people could disrupt service for everyone by writing a program that just keeps connecting to the rendezvous server, pretending to be real clients, and claiming messages meant for legitimate users.

I do not have any good mitigations for this attack, and functionality may depend upon the continued goodwill of potential vandals. The weak ones that I've considered (but haven't implemented yet) include:

* hashcash challenges when the server is under attack
* per-IP rate-limiting (although I'd want to be careful about protecting the privacy of the IP addresses, so it'd need a rotating hash seed)
* require users to go through some external service (maybe ReCAPTCHA?) and get a rate-limiting ticket before claiming a channel
* shipping an attack tool (flooding the first million channels), as part of the distribution, in a subcommand named `wormhole break-this-useful-service-for-everybody-because-i-am-a-horrible-person`, in the hopes that pointing out how easy it is might dissuade a few would-be vandals from feeling a sense of accomplishment at writing their own :). Not sure it would help much, but I vaguely remember hearing about something similar in the early multi-user unix systems (a publically-executable /bin/crash or something, which new users tended to only run once before learning some responsibility).

Using the secret words as part of the "channel id" isn't safe, since it would allow a network attacker, or the rendezvous server, to deduce what the secret words are: since they only have 16 bits of entropy, the attacker just makes a table of hash(words) -> channel-id, then reverses it, as the sketch below illustrates.

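To make that concrete, here is a sketch of the attacker's precomputation: with only 16 bits of entropy, the entire table fits trivially in memory. (The code secrets are modeled as integers here, and the hash choice is an illustrative assumption, not the real protocol.)

```python
from hashlib import sha256

# enumerate all 2**16 possible secrets and build hash -> secret
table = {}
for guess in range(2 ** 16):
    secret = b"1-word%d-word%d" % (guess >> 8, guess & 0xFF)  # stand-in code
    table[sha256(secret).hexdigest()] = secret
# if channel-ids were hash(words), one table lookup reveals the words
```
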
To make that safer we'd need to increase the codes to maybe 80 bits (ten words), plus do some significant key-stretching (like 5-10 seconds of scrypt or argon2), which would increase latency and CPU demands, and still be less secure overall. The core problem is that, because things are so easy for the legitimate participants, they're really easy for the attacker too. Short wormhole codes are the easiest to use, but they make for a trivially predictable channel-id target.

I don't have a good answer for this one. I'm hoping that the service isn't sufficiently interesting to attack for this to become an issue, but I can't think of any simple answers. If the API is sufficiently compelling for other applications to incorporate Wormhole "technology" into their apps, I'm expecting that they'll run their own rendezvous server, and of course those apps can incorporate whatever sort of DoS protection seems appropriate. For the built-in/upstream send-text/file/directory tools, using the public relay that I run, it may just have to be a best-effort service, and if someone decides to kill it, it fails.

See #107 for more discussion.

magic-wormhole-0.12.0/docs/client-protocol.md000066400000000000000000000067761400712516500211140ustar00rootroot00000000000000
# Client-to-Client Protocol

Wormhole clients do not talk directly to each other (at least at first): they only connect directly to the Rendezvous Server. They ask this server to convey messages to the other client (via the `add` command and the `message` response). This document explains the format of these client-to-client messages.

Each such message contains a "phase" string, and a hex-encoded binary "body".

Any phase which is purely numeric (`^\d+$`) is reserved for encrypted application data. The Rendezvous server may deliver these messages multiple times, or out-of-order, but the wormhole client will deliver the corresponding decrypted data to the application in strict numeric order. All other (non-numeric) phases are reserved for the Wormhole client itself. Clients will ignore any phase they do not recognize.

Immediately upon opening the mailbox, clients send the `pake` phase, which contains the binary SPAKE2 message (the one computed as `X+M*pw` or `Y+N*pw`).

Upon receiving their peer's `pake` phase, clients compute and remember the shared key. They derive the "verifier" (a hash of the shared key) and deliver it to the application by calling `got_verifier`: applications can display this to users who want additional assurance (by manually comparing the values from both sides: they ought to be identical). At this point clients also send the encrypted `version` phase, whose plaintext payload is a UTF-8-encoded JSON-encoded dictionary of metadata. This allows the two Wormhole instances to signal their ability to do other things (like "dilate" the wormhole). The version data will also include an `app_versions` key which contains a dictionary of metadata provided by the application, allowing apps to perform similar negotiation.

At this stage, the client knows the supposed shared key, but has not yet seen evidence that the peer knows it too. When the first peer message arrives (i.e. the first message with a `.side` that does not equal our own), it will be decrypted: we use authenticated encryption (`nacl.SecretBox`), so if this decryption succeeds, then we're confident that *somebody* used the same wormhole code as us. This event pushes the client mood from "lonely" to "happy".
This might be triggered by the peer's `version` message, but if we had to re-establish the Rendezvous Server connection, we might get peer messages out of order and see some application-level message first.

When a `version` message is successfully decrypted, the application is signaled with `got_version`. When any application message is successfully decrypted, `received` is signaled. Application messages are delivered strictly in-order: if we see phases 3 then 2 then 1, all three will be delivered in sequence after phase 1 is received.

If any message cannot be successfully decrypted, the mood is set to "scary", and the wormhole is closed. All pending Deferreds will be errbacked with a `WrongPasswordError` (a subclass of `WormholeError`), the nameplate/mailbox will be released, and the WebSocket connection will be dropped. If the application calls `close()`, the resulting Deferred will not fire until deallocation has finished and the WebSocket is closed, and then it will fire with an errback.

Both `version` and all numeric (app-specific) phases are encrypted. The message body will be the hex-encoded output of a NaCl `SecretBox`, keyed by a phase+side -specific key (computed with HKDF-SHA256, using the shared PAKE key as the secret input, and `wormhole:phase:%s%s % (SHA256(side), SHA256(phase))` as the CTXinfo), with a random nonce.

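As an illustration of that derivation, a sketch using the `cryptography` package's HKDF follows; the real client uses its own HKDF wrapper, so treat the exact helper names here as assumptions.

```python
from hashlib import sha256
from cryptography.hazmat.primitives.hashes import SHA256
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def derive_phase_key(shared_pake_key, side, phase):
    # CTXinfo = "wormhole:phase:" + SHA256(side) + SHA256(phase)
    purpose = (b"wormhole:phase:"
               + sha256(side.encode("ascii")).digest()
               + sha256(phase.encode("ascii")).digest())
    return HKDF(algorithm=SHA256(), length=32, salt=None,
                info=purpose).derive(shared_pake_key)
```
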
magic-wormhole-0.12.0/docs/conf.py000066400000000000000000000130451400712516500167370ustar00rootroot00000000000000
# -*- coding: utf-8 -*-
#
# Magic-Wormhole documentation build configuration file, created by
# sphinx-quickstart on Sun Nov 12 10:24:09 2017.
#
# This file is execfile()d with the current directory set to its
# containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#
# import os
# import sys
# sys.path.insert(0, os.path.abspath('.'))

from recommonmark.parser import CommonMarkParser

source_parsers = {
    ".md": CommonMarkParser,
}

# -- General configuration ------------------------------------------------

# If your documentation needs a minimal Sphinx version, state it here.
#
# needs_sphinx = '1.0'

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = []

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# The suffix(es) of source filenames.
# You can specify multiple suffix as a list of string:
#
source_suffix = ['.rst', '.md']
#source_suffix = '.md'

# The master toctree document.
master_doc = 'index'

# General information about the project.
project = u'Magic-Wormhole'
copyright = u'2017, Brian Warner'
author = u'Brian Warner'

# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
def _get_versions():
    import os.path, sys, subprocess
    here = os.path.dirname(os.path.abspath(__file__))
    parent = os.path.dirname(here)
    v = subprocess.check_output([sys.executable, "setup.py", "--version"],
                                cwd=parent)
    if sys.version_info[0] >= 3:
        v = v.decode()
    short = ".".join(v.split(".")[:2])
    long = v
    return short, long
version, release = _get_versions()

# The short X.Y version.
#version = u'0.10'
# The full version, including alpha/beta/rc tags.
#release = u'0.10.3'

# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#
# This is also used if you do content translation via gettext catalogs.
# Usually you set "language" from the command line for these cases.
language = None

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This patterns also effect to html_static_path and html_extra_path
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'

# If true, `todo` and `todoList` produce output, else they produce nothing.
todo_include_todos = False

# -- Options for HTML output ----------------------------------------------

# The theme to use for HTML and HTML Help pages.  See the documentation for
# a list of builtin themes.
#
html_theme = 'alabaster'

# Theme options are theme-specific and customize the look and feel of a theme
# further.  For a list of options available for each theme, see the
# documentation.
#
# html_theme_options = {}

# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']

# Custom sidebar templates, must be a dictionary that maps document names
# to template names.
#
# This is required for the alabaster theme
# refs: http://alabaster.readthedocs.io/en/latest/installation.html#sidebars
html_sidebars = {
    '**': [
        'relations.html',  # needs 'show_related': True theme option to display
        'searchbox.html',
    ]
}

# -- Options for HTMLHelp output ------------------------------------------

# Output file base name for HTML help builder.
htmlhelp_basename = 'Magic-Wormholedoc'

# -- Options for LaTeX output ---------------------------------------------

latex_elements = {
    # The paper size ('letterpaper' or 'a4paper').
    #
    # 'papersize': 'letterpaper',

    # The font size ('10pt', '11pt' or '12pt').
    #
    # 'pointsize': '10pt',

    # Additional stuff for the LaTeX preamble.
    #
    # 'preamble': '',

    # Latex figure (float) alignment
    #
    # 'figure_align': 'htbp',
}

# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title,
#  author, documentclass [howto, manual, or own class]).
latex_documents = [
    (master_doc, 'Magic-Wormhole.tex', u'Magic-Wormhole Documentation',
     u'Brian Warner', 'manual'),
]

# -- Options for manual page output ---------------------------------------

# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    (master_doc, 'magic-wormhole', u'Magic-Wormhole Documentation',
     [author], 1)
]

# -- Options for Texinfo output -------------------------------------------

# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
#  dir menu entry, description, category)
texinfo_documents = [
    (master_doc, 'Magic-Wormhole', u'Magic-Wormhole Documentation',
     author, 'Magic-Wormhole', 'One line description of project.',
     'Miscellaneous'),
]

magic-wormhole-0.12.0/docs/dilation-protocol.md000066400000000000000000000714311400712516500214270ustar00rootroot00000000000000
# Dilation Internals

Wormhole dilation involves several moving parts. Both sides exchange messages through the Mailbox server to coordinate the establishment of a more direct connection. This connection might flow in either direction, so they trade "connection hints" to point at potential listening ports. This process might succeed in making multiple connections at about the same time, so one side must select the best one to use, and cleanly shut down the others. To make the dilated connection *durable*, this side must also decide when the connection has been lost, and then coordinate the construction of a replacement.

Within this connection, a series of queued-and-acked subchannel messages are used to open/use/close the application-visible subchannels.

## Versions and can-dilate

The Wormhole protocol includes a `versions` message sent immediately after the shared PAKE key is established. This also serves as a key-confirmation message, allowing each side to confirm that the other side knows the right key. The body of the `versions` message is a JSON-formatted string with keys that are available for learning the abilities of the peer. Dilation is signaled by a key named `can-dilate`, whose value is a list of strings. Any version present in both sides' lists is eligible for use.

## Leaders and Followers

Each side of a Wormhole has a randomly-generated dilation `side` string (this is included in the `please-dilate` message, and is independent of the Wormhole's mailbox "side"). When the wormhole is dilated, the side with the lexicographically-higher "side" value is named the "Leader", and the other side is named the "Follower". The general wormhole protocol treats both sides identically, but the distinction matters for the dilation protocol.

Both sides send a `please-dilate` as soon as dilation is triggered. Each side discovers whether it is the Leader or the Follower when the peer's "please-dilate" arrives. The Leader has exclusive control over whether a given connection is considered established or not: if there are multiple potential connections to use, the Leader decides which one to use, and the Leader gets to decide when the connection is no longer viable (and triggers the establishment of a new one).

The `please-dilate` includes a `use-version` key, computed as the "best" version of the intersection of the two sides' abilities as reported in the `versions` message. Both sides will use whichever `use-version` was specified by the Leader (they learn which side is the Leader at the same moment they learn the peer's `use-version` value). If the Follower cannot handle the `use-version` value, dilation fails (this shouldn't happen, as the Leader knew what the Follower was and was not capable of before sending that message).

## Connection Layers

We describe the protocol as a series of layers. Messages sent on one layer may be encoded or transformed before being delivered on some other layer.

L1 is the mailbox channel (queued store-and-forward messages that always go to the mailbox server, and then are forwarded to other clients subscribed to the same mailbox).
Both clients remain connected to the mailbox server until the Wormhole is closed. They send DILATE-n messages to each other to manage the dilation process, including records like `please`, `connection-hints`, `reconnect`, and `reconnecting`.

L2 is the set of competing connection attempts for a given generation of connection. Each time the Leader decides to establish a new connection, a new generation number is used. Hopefully these are direct TCP connections between the two peers, but they may also include connections through the transit relay. Each connection must go through an encrypted handshake process before it is considered viable. Viable connections are then submitted to a selection process (on the Leader side), which chooses exactly one to use, and drops the others. It may wait an extra few seconds in the hopes of getting a "better" connection (faster, cheaper, etc), but eventually it will select one.

L3 is the current selected connection. There is one L3 for each generation. At all times, the wormhole will have exactly zero or one L3 connection. L3 is responsible for the selection process, connection monitoring/keepalives, and serialization/deserialization of the plaintext frames. L3 delivers decoded frames and connection-establishment events up to L4.

L4 is the persistent higher-level channel. It is created as soon as the first L3 connection is selected, and lasts until the wormhole is closed entirely. L4 contains OPEN/DATA/CLOSE/ACK messages: OPEN/DATA/CLOSE have a sequence number (scoped to the L4 connection and the direction of travel), and the ACK messages reference those sequence numbers. When a message is given to the L4 channel for delivery to the remote side, it is always queued, then transmitted if there is an L3 connection available. This message remains in the queue until an ACK is received to retire it. If a new L3 connection is made, all queued messages will be re-sent (in seqnum order).

L5 are subchannels. There is one pre-established subchannel 0 known as the "control channel", which does not require an OPEN message. All other subchannels are created by the receipt of an OPEN message with the subchannel number. DATA frames are delivered to a specific subchannel. When the subchannel is no longer needed, one side will invoke the ``close()`` API (``loseConnection()`` in Twisted), which will cause a CLOSE message to be sent, and the local L5 object will be put into the "closing" state. When the other side receives the CLOSE, it will send its own CLOSE for the same subchannel, and fully close its local object (``connectionLost()``). When the first side receives CLOSE in the "closing" state, it will fully close its local object too.

All L5 subchannels will be paused (``pauseProducing()``) when the L3 connection is paused or lost. They are resumed when the L3 connection is resumed or reestablished.

## Initiating Dilation

Dilation is triggered by calling the `w.dilate()` API. This returns a Deferred that will fire once the first L3 connection is established. It fires with a 3-tuple of endpoints that can be used to establish subchannels, or an error if dilation is not possible. If the other side's `versions` message indicates that it does not support dilation, the Deferred will errback with an `OldPeerCannotDilateError`.

For dilation to succeed, both sides must call `w.dilate()`, since the resulting endpoints are the only way to access the subchannels. If the other side is capable of dilation, but never calls `w.dilate()`, the Deferred will never fire.

The L1 (mailbox) path is used to deliver dilation requests and connection hints. The current mailbox protocol uses named "phases" to distinguish messages (rather than behaving like a regular ordered channel of arbitrary frames or bytes), and all-number phase names are reserved for application data (sent via `w.send_message()`). Therefore the dilation control messages use phases named `DILATE-0`, `DILATE-1`, etc. Each side maintains its own counter, so one side might be up to e.g. `DILATE-5` while the other has only gotten as far as `DILATE-2`. This effectively creates a pair of unidirectional streams of `DILATE-n` messages, each containing one or more dilation records, of various types described below. Note that all phases beyond the initial VERSION and PAKE phases are encrypted by the shared session key.

A future mailbox protocol might provide a simple ordered stream of typed messages, with application records and dilation records mixed together.

Each `DILATE-n` message is a JSON-encoded dictionary with a `type` field that has a string value. The dictionary will have other keys that depend upon the type.

`w.dilate()` triggers transmission of a `please` (i.e. "please dilate") record with a set of versions that can be accepted. Versions use strings, rather than integers, to support experimental protocols, however there is still a total ordering of preferability.

```
{ "type": "please",
  "side": "abcdef",
  "accepted-versions": ["1"]
}
```

If one side receives a `please` before `w.dilate()` has been called locally, the contents are stored in case `w.dilate()` is called in the future. Once both things have happened (`w.dilate()` has been called locally, and the peer's `please` has been received), the side determines whether it is the Leader or the Follower. Both sides also compare `accepted-versions` fields to choose the best mutually-compatible version to use: they should always pick the same one.

Then both sides begin the connection process for generation 1 by opening listening sockets and sending `connection-hint` records for each one. After a slight delay they will also open connections to the Transit Relay of their choice and produce hints for it too. The receipt of inbound hints (on both sides) will trigger outbound connection attempts.

Some number of these connections may succeed, and the Leader decides which to use (via an in-band signal on the established connection). The others are dropped.

If something goes wrong with the established connection and the Leader decides a new one is necessary, the Leader will send a `reconnect` message. This might happen while connections are still being established, or while the Follower thinks it still has a viable connection (the Leader might observe problems that the Follower does not), or after the Follower thinks the connection has been lost. In all cases, the Leader is the only side which should send `reconnect`. The state machine code looks the same on both sides, for simplicity, but one path on each side is never used.

Upon receiving a `reconnect`, the Follower should stop any pending connection attempts and terminate any existing connections (even if they appear viable). Listening sockets may be retained, but any previous connection made through them must be dropped.

Once all connections have stopped, the Follower should send a `reconnecting` message, then start the connection process for the next generation, which will send new `connection-hint` messages for all listening sockets.

Generations are non-overlapping.
The Leader will drop all connections from generation 1 before sending the `reconnect` for generation 2, and will not initiate any gen-2 connections until it receives the matching `reconnecting` from the Follower. The Follower must drop all gen-1 connections before it sends the `reconnecting` response (even if it thinks they are still functioning: if the Leader thought the gen-1 connection still worked, it wouldn't have started gen-2).

(TODO: what about a follower->leader connection that was started before start-dilation is received, and gets established on the Leader side after start-dilation is sent? the follower will drop it after it receives start-dilation, but meanwhile the leader may accept it as gen2) (probably need to include the generation number in the handshake, or in the derived key) (TODO: reduce the number of round-trip stalls here, I've added too many)

Each side is in the "connecting" state (which encompasses both making connection attempts and having an established connection) starting with the receipt of a `please-dilate` message and a local `w.dilate()` call. The Leader remains in that state until it abandons the connection and sends a `reconnect` message, at which point it remains in the "flushing" state until the Follower's `reconnecting` message is received. The Follower remains in "connecting" until it receives `reconnect`, then it stays in "dropping" until it finishes halting all outstanding connections, after which it sends `reconnecting` and switches back to "connecting".

"Connection hints" are type/address/port records that tell the other side about likely targets for L2 connections. Both sides will try to determine their external IP addresses, listen on a TCP port, and advertise `(tcp, external-IP, port)` as a connection hint. The Transit Relay is also used as a (lower-priority) hint. These are sent in `connection-hint` records, which can be sent any time after both sending and receiving a `please` record. Each side will initiate connections upon receipt of the hints.

```
{ "type": "connection-hints",
  "hints": [ ... ]
}
```

Hints can arrive at any time. One side might immediately send hints that can be computed quickly, then send additional hints later as they become available. For example, it might enumerate the local network interfaces and send hints for all of the LAN addresses first, then send port-forwarding (UPnP) requests to the local router. When the forwarding is established (providing an externally-visible IP address and port), it can send additional hints for that new endpoint. If the other peer happens to be on the same LAN, the local connection can be established without waiting for the router's response.

### Connection Hint Format

Each member of the `hints` field describes a potential L2 connection target endpoint, with an associated priority and a set of hints. The priority is a number (positive or negative float), where larger numbers indicate that the client supplying that hint would prefer to use this connection over others of lower number. This indicates a sense of cost or performance.

For example, the Transit Relay is lower priority than a direct TCP connection, because it incurs a bandwidth cost (on the relay operator), as well as adding latency.

Each endpoint has a set of hints, because the same target might be reachable by multiple hints. Once one hint succeeds, there is no point in using the other hints.

TODO: think this through some more. What's the example of a single endpoint reachable by multiple hints? Should each hint have its own priority, or just each endpoint?

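For concreteness, a complete `connection-hints` record matching the prose above might look like this; the addresses, ports, and priorities are invented, and the exact per-hint keys are an assumption, since the format is still being designed:

```
{ "type": "connection-hints",
  "hints": [ { "priority": 1.0,
               "hints": [ {"type": "direct-tcp-v1",
                           "hostname": "192.168.1.5", "port": 40001} ] },
             { "priority": -1.0,
               "hints": [ {"type": "relay-v1",
                           "hostname": "relay.example.org", "port": 4001} ] } ]
}
```
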
## L2 protocol

Upon ``connectionMade()``, both sides send their handshake message. The Leader sends "Magic-Wormhole Dilation Handshake v1 Leader\n\n". The Follower sends "Magic-Wormhole Dilation Handshake v1 Follower\n\n". This should trigger an immediate error for most non-magic-wormhole listeners (e.g. HTTP servers that were contacted by accident). If the wrong handshake is received, the connection will be dropped. For debugging purposes, the node might want to keep looking at data beyond the first incorrect character and log a few hundred characters until the first newline.

Everything beyond that point is a Noise protocol message, which consists of a 4-byte big-endian length field, followed by the indicated number of bytes. This uses the `NNpsk0` pattern with the Leader as the first party ("-> psk, e" in the Noise spec), and the Follower as the second ("<- e, ee"). The pre-shared-key is the "dilation key", which is statically derived from the master PAKE key using HKDF. Each L2 connection uses the same dilation key, but different ephemeral keys, so each gets a different session key.

The Leader sends the first message, which is a psk-encrypted ephemeral key. The Follower sends the next message, its own psk-encrypted ephemeral key. These two messages are known as "handshake messages" in the Noise protocol, and must be processed in a specific order (the Leader must not accept the Follower's message until it has generated its own). Noise allows handshake messages to include a payload, but we do not use this feature.

All subsequent messages are known as "Noise transport messages", and use independent channels for each direction, so they no longer have ordering dependencies. Transport messages are encrypted by the shared key, in a form that evolves as more messages are sent.

The Follower's first transport message is an empty packet, which we use as a "key confirmation message" (KCM).

The Leader doesn't send a transport message right away: it waits to see the Follower's KCM, which indicates this connection is viable (i.e. the Follower used the same dilation key as the Leader, which means they both used the same wormhole code). The Leader delivers the now-viable protocol object to the L3 manager, which will decide which connection to select. When some L2 connection is selected to be the new L3, the Leader finally sends an empty KCM of its own over that L2, to let the Follower know which connection has been selected. All other L2 connections (either viable or still in handshake) are dropped, and all other connection attempts are cancelled. All listening sockets may or may not be shut down (TODO: think about it).

After sending their KCM, the Follower will wait for either an empty KCM (at which point the L2 connection is delivered to the Dilation manager as the new L3), a disconnection, or an invalid message (which causes the connection to be dropped). Other connections and/or listening sockets are stopped.

Internally, the L2Protocol object manages the Noise session itself. It knows (via a constructor argument) whether it is on the Leader or Follower side, which affects both the role it plays in the Noise pattern, and the reaction to receiving the handshake message / ephemeral key (for which only the Follower sends an empty KCM message).
After that, the L2Protocol notifies the L3 object in three situations:

* the Noise session produces a valid decrypted frame (for Leader, this includes the Follower's KCM, and thus indicates a viable candidate for connection selection)
* the Noise session reports a failed decryption
* the TCP session is lost

All notifications include a reference to the L2Protocol object (`self`). The L3 object uses this reference to either close the connection (for errors or when the selection process chooses someone else), to send the KCM message (after selection, only for the Leader), or to send other L4 messages. The L3 object will retain a reference to the winning L2 object.

## L3 protocol

The L3 layer is responsible for connection selection, monitoring/keepalives, and message (de)serialization. Framing is handled by L2, so the inbound L3 codepath receives single-message bytestrings, and delivers the same down to L2 for encryption, framing, and transmission.

Connection selection takes place exclusively on the Leader side, and includes the following:

* receipt of viable L2 connections from below (indicated by the first valid decrypted frame received for any given connection)
* expiration of a timer
* comparison of TBD quality/desirability/cost metrics of viable connections
* selection of winner
* instructions to losing connections to disconnect
* delivery of KCM message through winning connection
* retain reference to winning connection

On the Follower side, the L3 manager just waits for the first connection to receive the Leader's KCM, at which point it is retained and all others are dropped.

The L3 manager knows which "generation" of connection is being established. Each generation uses a different dilation key (?), and is triggered by a new set of L1 messages. Connections from one generation should not be confused with those of a different generation.

Each time a new L3 connection is established, the L4 protocol is notified. It will immediately send all the L4 messages waiting in its outbound queue. The L3 protocol simply wraps these in Noise frames and sends them to the other side.

The L3 manager monitors the viability of the current connection, and declares it as lost when bidirectional traffic cannot be maintained. It uses PING and PONG messages to detect this. These also serve to keep NAT entries alive, since many firewalls will stop forwarding packets if they don't observe any traffic for e.g. 5 minutes.

Our goals are:

* don't allow more than 30? seconds to pass without at least *some* data being sent along each side of the connection
* allow the Leader to detect silent connection loss within 60? seconds
* minimize overhead

We need both sides to:

* maintain a 30-second repeating timer
* set a flag each time we write to the connection
* each time the timer fires, if the flag was clear then send a PONG, otherwise clear the flag

In addition, the Leader must:

* run a 60-second repeating timer (ideally somewhat offset from the other)
* set a flag each time we receive data from the connection
* each time the timer fires, if the flag was clear then drop the connection, otherwise clear the flag

In the future, we might have L2 links that are less connection-oriented, which might have a unidirectional failure mode, at which point we'll need to monitor full roundtrips. To accomplish this, the Leader will send periodic unconditional PINGs, and the Follower will respond with PONGs. If the Leader->Follower connection is down, the PINGs won't arrive and no PONGs will be produced.
If the Follower->Leader direction has failed, the PONGs won't arrive. The delivery of both will be delayed by actual data, so the timeouts should be adjusted if we see regular data arriving.

If the connection is dropped before the wormhole is closed (either the other end explicitly dropped it, we noticed a problem and told TCP to drop it, or TCP noticed a problem itself), the Leader-side L3 manager will initiate a reconnection attempt. This uses L1 to send a new DILATE message through the mailbox server, along with new connection hints. Eventually this will result in a new L3 connection being established.

Finally, L3 is responsible for message serialization and deserialization. L2 performs decryption and delivers plaintext frames to L3. Each frame starts with a one-byte type indicator. The rest of the message depends upon the type:

* 0x00 PING, 4-byte ping-id
* 0x01 PONG, 4-byte ping-id
* 0x02 OPEN, 4-byte subchannel-id, 4-byte seqnum
* 0x03 DATA, 4-byte subchannel-id, 4-byte seqnum, variable-length payload
* 0x04 CLOSE, 4-byte subchannel-id, 4-byte seqnum
* 0x05 ACK, 4-byte response-seqnum

All seqnums are big-endian, and are provided by the L4 protocol. The other fields are arbitrary and not interpreted as integers. The subchannel-ids must be allocated by both sides without collision, but otherwise they are only used to look up L5 objects for dispatch. The response-seqnum is always copied from the OPEN/DATA/CLOSE packet being acknowledged.

L3 consumes the PING and PONG messages. Receiving any PING will provoke a PONG in response, with a copy of the ping-id field. The 30-second timer will produce unprovoked PONGs with a ping-id of all zeros. A future viability protocol will use PINGs to test for roundtrip functionality.

All other messages (OPEN/DATA/CLOSE/ACK) are deserialized and delivered "upstairs" to the L4 protocol handler.

The current L3 connection's `IProducer`/`IConsumer` interface is made available to the L4 flow-control manager.

## L4 protocol

The L4 protocol manages a durable stream of OPEN/DATA/CLOSE/ACK messages. Since each will be enclosed in a Noise frame before they pass to L3, they do not need length fields or other framing.

Each OPEN/DATA/CLOSE has a sequence number, starting at 0, and monotonically increasing by 1 for each message. Each direction has a separate number space.

The L4 manager maintains a double-ended queue of unacknowledged outbound messages. Subchannel activity (opening, closing, sending data) causes messages to be added to this queue. If an L3 connection is available, these messages are also sent over that connection, but they remain in the queue in case the connection is lost and they must be retransmitted on some future replacement connection. Messages stay in the queue until they can be retired by the receipt of an ACK with a matching response-sequence-number. This provides reliable message delivery that survives the L3 connection being replaced.

ACKs are not acked, nor do they have seqnums of their own. Each inbound side remembers the highest ACK it has sent, and ignores incoming OPEN/DATA/CLOSE messages with that sequence number or below. This ensures in-order at-most-once processing of OPEN/DATA/CLOSE messages.

Each inbound OPEN message causes a new L5 subchannel object to be created. Subsequent DATA/CLOSE messages for the same subchannel-id are delivered to that object.

Each time an L3 connection is established, the side will immediately send all L4 messages waiting in the outbound queue.

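A sketch of that queue discipline (illustrative Python, not the library's actual classes):

```python
from collections import deque

class OutboundL4Queue:
    def __init__(self):
        self.queue = deque()  # (seqnum, message) pairs, oldest first
        self.next_seqnum = 0
    def send(self, message, l3):
        seqnum = self.next_seqnum
        self.next_seqnum += 1
        self.queue.append((seqnum, message))  # keep until ACKed
        if l3 is not None:  # transmit now if a connection is up
            l3.encrypt_and_send(seqnum, message)
    def handle_ack(self, response_seqnum):
        # an ACK for N retires every queued message with seqnum <= N
        while self.queue and self.queue[0][0] <= response_seqnum:
            self.queue.popleft()
    def connection_reestablished(self, l3):
        # re-send everything still unacknowledged, in seqnum order
        for seqnum, message in self.queue:
            l3.encrypt_and_send(seqnum, message)
```
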
A future protocol might reduce this duplication by including the highest received sequence number in the L1 PLEASE-DILATE message, which would effectively retire queued messages before initiating the L2 connection process. On any given L3 connection, all messages are sent in-order. The receipt of an ACK for seqnum `N` allows all messages with `seqnum <= N` to be retired.

The L4 layer is also responsible for managing flow control among the L3 connection and the various L5 subchannels.

## L5 subchannels

The L5 layer consists of a collection of "subchannel" objects, a dispatcher, and the endpoints that provide the Twisted-flavored API.

Other than the "control channel", all subchannels are created by a client endpoint connection API. The side that calls this API is named the Initiator, and the other side is named the Acceptor. Subchannels can be initiated in either direction, independent of the Leader/Follower distinction. For a typical file-transfer application, the subchannel would be initiated by the side seeking to send a file.

Each subchannel uses a distinct subchannel-id, which is a four-byte identifier. Both directions share a number space (unlike L4 seqnums), so the rule is that the Leader side sets the last bit of the last byte to a 1, while the Follower sets it to a 0. These are not generally treated as integers, however for the sake of debugging, the implementation generates them with a simple big-endian-encoded counter (`counter*2+1` for the Leader, `counter*2+2` for the Follower, with id `0` reserved for the control channel).

When the `client_ep.connect()` API is called, the Initiator allocates a subchannel-id and sends an OPEN. It can then immediately send DATA messages with the outbound data (there is no special response to an OPEN, so there is no need to wait). The Acceptor will trigger their `.connectionMade` handler upon receipt of the OPEN.

Subchannels are durable: they do not close until one side calls `.loseConnection` on the subchannel object (or the enclosing Wormhole is closed). Either the Initiator or the Acceptor can call `.loseConnection`. This causes a CLOSE message to be sent (with the subchannel-id). The other side will send its own CLOSE message in response. Each side will signal the `.connectionLost()` event upon receipt of a CLOSE.

There is no equivalent to TCP's "half-closed" state, however if only one side calls `close()`, then all data written before that call will be delivered before the other side observes `.connectionLost()`. Any inbound data that was queued for delivery before the other side sees the CLOSE will still be delivered to the side that called `close()` before it sees `.connectionLost()`. Internally, the side which called `.loseConnection` will remain in a special "closing" state until the CLOSE response arrives, during which time DATA payloads are still delivered. After calling `close()` (or receiving CLOSE), any outbound `.write()` calls will trigger an error.

DATA payloads that arrive for a non-open subchannel are logged and discarded.

This protocol calls for one OPEN and two CLOSE messages for each subchannel, with some arbitrary number of DATA messages in between. Subchannel-ids should not be reused (it would probably work, but the protocol hasn't been analyzed enough to be sure).

The "control channel" is special. It uses a subchannel-id of all zeros, and is opened implicitly by both sides as soon as the first L3 connection is selected.
It is routed to a special client-on-both-sides endpoint, rather than causing the listening endpoint to accept a new connection. This avoids the need for application-level code to negotiate who should be the one to open it (the Leader/Follower distinction is private to the Wormhole internals: applications are not obligated to pick a side).

OPEN and CLOSE messages for the control channel are logged and discarded. The control-channel client endpoint can only be used once, and does not close until the Wormhole itself is closed.

Each OPEN/DATA/CLOSE message is delivered to the L4 object for queueing, delivery, and eventual retirement. The L5 layer does not keep track of old messages.

### Flow Control

Subchannels are flow-controlled by pausing their writes when the L3 connection is paused, and pausing the L3 connection when the subchannel signals a pause. When the outbound L3 connection is full, *all* subchannels are paused. Likewise the inbound connection is paused if *any* of the subchannels asks for a pause. This is much easier to implement and improves our utilization factor (we can use TCP's window-filling algorithm, instead of rolling our own), but will block all subchannels even if only one of them gets full. This shouldn't matter for many applications, but might be noticeable when combining very different kinds of traffic (e.g. a chat conversation sharing a wormhole with file-transfer might prefer the IM text to take priority).

Each subchannel implements Twisted's `ITransport`, `IProducer`, and `IConsumer` interfaces. The Endpoint API causes a new `IProtocol` object to be created (by the caller's factory) and glued to the subchannel object in the `.transport` property, as is standard in Twisted-based applications.

All subchannels are also paused when the L3 connection is lost, and are unpaused when a new replacement connection is selected.

magic-wormhole-0.12.0/docs/file-transfer-protocol.md000066400000000000000000000235321400712516500223640ustar00rootroot00000000000000
# File-Transfer Protocol

The `bin/wormhole` tool uses a Wormhole to establish a connection, then speaks a file-transfer -specific protocol over that Wormhole to decide how to transfer the data. This application-layer protocol is described here.

All application-level messages are dictionaries, which are JSON-encoded and UTF-8-encoded before being handed to `wormhole.send` (which then encrypts them before sending through the rendezvous server to the peer).

## Sender

`wormhole send` has two main modes: file/directory (which requires a non-wormhole Transit connection), or text (which does not).

If the sender is doing files or directories, its first message contains just a `transit` key, whose value is a dictionary with `abilities-v1` and `hints-v1` keys. These are given to the Transit object, described below.

Then (for both files/directories and text) it sends a message with an `offer` key. The offer contains a single key, exactly one of (`message`, `file`, or `directory`). For `message`, the value is the message being sent. For `file` and `directory`, it contains a dictionary with additional information:

* `message`: the text message, for text-mode
* `file`: for file-mode, a dict with `filename` and `filesize`
* `directory`: for directory-mode, a dict with:
  * `mode`: the compression mode, currently always `zipfile/deflated`
  * `dirname`
  * `zipsize`: integer, size of the transmitted data in bytes
  * `numbytes`: integer, estimated total size of the uncompressed directory
  * `numfiles`: integer, number of files+directories being sent

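For example, a file-mode offer might look like this (filename and size are illustrative):

```
{
  "offer": {
    "file": {
      "filename": "example.jpg",
      "filesize": 12345
    }
  }
}
```
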
The sender runs a loop where it waits for similar dictionary-shaped messages from the recipient, and processes them. It reacts to the following keys:

* `error`: use the value to throw a TransferError and terminate
* `transit`: use the value to build the Transit instance
* `answer`:
  * if `message_ack: ok` is in the value (we're in text-mode), then exit with success
  * if `file_ack: ok` is in the value (and we're in file/directory mode), then wait for Transit to connect, then send the file through Transit, then wait for an ack (via Transit), then exit

The sender can handle all of these keys in the same message, or spaced out over multiple ones. It will ignore any keys it doesn't recognize, and will completely ignore messages that don't contain any recognized key. The only constraint is that the message containing `message_ack` or `file_ack` is the last one: it will stop looking for wormhole messages at that point.

## Recipient

`wormhole receive` is used for both file/directory-mode and text-mode: it learns which is being used from the `offer` message.

The recipient enters a loop where it processes the following keys from each received message:

* `error`: if present in any message, the recipient raises TransferError (with the value) and exits immediately (before processing any other keys)
* `transit`: the value is used to build the Transit instance
* `offer`: parse the offer:
  * `message`: accept the message and terminate
  * `file`: connect a Transit instance, wait for it to deliver the indicated number of bytes, then write them to the target filename
  * `directory`: as with `file`, but unzip the bytes into the target directory

## Transit

The Wormhole API does not currently provide for large-volume data transfer (this feature will be added to a future version, under the name "Dilated Wormhole"). For now, bulk data is sent through a "Transit" object, which does not use the Rendezvous Server. Instead, it tries to establish a direct TCP connection from sender to recipient (or vice versa). If that fails, both sides connect to a "Transit Relay", a very simple server that just glues two TCP sockets together when asked.

The Transit object is created with a key (the same key on each side), and all data sent through it will be encrypted with a derivation of that key. The transit key is also used to derive handshake messages which are used to make sure we're talking to the right peer, and to help the Transit Relay match up the two client connections. Unlike Wormhole objects (which are symmetric), Transit objects come in pairs: one side is the Sender, and the other is the Receiver.

Like Wormhole, Transit provides an encrypted record pipe. If you call `.send()` with 40 bytes, the other end will see a `.gotData()` with exactly 40 bytes: no splitting, merging, dropping, or re-ordering. The Transit object also functions as a twisted Producer/Consumer, so it can be connected directly to file-readers and writers, and does flow-control properly.
Most of the complexity of the Transit object has to do with negotiating and scheduling likely targets for the TCP connection.

Each Transit object has a set of "abilities". These are outbound connection mechanisms that the client is capable of using. The basic CLI tool (running on a normal computer) has two abilities: `direct-tcp-v1` and `relay-v1`.

* `direct-tcp-v1` indicates that it can make outbound TCP connections to a requested host and port number. "v1" means that the first thing sent over these connections is a specific derived handshake message, e.g. `transit sender HEXHEX ready\n\n`.
* `relay-v1` indicates it can connect to the Transit Relay and speak the matching protocol (in which the first message is `please relay HEXHEX for side HEX\n`, and the relay might eventually say `ok\n`).

Future implementations may have additional abilities, such as connecting directly to Tor onion services, I2P services, WebSockets, WebRTC, or other connection technologies. Implementations on some platforms (such as web browsers) may lack `direct-tcp-v1` or `relay-v1`.

While it isn't strictly necessary for both sides to emit what they're capable of using, it does help performance: a Tor onion-service-capable receiver shouldn't spend the time and energy to set up an onion service if the sender can't use it.

After learning the abilities of its peer, the Transit object can create a list of "hints", which are endpoints that the peer should try to connect to. Each hint will fall under one of the abilities that the peer indicated it could use. Hints have types like `direct-tcp-v1`, `tor-tcp-v1`, and `relay-v1`. Hints are encoded into dictionaries (with a mandatory `type` key, and other keys as necessary):

* `direct-tcp-v1` {hostname:, port:, priority:?}
* `tor-tcp-v1` {hostname:, port:, priority:?}
* `relay-v1` {hints: [{hostname:, port:, priority:?}, ..]}

For example, if our peer can use `direct-tcp-v1`, then our Transit object will deduce our local IP addresses (unless forbidden, i.e. we're using Tor), listen on a TCP port, then send a list of `direct-tcp-v1` hints pointing at all of them. If our peer can use `relay-v1`, then we'll connect to our relay server and give the peer a hint to the same.

`tor-tcp-v1` hints indicate an Onion service, which cannot be reached without Tor. `direct-tcp-v1` hints can be reached with direct TCP connections (unless forbidden) or by proxying through Tor. Onion services take about 30 seconds to spin up, but bypass NAT, allowing two clients behind NAT boxes to connect without a transit relay (really, the entire Tor network is acting as a relay).

The file-transfer application uses `transit` messages to convey these abilities and hints from one Transit object to the other. After updating the Transit objects, it then asks the Transit object to connect, whereupon Transit will try to connect to all the hints that it can, and will use the first one that succeeds.

The file-transfer application, when actually sending file/directory data, will close the Wormhole as soon as it has enough information to begin opening the Transit connection. The final ack of the received data is sent through the Transit object, as a UTF-8-encoded JSON-encoded dictionary with `ack: ok` and `sha256: HEXHEX` containing the hash of the received data.
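Assembled from the pieces above, a complete `transit` message might look like the following sketch (the abilities/hints layout follows the definitions above, but all addresses, ports, priorities, and the relay hostname are illustrative):

```python
transit_msg = {
    "transit": {
        # what this side is capable of doing
        "abilities-v1": [{"type": "direct-tcp-v1"}, {"type": "relay-v1"}],
        # where the peer should try to reach us
        "hints-v1": [
            {"type": "direct-tcp-v1", "hostname": "192.168.1.5",
             "port": 45678, "priority": 1.0},
            {"type": "relay-v1",
             "hints": [{"hostname": "relay.example.com", "port": 4001,
                        "priority": 0.5}]},
        ],
    },
}
```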
## Future Extensions Transit will be extended to provide other connection techniques: * WebSocket: usable by web browsers, not too hard to use by normal computers, requires direct (or relayed) TCP connection * WebRTC: usable by web browsers, hard-but-technically-possible to use by normal computers, provides NAT hole-punching for "free" * (web browsers cannot make direct TCP connections, so interop between browsers and CLI clients will either require adding WebSocket to CLI, or a relay that is capable of speaking/bridging both) * I2P: like Tor, but not capable of proxying to normal TCP hints. * ICE-mediated STUN/STUNT: NAT hole-punching, assisted somewhat by a server that can tell you your external IP address and port. Maybe implemented as a uTP stream (which is UDP based, and thus easier to get through NAT). The file-transfer protocol will be extended too: * "command mode": establish the connection, *then* figure out what we want to use it for, allowing multiple files to be exchanged, in either direction. This is to support a GUI that lets you open the wormhole, then drop files into it on either end. * some Transit messages being sent early, so ports and Onion services can be spun up earlier, to reduce overall waiting time * transit messages being sent in multiple phases: maybe the transit connection can progress while waiting for the user to confirm the transfer The hope is that by sending everything in dictionaries and multiple messages, there will be enough wiggle room to make these extensions in a backwards-compatible way. For example, to add "command mode" while allowing the fancy new (as yet unwritten) GUI client to interoperate with old-fashioned one-file-only CLI clients, we need the GUI tool to send an "I'm capable of command mode" in the VERSION message, and look for it in the received VERSION. If it isn't present, it will either expect to see an offer (if the other side is sending), or nothing (if it is waiting to receive), and can explain the situation to the user accordingly. It might show a locked set of bars over the wormhole graphic to mean "cannot send", or a "waiting to send them a file" overlay for send-only. magic-wormhole-0.12.0/docs/index.rst000066400000000000000000000012001400712516500172670ustar00rootroot00000000000000.. Magic-Wormhole documentation master file, created by sphinx-quickstart on Sun Nov 12 10:24:09 2017. You can adapt this file completely to your liking, but it should at least contain the root `toctree` directive. Magic-Wormhole: Get Things From One Computer To Another, Safely =============================================================== .. toctree:: :maxdepth: 2 :caption: Contents: welcome tor introduction api transit server-protocol client-protocol file-transfer-protocol attacks journal Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` magic-wormhole-0.12.0/docs/introduction.md000066400000000000000000000054741400712516500205120ustar00rootroot00000000000000# Protocol/API/Library Introduction The magic-wormhole (Python) distribution provides several things: an executable tool ("bin/wormhole"), an importable library (`import wormhole`), the URL of a publically-available Rendezvous Server, and the definition of a protocol used by all three. The executable tool provides basic sending and receiving of files, directories, and short text strings. These all use `wormhole send` and `wormhole receive` (which can be abbreviated as `wormhole tx` and `wormhole rx`). 
It also has a mode to facilitate the transfer of SSH keys. This tool, while useful on its own, is just one possible use of the protocol.

The `wormhole` library provides an API to establish a bidirectional ordered encrypted record pipe to another instance (where each record is an arbitrary-sized bytestring). This does not provide file-transfer directly: the "bin/wormhole" tool speaks a simple protocol through this record pipe to negotiate and perform the file transfer.

`wormhole/cli/public_relay.py` contains the URLs of a Rendezvous Server and a Transit Relay which I provide to support the file-transfer tools, which other developers should feel free to use for their applications as well. I cannot make any guarantees about performance or uptime for these servers: if you want to use Magic Wormhole in a production environment, please consider running a server on your own infrastructure (just run `wormhole-server start` and modify the URLs in your application to point at it).

## The Magic-Wormhole Protocol

There are several layers to the protocol. At the bottom level, each client opens a WebSocket to the Rendezvous Server, sending JSON-based commands to the server, and receiving similarly-encoded messages. Some of these commands are addressed to the server itself, while others are instructions to queue a message to other clients, or are indications of messages coming from other clients. All these messages are described in "server-protocol.md".

These inter-client messages are used to convey the PAKE protocol exchange, then a "VERSION" message (which doubles as a way to verify the session key), then some number of encrypted application-level data messages. "client-protocol.md" describes these wormhole-to-wormhole messages.

Each wormhole-using application is then free to interpret the data messages as it pleases. The file-transfer app sends an "offer" from the `wormhole send` side, to which the `wormhole receive` side sends a response, after which the Transit connection is negotiated (if necessary), and finally the data is sent through the Transit connection. "file-transfer-protocol.md" describes this application's use of the client messages.

## The `wormhole` API

Applications use the `wormhole` library to establish wormhole connections and exchange data through them. Please see `api.md` for a complete description of this interface.

magic-wormhole-0.12.0/docs/journal.md000066400000000000000000000153431400712516500174370ustar00rootroot00000000000000# Journaled Mode

(note: this section is speculative, the code has not yet been written)

Magic-Wormhole supports applications which are written in a "journaled" or "checkpointed" style. These apps store their entire state in a well-defined checkpoint (perhaps in a database), and react to inbound events or messages by carefully moving from one state to another, then releasing any outbound messages. As a result, they can be terminated safely at any moment, without warning, and ensure that the externally-visible behavior is deterministic and independent of this stop/restart timing.

This is the style encouraged by the E event loop, the original [Waterken Server](http://waterken.sourceforge.net/), and the more modern [Ken Platform](http://web.eecs.umich.edu/~tpkelly/Ken/), all influential in the object-capability security community.
## Requirements

Applications written in this style must follow some strict rules:

* all state goes into the checkpoint
* the only way to affect the state is by processing an input message
* event processing is deterministic (any non-determinism must be implemented as a message, e.g. from a clock service or a random-number generator)
* apps must never forget a message for which they've accepted responsibility

The main processing function takes the previous state checkpoint and a single input message, and produces a new state checkpoint and a set of output messages. For performance, the state might be kept in memory between events, but the behavior should be indistinguishable from that of a server which terminates completely between events.

In general, applications must tolerate duplicate inbound messages, and should re-send outbound messages until the recipient acknowledges them. Any outbound responses to an inbound message must be queued until the checkpoint is recorded. If outbound messages were delivered before the checkpoint was recorded, then a crash just after delivery would roll the process back to a state where it forgot about the inbound event, causing observably inconsistent behavior that depends upon whether the outbound message successfully escaped the dying process or not.

As a result, journaled-style applications use a very specific process when interacting with the outside world. Their event-processing function looks like:

* receive inbound event
* (load state)
* create queue for any outbound messages
* process message (changing state and queuing outbound messages)
* serialize state, record in checkpoint
* deliver any queued outbound messages

In addition, the protocols used to exchange messages should include message IDs and acks. Part of the state vector will include a set of unacknowledged outbound messages. When a connection is established, all outbound messages should be re-sent, and messages are removed from the pending set when an inbound ack is received. The state must include a set of inbound message ids which have been processed already. All inbound messages receive an ack, but only new ones are processed.

Connection establishment/loss is not strictly included in the journaled-app model (in Waterken/Ken, message delivery is provided by the platform, and apps do not know about connections), but in general:

* "I want to have a connection" is stored in the state vector
* "I am connected" is not
* when a connection is established, code can run to deliver pending messages, and this does not qualify as an inbound event
* inbound events can only happen when at least one connection is established
* immediately after restarting from a checkpoint, no connections are established, but the app might initiate outbound connections, or prepare to accept inbound ones

## Wormhole Support

To support this mode, the Wormhole constructor accepts a `journal=` argument. If provided, it must be an object that implements the `wormhole.IJournal` interface, which consists of two methods:

* `j.queue_outbound(fn, *args, **kwargs)`: used to delay delivery of outbound messages until the checkpoint has been recorded
* `j.process()`: a context manager which should be entered before processing inbound messages

`wormhole.Journal` is an implementation of this interface, which is constructed with a (synchronous) `save_checkpoint` function. Applications can use it, or bring their own.
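As a concrete illustration, a minimal object satisfying the `wormhole.IJournal` contract might look like this sketch (a hypothetical `SimpleJournal`, not the shipped `wormhole.Journal`; error handling is omitted):

```python
import contextlib

class SimpleJournal:
    """Minimal sketch of the IJournal contract described above."""

    def __init__(self, save_checkpoint):
        self._save_checkpoint = save_checkpoint  # synchronous callable
        self._outbound = []

    def queue_outbound(self, fn, *args, **kwargs):
        # hold outbound messages until the checkpoint is safely recorded
        self._outbound.append((fn, args, kwargs))

    @contextlib.contextmanager
    def process(self):
        yield  # caller processes one inbound event, queuing any output
        self._save_checkpoint()  # commit the new state first...
        queued, self._outbound = self._outbound, []
        for fn, args, kwargs in queued:
            fn(*args, **kwargs)  # ...then release the queued messages
```

The ordering inside `process()` is the important part: the checkpoint is committed before any queued outbound messages are released, matching the crash-safety argument above.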
The Wormhole object, when configured with a journal, will wrap all inbound WebSocket message processing with the `j.process()` context manager, and will deliver all outbound messages through `j.queue_outbound`. Applications using such a Wormhole must also use the same journal for their own (non-wormhole) events. It is important to coordinate multiple sources of events: e.g. a UI event may cause the application to call `w.send(data)`, and the outbound wormhole message should be checkpointed along with the app's state changes caused by the UI event. Using a shared journal for both wormhole- and non-wormhole- events provides this coordination. The `save_checkpoint` function should serialize application state along with any Wormholes that are active. Wormhole state can be obtained by calling `w.serialize()`, which will return a dictionary (that can be JSON-serialized). At application startup (or checkpoint resumption), Wormholes can be regenerated with `wormhole.from_serialized()`. Note that only "delegated-mode" wormholes can be serialized: Deferreds are not amenable to usage beyond a single process lifetime. For a functioning example of a journaled-mode application, see misc/demo-journal.py. The following snippet may help illustrate the concepts: ```python class App: @classmethod def new(klass): self = klass() self.state = {} self.j = wormhole.Journal(self.save_checkpoint) self.w = wormhole.create(.., delegate=self, journal=self.j) @classmethod def from_serialized(klass): self = klass() self.j = wormhole.Journal(self.save_checkpoint) with open("state.json", "r") as f: data = json.load(f) self.state = data["state"] self.w = wormhole.from_serialized(data["wormhole"], reactor, delegate=self, journal=self.j) def inbound_event(self, event): # non-wormhole events must be performed in the journal context with self.j.process(): parse_event(event) change_state() self.j.queue_outbound(self.send, outbound_message) def wormhole_received(self, data): # wormhole events are already performed in the journal context change_state() self.j.queue_outbound(self.send, stuff) def send(self, outbound_message): actually_send_message(outbound_message) def save_checkpoint(self): app_state = {"state": self.state, "wormhole": self.w.serialize()} with open("state.json", "w") as f: json.dump(app_state, f) ``` magic-wormhole-0.12.0/docs/server-protocol.md000066400000000000000000000277611400712516500211410ustar00rootroot00000000000000# Rendezvous Server Protocol ## Concepts The Rendezvous Server provides queued delivery of binary messages from one client to a second, and vice versa. Each message contains a "phase" (a string) and a body (bytestring). These messages are queued in a "Mailbox" until the other side connects and retrieves them, but are delivered immediately if both sides are connected to the server at the same time. Mailboxes are identified by a large random string. "Nameplates", in contrast, have short numeric identities: in a wormhole code like "4-purple-sausages", the "4" is the nameplate. Each client has a randomly-generated "side", a short hex string, used to differentiate between echoes of a client's own message, and real messages from the other client. ## Application IDs The server isolates each application from the others. Each client provides an "App Id" when it first connects (via the "BIND" message), and all subsequent commands are scoped to this application. This means that nameplates (described below) and mailboxes can be re-used between different apps. The AppID is a unicode string. 
Both sides of the wormhole must use the same AppID, of course, or they'll never see each other. The server keeps track of which applications are in use for maintenance purposes. Each application should use a unique AppID. Developers are encouraged to use "DNSNAME/APPNAME" to obtain a unique one: e.g. the `bin/wormhole` file-transfer tool uses `lothar.com/wormhole/text-or-file-xfer`.

## WebSocket Transport

At the lowest level, each client establishes (and maintains) a WebSocket connection to the Rendezvous Server. If the connection is lost (which could happen because the server was rebooted for maintenance, or because the client's network connection migrated from one network to another, or because the resident network gremlins decided to mess with you today), clients should reconnect after waiting a random (and exponentially-growing) delay. The Python implementation waits about 1 second after the first connection loss, growing by 50% each time, capped at 1 minute.

Each message to the server is a dictionary, with at least a `type` key, and other keys that depend upon the particular message type. Messages from server to client follow the same format.

`misc/dump-timing.py` is a debug tool which renders timing data gathered from the server and both clients, to identify protocol slowdowns and guide optimization efforts. To support this, the client/server messages include additional keys. Client->Server messages include a random `id` key, which is copied into the `ack` that is immediately sent back to the client for all commands (logged for the timing tool but otherwise ignored). Some client->server messages (`list`, `allocate`, `claim`, `release`, `close`, `ping`) provoke a direct response by the server: for these, `id` is copied into the response. This helps the tool correlate the command and response. All server->client messages have a `server_tx` timestamp (seconds since epoch, as a float), which records when the message left the server. Direct responses include a `server_rx` timestamp, to record when the client's command was received. The tool combines these with local timestamps (recorded by the client and not shared with the server) to build a full picture of network delays and round-trip times.

All messages are serialized as JSON, encoded to UTF-8, and the resulting bytes sent as a single "binary-mode" WebSocket payload. Servers can signal `error` for any message type they do not recognize. Clients and Servers must ignore unrecognized keys in otherwise-recognized messages. Clients must ignore unrecognized message types from the Server.

## Connection-Specific (Client-to-Server) Messages

The first thing each client sends to the server, immediately after the WebSocket connection is established, is a `bind` message. This specifies the AppID and side (in keys `appid` and `side`, respectively) that all subsequent messages will be scoped to. While technically each message could be independent (with its own `appid` and `side`), I thought it would be less confusing to use exactly one WebSocket per logical wormhole connection.

The first thing the server sends to each client is the `welcome` message. This is intended to deliver important status information to the client that might influence its operation.
The Python client currently reacts to the following keys (and ignores all others):

* `current_cli_version`: prompts the user to upgrade if the server's advertised version is greater than the client's version (as derived from the git tag)
* `motd`: prints this message, if present; intended to inform users about performance problems, scheduled downtime, or to beg for donations to keep the server running
* `error`: causes the client to print the message and then terminate. If a future version of the protocol requires a rate-limiting CAPTCHA ticket or other authorization record, the server can send `error` (explaining the requirement) if it does not see this ticket arrive before the `bind`.

A `ping` will provoke a `pong`: these are only used by unit tests for synchronization purposes (to detect when a batch of messages have been fully processed by the server). NAT-binding refresh messages are handled by the WebSocket layer (by asking Autobahn to send a keepalive message every 60 seconds), and do not use `ping`.

If any client->server command is invalid (e.g. it lacks a necessary key, or was sent in the wrong order), an `error` response will be sent. This response will include the error string in the `error` key, and a full copy of the original message dictionary in `orig`.

## Nameplates

Wormhole codes look like `4-purple-sausages`, consisting of a number followed by some random words. This number is called a "Nameplate".

On the Rendezvous Server, the Nameplate contains a pointer to a Mailbox. Clients can "claim" a nameplate, and then later "release" it. Each claim is for a specific side (so one client claiming the same nameplate multiple times only counts as one claim). Nameplates are deleted once the last client has released them, or after some period of inactivity.

Clients can either make up nameplates themselves, or (more commonly) ask the server to allocate one for them. Allocating a nameplate automatically claims it (to avoid a race condition), but for simplicity, clients send a claim for all nameplates, even ones which they've allocated themselves.

Nameplates (on the server) must live until the second client has learned about the associated mailbox, after which point they can be reused by other clients. So if two clients connect quickly, but then maintain a long-lived wormhole connection, they do not need to consume the limited space of short nameplates for that whole time.

The `allocate` command allocates a nameplate (the server returns one that is as short as possible), and the `allocated` response provides the answer. Clients can also send a `list` command to get back a `nameplates` response with all allocated nameplates for the bound AppID: this helps the code-input tab-completion feature know which prefixes to offer. The `nameplates` response returns a list of dictionaries, one per claimed nameplate, with at least an `id` key in each one (with the nameplate string).

Future versions may record additional attributes in the nameplate records, specifically a wordlist identifier and a code length (again to help with code-completion on the receiver).

## Mailboxes

The server provides a single "Mailbox" to each pair of connecting Wormhole clients. This holds an unordered set of messages, delivered immediately to connected clients, and queued for delivery to clients which connect later. Messages from both clients are merged together: clients use the included `side` identifier to distinguish echoes of their own messages from those coming from the other client.
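As a concrete illustration, the first few client-to-server commands from one side might look like this sketch (the AppID is the file-transfer one mentioned earlier; the `side`, `id`, nameplate, and mailbox values are made up, and in a real client the nameplate and mailbox come from the `allocated` and `claimed` responses):

```python
import json
import os

def make_id():
    # each command carries a random "id", echoed back in the server's "ack"
    return os.urandom(2).hex()

side = os.urandom(5).hex()  # random per-client "side" string

commands = [
    {"type": "bind", "appid": "lothar.com/wormhole/text-or-file-xfer",
     "side": side, "id": make_id()},
    {"type": "allocate", "id": make_id()},       # server replies "allocated"
    {"type": "claim", "nameplate": "4", "id": make_id()},   # -> "claimed"
    {"type": "open", "mailbox": "mb123", "id": make_id()},  # from "claimed"
]

# each command is JSON-encoded, UTF-8-encoded, and sent as one
# binary-mode WebSocket payload
payloads = [json.dumps(cmd).encode("utf-8") for cmd in commands]
```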
Each mailbox is "opened" by some number of clients at a time, until all clients have closed it. Mailboxes are kept alive by either an open client, or a Nameplate which points to the mailbox (so when a Nameplate is deleted from inactivity, the corresponding Mailbox will be too).

The `open` command both marks the mailbox as being opened by the bound side, and also adds the WebSocket as subscribed to that mailbox, so new messages are delivered immediately to the connected client. There is no explicit ack to the `open` command, but since all clients add a message to the mailbox as soon as they connect, there will always be a `message` response shortly after the `open` goes through. The `close` command provokes a `closed` response.

The `close` command accepts an optional "mood" string: this allows clients to tell the server (in general terms) about their experiences with the wormhole interaction. The server records the mood in its "usage" record, so the server operator can get a sense of how many connections are succeeding and failing. The moods currently recognized by the Rendezvous Server are:

* `happy` (default): the PAKE key-establishment worked, and the client saw at least one valid encrypted message from its peer
* `lonely`: the client gave up without hearing anything from its peer
* `scary`: the client saw an invalid encrypted message from its peer, indicating that either the wormhole code was typed in wrong, or an attacker tried (and failed) to guess the code
* `errory`: the client encountered some other error: protocol problem or internal error

The server will also record `pruney` if it deleted the mailbox due to inactivity, or `crowded` if more than two sides tried to access the mailbox.

When clients use the `add` command to add a client-to-client message, they will put the body (a bytestring) into the command as a hex-encoded string in the `body` key. They will also put the message's "phase", as a string, into the `phase` key. See client-protocol.md for details about how different phases are used.

When a client sends `open`, it will get back a `message` response for every message in the mailbox. It will also get a real-time `message` for every `add` performed by clients later. These `message` responses include "side" and "phase" from the sending client, and "body" (as a hex string, encoding the binary message body). The decoded "body" will either be a random-looking cryptographic value (for the PAKE message), or a random-looking encrypted blob (for the VERSION message, as well as all application-provided payloads). The `message` response will also include `id`, copied from the `id` of the `add` message (and used only by the timing-diagram tool).

The Rendezvous Server does not de-duplicate messages, nor does it retain ordering: clients must do both if they need to.
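For example, hex-encoding a message body for `add`, and decoding it again on receipt, might look like this sketch (the ciphertext bytes are illustrative):

```python
import json

encrypted_body = bytes.fromhex("a1b2c3d4")  # illustrative ciphertext

# client->server: queue a message for the peer (body is hex-encoded)
add_cmd = {"type": "add", "phase": "0", "body": encrypted_body.hex()}
wire = json.dumps(add_cmd).encode("utf-8")

# server->client: every open client gets it back as a "message" response;
# here we just decode our own command as a stand-in for that echo
incoming = json.loads(wire.decode("utf-8"))
body = bytes.fromhex(incoming["body"])  # recover the raw bytes
assert body == encrypted_body
```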
## All Message Types

This lists all message types, along with the type-specific keys for each (if any), and which ones provoke direct responses:

* S->C welcome {welcome:}
* (C->S) bind {appid:, side:}
* (C->S) list {} -> nameplates
* S->C nameplates {nameplates: [{id: str},..]}
* (C->S) allocate {} -> allocated
* S->C allocated {nameplate:}
* (C->S) claim {nameplate:} -> claimed
* S->C claimed {mailbox:}
* (C->S) release {nameplate:?} -> released
* S->C released
* (C->S) open {mailbox:}
* (C->S) add {phase: str, body: hex} -> message (to all connected clients)
* S->C message {side:, phase:, body:, id:}
* (C->S) close {mailbox:?, mood:?} -> closed
* S->C closed
* S->C ack
* (C->S) ping {ping: int} -> pong
* S->C pong {pong: int}
* S->C error {error: str, orig:}

## Persistence

The server stores all messages in a database, so it should not lose any information when it is restarted. The server will not send a direct response until any side-effects (such as the message being added to the mailbox) have been safely committed to the database.

The client library knows how to resume the protocol after a reconnection event, assuming the client process itself continues to run.

Clients which terminate entirely between messages (e.g. a secure chat application, which requires multiple wormhole messages to exchange address-book entries, and which must function even if the two apps are never both running at the same time) can use "Journal Mode" to ensure forward progress is made: see "journal.md" for details.

magic-wormhole-0.12.0/docs/state-machines/000077500000000000000000000000001400712516500203425ustar00rootroot00000000000000magic-wormhole-0.12.0/docs/state-machines/Makefile000066400000000000000000000003631400712516500220040ustar00rootroot00000000000000
default: images
images: allocator.png boss.png code.png input.png key.png lister.png machines.png mailbox.png nameplate.png order.png receive.png send.png terminator.png dilation.png

.PHONY: default images

%.png: %.dot
	dot -T png $< >$@
magic-wormhole-0.12.0/docs/state-machines/_connection.dot000066400000000000000000000075201400712516500233540ustar00rootroot00000000000000digraph {

/* note: this is nominally what we want from the machine that establishes
   the WebSocket connection (and re-establishes it when it is lost). We
   aren't using this yet; for now we're relying upon
   twisted.application.internet.ClientService, which does reconnection and
   random exponential backoff.

   The one thing it doesn't do is fail entirely when the first connection
   attempt fails, which I think would be good for usability. If the first
   attempt fails, it's probably because you don't have a network connection,
   or the hostname is wrong, or the service has been retired entirely. And
   retrying silently forever is not being honest with the user. So I'm
   keeping this diagram around, as a reminder of how we'd like to modify
   ClientService.
*/ /* ConnectionMachine */ C_start [label="Connection\nMachine" style="dotted"] C_start -> C_Pc1 [label="CM_start()" color="orange" fontcolor="orange"] C_Pc1 [shape="box" label="ep.connect()" color="orange"] C_Pc1 -> C_Sc1 [color="orange"] C_Sc1 [label="connecting\n(1st time)" color="orange"] C_Sc1 -> C_P_reset [label="d.callback" color="orange" fontcolor="orange"] C_P_reset [shape="box" label="reset\ntimer" color="orange"] C_P_reset -> C_S_negotiating [color="orange"] C_Sc1 -> C_P_failed [label="d.errback" color="red"] C_Sc1 -> C_P_failed [label="p.onClose" color="red"] C_Sc1 -> C_P_cancel [label="C_stop()"] C_P_cancel [shape="box" label="d.cancel()"] C_P_cancel -> C_S_cancelling C_S_cancelling [label="cancelling"] C_S_cancelling -> C_P_stopped [label="d.errback"] C_S_negotiating [label="negotiating" color="orange"] C_S_negotiating -> C_P_failed [label="p.onClose"] C_S_negotiating -> C_P_connected [label="p.onOpen" color="orange" fontcolor="orange"] C_S_negotiating -> C_P_drop2 [label="C_stop()"] C_P_drop2 [shape="box" label="p.dropConnection()"] C_P_drop2 -> C_S_disconnecting C_P_connected [shape="box" label="tx bind\nM_connected()" color="orange"] C_P_connected -> C_S_open [color="orange"] C_S_open [label="open" color="green"] C_S_open -> C_P_lost [label="p.onClose" color="blue" fontcolor="blue"] C_S_open -> C_P_drop [label="C_stop()" color="orange" fontcolor="orange"] C_P_drop [shape="box" label="p.dropConnection()\nM_lost()" color="orange"] C_P_drop -> C_S_disconnecting [color="orange"] C_S_disconnecting [label="disconnecting" color="orange"] C_S_disconnecting -> C_P_stopped [label="p.onClose" color="orange" fontcolor="orange"] C_P_lost [shape="box" label="M_lost()" color="blue"] C_P_lost -> C_P_wait [color="blue"] C_P_wait [shape="box" label="start timer" color="blue"] C_P_wait -> C_S_waiting [color="blue"] C_S_waiting [label="waiting" color="blue"] C_S_waiting -> C_Pc2 [label="expire" color="blue" fontcolor="blue"] C_S_waiting -> C_P_stop_timer [label="C_stop()"] C_P_stop_timer [shape="box" label="timer.cancel()"] C_P_stop_timer -> C_P_stopped C_Pc2 [shape="box" label="ep.connect()" color="blue"] C_Pc2 -> C_Sc2 [color="blue"] C_Sc2 [label="reconnecting" color="blue"] C_Sc2 -> C_P_reset [label="d.callback" color="blue" fontcolor="blue"] C_Sc2 -> C_P_wait [label="d.errback"] C_Sc2 -> C_P_cancel [label="C_stop()"] C_P_stopped [shape="box" label="MC_stopped()" color="orange"] C_P_stopped -> C_S_stopped [color="orange"] C_S_stopped [label="stopped" color="orange"] C_P_failed [shape="box" label="notify_fail" color="red"] C_P_failed -> C_S_failed C_S_failed [label="failed" color="red"] } magic-wormhole-0.12.0/docs/state-machines/allocator.dot000066400000000000000000000022611400712516500230330ustar00rootroot00000000000000digraph { start [label="A:\nNameplate\nAllocation" style="dotted"] {rank=same; start S0A S0B} start -> S0A [style="invis"] S0A [label="S0A:\nidle\ndisconnected" color="orange"] S0A -> S0B [label="connected"] S0B -> S0A [label="lost"] S0B [label="S0B:\nidle\nconnected"] S0A -> S1A [label="allocate(length, wordlist)" color="orange"] S0B -> P_allocate [label="allocate(length, wordlist)"] P_allocate [shape="box" label="RC.tx_allocate" color="orange"] P_allocate -> S1B [color="orange"] {rank=same; S1A P_allocate S1B} S0B -> S1B [style="invis"] S1B [label="S1B:\nallocating\nconnected" color="orange"] S1B -> foo [label="lost"] foo [style="dotted" label=""] foo -> S1A S1A [label="S1A:\nallocating\ndisconnected" color="orange"] S1A -> P_allocate [label="connected" color="orange"] 
S1B -> P_allocated [label="rx_allocated" color="orange"] P_allocated [shape="box" label="choose words\nC.allocated(nameplate,code)" color="orange"] P_allocated -> S2 [color="orange"] S2 [label="S2:\ndone" color="orange"] } magic-wormhole-0.12.0/docs/state-machines/boss.dot000066400000000000000000000061001400712516500220150ustar00rootroot00000000000000digraph { /* could shave a RTT by committing to the nameplate early, before finishing the rest of the code input. While the user is still typing/completing the code, we claim the nameplate, open the mailbox, and retrieve the peer's PAKE message. Then as soon as the user finishes entering the code, we build our own PAKE message, send PAKE, compute the key, send VERSION. Starting from the Return, this saves two round trips. OTOH it adds consequences to hitting Tab. */ start [label="Boss\n(manager)" style="dotted"] {rank=same; P0_code S0} P0_code [shape="box" style="dashed" label="C.input_code\n or C.allocate_code\n or C.set_code"] P0_code -> S0 S0 [label="S0: empty"] S0 -> P0_build [label="got_code"] S0 -> P_close_error [label="rx_error"] P_close_error [shape="box" label="T.close(errory)"] P_close_error -> S_closing S0 -> P_close_lonely [label="close"] S0 -> P_close_unwelcome [label="rx_unwelcome"] P_close_unwelcome [shape="box" label="T.close(unwelcome)"] P_close_unwelcome -> S_closing P0_build [shape="box" label="W.got_code"] P0_build -> S1 S1 [label="S1: lonely" color="orange"] S1 -> S2 [label="happy"] S1 -> P_close_error [label="rx_error"] S1 -> P_close_scary [label="scared" color="red"] S1 -> P_close_unwelcome [label="rx_unwelcome"] S1 -> P_close_lonely [label="close"] P_close_lonely [shape="box" label="T.close(lonely)"] P_close_lonely -> S_closing P_close_scary [shape="box" label="T.close(scary)" color="red"] P_close_scary -> S_closing [color="red"] S2 [label="S2: happy" color="green"] S2 -> P2_close [label="close"] P2_close [shape="box" label="T.close(happy)"] P2_close -> S_closing S2 -> P2_got_phase [label="got_phase"] P2_got_phase [shape="box" label="W.received"] P2_got_phase -> S2 S2 -> P2_got_version [label="got_version"] P2_got_version [shape="box" label="W.got_version"] P2_got_version -> S2 S2 -> P_close_error [label="rx_error"] S2 -> P_close_scary [label="scared" color="red"] S2 -> P_close_unwelcome [label="rx_unwelcome"] S_closing [label="closing"] S_closing -> P_closed [label="closed\nerror"] S_closing -> S_closing [label="got_version\ngot_phase\nhappy\nscared\nclose"] P_closed [shape="box" label="W.closed(reason)"] P_closed -> S_closed S_closed [label="closed"] S0 -> P_closed [label="error"] S1 -> P_closed [label="error"] S2 -> P_closed [label="error"] {rank=same; Other S_closed} Other [shape="box" style="dashed" label="rx_welcome -> process (maybe rx_unwelcome)\nsend -> S.send\ngot_message -> got_version or got_phase\ngot_key -> W.got_key\ngot_verifier -> W.got_verifier\nallocate_code -> C.allocate_code\ninput_code -> C.input_code\nset_code -> C.set_code" ] } magic-wormhole-0.12.0/docs/state-machines/code.dot000066400000000000000000000027441400712516500217730ustar00rootroot00000000000000digraph { start [label="C:\nCode\n(management)" style="dotted"] {rank=same; start S0} start -> S0 [style="invis"] S0 [label="S0:\nidle"] S0 -> P0_got_code [label="set_code\n(code)"] P0_got_code [shape="box" label="N.set_nameplate"] P0_got_code -> P_done P_done [shape="box" label="K.got_code\nB.got_code"] P_done -> S4 S4 [label="S4: known" color="green"] {rank=same; S1_inputting_nameplate S3_allocating} {rank=same; P0_got_code P1_set_nameplate 
P3_got_nameplate} S0 -> P_input [label="input_code"] P_input [shape="box" label="I.start\n(helper)"] P_input -> S1_inputting_nameplate S1_inputting_nameplate [label="S1:\ninputting\nnameplate"] S1_inputting_nameplate -> P1_set_nameplate [label="got_nameplate\n(nameplate)"] P1_set_nameplate [shape="box" label="N.set_nameplate"] P1_set_nameplate -> S2_inputting_words S2_inputting_words [label="S2:\ninputting\nwords"] S2_inputting_words -> P_done [label="finished_input\n(code)"] S0 -> P_allocate [label="allocate_code\n(length,\nwordlist)"] P_allocate [shape="box" label="A.allocate\n(length, wordlist)"] P_allocate -> S3_allocating S3_allocating [label="S3:\nallocating"] S3_allocating -> P3_got_nameplate [label="allocated\n(nameplate,\ncode)"] P3_got_nameplate [shape="box" label="N.set_nameplate"] P3_got_nameplate -> P_done } magic-wormhole-0.12.0/docs/state-machines/dilation.dot000066400000000000000000000034171400712516500226620ustar00rootroot00000000000000digraph { Manager [label="Manager" shape="box" color="blue" fontcolor="blue"] Connector [label="Connector" shape="oval"] Framer [label="Framer"] DCP [label="Dilated\nConnection\nProtocol"] DCP -> Connector [style="dashed" label="add_candidate\n"] Record [label="Record"] Record -> Framer [style="dashed" label="connectionMade\nsend_frame"] Record -> Framer [style="dashed" label="add_and_parse (-> tokens)"] ITransport -> DCP [style="dashed" label="connectionMade\ndataReceived\nconnectionLost"] Framer -> ITransport [style="dashed" label="write"] Manager -> DCP [style="dashed" color="green" label="disconnect"] DCP -> Manager [style="dashed" color="green" label="got_record CClost"] DCP -> Record [style="dashed" label="set_role\nconnectionMade\nsend_record"] DCP -> Record [style="dashed" label="add_and_unframe (-> tokens)"] Manager -> Connector [style="dashed" label="start\ngot_hints\nstop"] Connector -> Manager [style="dashed" color="green" label="CCmade"] Connector -> DCP [color="green" fontcolor="blue" label="select\nsend_record(KCM)"] Connector -> DCP [color="red" fontcolor="red" label="disconnect"] Connector -> Connector [color="green" fontcolor="green" label="accept"] Inbound [label="Inbound" shape="box" color="blue" fontcolor="blue"] Manager -> Inbound [style="dashed" label="use_connection"] Inbound -> DCP [style="dashed" label="pauseProducing\nresumeProducing"] Outbound [label="Outbound" shape="box" color="blue" fontcolor="blue"] Manager -> Outbound [style="dashed" label="use_connection"] Outbound -> DCP [style="dashed" label="send_record\ntransport.(un)registerProducer"] } magic-wormhole-0.12.0/docs/state-machines/input.dot000066400000000000000000000035031400712516500222120ustar00rootroot00000000000000digraph { start [label="I:\nCode\nInput" style="dotted"] {rank=same; start S0} start -> S0 [style="invis"] S0 [label="S0:\nidle"] S0 -> P0_list_nameplates [label="start"] P0_list_nameplates [shape="box" label="L.refresh"] P0_list_nameplates -> S1 S1 [label="S1: typing\nnameplate" color="orange"] {rank=same; foo P0_list_nameplates} S1 -> foo [label="refresh_nameplates" color="orange" fontcolor="orange"] foo [style="dashed" label=""] foo -> P0_list_nameplates S1 -> P1_record [label="got_nameplates"] P1_record [shape="box" label="record\nnameplates"] P1_record -> S1 S1 -> P1_claim [label="choose_nameplate" color="orange" fontcolor="orange"] P1_claim [shape="box" label="stash nameplate\nC.got_nameplate"] P1_claim -> S2 S2 [label="S2: typing\ncode\n(no wordlist)"] S2 -> S2 [label="got_nameplates"] S2 -> P2_stash_wordlist [label="got_wordlist"] 
P2_stash_wordlist [shape="box" label="stash wordlist"] P2_stash_wordlist -> S3 S2 -> P_done [label="choose_words" color="orange" fontcolor="orange"] S3 [label="S3: typing\ncode\n(yes wordlist)"] S3 -> S3 [label="got_nameplates"] S3 -> P_done [label="choose_words" color="orange" fontcolor="orange"] P_done [shape="box" label="build code\nC.finished_input(code)"] P_done -> S4 S4 [label="S4: done" color="green"] S4 -> S4 [label="got_nameplates\ngot_wordlist"] other [shape="box" style="dotted" color="orange" fontcolor="orange" label="h.refresh_nameplates()\nh.get_nameplate_completions(prefix)\nh.choose_nameplate(nameplate)\nh.get_word_completions(prefix)\nh.choose_words(words)" ] {rank=same; S4 other} } magic-wormhole-0.12.0/docs/state-machines/key.dot000066400000000000000000000052201400712516500216410ustar00rootroot00000000000000digraph { /* could shave a RTT by committing to the nameplate early, before finishing the rest of the code input. While the user is still typing/completing the code, we claim the nameplate, open the mailbox, and retrieve the peer's PAKE message. Then as soon as the user finishes entering the code, we build our own PAKE message, send PAKE, compute the key, send VERSION. Starting from the Return, this saves two round trips. OTOH it adds consequences to hitting Tab. */ start [label="Key\nMachine" style="dotted"] /* two connected state machines: the first just puts the messages in the right order, the second handles PAKE */ {rank=same; SO_00 PO_got_code SO_10} {rank=same; SO_01 PO_got_both SO_11} SO_00 [label="S00"] SO_01 [label="S01: pake"] SO_10 [label="S10: code"] SO_11 [label="S11: both"] SO_00 -> SO_01 [label="got_pake\n(early)"] SO_00 -> PO_got_code [label="got_code"] PO_got_code [shape="box" label="K1.got_code"] PO_got_code -> SO_10 SO_01 -> PO_got_both [label="got_code"] PO_got_both [shape="box" label="K1.got_code\nK1.got_pake"] PO_got_both -> SO_11 SO_10 -> PO_got_pake [label="got_pake"] PO_got_pake [shape="box" label="K1.got_pake"] PO_got_pake -> SO_11 S0 [label="S0: know\nnothing"] S0 -> P0_build [label="got_code"] P0_build [shape="box" label="build_pake\nM.add_message(pake)"] P0_build -> S1 S1 [label="S1: know\ncode"] /* the Mailbox will deliver each message exactly once, but doesn't guarantee ordering: if Alice starts the process, then disconnects, then Bob starts (reading PAKE, sending both his PAKE and his VERSION phase), then Alice will see both PAKE and VERSION on her next connect, and might get the VERSION first. The Wormhole will queue inbound messages that it isn't ready for. 
The wormhole shim that lets applications do w.get(phase=) must do something similar, queueing inbound messages until it sees one for the phase it currently cares about.*/ S1 -> P_mood_scary [label="got_pake\npake bad"] P_mood_scary [shape="box" color="red" label="W.scared"] P_mood_scary -> S5 [color="red"] S5 [label="S5:\nscared" color="red"] S1 -> P1_compute [label="got_pake\npake good"] #S1 -> P_mood_lonely [label="close"] P1_compute [label="compute_key\nM.add_message(version)\nB.got_key\nR.got_key" shape="box"] P1_compute -> S4 S4 [label="S4: know_key" color="green"] } magic-wormhole-0.12.0/docs/state-machines/lister.dot000066400000000000000000000022461400712516500223600ustar00rootroot00000000000000digraph { {rank=same; title S0A S0B} title [label="(Nameplate)\nLister" style="dotted"] S0A [label="S0A:\nnot wanting\nunconnected"] S0B [label="S0B:\nnot wanting\nconnected" color="orange"] S0A -> S0B [label="connected"] S0B -> S0A [label="lost"] S0A -> S1A [label="refresh"] S0B -> P_tx [label="refresh" color="orange" fontcolor="orange"] S0A -> P_tx [style="invis"] {rank=same; S1A P_tx S1B P_notify} S1A [label="S1A:\nwant list\nunconnected"] S1B [label="S1B:\nwant list\nconnected" color="orange"] S1A -> P_tx [label="connected"] P_tx [shape="box" label="RC.tx_list()" color="orange"] P_tx -> S1B [color="orange"] S1B -> S1A [label="lost"] S1A -> foo [label="refresh"] foo [label="" style="dashed"] foo -> S1A S1B -> foo2 [label="refresh"] foo2 [label="" style="dashed"] foo2 -> P_tx S0B -> P_notify [label="rx_nameplates"] S1B -> P_notify [label="rx_nameplates" color="orange" fontcolor="orange"] P_notify [shape="box" label="I.got_nameplates()"] P_notify -> S0B } magic-wormhole-0.12.0/docs/state-machines/machines.dot000066400000000000000000000141311400712516500226410ustar00rootroot00000000000000digraph { Wormhole [shape="oval" color="blue" fontcolor="blue"] Boss [shape="box" label="Boss\n(manager)" color="blue" fontcolor="blue"] Nameplate [label="Nameplate\n(claimer)" shape="box" color="blue" fontcolor="blue"] Mailbox [label="Mailbox\n(opener)" shape="box" color="blue" fontcolor="blue"] Connection [label="Rendezvous\nConnector" shape="oval" color="blue" fontcolor="blue"] #websocket [color="blue" fontcolor="blue"] Order [shape="box" label="Ordering" color="blue" fontcolor="blue"] Key [shape="box" label="Key" color="blue" fontcolor="blue"] Send [shape="box" label="Send" color="blue" fontcolor="blue"] Receive [shape="box" label="Receive" color="blue" fontcolor="blue"] Code [shape="box" label="Code" color="blue" fontcolor="blue"] Lister [shape="box" label="(nameplate)\nLister" color="blue" fontcolor="blue"] Allocator [shape="box" label="(nameplate)\nAllocator" color="blue" fontcolor="blue"] Input [shape="box" label="(interactive\ncode)\nInput" color="blue" fontcolor="blue"] Terminator [shape="box" color="blue" fontcolor="blue"] InputHelperAPI [shape="oval" label="input\nhelper\nAPI" color="blue" fontcolor="blue"] Dilator [shape="box" label="Dilator" color="blue" fontcolor="blue"] #Connection -> websocket [color="blue"] #Connection -> Order [color="blue"] Wormhole -> Boss [style="dashed" label="allocate_code\ninput_code\nset_code\ndilate\nsend\nclose\n(once)" color="red" fontcolor="red"] #Wormhole -> Boss [color="blue"] Boss -> Wormhole [style="dashed" label="got_code\ngot_key\ngot_verifier\ngot_version\nreceived (seq)\nclosed\n(once)"] #Boss -> Connection [color="blue"] Boss -> Connection [style="dashed" label="start" color="red" fontcolor="red"] Connection -> Boss [style="dashed" 
label="rx_welcome\nrx_error\nerror"] Boss -> Send [style="dashed" color="red" fontcolor="red" label="send"] #Boss -> Mailbox [color="blue"] Mailbox -> Order [style="dashed" label="got_message (once)"] Key -> Boss [style="dashed" label="got_key\nscared"] Order -> Key [style="dashed" label="got_pake"] Order -> Receive [style="dashed" label="got_message"] #Boss -> Key [color="blue"] Key -> Mailbox [style="dashed" label="add_message (pake)\nadd_message (version)"] Receive -> Send [style="dashed" label="got_verified_key"] Send -> Mailbox [style="dashed" color="red" fontcolor="red" label="add_message (phase)"] Key -> Receive [style="dashed" label="got_key"] Receive -> Boss [style="dashed" label="happy\nscared\ngot_verifier\ngot_message"] Nameplate -> Connection [style="dashed" label="tx_claim\ntx_release"] Connection -> Nameplate [style="dashed" label="connected\nlost\nrx_claimed\nrx_released"] Mailbox -> Nameplate [style="dashed" label="release"] Nameplate -> Mailbox [style="dashed" label="got_mailbox"] Nameplate -> Input [style="dashed" label="got_wordlist"] Mailbox -> Connection [style="dashed" color="red" fontcolor="red" label="tx_open\ntx_add\ntx_close" ] Connection -> Mailbox [style="dashed" label="connected\nlost\nrx_message\nrx_closed"] Connection -> Lister [style="dashed" label="connected\nlost\nrx_nameplates" ] Lister -> Connection [style="dashed" label="tx_list" ] #Boss -> Code [color="blue"] Connection -> Allocator [style="dashed" label="connected\nlost\nrx_allocated"] Allocator -> Connection [style="dashed" color="red" fontcolor="red" label="tx_allocate" ] Lister -> Input [style="dashed" label="got_nameplates" ] #Code -> Lister [color="blue"] Input -> Lister [style="dashed" color="red" fontcolor="red" label="refresh" ] Boss -> Code [style="dashed" color="red" fontcolor="red" label="allocate_code\ninput_code\nset_code"] Code -> Boss [style="dashed" label="got_code"] Code -> Key [style="dashed" label="got_code"] Code -> Nameplate [style="dashed" label="set_nameplate"] Code -> Input [style="dashed" color="red" fontcolor="red" label="start"] Input -> Code [style="dashed" label="got_nameplate\nfinished_input"] InputHelperAPI -> Input [label="refresh_nameplates\nget_nameplate_completions\nchoose_nameplate\nget_word_completions\nchoose_words" color="orange" fontcolor="orange"] Code -> Allocator [style="dashed" color="red" fontcolor="red" label="allocate"] Allocator -> Code [style="dashed" label="allocated"] Nameplate -> Terminator [style="dashed" label="nameplate_done"] Mailbox -> Terminator [style="dashed" label="mailbox_done"] Terminator -> Nameplate [style="dashed" label="close"] Terminator -> Mailbox [style="dashed" label="close"] Terminator -> Connection [style="dashed" label="stop"] Connection -> Terminator [style="dashed" label="stopped"] Terminator -> Boss [style="dashed" label="closed\n(once)"] Boss -> Terminator [style="dashed" color="red" fontcolor="red" label="close"] Boss -> Dilator [style="dashed" label="dilate\nreceived_dilate\ngot_wormhole_versions"] Dilator -> Send [style="dashed" label="send(dilate-N)"] } magic-wormhole-0.12.0/docs/state-machines/mailbox.dot000066400000000000000000000074271400712516500225170ustar00rootroot00000000000000digraph { /* new idea */ title [label="Mailbox\nMachine" style="dotted"] {rank=same; S0A S0B} S0A [label="S0A:\nunknown"] S0A -> S0B [label="connected"] S0B [label="S0B:\nunknown\n(bound)" color="orange"] S0B -> S0A [label="lost"] S0A -> P0A_queue [label="add_message" style="dotted"] P0A_queue [shape="box" label="queue" style="dotted"] 
P0A_queue -> S0A [style="dotted"] S0B -> P0B_queue [label="add_message" style="dotted"] P0B_queue [shape="box" label="queue" style="dotted"] P0B_queue -> S0B [style="dotted"] subgraph {rank=same; S1A P_open} S0A -> S1A [label="got_mailbox"] S1A [label="S1A:\nknown"] S1A -> P_open [label="connected"] S1A -> P1A_queue [label="add_message" style="dotted"] P1A_queue [shape="box" label="queue" style="dotted"] P1A_queue -> S1A [style="dotted"] S1A -> S2A [style="invis"] P_open -> P2_connected [style="invis"] S0A -> S2A [style="invis"] S0B -> P_open [label="got_mailbox" color="orange" fontcolor="orange"] P_open [shape="box" label="store mailbox\nRC.tx_open\nRC.tx_add(queued)" color="orange"] P_open -> S2B [color="orange"] subgraph {rank=same; S2A S2B P2_connected} S2A [label="S2A:\nknown\nmaybe opened"] S2B [label="S2B:\nopened\n(bound)" color="green"] S2A -> P2_connected [label="connected"] S2B -> S2A [label="lost"] P2_connected [shape="box" label="RC.tx_open\nRC.tx_add(queued)"] P2_connected -> S2B S2A -> P2_queue [label="add_message" style="dotted"] P2_queue [shape="box" label="queue" style="dotted"] P2_queue -> S2A [style="dotted"] S2B -> P2_send [label="add_message"] P2_send [shape="box" label="queue\nRC.tx_add(msg)"] P2_send -> S2B {rank=same; P2_send P2_close P2_process_theirs} P2_process_theirs -> P2_close [style="invis"] S2B -> P2_process_ours [label="rx_message\n(ours)"] P2_process_ours [shape="box" label="dequeue"] P2_process_ours -> S2B S2B -> P2_process_theirs [label="rx_message\n(theirs)" color="orange" fontcolor="orange"] P2_process_theirs [shape="box" color="orange" label="N.release\nO.got_message if new\nrecord" ] P2_process_theirs -> S2B [color="orange"] S2B -> P2_close [label="close" color="red"] P2_close [shape="box" label="RC.tx_close" color="red"] P2_close -> S3B [color="red"] subgraph {rank=same; S3A P3_connected S3B} S3A [label="S3A:\nclosing"] S3A -> P3_connected [label="connected"] P3_connected [shape="box" label="RC.tx_close"] P3_connected -> S3B #S3A -> S3A [label="add_message"] # implicit S3B [label="S3B:\nclosing\n(bound)" color="red"] S3B -> S3B [label="add_message\nrx_message\nclose"] S3B -> S3A [label="lost"] subgraph {rank=same; P3A_done P3B_done} P3A_done [shape="box" label="T.mailbox_done" color="red"] P3A_done -> S4A S3B -> P3B_done [label="rx_closed" color="red"] P3B_done [shape="box" label="T.mailbox_done" color="red"] P3B_done -> S4B subgraph {rank=same; S4A S4B} S4A [label="S4A:\nclosed"] S4B [label="S4B:\nclosed"] S4A -> S4B [label="connected"] S4B -> S4A [label="lost"] S4B -> S4B [label="add_message\nrx_message\nclose"] # is "close" needed? 
S0A -> P3A_done [label="close" color="red"] S0B -> P3B_done [label="close" color="red"] S1A -> P3A_done [label="close" color="red"] S2A -> S3A [label="close" color="red"] } magic-wormhole-0.12.0/docs/state-machines/nameplate.dot000066400000000000000000000067531400712516500230330ustar00rootroot00000000000000digraph { /* new idea */ title [label="Nameplate\nMachine" style="dotted"] title -> S0A [style="invis"] {rank=same; S0A S0B} S0A [label="S0A:\nknow nothing"] S0B [label="S0B:\nknow nothing\n(bound)" color="orange"] S0A -> S0B [label="connected"] S0B -> S0A [label="lost"] S0A -> S1A [label="set_nameplate"] S0B -> P2_connected [label="set_nameplate" color="orange" fontcolor="orange"] S1A [label="S1A:\nnever claimed"] S1A -> P2_connected [label="connected"] S1A -> S2A [style="invis"] S1B [style="invis"] S0B -> S1B [style="invis"] S1B -> S2B [style="invis"] {rank=same; S1A S1B} S1A -> S1B [style="invis"] {rank=same; S2A P2_connected S2B} S2A [label="S2A:\nmaybe claimed"] S2A -> P2_connected [label="connected"] P2_connected [shape="box" label="RC.tx_claim" color="orange"] P2_connected -> S2B [color="orange"] S2B [label="S2B:\nmaybe claimed\n(bound)" color="orange"] #S2B -> S2A [label="lost"] # causes bad layout S2B -> foo2 [label="lost"] foo2 [label="" style="dashed"] foo2 -> S2A S2A -> S3A [label="(none)" style="invis"] S2B -> P_open [label="rx_claimed" color="orange" fontcolor="orange"] P_open [shape="box" label="I.got_wordlist\nM.got_mailbox" color="orange"] P_open -> S3B [color="orange"] subgraph {rank=same; S3A S3B} S3A [label="S3A:\nclaimed"] S3B [label="S3B:\nclaimed\n(bound)" color="orange"] S3A -> S3B [label="connected"] S3B -> foo3 [label="lost"] foo3 [label="" style="dashed"] foo3 -> S3A #S3B -> S3B [label="rx_claimed"] # shouldn't happen S3B -> P3_release [label="release" color="orange" fontcolor="orange"] P3_release [shape="box" color="orange" label="RC.tx_release"] P3_release -> S4B [color="orange"] subgraph {rank=same; S4A P4_connected S4B} S4A [label="S4A:\nmaybe released\n"] S4B [label="S4B:\nmaybe released\n(bound)" color="orange"] S4A -> P4_connected [label="connected"] P4_connected [shape="box" label="RC.tx_release"] S4B -> S4B [label="release"] P4_connected -> S4B S4B -> foo4 [label="lost"] foo4 [label="" style="dashed"] foo4 -> S4A S4A -> S5B [style="invis"] P4_connected -> S5B [style="invis"] subgraph {rank=same; P5A_done P5B_done} S4B -> P5B_done [label="rx released" color="orange" fontcolor="orange"] P5B_done [shape="box" label="T.nameplate_done" color="orange"] P5B_done -> S5B [color="orange"] subgraph {rank=same; S5A S5B} S5A [label="S5A:\nreleased"] S5A -> S5B [label="connected"] S5B -> S5A [label="lost"] S5B [label="S5B:\nreleased" color="green"] S5B -> S5B [label="release\nclose"] P5A_done [shape="box" label="T.nameplate_done"] P5A_done -> S5A S0A -> P5A_done [label="close" color="red"] S1A -> P5A_done [label="close" color="red"] S2A -> S4A [label="close" color="red"] S3A -> S4A [label="close" color="red"] S4A -> S4A [label="close" color="red"] S0B -> P5B_done [label="close" color="red"] S2B -> P3_release [label="close" color="red"] S3B -> P3_release [label="close" color="red"] S4B -> S4B [label="close" color="red"] } magic-wormhole-0.12.0/docs/state-machines/order.dot000066400000000000000000000027161400712516500221730ustar00rootroot00000000000000digraph { start [label="Order\nMachine" style="dotted"] /* our goal: deliver PAKE before anything else */ {rank=same; S0 P0_other} {rank=same; S1 P1_other} S0 [label="S0: no pake" color="orange"] S1 [label="S1: yes pake" 
color="green"] S0 -> P0_pake [label="got_pake" color="orange" fontcolor="orange"] P0_pake [shape="box" color="orange" label="K.got_pake\ndrain queue:\n[R.got_message]" ] P0_pake -> S1 [color="orange"] S0 -> P0_other [label="got_version\ngot_phase" style="dotted"] P0_other [shape="box" label="queue" style="dotted"] P0_other -> S0 [style="dotted"] S1 -> P1_other [label="got_version\ngot_phase"] P1_other [shape="box" label="R.got_message"] P1_other -> S1 /* the Mailbox will deliver each message exactly once, but doesn't guarantee ordering: if Alice starts the process, then disconnects, then Bob starts (reading PAKE, sending both his PAKE and his VERSION phase), then Alice will see both PAKE and VERSION on her next connect, and might get the VERSION first. The Wormhole will queue inbound messages that it isn't ready for. The wormhole shim that lets applications do w.get(phase=) must do something similar, queueing inbound messages until it sees one for the phase it currently cares about.*/ } magic-wormhole-0.12.0/docs/state-machines/receive.dot000066400000000000000000000032111400712516500224710ustar00rootroot00000000000000digraph { /* could shave a RTT by committing to the nameplate early, before finishing the rest of the code input. While the user is still typing/completing the code, we claim the nameplate, open the mailbox, and retrieve the peer's PAKE message. Then as soon as the user finishes entering the code, we build our own PAKE message, send PAKE, compute the key, send VERSION. Starting from the Return, this saves two round trips. OTOH it adds consequences to hitting Tab. */ start [label="Receive\nMachine" style="dotted"] S0 [label="S0:\nunknown key" color="orange"] S0 -> P0_got_key [label="got_key" color="orange"] P0_got_key [shape="box" label="record key" color="orange"] P0_got_key -> S1 [color="orange"] S1 [label="S1:\nunverified key" color="orange"] S1 -> P_mood_scary [label="got_message\n(bad)"] S1 -> P1_accept_msg [label="got_message\n(good)" color="orange"] P1_accept_msg [shape="box" label="S.got_verified_key\nB.happy\nB.got_verifier\nB.got_message" color="orange"] P1_accept_msg -> S2 [color="orange"] S2 [label="S2:\nverified key" color="green"] S2 -> P2_accept_msg [label="got_message\n(good)" color="orange"] S2 -> P_mood_scary [label="got_message(bad)"] P2_accept_msg [label="B.got_message" shape="box" color="orange"] P2_accept_msg -> S2 [color="orange"] P_mood_scary [shape="box" label="B.scared" color="red"] P_mood_scary -> S3 [color="red"] S3 [label="S3:\nscared" color="red"] S3 -> S3 [label="got_message"] } magic-wormhole-0.12.0/docs/state-machines/send.dot000066400000000000000000000012111400712516500217760ustar00rootroot00000000000000digraph { start [label="Send\nMachine" style="dotted"] {rank=same; S0 P0_queue} {rank=same; S1 P1_send} S0 [label="S0: unknown\nkey"] S0 -> P0_queue [label="send" style="dotted"] P0_queue [shape="box" label="queue" style="dotted"] P0_queue -> S0 [style="dotted"] S0 -> P0_got_key [label="got_verified_key"] P0_got_key [shape="box" label="drain queue:\n[encrypt\n M.add_message]"] P0_got_key -> S1 S1 [label="S1: verified\nkey"] S1 -> P1_send [label="send"] P1_send [shape="box" label="encrypt\nM.add_message"] P1_send -> S1 } magic-wormhole-0.12.0/docs/state-machines/terminator.dot000066400000000000000000000033761400712516500232470ustar00rootroot00000000000000digraph { /* M_close pathways */ title [label="Terminator\nMachine" style="dotted"] initial [style="invis"] initial -> Snmo [style="dashed"] Snmo [label="Snmo:\nnameplate active\nmailbox 
active\nopen" color="orange"] Sno [label="Sno:\nnameplate active\nmailbox done\nopen"] Smo [label="Smo:\nnameplate done\nmailbox active\nopen" color="green"] S0o [label="S0o:\nnameplate done\nmailbox done\nopen"] Snmo -> Sno [label="mailbox_done"] Snmo -> Smo [label="nameplate_done" color="orange"] Sno -> S0o [label="nameplate_done"] Smo -> S0o [label="mailbox_done"] Snmo -> Snm [label="close"] Sno -> Sn [label="close"] Smo -> Sm [label="close" color="red"] S0o -> P_stop [label="close"] Snm [label="Snm:\nnameplate active\nmailbox active\nclosing" style="dashed"] Sn [label="Sn:\nnameplate active\nmailbox done\nclosing" style="dashed"] Sm [label="Sm:\nnameplate done\nmailbox active\nclosing" style="dashed" color="red"] Snm -> Sn [label="mailbox_done"] Snm -> Sm [label="nameplate_done"] Sn -> P_stop [label="nameplate_done"] Sm -> P_stop [label="mailbox_done" color="red"] {rank=same; S_stopping Pss S_stopped} P_stop [shape="box" label="RC.stop" color="red"] P_stop -> S_stopping [color="red"] S_stopping [label="S_stopping" color="red"] S_stopping -> Pss [label="stopped"] Pss [shape="box" label="B.closed"] Pss -> S_stopped S_stopped [label="S_stopped"] other [shape="box" style="dashed" label="close -> N.close, M.close"] } magic-wormhole-0.12.0/docs/tor.md000066400000000000000000000065601400712516500165720ustar00rootroot00000000000000# Tor Support in Magic-Wormhole The ``wormhole`` command-line tool has built-in support for performing transfers over Tor. To use it, you must install with the "tor" extra, like this: ``` pip install magic-wormhole[tor] ``` ## Usage Just add ``--tor`` to use a running Tor daemon: ``` wormhole send --tor myfile.jpg wormhole receive --tor ``` You should use ``--tor`` rather than running ``wormhole`` under tsocks or torsocks because the magic-wormhole "Transit" protocol normally sends the IP addresses of each computer to its peer, to attempt a direct connection between the two (somewhat like the FTP protocol would do). External tor-ifying programs don't know about this, so they can't strip these addresses out. Using ``--tor`` puts magic-wormhole into a mode where it does not share any IP addresses. ``--tor`` causes the program to look for a Tor control port in the three most common locations: * ``unix:/var/run/tor/control``: Debian/Ubuntu Tor listen here * ``tcp:localhost:9051``: the standard Tor control port * ``tcp:localhost:9151``: control port for TorBrowser's embedded Tor If ``wormhole`` is unable to establish a control-port connection to any of those locations, it will assume there is a SOCKS daemon listening on ``tcp:localhost:9050``, and hope for the best (if no SOCKS daemon is available on that port, the initial Rendezvous connection will fail, and the program will exit with an error before doing anything else). The default behavior will Just Work if: * you are on a Debian-like system, and the ``tor`` package is installed, or: * you have launched the ``tor`` daemon manually, or: * the TorBrowser application is running when you start ``wormhole`` On Debian-like systems, if your account is a member of the ``debian-tor`` group, ``wormhole`` will use the control-port to ask for the right SOCKS port. If not, it should fall back to using the default SOCKS port on 9050. To add your account to the ``debian-tor`` group, use e.g. ``sudo adduser MYUSER debian-tor``. Access to the control-port will be more significant in the future, when ``wormhole`` can listen on "onion services": see below for details. 
## Other Ways To Reach Tor

If ``tor`` is installed, but you cannot use the control-port or SOCKS-port for some reason, then you can use ``--launch-tor`` to ask ``wormhole`` to start a new Tor daemon for the duration of the transfer (and then shut it down afterwards). This will add 30-40 seconds to program startup.

```
wormhole send --tor --launch-tor myfile.jpg
```

Alternatively, if you know of a pre-existing Tor daemon with a non-standard control-port, you can specify that control port with the ``--tor-control-port=`` argument:

```
wormhole send --tor --tor-control-port=tcp:127.0.0.1:9251 myfile.jpg
```

## .onion servers

In the future, ``wormhole`` with ``--tor`` will listen on an ephemeral "onion service" when file transfers are requested. If both sides are Tor-capable, this will allow transfers to take place "directly" (via the Tor network) from sender to receiver, bypassing the Transit Relay server. This will require access to a Tor control-port (to ask Tor to create a new ephemeral onion service). SOCKS-port access will not be sufficient.

However the current version of ``wormhole`` does not use onion services. For now, if both sides use ``--tor``, any file transfers must use the transit relay, since neither side will advertise any listening IP addresses.
magic-wormhole-0.12.0/docs/transit.md000066400000000000000000000254001400712516500174440ustar00rootroot00000000000000# Transit Protocol

The Transit protocol is responsible for establishing an encrypted bidirectional record stream between two programs. It must be given a "transit key" and a set of "hints" which help locate the other end (which are both delivered by Wormhole).

The protocol tries hard to create a **direct** connection between the two ends, but if that fails, it uses a centralized relay server to ferry data between two separate TCP streams (one to each client). Direct connection hints are used for the first, and relay hints are used for the second.

The current implementation starts with the following:

* detect all of the host's IP addresses
* listen on a random TCP port
* offer the (address,port) pairs as hints

The other side will attempt to connect to each of those ports, as well as listening on its own socket. After a few seconds without success, they will both connect to a relay server.

## Roles

The Transit protocol has pre-defined "Sender" and "Receiver" roles (unlike Wormhole, which is symmetric/nobody-goes-first). Each connection must have exactly one Sender and exactly one Receiver.

The connection itself is bidirectional: either side can send or receive records. However the connection establishment mechanism needs to know who is in charge, and the encryption layer needs a way to produce separate keys for each side. This may be relaxed in the future, much as Wormhole was.

## Records

Transit establishes a **record-pipe**, so the two sides can send and receive whole records, rather than unframed bytes. This is a side-effect of the encryption (which uses the NaCl "secretbox" function). The encryption adds 44 bytes of overhead to each record (4-byte length, 24-byte nonce, 16-byte MAC), so you might want to use slightly larger records for efficiency. The maximum record size is 2^32 bytes (4GiB). The whole record must be held in memory at the same time, plus its ciphertext, so very large ciphertexts are not recommended.

Transit provides **confidentiality**, **integrity**, and **ordering** of records.
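As a concrete illustration of the framing just described, here is a sketch of sealing one record with PyNaCl's `SecretBox` (illustrative only, not the actual `wormhole.transit` code; in particular the real implementation manages nonces sequentially rather than accepting them as an argument like this):

```python
import struct
from nacl.secret import SecretBox  # PyNaCl

def frame_record(box, nonce, plaintext):
    # SecretBox.encrypt() returns nonce + ciphertext, where the ciphertext
    # carries a 16-byte Poly1305 MAC in addition to the plaintext
    sealed = box.encrypt(plaintext, nonce)
    # prepend a 4-byte length: total overhead is 4 + 24 + 16 = 44 bytes
    return struct.pack(">I", len(sealed)) + bytes(sealed)

box = SecretBox(b"k" * SecretBox.KEY_SIZE)  # demo key only
nonce = b"\x00" * SecretBox.NONCE_SIZE      # demo nonce; never reuse a nonce
record = frame_record(box, nonce, b"hello")
assert len(record) == 44 + len(b"hello")
```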
Passive attackers can only do the following:

* learn the size and transmission time of each record
* learn the sending and destination IP addresses

In addition, an active attacker is able to:

* delay delivery of individual records, while maintaining ordering (if they delay record #4, they must delay #5 and later as well)
* terminate the connection at any time

If either side receives a corrupted or out-of-order record, they drop the connection. Attackers cannot modify the contents of a record, or change the order of the records, without being detected and the connection being dropped. If a record is lost (e.g. the receiver observes records #1,#2,#4, but not #3), the connection is dropped when the unexpected sequence number is received.

## Handshake

The transit key is used to derive several secondary keys. Two of them are used as a "handshake", to distinguish correct Transit connections from other programs that happen to connect to the Transit sockets by mistake or malice. The handshake is also responsible for choosing exactly one TCP connection to use, even though multiple outbound and inbound connections are being attempted.

The SENDER-HANDSHAKE is the string `transit sender %s ready\n\n`, with the `%s` replaced by a hex-encoded 32-byte HKDF derivative of the transit key, using a "context string" of `transit_sender`. The RECEIVER-HANDSHAKE is the same but with `receiver` instead of `sender` (both for the string and the HKDF context).

The handshake protocol is like this:

* immediately upon connection establishment, the Sender writes SENDER-HANDSHAKE to the socket (regardless of whether the Sender initiated the TCP connection, or was listening on a socket and accepted the connection)
* likewise the Receiver immediately writes RECEIVER-HANDSHAKE to either kind of socket
* if the Sender sees anything other than RECEIVER-HANDSHAKE as the first bytes on the wire, it hangs up
* likewise with the Receiver and SENDER-HANDSHAKE
* if the Sender sees that this is the first connection to get RECEIVER-HANDSHAKE, it sends `go\n`. If some other connection got there first, it hangs up (or sends `nevermind\n` and then hangs up, but this is mostly for debugging, and implementations should not depend upon it). After sending `go`, it switches to encrypted-record mode.
* if the Receiver sees `go\n`, it switches to encrypted-record mode. If the receiver sees anything else, or a disconnected socket, it disconnects.

To tolerate the inevitable race conditions created by multiple contending sockets, only the Sender gets to decide which one wins: the first one to make it past negotiation. Hopefully this is correlated with the fastest connection pathway. The protocol ignores any socket that is not somewhat affiliated with the matching Transit instance. Hints will frequently point to local IP addresses (local to the other end) which might be in use by unrelated nearby computers. The handshake helps to ignore these spurious connections.

It is still possible for an attacker to cause the connection to fail, by intercepting both connections (to learn the two handshakes), then making new connections to play back the recorded handshakes, but this level of attacker could simply drop the user's packets directly.

Any participant in a Transit connection (i.e. the party on the other end of your wormhole) can cause their peer to make a TCP connection (and send the handshake string) to any IP address and port of their choosing. The handshake protocol is intended to make this no more than a minor nuisance.
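To make the derivation concrete, here is a sketch of computing the two handshake strings with HKDF-SHA256 from the `cryptography` package (the real code uses magic-wormhole's own HKDF helper, so treat the salt and hash choices here as assumptions):

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def handshake_string(transit_key, role):
    # role is "sender" or "receiver"; the HKDF context is e.g. "transit_sender"
    subkey = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                  info=b"transit_" + role.encode()).derive(transit_key)
    return ("transit %s %s ready\n\n" % (role, subkey.hex())).encode()
```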
## Relay

The **Transit Relay** is a host which offers TURN-like services for magic-wormhole instances. It uses a TCP-based protocol with a handshake to determine which connection wants to be connected to which.

When connecting to a relay, the Transit client first writes RELAY-HANDSHAKE to the socket, which is `please relay %s\n`, where `%s` is the hex-encoded 32-byte HKDF derivative of the transit key, using `transit_relay_token` as the context. The client then waits for `ok\n`.

The relay waits for a second connection that uses the same token. When this happens, the relay sends `ok\n` to both, then wires the connections together, so that everything received after the token on one is written out (after the ok) on the other. When either connection is lost, the other will be closed (the relay does not support "half-close").

When clients use a relay connection, they perform the usual sender/receiver handshake just after the `ok\n` is received: until that point they pretend the connection doesn't even exist.

Direct connections are better, since they are faster and less expensive for the relay operator. If there are any potentially-viable direct connection hints available, the Transit instance will wait a few seconds before attempting to use the relay. If it has no viable direct hints, it will start using the relay right away. This prefers direct connections, but doesn't introduce completely unnecessary stalls.

The Transit client can attempt connections to multiple relays, and uses the first one that passes negotiation. Each side combines a locally-configured hostname/port (usually "baked in" to the application, and hosted by the application author) with additional hostname/port pairs that come from the peer. This way either side can suggest the relays to use. The `wormhole` application accepts a `--transit-helper tcp:myrelay.example.org:12345` command-line option to supply an additional relay. The connection hints provided by the Transit instance include the locally-configured relay, along with the dynamically-determined direct hints. Both should be delivered to the peer.

## API

The Transit API uses Twisted and returns Deferreds for any call that cannot be handled immediately. The complete example is here:

```python
from twisted.internet.defer import inlineCallbacks
from wormhole.transit import TransitSender

@inlineCallbacks
def do_transit():
    s = TransitSender("tcp:relayhost.example.org:12345")
    my_connection_hints = yield s.get_connection_hints()
    # (send my hints via wormhole)
    # (get their hints via wormhole)
    s.add_connection_hints(their_connection_hints)
    key = w.derive_key(application_id + "/transit-key")
    s.set_transit_key(key)
    rp = yield s.connect()
    rp.send_record(b"my first record")
    their_record = yield rp.receive_record()
    rp.send_record(b"Greatest Hits")
    other = yield rp.receive_record()
    yield rp.close()
```

First, create a Transit instance, giving it the connection information of the "baked-in" transit relay. The application must know whether it should use a Sender or a Receiver:

```python
from wormhole.transit import TransitSender
s = TransitSender(baked_in_relay)
```

Next, ask the Transit for its direct and relay hints. This should be delivered to the other side via a Wormhole message (i.e. add them to a dict, serialize it with JSON, send the result as a message with `wormhole.send()`). The `get_connection_hints` method returns a Deferred, so in the example we use `@inlineCallbacks` to `yield` the result.
```python
my_connection_hints = yield s.get_connection_hints()
```

Then, perform the Wormhole exchange, which ought to give you the direct and relay hints of the other side. Tell your Transit instance about their hints.

```python
s.add_connection_hints(their_connection_hints)
```

Then use `wormhole.derive_key()` to obtain a shared key for Transit purposes, and tell your Transit about it. Both sides must use the same derivation string, and this string must not be used for any other purpose, but beyond that it doesn't much matter what the exact derivation string is. The key is secret, of course.

```python
key = w.derive_key(application_id + "/transit-key")
s.set_transit_key(key)
```

Finally, tell the Transit instance to connect. This returns a Deferred that will yield a "record pipe" object, on which records can be sent and received. If no connection can be established within a timeout (defaults to 30 seconds), `connect()` will signal a Failure instead. The pipe can be closed with `close()`, which returns a Deferred that fires when all data has been flushed.

```python
rp = yield s.connect()
rp.send_record(b"my first record")
their_record = yield rp.receive_record()
rp.send_record(b"Greatest Hits")
other = yield rp.receive_record()
yield rp.close()
```

Records can be sent and received in arbitrary order (you are not limited to taking turns). The record-pipe object also implements the `IConsumer`/`IProducer` protocols for **bytes**, which means you can transfer a file by wiring up a file reader as a Producer. Each chunk of bytes that the Producer generates will be put into a single record. The Consumer interface works the same way. This enables backpressure and flow-control: if the far end (or the network) cannot keep up with the stream of data, the sender will wait for them to catch up before filling buffers without bound.
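For example, Twisted's stock `FileSender` is an `IProducer` that reads a file in chunks, so a file transfer over a connected record-pipe could be sketched like this (assuming the record-pipe accepts a producer the way standard Twisted consumers do; the real file-transfer code lives in `src/wormhole/cli/cmd_send.py`):

```python
from twisted.internet.defer import inlineCallbacks
from twisted.protocols.basic import FileSender

@inlineCallbacks
def send_file(rp, path):
    # each chunk produced by FileSender becomes one encrypted record
    with open(path, "rb") as f:
        yield FileSender().beginFileTransfer(f, rp)
    yield rp.close()
```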
magic-wormhole-0.12.0/docs/w.dot000066400000000000000000000077451400712516500164300ustar00rootroot00000000000000digraph { /* NM_start [label="Nameplate\nMachine" style="dotted"] NM_start -> NM_S_unclaimed [style="invis"] NM_S_unclaimed [label="no nameplate"] NM_S_unclaimed -> NM_S_unclaimed [label="NM_release()"] NM_P_set_nameplate [shape="box" label="post_claim()"] NM_S_unclaimed -> NM_P_set_nameplate [label="NM_set_nameplate()"] NM_S_claiming [label="claim pending"] NM_P_set_nameplate -> NM_S_claiming NM_S_claiming -> NM_P_rx_claimed [label="rx claimed"] NM_P_rx_claimed [label="MM_set_mailbox()" shape="box"] NM_P_rx_claimed -> NM_S_claimed NM_S_claimed [label="claimed"] NM_S_claimed -> NM_P_release [label="NM_release()"] NM_P_release [shape="box" label="post_release()"] NM_P_release -> NM_S_releasing NM_S_releasing [label="release pending"] NM_S_releasing -> NM_S_releasing [label="NM_release()"] NM_S_releasing -> NM_S_released [label="rx released"] NM_S_released [label="released"] NM_S_released -> NM_S_released [label="NM_release()"] */ /* MM_start [label="Mailbox\nMachine" style="dotted"] MM_start -> MM_S_want_mailbox [style="invis"] MM_S_want_mailbox [label="want mailbox"] MM_S_want_mailbox -> MM_P_queue1 [label="MM_send()" style="dotted"] MM_P_queue1 [shape="box" style="dotted" label="queue message"] MM_P_queue1 -> MM_S_want_mailbox [style="dotted"] MM_P_open_mailbox [shape="box" label="post_open()"] MM_S_want_mailbox -> MM_P_open_mailbox [label="set_mailbox()"] MM_P_send_queued [shape="box" label="post add() for\nqueued messages"] MM_P_open_mailbox -> MM_P_send_queued MM_P_send_queued -> MM_S_open MM_S_open [label="open\n(unused)"] MM_S_open -> MM_P_send1 [label="MM_send()"] MM_P_send1 [shape="box" label="post add()\nfor message"] MM_P_send1 -> MM_S_open MM_S_open -> MM_P_release1 [label="MM_close()"] MM_P_release1 [shape="box" label="NM_release()"] MM_P_release1 -> MM_P_close MM_S_open -> MM_P_rx [label="rx message"] MM_P_rx [shape="box" label="WM_rx_pake()\nor WM_rx_msg()"] MM_P_rx -> MM_P_release2 MM_P_release2 [shape="box" label="NM_release()"] MM_P_release2 -> MM_S_used MM_S_used [label="open\n(used)"] MM_S_used -> MM_P_rx [label="rx message"] MM_S_used -> MM_P_send2 [label="MM_send()"] MM_P_send2 [shape="box" label="post add()\nfor message"] MM_P_send2 -> MM_S_used MM_S_used -> MM_P_close [label="MM_close()"] MM_P_close [shape="box" label="post_close(mood)"] MM_P_close -> MM_S_closing MM_S_closing [label="waiting"] MM_S_closing -> MM_S_closing [label="MM_close()"] MM_S_closing -> MM_S_closed [label="rx closed"] MM_S_closed [label="closed"] MM_S_closed -> MM_S_closed [label="MM_close()"] */ /* upgrading to new PAKE algorithm, the slower form (the faster form puts the pake_abilities record in the nameplate_info message) */ /* P2_start [label="(PAKE\nupgrade)\nstart"] P2_start -> P2_P_send_abilities [label="set_code()"] P2_P_send_abilities [shape="box" label="send pake_abilities"] P2_P_send_abilities -> P2_wondering P2_wondering [label="waiting\nwondering"] P2_wondering -> P2_P_send_pakev1 [label="rx pake_v1"] P2_P_send_pakev1 [shape="box" label="send pake_v1"] P2_P_send_pakev1 -> P2_P_process_v1 P2_P_process_v1 [shape="box" label="process v1"] P2_wondering -> P2_P_find_max [label="rx pake_abilities"] P2_P_find_max [shape="box" label="find max"] P2_P_find_max -> P2_P_send_pakev2 P2_P_send_pakev2 P2_P_send_pakev2 [shape="box" label="send pake_v2"] P2_P_send_pakev2 -> P2_P_process_v2 [label="rx pake_v2"] P2_P_process_v2 [shape="box" label="process v2"] */ } 
magic-wormhole-0.12.0/docs/welcome.md000066400000000000000000000273351400712516500174240ustar00rootroot00000000000000# Welcome Get things from one computer to another, safely. This package provides a library and a command-line tool named `wormhole`, which makes it possible to get arbitrary-sized files and directories (or short pieces of text) from one computer to another. The two endpoints are identified by using identical "wormhole codes": in general, the sending machine generates and displays the code, which must then be typed into the receiving machine. The codes are short and human-pronounceable, using a phonetically-distinct wordlist. The receiving side offers tab-completion on the codewords, so usually only a few characters must be typed. Wormhole codes are single-use and do not need to be memorized. * PyCon 2016 presentation: [Slides](http://www.lothar.com/~warner/MagicWormhole-PyCon2016.pdf), [Video](https://youtu.be/oFrTqQw0_3c) ## Example Sender: ``` % wormhole send README.md Sending 7924 byte file named 'README.md' On the other computer, please run: wormhole receive Wormhole code is: 7-crossover-clockwork Sending (<-10.0.1.43:58988).. 100%|=========================| 7.92K/7.92K [00:00<00:00, 6.02MB/s] File sent.. waiting for confirmation Confirmation received. Transfer complete. ``` Receiver: ``` % wormhole receive Enter receive wormhole code: 7-crossover-clockwork Receiving file (7924 bytes) into: README.md ok? (y/n): y Receiving (->tcp:10.0.1.43:58986).. 100%|===========================| 7.92K/7.92K [00:00<00:00, 120KB/s] Received file written to README.md ``` ## Installation The easiest way to install magic-wormhole is to use a packaged version from your operating system. If there is none, or you want to participate in development, you can install from source. ### MacOS / OS-X [Install Homebrew](https://brew.sh/), then run `brew install magic-wormhole`. ### Linux (Debian/Ubuntu) Magic-wormhole is available with `apt` in Debian 9 "stretch", Ubuntu 17.04 "zesty", and later versions: ``` $ sudo apt install magic-wormhole ``` ### Linux (Fedora) ``` $ sudo dnf install magic-wormhole ``` ### Linux (openSUSE) ``` $ sudo zypper install python-magic-wormhole ``` ### Linux (Snap package) Many linux distributions (including Ubuntu) can install ["Snap" packages](https://snapcraft.io/). Magic-wormhole is available through a third-party package (published by the "snapcrafters" group): ``` $ sudo snap install wormhole ``` ### Install from Source Magic-wormhole is a Python package, and can be installed in the usual ways. The basic idea is to do `pip install magic-wormhole`, however to avoid modifying the system's python libraries, you probably want to put it into a "user" environment (putting the ``wormhole`` executable in ``~/.local/bin/wormhole``) like this: ``` pip install --user magic-wormhole ``` or put it into a virtualenv, like this: ``` virtualenv venv source venv/bin/activate pip install magic-wormhole ``` You can then run `venv/bin/wormhole` without first activating the virtualenv, so e.g. you could make a symlink from `~/bin/wormhole` to `.../path/to/venv/bin/wormhole`, and then plain `wormhole send` will find it on your `$PATH`. You probably *don't* want to use ``sudo`` when you run ``pip``. This tends to create [conflicts](https://github.com/warner/magic-wormhole/issues/336) with the system python libraries. 
On OS X, you may need to pre-install `pip`, and run `$ xcode-select --install` to get GCC, which is needed to compile the `libsodium` cryptography library during the installation process. On Debian/Ubuntu systems, you may need to install some support libraries first: `$ sudo apt-get install python-pip build-essential python-dev libffi-dev libssl-dev` On Linux, if you get errors like `fatal error: sodium.h: No such file or directory`, either use `SODIUM_INSTALL=bundled pip install magic-wormhole`, or try installing the `libsodium-dev` / `libsodium-devel` package. These work around a bug in pynacl which gets confused when the libsodium runtime is installed (e.g. `libsodium13`) but not the development package. On Windows, python2 may work better than python3. On older systems, `$ pip install --upgrade pip` may be necessary to get a version that can compile all the dependencies. Most of the dependencies are published as binary wheels, but in case your system is unable to find these, it will have to compile them, for which Microsoft Visual C++ 9.0 may be required. Get it from http://aka.ms/vcpython27 . ## Motivation * Moving a file to a friend's machine, when the humans can speak to each other (directly) but the computers cannot * Delivering a properly-random password to a new user via the phone * Supplying an SSH public key for future login use Copying files onto a USB stick requires physical proximity, and is uncomfortable for transferring long-term secrets because flash memory is hard to erase. Copying files with ssh/scp is fine, but requires previous arrangements and an account on the target machine, and how do you bootstrap the account? Copying files through email first requires transcribing an email address in the opposite direction, and is even worse for secrets, because email is unencrypted. Copying files through encrypted email requires bootstrapping a GPG key as well as an email address. Copying files through Dropbox is not secure against the Dropbox server and results in a large URL that must be transcribed. Using a URL shortener adds an extra step, reveals the full URL to the shortening service, and leaves a short URL that can be guessed by outsiders. Many common use cases start with a human-mediated communication channel, such as IRC, IM, email, a phone call, or a face-to-face conversation. Some of these are basically secret, or are "secret enough" to last until the code is delivered and used. If this does not feel strong enough, users can turn on additional verification that doesn't depend upon the secrecy of the channel. The notion of a "magic wormhole" comes from the image of two distant wizards speaking the same enchanted phrase at the same time, and causing a mystical connection to pop into existence between them. The wizards then throw books into the wormhole and they fall out the other side. Transferring files securely should be that easy. ## Design The `wormhole` tool uses PAKE "Password-Authenticated Key Exchange", a family of cryptographic algorithms that uses a short low-entropy password to establish a strong high-entropy shared key. This key can then be used to encrypt data. `wormhole` uses the SPAKE2 algorithm, due to Abdalla and Pointcheval[1]. PAKE effectively trades off interaction against offline attacks. The only way for a network attacker to learn the shared key is to perform a man-in-the-middle attack during the initial connection attempt, and to correctly guess the code being used by both sides. 
Their chance of doing this is inversely proportional to the entropy of the wormhole code. The default is to use a 16-bit code (use --code-length= to change this), so for each use of the tool, an attacker gets a 1-in-65536 chance of success. As such, users can expect to see many error messages before the attacker has a reasonable chance of success.

## Timing

The program does not have any built-in timeouts, however it is expected that both clients will be run within an hour or so of each other. This makes the tool most useful for people who are having a real-time conversation already, and want to graduate to a secure connection. Both clients must be left running until the transfer has finished.

## Relays

The wormhole library requires a "Rendezvous Server": a simple WebSocket-based relay that delivers messages from one client to another. This allows the wormhole codes to omit IP addresses and port numbers. The URL of a public server is baked into the library for use as a default, and will be freely available until volume or abuse makes it infeasible to support. Applications which desire more reliability can easily run their own relay and configure their clients to use it instead. Code for the Rendezvous Server is included in the library.

The file-transfer commands also use a "Transit Relay", which is another simple server that glues together two inbound TCP connections and transfers data on each to the other. The `wormhole send` file mode shares the IP addresses of each client with the other (inside the encrypted message), and both clients first attempt to connect directly. If this fails, they fall back to using the transit relay. As before, the host/port of a public server is baked into the library, and should be sufficient to handle moderate traffic.

Code for the Transit Relay is provided in a separate package named `magic-wormhole-transit-relay`.

The protocol includes provisions to deliver notices and error messages to clients: if either relay must be shut down, these channels will be used to provide information about alternatives.

## CLI tool

* `wormhole send [args] --text TEXT`
* `wormhole send [args] FILENAME`
* `wormhole send [args] DIRNAME`
* `wormhole receive [args]`

Both commands accept additional arguments to influence their behavior:

* `--code-length WORDS`: use more or fewer than 2 words for the code
* `--verify` : print (and ask user to compare) extra verification string

## Library

The `wormhole` module makes it possible for other applications to use these code-protected channels. This includes Twisted support, and (in the future) will include blocking/synchronous support too. See docs/api.md for details.

The file-transfer tools use a second module named `wormhole.transit`, which provides an encrypted record-pipe. It knows how to use the Transit Relay as well as direct connections, and attempts them all in parallel. `TransitSender` and `TransitReceiver` are distinct, although once the connection is established, data can flow in either direction. All data is encrypted (using nacl/libsodium "secretbox") using a key derived from the PAKE phase. See `src/wormhole/cli/cmd_send.py` for examples.

## Development

* Bugs and Patches: https://github.com/warner/magic-wormhole
* Chat: #magic-wormhole on irc.freenode.net

To set up Magic Wormhole for development, you will first need to install [virtualenv][].
Once you've done that, ``git clone`` the repo, ``cd`` into the root of the repository, and run: ``` virtualenv venv source venv/bin/activate pip install --upgrade pip setuptools ``` Now your virtualenv has been activated. You'll want to re-run `source venv/bin/activate` for every new terminal session you open. To install Magic Wormhole and its development dependencies into your virtualenv, run: ``` pip install -e .[dev] ``` While the virtualenv is active, running ``wormhole`` will get you the development version. ### Running Tests Within your virtualenv, the command-line program `trial` will run the test suite: ``` trial wormhole ``` This tests the entire `wormhole` package. If you want to run only the tests for a specific module, or even just a specific test, you can specify it instead via Python's standard dotted import notation, e.g.: ``` trial wormhole.test.test_cli.PregeneratedCode.test_file_tor ``` Developers can also just clone the source tree and run `tox` to run the unit tests on all supported (and installed) versions of python: 2.7, 3.4, 3.5, and 3.6. ### Troubleshooting Every so often, you might get a traceback with the following kind of error: ``` pkg_resources.DistributionNotFound: The 'magic-wormhole==0.9.1-268.g66e0d86.dirty' distribution was not found and is required by the application ``` If this happens, run `pip install -e .[dev]` again. [virtualenv]: http://python-guide-pt-br.readthedocs.io/en/latest/dev/virtualenvs/ ### Other Relevant [xkcd](https://xkcd.com/949/) :-) ## License, Compatibility This library is released under the MIT license, see LICENSE for details. This library is compatible with python2.7, 3.4 (non-Windows-only), 3.5, and 3.6 . [1]: http://www.di.ens.fr/~pointche/Documents/Papers/2005_rsa.pdf "RSA 2005" magic-wormhole-0.12.0/docs/wormhole.1000066400000000000000000000021011400712516500173450ustar00rootroot00000000000000.TH WORMHOLE "1" "July 2016" "" "User Commands" .SH NAME wormhole \- Securely and simply transfer data between computers .SH SYNOPSIS .B wormhole [\fI\,OPTIONS\/\fR] \fI\,COMMAND \/\fR[\fI\,ARGS\/\fR]... .SH DESCRIPTION .IP Create a Magic Wormhole and communicate through it. .IP Wormholes are created by speaking the same magic CODE in two different places at the same time. Wormholes are secure against anyone who doesn't use the same code. .SH OPTIONS .TP \fB\-\-relay\-url\fR URL rendezvous relay to use .TP \fB\-\-transit\-helper\fR tcp:HOST:PORT transit relay to use .TP \fB\-\-dump\-timing\fR FILE.json (debug) write timing data to file .TP \fB\-\-version\fR Show the version and exit. .TP \fB\-\-help\fR Show this message and exit. .SS "Commands:" .TP receive Receive a text message, file, or directory... .TP send Send a text message, file, or directory .SH SEE ALSO .BR wormhole-server (8) .SH AUTHORS Brian Warner .PP This manual was written by Jameson Rollins for the Debian project (and may be used by others). magic-wormhole-0.12.0/misc/000077500000000000000000000000001400712516500154405ustar00rootroot00000000000000magic-wormhole-0.12.0/misc/demo-journal.py000066400000000000000000000236701400712516500204160ustar00rootroot00000000000000import os, sys, json, contextlib, random from twisted.internet import task, defer, endpoints from twisted.application import service, internet from twisted.web import server, static, resource from wormhole import journal, wormhole # considerations for state management: # * be somewhat principled about the data (e.g. 
have a schema)
# * discourage accidental schema changes
# * avoid surprise mutations by app code (don't hand out mutables)
# * discourage app from keeping state itself: make state object easy enough
#   to use for everything. App should only hold objects that are active
#   (Services, subscribers, etc). App must wire up these objects each time.

def parse(args):
    raise NotImplementedError

def update_my_state():
    raise NotImplementedError

class State(object):
    @classmethod
    def create_empty(klass):
        self = klass()
        # to avoid being tripped up by state-mutation side-effect bugs, we
        # hold the serialized state in RAM, and re-deserialize it each time
        # someone asks for a piece of it.
        # iid->invitation_state
        empty = {"version": 1,
                 "invitations": {},
                 "contacts": [],
                 }
        self._bytes = json.dumps(empty).encode("utf-8")
        return self

    @classmethod
    def from_filename(klass, fn):
        self = klass()
        with open(fn, "rb") as f:
            bytes = f.read()
        self._bytes = bytes
        # version check
        data = self._as_data()
        assert data["version"] == 1
        # schema check?
        return self

    def save_to_filename(self, fn):
        tmpfn = fn + ".tmp"
        with open(tmpfn, "wb") as f:
            f.write(self._bytes)
        os.rename(tmpfn, fn)

    def _as_data(self):
        return json.loads(self._bytes.decode("utf-8"))

    @contextlib.contextmanager
    def _mutate(self):
        data = self._as_data()
        yield data  # mutable
        self._bytes = json.dumps(data).encode("utf-8")

    def get_all_invitations(self):
        return self._as_data()["invitations"]

    def add_invitation(self, iid, invitation_state):
        with self._mutate() as data:
            data["invitations"][iid] = invitation_state

    def update_invitation(self, iid, invitation_state):
        with self._mutate() as data:
            assert iid in data["invitations"]
            data["invitations"][iid] = invitation_state

    def remove_invitation(self, iid):
        with self._mutate() as data:
            del data["invitations"][iid]

    def add_contact(self, contact):
        with self._mutate() as data:
            data["contacts"].append(contact)

class Root(resource.Resource):
    pass

class Status(resource.Resource):
    def __init__(self, c):
        resource.Resource.__init__(self)
        self._call = c

    def render_GET(self, req):
        data = self._call()
        req.setHeader(b"content-type", "text/plain")
        return data

class Action(resource.Resource):
    def __init__(self, c):
        resource.Resource.__init__(self)
        self._call = c

    def render_POST(self, req):
        req.setHeader(b"content-type", "text/plain")
        try:
            args = json.load(req.content)
        except ValueError:
            req.setResponseCode(500)
            return b"bad JSON"
        data = self._call(args)
        return data

class Agent(service.MultiService):
    def __init__(self, basedir, reactor):
        service.MultiService.__init__(self)
        self._basedir = basedir
        self._reactor = reactor

        root = Root()
        site = server.Site(root)
        ep = endpoints.serverFromString(reactor, "tcp:8220")
        internet.StreamServerEndpointService(ep, site).setServiceParent(self)

        self._jm = journal.JournalManager(self._save_state)

        root.putChild(b"", static.Data("root", "text/plain"))
        root.putChild(b"list-invitations", Status(self._list_invitations))
        root.putChild(b"invite", Action(self._invite))  # {petname:}
        root.putChild(b"accept", Action(self._accept))  # {petname:, code:}

        self._state_fn = os.path.join(self._basedir, "state.json")
        self._state = State.from_filename(self._state_fn)

        self._wormholes = {}
        for iid, invitation_state in self._state.get_all_invitations().items():
            def _dispatch(event, *args, **kwargs):
                self._dispatch_wormhole_event(iid, event, *args, **kwargs)
            w = wormhole.journaled_from_data(invitation_state["wormhole"],
                                             reactor=self._reactor,
                                             journal=self._jm,
                                             event_handler=self,
                                             event_handler_args=(iid,))
            self._wormholes[iid] = w
w.setServiceParent(self) def _save_state(self): self._state.save_to_filename(self._state_fn) def _list_invitations(self): inv = self._state.get_all_invitations() lines = ["%d: %s" % (iid, inv[iid]) for iid in sorted(inv)] return b"\n".join(lines) + b"\n" def _invite(self, args): print("invite", args) petname = args["petname"] # it'd be better to use a unique object for the event_handler # correlation, but we can't store them into the state database. I'm # not 100% sure we need one for the database: maybe it should hold a # list instead, and assign lookup keys at runtime. If they really # need to be serializable, they should be allocated rather than # random. iid = random.randint(1, 1000) my_pubkey = random.randint(1, 1000) with self._jm.process(): w = wormhole.journaled(reactor=self._reactor, journal=self._jm, event_handler=self, event_handler_args=(iid,)) self._wormholes[iid] = w w.setServiceParent(self) w.get_code() # event_handler means code returns via callback invitation_state = {"wormhole": w.to_data(), "petname": petname, "my_pubkey": my_pubkey, } self._state.add_invitation(iid, invitation_state) return b"ok" def _accept(self, args): print("accept", args) petname = args["petname"] code = args["code"] iid = random.randint(1, 1000) my_pubkey = random.randint(2, 2000) with self._jm.process(): w = wormhole.journaled(reactor=self._reactor, journal=self._jm, event_dispatcher=self, event_dispatcher_args=(iid,)) w.set_code(code) md = {"my_pubkey": my_pubkey} w.send(json.dumps(md).encode("utf-8")) invitation_state = {"wormhole": w.to_data(), "petname": petname, "my_pubkey": my_pubkey, } self._state.add_invitation(iid, invitation_state) return b"ok" # dispatch options: # * register one function, which takes (eventname, *args) # * to handle multiple wormholes, app must give is a closure # * register multiple functions (one per event type) # * register an object, with well-known method names # * extra: register args and/or kwargs with the callback # # events to dispatch: # generated_code(code) # got_verifier(verifier_bytes) # verified() # got_data(data_bytes) # closed() def wormhole_dispatch_got_code(self, code, iid): # we're already in a jm.process() context invitation_state = self._state.get_all_invitations()[iid] invitation_state["code"] = code self._state.update_invitation(iid, invitation_state) self._wormholes[iid].set_code(code) # notify UI subscribers to update the display def wormhole_dispatch_got_verifier(self, verifier, iid): pass def wormhole_dispatch_verified(self, _, iid): pass def wormhole_dispatch_got_data(self, data, iid): invitation_state = self._state.get_all_invitations()[iid] md = json.loads(data.decode("utf-8")) contact = {"petname": invitation_state["petname"], "my_pubkey": invitation_state["my_pubkey"], "their_pubkey": md["my_pubkey"], } self._state.add_contact(contact) self._wormholes[iid].close() # now waiting for "closed" def wormhole_dispatch_closed(self, _, iid): self._wormholes[iid].disownServiceParent() del self._wormholes[iid] self._state.remove_invitation(iid) def handle_app_event(self, args, ack_f): # sample function # Imagine here that the app has received a message (not # wormhole-related) from some other server, and needs to act on it. # Also imagine that ack_f() is how we tell the sender that they can # stop sending the message, or how we ask our poller/subscriber # client to send a DELETE message. If the process dies before ack_f() # delivers whatever it needs to deliver, then in the next launch, # handle_app_event() will be called again. 
        stuff = parse(args)  # noqa
        with self._jm.process():
            update_my_state()
            self._jm.queue_outbound(ack_f)

def create(reactor, basedir):
    os.mkdir(basedir)
    s = State.create_empty()
    s.save_to_filename(os.path.join(basedir, "state.json"))
    return defer.succeed(None)

def run(reactor, basedir):
    a = Agent(basedir, reactor)
    a.startService()
    print("agent listening on http://localhost:8220/")
    d = defer.Deferred()
    return d

if __name__ == "__main__":
    command = sys.argv[1]
    basedir = sys.argv[2]
    if command == "create":
        task.react(create, (basedir,))
    elif command == "run":
        task.react(run, (basedir,))
    else:
        print("Unrecognized subcommand '%s'" % command)
        sys.exit(1)
magic-wormhole-0.12.0/misc/dump-stats.py000077500000000000000000000006451400712516500201230ustar00rootroot00000000000000from __future__ import print_function
import time, json

# Run this as 'watch python misc/dump-stats.py' against a 'wormhole-server
# start --stats-file=stats.json'

with open("stats.json") as f:
    data_s = f.read()
now = time.time()
data = json.loads(data_s)
if now < data["valid_until"]:
    valid = "valid"
else:
    valid = "EXPIRED"
age = now - data["created"]
print("age: %d (%s)" % (age, valid))
print(data_s)
magic-wormhole-0.12.0/misc/dump-timing.py000066400000000000000000000037341400712516500202530ustar00rootroot00000000000000# To use the web() option, you should do:
# * cd misc
# * npm install d3@3.5.17 d3-tip@0.6.7 zepto

from __future__ import print_function
import os, sys, time, json, random

streams = sys.argv[1:]
if len(streams) != 2:
    print("run like: python dump-timing.py tx.json rx.json")
    sys.exit(1)
# for now, require sender as first file, receiver as second
# later, allow use of only one file.

data = {}
for i,fn in enumerate(streams):
    name = ["send", "receive"][i]
    with open(fn, "rb") as f:
        events = json.load(f)
    data[name] = {"fn": os.path.basename(fn),
                  "events": events}
from pprint import pprint
pprint(data)

here = os.path.dirname(__file__)
web_root = os.path.join(here, "web")
lib_root = os.path.join(here, "node_modules")
if not os.path.isdir(lib_root):
    print("Cannot find 'd3' and 'd3-tip' in misc/node_modules/")
    print("Please run 'npm install d3 d3-tip zepto' from the misc/ directory.")
    sys.exit(1)

def web():
    # set up a server that serves web/ at the root, plus a /data.json built
    # from {timeline}. Quit when it fetches /done .
from twisted.web import resource, static, server from twisted.internet import reactor, endpoints ep = endpoints.serverFromString(reactor, "tcp:8066:interface=127.0.0.1") root = static.File(web_root) root.putChild("data.json", static.Data(json.dumps(data).encode("utf-8"), "application/json")) root.putChild("lib", static.File(lib_root)) class Shutdown(resource.Resource): def render_GET(self, request): #print("timeline ready, server shutting down") #reactor.stop() return "shutting down" root.putChild("done", Shutdown()) site = server.Site(root) ep.listen(site) import webbrowser def launch_browser(): webbrowser.open("http://localhost:%d/timeline.html" % 8066) print("browser opened, waiting for shutdown") reactor.callLater(0, launch_browser) reactor.run() web() magic-wormhole-0.12.0/misc/web/000077500000000000000000000000001400712516500162155ustar00rootroot00000000000000magic-wormhole-0.12.0/misc/web/timeline.css000066400000000000000000000023151400712516500205360ustar00rootroot00000000000000 line.client_tx { stroke: red; stroke-dasharray: 5,5; } line.client_rx { stroke: blue; stroke-dasharray: 5,5; } line.c2c_column { stroke: black; stroke-dasharray: 1,5; } line.c2c { stroke-width: 2.0; } line.c2c.active { stroke-width: 4.0; } line.y_axis { stroke: gray; } /* putting these in a .css file doesn't work, for some reason I have to add the markers as a style= attribute directly. */ line.circle-arrow-circle { /*marker-start: url(#markerCircle);*/ marker-mid: url(#markerArrow); /*marker-end: url(#markerCircle);*/ } rect.wait-crypto { stroke: black; fill: #ccc; } rect.wait-user { stroke: #00f; fill: #bbe; } text.wait-text-user { fill: #00f; } rect.proc-span-import { fill: #fcc; } rect.proc-span-websocket { fill: #cfc; } rect.api { fill: #cfc; } rect.bar { stroke: black; } .lane-0 { fill: #fcc; } .lane-1 { fill: #cfc; } .lane-2 { fill: #ccf; } .lane-3 { fill: #ccf; } .lane-4 { fill: #ccf; } .lane-5 { fill: #ccf; } .lane-6 { fill: #ccf; } .vis-item .vis-item-overflow { overflow: visible; } .d3-tip { margin: 4px; padding: 2px; background: #111; color: #fff; } magic-wormhole-0.12.0/misc/web/timeline.html000066400000000000000000000007361400712516500207170ustar00rootroot00000000000000 Timeline Visualizer

Wormhole Timeline
magic-wormhole-0.12.0/misc/web/timeline.js000066400000000000000000001137211400712516500203660ustar00rootroot00000000000000var d3; // hush var container = d3.select("#viz"); var data; var items; var globals = {}; var server_time_offset=0, rx_time_offset=0; // in seconds, relative to tx var zoom = d3.behavior.zoom().scaleExtent([1, Infinity]); function zoomin() { //var w = Number(container.style("width").slice(0,-2)); //console.log("zoomin", w); //zoom.center([w/2, 20]); // doesn't work yet zoom.scale(zoom.scale() * 2); globals.redraw(); } function zoomout() { zoom.scale(zoom.scale() * 0.5); globals.redraw(); } function is_span(ev, category) { if (ev.category === category && !!ev.stop) return true; return false; } function is_event(ev, category) { if (ev.category === category && !ev.stop) return true; return false; } const server_message_color = { "welcome": 0, // receive "bind": 0, // send "allocate": 1, // send "allocated": 1, // receive "list": 2, // send "nameplates": 2, // receive "claim": 3, // send "claimed": 3, // receive "open": 4, // send "release": 5, // send "released": 5, // receive "error": 6, // receive //"add": 8, // send (client message) //"message": 8, // receive (client message) "ping": 7, // send "pong": 7 // receive }; const proc_map = { "command dispatch": "dispatch", "open websocket": "websocket", "code established": "code-established", "key established": "key-established", "transit connected": "transit-connected", "print": "print", "exit": "exit", "transit connect": "transit-connect", "import": "import" }; const TX_COLUMN = 14; const RX_COLUMN = 18; const SERVER_COLUMN0 = 20; const SERVER_COLUMNS = [20,21,22,23,24,25]; const NUM_SERVER_COLUMNS = 6; const MAX_COLUMN = 45; function x_offset(offset, side_name) { if (side_name === "send") return offset; return MAX_COLUMN - offset; } function side_text_anchor(side_name) { if (side_name === "send") return "end"; return "start"; } function side_text_dx(side_name) { if (side_name === "send") return "-5px"; return "5px"; } d3.json("data.json", function(d) { data = d; // data is {send,receive}{fn,events} // each event has: {name, start, [stop], [server_rx], [server_tx], // [id], details={} } // Display all timestamps relative to the sender's startup event. If all // clocks are in sync, this will be the same as first_timestamp, but in // case they aren't, I'd rather use the sender as the reference point. var first = data.send.events[0].start; // The X axis is divided up into 50 slots, and then scaled to the screen // later. The left portion represents the "wormhole send" side, the // middle is the rendezvous server, and the right portion is the // "wormhole receive" side. 
  //
  // 0: time axis, tick marks
  // 3: sender process events: import, dispatch, exit
  // 4: sender major application-level events: code/key establishment,
  //    transit-connect
  // 8: sender stalls: waiting for user, waiting for permission
  // 10: sender websocket transmits originate from here
  // 15: sender websocket receives terminate here
  // 20-25: rendezvous-server message lanes
  // 30: receiver websocket receives
  // 35: receiver websocket transmits
  // 37: receiver stalls
  // 41: receiver app-level events
  // 42: receiver process events

  var first_timestamp = Infinity;
  var last_timestamp = 0;

  function prepare_data(e, side_name) {
    var rel_e = {side_name: side_name, // send or receive
                 name: e.name,
                 start: e.start - first,
                 details: e.details
                };
    if (e.stop)
      rel_e.stop = e.stop - first;
    if (side_name == "receive") {
      rel_e.start -= rx_time_offset;
      if (e.stop)
        rel_e.stop -= rx_time_offset;
    }
    if (rel_e.details.message) {
      if (rel_e.details.message.server_rx)
        rel_e.details.message.server_rx -= server_time_offset;
      if (rel_e.details.message.server_tx)
        rel_e.details.message.server_tx -= server_time_offset;
    }

    // sort events into categories, assign X coordinates to some
    if (proc_map[e.name]) {
      rel_e.category = "proc";
      rel_e.x = x_offset(3, side_name);
      if (e.name === "open websocket")
        rel_e.x = x_offset(4, side_name);
      rel_e.text = proc_map[e.name];
      if (e.name === "import")
        rel_e.text += " " + e.details.which;
    }
    if (e.details.waiting) {
      rel_e.category = "wait";
      var off = 8;
      if (e.details.waiting === "user")
        off += 0.5;
      rel_e.x = x_offset(off, side_name);
    }

    // also, calculate the overall time domain while we're at it
    [rel_e.start, rel_e.stop].forEach(v => {
      if (v) {
        if (v > last_timestamp)
          last_timestamp = v;
        if (v < first_timestamp)
          first_timestamp = v;
      }
    });
    return rel_e;
  }

  var events = data.send.events.map(e => prepare_data(e, "send"));
  events = events.concat(data.receive.events.map(e => prepare_data(e, "receive")));

  /* "Client messages" are ones that go all the way from one client to the
     other, through the rendezvous channel (and get echoed back to the
     sender too). We can correlate three websocket messages for each (the
     send, the local receive, and the remote receive) by comparing their
     "id" strings.

     Scan for all client messages, to build a list of central columns. For
     each message, we'll have tx/server_rx/server_tx/rx for the sending
     side, and server_rx/server_tx/rx for the receiving side. The "add"
     event contributes tx; each "message" event (the sender's echo and the
     receiver's delivery) contributes server_rx, server_tx, and rx.
*/ var side_map = new Map(); // side -> "send"/"receive" var c2c = new Map(); // msgid => {send,receive}{tx,server_rx,server_tx,rx} events.forEach(ev => { var id, phase; if (ev.name === "ws_send") { if (ev.details.type !== "add") return; id = ev.details.id; phase = ev.details.phase; side_map.set(ev.details._side, ev.side_name); } else if (ev.name === "ws_receive") { if (ev.details.message.type !== "message") return; id = ev.details.message.id; phase = ev.details.message.phase; } else return; if (!c2c.has(id)) { c2c.set(id, {phase: phase, side_id: ev.details._side, //tx_side_name: assigned when we see 'add' id: id, arrivals: [] //col, server_x: assigned later //server_rx: assigned when we see 'message' }); } var cm = c2c.get(id); if (ev.name === "ws_send") { // add cm.tx = ev.start; cm.tx_x = x_offset(TX_COLUMN, ev.side_name); cm.tx_side_name = ev.side_name; } else { // message cm.server_rx = ev.details.message.server_rx - first; cm.arrivals.push({server_tx: ev.details.message.server_tx - first, rx: ev.start, rx_x: x_offset(RX_COLUMN, ev.side_name)}); } }); // sort c2c messages by initial sending time var client_messages = Array.from(c2c.values()); client_messages.sort( (a,b) => (a.tx - b.tx) ); // assign columns // TODO: identify overlaps between the c2c messages, share columns // between messages which don't overlap client_messages.forEach((cm,index) => { cm.col = index % 6; cm.server_x = 20 + cm.col; }); console.log("client_messages", client_messages); console.log(side_map); console.log(first_timestamp, last_timestamp); /* "Server messages" are ones that stop or originate at the rendezvous server. These are of types other than "add" or "message". Although many of these provoke responses, we do not attempt to correlate these with any other message. For outbound ws_send messages, we know the send timestamp, but not the server receipt timestamp. For inbound ws_receive messages, we know both. 
*/ var outbound_sm = new Map(); globals.outbound_sm = outbound_sm; events .filter(ev => ev.name === "ws_send") .forEach(ev => { // we don't know the server receipt time, so draw a horizontal // line by setting stop_timestamp=start_timestamp var sm = {side_name: ev.side_name, start_timestamp: ev.start, stop_timestamp: ev.start, start_x: x_offset(TX_COLUMN, ev.side_name), end_x: x_offset(20, ev.side_name), text_x: x_offset(TX_COLUMN, ev.side_name), text_timestamp: ev.start, text_dy: "-5px", type: ev.details.type, tip: ev.details.type, ev: ev }; outbound_sm.set(ev.details.id, sm); }); events .filter(ev => ev.name === "ws_receive") .filter(ev => ev.details.message.type === "ack") .forEach(ev => { var id = ev.details.message.id; var server_tx = ev.details.message.server_tx; var sm = outbound_sm.get(id); sm.stop_timestamp = server_tx - first; }); var server_messages = []; events .filter(ev => ev.name === "ws_receive") .filter(ev => ev.details.message.type !== "message") .filter(ev => ev.details.message.type !== "ack") .forEach(ev => { var sm = {side_name: ev.side_name, start_timestamp: ev.details.message.server_tx - first, stop_timestamp: ev.start, start_x: x_offset(20, ev.side_name), end_x: x_offset(RX_COLUMN, ev.side_name), text_x: x_offset(RX_COLUMN, ev.side_name), text_timestamp: ev.start, text_dy: "8px", type: ev.details.message.type, tip: ev.details.message.type, ev: ev }; server_messages.push(sm); }); server_messages = server_messages.concat( Array.from(outbound_sm.values()) .filter(sm => sm.type !== "add")); console.log("server_messages", server_messages); // TODO: this goes off the edge of the screen, use the viewport instead var container_width = Number(container.style("width").slice(0,-2)); var container_height = Number(container.style("height").slice(0,-2)); container_height = 700; // no contents, so no height is allocated yet // scale the X axis to the full width of our container var x = d3.scale.linear().domain([0, 50]).range([0, container_width]); // scale the Y axis later var y = d3.scale.linear().domain([first_timestamp, last_timestamp]) .range([0, container_height]); zoom.y(y); zoom.on("zoom", redraw); var tip = d3.tip() .attr("class", "d3-tip") .html(function(d) { return "" + d + ""; }) .direction("s") ; var chart = container.append("svg:svg") .attr("id", "outer_chart") .attr("width", container_width) .attr("height", container_height) .attr("pointer-events", "all") .call(zoom) .call(tip) ; var defs = chart.append("svg:defs"); defs.append("svg:marker") .attr("id", "markerCircle") .attr("markerWidth", 8) .attr("markerHeight", 8) .attr("refX", 5) .attr("refY", 5) .append("circle") .attr("cx", 5) .attr("cy", 5) .attr("r", 3) .attr("style", "stroke: none; fill: #000000;") ; defs.append("svg:marker") .attr("id", "markerArrow") .attr("markerWidth", 26) .attr("markerHeight", 26) .attr("refX", 26) .attr("refY", 12) .attr("orient", "auto") .attr("markerUnits", "userSpaceOnUse") // don't scale to stroke-width .append("path") .attr("d", "M8,20 L20,12 L8,4") .attr("style", "stroke: #000000; fill: none") ; chart.append("svg:line") .attr("x1", x(0.5)).attr("y1", 0) .attr("x2", x(0.5)).attr("y2", container_height) .attr("class", "y_axis") ; chart.append("svg:g") .attr("class", "seconds_g") .attr("transform", "translate("+(x(0.5)+5)+","+(container_height-10)+")") .append("svg:text") .text("seconds") ; chart.append("svg:line") .attr("x1", x(TX_COLUMN)).attr("y1", y(first_timestamp)) .attr("x2", x(TX_COLUMN)).attr("y2", y(last_timestamp)) .attr("class", "client_tx") ; chart.append("svg:text") 
.attr("x", x(TX_COLUMN)).attr("y", 10) .attr("text-anchor", "middle") .text("sender tx"); chart.append("svg:line") .attr("x1", x(RX_COLUMN)).attr("y1", y(first_timestamp)) .attr("x2", x(RX_COLUMN)).attr("y2", y(last_timestamp)) .attr("class", "client_rx") ; chart.append("svg:text") .attr("x", x(RX_COLUMN)).attr("y", 10) .attr("text-anchor", "middle") .text("sender rx"); chart.selectAll("line.c2c_column").data(SERVER_COLUMNS) .enter().append("svg:line") .attr("class", "c2c_column") .attr("x1", d => x(d)).attr("y1", y(first_timestamp)) .attr("x2", d => x(d)).attr("y2", y(last_timestamp)) ; chart.append("svg:line") .attr("x1", x(MAX_COLUMN-RX_COLUMN)).attr("y1", y(first_timestamp)) .attr("x2", x(MAX_COLUMN-RX_COLUMN)).attr("y2", y(last_timestamp)) .attr("class", "client_rx") ; chart.append("svg:text") .attr("x", x(MAX_COLUMN-RX_COLUMN)).attr("y", 10) .attr("text-anchor", "middle") .text("receiver rx"); chart.append("svg:line") .attr("x1", x(MAX_COLUMN-TX_COLUMN)).attr("y1", y(first_timestamp)) .attr("x2", x(MAX_COLUMN-TX_COLUMN)).attr("y2", y(last_timestamp)) .attr("class", "client_tx") ; chart.append("svg:text") .attr("x", x(MAX_COLUMN-TX_COLUMN)).attr("y", 10) .attr("text-anchor", "middle") .text("receiver tx"); // produces list of {p_from, p_to, col, add_arrow, tip} function cm_line(cm) { // We draw a bunch of two-point lines var lines = []; function push(p_from, p_to, add_arrow) { lines.push({p_from: p_from, p_to: p_to, col: cm.col, tip: cm.tip, add_arrow: add_arrow}); } // the first goes from the sender to the server_rx, if we know it // TODO: tolerate not knowing it var sender_point = [cm.tx_x, cm.tx]; var server_rx_point = [cm.server_x, cm.server_rx]; push(sender_point, server_rx_point, true); // the second goes from the server_rx to the last server_tx var last_server_tx = Math.max.apply(null, cm.arrivals.map(a => a.server_tx)); var last_server_tx_point = [cm.server_x, last_server_tx]; push(server_rx_point, last_server_tx_point, false); cm.arrivals.forEach(ar => { var delivery_tx_point = [cm.server_x, ar.server_tx]; var delivery_rx_point = [ar.rx_x, ar.rx]; push(delivery_tx_point, delivery_rx_point, true); }); return lines; } var all_cm_lines = []; client_messages.forEach(v => { all_cm_lines = all_cm_lines.concat(cm_line(v)); }); console.log(all_cm_lines); var cm_colors = d3.scale.category10(); chart.selectAll("line.c2c").data(all_cm_lines) .enter() .append("svg:line") .attr("class", "c2c") // circle-arrow-circle") .attr("stroke", ls => cm_colors(ls.col)) .attr("style", ls => { if (ls.add_arrow) return "marker-end: url(#markerArrow);"; return ""; }) .on("mouseover", ls => { if (ls.tip) tip.show(ls.tip); chart.selectAll("circle.c2c").filter(d => d.col == ls.col) .attr("r", 10); chart.selectAll("line.c2c") .classed("active", d => d.col == ls.col); }) .on("mouseout", ls => { tip.hide(ls); chart.selectAll("circle.c2c") .attr("r", 5); chart.selectAll("line.c2c") .classed("active", false); }) ; chart.selectAll("g.c2c").data(client_messages) .enter() .append("svg:g") .attr("class", "c2c") .append("svg:text") .attr("class", "c2c") .attr("text-anchor", cm => side_text_anchor(cm.tx_side_name)) .attr("dx", cm => side_text_dx(cm.tx_side_name)) .attr("dy", "10px") .attr("fill", cm => cm_colors(cm.col)) .text(cm => cm.phase); function cm_dot(cm) { var dots = []; var color = cm_colors(cm.col); var tip = cm.phase; function push(x,y) { dots.push({x: x, y: y, col: cm.col, color: color, tip: tip}); } push(cm.tx_x, cm.tx); cm.arrivals.forEach(ar => push(ar.rx_x, ar.rx)); return dots; } var all_cm_dots = 
[]; client_messages.forEach(cm => { all_cm_dots = all_cm_dots.concat(cm_dot(cm)); }); chart.selectAll("circle.c2c").data(all_cm_dots) .enter() .append("svg:circle") .attr("class", "c2c") .attr("r", 5) .attr("fill", dot => dot.color) .on("mouseover", dot => { if (dot.tip) tip.show(dot.tip); chart.selectAll("circle.c2c").filter(d => d.col == dot.col) .attr("r", 10); chart.selectAll("line.c2c") .classed("active", d => d.col == dot.col); }) .on("mouseout", dot => { tip.hide(dot); chart.selectAll("circle.c2c") .attr("r", 5); chart.selectAll("line.c2c") .classed("active", false); }) ; // server messages chart.selectAll("line.server-message").data(server_messages) .enter() .append("svg:line") .attr("class", "server-message") .attr("stroke", sm => cm_colors(server_message_color[sm.type] || 0)) .attr("style", "marker-end: url(#markerArrow)") .on("mouseover", sm => { if (sm.tip) tip.show(sm.tip); }) .on("mouseout", sm => { tip.hide(sm); }) ; chart.selectAll("g.server-message").data(server_messages) .enter() .append("svg:g") .attr("class", "server-message") .append("svg:text") .attr("class", "server-message") .attr("text-anchor", sm => side_text_anchor(sm.side_name)) .attr("dx", sm => side_text_dx(sm.side_name)) .attr("dy", sm => sm.text_dy) .attr("fill", sm => cm_colors(server_message_color[sm.type] || 0)) .text(sm => sm.type); // TODO: add dots on the known send/receive time points var w = chart.selectAll("g.wait") .data(events.filter(ev => ev.category === "wait")) .enter().append("svg:g") .attr("class", "wait"); w.append("svg:rect") .attr("class", ev => "wait wait-"+ev.details.waiting) .attr("width", 10); var wt = chart.selectAll("g.wait-text") .data(events.filter(ev => ev.category === "wait")) .enter().append("svg:g") .attr("class", "wait-text"); wt.append("svg:text") .attr("class", ev => "wait-text wait-text-"+ev.details.waiting) .attr("text-anchor", ev => ev.side_name === "send" ? "end" : "start") .attr("dx", ev => ev.side_name === "send" ? "-5px" : "15px") .attr("dy", "5px") .text(v => v.name+" ("+v.details.waiting+")"); // process-related events var pe = chart.selectAll("g.proc-event") .data(events.filter(ev => is_event(ev, "proc"))) .enter().append("svg:g") .attr("class", "proc-event"); pe.append("svg:circle") .attr("class", ev => "proc-event proc-event-"+proc_map[ev.name]) .attr("cx", ev => ev.side_name === "send" ? "12px" : "-2px") .attr("r", 5) .attr("fill", "red") .attr("width", 10); pe.append("svg:text") .attr("class", ev => "proc-event proc-event-"+proc_map[ev.name]) .attr("text-anchor", ev => ev.side_name === "send" ? "start" : "end") .attr("dx", ev => ev.side_name === "send" ? "15px" : "-5px") .attr("dy", "5px") .attr("transform", "rotate(-30)") .text(ev => proc_map[ev.name]); // process-related spans var ps = chart.selectAll("g.proc-span") .data(events.filter(ev => is_span(ev, "proc"))) .enter().append("svg:g") .attr("class", "proc-span"); ps.append("svg:rect") .attr("class", ev => "proc-span proc-span-"+proc_map[ev.name]) .attr("width", 10); var pst = chart.selectAll("g.proc-span-text") .data(events.filter(ev => is_span(ev, "proc"))) .enter().append("svg:g") .attr("class", "proc-span-text"); pst.append("svg:text") .attr("class", ev => "proc-span-text proc-span-text-"+proc_map[ev.name]) .attr("text-anchor", ev => ev.side_name === "send" ? "start" : "end") .attr("dx", ev => ev.side_name === "send" ? 
"15px" : "-5px") .attr("dy", "5px") .text(ev => ev.text); function ty(d) { return "translate(0,"+y(d)+")"; } function redraw() { chart.selectAll("line.c2c") .attr("x1", ls => x(ls.p_from[0])) .attr("y1", ls => y(ls.p_from[1])) .attr("x2", ls => x(ls.p_to[0])) .attr("y2", ls => y(ls.p_to[1])) ; chart.selectAll("g.c2c") .attr("transform", cm => "translate("+x(cm.tx_x)+","+y(cm.tx)+")") ; chart.selectAll("circle.c2c") .attr("cx", d => x(d.x)) .attr("cy", d => y(d.y)) ; chart.selectAll("line.server-message") .attr("x1", sm => x(sm.start_x)) .attr("y1", sm => y(sm.start_timestamp)) .attr("x2", sm => x(sm.end_x)) .attr("y2", sm => y(sm.stop_timestamp)); chart.selectAll("g.server-message") .attr("transform", sm => { return "translate("+x(sm.text_x)+","+y(sm.text_timestamp)+")"; }) ; chart.selectAll("g.wait") .attr("transform", ev => { return "translate("+x(ev.x)+","+y(ev.start)+")"; }); chart.selectAll("rect.wait") .attr("height", ev => y(ev.stop)-y(ev.start)); chart.selectAll("g.wait-text") .attr("transform", ev => { return "translate("+x(ev.x)+","+y((ev.start+ev.stop)/2)+")"; }); chart.selectAll("g.proc-event") .attr("transform", ev => { return "translate("+x(ev.x)+","+y(ev.start)+")"; }) ; chart.selectAll("g.proc-span") .attr("transform", ev => { return "translate("+x(ev.x)+","+y(ev.start)+")"; }) ; chart.selectAll("rect.proc-span") .attr("height", ev => y(ev.stop)-y(ev.start)); chart.selectAll("g.proc-span-text") .attr("transform", ev => { return "translate("+x(ev.x)+","+y((ev.start+ev.stop)/2)+")"; }); // vertical scale markers: horizontal tick lines at rational // timestamps // TODO: clicking on a dot should set the new zero time var rules = chart.selectAll("g.rule") .data(y.ticks(10)) .attr("transform", ty); rules.select("text") .text(t => y.tickFormat(10, "s")(t)+"s"); var newrules = rules.enter().insert("svg:g") .attr("class", "rule") .attr("transform", ty) ; newrules.append("svg:line") .attr("class", "rule-tick") .attr("stroke", "black"); chart.selectAll("line.rule-tick") .attr("x1", x(0.5)-5) .attr("x2", x(0.5)); newrules.append("svg:line") .attr("class", "rule-red") .attr("stroke", "red") .attr("stroke-opacity", .3); chart.selectAll("line.rule-red") .attr("x1", x(0.5)) .attr("x2", x(MAX_COLUMN)); newrules.append("svg:text") .attr("class", "rule-text") .attr("dx", ".1em") .attr("dy", "-0.2em") .attr("text-anchor", "start") .attr("fill", "black") .text(t => y.tickFormat(10, "s")(t)+"s"); chart.selectAll("text.rule-text") .attr("x", 6 + 9); rules.exit().remove(); } redraw(); }); /* TODO * identify the largest gaps in the timeline (biggest is probably waiting for the recipient to start the program, followed by waiting for recipient to type in code, followed by waiting for recipient to approve transfer, with the time of actual transfer being anywhere among the others). 
* identify groups of events that are separated by those gaps * put a [1 2 3 4 all] set of buttons at the top of the page * clicking on each button will zoom the display to 10% beyond the span of events in the given group, or reset the zoom to include all events */ function OFF() { /* leftover code from an older implementation, retained since there might still be some useful pieces here */ function y_off(d) { return (LANE_HEIGHT * (d.side*(data.lanes.length+1) + d.lane) + d.wiggle); } var bottom_rule_y = LANE_HEIGHT * data.sides.length * (data.lanes.length+1); var bottom_y = bottom_rule_y + 45; //var chart_g = chart.append("svg:g"); // this "backboard" rect lets us catch mouse events anywhere in the // chart, even between the bars. Without it, we only see events on solid // objects like bars and text, but not in the gaps between. chart.append("svg:rect") .attr("id", "outer_rect") .attr("width", w).attr("height", bottom_y).attr("fill", "none"); // but the stuff we put inside it should have some room w = w-50; chart.selectAll("text.sides-label").data(data.sides).enter() .append("svg:text") .attr("class", "sides-label") .attr("x", "0px") .attr("y", function(d,idx) { return y_off({side: idx, lane: data.lanes.length/2, wiggle: 0}) ;}) .attr("text-anchor", "start") // anchor at top-left .attr("dy", ".71em") .attr("fill", "black") .text(function(d) { return d; }) ; var lanes_by_sides = []; data.sides.forEach(function(side, side_index) { data.lanes.forEach(function(lane, lane_index) { lanes_by_sides.push({side: side, side_index: side_index, lane: lane, lane_index: lane_index}); }); }); chart.selectAll("text.lanes-label").data(lanes_by_sides).enter() .append("svg:text") .attr("class", "lanes-label") .attr("x", "50px") .attr("y", function(d) { return y_off({side: d.side_index, lane: d.lane_index, wiggle: 0}) ;}) .attr("text-anchor", "start") // anchor at top-left .attr("dy", ".91em") .attr("fill", "#f88") .text(function(d) { return d.lane; }) ; chart.append("svg:text") .attr("class", "seconds-label") //.attr("x", w/2).attr("y", y + 35) .attr("text-anchor", "middle") .attr("fill", "black") .text("seconds"); d3.select("#outer_chart").attr("height", bottom_y); d3.select("#outer_rect").attr("height", bottom_y); d3.select("#zoom").attr("transform", "translate("+(w-10)+","+10+")"); function reltime(t) {return t-data.bounds.min;} var last = data.bounds.max - data.bounds.min; //last = reltime(d3.max(data.dyhb, function(d){return d.finish_time;})); last = last * 1.05; // long downloads are likely to have too much info, start small if (last > 10.0) last = 10.0; // d3.time.scale() has no support for ms or us. var xOFF = d3.time.scale().domain([data.bounds.min, data.bounds.max]) .range([0,w]); var x = d3.scale.linear().domain([-last*0.05, last]) .range([0,w]); zoom.x(x); function tx(d) { return "translate(" +x(d) + ",0)"; } function left(d) { return x(reltime(d.start_time)); } function left_server(d) { return x(reltime(d.server_sent)); } function right(d) { return d.finish_time ? x(reltime(d.finish_time)) : "1px"; } function width(d) { return d.finish_time ? 
x(reltime(d.finish_time))-x(reltime(d.start_time)) : "1px"; } function halfwidth(d) { if (d.finish_time) return (x(reltime(d.finish_time))-x(reltime(d.start_time)))/2; return "1px"; } function middle(d) { if (d.finish_time) return (x(reltime(d.start_time))+x(reltime(d.finish_time)))/2; else return x(reltime(d.start_time)) + 1; } function color(d) { return data.server_info[d.serverid].color; } function servername(d) { return data.server_info[d.serverid].short; } function timeformat(duration) { // TODO: trim to microseconds, maybe humanize return duration; } function oldredraw() { // at this point zoom/pan must be fixed var min = data.bounds.min + x.domain()[0]; var max = data.bounds.min + x.domain()[1]; function inside(d) { var finish_time = d.finish_time || d.start_time; if (Math.max(d.start_time, min) <= Math.min(finish_time, max)) return true; return false; } // from the data, build a list of bars, dots, and lines var clipped = {bars: [], dots: [], lines: []}; data.items.filter(inside).forEach(function(d) { if (!d.finish_time) { clipped.dots.push(d); } else { clipped.bars.push(d); if (!!d.server_sent) { clipped.lines.push(d); } } }); globals.clipped = clipped; //chart.select(".dyhb-label") // .attr("x", x(0))//"20px") // .attr("y", y); // Panning and zooming will re-run this function multiple times, and // bars will come and go, so we must process all three selections // (including enter() and exit()). // TODO: add dots for events that have only start, not finish. Add // the server-sent bar (a vertical line, half height, centered // vertically) for events that have server-sent as well as finish. // This probably requires creating a dot for everything, but making // it invisible if finished is non-null, likewise for the server-sent // bar. // each item gets an SVG group (g.bars), translated left and down // to match the start time and side/lane of the event var bars = chart.selectAll("g.bars") .data(clipped.bars, function(d) { return d.start_time; }) .attr("transform", function(d) { return "translate("+left(d)+","+y_off(d)+")"; }) ; // update the variable parts of each bar, which depends upon the // current pan/zoom values bars.select("rect") .attr("width", width); bars.select("text") .attr("x", halfwidth); bars.exit().remove(); var new_bars = bars.enter() .append("svg:g") .attr("class", "bars") .attr("transform", function(d) { return "translate("+left(d)+","+y_off(d)+")"; }) ; // inside the group, we have a rect with a width for the duration of // the event, and a fixed height. The fill and stroke color depend // upon the event, and the title has the details. We append the rects // first, so the text is drawn on top (higher z-order) //y += 30*(1+d3.max(data.bars, function(d){return d.row;})); new_bars.append("svg:rect") .attr("width", width) .attr("height", RECT_HEIGHT) .attr("class", function(d) { var c = ["bar", "lane-" + d.lane]; if (d.details.waiting) c.push("wait-" + d.details.waiting); return c.join(" "); }) .on("mouseover", function(d) {if (d.details_str) tip.show(d);}) .on("mouseout", tip.hide) //.attr("title", function(d) {return d.details_str;}) ; // each group also has a text, with 'x' set to place it in the middle // of the rect, and text contents that are drawn in the rect new_bars.append("svg:text") .attr("x", halfwidth) .attr("text-anchor", "middle") .attr("dy", "0.9em") //.attr("fill", "black") .text((d) => d.what) .on("mouseover", function(d) {if (d.details_str) tip.show(d);}) .on("mouseout", tip.hide) ; // dots: events that have a single timestamp, rather than a range. 
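// (Aside, an illustrative note on the d3 idiom used for each shape class
// here: selectAll().data(array, keyfn) joins each datum to a DOM node by
// key -- start_time in this file -- so a pan/zoom redraw updates
// surviving nodes in place, enter() creates nodes for data newly inside
// the clipped window, and exit() removes nodes whose data scrolled out.)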
// These get an SVG group, and a circle and some text. var dots = chart.selectAll("g.dots") .data(clipped.dots, (d) => d.start_time) .attr("transform", (d) => "translate("+left(d)+","+(y_off(d)+LANE_HEIGHT/3)+")") ; dots.exit().remove(); var new_dots = dots.enter() .append("svg:g") .attr("class", "dots") .attr("transform", (d) => "translate("+left(d)+","+(y_off(d)+LANE_HEIGHT/3)+")") ; new_dots.append("svg:circle") .attr("r", "5") .attr("class", (d) => "dot lane-"+d.lane) .attr("fill", "#888") .attr("stroke", "black") .on("mouseover", function(d) {if (d.details_str) tip.show(d);}) .on("mouseout", tip.hide) ; new_dots.append("svg:text") .attr("x", "5px") .attr("text-anchor", "start") .attr("dy", "0.2em") .text((d) => d.what) .on("mouseover", function(d) {if (d.details_str) tip.show(d);}) .on("mouseout", tip.hide) ; // lines: these represent the time at which the server sent a message // which finished a bar. These get an SVG group, and a line var linedata = clipped.lines.map(d => [ [d.server_sent, 0], [d.server_sent, LANE_HEIGHT], [d.finish_time, 0], ]); function lineshape(d) { var l = d3.svg.line() .x(d => x(d[0])) .y(d => y_off(d) + 12345); } function update_line(sel) { sel.attr("d", lineshape) .attr("class", d => "line lane-"+d.lane) ; } var lines = chart.selectAll("polyline.lines") .data(linedata) .attr("transform", (d) => "translate("+left(d)+","+y_off(d)+")") ; lines.exit().remove(); var new_lines = lines.enter() .append("svg:g") .attr("class", "lines") .attr("transform", (d) => "translate("+left_server(d)+","+(y_off(d))+")") ; new_lines.append("svg:line") .attr("x1", 0) .attr("y1", -5) .attr("x2", "0") .attr("y2", LANE_HEIGHT) .attr("class", (d) => "line lane-"+d.lane) .attr("stroke", "red") ; new_lines.append("svg:line") .attr("x1", 0).attr("y1", -5) .attr("x2", (d) => x(d.finish_time - d.server_sent)) .attr("y2", 0) .attr("class", (d) => "line lane-"+d.lane) .attr("stroke", "red") ; // horizontal scale markers: vertical lines at rational timestamps var rules = chart.selectAll("g.rule") .data(x.ticks(10)) .attr("transform", tx); rules.select("text").text(x.tickFormat(10)); var newrules = rules.enter().insert("svg:g") .attr("class", "rule") .attr("transform", tx) ; newrules.append("svg:line") .attr("class", "rule-tick") .attr("stroke", "black"); chart.selectAll("line.rule-tick") .attr("y1", bottom_rule_y) .attr("y2", bottom_rule_y + 6); newrules.append("svg:line") .attr("class", "rule-red") .attr("stroke", "red") .attr("stroke-opacity", .3); chart.selectAll("line.rule-red") .attr("y1", 0) .attr("y2", bottom_rule_y); newrules.append("svg:text") .attr("class", "rule-text") .attr("dy", ".71em") .attr("text-anchor", "middle") .attr("fill", "black") .text(x.tickFormat(10)); chart.selectAll("text.rule-text") .attr("y", bottom_rule_y + 9); rules.exit().remove(); chart.select(".seconds-label") .attr("x", w/2) .attr("y", bottom_rule_y + 35); } globals.x = x; globals.redraw = redraw; zoom.on("zoom", redraw); d3.select("#zoom_in_button").on("click", zoomin); d3.select("#zoom_out_button").on("click", zoomout); d3.select("#reset_button").on("click", function() { x.domain([-last*0.05, last]).range([0,w]); redraw(); }); redraw(); $.get("done", function(_) {}); } magic-wormhole-0.12.0/misc/windows-build.cmd000066400000000000000000000015061400712516500207160ustar00rootroot00000000000000@echo off :: To build extensions for 64 bit Python 3, we need to configure environment :: variables to use the MSVC 2010 C++ compilers from GRMSDKX_EN_DVD.iso of: :: MS Windows SDK for Windows 7 and .NET Framework 4 :: 
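:: Example invocation (illustrative only; the script just prepares the
:: build environment and then CALLs whatever command line it was given):
::   set DISTUTILS_USE_SDK=1
::   misc\windows-build.cmd python setup.py bdist_wheel
::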
:: More details at: :: https://github.com/cython/cython/wiki/64BitCythonExtensionsOnWindows IF "%DISTUTILS_USE_SDK%"=="1" ( ECHO Configuring environment to build with MSVC on a 64bit architecture ECHO Using Windows SDK 7.1 "C:\Program Files\Microsoft SDKs\Windows\v7.1\Setup\WindowsSdkVer.exe" -q -version:v7.1 CALL "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /x64 /release SET MSSdk=1 REM Need the following to allow tox to see the SDK compiler SET TOX_TESTENV_PASSENV=DISTUTILS_USE_SDK MSSdk INCLUDE LIB ) ELSE ( ECHO Using default MSVC build environment ) CALL %* magic-wormhole-0.12.0/pyi/000077500000000000000000000000001400712516500153065ustar00rootroot00000000000000magic-wormhole-0.12.0/pyi/build-exe000077500000000000000000000013111400712516500171060ustar00rootroot00000000000000#!/bin/sh # use pyinstaller to build a single-file "fat binary" called wormhole.exe. # # the .exe here does NOT mean a windows executable, but an executable in general. # # "fat binary" means it includes the python interpreter, the python source code # and libs, compiled code parts and externally needed (C/compiled) libraries. # it does NOT include the (g)libc though as this needs to be provided by the # target platform and needs to match the kernel there. thus, it is a good idea # to run the build on an old, but still security-supported linux (or other posix # OS) to keep the minimum (g)libc requirement low. pyinstaller --clean --distpath=dist wormhole.exe.spec # result will be in dist/wormhole.exe magic-wormhole-0.12.0/pyi/wormhole.exe.spec000066400000000000000000000022741400712516500206030ustar00rootroot00000000000000# -*- mode: python -*- # this pyinstaller spec file is used to build wormhole binaries on posix platforms import os, sys # your cwd should be in the same dir as this file, so .. is the project directory: basepath = os.path.realpath('..') a = Analysis([os.path.join(basepath, 'src/wormhole/__main__.py'), ], pathex=[basepath, ], binaries=[], datas=[], hiddenimports=[], hookspath=[], runtime_hooks=[], excludes=[], win_no_prefer_redirects=False, win_private_assemblies=False, cipher=None) pyz = PYZ(a.pure, a.zipped_data, cipher=None) exe = EXE(pyz, a.scripts, a.binaries, a.zipfiles, a.datas, name='wormhole.exe', debug=False, strip=False, upx=True, console=True) if False: # Enable this block to build a directory-based binary instead of # a packed single file. 
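# (For illustration: with this block enabled, PyInstaller would emit a
# dist/wormhole-dir/ directory -- "wormhole-dir" being the name= given
# below -- whose contents must ship together, rather than only the
# single self-contained dist/wormhole.exe built above.)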
coll = COLLECT(exe, a.binaries, a.zipfiles, a.datas, strip=False, upx=True, name='wormhole-dir') magic-wormhole-0.12.0/setup.cfg000066400000000000000000000003731400712516500163310ustar00rootroot00000000000000[wheel] universal = 1 [versioneer] vcs = git versionfile_source = src/wormhole/_version.py versionfile_build = wormhole/_version.py tag_prefix = parentdir_prefix = magic-wormhole [flake8] max-line-length = 85 [egg_info] tag_build = tag_date = 0 magic-wormhole-0.12.0/setup.py000066400000000000000000000044711400712516500162250ustar00rootroot00000000000000from setuptools import setup import versioneer commands = versioneer.get_cmdclass() trove_classifiers = [ "Development Status :: 4 - Beta", "Environment :: Console", "License :: OSI Approved :: MIT License", "Programming Language :: Python :: 2", "Programming Language :: Python :: 2.7", "Programming Language :: Python :: 3", "Programming Language :: Python :: 3.5", "Programming Language :: Python :: 3.6", "Programming Language :: Python :: 3.7", "Programming Language :: Python :: 3.8", "Programming Language :: Python :: Implementation :: CPython", "Topic :: Security :: Cryptography", "Topic :: System :: Networking", "Topic :: System :: Systems Administration", "Topic :: Utilities", ] setup(name="magic-wormhole", version=versioneer.get_version(), description="Securely transfer data between computers", long_description=open('README.md', 'r').read(), long_description_content_type='text/markdown', author="Brian Warner", author_email="warner-magic-wormhole@lothar.com", license="MIT", url="https://github.com/warner/magic-wormhole", classifiers=trove_classifiers, package_dir={"": "src"}, packages=["wormhole", "wormhole.cli", "wormhole._dilation", "wormhole.test", "wormhole.test.dilate", ], entry_points={ "console_scripts": [ "wormhole = wormhole.cli.cli:wormhole", ] }, install_requires=[ "spake2==0.8", "pynacl", "six", "attrs >= 16.3.0", # 16.3.0 adds __attrs_post_init__ "twisted[tls] >= 17.5.0", # 17.5.0 adds failAfterFailures= "autobahn[twisted] >= 0.14.1", "automat", "hkdf", "tqdm >= 4.13.0", # 4.13.0 fixes crash on NetBSD "click", "humanize", "txtorcon >= 18.0.2", # 18.0.2 fixes py3.4 support ], extras_require={ ':sys_platform=="win32"': ["pywin32"], "dev": ["mock", "tox", "pyflakes", "magic-wormhole-transit-relay==0.1.2", "magic-wormhole-mailbox-server==0.3.1"], "dilate": ["noiseprotocol"], }, test_suite="wormhole.test", cmdclass=commands, ) magic-wormhole-0.12.0/snapcraft.yaml000066400000000000000000000025251400712516500173560ustar00rootroot00000000000000name: wormhole version: git version-script: python3 -c "import versioneer; print(versioneer.get_version())" summary: get things from one computer to another, safely description: | This package provides a library and a command-line tool named wormhole, which makes it possible to get short pieces of text (and arbitrary-sized files and directories) from one computer to another. The two endpoints are identified by using identical "wormhole codes": in general, the sending machine generates and displays the code, which must then be typed into the receiving machine. The codes are short and human-pronounceable, using a phonetically-distinct wordlist. The receiving side offers tab-completion on the codewords, so usually only a few characters must be typed. Wormhole codes are single-use and do not need to be memorized. grade: devel confinement: strict apps: wormhole: command: env LC_ALL=C.UTF-8 LANG=C.UTF-8 wormhole plugs: [home, network, network-bind] parts: magic-wormhole: source: . 
source-type: git plugin: python build-packages: - gcc - libffi-dev - libsodium-dev - libssl-dev - make prepare: | # FIXME make sure that the build dir has all the files from the repo, # so the version is not tagged as dirty. --elopio - 20170730 cp ../src/snapcraft.yaml . magic-wormhole-0.12.0/src/000077500000000000000000000000001400712516500152745ustar00rootroot00000000000000magic-wormhole-0.12.0/src/magic_wormhole.egg-info/000077500000000000000000000000001400712516500217625ustar00rootroot00000000000000magic-wormhole-0.12.0/src/magic_wormhole.egg-info/PKG-INFO000066400000000000000000000070761400712516500230710ustar00rootroot00000000000000Metadata-Version: 2.1 Name: magic-wormhole Version: 0.12.0 Summary: Securely transfer data between computers Home-page: https://github.com/warner/magic-wormhole Author: Brian Warner Author-email: warner-magic-wormhole@lothar.com License: MIT Description: # Magic Wormhole [![PyPI](http://img.shields.io/pypi/v/magic-wormhole.svg)](https://pypi.python.org/pypi/magic-wormhole) [![Build Status](https://travis-ci.org/warner/magic-wormhole.svg?branch=master)](https://travis-ci.org/warner/magic-wormhole) [![Windows Build Status](https://ci.appveyor.com/api/projects/status/mfnn5rsyfnrq576a/branch/master?svg=true)](https://ci.appveyor.com/project/warner/magic-wormhole) [![codecov.io](https://codecov.io/github/warner/magic-wormhole/coverage.svg?branch=master)](https://codecov.io/github/warner/magic-wormhole?branch=master) [![Docs](https://readthedocs.org/projects/magic-wormhole/badge/?version=latest)](https://magic-wormhole.readthedocs.io) Get things from one computer to another, safely. This package provides a library and a command-line tool named `wormhole`, which makes it possible to get arbitrary-sized files and directories (or short pieces of text) from one computer to another. The two endpoints are identified by using identical "wormhole codes": in general, the sending machine generates and displays the code, which must then be typed into the receiving machine. The codes are short and human-pronounceable, using a phonetically-distinct wordlist. The receiving side offers tab-completion on the codewords, so usually only a few characters must be typed. Wormhole codes are single-use and do not need to be memorized. * PyCon 2016 presentation: [Slides](http://www.lothar.com/~warner/MagicWormhole-PyCon2016.pdf), [Video](https://youtu.be/oFrTqQw0_3c) For complete documentation, please see https://magic-wormhole.readthedocs.io or the docs/ subdirectory. ## License, Compatibility Magic-Wormhole is released under the MIT license, see the `LICENSE` file for details. This library is compatible with Python 3.5 and higher (tested against 3.5, 3.6, 3.7, and 3.8). It also still works with Python 2.7. ## Packaging, Installation Magic Wormhole packages are included in many operating systems. [![Packaging status](https://repology.org/badge/vertical-allrepos/magic-wormhole.svg)](https://repology.org/project/magic-wormhole/versions) To install it without an OS package, follow the [Installation docs](https://magic-wormhole.readthedocs.io/en/latest/welcome.html#installation). 
Platform: UNKNOWN Classifier: Development Status :: 4 - Beta Classifier: Environment :: Console Classifier: License :: OSI Approved :: MIT License Classifier: Programming Language :: Python :: 2 Classifier: Programming Language :: Python :: 2.7 Classifier: Programming Language :: Python :: 3 Classifier: Programming Language :: Python :: 3.5 Classifier: Programming Language :: Python :: 3.6 Classifier: Programming Language :: Python :: 3.7 Classifier: Programming Language :: Python :: 3.8 Classifier: Programming Language :: Python :: Implementation :: CPython Classifier: Topic :: Security :: Cryptography Classifier: Topic :: System :: Networking Classifier: Topic :: System :: Systems Administration Classifier: Topic :: Utilities Description-Content-Type: text/markdown Provides-Extra: dilate Provides-Extra: dev magic-wormhole-0.12.0/src/magic_wormhole.egg-info/SOURCES.txt000066400000000000000000000075561400712516500236630ustar00rootroot00000000000000.coveragerc LICENSE MANIFEST.in NEWS.md README.md setup.cfg setup.py snapcraft.yaml tox.ini versioneer.py docs/Makefile docs/api.md docs/attacks.md docs/client-protocol.md docs/conf.py docs/dilation-protocol.md docs/file-transfer-protocol.md docs/index.rst docs/introduction.md docs/journal.md docs/server-protocol.md docs/tor.md docs/transit.md docs/w.dot docs/welcome.md docs/wormhole.1 docs/state-machines/Makefile docs/state-machines/_connection.dot docs/state-machines/allocator.dot docs/state-machines/boss.dot docs/state-machines/code.dot docs/state-machines/dilation.dot docs/state-machines/input.dot docs/state-machines/key.dot docs/state-machines/lister.dot docs/state-machines/machines.dot docs/state-machines/mailbox.dot docs/state-machines/nameplate.dot docs/state-machines/order.dot docs/state-machines/receive.dot docs/state-machines/send.dot docs/state-machines/terminator.dot misc/demo-journal.py misc/dump-stats.py misc/dump-timing.py misc/windows-build.cmd misc/web/timeline.css misc/web/timeline.html misc/web/timeline.js pyi/build-exe pyi/wormhole.exe.spec src/magic_wormhole.egg-info/PKG-INFO src/magic_wormhole.egg-info/SOURCES.txt src/magic_wormhole.egg-info/dependency_links.txt src/magic_wormhole.egg-info/entry_points.txt src/magic_wormhole.egg-info/requires.txt src/magic_wormhole.egg-info/top_level.txt src/wormhole/__init__.py src/wormhole/__main__.py src/wormhole/_allocator.py src/wormhole/_boss.py src/wormhole/_code.py src/wormhole/_hints.py src/wormhole/_input.py src/wormhole/_interfaces.py src/wormhole/_key.py src/wormhole/_lister.py src/wormhole/_mailbox.py src/wormhole/_nameplate.py src/wormhole/_order.py src/wormhole/_receive.py src/wormhole/_rendezvous.py src/wormhole/_rlcompleter.py src/wormhole/_send.py src/wormhole/_terminator.py src/wormhole/_version.py src/wormhole/_wordlist.py src/wormhole/errors.py src/wormhole/eventual.py src/wormhole/ipaddrs.py src/wormhole/journal.py src/wormhole/observer.py src/wormhole/timing.py src/wormhole/tor_manager.py src/wormhole/transit.py src/wormhole/util.py src/wormhole/wormhole.py src/wormhole/xfer_util.py src/wormhole/_dilation/__init__.py src/wormhole/_dilation/_noise.py src/wormhole/_dilation/connection.py src/wormhole/_dilation/connector.py src/wormhole/_dilation/encode.py src/wormhole/_dilation/inbound.py src/wormhole/_dilation/manager.py src/wormhole/_dilation/outbound.py src/wormhole/_dilation/roles.py src/wormhole/_dilation/subchannel.py src/wormhole/cli/__init__.py src/wormhole/cli/cli.py src/wormhole/cli/cmd_receive.py src/wormhole/cli/cmd_send.py src/wormhole/cli/cmd_ssh.py 
src/wormhole/cli/public_relay.py src/wormhole/cli/welcome.py src/wormhole/test/__init__.py src/wormhole/test/common.py src/wormhole/test/run_trial.py src/wormhole/test/test_args.py src/wormhole/test/test_cli.py src/wormhole/test/test_eventual.py src/wormhole/test/test_hints.py src/wormhole/test/test_hkdf.py src/wormhole/test/test_ipaddrs.py src/wormhole/test/test_journal.py src/wormhole/test/test_keys.py src/wormhole/test/test_machines.py src/wormhole/test/test_observer.py src/wormhole/test/test_rlcompleter.py src/wormhole/test/test_ssh.py src/wormhole/test/test_tor_manager.py src/wormhole/test/test_transit.py src/wormhole/test/test_util.py src/wormhole/test/test_wordlist.py src/wormhole/test/test_wormhole.py src/wormhole/test/test_xfer_util.py src/wormhole/test/dilate/__init__.py src/wormhole/test/dilate/common.py src/wormhole/test/dilate/test_connect.py src/wormhole/test/dilate/test_connection.py src/wormhole/test/dilate/test_connector.py src/wormhole/test/dilate/test_encoding.py src/wormhole/test/dilate/test_endpoints.py src/wormhole/test/dilate/test_framer.py src/wormhole/test/dilate/test_full.py src/wormhole/test/dilate/test_inbound.py src/wormhole/test/dilate/test_manager.py src/wormhole/test/dilate/test_outbound.py src/wormhole/test/dilate/test_parse.py src/wormhole/test/dilate/test_record.py src/wormhole/test/dilate/test_subchannel.pymagic-wormhole-0.12.0/src/magic_wormhole.egg-info/dependency_links.txt000066400000000000000000000000011400712516500260300ustar00rootroot00000000000000 magic-wormhole-0.12.0/src/magic_wormhole.egg-info/entry_points.txt000066400000000000000000000000701400712516500252550ustar00rootroot00000000000000[console_scripts] wormhole = wormhole.cli.cli:wormhole magic-wormhole-0.12.0/src/magic_wormhole.egg-info/requires.txt000066400000000000000000000004521400712516500243630ustar00rootroot00000000000000spake2==0.8 pynacl six attrs>=16.3.0 twisted[tls]>=17.5.0 autobahn[twisted]>=0.14.1 automat hkdf tqdm>=4.13.0 click humanize txtorcon>=18.0.2 [:sys_platform=="win32"] pywin32 [dev] mock tox pyflakes magic-wormhole-transit-relay==0.1.2 magic-wormhole-mailbox-server==0.3.1 [dilate] noiseprotocol magic-wormhole-0.12.0/src/magic_wormhole.egg-info/top_level.txt000066400000000000000000000000111400712516500245040ustar00rootroot00000000000000wormhole magic-wormhole-0.12.0/src/wormhole/000077500000000000000000000000001400712516500171305ustar00rootroot00000000000000magic-wormhole-0.12.0/src/wormhole/__init__.py000066400000000000000000000002301400712516500212340ustar00rootroot00000000000000from ._rlcompleter import input_with_completion from .wormhole import create, __version__ __all__ = ["create", "input_with_completion", "__version__"] magic-wormhole-0.12.0/src/wormhole/__main__.py000066400000000000000000000003351400712516500212230ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals if __name__ == "__main__": from .cli import cli cli.wormhole() else: # raise ImportError('this module should not be imported') pass magic-wormhole-0.12.0/src/wormhole/_allocator.py000066400000000000000000000052161400712516500216250ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals from attr import attrib, attrs from attr.validators import provides from automat import MethodicalMachine from zope.interface import implementer from . 
import _interfaces @attrs @implementer(_interfaces.IAllocator) class Allocator(object): _timing = attrib(validator=provides(_interfaces.ITiming)) m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def wire(self, rendezvous_connector, code): self._RC = _interfaces.IRendezvousConnector(rendezvous_connector) self._C = _interfaces.ICode(code) @m.state(initial=True) def S0A_idle(self): pass # pragma: no cover @m.state() def S0B_idle_connected(self): pass # pragma: no cover @m.state() def S1A_allocating(self): pass # pragma: no cover @m.state() def S1B_allocating_connected(self): pass # pragma: no cover @m.state() def S2_done(self): pass # pragma: no cover # from Code @m.input() def allocate(self, length, wordlist): pass # from RendezvousConnector @m.input() def connected(self): pass @m.input() def lost(self): pass @m.input() def rx_allocated(self, nameplate): pass @m.output() def stash(self, length, wordlist): self._length = length self._wordlist = _interfaces.IWordlist(wordlist) @m.output() def stash_and_RC_rx_allocate(self, length, wordlist): self._length = length self._wordlist = _interfaces.IWordlist(wordlist) self._RC.tx_allocate() @m.output() def RC_tx_allocate(self): self._RC.tx_allocate() @m.output() def build_and_notify(self, nameplate): words = self._wordlist.choose_words(self._length) code = nameplate + "-" + words self._C.allocated(nameplate, code) S0A_idle.upon(connected, enter=S0B_idle_connected, outputs=[]) S0B_idle_connected.upon(lost, enter=S0A_idle, outputs=[]) S0A_idle.upon(allocate, enter=S1A_allocating, outputs=[stash]) S0B_idle_connected.upon( allocate, enter=S1B_allocating_connected, outputs=[stash_and_RC_rx_allocate]) S1A_allocating.upon( connected, enter=S1B_allocating_connected, outputs=[RC_tx_allocate]) S1B_allocating_connected.upon(lost, enter=S1A_allocating, outputs=[]) S1B_allocating_connected.upon( rx_allocated, enter=S2_done, outputs=[build_and_notify]) S2_done.upon(connected, enter=S2_done, outputs=[]) S2_done.upon(lost, enter=S2_done, outputs=[]) magic-wormhole-0.12.0/src/wormhole/_boss.py000066400000000000000000000374221400712516500206170ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals import re import six from attr import attrib, attrs from attr.validators import instance_of, optional, provides from automat import MethodicalMachine from twisted.python import log from zope.interface import implementer from . 
import _interfaces from ._allocator import Allocator from ._code import Code, validate_code from ._dilation.manager import Dilator from ._input import Input from ._key import Key from ._lister import Lister from ._mailbox import Mailbox from ._nameplate import Nameplate from ._order import Order from ._receive import Receive from ._rendezvous import RendezvousConnector from ._send import Send from ._terminator import Terminator from ._wordlist import PGPWordList from .errors import (LonelyError, OnlyOneCodeError, ServerError, WelcomeError, WrongPasswordError, _UnknownPhaseError) from .util import bytes_to_dict @attrs @implementer(_interfaces.IBoss) class Boss(object): _W = attrib() _side = attrib(validator=instance_of(type(u""))) _url = attrib(validator=instance_of(type(u""))) _appid = attrib(validator=instance_of(type(u""))) _versions = attrib(validator=instance_of(dict)) _client_version = attrib(validator=instance_of(tuple)) _reactor = attrib() _eventual_queue = attrib() _cooperator = attrib() _journal = attrib(validator=provides(_interfaces.IJournal)) _tor = attrib(validator=optional(provides(_interfaces.ITorManager))) _timing = attrib(validator=provides(_interfaces.ITiming)) m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def __attrs_post_init__(self): self._build_workers() self._init_other_state() def _build_workers(self): self._N = Nameplate() self._M = Mailbox(self._side) self._S = Send(self._side, self._timing) self._O = Order(self._side, self._timing) self._K = Key(self._appid, self._versions, self._side, self._timing) self._R = Receive(self._side, self._timing) self._RC = RendezvousConnector(self._url, self._appid, self._side, self._reactor, self._journal, self._tor, self._timing, self._client_version) self._L = Lister(self._timing) self._A = Allocator(self._timing) self._I = Input(self._timing) self._C = Code(self._timing) self._T = Terminator() self._D = Dilator(self._reactor, self._eventual_queue, self._cooperator) self._N.wire(self._M, self._I, self._RC, self._T) self._M.wire(self._N, self._RC, self._O, self._T) self._S.wire(self._M) self._O.wire(self._K, self._R) self._K.wire(self, self._M, self._R) self._R.wire(self, self._S) self._RC.wire(self, self._N, self._M, self._A, self._L, self._T) self._L.wire(self._RC, self._I) self._A.wire(self._RC, self._C) self._I.wire(self._C, self._L) self._C.wire(self, self._A, self._N, self._K, self._I) self._T.wire(self, self._RC, self._N, self._M, self._D) self._D.wire(self._S, self._T) def _init_other_state(self): self._did_start_code = False self._next_tx_phase = 0 self._next_rx_phase = 0 self._rx_phases = {} # phase -> plaintext self._next_rx_dilate_seqnum = 0 self._rx_dilate_seqnums = {} # seqnum -> plaintext self._result = "empty" # these methods are called from outside def start(self): self._RC.start() def _print_trace(self, old_state, input, new_state, client_name, machine, file): if new_state: print( "%s.%s[%s].%s -> [%s]" % (client_name, machine, old_state, input, new_state), file=file) else: # the RendezvousConnector emits message events as if # they were state transitions, except that old_state # and new_state are empty strings. "input" is one of # R.connected, R.rx(type phase+side), R.tx(type # phase), R.lost . 
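# For example (illustrative values, not from a real run), a normal
# state transition prints something like
#   left.B[S0_empty].got_code -> [S1_lonely]
# while these RendezvousConnector message events print with the
# bracketed states left out.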
print("%s.%s.%s" % (client_name, machine, input), file=file) file.flush() def output_tracer(output): print(" %s.%s.%s()" % (client_name, machine, output), file=file) file.flush() return output_tracer def _set_trace(self, client_name, which, file): names = { "B": self, "N": self._N, "M": self._M, "S": self._S, "O": self._O, "K": self._K, "SK": self._K._SK, "R": self._R, "RC": self._RC, "L": self._L, "A": self._A, "I": self._I, "C": self._C, "T": self._T } for machine in which.split(): t = (lambda old_state, input, new_state, machine=machine: self._print_trace(old_state, input, new_state, client_name=client_name, machine=machine, file=file)) names[machine].set_trace(t) if machine == "I": self._I.set_debug(t) # def serialize(self): # raise NotImplemented # and these are the state-machine transition functions, which don't take # args @m.state(initial=True) def S0_empty(self): pass # pragma: no cover @m.state() def S1_lonely(self): pass # pragma: no cover @m.state() def S2_happy(self): pass # pragma: no cover @m.state() def S3_closing(self): pass # pragma: no cover @m.state(terminal=True) def S4_closed(self): pass # pragma: no cover # from the Wormhole # input/allocate/set_code are regular methods, not state-transition # inputs. We expect them to be called just after initialization, while # we're in the S0_empty state. You must call exactly one of them, and the # call must happen while we're in S0_empty, which makes them good # candidates for being a proper @m.input, but set_code() will immediately # (reentrantly) cause self.got_code() to be fired, which is messy. These # are all passthroughs to the Code machine, so one alternative would be # to have Wormhole call Code.{input,allocate,set_code} instead, but that # would require the Wormhole to be aware of Code (whereas right now # Wormhole only knows about this Boss instance, and everything else is # hidden away). def input_code(self): if self._did_start_code: raise OnlyOneCodeError() self._did_start_code = True return self._C.input_code() def allocate_code(self, code_length): if self._did_start_code: raise OnlyOneCodeError() self._did_start_code = True wl = PGPWordList() self._C.allocate_code(code_length, wl) def set_code(self, code): validate_code(code) # can raise KeyFormatError if self._did_start_code: raise OnlyOneCodeError() self._did_start_code = True self._C.set_code(code) def dilate(self, transit_relay_location=None, no_listen=False): return self._D.dilate(transit_relay_location, no_listen=no_listen) # fires with endpoints @m.input() def send(self, plaintext): pass @m.input() def close(self): pass # from RendezvousConnector: # * "rx_welcome" is the Welcome message, which might signal an error, or # our welcome_handler might signal one # * "rx_error" is error message from the server (probably because of # something we said badly, or due to CrowdedError) # * "error" is when an exception happened while it tried to deliver # something else def rx_welcome(self, welcome): try: if "error" in welcome: raise WelcomeError(welcome["error"]) # TODO: it'd be nice to not call the handler when we're in # S3_closing or S4_closed states. I tried to implement this with # rx_welcome as an @input, but in the error case I'd be # delivering a new input (rx_error or something) while in the # middle of processing the rx_welcome input, and I wasn't sure # Automat would handle that correctly. self._W.got_welcome(welcome) # TODO: let this raise WelcomeError? 
except WelcomeError as welcome_error: self.rx_unwelcome(welcome_error) @m.input() def rx_unwelcome(self, welcome_error): pass @m.input() def rx_error(self, errmsg, orig): pass @m.input() def error(self, err): pass # from Code (provoked by input/allocate/set_code) @m.input() def got_code(self, code): pass # Key sends (got_key, scared) # Receive sends (got_message, happy, got_verifier, scared) @m.input() def happy(self): pass @m.input() def scared(self): pass def got_message(self, phase, plaintext): assert isinstance(phase, type("")), type(phase) assert isinstance(plaintext, type(b"")), type(plaintext) d_mo = re.search(r'^dilate-(\d+)$', phase) if phase == "version": self._got_version(plaintext) elif d_mo: self._got_dilate(int(d_mo.group(1)), plaintext) elif re.search(r'^\d+$', phase): self._got_phase(int(phase), plaintext) else: # Ignore unrecognized phases, for forwards-compatibility. Use # log.err so tests will catch surprises. log.err(_UnknownPhaseError("received unknown phase '%s'" % phase)) @m.input() def _got_version(self, plaintext): pass @m.input() def _got_phase(self, phase, plaintext): pass @m.input() def _got_dilate(self, seqnum, plaintext): pass @m.input() def got_key(self, key): pass @m.input() def got_verifier(self, verifier): pass # Terminator sends closed @m.input() def closed(self): pass @m.output() def do_got_code(self, code): self._W.got_code(code) @m.output() def process_version(self, plaintext): # most of this is wormhole-to-wormhole, ignored for now # in the future, this is how Dilation is signalled self._their_versions = bytes_to_dict(plaintext) self._D.got_wormhole_versions(self._their_versions) # but this part is app-to-app app_versions = self._their_versions.get("app_versions", {}) self._W.got_versions(app_versions) @m.output() def S_send(self, plaintext): assert isinstance(plaintext, type(b"")), type(plaintext) phase = self._next_tx_phase self._next_tx_phase += 1 self._S.send("%d" % phase, plaintext) @m.output() def close_unwelcome(self, welcome_error): # assert isinstance(err, WelcomeError) self._result = welcome_error self._T.close("unwelcome") @m.output() def close_error(self, errmsg, orig): self._result = ServerError(errmsg) self._T.close("errory") @m.output() def close_scared(self): self._result = WrongPasswordError() self._T.close("scary") @m.output() def close_lonely(self): self._result = LonelyError() self._T.close("lonely") @m.output() def close_happy(self): self._result = "happy" self._T.close("happy") @m.output() def W_got_key(self, key): self._W.got_key(key) @m.output() def D_got_key(self, key): self._D.got_key(key) @m.output() def W_got_verifier(self, verifier): self._W.got_verifier(verifier) @m.output() def W_received(self, phase, plaintext): assert isinstance(phase, six.integer_types), type(phase) # we call Wormhole.received() in strict phase order, with no gaps self._rx_phases[phase] = plaintext while self._next_rx_phase in self._rx_phases: self._W.received(self._rx_phases.pop(self._next_rx_phase)) self._next_rx_phase += 1 @m.output() def D_received_dilate(self, seqnum, plaintext): assert isinstance(seqnum, six.integer_types), type(seqnum) # strict phase order, no gaps self._rx_dilate_seqnums[seqnum] = plaintext while self._next_rx_dilate_seqnum in self._rx_dilate_seqnums: m = self._rx_dilate_seqnums.pop(self._next_rx_dilate_seqnum) self._D.received_dilate(m) self._next_rx_dilate_seqnum += 1 @m.output() def W_close_with_error(self, err): self._result = err # exception self._W.closed(self._result) @m.output() def W_closed(self): # result is either 
"happy" or a WormholeError of some sort self._W.closed(self._result) S0_empty.upon(close, enter=S3_closing, outputs=[close_lonely]) S0_empty.upon(send, enter=S0_empty, outputs=[S_send]) S0_empty.upon(rx_unwelcome, enter=S3_closing, outputs=[close_unwelcome]) S0_empty.upon(got_code, enter=S1_lonely, outputs=[do_got_code]) S0_empty.upon(rx_error, enter=S3_closing, outputs=[close_error]) S0_empty.upon(error, enter=S4_closed, outputs=[W_close_with_error]) S1_lonely.upon(rx_unwelcome, enter=S3_closing, outputs=[close_unwelcome]) S1_lonely.upon(happy, enter=S2_happy, outputs=[]) S1_lonely.upon(scared, enter=S3_closing, outputs=[close_scared]) S1_lonely.upon(close, enter=S3_closing, outputs=[close_lonely]) S1_lonely.upon(send, enter=S1_lonely, outputs=[S_send]) S1_lonely.upon(got_key, enter=S1_lonely, outputs=[W_got_key, D_got_key]) S1_lonely.upon(rx_error, enter=S3_closing, outputs=[close_error]) S1_lonely.upon(error, enter=S4_closed, outputs=[W_close_with_error]) S2_happy.upon(rx_unwelcome, enter=S3_closing, outputs=[close_unwelcome]) S2_happy.upon(got_verifier, enter=S2_happy, outputs=[W_got_verifier]) S2_happy.upon(_got_phase, enter=S2_happy, outputs=[W_received]) S2_happy.upon(_got_version, enter=S2_happy, outputs=[process_version]) S2_happy.upon(_got_dilate, enter=S2_happy, outputs=[D_received_dilate]) S2_happy.upon(scared, enter=S3_closing, outputs=[close_scared]) S2_happy.upon(close, enter=S3_closing, outputs=[close_happy]) S2_happy.upon(send, enter=S2_happy, outputs=[S_send]) S2_happy.upon(rx_error, enter=S3_closing, outputs=[close_error]) S2_happy.upon(error, enter=S4_closed, outputs=[W_close_with_error]) S3_closing.upon(rx_unwelcome, enter=S3_closing, outputs=[]) S3_closing.upon(rx_error, enter=S3_closing, outputs=[]) S3_closing.upon(got_verifier, enter=S3_closing, outputs=[]) S3_closing.upon(_got_phase, enter=S3_closing, outputs=[]) S3_closing.upon(_got_version, enter=S3_closing, outputs=[]) S3_closing.upon(_got_dilate, enter=S3_closing, outputs=[]) S3_closing.upon(happy, enter=S3_closing, outputs=[]) S3_closing.upon(scared, enter=S3_closing, outputs=[]) S3_closing.upon(close, enter=S3_closing, outputs=[]) S3_closing.upon(send, enter=S3_closing, outputs=[]) S3_closing.upon(closed, enter=S4_closed, outputs=[W_closed]) S3_closing.upon(error, enter=S4_closed, outputs=[W_close_with_error]) S4_closed.upon(rx_unwelcome, enter=S4_closed, outputs=[]) S4_closed.upon(got_verifier, enter=S4_closed, outputs=[]) S4_closed.upon(_got_phase, enter=S4_closed, outputs=[]) S4_closed.upon(_got_version, enter=S4_closed, outputs=[]) S4_closed.upon(_got_dilate, enter=S4_closed, outputs=[]) S4_closed.upon(happy, enter=S4_closed, outputs=[]) S4_closed.upon(scared, enter=S4_closed, outputs=[]) S4_closed.upon(close, enter=S4_closed, outputs=[]) S4_closed.upon(send, enter=S4_closed, outputs=[]) S4_closed.upon(error, enter=S4_closed, outputs=[]) magic-wormhole-0.12.0/src/wormhole/_code.py000066400000000000000000000067001400712516500205560ustar00rootroot00000000000000from __future__ import print_function, absolute_import, unicode_literals from zope.interface import implementer from attr import attrs, attrib from attr.validators import provides from automat import MethodicalMachine from . import _interfaces from ._nameplate import validate_nameplate from .errors import KeyFormatError def validate_code(code): if ' ' in code: raise KeyFormatError("Code '%s' contains spaces." 
% code) nameplate = code.split("-", 2)[0] validate_nameplate(nameplate) # can raise KeyFormatError def first(outputs): return list(outputs)[0] @attrs @implementer(_interfaces.ICode) class Code(object): _timing = attrib(validator=provides(_interfaces.ITiming)) m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def wire(self, boss, allocator, nameplate, key, input): self._B = _interfaces.IBoss(boss) self._A = _interfaces.IAllocator(allocator) self._N = _interfaces.INameplate(nameplate) self._K = _interfaces.IKey(key) self._I = _interfaces.IInput(input) @m.state(initial=True) def S0_idle(self): pass # pragma: no cover @m.state() def S1_inputting_nameplate(self): pass # pragma: no cover @m.state() def S2_inputting_words(self): pass # pragma: no cover @m.state() def S3_allocating(self): pass # pragma: no cover @m.state() def S4_known(self): pass # pragma: no cover # from App @m.input() def allocate_code(self, length, wordlist): pass @m.input() def input_code(self): pass def set_code(self, code): validate_code(code) # can raise KeyFormatError self._set_code(code) @m.input() def _set_code(self, code): pass # from Allocator @m.input() def allocated(self, nameplate, code): pass # from Input @m.input() def got_nameplate(self, nameplate): pass @m.input() def finished_input(self, code): pass @m.output() def do_set_code(self, code): nameplate = code.split("-", 2)[0] self._N.set_nameplate(nameplate) self._B.got_code(code) self._K.got_code(code) @m.output() def do_start_input(self): return self._I.start() @m.output() def do_middle_input(self, nameplate): self._N.set_nameplate(nameplate) @m.output() def do_finish_input(self, code): self._B.got_code(code) self._K.got_code(code) @m.output() def do_start_allocate(self, length, wordlist): self._A.allocate(length, wordlist) @m.output() def do_finish_allocate(self, nameplate, code): assert code.startswith(nameplate + "-"), (nameplate, code) self._N.set_nameplate(nameplate) self._B.got_code(code) self._K.got_code(code) S0_idle.upon(_set_code, enter=S4_known, outputs=[do_set_code]) S0_idle.upon( input_code, enter=S1_inputting_nameplate, outputs=[do_start_input], collector=first) S1_inputting_nameplate.upon( got_nameplate, enter=S2_inputting_words, outputs=[do_middle_input]) S2_inputting_words.upon( finished_input, enter=S4_known, outputs=[do_finish_input]) S0_idle.upon( allocate_code, enter=S3_allocating, outputs=[do_start_allocate]) S3_allocating.upon(allocated, enter=S4_known, outputs=[do_finish_allocate]) magic-wormhole-0.12.0/src/wormhole/_dilation/000077500000000000000000000000001400712516500210725ustar00rootroot00000000000000magic-wormhole-0.12.0/src/wormhole/_dilation/__init__.py000066400000000000000000000000001400712516500231710ustar00rootroot00000000000000magic-wormhole-0.12.0/src/wormhole/_dilation/_noise.py000066400000000000000000000006601400712516500227220ustar00rootroot00000000000000try: from noise.exceptions import NoiseInvalidMessage except ImportError: class NoiseInvalidMessage(Exception): pass try: from noise.exceptions import NoiseHandshakeError except ImportError: class NoiseHandshakeError(Exception): pass try: from noise.connection import NoiseConnection except ImportError: # allow imports to work on py2.7, even if dilation doesn't NoiseConnection = None magic-wormhole-0.12.0/src/wormhole/_dilation/connection.py000066400000000000000000000525141400712516500236120ustar00rootroot00000000000000from __future__ import print_function, unicode_literals from collections import namedtuple import six from 
attr import attrs, attrib from attr.validators import instance_of, provides from automat import MethodicalMachine from zope.interface import Interface, implementer from twisted.python import log from twisted.internet.protocol import Protocol from twisted.internet.interfaces import ITransport from .._interfaces import IDilationConnector from ..observer import OneShotObserver from .encode import to_be4, from_be4 from .roles import LEADER, FOLLOWER from ._noise import NoiseInvalidMessage, NoiseHandshakeError # InboundFraming is given data and returns Frames (Noise wire-side # bytestrings). It handles the relay handshake and the prologue. The Frames it # returns are either the ephemeral key (the Noise "handshake") or ciphertext # messages. # The next object up knows whether it's expecting a Handshake or a message. It # feeds the first into Noise as a handshake and feeds the rest into Noise as a # message (which produces a plaintext stream). It emits tokens that are either # "i've finished with the handshake (so you can send the KCM if you want)", or # "here is a decrypted message (which might be the KCM)". # the transmit direction goes directly to transport.write, and doesn't touch # the state machine. we can do this because the way we encode/encrypt/frame # things doesn't depend upon the receiver state. It would be safer to, e.g., # prohibit sending ciphertext frames unless we're in the received-handshake # state, but then we'll be in the middle of an inbound state transition ("we # just received the handshake, so you can send a KCM now") when we perform an # operation that depends upon the state (send_plaintext(kcm)), which is not a # coherent/safe place to touch the state machine. # we could set a flag and test it from inside send_plaintext, which kind of # violates the state machine owning the state (ideally all "if" statements # would be translated into same-input transitions from different starting # states). For the specific question of sending plaintext frames, Noise will # refuse us unless it's ready anyway, so the question is probably moot. class IFramer(Interface): pass class IRecord(Interface): pass def first(l): return l[0] class Disconnect(Exception): pass # all connections look like: # (step 1: only for outbound connections) # 1: if we're connecting to a transit relay: # * send "sided relay handshake": "please relay TOKEN for side SIDE\n" # * the relay will send "ok\n" if/when our peer connects # * a non-relay will probably send junk # * wait for "ok\n", hang up if we get anything different # (all subsequent steps are for both inbound and outbound connections) # 2: send PROLOGUE_LEADER/FOLLOWER: "Magic-Wormhole Dilation Handshake v1 (l/f)\n\n" # 3: wait for the opposite PROLOGUE string, else hang up # (everything past this point is a Frame, with be4 length prefix. Frames are # either noise handshake or an encrypted message) # 4: if LEADER, send noise handshake string. if FOLLOWER, wait for it # LEADER: m=n.write_message(), FOLLOWER: n.read_message(m) # 5: if FOLLOWER, send noise response string.
if LEADER, wait for it # FOLLOWER: m=n.write_message(), LEADER: n.read_message(m) # 6: if FOLLOWER: send KCM (m=n.encrypt('')), wait for KCM (n.decrypt(m)) # if LEADER: wait for KCM, gather viable connections, select # send KCM over selected connection, drop the rest # 7: both: send Ping/Pong/Open/Data/Close/Ack records (n.encrypt(rec)) RelayOK = namedtuple("RelayOk", []) Prologue = namedtuple("Prologue", []) Frame = namedtuple("Frame", ["frame"]) @attrs @implementer(IFramer) class _Framer(object): _transport = attrib(validator=provides(ITransport)) _outbound_prologue = attrib(validator=instance_of(bytes)) _inbound_prologue = attrib(validator=instance_of(bytes)) _buffer = b"" _can_send_frames = False # in: use_relay # in: connectionMade, dataReceived # out: prologue_received, frame_received # out (shared): transport.loseConnection # out (shared): transport.write (relay handshake, prologue) # states: want_relay, want_prologue, want_frame m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover @m.state() def want_relay(self): pass # pragma: no cover @m.state(initial=True) def want_prologue(self): pass # pragma: no cover @m.state() def want_frame(self): pass # pragma: no cover @m.input() def use_relay(self, relay_handshake): pass @m.input() def connectionMade(self): pass @m.input() def parse(self): pass @m.input() def got_relay_ok(self): pass @m.input() def got_prologue(self): pass @m.output() def store_relay_handshake(self, relay_handshake): self._outbound_relay_handshake = relay_handshake self._expected_relay_handshake = b"ok\n" # TODO: make this configurable @m.output() def send_relay_handshake(self): self._transport.write(self._outbound_relay_handshake) @m.output() def send_prologue(self): self._transport.write(self._outbound_prologue) @m.output() def parse_relay_ok(self): if self._get_expected("relay_ok", self._expected_relay_handshake): return RelayOK() @m.output() def parse_prologue(self): if self._get_expected("prologue", self._inbound_prologue): return Prologue() @m.output() def can_send_frames(self): self._can_send_frames = True # for assertion in send_frame() @m.output() def parse_frame(self): if len(self._buffer) < 4: return None frame_length = from_be4(self._buffer[0:4]) if len(self._buffer) < 4 + frame_length: return None frame = self._buffer[4:4 + frame_length] self._buffer = self._buffer[4 + frame_length:] # TODO: avoid copy return Frame(frame=frame) want_prologue.upon(use_relay, outputs=[store_relay_handshake], enter=want_relay) want_relay.upon(connectionMade, outputs=[send_relay_handshake], enter=want_relay) want_relay.upon(parse, outputs=[parse_relay_ok], enter=want_relay, collector=first) want_relay.upon(got_relay_ok, outputs=[send_prologue], enter=want_prologue) want_prologue.upon(connectionMade, outputs=[send_prologue], enter=want_prologue) want_prologue.upon(parse, outputs=[parse_prologue], enter=want_prologue, collector=first) want_prologue.upon(got_prologue, outputs=[can_send_frames], enter=want_frame) want_frame.upon(parse, outputs=[parse_frame], enter=want_frame, collector=first) def _get_expected(self, name, expected): lb = len(self._buffer) le = len(expected) if self._buffer.startswith(expected): # if the buffer starts with the expected string, consume it and # return True self._buffer = self._buffer[le:] return True if not expected.startswith(self._buffer): # we're not on track: the data we've received so far does not # match the expected value, so this can't possibly be right. 
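# (Worked example with the b"ok\n" relay handshake: a buffer of b"o"
# is still a prefix of the expected value, so we wait for more data;
# b"ok\nXYZ" consumes b"ok\n" and returns True; a buffer of b"no" can
# never grow into b"ok\n", so it will eventually trigger Disconnect.)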
# Don't complain until we see the expected length, or a newline, # so we can capture the weird input in the log for debugging. if (b"\n" in self._buffer or lb >= le): log.msg("bad {}: {}".format(name, self._buffer[:le])) raise Disconnect() return False # wait a bit longer # good so far, just waiting for the rest return False # external API is: connectionMade, add_and_parse, and send_frame def add_and_parse(self, data): # we can't make this an @m.input because we can't change the state # from within an input. Instead, let the state choose the parser to # use, then use the parsed token to drive a state transition. self._buffer += data while True: # it'd be nice to use an iterator here, but since self.parse() # dispatches to a different parser (depending upon the current # state), we'd be using multiple iterators token = self.parse() if isinstance(token, RelayOK): self.got_relay_ok() elif isinstance(token, Prologue): self.got_prologue() yield token # triggers send_handshake elif isinstance(token, Frame): yield token else: break def send_frame(self, frame): assert self._can_send_frames self._transport.write(to_be4(len(frame)) + frame) # RelayOK: Newline-terminated buddy-is-connected response from Relay. # First data received from relay. # Prologue: double-newline-terminated this-is-really-wormhole response # from peer. First data received from peer. # Frame: Either handshake or encrypted message. Length-prefixed on wire. # Handshake: the Noise ephemeral key, first framed message # Message: plaintext: encoded KCM/PING/PONG/OPEN/DATA/CLOSE/ACK # KCM: Key Confirmation Message (encrypted b"\x00"). First frame # from peer. Sent immediately by Follower, after Selection by Leader. # Record: namedtuple of KCM/Open/Data/Close/Ack/Ping/Pong Handshake = namedtuple("Handshake", []) # decrypted frames: produces KCM, Ping, Pong, Open, Data, Close, Ack KCM = namedtuple("KCM", []) Ping = namedtuple("Ping", ["ping_id"]) # ping_id is arbitrary 4-byte value Pong = namedtuple("Pong", ["ping_id"]) Open = namedtuple("Open", ["seqnum", "scid"]) # seqnum is integer Data = namedtuple("Data", ["seqnum", "scid", "data"]) Close = namedtuple("Close", ["seqnum", "scid"]) # scid is integer Ack = namedtuple("Ack", ["resp_seqnum"]) # resp_seqnum is integer Records = (KCM, Ping, Pong, Open, Data, Close, Ack) Handshake_or_Records = (Handshake,) + Records T_KCM = b"\x00" T_PING = b"\x01" T_PONG = b"\x02" T_OPEN = b"\x03" T_DATA = b"\x04" T_CLOSE = b"\x05" T_ACK = b"\x06" def parse_record(plaintext): msgtype = plaintext[0:1] if msgtype == T_KCM: return KCM() if msgtype == T_PING: ping_id = plaintext[1:5] return Ping(ping_id) if msgtype == T_PONG: ping_id = plaintext[1:5] return Pong(ping_id) if msgtype == T_OPEN: scid = from_be4(plaintext[1:5]) seqnum = from_be4(plaintext[5:9]) return Open(seqnum, scid) if msgtype == T_DATA: scid = from_be4(plaintext[1:5]) seqnum = from_be4(plaintext[5:9]) data = plaintext[9:] return Data(seqnum, scid, data) if msgtype == T_CLOSE: scid = from_be4(plaintext[1:5]) seqnum = from_be4(plaintext[5:9]) return Close(seqnum, scid) if msgtype == T_ACK: resp_seqnum = from_be4(plaintext[1:5]) return Ack(resp_seqnum) log.err("received unknown message type: {}".format(plaintext)) raise ValueError() def encode_record(r): if isinstance(r, KCM): return b"\x00" if isinstance(r, Ping): return b"\x01" + r.ping_id if isinstance(r, Pong): return b"\x02" + r.ping_id if isinstance(r, Open): assert isinstance(r.scid, six.integer_types) assert isinstance(r.seqnum, six.integer_types) return b"\x03" + to_be4(r.scid) + 
to_be4(r.seqnum) if isinstance(r, Data): assert isinstance(r.scid, six.integer_types) assert isinstance(r.seqnum, six.integer_types) return b"\x04" + to_be4(r.scid) + to_be4(r.seqnum) + r.data if isinstance(r, Close): assert isinstance(r.scid, six.integer_types) assert isinstance(r.seqnum, six.integer_types) return b"\x05" + to_be4(r.scid) + to_be4(r.seqnum) if isinstance(r, Ack): assert isinstance(r.resp_seqnum, six.integer_types) return b"\x06" + to_be4(r.resp_seqnum) raise TypeError(r) def _is_role(_record, _attr, value): if value not in [LEADER, FOLLOWER]: raise ValueError("role must be LEADER or FOLLOWER") @attrs @implementer(IRecord) class _Record(object): _framer = attrib(validator=provides(IFramer)) _noise = attrib() _role = attrib(default="unspecified", validator=_is_role) # for debugging n = MethodicalMachine() # TODO: set_trace def __attrs_post_init__(self): self._noise.start_handshake() # in: role= # in: prologue_received, frame_received # out: handshake_received, record_received # out: transport.write (noise handshake, encrypted records) # states: want_prologue, want_handshake, want_record @n.state(initial=True) def no_role_set(self): pass # pragma: no cover @n.state() def want_prologue_leader(self): pass # pragma: no cover @n.state() def want_prologue_follower(self): pass # pragma: no cover @n.state() def want_handshake_leader(self): pass # pragma: no cover @n.state() def want_handshake_follower(self): pass # pragma: no cover @n.state() def want_message(self): pass # pragma: no cover @n.input() def set_role_leader(self): pass @n.input() def set_role_follower(self): pass @n.input() def got_prologue(self): pass @n.input() def got_frame(self, frame): pass @n.output() def ignore_and_send_handshake(self, frame): self._send_handshake() @n.output() def send_handshake(self): self._send_handshake() def _send_handshake(self): try: handshake = self._noise.write_message() # generate the ephemeral key except NoiseHandshakeError as e: log.err(e, "noise error during handshake") raise self._framer.send_frame(handshake) @n.output() def process_handshake(self, frame): try: payload = self._noise.read_message(frame) # Noise can include unencrypted data in the handshake, but we don't # use it del payload except NoiseInvalidMessage as e: log.err(e, "bad inbound noise handshake") raise Disconnect() return Handshake() @n.output() def decrypt_message(self, frame): try: message = self._noise.decrypt(frame) except NoiseInvalidMessage as e: # if this happens during tests, flunk the test log.err(e, "bad inbound noise frame") raise Disconnect() return parse_record(message) no_role_set.upon(set_role_leader, outputs=[], enter=want_prologue_leader) want_prologue_leader.upon(got_prologue, outputs=[send_handshake], enter=want_handshake_leader) want_handshake_leader.upon(got_frame, outputs=[process_handshake], collector=first, enter=want_message) no_role_set.upon(set_role_follower, outputs=[], enter=want_prologue_follower) want_prologue_follower.upon(got_prologue, outputs=[], enter=want_handshake_follower) want_handshake_follower.upon(got_frame, outputs=[process_handshake, ignore_and_send_handshake], collector=first, enter=want_message) want_message.upon(got_frame, outputs=[decrypt_message], collector=first, enter=want_message) # external API is: connectionMade, dataReceived, send_record def connectionMade(self): self._framer.connectionMade() def add_and_unframe(self, data): for token in self._framer.add_and_parse(data): if isinstance(token, Prologue): self.got_prologue() # triggers send_handshake else: assert 
isinstance(token, Frame) yield self.got_frame(token.frame) # Handshake or a Record type def send_record(self, r): message = encode_record(r) frame = self._noise.encrypt(message) self._framer.send_frame(frame) @attrs(cmp=False) class DilatedConnectionProtocol(Protocol, object): """I manage an L2 connection. When a new L2 connection is needed (as determined by the Leader), both Leader and Follower will initiate many simultaneous connections (probably TCP, but conceivably others). A subset will actually connect. A subset of those will successfully pass negotiation by exchanging handshakes to demonstrate knowledge of the session key. One of the negotiated connections will be selected by the Leader for active use, and the others will be dropped. At any given time, there is at most one active L2 connection. """ _eventual_queue = attrib(repr=False) _role = attrib() _description = attrib() _connector = attrib(validator=provides(IDilationConnector), repr=False) _noise = attrib(repr=False) _outbound_prologue = attrib(validator=instance_of(bytes), repr=False) _inbound_prologue = attrib(validator=instance_of(bytes), repr=False) _use_relay = False _relay_handshake = None m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def __attrs_post_init__(self): self._manager = None # set if/when we are selected self._disconnected = OneShotObserver(self._eventual_queue) self._can_send_records = False self._inbound_record_queue = [] @m.state(initial=True) def unselected(self): pass # pragma: no cover @m.state() def selecting(self): pass # pragma: no cover @m.state() def selected(self): pass # pragma: no cover @m.input() def got_kcm(self): pass @m.input() def select(self, manager): pass # fires set_manager() @m.input() def got_record(self, record): pass @m.output() def add_candidate(self): self._connector.add_candidate(self) @m.output() def queue_inbound_record(self, record): # the Follower will see a dataReceived chunk containing both the KCM # (leader says we've been picked) and the first record. # Connector.consider takes an eventual-turn to decide to accept this # connection, which means the record will arrive before we get # .select() and move to the 'selected' state where we can # deliver_record. So we need to queue the record for a turn. 
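# --- editor's sketch (illustrative, not part of the original source) --------
# A quick round-trip through the record codec defined earlier in this module:
# every record is a one-byte type tag followed by fixed-width be4 fields
# (plus, for Data, the raw payload). _demo_record_roundtrip is a hypothetical
# name; it only uses the Data/KCM/encode_record/parse_record defined above.
def _demo_record_roundtrip():
    r = Data(seqnum=1, scid=3, data=b"hi")
    wire = encode_record(r)
    assert wire == b"\x04" + to_be4(3) + to_be4(1) + b"hi"  # tag, scid, seqnum, payload
    assert parse_record(wire) == r
    assert encode_record(KCM()) == b"\x00" and parse_record(b"\x00") == KCM()
# -----------------------------------------------------------------------------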
TODO: # when we move to the sans-io event-driven scheme, this queue # shouldn't be necessary self._inbound_record_queue.append(record) @m.output() def set_manager(self, manager): self._manager = manager self.when_disconnected().addCallback(lambda c: manager.connector_connection_lost()) @m.output() def can_send_records(self, manager): self._can_send_records = True @m.output() def process_inbound_queue(self, manager): while self._inbound_record_queue: r = self._inbound_record_queue.pop(0) self._manager.got_record(r) @m.output() def deliver_record(self, record): self._manager.got_record(record) unselected.upon(got_kcm, outputs=[add_candidate], enter=selecting) selecting.upon(got_record, outputs=[queue_inbound_record], enter=selecting) selecting.upon(select, outputs=[set_manager, can_send_records, process_inbound_queue], enter=selected) selected.upon(got_record, outputs=[deliver_record], enter=selected) # called by Connector def use_relay(self, relay_handshake): assert isinstance(relay_handshake, bytes) self._use_relay = True self._relay_handshake = relay_handshake def when_disconnected(self): return self._disconnected.when_fired() def disconnect(self): self.transport.loseConnection() # select() called by Connector # called by Manager def send_record(self, record): assert self._can_send_records self._record.send_record(record) # IProtocol methods def connectionMade(self): try: framer = _Framer(self.transport, self._outbound_prologue, self._inbound_prologue) if self._use_relay: framer.use_relay(self._relay_handshake) self._record = _Record(framer, self._noise, self._role) if self._role is LEADER: self._record.set_role_leader() else: self._record.set_role_follower() self._record.connectionMade() except: log.err() raise def dataReceived(self, data): try: for token in self._record.add_and_unframe(data): assert isinstance(token, Handshake_or_Records) if isinstance(token, Handshake): if self._role is FOLLOWER: self._record.send_record(KCM()) elif isinstance(token, KCM): # if we're the leader, add this connection as a candidate. # if we're the follower, accept this connection. self.got_kcm() # connector.add_candidate() else: self.got_record(token) # manager.got_record() except Disconnect: self.transport.loseConnection() def connectionLost(self, why=None): self._disconnected.fire(self) magic-wormhole-0.12.0/src/wormhole/_dilation/connector.py000066400000000000000000000440511400712516500234420ustar00rootroot00000000000000from __future__ import print_function, unicode_literals from collections import defaultdict from binascii import hexlify from attr import attrs, attrib from attr.validators import instance_of, provides, optional from automat import MethodicalMachine from zope.interface import implementer from twisted.internet.task import deferLater from twisted.internet.defer import DeferredList, CancelledError from twisted.internet.endpoints import serverFromString from twisted.internet.protocol import ClientFactory, ServerFactory from twisted.internet.address import HostnameAddress, IPv4Address, IPv6Address from twisted.internet.error import ConnectingCancelledError, ConnectionRefusedError, DNSLookupError from twisted.python import log from .. 
import ipaddrs # TODO: move into _dilation/ from .._interfaces import IDilationConnector, IDilationManager from ..timing import DebugTiming from ..observer import EmptyableSet from ..util import HKDF, to_unicode from .connection import DilatedConnectionProtocol, KCM from .roles import LEADER from .._hints import (DirectTCPV1Hint, TorTCPV1Hint, RelayV1Hint, parse_hint_argv, describe_hint_obj, endpoint_from_hint_obj, encode_hint) from ._noise import NoiseConnection def build_sided_relay_handshake(key, side): assert isinstance(side, type(u"")) # magic-wormhole-transit-relay expects a specific layout for the # handshake message: "please relay {64} for side {16}\n" assert len(side) == 8 * 2, side token = HKDF(key, 32, CTXinfo=b"transit_relay_token") return (b"please relay " + hexlify(token) + b" for side " + side.encode("ascii") + b"\n") PROLOGUE_LEADER = b"Magic-Wormhole Dilation Handshake v1 Leader\n\n" PROLOGUE_FOLLOWER = b"Magic-Wormhole Dilation Handshake v1 Follower\n\n" NOISEPROTO = b"Noise_NNpsk0_25519_ChaChaPoly_BLAKE2s" def build_noise(): return NoiseConnection.from_name(NOISEPROTO) @attrs(cmp=False) @implementer(IDilationConnector) class Connector(object): """I manage a single generation of connection. The Manager creates one of me at a time, whenever it wants a connection (which is always, once w.dilate() has been called and we know the remote end can dilate, and is expressed by the Manager calling my .start() method). I am discarded when my established connection is lost (and if we still want to be connected, a new generation is started and a new Connector is created). I am also discarded if we stop wanting to be connected (which the Manager expresses by calling my .stop() method). I manage the race between multiple connections for a specific generation of the dilated connection. I send connection hints when my InboundConnectionFactory yields addresses (self.listener_ready), and I initiate outbound connections (with OutboundConnectionFactory) as I receive connection hints from my peer (self.got_hints). Both factories use my build_protocol() method to create connection.DilatedConnectionProtocol instances. I track these protocol instances until one finishes negotiation and wins the race. I then shut down the others, remember the winner as self._winning_connection, and deliver the winner to manager.connector_connection_made(c). When an active connection is lost, we call manager.connector_connection_lost, allowing the manager to decide whether it wants to start a new generation or not.
""" _dilation_key = attrib(validator=instance_of(type(b""))) _transit_relay_location = attrib(validator=optional(instance_of(type(u"")))) _manager = attrib(validator=provides(IDilationManager)) _reactor = attrib() _eventual_queue = attrib() _no_listen = attrib(validator=instance_of(bool)) _tor = attrib() _timing = attrib() _side = attrib(validator=instance_of(type(u""))) # was self._side = bytes_to_hexstr(os.urandom(8)) # unicode _role = attrib() m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover RELAY_DELAY = 2.0 def __attrs_post_init__(self): if self._transit_relay_location: # TODO: allow multiple hints for a single relay relay_hint = parse_hint_argv(self._transit_relay_location) relay = RelayV1Hint(hints=(relay_hint,)) self._transit_relays = [relay] else: self._transit_relays = [] self._listeners = set() # IListeningPorts that can be stopped self._pending_connectors = set() # Deferreds that can be cancelled self._pending_connections = EmptyableSet( _eventual_queue=self._eventual_queue) # Protocols to be stopped self._contenders = set() # viable connections self._winning_connection = None self._timing = self._timing or DebugTiming() self._timing.add("transit") # this describes what our Connector can do, for the initial advertisement @classmethod def get_connection_abilities(klass): return [{"type": "direct-tcp-v1"}, {"type": "relay-v1"}, ] def build_protocol(self, addr, description): # encryption: let's use Noise NNpsk0 (or maybe NNpsk2). That uses # ephemeral keys plus a pre-shared symmetric key (the Transit key), a # different one for each potential connection. noise = build_noise() noise.set_psks(self._dilation_key) if self._role is LEADER: noise.set_as_initiator() outbound_prologue = PROLOGUE_LEADER inbound_prologue = PROLOGUE_FOLLOWER else: noise.set_as_responder() outbound_prologue = PROLOGUE_FOLLOWER inbound_prologue = PROLOGUE_LEADER p = DilatedConnectionProtocol(self._eventual_queue, self._role, description, self, noise, outbound_prologue, inbound_prologue) return p @m.state(initial=True) def connecting(self): pass # pragma: no cover @m.state() def connected(self): pass # pragma: no cover @m.state(terminal=True) def stopped(self): pass # pragma: no cover # TODO: unify the tense of these method-name verbs # add_relay() and got_hints() are called by the Manager as it receives # messages from our peer. stop() is called when the Manager shuts down @m.input() def add_relay(self, hint_objs): pass @m.input() def got_hints(self, hint_objs): pass @m.input() def stop(self): pass # called by ourselves, when _start_listener() is ready @m.input() def listener_ready(self, hint_objs): pass # called when DilatedConnectionProtocol submits itself, after KCM # received @m.input() def add_candidate(self, c): pass # called by ourselves, via consider() @m.input() def accept(self, c): pass @m.output() def use_hints(self, hint_objs): self._use_hints(hint_objs) @m.output() def publish_hints(self, hint_objs): self._publish_hints(hint_objs) def _publish_hints(self, hint_objs): self._manager.send_hints([encode_hint(h) for h in hint_objs]) @m.output() def consider(self, c): self._contenders.add(c) if self._role is LEADER: # for now, just accept the first one. TODO: be clever. 
self._eventual_queue.eventually(self.accept, c) else: # the follower always uses the first contender, since that's the # only one the leader picked self._eventual_queue.eventually(self.accept, c) @m.output() def select_and_stop_remaining(self, c): self._winning_connection = c self._contenders.clear() # we no longer care who else came close # remove this winner from the losers, so we don't shut it down self._pending_connections.discard(c) # shut down losing connections self.stop_listeners() # TODO: maybe keep it open? NAT/p2p assist self.stop_pending_connectors() self.stop_pending_connections() c.select(self._manager) # subsequent frames go directly to the manager # c.select also wires up when_disconnected() to fire # manager.connector_connection_lost(). TODO: rename this, since the # Connector is no longer the one calling it if self._role is LEADER: # TODO: this should live in Connection c.send_record(KCM()) # leader sends KCM now self._manager.connector_connection_made(c) # manager sends frames to Connection @m.output() def stop_everything(self): self.stop_listeners() self.stop_pending_connectors() self.stop_pending_connections() self.break_cycles() def stop_listeners(self): d = DeferredList([l.stopListening() for l in self._listeners]) self._listeners.clear() return d # synchronization for tests def stop_pending_connectors(self): for d in self._pending_connectors: d.cancel() def stop_pending_connections(self): d = self._pending_connections.when_next_empty() [c.disconnect() for c in self._pending_connections] return d def break_cycles(self): # help GC by forgetting references to things that reference us self._listeners.clear() self._pending_connectors.clear() self._pending_connections.clear() self._winning_connection = None connecting.upon(listener_ready, enter=connecting, outputs=[publish_hints]) connecting.upon(add_relay, enter=connecting, outputs=[use_hints, publish_hints]) connecting.upon(got_hints, enter=connecting, outputs=[use_hints]) connecting.upon(add_candidate, enter=connecting, outputs=[consider]) connecting.upon(accept, enter=connected, outputs=[ select_and_stop_remaining]) connecting.upon(stop, enter=stopped, outputs=[stop_everything]) # once connected, we ignore everything except stop connected.upon(listener_ready, enter=connected, outputs=[]) connected.upon(add_relay, enter=connected, outputs=[]) connected.upon(got_hints, enter=connected, outputs=[]) # TODO: tell them to disconnect? will they hang out forever? I *think* # they'll drop this once they get a KCM on the winning connection. connected.upon(add_candidate, enter=connected, outputs=[]) connected.upon(accept, enter=connected, outputs=[]) connected.upon(stop, enter=stopped, outputs=[stop_everything]) # from Manager: start, got_hints, stop # maybe add_candidate, accept def start(self): if not self._no_listen and not self._tor: addresses = self._get_listener_addresses() self._start_listener(addresses) if self._transit_relays: self._publish_hints(self._transit_relays) self._use_hints(self._transit_relays) def _get_listener_addresses(self): addresses = ipaddrs.find_addresses() non_loopback_addresses = [a for a in addresses if a != "127.0.0.1"] if non_loopback_addresses: # some test hosts, including the appveyor VMs, *only* have # 127.0.0.1, and the tests will hang badly if we remove it. 
addresses = non_loopback_addresses return addresses def _start_listener(self, addresses): # TODO: listen on a fixed port, if possible, for NAT/p2p benefits, also # to make firewall configs easier # TODO: retain listening port between connection generations? ep = serverFromString(self._reactor, "tcp:0") f = InboundConnectionFactory(self) d = ep.listen(f) def _listening(lp): # lp is an IListeningPort self._listeners.add(lp) # for shutdown and tests portnum = lp.getHost().port direct_hints = [DirectTCPV1Hint(to_unicode(addr), portnum, 0.0) for addr in addresses] self.listener_ready(direct_hints) d.addCallback(_listening) d.addErrback(log.err) def _schedule_connection(self, delay, h, is_relay): ep = endpoint_from_hint_obj(h, self._tor, self._reactor) desc = describe_hint_obj(h, is_relay, self._tor) d = deferLater(self._reactor, delay, self._connect, ep, desc, is_relay) d.addErrback(lambda f: f.trap(ConnectingCancelledError, ConnectionRefusedError, CancelledError, )) # TODO: HostnameEndpoint.connect catches CancelledError and replaces # it with DNSLookupError. Remove this workaround when # https://twistedmatrix.com/trac/ticket/9696 is fixed. d.addErrback(lambda f: f.trap(DNSLookupError)) d.addErrback(log.err) self._pending_connectors.add(d) def _use_hints(self, hints): # first, pull out all the relays, we'll connect to them later relays = [] direct = defaultdict(list) for h in hints: if isinstance(h, RelayV1Hint): relays.append(h) else: direct[h.priority].append(h) delay = 0.0 made_direct = False priorities = sorted(set(direct.keys()), reverse=True) for p in priorities: for h in direct[p]: if isinstance(h, TorTCPV1Hint) and not self._tor: continue self._schedule_connection(delay, h, is_relay=False) made_direct = True # Make all direct connections immediately. Later, we'll change # the add_candidate() function to look at the priority when # deciding whether to accept a successful connection or not, # and it can wait for more options if it sees a higher-priority # one still running. But if we bail on that, we might consider # putting an inter-direct-hint delay here to influence the # process. # delay += 1.0 if made_direct and not self._no_listen: # Prefer direct connections by stalling relay connections by a # few seconds. We don't wait until direct connections have # failed, because many direct hints will be to unused # local-network IP address, which won't answer, and can take the # full 30s TCP timeout to fail. # # If we didn't make any direct connections, or we're using # --no-listen, then we're probably going to have to use the # relay, so don't delay it at all. delay += self.RELAY_DELAY # It might be nice to wire this so that a failure in the direct hints # causes the relay hints to be used right away (fast failover). But # none of our current use cases would take advantage of that: if we # have any viable direct hints, then they're either going to succeed # quickly or hang for a long time. 
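# --- editor's sketch (illustrative, not part of the original source) --------
# The direct-hint ordering policy of _use_hints in miniature: group direct
# hints by priority, then launch higher-priority groups first (all at delay
# 0.0); relays are handled separately, after RELAY_DELAY. _demo_hint_order is
# a hypothetical helper over any objects carrying a .priority attribute.
from collections import defaultdict as _dd

def _demo_hint_order(direct_hints):
    by_prio = _dd(list)
    for h in direct_hints:
        by_prio[h.priority].append(h)
    ordered = []
    for p in sorted(by_prio, reverse=True):  # highest priority first
        ordered.extend(by_prio[p])
    return ordered
# -----------------------------------------------------------------------------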
for r in relays: for h in r.hints: self._schedule_connection(delay, h, is_relay=True) # TODO: # if not contenders: # raise TransitError("No contenders for connection") # TODO: add 2*TIMEOUT deadline for first generation, don't wait forever for # the initial connection def _connect(self, ep, description, is_relay=False): relay_handshake = None if is_relay: relay_handshake = build_sided_relay_handshake(self._dilation_key, self._side) f = OutboundConnectionFactory(self, relay_handshake, description) d = ep.connect(f) # fires with protocol, or ConnectError def _connected(p): self._pending_connections.add(p) # c might not be in _pending_connections, if it turned out to be a # winner, which is why we use discard() and not remove() p.when_disconnected().addCallback(self._pending_connections.discard) d.addCallback(_connected) return d # Connection selection. All instances of DilatedConnectionProtocol which # look viable get passed into our add_contender() method. # On the Leader side, "viable" means we've seen their KCM frame, which is # the first Noise-encrypted packet on any given connection, and it has an # empty body. We gather viable connections until we see one that we like, # or a timer expires. Then we "select" it, close the others, and tell our # Manager to use it. # On the Follower side, we'll only see a KCM on the one connection selected # by the Leader, so the first viable connection wins. # our Connection protocols call: add_candidate @attrs(repr=False) class OutboundConnectionFactory(ClientFactory, object): _connector = attrib(validator=provides(IDilationConnector)) _relay_handshake = attrib(validator=optional(instance_of(bytes))) _description = attrib() def __repr__(self): return "OutboundConnectionFactory(%s %s)" % (self._connector._role, self._description) def buildProtocol(self, addr): p = self._connector.build_protocol(addr, self._description) p.factory = self if self._relay_handshake is not None: p.use_relay(self._relay_handshake) return p def describe_inbound(addr): if isinstance(addr, HostnameAddress): return "<-tcp:%s:%d" % (addr.hostname, addr.port) elif isinstance(addr, IPv4Address): return "<-tcp:%s:%d" % (addr.host, addr.port) elif isinstance(addr, IPv6Address): return "<-tcp:[%s]:%d" % (addr.host, addr.port) return "<-%r" % addr @attrs(repr=False) class InboundConnectionFactory(ServerFactory, object): _connector = attrib(validator=provides(IDilationConnector)) def __repr__(self): return "InboundConnectionFactory(%s)" % (self._connector._role) def buildProtocol(self, addr): description = describe_inbound(addr) p = self._connector.build_protocol(addr, description) p.factory = self return p magic-wormhole-0.12.0/src/wormhole/_dilation/encode.py000066400000000000000000000006531400712516500227050ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import struct assert len(struct.pack("<L", 0)) == 4 def to_be4(value): if not 0 <= value < 2**32: raise ValueError return struct.pack(">L", value) def from_be4(b): if not isinstance(b, bytes): raise TypeError(repr(b)) if len(b) != 4: raise ValueError return struct.unpack(">L", b)[0] magic-wormhole-0.12.0/src/wormhole/_dilation/inbound.py000066400000000000000000000126531400712516500231070ustar00rootroot00000000000000from __future__ import print_function, unicode_literals from attr import attrs, attrib from attr.validators import provides from zope.interface import implementer from twisted.python import log from .._interfaces import IDilationManager, IInbound, ISubChannel from .subchannel import (SubChannel, _SubchannelAddress) class DuplicateOpenError(Exception): pass
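# --- editor's sketch (illustrative, not part of the original source) --------
# The be4 codec in encode.py above is nothing more than struct's ">L"
# (unsigned 4-byte big-endian); a stdlib-only check of the same wire layout:
import struct as _struct
assert _struct.pack(">L", 256) == b"\x00\x00\x01\x00"
assert _struct.unpack(">L", b"\x00\x00\x01\x00")[0] == 256
# -----------------------------------------------------------------------------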
class DataForMissingSubchannelError(Exception): pass class CloseForMissingSubchannelError(Exception): pass @attrs @implementer(IInbound) class Inbound(object): # Inbound flow control: TCP delivers data to Connection.dataReceived, # Connection delivers to our handle_data, we deliver to # SubChannel.remote_data, subchannel delivers to proto.dataReceived _manager = attrib(validator=provides(IDilationManager)) _host_addr = attrib() def __attrs_post_init__(self): # we route inbound Data records to Subchannels .dataReceived self._open_subchannels = {} # scid -> Subchannel self._paused_subchannels = set() # Subchannels that have paused us # if the set is non-empty, we pause the transport self._highest_inbound_acked = -1 self._connection = None # from our Manager def set_listener_endpoint(self, listener_endpoint): self._listener_endpoint = listener_endpoint def set_subchannel_zero(self, scid0, sc0): self._open_subchannels[scid0] = sc0 def use_connection(self, c): self._connection = c # We can pause the connection's reads when we receive too much data. If # this is a non-initial connection, then we might already have # subchannels that are paused from before, so we might need to pause # the new connection before it can send us any data if self._paused_subchannels: self._connection.pauseProducing() def subchannel_local_open(self, scid, sc): assert ISubChannel.providedBy(sc) assert scid not in self._open_subchannels self._open_subchannels[scid] = sc # Inbound is responsible for tracking the high watermark and deciding # whether to ignore inbound messages or not def is_record_old(self, r): if r.seqnum <= self._highest_inbound_acked: return True return False def update_ack_watermark(self, seqnum): self._highest_inbound_acked = max(self._highest_inbound_acked, seqnum) def handle_open(self, scid): log.msg("inbound.handle_open", scid) if scid in self._open_subchannels: log.err(DuplicateOpenError( "received duplicate OPEN for {}".format(scid))) return peer_addr = _SubchannelAddress(scid) sc = SubChannel(scid, self._manager, self._host_addr, peer_addr) self._open_subchannels[scid] = sc self._listener_endpoint._got_open(sc, peer_addr) def handle_data(self, scid, data): log.msg("inbound.handle_data", scid, len(data)) sc = self._open_subchannels.get(scid) if sc is None: log.err(DataForMissingSubchannelError( "received DATA for non-existent subchannel {}".format(scid))) return sc.remote_data(data) def handle_close(self, scid): log.msg("inbound.handle_close", scid) sc = self._open_subchannels.get(scid) if sc is None: log.err(CloseForMissingSubchannelError( "received CLOSE for non-existent subchannel {}".format(scid))) return sc.remote_close() def subchannel_closed(self, scid, sc): # connectionLost has just been signalled assert self._open_subchannels[scid] is sc del self._open_subchannels[scid] def stop_using_connection(self): self._connection = None # from our Subchannel, or rather from the Protocol above it and sent # through the subchannel # The subchannel is an IProducer, and application protocols can always # tell it to pauseProducing if we're delivering inbound data too # quickly. They don't need to register anything.
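# --- editor's sketch (illustrative, not part of the original source) --------
# The flow-control invariant Inbound maintains with _paused_subchannels,
# isolated: many consumers share one producer, which is paused when the
# *first* consumer pauses and resumed only when the *last* one resumes.
# _DemoPauseAggregator is a hypothetical stand-in for Inbound+Connection.
class _DemoPauseAggregator(object):
    def __init__(self, producer):
        self._producer = producer  # anything with pauseProducing/resumeProducing
        self._paused_by = set()

    def pause(self, consumer):
        if not self._paused_by:
            self._producer.pauseProducing()  # first pause: throttle the source
        self._paused_by.add(consumer)

    def resume(self, consumer):
        was_paused = bool(self._paused_by)
        self._paused_by.discard(consumer)
        if was_paused and not self._paused_by:
            self._producer.resumeProducing()  # last resume: open the tap again
# -----------------------------------------------------------------------------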
def subchannel_pauseProducing(self, sc): was_paused = bool(self._paused_subchannels) self._paused_subchannels.add(sc) if self._connection and not was_paused: self._connection.pauseProducing() def subchannel_resumeProducing(self, sc): was_paused = bool(self._paused_subchannels) self._paused_subchannels.discard(sc) if self._connection and was_paused and not self._paused_subchannels: self._connection.resumeProducing() def subchannel_stopProducing(self, sc): # This protocol doesn't want any additional data. If we were a normal # (single-owner) Transport, we'd call .loseConnection now. But our # Connection is shared among many subchannels, so instead we just # stop letting them pause the connection. was_paused = bool(self._paused_subchannels) self._paused_subchannels.discard(sc) if self._connection and was_paused and not self._paused_subchannels: self._connection.resumeProducing() # TODO: we might refactor these pause/resume/stop methods by building a # context manager that look at the paused/not-paused state first, then # lets the caller modify self._paused_subchannels, then looks at it a # second time, and calls c.pauseProducing/c.resumeProducing as # appropriate. I'm not sure it would be any cleaner, though. magic-wormhole-0.12.0/src/wormhole/_dilation/manager.py000066400000000000000000000600521400712516500230610ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import six import os from collections import deque try: # py >= 3.3 from collections.abc import Sequence except ImportError: # py 2 and py3 < 3.3 from collections import Sequence from attr import attrs, attrib from attr.validators import provides, instance_of, optional from automat import MethodicalMachine from zope.interface import implementer from twisted.internet.defer import Deferred from twisted.internet.interfaces import (IStreamClientEndpoint, IStreamServerEndpoint) from twisted.python import log, failure from .._interfaces import IDilator, IDilationManager, ISend, ITerminator from ..util import dict_to_bytes, bytes_to_dict, bytes_to_hexstr from ..observer import OneShotObserver from .._key import derive_key from .subchannel import (SubChannel, _SubchannelAddress, _WormholeAddress, ControlEndpoint, SubchannelConnectorEndpoint, SubchannelListenerEndpoint) from .connector import Connector from .._hints import parse_hint from .roles import LEADER, FOLLOWER from .connection import KCM, Ping, Pong, Open, Data, Close, Ack from .inbound import Inbound from .outbound import Outbound # exported to Wormhole() for inclusion in versions message DILATION_VERSIONS = ["1"] class OldPeerCannotDilateError(Exception): pass class UnknownDilationMessageType(Exception): pass class ReceivedHintsTooEarly(Exception): pass class UnexpectedKCM(Exception): pass class UnknownMessageType(Exception): pass @attrs class EndpointRecord(Sequence): control = attrib(validator=provides(IStreamClientEndpoint)) connect = attrib(validator=provides(IStreamClientEndpoint)) listen = attrib(validator=provides(IStreamServerEndpoint)) def __len__(self): return 3 def __getitem__(self, n): return (self.control, self.connect, self.listen)[n] def make_side(): return bytes_to_hexstr(os.urandom(8)) # new scheme: # * both sides send PLEASE as soon as they have an unverified key and # w.dilate has been called, # * PLEASE includes a dilation-specific "side" (independent of the "side" # used by mailbox messages) # * higher "side" is Leader, lower is Follower # * PLEASE includes can-dilate list of version integers, requires overlap # "1" is current 
# * we start dilation after both w.dilate() and receiving VERSION, putting us # in WANTING, then we process all previously-queued inbound DILATE-n # messages. When PLEASE arrives, we move to CONNECTING # * HINTS sent after dilation starts # * only Leader sends RECONNECT, only Follower sends RECONNECTING. This # is the only difference between the two sides, and is not enforced # by the protocol (i.e. if the Follower sends RECONNECT to the Leader, # the Leader will obey, although TODO how confusing will this get?) # * upon receiving RECONNECT: drop Connector, start new Connector, send # RECONNECTING, start sending HINTS # * upon sending RECONNECT: go into FLUSHING state and ignore all HINTS until # RECONNECTING received. The new Connector can be spun up earlier, and it # can send HINTS, but it must not be given any HINTS that arrive before # RECONNECTING (since they're probably stale) # * after VERSIONS(KCM) received, we might learn that the other side cannot # dilate. w.dilate errbacks at this point # * maybe signal warning if we stay in a "want" state for too long # * nobody sends HINTS until they're ready to receive # * nobody sends HINTS unless they've called w.dilate() and received PLEASE # * nobody connects to inbound hints unless they've called w.dilate() # * if leader calls w.dilate() but not follower, leader waits forever in # "want" (doesn't send anything) # * if follower calls w.dilate() but not leader, follower waits forever # in "want", leader waits forever in "wanted" @attrs(cmp=False) @implementer(IDilationManager) class Manager(object): _S = attrib(validator=provides(ISend), repr=False) _my_side = attrib(validator=instance_of(type(u""))) _transit_relay_location = attrib(validator=optional(instance_of(str))) _reactor = attrib(repr=False) _eventual_queue = attrib(repr=False) _cooperator = attrib(repr=False) # TODO: can this validator work when the parameter is optional? _no_listen = attrib(validator=instance_of(bool), default=False) _dilation_key = None _tor = None # TODO _timing = None # TODO _next_subchannel_id = None # initialized in choose_role m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def __attrs_post_init__(self): self._got_versions_d = Deferred() self._my_role = None # determined upon rx_PLEASE self._host_addr = _WormholeAddress() self._connection = None self._made_first_connection = False self._stopped = OneShotObserver(self._eventual_queue) self._debug_stall_connector = False self._next_dilation_phase = 0 # I kept getting confused about which methods were for inbound data # (and thus flow-control methods go "out") and which were for # outbound data (with flow-control going "in"), so I split them up # into separate pieces. self._inbound = Inbound(self, self._host_addr) self._outbound = Outbound(self, self._cooperator) # from us to peer # We must open subchannel0 early, since messages may arrive very # quickly once the connection is established. This subchannel may or # may not ever get revealed to the caller, since the peer might not # even be capable of dilation. 
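# --- editor's sketch (illustrative, not part of the original source) --------
# The role rule from the comments above, in isolation: both sides exchange
# random hex "side" strings in their PLEASE messages, and the
# lexicographically higher side leads. _demo_choose_role is a hypothetical
# name; LEADER/FOLLOWER are the constants imported from .roles above.
def _demo_choose_role(my_side, their_side):
    if my_side == their_side:
        raise ValueError("sides must differ (reflection?)")
    return LEADER if my_side > their_side else FOLLOWER
# -----------------------------------------------------------------------------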
scid0 = 0 peer_addr0 = _SubchannelAddress(scid0) sc0 = SubChannel(scid0, self, self._host_addr, peer_addr0) self._inbound.set_subchannel_zero(scid0, sc0) # we can open non-zero subchannels as soon as we get our first # connection, and we can make the Endpoints even earlier control_ep = ControlEndpoint(peer_addr0, sc0, self._eventual_queue) connect_ep = SubchannelConnectorEndpoint(self, self._host_addr, self._eventual_queue) listen_ep = SubchannelListenerEndpoint(self, self._host_addr, self._eventual_queue) # TODO: let inbound/outbound create the endpoints, then return them # to us self._inbound.set_listener_endpoint(listen_ep) self._endpoints = EndpointRecord(control_ep, connect_ep, listen_ep) def get_endpoints(self): return self._endpoints def got_dilation_key(self, key): assert isinstance(key, bytes) self._dilation_key = key def got_wormhole_versions(self, their_wormhole_versions): # this always happens before received_dilation_message dilation_version = None their_dilation_versions = set(their_wormhole_versions.get("can-dilate", [])) my_versions = set(DILATION_VERSIONS) shared_versions = my_versions.intersection(their_dilation_versions) if "1" in shared_versions: dilation_version = "1" # dilation_version is the best mutually-compatible version we have # with the peer, or None if we have nothing in common if not dilation_version: # "1" or None # TODO: be more specific about the error. dilation_version==None # means we had no version in common with them, which could either # be because they're so old they don't dilate at all, or because # they're so new that they no longer accommodate our old version self.fail(failure.Failure(OldPeerCannotDilateError())) self.start() def fail(self, f): self._endpoints.control._main_channel_failed(f) self._endpoints.connect._main_channel_failed(f) self._endpoints.listen._main_channel_failed(f) def received_dilation_message(self, plaintext): # this receives new in-order DILATE-n payloads, decrypted but not # de-JSONed. 
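# --- editor's sketch (illustrative, not part of the original source) --------
# Version negotiation as in got_wormhole_versions above: intersect the two
# can-dilate lists and settle on "1" if both sides know it, else fail.
# _demo_pick_version is a hypothetical name.
def _demo_pick_version(ours, theirs):
    shared = set(ours).intersection(set(theirs))
    return "1" if "1" in shared else None  # None -> OldPeerCannotDilateError

# e.g. _demo_pick_version(["1"], ["1", "2"]) == "1"
#      _demo_pick_version(["1"], ["2"]) is None
# -----------------------------------------------------------------------------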
message = bytes_to_dict(plaintext) type = message["type"] if type == "please": self.rx_PLEASE(message) elif type == "connection-hints": self.rx_HINTS(message) elif type == "reconnect": self.rx_RECONNECT() elif type == "reconnecting": self.rx_RECONNECTING() else: log.err(UnknownDilationMessageType(message)) return def when_stopped(self): return self._stopped.when_fired() def send_dilation_phase(self, **fields): dilation_phase = self._next_dilation_phase self._next_dilation_phase += 1 self._S.send("dilate-%d" % dilation_phase, dict_to_bytes(fields)) def send_hints(self, hints): # from Connector self.send_dilation_phase(type="connection-hints", hints=hints) # forward inbound-ish things to _Inbound def subchannel_pauseProducing(self, sc): self._inbound.subchannel_pauseProducing(sc) def subchannel_resumeProducing(self, sc): self._inbound.subchannel_resumeProducing(sc) def subchannel_stopProducing(self, sc): self._inbound.subchannel_stopProducing(sc) def subchannel_local_open(self, scid, sc): self._inbound.subchannel_local_open(scid, sc) # forward outbound-ish things to _Outbound def subchannel_registerProducer(self, sc, producer, streaming): self._outbound.subchannel_registerProducer(sc, producer, streaming) def subchannel_unregisterProducer(self, sc): self._outbound.subchannel_unregisterProducer(sc) def send_open(self, scid): assert isinstance(scid, six.integer_types) self._queue_and_send(Open, scid) def send_data(self, scid, data): assert isinstance(scid, six.integer_types) self._queue_and_send(Data, scid, data) def send_close(self, scid): assert isinstance(scid, six.integer_types) self._queue_and_send(Close, scid) def _queue_and_send(self, record_type, *args): r = self._outbound.build_record(record_type, *args) # Outbound owns the send_record() pipe, so that it can stall new # writes after a new connection is made until after all queued # messages are written (to preserve ordering). self._outbound.queue_and_send_record(r) # may trigger pauseProducing def subchannel_closed(self, scid, sc): # let everyone clean up. This happens just after we delivered # connectionLost to the Protocol, except for the control channel, # which might get connectionLost later after they use ep.connect. # TODO: is this inversion a problem? 
self._inbound.subchannel_closed(scid, sc) self._outbound.subchannel_closed(scid, sc) # our Connector calls these def connector_connection_made(self, c): self.connection_made() # state machine update self._connection = c self._inbound.use_connection(c) self._outbound.use_connection(c) # does c.registerProducer if not self._made_first_connection: self._made_first_connection = True self._endpoints.control._main_channel_ready() self._endpoints.connect._main_channel_ready() self._endpoints.listen._main_channel_ready() pass def connector_connection_lost(self): self._stop_using_connection() if self._my_role is LEADER: self.connection_lost_leader() # state machine else: self.connection_lost_follower() def _stop_using_connection(self): # the connection is already lost by this point self._connection = None self._inbound.stop_using_connection() self._outbound.stop_using_connection() # does c.unregisterProducer # from our active Connection def got_record(self, r): # records with sequence numbers: always ack, ignore old ones if isinstance(r, (Open, Data, Close)): self.send_ack(r.seqnum) # always ack, even for old ones if self._inbound.is_record_old(r): return self._inbound.update_ack_watermark(r.seqnum) if isinstance(r, Open): self._inbound.handle_open(r.scid) elif isinstance(r, Data): self._inbound.handle_data(r.scid, r.data) else: # isinstance(r, Close) self._inbound.handle_close(r.scid) return if isinstance(r, KCM): log.err(UnexpectedKCM()) elif isinstance(r, Ping): self.handle_ping(r.ping_id) elif isinstance(r, Pong): self.handle_pong(r.ping_id) elif isinstance(r, Ack): self._outbound.handle_ack(r.resp_seqnum) # retire queued messages else: log.err(UnknownMessageType("{}".format(r))) # pings, pongs, and acks are not queued def send_ping(self, ping_id): self._outbound.send_if_connected(Ping(ping_id)) def send_pong(self, ping_id): self._outbound.send_if_connected(Pong(ping_id)) def send_ack(self, resp_seqnum): self._outbound.send_if_connected(Ack(resp_seqnum)) def handle_ping(self, ping_id): self.send_pong(ping_id) def handle_pong(self, ping_id): # TODO: update is-alive timer pass # subchannel maintenance def allocate_subchannel_id(self): scid_num = self._next_subchannel_id self._next_subchannel_id += 2 return scid_num # state machine @m.state(initial=True) def WAITING(self): pass # pragma: no cover @m.state() def WANTING(self): pass # pragma: no cover @m.state() def CONNECTING(self): pass # pragma: no cover @m.state() def CONNECTED(self): pass # pragma: no cover @m.state() def FLUSHING(self): pass # pragma: no cover @m.state() def ABANDONING(self): pass # pragma: no cover @m.state() def LONELY(self): pass # pragma: no cover @m.state() def STOPPING(self): pass # pragma: no cover @m.state(terminal=True) def STOPPED(self): pass # pragma: no cover @m.input() def start(self): pass # pragma: no cover @m.input() def rx_PLEASE(self, message): pass # pragma: no cover @m.input() # only sent by Follower def rx_HINTS(self, hint_message): pass # pragma: no cover @m.input() # only Leader sends RECONNECT, so only Follower receives it def rx_RECONNECT(self): pass # pragma: no cover @m.input() # only Follower sends RECONNECTING, so only Leader receives it def rx_RECONNECTING(self): pass # pragma: no cover # Connector gives us connection_made() @m.input() def connection_made(self): pass # pragma: no cover # our connection_lost() fires connection_lost_leader or # connection_lost_follower depending upon our role. 
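# --- editor's sketch (illustrative, not part of the original source) --------
# The dedup discipline of got_record/is_record_old/update_ack_watermark
# above, reduced to its core: always ack, but only process a seqnum once.
# _DemoWatermark is a hypothetical stand-in.
class _DemoWatermark(object):
    def __init__(self):
        self._highest_acked = -1

    def accept(self, seqnum):
        # returns True exactly once per seqnum; duplicates (records replayed
        # across reconnections) are still acked by the caller, but dropped
        if seqnum <= self._highest_acked:
            return False
        self._highest_acked = max(self._highest_acked, seqnum)
        return True
# -----------------------------------------------------------------------------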
If either side sees a # problem with the connection (timeouts, bad authentication) then they # just drop it and let connection_lost() handle the cleanup. @m.input() def connection_lost_leader(self): pass # pragma: no cover @m.input() def connection_lost_follower(self): pass @m.input() def stop(self): pass # pragma: no cover @m.output() def send_please(self): self.send_dilation_phase(type="please", side=self._my_side) @m.output() def choose_role(self, message): their_side = message["side"] if self._my_side > their_side: self._my_role = LEADER # scid 0 is reserved for the control channel. the leader uses odd # numbers starting with 1 self._next_subchannel_id = 1 elif their_side > self._my_side: self._my_role = FOLLOWER # the follower uses even numbers starting with 2 self._next_subchannel_id = 2 else: raise ValueError("their side shouldn't be equal: reflection?") # these Outputs behave differently for the Leader vs the Follower @m.output() def start_connecting_ignore_message(self, message): del message # ignored return self._start_connecting() @m.output() def start_connecting(self): self._start_connecting() def _start_connecting(self): assert self._my_role is not None assert self._dilation_key is not None self._connector = Connector(self._dilation_key, self._transit_relay_location, self, self._reactor, self._eventual_queue, self._no_listen, self._tor, self._timing, self._my_side, # needed for relay handshake self._my_role) if self._debug_stall_connector: # unit tests use this hook to send messages while we know we # don't have a connection self._eventual_queue.eventually(self._debug_stall_connector, self._connector) return self._connector.start() @m.output() def send_reconnect(self): self.send_dilation_phase(type="reconnect") # TODO: generation number? @m.output() def send_reconnecting(self): self.send_dilation_phase(type="reconnecting") # TODO: generation? @m.output() def use_hints(self, hint_message): hint_objs = filter(lambda h: h, # ignore None, unrecognizable [parse_hint(hs) for hs in hint_message["hints"]]) hint_objs = list(hint_objs) self._connector.got_hints(hint_objs) @m.output() def stop_connecting(self): self._connector.stop() @m.output() def abandon_connection(self): # we think we're still connected, but the Leader disagrees. Or we've # been told to shut down. self._connection.disconnect() # let connection_lost do cleanup @m.output() def notify_stopped(self): self._stopped.fire(None) # We are born WAITING after the local app calls w.dilate(). We enter # WANTING (and send a PLEASE) when we learn of a mutually-compatible # dilation_version. 
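# --- editor's sketch (illustrative, not part of the original source) --------
# The transition tables just below use Automat's declarative style: decorate
# states/inputs/outputs, then wire them with state.upon(input, enter=...,
# outputs=[...]). A minimal self-contained example of the same pattern
# (_DemoLight is hypothetical):
from automat import MethodicalMachine as _MM

class _DemoLight(object):
    _m = _MM()

    @_m.state(initial=True)
    def off(self): pass  # pragma: no cover

    @_m.state()
    def on(self): pass  # pragma: no cover

    @_m.input()
    def toggle(self): pass

    @_m.output()
    def _announce(self):
        return "switched"

    off.upon(toggle, enter=on, outputs=[_announce])
    on.upon(toggle, enter=off, outputs=[_announce])

# _DemoLight().toggle() returns ["switched"] and moves the machine off -> on
# -----------------------------------------------------------------------------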
WAITING.upon(start, enter=WANTING, outputs=[send_please]) # we start CONNECTING when we get rx_PLEASE WANTING.upon(rx_PLEASE, enter=CONNECTING, outputs=[choose_role, start_connecting_ignore_message]) CONNECTING.upon(connection_made, enter=CONNECTED, outputs=[]) # Leader CONNECTED.upon(connection_lost_leader, enter=FLUSHING, outputs=[send_reconnect]) FLUSHING.upon(rx_RECONNECTING, enter=CONNECTING, outputs=[start_connecting]) # Follower # if we notice a lost connection, just wait for the Leader to notice too CONNECTED.upon(connection_lost_follower, enter=LONELY, outputs=[]) LONELY.upon(rx_RECONNECT, enter=CONNECTING, outputs=[send_reconnecting, start_connecting]) # but if they notice it first, abandon our (seemingly functional) # connection, then tell them that we're ready to try again CONNECTED.upon(rx_RECONNECT, enter=ABANDONING, outputs=[abandon_connection]) ABANDONING.upon(connection_lost_follower, enter=CONNECTING, outputs=[send_reconnecting, start_connecting]) # and if they notice a problem while we're still connecting, abandon our # incomplete attempt and try again. in this case we don't have to wait # for a connection to finish shutdown CONNECTING.upon(rx_RECONNECT, enter=CONNECTING, outputs=[stop_connecting, send_reconnecting, start_connecting]) # rx_HINTS never changes state, they're just accepted or ignored WANTING.upon(rx_HINTS, enter=WANTING, outputs=[]) # too early CONNECTING.upon(rx_HINTS, enter=CONNECTING, outputs=[use_hints]) CONNECTED.upon(rx_HINTS, enter=CONNECTED, outputs=[]) # too late, ignore FLUSHING.upon(rx_HINTS, enter=FLUSHING, outputs=[]) # stale, ignore LONELY.upon(rx_HINTS, enter=LONELY, outputs=[]) # stale, ignore ABANDONING.upon(rx_HINTS, enter=ABANDONING, outputs=[]) # shouldn't happen STOPPING.upon(rx_HINTS, enter=STOPPING, outputs=[]) WAITING.upon(stop, enter=STOPPED, outputs=[notify_stopped]) WANTING.upon(stop, enter=STOPPED, outputs=[notify_stopped]) CONNECTING.upon(stop, enter=STOPPED, outputs=[stop_connecting, notify_stopped]) CONNECTED.upon(stop, enter=STOPPING, outputs=[abandon_connection]) ABANDONING.upon(stop, enter=STOPPING, outputs=[]) FLUSHING.upon(stop, enter=STOPPED, outputs=[notify_stopped]) LONELY.upon(stop, enter=STOPPED, outputs=[notify_stopped]) STOPPING.upon(connection_lost_leader, enter=STOPPED, outputs=[notify_stopped]) STOPPING.upon(connection_lost_follower, enter=STOPPED, outputs=[notify_stopped]) @attrs @implementer(IDilator) class Dilator(object): """I launch the dilation process. I am created with every Wormhole (regardless of whether .dilate() was called or not), and I handle the initial phase of dilation, before we know whether we'll be the Leader or the Follower. Once we hear the other side's VERSION message (which tells us that we have a connection, they are capable of dilating, and which side we're on), then we build a Manager and hand control to it. 
""" _reactor = attrib() _eventual_queue = attrib() _cooperator = attrib() def __attrs_post_init__(self): self._manager = None self._pending_dilation_key = None self._pending_wormhole_versions = None self._pending_inbound_dilate_messages = deque() def wire(self, sender, terminator): self._S = ISend(sender) self._T = ITerminator(terminator) # this is the primary entry point, called when w.dilate() is invoked def dilate(self, transit_relay_location=None, no_listen=False): if not self._manager: # build the manager right away, and tell it later when the # VERSIONS message arrives, and also when the dilation_key is set my_dilation_side = make_side() m = Manager(self._S, my_dilation_side, transit_relay_location, self._reactor, self._eventual_queue, self._cooperator, no_listen) self._manager = m if self._pending_dilation_key is not None: m.got_dilation_key(self._pending_dilation_key) if self._pending_wormhole_versions: m.got_wormhole_versions(self._pending_wormhole_versions) while self._pending_inbound_dilate_messages: plaintext = self._pending_inbound_dilate_messages.popleft() m.received_dilation_message(plaintext) return self._manager.get_endpoints() # Called by Terminator after everything else (mailbox, nameplate, server # connection) has shut down. Expects to fire T.stoppedD() when Dilator is # stopped too. def stop(self): if self._manager: self._manager.stop() # TODO: avoid Deferreds for control flow, hard to serialize self._manager.when_stopped().addCallback(lambda _: self._T.stoppedD()) else: self._T.stoppedD() return # TODO: tolerate multiple calls # from Boss def got_key(self, key): # TODO: verify this happens before got_wormhole_versions, or add a gate # to tolerate either ordering purpose = b"dilation-v1" LENGTH = 32 # TODO: whatever Noise wants, I guess dilation_key = derive_key(key, purpose, LENGTH) if self._manager: self._manager.got_dilation_key(dilation_key) else: self._pending_dilation_key = dilation_key def got_wormhole_versions(self, their_wormhole_versions): if self._manager: self._manager.got_wormhole_versions(their_wormhole_versions) else: self._pending_wormhole_versions = their_wormhole_versions def received_dilate(self, plaintext): if not self._manager: self._pending_inbound_dilate_messages.append(plaintext) else: self._manager.received_dilation_message(plaintext) magic-wormhole-0.12.0/src/wormhole/_dilation/outbound.py000066400000000000000000000420431400712516500233060ustar00rootroot00000000000000from __future__ import print_function, unicode_literals from collections import deque from attr import attrs, attrib from attr.validators import provides from zope.interface import implementer from twisted.internet.interfaces import IPushProducer, IPullProducer from twisted.python import log from twisted.python.reflect import safe_str from .._interfaces import IDilationManager, IOutbound from .connection import KCM, Ping, Pong, Ack # Outbound flow control: app writes to subchannel, we write to Connection # The app can register an IProducer of their choice, to let us throttle their # outbound data. Not all subchannels will have producers registered, and the # producer probably won't be the IProtocol instance (it'll be something else # which feeds data out through the protocol, like a t.p.basic.FileSender). If # a producerless subchannel writes too much, we won't be able to stop them, # and we'll keep writing records into the Connection even though it's asked # us to pause. 
Likewise, when the connection is down (and we're busily trying # to reestablish a new one), registered subchannels will be paused, but # unregistered ones will just dump everything in _outbound_queue, and we'll # consume memory without bound until they stop. # We need several things: # # * Add each registered IProducer to a list, whose order remains stable. We # want fairness under outbound throttling: each time the outbound # connection opens up (our resumeProducing method is called), we should let # just one producer have an opportunity to do transport.write, and then we # should pause them again, and not come back to them until everyone else # has gotten a turn. So we want an ordered list of producers to track this # rotation. # # * Remove the IProducer if/when the protocol uses unregisterProducer # # * Remove any registered IProducer when the associated Subchannel is closed. # This isn't a problem for normal transports, because usually there's a # one-to-one mapping from Protocol to Transport, so when the Transport goes # away, you forget the only reference to the Producer anyways. Our situation is # unusual because we have multiple Subchannels that get merged into the # same underlying Connection: each Subchannel's Protocol can register a # producer on the Subchannel (which is an ITransport), but that adds it to # a set of Producers for the Connection (which is also an ITransport). So # if the Subchannel is closed, we need to remove its Producer (if any) even # though the Connection remains open. # # * Register ourselves as an IPushProducer with each successive Connection # object. These connections will come and go, but there will never be more # than one. When the connection goes away, pause all our producers. When a # new one is established, write all our queued messages, then unpause our # producers as we would in resumeProducing. # # * Inside our resumeProducing call, we'll cycle through all producers, # calling their individual resumeProducing methods one at a time. If they # write so much data that the Connection pauses us again, we'll find out # because our pauseProducing will be called inside that loop. When that # happens, we need to stop looping. If we make it through the whole loop # without being paused, then all subchannel Producers are left unpaused, # and are free to write whenever they want. During this loop, some # Producers will be paused, and others will be resumed. # # * If our pauseProducing is called, all Producers must be paused, and a flag # should be set to notify the resumeProducing loop to exit # # * In between calls to our resumeProducing method, we're in one of two # states. # * If we're writing data too fast, then we'll be left in the "paused" # state, in which all Subchannel producers are paused, and the aggregate # is paused too (our Connection told us to pauseProducing and hasn't yet # told us to resumeProducing). In this state, activity is driven by the # outbound TCP window opening up, which calls resumeProducing and allows # (probably just) one message to be sent. We receive pauseProducing in # the middle of their transport.write, so the loop exits early, and the # only state change is that some other Producer should get to go next # time. # * If we're writing too slowly, we'll be left in the "unpaused" state: all # Subchannel producers are unpaused, and the aggregate is unpaused too # (resumeProducing is the last thing we've been told).
In this state, # activity is driven by the Subchannels doing a transport.write, which # queues some data on the TCP connection (and then might call # pauseProducing if it's now full). # # * We want to guard against: # # * application protocol registering a Producer without first unregistering # the previous one # # * application protocols writing data despite being told to pause # (Subchannels without a registered Producer cannot be throttled, and we # can't do anything about that, but we must also handle the case where # they give us a pause switch and then proceed to ignore it) # # * our Connection calling resumeProducing or pauseProducing without an # intervening call of the other kind # # * application protocols that don't handle a resumeProducing or # pauseProducing call without an intervening call of the other kind (i.e. # we should keep track of the last thing we told them, and not repeat # ourselves) # # * If the Wormhole is closed, all Subchannels should close. This is not our # responsibility: it lives in (Manager? Inbound?) # # * If we're given an IPullProducer, we should keep calling its # resumeProducing until it runs out of data. We still want fairness, so we # won't call it a second time until everyone else has had a turn. # There are a couple of different ways to approach this. The one I've # selected is: # # * keep a dict that maps from Subchannel to Producer, which only contains # entries for Subchannels that have registered a producer. We use this to # remove Producers when Subchannels are closed # # * keep a Deque of Producers. This represents the fair-throttling rotation: # the left-most item gets the next upcoming turn, and then they'll be moved # to the end of the queue. # # * keep a set of IPushProducers which are paused, a second set of # IPushProducers which are unpaused, and a third set of IPullProducers # (which are always left paused). Enforce the invariant that these three # sets are disjoint, and that their union equals the contents of the deque. # # * keep a "paused" flag, which is cleared upon entry to resumeProducing, and # set upon entry to pauseProducing. The loop inside resumeProducing checks # this flag after each call to producer.resumeProducing, to sense whether # they used their turn to write data, and if that write was large enough to # fill the TCP window. If set, we break out of the loop. If not, we look # for the next producer to unpause. The loop finishes when all producers # are unpaused (evidenced by the two sets of paused producers being empty) # # * the "paused" flag also determines whether new IPushProducers are added to # the paused or unpaused set (IPullProducers are always added to the # pull+paused set). If we have any IPullProducers, we're always in the # "writing data too fast" state. # other approaches that I didn't decide to do at this time (but might use in # the future): # # * use one set instead of two. pros: fewer moving parts. cons: harder to # spot decoherence bugs like adding a producer to the deque but forgetting # to add it to one of the sets # # * use zero sets, and keep the paused-vs-unpaused state in the Subchannel as # a visible boolean flag. This conflates Subchannels with their associated # Producer (so if we went this way, we should also let them track their own # Producer). Our resumeProducing loop ends when 'not any(sc.paused for sc # in self._subchannels_with_producers)'. Pros: fewer subchannel->producer # mappings lying around to disagree with one another. Cons: exposes a bit
Cons: exposes a bit # too much of the Subchannel internals @attrs @implementer(IOutbound, IPushProducer) class Outbound(object): # Manage outbound data: subchannel writes to us, we write to transport _manager = attrib(validator=provides(IDilationManager)) _cooperator = attrib() def __attrs_post_init__(self): # _outbound_queue holds all messages we've ever sent but not retired self._outbound_queue = deque() self._next_outbound_seqnum = 0 # _queued_unsent are messages to retry with our new connection self._queued_unsent = deque() # outbound flow control: the Connection throttles our writes self._subchannel_producers = {} # Subchannel -> IProducer self._paused = True # our Connection called our pauseProducing self._all_producers = deque() # rotates, left-is-next self._paused_producers = set() self._unpaused_producers = set() self._check_invariants() self._connection = None def _check_invariants(self): assert self._unpaused_producers.isdisjoint(self._paused_producers) assert (self._paused_producers.union(self._unpaused_producers) == set(self._all_producers)) def build_record(self, record_type, *args): seqnum = self._next_outbound_seqnum self._next_outbound_seqnum += 1 r = record_type(seqnum, *args) assert hasattr(r, "seqnum"), r # only Open/Data/Close return r def queue_and_send_record(self, r): # we always queue it, to resend on a subsequent connection if # necessary self._outbound_queue.append(r) if self._connection: if self._queued_unsent: # to maintain correct ordering, queue this instead of sending it self._queued_unsent.append(r) else: # we're allowed to send it immediately self._connection.send_record(r) def send_if_connected(self, r): assert isinstance(r, (KCM, Ping, Pong, Ack)), r # nothing with seqnum if self._connection: self._connection.send_record(r) # our subchannels call these to register a producer def subchannel_registerProducer(self, sc, producer, streaming): # streaming==True: IPushProducer (pause/resume) # streaming==False: IPullProducer (just resume) if sc in self._subchannel_producers: raise ValueError( "registering producer %s before previous one (%s) was " "unregistered" % (producer, self._subchannel_producers[sc])) # our underlying Connection uses streaming==True, so to make things # easier, use an adapter when the Subchannel asks for streaming=False if not streaming: def unregister(): self.subchannel_unregisterProducer(sc) producer = PullToPush(producer, unregister, self._cooperator) self._subchannel_producers[sc] = producer self._all_producers.append(producer) if self._paused: self._paused_producers.add(producer) else: self._unpaused_producers.add(producer) self._check_invariants() if streaming: if self._paused: # IPushProducers need to be paused immediately, before they # speak producer.pauseProducing() # you wake up sleeping else: # our PullToPush adapter must be started, but if we're paused then # we tell it to pause before it gets a chance to write anything producer.startStreaming(self._paused) def subchannel_unregisterProducer(self, sc): # TODO: what if the subchannel closes, so we unregister their # producer for them, then the application reacts to connectionLost # with a duplicate unregisterProducer? 
p = self._subchannel_producers.pop(sc) if isinstance(p, PullToPush): p.stopStreaming() self._all_producers.remove(p) self._paused_producers.discard(p) self._unpaused_producers.discard(p) self._check_invariants() def subchannel_closed(self, scid, sc): self._check_invariants() if sc in self._subchannel_producers: self.subchannel_unregisterProducer(sc) # our Manager tells us when we've got a new Connection to work with def use_connection(self, c): self._connection = c assert not self._queued_unsent self._queued_unsent.extend(self._outbound_queue) # the connection can tell us to pause when we send too much data c.transport.registerProducer(self, True) # IPushProducer: pause+resume # send our queued messages self.resumeProducing() def stop_using_connection(self): self._connection.transport.unregisterProducer() self._connection = None self._queued_unsent.clear() self.pauseProducing() # TODO: I expect this will call pauseProducing twice: the first time # when we get stopProducing (since we're registered with the # underlying connection as the producer), and again when the manager # notices the connectionLost and calls our _stop_using_connection def handle_ack(self, resp_seqnum): # we've received an inbound ack, so retire something while (self._outbound_queue and self._outbound_queue[0].seqnum <= resp_seqnum): self._outbound_queue.popleft() while (self._queued_unsent and self._queued_unsent[0].seqnum <= resp_seqnum): self._queued_unsent.popleft() # Inbound is responsible for tracking the high watermark and deciding # whether to ignore inbound messages or not # IPushProducer: the active connection calls these because we used # c.transport.registerProducer to ask for them def pauseProducing(self): if self._paused: return # someone is confused and called us twice self._paused = True for p in self._all_producers: if p in self._unpaused_producers: self._unpaused_producers.remove(p) self._paused_producers.add(p) p.pauseProducing() def resumeProducing(self): if not self._paused: return # someone is confused and called us twice self._paused = False while not self._paused: if self._queued_unsent: r = self._queued_unsent.popleft() self._connection.send_record(r) continue p = self._get_next_unpaused_producer() if not p: break self._paused_producers.remove(p) self._unpaused_producers.add(p) p.resumeProducing() def _get_next_unpaused_producer(self): self._check_invariants() if not self._paused_producers: return None while True: p = self._all_producers[0] self._all_producers.rotate(-1) # p moves to the end of the line # the only unpaused Producers are at the end of the list assert p in self._paused_producers return p def stopProducing(self): # we'll hopefully have a new connection to work with in the future, # so we don't shut anything down. We do pause everyone, though. self.pauseProducing() # modelled after twisted.internet._producer_helper._PullToPush, but with a # configurable Cooperator and a pause-immediately argument to startStreaming() @implementer(IPushProducer) @attrs(cmp=False) class PullToPush(object): _producer = attrib(validator=provides(IPullProducer)) _unregister = attrib(validator=lambda _a, _b, v: callable(v)) _cooperator = attrib() _finished = False def _pull(self): while True: try: self._producer.resumeProducing() except Exception: log.err(None, "%s failed, producing will be stopped:" % (safe_str(self._producer),)) try: self._unregister() # The consumer should now call stopStreaming() on us, # thus stopping the streaming.
except Exception: # Since the consumer blew up, we may not have had # stopStreaming() called, so we just stop on our own: log.err(None, "%s failed to unregister producer:" % (safe_str(self._unregister),)) self._finished = True return yield None def startStreaming(self, paused): self._coopTask = self._cooperator.cooperate(self._pull()) if paused: self.pauseProducing() # timer is scheduled, but task is removed def stopStreaming(self): if self._finished: return self._finished = True self._coopTask.stop() def pauseProducing(self): self._coopTask.pause() def resumeProducing(self): self._coopTask.resume() def stopProducing(self): self.stopStreaming() self._producer.stopProducing() magic-wormhole-0.12.0/src/wormhole/_dilation/roles.py000066400000000000000000000003071400712516500225700ustar00rootroot00000000000000class _Role(object): def __init__(self, which): self._which = which def __repr__(self): return "Role(%s)" % self._which LEADER, FOLLOWER = _Role("LEADER"), _Role("FOLLOWER") magic-wormhole-0.12.0/src/wormhole/_dilation/subchannel.py000066400000000000000000000347001400712516500235720ustar00rootroot00000000000000import six from collections import deque from attr import attrs, attrib from attr.validators import instance_of, provides from zope.interface import implementer from twisted.internet.defer import inlineCallbacks, returnValue from twisted.internet.interfaces import (ITransport, IProducer, IConsumer, IAddress, IListeningPort, IHalfCloseableProtocol, IStreamClientEndpoint, IStreamServerEndpoint) from twisted.internet.error import ConnectionDone from automat import MethodicalMachine from .._interfaces import ISubChannel, IDilationManager from ..observer import OneShotObserver # each subchannel frame (the data passed into transport.write(data)) gets a # 9-byte header prefix (type, subchannel id, and sequence number), then gets # encrypted (adding a 16-byte authentication tag). The result is transmitted # with a 4-byte length prefix (which only covers the padded message, not the # length prefix itself), so the padded message must be less than 2**32 bytes # long. 
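# A rough sketch of the resulting wire layout (the bracketed labels below are # descriptive only, not identifiers used in this codebase): # # [4-byte length] [ encrypt( 9-byte header || app payload ) -> ciphertext + 16-byte tag ] # # The length prefix can express at most 2**32 - 1 bytes, and it must cover # the header, the payload, and the authentication tag, which is where the # constant below comes from: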
MAX_FRAME_LENGTH = 2**32 - 1 - 9 - 16 @attrs class Once(object): _errtype = attrib() def __attrs_post_init__(self): self._called = False def __call__(self): if self._called: raise self._errtype() self._called = True class SingleUseEndpointError(Exception): pass # created in the (OPEN) state, by either: # * receipt of an OPEN message # * or local client_endpoint.connect() # then transitions are: # (OPEN) rx DATA: deliver .dataReceived(), -> (OPEN) # (OPEN) rx CLOSE: deliver .connectionLost(), send CLOSE, -> (CLOSED) # (OPEN) local .write(): send DATA, -> (OPEN) # (OPEN) local .loseConnection(): send CLOSE, -> (CLOSING) # (CLOSING) local .write(): error # (CLOSING) local .loseConnection(): error # (CLOSING) rx DATA: deliver .dataReceived(), -> (CLOSING) # (CLOSING) rx CLOSE: deliver .connectionLost(), -> (CLOSED) # object is deleted upon transition to (CLOSED) class AlreadyClosedError(Exception): pass class NormalCloseUsedOnHalfCloseable(Exception): pass class HalfCloseUsedOnNonHalfCloseable(Exception): pass @implementer(IAddress) class _WormholeAddress(object): pass @implementer(IAddress) @attrs class _SubchannelAddress(object): _scid = attrib(validator=instance_of(six.integer_types)) @attrs(cmp=False) @implementer(ITransport) @implementer(IProducer) @implementer(IConsumer) @implementer(ISubChannel) class SubChannel(object): _scid = attrib(validator=instance_of(six.integer_types)) _manager = attrib(validator=provides(IDilationManager)) _host_addr = attrib(validator=instance_of(_WormholeAddress)) _peer_addr = attrib(validator=instance_of(_SubchannelAddress)) m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def __attrs_post_init__(self): # self._mailbox = None # self._pending_outbound = {} # self._processed = set() self._protocol = None self._pending_remote_data = [] self._pending_remote_close = False @m.state(initial=True) def unconnected(self): pass # pragma: no cover # once we get the IProtocol, it's either an IHalfCloseableProtocol, or it # can only be fully closed @m.state() def open_half(self): pass # pragma: no cover @m.state() def read_closed(): pass # pragma: no cover @m.state() def write_closed(): pass # pragma: no cover @m.state() def open_full(self): pass # pragma: no cover @m.state() def closing(): pass # pragma: no cover @m.state() def closed(): pass # pragma: no cover @m.input() def connect_protocol_half(self): pass @m.input() def connect_protocol_full(self): pass @m.input() def remote_data(self, data): pass @m.input() def remote_close(self): pass @m.input() def local_data(self, data): pass @m.input() def local_close(self): pass @m.output() def queue_remote_data(self, data): self._pending_remote_data.append(data) @m.output() def queue_remote_close(self): self._pending_remote_close = True @m.output() def send_data(self, data): self._manager.send_data(self._scid, data) @m.output() def send_close(self): self._manager.send_close(self._scid) @m.output() def signal_dataReceived(self, data): assert self._protocol self._protocol.dataReceived(data) @m.output() def signal_readConnectionLost(self): IHalfCloseableProtocol(self._protocol).readConnectionLost() @m.output() def signal_writeConnectionLost(self): IHalfCloseableProtocol(self._protocol).writeConnectionLost() @m.output() def signal_connectionLost(self): assert self._protocol self._protocol.connectionLost(ConnectionDone()) @m.output() def close_subchannel(self): self._manager.subchannel_closed(self._scid, self) # we're deleted momentarily @m.output() def error_closed_write(self,
data): raise AlreadyClosedError("write not allowed on closed subchannel") @m.output() def error_closed_close(self): raise AlreadyClosedError( "loseConnection not allowed on closed subchannel") # stuff that arrives before we have a protocol connected unconnected.upon(remote_data, enter=unconnected, outputs=[queue_remote_data]) unconnected.upon(remote_close, enter=unconnected, outputs=[queue_remote_close]) # IHalfCloseableProtocol flow unconnected.upon(connect_protocol_half, enter=open_half, outputs=[]) open_half.upon(remote_data, enter=open_half, outputs=[signal_dataReceived]) open_half.upon(local_data, enter=open_half, outputs=[send_data]) # remote closes first open_half.upon(remote_close, enter=read_closed, outputs=[signal_readConnectionLost]) read_closed.upon(local_data, enter=read_closed, outputs=[send_data]) read_closed.upon(local_close, enter=closed, outputs=[send_close, close_subchannel, # TODO: eventual-signal this? signal_writeConnectionLost, ]) # local closes first open_half.upon(local_close, enter=write_closed, outputs=[signal_writeConnectionLost, send_close]) write_closed.upon(local_data, enter=write_closed, outputs=[error_closed_write]) write_closed.upon(remote_data, enter=write_closed, outputs=[signal_dataReceived]) write_closed.upon(remote_close, enter=closed, outputs=[close_subchannel, signal_readConnectionLost, ]) # error cases write_closed.upon(local_close, enter=write_closed, outputs=[error_closed_close]) # fully-closeable-only flow unconnected.upon(connect_protocol_full, enter=open_full, outputs=[]) open_full.upon(remote_data, enter=open_full, outputs=[signal_dataReceived]) open_full.upon(local_data, enter=open_full, outputs=[send_data]) open_full.upon(remote_close, enter=closed, outputs=[send_close, close_subchannel, signal_connectionLost]) open_full.upon(local_close, enter=closing, outputs=[send_close]) closing.upon(remote_data, enter=closing, outputs=[signal_dataReceived]) closing.upon(remote_close, enter=closed, outputs=[close_subchannel, signal_connectionLost]) # error cases # we won't ever see an OPEN, since L4 will log+ignore those for us closing.upon(local_data, enter=closing, outputs=[error_closed_write]) closing.upon(local_close, enter=closing, outputs=[error_closed_close]) # the CLOSED state won't ever see messages, since we'll be deleted # our endpoints use these def _set_protocol(self, protocol): assert not self._protocol self._protocol = protocol if IHalfCloseableProtocol.providedBy(protocol): self.connect_protocol_half() else: # move from UNCONNECTED to OPEN self.connect_protocol_full() def _deliver_queued_data(self): for data in self._pending_remote_data: self.remote_data(data) del self._pending_remote_data if self._pending_remote_close: self.remote_close() del self._pending_remote_close # ITransport def write(self, data): assert isinstance(data, type(b"")) assert len(data) <= MAX_FRAME_LENGTH self.local_data(data) def writeSequence(self, iovec): self.write(b"".join(iovec)) def loseWriteConnection(self): if not IHalfCloseableProtocol.providedBy(self._protocol): # this is a clear error raise HalfCloseUsedOnNonHalfCloseable() self.local_close() def loseConnection(self): # TODO: what happens if an IHalfCloseableProtocol calls normal # loseConnection()? I think we need to close the read side too.
if IHalfCloseableProtocol.providedBy(self._protocol): # I don't know what is correct, so avoid this for now raise NormalCloseUsedOnHalfCloseable() self.local_close() def getHost(self): # we define "host addr" as the overall wormhole return self._host_addr def getPeer(self): # and "peer addr" as the subchannel within that wormhole return self._peer_addr # IProducer: throttle inbound data (wormhole "up" to local app's Protocol) def stopProducing(self): self._manager.subchannel_stopProducing(self) def pauseProducing(self): self._manager.subchannel_pauseProducing(self) def resumeProducing(self): self._manager.subchannel_resumeProducing(self) # IConsumer: allow the wormhole to throttle outbound data (app->wormhole) def registerProducer(self, producer, streaming): self._manager.subchannel_registerProducer(self, producer, streaming) def unregisterProducer(self): self._manager.subchannel_unregisterProducer(self) @implementer(IStreamClientEndpoint) @attrs class ControlEndpoint(object): _peer_addr = attrib(validator=provides(IAddress)) _subchannel_zero = attrib(validator=provides(ISubChannel)) _eventual_queue = attrib(repr=False) _used = False def __attrs_post_init__(self): self._once = Once(SingleUseEndpointError) self._wait_for_main_channel = OneShotObserver(self._eventual_queue) # from manager def _main_channel_ready(self): self._wait_for_main_channel.fire(None) def _main_channel_failed(self, f): self._wait_for_main_channel.error(f) @inlineCallbacks def connect(self, protocolFactory): # return Deferred that fires with IProtocol or Failure(ConnectError) self._once() yield self._wait_for_main_channel.when_fired() p = protocolFactory.buildProtocol(self._peer_addr) self._subchannel_zero._set_protocol(p) # this sets p.transport and calls p.connectionMade() p.makeConnection(self._subchannel_zero) self._subchannel_zero._deliver_queued_data() returnValue(p) @implementer(IStreamClientEndpoint) @attrs class SubchannelConnectorEndpoint(object): _manager = attrib(validator=provides(IDilationManager)) _host_addr = attrib(validator=instance_of(_WormholeAddress)) _eventual_queue = attrib(repr=False) def __attrs_post_init__(self): self._connection_deferreds = deque() self._wait_for_main_channel = OneShotObserver(self._eventual_queue) def _main_channel_ready(self): self._wait_for_main_channel.fire(None) def _main_channel_failed(self, f): self._wait_for_main_channel.error(f) @inlineCallbacks def connect(self, protocolFactory): # return Deferred that fires with IProtocol or Failure(ConnectError) yield self._wait_for_main_channel.when_fired() scid = self._manager.allocate_subchannel_id() self._manager.send_open(scid) peer_addr = _SubchannelAddress(scid) # ? f.doStart() # ? f.startedConnecting(CONNECTOR) # ??
sc = SubChannel(scid, self._manager, self._host_addr, peer_addr) self._manager.subchannel_local_open(scid, sc) p = protocolFactory.buildProtocol(peer_addr) sc._set_protocol(p) p.makeConnection(sc) # set p.transport = sc and call connectionMade() returnValue(p) @implementer(IStreamServerEndpoint) @attrs class SubchannelListenerEndpoint(object): _manager = attrib(validator=provides(IDilationManager)) _host_addr = attrib(validator=provides(IAddress)) _eventual_queue = attrib(repr=False) def __attrs_post_init__(self): self._once = Once(SingleUseEndpointError) self._factory = None self._pending_opens = deque() self._wait_for_main_channel = OneShotObserver(self._eventual_queue) # from manager (actually Inbound) def _got_open(self, t, peer_addr): if self._factory: self._connect(t, peer_addr) else: self._pending_opens.append((t, peer_addr)) def _connect(self, t, peer_addr): p = self._factory.buildProtocol(peer_addr) t._set_protocol(p) p.makeConnection(t) t._deliver_queued_data() def _main_channel_ready(self): self._wait_for_main_channel.fire(None) def _main_channel_failed(self, f): self._wait_for_main_channel.error(f) # IStreamServerEndpoint @inlineCallbacks def listen(self, protocolFactory): self._once() yield self._wait_for_main_channel.when_fired() self._factory = protocolFactory while self._pending_opens: (t, peer_addr) = self._pending_opens.popleft() self._connect(t, peer_addr) lp = SubchannelListeningPort(self._host_addr) returnValue(lp) @implementer(IListeningPort) @attrs class SubchannelListeningPort(object): _host_addr = attrib(validator=provides(IAddress)) def startListening(self): pass def stopListening(self): # TODO pass def getHost(self): return self._host_addr magic-wormhole-0.12.0/src/wormhole/_hints.py000066400000000000000000000133051400712516500207700ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import sys import re import six from collections import namedtuple from twisted.internet.endpoints import TCP4ClientEndpoint, TCP6ClientEndpoint, HostnameEndpoint from twisted.internet.abstract import isIPAddress, isIPv6Address from twisted.python import log # These namedtuples are "hint objects". The JSON-serializable dictionaries # are "hint dicts". # DirectTCPV1Hint and TorTCPV1Hint mean the following protocol: # * make a TCP connection (possibly via Tor) # * send the sender/receiver handshake bytes first # * expect to see the receiver/sender handshake bytes from the other side # * the sender writes "go\n", the receiver waits for "go\n" # * the rest of the connection contains transit data DirectTCPV1Hint = namedtuple("DirectTCPV1Hint", ["hostname", "port", "priority"]) TorTCPV1Hint = namedtuple("TorTCPV1Hint", ["hostname", "port", "priority"]) # RelayV1Hint contains a tuple of DirectTCPV1Hint and TorTCPV1Hint hints (we # use a tuple rather than a list so they'll be hashable into a set). For each # one, make the TCP connection, send the relay handshake, then complete the # rest of the V1 protocol. Only one hint per relay is useful. 
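# As a concrete (hypothetical) example: a direct hint travels over the wire # as a JSON-serializable hint dict like {"type": "direct-tcp-v1", # "hostname": "192.0.2.7", "port": 4001, "priority": 0.0} (the address and # port here are invented placeholders, not a real relay). parse_hint(), # defined below, turns that dict into the hint object # DirectTCPV1Hint("192.0.2.7", 4001, 0.0), and encode_hint() performs the # reverse mapping.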
RelayV1Hint = namedtuple("RelayV1Hint", ["hints"]) def describe_hint_obj(hint, relay, tor): prefix = "tor->" if tor else "->" if relay: prefix = prefix + "relay:" if isinstance(hint, DirectTCPV1Hint): return prefix + "tcp:%s:%d" % (hint.hostname, hint.port) elif isinstance(hint, TorTCPV1Hint): return prefix + "tor:%s:%d" % (hint.hostname, hint.port) else: return prefix + str(hint) def parse_hint_argv(hint, stderr=sys.stderr): assert isinstance(hint, type(u"")) # return tuple or None for an unparseable hint priority = 0.0 mo = re.search(r'^([a-zA-Z0-9]+):(.*)$', hint) if not mo: print("unparseable hint '%s'" % (hint, ), file=stderr) return None hint_type = mo.group(1) if hint_type != "tcp": print("unknown hint type '%s' in '%s'" % (hint_type, hint), file=stderr) return None hint_value = mo.group(2) pieces = hint_value.split(":") if len(pieces) < 2: print("unparseable TCP hint (need more colons) '%s'" % (hint, ), file=stderr) return None mo = re.search(r'^(\d+)$', pieces[1]) if not mo: print("non-numeric port in TCP hint '%s'" % (hint, ), file=stderr) return None hint_host = pieces[0] hint_port = int(pieces[1]) for more in pieces[2:]: if more.startswith("priority="): more_pieces = more.split("=") try: priority = float(more_pieces[1]) except ValueError: print("non-float priority= in TCP hint '%s'" % (hint, ), file=stderr) return None return DirectTCPV1Hint(hint_host, hint_port, priority) def endpoint_from_hint_obj(hint, tor, reactor): if tor: if isinstance(hint, (DirectTCPV1Hint, TorTCPV1Hint)): # this Tor object will throw ValueError for non-public IPv4 # addresses and any IPv6 address try: return tor.stream_via(hint.hostname, hint.port) except ValueError: return None return None if isinstance(hint, DirectTCPV1Hint): # avoid DNS lookup unless necessary if isIPAddress(hint.hostname): return TCP4ClientEndpoint(reactor, hint.hostname, hint.port) if isIPv6Address(hint.hostname): return TCP6ClientEndpoint(reactor, hint.hostname, hint.port) return HostnameEndpoint(reactor, hint.hostname, hint.port) return None def parse_tcp_v1_hint(hint): # hint_struct -> hint_obj hint_type = hint.get("type", "") if hint_type not in ["direct-tcp-v1", "tor-tcp-v1"]: log.msg("unknown hint type: %r" % (hint, )) return None if not ("hostname" in hint and isinstance(hint["hostname"], type(""))): log.msg("invalid hostname in hint: %r" % (hint, )) return None if not ("port" in hint and isinstance(hint["port"], six.integer_types)): log.msg("invalid port in hint: %r" % (hint, )) return None priority = hint.get("priority", 0.0) if hint_type == "direct-tcp-v1": return DirectTCPV1Hint(hint["hostname"], hint["port"], priority) else: return TorTCPV1Hint(hint["hostname"], hint["port"], priority) def parse_hint(hint_struct): hint_type = hint_struct.get("type", "") if hint_type == "relay-v1": # the struct can include multiple ways to reach the same relay rhints = filter(lambda h: h, # drop None (unrecognized) [parse_tcp_v1_hint(rh) for rh in hint_struct["hints"]]) return RelayV1Hint(list(rhints)) return parse_tcp_v1_hint(hint_struct) def encode_hint(h): if isinstance(h, DirectTCPV1Hint): return {"type": "direct-tcp-v1", "priority": h.priority, "hostname": h.hostname, "port": h.port, # integer } elif isinstance(h, RelayV1Hint): rhint = {"type": "relay-v1", "hints": []} for rh in h.hints: rhint["hints"].append({"type": "direct-tcp-v1", "priority": rh.priority, "hostname": rh.hostname, "port": rh.port}) return rhint elif isinstance(h, TorTCPV1Hint): return {"type": "tor-tcp-v1", "priority": h.priority, "hostname": h.hostname, "port": 
h.port, # integer } raise ValueError("unknown hint type", h) magic-wormhole-0.12.0/src/wormhole/_input.py000066400000000000000000000250131400712516500210010ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals # We use 'threading' defensively here, to detect if we're being called from a # non-main thread. _rlcompleter.py is the only internal Wormhole code that # deliberately creates a new thread. import threading from attr import attrib, attrs from attr.validators import provides from automat import MethodicalMachine from twisted.internet import defer from zope.interface import implementer from . import _interfaces, errors from ._nameplate import validate_nameplate def first(outputs): return list(outputs)[0] @attrs @implementer(_interfaces.IInput) class Input(object): _timing = attrib(validator=provides(_interfaces.ITiming)) m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def __attrs_post_init__(self): self._all_nameplates = set() self._nameplate = None self._wordlist = None self._wordlist_waiters = [] self._trace = None def set_debug(self, f): self._trace = f def _debug(self, what): # pragma: no cover if self._trace: self._trace(old_state="", input=what, new_state="") def wire(self, code, lister): self._C = _interfaces.ICode(code) self._L = _interfaces.ILister(lister) def when_wordlist_is_available(self): if self._wordlist: return defer.succeed(None) d = defer.Deferred() self._wordlist_waiters.append(d) return d @m.state(initial=True) def S0_idle(self): pass # pragma: no cover @m.state() def S1_typing_nameplate(self): pass # pragma: no cover @m.state() def S2_typing_code_no_wordlist(self): pass # pragma: no cover @m.state() def S3_typing_code_yes_wordlist(self): pass # pragma: no cover @m.state(terminal=True) def S4_done(self): pass # pragma: no cover # from Code @m.input() def start(self): pass # from Lister @m.input() def got_nameplates(self, all_nameplates): pass # from Nameplate @m.input() def got_wordlist(self, wordlist): pass # API provided to app as ICodeInputHelper @m.input() def refresh_nameplates(self): pass @m.input() def get_nameplate_completions(self, prefix): pass def choose_nameplate(self, nameplate): validate_nameplate(nameplate) # can raise KeyFormatError self._choose_nameplate(nameplate) @m.input() def _choose_nameplate(self, nameplate): pass @m.input() def get_word_completions(self, prefix): pass @m.input() def choose_words(self, words): pass @m.output() def do_start(self): self._start_timing = self._timing.add("input code", waiting="user") self._L.refresh() return Helper(self) @m.output() def do_refresh(self): self._L.refresh() @m.output() def record_nameplates(self, all_nameplates): # we get a set of nameplate id strings self._all_nameplates = all_nameplates @m.output() def _get_nameplate_completions(self, prefix): completions = set() for nameplate in self._all_nameplates: if nameplate.startswith(prefix): # TODO: it's a little weird that Input is responsible for the # hyphen on nameplates, but WordList owns it for words completions.add(nameplate + "-") return completions @m.output() def record_all_nameplates(self, nameplate): self._nameplate = nameplate self._C.got_nameplate(nameplate) @m.output() def record_wordlist(self, wordlist): from ._rlcompleter import debug debug(" -record_wordlist") self._wordlist = wordlist @m.output() def notify_wordlist_waiters(self, wordlist): while self._wordlist_waiters: d = self._wordlist_waiters.pop() d.callback(None) @m.output() def 
no_word_completions(self, prefix): return set() @m.output() def _get_word_completions(self, prefix): assert self._wordlist return self._wordlist.get_completions(prefix) @m.output() def raise_must_choose_nameplate1(self, prefix): raise errors.MustChooseNameplateFirstError() @m.output() def raise_must_choose_nameplate2(self, words): raise errors.MustChooseNameplateFirstError() @m.output() def raise_already_chose_nameplate1(self): raise errors.AlreadyChoseNameplateError() @m.output() def raise_already_chose_nameplate2(self, prefix): raise errors.AlreadyChoseNameplateError() @m.output() def raise_already_chose_nameplate3(self, nameplate): raise errors.AlreadyChoseNameplateError() @m.output() def raise_already_chose_words1(self, prefix): raise errors.AlreadyChoseWordsError() @m.output() def raise_already_chose_words2(self, words): raise errors.AlreadyChoseWordsError() @m.output() def do_words(self, words): code = self._nameplate + "-" + words self._start_timing.finish() self._C.finished_input(code) S0_idle.upon( start, enter=S1_typing_nameplate, outputs=[do_start], collector=first) # wormholes that don't use input_code (i.e. they use allocate_code or # generate_code) will never start() us, but Nameplate will give us a # wordlist anyways (as soon as the nameplate is claimed), so handle it. S0_idle.upon( got_wordlist, enter=S0_idle, outputs=[record_wordlist, notify_wordlist_waiters]) S1_typing_nameplate.upon( got_nameplates, enter=S1_typing_nameplate, outputs=[record_nameplates]) # but wormholes that *do* use input_code should not get got_wordlist # until after we tell Code that we got_nameplate, which is the earliest # it can be claimed S1_typing_nameplate.upon( refresh_nameplates, enter=S1_typing_nameplate, outputs=[do_refresh]) S1_typing_nameplate.upon( get_nameplate_completions, enter=S1_typing_nameplate, outputs=[_get_nameplate_completions], collector=first) S1_typing_nameplate.upon( _choose_nameplate, enter=S2_typing_code_no_wordlist, outputs=[record_all_nameplates]) S1_typing_nameplate.upon( get_word_completions, enter=S1_typing_nameplate, outputs=[raise_must_choose_nameplate1]) S1_typing_nameplate.upon( choose_words, enter=S1_typing_nameplate, outputs=[raise_must_choose_nameplate2]) S2_typing_code_no_wordlist.upon( got_nameplates, enter=S2_typing_code_no_wordlist, outputs=[]) S2_typing_code_no_wordlist.upon( got_wordlist, enter=S3_typing_code_yes_wordlist, outputs=[record_wordlist, notify_wordlist_waiters]) S2_typing_code_no_wordlist.upon( refresh_nameplates, enter=S2_typing_code_no_wordlist, outputs=[raise_already_chose_nameplate1]) S2_typing_code_no_wordlist.upon( get_nameplate_completions, enter=S2_typing_code_no_wordlist, outputs=[raise_already_chose_nameplate2]) S2_typing_code_no_wordlist.upon( _choose_nameplate, enter=S2_typing_code_no_wordlist, outputs=[raise_already_chose_nameplate3]) S2_typing_code_no_wordlist.upon( get_word_completions, enter=S2_typing_code_no_wordlist, outputs=[no_word_completions], collector=first) S2_typing_code_no_wordlist.upon( choose_words, enter=S4_done, outputs=[do_words]) S3_typing_code_yes_wordlist.upon( got_nameplates, enter=S3_typing_code_yes_wordlist, outputs=[]) # got_wordlist: should never happen S3_typing_code_yes_wordlist.upon( refresh_nameplates, enter=S3_typing_code_yes_wordlist, outputs=[raise_already_chose_nameplate1]) S3_typing_code_yes_wordlist.upon( get_nameplate_completions, enter=S3_typing_code_yes_wordlist, outputs=[raise_already_chose_nameplate2]) S3_typing_code_yes_wordlist.upon( _choose_nameplate, enter=S3_typing_code_yes_wordlist, 
outputs=[raise_already_chose_nameplate3]) S3_typing_code_yes_wordlist.upon( get_word_completions, enter=S3_typing_code_yes_wordlist, outputs=[_get_word_completions], collector=first) S3_typing_code_yes_wordlist.upon( choose_words, enter=S4_done, outputs=[do_words]) S4_done.upon(got_nameplates, enter=S4_done, outputs=[]) S4_done.upon(got_wordlist, enter=S4_done, outputs=[]) S4_done.upon( refresh_nameplates, enter=S4_done, outputs=[raise_already_chose_nameplate1]) S4_done.upon( get_nameplate_completions, enter=S4_done, outputs=[raise_already_chose_nameplate2]) S4_done.upon( _choose_nameplate, enter=S4_done, outputs=[raise_already_chose_nameplate3]) S4_done.upon( get_word_completions, enter=S4_done, outputs=[raise_already_chose_words1]) S4_done.upon( choose_words, enter=S4_done, outputs=[raise_already_chose_words2]) # we only expose the Helper to application code, not _Input @attrs @implementer(_interfaces.IInputHelper) class Helper(object): _input = attrib() def __attrs_post_init__(self): self._main_thread = threading.current_thread().ident def refresh_nameplates(self): assert threading.current_thread().ident == self._main_thread self._input.refresh_nameplates() def get_nameplate_completions(self, prefix): assert threading.current_thread().ident == self._main_thread return self._input.get_nameplate_completions(prefix) def choose_nameplate(self, nameplate): assert threading.current_thread().ident == self._main_thread self._input._debug("I.choose_nameplate") self._input.choose_nameplate(nameplate) self._input._debug("I.choose_nameplate finished") def when_wordlist_is_available(self): assert threading.current_thread().ident == self._main_thread return self._input.when_wordlist_is_available() def get_word_completions(self, prefix): assert threading.current_thread().ident == self._main_thread return self._input.get_word_completions(prefix) def choose_words(self, words): assert threading.current_thread().ident == self._main_thread self._input._debug("I.choose_words") self._input.choose_words(words) self._input._debug("I.choose_words finished") magic-wormhole-0.12.0/src/wormhole/_interfaces.py000066400000000000000000000347041400712516500217740ustar00rootroot00000000000000from zope.interface import Interface # These interfaces are private: we use them as markers to detect # swapped argument bugs in the various .wire() calls class IWormhole(Interface): """Internal: this contains the methods invoked 'from below'.""" def got_welcome(welcome): pass def got_code(code): pass def got_key(key): pass def got_verifier(verifier): pass def got_versions(versions): pass def received(plaintext): pass def closed(result): pass class IBoss(Interface): pass class INameplate(Interface): pass class IMailbox(Interface): pass class ISend(Interface): pass class IOrder(Interface): pass class IKey(Interface): pass class IReceive(Interface): pass class IRendezvousConnector(Interface): pass class ILister(Interface): pass class ICode(Interface): pass class IInput(Interface): pass class IAllocator(Interface): pass class ITerminator(Interface): pass class ITiming(Interface): pass class ITorManager(Interface): pass class IWordlist(Interface): def choose_words(length): """Randomly select LENGTH words, join them with hyphens, return the result.""" def get_completions(prefix): """Return a list of all suffixes that could complete the given prefix.""" # These interfaces are public, and are re-exported by __init__.py class IDeferredWormhole(Interface): def get_welcome(): """ Wait for the 'welcome message' dictionary, sent by the server 
upon first connection. :rtype: ``Deferred[dict]`` :return: the welcome dictionary, when it arrives from the server """ def allocate_code(code_length=2): """ Ask the wormhole to allocate a nameplate and generate a random code. When the code is ready, any Deferreds returned by ``get_code()`` will be fired. Only one of allocate_code/set_code/input_code may be used. :param int code_length: the number of random words to use. More words means the code is harder to guess. Defaults to 2. :return: None """ def set_code(code): """ Tell the wormhole to use a specific code, either copied from a wormhole that used ``allocate_code``, or created out-of-band by humans (and given to ``set_code`` on both wormholes). Any Deferreds returned by ``get_code()`` will be fired as soon as this is called. Only one of allocate_code/set_code/input_code may be used. :return: None """ def input_code(): """ Ask the wormhole to perform interactive entry of the code, with completion on the nameplate and/or words. This does not actually interact with the user, but instead returns a 'code-entry helper' object. The application is responsible for doing the IO: the helper is used to get completion lists and to submit the finished code. See ``input_with_completion`` for a wrapper function that uses ``readline`` to do CLI-style input completion. Any Deferreds returned by ``get_code()`` will be fired when the final code is submitted to the helper. Only one of allocate_code/set_code/input_code may be used. :return: a code-entry helper instance :rtype: IHelper """ def get_code(): """ Wait for the wormhole code to be established, then return the code. This is really only useful on the initiating side, which needs to deliver the code to the user (so the user can dictate it to the other user, who can deliver it to their application with ``set_code`` or ``input_code``). On the receiving side, merely submitting the code is sufficient. The wormhole code is always unicode (so ``str`` on py3, ``unicode`` on py2). For ``allocate_code``, this must wait for the server to allocate a nameplate. For ``input_code``, it waits for the final code to be submitted to the helper. For ``set_code``, it fires immediately. :return: the wormhole code :rtype: ``Deferred[str]`` """ def get_unverified_key(): """ Wait for key-exchange to occur, then return the raw unverified SPAKE2 key. When this fires, we have not seen any evidence that anyone else shares this key (nor have we seen evidence of a failed attack: e.g. a payload encrypted with a different key). This is only useful for testing, and for noticing a significant delay between the key-agreement message and the subsequent key-verification ("versions") message. :return: the raw unverified SPAKE2 key :rtype: ``Deferred[bytes]`` """ def get_verifier(): """ Wait for key verification to occur, then return the verifier string. When this fires, we have seen at least one validly-encrypted message from our peer, indicating that we have established a shared secret key with some party who knows (or correctly guessed) the wormhole code. The verifier string (bytes) can be displayed to the user (perhaps as hex), who can manually compare it with the peer's verifier, to obtain more confidence in the secrecy of the established key. If we receive an invalid encrypted message (such as what would happen if an attacker tried and failed to guess the wormhole code), this will instead errback with a ``WrongPasswordError``.
:return: the verifier string, after a valid encrypted message has arrived :rtype: ``Deferred[bytes]`` """ def get_versions(): """ Wait for a valid VERSION message to arrive, then return the peer's "versions" dictionary. This dictionary comes from the ``versions=`` argument to the peer's ``wormhole()`` constructor, and is meant to assist with capability-negotiation between the two peers. In particular, the ``versions`` dictionary is delivered before either side has called ``send_message()``, so it can influence the first message sent to a peer that is too old to use that first message for negotiation purposes. If we receive any invalid encrypted message (such as what would happen if an attacker tried and failed to guess the wormhole code), this will instead errback with a ``WrongPasswordError``. :return: the versions dictionary :rtype: ``Deferred[dict]`` """ def derive_key(purpose, length): """ Derive a purpose-specific key. This combines the master SPAKE2 key with the given purpose string and deterministically derives a new key of the requested length. Any two connected Wormhole objects which call ``derive_key`` with the same purpose and length will get the same key. This can be used to encrypt or sign other messages, or exchanged for verification purposes. The master key will remain secret even if you reveal a derivative key. This must be called after the key has been established, so after any of ``get_unverified_key()/get_verifier()/get_versions()/get_message()`` have fired. ``derive_key()`` returns immediately, rather than returning a ``Deferred``. :return: a derivative key, of the requested length :rtype: ``bytes`` """ def send_message(msg): """ Send a message to the connected peer. This accepts a bytestring, and queues it for encryption and delivery to the other side, where it will eventually appear in ``get_message()``. Messages are delivered in-order, and complete (the Wormhole is a record-pipe, not a byte-pipe). This can be called at any time, even before setting the wormhole code. The message will be queued for delivery after the master key is established. :return: None """ def get_message(): """ Wait for, and return, the next message. This returns a Deferred that will fire when the next (sequential) application message has been received and successfully decrypted. Messages will be delivered in-order and intact (the Wormhole is a record-pipe, not a byte-pipe). This can be called at any time, even before setting the wormhole code. The Deferred will not fire until key-negotiation has completed and a validly-encrypted message has arrived. If we receive any invalid encrypted message (such as what would happen if an attacker tried and failed to guess the wormhole code), this will instead errback with a ``WrongPasswordError``. :return: the next decrypted message :rtype: ``Deferred[bytes]`` """ def close(): """ Close the wormhole. This frees all resources associated with the wormhole (including server-side queues and any established network connections). For operational purposes, it informs the server that the wormhole closed "happy". Less-happy moods may be reported if the connection closed due to a ``WrongPasswordError`` or because of a timeout. ``close()`` returns a Deferred, which fires after the server has been informed and the sockets have been shut down. One-shot applications should delay shutdown until this Deferred has fired, to increase the chances that server resources will be freed.
Long-running applications can probably ignore the Deferred, as they'll probably remain running long enough to allow the shutdown to complete. The Deferred will errback if the wormhole had problems, like a ``WrongPasswordError``. :return: a Deferred that fires when shutdown is complete :rtype: ``Deferred`` """ class IInputHelper(Interface): def refresh_nameplates(): """ Refresh the nameplates list. This asks the server for the set of currently-active nameplates (either from calls to ``allocate_code()`` or referenced by active wormhole clients). It updates the set available to ``get_nameplate_completions()``. :return: None """ def get_nameplate_completions(prefix): """ Return a list of nameplate completions for the given prefix. This takes the most-recently-received set of active nameplates from the rendezvous server, finds the subset that start with the given prefix, and returns the result. The result strings include the prefix and the terminating hyphen, in random order. This returns synchronously: it does not wait for a server response. If called before getting any response from the server, it will return an empty set. If user input causes completion, it may be a good idea to kick off a new ``refresh_nameplates()`` too, in case the user is bouncing on the TAB key in the hopes of seeing their expected nameplate appear in the list eventually. :param str prefix: the nameplate as typed so far :return: a set of potential completions :rtype: set[str] """ def choose_nameplate(nameplate): """ Commit to a nameplate, allowing the word-completion phase to begin. This may only be called once. Calling it a second time will raise ``AlreadyChoseNameplateError``. :param str nameplate: the complete nameplate, without a trailing hyphen :return: None """ def when_wordlist_is_available(): """ Wait for the wordlist to be available. This fires when the wordlist is available, which means ``get_word_completions()`` is able to return a non-empty set. This requires the nameplate be submitted, and may also require some server interaction (to claim the channel and learn a channel-specific wordlist, e.g. for i18n language selection). :return: a ``Deferred`` that fires when the wordlist is available :rtype: Deferred[None] """ def get_word_completions(prefix): """ Return a list of word completions for the given prefix. This takes the claimed channel's wordlist, finds the subset that start with the given prefix, and returns the result. The result strings include the prefix and the terminating hyphen, in random order. The prefix should not include the nameplate, but should include whatever words have been selected so far (the default uses separate odd/even wordlists, which means the completion for a single string depends upon how many words have been entered so far). This returns synchronously: it does not wait for a server response. If called before getting the wordlist, it will return an empty set. If called before ``choose_nameplate()``, this will raise ``MustChooseNameplateFirstError``. If called after ``choose_words()``, this will raise ``AlreadyChoseWordsError``. :param str prefix: the words typed so far :return: a set of potential completions :rtype: set[str] """ def choose_words(words): """ Submit the final words. This should be called when the user is finished typing in the code, and terminates the code-entry process. It does not return anything, but will cause the Wormhole's ``w.get_code()`` to fire, and initiates the wormhole connection process. 
It accepts a string like "purple-sausages", without the leading nameplate (which must have been submitted to ``choose_nameplate()`` earlier) or its hyphen. If ``choose_nameplate()`` was not called first, this will raise ``MustChooseNameplateFirstError``. This may only be called once, otherwise ``AlreadyChoseWordsError`` will be raised. :param str words: the 'words' portion of the wormhole code :return: None """ class IJournal(Interface): # TODO: this needs to be public pass class IDilator(Interface): pass class IDilationManager(Interface): pass class IDilationConnector(Interface): pass class ISubChannel(Interface): pass class IInbound(Interface): pass class IOutbound(Interface): pass magic-wormhole-0.12.0/src/wormhole/_key.py000066400000000000000000000151701400712516500204350ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals from hashlib import sha256 import six from attr import attrib, attrs from attr.validators import instance_of, provides from automat import MethodicalMachine from nacl import utils from nacl.exceptions import CryptoError from nacl.secret import SecretBox from spake2 import SPAKE2_Symmetric from zope.interface import implementer from . import _interfaces from .util import (bytes_to_dict, bytes_to_hexstr, dict_to_bytes, hexstr_to_bytes, to_bytes, HKDF) CryptoError __all__ = ["derive_key", "derive_phase_key", "CryptoError", "Key"] def derive_key(key, purpose, length=SecretBox.KEY_SIZE): if not isinstance(key, type(b"")): raise TypeError(type(key)) if not isinstance(purpose, type(b"")): raise TypeError(type(purpose)) if not isinstance(length, six.integer_types): raise TypeError(type(length)) return HKDF(key, length, CTXinfo=purpose) def derive_phase_key(key, side, phase): assert isinstance(side, type("")), type(side) assert isinstance(phase, type("")), type(phase) side_bytes = side.encode("ascii") phase_bytes = phase.encode("ascii") purpose = (b"wormhole:phase:" + sha256(side_bytes).digest() + sha256(phase_bytes).digest()) return derive_key(key, purpose) def decrypt_data(key, encrypted): assert isinstance(key, type(b"")), type(key) assert isinstance(encrypted, type(b"")), type(encrypted) assert len(key) == SecretBox.KEY_SIZE, len(key) box = SecretBox(key) data = box.decrypt(encrypted) return data def encrypt_data(key, plaintext): assert isinstance(key, type(b"")), type(key) assert isinstance(plaintext, type(b"")), type(plaintext) assert len(key) == SecretBox.KEY_SIZE, len(key) box = SecretBox(key) nonce = utils.random(SecretBox.NONCE_SIZE) return box.encrypt(plaintext, nonce) # the Key we expose to callers (Boss, Ordering) is responsible for sorting # the two messages (got_code and got_pake), then delivering them to # _SortedKey in the right order. 
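# Schematically (the state names match the machine below; this is just a # restatement of its transition table, not new behavior): # # code first: S00 --got_code--> S10 --got_pake--> S11 # (each message is delivered to _SortedKey as it arrives) # pake first: S00 --got_pake--> S01 --got_code--> S11 # (the PAKE body is stashed, then the code is delivered followed by the # stashed PAKE)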
@attrs @implementer(_interfaces.IKey) class Key(object): _appid = attrib(validator=instance_of(type(u""))) _versions = attrib(validator=instance_of(dict)) _side = attrib(validator=instance_of(type(u""))) _timing = attrib(validator=provides(_interfaces.ITiming)) m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def __attrs_post_init__(self): self._SK = _SortedKey(self._appid, self._versions, self._side, self._timing) self._debug_pake_stashed = False # for tests def wire(self, boss, mailbox, receive): self._SK.wire(boss, mailbox, receive) @m.state(initial=True) def S00(self): pass # pragma: no cover @m.state() def S01(self): pass # pragma: no cover @m.state() def S10(self): pass # pragma: no cover @m.state() def S11(self): pass # pragma: no cover @m.input() def got_code(self, code): pass @m.input() def got_pake(self, body): pass @m.output() def stash_pake(self, body): self._pake = body self._debug_pake_stashed = True @m.output() def deliver_code(self, code): self._SK.got_code(code) @m.output() def deliver_pake(self, body): self._SK.got_pake(body) @m.output() def deliver_code_and_stashed_pake(self, code): self._SK.got_code(code) self._SK.got_pake(self._pake) S00.upon(got_code, enter=S10, outputs=[deliver_code]) S10.upon(got_pake, enter=S11, outputs=[deliver_pake]) S00.upon(got_pake, enter=S01, outputs=[stash_pake]) S01.upon(got_code, enter=S11, outputs=[deliver_code_and_stashed_pake]) @attrs class _SortedKey(object): _appid = attrib(validator=instance_of(type(u""))) _versions = attrib(validator=instance_of(dict)) _side = attrib(validator=instance_of(type(u""))) _timing = attrib(validator=provides(_interfaces.ITiming)) m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def wire(self, boss, mailbox, receive): self._B = _interfaces.IBoss(boss) self._M = _interfaces.IMailbox(mailbox) self._R = _interfaces.IReceive(receive) @m.state(initial=True) def S0_know_nothing(self): pass # pragma: no cover @m.state() def S1_know_code(self): pass # pragma: no cover @m.state() def S2_know_key(self): pass # pragma: no cover @m.state(terminal=True) def S3_scared(self): pass # pragma: no cover # from Boss @m.input() def got_code(self, code): pass # from Ordering def got_pake(self, body): assert isinstance(body, type(b"")), type(body) payload = bytes_to_dict(body) if "pake_v1" in payload: self.got_pake_good(hexstr_to_bytes(payload["pake_v1"])) else: self.got_pake_bad() @m.input() def got_pake_good(self, msg2): pass @m.input() def got_pake_bad(self): pass @m.output() def build_pake(self, code): with self._timing.add("pake1", waiting="crypto"): self._sp = SPAKE2_Symmetric( to_bytes(code), idSymmetric=to_bytes(self._appid)) msg1 = self._sp.start() body = dict_to_bytes({"pake_v1": bytes_to_hexstr(msg1)}) self._M.add_message("pake", body) @m.output() def scared(self): self._B.scared() @m.output() def compute_key(self, msg2): assert isinstance(msg2, type(b"")) with self._timing.add("pake2", waiting="crypto"): key = self._sp.finish(msg2) # TODO: make B.got_key() an eventual send, since it will fire the # user/application-layer get_unverified_key() Deferred, and if that # calls back into other wormhole APIs, bad things will happen self._B.got_key(key) phase = "version" data_key = derive_phase_key(key, self._side, phase) plaintext = dict_to_bytes(self._versions) encrypted = encrypt_data(data_key, plaintext) self._M.add_message(phase, encrypted) # TODO: R.got_key() needs to be eventual-send too, as it can trigger # 
app-level got_verifier() and got_message() Deferreds. self._R.got_key(key) S0_know_nothing.upon(got_code, enter=S1_know_code, outputs=[build_pake]) S1_know_code.upon(got_pake_good, enter=S2_know_key, outputs=[compute_key]) S1_know_code.upon(got_pake_bad, enter=S3_scared, outputs=[scared]) magic-wormhole-0.12.0/src/wormhole/_lister.py000066400000000000000000000061441400712516500211500ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals from attr import attrib, attrs from attr.validators import provides from automat import MethodicalMachine from zope.interface import implementer from . import _interfaces @attrs @implementer(_interfaces.ILister) class Lister(object): _timing = attrib(validator=provides(_interfaces.ITiming)) m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def wire(self, rendezvous_connector, input): self._RC = _interfaces.IRendezvousConnector(rendezvous_connector) self._I = _interfaces.IInput(input) # Ideally, each API request would spawn a new "list_nameplates" message # to the server, so the response would be maximally fresh, but that would # require correlating server request+response messages, and the protocol # is intended to be less stateful than that. So we offer a weaker # freshness property: if no server requests are in flight, then a new API # request will provoke a new server request, and the result will be # fresh. But if a server request is already in flight when a second API # request arrives, both requests will be satisfied by the same response. @m.state(initial=True) def S0A_idle_disconnected(self): pass # pragma: no cover @m.state() def S1A_wanting_disconnected(self): pass # pragma: no cover @m.state() def S0B_idle_connected(self): pass # pragma: no cover @m.state() def S1B_wanting_connected(self): pass # pragma: no cover @m.input() def connected(self): pass @m.input() def lost(self): pass @m.input() def refresh(self): pass @m.input() def rx_nameplates(self, all_nameplates): pass @m.output() def RC_tx_list(self): self._RC.tx_list() @m.output() def I_got_nameplates(self, all_nameplates): # We get a set of nameplate ids. There may be more attributes in the # future: change RendezvousConnector._response_handle_nameplates to # get them self._I.got_nameplates(all_nameplates) S0A_idle_disconnected.upon(connected, enter=S0B_idle_connected, outputs=[]) S0B_idle_connected.upon(lost, enter=S0A_idle_disconnected, outputs=[]) S0A_idle_disconnected.upon( refresh, enter=S1A_wanting_disconnected, outputs=[]) S1A_wanting_disconnected.upon( refresh, enter=S1A_wanting_disconnected, outputs=[]) S1A_wanting_disconnected.upon( connected, enter=S1B_wanting_connected, outputs=[RC_tx_list]) S0B_idle_connected.upon( refresh, enter=S1B_wanting_connected, outputs=[RC_tx_list]) S0B_idle_connected.upon( rx_nameplates, enter=S0B_idle_connected, outputs=[I_got_nameplates]) S1B_wanting_connected.upon( lost, enter=S1A_wanting_disconnected, outputs=[]) S1B_wanting_connected.upon( refresh, enter=S1B_wanting_connected, outputs=[RC_tx_list]) S1B_wanting_connected.upon( rx_nameplates, enter=S0B_idle_connected, outputs=[I_got_nameplates]) magic-wormhole-0.12.0/src/wormhole/_mailbox.py000066400000000000000000000151251400712516500213000ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals from attr import attrib, attrs from attr.validators import instance_of from automat import MethodicalMachine from zope.interface import implementer from . 
import _interfaces @attrs @implementer(_interfaces.IMailbox) class Mailbox(object): _side = attrib(validator=instance_of(type(u""))) m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def __attrs_post_init__(self): self._mailbox = None self._pending_outbound = {} self._processed = set() def wire(self, nameplate, rendezvous_connector, ordering, terminator): self._N = _interfaces.INameplate(nameplate) self._RC = _interfaces.IRendezvousConnector(rendezvous_connector) self._O = _interfaces.IOrder(ordering) self._T = _interfaces.ITerminator(terminator) # all -A states: not connected # all -B states: yes connected # B states serialize as A, so they deserialize as unconnected # S0: know nothing @m.state(initial=True) def S0A(self): pass # pragma: no cover @m.state() def S0B(self): pass # pragma: no cover # S1: mailbox known, not opened @m.state() def S1A(self): pass # pragma: no cover # S2: mailbox known, opened # We've definitely tried to open the mailbox at least once, but it must # be re-opened with each connection, because open() is also subscribe() @m.state() def S2A(self): pass # pragma: no cover @m.state() def S2B(self): pass # pragma: no cover # S3: closing @m.state() def S3A(self): pass # pragma: no cover @m.state() def S3B(self): pass # pragma: no cover # S4: closed. We no longer care whether we're connected or not # @m.state() # def S4A(self): pass # @m.state() # def S4B(self): pass @m.state(terminal=True) def S4(self): pass # pragma: no cover S4A = S4 S4B = S4 # from Terminator @m.input() def close(self, mood): pass # from Nameplate @m.input() def got_mailbox(self, mailbox): pass # from RendezvousConnector @m.input() def connected(self): pass @m.input() def lost(self): pass def rx_message(self, side, phase, body): assert isinstance(side, type("")), type(side) assert isinstance(phase, type("")), type(phase) assert isinstance(body, type(b"")), type(body) if side == self._side: self.rx_message_ours(phase, body) else: self.rx_message_theirs(side, phase, body) @m.input() def rx_message_ours(self, phase, body): pass @m.input() def rx_message_theirs(self, side, phase, body): pass @m.input() def rx_closed(self): pass # from Send or Key @m.input() def add_message(self, phase, body): pass @m.output() def record_mailbox(self, mailbox): self._mailbox = mailbox @m.output() def RC_tx_open(self): assert self._mailbox self._RC.tx_open(self._mailbox) @m.output() def queue(self, phase, body): assert isinstance(phase, type("")), type(phase) assert isinstance(body, type(b"")), (type(body), phase, body) self._pending_outbound[phase] = body @m.output() def record_mailbox_and_RC_tx_open_and_drain(self, mailbox): self._mailbox = mailbox self._RC.tx_open(mailbox) self._drain() @m.output() def drain(self): self._drain() def _drain(self): for phase, body in self._pending_outbound.items(): self._RC.tx_add(phase, body) @m.output() def RC_tx_add(self, phase, body): assert isinstance(phase, type("")), type(phase) assert isinstance(body, type(b"")), type(body) self._RC.tx_add(phase, body) @m.output() def N_release_and_accept(self, side, phase, body): self._N.release() if phase not in self._processed: self._processed.add(phase) self._O.got_message(side, phase, body) @m.output() def RC_tx_close(self): assert self._mood self._RC_tx_close() def _RC_tx_close(self): self._RC.tx_close(self._mailbox, self._mood) @m.output() def dequeue(self, phase, body): self._pending_outbound.pop(phase, None) @m.output() def record_mood(self, mood): self._mood = mood @m.output() def 
record_mood_and_RC_tx_close(self, mood):
        self._mood = mood
        self._RC_tx_close()

    @m.output()
    def ignore_mood_and_T_mailbox_done(self, mood):
        self._T.mailbox_done()

    @m.output()
    def T_mailbox_done(self):
        self._T.mailbox_done()

    S0A.upon(connected, enter=S0B, outputs=[])
    S0A.upon(got_mailbox, enter=S1A, outputs=[record_mailbox])
    S0A.upon(add_message, enter=S0A, outputs=[queue])
    S0A.upon(close, enter=S4A, outputs=[ignore_mood_and_T_mailbox_done])

    S0B.upon(lost, enter=S0A, outputs=[])
    S0B.upon(add_message, enter=S0B, outputs=[queue])
    S0B.upon(close, enter=S4B, outputs=[ignore_mood_and_T_mailbox_done])
    S0B.upon(
        got_mailbox,
        enter=S2B,
        outputs=[record_mailbox_and_RC_tx_open_and_drain])

    S1A.upon(connected, enter=S2B, outputs=[RC_tx_open, drain])
    S1A.upon(add_message, enter=S1A, outputs=[queue])
    S1A.upon(close, enter=S4A, outputs=[ignore_mood_and_T_mailbox_done])

    S2A.upon(connected, enter=S2B, outputs=[RC_tx_open, drain])
    S2A.upon(add_message, enter=S2A, outputs=[queue])
    S2A.upon(close, enter=S3A, outputs=[record_mood])

    S2B.upon(lost, enter=S2A, outputs=[])
    S2B.upon(add_message, enter=S2B, outputs=[queue, RC_tx_add])
    S2B.upon(rx_message_theirs, enter=S2B, outputs=[N_release_and_accept])
    S2B.upon(rx_message_ours, enter=S2B, outputs=[dequeue])
    S2B.upon(close, enter=S3B, outputs=[record_mood_and_RC_tx_close])

    S3A.upon(connected, enter=S3B, outputs=[RC_tx_close])
    S3B.upon(lost, enter=S3A, outputs=[])
    S3B.upon(rx_closed, enter=S4B, outputs=[T_mailbox_done])
    S3B.upon(add_message, enter=S3B, outputs=[])
    S3B.upon(rx_message_theirs, enter=S3B, outputs=[])
    S3B.upon(rx_message_ours, enter=S3B, outputs=[])
    S3B.upon(close, enter=S3B, outputs=[])

    S4A.upon(connected, enter=S4B, outputs=[])
    S4B.upon(lost, enter=S4A, outputs=[])
    S4.upon(add_message, enter=S4, outputs=[])
    S4.upon(rx_message_theirs, enter=S4, outputs=[])
    S4.upon(rx_message_ours, enter=S4, outputs=[])
    S4.upon(close, enter=S4, outputs=[])
magic-wormhole-0.12.0/src/wormhole/_nameplate.py000066400000000000000000000130001400712516500216030ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals

import re

from automat import MethodicalMachine
from zope.interface import implementer

from . import _interfaces
from ._wordlist import PGPWordList
from .errors import KeyFormatError


def validate_nameplate(nameplate):
    if not re.search(r'^\d+$', nameplate):
        raise KeyFormatError(
            "Nameplate '%s' must be numeric, with no spaces."
% nameplate) @implementer(_interfaces.INameplate) class Nameplate(object): m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def __init__(self): self._nameplate = None def wire(self, mailbox, input, rendezvous_connector, terminator): self._M = _interfaces.IMailbox(mailbox) self._I = _interfaces.IInput(input) self._RC = _interfaces.IRendezvousConnector(rendezvous_connector) self._T = _interfaces.ITerminator(terminator) # all -A states: not connected # all -B states: yes connected # B states serialize as A, so they deserialize as unconnected # S0: know nothing @m.state(initial=True) def S0A(self): pass # pragma: no cover @m.state() def S0B(self): pass # pragma: no cover # S1: nameplate known, never claimed @m.state() def S1A(self): pass # pragma: no cover # S2: nameplate known, maybe claimed @m.state() def S2A(self): pass # pragma: no cover @m.state() def S2B(self): pass # pragma: no cover # S3: nameplate claimed @m.state() def S3A(self): pass # pragma: no cover @m.state() def S3B(self): pass # pragma: no cover # S4: maybe released @m.state() def S4A(self): pass # pragma: no cover @m.state() def S4B(self): pass # pragma: no cover # S5: released # we no longer care whether we're connected or not # @m.state() # def S5A(self): pass # @m.state() # def S5B(self): pass @m.state() def S5(self): pass # pragma: no cover S5A = S5 S5B = S5 # from Boss def set_nameplate(self, nameplate): validate_nameplate(nameplate) # can raise KeyFormatError self._set_nameplate(nameplate) @m.input() def _set_nameplate(self, nameplate): pass # from Mailbox @m.input() def release(self): pass # from Terminator @m.input() def close(self): pass # from RendezvousConnector @m.input() def connected(self): pass @m.input() def lost(self): pass @m.input() def rx_claimed(self, mailbox): pass @m.input() def rx_released(self): pass @m.output() def record_nameplate(self, nameplate): validate_nameplate(nameplate) self._nameplate = nameplate @m.output() def record_nameplate_and_RC_tx_claim(self, nameplate): validate_nameplate(nameplate) self._nameplate = nameplate self._RC.tx_claim(self._nameplate) @m.output() def RC_tx_claim(self): # when invoked via M.connected(), we must use the stored nameplate self._RC.tx_claim(self._nameplate) @m.output() def I_got_wordlist(self, mailbox): # TODO select wordlist based on nameplate properties, in rx_claimed wordlist = PGPWordList() self._I.got_wordlist(wordlist) @m.output() def M_got_mailbox(self, mailbox): self._M.got_mailbox(mailbox) @m.output() def RC_tx_release(self): assert self._nameplate self._RC.tx_release(self._nameplate) @m.output() def T_nameplate_done(self): self._T.nameplate_done() S0A.upon(_set_nameplate, enter=S1A, outputs=[record_nameplate]) S0A.upon(connected, enter=S0B, outputs=[]) S0A.upon(close, enter=S5A, outputs=[T_nameplate_done]) S0B.upon( _set_nameplate, enter=S2B, outputs=[record_nameplate_and_RC_tx_claim]) S0B.upon(lost, enter=S0A, outputs=[]) S0B.upon(close, enter=S5A, outputs=[T_nameplate_done]) S1A.upon(connected, enter=S2B, outputs=[RC_tx_claim]) S1A.upon(close, enter=S5A, outputs=[T_nameplate_done]) S2A.upon(connected, enter=S2B, outputs=[RC_tx_claim]) S2A.upon(close, enter=S4A, outputs=[]) S2B.upon(lost, enter=S2A, outputs=[]) S2B.upon(rx_claimed, enter=S3B, outputs=[I_got_wordlist, M_got_mailbox]) S2B.upon(close, enter=S4B, outputs=[RC_tx_release]) S3A.upon(connected, enter=S3B, outputs=[]) S3A.upon(close, enter=S4A, outputs=[]) S3B.upon(lost, enter=S3A, outputs=[]) # S3B.upon(rx_claimed, enter=S3B, outputs=[]) # 
shouldn't happen
    S3B.upon(release, enter=S4B, outputs=[RC_tx_release])
    S3B.upon(close, enter=S4B, outputs=[RC_tx_release])

    S4A.upon(connected, enter=S4B, outputs=[RC_tx_release])
    S4A.upon(close, enter=S4A, outputs=[])
    S4B.upon(lost, enter=S4A, outputs=[])
    S4B.upon(rx_claimed, enter=S4B, outputs=[])
    S4B.upon(rx_released, enter=S5B, outputs=[T_nameplate_done])
    S4B.upon(release, enter=S4B, outputs=[])  # mailbox is lazy
    # Mailbox doesn't remember how many times it's sent a release, and will
    # re-send a new one for each peer message it receives. Ignoring it here
    # is easier than adding a new pair of states to Mailbox.
    S4B.upon(close, enter=S4B, outputs=[])

    S5A.upon(connected, enter=S5B, outputs=[])
    S5B.upon(lost, enter=S5A, outputs=[])
    S5.upon(release, enter=S5, outputs=[])  # mailbox is lazy
    S5.upon(close, enter=S5, outputs=[])
magic-wormhole-0.12.0/src/wormhole/_order.py000066400000000000000000000047151400712516500207630ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals

from attr import attrib, attrs
from attr.validators import instance_of, provides
from automat import MethodicalMachine
from zope.interface import implementer

from . import _interfaces


@attrs
@implementer(_interfaces.IOrder)
class Order(object):
    _side = attrib(validator=instance_of(type(u"")))
    _timing = attrib(validator=provides(_interfaces.ITiming))
    m = MethodicalMachine()
    set_trace = getattr(m, "_setTrace",
                        lambda self, f: None)  # pragma: no cover

    def __attrs_post_init__(self):
        self._key = None
        self._queue = []

    def wire(self, key, receive):
        self._K = _interfaces.IKey(key)
        self._R = _interfaces.IReceive(receive)

    @m.state(initial=True)
    def S0_no_pake(self):
        pass  # pragma: no cover

    @m.state(terminal=True)
    def S1_yes_pake(self):
        pass  # pragma: no cover

    def got_message(self, side, phase, body):
        # print("ORDER[%s].got_message(%s)" % (self._side, phase))
        assert isinstance(side, type("")), type(side)
        assert isinstance(phase, type("")), type(phase)
        assert isinstance(body, type(b"")), type(body)
        if phase == "pake":
            self.got_pake(side, phase, body)
        else:
            self.got_non_pake(side, phase, body)

    @m.input()
    def got_pake(self, side, phase, body):
        pass

    @m.input()
    def got_non_pake(self, side, phase, body):
        pass

    @m.output()
    def queue(self, side, phase, body):
        assert isinstance(side, type("")), type(side)
        assert isinstance(phase, type("")), type(phase)
        assert isinstance(body, type(b"")), type(body)
        self._queue.append((side, phase, body))

    @m.output()
    def notify_key(self, side, phase, body):
        self._K.got_pake(body)

    @m.output()
    def drain(self, side, phase, body):
        del phase
        del body
        for (side, phase, body) in self._queue:
            self._deliver(side, phase, body)
        self._queue[:] = []

    @m.output()
    def deliver(self, side, phase, body):
        self._deliver(side, phase, body)

    def _deliver(self, side, phase, body):
        self._R.got_message(side, phase, body)

    S0_no_pake.upon(got_non_pake, enter=S0_no_pake, outputs=[queue])
    S0_no_pake.upon(got_pake, enter=S1_yes_pake, outputs=[notify_key, drain])
    S1_yes_pake.upon(got_non_pake, enter=S1_yes_pake, outputs=[deliver])
magic-wormhole-0.12.0/src/wormhole/_receive.py000066400000000000000000000063001400712516500212620ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals

from attr import attrib, attrs
from attr.validators import instance_of, provides
from automat import MethodicalMachine
from zope.interface import implementer

from .
import _interfaces from ._key import CryptoError, decrypt_data, derive_key, derive_phase_key @attrs @implementer(_interfaces.IReceive) class Receive(object): _side = attrib(validator=instance_of(type(u""))) _timing = attrib(validator=provides(_interfaces.ITiming)) m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def __attrs_post_init__(self): self._key = None def wire(self, boss, send): self._B = _interfaces.IBoss(boss) self._S = _interfaces.ISend(send) @m.state(initial=True) def S0_unknown_key(self): pass # pragma: no cover @m.state() def S1_unverified_key(self): pass # pragma: no cover @m.state() def S2_verified_key(self): pass # pragma: no cover @m.state(terminal=True) def S3_scared(self): pass # pragma: no cover # from Ordering def got_message(self, side, phase, body): assert isinstance(side, type("")), type(phase) assert isinstance(phase, type("")), type(phase) assert isinstance(body, type(b"")), type(body) assert self._key data_key = derive_phase_key(self._key, side, phase) try: plaintext = decrypt_data(data_key, body) except CryptoError: self.got_message_bad() return self.got_message_good(phase, plaintext) @m.input() def got_message_good(self, phase, plaintext): pass @m.input() def got_message_bad(self): pass # from Key @m.input() def got_key(self, key): pass @m.output() def record_key(self, key): self._key = key @m.output() def S_got_verified_key(self, phase, plaintext): assert self._key self._S.got_verified_key(self._key) @m.output() def W_happy(self, phase, plaintext): self._B.happy() @m.output() def W_got_verifier(self, phase, plaintext): self._B.got_verifier(derive_key(self._key, b"wormhole:verifier")) @m.output() def W_got_message(self, phase, plaintext): assert isinstance(phase, type("")), type(phase) assert isinstance(plaintext, type(b"")), type(plaintext) self._B.got_message(phase, plaintext) @m.output() def W_scared(self): self._B.scared() S0_unknown_key.upon(got_key, enter=S1_unverified_key, outputs=[record_key]) S1_unverified_key.upon( got_message_good, enter=S2_verified_key, outputs=[S_got_verified_key, W_happy, W_got_verifier, W_got_message]) S1_unverified_key.upon( got_message_bad, enter=S3_scared, outputs=[W_scared]) S2_verified_key.upon(got_message_bad, enter=S3_scared, outputs=[W_scared]) S2_verified_key.upon( got_message_good, enter=S2_verified_key, outputs=[W_got_message]) S3_scared.upon(got_message_good, enter=S3_scared, outputs=[]) S3_scared.upon(got_message_bad, enter=S3_scared, outputs=[]) magic-wormhole-0.12.0/src/wormhole/_rendezvous.py000066400000000000000000000300341400712516500220450ustar00rootroot00000000000000from __future__ import print_function, absolute_import, unicode_literals import os from six.moves.urllib_parse import urlparse from attr import attrs, attrib from attr.validators import provides, instance_of, optional from zope.interface import implementer from twisted.python import log from twisted.internet import defer, endpoints, task from twisted.application import internet from autobahn.twisted import websocket from . import _interfaces, errors from .util import (bytes_to_hexstr, hexstr_to_bytes, bytes_to_dict, dict_to_bytes) class WSClient(websocket.WebSocketClientProtocol): def onConnect(self, response): # this fires during WebSocket negotiation, and isn't very useful # unless you want to modify the protocol settings # print("onConnect", response) pass def onOpen(self, *args): # this fires when the WebSocket is ready to go. 
No arguments # print("onOpen", args) # self.wormhole_open = True self._RC.ws_open(self) def onMessage(self, payload, isBinary): assert not isBinary try: self._RC.ws_message(payload) except Exception: from twisted.python.failure import Failure print("LOGGING", Failure()) log.err() raise def onClose(self, wasClean, code, reason): # print("onClose") self._RC.ws_close(wasClean, code, reason) # if self.wormhole_open: # self.wormhole._ws_closed(wasClean, code, reason) # else: # # we closed before establishing a connection (onConnect) or # # finishing WebSocket negotiation (onOpen): errback # self.factory.d.errback(error.ConnectError(reason)) class WSFactory(websocket.WebSocketClientFactory): protocol = WSClient def __init__(self, RC, *args, **kwargs): websocket.WebSocketClientFactory.__init__(self, *args, **kwargs) self._RC = RC def buildProtocol(self, addr): proto = websocket.WebSocketClientFactory.buildProtocol(self, addr) proto._RC = self._RC # proto.wormhole_open = False return proto @attrs @implementer(_interfaces.IRendezvousConnector) class RendezvousConnector(object): _url = attrib(validator=instance_of(type(u""))) _appid = attrib(validator=instance_of(type(u""))) _side = attrib(validator=instance_of(type(u""))) _reactor = attrib() _journal = attrib(validator=provides(_interfaces.IJournal)) _tor = attrib(validator=optional(provides(_interfaces.ITorManager))) _timing = attrib(validator=provides(_interfaces.ITiming)) _client_version = attrib(validator=instance_of(tuple)) def __attrs_post_init__(self): self._have_made_a_successful_connection = False self._stopping = False self._trace = None self._ws = None f = WSFactory(self, self._url) f.setProtocolOptions(autoPingInterval=60, autoPingTimeout=600) ep = self._make_endpoint(self._url) self._connector = internet.ClientService(ep, f) faf = None if self._have_made_a_successful_connection else 1 d = self._connector.whenConnected(failAfterFailures=faf) # if the initial connection fails, signal an error and shut down. 
do # this in a different reactor turn to avoid some hazards d.addBoth(lambda res: task.deferLater(self._reactor, 0.0, lambda: res)) # TODO: use EventualQueue d.addErrback(self._initial_connection_failed) self._debug_record_inbound_f = None def set_trace(self, f): self._trace = f def _debug(self, what): if self._trace: self._trace(old_state="", input=what, new_state="") def _make_endpoint(self, url): p = urlparse(url) tls = (p.scheme == "wss") port = p.port or (443 if tls else 80) if self._tor: return self._tor.stream_via(p.hostname, port, tls=tls) if tls: return endpoints.clientFromString(self._reactor, "tls:%s:%s" % (p.hostname, port)) return endpoints.HostnameEndpoint(self._reactor, p.hostname, port) def wire(self, boss, nameplate, mailbox, allocator, lister, terminator): self._B = _interfaces.IBoss(boss) self._N = _interfaces.INameplate(nameplate) self._M = _interfaces.IMailbox(mailbox) self._A = _interfaces.IAllocator(allocator) self._L = _interfaces.ILister(lister) self._T = _interfaces.ITerminator(terminator) # from Boss def start(self): self._connector.startService() # from Mailbox def tx_claim(self, nameplate): self._tx("claim", nameplate=nameplate) def tx_open(self, mailbox): self._tx("open", mailbox=mailbox) def tx_add(self, phase, body): assert isinstance(phase, type("")), type(phase) assert isinstance(body, type(b"")), type(body) self._tx("add", phase=phase, body=bytes_to_hexstr(body)) def tx_release(self, nameplate): self._tx("release", nameplate=nameplate) def tx_close(self, mailbox, mood): self._tx("close", mailbox=mailbox, mood=mood) def stop(self): # ClientService.stopService is defined to "Stop attempting to # reconnect and close any existing connections" self._stopping = True # to catch _initial_connection_failed error d = defer.maybeDeferred(self._connector.stopService) # ClientService.stopService always fires with None, even if the # initial connection failed, so log.err just in case d.addErrback(log.err) d.addBoth(self._stopped) # from Lister def tx_list(self): self._tx("list") # from Code def tx_allocate(self): self._tx("allocate") # from our ClientService def _initial_connection_failed(self, f): if not self._stopping: sce = errors.ServerConnectionError(self._url, f.value) d = defer.maybeDeferred(self._connector.stopService) # this should happen right away: the ClientService ought to be in # the "_waiting" state, and everything in the _waiting.stop # transition is immediate d.addErrback(log.err) # just in case something goes wrong d.addCallback(lambda _: self._B.error(sce)) # from our WSClient (the WebSocket protocol) def ws_open(self, proto): self._debug("R.connected") self._have_made_a_successful_connection = True self._ws = proto try: self._tx( "bind", appid=self._appid, side=self._side, client_version=self._client_version) self._N.connected() self._M.connected() self._L.connected() self._A.connected() except Exception as e: self._B.error(e) raise self._debug("R.connected finished notifications") def ws_message(self, payload): msg = bytes_to_dict(payload) if msg["type"] != "ack": self._debug("R.rx(%s %s%s)" % ( msg["type"], msg.get("phase", ""), "[mine]" if msg.get("side", "") == self._side else "", )) self._timing.add("ws_receive", _side=self._side, message=msg) if self._debug_record_inbound_f: self._debug_record_inbound_f(msg) mtype = msg["type"] meth = getattr(self, "_response_handle_" + mtype, None) if not meth: # make tests fail, but real application will ignore it log.err( errors._UnknownMessageTypeError( "Unknown inbound message type %r" % (msg, ))) return 
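        # Exposition, not part of the original module: the getattr() lookup
        # above turns an inbound message's "type" field into a
        # "_response_handle_<type>" method call. A minimal self-contained
        # sketch of that dispatch pattern (the names here are hypothetical,
        # for illustration only):
        def _example_dispatch(handler, msg):
            # msg["type"] == "welcome" selects handler._response_handle_welcome
            meth = getattr(handler, "_response_handle_" + msg["type"], None)
            if meth is None:
                raise ValueError("unknown message type %r" % (msg,))
            return meth(msg)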
try: return meth(msg) except Exception as e: log.err(e) self._B.error(e) raise def ws_close(self, wasClean, code, reason): self._debug("R.lost") was_open = bool(self._ws) self._ws = None # when Autobahn connects to a non-websocket server, it gets a # CLOSE_STATUS_CODE_ABNORMAL_CLOSE, and delivers onClose() without # ever calling onOpen first. This confuses our state machines, so # avoid telling them we've lost the connection unless we'd previously # told them we'd connected. if was_open: self._N.lost() self._M.lost() self._L.lost() self._A.lost() # and if this happens on the very first connection, then we treat it # as a failed initial connection, even though ClientService didn't # notice it. There's a Twisted ticket (#8375) about giving # ClientService an extra setup function to use, so it can tell # whether post-connection negotiation was successful or not, and # restart the process if it fails. That would be useful here, so that # failAfterFailures=1 would do the right thing if the initial TCP # connection succeeds but the first WebSocket negotiation fails. if not self._have_made_a_successful_connection: # shut down the ClientService, which currently thinks it has a # valid connection sce = errors.ServerConnectionError(self._url, reason) d = defer.maybeDeferred(self._connector.stopService) d.addErrback(log.err) # just in case something goes wrong # tell the Boss to quit and inform the user d.addCallback(lambda _: self._B.error(sce)) # internal def _stopped(self, res): self._T.stoppedRC() def _tx(self, mtype, **kwargs): assert self._ws # msgid is used by misc/dump-timing.py to correlate our sends with # their receives, and vice versa. They are also correlated with the # ACKs we get back from the server (which we otherwise ignore). There # are so few messages, 16 bits is enough to be mostly-unique. kwargs["id"] = bytes_to_hexstr(os.urandom(2)) kwargs["type"] = mtype self._debug("R.tx(%s %s)" % (mtype.upper(), kwargs.get("phase", ""))) payload = dict_to_bytes(kwargs) self._timing.add("ws_send", _side=self._side, **kwargs) self._ws.sendMessage(payload, False) def _response_handle_allocated(self, msg): nameplate = msg["nameplate"] assert isinstance(nameplate, type("")), type(nameplate) self._A.rx_allocated(nameplate) def _response_handle_nameplates(self, msg): # we get list of {id: ID}, with maybe more attributes in the future nameplates = msg["nameplates"] assert isinstance(nameplates, list), type(nameplates) nids = set() for n in nameplates: assert isinstance(n, dict), type(n) nameplate_id = n["id"] assert isinstance(nameplate_id, type("")), type(nameplate_id) nids.add(nameplate_id) # deliver a set of nameplate ids self._L.rx_nameplates(nids) def _response_handle_ack(self, msg): pass def _response_handle_error(self, msg): # the server sent us a type=error. Most cases are due to our mistakes # (malformed protocol messages, sending things in the wrong order), # but it can also result from CrowdedError (more than two clients # using the same channel). 
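        # (Exposition, not part of the original module: _tx() above frames
        # each outbound message as a JSON object with a "type" field plus a
        # random 16-bit hex "id", used only to correlate sends with server
        # acks in the timing logs. A standalone sketch of that framing,
        # reusing this module's dict_to_bytes/bytes_to_hexstr imports:
        #
        #     def example_frame(mtype, **kwargs):
        #         kwargs["id"] = bytes_to_hexstr(os.urandom(2))
        #         kwargs["type"] = mtype
        #         return dict_to_bytes(kwargs)  # UTF-8 JSON bytes
        #
        # the result is what self._ws.sendMessage(payload, False) expects.)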
err = msg["error"] orig = msg["orig"] self._B.rx_error(err, orig) def _response_handle_welcome(self, msg): self._B.rx_welcome(msg["welcome"]) def _response_handle_claimed(self, msg): mailbox = msg["mailbox"] assert isinstance(mailbox, type("")), type(mailbox) self._N.rx_claimed(mailbox) def _response_handle_message(self, msg): side = msg["side"] phase = msg["phase"] assert isinstance(phase, type("")), type(phase) body = hexstr_to_bytes(msg["body"]) # bytes self._M.rx_message(side, phase, body) def _response_handle_released(self, msg): self._N.rx_released() def _response_handle_closed(self, msg): self._M.rx_closed() # record, message, payload, packet, bundle, ciphertext, plaintext magic-wormhole-0.12.0/src/wormhole/_rlcompleter.py000066400000000000000000000225261400712516500222000ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import traceback from sys import stderr from attr import attrib, attrs from six.moves import input from twisted.internet.defer import inlineCallbacks, returnValue from twisted.internet.threads import blockingCallFromThread, deferToThread from .errors import AlreadyInputNameplateError, KeyFormatError try: import readline except ImportError: readline = None errf = None # uncomment this to enable tab-completion debugging # import os ; errf = open("err", "w") if os.path.exists("err") else None def debug(*args, **kwargs): # pragma: no cover if errf: print(*args, file=errf, **kwargs) errf.flush() @attrs class CodeInputter(object): _input_helper = attrib() _reactor = attrib() def __attrs_post_init__(self): self.used_completion = False self._matches = None # once we've claimed the nameplate, we can't go back self._committed_nameplate = None # or string def bcft(self, f, *a, **kw): return blockingCallFromThread(self._reactor, f, *a, **kw) def completer(self, text, state): try: return self._wrapped_completer(text, state) except Exception as e: # completer exceptions are normally silently discarded, which # makes debugging challenging print("completer exception: %s" % e) traceback.print_exc() raise def _wrapped_completer(self, text, state): self.used_completion = True # if we get here, then readline must be active ct = readline.get_completion_type() if state == 0: debug("completer starting (%s) (state=0) (ct=%d)" % (text, ct)) self._matches = self._commit_and_build_completions(text) debug(" matches:", " ".join(["'%s'" % m for m in self._matches])) else: debug(" s%d t'%s' ct=%d" % (state, text, ct)) if state >= len(self._matches): debug(" returning None") return None debug(" returning '%s'" % self._matches[state]) return self._matches[state] def _commit_and_build_completions(self, text): ih = self._input_helper if "-" in text: got_nameplate = True nameplate, words = text.split("-", 1) else: got_nameplate = False nameplate = text # partial # 'text' is one of these categories: # "" or "12": complete on nameplates (all that match, maybe just one) # "123-": if we haven't already committed to a nameplate, commit and # wait for the wordlist. Then (either way) return the whole wordlist. # "123-supp": if we haven't already committed to a nameplate, commit # and wait for the wordlist. Then (either way) return all current # matches. if self._committed_nameplate: if not got_nameplate or nameplate != self._committed_nameplate: # they deleted past the commitment point: we can't use # this. For now, bail, but in the future let's find a # gentler way to encourage them to not do that. 
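            # (Exposition, not part of the original module: readline drives
            # completion by calling completer(text, state) with state=0,1,2...
            # and expects the state'th match, then None once the matches are
            # exhausted. A minimal standalone completer using that same
            # protocol, with a hypothetical word list:
            #
            #     def example_completer(text, state,
            #                           words=("gravity", "gremlin")):
            #         matches = [w for w in words if w.startswith(text)]
            #         return matches[state] if state < len(matches) else None
            #
            # wired up via readline.set_completer(example_completer) and
            # readline.parse_and_bind("tab: complete").)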
raise AlreadyInputNameplateError( "nameplate (%s-) already entered, cannot go back" % self._committed_nameplate) if not got_nameplate: # we're completing on nameplates: "" or "12" or "123" self.bcft(ih.refresh_nameplates) # results arrive later debug(" getting nameplates") completions = self.bcft(ih.get_nameplate_completions, nameplate) else: # "123-" or "123-supp" # time to commit to this nameplate, if they haven't already if not self._committed_nameplate: debug(" choose_nameplate(%s)" % nameplate) self.bcft(ih.choose_nameplate, nameplate) self._committed_nameplate = nameplate # Now we want to wait for the wordlist to be available. If # the user just typed "12-supp TAB", we'll claim "12" but # will need a server roundtrip to discover that "supportive" # is the only match. If we don't block, we'd return an empty # wordlist to readline (which will beep and show no # completions). *Then* when the user hits TAB again a moment # later (after the wordlist has arrived, but the user hasn't # modified the input line since the previous empty response), # readline would show one match but not complete anything. # In general we want to avoid returning empty lists to # readline. If the user hits TAB when typing in the nameplate # (before the sender has established one, or before we're # heard about it from the server), it can't be helped. But # for the rest of the code, a simple wait-for-wordlist will # improve the user experience. self.bcft(ih.when_wordlist_is_available) # blocks on CLAIM # and we're completing on words now debug(" getting words (%s)" % (words, )) completions = [ nameplate + "-" + c for c in self.bcft(ih.get_word_completions, words) ] # rlcompleter wants full strings return sorted(completions) def finish(self, text): if "-" not in text: raise KeyFormatError("incomplete wormhole code") nameplate, words = text.split("-", 1) if self._committed_nameplate: if nameplate != self._committed_nameplate: # they deleted past the commitment point: we can't use # this. For now, bail, but in the future let's find a # gentler way to encourage them to not do that. raise AlreadyInputNameplateError( "nameplate (%s-) already entered, cannot go back" % self._committed_nameplate) else: debug(" choose_nameplate(%s)" % nameplate) self.bcft(self._input_helper.choose_nameplate, nameplate) debug(" choose_words(%s)" % words) self.bcft(self._input_helper.choose_words, words) def _input_code_with_completion(prompt, input_helper, reactor): # reminder: this all occurs in a separate thread. All calls to input_helper # must go through blockingCallFromThread() c = CodeInputter(input_helper, reactor) if readline is not None: if readline.__doc__ and "libedit" in readline.__doc__: readline.parse_and_bind("bind ^I rl_complete") else: readline.parse_and_bind("tab: complete") readline.set_completer(c.completer) readline.set_completer_delims("") debug("==== readline-based completion is prepared") else: debug("==== unable to import readline, disabling completion") code = input(prompt) # Code is str(bytes) on py2, and str(unicode) on py3. We want unicode. if isinstance(code, bytes): code = code.decode("utf-8") c.finish(code) return c.used_completion def warn_readline(): # When our process receives a SIGINT, Twisted's SIGINT handler will # stop the reactor and wait for all threads to terminate before the # process exits. However, if we were waiting for # input_code_with_completion() when SIGINT happened, the readline # thread will be blocked waiting for something on stdin. 
Trick the # user into satisfying the blocking read so we can exit. print("\nCommand interrupted: please press Return to quit", file=stderr) # Other potential approaches to this problem: # * hard-terminate our process with os._exit(1), but make sure the # tty gets reset to a normal mode ("cooked"?) first, so that the # next shell command the user types is echoed correctly # * track down the thread (t.p.threadable.getThreadID from inside the # thread), get a cffi binding to pthread_kill, deliver SIGINT to it # * allocate a pty pair (pty.openpty), replace sys.stdin with the # slave, build a pty bridge that copies bytes (and other PTY # things) from the real stdin to the master, then close the slave # at shutdown, so readline sees EOF # * write tab-completion and basic editing (TTY raw mode, # backspace-is-erase) without readline, probably with curses or # twisted.conch.insults # * write a separate program to get codes (maybe just "wormhole # --internal-get-code"), run it as a subprocess, let it inherit # stdin/stdout, send it SIGINT when we receive SIGINT ourselves. It # needs an RPC mechanism (over some extra file descriptors) to ask # us to fetch the current nameplate_id list. # # Note that hard-terminating our process with os.kill(os.getpid(), # signal.SIGKILL), or SIGTERM, doesn't seem to work: the thread # doesn't see the signal, and we must still wait for stdin to make # readline finish. @inlineCallbacks def input_with_completion(prompt, input_helper, reactor): t = reactor.addSystemEventTrigger("before", "shutdown", warn_readline) # input_helper.refresh_nameplates() used_completion = yield deferToThread(_input_code_with_completion, prompt, input_helper, reactor) reactor.removeSystemEventTrigger(t) returnValue(used_completion) magic-wormhole-0.12.0/src/wormhole/_send.py000066400000000000000000000043611400712516500205760ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals from attr import attrib, attrs from attr.validators import instance_of, provides from automat import MethodicalMachine from zope.interface import implementer from . 
import _interfaces from ._key import derive_phase_key, encrypt_data @attrs @implementer(_interfaces.ISend) class Send(object): _side = attrib(validator=instance_of(type(u""))) _timing = attrib(validator=provides(_interfaces.ITiming)) m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def __attrs_post_init__(self): self._queue = [] def wire(self, mailbox): self._M = _interfaces.IMailbox(mailbox) @m.state(initial=True) def S0_no_key(self): pass # pragma: no cover @m.state(terminal=True) def S1_verified_key(self): pass # pragma: no cover # from Receive @m.input() def got_verified_key(self, key): pass # from Boss @m.input() def send(self, phase, plaintext): pass @m.output() def queue(self, phase, plaintext): assert isinstance(phase, type("")), type(phase) assert isinstance(plaintext, type(b"")), type(plaintext) self._queue.append((phase, plaintext)) @m.output() def record_key(self, key): self._key = key @m.output() def drain(self, key): del key for (phase, plaintext) in self._queue: self._encrypt_and_send(phase, plaintext) self._queue[:] = [] @m.output() def deliver(self, phase, plaintext): assert isinstance(phase, type("")), type(phase) assert isinstance(plaintext, type(b"")), type(plaintext) self._encrypt_and_send(phase, plaintext) def _encrypt_and_send(self, phase, plaintext): assert self._key data_key = derive_phase_key(self._key, self._side, phase) encrypted = encrypt_data(data_key, plaintext) self._M.add_message(phase, encrypted) S0_no_key.upon(send, enter=S0_no_key, outputs=[queue]) S0_no_key.upon( got_verified_key, enter=S1_verified_key, outputs=[record_key, drain]) S1_verified_key.upon(send, enter=S1_verified_key, outputs=[deliver]) magic-wormhole-0.12.0/src/wormhole/_terminator.py000066400000000000000000000072241400712516500220320ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals from automat import MethodicalMachine from zope.interface import implementer from . import _interfaces @implementer(_interfaces.ITerminator) class Terminator(object): m = MethodicalMachine() set_trace = getattr(m, "_setTrace", lambda self, f: None) # pragma: no cover def __init__(self): self._mood = None def wire(self, boss, rendezvous_connector, nameplate, mailbox, dilator): self._B = _interfaces.IBoss(boss) self._RC = _interfaces.IRendezvousConnector(rendezvous_connector) self._N = _interfaces.INameplate(nameplate) self._M = _interfaces.IMailbox(mailbox) self._D = _interfaces.IDilator(dilator) # 2*2-1+1 main states: # (nm, m, n, d): nameplate and/or mailbox is active # (o, ""): open (not-yet-closing), or trying to close # after closing the mailbox-server connection, we stop Dilation # S0 is special: we don't hang out in it # TODO: rename o to 0, "" to 1. "S1" is special/terminal # so S0nm/S0n/S0m/S0, S1nm/S1n/S1m/(S1) # We start in Snmo (non-closing). 
When both nameplate and mailbox are
    # done, and we're closing, then we stop the RendezvousConnector

    @m.state(initial=True)
    def Snmo(self):
        pass  # pragma: no cover

    @m.state()
    def Smo(self):
        pass  # pragma: no cover

    @m.state()
    def Sno(self):
        pass  # pragma: no cover

    @m.state()
    def S0o(self):
        pass  # pragma: no cover

    @m.state()
    def Snm(self):
        pass  # pragma: no cover

    @m.state()
    def Sm(self):
        pass  # pragma: no cover

    @m.state()
    def Sn(self):
        pass  # pragma: no cover

    # @m.state()
    # def S0(self): pass  # unused

    @m.state()
    def S_stoppingRC(self):
        pass  # pragma: no cover

    @m.state()
    def S_stoppingD(self):
        pass  # pragma: no cover

    @m.state(terminal=True)
    def S_stopped(self):
        pass  # pragma: no cover

    # from Boss
    @m.input()
    def close(self, mood):
        pass

    # from Nameplate
    @m.input()
    def nameplate_done(self):
        pass

    # from Mailbox
    @m.input()
    def mailbox_done(self):
        pass

    # from RendezvousConnector
    @m.input()
    def stoppedRC(self):
        pass

    @m.input()
    def stoppedD(self):
        pass

    @m.output()
    def close_nameplate(self, mood):
        self._N.close()  # ignores mood

    @m.output()
    def close_mailbox(self, mood):
        self._M.close(mood)

    @m.output()
    def ignore_mood_and_RC_stop(self, mood):
        self._RC.stop()

    @m.output()
    def RC_stop(self):
        self._RC.stop()

    @m.output()
    def stop_dilator(self):
        self._D.stop()

    @m.output()
    def B_closed(self):
        self._B.closed()

    Snmo.upon(mailbox_done, enter=Sno, outputs=[])
    Snmo.upon(close, enter=Snm, outputs=[close_nameplate, close_mailbox])
    Snmo.upon(nameplate_done, enter=Smo, outputs=[])

    Sno.upon(close, enter=Sn, outputs=[close_nameplate])
    Sno.upon(nameplate_done, enter=S0o, outputs=[])

    Smo.upon(close, enter=Sm, outputs=[close_mailbox])
    Smo.upon(mailbox_done, enter=S0o, outputs=[])

    Snm.upon(mailbox_done, enter=Sn, outputs=[])
    Snm.upon(nameplate_done, enter=Sm, outputs=[])

    Sn.upon(nameplate_done, enter=S_stoppingRC, outputs=[RC_stop])
    Sm.upon(mailbox_done, enter=S_stoppingRC, outputs=[RC_stop])
    S0o.upon(close, enter=S_stoppingRC, outputs=[ignore_mood_and_RC_stop])

    S_stoppingRC.upon(stoppedRC, enter=S_stoppingD, outputs=[stop_dilator])
    S_stoppingD.upon(stoppedD, enter=S_stopped, outputs=[B_closed])
magic-wormhole-0.12.0/src/wormhole/_version.py000066400000000000000000000007621400712516500213320ustar00rootroot00000000000000
# This file was generated by 'versioneer.py' (0.18) from
# revision-control system data, or from the parent directory name of an
# unpacked source archive. Distribution tarballs contain a pre-generated copy
# of this file.

import json

version_json = '''
{
 "date": "2020-04-04T16:38:01-0700",
 "dirty": false,
 "error": null,
 "full-revisionid": "52ee3ce1050213934e536b91e14356f17532081d",
 "version": "0.12.0"
}
'''  # END VERSION_JSON


def get_versions():
    return json.loads(version_json)
magic-wormhole-0.12.0/src/wormhole/_wordlist.py000066400000000000000000000257461400712516500215140ustar00rootroot00000000000000from __future__ import print_function, unicode_literals

import os
from binascii import unhexlify

from zope.interface import implementer

from ._interfaces import IWordlist

# The PGP Word List, which maps bytes to phonetically-distinct words. There
# are two lists, even and odd, and encodings should alternate between them to
# detect dropped words.
https://en.wikipedia.org/wiki/PGP_Words # Thanks to Warren Guy for transcribing them: # https://github.com/warrenguy/javascript-pgp-word-list raw_words = { '00': ['aardvark', 'adroitness'], '01': ['absurd', 'adviser'], '02': ['accrue', 'aftermath'], '03': ['acme', 'aggregate'], '04': ['adrift', 'alkali'], '05': ['adult', 'almighty'], '06': ['afflict', 'amulet'], '07': ['ahead', 'amusement'], '08': ['aimless', 'antenna'], '09': ['Algol', 'applicant'], '0A': ['allow', 'Apollo'], '0B': ['alone', 'armistice'], '0C': ['ammo', 'article'], '0D': ['ancient', 'asteroid'], '0E': ['apple', 'Atlantic'], '0F': ['artist', 'atmosphere'], '10': ['assume', 'autopsy'], '11': ['Athens', 'Babylon'], '12': ['atlas', 'backwater'], '13': ['Aztec', 'barbecue'], '14': ['baboon', 'belowground'], '15': ['backfield', 'bifocals'], '16': ['backward', 'bodyguard'], '17': ['banjo', 'bookseller'], '18': ['beaming', 'borderline'], '19': ['bedlamp', 'bottomless'], '1A': ['beehive', 'Bradbury'], '1B': ['beeswax', 'bravado'], '1C': ['befriend', 'Brazilian'], '1D': ['Belfast', 'breakaway'], '1E': ['berserk', 'Burlington'], '1F': ['billiard', 'businessman'], '20': ['bison', 'butterfat'], '21': ['blackjack', 'Camelot'], '22': ['blockade', 'candidate'], '23': ['blowtorch', 'cannonball'], '24': ['bluebird', 'Capricorn'], '25': ['bombast', 'caravan'], '26': ['bookshelf', 'caretaker'], '27': ['brackish', 'celebrate'], '28': ['breadline', 'cellulose'], '29': ['breakup', 'certify'], '2A': ['brickyard', 'chambermaid'], '2B': ['briefcase', 'Cherokee'], '2C': ['Burbank', 'Chicago'], '2D': ['button', 'clergyman'], '2E': ['buzzard', 'coherence'], '2F': ['cement', 'combustion'], '30': ['chairlift', 'commando'], '31': ['chatter', 'company'], '32': ['checkup', 'component'], '33': ['chisel', 'concurrent'], '34': ['choking', 'confidence'], '35': ['chopper', 'conformist'], '36': ['Christmas', 'congregate'], '37': ['clamshell', 'consensus'], '38': ['classic', 'consulting'], '39': ['classroom', 'corporate'], '3A': ['cleanup', 'corrosion'], '3B': ['clockwork', 'councilman'], '3C': ['cobra', 'crossover'], '3D': ['commence', 'crucifix'], '3E': ['concert', 'cumbersome'], '3F': ['cowbell', 'customer'], '40': ['crackdown', 'Dakota'], '41': ['cranky', 'decadence'], '42': ['crowfoot', 'December'], '43': ['crucial', 'decimal'], '44': ['crumpled', 'designing'], '45': ['crusade', 'detector'], '46': ['cubic', 'detergent'], '47': ['dashboard', 'determine'], '48': ['deadbolt', 'dictator'], '49': ['deckhand', 'dinosaur'], '4A': ['dogsled', 'direction'], '4B': ['dragnet', 'disable'], '4C': ['drainage', 'disbelief'], '4D': ['dreadful', 'disruptive'], '4E': ['drifter', 'distortion'], '4F': ['dropper', 'document'], '50': ['drumbeat', 'embezzle'], '51': ['drunken', 'enchanting'], '52': ['Dupont', 'enrollment'], '53': ['dwelling', 'enterprise'], '54': ['eating', 'equation'], '55': ['edict', 'equipment'], '56': ['egghead', 'escapade'], '57': ['eightball', 'Eskimo'], '58': ['endorse', 'everyday'], '59': ['endow', 'examine'], '5A': ['enlist', 'existence'], '5B': ['erase', 'exodus'], '5C': ['escape', 'fascinate'], '5D': ['exceed', 'filament'], '5E': ['eyeglass', 'finicky'], '5F': ['eyetooth', 'forever'], '60': ['facial', 'fortitude'], '61': ['fallout', 'frequency'], '62': ['flagpole', 'gadgetry'], '63': ['flatfoot', 'Galveston'], '64': ['flytrap', 'getaway'], '65': ['fracture', 'glossary'], '66': ['framework', 'gossamer'], '67': ['freedom', 'graduate'], '68': ['frighten', 'gravity'], '69': ['gazelle', 'guitarist'], '6A': ['Geiger', 'hamburger'], '6B': ['glitter', 
'Hamilton'], '6C': ['glucose', 'handiwork'], '6D': ['goggles', 'hazardous'], '6E': ['goldfish', 'headwaters'], '6F': ['gremlin', 'hemisphere'], '70': ['guidance', 'hesitate'], '71': ['hamlet', 'hideaway'], '72': ['highchair', 'holiness'], '73': ['hockey', 'hurricane'], '74': ['indoors', 'hydraulic'], '75': ['indulge', 'impartial'], '76': ['inverse', 'impetus'], '77': ['involve', 'inception'], '78': ['island', 'indigo'], '79': ['jawbone', 'inertia'], '7A': ['keyboard', 'infancy'], '7B': ['kickoff', 'inferno'], '7C': ['kiwi', 'informant'], '7D': ['klaxon', 'insincere'], '7E': ['locale', 'insurgent'], '7F': ['lockup', 'integrate'], '80': ['merit', 'intention'], '81': ['minnow', 'inventive'], '82': ['miser', 'Istanbul'], '83': ['Mohawk', 'Jamaica'], '84': ['mural', 'Jupiter'], '85': ['music', 'leprosy'], '86': ['necklace', 'letterhead'], '87': ['Neptune', 'liberty'], '88': ['newborn', 'maritime'], '89': ['nightbird', 'matchmaker'], '8A': ['Oakland', 'maverick'], '8B': ['obtuse', 'Medusa'], '8C': ['offload', 'megaton'], '8D': ['optic', 'microscope'], '8E': ['orca', 'microwave'], '8F': ['payday', 'midsummer'], '90': ['peachy', 'millionaire'], '91': ['pheasant', 'miracle'], '92': ['physique', 'misnomer'], '93': ['playhouse', 'molasses'], '94': ['Pluto', 'molecule'], '95': ['preclude', 'Montana'], '96': ['prefer', 'monument'], '97': ['preshrunk', 'mosquito'], '98': ['printer', 'narrative'], '99': ['prowler', 'nebula'], '9A': ['pupil', 'newsletter'], '9B': ['puppy', 'Norwegian'], '9C': ['python', 'October'], '9D': ['quadrant', 'Ohio'], '9E': ['quiver', 'onlooker'], '9F': ['quota', 'opulent'], 'A0': ['ragtime', 'Orlando'], 'A1': ['ratchet', 'outfielder'], 'A2': ['rebirth', 'Pacific'], 'A3': ['reform', 'pandemic'], 'A4': ['regain', 'Pandora'], 'A5': ['reindeer', 'paperweight'], 'A6': ['rematch', 'paragon'], 'A7': ['repay', 'paragraph'], 'A8': ['retouch', 'paramount'], 'A9': ['revenge', 'passenger'], 'AA': ['reward', 'pedigree'], 'AB': ['rhythm', 'Pegasus'], 'AC': ['ribcage', 'penetrate'], 'AD': ['ringbolt', 'perceptive'], 'AE': ['robust', 'performance'], 'AF': ['rocker', 'pharmacy'], 'B0': ['ruffled', 'phonetic'], 'B1': ['sailboat', 'photograph'], 'B2': ['sawdust', 'pioneer'], 'B3': ['scallion', 'pocketful'], 'B4': ['scenic', 'politeness'], 'B5': ['scorecard', 'positive'], 'B6': ['Scotland', 'potato'], 'B7': ['seabird', 'processor'], 'B8': ['select', 'provincial'], 'B9': ['sentence', 'proximate'], 'BA': ['shadow', 'puberty'], 'BB': ['shamrock', 'publisher'], 'BC': ['showgirl', 'pyramid'], 'BD': ['skullcap', 'quantity'], 'BE': ['skydive', 'racketeer'], 'BF': ['slingshot', 'rebellion'], 'C0': ['slowdown', 'recipe'], 'C1': ['snapline', 'recover'], 'C2': ['snapshot', 'repellent'], 'C3': ['snowcap', 'replica'], 'C4': ['snowslide', 'reproduce'], 'C5': ['solo', 'resistor'], 'C6': ['southward', 'responsive'], 'C7': ['soybean', 'retraction'], 'C8': ['spaniel', 'retrieval'], 'C9': ['spearhead', 'retrospect'], 'CA': ['spellbind', 'revenue'], 'CB': ['spheroid', 'revival'], 'CC': ['spigot', 'revolver'], 'CD': ['spindle', 'sandalwood'], 'CE': ['spyglass', 'sardonic'], 'CF': ['stagehand', 'Saturday'], 'D0': ['stagnate', 'savagery'], 'D1': ['stairway', 'scavenger'], 'D2': ['standard', 'sensation'], 'D3': ['stapler', 'sociable'], 'D4': ['steamship', 'souvenir'], 'D5': ['sterling', 'specialist'], 'D6': ['stockman', 'speculate'], 'D7': ['stopwatch', 'stethoscope'], 'D8': ['stormy', 'stupendous'], 'D9': ['sugar', 'supportive'], 'DA': ['surmount', 'surrender'], 'DB': ['suspense', 'suspicious'], 'DC': ['sweatband', 
'sympathy'], 'DD': ['swelter', 'tambourine'], 'DE': ['tactics', 'telephone'], 'DF': ['talon', 'therapist'], 'E0': ['tapeworm', 'tobacco'], 'E1': ['tempest', 'tolerance'], 'E2': ['tiger', 'tomorrow'], 'E3': ['tissue', 'torpedo'], 'E4': ['tonic', 'tradition'], 'E5': ['topmost', 'travesty'], 'E6': ['tracker', 'trombonist'], 'E7': ['transit', 'truncated'], 'E8': ['trauma', 'typewriter'], 'E9': ['treadmill', 'ultimate'], 'EA': ['Trojan', 'undaunted'], 'EB': ['trouble', 'underfoot'], 'EC': ['tumor', 'unicorn'], 'ED': ['tunnel', 'unify'], 'EE': ['tycoon', 'universe'], 'EF': ['uncut', 'unravel'], 'F0': ['unearth', 'upcoming'], 'F1': ['unwind', 'vacancy'], 'F2': ['uproot', 'vagabond'], 'F3': ['upset', 'vertigo'], 'F4': ['upshot', 'Virginia'], 'F5': ['vapor', 'visitor'], 'F6': ['village', 'vocalist'], 'F7': ['virus', 'voyager'], 'F8': ['Vulcan', 'warranty'], 'F9': ['waffle', 'Waterloo'], 'FA': ['wallet', 'whimsical'], 'FB': ['watchword', 'Wichita'], 'FC': ['wayside', 'Wilmington'], 'FD': ['willow', 'Wyoming'], 'FE': ['woodlark', 'yesteryear'], 'FF': ['Zulu', 'Yucatan'] } byte_to_even_word = dict([(unhexlify(k.encode("ascii")), both_words[0]) for k, both_words in raw_words.items()]) byte_to_odd_word = dict([(unhexlify(k.encode("ascii")), both_words[1]) for k, both_words in raw_words.items()]) even_words_lowercase, odd_words_lowercase = set(), set() for k, both_words in raw_words.items(): even_word, odd_word = both_words even_words_lowercase.add(even_word.lower()) odd_words_lowercase.add(odd_word.lower()) @implementer(IWordlist) class PGPWordList(object): def get_completions(self, prefix, num_words=2): # start with the odd words count = prefix.count("-") if count % 2 == 0: words = odd_words_lowercase else: words = even_words_lowercase last_partial_word = prefix.split("-")[-1] lp = len(last_partial_word) completions = set() for word in words: if word.startswith(last_partial_word): if lp == 0: suffix = prefix + word else: suffix = prefix[:-lp] + word # append a hyphen if we expect more words if count + 1 < num_words: suffix += "-" completions.add(suffix) return completions def choose_words(self, length): words = [] for i in range(length): # we start with an "odd word" if i % 2 == 0: words.append(byte_to_odd_word[os.urandom(1)].lower()) else: words.append(byte_to_even_word[os.urandom(1)].lower()) return "-".join(words) magic-wormhole-0.12.0/src/wormhole/cli/000077500000000000000000000000001400712516500176775ustar00rootroot00000000000000magic-wormhole-0.12.0/src/wormhole/cli/__init__.py000066400000000000000000000000001400712516500217760ustar00rootroot00000000000000magic-wormhole-0.12.0/src/wormhole/cli/cli.py000066400000000000000000000236131400712516500210250ustar00rootroot00000000000000from __future__ import print_function import os import time start = time.time() from sys import stderr, stdout # noqa: E402 from textwrap import dedent, fill # noqa: E402 import click # noqa: E402 import six # noqa: E402 from twisted.internet.defer import inlineCallbacks, maybeDeferred # noqa: E402 from twisted.internet.task import react # noqa: E402 from twisted.python.failure import Failure # noqa: E402 from . import public_relay # noqa: E402 from .. import __version__ # noqa: E402 from ..errors import (KeyFormatError, NoTorError, # noqa: E402 ServerConnectionError, TransferError, UnsendableFileError, WelcomeError, WrongPasswordError) from ..timing import DebugTiming # noqa: E402 top_import_finish = time.time() class Config(object): """ Union of config options that we pass down to (sub) commands. 
""" def __init__(self): # This only holds attributes which are *not* set by CLI arguments. # Everything else comes from Click decorators, so we can be sure # we're exercising the defaults. self.timing = DebugTiming() self.cwd = os.getcwd() self.stdout = stdout self.stderr = stderr self.tor = False # XXX? def _compose(*decorators): def decorate(f): for d in reversed(decorators): f = d(f) return f return decorate ALIASES = { "tx": "send", "rx": "receive", "recieve": "receive", "recv": "receive", } class AliasedGroup(click.Group): def get_command(self, ctx, cmd_name): cmd_name = ALIASES.get(cmd_name, cmd_name) return click.Group.get_command(self, ctx, cmd_name) # top-level command ("wormhole ...") @click.group(cls=AliasedGroup) @click.option("--appid", default=None, metavar="APPID", help="appid to use") @click.option( "--relay-url", default=public_relay.RENDEZVOUS_RELAY, envvar='WORMHOLE_RELAY_URL', metavar="URL", help="rendezvous relay to use", ) @click.option( "--transit-helper", default=public_relay.TRANSIT_RELAY, envvar='WORMHOLE_TRANSIT_HELPER', metavar="tcp:HOST:PORT", help="transit relay to use", ) @click.option( "--dump-timing", type=type(u""), # TODO: hide from --help output default=None, metavar="FILE.json", help="(debug) write timing data to file", ) @click.version_option( message="magic-wormhole %(version)s", version=__version__, ) @click.pass_context def wormhole(context, dump_timing, transit_helper, relay_url, appid): """ Create a Magic Wormhole and communicate through it. Wormholes are created by speaking the same magic CODE in two different places at the same time. Wormholes are secure against anyone who doesn't use the same code. """ context.obj = cfg = Config() cfg.appid = appid cfg.relay_url = relay_url cfg.transit_helper = transit_helper cfg.dump_timing = dump_timing @inlineCallbacks def _dispatch_command(reactor, cfg, command): """ Internal helper. This calls the given command (a no-argument callable) with the Config instance in cfg and interprets any errors for the user. """ cfg.timing.add("command dispatch") cfg.timing.add( "import", when=start, which="top").finish(when=top_import_finish) try: yield maybeDeferred(command) except (WrongPasswordError, NoTorError) as e: msg = fill("ERROR: " + dedent(e.__doc__)) print(msg, file=cfg.stderr) raise SystemExit(1) except (WelcomeError, UnsendableFileError, KeyFormatError) as e: msg = fill("ERROR: " + dedent(e.__doc__)) print(msg, file=cfg.stderr) print(six.u(""), file=cfg.stderr) print(six.text_type(e), file=cfg.stderr) raise SystemExit(1) except TransferError as e: print(u"TransferError: %s" % six.text_type(e), file=cfg.stderr) raise SystemExit(1) except ServerConnectionError as e: msg = fill("ERROR: " + dedent(e.__doc__)) + "\n" msg += "(relay URL was %s)\n" % e.url msg += six.text_type(e) print(msg, file=cfg.stderr) raise SystemExit(1) except Exception as e: # this prints a proper traceback, whereas # traceback.print_exc() just prints a TB to the "yield" # line above ... 
Failure().printTraceback(file=cfg.stderr) print(u"ERROR:", six.text_type(e), file=cfg.stderr) raise SystemExit(1) cfg.timing.add("exit") if cfg.dump_timing: cfg.timing.write(cfg.dump_timing, cfg.stderr) CommonArgs = _compose( click.option( "-0", "zeromode", default=False, is_flag=True, help="enable no-code anything-goes mode", ), click.option( "-c", "--code-length", default=2, metavar="NUMWORDS", help="length of code (in bytes/words)", ), click.option( "-v", "--verify", is_flag=True, default=False, help="display verification string (and wait for approval)", ), click.option( "--hide-progress", is_flag=True, default=False, help="suppress progress-bar display", ), click.option( "--listen/--no-listen", default=True, help="(debug) don't open a listening socket for Transit", ), ) TorArgs = _compose( click.option( "--tor", is_flag=True, default=False, help="use Tor when connecting", ), click.option( "--launch-tor", is_flag=True, default=False, help="launch Tor, rather than use existing control/socks port", ), click.option( "--tor-control-port", default=None, metavar="ENDPOINT", help="endpoint descriptor for Tor control port", ), ) @wormhole.command() @click.pass_context def help(context, **kwargs): print(context.find_root().get_help()) # wormhole send (or "wormhole tx") @wormhole.command() @CommonArgs @TorArgs @click.option( "--code", metavar="CODE", help="human-generated code phrase", ) @click.option( "--text", default=None, metavar="MESSAGE", help=("text message to send, instead of a file." " Use '-' to read from stdin."), ) @click.option( "--ignore-unsendable-files", default=False, is_flag=True, help="Don't raise an error if a file can't be read.") @click.argument("what", required=False, type=click.Path(path_type=type(u""))) @click.pass_obj def send(cfg, **kwargs): """Send a text message, file, or directory""" for name, value in kwargs.items(): setattr(cfg, name, value) with cfg.timing.add("import", which="cmd_send"): from . import cmd_send return go(cmd_send.send, cfg) # this intermediate function can be mocked by tests that need to build a # Config object def go(f, cfg): # note: react() does not return return react(_dispatch_command, (cfg, lambda: f(cfg))) # wormhole receive (or "wormhole rx") @wormhole.command() @CommonArgs @TorArgs @click.option( "--only-text", "-t", is_flag=True, help="refuse file transfers, only accept text transfers", ) @click.option( "--accept-file", is_flag=True, help="accept file transfer without asking for confirmation", ) @click.option( "--output-file", "-o", metavar="FILENAME|DIRNAME", help=("The file or directory to create, overriding the name suggested" " by the sender."), ) @click.argument( "code", nargs=-1, default=None, # help=("The magic-wormhole code, from the sender. If omitted, the" # " program will ask for it, using tab-completion."), ) @click.pass_obj def receive(cfg, code, **kwargs): """ Receive a text message, file, or directory (from 'wormhole send') """ for name, value in kwargs.items(): setattr(cfg, name, value) with cfg.timing.add("import", which="cmd_receive"): from . 
import cmd_receive if len(code) == 1: cfg.code = code[0] elif len(code) > 1: print("Pass either no code or just one code; you passed" " {}: {}".format(len(code), ', '.join(code))) raise SystemExit(1) else: cfg.code = None return go(cmd_receive.receive, cfg) @wormhole.group() def ssh(): """ Facilitate sending/receiving SSH public keys """ @ssh.command(name="invite") @click.option( "-c", "--code-length", default=2, metavar="NUMWORDS", help="length of code (in bytes/words)", ) @click.option( "--user", "-u", default=None, metavar="USER", help="Add to USER's ~/.ssh/authorized_keys", ) @TorArgs @click.pass_context def ssh_invite(ctx, code_length, user, **kwargs): """ Add a public-key to a ~/.ssh/authorized_keys file """ for name, value in kwargs.items(): setattr(ctx.obj, name, value) from . import cmd_ssh ctx.obj.code_length = code_length ctx.obj.ssh_user = user return go(cmd_ssh.invite, ctx.obj) @ssh.command(name="accept") @click.argument( "code", nargs=1, required=True, ) @click.option( "--key-file", "-F", default=None, type=click.Path(exists=True), ) @click.option( "--yes", "-y", is_flag=True, help="Skip confirmation prompt to send key", ) @TorArgs @click.pass_obj def ssh_accept(cfg, code, key_file, yes, **kwargs): """ Send your SSH public-key In response to a 'wormhole ssh invite' this will send public-key you specify (if there's only one in ~/.ssh/* that will be sent). """ for name, value in kwargs.items(): setattr(cfg, name, value) from . import cmd_ssh kind, keyid, pubkey = cmd_ssh.find_public_key(key_file) print("Sending public key type='{}' keyid='{}'".format(kind, keyid)) if yes is not True: click.confirm( "Really send public key '{}' ?".format(keyid), abort=True) cfg.public_key = (kind, keyid, pubkey) cfg.code = code return go(cmd_ssh.accept, cfg) magic-wormhole-0.12.0/src/wormhole/cli/cmd_receive.py000066400000000000000000000430001400712516500225130ustar00rootroot00000000000000from __future__ import print_function import hashlib import os import shutil import sys import tempfile import zipfile import six from humanize import naturalsize from tqdm import tqdm from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks, returnValue from twisted.python import log from wormhole import __version__, create, input_with_completion from ..errors import TransferError from ..transit import TransitReceiver from ..util import (bytes_to_dict, bytes_to_hexstr, dict_to_bytes, estimate_free_space) from .welcome import handle_welcome APPID = u"lothar.com/wormhole/text-or-file-xfer" KEY_TIMER = float(os.environ.get("_MAGIC_WORMHOLE_TEST_KEY_TIMER", 1.0)) VERIFY_TIMER = float(os.environ.get("_MAGIC_WORMHOLE_TEST_VERIFY_TIMER", 1.0)) class RespondError(Exception): def __init__(self, response): self.response = response class TransferRejectedError(RespondError): def __init__(self): RespondError.__init__(self, "transfer rejected") def receive(args, reactor=reactor, _debug_stash_wormhole=None): """I implement 'wormhole receive'. 
I return a Deferred that fires with None (for success), or signals one of the following errors: * WrongPasswordError: the two sides didn't use matching passwords * Timeout: something didn't happen fast enough for our tastes * TransferError: the sender rejected the transfer: verifier mismatch * any other error: something unexpected happened """ r = Receiver(args, reactor) d = r.go() if _debug_stash_wormhole is not None: _debug_stash_wormhole.append(r._w) return d class Receiver: def __init__(self, args, reactor=reactor): assert isinstance(args.relay_url, type(u"")) self.args = args self._reactor = reactor self._tor = None self._transit_receiver = None def _msg(self, *args, **kwargs): print(*args, file=self.args.stderr, **kwargs) @inlineCallbacks def go(self): if self.args.tor: with self.args.timing.add("import", which="tor_manager"): from ..tor_manager import get_tor # For now, block everything until Tor has started. Soon: launch # tor in parallel with everything else, make sure the Tor object # can lazy-provide an endpoint, and overlap the startup process # with the user handing off the wormhole code self._tor = yield get_tor( self._reactor, self.args.launch_tor, self.args.tor_control_port, timing=self.args.timing) w = create( self.args.appid or APPID, self.args.relay_url, self._reactor, tor=self._tor, timing=self.args.timing) self._w = w # so tests can wait on events too # I wanted to do this instead: # # try: # yield self._go(w, tor) # finally: # yield w.close() # # but when _go had a UsageError, the stacktrace was always displayed # as coming from the "yield self._go" line, which wasn't very useful # for tracking it down. d = self._go(w) # if we succeed, we should close and return the w.close results # (which might be an error) @inlineCallbacks def _good(res): yield w.close() # wait for ack returnValue(res) # if we raise an error, we should close and then return the original # error (the close might give us an error, but it isn't as important # as the original one) @inlineCallbacks def _bad(f): try: yield w.close() # might be an error too except Exception: pass returnValue(f) d.addCallbacks(_good, _bad) yield d @inlineCallbacks def _go(self, w): welcome = yield w.get_welcome() handle_welcome(welcome, self.args.relay_url, __version__, self.args.stderr) yield self._handle_code(w) def on_slow_key(): print(u"Waiting for sender...", file=self.args.stderr) notify = self._reactor.callLater(KEY_TIMER, on_slow_key) try: # We wait here until we connect to the server and see the senders # PAKE message. If we used set_code() in the "human-selected # offline codes" mode, then the sender might not have even # started yet, so we might be sitting here for a while. Because # of that possibility, it's probably not appropriate to give up # automatically after some timeout. The user can express their # impatience by quitting the program with control-C. yield w.get_unverified_key() finally: if not notify.called: notify.cancel() def on_slow_verification(): print( u"Key established, waiting for confirmation...", file=self.args.stderr) notify = self._reactor.callLater(VERIFY_TIMER, on_slow_verification) try: # We wait here until we've seen their VERSION message (which they # send after seeing our PAKE message, and has the side-effect of # verifying that we both share the same key). 
There is a # round-trip between these two events, and we could experience a # significant delay here if: # * the relay server is being restarted # * the network is very slow # * the sender is very slow # * the sender has quit (in which case we may wait forever) # It would be reasonable to give up after waiting here for too # long. verifier_bytes = yield w.get_verifier() finally: if not notify.called: notify.cancel() self._show_verifier(verifier_bytes) want_offer = True while True: them_d = yield self._get_data(w) # print("GOT", them_d) recognized = False if u"transit" in them_d: recognized = True yield self._parse_transit(them_d[u"transit"], w) if u"offer" in them_d: recognized = True if not want_offer: raise TransferError("duplicate offer") want_offer = False try: yield self._parse_offer(them_d[u"offer"], w) except RespondError as r: self._send_data({"error": r.response}, w) raise TransferError(r.response) returnValue(None) if not recognized: log.msg("unrecognized message %r" % (them_d, )) def _send_data(self, data, w): data_bytes = dict_to_bytes(data) w.send_message(data_bytes) @inlineCallbacks def _get_data(self, w): # this may raise WrongPasswordError them_bytes = yield w.get_message() them_d = bytes_to_dict(them_bytes) if "error" in them_d: raise TransferError(them_d["error"]) returnValue(them_d) @inlineCallbacks def _handle_code(self, w): code = self.args.code if self.args.zeromode: assert not code code = u"0-" if code: w.set_code(code) else: prompt = "Enter receive wormhole code: " used_completion = yield input_with_completion( prompt, w.input_code(), self._reactor) if not used_completion: print( " (note: you can use <Tab> to complete words)", file=self.args.stderr) yield w.get_code() def _show_verifier(self, verifier_bytes): verifier_hex = bytes_to_hexstr(verifier_bytes) if self.args.verify: self._msg(u"Verifier %s." % verifier_hex) @inlineCallbacks def _parse_transit(self, sender_transit, w): if self._transit_receiver: # TODO: accept multiple messages, add the additional hints to the # existing TransitReceiver return yield self._build_transit(w, sender_transit) @inlineCallbacks def _build_transit(self, w, sender_transit): tr = TransitReceiver( self.args.transit_helper, no_listen=(not self.args.listen), tor=self._tor, reactor=self._reactor, timing=self.args.timing) self._transit_receiver = tr # When I made it possible to override APPID with a CLI argument # (issue #113), I forgot to also change this w.derive_key() (issue # #339). We're stuck with it now. Use a local constant to make this # clear.
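# For example (purely illustrative), a receiver started with
# "--appid example.com/custom" still derives its transit key as
#   w.derive_key(u"lothar.com/wormhole/text-or-file-xfer/transit-key",
#                tr.TRANSIT_KEY_LENGTH)
# which is what keeps it compatible with the matching code in
# cmd_send.py.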
BUG339_APPID = u"lothar.com/wormhole/text-or-file-xfer" transit_key = w.derive_key(BUG339_APPID + u"/transit-key", tr.TRANSIT_KEY_LENGTH) tr.set_transit_key(transit_key) tr.add_connection_hints(sender_transit.get("hints-v1", [])) receiver_abilities = tr.get_connection_abilities() receiver_hints = yield tr.get_connection_hints() receiver_transit = { "abilities-v1": receiver_abilities, "hints-v1": receiver_hints, } self._send_data({u"transit": receiver_transit}, w) # TODO: send more hints as the TransitReceiver produces them @inlineCallbacks def _parse_offer(self, them_d, w): if "message" in them_d: self._handle_text(them_d, w) returnValue(None) # transit will be created by this point, but not connected if "file" in them_d: f = self._handle_file(them_d) self._send_permission(w) rp = yield self._establish_transit() datahash = yield self._transfer_data(rp, f) self._write_file(f) yield self._close_transit(rp, datahash) elif "directory" in them_d: f = self._handle_directory(them_d) self._send_permission(w) rp = yield self._establish_transit() datahash = yield self._transfer_data(rp, f) self._write_directory(f) yield self._close_transit(rp, datahash) else: self._msg(u"I don't know what they're offering\n") self._msg(u"Offer details: %r" % (them_d, )) raise RespondError("unknown offer type") def _handle_text(self, them_d, w): # we're receiving a text message self.args.timing.add("print") print(them_d["message"], file=self.args.stdout) self._send_data({"answer": {"message_ack": "ok"}}, w) def _handle_file(self, them_d): file_data = them_d["file"] self.abs_destname = self._decide_destname("file", file_data["filename"]) self.xfersize = file_data["filesize"] free = estimate_free_space(self.abs_destname) if free is not None and free < self.xfersize: self._msg(u"Error: insufficient free space (%sB) for file (%sB)" % (free, self.xfersize)) raise TransferRejectedError() self._msg(u"Receiving file (%s) into: %s" % (naturalsize(self.xfersize), os.path.basename(self.abs_destname))) self._ask_permission() tmp_destname = self.abs_destname + ".tmp" return open(tmp_destname, "wb") def _handle_directory(self, them_d): file_data = them_d["directory"] zipmode = file_data["mode"] if zipmode != "zipfile/deflated": self._msg(u"Error: unknown directory-transfer mode '%s'" % (zipmode, )) raise RespondError("unknown mode") self.abs_destname = self._decide_destname("directory", file_data["dirname"]) self.xfersize = file_data["zipsize"] free = estimate_free_space(self.abs_destname) if free is not None and free < file_data["numbytes"]: self._msg( u"Error: insufficient free space (%sB) for directory (%sB)" % (free, file_data["numbytes"])) raise TransferRejectedError() self._msg(u"Receiving directory (%s) into: %s/" % (naturalsize(self.xfersize), os.path.basename(self.abs_destname))) self._msg(u"%d files, %s (uncompressed)" % (file_data["numfiles"], naturalsize(file_data["numbytes"]))) self._ask_permission() f = tempfile.SpooledTemporaryFile() # workaround for https://bugs.python.org/issue26175 (STF doesn't # fully implement IOBase abstract class), which breaks the new # zipfile in py3.7.0 that expects .seekable if not hasattr(f, "seekable"): # AFAICT all the filetypes that STF wraps can seek f.seekable = lambda: True return f def _decide_destname(self, mode, destname): # the basename() is intended to protect us against # "~/.ssh/authorized_keys" and other attacks destname = os.path.basename(destname) if self.args.output_file: destname = self.args.output_file # override abs_destname = 
os.path.abspath(os.path.join(self.args.cwd, destname)) # get confirmation from the user before writing to the local directory if os.path.exists(abs_destname): if self.args.output_file: # overwrite is intentional self._msg(u"Overwriting '%s'" % destname) if self.args.accept_file: self._remove_existing(abs_destname) else: self._msg( u"Error: refusing to overwrite existing '%s'" % destname) raise TransferRejectedError() return abs_destname def _remove_existing(self, path): if os.path.isfile(path): os.remove(path) if os.path.isdir(path): shutil.rmtree(path) def _ask_permission(self): with self.args.timing.add("permission", waiting="user") as t: while True and not self.args.accept_file: ok = six.moves.input("ok? (Y/n): ") if ok.lower().startswith("y") or len(ok) == 0: if os.path.exists(self.abs_destname): self._remove_existing(self.abs_destname) break print(u"transfer rejected", file=sys.stderr) t.detail(answer="no") raise TransferRejectedError() t.detail(answer="yes") def _send_permission(self, w): self._send_data({"answer": {"file_ack": "ok"}}, w) @inlineCallbacks def _establish_transit(self): record_pipe = yield self._transit_receiver.connect() self.args.timing.add("transit connected") returnValue(record_pipe) @inlineCallbacks def _transfer_data(self, record_pipe, f): # now receive the rest of the owl self._msg(u"Receiving (%s).." % record_pipe.describe()) with self.args.timing.add("rx file"): progress = tqdm( file=self.args.stderr, disable=self.args.hide_progress, unit="B", unit_scale=True, total=self.xfersize) hasher = hashlib.sha256() with progress: received = yield record_pipe.writeToFile( f, self.xfersize, progress.update, hasher.update) datahash = hasher.digest() # except TransitError if received < self.xfersize: self._msg() self._msg(u"Connection dropped before full file received") self._msg(u"got %d bytes, wanted %d" % (received, self.xfersize)) raise TransferError("Connection dropped before full file received") assert received == self.xfersize returnValue(datahash) def _write_file(self, f): tmp_name = f.name f.close() os.rename(tmp_name, self.abs_destname) self._msg(u"Received file written to %s" % os.path.basename( self.abs_destname)) def _extract_file(self, zf, info, extract_dir): """ the zipfile module does not restore file permissions so we'll do it manually """ out_path = os.path.join(extract_dir, info.filename) out_path = os.path.abspath(out_path) if not out_path.startswith(extract_dir): raise ValueError( "malicious zipfile, %s outside of extract_dir %s" % (info.filename, extract_dir)) zf.extract(info.filename, path=extract_dir) # not sure why zipfiles store the perms 16 bits away but they do perm = info.external_attr >> 16 os.chmod(out_path, perm) def _write_directory(self, f): self._msg(u"Unpacking zipfile..") with self.args.timing.add("unpack zip"): with zipfile.ZipFile(f, "r", zipfile.ZIP_DEFLATED) as zf: for info in zf.infolist(): self._extract_file(zf, info, self.abs_destname) self._msg(u"Received files written to %s/" % os.path.basename( self.abs_destname)) f.close() @inlineCallbacks def _close_transit(self, record_pipe, datahash): datahash_hex = bytes_to_hexstr(datahash) ack = {u"ack": u"ok", u"sha256": datahash_hex} ack_bytes = dict_to_bytes(ack) with self.args.timing.add("send ack"): yield record_pipe.send_record(ack_bytes) yield record_pipe.close() magic-wormhole-0.12.0/src/wormhole/cli/cmd_send.py000066400000000000000000000463471400712516500220430ustar00rootroot00000000000000from __future__ import print_function import hashlib import os import sys import stat 
import tempfile import zipfile import six from humanize import naturalsize from tqdm import tqdm from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks, returnValue, Deferred from twisted.protocols import basic from twisted.python import log from wormhole import __version__, create from ..errors import TransferError, UnsendableFileError from ..transit import TransitSender from ..util import bytes_to_dict, bytes_to_hexstr, dict_to_bytes from .welcome import handle_welcome APPID = u"lothar.com/wormhole/text-or-file-xfer" VERIFY_TIMER = float(os.environ.get("_MAGIC_WORMHOLE_TEST_VERIFY_TIMER", 1.0)) def send(args, reactor=reactor): """I implement 'wormhole send'. I return a Deferred that fires with None (for success), or signals one of the following errors: * WrongPasswordError: the two sides didn't use matching passwords * Timeout: something didn't happen fast enough for our tastes * TransferError: the receiver rejected the transfer: verifier mismatch, permission not granted, ack not successful. * any other error: something unexpected happened """ return Sender(args, reactor).go() class Sender: def __init__(self, args, reactor): self._args = args self._reactor = reactor self._tor = None self._timing = args.timing self._fd_to_send = None self._transit_sender = None @inlineCallbacks def go(self): assert isinstance(self._args.relay_url, type(u"")) if self._args.tor: with self._timing.add("import", which="tor_manager"): from ..tor_manager import get_tor # For now, block everything until Tor has started. Soon: launch # tor in parallel with everything else, make sure the Tor object # can lazy-provide an endpoint, and overlap the startup process # with the user handing off the wormhole code self._tor = yield get_tor( reactor, self._args.launch_tor, self._args.tor_control_port, timing=self._timing) w = create( self._args.appid or APPID, self._args.relay_url, self._reactor, tor=self._tor, timing=self._timing) d = self._go(w) # if we succeed, we should close and return the w.close results # (which might be an error) @inlineCallbacks def _good(res): yield w.close() # wait for ack returnValue(res) # if we raise an error, we should close and then return the original # error (the close might give us an error, but it isn't as important # as the original one) @inlineCallbacks def _bad(f): try: yield w.close() # might be an error too except Exception: pass returnValue(f) d.addCallbacks(_good, _bad) yield d def _send_data(self, data, w): data_bytes = dict_to_bytes(data) w.send_message(data_bytes) @inlineCallbacks def _go(self, w): welcome = yield w.get_welcome() handle_welcome(welcome, self._args.relay_url, __version__, self._args.stderr) # TODO: run the blocking zip-the-directory IO in a thread, let the # wormhole exchange happen in parallel offer, self._fd_to_send = self._build_offer() args = self._args other_cmd = u"wormhole receive" if args.verify: other_cmd = u"wormhole receive --verify" if args.zeromode: assert not args.code args.code = u"0-" other_cmd += u" -0" if args.code: w.set_code(args.code) else: w.allocate_code(args.code_length) code = yield w.get_code() if not args.zeromode: print(u"Wormhole code is: %s" % code, file=args.stderr) other_cmd += u" " + code print(u"On the other computer, please run:", file=args.stderr) print(u"", file=args.stderr) print(other_cmd, file=args.stderr) print(u"", file=args.stderr) # flush stderr so the code is displayed immediately args.stderr.flush() # We don't print a "waiting" message for get_unverified_key() here, # even though we 
do that in cmd_receive.py, because it's not at all # surprising to be waiting here for a long time. We'll sit in # get_unverified_key() until the receiver has typed in the code and # their PAKE message makes it to us. yield w.get_unverified_key() # TODO: don't stall on w.get_verifier() unless they want it def on_slow_connection(): print( u"Key established, waiting for confirmation...", file=args.stderr) notify = self._reactor.callLater(VERIFY_TIMER, on_slow_connection) try: # The usual sender-chooses-code sequence means the receiver's # PAKE should be followed immediately by their VERSION, so # w.get_verifier() should fire right away. However if we're # using the offline-codes sequence, and the receiver typed in # their code first, and then they went offline, we might be # sitting here for a while, so printing the "waiting" message # seems like a good idea. It might even be appropriate to give up # after a while. verifier_bytes = yield w.get_verifier() # might WrongPasswordError finally: if not notify.called: notify.cancel() if args.verify: # check_verifier() does a blocking call to input(), so stall for # a moment to let any outbound messages get written into the # kernel. At this point, we're sitting in a callback of # get_verifier(), which is triggered by receipt of the other # side's VERSION message. But we might have gotten both the PAKE # and the VERSION message in the same turn, and our outbound # VERSION message (triggered by receipt of their PAKE) is still # in Twisted's transmit queue. If we don't wait a moment, it will # be stuck there until `input()` returns, and the receiver won't # be able to compute a Verifier for the users to compare. #349 # has more details d = Deferred() reactor.callLater(0.001, d.callback, None) yield d self._check_verifier(w, verifier_bytes) # blocks, can TransferError if self._fd_to_send: ts = TransitSender( args.transit_helper, no_listen=(not args.listen), tor=self._tor, reactor=self._reactor, timing=self._timing) self._transit_sender = ts # for now, send this before the main offer sender_abilities = ts.get_connection_abilities() sender_hints = yield ts.get_connection_hints() sender_transit = { "abilities-v1": sender_abilities, "hints-v1": sender_hints, } self._send_data({u"transit": sender_transit}, w) # When I made it possible to override APPID with a CLI argument # (issue #113), I forgot to also change this w.derive_key() # (issue #339). We're stuck with it now. Use a local constant to # make this clear.
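# (cmd_receive.py pins the same constant on its side; if the two
# purpose strings ever diverged, the derived transit keys would no
# longer match and the Transit handshake would fail)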
BUG339_APPID = u"lothar.com/wormhole/text-or-file-xfer" # TODO: move this down below w.get_message() transit_key = w.derive_key(BUG339_APPID + "/transit-key", ts.TRANSIT_KEY_LENGTH) ts.set_transit_key(transit_key) self._send_data({"offer": offer}, w) want_answer = True while True: them_d_bytes = yield w.get_message() # TODO: get_message() fired, so get_verifier must have fired, so # now it's safe to use w.derive_key() them_d = bytes_to_dict(them_d_bytes) # print("GOT", them_d) recognized = False if u"error" in them_d: raise TransferError( "remote error, transfer abandoned: %s" % them_d["error"]) if u"transit" in them_d: recognized = True yield self._handle_transit(them_d[u"transit"]) if u"answer" in them_d: recognized = True if not want_answer: raise TransferError("duplicate answer") want_answer = True yield self._handle_answer(them_d[u"answer"]) returnValue(None) if not recognized: log.msg("unrecognized message %r" % (them_d, )) def _check_verifier(self, w, verifier_bytes): verifier = bytes_to_hexstr(verifier_bytes) while True: ok = six.moves.input("Verifier %s. ok? (yes/no): " % verifier) if ok.lower() == "yes": break if ok.lower() == "no": err = "sender rejected verification check, abandoned transfer" reject_data = dict_to_bytes({"error": err}) w.send_message(reject_data) raise TransferError(err) def _handle_transit(self, receiver_transit): ts = self._transit_sender ts.add_connection_hints(receiver_transit.get("hints-v1", [])) def _build_offer(self): offer = {} args = self._args text = args.text if text == "-": print(u"Reading text message from stdin..", file=args.stderr) text = sys.stdin.read() if not text and not args.what: text = six.moves.input("Text to send: ") if text is not None: print( u"Sending text message (%s)" % naturalsize(len(text)), file=args.stderr) offer = {"message": text} fd_to_send = None return offer, fd_to_send # click.Path (with resolve_path=False, the default) does not do path # resolution, so we must join it to cwd ourselves. We could use # resolve_path=True, but then it would also do os.path.realpath(), # which would replace the basename with the target of a symlink (if # any), which is not what I think users would want: if you symlink # X->Y and send X, you expect the recipient to save it in X, not Y. # # TODO: an open question is whether args.cwd (i.e. os.getcwd()) will # be unicode or bytes. We need it to be something that can be # os.path.joined with the unicode args.what . what = os.path.join(args.cwd, args.what) # We always tell the receiver to create a file (or directory) with the # same basename as what the local user typed, even if the local object # is a symlink to something with a different name. The normpath() is # there to remove trailing slashes. basename = os.path.basename(os.path.normpath(what)) assert basename != "", what # normpath shouldn't allow this # We use realpath() instead of normpath() to locate the actual # file/directory, because the path might contain symlinks, and # normpath() would collapse those before resolving them. # test_cli.OfferData.test_symlink_collapse tests this. # Unfortunately on windows, realpath() (on py3) is built out of # normpath() because of a py2-era belief that windows lacks a working # os.path.islink(): see https://bugs.python.org/issue9949 . 
The # consequence is that "wormhole send PATH" might send the wrong file, # if: # * we're running on windows # * PATH goes down through a symlink and then up with parent-directory # navigation (".."), then back down again # * the back-down-again portion of the path also exists under the # original directory (an error is thrown if not) # I'd like to fix this. The core issue is sending directories with a # trailing slash: we need to 1: open the right directory, and 2: strip # the right parent path out of the filenames we get from os.walk(). We # used to use what.rstrip() for this, but bug #251 reported this # failing on windows-with-bash. realpath() works in both those cases, # but fails with the up-down symlinks situation. I think we'll need to # find a third way to strip the trailing slash reliably in all # environments. what = os.path.realpath(what) if not os.path.exists(what): raise TransferError( "Cannot send: no file/directory named '%s'" % args.what) if os.path.isfile(what): # we're sending a file filesize = os.stat(what).st_size offer["file"] = { "filename": basename, "filesize": filesize, } print( u"Sending %s file named '%s'" % (naturalsize(filesize), basename), file=args.stderr) fd_to_send = open(what, "rb") return offer, fd_to_send if os.path.isdir(what): print(u"Building zipfile..", file=args.stderr) # We're sending a directory. Create a zipfile and send that # instead. SpooledTemporaryFile will use RAM until our size # threshold (10MB) is reached, then moves everything into a # tempdir (it tries $TMPDIR, $TEMP, $TMP, then platform-specific # paths like /tmp). fd_to_send = tempfile.SpooledTemporaryFile(max_size=10*1000*1000) # workaround for https://bugs.python.org/issue26175 (STF doesn't # fully implement IOBase abstract class), which breaks the new # zipfile in py3.7.0 that expects .seekable if not hasattr(fd_to_send, "seekable"): # AFAICT all the filetypes that STF wraps can seek fd_to_send.seekable = lambda: True num_files = 0 num_bytes = 0 tostrip = len(what.split(os.sep)) with zipfile.ZipFile( fd_to_send, "w", compression=zipfile.ZIP_DEFLATED, allowZip64=True) as zf: for path, dirs, files in os.walk(what): # path always starts with args.what, then sometimes might # have "/subdir" appended. 
We want the zipfile to contain # "" or "subdir" localpath = list(path.split(os.sep)[tostrip:]) for fn in files: archivename = os.path.join(*tuple(localpath + [fn])) localfilename = os.path.join(path, fn) try: zf.write(localfilename, archivename) num_bytes += os.stat(localfilename).st_size num_files += 1 except OSError as e: errmsg = u"{}: {}".format(fn, e.strerror) if self._args.ignore_unsendable_files: print( u"{} (ignoring error)".format(errmsg), file=args.stderr) else: raise UnsendableFileError(errmsg) fd_to_send.seek(0, 2) filesize = fd_to_send.tell() fd_to_send.seek(0, 0) offer["directory"] = { "mode": "zipfile/deflated", "dirname": basename, "zipsize": filesize, "numbytes": num_bytes, "numfiles": num_files, } print( u"Sending directory (%s compressed) named '%s'" % (naturalsize(filesize), basename), file=args.stderr) return offer, fd_to_send if stat.S_ISBLK(os.stat(what).st_mode): fd_to_send = open(what, "rb") filesize = fd_to_send.seek(0, 2) offer["file"] = { "filename": basename, "filesize": filesize, } print( u"Sending %s block device named '%s'" % (naturalsize(filesize), basename), file=args.stderr) fd_to_send.seek(0) return offer, fd_to_send raise TypeError("'%s' is neither file nor directory" % args.what) @inlineCallbacks def _handle_answer(self, them_answer): if self._fd_to_send is None: if them_answer["message_ack"] == "ok": print(u"text message sent", file=self._args.stderr) returnValue(None) # terminates this function raise TransferError("error sending text: %r" % (them_answer, )) if them_answer.get("file_ack") != "ok": raise TransferError("ambiguous response from remote, " "transfer abandoned: %s" % (them_answer, )) yield self._send_file() @inlineCallbacks def _send_file(self): ts = self._transit_sender self._fd_to_send.seek(0, 2) filesize = self._fd_to_send.tell() self._fd_to_send.seek(0, 0) record_pipe = yield ts.connect() self._timing.add("transit connected") # record_pipe should implement IConsumer, chunks are just records stderr = self._args.stderr print(u"Sending (%s).." % record_pipe.describe(), file=stderr) hasher = hashlib.sha256() progress = tqdm( file=stderr, disable=self._args.hide_progress, unit="B", unit_scale=True, total=filesize) def _count_and_hash(data): hasher.update(data) progress.update(len(data)) return data fs = basic.FileSender() with self._timing.add("tx file"): with progress: if filesize: # don't send zero-length files yield fs.beginFileTransfer( self._fd_to_send, record_pipe, transform=_count_and_hash) expected_hash = hasher.digest() expected_hex = bytes_to_hexstr(expected_hash) print(u"File sent.. waiting for confirmation", file=stderr) with self._timing.add("get ack") as t: ack_bytes = yield record_pipe.receive_record() record_pipe.close() ack = bytes_to_dict(ack_bytes) ok = ack.get(u"ack", u"") if ok != u"ok": t.detail(ack="failed") raise TransferError("Transfer failed (remote says: %r)" % ack) if u"sha256" in ack: if ack[u"sha256"] != expected_hex: t.detail(datahash="failed") raise TransferError("Transfer failed (bad remote hash)") print(u"Confirmation received. Transfer complete.", file=stderr) t.detail(ack="ok") magic-wormhole-0.12.0/src/wormhole/cli/cmd_ssh.py000066400000000000000000000075041400712516500216770ustar00rootroot00000000000000from __future__ import print_function import os from os.path import exists, expanduser, join import click from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks from .. 
import xfer_util class PubkeyError(Exception): pass def find_public_key(hint=None): """ This looks for an appropriate SSH key to send, possibly querying the user in the meantime. DO NOT CALL after reactor.run as this (possibly) does blocking stuff like asking the user questions (via click.prompt()) Returns a 3-tuple: kind, keyid, pubkey_data """ if hint is None: hint = expanduser('~/.ssh/') else: if not exists(hint): raise PubkeyError("Can't find '{}'".format(hint)) pubkeys = [f for f in os.listdir(hint) if f.endswith('.pub')] if len(pubkeys) == 0: raise PubkeyError("No public keys in '{}'".format(hint)) elif len(pubkeys) > 1: got_key = False while not got_key: ans = click.prompt( "Multiple public-keys found:\n" + "\n".join([" {}: {}".format(a, b) for a, b in enumerate(pubkeys)]) + "\nSend which one?" ) try: ans = int(ans) if ans < 0 or ans >= len(pubkeys): ans = None else: got_key = True with open(join(hint, pubkeys[ans]), 'r') as f: pubkey = f.read() except Exception: got_key = False else: with open(join(hint, pubkeys[0]), 'r') as f: pubkey = f.read() parts = pubkey.strip().split() kind = parts[0] keyid = 'unknown' if len(parts) <= 2 else parts[2] return kind, keyid, pubkey @inlineCallbacks def accept(cfg, reactor=reactor): yield xfer_util.send( reactor, cfg.appid or u"lothar.com/wormhole/ssh-add", cfg.relay_url, data=cfg.public_key[2], code=cfg.code, use_tor=cfg.tor, launch_tor=cfg.launch_tor, tor_control_port=cfg.tor_control_port, ) print("Key sent.") @inlineCallbacks def invite(cfg, reactor=reactor): def on_code_created(code): print("Now tell the other user to run:") print() print("wormhole ssh accept {}".format(code)) print() if cfg.ssh_user is None: ssh_path = expanduser('~/.ssh/') else: ssh_path = expanduser('~{}/.ssh/'.format(cfg.ssh_user)) auth_key_path = join(ssh_path, 'authorized_keys') if not exists(auth_key_path): print("Note: '{}' not found; will be created".format(auth_key_path)) if not exists(ssh_path): print(" '{}' doesn't exist either".format(ssh_path)) else: try: open(auth_key_path, 'a').close() except OSError: print("No write permission on '{}'".format(auth_key_path)) return try: os.listdir(ssh_path) except OSError: print("Can't read '{}'".format(ssh_path)) return pubkey = yield xfer_util.receive( reactor, cfg.appid or u"lothar.com/wormhole/ssh-add", cfg.relay_url, None, # allocate a code for us use_tor=cfg.tor, launch_tor=cfg.launch_tor, tor_control_port=cfg.tor_control_port, on_code=on_code_created, ) parts = pubkey.split() kind = parts[0] keyid = 'unknown' if len(parts) <= 2 else parts[2] if not exists(auth_key_path): if not exists(ssh_path): os.mkdir(ssh_path, mode=0o700) # open()'s third positional argument is a buffering size, not file # permissions, so create the file with mode 0o600 via os.open() with os.fdopen(os.open(auth_key_path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o600), 'a') as f: f.write('{}\n'.format(pubkey.strip())) print("Appended key type='{kind}' id='{key_id}' to '{auth_file}'".format( kind=kind, key_id=keyid, auth_file=auth_key_path)) magic-wormhole-0.12.0/src/wormhole/cli/public_relay.py000066400000000000000000000003251400712516500227230ustar00rootroot00000000000000# This is a relay I run on a personal server. If it gets too expensive to # run, I'll shut it down.
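# Both values can be overridden at run time: pass --relay-url and
# --transit-helper on the command line, or set the WORMHOLE_RELAY_URL /
# WORMHOLE_TRANSIT_HELPER environment variables (see the Click options
# in cli.py). For example, against hypothetical self-hosted servers:
#   wormhole --relay-url ws://localhost:4000/v1 \
#            --transit-helper tcp:localhost:4001 send notes.txt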
RENDEZVOUS_RELAY = u"ws://relay.magic-wormhole.io:4000/v1" TRANSIT_RELAY = u"tcp:transit.magic-wormhole.io:4001" magic-wormhole-0.12.0/src/wormhole/cli/welcome.py000066400000000000000000000016361400712516500217120ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals def handle_welcome(welcome, relay_url, my_version, stderr): if "motd" in welcome: motd_lines = welcome["motd"].splitlines() motd_formatted = "\n ".join(motd_lines) print( "Server (at %s) says:\n %s" % (relay_url, motd_formatted), file=stderr) # Only warn if we're running a release version (e.g. 0.0.6, not # 0.0.6+DISTANCE.gHASH). Only warn once. if (("current_cli_version" in welcome and "+" not in my_version and welcome["current_cli_version"] != my_version)): print( ("Warning: errors may occur unless both sides are running the" " same version"), file=stderr) print( "Server claims %s is current, but ours is %s" % (welcome["current_cli_version"], my_version), file=stderr) magic-wormhole-0.12.0/src/wormhole/errors.py000066400000000000000000000067461400712516500210330ustar00rootroot00000000000000from __future__ import unicode_literals class WormholeError(Exception): """Parent class for all wormhole-related errors""" class UnsendableFileError(Exception): """ A file you wanted to send couldn't be read, maybe because it's not a file, or because it's a symlink that points to something that doesn't exist. To ignore this kind of error, you can run wormhole with the --ignore-unsendable-files flag. """ class ServerError(WormholeError): """The relay server complained about something we did.""" class ServerConnectionError(WormholeError): """We had a problem connecting to the relay server:""" def __init__(self, url, reason): self.url = url self.reason = reason def __str__(self): return str(self.reason) class Timeout(WormholeError): pass class WelcomeError(WormholeError): """ The relay server told us to signal an error, probably because our version is too old to possibly work. The server said:""" pass class LonelyError(WormholeError): """wormhole.close() was called before the peer connection could be established""" class WrongPasswordError(WormholeError): """ Key confirmation failed. Either you or your correspondent typed the code wrong, or a would-be man-in-the-middle attacker guessed incorrectly. You could try again, giving both your correspondent and the attacker another chance. """ # or the data blob was corrupted, and that's why decrypt failed pass class KeyFormatError(WormholeError): """ The key you entered contains spaces or was missing a dash. Magic-wormhole expects the numerical nameplate and the code words to be separated by dashes. Please reenter the key you were given separating the words with dashes. 
""" class ReflectionAttack(WormholeError): """An attacker (or bug) reflected our outgoing message back to us.""" class InternalError(WormholeError): """The programmer did something wrong.""" class TransferError(WormholeError): """Something bad happened and the transfer failed.""" class NoTorError(WormholeError): """--tor was requested, but 'txtorcon' is not installed.""" class NoKeyError(WormholeError): """w.derive_key() was called before got_verifier() fired""" class OnlyOneCodeError(WormholeError): """Only one w.generate_code/w.set_code/w.input_code may be called""" class MustChooseNameplateFirstError(WormholeError): """The InputHelper was asked to do get_word_completions() or choose_words() before the nameplate was chosen.""" class AlreadyChoseNameplateError(WormholeError): """The InputHelper was asked to do get_nameplate_completions() after choose_nameplate() was called, or choose_nameplate() was called a second time.""" class AlreadyChoseWordsError(WormholeError): """The InputHelper was asked to do get_word_completions() after choose_words() was called, or choose_words() was called a second time.""" class AlreadyInputNameplateError(WormholeError): """The CodeInputter was asked to do completion on a nameplate, when we had already committed to a different one.""" class WormholeClosed(Exception): """Deferred-returning API calls errback with WormholeClosed if the wormhole was already closed, or if it closes before a real result can be obtained.""" class _UnknownPhaseError(Exception): """internal exception type, for tests.""" class _UnknownMessageTypeError(Exception): """internal exception type, for tests.""" magic-wormhole-0.12.0/src/wormhole/eventual.py000066400000000000000000000030731400712516500213300ustar00rootroot00000000000000# inspired-by/adapted-from Foolscap's eventual.py, which Glyph wrote for me # years ago. from twisted.internet.defer import Deferred from twisted.internet.interfaces import IReactorTime from twisted.python import log class EventualQueue(object): def __init__(self, clock): # pass clock=reactor unless you're testing self._clock = IReactorTime(clock) self._calls = [] self._flush_d = None self._timer = None def eventually(self, f, *args, **kwargs): self._calls.append((f, args, kwargs)) if not self._timer: self._timer = self._clock.callLater(0, self._turn) def fire_eventually(self, value=None): d = Deferred() self.eventually(d.callback, value) return d def _turn(self): while self._calls: (f, args, kwargs) = self._calls.pop(0) try: f(*args, **kwargs) except Exception: log.err() self._timer = None d, self._flush_d = self._flush_d, None if d: d.callback(None) def flush_sync(self): # if you have control over the Clock, this will synchronously flush the # queue assert self._clock.advance, "needs clock=twisted.internet.task.Clock()" while self._calls: self._clock.advance(0) def flush(self): # this is for unit tests, not application code assert not self._flush_d, "only one flush at a time" self._flush_d = Deferred() self.eventually(lambda: None) return self._flush_d magic-wormhole-0.12.0/src/wormhole/ipaddrs.py000066400000000000000000000055571400712516500211440ustar00rootroot00000000000000# no unicode_literals # Find all of our ip addresses. From tahoe's src/allmydata/util/iputil.py import errno import os import re import subprocess from sys import platform from twisted.python.procutils import which # Wow, I'm really amazed at home much mileage we've gotten out of calling # the external route.exe program on windows... It appears to work on all # versions so far. 
Still, the real system calls would much be preferred... # ... thus wrote Greg Smith in time immemorial... _win32_re = re.compile( (r'^\s*\d+\.\d+\.\d+\.\d+\s.+\s' r'(?P<address>\d+\.\d+\.\d+\.\d+)\s+(?P<metric>\d+)\s*$'), flags=re.M | re.I | re.S) _win32_commands = (('route.exe', ('print', ), _win32_re), ) # These work in most Unices. _addr_re = re.compile( r'^\s*inet [a-zA-Z]*:?(?P<address>
\d+\.\d+\.\d+\.\d+)[\s/].+$', flags=re.M | re.I | re.S) _unix_commands = ( ('/bin/ip', ('addr', ), _addr_re), ('/sbin/ip', ('addr', ), _addr_re), ('/sbin/ifconfig', ('-a', ), _addr_re), ('/usr/sbin/ifconfig', ('-a', ), _addr_re), ('/usr/etc/ifconfig', ('-a', ), _addr_re), ('ifconfig', ('-a', ), _addr_re), ('/sbin/ifconfig', (), _addr_re), ) def find_addresses(): # originally by Greg Smith, hacked by Zooko and then Daira # We don't reach here for cygwin. if platform == 'win32': commands = _win32_commands else: commands = _unix_commands for (pathtotool, args, regex) in commands: # If pathtotool is a fully qualified path then we just try that. # If it is merely an executable name then we use Twisted's # "which()" utility and try each executable in turn until one # gives us something that resembles a dotted-quad IPv4 address. if os.path.isabs(pathtotool): exes_to_try = [pathtotool] else: exes_to_try = which(pathtotool) for exe in exes_to_try: try: addresses = _query(exe, args, regex) except Exception: addresses = [] if addresses: return addresses return ["127.0.0.1"] def _query(path, args, regex): env = {'LANG': 'en_US.UTF-8'} trial = 0 while True: trial += 1 try: p = subprocess.Popen( [path] + list(args), stdout=subprocess.PIPE, stderr=subprocess.PIPE, env=env, universal_newlines=True) (output, err) = p.communicate() break except OSError as e: if e.errno == errno.EINTR and trial < 5: continue raise addresses = [] outputsplit = output.split('\n') for outline in outputsplit: m = regex.match(outline) if m: addr = m.group('address') if addr not in addresses: addresses.append(addr) return addresses magic-wormhole-0.12.0/src/wormhole/journal.py000066400000000000000000000022331400712516500211540ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals import contextlib from zope.interface import implementer from ._interfaces import IJournal @implementer(IJournal) class Journal(object): def __init__(self, save_checkpoint): self._save_checkpoint = save_checkpoint self._outbound_queue = [] self._processing = False def queue_outbound(self, fn, *args, **kwargs): assert self._processing self._outbound_queue.append((fn, args, kwargs)) @contextlib.contextmanager def process(self): assert not self._processing assert not self._outbound_queue self._processing = True yield # process inbound messages, change state, queue outbound self._save_checkpoint() for (fn, args, kwargs) in self._outbound_queue: fn(*args, **kwargs) self._outbound_queue[:] = [] self._processing = False @implementer(IJournal) class ImmediateJournal(object): def __init__(self): pass def queue_outbound(self, fn, *args, **kwargs): fn(*args, **kwargs) @contextlib.contextmanager def process(self): yield magic-wormhole-0.12.0/src/wormhole/observer.py000066400000000000000000000053651400712516500213420ustar00rootroot00000000000000from __future__ import print_function, unicode_literals from twisted.internet.defer import Deferred from twisted.python.failure import Failure NoResult = object() class OneShotObserver(object): def __init__(self, eventual_queue): self._eq = eventual_queue self._result = NoResult self._observers = [] # list of Deferreds def when_fired(self): d = Deferred() self._observers.append(d) self._maybe_call_observers() return d def fire(self, result): assert self._result is NoResult self._result = result self._maybe_call_observers() def _maybe_call_observers(self): if self._result is NoResult: return observers, self._observers = self._observers, [] for d in observers: 
self._eq.eventually(d.callback, self._result) def error(self, f): # errors will override an existing result assert isinstance(f, Failure) self._result = f self._maybe_call_observers() def fire_if_not_fired(self, result): if self._result is NoResult: self.fire(result) class SequenceObserver(object): def __init__(self, eventual_queue): self._eq = eventual_queue self._error = None self._results = [] self._observers = [] def when_next_event(self): d = Deferred() if self._error: self._eq.eventually(d.errback, self._error) elif self._results: result = self._results.pop(0) self._eq.eventually(d.callback, result) else: self._observers.append(d) return d def fire(self, result): if isinstance(result, Failure): self._error = result for d in self._observers: self._eq.eventually(d.errback, self._error) self._observers = [] else: self._results.append(result) if self._observers: d = self._observers.pop(0) self._eq.eventually(d.callback, self._results.pop(0)) class EmptyableSet(set): # manage a set which grows and shrinks over time. Fire a Deferred the first # time it becomes empty after you start watching for it. def __init__(self, *args, **kwargs): self._eq = kwargs.pop("_eventual_queue") # required super(EmptyableSet, self).__init__(*args, **kwargs) self._observer = None def when_next_empty(self): if not self._observer: self._observer = OneShotObserver(self._eq) return self._observer.when_fired() def discard(self, o): super(EmptyableSet, self).discard(o) if self._observer and not self: self._observer.fire(None) self._observer = None magic-wormhole-0.12.0/src/wormhole/test/000077500000000000000000000000001400712516500201075ustar00rootroot00000000000000magic-wormhole-0.12.0/src/wormhole/test/__init__.py000066400000000000000000000000001400712516500222060ustar00rootroot00000000000000magic-wormhole-0.12.0/src/wormhole/test/common.py000066400000000000000000000117571400712516500217640ustar00rootroot00000000000000# no unicode_literals until twisted update from click.testing import CliRunner from twisted.application import internet, service from twisted.internet import defer, endpoints, reactor, task from twisted.python import log import mock from wormhole_mailbox_server.database import create_channel_db, create_usage_db from wormhole_mailbox_server.server import make_server from wormhole_mailbox_server.web import make_web_server from wormhole_transit_relay.transit_server import Transit from ..cli import cli from ..transit import allocate_tcp_port class MyInternetService(service.Service, object): # like StreamServerEndpointService, but you can retrieve the port def __init__(self, endpoint, factory): self.endpoint = endpoint self.factory = factory self._port_d = defer.Deferred() self._lp = None def startService(self): super(MyInternetService, self).startService() d = self.endpoint.listen(self.factory) def good(lp): self._lp = lp self._port_d.callback(lp.getHost().port) def bad(f): log.err(f) self._port_d.errback(f) d.addCallbacks(good, bad) @defer.inlineCallbacks def stopService(self): if self._lp: yield self._lp.stopListening() def getPort(self): # only call once! 
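# (we hand out the Deferred itself rather than a copy, and a Deferred
# delivers its result through a single callback chain, so a second
# caller would only see whatever the first caller's callbacks returned)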
return self._port_d class ServerBase: @defer.inlineCallbacks def setUp(self): yield self._setup_relay(None) @defer.inlineCallbacks def _setup_relay(self, error, advertise_version=None): self.sp = service.MultiService() self.sp.startService() # need to talk to twisted team about only using unicode in # endpoints.serverFromString db = create_channel_db(":memory:") self._usage_db = create_usage_db(":memory:") self._rendezvous = make_server( db, advertise_version=advertise_version, signal_error=error, usage_db=self._usage_db) ep = endpoints.TCP4ServerEndpoint(reactor, 0, interface="127.0.0.1") site = make_web_server(self._rendezvous, log_requests=False) # self._lp = yield ep.listen(site) s = MyInternetService(ep, site) s.setServiceParent(self.sp) self.rdv_ws_port = yield s.getPort() self._relay_server = s # self._rendezvous = s._rendezvous self.relayurl = u"ws://127.0.0.1:%d/v1" % self.rdv_ws_port # ws://127.0.0.1:%d/wormhole-relay/ws self.transitport = allocate_tcp_port() ep = endpoints.serverFromString( reactor, "tcp:%d:interface=127.0.0.1" % self.transitport) self._transit_server = f = Transit( blur_usage=None, log_file=None, usage_db=None) internet.StreamServerEndpointService(ep, f).setServiceParent(self.sp) self.transit = u"tcp:127.0.0.1:%d" % self.transitport @defer.inlineCallbacks def tearDown(self): # Unit tests that spawn a (blocking) client in a thread might still # have threads running at this point, if one is stuck waiting for a # message from a companion which has exited with an error. Our # relay's .stopService() drops all connections, which ought to # encourage those threads to terminate soon. If they don't, print a # warning to ease debugging. # XXX FIXME there's something in _noclobber test that's not # waiting for a close, I think -- was pretty reliably getting # unclean-reactor, but adding a slight pause here stops it...
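# (the "slight pause" being the task.deferLater(reactor, 0.1, ...)
# below: one extra reactor turn plus 100ms of wall-clock time for
# in-flight connection shutdowns to finish before we return)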
tp = reactor.getThreadPool() if not tp.working: yield self.sp.stopService() yield task.deferLater(reactor, 0.1, lambda: None) defer.returnValue(None) # disconnect all callers d = defer.maybeDeferred(self.sp.stopService) # wait a second, then check to see if it worked yield task.deferLater(reactor, 1.0, lambda: None) if len(tp.working): log.msg("wormhole.test.common.ServerBase.tearDown:" " I was unable to convince all threads to exit.") tp.dumpStats() print("tearDown warning: threads are still active") print("This test will probably hang until one of the" " clients gives up of their own accord.") else: log.msg("wormhole.test.common.ServerBase.tearDown:" " I convinced all threads to exit.") yield d def config(*argv): r = CliRunner() with mock.patch("wormhole.cli.cli.go") as go: res = r.invoke(cli.wormhole, argv, catch_exceptions=False) if res.exit_code != 0: print(res.exit_code) print(res.output) print(res) assert 0 cfg = go.call_args[0][1] return cfg @defer.inlineCallbacks def poll_until(predicate): # return a Deferred that won't fire until the predicate is True while not predicate(): d = defer.Deferred() reactor.callLater(0.001, d.callback, None) yield d magic-wormhole-0.12.0/src/wormhole/test/dilate/000077500000000000000000000000001400712516500213515ustar00rootroot00000000000000magic-wormhole-0.12.0/src/wormhole/test/dilate/__init__.py000066400000000000000000000000001400712516500234500ustar00rootroot00000000000000magic-wormhole-0.12.0/src/wormhole/test/dilate/common.py000066400000000000000000000006461400712516500232210ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import mock from zope.interface import alsoProvides from ..._interfaces import IDilationManager, IWormhole def mock_manager(): m = mock.Mock() alsoProvides(m, IDilationManager) return m def mock_wormhole(): m = mock.Mock() alsoProvides(m, IWormhole) return m def clear_mock_calls(*args): for a in args: a.mock_calls[:] = [] magic-wormhole-0.12.0/src/wormhole/test/dilate/test_connect.py000066400000000000000000000055451400712516500244240ustar00rootroot00000000000000import re import mock from twisted.internet import reactor from twisted.trial import unittest from twisted.internet.task import Cooperator from twisted.internet.defer import Deferred, inlineCallbacks from zope.interface import implementer from ... 
import _interfaces from ...eventual import EventualQueue from ..._interfaces import ITerminator from ..._dilation import manager from ..._dilation._noise import NoiseConnection @implementer(_interfaces.ISend) class MySend(object): def __init__(self, side): self.rx_phase = 0 self.side = side def send(self, phase, plaintext): #print("SEND[%s]" % self.side, phase, plaintext) self.peer.got(phase, plaintext) def got(self, phase, plaintext): d_mo = re.search(r'^dilate-(\d+)$', phase) p = int(d_mo.group(1)) assert p == self.rx_phase self.rx_phase += 1 self.dilator.received_dilate(plaintext) @implementer(ITerminator) class FakeTerminator(object): def __init__(self): self.d = Deferred() def stoppedD(self): self.d.callback(None) class Connect(unittest.TestCase): @inlineCallbacks def test1(self): if not NoiseConnection: raise unittest.SkipTest("noiseprotocol unavailable") #print() send_left = MySend("left") send_right = MySend("right") send_left.peer = send_right send_right.peer = send_left key = b"\x00"*32 eq = EventualQueue(reactor) cooperator = Cooperator(scheduler=eq.eventually) t_left = FakeTerminator() t_right = FakeTerminator() d_left = manager.Dilator(reactor, eq, cooperator) d_left.wire(send_left, t_left) d_left.got_key(key) d_left.got_wormhole_versions({"can-dilate": ["1"]}) send_left.dilator = d_left d_right = manager.Dilator(reactor, eq, cooperator) d_right.wire(send_right, t_right) d_right.got_key(key) d_right.got_wormhole_versions({"can-dilate": ["1"]}) send_right.dilator = d_right with mock.patch("wormhole._dilation.connector.ipaddrs.find_addresses", return_value=["127.0.0.1"]): eps_left_d = d_left.dilate(no_listen=True) eps_right_d = d_right.dilate() eps_left = yield eps_left_d eps_right = yield eps_right_d #print("left connected", eps_left) #print("right connected", eps_right) control_ep_left, connect_ep_left, listen_ep_left = eps_left control_ep_right, connect_ep_right, listen_ep_right = eps_right #control_ep_left.connect( # we normally shut down with w.close(), which calls Dilator.stop(), # which calls Terminator.stoppedD(), which (after everything else is # done) calls Boss.stopped d_left.stop() d_right.stop() yield t_left.d yield t_right.d magic-wormhole-0.12.0/src/wormhole/test/dilate/test_connection.py000066400000000000000000000307651400712516500251340ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import mock from zope.interface import alsoProvides from twisted.trial import unittest from twisted.internet.task import Clock from twisted.internet.interfaces import ITransport from ...eventual import EventualQueue from ..._interfaces import IDilationConnector from ..._dilation.roles import LEADER, FOLLOWER from ..._dilation.connection import (DilatedConnectionProtocol, encode_record, KCM, Open, Ack) from .common import clear_mock_calls def make_con(role, use_relay=False): clock = Clock() eq = EventualQueue(clock) connector = mock.Mock() alsoProvides(connector, IDilationConnector) n = mock.Mock() # pretends to be a Noise object n.write_message = mock.Mock(side_effect=[b"handshake"]) c = DilatedConnectionProtocol(eq, role, "desc", connector, n, b"outbound_prologue\n", b"inbound_prologue\n") if use_relay: c.use_relay(b"relay_handshake\n") t = mock.Mock() alsoProvides(t, ITransport) return c, n, connector, t, eq class Connection(unittest.TestCase): def test_hashable(self): c, n, connector, t, eq = make_con(LEADER) hash(c) def test_bad_prologue(self): c, n, connector, t, eq = make_con(LEADER) c.makeConnection(t) d = c.when_disconnected() 
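# (when_disconnected() only registers interest here; its Deferred is
# expected to stay unfired until the bad prologue below provokes
# loseConnection() and the transport reports connectionLost())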
self.assertEqual(n.mock_calls, [mock.call.start_handshake()]) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [mock.call.write(b"outbound_prologue\n")]) clear_mock_calls(n, connector, t) c.dataReceived(b"prologue\n") self.assertEqual(n.mock_calls, []) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [mock.call.loseConnection()]) eq.flush_sync() self.assertNoResult(d) c.connectionLost(b"why") eq.flush_sync() self.assertIdentical(self.successResultOf(d), c) def _test_no_relay(self, role): c, n, connector, t, eq = make_con(role) t_kcm = KCM() t_open = Open(seqnum=1, scid=0x11223344) t_ack = Ack(resp_seqnum=2) n.decrypt = mock.Mock(side_effect=[ encode_record(t_kcm), encode_record(t_open), ]) exp_kcm = b"\x00\x00\x00\x03kcm" n.encrypt = mock.Mock(side_effect=[b"kcm", b"ack1"]) m = mock.Mock() # Manager c.makeConnection(t) self.assertEqual(n.mock_calls, [mock.call.start_handshake()]) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [mock.call.write(b"outbound_prologue\n")]) clear_mock_calls(n, connector, t, m) c.dataReceived(b"inbound_prologue\n") exp_handshake = b"\x00\x00\x00\x09handshake" if role is LEADER: # the LEADER sends the Noise handshake message immediately upon # receipt of the prologue self.assertEqual(n.mock_calls, [mock.call.write_message()]) self.assertEqual(t.mock_calls, [mock.call.write(exp_handshake)]) else: # however the FOLLOWER waits until receiving the leader's # handshake before sending their own self.assertEqual(n.mock_calls, []) self.assertEqual(t.mock_calls, []) self.assertEqual(connector.mock_calls, []) clear_mock_calls(n, connector, t, m) c.dataReceived(b"\x00\x00\x00\x0Ahandshake2") if role is LEADER: # we're the leader, so we don't send the KCM right away self.assertEqual(n.mock_calls, [ mock.call.read_message(b"handshake2")]) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, []) self.assertEqual(c._manager, None) else: # we're the follower, so we send our Noise handshake, then # encrypt and send the KCM immediately self.assertEqual(n.mock_calls, [ mock.call.read_message(b"handshake2"), mock.call.write_message(), mock.call.encrypt(encode_record(t_kcm)), ]) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [ mock.call.write(exp_handshake), mock.call.write(exp_kcm)]) self.assertEqual(c._manager, None) clear_mock_calls(n, connector, t, m) c.dataReceived(b"\x00\x00\x00\x03KCM") # leader: inbound KCM means we add the candidate # follower: inbound KCM means we've been selected. # in both cases we notify Connector.add_candidate(), and the Connector # decides if/when to call .select() self.assertEqual(n.mock_calls, [mock.call.decrypt(b"KCM")]) self.assertEqual(connector.mock_calls, [mock.call.add_candidate(c)]) self.assertEqual(t.mock_calls, []) clear_mock_calls(n, connector, t, m) # now pretend this connection wins (either the Leader decides to use # this one among all the candidates, or we're the Follower and the # Connector is reacting to add_candidate() by recognizing we're the # only candidate there is) c.select(m) self.assertIdentical(c._manager, m) if role is LEADER: # TODO: currently Connector.select_and_stop_remaining() is # responsible for sending the KCM just before calling c.select() # iff we're the LEADER, therefore Connection.select won't send # anything. This should be moved to c.select(). 
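# (hence the empty assertions that follow: for the LEADER, select()
# itself must produce no noise calls and no writes -- the KCM only
# goes out via the explicit send_record(KCM()) exercised afterwards)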
self.assertEqual(n.mock_calls, []) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, []) self.assertEqual(m.mock_calls, []) c.send_record(KCM()) self.assertEqual(n.mock_calls, [ mock.call.encrypt(encode_record(t_kcm)), ]) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [mock.call.write(exp_kcm)]) self.assertEqual(m.mock_calls, []) else: # follower: we already sent the KCM, do nothing self.assertEqual(n.mock_calls, []) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, []) self.assertEqual(m.mock_calls, []) clear_mock_calls(n, connector, t, m) c.dataReceived(b"\x00\x00\x00\x04msg1") self.assertEqual(n.mock_calls, [mock.call.decrypt(b"msg1")]) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, []) self.assertEqual(m.mock_calls, [mock.call.got_record(t_open)]) clear_mock_calls(n, connector, t, m) c.send_record(t_ack) exp_ack = b"\x06\x00\x00\x00\x02" self.assertEqual(n.mock_calls, [mock.call.encrypt(exp_ack)]) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [mock.call.write(b"\x00\x00\x00\x04ack1")]) self.assertEqual(m.mock_calls, []) clear_mock_calls(n, connector, t, m) c.disconnect() self.assertEqual(n.mock_calls, []) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [mock.call.loseConnection()]) self.assertEqual(m.mock_calls, []) clear_mock_calls(n, connector, t, m) def test_no_relay_leader(self): return self._test_no_relay(LEADER) def test_no_relay_follower(self): return self._test_no_relay(FOLLOWER) def test_relay(self): c, n, connector, t, eq = make_con(LEADER, use_relay=True) c.makeConnection(t) self.assertEqual(n.mock_calls, [mock.call.start_handshake()]) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [mock.call.write(b"relay_handshake\n")]) clear_mock_calls(n, connector, t) c.dataReceived(b"ok\n") self.assertEqual(n.mock_calls, []) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [mock.call.write(b"outbound_prologue\n")]) clear_mock_calls(n, connector, t) c.dataReceived(b"inbound_prologue\n") self.assertEqual(n.mock_calls, [mock.call.write_message()]) self.assertEqual(connector.mock_calls, []) exp_handshake = b"\x00\x00\x00\x09handshake" self.assertEqual(t.mock_calls, [mock.call.write(exp_handshake)]) clear_mock_calls(n, connector, t) def test_relay_jilted(self): c, n, connector, t, eq = make_con(LEADER, use_relay=True) d = c.when_disconnected() c.makeConnection(t) self.assertEqual(n.mock_calls, [mock.call.start_handshake()]) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [mock.call.write(b"relay_handshake\n")]) clear_mock_calls(n, connector, t) c.connectionLost(b"why") eq.flush_sync() self.assertIdentical(self.successResultOf(d), c) def test_relay_bad_response(self): c, n, connector, t, eq = make_con(LEADER, use_relay=True) c.makeConnection(t) self.assertEqual(n.mock_calls, [mock.call.start_handshake()]) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [mock.call.write(b"relay_handshake\n")]) clear_mock_calls(n, connector, t) c.dataReceived(b"not ok\n") self.assertEqual(n.mock_calls, []) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [mock.call.loseConnection()]) clear_mock_calls(n, connector, t) def test_follower_combined(self): c, n, connector, t, eq = make_con(FOLLOWER) t_kcm = KCM() t_open = Open(seqnum=1, scid=0x11223344) n.decrypt = mock.Mock(side_effect=[ encode_record(t_kcm), encode_record(t_open), ]) 
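        # records travel with a 4-byte big-endian length prefix (see
        # encode.to_be4 and test_encoding.py), so the 3-byte "kcm"
        # ciphertext is framed as b"\x00\x00\x00\x03" + b"kcm":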
exp_kcm = b"\x00\x00\x00\x03kcm" n.encrypt = mock.Mock(side_effect=[b"kcm", b"ack1"]) m = mock.Mock() # Manager c.makeConnection(t) self.assertEqual(n.mock_calls, [mock.call.start_handshake()]) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [mock.call.write(b"outbound_prologue\n")]) clear_mock_calls(n, connector, t, m) c.dataReceived(b"inbound_prologue\n") exp_handshake = b"\x00\x00\x00\x09handshake" # however the FOLLOWER waits until receiving the leader's # handshake before sending their own self.assertEqual(n.mock_calls, []) self.assertEqual(t.mock_calls, []) self.assertEqual(connector.mock_calls, []) clear_mock_calls(n, connector, t, m) c.dataReceived(b"\x00\x00\x00\x0Ahandshake2") # we're the follower, so we send our Noise handshake, then # encrypt and send the KCM immediately self.assertEqual(n.mock_calls, [ mock.call.read_message(b"handshake2"), mock.call.write_message(), mock.call.encrypt(encode_record(t_kcm)), ]) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, [ mock.call.write(exp_handshake), mock.call.write(exp_kcm)]) self.assertEqual(c._manager, None) clear_mock_calls(n, connector, t, m) # the leader will select a connection, send the KCM, and then # immediately send some more data kcm_and_msg1 = (b"\x00\x00\x00\x03KCM" + b"\x00\x00\x00\x04msg1") c.dataReceived(kcm_and_msg1) # follower: inbound KCM means we've been selected. # in both cases we notify Connector.add_candidate(), and the Connector # decides if/when to call .select() self.assertEqual(n.mock_calls, [mock.call.decrypt(b"KCM"), mock.call.decrypt(b"msg1")]) self.assertEqual(connector.mock_calls, [mock.call.add_candidate(c)]) self.assertEqual(t.mock_calls, []) clear_mock_calls(n, connector, t, m) # now pretend this connection wins (either the Leader decides to use # this one among all the candidates, or we're the Follower and the # Connector is reacting to add_candidate() by recognizing we're the # only candidate there is) c.select(m) self.assertIdentical(c._manager, m) # follower: we already sent the KCM, do nothing self.assertEqual(n.mock_calls, []) self.assertEqual(connector.mock_calls, []) self.assertEqual(t.mock_calls, []) self.assertEqual(m.mock_calls, [mock.call.got_record(t_open)]) clear_mock_calls(n, connector, t, m) magic-wormhole-0.12.0/src/wormhole/test/dilate/test_connector.py000066400000000000000000000513621400712516500247630ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import mock from zope.interface import alsoProvides from twisted.trial import unittest from twisted.internet.task import Clock from twisted.internet.defer import Deferred from twisted.internet.address import IPv4Address, IPv6Address, HostnameAddress from ...eventual import EventualQueue from ..._interfaces import IDilationManager, IDilationConnector from ..._hints import DirectTCPV1Hint, RelayV1Hint, TorTCPV1Hint from ..._dilation import roles from ..._dilation._noise import NoiseConnection from ..._dilation.connection import KCM from ..._dilation.connector import (Connector, build_sided_relay_handshake, build_noise, describe_inbound, OutboundConnectionFactory, InboundConnectionFactory, PROLOGUE_LEADER, PROLOGUE_FOLLOWER, ) from .common import clear_mock_calls class Handshake(unittest.TestCase): def test_build(self): key = b"k"*32 side = "12345678abcdabcd" self.assertEqual(build_sided_relay_handshake(key, side), b"please relay 3f4147851dbd2589d25b654ee9fb35ed0d3e5f19c5c5403e8e6a195c70f0577a for side 12345678abcdabcd\n") class 
Outbound(unittest.TestCase): def test_no_relay(self): c = mock.Mock() alsoProvides(c, IDilationConnector) p0 = mock.Mock() c.build_protocol = mock.Mock(return_value=p0) relay_handshake = None f = OutboundConnectionFactory(c, relay_handshake, "desc") addr = object() p = f.buildProtocol(addr) self.assertIdentical(p, p0) self.assertEqual(c.mock_calls, [mock.call.build_protocol(addr, "desc")]) self.assertEqual(p.mock_calls, []) self.assertIdentical(p.factory, f) def test_with_relay(self): c = mock.Mock() alsoProvides(c, IDilationConnector) p0 = mock.Mock() c.build_protocol = mock.Mock(return_value=p0) relay_handshake = b"relay handshake" f = OutboundConnectionFactory(c, relay_handshake, "desc") addr = object() p = f.buildProtocol(addr) self.assertIdentical(p, p0) self.assertEqual(c.mock_calls, [mock.call.build_protocol(addr, "desc")]) self.assertEqual(p.mock_calls, [mock.call.use_relay(relay_handshake)]) self.assertIdentical(p.factory, f) class Inbound(unittest.TestCase): def test_build(self): c = mock.Mock() alsoProvides(c, IDilationConnector) p0 = mock.Mock() c.build_protocol = mock.Mock(return_value=p0) f = InboundConnectionFactory(c) addr = IPv4Address("TCP", "1.2.3.4", 55) p = f.buildProtocol(addr) self.assertIdentical(p, p0) self.assertEqual(c.mock_calls, [mock.call.build_protocol(addr, "<-tcp:1.2.3.4:55")]) self.assertIdentical(p.factory, f) def make_connector(listen=True, tor=False, relay=None, role=roles.LEADER): class Holder: pass h = Holder() h.dilation_key = b"key" h.relay = relay h.manager = mock.Mock() alsoProvides(h.manager, IDilationManager) h.clock = Clock() h.reactor = h.clock h.eq = EventualQueue(h.clock) h.tor = None if tor: h.tor = mock.Mock() timing = None h.side = u"abcd1234abcd5678" h.role = role c = Connector(h.dilation_key, h.relay, h.manager, h.reactor, h.eq, not listen, h.tor, timing, h.side, h.role) return c, h class TestConnector(unittest.TestCase): def test_build(self): c, h = make_connector() c, h = make_connector(relay="tcp:host:1234") def test_connection_abilities(self): self.assertEqual(Connector.get_connection_abilities(), [{"type": "direct-tcp-v1"}, {"type": "relay-v1"}, ]) def test_build_noise(self): if not NoiseConnection: raise unittest.SkipTest("noiseprotocol unavailable") build_noise() def test_build_protocol_leader(self): c, h = make_connector(role=roles.LEADER) n0 = mock.Mock() p0 = mock.Mock() addr = object() with mock.patch("wormhole._dilation.connector.build_noise", return_value=n0) as bn: with mock.patch("wormhole._dilation.connector.DilatedConnectionProtocol", return_value=p0) as dcp: p = c.build_protocol(addr, "desc") self.assertEqual(bn.mock_calls, [mock.call()]) self.assertEqual(n0.mock_calls, [mock.call.set_psks(h.dilation_key), mock.call.set_as_initiator()]) self.assertIdentical(p, p0) self.assertEqual(dcp.mock_calls, [mock.call(h.eq, h.role, "desc", c, n0, PROLOGUE_LEADER, PROLOGUE_FOLLOWER)]) def test_build_protocol_follower(self): c, h = make_connector(role=roles.FOLLOWER) n0 = mock.Mock() p0 = mock.Mock() addr = object() with mock.patch("wormhole._dilation.connector.build_noise", return_value=n0) as bn: with mock.patch("wormhole._dilation.connector.DilatedConnectionProtocol", return_value=p0) as dcp: p = c.build_protocol(addr, "desc") self.assertEqual(bn.mock_calls, [mock.call()]) self.assertEqual(n0.mock_calls, [mock.call.set_psks(h.dilation_key), mock.call.set_as_responder()]) self.assertIdentical(p, p0) self.assertEqual(dcp.mock_calls, [mock.call(h.eq, h.role, "desc", c, n0, PROLOGUE_FOLLOWER, PROLOGUE_LEADER)]) def 
test_start_stop(self): c, h = make_connector(listen=False, relay=None, role=roles.LEADER) c.start() # no relays, so it publishes no hints self.assertEqual(h.manager.mock_calls, []) # and no listener, so nothing happens until we provide a hint c.stop() # we stop while we're connecting, so no connections must be stopped def test_empty(self): c, h = make_connector(listen=False, relay=None, role=roles.LEADER) c._schedule_connection = mock.Mock() c.start() # no relays, so it publishes no hints self.assertEqual(h.manager.mock_calls, []) # and no listener, so nothing happens until we provide a hint self.assertEqual(c._schedule_connection.mock_calls, []) c.stop() def test_basic(self): c, h = make_connector(listen=False, relay=None, role=roles.LEADER) c._schedule_connection = mock.Mock() c.start() # no relays, so it publishes no hints self.assertEqual(h.manager.mock_calls, []) # and no listener, so nothing happens until we provide a hint self.assertEqual(c._schedule_connection.mock_calls, []) hint = DirectTCPV1Hint("foo", 55, 0.0) c.got_hints([hint]) # received hints don't get published self.assertEqual(h.manager.mock_calls, []) # they just schedule a connection self.assertEqual(c._schedule_connection.mock_calls, [mock.call(0.0, DirectTCPV1Hint("foo", 55, 0.0), is_relay=False)]) def test_listen_addresses(self): c, h = make_connector(listen=True, role=roles.LEADER) with mock.patch("wormhole.ipaddrs.find_addresses", return_value=["127.0.0.1", "1.2.3.4", "5.6.7.8"]): self.assertEqual(c._get_listener_addresses(), ["1.2.3.4", "5.6.7.8"]) with mock.patch("wormhole.ipaddrs.find_addresses", return_value=["127.0.0.1"]): # some test hosts, including the appveyor VMs, *only* have # 127.0.0.1, and the tests will hang badly if we remove it. self.assertEqual(c._get_listener_addresses(), ["127.0.0.1"]) def test_listen(self): c, h = make_connector(listen=True, role=roles.LEADER) c._start_listener = mock.Mock() with mock.patch("wormhole.ipaddrs.find_addresses", return_value=["127.0.0.1", "1.2.3.4", "5.6.7.8"]): c.start() self.assertEqual(c._start_listener.mock_calls, [mock.call(["1.2.3.4", "5.6.7.8"])]) def test_start_listen(self): c, h = make_connector(listen=True, role=roles.LEADER) ep = mock.Mock() d = Deferred() ep.listen = mock.Mock(return_value=d) with mock.patch("wormhole._dilation.connector.serverFromString", return_value=ep) as sfs: c._start_listener(["1.2.3.4", "5.6.7.8"]) self.assertEqual(sfs.mock_calls, [mock.call(h.reactor, "tcp:0")]) lp = mock.Mock() host = mock.Mock() host.port = 66 lp.getHost = mock.Mock(return_value=host) d.callback(lp) self.assertEqual(h.manager.mock_calls, [mock.call.send_hints([{"type": "direct-tcp-v1", "hostname": "1.2.3.4", "port": 66, "priority": 0.0 }, {"type": "direct-tcp-v1", "hostname": "5.6.7.8", "port": 66, "priority": 0.0 }, ])]) def test_schedule_connection_no_relay(self): c, h = make_connector(listen=True, role=roles.LEADER) hint = DirectTCPV1Hint("foo", 55, 0.0) ep = mock.Mock() with mock.patch("wormhole._dilation.connector.endpoint_from_hint_obj", side_effect=[ep]) as efho: c._schedule_connection(0.0, hint, False) self.assertEqual(efho.mock_calls, [mock.call(hint, h.tor, h.reactor)]) self.assertEqual(ep.mock_calls, []) d = Deferred() ep.connect = mock.Mock(side_effect=[d]) # direct hints are scheduled for T+0.0 f = mock.Mock() with mock.patch("wormhole._dilation.connector.OutboundConnectionFactory", return_value=f) as ocf: h.clock.advance(1.0) self.assertEqual(ocf.mock_calls, [mock.call(c, None, "->tcp:foo:55")]) self.assertEqual(ep.connect.mock_calls, 
[mock.call(f)]) p = mock.Mock() d.callback(p) self.assertEqual(p.mock_calls, [mock.call.when_disconnected(), mock.call.when_disconnected().addCallback(c._pending_connections.discard)]) def test_schedule_connection_relay(self): c, h = make_connector(listen=True, role=roles.LEADER) hint = DirectTCPV1Hint("foo", 55, 0.0) ep = mock.Mock() with mock.patch("wormhole._dilation.connector.endpoint_from_hint_obj", side_effect=[ep]) as efho: c._schedule_connection(0.0, hint, True) self.assertEqual(efho.mock_calls, [mock.call(hint, h.tor, h.reactor)]) self.assertEqual(ep.mock_calls, []) d = Deferred() ep.connect = mock.Mock(side_effect=[d]) # direct hints are scheduled for T+0.0 f = mock.Mock() with mock.patch("wormhole._dilation.connector.OutboundConnectionFactory", return_value=f) as ocf: h.clock.advance(1.0) handshake = build_sided_relay_handshake(h.dilation_key, h.side) self.assertEqual(ocf.mock_calls, [mock.call(c, handshake, "->relay:tcp:foo:55")]) def test_listen_but_tor(self): c, h = make_connector(listen=True, tor=True, role=roles.LEADER) with mock.patch("wormhole.ipaddrs.find_addresses", return_value=["127.0.0.1", "1.2.3.4", "5.6.7.8"]) as fa: c.start() # don't even look up addresses self.assertEqual(fa.mock_calls, []) # no relays and the listener isn't ready yet, so no hints yet self.assertEqual(h.manager.mock_calls, []) def test_no_listen(self): c, h = make_connector(listen=False, tor=False, role=roles.LEADER) with mock.patch("wormhole.ipaddrs.find_addresses", return_value=["127.0.0.1", "1.2.3.4", "5.6.7.8"]) as fa: c.start() # don't even look up addresses self.assertEqual(fa.mock_calls, []) self.assertEqual(h.manager.mock_calls, []) def test_relay_delay(self): # given a direct connection and a relay, we should see the direct # connection initiated at T+0 seconds, and the relay at T+RELAY_DELAY c, h = make_connector(listen=True, relay=None, role=roles.LEADER) c._schedule_connection = mock.Mock() c._start_listener = mock.Mock() c.start() hint1 = DirectTCPV1Hint("foo", 55, 0.0) hint2 = DirectTCPV1Hint("bar", 55, 0.0) hint3 = RelayV1Hint([DirectTCPV1Hint("relay", 55, 0.0)]) c.got_hints([hint1, hint2, hint3]) self.assertEqual(c._schedule_connection.mock_calls, [mock.call(0.0, hint1, is_relay=False), mock.call(0.0, hint2, is_relay=False), mock.call(c.RELAY_DELAY, hint3.hints[0], is_relay=True), ]) def test_initial_relay(self): c, h = make_connector(listen=False, relay="tcp:foo:55", role=roles.LEADER) c._schedule_connection = mock.Mock() c.start() self.assertEqual(h.manager.mock_calls, [mock.call.send_hints([{"type": "relay-v1", "hints": [ {"type": "direct-tcp-v1", "hostname": "foo", "port": 55, "priority": 0.0 }, ], }])]) self.assertEqual(c._schedule_connection.mock_calls, [mock.call(0.0, DirectTCPV1Hint("foo", 55, 0.0), is_relay=True)]) def test_add_relay(self): c, h = make_connector(listen=False, relay=None, role=roles.LEADER) c._schedule_connection = mock.Mock() c.start() self.assertEqual(h.manager.mock_calls, []) self.assertEqual(c._schedule_connection.mock_calls, []) hint = RelayV1Hint([DirectTCPV1Hint("foo", 55, 0.0)]) c.add_relay([hint]) self.assertEqual(h.manager.mock_calls, [mock.call.send_hints([{"type": "relay-v1", "hints": [ {"type": "direct-tcp-v1", "hostname": "foo", "port": 55, "priority": 0.0 }, ], }])]) self.assertEqual(c._schedule_connection.mock_calls, [mock.call(0.0, DirectTCPV1Hint("foo", 55, 0.0), is_relay=True)]) def test_tor_no_manager(self): # tor hints should be ignored if we don't have a Tor manager to use them c, h = make_connector(listen=False, role=roles.LEADER) 
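        # make_connector() leaves h.tor as None unless tor=True, so the
        # TorTCPV1Hint delivered below must be dropped rather than
        # scheduled (contrast with test_tor_with_manager)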
c._schedule_connection = mock.Mock() c.start() hint = TorTCPV1Hint("foo", 55, 0.0) c.got_hints([hint]) self.assertEqual(h.manager.mock_calls, []) self.assertEqual(c._schedule_connection.mock_calls, []) def test_tor_with_manager(self): # tor hints should be processed if we do have a Tor manager c, h = make_connector(listen=False, tor=True, role=roles.LEADER) c._schedule_connection = mock.Mock() c.start() hint = TorTCPV1Hint("foo", 55, 0.0) c.got_hints([hint]) self.assertEqual(c._schedule_connection.mock_calls, [mock.call(0.0, hint, is_relay=False)]) def test_priorities(self): # given two hints with different priorities, we should somehow prefer # one. This is a placeholder to fill in once we implement priorities. pass class Race(unittest.TestCase): def test_one_leader(self): c, h = make_connector(listen=True, role=roles.LEADER) lp = mock.Mock() def start_listener(addresses): c._listeners.add(lp) c._start_listener = start_listener c._schedule_connection = mock.Mock() c.start() self.assertEqual(c._listeners, set([lp])) p1 = mock.Mock() # DilatedConnectionProtocol instance c.add_candidate(p1) self.assertEqual(h.manager.mock_calls, []) h.eq.flush_sync() self.assertEqual(h.manager.mock_calls, [mock.call.connector_connection_made(p1)]) self.assertEqual(p1.mock_calls, [mock.call.select(h.manager), mock.call.send_record(KCM())]) self.assertEqual(lp.mock_calls[0], mock.call.stopListening()) # stop_listeners() uses a DeferredList, so we ignore the second call def test_one_follower(self): c, h = make_connector(listen=True, role=roles.FOLLOWER) lp = mock.Mock() def start_listener(addresses): c._listeners.add(lp) c._start_listener = start_listener c._schedule_connection = mock.Mock() c.start() self.assertEqual(c._listeners, set([lp])) p1 = mock.Mock() # DilatedConnectionProtocol instance c.add_candidate(p1) self.assertEqual(h.manager.mock_calls, []) h.eq.flush_sync() self.assertEqual(h.manager.mock_calls, [mock.call.connector_connection_made(p1)]) # just like LEADER, but follower doesn't send KCM now (it sent one # earlier, to tell the leader that this connection looks viable) self.assertEqual(p1.mock_calls, [mock.call.select(h.manager)]) self.assertEqual(lp.mock_calls[0], mock.call.stopListening()) # stop_listeners() uses a DeferredList, so we ignore the second call # TODO: make sure a pending connection is abandoned when the listener # answers successfully # TODO: make sure a second pending connection is abandoned when the first # connection succeeds def test_late(self): c, h = make_connector(listen=False, role=roles.LEADER) c._schedule_connection = mock.Mock() c.start() p1 = mock.Mock() # DilatedConnectionProtocol instance c.add_candidate(p1) self.assertEqual(h.manager.mock_calls, []) h.eq.flush_sync() self.assertEqual(h.manager.mock_calls, [mock.call.connector_connection_made(p1)]) clear_mock_calls(h.manager) self.assertEqual(p1.mock_calls, [mock.call.select(h.manager), mock.call.send_record(KCM())]) # late connection is ignored p2 = mock.Mock() c.add_candidate(p2) self.assertEqual(h.manager.mock_calls, []) # make sure an established connection is dropped when stop() is called def test_stop(self): c, h = make_connector(listen=False, role=roles.LEADER) c._schedule_connection = mock.Mock() c.start() p1 = mock.Mock() # DilatedConnectionProtocol instance c.add_candidate(p1) self.assertEqual(h.manager.mock_calls, []) h.eq.flush_sync() self.assertEqual(p1.mock_calls, [mock.call.select(h.manager), mock.call.send_record(KCM())]) self.assertEqual(h.manager.mock_calls, 
[mock.call.connector_connection_made(p1)]) c.stop() class Describe(unittest.TestCase): def test_describe_inbound(self): self.assertEqual(describe_inbound(HostnameAddress("example.com", 1234)), "<-tcp:example.com:1234") self.assertEqual(describe_inbound(IPv4Address("TCP", "1.2.3.4", 1234)), "<-tcp:1.2.3.4:1234") self.assertEqual(describe_inbound(IPv6Address("TCP", "::1", 1234)), "<-tcp:[::1]:1234") other = "none-of-the-above" self.assertEqual(describe_inbound(other), "<-%r" % other) magic-wormhole-0.12.0/src/wormhole/test/dilate/test_encoding.py000066400000000000000000000017121400712516500245510ustar00rootroot00000000000000from __future__ import print_function, unicode_literals from twisted.trial import unittest from ..._dilation.encode import to_be4, from_be4 class Encoding(unittest.TestCase): def test_be4(self): self.assertEqual(to_be4(0), b"\x00\x00\x00\x00") self.assertEqual(to_be4(1), b"\x00\x00\x00\x01") self.assertEqual(to_be4(256), b"\x00\x00\x01\x00") self.assertEqual(to_be4(257), b"\x00\x00\x01\x01") with self.assertRaises(ValueError): to_be4(-1) with self.assertRaises(ValueError): to_be4(2**32) self.assertEqual(from_be4(b"\x00\x00\x00\x00"), 0) self.assertEqual(from_be4(b"\x00\x00\x00\x01"), 1) self.assertEqual(from_be4(b"\x00\x00\x01\x00"), 256) self.assertEqual(from_be4(b"\x00\x00\x01\x01"), 257) with self.assertRaises(TypeError): from_be4(0) with self.assertRaises(ValueError): from_be4(b"\x01\x00\x00\x00\x00") magic-wormhole-0.12.0/src/wormhole/test/dilate/test_endpoints.py000066400000000000000000000320421400712516500247660ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import mock from zope.interface import alsoProvides from twisted.trial import unittest from twisted.internet.task import Clock from twisted.python.failure import Failure from ..._interfaces import ISubChannel from ...eventual import EventualQueue from ..._dilation.subchannel import (ControlEndpoint, SubchannelConnectorEndpoint, SubchannelListenerEndpoint, SubchannelListeningPort, _WormholeAddress, _SubchannelAddress, SingleUseEndpointError) from .common import mock_manager class CannotDilateError(Exception): pass class Control(unittest.TestCase): def test_early_succeed(self): # ep.connect() is called before dilation can proceed scid0 = 0 peeraddr = _SubchannelAddress(scid0) sc0 = mock.Mock() alsoProvides(sc0, ISubChannel) eq = EventualQueue(Clock()) ep = ControlEndpoint(peeraddr, sc0, eq) f = mock.Mock() p = mock.Mock() f.buildProtocol = mock.Mock(return_value=p) d = ep.connect(f) self.assertNoResult(d) ep._main_channel_ready() eq.flush_sync() self.assertIdentical(self.successResultOf(d), p) self.assertEqual(f.buildProtocol.mock_calls, [mock.call(peeraddr)]) self.assertEqual(sc0.mock_calls, [mock.call._set_protocol(p), mock.call._deliver_queued_data()]) self.assertEqual(p.mock_calls, [mock.call.makeConnection(sc0)]) d = ep.connect(f) self.failureResultOf(d, SingleUseEndpointError) def test_early_fail(self): # ep.connect() is called before dilation is abandoned scid0 = 0 peeraddr = _SubchannelAddress(scid0) sc0 = mock.Mock() alsoProvides(sc0, ISubChannel) eq = EventualQueue(Clock()) ep = ControlEndpoint(peeraddr, sc0, eq) f = mock.Mock() p = mock.Mock() f.buildProtocol = mock.Mock(return_value=p) d = ep.connect(f) self.assertNoResult(d) ep._main_channel_failed(Failure(CannotDilateError())) eq.flush_sync() self.failureResultOf(d).check(CannotDilateError) self.assertEqual(f.buildProtocol.mock_calls, []) self.assertEqual(sc0.mock_calls, []) d = ep.connect(f) 
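        # even after the main channel has failed, the ControlEndpoint stays
        # single-use: a second connect() must fail with
        # SingleUseEndpointError instead of being retried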
self.failureResultOf(d, SingleUseEndpointError) def test_late_succeed(self): # dilation can proceed, then ep.connect() is called scid0 = 0 peeraddr = _SubchannelAddress(scid0) sc0 = mock.Mock() alsoProvides(sc0, ISubChannel) eq = EventualQueue(Clock()) ep = ControlEndpoint(peeraddr, sc0, eq) ep._main_channel_ready() f = mock.Mock() p = mock.Mock() f.buildProtocol = mock.Mock(return_value=p) d = ep.connect(f) eq.flush_sync() self.assertIdentical(self.successResultOf(d), p) self.assertEqual(f.buildProtocol.mock_calls, [mock.call(peeraddr)]) self.assertEqual(sc0.mock_calls, [mock.call._set_protocol(p), mock.call._deliver_queued_data()]) self.assertEqual(p.mock_calls, [mock.call.makeConnection(sc0)]) d = ep.connect(f) self.failureResultOf(d, SingleUseEndpointError) def test_late_fail(self): # dilation is abandoned, then ep.connect() is called scid0 = 0 peeraddr = _SubchannelAddress(scid0) sc0 = mock.Mock() alsoProvides(sc0, ISubChannel) eq = EventualQueue(Clock()) ep = ControlEndpoint(peeraddr, sc0, eq) ep._main_channel_failed(Failure(CannotDilateError())) f = mock.Mock() p = mock.Mock() f.buildProtocol = mock.Mock(return_value=p) d = ep.connect(f) eq.flush_sync() self.failureResultOf(d).check(CannotDilateError) self.assertEqual(f.buildProtocol.mock_calls, []) self.assertEqual(sc0.mock_calls, []) d = ep.connect(f) self.failureResultOf(d, SingleUseEndpointError) class Endpoints(unittest.TestCase): def OFFassert_makeConnection(self, mock_calls): self.assertEqual(len(mock_calls), 1) self.assertEqual(mock_calls[0][0], "makeConnection") self.assertEqual(len(mock_calls[0][1]), 1) return mock_calls[0][1][0] class Connector(unittest.TestCase): def test_early_succeed(self): m = mock_manager() m.allocate_subchannel_id = mock.Mock(return_value=0) hostaddr = _WormholeAddress() peeraddr = _SubchannelAddress(0) eq = EventualQueue(Clock()) ep = SubchannelConnectorEndpoint(m, hostaddr, eq) f = mock.Mock() p = mock.Mock() t = mock.Mock() f.buildProtocol = mock.Mock(return_value=p) with mock.patch("wormhole._dilation.subchannel.SubChannel", return_value=t) as sc: d = ep.connect(f) eq.flush_sync() self.assertNoResult(d) ep._main_channel_ready() eq.flush_sync() self.assertIdentical(self.successResultOf(d), p) self.assertEqual(f.buildProtocol.mock_calls, [mock.call(peeraddr)]) self.assertEqual(sc.mock_calls, [mock.call(0, m, hostaddr, peeraddr)]) self.assertEqual(t.mock_calls, [mock.call._set_protocol(p)]) self.assertEqual(p.mock_calls, [mock.call.makeConnection(t)]) def test_early_fail(self): m = mock_manager() m.allocate_subchannel_id = mock.Mock(return_value=0) hostaddr = _WormholeAddress() eq = EventualQueue(Clock()) ep = SubchannelConnectorEndpoint(m, hostaddr, eq) f = mock.Mock() p = mock.Mock() t = mock.Mock() f.buildProtocol = mock.Mock(return_value=p) with mock.patch("wormhole._dilation.subchannel.SubChannel", return_value=t) as sc: d = ep.connect(f) eq.flush_sync() self.assertNoResult(d) ep._main_channel_failed(Failure(CannotDilateError())) eq.flush_sync() self.failureResultOf(d).check(CannotDilateError) self.assertEqual(f.buildProtocol.mock_calls, []) self.assertEqual(sc.mock_calls, []) self.assertEqual(t.mock_calls, []) def test_late_succeed(self): m = mock_manager() m.allocate_subchannel_id = mock.Mock(return_value=0) hostaddr = _WormholeAddress() peeraddr = _SubchannelAddress(0) eq = EventualQueue(Clock()) ep = SubchannelConnectorEndpoint(m, hostaddr, eq) ep._main_channel_ready() f = mock.Mock() p = mock.Mock() t = mock.Mock() f.buildProtocol = mock.Mock(return_value=p) with 
mock.patch("wormhole._dilation.subchannel.SubChannel", return_value=t) as sc: d = ep.connect(f) eq.flush_sync() self.assertIdentical(self.successResultOf(d), p) self.assertEqual(f.buildProtocol.mock_calls, [mock.call(peeraddr)]) self.assertEqual(sc.mock_calls, [mock.call(0, m, hostaddr, peeraddr)]) self.assertEqual(t.mock_calls, [mock.call._set_protocol(p)]) self.assertEqual(p.mock_calls, [mock.call.makeConnection(t)]) def test_late_fail(self): m = mock_manager() m.allocate_subchannel_id = mock.Mock(return_value=0) hostaddr = _WormholeAddress() eq = EventualQueue(Clock()) ep = SubchannelConnectorEndpoint(m, hostaddr, eq) ep._main_channel_failed(Failure(CannotDilateError())) f = mock.Mock() p = mock.Mock() t = mock.Mock() f.buildProtocol = mock.Mock(return_value=p) with mock.patch("wormhole._dilation.subchannel.SubChannel", return_value=t) as sc: d = ep.connect(f) eq.flush_sync() self.failureResultOf(d).check(CannotDilateError) self.assertEqual(f.buildProtocol.mock_calls, []) self.assertEqual(sc.mock_calls, []) self.assertEqual(t.mock_calls, []) class Listener(unittest.TestCase): def test_early_succeed(self): # listen, main_channel_ready, got_open, got_open m = mock_manager() m.allocate_subchannel_id = mock.Mock(return_value=0) hostaddr = _WormholeAddress() eq = EventualQueue(Clock()) ep = SubchannelListenerEndpoint(m, hostaddr, eq) f = mock.Mock() p1 = mock.Mock() p2 = mock.Mock() f.buildProtocol = mock.Mock(side_effect=[p1, p2]) d = ep.listen(f) eq.flush_sync() self.assertNoResult(d) self.assertEqual(f.buildProtocol.mock_calls, []) ep._main_channel_ready() eq.flush_sync() lp = self.successResultOf(d) self.assertIsInstance(lp, SubchannelListeningPort) self.assertEqual(lp.getHost(), hostaddr) # TODO: IListeningPort says we must provide this, but I don't know # that anyone would ever call it. lp.startListening() t1 = mock.Mock() peeraddr1 = _SubchannelAddress(1) ep._got_open(t1, peeraddr1) self.assertEqual(t1.mock_calls, [mock.call._set_protocol(p1), mock.call._deliver_queued_data()]) self.assertEqual(p1.mock_calls, [mock.call.makeConnection(t1)]) self.assertEqual(f.buildProtocol.mock_calls, [mock.call(peeraddr1)]) t2 = mock.Mock() peeraddr2 = _SubchannelAddress(2) ep._got_open(t2, peeraddr2) self.assertEqual(t2.mock_calls, [mock.call._set_protocol(p2), mock.call._deliver_queued_data()]) self.assertEqual(p2.mock_calls, [mock.call.makeConnection(t2)]) self.assertEqual(f.buildProtocol.mock_calls, [mock.call(peeraddr1), mock.call(peeraddr2)]) lp.stopListening() # TODO: should this do more? 
def test_early_fail(self): # listen, main_channel_fail m = mock_manager() m.allocate_subchannel_id = mock.Mock(return_value=0) hostaddr = _WormholeAddress() eq = EventualQueue(Clock()) ep = SubchannelListenerEndpoint(m, hostaddr, eq) f = mock.Mock() p1 = mock.Mock() p2 = mock.Mock() f.buildProtocol = mock.Mock(side_effect=[p1, p2]) d = ep.listen(f) eq.flush_sync() self.assertNoResult(d) ep._main_channel_failed(Failure(CannotDilateError())) eq.flush_sync() self.failureResultOf(d).check(CannotDilateError) self.assertEqual(f.buildProtocol.mock_calls, []) def test_late_succeed(self): # main_channel_ready, got_open, listen, got_open m = mock_manager() m.allocate_subchannel_id = mock.Mock(return_value=0) hostaddr = _WormholeAddress() eq = EventualQueue(Clock()) ep = SubchannelListenerEndpoint(m, hostaddr, eq) ep._main_channel_ready() f = mock.Mock() p1 = mock.Mock() p2 = mock.Mock() f.buildProtocol = mock.Mock(side_effect=[p1, p2]) t1 = mock.Mock() peeraddr1 = _SubchannelAddress(1) ep._got_open(t1, peeraddr1) eq.flush_sync() self.assertEqual(t1.mock_calls, []) self.assertEqual(p1.mock_calls, []) d = ep.listen(f) eq.flush_sync() lp = self.successResultOf(d) self.assertIsInstance(lp, SubchannelListeningPort) self.assertEqual(lp.getHost(), hostaddr) lp.startListening() # TODO: assert makeConnection is called *before* _deliver_queued_data self.assertEqual(t1.mock_calls, [mock.call._set_protocol(p1), mock.call._deliver_queued_data()]) self.assertEqual(p1.mock_calls, [mock.call.makeConnection(t1)]) self.assertEqual(f.buildProtocol.mock_calls, [mock.call(peeraddr1)]) t2 = mock.Mock() peeraddr2 = _SubchannelAddress(2) ep._got_open(t2, peeraddr2) self.assertEqual(t2.mock_calls, [mock.call._set_protocol(p2), mock.call._deliver_queued_data()]) self.assertEqual(p2.mock_calls, [mock.call.makeConnection(t2)]) self.assertEqual(f.buildProtocol.mock_calls, [mock.call(peeraddr1), mock.call(peeraddr2)]) lp.stopListening() # TODO: should this do more? 
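    # ordering contract exercised above: an OPEN that arrives before
    # listen() is queued, and at listen() time the factory's protocol gets
    # makeConnection() before _deliver_queued_data() replays the buffered
    # records (the TODO above asks for an explicit assertion of that order)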
def test_late_fail(self): # main_channel_fail, listen m = mock_manager() m.allocate_subchannel_id = mock.Mock(return_value=0) hostaddr = _WormholeAddress() eq = EventualQueue(Clock()) ep = SubchannelListenerEndpoint(m, hostaddr, eq) ep._main_channel_failed(Failure(CannotDilateError())) f = mock.Mock() p1 = mock.Mock() p2 = mock.Mock() f.buildProtocol = mock.Mock(side_effect=[p1, p2]) d = ep.listen(f) eq.flush_sync() self.failureResultOf(d).check(CannotDilateError) self.assertEqual(f.buildProtocol.mock_calls, []) magic-wormhole-0.12.0/src/wormhole/test/dilate/test_framer.py000066400000000000000000000103501400712516500242350ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import mock from zope.interface import alsoProvides from twisted.trial import unittest from twisted.internet.interfaces import ITransport from ..._dilation.connection import _Framer, Frame, Prologue, Disconnect def make_framer(): t = mock.Mock() alsoProvides(t, ITransport) f = _Framer(t, b"outbound_prologue\n", b"inbound_prologue\n") return f, t class Framer(unittest.TestCase): def test_bad_prologue_length(self): f, t = make_framer() self.assertEqual(t.mock_calls, []) f.connectionMade() self.assertEqual(t.mock_calls, [mock.call.write(b"outbound_prologue\n")]) t.mock_calls[:] = [] self.assertEqual([], list(f.add_and_parse(b"inbound_"))) # wait for it self.assertEqual(t.mock_calls, []) with mock.patch("wormhole._dilation.connection.log.msg") as m: with self.assertRaises(Disconnect): list(f.add_and_parse(b"not the prologue after all")) self.assertEqual(m.mock_calls, [mock.call("bad prologue: {}".format( b"inbound_not the p"))]) self.assertEqual(t.mock_calls, []) def test_bad_prologue_newline(self): f, t = make_framer() self.assertEqual(t.mock_calls, []) f.connectionMade() self.assertEqual(t.mock_calls, [mock.call.write(b"outbound_prologue\n")]) t.mock_calls[:] = [] self.assertEqual([], list(f.add_and_parse(b"inbound_"))) # wait for it self.assertEqual([], list(f.add_and_parse(b"not"))) with mock.patch("wormhole._dilation.connection.log.msg") as m: with self.assertRaises(Disconnect): list(f.add_and_parse(b"\n")) self.assertEqual(m.mock_calls, [mock.call("bad prologue: {}".format( b"inbound_not\n"))]) self.assertEqual(t.mock_calls, []) def test_good_prologue(self): f, t = make_framer() self.assertEqual(t.mock_calls, []) f.connectionMade() self.assertEqual(t.mock_calls, [mock.call.write(b"outbound_prologue\n")]) t.mock_calls[:] = [] self.assertEqual([Prologue()], list(f.add_and_parse(b"inbound_prologue\n"))) self.assertEqual(t.mock_calls, []) # now send_frame should work f.send_frame(b"frame") self.assertEqual(t.mock_calls, [mock.call.write(b"\x00\x00\x00\x05frame")]) def test_bad_relay(self): f, t = make_framer() self.assertEqual(t.mock_calls, []) f.use_relay(b"relay handshake\n") f.connectionMade() self.assertEqual(t.mock_calls, [mock.call.write(b"relay handshake\n")]) t.mock_calls[:] = [] with mock.patch("wormhole._dilation.connection.log.msg") as m: with self.assertRaises(Disconnect): list(f.add_and_parse(b"goodbye\n")) self.assertEqual(m.mock_calls, [mock.call("bad relay_ok: {}".format(b"goo"))]) self.assertEqual(t.mock_calls, []) def test_good_relay(self): f, t = make_framer() self.assertEqual(t.mock_calls, []) f.use_relay(b"relay handshake\n") self.assertEqual(t.mock_calls, []) f.connectionMade() self.assertEqual(t.mock_calls, [mock.call.write(b"relay handshake\n")]) t.mock_calls[:] = [] self.assertEqual([], list(f.add_and_parse(b"ok\n"))) self.assertEqual(t.mock_calls, 
[mock.call.write(b"outbound_prologue\n")]) def test_frame(self): f, t = make_framer() self.assertEqual(t.mock_calls, []) f.connectionMade() self.assertEqual(t.mock_calls, [mock.call.write(b"outbound_prologue\n")]) t.mock_calls[:] = [] self.assertEqual([Prologue()], list(f.add_and_parse(b"inbound_prologue\n"))) self.assertEqual(t.mock_calls, []) encoded_frame = b"\x00\x00\x00\x05frame" self.assertEqual([], list(f.add_and_parse(encoded_frame[:2]))) self.assertEqual([], list(f.add_and_parse(encoded_frame[2:6]))) self.assertEqual([Frame(frame=b"frame")], list(f.add_and_parse(encoded_frame[6:]))) magic-wormhole-0.12.0/src/wormhole/test/dilate/test_full.py000066400000000000000000000273371400712516500237400ustar00rootroot00000000000000from __future__ import print_function, absolute_import, unicode_literals import wormhole from twisted.internet import reactor from twisted.internet.defer import Deferred, inlineCallbacks, gatherResults from twisted.internet.protocol import Protocol, Factory from twisted.trial import unittest from ..common import ServerBase, poll_until from ..._interfaces import IDilationConnector from ...eventual import EventualQueue from ..._dilation._noise import NoiseConnection APPID = u"lothar.com/dilate-test" def doBoth(d1, d2): return gatherResults([d1, d2], True) class L(Protocol): def connectionMade(self): print("got connection") self.transport.write(b"hello\n") def dataReceived(self, data): print("dataReceived: {}".format(data)) self.factory.d.callback(data) def connectionLost(self, why): print("connectionLost") class Full(ServerBase, unittest.TestCase): @inlineCallbacks def setUp(self): if not NoiseConnection: raise unittest.SkipTest("noiseprotocol unavailable") # test_welcome wants to see [current_cli_version] yield self._setup_relay(None) @inlineCallbacks def test_control(self): eq = EventualQueue(reactor) w1 = wormhole.create(APPID, self.relayurl, reactor, _enable_dilate=True) w2 = wormhole.create(APPID, self.relayurl, reactor, _enable_dilate=True) w1.allocate_code() code = yield w1.get_code() print("code is: {}".format(code)) w2.set_code(code) yield doBoth(w1.get_verifier(), w2.get_verifier()) print("connected") eps1 = w1.dilate() eps2 = w2.dilate() print("w.dilate ready") f1 = Factory() f1.protocol = L f1.d = Deferred() f1.d.addCallback(lambda data: eq.fire_eventually(data)) d1 = eps1.control.connect(f1) f2 = Factory() f2.protocol = L f2.d = Deferred() f2.d.addCallback(lambda data: eq.fire_eventually(data)) d2 = eps2.control.connect(f2) yield d1 yield d2 print("control endpoints connected") # note: I'm making some horrible assumptions about one-to-one writes # and reads across a TCP stack that isn't obligated to maintain such # a relationship, but it's much easier than doing this properly. 
If # the tests ever start failing, do the extra work, probably by # using a twisted.protocols.basic.LineOnlyReceiver data1 = yield f1.d data2 = yield f2.d self.assertEqual(data1, b"hello\n") self.assertEqual(data2, b"hello\n") yield w1.close() yield w2.close() test_control.timeout = 30 class ReconP(Protocol): def eventually(self, which, data): d = self.factory.deferreds[which] self.factory.eq.fire_eventually(data).addCallback(d.callback) def connectionMade(self): self.eventually("connectionMade", self) #self.transport.write(b"hello\n") def dataReceived(self, data): self.eventually("dataReceived", data) def connectionLost(self, why): self.eventually("connectionLost", (self, why)) class ReconF(Factory): protocol = ReconP def __init__(self, eq): Factory.__init__(self) self.eq = eq self.deferreds = {} for name in ["connectionMade", "dataReceived", "connectionLost"]: self.deferreds[name] = Deferred() def resetDeferred(self, name): d = Deferred() self.deferreds[name] = d return d class Reconnect(ServerBase, unittest.TestCase): @inlineCallbacks def setUp(self): if not NoiseConnection: raise unittest.SkipTest("noiseprotocol unavailable") # test_welcome wants to see [current_cli_version] yield self._setup_relay(None) @inlineCallbacks def test_reconnect(self): eq = EventualQueue(reactor) w1 = wormhole.create(APPID, self.relayurl, reactor, _enable_dilate=True) w2 = wormhole.create(APPID, self.relayurl, reactor, _enable_dilate=True) w1.allocate_code() code = yield w1.get_code() w2.set_code(code) yield doBoth(w1.get_verifier(), w2.get_verifier()) eps1 = w1.dilate() eps2 = w2.dilate() print("w.dilate ready") f1 = ReconF(eq); f2 = ReconF(eq) d1 = eps1.control.connect(f1); d2 = eps2.control.connect(f2) yield d1 yield d2 protocols = {} def p_connected(p, index): protocols[index] = p msg = "hello from %s\n" % index p.transport.write(msg.encode("ascii")) f1.deferreds["connectionMade"].addCallback(p_connected, 1) f2.deferreds["connectionMade"].addCallback(p_connected, 2) data1 = yield f1.deferreds["dataReceived"] data2 = yield f2.deferreds["dataReceived"] self.assertEqual(data1, b"hello from 2\n") self.assertEqual(data2, b"hello from 1\n") # the ACKs are now in flight and may not arrive before we kill the # connection f1.resetDeferred("connectionMade") f2.resetDeferred("connectionMade") d1 = f1.resetDeferred("dataReceived") d2 = f2.resetDeferred("dataReceived") # now we reach inside and drop the connection sc = protocols[1].transport orig_connection = sc._manager._connection orig_connection.disconnect() # stall until the connection has been replaced yield poll_until(lambda: sc._manager._connection and (orig_connection != sc._manager._connection)) # now write some more data, which should travel over the new # connection protocols[1].transport.write(b"more\n") data2 = yield d2 self.assertEqual(data2, b"more\n") replacement_connection = sc._manager._connection self.assertNotEqual(orig_connection, replacement_connection) # the application-visible Protocol should not observe the # interruption self.assertNoResult(f1.deferreds["connectionMade"]) self.assertNoResult(f2.deferreds["connectionMade"]) self.assertNoResult(f1.deferreds["connectionLost"]) self.assertNoResult(f2.deferreds["connectionLost"]) yield w1.close() yield w2.close() @inlineCallbacks def test_data_while_offline(self): eq = EventualQueue(reactor) w1 = wormhole.create(APPID, self.relayurl, reactor, _enable_dilate=True) w2 = wormhole.create(APPID, self.relayurl, reactor, _enable_dilate=True) w1.allocate_code() code = yield w1.get_code() 
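        # in a real exchange the code travels out of band; here we hand it
        # straight to the second wormhole and wait for both verifiers so the
        # PAKE handshake is finished before dilating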
w2.set_code(code) yield doBoth(w1.get_verifier(), w2.get_verifier()) eps1 = w1.dilate() eps2 = w2.dilate() print("w.dilate ready") f1 = ReconF(eq); f2 = ReconF(eq) d1 = eps1.control.connect(f1); d2 = eps2.control.connect(f2) yield d1 yield d2 protocols = {} def p_connected(p, index): protocols[index] = p msg = "hello from %s\n" % index p.transport.write(msg.encode("ascii")) f1.deferreds["connectionMade"].addCallback(p_connected, 1) f2.deferreds["connectionMade"].addCallback(p_connected, 2) data1 = yield f1.deferreds["dataReceived"] data2 = yield f2.deferreds["dataReceived"] self.assertEqual(data1, b"hello from 2\n") self.assertEqual(data2, b"hello from 1\n") # the ACKs are now in flight and may not arrive before we kill the # connection f1.resetDeferred("connectionMade") f2.resetDeferred("connectionMade") d1 = f1.resetDeferred("dataReceived") d2 = f2.resetDeferred("dataReceived") # switch off connections assert w1._boss._D._manager._debug_stall_connector == False cd1 = Deferred(); cd2 = Deferred() w1._boss._D._manager._debug_stall_connector = cd1.callback w2._boss._D._manager._debug_stall_connector = cd2.callback # now we reach inside and drop the connection sc = protocols[1].transport orig_connection = sc._manager._connection orig_connection.disconnect() c1 = yield cd1 c2 = yield cd2 assert IDilationConnector.providedBy(c1) assert IDilationConnector.providedBy(c2) assert c1 is not orig_connection w1._boss._D._manager._debug_stall_connector = False w2._boss._D._manager._debug_stall_connector = False # now write some data while the connection is definitely offline protocols[1].transport.write(b"more 1->2\n") protocols[2].transport.write(b"more 2->1\n") # allow the connections to proceed c1.start() c2.start() # and wait for the data to arrive data2 = yield d2 self.assertEqual(data2, b"more 1->2\n") data1 = yield d1 self.assertEqual(data1, b"more 2->1\n") # the application-visible Protocol should not observe the # interruption self.assertNoResult(f1.deferreds["connectionMade"]) self.assertNoResult(f2.deferreds["connectionMade"]) self.assertNoResult(f1.deferreds["connectionLost"]) self.assertNoResult(f2.deferreds["connectionLost"]) yield w1.close() yield w2.close() class Endpoints(ServerBase, unittest.TestCase): @inlineCallbacks def setUp(self): if not NoiseConnection: raise unittest.SkipTest("noiseprotocol unavailable") # test_welcome wants to see [current_cli_version] yield self._setup_relay(None) @inlineCallbacks def test_endpoints(self): eq = EventualQueue(reactor) w1 = wormhole.create(APPID, self.relayurl, reactor, _enable_dilate=True) w2 = wormhole.create(APPID, self.relayurl, reactor, _enable_dilate=True) w1.allocate_code() code = yield w1.get_code() w2.set_code(code) yield doBoth(w1.get_verifier(), w2.get_verifier()) eps1 = w1.dilate() eps2 = w2.dilate() print("w.dilate ready") f0 = ReconF(eq) yield eps2.listen.listen(f0) from twisted.python import log f1 = ReconF(eq) log.msg("connecting") p1_client = yield eps1.connect.connect(f1) log.msg("sending c->s") p1_client.transport.write(b"hello from p1\n") data = yield f0.deferreds["dataReceived"] self.assertEqual(data, b"hello from p1\n") p1_server = self.successResultOf(f0.deferreds["connectionMade"]) log.msg("sending s->c") p1_server.transport.write(b"hello p1\n") log.msg("waiting for client to receive") data = yield f1.deferreds["dataReceived"] self.assertEqual(data, b"hello p1\n") # open a second channel f0.resetDeferred("connectionMade") f0.resetDeferred("dataReceived") f1.resetDeferred("dataReceived") f2 = ReconF(eq) p2_client = yield 
eps1.connect.connect(f2) p2_server = yield f0.deferreds["connectionMade"] p2_server.transport.write(b"hello p2\n") data = yield f2.deferreds["dataReceived"] self.assertEqual(data, b"hello p2\n") p2_client.transport.write(b"hello from p2\n") data = yield f0.deferreds["dataReceived"] self.assertEqual(data, b"hello from p2\n") self.assertNoResult(f1.deferreds["dataReceived"]) # now close the first subchannel (p1) from the listener side p1_server.transport.loseConnection() yield f0.deferreds["connectionLost"] yield f1.deferreds["connectionLost"] f0.resetDeferred("connectionLost") # and close the second from the connector side p2_client.transport.loseConnection() yield f0.deferreds["connectionLost"] yield f2.deferreds["connectionLost"] yield w1.close() yield w2.close() magic-wormhole-0.12.0/src/wormhole/test/dilate/test_inbound.py000066400000000000000000000147051400712516500244270ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import mock from zope.interface import alsoProvides from twisted.trial import unittest from ..._interfaces import IDilationManager from ..._dilation.connection import Open, Data, Close from ..._dilation.inbound import (Inbound, DuplicateOpenError, DataForMissingSubchannelError, CloseForMissingSubchannelError) def make_inbound(): m = mock.Mock() alsoProvides(m, IDilationManager) host_addr = object() i = Inbound(m, host_addr) return i, m, host_addr class InboundTest(unittest.TestCase): def test_seqnum(self): i, m, host_addr = make_inbound() r1 = Open(scid=513, seqnum=1) r2 = Data(scid=513, seqnum=2, data=b"") r3 = Close(scid=513, seqnum=3) self.assertFalse(i.is_record_old(r1)) self.assertFalse(i.is_record_old(r2)) self.assertFalse(i.is_record_old(r3)) i.update_ack_watermark(r1.seqnum) self.assertTrue(i.is_record_old(r1)) self.assertFalse(i.is_record_old(r2)) self.assertFalse(i.is_record_old(r3)) i.update_ack_watermark(r2.seqnum) self.assertTrue(i.is_record_old(r1)) self.assertTrue(i.is_record_old(r2)) self.assertFalse(i.is_record_old(r3)) def test_open_data_close(self): i, m, host_addr = make_inbound() scid1 = b"scid" scid2 = b"scXX" c = mock.Mock() lep = mock.Mock() i.set_listener_endpoint(lep) i.use_connection(c) sc1 = mock.Mock() peer_addr = object() with mock.patch("wormhole._dilation.inbound.SubChannel", side_effect=[sc1]) as sc: with mock.patch("wormhole._dilation.inbound._SubchannelAddress", side_effect=[peer_addr]) as sca: i.handle_open(scid1) self.assertEqual(lep.mock_calls, [mock.call._got_open(sc1, peer_addr)]) self.assertEqual(sc.mock_calls, [mock.call(scid1, m, host_addr, peer_addr)]) self.assertEqual(sca.mock_calls, [mock.call(scid1)]) lep.mock_calls[:] = [] # a subsequent duplicate OPEN should be ignored with mock.patch("wormhole._dilation.inbound.SubChannel", side_effect=[sc1]) as sc: with mock.patch("wormhole._dilation.inbound._SubchannelAddress", side_effect=[peer_addr]) as sca: i.handle_open(scid1) self.assertEqual(lep.mock_calls, []) self.assertEqual(sc.mock_calls, []) self.assertEqual(sca.mock_calls, []) self.flushLoggedErrors(DuplicateOpenError) i.handle_data(scid1, b"data") self.assertEqual(sc1.mock_calls, [mock.call.remote_data(b"data")]) sc1.mock_calls[:] = [] i.handle_data(scid2, b"for non-existent subchannel") self.assertEqual(sc1.mock_calls, []) self.flushLoggedErrors(DataForMissingSubchannelError) i.handle_close(scid1) self.assertEqual(sc1.mock_calls, [mock.call.remote_close()]) sc1.mock_calls[:] = [] i.handle_close(scid2) self.assertEqual(sc1.mock_calls, []) 
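        # the stray CLOSE was logged rather than raised; flush the logged
        # error so trial does not fail the test on our behalf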
self.flushLoggedErrors(CloseForMissingSubchannelError) # after the subchannel is closed, the Manager will notify Inbound i.subchannel_closed(scid1, sc1) i.stop_using_connection() def test_control_channel(self): i, m, host_addr = make_inbound() lep = mock.Mock() i.set_listener_endpoint(lep) scid0 = b"scid" sc0 = mock.Mock() i.set_subchannel_zero(scid0, sc0) # OPEN on the control channel identifier should be ignored as a # duplicate, since the control channel is already registered sc1 = mock.Mock() peer_addr = object() with mock.patch("wormhole._dilation.inbound.SubChannel", side_effect=[sc1]) as sc: with mock.patch("wormhole._dilation.inbound._SubchannelAddress", side_effect=[peer_addr]) as sca: i.handle_open(scid0) self.assertEqual(lep.mock_calls, []) self.assertEqual(sc.mock_calls, []) self.assertEqual(sca.mock_calls, []) self.flushLoggedErrors(DuplicateOpenError) # and DATA to it should be delivered correctly i.handle_data(scid0, b"data") self.assertEqual(sc0.mock_calls, [mock.call.remote_data(b"data")]) sc0.mock_calls[:] = [] def test_pause(self): i, m, host_addr = make_inbound() c = mock.Mock() lep = mock.Mock() i.set_listener_endpoint(lep) # add two subchannels, pause one, then add a connection scid1 = b"sci1" scid2 = b"sci2" sc1 = mock.Mock() sc2 = mock.Mock() peer_addr = object() with mock.patch("wormhole._dilation.inbound.SubChannel", side_effect=[sc1, sc2]): with mock.patch("wormhole._dilation.inbound._SubchannelAddress", return_value=peer_addr): i.handle_open(scid1) i.handle_open(scid2) self.assertEqual(c.mock_calls, []) i.subchannel_pauseProducing(sc1) self.assertEqual(c.mock_calls, []) i.subchannel_resumeProducing(sc1) self.assertEqual(c.mock_calls, []) i.subchannel_pauseProducing(sc1) self.assertEqual(c.mock_calls, []) i.use_connection(c) self.assertEqual(c.mock_calls, [mock.call.pauseProducing()]) c.mock_calls[:] = [] i.subchannel_resumeProducing(sc1) self.assertEqual(c.mock_calls, [mock.call.resumeProducing()]) c.mock_calls[:] = [] # consumers aren't really supposed to do this, but tolerate it i.subchannel_resumeProducing(sc1) self.assertEqual(c.mock_calls, []) i.subchannel_pauseProducing(sc1) self.assertEqual(c.mock_calls, [mock.call.pauseProducing()]) c.mock_calls[:] = [] i.subchannel_pauseProducing(sc2) self.assertEqual(c.mock_calls, []) # was already paused # tolerate duplicate pauseProducing i.subchannel_pauseProducing(sc2) self.assertEqual(c.mock_calls, []) # stopProducing is treated like a terminal resumeProducing i.subchannel_stopProducing(sc1) self.assertEqual(c.mock_calls, []) i.subchannel_stopProducing(sc2) self.assertEqual(c.mock_calls, [mock.call.resumeProducing()]) c.mock_calls[:] = [] magic-wormhole-0.12.0/src/wormhole/test/dilate/test_manager.py000066400000000000000000000570011400712516500243770ustar00rootroot00000000000000from __future__ import print_function, unicode_literals from zope.interface import alsoProvides from twisted.trial import unittest from twisted.internet.task import Clock, Cooperator from twisted.internet.interfaces import IStreamServerEndpoint import mock from ...eventual import EventualQueue from ..._interfaces import ISend, ITerminator, ISubChannel from ...util import dict_to_bytes from ..._dilation import roles from ..._dilation.manager import (Dilator, Manager, make_side, OldPeerCannotDilateError, UnknownDilationMessageType, UnexpectedKCM, UnknownMessageType) from ..._dilation.connection import Open, Data, Close, Ack, KCM, Ping, Pong from ..._dilation.subchannel import _SubchannelAddress from .common import clear_mock_calls class 
Holder(): pass def make_dilator(): h = Holder() h.reactor = object() h.clock = Clock() h.eq = EventualQueue(h.clock) term = mock.Mock(side_effect=lambda: True) # one write per Eventual tick def term_factory(): return term h.coop = Cooperator(terminationPredicateFactory=term_factory, scheduler=h.eq.eventually) h.send = mock.Mock() alsoProvides(h.send, ISend) dil = Dilator(h.reactor, h.eq, h.coop) h.terminator = mock.Mock() alsoProvides(h.terminator, ITerminator) dil.wire(h.send, h.terminator) return dil, h class TestDilator(unittest.TestCase): # we should test the interleavings between: # * application calls w.dilate() and gets back endpoints # * wormhole gets: dilation key, VERSION, 0-n dilation messages def test_dilate_first(self): (dil, h) = make_dilator() side = object() m = mock.Mock() eps = object() m.get_endpoints = mock.Mock(return_value=eps) mm = mock.Mock(side_effect=[m]) with mock.patch("wormhole._dilation.manager.Manager", mm), \ mock.patch("wormhole._dilation.manager.make_side", return_value=side): eps1 = dil.dilate() eps2 = dil.dilate() self.assertIdentical(eps1, eps) self.assertIdentical(eps1, eps2) self.assertEqual(mm.mock_calls, [mock.call(h.send, side, None, h.reactor, h.eq, h.coop, False)]) self.assertEqual(m.mock_calls, [mock.call.get_endpoints(), mock.call.get_endpoints()]) clear_mock_calls(m) key = b"key" transit_key = object() with mock.patch("wormhole._dilation.manager.derive_key", return_value=transit_key) as dk: dil.got_key(key) self.assertEqual(dk.mock_calls, [mock.call(key, b"dilation-v1", 32)]) self.assertEqual(m.mock_calls, [mock.call.got_dilation_key(transit_key)]) clear_mock_calls(m) wv = object() dil.got_wormhole_versions(wv) self.assertEqual(m.mock_calls, [mock.call.got_wormhole_versions(wv)]) clear_mock_calls(m) dm1 = object() dm2 = object() dil.received_dilate(dm1) dil.received_dilate(dm2) self.assertEqual(m.mock_calls, [mock.call.received_dilation_message(dm1), mock.call.received_dilation_message(dm2), ]) clear_mock_calls(m) stopped_d = mock.Mock() m.when_stopped = mock.Mock(return_value=stopped_d) dil.stop() self.assertEqual(m.mock_calls, [mock.call.stop(), mock.call.when_stopped(), ]) def test_dilate_later(self): (dil, h) = make_dilator() m = mock.Mock() mm = mock.Mock(side_effect=[m]) key = b"key" transit_key = object() with mock.patch("wormhole._dilation.manager.derive_key", return_value=transit_key) as dk: dil.got_key(key) self.assertEqual(dk.mock_calls, [mock.call(key, b"dilation-v1", 32)]) wv = object() dil.got_wormhole_versions(wv) dm1 = object() dil.received_dilate(dm1) self.assertEqual(mm.mock_calls, []) with mock.patch("wormhole._dilation.manager.Manager", mm): dil.dilate() self.assertEqual(m.mock_calls, [mock.call.got_dilation_key(transit_key), mock.call.got_wormhole_versions(wv), mock.call.received_dilation_message(dm1), mock.call.get_endpoints(), ]) clear_mock_calls(m) dm2 = object() dil.received_dilate(dm2) self.assertEqual(m.mock_calls, [mock.call.received_dilation_message(dm2), ]) def test_stop_early(self): (dil, h) = make_dilator() # we stop before w.dilate(), so there is no Manager to stop dil.stop() self.assertEqual(h.terminator.mock_calls, [mock.call.stoppedD()]) def test_peer_cannot_dilate(self): (dil, h) = make_dilator() eps = dil.dilate() dil.got_key(b"\x01" * 32) dil.got_wormhole_versions({}) # missing "can-dilate" d = eps.connect.connect(None) h.eq.flush_sync() self.failureResultOf(d).check(OldPeerCannotDilateError) def test_disjoint_versions(self): (dil, h) = make_dilator() eps = dil.dilate() dil.got_key(b"\x01" * 32) 
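        # advertise a version set that shares no entry with ours ("1"); this
        # must be treated just like a peer that cannot dilate at all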
dil.got_wormhole_versions({"can-dilate": [-1]}) d = eps.connect.connect(None) h.eq.flush_sync() self.failureResultOf(d).check(OldPeerCannotDilateError) def test_transit_relay(self): (dil, h) = make_dilator() transit_relay_location = object() side = object() m = mock.Mock() mm = mock.Mock(side_effect=[m]) with mock.patch("wormhole._dilation.manager.Manager", mm), \ mock.patch("wormhole._dilation.manager.make_side", return_value=side): dil.dilate(transit_relay_location) self.assertEqual(mm.mock_calls, [mock.call(h.send, side, transit_relay_location, h.reactor, h.eq, h.coop, False)]) LEADER = "ff3456abcdef" FOLLOWER = "123456abcdef" def make_manager(leader=True): h = Holder() h.send = mock.Mock() alsoProvides(h.send, ISend) if leader: side = LEADER else: side = FOLLOWER h.key = b"\x00" * 32 h.relay = None h.reactor = object() h.clock = Clock() h.eq = EventualQueue(h.clock) term = mock.Mock(side_effect=lambda: True) # one write per Eventual tick def term_factory(): return term h.coop = Cooperator(terminationPredicateFactory=term_factory, scheduler=h.eq.eventually) h.inbound = mock.Mock() h.Inbound = mock.Mock(return_value=h.inbound) h.outbound = mock.Mock() h.Outbound = mock.Mock(return_value=h.outbound) h.sc0 = mock.Mock() alsoProvides(h.sc0, ISubChannel) h.SubChannel = mock.Mock(return_value=h.sc0) h.listen_ep = mock.Mock() alsoProvides(h.listen_ep, IStreamServerEndpoint) with mock.patch("wormhole._dilation.manager.Inbound", h.Inbound), \ mock.patch("wormhole._dilation.manager.Outbound", h.Outbound), \ mock.patch("wormhole._dilation.manager.SubChannel", h.SubChannel), \ mock.patch("wormhole._dilation.manager.SubchannelListenerEndpoint", return_value=h.listen_ep): m = Manager(h.send, side, h.relay, h.reactor, h.eq, h.coop) h.hostaddr = m._host_addr m.got_dilation_key(h.key) return m, h class TestManager(unittest.TestCase): def test_make_side(self): side = make_side() self.assertEqual(type(side), type(u"")) self.assertEqual(len(side), 2 * 8) def test_create(self): m, h = make_manager() def test_leader(self): m, h = make_manager(leader=True) self.assertEqual(h.send.mock_calls, []) self.assertEqual(h.Inbound.mock_calls, [mock.call(m, h.hostaddr)]) self.assertEqual(h.Outbound.mock_calls, [mock.call(m, h.coop)]) scid0 = 0 sc0_peer_addr = _SubchannelAddress(scid0) self.assertEqual(h.SubChannel.mock_calls, [ mock.call(scid0, m, m._host_addr, sc0_peer_addr), ]) self.assertEqual(h.inbound.mock_calls, [ mock.call.set_subchannel_zero(scid0, h.sc0), mock.call.set_listener_endpoint(h.listen_ep) ]) clear_mock_calls(h.inbound) eps = m.get_endpoints() self.assertTrue(hasattr(eps, "control")) self.assertTrue(hasattr(eps, "connect")) self.assertEqual(eps.listen, h.listen_ep) m.got_wormhole_versions({"can-dilate": ["1"]}) self.assertEqual(h.send.mock_calls, [ mock.call.send("dilate-0", dict_to_bytes({"type": "please", "side": LEADER})) ]) clear_mock_calls(h.send) # ignore early hints m.rx_HINTS({}) self.assertEqual(h.send.mock_calls, []) c = mock.Mock() connector = mock.Mock(return_value=c) with mock.patch("wormhole._dilation.manager.Connector", connector): # receiving this PLEASE triggers creation of the Connector m.rx_PLEASE({"side": FOLLOWER}) self.assertEqual(h.send.mock_calls, []) self.assertEqual(connector.mock_calls, [ mock.call(b"\x00" * 32, None, m, h.reactor, h.eq, False, # no_listen None, # tor None, # timing LEADER, roles.LEADER), ]) self.assertEqual(c.mock_calls, [mock.call.start()]) clear_mock_calls(connector, c) # now any inbound hints should get passed to our Connector with 
mock.patch("wormhole._dilation.manager.parse_hint", side_effect=["p1", None, "p3"]) as ph: m.rx_HINTS({"hints": [1, 2, 3]}) self.assertEqual(ph.mock_calls, [mock.call(1), mock.call(2), mock.call(3)]) self.assertEqual(c.mock_calls, [mock.call.got_hints(["p1", "p3"])]) clear_mock_calls(ph, c) # and we send out any (listening) hints from our Connector m.send_hints([1, 2]) self.assertEqual(h.send.mock_calls, [ mock.call.send("dilate-1", dict_to_bytes({"type": "connection-hints", "hints": [1, 2]})) ]) clear_mock_calls(h.send) # the first successful connection fires when_first_connected(), so # the endpoints can activate c1 = mock.Mock() m.connector_connection_made(c1) self.assertEqual(h.inbound.mock_calls, [mock.call.use_connection(c1)]) self.assertEqual(h.outbound.mock_calls, [mock.call.use_connection(c1)]) clear_mock_calls(h.inbound, h.outbound) # the Leader making a new outbound channel should get scid=1 scid1 = 1 self.assertEqual(m.allocate_subchannel_id(), scid1) r1 = Open(10, scid1) # seqnum=10 h.outbound.build_record = mock.Mock(return_value=r1) m.send_open(scid1) self.assertEqual(h.outbound.mock_calls, [ mock.call.build_record(Open, scid1), mock.call.queue_and_send_record(r1), ]) clear_mock_calls(h.outbound) r2 = Data(11, scid1, b"data") h.outbound.build_record = mock.Mock(return_value=r2) m.send_data(scid1, b"data") self.assertEqual(h.outbound.mock_calls, [ mock.call.build_record(Data, scid1, b"data"), mock.call.queue_and_send_record(r2), ]) clear_mock_calls(h.outbound) r3 = Close(12, scid1) h.outbound.build_record = mock.Mock(return_value=r3) m.send_close(scid1) self.assertEqual(h.outbound.mock_calls, [ mock.call.build_record(Close, scid1), mock.call.queue_and_send_record(r3), ]) clear_mock_calls(h.outbound) # ack the OPEN m.got_record(Ack(10)) self.assertEqual(h.outbound.mock_calls, [ mock.call.handle_ack(10) ]) clear_mock_calls(h.outbound) # test that inbound records get acked and routed to Inbound h.inbound.is_record_old = mock.Mock(return_value=False) scid2 = 2 o200 = Open(200, scid2) m.got_record(o200) self.assertEqual(h.outbound.mock_calls, [ mock.call.send_if_connected(Ack(200)) ]) self.assertEqual(h.inbound.mock_calls, [ mock.call.is_record_old(o200), mock.call.update_ack_watermark(200), mock.call.handle_open(scid2), ]) clear_mock_calls(h.outbound, h.inbound) # old (duplicate) records should provoke new Acks, but not get # forwarded h.inbound.is_record_old = mock.Mock(return_value=True) m.got_record(o200) self.assertEqual(h.outbound.mock_calls, [ mock.call.send_if_connected(Ack(200)) ]) self.assertEqual(h.inbound.mock_calls, [ mock.call.is_record_old(o200), ]) clear_mock_calls(h.outbound, h.inbound) # check Data and Close too h.inbound.is_record_old = mock.Mock(return_value=False) d201 = Data(201, scid2, b"data") m.got_record(d201) self.assertEqual(h.outbound.mock_calls, [ mock.call.send_if_connected(Ack(201)) ]) self.assertEqual(h.inbound.mock_calls, [ mock.call.is_record_old(d201), mock.call.update_ack_watermark(201), mock.call.handle_data(scid2, b"data"), ]) clear_mock_calls(h.outbound, h.inbound) c202 = Close(202, scid2) m.got_record(c202) self.assertEqual(h.outbound.mock_calls, [ mock.call.send_if_connected(Ack(202)) ]) self.assertEqual(h.inbound.mock_calls, [ mock.call.is_record_old(c202), mock.call.update_ack_watermark(202), mock.call.handle_close(scid2), ]) clear_mock_calls(h.outbound, h.inbound) # Now we lose the connection. The Leader should tell the other side # that we're reconnecting. 
m.connector_connection_lost() self.assertEqual(h.send.mock_calls, [ mock.call.send("dilate-2", dict_to_bytes({"type": "reconnect"})) ]) self.assertEqual(h.inbound.mock_calls, [ mock.call.stop_using_connection() ]) self.assertEqual(h.outbound.mock_calls, [ mock.call.stop_using_connection() ]) clear_mock_calls(h.send, h.inbound, h.outbound) # leader does nothing (stays in FLUSHING) until the follower acks by # sending RECONNECTING # inbound hints should be ignored during FLUSHING with mock.patch("wormhole._dilation.manager.parse_hint", return_value=None) as ph: m.rx_HINTS({"hints": [1, 2, 3]}) self.assertEqual(ph.mock_calls, []) # ignored c2 = mock.Mock() connector2 = mock.Mock(return_value=c2) with mock.patch("wormhole._dilation.manager.Connector", connector2): # this triggers creation of a new Connector m.rx_RECONNECTING() self.assertEqual(h.send.mock_calls, []) self.assertEqual(connector2.mock_calls, [ mock.call(b"\x00" * 32, None, m, h.reactor, h.eq, False, # no_listen None, # tor None, # timing LEADER, roles.LEADER), ]) self.assertEqual(c2.mock_calls, [mock.call.start()]) clear_mock_calls(connector2, c2) self.assertEqual(h.inbound.mock_calls, []) self.assertEqual(h.outbound.mock_calls, []) # and a new connection should re-register with Inbound/Outbound, # which are responsible for re-sending unacked queued messages c3 = mock.Mock() m.connector_connection_made(c3) self.assertEqual(h.inbound.mock_calls, [mock.call.use_connection(c3)]) self.assertEqual(h.outbound.mock_calls, [mock.call.use_connection(c3)]) clear_mock_calls(h.inbound, h.outbound) def test_follower(self): m, h = make_manager(leader=False) m.got_wormhole_versions({"can-dilate": ["1"]}) self.assertEqual(h.send.mock_calls, [ mock.call.send("dilate-0", dict_to_bytes({"type": "please", "side": FOLLOWER})) ]) clear_mock_calls(h.send) clear_mock_calls(h.inbound) c = mock.Mock() connector = mock.Mock(return_value=c) with mock.patch("wormhole._dilation.manager.Connector", connector): # receiving this PLEASE triggers creation of the Connector m.rx_PLEASE({"side": LEADER}) self.assertEqual(h.send.mock_calls, []) self.assertEqual(connector.mock_calls, [ mock.call(b"\x00" * 32, None, m, h.reactor, h.eq, False, # no_listen None, # tor None, # timing FOLLOWER, roles.FOLLOWER), ]) self.assertEqual(c.mock_calls, [mock.call.start()]) clear_mock_calls(connector, c) # get connected, then lose the connection c1 = mock.Mock() m.connector_connection_made(c1) self.assertEqual(h.inbound.mock_calls, [mock.call.use_connection(c1)]) self.assertEqual(h.outbound.mock_calls, [mock.call.use_connection(c1)]) clear_mock_calls(h.inbound, h.outbound) # now lose the connection. 
As the follower, we don't notify the # leader, we just wait for them to notice m.connector_connection_lost() self.assertEqual(h.send.mock_calls, []) self.assertEqual(h.inbound.mock_calls, [ mock.call.stop_using_connection() ]) self.assertEqual(h.outbound.mock_calls, [ mock.call.stop_using_connection() ]) clear_mock_calls(h.send, h.inbound, h.outbound) # now we get a RECONNECT: we should send RECONNECTING c2 = mock.Mock() connector2 = mock.Mock(return_value=c2) with mock.patch("wormhole._dilation.manager.Connector", connector2): m.rx_RECONNECT() self.assertEqual(h.send.mock_calls, [ mock.call.send("dilate-1", dict_to_bytes({"type": "reconnecting"})) ]) self.assertEqual(connector2.mock_calls, [ mock.call(b"\x00" * 32, None, m, h.reactor, h.eq, False, # no_listen None, # tor None, # timing FOLLOWER, roles.FOLLOWER), ]) self.assertEqual(c2.mock_calls, [mock.call.start()]) clear_mock_calls(connector2, c2) # while we're trying to connect, we get told to stop again, so we # should abandon the connection attempt and start another c3 = mock.Mock() connector3 = mock.Mock(return_value=c3) with mock.patch("wormhole._dilation.manager.Connector", connector3): m.rx_RECONNECT() self.assertEqual(c2.mock_calls, [mock.call.stop()]) self.assertEqual(connector3.mock_calls, [ mock.call(b"\x00" * 32, None, m, h.reactor, h.eq, False, # no_listen None, # tor None, # timing FOLLOWER, roles.FOLLOWER), ]) self.assertEqual(c3.mock_calls, [mock.call.start()]) clear_mock_calls(c2, connector3, c3) m.connector_connection_made(c3) # finally if we're already connected, rx_RECONNECT means we should # abandon this connection (even though it still looks ok to us), then # when the attempt is finished stopping, we should start another m.rx_RECONNECT() c4 = mock.Mock() connector4 = mock.Mock(return_value=c4) with mock.patch("wormhole._dilation.manager.Connector", connector4): m.connector_connection_lost() self.assertEqual(c3.mock_calls, [mock.call.disconnect()]) self.assertEqual(connector4.mock_calls, [ mock.call(b"\x00" * 32, None, m, h.reactor, h.eq, False, # no_listen None, # tor None, # timing FOLLOWER, roles.FOLLOWER), ]) self.assertEqual(c4.mock_calls, [mock.call.start()]) clear_mock_calls(c3, connector4, c4) def test_mirror(self): # receive a PLEASE with the same side as us: shouldn't happen m, h = make_manager(leader=True) m.start() clear_mock_calls(h.send) e = self.assertRaises(ValueError, m.rx_PLEASE, {"side": LEADER}) self.assertEqual(str(e), "their side shouldn't be equal: reflection?") def test_ping_pong(self): m, h = make_manager(leader=False) m.got_record(KCM()) self.flushLoggedErrors(UnexpectedKCM) m.got_record(Ping(1)) self.assertEqual(h.outbound.mock_calls, [mock.call.send_if_connected(Pong(1))]) clear_mock_calls(h.outbound) m.got_record(Pong(2)) # currently ignored, will eventually update a timer m.got_record("not recognized") e = self.flushLoggedErrors(UnknownMessageType) self.assertEqual(len(e), 1) self.assertEqual(str(e[0].value), "not recognized") m.send_ping(3) self.assertEqual(h.outbound.mock_calls, [mock.call.send_if_connected(Ping(3))]) clear_mock_calls(h.outbound) def test_subchannel(self): m, h = make_manager(leader=True) clear_mock_calls(h.inbound) sc = object() m.subchannel_pauseProducing(sc) self.assertEqual(h.inbound.mock_calls, [ mock.call.subchannel_pauseProducing(sc)]) clear_mock_calls(h.inbound) m.subchannel_resumeProducing(sc) self.assertEqual(h.inbound.mock_calls, [ mock.call.subchannel_resumeProducing(sc)]) clear_mock_calls(h.inbound) m.subchannel_stopProducing(sc)
self.assertEqual(h.inbound.mock_calls, [ mock.call.subchannel_stopProducing(sc)]) clear_mock_calls(h.inbound) p = object() streaming = object() m.subchannel_registerProducer(sc, p, streaming) self.assertEqual(h.outbound.mock_calls, [ mock.call.subchannel_registerProducer(sc, p, streaming)]) clear_mock_calls(h.outbound) m.subchannel_unregisterProducer(sc) self.assertEqual(h.outbound.mock_calls, [ mock.call.subchannel_unregisterProducer(sc)]) clear_mock_calls(h.outbound) m.subchannel_closed(4, sc) self.assertEqual(h.inbound.mock_calls, [ mock.call.subchannel_closed(4, sc)]) self.assertEqual(h.outbound.mock_calls, [ mock.call.subchannel_closed(4, sc)]) clear_mock_calls(h.inbound, h.outbound) def test_unknown_message(self): # a dilation message with an unrecognized type should be logged as an # error, not crash the Manager m, h = make_manager(leader=True) m.start() m.received_dilation_message(dict_to_bytes(dict(type="unknown"))) self.flushLoggedErrors(UnknownDilationMessageType) # TODO: test transit relay is used magic-wormhole-0.12.0/src/wormhole/test/dilate/test_outbound.py000066400000000000000000000642621400712516500246310ustar00rootroot00000000000000from __future__ import print_function, unicode_literals from collections import namedtuple from itertools import cycle import mock from zope.interface import alsoProvides from twisted.trial import unittest from twisted.internet.task import Clock, Cooperator from twisted.internet.interfaces import IPullProducer from ...eventual import EventualQueue from ..._interfaces import IDilationManager from ..._dilation.connection import KCM, Open, Data, Close, Ack from ..._dilation.outbound import Outbound, PullToPush from .common import clear_mock_calls Pauser = namedtuple("Pauser", ["seqnum"]) NonPauser = namedtuple("NonPauser", ["seqnum"]) Stopper = namedtuple("Stopper", ["sc"]) def make_outbound(): m = mock.Mock() alsoProvides(m, IDilationManager) clock = Clock() eq = EventualQueue(clock) term = mock.Mock(side_effect=lambda: True) # one write per Eventual tick def term_factory(): return term coop = Cooperator(terminationPredicateFactory=term_factory, scheduler=eq.eventually) o = Outbound(m, coop) c = mock.Mock() # Connection def maybe_pause(r): if isinstance(r, Pauser): o.pauseProducing() elif isinstance(r, Stopper): o.subchannel_unregisterProducer(r.sc) c.send_record = mock.Mock(side_effect=maybe_pause) o._test_eq = eq o._test_term = term return o, m, c class OutboundTest(unittest.TestCase): def test_build_record(self): o, m, c = make_outbound() scid1 = b"scid" self.assertEqual(o.build_record(Open, scid1), Open(seqnum=0, scid=b"scid")) self.assertEqual(o.build_record(Data, scid1, b"dataaa"), Data(seqnum=1, scid=b"scid", data=b"dataaa")) self.assertEqual(o.build_record(Close, scid1), Close(seqnum=2, scid=b"scid")) self.assertEqual(o.build_record(Close, scid1), Close(seqnum=3, scid=b"scid")) def test_outbound_queue(self): o, m, c = make_outbound() scid1 = b"scid" r1 = o.build_record(Open, scid1) r2 = o.build_record(Data, scid1, b"data1") r3 = o.build_record(Data, scid1, b"data2") o.queue_and_send_record(r1) o.queue_and_send_record(r2) o.queue_and_send_record(r3) self.assertEqual(list(o._outbound_queue), [r1, r2, r3]) # we would never normally receive an ACK without first getting a # connection o.handle_ack(r2.seqnum) self.assertEqual(list(o._outbound_queue), [r3]) o.handle_ack(r3.seqnum) self.assertEqual(list(o._outbound_queue), []) o.handle_ack(r3.seqnum) # ignored self.assertEqual(list(o._outbound_queue), []) o.handle_ack(r1.seqnum) # ignored self.assertEqual(list(o._outbound_queue), [])
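# The retirement rule exercised above: an Ack for seqnum N retires
# every queued record with seqnum <= N, so acking r2 also retired r1,
# and stale or duplicate acks were no-ops. A rough sketch of that rule
# (hypothetical, not the real Outbound code):
#
#     def handle_ack(self, resp_seqnum):
#         while (self._outbound_queue and
#                self._outbound_queue[0].seqnum <= resp_seqnum):
#             self._outbound_queue.popleft()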
def test_duplicate_registerProducer(self): o, m, c = make_outbound() sc1 = object() p1 = mock.Mock() o.subchannel_registerProducer(sc1, p1, True) with self.assertRaises(ValueError) as ar: o.subchannel_registerProducer(sc1, p1, True) s = str(ar.exception) self.assertIn("registering producer", s) self.assertIn("before previous one", s) self.assertIn("was unregistered", s) def test_connection_send_queued_unpaused(self): o, m, c = make_outbound() scid1 = b"scid" r1 = o.build_record(Open, scid1) r2 = o.build_record(Data, scid1, b"data1") r3 = o.build_record(Data, scid1, b"data2") o.queue_and_send_record(r1) o.queue_and_send_record(r2) self.assertEqual(list(o._outbound_queue), [r1, r2]) self.assertEqual(list(o._queued_unsent), []) # as soon as the connection is established, everything is sent o.use_connection(c) self.assertEqual(c.mock_calls, [mock.call.transport.registerProducer(o, True), mock.call.send_record(r1), mock.call.send_record(r2)]) self.assertEqual(list(o._outbound_queue), [r1, r2]) self.assertEqual(list(o._queued_unsent), []) clear_mock_calls(c) o.queue_and_send_record(r3) self.assertEqual(list(o._outbound_queue), [r1, r2, r3]) self.assertEqual(list(o._queued_unsent), []) self.assertEqual(c.mock_calls, [mock.call.send_record(r3)]) def test_connection_send_queued_paused(self): o, m, c = make_outbound() r1 = Pauser(seqnum=1) r2 = Pauser(seqnum=2) r3 = Pauser(seqnum=3) o.queue_and_send_record(r1) o.queue_and_send_record(r2) self.assertEqual(list(o._outbound_queue), [r1, r2]) self.assertEqual(list(o._queued_unsent), []) # pausing=True, so our mock Manager will pause the Outbound producer # after each write. So only r1 should have been sent before getting # paused o.use_connection(c) self.assertEqual(c.mock_calls, [mock.call.transport.registerProducer(o, True), mock.call.send_record(r1)]) self.assertEqual(list(o._outbound_queue), [r1, r2]) self.assertEqual(list(o._queued_unsent), [r2]) clear_mock_calls(c) # Outbound is responsible for sending all records, so when Manager # wants to send a new one, and Outbound is still in the middle of # draining the beginning-of-connection queue, the new message gets # queued behind the rest (in addition to being queued in # _outbound_queue until an ACK retires it). o.queue_and_send_record(r3) self.assertEqual(list(o._outbound_queue), [r1, r2, r3]) self.assertEqual(list(o._queued_unsent), [r2, r3]) self.assertEqual(c.mock_calls, []) o.handle_ack(r1.seqnum) self.assertEqual(list(o._outbound_queue), [r2, r3]) self.assertEqual(list(o._queued_unsent), [r2, r3]) self.assertEqual(c.mock_calls, []) def test_premptive_ack(self): # one mode I have in mind is for each side to send an immediate ACK, # with everything they've ever seen, as the very first message on each # new connection. The idea is that you might preempt sending stuff from # the _queued_unsent list if it arrives fast enough (in practice this # is more likely to be delivered via the DILATE mailbox message, but # the effects might be vaguely similar, so it seems worth testing # here). A similar situation would be if each side sends ACKs with the # highest seqnum they've ever seen, instead of merely ACKing the # message which was just received. 
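# A rough sketch of what such a preemptive ACK might look like on the
# receiving side (hypothetical names, nothing here implements it):
#
#     def connection_opened(self, connection):
#         if self._highest_seqnum_seen is not None:
#             connection.send_record(Ack(self._highest_seqnum_seen))
#
# which would let the peer retire everything at or below that watermark
# before re-sending its _queued_unsent backlog.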
o, m, c = make_outbound() r1 = Pauser(seqnum=1) r2 = Pauser(seqnum=2) r3 = Pauser(seqnum=3) o.queue_and_send_record(r1) o.queue_and_send_record(r2) self.assertEqual(list(o._outbound_queue), [r1, r2]) self.assertEqual(list(o._queued_unsent), []) o.use_connection(c) self.assertEqual(c.mock_calls, [mock.call.transport.registerProducer(o, True), mock.call.send_record(r1)]) self.assertEqual(list(o._outbound_queue), [r1, r2]) self.assertEqual(list(o._queued_unsent), [r2]) clear_mock_calls(c) o.queue_and_send_record(r3) self.assertEqual(list(o._outbound_queue), [r1, r2, r3]) self.assertEqual(list(o._queued_unsent), [r2, r3]) self.assertEqual(c.mock_calls, []) o.handle_ack(r2.seqnum) self.assertEqual(list(o._outbound_queue), [r3]) self.assertEqual(list(o._queued_unsent), [r3]) self.assertEqual(c.mock_calls, []) def test_pause(self): o, m, c = make_outbound() o.use_connection(c) self.assertEqual(c.mock_calls, [mock.call.transport.registerProducer(o, True)]) self.assertEqual(list(o._outbound_queue), []) self.assertEqual(list(o._queued_unsent), []) clear_mock_calls(c) sc1, sc2, sc3 = object(), object(), object() p1, p2, p3 = mock.Mock(name="p1"), mock.Mock( name="p2"), mock.Mock(name="p3") # we aren't paused yet, since we haven't sent any data o.subchannel_registerProducer(sc1, p1, True) self.assertEqual(p1.mock_calls, []) r1 = Pauser(seqnum=1) o.queue_and_send_record(r1) # now we should be paused self.assertTrue(o._paused) self.assertEqual(c.mock_calls, [mock.call.send_record(r1)]) self.assertEqual(p1.mock_calls, [mock.call.pauseProducing()]) clear_mock_calls(p1, c) # so an IPushProducer will be paused right away o.subchannel_registerProducer(sc2, p2, True) self.assertEqual(p2.mock_calls, [mock.call.pauseProducing()]) clear_mock_calls(p2) o.subchannel_registerProducer(sc3, p3, True) self.assertEqual(p3.mock_calls, [mock.call.pauseProducing()]) self.assertEqual(o._paused_producers, set([p1, p2, p3])) self.assertEqual(list(o._all_producers), [p1, p2, p3]) clear_mock_calls(p3) # one resumeProducing should cause p1 to get a turn, since p2 was added # after we were paused and p1 was at the "end" of a one-element list. # If it writes anything, it will get paused again immediately. r2 = Pauser(seqnum=2) p1.resumeProducing.side_effect = lambda: c.send_record(r2) o.resumeProducing() self.assertEqual(p1.mock_calls, [mock.call.resumeProducing(), mock.call.pauseProducing(), ]) self.assertEqual(p2.mock_calls, []) self.assertEqual(p3.mock_calls, []) self.assertEqual(c.mock_calls, [mock.call.send_record(r2)]) clear_mock_calls(p1, p2, p3, c) # p2 should now be at the head of the queue self.assertEqual(list(o._all_producers), [p2, p3, p1]) # next turn: p2 has nothing to send, but p3 does. we should see p3 # called but not p1. The actual sequence of expected calls is: # p2.resume, p3.resume, pauseProducing, set(p2.pause, p3.pause) r3 = Pauser(seqnum=3) p2.resumeProducing.side_effect = lambda: None p3.resumeProducing.side_effect = lambda: c.send_record(r3) o.resumeProducing() self.assertEqual(p1.mock_calls, []) self.assertEqual(p2.mock_calls, [mock.call.resumeProducing(), mock.call.pauseProducing(), ]) self.assertEqual(p3.mock_calls, [mock.call.resumeProducing(), mock.call.pauseProducing(), ]) self.assertEqual(c.mock_calls, [mock.call.send_record(r3)]) clear_mock_calls(p1, p2, p3, c) # p1 should now be at the head of the queue self.assertEqual(list(o._all_producers), [p1, p2, p3]) # next turn: p1 has data to send, but not enough to cause a pause. same # for p2. 
p3 causes a pause r4 = NonPauser(seqnum=4) r5 = NonPauser(seqnum=5) r6 = Pauser(seqnum=6) p1.resumeProducing.side_effect = lambda: c.send_record(r4) p2.resumeProducing.side_effect = lambda: c.send_record(r5) p3.resumeProducing.side_effect = lambda: c.send_record(r6) o.resumeProducing() self.assertEqual(p1.mock_calls, [mock.call.resumeProducing(), mock.call.pauseProducing(), ]) self.assertEqual(p2.mock_calls, [mock.call.resumeProducing(), mock.call.pauseProducing(), ]) self.assertEqual(p3.mock_calls, [mock.call.resumeProducing(), mock.call.pauseProducing(), ]) self.assertEqual(c.mock_calls, [mock.call.send_record(r4), mock.call.send_record(r5), mock.call.send_record(r6), ]) clear_mock_calls(p1, p2, p3, c) # p1 should now be at the head of the queue again self.assertEqual(list(o._all_producers), [p1, p2, p3]) # now we let it catch up. p1 and p2 send non-pausing data, p3 sends # nothing. r7 = NonPauser(seqnum=4) r8 = NonPauser(seqnum=5) p1.resumeProducing.side_effect = lambda: c.send_record(r7) p2.resumeProducing.side_effect = lambda: c.send_record(r8) p3.resumeProducing.side_effect = lambda: None o.resumeProducing() self.assertEqual(p1.mock_calls, [mock.call.resumeProducing(), ]) self.assertEqual(p2.mock_calls, [mock.call.resumeProducing(), ]) self.assertEqual(p3.mock_calls, [mock.call.resumeProducing(), ]) self.assertEqual(c.mock_calls, [mock.call.send_record(r7), mock.call.send_record(r8), ]) clear_mock_calls(p1, p2, p3, c) # p1 should now be at the head of the queue again self.assertEqual(list(o._all_producers), [p1, p2, p3]) self.assertFalse(o._paused) # now a producer disconnects itself (spontaneously, not from inside a # resumeProducing) o.subchannel_unregisterProducer(sc1) self.assertEqual(list(o._all_producers), [p2, p3]) self.assertEqual(p1.mock_calls, []) self.assertFalse(o._paused) # and another disconnects itself when called p2.resumeProducing.side_effect = lambda: None p3.resumeProducing.side_effect = lambda: o.subchannel_unregisterProducer( sc3) o.pauseProducing() o.resumeProducing() self.assertEqual(p2.mock_calls, [mock.call.pauseProducing(), mock.call.resumeProducing()]) self.assertEqual(p3.mock_calls, [mock.call.pauseProducing(), mock.call.resumeProducing()]) clear_mock_calls(p2, p3) self.assertEqual(list(o._all_producers), [p2]) self.assertFalse(o._paused) def test_subchannel_closed(self): o, m, c = make_outbound() sc1 = mock.Mock() p1 = mock.Mock(name="p1") o.subchannel_registerProducer(sc1, p1, True) self.assertEqual(p1.mock_calls, [mock.call.pauseProducing()]) clear_mock_calls(p1) o.subchannel_closed(1, sc1) self.assertEqual(p1.mock_calls, []) self.assertEqual(list(o._all_producers), []) sc2 = mock.Mock() o.subchannel_closed(2, sc2) def test_disconnect(self): o, m, c = make_outbound() o.use_connection(c) sc1 = mock.Mock() p1 = mock.Mock(name="p1") o.subchannel_registerProducer(sc1, p1, True) self.assertEqual(p1.mock_calls, []) o.stop_using_connection() self.assertEqual(p1.mock_calls, [mock.call.pauseProducing()]) def OFF_test_push_pull(self): # use one IPushProducer and one IPullProducer. 
They should take turns o, m, c = make_outbound() o.use_connection(c) clear_mock_calls(c) sc1, sc2 = object(), object() p1, p2 = mock.Mock(name="p1"), mock.Mock(name="p2") r1 = Pauser(seqnum=1) r2 = NonPauser(seqnum=2) # we aren't paused yet, since we haven't sent any data o.subchannel_registerProducer(sc1, p1, True) # push o.queue_and_send_record(r1) # now we're paused self.assertTrue(o._paused) self.assertEqual(c.mock_calls, [mock.call.send_record(r1)]) self.assertEqual(p1.mock_calls, [mock.call.pauseProducing()]) self.assertEqual(p2.mock_calls, []) clear_mock_calls(p1, p2, c) p1.resumeProducing.side_effect = lambda: c.send_record(r1) p2.resumeProducing.side_effect = lambda: c.send_record(r2) o.subchannel_registerProducer(sc2, p2, False) # pull: always ready # p1 is still first, since p2 was just added (at the end) self.assertTrue(o._paused) self.assertEqual(c.mock_calls, []) self.assertEqual(p1.mock_calls, []) self.assertEqual(p2.mock_calls, []) self.assertEqual(list(o._all_producers), [p1, p2]) clear_mock_calls(p1, p2, c) # resume should send r1, which should pause everything o.resumeProducing() self.assertTrue(o._paused) self.assertEqual(c.mock_calls, [mock.call.send_record(r1), ]) self.assertEqual(p1.mock_calls, [mock.call.resumeProducing(), mock.call.pauseProducing(), ]) self.assertEqual(p2.mock_calls, []) self.assertEqual(list(o._all_producers), [p2, p1]) # now p2 is next clear_mock_calls(p1, p2, c) # next should fire p2, then p1 o.resumeProducing() self.assertTrue(o._paused) self.assertEqual(c.mock_calls, [mock.call.send_record(r2), mock.call.send_record(r1), ]) self.assertEqual(p1.mock_calls, [mock.call.resumeProducing(), mock.call.pauseProducing(), ]) self.assertEqual(p2.mock_calls, [mock.call.resumeProducing(), ]) self.assertEqual(list(o._all_producers), [p2, p1]) # p2 still at bat clear_mock_calls(p1, p2, c) def test_pull_producer(self): # a single pull producer should write until it is paused, rate-limited # by the cooperator (so we'll see back-to-back resumeProducing calls # until the Connection is paused, or 10ms have passed, whichever comes # first, and if it's stopped by the timer, then the next EventualQueue # turn will start it off again) o, m, c = make_outbound() eq = o._test_eq o.use_connection(c) clear_mock_calls(c) self.assertFalse(o._paused) sc1 = mock.Mock() p1 = mock.Mock(name="p1") alsoProvides(p1, IPullProducer) records = [NonPauser(seqnum=1)] * 10 records.append(Pauser(seqnum=2)) records.append(Stopper(sc1)) it = iter(records) p1.resumeProducing.side_effect = lambda: c.send_record(next(it)) o.subchannel_registerProducer(sc1, p1, False) eq.flush_sync() # fast forward into the glorious (paused) future self.assertTrue(o._paused) self.assertEqual(c.mock_calls, [mock.call.send_record(r) for r in records[:-1]]) self.assertEqual(p1.mock_calls, [mock.call.resumeProducing()] * (len(records) - 1)) clear_mock_calls(c, p1) # next resumeProducing should cause it to disconnect o.resumeProducing() eq.flush_sync() self.assertEqual(c.mock_calls, [mock.call.send_record(records[-1])]) self.assertEqual(p1.mock_calls, [mock.call.resumeProducing()]) self.assertEqual(len(o._all_producers), 0) self.assertFalse(o._paused) def test_two_pull_producers(self): # we should alternate between them until paused p1_records = ([NonPauser(seqnum=i) for i in range(5)] + [Pauser(seqnum=5)] + [NonPauser(seqnum=i) for i in range(6, 10)]) p2_records = ([NonPauser(seqnum=i) for i in range(10, 19)] + [Pauser(seqnum=19)]) expected1 = [NonPauser(0), NonPauser(10), NonPauser(1), NonPauser(11), 
NonPauser(2), NonPauser(12), NonPauser(3), NonPauser(13), NonPauser(4), NonPauser(14), Pauser(5)] expected2 = [NonPauser(15), NonPauser(6), NonPauser(16), NonPauser(7), NonPauser(17), NonPauser(8), NonPauser(18), NonPauser(9), Pauser(19), ] o, m, c = make_outbound() eq = o._test_eq o.use_connection(c) clear_mock_calls(c) self.assertFalse(o._paused) sc1 = mock.Mock() p1 = mock.Mock(name="p1") alsoProvides(p1, IPullProducer) it1 = iter(p1_records) p1.resumeProducing.side_effect = lambda: c.send_record(next(it1)) o.subchannel_registerProducer(sc1, p1, False) sc2 = mock.Mock() p2 = mock.Mock(name="p2") alsoProvides(p2, IPullProducer) it2 = iter(p2_records) p2.resumeProducing.side_effect = lambda: c.send_record(next(it2)) o.subchannel_registerProducer(sc2, p2, False) eq.flush_sync() # fast forward into the glorious (paused) future sends = [mock.call.resumeProducing()] self.assertTrue(o._paused) self.assertEqual(c.mock_calls, [mock.call.send_record(r) for r in expected1]) self.assertEqual(p1.mock_calls, 6 * sends) self.assertEqual(p2.mock_calls, 5 * sends) clear_mock_calls(c, p1, p2) o.resumeProducing() eq.flush_sync() self.assertTrue(o._paused) self.assertEqual(c.mock_calls, [mock.call.send_record(r) for r in expected2]) self.assertEqual(p1.mock_calls, 4 * sends) self.assertEqual(p2.mock_calls, 5 * sends) clear_mock_calls(c, p1, p2) def test_send_if_connected(self): o, m, c = make_outbound() o.send_if_connected(Ack(1)) # not connected yet o.use_connection(c) o.send_if_connected(KCM()) self.assertEqual(c.mock_calls, [mock.call.transport.registerProducer(o, True), mock.call.send_record(KCM())]) def test_tolerate_duplicate_pause_resume(self): o, m, c = make_outbound() self.assertTrue(o._paused) # no connection o.use_connection(c) self.assertFalse(o._paused) o.pauseProducing() self.assertTrue(o._paused) o.pauseProducing() self.assertTrue(o._paused) o.resumeProducing() self.assertFalse(o._paused) o.resumeProducing() self.assertFalse(o._paused) def test_stopProducing(self): o, m, c = make_outbound() o.use_connection(c) self.assertFalse(o._paused) o.stopProducing() # connection does this before loss self.assertTrue(o._paused) o.stop_using_connection() self.assertTrue(o._paused) def test_resume_error(self): o, m, c = make_outbound() o.use_connection(c) sc1 = mock.Mock() p1 = mock.Mock(name="p1") alsoProvides(p1, IPullProducer) p1.resumeProducing.side_effect = PretendResumptionError o.subchannel_registerProducer(sc1, p1, False) o._test_eq.flush_sync() # the error is supposed to automatically unregister the producer self.assertEqual(list(o._all_producers), []) self.flushLoggedErrors(PretendResumptionError) def make_pushpull(pauses): p = mock.Mock() alsoProvides(p, IPullProducer) unregister = mock.Mock() clock = Clock() eq = EventualQueue(clock) term = mock.Mock(side_effect=lambda: True) # one write per Eventual tick def term_factory(): return term coop = Cooperator(terminationPredicateFactory=term_factory, scheduler=eq.eventually) pp = PullToPush(p, unregister, coop) it = cycle(pauses) def action(i): if isinstance(i, Exception): raise i elif i: pp.pauseProducing() p.resumeProducing.side_effect = lambda: action(next(it)) return p, unregister, pp, eq class PretendResumptionError(Exception): pass class PretendUnregisterError(Exception): pass class PushPull(unittest.TestCase): # test our wrapper utility, which I copied from # twisted.internet._producer_helpers since it isn't publicly exposed def test_start_unpaused(self): p, unr, pp, eq = make_pushpull([True]) # pause on each resumeProducing # if it starts 
unpaused, it gets one write before being halted pp.startStreaming(False) eq.flush_sync() self.assertEqual(p.mock_calls, [mock.call.resumeProducing()] * 1) clear_mock_calls(p) # now each time we call resumeProducing, we should see one delivered to # the underlying IPullProducer pp.resumeProducing() eq.flush_sync() self.assertEqual(p.mock_calls, [mock.call.resumeProducing()] * 1) pp.stopStreaming() pp.stopStreaming() # should tolerate this def test_start_unpaused_two_writes(self): p, unr, pp, eq = make_pushpull([False, True]) # pause every other time # it should get two writes, since the first didn't pause pp.startStreaming(False) eq.flush_sync() self.assertEqual(p.mock_calls, [mock.call.resumeProducing()] * 2) def test_start_paused(self): p, unr, pp, eq = make_pushpull([True]) # pause on each resumeProducing pp.startStreaming(True) eq.flush_sync() self.assertEqual(p.mock_calls, []) pp.stopStreaming() def test_stop(self): p, unr, pp, eq = make_pushpull([True]) pp.startStreaming(True) pp.stopProducing() eq.flush_sync() self.assertEqual(p.mock_calls, [mock.call.stopProducing()]) def test_error(self): p, unr, pp, eq = make_pushpull([PretendResumptionError()]) unr.side_effect = lambda: pp.stopStreaming() pp.startStreaming(False) eq.flush_sync() self.assertEqual(unr.mock_calls, [mock.call()]) self.flushLoggedErrors(PretendResumptionError) def test_error_during_unregister(self): p, unr, pp, eq = make_pushpull([PretendResumptionError()]) unr.side_effect = PretendUnregisterError() pp.startStreaming(False) eq.flush_sync() self.assertEqual(unr.mock_calls, [mock.call()]) self.flushLoggedErrors(PretendResumptionError, PretendUnregisterError) # TODO: consider making p1/p2/p3 all elements of a shared Mock, maybe I # could capture the inter-call ordering that way magic-wormhole-0.12.0/src/wormhole/test/dilate/test_parse.py000066400000000000000000000045421400712516500241010ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import mock from twisted.trial import unittest from ..._dilation.connection import (parse_record, encode_record, KCM, Ping, Pong, Open, Data, Close, Ack) class Parse(unittest.TestCase): def test_parse(self): self.assertEqual(parse_record(b"\x00"), KCM()) self.assertEqual(parse_record(b"\x01\x55\x44\x33\x22"), Ping(ping_id=b"\x55\x44\x33\x22")) self.assertEqual(parse_record(b"\x02\x55\x44\x33\x22"), Pong(ping_id=b"\x55\x44\x33\x22")) self.assertEqual(parse_record(b"\x03\x00\x00\x02\x01\x00\x00\x01\x00"), Open(scid=513, seqnum=256)) self.assertEqual(parse_record(b"\x04\x00\x00\x02\x02\x00\x00\x01\x01dataaa"), Data(scid=514, seqnum=257, data=b"dataaa")) self.assertEqual(parse_record(b"\x05\x00\x00\x02\x03\x00\x00\x01\x02"), Close(scid=515, seqnum=258)) self.assertEqual(parse_record(b"\x06\x00\x00\x01\x03"), Ack(resp_seqnum=259)) with mock.patch("wormhole._dilation.connection.log.err") as le: with self.assertRaises(ValueError): parse_record(b"\x07unknown") self.assertEqual(le.mock_calls, [mock.call("received unknown message type: {}".format( b"\x07unknown"))]) def test_encode(self): self.assertEqual(encode_record(KCM()), b"\x00") self.assertEqual(encode_record(Ping(ping_id=b"ping")), b"\x01ping") self.assertEqual(encode_record(Pong(ping_id=b"pong")), b"\x02pong") self.assertEqual(encode_record(Open(scid=65536, seqnum=16)), b"\x03\x00\x01\x00\x00\x00\x00\x00\x10") self.assertEqual(encode_record(Data(scid=65537, seqnum=17, data=b"dataaa")), b"\x04\x00\x01\x00\x01\x00\x00\x00\x11dataaa") self.assertEqual(encode_record(Close(scid=65538, seqnum=18)), 
b"\x05\x00\x01\x00\x02\x00\x00\x00\x12") self.assertEqual(encode_record(Ack(resp_seqnum=19)), b"\x06\x00\x00\x00\x13") with self.assertRaises(TypeError) as ar: encode_record("not a record") self.assertEqual(str(ar.exception), "not a record") magic-wormhole-0.12.0/src/wormhole/test/dilate/test_record.py000066400000000000000000000306001400712516500242370ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import mock from zope.interface import alsoProvides from twisted.trial import unittest from ..._dilation._noise import NoiseInvalidMessage from ..._dilation.connection import (IFramer, Frame, Prologue, _Record, Handshake, Disconnect, Ping) from ..._dilation.roles import LEADER def make_record(): f = mock.Mock() alsoProvides(f, IFramer) n = mock.Mock() # pretends to be a Noise object r = _Record(f, n, LEADER) r.set_role_leader() return r, f, n class Record(unittest.TestCase): def test_good2(self): f = mock.Mock() alsoProvides(f, IFramer) f.add_and_parse = mock.Mock(side_effect=[ [], [Prologue()], [Frame(frame=b"rx-handshake")], [Frame(frame=b"frame1"), Frame(frame=b"frame2")], ]) n = mock.Mock() n.write_message = mock.Mock(return_value=b"tx-handshake") p1, p2 = object(), object() n.decrypt = mock.Mock(side_effect=[p1, p2]) r = _Record(f, n, LEADER) r.set_role_leader() self.assertEqual(f.mock_calls, []) r.connectionMade() self.assertEqual(f.mock_calls, [mock.call.connectionMade()]) f.mock_calls[:] = [] self.assertEqual(n.mock_calls, [mock.call.start_handshake()]) n.mock_calls[:] = [] # Pretend to deliver the prologue in two parts. The text we send in # doesn't matter: the side_effect= is what causes the prologue to be # recognized by the second call. self.assertEqual(list(r.add_and_unframe(b"pro")), []) self.assertEqual(f.mock_calls, [mock.call.add_and_parse(b"pro")]) f.mock_calls[:] = [] self.assertEqual(n.mock_calls, []) # recognizing the prologue causes a handshake frame to be sent self.assertEqual(list(r.add_and_unframe(b"logue")), []) self.assertEqual(f.mock_calls, [mock.call.add_and_parse(b"logue"), mock.call.send_frame(b"tx-handshake")]) f.mock_calls[:] = [] self.assertEqual(n.mock_calls, [mock.call.write_message()]) n.mock_calls[:] = [] # next add_and_unframe is recognized as the Handshake self.assertEqual(list(r.add_and_unframe(b"blah")), [Handshake()]) self.assertEqual(f.mock_calls, [mock.call.add_and_parse(b"blah")]) f.mock_calls[:] = [] self.assertEqual(n.mock_calls, [mock.call.read_message(b"rx-handshake")]) n.mock_calls[:] = [] # next is a pair of Records r1, r2 = object(), object() with mock.patch("wormhole._dilation.connection.parse_record", side_effect=[r1, r2]) as pr: self.assertEqual(list(r.add_and_unframe(b"blah2")), [r1, r2]) self.assertEqual(n.mock_calls, [mock.call.decrypt(b"frame1"), mock.call.decrypt(b"frame2")]) self.assertEqual(pr.mock_calls, [mock.call(p1), mock.call(p2)]) def test_bad_handshake(self): f = mock.Mock() alsoProvides(f, IFramer) f.add_and_parse = mock.Mock(return_value=[Prologue(), Frame(frame=b"rx-handshake")]) n = mock.Mock() n.write_message = mock.Mock(return_value=b"tx-handshake") nvm = NoiseInvalidMessage() n.read_message = mock.Mock(side_effect=nvm) r = _Record(f, n, LEADER) r.set_role_leader() self.assertEqual(f.mock_calls, []) r.connectionMade() self.assertEqual(f.mock_calls, [mock.call.connectionMade()]) f.mock_calls[:] = [] self.assertEqual(n.mock_calls, [mock.call.start_handshake()]) n.mock_calls[:] = [] with mock.patch("wormhole._dilation.connection.log.err") as le: with self.assertRaises(Disconnect): 
list(r.add_and_unframe(b"data")) self.assertEqual(le.mock_calls, [mock.call(nvm, "bad inbound noise handshake")]) def test_bad_message(self): f = mock.Mock() alsoProvides(f, IFramer) f.add_and_parse = mock.Mock(return_value=[Prologue(), Frame(frame=b"rx-handshake"), Frame(frame=b"bad-message")]) n = mock.Mock() n.write_message = mock.Mock(return_value=b"tx-handshake") nvm = NoiseInvalidMessage() n.decrypt = mock.Mock(side_effect=nvm) r = _Record(f, n, LEADER) r.set_role_leader() self.assertEqual(f.mock_calls, []) r.connectionMade() self.assertEqual(f.mock_calls, [mock.call.connectionMade()]) f.mock_calls[:] = [] self.assertEqual(n.mock_calls, [mock.call.start_handshake()]) n.mock_calls[:] = [] with mock.patch("wormhole._dilation.connection.log.err") as le: with self.assertRaises(Disconnect): list(r.add_and_unframe(b"data")) self.assertEqual(le.mock_calls, [mock.call(nvm, "bad inbound noise frame")]) def test_send_record(self): f = mock.Mock() alsoProvides(f, IFramer) n = mock.Mock() f1 = object() n.encrypt = mock.Mock(return_value=f1) r1 = Ping(b"pingid") r = _Record(f, n, LEADER) r.set_role_leader() self.assertEqual(f.mock_calls, []) m1 = object() with mock.patch("wormhole._dilation.connection.encode_record", return_value=m1) as er: r.send_record(r1) self.assertEqual(er.mock_calls, [mock.call(r1)]) self.assertEqual(n.mock_calls, [mock.call.start_handshake(), mock.call.encrypt(m1)]) self.assertEqual(f.mock_calls, [mock.call.send_frame(f1)]) def test_good(self): # Exercise the success path. The Record instance is given each chunk # of data as it arrives on Protocol.dataReceived, and is supposed to # return a series of Tokens (maybe none, if the chunk was incomplete, # or more than one, if the chunk was larger). Internally, it delivers # the chunks to the Framer for unframing (which returns 0 or more # frames), manages the Noise decryption object, and parses any # decrypted messages into tokens (some of which are consumed # internally, others for delivery upstairs). # # in the normal flow, we get: # # | | Inbound | NoiseAction | Outbound | ToUpstairs | # | | - | - | - | - | # | 1 | | | prologue | | # | 2 | prologue | | | | # | 3 | | write_message | handshake | | # | 4 | handshake | read_message | | Handshake | # | 5 | | encrypt | KCM | | # | 6 | KCM | decrypt | | KCM | # | 7 | msg1 | decrypt | | msg1 | # 1: instantiating the Record instance causes the outbound prologue # to be sent # 2+3: receipt of the inbound prologue triggers creation of the # ephemeral key (the "handshake") by calling noise.write_message() # and then writes the handshake to the outbound transport # 4: when the peer's handshake is received, it is delivered to # noise.read_message(), which generates the shared key (enabling # noise.send() and noise.decrypt()). It also delivers the Handshake # token upstairs, which might (on the Follower) trigger immediate # transmission of the Key Confirmation Message (KCM) # 5: the outbound KCM is framed and fed into noise.encrypt(), then # sent outbound # 6: the peer's KCM is decrypted then delivered upstairs. The # Follower treats this as a signal that it should use this connection # (and drop all others). # 7: the peer's first message is decrypted, parsed, and delivered # upstairs. 
This might be an Open or a Data, depending upon what # queued messages were left over from the previous connection r, f, n = make_record() outbound_handshake = object() kcm, msg1 = object(), object() f_kcm, f_msg1 = object(), object() n.write_message = mock.Mock(return_value=outbound_handshake) n.decrypt = mock.Mock(side_effect=[kcm, msg1]) n.encrypt = mock.Mock(side_effect=[f_kcm, f_msg1]) f.add_and_parse = mock.Mock(side_effect=[[], # no tokens yet [Prologue()], [Frame("f_handshake")], [Frame("f_kcm"), Frame("f_msg1")], ]) self.assertEqual(f.mock_calls, []) self.assertEqual(n.mock_calls, [mock.call.start_handshake()]) n.mock_calls[:] = [] # 1. The Framer is responsible for sending the prologue, so we don't # have to check that here, we just check that the Framer was told # about connectionMade properly. r.connectionMade() self.assertEqual(f.mock_calls, [mock.call.connectionMade()]) self.assertEqual(n.mock_calls, []) f.mock_calls[:] = [] # 2 # we dribble the prologue in over two messages, to make sure we can # handle a dataReceived that doesn't complete the token # remember, add_and_unframe is a generator self.assertEqual(list(r.add_and_unframe(b"pro")), []) self.assertEqual(f.mock_calls, [mock.call.add_and_parse(b"pro")]) self.assertEqual(n.mock_calls, []) f.mock_calls[:] = [] self.assertEqual(list(r.add_and_unframe(b"logue")), []) # 3: write_message, send outbound handshake self.assertEqual(f.mock_calls, [mock.call.add_and_parse(b"logue"), mock.call.send_frame(outbound_handshake), ]) self.assertEqual(n.mock_calls, [mock.call.write_message()]) f.mock_calls[:] = [] n.mock_calls[:] = [] # 4 # Now deliver the Noise "handshake", the ephemeral public key. This # is framed, but not a record, so it shouldn't decrypt or parse # anything, but the handshake is delivered to the Noise object, and # it does return a Handshake token so we can let the next layer up # react (by sending the KCM frame if we're a Follower, or not if # we're the Leader) self.assertEqual(list(r.add_and_unframe(b"handshake")), [Handshake()]) self.assertEqual(f.mock_calls, [mock.call.add_and_parse(b"handshake")]) self.assertEqual(n.mock_calls, [mock.call.read_message("f_handshake")]) f.mock_calls[:] = [] n.mock_calls[:] = [] # 5: at this point we ought to be able to send a message, the KCM with mock.patch("wormhole._dilation.connection.encode_record", side_effect=[b"r-kcm"]) as er: r.send_record(kcm) self.assertEqual(er.mock_calls, [mock.call(kcm)]) self.assertEqual(n.mock_calls, [mock.call.encrypt(b"r-kcm")]) self.assertEqual(f.mock_calls, [mock.call.send_frame(f_kcm)]) n.mock_calls[:] = [] f.mock_calls[:] = [] # 6: Now we deliver two messages stacked up: the KCM (Key # Confirmation Message) and the first real message. Concatenating # them tests that we can handle more than one token in a single # chunk. We need to mock parse_record() because everything past the # handshake is decrypted and parsed. 
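# To restate the inbound pipeline being mocked here:
#     bytes -> framer.add_and_parse() -> Frame -> noise.decrypt()
#           -> parse_record() -> token
# so two frames stacked in one chunk must produce exactly two decrypts
# and two parses.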
with mock.patch("wormhole._dilation.connection.parse_record", side_effect=[kcm, msg1]) as pr: self.assertEqual(list(r.add_and_unframe(b"kcm,msg1")), [kcm, msg1]) self.assertEqual(f.mock_calls, [mock.call.add_and_parse(b"kcm,msg1")]) self.assertEqual(n.mock_calls, [mock.call.decrypt("f_kcm"), mock.call.decrypt("f_msg1")]) self.assertEqual(pr.mock_calls, [mock.call(kcm), mock.call(msg1)]) n.mock_calls[:] = [] f.mock_calls[:] = [] magic-wormhole-0.12.0/src/wormhole/test/dilate/test_subchannel.py000066400000000000000000000216001400712516500251030ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import mock from zope.interface import directlyProvides from twisted.trial import unittest from twisted.internet.interfaces import ITransport, IHalfCloseableProtocol from twisted.internet.error import ConnectionDone from ..._dilation.subchannel import (Once, SubChannel, _WormholeAddress, _SubchannelAddress, AlreadyClosedError, NormalCloseUsedOnHalfCloseable) from .common import mock_manager def make_sc(set_protocol=True, half_closeable=False): scid = 4 hostaddr = _WormholeAddress() peeraddr = _SubchannelAddress(scid) m = mock_manager() sc = SubChannel(scid, m, hostaddr, peeraddr) p = mock.Mock() if half_closeable: directlyProvides(p, IHalfCloseableProtocol) if set_protocol: sc._set_protocol(p) return sc, m, scid, hostaddr, peeraddr, p class SubChannelAPI(unittest.TestCase): def test_once(self): o = Once(ValueError) o() with self.assertRaises(ValueError): o() def test_create(self): sc, m, scid, hostaddr, peeraddr, p = make_sc() self.assert_(ITransport.providedBy(sc)) self.assertEqual(m.mock_calls, []) self.assertIdentical(sc.getHost(), hostaddr) self.assertIdentical(sc.getPeer(), peeraddr) def test_write(self): sc, m, scid, hostaddr, peeraddr, p = make_sc() sc.write(b"data") self.assertEqual(m.mock_calls, [mock.call.send_data(scid, b"data")]) m.mock_calls[:] = [] sc.writeSequence([b"more", b"data"]) self.assertEqual(m.mock_calls, [mock.call.send_data(scid, b"moredata")]) def test_write_when_closing(self): sc, m, scid, hostaddr, peeraddr, p = make_sc() sc.loseConnection() self.assertEqual(m.mock_calls, [mock.call.send_close(scid)]) m.mock_calls[:] = [] with self.assertRaises(AlreadyClosedError) as e: sc.write(b"data") self.assertEqual(str(e.exception), "write not allowed on closed subchannel") def test_local_close(self): sc, m, scid, hostaddr, peeraddr, p = make_sc() sc.loseConnection() self.assertEqual(m.mock_calls, [mock.call.send_close(scid)]) m.mock_calls[:] = [] # late arriving data is still delivered sc.remote_data(b"late") self.assertEqual(p.mock_calls, [mock.call.dataReceived(b"late")]) p.mock_calls[:] = [] sc.remote_close() self.assert_connectionDone(p.mock_calls) def test_local_double_close(self): sc, m, scid, hostaddr, peeraddr, p = make_sc() sc.loseConnection() self.assertEqual(m.mock_calls, [mock.call.send_close(scid)]) m.mock_calls[:] = [] with self.assertRaises(AlreadyClosedError) as e: sc.loseConnection() self.assertEqual(str(e.exception), "loseConnection not allowed on closed subchannel") def assert_connectionDone(self, mock_calls): self.assertEqual(len(mock_calls), 1) self.assertEqual(mock_calls[0][0], "connectionLost") self.assertEqual(len(mock_calls[0][1]), 1) self.assertIsInstance(mock_calls[0][1][0], ConnectionDone) def test_remote_close(self): sc, m, scid, hostaddr, peeraddr, p = make_sc() sc.remote_close() self.assertEqual(m.mock_calls, [mock.call.send_close(scid), mock.call.subchannel_closed(scid, sc)]) self.assert_connectionDone(p.mock_calls) 
def test_data(self): sc, m, scid, hostaddr, peeraddr, p = make_sc() sc.remote_data(b"data") self.assertEqual(p.mock_calls, [mock.call.dataReceived(b"data")]) p.mock_calls[:] = [] sc.remote_data(b"not") sc.remote_data(b"coalesced") self.assertEqual(p.mock_calls, [mock.call.dataReceived(b"not"), mock.call.dataReceived(b"coalesced"), ]) def test_data_before_open(self): sc, m, scid, hostaddr, peeraddr, p = make_sc(set_protocol=False) sc.remote_data(b"data1") sc.remote_data(b"data2") self.assertEqual(p.mock_calls, []) sc._set_protocol(p) sc._deliver_queued_data() self.assertEqual(p.mock_calls, [mock.call.dataReceived(b"data1"), mock.call.dataReceived(b"data2")]) p.mock_calls[:] = [] sc.remote_data(b"more") self.assertEqual(p.mock_calls, [mock.call.dataReceived(b"more")]) def test_close_before_open(self): sc, m, scid, hostaddr, peeraddr, p = make_sc(set_protocol=False) sc.remote_close() self.assertEqual(p.mock_calls, []) sc._set_protocol(p) sc._deliver_queued_data() self.assert_connectionDone(p.mock_calls) def test_producer(self): sc, m, scid, hostaddr, peeraddr, p = make_sc() sc.pauseProducing() self.assertEqual(m.mock_calls, [mock.call.subchannel_pauseProducing(sc)]) m.mock_calls[:] = [] sc.resumeProducing() self.assertEqual(m.mock_calls, [mock.call.subchannel_resumeProducing(sc)]) m.mock_calls[:] = [] sc.stopProducing() self.assertEqual(m.mock_calls, [mock.call.subchannel_stopProducing(sc)]) m.mock_calls[:] = [] def test_consumer(self): sc, m, scid, hostaddr, peeraddr, p = make_sc() # TODO: more, once this is implemented sc.registerProducer(None, True) sc.unregisterProducer() class HalfCloseable(unittest.TestCase): def test_create(self): sc, m, scid, hostaddr, peeraddr, p = make_sc(half_closeable=True) self.assert_(ITransport.providedBy(sc)) self.assertEqual(m.mock_calls, []) self.assertIdentical(sc.getHost(), hostaddr) self.assertIdentical(sc.getPeer(), peeraddr) def test_local_close(self): sc, m, scid, hostaddr, peeraddr, p = make_sc(half_closeable=True) sc.write(b"data") self.assertEqual(m.mock_calls, [mock.call.send_data(scid, b"data")]) m.mock_calls[:] = [] sc.writeSequence([b"more", b"data"]) self.assertEqual(m.mock_calls, [mock.call.send_data(scid, b"moredata")]) m.mock_calls[:] = [] sc.remote_data(b"inbound1") self.assertEqual(p.mock_calls, [mock.call.dataReceived(b"inbound1")]) p.mock_calls[:] = [] with self.assertRaises(NormalCloseUsedOnHalfCloseable) as e: sc.loseConnection() # TODO: maybe this shouldn't be an error # after a local close, we can't write anymore, but we can still # receive data sc.loseWriteConnection() # TODO or loseConnection? 
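# the half-close should go out as an ordinary Close record: only our
# write side shuts down locally, and we can still receive data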
self.assertEqual(m.mock_calls, [mock.call.send_close(scid)]) m.mock_calls[:] = [] self.assertEqual(p.mock_calls, [mock.call.writeConnectionLost()]) p.mock_calls[:] = [] with self.assertRaises(AlreadyClosedError) as e: sc.write(b"data") self.assertEqual(str(e.exception), "write not allowed on closed subchannel") with self.assertRaises(AlreadyClosedError) as e: sc.loseWriteConnection() self.assertEqual(str(e.exception), "loseConnection not allowed on closed subchannel") with self.assertRaises(NormalCloseUsedOnHalfCloseable) as e: sc.loseConnection() # TODO: maybe expect AlreadyClosedError sc.remote_data(b"inbound2") self.assertEqual(p.mock_calls, [mock.call.dataReceived(b"inbound2")]) p.mock_calls[:] = [] # the remote end will finally shut down the connection sc.remote_close() self.assertEqual(m.mock_calls, [mock.call.subchannel_closed(scid, sc)]) self.assertEqual(p.mock_calls, [mock.call.readConnectionLost()]) def test_remote_close(self): sc, m, scid, hostaddr, peeraddr, p = make_sc(half_closeable=True) sc.write(b"data") self.assertEqual(m.mock_calls, [mock.call.send_data(scid, b"data")]) m.mock_calls[:] = [] sc.remote_data(b"inbound1") self.assertEqual(p.mock_calls, [mock.call.dataReceived(b"inbound1")]) p.mock_calls[:] = [] # after a remote close, we can still write data sc.remote_close() self.assertEqual(m.mock_calls, []) self.assertEqual(p.mock_calls, [mock.call.readConnectionLost()]) p.mock_calls[:] = [] sc.write(b"out2") self.assertEqual(m.mock_calls, [mock.call.send_data(scid, b"out2")]) m.mock_calls[:] = [] # and a local close will shutdown the connection sc.loseWriteConnection() self.assertEqual(m.mock_calls, [mock.call.send_close(scid), mock.call.subchannel_closed(scid, sc)]) self.assertEqual(p.mock_calls, [mock.call.writeConnectionLost()]) magic-wormhole-0.12.0/src/wormhole/test/run_trial.py000066400000000000000000000012221400712516500224550ustar00rootroot00000000000000from __future__ import unicode_literals # This is a tiny helper module, to let "python -m wormhole.test.run_trial # ARGS" does the same thing as running "trial ARGS" (unfortunately # twisted/scripts/trial.py does not have a '__name__=="__main__"' clause). 
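# In other words, these two commands should behave identically:
#   trial wormhole
#   python -m wormhole.test.run_trial wormhole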
# # This makes it easier to run trial under coverage from tox: # * "coverage run trial ARGS" is how you'd usually do it # * but "trial" must be the one in tox's virtualenv # * "coverage run `which trial` ARGS" works from a shell # * but tox doesn't use a shell # So use: # "coverage run -m wormhole.test.run_trial ARGS" if __name__ == "__main__": from twisted.scripts.trial import run run() magic-wormhole-0.12.0/src/wormhole/test/test_args.py000066400000000000000000000161431400712516500224610ustar00rootroot00000000000000import os import sys from twisted.trial import unittest import mock from ..cli.public_relay import RENDEZVOUS_RELAY, TRANSIT_RELAY from .common import config class Send(unittest.TestCase): def test_baseline(self): cfg = config("send", "--text", "hi") self.assertEqual(cfg.what, None) self.assertEqual(cfg.code, None) self.assertEqual(cfg.code_length, 2) self.assertEqual(cfg.dump_timing, None) self.assertEqual(cfg.hide_progress, False) self.assertEqual(cfg.listen, True) self.assertEqual(cfg.appid, None) self.assertEqual(cfg.relay_url, RENDEZVOUS_RELAY) self.assertEqual(cfg.transit_helper, TRANSIT_RELAY) self.assertEqual(cfg.text, "hi") self.assertEqual(cfg.tor, False) self.assertEqual(cfg.verify, False) self.assertEqual(cfg.zeromode, False) def test_appid(self): cfg = config("--appid", "xyz", "send", "--text", "hi") self.assertEqual(cfg.appid, "xyz") cfg = config("--appid=xyz", "send", "--text", "hi") self.assertEqual(cfg.appid, "xyz") def test_file(self): cfg = config("send", "fn") self.assertEqual(cfg.what, u"fn") self.assertEqual(cfg.text, None) def test_text(self): cfg = config("send", "--text", "hi") self.assertEqual(cfg.what, None) self.assertEqual(cfg.text, u"hi") def test_nolisten(self): cfg = config("send", "--no-listen", "fn") self.assertEqual(cfg.listen, False) def test_code(self): cfg = config("send", "--code", "1-abc", "fn") self.assertEqual(cfg.code, u"1-abc") def test_code_length(self): cfg = config("send", "-c", "3", "fn") self.assertEqual(cfg.code_length, 3) def test_dump_timing(self): cfg = config("--dump-timing", "tx.json", "send", "fn") self.assertEqual(cfg.dump_timing, "tx.json") def test_hide_progress(self): cfg = config("send", "--hide-progress", "fn") self.assertEqual(cfg.hide_progress, True) def test_tor(self): cfg = config("send", "--tor", "fn") self.assertEqual(cfg.tor, True) def test_verify(self): cfg = config("send", "--verify", "fn") self.assertEqual(cfg.verify, True) def test_zeromode(self): cfg = config("send", "-0", "fn") self.assertEqual(cfg.zeromode, True) def test_relay_env_var(self): relay_url = str(mock.sentinel.relay_url) with mock.patch.dict(os.environ, WORMHOLE_RELAY_URL=relay_url): cfg = config("send") self.assertEqual(cfg.relay_url, relay_url) # Make sure cmd line option overrides environment variable relay_url_2 = str(mock.sentinel.relay_url_2) with mock.patch.dict(os.environ, WORMHOLE_RELAY_URL=relay_url): cfg = config("--relay-url", relay_url_2, "send") self.assertEqual(cfg.relay_url, relay_url_2) def test_transit_env_var(self): transit_url = str(mock.sentinel.transit_url) with mock.patch.dict(os.environ, WORMHOLE_TRANSIT_HELPER=transit_url): cfg = config("send") self.assertEqual(cfg.transit_helper, transit_url) # Make sure cmd line option overrides environment variable transit_url_2 = str(mock.sentinel.transit_url_2) with mock.patch.dict(os.environ, WORMHOLE_TRANSIT_HELPER=transit_url): cfg = config("--transit-helper", transit_url_2, "send") self.assertEqual(cfg.transit_helper, transit_url_2) class Receive(unittest.TestCase): def 
test_baseline(self): cfg = config("receive") self.assertEqual(cfg.accept_file, False) self.assertEqual(cfg.code, None) self.assertEqual(cfg.code_length, 2) self.assertEqual(cfg.dump_timing, None) self.assertEqual(cfg.hide_progress, False) self.assertEqual(cfg.listen, True) self.assertEqual(cfg.only_text, False) self.assertEqual(cfg.output_file, None) self.assertEqual(cfg.appid, None) self.assertEqual(cfg.relay_url, RENDEZVOUS_RELAY) self.assertEqual(cfg.transit_helper, TRANSIT_RELAY) self.assertEqual(cfg.tor, False) self.assertEqual(cfg.verify, False) self.assertEqual(cfg.zeromode, False) def test_appid(self): cfg = config("--appid", "xyz", "receive") self.assertEqual(cfg.appid, "xyz") cfg = config("--appid=xyz", "receive") self.assertEqual(cfg.appid, "xyz") def test_nolisten(self): cfg = config("receive", "--no-listen") self.assertEqual(cfg.listen, False) def test_code(self): cfg = config("receive", "1-abc") self.assertEqual(cfg.code, u"1-abc") def test_code_length(self): cfg = config("receive", "-c", "3") self.assertEqual(cfg.code_length, 3) def test_dump_timing(self): cfg = config("--dump-timing", "tx.json", "receive") self.assertEqual(cfg.dump_timing, "tx.json") def test_hide_progress(self): cfg = config("receive", "--hide-progress") self.assertEqual(cfg.hide_progress, True) def test_tor(self): cfg = config("receive", "--tor") self.assertEqual(cfg.tor, True) def test_verify(self): cfg = config("receive", "--verify") self.assertEqual(cfg.verify, True) def test_zeromode(self): cfg = config("receive", "-0") self.assertEqual(cfg.zeromode, True) def test_only_text(self): cfg = config("receive", "-t") self.assertEqual(cfg.only_text, True) def test_accept_file(self): cfg = config("receive", "--accept-file") self.assertEqual(cfg.accept_file, True) def test_output_file(self): cfg = config("receive", "--output-file", "fn") self.assertEqual(cfg.output_file, u"fn") def test_relay_env_var(self): relay_url = str(mock.sentinel.relay_url) with mock.patch.dict(os.environ, WORMHOLE_RELAY_URL=relay_url): cfg = config("receive") self.assertEqual(cfg.relay_url, relay_url) # Make sure cmd line option overrides environment variable relay_url_2 = str(mock.sentinel.relay_url_2) with mock.patch.dict(os.environ, WORMHOLE_RELAY_URL=relay_url): cfg = config("--relay-url", relay_url_2, "receive") self.assertEqual(cfg.relay_url, relay_url_2) def test_transit_env_var(self): transit_url = str(mock.sentinel.transit_url) with mock.patch.dict(os.environ, WORMHOLE_TRANSIT_HELPER=transit_url): cfg = config("receive") self.assertEqual(cfg.transit_helper, transit_url) # Make sure cmd line option overrides environment variable transit_url_2 = str(mock.sentinel.transit_url_2) with mock.patch.dict(os.environ, WORMHOLE_TRANSIT_HELPER=transit_url): cfg = config("--transit-helper", transit_url_2, "receive") self.assertEqual(cfg.transit_helper, transit_url_2) class Config(unittest.TestCase): def test_send(self): cfg = config("send") self.assertEqual(cfg.stdout, sys.stdout) def test_receive(self): cfg = config("receive") self.assertEqual(cfg.stdout, sys.stdout) magic-wormhole-0.12.0/src/wormhole/test/test_cli.py000066400000000000000000001456251400712516500223040ustar00rootroot00000000000000from __future__ import print_function import io import os import re import stat import sys import zipfile from textwrap import dedent, fill import six from click.testing import CliRunner from humanize import naturalsize from twisted.internet import endpoints, reactor from twisted.internet.defer import gatherResults, inlineCallbacks, returnValue 
from twisted.internet.error import ConnectionRefusedError from twisted.internet.utils import getProcessOutputAndValue from twisted.python import log, procutils from twisted.trial import unittest from zope.interface import implementer import mock from .. import __version__ from .._interfaces import ITorManager from ..cli import cli, cmd_receive, cmd_send, welcome from ..errors import (ServerConnectionError, TransferError, UnsendableFileError, WelcomeError, WrongPasswordError) from .common import ServerBase, config def build_offer(args): s = cmd_send.Sender(args, None) return s._build_offer() class OfferData(unittest.TestCase): def setUp(self): self._things_to_delete = [] self.cfg = cfg = config("send") cfg.stdout = io.StringIO() cfg.stderr = io.StringIO() def tearDown(self): for fn in self._things_to_delete: if os.path.exists(fn): os.unlink(fn) del self.cfg def test_text(self): self.cfg.text = message = "blah blah blah ponies" d, fd_to_send = build_offer(self.cfg) self.assertIn("message", d) self.assertNotIn("file", d) self.assertNotIn("directory", d) self.assertEqual(d["message"], message) self.assertEqual(fd_to_send, None) def test_file(self): self.cfg.what = filename = "my file" message = b"yay ponies\n" send_dir = self.mktemp() os.mkdir(send_dir) abs_filename = os.path.join(send_dir, filename) with open(abs_filename, "wb") as f: f.write(message) self.cfg.cwd = send_dir d, fd_to_send = build_offer(self.cfg) self.assertNotIn("message", d) self.assertIn("file", d) self.assertNotIn("directory", d) self.assertEqual(d["file"]["filesize"], len(message)) self.assertEqual(d["file"]["filename"], filename) self.assertEqual(fd_to_send.tell(), 0) self.assertEqual(fd_to_send.read(), message) def _create_broken_symlink(self): if not hasattr(os, 'symlink'): raise unittest.SkipTest("host OS does not support symlinks") parent_dir = self.mktemp() os.mkdir(parent_dir) send_dir = "dirname" os.mkdir(os.path.join(parent_dir, send_dir)) os.symlink('/non/existent/file', os.path.join(parent_dir, send_dir, 'linky')) send_dir_arg = send_dir self.cfg.what = send_dir_arg self.cfg.cwd = parent_dir def test_broken_symlink_raises_err(self): self._create_broken_symlink() self.cfg.ignore_unsendable_files = False e = self.assertRaises(UnsendableFileError, build_offer, self.cfg) # On english distributions of Linux, this will be # "linky: No such file or directory", but the error may be # different on Windows and other locales and/or Unix variants, so # we'll just assert the part we know about. 
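        # (added note, not in the original: twisted.trial's assertRaises(),
        # when called with a callable, returns the exception instance it
        # caught, which is what lets the next line inspect str(e); the
        # stdlib unittest equivalent would need the context-manager form.)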
self.assertIn("linky: ", str(e)) def test_broken_symlink_is_ignored(self): self._create_broken_symlink() self.cfg.ignore_unsendable_files = True d, fd_to_send = build_offer(self.cfg) self.assertIn('(ignoring error)', self.cfg.stderr.getvalue()) self.assertEqual(d['directory']['numfiles'], 0) self.assertEqual(d['directory']['numbytes'], 0) def test_missing_file(self): self.cfg.what = filename = "missing" send_dir = self.mktemp() os.mkdir(send_dir) self.cfg.cwd = send_dir e = self.assertRaises(TransferError, build_offer, self.cfg) self.assertEqual( str(e), "Cannot send: no file/directory named '%s'" % filename) def _do_test_directory(self, addslash): parent_dir = self.mktemp() os.mkdir(parent_dir) send_dir = "dirname" os.mkdir(os.path.join(parent_dir, send_dir)) ponies = [str(i) for i in range(5)] for p in ponies: with open(os.path.join(parent_dir, send_dir, p), "wb") as f: f.write(("%s ponies\n" % p).encode("ascii")) send_dir_arg = send_dir if addslash: send_dir_arg += os.sep self.cfg.what = send_dir_arg self.cfg.cwd = parent_dir d, fd_to_send = build_offer(self.cfg) self.assertNotIn("message", d) self.assertNotIn("file", d) self.assertIn("directory", d) self.assertEqual(d["directory"]["dirname"], send_dir) self.assertEqual(d["directory"]["mode"], "zipfile/deflated") self.assertEqual(d["directory"]["numfiles"], 5) self.assertIn("numbytes", d["directory"]) self.assertIsInstance(d["directory"]["numbytes"], six.integer_types) self.assertEqual(fd_to_send.tell(), 0) zdata = fd_to_send.read() self.assertEqual(len(zdata), d["directory"]["zipsize"]) fd_to_send.seek(0, 0) with zipfile.ZipFile(fd_to_send, "r", zipfile.ZIP_DEFLATED) as zf: zipnames = zf.namelist() self.assertEqual(list(sorted(ponies)), list(sorted(zipnames))) for name in zipnames: contents = zf.open(name, "r").read() self.assertEqual(("%s ponies\n" % name).encode("ascii"), contents) def test_directory(self): return self._do_test_directory(addslash=False) def test_directory_addslash(self): return self._do_test_directory(addslash=True) def test_unknown(self): self.cfg.what = filename = "unknown" send_dir = self.mktemp() os.mkdir(send_dir) abs_filename = os.path.abspath(os.path.join(send_dir, filename)) self.cfg.cwd = send_dir try: os.mkfifo(abs_filename) except AttributeError: raise unittest.SkipTest("is mkfifo supported on this platform?") # Delete the named pipe for the sake of users who might run "pip # wheel ." in this directory later. That command wants to copy # everything into a tempdir before building a wheel, and the # shutil.copy_tree() is uses can't handle the named pipe. 
self._things_to_delete.append(abs_filename) self.assertFalse(os.path.isfile(abs_filename)) self.assertFalse(os.path.isdir(abs_filename)) e = self.assertRaises(TypeError, build_offer, self.cfg) self.assertEqual( str(e), "'%s' is neither file nor directory" % filename) def test_symlink(self): if not hasattr(os, 'symlink'): raise unittest.SkipTest("host OS does not support symlinks") # build A/B1 -> B2 (==A/B2), and A/B2/C.txt parent_dir = self.mktemp() os.mkdir(parent_dir) os.mkdir(os.path.join(parent_dir, "B2")) with open(os.path.join(parent_dir, "B2", "C.txt"), "wb") as f: f.write(b"success") os.symlink("B2", os.path.join(parent_dir, "B1")) # now send "B1/C.txt" from A, and it should get the right file self.cfg.cwd = parent_dir self.cfg.what = os.path.join("B1", "C.txt") d, fd_to_send = build_offer(self.cfg) self.assertEqual(d["file"]["filename"], "C.txt") self.assertEqual(fd_to_send.read(), b"success") def test_symlink_collapse(self): if not hasattr(os, 'symlink'): raise unittest.SkipTest("host OS does not support symlinks") # build A/B1, A/B1/D.txt # A/B2/C2, A/B2/D.txt # symlink A/B1/C1 -> A/B2/C2 parent_dir = self.mktemp() os.mkdir(parent_dir) os.mkdir(os.path.join(parent_dir, "B1")) with open(os.path.join(parent_dir, "B1", "D.txt"), "wb") as f: f.write(b"fail") os.mkdir(os.path.join(parent_dir, "B2")) os.mkdir(os.path.join(parent_dir, "B2", "C2")) with open(os.path.join(parent_dir, "B2", "D.txt"), "wb") as f: f.write(b"success") os.symlink( os.path.abspath(os.path.join(parent_dir, "B2", "C2")), os.path.join(parent_dir, "B1", "C1")) # Now send "B1/C1/../D.txt" from A. The correct traversal will be: # * start: A # * B1: A/B1 # * C1: follow symlink to A/B2/C2 # * ..: climb to A/B2 # * D.txt: open A/B2/D.txt, which contains "success" # If the code mistakenly uses normpath(), it would do: # * normpath turns B1/C1/../D.txt into B1/D.txt # * start: A # * B1: A/B1 # * D.txt: open A/B1/D.txt , which contains "fail" self.cfg.cwd = parent_dir self.cfg.what = os.path.join("B1", "C1", os.pardir, "D.txt") d, fd_to_send = build_offer(self.cfg) self.assertEqual(d["file"]["filename"], "D.txt") self.assertEqual(fd_to_send.read(), b"success") if os.name == "nt": test_symlink_collapse.todo = "host OS has broken os.path.realpath()" # ntpath.py's realpath() is built out of normpath(), and does not # follow symlinks properly, so this test always fails. "wormhole send # PATH" on windows will do the wrong thing. See # https://bugs.python.org/issue9949" for details. I'm making this a # TODO instead of a SKIP because 1: this causes an observable # misbehavior (albeit in rare circumstances), 2: it probably used to # work (sometimes, but not in #251). See cmd_send.py for more notes. class LocaleFinder: def __init__(self): self._run_once = False @inlineCallbacks def find_utf8_locale(self): if sys.platform == "win32": returnValue("en_US.UTF-8") if self._run_once: returnValue(self._best_locale) self._best_locale = yield self._find_utf8_locale() self._run_once = True returnValue(self._best_locale) @inlineCallbacks def _find_utf8_locale(self): # Click really wants to be running under a unicode-capable locale, # especially on python3. macOS has en-US.UTF-8 but not C.UTF-8, and # most linux boxes have C.UTF-8 but not en-US.UTF-8 . For tests, # figure out which one is present and use that. For runtime, it's a # mess, as really the user must take responsibility for setting their # locale properly. I'm thinking of abandoning Click and going back to # twisted.python.usage to avoid this problem in the future. 
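        # (illustrative note, added commentary): a typical `locale -a`
        # listing is one entry per line, e.g.
        #     C
        #     C.UTF-8
        #     en_US.utf8
        # The loop below keeps the entries ending in ".utf-8"/".utf8"
        # (case-insensitively) and prefers C.UTF-8, then en_US.UTF-8,
        # then any other UTF-8-capable locale it found.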
(out, err, rc) = yield getProcessOutputAndValue("locale", ["-a"]) if rc != 0: log.msg("error running 'locale -a', rc=%s" % (rc, )) log.msg("stderr: %s" % (err, )) returnValue(None) out = out.decode("utf-8") # make sure we get a string utf8_locales = {} for locale in out.splitlines(): locale = locale.strip() if locale.lower().endswith((".utf-8", ".utf8")): utf8_locales[locale.lower()] = locale for wanted in ["C.utf8", "C.UTF-8", "en_US.utf8", "en_US.UTF-8"]: if wanted.lower() in utf8_locales: returnValue(utf8_locales[wanted.lower()]) if utf8_locales: returnValue(list(utf8_locales.values())[0]) returnValue(None) locale_finder = LocaleFinder() class ScriptsBase: def find_executable(self): # to make sure we're running the right executable (in a virtualenv), # we require that our "wormhole" lives in the same directory as our # "python" locations = procutils.which("wormhole") if not locations: raise unittest.SkipTest("unable to find 'wormhole' in $PATH") wormhole = locations[0] if (os.path.dirname(os.path.abspath(wormhole)) != os.path.dirname( sys.executable)): log.msg("locations: %s" % (locations, )) log.msg("sys.executable: %s" % (sys.executable, )) raise unittest.SkipTest( "found the wrong 'wormhole' in $PATH: %s %s" % (wormhole, sys.executable)) return wormhole @inlineCallbacks def is_runnable(self): # One property of Versioneer is that many changes to the source tree # (making a commit, dirtying a previously-clean tree) will change the # version string. Entrypoint scripts frequently insist upon importing # a library version that matches the script version (whatever was # reported when 'pip install' was run), and throw a # DistributionNotFound error when they don't match. This is really # annoying in a workspace created with "pip install -e .", as you # must re-run pip after each commit. # So let's report just one error in this case (from test_version), # and skip the other tests that we know will fail. # Setting LANG/LC_ALL to a unicode-capable locale is necessary to # convince Click to not complain about a forced-ascii locale. My # apologies to folks who want to run tests on a machine that doesn't # have the C.UTF-8 locale installed. locale = yield locale_finder.find_utf8_locale() if not locale: raise unittest.SkipTest("unable to find UTF-8 locale") locale_env = dict(LC_ALL=locale, LANG=locale) wormhole = self.find_executable() res = yield getProcessOutputAndValue( wormhole, ["--version"], env=locale_env) out, err, rc = res if rc != 0: log.msg("wormhole not runnable in this tree:") log.msg("out", out) log.msg("err", err) log.msg("rc", rc) raise unittest.SkipTest("wormhole is not runnable in this tree") returnValue(locale_env) class ScriptVersion(ServerBase, ScriptsBase, unittest.TestCase): # we need Twisted to run the server, but we run the sender and receiver # with deferToThread() @inlineCallbacks def test_version(self): # "wormhole" must be on the path, so e.g. "pip install -e ." in a # virtualenv. This guards against an environment where the tests # below might run the wrong executable. self.maxDiff = None wormhole = self.find_executable() # we must pass on the environment so that "something" doesn't # get sad about UTF8 vs. 
ascii encodings out, err, rc = yield getProcessOutputAndValue( wormhole, ["--version"], env=os.environ) err = err.decode("utf-8") if "DistributionNotFound" in err: log.msg("stderr was %s" % err) last = err.strip().split("\n")[-1] self.fail("wormhole not runnable: %s" % last) ver = out.decode("utf-8") or err self.failUnlessEqual(ver.strip(), "magic-wormhole {}".format(__version__)) self.failUnlessEqual(rc, 0) @implementer(ITorManager) class FakeTor: # use normal endpoints, but record the fact that we were asked def __init__(self): self.endpoints = [] def stream_via(self, host, port, tls=False): self.endpoints.append((host, port, tls)) return endpoints.HostnameEndpoint(reactor, host, port) class PregeneratedCode(ServerBase, ScriptsBase, unittest.TestCase): # we need Twisted to run the server, but we run the sender and receiver # with deferToThread() @inlineCallbacks def setUp(self): self._env = yield self.is_runnable() yield ServerBase.setUp(self) @inlineCallbacks def _do_test(self, as_subprocess=False, mode="text", addslash=False, override_filename=False, fake_tor=False, overwrite=False, mock_accept=False, verify=False): assert mode in ("text", "file", "empty-file", "directory", "slow-text", "slow-sender-text") if fake_tor: assert not as_subprocess send_cfg = config("send") recv_cfg = config("receive") message = "blah blah blah ponies" for cfg in [send_cfg, recv_cfg]: cfg.hide_progress = True cfg.relay_url = self.relayurl cfg.transit_helper = "" cfg.listen = True cfg.code = u"1-abc" cfg.stdout = io.StringIO() cfg.stderr = io.StringIO() cfg.verify = verify send_dir = self.mktemp() os.mkdir(send_dir) receive_dir = self.mktemp() os.mkdir(receive_dir) if mode in ("text", "slow-text", "slow-sender-text"): send_cfg.text = message elif mode in ("file", "empty-file"): if mode == "empty-file": message = "" send_filename = u"testfil\u00EB" # e-with-diaeresis with open(os.path.join(send_dir, send_filename), "w") as f: f.write(message) send_cfg.what = send_filename receive_filename = send_filename recv_cfg.accept_file = False if mock_accept else True if override_filename: recv_cfg.output_file = receive_filename = u"outfile" if overwrite: recv_cfg.output_file = receive_filename existing_file = os.path.join(receive_dir, receive_filename) with open(existing_file, 'w') as f: f.write('pls overwrite me') elif mode == "directory": # $send_dir/ # $send_dir/middle/ # $send_dir/middle/$dirname/ # $send_dir/middle/$dirname/[12345] # cd $send_dir && wormhole send middle/$dirname # cd $receive_dir && wormhole receive # expect: $receive_dir/$dirname/[12345] send_dirname = u"testdir" def message(i): return "test message %d\n" % i os.mkdir(os.path.join(send_dir, u"middle")) source_dir = os.path.join(send_dir, u"middle", send_dirname) os.mkdir(source_dir) modes = {} for i in range(5): path = os.path.join(source_dir, str(i)) with open(path, "w") as f: f.write(message(i)) if i == 3: os.chmod(path, 0o755) modes[i] = stat.S_IMODE(os.stat(path).st_mode) send_dirname_arg = os.path.join(u"middle", send_dirname) if addslash: send_dirname_arg += os.sep send_cfg.what = send_dirname_arg receive_dirname = send_dirname recv_cfg.accept_file = False if mock_accept else True if override_filename: recv_cfg.output_file = receive_dirname = u"outdir" if overwrite: recv_cfg.output_file = receive_dirname os.mkdir(os.path.join(receive_dir, receive_dirname)) if as_subprocess: wormhole_bin = self.find_executable() if send_cfg.text: content_args = ['--text', send_cfg.text] elif send_cfg.what: content_args = [send_cfg.what] # raise the rx 
KEY_TIMER to some large number here, to avoid # spurious test failures on hosts that are slow enough to trigger # the "Waiting for sender..." pacifier message. We can do in # not-as_subprocess, because we can directly patch the value before # running the receiver. But we can't patch across the subprocess # boundary, so we use an environment variable. env = self._env.copy() env["_MAGIC_WORMHOLE_TEST_KEY_TIMER"] = "999999" env["_MAGIC_WORMHOLE_TEST_VERIFY_TIMER"] = "999999" send_args = [ '--relay-url', self.relayurl, '--transit-helper', '', 'send', '--hide-progress', '--code', send_cfg.code, ] + content_args send_d = getProcessOutputAndValue( wormhole_bin, send_args, path=send_dir, env=env, ) recv_args = [ '--relay-url', self.relayurl, '--transit-helper', '', 'receive', '--hide-progress', '--accept-file', recv_cfg.code, ] if override_filename: recv_args.extend(['-o', receive_filename]) receive_d = getProcessOutputAndValue( wormhole_bin, recv_args, path=receive_dir, env=env, ) (send_res, receive_res) = yield gatherResults([send_d, receive_d], True) send_stdout = send_res[0].decode("utf-8") send_stderr = send_res[1].decode("utf-8") send_rc = send_res[2] receive_stdout = receive_res[0].decode("utf-8") receive_stderr = receive_res[1].decode("utf-8") receive_rc = receive_res[2] NL = os.linesep self.assertEqual((send_rc, receive_rc), (0, 0), (send_res, receive_res)) else: send_cfg.cwd = send_dir recv_cfg.cwd = receive_dir if fake_tor: send_cfg.tor = True send_cfg.transit_helper = self.transit tx_tm = FakeTor() with mock.patch( "wormhole.tor_manager.get_tor", return_value=tx_tm, ) as mtx_tm: send_d = cmd_send.send(send_cfg) recv_cfg.tor = True recv_cfg.transit_helper = self.transit rx_tm = FakeTor() with mock.patch( "wormhole.tor_manager.get_tor", return_value=rx_tm, ) as mrx_tm: receive_d = cmd_receive.receive(recv_cfg) else: KEY_TIMER = 0 if mode == "slow-sender-text" else 99999 rxw = [] with mock.patch.object(cmd_receive, "KEY_TIMER", KEY_TIMER): send_d = cmd_send.send(send_cfg) receive_d = cmd_receive.receive( recv_cfg, _debug_stash_wormhole=rxw) # we need to keep KEY_TIMER patched until the receiver # gets far enough to start the timer, which happens after # the code is set if mode == "slow-sender-text": yield rxw[0].get_unverified_key() # The sender might fail, leaving the receiver hanging, or vice # versa. Make sure we don't wait on one side exclusively VERIFY_TIMER = 0 if mode == "slow-text" else 99999 with mock.patch.object(cmd_receive, "VERIFY_TIMER", VERIFY_TIMER): with mock.patch.object(cmd_send, "VERIFY_TIMER", VERIFY_TIMER): if mock_accept or verify: with mock.patch.object( cmd_receive.six.moves, 'input', return_value='yes') as i: yield gatherResults([send_d, receive_d], True) if verify: s = i.mock_calls[0][1][0] mo = re.search(r'^Verifier (\w+)\. 
ok\?', s) self.assertTrue(mo, s) sender_verifier = mo.group(1) else: yield gatherResults([send_d, receive_d], True) if fake_tor: expected_endpoints = [("127.0.0.1", self.rdv_ws_port, False)] if mode in ("file", "directory"): expected_endpoints.append(("127.0.0.1", self.transitport, False)) tx_timing = mtx_tm.call_args[1]["timing"] self.assertEqual(tx_tm.endpoints, expected_endpoints) self.assertEqual( mtx_tm.mock_calls, [mock.call(reactor, False, None, timing=tx_timing)]) rx_timing = mrx_tm.call_args[1]["timing"] self.assertEqual(rx_tm.endpoints, expected_endpoints) self.assertEqual( mrx_tm.mock_calls, [mock.call(reactor, False, None, timing=rx_timing)]) send_stdout = send_cfg.stdout.getvalue() send_stderr = send_cfg.stderr.getvalue() receive_stdout = recv_cfg.stdout.getvalue() receive_stderr = recv_cfg.stderr.getvalue() # all output here comes from a StringIO, which uses \n for # newlines, even if we're on windows NL = "\n" self.maxDiff = None # show full output for assertion failures key_established = "" if mode == "slow-text": key_established = "Key established, waiting for confirmation...\n" self.assertEqual(send_stdout, "") # check sender if mode == "text" or mode == "slow-text": expected = ("Sending text message ({bytes:d} Bytes){NL}" "Wormhole code is: {code}{NL}" "On the other computer, please run:{NL}{NL}" "wormhole receive {verify}{code}{NL}{NL}" "{KE}" "text message sent{NL}").format( bytes=len(message), verify="--verify " if verify else "", code=send_cfg.code, NL=NL, KE=key_established) self.failUnlessEqual(send_stderr, expected) elif mode == "file": self.failUnlessIn(u"Sending {size:s} file named '{name}'{NL}" .format( size=naturalsize(len(message)), name=send_filename, NL=NL), send_stderr) self.failUnlessIn(u"Wormhole code is: {code}{NL}" "On the other computer, please run:{NL}{NL}" "wormhole receive {code}{NL}{NL}".format( code=send_cfg.code, NL=NL), send_stderr) self.failUnlessIn( u"File sent.. waiting for confirmation{NL}" "Confirmation received. Transfer complete.{NL}".format(NL=NL), send_stderr) elif mode == "directory": self.failUnlessIn(u"Sending directory", send_stderr) self.failUnlessIn(u"named 'testdir'", send_stderr) self.failUnlessIn(u"Wormhole code is: {code}{NL}" "On the other computer, please run:{NL}{NL}" "wormhole receive {code}{NL}{NL}".format( code=send_cfg.code, NL=NL), send_stderr) self.failUnlessIn( u"File sent.. waiting for confirmation{NL}" "Confirmation received. 
Transfer complete.{NL}".format(NL=NL), send_stderr) # check receiver if mode in ("text", "slow-text", "slow-sender-text"): self.assertEqual(receive_stdout, message + NL) if mode == "text": if verify: mo = re.search(r'^Verifier (\w+)\.\s*$', receive_stderr) self.assertTrue(mo, receive_stderr) receiver_verifier = mo.group(1) self.assertEqual(sender_verifier, receiver_verifier) else: self.assertEqual(receive_stderr, "") elif mode == "slow-text": self.assertEqual(receive_stderr, key_established) elif mode == "slow-sender-text": self.assertEqual(receive_stderr, "Waiting for sender...\n") elif mode == "file": self.failUnlessEqual(receive_stdout, "") self.failUnlessIn(u"Receiving file ({size:s}) into: {name}".format( size=naturalsize(len(message)), name=receive_filename), receive_stderr) self.failUnlessIn(u"Received file written to ", receive_stderr) fn = os.path.join(receive_dir, receive_filename) self.failUnless(os.path.exists(fn)) with open(fn, "r") as f: self.failUnlessEqual(f.read(), message) elif mode == "directory": self.failUnlessEqual(receive_stdout, "") want = (r"Receiving directory \(\d+ \w+\) into: {name}/" .format(name=receive_dirname)) self.failUnless( re.search(want, receive_stderr), (want, receive_stderr)) self.failUnlessIn( u"Received files written to {name}" .format(name=receive_dirname), receive_stderr) fn = os.path.join(receive_dir, receive_dirname) self.failUnless(os.path.exists(fn), fn) for i in range(5): fn = os.path.join(receive_dir, receive_dirname, str(i)) with open(fn, "r") as f: self.failUnlessEqual(f.read(), message(i)) self.failUnlessEqual(modes[i], stat.S_IMODE( os.stat(fn).st_mode)) def test_text(self): return self._do_test() def test_text_subprocess(self): return self._do_test(as_subprocess=True) def test_text_tor(self): return self._do_test(fake_tor=True) def test_text_verify(self): return self._do_test(verify=True) def test_file(self): return self._do_test(mode="file") def test_file_override(self): return self._do_test(mode="file", override_filename=True) def test_file_overwrite(self): return self._do_test(mode="file", overwrite=True) def test_file_overwrite_mock_accept(self): return self._do_test(mode="file", overwrite=True, mock_accept=True) def test_file_tor(self): return self._do_test(mode="file", fake_tor=True) def test_empty_file(self): return self._do_test(mode="empty-file") def test_directory(self): return self._do_test(mode="directory") def test_directory_addslash(self): return self._do_test(mode="directory", addslash=True) def test_directory_override(self): return self._do_test(mode="directory", override_filename=True) def test_directory_overwrite(self): return self._do_test(mode="directory", overwrite=True) def test_directory_overwrite_mock_accept(self): return self._do_test( mode="directory", overwrite=True, mock_accept=True) def test_slow_text(self): return self._do_test(mode="slow-text") def test_slow_sender_text(self): return self._do_test(mode="slow-sender-text") @inlineCallbacks def _do_test_fail(self, mode, failmode): assert mode in ("file", "directory") assert failmode in ("noclobber", "toobig") send_cfg = config("send") recv_cfg = config("receive") for cfg in [send_cfg, recv_cfg]: cfg.hide_progress = True cfg.relay_url = self.relayurl cfg.transit_helper = "" cfg.listen = False cfg.code = u"1-abc" cfg.stdout = io.StringIO() cfg.stderr = io.StringIO() send_dir = self.mktemp() os.mkdir(send_dir) receive_dir = self.mktemp() os.mkdir(receive_dir) recv_cfg.accept_file = True # don't ask for permission if mode == "file": message = "test message\n" 
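        # (added note) the two failmodes exercised below work like this:
        # "noclobber" pre-creates the destination path so the receiver
        # refuses to overwrite it, while "toobig" patches
        # estimate_free_space() to return 0 so the receiver rejects the
        # transfer for lack of room; both sides must then fail with
        # TransferError.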
send_cfg.what = receive_name = send_filename = "testfile" fn = os.path.join(send_dir, send_filename) with open(fn, "w") as f: f.write(message) size = os.stat(fn).st_size elif mode == "directory": # $send_dir/ # $send_dir/$dirname/ # $send_dir/$dirname/[12345] # cd $send_dir && wormhole send $dirname # cd $receive_dir && wormhole receive # expect: $receive_dir/$dirname/[12345] size = 0 send_cfg.what = receive_name = send_dirname = "testdir" os.mkdir(os.path.join(send_dir, send_dirname)) for i in range(5): path = os.path.join(send_dir, send_dirname, str(i)) with open(path, "w") as f: f.write("test message %d\n" % i) size += os.stat(path).st_size if failmode == "noclobber": PRESERVE = "don't clobber me\n" clobberable = os.path.join(receive_dir, receive_name) with open(clobberable, "w") as f: f.write(PRESERVE) send_cfg.cwd = send_dir send_d = cmd_send.send(send_cfg) recv_cfg.cwd = receive_dir receive_d = cmd_receive.receive(recv_cfg) # both sides will fail if failmode == "noclobber": free_space = 10000000 else: free_space = 0 with mock.patch( "wormhole.cli.cmd_receive.estimate_free_space", return_value=free_space): f = yield self.assertFailure(send_d, TransferError) self.assertEqual( str(f), "remote error, transfer abandoned: transfer rejected") f = yield self.assertFailure(receive_d, TransferError) self.assertEqual(str(f), "transfer rejected") send_stdout = send_cfg.stdout.getvalue() send_stderr = send_cfg.stderr.getvalue() receive_stdout = recv_cfg.stdout.getvalue() receive_stderr = recv_cfg.stderr.getvalue() # all output here comes from a StringIO, which uses \n for # newlines, even if we're on windows NL = "\n" self.maxDiff = None # show full output for assertion failures self.assertEqual(send_stdout, "") self.assertEqual(receive_stdout, "") # check sender if mode == "file": self.failUnlessIn("Sending {size:s} file named '{name}'{NL}" .format( size=naturalsize(size), name=send_filename, NL=NL), send_stderr) self.failUnlessIn("Wormhole code is: {code}{NL}" "On the other computer, please run:{NL}{NL}" "wormhole receive {code}{NL}".format( code=send_cfg.code, NL=NL), send_stderr) self.failIfIn( "File sent.. waiting for confirmation{NL}" "Confirmation received. Transfer complete.{NL}".format(NL=NL), send_stderr) elif mode == "directory": self.failUnlessIn("Sending directory", send_stderr) self.failUnlessIn("named 'testdir'", send_stderr) self.failUnlessIn("Wormhole code is: {code}{NL}" "On the other computer, please run:{NL}{NL}" "wormhole receive {code}{NL}".format( code=send_cfg.code, NL=NL), send_stderr) self.failIfIn( "File sent.. waiting for confirmation{NL}" "Confirmation received. 
Transfer complete.{NL}".format(NL=NL), send_stderr) # check receiver if mode == "file": self.failIfIn("Received file written to ", receive_stderr) if failmode == "noclobber": self.failUnlessIn( "Error: " "refusing to overwrite existing 'testfile'{NL}" .format(NL=NL), receive_stderr) else: self.failUnlessIn( "Error: " "insufficient free space (0B) for file ({size:d}B){NL}" .format(NL=NL, size=size), receive_stderr) elif mode == "directory": self.failIfIn( "Received files written to {name}".format(name=receive_name), receive_stderr) # want = (r"Receiving directory \(\d+ \w+\) into: {name}/" # .format(name=receive_name)) # self.failUnless(re.search(want, receive_stderr), # (want, receive_stderr)) if failmode == "noclobber": self.failUnlessIn( "Error: " "refusing to overwrite existing 'testdir'{NL}" .format(NL=NL), receive_stderr) else: self.failUnlessIn(("Error: " "insufficient free space (0B) for directory" " ({size:d}B){NL}").format( NL=NL, size=size), receive_stderr) if failmode == "noclobber": fn = os.path.join(receive_dir, receive_name) self.failUnless(os.path.exists(fn)) with open(fn, "r") as f: self.failUnlessEqual(f.read(), PRESERVE) def test_fail_file_noclobber(self): return self._do_test_fail("file", "noclobber") def test_fail_directory_noclobber(self): return self._do_test_fail("directory", "noclobber") def test_fail_file_toobig(self): return self._do_test_fail("file", "toobig") def test_fail_directory_toobig(self): return self._do_test_fail("directory", "toobig") class ZeroMode(ServerBase, unittest.TestCase): @inlineCallbacks def test_text(self): send_cfg = config("send") recv_cfg = config("receive") message = "textponies" for cfg in [send_cfg, recv_cfg]: cfg.hide_progress = True cfg.relay_url = self.relayurl cfg.transit_helper = "" cfg.listen = True cfg.zeromode = True cfg.stdout = io.StringIO() cfg.stderr = io.StringIO() send_cfg.text = message # send_cfg.cwd = send_dir # recv_cfg.cwd = receive_dir send_d = cmd_send.send(send_cfg) receive_d = cmd_receive.receive(recv_cfg) yield gatherResults([send_d, receive_d], True) send_stdout = send_cfg.stdout.getvalue() send_stderr = send_cfg.stderr.getvalue() receive_stdout = recv_cfg.stdout.getvalue() receive_stderr = recv_cfg.stderr.getvalue() # all output here comes from a StringIO, which uses \n for # newlines, even if we're on windows NL = "\n" self.maxDiff = None # show full output for assertion failures self.assertEqual(send_stdout, "") # check sender expected = ("Sending text message ({bytes:d} Bytes){NL}" "On the other computer, please run:{NL}" "{NL}" "wormhole receive -0{NL}" "{NL}" "text message sent{NL}").format( bytes=len(message), code=send_cfg.code, NL=NL) self.failUnlessEqual(send_stderr, expected) # check receiver self.assertEqual(receive_stdout, message + NL) self.assertEqual(receive_stderr, "") class NotWelcome(ServerBase, unittest.TestCase): @inlineCallbacks def setUp(self): yield self._setup_relay(error="please upgrade XYZ") self.cfg = cfg = config("send") cfg.hide_progress = True cfg.listen = False cfg.relay_url = self.relayurl cfg.transit_helper = "" cfg.stdout = io.StringIO() cfg.stderr = io.StringIO() @inlineCallbacks def test_sender(self): self.cfg.text = "hi" self.cfg.code = u"1-abc" send_d = cmd_send.send(self.cfg) f = yield self.assertFailure(send_d, WelcomeError) self.assertEqual(str(f), "please upgrade XYZ") @inlineCallbacks def test_receiver(self): self.cfg.code = u"1-abc" receive_d = cmd_receive.receive(self.cfg) f = yield self.assertFailure(receive_d, WelcomeError) self.assertEqual(str(f), "please upgrade 
XYZ") class NoServer(ServerBase, unittest.TestCase): @inlineCallbacks def setUp(self): yield self._setup_relay(None) yield self._relay_server.disownServiceParent() @inlineCallbacks def test_sender(self): cfg = config("send") cfg.hide_progress = True cfg.listen = False cfg.relay_url = self.relayurl cfg.transit_helper = "" cfg.stdout = io.StringIO() cfg.stderr = io.StringIO() cfg.text = "hi" cfg.code = u"1-abc" send_d = cmd_send.send(cfg) e = yield self.assertFailure(send_d, ServerConnectionError) self.assertIsInstance(e.reason, ConnectionRefusedError) @inlineCallbacks def test_sender_allocation(self): cfg = config("send") cfg.hide_progress = True cfg.listen = False cfg.relay_url = self.relayurl cfg.transit_helper = "" cfg.stdout = io.StringIO() cfg.stderr = io.StringIO() cfg.text = "hi" send_d = cmd_send.send(cfg) e = yield self.assertFailure(send_d, ServerConnectionError) self.assertIsInstance(e.reason, ConnectionRefusedError) @inlineCallbacks def test_receiver(self): cfg = config("receive") cfg.hide_progress = True cfg.listen = False cfg.relay_url = self.relayurl cfg.transit_helper = "" cfg.stdout = io.StringIO() cfg.stderr = io.StringIO() cfg.code = u"1-abc" receive_d = cmd_receive.receive(cfg) e = yield self.assertFailure(receive_d, ServerConnectionError) self.assertIsInstance(e.reason, ConnectionRefusedError) class Cleanup(ServerBase, unittest.TestCase): def make_config(self): cfg = config("send") # common options for all tests in this suite cfg.hide_progress = True cfg.relay_url = self.relayurl cfg.transit_helper = "" cfg.stdout = io.StringIO() cfg.stderr = io.StringIO() return cfg @inlineCallbacks @mock.patch('sys.stdout') def test_text(self, stdout): # the rendezvous channel should be deleted after success cfg = self.make_config() cfg.text = "hello" cfg.code = u"1-abc" send_d = cmd_send.send(cfg) receive_d = cmd_receive.receive(cfg) yield send_d yield receive_d cids = self._rendezvous.get_app(cmd_send.APPID).get_nameplate_ids() self.assertEqual(len(cids), 0) @inlineCallbacks def test_text_wrong_password(self): # if the password was wrong, the rendezvous channel should still be # deleted send_cfg = self.make_config() send_cfg.text = "secret message" send_cfg.code = u"1-abc" send_d = cmd_send.send(send_cfg) rx_cfg = self.make_config() rx_cfg.code = u"1-WRONG" receive_d = cmd_receive.receive(rx_cfg) # both sides should be capable of detecting the mismatch yield self.assertFailure(send_d, WrongPasswordError) yield self.assertFailure(receive_d, WrongPasswordError) cids = self._rendezvous.get_app(cmd_send.APPID).get_nameplate_ids() self.assertEqual(len(cids), 0) class ExtractFile(unittest.TestCase): def test_filenames(self): args = mock.Mock() args.relay_url = u"" ef = cmd_receive.Receiver(args)._extract_file extract_dir = os.path.abspath(self.mktemp()) zf = mock.Mock() zi = mock.Mock() zi.filename = "ok" zi.external_attr = 5 << 16 expected = os.path.join(extract_dir, "ok") with mock.patch.object(cmd_receive.os, "chmod") as chmod: ef(zf, zi, extract_dir) self.assertEqual(zf.extract.mock_calls, [mock.call(zi.filename, path=extract_dir)]) self.assertEqual(chmod.mock_calls, [mock.call(expected, 5)]) zf = mock.Mock() zi = mock.Mock() zi.filename = "../haha" e = self.assertRaises(ValueError, ef, zf, zi, extract_dir) self.assertIn("malicious zipfile", str(e)) zf = mock.Mock() zi = mock.Mock() zi.filename = "haha//root" # abspath squashes this, hopefully zipfile # does too zi.external_attr = 5 << 16 expected = os.path.join(extract_dir, "haha", "root") with mock.patch.object(cmd_receive.os, 
"chmod") as chmod: ef(zf, zi, extract_dir) self.assertEqual(zf.extract.mock_calls, [mock.call(zi.filename, path=extract_dir)]) self.assertEqual(chmod.mock_calls, [mock.call(expected, 5)]) zf = mock.Mock() zi = mock.Mock() zi.filename = "/etc/passwd" e = self.assertRaises(ValueError, ef, zf, zi, extract_dir) self.assertIn("malicious zipfile", str(e)) class AppID(ServerBase, unittest.TestCase): @inlineCallbacks def setUp(self): yield super(AppID, self).setUp() self.cfg = cfg = config("send") # common options for all tests in this suite cfg.hide_progress = True cfg.relay_url = self.relayurl cfg.transit_helper = "" cfg.stdout = io.StringIO() cfg.stderr = io.StringIO() @inlineCallbacks def test_override(self): # make sure we use the overridden appid, not the default self.cfg.text = "hello" self.cfg.appid = u"appid2" self.cfg.code = u"1-abc" send_d = cmd_send.send(self.cfg) receive_d = cmd_receive.receive(self.cfg) yield send_d yield receive_d used = self._usage_db.execute("SELECT DISTINCT `app_id`" " FROM `nameplates`").fetchall() self.assertEqual(len(used), 1, used) self.assertEqual(used[0]["app_id"], u"appid2") class Welcome(unittest.TestCase): def do(self, welcome_message, my_version="2.0"): stderr = io.StringIO() welcome.handle_welcome(welcome_message, "url", my_version, stderr) return stderr.getvalue() def test_empty(self): stderr = self.do({}) self.assertEqual(stderr, "") def test_version_current(self): stderr = self.do({"current_cli_version": "2.0"}) self.assertEqual(stderr, "") def test_version_old(self): stderr = self.do({"current_cli_version": "3.0"}) expected = ("Warning: errors may occur unless both sides are" " running the same version\n" "Server claims 3.0 is current, but ours is 2.0\n") self.assertEqual(stderr, expected) def test_version_unreleased(self): stderr = self.do( { "current_cli_version": "3.0" }, my_version="2.5+middle.something") self.assertEqual(stderr, "") def test_motd(self): stderr = self.do({"motd": "hello"}) self.assertEqual(stderr, "Server (at url) says:\n hello\n") class Dispatch(unittest.TestCase): @inlineCallbacks def test_success(self): cfg = config("send") cfg.stderr = io.StringIO() called = [] def fake(): called.append(1) yield cli._dispatch_command(reactor, cfg, fake) self.assertEqual(called, [1]) self.assertEqual(cfg.stderr.getvalue(), "") @inlineCallbacks def test_timing(self): cfg = config("send") cfg.stderr = io.StringIO() cfg.timing = mock.Mock() cfg.dump_timing = "filename" def fake(): pass yield cli._dispatch_command(reactor, cfg, fake) self.assertEqual(cfg.stderr.getvalue(), "") self.assertEqual(cfg.timing.mock_calls[-1], mock.call.write("filename", cfg.stderr)) @inlineCallbacks def test_wrong_password_error(self): cfg = config("send") cfg.stderr = io.StringIO() def fake(): raise WrongPasswordError("abcd") yield self.assertFailure( cli._dispatch_command(reactor, cfg, fake), SystemExit) expected = fill("ERROR: " + dedent(WrongPasswordError.__doc__)) + "\n" self.assertEqual(cfg.stderr.getvalue(), expected) @inlineCallbacks def test_welcome_error(self): cfg = config("send") cfg.stderr = io.StringIO() def fake(): raise WelcomeError("abcd") yield self.assertFailure( cli._dispatch_command(reactor, cfg, fake), SystemExit) expected = ( fill("ERROR: " + dedent(WelcomeError.__doc__)) + "\n\nabcd\n") self.assertEqual(cfg.stderr.getvalue(), expected) @inlineCallbacks def test_transfer_error(self): cfg = config("send") cfg.stderr = io.StringIO() def fake(): raise TransferError("abcd") yield self.assertFailure( cli._dispatch_command(reactor, cfg, fake), 
SystemExit) expected = "TransferError: abcd\n" self.assertEqual(cfg.stderr.getvalue(), expected) @inlineCallbacks def test_server_connection_error(self): cfg = config("send") cfg.stderr = io.StringIO() def fake(): raise ServerConnectionError("URL", ValueError("abcd")) yield self.assertFailure( cli._dispatch_command(reactor, cfg, fake), SystemExit) expected = fill( "ERROR: " + dedent(ServerConnectionError.__doc__)) + "\n" expected += "(relay URL was URL)\n" expected += "abcd\n" self.assertEqual(cfg.stderr.getvalue(), expected) @inlineCallbacks def test_other_error(self): cfg = config("send") cfg.stderr = io.StringIO() def fake(): raise ValueError("abcd") # I'm seeing unicode problems with the Failure().printTraceback, and # the output would be kind of unpredictable anyways, so we'll mock it # out here. f = mock.Mock() def mock_print(file): file.write(u"\n") f.printTraceback = mock_print with mock.patch("wormhole.cli.cli.Failure", return_value=f): yield self.assertFailure( cli._dispatch_command(reactor, cfg, fake), SystemExit) expected = "\nERROR: abcd\n" self.assertEqual(cfg.stderr.getvalue(), expected) class Help(unittest.TestCase): def _check_top_level_help(self, got): # the main wormhole.cli.cli.wormhole docstring should be in the # output, but formatted differently self.assertIn("Create a Magic Wormhole and communicate through it.", got) self.assertIn("--relay-url", got) self.assertIn("Receive a text message, file, or directory", got) def test_help(self): result = CliRunner().invoke(cli.wormhole, ["help"]) self._check_top_level_help(result.output) self.assertEqual(result.exit_code, 0) def test_dash_dash_help(self): result = CliRunner().invoke(cli.wormhole, ["--help"]) self._check_top_level_help(result.output) self.assertEqual(result.exit_code, 0) magic-wormhole-0.12.0/src/wormhole/test/test_eventual.py000066400000000000000000000035431400712516500233500ustar00rootroot00000000000000from __future__ import print_function, unicode_literals from twisted.internet import reactor from twisted.internet.defer import Deferred, inlineCallbacks from twisted.internet.task import Clock from twisted.trial import unittest import mock from ..eventual import EventualQueue class IntentionalError(Exception): pass class Eventual(unittest.TestCase, object): def test_eventually(self): c = Clock() eq = EventualQueue(c) c1 = mock.Mock() eq.eventually(c1, "arg1", "arg2", kwarg1="kw1") eq.eventually(c1, "arg3", "arg4", kwarg5="kw5") d2 = eq.fire_eventually() d3 = eq.fire_eventually("value") self.assertEqual(c1.mock_calls, []) self.assertNoResult(d2) self.assertNoResult(d3) eq.flush_sync() self.assertEqual(c1.mock_calls, [ mock.call("arg1", "arg2", kwarg1="kw1"), mock.call("arg3", "arg4", kwarg5="kw5") ]) self.assertEqual(self.successResultOf(d2), None) self.assertEqual(self.successResultOf(d3), "value") def test_error(self): c = Clock() eq = EventualQueue(c) c1 = mock.Mock(side_effect=IntentionalError) eq.eventually(c1, "arg1", "arg2", kwarg1="kw1") self.assertEqual(c1.mock_calls, []) eq.flush_sync() self.assertEqual(c1.mock_calls, [mock.call("arg1", "arg2", kwarg1="kw1")]) self.flushLoggedErrors(IntentionalError) @inlineCallbacks def test_flush(self): eq = EventualQueue(reactor) d1 = eq.fire_eventually() d2 = Deferred() def _more(res): eq.eventually(d2.callback, None) d1.addCallback(_more) yield eq.flush() # d1 will fire, which will queue d2 to fire, and the flush() ought to # wait for d2 too self.successResultOf(d2) 
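# A minimal usage sketch of the EventualQueue API exercised above (added
# illustration, not part of the original test suite; it assumes only the
# eventually()/fire_eventually()/flush_sync() calls used in this file):
#
# from twisted.internet.task import Clock
# from wormhole.eventual import EventualQueue
#
# clock = Clock()
# eq = EventualQueue(clock)
# eq.eventually(print, "runs later")   # queued, nothing happens yet
# d = eq.fire_eventually("value")      # Deferred that fires on flush
# eq.flush_sync()                      # drives the Clock; both fire now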
magic-wormhole-0.12.0/src/wormhole/test/test_hints.py000066400000000000000000000176121400712516500226540ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import io from collections import namedtuple import mock from twisted.internet import endpoints, reactor from twisted.trial import unittest from .._hints import (endpoint_from_hint_obj, parse_hint_argv, parse_tcp_v1_hint, describe_hint_obj, parse_hint, encode_hint, DirectTCPV1Hint, TorTCPV1Hint, RelayV1Hint) UnknownHint = namedtuple("UnknownHint", ["stuff"]) class Hints(unittest.TestCase): def test_endpoint_from_hint_obj(self): def efho(hint, tor=None): return endpoint_from_hint_obj(hint, tor, reactor) self.assertIsInstance(efho(DirectTCPV1Hint("host", 1234, 0.0)), endpoints.HostnameEndpoint) self.assertEqual(efho("unknown:stuff:yowza:pivlor"), None) # tor=None self.assertEqual(efho(TorTCPV1Hint("host", "port", 0)), None) self.assertEqual(efho(UnknownHint("foo")), None) tor = mock.Mock() def tor_ep(hostname, port): if hostname == "non-public": raise ValueError return ("tor_ep", hostname, port) tor.stream_via = mock.Mock(side_effect=tor_ep) self.assertEqual(efho(DirectTCPV1Hint("host", 1234, 0.0), tor), ("tor_ep", "host", 1234)) self.assertEqual(efho(TorTCPV1Hint("host2.onion", 1234, 0.0), tor), ("tor_ep", "host2.onion", 1234)) self.assertEqual( efho(DirectTCPV1Hint("non-public", 1234, 0.0), tor), None) self.assertEqual(efho(UnknownHint("foo"), tor), None) def test_comparable(self): h1 = DirectTCPV1Hint("hostname", "port1", 0.0) h1b = DirectTCPV1Hint("hostname", "port1", 0.0) h2 = DirectTCPV1Hint("hostname", "port2", 0.0) r1 = RelayV1Hint(tuple(sorted([h1, h2]))) r2 = RelayV1Hint(tuple(sorted([h2, h1]))) r3 = RelayV1Hint(tuple(sorted([h1b, h2]))) self.assertEqual(r1, r2) self.assertEqual(r2, r3) self.assertEqual(len(set([r1, r2, r3])), 1) def test_parse_tcp_v1_hint(self): p = parse_tcp_v1_hint self.assertEqual(p({"type": "unknown"}), None) h = p({"type": "direct-tcp-v1", "hostname": "foo", "port": 1234}) self.assertEqual(h, DirectTCPV1Hint("foo", 1234, 0.0)) h = p({ "type": "direct-tcp-v1", "hostname": "foo", "port": 1234, "priority": 2.5 }) self.assertEqual(h, DirectTCPV1Hint("foo", 1234, 2.5)) h = p({"type": "tor-tcp-v1", "hostname": "foo", "port": 1234}) self.assertEqual(h, TorTCPV1Hint("foo", 1234, 0.0)) h = p({ "type": "tor-tcp-v1", "hostname": "foo", "port": 1234, "priority": 2.5 }) self.assertEqual(h, TorTCPV1Hint("foo", 1234, 2.5)) self.assertEqual(p({ "type": "direct-tcp-v1" }), None) # missing hostname self.assertEqual(p({ "type": "direct-tcp-v1", "hostname": 12 }), None) # invalid hostname self.assertEqual( p({ "type": "direct-tcp-v1", "hostname": "foo" }), None) # missing port self.assertEqual( p({ "type": "direct-tcp-v1", "hostname": "foo", "port": "not a number" }), None) # invalid port def test_parse_hint(self): p = parse_hint self.assertEqual(p({"type": "direct-tcp-v1", "hostname": "foo", "port": 12}), DirectTCPV1Hint("foo", 12, 0.0)) self.assertEqual(p({"type": "relay-v1", "hints": [ {"type": "direct-tcp-v1", "hostname": "foo", "port": 12}, {"type": "unrecognized"}, {"type": "direct-tcp-v1", "hostname": "bar", "port": 13}]}), RelayV1Hint([DirectTCPV1Hint("foo", 12, 0.0), DirectTCPV1Hint("bar", 13, 0.0)])) def test_parse_hint_argv(self): def p(hint): stderr = io.StringIO() value = parse_hint_argv(hint, stderr=stderr) return value, stderr.getvalue() h, stderr = p("tcp:host:1234") self.assertEqual(h, DirectTCPV1Hint("host", 1234, 0.0)) self.assertEqual(stderr, "") h, stderr = 
p("tcp:host:1234:priority=2.6") self.assertEqual(h, DirectTCPV1Hint("host", 1234, 2.6)) self.assertEqual(stderr, "") h, stderr = p("tcp:host:1234:unknown=stuff") self.assertEqual(h, DirectTCPV1Hint("host", 1234, 0.0)) self.assertEqual(stderr, "") h, stderr = p("$!@#^") self.assertEqual(h, None) self.assertEqual(stderr, "unparseable hint '$!@#^'\n") h, stderr = p("unknown:stuff") self.assertEqual(h, None) self.assertEqual(stderr, "unknown hint type 'unknown' in 'unknown:stuff'\n") h, stderr = p("tcp:just-a-hostname") self.assertEqual(h, None) self.assertEqual( stderr, "unparseable TCP hint (need more colons) 'tcp:just-a-hostname'\n") h, stderr = p("tcp:host:number") self.assertEqual(h, None) self.assertEqual(stderr, "non-numeric port in TCP hint 'tcp:host:number'\n") h, stderr = p("tcp:host:1234:priority=bad") self.assertEqual(h, None) self.assertEqual( stderr, "non-float priority= in TCP hint 'tcp:host:1234:priority=bad'\n") def test_describe_hint_obj(self): d = describe_hint_obj self.assertEqual(d(DirectTCPV1Hint("host", 1234, 0.0), False, False), "->tcp:host:1234") self.assertEqual(d(DirectTCPV1Hint("host", 1234, 0.0), True, False), "->relay:tcp:host:1234") self.assertEqual(d(DirectTCPV1Hint("host", 1234, 0.0), False, True), "tor->tcp:host:1234") self.assertEqual(d(DirectTCPV1Hint("host", 1234, 0.0), True, True), "tor->relay:tcp:host:1234") self.assertEqual(d(TorTCPV1Hint("host", 1234, 0.0), False, False), "->tor:host:1234") self.assertEqual(d(UnknownHint("stuff"), False, False), "->%s" % str(UnknownHint("stuff"))) def test_encode_hint(self): e = encode_hint self.assertEqual(e(DirectTCPV1Hint("host", 1234, 1.0)), {"type": "direct-tcp-v1", "priority": 1.0, "hostname": "host", "port": 1234}) self.assertEqual(e(RelayV1Hint([DirectTCPV1Hint("foo", 12, 0.0), DirectTCPV1Hint("bar", 13, 0.0)])), {"type": "relay-v1", "hints": [ {"type": "direct-tcp-v1", "hostname": "foo", "port": 12, "priority": 0.0}, {"type": "direct-tcp-v1", "hostname": "bar", "port": 13, "priority": 0.0}, ]}) self.assertEqual(e(TorTCPV1Hint("host", 1234, 1.0)), {"type": "tor-tcp-v1", "priority": 1.0, "hostname": "host", "port": 1234}) e = self.assertRaises(ValueError, e, "not a Hint") self.assertIn("unknown hint type", str(e)) self.assertIn("not a Hint", str(e)) magic-wormhole-0.12.0/src/wormhole/test/test_hkdf.py000066400000000000000000000033771400712516500224460ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import unittest from binascii import unhexlify # , hexlify from hkdf import Hkdf # def generate_KAT(): # print("KAT = [") # for salt in (b"", b"salt"): # for context in (b"", b"context"): # skm = b"secret" # out = HKDF(skm, 64, XTS=salt, CTXinfo=context) # hexout = " '%s' +\n '%s'" % (hexlify(out[:32]), # hexlify(out[32:])) # print(" (%r, %r, %r,\n%s)," % (salt, context, skm, hexout)) # print("]") KAT = [ ('', '', 'secret', '2f34e5ff91ec85d53ca9b543683174d0cf550b60d5f52b24c97b386cfcf6cbbf' + '9cfd42fd37e1e5a214d15f03058d7fee63dc28f564b7b9fe3da514f80daad4bf'), ('', 'context', 'secret', 'c24c303a1adfb4c3e2b092e6254ed481c41d8955ba8ec3f6a1473493a60c957b' + '31b723018ca75557214d3d5c61c0c7a5315b103b21ff00cb03ebe023dc347a47'), ('salt', '', 'secret', 'f1156507c39b0e326159e778696253122de430899a8df2484040a85a5f95ceb1' + 'dfca555d4cc603bdf7153ed1560de8cbc3234b27a6d2be8e8ca202d90649679a'), ('salt', 'context', 'secret', '61a4f201a867bcc12381ddb180d27074408d03ee9d5750855e5a12d967fa060f' + '10336ead9370927eaabb0d60b259346ee5f57eb7ceba8c72f1ed3f2932b1bf19'), ] class TestKAT(unittest.TestCase): # 
note: this uses SHA256 def test_kat(self): for (salt, context, skm, expected_hexout) in KAT: expected_out = unhexlify(expected_hexout) for outlen in range(0, len(expected_out)): out = Hkdf(salt.encode("ascii"), skm.encode("ascii")).expand( context.encode("ascii"), outlen) self.assertEqual(out, expected_out[:outlen]) # if __name__ == '__main__': # generate_KAT() magic-wormhole-0.12.0/src/wormhole/test/test_ipaddrs.py000066400000000000000000000146731400712516500231610ustar00rootroot00000000000000import errno import os import re import subprocess from twisted.trial import unittest from .. import ipaddrs DOTTED_QUAD_RE = re.compile(r"^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$") MOCK_IPADDR_OUTPUT = """\ 1: lo: mtu 16436 qdisc noqueue state UNKNOWN \n\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo inet6 ::1/128 scope host \n\ valid_lft forever preferred_lft forever 2: eth1: mtu 1500 qdisc pfifo_fast state UP \ qlen 1000 link/ether d4:3d:7e:01:b4:3e brd ff:ff:ff:ff:ff:ff inet 192.168.0.6/24 brd 192.168.0.255 scope global eth1 inet6 fe80::d63d:7eff:fe01:b43e/64 scope link \n\ valid_lft forever preferred_lft forever 3: wlan0: mtu 1500 qdisc mq state UP qlen\ 1000 link/ether 90:f6:52:27:15:0a brd ff:ff:ff:ff:ff:ff inet 192.168.0.2/24 brd 192.168.0.255 scope global wlan0 inet6 fe80::92f6:52ff:fe27:150a/64 scope link \n\ valid_lft forever preferred_lft forever """ MOCK_IFCONFIG_OUTPUT = """\ eth1 Link encap:Ethernet HWaddr d4:3d:7e:01:b4:3e \n\ inet addr:192.168.0.6 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::d63d:7eff:fe01:b43e/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:154242234 errors:0 dropped:0 overruns:0 frame:0 TX packets:155461891 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 \n\ RX bytes:84367213640 (78.5 GiB) TX bytes:73401695329 (68.3 GiB) Interrupt:20 Memory:f4f00000-f4f20000 \n\ lo Link encap:Local Loopback \n\ inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:27449267 errors:0 dropped:0 overruns:0 frame:0 TX packets:27449267 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 \n\ RX bytes:192643017823 (179.4 GiB) TX bytes:192643017823 (179.4 GiB) wlan0 Link encap:Ethernet HWaddr 90:f6:52:27:15:0a \n\ inet addr:192.168.0.2 Bcast:192.168.0.255 Mask:255.255.255.0 inet6 addr: fe80::92f6:52ff:fe27:150a/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:12352750 errors:0 dropped:0 overruns:0 frame:0 TX packets:4501451 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 \n\ RX bytes:3916475942 (3.6 GiB) TX bytes:458353654 (437.1 MiB) """ # This is actually from a VirtualBox VM running XP. MOCK_ROUTE_OUTPUT = """\ =========================================================================== Interface List 0x1 ........................... MS TCP Loopback interface 0x2 ...08 00 27 c3 80 ad ...... 
AMD PCNET Family PCI Ethernet Adapter - \ Packet Scheduler Miniport =========================================================================== =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 10.0.2.2 10.0.2.15 20 10.0.2.0 255.255.255.0 10.0.2.15 10.0.2.15 20 10.0.2.15 255.255.255.255 127.0.0.1 127.0.0.1 20 10.255.255.255 255.255.255.255 10.0.2.15 10.0.2.15 20 127.0.0.0 255.0.0.0 127.0.0.1 127.0.0.1 1 224.0.0.0 240.0.0.0 10.0.2.15 10.0.2.15 20 255.255.255.255 255.255.255.255 10.0.2.15 10.0.2.15 1 Default Gateway: 10.0.2.2 =========================================================================== Persistent Routes: None """ UNIX_TEST_ADDRESSES = set(["127.0.0.1", "192.168.0.6", "192.168.0.2"]) WINDOWS_TEST_ADDRESSES = set(["127.0.0.1", "10.0.2.15"]) CYGWIN_TEST_ADDRESSES = set(["127.0.0.1"]) class FakeProcess: def __init__(self, output, err): self.output = output self.err = err def communicate(self): return (self.output, self.err) class ListAddresses(unittest.TestCase): def test_list(self): addresses = ipaddrs.find_addresses() self.failUnlessIn("127.0.0.1", addresses) self.failIfIn("0.0.0.0", addresses) # David A.'s OpenSolaris box timed out on this test one time when it was at # 2s. test_list.timeout = 4 def _test_list_mock(self, command, output, expected): self.first = True def call_Popen(args, bufsize=0, executable=None, stdin=None, stdout=None, stderr=None, preexec_fn=None, close_fds=False, shell=False, cwd=None, env=None, universal_newlines=False, startupinfo=None, creationflags=0): if self.first: self.first = False e = OSError("EINTR") e.errno = errno.EINTR raise e elif os.path.basename(args[0]) == command: return FakeProcess(output, "") else: e = OSError("[Errno 2] No such file or directory") e.errno = errno.ENOENT raise e self.patch(subprocess, 'Popen', call_Popen) self.patch(os.path, 'isfile', lambda x: True) def call_which(name): return [name] self.patch(ipaddrs, 'which', call_which) addresses = ipaddrs.find_addresses() self.failUnlessEquals(set(addresses), set(expected)) def test_list_mock_ip_addr(self): self.patch(ipaddrs, 'platform', "linux2") self._test_list_mock("ip", MOCK_IPADDR_OUTPUT, UNIX_TEST_ADDRESSES) def test_list_mock_ifconfig(self): self.patch(ipaddrs, 'platform', "linux2") self._test_list_mock("ifconfig", MOCK_IFCONFIG_OUTPUT, UNIX_TEST_ADDRESSES) def test_list_mock_route(self): self.patch(ipaddrs, 'platform', "win32") self._test_list_mock("route.exe", MOCK_ROUTE_OUTPUT, WINDOWS_TEST_ADDRESSES) def test_list_mock_cygwin(self): self.patch(ipaddrs, 'platform', "cygwin") self._test_list_mock(None, None, CYGWIN_TEST_ADDRESSES) magic-wormhole-0.12.0/src/wormhole/test/test_journal.py000066400000000000000000000020461400712516500231740ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals from twisted.trial import unittest from .. 
import journal from .._interfaces import IJournal class Journal(unittest.TestCase): def test_journal(self): events = [] j = journal.Journal(lambda: events.append("checkpoint")) self.assert_(IJournal.providedBy(j)) with j.process(): j.queue_outbound(events.append, "message1") j.queue_outbound(events.append, "message2") self.assertEqual(events, []) self.assertEqual(events, ["checkpoint", "message1", "message2"]) def test_immediate(self): events = [] j = journal.ImmediateJournal() self.assert_(IJournal.providedBy(j)) with j.process(): j.queue_outbound(events.append, "message1") self.assertEqual(events, ["message1"]) j.queue_outbound(events.append, "message2") self.assertEqual(events, ["message1", "message2"]) self.assertEqual(events, ["message1", "message2"]) magic-wormhole-0.12.0/src/wormhole/test/test_keys.py000066400000000000000000000070121400712516500224730ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import mock from twisted.trial import unittest from .._key import derive_key, derive_phase_key, encrypt_data, decrypt_data from ..util import bytes_to_hexstr, hexstr_to_bytes class Derive(unittest.TestCase): def test_derive_errors(self): self.assertRaises(TypeError, derive_key, 123, b"purpose") self.assertRaises(TypeError, derive_key, b"key", 123) self.assertRaises(TypeError, derive_key, b"key", b"purpose", "not len") def test_derive_key(self): m = "588ba9eef353778b074413a0140205d90d7479e36e0dd4ee35bb729d26131ef1" main = hexstr_to_bytes(m) dk1 = derive_key(main, b"purpose1") self.assertEqual(bytes_to_hexstr(dk1), "835b5df80ce9ca46908e8524fb308649" "122cfbcefbeaa7e65061c6ef08ee1b2a") dk2 = derive_key(main, b"purpose2", 10) self.assertEqual(bytes_to_hexstr(dk2), "f2238e84315b47eb6279") def test_derive_phase_key(self): m = "588ba9eef353778b074413a0140205d90d7479e36e0dd4ee35bb729d26131ef1" main = hexstr_to_bytes(m) dk11 = derive_phase_key(main, "side1", "phase1") self.assertEqual(bytes_to_hexstr(dk11), "3af6a61d1a111225cc8968c6ca6265ef" "e892065c3ab46de79dda21306b062990") dk12 = derive_phase_key(main, "side1", "phase2") self.assertEqual(bytes_to_hexstr(dk12), "88a1dd12182d989ff498022a9656d1e2" "806f17328d8bf5d8d0c9753e4381a752") dk21 = derive_phase_key(main, "side2", "phase1") self.assertEqual(bytes_to_hexstr(dk21), "a306627b436ec23bdae3af8fa90c9ac9" "27780d86be1831003e7f617c518ea689") dk22 = derive_phase_key(main, "side2", "phase2") self.assertEqual(bytes_to_hexstr(dk22), "bf99e3e16420f2dad33f9b1ccb0be146" "2b253d639dacdb50ed9496fa528d8758") class Encrypt(unittest.TestCase): def test_encrypt(self): k = "ddc543ef8e4629a603d39dd0307a51bb1e7adb9cb259f6b085c91d0842a18679" key = hexstr_to_bytes(k) plaintext = hexstr_to_bytes("edc089a518219ec1cee184e89d2d37af") self.assertEqual(len(plaintext), 16) nonce = hexstr_to_bytes("2d5e43eb465aa42e750f991e425bee48" "5f06abad7e04af80") self.assertEqual(len(nonce), 24) with mock.patch("nacl.utils.random", return_value=nonce): encrypted = encrypt_data(key, plaintext) self.assertEqual(len(encrypted), 24 + 16 + 16) self.assertEqual(bytes_to_hexstr(encrypted), "2d5e43eb465aa42e750f991e425bee48" "5f06abad7e04af80fe318e39d0e4ce93" "2d2b54b300c56d2cda55ee5f0488d63e" "b1d5f76f7919a49a") def test_decrypt(self): k = "ddc543ef8e4629a603d39dd0307a51bb1e7adb9cb259f6b085c91d0842a18679" key = hexstr_to_bytes(k) encrypted = hexstr_to_bytes("2d5e43eb465aa42e750f991e425bee48" "5f06abad7e04af80fe318e39d0e4ce93" "2d2b54b300c56d2cda55ee5f0488d63e" "b1d5f76f7919a49a") decrypted = decrypt_data(key, encrypted) self.assertEqual(len(decrypted), 
len(encrypted) - 24 - 16) self.assertEqual(bytes_to_hexstr(decrypted), "edc089a518219ec1cee184e89d2d37af") magic-wormhole-0.12.0/src/wormhole/test/test_machines.py000066400000000000000000001607301400712516500233160ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import json from nacl.secret import SecretBox from spake2 import SPAKE2_Symmetric from twisted.trial import unittest from zope.interface import directlyProvides, implementer import mock from .. import (__version__, _allocator, _boss, _code, _input, _key, _lister, _mailbox, _nameplate, _order, _receive, _rendezvous, _send, _terminator, errors, timing) from .._interfaces import (IAllocator, IBoss, ICode, IDilator, IInput, IKey, ILister, IMailbox, INameplate, IOrder, IReceive, IRendezvousConnector, ISend, ITerminator, IWordlist, ITorManager) from .._key import derive_key, derive_phase_key, encrypt_data from ..journal import ImmediateJournal from ..util import (bytes_to_dict, bytes_to_hexstr, dict_to_bytes, hexstr_to_bytes, to_bytes) @implementer(IWordlist) class FakeWordList(object): def choose_words(self, length): return "-".join(["word"] * length) def get_completions(self, prefix): self._get_completions_prefix = prefix return self._completions class Dummy: def __init__(self, name, events, iface, *meths): self.name = name self.events = events if iface: directlyProvides(self, iface) for meth in meths: self.mock(meth) self.retval = None def mock(self, meth): def log(*args): self.events.append(("%s.%s" % (self.name, meth), ) + args) return self.retval setattr(self, meth, log) class Send(unittest.TestCase): def build(self): events = [] s = _send.Send(u"side", timing.DebugTiming()) m = Dummy("m", events, IMailbox, "add_message") s.wire(m) return s, m, events def test_send_first(self): s, m, events = self.build() s.send("phase1", b"msg") self.assertEqual(events, []) key = b"\x00" * 32 nonce1 = b"\x00" * SecretBox.NONCE_SIZE with mock.patch("nacl.utils.random", side_effect=[nonce1]) as r: s.got_verified_key(key) self.assertEqual(r.mock_calls, [mock.call(SecretBox.NONCE_SIZE)]) # print(bytes_to_hexstr(events[0][2])) enc1 = hexstr_to_bytes( ("000000000000000000000000000000000000000000000000" "22f1a46c3c3496423c394621a2a5a8cf275b08")) self.assertEqual(events, [("m.add_message", "phase1", enc1)]) events[:] = [] nonce2 = b"\x02" * SecretBox.NONCE_SIZE with mock.patch("nacl.utils.random", side_effect=[nonce2]) as r: s.send("phase2", b"msg") self.assertEqual(r.mock_calls, [mock.call(SecretBox.NONCE_SIZE)]) enc2 = hexstr_to_bytes( ("0202020202020202020202020202020202020202" "020202026660337c3eac6513c0dac9818b62ef16d9cd7e")) self.assertEqual(events, [("m.add_message", "phase2", enc2)]) def test_key_first(self): s, m, events = self.build() key = b"\x00" * 32 s.got_verified_key(key) self.assertEqual(events, []) nonce1 = b"\x00" * SecretBox.NONCE_SIZE with mock.patch("nacl.utils.random", side_effect=[nonce1]) as r: s.send("phase1", b"msg") self.assertEqual(r.mock_calls, [mock.call(SecretBox.NONCE_SIZE)]) enc1 = hexstr_to_bytes(("00000000000000000000000000000000000000000000" "000022f1a46c3c3496423c394621a2a5a8cf275b08")) self.assertEqual(events, [("m.add_message", "phase1", enc1)]) events[:] = [] nonce2 = b"\x02" * SecretBox.NONCE_SIZE with mock.patch("nacl.utils.random", side_effect=[nonce2]) as r: s.send("phase2", b"msg") self.assertEqual(r.mock_calls, [mock.call(SecretBox.NONCE_SIZE)]) enc2 = hexstr_to_bytes( ("0202020202020202020202020202020202020" "202020202026660337c3eac6513c0dac9818b62ef16d9cd7e")) 
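        # note on framing (annotation, derived from the length checks in
        # test_keys above): encrypt_data() returns nonce || ciphertext, where
        # SecretBox prepends the 24-byte nonce and the ciphertext carries a
        # 16-byte authenticator tag, so the 3-byte b"msg" becomes a 43-byte
        # record (24 + 3 + 16), matching the hex constants above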
self.assertEqual(events, [("m.add_message", "phase2", enc2)]) class Order(unittest.TestCase): def build(self): events = [] o = _order.Order(u"side", timing.DebugTiming()) k = Dummy("k", events, IKey, "got_pake") r = Dummy("r", events, IReceive, "got_message") o.wire(k, r) return o, k, r, events def test_in_order(self): o, k, r, events = self.build() o.got_message(u"side", u"pake", b"body") self.assertEqual(events, [("k.got_pake", b"body")]) # right away o.got_message(u"side", u"version", b"body") o.got_message(u"side", u"1", b"body") self.assertEqual(events, [ ("k.got_pake", b"body"), ("r.got_message", u"side", u"version", b"body"), ("r.got_message", u"side", u"1", b"body"), ]) def test_out_of_order(self): o, k, r, events = self.build() o.got_message(u"side", u"version", b"body") self.assertEqual(events, []) # nothing yet o.got_message(u"side", u"1", b"body") self.assertEqual(events, []) # nothing yet o.got_message(u"side", u"pake", b"body") # got_pake is delivered first self.assertEqual(events, [ ("k.got_pake", b"body"), ("r.got_message", u"side", u"version", b"body"), ("r.got_message", u"side", u"1", b"body"), ]) class Receive(unittest.TestCase): def build(self): events = [] r = _receive.Receive(u"side", timing.DebugTiming()) b = Dummy("b", events, IBoss, "happy", "scared", "got_verifier", "got_message") s = Dummy("s", events, ISend, "got_verified_key") r.wire(b, s) return r, b, s, events def test_good(self): r, b, s, events = self.build() key = b"key" r.got_key(key) self.assertEqual(events, []) verifier = derive_key(key, b"wormhole:verifier") phase1_key = derive_phase_key(key, u"side", u"phase1") data1 = b"data1" good_body = encrypt_data(phase1_key, data1) r.got_message(u"side", u"phase1", good_body) self.assertEqual(events, [ ("s.got_verified_key", key), ("b.happy", ), ("b.got_verifier", verifier), ("b.got_message", u"phase1", data1), ]) phase2_key = derive_phase_key(key, u"side", u"phase2") data2 = b"data2" good_body = encrypt_data(phase2_key, data2) r.got_message(u"side", u"phase2", good_body) self.assertEqual(events, [ ("s.got_verified_key", key), ("b.happy", ), ("b.got_verifier", verifier), ("b.got_message", u"phase1", data1), ("b.got_message", u"phase2", data2), ]) def test_early_bad(self): r, b, s, events = self.build() key = b"key" r.got_key(key) self.assertEqual(events, []) phase1_key = derive_phase_key(key, u"side", u"bad") data1 = b"data1" bad_body = encrypt_data(phase1_key, data1) r.got_message(u"side", u"phase1", bad_body) self.assertEqual(events, [ ("b.scared", ), ]) phase2_key = derive_phase_key(key, u"side", u"phase2") data2 = b"data2" good_body = encrypt_data(phase2_key, data2) r.got_message(u"side", u"phase2", good_body) self.assertEqual(events, [ ("b.scared", ), ]) def test_late_bad(self): r, b, s, events = self.build() key = b"key" r.got_key(key) self.assertEqual(events, []) verifier = derive_key(key, b"wormhole:verifier") phase1_key = derive_phase_key(key, u"side", u"phase1") data1 = b"data1" good_body = encrypt_data(phase1_key, data1) r.got_message(u"side", u"phase1", good_body) self.assertEqual(events, [ ("s.got_verified_key", key), ("b.happy", ), ("b.got_verifier", verifier), ("b.got_message", u"phase1", data1), ]) phase2_key = derive_phase_key(key, u"side", u"bad") data2 = b"data2" bad_body = encrypt_data(phase2_key, data2) r.got_message(u"side", u"phase2", bad_body) self.assertEqual(events, [ ("s.got_verified_key", key), ("b.happy", ), ("b.got_verifier", verifier), ("b.got_message", u"phase1", data1), ("b.scared", ), ]) r.got_message(u"side", u"phase1", 
                      good_body)
        r.got_message(u"side", u"phase2", bad_body)
        self.assertEqual(events, [
            ("s.got_verified_key", key),
            ("b.happy", ),
            ("b.got_verifier", verifier),
            ("b.got_message", u"phase1", data1),
            ("b.scared", ),
        ])


class Key(unittest.TestCase):
    def build(self):
        events = []
        k = _key.Key(u"appid", {}, u"side", timing.DebugTiming())
        b = Dummy("b", events, IBoss, "scared", "got_key")
        m = Dummy("m", events, IMailbox, "add_message")
        r = Dummy("r", events, IReceive, "got_key")
        k.wire(b, m, r)
        return k, b, m, r, events

    def test_good(self):
        k, b, m, r, events = self.build()
        code = u"1-foo"
        k.got_code(code)
        self.assertEqual(len(events), 1)
        self.assertEqual(events[0][:2], ("m.add_message", "pake"))
        msg1_json = events[0][2].decode("utf-8")
        events[:] = []
        msg1 = json.loads(msg1_json)
        msg1_bytes = hexstr_to_bytes(msg1["pake_v1"])
        sp = SPAKE2_Symmetric(to_bytes(code), idSymmetric=to_bytes(u"appid"))
        msg2_bytes = sp.start()
        key2 = sp.finish(msg1_bytes)
        msg2 = dict_to_bytes({"pake_v1": bytes_to_hexstr(msg2_bytes)})
        k.got_pake(msg2)
        self.assertEqual(len(events), 3, events)
        self.assertEqual(events[0], ("b.got_key", key2))
        self.assertEqual(events[1][:2], ("m.add_message", "version"))
        self.assertEqual(events[2], ("r.got_key", key2))

    def test_bad(self):
        k, b, m, r, events = self.build()
        code = u"1-foo"
        k.got_code(code)
        self.assertEqual(len(events), 1)
        self.assertEqual(events[0][:2], ("m.add_message", "pake"))
        pake_1_json = events[0][2].decode("utf-8")
        pake_1 = json.loads(pake_1_json)
        self.assertEqual(list(pake_1.keys()), ["pake_v1"])  # value is PAKE stuff
        events[:] = []
        bad_pake_d = {"not_pake_v1": "stuff"}
        k.got_pake(dict_to_bytes(bad_pake_d))
        self.assertEqual(events, [("b.scared", )])

    def test_reversed(self):
        # A receiver using input_code() will choose the nameplate first, then
        # the rest of the code. Once the nameplate is selected, we'll claim
        # it and open the mailbox, which will cause the sender's PAKE to
        # arrive before the code has been set. Key() is supposed to stash the
        # PAKE message until the code is set (allowing the PAKE computation
        # to finish). This test exercises that PAKE-then-code sequence.
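        # As a minimal self-contained sketch of that stash-then-process
        # pattern (hypothetical names, not the real Key class):
        #
        #     class StashingKey:
        #         def __init__(self):
        #             self._code = None
        #             self._early_pake = None
        #
        #         def got_pake(self, msg):
        #             if self._code is None:
        #                 self._early_pake = msg  # arrived early: hold it
        #             else:
        #                 self._compute_key(msg)
        #
        #         def got_code(self, code):
        #             self._code = code
        #             if self._early_pake is not None:
        #                 self._compute_key(self._early_pake)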
k, b, m, r, events = self.build() code = u"1-foo" sp = SPAKE2_Symmetric(to_bytes(code), idSymmetric=to_bytes(u"appid")) msg2_bytes = sp.start() msg2 = dict_to_bytes({"pake_v1": bytes_to_hexstr(msg2_bytes)}) k.got_pake(msg2) self.assertEqual(len(events), 0) k.got_code(code) self.assertEqual(len(events), 4) self.assertEqual(events[0][:2], ("m.add_message", "pake")) msg1_json = events[0][2].decode("utf-8") msg1 = json.loads(msg1_json) msg1_bytes = hexstr_to_bytes(msg1["pake_v1"]) key2 = sp.finish(msg1_bytes) self.assertEqual(events[1], ("b.got_key", key2)) self.assertEqual(events[2][:2], ("m.add_message", "version")) self.assertEqual(events[3], ("r.got_key", key2)) class Code(unittest.TestCase): def build(self): events = [] c = _code.Code(timing.DebugTiming()) b = Dummy("b", events, IBoss, "got_code") a = Dummy("a", events, IAllocator, "allocate") n = Dummy("n", events, INameplate, "set_nameplate") k = Dummy("k", events, IKey, "got_code") i = Dummy("i", events, IInput, "start") c.wire(b, a, n, k, i) return c, b, a, n, k, i, events def test_set_code(self): c, b, a, n, k, i, events = self.build() c.set_code(u"1-code") self.assertEqual(events, [ ("n.set_nameplate", u"1"), ("b.got_code", u"1-code"), ("k.got_code", u"1-code"), ]) def test_set_code_invalid(self): c, b, a, n, k, i, events = self.build() with self.assertRaises(errors.KeyFormatError) as e: c.set_code(u"1-code ") self.assertEqual(str(e.exception), "Code '1-code ' contains spaces.") with self.assertRaises(errors.KeyFormatError) as e: c.set_code(u" 1-code") self.assertEqual(str(e.exception), "Code ' 1-code' contains spaces.") with self.assertRaises(errors.KeyFormatError) as e: c.set_code(u"code-code") self.assertEqual( str(e.exception), "Nameplate 'code' must be numeric, with no spaces.") # it should still be possible to use the wormhole at this point c.set_code(u"1-code") self.assertEqual(events, [ ("n.set_nameplate", u"1"), ("b.got_code", u"1-code"), ("k.got_code", u"1-code"), ]) def test_allocate_code(self): c, b, a, n, k, i, events = self.build() wl = FakeWordList() c.allocate_code(2, wl) self.assertEqual(events, [("a.allocate", 2, wl)]) events[:] = [] c.allocated("1", "1-code") self.assertEqual(events, [ ("n.set_nameplate", u"1"), ("b.got_code", u"1-code"), ("k.got_code", u"1-code"), ]) def test_input_code(self): c, b, a, n, k, i, events = self.build() c.input_code() self.assertEqual(events, [("i.start", )]) events[:] = [] c.got_nameplate("1") self.assertEqual(events, [ ("n.set_nameplate", u"1"), ]) events[:] = [] c.finished_input("1-code") self.assertEqual(events, [ ("b.got_code", u"1-code"), ("k.got_code", u"1-code"), ]) class Input(unittest.TestCase): def build(self): events = [] i = _input.Input(timing.DebugTiming()) c = Dummy("c", events, ICode, "got_nameplate", "finished_input") l = Dummy("l", events, ILister, "refresh") i.wire(c, l) return i, c, l, events def test_ignore_completion(self): i, c, l, events = self.build() helper = i.start() self.assertIsInstance(helper, _input.Helper) self.assertEqual(events, [("l.refresh", )]) events[:] = [] with self.assertRaises(errors.MustChooseNameplateFirstError): helper.choose_words("word-word") helper.choose_nameplate("1") self.assertEqual(events, [("c.got_nameplate", "1")]) events[:] = [] with self.assertRaises(errors.AlreadyChoseNameplateError): helper.choose_nameplate("2") helper.choose_words("word-word") with self.assertRaises(errors.AlreadyChoseWordsError): helper.choose_words("word-word") self.assertEqual(events, [("c.finished_input", "1-word-word")]) def test_bad_nameplate(self): 
i, c, l, events = self.build() helper = i.start() self.assertIsInstance(helper, _input.Helper) self.assertEqual(events, [("l.refresh", )]) events[:] = [] with self.assertRaises(errors.MustChooseNameplateFirstError): helper.choose_words("word-word") with self.assertRaises(errors.KeyFormatError): helper.choose_nameplate(" 1") # should still work afterwards helper.choose_nameplate("1") self.assertEqual(events, [("c.got_nameplate", "1")]) events[:] = [] with self.assertRaises(errors.AlreadyChoseNameplateError): helper.choose_nameplate("2") helper.choose_words("word-word") with self.assertRaises(errors.AlreadyChoseWordsError): helper.choose_words("word-word") self.assertEqual(events, [("c.finished_input", "1-word-word")]) def test_with_completion(self): i, c, l, events = self.build() helper = i.start() self.assertIsInstance(helper, _input.Helper) self.assertEqual(events, [("l.refresh", )]) events[:] = [] d = helper.when_wordlist_is_available() self.assertNoResult(d) helper.refresh_nameplates() self.assertEqual(events, [("l.refresh", )]) events[:] = [] with self.assertRaises(errors.MustChooseNameplateFirstError): helper.get_word_completions("prefix") i.got_nameplates({"1", "12", "34", "35", "367"}) self.assertNoResult(d) self.assertEqual( helper.get_nameplate_completions(""), {"1-", "12-", "34-", "35-", "367-"}) self.assertEqual(helper.get_nameplate_completions("1"), {"1-", "12-"}) self.assertEqual(helper.get_nameplate_completions("2"), set()) self.assertEqual( helper.get_nameplate_completions("3"), {"34-", "35-", "367-"}) helper.choose_nameplate("34") with self.assertRaises(errors.AlreadyChoseNameplateError): helper.refresh_nameplates() with self.assertRaises(errors.AlreadyChoseNameplateError): helper.get_nameplate_completions("1") self.assertEqual(events, [("c.got_nameplate", "34")]) events[:] = [] # no wordlist yet self.assertNoResult(d) self.assertEqual(helper.get_word_completions(""), set()) wl = FakeWordList() i.got_wordlist(wl) self.assertEqual(self.successResultOf(d), None) # a new Deferred should fire right away d = helper.when_wordlist_is_available() self.assertEqual(self.successResultOf(d), None) wl._completions = {"abc-", "abcd-", "ae-"} self.assertEqual(helper.get_word_completions("a"), wl._completions) self.assertEqual(wl._get_completions_prefix, "a") with self.assertRaises(errors.AlreadyChoseNameplateError): helper.refresh_nameplates() with self.assertRaises(errors.AlreadyChoseNameplateError): helper.get_nameplate_completions("1") helper.choose_words("word-word") with self.assertRaises(errors.AlreadyChoseWordsError): helper.get_word_completions("prefix") with self.assertRaises(errors.AlreadyChoseWordsError): helper.choose_words("word-word") self.assertEqual(events, [("c.finished_input", "34-word-word")]) class Lister(unittest.TestCase): def build(self): events = [] lister = _lister.Lister(timing.DebugTiming()) rc = Dummy("rc", events, IRendezvousConnector, "tx_list") i = Dummy("i", events, IInput, "got_nameplates") lister.wire(rc, i) return lister, rc, i, events def test_connect_first(self): l, rc, i, events = self.build() l.connected() l.lost() l.connected() self.assertEqual(events, []) l.refresh() self.assertEqual(events, [ ("rc.tx_list", ), ]) events[:] = [] l.rx_nameplates({"1", "2", "3"}) self.assertEqual(events, [ ("i.got_nameplates", {"1", "2", "3"}), ]) events[:] = [] # now we're satisfied: disconnecting and reconnecting won't ask again l.lost() l.connected() self.assertEqual(events, []) # but if we're told to refresh, we'll do so l.refresh() self.assertEqual(events, [ 
("rc.tx_list", ), ]) def test_connect_first_ask_twice(self): l, rc, i, events = self.build() l.connected() self.assertEqual(events, []) l.refresh() l.refresh() self.assertEqual(events, [ ("rc.tx_list", ), ("rc.tx_list", ), ]) l.rx_nameplates({"1", "2", "3"}) self.assertEqual(events, [ ("rc.tx_list", ), ("rc.tx_list", ), ("i.got_nameplates", {"1", "2", "3"}), ]) l.rx_nameplates({"1", "2", "3", "4"}) self.assertEqual(events, [ ("rc.tx_list", ), ("rc.tx_list", ), ("i.got_nameplates", {"1", "2", "3"}), ("i.got_nameplates", {"1", "2", "3", "4"}), ]) def test_reconnect(self): l, rc, i, events = self.build() l.refresh() l.connected() self.assertEqual(events, [ ("rc.tx_list", ), ]) events[:] = [] l.lost() l.connected() self.assertEqual(events, [ ("rc.tx_list", ), ]) def test_refresh_first(self): l, rc, i, events = self.build() l.refresh() self.assertEqual(events, []) l.connected() self.assertEqual(events, [ ("rc.tx_list", ), ]) l.rx_nameplates({"1", "2", "3"}) self.assertEqual(events, [ ("rc.tx_list", ), ("i.got_nameplates", {"1", "2", "3"}), ]) def test_unrefreshed(self): l, rc, i, events = self.build() self.assertEqual(events, []) # we receive a spontaneous rx_nameplates, without asking l.connected() self.assertEqual(events, []) l.rx_nameplates({"1", "2", "3"}) self.assertEqual(events, [ ("i.got_nameplates", {"1", "2", "3"}), ]) class Allocator(unittest.TestCase): def build(self): events = [] a = _allocator.Allocator(timing.DebugTiming()) rc = Dummy("rc", events, IRendezvousConnector, "tx_allocate") c = Dummy("c", events, ICode, "allocated") a.wire(rc, c) return a, rc, c, events def test_no_allocation(self): a, rc, c, events = self.build() a.connected() self.assertEqual(events, []) def test_allocate_first(self): a, rc, c, events = self.build() a.allocate(2, FakeWordList()) self.assertEqual(events, []) a.connected() self.assertEqual(events, [("rc.tx_allocate", )]) events[:] = [] a.lost() a.connected() self.assertEqual(events, [ ("rc.tx_allocate", ), ]) events[:] = [] a.rx_allocated("1") self.assertEqual(events, [ ("c.allocated", "1", "1-word-word"), ]) def test_connect_first(self): a, rc, c, events = self.build() a.connected() self.assertEqual(events, []) a.allocate(2, FakeWordList()) self.assertEqual(events, [("rc.tx_allocate", )]) events[:] = [] a.lost() a.connected() self.assertEqual(events, [ ("rc.tx_allocate", ), ]) events[:] = [] a.rx_allocated("1") self.assertEqual(events, [ ("c.allocated", "1", "1-word-word"), ]) class Nameplate(unittest.TestCase): def build(self): events = [] n = _nameplate.Nameplate() m = Dummy("m", events, IMailbox, "got_mailbox") i = Dummy("i", events, IInput, "got_wordlist") rc = Dummy("rc", events, IRendezvousConnector, "tx_claim", "tx_release") t = Dummy("t", events, ITerminator, "nameplate_done") n.wire(m, i, rc, t) return n, m, i, rc, t, events def test_set_invalid(self): n, m, i, rc, t, events = self.build() with self.assertRaises(errors.KeyFormatError) as e: n.set_nameplate(" 1") self.assertEqual( str(e.exception), "Nameplate ' 1' must be numeric, with no spaces.") with self.assertRaises(errors.KeyFormatError) as e: n.set_nameplate("one") self.assertEqual( str(e.exception), "Nameplate 'one' must be numeric, with no spaces.") # wormhole should still be usable n.set_nameplate("1") self.assertEqual(events, []) n.connected() self.assertEqual(events, [("rc.tx_claim", "1")]) def test_set_first(self): # connection remains up throughout n, m, i, rc, t, events = self.build() n.set_nameplate("1") self.assertEqual(events, []) n.connected() self.assertEqual(events, 
[("rc.tx_claim", "1")]) events[:] = [] wl = object() with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): n.rx_claimed("mbox1") self.assertEqual(events, [ ("i.got_wordlist", wl), ("m.got_mailbox", "mbox1"), ]) events[:] = [] n.release() self.assertEqual(events, [("rc.tx_release", "1")]) events[:] = [] n.rx_released() self.assertEqual(events, [("t.nameplate_done", )]) def test_connect_first(self): # connection remains up throughout n, m, i, rc, t, events = self.build() n.connected() self.assertEqual(events, []) n.set_nameplate("1") self.assertEqual(events, [("rc.tx_claim", "1")]) events[:] = [] wl = object() with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): n.rx_claimed("mbox1") self.assertEqual(events, [ ("i.got_wordlist", wl), ("m.got_mailbox", "mbox1"), ]) events[:] = [] n.release() self.assertEqual(events, [("rc.tx_release", "1")]) events[:] = [] n.rx_released() self.assertEqual(events, [("t.nameplate_done", )]) def test_reconnect_while_claiming(self): # connection bounced while waiting for rx_claimed n, m, i, rc, t, events = self.build() n.connected() self.assertEqual(events, []) n.set_nameplate("1") self.assertEqual(events, [("rc.tx_claim", "1")]) events[:] = [] n.lost() n.connected() self.assertEqual(events, [("rc.tx_claim", "1")]) def test_reconnect_while_claimed(self): # connection bounced while claimed: no retransmits should be sent n, m, i, rc, t, events = self.build() n.connected() self.assertEqual(events, []) n.set_nameplate("1") self.assertEqual(events, [("rc.tx_claim", "1")]) events[:] = [] wl = object() with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): n.rx_claimed("mbox1") self.assertEqual(events, [ ("i.got_wordlist", wl), ("m.got_mailbox", "mbox1"), ]) events[:] = [] n.lost() n.connected() self.assertEqual(events, []) def test_reconnect_while_releasing(self): # connection bounced while waiting for rx_released n, m, i, rc, t, events = self.build() n.connected() self.assertEqual(events, []) n.set_nameplate("1") self.assertEqual(events, [("rc.tx_claim", "1")]) events[:] = [] wl = object() with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): n.rx_claimed("mbox1") self.assertEqual(events, [ ("i.got_wordlist", wl), ("m.got_mailbox", "mbox1"), ]) events[:] = [] n.release() self.assertEqual(events, [("rc.tx_release", "1")]) events[:] = [] n.lost() n.connected() self.assertEqual(events, [("rc.tx_release", "1")]) def test_reconnect_while_done(self): # connection bounces after we're done n, m, i, rc, t, events = self.build() n.connected() self.assertEqual(events, []) n.set_nameplate("1") self.assertEqual(events, [("rc.tx_claim", "1")]) events[:] = [] wl = object() with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): n.rx_claimed("mbox1") self.assertEqual(events, [ ("i.got_wordlist", wl), ("m.got_mailbox", "mbox1"), ]) events[:] = [] n.release() self.assertEqual(events, [("rc.tx_release", "1")]) events[:] = [] n.rx_released() self.assertEqual(events, [("t.nameplate_done", )]) events[:] = [] n.lost() n.connected() self.assertEqual(events, []) def test_close_while_idle(self): n, m, i, rc, t, events = self.build() n.close() self.assertEqual(events, [("t.nameplate_done", )]) def test_close_while_idle_connected(self): n, m, i, rc, t, events = self.build() n.connected() self.assertEqual(events, []) n.close() self.assertEqual(events, [("t.nameplate_done", )]) def test_close_while_unclaimed(self): n, m, i, rc, t, events = self.build() n.set_nameplate("1") n.close() # before ever being connected 
self.assertEqual(events, [("t.nameplate_done", )]) def test_close_while_claiming(self): n, m, i, rc, t, events = self.build() n.set_nameplate("1") self.assertEqual(events, []) n.connected() self.assertEqual(events, [("rc.tx_claim", "1")]) events[:] = [] n.close() self.assertEqual(events, [("rc.tx_release", "1")]) events[:] = [] n.rx_released() self.assertEqual(events, [("t.nameplate_done", )]) def test_close_while_claiming_but_disconnected(self): n, m, i, rc, t, events = self.build() n.set_nameplate("1") self.assertEqual(events, []) n.connected() self.assertEqual(events, [("rc.tx_claim", "1")]) events[:] = [] n.lost() n.close() self.assertEqual(events, []) # we're now waiting for a connection, so we can release the nameplate n.connected() self.assertEqual(events, [("rc.tx_release", "1")]) events[:] = [] n.rx_released() self.assertEqual(events, [("t.nameplate_done", )]) def test_close_while_claimed(self): n, m, i, rc, t, events = self.build() n.set_nameplate("1") self.assertEqual(events, []) n.connected() self.assertEqual(events, [("rc.tx_claim", "1")]) events[:] = [] wl = object() with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): n.rx_claimed("mbox1") self.assertEqual(events, [ ("i.got_wordlist", wl), ("m.got_mailbox", "mbox1"), ]) events[:] = [] n.close() # this path behaves just like a deliberate release() self.assertEqual(events, [("rc.tx_release", "1")]) events[:] = [] n.rx_released() self.assertEqual(events, [("t.nameplate_done", )]) def test_close_while_claimed_but_disconnected(self): n, m, i, rc, t, events = self.build() n.set_nameplate("1") self.assertEqual(events, []) n.connected() self.assertEqual(events, [("rc.tx_claim", "1")]) events[:] = [] wl = object() with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): n.rx_claimed("mbox1") self.assertEqual(events, [ ("i.got_wordlist", wl), ("m.got_mailbox", "mbox1"), ]) events[:] = [] n.lost() n.close() # we're now waiting for a connection, so we can release the nameplate n.connected() self.assertEqual(events, [("rc.tx_release", "1")]) events[:] = [] n.rx_released() self.assertEqual(events, [("t.nameplate_done", )]) def test_close_while_releasing(self): n, m, i, rc, t, events = self.build() n.set_nameplate("1") self.assertEqual(events, []) n.connected() self.assertEqual(events, [("rc.tx_claim", "1")]) events[:] = [] wl = object() with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): n.rx_claimed("mbox1") self.assertEqual(events, [ ("i.got_wordlist", wl), ("m.got_mailbox", "mbox1"), ]) events[:] = [] n.release() self.assertEqual(events, [("rc.tx_release", "1")]) events[:] = [] n.close() # ignored, we're already on our way out the door self.assertEqual(events, []) n.rx_released() self.assertEqual(events, [("t.nameplate_done", )]) def test_close_while_releasing_but_disconnecteda(self): n, m, i, rc, t, events = self.build() n.set_nameplate("1") self.assertEqual(events, []) n.connected() self.assertEqual(events, [("rc.tx_claim", "1")]) events[:] = [] wl = object() with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): n.rx_claimed("mbox1") self.assertEqual(events, [ ("i.got_wordlist", wl), ("m.got_mailbox", "mbox1"), ]) events[:] = [] n.release() self.assertEqual(events, [("rc.tx_release", "1")]) events[:] = [] n.lost() n.close() # we must retransmit the tx_release when we reconnect self.assertEqual(events, []) n.connected() self.assertEqual(events, [("rc.tx_release", "1")]) events[:] = [] n.rx_released() self.assertEqual(events, [("t.nameplate_done", )]) def 
test_close_while_done(self): # connection remains up throughout n, m, i, rc, t, events = self.build() n.connected() self.assertEqual(events, []) n.set_nameplate("1") self.assertEqual(events, [("rc.tx_claim", "1")]) events[:] = [] wl = object() with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): n.rx_claimed("mbox1") self.assertEqual(events, [ ("i.got_wordlist", wl), ("m.got_mailbox", "mbox1"), ]) events[:] = [] n.release() self.assertEqual(events, [("rc.tx_release", "1")]) events[:] = [] n.rx_released() self.assertEqual(events, [("t.nameplate_done", )]) events[:] = [] n.close() # NOP self.assertEqual(events, []) def test_close_while_done_but_disconnected(self): # connection remains up throughout n, m, i, rc, t, events = self.build() n.connected() self.assertEqual(events, []) n.set_nameplate("1") self.assertEqual(events, [("rc.tx_claim", "1")]) events[:] = [] wl = object() with mock.patch("wormhole._nameplate.PGPWordList", return_value=wl): n.rx_claimed("mbox1") self.assertEqual(events, [ ("i.got_wordlist", wl), ("m.got_mailbox", "mbox1"), ]) events[:] = [] n.release() self.assertEqual(events, [("rc.tx_release", "1")]) events[:] = [] n.rx_released() self.assertEqual(events, [("t.nameplate_done", )]) events[:] = [] n.lost() n.close() # NOP self.assertEqual(events, []) class Mailbox(unittest.TestCase): def build(self): events = [] m = _mailbox.Mailbox("side1") n = Dummy("n", events, INameplate, "release") rc = Dummy("rc", events, IRendezvousConnector, "tx_add", "tx_open", "tx_close") o = Dummy("o", events, IOrder, "got_message") t = Dummy("t", events, ITerminator, "mailbox_done") m.wire(n, rc, o, t) return m, n, rc, o, t, events # TODO: test moods def assert_events(self, events, initial_events, tx_add_events): self.assertEqual( len(events), len(initial_events) + len(tx_add_events), events) self.assertEqual(events[:len(initial_events)], initial_events) self.assertEqual(set(events[len(initial_events):]), tx_add_events) def test_connect_first(self): # connect before got_mailbox m, n, rc, o, t, events = self.build() m.add_message("phase1", b"msg1") self.assertEqual(events, []) m.connected() self.assertEqual(events, []) m.got_mailbox("mbox1") self.assertEqual(events, [("rc.tx_open", "mbox1"), ("rc.tx_add", "phase1", b"msg1")]) events[:] = [] m.add_message("phase2", b"msg2") self.assertEqual(events, [("rc.tx_add", "phase2", b"msg2")]) events[:] = [] # bouncing the connection should retransmit everything, even the open() m.lost() self.assertEqual(events, []) # and messages sent while here should be queued m.add_message("phase3", b"msg3") self.assertEqual(events, []) m.connected() # the other messages are allowed to be sent in any order self.assert_events( events, [("rc.tx_open", "mbox1")], { ("rc.tx_add", "phase1", b"msg1"), ("rc.tx_add", "phase2", b"msg2"), ("rc.tx_add", "phase3", b"msg3"), }) events[:] = [] m.rx_message("side1", "phase1", b"msg1") # echo of our message, dequeue self.assertEqual(events, []) m.lost() m.connected() self.assert_events(events, [("rc.tx_open", "mbox1")], { ("rc.tx_add", "phase2", b"msg2"), ("rc.tx_add", "phase3", b"msg3"), }) events[:] = [] # a new message from the peer gets delivered, and the Nameplate is # released since the message proves that our peer opened the Mailbox # and therefore no longer needs the Nameplate m.rx_message("side2", "phase1", b"msg1them") # new message from peer self.assertEqual(events, [ ("n.release", ), ("o.got_message", "side2", "phase1", b"msg1them"), ]) events[:] = [] # we de-duplicate peer messages, but still re-release 
the nameplate # since Nameplate is smart enough to ignore that m.rx_message("side2", "phase1", b"msg1them") self.assertEqual(events, [ ("n.release", ), ]) events[:] = [] m.close("happy") self.assertEqual(events, [("rc.tx_close", "mbox1", "happy")]) events[:] = [] # while closing, we ignore a lot m.add_message("phase-late", b"late") m.rx_message("side1", "phase2", b"msg2") m.close("happy") self.assertEqual(events, []) # bouncing the connection forces a retransmit of the tx_close m.lost() self.assertEqual(events, []) m.connected() self.assertEqual(events, [("rc.tx_close", "mbox1", "happy")]) events[:] = [] m.rx_closed() self.assertEqual(events, [("t.mailbox_done", )]) events[:] = [] # while closed, we ignore everything m.add_message("phase-late", b"late") m.rx_message("side1", "phase2", b"msg2") m.close("happy") m.lost() m.connected() self.assertEqual(events, []) def test_mailbox_first(self): # got_mailbox before connect m, n, rc, o, t, events = self.build() m.add_message("phase1", b"msg1") self.assertEqual(events, []) m.got_mailbox("mbox1") m.add_message("phase2", b"msg2") self.assertEqual(events, []) m.connected() self.assert_events(events, [("rc.tx_open", "mbox1")], { ("rc.tx_add", "phase1", b"msg1"), ("rc.tx_add", "phase2", b"msg2"), }) def test_close_while_idle(self): m, n, rc, o, t, events = self.build() m.close("happy") self.assertEqual(events, [("t.mailbox_done", )]) def test_close_while_idle_but_connected(self): m, n, rc, o, t, events = self.build() m.connected() m.close("happy") self.assertEqual(events, [("t.mailbox_done", )]) def test_close_while_mailbox_disconnected(self): m, n, rc, o, t, events = self.build() m.got_mailbox("mbox1") m.close("happy") self.assertEqual(events, [("t.mailbox_done", )]) def test_close_while_reconnecting(self): m, n, rc, o, t, events = self.build() m.got_mailbox("mbox1") m.connected() self.assertEqual(events, [("rc.tx_open", "mbox1")]) events[:] = [] m.lost() self.assertEqual(events, []) m.close("happy") self.assertEqual(events, []) # we now wait to connect, so we can send the tx_close m.connected() self.assertEqual(events, [("rc.tx_close", "mbox1", "happy")]) events[:] = [] m.rx_closed() self.assertEqual(events, [("t.mailbox_done", )]) events[:] = [] class Terminator(unittest.TestCase): def build(self): events = [] t = _terminator.Terminator() b = Dummy("b", events, IBoss, "closed") rc = Dummy("rc", events, IRendezvousConnector, "stop") n = Dummy("n", events, INameplate, "close") m = Dummy("m", events, IMailbox, "close") d = Dummy("d", events, IDilator, "stop") t.wire(b, rc, n, m, d) return t, b, rc, n, m, events # there are three events, and we need to test all orderings of them def _do_test(self, ev1, ev2, ev3): t, b, rc, n, m, events = self.build() input_events = { "mailbox": lambda: t.mailbox_done(), "nameplate": lambda: t.nameplate_done(), "rc": lambda: t.close("happy"), } close_events = [ ("n.close", ), ("m.close", "happy"), ] if ev1 == "mailbox": close_events.remove(("m.close", "happy")) elif ev1 == "nameplate": close_events.remove(("n.close",)) input_events[ev1]() expected = [] if ev1 == "rc": expected.extend(close_events) self.assertEqual(events, expected) events[:] = [] if ev2 == "mailbox": close_events.remove(("m.close", "happy")) elif ev2 == "nameplate": close_events.remove(("n.close",)) input_events[ev2]() expected = [] if ev2 == "rc": expected.extend(close_events) self.assertEqual(events, expected) events[:] = [] if ev3 == "mailbox": close_events.remove(("m.close", "happy")) elif ev3 == "nameplate": close_events.remove(("n.close",)) 
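        # whichever of the three inputs arrives last completes the set, so
        # only this third call should add ("rc.stop", ) after any remaining
        # close messages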
input_events[ev3]() expected = [] if ev3 == "rc": expected.extend(close_events) expected.append(("rc.stop", )) self.assertEqual(events, expected) events[:] = [] t.stoppedRC() self.assertEqual(events, [("d.stop", )]) events[:] = [] t.stoppedD() self.assertEqual(events, [("b.closed", )]) def test_terminate(self): self._do_test("mailbox", "nameplate", "rc") self._do_test("mailbox", "rc", "nameplate") self._do_test("nameplate", "mailbox", "rc") self._do_test("nameplate", "rc", "mailbox") self._do_test("rc", "nameplate", "mailbox") self._do_test("rc", "mailbox", "nameplate") # TODO: test moods class MockBoss(_boss.Boss): def __attrs_post_init__(self): # self._build_workers() self._init_other_state() class Boss(unittest.TestCase): def build(self): events = [] wormhole = Dummy("w", events, None, "got_welcome", "got_code", "got_key", "got_verifier", "got_versions", "received", "closed") versions = {"app": "version1"} reactor = None eq = None cooperator = None journal = ImmediateJournal() tor_manager = None client_version = ("python", __version__) b = MockBoss(wormhole, "side", "url", "appid", versions, client_version, reactor, eq, cooperator, journal, tor_manager, timing.DebugTiming()) b._T = Dummy("t", events, ITerminator, "close") b._S = Dummy("s", events, ISend, "send") b._RC = Dummy("rc", events, IRendezvousConnector, "start") b._C = Dummy("c", events, ICode, "allocate_code", "input_code", "set_code") b._D = Dummy("d", events, IDilator, "got_wormhole_versions", "got_key") return b, events def test_basic(self): b, events = self.build() b.set_code("1-code") self.assertEqual(events, [("c.set_code", "1-code")]) events[:] = [] b.got_code("1-code") self.assertEqual(events, [("w.got_code", "1-code")]) events[:] = [] welcome = {"howdy": "how are ya"} b.rx_welcome(welcome) self.assertEqual(events, [ ("w.got_welcome", welcome), ]) events[:] = [] # pretend a peer message was correctly decrypted b.got_key(b"key") b.happy() b.got_verifier(b"verifier") b.got_message("version", b"{}") b.got_message("0", b"msg1") self.assertEqual(events, [ ("w.got_key", b"key"), ("d.got_key", b"key"), ("w.got_verifier", b"verifier"), ("d.got_wormhole_versions", {}), ("w.got_versions", {}), ("w.received", b"msg1"), ]) events[:] = [] b.send(b"msg2") self.assertEqual(events, [("s.send", "0", b"msg2")]) events[:] = [] b.close() self.assertEqual(events, [("t.close", "happy")]) events[:] = [] b.closed() self.assertEqual(events, [("w.closed", "happy")]) def test_unwelcome(self): b, events = self.build() unwelcome = {"error": "go away"} b.rx_welcome(unwelcome) self.assertEqual(events, [("t.close", "unwelcome")]) def test_lonely(self): b, events = self.build() b.set_code("1-code") self.assertEqual(events, [("c.set_code", "1-code")]) events[:] = [] b.got_code("1-code") self.assertEqual(events, [("w.got_code", "1-code")]) events[:] = [] b.close() self.assertEqual(events, [("t.close", "lonely")]) events[:] = [] b.closed() self.assertEqual(len(events), 1, events) self.assertEqual(events[0][0], "w.closed") self.assertIsInstance(events[0][1], errors.LonelyError) def test_server_error(self): b, events = self.build() b.set_code("1-code") self.assertEqual(events, [("c.set_code", "1-code")]) events[:] = [] orig = {} b.rx_error("server-error-msg", orig) self.assertEqual(events, [("t.close", "errory")]) events[:] = [] b.closed() self.assertEqual(len(events), 1, events) self.assertEqual(events[0][0], "w.closed") self.assertIsInstance(events[0][1], errors.ServerError) self.assertEqual(events[0][1].args[0], "server-error-msg") def 
test_internal_error(self): b, events = self.build() b.set_code("1-code") self.assertEqual(events, [("c.set_code", "1-code")]) events[:] = [] b.error(ValueError("catch me")) self.assertEqual(len(events), 1, events) self.assertEqual(events[0][0], "w.closed") self.assertIsInstance(events[0][1], ValueError) self.assertEqual(events[0][1].args[0], "catch me") def test_close_early(self): b, events = self.build() b.set_code("1-code") self.assertEqual(events, [("c.set_code", "1-code")]) events[:] = [] b.close() # before even w.got_code self.assertEqual(events, [("t.close", "lonely")]) events[:] = [] b.closed() self.assertEqual(len(events), 1, events) self.assertEqual(events[0][0], "w.closed") self.assertIsInstance(events[0][1], errors.LonelyError) def test_error_while_closing(self): b, events = self.build() b.set_code("1-code") self.assertEqual(events, [("c.set_code", "1-code")]) events[:] = [] b.close() self.assertEqual(events, [("t.close", "lonely")]) events[:] = [] b.error(ValueError("oops")) self.assertEqual(len(events), 1, events) self.assertEqual(events[0][0], "w.closed") self.assertIsInstance(events[0][1], ValueError) def test_scary_version(self): b, events = self.build() b.set_code("1-code") self.assertEqual(events, [("c.set_code", "1-code")]) events[:] = [] b.got_code("1-code") self.assertEqual(events, [("w.got_code", "1-code")]) events[:] = [] b.scared() self.assertEqual(events, [("t.close", "scary")]) events[:] = [] b.closed() self.assertEqual(len(events), 1, events) self.assertEqual(events[0][0], "w.closed") self.assertIsInstance(events[0][1], errors.WrongPasswordError) def test_scary_phase(self): b, events = self.build() b.set_code("1-code") self.assertEqual(events, [("c.set_code", "1-code")]) events[:] = [] b.got_code("1-code") self.assertEqual(events, [("w.got_code", "1-code")]) events[:] = [] b.happy() # phase=version b.scared() # phase=0 self.assertEqual(events, [("t.close", "scary")]) events[:] = [] b.closed() self.assertEqual(len(events), 1, events) self.assertEqual(events[0][0], "w.closed") self.assertIsInstance(events[0][1], errors.WrongPasswordError) def test_unknown_phase(self): b, events = self.build() b.set_code("1-code") self.assertEqual(events, [("c.set_code", "1-code")]) events[:] = [] b.got_code("1-code") self.assertEqual(events, [("w.got_code", "1-code")]) events[:] = [] b.happy() # phase=version b.got_message("unknown-phase", b"spooky") self.assertEqual(events, []) self.flushLoggedErrors(errors._UnknownPhaseError) def test_set_code_bad_format(self): b, events = self.build() with self.assertRaises(errors.KeyFormatError): b.set_code("1 code") # wormhole should still be usable b.set_code("1-code") self.assertEqual(events, [("c.set_code", "1-code")]) def test_set_code_twice(self): b, events = self.build() b.set_code("1-code") with self.assertRaises(errors.OnlyOneCodeError): b.set_code("1-code") def test_input_code(self): b, events = self.build() b._C.retval = "helper" helper = b.input_code() self.assertEqual(events, [("c.input_code", )]) self.assertEqual(helper, "helper") with self.assertRaises(errors.OnlyOneCodeError): b.input_code() def test_allocate_code(self): b, events = self.build() wl = object() with mock.patch("wormhole._boss.PGPWordList", return_value=wl): b.allocate_code(3) self.assertEqual(events, [("c.allocate_code", 3, wl)]) with self.assertRaises(errors.OnlyOneCodeError): b.allocate_code(3) class Rendezvous(unittest.TestCase): def build(self): events = [] reactor = object() journal = ImmediateJournal() tor_manager = None client_version = ("python", 
__version__) rc = _rendezvous.RendezvousConnector( "ws://host:4000/v1", "appid", "side", reactor, journal, tor_manager, timing.DebugTiming(), client_version) b = Dummy("b", events, IBoss, "error") n = Dummy("n", events, INameplate, "connected", "lost") m = Dummy("m", events, IMailbox, "connected", "lost") a = Dummy("a", events, IAllocator, "connected", "lost") l = Dummy("l", events, ILister, "connected", "lost") t = Dummy("t", events, ITerminator) rc.wire(b, n, m, a, l, t) return rc, events def test_basic(self): rc, events = self.build() del rc, events def test_websocket_failure(self): # if the TCP connection succeeds, but the subsequent WebSocket # negotiation fails, then we'll see an onClose without first seeing # onOpen rc, events = self.build() rc.ws_close(False, 1006, "connection was closed uncleanly") # this should cause the ClientService to be shut down, and an error # delivered to the Boss self.assertEqual(len(events), 1, events) self.assertEqual(events[0][0], "b.error") self.assertIsInstance(events[0][1], errors.ServerConnectionError) self.assertEqual(str(events[0][1]), "connection was closed uncleanly") def test_websocket_lost(self): # if the TCP connection succeeds, and negotiation completes, then the # connection is lost, several machines should be notified rc, events = self.build() ws = mock.Mock() def notrandom(length): return b"\x00" * length with mock.patch("os.urandom", notrandom): rc.ws_open(ws) self.assertEqual(events, [ ("n.connected", ), ("m.connected", ), ("l.connected", ), ("a.connected", ), ]) events[:] = [] def sent_messages(ws): for c in ws.mock_calls: self.assertEqual(c[0], "sendMessage", ws.mock_calls) self.assertEqual(c[1][1], False, ws.mock_calls) yield bytes_to_dict(c[1][0]) self.assertEqual( list(sent_messages(ws)), [ dict( appid="appid", side="side", client_version=["python", __version__], id="0000", type="bind"), ]) rc.ws_close(True, None, None) self.assertEqual(events, [ ("n.lost", ), ("m.lost", ), ("l.lost", ), ("a.lost", ), ]) def test_endpoints(self): # parse different URLs and check the tls status of each reactor = object() journal = ImmediateJournal() tor_manager = None client_version = ("python", __version__) rc = _rendezvous.RendezvousConnector( "ws://host:4000/v1", "appid", "side", reactor, journal, tor_manager, timing.DebugTiming(), client_version) new_ep = object() with mock.patch("twisted.internet.endpoints.HostnameEndpoint", return_value=new_ep) as he: ep = rc._make_endpoint("ws://host:4000/v1") self.assertEqual(he.mock_calls, [mock.call(reactor, "host", 4000)]) self.assertIs(ep, new_ep) new_ep = object() with mock.patch("twisted.internet.endpoints.HostnameEndpoint", return_value=new_ep) as he: ep = rc._make_endpoint("ws://host/v1") self.assertEqual(he.mock_calls, [mock.call(reactor, "host", 80)]) self.assertIs(ep, new_ep) new_ep = object() with mock.patch("twisted.internet.endpoints.clientFromString", return_value=new_ep) as cfs: ep = rc._make_endpoint("wss://host:4000/v1") self.assertEqual(cfs.mock_calls, [mock.call(reactor, "tls:host:4000")]) self.assertIs(ep, new_ep) new_ep = object() with mock.patch("twisted.internet.endpoints.clientFromString", return_value=new_ep) as cfs: ep = rc._make_endpoint("wss://host/v1") self.assertEqual(cfs.mock_calls, [mock.call(reactor, "tls:host:443")]) self.assertIs(ep, new_ep) tor_manager = mock.Mock() directlyProvides(tor_manager, ITorManager) rc = _rendezvous.RendezvousConnector( "ws://host:4000/v1", "appid", "side", reactor, journal, tor_manager, timing.DebugTiming(), client_version) 
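        # when a Tor manager is present, every connection is built with
        # tor_manager.stream_via() instead of a direct endpoint; the port
        # defaults (80 for ws, 443 for wss) and the tls flag should mirror
        # the direct-endpoint cases checked above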
tor_manager.mock_calls[:] = [] ep = rc._make_endpoint("ws://host:4000/v1") self.assertEqual(tor_manager.mock_calls, [mock.call.stream_via("host", 4000, tls=False)]) tor_manager.mock_calls[:] = [] ep = rc._make_endpoint("ws://host/v1") self.assertEqual(tor_manager.mock_calls, [mock.call.stream_via("host", 80, tls=False)]) tor_manager.mock_calls[:] = [] ep = rc._make_endpoint("wss://host:4000/v1") self.assertEqual(tor_manager.mock_calls, [mock.call.stream_via("host", 4000, tls=True)]) tor_manager.mock_calls[:] = [] ep = rc._make_endpoint("wss://host/v1") self.assertEqual(tor_manager.mock_calls, [mock.call.stream_via("host", 443, tls=True)]) # TODO # #Send # #Mailbox # #Nameplate # #Terminator # Boss # RendezvousConnector (not a state machine) # #Input: exercise helper methods # #wordlist # test idempotency / at-most-once where applicable magic-wormhole-0.12.0/src/wormhole/test/test_observer.py000066400000000000000000000100601400712516500233440ustar00rootroot00000000000000from twisted.internet.task import Clock from twisted.python.failure import Failure from twisted.trial import unittest from ..eventual import EventualQueue from ..observer import OneShotObserver, SequenceObserver, EmptyableSet class OneShot(unittest.TestCase): def test_fire(self): c = Clock() eq = EventualQueue(c) o = OneShotObserver(eq) res = object() d1 = o.when_fired() eq.flush_sync() self.assertNoResult(d1) o.fire(res) eq.flush_sync() self.assertIdentical(self.successResultOf(d1), res) d2 = o.when_fired() eq.flush_sync() self.assertIdentical(self.successResultOf(d2), res) o.fire_if_not_fired(object()) eq.flush_sync() def test_fire_if_not_fired(self): c = Clock() eq = EventualQueue(c) o = OneShotObserver(eq) res1 = object() res2 = object() d1 = o.when_fired() eq.flush_sync() self.assertNoResult(d1) o.fire_if_not_fired(res1) o.fire_if_not_fired(res2) eq.flush_sync() self.assertIdentical(self.successResultOf(d1), res1) def test_error_before_firing(self): c = Clock() eq = EventualQueue(c) o = OneShotObserver(eq) f = Failure(ValueError("oops")) d1 = o.when_fired() eq.flush_sync() self.assertNoResult(d1) o.error(f) eq.flush_sync() self.assertIdentical(self.failureResultOf(d1), f) d2 = o.when_fired() eq.flush_sync() self.assertIdentical(self.failureResultOf(d2), f) def test_error_after_firing(self): c = Clock() eq = EventualQueue(c) o = OneShotObserver(eq) res = object() f = Failure(ValueError("oops")) o.fire(res) eq.flush_sync() d1 = o.when_fired() o.error(f) d2 = o.when_fired() eq.flush_sync() self.assertIdentical(self.successResultOf(d1), res) self.assertIdentical(self.failureResultOf(d2), f) class Sequence(unittest.TestCase): def test_fire(self): c = Clock() eq = EventualQueue(c) o = SequenceObserver(eq) d1 = o.when_next_event() eq.flush_sync() self.assertNoResult(d1) d2 = o.when_next_event() eq.flush_sync() self.assertNoResult(d1) self.assertNoResult(d2) ev1 = object() ev2 = object() o.fire(ev1) eq.flush_sync() self.assertIdentical(self.successResultOf(d1), ev1) self.assertNoResult(d2) o.fire(ev2) eq.flush_sync() self.assertIdentical(self.successResultOf(d2), ev2) ev3 = object() ev4 = object() o.fire(ev3) o.fire(ev4) d3 = o.when_next_event() eq.flush_sync() self.assertIdentical(self.successResultOf(d3), ev3) d4 = o.when_next_event() eq.flush_sync() self.assertIdentical(self.successResultOf(d4), ev4) def test_error(self): c = Clock() eq = EventualQueue(c) o = SequenceObserver(eq) d1 = o.when_next_event() eq.flush_sync() self.assertNoResult(d1) f = Failure(ValueError("oops")) o.fire(f) eq.flush_sync() 
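        # observers deliver through the EventualQueue, so nothing fires
        # until eq.flush_sync() runs the queued turns; that's why every
        # assertion in these tests is preceded by a flush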
self.assertIdentical(self.failureResultOf(d1), f) d2 = o.when_next_event() eq.flush_sync() self.assertIdentical(self.failureResultOf(d2), f) class Empty(unittest.TestCase): def test_set(self): eq = EventualQueue(Clock()) s = EmptyableSet(_eventual_queue=eq) d1 = s.when_next_empty() eq.flush_sync() self.assertNoResult(d1) s.add(1) eq.flush_sync() self.assertNoResult(d1) s.add(2) s.discard(1) d2 = s.when_next_empty() eq.flush_sync() self.assertNoResult(d1) self.assertNoResult(d2) s.discard(2) eq.flush_sync() self.assertEqual(self.successResultOf(d1), None) self.assertEqual(self.successResultOf(d2), None) s.add(3) s.discard(3) magic-wormhole-0.12.0/src/wormhole/test/test_rlcompleter.py000066400000000000000000000373751400712516500240670ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals from itertools import count from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks from twisted.internet.threads import deferToThread from twisted.trial import unittest import mock from .._rlcompleter import (CodeInputter, _input_code_with_completion, input_with_completion, warn_readline) from ..errors import AlreadyInputNameplateError, KeyFormatError APPID = "appid" class Input(unittest.TestCase): @inlineCallbacks def test_wrapper(self): helper = object() trueish = object() with mock.patch( "wormhole._rlcompleter._input_code_with_completion", return_value=trueish) as m: used_completion = yield input_with_completion( "prompt:", helper, reactor) self.assertIs(used_completion, trueish) self.assertEqual(m.mock_calls, [mock.call("prompt:", helper, reactor)]) # note: if this test fails, the warn_readline() message will probably # get written to stderr class Sync(unittest.TestCase): # exercise _input_code_with_completion, which uses the blocking builtin # "input()" function, hence _input_code_with_completion is usually in a # thread with deferToThread @mock.patch("wormhole._rlcompleter.CodeInputter") @mock.patch("wormhole._rlcompleter.readline", __doc__="I am GNU readline") @mock.patch("wormhole._rlcompleter.input", return_value="code") def test_readline(self, input, readline, ci): c = mock.Mock(name="inhibit parenting") c.completer = object() trueish = object() c.used_completion = trueish ci.configure_mock(return_value=c) prompt = object() input_helper = object() reactor = object() used = _input_code_with_completion(prompt, input_helper, reactor) self.assertIs(used, trueish) self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)]) self.assertEqual(c.mock_calls, [mock.call.finish("code")]) self.assertEqual(input.mock_calls, [mock.call(prompt)]) self.assertEqual(readline.mock_calls, [ mock.call.parse_and_bind("tab: complete"), mock.call.set_completer(c.completer), mock.call.set_completer_delims(""), ]) @mock.patch("wormhole._rlcompleter.CodeInputter") @mock.patch("wormhole._rlcompleter.readline") @mock.patch("wormhole._rlcompleter.input", return_value="code") def test_readline_no_docstring(self, input, readline, ci): del readline.__doc__ # when in doubt, it assumes GNU readline c = mock.Mock(name="inhibit parenting") c.completer = object() trueish = object() c.used_completion = trueish ci.configure_mock(return_value=c) prompt = object() input_helper = object() reactor = object() used = _input_code_with_completion(prompt, input_helper, reactor) self.assertIs(used, trueish) self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)]) self.assertEqual(c.mock_calls, [mock.call.finish("code")]) 
self.assertEqual(input.mock_calls, [mock.call(prompt)]) self.assertEqual(readline.mock_calls, [ mock.call.parse_and_bind("tab: complete"), mock.call.set_completer(c.completer), mock.call.set_completer_delims(""), ]) @mock.patch("wormhole._rlcompleter.CodeInputter") @mock.patch("wormhole._rlcompleter.readline", __doc__="I am libedit") @mock.patch("wormhole._rlcompleter.input", return_value="code") def test_libedit(self, input, readline, ci): c = mock.Mock(name="inhibit parenting") c.completer = object() trueish = object() c.used_completion = trueish ci.configure_mock(return_value=c) prompt = object() input_helper = object() reactor = object() used = _input_code_with_completion(prompt, input_helper, reactor) self.assertIs(used, trueish) self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)]) self.assertEqual(c.mock_calls, [mock.call.finish("code")]) self.assertEqual(input.mock_calls, [mock.call(prompt)]) self.assertEqual(readline.mock_calls, [ mock.call.parse_and_bind("bind ^I rl_complete"), mock.call.set_completer(c.completer), mock.call.set_completer_delims(""), ]) @mock.patch("wormhole._rlcompleter.CodeInputter") @mock.patch("wormhole._rlcompleter.readline", None) @mock.patch("wormhole._rlcompleter.input", return_value="code") def test_no_readline(self, input, ci): c = mock.Mock(name="inhibit parenting") c.completer = object() trueish = object() c.used_completion = trueish ci.configure_mock(return_value=c) prompt = object() input_helper = object() reactor = object() used = _input_code_with_completion(prompt, input_helper, reactor) self.assertIs(used, trueish) self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)]) self.assertEqual(c.mock_calls, [mock.call.finish("code")]) self.assertEqual(input.mock_calls, [mock.call(prompt)]) @mock.patch("wormhole._rlcompleter.CodeInputter") @mock.patch("wormhole._rlcompleter.readline", None) @mock.patch("wormhole._rlcompleter.input", return_value=b"code") def test_bytes(self, input, ci): c = mock.Mock(name="inhibit parenting") c.completer = object() trueish = object() c.used_completion = trueish ci.configure_mock(return_value=c) prompt = object() input_helper = object() reactor = object() used = _input_code_with_completion(prompt, input_helper, reactor) self.assertIs(used, trueish) self.assertEqual(ci.mock_calls, [mock.call(input_helper, reactor)]) self.assertEqual(c.mock_calls, [mock.call.finish(u"code")]) self.assertEqual(input.mock_calls, [mock.call(prompt)]) def get_completions(c, prefix): completions = [] for state in count(0): text = c.completer(prefix, state) if text is None: return completions completions.append(text) def fake_blockingCallFromThread(f, *a, **kw): return f(*a, **kw) class Completion(unittest.TestCase): def test_simple(self): # no actual completion helper = mock.Mock() c = CodeInputter(helper, "reactor") c.bcft = fake_blockingCallFromThread c.finish("1-code-ghost") self.assertFalse(c.used_completion) self.assertEqual(helper.mock_calls, [ mock.call.choose_nameplate("1"), mock.call.choose_words("code-ghost") ]) @mock.patch( "wormhole._rlcompleter.readline", get_completion_type=mock.Mock(return_value=0)) def test_call(self, readline): # check that it calls _commit_and_build_completions correctly helper = mock.Mock() c = CodeInputter(helper, "reactor") c.bcft = fake_blockingCallFromThread # pretend nameplates: 1, 12, 34 # first call will be with "1" cabc = mock.Mock(return_value=["1", "12"]) c._commit_and_build_completions = cabc self.assertEqual(get_completions(c, "1"), ["1", "12"]) 
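        # aside (illustrative, not part of the original test): GNU readline
        # drives a completer as completer(text, 0), completer(text, 1), ...
        # until it returns None, which is exactly how get_completions()
        # above drains ours
        def demo_completer(text, state):
            matches = [w for w in ("apple", "apricot") if w.startswith(text)]
            return matches[state] if state < len(matches) else None
        assert [demo_completer("ap", s) for s in range(3)] == \
            ["apple", "apricot", None]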
self.assertEqual(cabc.mock_calls, [mock.call("1")]) # then "12" cabc.reset_mock() cabc.configure_mock(return_value=["12"]) self.assertEqual(get_completions(c, "12"), ["12"]) self.assertEqual(cabc.mock_calls, [mock.call("12")]) # now we have three "a" words: "and", "ark", "aaah!zombies!!" cabc.reset_mock() cabc.configure_mock(return_value=["aargh", "ark", "aaah!zombies!!"]) self.assertEqual( get_completions(c, "12-a"), ["aargh", "ark", "aaah!zombies!!"]) self.assertEqual(cabc.mock_calls, [mock.call("12-a")]) cabc.reset_mock() cabc.configure_mock(return_value=["aargh", "aaah!zombies!!"]) self.assertEqual( get_completions(c, "12-aa"), ["aargh", "aaah!zombies!!"]) self.assertEqual(cabc.mock_calls, [mock.call("12-aa")]) cabc.reset_mock() cabc.configure_mock(return_value=["aaah!zombies!!"]) self.assertEqual(get_completions(c, "12-aaa"), ["aaah!zombies!!"]) self.assertEqual(cabc.mock_calls, [mock.call("12-aaa")]) c.finish("1-code") self.assert_(c.used_completion) def test_wrap_error(self): helper = mock.Mock() c = CodeInputter(helper, "reactor") c._wrapped_completer = mock.Mock(side_effect=ValueError("oops")) with mock.patch("wormhole._rlcompleter.traceback") as traceback: with mock.patch("wormhole._rlcompleter.print") as mock_print: with self.assertRaises(ValueError) as e: c.completer("text", 0) self.assertEqual(traceback.mock_calls, [mock.call.print_exc()]) self.assertEqual(mock_print.mock_calls, [mock.call("completer exception: oops")]) self.assertEqual(str(e.exception), "oops") @inlineCallbacks def test_build_completions(self): rn = mock.Mock() # InputHelper.get_nameplate_completions returns just the suffixes gnc = mock.Mock() # get_nameplate_completions cn = mock.Mock() # choose_nameplate gwc = mock.Mock() # get_word_completions cw = mock.Mock() # choose_words helper = mock.Mock( refresh_nameplates=rn, get_nameplate_completions=gnc, choose_nameplate=cn, get_word_completions=gwc, choose_words=cw, ) # this needs a real reactor, for blockingCallFromThread c = CodeInputter(helper, reactor) cabc = c._commit_and_build_completions # in this test, we pretend that nameplates 1,12,34 are active. 
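        # note that nameplate completions keep a trailing hyphen ("1-") so
        # tab-completion flows straight into the words; word completions only
        # drop the hyphen once the wordlist knows the code is complete (see
        # the "and-bat" case below)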
# 43 TAB -> nothing (and refresh_nameplates) gnc.configure_mock(return_value=[]) matches = yield deferToThread(cabc, "43") self.assertEqual(matches, []) self.assertEqual(rn.mock_calls, [mock.call()]) self.assertEqual(gnc.mock_calls, [mock.call("43")]) self.assertEqual(cn.mock_calls, []) rn.reset_mock() gnc.reset_mock() # 1 TAB -> 1-, 12- (and refresh_nameplates) gnc.configure_mock(return_value=["1-", "12-"]) matches = yield deferToThread(cabc, "1") self.assertEqual(matches, ["1-", "12-"]) self.assertEqual(rn.mock_calls, [mock.call()]) self.assertEqual(gnc.mock_calls, [mock.call("1")]) self.assertEqual(cn.mock_calls, []) rn.reset_mock() gnc.reset_mock() # 12 TAB -> 12- (and refresh_nameplates) # I wouldn't mind if it didn't refresh the nameplates here, but meh gnc.configure_mock(return_value=["12-"]) matches = yield deferToThread(cabc, "12") self.assertEqual(matches, ["12-"]) self.assertEqual(rn.mock_calls, [mock.call()]) self.assertEqual(gnc.mock_calls, [mock.call("12")]) self.assertEqual(cn.mock_calls, []) rn.reset_mock() gnc.reset_mock() # 12- TAB -> 12- {all words} (claim, no refresh) gnc.configure_mock(return_value=["12-"]) gwc.configure_mock(return_value=["and-", "ark-", "aaah!zombies!!-"]) matches = yield deferToThread(cabc, "12-") self.assertEqual(matches, ["12-aaah!zombies!!-", "12-and-", "12-ark-"]) self.assertEqual(rn.mock_calls, []) self.assertEqual(gnc.mock_calls, []) self.assertEqual(cn.mock_calls, [mock.call("12")]) self.assertEqual(gwc.mock_calls, [mock.call("")]) cn.reset_mock() gwc.reset_mock() # TODO: another path with "3 TAB" then "34-an TAB", so the claim # happens in the second call (and it waits for the wordlist) # 12-a TAB -> 12-and- 12-ark- 12-aaah!zombies!!- gnc.configure_mock(side_effect=ValueError) gwc.configure_mock(return_value=["and-", "ark-", "aaah!zombies!!-"]) matches = yield deferToThread(cabc, "12-a") # matches are always sorted self.assertEqual(matches, ["12-aaah!zombies!!-", "12-and-", "12-ark-"]) self.assertEqual(rn.mock_calls, []) self.assertEqual(gnc.mock_calls, []) self.assertEqual(cn.mock_calls, []) self.assertEqual(gwc.mock_calls, [mock.call("a")]) gwc.reset_mock() # 12-and-b TAB -> 12-and-bat 12-and-bet 12-and-but gnc.configure_mock(side_effect=ValueError) # wordlist knows the code length, so doesn't add hyphens to these gwc.configure_mock(return_value=["and-bat", "and-bet", "and-but"]) matches = yield deferToThread(cabc, "12-and-b") self.assertEqual(matches, ["12-and-bat", "12-and-bet", "12-and-but"]) self.assertEqual(rn.mock_calls, []) self.assertEqual(gnc.mock_calls, []) self.assertEqual(cn.mock_calls, []) self.assertEqual(gwc.mock_calls, [mock.call("and-b")]) gwc.reset_mock() yield deferToThread(c.finish, "12-and-bat") self.assertEqual(cw.mock_calls, [mock.call("and-bat")]) def test_incomplete_code(self): helper = mock.Mock() c = CodeInputter(helper, "reactor") c.bcft = fake_blockingCallFromThread with self.assertRaises(KeyFormatError) as e: c.finish("1") self.assertEqual(str(e.exception), "incomplete wormhole code") @inlineCallbacks def test_rollback_nameplate_during_completion(self): helper = mock.Mock() gwc = helper.get_word_completions = mock.Mock() gwc.configure_mock(return_value=["code", "court"]) c = CodeInputter(helper, reactor) cabc = c._commit_and_build_completions matches = yield deferToThread(cabc, "1-co") # this commits us to 1- self.assertEqual(helper.mock_calls, [ mock.call.choose_nameplate("1"), mock.call.when_wordlist_is_available(), mock.call.get_word_completions("co") ]) self.assertEqual(matches, ["1-code", "1-court"]) 
helper.reset_mock() with self.assertRaises(AlreadyInputNameplateError) as e: yield deferToThread(cabc, "2-co") self.assertEqual( str(e.exception), "nameplate (1-) already entered, cannot go back") self.assertEqual(helper.mock_calls, []) @inlineCallbacks def test_rollback_nameplate_during_finish(self): helper = mock.Mock() gwc = helper.get_word_completions = mock.Mock() gwc.configure_mock(return_value=["code", "court"]) c = CodeInputter(helper, reactor) cabc = c._commit_and_build_completions matches = yield deferToThread(cabc, "1-co") # this commits us to 1- self.assertEqual(helper.mock_calls, [ mock.call.choose_nameplate("1"), mock.call.when_wordlist_is_available(), mock.call.get_word_completions("co") ]) self.assertEqual(matches, ["1-code", "1-court"]) helper.reset_mock() with self.assertRaises(AlreadyInputNameplateError) as e: yield deferToThread(c.finish, "2-code") self.assertEqual( str(e.exception), "nameplate (1-) already entered, cannot go back") self.assertEqual(helper.mock_calls, []) @mock.patch("wormhole._rlcompleter.stderr") def test_warn_readline(self, stderr): # there is no good way to test that this function gets used at the # right time, since it involves a reactor and a "system event # trigger", but let's at least make sure it's invocable warn_readline() expected = "\nCommand interrupted: please press Return to quit" self.assertEqual(stderr.mock_calls, [mock.call.write(expected), mock.call.write("\n")]) magic-wormhole-0.12.0/src/wormhole/test/test_ssh.py000066400000000000000000000052371400712516500223240ustar00rootroot00000000000000import io import os from twisted.trial import unittest import mock from ..cli import cmd_ssh OTHERS = ["config", "config~", "known_hosts", "known_hosts~"] class FindPubkey(unittest.TestCase): def test_find_one(self): files = OTHERS + ["id_rsa.pub", "id_rsa"] pubkey_data = u"ssh-rsa AAAAkeystuff email@host\n" pubkey_file = io.StringIO(pubkey_data) with mock.patch("wormhole.cli.cmd_ssh.exists", return_value=True): with mock.patch("os.listdir", return_value=files) as ld: with mock.patch( "wormhole.cli.cmd_ssh.open", return_value=pubkey_file): res = cmd_ssh.find_public_key() self.assertEqual(ld.mock_calls, [mock.call(os.path.expanduser("~/.ssh/"))]) self.assertEqual(len(res), 3, res) kind, keyid, pubkey = res self.assertEqual(kind, "ssh-rsa") self.assertEqual(keyid, "email@host") self.assertEqual(pubkey, pubkey_data) def test_find_none(self): files = OTHERS # no pubkey with mock.patch("wormhole.cli.cmd_ssh.exists", return_value=True): with mock.patch("os.listdir", return_value=files): e = self.assertRaises(cmd_ssh.PubkeyError, cmd_ssh.find_public_key) dot_ssh = os.path.expanduser("~/.ssh/") self.assertEqual(str(e), "No public keys in '{}'".format(dot_ssh)) def test_bad_hint(self): with mock.patch("wormhole.cli.cmd_ssh.exists", return_value=False): e = self.assertRaises( cmd_ssh.PubkeyError, cmd_ssh.find_public_key, hint="bogus/path") self.assertEqual(str(e), "Can't find 'bogus/path'") def test_find_multiple(self): files = OTHERS + ["id_rsa.pub", "id_rsa", "id_dsa.pub", "id_dsa"] pubkey_data = u"ssh-rsa AAAAkeystuff email@host\n" pubkey_file = io.StringIO(pubkey_data) with mock.patch("wormhole.cli.cmd_ssh.exists", return_value=True): with mock.patch("os.listdir", return_value=files): responses = iter(["frog", "NaN", "-1", "0"]) with mock.patch( "click.prompt", side_effect=lambda p: next(responses)): with mock.patch( "wormhole.cli.cmd_ssh.open", return_value=pubkey_file): res = cmd_ssh.find_public_key() self.assertEqual(len(res), 3, res) kind, keyid, 
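
# Illustrative sketch (editor's addition, not part of the original
# suite): the readline wiring that the tests above assert on, written
# out directly. `my_completer` is a stand-in name; "libedit" builds
# (e.g. macOS) advertise themselves in readline.__doc__ and need a
# different bind command than GNU readline.
def _example_readline_wiring(my_completer):  # pragma: no cover
    import readline
    if "libedit" in (readline.__doc__ or ""):
        readline.parse_and_bind("bind ^I rl_complete")  # libedit flavor
    else:
        readline.parse_and_bind("tab: complete")  # GNU readline
    readline.set_completer(my_completer)
    readline.set_completer_delims("")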

magic-wormhole-0.12.0/src/wormhole/test/test_ssh.py

import io
import os

from twisted.trial import unittest

import mock

from ..cli import cmd_ssh

OTHERS = ["config", "config~", "known_hosts", "known_hosts~"]


class FindPubkey(unittest.TestCase):
    def test_find_one(self):
        files = OTHERS + ["id_rsa.pub", "id_rsa"]
        pubkey_data = u"ssh-rsa AAAAkeystuff email@host\n"
        pubkey_file = io.StringIO(pubkey_data)
        with mock.patch("wormhole.cli.cmd_ssh.exists", return_value=True):
            with mock.patch("os.listdir", return_value=files) as ld:
                with mock.patch(
                        "wormhole.cli.cmd_ssh.open",
                        return_value=pubkey_file):
                    res = cmd_ssh.find_public_key()
        self.assertEqual(ld.mock_calls,
                         [mock.call(os.path.expanduser("~/.ssh/"))])
        self.assertEqual(len(res), 3, res)
        kind, keyid, pubkey = res
        self.assertEqual(kind, "ssh-rsa")
        self.assertEqual(keyid, "email@host")
        self.assertEqual(pubkey, pubkey_data)

    def test_find_none(self):
        files = OTHERS  # no pubkey
        with mock.patch("wormhole.cli.cmd_ssh.exists", return_value=True):
            with mock.patch("os.listdir", return_value=files):
                e = self.assertRaises(cmd_ssh.PubkeyError,
                                      cmd_ssh.find_public_key)
        dot_ssh = os.path.expanduser("~/.ssh/")
        self.assertEqual(str(e), "No public keys in '{}'".format(dot_ssh))

    def test_bad_hint(self):
        with mock.patch("wormhole.cli.cmd_ssh.exists", return_value=False):
            e = self.assertRaises(
                cmd_ssh.PubkeyError,
                cmd_ssh.find_public_key,
                hint="bogus/path")
        self.assertEqual(str(e), "Can't find 'bogus/path'")

    def test_find_multiple(self):
        files = OTHERS + ["id_rsa.pub", "id_rsa", "id_dsa.pub", "id_dsa"]
        pubkey_data = u"ssh-rsa AAAAkeystuff email@host\n"
        pubkey_file = io.StringIO(pubkey_data)
        with mock.patch("wormhole.cli.cmd_ssh.exists", return_value=True):
            with mock.patch("os.listdir", return_value=files):
                responses = iter(["frog", "NaN", "-1", "0"])
                with mock.patch(
                        "click.prompt",
                        side_effect=lambda p: next(responses)):
                    with mock.patch(
                            "wormhole.cli.cmd_ssh.open",
                            return_value=pubkey_file):
                        res = cmd_ssh.find_public_key()
        self.assertEqual(len(res), 3, res)
        kind, keyid, pubkey = res
        self.assertEqual(kind, "ssh-rsa")
        self.assertEqual(keyid, "email@host")
        self.assertEqual(pubkey, pubkey_data)
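
# Illustrative sketch (editor's addition): the (kind, keyid, pubkey)
# triple these tests expect from find_public_key(), reconstructed from
# the assertions above; the split logic here is an assumption, not the
# library's actual implementation. A line like
# "ssh-rsa AAAAkeystuff email@host\n" yields its first field as the
# kind, its last field as the key id, and the whole line as the data.
def _example_split_pubkey_line(line):  # pragma: no cover
    fields = line.split()
    return fields[0], fields[-1], line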
"wormhole.tor_manager.clientFromString", side_effect=[ep]) as sfs: d = get_tor(reactor, tor_control_port=tcp, stderr=stderr) self.assertEqual(sfs.mock_calls, [mock.call(reactor, tcp)]) self.assertNoResult(d) self.assertEqual(connect.mock_calls, [mock.call(reactor, ep)]) connect_d.callback(my_tor) tor = self.successResultOf(d) self.assertIs(tor, my_tor) self.assert_(ITorManager.providedBy(tor)) self.assertEqual(stderr.getvalue(), " using Tor via control port at PORT\n") def test_connect_custom_control_port_fails(self): reactor = object() tcp = "port" ep = object() connect_d = defer.Deferred() stderr = io.StringIO() with mock.patch( "wormhole.tor_manager.txtorcon.connect", side_effect=connect_d) as connect: with mock.patch( "wormhole.tor_manager.clientFromString", side_effect=[ep]) as sfs: d = get_tor(reactor, tor_control_port=tcp, stderr=stderr) self.assertEqual(sfs.mock_calls, [mock.call(reactor, tcp)]) self.assertNoResult(d) self.assertEqual(connect.mock_calls, [mock.call(reactor, ep)]) connect_d.errback(ConnectError()) self.failureResultOf(d, ConnectError) self.assertEqual(stderr.getvalue(), "") class SocksOnly(unittest.TestCase): def test_tor(self): reactor = object() sot = SocksOnlyTor(reactor) fake_ep = object() with mock.patch( "wormhole.tor_manager.txtorcon.TorClientEndpoint", return_value=fake_ep) as tce: ep = sot.stream_via("host", "port") self.assertIs(ep, fake_ep) self.assertEqual(tce.mock_calls, [ mock.call( "host", "port", socks_endpoint=None, tls=False, reactor=reactor) ]) magic-wormhole-0.12.0/src/wormhole/test/test_transit.py000066400000000000000000001600001400712516500232010ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import gc import io from binascii import hexlify, unhexlify import six from nacl.exceptions import CryptoError from nacl.secret import SecretBox from twisted.internet import address, defer, endpoints, error, protocol, task from twisted.internet.defer import gatherResults, inlineCallbacks from twisted.python import log from twisted.test import proto_helpers from twisted.trial import unittest import mock from wormhole_transit_relay import transit_server from .. 

magic-wormhole-0.12.0/src/wormhole/test/test_transit.py

from __future__ import print_function, unicode_literals

import gc
import io
from binascii import hexlify, unhexlify

import six
from nacl.exceptions import CryptoError
from nacl.secret import SecretBox
from twisted.internet import address, defer, endpoints, error, protocol, task
from twisted.internet.defer import gatherResults, inlineCallbacks
from twisted.python import log
from twisted.test import proto_helpers
from twisted.trial import unittest

import mock

from wormhole_transit_relay import transit_server

from .. import transit
from .._hints import DirectTCPV1Hint
from ..errors import InternalError
from ..util import HKDF
from .common import ServerBase


class Highlander(unittest.TestCase):
    def test_one_winner(self):
        cancelled = set()
        contenders = [
            defer.Deferred(lambda d, i=i: cancelled.add(i)) for i in range(5)
        ]
        d = transit.there_can_be_only_one(contenders)
        self.assertNoResult(d)
        contenders[0].errback(ValueError())
        self.assertNoResult(d)
        contenders[1].errback(TypeError())
        self.assertNoResult(d)
        contenders[2].callback("yay")
        self.assertEqual(self.successResultOf(d), "yay")
        self.assertEqual(cancelled, set([3, 4]))

    def test_there_might_also_be_none(self):
        cancelled = set()
        contenders = [
            defer.Deferred(lambda d, i=i: cancelled.add(i)) for i in range(4)
        ]
        d = transit.there_can_be_only_one(contenders)
        self.assertNoResult(d)
        contenders[0].errback(ValueError())
        self.assertNoResult(d)
        contenders[1].errback(TypeError())
        self.assertNoResult(d)
        contenders[2].errback(TypeError())
        self.assertNoResult(d)
        contenders[3].errback(NameError())
        self.failureResultOf(d, ValueError)  # first failure is recorded
        self.assertEqual(cancelled, set())

    def test_cancel_early(self):
        cancelled = set()
        contenders = [
            defer.Deferred(lambda d, i=i: cancelled.add(i)) for i in range(4)
        ]
        d = transit.there_can_be_only_one(contenders)
        self.assertNoResult(d)
        self.assertEqual(cancelled, set())
        d.cancel()
        self.failureResultOf(d, defer.CancelledError)
        self.assertEqual(cancelled, set(range(4)))

    def test_cancel_after_one_failure(self):
        cancelled = set()
        contenders = [
            defer.Deferred(lambda d, i=i: cancelled.add(i)) for i in range(4)
        ]
        d = transit.there_can_be_only_one(contenders)
        self.assertNoResult(d)
        self.assertEqual(cancelled, set())
        contenders[0].errback(ValueError())
        d.cancel()
        self.failureResultOf(d, ValueError)
        self.assertEqual(cancelled, set([1, 2, 3]))
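
# Illustrative sketch (editor's addition): the contract the Highlander
# tests above pin down. there_can_be_only_one() races Deferreds: the
# first success wins and all remaining contenders are cancelled, while
# if every contender fails, the *first* failure is what gets reported.
def _example_one_winner():  # pragma: no cover
    contenders = [defer.Deferred() for _ in range(3)]
    d = transit.there_can_be_only_one(contenders)
    contenders[1].callback("winner")  # the other two are now cancelled
    return d  # fires with "winner"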

class Forever(unittest.TestCase):
    def _forever_setup(self):
        clock = task.Clock()
        c = transit.Common("", reactor=clock)
        cancelled = []
        d0 = defer.Deferred(cancelled.append)
        d = c._not_forever(1.0, d0)
        return c, clock, d0, d, cancelled

    def test_not_forever_fires(self):
        c, clock, d0, d, cancelled = self._forever_setup()
        self.assertNoResult(d)
        self.assertEqual(cancelled, [])
        d.callback(1)
        self.assertEqual(self.successResultOf(d), 1)
        self.assertEqual(cancelled, [])
        self.assertNot(clock.getDelayedCalls())

    def test_not_forever_errs(self):
        c, clock, d0, d, cancelled = self._forever_setup()
        self.assertNoResult(d)
        self.assertEqual(cancelled, [])
        d.errback(ValueError())
        self.assertEqual(cancelled, [])
        self.failureResultOf(d, ValueError)
        self.assertNot(clock.getDelayedCalls())

    def test_not_forever_cancel_early(self):
        c, clock, d0, d, cancelled = self._forever_setup()
        self.assertNoResult(d)
        self.assertEqual(cancelled, [])
        d.cancel()
        self.assertEqual(cancelled, [d0])
        self.failureResultOf(d, defer.CancelledError)
        self.assertNot(clock.getDelayedCalls())

    def test_not_forever_timeout(self):
        c, clock, d0, d, cancelled = self._forever_setup()
        self.assertNoResult(d)
        self.assertEqual(cancelled, [])
        clock.advance(2.0)
        self.assertEqual(cancelled, [d0])
        self.failureResultOf(d, defer.CancelledError)
        self.assertNot(clock.getDelayedCalls())


class Misc(unittest.TestCase):
    def test_allocate_port(self):
        portno = transit.allocate_tcp_port()
        self.assertIsInstance(portno, int)

    def test_allocate_port_no_reuseaddr(self):
        mock_sys = mock.Mock()
        mock_sys.platform = "cygwin"
        with mock.patch("wormhole.transit.sys", mock_sys):
            portno = transit.allocate_tcp_port()
        self.assertIsInstance(portno, int)


# ipaddrs.py currently uses native strings: bytes on py2, unicode on
# py3
if six.PY2:
    LOOPADDR = b"127.0.0.1"
    OTHERADDR = b"1.2.3.4"
else:
    LOOPADDR = "127.0.0.1"  # unicode_literals
    OTHERADDR = "1.2.3.4"


class Basic(unittest.TestCase):
    @inlineCallbacks
    def test_relay_hints(self):
        URL = "tcp:host:1234"
        c = transit.Common(URL, no_listen=True)
        hints = yield c.get_connection_hints()
        self.assertEqual(hints, [{
            "type": "relay-v1",
            "hints": [{
                "type": "direct-tcp-v1",
                "hostname": "host",
                "port": 1234,
                "priority": 0.0
            }],
        }])
        self.assertRaises(InternalError, transit.Common, 123)

    @inlineCallbacks
    def test_no_relay_hints(self):
        c = transit.Common(None, no_listen=True)
        hints = yield c.get_connection_hints()
        self.assertEqual(hints, [])

    def test_ignore_bad_hints(self):
        c = transit.Common("")
        c.add_connection_hints([{"type": "unknown"}])
        c.add_connection_hints([{
            "type": "relay-v1",
            "hints": [{
                "type": "unknown"
            }]
        }])
        self.assertEqual(c._their_direct_hints, [])
        self.assertEqual(c._our_relay_hints, set())

    def test_ignore_localhost_hint_orig(self):
        # this actually starts the listener
        c = transit.TransitSender("")
        hints = self.successResultOf(c.get_connection_hints())
        c._stop_listening()
        # If there are non-localhost hints, then localhost hints should be
        # removed. But if the only hint is localhost, it should stay.
        if len(hints) == 1:
            if hints[0]["hostname"] == "127.0.0.1":
                return
        for hint in hints:
            self.assertFalse(hint["hostname"] == "127.0.0.1")

    def test_ignore_localhost_hint(self):
        # this actually starts the listener
        c = transit.TransitSender("")
        with mock.patch(
                "wormhole.ipaddrs.find_addresses",
                return_value=[LOOPADDR, OTHERADDR]):
            hints = self.successResultOf(c.get_connection_hints())
        c._stop_listening()
        # If there are non-localhost hints, then localhost hints should be
        # removed.
        self.assertEqual(len(hints), 1)
        self.assertEqual(hints[0]["hostname"], "1.2.3.4")

    def test_keep_only_localhost_hint(self):
        # this actually starts the listener
        c = transit.TransitSender("")
        with mock.patch(
                "wormhole.ipaddrs.find_addresses", return_value=[LOOPADDR]):
            hints = self.successResultOf(c.get_connection_hints())
        c._stop_listening()
        # If the only hint is localhost, it should stay.
        self.assertEqual(len(hints), 1)
        self.assertEqual(hints[0]["hostname"], "127.0.0.1")

    def test_abilities(self):
        c = transit.Common(None, no_listen=True)
        abilities = c.get_connection_abilities()
        self.assertEqual(abilities, [
            {
                "type": "direct-tcp-v1"
            },
            {
                "type": "relay-v1"
            },
        ])

    def test_transit_key_wait(self):
        KEY = b"123"
        c = transit.Common("")
        d = c._get_transit_key()
        self.assertNoResult(d)
        c.set_transit_key(KEY)
        self.assertEqual(self.successResultOf(d), KEY)

    def test_transit_key_already_set(self):
        KEY = b"123"
        c = transit.Common("")
        c.set_transit_key(KEY)
        d = c._get_transit_key()
        self.assertEqual(self.successResultOf(d), KEY)

    def test_transit_keys(self):
        KEY = b"123"
        s = transit.TransitSender("")
        s.set_transit_key(KEY)
        r = transit.TransitReceiver("")
        r.set_transit_key(KEY)

        self.assertEqual(s._send_this(), (
            b"transit sender "
            b"559bdeae4b49fa6a23378d2b68f4c7e69378615d4af049c371c6a26e82391089"
            b" ready\n\n"))
        self.assertEqual(s._send_this(), r._expect_this())
        self.assertEqual(r._send_this(), (
            b"transit receiver "
            b"ed447528194bac4c00d0c854b12a97ce51413d89aa74d6304475f516fdc23a1b"
            b" ready\n\n"))
        self.assertEqual(r._send_this(), s._expect_this())

        self.assertEqual(
            hexlify(s._sender_record_key()),
            b"5a2fba3a9e524ab2e2823ff53b05f946896f6e4ce4e282ffd8e3ac0e5e9e0cda"
        )
        self.assertEqual(
            hexlify(s._sender_record_key()),
            hexlify(r._receiver_record_key()))
        self.assertEqual(
            hexlify(r._sender_record_key()),
            b"eedb143117249f45b39da324decf6bd9aae33b7ccd58487436de611a3c6b871d"
        )
        self.assertEqual(
            hexlify(r._sender_record_key()),
            hexlify(s._receiver_record_key()))

    def test_connection_ready(self):
        s = transit.TransitSender("")
        self.assertEqual(s.connection_ready("p1"), "go")
        self.assertEqual(s._winner, "p1")
        self.assertEqual(s.connection_ready("p2"), "nevermind")
        self.assertEqual(s._winner, "p1")

        r = transit.TransitReceiver("")
        self.assertEqual(r.connection_ready("p1"), "wait-for-decision")
        self.assertEqual(r.connection_ready("p2"), "wait-for-decision")
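
# Illustrative sketch (editor's addition): the decision protocol that
# test_connection_ready() above verifies. The sender picks the first
# connection that completes the handshake and dismisses the rest; the
# receiver always defers to the sender's choice.
def _example_connection_ready():  # pragma: no cover
    s = transit.TransitSender("")
    assert s.connection_ready("p1") == "go"  # first one wins
    assert s.connection_ready("p2") == "nevermind"  # losers are told so
    r = transit.TransitReceiver("")
    assert r.connection_ready("p1") == "wait-for-decision"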

class Listener(unittest.TestCase):
    def test_listener(self):
        c = transit.Common("")
        hints, ep = c._build_listener()
        self.assertIsInstance(hints, (list, set))
        if hints:
            self.assertIsInstance(hints[0], DirectTCPV1Hint)
        self.assertIsInstance(ep, endpoints.TCP4ServerEndpoint)

    def test_get_direct_hints(self):
        # this actually starts the listener
        c = transit.TransitSender("")
        d = c.get_connection_hints()
        hints = self.successResultOf(d)
        # the hints are supposed to be cached, so calling this twice won't
        # start a second listener
        self.assert_(c._listener)
        d2 = c.get_connection_hints()
        self.assertEqual(self.successResultOf(d2), hints)
        c._stop_listening()


class DummyProtocol(protocol.Protocol):
    def __init__(self):
        self.buf = b""
        self._count = None
        self._d2 = None

    def wait_for(self, count):
        if len(self.buf) >= count:
            data = self.buf[:count]
            self.buf = self.buf[count:]
            return defer.succeed(data)
        self._d = defer.Deferred()
        self._count = count
        return self._d

    def dataReceived(self, data):
        self.buf += data
        # print("oDR", self._count, len(self.buf))
        if self._count is not None and len(self.buf) >= self._count:
            got = self.buf[:self._count]
            self.buf = self.buf[self._count:]
            self._count = None
            self._d.callback(got)

    def wait_for_disconnect(self):
        self._d2 = defer.Deferred()
        return self._d2

    def connectionLost(self, reason):
        if self._d2:
            self._d2.callback(None)


class FakeTransport:
    signalConnectionLost = True

    def __init__(self, p, peeraddr):
        self.protocol = p
        self._peeraddr = peeraddr
        self._buf = b""
        self._connected = True

    def write(self, data):
        self._buf += data

    def loseConnection(self):
        self._connected = False
        if self.signalConnectionLost:
            self.protocol.connectionLost()

    def getPeer(self):
        return self._peeraddr

    def read_buf(self):
        b = self._buf
        self._buf = b""
        return b


class RandomError(Exception):
    pass


class MockConnection:
    def __init__(self, owner, relay_handshake, start, description):
        self.owner = owner
        self.relay_handshake = relay_handshake
        self.start = start
        self._description = description

        def cancel(d):
            self._cancelled = True

        self._d = defer.Deferred(cancel)
        self._start_negotiation_called = False
        self._cancelled = False

    def startNegotiation(self):
        self._start_negotiation_called = True
        return self._d


class InboundConnectionFactory(unittest.TestCase):
    def test_describe(self):
        f = transit.InboundConnectionFactory(None)
        addrH = address.HostnameAddress("example.com", 1234)
        self.assertEqual(f._describePeer(addrH), "<-example.com:1234")
        addr4 = address.IPv4Address("TCP", "1.2.3.4", 1234)
        self.assertEqual(f._describePeer(addr4), "<-1.2.3.4:1234")
        addr6 = address.IPv6Address("TCP", "::1", 1234)
        self.assertEqual(f._describePeer(addr6), "<-::1:1234")
        addrU = address.UNIXAddress("/dev/unlikely")
        self.assertEqual(
            f._describePeer(addrU), "<-UNIXAddress('/dev/unlikely')")

    def test_success(self):
        f = transit.InboundConnectionFactory("owner")
        f.protocol = MockConnection
        d = f.whenDone()
        self.assertNoResult(d)

        addr = address.HostnameAddress("example.com", 1234)
        p = f.buildProtocol(addr)
        self.assertIsInstance(p, MockConnection)
        self.assertEqual(p.owner, "owner")
        self.assertEqual(p.relay_handshake, None)
        self.assertEqual(p._start_negotiation_called, False)
        # meh .start

        # this is normally called from Connection.connectionMade
        f.connectionWasMade(p)
        self.assertEqual(p._start_negotiation_called, True)
        self.assertNoResult(d)
        self.assertEqual(p._description, "<-example.com:1234")

        p._d.callback(p)
        self.assertEqual(self.successResultOf(d), p)

    def test_one_fail_one_success(self):
        f = transit.InboundConnectionFactory("owner")
        f.protocol = MockConnection
        d = f.whenDone()
        self.assertNoResult(d)

        addr1 = address.HostnameAddress("example.com", 1234)
        addr2 = address.HostnameAddress("example.com", 5678)
        p1 = f.buildProtocol(addr1)
        p2 = f.buildProtocol(addr2)

        f.connectionWasMade(p1)
        f.connectionWasMade(p2)
        self.assertNoResult(d)

        p1._d.errback(transit.BadHandshake("nope"))
        self.assertNoResult(d)
        p2._d.callback(p2)
        self.assertEqual(self.successResultOf(d), p2)

    def test_first_success_wins(self):
        f = transit.InboundConnectionFactory("owner")
        f.protocol = MockConnection
        d = f.whenDone()
        self.assertNoResult(d)

        addr1 = address.HostnameAddress("example.com", 1234)
        addr2 = address.HostnameAddress("example.com", 5678)
        p1 = f.buildProtocol(addr1)
        p2 = f.buildProtocol(addr2)

        f.connectionWasMade(p1)
        f.connectionWasMade(p2)
        self.assertNoResult(d)

        p1._d.callback(p1)
        self.assertEqual(self.successResultOf(d), p1)
        self.assertEqual(p1._cancelled, False)
        self.assertEqual(p2._cancelled, True)

    def test_log_other_errors(self):
        f = transit.InboundConnectionFactory("owner")
        f.protocol = MockConnection
        d = f.whenDone()
        self.assertNoResult(d)

        addr = address.HostnameAddress("example.com", 1234)
        p1 = f.buildProtocol(addr)

        # if the Connection protocol throws an unexpected error, that should
        # get logged to the Twisted logs (as an Unhandled Error in Deferred)
        # so we can diagnose the bug
        f.connectionWasMade(p1)
        our_error = RandomError("boom1")
        p1._d.errback(our_error)
        self.assertNoResult(d)

        log.msg("=== note: the next RandomError is expected ===")
        # Make sure the Deferred has gone out of scope, so the UnhandledError
        # happens quickly. We must manually break the gc cycle.
        del p1._d
        gc.collect()  # make PyPy happy
        errors = self.flushLoggedErrors(RandomError)
        self.assertEqual(1, len(errors))
        self.assertEqual(our_error, errors[0].value)
        log.msg("=== note: the preceding RandomError was expected ===")

    def test_cancel(self):
        f = transit.InboundConnectionFactory("owner")
        f.protocol = MockConnection
        d = f.whenDone()
        self.assertNoResult(d)

        addr1 = address.HostnameAddress("example.com", 1234)
        addr2 = address.HostnameAddress("example.com", 5678)
        p1 = f.buildProtocol(addr1)
        p2 = f.buildProtocol(addr2)

        f.connectionWasMade(p1)
        f.connectionWasMade(p2)
        self.assertNoResult(d)

        d.cancel()

        self.failureResultOf(d, defer.CancelledError)
        self.assertEqual(p1._cancelled, True)
        self.assertEqual(p2._cancelled, True)

    # XXX check descriptions


class OutboundConnectionFactory(unittest.TestCase):
    def test_success(self):
        f = transit.OutboundConnectionFactory("owner", "relay_handshake",
                                              "description")
        f.protocol = MockConnection

        addr = address.HostnameAddress("example.com", 1234)
        p = f.buildProtocol(addr)
        self.assertIsInstance(p, MockConnection)
        self.assertEqual(p.owner, "owner")
        self.assertEqual(p.relay_handshake, "relay_handshake")
        self.assertEqual(p._start_negotiation_called, False)
        # meh .start

        # this is normally called from Connection.connectionMade
        f.connectionWasMade(p)  # no-op for outbound
        self.assertEqual(p._start_negotiation_called, False)


class MockOwner:
    _connection_ready_called = False

    def connection_ready(self, connection):
        self._connection_ready_called = True
        self._connection = connection
        return self._state

    def _send_this(self):
        return b"send_this"

    def _expect_this(self):
        return b"expect_this"

    def _sender_record_key(self):
        return b"s" * 32

    def _receiver_record_key(self):
        return b"r" * 32


class MockFactory:
    _connectionWasMade_called = False

    def connectionWasMade(self, p):
        self._connectionWasMade_called = True
        self._p = p


class Connection(unittest.TestCase):
    # exercise the Connection protocol class

    def test_check_and_remove(self):
        c = transit.Connection(None, None, None, "description")
        c.buf = b""
        EXP = b"expectation"
        self.assertFalse(c._check_and_remove(EXP))
        self.assertEqual(c.buf, b"")

        c.buf = b"unexpected"
        e = self.assertRaises(transit.BadHandshake, c._check_and_remove, EXP)
        self.assertEqual(
            str(e), "got %r want %r" % (b'unexpected', b'expectation'))
        self.assertEqual(c.buf, b"unexpected")

        c.buf = b"expect"
        self.assertFalse(c._check_and_remove(EXP))
        self.assertEqual(c.buf, b"expect")

        c.buf = b"expectation"
        self.assertTrue(c._check_and_remove(EXP))
        self.assertEqual(c.buf, b"")

        c.buf = b"expectation exceeded"
        self.assertTrue(c._check_and_remove(EXP))
        self.assertEqual(c.buf, b" exceeded")

    def test_describe(self):
        c = transit.Connection(None, None, None, "description")
        self.assertEqual(c.describe(), "description")

    def test_sender_accepting(self):
        relay_handshake = None
        owner = MockOwner()
        factory = MockFactory()
        addr = address.HostnameAddress("example.com", 1234)
        c = transit.Connection(owner, relay_handshake, None, "description")
        self.assertEqual(c.state, "too-early")
        t = c.transport = FakeTransport(c, addr)
        c.factory = factory
        c.connectionMade()
        self.assertEqual(factory._connectionWasMade_called, True)
        self.assertEqual(factory._p, c)

        owner._state = "go"
        d = c.startNegotiation()
        self.assertEqual(c.state, "handshake")
        self.assertEqual(t.read_buf(), b"send_this")
        self.assertNoResult(d)

        c.dataReceived(b"expect_this")
        self.assertEqual(t.read_buf(), b"go\n")
        self.assertEqual(t._connected, True)
        self.assertEqual(c.state, "records")
        self.assertEqual(self.successResultOf(d), c)

        c.close()
        self.assertEqual(t._connected, False)

    def test_sender_rejecting(self):
        relay_handshake = None
        owner = MockOwner()
        factory = MockFactory()
        addr = address.HostnameAddress("example.com", 1234)
        c = transit.Connection(owner, relay_handshake, None, "description")
        self.assertEqual(c.state, "too-early")
        t = c.transport = FakeTransport(c, addr)
        c.factory = factory
        c.connectionMade()
        self.assertEqual(factory._connectionWasMade_called, True)
        self.assertEqual(factory._p, c)

        owner._state = "nevermind"
        d = c.startNegotiation()
        self.assertEqual(c.state, "handshake")
        self.assertEqual(t.read_buf(), b"send_this")
        self.assertNoResult(d)

        c.dataReceived(b"expect_this")
        self.assertEqual(t.read_buf(), b"nevermind\n")
        self.assertEqual(t._connected, False)
        self.assertEqual(c.state, "hung up")
        f = self.failureResultOf(d, transit.BadHandshake)
        self.assertEqual(str(f.value), "abandoned")

    def test_handshake_other_error(self):
        owner = MockOwner()
        factory = MockFactory()
        addr = address.HostnameAddress("example.com", 1234)
        c = transit.Connection(owner, None, None, "description")
        self.assertEqual(c.state, "too-early")
        t = c.transport = FakeTransport(c, addr)
        c.factory = factory
        c.connectionMade()
        self.assertEqual(factory._connectionWasMade_called, True)
        self.assertEqual(factory._p, c)

        d = c.startNegotiation()
        self.assertEqual(c.state, "handshake")
        self.assertEqual(t.read_buf(), b"send_this")
        self.assertNoResult(d)

        c.state = RandomError("boom2")
        self.assertRaises(RandomError, c.dataReceived, b"surprise!")
        self.assertEqual(t._connected, False)
        self.assertEqual(c.state, "hung up")
        self.failureResultOf(d, RandomError)

    def test_handshake_bad_state(self):
        owner = MockOwner()
        factory = MockFactory()
        addr = address.HostnameAddress("example.com", 1234)
        c = transit.Connection(owner, None, None, "description")
        self.assertEqual(c.state, "too-early")
        t = c.transport = FakeTransport(c, addr)
        c.factory = factory
        c.connectionMade()
        self.assertEqual(factory._connectionWasMade_called, True)
        self.assertEqual(factory._p, c)

        d = c.startNegotiation()
        self.assertEqual(c.state, "handshake")
        self.assertEqual(t.read_buf(), b"send_this")
        self.assertNoResult(d)

        c.state = "unknown-bogus-state"
        self.assertRaises(ValueError, c.dataReceived, b"surprise!")
        self.assertEqual(t._connected, False)
        self.assertEqual(c.state, "hung up")
        self.failureResultOf(d, ValueError)

    def test_relay_handshake(self):
        relay_handshake = b"relay handshake"
        owner = MockOwner()
        factory = MockFactory()
        addr = address.HostnameAddress("example.com", 1234)
        c = transit.Connection(owner, relay_handshake, None, "description")
        self.assertEqual(c.state, "too-early")
        t = c.transport = FakeTransport(c, addr)
        c.factory = factory
        c.connectionMade()
        self.assertEqual(factory._connectionWasMade_called, True)
        self.assertEqual(factory._p, c)
        self.assertEqual(t.read_buf(), b"")  # quiet until startNegotiation

        owner._state = "go"
        d = c.startNegotiation()
        self.assertEqual(t.read_buf(), relay_handshake)
        self.assertEqual(c.state, "relay")  # waiting for OK from relay

        c.dataReceived(b"ok\n")
        self.assertEqual(t.read_buf(), b"send_this")
        self.assertEqual(c.state, "handshake")

        self.assertNoResult(d)

        c.dataReceived(b"expect_this")
        self.assertEqual(c.state, "records")
        self.assertEqual(self.successResultOf(d), c)

        self.assertEqual(t.read_buf(), b"go\n")

    def test_relay_handshake_bad(self):
        relay_handshake = b"relay handshake"
        owner = MockOwner()
        factory = MockFactory()
        addr = address.HostnameAddress("example.com", 1234)
        c = transit.Connection(owner, relay_handshake, None, "description")
        self.assertEqual(c.state, "too-early")
        t = c.transport = FakeTransport(c, addr)
        c.factory = factory
        c.connectionMade()
        self.assertEqual(factory._connectionWasMade_called, True)
        self.assertEqual(factory._p, c)
        self.assertEqual(t.read_buf(), b"")  # quiet until startNegotiation

        owner._state = "go"
        d = c.startNegotiation()
        self.assertEqual(t.read_buf(), relay_handshake)
        self.assertEqual(c.state, "relay")  # waiting for OK from relay

        c.dataReceived(b"not ok\n")
        self.assertEqual(t._connected, False)
        self.assertEqual(c.state, "hung up")

        f = self.failureResultOf(d, transit.BadHandshake)
        self.assertEqual(
            str(f.value), "got %r want %r" % (b"not ok\n", b"ok\n"))
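
    # Editor's note (illustrative summary of the handshakes exercised
    # above and below, in sketch form):
    #   sender:   send handshake blob; expect the receiver's blob; then
    #             send "go\n" (accept) or "nevermind\n" (reject)
    #   receiver: send handshake blob; expect the sender's blob; then
    #             wait for the sender's "go\n"
    #   relayed:  send the relay handshake first and wait for "ok\n"
    #             before starting the peer handshake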
address.HostnameAddress("example.com", 1234) c = transit.Connection(owner, None, None, "description") self.assertEqual(c.state, "too-early") t = c.transport = FakeTransport(c, addr) c.factory = factory c.connectionMade() d = c.startNegotiation() # while we're waiting for negotiation, we get cancelled d.cancel() self.assertEqual(t._connected, False) self.assertEqual(c.state, "hung up") self.failureResultOf(d, defer.CancelledError) def test_timeout(self): clock = task.Clock() owner = MockOwner() factory = MockFactory() addr = address.HostnameAddress("example.com", 1234) c = transit.Connection(owner, None, None, "description") def _callLater(period, func): clock.callLater(period, func) c.callLater = _callLater self.assertEqual(c.state, "too-early") t = c.transport = FakeTransport(c, addr) c.factory = factory c.connectionMade() # the timer should now be running d = c.startNegotiation() # while we're waiting for negotiation, the timer expires clock.advance(transit.TIMEOUT + 1.0) self.assertEqual(t._connected, False) f = self.failureResultOf(d, transit.BadHandshake) self.assertEqual(str(f.value), "timeout") def make_connection(self): owner = MockOwner() factory = MockFactory() addr = address.HostnameAddress("example.com", 1234) c = transit.Connection(owner, None, None, "description") t = c.transport = FakeTransport(c, addr) c.factory = factory c.connectionMade() owner._state = "go" d = c.startNegotiation() c.dataReceived(b"expect_this") self.assertEqual(self.successResultOf(d), c) t.read_buf() # flush input buffer, prepare for encrypted records return t, c, owner def test_records_not_binary(self): t, c, owner = self.make_connection() RECORD1 = u"not binary" with self.assertRaises(InternalError): c.send_record(RECORD1) def test_records_good(self): # now make sure that outbound records are encrypted properly t, c, owner = self.make_connection() RECORD1 = b"record" c.send_record(RECORD1) buf = t.read_buf() expected = ("%08x" % (24 + len(RECORD1) + 16)).encode("ascii") self.assertEqual(hexlify(buf[:4]), expected) encrypted = buf[4:] receive_box = SecretBox(owner._sender_record_key()) nonce_buf = encrypted[:SecretBox.NONCE_SIZE] # assume it's prepended nonce = int(hexlify(nonce_buf), 16) self.assertEqual(nonce, 0) # first message gets nonce 0 decrypted = receive_box.decrypt(encrypted) self.assertEqual(decrypted, RECORD1) # second message gets nonce 1 RECORD2 = b"record2" c.send_record(RECORD2) buf = t.read_buf() expected = ("%08x" % (24 + len(RECORD2) + 16)).encode("ascii") self.assertEqual(hexlify(buf[:4]), expected) encrypted = buf[4:] receive_box = SecretBox(owner._sender_record_key()) nonce_buf = encrypted[:SecretBox.NONCE_SIZE] # assume it's prepended nonce = int(hexlify(nonce_buf), 16) self.assertEqual(nonce, 1) decrypted = receive_box.decrypt(encrypted) self.assertEqual(decrypted, RECORD2) # and that we can receive records properly inbound_records = [] c.recordReceived = inbound_records.append send_box = SecretBox(owner._receiver_record_key()) RECORD3 = b"record3" nonce_buf = unhexlify("%048x" % 0) # first nonce must be 0 encrypted = send_box.encrypt(RECORD3, nonce_buf) length = unhexlify("%08x" % len(encrypted)) # always 4 bytes long c.dataReceived(length[:2]) c.dataReceived(length[2:]) c.dataReceived(encrypted[:-2]) self.assertEqual(inbound_records, []) c.dataReceived(encrypted[-2:]) self.assertEqual(inbound_records, [RECORD3]) RECORD4 = b"record4" nonce_buf = unhexlify("%048x" % 1) # nonces increment encrypted = send_box.encrypt(RECORD4, nonce_buf) length = unhexlify("%08x" % 
    def corrupt(self, orig):
        last_byte = orig[-1:]
        num = int(hexlify(last_byte).decode("ascii"), 16)
        corrupt_num = 256 - num
        as_byte = unhexlify("%02x" % corrupt_num)
        return orig[:-1] + as_byte

    def test_records_corrupt(self):
        # corrupt records should be rejected
        t, c, owner = self.make_connection()

        inbound_records = []
        c.recordReceived = inbound_records.append

        RECORD = b"record"
        send_box = SecretBox(owner._receiver_record_key())
        nonce_buf = unhexlify("%048x" % 0)  # first nonce must be 0
        encrypted = self.corrupt(send_box.encrypt(RECORD, nonce_buf))
        length = unhexlify("%08x" % len(encrypted))  # always 4 bytes long
        c.dataReceived(length)
        c.dataReceived(encrypted[:-2])
        self.assertEqual(inbound_records, [])
        self.assertRaises(CryptoError, c.dataReceived, encrypted[-2:])
        self.assertEqual(inbound_records, [])
        # and the connection should have been dropped
        self.assertEqual(t._connected, False)

    def test_out_of_order_nonce(self):
        # an inbound out-of-order nonce should be rejected
        t, c, owner = self.make_connection()

        inbound_records = []
        c.recordReceived = inbound_records.append

        RECORD = b"record"
        send_box = SecretBox(owner._receiver_record_key())
        nonce_buf = unhexlify("%048x" % 1)  # first nonce must be 0
        encrypted = send_box.encrypt(RECORD, nonce_buf)
        length = unhexlify("%08x" % len(encrypted))  # always 4 bytes long
        c.dataReceived(length)
        c.dataReceived(encrypted[:-2])
        self.assertEqual(inbound_records, [])
        self.assertRaises(transit.BadNonce, c.dataReceived, encrypted[-2:])
        self.assertEqual(inbound_records, [])
        # and the connection should have been dropped
        self.assertEqual(t._connected, False)

    # TODO: check that .connectionLost/loseConnection signatures are
    # consistent: zero args, or one arg?

    # XXX: if we don't set the transit key before connecting, what
    # happens? We currently get a type-check assertion from HKDF because
    # the key is None.

    def test_receive_queue(self):
        c = transit.Connection(None, None, None, "description")
        c.transport = FakeTransport(c, None)
        c.transport.signalConnectionLost = False
        c.recordReceived(b"0")
        c.recordReceived(b"1")
        c.recordReceived(b"2")
        d0 = c.receive_record()
        self.assertEqual(self.successResultOf(d0), b"0")
        d1 = c.receive_record()
        d2 = c.receive_record()
        # they must fire in order of receipt, not order of addCallback
        self.assertEqual(self.successResultOf(d2), b"2")
        self.assertEqual(self.successResultOf(d1), b"1")

        d3 = c.receive_record()
        d4 = c.receive_record()
        self.assertNoResult(d3)
        self.assertNoResult(d4)

        c.recordReceived(b"3")
        self.assertEqual(self.successResultOf(d3), b"3")
        self.assertNoResult(d4)

        c.recordReceived(b"4")
        self.assertEqual(self.successResultOf(d4), b"4")

        d5 = c.receive_record()
        c.close()
        self.failureResultOf(d5, error.ConnectionClosed)

    def test_producer(self):
        # a Transit object (receiving data from the remote peer) produces
        # data and writes it into a local Consumer
        c = transit.Connection(None, None, None, "description")
        c.transport = proto_helpers.StringTransport()
        c.recordReceived(b"r1.")
        c.recordReceived(b"r2.")

        consumer = proto_helpers.StringTransport()
        rv = c.connectConsumer(consumer)
        self.assertIs(rv, None)
        self.assertIs(c._consumer, consumer)
        self.assertEqual(consumer.value(), b"r1.r2.")

        self.assertRaises(RuntimeError, c.connectConsumer, consumer)

        c.recordReceived(b"r3.")
        self.assertEqual(consumer.value(), b"r1.r2.r3.")

        c.pauseProducing()
        self.assertEqual(c.transport.producerState, "paused")
        c.resumeProducing()
        self.assertEqual(c.transport.producerState, "producing")

        c.disconnectConsumer()
        self.assertEqual(consumer.producer, None)
        c.connectConsumer(consumer)

        c.stopProducing()
        self.assertEqual(c.transport.producerState, "stopped")

    def test_connectConsumer(self):
        # connectConsumer() takes an optional number of bytes to expect, and
        # fires a Deferred when that many have been written
        c = transit.Connection(None, None, None, "description")
        c._negotiation_d.addErrback(lambda err: None)  # eat it
        c.transport = proto_helpers.StringTransport()
        c.recordReceived(b"r1.")

        consumer = proto_helpers.StringTransport()
        d = c.connectConsumer(consumer, expected=10)
        self.assertEqual(consumer.value(), b"r1.")
        self.assertNoResult(d)

        c.recordReceived(b"r2.")
        self.assertEqual(consumer.value(), b"r1.r2.")
        self.assertNoResult(d)

        c.recordReceived(b"r3.")
        self.assertEqual(consumer.value(), b"r1.r2.r3.")
        self.assertNoResult(d)

        c.recordReceived(b"!")
        self.assertEqual(consumer.value(), b"r1.r2.r3.!")
        self.assertEqual(self.successResultOf(d), 10)

        # that should automatically disconnect the consumer, and subsequent
        # records should get queued, not delivered
        self.assertIs(c._consumer, None)
        c.recordReceived(b"overflow")
        self.assertEqual(consumer.value(), b"r1.r2.r3.!")

        # now test that the Deferred errbacks when the connection is lost
        d = c.connectConsumer(consumer, expected=10)

        c.connectionLost()
        self.failureResultOf(d, error.ConnectionClosed)

    def test_connectConsumer_empty(self):
        # if connectConsumer() expects 0 bytes (e.g. someone is "sending" a
        # zero-length file), make sure it gets woken up right away, so it can
        # disconnect itself, even though no bytes will actually arrive
        c = transit.Connection(None, None, None, "description")
        c._negotiation_d.addErrback(lambda err: None)  # eat it
        c.transport = proto_helpers.StringTransport()

        consumer = proto_helpers.StringTransport()
        d = c.connectConsumer(consumer, expected=0)
        self.assertEqual(self.successResultOf(d), 0)
        self.assertEqual(consumer.value(), b"")
        self.assertIs(c._consumer, None)

    def test_writeToFile(self):
        c = transit.Connection(None, None, None, "description")
        c._negotiation_d.addErrback(lambda err: None)  # eat it
        c.transport = proto_helpers.StringTransport()
        c.recordReceived(b"r1.")

        f = io.BytesIO()
        progress = []
        d = c.writeToFile(f, 10, progress.append)
        self.assertEqual(f.getvalue(), b"r1.")
        self.assertEqual(progress, [3])
        self.assertNoResult(d)

        c.recordReceived(b"r2.")
        self.assertEqual(f.getvalue(), b"r1.r2.")
        self.assertEqual(progress, [3, 3])
        self.assertNoResult(d)

        c.recordReceived(b"r3.")
        self.assertEqual(f.getvalue(), b"r1.r2.r3.")
        self.assertEqual(progress, [3, 3, 3])
        self.assertNoResult(d)

        c.recordReceived(b"!")
        self.assertEqual(f.getvalue(), b"r1.r2.r3.!")
        self.assertEqual(progress, [3, 3, 3, 1])
        self.assertEqual(self.successResultOf(d), 10)

        # that should automatically disconnect the consumer, and subsequent
        # records should get queued, not delivered
        self.assertIs(c._consumer, None)
        c.recordReceived(b"overflow.")
        self.assertEqual(f.getvalue(), b"r1.r2.r3.!")
        self.assertEqual(progress, [3, 3, 3, 1])

        # test what happens when enough data is queued ahead of time
        c.recordReceived(b"second.")  # now "overflow.second."
        c.recordReceived(b"third.")  # now "overflow.second.third."
        f = io.BytesIO()
        d = c.writeToFile(f, 10)
        self.assertEqual(f.getvalue(), b"overflow.second.")  # whole records
        self.assertEqual(self.successResultOf(d), 16)
        self.assertEqual(list(c._inbound_records), [b"third."])

        # now test that the Deferred errbacks when the connection is lost
        d = c.writeToFile(f, 10)

        c.connectionLost()
        self.failureResultOf(d, error.ConnectionClosed)

    def test_consumer(self):
        # a local producer sends data to a consuming Transit object
        c = transit.Connection(None, None, None, "description")
        c.transport = proto_helpers.StringTransport()
        records = []
        c.send_record = records.append

        producer = proto_helpers.StringTransport()
        c.registerProducer(producer, True)
        self.assertIs(c.transport.producer, producer)

        c.write(b"r1.")
        self.assertEqual(records, [b"r1."])

        c.unregisterProducer()
        self.assertEqual(c.transport.producer, None)


class FileConsumer(unittest.TestCase):
    def test_basic(self):
        f = io.BytesIO()
        progress = []
        fc = transit.FileConsumer(f, progress.append)
        self.assertEqual(progress, [])
        self.assertEqual(f.getvalue(), b"")
        fc.write(b"." * 99)
        self.assertEqual(progress, [99])
        self.assertEqual(f.getvalue(), b"." * 99)
        fc.write(b"!")
        self.assertEqual(progress, [99, 1])
        self.assertEqual(f.getvalue(), b"." * 99 + b"!")

    def test_hasher(self):
        hashee = []
        f = io.BytesIO()
        progress = []
        fc = transit.FileConsumer(f, progress.append, hasher=hashee.append)
        self.assertEqual(progress, [])
        self.assertEqual(f.getvalue(), b"")
        self.assertEqual(hashee, [])
        fc.write(b"." * 99)
        self.assertEqual(progress, [99])
        self.assertEqual(f.getvalue(), b"." * 99)
        self.assertEqual(hashee, [b"." * 99])
        fc.write(b"!")
        self.assertEqual(progress, [99, 1])
        self.assertEqual(f.getvalue(), b"." * 99 + b"!")
        self.assertEqual(hashee, [b"." * 99, b"!"])


DIRECT_HINT_JSON = {
    "type": "direct-tcp-v1",
    "hostname": "direct",
    "port": 1234
}
RELAY_HINT_JSON = {
    "type": "relay-v1",
    "hints": [{
        "type": "direct-tcp-v1",
        "hostname": "relay",
        "port": 1234
    }]
}
UNRECOGNIZED_DIRECT_HINT_JSON = {
    "type": "direct-tcp-v1",
    "hostname": ["cannot", "parse", "list"]
}
UNRECOGNIZED_HINT_JSON = {"type": "unknown"}
UNAVAILABLE_HINT_JSON = {
    "type": "direct-tcp-v1",  # e.g. Tor without txtorcon
    "hostname": "unavailable",
    "port": 1234
}
RELAY_HINT2_JSON = {
    "type": "relay-v1",
    "hints": [{
        "type": "direct-tcp-v1",
        "hostname": "relay",
        "port": 1234
    }, UNRECOGNIZED_HINT_JSON]
}
UNAVAILABLE_RELAY_HINT_JSON = {
    "type": "relay-v1",
    "hints": [UNAVAILABLE_HINT_JSON]
}


class Transit(unittest.TestCase):
    def setUp(self):
        self._connectors = []
        self._waiters = []
        self._descriptions = []

    def _start_connector(self, ep, description, is_relay=False):
        d = defer.Deferred()
        self._connectors.append(ep)
        self._waiters.append(d)
        self._descriptions.append(description)
        return d

    @inlineCallbacks
    def test_success_direct(self):
        reactor = mock.Mock()
        s = transit.TransitSender("", reactor=reactor)
        s.set_transit_key(b"key")
        hints = yield s.get_connection_hints()  # start the listener
        del hints
        s.add_connection_hints([
            DIRECT_HINT_JSON, UNRECOGNIZED_DIRECT_HINT_JSON,
            UNRECOGNIZED_HINT_JSON
        ])
        s._start_connector = self._start_connector

        d = s.connect()
        self.assertNoResult(d)
        self.assertEqual(len(self._waiters), 1)
        self.assertIsInstance(self._waiters[0], defer.Deferred)

        self._waiters[0].callback("winner")
        self.assertEqual(self.successResultOf(d), "winner")
        self.assertEqual(self._descriptions, ["->tcp:direct:1234"])

    @inlineCallbacks
    def test_success_direct_tor(self):
        clock = task.Clock()
        s = transit.TransitSender("", tor=mock.Mock(), reactor=clock)
        s.set_transit_key(b"key")
        hints = yield s.get_connection_hints()  # start the listener
        del hints
        s.add_connection_hints([DIRECT_HINT_JSON])
        s._start_connector = self._start_connector

        d = s.connect()
        self.assertNoResult(d)
        self.assertEqual(len(self._waiters), 1)
        self.assertIsInstance(self._waiters[0], defer.Deferred)

        self._waiters[0].callback("winner")
        self.assertEqual(self.successResultOf(d), "winner")
        self.assertEqual(self._descriptions, ["tor->tcp:direct:1234"])

    @inlineCallbacks
    def test_success_direct_tor_relay(self):
        clock = task.Clock()
        s = transit.TransitSender("", tor=mock.Mock(), reactor=clock)
        s.set_transit_key(b"key")
        hints = yield s.get_connection_hints()  # start the listener
        del hints
        s.add_connection_hints([RELAY_HINT_JSON])
        s._start_connector = self._start_connector

        d = s.connect()
        # move the clock forward any amount, since relay connections are
        # triggered starting at T+0.0
        clock.advance(1.0)
        self.assertNoResult(d)
        self.assertEqual(len(self._waiters), 1)
        self.assertIsInstance(self._waiters[0], defer.Deferred)

        self._waiters[0].callback("winner")
        self.assertEqual(self.successResultOf(d), "winner")
        self.assertEqual(self._descriptions, ["tor->relay:tcp:relay:1234"])

    def _endpoint_from_hint_obj(self, hint, _tor, _reactor):
        if isinstance(hint, DirectTCPV1Hint):
            if hint.hostname == "unavailable":
                return None
            return hint.hostname
        return None

    @inlineCallbacks
    def test_wait_for_relay(self):
        clock = task.Clock()
        s = transit.TransitSender("", reactor=clock, no_listen=True)
        s.set_transit_key(b"key")
        hints = yield s.get_connection_hints()
        del hints
        s.add_connection_hints(
            [DIRECT_HINT_JSON, UNRECOGNIZED_HINT_JSON, RELAY_HINT_JSON])
        s._start_connector = self._start_connector

        with mock.patch("wormhole.transit.endpoint_from_hint_obj",
                        self._endpoint_from_hint_obj):
            d = s.connect()
            self.assertNoResult(d)
            # the direct connectors are tried right away, but the relay
            # connectors are stalled for a few seconds
            self.assertEqual(self._connectors, ["direct"])

            clock.advance(s.RELAY_DELAY + 1.0)
            self.assertEqual(self._connectors, ["direct", "relay"])

            self._waiters[0].callback("winner")
            self.assertEqual(self.successResultOf(d), "winner")

    @inlineCallbacks
    def test_priorities(self):
        clock = task.Clock()
        s = transit.TransitSender("", reactor=clock, no_listen=True)
        s.set_transit_key(b"key")
        hints = yield s.get_connection_hints()
        del hints
        s.add_connection_hints([
            {
                "type": "relay-v1",
                "hints": [{
                    "type": "direct-tcp-v1",
                    "hostname": "relay",
                    "port": 1234
                }]
            },
            {
                "type": "direct-tcp-v1",
                "hostname": "direct",
                "port": 1234
            },
            {
                "type": "relay-v1",
                "hints": [{
                    "type": "direct-tcp-v1",
                    "priority": 2.0,
                    "hostname": "relay2",
                    "port": 1234
                }, {
                    "type": "direct-tcp-v1",
                    "priority": 3.0,
                    "hostname": "relay3",
                    "port": 1234
                }]
            },
            {
                "type": "relay-v1",
                "hints": [{
                    "type": "direct-tcp-v1",
                    "priority": 2.0,
                    "hostname": "relay4",
                    "port": 1234
                }]
            },
        ])
        s._start_connector = self._start_connector

        with mock.patch("wormhole.transit.endpoint_from_hint_obj",
                        self._endpoint_from_hint_obj):
            d = s.connect()
            self.assertNoResult(d)
            # direct connector should be used first, then the priority=3.0 relay,
            # then the two 2.0 relays, then the (default) 0.0 relay
            self.assertEqual(self._connectors, ["direct"])

            clock.advance(s.RELAY_DELAY + 1.0)
            self.assertEqual(self._connectors, ["direct", "relay3"])

            clock.advance(s.RELAY_DELAY)
            self.assertIn(self._connectors,
                          (["direct", "relay3", "relay2", "relay4"],
                           ["direct", "relay3", "relay4", "relay2"]))

            clock.advance(s.RELAY_DELAY)
            self.assertIn(self._connectors,
                          (["direct", "relay3", "relay2", "relay4", "relay"],
                           ["direct", "relay3", "relay4", "relay2", "relay"]))

            self._waiters[0].callback("winner")
            self.assertEqual(self.successResultOf(d), "winner")

    @inlineCallbacks
    def test_no_direct_hints(self):
        clock = task.Clock()
        s = transit.TransitSender("", reactor=clock, no_listen=True)
        s.set_transit_key(b"key")
        hints = yield s.get_connection_hints()  # start the listener
        del hints
        # include hints that can't be turned into an endpoint at runtime
        s.add_connection_hints([
            UNRECOGNIZED_HINT_JSON, UNAVAILABLE_HINT_JSON, RELAY_HINT2_JSON,
            UNAVAILABLE_RELAY_HINT_JSON
        ])
        s._start_connector = self._start_connector

        with mock.patch("wormhole.transit.endpoint_from_hint_obj",
                        self._endpoint_from_hint_obj):
            d = s.connect()
            self.assertNoResult(d)
            # since there are no usable direct hints, the relay connector will
            # only be stalled for 0 seconds
            self.assertEqual(self._connectors, [])

            clock.advance(0)
            self.assertEqual(self._connectors, ["relay"])

            self._waiters[0].callback("winner")
            self.assertEqual(self.successResultOf(d), "winner")

    @inlineCallbacks
    def test_no_contenders(self):
        clock = task.Clock()
        s = transit.TransitSender("", reactor=clock, no_listen=True)
        s.set_transit_key(b"key")
        hints = yield s.get_connection_hints()  # start the listener
        del hints
        s.add_connection_hints([])  # no hints at all
        s._start_connector = self._start_connector

        with mock.patch("wormhole.transit.endpoint_from_hint_obj",
                        self._endpoint_from_hint_obj):
            d = s.connect()
            f = self.failureResultOf(d, transit.TransitError)
            self.assertEqual(str(f.value), "No contenders for connection")
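
# Editor's note (behavior pinned down by the Transit tests above,
# stated as a sketch): direct hints are tried immediately; relay hints
# are stalled for RELAY_DELAY seconds and then started in descending
# priority order, one priority tier per further RELAY_DELAY. When no
# usable direct hints exist, the relay stall drops to zero.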

class RelayHandshake(unittest.TestCase):
    def old_build_relay_handshake(self, key):
        token = HKDF(key, 32, CTXinfo=b"transit_relay_token")
        return (token, b"please relay " + hexlify(token) + b"\n")

    def test_old(self):
        key = b"\x00"
        token, old_handshake = self.old_build_relay_handshake(key)
        tc = transit_server.TransitConnection()
        tc.factory = mock.Mock()
        tc.factory.connection_got_token = mock.Mock()
        tc.dataReceived(old_handshake[:-1])
        self.assertEqual(tc.factory.connection_got_token.mock_calls, [])
        tc.dataReceived(old_handshake[-1:])
        self.assertEqual(tc.factory.connection_got_token.mock_calls,
                         [mock.call(hexlify(token), None, tc)])

    def test_new(self):
        c = transit.Common(None)
        c.set_transit_key(b"\x00")
        new_handshake = c._build_relay_handshake()
        token, old_handshake = self.old_build_relay_handshake(b"\x00")

        tc = transit_server.TransitConnection()
        tc.factory = mock.Mock()
        tc.factory.connection_got_token = mock.Mock()
        tc.dataReceived(new_handshake[:-1])
        self.assertEqual(tc.factory.connection_got_token.mock_calls, [])
        tc.dataReceived(new_handshake[-1:])
        self.assertEqual(
            tc.factory.connection_got_token.mock_calls,
            [mock.call(hexlify(token), c._side.encode("ascii"), tc)])


class Full(ServerBase, unittest.TestCase):
    def doBoth(self, d1, d2):
        return gatherResults([d1, d2], True)

    @inlineCallbacks
    def test_direct(self):
        KEY = b"k" * 32
        s = transit.TransitSender(None)
        r = transit.TransitReceiver(None)

        s.set_transit_key(KEY)
        r.set_transit_key(KEY)

        # TODO: this sometimes fails with EADDRINUSE
        shints = yield s.get_connection_hints()
        rhints = yield r.get_connection_hints()

        s.add_connection_hints(rhints)
        r.add_connection_hints(shints)

        (x, y) = yield self.doBoth(s.connect(), r.connect())
        self.assertIsInstance(x, transit.Connection)
        self.assertIsInstance(y, transit.Connection)

        d = y.receive_record()

        x.send_record(b"record1")
        r = yield d
        self.assertEqual(r, b"record1")

        yield x.close()
        yield y.close()

    @inlineCallbacks
    def test_relay(self):
        KEY = b"k" * 32
        s = transit.TransitSender(self.transit, no_listen=True)
        r = transit.TransitReceiver(self.transit, no_listen=True)

        s.set_transit_key(KEY)
        r.set_transit_key(KEY)

        shints = yield s.get_connection_hints()
        rhints = yield r.get_connection_hints()

        s.add_connection_hints(rhints)
        r.add_connection_hints(shints)

        (x, y) = yield self.doBoth(s.connect(), r.connect())
        self.assertIsInstance(x, transit.Connection)
        self.assertIsInstance(y, transit.Connection)

        d = y.receive_record()

        x.send_record(b"record1")
        r = yield d
        self.assertEqual(r, b"record1")

        yield x.close()
        yield y.close()
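
# Illustrative sketch (editor's addition): the legacy relay-handshake
# format that RelayHandshake.test_old/test_new above compare against,
# restated as a standalone helper: an HKDF-derived token announced as
# "please relay <hex-token>\n".
def _example_relay_handshake(key):  # pragma: no cover
    token = HKDF(key, 32, CTXinfo=b"transit_relay_token")
    return b"please relay " + hexlify(token) + b"\n"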
import util class Utils(unittest.TestCase): def test_to_bytes(self): b = util.to_bytes("abc") self.assertIsInstance(b, type(b"")) self.assertEqual(b, b"abc") A = unicodedata.lookup("LATIN SMALL LETTER A WITH DIAERESIS") b = util.to_bytes(A + "bc") self.assertIsInstance(b, type(b"")) self.assertEqual(b, b"\xc3\xa4\x62\x63") def test_bytes_to_hexstr(self): b = b"\x00\x45\x91\xfe\xff" hexstr = util.bytes_to_hexstr(b) self.assertIsInstance(hexstr, type("")) self.assertEqual(hexstr, "004591feff") def test_hexstr_to_bytes(self): hexstr = "004591feff" b = util.hexstr_to_bytes(hexstr) hexstr = util.bytes_to_hexstr(b) self.assertIsInstance(b, type(b"")) self.assertEqual(b, b"\x00\x45\x91\xfe\xff") def test_dict_to_bytes(self): d = {"a": "b"} b = util.dict_to_bytes(d) self.assertIsInstance(b, type(b"")) self.assertEqual(b, b'{"a": "b"}') def test_bytes_to_dict(self): b = b'{"a": "b", "c": 2}' d = util.bytes_to_dict(b) self.assertIsInstance(d, dict) self.assertEqual(d, {"a": "b", "c": 2}) class Space(unittest.TestCase): def test_free_space(self): free = util.estimate_free_space(".") self.assert_( isinstance(free, six.integer_types + (type(None), )), repr(free)) # some platforms (I think the VMs used by travis are in this # category) return 0, and windows will return None, so don't assert # anything more specific about the return value def test_no_statvfs(self): # this mock.patch fails on windows, which is sad because windows is # the one platform that the code under test was supposed to help with try: with mock.patch("os.statvfs", side_effect=AttributeError()): self.assertEqual(util.estimate_free_space("."), None) except AttributeError: # raised by mock.get_original() pass magic-wormhole-0.12.0/src/wormhole/test/test_wordlist.py000066400000000000000000000027031400712516500233710ustar00rootroot00000000000000from __future__ import print_function, unicode_literals from twisted.trial import unittest import mock from .._wordlist import PGPWordList class Completions(unittest.TestCase): def test_completions(self): wl = PGPWordList() gc = wl.get_completions self.assertEqual(gc("ar", 2), {"armistice-", "article-"}) self.assertEqual(gc("armis", 2), {"armistice-"}) self.assertEqual(gc("armistice", 2), {"armistice-"}) lots = gc("armistice-", 2) self.assertEqual(len(lots), 256, lots) first = list(lots)[0] self.assert_(first.startswith("armistice-"), first) self.assertEqual( gc("armistice-ba", 2), { "armistice-baboon", "armistice-backfield", "armistice-backward", "armistice-banjo" }) self.assertEqual( gc("armistice-ba", 3), { "armistice-baboon-", "armistice-backfield-", "armistice-backward-", "armistice-banjo-" }) self.assertEqual(gc("armistice-baboon", 2), {"armistice-baboon"}) self.assertEqual(gc("armistice-baboon", 3), {"armistice-baboon-"}) self.assertEqual(gc("armistice-baboon", 4), {"armistice-baboon-"}) class Choose(unittest.TestCase): def test_choose_words(self): wl = PGPWordList() with mock.patch("os.urandom", side_effect=[b"\x04", b"\x10"]): self.assertEqual(wl.choose_words(2), "alkali-assume") magic-wormhole-0.12.0/src/wormhole/test/test_wormhole.py000066400000000000000000000723031400712516500233610ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import io import re from twisted.internet import reactor from twisted.internet.defer import gatherResults, inlineCallbacks, returnValue from twisted.internet.error import ConnectionRefusedError from twisted.trial import unittest import mock from .. 
import _rendezvous, wormhole from ..errors import (KeyFormatError, LonelyError, NoKeyError, OnlyOneCodeError, ServerConnectionError, WormholeClosed, WrongPasswordError) from ..eventual import EventualQueue from ..transit import allocate_tcp_port from .common import ServerBase, poll_until APPID = "appid" # event orderings to exercise: # # * normal sender: set_code, send_phase1, connected, claimed, learn_msg2, # learn_phase1 # * normal receiver (argv[2]=code): set_code, connected, learn_msg1, # learn_phase1, send_phase1, # * normal receiver (readline): connected, input_code # * # * set_code, then connected # * connected, receive_pake, send_phase, set_code class Delegate: def __init__(self): self.welcome = None self.code = None self.key = None self.verifier = None self.versions = None self.messages = [] self.closed = None def wormhole_got_welcome(self, welcome): self.welcome = welcome def wormhole_got_code(self, code): self.code = code def wormhole_got_unverified_key(self, key): self.key = key def wormhole_got_verifier(self, verifier): self.verifier = verifier def wormhole_got_versions(self, versions): self.versions = versions def wormhole_got_message(self, data): self.messages.append(data) def wormhole_closed(self, result): self.closed = result class Delegated(ServerBase, unittest.TestCase): @inlineCallbacks def test_delegated(self): dg = Delegate() w1 = wormhole.create(APPID, self.relayurl, reactor, delegate=dg) # w1.debug_set_trace("W1") with self.assertRaises(NoKeyError): w1.derive_key("purpose", 12) w1.set_code("1-abc") self.assertEqual(dg.code, "1-abc") w2 = wormhole.create(APPID, self.relayurl, reactor) w2.set_code(dg.code) yield poll_until(lambda: dg.key is not None) yield poll_until(lambda: dg.verifier is not None) yield poll_until(lambda: dg.versions is not None) w1.send_message(b"ping") got = yield w2.get_message() self.assertEqual(got, b"ping") w2.send_message(b"pong") yield poll_until(lambda: dg.messages) self.assertEqual(dg.messages[0], b"pong") key1 = w1.derive_key("purpose", 16) self.assertEqual(len(key1), 16) self.assertEqual(type(key1), type(b"")) with self.assertRaises(TypeError): w1.derive_key(b"not unicode", 16) with self.assertRaises(TypeError): w1.derive_key(12345, 16) w1.close() yield w2.close() @inlineCallbacks def test_allocate_code(self): dg = Delegate() w1 = wormhole.create(APPID, self.relayurl, reactor, delegate=dg) w1.allocate_code() yield poll_until(lambda: dg.code is not None) w1.close() @inlineCallbacks def test_input_code(self): dg = Delegate() w1 = wormhole.create(APPID, self.relayurl, reactor, delegate=dg) h = w1.input_code() h.choose_nameplate("123") h.choose_words("purple-elephant") yield poll_until(lambda: dg.code is not None) w1.close() class Wormholes(ServerBase, unittest.TestCase): # integration test, with a real server @inlineCallbacks def setUp(self): # test_welcome wants to see [current_cli_version] yield self._setup_relay(None, advertise_version="advertised.version") def doBoth(self, d1, d2): return gatherResults([d1, d2], True) @inlineCallbacks def test_allocate_default(self): w1 = wormhole.create(APPID, self.relayurl, reactor) w1.allocate_code() code = yield w1.get_code() mo = re.search(r"^\d+-\w+-\w+$", code) self.assert_(mo, code) # w.close() fails because we closed before connecting yield self.assertFailure(w1.close(), LonelyError) @inlineCallbacks def test_allocate_more_words(self): w1 = wormhole.create(APPID, self.relayurl, reactor) w1.allocate_code(3) code = yield w1.get_code() mo = re.search(r"^\d+-\w+-\w+-\w+$", code) self.assert_(mo, 
code) yield self.assertFailure(w1.close(), LonelyError) @inlineCallbacks def test_basic(self): w1 = wormhole.create(APPID, self.relayurl, reactor) # w1.debug_set_trace("W1") with self.assertRaises(NoKeyError): w1.derive_key("purpose", 12) w2 = wormhole.create(APPID, self.relayurl, reactor) # w2.debug_set_trace(" W2") w1.allocate_code() code = yield w1.get_code() w2.set_code(code) yield w1.get_unverified_key() yield w2.get_unverified_key() key1 = w1.derive_key("purpose", 16) self.assertEqual(len(key1), 16) self.assertEqual(type(key1), type(b"")) with self.assertRaises(TypeError): w1.derive_key(b"not unicode", 16) with self.assertRaises(TypeError): w1.derive_key(12345, 16) verifier1 = yield w1.get_verifier() verifier2 = yield w2.get_verifier() self.assertEqual(verifier1, verifier2) versions1 = yield w1.get_versions() versions2 = yield w2.get_versions() # app-versions are exercised properly in test_versions, this just # tests the defaults self.assertEqual(versions1, {}) self.assertEqual(versions2, {}) w1.send_message(b"data1") w2.send_message(b"data2") dataX = yield w1.get_message() dataY = yield w2.get_message() self.assertEqual(dataX, b"data2") self.assertEqual(dataY, b"data1") versions1_again = yield w1.get_versions() self.assertEqual(versions1, versions1_again) c1 = yield w1.close() self.assertEqual(c1, "happy") c2 = yield w2.close() self.assertEqual(c2, "happy") @inlineCallbacks def test_get_code_early(self): eq = EventualQueue(reactor) w1 = wormhole.create(APPID, self.relayurl, reactor, _eventual_queue=eq) d = w1.get_code() w1.set_code("1-abc") yield eq.flush() code = self.successResultOf(d) self.assertEqual(code, "1-abc") yield self.assertFailure(w1.close(), LonelyError) @inlineCallbacks def test_get_code_late(self): eq = EventualQueue(reactor) w1 = wormhole.create(APPID, self.relayurl, reactor, _eventual_queue=eq) w1.set_code("1-abc") d = w1.get_code() yield eq.flush() code = self.successResultOf(d) self.assertEqual(code, "1-abc") yield self.assertFailure(w1.close(), LonelyError) @inlineCallbacks def test_same_message(self): # the two sides use random nonces for their messages, so it's ok for # both to try and send the same body: they'll result in distinct # encrypted messages w1 = wormhole.create(APPID, self.relayurl, reactor) w2 = wormhole.create(APPID, self.relayurl, reactor) w1.allocate_code() code = yield w1.get_code() w2.set_code(code) w1.send_message(b"data") w2.send_message(b"data") dataX = yield w1.get_message() dataY = yield w2.get_message() self.assertEqual(dataX, b"data") self.assertEqual(dataY, b"data") yield w1.close() yield w2.close() @inlineCallbacks def test_interleaved(self): w1 = wormhole.create(APPID, self.relayurl, reactor) w2 = wormhole.create(APPID, self.relayurl, reactor) w1.allocate_code() code = yield w1.get_code() w2.set_code(code) w1.send_message(b"data1") dataY = yield w2.get_message() self.assertEqual(dataY, b"data1") d = w1.get_message() w2.send_message(b"data2") dataX = yield d self.assertEqual(dataX, b"data2") yield w1.close() yield w2.close() @inlineCallbacks def test_unidirectional(self): w1 = wormhole.create(APPID, self.relayurl, reactor) w2 = wormhole.create(APPID, self.relayurl, reactor) w1.allocate_code() code = yield w1.get_code() w2.set_code(code) w1.send_message(b"data1") dataY = yield w2.get_message() self.assertEqual(dataY, b"data1") yield w1.close() yield w2.close() @inlineCallbacks def test_early(self): w1 = wormhole.create(APPID, self.relayurl, reactor) w1.send_message(b"data1") w2 = wormhole.create(APPID, self.relayurl, reactor) d = 
w2.get_message() w1.set_code("123-abc-def") w2.set_code("123-abc-def") dataY = yield d self.assertEqual(dataY, b"data1") yield w1.close() yield w2.close() @inlineCallbacks def test_fixed_code(self): w1 = wormhole.create(APPID, self.relayurl, reactor) w2 = wormhole.create(APPID, self.relayurl, reactor) w1.set_code("123-purple-elephant") w2.set_code("123-purple-elephant") w1.send_message(b"data1"), w2.send_message(b"data2") dl = yield self.doBoth(w1.get_message(), w2.get_message()) (dataX, dataY) = dl self.assertEqual(dataX, b"data2") self.assertEqual(dataY, b"data1") yield w1.close() yield w2.close() @inlineCallbacks def test_input_code(self): w1 = wormhole.create(APPID, self.relayurl, reactor) w2 = wormhole.create(APPID, self.relayurl, reactor) w1.set_code("123-purple-elephant") h = w2.input_code() h.choose_nameplate("123") # Pause to allow some messages to get delivered. Specifically we want # to wait until w2 claims the nameplate, opens the mailbox, and # receives the PAKE message, to exercise the PAKE-before-CODE path in # Key. yield poll_until(lambda: w2._boss._K._debug_pake_stashed) h.choose_words("purple-elephant") w1.send_message(b"data1"), w2.send_message(b"data2") dl = yield self.doBoth(w1.get_message(), w2.get_message()) (dataX, dataY) = dl self.assertEqual(dataX, b"data2") self.assertEqual(dataY, b"data1") yield w1.close() yield w2.close() @inlineCallbacks def test_multiple_messages(self): w1 = wormhole.create(APPID, self.relayurl, reactor) w2 = wormhole.create(APPID, self.relayurl, reactor) w1.set_code("123-purple-elephant") w2.set_code("123-purple-elephant") w1.send_message(b"data1"), w2.send_message(b"data2") w1.send_message(b"data3"), w2.send_message(b"data4") dl = yield self.doBoth(w1.get_message(), w2.get_message()) (dataX, dataY) = dl self.assertEqual(dataX, b"data2") self.assertEqual(dataY, b"data1") dl = yield self.doBoth(w1.get_message(), w2.get_message()) (dataX, dataY) = dl self.assertEqual(dataX, b"data4") self.assertEqual(dataY, b"data3") yield w1.close() yield w2.close() @inlineCallbacks def test_closed(self): eq = EventualQueue(reactor) w1 = wormhole.create(APPID, self.relayurl, reactor, _eventual_queue=eq) w2 = wormhole.create(APPID, self.relayurl, reactor, _eventual_queue=eq) w1.set_code("123-foo") w2.set_code("123-foo") # let it connect and become HAPPY yield w1.get_versions() yield w2.get_versions() yield w1.close() yield w2.close() # once closed, all Deferred-yielding API calls get a prompt error yield self.assertFailure(w1.get_welcome(), WormholeClosed) e = yield self.assertFailure(w1.get_code(), WormholeClosed) self.assertEqual(e.args[0], "happy") yield self.assertFailure(w1.get_unverified_key(), WormholeClosed) yield self.assertFailure(w1.get_verifier(), WormholeClosed) yield self.assertFailure(w1.get_versions(), WormholeClosed) yield self.assertFailure(w1.get_message(), WormholeClosed) @inlineCallbacks def test_closed_idle(self): yield self._relay_server.disownServiceParent() w1 = wormhole.create(APPID, self.relayurl, reactor) # without a relay server, this won't ever connect d_welcome = w1.get_welcome() self.assertNoResult(d_welcome) d_code = w1.get_code() d_key = w1.get_unverified_key() d_verifier = w1.get_verifier() d_versions = w1.get_versions() d_message = w1.get_message() yield self.assertFailure(w1.close(), LonelyError) yield self.assertFailure(d_welcome, LonelyError) yield self.assertFailure(d_code, LonelyError) yield self.assertFailure(d_key, LonelyError) yield self.assertFailure(d_verifier, LonelyError) yield self.assertFailure(d_versions, 
LonelyError) yield self.assertFailure(d_message, LonelyError) @inlineCallbacks def test_wrong_password(self): eq = EventualQueue(reactor) w1 = wormhole.create(APPID, self.relayurl, reactor, _eventual_queue=eq) w2 = wormhole.create(APPID, self.relayurl, reactor, _eventual_queue=eq) w1.allocate_code() code = yield w1.get_code() w2.set_code(code + "not") code2 = yield w2.get_code() self.assertNotEqual(code, code2) # That's enough to allow both sides to discover the mismatch, but # only after the confirmation message gets through. API calls that # don't wait will appear to work until the mismatched confirmation # message arrives. w1.send_message(b"should still work") w2.send_message(b"should still work") key2 = yield w2.get_unverified_key() # should work # w2 has just received w1.PAKE, and is about to send w2.VERSION key1 = yield w1.get_unverified_key() # should work # w1 has just received w2.PAKE, and is about to send w1.VERSION, and # then will receive w2.VERSION. When it sees w2.VERSION, it will # learn about the WrongPasswordError. self.assertNotEqual(key1, key2) # API calls that wait (i.e. get) will errback. We collect all these # Deferreds early to exercise the wait-then-fail path d1_verified = w1.get_verifier() d1_versions = w1.get_versions() d1_received = w1.get_message() d2_verified = w2.get_verifier() d2_versions = w2.get_versions() d2_received = w2.get_message() # wait for each side to notice the failure yield self.assertFailure(w1.get_verifier(), WrongPasswordError) yield self.assertFailure(w2.get_verifier(), WrongPasswordError) # the rest of the loops should fire within the next tick yield eq.flush() # now all the rest should have fired already self.failureResultOf(d1_verified, WrongPasswordError) self.failureResultOf(d1_versions, WrongPasswordError) self.failureResultOf(d1_received, WrongPasswordError) self.failureResultOf(d2_verified, WrongPasswordError) self.failureResultOf(d2_versions, WrongPasswordError) self.failureResultOf(d2_received, WrongPasswordError) # and at this point, with the failure safely noticed by both sides, # new get_unverified_key() calls should signal the failure, even # before we close # any new calls in the error state should immediately fail yield self.assertFailure(w1.get_unverified_key(), WrongPasswordError) yield self.assertFailure(w1.get_verifier(), WrongPasswordError) yield self.assertFailure(w1.get_versions(), WrongPasswordError) yield self.assertFailure(w1.get_message(), WrongPasswordError) yield self.assertFailure(w2.get_unverified_key(), WrongPasswordError) yield self.assertFailure(w2.get_verifier(), WrongPasswordError) yield self.assertFailure(w2.get_versions(), WrongPasswordError) yield self.assertFailure(w2.get_message(), WrongPasswordError) yield self.assertFailure(w1.close(), WrongPasswordError) yield self.assertFailure(w2.close(), WrongPasswordError) # API calls should still get the error, not WormholeClosed yield self.assertFailure(w1.get_unverified_key(), WrongPasswordError) yield self.assertFailure(w1.get_verifier(), WrongPasswordError) yield self.assertFailure(w1.get_versions(), WrongPasswordError) yield self.assertFailure(w1.get_message(), WrongPasswordError) yield self.assertFailure(w2.get_unverified_key(), WrongPasswordError) yield self.assertFailure(w2.get_verifier(), WrongPasswordError) yield self.assertFailure(w2.get_versions(), WrongPasswordError) yield self.assertFailure(w2.get_message(), WrongPasswordError) @inlineCallbacks def test_wrong_password_with_spaces(self): w = wormhole.create(APPID, self.relayurl, reactor) badcode = 
"4 oops spaces" with self.assertRaises(KeyFormatError) as ex: w.set_code(badcode) expected_msg = "Code '%s' contains spaces." % (badcode, ) self.assertEqual(expected_msg, str(ex.exception)) yield self.assertFailure(w.close(), LonelyError) @inlineCallbacks def test_wrong_password_with_leading_space(self): w = wormhole.create(APPID, self.relayurl, reactor) badcode = " 4-oops-space" with self.assertRaises(KeyFormatError) as ex: w.set_code(badcode) expected_msg = "Code '%s' contains spaces." % (badcode, ) self.assertEqual(expected_msg, str(ex.exception)) yield self.assertFailure(w.close(), LonelyError) @inlineCallbacks def test_wrong_password_with_non_numeric_nameplate(self): w = wormhole.create(APPID, self.relayurl, reactor) badcode = "four-oops-space" with self.assertRaises(KeyFormatError) as ex: w.set_code(badcode) expected_msg = "Nameplate 'four' must be numeric, with no spaces." self.assertEqual(expected_msg, str(ex.exception)) yield self.assertFailure(w.close(), LonelyError) @inlineCallbacks def test_welcome(self): w1 = wormhole.create(APPID, self.relayurl, reactor) wel1 = yield w1.get_welcome() # early: before connection established wel2 = yield w1.get_welcome() # late: already received welcome self.assertEqual(wel1, wel2) self.assertIn("current_cli_version", wel1) # cause an error, so a later get_welcome will return the error w1.set_code("123-foo") w2 = wormhole.create(APPID, self.relayurl, reactor) w2.set_code("123-NOT") yield self.assertFailure(w1.get_verifier(), WrongPasswordError) yield self.assertFailure(w1.get_welcome(), WrongPasswordError) # late yield self.assertFailure(w1.close(), WrongPasswordError) yield self.assertFailure(w2.close(), WrongPasswordError) @inlineCallbacks def test_verifier(self): eq = EventualQueue(reactor) w1 = wormhole.create(APPID, self.relayurl, reactor, _eventual_queue=eq) w2 = wormhole.create(APPID, self.relayurl, reactor, _eventual_queue=eq) w1.allocate_code() code = yield w1.get_code() w2.set_code(code) v1 = yield w1.get_verifier() # early v2 = yield w2.get_verifier() self.failUnlessEqual(type(v1), type(b"")) self.failUnlessEqual(v1, v2) w1.send_message(b"data1") w2.send_message(b"data2") dataX = yield w1.get_message() dataY = yield w2.get_message() self.assertEqual(dataX, b"data2") self.assertEqual(dataY, b"data1") # calling get_verifier() this late should fire right away d = w2.get_verifier() yield eq.flush() v1_late = self.successResultOf(d) self.assertEqual(v1_late, v1) yield w1.close() yield w2.close() @inlineCallbacks def test_versions(self): # there's no API for this yet, but make sure the internals work w1 = wormhole.create( APPID, self.relayurl, reactor, versions={"w1": 123}) w2 = wormhole.create( APPID, self.relayurl, reactor, versions={"w2": 456}) w1.allocate_code() code = yield w1.get_code() w2.set_code(code) w1_versions = yield w2.get_versions() self.assertEqual(w1_versions, {"w1": 123}) w2_versions = yield w1.get_versions() self.assertEqual(w2_versions, {"w2": 456}) yield w1.close() yield w2.close() @inlineCallbacks def test_rx_dedup(self): # Future clients will handle losing/reestablishing the Rendezvous # Server connection by retransmitting messages, which will sometimes # cause duplicate messages. Make sure this client can tolerate them. # The first place this would fail was when the second copy of the # incoming PAKE message was received, which would cause # SPAKE2.finish() to be called a second time, which throws an error # (which, being somewhat unexpected, caused a hang rather than a # clear exception). 
The Mailbox object is responsible for # deduplication, so we must patch the RendezvousConnector to simulate # duplicated messages. with mock.patch("wormhole._boss.RendezvousConnector", MessageDoubler): w1 = wormhole.create(APPID, self.relayurl, reactor) w2 = wormhole.create(APPID, self.relayurl, reactor) w1.set_code("123-purple-elephant") w2.set_code("123-purple-elephant") w1.send_message(b"data1"), w2.send_message(b"data2") dl = yield self.doBoth(w1.get_message(), w2.get_message()) (dataX, dataY) = dl self.assertEqual(dataX, b"data2") self.assertEqual(dataY, b"data1") yield w1.close() yield w2.close() class MessageDoubler(_rendezvous.RendezvousConnector): # we could double messages on the sending side, but a future server will # strip those duplicates, so to really exercise the receiver, we must # double them on the inbound side instead # def _msg_send(self, phase, body): # wormhole._Wormhole._msg_send(self, phase, body) # self._ws_send_command("add", phase=phase, body=bytes_to_hexstr(body)) def _response_handle_message(self, msg): _rendezvous.RendezvousConnector._response_handle_message(self, msg) _rendezvous.RendezvousConnector._response_handle_message(self, msg) class Errors(ServerBase, unittest.TestCase): @inlineCallbacks def test_derive_key_early(self): w = wormhole.create(APPID, self.relayurl, reactor) # definitely too early with self.assertRaises(NoKeyError): w.derive_key("purpose", 12) yield self.assertFailure(w.close(), LonelyError) @inlineCallbacks def test_multiple_set_code(self): w = wormhole.create(APPID, self.relayurl, reactor) w.set_code("123-purple-elephant") # code can only be set once with self.assertRaises(OnlyOneCodeError): w.set_code("123-nope") yield self.assertFailure(w.close(), LonelyError) @inlineCallbacks def test_allocate_and_set_code(self): w = wormhole.create(APPID, self.relayurl, reactor) w.allocate_code() yield w.get_code() with self.assertRaises(OnlyOneCodeError): w.set_code("123-nope") yield self.assertFailure(w.close(), LonelyError) class Reconnection(ServerBase, unittest.TestCase): @inlineCallbacks def test_basic(self): w1 = wormhole.create(APPID, self.relayurl, reactor) w1_in = [] w1._boss._RC._debug_record_inbound_f = w1_in.append # w1.debug_set_trace("W1") w1.allocate_code() code = yield w1.get_code() w1.send_message(b"data1") # queued until wormhole is established # now wait until we've deposited all our messages on the server def seen_our_pake(): for m in w1_in: if m["type"] == "message" and m["phase"] == "pake": return True return False yield poll_until(seen_our_pake) w1_in[:] = [] # drop the connection w1._boss._RC._ws.transport.loseConnection() # wait for it to reconnect and redeliver all the messages. The server # sends mtype=message messages in random order, but we've only sent # one of them, so it's safe to wait for just the PAKE phase. yield poll_until(seen_our_pake) # now let the second side proceed. 
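# (aside) poll_until(), imported from .common, is the helper these tests
# rely on: it re-evaluates a zero-argument predicate on successive
# reactor turns until it returns truthy. A sketch of that contract (a
# hypothetical implementation; the real one lives in test/common.py):
#
#     @inlineCallbacks
#     def poll_until(predicate):
#         while not predicate():
#             yield deferLater(reactor, 0.05, lambda: None)
#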
this simulates the most common # case: the server is bounced while the sender is waiting, before the # receiver has started w2 = wormhole.create(APPID, self.relayurl, reactor) # w2.debug_set_trace(" W2") w2.set_code(code) dataY = yield w2.get_message() self.assertEqual(dataY, b"data1") w2.send_message(b"data2") dataX = yield w1.get_message() self.assertEqual(dataX, b"data2") c1 = yield w1.close() self.assertEqual(c1, "happy") c2 = yield w2.close() self.assertEqual(c2, "happy") class InitialFailure(unittest.TestCase): @inlineCallbacks def assertSCEFailure(self, eq, d, innerType): yield eq.flush() f = self.failureResultOf(d, ServerConnectionError) inner = f.value.reason self.assertIsInstance(inner, innerType) returnValue(inner) @inlineCallbacks def test_bad_dns(self): eq = EventualQueue(reactor) # point at a URL that will never connect w = wormhole.create( APPID, "ws://%%%.example.org:4000/v1", reactor, _eventual_queue=eq) # that should have already received an error, when it tried to # resolve the bogus DNS name. All API calls will return an error. e = yield self.assertSCEFailure(eq, w.get_unverified_key(), ValueError) self.assertIsInstance(e, ValueError) self.assertEqual(str(e), "invalid hostname: %%%.example.org") yield self.assertSCEFailure(eq, w.get_code(), ValueError) yield self.assertSCEFailure(eq, w.get_verifier(), ValueError) yield self.assertSCEFailure(eq, w.get_versions(), ValueError) yield self.assertSCEFailure(eq, w.get_message(), ValueError) @inlineCallbacks def assertSCE(self, d, innerType): e = yield self.assertFailure(d, ServerConnectionError) inner = e.reason self.assertIsInstance(inner, innerType) returnValue(inner) @inlineCallbacks def test_no_connection(self): # point at a URL that will never connect port = allocate_tcp_port() w = wormhole.create(APPID, "ws://127.0.0.1:%d/v1" % port, reactor) # nothing is listening, but it will take a turn to discover that d1 = w.get_code() d2 = w.get_unverified_key() d3 = w.get_verifier() d4 = w.get_versions() d5 = w.get_message() yield self.assertSCE(d1, ConnectionRefusedError) yield self.assertSCE(d2, ConnectionRefusedError) yield self.assertSCE(d3, ConnectionRefusedError) yield self.assertSCE(d4, ConnectionRefusedError) yield self.assertSCE(d5, ConnectionRefusedError) @inlineCallbacks def test_all_deferreds(self): # point at a URL that will never connect port = allocate_tcp_port() w = wormhole.create(APPID, "ws://127.0.0.1:%d/v1" % port, reactor) # nothing is listening, but it will take a turn to discover that w.allocate_code() d1 = w.get_code() d2 = w.get_unverified_key() d3 = w.get_verifier() d4 = w.get_versions() d5 = w.get_message() yield self.assertSCE(d1, ConnectionRefusedError) yield self.assertSCE(d2, ConnectionRefusedError) yield self.assertSCE(d3, ConnectionRefusedError) yield self.assertSCE(d4, ConnectionRefusedError) yield self.assertSCE(d5, ConnectionRefusedError) class Trace(unittest.TestCase): def test_basic(self): w1 = wormhole.create(APPID, "ws://localhost:1", reactor) stderr = io.StringIO() w1.debug_set_trace("W1", file=stderr) # if Automat doesn't have the tracing API, then we won't actually # exercise the tracing function, so exercise the RendezvousConnector # function manually (it isn't a state machine, so it will always wire # up the tracer) w1._boss._RC._debug("what") stderr = io.StringIO() out = w1._boss._print_trace("OLD", "IN", "NEW", "C1", "M1", stderr) self.assertEqual(stderr.getvalue().splitlines(), ["C1.M1[OLD].IN -> [NEW]"]) out("OUT1") self.assertEqual(stderr.getvalue().splitlines(), ["C1.M1[OLD].IN -> 
[NEW]", " C1.M1.OUT1()"]) w1._boss._print_trace("", "R.connected", "", "C1", "RC1", stderr) self.assertEqual( stderr.getvalue().splitlines(), ["C1.M1[OLD].IN -> [NEW]", " C1.M1.OUT1()", "C1.RC1.R.connected"]) def test_delegated(self): dg = Delegate() w1 = wormhole.create(APPID, "ws://localhost:1", reactor, delegate=dg) stderr = io.StringIO() w1.debug_set_trace("W1", file=stderr) w1._boss._RC._debug("what") magic-wormhole-0.12.0/src/wormhole/test/test_xfer_util.py000066400000000000000000000034761400712516500235330ustar00rootroot00000000000000from twisted.internet import defer, reactor from twisted.internet.defer import inlineCallbacks from twisted.trial import unittest from .. import xfer_util from .common import ServerBase APPID = u"appid" class Xfer(ServerBase, unittest.TestCase): @inlineCallbacks def test_xfer(self): code = u"1-code" data = u"data" d1 = xfer_util.send(reactor, APPID, self.relayurl, data, code) d2 = xfer_util.receive(reactor, APPID, self.relayurl, code) send_result = yield d1 receive_result = yield d2 self.assertEqual(send_result, None) self.assertEqual(receive_result, data) @inlineCallbacks def test_on_code(self): code = u"1-code" data = u"data" send_code = [] receive_code = [] d1 = xfer_util.send( reactor, APPID, self.relayurl, data, code, on_code=send_code.append) d2 = xfer_util.receive( reactor, APPID, self.relayurl, code, on_code=receive_code.append) send_result = yield d1 receive_result = yield d2 self.assertEqual(send_code, [code]) self.assertEqual(receive_code, [code]) self.assertEqual(send_result, None) self.assertEqual(receive_result, data) @inlineCallbacks def test_make_code(self): data = u"data" got_code = defer.Deferred() d1 = xfer_util.send( reactor, APPID, self.relayurl, data, code=None, on_code=got_code.callback) code = yield got_code d2 = xfer_util.receive(reactor, APPID, self.relayurl, code) send_result = yield d1 receive_result = yield d2 self.assertEqual(send_result, None) self.assertEqual(receive_result, data) magic-wormhole-0.12.0/src/wormhole/timing.py000066400000000000000000000041141400712516500207710ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals import json import time from zope.interface import implementer from ._interfaces import ITiming class Event: def __init__(self, name, when, **details): # data fields that will be dumped to JSON later self._name = name self._start = time.time() if when is None else float(when) self._stop = None self._details = details def detail(self, **details): self._details.update(details) def finish(self, when=None, **details): self._stop = time.time() if when is None else float(when) self.detail(**details) def __enter__(self): return self def __exit__(self, exc_type, exc_value, exc_tb): if exc_type: # inlineCallbacks uses a special exception (defer._DefGen_Return) # to deliver returnValue(), so if returnValue is used inside our # with: block, we'll mistakenly think it means something broke. # I've moved all returnValue() calls outside the 'with # timing.add()' blocks to avoid this, but if a new one # accidentally pops up, it'll get marked as an error. 
I used to # catch-and-release _DefGen_Return to avoid this, but removed it # because it requires referencing defer.py's private class self.finish(exception=str(exc_type)) else: self.finish() @implementer(ITiming) class DebugTiming: def __init__(self): self._events = [] def add(self, name, when=None, **details): ev = Event(name, when, **details) self._events.append(ev) return ev def write(self, fn, stderr): with open(fn, "wt") as f: data = [ dict( name=e._name, start=e._start, stop=e._stop, details=e._details, ) for e in self._events ] json.dump(data, f, indent=1) f.write("\n") print("Timing data written to %s" % fn, file=stderr) magic-wormhole-0.12.0/src/wormhole/tor_manager.py000066400000000000000000000110241400712516500217760ustar00rootroot00000000000000from __future__ import print_function, unicode_literals import sys from attr import attrib, attrs from twisted.internet.defer import inlineCallbacks, returnValue from twisted.internet.endpoints import clientFromString from zope.interface.declarations import directlyProvides from . import _interfaces, errors from .timing import DebugTiming try: import txtorcon except ImportError: txtorcon = None @attrs class SocksOnlyTor(object): _reactor = attrib() def stream_via(self, host, port, tls=False): return txtorcon.TorClientEndpoint( host, port, socks_endpoint=None, # tries localhost:9050 and 9150 tls=tls, reactor=self._reactor, ) @inlineCallbacks def get_tor(reactor, launch_tor=False, tor_control_port=None, timing=None, stderr=sys.stderr): """ If launch_tor=True, I will try to launch a new Tor process, ask it for its SOCKS and control ports, and use those for outbound connections (and inbound onion-service listeners, if necessary). Otherwise if tor_control_port is provided, I will attempt to connect to an existing Tor's control port at the endpoint it specifies. I'll ask that Tor for its SOCKS port. With no arguments, I will try to connect to an existing Tor's control port at the usual places: [unix:/var/run/tor/control, tcp:127.0.0.1:9051, tcp:127.0.0.1:9151]. If any are successful, I'll ask that Tor for its SOCKS port. If none are successful, I'll attempt to do SOCKS to the usual places: [tcp:127.0.0.1:9050, tcp:127.0.0.1:9150]. If I am unable to make a SOCKS connection, the initial connection to the Rendezvous Server will fail, and the program will terminate. Control-port connections can only succeed if I can authenticate (by reading a cookie file named by the Tor process), so the current user must have permission to read that file (either they started Tor, e.g. TorBrowser, or they are in a unix group that's been given access, e.g. debian-tor). """ # rationale: launching a new Tor takes a long time, so only do it if # the user specifically asks for it with --launch-tor. Using an # existing Tor should be much faster, but still requires general # permission via --tor. if not txtorcon: raise errors.NoTorError() if not isinstance(launch_tor, bool): # note: False is int raise TypeError("launch_tor= must be boolean") if not isinstance(tor_control_port, (type(""), type(None))): raise TypeError("tor_control_port= must be str or None") assert tor_control_port != "" if launch_tor and tor_control_port is not None: raise ValueError("cannot combine --launch-tor and --tor-control-port=") timing = timing or DebugTiming() # Connect to an existing Tor, or create a new one. If we need to # launch an onion service, then we need a working control port (and # authentication cookie). If we're only acting as a client, we don't # need the control port. 
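# Summary of the branches below (restating the docstring above):
#   launch_tor=True            -> txtorcon.launch(reactor)
#   tor_control_port="tcp:..." -> txtorcon.connect(reactor, clientFromString(...))
#   neither                    -> txtorcon.connect(reactor) at the usual
#                                 places, else fall back to SocksOnlyTor
#
# Caller sketch (hedged; assumes an @inlineCallbacks context, and the
# relay hostname is illustrative):
#     tor = yield get_tor(reactor)
#     ep = tor.stream_via("relay.example.org", 4001)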
if launch_tor: print( " launching a new Tor process, this may take a while..", file=stderr) with timing.add("launch tor"): tor = yield txtorcon.launch(reactor, # data_directory=, # tor_binary=, ) elif tor_control_port: with timing.add("find tor"): control_ep = clientFromString(reactor, tor_control_port) tor = yield txtorcon.connect(reactor, control_ep) # might raise print( " using Tor via control port at %s" % tor_control_port, file=stderr) else: # Let txtorcon look through a list of usual places. If that fails, # we'll arrange to attempt the default SOCKS port with timing.add("find tor"): try: tor = yield txtorcon.connect(reactor) print(" using Tor via default control port", file=stderr) except Exception: # TODO: make this more specific. I think connect() is # likely to throw a reactor.connectTCP -type error, like # ConnectionFailed or ConnectionRefused or something print( " unable to find default Tor control port, using SOCKS", file=stderr) tor = SocksOnlyTor(reactor) directlyProvides(tor, _interfaces.ITorManager) returnValue(tor) magic-wormhole-0.12.0/src/wormhole/transit.py000066400000000000000000001032521400712516500211710ustar00rootroot00000000000000# no unicode_literals, revisit after twisted patch from __future__ import absolute_import, print_function import os import socket import sys import time from binascii import hexlify, unhexlify from collections import deque import six from nacl.secret import SecretBox from twisted.internet import (address, defer, endpoints, error, interfaces, protocol, task) from twisted.internet.defer import inlineCallbacks, returnValue from twisted.protocols import policies from twisted.python import log from twisted.python.runtime import platformType from zope.interface import implementer from . import ipaddrs from .errors import InternalError from .timing import DebugTiming from .util import bytes_to_hexstr, HKDF from ._hints import (DirectTCPV1Hint, RelayV1Hint, parse_hint_argv, describe_hint_obj, endpoint_from_hint_obj, parse_tcp_v1_hint) class TransitError(Exception): pass class BadHandshake(Exception): pass class TransitClosed(TransitError): pass class BadNonce(TransitError): pass # The beginning of each TCP connection consists of the following handshake # messages. The sender transmits the same text regardless of whether it is on # the initiating/connecting end of the TCP connection, or on the # listening/accepting side. Same for the receiver. # # sender -> receiver: transit sender TXID_HEX ready\n\n # receiver -> sender: transit receiver RXID_HEX ready\n\n # # Any deviations from this result in the socket being closed. The handshake # messages are designed to provoke an invalid response from other sorts of # servers (HTTP, SMTP, echo). # # If the sender is satisfied with the handshake, and this is the first socket # to complete negotiation, the sender does: # # sender -> receiver: go\n # # and the next byte on the wire will be from the application. # # If this is not the first socket, the sender does: # # sender -> receiver: nevermind\n # # and closes the socket. # So the receiver looks for "transit sender TXID_HEX ready\n\ngo\n" and hangs # up upon the first wrong byte. The sender looks for "transit receiver # RXID_HEX ready\n\n" and then makes a first/not-first decision about sending # "go\n" or "nevermind\n"+close(). 
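# A worked sketch of the first bytes on the wire (an illustrative helper,
# not part of the library's API; the 64-character hex ids are computed by
# the builder functions defined just below, not hard-coded):
def _example_wire_sequence(key):
    snd = build_sender_handshake(key)    # b"transit sender <64 hex> ready\n\n"
    rcv = build_receiver_handshake(key)  # b"transit receiver <64 hex> ready\n\n"
    # after the sender's b"go\n", each record travels as a 4-byte
    # big-endian length followed by a SecretBox ciphertext with its
    # 24-byte nonce prepended (see Connection.send_record below)
    return snd, rcv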
def build_receiver_handshake(key): hexid = HKDF(key, 32, CTXinfo=b"transit_receiver") return b"transit receiver " + hexlify(hexid) + b" ready\n\n" def build_sender_handshake(key): hexid = HKDF(key, 32, CTXinfo=b"transit_sender") return b"transit sender " + hexlify(hexid) + b" ready\n\n" def build_sided_relay_handshake(key, side): assert isinstance(side, type(u"")) assert len(side) == 8 * 2 token = HKDF(key, 32, CTXinfo=b"transit_relay_token") return b"please relay " + hexlify(token) + b" for side " + side.encode( "ascii") + b"\n" TIMEOUT = 60 # seconds @implementer(interfaces.IProducer, interfaces.IConsumer) class Connection(protocol.Protocol, policies.TimeoutMixin): def __init__(self, owner, relay_handshake, start, description): self.state = "too-early" self.buf = b"" self.owner = owner self.relay_handshake = relay_handshake self.start = start self._description = description self._negotiation_d = defer.Deferred(self._cancel) self._error = None self._consumer = None self._consumer_bytes_written = 0 self._consumer_bytes_expected = None self._consumer_deferred = None self._inbound_records = deque() self._waiting_reads = deque() def connectionMade(self): self.setTimeout(TIMEOUT) # does timeoutConnection() when it expires self.factory.connectionWasMade(self) def startNegotiation(self): if self.relay_handshake is not None: self.transport.write(self.relay_handshake) self.state = "relay" else: self.state = "start" self.dataReceived(b"") # cycle the state machine return self._negotiation_d def _cancel(self, d): self.state = "hung up" # stop reacting to anything further self._error = defer.CancelledError() self.transport.loseConnection() # if connectionLost isn't called synchronously, then our # self._negotiation_d will have been errbacked by Deferred.cancel # (which is our caller). So if it's still around, clobber it if self._negotiation_d: self._negotiation_d = None def dataReceived(self, data): try: self._dataReceived(data) except Exception as e: self.setTimeout(None) self._error = e self.transport.loseConnection() self.state = "hung up" if not isinstance(e, BadHandshake): raise def _check_and_remove(self, expected): # any divergence is a handshake error if not self.buf.startswith(expected[:len(self.buf)]): raise BadHandshake("got %r want %r" % (self.buf, expected)) if len(self.buf) < len(expected): return False # keep waiting self.buf = self.buf[len(expected):] return True def _dataReceived(self, data): # protocol is: # (maybe: send relay handshake, wait for ok) # send (send|receive)_handshake # wait for (receive|send)_handshake # sender: decide, send "go" or hang up # receiver: wait for "go" self.buf += data assert self.state != "too-early" if self.state == "relay": if not self._check_and_remove(b"ok\n"): return self.state = "start" if self.state == "start": self.transport.write(self.owner._send_this()) self.state = "handshake" if self.state == "handshake": if not self._check_and_remove(self.owner._expect_this()): return self.state = self.owner.connection_ready(self) # If we're the receiver, we'll be moved to state # "wait-for-decision", which means we're waiting for the other # side (the sender) to make a decision. If we're the sender, # we'll either be moved to state "go" (send GO and move directly # to state "records") or state "nevermind" (send NEVERMIND and # hang up). 
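# (state map sketch for the negotiation steps handled below)
#   receiver: relay? -> start -> handshake -> wait-for-decision --go\n--> records
#   sender:   relay? -> start -> handshake -> go ----------------------> records
#                                         \-> nevermind -> hung up (losing sockets)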
if self.state == "wait-for-decision": if not self._check_and_remove(b"go\n"): return self._negotiationSuccessful() if self.state == "go": GO = b"go\n" self.transport.write(GO) self._negotiationSuccessful() if self.state == "nevermind": self.transport.write(b"nevermind\n") raise BadHandshake("abandoned") if self.state == "records": return self.dataReceivedRECORDS() if self.state == "hung up": return if isinstance(self.state, Exception): # for tests raise self.state raise ValueError("internal error: unknown state %s" % (self.state, )) def _negotiationSuccessful(self): self.state = "records" self.setTimeout(None) send_key = self.owner._sender_record_key() self.send_box = SecretBox(send_key) self.send_nonce = 0 receive_key = self.owner._receiver_record_key() self.receive_box = SecretBox(receive_key) self.next_receive_nonce = 0 d, self._negotiation_d = self._negotiation_d, None d.callback(self) def dataReceivedRECORDS(self): while True: if len(self.buf) < 4: return length = int(hexlify(self.buf[:4]), 16) if len(self.buf) < 4 + length: return encrypted, self.buf = self.buf[4:4 + length], self.buf[4 + length:] record = self._decrypt_record(encrypted) self.recordReceived(record) def _decrypt_record(self, encrypted): nonce_buf = encrypted[:SecretBox.NONCE_SIZE] # assume it's prepended nonce = int(hexlify(nonce_buf), 16) if nonce != self.next_receive_nonce: raise BadNonce( "received out-of-order record: got %d, expected %d" % (nonce, self.next_receive_nonce)) self.next_receive_nonce += 1 record = self.receive_box.decrypt(encrypted) return record def describe(self): return self._description def send_record(self, record): if not isinstance(record, type(b"")): raise InternalError assert SecretBox.NONCE_SIZE == 24 assert self.send_nonce < 2**(8 * 24) assert len(record) < 2**(8 * 4) nonce = unhexlify("%048x" % self.send_nonce) # big-endian self.send_nonce += 1 encrypted = self.send_box.encrypt(record, nonce) length = unhexlify("%08x" % len(encrypted)) # always 4 bytes long self.transport.write(length) self.transport.write(encrypted) def recordReceived(self, record): if self._consumer: self._writeToConsumer(record) return self._inbound_records.append(record) self._deliverRecords() def receive_record(self): d = defer.Deferred() self._waiting_reads.append(d) self._deliverRecords() return d def _deliverRecords(self): while self._inbound_records and self._waiting_reads: r = self._inbound_records.popleft() d = self._waiting_reads.popleft() d.callback(r) def close(self): self.transport.loseConnection() while self._waiting_reads: d = self._waiting_reads.popleft() d.errback(error.ConnectionClosed()) def timeoutConnection(self): self._error = BadHandshake("timeout") self.transport.loseConnection() def connectionLost(self, reason=None): self.setTimeout(None) d, self._negotiation_d = self._negotiation_d, None # the Deferred is only relevant until negotiation finishes, so skip # this if it's already been fired if d: # Each call to loseConnection() sets self._error first, so we can # deliver useful information to the Factory that's waiting on # this (although they'll generally ignore the specific error, # except for logging unexpected ones). 
The possible cases are: # # cancel: defer.CancelledError # far-end disconnect: BadHandshake("connection lost") # handshake error (something we didn't like): BadHandshake(what) # other error: some other Exception # timeout: BadHandshake("timeout") d.errback(self._error or BadHandshake("connection lost")) if self._consumer_deferred: self._consumer_deferred.errback(error.ConnectionClosed()) # IConsumer methods, for outbound flow-control. We pass these through to # the transport. The 'producer' is something like a t.p.basic.FileSender def registerProducer(self, producer, streaming): assert interfaces.IConsumer.providedBy(self.transport) self.transport.registerProducer(producer, streaming) def unregisterProducer(self): self.transport.unregisterProducer() def write(self, data): self.send_record(data) # IProducer methods, for inbound flow-control. We pass these through to # the transport. def stopProducing(self): self.transport.stopProducing() def pauseProducing(self): self.transport.pauseProducing() def resumeProducing(self): self.transport.resumeProducing() # Helper methods def connectConsumer(self, consumer, expected=None): """Helper method to glue an instance of e.g. t.p.ftp.FileConsumer to us. Inbound records will be written as bytes to the consumer. Set 'expected' to an integer to automatically disconnect when at least that number of bytes have been written. This function will then return a Deferred (that fires with the number of bytes actually received). If the connection is lost while this Deferred is outstanding, it will errback. If 'expected' is 0, the Deferred will fire right away. If 'expected' is None, then this function returns None instead of a Deferred, and you must call disconnectConsumer() when you are done.""" if self._consumer: raise RuntimeError( "A consumer is already attached: %r" % self._consumer) # be aware of an ordering hazard: when we call the consumer's # .registerProducer method, they are likely to immediately call # self.resumeProducing, which we'll deliver to self.transport, which # might call our .dataReceived, which may cause more records to be # available. By waiting to set self._consumer until *after* we drain # any pending records, we avoid delivering records out of order, # which would be bad. consumer.registerProducer(self, True) # There might be enough data queued to exceed 'expected' before we # leave this function. We must be sure to register the producer # before it gets unregistered. self._consumer = consumer self._consumer_bytes_written = 0 self._consumer_bytes_expected = expected d = None if expected is not None: d = defer.Deferred() self._consumer_deferred = d if expected == 0: # write empty record to kick consumer into shutdown self._writeToConsumer(b"") # drain any pending records while self._consumer and self._inbound_records: r = self._inbound_records.popleft() self._writeToConsumer(r) return d def _writeToConsumer(self, record): self._consumer.write(record) self._consumer_bytes_written += len(record) if self._consumer_bytes_expected is not None: if self._consumer_bytes_written >= self._consumer_bytes_expected: d = self._consumer_deferred self.disconnectConsumer() d.callback(self._consumer_bytes_written) def disconnectConsumer(self): self._consumer.unregisterProducer() self._consumer = None self._consumer_bytes_expected = None self._consumer_deferred = None # Helper method to write a known number of bytes to a file. This has no # flow control: the filehandle cannot push back. 
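# (usage sketch for writeToFile, defined just below; hedged -- the file,
# progress bar, and hasher names are illustrative)
#     f = open("payload.bin", "wb")
#     d = connection.writeToFile(f, expected=filesize,
#                                progress=bar.update,
#                                hasher=sha256.update)
#     d.addCallback(lambda nbytes: f.close())
#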
'progress' is an # optional callable which will be called on each write (with the number # of bytes written). Returns a Deferred that fires (with the number of # bytes written) when the count is reached or the RecordPipe is closed. def writeToFile(self, f, expected, progress=None, hasher=None): fc = FileConsumer(f, progress, hasher) return self.connectConsumer(fc, expected) class OutboundConnectionFactory(protocol.ClientFactory): protocol = Connection def __init__(self, owner, relay_handshake, description): self.owner = owner self.relay_handshake = relay_handshake self._description = description self.start = time.time() def buildProtocol(self, addr): p = self.protocol(self.owner, self.relay_handshake, self.start, self._description) p.factory = self return p def connectionWasMade(self, p): # outbound connections are handled via the endpoint pass class InboundConnectionFactory(protocol.ClientFactory): protocol = Connection def __init__(self, owner): self.owner = owner self.start = time.time() self._inbound_d = defer.Deferred(self._cancel) self._pending_connections = set() def whenDone(self): return self._inbound_d def _cancel(self, inbound_d): self._shutdown() # our _inbound_d will be errbacked by Deferred.cancel() def _shutdown(self): for d in list(self._pending_connections): d.cancel() # that fires _remove and _proto_failed def _describePeer(self, addr): if isinstance(addr, address.HostnameAddress): return "<-%s:%d" % (addr.hostname, addr.port) elif isinstance(addr, (address.IPv4Address, address.IPv6Address)): return "<-%s:%d" % (addr.host, addr.port) return "<-%r" % addr def buildProtocol(self, addr): p = self.protocol(self.owner, None, self.start, self._describePeer(addr)) p.factory = self return p def connectionWasMade(self, p): d = p.startNegotiation() self._pending_connections.add(d) d.addBoth(self._remove, d) d.addCallbacks(self._proto_succeeded, self._proto_failed) def _remove(self, res, d): self._pending_connections.remove(d) return res def _proto_succeeded(self, p): self._shutdown() self._inbound_d.callback(p) def _proto_failed(self, f): # ignore these two, let Twisted log everything else f.trap(BadHandshake, defer.CancelledError) def allocate_tcp_port(): """Return an (integer) available TCP port on localhost. This briefly listens on the port in question, then closes it right away.""" # We want to bind() the socket but not listen(). Twisted (in # tcp.Port.createInternetSocket) would do several other things: # non-blocking, close-on-exec, and SO_REUSEADDR. We don't need # non-blocking because we never listen on it, and we don't need # close-on-exec because we close it right away. So just add SO_REUSEADDR. s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) if platformType == "posix" and sys.platform != "cygwin": s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.bind(("127.0.0.1", 0)) port = s.getsockname()[1] s.close() return port class _ThereCanBeOnlyOne: """Accept a list of contender Deferreds, and return a summary Deferred. When the first contender fires successfully, cancel the rest and fire the summary with the winning contender's result. If all error, errback the summary. status_cb=? 
""" def __init__(self, contenders): self._remaining = set(contenders) self._winner_d = defer.Deferred(self._cancel) self._first_success = None self._first_failure = None self._have_winner = False self._fired = False def _cancel(self, _): for d in list(self._remaining): d.cancel() # since that will errback everything in _remaining, we'll have hit # _maybe_done() and fired self._winner_d by this point def run(self): for d in list(self._remaining): d.addBoth(self._remove, d) d.addCallbacks(self._succeeded, self._failed) d.addCallback(self._maybe_done) return self._winner_d def _remove(self, res, d): self._remaining.remove(d) return res def _succeeded(self, res): self._have_winner = True self._first_success = res for d in list(self._remaining): d.cancel() def _failed(self, f): if self._first_failure is None: self._first_failure = f def _maybe_done(self, _): if self._remaining: return if self._fired: return self._fired = True if self._have_winner: self._winner_d.callback(self._first_success) else: self._winner_d.errback(self._first_failure) def there_can_be_only_one(contenders): return _ThereCanBeOnlyOne(contenders).run() class Common: RELAY_DELAY = 2.0 TRANSIT_KEY_LENGTH = SecretBox.KEY_SIZE def __init__(self, transit_relay, no_listen=False, tor=None, reactor=None, timing=None): self._side = bytes_to_hexstr(os.urandom(8)) # unicode if transit_relay: if not isinstance(transit_relay, type(u"")): raise InternalError # TODO: allow multiple hints for a single relay relay_hint = parse_hint_argv(transit_relay) relay = RelayV1Hint(hints=(relay_hint, )) self._transit_relays = [relay] else: self._transit_relays = [] self._their_direct_hints = [] # hintobjs self._our_relay_hints = set(self._transit_relays) self._tor = tor self._transit_key = None self._no_listen = no_listen self._waiting_for_transit_key = [] self._listener = None self._winner = None if reactor is None: from twisted.internet import reactor self._reactor = reactor self._timing = timing or DebugTiming() self._timing.add("transit") def _build_listener(self): if self._no_listen or self._tor: return ([], None) portnum = allocate_tcp_port() addresses = ipaddrs.find_addresses() non_loopback_addresses = [a for a in addresses if a != "127.0.0.1"] if non_loopback_addresses: # some test hosts, including the appveyor VMs, *only* have # 127.0.0.1, and the tests will hang badly if we remove it. addresses = non_loopback_addresses direct_hints = [ DirectTCPV1Hint(six.u(addr), portnum, 0.0) for addr in addresses ] ep = endpoints.serverFromString(self._reactor, "tcp:%d" % portnum) return direct_hints, ep def get_connection_abilities(self): return [ { u"type": u"direct-tcp-v1" }, { u"type": u"relay-v1" }, ] @inlineCallbacks def get_connection_hints(self): hints = [] direct_hints = yield self._get_direct_hints() for dh in direct_hints: hints.append({ u"type": u"direct-tcp-v1", u"priority": dh.priority, u"hostname": dh.hostname, u"port": dh.port, # integer }) for relay in self._transit_relays: rhint = {u"type": u"relay-v1", u"hints": []} for rh in relay.hints: rhint[u"hints"].append({ u"type": u"direct-tcp-v1", u"priority": rh.priority, u"hostname": rh.hostname, u"port": rh.port }) hints.append(rhint) returnValue(hints) def _get_direct_hints(self): if self._listener: return defer.succeed(self._my_direct_hints) # there is a slight race here: if someone calls get_direct_hints() a # second time, before the listener has actually started listening, # then they'll get a Deferred that fires (with the hints) before the # listener starts listening. 
But most applications won't call this # multiple times, and the race is between 1: the parent Wormhole # protocol getting the connection hints to the other end, and 2: the # listener being ready for connections, and I'm confident that the # listener will win. self._my_direct_hints, self._listener = self._build_listener() if self._listener is None: # don't listen self._listener_d = None return defer.succeed(self._my_direct_hints) # empty # Start the server, so it will be running by the time anyone tries to # connect to the direct hints we return. f = InboundConnectionFactory(self) self._listener_f = f # for tests # XX move to __init__ ? self._listener_d = f.whenDone() d = self._listener.listen(f) def _listening(lp): # lp is an IListeningPort # self._listener_port = lp # for tests def _stop_listening(res): lp.stopListening() return res self._listener_d.addBoth(_stop_listening) return self._my_direct_hints d.addCallback(_listening) return d def _stop_listening(self): # this is for unit tests. The usual control flow (via connect()) # wires the listener's Deferred into a there_can_be_only_one(), which # eats the errback. If we don't ever call connect(), we must catch it # ourselves. self._listener_d.addErrback(lambda f: None) self._listener_d.cancel() def add_connection_hints(self, hints): for h in hints: # hint structs hint_type = h.get(u"type", u"") if hint_type in [u"direct-tcp-v1", u"tor-tcp-v1"]: dh = parse_tcp_v1_hint(h) if dh: self._their_direct_hints.append(dh) # hint_obj elif hint_type == u"relay-v1": # TODO: each relay-v1 clause describes a different relay, # with a set of equally-valid ways to connect to it. Treat # them as separate relays, instead of merging them all # together like this. relay_hints = [] for rhs in h.get(u"hints", []): h = parse_tcp_v1_hint(rhs) if h: relay_hints.append(h) if relay_hints: rh = RelayV1Hint(hints=tuple(sorted(relay_hints))) self._our_relay_hints.add(rh) else: log.msg("unknown hint type: %r" % (h, )) def _send_this(self): assert self._transit_key if self.is_sender: return build_sender_handshake(self._transit_key) else: return build_receiver_handshake(self._transit_key) def _expect_this(self): assert self._transit_key if self.is_sender: return build_receiver_handshake(self._transit_key) else: return build_sender_handshake(self._transit_key) # + b"go\n" def _sender_record_key(self): assert self._transit_key if self.is_sender: return HKDF( self._transit_key, SecretBox.KEY_SIZE, CTXinfo=b"transit_record_sender_key") else: return HKDF( self._transit_key, SecretBox.KEY_SIZE, CTXinfo=b"transit_record_receiver_key") def _receiver_record_key(self): assert self._transit_key if self.is_sender: return HKDF( self._transit_key, SecretBox.KEY_SIZE, CTXinfo=b"transit_record_receiver_key") else: return HKDF( self._transit_key, SecretBox.KEY_SIZE, CTXinfo=b"transit_record_sender_key") def set_transit_key(self, key): assert isinstance(key, type(b"")), type(key) # We use pubsub to protect against the race where the sender knows # the hints and the key, and connects to the receiver's transit # socket before the receiver gets the relay message (and thus the # key). self._transit_key = key waiters = self._waiting_for_transit_key del self._waiting_for_transit_key for d in waiters: # We don't need eventual-send here. It's safer in general, but # set_transit_key() is only called once, and _get_transit_key() # won't touch the subscribers list once the key is set. 
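# (aside) the record-key schedule derived from this key, per
# _sender_record_key()/_receiver_record_key() above: each direction gets
# its own SecretBox key, and the two sides derive the same pair swapped:
#     sender->receiver records: HKDF(key, CTXinfo=b"transit_record_sender_key")
#     receiver->sender records: HKDF(key, CTXinfo=b"transit_record_receiver_key")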
d.callback(key) def _get_transit_key(self): if self._transit_key: return defer.succeed(self._transit_key) d = defer.Deferred() self._waiting_for_transit_key.append(d) return d @inlineCallbacks def connect(self): with self._timing.add("transit connect"): yield self._get_transit_key() # we want to have the transit key before starting any outbound # connections, so those connections will know what to say when # they connect winner = yield self._connect() returnValue(winner) def _connect(self): # It might be nice to wire this so that a failure in the direct hints # causes the relay hints to be used right away (fast failover). But # none of our current use cases would take advantage of that: if we # have any viable direct hints, then they're either going to succeed # quickly or hang for a long time. contenders = [] if self._listener_d: contenders.append(self._listener_d) relay_delay = 0 for hint_obj in self._their_direct_hints: # Check the hint type to see if we can support it (e.g. skip # onion hints on a non-Tor client). Do not increase relay_delay # unless we have at least one viable hint. ep = endpoint_from_hint_obj(hint_obj, self._tor, self._reactor) if not ep: continue d = self._start_connector(ep, describe_hint_obj(hint_obj, False, self._tor)) contenders.append(d) relay_delay = self.RELAY_DELAY # Start trying the relays a few seconds after we start to try the # direct hints. The idea is to prefer direct connections, but not be # afraid of using a relay when we have direct hints that don't # resolve quickly. Many direct hints will be to unused local-network # IP addresses, which won't answer, and would take the full TCP # timeout (30s or more) to fail. prioritized_relays = {} for rh in self._our_relay_hints: for hint_obj in rh.hints: priority = hint_obj.priority if priority not in prioritized_relays: prioritized_relays[priority] = set() prioritized_relays[priority].add(hint_obj) for priority in sorted(prioritized_relays, reverse=True): for hint_obj in prioritized_relays[priority]: ep = endpoint_from_hint_obj(hint_obj, self._tor, self._reactor) if not ep: continue d = task.deferLater( self._reactor, relay_delay, self._start_connector, ep, describe_hint_obj(hint_obj, True, self._tor), is_relay=True) contenders.append(d) relay_delay += self.RELAY_DELAY if not contenders: raise TransitError("No contenders for connection") winner = there_can_be_only_one(contenders) return self._not_forever(2 * TIMEOUT, winner) def _not_forever(self, timeout, d): """If the timer fires first, cancel the deferred. If the deferred fires first, cancel the timer.""" t = self._reactor.callLater(timeout, d.cancel) def _done(res): if t.active(): t.cancel() return res d.addBoth(_done) return d def _build_relay_handshake(self): return build_sided_relay_handshake(self._transit_key, self._side) def _start_connector(self, ep, description, is_relay=False): relay_handshake = None if is_relay: assert self._transit_key relay_handshake = self._build_relay_handshake() f = OutboundConnectionFactory(self, relay_handshake, description) d = ep.connect(f) # fires with protocol, or ConnectError d.addCallback(lambda p: p.startNegotiation()) return d def connection_ready(self, p): # inbound/outbound Connection protocols call this when they finish # negotiation. The first one wins and gets a "go". Any subsequent # ones lose and get a "nevermind" before being closed. if not self.is_sender: return "wait-for-decision" if self._winner: # we already have a winner, so this one loses return "nevermind" # this one wins! 
self._winner = p return "go" class TransitSender(Common): is_sender = True class TransitReceiver(Common): is_sender = False # based on twisted.protocols.ftp.FileConsumer, but don't close the filehandle # when done, and add a progress function that gets called with the length of # each write, and a hasher function that gets called with the data. @implementer(interfaces.IConsumer) class FileConsumer: def __init__(self, f, progress=None, hasher=None): self._f = f self._progress = progress self._hasher = hasher self._producer = None def registerProducer(self, producer, streaming): assert not self._producer self._producer = producer assert streaming def write(self, bytes): self._f.write(bytes) if self._progress: self._progress(len(bytes)) if self._hasher: self._hasher(bytes) def unregisterProducer(self): assert self._producer self._producer = None # the TransitSender/Receiver.connect() yields a Connection, on which you can # do send_record(), but what should the receive API be? set a callback for # inbound records? get a Deferred for the next record? The producer/consumer # API is enough for file transfer, but what would other applications want? # how should the Listener be managed? we want to shut it down when the # connect() Deferred is cancelled, as well as terminating any negotiations in # progress. # # the factory should return/manage a deferred, which fires iff an inbound # connection completes negotiation successfully, can be cancelled (which # stops the listener and drops all pending connections), but will never # timeout, and only errbacks if cancelled. # write unit test for _ThereCanBeOnlyOne # check start/finish time-gathering instrumentation # relay URLs are probably mishandled: both sides probably send their URL, # then connect to the *other* side's URL, when they really should connect to # both their own and the other side's. The current implementation probably # only works if the two URLs are the same. magic-wormhole-0.12.0/src/wormhole/util.py000066400000000000000000000030041400712516500204540ustar00rootroot00000000000000# No unicode_literals import json import os import unicodedata from binascii import hexlify, unhexlify from hkdf import Hkdf def HKDF(skm, outlen, salt=None, CTXinfo=b""): return Hkdf(salt, skm).expand(CTXinfo, outlen) def to_bytes(u): return unicodedata.normalize("NFC", u).encode("utf-8") def to_unicode(any): if isinstance(any, type(u"")): return any return any.decode("ascii") def bytes_to_hexstr(b): assert isinstance(b, type(b"")) hexstr = hexlify(b).decode("ascii") assert isinstance(hexstr, type(u"")) return hexstr def hexstr_to_bytes(hexstr): assert isinstance(hexstr, type(u"")) b = unhexlify(hexstr.encode("ascii")) assert isinstance(b, type(b"")) return b def dict_to_bytes(d): assert isinstance(d, dict) b = json.dumps(d).encode("utf-8") assert isinstance(b, type(b"")) return b def bytes_to_dict(b): assert isinstance(b, type(b"")) d = json.loads(b.decode("utf-8")) assert isinstance(d, dict) return d def estimate_free_space(target): # f_bfree is the blocks available to a root user. It might be more # accurate to use f_bavail (blocks available to non-root user), but we # don't know which user is running us, and a lot of installations don't # bother with reserving extra space for root, so let's just stick to the # basic (larger) estimate. 
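# In statvfs terms, the estimate computed below is f_frsize (the fragment
# size) times f_bfree (free blocks); the stricter figure would be
# f_frsize * f_bavail, the blocks available to unprivileged users.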
try: s = os.statvfs(os.path.dirname(os.path.abspath(target))) return s.f_frsize * s.f_bfree except AttributeError: return None magic-wormhole-0.12.0/src/wormhole/wormhole.py000066400000000000000000000241761400712516500213500ustar00rootroot00000000000000from __future__ import absolute_import, print_function, unicode_literals import os import sys from attr import attrib, attrs from twisted.python import failure from twisted.internet.task import Cooperator from zope.interface import implementer from ._boss import Boss from ._dilation.manager import DILATION_VERSIONS from ._dilation.connector import Connector from ._interfaces import IDeferredWormhole, IWormhole from ._key import derive_key from .errors import NoKeyError, WormholeClosed from .eventual import EventualQueue from .journal import ImmediateJournal from .observer import OneShotObserver, SequenceObserver from .timing import DebugTiming from .util import bytes_to_hexstr, to_bytes from ._version import get_versions __version__ = get_versions()['version'] del get_versions # We can provide different APIs to different apps: # * Deferreds # w.get_code().addCallback(print_code) # w.send_message(data) # w.get_message().addCallback(got_data) # w.close().addCallback(closed) # * delegate callbacks (better for journaled environments) # w = wormhole(delegate=app) # w.send_message(data) # app.wormhole_got_code(code) # app.wormhole_got_verifier(verifier) # app.wormhole_got_versions(versions) # app.wormhole_got_message(data) # w.close() # app.wormhole_closed() # # * potential delegate options # wormhole(delegate=app, delegate_prefix="wormhole_", # delegate_args=(args, kwargs)) @attrs @implementer(IWormhole) class _DelegatedWormhole(object): _delegate = attrib() def __attrs_post_init__(self): self._key = None def _set_boss(self, boss): self._boss = boss # from above def allocate_code(self, code_length=2): self._boss.allocate_code(code_length) def input_code(self): return self._boss.input_code() def set_code(self, code): self._boss.set_code(code) # def serialize(self): # s = {"serialized_wormhole_version": 1, # "boss": self._boss.serialize(), # } # return s def send_message(self, plaintext): self._boss.send(plaintext) def derive_key(self, purpose, length): """Derive a new key from the established wormhole channel for some other purpose. This is a deterministic randomized function of the session key and the 'purpose' string (unicode/py3-string). This cannot be called until when_verifier() has fired, nor after close() was called. 
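A sketch of typical use (the purpose string "myapp/transit-key" is
illustrative, not part of the API):

    transit_key = w.derive_key(u"myapp/transit-key", 32)

Both sides must pass the same purpose and length to arrive at the same
derived key.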
""" if not isinstance(purpose, type("")): raise TypeError(type(purpose)) if not self._key: raise NoKeyError() return derive_key(self._key, to_bytes(purpose), length) def close(self): self._boss.close() def debug_set_trace(self, client_name, which="B N M S O K SK R RC L C T", file=sys.stderr): self._boss._set_trace(client_name, which, file) # from below def got_welcome(self, welcome): self._delegate.wormhole_got_welcome(welcome) def got_code(self, code): self._delegate.wormhole_got_code(code) def got_key(self, key): self._delegate.wormhole_got_unverified_key(key) self._key = key # for derive_key() def got_verifier(self, verifier): self._delegate.wormhole_got_verifier(verifier) def got_versions(self, versions): self._delegate.wormhole_got_versions(versions) def received(self, plaintext): self._delegate.wormhole_got_message(plaintext) def closed(self, result): self._delegate.wormhole_closed(result) @implementer(IWormhole, IDeferredWormhole) class _DeferredWormhole(object): def __init__(self, reactor, eq, _enable_dilate=False): self._reactor = reactor self._welcome_observer = OneShotObserver(eq) self._code_observer = OneShotObserver(eq) self._key = None self._key_observer = OneShotObserver(eq) self._verifier_observer = OneShotObserver(eq) self._version_observer = OneShotObserver(eq) self._received_observer = SequenceObserver(eq) self._closed = False self._closed_observer = OneShotObserver(eq) self._enable_dilate = _enable_dilate def _set_boss(self, boss): self._boss = boss # from above def get_code(self): # TODO: consider throwing error unless one of allocate/set/input_code # was called first. It's legit to grab the Deferred before triggering # the process that will cause it to fire, but forbidding that # ordering would make it easier to cause programming errors that # forget to trigger it entirely. return self._code_observer.when_fired() def get_welcome(self): return self._welcome_observer.when_fired() def get_unverified_key(self): return self._key_observer.when_fired() def get_verifier(self): return self._verifier_observer.when_fired() def get_versions(self): return self._version_observer.when_fired() def get_message(self): return self._received_observer.when_next_event() def allocate_code(self, code_length=2): self._boss.allocate_code(code_length) def input_code(self): return self._boss.input_code() def set_code(self, code): self._boss.set_code(code) # no .serialize in Deferred-mode def send_message(self, plaintext): self._boss.send(plaintext) def derive_key(self, purpose, length): """Derive a new key from the established wormhole channel for some other purpose. This is a deterministic randomized function of the session key and the 'purpose' string (unicode/py3-string). This cannot be called until when_verified() has fired, nor after close() was called. """ if not isinstance(purpose, type("")): raise TypeError(type(purpose)) if not self._key: raise NoKeyError() return derive_key(self._key, to_bytes(purpose), length) def dilate(self, transit_relay_location=None, no_listen=False): if not self._enable_dilate: raise NotImplementedError return self._boss.dilate(transit_relay_location, no_listen) # fires with (endpoints) def close(self): # fails with WormholeError unless we established a connection # (state=="happy"). Fails with WrongPasswordError (a subclass of # WormholeError) if state=="scary". 
d = self._closed_observer.when_fired() # maybe Failure if not self._closed: self._boss.close() # only need to close if it wasn't already return d def debug_set_trace(self, client_name, which="B N M S O K SK R RC L A I C T", file=sys.stderr): self._boss._set_trace(client_name, which, file) # from below def got_welcome(self, welcome): self._welcome_observer.fire_if_not_fired(welcome) def got_code(self, code): self._code_observer.fire_if_not_fired(code) def got_key(self, key): self._key = key # for derive_key() self._key_observer.fire_if_not_fired(key) def got_verifier(self, verifier): self._verifier_observer.fire_if_not_fired(verifier) def got_versions(self, versions): self._version_observer.fire_if_not_fired(versions) def received(self, plaintext): self._received_observer.fire(plaintext) def closed(self, result): self._closed = True # print("closed", result, type(result), file=sys.stderr) if isinstance(result, Exception): # everything pending gets an error, including close() f = failure.Failure(result) self._closed_observer.error(f) else: # everything pending except close() gets an error: # w.get_code()/welcome/unverified_key/verifier/versions/message f = failure.Failure(WormholeClosed(result)) # but w.close() only gets error if we're unhappy self._closed_observer.fire_if_not_fired(result) self._welcome_observer.error(f) self._code_observer.error(f) self._key_observer.error(f) self._verifier_observer.error(f) self._version_observer.error(f) self._received_observer.fire(f) def create( appid, relay_url, reactor, # use keyword args for everything else versions={}, delegate=None, journal=None, tor=None, timing=None, stderr=sys.stderr, _eventual_queue=None, _enable_dilate=False): timing = timing or DebugTiming() side = bytes_to_hexstr(os.urandom(5)) journal = journal or ImmediateJournal() eq = _eventual_queue or EventualQueue(reactor) cooperator = Cooperator(scheduler=eq.eventually) if delegate: w = _DelegatedWormhole(delegate) else: w = _DeferredWormhole(reactor, eq, _enable_dilate=_enable_dilate) # this indicates Wormhole capabilities wormhole_versions = { "can-dilate": DILATION_VERSIONS, "dilation-abilities": Connector.get_connection_abilities(), } if not _enable_dilate: wormhole_versions = {} # don't advertise Dilation yet: not ready wormhole_versions["app_versions"] = versions # app-specific capabilities v = __version__ if isinstance(v, type(b"")): v = v.decode("utf-8", errors="replace") client_version = ("python", v) b = Boss(w, side, relay_url, appid, wormhole_versions, client_version, reactor, eq, cooperator, journal, tor, timing) w._set_boss(b) b.start() return w # def from_serialized(serialized, reactor, delegate, # journal=None, tor=None, # timing=None, stderr=sys.stderr): # assert serialized["serialized_wormhole_version"] == 1 # timing = timing or DebugTiming() # w = _DelegatedWormhole(delegate) # # now unpack state machines, including the SPAKE2 in Key # b = Boss.from_serialized(w, serialized["boss"], reactor, journal, timing) # w._set_boss(b) # b.start() # ?? # raise NotImplemented # # should the new Wormhole call got_code? only if it wasn't called before. magic-wormhole-0.12.0/src/wormhole/xfer_util.py000066400000000000000000000102641400712516500215060ustar00rootroot00000000000000import json from twisted.internet.defer import inlineCallbacks, returnValue from . 
import wormhole from .tor_manager import get_tor @inlineCallbacks def receive(reactor, appid, relay_url, code, use_tor=False, launch_tor=False, tor_control_port=None, on_code=None): """ This is a convenience API which returns a Deferred that callbacks with a single chunk of data from another wormhole (and then closes the wormhole). Under the hood, it's just using an instance returned from :func:`wormhole.wormhole`. This is similar to the `wormhole receive` command. :param unicode appid: our application ID :param unicode relay_url: the relay URL to use :param unicode code: a pre-existing code to use, or None :param bool use_tor: True if we should use Tor, False to not use it (None for default) :param on_code: if not None, this is called when we have a code (even if you passed in one explicitly) :type on_code: single-argument callable """ tor = None if use_tor: tor = yield get_tor(reactor, launch_tor, tor_control_port) # For now, block everything until Tor has started. Soon: launch # tor in parallel with everything else, make sure the Tor object # can lazy-provide an endpoint, and overlap the startup process # with the user handing off the wormhole code wh = wormhole.create(appid, relay_url, reactor, tor=tor) if code is None: wh.allocate_code() code = yield wh.get_code() else: wh.set_code(code) # we'll call this no matter what, even if you passed in a code -- # maybe it should be only in the 'if' block above? if on_code: on_code(code) data = yield wh.get_message() data = json.loads(data.decode("utf-8")) offer = data.get('offer', None) if not offer: raise Exception("Do not understand response: {}".format(data)) msg = None if 'message' in offer: msg = offer['message'] wh.send_message( json.dumps({ "answer": { "message_ack": "ok" } }).encode("utf-8")) else: raise Exception("Unknown offer type: {}".format(offer.keys())) yield wh.close() returnValue(msg) @inlineCallbacks def send(reactor, appid, relay_url, data, code, use_tor=False, launch_tor=False, tor_control_port=None, on_code=None): """ This is a convenience API which returns a Deferred that callbacks after a single chunk of data has been sent to another wormhole. Under the hood, it's just using an instance returned from :func:`wormhole.wormhole`. This is similar to the `wormhole send` command. :param unicode appid: the application ID :param unicode relay_url: the relay URL to use :param unicode code: a pre-existing code to use, or None :param bool use_tor: True if we should use Tor, False to not use it (None for default) :param on_code: if not None, this is called when we have a code (even if you passed in one explicitly) :type on_code: single-argument callable """ tor = None if use_tor: tor = yield get_tor(reactor, launch_tor, tor_control_port) # For now, block everything until Tor has started. 
Soon: launch # tor in parallel with everything else, make sure the Tor object # can lazy-provide an endpoint, and overlap the startup process # with the user handing off the wormhole code wh = wormhole.create(appid, relay_url, reactor, tor=tor) if code is None: wh.allocate_code() code = yield wh.get_code() else: wh.set_code(code) if on_code: on_code(code) wh.send_message(json.dumps({"offer": {"message": data}}).encode("utf-8")) data = yield wh.get_message() data = json.loads(data.decode("utf-8")) answer = data.get('answer', None) yield wh.close() if answer: returnValue(None) else: raise Exception("Unknown answer: {}".format(data)) magic-wormhole-0.12.0/tox.ini000066400000000000000000000033331400712516500160220ustar00rootroot00000000000000# Tox (http://tox.testrun.org/) is a tool for running tests # in multiple virtualenvs. This configuration file will run the # test suite on all supported python versions. To use it, "pip install tox" # and then run "tox" from this directory. [tox] # useful envs: py27-nodilate, py35, py36, py37, py38, pypy, flake8 envlist = {py27-nodilate,py35,py36,py37,py38} skip_missing_interpreters = True minversion = 2.4.0 [testenv] usedevelop = True extras = nodilate: dev !nodilate: dev, dilate deps = pyflakes >= 1.2.3 coverage: coverage commands = pyflakes setup.py src wormhole --version !coverage: python -m wormhole.test.run_trial {posargs:wormhole} coverage: coverage run --branch -m wormhole.test.run_trial {posargs:wormhole} coverage: coverage xml # on windows, trial is installed as venv/bin/trial.py, not .exe, but (at # least appveyor) adds .PY to $PATHEXT. So "trial wormhole" might work on # windows, and certainly does on unix. But to get "coverage run" to work, we # need a script name (since "python -m twisted.scripts.trial" doesn't have a # 'if __name__ == "__main__": run()' -style clause), and the script name will # vary on the platform. So we added a small class (wormhole.test.run_trial) # that does the right import for us. [testenv:flake8] deps = flake8 commands = flake8 src/wormhole [flake8] ignore = E741,W503,W504 exclude = .git,__pycache__,docs/source/conf.py,old,build,dist max-complexity = 40 [testenv:flake8less] deps = flake8 commands = flake8 --select=E901,E999,F821,F822,F823 src/wormhole [testenv:docs] deps = sphinx recommonmark skip_install = True commands = sphinx-build -b html -d {toxinidir}/docs/_build/doctrees {toxinidir}/docs {toxinidir}/docs/_build/html magic-wormhole-0.12.0/versioneer.py000066400000000000000000002060031400712516500172410ustar00rootroot00000000000000 # Version: 0.18 """The Versioneer - like a rocketeer, but for versions. The Versioneer ============== * like a rocketeer, but for versions! * https://github.com/warner/python-versioneer * Brian Warner * License: Public Domain * Compatible With: python2.6, 2.7, 3.2, 3.3, 3.4, 3.5, 3.6, and pypy * [![Latest Version] (https://pypip.in/version/versioneer/badge.svg?style=flat) ](https://pypi.python.org/pypi/versioneer/) * [![Build Status] (https://travis-ci.org/warner/python-versioneer.png?branch=master) ](https://travis-ci.org/warner/python-versioneer) This is a tool for managing a recorded version number in distutils-based python projects. The goal is to remove the tedious and error-prone "update the embedded version string" step from your release process. Making a new release should be as easy as recording a new tag in your version-control system, and maybe making new tarballs. 
## Quick Install

* `pip install versioneer` to somewhere in your $PATH
* add a `[versioneer]` section to your setup.cfg (see below)
* run `versioneer install` in your source tree, commit the results

## Version Identifiers

Source trees come from a variety of places:

* a version-control system checkout (mostly used by developers)
* a nightly tarball, produced by build automation
* a snapshot tarball, produced by a web-based VCS browser, like github's
  "tarball from tag" feature
* a release tarball, produced by "setup.py sdist", distributed through PyPI

Within each source tree, the version identifier (either a string or a number,
this tool is format-agnostic) can come from a variety of places:

* ask the VCS tool itself, e.g. "git describe" (for checkouts), which knows
  about recent "tags" and an absolute revision-id
* the name of the directory into which the tarball was unpacked
* an expanded VCS keyword ($Id$, etc)
* a `_version.py` created by some earlier build step

For released software, the version identifier is closely related to a VCS
tag. Some projects use tag names that include more than just the version
string (e.g. "myproject-1.2" instead of just "1.2"), in which case the tool
needs to strip the tag prefix to extract the version identifier. For
unreleased software (between tags), the version identifier should provide
enough information to help developers recreate the same tree, while also
giving them an idea of roughly how old the tree is (after version 1.2, before
version 1.3). Many VCS systems can report a description that captures this,
for example `git describe --tags --dirty --always` reports things like
"0.7-1-g574ab98-dirty" to indicate that the checkout is one revision past the
0.7 tag, has a unique revision id of "574ab98", and is "dirty" (it has
uncommitted changes).

The version identifier is used for multiple purposes:

* to allow the module to self-identify its version: `myproject.__version__`
* to choose a name and prefix for a 'setup.py sdist' tarball

## Theory of Operation

Versioneer works by adding a special `_version.py` file into your source
tree, where your `__init__.py` can import it. This `_version.py` knows how to
dynamically ask the VCS tool for version information at import time.

`_version.py` also contains `$Revision$` markers, and the installation
process marks `_version.py` to have this marker rewritten with a tag name
during the `git archive` command. As a result, generated tarballs will
contain enough information to get the proper version.

To allow `setup.py` to compute a version too, a `versioneer.py` is added to
the top level of your source tree, next to `setup.py` and the `setup.cfg`
that configures it. This overrides several distutils/setuptools commands to
compute the version when invoked, and changes `setup.py build` and `setup.py
sdist` to replace `_version.py` with a small static file that contains just
the generated version data.

## Installation

See [INSTALL.md](./INSTALL.md) for detailed installation instructions.

## Version-String Flavors

Code which uses Versioneer can learn about its version string at runtime by
importing `_version` from your main `__init__.py` file and running the
`get_versions()` function. From the "outside" (e.g. in `setup.py`), you can
import the top-level `versioneer.py` and run `get_versions()`.

Both functions return a dictionary with different flavors of version
information:

* `['version']`: A condensed version string, rendered using the selected
  style. This is the most commonly used value for the project's version
  string.
The default "pep440" style yields strings like `0.11`, `0.11+2.g1076c97`, or `0.11+2.g1076c97.dirty`. See the "Styles" section below for alternative styles. * `['full-revisionid']`: detailed revision identifier. For Git, this is the full SHA1 commit id, e.g. "1076c978a8d3cfc70f408fe5974aa6c092c949ac". * `['date']`: Date and time of the latest `HEAD` commit. For Git, it is the commit date in ISO 8601 format. This will be None if the date is not available. * `['dirty']`: a boolean, True if the tree has uncommitted changes. Note that this is only accurate if run in a VCS checkout, otherwise it is likely to be False or None * `['error']`: if the version string could not be computed, this will be set to a string describing the problem, otherwise it will be None. It may be useful to throw an exception in setup.py if this is set, to avoid e.g. creating tarballs with a version string of "unknown". Some variants are more useful than others. Including `full-revisionid` in a bug report should allow developers to reconstruct the exact code being tested (or indicate the presence of local changes that should be shared with the developers). `version` is suitable for display in an "about" box or a CLI `--version` output: it can be easily compared against release notes and lists of bugs fixed in various releases. The installer adds the following text to your `__init__.py` to place a basic version in `YOURPROJECT.__version__`: from ._version import get_versions __version__ = get_versions()['version'] del get_versions ## Styles The setup.cfg `style=` configuration controls how the VCS information is rendered into a version string. The default style, "pep440", produces a PEP440-compliant string, equal to the un-prefixed tag name for actual releases, and containing an additional "local version" section with more detail for in-between builds. For Git, this is TAG[+DISTANCE.gHEX[.dirty]] , using information from `git describe --tags --dirty --always`. For example "0.11+2.g1076c97.dirty" indicates that the tree is like the "1076c97" commit but has uncommitted changes (".dirty"), and that this commit is two revisions ("+2") beyond the "0.11" tag. For released software (exactly equal to a known tag), the identifier will only contain the stripped tag, e.g. "0.11". Other styles are available. See [details.md](details.md) in the Versioneer source tree for descriptions. ## Debugging Versioneer tries to avoid fatal errors: if something goes wrong, it will tend to return a version of "0+unknown". To investigate the problem, run `setup.py version`, which will run the version-lookup code in a verbose mode, and will display the full contents of `get_versions()` (including the `error` string, which may help identify what went wrong). ## Known Limitations Some situations are known to cause problems for Versioneer. This details the most significant ones. More can be found on Github [issues page](https://github.com/warner/python-versioneer/issues). ### Subprojects Versioneer has limited support for source trees in which `setup.py` is not in the root directory (e.g. `setup.py` and `.git/` are *not* siblings). The are two common reasons why `setup.py` might not be in the root: * Source trees which contain multiple subprojects, such as [Buildbot](https://github.com/buildbot/buildbot), which contains both "master" and "slave" subprojects, each with their own `setup.py`, `setup.cfg`, and `tox.ini`. Projects like these produce multiple PyPI distributions (and upload multiple independently-installable tarballs). 
* Source trees whose main purpose is to contain a C library, but which also
  provide bindings to Python (and perhaps other languages) in subdirectories.

Versioneer will look for `.git` in parent directories, and most operations
should get the right version string. However `pip` and `setuptools` have bugs
and implementation details which frequently cause `pip install .` from a
subproject directory to fail to find a correct version string (so it usually
defaults to `0+unknown`).

`pip install --editable .` should work correctly. `setup.py install` might
work too.

Pip-8.1.1 is known to have this problem, but hopefully it will get fixed in
some later version.
[Bug #38](https://github.com/warner/python-versioneer/issues/38) is tracking
this issue. The discussion in
[PR #61](https://github.com/warner/python-versioneer/pull/61) describes the
issue from the Versioneer side in more detail.
[pip PR#3176](https://github.com/pypa/pip/pull/3176) and
[pip PR#3615](https://github.com/pypa/pip/pull/3615) contain work to improve
pip to let Versioneer work correctly.

Versioneer-0.16 and earlier only looked for a `.git` directory next to the
`setup.cfg`, so subprojects were completely unsupported with those releases.

### Editable installs with setuptools <= 18.5

`setup.py develop` and `pip install --editable .` allow you to install a
project into a virtualenv once, then continue editing the source code (and
test) without re-installing after every change.

"Entry-point scripts" (`setup(entry_points={"console_scripts": ..})`) are a
convenient way to specify executable scripts that should be installed along
with the python package.

These both work as expected when using modern setuptools. When using
setuptools-18.5 or earlier, however, certain operations will cause
`pkg_resources.DistributionNotFound` errors when running the entrypoint
script, which must be resolved by re-installing the package. This happens
when the install is done with one version, then the egg_info data is
regenerated while a different version is checked out. Many setup.py commands
cause egg_info to be rebuilt (including `sdist`, `wheel`, and installing into
a different virtualenv), so this can be surprising.

[Bug #83](https://github.com/warner/python-versioneer/issues/83) describes
this one, but upgrading to a newer version of setuptools should probably
resolve it.

### Unicode version strings

While Versioneer works (and is continually tested) with both Python 2 and
Python 3, it is not entirely consistent with bytes-vs-unicode distinctions.
Newer releases probably generate unicode version strings on py2. It's not
clear that this is wrong, but it may be surprising for applications which
then write these strings to a network connection or include them in
bytes-oriented APIs like cryptographic checksums.

[Bug #71](https://github.com/warner/python-versioneer/issues/71) investigates
this question.

## Updating Versioneer

To upgrade your project to a new release of Versioneer, do the following:

* install the new Versioneer (`pip install -U versioneer` or equivalent)
* edit `setup.cfg`, if necessary, to include any new configuration settings
  indicated by the release notes. See [UPGRADING](./UPGRADING.md) for
  details.
* re-run `versioneer install` in your source tree, to replace
  `SRC/_version.py`
* commit any changed files

## Future Directions

This tool is designed to make it easy to extend to other version-control
systems: all VCS-specific components are in separate directories like
src/git/ .
The top-level `versioneer.py` script is assembled from these components by running make-versioneer.py . In the future, make-versioneer.py will take a VCS name as an argument, and will construct a version of `versioneer.py` that is specific to the given VCS. It might also take the configuration arguments that are currently provided manually during installation by editing setup.py . Alternatively, it might go the other direction and include code from all supported VCS systems, reducing the number of intermediate scripts. ## License To make Versioneer easier to embed, all its code is dedicated to the public domain. The `_version.py` that it creates is also in the public domain. Specifically, both are released under the Creative Commons "Public Domain Dedication" license (CC0-1.0), as described in https://creativecommons.org/publicdomain/zero/1.0/ . """ from __future__ import print_function try: import configparser except ImportError: import ConfigParser as configparser import errno import json import os import re import subprocess import sys class VersioneerConfig: """Container for Versioneer configuration parameters.""" def get_root(): """Get the project root directory. We require that all commands are run from the project root, i.e. the directory that contains setup.py, setup.cfg, and versioneer.py . """ root = os.path.realpath(os.path.abspath(os.getcwd())) setup_py = os.path.join(root, "setup.py") versioneer_py = os.path.join(root, "versioneer.py") if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)): # allow 'python path/to/setup.py COMMAND' root = os.path.dirname(os.path.realpath(os.path.abspath(sys.argv[0]))) setup_py = os.path.join(root, "setup.py") versioneer_py = os.path.join(root, "versioneer.py") if not (os.path.exists(setup_py) or os.path.exists(versioneer_py)): err = ("Versioneer was unable to run the project root directory. " "Versioneer requires setup.py to be executed from " "its immediate directory (like 'python setup.py COMMAND'), " "or in a way that lets it use sys.argv[0] to find the root " "(like 'python path/to/setup.py COMMAND').") raise VersioneerBadRootError(err) try: # Certain runtime workflows (setup.py install/develop in a setuptools # tree) execute all dependencies in a single python process, so # "versioneer" may be imported multiple times, and python's shared # module-import table will cache the first one. So we can't use # os.path.dirname(__file__), as that will find whichever # versioneer.py was first imported, even in later projects. me = os.path.realpath(os.path.abspath(__file__)) me_dir = os.path.normcase(os.path.splitext(me)[0]) vsr_dir = os.path.normcase(os.path.splitext(versioneer_py)[0]) if me_dir != vsr_dir: print("Warning: build in %s is using versioneer.py from %s" % (os.path.dirname(me), versioneer_py)) except NameError: pass return root def get_config_from_root(root): """Read the project setup.cfg file to determine Versioneer config.""" # This might raise EnvironmentError (if setup.cfg is missing), or # configparser.NoSectionError (if it lacks a [versioneer] section), or # configparser.NoOptionError (if it lacks "VCS="). See the docstring at # the top of versioneer.py for instructions on writing your setup.cfg . 
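# For reference, a minimal [versioneer] section looks like the following
# (the option names match the get() calls below; the values shown are
# illustrative):
#     [versioneer]
#     VCS = git
#     style = pep440
#     versionfile_source = src/wormhole/_version.py
#     versionfile_build = wormhole/_version.py
#     tag_prefix =
#     parentdir_prefix = magic-wormhole-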
setup_cfg = os.path.join(root, "setup.cfg") parser = configparser.SafeConfigParser() with open(setup_cfg, "r") as f: parser.readfp(f) VCS = parser.get("versioneer", "VCS") # mandatory def get(parser, name): if parser.has_option("versioneer", name): return parser.get("versioneer", name) return None cfg = VersioneerConfig() cfg.VCS = VCS cfg.style = get(parser, "style") or "" cfg.versionfile_source = get(parser, "versionfile_source") cfg.versionfile_build = get(parser, "versionfile_build") cfg.tag_prefix = get(parser, "tag_prefix") if cfg.tag_prefix in ("''", '""'): cfg.tag_prefix = "" cfg.parentdir_prefix = get(parser, "parentdir_prefix") cfg.verbose = get(parser, "verbose") return cfg class NotThisMethod(Exception): """Exception raised if a method is not valid for the current scenario.""" # these dictionaries contain VCS-specific tools LONG_VERSION_PY = {} HANDLERS = {} def register_vcs_handler(vcs, method): # decorator """Decorator to mark a method as the handler for a particular VCS.""" def decorate(f): """Store f in HANDLERS[vcs][method].""" if vcs not in HANDLERS: HANDLERS[vcs] = {} HANDLERS[vcs][method] = f return f return decorate def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, env=None): """Call the given command(s).""" assert isinstance(commands, list) p = None for c in commands: try: dispcmd = str([c] + args) # remember shell=False, so use git.cmd on windows, not just git p = subprocess.Popen([c] + args, cwd=cwd, env=env, stdout=subprocess.PIPE, stderr=(subprocess.PIPE if hide_stderr else None)) break except EnvironmentError: e = sys.exc_info()[1] if e.errno == errno.ENOENT: continue if verbose: print("unable to run %s" % dispcmd) print(e) return None, None else: if verbose: print("unable to find command, tried %s" % (commands,)) return None, None stdout = p.communicate()[0].strip() if sys.version_info[0] >= 3: stdout = stdout.decode() if p.returncode != 0: if verbose: print("unable to run %s (error)" % dispcmd) print("stdout was %s" % stdout) return None, p.returncode return stdout, p.returncode LONG_VERSION_PY['git'] = ''' # This file helps to compute a version number in source trees obtained from # git-archive tarball (such as those provided by githubs download-from-tag # feature). Distribution tarballs (built by setup.py sdist) and build # directories (produced by setup.py build) will contain a much shorter file # that just contains the computed version number. # This file is released into the public domain. Generated by # versioneer-0.18 (https://github.com/warner/python-versioneer) """Git implementation of _version.py.""" import errno import os import re import subprocess import sys def get_keywords(): """Get the keywords needed to look up the version information.""" # these strings will be replaced by git during git-archive. # setup.py/versioneer.py will grep for the variable names, so they must # each be defined on a line of their own. _version.py will just call # get_keywords(). 
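# In a "git archive" tarball the substitutions have already happened, and
# these assignments look roughly like (values purely illustrative):
#     git_refnames = " (tag: 1.2)"
#     git_full = "0123456789abcdef0123456789abcdef01234567"
#     git_date = "2020-04-04 12:00:00 -0700"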
git_refnames = "%(DOLLAR)sFormat:%%d%(DOLLAR)s" git_full = "%(DOLLAR)sFormat:%%H%(DOLLAR)s" git_date = "%(DOLLAR)sFormat:%%ci%(DOLLAR)s" keywords = {"refnames": git_refnames, "full": git_full, "date": git_date} return keywords class VersioneerConfig: """Container for Versioneer configuration parameters.""" def get_config(): """Create, populate and return the VersioneerConfig() object.""" # these strings are filled in when 'setup.py versioneer' creates # _version.py cfg = VersioneerConfig() cfg.VCS = "git" cfg.style = "%(STYLE)s" cfg.tag_prefix = "%(TAG_PREFIX)s" cfg.parentdir_prefix = "%(PARENTDIR_PREFIX)s" cfg.versionfile_source = "%(VERSIONFILE_SOURCE)s" cfg.verbose = False return cfg class NotThisMethod(Exception): """Exception raised if a method is not valid for the current scenario.""" LONG_VERSION_PY = {} HANDLERS = {} def register_vcs_handler(vcs, method): # decorator """Decorator to mark a method as the handler for a particular VCS.""" def decorate(f): """Store f in HANDLERS[vcs][method].""" if vcs not in HANDLERS: HANDLERS[vcs] = {} HANDLERS[vcs][method] = f return f return decorate def run_command(commands, args, cwd=None, verbose=False, hide_stderr=False, env=None): """Call the given command(s).""" assert isinstance(commands, list) p = None for c in commands: try: dispcmd = str([c] + args) # remember shell=False, so use git.cmd on windows, not just git p = subprocess.Popen([c] + args, cwd=cwd, env=env, stdout=subprocess.PIPE, stderr=(subprocess.PIPE if hide_stderr else None)) break except EnvironmentError: e = sys.exc_info()[1] if e.errno == errno.ENOENT: continue if verbose: print("unable to run %%s" %% dispcmd) print(e) return None, None else: if verbose: print("unable to find command, tried %%s" %% (commands,)) return None, None stdout = p.communicate()[0].strip() if sys.version_info[0] >= 3: stdout = stdout.decode() if p.returncode != 0: if verbose: print("unable to run %%s (error)" %% dispcmd) print("stdout was %%s" %% stdout) return None, p.returncode return stdout, p.returncode def versions_from_parentdir(parentdir_prefix, root, verbose): """Try to determine the version from the parent directory name. Source tarballs conventionally unpack into a directory that includes both the project name and a version string. We will also support searching up two directory levels for an appropriately named parent directory """ rootdirs = [] for i in range(3): dirname = os.path.basename(root) if dirname.startswith(parentdir_prefix): return {"version": dirname[len(parentdir_prefix):], "full-revisionid": None, "dirty": False, "error": None, "date": None} else: rootdirs.append(root) root = os.path.dirname(root) # up a level if verbose: print("Tried directories %%s but none started with prefix %%s" %% (str(rootdirs), parentdir_prefix)) raise NotThisMethod("rootdir doesn't start with parentdir_prefix") @register_vcs_handler("git", "get_keywords") def git_get_keywords(versionfile_abs): """Extract version information from the given file.""" # the code embedded in _version.py can just fetch the value of these # keywords. When used from setup.py, we don't want to import _version.py, # so we do it with a regexp instead. This function is not used from # _version.py. 
keywords = {} try: f = open(versionfile_abs, "r") for line in f.readlines(): if line.strip().startswith("git_refnames ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["refnames"] = mo.group(1) if line.strip().startswith("git_full ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["full"] = mo.group(1) if line.strip().startswith("git_date ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["date"] = mo.group(1) f.close() except EnvironmentError: pass return keywords @register_vcs_handler("git", "keywords") def git_versions_from_keywords(keywords, tag_prefix, verbose): """Get version information from git keywords.""" if not keywords: raise NotThisMethod("no keywords at all, weird") date = keywords.get("date") if date is not None: # git-2.2.0 added "%%cI", which expands to an ISO-8601 -compliant # datestamp. However we prefer "%%ci" (which expands to an "ISO-8601 # -like" string, which we must then edit to make compliant), because # it's been around since git-1.5.3, and it's too difficult to # discover which version we're using, or to work around using an # older one. date = date.strip().replace(" ", "T", 1).replace(" ", "", 1) refnames = keywords["refnames"].strip() if refnames.startswith("$Format"): if verbose: print("keywords are unexpanded, not using") raise NotThisMethod("unexpanded keywords, not a git-archive tarball") refs = set([r.strip() for r in refnames.strip("()").split(",")]) # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of # just "foo-1.0". If we see a "tag: " prefix, prefer those. TAG = "tag: " tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)]) if not tags: # Either we're using git < 1.8.3, or there really are no tags. We use # a heuristic: assume all version tags have a digit. The old git %%d # expansion behaves like git log --decorate=short and strips out the # refs/heads/ and refs/tags/ prefixes that would let us distinguish # between branches and tags. By ignoring refnames without digits, we # filter out many common branch names like "release" and # "stabilization", as well as "HEAD" and "master". tags = set([r for r in refs if re.search(r'\d', r)]) if verbose: print("discarding '%%s', no digits" %% ",".join(refs - tags)) if verbose: print("likely tags: %%s" %% ",".join(sorted(tags))) for ref in sorted(tags): # sorting will prefer e.g. "2.0" over "2.0rc1" if ref.startswith(tag_prefix): r = ref[len(tag_prefix):] if verbose: print("picking %%s" %% r) return {"version": r, "full-revisionid": keywords["full"].strip(), "dirty": False, "error": None, "date": date} # no suitable tags, so version is "0+unknown", but full hex is still there if verbose: print("no suitable tags, using unknown + full revision id") return {"version": "0+unknown", "full-revisionid": keywords["full"].strip(), "dirty": False, "error": "no suitable tags", "date": None} @register_vcs_handler("git", "pieces_from_vcs") def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command): """Get version from 'git describe' in the root of the source tree. This only gets called if the git-archive 'subst' keywords were *not* expanded, and _version.py hasn't already been rewritten with a short version string, meaning we're inside a checked out source tree. 
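    As an illustrative example: with tag_prefix "", a describe output of
    "1.2-3-g0123abc-dirty" yields closest-tag "1.2", distance 3, short
    "0123abc", and dirty True.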
""" GITS = ["git"] if sys.platform == "win32": GITS = ["git.cmd", "git.exe"] out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root, hide_stderr=True) if rc != 0: if verbose: print("Directory %%s not under git control" %% root) raise NotThisMethod("'git rev-parse --git-dir' returned error") # if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty] # if there isn't one, this yields HEX[-dirty] (no NUM) describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty", "--always", "--long", "--match", "%%s*" %% tag_prefix], cwd=root) # --long was added in git-1.5.5 if describe_out is None: raise NotThisMethod("'git describe' failed") describe_out = describe_out.strip() full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root) if full_out is None: raise NotThisMethod("'git rev-parse' failed") full_out = full_out.strip() pieces = {} pieces["long"] = full_out pieces["short"] = full_out[:7] # maybe improved later pieces["error"] = None # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty] # TAG might have hyphens. git_describe = describe_out # look for -dirty suffix dirty = git_describe.endswith("-dirty") pieces["dirty"] = dirty if dirty: git_describe = git_describe[:git_describe.rindex("-dirty")] # now we have TAG-NUM-gHEX or HEX if "-" in git_describe: # TAG-NUM-gHEX mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe) if not mo: # unparseable. Maybe git-describe is misbehaving? pieces["error"] = ("unable to parse git-describe output: '%%s'" %% describe_out) return pieces # tag full_tag = mo.group(1) if not full_tag.startswith(tag_prefix): if verbose: fmt = "tag '%%s' doesn't start with prefix '%%s'" print(fmt %% (full_tag, tag_prefix)) pieces["error"] = ("tag '%%s' doesn't start with prefix '%%s'" %% (full_tag, tag_prefix)) return pieces pieces["closest-tag"] = full_tag[len(tag_prefix):] # distance: number of commits since tag pieces["distance"] = int(mo.group(2)) # commit: short hex revision ID pieces["short"] = mo.group(3) else: # HEX: no tags pieces["closest-tag"] = None count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"], cwd=root) pieces["distance"] = int(count_out) # total number of commits # commit date: see ISO-8601 comment in git_versions_from_keywords() date = run_command(GITS, ["show", "-s", "--format=%%ci", "HEAD"], cwd=root)[0].strip() pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1) return pieces def plus_or_dot(pieces): """Return a + if we don't already have one, else return a .""" if "+" in pieces.get("closest-tag", ""): return "." return "+" def render_pep440(pieces): """Build up version string, with post-release "local version identifier". Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty Exceptions: 1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty] """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += plus_or_dot(pieces) rendered += "%%d.g%%s" %% (pieces["distance"], pieces["short"]) if pieces["dirty"]: rendered += ".dirty" else: # exception #1 rendered = "0+untagged.%%d.g%%s" %% (pieces["distance"], pieces["short"]) if pieces["dirty"]: rendered += ".dirty" return rendered def render_pep440_pre(pieces): """TAG[.post.devDISTANCE] -- No -dirty. Exceptions: 1: no tags. 
0.post.devDISTANCE """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"]: rendered += ".post.dev%%d" %% pieces["distance"] else: # exception #1 rendered = "0.post.dev%%d" %% pieces["distance"] return rendered def render_pep440_post(pieces): """TAG[.postDISTANCE[.dev0]+gHEX] . The ".dev0" means dirty. Note that .dev0 sorts backwards (a dirty tree will appear "older" than the corresponding clean one), but you shouldn't be releasing software with -dirty anyways. Exceptions: 1: no tags. 0.postDISTANCE[.dev0] """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += ".post%%d" %% pieces["distance"] if pieces["dirty"]: rendered += ".dev0" rendered += plus_or_dot(pieces) rendered += "g%%s" %% pieces["short"] else: # exception #1 rendered = "0.post%%d" %% pieces["distance"] if pieces["dirty"]: rendered += ".dev0" rendered += "+g%%s" %% pieces["short"] return rendered def render_pep440_old(pieces): """TAG[.postDISTANCE[.dev0]] . The ".dev0" means dirty. Eexceptions: 1: no tags. 0.postDISTANCE[.dev0] """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"] or pieces["dirty"]: rendered += ".post%%d" %% pieces["distance"] if pieces["dirty"]: rendered += ".dev0" else: # exception #1 rendered = "0.post%%d" %% pieces["distance"] if pieces["dirty"]: rendered += ".dev0" return rendered def render_git_describe(pieces): """TAG[-DISTANCE-gHEX][-dirty]. Like 'git describe --tags --dirty --always'. Exceptions: 1: no tags. HEX[-dirty] (note: no 'g' prefix) """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] if pieces["distance"]: rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"]) else: # exception #1 rendered = pieces["short"] if pieces["dirty"]: rendered += "-dirty" return rendered def render_git_describe_long(pieces): """TAG-DISTANCE-gHEX[-dirty]. Like 'git describe --tags --dirty --always -long'. The distance/hash is unconditional. Exceptions: 1: no tags. HEX[-dirty] (note: no 'g' prefix) """ if pieces["closest-tag"]: rendered = pieces["closest-tag"] rendered += "-%%d-g%%s" %% (pieces["distance"], pieces["short"]) else: # exception #1 rendered = pieces["short"] if pieces["dirty"]: rendered += "-dirty" return rendered def render(pieces, style): """Render the given version pieces into the requested style.""" if pieces["error"]: return {"version": "unknown", "full-revisionid": pieces.get("long"), "dirty": None, "error": pieces["error"], "date": None} if not style or style == "default": style = "pep440" # the default if style == "pep440": rendered = render_pep440(pieces) elif style == "pep440-pre": rendered = render_pep440_pre(pieces) elif style == "pep440-post": rendered = render_pep440_post(pieces) elif style == "pep440-old": rendered = render_pep440_old(pieces) elif style == "git-describe": rendered = render_git_describe(pieces) elif style == "git-describe-long": rendered = render_git_describe_long(pieces) else: raise ValueError("unknown style '%%s'" %% style) return {"version": rendered, "full-revisionid": pieces["long"], "dirty": pieces["dirty"], "error": None, "date": pieces.get("date")} def get_versions(): """Get version information or return default if unable to do so.""" # I am in _version.py, which lives at ROOT/VERSIONFILE_SOURCE. If we have # __file__, we can work backwards from there to the root. Some # py2exe/bbfreeze/non-CPython implementations don't do __file__, in which # case we can only use expanded keywords. 
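# The fallback order below is: expanded keywords first, then "git
# describe" against a checked-out tree, then the parent-directory name,
# and finally a "0+unknown" placeholder.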
cfg = get_config() verbose = cfg.verbose try: return git_versions_from_keywords(get_keywords(), cfg.tag_prefix, verbose) except NotThisMethod: pass try: root = os.path.realpath(__file__) # versionfile_source is the relative path from the top of the source # tree (where the .git directory might live) to this file. Invert # this to find the root from __file__. for i in cfg.versionfile_source.split('/'): root = os.path.dirname(root) except NameError: return {"version": "0+unknown", "full-revisionid": None, "dirty": None, "error": "unable to find root of source tree", "date": None} try: pieces = git_pieces_from_vcs(cfg.tag_prefix, root, verbose) return render(pieces, cfg.style) except NotThisMethod: pass try: if cfg.parentdir_prefix: return versions_from_parentdir(cfg.parentdir_prefix, root, verbose) except NotThisMethod: pass return {"version": "0+unknown", "full-revisionid": None, "dirty": None, "error": "unable to compute version", "date": None} ''' @register_vcs_handler("git", "get_keywords") def git_get_keywords(versionfile_abs): """Extract version information from the given file.""" # the code embedded in _version.py can just fetch the value of these # keywords. When used from setup.py, we don't want to import _version.py, # so we do it with a regexp instead. This function is not used from # _version.py. keywords = {} try: f = open(versionfile_abs, "r") for line in f.readlines(): if line.strip().startswith("git_refnames ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["refnames"] = mo.group(1) if line.strip().startswith("git_full ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["full"] = mo.group(1) if line.strip().startswith("git_date ="): mo = re.search(r'=\s*"(.*)"', line) if mo: keywords["date"] = mo.group(1) f.close() except EnvironmentError: pass return keywords @register_vcs_handler("git", "keywords") def git_versions_from_keywords(keywords, tag_prefix, verbose): """Get version information from git keywords.""" if not keywords: raise NotThisMethod("no keywords at all, weird") date = keywords.get("date") if date is not None: # git-2.2.0 added "%cI", which expands to an ISO-8601 -compliant # datestamp. However we prefer "%ci" (which expands to an "ISO-8601 # -like" string, which we must then edit to make compliant), because # it's been around since git-1.5.3, and it's too difficult to # discover which version we're using, or to work around using an # older one. date = date.strip().replace(" ", "T", 1).replace(" ", "", 1) refnames = keywords["refnames"].strip() if refnames.startswith("$Format"): if verbose: print("keywords are unexpanded, not using") raise NotThisMethod("unexpanded keywords, not a git-archive tarball") refs = set([r.strip() for r in refnames.strip("()").split(",")]) # starting in git-1.8.3, tags are listed as "tag: foo-1.0" instead of # just "foo-1.0". If we see a "tag: " prefix, prefer those. TAG = "tag: " tags = set([r[len(TAG):] for r in refs if r.startswith(TAG)]) if not tags: # Either we're using git < 1.8.3, or there really are no tags. We use # a heuristic: assume all version tags have a digit. The old git %d # expansion behaves like git log --decorate=short and strips out the # refs/heads/ and refs/tags/ prefixes that would let us distinguish # between branches and tags. By ignoring refnames without digits, we # filter out many common branch names like "release" and # "stabilization", as well as "HEAD" and "master". 
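# (e.g. refs of {"HEAD", "master", "0.11"} are reduced to {"0.11"})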
tags = set([r for r in refs if re.search(r'\d', r)]) if verbose: print("discarding '%s', no digits" % ",".join(refs - tags)) if verbose: print("likely tags: %s" % ",".join(sorted(tags))) for ref in sorted(tags): # sorting will prefer e.g. "2.0" over "2.0rc1" if ref.startswith(tag_prefix): r = ref[len(tag_prefix):] if verbose: print("picking %s" % r) return {"version": r, "full-revisionid": keywords["full"].strip(), "dirty": False, "error": None, "date": date} # no suitable tags, so version is "0+unknown", but full hex is still there if verbose: print("no suitable tags, using unknown + full revision id") return {"version": "0+unknown", "full-revisionid": keywords["full"].strip(), "dirty": False, "error": "no suitable tags", "date": None} @register_vcs_handler("git", "pieces_from_vcs") def git_pieces_from_vcs(tag_prefix, root, verbose, run_command=run_command): """Get version from 'git describe' in the root of the source tree. This only gets called if the git-archive 'subst' keywords were *not* expanded, and _version.py hasn't already been rewritten with a short version string, meaning we're inside a checked out source tree. """ GITS = ["git"] if sys.platform == "win32": GITS = ["git.cmd", "git.exe"] out, rc = run_command(GITS, ["rev-parse", "--git-dir"], cwd=root, hide_stderr=True) if rc != 0: if verbose: print("Directory %s not under git control" % root) raise NotThisMethod("'git rev-parse --git-dir' returned error") # if there is a tag matching tag_prefix, this yields TAG-NUM-gHEX[-dirty] # if there isn't one, this yields HEX[-dirty] (no NUM) describe_out, rc = run_command(GITS, ["describe", "--tags", "--dirty", "--always", "--long", "--match", "%s*" % tag_prefix], cwd=root) # --long was added in git-1.5.5 if describe_out is None: raise NotThisMethod("'git describe' failed") describe_out = describe_out.strip() full_out, rc = run_command(GITS, ["rev-parse", "HEAD"], cwd=root) if full_out is None: raise NotThisMethod("'git rev-parse' failed") full_out = full_out.strip() pieces = {} pieces["long"] = full_out pieces["short"] = full_out[:7] # maybe improved later pieces["error"] = None # parse describe_out. It will be like TAG-NUM-gHEX[-dirty] or HEX[-dirty] # TAG might have hyphens. git_describe = describe_out # look for -dirty suffix dirty = git_describe.endswith("-dirty") pieces["dirty"] = dirty if dirty: git_describe = git_describe[:git_describe.rindex("-dirty")] # now we have TAG-NUM-gHEX or HEX if "-" in git_describe: # TAG-NUM-gHEX mo = re.search(r'^(.+)-(\d+)-g([0-9a-f]+)$', git_describe) if not mo: # unparseable. Maybe git-describe is misbehaving? 
pieces["error"] = ("unable to parse git-describe output: '%s'" % describe_out) return pieces # tag full_tag = mo.group(1) if not full_tag.startswith(tag_prefix): if verbose: fmt = "tag '%s' doesn't start with prefix '%s'" print(fmt % (full_tag, tag_prefix)) pieces["error"] = ("tag '%s' doesn't start with prefix '%s'" % (full_tag, tag_prefix)) return pieces pieces["closest-tag"] = full_tag[len(tag_prefix):] # distance: number of commits since tag pieces["distance"] = int(mo.group(2)) # commit: short hex revision ID pieces["short"] = mo.group(3) else: # HEX: no tags pieces["closest-tag"] = None count_out, rc = run_command(GITS, ["rev-list", "HEAD", "--count"], cwd=root) pieces["distance"] = int(count_out) # total number of commits # commit date: see ISO-8601 comment in git_versions_from_keywords() date = run_command(GITS, ["show", "-s", "--format=%ci", "HEAD"], cwd=root)[0].strip() pieces["date"] = date.strip().replace(" ", "T", 1).replace(" ", "", 1) return pieces def do_vcs_install(manifest_in, versionfile_source, ipy): """Git-specific installation logic for Versioneer. For Git, this means creating/changing .gitattributes to mark _version.py for export-subst keyword substitution. """ GITS = ["git"] if sys.platform == "win32": GITS = ["git.cmd", "git.exe"] files = [manifest_in, versionfile_source] if ipy: files.append(ipy) try: me = __file__ if me.endswith(".pyc") or me.endswith(".pyo"): me = os.path.splitext(me)[0] + ".py" versioneer_file = os.path.relpath(me) except NameError: versioneer_file = "versioneer.py" files.append(versioneer_file) present = False try: f = open(".gitattributes", "r") for line in f.readlines(): if line.strip().startswith(versionfile_source): if "export-subst" in line.strip().split()[1:]: present = True f.close() except EnvironmentError: pass if not present: f = open(".gitattributes", "a+") f.write("%s export-subst\n" % versionfile_source) f.close() files.append(".gitattributes") run_command(GITS, ["add", "--"] + files) def versions_from_parentdir(parentdir_prefix, root, verbose): """Try to determine the version from the parent directory name. Source tarballs conventionally unpack into a directory that includes both the project name and a version string. We will also support searching up two directory levels for an appropriately named parent directory """ rootdirs = [] for i in range(3): dirname = os.path.basename(root) if dirname.startswith(parentdir_prefix): return {"version": dirname[len(parentdir_prefix):], "full-revisionid": None, "dirty": False, "error": None, "date": None} else: rootdirs.append(root) root = os.path.dirname(root) # up a level if verbose: print("Tried directories %s but none started with prefix %s" % (str(rootdirs), parentdir_prefix)) raise NotThisMethod("rootdir doesn't start with parentdir_prefix") SHORT_VERSION_PY = """ # This file was generated by 'versioneer.py' (0.18) from # revision-control system data, or from the parent directory name of an # unpacked source archive. Distribution tarballs contain a pre-generated copy # of this file. 
import json

version_json = '''
%s
'''  # END VERSION_JSON


def get_versions():
    return json.loads(version_json)
"""


def versions_from_file(filename):
    """Try to determine the version from _version.py if present."""
    try:
        with open(filename) as f:
            contents = f.read()
    except EnvironmentError:
        raise NotThisMethod("unable to read _version.py")
    mo = re.search(r"version_json = '''\n(.*)'''  # END VERSION_JSON",
                   contents, re.M | re.S)
    if not mo:
        mo = re.search(r"version_json = '''\r\n(.*)'''  # END VERSION_JSON",
                       contents, re.M | re.S)
    if not mo:
        raise NotThisMethod("no version_json in _version.py")
    return json.loads(mo.group(1))


def write_to_version_file(filename, versions):
    """Write the given version number to the given _version.py file."""
    os.unlink(filename)
    contents = json.dumps(versions, sort_keys=True,
                          indent=1, separators=(",", ": "))
    with open(filename, "w") as f:
        f.write(SHORT_VERSION_PY % contents)

    print("set %s to '%s'" % (filename, versions["version"]))


def plus_or_dot(pieces):
    """Return a + if we don't already have one, else return a ."""
    if "+" in pieces.get("closest-tag", ""):
        return "."
    return "+"


def render_pep440(pieces):
    """Build up version string, with post-release "local version identifier".

    Our goal: TAG[+DISTANCE.gHEX[.dirty]] . Note that if you
    get a tagged build and then dirty it, you'll get TAG+0.gHEX.dirty

    Exceptions:
    1: no tags. git_describe was just HEX. 0+untagged.DISTANCE.gHEX[.dirty]
    """
    if pieces["closest-tag"]:
        rendered = pieces["closest-tag"]
        if pieces["distance"] or pieces["dirty"]:
            rendered += plus_or_dot(pieces)
            rendered += "%d.g%s" % (pieces["distance"], pieces["short"])
            if pieces["dirty"]:
                rendered += ".dirty"
    else:
        # exception #1
        rendered = "0+untagged.%d.g%s" % (pieces["distance"],
                                          pieces["short"])
        if pieces["dirty"]:
            rendered += ".dirty"
    return rendered


def render_pep440_pre(pieces):
    """TAG[.post.devDISTANCE] -- No -dirty.

    Exceptions:
    1: no tags. 0.post.devDISTANCE
    """
    if pieces["closest-tag"]:
        rendered = pieces["closest-tag"]
        if pieces["distance"]:
            rendered += ".post.dev%d" % pieces["distance"]
    else:
        # exception #1
        rendered = "0.post.dev%d" % pieces["distance"]
    return rendered


def render_pep440_post(pieces):
    """TAG[.postDISTANCE[.dev0]+gHEX] .

    The ".dev0" means dirty. Note that .dev0 sorts backwards
    (a dirty tree will appear "older" than the corresponding clean one),
    but you shouldn't be releasing software with -dirty anyways.

    Exceptions:
    1: no tags. 0.postDISTANCE[.dev0]
    """
    if pieces["closest-tag"]:
        rendered = pieces["closest-tag"]
        if pieces["distance"] or pieces["dirty"]:
            rendered += ".post%d" % pieces["distance"]
            if pieces["dirty"]:
                rendered += ".dev0"
            rendered += plus_or_dot(pieces)
            rendered += "g%s" % pieces["short"]
    else:
        # exception #1
        rendered = "0.post%d" % pieces["distance"]
        if pieces["dirty"]:
            rendered += ".dev0"
        rendered += "+g%s" % pieces["short"]
    return rendered


def render_pep440_old(pieces):
    """TAG[.postDISTANCE[.dev0]] .

    The ".dev0" means dirty.

    Exceptions:
    1: no tags. 0.postDISTANCE[.dev0]
    """
    if pieces["closest-tag"]:
        rendered = pieces["closest-tag"]
        if pieces["distance"] or pieces["dirty"]:
            rendered += ".post%d" % pieces["distance"]
            if pieces["dirty"]:
                rendered += ".dev0"
    else:
        # exception #1
        rendered = "0.post%d" % pieces["distance"]
        if pieces["dirty"]:
            rendered += ".dev0"
    return rendered


def render_git_describe(pieces):
    """TAG[-DISTANCE-gHEX][-dirty].

    Like 'git describe --tags --dirty --always'.

    Exceptions:
    1: no tags.
    HEX[-dirty] (note: no 'g' prefix)
    """
    if pieces["closest-tag"]:
        rendered = pieces["closest-tag"]
        if pieces["distance"]:
            rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
    else:
        # exception #1
        rendered = pieces["short"]
    if pieces["dirty"]:
        rendered += "-dirty"
    return rendered


def render_git_describe_long(pieces):
    """TAG-DISTANCE-gHEX[-dirty].

    Like 'git describe --tags --dirty --always --long'.
    The distance/hash is unconditional.

    Exceptions:
    1: no tags. HEX[-dirty] (note: no 'g' prefix)
    """
    if pieces["closest-tag"]:
        rendered = pieces["closest-tag"]
        rendered += "-%d-g%s" % (pieces["distance"], pieces["short"])
    else:
        # exception #1
        rendered = pieces["short"]
    if pieces["dirty"]:
        rendered += "-dirty"
    return rendered


def render(pieces, style):
    """Render the given version pieces into the requested style."""
    if pieces["error"]:
        return {"version": "unknown",
                "full-revisionid": pieces.get("long"),
                "dirty": None,
                "error": pieces["error"],
                "date": None}

    if not style or style == "default":
        style = "pep440"  # the default

    if style == "pep440":
        rendered = render_pep440(pieces)
    elif style == "pep440-pre":
        rendered = render_pep440_pre(pieces)
    elif style == "pep440-post":
        rendered = render_pep440_post(pieces)
    elif style == "pep440-old":
        rendered = render_pep440_old(pieces)
    elif style == "git-describe":
        rendered = render_git_describe(pieces)
    elif style == "git-describe-long":
        rendered = render_git_describe_long(pieces)
    else:
        raise ValueError("unknown style '%s'" % style)

    return {"version": rendered, "full-revisionid": pieces["long"],
            "dirty": pieces["dirty"], "error": None,
            "date": pieces.get("date")}


class VersioneerBadRootError(Exception):
    """The project root directory is unknown or missing key files."""


def get_versions(verbose=False):
    """Get the project version from whatever source is available.

    Returns dict with two keys: 'version' and 'full'.
    """
    if "versioneer" in sys.modules:
        # see the discussion in cmdclass.py:get_cmdclass()
        del sys.modules["versioneer"]

    root = get_root()
    cfg = get_config_from_root(root)

    assert cfg.VCS is not None, "please set [versioneer]VCS= in setup.cfg"
    handlers = HANDLERS.get(cfg.VCS)
    assert handlers, "unrecognized VCS '%s'" % cfg.VCS
    verbose = verbose or cfg.verbose
    assert cfg.versionfile_source is not None, \
        "please set versioneer.versionfile_source"
    assert cfg.tag_prefix is not None, "please set versioneer.tag_prefix"

    versionfile_abs = os.path.join(root, cfg.versionfile_source)

    # extract version from first of: _version.py, VCS command (e.g. 'git
    # describe'), parentdir. This is meant to work for developers using a
    # source checkout, for users of a tarball created by 'setup.py sdist',
    # and for users of a tarball/zipball created by 'git archive' or github's
    # download-from-tag feature or the equivalent in other VCSes.
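    # (Concretely, the chain tried below is: expanded git-archive
    # keywords, then a previously-rewritten _version.py, then "git
    # describe" against a live checkout, then the parent-directory
    # name, and finally the "0+unknown" fallback.)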
    get_keywords_f = handlers.get("get_keywords")
    from_keywords_f = handlers.get("keywords")
    if get_keywords_f and from_keywords_f:
        try:
            keywords = get_keywords_f(versionfile_abs)
            ver = from_keywords_f(keywords, cfg.tag_prefix, verbose)
            if verbose:
                print("got version from expanded keyword %s" % ver)
            return ver
        except NotThisMethod:
            pass

    try:
        ver = versions_from_file(versionfile_abs)
        if verbose:
            print("got version from file %s %s" % (versionfile_abs, ver))
        return ver
    except NotThisMethod:
        pass

    from_vcs_f = handlers.get("pieces_from_vcs")
    if from_vcs_f:
        try:
            pieces = from_vcs_f(cfg.tag_prefix, root, verbose)
            ver = render(pieces, cfg.style)
            if verbose:
                print("got version from VCS %s" % ver)
            return ver
        except NotThisMethod:
            pass

    try:
        if cfg.parentdir_prefix:
            ver = versions_from_parentdir(cfg.parentdir_prefix, root, verbose)
            if verbose:
                print("got version from parentdir %s" % ver)
            return ver
    except NotThisMethod:
        pass

    if verbose:
        print("unable to compute version")

    return {"version": "0+unknown", "full-revisionid": None,
            "dirty": None, "error": "unable to compute version",
            "date": None}


def get_version():
    """Get the short version string for this project."""
    return get_versions()["version"]


def get_cmdclass():
    """Get the custom setuptools/distutils subclasses used by Versioneer."""
    if "versioneer" in sys.modules:
        del sys.modules["versioneer"]
        # this fixes the "python setup.py develop" case (also 'install' and
        # 'easy_install .'), in which subdependencies of the main project are
        # built (using setup.py bdist_egg) in the same python process. Assume
        # a main project A and a dependency B, which use different versions
        # of Versioneer. A's setup.py imports A's Versioneer, leaving it in
        # sys.modules by the time B's setup.py is executed, causing B to run
        # with the wrong versioneer. Setuptools wraps the sub-dep builds in a
        # sandbox that restores sys.modules to its pre-build state, so the
        # parent is protected against the child's "import versioneer". By
        # removing ourselves from sys.modules here, before the child build
        # happens, we protect the child from the parent's versioneer too.
        # Also see https://github.com/warner/python-versioneer/issues/52

    cmds = {}

    # we add "version" to both distutils and setuptools
    from distutils.core import Command

    class cmd_version(Command):
        description = "report generated version string"
        user_options = []
        boolean_options = []

        def initialize_options(self):
            pass

        def finalize_options(self):
            pass

        def run(self):
            vers = get_versions(verbose=True)
            print("Version: %s" % vers["version"])
            print(" full-revisionid: %s" % vers.get("full-revisionid"))
            print(" dirty: %s" % vers.get("dirty"))
            print(" date: %s" % vers.get("date"))
            if vers["error"]:
                print(" error: %s" % vers["error"])
    cmds["version"] = cmd_version

    # we override "build_py" in both distutils and setuptools
    #
    # most invocation pathways end up running build_py:
    #  distutils/build -> build_py
    #  distutils/install -> distutils/build ->..
    #  setuptools/bdist_wheel -> distutils/install ->..
    #  setuptools/bdist_egg -> distutils/install_lib -> build_py
    #  setuptools/install -> bdist_egg ->..
    #  setuptools/develop -> ?
    #  pip install:
    #   copies source tree to a tempdir before running egg_info/etc
    #   if .git isn't copied too, 'git describe' will fail
    #   then does setup.py bdist_wheel, or sometimes setup.py install
    #  setup.py egg_info -> ?
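    # The practical consequence, implemented just below: after the stock
    # build_py runs, the _version.py copied into build/ is replaced with
    # a static, pre-rendered copy, so installed code never needs to run
    # git at import time.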
# we override different "build_py" commands for both environments if "setuptools" in sys.modules: from setuptools.command.build_py import build_py as _build_py else: from distutils.command.build_py import build_py as _build_py class cmd_build_py(_build_py): def run(self): root = get_root() cfg = get_config_from_root(root) versions = get_versions() _build_py.run(self) # now locate _version.py in the new build/ directory and replace # it with an updated value if cfg.versionfile_build: target_versionfile = os.path.join(self.build_lib, cfg.versionfile_build) print("UPDATING %s" % target_versionfile) write_to_version_file(target_versionfile, versions) cmds["build_py"] = cmd_build_py if "cx_Freeze" in sys.modules: # cx_freeze enabled? from cx_Freeze.dist import build_exe as _build_exe # nczeczulin reports that py2exe won't like the pep440-style string # as FILEVERSION, but it can be used for PRODUCTVERSION, e.g. # setup(console=[{ # "version": versioneer.get_version().split("+", 1)[0], # FILEVERSION # "product_version": versioneer.get_version(), # ... class cmd_build_exe(_build_exe): def run(self): root = get_root() cfg = get_config_from_root(root) versions = get_versions() target_versionfile = cfg.versionfile_source print("UPDATING %s" % target_versionfile) write_to_version_file(target_versionfile, versions) _build_exe.run(self) os.unlink(target_versionfile) with open(cfg.versionfile_source, "w") as f: LONG = LONG_VERSION_PY[cfg.VCS] f.write(LONG % {"DOLLAR": "$", "STYLE": cfg.style, "TAG_PREFIX": cfg.tag_prefix, "PARENTDIR_PREFIX": cfg.parentdir_prefix, "VERSIONFILE_SOURCE": cfg.versionfile_source, }) cmds["build_exe"] = cmd_build_exe del cmds["build_py"] if 'py2exe' in sys.modules: # py2exe enabled? try: from py2exe.distutils_buildexe import py2exe as _py2exe # py3 except ImportError: from py2exe.build_exe import py2exe as _py2exe # py2 class cmd_py2exe(_py2exe): def run(self): root = get_root() cfg = get_config_from_root(root) versions = get_versions() target_versionfile = cfg.versionfile_source print("UPDATING %s" % target_versionfile) write_to_version_file(target_versionfile, versions) _py2exe.run(self) os.unlink(target_versionfile) with open(cfg.versionfile_source, "w") as f: LONG = LONG_VERSION_PY[cfg.VCS] f.write(LONG % {"DOLLAR": "$", "STYLE": cfg.style, "TAG_PREFIX": cfg.tag_prefix, "PARENTDIR_PREFIX": cfg.parentdir_prefix, "VERSIONFILE_SOURCE": cfg.versionfile_source, }) cmds["py2exe"] = cmd_py2exe # we override different "sdist" commands for both environments if "setuptools" in sys.modules: from setuptools.command.sdist import sdist as _sdist else: from distutils.command.sdist import sdist as _sdist class cmd_sdist(_sdist): def run(self): versions = get_versions() self._versioneer_generated_versions = versions # unless we update this, the command will keep using the old # version self.distribution.metadata.version = versions["version"] return _sdist.run(self) def make_release_tree(self, base_dir, files): root = get_root() cfg = get_config_from_root(root) _sdist.make_release_tree(self, base_dir, files) # now locate _version.py in the new base_dir directory # (remembering that it may be a hardlink) and replace it with an # updated value target_versionfile = os.path.join(base_dir, cfg.versionfile_source) print("UPDATING %s" % target_versionfile) write_to_version_file(target_versionfile, self._versioneer_generated_versions) cmds["sdist"] = cmd_sdist return cmds CONFIG_ERROR = """ setup.cfg is missing the necessary Versioneer configuration. 
You need a section like: [versioneer] VCS = git style = pep440 versionfile_source = src/myproject/_version.py versionfile_build = myproject/_version.py tag_prefix = parentdir_prefix = myproject- You will also need to edit your setup.py to use the results: import versioneer setup(version=versioneer.get_version(), cmdclass=versioneer.get_cmdclass(), ...) Please read the docstring in ./versioneer.py for configuration instructions, edit setup.cfg, and re-run the installer or 'python versioneer.py setup'. """ SAMPLE_CONFIG = """ # See the docstring in versioneer.py for instructions. Note that you must # re-run 'versioneer.py setup' after changing this section, and commit the # resulting files. [versioneer] #VCS = git #style = pep440 #versionfile_source = #versionfile_build = #tag_prefix = #parentdir_prefix = """ INIT_PY_SNIPPET = """ from ._version import get_versions __version__ = get_versions()['version'] del get_versions """ def do_setup(): """Main VCS-independent setup function for installing Versioneer.""" root = get_root() try: cfg = get_config_from_root(root) except (EnvironmentError, configparser.NoSectionError, configparser.NoOptionError) as e: if isinstance(e, (EnvironmentError, configparser.NoSectionError)): print("Adding sample versioneer config to setup.cfg", file=sys.stderr) with open(os.path.join(root, "setup.cfg"), "a") as f: f.write(SAMPLE_CONFIG) print(CONFIG_ERROR, file=sys.stderr) return 1 print(" creating %s" % cfg.versionfile_source) with open(cfg.versionfile_source, "w") as f: LONG = LONG_VERSION_PY[cfg.VCS] f.write(LONG % {"DOLLAR": "$", "STYLE": cfg.style, "TAG_PREFIX": cfg.tag_prefix, "PARENTDIR_PREFIX": cfg.parentdir_prefix, "VERSIONFILE_SOURCE": cfg.versionfile_source, }) ipy = os.path.join(os.path.dirname(cfg.versionfile_source), "__init__.py") if os.path.exists(ipy): try: with open(ipy, "r") as f: old = f.read() except EnvironmentError: old = "" if INIT_PY_SNIPPET not in old: print(" appending to %s" % ipy) with open(ipy, "a") as f: f.write(INIT_PY_SNIPPET) else: print(" %s unmodified" % ipy) else: print(" %s doesn't exist, ok" % ipy) ipy = None # Make sure both the top-level "versioneer.py" and versionfile_source # (PKG/_version.py, used by runtime code) are in MANIFEST.in, so # they'll be copied into source distributions. Pip won't be able to # install the package without this. manifest_in = os.path.join(root, "MANIFEST.in") simple_includes = set() try: with open(manifest_in, "r") as f: for line in f: if line.startswith("include "): for include in line.split()[1:]: simple_includes.add(include) except EnvironmentError: pass # That doesn't cover everything MANIFEST.in can do # (http://docs.python.org/2/distutils/sourcedist.html#commands), so # it might give some false negatives. Appending redundant 'include' # lines is safe, though. if "versioneer.py" not in simple_includes: print(" appending 'versioneer.py' to MANIFEST.in") with open(manifest_in, "a") as f: f.write("include versioneer.py\n") else: print(" 'versioneer.py' already in MANIFEST.in") if cfg.versionfile_source not in simple_includes: print(" appending versionfile_source ('%s') to MANIFEST.in" % cfg.versionfile_source) with open(manifest_in, "a") as f: f.write("include %s\n" % cfg.versionfile_source) else: print(" versionfile_source already in MANIFEST.in") # Make VCS-specific changes. For git, this means creating/changing # .gitattributes to mark _version.py for export-subst keyword # substitution. 
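    # (e.g. with a versionfile_source of "PKG/_version.py", this appends
    # a line like "PKG/_version.py export-subst" to .gitattributes and
    # "git add"s the files it touched)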
do_vcs_install(manifest_in, cfg.versionfile_source, ipy) return 0 def scan_setup_py(): """Validate the contents of setup.py against Versioneer's expectations.""" found = set() setters = False errors = 0 with open("setup.py", "r") as f: for line in f.readlines(): if "import versioneer" in line: found.add("import") if "versioneer.get_cmdclass()" in line: found.add("cmdclass") if "versioneer.get_version()" in line: found.add("get_version") if "versioneer.VCS" in line: setters = True if "versioneer.versionfile_source" in line: setters = True if len(found) != 3: print("") print("Your setup.py appears to be missing some important items") print("(but I might be wrong). Please make sure it has something") print("roughly like the following:") print("") print(" import versioneer") print(" setup( version=versioneer.get_version(),") print(" cmdclass=versioneer.get_cmdclass(), ...)") print("") errors += 1 if setters: print("You should remove lines like 'versioneer.VCS = ' and") print("'versioneer.versionfile_source = ' . This configuration") print("now lives in setup.cfg, and should be removed from setup.py") print("") errors += 1 return errors if __name__ == "__main__": cmd = sys.argv[1] if cmd == "setup": errors = do_setup() errors += scan_setup_py() if errors: sys.exit(1)
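# A typical bootstrap, given a setup.cfg section like the one shown in
# CONFIG_ERROR above:
#   $ python versioneer.py setup
# which writes PKG/_version.py, appends INIT_PY_SNIPPET to PKG/__init__.py,
# ensures both versioneer.py and the version file are listed in
# MANIFEST.in, marks _version.py for export-subst, and then runs
# scan_setup_py() to check setup.py for the required
# versioneer.get_version()/get_cmdclass() calls.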