werkzeug-0.14.1/.appveyor.yml:

environment:
  global:
    TOXENV: "py"
  matrix:
    - PYTHON: "C:\\Python27"
    - PYTHON: "C:\\Python36"

install:
  - "%PYTHON%\\python.exe -m pip install -U pip setuptools wheel tox"

build: false

test_script:
  - "%PYTHON%\\python.exe -m tox"

werkzeug-0.14.1/.coveragerc:

[run]
branch = True
source =
    werkzeug
    tests

[paths]
source =
    werkzeug
    .tox/*/lib/python*/site-packages/werkzeug
    .tox/pypy/site-packages/werkzeug

werkzeug-0.14.1/.gitattributes:

tests/res/chunked.txt binary

werkzeug-0.14.1/.gitignore:

MANIFEST
build
dist
*.egg-info
*.pyc
*.pyo
env
.DS_Store
docs/_build
bench/a
bench/b
.tox
.coverage
.coverage.*
coverage_out
htmlcov
.cache
.xprocess
.hypothesis
test_uwsgi_failed
.idea

werkzeug-0.14.1/.travis.yml:

os: linux
sudo: false
language: python

matrix:
  include:
    - python: 3.6
      env: TOXENV=hypothesis-uwsgi,codecov,stylecheck,docs-html
    - python: 3.5
      env: TOXENV=py,codecov
    - python: 3.4
      env: TOXENV=py,codecov
    - python: 2.7
      env: TOXENV=py,codecov
    - python: pypy
      env: TOXENV=py,codecov
    - python: nightly
      env: TOXENV=py
    - os: osx
      language: generic
      env: TOXENV=py
  allow_failures:
    - os: osx
      language: generic
      env: TOXENV=py
  fast_finish: true

before_install:
  - if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then
      brew update;
      brew install python3 redis memcached;
      virtualenv -p python3 ~/py-env;
      . ~/py-env/bin/activate;
    fi
  # Travis uses an outdated PyPy, this installs a more recent one.
  - if [[ "$TRAVIS_PYTHON_VERSION" == "pypy" ]]; then
      git clone https://github.com/pyenv/pyenv.git ~/.pyenv;
      PYENV_ROOT="$HOME/.pyenv";
      PATH="$PYENV_ROOT/bin:$PATH";
      eval "$(pyenv init -)";
      pyenv install pypy2.7-5.8.0;
      pyenv global pypy2.7-5.8.0;
    fi
  - if [[ "$TRAVIS_PYTHON_VERSION" == "pypy3" ]]; then
      git clone https://github.com/pyenv/pyenv.git ~/.pyenv;
      PYENV_ROOT="$HOME/.pyenv";
      PATH="$PYENV_ROOT/bin:$PATH";
      eval "$(pyenv init -)";
      pyenv install pypy3.5-5.8.0;
      pyenv global pypy3.5-5.8.0;
    fi

install:
  - pip install tox

script:
  - tox

cache:
  - pip

branches:
  only:
    - master
    - /^.*-maintenance$/

notifications:
  email: false
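Both CI configurations above drive the test matrix through tox and pick an environment via the ``TOXENV`` variable. As a rough local equivalent (a sketch only, not part of the repository; it assumes tox is installed and that the environment names shown in the configs are defined in tox.ini)::

    pip install tox
    TOXENV=py tox        # the default environment most CI jobs run
    tox -e docs-html     # one of the extra environments from the Python 3.6 Travis job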
werkzeug-0.14.1/AUTHORS:

Werkzeug is developed and maintained by the Pallets team and community
contributors. It was created by Armin Ronacher. The core maintainers are:

- Armin Ronacher (mitsuhiko)
- Markus Unterwaditzer (untitaker)
- Adrian Mönnich (ThiefMaster)
- David Lord (davidism)

A full list of contributors is available from the git history.
Contributors include:

- Georg Brandl
- Leif K-Brooks
- Thomas Johansson
- Marian Sigler
- Ronny Pfannschmidt
- Noah Slater
- Alec Thomas
- Shannon Behrens
- Christoph Rauch
- Clemens Hermann
- Jason Kirtland
- Ali Afshar
- Christopher Grebs
- Sean Cazzell
- Florent Xicluna
- Kyle Dawkins
- Pedro Algarvio
- Zahari Petkov
- Ludvig Ericson
- Kenneth Reitz
- Daniel Neuhäuser
- Markus Unterwaditzer
- Joe Esposito
- Abhinav Upadhyay
- immerrr
- Cédric Krier
- Phil Jones
- Michael Hunsinger
- Lars Holm Nielsen
- Joël Charles
- Benjamin Dopplinger
- Nils Steinger
- Mark Szymanski
- Andrew Bednar
- Craig Blaszczyk
- Felix König

The SSL parts of the Werkzeug development server are partially taken from
Paste. The same is true for the range support, which comes from WebOb, a
Paste project. The original code is MIT licensed and largely compatible
with the BSD 3-clause license. The following copyrights apply:

- (c) 2005 Ian Bicking and contributors
- (c) 2005 Clark C. Evans

The rename() function from the posixemulation was taken almost unmodified
from the Trac project's utility module. The original code is BSD licensed
with the following copyrights from that module:

- (c) 2003-2009 Edgewall Software
- (c) 2003-2006 Jonas Borgström
- (c) 2006 Matthew Good
- (c) 2005-2006 Christian Boos

werkzeug-0.14.1/CHANGES.rst:

Werkzeug Changelog
==================

Version 0.14.1
--------------

Released on December 31st 2017

- Resolved a regression with status code handling in the integrated
  development server.

Version 0.14
------------

Released on December 31st 2017

- HTTP exceptions are now automatically caught by ``Request.application``.
- Added support for Edge as a browser.
- Added support for platforms that lack ``SpooledTemporaryFile``.
- Added support for ETag handling through ``If-Match``.
- Added support for the ``SameSite`` cookie attribute.
- Added ``werkzeug.wsgi.ProxyMiddleware``.
- Implemented ``has`` for ``NullCache``.
- ``get_multi`` on cache clients now always returns lists.
- Improved the watchdog observer shutdown for the reloader so it does not
  crash on exit on older Python versions.
- Added support for ``filename*`` filename attributes according to
  RFC 2231.
- Resolved an issue where the machine ID for the reloader PIN was not read
  accurately on Windows.
- Added a workaround for syntax errors in init files in the reloader.
- Added support for using the reloader with console scripts on Windows.
- The built-in HTTP server will no longer close a connection in cases
  where no HTTP body is expected (204, 304, HEAD requests etc.).
- The ``EnvironHeaders`` object now skips over empty content type and
  length headers if they are set to falsy values.
- Werkzeug will no longer send the content-length header on 1xx or 204/304
  responses.
- Cookie values are now also permitted to include slashes and equal signs
  without quoting.
- Relaxed the regex for the routing converter arguments.
- If cookies are sent without values they are now assumed to have an empty
  value and the parser accepts this. Previously this could have corrupted
  cookies that followed the value.
- The test ``Client`` and ``EnvironBuilder`` now support mimetypes like
  the request object does.
- Added support for static weights in URL rules.
- Better handle some more complex reloader scenarios where sys.path contained non directory paths. - ``EnvironHeaders`` no longer raises weird errors if non string keys are passed to it. Version 0.13 ------------ Released on December 7th 2017 - **Deprecate support for Python 2.6 and 3.3.** CI tests will not run for these versions, and support will be dropped completely in the next version. (`pallets/meta#24`_) - Raise ``TypeError`` when port is not an integer. (`#1088`_) - Fully deprecate ``werkzeug.script``. Use `Click`_ instead. (`#1090`_) - ``response.age`` is parsed as a ``timedelta``. Previously, it was incorrectly treated as a ``datetime``. The header value is an integer number of seconds, not a date string. (`#414`_) - Fix a bug in ``TypeConversionDict`` where errors are not propagated when using the converter. (`#1102`_) - ``Authorization.qop`` is a string instead of a set, to comply with RFC 2617. (`#984`_) - An exception is raised when an encoded cookie is larger than, by default, 4093 bytes. Browsers may silently ignore cookies larger than this. ``BaseResponse`` has a new attribute ``max_cookie_size`` and ``dump_cookie`` has a new argument ``max_size`` to configure this. (`#780`_, `#1109`_) - Fix a TypeError in ``werkzeug.contrib.lint.GuardedIterator.close``. (`#1116`_) - ``BaseResponse.calculate_content_length`` now correctly works for Unicode responses on Python 3. It first encodes using ``iter_encoded``. (`#705`_) - Secure cookie contrib works with string secret key on Python 3. (`#1205`_) - Shared data middleware accepts a list instead of a dict of static locations to preserve lookup order. (`#1197`_) - HTTP header values without encoding can contain single quotes. (`#1208`_) - The built-in dev server supports receiving requests with chunked transfer encoding. (`#1198`_) .. _Click: https://www.palletsprojects.com/p/click/ .. _pallets/meta#24: https://github.com/pallets/meta/issues/24 .. _#414: https://github.com/pallets/werkzeug/pull/414 .. _#705: https://github.com/pallets/werkzeug/pull/705 .. _#780: https://github.com/pallets/werkzeug/pull/780 .. _#984: https://github.com/pallets/werkzeug/pull/984 .. _#1088: https://github.com/pallets/werkzeug/pull/1088 .. _#1090: https://github.com/pallets/werkzeug/pull/1090 .. _#1102: https://github.com/pallets/werkzeug/pull/1102 .. _#1109: https://github.com/pallets/werkzeug/pull/1109 .. _#1116: https://github.com/pallets/werkzeug/pull/1116 .. _#1197: https://github.com/pallets/werkzeug/pull/1197 .. _#1198: https://github.com/pallets/werkzeug/pull/1198 .. _#1205: https://github.com/pallets/werkzeug/pull/1205 .. _#1208: https://github.com/pallets/werkzeug/pull/1208 Version 0.12.2 -------------- Released on May 16 2017 - Fix regression: Pull request ``#892`` prevented Werkzeug from correctly logging the IP of a remote client behind a reverse proxy, even when using `ProxyFix`. - Fix a bug in `safe_join` on Windows. Version 0.12.1 -------------- Released on March 15th 2017 - Fix crash of reloader (used on debug mode) on Windows. (`OSError: [WinError 10038]`). See pull request ``#1081`` - Partially revert change to class hierarchy of `Headers`. See ``#1084``. Version 0.12 ------------ Released on March 10th 2017 - Spit out big deprecation warnings for werkzeug.script - Use `inspect.getfullargspec` internally when available as `inspect.getargspec` is gone in 3.6 - Added support for status code 451 and 423 - Improved the build error suggestions. In particular only if someone stringifies the error will the suggestions be calculated. 
- Added support for uWSGI's caching backend. - Fix a bug where iterating over a `FileStorage` would result in an infinite loop. - Datastructures now inherit from the relevant baseclasses from the `collections` module in the stdlib. See #794. - Add support for recognizing NetBSD, OpenBSD, FreeBSD, DragonFlyBSD platforms in the user agent string. - Recognize SeaMonkey browser name and version correctly - Recognize Baiduspider, and bingbot user agents - If `LocalProxy`'s wrapped object is a function, refer to it with __wrapped__ attribute. - The defaults of ``generate_password_hash`` have been changed to more secure ones, see pull request ``#753``. - Add support for encoding in options header parsing, see pull request ``#933``. - ``test.Client`` now properly handles Location headers with relative URLs, see pull request ``#879``. - When `HTTPException` is raised, it now prints the description, for easier debugging. - Werkzeug's dict-like datastructures now have ``view``-methods under Python 2, see pull request ``#968``. - Fix a bug in ``MultiPartParser`` when no ``stream_factory`` was provided during initialization, see pull request ``#973``. - Disable autocorrect and spellchecker in the debugger middleware's Python prompt, see pull request ``#994``. - Don't redirect to slash route when method doesn't match, see pull request ``#907``. - Fix a bug when using ``SharedDataMiddleware`` with frozen packages, see pull request ``#959``. - `Range` header parsing function fixed for invalid values ``#974``. - Add support for byte Range Requests, see pull request ``#978``. - Use modern cryptographic defaults in the dev servers ``#1004``. - the post() method of the test client now accept file object through the data parameter. - Color run_simple's terminal output based on HTTP codes ``#1013``. - Fix self-XSS in debugger console, see ``#1031``. - Fix IPython 5.x shell support, see ``#1033``. - Change Accept datastructure to sort by specificity first, allowing for more accurate results when using ``best_match`` for mime types (for example in ``requests.accept_mimetypes.best_match``) Version 0.11.16 --------------- - werkzeug.serving: set CONTENT_TYPE / CONTENT_LENGTH if only they're provided by the client - werkzeug.serving: Fix crash of reloader when using `python -m werkzeug.serving`. Version 0.11.15 --------------- Released on December 30th 2016. - Bugfix for the bugfix in the previous release. Version 0.11.14 --------------- Released on December 30th 2016. - Check if platform can fork before importing ``ForkingMixIn``, raise exception when creating ``ForkingWSGIServer`` on such a platform, see PR ``#999``. Version 0.11.13 --------------- Released on December 26th 2016. - Correct fix for the reloader issuer on certain Windows installations. Version 0.11.12 --------------- Released on December 26th 2016. - Fix more bugs in multidicts regarding empty lists. See ``#1000``. - Add some docstrings to some `EnvironBuilder` properties that were previously unintentionally missing. - Added a workaround for the reloader on windows. Version 0.11.11 --------------- Released on August 31st 2016. - Fix JSONRequestMixin for Python3. See #731 - Fix broken string handling in test client when passing integers. See #852 - Fix a bug in ``parse_options_header`` where an invalid content type starting with comma or semi-colon would result in an invalid return value, see issue ``#995``. - Fix a bug in multidicts when passing empty lists as values, see issue ``#979``. 
- Fix a security issue that allows XSS on the Werkzeug debugger. See ``#1001``. Version 0.11.10 --------------- Released on May 24th 2016. - Fixed a bug that occurs when running on Python 2.6 and using a broken locale. See pull request #912. - Fixed a crash when running the debugger on Google App Engine. See issue #925. - Fixed an issue with multipart parsing that could cause memory exhaustion. Version 0.11.9 -------------- Released on April 24th 2016. - Corrected an issue that caused the debugger not to use the machine GUID on POSIX systems. - Corrected a Unicode error on Python 3 for the debugger's PIN usage. - Corrected the timestamp verification in the pin debug code. Without this fix the pin was remembered for too long. Version 0.11.8 -------------- Released on April 15th 2016. - fixed a problem with the machine GUID detection code on OS X on Python 3. Version 0.11.7 -------------- Released on April 14th 2016. - fixed a regression on Python 3 for the debugger. Version 0.11.6 -------------- Released on April 14th 2016. - werkzeug.serving: Still show the client address on bad requests. - improved the PIN based protection for the debugger to make it harder to brute force via trying cookies. Please keep in mind that the debugger *is not intended for running on production environments* - increased the pin timeout to a week to make it less annoying for people which should decrease the chance that users disable the pin check entirely. - werkzeug.serving: Fix broken HTTP_HOST when path starts with double slash. Version 0.11.5 -------------- Released on March 22nd 2016. - werkzeug.serving: Fix crash when attempting SSL connection to HTTP server. Version 0.11.4 -------------- Released on February 14th 2016. - Fixed werkzeug.serving not working from -m flag. - Fixed incorrect weak etag handling. Version 0.11.3 -------------- Released on December 20th 2015. - Fixed an issue with copy operations not working against proxies. - Changed the logging operations of the development server to correctly log where the server is running in all situations again. - Fixed another regression with SSL wrapping similar to the fix in 0.11.2 but for a different code path. Version 0.11.2 -------------- Released on November 12th 2015. - Fix inheritable sockets on Windows on Python 3. - Fixed an issue with the forking server not starting any longer. - Fixed SSL wrapping on platforms that supported opening sockets by file descriptor. - No longer log from the watchdog reloader. - Unicode errors in hosts are now better caught or converted into bad request errors. Version 0.11.1 -------------- Released on November 10th 2015. - Fixed a regression on Python 3 in the debugger. Version 0.11 ------------ Released on November 8th 2015, codename Gleisbaumaschine. - Added ``reloader_paths`` option to ``run_simple`` and other functions in ``werkzeug.serving``. This allows the user to completely override the Python module watching of Werkzeug with custom paths. - Many custom cached properties of Werkzeug's classes are now subclasses of Python's ``property`` type (issue ``#616``). - ``bind_to_environ`` now doesn't differentiate between implicit and explicit default port numbers in ``HTTP_HOST`` (pull request ``#204``). - ``BuildErrors`` are now more informative. They come with a complete sentence as error message, and also provide suggestions (pull request ``#691``). - Fix a bug in the user agent parser where Safari's build number instead of version would be extracted (pull request ``#703``). 
- Fixed issue where RedisCache set_many was broken for twemproxy, which doesn't support the default MULTI command (pull request ``#702``). - ``mimetype`` parameters on request and response classes are now always converted to lowercase. - Changed cache so that cache never expires if timeout is 0. This also fixes an issue with redis setex (issue ``#550``) - Werkzeug now assumes ``UTF-8`` as filesystem encoding on Unix if Python detected it as ASCII. - New optional `has` method on caches. - Fixed various bugs in `parse_options_header` (pull request ``#643``). - If the reloader is enabled the server will now open the socket in the parent process if this is possible. This means that when the reloader kicks in the connection from client will wait instead of tearing down. This does not work on all Python versions. - Implemented PIN based authentication for the debugger. This can optionally be disabled but is discouraged. This change was necessary as it has been discovered that too many people run the debugger in production. - Devserver no longer requires SSL module to be installed. Version 0.10.5 -------------- (bugfix release, release date yet to be decided) - Reloader: Correctly detect file changes made by moving temporary files over the original, which is e.g. the case with PyCharm (pull request ``#722``). - Fix bool behavior of ``werkzeug.datastructures.ETags`` under Python 3 (issue ``#744``). Version 0.10.4 -------------- (bugfix release, released on March 26th 2015) - Re-release of 0.10.3 with packaging artifacts manually removed. Version 0.10.3 -------------- (bugfix release, released on March 26th 2015) - Re-release of 0.10.2 without packaging artifacts. Version 0.10.2 -------------- (bugfix release, released on March 26th 2015) - Fixed issue where ``empty`` could break third-party libraries that relied on keyword arguments (pull request ``#675``) - Improved ``Rule.empty`` by providing a ```get_empty_kwargs`` to allow setting custom kwargs without having to override entire ``empty`` method. (pull request ``#675``) - Fixed ```extra_files``` parameter for reloader to not cause startup to crash when included in server params - Using `MultiDict` when building URLs is now not supported again. The behavior introduced several regressions. - Fix performance problems with stat-reloader (pull request ``#715``). Version 0.10.1 -------------- (bugfix release, released on February 3rd 2015) - Fixed regression with multiple query values for URLs (pull request ``#667``). - Fix issues with eventlet's monkeypatching and the builtin server (pull request ``#663``). Version 0.10 ------------ Released on January 30th 2015, codename Bagger. - Changed the error handling of and improved testsuite for the caches in ``contrib.cache``. - Fixed a bug on Python 3 when creating adhoc ssl contexts, due to `sys.maxint` not being defined. - Fixed a bug on Python 3, that caused :func:`~werkzeug.serving.make_ssl_devcert` to fail with an exception. - Added exceptions for 504 and 505. - Added support for ChromeOS detection. - Added UUID converter to the routing system. - Added message that explains how to quit the server. - Fixed a bug on Python 2, that caused ``len`` for :class:`werkzeug.datastructures.CombinedMultiDict` to crash. - Added support for stdlib pbkdf2 hmac if a compatible digest is found. - Ported testsuite to use ``py.test``. - Minor optimizations to various middlewares (pull requests ``#496`` and ``#571``). - Use stdlib ``ssl`` module instead of ``OpenSSL`` for the builtin server (issue ``#434``). 
This means that OpenSSL contexts are not supported anymore, but instead ``ssl.SSLContext`` from the stdlib. - Allow protocol-relative URLs when building external URLs. - Fixed Atom syndication to print time zone offset for tz-aware datetime objects (pull request ``#254``). - Improved reloader to track added files and to recover from broken sys.modules setups with syntax errors in packages. - ``cache.RedisCache`` now supports arbitrary ``**kwargs`` for the redis object. - ``werkzeug.test.Client`` now uses the original request method when resolving 307 redirects (pull request ``#556``). - ``werkzeug.datastructures.MIMEAccept`` now properly deals with mimetype parameters (pull request ``#205``). - ``werkzeug.datastructures.Accept`` now handles a quality of ``0`` as intolerable, as per RFC 2616 (pull request ``#536``). - ``werkzeug.urls.url_fix`` now properly encodes hostnames with ``idna`` encoding (issue ``#559``). It also doesn't crash on malformed URLs anymore (issue ``#582``). - ``werkzeug.routing.MapAdapter.match`` now recognizes the difference between the path ``/`` and an empty one (issue ``#360``). - The interactive debugger now tries to decode non-ascii filenames (issue ``#469``). - Increased default key size of generated SSL certificates to 1024 bits (issue ``#611``). - Added support for specifying a ``Response`` subclass to use when calling :func:`~werkzeug.utils.redirect`\ . - ``werkzeug.test.EnvironBuilder`` now doesn't use the request method anymore to guess the content type, and purely relies on the ``form``, ``files`` and ``input_stream`` properties (issue ``#620``). - Added Symbian to the user agent platform list. - Fixed make_conditional to respect automatically_set_content_length - Unset ``Content-Length`` when writing to response.stream (issue ``#451``) - ``wrappers.Request.method`` is now always uppercase, eliminating inconsistencies of the WSGI environment (issue ``647``). - ``routing.Rule.empty`` now works correctly with subclasses of ``Rule`` (pull request ``#645``). - Made map updating safe in light of concurrent updates. - Allow multiple values for the same field for url building (issue ``#658``). Version 0.9.7 ------------- (bugfix release, release date to be decided) - Fix unicode problems in ``werkzeug.debug.tbtools``. - Fix Python 3-compatibility problems in ``werkzeug.posixemulation``. - Backport fix of fatal typo for ``ImmutableList`` (issue ``#492``). - Make creation of the cache dir for ``FileSystemCache`` atomic (issue ``#468``). - Use native strings for memcached keys to work with Python 3 client (issue ``#539``). - Fix charset detection for ``werkzeug.debug.tbtools.Frame`` objects (issues ``#547`` and ``#532``). - Fix ``AttributeError`` masking in ``werkzeug.utils.import_string`` (issue ``#182``). - Explicitly shut down server (issue ``#519``). - Fix timeouts greater than 2592000 being misinterpreted as UNIX timestamps in ``werkzeug.contrib.cache.MemcachedCache`` (issue ``#533``). - Fix bug where ``werkzeug.exceptions.abort`` would raise an arbitrary subclass of the expected class (issue ``#422``). - Fix broken ``jsrouting`` (due to removal of ``werkzeug.templates``) - ``werkzeug.urls.url_fix`` now doesn't crash on malformed URLs anymore, but returns them unmodified. This is a cheap workaround for ``#582``, the proper fix is included in version 0.10. - The repr of ``werkzeug.wrappers.Request`` doesn't crash on non-ASCII-values anymore (pull request ``#466``). 
- Fix bug in ``cache.RedisCache`` when combined with ``redis.StrictRedis`` object (pull request ``#583``). - The ``qop`` parameter for ``WWW-Authenticate`` headers is now always quoted, as required by RFC 2617 (issue ``#633``). - Fix bug in ``werkzeug.contrib.cache.SimpleCache`` with Python 3 where add/set may throw an exception when pruning old entries from the cache (pull request ``#651``). Version 0.9.6 ------------- (bugfix release, released on June 7th 2014) - Added a safe conversion for IRI to URI conversion and use that internally to work around issues with spec violations for protocols such as ``itms-service``. Version 0.9.7 ------------- - Fixed uri_to_iri() not re-encoding hashes in query string parameters. Version 0.9.5 ------------- (bugfix release, released on June 7th 2014) - Forward charset argument from request objects to the environ builder. - Fixed error handling for missing boundaries in multipart data. - Fixed session creation on systems without ``os.urandom()``. - Fixed pluses in dictionary keys not being properly URL encoded. - Fixed a problem with deepcopy not working for multi dicts. - Fixed a double quoting issue on redirects. - Fixed a problem with unicode keys appearing in headers on 2.x. - Fixed a bug with unicode strings in the test builder. - Fixed a unicode bug on Python 3 in the WSGI profiler. - Fixed an issue with the safe string compare function on Python 2.7.7 and Python 3.4. Version 0.9.4 ------------- (bugfix release, released on August 26th 2013) - Fixed an issue with Python 3.3 and an edge case in cookie parsing. - Fixed decoding errors not handled properly through the WSGI decoding dance. - Fixed URI to IRI conversion incorrectly decoding percent signs. Version 0.9.3 ------------- (bugfix release, released on July 25th 2013) - Restored behavior of the ``data`` descriptor of the request class to pre 0.9 behavior. This now also means that ``.data`` and ``.get_data()`` have different behavior. New code should use ``.get_data()`` always. In addition to that there is now a flag for the ``.get_data()`` method that controls what should happen with form data parsing and the form parser will honor cached data. This makes dealing with custom form data more consistent. Version 0.9.2 ------------- (bugfix release, released on July 18th 2013) - Added `unsafe` parameter to :func:`~werkzeug.urls.url_quote`. - Fixed an issue with :func:`~werkzeug.urls.url_quote_plus` not quoting `'+'` correctly. - Ported remaining parts of :class:`~werkzeug.contrib.RedisCache` to Python 3.3. - Ported remaining parts of :class:`~werkzeug.contrib.MemcachedCache` to Python 3.3 - Fixed a deprecation warning in the contrib atom module. - Fixed a regression with setting of content types through the headers dictionary instead with the content type parameter. - Use correct name for stdlib secure string comparison function. - Fixed a wrong reference in the docstring of :func:`~werkzeug.local.release_local`. - Fixed an `AttributeError` that sometimes occurred when accessing the :attr:`werkzeug.wrappers.BaseResponse.is_streamed` attribute. Version 0.9.1 ------------- (bugfix release, released on June 14th 2013) - Fixed an issue with integers no longer being accepted in certain parts of the routing system or URL quoting functions. - Fixed an issue with `url_quote` not producing the right escape codes for single digit codepoints. - Fixed an issue with :class:`~werkzeug.wsgi.SharedDataMiddleware` not reading the path correctly and breaking on etag generation in some cases. 
- Properly handle `Expect: 100-continue` in the development server to resolve issues with curl. - Automatically exhaust the input stream on request close. This should fix issues where not touching request files results in a timeout. - Fixed exhausting of streams not doing anything if a non-limited stream was passed into the multipart parser. - Raised the buffer sizes for the multipart parser. Version 0.9 ----------- Released on June 13nd 2013, codename Planierraupe. - Added support for :meth:`~werkzeug.wsgi.LimitedStream.tell` on the limited stream. - :class:`~werkzeug.datastructures.ETags` now is nonzero if it contains at least one etag of any kind, including weak ones. - Added a workaround for a bug in the stdlib for SSL servers. - Improved SSL interface of the devserver so that it can generate certificates easily and load them from files. - Refactored test client to invoke the open method on the class for redirects. This makes subclassing more powerful. - :func:`werkzeug.wsgi.make_chunk_iter` and :func:`werkzeug.wsgi.make_line_iter` now support processing of iterators and streams. - URL generation by the routing system now no longer quotes ``+``. - URL fixing now no longer quotes certain reserved characters. - The :func:`werkzeug.security.generate_password_hash` and check functions now support any of the hashlib algorithms. - `wsgi.get_current_url` is now ascii safe for browsers sending non-ascii data in query strings. - improved parsing behavior for :func:`werkzeug.http.parse_options_header` - added more operators to local proxies. - added a hook to override the default converter in the routing system. - The description field of HTTP exceptions is now always escaped. Use markup objects to disable that. - Added number of proxy argument to the proxy fix to make it more secure out of the box on common proxy setups. It will by default no longer trust the x-forwarded-for header as much as it did before. - Added support for fragment handling in URI/IRI functions. - Added custom class support for :func:`werkzeug.http.parse_dict_header`. - Renamed `LighttpdCGIRootFix` to `CGIRootFix`. - Always treat `+` as safe when fixing URLs as people love misusing them. - Added support to profiling into directories in the contrib profiler. - The escape function now by default escapes quotes. - Changed repr of exceptions to be less magical. - Simplified exception interface to no longer require environments to be passed to receive the response object. - Added sentinel argument to IterIO objects. - Added pbkdf2 support for the security module. - Added a plain request type that disables all form parsing to only leave the stream behind. - Removed support for deprecated `fix_headers`. - Removed support for deprecated `header_list`. - Removed support for deprecated parameter for `iter_encoded`. - Removed support for deprecated non-silent usage of the limited stream object. - Removed support for previous dummy `writable` parameter on the cached property. - Added support for explicitly closing request objects to close associated resources. - Conditional request handling or access to the data property on responses no longer ignores direct passthrough mode. - Removed werkzeug.templates and werkzeug.contrib.kickstart. - Changed host lookup logic for forwarded hosts to allow lists of hosts in which case only the first one is picked up. - Added `wsgi.get_query_string`, `wsgi.get_path_info` and `wsgi.get_script_name` and made the `wsgi.pop_path_info` and `wsgi.peek_path_info` functions perform unicode decoding. 
This was necessary to avoid having to expose the WSGI encoding dance on Python 3. - Added `content_encoding` and `content_md5` to the request object's common request descriptor mixin. - added `options` and `trace` to the test client. - Overhauled the utilization of the input stream to be easier to use and better to extend. The detection of content payload on the input side is now more compliant with HTTP by detecting off the content type header instead of the request method. This also now means that the stream property on the request class is always available instead of just when the parsing fails. - Added support for using :class:`werkzeug.wrappers.BaseResponse` in a with statement. - Changed `get_app_iter` to fetch the response early so that it does not fail when wrapping a response iterable. This makes filtering easier. - Introduced `get_data` and `set_data` methods for responses. - Introduced `get_data` for requests. - Soft deprecated the `data` descriptors for request and response objects. - Added `as_bytes` operations to some of the headers to simplify working with things like cookies. - Made the debugger paste tracebacks into github's gist service as private pastes. Version 0.8.4 ------------- (bugfix release, release date to be announced) - Added a favicon to the debugger which fixes problem with state changes being triggered through a request to /favicon.ico in Google Chrome. This should fix some problems with Flask and other frameworks that use context local objects on a stack with context preservation on errors. - Fixed an issue with scrolling up in the debugger. - Fixed an issue with debuggers running on a different URL than the URL root. - Fixed a problem with proxies not forwarding some rarely used special methods properly. - Added a workaround to prevent the XSS protection from Chrome breaking the debugger. - Skip redis tests if redis is not running. - Fixed a typo in the multipart parser that caused content-type to not be picked up properly. Version 0.8.3 ------------- (bugfix release, released on February 5th 2012) - Fixed another issue with :func:`werkzeug.wsgi.make_line_iter` where lines longer than the buffer size were not handled properly. - Restore stdout after debug console finished executing so that the debugger can be used on GAE better. - Fixed a bug with the redis cache for int subclasses (affects bool caching). - Fixed an XSS problem with redirect targets coming from untrusted sources. - Redis cache backend now supports password authentication. Version 0.8.2 ------------- (bugfix release, released on December 16th 2011) - Fixed a problem with request handling of the builtin server not responding to socket errors properly. - The routing request redirect exception's code attribute is now used properly. - Fixed a bug with shutdowns on Windows. - Fixed a few unicode issues with non-ascii characters being hardcoded in URL rules. - Fixed two property docstrings being assigned to fdel instead of ``__doc__``. - Fixed an issue where CRLF line endings could be split into two by the line iter function, causing problems with multipart file uploads. Version 0.8.1 ------------- (bugfix release, released on September 30th 2011) - Fixed an issue with the memcache not working properly. - Fixed an issue for Python 2.7.1 and higher that broke copying of multidicts with :func:`copy.copy`. - Changed hashing methodology of immutable ordered multi dicts for a potential problem with alternative Python implementations. 
Version 0.8 ----------- Released on September 29th 2011, codename Lötkolben - Removed data structure specific KeyErrors for a general purpose :exc:`~werkzeug.exceptions.BadRequestKeyError`. - Documented :meth:`werkzeug.wrappers.BaseRequest._load_form_data`. - The routing system now also accepts strings instead of dictionaries for the `query_args` parameter since we're only passing them through for redirects. - Werkzeug now automatically sets the content length immediately when the :attr:`~werkzeug.wrappers.BaseResponse.data` attribute is set for efficiency and simplicity reasons. - The routing system will now normalize server names to lowercase. - The routing system will no longer raise ValueErrors in case the configuration for the server name was incorrect. This should make deployment much easier because you can ignore that factor now. - Fixed a bug with parsing HTTP digest headers. It rejected headers with missing nc and nonce params. - Proxy fix now also updates wsgi.url_scheme based on X-Forwarded-Proto. - Added support for key prefixes to the redis cache. - Added the ability to suppress some auto corrections in the wrappers that are now controlled via `autocorrect_location_header` and `automatically_set_content_length` on the response objects. - Werkzeug now uses a new method to check that the length of incoming data is complete and will raise IO errors by itself if the server fails to do so. - :func:`~werkzeug.wsgi.make_line_iter` now requires a limit that is not higher than the length the stream can provide. - Refactored form parsing into a form parser class that makes it possible to hook into individual parts of the parsing process for debugging and extending. - For conditional responses the content length is no longer set when it is already there and added if missing. - Immutable datastructures are hashable now. - Headers datastructure no longer allows newlines in values to avoid header injection attacks. - Made it possible through subclassing to select a different remote addr in the proxy fix. - Added stream based URL decoding. This reduces memory usage on large transmitted form data that is URL decoded since Werkzeug will no longer load all the unparsed data into memory. - Memcache client now no longer uses the buggy cmemcache module and supports pylibmc. GAE is not tried automatically and the dedicated class is no longer necessary. - Redis cache now properly serializes data. - Removed support for Python 2.4 Version 0.7.2 ------------- (bugfix release, released on September 30th 2011) - Fixed a CSRF problem with the debugger. - The debugger is now generating private pastes on lodgeit. - If URL maps are now bound to environments the query arguments are properly decoded from it for redirects. Version 0.7.1 ------------- (bugfix release, released on July 26th 2011) - Fixed a problem with newer versions of IPython. - Disabled pyinotify based reloader which does not work reliably. Version 0.7 ----------- Released on July 24th 2011, codename Schraubschlüssel - Add support for python-libmemcached to the Werkzeug cache abstraction layer. - Improved :func:`url_decode` and :func:`url_encode` performance. - Fixed an issue where the SharedDataMiddleware could cause an internal server error on weird paths when loading via pkg_resources. - Fixed an URL generation bug that caused URLs to be invalid if a generated component contains a colon. - :func:`werkzeug.import_string` now works with partially set up packages properly. 
- Disabled automatic socket switching for IPv6 on the development server due to problems it caused. - Werkzeug no longer overrides the Date header when creating a conditional HTTP response. - The routing system provides a method to retrieve the matching methods for a given path. - The routing system now accepts a parameter to change the encoding error behaviour. - The local manager can now accept custom ident functions in the constructor that are forwarded to the wrapped local objects. - url_unquote_plus now accepts unicode strings again. - Fixed an issue with the filesystem session support's prune function and concurrent usage. - Fixed a problem with external URL generation discarding the port. - Added support for pylibmc to the Werkzeug cache abstraction layer. - Fixed an issue with the new multipart parser that happened when a linebreak happened to be on the chunk limit. - Cookies are now set properly if ports are in use. A runtime error is raised if one tries to set a cookie for a domain without a dot. - Fixed an issue with Template.from_file not working for file descriptors. - Reloader can now use inotify to track reloads. This requires the pyinotify library to be installed. - Werkzeug debugger can now submit to custom lodgeit installations. - redirect function's status code assertion now allows 201 to be used as redirection code. While it's not a real redirect, it shares enough with redirects for the function to still be useful. - Fixed securecookie for pypy. - Fixed `ValueErrors` being raised on calls to `best_match` on `MIMEAccept` objects when invalid user data was supplied. - Deprecated `werkzeug.contrib.kickstart` and `werkzeug.contrib.testtools` - URL routing now can be passed the URL arguments to keep them for redirects. In the future matching on URL arguments might also be possible. - Header encoding changed from utf-8 to latin1 to support a port to Python 3. Bytestrings passed to the object stay untouched which makes it possible to have utf-8 cookies. This is a part where the Python 3 version will later change in that it will always operate on latin1 values. - Fixed a bug in the form parser that caused the last character to be dropped off if certain values in multipart data are used. - Multipart parser now looks at the part-individual content type header to override the global charset. - Introduced mimetype and mimetype_params attribute for the file storage object. - Changed FileStorage filename fallback logic to skip special filenames that Python uses for marking special files like stdin. - Introduced more HTTP exception classes. - `call_on_close` now can be used as a decorator. - Support for redis as cache backend. - Added `BaseRequest.scheme`. - Support for the RFC 5789 PATCH method. - New custom routing parser and better ordering. - Removed support for `is_behind_proxy`. Use a WSGI middleware instead that rewrites the `REMOTE_ADDR` according to your setup. Also see the :class:`werkzeug.contrib.fixers.ProxyFix` for a drop-in replacement. - Added cookie forging support to the test client. - Added support for host based matching in the routing system. - Switched from the default 'ignore' to the better 'replace' unicode error handling mode. - The builtin server now adds a function named 'werkzeug.server.shutdown' into the WSGI env to initiate a shutdown. This currently only works in Python 2.6 and later. - Headers are now assumed to be latin1 for better compatibility with Python 3 once we have support. - Added :func:`werkzeug.security.safe_join`. 
- Added `accept_json` property analogous to `accept_html` on the :class:`werkzeug.datastructures.MIMEAccept`. - :func:`werkzeug.utils.import_string` now fails with much better error messages that pinpoint to the problem. - Added support for parsing of the `If-Range` header (:func:`werkzeug.http.parse_if_range_header` and :class:`werkzeug.datastructures.IfRange`). - Added support for parsing of the `Range` header (:func:`werkzeug.http.parse_range_header` and :class:`werkzeug.datastructures.Range`). - Added support for parsing of the `Content-Range` header of responses and provided an accessor object for it (:func:`werkzeug.http.parse_content_range_header` and :class:`werkzeug.datastructures.ContentRange`). Version 0.6.2 ------------- (bugfix release, released on April 23th 2010) - renamed the attribute `implicit_seqence_conversion` attribute of the request object to `implicit_sequence_conversion`. Version 0.6.1 ------------- (bugfix release, released on April 13th 2010) - heavily improved local objects. Should pick up standalone greenlet builds now and support proxies to free callables as well. There is also a stacked local now that makes it possible to invoke the same application from within itself by pushing current request/response on top of the stack. - routing build method will also build non-default method rules properly if no method is provided. - added proper IPv6 support for the builtin server. - windows specific filesystem session store fixes. (should now be more stable under high concurrency) - fixed a `NameError` in the session system. - fixed a bug with empty arguments in the werkzeug.script system. - fixed a bug where log lines will be duplicated if an application uses :meth:`logging.basicConfig` (#499) - added secure password hashing and checking functions. - `HEAD` is now implicitly added as method in the routing system if `GET` is present. Not doing that was considered a bug because often code assumed that this is the case and in web servers that do not normalize `HEAD` to `GET` this could break `HEAD` requests. - the script support can start SSL servers now. Version 0.6 ----------- Released on Feb 19th 2010, codename Hammer. - removed pending deprecations - sys.path is now printed from the testapp. - fixed an RFC 2068 incompatibility with cookie value quoting. - the :class:`FileStorage` now gives access to the multipart headers. - `cached_property.writeable` has been deprecated. - :meth:`MapAdapter.match` now accepts a `return_rule` keyword argument that returns the matched `Rule` instead of just the `endpoint` - :meth:`routing.Map.bind_to_environ` raises a more correct error message now if the map was bound to an invalid WSGI environment. - added support for SSL to the builtin development server. - Response objects are no longer modified in place when they are evaluated as WSGI applications. For backwards compatibility the `fix_headers` function is still called in case it was overridden. You should however change your application to use `get_wsgi_headers` if you need header modifications before responses are sent as the backwards compatibility support will go away in future versions. - :func:`append_slash_redirect` no longer requires the QUERY_STRING to be in the WSGI environment. - added :class:`~werkzeug.contrib.wrappers.DynamicCharsetResponseMixin` - added :class:`~werkzeug.contrib.wrappers.DynamicCharsetRequestMixin` - added :attr:`BaseRequest.url_charset` - request and response objects have a default `__repr__` now. - builtin data structures can be pickled now. 
- the form data parser will now look at the filename instead the content type to figure out if it should treat the upload as regular form data or file upload. This fixes a bug with Google Chrome. - improved performance of `make_line_iter` and the multipart parser for binary uploads. - fixed :attr:`~werkzeug.BaseResponse.is_streamed` - fixed a path quoting bug in `EnvironBuilder` that caused PATH_INFO and SCRIPT_NAME to end up in the environ unquoted. - :meth:`werkzeug.BaseResponse.freeze` now sets the content length. - for unknown HTTP methods the request stream is now always limited instead of being empty. This makes it easier to implement DAV and other protocols on top of Werkzeug. - added :meth:`werkzeug.MIMEAccept.best_match` - multi-value test-client posts from a standard dictionary are now supported. Previously you had to use a multi dict. - rule templates properly work with submounts, subdomains and other rule factories now. - deprecated non-silent usage of the :class:`werkzeug.LimitedStream`. - added support for IRI handling to many parts of Werkzeug. - development server properly logs to the werkzeug logger now. - added :func:`werkzeug.extract_path_info` - fixed a querystring quoting bug in :func:`url_fix` - added `fallback_mimetype` to :class:`werkzeug.SharedDataMiddleware`. - deprecated :meth:`BaseResponse.iter_encoded`'s charset parameter. - added :meth:`BaseResponse.make_sequence`, :attr:`BaseResponse.is_sequence` and :meth:`BaseResponse._ensure_sequence`. - added better __repr__ of :class:`werkzeug.Map` - `import_string` accepts unicode strings as well now. - development server doesn't break on double slashes after the host name. - better `__repr__` and `__str__` of :exc:`werkzeug.exceptions.HTTPException` - test client works correctly with multiple cookies now. - the :class:`werkzeug.routing.Map` now has a class attribute with the default converter mapping. This helps subclasses to override the converters without passing them to the constructor. - implemented :class:`OrderedMultiDict` - improved the session support for more efficient session storing on the filesystem. Also added support for listing of sessions currently stored in the filesystem session store. - werkzeug no longer utilizes the Python time module for parsing which means that dates in a broader range can be parsed. - the wrappers have no class attributes that make it possible to swap out the dict and list types it uses. - werkzeug debugger should work on the appengine dev server now. - the URL builder supports dropping of unexpected arguments now. Previously they were always appended to the URL as query string. - profiler now writes to the correct stream. Version 0.5.1 ------------- (bugfix release for 0.5, released on July 9th 2009) - fixed boolean check of :class:`FileStorage` - url routing system properly supports unicode URL rules now. - file upload streams no longer have to provide a truncate() method. - implemented :meth:`BaseRequest._form_parsing_failed`. - fixed #394 - :meth:`ImmutableDict.copy`, :meth:`ImmutableMultiDict.copy` and :meth:`ImmutableTypeConversionDict.copy` return mutable shallow copies. - fixed a bug with the `make_runserver` script action. - :meth:`MultiDict.items` and :meth:`MutiDict.iteritems` now accept an argument to return a pair for each value of each key. - the multipart parser works better with hand-crafted multipart requests now that have extra newlines added. 
This fixes a bug with setuptools uploads not handled properly (#390) - fixed some minor bugs in the atom feed generator. - fixed a bug with client cookie header parsing being case sensitive. - fixed a not-working deprecation warning. - fixed package loading for :class:`SharedDataMiddleware`. - fixed a bug in the secure cookie that made server-side expiration on servers with a local time that was not set to UTC impossible. - fixed console of the interactive debugger. Version 0.5 ----------- Released on April 24th, codename Schlagbohrer. - requires Python 2.4 now - fixed a bug in :class:`~contrib.IterIO` - added :class:`MIMEAccept` and :class:`CharsetAccept` that work like the regular :class:`Accept` but have extra special normalization for mimetypes and charsets and extra convenience methods. - switched the serving system from wsgiref to something homebrew. - the :class:`Client` now supports cookies. - added the :mod:`~werkzeug.contrib.fixers` module with various fixes for webserver bugs and hosting setup side-effects. - added :mod:`werkzeug.contrib.wrappers` - added :func:`is_hop_by_hop_header` - added :func:`is_entity_header` - added :func:`remove_hop_by_hop_headers` - added :func:`pop_path_info` - added :func:`peek_path_info` - added :func:`wrap_file` and :class:`FileWrapper` - moved `LimitedStream` from the contrib package into the regular werkzeug one and changed the default behavior to raise exceptions rather than stopping without warning. The old class will stick in the module until 0.6. - implemented experimental multipart parser that replaces the old CGI hack. - added :func:`dump_options_header` and :func:`parse_options_header` - added :func:`quote_header_value` and :func:`unquote_header_value` - :func:`url_encode` and :func:`url_decode` now accept a separator argument to switch between `&` and `;` as pair separator. The magic switch is no longer in place. - all form data parsing functions as well as the :class:`BaseRequest` object have parameters (or attributes) to limit the number of incoming bytes (either totally or per field). - added :class:`LanguageAccept` - request objects are now enforced to be read only for all collections. - added many new collection classes, refactored collections in general. - test support was refactored, semi-undocumented `werkzeug.test.File` was replaced by :class:`werkzeug.FileStorage`. - :class:`EnvironBuilder` was added and unifies the previous distinct :func:`create_environ`, :class:`Client` and :meth:`BaseRequest.from_values`. They all work the same now which is less confusing. - officially documented imports from the internal modules as undefined behavior. These modules were never exposed as public interfaces. - removed `FileStorage.__len__` which previously made the object falsy for browsers not sending the content length which all browsers do. - :class:`SharedDataMiddleware` uses `wrap_file` now and has a configurable cache timeout. - added :class:`CommonRequestDescriptorsMixin` - added :attr:`CommonResponseDescriptorsMixin.mimetype_params` - added :mod:`werkzeug.contrib.lint` - added `passthrough_errors` to `run_simple`. - added `secure_filename` - added :func:`make_line_iter` - :class:`MultiDict` copies now instead of revealing internal lists to the caller for `getlist` and iteration functions that return lists. - added :attr:`follow_redirect` to the :func:`open` of :class:`Client`. 
- added support for `extra_files` in :func:`~werkzeug.script.make_runserver` Version 0.4.1 ------------- (Bugfix release, released on January 11th 2009) - `werkzeug.contrib.cache.Memcached` accepts now objects that implement the memcache.Client interface as alternative to a list of strings with server addresses. There is also now a `GAEMemcachedCache` that connects to the Google appengine cache. - explicitly convert secret keys to bytestrings now because Python 2.6 no longer does that. - `url_encode` and all interfaces that call it, support ordering of options now which however is disabled by default. - the development server no longer resolves the addresses of clients. - Fixed a typo in `werkzeug.test` that broke `File`. - `Map.bind_to_environ` uses the `Host` header now if available. - Fixed `BaseCache.get_dict` (#345) - `werkzeug.test.Client` can now run the application buffered in which case the application is properly closed automatically. - Fixed `Headers.set` (#354). Caused header duplication before. - Fixed `Headers.pop` (#349). default parameter was not properly handled. - Fixed UnboundLocalError in `create_environ` (#351) - `Headers` is more compatible with wsgiref now. - `Template.render` accepts multidicts now. - dropped support for Python 2.3 Version 0.4 ----------- Released on November 23rd 2008, codename Schraubenzieher. - `Client` supports an empty `data` argument now. - fixed a bug in `Response.application` that made it impossible to use it as method decorator. - the session system should work on appengine now - the secure cookie works properly in load balanced environments with different cpu architectures now. - `CacheControl.no_cache` and `CacheControl.private` behavior changed to reflect the possibilities of the HTTP RFC. Setting these attributes to `None` or `True` now sets the value to "the empty value". More details in the documentation. - fixed `werkzeug.contrib.atom.AtomFeed.__call__`. (#338) - `BaseResponse.make_conditional` now always returns `self`. Previously it didn't for post requests and such. - fixed a bug in boolean attribute handling of `html` and `xhtml`. - added graceful error handling to the debugger pastebin feature. - added a more list like interface to `Headers` (slicing and indexing works now) - fixed a bug with the `__setitem__` method of `Headers` that didn't properly remove all keys on replacing. - added `remove_entity_headers` which removes all entity headers from a list of headers (or a `Headers` object) - the responses now automatically call `remove_entity_headers` if the status code is 304. - fixed a bug with `Href` query parameter handling. Previously the last item of a call to `Href` was not handled properly if it was a dict. - headers now support a `pop` operation to better work with environ properties. Version 0.3.1 ------------- (bugfix release, released on June 24th 2008) - fixed a security problem with `werkzeug.contrib.SecureCookie`. More details available in the `release announcement`_. .. _release announcement: http://lucumr.pocoo.org/cogitations/2008/06/24/werkzeug-031-released/ Version 0.3 ----------- Released on June 14th 2008, codename EUR325CAT6. - added support for redirecting in url routing. 
- added `Authorization` and `AuthorizationMixin` - added `WWWAuthenticate` and `WWWAuthenticateMixin` - added `parse_list_header` - added `parse_dict_header` - added `parse_authorization_header` - added `parse_www_authenticate_header` - added `_get_current_object` method to `LocalProxy` objects - added `parse_form_data` - `MultiDict`, `CombinedMultiDict`, `Headers`, and `EnvironHeaders` raise special key errors now that are subclasses of `BadRequest` so if you don't catch them they give meaningful HTTP responses. - added support for alternative encoding error handling and the new `HTTPUnicodeError` which (if not caught) behaves like a `BadRequest`. - added `BadRequest.wrap`. - added ETag support to the SharedDataMiddleware and added an option to disable caching. - fixed `is_xhr` on the request objects. - fixed error handling of the url adapter's `dispatch` method. (#318) - fixed bug with `SharedDataMiddleware`. - fixed `Accept.values`. - `EnvironHeaders` contain content-type and content-length now - `url_encode` treats lists and tuples in dicts passed to it as multiple values for the same key so that one doesn't have to pass a `MultiDict` to the function. - added `validate_arguments` - added `BaseRequest.application` - improved Python 2.3 support - `run_simple` accepts `use_debugger` and `use_evalex` parameters now, like the `make_runserver` factory function from the script module. - the `environ_property` is now read-only by default - it's now possible to initialize requests as "shallow" requests which causes runtime errors if the request object tries to consume the input stream. Version 0.2 ----------- Released Feb 14th 2008, codename Faustkeil. - Added `AnyConverter` to the routing system. - Added `werkzeug.contrib.securecookie` - Exceptions have a ``get_response()`` method that return a response object - fixed the path ordering bug (#293), thanks Thomas Johansson - `BaseReporterStream` is now part of the werkzeug contrib module. From Werkzeug 0.3 onwards you will have to import it from there. - added `DispatcherMiddleware`. - `RequestRedirect` is now a subclass of `HTTPException` and uses a 301 status code instead of 302. - `url_encode` and `url_decode` can optionally treat keys as unicode strings now, too. - `werkzeug.script` has a different caller format for boolean arguments now. - renamed `lazy_property` to `cached_property`. - added `import_string`. - added is_* properties to request objects. - added `empty()` method to routing rules. - added `werkzeug.contrib.profiler`. - added `extends` to `Headers`. - added `dump_cookie` and `parse_cookie`. - added `as_tuple` to the `Client`. - added `werkzeug.contrib.testtools`. - added `werkzeug.unescape` - added `BaseResponse.freeze` - added `werkzeug.contrib.atom` - the HTTPExceptions accept an argument `description` now which overrides the default description. - the `MapAdapter` has a default for path info now. If you use `bind_to_environ` you don't have to pass the path later. - the wsgiref subclass werkzeug uses for the dev server does not use direct sys.stderr logging any more but a logger called "werkzeug". - implemented `Href`. - implemented `find_modules` - refactored request and response objects into base objects, mixins and full featured subclasses that implement all mixins. - added simple user agent parser - werkzeug's routing raises `MethodNotAllowed` now if it matches a rule but for a different method. - many fixes and small improvements Version 0.1 ----------- Released on Dec 9th 2007, codename Wictorinoxger. 
- Initial release werkzeug-0.14.1/CONTRIBUTING.rst000066400000000000000000000107451322225165500161130ustar00rootroot00000000000000How to contribute to Werkzeug ============================= Thank you for considering contributing to Werkzeug! Support questions ----------------- Please, don't use the issue tracker for this. Use one of the following resources for questions about your own code: - The IRC channel ``#pocoo`` on FreeNode. - The IRC channel ``#python`` on FreeNode for more general questions. - The mailing list flask@python.org for long term discussion or larger issues. - Ask on `Stack Overflow`_. Search with Google first using: ``site:stackoverflow.com werkzeug {search term, exception message, etc.}``. Be sure to include a `minimal, complete, and verifiable example`_. Reporting issues ---------------- - Describe what you expected to happen. - If possible, include a `minimal, complete, and verifiable example`_ to help us identify the issue. This also helps check that the issue is not with your own code. - Describe what actually happened. Include the full traceback if there was an exception. - List your Python and Werkzeug versions. If possible, check if this issue is already fixed in the repository. Submitting patches ------------------ - Include tests if your patch is supposed to solve a bug, and explain clearly under which circumstances the bug happens. Make sure the test fails without your patch. - Follow the `PEP8`_ style guide. First time setup ~~~~~~~~~~~~~~~~ - Download and install the `latest version of git`_. - Configure git with your `username`_ and `email`_:: git config --global user.name 'your name' git config --global user.email 'your email' - Make sure you have a `GitHub account`_. - Fork Werkzeug to your GitHub account by clicking the `Fork`_ button. - `Clone`_ your GitHub fork locally:: git clone https://github.com/{username}/werkzeug cd werkzeug - Add the main repository as a remote to update later:: git remote add pallets https://github.com/pallets/werkzeug git fetch pallets - Create a virtualenv:: python3 -m venv venv . venv/bin/activate # or "venv\Scripts\activate" on Windows - Install Werkzeug in editable mode with development dependencies:: pip install -e ".[dev]" Start coding ~~~~~~~~~~~~ - Create a branch to identify the issue you would like to work on (e.g. ``2287-dry-test-suite``) - Using your favorite editor, make your changes, `committing as you go`_. - Follow the `PEP8`_ style guide. - Include tests that cover any code changes you make. Make sure the test fails without your patch. Run the tests as described below. - Push your commits to GitHub and `create a pull request`_. - Celebrate 🎉 Running the tests ~~~~~~~~~~~~~~~~~ Run the basic test suite with:: pytest This only runs the tests for the current environment. Whether this is relevant depends on which part of Werkzeug you're working on. Travis-CI will run the full suite when you submit your pull request. The full test suite takes a long time to run because it tests multiple combinations of Python and dependencies. You need to have Python 2.7, 3.4, 3.5, 3.6, and PyPy 2.7, as well as Redis and memcached installed to run all of the environments. Then run:: tox Running test coverage ~~~~~~~~~~~~~~~~~~~~~ Generating a report of lines that do not have test coverage can indicate where to start contributing. 
Run ``pytest`` using ``coverage`` and generate a report on the terminal and as an interactive HTML document:: coverage run -m pytest coverage report coverage html # then open htmlcov/index.html Read more about `coverage`_. Running the full test suite with ``tox`` will combine the coverage reports from all runs. .. _Stack Overflow: https://stackoverflow.com/questions/tagged/werkzeug?sort=linked .. _minimal, complete, and verifiable example: https://stackoverflow.com/help/mcve .. _GitHub account: https://github.com/join .. _latest version of git: https://git-scm.com/downloads .. _username: https://help.github.com/articles/setting-your-username-in-git/ .. _email: https://help.github.com/articles/setting-your-email-in-git/ .. _Fork: https://github.com/pallets/werkzeug/pull/2305#fork-destination-box .. _Clone: https://help.github.com/articles/fork-a-repo/#step-2-create-a-local-clone-of-your-fork .. _committing as you go: http://dont-be-afraid-to-commit.readthedocs.io/en/latest/git/commandlinegit.html#commit-your-changes .. _PEP8: https://pep8.org/ .. _create a pull request: https://help.github.com/articles/creating-a-pull-request/ .. _coverage: https://coverage.readthedocs.io werkzeug-0.14.1/LICENSE000066400000000000000000000027761322225165500144640ustar00rootroot00000000000000Copyright © 2007 by the Pallets team. Some rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE AND DOCUMENTATION IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE AND DOCUMENTATION, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. werkzeug-0.14.1/MANIFEST.in000066400000000000000000000003131322225165500151760ustar00rootroot00000000000000include CHANGES.rst LICENSE AUTHORS tox.ini graft werkzeug/debug/shared graft tests graft docs graft artwork graft examples prune docs/_build prune docs/_themes global-exclude *.py[cdo] __pycache__ *.so werkzeug-0.14.1/Makefile000066400000000000000000000013421322225165500151030ustar00rootroot00000000000000documentation: @(cd docs; make html) release: python scripts/make-release.py test: pytest tox-test: tox coverage: @(coverage run --module pytest $(TEST_OPTIONS) $(TESTS)) doctest: @(cd docs; sphinx-build -b doctest . 
_build/doctest) upload-docs: $(MAKE) -C docs html dirhtml latex $(MAKE) -C docs/_build/latex all-pdf cd docs/_build/; mv html werkzeug-docs; zip -r werkzeug-docs.zip werkzeug-docs; mv werkzeug-docs html rsync -a docs/_build/dirhtml/ flow.srv.pocoo.org:/srv/websites/werkzeug.pocoo.org/docs/ rsync -a docs/_build/latex/Werkzeug.pdf flow.srv.pocoo.org:/srv/websites/werkzeug.pocoo.org/docs/ rsync -a docs/_build/werkzeug-docs.zip flow.srv.pocoo.org:/srv/websites/werkzeug.pocoo.org/docs/werkzeug-docs.zip werkzeug-0.14.1/README.rst000066400000000000000000000046221322225165500151360ustar00rootroot00000000000000Werkzeug ======== Werkzeug is a comprehensive `WSGI`_ web application library. It began as a simple collection of various utilities for WSGI applications and has become one of the most advanced WSGI utility libraries. It includes: * An interactive debugger that allows inspecting stack traces and source code in the browser with an interactive interpreter for any frame in the stack. * A full-featured request object with objects to interact with headers, query args, form data, files, and cookies. * A response object that can wrap other WSGI applications and handle streaming data. * A routing system for matching URLs to endpoints and generating URLs for endpoints, with an extensible system for capturing variables from URLs. * HTTP utilities to handle entity tags, cache control, dates, user agents, cookies, files, and more. * A threaded WSGI server for use while developing applications locally. * A test client for simulating HTTP requests during testing without requiring running a server. Werkzeug is Unicode aware and doesn't enforce any dependencies. It is up to the developer to choose a template engine, database adapter, and even how to handle requests. It can be used to build all sorts of end user applications such as blogs, wikis, or bulletin boards. `Flask`_ wraps Werkzeug, using it to handle the details of WSGI while providing more structure and patterns for defining powerful applications. Installing ---------- Install and update using `pip`_: .. code-block:: text pip install -U Werkzeug A Simple Example ---------------- .. code-block:: python from werkzeug.wrappers import Request, Response @Request.application def application(request): return Response('Hello, World!') if __name__ == '__main__': from werkzeug.serving import run_simple run_simple('localhost', 4000, application) Links ----- * Website: https://www.palletsprojects.com/p/werkzeug/ * Releases: https://pypi.org/project/Werkzeug/ * Code: https://github.com/pallets/werkzeug * Issue tracker: https://github.com/pallets/werkzeug/issues * Test status: * Linux, Mac: https://travis-ci.org/pallets/werkzeug * Windows: https://ci.appveyor.com/project/davidism/werkzeug * Test coverage: https://codecov.io/gh/pallets/werkzeug .. _WSGI: https://wsgi.readthedocs.io/en/latest/ .. _Flask: https://www.palletsprojects.com/p/flask/ .. 
_pip: https://pip.pypa.io/en/stable/quickstart/
werkzeug-0.14.1/artwork/
werkzeug-0.14.1/artwork/logo.png
werkzeug-0.14.1/artwork/logo.svg
werkzeug-0.14.1/bench/
werkzeug-0.14.1/bench/wzbench.py
#!/usr/bin/env python
# -*- coding: utf-8 -*-
"""
    wzbench
    ~~~~~~~

    A werkzeug internal benchmark module.  It's used in combination with
    hg bisect to find out how the Werkzeug performance of some internal
    core parts changes over time.

    :copyright: 2014 by the Werkzeug Team, see AUTHORS for more details.
    :license: BSD, see LICENSE for more details.
"""
from __future__ import division
import os
import gc
import sys
import subprocess
from cStringIO import StringIO
from timeit import default_timer as timer
from types import FunctionType

PY2 = sys.version_info[0] == 2

if not PY2:
    xrange = range

# create a new module where we later store all the werkzeug attributes.
wz = type(sys)('werkzeug_nonlazy')
sys.path.insert(0, '')
null_out = open(os.devnull, 'w')

# ±4% are ignored
TOLERANCE = 0.04
MIN_RESOLUTION = 0.002

# we run each test 5 times
TEST_RUNS = 5


def find_hg_tag(path):
    """Returns the current node or tag for the given path."""
    tags = {}
    try:
        client = subprocess.Popen(['hg', 'cat', '-r', 'tip', '.hgtags'],
                                  stdout=subprocess.PIPE, cwd=path)
        for line in client.communicate()[0].splitlines():
            line = line.strip()
            if not line:
                continue
            hash, tag = line.split()
            tags[hash] = tag
    except OSError:
        return

    client = subprocess.Popen(['hg', 'parent', '--template', '#node#'],
                              stdout=subprocess.PIPE, cwd=path)
    tip = client.communicate()[0].strip()

    tag = tags.get(tip)
    if tag is not None:
        return tag
    return tip


def load_werkzeug(path):
    """Load werkzeug."""
    sys.path[0] = path

    # get rid of already imported stuff
    wz.__dict__.clear()
    for key in sys.modules.keys():
        if key.startswith('werkzeug.') or key == 'werkzeug':
            sys.modules.pop(key, None)

    # import werkzeug again.
import werkzeug for key in werkzeug.__all__: setattr(wz, key, getattr(werkzeug, key)) # get the hg tag hg_tag = find_hg_tag(path) # get the real version from the setup file try: f = open(os.path.join(path, 'setup.py')) except IOError: pass else: try: for line in f: line = line.strip() if line.startswith('version='): return line[8:].strip(' \t,')[1:-1], hg_tag finally: f.close() print >> sys.stderr, 'Unknown werkzeug version loaded' sys.exit(2) def median(seq): seq = sorted(seq) if not seq: return 0.0 return seq[len(seq) // 2] def format_func(func): if type(func) is FunctionType: name = func.__name__ else: name = func if name.startswith('time_'): name = name[5:] return name.replace('_', ' ').title() def bench(func): """Times a single function.""" sys.stdout.write('%44s ' % format_func(func)) sys.stdout.flush() # figure out how many times we have to run the function to # get reliable timings. for i in xrange(3, 10): rounds = 1 << i t = timer() for x in xrange(rounds): func() if timer() - t >= 0.2: break # now run the tests without gc TEST_RUNS times and use the median # value of these runs. def _run(): gc.collect() gc.disable() try: t = timer() for x in xrange(rounds): func() return (timer() - t) / rounds * 1000 finally: gc.enable() delta = median(_run() for x in xrange(TEST_RUNS)) sys.stdout.write('%.4f\n' % delta) sys.stdout.flush() return delta def main(): """The main entrypoint.""" from optparse import OptionParser parser = OptionParser(usage='%prog [options]') parser.add_option('--werkzeug-path', '-p', dest='path', default='..', help='the path to the werkzeug package. defaults to cwd') parser.add_option('--compare', '-c', dest='compare', nargs=2, default=False, help='compare two hg nodes of Werkzeug') parser.add_option('--init-compare', dest='init_compare', action='store_true', default=False, help='Initializes the comparison feature') options, args = parser.parse_args() if args: parser.error('Script takes no arguments') if options.compare: compare(*options.compare) elif options.init_compare: init_compare() else: run(options.path) def init_compare(): """Initializes the comparison feature.""" print('Initializing comparison feature') subprocess.Popen(['hg', 'clone', '..', 'a']).wait() subprocess.Popen(['hg', 'clone', '..', 'b']).wait() def compare(node1, node2): """Compares two Werkzeug hg versions.""" if not os.path.isdir('a'): print >> sys.stderr, 'error: comparison feature not initialized' sys.exit(4) print('=' * 80) print('WERKZEUG INTERNAL BENCHMARK -- COMPARE MODE'.center(80)) print('-' * 80) def _error(msg): print >> sys.stderr, 'error:', msg sys.exit(1) def _hg_update(repo, node): hg = lambda *x: subprocess.call(['hg'] + list(x), cwd=repo, stdout=null_out, stderr=null_out) hg('revert', '-a', '--no-backup') client = subprocess.Popen(['hg', 'status', '--unknown', '-n', '-0'], stdout=subprocess.PIPE, cwd=repo) unknown = client.communicate()[0] if unknown: client = subprocess.Popen(['xargs', '-0', 'rm', '-f'], cwd=repo, stdout=null_out, stdin=subprocess.PIPE) client.communicate(unknown) hg('pull', '../..') hg('update', node) if node == 'tip': diff = subprocess.Popen(['hg', 'diff'], cwd='..', stdout=subprocess.PIPE).communicate()[0] if diff: client = subprocess.Popen(['hg', 'import', '--no-commit', '-'], cwd=repo, stdout=null_out, stdin=subprocess.PIPE) client.communicate(diff) _hg_update('a', node1) _hg_update('b', node2) d1 = run('a', no_header=True) d2 = run('b', no_header=True) print('DIRECT COMPARISON'.center(80)) print('-' * 80) for key in sorted(d1): delta = d1[key] - d2[key] if 
abs(1 - d1[key] / d2[key]) < TOLERANCE or \ abs(delta) < MIN_RESOLUTION: delta = '==' else: delta = '%+.4f (%+d%%)' % \ (delta, round(d2[key] / d1[key] * 100 - 100)) print('%36s %.4f %.4f %s' % (format_func(key), d1[key], d2[key], delta)) print('-' * 80) def run(path, no_header=False): path = os.path.abspath(path) wz_version, hg_tag = load_werkzeug(path) result = {} if not no_header: print('=' * 80) print('WERKZEUG INTERNAL BENCHMARK'.center(80)) print('-' * 80) print('Path: %s' % path) print('Version: %s' % wz_version) if hg_tag is not None: print('HG Tag: %s' % hg_tag) print('-' * 80) for key, value in sorted(globals().items()): if key.startswith('time_'): before = globals().get('before_' + key[5:]) if before: before() result[key] = bench(value) after = globals().get('after_' + key[5:]) if after: after() print('-' * 80) return result URL_DECODED_DATA = dict((str(x), str(x)) for x in xrange(100)) URL_ENCODED_DATA = '&'.join('%s=%s' % x for x in URL_DECODED_DATA.items()) MULTIPART_ENCODED_DATA = '\n'.join(( '--foo', 'Content-Disposition: form-data; name=foo', '', 'this is just bar', '--foo', 'Content-Disposition: form-data; name=bar', '', 'blafasel', '--foo', 'Content-Disposition: form-data; name=foo; filename=wzbench.py', 'Content-Type: text/plain', '', open(__file__.rstrip('c')).read(), '--foo--' )) MULTIDICT = None REQUEST = None TEST_ENV = None LOCAL = None LOCAL_MANAGER = None def time_url_decode(): wz.url_decode(URL_ENCODED_DATA) def time_url_encode(): wz.url_encode(URL_DECODED_DATA) def time_parse_form_data_multipart(): # use a hand written env creator so that we don't bench # from_values which is known to be slowish in 0.5.1 and higher. # we don't want to bench two things at once. environ = { 'REQUEST_METHOD': 'POST', 'CONTENT_TYPE': 'multipart/form-data; boundary=foo', 'wsgi.input': StringIO(MULTIPART_ENCODED_DATA), 'CONTENT_LENGTH': str(len(MULTIPART_ENCODED_DATA)) } request = wz.Request(environ) request.form def before_multidict_lookup_hit(): global MULTIDICT MULTIDICT = wz.MultiDict({'foo': 'bar'}) def time_multidict_lookup_hit(): MULTIDICT['foo'] def after_multidict_lookup_hit(): global MULTIDICT MULTIDICT = None def before_multidict_lookup_miss(): global MULTIDICT MULTIDICT = wz.MultiDict() def time_multidict_lookup_miss(): try: MULTIDICT['foo'] except KeyError: pass def after_multidict_lookup_miss(): global MULTIDICT MULTIDICT = None def time_cached_property(): class Foo(object): @wz.cached_property def x(self): return 42 f = Foo() for x in xrange(60): f.x def before_request_form_access(): global REQUEST data = 'foo=bar&blah=blub' REQUEST = wz.Request({ 'CONTENT_LENGTH': str(len(data)), 'wsgi.input': StringIO(data), 'REQUEST_METHOD': 'POST', 'wsgi.version': (1, 0), 'QUERY_STRING': data, 'CONTENT_TYPE': 'application/x-www-form-urlencoded', 'PATH_INFO': '/', 'SCRIPT_NAME': '' }) def time_request_form_access(): for x in xrange(30): REQUEST.path REQUEST.script_root REQUEST.args['foo'] REQUEST.form['foo'] def after_request_form_access(): global REQUEST REQUEST = None def time_request_from_values(): wz.Request.from_values(base_url='http://www.google.com/', query_string='foo=bar&blah=blaz', input_stream=StringIO(MULTIPART_ENCODED_DATA), content_length=len(MULTIPART_ENCODED_DATA), content_type='multipart/form-data; ' 'boundary=foo', method='POST') def before_request_shallow_init(): global TEST_ENV TEST_ENV = wz.create_environ() def time_request_shallow_init(): wz.Request(TEST_ENV, shallow=True) def after_request_shallow_init(): global TEST_ENV TEST_ENV = None def 
time_response_iter_performance(): resp = wz.Response(u'Hällo Wörld ' * 1000, mimetype='text/html') for item in resp({'REQUEST_METHOD': 'GET'}, lambda *s: None): pass def time_response_iter_head_performance(): resp = wz.Response(u'Hällo Wörld ' * 1000, mimetype='text/html') for item in resp({'REQUEST_METHOD': 'HEAD'}, lambda *s: None): pass def before_local_manager_dispatch(): global LOCAL_MANAGER, LOCAL LOCAL = wz.Local() LOCAL_MANAGER = wz.LocalManager([LOCAL]) def time_local_manager_dispatch(): for x in xrange(10): LOCAL.x = 42 for x in xrange(10): LOCAL.x def after_local_manager_dispatch(): global LOCAL_MANAGER, LOCAL LOCAL = LOCAL_MANAGER = None def before_html_builder(): global TABLE TABLE = [['col 1', 'col 2', 'col 3', '4', '5', '6'] for x in range(10)] def time_html_builder(): html_rows = [] for row in TABLE: # noqa html_cols = [wz.html.td(col, class_='col') for col in row] html_rows.append(wz.html.tr(class_='row', *html_cols)) wz.html.table(*html_rows) def after_html_builder(): global TABLE TABLE = None if __name__ == '__main__': os.chdir(os.path.dirname(__file__) or os.path.curdir) try: main() except KeyboardInterrupt: print >> sys.stderr, 'interrupted!' werkzeug-0.14.1/docs/000077500000000000000000000000001322225165500143735ustar00rootroot00000000000000werkzeug-0.14.1/docs/Makefile000066400000000000000000000100761322225165500160370ustar00rootroot00000000000000# Makefile for Sphinx documentation # # You can set these variables from the command line. SPHINXOPTS = SPHINXBUILD = sphinx-build PAPER = BUILDDIR = _build # Internal variables. PAPEROPT_a4 = -D latex_paper_size=a4 PAPEROPT_letter = -D latex_paper_size=letter ALLSPHINXOPTS = -d $(BUILDDIR)/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) . .PHONY: help clean html dirhtml singlehtml pickle json htmlhelp qthelp epub latex changes linkcheck doctest help: @echo "Please use \`make ' where is one of" @echo " html to make standalone HTML files" @echo " dirhtml to make HTML files named index.html in directories" @echo " singlehtml to make a single large HTML file" @echo " pickle to make pickle files" @echo " json to make JSON files" @echo " htmlhelp to make HTML files and a HTML help project" @echo " qthelp to make HTML files and a qthelp project" @echo " devhelp to make HTML files and a Devhelp project" @echo " epub to make an epub" @echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter" @echo " latexpdf to make LaTeX files and run them through pdflatex" @echo " changes to make an overview of all changed/added/deprecated items" @echo " linkcheck to check all external links for integrity" @echo " doctest to run all doctests embedded in the documentation (if enabled)" clean: -rm -rf $(BUILDDIR)/* html: $(SPHINXBUILD) -b html $(ALLSPHINXOPTS) $(BUILDDIR)/html @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/html." dirhtml: $(SPHINXBUILD) -b dirhtml $(ALLSPHINXOPTS) $(BUILDDIR)/dirhtml @echo @echo "Build finished. The HTML pages are in $(BUILDDIR)/dirhtml." singlehtml: $(SPHINXBUILD) -b singlehtml $(ALLSPHINXOPTS) $(BUILDDIR)/singlehtml @echo @echo "Build finished. The HTML page is in $(BUILDDIR)/singlehtml." pickle: $(SPHINXBUILD) -b pickle $(ALLSPHINXOPTS) $(BUILDDIR)/pickle @echo @echo "Build finished; now you can process the pickle files." json: $(SPHINXBUILD) -b json $(ALLSPHINXOPTS) $(BUILDDIR)/json @echo @echo "Build finished; now you can process the JSON files." 
htmlhelp: $(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) $(BUILDDIR)/htmlhelp @echo @echo "Build finished; now you can run HTML Help Workshop with the" \ ".hhp project file in $(BUILDDIR)/htmlhelp." qthelp: $(SPHINXBUILD) -b qthelp $(ALLSPHINXOPTS) $(BUILDDIR)/qthelp @echo @echo "Build finished; now you can run "qcollectiongenerator" with the" \ ".qhcp project file in $(BUILDDIR)/qthelp, like this:" @echo "# qcollectiongenerator $(BUILDDIR)/qthelp/Flask.qhcp" @echo "To view the help file:" @echo "# assistant -collectionFile $(BUILDDIR)/qthelp/Flask.qhc" devhelp: $(SPHINXBUILD) -b devhelp $(ALLSPHINXOPTS) _build/devhelp @echo @echo "Build finished." @echo "To view the help file:" @echo "# mkdir -p $$HOME/.local/share/devhelp/Flask" @echo "# ln -s _build/devhelp $$HOME/.local/share/devhelp/Flask" @echo "# devhelp" epub: $(SPHINXBUILD) -b epub $(ALLSPHINXOPTS) $(BUILDDIR)/epub @echo @echo "Build finished. The epub file is in $(BUILDDIR)/epub." latex: $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) $(BUILDDIR)/latex @echo @echo "Build finished; the LaTeX files are in $(BUILDDIR)/latex." @echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \ "run these through (pdf)latex." latexpdf: latex $(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) _build/latex @echo "Running LaTeX files through pdflatex..." make -C _build/latex all-pdf @echo "pdflatex finished; the PDF files are in _build/latex." changes: $(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) $(BUILDDIR)/changes @echo @echo "The overview file is in $(BUILDDIR)/changes." linkcheck: $(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) $(BUILDDIR)/linkcheck @echo @echo "Link check complete; look for any errors in the above output " \ "or in $(BUILDDIR)/linkcheck/output.txt." doctest: $(SPHINXBUILD) -b doctest $(ALLSPHINXOPTS) $(BUILDDIR)/doctest @echo "Testing of doctests in the sources finished, look at the " \ "results in $(BUILDDIR)/doctest/output.txt." 
werkzeug-0.14.1/docs/_static/
werkzeug-0.14.1/docs/_static/background.png
werkzeug-0.14.1/docs/_static/codebackground.png
werkzeug-0.14.1/docs/_static/contents.png
werkzeug-0.14.1/docs/_static/debug-screenshot.png
|i̓,iHI.'>\ҡR*bPO/2pŶ)'UJblּ7IXo2ýX<Ćo.$!AYyu3i:ܠ/}!rIڱD23x 3:k҃Idb$ӆYNr?3e5I;EO>݃>,Ez { ߆#IyMCB!I.<EDҁvY@$53;C'Fؒc>t&'wt\>t)1j ɤdW!dKbfJlZoRIFU4S:f^$25S+I^dkT礓NAM>xG 4-ۗ Cf|?"۸!sדdf+7$&*ڳYXtwo'=[EMd6=0hHY.KN'*ʊEeoCȐ>"sveӛZֆlB_`H0O}}6Nd<߉/'IbUjVUdld[צw'>jTvE)&q~g.!MӗH1$:OVAS)&[,N pUɬ~@ZG"~ZL"ynp췄LM &#=&#fJGc5}^kv~CwH5ymbG13mn|xߚH٬ZR kOTqI>|#Ҫװ`0CYcG$o;Cb?$(& aPbQ'F Q/ʗ,D8:Zxs+֭/I "\tE#ߚ"$+qrZ_ph.x{v:uN,7Q3MVө 9%.]ŔaP^Z-lЯw!LV3ƚк0I& AXiK?951N$Gd!4P;uFZVAaY}06?.!J^*~ԕͱtzDvUa%*%tQ3rr6dFcl;׸aHSUFj-9%^d]p(o\'D{!BP 0jQ!:44=V((^/@P`D Q9U檳k  :Z*[ 3(i %4RZZ uqa+[yzѿɶbSGD#j*%BKaR λ^²؍r%[c"ul#TjAZ,!DJmMNlczvER&+g_OٔZ($~ibi4iHzw!r1470Y娤ϏP(LVQۊ$~WbpVxyK8eР`4.TU^kVyA?ӵ1C) bN^ b3>hʏ>\Yr5b`<',M; Wxh1sa@5|@cˢrPH{ӆW L \ pUSgrJJ.Cܖ: )3ht|ea Bhh;` _董8qf"C(TIgU47m$:L[(_s+.2* PiNh{؜y&Rce!m()7Y#+##=j>8(|=TP &r{ 8z8N>y}E 3 EMn /̡WQQhP[pTO5@A@AIt!eɗ1Կ iH0a ^VwM >|9u+^ mlJy=a^"SgBg@c  ܴeN\jMG}E-hVً *QP}o.WC0M4kQVeZU[\'K:Q;x>  i;2/R!rS2]x ͏> M.a1ri5_Kk4hH* š?Ì^-uM#LT,⁗[qG4<#=83Ebj\V,T[TZ0}q[,z`'aLG'h27<ϴhIB/&qtOڳ=}~LB }m4 Z`;ga+sacfai ~`DbD*uVmէHq}ekЫܤ~" e}װ <_~ X:w׼+bڷ>ɍU}%0| >{6v ~cCE)r'»K^CVZ|,_s²w_Get f׼ 4IXx{/:ɐw<V@*wZ+ 0T65 aA1iV +ԡ6|j]]7STŕ+YJv0siv3nn k#{Ak7dF:SMO.>P\7YCFYZEM6CTabG{n<:LZq42|Qwg>l*TuZ[bİ^XsAܺ-PWGmmf%cЄ)bGxѦkp5[GGS5 U%̘*^_یs?+͒VO_zxmyOC% MCKB`Ѷ`q?#tRw,~q\qkN1 ăTTp"^/`U6P7a_ )τbGoAlyia/piz :5ĭGX0~:M?"A' rnǻo'b[TlpsrƏ6`k7&v QW!dK*P3RNj{n@!? #z9[_`_m|rS[~Ap~煖2,7Ka1.:ZbIndcF/}0ny@" {VMoS)~P?[]n\t$-/yhuˏ+jab d.e`4ur׸{3ƀ##Ū᪳8dn r Ev@ XZ?aøz "22*6[iVv"ħ`U?:o im:b+ "8!!ްrEiGG+ n&Q#BDhK[n bBa֭ǕoB1]<]ЯW' DY][ [Du\FU5itWXt$'$.z$>xm"m٭3{0W㜗hl)ZX uV4 RxtꂘVZ@4)wS-+wx#<'~f^h!qq755Z+#[߰}'&%TZ-BY5ɉgOw,}~$ƷҠ}1  fDŽh?V):xauAc= `77zתX8y(vN wBfr0կ_%/IT0 a0pOg B&=SvO@pK4 "*bb:1Er|P7QьWzw. 1MO"g0j+^H@`Iwq6˯!)ze5FKf" C&am$A|.\g{8lBζFxĘޞع2jt|X_, Ш L`iJ`cam!-`K!K"\ُ3O =[.Vy9ۇwVHޕBvb0 i$F8{ U X8, WX"jd1(V1PUDZOu Sԭ%*=l,h)t(/D' -a!"*#\ma.VYB2*$j ,d\z*yQIQ+EUT xi݃qjoYi8 'BPkTtJBQCԀE0 PXC!yi4=D[jNh*E2TVq?wkc c+j$8_MEH&^#PUע@NsS-aٓhPQ ڿD\2&(7/}d*r4ƍǵ7c֯`0][_pv>D֘y)X9Ts /;9Ÿ!X {!k~D%oh 挌B 3HNދ׍5;ZCl!-._B8aswc3 p|JŸ 'wL Q4/Bd_)C($l Z*,EuKCTui+A7T =Vԩxv0>g7K7P3]ݭQTi`0O"*,Bޣ},-괫X/5uH-*/FKvX;9 3buS&J pL{'xaҥةfgB>ɓ"1듋P [1/ rXX&bkT Z "Z~aٸ0v/KAL 9GM;1llP"K{:'pdžWG_]|obe6nI'aVR-*Tfblm zP5{+7wJjoXwaOQkL C:3Fu]]625r)TEZ a/Sbh^(W_o5 CǵBn 58 5TVC@ mLq4Q:xz1եSa-==1j`&`) jn%`mm(i޻wC\FU%|q *pp$m-;*=ʳ*(PV2(fO3`0OlP?Lb4[,O'/SA *1w -[l@7P86ΫDwVo\=baaom!1K}e"d"zz8૸RB j w*6rKKi/o|koSKEF4NxonR;N B|/?"Hhzt BP-u-)+7C;9B ݺS$o'l 8:πZ _W =Pj ;+n܉Y[U/%C,+MW;'gLN5$rk maښjx8A`W?'jKpNj8ZNFvP+ rHhP+hSѝK@]5ˁ+NFJ|.Uhaf&TZ WǞɨi E]lRqgBc&RZ"VFttz*Thw*pzrtԄ +UC&egi ZDIf0 i͊V 3hC!H0͑3Ji BG&" d)<ݨH!1!S 94SPW^1P9@l^yHR~ˆ yUZ,SVBi,3-h V,9772*` (Y kQJ&0蠪"Uһy-cg Cdlw(֌-J<"TJ`NEP Skֆ~n&V z *jtTxJ Locg{HB'AjVbSƸyPTFgs9@WeF9MNOEQ%5>-F o6ţ#͛sg Yn RbP Ḛճ;{|ӗ*{!|s/ג`0 ?$(ޭ~ 73 8$FJŒvH) "HP)ʢ;xnqTT@ QXR!!&Q=D2)ńQ (7bo#򁕔JI}-a1cg ?bh5:~4 wfb~;tf.7ZTT+PBf~0pZl?7\/κ Ԉ[)Wz&bzwĘtl \bn(apsv4AƁCpAe T٠c;̈́@ ?*&5"}5nb7Q`xfpZ$p1"EE4i$jkC)E iy qY8Z̟3-,rC(wmJjjkKzX[9E}i zz  ?l)@LKj.AROufFNE3l7`0O:LH !z!-r$TIxTХ, k?zu.^nÞ p$3L ?`ݯ X3:g?ol%mVCIQ12mL ,E ' %]IYheXFb qઘLB v'wݯrsoۆ$b5KGvG;6ޞ^g[=mčRkwg+Ua&C&I*d(S.B]r0[' J|s0Ax~BGTCdmV2H-K;->xr[83ej>\n$BƯj6\'4=^X58A컝7r1_/|ܶ ._žcxaRꆢJ-EػianNWe^)gn1|~ZADY 4=5U8z:i^*<{R aƴgBxyZ %A*Tt Q脻zD|Ozt Wshx ۰uü 3T^^jRm1sA3/jl] f0 㿁ٹ^U UN>T;SUhu goA|Q Kv3=UJkPUQ7F&p+QC_SLLH IH.^ UQ\DMeˊq53ow_G{aU|-Ge\x|aU5I/=0} 2* כr\GoDӃjŤ SS֠"$ $7ɣ RcZ^#li(*.R/#>ջp J8qOC=Y)i O؆ y{ōGNQ)mcBZ#pיQJqOMƬ`2G3yJ(wHIL"ARvO>HAixĊ|E1~U(rIr%!fT=ONaG%* щƘ)kV8;].i2ܳusD %c/Ж`-KDQȗ&9)k$n{uUv,Q4HC3`5::)sIL@ wf"%-Slߖ@e Ԗ_Au=A)vSKNJ&ɦda0FgQofQ/"G2xbmEx.OWHt|ް( Gx 
*jS,:U_ZCll"6=v'Q(|-mé]/]ضl.!YH[Z֋Lٴ]|KZ*]L5D{JuؕA]3O{~VҘV^BuiU#pխ%ΰQ5"4W!ÿKQA>fݘ37J?LEhE6qq+64_sfP.{V})%XQB؝b{V*=DzO>ޜ0iq˩{$q+vmHE fAOGSbBE)rHeLm2:ldl&˽L1rxهXk:<;\کUn=<|߈7?@(r{'z8) hU,$F'38!G@7Gvvf6XJ_Wj^ݺIu4BP^Iک8Y [;%E:Т ;D"ÇQ&Vy.8޷'d#ث(w8_c39o/OΩXVDo6F~FVa8ڻ5۪xsZN?ߖ93akHҘ{37qUw1rS]Ɉk 1Q]\Fz3\&DeI^g@ufH):"k<̘9t^fv2z6<$5;O?JP +}LZ7h31w:tK6Oh p@_^"y^RŁ}+;ͳW_ZK4B px:e]Yw$k_A&i ܩ\k\[ X;2-d瞝ؚwsw'ZtN;/W׊I工{йEwG;x[h#ѫх)hrF$%Պu4 *4G|,Gs5M#,RYTF3AfqJV㟾zu۷n\,uR_ƟV7ڴ%"/`زWhߍbԶh>r.Wtq)OM3ȸW0m`}f<ͻ^L<9j ߣiNxd1`6)h_Kt U{9|lZLp ܥF^)SAܬF ^gdz-P\qR.H$ AupÌ;ݦ6`興L*Cdb6:"9(Yh^?-j ^N LkP-Ep6Yj>#-r49+V$!~ ygskYW;ʵma6;DkKNZD'&pvM=KmvHM2ʬ)ɀlĬy0EBrth4ZqFI/>nLim[VfmĢ&Sesڪ՘q¢DΪo22Ҿ?[M:\"!ĵ'8* TNcڬ|DRqV$-Y5kh޹3 ÔCYp) mIg%١Ғunܚʜ\jD1ⶭihZӃ?߂+:5h1N߳bb4ErSQy:!/bbգ(2zLNzIT =yܼqU7rzE;2nC^E pjs ShŒ[U={(*7K3+9KSj{'rse ¬j".e6e\C[]!6qk3ZE1 Y@NmKp8R=|0,_8QED8M3Q>*.?yZ+QhV7hLUy^tF\:3g4M&PWϮzw0iBvk{P)vSS YORQq"Oy殛-6dqjG['S钶3\OwSٴm:D`=hο  |$]u4F?NDe&H2aIG h0!RRѸQ: Sx_C&gQٓ9-j[)rcy]۵ɦ;ib1d`.dw?IOO-,Ho0i'OgOʹF֞lQ'GYx `;^#xeU) yoҕ )wVc;+pE ~Mm B sp{#7C?F6hЀjg&[b;$.)"&.-FQ=ASY=fۏsc_ҋSSkKndz˹JsENX=/s'xkK>|u4b+UɴW\ր5w۸|BT\X~`w>$!B4I9 ;(5Qj%95]ܣޣ*pPX|E_v5V԰"Yg@\Tu5: ~ӿk3y>$B~> W_{q.Gix|<|$kjj.yhJaşj'*y{u]<ސZ!KO>H)Y]v9"aelۏ6!i1Yd 2MyH_)e,D +J^fmkb U?sAdu*|YbQ^5wOS ˣtNK6^g`PoĿ&:8O:eߊ7UBTu-ym,Y€zz&e|Fr7"s>7Ѭ5|{\P3gqүՊÆ A%_d҉ {3yTfc=kJٗC1_Z/gۯn5 Fv/ʱYV=>U^O ՕJDcf cG,["\Fz"pzÜ|3!w'QLnŰ3;D=}wŠ>f_&K.ْWɻ>dxRxߍDYc1{# ƱGƚέڨ߯ʢ~6Fl,ha4EDQޠJuFi7hFxcx|9c2[4iFh+2jHaO$$יĘkCi5ы5F3`ˀΚMW>L;oΗjϫ#dߓ/P.e֮硞`F'O?+}P4Nzn.o,cO\͈3E#Ղ!w_N\Nt)te6T҇xGY5]6s4KZyh5BLX8 -rM CZ6bDnc!$w@E\ǾqsзՅS; `1cRh0(ts {  P'wlb4D:"NQG`k6X2{1aHa¬DwҙvifJl \~ͥ؂0tD>z/"nPN/as̢lF#u6gb^pMKױhOG0jY6fm_DNqP[YRxCqwڜ|Z6=Ze3iʏ"LZYԟ&=2GT1:2mF.ޙi ٵr2+L;y0uNF4*FD%R_geleku/߿oO•v<̵hޔt:י\}۾G]gmy[^Dnn ﻍ{1d+'H4v Hh:N)kt@@uQ}kʊt"*>|ҽ:z.GQUYN+@|bqHs0bYAӋ-*(IuaH^{Ӄ5:e%ҷ0oFɇgy u!U'A'HR\,SMKVh+-1X15e-Cq*yVoP0b[kx&zouMN!S0nmm:~Q&>՛(J}*j|B0YDz#+,8UUۚSVC+^8e8va'.6"l;;:V:ԫ;zujuVV&WVUӚ"ҭߏ* 7y Cz4,ҭx˴:uF.555 |Q]QmIUm{uΨ1|;M>wgAŻԗZP)>{~c [Vvq؈QJy9KLuѫP:vAjsߛAO<:QΑt+PѬ?o]ߏ <O#^n;_|P+uP^aWЋSsw;PKH$ꗧ*_8ѱ42,-?XC;X!Oq(h"z?QRQJ$ s/l:W}=,Bhꚓ߫ j7/:]58k, +B(^|:c3{J((Q=J+kB"H$俀,l[JeAFK@(,}D"H$f Y<ҕ(H$D"]HA)H$D"%(joMV`'81CNh0MraTNC#M]AE>10R:~D"Da:pjqWcFv !4Ȭ)f͏+9IBP9Jkr9cS A"IkAJr(Hw hEW"&Yjz'H$D ?xy# DiNjxt6g`4 H[\)ToPï^[޾dz|l&Yse$3ww\uM x9F~: nL5Gn }0ݺH:ܷQE)O?}~nd徣hjmT}mS[{l͚uXLf/<)5yh#i>|2>.į՛x y 9CI rܑe4J֗CyQth0.#K$JZ5=L4*gq&2D˥ L98!+c/UHHCsSSvYpx!fi{)ƣ xIhՓCMD7>[ h1췝¹q]1^9i ƊhPNW h X,z )ya_O kH[קzBjjaNǎ`RO8`rИmD.\j8a#=.Jċ2Q\ f*-/~5xE ZJrqUty]v!-\:HHJ 謡y2PY^>*.]{$-DI0lUYHnQ5q < ҆'f':)N-nb _YXQNf~J*D («,c EFbj: 0Es 6 rYxc,^uc Os.D"^Ѷ$X-'-t8~ߙDG]ɴb3u%lr >?Aɢ5:"op}ϳqS>7X:cn{emRKCgľ'˟FZ4yr8T,Y{iz`>~7#FW7,Y\x+ rh`qQ/s1;Xaw7]Cr zc:$7>.꼌zj&IU w NdjQ?YF4MI8R[1ar>)ۀx{1I.惱#}>BЖ;tjJ^W&|-3l?Gjm.:Wo}=ǫ=9hFZ_Ys|;K$u[.g&Ď|0%CL=,;0yJ&t+'b ϟg}̒5 wy5_ tAm8gAtG|_bn Bܔiw@,IWW.;wq eM>~;b7FX^y/إ^ޘ|s=вrr*5=!G+¬SYy3!}dAg( ev,D_O)#iqq#aӲteyY4;.-75Jԯ DŽt<lnќQu7+aOQ95?yͬ~_,-dԷ+Y+wlc"lg1f*|\ng?M& }{x,[36EO \ز%7¦Ѹ yk8f̞{ʆ3:4.eQ 8oף/ŘW8}O~ɟ}D1bp܏/=#Yɛ7vU>۽6&-^gql-{"t`KY;m470QQ6熱lD;],z,;~#sF̖Ͼf]f1jY:>X-ؼ{a!g@2kz$}5cѥgݧ2n8cNlS>?$7H$*Y!cҹGikv糜퓿dV7bc`T ֫LF " S0$$^Ya-نhV5qA&]Ũ}H6jEn= :8F ۟ztE2dȣ( o'/CEנzΔ9g\=&?A7^9"2]8}>"m!^̓.W!MU)+:d5hDeqF\x#صmDIq}#J`P 5s aKd^#l#ct^:gmtbA!Љg >Jz;T xХI`zZ&Pt#CEG^]T r cDs:BTɮis냧h =8wڽm._NixV{2ZuoFii6囲ޣ M`s, /;TT\<bO3qޗoqC/M]TQz7gE>aoD Йk0oD"?cHѐz| Iţ5hHnޝWFoT!$@0QoNyM?LbL,1F!4!Ä6lU/3 'Ԭ_!NK%eٟLn4L$y31 dBn-ʿjPЇ[+c*F#D}<6BkMC|.<}φfW2u,:.\ǃda|a,'4B^h.2REqXZf\ZW9?}w0O9nC!jZx+¦Xb".!a巳)HR .^v^+ cR|1i W.⺮|^>a@ VHnoq2}x aߪ "ȄS^Jё"'јc8&E#Q~FB|a|^"pP_ VOF|=|mj\HUόaժoN^xGUT2'ʚٟ<' /2WVu`x}4lNɸECeVzN)ωINO&0q&NfQ> 
23ߦj|:a-zeJVvckJVm]Aٽ8uۧzܽTjw:36o;27!n68~l"gǜUcEEx$Dz(D͊>$D%hK5<~- 0{h,NMRd7[Xe˗9mso[xݞS)Xw[_ģ[~6.k@eU=Ncr {k&'[_Y(f̝FeyķeeQF9&Ejg YQiݒoogt8Vm {ÿ,ky1CUS ]Nvbs9=INtR~1_ґqLoU[ Zèkv͜8^mQ_0?:^Jzk]hSWoviǴЙQo1Gۊ3IJjnuFMOt>Sڸ5ùܥ5Bp=R+<9qlm4xNavg3pV{I7wm*:S^W%XA+٧K/F?gz=-}F-9Tg'!7S/KRB u s檁G9~7MUW6J ,,>V\q&U]5DsZ=eWkIo&Y3 U~*uލh-T{M4j(_] ֨g5 >~׿wXe{O:B4nE6}> {+6R*TR=H3u ;s\4oMHuqFH ͆ ;FӾC'l1CJs hbN<%٬۰-4niO:w'[wd¼Q.?%e_sMb ը vfn )ٵFtщX)$`K&%Vm[8_6H4lQ [rELsYOYQa˘[݅/?{@ID" E"gy7MJ8vUP+BjCsd2`, 8hMF#օ~GFVSSb (+u<4GRhЫs76D:le(Q T~ULfGpīћԹ7p|AeMEBp!rE`<ŽFMȇ DuQ?zϫ9gQ{ yѝ34O~c hk :m|!?xQ4E%r^*h%(6+ywG9*puV{EOϑs_QRI~!!un^O]SoOycdnum$pbzXݴ]o[.zϹTV+)D!&u\N u-tj=7*e.iԳY<uxX!so㪋:[,)˟zNrQDI|zqE zb C[3Ώ)ҫxBΓ(s4:U=&Qvaas2W_$>HvR.+7) *aTSn|i>ǎ[=._P%D ?CPJ~S* kJ;@lS٣QB,E#AΦ?sVen9;P=Vc_u5%)ʠѩj4c/h4IaG,N،Zůld*IFD?E: ~.1,w6B; 8IIvQnD"*(UAXUC)H$DoFqCQ?TP ?OHna,H$D_ןt?EPZ%%HK$D"?B+H$D"R"H$D"D"H$D JD"H$D"H$D"H$[$l&O~>T_UO~/v@ח_$?⪧4&H$g 4`0hX,HeY;Xb%;ve-,)v&G3k;lM㔟 &g.XOq.vi0ᐄڹs"gʩ*c{y;߉)c!g!fk}26Ɩ7&[Og)ͱo(\UAzJthkviM"V`l!#'JؘyX8{:ف$D#+^ή|bՔ2~:6TjԴ>_h:Vb<ޯ[!5!;Ii$Az(Wņ PT,\AA4)&$$3>"^~'<3gez君ϧa8Dڟqt`e)^- sy`m=rx;ۓsIcGB-qT6pL?zҏ3}m:+Da+JIf*4N 9)lݺg8B҅$ BL%_ˁ'Gbu;_mvt dȐ! ?=ByndپsJ9;:=qNtԖ|Nfr. _ჲj*Jg֊ثH=ŗR".3+oVȶ A秱'0GGy&bv\}$8?n8QU礳K+T:S~\GsY;žyT=%Ν=ŹLnvk)gUJ<9N |D@H?xSIQ&&cww*~:]mrJ3r4\*=+'Nq*-eɢ՜){~)8|>OrF&vw ?̘¾,; ѤnkL증%TKP5{6xg ,1O ׊6;og"%u)'r2,r$W P :gcGk*6WYVn@ٔni~ZEtx"H.(1z\"ROctwV" Yq?qk|H'1.jq;vHxy_ YF&b(Ō'ӯU5)k#0䍡,\A]qMnwP_w^KL}[snǜY !CD/eَ#?!؟6",;VБke"[oWAHV<ԯ/azo&鼝I;l92Sګwo*{D2dJRJ4k'czVK)Ԣuvy\șwxˎ& :8/sArқ%;YJTjaL(ށqP*X|+r7)Q"i*ݜN\xm:t8 6<eӂ$6"J결|6,#P2S ya[lE;09)ʃyKgqHO0u%ȑ9+xٗQ̅_s0ya7g܂x9Iz1ff;gHAb 0"&̞C|~eqO=A*L*߆D5)ܞxSL{-̛?DU0❜)D1 ٳirS$\X4~ 3 -T2ؼjěi͊uylЋPs-H3yr}x z`DhUq޹φLx^~ Z,q$S`'">מL>BFBds(F>u!%p&%V>{ WUs1jV"AKA(% uFe;e: ei.h$t)S/1^BN ۉГya;>Ȟmٻg3R72>X(JCqޞG~53 lDz͎63p+S&QBєͼ5l0DY53yGQ_|rzn(6u9Z_'OL3y[}ܼwxGׄ]xwkG{-j㱑[ã ,]Pg撳8+V|L H*/1WWx=9֥E<F%L1vfL]A|ڿ:A]Bj:-1]0Y)ogɇO_wɌ5c>K3~dEDȒ|?9qINY;tbZ~1f'Wq>GnKOK,G^1)E6'3Ahc Ei)Bfv20sdO/͠% x{&#-wyFh8tm 2~)o?=hpÂas7*0x8.N)@ͷ˶לXE2 I?}h3e]箆l^Y%PTO3uk6O<2 q?6'[{qC_l^R[4_苲2f<~:JJ*YveF" 6r.ekC訯Dۻwad,C>J*\t8hnѷcㅻ=8=ioNXl}?7CUbJKP`e[g$+QXyݛ]Cg3ܸwy[q_p\V;nO䝷ѥQ8IU bu-k{"Zs[xsNzlw׭Y_lfKr>UE|x 6r˄0ygk<ʊG99}磌} T9\L-ځBI^ݿӱj̢g۶\~om^~&o\*R¥0 w|8lH+ YiiWɲf<6 &&/Qka|NKh=cSU_62skݨBO[yh"֤|@'ȶ\֍&a}[g41>z&+)M4SMqˌI>ŀ{Ѹe7n2jkthSށu;2bxmojB٤*p_3 7/Ǐ_fQ[d㿜N = 2{JƱ{^-2R X":f+OtW&^HCVC/k%R(9x7hm,g޾iJ j)Ҧ+}nZ'thLyŐP|Fk= 8WUT4p,Z/Mz˗&ѵ=_3}%UEDUjaaXX(pVcۂλ>IQ" F^Z J`TI>KBEttTpiq9~Pj{zZJ=O@Zn򪔪K9!/UPvCG n-ֵi❇Sō/ij/ƒu%J BR)v3%J^"avXQBiUʅtt.ZxT Ztj\SXJFbPKX(QmsOZmY*9jTʪ B+N'YKv edM['?vl-#)8Fao굾5Kg垵^Z-Ź$\,exS%LSKIT$H")s2m&6! 8700vXcczs8WK˕:y/#Ƨ_eޮS*kA8mv(=_c}qAT/w˸7]"-|Ѻ^HˊC6AoK/W;nmGLXo< M!q3VEbZ11|9~~Igvd$>;ų ِJjvHC.ʫ{% ǑXRicا-` tqΉmPpm*}N2|sܗq9-dh~0emH+*WS),*&V1 Y&| FNFS E)8:zV3-(vWg6["6Z}d9<߻3j/whg.`MYSHyI,9rC.;/P8)9v|fGuʠԱ)lONM MVKծky!6> m4''*:D*uV:WU&UuM8iJ*Srz6Y~g COJvKӖ#HB۷`۹wuDD$KpSfU]^Oj+&LvԯG9uay QUNDj 1Djn*+*IOK 70X zVZDGY[#<ئ`զYֳ*ED Ck7epp:i.tQܧsl| L"ODn.$y)y/(&)Z{ͱѵq%}:BvUYD(ml۷zmxK+VVRTOL[>}3VJc3gt?Vx.:hryGMrUM[LO`MGrQ)[yk ΜHu0Oa ~ܰ&0W!_|?u B(tB17_~a-vS+o=*Quc6TA٧9ԍDxS7:% " ݲcd:Q Ժݼp{=C.#" 2ҫ fO .! 
0zQ7\Bgݘ&>Y{pIF=AZ:| ҡ՛iS !ԻfVe/*Fh !Vkvt,f¬4$~mIi*>M1 <~ǝdBbP <4:f QF݆O<əmD66oYΑmnS4L{n "F⹐ID-㍽61y㾪niN"nn7uy1g˫Ƴaqm)4zy;QC Du=Km~8=:d3Ό+;UrͲL)w&iHh4{7kTiF AT%j7s~_ }zOG'ɪy'iFC'ѻ]%9 4ah#FU=V}ƈ;07GQhm+[^׊Uhta=OWouvE0 ˩\ {D2d~UN\_Zr/ܧ+᰻-5鸜N)KwWY{gϛh-.ur9t:ȴ<7=DЯzN}8ZB"b?Ww1v=kuɯ=Xdq$Y ׵a/D Uwգ$w>q*Yʹ<|{]ƍb4|\}3ֶϹwßeٿ(;nT ܟ=iI_{ukqs/>Uvm_WY>wccǪ>>pm=Zܓ_~yYW~T 2dqdffV\S(4h؀:^vvK5i+~5+w)4էa :~NquQS"TWThԨ>1&#,\DB1q1(:-}Biՠ6zo5:T(l] ""hV;QYn#Y^ׯeRhP>uk^CGnT.-bc<}_=aA\mO'VU^/ I%mׅ:A߅l cTe٫TU'\G jp; ^]:um/c8 dO5IJs&3Mcm9|g&skܤ$&nAF]䓒f][~u 2dRfdȐ!w}rPYaF3תo(kiͶLj:uPJk^zU72dqdeeUe1Ȑ!A&%)UYKFG(+I2dS 4ٽ*C 2dȐ!Q):BYfVTNd*h&C 2dȐ鈶gOPAO) 2dȐ!C2$ Ylj?} #%2)J2dȐ!C}B[R 2nQ'eȐ!P8;ګ2z=`eBףUk zT*^BW>褗>_W=z2 7`FcЩ3RNEizdG"?V19/?lJ#/ rK5mE 2VJ J%o',gӞdD ۣMx-?d'bֽ[Ԅzk;{qg5kvyHOi[ bc9ZP)O#)!sRpt&8K8gĩ%$,9N{iΝiLRz-sIxWu"C4*&_YƉba7yff\̷Q+2B8g) nv9LNa)jG(Br&٩;p)@# G}!nnk)Iaߞ8&M>Dϰ}6N'` "0 {< `B|pKƒS<9p ˀ;(xQ^Yvl -({I\ Ľla?oV%Qف~N[; [4`Dne=q?CHTGeIٽg6xoZErjOph^Q)H;{O}-qw{ tқ~Z ;=2?7q.ZbNւ9}6qLEAF7gN j$"Ԅ-;'mKIV1v \+Wom,_D4^a/-. {,g,ZGD$yx=C6wбC |uҴHm+N&~?,e\2NOdc| 1{u6o~ɋ>XISqWvofߑD0t_.\75ItkTRFd=~ 49Da+HiK& [? mhWpGy[)v6e8IPLCN$sh&҇ Lhfggn4߭X@W!}sP~2T!4f͗V2{W:C Ib.sKtY1jNq;`J{`s4TKێa:COD߼$.DrX,=+P\l8-vrd*ǼI߀GUj4uC'ֻPUBr19{| ?fxg />Ϙgy'ծE4yj %M >̢l+ĒuKz7mw߯o^>3gy(+,|v'&~\:GEz~N?:o.y]Q&zdOvԵ8w0nS鶡osݻ,^.'#y#:-ݦj5AQuPy( |<Dtȴ /㭴x^wYZTL~YL\/hͷ&pptY`iڟ~s{M)aa̸t3`xAd5ƙu%f֧0sP?fgѩCN'0hRWh&r-baqMxQ;|sN2gl mĉXzF~ySäib?£{am՞"̵0ۯh[ɴ9?WmҊ^cۑ/0{ !0iL>rmcphxWy$ L.̭ر|-|ՎK!C?AXN^]-g:rk_Fz9D3cl>)=pI oti>ȗL߼0Due|<^v5"l?K\ m̨'zRy_r>^|5(W'TE^f#cr:AmN1p)id,qO{'k]DJZV&ڑ\R8qAgG|lۀă_;^¼_<7Cꑝg}sBfaqi Y(޺9,\Cg[NWM,'-A65a;Z4:gJiSߖR b)o=Aʞ]5t~#f|1XŸn›ۙ?m81gFn{w93v_N6kYqI^UW@cГGu2llfLۅM,? #&Rttg(~@-q3f婬Y%Kq[{ప#f~ǣ_ȦE˳-ؐYq# аzs'}BS^Z9#Ũ!'׳ az6jnBQ^ cvFO'S+u +%.ʉK$7< Zp!A\wہһ!}&qs>`4c^3<{E4 ۉBB#g}rtM aOn{u4]c=S. **PP;N&D6+)dW0j"GPOv:q݀ʞΆm2`:m{Ql^qrkZy]źzr[P+1ܣŃEx1˨ -;]hғ/~(LTk:չma7Fl7Ʋxy%oJV Z CqJ {=?1+c3̋Ť~:/P;2XhIՎVwiɀțc8 _S?cuY1y4;g/H.P"ҚJ2 1),8{<38 b+FFhiޖCX%.hTXJ609Ыd6vsJ ݮ#5JJeXRhkNG9XQLWzNvJy)|6܆p"CN}]SJ4>$a^/ k+l[>Hon |*i ҍ&ȶ^u|mXKk45*rKPihHU!da(ӆ(MGa.26ѸɽRQ"ʜIr/U T6gWPQa5Vu ʑI2dʿ Y,lg 񋽉]rDmyoؼc!Y8ZA %E9ΡYirX"\:;p+lXvf)2!eŕt|0 azYn=6n>m Q&-#'NriN<< (pX*i|;oeҫYDChƟc?qƌ[r+-8 J(.)r`=a*0?W_Nb Al3;ElX7FFxW{^5 _/ۡ<;z v]MԎְz[6:rJmfLB 4nHdvLn7m -'ܗac_"Gs+1WfRRn$Zh߾)>,/__D JS6ku%*kdG^eӞ-_-C(HGS?z<97nd톍<&,)4BJ.0 Aҋ,41wx=_E |P}h~i=KׯɻP)HrqQ(wVK9%e8ŠB|4q*)H]"uX8Y^rʳAEZW,KYg  70tr>}%ԂPܞ9vaҢ$o]_e!e̜#/_aa祔VZqQTZ*W@\.*da7EDΫMv躑bK`aRnmM|v,mLa(.2%dUx2=MZި),'Ѧ~9X_Qg¬ 1s u[%rdKBj-NRTT"1SQ^B-êd&fβEF#,al6|q!;sӥ V$LYak~Aͳ2(0 Ϊ bBG 3(2֫No2dʿ :oiWu.O^/p疗f|]Կc>|+>EyxgSyd;D4Ӽó/GSCcԱDy0Qjȝ[KYf{@ #Ͽv^ux:lݗmdgsx}]l+H .No~Mj"pb%kDV4r3鑘j\$1^JɆFr-[t;oWfdTG<#@ =_H}JneGHAUƪ蔷V^Jzq#.2hQ Bbx(z1>޹_ r\}E(K4NRnt~e=F(gf0yQ&mèq~s/z#MIʙ5f,{M:D@^LBi1) gK3_ Zw-a3 m`Ш+ԟW?YXL )v=2:Z*yO@F  a["oB W?ŵG|ؔy&) >GzJ/ov+6l_ɖڸڽ:ڡ,m4HϙⸯSv1#O Tt~ճwO~:}m^15K\[-̽ӷ{7BG7K2~ȣ,pal|c} DKx4ڙ˦b)YqIϾondw0%LM@x|Q_Ԩ}H=ZGNfqxHEQ-jӸٺ{4[ j=c,m?7>J+e2!C?21lƀ?XJ~Y9Zo|[HADǬbt/]S+HϤR_?,&?[(B$nG/ө"(8p~NjsQ.Dfl{(6mUTn+EyyT%,Њ.(E7AI "m{vkDGtYa-+ 3;RZQ*jz( _?^AFE=vxK3lW*B:fV'upu.J2DR-6ρFr!C[O`n-`*'e\yJ}ނ vW\ҵt,RFFfvA|:WXSЈH:RͥLaz/CMD=TIB'DDf6czc&zv;̤cw2+*C%08ЫNݙܕ~|H¼|,N|vpkd[\ lCD:OS 0=Q.K)tB`WKѨؕK*q6[&B>_BJ͍A{vK2j%'#6P)F'yX va(0gŀUNL!7~AAb"au]5l4zvTggQjgEv\'@*.*,D$:*ŀ/ї@?t:QV[%XJa! 
BEB1J9=Gdez"lЗUOa d_Lԩ!::R)nVgV_UajYzvG>4OWZZN~Yʗ@ %oǽCa<2r Œ7⇴eM2d(JNqA7| DVVVL(6%,zM=Duj60HLi*hr2ҫtxy>3QINzqMI_6pmDP=*8>ˋqTA&~m(Wɮ6?B O~?וRͥh$sP\[.:MER5J箪'/vY'1VLccsQjJ5zK<(?&">zt0 σ7Eb븪TwWѥz_٦UߍFEy=mֵԢp]C!/m~vKݰR+Bݥ:aqI2dȄo# i=b%%%|C0EzeQ>9dܭ$(,5?P,NK9E%NB_J[e)fA}m(YłX].K<("۶Ԝ~og~ٌT=Aa` ѩZJMG^Q[ 2_%5/]&IooHSԆAh"7HQY?tۮ4C%MGҩZQWD3xz 2d3 "㚿nptKy r`:nh,mh^%bO'iGY/lv9s8~G RpQũ- eȐ!C 2 ;TBW6o!C 2dȐ??#~eȐ!C 2AP"!C 2dȐ!J2dȐ!C 2!C 2dȐ!J2dȐ!C 2!C 2dȐ!C&2dȐ!C 2BmV zou983-].%zJ^'p[l6L>n7VNRt/ F o~ ajG_UEGYę_)űzh7S2cYڷl씝L C}I;[3 '26|U#"HkB'@]o̷qqٔZ@RA}5`d8=:ynBjGcfzDx}E>9}笴ו2+X`)`"x^ Ei,[-}/oȊ%_I%YdӪ*۟~w:c6ťT~SRI^F29E0BAq鯦cTTSRR /#L>]JՆ] tZT%`[TiSEx0+UrRsKp8lE䟯aEI>eߗpesCGQ*v"†5q39UjrPPOA-5,ߺǕǓ.9xr,͙^ӆ ̏;`hxӿk3|a-rVyশXsΰP"ep=0D:jrq|wZ󙉬:u&& :MҥSk5.N};R(6RtR(ӵC:F\aD-hl".<GSya$rúPGO= Kiʍj]؂kedٌ ًFJ4Z*umjXo?g M4Mm"PX9AeE>U࣭aT:vM¸DB2mCw'Ia i%23h=ğNAފapd6o'A0#7u!a 8ȦIP>l8Gwo?551BVHQ6Qw͢[ r9t2 oЌ;FR)QHaPa>w<*G"G" !OTI<;sxqS.DW]`㾃$ڮ+=^܄ k#eOe[];PRC%#QK=G?0W.!;Gdނqy@US5'ŇiM|c[L#-;8A+'g~\<:TuB4O7Ӫ[_(\x-;w7Mxnk[AvúqO/RA48OZeɒ_OP*UJRrx@)J`|W^mӪW쵅4 :s|1g!:o2q1*J`-&ڳt Y;d) De_ΊU| =+r-[ vచY;>m#s1a>1|d55r r5 fݪ%,O*E[).燥YGn :#.z 0c1ۄHK?s41"KldˇT1V):R^j%i Uo.vkÊDSi7O@P063skg%d O@ďQK9R;=y$1nU%/4X+X~ZQeH.?k\Vda邯y}A̦jV,w^c8ZV`>tÇ72m9Gvn 'hթwư;ү/,b1lq<˫1f\RhXyG1j0gfr&VΎ4'>L~sSh&LmգUMIm:u/Ըl [xi\>Q\`ŤJ#Y|h$f*J"xzsK¶33m;<2 MGظ$(Qoubz0G=b"%g"Z:zF-["n!tlݏqWti׎~||})*+uOE^eeU(=h&0^˾Kygm~u?ƨ؝UEQamV_JZvS#DO&&HLISY'e JW?"ι϶ Rҁs㨱6, .J~5߾S+h_'~,~Ӽ;Ö́7^_!%b͂uI}P/\Ͷit,#D;[ןn݈t掻Ф?_PI}1a5<4u`:)W%{t1X{W3=,Y9@i_5<羑;Ϋv\N<6}_N;|5Z_GnpڕXTApبtؑ>wqr1\<Ծ ϽS_ɡ$'6`P}5_| xK!le݋Z?w|ʩ]kCEJ-oI6]_e')&3X/:PIJ6nIlAG;S)U~鋡6?|Tjm:( pv ~Ǐevbp(3cT;o=mCؽvvGTZ~ch#>7f~b`]r9~Qj!CefmRhk"i^- gsР]ت = :w7'ovׅBVhTD)/AYH.kOifڥ!iD_c|0 F/OE`swa:#I?iͰQWEev %va+P4j GrKPI̅w8onOybJ HX`¢Z)]ԩz=m-LI 4jwFNuQ R= ZϧX]R}PWT_vF;XȫӾ#5;?!]/-@{: Pح2~7Fuw^^-zw#eGFک:('v]#6z?A)/( SnVXAHw$6+mp,9bg`7o_ف@?E߽%iC@hz/|=#:{w_"t%OI^@|—#tс1Rtqbϝh%55bP{DYds.FUWNn| (\hȂNS쇟N/2ROwHqc?ϔu:ú{)qOֲ9> 4:*]tiNZRxk- sPt4-:Hڎ9u"篹 -7lKgY$dPt6ZRRp8@7oю~J2. 0V#q,\-]*UgɌ`E\͵6Whe hɷ3DZdQ-۠v1{ Nd^ZԖ%#~:(-)Б#9~ӧ}G]㦴 Kt#|DUNb"wf;~iLo΋'/\Zr  \Z\C*^. 
ϛי!:aYCql{ si&N%5عgtx6wD{cb|c6nd-#4e]iE[Ŀ1q :ulc'-ۚycQ#@_lG0F[yN]fgZ'C6e~>w,V!hk/pruE" 1dǎS8.;wn`L|s]; !GIDVʤ1cbf@r(KZcbah(FYݙ4jIS=7vjΪ߰jf e`h^;4VBilx%ފKe8V:=1# ÓNmL>¦㢓Ӳ!_]Gt9r$5k܁{㭳qvQ?'4YhԈpfDݰECE^rc6ĉCwzmyoG|>T:uFk][6E&UT+ j^bӠat6Z5ٹ QZ+)ŴjוnWqw0#:E )a=46恺@餬F| "W3u8t0)dY tjCt.QݨhEF7&+&\ I6ZVaѮM+ KY2r7XahkV|ʕk9coLvݰCUiInv^`mo{mYwІڿ5[_ٸMҷ/udhٲ;ۄOQ\R13$InR if{DYdWDeeeuqq.sY؏g2lo\פS\q96׿]\MvC *Jplc-u}/w5c׮q!5~ _."5rgWdysצs[!c&Z~k/\L_{q;.[kl۲4rͯ%}ڨkcQek浮׺֭[Yrkk\UgvMukǯ8w]dɒǕ[jrJӐN7_f)pN Չoue)ȩL"6D{*^ԠըCنZ׭Mz۩ %dҨEr8FdB=e*h9t$-hz5hQ+}VSYn3 x։&81/'EBVF DKOhy ?m+M;ʒǹccɏ%KT^^^Ϳ(eɒ%K,Yd3R~,Ydɒ%K??&{U,Ydɒ%(i>.-s?%_,Ydɒ%_/|uQ*PJ2hh%˒%K,YdJB2>@)q2Pʒ%K,Yd)yY~?++Jdɒ%KX2P^GRUg0냷Q@KSixx덧Vo ==|v<2j#vRT QzQ>NhB8h Z\ .>QYAR#3,Ydɒ%ߍ, x/-̩E}JZ-Uiǘs<P(@]R8zSKk5h1qdl8,<ҫmw~,ͫr ^/!l /wHQW{z{2Ѭ>_TFYdɒ%K߉=hU)QuzumLGR@Y0=o1 SR-hDQCm9ܹRlӠQ*P(EL+l4 u"T(=)~Z{ d8_֊F?mVk#JNJ#gp`-dFW['R]iW?*Y2#JhXBxV3rT#dɒ%KDeJK죱x4(]Ntء'`:LCCЙGDS]]jW)\xב[GL؇^ NMPFs݃oMDUT\nUrz^5[#TTVRcs^erUU$dx)=2ݼe}n $aq$O(}nF nQCJ)w-apsאr& r+ugt5[Qw[5秲z:ϕFFF'2}ij\a*UN+ }Y,$斻U:+x&utc.kɯpAw=ׂDv%y,I956A7A# 1tجC۸3ײj2lӏѤ]gzќ-S@cʫDqi1V۰XGZɒ%K,Y:Ϥ@a <ҳ|~w.`IuFOWSFT*iGLs&}!\QRРf޸Q|0J =̳ɴHZ9|MS#J}|FM>c´W?̞t*U,;>_NeIYa.kأ(Z 2O)W訮(*hw🷿 /?O{{.$J9y, 7n$ޚ0*2RYTDzzUf':G1߼~\CIINҪ˫frT:Kq~g`± FQ.|v, f.šw;l>ϣ,~$>~=ʔ˄P fٸ*~xU>\v {{Y2K1VRW%@,ʗIisy iiU*9ꆌ TaaŤ 屗ndbΙ4rR,Yd ?Pr`2wЇ}N-Ez S~LJl[6dm^Ky5.A]N)h!= X씺 t =7"hfʶdD2λ0WH{.^?ra+hԢsbYH!WIڽfQNZnwOϺJ{vx0&%f+}B9CX+괐-mԑ{u%,ԉlSsh`q{gxyΰ23M|>crξ/^H<;U[{ZPYSZiL-OL໯ck76*IT:/(^_"o6'm:fròu˱HfO:Tu"@2 E JU4΢̈=LG>`H rle<ح exOA!8 7[(;=ć"pn̉(Cp\)2 5aTH Deɒ%K+@;ѭ#NK5>@ kemfW3(=GqL*gg @EqڰXWOwv\lꏛ bB`O,B/pg&+ ϨrOCk4jJ*åaLk2w#[*tF;鴙Q] T:ՅUEVc/xs?dlt: BBn%YJICeu7V] enTSNo$~].v# #٢]g!RJF6hQWƏ`R X<Ӡ"{i-DΊr  =+_'oʫ9vfҔ9t*-7f-ep`tZ1Iv[hb0s)EZW%\1dɒ%K E )Rc>Mg㯡(u l&z/B#Q H^v튬p8*8{&^;i68$:9 %&5>J 2ILN%SFd2S. l]nh6GG`ĩss[EAxT[3o[*?e&=3dHKM'N*Ȥѽ7kcg܅x lB;:7B #6]V%*Zm*FKES ͍9{ޔaՇᙽKp V#U0U?ˤ?M`=;7^Si1*`dVʧ3+v uGOBb~=M B^ (_BT*r(!E-U/K,Yd_Jv"yhp^tapױ-sAwG5xRFK-.;5ؤM2G*J [Lbٞ<94 ;<<>Rao0~,)s0 #guOW,*#aPRPP 46!yGMYLw~^|X.$ҳo_"8K99a* I*<gb?LJϏWsO(ahz7FB녢>4iBxaXϳN`}?^yw .&z#tڒ80>(0pOwDV]qEADFwYc1({9e{G܃ңцaX542yzd1M;>=Ƽ& K̢go\%ja^Hrxaسo0ia]%t|iSJASw#4cYdɒ%K֟#ZF?XBQeZo~F+. LZxjJ":{cJ@TYqjlV+.j6 E:?ʽG]t:.>Q)l`/+RP u\|NF#`aIߑRsUjE"mQhij~^x=_A tU'=jH%M puwѽ]!IEPhie#S0gQ;{J[5~&P+vI+QfI_䡬^֊V:uΠWfE-v}:&Tzлt)l܏r1ՖO-QM#Q7NP ϯ1Clkfإ)|l5\ʒ%K,Y~:I|'8au^*yyy5_nKls3p$MKoZ`b^<.p?Q%-NzկDPdzTN jy|jw `rOJ:%>/ 05^4lBZmjqe~Pc{?ŲmiESY/rB@]YÞeؚ^6ijo?i;N ]ZVqq*/ [& i1ތmq[- _.i}S`|1q'vԂț)>{(NTQϤE<-K,Yd%PU ! UQ-ze^j<fϧp̞+?۳H=o:|Ssк};nNWxtlߊMw>qt|h^kx6=K%z¯wb=ύ|YUNbRs˒P'{g4 GGp`mUUwQ/VKiF +R#Y[eef \ ,bTtƃS;' וa2ރ> ;Īӹx+d!,ޑtiր<3 Q,wa0oM2U1sFr樤S9VCÔ6^oSSpiffq;H Qƛ/ .] cu0`:of355TTP HeN㗪CEiݮ3-߁J+jYLeˢ/1{j <6fGfݪx' |J/l @_%p!;qCW^3Uկ8:w8X1ckg\m;]ua5SR\ZM[Naar q^Vϊ"b,g0r-K?PAnpېA?uܩxjk/Y61)wݷSg sC ՄW^:r,cp3khԚKTP+ыcD`Q1n=LCCU~ n#C]'sIIa:z]]Ȃ'1q-ChUDrat  ۳3jc B[rg@n܅·>O~ c ӢlzVBna%Ki>{0M*Y=n5kf:KgTr=Mdiްbp{|xH1_"1>ՇQ[_|a8#EnƑyݹoV-q5J'rZsV# .TKS,fBJV-:њXw I4穻nī89xkM?L^pqB[uDۈݶshܢOmY hDͱQEtd.n9oidS aԤ,6,l]j¯*Q.f ;;0ܰ|Ԡ^zHCj{'l)72J0P w .6+GUpO ܻc5kP{- (]*Q_ahP*)?eVoڕ1ukxE)@]oxi ~u P~a,'ݙgnTCUN'ފ߀k"UGRЩ-l?~x_4kQ]Z7jGok0%Wފ#v 9=3Mm$qdN^Mpa$XMҒGk).TF<Ͽ/#˨~Wq :I_~Y^3` y_:X5g#ƽ;RM8Axe~|>O+YڑJCIQӘ~[43^}8rQoY#a۹<. 
]OSnLx6.;ݗXqcDΚG w=XR`#\̛<3I8|:wj1,ټ_v0O?6,c/GZ^8K'x:+_M&Qc>gu'ͩ< M7s ]H9&,D݆ϰx@>yuByW0,Y7#jK`sqۿyB{`CONfcXvޘyEdQ؀;{vm TR.6=\4uk] 4tgѼYh"x|@/͞SR+C*Hό|.F&m=|5g<|CGRۤ9q'|5&k%/N& (8IYXJX V Q?egU>4HR޸VH|ōha{R]zeGѽE(;Wr@]}8k( n,[Hq%2j2 3C.^KbfgF>b\;`_ 1M[Ye'2nb.+bͤH q_Z&.XM^ӽU|XY3T屛;w1Swu"ʺuwgһM >X \PU߱5drl*]["-u0ٵole-ޗF9giJ/[6!ijNiл'}ux:(zK0\Ucm.<ڻ!{Vd䓱Lߓ͐oo(w[(zy}S"Q#u Nb568!\',1఻OVT (W*svww,>cp=*b"E_j< fz-yvpWfbqaXyΗӻ]S[η.N_͜N[%v_1)f%|'|ݯm gʮd N1KY[ٜRLL6؁[nO:>TjÈq)TceO0Qϝݛ|Gx-<0 |%GQ^P%ݱ-{qA0xʡ,9xh;Hε 0wlōʖȻO<^g;Fؙ4 ~aQaҷx.ŵ{_3t}8nIvp[Y|1mVnLJ 㾞MW2qܟ{G܅6'"x3`?7f=ioC.sf+kˡ\:M; bOe8wHYEWMg#X43q6>/LmCXb6 5/3s$Uk9K $6,߼˜7h6kVŸAwfW^e Js9Dr{mJtK9)gٽv [{4_IMTξ** 5ZI ֓Va!>rUh4:'?:Nӌƫ#GrM7ϛ;JLWG6/E61xxQ xqP':v"c 28WzRF4 l$%&!'PaszFZu* qDԳdgp %tnT`p͹ /[t8V8'Qc`<=s61gkBBy⁇}@w"Զ2''=GNSg2j+uxd޷}TT4jH} =̄Q_* K}_ʏ{LMe;O)OzS*Qr Ak ?a/o>FƁD&۵t6G<ė&1:v'0_OO#ZfEk@UN*OKdzc<*}zM$$KD@)bQt{k[,E4iһB{ޟ>>}fΜ93w̝˒ә3c$Lf_̆NƇJ.^&R}\ o=־1l>~:AC{u\٭dΓy~.GF_G߮=:6wH TboeڬWasVGB~SfIN6؛C'X5s6?@AYI޿Vv䍇9{5)6{g5VӸHUkfS_Yafյ5r?aWu%6܆킕 Vs}ȉG64""˙\CR+87e Olpyo4a=Sg? ʼnUyx\!ZBbHJ!&> Iزq7~AiӒ\fxѴ-TV:?Yy1th/ǭ|l7ե|"ړ9"&q5#LElA^n UD0ףGj8YABQ}'1\s~31>R{)]G85Um@gy{/X$&24nݮCx9<;Iߧq>HPK^ ]s8ҏO?e`OӔ׎,Av(ä ׆?JC?֧PTwi;0A\Up:؍GO(hD'rWY5.:_N# OFD~)ȣ&hJGP@瑞ؘ$u{M Bd@#h:@Cj?2i)E{l " g6%cc6ѫUc^^;.|ɦRbϢGZGh!=VG_ ͻrx5A@K_J[;Չy!KjL"fQKX Js˶toۉaX Z$nl^i/9[Z:[QO8I&)+?ɬud0Y6{v MmOű||g:?3P>sX$ voVc^sv=%Ӊ#?'O eu[0}tYw HHH482 E$L={Q됬ba݋?jkv`Њ@E );3.?Uڙ ۴9U}S"HYT󼴞|5iH "RUyQit;뮹.2O~>dƲ=ZơtL0&PzR$Ohy"rRGe5s2^8J( D7‘uNZJ_g<xYܲ^CEqa몀=Kel!Ƹ>7?%{Nomy݂yk<AJ6%&yOؑ:hK@ꗐJDl)\Z/Ԗ nAZ'w?BR~u^{ͅHYu^_QYV'^ *2Z.wpB{<*-D5Uosϭ@cH#"qgpOkn,YSD~V0믦6|8zखqWqnsp]v(l]כYflO#/z?mBB,܅EDdx\^~9߯%(%`OGOBR,)Fq"ߕpHN'Ȯp6-`] MVmE#na!D[ g_꣥H 8:mT禶^Uuf M|K8ۮ&&/ oKG٠YGJN1ozز'y_[RNU;bjEQM)qq4o%FgxLC6vbnLyvUh#04s g_Q͒-;`,f^7OCqv_(!={xH.-}8j16lGH2 Q W/vQ,c seGQi3%;pRL>A~j.9sXW]%>o݂{Oiz31cנHxy?H)al_@z@)JDEs%]2pV+<`+4JQNpۓ|xwc~ *dDTRg+HRƲ^޳v-ؖ5`54=5e̛5D> oOKZ^Ԇ8N^z~>]x''Ij Fȏ_;RMjol-ZwĴ F?ZBy;^ژtr%k^zQ)J) |Yb=hE4EV%=q3 ^w. X[xGܵbFsg|V7 #WNr4ZQ)C]QCBJ1cqz nc ʁS w@?zu&ﳵd%%3sK~A`2I 'L7Wh蔓ʯƞJf_KnhU:'R[Qưn-q/u?UBtFk ^KVMл8\X1$¬'&eEVQQ^ȁj2EY\?,'Qg%^I$ab 'AQAz,R """ 1FFoqETqd)iٝ֡ &X*jHjέiDexL,C"AN!sxR>M+14JϢerJIoB$)s_?.ψg;śis- F"B#1 'kiԴ#$FZ7:3[e1' 49p̍м5c>t4lS%t[F$D6 U]9Gt NEm+d間4MON{1MjpQ~zb:X2~ Iz /o]KLad> PED2Q\VIFڽum>Ed6"l2-: DLl 5oMxwiẒc}hF)A [ &#-5VȤOij^D0$%$!Ni٪#y 'Jeh4 ѻ+=fK B RA [F#]]hP Yjvi` .]ILrѻ/}Tra"22itҾ[d'ɹv5\8DS|V sU-䌝U7L%ŻWÏQtm>kjtR%IS,AŜ!g#i԰! 
9({7s8-8|bz3Nb̜=;%4g_{f`U"Qk«Ni:x?MHegPws嫄,de6"$A: ӒĦ)tj8/Ѧ*Y<K}{nTdQvKa >QSp :X93ÅD2g*]tcZ5r@!N^=hIpFې#R9dg֊-(>q _l*:'5JlkV:zTdb9OA1d5n,lrn^OiJ+8y =f4yZmڶ+Fu'W6-qωdQ-W]NVjһte}NRԒѲ=iQ>DA)\Hx5$)94MpdщjF Q[[VԹ\.sxxΪc|v?^OM1f/#i҃Lw ⿡)<"6f޳-"W°lEjYLlɌCrX{ǫL{c"횩/\JD@ߜ 5JM# w/g{gx$^=w˄Wnylx*P?B_P*SD%lP  +%*1 yQÛ{$B wrqqhdb.Puﯣ y]-A]ANOlˎn䓙kѝ9H:}#&_"[Ý֞NM@B((P@ (uBVԠ@ (PPS@ (txJMYm- (P@r3& sMbA/Ĺ[Ղ5ȊdSW*Kg42Ta4aUFDDCTj-Ơ %k;⤭>v"VX ?UhIojm® ƍD^T?/Cҙ/`1Zˎׯ:F,f݅:dhՉ?v1y]jtjeUG ]2 RV_R,yP2sE.LW%ge.rϗWo6shỌߝ~eCn5&c 5zԮ2|o֍^x2 Z BYvEScjX+g!SRN?%a:?vzjԿ>~Y<9Έr/¼y~#@LNȡ5ڷ'M˯wc ە6E r7̍}aL\I,R}K5 m^.fGt,'P_#⋌Yt: yywvt:OmTP]o[Ơ]>vy1Zn6g*'* B6^M3iAz$|])KI+iUꋎ3qWTڟs>qr>kTW¹D+ѕpCOP5U@PY%pkFA.pfT¡[1uMfF D{y(Aj䌗ϝEy4i XrFd0/(Ԋ^j2JC5`1i%9tRF΂,VA !Md8r8pVȦդ/Ǹ[g^$&_U˙l#ΊeTgZLI:2 Bk2hEoXS 9iVNK2џ CҡI^ U }>΍B{BfSb/0%B ,4}+L /Z: [zUDKE&O!DUŇྗ^ځ quP)`FOA d4u©]XWlؐV~[J#H8{Q$̪/U؆"tV348X  h&H^mg|z,¶%2g2RYҽV,JϬ(EAx<.6E}FQflAn΃Fw9#d=ƾCȊ/SoEKX+y}z|;ɖ;1olCDrIvjv"Q^ɞ žMhU_qR~%xU}bs$WW;z0Yd(RV]حI/@@vn!}1aמgR}R&W؎T`ɖEIаm/ ?}cpJ UOVSp[sԔ[Ed+k~oVM庻-*%Va9igdz8z,#a'?s&Z\N} jɋ8u=|P*>< ݗZ8egA>#v)D|?s M_:\B !\}xO%ASz[Y8Gܮ@Y~Qtq5{fؠ~8km<.|W{Y|>=ژ&gkX?{bna\!nLF^ּU T[YJH-0dQV6oib.9nw`+ am:n{<]6yombSO'OeOy;qOo~ ZYy)(;o©R/1w/'|z a׶W˴fܱqxe xIeM69 Wx` _. G(c.oYDMzM>Z5}Jw}DZ\Uh"Wlz ž F 9kq:adOZRIɾΝ8^񎥼7i.e쪱 rhԶ^RFmF|\KL|w*Unm7D-]|.o.'ȸ u[-Nm2=ws' -gFBtb afąΐH^B|K2^[-fNC>!.DKp+|$9Ģw1kv9|oYG1Yu_i J`Emq^=r+17uѯi$qbc{h{HzdrV>7]NJdq+:h\7?A?A#&O^#n!5HVM'pZ4D\*Q~Ǔ>ar nzNc嬯Y]TKmTMa4b˗٢xX­d]=;r;lq )0Ak/2ovx'zuyϤ@%C[A|Ok8k!6Vt}Hڵ&"! 2ZҦ]+c"Ţ F-Ys9vCͩL|h<\Ay 1Y9})CB-f ,|EPU@N,fj-@ei7!!H)Sњ[]i WޮO:9L[9l~̡$5@zp|L8r)ٻl*O7.׌$αǞ}yG3_0[[vQ{| _|j%9VH[zɾ5ЯgA }?dpFռ L 3d?6tnMplٍݲ͛6!H+1oB6>)nB.{dz&}ZWV*!!|kl_!Cob@vD՜~A. B:-uEyRMWCeeln>?ƨzb+DՈߺQ?di6|$S1C*CefcSiZZTwpm$} vN7!*óѹ5 6}+yiKTƵ&G+=rDFnYM߉92rD-`\$^b whfiiGpR,YҺU:A@p&ސ2r\fX[EL~ [o=OxH:L`q12΃办dXv]IǴJ~ܱZx% ( w\47/~ۚ:RmrW䟠؟Qj6{.~j,ZimF-(^W /Sjk:3wm+廊*I%"*5te`W[-u8tU |7Ĕܙ_o_c)/><8jJKZp^<^m4cw`_jJ a\ @!qpx催8#l F^vN{?>ƳvQtBŒlϵMuX 72?LEaAB ªߋ*.N֡.YH^˵wȫY)׿{Nt1/svqJklnk}.oCsZ|ꆌ=m˷vUSrh={JCy#{6Q+<}رj]±'@bo;Ȑk,ƊEImC6 2GLJ?: AOyDBdu؍{!"Zv?ʄF<6]?ऻ-,n%m1]:?nGv9#l,:J|h[IMm]Ic=5>aWutCdE eٟ; u>[ʊ_xԃ4p k <-utٰL!хƒήc"`qx"Od#wQ+ TFC1y*IVʎf2x0\4"u9ʓ9 (P2> R;H؇ڈ )ӅT[i$FΤT◗eeW%(O8A4kl8Z/8CrÈ3hЛ 6zU¨}(z:A+u` Guy)+ZccpҞ7(ILCDRr2QF0=T>wiINWEOaUhHj#mp>R^'gRRZ=";D)ėə,<$y 3JRdz  GIJj RT/w qફvBg @j[JV3QS+JFSpL9kxvCHsə0LҴ0D7H*SuV:_QKckW}-Bދ(j&y_A Q OT7"uZ (_pNW88$~#I 1ozJaηK9V$wWiZa18֚qͮEc/D=.Aޅ=B.yZb/ɓ>ҍ B~aHBX2Ͻ%  Bb Q}g-]B~ ۩=̙5vcR~Xfрb|!]E /@@q9l1 8~W'l@Zmm+doy'`#!V公Z0՛sjwa .k'Dӿeٜ}`7\Aw)VZ@\pD`t%iM 2p :uS;+DTCxr&_ɂx>UrN?mbѲr;DDC`ƴwxdf肪8gÒlX=߯*.;~^ aP%G8iS3w* +=<_ӄēuwo1Ge%ىDϒ/(kI~1.AzN\WTB%tc}IO 4&DMO!5 p㑩pd"mmD8I3`78+k;o jHkԚ 9\utӊ1f~ "2(6Qy9]X;1]ryW;DŽDž8_ؙĸ6'ժ'XKؕW믧E'E{Fl\L(P? 
=:o&el.T2>jj i|iFpŦYSdJ5VڶkN';=kYt-z*X<{;_]iڴ pjV8Ca99qNZNsRr|BiPYd6ӧ`tQn+ָ4Of_l*Us`w6bKPGQWqRw*.KG%LYi5ƈtZ6d,`%?_@Z*C^jLćip]7O};m'ٳ4Mҧo/]'9s_9;UӤU\Hӌp~iYNꪩ4ӹE,.8hO5/ЁNӺc+rL)4W[E1v٩ж'+Ϸ Az"ʉdA3uƐ?$: C϶8h&KfMqVQRJ.MQ۪Uyiѭ Ï`嚟Hw}M=TIaȶ.dY9P3iظ[JՙՕ-K);ر -t"ڸϿ5k~ӌ dU]q5WtÌkJSD1 VZ|+uڒJ]uHa&jŴї2iٵ;c+h%N!p^x~!B_[5d_)<=C;Qrp-= sp4rG6zl4uf(EXb2i%:J͛~v@B nZj{ ˖-# 4cå X>BCydrɱt|AL{tEd"l_rzCҿm%~WT'6!)lUȌRs8iH'1NDГ*ILBV&˧ʢbhٳ6Ԗ͵,= Wsai_m9F-yDt2^FY͙qɉ_oatjۂ yd0^&zcz>4vr;O>m#}2%7kq5FȺI/2em s@- 7~ZrJNJkj _껼U-!ivotFZDE]-T3as12t97^O8IV._s&f#n 38ӎҟ-?R~ lyttT>QSUhjԟxvkcDY׈sQ:鑞.ZM8jqz26aҨ ]p gd?DV'm^`y:: m1U\)Y1`O^:deԢ1Xt}ZsK7sG蘕NCY~b|T7pS$GDъjjK^QdtFL lv,tjkjQO!l#dx :ŖŮ<׷taQ"@IGM@bfsZ ^@c A+#Qgw2f 70wGpxj>xLà4uv;]MM(&t\Lv$9>}:ƃYަez:RKGGxǵ]8>=Ilj+lj |ԏTKL|f#t~1,Hx-[׭&-%F^L 2ȫ nF~Uy8.y9>(Dg T:<#œ҂n9Yh&:df r+9M#R'r>j*.j:N\ R]Vm3p3if )W:B)A{EЩ{*۽/VLZy 8oqY)Eufg+lJ3Kî}D-o3:;1HDĎ,WUXa S؎ a>K28;gkx*՟1GXBy/ܭ˜l"`x[ve[ySȤ ?Tg:^\g0W C{qZɈj+8r :Ըs=qMVi4)|>Ú?DY>O-yx>4-ꋃ UFuARyTh{BZtR߸ܢ<;NZ;~/'s #!iBG;w.ɏ BNڪbNThU$[l-㹹OH5vz0JaӖm}E!4_.DLq{Pi5ˋsY>s0qQ14NO#XwiUjJ9\\G#1t NaR{qH }ՅG-ӑӴЩG~wsnCؘWX>mi.-ƣ&!6׻E${]  Y[5յh fLͿ /wP6Cbݷ3s!L| S~@zyόWśrNK5+^}olLb*T_-̫pORǰV|5/4gޜƁjivPYS8&}Kɟ7_;ƒtg쬔>dk11v͚΢myuSUUgPf}S|߷?ɾ /[ijPud-55mkÚpY|w_Es@ _ϜKSPx~ Y_~v1&L)`UxOYuݻ_;vj+oN^Lz;ysrV,O7r׾=˟<}'/;yo>|/7A+Ћ@{'<&ޑ yN^w ^>L\{ݿw.Jo@6n7qE$>At~39Z5* Ϯe0CYkR^S{Y_A Gu cw @N&Ndё[e*wxNDس"1OEլ+OC.'o#ZQ_xXc~m ]⯩`\B6J=Q-ĸ.Kٶ^Z}iOCMʓ؛yd%+.U0ʣdVtfT1&sW1mpw<e]k9py̚p;w w󈭆'OR\Q}(NӔUŝc{E 5 _t)6G=.{U/7nw 'q}z%>|HZ֢C]) +ׇNˇ ~9wwp*iYmh}ƥ{9f5.'<u)JkΕ(ػc`2;= [E&=\})ᖡ7.>0I96Be:3S:@ɠQ\1GoZR3_j{ 6|ˤQ )kM#duN_N8s`eu䩛}{!4]2G_blq%7wJce|h 6mrwֲ^do;@LZS&37&[8lM ߟ}x-=zL'3co's }+`cœ Zgrwr) 2ij\*?[F #j?.ߋYd#ГifEXvG:T0CwHHkC]NSc9p vt[rcVau8PBá8Qq^dh:Wg}GAV<"6يyl+,a6SHiU&Wc6UGk/'?;v@UKOm8c%.q Z+ Vq:<ښ+c BOZmݻqwyn rWwyubHgG6|\#GS S}1͹G32q Z=B Z7gd` .8Lh;`#Cc1O:ZH]{Ƽ/Oal wǍA8#18\+K/8]pSIχ8/,eʲ ދvt :N.%޾E0W(rlin{aK3qZ] ЫT1WdmʧӧK#7|"<]d߯n{0L> !6W #%$+73m<"`plgx jdRJQǶYCg{ DY\z)w&;CoSs~G晤7H]3Qi8Ĭ9Us3k&5v4$_wg&-"ۍ<һ!~#G#,ܰfm40}Xæ"/=3.AZ{l?NщܷP^ݓ}CoOQ!N;ع|1xwV?!LN`nQ&Wѭkkp׍ChTy ՙxGr]`[pG6 .'O5w%kF!8&pS(D̦Q!=\e?>uEQTůw}ޟ˗ah&l;>Y+Ӭu;<,TO⮶ڽK D f`EB2Co.@ø>ʆçqg5-bPt{cO]hpp'l.AGmAu^;~Ve\ִ O hlwo7xGxy 0M(c:c#B@Ky&~R<=𤲒(mǘ^]k֢V aidM/z4kLlxJ6{-#nw&\Ăq䝼vkOw"ZCxw~z^p!Շв{:C;Y)r7F`o-ڈOM{T#ʯ"n,4M6]e5:WgE>Ywl1\kR rp3MU,ҽr.b {-޳ U͸GCyQ'="-gjr`bVg/މopz4õ?m6f>ڗnb5]x,b(dLbK_05ݵ[ohAE*(f+Ӊ,e\X.s'99\GY^;6p:g_37dKwN.a*de;LV<2̬XUIۀ lex7nG|3o!"*6rm0dq"-Z9{4!1<2?cl0:5aҍ}iV?xeЪi}BPFaQ6-T)zE@cGMh67ÅL9v]D3ܓ*b"OG{ u&^. h Aѿ7?YԚ5^:=0cA&u<.zvʦeF5|TFvnY]"XMs#Zt]G2u^ln.8wɩcXlֈ:u'lBhE];kv3DU]섷O1IF*[&e_^~ ET aDܸۖYnT\;[BN5y ۓ(ԥqUV]`ޒ,Me.E7Htb12o٧(#załWL[ԋ:k}h_3{⃯>Qu5k_Qvʩx2a(_&ZzQytt! kXjuĘ^&76ڊr4LKU:O}q:tNM\c W^}4iE)kig٬xyVdnlƜ7 :q#"j }W j\iqo IK(~&d3׋ыtgm}g@a8Ў V6sE0WLyuXnz}D$s|<ngϿJhoC^1}'NjQWP6⺍>1Rn5_)&Z^OxX,O>? OUƺ,(\{Q@Ul`0RG>`꓄dfֲkOzϞ8"ԳV7j ʄ|9}* : NH{ åi%kɫ*+\j@CXlbΎDbCYŠڝv}o1/: x9ͬ8P`Ċ%+7Aq\Cz5%ىXx}B<ֵ)Z EUծ5yl~r*9Mzu}vpt@tۦeۏH~"@Vcwpiʪ9Lq/ل//" ]+ٲ^>k]?g_.byz1`V h0qTjyՋAl\7v;og]v l-Dr k(mB3TZ~P9.tugUvywm f9p*Yt0hZ):5^SZ ;,:o_xo|"&Nv c'CbltMQk7* (#z3yx{zk&/q1uXo:ϔħ^9|m$qјG(5?G˾X? 
ӏX-ߵ St.r XU7宮>,Z4$e؛ѩE(e IfG;wAg?(Q"7 1!a6Rm3fX_}7ۙeTS4NwCYC3~ɋw%zS:&D׹d]>\Gp(5b|Vv%ʇջyFcN5N00yoͤ_Kc8uo$5]5$K al|-Vu>ٞoΤyxolKam@)2,KR#]PuV5il]z9.[ÉP [v[eRйG'oUoD|BkP'+{0zC]KLl<||UzZQ޹˶d+<s0FAKơ௮ҸKO&tٟTx &QGF'иKmFϚ[\`rM%w[ZWQZGkz(sP]#n#B O͚'r.X˘Čx{Cmfզr9N7{viOЖ#kIX˧s2VΤ"5b'I&w97]voF`S:~С1 y8ϿFI+2.Ywzk:jIn"Vh,Κr岴:GuwF3m< +CvcϏ֢̚=.0T؈GLGzѡ5s$Wk517^-"o~CFiֻh$踚ڙLjc]i;9s,Oyu; s!H!jxM,JB)ї!'cD^Zv("C*eӦ䖙oԁs䜦|)]}.5NJdl<G&p+ mzxkv &&qD0Z7VOՓ>n5vb#AaѴVƻ#!#33p=!TB= =pҒPGdSJ`H;׮2!e!-V9_-- FFf>!Qxۊ9_椁(ϒ\*5,7 բPQ.6 M4$^|n`o?Izz(U$]ȣ`t_`ƄQاs$ yIQ 2V6)g8S\I`*-Hε%)dk@3vJNt;\<1ѡh8#*R J33 W$ C1*t֣I >w.(LԑZPL.ٹON | ,jZUFFQثJ9r:M@"eiT7s a׬aXUB^AO)m(,Z@^<1%*")( hZs\Qu +08eU\U,':*EyրXԢ3UAqQF7Q<;< a$:*⅝9VÝԳZ1.,Q4:E3ޛKdכ]F̜e[TE+)sPJld$Vc1HB%;+fmuNZt4 KOdD{Ǽ(dsCY]Jz1UHPxzS1aLf:BNȘz al}ث8op]+nDEw5|UtQ"]̊Yĝ0gTeG>agY B.7VePϜ[-'ظ%:')'Q7#u7nnftܞ˥ݐ#۪>W*c{79G׶ËrROZWMFJ_0Py}^xмIή iJ(_l _1gm^#J،dz*IB {7怇z'ɳ׋m'mtZI:gmؑ>bH<.!~WCٹ^#_ >J*k Ke&nLc}u/"nME{wW$]tn ًCVi`nwQ_۸߈`Yu/a1+&'l2t܃zG80ߝQqpjR (|9aoap+9[<'Gh/{81C9_ս/uZ撫 j5jc9_&XOG#UHGJIy5>x[@ࡢZV8e78)(!P?2}VJ՚k!]bӷc'˶a wRLJ$RPJ$D"HnR$D"H$k7ZmTUT(dJ$D"p?4,kfD"H$_KDVZ8}x tţ=!D"H$T)y\C)H$D"CHA4Y+pҡJ J5H$D"_pZ2i*t:liҪTi4 LvNu  +hD"HP%ġR.+?z_$}'>C4T(">Ry5ŕ WzܢU(NePxJ$ѩUCͳRԊtjHf >>RD"H#hthUN6P5nQ8(5:D_\\N~n!^*v׫&*E OVkGuXDOŌC%N+U{"&~zT8]:bnd"XZ;_( JiN ~~{^ Sc&tf5J$D"#FԎE2GdϟBƌ<Gά;ÓC{- ӛYH5zT0%1bH7&?Đֿ?wǜ{#7d(-܋^˴'N> T!>KOc(4_7ԡ47fb'q򋹜ΫJ$D"HAk(T8 |t& oœi>>jOR(QJܽtaz]3L#Yz.  5WFT YΜϳ|_'& dә|A̸]Ngaߖj6$}Qqcto;'+c's$ois-qXTtJ!GAg3^̗hqD+%D"HPuQ;].Cd}`Z4GaT(SB9] QXdٌH$D_ fIn3_ϓj\9[TjAfgc^vTz ޞrY>Ex|ܛie9$Ov ‰JEBΪpW2k.Kya>UjݥP)Ž"ʵtoMJY~5 !4]i=| @q崶kcv#6NJ$D"?N[ &-JO5$4q޹jTch'G|I:̻S{!- z5FҐ|CcƳxk@Ԅ7'B!7{6znw>0V}hG+*lf;J?)|,YKgkj&^Ͻ:~Zw`($LBT=(M:A~Qq8䛅$D"wzv^ÁmXЩTJ&H{ \"H$fv8_.UUUVEeeeb ÑUV滼j=IK| n{Yx3_` *V T䰙6;FR4TWcJO-*LV@q/o/JjN%N鰻ð)uj&+5fy{QWt:p[|V*ȧoUώYϷ=?9Ɨ&&V"H$NP[m YṹLPP$'la][4t=;ᡰv'ԱJ;C>^[=WĐAR9u`+y MD"H$RP҅:Huld-'tyF5xh`U`&kG!wxK$D"/:TYp:l/˦j5-ŤD"H$vT\O"H$DtTPY4-D"H$}-%*h$K^"H$.F JWTjNjD"H$)XɭR"H$䟂bu#5 o0}lD"H$먠R?GP&UU)ÑH$D"G߼GD"H$NYD"H$?Ÿkuy9YE3j6ZQ=םlvJ*/n #3SChX~b%D"S8gVD"H$f.gI87]PI{ՕUVc1jxŽҒR eWTQa\Wٰ~b<()^Bg;k0UTY~**fZ(+-h_C0R^jjq\,rݎD֑ٟgc)-HkH$D+(]˞ ̶޿o,];WNL_}y+Or]މ۞M\,wJѶ=.Op{~FȐ}pÛ\!oI^}չ?Zs1*Oo<ZK+ݗ}z1/1^]2vnԕV=5il:鹘ĵS[[H^#>}oy-]{fኣx❻4caM4L}ˏe]joGH$D.(]6U^\_K{Krfڜ L;|DwhWrOgx{t{.cqkc =LefߝxG.j3Kf&=O]e|s*9pRxf;Lz{rp^Ex<Z5ԕY9xl۸{{zͪ L{ꏙ+oS*Xl޼ [!](є]p!f["H$¡|s"7xuƈ^Qŀ9z,jn|uŇR<W(8f,[&̀91?i xE u5^9x*uDze,b_7mWI {;IMA Xv*8]ByQ!#yÕUTǷ[{ 3R]@e$8 Jah¾8=DSiD"H4xz8v4fFsT"Q +"88tP;*d}wMC\DB^;rBH֡h9020NMJMEV;O@`zo"#f': o/όB喇3^e-qbX3- 47Zҩir3q$d3JS΀'c7btk/iD"H'8+.vWYq'7g~uUVoÈ_>`(bEVӼ}{(ѼtB|]]Vo؅0Fx.}KNCo,F>j0})+]*u#\-:n:Ψ+/{3e7]3VB4A/>Uf-%XÛ 2ڃI.!z[t_Oj rcG2GIZR"H$Eeeeb ÁMvZY z+_ |!=|]7羾'Ч}s_%3Q\ߧ/}`ήN_OL$$n0q%ϊݛ#Ʋ1:h {Gos[L\##ou#k[.Mg]V)jROJg8ݺ[Z2a<БC|cu?j4#{9XZD"H$ďޙN C5,(O%o<¸׿vs`3JN fFhxg~J-ex>;d|G B+س<.7=q }<mccӏ`|{^{Q\ Ew3FѠc'.sf=L&޿<|,; 4vXO1md;fI"BSV2E֡p9G[@j_(Ke;NIKH$DC]yaT#FOxCϽv4ne= =B9e#FǞkKgW*:?掭\0^$|uu&;i*Rq4y@vi" ET JJuIUzI@yU"o/a|_ yFVӲySb=^mF}0^{fȐV⪕JQw0y4N۞ #ωuou@Lc =Na~.WaJ$D" J0ql)WN +=d=Ż}3[JKAU^5NVt?w}x,og:Gy*K9H=Q !(M(u.o`&L&!2xy߂Ca $*0UV~S۾j]>߳1'~fڧkkܯt\Y݈a4_>{Op}Shq418a7QUp<<$裂M_39ufm N9@e?^1ب9Sx'iٸ)FL8mǀ[^{ ,cm v Kʃ:3U)9ۇ,YRܛn~g5Uvvf2ܖ#H$CsM^3l8N!.*:)0rPʂ,"[ԉ`s.|b#6:4kAdp5uӫck&Bn>t҈=;C9<;3ت)NJ{yhD7ٸ;9~E"bZѺy$eG8t}!-Bq*}ԩ19i:ЮcB}5xEY[!_)4lђۊ~mBIM!Asb/+D"H$kE$gD^\@tƂ]H$D" Ջłi=>cH$D"\'en{D"H$k|D"H$)P 'p =E zvVk~S)¸G9];rȕŊfT\|:EVo˿: oeu\'3g|&!.4Wn]WkΙhܶYvb1D8*._lIbmA/5N(7C$u"7pgSE&/b#Ug=_|?x|fkN\,',ɗ?KqYtڝ^zѽK'^zk%2w2o8u,p~O!B}8i_O.lDZYӼ sj9fΤp>w<£1s&sL5%bxN=];e-r]V'fΫxzҳ.jM}ϒʈxO:ꋸ/jf,I.Bsߧ`--7>!i 2ymqj<6o5.X}߸yXrlǹ_~ә) )&dfz+\ꜿE ys2w["WX؜Q]Y+Vq0 "s?ɶhbՆɹwb 
K>ژHﰍYs<5(X˘{sjJQ/RjuwKӖ&^J7ѥ4e.d.d?@> Ni%6Mc[VYtu"i`/`"c| gܵiM}\0ҥcIvRѓyhY֡D;E+ giZ=?|ˎcY(3bH0eaQ k2Oa(gdW/d9My FA:pۋm붥'܋2<^Y ˗,dύ%L˰;G[6k AcFa{Q𹓻XzB|R׏'RؚI\_z"#"pViK 0HILLHe;_奠82խu=ZnDYv_(%0$ͣPӪI#b.=ރF!j.7mрA7z&I+"^ Az+'Rd_zFϯƻtXH/j:TZ\Lq^>D}4jՀzqggϖ_*&$T˱!yYl:QMq/ÊJ{Q|ZRSϰl^®nhK 9Tە̥L ;cPU2_ $ _AlK'#s1L|8q|霧gϧ֔P>22`=lLAV9g/ᦛl0L.8cwɢ/ЅÖx=_Nx9ek(**6|Q׊D)iW`,I2 ks8yCL߁I:cù~._$-~7=Db?|t{.}GJѨQd[C'~,֍w][n}y=Of2kFOG^ԙTs3ƺg-2:EEyb1 Hr\m_UV"O`עg&[:<+KycL`w^rʸlTwڷy)L\tĈ8%3;oϢ Xƫp/ޚ1_^mg/N'sP(|*SYEuI9oIZy*VrMؼ֊RϞd'͚K4C=o$vK r8| NFZm5! mUNO n:mR;Uj! !+ +(1,&b+d)X@QKn !E0Hy܀ͩ`\QCP1rDoTxn>8daoWQ:*s~?Iޝ;s̙9sLclGW6se]9_n=Nl.5lQZ{fcRݕ;Ke]%쀱&WV!V$?#'P5h$nQU&2Rc1.^?B!*b (0lI=#dtb}O0O_bǻ8#dI>5c qBR@a=IE8!z+" gOOd{7(J_*}kޛn-~GNcȭ,~o #r泵V7㋻\>dI8ky2-vU%[A(>dNH|=yerbNE#oX^Ʈ<Щ<+ N_BO7IJσf&Y`ѣw:$_r̩hrԥ[.H 6,G r\l~tsT6yup}5mV>19VdÙɩk o%L$&U-]Mf88ƒ'cFlZ&>3y+;ޘB^ye=CɄO_)mZGC :ANo/~S8Qpp)9emyyuXu兽0>jndb1ɂ9S9`J%%#[Άe~'h[})2% y-: _̋9ټvɇ^S>g!3&m QF5myݮ;\#k_Rv Gֽ(ؖwН>jEfN7a|,~0/|FCX7F uHY@AQF4;;.u8ĞKoo GU:rJ(*|~{WNuZ=B)煃)ߙ͔ g4 p~]pm:OnQY QWϒék#W0']^SP{brvbk/9M;pۓMin.҉Arbg'Eû#௨8OMsEޗۋoOH>vW3YǓ=H*c"F#}\<-Og X3LrN3t`@a+'O&mD;ڸ(\CSL~ukݒ*A=?`+" '28co{}.r hAAbjxC0v,qyG=~ O>"kM9hbmFQO;Mِs\pWH]Tu`1ėm3[W{^ҕg gwiUlSeawO+HE(ݮSsw:keCИ\%# -JQ|K֭},l.MR?O @ Ӈx5 Na-ۧͬMyW1+QT`>*I\'poGWlQ%f*;7C^4.:Yg?Pgo B[Q)#&h/딼^v7M&[ebVj<³ n<QiSPavwY' ZLF~*"J}?+Oҳ_Gԅ;f;)>^IXeڲU(?ۆ%j dg1p]Bփ.W%Ȑ~i'*%ےph4 f%j TX eCO`L.}\LZB2.ܢ+ LBj,$|iTW]_5j w7|ђPߨm:rAd;OXnN8vɆpߜtuο!]Kf *dzs~w&ل-V)o='j-,_{ >_8'Y:vˣ͖Նz¼1|v9~-*@NC蛵j% &W~/I5嵄|?y w?s;)NN߭;CYl> v9)پ~SoA+LO?yO?RfGeB|yYhK x)֝+Ce>'9.jf F2G9 cI2r.!L&%OMBAE;C %:#ًc To@›ڰ(9Vh덅0_(ۅg^z,eb Ν8LjYrҦ3K'+3i ֝$g?܂j@dQv-0W0K^MCs :;SdohoСvɣ oc]o_yQF]]Ʌ3єUpl5¸efaȌ A+& Jgj[ 婱l([ю"cݿpJBn8HefZ\+O넑I(X=ʢLbI.!LP-W ĿE[_}s&Q01mnEQkѓ k -+K|;TɤikIJH^"vdTRWWq).Ztʼn U/#;t6ӜYN8Klw_uF+yl֎k&vq߂TdE[ HTR!Tjp&DB۴xz\4w&/~Nq΍WӪDG瑝_JqOfڨAuOt_,Fx:aMO7"f !HS[ԱZ8V(.V gPIٙGa^+{"S(nnE$R^-B83Ze;:9X@2c\%:UO{_'+) }ol5\NH3^:'waP ie-#[q;R*8JAI99/eۙڄ|>M;wi9u%d_kc؞XL\lktŐ]Vg7rJKIʫVMrpu7^c>?p*+D_ˡrYSX&z'T|H1 /Xu̒X\EyM% 9%VNb^1/ 4FdT# g豵7yybHgh֢jr jR^cލcQx*PW ;WJp3 ),/dTVS#Ph3œ9Ñ2"}?T%}sF͠@&];w/y4\LzY ȕFZ2w`\ c޴ooX޶$ _?{4bּ£mgmiאvk;&311|rNϢJP'GMwf”(ysX2p}r<ȴ>fCo|$\]YLmfL C:| @8N<:{1N\wԇ?Hm$b AŸBN2wL$~ u&v͝Ŗy~^ҞK8^ϫ ŲFFfk}8q;vg]>%l⦣ޖqۣ(`wGsCG<ZvNuN87șĕ[!O #*<?O:SȻI-$$%L ES ^둶vXNl׎HD_Sě?/Ȗ 5jzIb"W̆HRY(w͏96R#|,9Y.͞`OkcJ:W(۹0Gqh}]IVҶ}q>[,0#D w1gN>=v])& Ix 3k(j֭3*ڷ|y?sW]qVϋc(Rٱl _Ͳ~H^:Iʽl:W@w7S5y įCC '͍˾Wc[gfՒ䂷s`p<>tP.(;qE<"NA;,ACZwm\uѿpl%x}X.--5'P3Uzd2a4zk&|~9̫lV )o~_z^'zdS7+Tf^FsT(Fe3p̼Q1^tX؍P}ȼlPs)3t6;K|چQ ?'}Μܿoo}Ri~oxs/LdlZj+l*-4rL^ȼR^^2̞>Ssݤ/1(M_\^5~S(u7!7/6kn]d./'ڭP92ٟi %5za^yEI-?>,|z6{U\lgYP(5+=l7/ì{re_Og"Ϸ=z,K9y44sCh_o)it4}g's췚/şTqKN-E!WPцEv"f},,}nRiD(u:.N`x;9m`٪w7g2I)$H I<ɮh6P]Uy?h &6f`d/?|:jz!0seY^AK'm d9rGI$rlګX$v "w2T%> [mGdcЩ>lQYVHGT% A $H?=?\pI5 }19ϕN A $ 6o!*-ʒ"~=GU9~*7_SVHvvU5x<kQY72#;4ԒAuufgSTZhOde٤e[l驭,Z>ezb: P[h301kl{kM\E%RW]NF:I $HO$ q'70\](zYQǪ3\vކC3i'׽¨Q6e"ڶcҝM[wdu<[(m)I-7l;kdKF;:kOC3yD&M"OgC~ ލaW:ʠv5l(L <+ol=&ʽS&Dȶ>8znGM%H A Y{~kV=ϓHUװalڸ:yruG> _O T|>^ooԉ Q0M[+O,8?#A=y([ӻ9'5'Hg%؀.$r@Qc|?]ڰt|{F#:;fL;'\Sm㕽-ȯyC3{GSJl>-$H A(4$̤0ewnx+¡5U? 
i xv^Py7 k␣\ (5]XY[ɏ.g%b{LNf,{Ӱh6.SKT][R?Lfq}xȫPVAAgK@Fi,TC.\وQ'.}fA(3͸7190 ̲+Ͷ47S+Hoz3f$H A"K9-ԍ똷]&~_?yn>9ek%b-'Kb ^{)>?ZI+3xlNȂ7j83ۦ=@„N!H3Ƴ3}:#/4lԚ\ P@TlB<[9ED%ס'7}ӧv2e<|R Uf-1 yxW3J=޾2KRU$'q*dnQcgn8ZF:Bji-] $H A 夜ы,sȎ m^Sy $H APmR9rx?jm^ A $H>7 3I(A $H_ $H A'4,[mpFF_ hL&#,o`3 on&d}`32tTUUSg0G3,Eo4]O!Lze[Fӟ/ v$?~?4Zj!A@O?c4I$ȅMԳǯ8V{ 6T`r\ѽwn{_pz=ztk+M-M ְዷxQZ"#B2 761ǻٽS.?^}8)z7H?C4\4p {袋xougM>v^ʝQxÕR"ws;ֿ; "bR qv5rDbf)N.;ݠf%]dϑ3Ħ"sv/>s8xyaȵԕdRׁ.njoOӿS#߿˻Gr=6&֕})g/OT7yQW& nN7$%y<'uݮTwQN&ㅛ A˥v$€M}ˋ˙d}|q!+%N_7 vƤ$661huVQW^CG8^aGkC?9w.]LCӊ6^2NGjvx!C Tj4ةˮ?kjji(;ȸ,NOi/+ s5L\~N ؍aIjHc#P]UEᵅgsʱV-֡kԨ4uZ+"Jcj5TToI[[[C84D[°E#Wע56HH#RGmӭ[Ozh;ֲ'Lr&%J!#]Vl:mo>JZEkNgiCxypQY} wi9iYeZR\գ~\ }g(ANl汏֑PVGnFg/`hPjkdurGOKZ0kйQ:/vԔ|.LVzZ;o:^ :SZu6A8}:iƕ 쥜S<6zJώA8:ٰfVOJdXŊ2:i.!yb>lm+,{LB yԻjvMG(Qxpb.ud$uI֙|7Z^Vqә?J9Z%jPYmd9h+Ws6WI[NK:}sB-W[ւ;pq#^}= RtOUiRxpwMH)cgt7jRx6r;T2zU'?YRcdx.+L+Vb>|8:#p\%3vzu&1#5h_$g"/J̞ w pݺU&7?*df4;2l.H%5:ڄPVPݏb=)bake< dڼ糑:.< h䄓RYϓpٔٮ B FnT֚ywq fޜ0fIT:3u rgֲSlM BM{6/?G?ik%M1ٰa=b ڡpN[r_:N*5Z'sgW?W$_7^X4oS +BtVr1NAV5JPmmtcT~ч89;3 7WV>WIBR!>zʺaiS+Z:AvZwĒ8c#?HE&Ԧ;Ŏ[,Ղue؞fD!AXw5ղy^Ea>QUɟ7T(fA&PXZƯ`Jsvm hjrJ-M#sZt=–1g~8QãO>KCKDt"y|u%u%©8ypYiՉ )rmy٨ROrף8mēmt&2҇~ ;!fwF)_ꂕP&8@dm/=4Pz[WIFu=cf) B6}5t qa4.~Z)cVOmQ<},::H PJ#'v{>y&Na@z1 symm7?FՑ_L54f$32a̼;tes,$%W7 uKnF@pK};O%@QGA^(?R뱉ʢ3ΕW8J͙TJ}]guABrlh3/=3h}y[ d|z 31p$̡ݑ䕕nf-ͽGEw gioYvoc U9 ]GO;3?~>F%..n;p!ni"65pE#,ZN5;H/;]xL^^:]hֲ O;L9C`&ȥy7-ԲYTDC E yrdZ:_(3'Go#Yj Th5մ3dV^+ҡ}:i@vWf%C},\_{j,Q[J&-L]LjsH͌_cዟp*ʺj`!C" CX"RRXهl2ݛM2+f84lq .+d.Sx8'F^O^3ښxCx A" j pjܺ[hX33t wVdS)&}[ d/NUI8$>SʖͼL|'T:bm4>7>o"_[oĕxyP\]*:f`sOZmBADL2CzσiAyR4y|rBT6x^T[&PvoFj#Əqzag̡]+'6=Ypv^LN+_2&f/G -Z4 GchnOqIӳ#CZּ];S]LxMVAZ)-OW'l/ot{%θNv%4Ub! ))d=}3m?T&Op*.҇Xw,\l9^-pE5WBW\]pqP^葧 ⧇\wTSZ'+Vs ERo`635ׯ/_Sbֽ=iֽ'VԠ+uR_Zㅝ5BL~&8mm%9hX~A)Ҥ':֣/A[[BN[񻇋ry,*RIP1et\2+5ueB7kߒi['JM 84}:0jtET ڃ\uiB qpo1^l Ks3Uu:;ZWFE] IbLTz b\{*;Qo?wT⧋h;NM,l+~$Nە;JSۿY, @;ӷPyd\|0c{~`p"?u,\x uOhk.!s04Dɘ0v^\bR6h1J((U .3˅qvƒIO!\aɿxJϖ<"Q%>e {0f5s&E'M@f¬iBn2byX2AZ0uPT=J|,JgD+o4Z 3Za kJB)%IJ-\ 71*K" deW4эtIE.$ꖔeemJ_O'r)Yr,Qd uK,?-,n֊7 y5TM^Qˍ"lחlӚo} oӞӼ+d*ό EWE~|,B2TY< e}U)ÿ=wLfMssp YυUFP'9syrFe',NN-9uݙ CMYU0tl/Y}8^TWo%|&k\!2o pT,H%(و,0 }TZX/4ѳ "sNאSYI), 3t@ѾҊBJųjRKVcS,j'wܜjS~U-coA8Jf%ͽ8+ڮU G;pL-/yo!76u|eExhR3|N8Sg6Hׇ{fPzg=Z?;w!F ΦWзەhy/?/.,7~=<&R/.e^8@?/'ˬc6F>EvsNB !> EJ3۟Azj|=sƚa0_A#g1°fٚja-qGf"zK_V ud0: Z "' ?D~%vD%sS,&'q8>'s:M #q,Jc͏8(Aj~!eu*zqо=4A9XͮGLW.ڷ'\psvNGGaQ%!mPFr<ב[,"=ub/"Am15.ВNRT,'b!X)"Hi8YOiٯ#Ul=̠1ee%#2w0ɡ "Μ&ԆQ :֯>]гs돜Ϋ?۶SٱL=Zyq2|K@[sdt5r"D{Qt  @t"YV YLElMFz& s-p!7 =7GNe$F㸵N')ar*jA *tF_;s(!ӂ\??gLZA1G:sp?klb^&;Ptv ײ-K{ E:9mh‘Vod\۶:wwA /חsI՚q]\ qd]\}$ڷ~EDbGN Ӿ3K&t'!E 14YFvbWe{ יD , ]y&'~G$yѭ;B?(13q m*'7l!#ݻRz)u[O{ k>u2adl!#B.vlj4 u+gF\"|'+Xm!~,[P#`WI@\^[*gn׏V|&D(]5ΒL 䥉8X*spWn8(\j6^ Wm1љ9_-ȡ9{ڒW!J2rˌq|ulСUӷK;JH~]:)ػiL9ށ-۹%@b|-qILBz:.x"ӪhƻX<uZ\#XQ*!uj=A-aD^Fsu٘H!&نVҭ ٝ];RϪr$ޢ=m}rz2J/DfmN p8+ _7ɨU^ p +R+M" 5z.^L34 ?'%٩D%7+kπ69e&sR6:-m;t}]* ҅.fWӯ_\*8r1=&ܼ׽R}6R{+_UfҥC dBo/Ӫ}kPlNKZ@i.tN91ݻKk:J=qaCNǦ g@GP+u ]UͩT,/H^k+9r>*n s>i8?v>j[OIjFzg8'E8 \.\t:y< t2nJJymվm''ѬS[T5iA&ԂpAkr9q&Z[ ஢֙M,oէ;*ŐhZѿS3LB)Te-;'b v-/^wC{FNE~6/P%l_o㦭)bF9mڷf5QbH-E+!V>֥886oMs4J,ᅯPJ&̗Z $HPZιVka.sw5_aF]FsZ#&oA-_8}' ]3ƃb>=p=5}Z KT!) $H ßC)g_<墘m}mȽlLdiLӃ[hjx2}z3o|gnDzY%a(=лǣW|˘۬}XP]ꕫq8H um )D#7_W3^{gѵg5BwK)ERhR8-Zk@D#$! Yov7x[{mkJ{gyݍۯq̻ `%K&h_Y³/3o?#:c⤉80yu-YnLտ4s؀M?հ7`ex[֣[v܆IsZp_8/B[ẆO׿-_ \i5*@Up=<_p~c?L\ԂY.>;aži`0 Ɵ?dž݋q*>6Ƶ!,(AiV&\ `0Rgg(!I Wp!a˼ `0*gB 9`0? `0 &( `0dAI %ʯTB|5 `0 ϐm8u+H,zٿAچ`0  J 78[B}tӻpn&:vfBՓ`0 *kkkR:5!)I>׳Tn Ed*dϏsy4H!)XK2 `K>29> IshS/åSp1K}ca\aQ0b2`0 2F{+0}d/%E"6W k[WgR< ~iXha0 ?Ջ `0. `0? 
`0 &( `0LP2 `0d0 `0AG!ٔ`0 OJz4HLX2 `[v"(u5p0fg0 ? e0 `s`0 `0d0 `0A`0 ``0 %`0 %`0  J`0  `0 &( `0 &( `0AIGYy'c Eu__ 7w JEm |QR'CJkJjl_g+7(o{p7q5$&w/B;ήJǼ+@U :< ~Jؽ{.Dy//#66ۃ}[,[ HH+WoX{"/A{%OŁ lQLaVKZL^J*&ni3S*XDb0Ar:"^(HODfas4əoNMV(>|oB  C!3 B~2J!5A+I J :`x0ᑑlTeR)Wb5$"^TTDYHMDHh]pC]-g 0v)4Qq~2i/KZȤ!SzIXz:Y3q,c{GSzTٵ=fJ:!&-$aY /n_LMz vnM&|J_r805)+W 1w)g{ґ,u[}HE5sqv&=qC<ĕv:M$uG#EDHz+=ҁĽqv'@_oDy%8nƦ7ߣ0 =e[}];ŝLNxz4ywt|͡1rp%[cHNN>x,qii@ "QI+jöɍ4 )dƅkD:KN%F~D>]U[{6,Ό #nm6mɇ_'57GL"_@<:}ḿ+sܻE |F%u+ҩ]K~DX$`Bj560&mǐGEnn_D> QykmY(Ҳ+rNQ_[RA6/I숋+6q1c{g2ezV%{: Y~M[G2HԔz/=yOҹc+H qÈ{{wiVf$N"_ΟKZ7WID)1dġmݑLZԿiX_2ՕiIH?|6ZH~Sȑw TwBV#i!.ɕ4p;2w!uC y3G͍qkr+Kީ;.]O DNd̻5-é`$!ݗЍC+Ҳû~̛v~8XjOmINd5!ɰAɼY[ Ҷ{_=䝁94砪o* $QR#-[/] u)W$& H~~~ݯF!=Rw;WUثO'oM[sWԒG6RTudAFHRY/&%I/_y2 i5p* J#W~$fd:`<WvLL[HyoOڭɕ8RG_:rHI]ޱ+EC{?iMҦ2y2|N_μA}oIrn1ץHÉ_D4cnm&I%D\N>L[Wړ϶%4bB),Te&luydwIvDӺ9D{&ɂQMֽ*,igMz@]DEx-{Z4c>$qYdˇCŀ@ZrX6  d^/g2uϪk~7 4cr`:n5 "B;.T呌*"ݎ,E ˔@FL~yZ%`MlA ZBk"֓inBFDڻI^D*Tjx]L,ɁD!$~08L"'}-?&S[Qwfd}}5XNrj$$hBڟ+?.61'o<"u$+'*ȏs}C轉eVN{4!/ IdޤTH"ȶÈoU6b3DHynnqE1 }9I)Uab}I,l2քo\"3>}RLN|.i9dQnHghR]SOL,d+V\Bz;HL2 3[^T]7j c#$7?7q8N{w"#yTQ(bM9AMBfl8Fr˨#Hǽ0\Y5<:ݚ&EZVLR!U[WI<]2$i6D~j qHb`]эIu/q2$36# _P% KB!`c݀I-@:0]3S:tjU]$5exPG4c0.GKo-E/4n/: 1jz.F m>z&U)1X?}8f<<92Cs5%aЄXR,?-kkaa:6['+Syr;,v:^p4M lfE!"*ɕRc.;PH }&C[/[ 1i_lCy2bBtFDO;C4kiS@OcCF07wҩ5s J 9ks74_77?(MU#OPk`'6 E0Νjaf@EjLVsQػyZۺ{vJ ƸACpN"P8L]#]gv5anZ'\M> ђ.gi "'Fe@Sۅ~Ԧ~_؇ 1 ~3|xJS0Eܺ=fc;aas&O@W(x.#*ac}{c7!7KO<ΰutQ(Ϯ@BRl1\hf=BaJ{Z*Th򕃊4-azZCs}+X8w a嚯JDj.:+K|0{2bJh֥ehn@C`#|%R] !4қ>!O /} 7&J"ET5YЁ.Q ucm2>5hN^hW.ԂEM _Òz xh|*ʪ^@[S@& BÃB۬W/`{>#a7AU{fB$"T_ZR|w,9Phf`XU?Cjn5Z{P珣0}x!M&j7n#%H\.jcʤR(>:bQZPzT 5ܽoϫO+ BԕR@D$ϟ>aP".F2rھ oZ l=P\Q&=z8x6\y{oA(ʇm8*jB[|=\TNO"7\@PYIxKBM A^6-N1k)田q|݉7< 8l.Ƣc^~) bTX``3$bSFV(,TFQPJ3Y1 Ž\pJ a!p;~}qF$ TjKxK]CA o嵄=֯]t'a{}Q]ogTC/_1xPXM!f-c<:^aH .ba*ʡRUƜɻ TPHbVLCl),v 5/_4h&Edx)3Y|FWMn*[kޞ/,BH[qf"(nB)LJƑCzHwm jH<¼a0tD_uN lBW"xzG(.ňK x_(=v0۳qzt2oM,5a LK_OXG;"!:+=-4ScH;SbW^E/x@}u4A!,߸W/]A ;OA[`' Θ1- bvs{aO2z ڹ>mpxx^Gz]:u .C^Z%3j8:aYJazCT€Nhkw n##@Z GOHuԾfGp!=Q_͂Xٖ]Хpp`.o`HGX H)%5|jV)v zCmz(J@I5TZpVMDD%Zwzqn^U{>c#5]A5W)oǥہ8a/>F!㧹<| /e(QaGWЀ3А8^tb֭8q6,:NƏ>ƫ 59b8iqy鉐8`pWKQv;["0f67%  =}?ea/q%4_?&Uf?k^(9 o?:(*x[%L|:{qU9B8;;COg`Loy:H B|r:Ф!(*jU$Ez? p5OY\L=!2ז7T zF]w k눋}lg6Zk *뵰&.yOSS:D<՞QA(ҥ&dZ*v1z@KZEmvBOq'$GKWo\f [=mK ڋ hjQ~wBgS'%rOkQEˆCЪOBnX\ nNʮ[~hLBc0}Q +}c箘2?t%~&1 oB;sFXBx5n;4=QC1҈E󋁩>bPO}h jxGk4׷R8\|%0n \ MBs+D޻2X` p2~CPXAW[0E]C_ GJG3Ȭ;ƏC! HEaB-;j 1o[&E &W SAq1uX.3WDf^YiȊAt4w~} =>P]׈$_/Z9=r깉a[S/koZ\;0)GГluCij1ÑTԀ}chOUH( U46G3F !ȴeh d^W9W}Ggg3$/L (03чNd =x;Z#'b9=Z%Iqt,)oC3ei|C w\z 4tD-h/O$ Q|b +k00 nV.mge<=`cf0hǻa[5/(a܄!xHX1Υ`U)8w*QUR1Pb̺K㉯"*hu;x#F]~ .=:} fL8z]1yHU?}ei(]m S\1pȳCXQN<1ql/qY1~PG<`|BgMuCԈ,"ãi:T%TI@!vPwËڠ \&Μ[$-Ň"(t1~X0xry#Ye^DFhC=+hxN.x%^LV'|h jP^?C*ߩΊ!ȗx2.:#^2Τ3lj4&ȩ+&]ZB:FPZBKN3! Gi3Fw.i2ƈ҄,i9|fDǢ07Cf(_dP(䪥ƗX]ӻ! =wŃʤlZv@ ~n;J¶;> AJ  PR;ѥz-MNނEKXRAOEP ? p~ |J+زuѐGRar}Br~3ՙ}-( m۹ /,l; z{a'PJhV/To'IZVp"oaW !{ S'2}W,*/ccX^;ȮQљI4EfJh²ӷ8v>45h)֠R8''%-ѲcotB#,j+"g$zR;*;9٘4 ѱA';# T#~ll]0В>IMZUع41ʋqxzHi!Ij:݆6TDIP[t8 $8Dv 7Px!(J% <ZCbi a[OXĭC@e\ OT؉cۮ{bƐv=W(cz/mV}w+t}A5ypieo.;>v6D!Ni#Ϧ bU*Q*(aÊx^ 4FB 8[V.ـxhkcS~\SpjYC^ iUzʙr^:"+N%; ŧCشl$h/wo}=6n?Z=X pvD^ТB~>Hu y#vD>uk6Ο8ߔhi |7)MT|.5O'E@GoDKgTGH(Kiu 4Bh@'w{v&BĜڶcT%aX7":i}'B`&tk,8v,M--$߃׃$h U퓰HnspzFl n3k57\@˦61۟#``i +3|bĄ\1ӑ3Σ5oHʼn 7P(Հ~ǶnZ=uH.WR ._ɦwlT4;@&rphIǁWQػWpbgi?ځjCtjm [TE7n͘<88#34YΛ3Ծ|BB!T&cm(⛢-wNnŶ3^oiBu_BG)GC\y1wpuUD^C7`zUu$don?&ih߽2.L =[qQ2,)k?vm#DGwwGXժ{8J$F:@Fx=y.f{lSgnTpěH(k)+ 4 M %7V(VZQã,w1 fFA9$8rzr,.\@)ӨФkkq-8%4l^y+w \``J+T/fЦ6bx [vxza{iuҕaSw=Vx&ub1( Ա՛7 kw. 
N[ ,<:AA3E CpU^ ͞G'Ř>{<ޘc֧";% ut@fq + UXx!j9AEal8x#iˆv0^ӗjXN=J8¨%aA;x\vhqj8A ӾV/׳A|T5 1v ҆0 |Hn)pR3ᠧ)V/ۅiΣ~gxb.m\Uf\zձX,[6 ZE注IUmbe{.x+]'rCq9ms mgqjWgkO\:)qndzt˃CG}2gb#" :R4X| 6gDM= U}{_XXI3!' 1hf Fz~)nI!-:T}ҁ^O֓4xI= o΍HoXQٙG!5Ѹa`W +kHrPKOnFP7+jV#\|T^=WT]+r9 \t>85Ip([EzOƦ7Gs1gj?WÕam[4N(.y4qߘA)FH MGC䤦"! A%2νa͂IT jig0}eC/>4X KdL_R(,q5FaMĚuAZt4쪧>L5HqD4{)3 eWl!,hS8r-;A>x&o/Fw6LthڄOὅKj*PQQa׎If  [b :mh'IA-Rإ7 #jO?|@B%!=&X4VЛ{Oʢ!A]˖b嫶j[zH:ɑZ!)[5Gt=&/*jL2ӡrI|4dc"KiٗfW6YEAz#ąg{S! 'uD/)øebDPh?aiG[(u&N<OÀ:n {hr3//n#Fb"Ҿ9d13cG+ynѰ<6+~DH9^xw*hx)/M'aYPQcޛ/*g{®Xk6=Z?X ]mАyk1sDۮ#׮gP D c%,(hAT!)T"SnR&"gAw?٫KJ3o3cƬH:* rԔ S,@b)rfFd+T{{+y/&G#CEuh\8,K* -e9 գ "ƌ ZX"Zkr9 fWn߉z.\[۪0ʈBxeXXo9*x~,>PEp5S3PR4Jeo>4:-܌DO!Ѓ[W$YU^;3.I6N Kp@hH) ZBmluw; o[s>I2;{9Ϲ(K/se]r-ԔpucDb=I=oVBW O)tuuI"T_!u "DAVHTO艏{0E2 }TS <ŖބϛPgθ%{B:4+U(j2$Ͷ?2IIGr=_T&`\{蛻 'ol@sq^ItJãq_U3z҆u|;o!ڽJQ&Dp%WĤ&*δ4\_r6:^|UlbŒWʐDS.: <&+z}{%v"_tuws?,٢}u&mU92E%nV]jAH\ !xJq* kHvPxߧƨIt #.|gN<N59SZ'WEyhD\b%~kӏ>TImn'fNĤ7ޘ&L3飅L*5X# ԋ-,^xz/B|LN{8@2`[ԣ{g)?Sʵ:th[:AWzƗc.:F/&C HY^loCwoڍ^ߑ:h$LjO BJ~χ3 l$|.Z%4 =@?UEQ=ĬN5ŇHLNˆ.sO&WF^79s2kt@W9 . 7s;ېHʸAW1[~{f?5Iy:ʉ%xpI55PZ# ,5zgh'b"paݾbMB44DSueWJu'N wς>e,~q+10ײL2[mdCG[1f3u8<ݶ=}NQoT֭!1>KA68MI1V,Q]!ӥAc!r!ƒ|9'igdw-UKU/Ip˞`HRvsp# ;92ytc;2입$ky``P1-|3љQ)!<,Znayt{$~EAg4/$\Sl+GL Ue9s䓗uJ+JjPKM\k<_+1d^BFgS'Q:,&c2m`;e=U2Vw 6Vc 1 @ey##SmEU}U%͝B`]7dē6qQSGP}/g뽾QbAƪS|m>x/3U٬:(Me94{cd-&]U [FZ)<׬h IDKu`&ZLCB]t'mQÜ'0"1$0soakh%n.S=O wrt5#{m"䱍;rʦ?m]hQ.%+c[w4ɶYzYU?oE`ܓذggʰԶـRpkpIC"gk99gr{{ HM"1?|11Ḻm_s&?|>MNMOh'T~`.͟$)fսs4tTTUibCe~΁Ivg[N;lXӕ\el\Chxq tmJ)J\Y?U@Dd8>RO`y̾vFEpS 3r[-G$Yu凭tSm]= ,Z5ߟEG98Kw_R7%7WrsgmW~YEyt;kV#$JrNU.=Ο>|PBS?Ĭ1IhDѥ'Ǘ+p4s IF_!T YY ;y%9~f__{@=p0>nJ~=,b|2;qO8]xy,zqhu86Бʈ0l|K>>tݸ8eO㶊z/|" #CS LC!=:kOǧnK|[ G)Dl͗_]W{ F'܌Rn*]g/J}xD1:&S7;o|p&Q<{,_1y$fx;'ʽE6 ^6u᳷(9Ema22$RZy׸}s1NgdKN+'`x-罥?iF? ʫ1߾>~q Lqۏ_'SO(4NT^) 1ϯ^[ۿu9iZS$ ͷݧ?S|Y9T#w?r?Lr;%˫/.|i=~gw|BqOy"ܴ4YU6eKx"o=fܤ5]d7z} ,:ͼʯ `18wwM 1+IMCP۪_y< Bt=Wʋx'ָK&xuf $g@ۃ:}sŦCwL{)~/=CT+B{|U\?6|g-Q-xF\Y5!zܹ~ޯж>;/ s/1)B #D?@$Y٤W-em ~S"ܿ?=O7XQw!q[ ꅖ7n^~=K+X^rhMzj[>+lb\AErJ _N}u_~?o˔7`og_fY7<,sŽJW)hU3y7o?WA6drGX}+BfGOiX;;Q{Oꏼ/rD0"ʏm~>$nuIt{֎غOP|O!4:q goܙ8i&{{!z銅Alԭ6;a5y66'/!Цo}1/e2⎟q,93{)ZFgx7骛S$6fL/e[ȑ^w?ʛT=!]t&{˛ϓG$?by$Ylj֋?½ R{:"* +?ǂyTln/<]W']ZdGBAe`E 5] 9Q]i!uĤqQs}U)孄IkHBdJOgLz~m\?+lJ[EL;xGশUl%4 Ɔ VYL˨t#&> (EqQ!Z\}OR8œ/Ș;%҄5x{]Z|HupTԶj ">&f#QKMm|VBB.HWd 558 몰f{bRtse/+zqij覴0O֣;(0νiљGhQjZd=A~;[hh7B/omjuMT CgSE%vp$YvndD&OO)B| ".DOΚKJbOu! CXkScєH*[ K_~ȩ0Y#:P|FxZj)AJd|.瞥HTbVsb-|Z%G%wxLR'z*Akv!2.@O EUy)]zJOA^T|ںq,h++F@F'u5u,iGĝ[qk)CkPALDX%\Is&{ L i'%D *J!HraEErggCBDo1ĄmL`IX\=yJʭWm^==~YR1A^-j`UTDT?t AlAGYAZ_al9ZC\R ߺ߮C(~u+֓-˖Q&:_ G6"ə֋(dF#>.!?kw3&F4~( mb[v#)Hѡi%9>}VSҥ^T~@6Uʿ͒Xc'&]t \&NlnjNgRn!\0qrתI ƫ]?P.~.B}\0 _1Z+z)HlQȟ6\MS~Nv/ŏ?o 'ſz2OR~ˇYU׳f~V]'wo@--֎D[S#m]sĉdU6×\<֦=}]+ӊ٠ SKˇdm{+]L 'ٞuNC+[w𓯹83KX൷ҩ3<Ҷ`&`qŏ~`{<ܛ-n/#>LSkOW,ێ ̢&f?*?l>T>PGa 8g*[(ȩy'y|̞7?/~_E'wC)wM_(;^#vq0cW|yT)} I߶wJSiٹ0g{?>@T-'Ŗ?{qb)m]ۛ,]=H,+71: N:~k⮾4 wiWw'G)A0Ob4x\pqۜYqSY8m |2 aBYk 488:×q l]<:k?ep7oof98*]s'ٻ#ʃcL*''WIuІsvqp1aɟj67 JŚU 4ᐁWqO=kX/=?C˿XBzO-lgpA1yH~8j +*#.c'G<$N *%McT>זp`u'Ud_Tlw9F8ѕP7:}bZ GGi` .+{cwӴFF2"0 [bl/-aܭsvLtneώNX;RCht<{xQ?[AIl¼9wчgwq~i'1؅ ̇ߌ/|C2G5 wR:̘3鑴'HI at83k#o;uV*nљP:b>#qGی]~d3,DϽSqJ͉Ic."CPa 7o6SvKpf$GqaGSU*VKdd3OѸkV΢[Ox;&|D᭥'䋥7tǽwO7k8^͛- ~fY8]PFPB4|ػ1#̶>*ae-Y0~cm.[}H y8gcn7,Zp=+'Z1iL;/]ONwh<_D޳Uó pR :-=G@=0qjNl[2żw, od͗d)&4NޯU `v?S9ハK RCjxw_3})1ٳu-jyQF$I-ҏXY@PL&q}mkĞ'p/}Sy8K7a_pl=̞>JV 5gvhR1x̞1qaّ ln][áy g8͞? 
l[iv LGfDWs-Ld K`?_m-!)JÖ-x2LjQ58W[{˖o>b3hC g򿰣ҏG~>fs8Xعc7Z^4U=3 uwp|KňxmxX c1O炵|Hfq=n3~XM a<2{ʠN·Nrh?È5￟ Ub\9Enm~qyb/&'n*_S=EjBp*oaݱVG'=b~hI&xf/| ޢ_xrsHX՘U<⣶#K'y5ʞRGsio-teo!.~^%qNUiy2!=\n.;[Xu!ߵ F=~4;a宓c;w&meY6,7;.fH#;W|]YrH QRTٓ9,`Lrk 뚅4 }t#-Z=y|}8 Jl*(Q{f+8E2-~D{w@UƲބEXٷy͚ziz#26; aZ:r(˛aG]3%,Q/O\V[%g7i ΘʂnOvB\D}%({Pϧ'ۏY{PRoΛǡw,`צgLIhpf漇h n{7>GKZ|lf^=Td? ?L`1ay?CDIs֝7xN>ŏ>Ġ\]iˁ*)kٻe ;vl[neKE_D ҷͲ\wwm$xchPY[ذ,k YvpMS.d&YOo&Wu|~i ",.׌G]GlG0nvPЛشk*@r6}Ky`ѓυO 9 JVVҳo2Z17 ڡO'8:o77"3;$dQ7 ѱFƑذì|:۱uK̙;iGheڃ9˕f.Neۏ[c!沞S7pv+(}3sm ,KldWo ' O6x#q/+9WvY˗kO2[Ub&]1m)`} : f!W ƥK&dEe_ ,1ǖVSmTko}I{yz5\.DLMߓBvmZIS8x3vqVka;lwVfsam,;[2x`#= ӟ1FAwr> SSͼԉCfodOh  R-=W\Jܤ[Y0n g7wBAK۪U߰x76]2>dHw_PS0= O;l~CM՜)9Nܳu=t+T#E# ^}VTrSghp}8p獓 vS Dc-ŷY{ؖem7b,% 3ϚkOEsCB஖qbX'#m{ ێ)iF@3FpTł;m5YO0ǎaedS "Gx1Wfɫ@HNAp%~'ϼ~r[1|Sd}\m7ϘHΕ67ޠ5j"ڳ>:Șx1<䐋fMF[N^d Vqk^ܼ9דh5p^.؆wLK#[>mܱinRdT# )9\wLRoQaTbGw yq&EX+ᩖ-_*\ꇋڊIl&=|#ϕ~? GNN[JUOFN!S]wФ]ӏG/ӶΛًg>wMudn6WEȺ#9̛nc6{=#<%|ɧ ffg0Rs &v O,J>Yz8G3;7qoLw\'0| \?8__ןڻc8/4'>ϲ~r}ҸQd~ |3n%αOgD $&/WW1z`Je~pNuCٲn)Ţڢ-KCIC}Zk NZ0] $茠 J,}\G'Gh]GTN Yyx I%l3*Ȁ` 5 ln_1I@Zhg}x }_26y^pM'knQkC f}>ڶ6p<?S 6aͶp:CRi9{([}7ɡUjњw(ޓ-@E X:%0(ϩ'4ԊcH:=Ř8o+N}֖0.Z}m%>ec̜H{![9]7#WM:LN͔յeId2:R> xxS=Ȫ'fL8x(J9Bsp_L x!jt%n%sZ͔BU|5 'E)IVo~.ęe>ĆR5qfR-M5;G?=ə{"qb|tH\ah+NWo_چN-iP ȴ \- `"n|a-G(fJCW9qgёCoNeѨ(W90s#Kߏj#ZՍ2m$H}bQZNAoBB?L_Ιϰ -ZUa%G{ &-)#Cg܃$2mչ?#$s|/3CQ98sE}C?s %dA=z:vmۨL[Cm,Iz3p߹rM!3Ǩߙfьŷ.Epk+9T4r ^Lш9ﬠVOdd*7Mr`׾v`!qr?̻N.PMg3F]]*AG)N|vffI(Ȣ؅k7-W1dǾ\88 l.TYP2,:Y$`5.m:q=*6𘸞jx O"%j)߮Z+(e_x D#]G0 LGם( eupq!m@݉ CzZabj}{FIN}gNZPE򷠨"l3[}ky/G>2YYKwZDW.5KXp*sz(kUIu sh 3)ãTވS8? S"2Sg$IcٺѪfa+hGDpZ_BYql-x=S%#*kA9譸xy?z^+0lT}S _8:wMzjT5 MLZd_J !<Kϩ쨣m~7^X?w93] kD>x8vWMz^zj6HZI'c}7Wg!f[;jRx.[ѳvM8dU;F?~CNHV+Ofhكe8Hm4=8VzY?mE=BlĎ{ň@_π+Ģye)vsv ķ ?z;«кX%<3FɑsXW!?gm`hiq:y]dnXOG70>wYj!j@ܝt яN`d-Q[^`%z2x3le MŔ4t]!Cz<1ZWT\ȱljfjOti5!@e[ѓ50Iס4Q).Ʀ6<4i 1YN.B$[Jo?eOܖ·oݶrmty4F戭n.#HHe_: &?FBdxt[e[CCG"5%d2~8 I$£o7dT%{S#v;mr׾>r .-kZJuK+e4TyP 0ݢ{)=J1ipN%UUn=MKeŅ4m ?\MN@^>_N q /;_Z.O#QUɁ=;5HV$z 9t+7ŤF%IU~#1rP*>>nK 9>VSJ]};hUp4XrfiwcqF訶=3nv>D>@J00}@Bq!VYY~?]^du8y:ؿzc-ϣۢoҹx(QU{1 ^Uٜ93|t-4 C_%wF8ui}ݹ\vQ3Eyqi:RFNdd<{\U}>QsM>(xl]gu6=]ٺ:&O;sv³)ryW. P;Cyqiɡ 0vG[˿Gp䬖ُ=̈K7O59]Ͱf1uX43'%Iv3EIB0;FhRTyF21lHM!g>}0'#̙N1|~nٸ:FOsɑ1=9GW&ITܲ0coOzL6~+7ð8\ڶ3&'L@vu3fCP}tf0/.4guL Z Yw<|{2HI™G>~ G()VOk1m% 9y$! ;jؓUƠQL Sڕع?mLrT%d1!M!LOgUNj5em0W<#IڹmLAL:}|b3Gdq-F@]r=ǎ@ 2!.?[)e(5X]yMlݵ*dF )&}dB g~I3mxÖL[>_IֲE5 å zW CײlAIC?Ey~(oN.o[ώYX}l*U7FVy'JH1I).>Md;9uZ)x>ˎyfmRs ri~3gʛ pLøú-9G+ ƑmLj2ibjY|/I~as~.VNavMvY` L+TۋCӉ{gq5,b.cZ*Yr=e :}p̠8?tm,gE\t$NV/ $/DRJ7q8"2 Z;nE90fDtEGX|5xX:yꑙKN+dUp֊/~ϾEoû%M6#E]J#$TrN;Q1Pqx-m[EVn&!s);̐Itٙ=Tݙl !:• KqAv,sðhI>lg덣_߻9U1c&X!*tpG֙6 O8rI,0"cۏ?UЁ Q<8M}8Lw3(D%hzwNRF'?>3vQ{ Oxzđ?aÙ2w& v(kK9P hU aƏN͓زu;ItqL=Ogj{6Mp']:NV >BCpJ9g;ҭ̹QtG]#j/b '<'_AWM16nv(g`e +1 q.Dz(Vr0:.Qcօ>ۏl=[R `Ԅ W'ԅkױw1^uĀQDurlqBF+ĝc$ƒk d^bÿp!׭#1~7\߾wFΩu[U0!MSKh|#G?)@ ݄-)G*NncN!w1{o\S9vvzc-DS$]6 d6Ij[a¿8w䋹2 5>ȋgMyg [JBDf-0Er=DO?umVʪZn"s?E*QvFvɫ̟jx\pkLJ<;Nx5vɻbΚ[r`3*"jV&B4RY'1EtXSg9FU}#*Yqp;5z;^rr~ 5Uz<<|pw `,,HXM[t*dӊ{_M>ؼb R镆o U+ Ѫ$.c%ݦX.HG]ΧT7>Ue, !޸Rڒw5b\DK޾VkVP]kDswg-Fꋏʹu#XۥI&5,K[^'":𖎁6Fy9rQmQ烱BSk*E/  nRdW%UkjbrhE_m_]U?L\gS-t(NXOʊJ=/!Bj.HQtYBP*:M}^$ފh[ŕ5B/+кU˘,"cyXĺU>5rbr#c\R"Y6iǧZ&t]`{__r'ᓏoc鲖dj\ "P˼vuPzn :oBV_ ^$w*?9)8*kZil6Zf)bu^Ȫ O! 
[binary image data omitted]
werkzeug-0.14.1/docs/_static/navigation.png [binary PNG data omitted]
werkzeug-0.14.1/docs/_static/navigation_active.png [binary PNG data omitted]
werkzeug-0.14.1/docs/_static/shortly.png [binary PNG data omitted]
[binary image data omitted]

About Werkzeug

Werkzeug is a WSGI utility library. It can serve as the basis for a custom framework.

Other Formats

You can download the documentation in other formats as well:

Useful Links

werkzeug-0.14.1/docs/_templates/sidebarlogo.html000066400000000000000000000002141322225165500217050ustar00rootroot00000000000000 werkzeug-0.14.1/docs/_themes/000077500000000000000000000000001322225165500160175ustar00rootroot00000000000000werkzeug-0.14.1/docs/_themes/LICENSE000066400000000000000000000033751322225165500170340ustar00rootroot00000000000000Copyright (c) 2011 by Armin Ronacher. Some rights reserved. Redistribution and use in source and binary forms of the theme, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. * The names of the contributors may not be used to endorse or promote products derived from this software without specific prior written permission. We kindly ask you to only use these themes in an unmodified manner just for Flask and Flask-related products, not for unrelated projects. If you like the visual style and want to use it for your own projects, please consider making some larger changes to the themes (such as changing font faces, sizes, colors or margins). THIS THEME IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS THEME, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. werkzeug-0.14.1/docs/_themes/README000066400000000000000000000021051322225165500166750ustar00rootroot00000000000000Flask Sphinx Styles =================== This repository contains sphinx styles for Flask and Flask related projects. To use this style in your Sphinx documentation, follow this guide: 1. put this folder as _themes into your docs folder. Alternatively you can also use git submodules to check out the contents there. 2. add this to your conf.py: sys.path.append(os.path.abspath('_themes')) html_theme_path = ['_themes'] html_theme = 'flask' The following themes exist: - 'flask' - the standard flask documentation theme for large projects - 'flask_small' - small one-page theme. Intended to be used by very small addon libraries for flask. The following options exist for the flask_small theme: [options] index_logo = '' filename of a picture in _static to be used as replacement for the h1 in the index.rst file. 
index_logo_height = 120px height of the index logo github_fork = '' repository name on github for the "fork me" badge werkzeug-0.14.1/docs/_themes/werkzeug/000077500000000000000000000000001322225165500176625ustar00rootroot00000000000000werkzeug-0.14.1/docs/_themes/werkzeug/layout.html000066400000000000000000000003711322225165500220660ustar00rootroot00000000000000{%- extends "basic/layout.html" %} {%- block relbar2 %}{% endblock %} {%- block footer %} {%- endblock %} werkzeug-0.14.1/docs/_themes/werkzeug/relations.html000066400000000000000000000011161322225165500225470ustar00rootroot00000000000000

Related Topics

werkzeug-0.14.1/docs/_themes/werkzeug/static/000077500000000000000000000000001322225165500211515ustar00rootroot00000000000000werkzeug-0.14.1/docs/_themes/werkzeug/static/werkzeug.css_t000066400000000000000000000145701322225165500240600ustar00rootroot00000000000000/* * werkzeug.css_t * ~~~~~~~~~~~~~~ * * :copyright: Copyright 2011 by Armin Ronacher. * :license: Flask Design License, see LICENSE for details. */ {% set page_width = '940px' %} {% set sidebar_width = '220px' %} {% set font_family = "'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif" %} {% set header_font_family = "'Ubuntu', " ~ font_family %} @import url("basic.css"); @import url(http://fonts.googleapis.com/css?family=Ubuntu); /* -- page layout ----------------------------------------------------------- */ body { font-family: {{ font_family }}; font-size: 15px; background-color: white; color: #000; margin: 0; padding: 0; } div.document { width: {{ page_width }}; margin: 30px auto 0 auto; } div.documentwrapper { float: left; width: 100%; } div.bodywrapper { margin: 0 0 0 {{ sidebar_width }}; } div.sphinxsidebar { width: {{ sidebar_width }}; } hr { border: 1px solid #B1B4B6; } div.body { background-color: #ffffff; color: #3E4349; padding: 0 30px 0 30px; } img.floatingflask { padding: 0 0 10px 10px; float: right; } div.footer { width: {{ page_width }}; margin: 20px auto 30px auto; font-size: 14px; color: #888; text-align: right; } div.footer a { color: #888; } div.related { display: none; } div.sphinxsidebar a { color: #444; text-decoration: none; border-bottom: 1px dotted #999; } div.sphinxsidebar a:hover { border-bottom: 1px solid #999; } div.sphinxsidebar { font-size: 13px; line-height: 1.5; } div.sphinxsidebarwrapper { padding: 18px 10px; } div.sphinxsidebarwrapper p.logo { padding: 0 0 20px 0; margin: 0; text-align: center; } div.sphinxsidebar h3, div.sphinxsidebar h4 { font-family: {{ font_family }}; color: #444; font-size: 24px; font-weight: normal; margin: 0 0 5px 0; padding: 0; } div.sphinxsidebar h4 { font-size: 20px; } div.sphinxsidebar h3 a { color: #444; } div.sphinxsidebar p.logo a, div.sphinxsidebar h3 a, div.sphinxsidebar p.logo a:hover, div.sphinxsidebar h3 a:hover { border: none; } div.sphinxsidebar p { color: #555; margin: 10px 0; } div.sphinxsidebar ul { margin: 10px 0; padding: 0; color: #000; } div.sphinxsidebar input { border: 1px solid #ccc; font-family: {{ font_family }}; font-size: 14px; } div.sphinxsidebar form.search input[name="q"] { width: 130px; } /* -- body styles ----------------------------------------------------------- */ a { color: #185F6D; text-decoration: underline; } a:hover { color: #2794AA; text-decoration: underline; } div.body h1, div.body h2, div.body h3, div.body h4, div.body h5, div.body h6 { font-family: {{ header_font_family }}; font-weight: normal; margin: 30px 0px 10px 0px; padding: 0; color: black; } div.body h1 { margin-top: 0; padding-top: 0; font-size: 240%; } div.body h2 { font-size: 180%; } div.body h3 { font-size: 150%; } div.body h4 { font-size: 130%; } div.body h5 { font-size: 100%; } div.body h6 { font-size: 100%; } a.headerlink { color: #ddd; padding: 0 4px; text-decoration: none; } a.headerlink:hover { color: #444; background: #eaeaea; } div.body p, div.body dd, div.body li { line-height: 1.4em; } div.admonition { background: #fafafa; margin: 20px -30px; padding: 10px 30px; border-top: 1px solid #ccc; border-bottom: 1px solid #ccc; } div.admonition tt.xref, div.admonition a tt { border-bottom: 1px solid #fafafa; } dd div.admonition { margin-left: 
-60px; padding-left: 60px; } div.admonition p.admonition-title { font-family: {{ font_family }}; font-weight: normal; font-size: 24px; margin: 0 0 10px 0; padding: 0; line-height: 1; } div.admonition p.last { margin-bottom: 0; } div.highlight { background-color: white; } dt:target, .highlight { background: #FAF3E8; } div.note { background-color: #eee; border: 1px solid #ccc; } div.seealso { background-color: #ffc; border: 1px solid #ff6; } div.topic { background-color: #eee; } p.admonition-title { display: inline; } p.admonition-title:after { content: ":"; } pre, tt { font-family: 'Consolas', 'Menlo', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 0.9em; } img.screenshot { } tt.descname, tt.descclassname { font-size: 0.95em; } tt.descname { padding-right: 0.08em; } img.screenshot { -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils { border: 1px solid #888; -moz-box-shadow: 2px 2px 4px #eee; -webkit-box-shadow: 2px 2px 4px #eee; box-shadow: 2px 2px 4px #eee; } table.docutils td, table.docutils th { border: 1px solid #888; padding: 0.25em 0.7em; } table.field-list, table.footnote { border: none; -moz-box-shadow: none; -webkit-box-shadow: none; box-shadow: none; } table.footnote { margin: 15px 0; width: 100%; border: 1px solid #eee; background: #fdfdfd; font-size: 0.9em; } table.footnote + table.footnote { margin-top: -15px; border-top: none; } table.field-list th { padding: 0 0.8em 0 0; } table.field-list td { padding: 0; } table.footnote td.label { width: 0px; padding: 0.3em 0 0.3em 0.5em; } table.footnote td { padding: 0.3em 0.5em; } dl { margin: 0; padding: 0; } dl dd { margin-left: 30px; } blockquote { margin: 0 0 0 30px; padding: 0; } ul, ol { margin: 10px 0 10px 30px; padding: 0; } pre { background: #E8EFF0; padding: 7px 30px; margin: 15px -30px; line-height: 1.3em; } dl pre, blockquote pre, li pre { margin-left: -60px; padding-left: 60px; } dl dl pre { margin-left: -90px; padding-left: 90px; } tt { background-color: #E8EFF0; color: #222; /* padding: 1px 2px; */ } tt.xref, a tt { background-color: #E8EFF0; border-bottom: 1px solid white; } a.reference { text-decoration: none; border-bottom: 1px dotted #2BABC4; } a.reference:hover { border-bottom: 1px solid #2794AA; } a.footnote-reference { text-decoration: none; font-size: 0.7em; vertical-align: top; border-bottom: 1px dotted #004B6B; } a.footnote-reference:hover { border-bottom: 1px solid #6D4100; } a:hover tt { background: #EEE; } werkzeug-0.14.1/docs/_themes/werkzeug/theme.conf000066400000000000000000000001501322225165500216270ustar00rootroot00000000000000[theme] inherit = basic stylesheet = werkzeug.css pygments_style = werkzeug_theme_support.WerkzeugStyle werkzeug-0.14.1/docs/_themes/werkzeug_theme_support.py000066400000000000000000000113141322225165500232120ustar00rootroot00000000000000from pygments.style import Style from pygments.token import Keyword, Name, Comment, String, Error, \ Number, Operator, Generic, Whitespace, Punctuation, Other, Literal class WerkzeugStyle(Style): background_color = "#f8f8f8" default_style = "" styles = { # No corresponding class for the following: #Text: "", # class: '' Whitespace: "underline #f8f8f8", # class: 'w' Error: "#a40000 border:#ef2929", # class: 'err' Other: "#000000", # class 'x' Comment: "italic #8f5902", # class: 'c' Comment.Preproc: "noitalic", # class: 'cp' Keyword: "bold #004461", # class: 'k' Keyword.Constant: "bold #004461", # class: 'kc' Keyword.Declaration: "bold #004461", # class: 
'kd' Keyword.Namespace: "bold #004461", # class: 'kn' Keyword.Pseudo: "bold #004461", # class: 'kp' Keyword.Reserved: "bold #004461", # class: 'kr' Keyword.Type: "bold #004461", # class: 'kt' Operator: "#582800", # class: 'o' Operator.Word: "bold #004461", # class: 'ow' - like keywords Punctuation: "bold #000000", # class: 'p' # because special names such as Name.Class, Name.Function, etc. # are not recognized as such later in the parsing, we choose them # to look the same as ordinary variables. Name: "#000000", # class: 'n' Name.Attribute: "#c4a000", # class: 'na' - to be revised Name.Builtin: "#004461", # class: 'nb' Name.Builtin.Pseudo: "#3465a4", # class: 'bp' Name.Class: "#000000", # class: 'nc' - to be revised Name.Constant: "#000000", # class: 'no' - to be revised Name.Decorator: "#1B5C66", # class: 'nd' - to be revised Name.Entity: "#ce5c00", # class: 'ni' Name.Exception: "bold #cc0000", # class: 'ne' Name.Function: "#000000", # class: 'nf' Name.Property: "#000000", # class: 'py' Name.Label: "#f57900", # class: 'nl' Name.Namespace: "#000000", # class: 'nn' - to be revised Name.Other: "#000000", # class: 'nx' Name.Tag: "bold #004461", # class: 'nt' - like a keyword Name.Variable: "#000000", # class: 'nv' - to be revised Name.Variable.Class: "#000000", # class: 'vc' - to be revised Name.Variable.Global: "#000000", # class: 'vg' - to be revised Name.Variable.Instance: "#000000", # class: 'vi' - to be revised Number: "#990000", # class: 'm' Literal: "#000000", # class: 'l' Literal.Date: "#000000", # class: 'ld' String: "#4e9a06", # class: 's' String.Backtick: "#4e9a06", # class: 'sb' String.Char: "#4e9a06", # class: 'sc' String.Doc: "italic #8f5902", # class: 'sd' - like a comment String.Double: "#4e9a06", # class: 's2' String.Escape: "#4e9a06", # class: 'se' String.Heredoc: "#4e9a06", # class: 'sh' String.Interpol: "#4e9a06", # class: 'si' String.Other: "#4e9a06", # class: 'sx' String.Regex: "#4e9a06", # class: 'sr' String.Single: "#4e9a06", # class: 's1' String.Symbol: "#4e9a06", # class: 'ss' Generic: "#000000", # class: 'g' Generic.Deleted: "#a40000", # class: 'gd' Generic.Emph: "italic #000000", # class: 'ge' Generic.Error: "#ef2929", # class: 'gr' Generic.Heading: "bold #000080", # class: 'gh' Generic.Inserted: "#00A000", # class: 'gi' Generic.Output: "#888", # class: 'go' Generic.Prompt: "#745334", # class: 'gp' Generic.Strong: "bold #000000", # class: 'gs' Generic.Subheading: "bold #800080", # class: 'gu' Generic.Traceback: "bold #a40000", # class: 'gt' } werkzeug-0.14.1/docs/changes.rst000066400000000000000000000126361322225165500165450ustar00rootroot00000000000000================== Werkzeug Changelog ================== .. module:: werkzeug This file lists all major changes in Werkzeug over the versions. For API breaking changes have a look at :ref:`api-changes`, they are listed there in detail. .. include:: ../CHANGES.rst .. _api-changes: API Changes =========== `0.9` - Soft-deprecated the :attr:`BaseRequest.data` and :attr:`BaseResponse.data` attributes and introduced new methods to interact with entity data. This will allows in the future to make better APIs to deal with request and response entity bodies. So far there is no deprecation warning but users are strongly encouraged to update. - The :class:`Headers` and :class:`EnvironHeaders` datastructures are now designed to operate on unicode data. This is a backwards incompatible change and was necessary for the Python 3 support. 
- The :class:`Headers` object no longer supports in-place operations through the old ``linked`` method. This has been removed without replacement due to changes on the encoding model. `0.6.2` - renamed the attribute `implicit_seqence_conversion` attribute of the request object to `implicit_sequence_conversion`. Because this is a feature that is typically unused and was only in there for the 0.6 series we consider this a bug that does not require backwards compatibility support which would be impossible to properly implement. `0.6` - Old deprecations were removed. - `cached_property.writeable` was deprecated. - :meth:`BaseResponse.get_wsgi_headers` replaces the older `BaseResponse.fix_headers` method. The older method stays around for backwards compatibility reasons until 0.7. - `BaseResponse.header_list` was deprecated. You should not need this function, `get_wsgi_headers` and the `to_list` method on the regular headers should serve as a replacement. - Deprecated `BaseResponse.iter_encoded`'s charset parameter. - :class:`LimitedStream` non-silent usage was deprecated. - the `__repr__` of HTTP exceptions changed. This might break doctests. `0.5` - Werkzeug switched away from wsgiref as library for the builtin webserver. - The `encoding` parameter for :class:`Template`\s is now called `charset`. The older one will work for another two versions but warn with a :exc:`DeprecationWarning`. - The :class:`Client` has cookie support now which is enabled by default. - :meth:`BaseResponse._get_file_stream` is now passed more parameters to make the function more useful. In 0.6 the old way to invoke the method will no longer work. To support both newer and older Werkzeug versions you can add all arguments to the signature and provide default values for each of them. - :func:`url_decode` no longer supports both `&` and `;` as separator. This has to be specified explicitly now. - The request object is now enforced to be read-only for all attributes. If your code relies on modifications of some values makes sure to create copies of them using the mutable counterparts! - Some data structures that were only used on request objects are now immutable as well. (:class:`Authorization` / :class:`Accept` and subclasses) - `CacheControl` was split up into :class:`RequestCacheControl` and :class:`ResponseCacheControl`, the former being immutable. The old class will go away in 0.6 - undocumented `werkzeug.test.File` was replaced by :class:`FileWrapper`. - it's not longer possible to pass dicts inside the `data` dict in :class:`Client`. Use tuples instead. - It's save to modify the return value of :meth:`MultiDict.getlist` and methods that return lists in the :class:`MultiDict` now. The class creates copies instead of revealing the internal lists. However :class:`MultiDict.setlistdefault` still (and intentionally) returns the internal list for modifications. `0.3` - Werkzeug 0.3 will be the last release with Python 2.3 compatibility. - The `environ_property` is now read-only by default. This decision was made because the request in general should be considered read-only. `0.2` - The `BaseReporterStream` is now part of the contrib module, the new module is `werkzeug.contrib.reporterstream`. Starting with `0.3`, the old import will not work any longer. - `RequestRedirect` now uses a 301 status code. Previously a 302 status code was used incorrectly. If you want to continue using this 302 code, use ``response = redirect(e.new_url, 302)``. - `lazy_property` is now called `cached_property`. 
The alias for the old name will disappear in Werkzeug 0.3. - `match` can now raise `MethodNotAllowed` if configured for methods and there was no method for that request. - The `response_body` attribute on the response object is now called `data`. With Werkzeug 0.3 the old name will not work any longer. - The file-like methods on the response object are deprecated. If you want to use the response object as file like object use the `Response` class or a subclass of `BaseResponse` and mix the new `ResponseStreamMixin` class and use `response.stream`. werkzeug-0.14.1/docs/conf.py000066400000000000000000000147531322225165500157040ustar00rootroot00000000000000# -*- coding: utf-8 -*- # # Werkzeug documentation build configuration file, created by # sphinx-quickstart on Fri Jan 16 23:10:43 2009. # # This file is execfile()d with the current directory set to its containing dir. # # The contents of this file are pickled, so don't put values in the namespace # that aren't pickleable (module imports are okay, they're removed automatically). # # Note that not all possible configuration values are present in this # autogenerated file. # # All configuration values have a default; values that are commented out # serve to show the default. import sys, os # If your extensions are in another directory, add it here. If the directory # is relative to the documentation root, use os.path.abspath to make it # absolute, like shown here. sys.path.append(os.path.abspath('.')) sys.path.append(os.path.abspath('_themes')) # General configuration # --------------------- # Add any Sphinx extension module names here, as strings. They can be extensions # coming with Sphinx (named 'sphinx.ext.*') or your custom ones. extensions = ['sphinx.ext.autodoc', 'sphinx.ext.intersphinx', 'sphinx.ext.doctest', 'werkzeugext'] # Add any paths that contain templates here, relative to this directory. templates_path = ['_templates'] # The suffix of source filenames. source_suffix = '.rst' # The encoding of source files. #source_encoding = 'utf-8' # The master toctree document. master_doc = 'index' # General information about the project. project = u'Werkzeug' copyright = u'2011, The Werkzeug Team' # The version info for the project you're documenting, acts as replacement for # |version| and |release|, also used in various other places throughout the # built documents. import re try: import werkzeug except ImportError: sys.path.append(os.path.abspath('../')) from werkzeug import __version__ as release if 'dev' in release: release = release[:release.find('dev') + 3] if release == 'unknown': version = release else: version = re.match(r'\d+\.\d+(?:\.\d+)?', release).group() # The language for content autogenerated by Sphinx. Refer to documentation # for a list of supported languages. #language = None # There are two options for replacing |today|: either, you set today to some # non-false value, then it is used: #today = '' # Else, today_fmt is used as the format for a strftime call. #today_fmt = '%B %d, %Y' # List of documents that shouldn't be included in the build. #unused_docs = [] # List of directories, relative to source directory, that shouldn't be searched # for source files. exclude_trees = ['_build'] # The reST default role (used for this markup: `text`) to use for all documents. #default_role = None # If true, '()' will be appended to :func: etc. cross-reference text. add_function_parentheses = True # If true, the current module name will be prepended to all description # unit titles (such as .. function::). 
#add_module_names = True # If true, sectionauthor and moduleauthor directives will be shown in the # output. They are ignored by default. #show_authors = False # The name of the Pygments (syntax highlighting) style to use. pygments_style = 'werkzeug_theme_support.WerkzeugStyle' # doctest setup code doctest_global_setup = '''\ from werkzeug import * ''' # Options for HTML output # ----------------------- html_theme = 'werkzeug' html_theme_path = ['_themes'] # The name for this set of Sphinx documents. If None, it defaults to # " v documentation". #html_title = None # A shorter title for the navigation bar. Default is the same as html_title. #html_short_title = None # The name of an image file (relative to this directory) to place at the top # of the sidebar. #html_logo = None # The name of an image file (within the static path) to use as favicon of the # docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32 # pixels large. #html_favicon = None # Add any paths that contain custom static files (such as style sheets) here, # relative to this directory. They are copied after the builtin static files, # so a file named "default.css" will overwrite the builtin "default.css". html_static_path = ['_static'] # If not '', a 'Last updated on:' timestamp is inserted at every page bottom, # using the given strftime format. #html_last_updated_fmt = '%b %d, %Y' # If true, SmartyPants will be used to convert quotes and dashes to # typographically correct entities. #html_use_smartypants = True # Custom sidebar templates, maps document names to template names. html_sidebars = { 'index': ['sidebarlogo.html', 'sidebarintro.html', 'sourcelink.html', 'searchbox.html'], '**': ['sidebarlogo.html', 'localtoc.html', 'relations.html', 'sourcelink.html', 'searchbox.html'] } # Additional templates that should be rendered to pages, maps page names to # template names. #html_additional_pages = {} # If false, no module index is generated. #html_use_modindex = True # If false, no index is generated. #html_use_index = True # If true, the index is split into individual pages for each letter. #html_split_index = False # If true, links to the reST sources are added to the pages. #html_show_sourcelink = True # If true, an OpenSearch description file will be output, and all pages will # contain a tag referring to it. The value of this option must be the # base URL from which the finished HTML is served. #html_use_opensearch = '' # If nonempty, this is the file name suffix for HTML files (e.g. ".xhtml"). #html_file_suffix = '' # Output file base name for HTML help builder. htmlhelp_basename = 'Werkzeugdoc' # Options for LaTeX output # ------------------------ # The paper size ('letter' or 'a4'). latex_paper_size = 'a4' # Grouping the document tree into LaTeX files. List of tuples # (source start file, target name, title, author, document class [howto/manual]). latex_documents = [ ('latexindex', 'Werkzeug.tex', ur'Werkzeug Documentation', ur'The Werkzeug Team', 'manual'), ] # Additional stuff for LaTeX latex_elements = { 'fontpkg': r'\usepackage{mathpazo}', 'papersize': 'a4paper', 'pointsize': '12pt', 'preamble': r''' \usepackage{werkzeugstyle} % i hate you latex, here too \DeclareUnicodeCharacter{2603}{\\N\{SNOWMAN\}} ''' } latex_use_parts = True latex_additional_files = ['werkzeugstyle.sty', 'logo.pdf'] latex_use_modindex = False # Example configuration for intersphinx: refer to the Python standard library. 
intersphinx_mapping = { 'http://docs.python.org/dev': None, 'http://docs.sqlalchemy.org/en/latest/': None } werkzeug-0.14.1/docs/contents.rst.inc000066400000000000000000000023571322225165500175410ustar00rootroot00000000000000Getting Started --------------- If you are new to Werkzeug or WSGI development in general you should start here. .. toctree:: :maxdepth: 2 installation transition tutorial levels quickstart python3 Serving and Testing ------------------- The development server and testing support and management script utilities are covered here: .. toctree:: :maxdepth: 2 serving test debug Reference --------- .. toctree:: :maxdepth: 2 wrappers routing wsgi filesystem http datastructures utils urls local middlewares exceptions Deployment ---------- This section covers running your application in production on a web server such as Apache or lighttpd. .. toctree:: :maxdepth: 3 deployment/index Contributed Modules ------------------- A lot of useful code contributed by the community is shipped with Werkzeug as part of the `contrib` module: .. toctree:: :maxdepth: 3 contrib/index Additional Information ---------------------- .. toctree:: :maxdepth: 2 terms unicode request_data changes If you can’t find the information you’re looking for, have a look at the index or try to find it using the search function: * :ref:`genindex` * :ref:`search` werkzeug-0.14.1/docs/contrib/000077500000000000000000000000001322225165500160335ustar00rootroot00000000000000werkzeug-0.14.1/docs/contrib/atom.rst000066400000000000000000000002321322225165500175220ustar00rootroot00000000000000================ Atom Syndication ================ .. automodule:: werkzeug.contrib.atom .. autoclass:: AtomFeed :members: .. autoclass:: FeedEntry werkzeug-0.14.1/docs/contrib/cache.rst000066400000000000000000000010611322225165500176260ustar00rootroot00000000000000===== Cache ===== .. automodule:: werkzeug.contrib.cache Cache System API ================ .. autoclass:: BaseCache :members: Cache Systems ============= .. autoclass:: NullCache .. autoclass:: SimpleCache .. autoclass:: MemcachedCache .. class:: GAEMemcachedCache This class is deprecated in favour of :class:`MemcachedCache` which now supports Google Appengine as well. .. versionchanged:: 0.8 Deprecated in favour of :class:`MemcachedCache`. .. autoclass:: RedisCache .. autoclass:: FileSystemCache .. autoclass:: UWSGICache werkzeug-0.14.1/docs/contrib/fixers.rst000066400000000000000000000003571322225165500200720ustar00rootroot00000000000000====== Fixers ====== .. automodule:: werkzeug.contrib.fixers .. autoclass:: CGIRootFix .. autoclass:: PathInfoFromRequestUriFix .. autoclass:: ProxyFix :members: .. autoclass:: HeaderRewriterFix .. autoclass:: InternetExplorerFix werkzeug-0.14.1/docs/contrib/index.rst000066400000000000000000000004511322225165500176740ustar00rootroot00000000000000=================== Contributed Modules =================== A lot of useful code contributed by the community is shipped with Werkzeug as part of the `contrib` module: .. toctree:: :maxdepth: 2 atom sessions securecookie cache wrappers iterio fixers profiler lint werkzeug-0.14.1/docs/contrib/iterio.rst000066400000000000000000000001451322225165500200600ustar00rootroot00000000000000======= Iter IO ======= .. automodule:: werkzeug.contrib.iterio .. autoclass:: IterIO :members: werkzeug-0.14.1/docs/contrib/lint.rst000066400000000000000000000003011322225165500175250ustar00rootroot00000000000000========================== Lint Validation Middleware ========================== .. 
currentmodule:: werkzeug.contrib.lint .. automodule:: werkzeug.contrib.lint .. autoclass:: LintMiddleware werkzeug-0.14.1/docs/contrib/profiler.rst000066400000000000000000000003271322225165500204110ustar00rootroot00000000000000========================= WSGI Application Profiler ========================= .. automodule:: werkzeug.contrib.profiler .. autoclass:: MergeStream .. autoclass:: ProfilerMiddleware .. autofunction:: make_action werkzeug-0.14.1/docs/contrib/securecookie.rst000066400000000000000000000025421322225165500212500ustar00rootroot00000000000000============= Secure Cookie ============= .. automodule:: werkzeug.contrib.securecookie Security ======== The default implementation uses Pickle as this is the only module that used to be available in the standard library when this module was created. If you have simplejson available it's strongly recommended to create a subclass and replace the serialization method:: import json from werkzeug.contrib.securecookie import SecureCookie class JSONSecureCookie(SecureCookie): serialization_method = json The weakness of Pickle is that if someone gains access to the secret key the attacker can not only modify the session but also execute arbitrary code on the server. Reference ========= .. autoclass:: SecureCookie :members: .. attribute:: new `True` if the cookie was newly created, otherwise `False` .. attribute:: modified Whenever an item on the cookie is set, this attribute is set to `True`. However this does not track modifications inside mutable objects in the cookie: >>> c = SecureCookie() >>> c["foo"] = [1, 2, 3] >>> c.modified True >>> c.modified = False >>> c["foo"].append(4) >>> c.modified False In that situation it has to be set to `modified` by hand so that :attr:`should_save` can pick it up. .. autoexception:: UnquoteError werkzeug-0.14.1/docs/contrib/sessions.rst000066400000000000000000000017661322225165500204450ustar00rootroot00000000000000======== Sessions ======== .. automodule:: werkzeug.contrib.sessions .. testsetup:: from werkzeug.contrib.sessions import * Reference ========= .. autoclass:: Session .. attribute:: sid The session ID as string. .. attribute:: new `True` is the cookie was newly created, otherwise `False` .. attribute:: modified Whenever an item on the cookie is set, this attribute is set to `True`. However this does not track modifications inside mutable objects in the session: >>> c = Session({}, sid='deadbeefbabe2c00ffee') >>> c["foo"] = [1, 2, 3] >>> c.modified True >>> c.modified = False >>> c["foo"].append(4) >>> c.modified False In that situation it has to be set to `modified` by hand so that :attr:`should_save` can pick it up. .. autoattribute:: should_save .. autoclass:: SessionStore :members: .. autoclass:: FilesystemSessionStore :members: list .. autoclass:: SessionMiddleware werkzeug-0.14.1/docs/contrib/wrappers.rst000066400000000000000000000006341322225165500204330ustar00rootroot00000000000000============== Extra Wrappers ============== .. automodule:: werkzeug.contrib.wrappers .. autoclass:: JSONRequestMixin :members: .. autoclass:: ProtobufRequestMixin :members: .. autoclass:: RoutingArgsRequestMixin :members: .. autoclass:: ReverseSlashBehaviorRequestMixin :members: .. autoclass:: DynamicCharsetRequestMixin :members: .. autoclass:: DynamicCharsetResponseMixin :members: werkzeug-0.14.1/docs/datastructures.rst000066400000000000000000000050161322225165500202040ustar00rootroot00000000000000=============== Data Structures =============== .. 
module:: werkzeug.datastructures Werkzeug provides some subclasses of common Python objects to extend them with additional features. Some of them are used to make them immutable, others are used to change some semantics to better work with HTTP. General Purpose =============== .. versionchanged:: 0.6 The general purpose classes are now pickleable in each protocol as long as the contained objects are pickleable. This means that the :class:`FileMultiDict` won't be pickleable as soon as it contains a file. .. autoclass:: TypeConversionDict :members: .. autoclass:: ImmutableTypeConversionDict :members: copy .. autoclass:: MultiDict :members: :inherited-members: .. autoclass:: OrderedMultiDict .. autoclass:: ImmutableMultiDict :members: copy .. autoclass:: ImmutableOrderedMultiDict :members: copy .. autoclass:: CombinedMultiDict .. autoclass:: ImmutableDict :members: copy .. autoclass:: ImmutableList .. autoclass:: FileMultiDict :members: .. _http-datastructures: HTTP Related ============ .. autoclass:: Headers([defaults]) :members: .. autoclass:: EnvironHeaders .. autoclass:: HeaderSet :members: .. autoclass:: Accept :members: .. autoclass:: MIMEAccept :members: accept_html, accept_xhtml, accept_json .. autoclass:: CharsetAccept .. autoclass:: LanguageAccept .. autoclass:: RequestCacheControl :members: .. autoattribute:: no_cache .. autoattribute:: no_store .. autoattribute:: max_age .. autoattribute:: no_transform .. autoclass:: ResponseCacheControl :members: .. autoattribute:: no_cache .. autoattribute:: no_store .. autoattribute:: max_age .. autoattribute:: no_transform .. autoclass:: ETags :members: .. autoclass:: Authorization :members: .. autoclass:: WWWAuthenticate :members: .. autoclass:: IfRange :members: .. autoclass:: Range :members: .. autoclass:: ContentRange :members: Others ====== .. autoclass:: FileStorage :members: .. attribute:: stream The input stream for the uploaded file. This usually points to an open temporary file. .. attribute:: filename The filename of the file on the client. .. attribute:: name The name of the form field. .. attribute:: headers The multipart headers as :class:`Headers` object. This usually contains irrelevant information but in combination with custom multipart requests the raw headers might be interesting. .. versionadded:: 0.6 werkzeug-0.14.1/docs/debug.rst000066400000000000000000000070241322225165500162160ustar00rootroot00000000000000====================== Debugging Applications ====================== .. module:: werkzeug.debug Depending on the WSGI gateway/server, exceptions are handled differently. But most of the time, exceptions go to stderr or the error log. Since this is not the best debugging environment, Werkzeug provides a WSGI middleware that renders nice debugging tracebacks, optionally with an AJAX based debugger (which allows to execute code in the context of the traceback's frames). The interactive debugger however does not work in forking environments which makes it nearly impossible to use on production servers. Also the debugger allows the execution of arbitrary code which makes it a major security risk and **must never be used on production machines** because of that. **We cannot stress this enough. Do not enable this in production.** Enabling the Debugger ===================== You can enable the debugger by wrapping the application in a :class:`DebuggedApplication` middleware. Additionally there are parameters to the :func:`run_simple` function to enable it because this is a common task during development. .. 
autoclass:: DebuggedApplication Using the Debugger ================== Once enabled and an error happens during a request you will see a detailed traceback instead of a general "internal server error". If you have the `evalex` feature enabled you can also get a traceback for every frame in the traceback by clicking on the console icon. Once clicked a console opens where you can execute Python code in: .. image:: _static/debug-screenshot.png :alt: a screenshot of the interactive debugger :align: center Inside the interactive consoles you can execute any kind of Python code. Unlike regular Python consoles the output of the object reprs is colored and stripped to a reasonable size by default. If the output is longer than what the console decides to display a small plus sign is added to the repr and a click will expand the repr. To display all variables that are defined in the current frame you can use the `dump()` function. You can call it without arguments to get a detailed list of all variables and their values, or with an object as argument to get a detailed list of all the attributes it has. Debugger PIN ============ Starting with Werkzeug 0.11 the debugger is additionally protected by a PIN. This is a security helper to make it less likely for the debugger to be exploited in production as it has happened to people to keep the debugger active. The PIN based authentication is enabled by default. When the debugger comes up, on first usage it will prompt for a PIN that is printed to the command line. The PIN is generated in a stable way that is specific to the project. In some situations it might be not possible to generate a stable PIN between restarts in which case an explicit PIN can be provided through the environment variable ``WERKZEUG_DEBUG_PIN``. This can be set to a number and will become the PIN. This variable can also be set to the value ``off`` to disable the PIN check entirely. If the PIN is entered too many times incorrectly the server needs to be restarted. **This feature is not supposed to entirely secure the debugger. It's intended to make it harder for an attacker to exploit the debugger. Never enable the debugger in production.** Pasting Errors ============== If you click on the `Traceback` title, the traceback switches over to a text based one. The text based one can be pasted to `gist.github.com `_ with one click. .. _paste.pocoo.org: https://gist.github.com werkzeug-0.14.1/docs/deployment/000077500000000000000000000000001322225165500165535ustar00rootroot00000000000000werkzeug-0.14.1/docs/deployment/cgi.rst000066400000000000000000000024571322225165500200570ustar00rootroot00000000000000=== CGI === If all other deployment methods do not work, CGI will work for sure. CGI is supported by all major servers but usually has a less-than-optimal performance. This is also the way you can use a Werkzeug application on Google's `AppEngine`_, there however the execution does happen in a CGI-like environment. The application's performance is unaffected because of that. .. _AppEngine: http://code.google.com/appengine/ Creating a `.cgi` file ====================== First you need to create the CGI application file. Let's call it `yourapplication.cgi`:: #!/usr/bin/python from wsgiref.handlers import CGIHandler from yourapplication import make_app application = make_app() CGIHandler().run(application) If you're running Python 2.4 you will need the :mod:`wsgiref` package. Python 2.5 and higher ship this as part of the standard library. 
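For context, the ``make_app`` factory imported in the ``.cgi`` file above is not defined by these docs; it is a placeholder for whatever builds your WSGI application. A minimal sketch of such a factory (the module name ``yourapplication`` and the greeting logic are illustrative assumptions, not part of Werkzeug) could look like this::

    # yourapplication.py -- hypothetical module assumed by the deployment examples
    from werkzeug.wrappers import Request, Response

    @Request.application
    def index(request):
        # Request.application turns a function taking a Request and
        # returning a Response into a plain WSGI callable.
        return Response('Hello from %s' % request.path)

    def make_app():
        # A real factory might load configuration or wrap middleware here;
        # all the deployment examples need is a WSGI callable.
        return index

Any WSGI callable works in its place; the factory is only used so the deployment snippets share a single import point.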
Server Setup ============ Usually there are two ways to configure the server. Either just copy the `.cgi` into a `cgi-bin` (and use `mod_rewrite` or something similar to rewrite the URL) or let the server point to the file directly. In Apache for example you can put a line like this into the config: .. sourcecode:: apache ScriptAlias /app /path/to/the/application.cgi For more information consult the documentation of your webserver. werkzeug-0.14.1/docs/deployment/fastcgi.rst000066400000000000000000000113151322225165500207260ustar00rootroot00000000000000======= FastCGI ======= A very popular deployment setup on servers like `lighttpd`_ and `nginx`_ is FastCGI. To use your WSGI application with any of them you will need a FastCGI server first. The most popular one is `flup`_ which we will use for this guide. Make sure to have it installed. Creating a `.fcgi` file ======================= First you need to create the FastCGI server file. Let's call it `yourapplication.fcgi`:: #!/usr/bin/python from flup.server.fcgi import WSGIServer from yourapplication import make_app if __name__ == '__main__': application = make_app() WSGIServer(application).run() This is enough for Apache to work; however, nginx and older versions of lighttpd need a socket to be explicitly passed to communicate with the FastCGI server. For that to work you need to pass the path to the socket to the :class:`~flup.server.fcgi.WSGIServer`:: WSGIServer(application, bindAddress='/path/to/fcgi.sock').run() The path has to be the exact same path you define in the server config. Save the `yourapplication.fcgi` file somewhere you will find it again. It makes sense to have that in `/var/www/yourapplication` or something similar. Make sure to set the executable bit on that file so that the servers can execute it:: # chmod +x /var/www/yourapplication/yourapplication.fcgi Configuring lighttpd ==================== A basic FastCGI configuration for lighttpd looks like this:: fastcgi.server = ("/yourapplication.fcgi" => (( "socket" => "/tmp/yourapplication-fcgi.sock", "bin-path" => "/var/www/yourapplication/yourapplication.fcgi", "check-local" => "disable", "max-procs" => 1 )) ) alias.url = ( "/static/" => "/path/to/your/static" ) url.rewrite-once = ( "^(/static.*)$" => "$1", "^(/.*)$" => "/yourapplication.fcgi$1" ) Remember to enable the FastCGI, alias and rewrite modules. This configuration binds the application to `/yourapplication`. If you want the application to work in the URL root you have to work around a lighttpd bug with the :class:`~werkzeug.contrib.fixers.LighttpdCGIRootFix` middleware. Make sure to apply it only if you are mounting the application at the URL root. Also, see the Lighty docs for more information on `FastCGI and Python `_ (note that explicitly passing a socket to run() is no longer necessary). Configuring nginx ================= Installing FastCGI applications on nginx is a bit tricky because by default some FastCGI parameters are not properly forwarded. A basic FastCGI configuration for nginx looks like this:: location /yourapplication/ { include fastcgi_params; if ($uri ~ ^/yourapplication/(.*)?) { set $path_url $1; } fastcgi_param PATH_INFO $path_url; fastcgi_param SCRIPT_NAME /yourapplication; fastcgi_pass unix:/tmp/yourapplication-fcgi.sock; } This configuration binds the application to `/yourapplication`.
If you want to have it in the URL root it's a bit easier because you don't have to figure out how to calculate `PATH_INFO` and `SCRIPT_NAME`:: location /yourapplication/ { include fastcgi_params; fastcgi_param PATH_INFO $fastcgi_script_name; fastcgi_param SCRIPT_NAME ""; fastcgi_pass unix:/tmp/yourapplication-fcgi.sock; } Since Nginx doesn't load FastCGI apps, you have to do it by yourself. You can either write an `init.d` script for that or execute it inside a screen session:: $ screen $ /var/www/yourapplication/yourapplication.fcgi Debugging ========= FastCGI deployments tend to be hard to debug on most webservers. Very often the only thing the server log tells you is something along the lines of "premature end of headers". In order to debug the application the only thing that can really give you ideas why it breaks is switching to the correct user and executing the application by hand. This example assumes your application is called `application.fcgi` and that your webserver user is `www-data`:: $ su www-data $ cd /var/www/yourapplication $ python application.fcgi Traceback (most recent call last): File "yourapplication.fcg", line 4, in ImportError: No module named yourapplication In this case the error seems to be "yourapplication" not being on the python path. Common problems are: - relative paths being used. Don't rely on the current working directory - the code depending on environment variables that are not set by the web server. - different python interpreters being used. .. _lighttpd: http://www.lighttpd.net/ .. _nginx: http://nginx.net/ .. _flup: http://trac.saddi.com/flup werkzeug-0.14.1/docs/deployment/index.rst000066400000000000000000000004071322225165500204150ustar00rootroot00000000000000.. _deployment: ====================== Application Deployment ====================== This section covers running your application in production on a web server such as Apache or lighttpd. .. toctree:: :maxdepth: 2 cgi mod_wsgi fastcgi proxying werkzeug-0.14.1/docs/deployment/mod_wsgi.rst000066400000000000000000000053541322225165500211240ustar00rootroot00000000000000=================== `mod_wsgi` (Apache) =================== If you are using the `Apache`_ webserver you should consider using `mod_wsgi`_. .. _Apache: http://httpd.apache.org/ Installing `mod_wsgi` ===================== If you don't have `mod_wsgi` installed yet you have to either install it using a package manager or compile it yourself. The mod_wsgi `installation instructions`_ cover installation instructions for source installations on UNIX systems. If you are using ubuntu / debian you can apt-get it and activate it as follows:: # apt-get install libapache2-mod-wsgi On FreeBSD install `mod_wsgi` by compiling the `www/mod_wsgi` port or by using pkg_add:: # pkg_add -r mod_wsgi If you are using pkgsrc you can install `mod_wsgi` by compiling the `www/ap2-wsgi` package. If you encounter segfaulting child processes after the first apache reload you can safely ignore them. Just restart the server. Creating a `.wsgi` file ======================= To run your application you need a `yourapplication.wsgi` file. This file contains the code `mod_wsgi` is executing on startup to get the application object. The object called `application` in that file is then used as application. For most applications the following file should be sufficient:: from yourapplication import make_app application = make_app() If you don't have a factory function for application creation but a singleton instance you can directly import that one as `application`. 
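For instance, if ``yourapplication`` already exposes a ready-made WSGI object rather than a factory (the attribute name ``app`` below is only an assumption for illustration), the whole ``.wsgi`` file can shrink to a single import::

    # yourapplication.wsgi -- importing a singleton WSGI object directly
    from yourapplication import app as application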
Store that file somewhere where you will find it again (eg: `/var/www/yourapplication`) and make sure that `yourapplication` and all the libraries that are in use are on the python load path. If you don't want to install it system wide consider using a `virtual python`_ instance. Configuring Apache ================== The last thing you have to do is to create an Apache configuration file for your application. In this example we are telling `mod_wsgi` to execute the application under a different user for security reasons: .. sourcecode:: apache ServerName example.com WSGIDaemonProcess yourapplication user=user1 group=group1 processes=2 threads=5 WSGIScriptAlias / /var/www/yourapplication/yourapplication.wsgi WSGIProcessGroup yourapplication WSGIApplicationGroup %{GLOBAL} Order deny,allow Allow from all For more information consult the `mod_wsgi wiki`_. .. _mod_wsgi: http://code.google.com/p/modwsgi/ .. _installation instructions: http://code.google.com/p/modwsgi/wiki/QuickInstallationGuide .. _virtual python: http://pypi.python.org/pypi/virtualenv .. _mod_wsgi wiki: http://code.google.com/p/modwsgi/wiki/ werkzeug-0.14.1/docs/deployment/proxying.rst000066400000000000000000000032121322225165500211620ustar00rootroot00000000000000============= HTTP Proxying ============= Many people prefer using a standalone Python HTTP server and proxying that server via nginx, Apache etc. A very stable Python server is CherryPy. This part of the documentation shows you how to combine your WSGI application with the CherryPy WSGI server and how to configure the webserver for proxying. Creating a `.py` server ======================= To run your application you need a `start-server.py` file that starts up the WSGI Server. It looks something along these lines:: from cherrypy import wsgiserver from yourapplication import make_app server = wsgiserver.CherryPyWSGIServer(('localhost', 8080), make_app()) try: server.start() except KeyboardInterrupt: server.stop() If you now start the file the server will listen on `localhost:8080`. Keep in mind that WSGI applications behave slightly different for proxied setups. If you have not developed your application for proxying in mind, you can apply the :class:`~werkzeug.contrib.fixers.ProxyFix` middleware. Configuring nginx ================= As an example we show here how to configure nginx to proxy to the server. The basic nginx configuration looks like this:: location / { proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:8080; proxy_redirect default; } Since Nginx doesn't start your server for you, you have to do it by yourself. You can either write an `init.d` script for that or execute it inside a screen session:: $ screen $ python start-server.py werkzeug-0.14.1/docs/exceptions.rst000066400000000000000000000075521322225165500173170ustar00rootroot00000000000000=============== HTTP Exceptions =============== .. automodule:: werkzeug.exceptions Error Classes ============= The following error classes exist in Werkzeug: .. autoexception:: BadRequest .. autoexception:: Unauthorized .. autoexception:: Forbidden .. autoexception:: NotFound .. autoexception:: MethodNotAllowed .. autoexception:: NotAcceptable .. autoexception:: RequestTimeout .. autoexception:: Conflict .. autoexception:: Gone .. autoexception:: LengthRequired .. autoexception:: PreconditionFailed .. autoexception:: RequestEntityTooLarge .. autoexception:: RequestURITooLarge .. autoexception:: UnsupportedMediaType .. 
autoexception:: RequestedRangeNotSatisfiable .. autoexception:: ExpectationFailed .. autoexception:: ImATeapot .. autoexception:: PreconditionRequired .. autoexception:: TooManyRequests .. autoexception:: RequestHeaderFieldsTooLarge .. autoexception:: InternalServerError .. autoexception:: NotImplemented .. autoexception:: BadGateway .. autoexception:: ServiceUnavailable .. exception:: HTTPUnicodeError This exception is used to signal unicode decode errors of request data. For more information see the :ref:`unicode` chapter. .. autoexception:: ClientDisconnected .. autoexception:: SecurityError Baseclass ========= All the exceptions implement this common interface: .. autoexception:: HTTPException :members: get_response, __call__ Special HTTP Exceptions ======================= Starting with Werkzeug 0.3 some of the builtin classes raise exceptions that look like regular python exceptions (eg :exc:`KeyError`) but are :exc:`BadRequest` HTTP exceptions at the same time. This decision was made to simplify a common pattern where you want to abort if the client tampered with the submitted form data in a way that the application can't recover properly and should abort with ``400 BAD REQUEST``. Assuming the application catches all HTTP exceptions and reacts to them properly a view function could do the following safely and doesn't have to check if the keys exist:: def new_post(request): post = Post(title=request.form['title'], body=request.form['body']) post.save() return redirect(post.url) If `title` or `body` are missing in the form, a special key error will be raised which behaves like a :exc:`KeyError` but also a :exc:`BadRequest` exception. Simple Aborting =============== Sometimes it's convenient to just raise an exception by the error code, without importing the exception and looking up the name etc. For this purpose there is the :func:`abort` function. .. autofunction:: abort If you want to use this functionality with custom exceptions you can create an instance of the aborter class: .. autoclass:: Aborter Custom Errors ============= As you can see from the list above not all status codes are available as errors. Especially redirects and other non 200 status codes that do not represent errors are missing. For redirects you can use the :func:`redirect` function from the utilities. If you want to add an error yourself you can subclass :exc:`HTTPException`:: from werkzeug.exceptions import HTTPException class PaymentRequired(HTTPException): code = 402 description = '
<p>Payment required.</p>
' This is the minimal code you need for your own exception. If you want to add more logic to the errors you can override the :meth:`~HTTPException.get_description`, :meth:`~HTTPException.get_body`, :meth:`~HTTPException.get_headers` and :meth:`~HTTPException.get_response` methods. In any case you should have a look at the sourcecode of the exceptions module. You can override the default description in the constructor with the `description` parameter (it's the first argument for all exceptions except of the :exc:`MethodNotAllowed` which accepts a list of allowed methods as first argument):: raise BadRequest('Request failed because X was not present') werkzeug-0.14.1/docs/filesystem.rst000066400000000000000000000003401322225165500173060ustar00rootroot00000000000000==================== Filesystem Utilities ==================== Various utilities for the local filesystem. .. module:: werkzeug.filesystem .. autoclass:: BrokenFilesystemWarning .. autofunction:: get_filesystem_encoding werkzeug-0.14.1/docs/http.rst000066400000000000000000000100601322225165500161010ustar00rootroot00000000000000============== HTTP Utilities ============== .. module:: werkzeug.http Werkzeug provides a couple of functions to parse and generate HTTP headers that are useful when implementing WSGI middlewares or whenever you are operating on a lower level layer. All this functionality is also exposed from request and response objects. Date Functions ============== The following functions simplify working with times in an HTTP context. Werkzeug uses offset-naive :class:`~datetime.datetime` objects internally that store the time in UTC. If you're working with timezones in your application make sure to replace the tzinfo attribute with a UTC timezone information before processing the values. .. autofunction:: cookie_date .. autofunction:: http_date .. autofunction:: parse_date Header Parsing ============== The following functions can be used to parse incoming HTTP headers. Because Python does not provide data structures with the semantics required by :rfc:`2616`, Werkzeug implements some custom data structures that are :ref:`documented separately `. .. autofunction:: parse_options_header .. autofunction:: parse_set_header .. autofunction:: parse_list_header .. autofunction:: parse_dict_header .. autofunction:: parse_accept_header(value, [class]) .. autofunction:: parse_cache_control_header .. autofunction:: parse_authorization_header .. autofunction:: parse_www_authenticate_header .. autofunction:: parse_if_range_header .. autofunction:: parse_range_header .. autofunction:: parse_content_range_header Header Utilities ================ The following utilities operate on HTTP headers well but do not parse them. They are useful if you're dealing with conditional responses or if you want to proxy arbitrary requests but want to remove WSGI-unsupported hop-by-hop headers. Also there is a function to create HTTP header strings from the parsed data. .. autofunction:: is_entity_header .. autofunction:: is_hop_by_hop_header .. autofunction:: remove_entity_headers .. autofunction:: remove_hop_by_hop_headers .. autofunction:: is_byte_range_valid .. autofunction:: quote_header_value .. autofunction:: unquote_header_value .. autofunction:: dump_header Cookies ======= .. autofunction:: parse_cookie .. autofunction:: dump_cookie Conditional Response Helpers ============================ For conditional responses the following functions might be useful: .. autofunction:: parse_etags .. autofunction:: quote_etag .. 
autofunction:: unquote_etag .. autofunction:: generate_etag .. autofunction:: is_resource_modified Constants ========= .. data:: HTTP_STATUS_CODES A dict of status code -> default status message pairs. This is used by the wrappers and other places where an integer status code is expanded to a string throughout Werkzeug. Form Data Parsing ================= .. module:: werkzeug.formparser Werkzeug provides the form parsing functions separately from the request object so that you can access form data from a plain WSGI environment. The following formats are currently supported by the form data parser: - `application/x-www-form-urlencoded` - `multipart/form-data` Nested multipart is not currently supported (Werkzeug 0.9), but it isn't used by any of the modern web browsers. Usage example: >>> from cStringIO import StringIO >>> data = '--foo\r\nContent-Disposition: form-data; name="test"\r\n' \ ... '\r\nHello World!\r\n--foo--' >>> environ = {'wsgi.input': StringIO(data), 'CONTENT_LENGTH': str(len(data)), ... 'CONTENT_TYPE': 'multipart/form-data; boundary=foo', ... 'REQUEST_METHOD': 'POST'} >>> stream, form, files = parse_form_data(environ) >>> stream.read() '' >>> form['test'] u'Hello World!' >>> not files True Normally the WSGI environment is provided by the WSGI gateway with the incoming data as part of it. If you want to generate such fake-WSGI environments for unittesting you might want to use the :func:`create_environ` function or the :class:`EnvironBuilder` instead. .. autoclass:: FormDataParser .. autofunction:: parse_form_data .. autofunction:: parse_multipart_headers werkzeug-0.14.1/docs/index.rst000066400000000000000000000002261322225165500162340ustar00rootroot00000000000000====================== Documentation Overview ====================== Welcome to the Werkzeug |version| documentation. .. include:: contents.rst.inc werkzeug-0.14.1/docs/installation.rst000066400000000000000000000103631322225165500176310ustar00rootroot00000000000000============ Installation ============ Werkzeug requires at least Python 2.6 to work correctly. If you do need to support an older version you can download an older version of Werkzeug though we strongly recommend against that. Werkzeug currently has experimental support for Python 3. For more information about the Python 3 support see :ref:`python3`. Installing a released version ============================= As a Python egg (via easy_install or pip) ----------------------------------------- You can install the most recent Werkzeug version using `easy_install`_:: easy_install Werkzeug Alternatively you can also use pip:: pip install Werkzeug Either way we strongly recommend using these tools in combination with :ref:`virtualenv`. This will install a Werkzeug egg in your Python installation's `site-packages` directory. From the tarball release ------------------------- 1. Download the most recent tarball from the `download page`_. 2. Unpack the tarball. 3. ``python setup.py install`` Note that the last command will automatically download and install `setuptools`_ if you don't already have it installed. This requires a working Internet connection. This will install Werkzeug into your Python installation's `site-packages` directory. Installing the development version ================================== 1. Install `Git`_ 2. ``git clone git://github.com/pallets/werkzeug.git`` 3. ``cd werkzeug`` 4. ``pip install --editable .`` .. 
_virtualenv: virtualenv ========== Virtualenv is probably what you want to use during development, and in production too if you have shell access there. What problem does virtualenv solve? If you like Python as I do, chances are you want to use it for other projects besides Werkzeug-based web applications. But the more projects you have, the more likely it is that you will be working with different versions of Python itself, or at least different versions of Python libraries. Let's face it; quite often libraries break backwards compatibility, and it's unlikely that any serious application will have zero dependencies. So what do you do if two or more of your projects have conflicting dependencies? Virtualenv to the rescue! It basically enables multiple side-by-side installations of Python, one for each project. It doesn't actually install separate copies of Python, but it does provide a clever way to keep different project environments isolated. So let's see how virtualenv works! If you are on Mac OS X or Linux, chances are that one of the following two commands will work for you:: $ sudo easy_install virtualenv or even better:: $ sudo pip install virtualenv One of these will probably install virtualenv on your system. Maybe it's even in your package manager. If you use Ubuntu, try:: $ sudo apt-get install python-virtualenv If you are on Windows and don't have the `easy_install` command, you must install it first. Once you have it installed, run the same commands as above, but without the `sudo` prefix. Once you have virtualenv installed, just fire up a shell and create your own environment. I usually create a project folder and an `env` folder within:: $ mkdir myproject $ cd myproject $ virtualenv env New python executable in env/bin/python Installing setuptools............done. Now, whenever you want to work on a project, you only have to activate the corresponding environment. On OS X and Linux, do the following:: $ . env/bin/activate (Note the space between the dot and the script name. The dot means that this script should run in the context of the current shell. If this command does not work in your shell, try replacing the dot with ``source``) If you are a Windows user, the following command is for you:: $ env\scripts\activate Either way, you should now be using your virtualenv (see how the prompt of your shell has changed to show the virtualenv). Now you can just enter the following command to get Werkzeug activated in your virtualenv:: $ pip install Werkzeug A few seconds later you are good to go. .. _download page: https://pypi.python.org/pypi/Werkzeug .. _setuptools: http://peak.telecommunity.com/DevCenter/setuptools .. _easy_install: http://peak.telecommunity.com/DevCenter/EasyInstall .. _Git: http://git-scm.org/ werkzeug-0.14.1/docs/latexindex.rst000066400000000000000000000001271322225165500172720ustar00rootroot00000000000000:orphan: Werkzeug Documentation ====================== .. include:: contents.rst.inc werkzeug-0.14.1/docs/levels.rst000066400000000000000000000050571322225165500164260ustar00rootroot00000000000000========== API Levels ========== .. module:: werkzeug Werkzeug is intended to be a utility rather than a framework. Because of that the user-friendly API is separated from the lower-level API so that Werkzeug can easily be used to extend another system. All the functionality the :class:`Request` and :class:`Response` objects (aka the "wrappers") provide is also available in small utility functions. 
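For example, the accept-header parsing that the wrappers perform internally can also be called on its own. The following is a minimal sketch of the two levels side by side (the header value is made up for illustration)::

    # Low-level: parse the raw header value with a utility function.
    from werkzeug.http import parse_accept_header

    accept = parse_accept_header('text/html,application/xml;q=0.9,*/*;q=0.8')
    print(accept.best)                    # 'text/html'

    # High-level: the Request wrapper parses the same header for you.
    from werkzeug.wrappers import Request

    request = Request.from_values(
        headers={'Accept': 'text/html,application/xml;q=0.9,*/*;q=0.8'})
    print(request.accept_mimetypes.best)  # 'text/html'

Both levels hand back the same parsed data structure, so you can pick whichever fits the code you are writing.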
Example
=======

This example implements a small `Hello World` application that greets the
user with the name entered::

    from werkzeug.utils import escape
    from werkzeug.wrappers import Request, Response

    @Request.application
    def hello_world(request):
        result = ['<title>Greeter</title>']
        if request.method == 'POST':
            result.append('<h1>Hello %s!</h1>' % escape(request.form['name']))
        result.append('''
            <form action="" method="post">
                <p>Name: <input type="text" name="name" size="20">
                <input type="submit" value="Greet me">
            </form>
        ''')
        return Response(''.join(result), mimetype='text/html')

Alternatively the same application could be used without request and response
objects but by taking advantage of the parsing functions werkzeug provides::

    from werkzeug.formparser import parse_form_data
    from werkzeug.utils import escape

    def hello_world(environ, start_response):
        result = ['<title>Greeter</title>']
        if environ['REQUEST_METHOD'] == 'POST':
            form = parse_form_data(environ)[1]
            result.append('<h1>Hello %s!</h1>' % escape(form['name']))
        result.append('''
            <form action="" method="post">
                <p>Name: <input type="text" name="name" size="20">
                <input type="submit" value="Greet me">
            </form>
''') start_response('200 OK', [('Content-Type', 'text/html; charset=utf-8')]) return [''.join(result)] High or Low? ============ Usually you want to use the high-level layer (the request and response objects). But there are situations where this might not be what you want. For example you might be maintaining code for an application written in Django or another framework and you have to parse HTTP headers. You can utilize Werkzeug for that by accessing the lower-level HTTP header parsing functions. Another situation where the low level parsing functions can be useful are custom WSGI frameworks, unit-testing or modernizing an old CGI/mod_python application to WSGI as well as WSGI middlewares where you want to keep the overhead low. werkzeug-0.14.1/docs/local.rst000066400000000000000000000067161322225165500162310ustar00rootroot00000000000000============== Context Locals ============== .. module:: werkzeug.local Sooner or later you have some things you want to have in every single view or helper function or whatever. In PHP the way to go are global variables. However, that isn't possible in WSGI applications without a major drawback: As soon as you operate on the global namespace your application isn't thread-safe any longer. The Python standard library has a concept called "thread locals" (or thread-local data). A thread local is a global object in which you can put stuff in and get back later in a thread-safe and thread-specific way. That means that whenever you set or get a value on a thread local object, the thread local object checks in which thread you are and retrieves the value corresponding to your thread (if one exists). So, you won't accidentally get another thread's data. This approach, however, has a few disadvantages. For example, besides threads, there are other types of concurrency in Python. A very popular one is greenlets. Also, whether every request gets its own thread is not guaranteed in WSGI. It could be that a request is reusing a thread from a previous request, and hence data is left over in the thread local object. Werkzeug provides its own implementation of local data storage called `werkzeug.local`. This approach provides a similar functionality to thread locals but also works with greenlets. Here's a simple example of how one could use werkzeug.local:: from werkzeug.local import Local, LocalManager local = Local() local_manager = LocalManager([local]) def application(environ, start_response): local.request = request = Request(environ) ... application = local_manager.make_middleware(application) This binds the request to `local.request`. Every other piece of code executed after this assignment in the same context can safely access local.request and will get the same request object. The `make_middleware` method on the local manager ensures that all references to the local objects are cleared up after the request. The same context means the same greenlet (if you're using greenlets) in the same thread and same process. If a request object is not yet set on the local object and you try to access it, you will get an `AttributeError`. You can use `getattr` to avoid that:: def get_request(): return getattr(local, 'request', None) This will try to get the request or return `None` if the request is not (yet?) available. Note that local objects cannot manage themselves, for that you need a local manager. 
You can pass a local manager multiple locals or add additional locals later by appending them to `manager.locals` and every time the manager cleans up it will clean up all the data left in the locals for this context.

.. autofunction:: release_local

.. autoclass:: LocalManager
   :members: cleanup, make_middleware, middleware, get_ident

.. autoclass:: LocalStack
   :members: push, pop, top

.. autoclass:: LocalProxy
   :members: _get_current_object

Keep in mind that ``repr()`` is also forwarded, so if you want to find out if you are dealing with a proxy you can do an ``isinstance()`` check:

.. sourcecode:: pycon

   >>> from werkzeug.local import LocalProxy
   >>> isinstance(request, LocalProxy)
   True

You can also create proxy objects by hand:

.. sourcecode:: python

   from werkzeug.local import Local, LocalProxy
   local = Local()
   request = LocalProxy(local, 'request')

werkzeug-0.14.1/docs/logo.pdf000066400000000000000000000124231322225165500160300ustar00rootroot00000000000000
werkzeug-0.14.1/docs/make.bat000066400000000000000000000046241322225165500160060ustar00rootroot00000000000000
@ECHO OFF
REM Command file for Sphinx documentation

set SPHINXBUILD=sphinx-build
set ALLSPHINXOPTS=-d _build/doctrees %SPHINXOPTS% .
if NOT "%PAPER%" == "" (
    set ALLSPHINXOPTS=-D latex_paper_size=%PAPER% %ALLSPHINXOPTS%
)

if "%1" == "" goto help

if "%1" == "help" (
    :help
    echo.Please use `make ^<target^>` where ^<target^> is one of
    echo.
html to make standalone HTML files echo. pickle to make pickle files echo. json to make JSON files echo. htmlhelp to make HTML files and a HTML help project echo. qthelp to make HTML files and a qthelp project echo. latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter echo. changes to make an overview over all changed/added/deprecated items echo. linkcheck to check all external links for integrity goto end ) if "%1" == "clean" ( for /d %%i in (_build\*) do rmdir /q /s %%i del /q /s _build\* goto end ) if "%1" == "html" ( %SPHINXBUILD% -b html %ALLSPHINXOPTS% _build/html echo. echo.Build finished. The HTML pages are in _build/html. goto end ) if "%1" == "pickle" ( %SPHINXBUILD% -b pickle %ALLSPHINXOPTS% _build/pickle echo. echo.Build finished; now you can process the pickle files. goto end ) if "%1" == "json" ( %SPHINXBUILD% -b json %ALLSPHINXOPTS% _build/json echo. echo.Build finished; now you can process the JSON files. goto end ) if "%1" == "htmlhelp" ( %SPHINXBUILD% -b htmlhelp %ALLSPHINXOPTS% _build/htmlhelp echo. echo.Build finished; now you can run HTML Help Workshop with the ^ .hhp project file in _build/htmlhelp. goto end ) if "%1" == "qthelp" ( %SPHINXBUILD% -b qthelp %ALLSPHINXOPTS% _build/qthelp echo. echo.Build finished; now you can run "qcollectiongenerator" with the ^ .qhcp project file in _build/qthelp, like this: echo.^> qcollectiongenerator _build\qthelp\Werkzeug.qhcp echo.To view the help file: echo.^> assistant -collectionFile _build\qthelp\Werkzeug.ghc goto end ) if "%1" == "latex" ( %SPHINXBUILD% -b latex %ALLSPHINXOPTS% _build/latex echo. echo.Build finished; the LaTeX files are in _build/latex. goto end ) if "%1" == "changes" ( %SPHINXBUILD% -b changes %ALLSPHINXOPTS% _build/changes echo. echo.The overview file is in _build/changes. goto end ) if "%1" == "linkcheck" ( %SPHINXBUILD% -b linkcheck %ALLSPHINXOPTS% _build/linkcheck echo. echo.Link check complete; look for any errors in the above output ^ or in _build/linkcheck/output.txt. goto end ) :end werkzeug-0.14.1/docs/makearchive.py000066400000000000000000000002701322225165500172230ustar00rootroot00000000000000import os import conf name = "werkzeug-docs-" + conf.version os.chdir("_build") os.rename("html", name) os.system("tar czf %s.tar.gz %s" % (name, name)) os.rename(name, "html") werkzeug-0.14.1/docs/middlewares.rst000066400000000000000000000007711322225165500174320ustar00rootroot00000000000000=========== Middlewares =========== .. module:: werkzeug.wsgi Middlewares wrap applications to dispatch between them or provide additional request handling. Additionally to the middlewares documented here, there is also the :class:`DebuggedApplication` class that is implemented as a WSGI middleware. .. autoclass:: SharedDataMiddleware :members: is_allowed .. autoclass:: ProxyMiddleware .. autoclass:: DispatcherMiddleware Also there's the … .. autofunction:: werkzeug._internal._easteregg werkzeug-0.14.1/docs/python3.rst000066400000000000000000000052411322225165500165330ustar00rootroot00000000000000.. _python3: ============== Python 3 Notes ============== Since version 0.9, Werkzeug supports Python 3.3+ in addition to versions 2.6 and 2.7. Older Python 3 versions such as 3.2 or 3.1 are not supported. This part of the documentation outlines special information required to use Werkzeug and WSGI on Python 3. .. warning:: Python 3 support in Werkzeug is currently highly experimental. Please give feedback on it and help us improve it. 
WSGI Environment ================ The WSGI environment on Python 3 works slightly different than it does on Python 2. For the most part Werkzeug hides the differences from you if you work on the higher level APIs. The main difference between Python 2 and Python 3 is that on Python 2 the WSGI environment contains bytes whereas the environment on Python 3 contains a range of differently encoded strings. There are two different kinds of strings in the WSGI environ on Python 3: - unicode strings restricted to latin1 values. These are used for HTTP headers and a few other things. - unicode strings carrying binary payload, roundtripped through latin1 values. This is usually referred as “WSGI encoding dance” throughout Werkzeug. Werkzeug provides you with functionality to deal with these automatically so that you don't need to be aware of the inner workings. The following functions and classes should be used to read information out of the WSGI environment: - :func:`~werkzeug.wsgi.get_current_url` - :func:`~werkzeug.wsgi.get_host` - :func:`~werkzeug.wsgi.get_script_name` - :func:`~werkzeug.wsgi.get_path_info` - :func:`~werkzeug.wsgi.get_query_string` - :func:`~werkzeug.datastructures.EnvironHeaders` Applications are strongly discouraged to create and modify a WSGI environment themselves on Python 3 unless they take care of the proper decoding step. All high level interfaces in Werkzeug will apply the correct encoding and decoding steps as necessary. URLs ==== URLs in Werkzeug attempt to represent themselves as unicode strings on Python 3. All the parsing functions generally also provide functionality that allow operations on bytes. In some cases functions that deal with URLs allow passing in `None` as charset to change the return value to byte objects. Internally Werkzeug will now unify URIs and IRIs as much as possible. Request Cleanup =============== Request objects on Python 3 and PyPy require explicit closing when file uploads are involved. This is required to properly close temporary file objects created by the multipart parser. For that purpose the ``close()`` method was introduced. In addition to that request objects now also act as context managers that automatically close. werkzeug-0.14.1/docs/quickstart.rst000066400000000000000000000224701322225165500173240ustar00rootroot00000000000000========== Quickstart ========== .. module:: werkzeug This part of the documentation shows how to use the most important parts of Werkzeug. It's intended as a starting point for developers with basic understanding of :pep:`333` (WSGI) and :rfc:`2616` (HTTP). .. warning:: Make sure to import all objects from the places the documentation suggests. It is theoretically possible in some situations to import objects from different locations but this is not supported. For example :class:`MultiDict` is a member of the `werkzeug` module but internally implemented in a different one. WSGI Environment ================ The WSGI environment contains all the information the user request transmits to the application. 
It is passed to the WSGI application but you can also create a WSGI environ dict using the :func:`create_environ` helper: >>> from werkzeug.test import create_environ >>> environ = create_environ('/foo', 'http://localhost:8080/') Now we have an environment to play around: >>> environ['PATH_INFO'] '/foo' >>> environ['SCRIPT_NAME'] '' >>> environ['SERVER_NAME'] 'localhost' Usually nobody wants to work with the environ directly because it is limited to bytestrings and does not provide any way to access the form data besides parsing that data by hand. Enter Request ============= For access to the request data the :class:`Request` object is much more fun. It wraps the `environ` and provides a read-only access to the data from there: >>> from werkzeug.wrappers import Request >>> request = Request(environ) Now you can access the important variables and Werkzeug will parse them for you and decode them where it makes sense. The default charset for requests is set to `utf-8` but you can change that by subclassing :class:`Request`. >>> request.path u'/foo' >>> request.script_root u'' >>> request.host 'localhost:8080' >>> request.url 'http://localhost:8080/foo' We can also find out which HTTP method was used for the request: >>> request.method 'GET' This way we can also access URL arguments (the query string) and data that was transmitted in a POST/PUT request. For testing purposes we can create a request object from supplied data using the :meth:`~BaseRequest.from_values` method: >>> from cStringIO import StringIO >>> data = "name=this+is+encoded+form+data&another_key=another+one" >>> request = Request.from_values(query_string='foo=bar&blah=blafasel', ... content_length=len(data), input_stream=StringIO(data), ... content_type='application/x-www-form-urlencoded', ... method='POST') ... >>> request.method 'POST' Now we can access the URL parameters easily: >>> request.args.keys() ['blah', 'foo'] >>> request.args['blah'] u'blafasel' Same for the supplied form data: >>> request.form['name'] u'this is encoded form data' Handling for uploaded files is not much harder as you can see from this example:: def store_file(request): file = request.files.get('my_file') if file: file.save('/where/to/store/the/file.txt') else: handle_the_error() The files are represented as :class:`FileStorage` objects which provide some common operations to work with them. Request headers can be accessed by using the :class:`~BaseRequest.headers` attribute: >>> request.headers['Content-Length'] '54' >>> request.headers['Content-Type'] 'application/x-www-form-urlencoded' The keys for the headers are of course case insensitive. Header Parsing ============== There is more. Werkzeug provides convenient access to often used HTTP headers and other request data. Let's create a request object with all the data a typical web browser transmits so that we can play with it: >>> environ = create_environ() >>> environ.update( ... HTTP_USER_AGENT='Mozilla/5.0 (Macintosh; U; Mac OS X 10.5; en-US; ) Firefox/3.1', ... HTTP_ACCEPT='text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8', ... HTTP_ACCEPT_LANGUAGE='de-at,en-us;q=0.8,en;q=0.5', ... HTTP_ACCEPT_ENCODING='gzip,deflate', ... HTTP_ACCEPT_CHARSET='ISO-8859-1,utf-8;q=0.7,*;q=0.7', ... HTTP_IF_MODIFIED_SINCE='Fri, 20 Feb 2009 10:10:25 GMT', ... HTTP_IF_NONE_MATCH='"e51c9-1e5d-46356dc86c640"', ... HTTP_CACHE_CONTROL='max-age=0' ... ) ... 
>>> request = Request(environ) Let's start with the most useless header: the user agent: >>> request.user_agent.browser 'firefox' >>> request.user_agent.platform 'macos' >>> request.user_agent.version '3.1' >>> request.user_agent.language 'en-US' A more useful header is the accept header. With this header the browser informs the web application what mimetypes it can handle and how well. All accept headers are sorted by the quality, the best item being the first: >>> request.accept_mimetypes.best 'text/html' >>> 'application/xhtml+xml' in request.accept_mimetypes True >>> print request.accept_mimetypes["application/json"] 0.8 The same works for languages: >>> request.accept_languages.best 'de-at' >>> request.accept_languages.values() ['de-at', 'en-us', 'en'] And of course encodings and charsets: >>> 'gzip' in request.accept_encodings True >>> request.accept_charsets.best 'ISO-8859-1' >>> 'utf-8' in request.accept_charsets True Normalization is available, so you can safely use alternative forms to perform containment checking: >>> 'UTF8' in request.accept_charsets True >>> 'de_AT' in request.accept_languages True E-tags and other conditional headers are available in parsed form as well: >>> request.if_modified_since datetime.datetime(2009, 2, 20, 10, 10, 25) >>> request.if_none_match >>> request.cache_control >>> request.cache_control.max_age 0 >>> 'e51c9-1e5d-46356dc86c640' in request.if_none_match True Responses ========= Response objects are the opposite of request objects. They are used to send data back to the client. In reality, response objects are nothing more than glorified WSGI applications. So what you are doing is not *returning* the response objects from your WSGI application but *calling* it as WSGI application inside your WSGI application and returning the return value of that call. So imagine your standard WSGI "Hello World" application:: def application(environ, start_response): start_response('200 OK', [('Content-Type', 'text/plain')]) return ['Hello World!'] With response objects it would look like this:: from werkzeug.wrappers import Response def application(environ, start_response): response = Response('Hello World!') return response(environ, start_response) Also, unlike request objects, response objects are designed to be modified. So here is what you can do with them: >>> from werkzeug.wrappers import Response >>> response = Response("Hello World!") >>> response.headers['content-type'] 'text/plain; charset=utf-8' >>> response.data 'Hello World!' >>> response.headers['content-length'] = len(response.data) You can modify the status of the response in the same way. Either just the code or provide a message as well: >>> response.status '200 OK' >>> response.status = '404 Not Found' >>> response.status_code 404 >>> response.status_code = 400 >>> response.status '400 BAD REQUEST' As you can see attributes work in both directions. So you can set both :attr:`~BaseResponse.status` and :attr:`~BaseResponse.status_code` and the change will be reflected to the other. 
Also common headers are exposed as attributes or with methods to set / retrieve them: >>> response.content_length 12 >>> from datetime import datetime >>> response.date = datetime(2009, 2, 20, 17, 42, 51) >>> response.headers['Date'] 'Fri, 20 Feb 2009 17:42:51 GMT' Because etags can be weak or strong there are methods to set them: >>> response.set_etag("12345-abcd") >>> response.headers['etag'] '"12345-abcd"' >>> response.get_etag() ('12345-abcd', False) >>> response.set_etag("12345-abcd", weak=True) >>> response.get_etag() ('12345-abcd', True) Some headers are available as mutable structures. For example most of the `Content-` headers are sets of values: >>> response.content_language.add('en-us') >>> response.content_language.add('en') >>> response.headers['Content-Language'] 'en-us, en' Also here this works in both directions: >>> response.headers['Content-Language'] = 'de-AT, de' >>> response.content_language HeaderSet(['de-AT', 'de']) Authentication headers can be set that way as well: >>> response.www_authenticate.set_basic("My protected resource") >>> response.headers['www-authenticate'] 'Basic realm="My protected resource"' Cookies can be set as well: >>> response.set_cookie('name', 'value') >>> response.headers['Set-Cookie'] 'name=value; Path=/' >>> response.set_cookie('name2', 'value2') If headers appear multiple times you can use the :meth:`~Headers.getlist` method to get all values for a header: >>> response.headers.getlist('Set-Cookie') ['name=value; Path=/', 'name2=value2; Path=/'] Finally if you have set all the conditional values, you can make the response conditional against a request. Which means that if the request can assure that it has the information already, no data besides the headers is sent over the network which saves traffic. For that you should set at least an etag (which is used for comparison) and the date header and then call :class:`~BaseRequest.make_conditional` with the request object. The response is modified accordingly (status code changed, response body removed, entity headers removed etc.) werkzeug-0.14.1/docs/request_data.rst000066400000000000000000000110201322225165500176000ustar00rootroot00000000000000.. _dealing-with-request-data: Dealing with Request Data ========================= .. module:: werkzeug The most important rule about web development is "Do not trust the user". This is especially true for incoming request data on the input stream. With WSGI this is actually a bit harder than you would expect. Because of that Werkzeug wraps the request stream for you to save you from the most prominent problems with it. Missing EOF Marker on Input Stream ---------------------------------- The input stream has no end-of-file marker. If you would call the :meth:`~file.read` method on the `wsgi.input` stream you would cause your application to hang on conforming servers. This is actually intentional however painful. Werkzeug solves that problem by wrapping the input stream in a special :class:`LimitedStream`. The input stream is exposed on the request objects as :attr:`~BaseRequest.stream`. This one is either an empty stream (if the form data was parsed) or a limited stream with the contents of the input stream. When does Werkzeug Parse? ------------------------- Werkzeug parses the incoming data under the following situations: - you access either :attr:`~BaseRequest.form`, :attr:`~BaseRequest.files`, or :attr:`~BaseRequest.stream` and the request method was `POST` or `PUT`. - if you call :func:`parse_form_data`. These calls are not interchangeable. 
If you invoke :func:`parse_form_data` you must not use the request object or at least not the attributes that trigger the parsing process. This is also true if you read from the `wsgi.input` stream before the parsing. **General rule:** Leave the WSGI input stream alone. Especially in WSGI middlewares. Use either the parsing functions or the request object. Do not mix multiple WSGI utility libraries for form data parsing or anything else that works on the input stream. How does it Parse? ------------------ The standard Werkzeug parsing behavior handles three cases: - input content type was `multipart/form-data`. In this situation the :class:`~BaseRequest.stream` will be empty and :class:`~BaseRequest.form` will contain the regular `POST` / `PUT` data, :class:`~BaseRequest.files` will contain the uploaded files as :class:`FileStorage` objects. - input content type was `application/x-www-form-urlencoded`. Then the :class:`~BaseRequest.stream` will be empty and :class:`~BaseRequest.form` will contain the regular `POST` / `PUT` data and :class:`~BaseRequest.files` will be empty. - the input content type was neither of them, :class:`~BaseRequest.stream` points to a :class:`LimitedStream` with the input data for further processing. Special note on the :attr:`~BaseRequest.get_data` method: Calling this loads the full request data into memory. This is only safe to do if the :attr:`~BaseRequest.max_content_length` is set. Also you can *either* read the stream *or* call :meth:`~BaseRequest.get_data`. Limiting Request Data --------------------- To avoid being the victim of a DDOS attack you can set the maximum accepted content length and request field sizes. The :class:`BaseRequest` class has two attributes for that: :attr:`~BaseRequest.max_content_length` and :attr:`~BaseRequest.max_form_memory_size`. The first one can be used to limit the total content length. For example by setting it to ``1024 * 1024 * 16`` the request won't accept more than 16MB of transmitted data. Because certain data can't be moved to the hard disk (regular post data) whereas temporary files can, there is a second limit you can set. The :attr:`~BaseRequest.max_form_memory_size` limits the size of `POST` transmitted form data. By setting it to ``1024 * 1024 * 2`` you can make sure that all in memory-stored fields are not more than 2MB in size. This however does *not* affect in-memory stored files if the `stream_factory` used returns a in-memory file. How to extend Parsing? ---------------------- Modern web applications transmit a lot more than multipart form data or url encoded data. Extending the parsing capabilities by subclassing the :class:`BaseRequest` is simple. The following example implements parsing for incoming JSON data:: from werkzeug.utils import cached_property from werkzeug.wrappers import Request from simplejson import loads class JSONRequest(Request): # accept up to 4MB of transmitted data. max_content_length = 1024 * 1024 * 4 @cached_property def json(self): if self.headers.get('content-type') == 'application/json': return loads(self.data) werkzeug-0.14.1/docs/routing.rst000066400000000000000000000144221322225165500166170ustar00rootroot00000000000000.. _routing: =========== URL Routing =========== .. module:: werkzeug.routing .. testsetup:: from werkzeug.routing import * When it comes to combining multiple controller or view functions (however you want to call them), you need a dispatcher. 
A simple way would be applying regular expression tests on ``PATH_INFO`` and call registered callback functions that return the value. Werkzeug provides a much more powerful system, similar to `Routes`_. All the objects mentioned on this page must be imported from :mod:`werkzeug.routing`, not from :mod:`werkzeug`! .. _Routes: http://routes.groovie.org/ Quickstart ========== Here is a simple example which could be the URL definition for a blog:: from werkzeug.routing import Map, Rule, NotFound, RequestRedirect url_map = Map([ Rule('/', endpoint='blog/index'), Rule('//', endpoint='blog/archive'), Rule('///', endpoint='blog/archive'), Rule('////', endpoint='blog/archive'), Rule('////', endpoint='blog/show_post'), Rule('/about', endpoint='blog/about_me'), Rule('/feeds/', endpoint='blog/feeds'), Rule('/feeds/.rss', endpoint='blog/show_feed') ]) def application(environ, start_response): urls = url_map.bind_to_environ(environ) try: endpoint, args = urls.match() except HTTPException, e: return e(environ, start_response) start_response('200 OK', [('Content-Type', 'text/plain')]) return ['Rule points to %r with arguments %r' % (endpoint, args)] So what does that do? First of all we create a new :class:`Map` which stores a bunch of URL rules. Then we pass it a list of :class:`Rule` objects. Each :class:`Rule` object is instantiated with a string that represents a rule and an endpoint which will be the alias for what view the rule represents. Multiple rules can have the same endpoint, but should have different arguments to allow URL construction. The format for the URL rules is straightforward, but explained in detail below. Inside the WSGI application we bind the url_map to the current request which will return a new :class:`MapAdapter`. This url_map adapter can then be used to match or build domains for the current request. The :meth:`MapAdapter.match` method can then either return a tuple in the form ``(endpoint, args)`` or raise one of the three exceptions :exc:`~werkzeug.exceptions.NotFound`, :exc:`~werkzeug.exceptions.MethodNotAllowed`, or :exc:`~werkzeug.exceptions.RequestRedirect`. For more details about those exceptions have a look at the documentation of the :meth:`MapAdapter.match` method. Rule Format =========== Rule strings basically are just normal URL paths with placeholders in the format ````, where converter and the arguments are optional. If no converter is defined, the `default` converter is used (which means `string` in the normal configuration). URL rules that end with a slash are branch URLs, others are leaves. If you have `strict_slashes` enabled (which is the default), all branch URLs that are visited without a trailing slash will trigger a redirect to the same URL with that slash appended. The list of converters can be extended, the default converters are explained below. Builtin Converters ================== Here a list of converters that come with Werkzeug: .. autoclass:: UnicodeConverter .. autoclass:: PathConverter .. autoclass:: AnyConverter .. autoclass:: IntegerConverter .. autoclass:: FloatConverter .. autoclass:: UUIDConverter Maps, Rules and Adapters ======================== .. autoclass:: Map :members: .. attribute:: converters The dictionary of converters. This can be modified after the class was created, but will only affect rules added after the modification. If the rules are defined with the list passed to the class, the `converters` parameter to the constructor has to be used instead. .. autoclass:: MapAdapter :members: .. 
autoclass:: Rule :members: empty Rule Factories ============== .. autoclass:: RuleFactory :members: get_rules .. autoclass:: Subdomain .. autoclass:: Submount .. autoclass:: EndpointPrefix Rule Templates ============== .. autoclass:: RuleTemplate Custom Converters ================= You can easily add custom converters. The only thing you have to do is to subclass :class:`BaseConverter` and pass that new converter to the url_map. A converter has to provide two public methods: `to_python` and `to_url`, as well as a member that represents a regular expression. Here is a small example:: from random import randrange from werkzeug.routing import Rule, Map, BaseConverter, ValidationError class BooleanConverter(BaseConverter): def __init__(self, url_map, randomify=False): super(BooleanConverter, self).__init__(url_map) self.randomify = randomify self.regex = '(?:yes|no|maybe)' def to_python(self, value): if value == 'maybe': if self.randomify: return not randrange(2) raise ValidationError() return value == 'yes' def to_url(self, value): return value and 'yes' or 'no' url_map = Map([ Rule('/vote/', endpoint='vote'), Rule('/vote/', endpoint='foo') ], converters={'bool': BooleanConverter}) If you want that converter to be the default converter, name it ``'default'``. Host Matching ============= .. versionadded:: 0.7 Starting with Werkzeug 0.7 it's also possible to do matching on the whole host names instead of just the subdomain. To enable this feature you need to pass ``host_matching=True`` to the :class:`Map` constructor and provide the `host` argument to all routes:: url_map = Map([ Rule('/', endpoint='www_index', host='www.example.com'), Rule('/', endpoint='help_index', host='help.example.com') ], host_matching=True) Variable parts are of course also possible in the host section:: url_map = Map([ Rule('/', endpoint='www_index', host='www.example.com'), Rule('/', endpoint='user_index', host='.example.com') ], host_matching=True) werkzeug-0.14.1/docs/serving.rst000066400000000000000000000204621322225165500166060ustar00rootroot00000000000000========================= Serving WSGI Applications ========================= .. module:: werkzeug.serving There are many ways to serve a WSGI application. While you're developing it, you usually don't want to have a full-blown webserver like Apache up and running, but instead a simple standalone one. Because of that Werkzeug comes with a builtin development server. The easiest way is creating a small ``start-myproject.py`` file that runs the application using the builtin server:: #!/usr/bin/env python # -*- coding: utf-8 -*- from werkzeug.serving import run_simple from myproject import make_app app = make_app(...) run_simple('localhost', 8080, app, use_reloader=True) You can also pass it the `extra_files` keyword argument with a list of additional files (like configuration files) you want to observe. .. autofunction:: run_simple .. autofunction:: is_running_from_reloader .. autofunction:: make_ssl_devcert .. admonition:: Information The development server is not intended to be used on production systems. It was designed especially for development purposes and performs poorly under high load. For deployment setups have a look at the :ref:`deployment` pages. .. _reloader: Reloader -------- .. versionchanged:: 0.10 The Werkzeug reloader constantly monitors modules and paths of your web application, and restarts the server if any of the observed files change. Since version 0.10, there are two backends the reloader supports: ``stat`` and ``watchdog``. 
- The default ``stat`` backend simply checks the ``mtime`` of all files in a regular interval. This is sufficient for most cases, however, it is known to drain a laptop's battery. - The ``watchdog`` backend uses filesystem events, and is much faster than ``stat``. It requires the `watchdog `_ module to be installed. The recommended way to achieve this is to add ``Werkzeug[watchdog]`` to your requirements file. If ``watchdog`` is installed and available it will automatically be used instead of the builtin ``stat`` reloader. To switch between the backends you can use the `reloader_type` parameter of the :func:`run_simple` function. ``'stat'`` sets it to the default stat based polling and ``'watchdog'`` forces it to the watchdog backend. .. note:: Some edge cases, like modules that failed to import correctly, are not handled by the stat reloader for performance reasons. The watchdog reloader monitors such files too. Colored Logging --------------- Werkzeug is able to color the output of request logs when ran from a terminal, just install the `termcolor `_ package. Windows users need to install `colorama `_ in addition to termcolor for this to work. Virtual Hosts ------------- Many web applications utilize multiple subdomains. This can be a bit tricky to simulate locally. Fortunately there is the `hosts file`_ that can be used to assign the local computer multiple names. This allows you to call your local computer `yourapplication.local` and `api.yourapplication.local` (or anything else) in addition to `localhost`. You can find the hosts file on the following location: =============== ============================================== Windows ``%SystemRoot%\system32\drivers\etc\hosts`` Linux / OS X ``/etc/hosts`` =============== ============================================== You can open the file with your favorite text editor and add a new name after `localhost`:: 127.0.0.1 localhost yourapplication.local api.yourapplication.local Save the changes and after a while you should be able to access the development server on these host names as well. You can use the :ref:`routing` system to dispatch between different hosts or parse :attr:`request.host` yourself. Shutting Down The Server ------------------------ .. versionadded:: 0.7 Starting with Werkzeug 0.7 the development server provides a way to shut down the server after a request. This currently only works with Python 2.6 and later and will only work with the development server. To initiate the shutdown you have to call a function named ``'werkzeug.server.shutdown'`` in the WSGI environment:: def shutdown_server(environ): if not 'werkzeug.server.shutdown' in environ: raise RuntimeError('Not running the development server') environ['werkzeug.server.shutdown']() Troubleshooting --------------- On operating systems that support ipv6 and have it configured such as modern Linux systems, OS X 10.4 or higher as well as Windows Vista some browsers can be painfully slow if accessing your local server. The reason for this is that sometimes "localhost" is configured to be available on both ipv4 and ipv6 sockets and some browsers will try to access ipv6 first and then ipv4. At the current time the integrated webserver does not support ipv6 and ipv4 at the same time and for better portability ipv4 is the default. If you notice that the web browser takes ages to load the page there are two ways around this issue. 
If you don't need ipv6 support you can disable the ipv6 entry in the `hosts file`_ by removing this line:: ::1 localhost Alternatively you can also disable ipv6 support in your browser. For example if Firefox shows this behavior you can disable it by going to ``about:config`` and disabling the `network.dns.disableIPv6` key. This however is not recommended as of Werkzeug 0.6.1! Starting with Werkzeug 0.6.1, the server will now switch between ipv4 and ipv6 based on your operating system's configuration. This means if that you disabled ipv6 support in your browser but your operating system is preferring ipv6, you will be unable to connect to your server. In that situation, you can either remove the localhost entry for ``::1`` or explicitly bind the hostname to an ipv4 address (`127.0.0.1`) .. _hosts file: http://en.wikipedia.org/wiki/Hosts_file SSL --- .. versionadded:: 0.6 The builtin server supports SSL for testing purposes. If an SSL context is provided it will be used. That means a server can either run in HTTP or HTTPS mode, but not both. Quickstart `````````` The easiest way to do SSL based development with Werkzeug is by using it to generate an SSL certificate and private key and storing that somewhere and to then put it there. For the certificate you need to provide the name of your server on generation or a `CN`. 1. Generate an SSL key and store it somewhere: >>> from werkzeug.serving import make_ssl_devcert >>> make_ssl_devcert('/path/to/the/key', host='localhost') ('/path/to/the/key.crt', '/path/to/the/key.key') 2. Now this tuple can be passed as ``ssl_context`` to the :func:`run_simple` method:: run_simple('localhost', 4000, application, ssl_context=('/path/to/the/key.crt', '/path/to/the/key.key')) You will have to acknowledge the certificate in your browser once then. Loading Contexts by Hand ```````````````````````` In Python 2.7.9 and 3+ you also have the option to use a ``ssl.SSLContext`` object instead of a simple tuple. This way you have better control over the SSL behavior of Werkzeug's builtin server:: import ssl ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23) ctx.load_cert_chain('ssl.cert', 'ssl.key') run_simple('localhost', 4000, application, ssl_context=ctx) .. versionchanged 0.10:: ``OpenSSL`` contexts are not supported anymore. Generating Certificates ``````````````````````` A key and certificate can be created in advance using the openssl tool instead of the :func:`make_ssl_devcert`. This requires that you have the `openssl` command installed on your system:: $ openssl genrsa 1024 > ssl.key $ openssl req -new -x509 -nodes -sha1 -days 365 -key ssl.key > ssl.cert Adhoc Certificates `````````````````` The easiest way to enable SSL is to start the server in adhoc-mode. In that case Werkzeug will generate an SSL certificate for you:: run_simple('localhost', 4000, application, ssl_context='adhoc') The downside of this of course is that you will have to acknowledge the certificate each time the server is reloaded. Adhoc certificates are discouraged because modern browsers do a bad job at supporting them for security reasons. This feature requires the pyOpenSSL library to be installed. werkzeug-0.14.1/docs/terms.rst000066400000000000000000000027701322225165500162650ustar00rootroot00000000000000=============== Important Terms =============== .. module:: werkzeug This page covers important terms used in the documentation and Werkzeug itself. WSGI ---- WSGI a specification for Python web applications Werkzeug follows. It was specified in the :pep:`333` and is widely supported. 
Unlike previous solutions it guarantees that web applications, servers and utilities can work together. Response Object --------------- For Werkzeug, a response object is an object that works like a WSGI application but does not do any request processing. Usually you have a view function or controller method that processes the request and assembles a response object. A response object is *not* necessarily the :class:`BaseResponse` object or a subclass thereof. For example Pylons/webob provide a very similar response class that can be used as well (:class:`webob.Response`). View Function ------------- Often people speak of MVC (Model, View, Controller) when developing web applications. However, the Django framework coined MTV (Model, Template, View) which basically means the same but reduces the concept to the data model, a function that processes data from the request and the database and renders a template. Werkzeug itself does not tell you how you should develop applications, but the documentation often speaks of view functions that work roughly the same. The idea of a view function is that it's called with a request object (and optionally some parameters from an URL rule) and returns a response object. werkzeug-0.14.1/docs/test.rst000066400000000000000000000111701322225165500161040ustar00rootroot00000000000000============== Test Utilities ============== .. module:: werkzeug.test Quite often you want to unittest your application or just check the output from an interactive python session. In theory that is pretty simple because you can fake a WSGI environment and call the application with a dummy `start_response` and iterate over the application iterator but there are argumentably better ways to interact with an application. Diving In ========= Werkzeug provides a `Client` object which you can pass a WSGI application (and optionally a response wrapper) which you can use to send virtual requests to the application. A response wrapper is a callable that takes three arguments: the application iterator, the status and finally a list of headers. The default response wrapper returns a tuple. Because response objects have the same signature, you can use them as response wrapper, ideally by subclassing them and hooking in test functionality. >>> from werkzeug.test import Client >>> from werkzeug.testapp import test_app >>> from werkzeug.wrappers import BaseResponse >>> c = Client(test_app, BaseResponse) >>> resp = c.get('/') >>> resp.status_code 200 >>> resp.headers Headers([('Content-Type', 'text/html; charset=utf-8'), ('Content-Length', '8339')]) >>> resp.data.splitlines()[0] '>> c = Client(test_app) >>> app_iter, status, headers = c.get('/') >>> status '200 OK' >>> headers [('Content-Type', 'text/html; charset=utf-8'), ('Content-Length', '8339')] >>> ''.join(app_iter).splitlines()[0] '>> from werkzeug.test import EnvironBuilder >>> from StringIO import StringIO >>> builder = EnvironBuilder(method='POST', data={'foo': 'this is some text', ... 'file': (StringIO('my file contents'), 'test.txt')}) >>> env = builder.get_environ() The resulting environment is a regular WSGI environment that can be used for further processing: >>> from werkzeug.wrappers import Request >>> req = Request(env) >>> req.form['foo'] u'this is some text' >>> req.files['file'] >>> req.files['file'].read() 'my file contents' The :class:`EnvironBuilder` figures out the content type automatically if you pass a dict to the constructor as `data`. If you provide a string or an input stream you have to do that yourself. 
By default it will try to use ``application/x-www-form-urlencoded`` and only use ``multipart/form-data`` if files are uploaded: >>> builder = EnvironBuilder(method='POST', data={'foo': 'bar'}) >>> builder.content_type 'application/x-www-form-urlencoded' >>> builder.files['foo'] = StringIO('contents') >>> builder.content_type 'multipart/form-data' If a string is provided as data (or an input stream) you have to specify the content type yourself: >>> builder = EnvironBuilder(method='POST', data='{"json": "this is"}') >>> builder.content_type >>> builder.content_type = 'application/json' Testing API =========== .. autoclass:: EnvironBuilder :members: .. attribute:: path The path of the application. (aka `PATH_INFO`) .. attribute:: charset The charset used to encode unicode data. .. attribute:: headers A :class:`Headers` object with the request headers. .. attribute:: errors_stream The error stream used for the `wsgi.errors` stream. .. attribute:: multithread The value of `wsgi.multithread` .. attribute:: multiprocess The value of `wsgi.multiprocess` .. attribute:: environ_base The dict used as base for the newly create environ. .. attribute:: environ_overrides A dict with values that are used to override the generated environ. .. attribute:: input_stream The optional input stream. This and :attr:`form` / :attr:`files` is mutually exclusive. Also do not provide this stream if the request method is not `POST` / `PUT` or something comparable. .. autoclass:: Client .. automethod:: open Shortcut methods are available for many HTTP methods: .. automethod:: get .. automethod:: patch .. automethod:: post .. automethod:: head .. automethod:: put .. automethod:: delete .. automethod:: options .. automethod:: trace .. autofunction:: create_environ([options]) .. autofunction:: run_wsgi_app werkzeug-0.14.1/docs/transition.rst000066400000000000000000000044761322225165500173320ustar00rootroot00000000000000Transition to Werkzeug 1.0 ========================== Werkzeug originally had a magical import system hook that enabled everything to be imported from one module and still loading the actual implementations lazily as necessary. Unfortunately this turned out to be slow and also unreliable on alternative Python implementations and Google's App Engine. Starting with 0.7 we recommend against the short imports and strongly encourage starting importing from the actual implementation module. Werkzeug 1.0 will disable the magical import hook completely. Because finding out where the actual functions are imported and rewriting them by hand is a painful and boring process we wrote a tool that aids in making this transition. Automatically Rewriting Imports ------------------------------- For instance, with Werkzeug < 0.7 the recommended way to use the escape function was this:: from werkzeug import escape With Werkzeug 0.7, the recommended way to import this function is directly from the utils module (and with 1.0 this will become mandatory). To automatically rewrite all imports one can use the `werkzeug-import-rewrite `_ script. You can use it by executing it with Python and with a list of folders with Werkzeug based code. It will then spit out a hg/git compatible patch file. Example patch file creation:: $ python werkzeug-import-rewrite.py . 
> new-imports.udiff To apply the patch one of the following methods work: hg: :: hg import new-imports.udiff git: :: git apply new-imports.udiff patch: :: patch -p1 < new-imports.udiff Stop Using Deprecated Things ---------------------------- A few things in Werkzeug will stop being supported and for others, we're suggesting alternatives even if they will stick around for a longer time. Do not use: - `werkzeug.script`, replace it with custom scripts written with `argparse`, `click` or something similar. - `werkzeug.template`, replace with a proper template engine. - `werkzeug.contrib.jsrouting`, stop using URL generation for JavaScript, it does not scale well with many public routing. - `werkzeug.contrib.kickstart`, replace with hand written code, the Werkzeug API became better in general that this is no longer necessary. - `werkzeug.contrib.testtools`, not useful really. werkzeug-0.14.1/docs/tutorial.rst000066400000000000000000000440131322225165500167720ustar00rootroot00000000000000================= Werkzeug Tutorial ================= .. module:: werkzeug Welcome to the Werkzeug tutorial in which we will create a `TinyURL`_ clone that stores URLs in a redis instance. The libraries we will use for this applications are `Jinja`_ 2 for the templates, `redis`_ for the database layer and, of course, Werkzeug for the WSGI layer. You can use `pip` to install the required libraries:: pip install Jinja2 redis Werkzeug Also make sure to have a redis server running on your local machine. If you are on OS X, you can use `brew` to install it:: brew install redis If you are on Ubuntu or Debian, you can use apt-get:: sudo apt-get install redis-server Redis was developed for UNIX systems and was never really designed to work on Windows. For development purposes, the unofficial ports however work well enough. You can get them from `github `_. Introducing Shortly ------------------- In this tutorial, we will together create a simple URL shortener service with Werkzeug. Please keep in mind that Werkzeug is not a framework, it's a library with utilities to create your own framework or application and as such is very flexible. The approach we use here is just one of many you can use. As data store, we will use `redis`_ here instead of a relational database to keep this simple and because that's the kind of job that `redis`_ excels at. The final result will look something like this: .. image:: _static/shortly.png :alt: a screenshot of shortly .. _TinyURL: http://tinyurl.com/ .. _Jinja: http://jinja.pocoo.org/ .. _redis: http://redis.io/ Step 0: A Basic WSGI Introduction --------------------------------- Werkzeug is a utility library for WSGI. WSGI itself is a protocol or convention that ensures that your web application can speak with the webserver and more importantly that web applications work nicely together. A basic “Hello World” application in WSGI without the help of Werkzeug looks like this:: def application(environ, start_response): start_response('200 OK', [('Content-Type', 'text/plain')]) return ['Hello World!'] A WSGI application is something you can call and pass an environ dict and a ``start_response`` callable. The environ contains all incoming information, the ``start_response`` function can be used to indicate the start of the response. With Werkzeug you don't have to deal directly with either as request and response objects are provided to work with them. The request data takes the environ object and allows you to access the data from that environ in a nice manner. 
The response object is a WSGI application in itself and provides a much nicer way to create responses. Here is how you would write that application with response objects:: from werkzeug.wrappers import Response def application(environ, start_response): response = Response('Hello World!', mimetype='text/plain') return response(environ, start_response) And here an expanded version that looks at the query string in the URL (more importantly at the `name` parameter in the URL to substitute “World” against another word):: from werkzeug.wrappers import Request, Response def application(environ, start_response): request = Request(environ) text = 'Hello %s!' % request.args.get('name', 'World') response = Response(text, mimetype='text/plain') return response(environ, start_response) And that's all you need to know about WSGI. Step 1: Creating the Folders ---------------------------- Before we get started, let’s create the folders needed for this application:: /shortly /static /templates The shortly folder is not a python package, but just something where we drop our files. Directly into this folder we will then put our main module in the following steps. The files inside the static folder are available to users of the application via HTTP. This is the place where CSS and JavaScript files go. Inside the templates folder we will make Jinja2 look for templates. The templates you create later in the tutorial will go in this directory. Step 2: The Base Structure -------------------------- Now let's get right into it and create a module for our application. Let's create a file called `shortly.py` in the `shortly` folder. At first we will need a bunch of imports. I will pull in all the imports here, even if they are not used right away, to keep it from being confusing:: import os import redis import urlparse from werkzeug.wrappers import Request, Response from werkzeug.routing import Map, Rule from werkzeug.exceptions import HTTPException, NotFound from werkzeug.wsgi import SharedDataMiddleware from werkzeug.utils import redirect from jinja2 import Environment, FileSystemLoader Then we can create the basic structure for our application and a function to create a new instance of it, optionally with a piece of WSGI middleware that exports all the files on the `static` folder on the web:: class Shortly(object): def __init__(self, config): self.redis = redis.Redis(config['redis_host'], config['redis_port']) def dispatch_request(self, request): return Response('Hello World!') def wsgi_app(self, environ, start_response): request = Request(environ) response = self.dispatch_request(request) return response(environ, start_response) def __call__(self, environ, start_response): return self.wsgi_app(environ, start_response) def create_app(redis_host='localhost', redis_port=6379, with_static=True): app = Shortly({ 'redis_host': redis_host, 'redis_port': redis_port }) if with_static: app.wsgi_app = SharedDataMiddleware(app.wsgi_app, { '/static': os.path.join(os.path.dirname(__file__), 'static') }) return app Lastly we can add a piece of code that will start a local development server with automatic code reloading and a debugger:: if __name__ == '__main__': from werkzeug.serving import run_simple app = create_app() run_simple('127.0.0.1', 5000, app, use_debugger=True, use_reloader=True) The basic idea here is that our ``Shortly`` class is an actual WSGI application. The ``__call__`` method directly dispatches to ``wsgi_app``. 
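To see why that indirection is useful, here is a rough sketch of wrapping the application with one more middleware; the ``PathLogger`` class is made up purely for illustration and is not part of Shortly::

    class PathLogger(object):
        """Tiny WSGI middleware that prints each requested path."""

        def __init__(self, app):
            self.app = app

        def __call__(self, environ, start_response):
            print(environ.get('PATH_INFO', '/'))
            return self.app(environ, start_response)

    app = create_app()
    # Replace the inner WSGI callable; ``app`` itself keeps working because
    # ``Shortly.__call__`` just forwards to whatever ``wsgi_app`` now is.
    app.wsgi_app = PathLogger(app.wsgi_app)
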
This is done so that we can wrap ``wsgi_app`` to apply middlewares like we
do in the ``create_app`` function.  The actual ``wsgi_app`` method then
creates a :class:`Request` object and calls the ``dispatch_request`` method
which then has to return a :class:`Response` object which is then evaluated
as WSGI application again.  As you can see: turtles all the way down.  Both
the ``Shortly`` class we create, as well as any response object in Werkzeug,
implement the WSGI interface.  As a result of that you could even return
another WSGI application from the ``dispatch_request`` method.

The ``create_app`` factory function can be used to create a new instance of
our application.  Not only will it pass some parameters as configuration to
the application but also optionally add a WSGI middleware that exports
static files.  This way we have access to the files from the static folder
even when we are not configuring our server to provide them, which is very
helpful for development.

Intermezzo: Running the Application
-----------------------------------

Now you should be able to execute the file with `python` and see a server
on your local machine::

    $ python shortly.py
     * Running on http://127.0.0.1:5000/
     * Restarting with reloader: stat() polling

It also tells you that the reloader is active.  It will use various
techniques to figure out if any file changed on the disk and then
automatically restart.

Just go to the URL and you should see "Hello World!".

Step 3: The Environment
-----------------------

Now that we have the basic application class, we can make the constructor
do something useful and provide a few helpers on there that can come in
handy.  We will need to be able to render templates and connect to redis,
so let's extend the class a bit::

    def __init__(self, config):
        self.redis = redis.Redis(config['redis_host'], config['redis_port'])
        template_path = os.path.join(os.path.dirname(__file__), 'templates')
        self.jinja_env = Environment(loader=FileSystemLoader(template_path),
                                     autoescape=True)

    def render_template(self, template_name, **context):
        t = self.jinja_env.get_template(template_name)
        return Response(t.render(context), mimetype='text/html')

Step 4: The Routing
-------------------

Next up is routing.  Routing is the process of matching and parsing the URL
into something we can use.  Werkzeug provides a flexible integrated routing
system which we can use for that.  The way it works is that you create a
:class:`~werkzeug.routing.Map` instance and add a bunch of
:class:`~werkzeug.routing.Rule` objects.  Each rule has a pattern it will
try to match the URL against and an "endpoint".  The endpoint is typically
a string and can be used to uniquely identify the URL.  We could also use
this to automatically reverse the URL, but that's not what we will do in
this tutorial.

Just put this into the constructor::

    self.url_map = Map([
        Rule('/', endpoint='new_url'),
        Rule('/<short_id>', endpoint='follow_short_link'),
        Rule('/<short_id>+', endpoint='short_link_details')
    ])

Here we create a URL map with three rules.  ``/`` for the root of the URL
space where we will just dispatch to a function that implements the logic
to create a new URL.  And then one that follows the short link to the
target URL and another one with the same rule but a plus (``+``) at the
end to show the link details.

So how do we find our way from the endpoint to a function?  That's up to
you.  The way we will do it in this tutorial is by calling the method
``on_`` + endpoint on the class itself.
Here is how this works:: def dispatch_request(self, request): adapter = self.url_map.bind_to_environ(request.environ) try: endpoint, values = adapter.match() return getattr(self, 'on_' + endpoint)(request, **values) except HTTPException, e: return e We bind the URL map to the current environment and get back a :class:`~werkzeug.routing.URLAdapter`. The adapter can be used to match the request but also to reverse URLs. The match method will return the endpoint and a dictionary of values in the URL. For instance the rule for ``follow_short_link`` has a variable part called ``short_id``. When we go to ``http://localhost:5000/foo`` we will get the following values back:: endpoint = 'follow_short_link' values = {'short_id': u'foo'} If it does not match anything, it will raise a :exc:`~werkzeug.exceptions.NotFound` exception, which is an :exc:`~werkzeug.exceptions.HTTPException`. All HTTP exceptions are also WSGI applications by themselves which render a default error page. So we just catch all of them down and return the error itself. If all works well, we call the function ``on_`` + endpoint and pass it the request as argument as well as all the URL arguments as keyword arguments and return the response object that method returns. Step 5: The First View ---------------------- Let's start with the first view: the one for new URLs:: def on_new_url(self, request): error = None url = '' if request.method == 'POST': url = request.form['url'] if not is_valid_url(url): error = 'Please enter a valid URL' else: short_id = self.insert_url(url) return redirect('/%s+' % short_id) return self.render_template('new_url.html', error=error, url=url) This logic should be easy to understand. Basically we are checking that the request method is POST, in which case we validate the URL and add a new entry to the database, then redirect to the detail page. This means we need to write a function and a helper method. For URL validation this is good enough:: def is_valid_url(url): parts = urlparse.urlparse(url) return parts.scheme in ('http', 'https') For inserting the URL, all we need is this little method on our class:: def insert_url(self, url): short_id = self.redis.get('reverse-url:' + url) if short_id is not None: return short_id url_num = self.redis.incr('last-url-id') short_id = base36_encode(url_num) self.redis.set('url-target:' + short_id, url) self.redis.set('reverse-url:' + url, short_id) return short_id ``reverse-url:`` + the URL will store the short id. If the URL was already submitted this won't be None and we can just return that value which will be the short ID. Otherwise we increment the ``last-url-id`` key and convert it to base36. Then we store the link and the reverse entry in redis. And here the function to convert to base 36:: def base36_encode(number): assert number >= 0, 'positive integer required' if number == 0: return '0' base36 = [] while number != 0: number, i = divmod(number, 36) base36.append('0123456789abcdefghijklmnopqrstuvwxyz'[i]) return ''.join(reversed(base36)) So what is missing for this view to work is the template. We will create this later, let's first also write the other views and then do the templates in one go. Step 6: Redirect View --------------------- The redirect view is easy. All it has to do is to look for the link in redis and redirect to it. 
Additionally we will also increment a counter so that we know how often a link was clicked:: def on_follow_short_link(self, request, short_id): link_target = self.redis.get('url-target:' + short_id) if link_target is None: raise NotFound() self.redis.incr('click-count:' + short_id) return redirect(link_target) In this case we will raise a :exc:`~werkzeug.exceptions.NotFound` exception by hand if the URL does not exist, which will bubble up to the ``dispatch_request`` function and be converted into a default 404 response. Step 7: Detail View ------------------- The link detail view is very similar, we just render a template again. In addition to looking up the target, we also ask redis for the number of times the link was clicked and let it default to zero if such a key does not yet exist:: def on_short_link_details(self, request, short_id): link_target = self.redis.get('url-target:' + short_id) if link_target is None: raise NotFound() click_count = int(self.redis.get('click-count:' + short_id) or 0) return self.render_template('short_link_details.html', link_target=link_target, short_id=short_id, click_count=click_count ) Please be aware that redis always works with strings, so you have to convert the click count to :class:`int` by hand. Step 8: Templates ----------------- And here are all the templates. Just drop them into the `templates` folder. Jinja2 supports template inheritance, so the first thing we will do is create a layout template with blocks that act as placeholders. We also set up Jinja2 so that it automatically escapes strings with HTML rules, so we don't have to spend time on that ourselves. This prevents XSS attacks and rendering errors. *layout.html*: .. sourcecode:: html+jinja {% block title %}{% endblock %} | shortly

shortly

Shortly is a URL shortener written with Werkzeug {% block body %}{% endblock %}

*new_url.html*: .. sourcecode:: html+jinja {% extends "layout.html" %} {% block title %}Create New Short URL{% endblock %} {% block body %}

Submit URL

{% if error %}

Error: {{ error }} {% endif %}

URL:

{% endblock %} *short_link_details.html*: .. sourcecode:: html+jinja {% extends "layout.html" %} {% block title %}Details about /{{ short_id }}{% endblock %} {% block body %}

/{{ short_id }}

Full link
Click count:
{{ click_count }}
{% endblock %} Step 9: The Style ----------------- For this to look better than ugly black and white, here a simple stylesheet that goes along: *static/style.css*: .. sourcecode:: css body { background: #E8EFF0; margin: 0; padding: 0; } body, input { font-family: 'Helvetica Neue', Arial, sans-serif; font-weight: 300; font-size: 18px; } .box { width: 500px; margin: 60px auto; padding: 20px; background: white; box-shadow: 0 1px 4px #BED1D4; border-radius: 2px; } a { color: #11557C; } h1, h2 { margin: 0; color: #11557C; } h1 a { text-decoration: none; } h2 { font-weight: normal; font-size: 24px; } .tagline { color: #888; font-style: italic; margin: 0 0 20px 0; } .link div { overflow: auto; font-size: 0.8em; white-space: pre; padding: 4px 10px; margin: 5px 0; background: #E5EAF1; } dt { font-weight: normal; } .error { background: #E8EFF0; padding: 3px 8px; color: #11557C; font-size: 0.9em; border-radius: 2px; } .urlinput { width: 300px; } Bonus: Refinements ------------------ Look at the implementation in the example dictionary in the Werkzeug repository to see a version of this tutorial with some small refinements such as a custom 404 page. - `shortly in the example folder `_ werkzeug-0.14.1/docs/unicode.rst000066400000000000000000000150211322225165500165520ustar00rootroot00000000000000.. _unicode: ======= Unicode ======= .. module:: werkzeug Since early Python 2 days unicode was part of all default Python builds. It allows developers to write applications that deal with non-ASCII characters in a straightforward way. But working with unicode requires a basic knowledge about that matter, especially when working with libraries that do not support it. Werkzeug uses unicode internally everywhere text data is assumed, even if the HTTP standard is not unicode aware as it. Basically all incoming data is decoded from the charset specified (per default `utf-8`) so that you don't operate on bytestrings any more. Outgoing unicode data is then encoded into the target charset again. Unicode in Python ================= In Python 2 there are two basic string types: `str` and `unicode`. `str` may carry encoded unicode data but it's always represented in bytes whereas the `unicode` type does not contain bytes but charpoints. What does this mean? Imagine you have the German Umlaut `ö`. In ASCII you cannot represent that character, but in the `latin-1` and `utf-8` character sets you can represent it, but they look differently when encoded: >>> u'ö'.encode('latin1') '\xf6' >>> u'ö'.encode('utf-8') '\xc3\xb6' So an `ö` might look totally different depending on the encoding which makes it hard to work with it. The solution is using the `unicode` type (as we did above, note the `u` prefix before the string). The unicode type does not store the bytes for `ö` but the information, that this is a ``LATIN SMALL LETTER O WITH DIAERESIS``. Doing ``len(u'ö')`` will always give us the expected "1" but ``len('ö')`` might give different results depending on the encoding of ``'ö'``. Unicode in HTTP =============== The problem with unicode is that HTTP does not know what unicode is. HTTP is limited to bytes but this is not a big problem as Werkzeug decodes and encodes for us automatically all incoming and outgoing data. Basically what this means is that data sent from the browser to the web application is per default decoded from an utf-8 bytestring into a `unicode` string. Data sent from the application back to the browser that is not yet a bytestring is then encoded back to utf-8. 
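A minimal sketch of that round trip, using the request and response wrappers together with the ``create_environ`` test helper rather than a full application::

    from werkzeug.test import create_environ
    from werkzeug.wrappers import Request, Response

    # Outgoing: a unicode body is encoded to the response charset (utf-8).
    response = Response(u'Hello W\xf6rld!')
    assert response.get_data() == u'Hello W\xf6rld!'.encode('utf-8')

    # Incoming: percent-encoded query string bytes are decoded to unicode.
    request = Request(create_environ('/', query_string='name=W%C3%B6rld'))
    assert request.args['name'] == u'W\xf6rld'
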
Usually this "just works" and we don't have to worry about it, but there
are situations where this behavior is problematic.  For example the Python 2
IO layer is not unicode aware.  This means that whenever you work with data
from the file system you have to properly decode it.  The correct way to
load a text file from the file system looks like this::

    f = file('/path/to/the_file.txt', 'r')
    try:
        text = f.read().decode('utf-8')    # assuming the file is utf-8 encoded
    finally:
        f.close()

There is also the codecs module which provides an open function that
decodes automatically from the given encoding.

Error Handling
==============

With Werkzeug 0.3 onwards you can further control the way Werkzeug works
with unicode.  In the past Werkzeug ignored encoding errors silently on
incoming data.  This decision was made to avoid internal server errors if
the user tampered with the submitted data.  However there are situations
where you want to abort with a `400 BAD REQUEST` instead of silently
ignoring the error.

All the functions that do internal decoding now accept an `errors` keyword
argument that behaves like the `errors` parameter of the builtin string
method `decode`.  The following values are possible:

`ignore`
    This is the default behavior and tells the codec to ignore characters
    that it doesn't understand silently.

`replace`
    The codec will replace unknown characters with a replacement character
    (`U+FFFD` ``REPLACEMENT CHARACTER``)

`strict`
    Raise an exception if decoding fails.

Unlike the regular Python decoding Werkzeug does not raise an
:exc:`UnicodeDecodeError` if the decoding failed but an
:exc:`~exceptions.HTTPUnicodeError` which is a direct subclass of
`UnicodeError` and the `BadRequest` HTTP exception.  The reason is that if
this exception is not caught by the application but a catch-all for HTTP
exceptions exists, a default `400 BAD REQUEST` error page is displayed.

There is additional error handling available which is a Werkzeug extension
to the regular codec error handling, called `fallback`.  Often you want to
use utf-8 but support latin1 as legacy encoding too if decoding failed.
For this case you can use the `fallback` error handling.  For example you
can specify ``'fallback:iso-8859-15'`` to tell Werkzeug it should try with
`iso-8859-15` if `utf-8` failed.  If this decoding fails too (which should
not happen for most legacy charsets such as `iso-8859-15`) the error is
silently ignored as if the error handling was `ignore`.

Further details are available as part of the API documentation of the
concrete implementations of the functions or classes working with unicode.

Request and Response Objects
============================

As request and response objects usually are the central entities of Werkzeug
powered applications you can change the default encoding Werkzeug operates
on by subclassing these two classes.  For example you can easily set the
application to utf-7 and strict error handling::

    from werkzeug.wrappers import BaseRequest, BaseResponse

    class Request(BaseRequest):
        charset = 'utf-7'
        encoding_errors = 'strict'

    class Response(BaseResponse):
        charset = 'utf-7'

Keep in mind that the error handling is only customizable for all decoding
but not encoding.  If Werkzeug encounters an encoding error it will raise a
:exc:`UnicodeEncodeError`.  It's your responsibility to not create data that
is not present in the target charset (a non-issue with all unicode encodings
such as utf-8).

.. _filesystem-encoding:

The Filesystem
==============

..
versionchanged:: 0.11 Up until version 0.11, Werkzeug used Python's stdlib functionality to detect the filesystem encoding. However, several bug reports against Werkzeug have shown that the value of :py:func:`sys.getfilesystemencoding` cannot be trusted under traditional UNIX systems. The usual problems come from misconfigured systems, where ``LANG`` and similar environment variables are not set. In such cases, Python would default to ASCII as filesystem encoding, a very conservative default that is usually wrong and causes more problems than it avoids. Therefore Werkzeug will force the filesystem encoding to ``UTF-8`` and issue a warning whenever it detects that it is running under BSD or Linux, and :py:func:`sys.getfilesystemencoding` is returning an ASCII encoding. See also :py:mod:`werkzeug.filesystem`. werkzeug-0.14.1/docs/urls.rst000066400000000000000000000001201322225165500161030ustar00rootroot00000000000000=========== URL Helpers =========== .. automodule:: werkzeug.urls :members: werkzeug-0.14.1/docs/utils.rst000066400000000000000000000021451322225165500162670ustar00rootroot00000000000000========= Utilities ========= Various utility functions shipped with Werkzeug. HTML Helpers ============ .. module:: werkzeug.utils .. autoclass:: HTMLBuilder .. autofunction:: escape .. autofunction:: unescape General Helpers =============== .. autoclass:: cached_property :members: .. autoclass:: environ_property .. autoclass:: header_property .. autofunction:: parse_cookie .. autofunction:: dump_cookie .. autofunction:: redirect .. autofunction:: append_slash_redirect .. autofunction:: import_string .. autofunction:: find_modules .. autofunction:: validate_arguments .. autofunction:: secure_filename .. autofunction:: bind_arguments URL Helpers =========== Please refer to :doc:`urls`. UserAgent Parsing ================= .. module:: werkzeug.useragents .. autoclass:: UserAgent :members: Security Helpers ================ .. module:: werkzeug.security .. versionadded:: 0.6.1 .. autofunction:: generate_password_hash .. autofunction:: check_password_hash .. autofunction:: safe_str_cmp .. autofunction:: safe_join .. autofunction:: pbkdf2_hex .. autofunction:: pbkdf2_bin werkzeug-0.14.1/docs/werkzeugext.py000066400000000000000000000006361322225165500173360ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Werkzeug Sphinx Extensions ~~~~~~~~~~~~~~~~~~~~~~~~~~ Provides some more helpers for the werkzeug docs. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ from sphinx.ext.autodoc import cut_lines def setup(app): app.connect('autodoc-process-docstring', cut_lines(3, 3, what=['module'])) werkzeug-0.14.1/docs/werkzeugstyle.sty000066400000000000000000000061211322225165500200600ustar00rootroot00000000000000\definecolor{TitleColor}{rgb}{0,0,0} \definecolor{InnerLinkColor}{rgb}{0,0,0} \definecolor{OuterLinkColor}{rgb}{1.0,0.5,0.0} \renewcommand{\maketitle}{% \begin{titlepage}% \let\footnotesize\small \let\footnoterule\relax \ifsphinxpdfoutput \begingroup % This \def is required to deal with multi-line authors; it % changes \\ to ', ' (comma-space), making it pass muster for % generating document info in the PDF file. 
\def\\{, } \pdfinfo{ /Author (\@author) /Title (\@title) } \endgroup \fi \begin{flushright}% %\sphinxlogo% {\center \vspace*{3cm} \includegraphics{logo.pdf} \vspace{3cm} \par {\rm\Huge \@title \par}% {\em\LARGE \py@release\releaseinfo \par} {\large \@date \par \py@authoraddress \par }}% \end{flushright}%\par \@thanks \end{titlepage}% \cleardoublepage% \setcounter{footnote}{0}% \let\thanks\relax\let\maketitle\relax %\gdef\@thanks{}\gdef\@author{}\gdef\@title{} } \fancypagestyle{normal}{ \fancyhf{} \fancyfoot[LE,RO]{{\thepage}} \fancyfoot[LO]{{\nouppercase{\rightmark}}} \fancyfoot[RE]{{\nouppercase{\leftmark}}} \fancyhead[LE,RO]{{ \@title, \py@release}} \renewcommand{\headrulewidth}{0.4pt} \renewcommand{\footrulewidth}{0.4pt} } \fancypagestyle{plain}{ \fancyhf{} \fancyfoot[LE,RO]{{\thepage}} \renewcommand{\headrulewidth}{0pt} \renewcommand{\footrulewidth}{0.4pt} } \titleformat{\section}{\Large}% {\py@TitleColor\thesection}{0.5em}{\py@TitleColor}{\py@NormalColor} \titleformat{\subsection}{\large}% {\py@TitleColor\thesubsection}{0.5em}{\py@TitleColor}{\py@NormalColor} \titleformat{\subsubsection}{}% {\py@TitleColor\thesubsubsection}{0.5em}{\py@TitleColor}{\py@NormalColor} \titleformat{\paragraph}{\large}% {\py@TitleColor}{0em}{\py@TitleColor}{\py@NormalColor} \ChNameVar{\raggedleft\normalsize} \ChNumVar{\raggedleft \bfseries\Large} \ChTitleVar{\raggedleft \rm\Huge} \renewcommand\thepart{\@Roman\c@part} \renewcommand\part{% \pagestyle{plain} \if@noskipsec \leavevmode \fi \cleardoublepage \vspace*{6cm}% \@afterindentfalse \secdef\@part\@spart} \def\@part[#1]#2{% \ifnum \c@secnumdepth >\m@ne \refstepcounter{part}% \addcontentsline{toc}{part}{\thepart\hspace{1em}#1}% \else \addcontentsline{toc}{part}{#1}% \fi {\parindent \z@ %\center \interlinepenalty \@M \normalfont \ifnum \c@secnumdepth >\m@ne \rm\Large \partname~\thepart \par\nobreak \fi \MakeUppercase{\rm\Huge #2}% \markboth{}{}\par}% \nobreak \vskip 8ex \@afterheading} \def\@spart#1{% {\parindent \z@ %\center \interlinepenalty \@M \normalfont \huge \bfseries #1\par}% \nobreak \vskip 3ex \@afterheading} % use inconsolata font \usepackage{inconsolata} % fix single quotes, for inconsolata. (does not work) %%\usepackage{textcomp} %%\begingroup %% \catcode`'=\active %% \g@addto@macro\@noligs{\let'\textsinglequote} %% \endgroup %%\endinput werkzeug-0.14.1/docs/wrappers.rst000066400000000000000000000123551322225165500167760ustar00rootroot00000000000000.. _wrappers: ========================== Request / Response Objects ========================== .. module:: werkzeug.wrappers The request and response objects wrap the WSGI environment or the return value from a WSGI application so that it is another WSGI application (wraps a whole application). How they Work ============= Your WSGI application is always passed two arguments. The WSGI "environment" and the WSGI `start_response` function that is used to start the response phase. The :class:`Request` class wraps the `environ` for easier access to request variables (form data, request headers etc.). The :class:`Response` on the other hand is a standard WSGI application that you can create. The simple hello world in Werkzeug looks like this:: from werkzeug.wrappers import Response application = Response('Hello World!') To make it more useful you can replace it with a function and do some processing:: from werkzeug.wrappers import Request, Response def application(environ, start_response): request = Request(environ) response = Response("Hello %s!" 
% request.args.get('name', 'World!')) return response(environ, start_response) Because this is a very common task the :class:`~Request` object provides a helper for that. The above code can be rewritten like this:: from werkzeug.wrappers import Request, Response @Request.application def application(request): return Response("Hello %s!" % request.args.get('name', 'World!')) The `application` is still a valid WSGI application that accepts the environment and `start_response` callable. Mutability and Reusability of Wrappers ====================================== The implementation of the Werkzeug request and response objects are trying to guard you from common pitfalls by disallowing certain things as much as possible. This serves two purposes: high performance and avoiding of pitfalls. For the request object the following rules apply: 1. The request object is immutable. Modifications are not supported by default, you may however replace the immutable attributes with mutable attributes if you need to modify it. 2. The request object may be shared in the same thread, but is not thread safe itself. If you need to access it from multiple threads, use locks around calls. 3. It's not possible to pickle the request object. For the response object the following rules apply: 1. The response object is mutable 2. The response object can be pickled or copied after `freeze()` was called. 3. Since Werkzeug 0.6 it's safe to use the same response object for multiple WSGI responses. 4. It's possible to create copies using `copy.deepcopy`. Base Wrappers ============= These objects implement a common set of operations. They are missing fancy addon functionality like user agent parsing or etag handling. These features are available by mixing in various mixin classes or using :class:`Request` and :class:`Response`. .. autoclass:: BaseRequest :members: .. attribute:: environ The WSGI environment that the request object uses for data retrival. .. attribute:: shallow `True` if this request object is shallow (does not modify :attr:`environ`), `False` otherwise. .. automethod:: _get_file_stream .. autoclass:: BaseResponse :members: .. attribute:: response The application iterator. If constructed from a string this will be a list, otherwise the object provided as application iterator. (The first argument passed to :class:`BaseResponse`) .. attribute:: headers A :class:`Headers` object representing the response headers. .. attribute:: status_code The response status as integer. .. attribute:: direct_passthrough If ``direct_passthrough=True`` was passed to the response object or if this attribute was set to `True` before using the response object as WSGI application, the wrapped iterator is returned unchanged. This makes it possible to pass a special `wsgi.file_wrapper` to the response object. See :func:`wrap_file` for more details. .. automethod:: __call__ .. automethod:: _ensure_sequence Mixin Classes ============= Werkzeug also provides helper mixins for various HTTP related functionality such as etags, cache control, user agents etc. When subclassing you can mix those classes in to extend the functionality of the :class:`BaseRequest` or :class:`BaseResponse` object. Here a small example for a request object that parses accept headers:: from werkzeug.wrappers import AcceptMixin, BaseRequest class Request(BaseRequest, AcceptMixin): pass The :class:`Request` and :class:`Response` classes subclass the :class:`BaseRequest` and :class:`BaseResponse` classes and implement all the mixins Werkzeug provides: .. 
autoclass:: Request .. autoclass:: Response .. autoclass:: AcceptMixin :members: .. autoclass:: AuthorizationMixin :members: .. autoclass:: ETagRequestMixin :members: .. autoclass:: ETagResponseMixin :members: .. autoclass:: ResponseStreamMixin :members: .. autoclass:: CommonRequestDescriptorsMixin :members: .. autoclass:: CommonResponseDescriptorsMixin :members: .. autoclass:: WWWAuthenticateMixin :members: .. autoclass:: UserAgentMixin :members: werkzeug-0.14.1/docs/wsgi.rst000066400000000000000000000024431322225165500161010ustar00rootroot00000000000000============ WSGI Helpers ============ .. module:: werkzeug.wsgi The following classes and functions are designed to make working with the WSGI specification easier or operate on the WSGI layer. All the functionality from this module is available on the high-level :ref:`Request/Response classes `. Iterator / Stream Helpers ========================= These classes and functions simplify working with the WSGI application iterator and the input stream. .. autoclass:: ClosingIterator .. autoclass:: FileWrapper .. autoclass:: LimitedStream :members: .. autofunction:: make_line_iter .. autofunction:: make_chunk_iter .. autofunction:: wrap_file Environ Helpers =============== These functions operate on the WSGI environment. They extract useful information or perform common manipulations: .. autofunction:: get_host .. autofunction:: get_content_length .. autofunction:: get_input_stream .. autofunction:: get_current_url .. autofunction:: get_query_string .. autofunction:: get_script_name .. autofunction:: get_path_info .. autofunction:: pop_path_info .. autofunction:: peek_path_info .. autofunction:: extract_path_info .. autofunction:: host_is_trusted Convenience Helpers =================== .. autofunction:: responder .. autofunction:: werkzeug.testapp.test_app werkzeug-0.14.1/examples/000077500000000000000000000000001322225165500152615ustar00rootroot00000000000000werkzeug-0.14.1/examples/README000066400000000000000000000057661322225165500161570ustar00rootroot00000000000000================= Werkzeug Examples ================= This directory contains various example applications and example code of Werkzeug powered applications. Beside the proof of concept applications and code snippets in the partial folder they all have external depencencies for template engines or database adapters (SQLAlchemy only so far). Also, every application has click as external dependency, used to create the command line interface. Full Example Applications ========================= The following example applications are application types you would actually find in real life :-) `simplewiki` A simple Wiki implementation. Requirements: - SQLAlchemy - Creoleparser >= 0.7 - genshi You can obtain all packages in the Cheeseshop via easy_install. You have to have at least version 0.7 of Creoleparser. Usage:: ./manage-simplewiki.py initdb ./manage-simplewiki.py runserver Or of course you can just use the application object (`simplewiki.SimpleWiki`) and hook that into your favourite WSGI gateway. The constructor of the application object takes a single argument which is the SQLAlchemy URI for the database. The management script for the devserver looks up the an environment var called `SIMPLEWIKI_DATABASE_URI` and uses that for the database URI. If no such variable is provided "sqlite:////tmp/simplewiki.db" is assumed. `plnt` A planet called plnt, pronounce plant. Requirements: - SQLAlchemy - Jinja - feedparser You can obtain all packages in the Cheeseshop via easy_install. 
Usage:: ./manage-plnt.py initdb ./manage-plnt.py sync ./manage-plnt.py runserver The WSGI application is called `plnt.Plnt` which, like the simple wiki, accepts a database URI as first argument. The environment variable for the database key is called `PLNT_DATABASE_URI` and the default is "sqlite:////tmp/plnt.db". Per default a few python related blogs are added to the database, you can add more in a python shell by playing with the `Blog` model. `shorty` A tinyurl clone for the Werkzeug tutorial. Requirements: - SQLAlchemy - Jinja2 You can obtain all packages in the Cheeseshop via easy_install. Usage:: ./manage-shorty.py initdb ./manage-shorty.py runserver The WSGI application is called `shorty.application.Shorty` which, like the simple wiki, accepts a database URI as first argument. The source code of the application is explained in detail in the Werkzeug tutorial. `couchy` Like shorty, but implemented using CouchDB. Requirements : - werkzeug : http://werkzeug.pocoo.org - jinja : http://jinja.pocoo.org - couchdb 0.72 & above : http://www.couchdb.org `cupoftee` A `Teeworlds `_ server browser. This application works best in a non forking environment and won't work for CGI. Usage:: ./manage-cupoftee.py runserver werkzeug-0.14.1/examples/contrib/000077500000000000000000000000001322225165500167215ustar00rootroot00000000000000werkzeug-0.14.1/examples/contrib/README000066400000000000000000000000771322225165500176050ustar00rootroot00000000000000This folder includes example applications for werkzeug.contrib werkzeug-0.14.1/examples/contrib/securecookie.py000066400000000000000000000024561322225165500217620ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Secure Cookie Example ~~~~~~~~~~~~~~~~~~~~~ Stores session on the client. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. 
""" from time import asctime from werkzeug.serving import run_simple from werkzeug.wrappers import BaseRequest, BaseResponse from werkzeug.contrib.securecookie import SecureCookie SECRET_KEY = 'V\x8a$m\xda\xe9\xc3\x0f|f\x88\xbccj>\x8bI^3+' class Request(BaseRequest): def __init__(self, environ): BaseRequest.__init__(self, environ) self.session = SecureCookie.load_cookie(self, secret_key=SECRET_KEY) def index(request): return 'Set the Time or Get the time' def get_time(request): return 'Time: %s' % request.session.get('time', 'not set') def set_time(request): request.session['time'] = time = asctime() return 'Time set to %s' % time def application(environ, start_response): request = Request(environ) response = BaseResponse({ 'get': get_time, 'set': set_time }.get(request.path.strip('/'), index)(request), mimetype='text/html') request.session.save_cookie(response) return response(environ, start_response) if __name__ == '__main__': run_simple('localhost', 5000, application) werkzeug-0.14.1/examples/contrib/sessions.py000066400000000000000000000023131322225165500211400ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- from werkzeug.serving import run_simple from werkzeug.contrib.sessions import SessionStore, SessionMiddleware class MemorySessionStore(SessionStore): def __init__(self, session_class=None): SessionStore.__init__(self, session_class=None) self.sessions = {} def save(self, session): self.sessions[session.sid] = session def delete(self, session): self.sessions.pop(session.id, None) def get(self, sid): if not self.is_valid_key(sid) or sid not in self.sessions: return self.new() return self.session_class(self.sessions[sid], sid, False) def application(environ, start_response): session = environ['werkzeug.session'] session['visit_count'] = session.get('visit_count', 0) + 1 start_response('200 OK', [('Content-Type', 'text/html')]) return [''' Session Example

Session Example

You visited this page %d times.

''' % session['visit_count']] def make_app(): return SessionMiddleware(application, MemorySessionStore()) if __name__ == '__main__': run_simple('localhost', 5000, make_app()) werkzeug-0.14.1/examples/cookieauth.py000066400000000000000000000057421322225165500177760ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """ Cookie Based Auth ~~~~~~~~~~~~~~~~~ This is a very simple application that uses a secure cookie to do the user authentification. :copyright: Copyright 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ from werkzeug.serving import run_simple from werkzeug.utils import cached_property, escape, redirect from werkzeug.wrappers import Request, Response from werkzeug.contrib.securecookie import SecureCookie # don't use this key but a different one; you could just use # os.unrandom(20) to get something random. Changing this key # invalidates all sessions at once. SECRET_KEY = '\xfa\xdd\xb8z\xae\xe0}4\x8b\xea' # the cookie name for the session COOKIE_NAME = 'session' # the users that may access USERS = { 'admin': 'default', 'user1': 'default' } class AppRequest(Request): """A request with a secure cookie session.""" def logout(self): """Log the user out.""" self.session.pop('username', None) def login(self, username): """Log the user in.""" self.session['username'] = username @property def logged_in(self): """Is the user logged in?""" return self.user is not None @property def user(self): """The user that is logged in.""" return self.session.get('username') @cached_property def session(self): data = self.cookies.get(COOKIE_NAME) if not data: return SecureCookie(secret_key=SECRET_KEY) return SecureCookie.unserialize(data, SECRET_KEY) def login_form(request): error = '' if request.method == 'POST': username = request.form.get('username') password = request.form.get('password') if password and USERS.get(username) == password: request.login(username) return redirect('') error = '

Invalid credentials' return Response(''' Login

Login

Not logged in. %s

''' % error, mimetype='text/html') def index(request): return Response(''' Logged in

Logged in

Logged in as %s

Logout ''' % escape(request.user), mimetype='text/html') @AppRequest.application def application(request): if request.args.get('do') == 'logout': request.logout() response = redirect('.') elif request.logged_in: response = index(request) else: response = login_form(request) request.session.save_cookie(response) return response if __name__ == '__main__': run_simple('localhost', 4000, application) werkzeug-0.14.1/examples/coolmagic/000077500000000000000000000000001322225165500172165ustar00rootroot00000000000000werkzeug-0.14.1/examples/coolmagic/__init__.py000066400000000000000000000004121322225165500213240ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ coolmagic ~~~~~~~~~ Package description goes here. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ from coolmagic.application import make_app werkzeug-0.14.1/examples/coolmagic/application.py000066400000000000000000000050531322225165500220760ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ coolmagic.application ~~~~~~~~~~~~~~~~~~~~~ This module provides the WSGI application. The WSGI middlewares are applied in the `make_app` factory function that automatically wraps the application within the require middlewares. Per default only the `SharedDataMiddleware` is applied. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ from os import path, listdir from coolmagic.utils import Request, local_manager, redirect from werkzeug.routing import Map, Rule, RequestRedirect from werkzeug.exceptions import HTTPException, NotFound class CoolMagicApplication(object): """ The application class. It's passed a directory with configuration values. """ def __init__(self, config): self.config = config for fn in listdir(path.join(path.dirname(__file__), 'views')): if fn.endswith('.py') and fn != '__init__.py': __import__('coolmagic.views.' + fn[:-3]) from coolmagic.utils import exported_views rules = [ # url for shared data. this will always be unmatched # because either the middleware or the webserver # handles that request first. Rule('/public/', endpoint='shared_data') ] self.views = {} for endpoint, (func, rule, extra) in exported_views.iteritems(): if rule is not None: rules.append(Rule(rule, endpoint=endpoint, **extra)) self.views[endpoint] = func self.url_map = Map(rules) def __call__(self, environ, start_response): urls = self.url_map.bind_to_environ(environ) req = Request(environ, urls) try: endpoint, args = urls.match(req.path) resp = self.views[endpoint](**args) except NotFound, e: resp = self.views['static.not_found']() except (HTTPException, RequestRedirect), e: resp = e return resp(environ, start_response) def make_app(config=None): """ Factory function that creates a new `CoolmagicApplication` object. Optional WSGI middlewares should be applied here. """ config = config or {} app = CoolMagicApplication(config) # static stuff from werkzeug.wsgi import SharedDataMiddleware app = SharedDataMiddleware(app, { '/public': path.join(path.dirname(__file__), 'public') }) # clean up locals app = local_manager.make_middleware(app) return app werkzeug-0.14.1/examples/coolmagic/helpers.py000066400000000000000000000007331322225165500212350ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ coolmagic.helpers ~~~~~~~~~~~~~~~~~ The star-import module for all views. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
""" from coolmagic.utils import Response, TemplateResponse, ThreadedRequest, \ export, url_for, redirect from werkzeug.utils import escape #: a thread local proxy request object request = ThreadedRequest() del ThreadedRequest werkzeug-0.14.1/examples/coolmagic/public/000077500000000000000000000000001322225165500204745ustar00rootroot00000000000000werkzeug-0.14.1/examples/coolmagic/public/style.css000066400000000000000000000001711322225165500223450ustar00rootroot00000000000000body { margin: 0; padding: 20px; font-family: sans-serif; font-size: 15px; } h1, a { color: #a00; } werkzeug-0.14.1/examples/coolmagic/templates/000077500000000000000000000000001322225165500212145ustar00rootroot00000000000000werkzeug-0.14.1/examples/coolmagic/templates/layout.html000066400000000000000000000006301322225165500234160ustar00rootroot00000000000000 {{ page_title }} — Cool Magic!

Cool Magic

{{ page_title }}

{% block page_body %}{% endblock %} werkzeug-0.14.1/examples/coolmagic/templates/static/000077500000000000000000000000001322225165500225035ustar00rootroot00000000000000werkzeug-0.14.1/examples/coolmagic/templates/static/about.html000066400000000000000000000003551322225165500245060ustar00rootroot00000000000000{% extends "layout.html" %} {% set page_title = 'About the Magic' %} {% block page_body %}

Nothing to see. It's just magic.

back to the index

{% endblock %} werkzeug-0.14.1/examples/coolmagic/templates/static/index.html000066400000000000000000000006541322225165500245050ustar00rootroot00000000000000{% extends "layout.html" %} {% set page_title = 'Welcome to the Magic' %} {% block page_body %}

Welcome to the magic! This is a bigger example for the Werkzeug toolkit. And it contains a lot of magic.

about the implementation or click here if you want to see a broken view.

{% endblock %} werkzeug-0.14.1/examples/coolmagic/templates/static/not_found.html000066400000000000000000000004021322225165500253600ustar00rootroot00000000000000{% extends "layout.html" %} {% set page_title = 'Missing Magic' %} {% block page_body %}

The requested magic really does not exist. Maybe you want to look for it on the index.

{% endblock %} werkzeug-0.14.1/examples/coolmagic/utils.py000066400000000000000000000056701322225165500207400ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ coolmagic.utils ~~~~~~~~~~~~~~~ This module contains the subclasses of the base request and response objects provided by werkzeug. The subclasses know about their charset and implement some additional functionallity like the ability to link to view functions. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ from os.path import dirname, join from jinja import Environment, FileSystemLoader from werkzeug.local import Local, LocalManager from werkzeug.utils import redirect from werkzeug.wrappers import BaseRequest, BaseResponse local = Local() local_manager = LocalManager([local]) template_env = Environment( loader=FileSystemLoader(join(dirname(__file__), 'templates'), use_memcache=False) ) exported_views = {} def export(string, template=None, **extra): """ Decorator for registering view functions and adding templates to it. """ def wrapped(f): endpoint = (f.__module__ + '.' + f.__name__)[16:] if template is not None: old_f = f def f(**kwargs): rv = old_f(**kwargs) if not isinstance(rv, Response): rv = TemplateResponse(template, **(rv or {})) return rv f.__name__ = old_f.__name__ f.__doc__ = old_f.__doc__ exported_views[endpoint] = (f, string, extra) return f return wrapped def url_for(endpoint, **values): """ Build a URL """ return local.request.url_adapter.build(endpoint, values) class Request(BaseRequest): """ The concrete request object used in the WSGI application. It has some helper functions that can be used to build URLs. """ charset = 'utf-8' def __init__(self, environ, url_adapter): BaseRequest.__init__(self, environ) self.url_adapter = url_adapter local.request = self class ThreadedRequest(object): """ A pseudo request object that always poins to the current context active request. """ def __getattr__(self, name): if name == '__members__': return [x for x in dir(local.request) if not x.startswith('_')] return getattr(local.request, name) def __setattr__(self, name, value): return setattr(local.request, name, value) class Response(BaseResponse): """ The concrete response object for the WSGI application. """ charset = 'utf-8' default_mimetype = 'text/html' class TemplateResponse(Response): """ Render a template to a response. """ def __init__(self, template_name, **values): from coolmagic import helpers values.update( request=local.request, h=helpers ) template = template_env.get_template(template_name) Response.__init__(self, template.render(values)) werkzeug-0.14.1/examples/coolmagic/views/000077500000000000000000000000001322225165500203535ustar00rootroot00000000000000werkzeug-0.14.1/examples/coolmagic/views/__init__.py000066400000000000000000000003711322225165500224650ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ coolmagic.views ~~~~~~~~~~~~~~~ This module collects and assambles the urls. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ werkzeug-0.14.1/examples/coolmagic/views/static.py000066400000000000000000000013241322225165500222140ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ coolmagic.views.static ~~~~~~~~~~~~~~~~~~~~~~ Some static views. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
""" from coolmagic.helpers import * @export('/', template='static/index.html') def index(): pass @export('/about', template='static/about.html') def about(): pass @export('/broken') def broken(): foo = request.args.get('foo', 42) raise RuntimeError('that\'s really broken') @export(None, template='static/not_found.html') def not_found(): """ This function is always executed if an url does not match or a `NotFound` exception is raised. """ pass werkzeug-0.14.1/examples/couchy/000077500000000000000000000000001322225165500165535ustar00rootroot00000000000000werkzeug-0.14.1/examples/couchy/README000066400000000000000000000003361322225165500174350ustar00rootroot00000000000000couchy README Requirements : - werkzeug : http://werkzeug.pocoo.org - jinja : http://jinja.pocoo.org - couchdb 0.72 & above : http://www.couchdb.org - couchdb-python 0.3 & above : http://code.google.com/p/couchdb-python werkzeug-0.14.1/examples/couchy/__init__.py000066400000000000000000000000001322225165500206520ustar00rootroot00000000000000werkzeug-0.14.1/examples/couchy/application.py000066400000000000000000000026441322225165500214360ustar00rootroot00000000000000from couchdb.client import Server from couchy.utils import STATIC_PATH, local, local_manager, \ url_map from werkzeug.wrappers import Request from werkzeug.wsgi import ClosingIterator, SharedDataMiddleware from werkzeug.exceptions import HTTPException, NotFound from couchy import views from couchy.models import URL import couchy.models class Couchy(object): def __init__(self, db_uri): local.application = self server = Server(db_uri) try: db = server.create('urls') except: db = server['urls'] self.dispatch = SharedDataMiddleware(self.dispatch, { '/static': STATIC_PATH }) URL.db = db def dispatch(self, environ, start_response): local.application = self request = Request(environ) local.url_adapter = adapter = url_map.bind_to_environ(environ) try: endpoint, values = adapter.match() handler = getattr(views, endpoint) response = handler(request, **values) except NotFound, e: response = views.not_found(request) response.status_code = 404 except HTTPException, e: response = e return ClosingIterator(response(environ, start_response), [local_manager.cleanup]) def __call__(self, environ, start_response): return self.dispatch(environ, start_response) werkzeug-0.14.1/examples/couchy/models.py000066400000000000000000000023551322225165500204150ustar00rootroot00000000000000from datetime import datetime from couchdb.mapping import Document, TextField, BooleanField, DateTimeField from couchy.utils import url_for, get_random_uid class URL(Document): target = TextField() public = BooleanField() added = DateTimeField(default=datetime.utcnow()) shorty_id = TextField(default=None) db = None @classmethod def load(self, id): return super(URL, self).load(URL.db, id) @classmethod def query(self, code): return URL.db.query(code) def store(self): if getattr(self._data, 'id', None) is None: new_id = self.shorty_id if self.shorty_id else None while 1: id = new_id if new_id else get_random_uid() docid = None try: docid = URL.db.resource.put(content=self._data, path='/%s/' % str(id))['id'] except: continue if docid: break self._data = URL.db.get(docid) else: super(URL, self).store(URL.db) return self @property def short_url(self): return url_for('link', uid=self.id, _external=True) def __repr__(self): return '' % self.id 
werkzeug-0.14.1/examples/couchy/static/000077500000000000000000000000001322225165500200425ustar00rootroot00000000000000werkzeug-0.14.1/examples/couchy/static/style.css000066400000000000000000000030611322225165500217140ustar00rootroot00000000000000body { background-color: #333; font-family: 'Lucida Sans', 'Verdana', sans-serif; font-size: 16px; margin: 3em 0 3em 0; padding: 0; text-align: center; } a { color: #0C4850; } a:hover { color: #1C818F; } h1 { width: 500px; background-color: #24C0CE; text-align: center; font-size: 3em; margin: 0 auto 0 auto; padding: 0; } h1 a { display: block; padding: 0.3em; color: #fff; text-decoration: none; } h1 a:hover { color: #ADEEF7; background-color: #0E8A96; } div.footer { margin: 0 auto 0 auto; font-size: 13px; text-align: right; padding: 10px; width: 480px; background-color: #004C63; color: white; } div.footer a { color: #A0E9FF; } div.body { margin: 0 auto 0 auto; padding: 20px; width: 460px; background-color: #98CE24; color: black; } div.body h2 { margin: 0 0 0.5em 0; text-align: center; } div.body input { margin: 0.2em 0 0.2em 0; font-family: 'Lucida Sans', 'Verdana', sans-serif; font-size: 20px; background-color: #CCEB98; color: black; } div.body #url { width: 400px; } div.body #alias { width: 300px; margin-right: 10px; } div.body #submit { width: 90px; } div.body p { margin: 0; padding: 0.2em 0 0.2em 0; } div.body ul { margin: 1em 0 1em 0; padding: 0; list-style: none; } div.error { margin: 1em 0 1em 0; border: 2px solid #AC0202; background-color: #9E0303; font-weight: bold; color: white; } div.pagination { font-size: 13px; } werkzeug-0.14.1/examples/couchy/templates/000077500000000000000000000000001322225165500205515ustar00rootroot00000000000000werkzeug-0.14.1/examples/couchy/templates/display.html000066400000000000000000000003011322225165500230760ustar00rootroot00000000000000{% extends 'layout.html' %} {% block body %}

Shortened URL

The URL {{ url.target|urlize(40, true) }} was shortened to {{ url.short_url|urlize }}.

{% endblock %} werkzeug-0.14.1/examples/couchy/templates/layout.html000066400000000000000000000010141322225165500227500ustar00rootroot00000000000000 Shorty

Shorty

{% block body %}{% endblock %}
werkzeug-0.14.1/examples/couchy/templates/list.html000066400000000000000000000013241322225165500224120ustar00rootroot00000000000000{% extends 'layout.html' %} {% block body %}

List of URLs

    {%- for url in pagination.entries %}
  • {{ url.id|e }} » {{ url.target|urlize(38, true) }}
  • {%- else %}
  • no URLs shortened yet
  • {%- endfor %}
{% endblock %} werkzeug-0.14.1/examples/couchy/templates/new.html000066400000000000000000000011671322225165500222350ustar00rootroot00000000000000{% extends 'layout.html' %} {% block body %}

Create a Shorty-URL!

{% if error %}
{{ error }}
{% endif -%}

Enter the URL you want to shorten

Optionally you can give the URL a memorable name

{# #}

{% endblock %} werkzeug-0.14.1/examples/couchy/templates/not_found.html000066400000000000000000000003471322225165500234360ustar00rootroot00000000000000{% extends 'layout.html' %} {% block body %}

Page Not Found

The page you have requested does not exist on this server. What about adding a new URL?

{% endblock %} werkzeug-0.14.1/examples/couchy/utils.py000066400000000000000000000041751322225165500202740ustar00rootroot00000000000000from os import path from urlparse import urlparse from random import sample, randrange from jinja import Environment, FileSystemLoader from werkzeug.local import Local, LocalManager from werkzeug.utils import cached_property from werkzeug.wrappers import Response from werkzeug.routing import Map, Rule TEMPLATE_PATH = path.join(path.dirname(__file__), 'templates') STATIC_PATH = path.join(path.dirname(__file__), 'static') ALLOWED_SCHEMES = frozenset(['http', 'https', 'ftp', 'ftps']) URL_CHARS = 'abcdefghijkmpqrstuvwxyzABCDEFGHIJKLMNPQRST23456789' local = Local() local_manager = LocalManager([local]) application = local('application') url_map = Map([Rule('/static/', endpoint='static', build_only=True)]) jinja_env = Environment(loader=FileSystemLoader(TEMPLATE_PATH)) def expose(rule, **kw): def decorate(f): kw['endpoint'] = f.__name__ url_map.add(Rule(rule, **kw)) return f return decorate def url_for(endpoint, _external=False, **values): return local.url_adapter.build(endpoint, values, force_external=_external) jinja_env.globals['url_for'] = url_for def render_template(template, **context): return Response(jinja_env.get_template(template).render(**context), mimetype='text/html') def validate_url(url): return urlparse(url)[0] in ALLOWED_SCHEMES def get_random_uid(): return ''.join(sample(URL_CHARS, randrange(3, 9))) class Pagination(object): def __init__(self, results, per_page, page, endpoint): self.results = results self.per_page = per_page self.page = page self.endpoint = endpoint @cached_property def count(self): return len(self.results) @cached_property def entries(self): return self.results[((self.page - 1) * self.per_page):(((self.page - 1) * self.per_page)+self.per_page)] has_previous = property(lambda x: x.page > 1) has_next = property(lambda x: x.page < x.pages) previous = property(lambda x: url_for(x.endpoint, page=x.page - 1)) next = property(lambda x: url_for(x.endpoint, page=x.page + 1)) pages = property(lambda x: max(0, x.count - 1) // x.per_page + 1) werkzeug-0.14.1/examples/couchy/views.py000066400000000000000000000037171322225165500202720ustar00rootroot00000000000000from werkzeug.utils import redirect from werkzeug.exceptions import NotFound from couchy.utils import render_template, expose, \ validate_url, url_for, Pagination from couchy.models import URL @expose('/') def new(request): error = url = '' if request.method == 'POST': url = request.form.get('url') alias = request.form.get('alias') if not validate_url(url): error = "I'm sorry but you cannot shorten this URL." 
elif alias: if len(alias) > 140: error = 'Your alias is too long' elif '/' in alias: error = 'Your alias might not include a slash' elif URL.load(alias): error = 'The alias you have requested exists already' if not error: url = URL(target=url, public='private' not in request.form, shorty_id=alias if alias else None) url.store() uid = url.id return redirect(url_for('display', uid=uid)) return render_template('new.html', error=error, url=url) @expose('/display/') def display(request, uid): url = URL.load(uid) if not url: raise NotFound() return render_template('display.html', url=url) @expose('/u/') def link(request, uid): url = URL.load(uid) if not url: raise NotFound() return redirect(url.target, 301) @expose('/list/', defaults={'page': 1}) @expose('/list/') def list(request, page): def wrap(doc): data = doc.value data['_id'] = doc.id return URL.wrap(data) code = '''function(doc) { if (doc.public){ map([doc._id], doc); }}''' docResults = URL.query(code) results = [wrap(doc) for doc in docResults] pagination = Pagination(results, 1, page, 'list') if pagination.page > 1 and not pagination.entries: raise NotFound() return render_template('list.html', pagination=pagination) def not_found(request): return render_template('not_found.html') werkzeug-0.14.1/examples/cupoftee/000077500000000000000000000000001322225165500170735ustar00rootroot00000000000000werkzeug-0.14.1/examples/cupoftee/__init__.py000066400000000000000000000004231322225165500212030ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ cupoftee ~~~~~~~~ Werkzeug powered Teeworlds Server Browser. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ from cupoftee.application import make_app werkzeug-0.14.1/examples/cupoftee/application.py000066400000000000000000000065761322225165500217660ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ cupoftee.application ~~~~~~~~~~~~~~~~~~~~ The WSGI appliction for the cup of tee browser. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
""" import time from os import path from threading import Thread from cupoftee.db import Database from cupoftee.network import ServerBrowser from werkzeug.templates import Template from werkzeug.wrappers import Request, Response from werkzeug.wsgi import SharedDataMiddleware from werkzeug.exceptions import HTTPException, NotFound from werkzeug.routing import Map, Rule templates = path.join(path.dirname(__file__), 'templates') pages = {} url_map = Map([Rule('/shared/', endpoint='shared')]) def make_app(database, interval=60): return SharedDataMiddleware(Cup(database), { '/shared': path.join(path.dirname(__file__), 'shared') }) class PageMeta(type): def __init__(cls, name, bases, d): type.__init__(cls, name, bases, d) if d.get('url_rule') is not None: pages[cls.identifier] = cls url_map.add(Rule(cls.url_rule, endpoint=cls.identifier, **cls.url_arguments)) identifier = property(lambda x: x.__name__.lower()) class Page(object): __metaclass__ = PageMeta url_arguments = {} def __init__(self, cup, request, url_adapter): self.cup = cup self.request = request self.url_adapter = url_adapter def url_for(self, endpoint, **values): return self.url_adapter.build(endpoint, values) def process(self): pass def render_template(self, template=None): if template is None: template = self.__class__.identifier + '.html' context = dict(self.__dict__) context.update(url_for=self.url_for, self=self) body_tmpl = Template.from_file(path.join(templates, template)) layout_tmpl = Template.from_file(path.join(templates, 'layout.html')) context['body'] = body_tmpl.render(context) return layout_tmpl.render(context) def get_response(self): return Response(self.render_template(), mimetype='text/html') class Cup(object): def __init__(self, database, interval=120): self.interval = interval self.db = Database(database) self.master = ServerBrowser(self) self.updater = Thread(None, self.update_master) self.updater.setDaemon(True) self.updater.start() def update_master(self): wait = self.interval while 1: if self.master.sync(): wait = self.interval else: wait = self.interval // 2 time.sleep(wait) def dispatch_request(self, request): url_adapter = url_map.bind_to_environ(request.environ) try: endpoint, values = url_adapter.match() page = pages[endpoint](self, request, url_adapter) response = page.process(**values) except NotFound, e: page = MissingPage(self, request, url_adapter) response = page.process() except HTTPException, e: return e return response or page.get_response() def __call__(self, environ, start_response): request = Request(environ) return self.dispatch_request(request)(environ, start_response) from cupoftee.pages import MissingPage werkzeug-0.14.1/examples/cupoftee/db.py000066400000000000000000000035541322225165500200410ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ cupoftee.db ~~~~~~~~~~~ A simple object database. As long as the server is not running in multiprocess mode that's good enough. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
""" from __future__ import with_statement import gdbm from threading import Lock from pickle import dumps, loads class Database(object): def __init__(self, filename): self.filename = filename self._fs = gdbm.open(filename, 'cf') self._local = {} self._lock = Lock() def __getitem__(self, key): with self._lock: return self._load_key(key) def _load_key(self, key): if key in self._local: return self._local[key] rv = loads(self._fs[key]) self._local[key] = rv return rv def __setitem__(self, key, value): self._local[key] = value def __delitem__(self, key, value): with self._lock: self._local.pop(key, None) if self._fs.has_key(key): del self._fs[key] def __del__(self): self.close() def __contains__(self, key): with self._lock: try: self._load_key(key) except KeyError: pass return key in self._local def setdefault(self, key, factory): with self._lock: try: rv = self._load_key(key) except KeyError: self._local[key] = rv = factory() return rv def sync(self): with self._lock: for key, value in self._local.iteritems(): self._fs[key] = dumps(value, 2) self._fs.sync() def close(self): try: self.sync() self._fs.close() except: pass werkzeug-0.14.1/examples/cupoftee/network.py000066400000000000000000000076451322225165500211520ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ cupyoftee.network ~~~~~~~~~~~~~~~~~ Query the servers for information. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import time import socket from math import log from datetime import datetime from cupoftee.utils import unicodecmp class ServerError(Exception): pass class Syncable(object): last_sync = None def sync(self): try: self._sync() except (socket.error, socket.timeout, IOError): return False self.last_sync = datetime.utcnow() return True class ServerBrowser(Syncable): def __init__(self, cup): self.cup = cup self.servers = cup.db.setdefault('servers', dict) def _sync(self): to_delete = set(self.servers) for x in xrange(1, 17): addr = ('master%d.teeworlds.com' % x, 8300) print addr try: self._sync_master(addr, to_delete) except (socket.error, socket.timeout, IOError), e: continue for server_id in to_delete: self.servers.pop(server_id, None) if not self.servers: raise IOError('no servers found') self.cup.db.sync() def _sync_master(self, addr, to_delete): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) s.settimeout(5) s.sendto('\x20\x00\x00\x00\x00\x48\xff\xff\xff\xffreqt', addr) data = s.recvfrom(1024)[0][14:] s.close() for n in xrange(0, len(data) / 6): addr = ('.'.join(map(str, map(ord, data[n * 6:n * 6 + 4]))), ord(data[n * 6 + 5]) * 256 + ord(data[n * 6 + 4])) server_id = '%s:%d' % addr if server_id in self.servers: if not self.servers[server_id].sync(): continue else: try: self.servers[server_id] = Server(addr, server_id) except ServerError: pass to_delete.discard(server_id) class Server(Syncable): def __init__(self, addr, server_id): self.addr = addr self.id = server_id self.players = [] if not self.sync(): raise ServerError('server not responding in time') def _sync(self): s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM) s.settimeout(1) s.sendto('\xff\xff\xff\xff\xff\xff\xff\xff\xff\xffgief', self.addr) bits = s.recvfrom(1024)[0][14:].split('\x00') s.close() self.version, server_name, map_name = bits[:3] self.name = server_name.decode('latin1') self.map = map_name.decode('latin1') self.gametype = bits[3] self.flags, self.progression, player_count, \ self.max_players = map(int, bits[4:8]) # sync the player stats players = 
dict((p.name, p) for p in self.players) for i in xrange(player_count): name = bits[8 + i * 2].decode('latin1') score = int(bits[9 + i * 2]) # update existing player if name in players: player = players.pop(name) player.score = score # add new player else: self.players.append(Player(self, name, score)) # delete players that left for player in players.itervalues(): try: self.players.remove(player) except: pass # sort the player list and count them self.players.sort(key=lambda x: -x.score) self.player_count = len(self.players) def __cmp__(self, other): return unicodecmp(self.name, other.name) class Player(object): def __init__(self, server, name, score): self.server = server self.name = name self.score = score self.size = round(100 + log(max(score, 1)) * 25, 2) werkzeug-0.14.1/examples/cupoftee/pages.py000066400000000000000000000045571322225165500205570ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ cupoftee.pages ~~~~~~~~~~~~~~ The pages. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ from werkzeug.utils import redirect from werkzeug.exceptions import NotFound from cupoftee.application import Page from cupoftee.utils import unicodecmp class ServerList(Page): url_rule = '/' def order_link(self, name, title): cls = '' link = '?order_by=' + name desc = False if name == self.order_by: desc = not self.order_desc cls = ' class="%s"' % (desc and 'down' or 'up') if desc: link += '&dir=desc' return '%s' % (link, cls, title) def process(self): self.order_by = self.request.args.get('order_by') or 'name' sort_func = { 'name': lambda x: x, 'map': lambda x: x.map, 'gametype': lambda x: x.gametype, 'players': lambda x: x.player_count, 'progression': lambda x: x.progression, }.get(self.order_by) if sort_func is None: return redirect(self.url_for('serverlist')) self.servers = self.cup.master.servers.values() self.servers.sort(key=sort_func) if self.request.args.get('dir') == 'desc': self.servers.reverse() self.order_desc = True else: self.order_desc = False self.players = reduce(lambda a, b: a + b.players, self.servers, []) self.players.sort(lambda a, b: unicodecmp(a.name, b.name)) class Server(Page): url_rule = '/server/' def process(self, id): try: self.server = self.cup.master.servers[id] except KeyError: raise NotFound() class Search(Page): url_rule = '/search' def process(self): self.user = self.request.args.get('user') if self.user: self.results = [] for server in self.cup.master.servers.itervalues(): for player in server.players: if player.name == self.user: self.results.append(server) class MissingPage(Page): def get_response(self): response = super(MissingPage, self).get_response() response.status_code = 404 return response werkzeug-0.14.1/examples/cupoftee/shared/000077500000000000000000000000001322225165500203415ustar00rootroot00000000000000werkzeug-0.14.1/examples/cupoftee/shared/content.png000066400000000000000000000003461322225165500225240ustar00rootroot00000000000000PNG  IHDR;GˇsRGBbKGDNE pHYs  tIME-;h7|tEXtCommentCreated with GIMPWAIDAT8c4t 0BO#Ƴc`=ao;9|!oH8|_wXL ݀||! 
[binary image data omitted: examples/cupoftee/shared/content.png, favicon.ico, header.png and logo.png]
0H9'۹$W7ޅKkL'[J@+5l,i;Cs'dm7{?Ó~6#s>#pxfҮe3@ 5kdjɭ՞OكđIr*9[ \503t¥B1PjSRtrA2g_آ-)nm8Oݸ?hlsQY&A!azֹ2:U cc6PJz΃]qQ^K0QUC3SML6fHĬS7܇R>sꦾX LAtżBRjRBtL *NseFhBQGgE\taSwCASDElO5R#v݊}Ju~!4B*N0J|ӛo4dUjll p!prtlVp ,$(AS(!> Ji#4J6BH#c3fLqSJ5Y.RʨAa Knu17˿ַO==Ex># %ģ3}3N渵ޑKA){}׿{{U0^RBZ9T)J[ 3BpA`pR2_|X#ǡQ3#:(3I?%)cL @(C6rNz^&Jcsrz^tQ oy O>1yWBsOQY}2tW(BBH^!kGO?WFϰ@) )PBϜFp~'NRJb1lë+@ IDAT"22sPn)2J|j&COOpE^SG#Og5c.sRk cJ.Vژk-2J f'br.!'d U*oƧɠ'TUa"‘Zeo g60JtPҕ9Nj\^n*+бs.DMeoz3 MzLTp32 H7|f,R1.:4XJ5IKM7N^fxZHŅT:ſn_cqD{"w`o\lo`fv;Bu]j`1KgH=nnKq[pr&KܴU`l]>0;?=ss\AQ\ gL { 1ɱy)Oc\"/pԵK;b 0{/ڮ~Q |hb7J^,<,s)9為^ALZkD(}2ըػ tң_CdR\ms!sUFƘrLjk:yKb,qqcVÃ1"^tsB y"' 'wc@a=6" 7=>3 gJwG8cOi>wlw| ohsgc#/WP}[W~xT5PN؄3B;(93ٗ@ [O} q^gc#5:%ZSnǍcr:T;tfN`Xwo&6y#2vtQ`f=S\:;?z,ӘPκ3Ӹ+- nUW1;cf2B\HvVo2F-df~?5F!C!nTVο}Q| 'H\ muA>4C\  >Kld]Go\ pP'oj 085FQtWlmipZn~v{T4($$E;>:%h]k(jsDW`KjdN^Zs`0ßWq{])d~aؘ7'*7S Or$ǔ~]u}ͭ;ybǔF/r>tL7tLfȾHn\1=J=̽Аcf9rq2{Tuh8ecexOAwd)ox ' x5\{8~yWÛ̧QhYOf؇7=]]Ks 0boLBRruPx.NL{@ 5/ GoCb2' _s?^Hf)A~O-ǗqQ{r G=8"FZ%Hûm{#Th }Q**6QM(cZℒ8I=rȏpďؼ*n~_6|=(nF_S IvZZ<>8-IVzx6@m̬2cr<lLEs~x'!᥹N Fwn!_ 8<zz| /%y zC]` os;/}ђ{ }JBf/sNb>M)o])[ #-%4J~7"1EG#·hUli,>ϋx~c(Nj V`ڠ} /bRU9E)VE#J# oF[W,[r6ML&&ߘ!V)`ww>y ?OI9r8Ngz#(2 WigK>ss7G^ĻI_}4)ӯ Mp81kWd}/R/ړ ϝC4]QzE-åm<;yU_Glc.2!ҿĊ3 9 _/bK*t7΀x[ơf@58`- zG|k]%㘹>枇}=Ӊ/p$ JN2|>sQݽճG_u>' Xc3 whUY!# X8AriN:G=)FYu>p7gzhiӜ>|;Cz##XR窳zgM7 psQ02{230v$AJP 2Ջ1JX;_)'y$"`0 uTo^!> ̃*=~=vѳ,zUcg9q'ʢoW<~=tƔmnlm\1)g_ƈ\u}#Z`Fuq:`z̄ GA/5Ө- 3 u|W"Ƀ')նEAqGa ( =daK.} Lj8`0w<8q8}l obʂljTn@`'JT?Lٓ8r<~Xg bb]SfHwrEƽߑ3`*l0J y13-Q~߀e&YWM`~gi^uEhNR &Hha0w(eZ*p Nޅf{*cFᬶ:+F o$^v5P2[21ntA/\Cii`#;1inzΝdT  (!0ֵk<"8{DgY{a 8qP;ɮ]j}gl)͔Ɋ,Űp_8H'y b HAlvbK%#8Έޗnnƴ8?Pw9 `HOǴew~o={0v<cm2:XNf,crix JE [u /e݊Y4Z]1+- 󅑡1BSiy)FC}F&,=>WB |`O>V p@IYj"itY+(:9[xXȹEHږvCow?[2csSaO2GT,P*9]G ͕8k63C AP! \VEy-MʼGz$pETc"}bX~uaM؏=Ǝ)$d6֑]&yy%rgf$uTW X2M\kfDt}{(BXp -zPaz:˫.ZTϪO]`4Fı?tIX!,~ȕR yȹ/\dBLQK#[j!jP:DQJC!,{b:=(|f=w$`c v]|䰟wh0Wd*n,eY e!w>Q k?x_RA)@V2\YN̊UzQG7pđ%K-HSKsG( ǰ+ߑ]\8e KTW,4#6ޖ^p6Z}·*~< ;%~b(}Wx=)CjMv6袋.>z'gLƓY?( < *Z~KE%?WaK6#Ne~N8Uє@KjTJZacvUJQ yW L>P1/  \=/K5Q=w>HĒjxAPG)<Vnk3mh7?u -;!! *e*#'>~cDZRzz 7Z,O #kH3J=.KhKȟ,4ۚ2ctvueک%ԣlwePEїm@.>_1\᳭mS0ߴV P,B&Alar XRh X5-)V`t0z0dA[Is@M=j?Rdం8)K3)ڥy?Ja"!iX` kb𬰽lynTR<0v,t!Q0):J!OM.niJ`,/B ZH32"^b9zbV/HأxC&BBr(K o_P(8_#P Uԧ s6^(X{[1 EeZBSF,:X_X(գ\b;r?ZmaHOB&jLޚ]16TQPv6bhh2iGBm/aЎ,Q0> winQn"QBPNF ǾQΘHUGGHg歋JѶF2F(?=T@套I5MC]B%abYaĖ*')Ki7a ˚5! 
FbCJn;/Qb&exylRbK"Z; C+ 8 !ܸ(| 5!H\gb1uީRi>XA2mYM?jk\X9+̲0mx5aTו]`i&d\%AvSjkd% \ۿIwߍ4w, =||Б U!eSNnw[h:BZj"m,FBrgq~;ctU7oqCfLu$Yx\)kK!EИ͠mh5ͦC=0y8ZUm=l?<\lLZDk}wېN ZCZn K;\2`'D:1+a:tlA{ 6iak)",q2{"P*2<4PI Y3c鮘*S^BD[Ю-MZg@Ao $ $K<[_)„O/&{BZff lɍ7''O~Nx)62慰d8z6#( i8bkb5Պ,sV”pɌD9 z Y(񰫡:[,?uhg 3uA;Je?7GyaM?dEi[g=v^P&%-[+)w7֚,"Y31AZ46/h{'G_UYk\6хe}vۺFݰf|"S(B{ipӍY[?c3A'½[ `uΌ&Nl0r9}N>s?LԡS!֎eõoy1}soʧ{S1vn1b:{W>'WMV&w4(2dCG^cWL㓫UA<^]\ZJjʿxlLU I*`soX_m3QQ Ylpy~>?Ù-^=:$'WgLbv+m۳ llhv~S)^>X [[GVT}[_GGQI-;\n%e=}}~{.չ]zV602 +<'[\0WPĉu5[e G571Xzb2+ն}m29I`557.OZ蛟[6B5^jmΚ(uasYۦyo yuu jVade#iat"2hn ӷE./~3Kki~Ѥ48Jaw6R}e } ߶rq ѤBN~7 ;;l-7w5vv'='N3gGy )5^ K4!gN a,3f2WgFx/(Oo͔{z0ON<6g<D)+s<$_n;]:}b/>~ZHZO7Zl|b;f"d&IDATŚrHbk7ָ-qGxOgҺWN+EUhq~OR㏪;%O1t{~ϕC*t=L~,q`nWF@I2`f~k-ꤔ;}q>}z0X^];o-sXF{c#e{+*:}yd=EŎY9|5?AAG/ ]p!{2|K!)zΘ y> Lc茵[%j>`Ҿ}G$:!\>7lm_~e+d쉎pR!sqwBK|T~:.v=8rρopu{ٓhvswr~߇pF~|<O56?|\cs*Ǽ43y;˟aOVEɿvz>poHW9y7I7 E]t~g2t{;Һ93:tK;},?2k),σ(UG Kg?ÏOWhc\:ב+Qgݼj]tE]tE]tE]tE]tE]tE]tE{ ϮIENDB`werkzeug-0.14.1/examples/cupoftee/shared/style.css000066400000000000000000000035051322225165500222160ustar00rootroot00000000000000body { font-family: 'Verdana', sans-serif; background: #2b93ad; margin: 0; padding: 0; font-size: 15px; text-align: center; } h1 { font-size: 0; margin: 0; padding: 10px 0 0 10px; height: 124px; line-height: 100px; background: url(header.png); color: white; } h1 a { display: block; margin: 0 auto 0 auto; height: 90px; width: 395px; background: url(logo.png); } div.contents { background: white url(content.png) repeat-x; margin: -8px auto 0 auto; text-align: left; padding: 15px; max-width: 1000px; } div.contents a { margin: 0 5px 0 5px; } div.footer { max-width: 1014px; margin: 0 auto 0 auto; background: #1a6f96; padding: 8px; font-size: 10px; color: white; } div.footer a { color: #79b9d7; } a { color: #1a6f96; text-decoration: none; } a:hover { color: #ffb735; } h2 { margin: 0 0 0.5em 0; padding: 0 0 0.1em 0; color: #ffb735; font-size: 2em; border-bottom: 1px solid #ccc; } h3 { margin: 1em 0 0.7em 0; color: #ffb735; font-size: 1.5em; } table { width: 100%; border-collapse: collapse; border: 3px solid #79b9d7; } table td, table th { border: 1px solid #79b9d7; padding: 3px 6px 3px 6px; font-weight: normal; text-align: center; font-size: 13px; } table th { background: #f2f8fb; text-align: left; } table thead th { font-weight: bold; background-color: #79b9d7; text-align: center; } table thead th a { color: white; } table thead th a.up { background: url(up.png) no-repeat right; padding-right: 20px; } table thead th a.down { background: url(down.png) no-repeat right; padding-right: 20px; } div.players { font-size: 11px; } dl dt { font-weight: bold; padding: 5px 0 0 0; } werkzeug-0.14.1/examples/cupoftee/shared/up.png000077500000000000000000000006051322225165500214770ustar00rootroot00000000000000PNG  IHDRasRGBbKGDNE pHYs  tIME 4g%IDAT8˵JAFA-V*JW' $XXI,lj!,uwgBEwՁޏw`?iߵ"p^!l-_MWj%X? }OF=9:5O6OϽ.oB : Teeworlds Server Browser

Teeworlds Server Browser

${body}
werkzeug-0.14.1/examples/cupoftee/templates/missingpage.html000066400000000000000000000004051322225165500242640ustar00rootroot00000000000000

Page Not Found

The requested page does not exist on this server. If you expected something here (for example a server), it probably went away after the last update.

go back to the server list.

werkzeug-0.14.1/examples/cupoftee/templates/search.html000066400000000000000000000016621322225165500232310ustar00rootroot00000000000000

Nick Search

<% if not user %>

You have to enter a nickname.

Take me back to the server list.

<% else %> <% if results %>

The nickname "$escape(user)" is currently playing on the following ${len(results) == 1 and 'server' or 'servers'}:

<% else %>

The nickname "$escape(user)" is currently not playing.

<% endif %>

You can bookmark this link to search for "$escape(user)" quickly or return to the server list.

<% endif %> werkzeug-0.14.1/examples/cupoftee/templates/server.html000066400000000000000000000014301322225165500232630ustar00rootroot00000000000000

$escape(server.name)

Take me back to the server list.

Map
$escape(server.map)
Gametype
$server.gametype
Number of players
$server.player_count
Server version
$server.version
Maximum number of players
$server.max_players
<% if server.progression >= 0 %>
Game progression
$server.progression%
<% endif %>
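All of the fields above come from a single UDP info request that Server._sync in cupoftee/network.py (earlier in this archive) sends to the game server. A rough, runnable sketch of how the reply is parsed; the payload below is made up purely for illustration:

# The reply to the info packet is a NUL-separated record after a
# 14-byte header (see Server._sync in cupoftee/network.py).
reply = 14 * '\x00' + '\x00'.join(
    ['0.4.2', 'my server', 'dm1', 'DM', '0', '-1', '3', '8'])
bits = reply[14:].split('\x00')
version, name, map_name, gametype = bits[:4]
flags, progression, player_count, max_players = map(int, bits[4:8])
print(name, map_name, gametype, player_count, max_players)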
<% if server.players %>

Players

<% endif %> werkzeug-0.14.1/examples/cupoftee/templates/serverlist.html000066400000000000000000000040261322225165500241630ustar00rootroot00000000000000

Server List

Currently $len(players) players are playing on $len(servers) servers. <% if cup.master.last_sync %> This list was last synced on $cup.master.last_sync.strftime('%d %B %Y at %H:%M UTC'). <% else %> Synchronization with the master server is in progress. Reload the page in a minute or two to see the server list. <% endif %>
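The "last synced" note above is driven by a background thread; stripped of its class, the update loop in Cup (cupoftee/application.py, earlier in this archive) is roughly:

# Rough sketch of Cup.update_master: poll the master servers forever,
# retrying twice as fast whenever a sync attempt fails.
import time

def update_master(browser, interval=120):
    wait = interval
    while True:
        # ServerBrowser.sync() returns False when the master list
        # could not be fetched.
        wait = interval if browser.sync() else interval // 2
        time.sleep(wait)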

<% for server in servers %> <% endfor %>
$self.order_link('name', 'Name') $self.order_link('map', 'Map') $self.order_link('gametype', 'Gametype') $self.order_link('players', 'Players') $self.order_link('progression', 'Progression')
$escape(server.name) $escape(server.map) $server.gametype $server.player_count / $server.max_players ${server.progression >= 0 and '%d%%' % server.progression or '?'}

Players online

The following map shows the users who are currently playing. The bigger their name, the higher their score in the current game. Clicking on a name takes you to the detail page of that server for some more information.

<% for player in players %> $escape(player.name) <% endfor %>
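The font size used for each name in this cloud is computed by Player in cupoftee/network.py (earlier in this archive); a quick worked example of that formula:

# Player.size: 100% plus a logarithmic bonus, so low scores barely
# differ while very high scores stand out.
from math import log

def name_size(score):
    return round(100 + log(max(score, 1)) * 25, 2)

for score in (0, 1, 10, 100):
    print(score, name_size(score))
# -> 0 100.0, 1 100.0, 10 157.56, 100 215.13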

Find User

Find a user by username. The result page contains a link you can bookmark to find your buddy easily. Because there is currently no central user database, users with overly generic usernames (like the default "nameless tee" user) can appear on multiple servers.
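Behind this form is Search.process in cupoftee/pages.py (earlier in this archive): it simply walks every known server and keeps the ones whose player list contains that exact name. A simplified sketch:

# Simplified sketch of the nickname search: no index, just a linear
# scan over all servers and their current players.
def find_servers(servers, user):
    results = []
    for server in servers.values():
        if any(player.name == user for player in server.players):
            results.append(server)
    return results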

werkzeug-0.14.1/examples/cupoftee/utils.py000066400000000000000000000006421322225165500206070ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ cupoftee.utils ~~~~~~~~~~~~~~ Various utilities. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import re _sort_re = re.compile(r'\w+', re.UNICODE) def unicodecmp(a, b): x, y = map(_sort_re.search, [a, b]) return cmp((x and x.group() or a).lower(), (y and y.group() or b).lower()) werkzeug-0.14.1/examples/httpbasicauth.py000066400000000000000000000027711322225165500205050ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ HTTP Basic Auth Example ~~~~~~~~~~~~~~~~~~~~~~~ Shows how you can implement HTTP basic auth support without an additional component. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ from werkzeug.serving import run_simple from werkzeug.wrappers import Request, Response class Application(object): def __init__(self, users, realm='login required'): self.users = users self.realm = realm def check_auth(self, username, password): return username in self.users and self.users[username] == password def auth_required(self, request): return Response('Could not verify your access level for that URL.\n' 'You have to login with proper credentials', 401, {'WWW-Authenticate': 'Basic realm="%s"' % self.realm}) def dispatch_request(self, request): return Response('Logged in as %s' % request.authorization.username) def __call__(self, environ, start_response): request = Request(environ) auth = request.authorization if not auth or not self.check_auth(auth.username, auth.password): response = self.auth_required(request) else: response = self.dispatch_request(request) return response(environ, start_response) if __name__ == '__main__': application = Application({'user1': 'password', 'user2': 'password'}) run_simple('localhost', 5000, application) werkzeug-0.14.1/examples/i18nurls/000077500000000000000000000000001322225165500167465ustar00rootroot00000000000000werkzeug-0.14.1/examples/i18nurls/__init__.py000066400000000000000000000000711322225165500210550ustar00rootroot00000000000000from i18nurls.application import Application as make_app werkzeug-0.14.1/examples/i18nurls/application.py000066400000000000000000000053271322225165500216320ustar00rootroot00000000000000from os import path from werkzeug.templates import Template from werkzeug.wrappers import BaseRequest, BaseResponse from werkzeug.routing import NotFound, RequestRedirect from werkzeug.exceptions import HTTPException, NotFound from i18nurls.urls import map TEMPLATES = path.join(path.dirname(__file__), 'templates') views = {} def expose(name): """Register the function as view.""" def wrapped(f): views[name] = f return f return wrapped class Request(BaseRequest): def __init__(self, environ, urls): super(Request, self).__init__(environ) self.urls = urls self.matched_url = None def url_for(self, endpoint, **args): if not 'lang_code' in args: args['lang_code'] = self.language if endpoint == 'this': endpoint = self.matched_url[0] tmp = self.matched_url[1].copy() tmp.update(args) args = tmp return self.urls.build(endpoint, args) class Response(BaseResponse): pass class TemplateResponse(Response): def __init__(self, template_name, **values): self.template_name = template_name self.template_values = values Response.__init__(self, mimetype='text/html') def __call__(self, environ, start_response): req = environ['werkzeug.request'] values = self.template_values.copy() values['req'] 
= req values['body'] = self.render_template(self.template_name, values) self.write(self.render_template('layout.html', values)) return Response.__call__(self, environ, start_response) def render_template(self, name, values): return Template.from_file(path.join(TEMPLATES, name)).render(values) class Application(object): def __init__(self): from i18nurls import views self.not_found = views.page_not_found def __call__(self, environ, start_response): urls = map.bind_to_environ(environ) req = Request(environ, urls) try: endpoint, args = urls.match(req.path) req.matched_url = (endpoint, args) if endpoint == '#language_select': lng = req.accept_languages.best lng = lng and lng.split('-')[0].lower() or 'en' index_url = urls.build('index', {'lang_code': lng}) resp = Response('Moved to %s' % index_url, status=302) resp.headers['Location'] = index_url else: req.language = args.pop('lang_code', None) resp = views[endpoint](req, **args) except NotFound: resp = self.not_found(req) except (RequestRedirect, HTTPException), e: resp = e return resp(environ, start_response) werkzeug-0.14.1/examples/i18nurls/templates/000077500000000000000000000000001322225165500207445ustar00rootroot00000000000000werkzeug-0.14.1/examples/i18nurls/templates/about.html000066400000000000000000000002021322225165500227360ustar00rootroot00000000000000

This is just another page. Maybe you want to head over to the blog.

werkzeug-0.14.1/examples/i18nurls/templates/blog.html000066400000000000000000000002411322225165500225520ustar00rootroot00000000000000

Blog <% if mode == 'index' %>Index<% else %>Post $post_id<% endif %>

How about going to the index.

werkzeug-0.14.1/examples/i18nurls/templates/index.html000066400000000000000000000003131322225165500227360ustar00rootroot00000000000000

Welcome to the i18n URL example application.

Because I'm too lazy to translate, here is just English content.

werkzeug-0.14.1/examples/i18nurls/templates/layout.html000066400000000000000000000010571322225165500231520ustar00rootroot00000000000000 $title | Example Application

Example Application

Request Language: $req.language

$body

This page in other languages:

    <% for lng in ['en', 'de', 'fr'] %>
  • $lng
  • <% endfor %>
werkzeug-0.14.1/examples/i18nurls/urls.py000066400000000000000000000005341322225165500203070ustar00rootroot00000000000000from werkzeug.routing import Map, Rule, Submount map = Map([ Rule('/', endpoint='#language_select'), Submount('/', [ Rule('/', endpoint='index'), Rule('/about', endpoint='about'), Rule('/blog/', endpoint='blog/index'), Rule('/blog/', endpoint='blog/show') ]) ]) werkzeug-0.14.1/examples/i18nurls/views.py000066400000000000000000000012101322225165500204470ustar00rootroot00000000000000from i18nurls.application import TemplateResponse, Response, expose @expose('index') def index(req): return TemplateResponse('index.html', title='Index') @expose('about') def about(req): return TemplateResponse('about.html', title='About') @expose('blog/index') def blog_index(req): return TemplateResponse('blog.html', title='Blog Index', mode='index') @expose('blog/show') def blog_show(req, post_id): return TemplateResponse('blog.html', title='Blog Post #%d' % post_id, post_id=post_id, mode='show') def page_not_found(req): return Response('

Page Not Found

', mimetype='text/html') werkzeug-0.14.1/examples/manage-coolmagic.py000077500000000000000000000036621322225165500210300ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """ Manage Coolmagic ~~~~~~~~~~~~~~~~ Manage the coolmagic example application. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import click from coolmagic import make_app from werkzeug.serving import run_simple @click.group() def cli(): pass @cli.command() @click.option('-h', '--hostname', type=str, default='localhost', help="localhost") @click.option('-p', '--port', type=int, default=5000, help="5000") @click.option('--no-reloader', is_flag=True, default=False) @click.option('--debugger', is_flag=True) @click.option('--no-evalex', is_flag=True, default=False) @click.option('--threaded', is_flag=True) @click.option('--processes', type=int, default=1, help="1") def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes): """Start a new development server.""" app = make_app() reloader = not no_reloader evalex = not no_evalex run_simple(hostname, port, app, use_reloader=reloader, use_debugger=debugger, use_evalex=evalex, threaded=threaded, processes=processes) @cli.command() @click.option('--no-ipython', is_flag=True, default=False) def shell(no_ipython): """Start a new interactive python session.""" banner = 'Interactive Werkzeug Shell' namespace = dict() if not no_ipython: try: try: from IPython.frontend.terminal.embed import InteractiveShellEmbed sh = InteractiveShellEmbed.instance(banner1=banner) except ImportError: from IPython.Shell import IPShellEmbed sh = IPShellEmbed(banner=banner) except ImportError: pass else: sh(local_ns=namespace) return from code import interact interact(banner, local=namespace) if __name__ == '__main__': cli() werkzeug-0.14.1/examples/manage-couchy.py000077500000000000000000000037571322225165500203720ustar00rootroot00000000000000#!/usr/bin/env python import click from werkzeug.serving import run_simple def make_app(): from couchy.application import Couchy return Couchy('http://localhost:5984') def make_shell(): from couchy import models, utils application = make_app() return locals() @click.group() def cli(): pass @cli.command() def initdb(): from couchy.application import Couchy Couchy('http://localhost:5984').init_database() @cli.command() @click.option('-h', '--hostname', type=str, default='localhost', help="localhost") @click.option('-p', '--port', type=int, default=5000, help="5000") @click.option('--no-reloader', is_flag=True, default=False) @click.option('--debugger', is_flag=True) @click.option('--no-evalex', is_flag=True, default=False) @click.option('--threaded', is_flag=True) @click.option('--processes', type=int, default=1, help="1") def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes): """Start a new development server.""" app = make_app() reloader = not no_reloader evalex = not no_evalex run_simple(hostname, port, app, use_reloader=reloader, use_debugger=debugger, use_evalex=evalex, threaded=threaded, processes=processes) @cli.command() @click.option('--no-ipython', is_flag=True, default=False) def shell(no_ipython): """Start a new interactive python session.""" banner = 'Interactive Werkzeug Shell' namespace = make_shell() if not no_ipython: try: try: from IPython.frontend.terminal.embed import InteractiveShellEmbed sh = InteractiveShellEmbed.instance(banner1=banner) except ImportError: from IPython.Shell import IPShellEmbed 
sh = IPShellEmbed(banner=banner) except ImportError: pass else: sh(local_ns=namespace) return from code import interact interact(banner, local=namespace) if __name__ == '__main__': cli() werkzeug-0.14.1/examples/manage-cupoftee.py000077500000000000000000000022571322225165500207040ustar00rootroot00000000000000#!/usr/bin/env python """ Manage Cup Of Tee ~~~~~~~~~~~~~~~~~ Manage the cup of tee application. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import click from werkzeug.serving import run_simple def make_app(): from cupoftee import make_app return make_app('/tmp/cupoftee.db') @click.group() def cli(): pass @cli.command() @click.option('-h', '--hostname', type=str, default='localhost', help="localhost") @click.option('-p', '--port', type=int, default=5000, help="5000") @click.option('--reloader', is_flag=True, default=False) @click.option('--debugger', is_flag=True) @click.option('--evalex', is_flag=True, default=False) @click.option('--threaded', is_flag=True) @click.option('--processes', type=int, default=1, help="1") def runserver(hostname, port, reloader, debugger, evalex, threaded, processes): """Start a new development server.""" app = make_app() run_simple(hostname, port, app, use_reloader=reloader, use_debugger=debugger, use_evalex=evalex, threaded=threaded, processes=processes) if __name__ == '__main__': cli() werkzeug-0.14.1/examples/manage-i18nurls.py000077500000000000000000000036561322225165500205630ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """ Manage i18nurls ~~~~~~~~~~~~~~~ Manage the i18n url example application. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import click from i18nurls import make_app from werkzeug.serving import run_simple @click.group() def cli(): pass @cli.command() @click.option('-h', '--hostname', type=str, default='localhost', help="localhost") @click.option('-p', '--port', type=int, default=5000, help="5000") @click.option('--no-reloader', is_flag=True, default=False) @click.option('--debugger', is_flag=True) @click.option('--no-evalex', is_flag=True, default=False) @click.option('--threaded', is_flag=True) @click.option('--processes', type=int, default=1, help="1") def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes): """Start a new development server.""" app = make_app() reloader = not no_reloader evalex = not no_evalex run_simple(hostname, port, app, use_reloader=reloader, use_debugger=debugger, use_evalex=evalex, threaded=threaded, processes=processes) @cli.command() @click.option('--no-ipython', is_flag=True, default=False) def shell(no_ipython): """Start a new interactive python session.""" banner = 'Interactive Werkzeug Shell' namespace = dict() if not no_ipython: try: try: from IPython.frontend.terminal.embed import InteractiveShellEmbed sh = InteractiveShellEmbed.instance(banner1=banner) except ImportError: from IPython.Shell import IPShellEmbed sh = IPShellEmbed(banner=banner) except ImportError: pass else: sh(local_ns=namespace) return from code import interact interact(banner, local=namespace) if __name__ == '__main__': cli() werkzeug-0.14.1/examples/manage-plnt.py000077500000000000000000000067541322225165500200550ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """ Manage plnt ~~~~~~~~~~~ This script manages the plnt application. 
:copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import click import os from werkzeug.serving import run_simple def make_app(): """Helper function that creates a plnt app.""" from plnt import Plnt database_uri = os.environ.get('PLNT_DATABASE_URI') app = Plnt(database_uri or 'sqlite:////tmp/plnt.db') app.bind_to_context() return app @click.group() def cli(): pass @cli.command() def initdb(): """Initialize the database""" from plnt.database import Blog, session make_app().init_database() # and now fill in some python blogs everybody should read (shamelessly # added my own blog too) blogs = [ Blog('Armin Ronacher', 'http://lucumr.pocoo.org/', 'http://lucumr.pocoo.org/cogitations/feed/'), Blog('Georg Brandl', 'http://pyside.blogspot.com/', 'http://pyside.blogspot.com/feeds/posts/default'), Blog('Ian Bicking', 'http://blog.ianbicking.org/', 'http://blog.ianbicking.org/feed/'), Blog('Amir Salihefendic', 'http://amix.dk/', 'http://feeds.feedburner.com/amixdk'), Blog('Christopher Lenz', 'http://www.cmlenz.net/blog/', 'http://www.cmlenz.net/blog/atom.xml'), Blog('Frederick Lundh', 'http://online.effbot.org/', 'http://online.effbot.org/rss.xml') ] # okay. got tired here. if someone feels that he is missing, drop me # a line ;-) for blog in blogs: session.add(blog) session.commit() click.echo('Initialized database, now run manage-plnt.py sync to get the posts') @cli.command() @click.option('-h', '--hostname', type=str, default='localhost', help="localhost") @click.option('-p', '--port', type=int, default=5000, help="5000") @click.option('--no-reloader', is_flag=True, default=False) @click.option('--debugger', is_flag=True) @click.option('--no-evalex', is_flag=True, default=False) @click.option('--threaded', is_flag=True) @click.option('--processes', type=int, default=1, help="1") def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes): """Start a new development server.""" app = make_app() reloader = not no_reloader evalex = not no_evalex run_simple(hostname, port, app, use_reloader=reloader, use_debugger=debugger, use_evalex=evalex, threaded=threaded, processes=processes) @cli.command() @click.option('--no-ipython', is_flag=True, default=False) def shell(no_ipython): """Start a new interactive python session.""" banner = 'Interactive Werkzeug Shell' namespace = {'app': make_app()} if not no_ipython: try: try: from IPython.frontend.terminal.embed import InteractiveShellEmbed sh = InteractiveShellEmbed.instance(banner1=banner) except ImportError: from IPython.Shell import IPShellEmbed sh = IPShellEmbed(banner=banner) except ImportError: pass else: sh(local_ns=namespace) return from code import interact interact(banner, local=namespace) @cli.command() def sync(): """Sync the blogs in the planet. 
Call this from a cronjob.""" from plnt.sync import sync make_app().bind_to_context() sync() if __name__ == '__main__': cli() werkzeug-0.14.1/examples/manage-shorty.py000077500000000000000000000040231322225165500204130ustar00rootroot00000000000000#!/usr/bin/env python import click import os import tempfile from werkzeug.serving import run_simple def make_app(): from shorty.application import Shorty filename = os.path.join(tempfile.gettempdir(), "shorty.db") return Shorty('sqlite:///{0}'.format(filename)) def make_shell(): from shorty import models, utils application = make_app() return locals() @click.group() def cli(): pass @cli.command() def initdb(): make_app().init_database() @cli.command() @click.option('-h', '--hostname', type=str, default='localhost', help="localhost") @click.option('-p', '--port', type=int, default=5000, help="5000") @click.option('--no-reloader', is_flag=True, default=False) @click.option('--debugger', is_flag=True) @click.option('--no-evalex', is_flag=True, default=False) @click.option('--threaded', is_flag=True) @click.option('--processes', type=int, default=1, help="1") def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes): """Start a new development server.""" app = make_app() reloader = not no_reloader evalex = not no_evalex run_simple(hostname, port, app, use_reloader=reloader, use_debugger=debugger, use_evalex=evalex, threaded=threaded, processes=processes) @cli.command() @click.option('--no-ipython', is_flag=True, default=False) def shell(no_ipython): """Start a new interactive python session.""" banner = 'Interactive Werkzeug Shell' namespace = make_shell() if not no_ipython: try: try: from IPython.frontend.terminal.embed import InteractiveShellEmbed sh = InteractiveShellEmbed.instance(banner1=banner) except ImportError: from IPython.Shell import IPShellEmbed sh = IPShellEmbed(banner=banner) except ImportError: pass else: sh(local_ns=namespace) return from code import interact interact(banner, local=namespace) if __name__ == '__main__': cli() werkzeug-0.14.1/examples/manage-simplewiki.py000077500000000000000000000046761322225165500212560ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """ Manage SimpleWiki ~~~~~~~~~~~~~~~~~ This script provides some basic commands to debug and test SimpleWiki. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
""" import click import os import tempfile from werkzeug.serving import run_simple def make_wiki(): """Helper function that creates a new wiki instance.""" from simplewiki import SimpleWiki database_uri = os.environ.get('SIMPLEWIKI_DATABASE_URI') return SimpleWiki(database_uri or 'sqlite:////tmp/simplewiki.db') def make_shell(): from simplewiki import database wiki = make_wiki() wiki.bind_to_context() return { 'wiki': wiki, 'db': database } @click.group() def cli(): pass @cli.command() def initdb(): make_wiki().init_database() @cli.command() @click.option('-h', '--hostname', type=str, default='localhost', help="localhost") @click.option('-p', '--port', type=int, default=5000, help="5000") @click.option('--no-reloader', is_flag=True, default=False) @click.option('--debugger', is_flag=True) @click.option('--no-evalex', is_flag=True, default=False) @click.option('--threaded', is_flag=True) @click.option('--processes', type=int, default=1, help="1") def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes): """Start a new development server.""" app = make_wiki() reloader = not no_reloader evalex = not no_evalex run_simple(hostname, port, app, use_reloader=reloader, use_debugger=debugger, use_evalex=evalex, threaded=threaded, processes=processes) @cli.command() @click.option('--no-ipython', is_flag=True, default=False) def shell(no_ipython): """Start a new interactive python session.""" banner = 'Interactive Werkzeug Shell' namespace = make_shell() if not no_ipython: try: try: from IPython.frontend.terminal.embed import InteractiveShellEmbed sh = InteractiveShellEmbed.instance(banner1=banner) except ImportError: from IPython.Shell import IPShellEmbed sh = IPShellEmbed(banner=banner) except ImportError: pass else: sh(local_ns=namespace) return from code import interact interact(banner, local=namespace) if __name__ == '__main__': cli() werkzeug-0.14.1/examples/manage-webpylike.py000077500000000000000000000043371322225165500210660ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """ Manage web.py like application ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ A small example application that is built after the web.py tutorial. We even use regular expression based dispatching. The original code can be found on the `webpy.org webpage`__ in the tutorial section. __ http://webpy.org/tutorial2.en :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
""" import click import os import sys sys.path.append(os.path.join(os.path.dirname(__file__), 'webpylike')) from example import app from werkzeug.serving import run_simple @click.group() def cli(): pass @cli.command() @click.option('-h', '--hostname', type=str, default='localhost', help="localhost") @click.option('-p', '--port', type=int, default=5000, help="5000") @click.option('--no-reloader', is_flag=True, default=False) @click.option('--debugger', is_flag=True) @click.option('--no-evalex', is_flag=True, default=False) @click.option('--threaded', is_flag=True) @click.option('--processes', type=int, default=1, help="1") def runserver(hostname, port, no_reloader, debugger, no_evalex, threaded, processes): """Start a new development server.""" reloader = not no_reloader evalex = not no_evalex run_simple(hostname, port, app, use_reloader=reloader, use_debugger=debugger, use_evalex=evalex, threaded=threaded, processes=processes) @cli.command() @click.option('--no-ipython', is_flag=True, default=False) def shell(no_ipython): """Start a new interactive python session.""" banner = 'Interactive Werkzeug Shell' namespace = dict() if not no_ipython: try: try: from IPython.frontend.terminal.embed import InteractiveShellEmbed sh = InteractiveShellEmbed.instance(banner1=banner) except ImportError: from IPython.Shell import IPShellEmbed sh = IPShellEmbed(banner=banner) except ImportError: pass else: sh(local_ns=namespace) return from code import interact interact(banner, local=namespace) if __name__ == '__main__': cli() werkzeug-0.14.1/examples/partial/000077500000000000000000000000001322225165500167155ustar00rootroot00000000000000werkzeug-0.14.1/examples/partial/README000066400000000000000000000002351322225165500175750ustar00rootroot00000000000000This directory contains modules that have code but that are not excutable. For example routing definitions to play around in the python interactive prompt. werkzeug-0.14.1/examples/partial/complex_routing.py000066400000000000000000000011231322225165500225020ustar00rootroot00000000000000from werkzeug.routing import Map, Rule, Subdomain, Submount, EndpointPrefix m = Map([ # Static URLs EndpointPrefix('static/', [ Rule('/', endpoint='index'), Rule('/about', endpoint='about'), Rule('/help', endpoint='help'), ]), # Knowledge Base Subdomain('kb', [EndpointPrefix('kb/', [ Rule('/', endpoint='index'), Submount('/browse', [ Rule('/', endpoint='browse'), Rule('//', defaults={'page': 1}, endpoint='browse'), Rule('//', endpoint='browse') ]) ])]) ]) werkzeug-0.14.1/examples/plnt/000077500000000000000000000000001322225165500162365ustar00rootroot00000000000000werkzeug-0.14.1/examples/plnt/__init__.py000066400000000000000000000003711322225165500203500ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ plnt ~~~~ Noun. plnt (plant) -- a planet application that sounds like a herb. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ from plnt.webapp import Plnt werkzeug-0.14.1/examples/plnt/database.py000066400000000000000000000035051322225165500203570ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ plnt.database ~~~~~~~~~~~~~ The database definitions for the planet. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. 
""" from sqlalchemy import MetaData, Table, Column, ForeignKey, Boolean, \ Integer, String, DateTime from sqlalchemy.orm import dynamic_loader, scoped_session, create_session, \ mapper from plnt.utils import application, local_manager def new_db_session(): return create_session(application.database_engine, autoflush=True, autocommit=False) metadata = MetaData() session = scoped_session(new_db_session, local_manager.get_ident) blog_table = Table('blogs', metadata, Column('id', Integer, primary_key=True), Column('name', String(120)), Column('description', String), Column('url', String(200)), Column('feed_url', String(250)) ) entry_table = Table('entries', metadata, Column('id', Integer, primary_key=True), Column('blog_id', Integer, ForeignKey('blogs.id')), Column('guid', String(200), unique=True), Column('title', String(140)), Column('url', String(200)), Column('text', String), Column('pub_date', DateTime), Column('last_update', DateTime) ) class Blog(object): query = session.query_property() def __init__(self, name, url, feed_url, description=u''): self.name = name self.url = url self.feed_url = feed_url self.description = description def __repr__(self): return '<%s %r>' % (self.__class__.__name__, self.url) class Entry(object): query = session.query_property() def __repr__(self): return '<%s %r>' % (self.__class__.__name__, self.guid) mapper(Entry, entry_table) mapper(Blog, blog_table, properties=dict( entries=dynamic_loader(Entry, backref='blog') )) werkzeug-0.14.1/examples/plnt/shared/000077500000000000000000000000001322225165500175045ustar00rootroot00000000000000werkzeug-0.14.1/examples/plnt/shared/style.css000066400000000000000000000041641322225165500213630ustar00rootroot00000000000000body { font-family: 'Luxi Sans', 'Lucida Sans', 'Verdana', sans-serif; margin: 1em; padding: 0; background-color: #BDE1EC; color: #0B2B35; } a { color: #50ACC4; } a:hover { color: #0B2B35; } div.header { display: block; margin: -1em -1em 0 -1em; padding: 1em; background-color: #0B2B35; color: white; } div.header h1 { font-family: 'Georgia', serif; margin: 0; font-size: 1.8em; } div.header blockquote { margin: 0; padding: 0.5em 0 0 1em; font-size: 0.9em; } div.footer { margin: 1em -1em -1em -1em; padding: 0.5em; color: #F3F7F8; background-color: #1F6070; } div.footer p { margin: 0; padding: 0; font-size: 0.8em; text-align: right; } ul.navigation { float: right; padding: 0.7em 1em 0.7em 1em; background-color: #F3F7F8; border: 1px solid #85CADB; border-right-color: #50ACC4; border-bottom-color: #50ACC4; list-style: none; } ul.navigation li { padding: 0.3em 0 0.3em 0; } ul.navigation li a { color: #0B2B35; } ul.navigation li a:hover { color: #50ACC4; } div.pagination { margin: 0.5em 0 0.5em 0; padding: 0.7em; text-align: center; max-width: 50em; background-color: white; border: 1px solid #B1CDD4; } div.day, div.page { max-width: 50em; background-color: white; border: 1px solid #50ACC4; margin: 1em 0 1em 0; padding: 0.7em; } div.day h2, div.page h2 { margin: 0 0 0.5em 0; padding: 0; color: black; font-size: 1.7em; } div.page p { margin: 0.7em 1em 0.7em 1em; line-height: 1.5em; } div.day div.entry { margin: 0.5em 0.25em 0.5em 1em; padding: 1em; background-color: #F3F7F8; border: 1px solid #85CADB; border-left-color: #50ACC4; border-top-color: #50ACC4; } div.day div.entry h3 { margin: 0; padding: 0; } div.day div.entry h3 a { color: #1C6D81; text-decoration: none; } div.day div.entry p.meta { color: #666; font-size: 0.85em; margin: 0.3em 0 0.6em 0; } div.day div.entry p.meta a { color: #666; } div.day div.entry 
div.text { padding: 0 0 0 0.5em; } werkzeug-0.14.1/examples/plnt/sync.py000066400000000000000000000072011322225165500175640ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ plnt.sync ~~~~~~~~~ Does the synchronization. Called by "manage-plnt.py sync" :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ import sys import feedparser from time import time from datetime import datetime from werkzeug.utils import escape from plnt.database import Blog, Entry, session from plnt.utils import strip_tags, nl2p HTML_MIMETYPES = set(['text/html', 'application/xhtml+xml']) def sync(): """ Performs a synchronization. Articles that are already syncronized aren't touched anymore. """ for blog in Blog.query.all(): # parse the feed. feedparser.parse will never given an exception # but the bozo bit might be defined. feed = feedparser.parse(blog.feed_url) blog_author = feed.get('author') or blog.name blog_author_detail = feed.get('author_detail') for entry in feed.entries: # get the guid. either the id if specified, otherwise the link. # if none is available we skip the entry. guid = entry.get('id') or entry.get('link') if not guid: continue # get an old entry for the guid to check if we need to update # or recreate the item old_entry = Entry.query.filter_by(guid=guid).first() # get title, url and text. skip if no title or no text is # given. if the link is missing we use the blog link. if 'title_detail' in entry: title = entry.title_detail.get('value') or '' if entry.title_detail.get('type') in HTML_MIMETYPES: title = strip_tags(title) else: title = escape(title) else: title = entry.get('title') url = entry.get('link') or blog.blog_url text = 'content' in entry and entry.content[0] or \ entry.get('summary_detail') if not title or not text: continue # if we have an html text we use that, otherwise we HTML # escape the text and use that one. We also handle XHTML # with our tag soup parser for the moment. if text.get('type') not in HTML_MIMETYPES: text = escape(nl2p(text.get('value') or '')) else: text = text.get('value') or '' # no text? continue if not text.strip(): continue # get the pub date and updated date. This is rather complex # because different feeds do different stuff pub_date = entry.get('published_parsed') or \ entry.get('created_parsed') or \ entry.get('date_parsed') updated = entry.get('updated_parsed') or pub_date pub_date = pub_date or updated # if we don't have a pub_date we skip. if not pub_date: continue # convert the time tuples to datetime objects. pub_date = datetime(*pub_date[:6]) updated = datetime(*updated[:6]) if old_entry and updated <= old_entry.last_update: continue # create a new entry object based on the data collected or # update the old one. entry = old_entry or Entry() entry.blog = blog entry.guid = guid entry.title = title entry.url = url entry.text = text entry.pub_date = pub_date entry.last_update = updated session.add(entry) session.commit() werkzeug-0.14.1/examples/plnt/templates/000077500000000000000000000000001322225165500202345ustar00rootroot00000000000000werkzeug-0.14.1/examples/plnt/templates/about.html000066400000000000000000000013051322225165500222330ustar00rootroot00000000000000{% extends "layout.html" %} {% block body %}

About Plnt

Plnt is a small example application written using the Werkzeug WSGI toolkit, the Jinja template language, the SQLAlchemy database abstraction layer and ORM, and last but not least the awesome feedparser library.

It's one of the example applications developed to show some of the features Werkzeug provides, and it could serve as the base of real planet software.

{% endblock %} werkzeug-0.14.1/examples/plnt/templates/index.html000066400000000000000000000017051322225165500222340ustar00rootroot00000000000000{% extends "layout.html" %} {% block body %} {% for day in days %}

{{ day.date.strftime('%d %B %Y') }}

{%- for entry in day.entries %}

{{ entry.title }}

by {{ entry.blog.name|e }} at {{ entry.pub_date.strftime('%H:%M') }}

{{ entry.text }}
{%- endfor %}
{%- endfor %} {% if pagination.pages > 1 %} {% endif %} {% endblock %} werkzeug-0.14.1/examples/plnt/templates/layout.html000066400000000000000000000012761322225165500224450ustar00rootroot00000000000000 Plnt Planet

Plnt Planet

This is the Plnt Planet Werkzeug Example Application
{% block body %}{% endblock %}
werkzeug-0.14.1/examples/plnt/utils.py000066400000000000000000000071611322225165500177550ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ plnt.utils ~~~~~~~~~~ The planet utilities. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ import re from os import path from jinja2 import Environment, FileSystemLoader from werkzeug.local import Local, LocalManager from werkzeug.urls import url_encode, url_quote from werkzeug.utils import cached_property from werkzeug.wrappers import Response from werkzeug.routing import Map, Rule # context locals. these two objects are use by the application to # bind objects to the current context. A context is defined as the # current thread and the current greenlet if there is greenlet support. # the `get_request` and `get_application` functions look up the request # and application objects from this local manager. local = Local() local_manager = LocalManager([local]) # proxy objects request = local('request') application = local('application') url_adapter = local('url_adapter') # let's use jinja for templates this time template_path = path.join(path.dirname(__file__), 'templates') jinja_env = Environment(loader=FileSystemLoader(template_path)) # the collected url patterns url_map = Map([Rule('/shared/', endpoint='shared')]) endpoints = {} _par_re = re.compile(r'\n{2,}') _entity_re = re.compile(r'&([^;]+);') _striptags_re = re.compile(r'(|<[^>]*>)') from htmlentitydefs import name2codepoint html_entities = name2codepoint.copy() html_entities['apos'] = 39 del name2codepoint def expose(url_rule, endpoint=None, **kwargs): """Expose this function to the web layer.""" def decorate(f): e = endpoint or f.__name__ endpoints[e] = f url_map.add(Rule(url_rule, endpoint=e, **kwargs)) return f return decorate def render_template(template_name, **context): """Render a template into a response.""" tmpl = jinja_env.get_template(template_name) context['url_for'] = url_for return Response(tmpl.render(context), mimetype='text/html') def nl2p(s): """Add paragraphs to a text.""" return u'\n'.join(u'
<p>%s</p>
' % p for p in _par_re.split(s)) def url_for(endpoint, **kw): """Simple function for URL generation.""" return url_adapter.build(endpoint, kw) def strip_tags(s): """Resolve HTML entities and remove tags from a string.""" def handle_match(m): name = m.group(1) if name in html_entities: return unichr(html_entities[name]) if name[:2] in ('#x', '#X'): try: return unichr(int(name[2:], 16)) except ValueError: return u'' elif name.startswith('#'): try: return unichr(int(name[1:])) except ValueError: return u'' return u'' return _entity_re.sub(handle_match, _striptags_re.sub('', s)) class Pagination(object): """ Paginate a SQLAlchemy query object. """ def __init__(self, query, per_page, page, endpoint): self.query = query self.per_page = per_page self.page = page self.endpoint = endpoint @cached_property def entries(self): return self.query.offset((self.page - 1) * self.per_page) \ .limit(self.per_page).all() @cached_property def count(self): return self.query.count() has_previous = property(lambda x: x.page > 1) has_next = property(lambda x: x.page < x.pages) previous = property(lambda x: url_for(x.endpoint, page=x.page - 1)) next = property(lambda x: url_for(x.endpoint, page=x.page + 1)) pages = property(lambda x: max(0, x.count - 1) // x.per_page + 1) werkzeug-0.14.1/examples/plnt/views.py000066400000000000000000000021711322225165500177460ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ plnt.views ~~~~~~~~~~ Display the aggregated feeds. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ from datetime import datetime, date from plnt.database import Blog, Entry from plnt.utils import Pagination, expose, render_template #: number of items per page PER_PAGE = 30 @expose('/', defaults={'page': 1}) @expose('/page/') def index(request, page): """Show the index page or any an offset of it.""" days = [] days_found = set() query = Entry.query.order_by(Entry.pub_date.desc()) pagination = Pagination(query, PER_PAGE, page, 'index') for entry in pagination.entries: day = date(*entry.pub_date.timetuple()[:3]) if day not in days_found: days_found.add(day) days.append({'date': day, 'entries': []}) days[-1]['entries'].append(entry) return render_template('index.html', days=days, pagination=pagination) @expose('/about') def about(request): """Show the about page, so that we have another view func ;-)""" return render_template('about.html') werkzeug-0.14.1/examples/plnt/webapp.py000066400000000000000000000033461322225165500200740ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ plnt.webapp ~~~~~~~~~~~ The web part of the planet. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. 
""" from os import path from sqlalchemy import create_engine from werkzeug.wrappers import Request from werkzeug.wsgi import ClosingIterator, SharedDataMiddleware from werkzeug.exceptions import HTTPException, NotFound from plnt.utils import local, local_manager, url_map, endpoints from plnt.database import session, metadata # import the views module because it contains setup code import plnt.views #: path to shared data SHARED_DATA = path.join(path.dirname(__file__), 'shared') class Plnt(object): def __init__(self, database_uri): self.database_engine = create_engine(database_uri) self._dispatch = local_manager.middleware(self.dispatch_request) self._dispatch = SharedDataMiddleware(self._dispatch, { '/shared': SHARED_DATA }) def init_database(self): metadata.create_all(self.database_engine) def bind_to_context(self): local.application = self def dispatch_request(self, environ, start_response): self.bind_to_context() local.request = request = Request(environ, start_response) local.url_adapter = adapter = url_map.bind_to_environ(environ) try: endpoint, values = adapter.match(request.path) response = endpoints[endpoint](request, **values) except HTTPException, e: response = e return ClosingIterator(response(environ, start_response), session.remove) def __call__(self, environ, start_response): return self._dispatch(environ, start_response) werkzeug-0.14.1/examples/shortly/000077500000000000000000000000001322225165500167655ustar00rootroot00000000000000werkzeug-0.14.1/examples/shortly/shortly.py000066400000000000000000000110261322225165500210430ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ shortly ~~~~~~~ A simple URL shortener using Werkzeug and redis. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
""" import os import redis import urlparse from werkzeug.wrappers import Request, Response from werkzeug.routing import Map, Rule from werkzeug.exceptions import HTTPException, NotFound from werkzeug.wsgi import SharedDataMiddleware from werkzeug.utils import redirect from jinja2 import Environment, FileSystemLoader def base36_encode(number): assert number >= 0, 'positive integer required' if number == 0: return '0' base36 = [] while number != 0: number, i = divmod(number, 36) base36.append('0123456789abcdefghijklmnopqrstuvwxyz'[i]) return ''.join(reversed(base36)) def is_valid_url(url): parts = urlparse.urlparse(url) return parts.scheme in ('http', 'https') def get_hostname(url): return urlparse.urlparse(url).netloc class Shortly(object): def __init__(self, config): self.redis = redis.Redis(config['redis_host'], config['redis_port']) template_path = os.path.join(os.path.dirname(__file__), 'templates') self.jinja_env = Environment(loader=FileSystemLoader(template_path), autoescape=True) self.jinja_env.filters['hostname'] = get_hostname self.url_map = Map([ Rule('/', endpoint='new_url'), Rule('/', endpoint='follow_short_link'), Rule('/+', endpoint='short_link_details') ]) def on_new_url(self, request): error = None url = '' if request.method == 'POST': url = request.form['url'] if not is_valid_url(url): error = 'Please enter a valid URL' else: short_id = self.insert_url(url) return redirect('/%s+' % short_id) return self.render_template('new_url.html', error=error, url=url) def on_follow_short_link(self, request, short_id): link_target = self.redis.get('url-target:' + short_id) if link_target is None: raise NotFound() self.redis.incr('click-count:' + short_id) return redirect(link_target) def on_short_link_details(self, request, short_id): link_target = self.redis.get('url-target:' + short_id) if link_target is None: raise NotFound() click_count = int(self.redis.get('click-count:' + short_id) or 0) return self.render_template('short_link_details.html', link_target=link_target, short_id=short_id, click_count=click_count ) def error_404(self): response = self.render_template('404.html') response.status_code = 404 return response def insert_url(self, url): short_id = self.redis.get('reverse-url:' + url) if short_id is not None: return short_id url_num = self.redis.incr('last-url-id') short_id = base36_encode(url_num) self.redis.set('url-target:' + short_id, url) self.redis.set('reverse-url:' + url, short_id) return short_id def render_template(self, template_name, **context): t = self.jinja_env.get_template(template_name) return Response(t.render(context), mimetype='text/html') def dispatch_request(self, request): adapter = self.url_map.bind_to_environ(request.environ) try: endpoint, values = adapter.match() return getattr(self, 'on_' + endpoint)(request, **values) except NotFound, e: return self.error_404() except HTTPException, e: return e def wsgi_app(self, environ, start_response): request = Request(environ) response = self.dispatch_request(request) return response(environ, start_response) def __call__(self, environ, start_response): return self.wsgi_app(environ, start_response) def create_app(redis_host='localhost', redis_port=6379, with_static=True): app = Shortly({ 'redis_host': redis_host, 'redis_port': redis_port }) if with_static: app.wsgi_app = SharedDataMiddleware(app.wsgi_app, { '/static': os.path.join(os.path.dirname(__file__), 'static') }) return app if __name__ == '__main__': from werkzeug.serving import run_simple app = create_app() run_simple('127.0.0.1', 5000, app, 
use_debugger=True, use_reloader=True) werkzeug-0.14.1/examples/shortly/static/000077500000000000000000000000001322225165500202545ustar00rootroot00000000000000werkzeug-0.14.1/examples/shortly/static/style.css000066400000000000000000000016051322225165500221300ustar00rootroot00000000000000body { background: #E8EFF0; margin: 0; padding: 0; } body, input { font-family: 'Helvetica Neue', Arial, sans-serif; font-weight: 300; font-size: 18px; } .box { width: 500px; margin: 60px auto; padding: 20px; background: white; box-shadow: 0 1px 4px #BED1D4; border-radius: 2px; } a { color: #11557C; } h1, h2 { margin: 0; color: #11557C; } h1 a { text-decoration: none; } h2 { font-weight: normal; font-size: 24px; } .tagline { color: #888; font-style: italic; margin: 0 0 20px 0; } .link div { overflow: auto; font-size: 0.8em; white-space: pre; padding: 4px 10px; margin: 5px 0; background: #E5EAF1; } dt { font-weight: normal; } .error { background: #E8EFF0; padding: 3px 8px; color: #11557C; font-size: 0.9em; border-radius: 2px; } .urlinput { width: 300px; } werkzeug-0.14.1/examples/shortly/templates/000077500000000000000000000000001322225165500207635ustar00rootroot00000000000000werkzeug-0.14.1/examples/shortly/templates/404.html000066400000000000000000000002661322225165500221640ustar00rootroot00000000000000{% extends "layout.html" %} {% block title %}Page Not Found{% endblock %} {% block body %}

Page Not Found

I am sorry, but no such page was found here. {% endblock %} werkzeug-0.14.1/examples/shortly/templates/layout.html000066400000000000000000000004411322225165500231650ustar00rootroot00000000000000 {% block title %}{% endblock %} | shortly

shortly

Shortly is a URL shortener written with Werkzeug {% block body %}{% endblock %}

werkzeug-0.14.1/examples/shortly/templates/new_url.html000066400000000000000000000006031322225165500233230ustar00rootroot00000000000000{% extends "layout.html" %} {% block title %}Create New Short URL{% endblock %} {% block body %}

Submit URL

{% if error %}

Error: {{ error }} {% endif %}

URL:

{% endblock %} werkzeug-0.14.1/examples/shortly/templates/short_link_details.html000066400000000000000000000006101322225165500255270ustar00rootroot00000000000000{% extends "layout.html" %} {% block title %}Details about /{{ short_id }}{% endblock %} {% block body %}

/{{ short_id }}

Target host:
{{ link_target|hostname }}
Full link
Click count:
{{ click_count }}
{% endblock %} werkzeug-0.14.1/examples/shorty/000077500000000000000000000000001322225165500166115ustar00rootroot00000000000000werkzeug-0.14.1/examples/shorty/__init__.py000066400000000000000000000000001322225165500207100ustar00rootroot00000000000000werkzeug-0.14.1/examples/shorty/application.py000066400000000000000000000026541322225165500214750ustar00rootroot00000000000000from sqlalchemy import create_engine from werkzeug.wrappers import Request from werkzeug.wsgi import ClosingIterator, SharedDataMiddleware from werkzeug.exceptions import HTTPException, NotFound from shorty.utils import STATIC_PATH, session, local, local_manager, \ metadata, url_map import shorty.models from shorty import views class Shorty(object): def __init__(self, db_uri): local.application = self self.database_engine = create_engine(db_uri, convert_unicode=True) self.dispatch = SharedDataMiddleware(self.dispatch, { '/static': STATIC_PATH }) def init_database(self): metadata.create_all(self.database_engine) def dispatch(self, environ, start_response): local.application = self request = Request(environ) local.url_adapter = adapter = url_map.bind_to_environ(environ) try: endpoint, values = adapter.match() handler = getattr(views, endpoint) response = handler(request, **values) except NotFound, e: response = views.not_found(request) response.status_code = 404 except HTTPException, e: response = e return ClosingIterator(response(environ, start_response), [session.remove, local_manager.cleanup]) def __call__(self, environ, start_response): return self.dispatch(environ, start_response) werkzeug-0.14.1/examples/shorty/models.py000066400000000000000000000017451322225165500204550ustar00rootroot00000000000000from datetime import datetime from sqlalchemy import Table, Column, String, Boolean, DateTime from sqlalchemy.orm import mapper from shorty.utils import session, metadata, url_for, get_random_uid url_table = Table('urls', metadata, Column('uid', String(140), primary_key=True), Column('target', String(500)), Column('added', DateTime), Column('public', Boolean) ) class URL(object): query = session.query_property() def __init__(self, target, public=True, uid=None, added=None): self.target = target self.public = public self.added = added or datetime.utcnow() if not uid: while 1: uid = get_random_uid() if not URL.query.get(uid): break self.uid = uid session.add(self) @property def short_url(self): return url_for('link', uid=self.uid, _external=True) def __repr__(self): return '' % self.uid mapper(URL, url_table) werkzeug-0.14.1/examples/shorty/static/000077500000000000000000000000001322225165500201005ustar00rootroot00000000000000werkzeug-0.14.1/examples/shorty/static/style.css000066400000000000000000000030611322225165500217520ustar00rootroot00000000000000body { background-color: #333; font-family: 'Lucida Sans', 'Verdana', sans-serif; font-size: 16px; margin: 3em 0 3em 0; padding: 0; text-align: center; } a { color: #0C4850; } a:hover { color: #1C818F; } h1 { width: 500px; background-color: #24C0CE; text-align: center; font-size: 3em; margin: 0 auto 0 auto; padding: 0; } h1 a { display: block; padding: 0.3em; color: #fff; text-decoration: none; } h1 a:hover { color: #ADEEF7; background-color: #0E8A96; } div.footer { margin: 0 auto 0 auto; font-size: 13px; text-align: right; padding: 10px; width: 480px; background-color: #004C63; color: white; } div.footer a { color: #A0E9FF; } div.body { margin: 0 auto 0 auto; padding: 20px; width: 460px; background-color: #98CE24; color: black; } div.body h2 { margin: 0 0 0.5em 0; text-align: 
center; } div.body input { margin: 0.2em 0 0.2em 0; font-family: 'Lucida Sans', 'Verdana', sans-serif; font-size: 20px; background-color: #CCEB98; color: black; } div.body #url { width: 400px; } div.body #alias { width: 300px; margin-right: 10px; } div.body #submit { width: 90px; } div.body p { margin: 0; padding: 0.2em 0 0.2em 0; } div.body ul { margin: 1em 0 1em 0; padding: 0; list-style: none; } div.error { margin: 1em 0 1em 0; border: 2px solid #AC0202; background-color: #9E0303; font-weight: bold; color: white; } div.pagination { font-size: 13px; } werkzeug-0.14.1/examples/shorty/templates/000077500000000000000000000000001322225165500206075ustar00rootroot00000000000000werkzeug-0.14.1/examples/shorty/templates/display.html000066400000000000000000000003011322225165500231340ustar00rootroot00000000000000{% extends 'layout.html' %} {% block body %}

Shortened URL

The URL {{ url.target|urlize(40, true) }} was shortened to {{ url.short_url|urlize }}.

{% endblock %} werkzeug-0.14.1/examples/shorty/templates/layout.html000066400000000000000000000010141322225165500230060ustar00rootroot00000000000000 Shorty

Shorty

{% block body %}{% endblock %}
werkzeug-0.14.1/examples/shorty/templates/list.html000066400000000000000000000013251322225165500224510ustar00rootroot00000000000000{% extends 'layout.html' %} {% block body %}

List of URLs

    {%- for url in pagination.entries %}
  • {{ url.uid|e }} » {{ url.target|urlize(38, true) }}
    {%- else %}
  • no URLs shortened yet
    {%- endfor %}
{% endblock %} werkzeug-0.14.1/examples/shorty/templates/new.html000066400000000000000000000011601322225165500222640ustar00rootroot00000000000000{% extends 'layout.html' %} {% block body %}

Create a Shorty-URL!

{% if error %}
{{ error }}
{% endif %}

Enter the URL you want to shorten

Optionally you can give the URL a memorable name

{# #}

{% endblock %} werkzeug-0.14.1/examples/shorty/templates/not_found.html000066400000000000000000000003471322225165500234740ustar00rootroot00000000000000{% extends 'layout.html' %} {% block body %}

Page Not Found

The page you have requested does not exist on this server. What about adding a new URL?

{% endblock %} werkzeug-0.14.1/examples/shorty/utils.py000066400000000000000000000047041322225165500203300ustar00rootroot00000000000000from os import path from urlparse import urlparse from random import sample, randrange from jinja2 import Environment, FileSystemLoader from werkzeug.local import Local, LocalManager from werkzeug.utils import cached_property from werkzeug.wrappers import Response from werkzeug.routing import Map, Rule from sqlalchemy import MetaData from sqlalchemy.orm import create_session, scoped_session TEMPLATE_PATH = path.join(path.dirname(__file__), 'templates') STATIC_PATH = path.join(path.dirname(__file__), 'static') ALLOWED_SCHEMES = frozenset(['http', 'https', 'ftp', 'ftps']) URL_CHARS = 'abcdefghijkmpqrstuvwxyzABCDEFGHIJKLMNPQRST23456789' local = Local() local_manager = LocalManager([local]) application = local('application') metadata = MetaData() url_map = Map([Rule('/static/', endpoint='static', build_only=True)]) session = scoped_session(lambda: create_session(application.database_engine, autocommit=False, autoflush=False)) jinja_env = Environment(loader=FileSystemLoader(TEMPLATE_PATH)) def expose(rule, **kw): def decorate(f): kw['endpoint'] = f.__name__ url_map.add(Rule(rule, **kw)) return f return decorate def url_for(endpoint, _external=False, **values): return local.url_adapter.build(endpoint, values, force_external=_external) jinja_env.globals['url_for'] = url_for def render_template(template, **context): return Response(jinja_env.get_template(template).render(**context), mimetype='text/html') def validate_url(url): return urlparse(url)[0] in ALLOWED_SCHEMES def get_random_uid(): return ''.join(sample(URL_CHARS, randrange(3, 9))) class Pagination(object): def __init__(self, query, per_page, page, endpoint): self.query = query self.per_page = per_page self.page = page self.endpoint = endpoint @cached_property def count(self): return self.query.count() @cached_property def entries(self): return self.query.offset((self.page - 1) * self.per_page) \ .limit(self.per_page).all() has_previous = property(lambda x: x.page > 1) has_next = property(lambda x: x.page < x.pages) previous = property(lambda x: url_for(x.endpoint, page=x.page - 1)) next = property(lambda x: url_for(x.endpoint, page=x.page + 1)) pages = property(lambda x: max(0, x.count - 1) // x.per_page + 1) werkzeug-0.14.1/examples/shorty/views.py000066400000000000000000000033241322225165500203220ustar00rootroot00000000000000from werkzeug.utils import redirect from werkzeug.exceptions import NotFound from shorty.utils import session, Pagination, render_template, expose, \ validate_url, url_for from shorty.models import URL @expose('/') def new(request): error = url = '' if request.method == 'POST': url = request.form.get('url') alias = request.form.get('alias') if not validate_url(url): error = "I'm sorry but you cannot shorten this URL." 
elif alias: if len(alias) > 140: error = 'Your alias is too long' elif '/' in alias: error = 'Your alias might not include a slash' elif URL.query.get(alias): error = 'The alias you have requested exists already' if not error: uid = URL(url, 'private' not in request.form, alias).uid session.commit() return redirect(url_for('display', uid=uid)) return render_template('new.html', error=error, url=url) @expose('/display/') def display(request, uid): url = URL.query.get(uid) if not url: raise NotFound() return render_template('display.html', url=url) @expose('/u/') def link(request, uid): url = URL.query.get(uid) if not url: raise NotFound() return redirect(url.target, 301) @expose('/list/', defaults={'page': 1}) @expose('/list/') def list(request, page): query = URL.query.filter_by(public=True) pagination = Pagination(query, 30, page, 'list') if pagination.page > 1 and not pagination.entries: raise NotFound() return render_template('list.html', pagination=pagination) def not_found(request): return render_template('not_found.html') werkzeug-0.14.1/examples/simplewiki/000077500000000000000000000000001322225165500174365ustar00rootroot00000000000000werkzeug-0.14.1/examples/simplewiki/__init__.py000066400000000000000000000005301322225165500215450ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ simplewiki ~~~~~~~~~~ Very simple wiki application based on Genshi, Werkzeug and SQLAlchemy. Additionally the creoleparser is used for the wiki markup. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ from simplewiki.application import SimpleWiki werkzeug-0.14.1/examples/simplewiki/actions.py000066400000000000000000000144341322225165500214560ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ simplewiki.actions ~~~~~~~~~~~~~~~~~~ The per page actions. The actions are defined in the URL with the `action` parameter and directly dispatched to the functions in this module. In the module the actions are prefixed with 'on_', so be careful not to name any other objects in the module with the same prefix unless you want to act them as actions. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ from difflib import unified_diff from simplewiki.utils import Response, generate_template, parse_creole, \ href, redirect, format_datetime from simplewiki.database import RevisionedPage, Page, Revision, session def on_show(request, page_name): """Displays the page the user requests.""" revision_id = request.args.get('rev', type=int) query = RevisionedPage.query.filter_by(name=page_name) if revision_id: query = query.filter_by(revision_id=revision_id) revision_requested = True else: query = query.order_by(RevisionedPage.revision_id.desc()) revision_requested = False page = query.first() if page is None: return page_missing(request, page_name, revision_requested) return Response(generate_template('action_show.html', page=page )) def on_edit(request, page_name): """Edit the current revision of a page.""" change_note = error = '' revision = Revision.query.filter( (Page.name == page_name) & (Page.page_id == Revision.page_id) ).order_by(Revision.revision_id.desc()).first() if revision is None: page = None else: page = revision.page if request.method == 'POST': text = request.form.get('text') if request.form.get('cancel') or \ revision and revision.text == text: return redirect(href(page.name)) elif not text: error = 'You cannot save empty revisions.' 
else: change_note = request.form.get('change_note', '') if page is None: page = Page(page_name) session.add(page) session.add(Revision(page, text, change_note)) session.commit() return redirect(href(page.name)) return Response(generate_template('action_edit.html', revision=revision, page=page, new=page is None, page_name=page_name, change_note=change_note, error=error )) def on_log(request, page_name): """Show the list of recent changes.""" page = Page.query.filter_by(name=page_name).first() if page is None: return page_missing(request, page_name, False) return Response(generate_template('action_log.html', page=page )) def on_diff(request, page_name): """Show the diff between two revisions.""" old = request.args.get('old', type=int) new = request.args.get('new', type=int) error = '' diff = page = old_rev = new_rev = None if not (old and new): error = 'No revisions specified.' else: revisions = dict((x.revision_id, x) for x in Revision.query.filter( (Revision.revision_id.in_((old, new))) & (Revision.page_id == Page.page_id) & (Page.name == page_name) )) if len(revisions) != 2: error = 'At least one of the revisions requested ' \ 'does not exist.' else: new_rev = revisions[new] old_rev = revisions[old] page = old_rev.page diff = unified_diff( (old_rev.text + '\n').splitlines(True), (new_rev.text + '\n').splitlines(True), page.name, page.name, format_datetime(old_rev.timestamp), format_datetime(new_rev.timestamp), 3 ) return Response(generate_template('action_diff.html', error=error, old_revision=old_rev, new_revision=new_rev, page=page, diff=diff )) def on_revert(request, page_name): """Revert an old revision.""" rev_id = request.args.get('rev', type=int) old_revision = page = None error = 'No such revision' if request.method == 'POST' and request.form.get('cancel'): return redirect(href(page_name)) if rev_id: old_revision = Revision.query.filter( (Revision.revision_id == rev_id) & (Revision.page_id == Page.page_id) & (Page.name == page_name) ).first() if old_revision: new_revision = Revision.query.filter( (Revision.page_id == Page.page_id) & (Page.name == page_name) ).order_by(Revision.revision_id.desc()).first() if old_revision == new_revision: error = 'You tried to revert the current active ' \ 'revision.' elif old_revision.text == new_revision.text: error = 'There are no changes between the current ' \ 'revision and the revision you want to ' \ 'restore.' 
else: error = '' page = old_revision.page if request.method == 'POST': change_note = request.form.get('change_note', '') change_note = 'revert' + (change_note and ': ' + change_note or '') session.add(Revision(page, old_revision.text, change_note)) session.commit() return redirect(href(page_name)) return Response(generate_template('action_revert.html', error=error, old_revision=old_revision, page=page )) def page_missing(request, page_name, revision_requested, protected=False): """Displayed if page or revision does not exist.""" return Response(generate_template('page_missing.html', page_name=page_name, revision_requested=revision_requested, protected=protected ), status=404) def missing_action(request, action): """Displayed if a user tried to access a action that does not exist.""" return Response(generate_template('missing_action.html', action=action ), status=404) werkzeug-0.14.1/examples/simplewiki/application.py000066400000000000000000000071361322225165500223220ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ simplewiki.application ~~~~~~~~~~~~~~~~~~~~~~ This module implements the wiki WSGI application which dispatches requests to specific wiki pages and actions. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ from os import path from sqlalchemy import create_engine from werkzeug.utils import redirect from werkzeug.wsgi import ClosingIterator, SharedDataMiddleware from simplewiki.utils import Request, Response, local, local_manager, href from simplewiki.database import session, metadata from simplewiki import actions from simplewiki.specialpages import pages, page_not_found #: path to shared data SHARED_DATA = path.join(path.dirname(__file__), 'shared') class SimpleWiki(object): """ Our central WSGI application. """ def __init__(self, database_uri): self.database_engine = create_engine(database_uri) # apply our middlewares. we apply the middlewars *inside* the # application and not outside of it so that we never lose the # reference to the `SimpleWiki` object. self._dispatch = SharedDataMiddleware(self.dispatch_request, { '/_shared': SHARED_DATA }) # free the context locals at the end of the request self._dispatch = local_manager.make_middleware(self._dispatch) def init_database(self): """Called from the management script to generate the db.""" metadata.create_all(bind=self.database_engine) def bind_to_context(self): """ Useful for the shell. Binds the application to the current active context. It's automatically called by the shell command. """ local.application = self def dispatch_request(self, environ, start_response): """Dispatch an incoming request.""" # set up all the stuff we want to have for this request. That is # creating a request object, propagating the application to the # current context and instanciating the database session. self.bind_to_context() request = Request(environ) request.bind_to_context() # get the current action from the url and normalize the page name # which is just the request path action_name = request.args.get('action') or 'show' page_name = u'_'.join([x for x in request.path.strip('/') .split() if x]) # redirect to the Main_Page if the user requested the index if not page_name: response = redirect(href('Main_Page')) # check special pages elif page_name.startswith('Special:'): if page_name[8:] not in pages: response = page_not_found(request, page_name) else: response = pages[page_name[8:]](request) # get the callback function for the requested action from the # action module. 
It's "on_" + the action name. If it doesn't # exists call the missing_action method from the same module. else: action = getattr(actions, 'on_' + action_name, None) if action is None: response = actions.missing_action(request, action_name) else: response = action(request, page_name) # make sure the session is removed properly return ClosingIterator(response(environ, start_response), session.remove) def __call__(self, environ, start_response): """Just forward a WSGI call to the first internal middleware.""" return self._dispatch(environ, start_response) werkzeug-0.14.1/examples/simplewiki/database.py000066400000000000000000000073741322225165500215670ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ simplewiki.database ~~~~~~~~~~~~~~~~~~~ The database. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ from datetime import datetime from sqlalchemy import Table, Column, Integer, String, DateTime, \ ForeignKey, MetaData, join from sqlalchemy.orm import relation, create_session, scoped_session, \ mapper from simplewiki.utils import application, local_manager, parse_creole # create a global metadata metadata = MetaData() def new_db_session(): """ This function creates a new session if there is no session yet for the current context. It looks up the application and if it finds one it creates a session bound to the active database engine in that application. If there is no application bound to the context it raises an exception. """ return create_session(application.database_engine, autoflush=True, autocommit=False) # and create a new global session factory. Calling this object gives # you the current active session session = scoped_session(new_db_session, local_manager.get_ident) # our database tables. page_table = Table('pages', metadata, Column('page_id', Integer, primary_key=True), Column('name', String(60), unique=True) ) revision_table = Table('revisions', metadata, Column('revision_id', Integer, primary_key=True), Column('page_id', Integer, ForeignKey('pages.page_id')), Column('timestamp', DateTime), Column('text', String), Column('change_note', String(200)) ) class Revision(object): """ Represents one revision of a page. This is useful for editing particular revision of pages or creating new revisions. It's also used for the diff system and the revision log. """ query = session.query_property() def __init__(self, page, text, change_note='', timestamp=None): if isinstance(page, (int, long)): self.page_id = page else: self.page = page self.text = text self.change_note = change_note self.timestamp = timestamp or datetime.utcnow() def render(self): """Render the page text into a genshi stream.""" return parse_creole(self.text) def __repr__(self): return '<%s %r:%r>' % ( self.__class__.__name__, self.page_id, self.revision_id ) class Page(object): """ Represents a simple page without any revisions. This is for example used in the page index where the page contents are not relevant. """ query = session.query_property() def __init__(self, name): self.name = name @property def title(self): return self.name.replace('_', ' ') def __repr__(self): return '<%s %r>' % (self.__class__.__name__, self.name) class RevisionedPage(Page, Revision): """ Represents a wiki page with a revision. Thanks to multiple inhertiance and the ability of SQLAlchemy to map to joins we can combine `Page` and `Revision` into one class here. 
""" query = session.query_property() def __init__(self): raise TypeError('cannot create WikiPage instances, use the Page and ' 'Revision classes for data manipulation.') def __repr__(self): return '<%s %r:%r>' % ( self.__class__.__name__, self.name, self.revision_id ) # setup mappers mapper(Revision, revision_table) mapper(Page, page_table, properties=dict( revisions=relation(Revision, backref='page', order_by=Revision.revision_id.desc()) )) mapper(RevisionedPage, join(page_table, revision_table), properties=dict( page_id=[page_table.c.page_id, revision_table.c.page_id], )) werkzeug-0.14.1/examples/simplewiki/shared/000077500000000000000000000000001322225165500207045ustar00rootroot00000000000000werkzeug-0.14.1/examples/simplewiki/shared/style.css000066400000000000000000000072101322225165500225560ustar00rootroot00000000000000body { font-family: 'Luxi Sans', 'Lucida Sans', 'Trebuchet MS', sans-serif; margin: 2em 1em 2em 1em; padding: 0; background: #1C0424; } a { color: #6A2F7E; } a:hover { color: #3D0F4D; } pre { border: 1px solid #ccc; background-color: white; font-family: 'Consolas', 'Monaco', 'Bitstream Vera Sans', monospace; font-size: 0.9em; padding: 0.3em; } table { border: 2px solid #ccc; border-collapse: collapse; } table td, table th { border: 1px solid #ccc; padding: 0.4em; } div.bodywrapper { margin: 0 auto 0 auto; max-width: 50em; background: #F1EBF3; border: 1px solid #4C1068; padding: 0; color: #111; } div.header { background-color: #320846; color: white; } div.header h1 { margin: 0; padding: 0.4em; font-size: 1.7em; } div.header h1 a { text-decoration: none; color: white; } div.header h1 a:hover { color: #6A2F7E; } div.contents { padding: 1em; margin: 0; border: 1px solid #3D0F4D; } div.footer { padding: 0.5em; background: #15031B; color: white; font-size: 0.8em; text-align: right; color: white; } div.contents h1, div.contents h2, div.contents h3, div.contents h4, div.contents h5 { margin: 0; padding: 0.3em 0 0.2em 0; color: #3D0F4D; } div.contents h1 { font-size: 1.7em; } div.contents h2 { font-size: 1.6em; } div.contents h3 { font-size: 1.4em; } div.contents h4 { font-size: 1.2em; } div.contents h5 { font-size: 1em; } div.contents p { margin: 0; padding: 0.3em 0 0.3em 0; line-height: 1.5em; } div.contents div.navigation { padding: 0 0 0.3em 0; margin: 0 0 0.3em 0; border-bottom: 1px solid #6A2F7E; font-size: 0.85em; color: #555; } div.contents div.navigation a { padding: 0 0.2em 0 0.2em; font-weight: bold; color: #555; } div.contents div.navigation a:hover { color: #6A2F7E; } div.contents div.navigation a.active { background-color: #ccc; text-decoration: none; } div.contents div.page_meta { font-size: 0.7em; color: #555; float: right; } textarea { width: 99%; font-family: 'Consolas', 'Monaco', 'Bitstream Vera Sans', monospace; font-size: 0.9em; padding: 0.3em; margin: 0.5em 0 0.5em 0; } input { font-family: 'Luxi Sans', 'Lucida Sans', 'Trebuchet MS', sans-serif; } table.revisions, table.changes { border-collapse: collapse; border: 1px solid #6A2F7E; background: #fdfdfd; width: 100%; margin: 1em 0 0.5em 0; } table.revisions th, table.changes th { background-color: #6A2F7E; color: white; padding: 0.1em 0.6em 0.1em 0.6em; font-size: 0.8em; border: none; } table.revisions td, table.changes td { padding: 0.2em 0.5em 0.2em 0.5em; font-size: 0.9em; border: none; } table.revisions .timestamp, table.changes .timestamp { text-align: left; width: 10em; } table.revisions td.timestamp, table.changes td.timestamp { color: #444; } table.revisions .change_note, table.changes .change_note { 
text-align: left; } table.revisions td.change_note, table.changes td.change_note { font-style: italic; } table.revisions th.diff input { background-color: #3D0F4D; color: white; border: 1px solid #1C0424; } table.revisions .diff { width: 5em; text-align: right; } table.revisions .actions { width: 8em; text-align: left; } table.revisions td.actions { font-size: 0.75em; } table.revisions tr.odd, table.changes tr.odd { background-color: #f7f7f7; } pre.udiff { overflow: auto; font-size: 0.75em; } div.pagination { font-size: 0.9em; padding: 0.5em 0 0.5em 0; text-align: center; } werkzeug-0.14.1/examples/simplewiki/specialpages.py000066400000000000000000000025271322225165500224560ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ simplewiki.specialpages ~~~~~~~~~~~~~~~~~~~~~~~ This module contains special pages such as the recent changes page. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ from simplewiki.utils import Response, Pagination, generate_template, href from simplewiki.database import RevisionedPage, Page from simplewiki.actions import page_missing def page_index(request): """Index of all pages.""" letters = {} for page in Page.query.order_by(Page.name): letters.setdefault(page.name.capitalize()[0], []).append(page) return Response(generate_template('page_index.html', letters=sorted(letters.items()) )) def recent_changes(request): """Display the recent changes.""" page = max(1, request.args.get('page', type=int)) query = RevisionedPage.query \ .order_by(RevisionedPage.revision_id.desc()) return Response(generate_template('recent_changes.html', pagination=Pagination(query, 20, page, 'Special:Recent_Changes') )) def page_not_found(request, page_name): """ Displays an error message if a user tried to access a not existing special page. """ return page_missing(request, page_name, True) pages = { 'Index': page_index, 'Recent_Changes': recent_changes } werkzeug-0.14.1/examples/simplewiki/templates/000077500000000000000000000000001322225165500214345ustar00rootroot00000000000000werkzeug-0.14.1/examples/simplewiki/templates/action_diff.html000066400000000000000000000016561322225165500245770ustar00rootroot00000000000000 View Diff

Diff for “${page.title}”

Below you can see the differences between the revision from ${format_datetime(old_revision.timestamp)} and the revision from ${format_datetime(new_revision.timestamp)} in unified diff format.

${diff}

Cannot Display Diff

${error}

werkzeug-0.14.1/examples/simplewiki/templates/action_edit.html000066400000000000000000000017341322225165500246110ustar00rootroot00000000000000 ${new and 'Create' or 'Edit'} Page

${new and 'Create' or 'Edit'} “${page.title or page_name}”

You can now ${new and 'create' or 'modify'} the page contents. To format your text you can use creole markup.

${error}

werkzeug-0.14.1/examples/simplewiki/templates/action_log.html000066400000000000000000000033521322225165500244430ustar00rootroot00000000000000 Revisions for “${page.title}”

Revisions for “${page.title}”

In this list you can see all the revisions of the requested page.

Date Change Note Actions
${format_datetime(revision.timestamp)} ${revision.change_note} show | revert
werkzeug-0.14.1/examples/simplewiki/templates/action_revert.html000066400000000000000000000020641322225165500251700ustar00rootroot00000000000000 Revert Old Revision

Revert Old Revision of “${page.title}”

If you want to restore the old revision from ${format_datetime(old_revision.timestamp)}, enter your change note and click “Revert”.

Cannot Revert

${error}

werkzeug-0.14.1/examples/simplewiki/templates/action_show.html000066400000000000000000000004541322225165500246420ustar00rootroot00000000000000 ${page.title} ${page.render()} werkzeug-0.14.1/examples/simplewiki/templates/layout.html000066400000000000000000000035171322225165500236450ustar00rootroot00000000000000 <py:if test="title">${title} — </py:if>SimpleWiki ${select('*[local-name()!="title"]')}
This revision was created on ${format_datetime(page.timestamp)}.
${select('*|text()')}
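The layout template above is shared by all of the action templates in this directory. The sketch below shows how a view function renders one of them, mirroring the pattern used in simplewiki/specialpages.py; the `show_page` name is made up, and the real action handlers live in the actions module that is not part of this excerpt.

from simplewiki.utils import generate_template, Response

def show_page(request, page):
    # generate_template() loads the Genshi template and returns a stream;
    # the Response class from utils.py renders that stream to HTML itself.
    return Response(generate_template('action_show.html', page=page))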
werkzeug-0.14.1/examples/simplewiki/templates/macros.xml000066400000000000000000000013141322225165500234410ustar00rootroot00000000000000
werkzeug-0.14.1/examples/simplewiki/templates/missing_action.html000066400000000000000000000007021322225165500253270ustar00rootroot00000000000000 Action Not Found

Action “${action}” Not Found

The requested action does not exist.

Try to access the same URL without parameters.

werkzeug-0.14.1/examples/simplewiki/templates/page_index.html000066400000000000000000000010161322225165500244230ustar00rootroot00000000000000 Index

Index

${letter}

werkzeug-0.14.1/examples/simplewiki/templates/page_missing.html000066400000000000000000000014051322225165500247670ustar00rootroot00000000000000 Page Not Found

Page Not Found

The page you requested does not exist.

It also could be that there is no such revision of that page.

Feel free to create such a page.

Although this page does not exist by now you cannot create it because the system protected the page name for future use.

werkzeug-0.14.1/examples/simplewiki/templates/recent_changes.html000066400000000000000000000015511322225165500252740ustar00rootroot00000000000000 Recent Changes
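The last paragraph of the page_missing template covers page names that are reserved for special pages. Below is a hypothetical sketch of how such names can be resolved against the `pages` dict from simplewiki/specialpages.py (shown earlier); the actual dispatch lives in the application module, which is not part of this excerpt.

from simplewiki.specialpages import pages, page_not_found

def dispatch_special(request, page_name):
    # Known special pages ('Index', 'Recent_Changes') get their handler;
    # unknown names fall back to the "protected for future use" message above.
    handler = pages.get(page_name)
    if handler is None:
        return page_not_found(request, page_name)
    return handler(request)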

Recent Changes

Date Page Change Note
${format_datetime(entry.timestamp)} ${entry.title} ${entry.change_note}
${render_pagination(pagination)} werkzeug-0.14.1/examples/simplewiki/utils.py000066400000000000000000000101651322225165500211530ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ simplewiki.utils ~~~~~~~~~~~~~~~~ This module implements various utility functions and classes used all over the application. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ import difflib import creoleparser from os import path from genshi import Stream from genshi.template import TemplateLoader from werkzeug.local import Local, LocalManager from werkzeug.urls import url_encode, url_quote from werkzeug.utils import cached_property, redirect from werkzeug.wrappers import BaseRequest, BaseResponse # calculate the path to the templates an create the template loader TEMPLATE_PATH = path.join(path.dirname(__file__), 'templates') template_loader = TemplateLoader(TEMPLATE_PATH, auto_reload=True, variable_lookup='lenient') # context locals. these two objects are use by the application to # bind objects to the current context. A context is defined as the # current thread and the current greenlet if there is greenlet support. local = Local() local_manager = LocalManager([local]) request = local('request') application = local('application') # create a new creole parser creole_parser = creoleparser.Parser( dialect=creoleparser.create_dialect(creoleparser.creole10_base, wiki_links_base_url='', wiki_links_path_func=lambda page_name: href(page_name), wiki_links_space_char='_', no_wiki_monospace=True ), method='html' ) def generate_template(template_name, **context): """Load and generate a template.""" context.update( href=href, format_datetime=format_datetime ) return template_loader.load(template_name).generate(**context) def parse_creole(markup): """Parse some creole markup and create a genshi stream.""" return creole_parser.generate(markup) def href(*args, **kw): """ Simple function for URL generation. Position arguments are used for the URL path and keyword arguments are used for the url parameters. """ result = [(request and request.script_root or '') + '/'] for idx, arg in enumerate(args): result.append((idx and '/' or '') + url_quote(arg)) if kw: result.append('?' + url_encode(kw)) return ''.join(result) def format_datetime(obj): """Format a datetime object.""" return obj.strftime('%Y-%m-%d %H:%M') class Request(BaseRequest): """ Simple request subclass that allows to bind the object to the current context. """ def bind_to_context(self): local.request = self class Response(BaseResponse): """ Encapsulates a WSGI response. Unlike the default response object werkzeug provides, this accepts a genshi stream and will automatically render it to html. This makes it possible to switch to xhtml or html5 easily. """ default_mimetype = 'text/html' def __init__(self, response=None, status=200, headers=None, mimetype=None, content_type=None): if isinstance(response, Stream): response = response.render('html', encoding=None, doctype='html') BaseResponse.__init__(self, response, status, headers, mimetype, content_type) class Pagination(object): """ Paginate a SQLAlchemy query object. 
""" def __init__(self, query, per_page, page, link): self.query = query self.per_page = per_page self.page = page self.link = link self._count = None @cached_property def entries(self): return self.query.offset((self.page - 1) * self.per_page) \ .limit(self.per_page).all() @property def has_previous(self): return self.page > 1 @property def has_next(self): return self.page < self.pages @property def previous(self): return href(self.link, page=self.page - 1) @property def next(self): return href(self.link, page=self.page + 1) @cached_property def count(self): return self.query.count() @property def pages(self): return max(0, self.count - 1) // self.per_page + 1 werkzeug-0.14.1/examples/upload.py000066400000000000000000000023711322225165500171220ustar00rootroot00000000000000#!/usr/bin/env python """ Simple Upload Application ~~~~~~~~~~~~~~~~~~~~~~~~~ All uploaded files are directly send back to the client. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ from werkzeug.serving import run_simple from werkzeug.wrappers import BaseRequest, BaseResponse from werkzeug.wsgi import wrap_file def view_file(req): if not 'uploaded_file' in req.files: return BaseResponse('no file uploaded') f = req.files['uploaded_file'] return BaseResponse(wrap_file(req.environ, f), mimetype=f.content_type, direct_passthrough=True) def upload_file(req): return BaseResponse('''

Upload File

''', mimetype='text/html') def application(environ, start_response): req = BaseRequest(environ) if req.method == 'POST': resp = view_file(req) else: resp = upload_file(req) return resp(environ, start_response) if __name__ == '__main__': run_simple('localhost', 5000, application, use_debugger=True) werkzeug-0.14.1/examples/webpylike/000077500000000000000000000000001322225165500172545ustar00rootroot00000000000000werkzeug-0.14.1/examples/webpylike/example.py000066400000000000000000000010751322225165500212640ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ Example application based on weblikepy ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The application from th web.py tutorial. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ from webpylike import WebPyApp, View, Response urls = ( '/', 'index', '/about', 'about' ) class index(View): def GET(self): return Response('Hello World') class about(View): def GET(self): return Response('This is the about page') app = WebPyApp(urls, globals()) werkzeug-0.14.1/examples/webpylike/webpylike.py000066400000000000000000000036171322225165500216300ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ webpylike ~~~~~~~~~ This module implements web.py like dispatching. What this module does not implement is a stream system that hooks into sys.stdout like web.py provides. I consider this bad design. :copyright: (c) 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ import re from werkzeug.wrappers import BaseRequest, BaseResponse from werkzeug.exceptions import HTTPException, MethodNotAllowed, \ NotImplemented, NotFound class Request(BaseRequest): """Encapsulates a request.""" class Response(BaseResponse): """Encapsulates a response.""" class View(object): """Baseclass for our views.""" def __init__(self, app, req): self.app = app self.req = req def GET(self): raise MethodNotAllowed() POST = DELETE = PUT = GET def HEAD(self): return self.GET() class WebPyApp(object): """ An interface to a web.py like application. 
It works like the web.run function in web.py """ def __init__(self, urls, views): self.urls = [(re.compile('^%s$' % urls[i]), urls[i + 1]) for i in xrange(0, len(urls), 2)] self.views = views def __call__(self, environ, start_response): try: req = Request(environ) for regex, view in self.urls: match = regex.match(req.path) if match is not None: view = self.views[view](self, req) if req.method not in ('GET', 'HEAD', 'POST', 'DELETE', 'PUT'): raise NotImplemented() resp = getattr(view, req.method)(*match.groups()) break else: raise NotFound() except HTTPException, e: resp = e return resp(environ, start_response) werkzeug-0.14.1/scripts/000077500000000000000000000000001322225165500151325ustar00rootroot00000000000000werkzeug-0.14.1/scripts/make-release.py000066400000000000000000000075061322225165500200470ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- from __future__ import print_function import os import re import sys from datetime import date, datetime from subprocess import PIPE, Popen _date_strip_re = re.compile(r'(?<=\d)(st|nd|rd|th)') def parse_changelog(): with open('CHANGES.rst') as f: lineiter = iter(f) for line in lineiter: match = re.search('^Version\s+(.*)', line.strip()) if match is None: continue version = match.group(1).strip() if next(lineiter).count('-') != len(match.group(0)): continue while 1: change_info = next(lineiter).strip() if change_info: break match = re.search( r'released on (\w+\s+\d+\w+\s+\d+)(?:, codename (.*))?', change_info, flags=re.IGNORECASE ) if match is None: continue datestr, codename = match.groups() return version, parse_date(datestr), codename def bump_version(version): try: parts = [int(i) for i in version.split('.')] except ValueError: fail('Current version is not numeric') parts[-1] += 1 return '.'.join(map(str, parts)) def parse_date(string): string = _date_strip_re.sub('', string) return datetime.strptime(string, '%B %d %Y') def set_filename_version(filename, version_number, pattern): changed = [] def inject_version(match): before, old, after = match.groups() changed.append(True) return before + version_number + after with open(filename) as f: contents = re.sub( r"^(\s*%s\s*=\s*')(.+?)(')" % pattern, inject_version, f.read(), flags=re.DOTALL | re.MULTILINE ) if not changed: fail('Could not find %s in %s', pattern, filename) with open(filename, 'w') as f: f.write(contents) def set_init_version(version): info('Setting __init__.py version to %s', version) set_filename_version('werkzeug/__init__.py', version, '__version__') def build(): cmd = [sys.executable, 'setup.py', 'sdist', 'bdist_wheel'] Popen(cmd).wait() def fail(message, *args): print('Error:', message % args, file=sys.stderr) sys.exit(1) def info(message, *args): print(message % args, file=sys.stderr) def get_git_tags(): return set( Popen(['git', 'tag'], stdout=PIPE).communicate()[0].splitlines() ) def git_is_clean(): return Popen(['git', 'diff', '--quiet']).wait() == 0 def make_git_commit(message, *args): message = message % args Popen(['git', 'commit', '-am', message]).wait() def make_git_tag(tag): info('Tagging "%s"', tag) Popen(['git', 'tag', tag]).wait() def main(): os.chdir(os.path.join(os.path.dirname(__file__), '..')) rv = parse_changelog() if rv is None: fail('Could not parse changelog') version, release_date, codename = rv dev_version = bump_version(version) + '.dev' info( 'Releasing %s (codename %s, release date %s)', version, codename, release_date.strftime('%d/%m/%Y') ) tags = get_git_tags() if version in tags: fail('Version "%s" is already tagged', 
version) if release_date.date() != date.today(): fail( 'Release date is not today (%s != %s)', release_date.date(), date.today() ) if not git_is_clean(): fail('You have uncommitted changes in git') try: import wheel # noqa: F401 except ImportError: fail('You need to install the wheel package.') set_init_version(version) make_git_commit('Bump version number to %s', version) make_git_tag(version) build() set_init_version(dev_version) if __name__ == '__main__': main() werkzeug-0.14.1/setup.cfg000066400000000000000000000005141322225165500152640ustar00rootroot00000000000000[tool:pytest] minversion = 3.0 testpaths = tests norecursedirs = tests/hypothesis filterwarnings = ignore::requests.packages.urllib3.exceptions.InsecureRequestWarning [bdist_wheel] universal = 1 [metadata] license_file = LICENSE [flake8] ignore = E126,E241,E272,E305,E402,E731,W503 exclude=.tox,examples,docs max-line-length=100 werkzeug-0.14.1/setup.py000077500000000000000000000033561322225165500151670ustar00rootroot00000000000000#!/usr/bin/env python import io import re from setuptools import find_packages, setup with io.open('README.rst', 'rt', encoding='utf8') as f: readme = f.read() with io.open('werkzeug/__init__.py', 'rt', encoding='utf8') as f: version = re.search( r'__version__ = \'(.*?)\'', f.read(), re.M).group(1) setup( name='Werkzeug', version=version, url='https://www.palletsprojects.org/p/werkzeug/', license='BSD', author='Armin Ronacher', author_email='armin.ronacher@active-4.com', description='The comprehensive WSGI web application library.', long_description=readme, classifiers=[ 'Development Status :: 5 - Production/Stable', 'Environment :: Web Environment', 'Intended Audience :: Developers', 'License :: OSI Approved :: BSD License', 'Operating System :: OS Independent', 'Programming Language :: Python', 'Programming Language :: Python :: 2', 'Programming Language :: Python :: 2.6', 'Programming Language :: Python :: 2.7', 'Programming Language :: Python :: 3', 'Programming Language :: Python :: 3.3', 'Programming Language :: Python :: 3.4', 'Programming Language :: Python :: 3.5', 'Programming Language :: Python :: 3.6', 'Topic :: Internet :: WWW/HTTP :: Dynamic Content', 'Topic :: Software Development :: Libraries :: Python Modules', ], packages=find_packages(exclude=('tests*',)), extras_require={ 'watchdog': ['watchdog'], 'termcolor': ['termcolor'], 'dev': [ 'pytest', 'coverage', 'tox', 'sphinx', ], }, include_package_data=True, zip_safe=False, platforms='any' ) werkzeug-0.14.1/tests/000077500000000000000000000000001322225165500146055ustar00rootroot00000000000000werkzeug-0.14.1/tests/__init__.py000066400000000000000000000007341322225165500167220ustar00rootroot00000000000000def strict_eq(x, y): """Equality test bypassing the implicit string conversion in Python 2.""" __tracebackhide__ = True assert x == y, (x, y) assert issubclass(type(x), type(y)) or issubclass(type(y), type(x)) if isinstance(x, dict) and isinstance(y, dict): x = sorted(x.items()) y = sorted(y.items()) elif isinstance(x, set) and isinstance(y, set): x = sorted(x) y = sorted(y) assert repr(x) == repr(y), (x, y) werkzeug-0.14.1/tests/conftest.py000066400000000000000000000116471322225165500170150ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.conftest ~~~~~~~~~~~~~~ :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
""" from __future__ import with_statement, print_function import os import signal import sys import textwrap import time import requests import pytest from werkzeug import serving from werkzeug.utils import cached_property from werkzeug._compat import to_bytes from itertools import count try: __import__('pytest_xprocess') except ImportError: @pytest.fixture(scope='session') def subprocess(): pytest.skip('pytest-xprocess not installed.') else: @pytest.fixture(scope='session') def subprocess(xprocess): return xprocess port_generator = count(13220) def _patch_reloader_loop(): def f(x): print('reloader loop finished') # Need to flush for some reason even though xprocess opens the # subprocess' stdout in unbuffered mode. # flush=True makes the test fail on py2, so flush manually sys.stdout.flush() return time.sleep(x) import werkzeug._reloader werkzeug._reloader.ReloaderLoop._sleep = staticmethod(f) def _get_pid_middleware(f): def inner(environ, start_response): if environ['PATH_INFO'] == '/_getpid': start_response('200 OK', [('Content-Type', 'text/plain')]) return [to_bytes(str(os.getpid()))] return f(environ, start_response) return inner def _dev_server(): _patch_reloader_loop() sys.path.insert(0, sys.argv[1]) import testsuite_app app = _get_pid_middleware(testsuite_app.app) serving.run_simple(hostname='localhost', application=app, **testsuite_app.kwargs) class _ServerInfo(object): xprocess = None addr = None url = None port = None last_pid = None def __init__(self, xprocess, addr, url, port): self.xprocess = xprocess self.addr = addr self.url = url self.port = port @cached_property def logfile(self): return self.xprocess.getinfo('dev_server').logpath.open() def request_pid(self): for i in range(20): time.sleep(0.1 * i) try: self.last_pid = int(requests.get(self.url + '/_getpid', verify=False).text) return self.last_pid except Exception as e: # urllib also raises socketerrors print(self.url) print(e) def wait_for_reloader(self): old_pid = self.last_pid for i in range(20): time.sleep(0.1 * i) new_pid = self.request_pid() if not new_pid: raise RuntimeError('Server is down.') if self.request_pid() != old_pid: return raise RuntimeError('Server did not reload.') def wait_for_reloader_loop(self): for i in range(20): time.sleep(0.1 * i) line = self.logfile.readline() if 'reloader loop finished' in line: return @pytest.fixture def dev_server(tmpdir, subprocess, request, monkeypatch): '''Run werkzeug.serving.run_simple in its own process. :param application: String for the module that will be created. The module must have a global ``app`` object, a ``kwargs`` dict is also available whose values will be passed to ``run_simple``. 
''' def run_dev_server(application): app_pkg = tmpdir.mkdir('testsuite_app') appfile = app_pkg.join('__init__.py') port = next(port_generator) appfile.write('\n\n'.join(( 'kwargs = dict(port=%d)' % port, textwrap.dedent(application) ))) monkeypatch.delitem(sys.modules, 'testsuite_app', raising=False) monkeypatch.syspath_prepend(str(tmpdir)) import testsuite_app port = testsuite_app.kwargs['port'] if testsuite_app.kwargs.get('ssl_context', None): url_base = 'https://localhost:{0}'.format(port) else: url_base = 'http://localhost:{0}'.format(port) info = _ServerInfo( subprocess, 'localhost:{0}'.format(port), url_base, port ) def preparefunc(cwd): args = [sys.executable, __file__, str(tmpdir)] return lambda: 'pid=%s' % info.request_pid(), args subprocess.ensure('dev_server', preparefunc, restart=True) def teardown(): # Killing the process group that runs the server, not just the # parent process attached. xprocess is confused about Werkzeug's # reloader and won't help here. pid = info.request_pid() if pid: os.killpg(os.getpgid(pid), signal.SIGTERM) request.addfinalizer(teardown) return info return run_dev_server if __name__ == '__main__': _dev_server() werkzeug-0.14.1/tests/contrib/000077500000000000000000000000001322225165500162455ustar00rootroot00000000000000werkzeug-0.14.1/tests/contrib/__init__.py000066400000000000000000000000001322225165500203440ustar00rootroot00000000000000werkzeug-0.14.1/tests/contrib/cache/000077500000000000000000000000001322225165500173105ustar00rootroot00000000000000werkzeug-0.14.1/tests/contrib/cache/conftest.py000066400000000000000000000014271322225165500215130ustar00rootroot00000000000000import os import pytest # build the path to the uwsgi marker file # when running in tox, this will be relative to the tox env filename = os.path.join( os.environ.get('TOX_ENVTMPDIR', ''), 'test_uwsgi_failed' ) @pytest.hookimpl(tryfirst=True, hookwrapper=True) def pytest_runtest_makereport(item, call): """``uwsgi --pyrun`` doesn't pass on the exit code when ``pytest`` fails, so Tox thinks the tests passed. For UWSGI tests, create a file to mark what tests fail. The uwsgi Tox env has a command to read this file and exit appropriately. """ outcome = yield report = outcome.get_result() if item.cls.__name__ != 'TestUWSGICache': return if report.failed: with open(filename, 'a') as f: f.write(item.name + '\n') werkzeug-0.14.1/tests/contrib/cache/test_cache.py000066400000000000000000000225561322225165500217760ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.cache ~~~~~~~~~~~ Tests the cache system :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. 
""" import errno import pytest from werkzeug._compat import text_type from werkzeug.contrib import cache try: import redis except ImportError: redis = None try: import pylibmc as memcache except ImportError: try: from google.appengine.api import memcache except ImportError: try: import memcache except ImportError: memcache = None class CacheTests(object): _can_use_fast_sleep = True _guaranteed_deletes = False @pytest.fixture def fast_sleep(self, monkeypatch): if self._can_use_fast_sleep: def sleep(delta): orig_time = cache.time monkeypatch.setattr(cache, 'time', lambda: orig_time() + delta) return sleep else: import time return time.sleep @pytest.fixture def make_cache(self): """Return a cache class or factory.""" raise NotImplementedError() @pytest.fixture def c(self, make_cache): """Return a cache instance.""" return make_cache() def test_generic_get_dict(self, c): assert c.set('a', 'a') assert c.set('b', 'b') d = c.get_dict('a', 'b') assert 'a' in d assert 'a' == d['a'] assert 'b' in d assert 'b' == d['b'] def test_generic_set_get(self, c): for i in range(3): assert c.set(str(i), i * i) for i in range(3): result = c.get(str(i)) assert result == i * i, result def test_generic_get_set(self, c): assert c.set('foo', ['bar']) assert c.get('foo') == ['bar'] def test_generic_get_many(self, c): assert c.set('foo', ['bar']) assert c.set('spam', 'eggs') assert c.get_many('foo', 'spam') == [['bar'], 'eggs'] def test_generic_set_many(self, c): assert c.set_many({'foo': 'bar', 'spam': ['eggs']}) assert c.get('foo') == 'bar' assert c.get('spam') == ['eggs'] def test_generic_add(self, c): # sanity check that add() works like set() assert c.add('foo', 'bar') assert c.get('foo') == 'bar' assert not c.add('foo', 'qux') assert c.get('foo') == 'bar' def test_generic_delete(self, c): assert c.add('foo', 'bar') assert c.get('foo') == 'bar' assert c.delete('foo') assert c.get('foo') is None def test_generic_delete_many(self, c): assert c.add('foo', 'bar') assert c.add('spam', 'eggs') assert c.delete_many('foo', 'spam') assert c.get('foo') is None assert c.get('spam') is None def test_generic_inc_dec(self, c): assert c.set('foo', 1) assert c.inc('foo') == c.get('foo') == 2 assert c.dec('foo') == c.get('foo') == 1 assert c.delete('foo') def test_generic_true_false(self, c): assert c.set('foo', True) assert c.get('foo') in (True, 1) assert c.set('bar', False) assert c.get('bar') in (False, 0) def test_generic_timeout(self, c, fast_sleep): c.set('foo', 'bar', 0) assert c.get('foo') == 'bar' c.set('baz', 'qux', 1) assert c.get('baz') == 'qux' fast_sleep(3) # timeout of zero means no timeout assert c.get('foo') == 'bar' if self._guaranteed_deletes: assert c.get('baz') is None def test_generic_has(self, c): assert c.has('foo') in (False, 0) assert c.has('spam') in (False, 0) assert c.set('foo', 'bar') assert c.has('foo') in (True, 1) assert c.has('spam') in (False, 0) c.delete('foo') assert c.has('foo') in (False, 0) assert c.has('spam') in (False, 0) class TestSimpleCache(CacheTests): _guaranteed_deletes = True @pytest.fixture def make_cache(self): return cache.SimpleCache def test_purge(self): c = cache.SimpleCache(threshold=2) c.set('a', 'a') c.set('b', 'b') c.set('c', 'c') c.set('d', 'd') # Cache purges old items *before* it sets new ones. 
assert len(c._cache) == 3 class TestFileSystemCache(CacheTests): _guaranteed_deletes = True @pytest.fixture def make_cache(self, tmpdir): return lambda **kw: cache.FileSystemCache(cache_dir=str(tmpdir), **kw) def test_filesystemcache_prune(self, make_cache): THRESHOLD = 13 c = make_cache(threshold=THRESHOLD) for i in range(2 * THRESHOLD): assert c.set(str(i), i) nof_cache_files = c.get(c._fs_count_file) assert nof_cache_files <= THRESHOLD def test_filesystemcache_clear(self, c): assert c.set('foo', 'bar') nof_cache_files = c.get(c._fs_count_file) assert nof_cache_files == 1 assert c.clear() nof_cache_files = c.get(c._fs_count_file) assert nof_cache_files == 0 cache_files = c._list_dir() assert len(cache_files) == 0 def test_no_threshold(self, make_cache): THRESHOLD = 0 c = make_cache(threshold=THRESHOLD) for i in range(10): assert c.set(str(i), i) cache_files = c._list_dir() assert len(cache_files) == 10 # File count is not maintained with threshold = 0 nof_cache_files = c.get(c._fs_count_file) assert nof_cache_files is None def test_count_file_accuracy(self, c): assert c.set('foo', 'bar') assert c.set('moo', 'car') c.add('moo', 'tar') assert c.get(c._fs_count_file) == 2 assert c.add('too', 'far') assert c.get(c._fs_count_file) == 3 assert c.delete('moo') assert c.get(c._fs_count_file) == 2 assert c.clear() assert c.get(c._fs_count_file) == 0 # don't use pytest.mark.skipif on subclasses # https://bitbucket.org/hpk42/pytest/issue/568 # skip happens in requirements fixture instead class TestRedisCache(CacheTests): _can_use_fast_sleep = False _guaranteed_deletes = True @pytest.fixture(scope='class', autouse=True) def requirements(self, subprocess): if redis is None: pytest.skip('Python package "redis" is not installed.') def prepare(cwd): return '[Rr]eady to accept connections', ['redis-server'] try: subprocess.ensure('redis_server', prepare) except IOError as e: # xprocess raises FileNotFoundError if e.errno == errno.ENOENT: pytest.skip('Redis is not installed.') else: raise yield subprocess.getinfo('redis_server').terminate() @pytest.fixture(params=(None, False, True)) def make_cache(self, request): if request.param is None: host = 'localhost' elif request.param: host = redis.StrictRedis() else: host = redis.Redis() c = cache.RedisCache( host=host, key_prefix='werkzeug-test-case:', ) yield lambda: c c.clear() def test_compat(self, c): assert c._client.set(c.key_prefix + 'foo', 'Awesome') assert c.get('foo') == b'Awesome' assert c._client.set(c.key_prefix + 'foo', '42') assert c.get('foo') == 42 def test_empty_host(self): with pytest.raises(ValueError) as exc_info: cache.RedisCache(host=None) assert text_type(exc_info.value) == 'RedisCache host parameter may not be None' class TestMemcachedCache(CacheTests): _can_use_fast_sleep = False @pytest.fixture(scope='class', autouse=True) def requirements(self, subprocess): if memcache is None: pytest.skip( 'Python package for memcache is not installed. Need one of ' '"pylibmc", "google.appengine", or "memcache".' 
) def prepare(cwd): return '', ['memcached'] try: subprocess.ensure('memcached', prepare) except IOError as e: # xprocess raises FileNotFoundError if e.errno == errno.ENOENT: pytest.skip('Memcached is not installed.') else: raise yield subprocess.getinfo('memcached').terminate() @pytest.fixture def make_cache(self): c = cache.MemcachedCache(key_prefix='werkzeug-test-case:') yield lambda: c c.clear() def test_compat(self, c): assert c._client.set(c.key_prefix + 'foo', 'bar') assert c.get('foo') == 'bar' def test_huge_timeouts(self, c): # Timeouts greater than epoch are interpreted as POSIX timestamps # (i.e. not relative to now, but relative to epoch) epoch = 2592000 c.set('foo', 'bar', epoch + 100) assert c.get('foo') == 'bar' class TestUWSGICache(CacheTests): _can_use_fast_sleep = False @pytest.fixture(scope='class', autouse=True) def requirements(self): try: import uwsgi # NOQA except ImportError: pytest.skip( 'Python "uwsgi" package is only avaialable when running ' 'inside uWSGI.' ) @pytest.fixture def make_cache(self): c = cache.UWSGICache(cache='werkzeugtest') yield lambda: c c.clear() werkzeug-0.14.1/tests/contrib/test_atom.py000066400000000000000000000062621322225165500206240ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.atom ~~~~~~~~~~ Tests the cache system :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. """ import datetime import pytest from werkzeug.contrib.atom import format_iso8601, AtomFeed, FeedEntry class TestAtomFeed(object): """ Testcase for the `AtomFeed` class """ def test_atom_no_args(self): with pytest.raises(ValueError): AtomFeed() def test_atom_title_no_id(self): with pytest.raises(ValueError): AtomFeed(title='test_title') def test_atom_add_one(self): a = AtomFeed(title='test_title', id=1) f = FeedEntry( title='test_title', id=1, updated=datetime.datetime.now()) assert len(a.entries) == 0 a.add(f) assert len(a.entries) == 1 def test_atom_add_one_kwargs(self): a = AtomFeed(title='test_title', id=1) assert len(a.entries) == 0 a.add(title='test_title', id=1, updated=datetime.datetime.now()) assert len(a.entries) == 1 assert isinstance(a.entries[0], FeedEntry) def test_atom_to_str(self): updated_time = datetime.datetime.now() expected_repr = ''' test_title 1 %s Werkzeug ''' % format_iso8601(updated_time) a = AtomFeed(title='test_title', id=1, updated=updated_time) assert str(a).strip().replace(' ', '') == \ expected_repr.strip().replace(' ', '') class TestFeedEntry(object): """ Test case for the `FeedEntry` object """ def test_feed_entry_no_args(self): with pytest.raises(ValueError): FeedEntry() def test_feed_entry_no_id(self): with pytest.raises(ValueError): FeedEntry(title='test_title') def test_feed_entry_no_updated(self): with pytest.raises(ValueError): FeedEntry(title='test_title', id=1) def test_feed_entry_to_str(self): updated_time = datetime.datetime.now() expected_feed_entry_str = ''' test_title 1 %s ''' % format_iso8601(updated_time) f = FeedEntry(title='test_title', id=1, updated=updated_time) assert str(f).strip().replace(' ', '') == \ expected_feed_entry_str.strip().replace(' ', '') def test_format_iso8601(): # naive datetime should be treated as utc dt = datetime.datetime(2014, 8, 31, 2, 5, 6) assert format_iso8601(dt) == '2014-08-31T02:05:06Z' # tz-aware datetime dt = datetime.datetime(2014, 8, 31, 11, 5, 6, tzinfo=KST()) assert format_iso8601(dt) == '2014-08-31T11:05:06+09:00' class KST(datetime.tzinfo): """KST implementation for test_format_iso8601().""" def utcoffset(self, dt): return 
datetime.timedelta(hours=9) def tzname(self, dt): return 'KST' def dst(self, dt): return datetime.timedelta(0) werkzeug-0.14.1/tests/contrib/test_cache.py000066400000000000000000000203471322225165500207270ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.cache ~~~~~~~~~~~ Tests the cache system :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. """ import pytest import os import random from werkzeug.contrib import cache try: import redis except ImportError: redis = None try: import pylibmc as memcache except ImportError: try: from google.appengine.api import memcache except ImportError: try: import memcache except ImportError: memcache = None class CacheTests(object): _can_use_fast_sleep = True @pytest.fixture def make_cache(self): '''Return a cache class or factory.''' raise NotImplementedError() @pytest.fixture def fast_sleep(self, monkeypatch): if self._can_use_fast_sleep: def sleep(delta): orig_time = cache.time monkeypatch.setattr(cache, 'time', lambda: orig_time() + delta) return sleep else: import time return time.sleep @pytest.fixture def c(self, make_cache): '''Return a cache instance.''' return make_cache() def test_generic_get_dict(self, c): assert c.set('a', 'a') assert c.set('b', 'b') d = c.get_dict('a', 'b') assert 'a' in d assert 'a' == d['a'] assert 'b' in d assert 'b' == d['b'] def test_generic_set_get(self, c): for i in range(3): assert c.set(str(i), i * i) for i in range(3): result = c.get(str(i)) assert result == i * i, result def test_generic_get_set(self, c): assert c.set('foo', ['bar']) assert c.get('foo') == ['bar'] def test_generic_get_many(self, c): assert c.set('foo', ['bar']) assert c.set('spam', 'eggs') assert list(c.get_many('foo', 'spam')) == [['bar'], 'eggs'] def test_generic_set_many(self, c): assert c.set_many({'foo': 'bar', 'spam': ['eggs']}) assert c.get('foo') == 'bar' assert c.get('spam') == ['eggs'] def test_generic_expire(self, c, fast_sleep): assert c.set('foo', 'bar', 1) fast_sleep(5) assert c.get('foo') is None def test_generic_add(self, c): # sanity check that add() works like set() assert c.add('foo', 'bar') assert c.get('foo') == 'bar' assert not c.add('foo', 'qux') assert c.get('foo') == 'bar' def test_generic_delete(self, c): assert c.add('foo', 'bar') assert c.get('foo') == 'bar' assert c.delete('foo') assert c.get('foo') is None def test_generic_delete_many(self, c): assert c.add('foo', 'bar') assert c.add('spam', 'eggs') assert c.delete_many('foo', 'spam') assert c.get('foo') is None assert c.get('spam') is None def test_generic_inc_dec(self, c): assert c.set('foo', 1) assert c.inc('foo') == c.get('foo') == 2 assert c.dec('foo') == c.get('foo') == 1 assert c.delete('foo') def test_generic_true_false(self, c): assert c.set('foo', True) assert c.get('foo') in (True, 1) assert c.set('bar', False) assert c.get('bar') in (False, 0) def test_generic_no_timeout(self, c, fast_sleep): # Timeouts of zero should cause the cache to never expire c.set('foo', 'bar', 0) fast_sleep(random.randint(1, 5)) assert c.get('foo') == 'bar' def test_generic_timeout(self, c, fast_sleep): # Check that cache expires when the timeout is reached timeout = random.randint(1, 5) c.set('foo', 'bar', timeout) assert c.get('foo') == 'bar' # sleep a bit longer than timeout to ensure there are no # race conditions fast_sleep(timeout + 5) assert c.get('foo') is None def test_generic_has(self, c): assert c.has('foo') in (False, 0) assert c.has('spam') in (False, 0) assert c.set('foo', 'bar') assert c.has('foo') in (True, 1) assert 
c.has('spam') in (False, 0) c.delete('foo') assert c.has('foo') in (False, 0) assert c.has('spam') in (False, 0) class TestSimpleCache(CacheTests): @pytest.fixture def make_cache(self): return cache.SimpleCache def test_purge(self): c = cache.SimpleCache(threshold=2) c.set('a', 'a') c.set('b', 'b') c.set('c', 'c') c.set('d', 'd') # Cache purges old items *before* it sets new ones. assert len(c._cache) == 3 class TestFileSystemCache(CacheTests): @pytest.fixture def make_cache(self, tmpdir): return lambda **kw: cache.FileSystemCache(cache_dir=str(tmpdir), **kw) def test_filesystemcache_prune(self, make_cache): THRESHOLD = 13 c = make_cache(threshold=THRESHOLD) for i in range(2 * THRESHOLD): assert c.set(str(i), i) cache_files = os.listdir(c._path) assert len(cache_files) <= THRESHOLD def test_filesystemcache_clear(self, c): assert c.set('foo', 'bar') cache_files = os.listdir(c._path) # count = 2 because of the count file assert len(cache_files) == 2 assert c.clear() # The only file remaining is the count file cache_files = os.listdir(c._path) assert os.listdir(c._path) == [ os.path.basename(c._get_filename(c._fs_count_file))] # Don't use pytest marker # https://bitbucket.org/hpk42/pytest/issue/568 if redis is not None: class TestRedisCache(CacheTests): _can_use_fast_sleep = False @pytest.fixture(params=[ ([], dict()), ([redis.Redis()], dict()), ([redis.StrictRedis()], dict()) ]) def make_cache(self, xprocess, request): def preparefunc(cwd): return 'Ready to accept connections', ['redis-server'] xprocess.ensure('redis_server', preparefunc) args, kwargs = request.param c = cache.RedisCache(*args, key_prefix='werkzeug-test-case:', **kwargs) request.addfinalizer(c.clear) return lambda: c def test_compat(self, c): assert c._client.set(c.key_prefix + 'foo', 'Awesome') assert c.get('foo') == b'Awesome' assert c._client.set(c.key_prefix + 'foo', '42') assert c.get('foo') == 42 # Don't use pytest marker # https://bitbucket.org/hpk42/pytest/issue/568 if memcache is not None: class TestMemcachedCache(CacheTests): _can_use_fast_sleep = False @pytest.fixture def make_cache(self, xprocess, request): def preparefunc(cwd): return '', ['memcached'] xprocess.ensure('memcached', preparefunc) c = cache.MemcachedCache(key_prefix='werkzeug-test-case:') request.addfinalizer(c.clear) return lambda: c def test_compat(self, c): assert c._client.set(c.key_prefix + 'foo', 'bar') assert c.get('foo') == 'bar' def test_huge_timeouts(self, c): # Timeouts greater than epoch are interpreted as POSIX timestamps # (i.e. 
not relative to now, but relative to epoch) import random epoch = 2592000 timeout = epoch + random.random() * 100 c.set('foo', 'bar', timeout) assert c.get('foo') == 'bar' def _running_in_uwsgi(): try: import uwsgi # NOQA except ImportError: return False else: return True @pytest.mark.skipif(not _running_in_uwsgi(), reason="uWSGI module can't be imported outside of uWSGI") class TestUWSGICache(CacheTests): _can_use_fast_sleep = False @pytest.fixture def make_cache(self, xprocess, request): c = cache.UWSGICache(cache='werkzeugtest') request.addfinalizer(c.clear) return lambda: c class TestNullCache(object): @pytest.fixture def make_cache(self): return cache.NullCache @pytest.fixture def c(self, make_cache): return make_cache() def test_nullcache_has(self, c): assert c.has('foo') in (False, 0) assert c.has('spam') in (False, 0) werkzeug-0.14.1/tests/contrib/test_fixers.py000066400000000000000000000155411322225165500211640ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.fixers ~~~~~~~~~~~~ Server / Browser fixers. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. """ from tests import strict_eq from werkzeug.datastructures import ResponseCacheControl from werkzeug.http import parse_cache_control_header from werkzeug.test import create_environ, Client from werkzeug.wrappers import Request, Response from werkzeug.contrib import fixers from werkzeug.utils import redirect @Request.application def path_check_app(request): return Response('PATH_INFO: %s\nSCRIPT_NAME: %s' % ( request.environ.get('PATH_INFO', ''), request.environ.get('SCRIPT_NAME', '') )) class TestServerFixer(object): def test_cgi_root_fix(self): app = fixers.CGIRootFix(path_check_app) response = Response.from_app( app, dict(create_environ(), SCRIPT_NAME='/foo', PATH_INFO='/bar', SERVER_SOFTWARE='lighttpd/1.4.27')) assert response.get_data() == b'PATH_INFO: /foo/bar\nSCRIPT_NAME: ' def test_cgi_root_fix_custom_app_root(self): app = fixers.CGIRootFix(path_check_app, app_root='/baz/poop/') response = Response.from_app( app, dict(create_environ(), SCRIPT_NAME='/foo', PATH_INFO='/bar')) assert response.get_data() == b'PATH_INFO: /foo/bar\nSCRIPT_NAME: baz/poop' def test_path_info_from_request_uri_fix(self): app = fixers.PathInfoFromRequestUriFix(path_check_app) for key in 'REQUEST_URI', 'REQUEST_URL', 'UNENCODED_URL': env = dict(create_environ(), SCRIPT_NAME='/test', PATH_INFO='/?????') env[key] = '/test/foo%25bar?drop=this' response = Response.from_app(app, env) assert response.get_data() == b'PATH_INFO: /foo%bar\nSCRIPT_NAME: /test' def test_proxy_fix(self): @Request.application def app(request): return Response('%s|%s' % ( request.remote_addr, # do not use request.host as this fixes too :) request.environ['HTTP_HOST'] )) app = fixers.ProxyFix(app, num_proxies=2) environ = dict( create_environ(), HTTP_X_FORWARDED_PROTO="https", HTTP_X_FORWARDED_HOST='example.com', HTTP_X_FORWARDED_FOR='1.2.3.4, 5.6.7.8', REMOTE_ADDR='127.0.0.1', HTTP_HOST='fake' ) response = Response.from_app(app, environ) assert response.get_data() == b'1.2.3.4|example.com' # And we must check that if it is a redirection it is # correctly done: redirect_app = redirect('/foo/bar.hml') response = Response.from_app(redirect_app, environ) wsgi_headers = response.get_wsgi_headers(environ) assert wsgi_headers['Location'] == 'https://example.com/foo/bar.hml' def test_proxy_fix_weird_enum(self): @fixers.ProxyFix @Request.application def app(request): return Response(request.remote_addr) environ = dict( create_environ(), 
HTTP_X_FORWARDED_FOR=',', REMOTE_ADDR='127.0.0.1', ) response = Response.from_app(app, environ) strict_eq(response.get_data(), b'127.0.0.1') def test_header_rewriter_fix(self): @Request.application def application(request): return Response("", headers=[ ('X-Foo', 'bar') ]) application = fixers.HeaderRewriterFix(application, ('X-Foo',), (('X-Bar', '42'),)) response = Response.from_app(application, create_environ()) assert response.headers['Content-Type'] == 'text/plain; charset=utf-8' assert 'X-Foo' not in response.headers assert response.headers['X-Bar'] == '42' class TestBrowserFixer(object): def test_ie_fixes(self): @fixers.InternetExplorerFix @Request.application def application(request): response = Response('binary data here', mimetype='application/vnd.ms-excel') response.headers['Vary'] = 'Cookie' response.headers['Content-Disposition'] = 'attachment; filename=foo.xls' return response c = Client(application, Response) response = c.get('/', headers=[ ('User-Agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)') ]) # IE gets no vary assert response.get_data() == b'binary data here' assert 'vary' not in response.headers assert response.headers['content-disposition'] == 'attachment; filename=foo.xls' assert response.headers['content-type'] == 'application/vnd.ms-excel' # other browsers do c = Client(application, Response) response = c.get('/') assert response.get_data() == b'binary data here' assert 'vary' in response.headers cc = ResponseCacheControl() cc.no_cache = True @fixers.InternetExplorerFix @Request.application def application(request): response = Response('binary data here', mimetype='application/vnd.ms-excel') response.headers['Pragma'] = ', '.join(pragma) response.headers['Cache-Control'] = cc.to_header() response.headers['Content-Disposition'] = 'attachment; filename=foo.xls' return response # IE has no pragma or cache control pragma = ('no-cache',) c = Client(application, Response) response = c.get('/', headers=[ ('User-Agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)') ]) assert response.get_data() == b'binary data here' assert 'pragma' not in response.headers assert 'cache-control' not in response.headers assert response.headers['content-disposition'] == 'attachment; filename=foo.xls' # IE has simplified pragma pragma = ('no-cache', 'x-foo') cc.proxy_revalidate = True response = c.get('/', headers=[ ('User-Agent', 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)') ]) assert response.get_data() == b'binary data here' assert response.headers['pragma'] == 'x-foo' assert response.headers['cache-control'] == 'proxy-revalidate' assert response.headers['content-disposition'] == 'attachment; filename=foo.xls' # regular browsers get everything response = c.get('/') assert response.get_data() == b'binary data here' assert response.headers['pragma'] == 'no-cache, x-foo' cc = parse_cache_control_header(response.headers['cache-control'], cls=ResponseCacheControl) assert cc.no_cache assert cc.proxy_revalidate assert response.headers['content-disposition'] == 'attachment; filename=foo.xls' werkzeug-0.14.1/tests/contrib/test_iterio.py000066400000000000000000000125601322225165500211550ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.iterio ~~~~~~~~~~~~ Tests the iterio object. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. 
""" import pytest from tests import strict_eq from werkzeug.contrib.iterio import IterIO, greenlet class TestIterO(object): def test_basic_native(self): io = IterIO(["Hello", "World", "1", "2", "3"]) assert io.tell() == 0 assert io.read(2) == "He" assert io.tell() == 2 assert io.read(3) == "llo" assert io.tell() == 5 io.seek(0) assert io.read(5) == "Hello" assert io.tell() == 5 assert io._buf == "Hello" assert io.read() == "World123" assert io.tell() == 13 io.close() assert io.closed io = IterIO(["Hello\n", "World!"]) assert io.readline() == 'Hello\n' assert io._buf == 'Hello\n' assert io.read() == 'World!' assert io._buf == 'Hello\nWorld!' assert io.tell() == 12 io.seek(0) assert io.readlines() == ['Hello\n', 'World!'] io = IterIO(['Line one\nLine ', 'two\nLine three']) assert list(io) == ['Line one\n', 'Line two\n', 'Line three'] io = IterIO(iter('Line one\nLine two\nLine three')) assert list(io) == ['Line one\n', 'Line two\n', 'Line three'] io = IterIO(['Line one\nL', 'ine', ' two', '\nLine three']) assert list(io) == ['Line one\n', 'Line two\n', 'Line three'] io = IterIO(["foo\n", "bar"]) io.seek(-4, 2) assert io.read(4) == '\nbar' pytest.raises(IOError, io.seek, 2, 100) io.close() pytest.raises(ValueError, io.read) def test_basic_bytes(self): io = IterIO([b"Hello", b"World", b"1", b"2", b"3"]) assert io.tell() == 0 assert io.read(2) == b"He" assert io.tell() == 2 assert io.read(3) == b"llo" assert io.tell() == 5 io.seek(0) assert io.read(5) == b"Hello" assert io.tell() == 5 assert io._buf == b"Hello" assert io.read() == b"World123" assert io.tell() == 13 io.close() assert io.closed io = IterIO([b"Hello\n", b"World!"]) assert io.readline() == b'Hello\n' assert io._buf == b'Hello\n' assert io.read() == b'World!' assert io._buf == b'Hello\nWorld!' assert io.tell() == 12 io.seek(0) assert io.readlines() == [b'Hello\n', b'World!'] io = IterIO([b"foo\n", b"bar"]) io.seek(-4, 2) assert io.read(4) == b'\nbar' pytest.raises(IOError, io.seek, 2, 100) io.close() pytest.raises(ValueError, io.read) def test_basic_unicode(self): io = IterIO([u"Hello", u"World", u"1", u"2", u"3"]) assert io.tell() == 0 assert io.read(2) == u"He" assert io.tell() == 2 assert io.read(3) == u"llo" assert io.tell() == 5 io.seek(0) assert io.read(5) == u"Hello" assert io.tell() == 5 assert io._buf == u"Hello" assert io.read() == u"World123" assert io.tell() == 13 io.close() assert io.closed io = IterIO([u"Hello\n", u"World!"]) assert io.readline() == u'Hello\n' assert io._buf == u'Hello\n' assert io.read() == u'World!' assert io._buf == u'Hello\nWorld!' 
assert io.tell() == 12 io.seek(0) assert io.readlines() == [u'Hello\n', u'World!'] io = IterIO([u"foo\n", u"bar"]) io.seek(-4, 2) assert io.read(4) == u'\nbar' pytest.raises(IOError, io.seek, 2, 100) io.close() pytest.raises(ValueError, io.read) def test_sentinel_cases(self): io = IterIO([]) strict_eq(io.read(), '') io = IterIO([], b'') strict_eq(io.read(), b'') io = IterIO([], u'') strict_eq(io.read(), u'') io = IterIO([]) strict_eq(io.read(), '') io = IterIO([b'']) strict_eq(io.read(), b'') io = IterIO([u'']) strict_eq(io.read(), u'') io = IterIO([]) strict_eq(io.readline(), '') io = IterIO([], b'') strict_eq(io.readline(), b'') io = IterIO([], u'') strict_eq(io.readline(), u'') io = IterIO([]) strict_eq(io.readline(), '') io = IterIO([b'']) strict_eq(io.readline(), b'') io = IterIO([u'']) strict_eq(io.readline(), u'') @pytest.mark.skipif(greenlet is None, reason='Greenlet is not installed.') class TestIterI(object): def test_basic(self): def producer(out): out.write('1\n') out.write('2\n') out.flush() out.write('3\n') iterable = IterIO(producer) assert next(iterable) == '1\n2\n' assert next(iterable) == '3\n' pytest.raises(StopIteration, next, iterable) def test_sentinel_cases(self): def producer_dummy_flush(out): out.flush() iterable = IterIO(producer_dummy_flush) strict_eq(next(iterable), '') def producer_empty(out): pass iterable = IterIO(producer_empty) pytest.raises(StopIteration, next, iterable) iterable = IterIO(producer_dummy_flush, b'') strict_eq(next(iterable), b'') iterable = IterIO(producer_dummy_flush, u'') strict_eq(next(iterable), u'') werkzeug-0.14.1/tests/contrib/test_securecookie.py000066400000000000000000000026741322225165500223470ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.securecookie ~~~~~~~~~~~~~~~~~~ Tests the secure cookie. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. """ from werkzeug.utils import parse_cookie from werkzeug.wrappers import Request, Response from werkzeug.contrib.securecookie import SecureCookie def test_basic_support(): c = SecureCookie(secret_key=b'foo') assert c.new assert not c.modified assert not c.should_save c['x'] = 42 assert c.modified assert c.should_save s = c.serialize() c2 = SecureCookie.unserialize(s, b'foo') assert c is not c2 assert not c2.new assert not c2.modified assert not c2.should_save assert c2 == c c3 = SecureCookie.unserialize(s, b'wrong foo') assert not c3.modified assert not c3.new assert c3 == {} c4 = SecureCookie({'x': 42}, 'foo') c4_serialized = c4.serialize() assert SecureCookie.unserialize(c4_serialized, 'foo') == c4 def test_wrapper_support(): req = Request.from_values() resp = Response() c = SecureCookie.load_cookie(req, secret_key=b'foo') assert c.new c['foo'] = 42 assert c.secret_key == b'foo' c.save_cookie(resp) req = Request.from_values(headers={ 'Cookie': 'session="%s"' % parse_cookie(resp.headers['set-cookie'])['session'] }) c2 = SecureCookie.load_cookie(req, secret_key=b'foo') assert not c2.new assert c2 == c werkzeug-0.14.1/tests/contrib/test_sessions.py000066400000000000000000000032421322225165500215250ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.sessions ~~~~~~~~~~~~~~ Added tests for the sessions. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. 
""" import os from tempfile import gettempdir from werkzeug.contrib.sessions import FilesystemSessionStore def test_default_tempdir(): store = FilesystemSessionStore() assert store.path == gettempdir() def test_basic_fs_sessions(tmpdir): store = FilesystemSessionStore(str(tmpdir)) x = store.new() assert x.new assert not x.modified x['foo'] = [1, 2, 3] assert x.modified store.save(x) x2 = store.get(x.sid) assert not x2.new assert not x2.modified assert x2 is not x assert x2 == x x2['test'] = 3 assert x2.modified assert not x2.new store.save(x2) x = store.get(x.sid) store.delete(x) x2 = store.get(x.sid) # the session is not new when it was used previously. assert not x2.new def test_non_urandom(tmpdir): urandom = os.urandom del os.urandom try: store = FilesystemSessionStore(str(tmpdir)) store.new() finally: os.urandom = urandom def test_renewing_fs_session(tmpdir): store = FilesystemSessionStore(str(tmpdir), renew_missing=True) x = store.new() store.save(x) store.delete(x) x2 = store.get(x.sid) assert x2.new def test_fs_session_lising(tmpdir): store = FilesystemSessionStore(str(tmpdir), renew_missing=True) sessions = set() for x in range(10): sess = store.new() store.save(sess) sessions.add(sess.sid) listed_sessions = set(store.list()) assert sessions == listed_sessions werkzeug-0.14.1/tests/contrib/test_wrappers.py000066400000000000000000000057441322225165500215330ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.contrib.wrappers ~~~~~~~~~~~~~~~~~~~~~~ Added tests for the sessions. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. """ from __future__ import with_statement from werkzeug.contrib import wrappers from werkzeug import routing from werkzeug.wrappers import Request, Response def test_json_request_mixin(): class MyRequest(wrappers.JSONRequestMixin, Request): pass req = MyRequest.from_values( data=u'{"foä": "bar"}'.encode('utf-8'), content_type='text/json' ) assert req.json == {u'foä': 'bar'} def test_reverse_slash_behavior(): class MyRequest(wrappers.ReverseSlashBehaviorRequestMixin, Request): pass req = MyRequest.from_values('/foo/bar', 'http://example.com/test') assert req.url == 'http://example.com/test/foo/bar' assert req.path == 'foo/bar' assert req.script_root == '/test/' # make sure the routing system works with the slashes in # reverse order as well. 
map = routing.Map([routing.Rule('/foo/bar', endpoint='foo')]) adapter = map.bind_to_environ(req.environ) assert adapter.match() == ('foo', {}) adapter = map.bind(req.host, req.script_root) assert adapter.match(req.path) == ('foo', {}) def test_dynamic_charset_request_mixin(): class MyRequest(wrappers.DynamicCharsetRequestMixin, Request): pass env = {'CONTENT_TYPE': 'text/html'} req = MyRequest(env) assert req.charset == 'latin1' env = {'CONTENT_TYPE': 'text/html; charset=utf-8'} req = MyRequest(env) assert req.charset == 'utf-8' env = {'CONTENT_TYPE': 'application/octet-stream'} req = MyRequest(env) assert req.charset == 'latin1' assert req.url_charset == 'latin1' MyRequest.url_charset = 'utf-8' env = {'CONTENT_TYPE': 'application/octet-stream'} req = MyRequest(env) assert req.charset == 'latin1' assert req.url_charset == 'utf-8' def return_ascii(x): return "ascii" env = {'CONTENT_TYPE': 'text/plain; charset=x-weird-charset'} req = MyRequest(env) req.unknown_charset = return_ascii assert req.charset == 'ascii' assert req.url_charset == 'utf-8' def test_dynamic_charset_response_mixin(): class MyResponse(wrappers.DynamicCharsetResponseMixin, Response): default_charset = 'utf-7' resp = MyResponse(mimetype='text/html') assert resp.charset == 'utf-7' resp.charset = 'utf-8' assert resp.charset == 'utf-8' assert resp.mimetype == 'text/html' assert resp.mimetype_params == {'charset': 'utf-8'} resp.mimetype_params['charset'] = 'iso-8859-15' assert resp.charset == 'iso-8859-15' resp.set_data(u'Hällo Wörld') assert b''.join(resp.iter_encoded()) == \ u'Hällo Wörld'.encode('iso-8859-15') del resp.headers['content-type'] try: resp.charset = 'utf-8' except TypeError: pass else: assert False, 'expected type error on charset setting without ct' werkzeug-0.14.1/tests/hypothesis/000077500000000000000000000000001322225165500170045ustar00rootroot00000000000000werkzeug-0.14.1/tests/hypothesis/__init__.py000066400000000000000000000000001322225165500211030ustar00rootroot00000000000000werkzeug-0.14.1/tests/hypothesis/test_urls.py000066400000000000000000000017501322225165500214050ustar00rootroot00000000000000import hypothesis from hypothesis.strategies import text, dictionaries, lists, integers from werkzeug import urls from werkzeug.datastructures import OrderedMultiDict @hypothesis.given(text()) def test_quote_unquote_text(t): assert t == urls.url_unquote(urls.url_quote(t)) @hypothesis.given(dictionaries(text(), text())) def test_url_encoding_dict_str_str(d): assert OrderedMultiDict(d) == urls.url_decode(urls.url_encode(d)) @hypothesis.given(dictionaries(text(), lists(elements=text()))) def test_url_encoding_dict_str_list(d): assert OrderedMultiDict(d) == urls.url_decode(urls.url_encode(d)) @hypothesis.given(dictionaries(text(), integers())) def test_url_encoding_dict_str_int(d): assert OrderedMultiDict({k: str(v) for k, v in d.items()}) == \ urls.url_decode(urls.url_encode(d)) @hypothesis.given(text(), text()) def test_multidict_encode_decode_text(t1, t2): d = OrderedMultiDict() d.add(t1, t2) assert d == urls.url_decode(urls.url_encode(d)) 
werkzeug-0.14.1/tests/multipart/000077500000000000000000000000001322225165500166265ustar00rootroot00000000000000werkzeug-0.14.1/tests/multipart/__init__.py000066400000000000000000000000001322225165500207250ustar00rootroot00000000000000werkzeug-0.14.1/tests/multipart/firefox3-2png1txt/000077500000000000000000000000001322225165500220405ustar00rootroot00000000000000werkzeug-0.14.1/tests/multipart/firefox3-2png1txt/file1.png000066400000000000000000000010131322225165500235410ustar00rootroot00000000000000PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<IDAT8˥S1k@] ?Cҭ[QN N (v\[ڡd)8  vSPD0KI~W }ݽﻗw/Զmz^4ʹa.IENDB`werkzeug-0.14.1/tests/multipart/firefox3-2png1txt/file2.png000066400000000000000000000012771322225165500235560ustar00rootroot00000000000000PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<QIDATkqw~X0$֢Q"BJ1I-QJqA;jv\x(yQvlVc%f]X&kq3@SũPS9 sʄX#uDfEͩhQ+/b,⿲pe3?<挆msݍ=|3%/8 05bNW0\>%2*;|' &yC3׉>j;Y=䭦SGL"*jD ro[A]1S2ɕ !fx#?\o3B$Y\Su!" Br&1g-C/"ѯDK[+,"!/y쇛$n`(u/Ia;Sj88gN ED ~([&"1#70Tl:OQL!@W[3)Tz_fRYt=KR4BjFOuԟ 9)Km>H6UV/ō5.IENDB` -----------------------------186454651713519341951581030105 Content-Disposition: form-data; name="file2"; filename="application_edit.png" Content-Type: image/png PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<QIDATkqw~X0$֢Q"BJ1I-QJqA;jv\x(yQvlVc%f]X&kq3@SũPS9 sʄX#uDfEͩhQ+/b,⿲pe3?<挆msݍ=|3%/8 05bNW0\>%2*;|' &yC3׉>j;Y=䭦SGL"*jD ro[A]1S2ɕ !fx#?\o3B$Y\Su!" Br&1g-C/"ѯDK[+,"!/y쇛$n`(u/Ia;Sj88gN ED ~([&"1#70Tl:OQL!@W[3)Tz_fRYt=KR4BjFOuԟ 9)Km>H6UV/ō5ݾW ϛJ߸Pd makD|=G Vn6[Įd桚(Pm.0Q`'Fb#&ܧ6aP׏Q12[+zi; ]C17оpI9̾jD}›?7ayze,hXAK^3*bk @+wQ=!}uXzq:g쯺n= :d+_GTA;Ր Jƣ.!P)5!H:epր"݂"Kyw|{H2!i~3z_X;okBZK* ^R:O(jF*^ȰS诿_ gЬycIENDB`werkzeug-0.14.1/tests/multipart/firefox3-2pnglongtext/file2.png000066400000000000000000000013351322225165500245150ustar00rootroot00000000000000PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<oIDAT8˥Ka[/Y()%X(olNۖskn.-h;8fEP"jïMGˈ}yພ羹$I.tulu AX:𼂒ZHh1DnZJOJB{Z?`2`S=N$ő=;a &jw qJG#<"N2h8޵`6xցn_+ ~Zto}`x%XЛ͈ hXѿƻ/}BJ_G&|Qr-6Aރ EL⬡\U3:WUh[C6+ 6.f *K͸ܝFq ou4܄?d|XҥMvD` *_[ #A20liR|xq`4w=\uQ m+G|%$5Թ5RO*YGMUO Gqj4ְ(X& s1c˭(LVf RdjQ '-1ATA>U j4,pV"4L$e@.ArBY a~myY])Q8tNLܞt2"I o=CSd)__AF(IENDB`werkzeug-0.14.1/tests/multipart/firefox3-2pnglongtext/request.txt000066400000000000000000000037721322225165500252460ustar00rootroot00000000000000-----------------------------14904044739787191031754711748 Content-Disposition: form-data; name="file1"; filename="accept.png" Content-Type: image/png PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<IDAT8˥KSa;vvl dD!P{$; ż,Kݽ6cL2r^H)-jsNm֔2qQB̽BatoL#z {q' r=)La8,u%2Rg>ݾW ϛJ߸Pd makD|=G Vn6[Įd桚(Pm.0Q`'Fb#&ܧ6aP׏Q12[+zi; ]C17оpI9̾jD}›?7ayze,hXAK^3*bk @+wQ=!}uXzq:g쯺n= :d+_GTA;Ր Jƣ.!P)5!H:epր"݂"Kyw|{H2!i~3z_X;okBZK* ^R:O(jF*^ȰS诿_ gЬycIENDB` -----------------------------14904044739787191031754711748 Content-Disposition: form-data; name="file2"; filename="add.png" Content-Type: image/png PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<oIDAT8˥Ka[/Y()%X(olNۖskn.-h;8fEP"jïMGˈ}yພ羹$I.tulu AX:𼂒ZHh1DnZJOJB{Z?`2`S=N$ő=;a &jw qJG#<"N2h8޵`6xցn_+ ~Zto}`x%XЛ͈ hXѿƻ/}BJ_G&|Qr-6Aރ EL⬡\U3:WUh[C6+ 6.f *K͸ܝFq ou4܄?d|XҥMvD` *_[ #A20liR|xq`4w=\uQ m+G|%$5Թ5RO*YGMUO Gqj4ְ(X& s1c˭(LVf RdjQ '-1ATA>U j4,pV"4L$e@.ArBY a~myY])Q8tNLܞt2"I o=CSd)__AF(IENDB` -----------------------------14904044739787191031754711748 Content-Disposition: form-data; name="text" --long text --with boundary --lookalikes-- -----------------------------14904044739787191031754711748-- 
werkzeug-0.14.1/tests/multipart/firefox3-2pnglongtext/text.txt000066400000000000000000000000521322225165500245260ustar00rootroot00000000000000--long text --with boundary --lookalikes--werkzeug-0.14.1/tests/multipart/ie6-2png1txt/000077500000000000000000000000001322225165500207765ustar00rootroot00000000000000werkzeug-0.14.1/tests/multipart/ie6-2png1txt/file1.png000066400000000000000000000010131322225165500224770ustar00rootroot00000000000000PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<IDAT8˥S1k@] ?Cҭ[QN N (v\[ڡd)8  vSPD0KI~W }ݽﻗw/Զmz^4ʹa.IENDB`werkzeug-0.14.1/tests/multipart/ie6-2png1txt/file2.png000066400000000000000000000012771322225165500225140ustar00rootroot00000000000000PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<QIDATkqw~X0$֢Q"BJ1I-QJqA;jv\x(yQvlVc%f]X&kq3@SũPS9 sʄX#uDfEͩhQ+/b,⿲pe3?<挆msݍ=|3%/8 05bNW0\>%2*;|' &yC3׉>j;Y=䭦SGL"*jD ro[A]1S2ɕ !fx#?\o3B$Y\Su!" Br&1g-C/"ѯDK[+,"!/y쇛$n`(u/Ia;Sj88gN ED ~([&"1#70Tl:OQL!@W[3)Tz_fRYt=KR4BjFOuԟ 9)Km>H6UV/ō5.IENDB` -----------------------------7d91b03a20128 Content-Disposition: form-data; name="file2"; filename="C:\Python25\wztest\werkzeug-main\tests\multipart\firefox3-2png1txt\file2.png" Content-Type: image/x-png PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<QIDATkqw~X0$֢Q"BJ1I-QJqA;jv\x(yQvlVc%f]X&kq3@SũPS9 sʄX#uDfEͩhQ+/b,⿲pe3?<挆msݍ=|3%/8 05bNW0\>%2*;|' &yC3׉>j;Y=䭦SGL"*jD ro[A]1S2ɕ !fx#?\o3B$Y\Su!" Br&1g-C/"ѯDK[+,"!/y쇛$n`(u/Ia;Sj88gN ED ~([&"1#70Tl:OQL!@W[3)Tz_fRYt=KR4BjFOuԟ 9)Km>H6UV/ō5 463#` Cbjbjmm .C     $h+nz z z   z    z   Pq  0   4   2 l    z z z z     SELLERSBURG TOWN COUNCIL MEETING February 22, 2010 These minutes are not intended to be a verbatim transcript TOWN COUNCIL MEETING: The Sellersburg Town Council met on February 22, 2010 at the Sellersburg Town Hall. Present were Council President Brian K. Meyer, Council Vice President Paul J. Rhodes, Council Member Terry E. Langford, Council Member James H. LaMaster, Council Member Michael N. Lockhart, Town Attorney William P. McCall, III and Clerk-Treasurer David L. Kinder. CALL TO ORDER: President Brian K. Meyer called the meeting to order at 7:00 P.M. PRAYER: led by Paul J. Rhodes. PLEDGE OF ALLEGIANCE: By all present. JAMES H. LaMASTER makes a motion to approve the minutes of the February 8, 2010 meeting and the claims as submitted by the Clerk-Treasurers office, seconded by Terry E. Langford. 5-aye, 0-nay, motion is approved. PRESIDENT MEYER presents contract with E-Z Eye Advertising Company for advertising benches for the Town. PAUL J. RHODES makes a motion to not renew contract with E-Z Eye Advertising Company and have the benches that are presently in town removed, seconded by Michael N. Lockhart. 5-aye, 0-nay, motion is approved. Town Attorney William P. McCall, III will write letter to company notifying them of the cancellation of the contract. Sellersburg Town Council Meeting February 22, 2010 Page 2 of 2 pages WILLIAM E. COONS presents an ordinance for zoning change at 475 North Indiana Avenue, the William H. Bueter, Jr. property from R-1 to B-1. PAUL J. RHODES makes a motion to pass ORDINANCE NO. 2010-005, AN ORDINANCE AMENDING THE ZONING ORDINANCE OF SELLERSBURG, INDIANA, an ordinance changing the property at 475 North Indiana Avenue from R-1 to B-1, seconded by Michael N. Lockhart. 5-aye, 0-nay, motion is approved. MICHAEL N. LOCKHART makes a motion to adjourn the meeting, seconded by James H. LaMaster. 5-aye, 0-nay, motion approved and meeting adjourned at 7:28 P.M. _____________________________ ________________________________ Brian K. Meyer, Council President Paul J. 
Rhodes, Council Vice-President ______________________________ _________________________________ James H. LaMaster, Council Member Terry E. Langford, Council Member _______________________________ _________________________________ Michael N. Lockhart, Council Member ATTEST: David L. Kinder Clerk-Treasurer "#%.027qtu ; B D Z \ r [ \ k  V W g Zc xhuhu5>*huhuhxS5huhxS5>*huhu5hxShxS5>*h2 gh2 g5>*hxSh2 ghXhXhX5>* hF5>* hXCJhXhXCJ hxS5 h2 g5hXhW}5hXhX5 hF5.#$%7rstu : ; [ \ [ \  2 gdudhgdugdX$a$gdXC2 D V W "#$)ABCgdXdhgdugduBC hXhFh dhF,1h/ =!"#$% @@@ NormalCJ_HaJmH sH tH DA@D Default Paragraph FontRiR  Table Normal4 l4a (k(No ListC #$%7rstu:;[\[\2DVW"#$) A B E 000000000000000000000000000000000000000000000000C 2 C C XCYCFZC_[C߉\C ]Cd^CD_C`C4FaC\BbC1cC$dCE      !!E   :*urn:schemas-microsoft-com:office:smarttagsStreet;*urn:schemas-microsoft-com:office:smarttagsaddress9 *urn:schemas-microsoft-com:office:smarttagsplace= *urn:schemas-microsoft-com:office:smarttags PlaceName= *urn:schemas-microsoft-com:office:smarttags PlaceType>*urn:schemas-microsoft-com:office:smarttags PersonName   E E 1UB E E ! =?9n F|6"2 > XW zs  F# $* 1/IwD$+@Cc!~J" dtyx1& < uYn8~5?E[`Zks !C!d3">5#:& '9'(m(n()*)4)N+0,C,p-M(.jn.K/40La0h0)1w+1?[1+2 22JV3:4U45QU69{"9t89];bE<h(=D=q=>>tT?W?!.BLCVDrD~f3r}6&bClQdH vp~>&P&l(lkPBCZm '*,i6u=yQF|*G Sz)463h*T;&Iim  RV`!l_#hItjL@u'Jq/9hiOK;AXUa* AymG h#X[z!0o;,*R{/#C'*4FnH2>OrL3P@s326whQw) g%/.$ Pqw{z18P F7B:Z/Ic*d=VQ-oR@OC p@UnknownGz Times New Roman5Symbol3& z Arial"hFFF aa!r4d? ? 2HP)?X SELLERSBURG TOWN COUNCIL MEETING  Oh+'0 ( H T ` lx$SELLERSBURG TOWN COUNCIL MEETING  Normal.dot 2Microsoft Office Word@e@dD@{h@{ha՜.+,0 hp|  tos? ' !SELLERSBURG TOWN COUNCIL MEETING Title  !"$%&'()*,-./0125Root Entry F@q7Data 1TableWordDocument.SummaryInformation(#DocumentSummaryInformation8+CompObjq  FMicrosoft Office Word Document MSWordDocWord.Document.89q -----------------------------7da36d1b4a0164 Content-Disposition: form-data; name="submit" Submit -----------------------------7da36d1b4a0164-- werkzeug-0.14.1/tests/multipart/opera8-2png1txt/000077500000000000000000000000001322225165500215115ustar00rootroot00000000000000werkzeug-0.14.1/tests/multipart/opera8-2png1txt/file1.png000066400000000000000000000011061322225165500232150ustar00rootroot00000000000000PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<IDAT8˕RKTq[(IV߶桂"0i):DJҥ@2tX%=T`J^p%޳".徙Ϸ*>sa9^Mnw4 twft=scl&wl߼ ӏ/:}XdOLd@HeH=]?>Ax6=֊n! 9)w #W \:r@ف@BqwF`,f-bud_~P,Ig!!Ds܍bܞ ]3W4C.}L* 0W`3Hb5U '"އ"B@27g/"@Ȅf/"F>z~}~W^UW`,XCKi>xwrW A(CDa?֬./IENDB`werkzeug-0.14.1/tests/multipart/opera8-2png1txt/file2.png000066400000000000000000000013351322225165500232220ustar00rootroot00000000000000PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<oIDAT8˅MHTQ7yc3Id1"iIҢUDQ[W*hӢVD JʲpQ- ͤ KǙ4CXgs=rνP}5 55A ϟXaARXD~!T"TZmb~>R]#C6?Ikd~jY@>n E޽%KؘCV~k]BPFȎ#6}"h8Qoz{/ˡ֎a*U" r79g-&[Me`e'L .UDx*cf&P3?14@ Vx 6 SoGC:_UljS&BfK$4iʍML s:;=`<ė)2biWhvQv\Ztؑ"ߺ"TV6rE m͋ w!>%1Oz||5=Zu`q6{utm7od+WcC<R"(oOsa9^Mnw4 twft=scl&wl߼ ӏ/:}XdOLd@HeH=]?>Ax6=֊n! 9)w #W \:r@ف@BqwF`,f-bud_~P,Ig!!Ds܍bܞ ]3W4C.}L* 0W`3Hb5U '"އ"B@27g/"@Ȅf/"F>z~}~W^UW`,XCKi>xwrW A(CDa?֬./IENDB` ------------zEO9jQKmLc2Cq88c23Dx19 Content-Disposition: form-data; name="file2"; filename="award_star_bronze_1.png" Content-Type: image/png PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<oIDAT8˅MHTQ7yc3Id1"iIҢUDQ[W*hӢVD JʲpQ- ͤ KǙ4CXgs=rνP}5 55A ϟXaARXD~!T"TZmb~>R]#C6?Ikd~jY@>n E޽%KؘCV~k]BPFȎ#6}"h8Qoz{/ˡ֎a*U" r79g-&[Me`e'L .UDx*cf&P3?14@ Vx 6 SoGC:_UljS&BfK$4iʍML s:;=`<ė)2biWhvQv\Ztؑ"ߺ"TV6rE m͋ w!>%1Oz||5=Zu`q6{utm7od+WcC<R"(oOUpload File
    <form action="" method="post" enctype="multipart/form-data">
        <input type="file" name="file1"><br>
        <input type="file" name="file2"><br>
        <textarea name="text"></textarea><br>
        <input type="submit" value="Send">
    </form>
''', mimetype='text/html') def application(environ, start_responseonse): request = Request(environ) if request.method == 'POST': response = stats(request) else: response = upload_file(request) return response(environ, start_responseonse) if __name__ == '__main__': run_simple('localhost', 5000, application, use_debugger=True) werkzeug-0.14.1/tests/multipart/webkit3-2png1txt/000077500000000000000000000000001322225165500216635ustar00rootroot00000000000000werkzeug-0.14.1/tests/multipart/webkit3-2png1txt/file1.png000066400000000000000000000017521322225165500233760ustar00rootroot00000000000000PNG  IHDR sBIT|d pHYs B(xtEXtSoftwarewww.inkscape.org<gIDAT8}hU9Ϲ/Ͻn8ӕ-g۰nb S ph bGdFhi2 QvBj-s+w˽86\;!n LY\4~<5GC*RF_n~@":.|L^x>ii) x_S $F zuIL'Fn|ڠʷ{b0n#@tsҠіRYM¥ZG=0|y%t). ,JdFރI#󷢫ݸ}V{%F[\g֋lOAgﱸnZn5%Fj*^8BhS8tfOoC[:RY9o Qz۷iM*c>y87Ph*պy|nv\ae梂Abe+=oTCp݌٬]8Ŭn̴\ 5 +'{& $# Ð#r!8i;= B 2hYKjvnl'\0+n۳-||Y:y1wS-nK5YRJc1iDdf-&M흭?j2*L'J󷔽*Ȧ˱p'@X VRu-r_iNۡ=ⵣA] 1u:DB %s_Heӏx̘0؎Xɤ PǘAN`H3r䉟bwIENDB`werkzeug-0.14.1/tests/multipart/webkit3-2png1txt/file2.png000066400000000000000000000016701322225165500233760ustar00rootroot00000000000000PNG  IHDR sBIT|d pHYs B(xtEXtSoftwarewww.inkscape.org<5IDAT8kG?3m241۞Rr0JL/S@(zS9[ ƥ\5R,㬤%K_ GInyo?U"97*u_Æ)p^D_R ^lfU9.N48 v`k_(!{wX xsshc8&nȔϧO%W"?|TZVK%(K\q"iJnsppЎvQ.nݚ NN$!jeqr) G 0nڰ^DZ" $Ȋ{Eju :P׀dوENiׁqܝ}\p}Q.i>efa«cc$IW'4W28R]\>ިXU10]4ڽ^l<H| 0kq {c`n7Z\kqe ht >40`w |$'ίCSuw )[8Iv\Q?hZ A&x0 `ϲRj3xm\JQ$#0<~p;|:p-eUw=Yک#wiifEQz.R <&%`j@#w>}X=sLD|}(),#'#.C ۔u IENDB`werkzeug-0.14.1/tests/multipart/webkit3-2png1txt/request.txt000066400000000000000000000045501322225165500241200ustar00rootroot00000000000000------WebKitFormBoundaryjdSFhcARk8fyGNy6 Content-Disposition: form-data; name="file1"; filename="gtk-apply.png" Content-Type: image/png PNG  IHDR sBIT|d pHYs B(xtEXtSoftwarewww.inkscape.org<gIDAT8}hU9Ϲ/Ͻn8ӕ-g۰nb S ph bGdFhi2 QvBj-s+w˽86\;!n LY\4~<5GC*RF_n~@":.|L^x>ii) x_S $F zuIL'Fn|ڠʷ{b0n#@tsҠіRYM¥ZG=0|y%t). 
,JdFރI#󷢫ݸ}V{%F[\g֋lOAgﱸnZn5%Fj*^8BhS8tfOoC[:RY9o Qz۷iM*c>y87Ph*պy|nv\ae梂Abe+=oTCp݌٬]8Ŭn̴\ 5 +'{& $# Ð#r!8i;= B 2hYKjvnl'\0+n۳-||Y:y1wS-nK5YRJc1iDdf-&M흭?j2*L'J󷔽*Ȧ˱p'@X VRu-r_iNۡ=ⵣA] 1u:DB %s_Heӏx̘0؎Xɤ PǘAN`H3r䉟bwIENDB` ------WebKitFormBoundaryjdSFhcARk8fyGNy6 Content-Disposition: form-data; name="file2"; filename="gtk-no.png" Content-Type: image/png PNG  IHDR sBIT|d pHYs B(xtEXtSoftwarewww.inkscape.org<5IDAT8kG?3m241۞Rr0JL/S@(zS9[ ƥ\5R,㬤%K_ GInyo?U"97*u_Æ)p^D_R ^lfU9.N48 v`k_(!{wX xsshc8&nȔϧO%W"?|TZVK%(K\q"iJnsppЎvQ.nݚ NN$!jeqr) G 0nڰ^DZ" $Ȋ{Eju :P׀dوENiׁqܝ}\p}Q.i>efa«cc$IW'4W28R]\>ިXU10]4ڽ^l<H| 0kq {c`n7Z\kqe ht >40`w |$'ίCSuw )[8Iv\Q?hZ A&x0 `ϲRj3xm\JQ$#0<~p;|:p-eUw=Yک#wiifEQz.R <&%`j@#w>}X=sLD|}(),#'#.C ۔u IENDB` ------WebKitFormBoundaryjdSFhcARk8fyGNy6 Content-Disposition: form-data; name="text" this is another text with ümläüts ------WebKitFormBoundaryjdSFhcARk8fyGNy6-- werkzeug-0.14.1/tests/multipart/webkit3-2png1txt/text.txt000066400000000000000000000000441322225165500234060ustar00rootroot00000000000000this is another text with ümläütswerkzeug-0.14.1/tests/res/000077500000000000000000000000001322225165500153765ustar00rootroot00000000000000werkzeug-0.14.1/tests/res/chunked.txt000066400000000000000000000005641322225165500175650ustar00rootroot0000000000000094 ----------------------------898239224156930639461866 Content-Disposition: form-data; name="file"; filename="test.txt" Content-Type: text/plain f This is a test 2 65 ----------------------------898239224156930639461866 Content-Disposition: form-data; name="type" a text/plain 3a ----------------------------898239224156930639461866-- 0 werkzeug-0.14.1/tests/res/test.txt000066400000000000000000000000061322225165500171120ustar00rootroot00000000000000FOUND werkzeug-0.14.1/tests/test_compat.py000066400000000000000000000015271322225165500175060ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.compat ~~~~~~~~~~~~ Ensure that old stuff does not break on update. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. """ # This file shouldn't be linted: # flake8: noqa import warnings from werkzeug.wrappers import Response from werkzeug.test import create_environ def test_old_imports(): from werkzeug.utils import Headers, MultiDict, CombinedMultiDict, \ Headers, EnvironHeaders from werkzeug.http import Accept, MIMEAccept, CharsetAccept, \ LanguageAccept, ETags, HeaderSet, WWWAuthenticate, \ Authorization def test_exposed_werkzeug_mod(): import werkzeug for key in werkzeug.__all__: # deprecated, skip it if key in ('templates', 'Template'): continue getattr(werkzeug, key) werkzeug-0.14.1/tests/test_datastructures.py000066400000000000000000001023621322225165500212770ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.datastructures ~~~~~~~~~~~~~~~~~~~~ Tests the functionality of the provided Werkzeug datastructures. Classes prefixed with an underscore are mixins and are not discovered by the test runner. TODO: - FileMultiDict - Immutable types undertested - Split up dict tests :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. 
""" from __future__ import with_statement import pytest from tests import strict_eq import pickle from contextlib import contextmanager from copy import copy, deepcopy from werkzeug import datastructures from werkzeug._compat import iterkeys, itervalues, iteritems, iterlists, \ iterlistvalues, text_type, PY2 from werkzeug.exceptions import BadRequestKeyError class TestNativeItermethods(object): def test_basic(self): @datastructures.native_itermethods(['keys', 'values', 'items']) class StupidDict(object): def keys(self, multi=1): return iter(['a', 'b', 'c'] * multi) def values(self, multi=1): return iter([1, 2, 3] * multi) def items(self, multi=1): return iter(zip(iterkeys(self, multi=multi), itervalues(self, multi=multi))) d = StupidDict() expected_keys = ['a', 'b', 'c'] expected_values = [1, 2, 3] expected_items = list(zip(expected_keys, expected_values)) assert list(iterkeys(d)) == expected_keys assert list(itervalues(d)) == expected_values assert list(iteritems(d)) == expected_items assert list(iterkeys(d, 2)) == expected_keys * 2 assert list(itervalues(d, 2)) == expected_values * 2 assert list(iteritems(d, 2)) == expected_items * 2 class _MutableMultiDictTests(object): storage_class = None def test_pickle(self): cls = self.storage_class def create_instance(module=None): if module is None: d = cls() else: old = cls.__module__ cls.__module__ = module d = cls() cls.__module__ = old d.setlist(b'foo', [1, 2, 3, 4]) d.setlist(b'bar', b'foo bar baz'.split()) return d for protocol in range(pickle.HIGHEST_PROTOCOL + 1): d = create_instance() s = pickle.dumps(d, protocol) ud = pickle.loads(s) assert type(ud) == type(d) assert ud == d alternative = pickle.dumps(create_instance('werkzeug'), protocol) assert pickle.loads(alternative) == d ud[b'newkey'] = b'bla' assert ud != d def test_basic_interface(self): md = self.storage_class() assert isinstance(md, dict) mapping = [('a', 1), ('b', 2), ('a', 2), ('d', 3), ('a', 1), ('a', 3), ('d', 4), ('c', 3)] md = self.storage_class(mapping) # simple getitem gives the first value assert md['a'] == 1 assert md['c'] == 3 with pytest.raises(KeyError): md['e'] assert md.get('a') == 1 # list getitem assert md.getlist('a') == [1, 2, 1, 3] assert md.getlist('d') == [3, 4] # do not raise if key not found assert md.getlist('x') == [] # simple setitem overwrites all values md['a'] = 42 assert md.getlist('a') == [42] # list setitem md.setlist('a', [1, 2, 3]) assert md['a'] == 1 assert md.getlist('a') == [1, 2, 3] # verify that it does not change original lists l1 = [1, 2, 3] md.setlist('a', l1) del l1[:] assert md['a'] == 1 # setdefault, setlistdefault assert md.setdefault('u', 23) == 23 assert md.getlist('u') == [23] del md['u'] md.setlist('u', [-1, -2]) # delitem del md['u'] with pytest.raises(KeyError): md['u'] del md['d'] assert md.getlist('d') == [] # keys, values, items, lists assert list(sorted(md.keys())) == ['a', 'b', 'c'] assert list(sorted(iterkeys(md))) == ['a', 'b', 'c'] assert list(sorted(itervalues(md))) == [1, 2, 3] assert list(sorted(itervalues(md))) == [1, 2, 3] assert list(sorted(md.items())) == [('a', 1), ('b', 2), ('c', 3)] assert list(sorted(md.items(multi=True))) == \ [('a', 1), ('a', 2), ('a', 3), ('b', 2), ('c', 3)] assert list(sorted(iteritems(md))) == [('a', 1), ('b', 2), ('c', 3)] assert list(sorted(iteritems(md, multi=True))) == \ [('a', 1), ('a', 2), ('a', 3), ('b', 2), ('c', 3)] assert list(sorted(md.lists())) == \ [('a', [1, 2, 3]), ('b', [2]), ('c', [3])] assert list(sorted(iterlists(md))) == \ [('a', [1, 2, 3]), ('b', [2]), ('c', [3])] 
# copy method c = md.copy() assert c['a'] == 1 assert c.getlist('a') == [1, 2, 3] # copy method 2 c = copy(md) assert c['a'] == 1 assert c.getlist('a') == [1, 2, 3] # deepcopy method c = md.deepcopy() assert c['a'] == 1 assert c.getlist('a') == [1, 2, 3] # deepcopy method 2 c = deepcopy(md) assert c['a'] == 1 assert c.getlist('a') == [1, 2, 3] # update with a multidict od = self.storage_class([('a', 4), ('a', 5), ('y', 0)]) md.update(od) assert md.getlist('a') == [1, 2, 3, 4, 5] assert md.getlist('y') == [0] # update with a regular dict md = c od = {'a': 4, 'y': 0} md.update(od) assert md.getlist('a') == [1, 2, 3, 4] assert md.getlist('y') == [0] # pop, poplist, popitem, popitemlist assert md.pop('y') == 0 assert 'y' not in md assert md.poplist('a') == [1, 2, 3, 4] assert 'a' not in md assert md.poplist('missing') == [] # remaining: b=2, c=3 popped = md.popitem() assert popped in [('b', 2), ('c', 3)] popped = md.popitemlist() assert popped in [('b', [2]), ('c', [3])] # type conversion md = self.storage_class({'a': '4', 'b': ['2', '3']}) assert md.get('a', type=int) == 4 assert md.getlist('b', type=int) == [2, 3] # repr md = self.storage_class([('a', 1), ('a', 2), ('b', 3)]) assert "('a', 1)" in repr(md) assert "('a', 2)" in repr(md) assert "('b', 3)" in repr(md) # add and getlist md.add('c', '42') md.add('c', '23') assert md.getlist('c') == ['42', '23'] md.add('c', 'blah') assert md.getlist('c', type=int) == [42, 23] # setdefault md = self.storage_class() md.setdefault('x', []).append(42) md.setdefault('x', []).append(23) assert md['x'] == [42, 23] # to dict md = self.storage_class() md['foo'] = 42 md.add('bar', 1) md.add('bar', 2) assert md.to_dict() == {'foo': 42, 'bar': 1} assert md.to_dict(flat=False) == {'foo': [42], 'bar': [1, 2]} # popitem from empty dict with pytest.raises(KeyError): self.storage_class().popitem() with pytest.raises(KeyError): self.storage_class().popitemlist() # key errors are of a special type with pytest.raises(BadRequestKeyError): self.storage_class()[42] # setlist works md = self.storage_class() md['foo'] = 42 md.setlist('foo', [1, 2]) assert md.getlist('foo') == [1, 2] class _ImmutableDictTests(object): storage_class = None def test_follows_dict_interface(self): cls = self.storage_class data = {'foo': 1, 'bar': 2, 'baz': 3} d = cls(data) assert d['foo'] == 1 assert d['bar'] == 2 assert d['baz'] == 3 assert sorted(d.keys()) == ['bar', 'baz', 'foo'] assert 'foo' in d assert 'foox' not in d assert len(d) == 3 def test_copies_are_mutable(self): cls = self.storage_class immutable = cls({'a': 1}) with pytest.raises(TypeError): immutable.pop('a') mutable = immutable.copy() mutable.pop('a') assert 'a' in immutable assert mutable is not immutable assert copy(immutable) is immutable def test_dict_is_hashable(self): cls = self.storage_class immutable = cls({'a': 1, 'b': 2}) immutable2 = cls({'a': 2, 'b': 2}) x = set([immutable]) assert immutable in x assert immutable2 not in x x.discard(immutable) assert immutable not in x assert immutable2 not in x x.add(immutable2) assert immutable not in x assert immutable2 in x x.add(immutable) assert immutable in x assert immutable2 in x class TestImmutableTypeConversionDict(_ImmutableDictTests): storage_class = datastructures.ImmutableTypeConversionDict class TestImmutableMultiDict(_ImmutableDictTests): storage_class = datastructures.ImmutableMultiDict def test_multidict_is_hashable(self): cls = self.storage_class immutable = cls({'a': [1, 2], 'b': 2}) immutable2 = cls({'a': [1], 'b': 2}) x = set([immutable]) assert immutable in x 
assert immutable2 not in x x.discard(immutable) assert immutable not in x assert immutable2 not in x x.add(immutable2) assert immutable not in x assert immutable2 in x x.add(immutable) assert immutable in x assert immutable2 in x class TestImmutableDict(_ImmutableDictTests): storage_class = datastructures.ImmutableDict class TestImmutableOrderedMultiDict(_ImmutableDictTests): storage_class = datastructures.ImmutableOrderedMultiDict def test_ordered_multidict_is_hashable(self): a = self.storage_class([('a', 1), ('b', 1), ('a', 2)]) b = self.storage_class([('a', 1), ('a', 2), ('b', 1)]) assert hash(a) != hash(b) class TestMultiDict(_MutableMultiDictTests): storage_class = datastructures.MultiDict def test_multidict_pop(self): make_d = lambda: self.storage_class({'foo': [1, 2, 3, 4]}) d = make_d() assert d.pop('foo') == 1 assert not d d = make_d() assert d.pop('foo', 32) == 1 assert not d d = make_d() assert d.pop('foos', 32) == 32 assert d with pytest.raises(KeyError): d.pop('foos') def test_multidict_pop_raise_badrequestkeyerror_for_empty_list_value(self): mapping = [('a', 'b'), ('a', 'c')] md = self.storage_class(mapping) md.setlistdefault('empty', []) with pytest.raises(KeyError): md.pop('empty') def test_multidict_popitem_raise_badrequestkeyerror_for_empty_list_value(self): mapping = [] md = self.storage_class(mapping) md.setlistdefault('empty', []) with pytest.raises(KeyError): md.popitem() def test_setlistdefault(self): md = self.storage_class() assert md.setlistdefault('u', [-1, -2]) == [-1, -2] assert md.getlist('u') == [-1, -2] assert md['u'] == -1 def test_iter_interfaces(self): mapping = [('a', 1), ('b', 2), ('a', 2), ('d', 3), ('a', 1), ('a', 3), ('d', 4), ('c', 3)] md = self.storage_class(mapping) assert list(zip(md.keys(), md.listvalues())) == list(md.lists()) assert list(zip(md, iterlistvalues(md))) == list(iterlists(md)) assert list(zip(iterkeys(md), iterlistvalues(md))) == \ list(iterlists(md)) @pytest.mark.skipif(not PY2, reason='viewmethods work only for the 2-nd version.') def test_view_methods(self): mapping = [('a', 'b'), ('a', 'c')] md = self.storage_class(mapping) vi = md.viewitems() vk = md.viewkeys() vv = md.viewvalues() assert list(vi) == list(md.items()) assert list(vk) == list(md.keys()) assert list(vv) == list(md.values()) md['k'] = 'n' assert list(vi) == list(md.items()) assert list(vk) == list(md.keys()) assert list(vv) == list(md.values()) @pytest.mark.skipif(not PY2, reason='viewmethods work only for the 2-nd version.') def test_viewitems_with_multi(self): mapping = [('a', 'b'), ('a', 'c')] md = self.storage_class(mapping) vi = md.viewitems(multi=True) assert list(vi) == list(md.items(multi=True)) md['k'] = 'n' assert list(vi) == list(md.items(multi=True)) def test_getitem_raise_badrequestkeyerror_for_empty_list_value(self): mapping = [('a', 'b'), ('a', 'c')] md = self.storage_class(mapping) md.setlistdefault('empty', []) with pytest.raises(KeyError): md['empty'] class TestOrderedMultiDict(_MutableMultiDictTests): storage_class = datastructures.OrderedMultiDict def test_ordered_interface(self): cls = self.storage_class d = cls() assert not d d.add('foo', 'bar') assert len(d) == 1 d.add('foo', 'baz') assert len(d) == 1 assert list(iteritems(d)) == [('foo', 'bar')] assert list(d) == ['foo'] assert list(iteritems(d, multi=True)) == \ [('foo', 'bar'), ('foo', 'baz')] del d['foo'] assert not d assert len(d) == 0 assert list(d) == [] d.update([('foo', 1), ('foo', 2), ('bar', 42)]) d.add('foo', 3) assert d.getlist('foo') == [1, 2, 3] assert d.getlist('bar') == [42] 
assert list(iteritems(d)) == [('foo', 1), ('bar', 42)] expected = ['foo', 'bar'] assert list(d.keys()) == expected assert list(d) == expected assert list(iterkeys(d)) == expected assert list(iteritems(d, multi=True)) == \ [('foo', 1), ('foo', 2), ('bar', 42), ('foo', 3)] assert len(d) == 2 assert d.pop('foo') == 1 assert d.pop('blafasel', None) is None assert d.pop('blafasel', 42) == 42 assert len(d) == 1 assert d.poplist('bar') == [42] assert not d d.get('missingkey') is None d.add('foo', 42) d.add('foo', 23) d.add('bar', 2) d.add('foo', 42) assert d == datastructures.MultiDict(d) id = self.storage_class(d) assert d == id d.add('foo', 2) assert d != id d.update({'blah': [1, 2, 3]}) assert d['blah'] == 1 assert d.getlist('blah') == [1, 2, 3] # setlist works d = self.storage_class() d['foo'] = 42 d.setlist('foo', [1, 2]) assert d.getlist('foo') == [1, 2] with pytest.raises(BadRequestKeyError): d.pop('missing') with pytest.raises(BadRequestKeyError): d['missing'] # popping d = self.storage_class() d.add('foo', 23) d.add('foo', 42) d.add('foo', 1) assert d.popitem() == ('foo', 23) with pytest.raises(BadRequestKeyError): d.popitem() assert not d d.add('foo', 23) d.add('foo', 42) d.add('foo', 1) assert d.popitemlist() == ('foo', [23, 42, 1]) with pytest.raises(BadRequestKeyError): d.popitemlist() # Unhashable d = self.storage_class() d.add('foo', 23) pytest.raises(TypeError, hash, d) def test_iterables(self): a = datastructures.MultiDict((("key_a", "value_a"),)) b = datastructures.MultiDict((("key_b", "value_b"),)) ab = datastructures.CombinedMultiDict((a, b)) assert sorted(ab.lists()) == [('key_a', ['value_a']), ('key_b', ['value_b'])] assert sorted(ab.listvalues()) == [['value_a'], ['value_b']] assert sorted(ab.keys()) == ["key_a", "key_b"] assert sorted(iterlists(ab)) == [('key_a', ['value_a']), ('key_b', ['value_b'])] assert sorted(iterlistvalues(ab)) == [['value_a'], ['value_b']] assert sorted(iterkeys(ab)) == ["key_a", "key_b"] class TestTypeConversionDict(object): storage_class = datastructures.TypeConversionDict def test_value_conversion(self): d = self.storage_class(foo='1') assert d.get('foo', type=int) == 1 def test_return_default_when_conversion_is_not_possible(self): d = self.storage_class(foo='bar') assert d.get('foo', default=-1, type=int) == -1 def test_propagate_exceptions_in_conversion(self): d = self.storage_class(foo='bar') switch = {'a': 1} with pytest.raises(KeyError): d.get('foo', type=lambda x: switch[x]) class TestCombinedMultiDict(object): storage_class = datastructures.CombinedMultiDict def test_basic_interface(self): d1 = datastructures.MultiDict([('foo', '1')]) d2 = datastructures.MultiDict([('bar', '2'), ('bar', '3')]) d = self.storage_class([d1, d2]) # lookup assert d['foo'] == '1' assert d['bar'] == '2' assert d.getlist('bar') == ['2', '3'] assert sorted(d.items()) == [('bar', '2'), ('foo', '1')] assert sorted(d.items(multi=True)) == \ [('bar', '2'), ('bar', '3'), ('foo', '1')] assert 'missingkey' not in d assert 'foo' in d # type lookup assert d.get('foo', type=int) == 1 assert d.getlist('bar', type=int) == [2, 3] # get key errors for missing stuff with pytest.raises(KeyError): d['missing'] # make sure that they are immutable with pytest.raises(TypeError): d['foo'] = 'blub' # copies are immutable d = d.copy() with pytest.raises(TypeError): d['foo'] = 'blub' # make sure lists merges md1 = datastructures.MultiDict((("foo", "bar"),)) md2 = datastructures.MultiDict((("foo", "blafasel"),)) x = self.storage_class((md1, md2)) assert list(iterlists(x)) == [('foo', 
['bar', 'blafasel'])] def test_length(self): d1 = datastructures.MultiDict([('foo', '1')]) d2 = datastructures.MultiDict([('bar', '2')]) assert len(d1) == len(d2) == 1 d = self.storage_class([d1, d2]) assert len(d) == 2 d1.clear() assert len(d1) == 0 assert len(d) == 1 class TestHeaders(object): storage_class = datastructures.Headers def test_basic_interface(self): headers = self.storage_class() headers.add('Content-Type', 'text/plain') headers.add('X-Foo', 'bar') assert 'x-Foo' in headers assert 'Content-type' in headers headers['Content-Type'] = 'foo/bar' assert headers['Content-Type'] == 'foo/bar' assert len(headers.getlist('Content-Type')) == 1 # list conversion assert headers.to_wsgi_list() == [ ('Content-Type', 'foo/bar'), ('X-Foo', 'bar') ] assert str(headers) == ( "Content-Type: foo/bar\r\n" "X-Foo: bar\r\n" "\r\n" ) assert str(self.storage_class()) == "\r\n" # extended add headers.add('Content-Disposition', 'attachment', filename='foo') assert headers['Content-Disposition'] == 'attachment; filename=foo' headers.add('x', 'y', z='"') assert headers['x'] == r'y; z="\""' def test_defaults_and_conversion(self): # defaults headers = self.storage_class([ ('Content-Type', 'text/plain'), ('X-Foo', 'bar'), ('X-Bar', '1'), ('X-Bar', '2') ]) assert headers.getlist('x-bar') == ['1', '2'] assert headers.get('x-Bar') == '1' assert headers.get('Content-Type') == 'text/plain' assert headers.setdefault('X-Foo', 'nope') == 'bar' assert headers.setdefault('X-Bar', 'nope') == '1' assert headers.setdefault('X-Baz', 'quux') == 'quux' assert headers.setdefault('X-Baz', 'nope') == 'quux' headers.pop('X-Baz') # type conversion assert headers.get('x-bar', type=int) == 1 assert headers.getlist('x-bar', type=int) == [1, 2] # list like operations assert headers[0] == ('Content-Type', 'text/plain') assert headers[:1] == self.storage_class([('Content-Type', 'text/plain')]) del headers[:2] del headers[-1] assert headers == self.storage_class([('X-Bar', '1')]) def test_copying(self): a = self.storage_class([('foo', 'bar')]) b = a.copy() a.add('foo', 'baz') assert a.getlist('foo') == ['bar', 'baz'] assert b.getlist('foo') == ['bar'] def test_popping(self): headers = self.storage_class([('a', 1)]) assert headers.pop('a') == 1 assert headers.pop('b', 2) == 2 with pytest.raises(KeyError): headers.pop('c') def test_set_arguments(self): a = self.storage_class() a.set('Content-Disposition', 'useless') a.set('Content-Disposition', 'attachment', filename='foo') assert a['Content-Disposition'] == 'attachment; filename=foo' def test_reject_newlines(self): h = self.storage_class() for variation in 'foo\nbar', 'foo\r\nbar', 'foo\rbar': with pytest.raises(ValueError): h['foo'] = variation with pytest.raises(ValueError): h.add('foo', variation) with pytest.raises(ValueError): h.add('foo', 'test', option=variation) with pytest.raises(ValueError): h.set('foo', variation) with pytest.raises(ValueError): h.set('foo', 'test', option=variation) def test_slicing(self): # there's nothing wrong with these being native strings # Headers doesn't care about the data types h = self.storage_class() h.set('X-Foo-Poo', 'bleh') h.set('Content-Type', 'application/whocares') h.set('X-Forwarded-For', '192.168.0.123') h[:] = [(k, v) for k, v in h if k.startswith(u'X-')] assert list(h) == [ ('X-Foo-Poo', 'bleh'), ('X-Forwarded-For', '192.168.0.123') ] def test_bytes_operations(self): h = self.storage_class() h.set('X-Foo-Poo', 'bleh') h.set('X-Whoops', b'\xff') assert h.get('x-foo-poo', as_bytes=True) == b'bleh' assert h.get('x-whoops', 
as_bytes=True) == b'\xff' def test_to_wsgi_list(self): h = self.storage_class() h.set(u'Key', u'Value') for key, value in h.to_wsgi_list(): if PY2: strict_eq(key, b'Key') strict_eq(value, b'Value') else: strict_eq(key, u'Key') strict_eq(value, u'Value') class TestEnvironHeaders(object): storage_class = datastructures.EnvironHeaders def test_basic_interface(self): # this happens in multiple WSGI servers because they # use a vary naive way to convert the headers; broken_env = { 'HTTP_CONTENT_TYPE': 'text/html', 'CONTENT_TYPE': 'text/html', 'HTTP_CONTENT_LENGTH': '0', 'CONTENT_LENGTH': '0', 'HTTP_ACCEPT': '*', 'wsgi.version': (1, 0) } headers = self.storage_class(broken_env) assert headers assert len(headers) == 3 assert sorted(headers) == [ ('Accept', '*'), ('Content-Length', '0'), ('Content-Type', 'text/html') ] assert not self.storage_class({'wsgi.version': (1, 0)}) assert len(self.storage_class({'wsgi.version': (1, 0)})) == 0 assert 42 not in headers def test_skip_empty_special_vars(self): env = { 'HTTP_X_FOO': '42', 'CONTENT_TYPE': '', 'CONTENT_LENGTH': '', } headers = self.storage_class(env) assert dict(headers) == {'X-Foo': '42'} env = { 'HTTP_X_FOO': '42', 'CONTENT_TYPE': '', 'CONTENT_LENGTH': '0', } headers = self.storage_class(env) assert dict(headers) == {'X-Foo': '42', 'Content-Length': '0'} def test_return_type_is_unicode(self): # environ contains native strings; we return unicode headers = self.storage_class({ 'HTTP_FOO': '\xe2\x9c\x93', 'CONTENT_TYPE': 'text/plain', }) assert headers['Foo'] == u"\xe2\x9c\x93" assert isinstance(headers['Foo'], text_type) assert isinstance(headers['Content-Type'], text_type) iter_output = dict(iter(headers)) assert iter_output['Foo'] == u"\xe2\x9c\x93" assert isinstance(iter_output['Foo'], text_type) assert isinstance(iter_output['Content-Type'], text_type) def test_bytes_operations(self): foo_val = '\xff' h = self.storage_class({ 'HTTP_X_FOO': foo_val }) assert h.get('x-foo', as_bytes=True) == b'\xff' assert h.get('x-foo') == u'\xff' class TestHeaderSet(object): storage_class = datastructures.HeaderSet def test_basic_interface(self): hs = self.storage_class() hs.add('foo') hs.add('bar') assert 'Bar' in hs assert hs.find('foo') == 0 assert hs.find('BAR') == 1 assert hs.find('baz') < 0 hs.discard('missing') hs.discard('foo') assert hs.find('foo') < 0 assert hs.find('bar') == 0 with pytest.raises(IndexError): hs.index('missing') assert hs.index('bar') == 0 assert hs hs.clear() assert not hs class TestImmutableList(object): storage_class = datastructures.ImmutableList def test_list_hashable(self): data = (1, 2, 3, 4) store = self.storage_class(data) assert hash(data) == hash(store) assert data != store def make_call_asserter(func=None): """Utility to assert a certain number of function calls. :param func: Additional callback for each function call. 
>>> assert_calls, func = make_call_asserter() >>> with assert_calls(2): func() func() """ calls = [0] @contextmanager def asserter(count, msg=None): calls[0] = 0 yield assert calls[0] == count def wrapped(*args, **kwargs): calls[0] += 1 if func is not None: return func(*args, **kwargs) return asserter, wrapped class TestCallbackDict(object): storage_class = datastructures.CallbackDict def test_callback_dict_reads(self): assert_calls, func = make_call_asserter() initial = {'a': 'foo', 'b': 'bar'} dct = self.storage_class(initial=initial, on_update=func) with assert_calls(0, 'callback triggered by read-only method'): # read-only methods dct['a'] dct.get('a') pytest.raises(KeyError, lambda: dct['x']) 'a' in dct list(iter(dct)) dct.copy() with assert_calls(0, 'callback triggered without modification'): # methods that may write but don't dct.pop('z', None) dct.setdefault('a') def test_callback_dict_writes(self): assert_calls, func = make_call_asserter() initial = {'a': 'foo', 'b': 'bar'} dct = self.storage_class(initial=initial, on_update=func) with assert_calls(8, 'callback not triggered by write method'): # always-write methods dct['z'] = 123 dct['z'] = 123 # must trigger again del dct['z'] dct.pop('b', None) dct.setdefault('x') dct.popitem() dct.update([]) dct.clear() with assert_calls(0, 'callback triggered by failed del'): pytest.raises(KeyError, lambda: dct.__delitem__('x')) with assert_calls(0, 'callback triggered by failed pop'): pytest.raises(KeyError, lambda: dct.pop('x')) class TestCacheControl(object): def test_repr(self): cc = datastructures.RequestCacheControl( [("max-age", "0"), ("private", "True")], ) assert repr(cc) == "" class TestAccept(object): storage_class = datastructures.Accept def test_accept_basic(self): accept = self.storage_class([('tinker', 0), ('tailor', 0.333), ('soldier', 0.667), ('sailor', 1)]) # check __getitem__ on indices assert accept[3] == ('tinker', 0) assert accept[2] == ('tailor', 0.333) assert accept[1] == ('soldier', 0.667) assert accept[0], ('sailor', 1) # check __getitem__ on string assert accept['tinker'] == 0 assert accept['tailor'] == 0.333 assert accept['soldier'] == 0.667 assert accept['sailor'] == 1 assert accept['spy'] == 0 # check quality method assert accept.quality('tinker') == 0 assert accept.quality('tailor') == 0.333 assert accept.quality('soldier') == 0.667 assert accept.quality('sailor') == 1 assert accept.quality('spy') == 0 # check __contains__ assert 'sailor' in accept assert 'spy' not in accept # check index method assert accept.index('tinker') == 3 assert accept.index('tailor') == 2 assert accept.index('soldier') == 1 assert accept.index('sailor') == 0 with pytest.raises(ValueError): accept.index('spy') # check find method assert accept.find('tinker') == 3 assert accept.find('tailor') == 2 assert accept.find('soldier') == 1 assert accept.find('sailor') == 0 assert accept.find('spy') == -1 # check to_header method assert accept.to_header() == \ 'sailor,soldier;q=0.667,tailor;q=0.333,tinker;q=0' # check best_match method assert accept.best_match(['tinker', 'tailor', 'soldier', 'sailor'], default=None) == 'sailor' assert accept.best_match(['tinker', 'tailor', 'soldier'], default=None) == 'soldier' assert accept.best_match(['tinker', 'tailor'], default=None) == \ 'tailor' assert accept.best_match(['tinker'], default=None) is None assert accept.best_match(['tinker'], default='x') == 'x' def test_accept_wildcard(self): accept = self.storage_class([('*', 0), ('asterisk', 1)]) assert '*' in accept assert accept.best_match(['asterisk', 
'star'], default=None) == \ 'asterisk' assert accept.best_match(['star'], default=None) is None def test_accept_keep_order(self): accept = self.storage_class([('*', 1)]) assert accept.best_match(["alice", "bob"]) == "alice" assert accept.best_match(["bob", "alice"]) == "bob" accept = self.storage_class([('alice', 1), ('bob', 1)]) assert accept.best_match(["alice", "bob"]) == "alice" assert accept.best_match(["bob", "alice"]) == "bob" def test_accept_wildcard_specificity(self): accept = self.storage_class([('asterisk', 0), ('star', 0.5), ('*', 1)]) assert accept.best_match(['star', 'asterisk'], default=None) == 'star' assert accept.best_match(['asterisk', 'star'], default=None) == 'star' assert accept.best_match(['asterisk', 'times'], default=None) == \ 'times' assert accept.best_match(['asterisk'], default=None) is None class TestMIMEAccept(object): storage_class = datastructures.MIMEAccept def test_accept_wildcard_subtype(self): accept = self.storage_class([('text/*', 1)]) assert accept.best_match(['text/html'], default=None) == 'text/html' assert accept.best_match(['image/png', 'text/plain']) == 'text/plain' assert accept.best_match(['image/png'], default=None) is None def test_accept_wildcard_specificity(self): accept = self.storage_class([('*/*', 1), ('text/html', 1)]) assert accept.best_match(['image/png', 'text/html']) == 'text/html' assert accept.best_match(['image/png', 'text/plain']) == 'image/png' accept = self.storage_class([('*/*', 1), ('text/html', 1), ('image/*', 1)]) assert accept.best_match(['image/png', 'text/html']) == 'text/html' assert accept.best_match(['text/plain', 'image/png']) == 'image/png' class TestFileStorage(object): storage_class = datastructures.FileStorage def test_mimetype_always_lowercase(self): file_storage = self.storage_class(content_type='APPLICATION/JSON') assert file_storage.mimetype == 'application/json' def test_bytes_proper_sentinel(self): # ensure we iterate over new lines and don't enter into an infinite loop import io unicode_storage = self.storage_class(io.StringIO(u"one\ntwo")) for idx, line in enumerate(unicode_storage): assert idx < 2 assert idx == 1 binary_storage = self.storage_class(io.BytesIO(b"one\ntwo")) for idx, line in enumerate(binary_storage): assert idx < 2 assert idx == 1 werkzeug-0.14.1/tests/test_debug.py000066400000000000000000000245161322225165500173140ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.debug ~~~~~~~~~~~ Tests some debug utilities. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. 
""" import sys import re import io import pytest import requests from werkzeug.debug import get_machine_id from werkzeug.debug.repr import debug_repr, DebugReprGenerator, \ dump, helper from werkzeug.debug.console import HTMLStringO from werkzeug.debug.tbtools import Traceback from werkzeug._compat import PY2 class TestDebugRepr(object): def test_basic_repr(self): assert debug_repr([]) == u'[]' assert debug_repr([1, 2]) == \ u'[1, 2]' assert debug_repr([1, 'test']) == \ u'[1, \'test\']' assert debug_repr([None]) == \ u'[None]' def test_string_repr(self): assert debug_repr('') == u'\'\'' assert debug_repr('foo') == u'\'foo\'' assert debug_repr('s' * 80) == u'\''\ + 's' * 70 + ''\ + 's' * 10 + '\'' assert debug_repr('<' * 80) == u'\''\ + '<' * 70 + ''\ + '<' * 10 + '\'' def test_sequence_repr(self): assert debug_repr(list(range(20))) == ( u'[0, 1, ' u'2, 3, ' u'4, 5, ' u'6, 7, ' u'8, ' u'9, 10, ' u'11, 12, ' u'13, 14, ' u'15, 16, ' u'17, 18, ' u'19]' ) def test_mapping_repr(self): assert debug_repr({}) == u'{}' assert debug_repr({'foo': 42}) == ( u'{\'foo\'' u': 42' u'}' ) assert debug_repr(dict(zip(range(10), [None] * 10))) == ( u'{0: None, 1: None, 2: None, 3: None, 4: None, 5: None, 6: None, 7: None, 8: None, 9: None}' # noqa ) assert debug_repr((1, 'zwei', u'drei')) == ( u'(1, \'' u'zwei\', %s\'drei\')' ) % ('u' if PY2 else '') def test_custom_repr(self): class Foo(object): def __repr__(self): return '' assert debug_repr(Foo()) == \ '<Foo 42>' def test_list_subclass_repr(self): class MyList(list): pass assert debug_repr(MyList([1, 2])) == ( u'tests.test_debug.MyList([' u'1, 2])' ) def test_regex_repr(self): assert debug_repr(re.compile(r'foo\d')) == \ u're.compile(r\'foo\\d\')' # No ur'' in Py3 # http://bugs.python.org/issue15096 assert debug_repr(re.compile(u'foo\\d')) == ( u're.compile(%sr\'foo\\d\')' % ('u' if PY2 else '') ) def test_set_repr(self): assert debug_repr(frozenset('x')) == \ u'frozenset([\'x\'])' assert debug_repr(set('x')) == \ u'set([\'x\'])' def test_recursive_repr(self): a = [1] a.append(a) assert debug_repr(a) == u'[1, [...]]' def test_broken_repr(self): class Foo(object): def __repr__(self): raise Exception('broken!') assert debug_repr(Foo()) == ( u'<broken repr (Exception: ' u'broken!)>' ) class Foo(object): x = 42 y = 23 def __init__(self): self.z = 15 class TestDebugHelpers(object): def test_object_dumping(self): drg = DebugReprGenerator() out = drg.dump_object(Foo()) assert re.search('Details for tests.test_debug.Foo object at', out) assert re.search('x.*42', out, flags=re.DOTALL) assert re.search('y.*23', out, flags=re.DOTALL) assert re.search('z.*15', out, flags=re.DOTALL) out = drg.dump_object({'x': 42, 'y': 23}) assert re.search('Contents of', out) assert re.search('x.*42', out, flags=re.DOTALL) assert re.search('y.*23', out, flags=re.DOTALL) out = drg.dump_object({'x': 42, 'y': 23, 23: 11}) assert not re.search('Contents of', out) out = drg.dump_locals({'x': 42, 'y': 23}) assert re.search('Local variables in frame', out) assert re.search('x.*42', out, flags=re.DOTALL) assert re.search('y.*23', out, flags=re.DOTALL) def test_debug_dump(self): old = sys.stdout sys.stdout = HTMLStringO() try: dump([1, 2, 3]) x = sys.stdout.reset() dump() y = sys.stdout.reset() finally: sys.stdout = old assert 'Details for list object at' in x assert '1' in x assert 'Local variables in frame' in y assert 'x' in y assert 'old' in y def test_debug_help(self): old = sys.stdout sys.stdout = HTMLStringO() try: helper([1, 2, 3]) x = sys.stdout.reset() finally: sys.stdout = old assert 
'Help on list object' in x assert '__delitem__' in x class TestTraceback(object): def test_log(self): try: 1 / 0 except ZeroDivisionError: traceback = Traceback(*sys.exc_info()) buffer_ = io.BytesIO() if PY2 else io.StringIO() traceback.log(buffer_) assert buffer_.getvalue().strip() == traceback.plaintext.strip() def test_sourcelines_encoding(self): source = (u'# -*- coding: latin1 -*-\n\n' u'def foo():\n' u' """höhö"""\n' u' 1 / 0\n' u'foo()').encode('latin1') code = compile(source, filename='lol.py', mode='exec') try: eval(code) except ZeroDivisionError: traceback = Traceback(*sys.exc_info()) frames = traceback.frames assert len(frames) == 3 assert frames[1].filename == 'lol.py' assert frames[2].filename == 'lol.py' class Loader(object): def get_source(self, module): return source frames[1].loader = frames[2].loader = Loader() assert frames[1].sourcelines == frames[2].sourcelines assert [line.code for line in frames[1].get_annotated_lines()] == \ [line.code for line in frames[2].get_annotated_lines()] assert u'höhö' in frames[1].sourcelines[3] def test_filename_encoding(self, tmpdir, monkeypatch): moduledir = tmpdir.mkdir('föö') moduledir.join('bar.py').write('def foo():\n 1/0\n') monkeypatch.syspath_prepend(str(moduledir)) import bar try: bar.foo() except ZeroDivisionError: traceback = Traceback(*sys.exc_info()) assert u'föö' in u'\n'.join(frame.render() for frame in traceback.frames) def test_get_machine_id(): rv = get_machine_id() assert isinstance(rv, bytes) @pytest.mark.parametrize('crash', (True, False)) def test_basic(dev_server, crash): server = dev_server(''' from werkzeug.debug import DebuggedApplication @DebuggedApplication def app(environ, start_response): if {crash}: 1 / 0 start_response('200 OK', [('Content-Type', 'text/html')]) return [b'hello'] '''.format(crash=crash)) r = requests.get(server.url) assert r.status_code == 500 if crash else 200 if crash: assert 'The debugger caught an exception in your WSGI application' \ in r.text else: assert r.text == 'hello' werkzeug-0.14.1/tests/test_exceptions.py000066400000000000000000000055261322225165500204070ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.exceptions ~~~~~~~~~~~~~~~~ The tests for the exception classes. TODO: - This is undertested. HTML is never checked :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. 
""" import pytest from werkzeug import exceptions from werkzeug.wrappers import Response from werkzeug._compat import text_type def test_proxy_exception(): orig_resp = Response('Hello World') with pytest.raises(exceptions.HTTPException) as excinfo: exceptions.abort(orig_resp) resp = excinfo.value.get_response({}) assert resp is orig_resp assert resp.get_data() == b'Hello World' @pytest.mark.parametrize('test', [ (exceptions.BadRequest, 400), (exceptions.Unauthorized, 401), (exceptions.Forbidden, 403), (exceptions.NotFound, 404), (exceptions.MethodNotAllowed, 405, ['GET', 'HEAD']), (exceptions.NotAcceptable, 406), (exceptions.RequestTimeout, 408), (exceptions.Gone, 410), (exceptions.LengthRequired, 411), (exceptions.PreconditionFailed, 412), (exceptions.RequestEntityTooLarge, 413), (exceptions.RequestURITooLarge, 414), (exceptions.UnsupportedMediaType, 415), (exceptions.UnprocessableEntity, 422), (exceptions.Locked, 423), (exceptions.InternalServerError, 500), (exceptions.NotImplemented, 501), (exceptions.BadGateway, 502), (exceptions.ServiceUnavailable, 503) ]) def test_aborter_general(test): exc_type = test[0] args = test[1:] with pytest.raises(exc_type) as exc_info: exceptions.abort(*args) assert type(exc_info.value) is exc_type def test_aborter_custom(): myabort = exceptions.Aborter({1: exceptions.NotFound}) pytest.raises(LookupError, myabort, 404) pytest.raises(exceptions.NotFound, myabort, 1) myabort = exceptions.Aborter(extra={1: exceptions.NotFound}) pytest.raises(exceptions.NotFound, myabort, 404) pytest.raises(exceptions.NotFound, myabort, 1) def test_exception_repr(): exc = exceptions.NotFound() assert text_type(exc) == ( '404 Not Found: The requested URL was not found ' 'on the server. If you entered the URL manually please check your ' 'spelling and try again.') assert repr(exc) == "" exc = exceptions.NotFound('Not There') assert text_type(exc) == '404 Not Found: Not There' assert repr(exc) == "" exc = exceptions.HTTPException('An error message') assert text_type(exc) == '??? Unknown Error: An error message' assert repr(exc) == "" def test_special_exceptions(): exc = exceptions.MethodNotAllowed(['GET', 'HEAD', 'POST']) h = dict(exc.get_headers({})) assert h['Allow'] == 'GET, HEAD, POST' assert 'The method is not allowed' in exc.get_description() werkzeug-0.14.1/tests/test_formparser.py000066400000000000000000000455661322225165500204160ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.formparser ~~~~~~~~~~~~~~~~ Tests the form parsing facilities. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. 
""" from __future__ import with_statement import pytest from os.path import join, dirname from tests import strict_eq from werkzeug import formparser from werkzeug.test import create_environ, Client from werkzeug.wrappers import Request, Response from werkzeug.exceptions import RequestEntityTooLarge from werkzeug.datastructures import MultiDict from werkzeug.formparser import parse_form_data, FormDataParser from werkzeug._compat import BytesIO @Request.application def form_data_consumer(request): result_object = request.args['object'] if result_object == 'text': return Response(repr(request.form['text'])) f = request.files[result_object] return Response(b'\n'.join(( repr(f.filename).encode('ascii'), repr(f.name).encode('ascii'), repr(f.content_type).encode('ascii'), f.stream.read() ))) def get_contents(filename): with open(filename, 'rb') as f: return f.read() class TestFormParser(object): def test_limiting(self): data = b'foo=Hello+World&bar=baz' req = Request.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='application/x-www-form-urlencoded', method='POST') req.max_content_length = 400 strict_eq(req.form['foo'], u'Hello World') req = Request.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='application/x-www-form-urlencoded', method='POST') req.max_form_memory_size = 7 pytest.raises(RequestEntityTooLarge, lambda: req.form['foo']) req = Request.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='application/x-www-form-urlencoded', method='POST') req.max_form_memory_size = 400 strict_eq(req.form['foo'], u'Hello World') data = (b'--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\n' b'Hello World\r\n' b'--foo\r\nContent-Disposition: form-field; name=bar\r\n\r\n' b'bar=baz\r\n--foo--') req = Request.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='multipart/form-data; boundary=foo', method='POST') req.max_content_length = 4 pytest.raises(RequestEntityTooLarge, lambda: req.form['foo']) req = Request.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='multipart/form-data; boundary=foo', method='POST') req.max_content_length = 400 strict_eq(req.form['foo'], u'Hello World') req = Request.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='multipart/form-data; boundary=foo', method='POST') req.max_form_memory_size = 7 pytest.raises(RequestEntityTooLarge, lambda: req.form['foo']) req = Request.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='multipart/form-data; boundary=foo', method='POST') req.max_form_memory_size = 400 strict_eq(req.form['foo'], u'Hello World') def test_missing_multipart_boundary(self): data = (b'--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\n' b'Hello World\r\n' b'--foo\r\nContent-Disposition: form-field; name=bar\r\n\r\n' b'bar=baz\r\n--foo--') req = Request.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='multipart/form-data', method='POST') assert req.form == {} def test_parse_form_data_put_without_content(self): # A PUT without a Content-Type header returns empty data # Both rfc1945 and rfc2616 (1.0 and 1.1) say "Any HTTP/[1.0/1.1] message # containing an entity-body SHOULD include a Content-Type header field # defining the media type of that body." In the case where either # headers are omitted, parse_form_data should still work. 
env = create_environ('/foo', 'http://example.org/', method='PUT') del env['CONTENT_TYPE'] del env['CONTENT_LENGTH'] stream, form, files = formparser.parse_form_data(env) strict_eq(stream.read(), b'') strict_eq(len(form), 0) strict_eq(len(files), 0) def test_parse_form_data_get_without_content(self): env = create_environ('/foo', 'http://example.org/', method='GET') del env['CONTENT_TYPE'] del env['CONTENT_LENGTH'] stream, form, files = formparser.parse_form_data(env) strict_eq(stream.read(), b'') strict_eq(len(form), 0) strict_eq(len(files), 0) def test_large_file(self): data = b'x' * (1024 * 600) req = Request.from_values(data={'foo': (BytesIO(data), 'test.txt')}, method='POST') # make sure we have a real file here, because we expect to be # on the disk. > 1024 * 500 assert hasattr(req.files['foo'].stream, u'fileno') # close file to prevent fds from leaking req.files['foo'].close() def test_streaming_parse(self): data = b'x' * (1024 * 600) class StreamMPP(formparser.MultiPartParser): def parse(self, file, boundary, content_length): i = iter(self.parse_lines(file, boundary, content_length, cap_at_buffer=False)) one = next(i) two = next(i) return self.cls(()), {'one': one, 'two': two} class StreamFDP(formparser.FormDataParser): def _sf_parse_multipart(self, stream, mimetype, content_length, options): form, files = StreamMPP( self.stream_factory, self.charset, self.errors, max_form_memory_size=self.max_form_memory_size, cls=self.cls).parse(stream, options.get('boundary').encode('ascii'), content_length) return stream, form, files parse_functions = {} parse_functions.update(formparser.FormDataParser.parse_functions) parse_functions['multipart/form-data'] = _sf_parse_multipart class StreamReq(Request): form_data_parser_class = StreamFDP req = StreamReq.from_values(data={'foo': (BytesIO(data), 'test.txt')}, method='POST') strict_eq('begin_file', req.files['one'][0]) strict_eq(('foo', 'test.txt'), req.files['one'][1][1:]) strict_eq('cont', req.files['two'][0]) strict_eq(data, req.files['two'][1]) def test_parse_bad_content_type(self): parser = FormDataParser() assert parser.parse('', 'bad-mime-type', 0) == \ ('', MultiDict([]), MultiDict([])) def test_parse_from_environ(self): parser = FormDataParser() stream, _, _ = parser.parse_from_environ({'wsgi.input': ''}) assert stream is not None class TestMultiPart(object): def test_basic(self): resources = join(dirname(__file__), 'multipart') client = Client(form_data_consumer, Response) repository = [ ('firefox3-2png1txt', '---------------------------186454651713519341951581030105', [ (u'anchor.png', 'file1', 'image/png', 'file1.png'), (u'application_edit.png', 'file2', 'image/png', 'file2.png') ], u'example text'), ('firefox3-2pnglongtext', '---------------------------14904044739787191031754711748', [ (u'accept.png', 'file1', 'image/png', 'file1.png'), (u'add.png', 'file2', 'image/png', 'file2.png') ], u'--long text\r\n--with boundary\r\n--lookalikes--'), ('opera8-2png1txt', '----------zEO9jQKmLc2Cq88c23Dx19', [ (u'arrow_branch.png', 'file1', 'image/png', 'file1.png'), (u'award_star_bronze_1.png', 'file2', 'image/png', 'file2.png') ], u'blafasel öäü'), ('webkit3-2png1txt', '----WebKitFormBoundaryjdSFhcARk8fyGNy6', [ (u'gtk-apply.png', 'file1', 'image/png', 'file1.png'), (u'gtk-no.png', 'file2', 'image/png', 'file2.png') ], u'this is another text with ümläüts'), ('ie6-2png1txt', '---------------------------7d91b03a20128', [ (u'file1.png', 'file1', 'image/x-png', 'file1.png'), (u'file2.png', 'file2', 'image/x-png', 'file2.png') ], u'ie6 sucks :-/') ] 
for name, boundary, files, text in repository: folder = join(resources, name) data = get_contents(join(folder, 'request.txt')) for filename, field, content_type, fsname in files: response = client.post( '/?object=' + field, data=data, content_type='multipart/form-data; boundary="%s"' % boundary, content_length=len(data)) lines = response.get_data().split(b'\n', 3) strict_eq(lines[0], repr(filename).encode('ascii')) strict_eq(lines[1], repr(field).encode('ascii')) strict_eq(lines[2], repr(content_type).encode('ascii')) strict_eq(lines[3], get_contents(join(folder, fsname))) response = client.post( '/?object=text', data=data, content_type='multipart/form-data; boundary="%s"' % boundary, content_length=len(data)) strict_eq(response.get_data(), repr(text).encode('utf-8')) def test_ie7_unc_path(self): client = Client(form_data_consumer, Response) data_file = join(dirname(__file__), 'multipart', 'ie7_full_path_request.txt') data = get_contents(data_file) boundary = '---------------------------7da36d1b4a0164' response = client.post( '/?object=cb_file_upload_multiple', data=data, content_type='multipart/form-data; boundary="%s"' % boundary, content_length=len(data)) lines = response.get_data().split(b'\n', 3) strict_eq(lines[0], repr(u'Sellersburg Town Council Meeting 02-22-2010doc.doc').encode('ascii')) def test_end_of_file(self): # This test looks innocent but it was actually timeing out in # the Werkzeug 0.5 release version (#394) data = ( b'--foo\r\n' b'Content-Disposition: form-data; name="test"; filename="test.txt"\r\n' b'Content-Type: text/plain\r\n\r\n' b'file contents and no end' ) data = Request.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='multipart/form-data; boundary=foo', method='POST') assert not data.files assert not data.form def test_broken(self): data = ( '--foo\r\n' 'Content-Disposition: form-data; name="test"; filename="test.txt"\r\n' 'Content-Transfer-Encoding: base64\r\n' 'Content-Type: text/plain\r\n\r\n' 'broken base 64' '--foo--' ) _, form, files = formparser.parse_form_data(create_environ( data=data, method='POST', content_type='multipart/form-data; boundary=foo' )) assert not files assert not form pytest.raises(ValueError, formparser.parse_form_data, create_environ(data=data, method='POST', content_type='multipart/form-data; boundary=foo'), silent=False) def test_file_no_content_type(self): data = ( b'--foo\r\n' b'Content-Disposition: form-data; name="test"; filename="test.txt"\r\n\r\n' b'file contents\r\n--foo--' ) data = Request.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='multipart/form-data; boundary=foo', method='POST') assert data.files['test'].filename == 'test.txt' strict_eq(data.files['test'].read(), b'file contents') def test_extra_newline(self): # this test looks innocent but it was actually timeing out in # the Werkzeug 0.5 release version (#394) data = ( b'\r\n\r\n--foo\r\n' b'Content-Disposition: form-data; name="foo"\r\n\r\n' b'a string\r\n' b'--foo--' ) data = Request.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='multipart/form-data; boundary=foo', method='POST') assert not data.files strict_eq(data.form['foo'], u'a string') def test_headers(self): data = (b'--foo\r\n' b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n' b'X-Custom-Header: blah\r\n' b'Content-Type: text/plain; charset=utf-8\r\n\r\n' b'file contents, just the contents\r\n' b'--foo--') req = Request.from_values(input_stream=BytesIO(data), content_length=len(data), 
content_type='multipart/form-data; boundary=foo', method='POST') foo = req.files['foo'] strict_eq(foo.mimetype, 'text/plain') strict_eq(foo.mimetype_params, {'charset': 'utf-8'}) strict_eq(foo.headers['content-type'], foo.content_type) strict_eq(foo.content_type, 'text/plain; charset=utf-8') strict_eq(foo.headers['x-custom-header'], 'blah') def test_nonstandard_line_endings(self): for nl in b'\n', b'\r', b'\r\n': data = nl.join(( b'--foo', b'Content-Disposition: form-data; name=foo', b'', b'this is just bar', b'--foo', b'Content-Disposition: form-data; name=bar', b'', b'blafasel', b'--foo--' )) req = Request.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='multipart/form-data; ' 'boundary=foo', method='POST') strict_eq(req.form['foo'], u'this is just bar') strict_eq(req.form['bar'], u'blafasel') def test_failures(self): def parse_multipart(stream, boundary, content_length): parser = formparser.MultiPartParser(content_length) return parser.parse(stream, boundary, content_length) pytest.raises(ValueError, parse_multipart, BytesIO(), b'broken ', 0) data = b'--foo\r\n\r\nHello World\r\n--foo--' pytest.raises(ValueError, parse_multipart, BytesIO(data), b'foo', len(data)) data = b'--foo\r\nContent-Disposition: form-field; name=foo\r\n' \ b'Content-Transfer-Encoding: base64\r\n\r\nHello World\r\n--foo--' pytest.raises(ValueError, parse_multipart, BytesIO(data), b'foo', len(data)) data = b'--foo\r\nContent-Disposition: form-field; name=foo\r\n\r\nHello World\r\n' pytest.raises(ValueError, parse_multipart, BytesIO(data), b'foo', len(data)) x = formparser.parse_multipart_headers(['foo: bar\r\n', ' x test\r\n']) strict_eq(x['foo'], 'bar\n x test') pytest.raises(ValueError, formparser.parse_multipart_headers, ['foo: bar\r\n', ' x test']) def test_bad_newline_bad_newline_assumption(self): class ISORequest(Request): charset = 'latin1' contents = b'U2vlbmUgbORu' data = b'--foo\r\nContent-Disposition: form-data; name="test"\r\n' \ b'Content-Transfer-Encoding: base64\r\n\r\n' + \ contents + b'\r\n--foo--' req = ISORequest.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='multipart/form-data; boundary=foo', method='POST') strict_eq(req.form['test'], u'Sk\xe5ne l\xe4n') def test_empty_multipart(self): environ = {} data = b'--boundary--' environ['REQUEST_METHOD'] = 'POST' environ['CONTENT_TYPE'] = 'multipart/form-data; boundary=boundary' environ['CONTENT_LENGTH'] = str(len(data)) environ['wsgi.input'] = BytesIO(data) stream, form, files = parse_form_data(environ, silent=False) rv = stream.read() assert rv == b'' assert form == MultiDict() assert files == MultiDict() class TestMultiPartParser(object): def test_constructor_not_pass_stream_factory_and_cls(self): parser = formparser.MultiPartParser() assert parser.stream_factory is formparser.default_stream_factory assert parser.cls is MultiDict def test_constructor_pass_stream_factory_and_cls(self): def stream_factory(): pass parser = formparser.MultiPartParser(stream_factory=stream_factory, cls=dict) assert parser.stream_factory is stream_factory assert parser.cls is dict class TestInternalFunctions(object): def test_line_parser(self): assert formparser._line_parse('foo') == ('foo', False) assert formparser._line_parse('foo\r\n') == ('foo', True) assert formparser._line_parse('foo\r') == ('foo', True) assert formparser._line_parse('foo\n') == ('foo', True) def test_find_terminator(self): lineiter = iter(b'\n\n\nfoo\nbar\nbaz'.splitlines(True)) find_terminator = 
formparser.MultiPartParser()._find_terminator line = find_terminator(lineiter) assert line == b'foo' assert list(lineiter) == [b'bar\n', b'baz'] assert find_terminator([]) == b'' assert find_terminator([b'']) == b'' werkzeug-0.14.1/tests/test_http.py000066400000000000000000000571751322225165500172140ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.http ~~~~~~~~~~ HTTP parsing utilities. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. """ import pytest from datetime import datetime from tests import strict_eq from werkzeug._compat import itervalues, wsgi_encoding_dance from werkzeug import http, datastructures from werkzeug.test import create_environ class TestHTTPUtility(object): def test_accept(self): a = http.parse_accept_header('en-us,ru;q=0.5') assert list(itervalues(a)) == ['en-us', 'ru'] assert a.best == 'en-us' assert a.find('ru') == 1 pytest.raises(ValueError, a.index, 'de') assert a.to_header() == 'en-us,ru;q=0.5' def test_mime_accept(self): a = http.parse_accept_header('text/xml,application/xml,' 'application/xhtml+xml,' 'application/foo;quiet=no; bar=baz;q=0.6,' 'text/html;q=0.9,text/plain;q=0.8,' 'image/png,*/*;q=0.5', datastructures.MIMEAccept) pytest.raises(ValueError, lambda: a['missing']) assert a['image/png'] == 1 assert a['text/plain'] == 0.8 assert a['foo/bar'] == 0.5 assert a['application/foo;quiet=no; bar=baz'] == 0.6 assert a[a.find('foo/bar')] == ('*/*', 0.5) def test_accept_matches(self): a = http.parse_accept_header('text/xml,application/xml,application/xhtml+xml,' 'text/html;q=0.9,text/plain;q=0.8,' 'image/png', datastructures.MIMEAccept) assert a.best_match(['text/html', 'application/xhtml+xml']) == \ 'application/xhtml+xml' assert a.best_match(['text/html']) == 'text/html' assert a.best_match(['foo/bar']) is None assert a.best_match(['foo/bar', 'bar/foo'], default='foo/bar') == 'foo/bar' assert a.best_match(['application/xml', 'text/xml']) == 'application/xml' def test_charset_accept(self): a = http.parse_accept_header('ISO-8859-1,utf-8;q=0.7,*;q=0.7', datastructures.CharsetAccept) assert a['iso-8859-1'] == a['iso8859-1'] assert a['iso-8859-1'] == 1 assert a['UTF8'] == 0.7 assert a['ebcdic'] == 0.7 def test_language_accept(self): a = http.parse_accept_header('de-AT,de;q=0.8,en;q=0.5', datastructures.LanguageAccept) assert a.best == 'de-AT' assert 'de_AT' in a assert 'en' in a assert a['de-at'] == 1 assert a['en'] == 0.5 def test_set_header(self): hs = http.parse_set_header('foo, Bar, "Blah baz", Hehe') assert 'blah baz' in hs assert 'foobar' not in hs assert 'foo' in hs assert list(hs) == ['foo', 'Bar', 'Blah baz', 'Hehe'] hs.add('Foo') assert hs.to_header() == 'foo, Bar, "Blah baz", Hehe' def test_list_header(self): hl = http.parse_list_header('foo baz, blah') assert hl == ['foo baz', 'blah'] def test_dict_header(self): d = http.parse_dict_header('foo="bar baz", blah=42') assert d == {'foo': 'bar baz', 'blah': '42'} def test_cache_control_header(self): cc = http.parse_cache_control_header('max-age=0, no-cache') assert cc.max_age == 0 assert cc.no_cache cc = http.parse_cache_control_header('private, community="UCI"', None, datastructures.ResponseCacheControl) assert cc.private assert cc['community'] == 'UCI' c = datastructures.ResponseCacheControl() assert c.no_cache is None assert c.private is None c.no_cache = True assert c.no_cache == '*' c.private = True assert c.private == '*' del c.private assert c.private is None assert c.to_header() == 'no-cache' def test_authorization_header(self): a = 
http.parse_authorization_header('Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==') assert a.type == 'basic' assert a.username == 'Aladdin' assert a.password == 'open sesame' a = http.parse_authorization_header('''Digest username="Mufasa", realm="testrealm@host.invalid", nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093", uri="/dir/index.html", qop=auth, nc=00000001, cnonce="0a4f113b", response="6629fae49393a05397450978507c4ef1", opaque="5ccc069c403ebaf9f0171e9517f40e41"''') assert a.type == 'digest' assert a.username == 'Mufasa' assert a.realm == 'testrealm@host.invalid' assert a.nonce == 'dcd98b7102dd2f0e8b11d0f600bfb0c093' assert a.uri == '/dir/index.html' assert a.qop == 'auth' assert a.nc == '00000001' assert a.cnonce == '0a4f113b' assert a.response == '6629fae49393a05397450978507c4ef1' assert a.opaque == '5ccc069c403ebaf9f0171e9517f40e41' a = http.parse_authorization_header('''Digest username="Mufasa", realm="testrealm@host.invalid", nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093", uri="/dir/index.html", response="e257afa1414a3340d93d30955171dd0e", opaque="5ccc069c403ebaf9f0171e9517f40e41"''') assert a.type == 'digest' assert a.username == 'Mufasa' assert a.realm == 'testrealm@host.invalid' assert a.nonce == 'dcd98b7102dd2f0e8b11d0f600bfb0c093' assert a.uri == '/dir/index.html' assert a.response == 'e257afa1414a3340d93d30955171dd0e' assert a.opaque == '5ccc069c403ebaf9f0171e9517f40e41' assert http.parse_authorization_header('') is None assert http.parse_authorization_header(None) is None assert http.parse_authorization_header('foo') is None def test_www_authenticate_header(self): wa = http.parse_www_authenticate_header('Basic realm="WallyWorld"') assert wa.type == 'basic' assert wa.realm == 'WallyWorld' wa.realm = 'Foo Bar' assert wa.to_header() == 'Basic realm="Foo Bar"' wa = http.parse_www_authenticate_header('''Digest realm="testrealm@host.com", qop="auth,auth-int", nonce="dcd98b7102dd2f0e8b11d0f600bfb0c093", opaque="5ccc069c403ebaf9f0171e9517f40e41"''') assert wa.type == 'digest' assert wa.realm == 'testrealm@host.com' assert 'auth' in wa.qop assert 'auth-int' in wa.qop assert wa.nonce == 'dcd98b7102dd2f0e8b11d0f600bfb0c093' assert wa.opaque == '5ccc069c403ebaf9f0171e9517f40e41' wa = http.parse_www_authenticate_header('broken') assert wa.type == 'broken' assert not http.parse_www_authenticate_header('').type assert not http.parse_www_authenticate_header('') def test_etags(self): assert http.quote_etag('foo') == '"foo"' assert http.quote_etag('foo', True) == 'W/"foo"' assert http.unquote_etag('"foo"') == ('foo', False) assert http.unquote_etag('W/"foo"') == ('foo', True) es = http.parse_etags('"foo", "bar", W/"baz", blar') assert sorted(es) == ['bar', 'blar', 'foo'] assert 'foo' in es assert 'baz' not in es assert es.contains_weak('baz') assert 'blar' in es assert es.contains_raw('W/"baz"') assert es.contains_raw('"foo"') assert sorted(es.to_header().split(', ')) == ['"bar"', '"blar"', '"foo"', 'W/"baz"'] def test_etags_nonzero(self): etags = http.parse_etags('W/"foo"') assert bool(etags) assert etags.contains_raw('W/"foo"') def test_parse_date(self): assert http.parse_date('Sun, 06 Nov 1994 08:49:37 GMT ') == datetime( 1994, 11, 6, 8, 49, 37) assert http.parse_date('Sunday, 06-Nov-94 08:49:37 GMT') == datetime(1994, 11, 6, 8, 49, 37) assert http.parse_date(' Sun Nov 6 08:49:37 1994') == datetime(1994, 11, 6, 8, 49, 37) assert http.parse_date('foo') is None def test_parse_date_overflows(self): assert http.parse_date(' Sun 02 Feb 1343 08:49:37 GMT') == datetime(1343, 2, 2, 8, 49, 37) assert 
http.parse_date('Thu, 01 Jan 1970 00:00:00 GMT') == datetime(1970, 1, 1, 0, 0) assert http.parse_date('Thu, 33 Jan 1970 00:00:00 GMT') is None def test_remove_entity_headers(self): now = http.http_date() headers1 = [('Date', now), ('Content-Type', 'text/html'), ('Content-Length', '0')] headers2 = datastructures.Headers(headers1) http.remove_entity_headers(headers1) assert headers1 == [('Date', now)] http.remove_entity_headers(headers2) assert headers2 == datastructures.Headers([(u'Date', now)]) def test_remove_hop_by_hop_headers(self): headers1 = [('Connection', 'closed'), ('Foo', 'bar'), ('Keep-Alive', 'wtf')] headers2 = datastructures.Headers(headers1) http.remove_hop_by_hop_headers(headers1) assert headers1 == [('Foo', 'bar')] http.remove_hop_by_hop_headers(headers2) assert headers2 == datastructures.Headers([('Foo', 'bar')]) def test_parse_options_header(self): assert http.parse_options_header(None) == \ ('', {}) assert http.parse_options_header("") == \ ('', {}) assert http.parse_options_header(r'something; foo="other\"thing"') == \ ('something', {'foo': 'other"thing'}) assert http.parse_options_header(r'something; foo="other\"thing"; meh=42') == \ ('something', {'foo': 'other"thing', 'meh': '42'}) assert http.parse_options_header(r'something; foo="other\"thing"; meh=42; bleh') == \ ('something', {'foo': 'other"thing', 'meh': '42', 'bleh': None}) assert http.parse_options_header('something; foo="other;thing"; meh=42; bleh') == \ ('something', {'foo': 'other;thing', 'meh': '42', 'bleh': None}) assert http.parse_options_header('something; foo="otherthing"; meh=; bleh') == \ ('something', {'foo': 'otherthing', 'meh': None, 'bleh': None}) # Issue #404 assert http.parse_options_header('multipart/form-data; name="foo bar"; ' 'filename="bar foo"') == \ ('multipart/form-data', {'name': 'foo bar', 'filename': 'bar foo'}) # Examples from RFC assert http.parse_options_header('audio/*; q=0.2, audio/basic') == \ ('audio/*', {'q': '0.2'}) assert http.parse_options_header('audio/*; q=0.2, audio/basic', multiple=True) == \ ('audio/*', {'q': '0.2'}, "audio/basic", {}) assert http.parse_options_header( 'text/plain; q=0.5, text/html\n ' 'text/x-dvi; q=0.8, text/x-c', multiple=True) == \ ('text/plain', {'q': '0.5'}, "text/html", {}, "text/x-dvi", {'q': '0.8'}, "text/x-c", {}) assert http.parse_options_header('text/plain; q=0.5, text/html\n' ' ' 'text/x-dvi; q=0.8, text/x-c') == \ ('text/plain', {'q': '0.5'}) # Issue #932 assert http.parse_options_header( 'form-data; ' 'name="a_file"; ' 'filename*=UTF-8\'\'' '"%c2%a3%20and%20%e2%82%ac%20rates"') == \ ('form-data', {'name': 'a_file', 'filename': u'\xa3 and \u20ac rates'}) assert http.parse_options_header( 'form-data; ' 'name*=UTF-8\'\'"%C5%AAn%C4%ADc%C5%8Dde%CC%BD"; ' 'filename="some_file.txt"') == \ ('form-data', {'name': u'\u016an\u012dc\u014dde\u033d', 'filename': 'some_file.txt'}) def test_parse_options_header_value_with_quotes(self): assert http.parse_options_header( 'form-data; name="file"; filename="t\'es\'t.txt"' ) == ('form-data', {'name': 'file', 'filename': "t'es't.txt"}) assert http.parse_options_header( 'form-data; name="file"; filename*=UTF-8\'\'"\'🐍\'.txt"' ) == ('form-data', {'name': 'file', 'filename': u"'🐍'.txt"}) def test_parse_options_header_broken_values(self): # Issue #995 assert http.parse_options_header(' ') == ('', {}) assert http.parse_options_header(' , ') == ('', {}) assert http.parse_options_header(' ; ') == ('', {}) assert http.parse_options_header(' ,; ') == ('', {}) assert http.parse_options_header(' , a ') == ('', {}) 
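        # Note (descriptive only): the cases above and below exercise
        # malformed option headers (issue #995); fragments that never form a
        # valid "value; key=val" token are ignored, so the parser falls back
        # to ('', {}) rather than failing.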
assert http.parse_options_header(' ; a ') == ('', {}) def test_dump_options_header(self): assert http.dump_options_header('foo', {'bar': 42}) == \ 'foo; bar=42' assert http.dump_options_header('foo', {'bar': 42, 'fizz': None}) in \ ('foo; bar=42; fizz', 'foo; fizz; bar=42') def test_dump_header(self): assert http.dump_header([1, 2, 3]) == '1, 2, 3' assert http.dump_header([1, 2, 3], allow_token=False) == '"1", "2", "3"' assert http.dump_header({'foo': 'bar'}, allow_token=False) == 'foo="bar"' assert http.dump_header({'foo': 'bar'}) == 'foo=bar' def test_is_resource_modified(self): env = create_environ() # ignore POST env['REQUEST_METHOD'] = 'POST' assert not http.is_resource_modified(env, etag='testing') env['REQUEST_METHOD'] = 'GET' # etagify from data pytest.raises(TypeError, http.is_resource_modified, env, data='42', etag='23') env['HTTP_IF_NONE_MATCH'] = http.generate_etag(b'awesome') assert not http.is_resource_modified(env, data=b'awesome') env['HTTP_IF_MODIFIED_SINCE'] = http.http_date(datetime(2008, 1, 1, 12, 30)) assert not http.is_resource_modified(env, last_modified=datetime(2008, 1, 1, 12, 00)) assert http.is_resource_modified(env, last_modified=datetime(2008, 1, 1, 13, 00)) def test_is_resource_modified_for_range_requests(self): env = create_environ() env['HTTP_IF_MODIFIED_SINCE'] = http.http_date(datetime(2008, 1, 1, 12, 30)) env['HTTP_IF_RANGE'] = http.generate_etag(b'awesome_if_range') # Range header not present, so If-Range should be ignored assert not http.is_resource_modified(env, data=b'not_the_same', ignore_if_range=False, last_modified=datetime(2008, 1, 1, 12, 30)) env['HTTP_RANGE'] = '' assert not http.is_resource_modified(env, data=b'awesome_if_range', ignore_if_range=False) assert http.is_resource_modified(env, data=b'not_the_same', ignore_if_range=False) env['HTTP_IF_RANGE'] = http.http_date(datetime(2008, 1, 1, 13, 30)) assert http.is_resource_modified(env, last_modified=datetime(2008, 1, 1, 14, 00), ignore_if_range=False) assert not http.is_resource_modified(env, last_modified=datetime(2008, 1, 1, 13, 30), ignore_if_range=False) assert http.is_resource_modified(env, last_modified=datetime(2008, 1, 1, 13, 30), ignore_if_range=True) def test_date_formatting(self): assert http.cookie_date(0) == 'Thu, 01-Jan-1970 00:00:00 GMT' assert http.cookie_date(datetime(1970, 1, 1)) == 'Thu, 01-Jan-1970 00:00:00 GMT' assert http.http_date(0) == 'Thu, 01 Jan 1970 00:00:00 GMT' assert http.http_date(datetime(1970, 1, 1)) == 'Thu, 01 Jan 1970 00:00:00 GMT' def test_cookies(self): strict_eq( dict(http.parse_cookie('dismiss-top=6; CP=null*; PHPSESSID=0a539d42abc001cd' 'c762809248d4beed; a=42; b="\\\";"')), { 'CP': u'null*', 'PHPSESSID': u'0a539d42abc001cdc762809248d4beed', 'a': u'42', 'dismiss-top': u'6', 'b': u'\";' } ) rv = http.dump_cookie('foo', 'bar baz blub', 360, httponly=True, sync_expires=False) assert type(rv) is str assert set(rv.split('; ')) == set(['HttpOnly', 'Max-Age=360', 'Path=/', 'foo="bar baz blub"']) strict_eq(dict(http.parse_cookie('fo234{=bar; blub=Blah')), {'fo234{': u'bar', 'blub': u'Blah'}) strict_eq(http.dump_cookie('key', 'xxx/'), 'key=xxx/; Path=/') strict_eq(http.dump_cookie('key', 'xxx='), 'key=xxx=; Path=/') def test_bad_cookies(self): strict_eq( dict(http.parse_cookie('first=IamTheFirst ; a=1; oops ; a=2 ;' 'second = andMeTwo;')), { 'first': u'IamTheFirst', 'a': u'1', 'a': u'2', 'oops': u'', 'second': u'andMeTwo', } ) def test_cookie_quoting(self): val = http.dump_cookie("foo", "?foo") strict_eq(val, 'foo="?foo"; Path=/') 
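        # dump_cookie falls back to double-quoting when a value contains
        # characters that are not cookie-safe, and parse_cookie undoes the
        # quoting again.  A minimal round-trip sketch (illustrative only,
        # mirrors the assertions around it):
        #
        #     quoted = http.dump_cookie('k', 'two words')
        #     assert quoted == 'k="two words"; Path=/'
        #     assert dict(http.parse_cookie(quoted))['k'] == u'two words'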
strict_eq(dict(http.parse_cookie(val)), {'foo': u'?foo'}) strict_eq(dict(http.parse_cookie(r'foo="foo\054bar"')), {'foo': u'foo,bar'}) def test_cookie_domain_resolving(self): val = http.dump_cookie('foo', 'bar', domain=u'\N{SNOWMAN}.com') strict_eq(val, 'foo=bar; Domain=xn--n3h.com; Path=/') def test_cookie_unicode_dumping(self): val = http.dump_cookie('foo', u'\N{SNOWMAN}') h = datastructures.Headers() h.add('Set-Cookie', val) assert h['Set-Cookie'] == 'foo="\\342\\230\\203"; Path=/' cookies = http.parse_cookie(h['Set-Cookie']) assert cookies['foo'] == u'\N{SNOWMAN}' def test_cookie_unicode_keys(self): # Yes, this is technically against the spec but happens val = http.dump_cookie(u'fö', u'fö') assert val == wsgi_encoding_dance(u'fö="f\\303\\266"; Path=/', 'utf-8') cookies = http.parse_cookie(val) assert cookies[u'fö'] == u'fö' def test_cookie_unicode_parsing(self): # This is actually a correct test. This is what is being submitted # by firefox if you set an unicode cookie and we get the cookie sent # in on Python 3 under PEP 3333. cookies = http.parse_cookie(u'fö=fö') assert cookies[u'fö'] == u'fö' def test_cookie_domain_encoding(self): val = http.dump_cookie('foo', 'bar', domain=u'\N{SNOWMAN}.com') strict_eq(val, 'foo=bar; Domain=xn--n3h.com; Path=/') val = http.dump_cookie('foo', 'bar', domain=u'.\N{SNOWMAN}.com') strict_eq(val, 'foo=bar; Domain=.xn--n3h.com; Path=/') val = http.dump_cookie('foo', 'bar', domain=u'.foo.com') strict_eq(val, 'foo=bar; Domain=.foo.com; Path=/') def test_cookie_maxsize(self, recwarn): val = http.dump_cookie('foo', 'bar' * 1360 + 'b') assert len(recwarn) == 0 assert len(val) == 4093 http.dump_cookie('foo', 'bar' * 1360 + 'ba') assert len(recwarn) == 1 w = recwarn.pop() assert 'cookie is too large' in str(w.message) http.dump_cookie('foo', b'w' * 502, max_size=512) assert len(recwarn) == 1 w = recwarn.pop() assert 'the limit is 512 bytes' in str(w.message) @pytest.mark.parametrize('input, expected', [ ('strict', 'foo=bar; Path=/; SameSite=Strict'), ('lax', 'foo=bar; Path=/; SameSite=Lax'), (None, 'foo=bar; Path=/'), ]) def test_cookie_samesite_attribute(self, input, expected): val = http.dump_cookie('foo', 'bar', samesite=input) strict_eq(val, expected) class TestRange(object): def test_if_range_parsing(self): rv = http.parse_if_range_header('"Test"') assert rv.etag == 'Test' assert rv.date is None assert rv.to_header() == '"Test"' # weak information is dropped rv = http.parse_if_range_header('W/"Test"') assert rv.etag == 'Test' assert rv.date is None assert rv.to_header() == '"Test"' # broken etags are supported too rv = http.parse_if_range_header('bullshit') assert rv.etag == 'bullshit' assert rv.date is None assert rv.to_header() == '"bullshit"' rv = http.parse_if_range_header('Thu, 01 Jan 1970 00:00:00 GMT') assert rv.etag is None assert rv.date == datetime(1970, 1, 1) assert rv.to_header() == 'Thu, 01 Jan 1970 00:00:00 GMT' for x in '', None: rv = http.parse_if_range_header(x) assert rv.etag is None assert rv.date is None assert rv.to_header() == '' def test_range_parsing(self): rv = http.parse_range_header('bytes=52') assert rv is None rv = http.parse_range_header('bytes=52-') assert rv.units == 'bytes' assert rv.ranges == [(52, None)] assert rv.to_header() == 'bytes=52-' rv = http.parse_range_header('bytes=52-99') assert rv.units == 'bytes' assert rv.ranges == [(52, 100)] assert rv.to_header() == 'bytes=52-99' rv = http.parse_range_header('bytes=52-99,-1000') assert rv.units == 'bytes' assert rv.ranges == [(52, 100), (-1000, None)] assert rv.to_header() 
== 'bytes=52-99,-1000' rv = http.parse_range_header('bytes = 1 - 100') assert rv.units == 'bytes' assert rv.ranges == [(1, 101)] assert rv.to_header() == 'bytes=1-100' rv = http.parse_range_header('AWesomes=0-999') assert rv.units == 'awesomes' assert rv.ranges == [(0, 1000)] assert rv.to_header() == 'awesomes=0-999' rv = http.parse_range_header('bytes=-') assert rv is None rv = http.parse_range_header('bytes=bullshit') assert rv is None rv = http.parse_range_header('bytes=bullshit-1') assert rv is None rv = http.parse_range_header('bytes=-bullshit') assert rv is None rv = http.parse_range_header('bytes=52-99, bullshit') assert rv is None def test_content_range_parsing(self): rv = http.parse_content_range_header('bytes 0-98/*') assert rv.units == 'bytes' assert rv.start == 0 assert rv.stop == 99 assert rv.length is None assert rv.to_header() == 'bytes 0-98/*' rv = http.parse_content_range_header('bytes 0-98/*asdfsa') assert rv is None rv = http.parse_content_range_header('bytes 0-99/100') assert rv.to_header() == 'bytes 0-99/100' rv.start = None rv.stop = None assert rv.units == 'bytes' assert rv.to_header() == 'bytes */100' rv = http.parse_content_range_header('bytes */100') assert rv.start is None assert rv.stop is None assert rv.length == 100 assert rv.units == 'bytes' class TestRegression(object): def test_best_match_works(self): # was a bug in 0.6 rv = http.parse_accept_header('foo=,application/xml,application/xhtml+xml,' 'text/html;q=0.9,text/plain;q=0.8,' 'image/png,*/*;q=0.5', datastructures.MIMEAccept).best_match(['foo/bar']) assert rv == 'foo/bar' werkzeug-0.14.1/tests/test_internal.py000066400000000000000000000046761322225165500200470ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.internal ~~~~~~~~~~~~~~ Internal tests. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. 
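
    A rough sketch of one helper exercised here (illustrative only,
    mirrors the assertions below)::

        from datetime import datetime
        from werkzeug import _internal as internal

        # seconds since the epoch for a naive UTC datetime
        internal._date_to_unix(datetime(1970, 1, 1, 1, 0, 0))   # -> 3600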
""" import pytest from datetime import datetime from warnings import filterwarnings, resetwarnings from werkzeug.wrappers import Request, Response from werkzeug import _internal as internal from werkzeug.test import create_environ def test_date_to_unix(): assert internal._date_to_unix(datetime(1970, 1, 1)) == 0 assert internal._date_to_unix(datetime(1970, 1, 1, 1, 0, 0)) == 3600 assert internal._date_to_unix(datetime(1970, 1, 1, 1, 1, 1)) == 3661 x = datetime(2010, 2, 15, 16, 15, 39) assert internal._date_to_unix(x) == 1266250539 def test_easteregg(): req = Request.from_values('/?macgybarchakku') resp = Response.force_type(internal._easteregg(None), req) assert b'About Werkzeug' in resp.get_data() assert b'the Swiss Army knife of Python web development' in resp.get_data() def test_wrapper_internals(): req = Request.from_values(data={'foo': 'bar'}, method='POST') req._load_form_data() assert req.form.to_dict() == {'foo': 'bar'} # second call does not break req._load_form_data() assert req.form.to_dict() == {'foo': 'bar'} # check reprs assert repr(req) == "" resp = Response() assert repr(resp) == '' resp.set_data('Hello World!') assert repr(resp) == '' resp.response = iter(['Test']) assert repr(resp) == '' # unicode data does not set content length response = Response([u'Hällo Wörld']) headers = response.get_wsgi_headers(create_environ()) assert u'Content-Length' not in headers response = Response([u'Hällo Wörld'.encode('utf-8')]) headers = response.get_wsgi_headers(create_environ()) assert u'Content-Length' in headers # check for internal warnings filterwarnings('error', category=Warning) response = Response() environ = create_environ() response.response = 'What the...?' pytest.raises(Warning, lambda: list(response.iter_encoded())) pytest.raises(Warning, lambda: list(response.get_app_iter(environ))) response.direct_passthrough = True pytest.raises(Warning, lambda: list(response.iter_encoded())) pytest.raises(Warning, lambda: list(response.get_app_iter(environ))) resetwarnings() werkzeug-0.14.1/tests/test_local.py000066400000000000000000000106561322225165500173200ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.local ~~~~~~~~~~~~~~~~~~~~~~~~ Local and local proxy tests. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. 
""" import pytest import time import copy from functools import partial from threading import Thread from werkzeug import local def test_basic_local(): ns = local.Local() ns.foo = 0 values = [] def value_setter(idx): time.sleep(0.01 * idx) ns.foo = idx time.sleep(0.02) values.append(ns.foo) threads = [Thread(target=value_setter, args=(x,)) for x in [1, 2, 3]] for thread in threads: thread.start() time.sleep(0.2) assert sorted(values) == [1, 2, 3] def delfoo(): del ns.foo delfoo() pytest.raises(AttributeError, lambda: ns.foo) pytest.raises(AttributeError, delfoo) local.release_local(ns) def test_local_release(): ns = local.Local() ns.foo = 42 local.release_local(ns) assert not hasattr(ns, 'foo') ls = local.LocalStack() ls.push(42) local.release_local(ls) assert ls.top is None def test_local_proxy(): foo = [] ls = local.LocalProxy(lambda: foo) ls.append(42) ls.append(23) ls[1:] = [1, 2, 3] assert foo == [42, 1, 2, 3] assert repr(foo) == repr(ls) assert foo[0] == 42 foo += [1] assert list(foo) == [42, 1, 2, 3, 1] def test_local_proxy_operations_math(): foo = 2 ls = local.LocalProxy(lambda: foo) assert ls + 1 == 3 assert 1 + ls == 3 assert ls - 1 == 1 assert 1 - ls == -1 assert ls * 1 == 2 assert 1 * ls == 2 assert ls / 1 == 2 assert 1.0 / ls == 0.5 assert ls // 1.0 == 2.0 assert 1.0 // ls == 0.0 assert ls % 2 == 0 assert 2 % ls == 0 def test_local_proxy_operations_strings(): foo = "foo" ls = local.LocalProxy(lambda: foo) assert ls + "bar" == "foobar" assert "bar" + ls == "barfoo" assert ls * 2 == "foofoo" foo = "foo %s" assert ls % ("bar",) == "foo bar" def test_local_stack(): ident = local.get_ident() ls = local.LocalStack() assert ident not in ls._local.__storage__ assert ls.top is None ls.push(42) assert ident in ls._local.__storage__ assert ls.top == 42 ls.push(23) assert ls.top == 23 ls.pop() assert ls.top == 42 ls.pop() assert ls.top is None assert ls.pop() is None assert ls.pop() is None proxy = ls() ls.push([1, 2]) assert proxy == [1, 2] ls.push((1, 2)) assert proxy == (1, 2) ls.pop() ls.pop() assert repr(proxy) == '' assert ident not in ls._local.__storage__ def test_local_proxies_with_callables(): foo = 42 ls = local.LocalProxy(lambda: foo) assert ls == 42 foo = [23] ls.append(42) assert ls == [23, 42] assert foo == [23, 42] def test_custom_idents(): ident = 0 ns = local.Local() stack = local.LocalStack() local.LocalManager([ns, stack], ident_func=lambda: ident) ns.foo = 42 stack.push({'foo': 42}) ident = 1 ns.foo = 23 stack.push({'foo': 23}) ident = 0 assert ns.foo == 42 assert stack.top['foo'] == 42 stack.pop() assert stack.top is None ident = 1 assert ns.foo == 23 assert stack.top['foo'] == 23 stack.pop() assert stack.top is None def test_deepcopy_on_proxy(): class Foo(object): attr = 42 def __copy__(self): return self def __deepcopy__(self, memo): return self f = Foo() p = local.LocalProxy(lambda: f) assert p.attr == 42 assert copy.deepcopy(p) is f assert copy.copy(p) is f a = [] p2 = local.LocalProxy(lambda: [a]) assert copy.copy(p2) == [a] assert copy.copy(p2)[0] is a assert copy.deepcopy(p2) == [a] assert copy.deepcopy(p2)[0] is not a def test_local_proxy_wrapped_attribute(): class SomeClassWithWrapped(object): __wrapped__ = 'wrapped' def lookup_func(): return 42 partial_lookup_func = partial(lookup_func) proxy = local.LocalProxy(lookup_func) assert proxy.__wrapped__ is lookup_func partial_proxy = local.LocalProxy(partial_lookup_func) assert partial_proxy.__wrapped__ == partial_lookup_func ns = local.Local() ns.foo = SomeClassWithWrapped() ns.bar = 42 assert 
ns('foo').__wrapped__ == 'wrapped' pytest.raises(AttributeError, lambda: ns('bar').__wrapped__) werkzeug-0.14.1/tests/test_routing.py000066400000000000000000001050131322225165500177050ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.routing ~~~~~~~~~~~~~ Routing tests. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. """ import pytest import uuid from tests import strict_eq from werkzeug import routing as r from werkzeug.wrappers import Response from werkzeug.datastructures import ImmutableDict, MultiDict from werkzeug.test import create_environ def test_basic_routing(): map = r.Map([ r.Rule('/', endpoint='index'), r.Rule('/foo', endpoint='foo'), r.Rule('/bar/', endpoint='bar') ]) adapter = map.bind('example.org', '/') assert adapter.match('/') == ('index', {}) assert adapter.match('/foo') == ('foo', {}) assert adapter.match('/bar/') == ('bar', {}) pytest.raises(r.RequestRedirect, lambda: adapter.match('/bar')) pytest.raises(r.NotFound, lambda: adapter.match('/blub')) adapter = map.bind('example.org', '/test') with pytest.raises(r.RequestRedirect) as excinfo: adapter.match('/bar') assert excinfo.value.new_url == 'http://example.org/test/bar/' adapter = map.bind('example.org', '/') with pytest.raises(r.RequestRedirect) as excinfo: adapter.match('/bar') assert excinfo.value.new_url == 'http://example.org/bar/' adapter = map.bind('example.org', '/') with pytest.raises(r.RequestRedirect) as excinfo: adapter.match('/bar', query_args={'aha': 'muhaha'}) assert excinfo.value.new_url == 'http://example.org/bar/?aha=muhaha' adapter = map.bind('example.org', '/') with pytest.raises(r.RequestRedirect) as excinfo: adapter.match('/bar', query_args='aha=muhaha') assert excinfo.value.new_url == 'http://example.org/bar/?aha=muhaha' adapter = map.bind_to_environ(create_environ('/bar?foo=bar', 'http://example.org/')) with pytest.raises(r.RequestRedirect) as excinfo: adapter.match() assert excinfo.value.new_url == 'http://example.org/bar/?foo=bar' def test_strict_slashes_redirect(): map = r.Map([ r.Rule('/bar/', endpoint='get', methods=["GET"]), r.Rule('/bar', endpoint='post', methods=["POST"]), ]) adapter = map.bind('example.org', '/') # Check if the actual routes works assert adapter.match('/bar/', method='GET') == ('get', {}) assert adapter.match('/bar', method='POST') == ('post', {}) # Check if exceptions are correct pytest.raises(r.RequestRedirect, adapter.match, '/bar', method='GET') pytest.raises(r.MethodNotAllowed, adapter.match, '/bar/', method='POST') # Check differently defined order map = r.Map([ r.Rule('/bar', endpoint='post', methods=["POST"]), r.Rule('/bar/', endpoint='get', methods=["GET"]), ]) adapter = map.bind('example.org', '/') # Check if the actual routes works assert adapter.match('/bar/', method='GET') == ('get', {}) assert adapter.match('/bar', method='POST') == ('post', {}) # Check if exceptions are correct pytest.raises(r.RequestRedirect, adapter.match, '/bar', method='GET') pytest.raises(r.MethodNotAllowed, adapter.match, '/bar/', method='POST') # Check what happens when only slash route is defined map = r.Map([ r.Rule('/bar/', endpoint='get', methods=["GET"]), ]) adapter = map.bind('example.org', '/') # Check if the actual routes works assert adapter.match('/bar/', method='GET') == ('get', {}) # Check if exceptions are correct pytest.raises(r.RequestRedirect, adapter.match, '/bar', method='GET') pytest.raises(r.MethodNotAllowed, adapter.match, '/bar/', method='POST') pytest.raises(r.MethodNotAllowed, adapter.match, '/bar', 
method='POST') def test_environ_defaults(): environ = create_environ("/foo") strict_eq(environ["PATH_INFO"], '/foo') m = r.Map([r.Rule("/foo", endpoint="foo"), r.Rule("/bar", endpoint="bar")]) a = m.bind_to_environ(environ) strict_eq(a.match("/foo"), ('foo', {})) strict_eq(a.match(), ('foo', {})) strict_eq(a.match("/bar"), ('bar', {})) pytest.raises(r.NotFound, a.match, "/bars") def test_environ_nonascii_pathinfo(): environ = create_environ(u'/лошадь') m = r.Map([ r.Rule(u'/', endpoint='index'), r.Rule(u'/лошадь', endpoint='horse') ]) a = m.bind_to_environ(environ) strict_eq(a.match(u'/'), ('index', {})) strict_eq(a.match(u'/лошадь'), ('horse', {})) pytest.raises(r.NotFound, a.match, u'/барсук') def test_basic_building(): map = r.Map([ r.Rule('/', endpoint='index'), r.Rule('/foo', endpoint='foo'), r.Rule('/bar/', endpoint='bar'), r.Rule('/bar/', endpoint='bari'), r.Rule('/bar/', endpoint='barf'), r.Rule('/bar/', endpoint='barp'), r.Rule('/hehe', endpoint='blah', subdomain='blah') ]) adapter = map.bind('example.org', '/', subdomain='blah') assert adapter.build('index', {}) == 'http://example.org/' assert adapter.build('foo', {}) == 'http://example.org/foo' assert adapter.build('bar', {'baz': 'blub'}) == \ 'http://example.org/bar/blub' assert adapter.build('bari', {'bazi': 50}) == 'http://example.org/bar/50' multivalues = MultiDict([('bazi', 50), ('bazi', None)]) assert adapter.build('bari', multivalues) == 'http://example.org/bar/50' assert adapter.build('barf', {'bazf': 0.815}) == \ 'http://example.org/bar/0.815' assert adapter.build('barp', {'bazp': 'la/di'}) == \ 'http://example.org/bar/la/di' assert adapter.build('blah', {}) == '/hehe' pytest.raises(r.BuildError, lambda: adapter.build('urks')) adapter = map.bind('example.org', '/test', subdomain='blah') assert adapter.build('index', {}) == 'http://example.org/test/' assert adapter.build('foo', {}) == 'http://example.org/test/foo' assert adapter.build('bar', {'baz': 'blub'}) == \ 'http://example.org/test/bar/blub' assert adapter.build('bari', {'bazi': 50}) == 'http://example.org/test/bar/50' assert adapter.build('barf', {'bazf': 0.815}) == 'http://example.org/test/bar/0.815' assert adapter.build('barp', {'bazp': 'la/di'}) == 'http://example.org/test/bar/la/di' assert adapter.build('blah', {}) == '/test/hehe' adapter = map.bind('example.org') assert adapter.build('foo', {}) == '/foo' assert adapter.build('foo', {}, force_external=True) == 'http://example.org/foo' adapter = map.bind('example.org', url_scheme='') assert adapter.build('foo', {}) == '/foo' assert adapter.build('foo', {}, force_external=True) == '//example.org/foo' def test_defaults(): map = r.Map([ r.Rule('/foo/', defaults={'page': 1}, endpoint='foo'), r.Rule('/foo/', endpoint='foo') ]) adapter = map.bind('example.org', '/') assert adapter.match('/foo/') == ('foo', {'page': 1}) pytest.raises(r.RequestRedirect, lambda: adapter.match('/foo/1')) assert adapter.match('/foo/2') == ('foo', {'page': 2}) assert adapter.build('foo', {}) == '/foo/' assert adapter.build('foo', {'page': 1}) == '/foo/' assert adapter.build('foo', {'page': 2}) == '/foo/2' def test_greedy(): map = r.Map([ r.Rule('/foo', endpoint='foo'), r.Rule('/', endpoint='bar'), r.Rule('//', endpoint='bar') ]) adapter = map.bind('example.org', '/') assert adapter.match('/foo') == ('foo', {}) assert adapter.match('/blub') == ('bar', {'bar': 'blub'}) assert adapter.match('/he/he') == ('bar', {'bar': 'he', 'blub': 'he'}) assert adapter.build('foo', {}) == '/foo' assert adapter.build('bar', {'bar': 'blub'}) == '/blub' assert 
adapter.build('bar', {'bar': 'blub', 'blub': 'bar'}) == '/blub/bar' def test_path(): map = r.Map([ r.Rule('/', defaults={'name': 'FrontPage'}, endpoint='page'), r.Rule('/Special', endpoint='special'), r.Rule('/', endpoint='year'), r.Rule('/:foo', endpoint='foopage'), r.Rule('/:', endpoint='twopage'), r.Rule('/', endpoint='page'), r.Rule('//edit', endpoint='editpage'), r.Rule('//silly/', endpoint='sillypage'), r.Rule('//silly//edit', endpoint='editsillypage'), r.Rule('/Talk:', endpoint='talk'), r.Rule('/User:', endpoint='user'), r.Rule('/User:/', endpoint='userpage'), r.Rule('/User:/comment/-', endpoint='usercomment'), r.Rule('/Files/', endpoint='files'), r.Rule('///', endpoint='admin'), ]) adapter = map.bind('example.org', '/') assert adapter.match('/') == ('page', {'name': 'FrontPage'}) pytest.raises(r.RequestRedirect, lambda: adapter.match('/FrontPage')) assert adapter.match('/Special') == ('special', {}) assert adapter.match('/2007') == ('year', {'year': 2007}) assert adapter.match('/Some:foo') == ('foopage', {'name': 'Some'}) assert adapter.match('/Some:bar') == ('twopage', {'name': 'Some', 'name2': 'bar'}) assert adapter.match('/Some/Page') == ('page', {'name': 'Some/Page'}) assert adapter.match('/Some/Page/edit') == ('editpage', {'name': 'Some/Page'}) assert adapter.match('/Foo/silly/bar') == ('sillypage', {'name': 'Foo', 'name2': 'bar'}) assert adapter.match( '/Foo/silly/bar/edit') == ('editsillypage', {'name': 'Foo', 'name2': 'bar'}) assert adapter.match('/Talk:Foo/Bar') == ('talk', {'name': 'Foo/Bar'}) assert adapter.match('/User:thomas') == ('user', {'username': 'thomas'}) assert adapter.match('/User:thomas/projects/werkzeug') == \ ('userpage', {'username': 'thomas', 'name': 'projects/werkzeug'}) assert adapter.match('/User:thomas/comment/123-456') == \ ('usercomment', {'username': 'thomas', 'id': 123, 'replyId': 456}) assert adapter.match('/Files/downloads/werkzeug/0.2.zip') == \ ('files', {'file': 'downloads/werkzeug/0.2.zip'}) assert adapter.match('/Jerry/eats/cheese') == \ ('admin', {'admin': 'Jerry', 'manage': 'eats', 'things': 'cheese'}) def test_dispatch(): env = create_environ('/') map = r.Map([ r.Rule('/', endpoint='root'), r.Rule('/foo/', endpoint='foo') ]) adapter = map.bind_to_environ(env) raise_this = None def view_func(endpoint, values): if raise_this is not None: raise raise_this return Response(repr((endpoint, values))) dispatch = lambda p, q=False: Response.force_type( adapter.dispatch(view_func, p, catch_http_exceptions=q), env ) assert dispatch('/').data == b"('root', {})" assert dispatch('/foo').status_code == 301 raise_this = r.NotFound() pytest.raises(r.NotFound, lambda: dispatch('/bar')) assert dispatch('/bar', True).status_code == 404 def test_http_host_before_server_name(): env = { 'HTTP_HOST': 'wiki.example.com', 'SERVER_NAME': 'web0.example.com', 'SERVER_PORT': '80', 'SCRIPT_NAME': '', 'PATH_INFO': '', 'REQUEST_METHOD': 'GET', 'wsgi.url_scheme': 'http' } map = r.Map([r.Rule('/', endpoint='index', subdomain='wiki')]) adapter = map.bind_to_environ(env, server_name='example.com') assert adapter.match('/') == ('index', {}) assert adapter.build('index', force_external=True) == 'http://wiki.example.com/' assert adapter.build('index') == '/' env['HTTP_HOST'] = 'admin.example.com' adapter = map.bind_to_environ(env, server_name='example.com') assert adapter.build('index') == 'http://wiki.example.com/' def test_adapter_url_parameter_sorting(): map = r.Map([r.Rule('/', endpoint='index')], sort_parameters=True, sort_key=lambda x: x[1]) adapter = 
map.bind('localhost', '/') assert adapter.build('index', {'x': 20, 'y': 10, 'z': 30}, force_external=True) == 'http://localhost/?y=10&x=20&z=30' def test_request_direct_charset_bug(): map = r.Map([r.Rule(u'/öäü/')]) adapter = map.bind('localhost', '/') with pytest.raises(r.RequestRedirect) as excinfo: adapter.match(u'/öäü') assert excinfo.value.new_url == 'http://localhost/%C3%B6%C3%A4%C3%BC/' def test_request_redirect_default(): map = r.Map([r.Rule(u'/foo', defaults={'bar': 42}), r.Rule(u'/foo/')]) adapter = map.bind('localhost', '/') with pytest.raises(r.RequestRedirect) as excinfo: adapter.match(u'/foo/42') assert excinfo.value.new_url == 'http://localhost/foo' def test_request_redirect_default_subdomain(): map = r.Map([r.Rule(u'/foo', defaults={'bar': 42}, subdomain='test'), r.Rule(u'/foo/', subdomain='other')]) adapter = map.bind('localhost', '/', subdomain='other') with pytest.raises(r.RequestRedirect) as excinfo: adapter.match(u'/foo/42') assert excinfo.value.new_url == 'http://test.localhost/foo' def test_adapter_match_return_rule(): rule = r.Rule('/foo/', endpoint='foo') map = r.Map([rule]) adapter = map.bind('localhost', '/') assert adapter.match('/foo/', return_rule=True) == (rule, {}) def test_server_name_interpolation(): server_name = 'example.invalid' map = r.Map([r.Rule('/', endpoint='index'), r.Rule('/', endpoint='alt', subdomain='alt')]) env = create_environ('/', 'http://%s/' % server_name) adapter = map.bind_to_environ(env, server_name=server_name) assert adapter.match() == ('index', {}) env = create_environ('/', 'http://alt.%s/' % server_name) adapter = map.bind_to_environ(env, server_name=server_name) assert adapter.match() == ('alt', {}) env = create_environ('/', 'http://%s/' % server_name) adapter = map.bind_to_environ(env, server_name='foo') assert adapter.subdomain == '' def test_rule_emptying(): rule = r.Rule('/foo', {'meh': 'muh'}, 'x', ['POST'], False, 'x', True, None) rule2 = rule.empty() assert rule.__dict__ == rule2.__dict__ rule.methods.add('GET') assert rule.__dict__ != rule2.__dict__ rule.methods.discard('GET') rule.defaults['meh'] = 'aha' assert rule.__dict__ != rule2.__dict__ def test_rule_unhashable(): rule = r.Rule('/foo', {'meh': 'muh'}, 'x', ['POST'], False, 'x', True, None) pytest.raises(TypeError, hash, rule) def test_rule_templates(): testcase = r.RuleTemplate([ r.Submount( '/test/$app', [r.Rule('/foo/', endpoint='handle_foo'), r.Rule('/bar/', endpoint='handle_bar'), r.Rule('/baz/', endpoint='handle_baz')]), r.EndpointPrefix( '${app}', [r.Rule('/${app}-blah', endpoint='bar'), r.Rule('/${app}-meh', endpoint='baz')]), r.Subdomain( '$app', [r.Rule('/blah', endpoint='x_bar'), r.Rule('/meh', endpoint='x_baz')]) ]) url_map = r.Map( [testcase(app='test1'), testcase(app='test2'), testcase(app='test3'), testcase(app='test4') ]) out = sorted([(x.rule, x.subdomain, x.endpoint) for x in url_map.iter_rules()]) assert out == ([ ('/blah', 'test1', 'x_bar'), ('/blah', 'test2', 'x_bar'), ('/blah', 'test3', 'x_bar'), ('/blah', 'test4', 'x_bar'), ('/meh', 'test1', 'x_baz'), ('/meh', 'test2', 'x_baz'), ('/meh', 'test3', 'x_baz'), ('/meh', 'test4', 'x_baz'), ('/test/test1/bar/', '', 'handle_bar'), ('/test/test1/baz/', '', 'handle_baz'), ('/test/test1/foo/', '', 'handle_foo'), ('/test/test2/bar/', '', 'handle_bar'), ('/test/test2/baz/', '', 'handle_baz'), ('/test/test2/foo/', '', 'handle_foo'), ('/test/test3/bar/', '', 'handle_bar'), ('/test/test3/baz/', '', 'handle_baz'), ('/test/test3/foo/', '', 'handle_foo'), ('/test/test4/bar/', '', 'handle_bar'), 
('/test/test4/baz/', '', 'handle_baz'), ('/test/test4/foo/', '', 'handle_foo'), ('/test1-blah', '', 'test1bar'), ('/test1-meh', '', 'test1baz'), ('/test2-blah', '', 'test2bar'), ('/test2-meh', '', 'test2baz'), ('/test3-blah', '', 'test3bar'), ('/test3-meh', '', 'test3baz'), ('/test4-blah', '', 'test4bar'), ('/test4-meh', '', 'test4baz') ]) def test_non_string_parts(): m = r.Map([ r.Rule('/', endpoint='foo') ]) a = m.bind('example.com') assert a.build('foo', {'foo': 42}) == '/42' def test_complex_routing_rules(): m = r.Map([ r.Rule('/', endpoint='index'), r.Rule('/', endpoint='an_int'), r.Rule('/', endpoint='a_string'), r.Rule('/foo/', endpoint='nested'), r.Rule('/foobar/', endpoint='nestedbar'), r.Rule('/foo//', endpoint='nested_show'), r.Rule('/foo//edit', endpoint='nested_edit'), r.Rule('/users/', endpoint='users', defaults={'page': 1}), r.Rule('/users/page/', endpoint='users'), r.Rule('/foox', endpoint='foox'), r.Rule('//', endpoint='barx_path_path') ]) a = m.bind('example.com') assert a.match('/') == ('index', {}) assert a.match('/42') == ('an_int', {'blub': 42}) assert a.match('/blub') == ('a_string', {'blub': 'blub'}) assert a.match('/foo/') == ('nested', {}) assert a.match('/foobar/') == ('nestedbar', {}) assert a.match('/foo/1/2/3/') == ('nested_show', {'testing': '1/2/3'}) assert a.match('/foo/1/2/3/edit') == ('nested_edit', {'testing': '1/2/3'}) assert a.match('/users/') == ('users', {'page': 1}) assert a.match('/users/page/2') == ('users', {'page': 2}) assert a.match('/foox') == ('foox', {}) assert a.match('/1/2/3') == ('barx_path_path', {'bar': '1', 'blub': '2/3'}) assert a.build('index') == '/' assert a.build('an_int', {'blub': 42}) == '/42' assert a.build('a_string', {'blub': 'test'}) == '/test' assert a.build('nested') == '/foo/' assert a.build('nestedbar') == '/foobar/' assert a.build('nested_show', {'testing': '1/2/3'}) == '/foo/1/2/3/' assert a.build('nested_edit', {'testing': '1/2/3'}) == '/foo/1/2/3/edit' assert a.build('users', {'page': 1}) == '/users/' assert a.build('users', {'page': 2}) == '/users/page/2' assert a.build('foox') == '/foox' assert a.build('barx_path_path', {'bar': '1', 'blub': '2/3'}) == '/1/2/3' def test_default_converters(): class MyMap(r.Map): default_converters = r.Map.default_converters.copy() default_converters['foo'] = r.UnicodeConverter assert isinstance(r.Map.default_converters, ImmutableDict) m = MyMap([ r.Rule('/a/', endpoint='a'), r.Rule('/b/', endpoint='b'), r.Rule('/c/', endpoint='c') ], converters={'bar': r.UnicodeConverter}) a = m.bind('example.org', '/') assert a.match('/a/1') == ('a', {'a': '1'}) assert a.match('/b/2') == ('b', {'b': '2'}) assert a.match('/c/3') == ('c', {'c': '3'}) assert 'foo' not in r.Map.default_converters def test_uuid_converter(): m = r.Map([ r.Rule('/a/', endpoint='a') ]) a = m.bind('example.org', '/') rooute, kwargs = a.match('/a/a8098c1a-f86e-11da-bd1a-00112444be1e') assert type(kwargs['a_uuid']) == uuid.UUID def test_converter_with_tuples(): ''' Regression test for https://github.com/pallets/werkzeug/issues/709 ''' class TwoValueConverter(r.BaseConverter): def __init__(self, *args, **kwargs): super(TwoValueConverter, self).__init__(*args, **kwargs) self.regex = r'(\w\w+)/(\w\w+)' def to_python(self, two_values): one, two = two_values.split('/') return one, two def to_url(self, values): return "%s/%s" % (values[0], values[1]) map = r.Map([ r.Rule('//', endpoint='handler') ], converters={'two': TwoValueConverter}) a = map.bind('example.org', '/') route, kwargs = a.match('/qwert/yuiop/') assert kwargs['foo'] 
== ('qwert', 'yuiop') def test_anyconverter(): m = r.Map([ r.Rule('/', endpoint='no_dot'), r.Rule('/', endpoint='yes_dot') ]) a = m.bind('example.org', '/') assert a.match('/a1') == ('no_dot', {'a': 'a1'}) assert a.match('/a2') == ('no_dot', {'a': 'a2'}) assert a.match('/a.1') == ('yes_dot', {'a': 'a.1'}) assert a.match('/a.2') == ('yes_dot', {'a': 'a.2'}) def test_build_append_unknown(): map = r.Map([ r.Rule('/bar/', endpoint='barf') ]) adapter = map.bind('example.org', '/', subdomain='blah') assert adapter.build('barf', {'bazf': 0.815, 'bif': 1.0}) == \ 'http://example.org/bar/0.815?bif=1.0' assert adapter.build('barf', {'bazf': 0.815, 'bif': 1.0}, append_unknown=False) == 'http://example.org/bar/0.815' def test_build_append_multiple(): map = r.Map([ r.Rule('/bar/', endpoint='barf') ]) adapter = map.bind('example.org', '/', subdomain='blah') params = {'bazf': 0.815, 'bif': [1.0, 3.0], 'pof': 2.0} a, b = adapter.build('barf', params).split('?') assert a == 'http://example.org/bar/0.815' assert set(b.split('&')) == set('pof=2.0&bif=1.0&bif=3.0'.split('&')) def test_method_fallback(): map = r.Map([ r.Rule('/', endpoint='index', methods=['GET']), r.Rule('/', endpoint='hello_name', methods=['GET']), r.Rule('/select', endpoint='hello_select', methods=['POST']), r.Rule('/search_get', endpoint='search', methods=['GET']), r.Rule('/search_post', endpoint='search', methods=['POST']) ]) adapter = map.bind('example.com') assert adapter.build('index') == '/' assert adapter.build('index', method='GET') == '/' assert adapter.build('hello_name', {'name': 'foo'}) == '/foo' assert adapter.build('hello_select') == '/select' assert adapter.build('hello_select', method='POST') == '/select' assert adapter.build('search') == '/search_get' assert adapter.build('search', method='GET') == '/search_get' assert adapter.build('search', method='POST') == '/search_post' def test_implicit_head(): url_map = r.Map([ r.Rule('/get', methods=['GET'], endpoint='a'), r.Rule('/post', methods=['POST'], endpoint='b') ]) adapter = url_map.bind('example.org') assert adapter.match('/get', method='HEAD') == ('a', {}) pytest.raises(r.MethodNotAllowed, adapter.match, '/post', method='HEAD') def test_pass_str_as_router_methods(): with pytest.raises(TypeError): r.Rule('/get', methods='GET') def test_protocol_joining_bug(): m = r.Map([r.Rule('/', endpoint='x')]) a = m.bind('example.org') assert a.build('x', {'foo': 'x:y'}) == '/x:y' assert a.build('x', {'foo': 'x:y'}, force_external=True) == \ 'http://example.org/x:y' def test_allowed_methods_querying(): m = r.Map([r.Rule('/', methods=['GET', 'HEAD']), r.Rule('/foo', methods=['POST'])]) a = m.bind('example.org') assert sorted(a.allowed_methods('/foo')) == ['GET', 'HEAD', 'POST'] def test_external_building_with_port(): map = r.Map([ r.Rule('/', endpoint='index'), ]) adapter = map.bind('example.org:5000', '/') built_url = adapter.build('index', {}, force_external=True) assert built_url == 'http://example.org:5000/', built_url def test_external_building_with_port_bind_to_environ(): map = r.Map([ r.Rule('/', endpoint='index'), ]) adapter = map.bind_to_environ( create_environ('/', 'http://example.org:5000/'), server_name="example.org:5000" ) built_url = adapter.build('index', {}, force_external=True) assert built_url == 'http://example.org:5000/', built_url def test_external_building_with_port_bind_to_environ_wrong_servername(): map = r.Map([ r.Rule('/', endpoint='index'), ]) environ = create_environ('/', 'http://example.org:5000/') adapter = map.bind_to_environ(environ, 
server_name="example.org") assert adapter.subdomain == '' def test_converter_parser(): args, kwargs = r.parse_converter_args(u'test, a=1, b=3.0') assert args == ('test',) assert kwargs == {'a': 1, 'b': 3.0} args, kwargs = r.parse_converter_args('') assert not args and not kwargs args, kwargs = r.parse_converter_args('a, b, c,') assert args == ('a', 'b', 'c') assert not kwargs args, kwargs = r.parse_converter_args('True, False, None') assert args == (True, False, None) args, kwargs = r.parse_converter_args('"foo", u"bar"') assert args == ('foo', 'bar') def test_alias_redirects(): m = r.Map([ r.Rule('/', endpoint='index'), r.Rule('/index.html', endpoint='index', alias=True), r.Rule('/users/', defaults={'page': 1}, endpoint='users'), r.Rule('/users/index.html', defaults={'page': 1}, alias=True, endpoint='users'), r.Rule('/users/page/', endpoint='users'), r.Rule('/users/page-.html', alias=True, endpoint='users'), ]) a = m.bind('example.com') def ensure_redirect(path, new_url, args=None): with pytest.raises(r.RequestRedirect) as excinfo: a.match(path, query_args=args) assert excinfo.value.new_url == 'http://example.com' + new_url ensure_redirect('/index.html', '/') ensure_redirect('/users/index.html', '/users/') ensure_redirect('/users/page-2.html', '/users/page/2') ensure_redirect('/users/page-1.html', '/users/') ensure_redirect('/users/page-1.html', '/users/?foo=bar', {'foo': 'bar'}) assert a.build('index') == '/' assert a.build('users', {'page': 1}) == '/users/' assert a.build('users', {'page': 2}) == '/users/page/2' @pytest.mark.parametrize('prefix', ('', '/aaa')) def test_double_defaults(prefix): m = r.Map([ r.Rule(prefix + '/', defaults={'foo': 1, 'bar': False}, endpoint='x'), r.Rule(prefix + '/', defaults={'bar': False}, endpoint='x'), r.Rule(prefix + '/bar/', defaults={'foo': 1, 'bar': True}, endpoint='x'), r.Rule(prefix + '/bar/', defaults={'bar': True}, endpoint='x') ]) a = m.bind('example.com') assert a.match(prefix + '/') == ('x', {'foo': 1, 'bar': False}) assert a.match(prefix + '/2') == ('x', {'foo': 2, 'bar': False}) assert a.match(prefix + '/bar/') == ('x', {'foo': 1, 'bar': True}) assert a.match(prefix + '/bar/2') == ('x', {'foo': 2, 'bar': True}) assert a.build('x', {'foo': 1, 'bar': False}) == prefix + '/' assert a.build('x', {'foo': 2, 'bar': False}) == prefix + '/2' assert a.build('x', {'bar': False}) == prefix + '/' assert a.build('x', {'foo': 1, 'bar': True}) == prefix + '/bar/' assert a.build('x', {'foo': 2, 'bar': True}) == prefix + '/bar/2' assert a.build('x', {'bar': True}) == prefix + '/bar/' def test_host_matching(): m = r.Map([ r.Rule('/', endpoint='index', host='www.'), r.Rule('/', endpoint='files', host='files.'), r.Rule('/foo/', defaults={'page': 1}, host='www.', endpoint='x'), r.Rule('/', host='files.', endpoint='x') ], host_matching=True) a = m.bind('www.example.com') assert a.match('/') == ('index', {'domain': 'example.com'}) assert a.match('/foo/') == ('x', {'domain': 'example.com', 'page': 1}) with pytest.raises(r.RequestRedirect) as excinfo: a.match('/foo') assert excinfo.value.new_url == 'http://www.example.com/foo/' a = m.bind('files.example.com') assert a.match('/') == ('files', {'domain': 'example.com'}) assert a.match('/2') == ('x', {'domain': 'example.com', 'page': 2}) with pytest.raises(r.RequestRedirect) as excinfo: a.match('/1') assert excinfo.value.new_url == 'http://www.example.com/foo/' def test_host_matching_building(): m = r.Map([ r.Rule('/', endpoint='index', host='www.domain.com'), r.Rule('/', endpoint='foo', host='my.domain.com') ], 
host_matching=True) www = m.bind('www.domain.com') assert www.match('/') == ('index', {}) assert www.build('index') == '/' assert www.build('foo') == 'http://my.domain.com/' my = m.bind('my.domain.com') assert my.match('/') == ('foo', {}) assert my.build('foo') == '/' assert my.build('index') == 'http://www.domain.com/' def test_server_name_casing(): m = r.Map([ r.Rule('/', endpoint='index', subdomain='foo') ]) env = create_environ() env['SERVER_NAME'] = env['HTTP_HOST'] = 'FOO.EXAMPLE.COM' a = m.bind_to_environ(env, server_name='example.com') assert a.match('/') == ('index', {}) env = create_environ() env['SERVER_NAME'] = '127.0.0.1' env['SERVER_PORT'] = '5000' del env['HTTP_HOST'] a = m.bind_to_environ(env, server_name='example.com') with pytest.raises(r.NotFound): a.match() def test_redirect_request_exception_code(): exc = r.RequestRedirect('http://www.google.com/') exc.code = 307 env = create_environ() strict_eq(exc.get_response(env).status_code, exc.code) def test_redirect_path_quoting(): url_map = r.Map([ r.Rule('/', defaults={'page': 1}, endpoint='category'), r.Rule('//page/', endpoint='category') ]) adapter = url_map.bind('example.com') with pytest.raises(r.RequestRedirect) as excinfo: adapter.match('/foo bar/page/1') response = excinfo.value.get_response({}) strict_eq(response.headers['location'], u'http://example.com/foo%20bar') def test_unicode_rules(): m = r.Map([ r.Rule(u'/войти/', endpoint='enter'), r.Rule(u'/foo+bar/', endpoint='foobar') ]) a = m.bind(u'☃.example.com') with pytest.raises(r.RequestRedirect) as excinfo: a.match(u'/войти') strict_eq(excinfo.value.new_url, 'http://xn--n3h.example.com/%D0%B2%D0%BE%D0%B9%D1%82%D0%B8/') endpoint, values = a.match(u'/войти/') strict_eq(endpoint, 'enter') strict_eq(values, {}) with pytest.raises(r.RequestRedirect) as excinfo: a.match(u'/foo+bar') strict_eq(excinfo.value.new_url, 'http://xn--n3h.example.com/foo+bar/') endpoint, values = a.match(u'/foo+bar/') strict_eq(endpoint, 'foobar') strict_eq(values, {}) url = a.build('enter', {}, force_external=True) strict_eq(url, 'http://xn--n3h.example.com/%D0%B2%D0%BE%D0%B9%D1%82%D0%B8/') url = a.build('foobar', {}, force_external=True) strict_eq(url, 'http://xn--n3h.example.com/foo+bar/') def test_empty_path_info(): m = r.Map([ r.Rule("/", endpoint="index"), ]) b = m.bind("example.com", script_name="/approot") with pytest.raises(r.RequestRedirect) as excinfo: b.match("") assert excinfo.value.new_url == "http://example.com/approot/" a = m.bind("example.com") with pytest.raises(r.RequestRedirect) as excinfo: a.match("") assert excinfo.value.new_url == "http://example.com/" def test_map_repr(): m = r.Map([ r.Rule(u'/wat', endpoint='enter'), r.Rule(u'/woop', endpoint='foobar') ]) rv = repr(m) strict_eq(rv, "Map([ foobar>, enter>])") def test_empty_subclass_rules_with_custom_kwargs(): class CustomRule(r.Rule): def __init__(self, string=None, custom=None, *args, **kwargs): self.custom = custom super(CustomRule, self).__init__(string, *args, **kwargs) rule1 = CustomRule(u'/foo', endpoint='bar') try: rule2 = rule1.empty() assert rule1.rule == rule2.rule except TypeError as e: # raised without fix in PR #675 raise e def test_finding_closest_match_by_endpoint(): m = r.Map([ r.Rule(u'/foo/', endpoint='users.here'), r.Rule(u'/wat/', endpoint='admin.users'), r.Rule(u'/woop', endpoint='foo.users'), ]) adapter = m.bind('example.com') assert r.BuildError('admin.user', None, None, adapter).suggested.endpoint \ == 'admin.users' def test_finding_closest_match_by_values(): rule_id = r.Rule(u'/user/id//', 
endpoint='users') rule_slug = r.Rule(u'/user//', endpoint='users') rule_random = r.Rule(u'/user/emails//', endpoint='users') m = r.Map([rule_id, rule_slug, rule_random]) adapter = m.bind('example.com') assert r.BuildError('x', {'slug': ''}, None, adapter).suggested == \ rule_slug def test_finding_closest_match_by_method(): post = r.Rule(u'/post/', endpoint='foobar', methods=['POST']) get = r.Rule(u'/get/', endpoint='foobar', methods=['GET']) put = r.Rule(u'/put/', endpoint='foobar', methods=['PUT']) m = r.Map([post, get, put]) adapter = m.bind('example.com') assert r.BuildError('invalid', {}, 'POST', adapter).suggested == post assert r.BuildError('invalid', {}, 'GET', adapter).suggested == get assert r.BuildError('invalid', {}, 'PUT', adapter).suggested == put def test_finding_closest_match_when_none_exist(): m = r.Map([]) assert not r.BuildError('invalid', {}, None, m.bind('test.com')).suggested def test_error_message_without_suggested_rule(): m = r.Map([ r.Rule(u'/foo/', endpoint='world', methods=['GET']), ]) adapter = m.bind('example.com') with pytest.raises(r.BuildError) as excinfo: adapter.build('urks') assert str(excinfo.value).startswith( "Could not build url for endpoint 'urks'." ) with pytest.raises(r.BuildError) as excinfo: adapter.build('world', method='POST') assert str(excinfo.value).startswith( "Could not build url for endpoint 'world' ('POST')." ) with pytest.raises(r.BuildError) as excinfo: adapter.build('urks', values={'user_id': 5}) assert str(excinfo.value).startswith( "Could not build url for endpoint 'urks' with values ['user_id']." ) def test_error_message_suggestion(): m = r.Map([ r.Rule(u'/foo//', endpoint='world', methods=['GET']), ]) adapter = m.bind('example.com') with pytest.raises(r.BuildError) as excinfo: adapter.build('helloworld') assert "Did you mean 'world' instead?" in str(excinfo.value) with pytest.raises(r.BuildError) as excinfo: adapter.build('world') assert "Did you forget to specify values ['id']?" in str(excinfo.value) assert "Did you mean to use methods" not in str(excinfo.value) with pytest.raises(r.BuildError) as excinfo: adapter.build('world', {'id': 2}, method='POST') assert "Did you mean to use methods ['GET', 'HEAD']?" in str(excinfo.value) werkzeug-0.14.1/tests/test_security.py000066400000000000000000000124111322225165500200640ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.security ~~~~~~~~~~~~~~ Tests the security helpers. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. 
""" import os import posixpath import pytest from werkzeug.security import check_password_hash, generate_password_hash, \ safe_join, pbkdf2_hex, safe_str_cmp def test_safe_str_cmp(): assert safe_str_cmp('a', 'a') is True assert safe_str_cmp(b'a', u'a') is True assert safe_str_cmp('a', 'b') is False assert safe_str_cmp(b'aaa', 'aa') is False assert safe_str_cmp(b'aaa', 'bbb') is False assert safe_str_cmp(b'aaa', u'aaa') is True assert safe_str_cmp(u'aaa', u'aaa') is True def test_safe_str_cmp_no_builtin(): import werkzeug.security as sec prev_value = sec._builtin_safe_str_cmp sec._builtin_safe_str_cmp = None assert safe_str_cmp('a', 'ab') is False assert safe_str_cmp('str', 'str') is True assert safe_str_cmp('str1', 'str2') is False sec._builtin_safe_str_cmp = prev_value def test_password_hashing(): hash0 = generate_password_hash('default') assert check_password_hash(hash0, 'default') assert hash0.startswith('pbkdf2:sha256:50000$') hash1 = generate_password_hash('default', 'sha1') hash2 = generate_password_hash(u'default', method='sha1') assert hash1 != hash2 assert check_password_hash(hash1, 'default') assert check_password_hash(hash2, 'default') assert hash1.startswith('sha1$') assert hash2.startswith('sha1$') with pytest.raises(TypeError): check_password_hash('$made$up$', 'default') with pytest.raises(ValueError): generate_password_hash('default', 'sha1', salt_length=0) fakehash = generate_password_hash('default', method='plain') assert fakehash == 'plain$$default' assert check_password_hash(fakehash, 'default') mhash = generate_password_hash(u'default', method='md5') assert mhash.startswith('md5$') assert check_password_hash(mhash, 'default') legacy = 'md5$$c21f969b5f03d33d43e04f8f136e7682' assert check_password_hash(legacy, 'default') legacy = u'md5$$c21f969b5f03d33d43e04f8f136e7682' assert check_password_hash(legacy, 'default') def test_safe_join(): assert safe_join('foo', 'bar/baz') == posixpath.join('foo', 'bar/baz') assert safe_join('foo', '../bar/baz') is None if os.name == 'nt': assert safe_join('foo', 'foo\\bar') is None def test_safe_join_os_sep(): import werkzeug.security as sec prev_value = sec._os_alt_seps sec._os_alt_seps = '*' assert safe_join('foo', 'bar/baz*') is None sec._os_alt_steps = prev_value def test_pbkdf2(): def check(data, salt, iterations, keylen, hashfunc, expected): rv = pbkdf2_hex(data, salt, iterations, keylen, hashfunc) assert rv == expected # From RFC 6070 # Assumes default keylen is 20 # check('password', 'salt', 1, None, # '0c60c80f961f0e71f3a9b524af6012062fe037a6') check('password', 'salt', 1, 20, 'sha1', '0c60c80f961f0e71f3a9b524af6012062fe037a6') check('password', 'salt', 2, 20, 'sha1', 'ea6c014dc72d6f8ccd1ed92ace1d41f0d8de8957') check('password', 'salt', 4096, 20, 'sha1', '4b007901b765489abead49d926f721d065a429c1') check('passwordPASSWORDpassword', 'saltSALTsaltSALTsaltSALTsaltSALTsalt', 4096, 25, 'sha1', '3d2eec4fe41c849b80c8d83662c0e44a8b291a964cf2f07038') check('pass\x00word', 'sa\x00lt', 4096, 16, 'sha1', '56fa6aa75548099dcc37d7f03425e0c3') # PBKDF2-HMAC-SHA256 test vectors check('password', 'salt', 1, 32, 'sha256', '120fb6cffcf8b32c43e7225256c4f837a86548c92ccc35480805987cb70be17b') check('password', 'salt', 2, 32, 'sha256', 'ae4d0c95af6b46d32d0adff928f06dd02a303f8ef3c251dfd6e2d85a95474c43') check('password', 'salt', 4096, 20, 'sha256', 'c5e478d59288c841aa530db6845c4c8d962893a0') # This one is from the RFC but it just takes for ages # check('password', 'salt', 16777216, 20, # 'eefe3d61cd4da4e4e9945b3d6ba2158c2634e984') # From Crypt-PBKDF2 
check('password', 'ATHENA.MIT.EDUraeburn', 1, 16, 'sha1', 'cdedb5281bb2f801565a1122b2563515') check('password', 'ATHENA.MIT.EDUraeburn', 1, 32, 'sha1', 'cdedb5281bb2f801565a1122b25635150ad1f7a04bb9f3a333ecc0e2e1f70837') check('password', 'ATHENA.MIT.EDUraeburn', 2, 16, 'sha1', '01dbee7f4a9e243e988b62c73cda935d') check('password', 'ATHENA.MIT.EDUraeburn', 2, 32, 'sha1', '01dbee7f4a9e243e988b62c73cda935da05378b93244ec8f48a99e61ad799d86') check('password', 'ATHENA.MIT.EDUraeburn', 1200, 32, 'sha1', '5c08eb61fdf71e4e4ec3cf6ba1f5512ba7e52ddbc5e5142f708a31e2e62b1e13') check('X' * 64, 'pass phrase equals block size', 1200, 32, 'sha1', '139c30c0966bc32ba55fdbf212530ac9c5ec59f1a452f5cc9ad940fea0598ed1') check('X' * 65, 'pass phrase exceeds block size', 1200, 32, 'sha1', '9ccad6d468770cd51b10e6a68721be611a8b4d282601db3b36be9246915ec82a') def test_pbkdf2_non_native(): import werkzeug.security as sec prev_value = sec._has_native_pbkdf2 sec._has_native_pbkdf2 = None assert pbkdf2_hex('password', 'salt', 1, 20, 'sha1') \ == '0c60c80f961f0e71f3a9b524af6012062fe037a6' sec._has_native_pbkdf2 = prev_value werkzeug-0.14.1/tests/test_serving.py000066400000000000000000000332651322225165500177040ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.serving ~~~~~~~~~~~~~ Added serving tests. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. """ import os import ssl import sys import textwrap import time import subprocess try: import OpenSSL except ImportError: OpenSSL = None try: import watchdog except ImportError: watchdog = None try: import httplib except ImportError: from http import client as httplib import requests import requests.exceptions import pytest from werkzeug import __version__ as version, serving, _reloader def test_serving(dev_server): server = dev_server('from werkzeug.testapp import test_app as app') rv = requests.get('http://%s/?foo=bar&baz=blah' % server.addr).content assert b'WSGI Information' in rv assert b'foo=bar&baz=blah' in rv assert b'Werkzeug/' + version.encode('ascii') in rv def test_absolute_requests(dev_server): server = dev_server(''' def app(environ, start_response): assert environ['HTTP_HOST'] == 'surelynotexisting.example.com:1337' assert environ['PATH_INFO'] == '/index.htm' addr = environ['HTTP_X_WERKZEUG_ADDR'] assert environ['SERVER_PORT'] == addr.split(':')[1] start_response('200 OK', [('Content-Type', 'text/html')]) return [b'YES'] ''') conn = httplib.HTTPConnection(server.addr) conn.request('GET', 'http://surelynotexisting.example.com:1337/index.htm#ignorethis', headers={'X-Werkzeug-Addr': server.addr}) res = conn.getresponse() assert res.read() == b'YES' def test_double_slash_path(dev_server): server = dev_server(''' def app(environ, start_response): assert 'fail' not in environ['HTTP_HOST'] start_response('200 OK', [('Content-Type', 'text/plain')]) return [b'YES'] ''') r = requests.get(server.url + '//fail') assert r.content == b'YES' def test_broken_app(dev_server): server = dev_server(''' def app(environ, start_response): 1 // 0 ''') r = requests.get(server.url + '/?foo=bar&baz=blah') assert r.status_code == 500 assert 'Internal Server Error' in r.text @pytest.mark.skipif(not hasattr(ssl, 'SSLContext'), reason='Missing PEP 466 (Python 2.7.9+) or Python 3.') @pytest.mark.skipif(OpenSSL is None, reason='OpenSSL is required for cert generation.') def test_stdlib_ssl_contexts(dev_server, tmpdir): certificate, private_key = \ serving.make_ssl_devcert(str(tmpdir.mkdir('certs'))) server = dev_server(''' def app(environ, 
start_response): start_response('200 OK', [('Content-Type', 'text/html')]) return [b'hello'] import ssl ctx = ssl.SSLContext(ssl.PROTOCOL_SSLv23) ctx.load_cert_chain("%s", "%s") kwargs['ssl_context'] = ctx ''' % (certificate, private_key)) assert server.addr is not None r = requests.get(server.url, verify=False) assert r.content == b'hello' @pytest.mark.skipif(OpenSSL is None, reason='OpenSSL is not installed.') def test_ssl_context_adhoc(dev_server): server = dev_server(''' def app(environ, start_response): start_response('200 OK', [('Content-Type', 'text/html')]) return [b'hello'] kwargs['ssl_context'] = 'adhoc' ''') r = requests.get(server.url, verify=False) assert r.content == b'hello' @pytest.mark.skipif(OpenSSL is None, reason='OpenSSL is not installed.') def test_make_ssl_devcert(tmpdir): certificate, private_key = \ serving.make_ssl_devcert(str(tmpdir)) assert os.path.isfile(certificate) assert os.path.isfile(private_key) @pytest.mark.skipif(watchdog is None, reason='Watchdog not installed.') def test_reloader_broken_imports(tmpdir, dev_server): # We explicitly assert that the server reloads on change, even though in # this case the import could've just been retried. This is to assert # correct behavior for apps that catch and cache import errors. # # Because this feature is achieved by recursively watching a large amount # of directories, this only works for the watchdog reloader. The stat # reloader is too inefficient to watch such a large amount of files. real_app = tmpdir.join('real_app.py') real_app.write("lol syntax error") server = dev_server(''' trials = [] def app(environ, start_response): assert not trials, 'should have reloaded' trials.append(1) import real_app return real_app.real_app(environ, start_response) kwargs['use_reloader'] = True kwargs['reloader_interval'] = 0.1 kwargs['reloader_type'] = 'watchdog' ''') server.wait_for_reloader_loop() r = requests.get(server.url) assert r.status_code == 500 real_app.write(textwrap.dedent(''' def real_app(environ, start_response): start_response('200 OK', [('Content-Type', 'text/html')]) return [b'hello'] ''')) server.wait_for_reloader() r = requests.get(server.url) assert r.status_code == 200 assert r.content == b'hello' @pytest.mark.skipif(watchdog is None, reason='Watchdog not installed.') def test_reloader_nested_broken_imports(tmpdir, dev_server): real_app = tmpdir.mkdir('real_app') real_app.join('__init__.py').write('from real_app.sub import real_app') sub = real_app.mkdir('sub').join('__init__.py') sub.write("lol syntax error") server = dev_server(''' trials = [] def app(environ, start_response): assert not trials, 'should have reloaded' trials.append(1) import real_app return real_app.real_app(environ, start_response) kwargs['use_reloader'] = True kwargs['reloader_interval'] = 0.1 kwargs['reloader_type'] = 'watchdog' ''') server.wait_for_reloader_loop() r = requests.get(server.url) assert r.status_code == 500 sub.write(textwrap.dedent(''' def real_app(environ, start_response): start_response('200 OK', [('Content-Type', 'text/html')]) return [b'hello'] ''')) server.wait_for_reloader() r = requests.get(server.url) assert r.status_code == 200 assert r.content == b'hello' @pytest.mark.skipif(watchdog is None, reason='Watchdog not installed.') def test_reloader_reports_correct_file(tmpdir, dev_server): real_app = tmpdir.join('real_app.py') real_app.write(textwrap.dedent(''' def real_app(environ, start_response): start_response('200 OK', [('Content-Type', 'text/html')]) return [b'hello'] ''')) server = dev_server(''' trials = 
[] def app(environ, start_response): assert not trials, 'should have reloaded' trials.append(1) import real_app return real_app.real_app(environ, start_response) kwargs['use_reloader'] = True kwargs['reloader_interval'] = 0.1 kwargs['reloader_type'] = 'watchdog' ''') server.wait_for_reloader_loop() r = requests.get(server.url) assert r.status_code == 200 assert r.content == b'hello' real_app_binary = tmpdir.join('real_app.pyc') real_app_binary.write('anything is fine here') server.wait_for_reloader() change_event = " * Detected change in '%(path)s', reloading" % { 'path': real_app_binary } server.logfile.seek(0) for i in range(20): time.sleep(0.1 * i) log = server.logfile.read() if change_event in log: break else: raise RuntimeError('Change event not detected.') def test_windows_get_args_for_reloading(monkeypatch, tmpdir): test_py_exe = r'C:\Users\test\AppData\Local\Programs\Python\Python36\python.exe' monkeypatch.setattr(os, 'name', 'nt') monkeypatch.setattr(sys, 'executable', test_py_exe) test_exe = tmpdir.mkdir('test').join('test.exe') monkeypatch.setattr(sys, 'argv', [test_exe.strpath, 'run']) rv = _reloader._get_args_for_reloading() assert rv == [test_exe.strpath, 'run'] def test_monkeypached_sleep(tmpdir): # removing the staticmethod wrapper in the definition of # ReloaderLoop._sleep works most of the time, since `sleep` is a c # function, and unlike python functions which are descriptors, doesn't # become a method when attached to a class. however, if the user has called # `eventlet.monkey_patch` before importing `_reloader`, `time.sleep` is a # python function, and subsequently calling `ReloaderLoop._sleep` fails # with a TypeError. This test checks that _sleep is attached correctly. script = tmpdir.mkdir('app').join('test.py') script.write(textwrap.dedent(''' import time def sleep(secs): pass # simulate eventlet.monkey_patch by replacing the builtin sleep # with a regular function before _reloader is imported time.sleep = sleep from werkzeug._reloader import ReloaderLoop ReloaderLoop()._sleep(0) ''')) subprocess.check_call(['python', str(script)]) def test_wrong_protocol(dev_server): # Assert that sending HTTPS requests to a HTTP server doesn't show a # traceback # See https://github.com/pallets/werkzeug/pull/838 server = dev_server(''' def app(environ, start_response): start_response('200 OK', [('Content-Type', 'text/html')]) return [b'hello'] ''') with pytest.raises(requests.exceptions.ConnectionError): requests.get('https://%s/' % server.addr) log = server.logfile.read() assert 'Traceback' not in log assert '\n127.0.0.1' in log def test_absent_content_length_and_content_type(dev_server): server = dev_server(''' def app(environ, start_response): assert 'CONTENT_LENGTH' not in environ assert 'CONTENT_TYPE' not in environ start_response('200 OK', [('Content-Type', 'text/html')]) return [b'YES'] ''') r = requests.get(server.url) assert r.content == b'YES' def test_set_content_length_and_content_type_if_provided_by_client(dev_server): server = dev_server(''' def app(environ, start_response): assert environ['CONTENT_LENGTH'] == '233' assert environ['CONTENT_TYPE'] == 'application/json' start_response('200 OK', [('Content-Type', 'text/html')]) return [b'YES'] ''') r = requests.get(server.url, headers={ 'content_length': '233', 'content_type': 'application/json' }) assert r.content == b'YES' def test_port_must_be_integer(dev_server): def app(environ, start_response): start_response('200 OK', [('Content-Type', 'text/html')]) return [b'hello'] with pytest.raises(TypeError) as excinfo: 
serving.run_simple(hostname='localhost', port='5001', application=app, use_reloader=True) assert 'port must be an integer' in str(excinfo.value) with pytest.raises(TypeError) as excinfo: serving.run_simple(hostname='localhost', port='5001', application=app, use_reloader=False) assert 'port must be an integer' in str(excinfo.value) def test_chunked_encoding(dev_server): server = dev_server(r''' from werkzeug.wrappers import Request def app(environ, start_response): assert environ['HTTP_TRANSFER_ENCODING'] == 'chunked' assert environ.get('wsgi.input_terminated', False) request = Request(environ) assert request.mimetype == 'multipart/form-data' assert request.files['file'].read() == b'This is a test\n' assert request.form['type'] == 'text/plain' start_response('200 OK', [('Content-Type', 'text/plain')]) return [b'YES'] ''') testfile = os.path.join(os.path.dirname(__file__), 'res', 'chunked.txt') if sys.version_info[0] == 2: from httplib import HTTPConnection else: from http.client import HTTPConnection conn = HTTPConnection('127.0.0.1', server.port) conn.connect() conn.putrequest('POST', '/', skip_host=1, skip_accept_encoding=1) conn.putheader('Accept', 'text/plain') conn.putheader('Transfer-Encoding', 'chunked') conn.putheader( 'Content-Type', 'multipart/form-data; boundary=' '--------------------------898239224156930639461866') conn.endheaders() with open(testfile, 'rb') as f: conn.send(f.read()) res = conn.getresponse() assert res.status == 200 assert res.read() == b'YES' conn.close() def test_chunked_encoding_with_content_length(dev_server): server = dev_server(r''' from werkzeug.wrappers import Request def app(environ, start_response): assert environ['HTTP_TRANSFER_ENCODING'] == 'chunked' assert environ.get('wsgi.input_terminated', False) request = Request(environ) assert request.mimetype == 'multipart/form-data' assert request.files['file'].read() == b'This is a test\n' assert request.form['type'] == 'text/plain' start_response('200 OK', [('Content-Type', 'text/plain')]) return [b'YES'] ''') testfile = os.path.join(os.path.dirname(__file__), 'res', 'chunked.txt') if sys.version_info[0] == 2: from httplib import HTTPConnection else: from http.client import HTTPConnection conn = HTTPConnection('127.0.0.1', server.port) conn.connect() conn.putrequest('POST', '/', skip_host=1, skip_accept_encoding=1) conn.putheader('Accept', 'text/plain') conn.putheader('Transfer-Encoding', 'chunked') # Content-Length is invalid for chunked, but some libraries might send it conn.putheader('Content-Length', '372') conn.putheader( 'Content-Type', 'multipart/form-data; boundary=' '--------------------------898239224156930639461866') conn.endheaders() with open(testfile, 'rb') as f: conn.send(f.read()) res = conn.getresponse() assert res.status == 200 assert res.read() == b'YES' conn.close() werkzeug-0.14.1/tests/test_test.py000066400000000000000000000531341322225165500172030ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.test ~~~~~~~~~~ Tests the testing tools. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. 
""" from __future__ import with_statement import pytest import sys from io import BytesIO from werkzeug._compat import iteritems, to_bytes, implements_iterator from functools import partial from tests import strict_eq from werkzeug.wrappers import Request, Response, BaseResponse from werkzeug.test import Client, EnvironBuilder, create_environ, \ ClientRedirectError, stream_encode_multipart, run_wsgi_app from werkzeug.utils import redirect from werkzeug.formparser import parse_form_data from werkzeug.datastructures import MultiDict, FileStorage def cookie_app(environ, start_response): """A WSGI application which sets a cookie, and returns as a response any cookie which exists. """ response = Response(environ.get('HTTP_COOKIE', 'No Cookie'), mimetype='text/plain') response.set_cookie('test', 'test') return response(environ, start_response) def redirect_loop_app(environ, start_response): response = redirect('http://localhost/some/redirect/') return response(environ, start_response) def redirect_with_get_app(environ, start_response): req = Request(environ) if req.url not in ('http://localhost/', 'http://localhost/first/request', 'http://localhost/some/redirect/'): assert False, 'redirect_demo_app() did not expect URL "%s"' % req.url if '/some/redirect' not in req.url: response = redirect('http://localhost/some/redirect/') else: response = Response('current url: %s' % req.url) return response(environ, start_response) def redirect_with_post_app(environ, start_response): req = Request(environ) if req.url == 'http://localhost/some/redirect/': assert req.method == 'GET', 'request should be GET' assert not req.form, 'request should not have data' response = Response('current url: %s' % req.url) else: response = redirect('http://localhost/some/redirect/') return response(environ, start_response) def external_redirect_demo_app(environ, start_response): response = redirect('http://example.com/') return response(environ, start_response) def external_subdomain_redirect_demo_app(environ, start_response): if 'test.example.com' in environ['HTTP_HOST']: response = Response('redirected successfully to subdomain') else: response = redirect('http://test.example.com/login') return response(environ, start_response) def multi_value_post_app(environ, start_response): req = Request(environ) assert req.form['field'] == 'val1', req.form['field'] assert req.form.getlist('field') == ['val1', 'val2'], req.form.getlist('field') response = Response('ok') return response(environ, start_response) def test_cookie_forging(): c = Client(cookie_app) c.set_cookie('localhost', 'foo', 'bar') appiter, code, headers = c.open() strict_eq(list(appiter), [b'foo=bar']) def test_set_cookie_app(): c = Client(cookie_app) appiter, code, headers = c.open() assert 'Set-Cookie' in dict(headers) def test_cookiejar_stores_cookie(): c = Client(cookie_app) appiter, code, headers = c.open() assert 'test' in c.cookie_jar._cookies['localhost.local']['/'] def test_no_initial_cookie(): c = Client(cookie_app) appiter, code, headers = c.open() strict_eq(b''.join(appiter), b'No Cookie') def test_resent_cookie(): c = Client(cookie_app) c.open() appiter, code, headers = c.open() strict_eq(b''.join(appiter), b'test=test') def test_disable_cookies(): c = Client(cookie_app, use_cookies=False) c.open() appiter, code, headers = c.open() strict_eq(b''.join(appiter), b'No Cookie') def test_cookie_for_different_path(): c = Client(cookie_app) c.open('/path1') appiter, code, headers = c.open('/path2') strict_eq(b''.join(appiter), b'test=test') def 
test_environ_builder_basics(): b = EnvironBuilder() assert b.content_type is None b.method = 'POST' assert b.content_type is None b.form['test'] = 'normal value' assert b.content_type == 'application/x-www-form-urlencoded' b.files.add_file('test', BytesIO(b'test contents'), 'test.txt') assert b.files['test'].content_type == 'text/plain' b.form['test_int'] = 1 assert b.content_type == 'multipart/form-data' req = b.get_request() b.close() strict_eq(req.url, u'http://localhost/') strict_eq(req.method, 'POST') strict_eq(req.form['test'], u'normal value') assert req.files['test'].content_type == 'text/plain' strict_eq(req.files['test'].filename, u'test.txt') strict_eq(req.files['test'].read(), b'test contents') def test_environ_builder_data(): b = EnvironBuilder(data='foo') assert b.input_stream.getvalue() == b'foo' b = EnvironBuilder(data=b'foo') assert b.input_stream.getvalue() == b'foo' b = EnvironBuilder(data={'foo': 'bar'}) assert b.form['foo'] == 'bar' b = EnvironBuilder(data={'foo': ['bar1', 'bar2']}) assert b.form.getlist('foo') == ['bar1', 'bar2'] def check_list_content(b, length): foo = b.files.getlist('foo') assert len(foo) == length for obj in foo: assert isinstance(obj, FileStorage) b = EnvironBuilder(data={'foo': BytesIO()}) check_list_content(b, 1) b = EnvironBuilder(data={'foo': [BytesIO(), BytesIO()]}) check_list_content(b, 2) b = EnvironBuilder(data={'foo': (BytesIO(),)}) check_list_content(b, 1) b = EnvironBuilder(data={'foo': [(BytesIO(),), (BytesIO(),)]}) check_list_content(b, 2) def test_environ_builder_headers(): b = EnvironBuilder(environ_base={'HTTP_USER_AGENT': 'Foo/0.1'}, environ_overrides={'wsgi.version': (1, 1)}) b.headers['X-Beat-My-Horse'] = 'very well sir' env = b.get_environ() strict_eq(env['HTTP_USER_AGENT'], 'Foo/0.1') strict_eq(env['HTTP_X_BEAT_MY_HORSE'], 'very well sir') strict_eq(env['wsgi.version'], (1, 1)) b.headers['User-Agent'] = 'Bar/1.0' env = b.get_environ() strict_eq(env['HTTP_USER_AGENT'], 'Bar/1.0') def test_environ_builder_headers_content_type(): b = EnvironBuilder(headers={'Content-Type': 'text/plain'}) env = b.get_environ() assert env['CONTENT_TYPE'] == 'text/plain' b = EnvironBuilder(content_type='text/html', headers={'Content-Type': 'text/plain'}) env = b.get_environ() assert env['CONTENT_TYPE'] == 'text/html' def test_environ_builder_paths(): b = EnvironBuilder(path='/foo', base_url='http://example.com/') strict_eq(b.base_url, 'http://example.com/') strict_eq(b.path, '/foo') strict_eq(b.script_root, '') strict_eq(b.host, 'example.com') b = EnvironBuilder(path='/foo', base_url='http://example.com/bar') strict_eq(b.base_url, 'http://example.com/bar/') strict_eq(b.path, '/foo') strict_eq(b.script_root, '/bar') strict_eq(b.host, 'example.com') b.host = 'localhost' strict_eq(b.base_url, 'http://localhost/bar/') b.base_url = 'http://localhost:8080/' strict_eq(b.host, 'localhost:8080') strict_eq(b.server_name, 'localhost') strict_eq(b.server_port, 8080) b.host = 'foo.invalid' b.url_scheme = 'https' b.script_root = '/test' env = b.get_environ() strict_eq(env['SERVER_NAME'], 'foo.invalid') strict_eq(env['SERVER_PORT'], '443') strict_eq(env['SCRIPT_NAME'], '/test') strict_eq(env['PATH_INFO'], '/foo') strict_eq(env['HTTP_HOST'], 'foo.invalid') strict_eq(env['wsgi.url_scheme'], 'https') strict_eq(b.base_url, 'https://foo.invalid/test/') def test_environ_builder_content_type(): builder = EnvironBuilder() assert builder.content_type is None builder.method = 'POST' assert builder.content_type is None builder.method = 'PUT' assert builder.content_type is 
None builder.method = 'PATCH' assert builder.content_type is None builder.method = 'DELETE' assert builder.content_type is None builder.method = 'GET' assert builder.content_type is None builder.form['foo'] = 'bar' assert builder.content_type == 'application/x-www-form-urlencoded' builder.files.add_file('blafasel', BytesIO(b'foo'), 'test.txt') assert builder.content_type == 'multipart/form-data' req = builder.get_request() strict_eq(req.form['foo'], u'bar') strict_eq(req.files['blafasel'].read(), b'foo') def test_environ_builder_stream_switch(): d = MultiDict(dict(foo=u'bar', blub=u'blah', hu=u'hum')) for use_tempfile in False, True: stream, length, boundary = stream_encode_multipart( d, use_tempfile, threshold=150) assert isinstance(stream, BytesIO) != use_tempfile form = parse_form_data({'wsgi.input': stream, 'CONTENT_LENGTH': str(length), 'CONTENT_TYPE': 'multipart/form-data; boundary="%s"' % boundary})[1] strict_eq(form, d) stream.close() def test_environ_builder_unicode_file_mix(): for use_tempfile in False, True: f = FileStorage(BytesIO(u'\N{SNOWMAN}'.encode('utf-8')), 'snowman.txt') d = MultiDict(dict(f=f, s=u'\N{SNOWMAN}')) stream, length, boundary = stream_encode_multipart( d, use_tempfile, threshold=150) assert isinstance(stream, BytesIO) != use_tempfile _, form, files = parse_form_data({ 'wsgi.input': stream, 'CONTENT_LENGTH': str(length), 'CONTENT_TYPE': 'multipart/form-data; boundary="%s"' % boundary }) strict_eq(form['s'], u'\N{SNOWMAN}') strict_eq(files['f'].name, 'f') strict_eq(files['f'].filename, u'snowman.txt') strict_eq(files['f'].read(), u'\N{SNOWMAN}'.encode('utf-8')) stream.close() def test_create_environ(): env = create_environ('/foo?bar=baz', 'http://example.org/') expected = { 'wsgi.multiprocess': False, 'wsgi.version': (1, 0), 'wsgi.run_once': False, 'wsgi.errors': sys.stderr, 'wsgi.multithread': False, 'wsgi.url_scheme': 'http', 'SCRIPT_NAME': '', 'CONTENT_TYPE': '', 'CONTENT_LENGTH': '0', 'SERVER_NAME': 'example.org', 'REQUEST_METHOD': 'GET', 'HTTP_HOST': 'example.org', 'PATH_INFO': '/foo', 'SERVER_PORT': '80', 'SERVER_PROTOCOL': 'HTTP/1.1', 'QUERY_STRING': 'bar=baz' } for key, value in iteritems(expected): assert env[key] == value strict_eq(env['wsgi.input'].read(0), b'') strict_eq(create_environ('/foo', 'http://example.com/')['SCRIPT_NAME'], '') def test_file_closing(): closed = [] class SpecialInput(object): def read(self, size): return '' def close(self): closed.append(self) create_environ(data={'foo': SpecialInput()}) strict_eq(len(closed), 1) builder = EnvironBuilder() builder.files.add_file('blah', SpecialInput()) builder.close() strict_eq(len(closed), 2) def test_follow_redirect(): env = create_environ('/', base_url='http://localhost') c = Client(redirect_with_get_app) appiter, code, headers = c.open(environ_overrides=env, follow_redirects=True) strict_eq(code, '200 OK') strict_eq(b''.join(appiter), b'current url: http://localhost/some/redirect/') # Test that the :cls:`Client` is aware of user defined response wrappers c = Client(redirect_with_get_app, response_wrapper=BaseResponse) resp = c.get('/', follow_redirects=True) strict_eq(resp.status_code, 200) strict_eq(resp.data, b'current url: http://localhost/some/redirect/') # test with URL other than '/' to make sure redirected URL's are correct c = Client(redirect_with_get_app, response_wrapper=BaseResponse) resp = c.get('/first/request', follow_redirects=True) strict_eq(resp.status_code, 200) strict_eq(resp.data, b'current url: http://localhost/some/redirect/') def test_follow_local_redirect(): class 
LocalResponse(BaseResponse): autocorrect_location_header = False def local_redirect_app(environ, start_response): req = Request(environ) if '/from/location' in req.url: response = redirect('/to/location', Response=LocalResponse) else: response = Response('current path: %s' % req.path) return response(environ, start_response) c = Client(local_redirect_app, response_wrapper=BaseResponse) resp = c.get('/from/location', follow_redirects=True) strict_eq(resp.status_code, 200) strict_eq(resp.data, b'current path: /to/location') def test_follow_redirect_with_post_307(): def redirect_with_post_307_app(environ, start_response): req = Request(environ) if req.url == 'http://localhost/some/redirect/': assert req.method == 'POST', 'request should be POST' assert not req.form, 'request should not have data' response = Response('current url: %s' % req.url) else: response = redirect('http://localhost/some/redirect/', code=307) return response(environ, start_response) c = Client(redirect_with_post_307_app, response_wrapper=BaseResponse) resp = c.post('/', follow_redirects=True, data='foo=blub+hehe&blah=42') assert resp.status_code == 200 assert resp.data == b'current url: http://localhost/some/redirect/' def test_follow_external_redirect(): env = create_environ('/', base_url='http://localhost') c = Client(external_redirect_demo_app) pytest.raises(RuntimeError, lambda: c.get(environ_overrides=env, follow_redirects=True)) def test_follow_external_redirect_on_same_subdomain(): env = create_environ('/', base_url='http://example.com') c = Client(external_subdomain_redirect_demo_app, allow_subdomain_redirects=True) c.get(environ_overrides=env, follow_redirects=True) # check that this does not work for real external domains env = create_environ('/', base_url='http://localhost') pytest.raises(RuntimeError, lambda: c.get(environ_overrides=env, follow_redirects=True)) # check that subdomain redirects fail if no `allow_subdomain_redirects` is applied c = Client(external_subdomain_redirect_demo_app) pytest.raises(RuntimeError, lambda: c.get(environ_overrides=env, follow_redirects=True)) def test_follow_redirect_loop(): c = Client(redirect_loop_app, response_wrapper=BaseResponse) with pytest.raises(ClientRedirectError): c.get('/', follow_redirects=True) def test_follow_redirect_with_post(): c = Client(redirect_with_post_app, response_wrapper=BaseResponse) resp = c.post('/', follow_redirects=True, data='foo=blub+hehe&blah=42') strict_eq(resp.status_code, 200) strict_eq(resp.data, b'current url: http://localhost/some/redirect/') def test_path_info_script_name_unquoting(): def test_app(environ, start_response): start_response('200 OK', [('Content-Type', 'text/plain')]) return [environ['PATH_INFO'] + '\n' + environ['SCRIPT_NAME']] c = Client(test_app, response_wrapper=BaseResponse) resp = c.get('/foo%40bar') strict_eq(resp.data, b'/foo@bar\n') c = Client(test_app, response_wrapper=BaseResponse) resp = c.get('/foo%40bar', 'http://localhost/bar%40baz') strict_eq(resp.data, b'/foo@bar\n/bar@baz') def test_multi_value_submit(): c = Client(multi_value_post_app, response_wrapper=BaseResponse) data = { 'field': ['val1', 'val2'] } resp = c.post('/', data=data) strict_eq(resp.status_code, 200) c = Client(multi_value_post_app, response_wrapper=BaseResponse) data = MultiDict({ 'field': ['val1', 'val2'] }) resp = c.post('/', data=data) strict_eq(resp.status_code, 200) def test_iri_support(): b = EnvironBuilder(u'/föö-bar', base_url=u'http://☃.net/') strict_eq(b.path, '/f%C3%B6%C3%B6-bar') strict_eq(b.base_url, 'http://xn--n3h.net/') 
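# --- Illustrative sketch, not part of the original suite ---
# The EnvironBuilder tests above cover paths, content types and IRIs; this
# small example only assumes the documented ``query_string`` argument and the
# WSGI keys already asserted elsewhere in this module. The test name and the
# concrete values are hypothetical.
def test_environ_builder_query_string_sketch():
    b = EnvironBuilder(path='/foo', query_string='a=1&b=2')
    env = b.get_environ()
    # the query string is copied into the WSGI environ unchanged
    assert env['PATH_INFO'] == '/foo'
    assert env['QUERY_STRING'] == 'a=1&b=2'
    # and is also exposed pre-parsed on the builder
    assert b.args['a'] == '1'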
@pytest.mark.parametrize('buffered', (True, False)) @pytest.mark.parametrize('iterable', (True, False)) def test_run_wsgi_apps(buffered, iterable): leaked_data = [] def simple_app(environ, start_response): start_response('200 OK', [('Content-Type', 'text/html')]) return ['Hello World!'] def yielding_app(environ, start_response): start_response('200 OK', [('Content-Type', 'text/html')]) yield 'Hello ' yield 'World!' def late_start_response(environ, start_response): yield 'Hello ' yield 'World' start_response('200 OK', [('Content-Type', 'text/html')]) yield '!' def depends_on_close(environ, start_response): leaked_data.append('harhar') start_response('200 OK', [('Content-Type', 'text/html')]) class Rv(object): def __iter__(self): yield 'Hello ' yield 'World' yield '!' def close(self): assert leaked_data.pop() == 'harhar' return Rv() for app in (simple_app, yielding_app, late_start_response, depends_on_close): if iterable: app = iterable_middleware(app) app_iter, status, headers = run_wsgi_app(app, {}, buffered=buffered) strict_eq(status, '200 OK') strict_eq(list(headers), [('Content-Type', 'text/html')]) strict_eq(''.join(app_iter), 'Hello World!') if hasattr(app_iter, 'close'): app_iter.close() assert not leaked_data def test_run_wsgi_app_closing_iterator(): got_close = [] @implements_iterator class CloseIter(object): def __init__(self): self.iterated = False def __iter__(self): return self def close(self): got_close.append(None) def __next__(self): if self.iterated: raise StopIteration() self.iterated = True return 'bar' def bar(environ, start_response): start_response('200 OK', [('Content-Type', 'text/plain')]) return CloseIter() app_iter, status, headers = run_wsgi_app(bar, {}) assert status == '200 OK' assert list(headers) == [('Content-Type', 'text/plain')] assert next(app_iter) == 'bar' pytest.raises(StopIteration, partial(next, app_iter)) app_iter.close() assert run_wsgi_app(bar, {}, True)[0] == ['bar'] assert len(got_close) == 2 def iterable_middleware(app): '''Guarantee that the app returns an iterable''' def inner(environ, start_response): rv = app(environ, start_response) class Iterable(object): def __iter__(self): return iter(rv) if hasattr(rv, 'close'): def close(self): rv.close() return Iterable() return inner def test_multiple_cookies(): @Request.application def test_app(request): response = Response(repr(sorted(request.cookies.items()))) response.set_cookie(u'test1', b'foo') response.set_cookie(u'test2', b'bar') return response client = Client(test_app, Response) resp = client.get('/') strict_eq(resp.data, b'[]') resp = client.get('/') strict_eq(resp.data, to_bytes(repr([('test1', u'foo'), ('test2', u'bar')]), 'ascii')) def test_correct_open_invocation_on_redirect(): class MyClient(Client): counter = 0 def open(self, *args, **kwargs): self.counter += 1 env = kwargs.setdefault('environ_overrides', {}) env['werkzeug._foo'] = self.counter return Client.open(self, *args, **kwargs) @Request.application def test_app(request): return Response(str(request.environ['werkzeug._foo'])) c = MyClient(test_app, response_wrapper=Response) strict_eq(c.get('/').data, b'1') strict_eq(c.get('/').data, b'2') strict_eq(c.get('/').data, b'3') def test_correct_encoding(): req = Request.from_values(u'/\N{SNOWMAN}', u'http://example.com/foo') strict_eq(req.script_root, u'/foo') strict_eq(req.path, u'/\N{SNOWMAN}') def test_full_url_requests_with_args(): base = 'http://example.com/' @Request.application def test_app(request): return Response(request.args['x']) client = Client(test_app, Response) resp 
= client.get('/?x=42', base) strict_eq(resp.data, b'42') resp = client.get('http://www.example.com/?x=23', base) strict_eq(resp.data, b'23') def test_delete_requests_with_form(): @Request.application def test_app(request): return Response(request.form.get('x', None)) client = Client(test_app, Response) resp = client.delete('/', data={'x': 42}) strict_eq(resp.data, b'42') def test_post_with_file_descriptor(tmpdir): c = Client(Response(), response_wrapper=Response) f = tmpdir.join('some-file.txt') f.write('foo') with open(f.strpath, mode='rt') as data: resp = c.post('/', data=data) strict_eq(resp.status_code, 200) with open(f.strpath, mode='rb') as data: resp = c.post('/', data=data) strict_eq(resp.status_code, 200) def test_content_type(): @Request.application def test_app(request): return Response(request.content_type) client = Client(test_app, Response) resp = client.get('/', data=b'testing', mimetype='text/css') strict_eq(resp.data, b'text/css; charset=utf-8') resp = client.get('/', data=b'testing', mimetype='application/octet-stream') strict_eq(resp.data, b'application/octet-stream') werkzeug-0.14.1/tests/test_urls.py000066400000000000000000000342171322225165500172120ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.urls ~~~~~~~~~~ URL helper tests. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. """ import pytest from tests import strict_eq from werkzeug.datastructures import OrderedMultiDict from werkzeug import urls from werkzeug._compat import text_type, NativeStringIO, BytesIO def test_parsing(): url = urls.url_parse('http://anon:hunter2@[2001:db8:0:1]:80/a/b/c') assert url.netloc == 'anon:hunter2@[2001:db8:0:1]:80' assert url.username == 'anon' assert url.password == 'hunter2' assert url.port == 80 assert url.ascii_host == '2001:db8:0:1' assert url.get_file_location() == (None, None) # no file scheme @pytest.mark.parametrize('implicit_format', (True, False)) @pytest.mark.parametrize('localhost', ('127.0.0.1', '::1', 'localhost')) def test_fileurl_parsing_windows(implicit_format, localhost, monkeypatch): if implicit_format: pathformat = None monkeypatch.setattr('os.name', 'nt') else: pathformat = 'windows' monkeypatch.delattr('os.name') # just to make sure it won't get used url = urls.url_parse('file:///C:/Documents and Settings/Foobar/stuff.txt') assert url.netloc == '' assert url.scheme == 'file' assert url.get_file_location(pathformat) == \ (None, r'C:\Documents and Settings\Foobar\stuff.txt') url = urls.url_parse('file://///server.tld/file.txt') assert url.get_file_location(pathformat) == ('server.tld', r'file.txt') url = urls.url_parse('file://///server.tld') assert url.get_file_location(pathformat) == ('server.tld', '') url = urls.url_parse('file://///%s' % localhost) assert url.get_file_location(pathformat) == (None, '') url = urls.url_parse('file://///%s/file.txt' % localhost) assert url.get_file_location(pathformat) == (None, r'file.txt') def test_replace(): url = urls.url_parse('http://de.wikipedia.org/wiki/Troll') strict_eq(url.replace(query='foo=bar'), urls.url_parse('http://de.wikipedia.org/wiki/Troll?foo=bar')) strict_eq(url.replace(scheme='https'), urls.url_parse('https://de.wikipedia.org/wiki/Troll')) def test_quoting(): strict_eq(urls.url_quote(u'\xf6\xe4\xfc'), '%C3%B6%C3%A4%C3%BC') strict_eq(urls.url_unquote(urls.url_quote(u'#%="\xf6')), u'#%="\xf6') strict_eq(urls.url_quote_plus('foo bar'), 'foo+bar') strict_eq(urls.url_unquote_plus('foo+bar'), u'foo bar') strict_eq(urls.url_quote_plus('foo+bar'), 'foo%2Bbar') 
strict_eq(urls.url_unquote_plus('foo%2Bbar'), u'foo+bar') strict_eq(urls.url_encode({b'a': None, b'b': b'foo bar'}), 'b=foo+bar') strict_eq(urls.url_encode({u'a': None, u'b': u'foo bar'}), 'b=foo+bar') strict_eq(urls.url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffsklärung)'), 'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)') strict_eq(urls.url_quote_plus(42), '42') strict_eq(urls.url_quote(b'\xff'), '%FF') def test_bytes_unquoting(): strict_eq(urls.url_unquote(urls.url_quote( u'#%="\xf6', charset='latin1'), charset=None), b'#%="\xf6') def test_url_decoding(): x = urls.url_decode(b'foo=42&bar=23&uni=H%C3%A4nsel') strict_eq(x['foo'], u'42') strict_eq(x['bar'], u'23') strict_eq(x['uni'], u'Hänsel') x = urls.url_decode(b'foo=42;bar=23;uni=H%C3%A4nsel', separator=b';') strict_eq(x['foo'], u'42') strict_eq(x['bar'], u'23') strict_eq(x['uni'], u'Hänsel') x = urls.url_decode(b'%C3%9Ch=H%C3%A4nsel', decode_keys=True) strict_eq(x[u'Üh'], u'Hänsel') def test_url_bytes_decoding(): x = urls.url_decode(b'foo=42&bar=23&uni=H%C3%A4nsel', charset=None) strict_eq(x[b'foo'], b'42') strict_eq(x[b'bar'], b'23') strict_eq(x[b'uni'], u'Hänsel'.encode('utf-8')) def test_streamed_url_decoding(): item1 = u'a' * 100000 item2 = u'b' * 400 string = ('a=%s&b=%s&c=%s' % (item1, item2, item2)).encode('ascii') gen = urls.url_decode_stream(BytesIO(string), limit=len(string), return_iterator=True) strict_eq(next(gen), ('a', item1)) strict_eq(next(gen), ('b', item2)) strict_eq(next(gen), ('c', item2)) pytest.raises(StopIteration, lambda: next(gen)) def test_stream_decoding_string_fails(): pytest.raises(TypeError, urls.url_decode_stream, 'testing') def test_url_encoding(): strict_eq(urls.url_encode({'foo': 'bar 45'}), 'foo=bar+45') d = {'foo': 1, 'bar': 23, 'blah': u'Hänsel'} strict_eq(urls.url_encode(d, sort=True), 'bar=23&blah=H%C3%A4nsel&foo=1') strict_eq(urls.url_encode(d, sort=True, separator=u';'), 'bar=23;blah=H%C3%A4nsel;foo=1') def test_sorted_url_encode(): strict_eq(urls.url_encode({u"a": 42, u"b": 23, 1: 1, 2: 2}, sort=True, key=lambda i: text_type(i[0])), '1=1&2=2&a=42&b=23') strict_eq(urls.url_encode({u'A': 1, u'a': 2, u'B': 3, 'b': 4}, sort=True, key=lambda x: x[0].lower() + x[0]), 'A=1&a=2&B=3&b=4') def test_streamed_url_encoding(): out = NativeStringIO() urls.url_encode_stream({'foo': 'bar 45'}, out) strict_eq(out.getvalue(), 'foo=bar+45') d = {'foo': 1, 'bar': 23, 'blah': u'Hänsel'} out = NativeStringIO() urls.url_encode_stream(d, out, sort=True) strict_eq(out.getvalue(), 'bar=23&blah=H%C3%A4nsel&foo=1') out = NativeStringIO() urls.url_encode_stream(d, out, sort=True, separator=u';') strict_eq(out.getvalue(), 'bar=23;blah=H%C3%A4nsel;foo=1') gen = urls.url_encode_stream(d, sort=True) strict_eq(next(gen), 'bar=23') strict_eq(next(gen), 'blah=H%C3%A4nsel') strict_eq(next(gen), 'foo=1') pytest.raises(StopIteration, lambda: next(gen)) def test_url_fixing(): x = urls.url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)') assert x == 'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)' x = urls.url_fix("http://just.a.test/$-_.+!*'(),") assert x == "http://just.a.test/$-_.+!*'()," x = urls.url_fix('http://höhöhö.at/höhöhö/hähähä') assert x == r'http://xn--hhh-snabb.at/h%C3%B6h%C3%B6h%C3%B6/h%C3%A4h%C3%A4h%C3%A4' def test_url_fixing_filepaths(): x = urls.url_fix(r'file://C:\Users\Administrator\My Documents\ÑÈáÇíí') assert x == (r'file:///C%3A/Users/Administrator/My%20Documents/' r'%C3%91%C3%88%C3%A1%C3%87%C3%AD%C3%AD') a = urls.url_fix(r'file:/C:/') b = urls.url_fix(r'file://C:/') c = 
urls.url_fix(r'file:///C:/') assert a == b == c == r'file:///C%3A/' x = urls.url_fix(r'file://host/sub/path') assert x == r'file://host/sub/path' x = urls.url_fix(r'file:///') assert x == r'file:///' def test_url_fixing_qs(): x = urls.url_fix(b'http://example.com/?foo=%2f%2f') assert x == 'http://example.com/?foo=%2f%2f' x = urls.url_fix('http://acronyms.thefreedictionary.com/' 'Algebraic+Methods+of+Solving+the+Schr%C3%B6dinger+Equation') assert x == ('http://acronyms.thefreedictionary.com/' 'Algebraic+Methods+of+Solving+the+Schr%C3%B6dinger+Equation') def test_iri_support(): strict_eq(urls.uri_to_iri('http://xn--n3h.net/'), u'http://\u2603.net/') strict_eq( urls.uri_to_iri(b'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th'), u'http://\xfcser:p\xe4ssword@\u2603.net/p\xe5th') strict_eq(urls.iri_to_uri(u'http://☃.net/'), 'http://xn--n3h.net/') strict_eq( urls.iri_to_uri(u'http://üser:pässword@☃.net/påth'), 'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th') strict_eq(urls.uri_to_iri('http://test.com/%3Fmeh?foo=%26%2F'), u'http://test.com/%3Fmeh?foo=%26%2F') # this should work as well, might break on 2.4 because of a broken # idna codec strict_eq(urls.uri_to_iri(b'/foo'), u'/foo') strict_eq(urls.iri_to_uri(u'/foo'), '/foo') strict_eq(urls.iri_to_uri(u'http://föö.com:8080/bam/baz'), 'http://xn--f-1gaa.com:8080/bam/baz') def test_iri_safe_conversion(): strict_eq(urls.iri_to_uri(u'magnet:?foo=bar'), 'magnet:?foo=bar') strict_eq(urls.iri_to_uri(u'itms-service://?foo=bar'), 'itms-service:?foo=bar') strict_eq(urls.iri_to_uri(u'itms-service://?foo=bar', safe_conversion=True), 'itms-service://?foo=bar') def test_iri_safe_quoting(): uri = 'http://xn--f-1gaa.com/%2F%25?q=%C3%B6&x=%3D%25#%25' iri = u'http://föö.com/%2F%25?q=ö&x=%3D%25#%25' strict_eq(urls.uri_to_iri(uri), iri) strict_eq(urls.iri_to_uri(urls.uri_to_iri(uri)), uri) def test_ordered_multidict_encoding(): d = OrderedMultiDict() d.add('foo', 1) d.add('foo', 2) d.add('foo', 3) d.add('bar', 0) d.add('foo', 4) assert urls.url_encode(d) == 'foo=1&foo=2&foo=3&bar=0&foo=4' def test_multidict_encoding(): d = OrderedMultiDict() d.add('2013-10-10T23:26:05.657975+0000', '2013-10-10T23:26:05.657975+0000') assert urls.url_encode( d) == '2013-10-10T23%3A26%3A05.657975%2B0000=2013-10-10T23%3A26%3A05.657975%2B0000' def test_href(): x = urls.Href('http://www.example.com/') strict_eq(x(u'foo'), 'http://www.example.com/foo') strict_eq(x.foo(u'bar'), 'http://www.example.com/foo/bar') strict_eq(x.foo(u'bar', x=42), 'http://www.example.com/foo/bar?x=42') strict_eq(x.foo(u'bar', class_=42), 'http://www.example.com/foo/bar?class=42') strict_eq(x.foo(u'bar', {u'class': 42}), 'http://www.example.com/foo/bar?class=42') pytest.raises(AttributeError, lambda: x.__blah__) x = urls.Href('blah') strict_eq(x.foo(u'bar'), 'blah/foo/bar') pytest.raises(TypeError, x.foo, {u"foo": 23}, x=42) x = urls.Href('') strict_eq(x('foo'), 'foo') def test_href_url_join(): x = urls.Href(u'test') assert x(u'foo:bar') == u'test/foo:bar' assert x(u'http://example.com/') == u'test/http://example.com/' assert x.a() == u'test/a' def test_href_past_root(): base_href = urls.Href('http://www.blagga.com/1/2/3') strict_eq(base_href('../foo'), 'http://www.blagga.com/1/2/foo') strict_eq(base_href('../../foo'), 'http://www.blagga.com/1/foo') strict_eq(base_href('../../../foo'), 'http://www.blagga.com/foo') strict_eq(base_href('../../../../foo'), 'http://www.blagga.com/foo') strict_eq(base_href('../../../../../foo'), 'http://www.blagga.com/foo') strict_eq(base_href('../../../../../../foo'), 
'http://www.blagga.com/foo') def test_url_unquote_plus_unicode(): # was broken in 0.6 strict_eq(urls.url_unquote_plus(u'\x6d'), u'\x6d') assert type(urls.url_unquote_plus(u'\x6d')) is text_type def test_quoting_of_local_urls(): rv = urls.iri_to_uri(u'/foo\x8f') strict_eq(rv, '/foo%C2%8F') assert type(rv) is str def test_url_attributes(): rv = urls.url_parse('http://foo%3a:bar%3a@[::1]:80/123?x=y#frag') strict_eq(rv.scheme, 'http') strict_eq(rv.auth, 'foo%3a:bar%3a') strict_eq(rv.username, u'foo:') strict_eq(rv.password, u'bar:') strict_eq(rv.raw_username, 'foo%3a') strict_eq(rv.raw_password, 'bar%3a') strict_eq(rv.host, '::1') assert rv.port == 80 strict_eq(rv.path, '/123') strict_eq(rv.query, 'x=y') strict_eq(rv.fragment, 'frag') rv = urls.url_parse(u'http://\N{SNOWMAN}.com/') strict_eq(rv.host, u'\N{SNOWMAN}.com') strict_eq(rv.ascii_host, 'xn--n3h.com') def test_url_attributes_bytes(): rv = urls.url_parse(b'http://foo%3a:bar%3a@[::1]:80/123?x=y#frag') strict_eq(rv.scheme, b'http') strict_eq(rv.auth, b'foo%3a:bar%3a') strict_eq(rv.username, u'foo:') strict_eq(rv.password, u'bar:') strict_eq(rv.raw_username, b'foo%3a') strict_eq(rv.raw_password, b'bar%3a') strict_eq(rv.host, b'::1') assert rv.port == 80 strict_eq(rv.path, b'/123') strict_eq(rv.query, b'x=y') strict_eq(rv.fragment, b'frag') def test_url_joining(): strict_eq(urls.url_join('/foo', '/bar'), '/bar') strict_eq(urls.url_join('http://example.com/foo', '/bar'), 'http://example.com/bar') strict_eq(urls.url_join('file:///tmp/', 'test.html'), 'file:///tmp/test.html') strict_eq(urls.url_join('file:///tmp/x', 'test.html'), 'file:///tmp/test.html') strict_eq(urls.url_join('file:///tmp/x', '../../../x.html'), 'file:///x.html') def test_partial_unencoded_decode(): ref = u'foo=정상처리'.encode('euc-kr') x = urls.url_decode(ref, charset='euc-kr') strict_eq(x['foo'], u'정상처리') def test_iri_to_uri_idempotence_ascii_only(): uri = u'http://www.idempoten.ce' uri = urls.iri_to_uri(uri) assert urls.iri_to_uri(uri) == uri def test_iri_to_uri_idempotence_non_ascii(): uri = u'http://\N{SNOWMAN}/\N{SNOWMAN}' uri = urls.iri_to_uri(uri) assert urls.iri_to_uri(uri) == uri def test_uri_to_iri_idempotence_ascii_only(): uri = 'http://www.idempoten.ce' uri = urls.uri_to_iri(uri) assert urls.uri_to_iri(uri) == uri def test_uri_to_iri_idempotence_non_ascii(): uri = 'http://xn--n3h/%E2%98%83' uri = urls.uri_to_iri(uri) assert urls.uri_to_iri(uri) == uri def test_iri_to_uri_to_iri(): iri = u'http://föö.com/' uri = urls.iri_to_uri(iri) assert urls.uri_to_iri(uri) == iri def test_uri_to_iri_to_uri(): uri = 'http://xn--f-rgao.com/%C3%9E' iri = urls.uri_to_iri(uri) assert urls.iri_to_uri(iri) == uri def test_uri_iri_normalization(): uri = 'http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93' iri = u'http://föñ.com/\N{BALLOT BOX}/fred?utf8=\u2713' tests = [ u'http://föñ.com/\N{BALLOT BOX}/fred?utf8=\u2713', u'http://xn--f-rgao.com/\u2610/fred?utf8=\N{CHECK MARK}', b'http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93', u'http://xn--f-rgao.com/%E2%98%90/fred?utf8=%E2%9C%93', u'http://föñ.com/\u2610/fred?utf8=%E2%9C%93', b'http://xn--f-rgao.com/\xe2\x98\x90/fred?utf8=\xe2\x9c\x93', ] for test in tests: assert urls.uri_to_iri(test) == iri assert urls.iri_to_uri(test) == uri assert urls.uri_to_iri(urls.iri_to_uri(test)) == iri assert urls.iri_to_uri(urls.uri_to_iri(test)) == uri assert urls.uri_to_iri(urls.uri_to_iri(test)) == iri assert urls.iri_to_uri(urls.iri_to_uri(test)) == uri 
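# --- Illustrative sketch, not part of the original suite ---
# The tests above already assert iri_to_uri/uri_to_iri for the snowman host
# and the percent-encoded path separately; this sketch only restates that
# round trip on one combined value. The test name is hypothetical.
def test_uri_iri_roundtrip_sketch():
    iri = u'http://\u2603.net/p\xe5th'
    uri = urls.iri_to_uri(iri)
    # ASCII form: punycode host plus percent-encoded path
    assert uri == 'http://xn--n3h.net/p%C3%A5th'
    # converting back restores the original IRI
    assert urls.uri_to_iri(uri) == iri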
werkzeug-0.14.1/tests/test_utils.py000066400000000000000000000223731322225165500173650ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.utils ~~~~~~~~~~~ General utilities. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. """ from __future__ import with_statement import pytest from datetime import datetime import inspect from werkzeug import utils from werkzeug.datastructures import Headers from werkzeug.http import parse_date, http_date from werkzeug.wrappers import BaseResponse from werkzeug.test import Client from werkzeug._compat import text_type def test_redirect(): resp = utils.redirect(u'/füübär') assert b'/f%C3%BC%C3%BCb%C3%A4r' in resp.get_data() assert resp.headers['Location'] == '/f%C3%BC%C3%BCb%C3%A4r' assert resp.status_code == 302 resp = utils.redirect(u'http://☃.net/', 307) assert b'http://xn--n3h.net/' in resp.get_data() assert resp.headers['Location'] == 'http://xn--n3h.net/' assert resp.status_code == 307 resp = utils.redirect('http://example.com/', 305) assert resp.headers['Location'] == 'http://example.com/' assert resp.status_code == 305 def test_redirect_no_unicode_header_keys(): # Make sure all headers are native keys. This was a bug at one point # due to an incorrect conversion. resp = utils.redirect('http://example.com/', 305) for key, value in resp.headers.items(): assert type(key) == str assert type(value) == text_type assert resp.headers['Location'] == 'http://example.com/' assert resp.status_code == 305 def test_redirect_xss(): location = 'http://example.com/?xss=">' resp = utils.redirect(location) assert b'' not in resp.get_data() location = 'http://example.com/?xss="onmouseover="alert(1)' resp = utils.redirect(location) assert b'href="http://example.com/?xss="onmouseover="alert(1)"' not in resp.get_data() def test_redirect_with_custom_response_class(): class MyResponse(BaseResponse): pass location = "http://example.com/redirect" resp = utils.redirect(location, Response=MyResponse) assert isinstance(resp, MyResponse) assert resp.headers['Location'] == location def test_cached_property(): foo = [] class A(object): def prop(self): foo.append(42) return 42 prop = utils.cached_property(prop) a = A() p = a.prop q = a.prop assert p == q == 42 assert foo == [42] foo = [] class A(object): def _prop(self): foo.append(42) return 42 prop = utils.cached_property(_prop, name='prop') del _prop a = A() p = a.prop q = a.prop assert p == q == 42 assert foo == [42] def test_can_set_cached_property(): class A(object): @utils.cached_property def _prop(self): return 'cached_property return value' a = A() a._prop = 'value' assert a._prop == 'value' def test_inspect_treats_cached_property_as_property(): class A(object): @utils.cached_property def _prop(self): return 'cached_property return value' attrs = inspect.classify_class_attrs(A) for attr in attrs: if attr.name == '_prop': break assert attr.kind == 'property' def test_environ_property(): class A(object): environ = {'string': 'abc', 'number': '42'} string = utils.environ_property('string') missing = utils.environ_property('missing', 'spam') read_only = utils.environ_property('number') number = utils.environ_property('number', load_func=int) broken_number = utils.environ_property('broken_number', load_func=int) date = utils.environ_property('date', None, parse_date, http_date, read_only=False) foo = utils.environ_property('foo') a = A() assert a.string == 'abc' assert a.missing == 'spam' def test_assign(): a.read_only = 'something' pytest.raises(AttributeError, test_assign) assert 
def test_environ_property():
    class A(object):
        environ = {'string': 'abc', 'number': '42'}

        string = utils.environ_property('string')
        missing = utils.environ_property('missing', 'spam')
        read_only = utils.environ_property('number')
        number = utils.environ_property('number', load_func=int)
        broken_number = utils.environ_property('broken_number', load_func=int)
        date = utils.environ_property('date', None, parse_date, http_date,
                                      read_only=False)
        foo = utils.environ_property('foo')

    a = A()
    assert a.string == 'abc'
    assert a.missing == 'spam'

    def test_assign():
        a.read_only = 'something'
    pytest.raises(AttributeError, test_assign)

    assert a.number == 42
    assert a.broken_number is None
    assert a.date is None
    a.date = datetime(2008, 1, 22, 10, 0, 0, 0)
    assert a.environ['date'] == 'Tue, 22 Jan 2008 10:00:00 GMT'


def test_escape():
    class Foo(str):
        def __html__(self):
            return text_type(self)

    assert utils.escape(None) == ''
    assert utils.escape(42) == '42'
    assert utils.escape('<>') == '&lt;&gt;'
    assert utils.escape('"foo"') == '&quot;foo&quot;'
    assert utils.escape(Foo('<foo>')) == '<foo>'


def test_unescape():
    assert utils.unescape('&lt;&auml;&gt;') == u'<ä>'


def test_import_string():
    import cgi
    from werkzeug.debug import DebuggedApplication
    assert utils.import_string('cgi.escape') is cgi.escape
    assert utils.import_string(u'cgi.escape') is cgi.escape
    assert utils.import_string('cgi:escape') is cgi.escape
    assert utils.import_string('XXXXXXXXXXXX', True) is None
    assert utils.import_string('cgi.XXXXXXXXXXXX', True) is None
    assert utils.import_string(u'werkzeug.debug.DebuggedApplication') is DebuggedApplication
    pytest.raises(ImportError, utils.import_string, 'XXXXXXXXXXXXXXXX')
    pytest.raises(ImportError, utils.import_string, 'cgi.XXXXXXXXXX')


def test_import_string_attribute_error(tmpdir, monkeypatch):
    monkeypatch.syspath_prepend(str(tmpdir))
    tmpdir.join('foo_test.py').write('from bar_test import value')
    tmpdir.join('bar_test.py').write('raise AttributeError("screw you!")')

    with pytest.raises(AttributeError) as foo_exc:
        utils.import_string('foo_test')
    assert 'screw you!' in str(foo_exc)

    with pytest.raises(AttributeError) as bar_exc:
        utils.import_string('bar_test')
    assert 'screw you!' in str(bar_exc)


def test_find_modules():
    assert list(utils.find_modules('werkzeug.debug')) == [
        'werkzeug.debug.console', 'werkzeug.debug.repr',
        'werkzeug.debug.tbtools'
    ]
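
# Illustrative sketch (an assumed example, not part of the original test
# module): the escape() and import_string() helpers exercised above, used
# the way application code usually calls them.  The function name below is
# made up for the illustration.
def _example_escape_and_import_string():
    from werkzeug.utils import escape, import_string

    # escape() HTML-escapes untrusted text, including double quotes.
    assert escape(u'<b>"hi"</b>') == u'&lt;b&gt;&quot;hi&quot;&lt;/b&gt;'

    # import_string() resolves dotted or colon-separated import paths at
    # runtime, which is handy for configurable factories.
    assert import_string('werkzeug.utils:escape') is escape
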
def test_html_builder():
    html = utils.html
    xhtml = utils.xhtml
    assert html.p('Hello World') == '<p>Hello World</p>'
    assert html.a('Test', href='#') == '<a href="#">Test</a>'
    assert html.br() == '<br>'
    assert xhtml.br() == '<br />'
    assert html.img(src='foo') == '<img src="foo">'
    assert xhtml.img(src='foo') == '<img src="foo" />'
    assert html.html(html.head(
        html.title('foo'),
        html.script(type='text/javascript')
    )) == (
        '<html><head><title>foo</title>'
        '<script type="text/javascript"></script></head></html>'
    )
    assert html('<foo>') == '&lt;foo&gt;'
    assert html.input(disabled=True) == '<input disabled>'
    assert xhtml.input(disabled=True) == '<input disabled="disabled" />'
    assert html.input(disabled='') == '<input>'
    assert xhtml.input(disabled='') == '<input />'
    assert html.input(disabled=None) == '<input>'
    assert xhtml.input(disabled=None) == '<input />'
    assert html.script('alert("Hello World");') == \
        '<script>alert("Hello World");</script>'
    assert xhtml.script('alert("Hello World");') == \
        '<script>/*<![CDATA[*/alert("Hello World");/*]]>*/</script>'
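
# Illustrative sketch (an assumed example, not part of the original test
# module): the html builder turns attribute access into tags, positional
# arguments into children and keyword arguments into attributes.  The
# snippet and function name below are hypothetical.
def _example_html_builder_usage():
    snippet = utils.html.div(utils.html.p('Hello'), id='greeting')
    assert snippet == '<div id="greeting"><p>Hello</p></div>'
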
def test_validate_arguments():
    take_none = lambda: None
    take_two = lambda a, b: None
    take_two_one_default = lambda a, b=0: None

    assert utils.validate_arguments(take_two, (1, 2,), {}) == ((1, 2), {})
    assert utils.validate_arguments(take_two, (1,), {'b': 2}) == ((1, 2), {})
    assert utils.validate_arguments(take_two_one_default, (1,), {}) == ((1, 0), {})
    assert utils.validate_arguments(take_two_one_default, (1, 2), {}) == ((1, 2), {})

    pytest.raises(utils.ArgumentValidationError,
                  utils.validate_arguments, take_two, (), {})

    assert utils.validate_arguments(take_none, (1, 2,), {'c': 3}) == ((), {})
    pytest.raises(utils.ArgumentValidationError,
                  utils.validate_arguments, take_none, (1,), {}, drop_extra=False)
    pytest.raises(utils.ArgumentValidationError,
                  utils.validate_arguments, take_none, (), {'a': 1}, drop_extra=False)


def test_header_set_duplication_bug():
    headers = Headers([
        ('Content-Type', 'text/html'),
        ('Foo', 'bar'),
        ('Blub', 'blah')
    ])
    headers['blub'] = 'hehe'
    headers['blafasel'] = 'humm'
    assert headers == Headers([
        ('Content-Type', 'text/html'),
        ('Foo', 'bar'),
        ('blub', 'hehe'),
        ('blafasel', 'humm')
    ])


def test_append_slash_redirect():
    def app(env, sr):
        return utils.append_slash_redirect(env)(env, sr)
    client = Client(app, BaseResponse)
    response = client.get('foo', base_url='http://example.org/app')
    assert response.status_code == 301
    assert response.headers['Location'] == 'http://example.org/app/foo/'


def test_cached_property_doc():
    @utils.cached_property
    def foo():
        """testing"""
        return 42
    assert foo.__doc__ == 'testing'
    assert foo.__name__ == 'foo'
    assert foo.__module__ == __name__


def test_secure_filename():
    assert utils.secure_filename('My cool movie.mov') == 'My_cool_movie.mov'
    assert utils.secure_filename('../../../etc/passwd') == 'etc_passwd'
    assert utils.secure_filename(u'i contain cool \xfcml\xe4uts.txt') == \
        'i_contain_cool_umlauts.txt'
    assert utils.secure_filename('__filename__') == 'filename'
    assert utils.secure_filename('foo$&^*)bar') == 'foobar'
werkzeug-0.14.1/tests/test_wrappers.py000066400000000000000000001316041322225165500200660ustar00rootroot00000000000000# -*- coding: utf-8 -*-
"""
    tests.wrappers
    ~~~~~~~~~~~~~~

    Tests for the response and request objects.

    :copyright: (c) 2014 by Armin Ronacher.
    :license: BSD, see LICENSE for more details.
""" import contextlib import os import pytest import pickle from io import BytesIO from datetime import datetime, timedelta from werkzeug._compat import iteritems from tests import strict_eq from werkzeug import wrappers from werkzeug.exceptions import SecurityError, RequestedRangeNotSatisfiable, \ BadRequest from werkzeug.wsgi import LimitedStream, wrap_file from werkzeug.datastructures import MultiDict, ImmutableOrderedMultiDict, \ ImmutableList, ImmutableTypeConversionDict, CharsetAccept, \ MIMEAccept, LanguageAccept, Accept, CombinedMultiDict from werkzeug.test import Client, create_environ, run_wsgi_app from werkzeug._compat import implements_iterator, text_type class RequestTestResponse(wrappers.BaseResponse): """Subclass of the normal response class we use to test response and base classes. Has some methods to test if things in the response match. """ def __init__(self, response, status, headers): wrappers.BaseResponse.__init__(self, response, status, headers) self.body_data = pickle.loads(self.get_data()) def __getitem__(self, key): return self.body_data[key] def request_demo_app(environ, start_response): request = wrappers.BaseRequest(environ) assert 'werkzeug.request' in environ start_response('200 OK', [('Content-Type', 'text/plain')]) return [pickle.dumps({ 'args': request.args, 'args_as_list': list(request.args.lists()), 'form': request.form, 'form_as_list': list(request.form.lists()), 'environ': prepare_environ_pickle(request.environ), 'data': request.get_data() })] def prepare_environ_pickle(environ): result = {} for key, value in iteritems(environ): try: pickle.dumps((key, value)) except Exception: continue result[key] = value return result def assert_environ(environ, method): strict_eq(environ['REQUEST_METHOD'], method) strict_eq(environ['PATH_INFO'], '/') strict_eq(environ['SCRIPT_NAME'], '') strict_eq(environ['SERVER_NAME'], 'localhost') strict_eq(environ['wsgi.version'], (1, 0)) strict_eq(environ['wsgi.url_scheme'], 'http') def test_base_request(): client = Client(request_demo_app, RequestTestResponse) # get requests response = client.get('/?foo=bar&foo=hehe') strict_eq(response['args'], MultiDict([('foo', u'bar'), ('foo', u'hehe')])) strict_eq(response['args_as_list'], [('foo', [u'bar', u'hehe'])]) strict_eq(response['form'], MultiDict()) strict_eq(response['form_as_list'], []) strict_eq(response['data'], b'') assert_environ(response['environ'], 'GET') # post requests with form data response = client.post('/?blub=blah', data='foo=blub+hehe&blah=42', content_type='application/x-www-form-urlencoded') strict_eq(response['args'], MultiDict([('blub', u'blah')])) strict_eq(response['args_as_list'], [('blub', [u'blah'])]) strict_eq(response['form'], MultiDict([('foo', u'blub hehe'), ('blah', u'42')])) strict_eq(response['data'], b'') # currently we do not guarantee that the values are ordered correctly # for post data. 
# strict_eq(response['form_as_list'], [('foo', ['blub hehe']), ('blah', ['42'])]) assert_environ(response['environ'], 'POST') # patch requests with form data response = client.patch('/?blub=blah', data='foo=blub+hehe&blah=42', content_type='application/x-www-form-urlencoded') strict_eq(response['args'], MultiDict([('blub', u'blah')])) strict_eq(response['args_as_list'], [('blub', [u'blah'])]) strict_eq(response['form'], MultiDict([('foo', u'blub hehe'), ('blah', u'42')])) strict_eq(response['data'], b'') assert_environ(response['environ'], 'PATCH') # post requests with json data json = b'{"foo": "bar", "blub": "blah"}' response = client.post('/?a=b', data=json, content_type='application/json') strict_eq(response['data'], json) strict_eq(response['args'], MultiDict([('a', u'b')])) strict_eq(response['form'], MultiDict()) def test_query_string_is_bytes(): req = wrappers.Request.from_values(u'/?foo=%2f') strict_eq(req.query_string, b'foo=%2f') def test_request_repr(): req = wrappers.Request.from_values('/foobar') assert "" == repr(req) # test with non-ascii characters req = wrappers.Request.from_values('/привет') assert "" == repr(req) # test with unicode type for python 2 req = wrappers.Request.from_values(u'/привет') assert "" == repr(req) def test_access_route(): req = wrappers.Request.from_values(headers={ 'X-Forwarded-For': '192.168.1.2, 192.168.1.1' }) req.environ['REMOTE_ADDR'] = '192.168.1.3' assert req.access_route == ['192.168.1.2', '192.168.1.1'] strict_eq(req.remote_addr, '192.168.1.3') req = wrappers.Request.from_values() req.environ['REMOTE_ADDR'] = '192.168.1.3' strict_eq(list(req.access_route), ['192.168.1.3']) def test_url_request_descriptors(): req = wrappers.Request.from_values('/bar?foo=baz', 'http://example.com/test') strict_eq(req.path, u'/bar') strict_eq(req.full_path, u'/bar?foo=baz') strict_eq(req.script_root, u'/test') strict_eq(req.url, u'http://example.com/test/bar?foo=baz') strict_eq(req.base_url, u'http://example.com/test/bar') strict_eq(req.url_root, u'http://example.com/test/') strict_eq(req.host_url, u'http://example.com/') strict_eq(req.host, 'example.com') strict_eq(req.scheme, 'http') req = wrappers.Request.from_values('/bar?foo=baz', 'https://example.com/test') strict_eq(req.scheme, 'https') def test_url_request_descriptors_query_quoting(): next = 'http%3A%2F%2Fwww.example.com%2F%3Fnext%3D%2Fbaz%23my%3Dhash' req = wrappers.Request.from_values('/bar?next=' + next, 'http://example.com/') assert req.path == u'/bar' strict_eq(req.full_path, u'/bar?next=' + next) strict_eq(req.url, u'http://example.com/bar?next=' + next) def test_url_request_descriptors_hosts(): req = wrappers.Request.from_values('/bar?foo=baz', 'http://example.com/test') req.trusted_hosts = ['example.com'] strict_eq(req.path, u'/bar') strict_eq(req.full_path, u'/bar?foo=baz') strict_eq(req.script_root, u'/test') strict_eq(req.url, u'http://example.com/test/bar?foo=baz') strict_eq(req.base_url, u'http://example.com/test/bar') strict_eq(req.url_root, u'http://example.com/test/') strict_eq(req.host_url, u'http://example.com/') strict_eq(req.host, 'example.com') strict_eq(req.scheme, 'http') req = wrappers.Request.from_values('/bar?foo=baz', 'https://example.com/test') strict_eq(req.scheme, 'https') req = wrappers.Request.from_values('/bar?foo=baz', 'http://example.com/test') req.trusted_hosts = ['example.org'] pytest.raises(SecurityError, lambda: req.url) pytest.raises(SecurityError, lambda: req.base_url) pytest.raises(SecurityError, lambda: req.url_root) pytest.raises(SecurityError, lambda: 
req.host_url) pytest.raises(SecurityError, lambda: req.host) def test_authorization_mixin(): request = wrappers.Request.from_values(headers={ 'Authorization': 'Basic QWxhZGRpbjpvcGVuIHNlc2FtZQ==' }) a = request.authorization strict_eq(a.type, 'basic') strict_eq(a.username, 'Aladdin') strict_eq(a.password, 'open sesame') def test_stream_only_mixing(): request = wrappers.PlainRequest.from_values( data=b'foo=blub+hehe', content_type='application/x-www-form-urlencoded' ) assert list(request.files.items()) == [] assert list(request.form.items()) == [] pytest.raises(AttributeError, lambda: request.data) strict_eq(request.stream.read(), b'foo=blub+hehe') def test_request_application(): @wrappers.Request.application def application(request): return wrappers.Response('Hello World!') @wrappers.Request.application def failing_application(request): raise BadRequest() resp = wrappers.Response.from_app(application, create_environ()) assert resp.data == b'Hello World!' assert resp.status_code == 200 resp = wrappers.Response.from_app(failing_application, create_environ()) assert b'Bad Request' in resp.data assert resp.status_code == 400 def test_base_response(): # unicode response = wrappers.BaseResponse(u'öäü') strict_eq(response.get_data(), u'öäü'.encode('utf-8')) # writing response = wrappers.Response('foo') response.stream.write('bar') strict_eq(response.get_data(), b'foobar') # set cookie response = wrappers.BaseResponse() response.set_cookie('foo', value='bar', max_age=60, expires=0, path='/blub', domain='example.org', samesite='Strict') strict_eq(response.headers.to_wsgi_list(), [ ('Content-Type', 'text/plain; charset=utf-8'), ('Set-Cookie', 'foo=bar; Domain=example.org; Expires=Thu, ' '01-Jan-1970 00:00:00 GMT; Max-Age=60; Path=/blub; ' 'SameSite=Strict') ]) # delete cookie response = wrappers.BaseResponse() response.delete_cookie('foo') strict_eq(response.headers.to_wsgi_list(), [ ('Content-Type', 'text/plain; charset=utf-8'), ('Set-Cookie', 'foo=; Expires=Thu, 01-Jan-1970 00:00:00 GMT; Max-Age=0; Path=/') ]) # close call forwarding closed = [] @implements_iterator class Iterable(object): def __next__(self): raise StopIteration() def __iter__(self): return self def close(self): closed.append(True) response = wrappers.BaseResponse(Iterable()) response.call_on_close(lambda: closed.append(True)) app_iter, status, headers = run_wsgi_app(response, create_environ(), buffered=True) strict_eq(status, '200 OK') strict_eq(''.join(app_iter), '') strict_eq(len(closed), 2) # with statement del closed[:] response = wrappers.BaseResponse(Iterable()) with response: pass assert len(closed) == 1 def test_response_status_codes(): response = wrappers.BaseResponse() response.status_code = 404 strict_eq(response.status, '404 NOT FOUND') response.status = '200 OK' strict_eq(response.status_code, 200) response.status = '999 WTF' strict_eq(response.status_code, 999) response.status_code = 588 strict_eq(response.status_code, 588) strict_eq(response.status, '588 UNKNOWN') response.status = 'wtf' strict_eq(response.status_code, 0) strict_eq(response.status, '0 wtf') # invalid status codes with pytest.raises(ValueError) as empty_string_error: wrappers.BaseResponse(None, '') assert 'Empty status argument' in str(empty_string_error) with pytest.raises(TypeError) as invalid_type_error: wrappers.BaseResponse(None, tuple()) assert 'Invalid status argument' in str(invalid_type_error) def test_type_forcing(): def wsgi_application(environ, start_response): start_response('200 OK', [('Content-Type', 'text/html')]) return ['Hello 
World!'] base_response = wrappers.BaseResponse('Hello World!', content_type='text/html') class SpecialResponse(wrappers.Response): def foo(self): return 42 # good enough for this simple application, but don't ever use that in # real world examples! fake_env = {} for orig_resp in wsgi_application, base_response: response = SpecialResponse.force_type(orig_resp, fake_env) assert response.__class__ is SpecialResponse strict_eq(response.foo(), 42) strict_eq(response.get_data(), b'Hello World!') assert response.content_type == 'text/html' # without env, no arbitrary conversion pytest.raises(TypeError, SpecialResponse.force_type, wsgi_application) def test_accept_mixin(): request = wrappers.Request({ 'HTTP_ACCEPT': 'text/xml,application/xml,application/xhtml+xml,' 'text/html;q=0.9,text/plain;q=0.8,image/png,*/*;q=0.5', 'HTTP_ACCEPT_CHARSET': 'ISO-8859-1,utf-8;q=0.7,*;q=0.7', 'HTTP_ACCEPT_ENCODING': 'gzip,deflate', 'HTTP_ACCEPT_LANGUAGE': 'en-us,en;q=0.5' }) assert request.accept_mimetypes == MIMEAccept([ ('text/xml', 1), ('image/png', 1), ('application/xml', 1), ('application/xhtml+xml', 1), ('text/html', 0.9), ('text/plain', 0.8), ('*/*', 0.5) ]) strict_eq(request.accept_charsets, CharsetAccept([ ('ISO-8859-1', 1), ('utf-8', 0.7), ('*', 0.7) ])) strict_eq(request.accept_encodings, Accept([ ('gzip', 1), ('deflate', 1)])) strict_eq(request.accept_languages, LanguageAccept([ ('en-us', 1), ('en', 0.5)])) request = wrappers.Request({'HTTP_ACCEPT': ''}) strict_eq(request.accept_mimetypes, MIMEAccept()) def test_etag_request_mixin(): request = wrappers.Request({ 'HTTP_CACHE_CONTROL': 'no-store, no-cache', 'HTTP_IF_MATCH': 'W/"foo", bar, "baz"', 'HTTP_IF_NONE_MATCH': 'W/"foo", bar, "baz"', 'HTTP_IF_MODIFIED_SINCE': 'Tue, 22 Jan 2008 11:18:44 GMT', 'HTTP_IF_UNMODIFIED_SINCE': 'Tue, 22 Jan 2008 11:18:44 GMT' }) assert request.cache_control.no_store assert request.cache_control.no_cache for etags in request.if_match, request.if_none_match: assert etags('bar') assert etags.contains_raw('W/"foo"') assert etags.contains_weak('foo') assert not etags.contains('foo') assert request.if_modified_since == datetime(2008, 1, 22, 11, 18, 44) assert request.if_unmodified_since == datetime(2008, 1, 22, 11, 18, 44) def test_user_agent_mixin(): user_agents = [ ('Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.11) ' 'Gecko/20071127 Firefox/2.0.0.11', 'firefox', 'macos', '2.0.0.11', 'en-US'), ('Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; de-DE) Opera 8.54', 'opera', 'windows', '8.54', 'de-DE'), ('Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420 ' '(KHTML, like Gecko) Version/3.0 Mobile/1A543a Safari/419.3', 'safari', 'iphone', '3.0', 'en'), ('Bot Googlebot/2.1 ( http://www.googlebot.com/bot.html)', 'google', None, '2.1', None), ('Mozilla/5.0 (X11; CrOS armv7l 3701.81.0) AppleWebKit/537.31 ' '(KHTML, like Gecko) Chrome/26.0.1410.57 Safari/537.31', 'chrome', 'chromeos', '26.0.1410.57', None), ('Mozilla/5.0 (Windows NT 6.3; Trident/7.0; .NET4.0E; rv:11.0) like Gecko', 'msie', 'windows', '11.0', None), ('Mozilla/5.0 (SymbianOS/9.3; Series60/3.2 NokiaE5-00/101.003; ' 'Profile/MIDP-2.1 Configuration/CLDC-1.1 ) AppleWebKit/533.4 (KHTML, like Gecko) ' 'NokiaBrowser/7.3.1.35 Mobile Safari/533.4 3gpp-gba', 'safari', 'symbian', '533.4', None), ('Mozilla/5.0 (X11; OpenBSD amd64; rv:45.0) Gecko/20100101 Firefox/45.0', 'firefox', 'openbsd', '45.0', None), ('Mozilla/5.0 (X11; NetBSD amd64; rv:45.0) Gecko/20100101 Firefox/45.0', 'firefox', 'netbsd', '45.0', None), ('Mozilla/5.0 (X11; FreeBSD amd64) 
AppleWebKit/537.36 (KHTML, like Gecko) ' 'Chrome/48.0.2564.103 Safari/537.36', 'chrome', 'freebsd', '48.0.2564.103', None), ('Mozilla/5.0 (X11; FreeBSD amd64; rv:45.0) Gecko/20100101 Firefox/45.0', 'firefox', 'freebsd', '45.0', None), ('Mozilla/5.0 (X11; U; NetBSD amd64; en-US; rv:) Gecko/20150921 SeaMonkey/1.1.18', 'seamonkey', 'netbsd', '1.1.18', 'en-US'), ('Mozilla/5.0 (Windows; U; Windows NT 6.2; WOW64; rv:1.8.0.7) ' 'Gecko/20110321 MultiZilla/4.33.2.6a SeaMonkey/8.6.55', 'seamonkey', 'windows', '8.6.55', None), ('Mozilla/5.0 (X11; Linux x86_64; rv:12.0) Gecko/20120427 Firefox/12.0 SeaMonkey/2.9', 'seamonkey', 'linux', '2.9', None), ('Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)', 'baidu', None, '2.0', None), ('Mozilla/5.0 (X11; SunOS i86pc; rv:38.0) Gecko/20100101 Firefox/38.0', 'firefox', 'solaris', '38.0', None), ('Mozilla/5.0 (X11; Linux x86_64; rv:38.0) Gecko/20100101 Firefox/38.0 Iceweasel/38.7.1', 'firefox', 'linux', '38.0', None), ('Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) ' 'Chrome/50.0.2661.75 Safari/537.36', 'chrome', 'windows', '50.0.2661.75', None), ('Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)', 'bing', None, '2.0', None), ('Mozilla/5.0 (X11; DragonFly x86_64) AppleWebKit/537.36 (KHTML, like Gecko) ' 'Chrome/47.0.2526.106 Safari/537.36', 'chrome', 'dragonflybsd', '47.0.2526.106', None), ('Mozilla/5.0 (X11; U; DragonFly i386; de; rv:1.9.1) Gecko/20090720 Firefox/3.5.1', 'firefox', 'dragonflybsd', '3.5.1', 'de') ] for ua, browser, platform, version, lang in user_agents: request = wrappers.Request({'HTTP_USER_AGENT': ua}) strict_eq(request.user_agent.browser, browser) strict_eq(request.user_agent.platform, platform) strict_eq(request.user_agent.version, version) strict_eq(request.user_agent.language, lang) assert bool(request.user_agent) strict_eq(request.user_agent.to_header(), ua) strict_eq(str(request.user_agent), ua) request = wrappers.Request({'HTTP_USER_AGENT': 'foo'}) assert not request.user_agent def test_stream_wrapping(): class LowercasingStream(object): def __init__(self, stream): self._stream = stream def read(self, size=-1): return self._stream.read(size).lower() def readline(self, size=-1): return self._stream.readline(size).lower() data = b'foo=Hello+World' req = wrappers.Request.from_values( '/', method='POST', data=data, content_type='application/x-www-form-urlencoded') req.stream = LowercasingStream(req.stream) assert req.form['foo'] == 'hello world' def test_data_descriptor_triggers_parsing(): data = b'foo=Hello+World' req = wrappers.Request.from_values( '/', method='POST', data=data, content_type='application/x-www-form-urlencoded') assert req.data == b'' assert req.form['foo'] == u'Hello World' def test_get_data_method_parsing_caching_behavior(): data = b'foo=Hello+World' req = wrappers.Request.from_values( '/', method='POST', data=data, content_type='application/x-www-form-urlencoded') # get_data() caches, so form stays available assert req.get_data() == data assert req.form['foo'] == u'Hello World' assert req.get_data() == data # here we access the form data first, caching is bypassed req = wrappers.Request.from_values( '/', method='POST', data=data, content_type='application/x-www-form-urlencoded') assert req.form['foo'] == u'Hello World' assert req.get_data() == b'' # Another case is uncached get data which trashes everything req = wrappers.Request.from_values( '/', method='POST', data=data, content_type='application/x-www-form-urlencoded') 
assert req.get_data(cache=False) == data assert req.get_data(cache=False) == b'' assert req.form == {} # Or we can implicitly start the form parser which is similar to # the old .data behavior req = wrappers.Request.from_values( '/', method='POST', data=data, content_type='application/x-www-form-urlencoded') assert req.get_data(parse_form_data=True) == b'' assert req.form['foo'] == u'Hello World' def test_etag_response_mixin(): response = wrappers.Response('Hello World') assert response.get_etag() == (None, None) response.add_etag() assert response.get_etag() == ('b10a8db164e0754105b7a99be72e3fe5', False) assert not response.cache_control response.cache_control.must_revalidate = True response.cache_control.max_age = 60 response.headers['Content-Length'] = len(response.get_data()) assert response.headers['Cache-Control'] in ('must-revalidate, max-age=60', 'max-age=60, must-revalidate') assert 'date' not in response.headers env = create_environ() env.update({ 'REQUEST_METHOD': 'GET', 'HTTP_IF_NONE_MATCH': response.get_etag()[0] }) response.make_conditional(env) assert 'date' in response.headers # after the thing is invoked by the server as wsgi application # (we're emulating this here), there must not be any entity # headers left and the status code would have to be 304 resp = wrappers.Response.from_app(response, env) assert resp.status_code == 304 assert 'content-length' not in resp.headers # make sure date is not overriden response = wrappers.Response('Hello World') response.date = 1337 d = response.date response.make_conditional(env) assert response.date == d # make sure content length is only set if missing response = wrappers.Response('Hello World') response.content_length = 999 response.make_conditional(env) assert response.content_length == 999 def test_etag_response_412(): response = wrappers.Response('Hello World') assert response.get_etag() == (None, None) response.add_etag() assert response.get_etag() == ('b10a8db164e0754105b7a99be72e3fe5', False) assert not response.cache_control response.cache_control.must_revalidate = True response.cache_control.max_age = 60 response.headers['Content-Length'] = len(response.get_data()) assert response.headers['Cache-Control'] in ('must-revalidate, max-age=60', 'max-age=60, must-revalidate') assert 'date' not in response.headers env = create_environ() env.update({ 'REQUEST_METHOD': 'GET', 'HTTP_IF_MATCH': response.get_etag()[0] + "xyz" }) response.make_conditional(env) assert 'date' in response.headers # after the thing is invoked by the server as wsgi application # (we're emulating this here), there must not be any entity # headers left and the status code would have to be 412 resp = wrappers.Response.from_app(response, env) assert resp.status_code == 412 assert 'content-length' not in resp.headers # make sure date is not overriden response = wrappers.Response('Hello World') response.date = 1337 d = response.date response.make_conditional(env) assert response.date == d # make sure content length is only set if missing response = wrappers.Response('Hello World') response.content_length = 999 response.make_conditional(env) assert response.content_length == 999 def test_range_request_basic(): env = create_environ() response = wrappers.Response('Hello World') env['HTTP_RANGE'] = 'bytes=0-4' response.make_conditional(env, accept_ranges=True, complete_length=11) assert response.status_code == 206 assert response.headers['Accept-Ranges'] == 'bytes' assert response.headers['Content-Range'] == 'bytes 0-4/11' assert response.headers['Content-Length'] == 
'5' assert response.data == b'Hello' def test_range_request_out_of_bound(): env = create_environ() response = wrappers.Response('Hello World') env['HTTP_RANGE'] = 'bytes=6-666' response.make_conditional(env, accept_ranges=True, complete_length=11) assert response.status_code == 206 assert response.headers['Accept-Ranges'] == 'bytes' assert response.headers['Content-Range'] == 'bytes 6-10/11' assert response.headers['Content-Length'] == '5' assert response.data == b'World' def test_range_request_with_file(): env = create_environ() resources = os.path.join(os.path.dirname(__file__), 'res') fname = os.path.join(resources, 'test.txt') with open(fname, 'rb') as f: fcontent = f.read() with open(fname, 'rb') as f: response = wrappers.Response(wrap_file(env, f)) env['HTTP_RANGE'] = 'bytes=0-0' response.make_conditional(env, accept_ranges=True, complete_length=len(fcontent)) assert response.status_code == 206 assert response.headers['Accept-Ranges'] == 'bytes' assert response.headers['Content-Range'] == 'bytes 0-0/%d' % len(fcontent) assert response.headers['Content-Length'] == '1' assert response.data == fcontent[:1] def test_range_request_with_complete_file(): env = create_environ() resources = os.path.join(os.path.dirname(__file__), 'res') fname = os.path.join(resources, 'test.txt') with open(fname, 'rb') as f: fcontent = f.read() with open(fname, 'rb') as f: fsize = os.path.getsize(fname) response = wrappers.Response(wrap_file(env, f)) env['HTTP_RANGE'] = 'bytes=0-%d' % (fsize - 1) response.make_conditional(env, accept_ranges=True, complete_length=fsize) assert response.status_code == 200 assert response.headers['Accept-Ranges'] == 'bytes' assert 'Content-Range' not in response.headers assert response.headers['Content-Length'] == str(fsize) assert response.data == fcontent def test_range_request_without_complete_length(): env = create_environ() response = wrappers.Response('Hello World') env['HTTP_RANGE'] = 'bytes=-' response.make_conditional(env, accept_ranges=True, complete_length=None) assert response.status_code == 200 assert response.data == b'Hello World' def test_invalid_range_request(): env = create_environ() response = wrappers.Response('Hello World') env['HTTP_RANGE'] = 'bytes=-' with pytest.raises(RequestedRangeNotSatisfiable): response.make_conditional(env, accept_ranges=True, complete_length=11) def test_etag_response_mixin_freezing(): class WithFreeze(wrappers.ETagResponseMixin, wrappers.BaseResponse): pass class WithoutFreeze(wrappers.BaseResponse, wrappers.ETagResponseMixin): pass response = WithFreeze('Hello World') response.freeze() strict_eq(response.get_etag(), (text_type(wrappers.generate_etag(b'Hello World')), False)) response = WithoutFreeze('Hello World') response.freeze() assert response.get_etag() == (None, None) response = wrappers.Response('Hello World') response.freeze() assert response.get_etag() == (None, None) def test_authenticate_mixin(): resp = wrappers.Response() resp.www_authenticate.type = 'basic' resp.www_authenticate.realm = 'Testing' strict_eq(resp.headers['WWW-Authenticate'], u'Basic realm="Testing"') resp.www_authenticate.realm = None resp.www_authenticate.type = None assert 'WWW-Authenticate' not in resp.headers def test_authenticate_mixin_quoted_qop(): # Example taken from https://github.com/pallets/werkzeug/issues/633 resp = wrappers.Response() resp.www_authenticate.set_digest('REALM', 'NONCE', qop=("auth", "auth-int")) actual = set((resp.headers['WWW-Authenticate'] + ',').split()) expected = set('Digest nonce="NONCE", realm="REALM", qop="auth, 
auth-int",'.split()) assert actual == expected resp.www_authenticate.set_digest('REALM', 'NONCE', qop=("auth",)) actual = set((resp.headers['WWW-Authenticate'] + ',').split()) expected = set('Digest nonce="NONCE", realm="REALM", qop="auth",'.split()) assert actual == expected def test_response_stream_mixin(): response = wrappers.Response() response.stream.write('Hello ') response.stream.write('World!') assert response.response == ['Hello ', 'World!'] assert response.get_data() == b'Hello World!' def test_common_response_descriptors_mixin(): response = wrappers.Response() response.mimetype = 'text/html' assert response.mimetype == 'text/html' assert response.content_type == 'text/html; charset=utf-8' assert response.mimetype_params == {'charset': 'utf-8'} response.mimetype_params['x-foo'] = 'yep' del response.mimetype_params['charset'] assert response.content_type == 'text/html; x-foo=yep' now = datetime.utcnow().replace(microsecond=0) assert response.content_length is None response.content_length = '42' assert response.content_length == 42 for attr in 'date', 'expires': assert getattr(response, attr) is None setattr(response, attr, now) assert getattr(response, attr) == now assert response.age is None age_td = timedelta(days=1, minutes=3, seconds=5) response.age = age_td assert response.age == age_td response.age = 42 assert response.age == timedelta(seconds=42) assert response.retry_after is None response.retry_after = now assert response.retry_after == now assert not response.vary response.vary.add('Cookie') response.vary.add('Content-Language') assert 'cookie' in response.vary assert response.vary.to_header() == 'Cookie, Content-Language' response.headers['Vary'] = 'Content-Encoding' assert response.vary.as_set() == set(['content-encoding']) response.allow.update(['GET', 'POST']) assert response.headers['Allow'] == 'GET, POST' response.content_language.add('en-US') response.content_language.add('fr') assert response.headers['Content-Language'] == 'en-US, fr' def test_common_request_descriptors_mixin(): request = wrappers.Request.from_values( content_type='text/html; charset=utf-8', content_length='23', headers={ 'Referer': 'http://www.example.com/', 'Date': 'Sat, 28 Feb 2009 19:04:35 GMT', 'Max-Forwards': '10', 'Pragma': 'no-cache', 'Content-Encoding': 'gzip', 'Content-MD5': '9a3bc6dbc47a70db25b84c6e5867a072' } ) assert request.content_type == 'text/html; charset=utf-8' assert request.mimetype == 'text/html' assert request.mimetype_params == {'charset': 'utf-8'} assert request.content_length == 23 assert request.referrer == 'http://www.example.com/' assert request.date == datetime(2009, 2, 28, 19, 4, 35) assert request.max_forwards == 10 assert 'no-cache' in request.pragma assert request.content_encoding == 'gzip' assert request.content_md5 == '9a3bc6dbc47a70db25b84c6e5867a072' def test_request_mimetype_always_lowercase(): request = wrappers.Request.from_values(content_type='APPLICATION/JSON') assert request.mimetype == 'application/json' def test_shallow_mode(): request = wrappers.Request({'QUERY_STRING': 'foo=bar'}, shallow=True) assert request.args['foo'] == 'bar' pytest.raises(RuntimeError, lambda: request.form['foo']) def test_form_parsing_failed(): data = b'--blah\r\n' request = wrappers.Request.from_values( input_stream=BytesIO(data), content_length=len(data), content_type='multipart/form-data; boundary=foo', method='POST' ) assert not request.files assert not request.form # Bad Content-Type data = b'test' request = wrappers.Request.from_values( input_stream=BytesIO(data), 
content_length=len(data), content_type=', ', method='POST' ) assert not request.form def test_file_closing(): data = (b'--foo\r\n' b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n' b'Content-Type: text/plain; charset=utf-8\r\n\r\n' b'file contents, just the contents\r\n' b'--foo--') req = wrappers.Request.from_values( input_stream=BytesIO(data), content_length=len(data), content_type='multipart/form-data; boundary=foo', method='POST' ) foo = req.files['foo'] assert foo.mimetype == 'text/plain' assert foo.filename == 'foo.txt' assert foo.closed is False req.close() assert foo.closed is True def test_file_closing_with(): data = (b'--foo\r\n' b'Content-Disposition: form-data; name="foo"; filename="foo.txt"\r\n' b'Content-Type: text/plain; charset=utf-8\r\n\r\n' b'file contents, just the contents\r\n' b'--foo--') req = wrappers.Request.from_values( input_stream=BytesIO(data), content_length=len(data), content_type='multipart/form-data; boundary=foo', method='POST' ) with req: foo = req.files['foo'] assert foo.mimetype == 'text/plain' assert foo.filename == 'foo.txt' assert foo.closed is True def test_url_charset_reflection(): req = wrappers.Request.from_values() req.charset = 'utf-7' assert req.url_charset == 'utf-7' def test_response_streamed(): r = wrappers.Response() assert not r.is_streamed r = wrappers.Response("Hello World") assert not r.is_streamed r = wrappers.Response(["foo", "bar"]) assert not r.is_streamed def gen(): if 0: yield None r = wrappers.Response(gen()) assert r.is_streamed def test_response_iter_wrapping(): def uppercasing(iterator): for item in iterator: yield item.upper() def generator(): yield 'foo' yield 'bar' req = wrappers.Request.from_values() resp = wrappers.Response(generator()) del resp.headers['Content-Length'] resp.response = uppercasing(resp.iter_encoded()) actual_resp = wrappers.Response.from_app(resp, req.environ, buffered=True) assert actual_resp.get_data() == b'FOOBAR' def test_response_freeze(): def generate(): yield "foo" yield "bar" resp = wrappers.Response(generate()) resp.freeze() assert resp.response == [b'foo', b'bar'] assert resp.headers['content-length'] == '6' def test_response_content_length_uses_encode(): r = wrappers.Response(u'你好') assert r.calculate_content_length() == 6 def test_other_method_payload(): data = b'Hello World' req = wrappers.Request.from_values(input_stream=BytesIO(data), content_length=len(data), content_type='text/plain', method='WHAT_THE_FUCK') assert req.get_data() == data assert isinstance(req.stream, LimitedStream) def test_urlfication(): resp = wrappers.Response() resp.headers['Location'] = u'http://üser:pässword@☃.net/påth' resp.headers['Content-Location'] = u'http://☃.net/' headers = resp.get_wsgi_headers(create_environ()) assert headers['location'] == \ 'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th' assert headers['content-location'] == 'http://xn--n3h.net/' def test_new_response_iterator_behavior(): req = wrappers.Request.from_values() resp = wrappers.Response(u'Hello Wörld!') def get_content_length(resp): headers = resp.get_wsgi_headers(req.environ) return headers.get('content-length', type=int) def generate_items(): yield "Hello " yield u"Wörld!" # werkzeug encodes when set to `data` now, which happens # if a string is passed to the response object. 
assert resp.response == [u'Hello Wörld!'.encode('utf-8')] assert resp.get_data() == u'Hello Wörld!'.encode('utf-8') assert get_content_length(resp) == 13 assert not resp.is_streamed assert resp.is_sequence # try the same for manual assignment resp.set_data(u'Wörd') assert resp.response == [u'Wörd'.encode('utf-8')] assert resp.get_data() == u'Wörd'.encode('utf-8') assert get_content_length(resp) == 5 assert not resp.is_streamed assert resp.is_sequence # automatic generator sequence conversion resp.response = generate_items() assert resp.is_streamed assert not resp.is_sequence assert resp.get_data() == u'Hello Wörld!'.encode('utf-8') assert resp.response == [b'Hello ', u'Wörld!'.encode('utf-8')] assert not resp.is_streamed assert resp.is_sequence # automatic generator sequence conversion resp.response = generate_items() resp.implicit_sequence_conversion = False assert resp.is_streamed assert not resp.is_sequence pytest.raises(RuntimeError, lambda: resp.get_data()) resp.make_sequence() assert resp.get_data() == u'Hello Wörld!'.encode('utf-8') assert resp.response == [b'Hello ', u'Wörld!'.encode('utf-8')] assert not resp.is_streamed assert resp.is_sequence # stream makes it a list no matter how the conversion is set for val in True, False: resp.implicit_sequence_conversion = val resp.response = ("foo", "bar") assert resp.is_sequence resp.stream.write('baz') assert resp.response == ['foo', 'bar', 'baz'] def test_form_data_ordering(): class MyRequest(wrappers.Request): parameter_storage_class = ImmutableOrderedMultiDict req = MyRequest.from_values('/?foo=1&bar=0&foo=3') assert list(req.args) == ['foo', 'bar'] assert list(req.args.items(multi=True)) == [ ('foo', '1'), ('bar', '0'), ('foo', '3') ] assert isinstance(req.args, ImmutableOrderedMultiDict) assert isinstance(req.values, CombinedMultiDict) assert req.values['foo'] == '1' assert req.values.getlist('foo') == ['1', '3'] def test_storage_classes(): class MyRequest(wrappers.Request): dict_storage_class = dict list_storage_class = list parameter_storage_class = dict req = MyRequest.from_values('/?foo=baz', headers={ 'Cookie': 'foo=bar' }) assert type(req.cookies) is dict assert req.cookies == {'foo': 'bar'} assert type(req.access_route) is list assert type(req.args) is dict assert type(req.values) is CombinedMultiDict assert req.values['foo'] == u'baz' req = wrappers.Request.from_values(headers={ 'Cookie': 'foo=bar' }) assert type(req.cookies) is ImmutableTypeConversionDict assert req.cookies == {'foo': 'bar'} assert type(req.access_route) is ImmutableList MyRequest.list_storage_class = tuple req = MyRequest.from_values() assert type(req.access_route) is tuple def test_response_headers_passthrough(): headers = wrappers.Headers() resp = wrappers.Response(headers=headers) assert resp.headers is headers def test_response_304_no_content_length(): resp = wrappers.Response('Test', status=304) env = create_environ() assert 'content-length' not in resp.get_wsgi_headers(env) def test_ranges(): # basic range stuff req = wrappers.Request.from_values() assert req.range is None req = wrappers.Request.from_values(headers={'Range': 'bytes=0-499'}) assert req.range.ranges == [(0, 500)] resp = wrappers.Response() resp.content_range = req.range.make_content_range(1000) assert resp.content_range.units == 'bytes' assert resp.content_range.start == 0 assert resp.content_range.stop == 500 assert resp.content_range.length == 1000 assert resp.headers['Content-Range'] == 'bytes 0-499/1000' resp.content_range.unset() assert 'Content-Range' not in resp.headers 
resp.headers['Content-Range'] = 'bytes 0-499/1000' assert resp.content_range.units == 'bytes' assert resp.content_range.start == 0 assert resp.content_range.stop == 500 assert resp.content_range.length == 1000 def test_auto_content_length(): resp = wrappers.Response('Hello World!') assert resp.content_length == 12 resp = wrappers.Response(['Hello World!']) assert resp.content_length is None assert resp.get_wsgi_headers({})['Content-Length'] == '12' def test_stream_content_length(): resp = wrappers.Response() resp.stream.writelines(['foo', 'bar', 'baz']) assert resp.get_wsgi_headers({})['Content-Length'] == '9' resp = wrappers.Response() resp.make_conditional({'REQUEST_METHOD': 'GET'}) resp.stream.writelines(['foo', 'bar', 'baz']) assert resp.get_wsgi_headers({})['Content-Length'] == '9' resp = wrappers.Response('foo') resp.stream.writelines(['bar', 'baz']) assert resp.get_wsgi_headers({})['Content-Length'] == '9' def test_disabled_auto_content_length(): class MyResponse(wrappers.Response): automatically_set_content_length = False resp = MyResponse('Hello World!') assert resp.content_length is None resp = MyResponse(['Hello World!']) assert resp.content_length is None assert 'Content-Length' not in resp.get_wsgi_headers({}) resp = MyResponse() resp.make_conditional({ 'REQUEST_METHOD': 'GET' }) assert resp.content_length is None assert 'Content-Length' not in resp.get_wsgi_headers({}) def test_location_header_autocorrect(): env = create_environ() class MyResponse(wrappers.Response): autocorrect_location_header = False resp = MyResponse('Hello World!') resp.headers['Location'] = '/test' assert resp.get_wsgi_headers(env)['Location'] == '/test' resp = wrappers.Response('Hello World!') resp.headers['Location'] = '/test' assert resp.get_wsgi_headers(env)['Location'] == 'http://localhost/test' def test_204_and_1XX_response_has_no_content_length(): response = wrappers.Response(status=204) assert response.content_length is None headers = response.get_wsgi_headers(create_environ()) assert 'Content-Length' not in headers response = wrappers.Response(status=100) assert response.content_length is None headers = response.get_wsgi_headers(create_environ()) assert 'Content-Length' not in headers def test_modified_url_encoding(): class ModifiedRequest(wrappers.Request): url_charset = 'euc-kr' req = ModifiedRequest.from_values(u'/?foo=정상처리'.encode('euc-kr')) strict_eq(req.args['foo'], u'정상처리') def test_request_method_case_sensitivity(): req = wrappers.Request({'REQUEST_METHOD': 'get'}) assert req.method == 'GET' def test_is_xhr_warning(): req = wrappers.Request.from_values() with pytest.warns(DeprecationWarning) as record: req.is_xhr assert len(record) == 1 assert 'Request.is_xhr is deprecated' in str(record[0].message) def test_write_length(): response = wrappers.Response() length = response.stream.write(b'bar') assert length == 3 def test_stream_zip(): import zipfile response = wrappers.Response() with contextlib.closing(zipfile.ZipFile(response.stream, mode='w')) as z: z.writestr("foo", b"bar") buffer = BytesIO(response.get_data()) with contextlib.closing(zipfile.ZipFile(buffer, mode='r')) as z: assert z.namelist() == ['foo'] assert z.read('foo') == b'bar' class TestSetCookie(object): """Tests for :meth:`werkzeug.wrappers.BaseResponse.set_cookie`.""" def test_secure(self): response = wrappers.BaseResponse() response.set_cookie('foo', value='bar', max_age=60, expires=0, path='/blub', domain='example.org', secure=True, samesite=None) strict_eq(response.headers.to_wsgi_list(), [ ('Content-Type', 'text/plain; 
charset=utf-8'), ('Set-Cookie', 'foo=bar; Domain=example.org; Expires=Thu, ' '01-Jan-1970 00:00:00 GMT; Max-Age=60; Secure; Path=/blub') ]) def test_httponly(self): response = wrappers.BaseResponse() response.set_cookie('foo', value='bar', max_age=60, expires=0, path='/blub', domain='example.org', secure=False, httponly=True, samesite=None) strict_eq(response.headers.to_wsgi_list(), [ ('Content-Type', 'text/plain; charset=utf-8'), ('Set-Cookie', 'foo=bar; Domain=example.org; Expires=Thu, ' '01-Jan-1970 00:00:00 GMT; Max-Age=60; HttpOnly; Path=/blub') ]) def test_secure_and_httponly(self): response = wrappers.BaseResponse() response.set_cookie('foo', value='bar', max_age=60, expires=0, path='/blub', domain='example.org', secure=True, httponly=True, samesite=None) strict_eq(response.headers.to_wsgi_list(), [ ('Content-Type', 'text/plain; charset=utf-8'), ('Set-Cookie', 'foo=bar; Domain=example.org; Expires=Thu, ' '01-Jan-1970 00:00:00 GMT; Max-Age=60; Secure; HttpOnly; ' 'Path=/blub') ]) def test_samesite(self): response = wrappers.BaseResponse() response.set_cookie('foo', value='bar', max_age=60, expires=0, path='/blub', domain='example.org', secure=False, samesite='strict') strict_eq(response.headers.to_wsgi_list(), [ ('Content-Type', 'text/plain; charset=utf-8'), ('Set-Cookie', 'foo=bar; Domain=example.org; Expires=Thu, ' '01-Jan-1970 00:00:00 GMT; Max-Age=60; Path=/blub; ' 'SameSite=Strict') ]) werkzeug-0.14.1/tests/test_wsgi.py000066400000000000000000000434451322225165500172010ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ tests.wsgi ~~~~~~~~~~ Tests the WSGI utilities. :copyright: (c) 2014 by Armin Ronacher. :license: BSD, see LICENSE for more details. """ import io import json import os from contextlib import closing from os import path import pytest from tests import strict_eq from werkzeug import wsgi from werkzeug._compat import BytesIO, NativeStringIO, StringIO, to_bytes, \ to_native from werkzeug.exceptions import BadRequest, ClientDisconnected from werkzeug.test import Client, create_environ, run_wsgi_app from werkzeug.wrappers import BaseResponse from werkzeug.urls import url_parse from werkzeug.wsgi import _RangeWrapper, wrap_file def test_shareddatamiddleware_get_file_loader(): app = wsgi.SharedDataMiddleware(None, {}) assert callable(app.get_file_loader('foo')) def test_shared_data_middleware(tmpdir): def null_application(environ, start_response): start_response('404 NOT FOUND', [('Content-Type', 'text/plain')]) yield b'NOT FOUND' test_dir = str(tmpdir) with open(path.join(test_dir, to_native(u'äöü', 'utf-8')), 'w') as test_file: test_file.write(u'FOUND') for t in [list, dict]: app = wsgi.SharedDataMiddleware(null_application, t([ ('/', path.join(path.dirname(__file__), 'res')), ('/sources', path.join(path.dirname(__file__), 'res')), ('/pkg', ('werkzeug.debug', 'shared')), ('/foo', test_dir) ])) for p in '/test.txt', '/sources/test.txt', '/foo/äöü': app_iter, status, headers = run_wsgi_app(app, create_environ(p)) assert status == '200 OK' with closing(app_iter) as app_iter: data = b''.join(app_iter).strip() assert data == b'FOUND' app_iter, status, headers = run_wsgi_app( app, create_environ('/pkg/debugger.js')) with closing(app_iter) as app_iter: contents = b''.join(app_iter) assert b'$(function() {' in contents app_iter, status, headers = run_wsgi_app( app, create_environ('/missing')) assert status == '404 NOT FOUND' assert b''.join(app_iter).strip() == b'NOT FOUND' def test_dispatchermiddleware(): def null_application(environ, start_response): 
start_response('404 NOT FOUND', [('Content-Type', 'text/plain')]) yield b'NOT FOUND' def dummy_application(environ, start_response): start_response('200 OK', [('Content-Type', 'text/plain')]) yield to_bytes(environ['SCRIPT_NAME']) app = wsgi.DispatcherMiddleware(null_application, { '/test1': dummy_application, '/test2/very': dummy_application, }) tests = { '/test1': ('/test1', '/test1/asfd', '/test1/very'), '/test2/very': ('/test2/very', '/test2/very/long/path/after/script/name') } for name, urls in tests.items(): for p in urls: environ = create_environ(p) app_iter, status, headers = run_wsgi_app(app, environ) assert status == '200 OK' assert b''.join(app_iter).strip() == to_bytes(name) app_iter, status, headers = run_wsgi_app( app, create_environ('/missing')) assert status == '404 NOT FOUND' assert b''.join(app_iter).strip() == b'NOT FOUND' def test_get_host(): env = {'HTTP_X_FORWARDED_HOST': 'example.org', 'SERVER_NAME': 'bullshit', 'HOST_NAME': 'ignore me dammit'} assert wsgi.get_host(env) == 'example.org' assert wsgi.get_host(create_environ('/', 'http://example.org')) == \ 'example.org' def test_get_host_multiple_forwarded(): env = {'HTTP_X_FORWARDED_HOST': 'example.com, example.org', 'SERVER_NAME': 'bullshit', 'HOST_NAME': 'ignore me dammit'} assert wsgi.get_host(env) == 'example.com' assert wsgi.get_host(create_environ('/', 'http://example.com')) == \ 'example.com' def test_get_host_validation(): env = {'HTTP_X_FORWARDED_HOST': 'example.org', 'SERVER_NAME': 'bullshit', 'HOST_NAME': 'ignore me dammit'} assert wsgi.get_host(env, trusted_hosts=['.example.org']) == 'example.org' pytest.raises(BadRequest, wsgi.get_host, env, trusted_hosts=['example.com']) def test_responder(): def foo(environ, start_response): return BaseResponse(b'Test') client = Client(wsgi.responder(foo), BaseResponse) response = client.get('/') assert response.status_code == 200 assert response.data == b'Test' def test_pop_path_info(): original_env = {'SCRIPT_NAME': '/foo', 'PATH_INFO': '/a/b///c'} # regular path info popping def assert_tuple(script_name, path_info): assert env.get('SCRIPT_NAME') == script_name assert env.get('PATH_INFO') == path_info env = original_env.copy() pop = lambda: wsgi.pop_path_info(env) assert_tuple('/foo', '/a/b///c') assert pop() == 'a' assert_tuple('/foo/a', '/b///c') assert pop() == 'b' assert_tuple('/foo/a/b', '///c') assert pop() == 'c' assert_tuple('/foo/a/b///c', '') assert pop() is None def test_peek_path_info(): env = { 'SCRIPT_NAME': '/foo', 'PATH_INFO': '/aaa/b///c' } assert wsgi.peek_path_info(env) == 'aaa' assert wsgi.peek_path_info(env) == 'aaa' assert wsgi.peek_path_info(env, charset=None) == b'aaa' assert wsgi.peek_path_info(env, charset=None) == b'aaa' def test_path_info_and_script_name_fetching(): env = create_environ(u'/\N{SNOWMAN}', u'http://example.com/\N{COMET}/') assert wsgi.get_path_info(env) == u'/\N{SNOWMAN}' assert wsgi.get_path_info(env, charset=None) == u'/\N{SNOWMAN}'.encode('utf-8') assert wsgi.get_script_name(env) == u'/\N{COMET}' assert wsgi.get_script_name(env, charset=None) == u'/\N{COMET}'.encode('utf-8') def test_query_string_fetching(): env = create_environ(u'/?\N{SNOWMAN}=\N{COMET}') qs = wsgi.get_query_string(env) strict_eq(qs, '%E2%98%83=%E2%98%84') def test_limited_stream(): class RaisingLimitedStream(wsgi.LimitedStream): def on_exhausted(self): raise BadRequest('input stream exhausted') io = BytesIO(b'123456') stream = RaisingLimitedStream(io, 3) strict_eq(stream.read(), b'123') pytest.raises(BadRequest, stream.read) io = BytesIO(b'123456') stream 
= RaisingLimitedStream(io, 3) strict_eq(stream.tell(), 0) strict_eq(stream.read(1), b'1') strict_eq(stream.tell(), 1) strict_eq(stream.read(1), b'2') strict_eq(stream.tell(), 2) strict_eq(stream.read(1), b'3') strict_eq(stream.tell(), 3) pytest.raises(BadRequest, stream.read) io = BytesIO(b'123456\nabcdefg') stream = wsgi.LimitedStream(io, 9) strict_eq(stream.readline(), b'123456\n') strict_eq(stream.readline(), b'ab') io = BytesIO(b'123456\nabcdefg') stream = wsgi.LimitedStream(io, 9) strict_eq(stream.readlines(), [b'123456\n', b'ab']) io = BytesIO(b'123456\nabcdefg') stream = wsgi.LimitedStream(io, 9) strict_eq(stream.readlines(2), [b'12']) strict_eq(stream.readlines(2), [b'34']) strict_eq(stream.readlines(), [b'56\n', b'ab']) io = BytesIO(b'123456\nabcdefg') stream = wsgi.LimitedStream(io, 9) strict_eq(stream.readline(100), b'123456\n') io = BytesIO(b'123456\nabcdefg') stream = wsgi.LimitedStream(io, 9) strict_eq(stream.readlines(100), [b'123456\n', b'ab']) io = BytesIO(b'123456') stream = wsgi.LimitedStream(io, 3) strict_eq(stream.read(1), b'1') strict_eq(stream.read(1), b'2') strict_eq(stream.read(), b'3') strict_eq(stream.read(), b'') io = BytesIO(b'123456') stream = wsgi.LimitedStream(io, 3) strict_eq(stream.read(-1), b'123') io = BytesIO(b'123456') stream = wsgi.LimitedStream(io, 0) strict_eq(stream.read(-1), b'') io = StringIO(u'123456') stream = wsgi.LimitedStream(io, 0) strict_eq(stream.read(-1), u'') io = StringIO(u'123\n456\n') stream = wsgi.LimitedStream(io, 8) strict_eq(list(stream), [u'123\n', u'456\n']) def test_limited_stream_json_load(): stream = wsgi.LimitedStream(BytesIO(b'{"hello": "test"}'), 17) # flask.json adapts bytes to text with TextIOWrapper # this expects stream.readable() to exist and return true stream = io.TextIOWrapper(io.BufferedReader(stream), 'UTF-8') data = json.load(stream) assert data == {'hello': 'test'} def test_limited_stream_disconnection(): io = BytesIO(b'A bit of content') # disconnect detection on out of bytes stream = wsgi.LimitedStream(io, 255) with pytest.raises(ClientDisconnected): stream.read() # disconnect detection because file close io = BytesIO(b'x' * 255) io.close() stream = wsgi.LimitedStream(io, 255) with pytest.raises(ClientDisconnected): stream.read() def test_path_info_extraction(): x = wsgi.extract_path_info('http://example.com/app', '/app/hello') assert x == u'/hello' x = wsgi.extract_path_info('http://example.com/app', 'https://example.com/app/hello') assert x == u'/hello' x = wsgi.extract_path_info('http://example.com/app/', 'https://example.com/app/hello') assert x == u'/hello' x = wsgi.extract_path_info('http://example.com/app/', 'https://example.com/app') assert x == u'/' x = wsgi.extract_path_info(u'http://☃.net/', u'/fööbär') assert x == u'/fööbär' x = wsgi.extract_path_info(u'http://☃.net/x', u'http://☃.net/x/fööbär') assert x == u'/fööbär' env = create_environ(u'/fööbär', u'http://☃.net/x/') x = wsgi.extract_path_info(env, u'http://☃.net/x/fööbär') assert x == u'/fööbär' x = wsgi.extract_path_info('http://example.com/app/', 'https://example.com/a/hello') assert x is None x = wsgi.extract_path_info('http://example.com/app/', 'https://example.com/app/hello', collapse_http_schemes=False) assert x is None def test_get_host_fallback(): assert wsgi.get_host({ 'SERVER_NAME': 'foobar.example.com', 'wsgi.url_scheme': 'http', 'SERVER_PORT': '80' }) == 'foobar.example.com' assert wsgi.get_host({ 'SERVER_NAME': 'foobar.example.com', 'wsgi.url_scheme': 'http', 'SERVER_PORT': '81' }) == 'foobar.example.com:81' def 
test_get_current_url_unicode(): env = create_environ() env['QUERY_STRING'] = 'foo=bar&baz=blah&meh=\xcf' rv = wsgi.get_current_url(env) strict_eq(rv, u'http://localhost/?foo=bar&baz=blah&meh=\ufffd') def test_multi_part_line_breaks(): data = 'abcdef\r\nghijkl\r\nmnopqrstuvwxyz\r\nABCDEFGHIJK' test_stream = NativeStringIO(data) lines = list(wsgi.make_line_iter(test_stream, limit=len(data), buffer_size=16)) assert lines == ['abcdef\r\n', 'ghijkl\r\n', 'mnopqrstuvwxyz\r\n', 'ABCDEFGHIJK'] data = 'abc\r\nThis line is broken by the buffer length.' \ '\r\nFoo bar baz' test_stream = NativeStringIO(data) lines = list(wsgi.make_line_iter(test_stream, limit=len(data), buffer_size=24)) assert lines == ['abc\r\n', 'This line is broken by the buffer ' 'length.\r\n', 'Foo bar baz'] def test_multi_part_line_breaks_bytes(): data = b'abcdef\r\nghijkl\r\nmnopqrstuvwxyz\r\nABCDEFGHIJK' test_stream = BytesIO(data) lines = list(wsgi.make_line_iter(test_stream, limit=len(data), buffer_size=16)) assert lines == [b'abcdef\r\n', b'ghijkl\r\n', b'mnopqrstuvwxyz\r\n', b'ABCDEFGHIJK'] data = b'abc\r\nThis line is broken by the buffer length.' \ b'\r\nFoo bar baz' test_stream = BytesIO(data) lines = list(wsgi.make_line_iter(test_stream, limit=len(data), buffer_size=24)) assert lines == [b'abc\r\n', b'This line is broken by the buffer ' b'length.\r\n', b'Foo bar baz'] def test_multi_part_line_breaks_problematic(): data = 'abc\rdef\r\nghi' for x in range(1, 10): test_stream = NativeStringIO(data) lines = list(wsgi.make_line_iter(test_stream, limit=len(data), buffer_size=4)) assert lines == ['abc\r', 'def\r\n', 'ghi'] def test_iter_functions_support_iterators(): data = ['abcdef\r\nghi', 'jkl\r\nmnopqrstuvwxyz\r', '\nABCDEFGHIJK'] lines = list(wsgi.make_line_iter(data)) assert lines == ['abcdef\r\n', 'ghijkl\r\n', 'mnopqrstuvwxyz\r\n', 'ABCDEFGHIJK'] def test_make_chunk_iter(): data = [u'abcdefXghi', u'jklXmnopqrstuvwxyzX', u'ABCDEFGHIJK'] rv = list(wsgi.make_chunk_iter(data, 'X')) assert rv == [u'abcdef', u'ghijkl', u'mnopqrstuvwxyz', u'ABCDEFGHIJK'] data = u'abcdefXghijklXmnopqrstuvwxyzXABCDEFGHIJK' test_stream = StringIO(data) rv = list(wsgi.make_chunk_iter(test_stream, 'X', limit=len(data), buffer_size=4)) assert rv == [u'abcdef', u'ghijkl', u'mnopqrstuvwxyz', u'ABCDEFGHIJK'] def test_make_chunk_iter_bytes(): data = [b'abcdefXghi', b'jklXmnopqrstuvwxyzX', b'ABCDEFGHIJK'] rv = list(wsgi.make_chunk_iter(data, 'X')) assert rv == [b'abcdef', b'ghijkl', b'mnopqrstuvwxyz', b'ABCDEFGHIJK'] data = b'abcdefXghijklXmnopqrstuvwxyzXABCDEFGHIJK' test_stream = BytesIO(data) rv = list(wsgi.make_chunk_iter(test_stream, 'X', limit=len(data), buffer_size=4)) assert rv == [b'abcdef', b'ghijkl', b'mnopqrstuvwxyz', b'ABCDEFGHIJK'] data = b'abcdefXghijklXmnopqrstuvwxyzXABCDEFGHIJK' test_stream = BytesIO(data) rv = list(wsgi.make_chunk_iter(test_stream, 'X', limit=len(data), buffer_size=4, cap_at_buffer=True)) assert rv == [b'abcd', b'ef', b'ghij', b'kl', b'mnop', b'qrst', b'uvwx', b'yz', b'ABCD', b'EFGH', b'IJK'] def test_lines_longer_buffer_size(): data = '1234567890\n1234567890\n' for bufsize in range(1, 15): lines = list(wsgi.make_line_iter(NativeStringIO(data), limit=len(data), buffer_size=4)) assert lines == ['1234567890\n', '1234567890\n'] def test_lines_longer_buffer_size_cap(): data = '1234567890\n1234567890\n' for bufsize in range(1, 15): lines = list(wsgi.make_line_iter(NativeStringIO(data), limit=len(data), buffer_size=4, cap_at_buffer=True)) assert lines == ['1234', '5678', '90\n', '1234', '5678', '90\n'] def 
test_range_wrapper(): response = BaseResponse(b'Hello World') range_wrapper = _RangeWrapper(response.response, 6, 4) assert next(range_wrapper) == b'Worl' response = BaseResponse(b'Hello World') range_wrapper = _RangeWrapper(response.response, 1, 0) with pytest.raises(StopIteration): next(range_wrapper) response = BaseResponse(b'Hello World') range_wrapper = _RangeWrapper(response.response, 6, 100) assert next(range_wrapper) == b'World' response = BaseResponse((x for x in (b'He', b'll', b'o ', b'Wo', b'rl', b'd'))) range_wrapper = _RangeWrapper(response.response, 6, 4) assert not range_wrapper.seekable assert next(range_wrapper) == b'Wo' assert next(range_wrapper) == b'rl' response = BaseResponse((x for x in (b'He', b'll', b'o W', b'o', b'rld'))) range_wrapper = _RangeWrapper(response.response, 6, 4) assert next(range_wrapper) == b'W' assert next(range_wrapper) == b'o' assert next(range_wrapper) == b'rl' with pytest.raises(StopIteration): next(range_wrapper) response = BaseResponse((x for x in (b'Hello', b' World'))) range_wrapper = _RangeWrapper(response.response, 1, 1) assert next(range_wrapper) == b'e' with pytest.raises(StopIteration): next(range_wrapper) resources = os.path.join(os.path.dirname(__file__), 'res') env = create_environ() with open(os.path.join(resources, 'test.txt'), 'rb') as f: response = BaseResponse(wrap_file(env, f)) range_wrapper = _RangeWrapper(response.response, 1, 2) assert range_wrapper.seekable assert next(range_wrapper) == b'OU' with pytest.raises(StopIteration): next(range_wrapper) with open(os.path.join(resources, 'test.txt'), 'rb') as f: response = BaseResponse(wrap_file(env, f)) range_wrapper = _RangeWrapper(response.response, 2) assert next(range_wrapper) == b'UND\n' with pytest.raises(StopIteration): next(range_wrapper) def test_http_proxy(dev_server): APP_TEMPLATE = r''' from werkzeug.wrappers import Request, Response @Request.application def app(request): return Response(u'%s|%s|%s' % ( request.headers.get('X-Special'), request.environ['HTTP_HOST'], request.path, )) ''' server = dev_server(APP_TEMPLATE) app = wsgi.ProxyMiddleware(BaseResponse('ROOT'), { '/foo': { 'target': server.url, 'host': 'faked.invalid', 'headers': {'X-Special': 'foo'}, }, '/bar': { 'target': server.url, 'host': None, 'remove_prefix': True, 'headers': {'X-Special': 'bar'}, }, '/autohost': { 'target': server.url, }, }) client = Client(app, response_wrapper=BaseResponse) rv = client.get('/') assert rv.data == b'ROOT' rv = client.get('/foo/bar') assert rv.data.decode('ascii') == 'foo|faked.invalid|/foo/bar' rv = client.get('/bar/baz') assert rv.data.decode('ascii') == 'bar|localhost|/baz' rv = client.get('/autohost/aha') assert rv.data.decode('ascii') == 'None|%s|/autohost/aha' % url_parse( server.url).ascii_host werkzeug-0.14.1/tox.ini000066400000000000000000000030071322225165500147560ustar00rootroot00000000000000[tox] envlist = py{36,27}-hypothesis-uwsgi py{35,34,py} # run py33, py26 manually stylecheck docs-html coverage-report [testenv] passenv = LANG setenv = TOX_ENVTMPDIR={envtmpdir} usedevelop = true deps = # remove once we drop support for 2.6, 3.3 py26,py33: py<1.5 py26,py33: pytest<3.3 pytest-xprocess coverage requests pyopenssl greenlet redis python-memcached watchdog hypothesis: hypothesis uwsgi: uwsgi whitelist_externals = redis-server memcached uwsgi commands = coverage run -p -m pytest [] hypothesis: coverage run -p -m pytest [] tests/hypothesis uwsgi: uwsgi --pyrun {envbindir}/coverage --pyargv 'run -p -m pytest -kUWSGI' --cache2=name=werkzeugtest,items=20 --master # 
--pyrun doesn't pass pytest exit code up, so check for a marker uwsgi: python -c 'import os, sys; sys.exit(os.path.exists("{envtmpdir}/test_uwsgi_failed"))' [testenv:stylecheck] deps = flake8 commands = flake8 [] [testenv:docs-html] deps = sphinx commands = sphinx-build -W -b html -d {envtmpdir}/doctrees docs docs/_build/html [testenv:docs-linkcheck] deps = sphinx commands = sphinx-build -W -b linkcheck -d {envtmpdir}/doctrees docs docs/_build/linkcheck [testenv:coverage-report] deps = coverage skip_install = true commands = coverage combine coverage report coverage html [testenv:codecov] passenv = CI TRAVIS TRAVIS_* deps = codecov skip_install = true commands = coverage combine coverage report codecov werkzeug-0.14.1/werkzeug-import-rewrite.py000066400000000000000000000215141322225165500206520ustar00rootroot00000000000000#!/usr/bin/env python # -*- coding: utf-8 -*- """ Werkzeug Import Rewriter ~~~~~~~~~~~~~~~~~~~~~~~~ Changes the deprecated werkzeug imports to the full canonical imports. This is a terrible hack, don't trust the diff untested. :copyright: (c) 2014 by the Werkzeug Team. :license: BSD, see LICENSE for more details. """ from __future__ import with_statement import sys import os import re import posixpath import difflib _from_import_re = re.compile(r'(\s*(>>>|\.\.\.)?\s*)from werkzeug import\s+') _direct_usage = re.compile('(? 79: yield prefix + ', '.join(item_buffer[:-1]) + ', \\' item_buffer = [item_buffer[-1]] # doctest continuations indentation = indentation.replace('>', '.') prefix = indentation + ' ' yield prefix + ', '.join(item_buffer) def inject_imports(lines, imports): pos = 0 for idx, line in enumerate(lines): if re.match(r'(from|import)\s+werkzeug', line): pos = idx break lines[pos:pos] = ['from %s import %s' % (mod, ', '.join(sorted(attrs))) for mod, attrs in sorted(imports.items())] def rewrite_file(filename): with open(filename) as f: old_file = f.read().splitlines() new_file = [] deferred_imports = {} lineiter = iter(old_file) for line in lineiter: # rewrite from imports match = _from_import_re.search(line) if match is not None: fromlist = line[match.end():] new_file.extend(rewrite_from_imports(fromlist, match.group(1), lineiter)) continue def _handle_match(match): # rewrite attribute access to 'werkzeug' attr = match.group(2) mod = find_module(attr) if mod == 'werkzeug': return match.group(0) deferred_imports.setdefault(mod, []).append(attr) return attr new_file.append(_direct_usage.sub(_handle_match, line)) if deferred_imports: inject_imports(new_file, deferred_imports) for line in difflib.unified_diff( old_file, new_file, posixpath.normpath(posixpath.join('a', filename)), posixpath.normpath(posixpath.join('b', filename)), lineterm='' ): print(line) def rewrite_in_folders(folders): for folder in folders: for dirpath, dirnames, filenames in os.walk(folder): for filename in filenames: filename = os.path.join(dirpath, filename) if filename.endswith(('.rst', '.py')): rewrite_file(filename) def main(): if len(sys.argv) == 1: print('usage: werkzeug-import-rewrite.py [folders]') sys.exit(1) rewrite_in_folders(sys.argv[1:]) if __name__ == '__main__': main() werkzeug-0.14.1/werkzeug/000077500000000000000000000000001322225165500153065ustar00rootroot00000000000000werkzeug-0.14.1/werkzeug/__init__.py000066400000000000000000000152721322225165500174260ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug ~~~~~~~~ Werkzeug is the Swiss Army knife of Python web development. 
It provides useful classes and functions for any WSGI application to make the life of a python web developer much easier. All of the provided classes are independent from each other so you can mix it with any other library. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ from types import ModuleType import sys from werkzeug._compat import iteritems __version__ = '0.14.1' # This import magic raises concerns quite often which is why the implementation # and motivation is explained here in detail now. # # The majority of the functions and classes provided by Werkzeug work on the # HTTP and WSGI layer. There is no useful grouping for those which is why # they are all importable from "werkzeug" instead of the modules where they are # implemented. The downside of that is, that now everything would be loaded at # once, even if unused. # # The implementation of a lazy-loading module in this file replaces the # werkzeug package when imported from within. Attribute access to the werkzeug # module will then lazily import from the modules that implement the objects. # import mapping to objects in other modules all_by_module = { 'werkzeug.debug': ['DebuggedApplication'], 'werkzeug.local': ['Local', 'LocalManager', 'LocalProxy', 'LocalStack', 'release_local'], 'werkzeug.serving': ['run_simple'], 'werkzeug.test': ['Client', 'EnvironBuilder', 'create_environ', 'run_wsgi_app'], 'werkzeug.testapp': ['test_app'], 'werkzeug.exceptions': ['abort', 'Aborter'], 'werkzeug.urls': ['url_decode', 'url_encode', 'url_quote', 'url_quote_plus', 'url_unquote', 'url_unquote_plus', 'url_fix', 'Href', 'iri_to_uri', 'uri_to_iri'], 'werkzeug.formparser': ['parse_form_data'], 'werkzeug.utils': ['escape', 'environ_property', 'append_slash_redirect', 'redirect', 'cached_property', 'import_string', 'dump_cookie', 'parse_cookie', 'unescape', 'format_string', 'find_modules', 'header_property', 'html', 'xhtml', 'HTMLBuilder', 'validate_arguments', 'ArgumentValidationError', 'bind_arguments', 'secure_filename'], 'werkzeug.wsgi': ['get_current_url', 'get_host', 'pop_path_info', 'peek_path_info', 'SharedDataMiddleware', 'DispatcherMiddleware', 'ClosingIterator', 'FileWrapper', 'make_line_iter', 'LimitedStream', 'responder', 'wrap_file', 'extract_path_info'], 'werkzeug.datastructures': ['MultiDict', 'CombinedMultiDict', 'Headers', 'EnvironHeaders', 'ImmutableList', 'ImmutableDict', 'ImmutableMultiDict', 'TypeConversionDict', 'ImmutableTypeConversionDict', 'Accept', 'MIMEAccept', 'CharsetAccept', 'LanguageAccept', 'RequestCacheControl', 'ResponseCacheControl', 'ETags', 'HeaderSet', 'WWWAuthenticate', 'Authorization', 'FileMultiDict', 'CallbackDict', 'FileStorage', 'OrderedMultiDict', 'ImmutableOrderedMultiDict' ], 'werkzeug.useragents': ['UserAgent'], 'werkzeug.http': ['parse_etags', 'parse_date', 'http_date', 'cookie_date', 'parse_cache_control_header', 'is_resource_modified', 'parse_accept_header', 'parse_set_header', 'quote_etag', 'unquote_etag', 'generate_etag', 'dump_header', 'parse_list_header', 'parse_dict_header', 'parse_authorization_header', 'parse_www_authenticate_header', 'remove_entity_headers', 'is_entity_header', 'remove_hop_by_hop_headers', 'parse_options_header', 'dump_options_header', 'is_hop_by_hop_header', 'unquote_header_value', 'quote_header_value', 'HTTP_STATUS_CODES'], 'werkzeug.wrappers': ['BaseResponse', 'BaseRequest', 'Request', 'Response', 'AcceptMixin', 'ETagRequestMixin', 'ETagResponseMixin', 'ResponseStreamMixin', 
'CommonResponseDescriptorsMixin', 'UserAgentMixin', 'AuthorizationMixin', 'WWWAuthenticateMixin', 'CommonRequestDescriptorsMixin'], 'werkzeug.security': ['generate_password_hash', 'check_password_hash'], # the undocumented easteregg ;-) 'werkzeug._internal': ['_easteregg'] } # modules that should be imported when accessed as attributes of werkzeug attribute_modules = frozenset(['exceptions', 'routing']) object_origins = {} for module, items in iteritems(all_by_module): for item in items: object_origins[item] = module class module(ModuleType): """Automatically import objects from the modules.""" def __getattr__(self, name): if name in object_origins: module = __import__(object_origins[name], None, None, [name]) for extra_name in all_by_module[module.__name__]: setattr(self, extra_name, getattr(module, extra_name)) return getattr(module, name) elif name in attribute_modules: __import__('werkzeug.' + name) return ModuleType.__getattribute__(self, name) def __dir__(self): """Just show what we want to show.""" result = list(new_module.__all__) result.extend(('__file__', '__doc__', '__all__', '__docformat__', '__name__', '__path__', '__package__', '__version__')) return result # keep a reference to this module so that it's not garbage collected old_module = sys.modules['werkzeug'] # setup the new module and patch it into the dict of loaded modules new_module = sys.modules['werkzeug'] = module('werkzeug') new_module.__dict__.update({ '__file__': __file__, '__package__': 'werkzeug', '__path__': __path__, '__doc__': __doc__, '__version__': __version__, '__all__': tuple(object_origins) + tuple(attribute_modules), '__docformat__': 'restructuredtext en' }) # Due to bootstrapping issues we need to import exceptions here. # Don't ask :-( __import__('werkzeug.exceptions') werkzeug-0.14.1/werkzeug/_compat.py000066400000000000000000000142471322225165500173120ustar00rootroot00000000000000# flake8: noqa # This whole file is full of lint errors import codecs import sys import operator import functools import warnings try: import builtins except ImportError: import __builtin__ as builtins PY2 = sys.version_info[0] == 2 WIN = sys.platform.startswith('win') _identity = lambda x: x if PY2: unichr = unichr text_type = unicode string_types = (str, unicode) integer_types = (int, long) iterkeys = lambda d, *args, **kwargs: d.iterkeys(*args, **kwargs) itervalues = lambda d, *args, **kwargs: d.itervalues(*args, **kwargs) iteritems = lambda d, *args, **kwargs: d.iteritems(*args, **kwargs) iterlists = lambda d, *args, **kwargs: d.iterlists(*args, **kwargs) iterlistvalues = lambda d, *args, **kwargs: d.iterlistvalues(*args, **kwargs) int_to_byte = chr iter_bytes = iter exec('def reraise(tp, value, tb=None):\n raise tp, value, tb') def fix_tuple_repr(obj): def __repr__(self): cls = self.__class__ return '%s(%s)' % (cls.__name__, ', '.join( '%s=%r' % (field, self[index]) for index, field in enumerate(cls._fields) )) obj.__repr__ = __repr__ return obj def implements_iterator(cls): cls.next = cls.__next__ del cls.__next__ return cls def implements_to_string(cls): cls.__unicode__ = cls.__str__ cls.__str__ = lambda x: x.__unicode__().encode('utf-8') return cls def native_string_result(func): def wrapper(*args, **kwargs): return func(*args, **kwargs).encode('utf-8') return functools.update_wrapper(wrapper, func) def implements_bool(cls): cls.__nonzero__ = cls.__bool__ del cls.__bool__ return cls from itertools import imap, izip, ifilter range_type = xrange from StringIO import StringIO from cStringIO import StringIO as BytesIO 
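    # On Python 2 the native string type is bytes, so the "native" StringIO
    # is the byte-based cStringIO imported above.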
NativeStringIO = BytesIO def make_literal_wrapper(reference): return _identity def normalize_string_tuple(tup): """Normalizes a string tuple to a common type. Following Python 2 rules, upgrades to unicode are implicit. """ if any(isinstance(x, text_type) for x in tup): return tuple(to_unicode(x) for x in tup) return tup def try_coerce_native(s): """Try to coerce a unicode string to native if possible. Otherwise, leave it as unicode. """ try: return to_native(s) except UnicodeError: return s wsgi_get_bytes = _identity def wsgi_decoding_dance(s, charset='utf-8', errors='replace'): return s.decode(charset, errors) def wsgi_encoding_dance(s, charset='utf-8', errors='replace'): if isinstance(s, bytes): return s return s.encode(charset, errors) def to_bytes(x, charset=sys.getdefaultencoding(), errors='strict'): if x is None: return None if isinstance(x, (bytes, bytearray, buffer)): return bytes(x) if isinstance(x, unicode): return x.encode(charset, errors) raise TypeError('Expected bytes') def to_native(x, charset=sys.getdefaultencoding(), errors='strict'): if x is None or isinstance(x, str): return x return x.encode(charset, errors) else: unichr = chr text_type = str string_types = (str, ) integer_types = (int, ) iterkeys = lambda d, *args, **kwargs: iter(d.keys(*args, **kwargs)) itervalues = lambda d, *args, **kwargs: iter(d.values(*args, **kwargs)) iteritems = lambda d, *args, **kwargs: iter(d.items(*args, **kwargs)) iterlists = lambda d, *args, **kwargs: iter(d.lists(*args, **kwargs)) iterlistvalues = lambda d, *args, **kwargs: iter(d.listvalues(*args, **kwargs)) int_to_byte = operator.methodcaller('to_bytes', 1, 'big') iter_bytes = functools.partial(map, int_to_byte) def reraise(tp, value, tb=None): if value.__traceback__ is not tb: raise value.with_traceback(tb) raise value fix_tuple_repr = _identity implements_iterator = _identity implements_to_string = _identity implements_bool = _identity native_string_result = _identity imap = map izip = zip ifilter = filter range_type = range from io import StringIO, BytesIO NativeStringIO = StringIO _latin1_encode = operator.methodcaller('encode', 'latin1') def make_literal_wrapper(reference): if isinstance(reference, text_type): return _identity return _latin1_encode def normalize_string_tuple(tup): """Ensures that all types in the tuple are either strings or bytes. 
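        For example::

            >>> normalize_string_tuple(('a', 'b'))
            ('a', 'b')
            >>> normalize_string_tuple(('a', b'b'))
            Traceback (most recent call last):
                ...
            TypeError: Cannot mix str and bytes arguments (got ('a', b'b'))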
""" tupiter = iter(tup) is_text = isinstance(next(tupiter, None), text_type) for arg in tupiter: if isinstance(arg, text_type) != is_text: raise TypeError('Cannot mix str and bytes arguments (got %s)' % repr(tup)) return tup try_coerce_native = _identity wsgi_get_bytes = _latin1_encode def wsgi_decoding_dance(s, charset='utf-8', errors='replace'): return s.encode('latin1').decode(charset, errors) def wsgi_encoding_dance(s, charset='utf-8', errors='replace'): if isinstance(s, text_type): s = s.encode(charset) return s.decode('latin1', errors) def to_bytes(x, charset=sys.getdefaultencoding(), errors='strict'): if x is None: return None if isinstance(x, (bytes, bytearray, memoryview)): # noqa return bytes(x) if isinstance(x, str): return x.encode(charset, errors) raise TypeError('Expected bytes') def to_native(x, charset=sys.getdefaultencoding(), errors='strict'): if x is None or isinstance(x, str): return x return x.decode(charset, errors) def to_unicode(x, charset=sys.getdefaultencoding(), errors='strict', allow_none_charset=False): if x is None: return None if not isinstance(x, bytes): return text_type(x) if charset is None and allow_none_charset: return x return x.decode(charset, errors) werkzeug-0.14.1/werkzeug/_internal.py000066400000000000000000000330611322225165500176360ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug._internal ~~~~~~~~~~~~~~~~~~ This module provides internally used helpers and constants. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import re import string import inspect from weakref import WeakKeyDictionary from datetime import datetime, date from itertools import chain from werkzeug._compat import iter_bytes, text_type, BytesIO, int_to_byte, \ range_type, integer_types _logger = None _empty_stream = BytesIO() _signature_cache = WeakKeyDictionary() _epoch_ord = date(1970, 1, 1).toordinal() _cookie_params = set((b'expires', b'path', b'comment', b'max-age', b'secure', b'httponly', b'version')) _legal_cookie_chars = (string.ascii_letters + string.digits + u"/=!#$%&'*+-.^_`|~:").encode('ascii') _cookie_quoting_map = { b',': b'\\054', b';': b'\\073', b'"': b'\\"', b'\\': b'\\\\', } for _i in chain(range_type(32), range_type(127, 256)): _cookie_quoting_map[int_to_byte(_i)] = ('\\%03o' % _i).encode('latin1') _octal_re = re.compile(br'\\[0-3][0-7][0-7]') _quote_re = re.compile(br'[\\].') _legal_cookie_chars_re = br'[\w\d!#%&\'~_`><@,:/\$\*\+\-\.\^\|\)\(\?\}\{\=]' _cookie_re = re.compile(br""" (?P[^=;]+) (?:\s*=\s* (?P "(?:[^\\"]|\\.)*" | (?:.*?) ) )? \s*; """, flags=re.VERBOSE) class _Missing(object): def __repr__(self): return 'no value' def __reduce__(self): return '_missing' _missing = _Missing() def _get_environ(obj): env = getattr(obj, 'environ', obj) assert isinstance(env, dict), \ '%r is not a WSGI environment (has to be a dict)' % type(obj).__name__ return env def _log(type, message, *args, **kwargs): """Log into the internal werkzeug logger.""" global _logger if _logger is None: import logging _logger = logging.getLogger('werkzeug') # Only set up a default log handler if the # end-user application didn't set anything up. 
if not logging.root.handlers and _logger.level == logging.NOTSET: _logger.setLevel(logging.INFO) handler = logging.StreamHandler() _logger.addHandler(handler) getattr(_logger, type)(message.rstrip(), *args, **kwargs) def _parse_signature(func): """Return a signature object for the function.""" if hasattr(func, 'im_func'): func = func.im_func # if we have a cached validator for this function, return it parse = _signature_cache.get(func) if parse is not None: return parse # inspect the function signature and collect all the information if hasattr(inspect, 'getfullargspec'): tup = inspect.getfullargspec(func) else: tup = inspect.getargspec(func) positional, vararg_var, kwarg_var, defaults = tup[:4] defaults = defaults or () arg_count = len(positional) arguments = [] for idx, name in enumerate(positional): if isinstance(name, list): raise TypeError('cannot parse functions that unpack tuples ' 'in the function signature') try: default = defaults[idx - arg_count] except IndexError: param = (name, False, None) else: param = (name, True, default) arguments.append(param) arguments = tuple(arguments) def parse(args, kwargs): new_args = [] missing = [] extra = {} # consume as many arguments as positional as possible for idx, (name, has_default, default) in enumerate(arguments): try: new_args.append(args[idx]) except IndexError: try: new_args.append(kwargs.pop(name)) except KeyError: if has_default: new_args.append(default) else: missing.append(name) else: if name in kwargs: extra[name] = kwargs.pop(name) # handle extra arguments extra_positional = args[arg_count:] if vararg_var is not None: new_args.extend(extra_positional) extra_positional = () if kwargs and kwarg_var is None: extra.update(kwargs) kwargs = {} return new_args, kwargs, missing, extra, extra_positional, \ arguments, vararg_var, kwarg_var _signature_cache[func] = parse return parse def _date_to_unix(arg): """Converts a timetuple, integer or datetime object into the seconds from epoch in utc. 
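    For example (with a naive UTC datetime)::

        >>> from datetime import datetime
        >>> _date_to_unix(datetime(1970, 1, 2))
        86400
        >>> _date_to_unix(1514764800)
        1514764800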
""" if isinstance(arg, datetime): arg = arg.utctimetuple() elif isinstance(arg, integer_types + (float,)): return int(arg) year, month, day, hour, minute, second = arg[:6] days = date(year, month, 1).toordinal() - _epoch_ord + day - 1 hours = days * 24 + hour minutes = hours * 60 + minute seconds = minutes * 60 + second return seconds class _DictAccessorProperty(object): """Baseclass for `environ_property` and `header_property`.""" read_only = False def __init__(self, name, default=None, load_func=None, dump_func=None, read_only=None, doc=None): self.name = name self.default = default self.load_func = load_func self.dump_func = dump_func if read_only is not None: self.read_only = read_only self.__doc__ = doc def __get__(self, obj, type=None): if obj is None: return self storage = self.lookup(obj) if self.name not in storage: return self.default rv = storage[self.name] if self.load_func is not None: try: rv = self.load_func(rv) except (ValueError, TypeError): rv = self.default return rv def __set__(self, obj, value): if self.read_only: raise AttributeError('read only property') if self.dump_func is not None: value = self.dump_func(value) self.lookup(obj)[self.name] = value def __delete__(self, obj): if self.read_only: raise AttributeError('read only property') self.lookup(obj).pop(self.name, None) def __repr__(self): return '<%s %s>' % ( self.__class__.__name__, self.name ) def _cookie_quote(b): buf = bytearray() all_legal = True _lookup = _cookie_quoting_map.get _push = buf.extend for char in iter_bytes(b): if char not in _legal_cookie_chars: all_legal = False char = _lookup(char, char) _push(char) if all_legal: return bytes(buf) return bytes(b'"' + buf + b'"') def _cookie_unquote(b): if len(b) < 2: return b if b[:1] != b'"' or b[-1:] != b'"': return b b = b[1:-1] i = 0 n = len(b) rv = bytearray() _push = rv.extend while 0 <= i < n: o_match = _octal_re.search(b, i) q_match = _quote_re.search(b, i) if not o_match and not q_match: rv.extend(b[i:]) break j = k = -1 if o_match: j = o_match.start(0) if q_match: k = q_match.start(0) if q_match and (not o_match or k < j): _push(b[i:k]) _push(b[k + 1:k + 2]) i = k + 2 else: _push(b[i:j]) rv.append(int(b[j + 1:j + 4], 8)) i = j + 4 return bytes(rv) def _cookie_parse_impl(b): """Lowlevel cookie parsing facility that operates on bytes.""" i = 0 n = len(b) while i < n: match = _cookie_re.search(b + b';', i) if not match: break key = match.group('key').strip() value = match.group('val') or b'' i = match.end(0) # Ignore parameters. We have no interest in them. if key.lower() not in _cookie_params: yield _cookie_unquote(key), _cookie_unquote(value) def _encode_idna(domain): # If we're given bytes, make sure they fit into ASCII if not isinstance(domain, text_type): domain.decode('ascii') return domain # Otherwise check if it's already ascii, then return try: return domain.encode('ascii') except UnicodeError: pass # Otherwise encode each part separately parts = domain.split('.') for idx, part in enumerate(parts): parts[idx] = part.encode('idna') return b'.'.join(parts) def _decode_idna(domain): # If the input is a string try to encode it to ascii to # do the idna decoding. if that fails because of an # unicode error, then we already have a decoded idna domain if isinstance(domain, text_type): try: domain = domain.encode('ascii') except UnicodeError: return domain # Decode each part separately. If a part fails, try to # decode it with ascii and silently ignore errors. 
This makes # most sense because the idna codec does not have error handling parts = domain.split(b'.') for idx, part in enumerate(parts): try: parts[idx] = part.decode('idna') except UnicodeError: parts[idx] = part.decode('ascii', 'ignore') return '.'.join(parts) def _make_cookie_domain(domain): if domain is None: return None domain = _encode_idna(domain) if b':' in domain: domain = domain.split(b':', 1)[0] if b'.' in domain: return domain raise ValueError( 'Setting \'domain\' for a cookie on a server running locally (ex: ' 'localhost) is not supported by complying browsers. You should ' 'have something like: \'127.0.0.1 localhost dev.localhost\' on ' 'your hosts file and then point your server to run on ' '\'dev.localhost\' and also set \'domain\' for \'dev.localhost\'' ) def _easteregg(app=None): """Like the name says. But who knows how it works?""" def bzzzzzzz(gyver): import base64 import zlib return zlib.decompress(base64.b64decode(gyver)).decode('ascii') gyver = u'\n'.join([x + (77 - len(x)) * u' ' for x in bzzzzzzz(b''' eJyFlzuOJDkMRP06xRjymKgDJCDQStBYT8BCgK4gTwfQ2fcFs2a2FzvZk+hvlcRvRJD148efHt9m 9Xz94dRY5hGt1nrYcXx7us9qlcP9HHNh28rz8dZj+q4rynVFFPdlY4zH873NKCexrDM6zxxRymzz 4QIxzK4bth1PV7+uHn6WXZ5C4ka/+prFzx3zWLMHAVZb8RRUxtFXI5DTQ2n3Hi2sNI+HK43AOWSY jmEzE4naFp58PdzhPMdslLVWHTGUVpSxImw+pS/D+JhzLfdS1j7PzUMxij+mc2U0I9zcbZ/HcZxc q1QjvvcThMYFnp93agEx392ZdLJWXbi/Ca4Oivl4h/Y1ErEqP+lrg7Xa4qnUKu5UE9UUA4xeqLJ5 jWlPKJvR2yhRI7xFPdzPuc6adXu6ovwXwRPXXnZHxlPtkSkqWHilsOrGrvcVWXgGP3daXomCj317 8P2UOw/NnA0OOikZyFf3zZ76eN9QXNwYdD8f8/LdBRFg0BO3bB+Pe/+G8er8tDJv83XTkj7WeMBJ v/rnAfdO51d6sFglfi8U7zbnr0u9tyJHhFZNXYfH8Iafv2Oa+DT6l8u9UYlajV/hcEgk1x8E8L/r XJXl2SK+GJCxtnyhVKv6GFCEB1OO3f9YWAIEbwcRWv/6RPpsEzOkXURMN37J0PoCSYeBnJQd9Giu LxYQJNlYPSo/iTQwgaihbART7Fcyem2tTSCcwNCs85MOOpJtXhXDe0E7zgZJkcxWTar/zEjdIVCk iXy87FW6j5aGZhttDBoAZ3vnmlkx4q4mMmCdLtnHkBXFMCReqthSGkQ+MDXLLCpXwBs0t+sIhsDI tjBB8MwqYQpLygZ56rRHHpw+OAVyGgaGRHWy2QfXez+ZQQTTBkmRXdV/A9LwH6XGZpEAZU8rs4pE 1R4FQ3Uwt8RKEtRc0/CrANUoes3EzM6WYcFyskGZ6UTHJWenBDS7h163Eo2bpzqxNE9aVgEM2CqI GAJe9Yra4P5qKmta27VjzYdR04Vc7KHeY4vs61C0nbywFmcSXYjzBHdiEjraS7PGG2jHHTpJUMxN Jlxr3pUuFvlBWLJGE3GcA1/1xxLcHmlO+LAXbhrXah1tD6Ze+uqFGdZa5FM+3eHcKNaEarutAQ0A QMAZHV+ve6LxAwWnXbbSXEG2DmCX5ijeLCKj5lhVFBrMm+ryOttCAeFpUdZyQLAQkA06RLs56rzG 8MID55vqr/g64Qr/wqwlE0TVxgoiZhHrbY2h1iuuyUVg1nlkpDrQ7Vm1xIkI5XRKLedN9EjzVchu jQhXcVkjVdgP2O99QShpdvXWoSwkp5uMwyjt3jiWCqWGSiaaPAzohjPanXVLbM3x0dNskJsaCEyz DTKIs+7WKJD4ZcJGfMhLFBf6hlbnNkLEePF8Cx2o2kwmYF4+MzAxa6i+6xIQkswOqGO+3x9NaZX8 MrZRaFZpLeVTYI9F/djY6DDVVs340nZGmwrDqTCiiqD5luj3OzwpmQCiQhdRYowUYEA3i1WWGwL4 GCtSoO4XbIPFeKGU13XPkDf5IdimLpAvi2kVDVQbzOOa4KAXMFlpi/hV8F6IDe0Y2reg3PuNKT3i RYhZqtkQZqSB2Qm0SGtjAw7RDwaM1roESC8HWiPxkoOy0lLTRFG39kvbLZbU9gFKFRvixDZBJmpi Xyq3RE5lW00EJjaqwp/v3EByMSpVZYsEIJ4APaHmVtpGSieV5CALOtNUAzTBiw81GLgC0quyzf6c NlWknzJeCsJ5fup2R4d8CYGN77mu5vnO1UqbfElZ9E6cR6zbHjgsr9ly18fXjZoPeDjPuzlWbFwS pdvPkhntFvkc13qb9094LL5NrA3NIq3r9eNnop9DizWOqCEbyRBFJTHn6Tt3CG1o8a4HevYh0XiJ sR0AVVHuGuMOIfbuQ/OKBkGRC6NJ4u7sbPX8bG/n5sNIOQ6/Y/BX3IwRlTSabtZpYLB85lYtkkgm p1qXK3Du2mnr5INXmT/78KI12n11EFBkJHHp0wJyLe9MvPNUGYsf+170maayRoy2lURGHAIapSpQ krEDuNoJCHNlZYhKpvw4mspVWxqo415n8cD62N9+EfHrAvqQnINStetek7RY2Urv8nxsnGaZfRr/ nhXbJ6m/yl1LzYqscDZA9QHLNbdaSTTr+kFg3bC0iYbX/eQy0Bv3h4B50/SGYzKAXkCeOLI3bcAt mj2Z/FM1vQWgDynsRwNvrWnJHlespkrp8+vO1jNaibm+PhqXPPv30YwDZ6jApe3wUjFQobghvW9p 7f2zLkGNv8b191cD/3vs9Q833z8t''').splitlines()]) def easteregged(environ, start_response): def injecting_start_response(status, headers, exc_info=None): headers.append(('X-Powered-By', 'Werkzeug')) 
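            # forward the augmented header list to the wrapped start_response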
return start_response(status, headers, exc_info) if app is not None and environ.get('QUERY_STRING') != 'macgybarchakku': return app(environ, injecting_start_response) injecting_start_response('200 OK', [('Content-Type', 'text/html')]) return [(u''' About Werkzeug

</title></head>
<body>
<h1>Werkzeug</h1>
<p>the Swiss Army knife of Python web development.</p>
<pre>%s\n\n\n</pre>
</body></html>
''' % gyver).encode('latin1')] return easteregged werkzeug-0.14.1/werkzeug/_reloader.py000066400000000000000000000220601322225165500176140ustar00rootroot00000000000000import os import sys import time import subprocess import threading from itertools import chain from werkzeug._internal import _log from werkzeug._compat import PY2, iteritems, text_type def _iter_module_files(): """This iterates over all relevant Python files. It goes through all loaded files from modules, all files in folders of already loaded modules as well as all files reachable through a package. """ # The list call is necessary on Python 3 in case the module # dictionary modifies during iteration. for module in list(sys.modules.values()): if module is None: continue filename = getattr(module, '__file__', None) if filename: if os.path.isdir(filename) and \ os.path.exists(os.path.join(filename, "__init__.py")): filename = os.path.join(filename, "__init__.py") old = None while not os.path.isfile(filename): old = filename filename = os.path.dirname(filename) if filename == old: break else: if filename[-4:] in ('.pyc', '.pyo'): filename = filename[:-1] yield filename def _find_observable_paths(extra_files=None): """Finds all paths that should be observed.""" rv = set(os.path.dirname(os.path.abspath(x)) if os.path.isfile(x) else os.path.abspath(x) for x in sys.path) for filename in extra_files or (): rv.add(os.path.dirname(os.path.abspath(filename))) for module in list(sys.modules.values()): fn = getattr(module, '__file__', None) if fn is None: continue fn = os.path.abspath(fn) rv.add(os.path.dirname(fn)) return _find_common_roots(rv) def _get_args_for_reloading(): """Returns the executable. This contains a workaround for windows if the executable is incorrectly reported to not have the .exe extension which can cause bugs on reloading. """ rv = [sys.executable] py_script = sys.argv[0] if os.name == 'nt' and not os.path.exists(py_script) and \ os.path.exists(py_script + '.exe'): py_script += '.exe' if os.path.splitext(rv[0])[1] == '.exe' and os.path.splitext(py_script)[1] == '.exe': rv.pop(0) rv.append(py_script) rv.extend(sys.argv[1:]) return rv def _find_common_roots(paths): """Out of some paths it finds the common roots that need monitoring.""" paths = [x.split(os.path.sep) for x in paths] root = {} for chunks in sorted(paths, key=len, reverse=True): node = root for chunk in chunks: node = node.setdefault(chunk, {}) node.clear() rv = set() def _walk(node, path): for prefix, child in iteritems(node): _walk(child, path + (prefix,)) if not node: rv.add('/'.join(path)) _walk(root, ()) return rv class ReloaderLoop(object): name = None # monkeypatched by testsuite. wrapping with `staticmethod` is required in # case time.sleep has been replaced by a non-c function (e.g. by # `eventlet.monkey_patch`) before we get here _sleep = staticmethod(time.sleep) def __init__(self, extra_files=None, interval=1): self.extra_files = set(os.path.abspath(x) for x in extra_files or ()) self.interval = interval def run(self): pass def restart_with_reloader(self): """Spawn a new Python interpreter with the same arguments as this one, but running the reloader thread. """ while 1: _log('info', ' * Restarting with %s' % self.name) args = _get_args_for_reloading() new_environ = os.environ.copy() new_environ['WERKZEUG_RUN_MAIN'] = 'true' # a weird bug on windows. sometimes unicode strings end up in the # environment and subprocess.call does not like this, encode them # to latin1 and continue. 
if os.name == 'nt' and PY2: for key, value in iteritems(new_environ): if isinstance(value, text_type): new_environ[key] = value.encode('iso-8859-1') exit_code = subprocess.call(args, env=new_environ, close_fds=False) if exit_code != 3: return exit_code def trigger_reload(self, filename): self.log_reload(filename) sys.exit(3) def log_reload(self, filename): filename = os.path.abspath(filename) _log('info', ' * Detected change in %r, reloading' % filename) class StatReloaderLoop(ReloaderLoop): name = 'stat' def run(self): mtimes = {} while 1: for filename in chain(_iter_module_files(), self.extra_files): try: mtime = os.stat(filename).st_mtime except OSError: continue old_time = mtimes.get(filename) if old_time is None: mtimes[filename] = mtime continue elif mtime > old_time: self.trigger_reload(filename) self._sleep(self.interval) class WatchdogReloaderLoop(ReloaderLoop): def __init__(self, *args, **kwargs): ReloaderLoop.__init__(self, *args, **kwargs) from watchdog.observers import Observer from watchdog.events import FileSystemEventHandler self.observable_paths = set() def _check_modification(filename): if filename in self.extra_files: self.trigger_reload(filename) dirname = os.path.dirname(filename) if dirname.startswith(tuple(self.observable_paths)): if filename.endswith(('.pyc', '.pyo', '.py')): self.trigger_reload(filename) class _CustomHandler(FileSystemEventHandler): def on_created(self, event): _check_modification(event.src_path) def on_modified(self, event): _check_modification(event.src_path) def on_moved(self, event): _check_modification(event.src_path) _check_modification(event.dest_path) def on_deleted(self, event): _check_modification(event.src_path) reloader_name = Observer.__name__.lower() if reloader_name.endswith('observer'): reloader_name = reloader_name[:-8] reloader_name += ' reloader' self.name = reloader_name self.observer_class = Observer self.event_handler = _CustomHandler() self.should_reload = False def trigger_reload(self, filename): # This is called inside an event handler, which means throwing # SystemExit has no effect. # https://github.com/gorakhargosh/watchdog/issues/294 self.should_reload = True self.log_reload(filename) def run(self): watches = {} observer = self.observer_class() observer.start() try: while not self.should_reload: to_delete = set(watches) paths = _find_observable_paths(self.extra_files) for path in paths: if path not in watches: try: watches[path] = observer.schedule( self.event_handler, path, recursive=True) except OSError: # Clear this path from list of watches We don't want # the same error message showing again in the next # iteration. 
watches[path] = None to_delete.discard(path) for path in to_delete: watch = watches.pop(path, None) if watch is not None: observer.unschedule(watch) self.observable_paths = paths self._sleep(self.interval) finally: observer.stop() observer.join() sys.exit(3) reloader_loops = { 'stat': StatReloaderLoop, 'watchdog': WatchdogReloaderLoop, } try: __import__('watchdog.observers') except ImportError: reloader_loops['auto'] = reloader_loops['stat'] else: reloader_loops['auto'] = reloader_loops['watchdog'] def run_with_reloader(main_func, extra_files=None, interval=1, reloader_type='auto'): """Run the given function in an independent python interpreter.""" import signal reloader = reloader_loops[reloader_type](extra_files, interval) signal.signal(signal.SIGTERM, lambda *args: sys.exit(0)) try: if os.environ.get('WERKZEUG_RUN_MAIN') == 'true': t = threading.Thread(target=main_func, args=()) t.setDaemon(True) t.start() reloader.run() else: sys.exit(reloader.restart_with_reloader()) except KeyboardInterrupt: pass werkzeug-0.14.1/werkzeug/contrib/000077500000000000000000000000001322225165500167465ustar00rootroot00000000000000werkzeug-0.14.1/werkzeug/contrib/__init__.py000066400000000000000000000011571322225165500210630ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.contrib ~~~~~~~~~~~~~~~~ Contains user-submitted code that other users may find useful, but which is not part of the Werkzeug core. Anyone can write code for inclusion in the `contrib` package. All modules in this package are distributed as an add-on library and thus are not part of Werkzeug itself. This file itself is mostly for informational purposes and to tell the Python interpreter that `contrib` is a package. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ werkzeug-0.14.1/werkzeug/contrib/atom.py000066400000000000000000000363271322225165500202730ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.contrib.atom ~~~~~~~~~~~~~~~~~~~~~ This module provides a class called :class:`AtomFeed` which can be used to generate feeds in the Atom syndication format (see :rfc:`4287`). Example:: def atom_feed(request): feed = AtomFeed("My Blog", feed_url=request.url, url=request.host_url, subtitle="My example blog for a feed test.") for post in Post.query.limit(10).all(): feed.add(post.title, post.body, content_type='html', author=post.author, url=post.url, id=post.uid, updated=post.last_update, published=post.pub_date) return feed.get_response() :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ from datetime import datetime from werkzeug.utils import escape from werkzeug.wrappers import BaseResponse from werkzeug._compat import implements_to_string, string_types XHTML_NAMESPACE = 'http://www.w3.org/1999/xhtml' def _make_text_block(name, content, content_type=None): """Helper function for the builder that creates an XML text block.""" if content_type == 'xhtml': return u'<%s type="xhtml">
<div xmlns="%s">%s</div></%s>
\n' % \ (name, XHTML_NAMESPACE, content, name) if not content_type: return u'<%s>%s\n' % (name, escape(content), name) return u'<%s type="%s">%s\n' % (name, content_type, escape(content), name) def format_iso8601(obj): """Format a datetime object for iso8601""" iso8601 = obj.isoformat() if obj.tzinfo: return iso8601 return iso8601 + 'Z' @implements_to_string class AtomFeed(object): """A helper class that creates Atom feeds. :param title: the title of the feed. Required. :param title_type: the type attribute for the title element. One of ``'html'``, ``'text'`` or ``'xhtml'``. :param url: the url for the feed (not the url *of* the feed) :param id: a globally unique id for the feed. Must be an URI. If not present the `feed_url` is used, but one of both is required. :param updated: the time the feed was modified the last time. Must be a :class:`datetime.datetime` object. If not present the latest entry's `updated` is used. Treated as UTC if naive datetime. :param feed_url: the URL to the feed. Should be the URL that was requested. :param author: the author of the feed. Must be either a string (the name) or a dict with name (required) and uri or email (both optional). Can be a list of (may be mixed, too) strings and dicts, too, if there are multiple authors. Required if not every entry has an author element. :param icon: an icon for the feed. :param logo: a logo for the feed. :param rights: copyright information for the feed. :param rights_type: the type attribute for the rights element. One of ``'html'``, ``'text'`` or ``'xhtml'``. Default is ``'text'``. :param subtitle: a short description of the feed. :param subtitle_type: the type attribute for the subtitle element. One of ``'text'``, ``'html'``, ``'text'`` or ``'xhtml'``. Default is ``'text'``. :param links: additional links. Must be a list of dictionaries with href (required) and rel, type, hreflang, title, length (all optional) :param generator: the software that generated this feed. This must be a tuple in the form ``(name, url, version)``. If you don't want to specify one of them, set the item to `None`. :param entries: a list with the entries for the feed. Entries can also be added later with :meth:`add`. For more information on the elements see http://www.atomenabled.org/developers/syndication/ Everywhere where a list is demanded, any iterable can be used. 
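    For illustration, a feed using the mixed ``author`` forms and the
    ``generator`` tuple described above (the URLs and names are made up)::

        feed = AtomFeed('Example Feed',
                        feed_url='http://example.com/feed.atom',
                        url='http://example.com/',
                        author=[{'name': 'Jane', 'email': 'jane@example.com'},
                                'John'],
                        generator=('MyBlog', 'http://example.com/', '1.0'))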
""" default_generator = ('Werkzeug', None, None) def __init__(self, title=None, entries=None, **kwargs): self.title = title self.title_type = kwargs.get('title_type', 'text') self.url = kwargs.get('url') self.feed_url = kwargs.get('feed_url', self.url) self.id = kwargs.get('id', self.feed_url) self.updated = kwargs.get('updated') self.author = kwargs.get('author', ()) self.icon = kwargs.get('icon') self.logo = kwargs.get('logo') self.rights = kwargs.get('rights') self.rights_type = kwargs.get('rights_type') self.subtitle = kwargs.get('subtitle') self.subtitle_type = kwargs.get('subtitle_type', 'text') self.generator = kwargs.get('generator') if self.generator is None: self.generator = self.default_generator self.links = kwargs.get('links', []) self.entries = entries and list(entries) or [] if not hasattr(self.author, '__iter__') \ or isinstance(self.author, string_types + (dict,)): self.author = [self.author] for i, author in enumerate(self.author): if not isinstance(author, dict): self.author[i] = {'name': author} if not self.title: raise ValueError('title is required') if not self.id: raise ValueError('id is required') for author in self.author: if 'name' not in author: raise TypeError('author must contain at least a name') def add(self, *args, **kwargs): """Add a new entry to the feed. This function can either be called with a :class:`FeedEntry` or some keyword and positional arguments that are forwarded to the :class:`FeedEntry` constructor. """ if len(args) == 1 and not kwargs and isinstance(args[0], FeedEntry): self.entries.append(args[0]) else: kwargs['feed_url'] = self.feed_url self.entries.append(FeedEntry(*args, **kwargs)) def __repr__(self): return '<%s %r (%d entries)>' % ( self.__class__.__name__, self.title, len(self.entries) ) def generate(self): """Return a generator that yields pieces of XML.""" # atom demands either an author element in every entry or a global one if not self.author: if any(not e.author for e in self.entries): self.author = ({'name': 'Unknown author'},) if not self.updated: dates = sorted([entry.updated for entry in self.entries]) self.updated = dates and dates[-1] or datetime.utcnow() yield u'\n' yield u'\n' yield ' ' + _make_text_block('title', self.title, self.title_type) yield u' %s\n' % escape(self.id) yield u' %s\n' % format_iso8601(self.updated) if self.url: yield u' \n' % escape(self.url) if self.feed_url: yield u' \n' % \ escape(self.feed_url) for link in self.links: yield u' \n' % ''.join('%s="%s" ' % (k, escape(link[k])) for k in link) for author in self.author: yield u' \n' yield u' %s\n' % escape(author['name']) if 'uri' in author: yield u' %s\n' % escape(author['uri']) if 'email' in author: yield ' %s\n' % escape(author['email']) yield ' \n' if self.subtitle: yield ' ' + _make_text_block('subtitle', self.subtitle, self.subtitle_type) if self.icon: yield u' %s\n' % escape(self.icon) if self.logo: yield u' %s\n' % escape(self.logo) if self.rights: yield ' ' + _make_text_block('rights', self.rights, self.rights_type) generator_name, generator_url, generator_version = self.generator if generator_name or generator_url or generator_version: tmp = [u' %s\n' % escape(generator_name)) yield u''.join(tmp) for entry in self.entries: for line in entry.generate(): yield u' ' + line yield u'\n' def to_string(self): """Convert the feed into a string.""" return u''.join(self.generate()) def get_response(self): """Return a response object for the feed.""" return BaseResponse(self.to_string(), mimetype='application/atom+xml') def __call__(self, environ, 
start_response): """Use the class as WSGI response object.""" return self.get_response()(environ, start_response) def __str__(self): return self.to_string() @implements_to_string class FeedEntry(object): """Represents a single entry in a feed. :param title: the title of the entry. Required. :param title_type: the type attribute for the title element. One of ``'html'``, ``'text'`` or ``'xhtml'``. :param content: the content of the entry. :param content_type: the type attribute for the content element. One of ``'html'``, ``'text'`` or ``'xhtml'``. :param summary: a summary of the entry's content. :param summary_type: the type attribute for the summary element. One of ``'html'``, ``'text'`` or ``'xhtml'``. :param url: the url for the entry. :param id: a globally unique id for the entry. Must be an URI. If not present the URL is used, but one of both is required. :param updated: the time the entry was modified the last time. Must be a :class:`datetime.datetime` object. Treated as UTC if naive datetime. Required. :param author: the author of the entry. Must be either a string (the name) or a dict with name (required) and uri or email (both optional). Can be a list of (may be mixed, too) strings and dicts, too, if there are multiple authors. Required if the feed does not have an author element. :param published: the time the entry was initially published. Must be a :class:`datetime.datetime` object. Treated as UTC if naive datetime. :param rights: copyright information for the entry. :param rights_type: the type attribute for the rights element. One of ``'html'``, ``'text'`` or ``'xhtml'``. Default is ``'text'``. :param links: additional links. Must be a list of dictionaries with href (required) and rel, type, hreflang, title, length (all optional) :param categories: categories for the entry. Must be a list of dictionaries with term (required), scheme and label (all optional) :param xml_base: The xml base (url) for this feed item. If not provided it will default to the item url. For more information on the elements see http://www.atomenabled.org/developers/syndication/ Everywhere where a list is demanded, any iterable can be used. 
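    For illustration, a minimal valid entry (the URL and values are made up;
    ``datetime`` is assumed to be imported)::

        entry = FeedEntry('Hello World',
                          'Post body rendered as HTML.',
                          content_type='html',
                          url='http://example.com/posts/1',
                          updated=datetime.utcnow(),
                          author={'name': 'Jane'})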
""" def __init__(self, title=None, content=None, feed_url=None, **kwargs): self.title = title self.title_type = kwargs.get('title_type', 'text') self.content = content self.content_type = kwargs.get('content_type', 'html') self.url = kwargs.get('url') self.id = kwargs.get('id', self.url) self.updated = kwargs.get('updated') self.summary = kwargs.get('summary') self.summary_type = kwargs.get('summary_type', 'html') self.author = kwargs.get('author', ()) self.published = kwargs.get('published') self.rights = kwargs.get('rights') self.links = kwargs.get('links', []) self.categories = kwargs.get('categories', []) self.xml_base = kwargs.get('xml_base', feed_url) if not hasattr(self.author, '__iter__') \ or isinstance(self.author, string_types + (dict,)): self.author = [self.author] for i, author in enumerate(self.author): if not isinstance(author, dict): self.author[i] = {'name': author} if not self.title: raise ValueError('title is required') if not self.id: raise ValueError('id is required') if not self.updated: raise ValueError('updated is required') def __repr__(self): return '<%s %r>' % ( self.__class__.__name__, self.title ) def generate(self): """Yields pieces of ATOM XML.""" base = '' if self.xml_base: base = ' xml:base="%s"' % escape(self.xml_base) yield u'\n' % base yield u' ' + _make_text_block('title', self.title, self.title_type) yield u' %s\n' % escape(self.id) yield u' %s\n' % format_iso8601(self.updated) if self.published: yield u' %s\n' % \ format_iso8601(self.published) if self.url: yield u' \n' % escape(self.url) for author in self.author: yield u' \n' yield u' %s\n' % escape(author['name']) if 'uri' in author: yield u' %s\n' % escape(author['uri']) if 'email' in author: yield u' %s\n' % escape(author['email']) yield u' \n' for link in self.links: yield u' \n' % ''.join('%s="%s" ' % (k, escape(link[k])) for k in link) for category in self.categories: yield u' \n' % ''.join('%s="%s" ' % (k, escape(category[k])) for k in category) if self.summary: yield u' ' + _make_text_block('summary', self.summary, self.summary_type) if self.content: yield u' ' + _make_text_block('content', self.content, self.content_type) yield u'\n' def to_string(self): """Convert the feed item into a unicode object.""" return u''.join(self.generate()) def __str__(self): return self.to_string() werkzeug-0.14.1/werkzeug/contrib/cache.py000066400000000000000000000766431322225165500204030ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.contrib.cache ~~~~~~~~~~~~~~~~~~~~~~ The main problem with dynamic Web sites is, well, they're dynamic. Each time a user requests a page, the webserver executes a lot of code, queries the database, renders templates until the visitor gets the page he sees. This is a lot more expensive than just loading a file from the file system and sending it to the visitor. For most Web applications, this overhead isn't a big deal but once it becomes, you will be glad to have a cache system in place. How Caching Works ================= Caching is pretty simple. Basically you have a cache object lurking around somewhere that is connected to a remote cache or the file system or something else. When the request comes in you check if the current page is already in the cache and if so, you're returning it from the cache. Otherwise you generate the page and put it into the cache. 
(Or a fragment of the page, you don't have to cache the full thing) Here is a simple example of how to cache a sidebar for 5 minutes:: def get_sidebar(user): identifier = 'sidebar_for/user%d' % user.id value = cache.get(identifier) if value is not None: return value value = generate_sidebar_for(user=user) cache.set(identifier, value, timeout=60 * 5) return value Creating a Cache Object ======================= To create a cache object you just import the cache system of your choice from the cache module and instantiate it. Then you can start working with that object: >>> from werkzeug.contrib.cache import SimpleCache >>> c = SimpleCache() >>> c.set("foo", "value") >>> c.get("foo") 'value' >>> c.get("missing") is None True Please keep in mind that you have to create the cache and put it somewhere you have access to it (either as a module global you can import or you just put it into your WSGI application). :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import os import re import errno import tempfile import platform from hashlib import md5 from time import time try: import cPickle as pickle except ImportError: # pragma: no cover import pickle from werkzeug._compat import iteritems, string_types, text_type, \ integer_types, to_native from werkzeug.posixemulation import rename def _items(mappingorseq): """Wrapper for efficient iteration over mappings represented by dicts or sequences:: >>> for k, v in _items((i, i*i) for i in xrange(5)): ... assert k*k == v >>> for k, v in _items(dict((i, i*i) for i in xrange(5))): ... assert k*k == v """ if hasattr(mappingorseq, 'items'): return iteritems(mappingorseq) return mappingorseq class BaseCache(object): """Baseclass for the cache systems. All the cache systems implement this API or a superset of it. :param default_timeout: the default timeout (in seconds) that is used if no timeout is specified on :meth:`set`. A timeout of 0 indicates that the cache never expires. """ def __init__(self, default_timeout=300): self.default_timeout = default_timeout def _normalize_timeout(self, timeout): if timeout is None: timeout = self.default_timeout return timeout def get(self, key): """Look up key in the cache and return the value for it. :param key: the key to be looked up. :returns: The value if it exists and is readable, else ``None``. """ return None def delete(self, key): """Delete `key` from the cache. :param key: the key to delete. :returns: Whether the key existed and has been deleted. :rtype: boolean """ return True def get_many(self, *keys): """Returns a list of values for the given keys. For each key an item in the list is created:: foo, bar = cache.get_many("foo", "bar") Has the same error handling as :meth:`get`. :param keys: The function accepts multiple keys as positional arguments. """ return [self.get(k) for k in keys] def get_dict(self, *keys): """Like :meth:`get_many` but return a dict:: d = cache.get_dict("foo", "bar") foo = d["foo"] bar = d["bar"] :param keys: The function accepts multiple keys as positional arguments. """ return dict(zip(keys, self.get_many(*keys))) def set(self, key, value, timeout=None): """Add a new key/value to the cache (overwrites value, if key already exists in the cache). :param key: the key to set :param value: the value for the key :param timeout: the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 idicates that the cache never expires. 
:returns: ``True`` if key has been updated, ``False`` for backend errors. Pickling errors, however, will raise a subclass of ``pickle.PickleError``. :rtype: boolean """ return True def add(self, key, value, timeout=None): """Works like :meth:`set` but does not overwrite the values of already existing keys. :param key: the key to set :param value: the value for the key :param timeout: the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 idicates that the cache never expires. :returns: Same as :meth:`set`, but also ``False`` for already existing keys. :rtype: boolean """ return True def set_many(self, mapping, timeout=None): """Sets multiple keys and values from a mapping. :param mapping: a mapping with the keys/values to set. :param timeout: the cache timeout for the key in seconds (if not specified, it uses the default timeout). A timeout of 0 idicates that the cache never expires. :returns: Whether all given keys have been set. :rtype: boolean """ rv = True for key, value in _items(mapping): if not self.set(key, value, timeout): rv = False return rv def delete_many(self, *keys): """Deletes multiple keys at once. :param keys: The function accepts multiple keys as positional arguments. :returns: Whether all given keys have been deleted. :rtype: boolean """ return all(self.delete(key) for key in keys) def has(self, key): """Checks if a key exists in the cache without returning it. This is a cheap operation that bypasses loading the actual data on the backend. This method is optional and may not be implemented on all caches. :param key: the key to check """ raise NotImplementedError( '%s doesn\'t have an efficient implementation of `has`. That ' 'means it is impossible to check whether a key exists without ' 'fully loading the key\'s data. Consider using `self.get` ' 'explicitly if you don\'t care about performance.' ) def clear(self): """Clears the cache. Keep in mind that not all caches support completely clearing the cache. :returns: Whether the cache has been cleared. :rtype: boolean """ return True def inc(self, key, delta=1): """Increments the value of a key by `delta`. If the key does not yet exist it is initialized with `delta`. For supporting caches this is an atomic operation. :param key: the key to increment. :param delta: the delta to add. :returns: The new value or ``None`` for backend errors. """ value = (self.get(key) or 0) + delta return value if self.set(key, value) else None def dec(self, key, delta=1): """Decrements the value of a key by `delta`. If the key does not yet exist it is initialized with `-delta`. For supporting caches this is an atomic operation. :param key: the key to increment. :param delta: the delta to subtract. :returns: The new value or `None` for backend errors. """ value = (self.get(key) or 0) - delta return value if self.set(key, value) else None class NullCache(BaseCache): """A cache that doesn't cache. This can be useful for unit testing. :param default_timeout: a dummy parameter that is ignored but exists for API compatibility with other caches. """ def has(self, key): return False class SimpleCache(BaseCache): """Simple memory cache for single process environments. This class exists mainly for the development server and is not 100% thread safe. It tries to use as many atomic operations as possible and no locks for simplicity but it could happen under heavy load that keys are added multiple times. :param threshold: the maximum number of items the cache stores before it starts deleting some. 
:param default_timeout: the default timeout that is used if no timeout is specified on :meth:`~BaseCache.set`. A timeout of 0 indicates that the cache never expires. """ def __init__(self, threshold=500, default_timeout=300): BaseCache.__init__(self, default_timeout) self._cache = {} self.clear = self._cache.clear self._threshold = threshold def _prune(self): if len(self._cache) > self._threshold: now = time() toremove = [] for idx, (key, (expires, _)) in enumerate(self._cache.items()): if (expires != 0 and expires <= now) or idx % 3 == 0: toremove.append(key) for key in toremove: self._cache.pop(key, None) def _normalize_timeout(self, timeout): timeout = BaseCache._normalize_timeout(self, timeout) if timeout > 0: timeout = time() + timeout return timeout def get(self, key): try: expires, value = self._cache[key] if expires == 0 or expires > time(): return pickle.loads(value) except (KeyError, pickle.PickleError): return None def set(self, key, value, timeout=None): expires = self._normalize_timeout(timeout) self._prune() self._cache[key] = (expires, pickle.dumps(value, pickle.HIGHEST_PROTOCOL)) return True def add(self, key, value, timeout=None): expires = self._normalize_timeout(timeout) self._prune() item = (expires, pickle.dumps(value, pickle.HIGHEST_PROTOCOL)) if key in self._cache: return False self._cache.setdefault(key, item) return True def delete(self, key): return self._cache.pop(key, None) is not None def has(self, key): try: expires, value = self._cache[key] return expires == 0 or expires > time() except KeyError: return False _test_memcached_key = re.compile(r'[^\x00-\x21\xff]{1,250}$').match class MemcachedCache(BaseCache): """A cache that uses memcached as backend. The first argument can either be an object that resembles the API of a :class:`memcache.Client` or a tuple/list of server addresses. In the event that a tuple/list is passed, Werkzeug tries to import the best available memcache library. This cache looks into the following packages/modules to find bindings for memcached: - ``pylibmc`` - ``google.appengine.api.memcached`` - ``memcached`` - ``libmc`` Implementation notes: This cache backend works around some limitations in memcached to simplify the interface. For example unicode keys are encoded to utf-8 on the fly. Methods such as :meth:`~BaseCache.get_dict` return the keys in the same format as passed. Furthermore all get methods silently ignore key errors to not cause problems when untrusted user data is passed to the get methods which is often the case in web applications. :param servers: a list or tuple of server addresses or alternatively a :class:`memcache.Client` or a compatible client. :param default_timeout: the default timeout that is used if no timeout is specified on :meth:`~BaseCache.set`. A timeout of 0 indicates that the cache never expires. :param key_prefix: a prefix that is added before all keys. This makes it possible to use the same memcached server for different applications. Keep in mind that :meth:`~BaseCache.clear` will also clear keys with a different prefix. """ def __init__(self, servers=None, default_timeout=300, key_prefix=None): BaseCache.__init__(self, default_timeout) if servers is None or isinstance(servers, (list, tuple)): if servers is None: servers = ['127.0.0.1:11211'] self._client = self.import_preferred_memcache_lib(servers) if self._client is None: raise RuntimeError('no memcache module found') else: # NOTE: servers is actually an already initialized memcache # client. 
self._client = servers self.key_prefix = to_native(key_prefix) def _normalize_key(self, key): key = to_native(key, 'utf-8') if self.key_prefix: key = self.key_prefix + key return key def _normalize_timeout(self, timeout): timeout = BaseCache._normalize_timeout(self, timeout) if timeout > 0: timeout = int(time()) + timeout return timeout def get(self, key): key = self._normalize_key(key) # memcached doesn't support keys longer than that. Because often # checks for so long keys can occur because it's tested from user # submitted data etc we fail silently for getting. if _test_memcached_key(key): return self._client.get(key) def get_dict(self, *keys): key_mapping = {} have_encoded_keys = False for key in keys: encoded_key = self._normalize_key(key) if not isinstance(key, str): have_encoded_keys = True if _test_memcached_key(key): key_mapping[encoded_key] = key _keys = list(key_mapping) d = rv = self._client.get_multi(_keys) if have_encoded_keys or self.key_prefix: rv = {} for key, value in iteritems(d): rv[key_mapping[key]] = value if len(rv) < len(keys): for key in keys: if key not in rv: rv[key] = None return rv def add(self, key, value, timeout=None): key = self._normalize_key(key) timeout = self._normalize_timeout(timeout) return self._client.add(key, value, timeout) def set(self, key, value, timeout=None): key = self._normalize_key(key) timeout = self._normalize_timeout(timeout) return self._client.set(key, value, timeout) def get_many(self, *keys): d = self.get_dict(*keys) return [d[key] for key in keys] def set_many(self, mapping, timeout=None): new_mapping = {} for key, value in _items(mapping): key = self._normalize_key(key) new_mapping[key] = value timeout = self._normalize_timeout(timeout) failed_keys = self._client.set_multi(new_mapping, timeout) return not failed_keys def delete(self, key): key = self._normalize_key(key) if _test_memcached_key(key): return self._client.delete(key) def delete_many(self, *keys): new_keys = [] for key in keys: key = self._normalize_key(key) if _test_memcached_key(key): new_keys.append(key) return self._client.delete_multi(new_keys) def has(self, key): key = self._normalize_key(key) if _test_memcached_key(key): return self._client.append(key, '') return False def clear(self): return self._client.flush_all() def inc(self, key, delta=1): key = self._normalize_key(key) return self._client.incr(key, delta) def dec(self, key, delta=1): key = self._normalize_key(key) return self._client.decr(key, delta) def import_preferred_memcache_lib(self, servers): """Returns an initialized memcache client. Used by the constructor.""" try: import pylibmc except ImportError: pass else: return pylibmc.Client(servers) try: from google.appengine.api import memcache except ImportError: pass else: return memcache.Client() try: import memcache except ImportError: pass else: return memcache.Client(servers) try: import libmc except ImportError: pass else: return libmc.Client(servers) # backwards compatibility GAEMemcachedCache = MemcachedCache class RedisCache(BaseCache): """Uses the Redis key-value store as a cache backend. The first argument can be either a string denoting address of the Redis server or an object resembling an instance of a redis.Redis class. Note: Python Redis API already takes care of encoding unicode strings on the fly. .. versionadded:: 0.7 .. versionadded:: 0.8 `key_prefix` was added. .. versionchanged:: 0.8 This cache backend now properly serializes objects. .. versionchanged:: 0.8.3 This cache backend now supports password authentication. .. 
versionchanged:: 0.10 ``**kwargs`` is now passed to the redis object. :param host: address of the Redis server or an object which API is compatible with the official Python Redis client (redis-py). :param port: port number on which Redis server listens for connections. :param password: password authentication for the Redis server. :param db: db (zero-based numeric index) on Redis Server to connect. :param default_timeout: the default timeout that is used if no timeout is specified on :meth:`~BaseCache.set`. A timeout of 0 indicates that the cache never expires. :param key_prefix: A prefix that should be added to all keys. Any additional keyword arguments will be passed to ``redis.Redis``. """ def __init__(self, host='localhost', port=6379, password=None, db=0, default_timeout=300, key_prefix=None, **kwargs): BaseCache.__init__(self, default_timeout) if host is None: raise ValueError('RedisCache host parameter may not be None') if isinstance(host, string_types): try: import redis except ImportError: raise RuntimeError('no redis module found') if kwargs.get('decode_responses', None): raise ValueError('decode_responses is not supported by ' 'RedisCache.') self._client = redis.Redis(host=host, port=port, password=password, db=db, **kwargs) else: self._client = host self.key_prefix = key_prefix or '' def _normalize_timeout(self, timeout): timeout = BaseCache._normalize_timeout(self, timeout) if timeout == 0: timeout = -1 return timeout def dump_object(self, value): """Dumps an object into a string for redis. By default it serializes integers as regular string and pickle dumps everything else. """ t = type(value) if t in integer_types: return str(value).encode('ascii') return b'!' + pickle.dumps(value) def load_object(self, value): """The reversal of :meth:`dump_object`. This might be called with None. """ if value is None: return None if value.startswith(b'!'): try: return pickle.loads(value[1:]) except pickle.PickleError: return None try: return int(value) except ValueError: # before 0.8 we did not have serialization. Still support that. 
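# A short sketch of the serialization scheme implemented by dump_object()
# and load_object() above: integers are stored as plain ASCII digit strings
# (so Redis INCR/DECR keep working on them), everything else is pickled and
# tagged with a leading ``!`` so load_object() knows to unpickle it.
#
#     cache = RedisCache()             # assumes a Redis server on localhost
#     cache.dump_object(42)            # -> b'42'
#     cache.dump_object({'a': 1})      # -> b'!' + pickle.dumps({'a': 1})
#     cache.load_object(b'42')         # -> 42
#     cache.load_object(None)          # -> None (missing key)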
return value def get(self, key): return self.load_object(self._client.get(self.key_prefix + key)) def get_many(self, *keys): if self.key_prefix: keys = [self.key_prefix + key for key in keys] return [self.load_object(x) for x in self._client.mget(keys)] def set(self, key, value, timeout=None): timeout = self._normalize_timeout(timeout) dump = self.dump_object(value) if timeout == -1: result = self._client.set(name=self.key_prefix + key, value=dump) else: result = self._client.setex(name=self.key_prefix + key, value=dump, time=timeout) return result def add(self, key, value, timeout=None): timeout = self._normalize_timeout(timeout) dump = self.dump_object(value) return ( self._client.setnx(name=self.key_prefix + key, value=dump) and self._client.expire(name=self.key_prefix + key, time=timeout) ) def set_many(self, mapping, timeout=None): timeout = self._normalize_timeout(timeout) # Use transaction=False to batch without calling redis MULTI # which is not supported by twemproxy pipe = self._client.pipeline(transaction=False) for key, value in _items(mapping): dump = self.dump_object(value) if timeout == -1: pipe.set(name=self.key_prefix + key, value=dump) else: pipe.setex(name=self.key_prefix + key, value=dump, time=timeout) return pipe.execute() def delete(self, key): return self._client.delete(self.key_prefix + key) def delete_many(self, *keys): if not keys: return if self.key_prefix: keys = [self.key_prefix + key for key in keys] return self._client.delete(*keys) def has(self, key): return self._client.exists(self.key_prefix + key) def clear(self): status = False if self.key_prefix: keys = self._client.keys(self.key_prefix + '*') if keys: status = self._client.delete(*keys) else: status = self._client.flushdb() return status def inc(self, key, delta=1): return self._client.incr(name=self.key_prefix + key, amount=delta) def dec(self, key, delta=1): return self._client.decr(name=self.key_prefix + key, amount=delta) class FileSystemCache(BaseCache): """A cache that stores the items on the file system. This cache depends on being the only user of the `cache_dir`. Make absolutely sure that nobody but this cache stores files there or otherwise the cache will randomly delete files therein. :param cache_dir: the directory where cache files are stored. :param threshold: the maximum number of items the cache stores before it starts deleting some. A threshold value of 0 indicates no threshold. :param default_timeout: the default timeout that is used if no timeout is specified on :meth:`~BaseCache.set`. A timeout of 0 indicates that the cache never expires. 
:param mode: the file mode wanted for the cache files, default 0600 """ #: used for temporary files by the FileSystemCache _fs_transaction_suffix = '.__wz_cache' #: keep amount of files in a cache element _fs_count_file = '__wz_cache_count' def __init__(self, cache_dir, threshold=500, default_timeout=300, mode=0o600): BaseCache.__init__(self, default_timeout) self._path = cache_dir self._threshold = threshold self._mode = mode try: os.makedirs(self._path) except OSError as ex: if ex.errno != errno.EEXIST: raise self._update_count(value=len(self._list_dir())) @property def _file_count(self): return self.get(self._fs_count_file) or 0 def _update_count(self, delta=None, value=None): # If we have no threshold, don't count files if self._threshold == 0: return if delta: new_count = self._file_count + delta else: new_count = value or 0 self.set(self._fs_count_file, new_count, mgmt_element=True) def _normalize_timeout(self, timeout): timeout = BaseCache._normalize_timeout(self, timeout) if timeout != 0: timeout = time() + timeout return int(timeout) def _list_dir(self): """return a list of (fully qualified) cache filenames """ mgmt_files = [self._get_filename(name).split('/')[-1] for name in (self._fs_count_file,)] return [os.path.join(self._path, fn) for fn in os.listdir(self._path) if not fn.endswith(self._fs_transaction_suffix) and fn not in mgmt_files] def _prune(self): if self._threshold == 0 or not self._file_count > self._threshold: return entries = self._list_dir() now = time() for idx, fname in enumerate(entries): try: remove = False with open(fname, 'rb') as f: expires = pickle.load(f) remove = (expires != 0 and expires <= now) or idx % 3 == 0 if remove: os.remove(fname) except (IOError, OSError): pass self._update_count(value=len(self._list_dir())) def clear(self): for fname in self._list_dir(): try: os.remove(fname) except (IOError, OSError): self._update_count(value=len(self._list_dir())) return False self._update_count(value=0) return True def _get_filename(self, key): if isinstance(key, text_type): key = key.encode('utf-8') # XXX unicode review hash = md5(key).hexdigest() return os.path.join(self._path, hash) def get(self, key): filename = self._get_filename(key) try: with open(filename, 'rb') as f: pickle_time = pickle.load(f) if pickle_time == 0 or pickle_time >= time(): return pickle.load(f) else: os.remove(filename) return None except (IOError, OSError, pickle.PickleError): return None def add(self, key, value, timeout=None): filename = self._get_filename(key) if not os.path.exists(filename): return self.set(key, value, timeout) return False def set(self, key, value, timeout=None, mgmt_element=False): # Management elements have no timeout if mgmt_element: timeout = 0 # Don't prune on management element update, to avoid loop else: self._prune() timeout = self._normalize_timeout(timeout) filename = self._get_filename(key) try: fd, tmp = tempfile.mkstemp(suffix=self._fs_transaction_suffix, dir=self._path) with os.fdopen(fd, 'wb') as f: pickle.dump(timeout, f, 1) pickle.dump(value, f, pickle.HIGHEST_PROTOCOL) rename(tmp, filename) os.chmod(filename, self._mode) except (IOError, OSError): return False else: # Management elements should not count towards threshold if not mgmt_element: self._update_count(delta=1) return True def delete(self, key, mgmt_element=False): try: os.remove(self._get_filename(key)) except (IOError, OSError): return False else: # Management elements should not count towards threshold if not mgmt_element: self._update_count(delta=-1) return True def has(self, 
key): filename = self._get_filename(key) try: with open(filename, 'rb') as f: pickle_time = pickle.load(f) if pickle_time == 0 or pickle_time >= time(): return True else: os.remove(filename) return False except (IOError, OSError, pickle.PickleError): return False class UWSGICache(BaseCache): """ Implements the cache using uWSGI's caching framework. .. note:: This class cannot be used when running under PyPy, because the uWSGI API implementation for PyPy is lacking the needed functionality. :param default_timeout: The default timeout in seconds. :param cache: The name of the caching instance to connect to, for example: mycache@localhost:3031, defaults to an empty string, which means uWSGI will cache in the local instance. If the cache is in the same instance as the werkzeug app, you only have to provide the name of the cache. """ def __init__(self, default_timeout=300, cache=''): BaseCache.__init__(self, default_timeout) if platform.python_implementation() == 'PyPy': raise RuntimeError("uWSGI caching does not work under PyPy, see " "the docs for more details.") try: import uwsgi self._uwsgi = uwsgi except ImportError: raise RuntimeError("uWSGI could not be imported, are you " "running under uWSGI?") self.cache = cache def get(self, key): rv = self._uwsgi.cache_get(key, self.cache) if rv is None: return return pickle.loads(rv) def delete(self, key): return self._uwsgi.cache_del(key, self.cache) def set(self, key, value, timeout=None): return self._uwsgi.cache_update(key, pickle.dumps(value), self._normalize_timeout(timeout), self.cache) def add(self, key, value, timeout=None): return self._uwsgi.cache_set(key, pickle.dumps(value), self._normalize_timeout(timeout), self.cache) def clear(self): return self._uwsgi.cache_clear(self.cache) def has(self, key): return self._uwsgi.cache_exists(key, self.cache) is not None werkzeug-0.14.1/werkzeug/contrib/fixers.py000066400000000000000000000237031322225165500206250ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.contrib.fixers ~~~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: 0.5 This module includes various helpers that fix bugs in web servers. They may be necessary for some versions of a buggy web server but not others. We try to stay updated with the status of the bugs as good as possible but you have to make sure whether they fix the problem you encounter. If you notice bugs in webservers not fixed in this module consider contributing a patch. :copyright: Copyright 2009 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ try: from urllib import unquote except ImportError: from urllib.parse import unquote from werkzeug.http import parse_options_header, parse_cache_control_header, \ parse_set_header from werkzeug.useragents import UserAgent from werkzeug.datastructures import Headers, ResponseCacheControl class CGIRootFix(object): """Wrap the application in this middleware if you are using FastCGI or CGI and you have problems with your app root being set to the cgi script's path instead of the path users are going to visit .. versionchanged:: 0.9 Added `app_root` parameter and renamed from `LighttpdCGIRootFix`. :param app: the WSGI application :param app_root: Defaulting to ``'/'``, you can set this to something else if your app is mounted somewhere else. """ def __init__(self, app, app_root='/'): self.app = app self.app_root = app_root def __call__(self, environ, start_response): # only set PATH_INFO for older versions of Lighty or if no # server software is provided. 
That's because the test was # added in newer Werkzeug versions and we don't want to break # people's code if they are using this fixer in a test that # does not set the SERVER_SOFTWARE key. if 'SERVER_SOFTWARE' not in environ or \ environ['SERVER_SOFTWARE'] < 'lighttpd/1.4.28': environ['PATH_INFO'] = environ.get('SCRIPT_NAME', '') + \ environ.get('PATH_INFO', '') environ['SCRIPT_NAME'] = self.app_root.strip('/') return self.app(environ, start_response) # backwards compatibility LighttpdCGIRootFix = CGIRootFix class PathInfoFromRequestUriFix(object): """On windows environment variables are limited to the system charset which makes it impossible to store the `PATH_INFO` variable in the environment without loss of information on some systems. This is for example a problem for CGI scripts on a Windows Apache. This fixer works by recreating the `PATH_INFO` from `REQUEST_URI`, `REQUEST_URL`, or `UNENCODED_URL` (whatever is available). Thus the fix can only be applied if the webserver supports either of these variables. :param app: the WSGI application """ def __init__(self, app): self.app = app def __call__(self, environ, start_response): for key in 'REQUEST_URL', 'REQUEST_URI', 'UNENCODED_URL': if key not in environ: continue request_uri = unquote(environ[key]) script_name = unquote(environ.get('SCRIPT_NAME', '')) if request_uri.startswith(script_name): environ['PATH_INFO'] = request_uri[len(script_name):] \ .split('?', 1)[0] break return self.app(environ, start_response) class ProxyFix(object): """This middleware can be applied to add HTTP proxy support to an application that was not designed with HTTP proxies in mind. It sets `REMOTE_ADDR`, `HTTP_HOST` from `X-Forwarded` headers. While Werkzeug-based applications already can use :py:func:`werkzeug.wsgi.get_host` to retrieve the current host even if behind proxy setups, this middleware can be used for applications which access the WSGI environment directly. If you have more than one proxy server in front of your app, set `num_proxies` accordingly. Do not use this middleware in non-proxy setups for security reasons. The original values of `REMOTE_ADDR` and `HTTP_HOST` are stored in the WSGI environment as `werkzeug.proxy_fix.orig_remote_addr` and `werkzeug.proxy_fix.orig_http_host`. :param app: the WSGI application :param num_proxies: the number of proxy servers in front of the app. """ def __init__(self, app, num_proxies=1): self.app = app self.num_proxies = num_proxies def get_remote_addr(self, forwarded_for): """Selects the new remote addr from the given list of ips in X-Forwarded-For. By default it picks the one that the `num_proxies` proxy server provides. Before 0.9 it would always pick the first. .. 
versionadded:: 0.8 """ if len(forwarded_for) >= self.num_proxies: return forwarded_for[-self.num_proxies] def __call__(self, environ, start_response): getter = environ.get forwarded_proto = getter('HTTP_X_FORWARDED_PROTO', '') forwarded_for = getter('HTTP_X_FORWARDED_FOR', '').split(',') forwarded_host = getter('HTTP_X_FORWARDED_HOST', '') environ.update({ 'werkzeug.proxy_fix.orig_wsgi_url_scheme': getter('wsgi.url_scheme'), 'werkzeug.proxy_fix.orig_remote_addr': getter('REMOTE_ADDR'), 'werkzeug.proxy_fix.orig_http_host': getter('HTTP_HOST') }) forwarded_for = [x for x in [x.strip() for x in forwarded_for] if x] remote_addr = self.get_remote_addr(forwarded_for) if remote_addr is not None: environ['REMOTE_ADDR'] = remote_addr if forwarded_host: environ['HTTP_HOST'] = forwarded_host if forwarded_proto: environ['wsgi.url_scheme'] = forwarded_proto return self.app(environ, start_response) class HeaderRewriterFix(object): """This middleware can remove response headers and add others. This is for example useful to remove the `Date` header from responses if you are using a server that adds that header, no matter if it's present or not or to add `X-Powered-By` headers:: app = HeaderRewriterFix(app, remove_headers=['Date'], add_headers=[('X-Powered-By', 'WSGI')]) :param app: the WSGI application :param remove_headers: a sequence of header keys that should be removed. :param add_headers: a sequence of ``(key, value)`` tuples that should be added. """ def __init__(self, app, remove_headers=None, add_headers=None): self.app = app self.remove_headers = set(x.lower() for x in (remove_headers or ())) self.add_headers = list(add_headers or ()) def __call__(self, environ, start_response): def rewriting_start_response(status, headers, exc_info=None): new_headers = [] for key, value in headers: if key.lower() not in self.remove_headers: new_headers.append((key, value)) new_headers += self.add_headers return start_response(status, new_headers, exc_info) return self.app(environ, rewriting_start_response) class InternetExplorerFix(object): """This middleware fixes a couple of bugs with Microsoft Internet Explorer. Currently the following fixes are applied: - removing of `Vary` headers for unsupported mimetypes which causes troubles with caching. Can be disabled by passing ``fix_vary=False`` to the constructor. see: http://support.microsoft.com/kb/824847/en-us - removes offending headers to work around caching bugs in Internet Explorer if `Content-Disposition` is set. Can be disabled by passing ``fix_attach=False`` to the constructor. If it does not detect affected Internet Explorer versions it won't touch the request / response. """ # This code was inspired by Django fixers for the same bugs. 
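# A minimal sketch of wiring up the ProxyFix middleware defined above.
# ``application`` is a placeholder for your own WSGI callable, not part of
# this module; ``num_proxies`` must match the number of trusted proxies in
# front of the app so REMOTE_ADDR is taken from the right X-Forwarded-For
# entry.
from werkzeug.contrib.fixers import ProxyFix

def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [environ['REMOTE_ADDR'].encode('utf-8')]

# Behind two proxies (e.g. a load balancer plus nginx) the client address is
# the second entry from the right in X-Forwarded-For.
application = ProxyFix(application, num_proxies=2)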
The # fix_vary and fix_attach fixers were originally implemented in Django # by Michael Axiak and is available as part of the Django project: # http://code.djangoproject.com/ticket/4148 def __init__(self, app, fix_vary=True, fix_attach=True): self.app = app self.fix_vary = fix_vary self.fix_attach = fix_attach def fix_headers(self, environ, headers, status=None): if self.fix_vary: header = headers.get('content-type', '') mimetype, options = parse_options_header(header) if mimetype not in ('text/html', 'text/plain', 'text/sgml'): headers.pop('vary', None) if self.fix_attach and 'content-disposition' in headers: pragma = parse_set_header(headers.get('pragma', '')) pragma.discard('no-cache') header = pragma.to_header() if not header: headers.pop('pragma', '') else: headers['Pragma'] = header header = headers.get('cache-control', '') if header: cc = parse_cache_control_header(header, cls=ResponseCacheControl) cc.no_cache = None cc.no_store = False header = cc.to_header() if not header: headers.pop('cache-control', '') else: headers['Cache-Control'] = header def run_fixed(self, environ, start_response): def fixing_start_response(status, headers, exc_info=None): headers = Headers(headers) self.fix_headers(environ, headers, status) return start_response(status, headers.to_wsgi_list(), exc_info) return self.app(environ, fixing_start_response) def __call__(self, environ, start_response): ua = UserAgent(environ) if ua.browser != 'msie': return self.app(environ, start_response) return self.run_fixed(environ, start_response) werkzeug-0.14.1/werkzeug/contrib/iterio.py000066400000000000000000000250761322225165500206250ustar00rootroot00000000000000# -*- coding: utf-8 -*- r""" werkzeug.contrib.iterio ~~~~~~~~~~~~~~~~~~~~~~~ This module implements a :class:`IterIO` that converts an iterator into a stream object and the other way round. Converting streams into iterators requires the `greenlet`_ module. To convert an iterator into a stream all you have to do is to pass it directly to the :class:`IterIO` constructor. In this example we pass it a newly created generator:: def foo(): yield "something\n" yield "otherthings" stream = IterIO(foo()) print stream.read() # read the whole iterator The other way round works a bit different because we have to ensure that the code execution doesn't take place yet. An :class:`IterIO` call with a callable as first argument does two things. The function itself is passed an :class:`IterIO` stream it can feed. The object returned by the :class:`IterIO` constructor on the other hand is not an stream object but an iterator:: def foo(stream): stream.write("some") stream.write("thing") stream.flush() stream.write("otherthing") iterator = IterIO(foo) print iterator.next() # prints something print iterator.next() # prints otherthing iterator.next() # raises StopIteration .. _greenlet: https://github.com/python-greenlet/greenlet :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
""" try: import greenlet except ImportError: greenlet = None from werkzeug._compat import implements_iterator def _mixed_join(iterable, sentinel): """concatenate any string type in an intelligent way.""" iterator = iter(iterable) first_item = next(iterator, sentinel) if isinstance(first_item, bytes): return first_item + b''.join(iterator) return first_item + u''.join(iterator) def _newline(reference_string): if isinstance(reference_string, bytes): return b'\n' return u'\n' @implements_iterator class IterIO(object): """Instances of this object implement an interface compatible with the standard Python :class:`file` object. Streams are either read-only or write-only depending on how the object is created. If the first argument is an iterable a file like object is returned that returns the contents of the iterable. In case the iterable is empty read operations will return the sentinel value. If the first argument is a callable then the stream object will be created and passed to that function. The caller itself however will not receive a stream but an iterable. The function will be be executed step by step as something iterates over the returned iterable. Each call to :meth:`flush` will create an item for the iterable. If :meth:`flush` is called without any writes in-between the sentinel value will be yielded. Note for Python 3: due to the incompatible interface of bytes and streams you should set the sentinel value explicitly to an empty bytestring (``b''``) if you are expecting to deal with bytes as otherwise the end of the stream is marked with the wrong sentinel value. .. versionadded:: 0.9 `sentinel` parameter was added. """ def __new__(cls, obj, sentinel=''): try: iterator = iter(obj) except TypeError: return IterI(obj, sentinel) return IterO(iterator, sentinel) def __iter__(self): return self def tell(self): if self.closed: raise ValueError('I/O operation on closed file') return self.pos def isatty(self): if self.closed: raise ValueError('I/O operation on closed file') return False def seek(self, pos, mode=0): if self.closed: raise ValueError('I/O operation on closed file') raise IOError(9, 'Bad file descriptor') def truncate(self, size=None): if self.closed: raise ValueError('I/O operation on closed file') raise IOError(9, 'Bad file descriptor') def write(self, s): if self.closed: raise ValueError('I/O operation on closed file') raise IOError(9, 'Bad file descriptor') def writelines(self, list): if self.closed: raise ValueError('I/O operation on closed file') raise IOError(9, 'Bad file descriptor') def read(self, n=-1): if self.closed: raise ValueError('I/O operation on closed file') raise IOError(9, 'Bad file descriptor') def readlines(self, sizehint=0): if self.closed: raise ValueError('I/O operation on closed file') raise IOError(9, 'Bad file descriptor') def readline(self, length=None): if self.closed: raise ValueError('I/O operation on closed file') raise IOError(9, 'Bad file descriptor') def flush(self): if self.closed: raise ValueError('I/O operation on closed file') raise IOError(9, 'Bad file descriptor') def __next__(self): if self.closed: raise StopIteration() line = self.readline() if not line: raise StopIteration() return line class IterI(IterIO): """Convert an stream into an iterator.""" def __new__(cls, func, sentinel=''): if greenlet is None: raise RuntimeError('IterI requires greenlet support') stream = object.__new__(cls) stream._parent = greenlet.getcurrent() stream._buffer = [] stream.closed = False stream.sentinel = sentinel stream.pos = 0 def run(): func(stream) 
stream.close() g = greenlet.greenlet(run, stream._parent) while 1: rv = g.switch() if not rv: return yield rv[0] def close(self): if not self.closed: self.closed = True self._flush_impl() def write(self, s): if self.closed: raise ValueError('I/O operation on closed file') if s: self.pos += len(s) self._buffer.append(s) def writelines(self, list): for item in list: self.write(item) def flush(self): if self.closed: raise ValueError('I/O operation on closed file') self._flush_impl() def _flush_impl(self): data = _mixed_join(self._buffer, self.sentinel) self._buffer = [] if not data and self.closed: self._parent.switch() else: self._parent.switch((data,)) class IterO(IterIO): """Iter output. Wrap an iterator and give it a stream like interface.""" def __new__(cls, gen, sentinel=''): self = object.__new__(cls) self._gen = gen self._buf = None self.sentinel = sentinel self.closed = False self.pos = 0 return self def __iter__(self): return self def _buf_append(self, string): '''Replace string directly without appending to an empty string, avoiding type issues.''' if not self._buf: self._buf = string else: self._buf += string def close(self): if not self.closed: self.closed = True if hasattr(self._gen, 'close'): self._gen.close() def seek(self, pos, mode=0): if self.closed: raise ValueError('I/O operation on closed file') if mode == 1: pos += self.pos elif mode == 2: self.read() self.pos = min(self.pos, self.pos + pos) return elif mode != 0: raise IOError('Invalid argument') buf = [] try: tmp_end_pos = len(self._buf) while pos > tmp_end_pos: item = next(self._gen) tmp_end_pos += len(item) buf.append(item) except StopIteration: pass if buf: self._buf_append(_mixed_join(buf, self.sentinel)) self.pos = max(0, pos) def read(self, n=-1): if self.closed: raise ValueError('I/O operation on closed file') if n < 0: self._buf_append(_mixed_join(self._gen, self.sentinel)) result = self._buf[self.pos:] self.pos += len(result) return result new_pos = self.pos + n buf = [] try: tmp_end_pos = 0 if self._buf is None else len(self._buf) while new_pos > tmp_end_pos or (self._buf is None and not buf): item = next(self._gen) tmp_end_pos += len(item) buf.append(item) except StopIteration: pass if buf: self._buf_append(_mixed_join(buf, self.sentinel)) if self._buf is None: return self.sentinel new_pos = max(0, new_pos) try: return self._buf[self.pos:new_pos] finally: self.pos = min(new_pos, len(self._buf)) def readline(self, length=None): if self.closed: raise ValueError('I/O operation on closed file') nl_pos = -1 if self._buf: nl_pos = self._buf.find(_newline(self._buf), self.pos) buf = [] try: if self._buf is None: pos = self.pos else: pos = len(self._buf) while nl_pos < 0: item = next(self._gen) local_pos = item.find(_newline(item)) buf.append(item) if local_pos >= 0: nl_pos = pos + local_pos break pos += len(item) except StopIteration: pass if buf: self._buf_append(_mixed_join(buf, self.sentinel)) if self._buf is None: return self.sentinel if nl_pos < 0: new_pos = len(self._buf) else: new_pos = nl_pos + 1 if length is not None and self.pos + length < new_pos: new_pos = self.pos + length try: return self._buf[self.pos:new_pos] finally: self.pos = min(new_pos, len(self._buf)) def readlines(self, sizehint=0): total = 0 lines = [] line = self.readline() while line: lines.append(line) total += len(line) if 0 < sizehint <= total: break line = self.readline() return lines werkzeug-0.14.1/werkzeug/contrib/jsrouting.py000066400000000000000000000205641322225165500213530ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ 
werkzeug.contrib.jsrouting ~~~~~~~~~~~~~~~~~~~~~~~~~~ Addon module that allows to create a JavaScript function from a map that generates rules. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ try: from simplejson import dumps except ImportError: try: from json import dumps except ImportError: def dumps(*args): raise RuntimeError('simplejson required for jsrouting') from inspect import getmro from werkzeug.routing import NumberConverter from werkzeug._compat import iteritems def render_template(name_parts, rules, converters): result = u'' if name_parts: for idx in range(0, len(name_parts) - 1): name = u'.'.join(name_parts[:idx + 1]) result += u"if (typeof %s === 'undefined') %s = {}\n" % (name, name) result += '%s = ' % '.'.join(name_parts) result += """(function (server_name, script_name, subdomain, url_scheme) { var converters = [%(converters)s]; var rules = %(rules)s; function in_array(array, value) { if (array.indexOf != undefined) { return array.indexOf(value) != -1; } for (var i = 0; i < array.length; i++) { if (array[i] == value) { return true; } } return false; } function array_diff(array1, array2) { array1 = array1.slice(); for (var i = array1.length-1; i >= 0; i--) { if (in_array(array2, array1[i])) { array1.splice(i, 1); } } return array1; } function split_obj(obj) { var names = []; var values = []; for (var name in obj) { if (typeof(obj[name]) != 'function') { names.push(name); values.push(obj[name]); } } return {names: names, values: values, original: obj}; } function suitable(rule, args) { var default_args = split_obj(rule.defaults || {}); var diff_arg_names = array_diff(rule.arguments, default_args.names); for (var i = 0; i < diff_arg_names.length; i++) { if (!in_array(args.names, diff_arg_names[i])) { return false; } } if (array_diff(rule.arguments, args.names).length == 0) { if (rule.defaults == null) { return true; } for (var i = 0; i < default_args.names.length; i++) { var key = default_args.names[i]; var value = default_args.values[i]; if (value != args.original[key]) { return false; } } } return true; } function build(rule, args) { var tmp = []; var processed = rule.arguments.slice(); for (var i = 0; i < rule.trace.length; i++) { var part = rule.trace[i]; if (part.is_dynamic) { var converter = converters[rule.converters[part.data]]; var data = converter(args.original[part.data]); if (data == null) { return null; } tmp.push(data); processed.push(part.name); } else { tmp.push(part.data); } } tmp = tmp.join(''); var pipe = tmp.indexOf('|'); var subdomain = tmp.substring(0, pipe); var url = tmp.substring(pipe+1); var unprocessed = array_diff(args.names, processed); var first_query_var = true; for (var i = 0; i < unprocessed.length; i++) { if (first_query_var) { url += '?'; } else { url += '&'; } first_query_var = false; url += encodeURIComponent(unprocessed[i]); url += '='; url += encodeURIComponent(args.original[unprocessed[i]]); } return {subdomain: subdomain, path: url}; } function lstrip(s, c) { while (s && s.substring(0, 1) == c) { s = s.substring(1); } return s; } function rstrip(s, c) { while (s && s.substring(s.length-1, s.length) == c) { s = s.substring(0, s.length-1); } return s; } return function(endpoint, args, force_external) { args = split_obj(args); var rv = null; for (var i = 0; i < rules.length; i++) { var rule = rules[i]; if (rule.endpoint != endpoint) continue; if (suitable(rule, args)) { rv = build(rule, args); if (rv != null) { break; } } } if (rv == null) { return null; } if 
(!force_external && rv.subdomain == subdomain) { return rstrip(script_name, '/') + '/' + lstrip(rv.path, '/'); } else { return url_scheme + '://' + (rv.subdomain ? rv.subdomain + '.' : '') + server_name + rstrip(script_name, '/') + '/' + lstrip(rv.path, '/'); } }; })""" % {'converters': u', '.join(converters), 'rules': rules} return result def generate_map(map, name='url_map'): """ Generates a JavaScript function containing the rules defined in this map, to be used with a MapAdapter's generate_javascript method. If you don't pass a name the returned JavaScript code is an expression that returns a function. Otherwise it's a standalone script that assigns the function with that name. Dotted names are resolved (so you an use a name like 'obj.url_for') In order to use JavaScript generation, simplejson must be installed. Note that using this feature will expose the rules defined in your map to users. If your rules contain sensitive information, don't use JavaScript generation! """ from warnings import warn warn(DeprecationWarning('This module is deprecated')) map.update() rules = [] converters = [] for rule in map.iter_rules(): trace = [{ 'is_dynamic': is_dynamic, 'data': data } for is_dynamic, data in rule._trace] rule_converters = {} for key, converter in iteritems(rule._converters): js_func = js_to_url_function(converter) try: index = converters.index(js_func) except ValueError: converters.append(js_func) index = len(converters) - 1 rule_converters[key] = index rules.append({ u'endpoint': rule.endpoint, u'arguments': list(rule.arguments), u'converters': rule_converters, u'trace': trace, u'defaults': rule.defaults }) return render_template(name_parts=name and name.split('.') or [], rules=dumps(rules), converters=converters) def generate_adapter(adapter, name='url_for', map_name='url_map'): """Generates the url building function for a map.""" values = { u'server_name': dumps(adapter.server_name), u'script_name': dumps(adapter.script_name), u'subdomain': dumps(adapter.subdomain), u'url_scheme': dumps(adapter.url_scheme), u'name': name, u'map_name': map_name } return u'''\ var %(name)s = %(map_name)s( %(server_name)s, %(script_name)s, %(subdomain)s, %(url_scheme)s );''' % values def js_to_url_function(converter): """Get the JavaScript converter function from a rule.""" if hasattr(converter, 'js_to_url_function'): data = converter.js_to_url_function() else: for cls in getmro(type(converter)): if cls in js_to_url_functions: data = js_to_url_functions[cls](converter) break else: return 'encodeURIComponent' return '(function(value) { %s })' % data def NumberConverter_js_to_url(conv): if conv.fixed_digits: return u'''\ var result = value.toString(); while (result.length < %s) result = '0' + result; return result;''' % conv.fixed_digits return u'return value.toString();' js_to_url_functions = { NumberConverter: NumberConverter_js_to_url } werkzeug-0.14.1/werkzeug/contrib/limiter.py000066400000000000000000000024661322225165500207750ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.contrib.limiter ~~~~~~~~~~~~~~~~~~~~~~~~ A middleware that limits incoming data. This works around problems with Trac_ or Django_ because those directly stream into the memory. .. _Trac: http://trac.edgewall.org/ .. _Django: http://www.djangoproject.com/ :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
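A minimal usage sketch (``application`` stands in for your own WSGI
callable)::

    from werkzeug.contrib.limiter import StreamLimitMiddleware

    # cap request bodies at 1 MB instead of the 10 MB default
    application = StreamLimitMiddleware(application, maximum_size=1024 * 1024)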
""" from warnings import warn from werkzeug.wsgi import LimitedStream class StreamLimitMiddleware(object): """Limits the input stream to a given number of bytes. This is useful if you have a WSGI application that reads form data into memory (django for example) and you don't want users to harm the server by uploading tons of data. Default is 10MB .. versionchanged:: 0.9 Deprecated middleware. """ def __init__(self, app, maximum_size=1024 * 1024 * 10): warn(DeprecationWarning('This middleware is deprecated')) self.app = app self.maximum_size = maximum_size def __call__(self, environ, start_response): limit = min(self.maximum_size, int(environ.get('CONTENT_LENGTH') or 0)) environ['wsgi.input'] = LimitedStream(environ['wsgi.input'], limit) return self.app(environ, start_response) werkzeug-0.14.1/werkzeug/contrib/lint.py000066400000000000000000000304161322225165500202720ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.contrib.lint ~~~~~~~~~~~~~~~~~~~~~ .. versionadded:: 0.5 This module provides a middleware that performs sanity checks of the WSGI application. It checks that :pep:`333` is properly implemented and warns on some common HTTP errors such as non-empty responses for 304 status codes. This module provides a middleware, the :class:`LintMiddleware`. Wrap your application with it and it will warn about common problems with WSGI and HTTP while your application is running. It's strongly recommended to use it during development. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ try: from urllib.parse import urlparse except ImportError: from urlparse import urlparse from warnings import warn from werkzeug.datastructures import Headers from werkzeug.http import is_entity_header from werkzeug.wsgi import FileWrapper from werkzeug._compat import string_types class WSGIWarning(Warning): """Warning class for WSGI warnings.""" class HTTPWarning(Warning): """Warning class for HTTP warnings.""" def check_string(context, obj, stacklevel=3): if type(obj) is not str: warn(WSGIWarning('%s requires bytestrings, got %s' % (context, obj.__class__.__name__))) class InputStream(object): def __init__(self, stream): self._stream = stream def read(self, *args): if len(args) == 0: warn(WSGIWarning('wsgi does not guarantee an EOF marker on the ' 'input stream, thus making calls to ' 'wsgi.input.read() unsafe. Conforming servers ' 'may never return from this call.'), stacklevel=2) elif len(args) != 1: warn(WSGIWarning('too many parameters passed to wsgi.input.read()'), stacklevel=2) return self._stream.read(*args) def readline(self, *args): if len(args) == 0: warn(WSGIWarning('Calls to wsgi.input.readline() without arguments' ' are unsafe. Use wsgi.input.read() instead.'), stacklevel=2) elif len(args) == 1: warn(WSGIWarning('wsgi.input.readline() was called with a size hint. 
' 'WSGI does not support this, although it\'s available ' 'on all major servers.'), stacklevel=2) else: raise TypeError('too many arguments passed to wsgi.input.readline()') return self._stream.readline(*args) def __iter__(self): try: return iter(self._stream) except TypeError: warn(WSGIWarning('wsgi.input is not iterable.'), stacklevel=2) return iter(()) def close(self): warn(WSGIWarning('application closed the input stream!'), stacklevel=2) self._stream.close() class ErrorStream(object): def __init__(self, stream): self._stream = stream def write(self, s): check_string('wsgi.error.write()', s) self._stream.write(s) def flush(self): self._stream.flush() def writelines(self, seq): for line in seq: self.write(seq) def close(self): warn(WSGIWarning('application closed the error stream!'), stacklevel=2) self._stream.close() class GuardedWrite(object): def __init__(self, write, chunks): self._write = write self._chunks = chunks def __call__(self, s): check_string('write()', s) self._write.write(s) self._chunks.append(len(s)) class GuardedIterator(object): def __init__(self, iterator, headers_set, chunks): self._iterator = iterator self._next = iter(iterator).next self.closed = False self.headers_set = headers_set self.chunks = chunks def __iter__(self): return self def next(self): if self.closed: warn(WSGIWarning('iterated over closed app_iter'), stacklevel=2) rv = self._next() if not self.headers_set: warn(WSGIWarning('Application returned before it ' 'started the response'), stacklevel=2) check_string('application iterator items', rv) self.chunks.append(len(rv)) return rv def close(self): self.closed = True if hasattr(self._iterator, 'close'): self._iterator.close() if self.headers_set: status_code, headers = self.headers_set bytes_sent = sum(self.chunks) content_length = headers.get('content-length', type=int) if status_code == 304: for key, value in headers: key = key.lower() if key not in ('expires', 'content-location') and \ is_entity_header(key): warn(HTTPWarning('entity header %r found in 304 ' 'response' % key)) if bytes_sent: warn(HTTPWarning('304 responses must not have a body')) elif 100 <= status_code < 200 or status_code == 204: if content_length != 0: warn(HTTPWarning('%r responses must have an empty ' 'content length' % status_code)) if bytes_sent: warn(HTTPWarning('%r responses must not have a body' % status_code)) elif content_length is not None and content_length != bytes_sent: warn(WSGIWarning('Content-Length and the number of bytes ' 'sent to the client do not match.')) def __del__(self): if not self.closed: try: warn(WSGIWarning('Iterator was garbage collected before ' 'it was closed.')) except Exception: pass class LintMiddleware(object): """This middleware wraps an application and warns on common errors. Among other thing it currently checks for the following problems: - invalid status codes - non-bytestrings sent to the WSGI server - strings returned from the WSGI application - non-empty conditional responses - unquoted etags - relative URLs in the Location header - unsafe calls to wsgi.input - unclosed iterators Detected errors are emitted using the standard Python :mod:`warnings` system and usually end up on :data:`stderr`. 
:: from werkzeug.contrib.lint import LintMiddleware app = LintMiddleware(app) :param app: the application to wrap """ def __init__(self, app): self.app = app def check_environ(self, environ): if type(environ) is not dict: warn(WSGIWarning('WSGI environment is not a standard python dict.'), stacklevel=4) for key in ('REQUEST_METHOD', 'SERVER_NAME', 'SERVER_PORT', 'wsgi.version', 'wsgi.input', 'wsgi.errors', 'wsgi.multithread', 'wsgi.multiprocess', 'wsgi.run_once'): if key not in environ: warn(WSGIWarning('required environment key %r not found' % key), stacklevel=3) if environ['wsgi.version'] != (1, 0): warn(WSGIWarning('environ is not a WSGI 1.0 environ'), stacklevel=3) script_name = environ.get('SCRIPT_NAME', '') if script_name and script_name[:1] != '/': warn(WSGIWarning('SCRIPT_NAME does not start with a slash: %r' % script_name), stacklevel=3) path_info = environ.get('PATH_INFO', '') if path_info[:1] != '/': warn(WSGIWarning('PATH_INFO does not start with a slash: %r' % path_info), stacklevel=3) def check_start_response(self, status, headers, exc_info): check_string('status', status) status_code = status.split(None, 1)[0] if len(status_code) != 3 or not status_code.isdigit(): warn(WSGIWarning('Status code must be three digits'), stacklevel=3) if len(status) < 4 or status[3] != ' ': warn(WSGIWarning('Invalid value for status %r. Valid ' 'status strings are three digits, a space ' 'and a status explanation'), stacklevel=3) status_code = int(status_code) if status_code < 100: warn(WSGIWarning('status code < 100 detected'), stacklevel=3) if type(headers) is not list: warn(WSGIWarning('header list is not a list'), stacklevel=3) for item in headers: if type(item) is not tuple or len(item) != 2: warn(WSGIWarning('Headers must tuple 2-item tuples'), stacklevel=3) name, value = item if type(name) is not str or type(value) is not str: warn(WSGIWarning('header items must be strings'), stacklevel=3) if name.lower() == 'status': warn(WSGIWarning('The status header is not supported due to ' 'conflicts with the CGI spec.'), stacklevel=3) if exc_info is not None and not isinstance(exc_info, tuple): warn(WSGIWarning('invalid value for exc_info'), stacklevel=3) headers = Headers(headers) self.check_headers(headers) return status_code, headers def check_headers(self, headers): etag = headers.get('etag') if etag is not None: if etag.startswith(('W/', 'w/')): if etag.startswith('w/'): warn(HTTPWarning('weak etag indicator should be upcase.'), stacklevel=4) etag = etag[2:] if not (etag[:1] == etag[-1:] == '"'): warn(HTTPWarning('unquoted etag emitted.'), stacklevel=4) location = headers.get('location') if location is not None: if not urlparse(location).netloc: warn(HTTPWarning('absolute URLs required for location header'), stacklevel=4) def check_iterator(self, app_iter): if isinstance(app_iter, string_types): warn(WSGIWarning('application returned string. Response will ' 'send character for character to the client ' 'which will kill the performance. 
Return a ' 'list or iterable instead.'), stacklevel=3) def __call__(self, *args, **kwargs): if len(args) != 2: warn(WSGIWarning('Two arguments to WSGI app required'), stacklevel=2) if kwargs: warn(WSGIWarning('No keyword arguments to WSGI app allowed'), stacklevel=2) environ, start_response = args self.check_environ(environ) environ['wsgi.input'] = InputStream(environ['wsgi.input']) environ['wsgi.errors'] = ErrorStream(environ['wsgi.errors']) # hook our own file wrapper in so that applications will always # iterate to the end and we can check the content length environ['wsgi.file_wrapper'] = FileWrapper headers_set = [] chunks = [] def checking_start_response(*args, **kwargs): if len(args) not in (2, 3): warn(WSGIWarning('Invalid number of arguments: %s, expected ' '2 or 3' % len(args), stacklevel=2)) if kwargs: warn(WSGIWarning('no keyword arguments allowed.')) status, headers = args[:2] if len(args) == 3: exc_info = args[2] else: exc_info = None headers_set[:] = self.check_start_response(status, headers, exc_info) return GuardedWrite(start_response(status, headers, exc_info), chunks) app_iter = self.app(environ, checking_start_response) self.check_iterator(app_iter) return GuardedIterator(app_iter, headers_set, chunks) werkzeug-0.14.1/werkzeug/contrib/profiler.py000066400000000000000000000120371322225165500211450ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.contrib.profiler ~~~~~~~~~~~~~~~~~~~~~~~~~ This module provides a simple WSGI profiler middleware for finding bottlenecks in web application. It uses the :mod:`profile` or :mod:`cProfile` module to do the profiling and writes the stats to the stream provided (defaults to stderr). Example usage:: from werkzeug.contrib.profiler import ProfilerMiddleware app = ProfilerMiddleware(app) :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import sys import time import os.path try: try: from cProfile import Profile except ImportError: from profile import Profile from pstats import Stats available = True except ImportError: available = False class MergeStream(object): """An object that redirects `write` calls to multiple streams. Use this to log to both `sys.stdout` and a file:: f = open('profiler.log', 'w') stream = MergeStream(sys.stdout, f) profiler = ProfilerMiddleware(app, stream) """ def __init__(self, *streams): if not streams: raise TypeError('at least one stream must be given') self.streams = streams def write(self, data): for stream in self.streams: stream.write(data) class ProfilerMiddleware(object): """Simple profiler middleware. Wraps a WSGI application and profiles a request. This intentionally buffers the response so that timings are more exact. By giving the `profile_dir` argument, pstat.Stats files are saved to that directory, one file per request. Without it, a summary is printed to `stream` instead. For the exact meaning of `sort_by` and `restrictions` consult the :mod:`profile` documentation. .. versionadded:: 0.9 Added support for `restrictions` and `profile_dir`. :param app: the WSGI application to profile. :param stream: the stream for the profiled stats. defaults to stderr. :param sort_by: a tuple of columns to sort the result by. :param restrictions: a tuple of profiling strictions, not used if dumping to `profile_dir`. 
:param profile_dir: directory name to save pstat files """ def __init__(self, app, stream=None, sort_by=('time', 'calls'), restrictions=(), profile_dir=None): if not available: raise RuntimeError('the profiler is not available because ' 'profile or pstat is not installed.') self._app = app self._stream = stream or sys.stdout self._sort_by = sort_by self._restrictions = restrictions self._profile_dir = profile_dir def __call__(self, environ, start_response): response_body = [] def catching_start_response(status, headers, exc_info=None): start_response(status, headers, exc_info) return response_body.append def runapp(): appiter = self._app(environ, catching_start_response) response_body.extend(appiter) if hasattr(appiter, 'close'): appiter.close() p = Profile() start = time.time() p.runcall(runapp) body = b''.join(response_body) elapsed = time.time() - start if self._profile_dir is not None: prof_filename = os.path.join(self._profile_dir, '%s.%s.%06dms.%d.prof' % ( environ['REQUEST_METHOD'], environ.get('PATH_INFO').strip( '/').replace('/', '.') or 'root', elapsed * 1000.0, time.time() )) p.dump_stats(prof_filename) else: stats = Stats(p, stream=self._stream) stats.sort_stats(*self._sort_by) self._stream.write('-' * 80) self._stream.write('\nPATH: %r\n' % environ.get('PATH_INFO')) stats.print_stats(*self._restrictions) self._stream.write('-' * 80 + '\n\n') return [body] def make_action(app_factory, hostname='localhost', port=5000, threaded=False, processes=1, stream=None, sort_by=('time', 'calls'), restrictions=()): """Return a new callback for :mod:`werkzeug.script` that starts a local server with the profiler enabled. :: from werkzeug.contrib import profiler action_profile = profiler.make_action(make_app) """ def action(hostname=('h', hostname), port=('p', port), threaded=threaded, processes=processes): """Start a new development server.""" from werkzeug.serving import run_simple app = ProfilerMiddleware(app_factory(), stream, sort_by, restrictions) run_simple(hostname, port, app, False, None, threaded, processes) return action werkzeug-0.14.1/werkzeug/contrib/securecookie.py000066400000000000000000000276441322225165500220150ustar00rootroot00000000000000# -*- coding: utf-8 -*- r""" werkzeug.contrib.securecookie ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ This module implements a cookie that is not alterable from the client because it adds a checksum the server checks for. You can use it as session replacement if all you have is a user id or something to mark a logged in user. Keep in mind that the data is still readable from the client as a normal cookie is. However you don't have to store and flush the sessions you have at the server. Example usage: >>> from werkzeug.contrib.securecookie import SecureCookie >>> x = SecureCookie({"foo": 42, "baz": (1, 2, 3)}, "deadbeef") Dumping into a string so that one can store it in a cookie: >>> value = x.serialize() Loading from that string again: >>> x = SecureCookie.unserialize(value, "deadbeef") >>> x["baz"] (1, 2, 3) If someone modifies the cookie and the checksum is wrong the unserialize method will fail silently and return a new empty `SecureCookie` object. Keep in mind that the values will be visible in the cookie so do not store data in a cookie you don't want the user to see. 
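If the value is tampered with, or unserialized with a different secret key,
the checksum no longer matches and you silently get an empty cookie back (a
small sketch of the failure mode, reusing ``value`` from the example above):

>>> "foo" in SecureCookie.unserialize(value + b"x", "deadbeef")
False
>>> "baz" in SecureCookie.unserialize(value, "other-key")
False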
Application Integration ======================= If you are using the werkzeug request objects you could integrate the secure cookie into your application like this:: from werkzeug.utils import cached_property from werkzeug.wrappers import BaseRequest from werkzeug.contrib.securecookie import SecureCookie # don't use this key but a different one; you could just use # os.urandom(20) to get something random SECRET_KEY = '\xfa\xdd\xb8z\xae\xe0}4\x8b\xea' class Request(BaseRequest): @cached_property def client_session(self): data = self.cookies.get('session_data') if not data: return SecureCookie(secret_key=SECRET_KEY) return SecureCookie.unserialize(data, SECRET_KEY) def application(environ, start_response): request = Request(environ) # get a response object here response = ... if request.client_session.should_save: session_data = request.client_session.serialize() response.set_cookie('session_data', session_data, httponly=True) return response(environ, start_response) A less verbose integration can be achieved by using shorthand methods:: class Request(BaseRequest): @cached_property def client_session(self): return SecureCookie.load_cookie(self, secret_key=COOKIE_SECRET) def application(environ, start_response): request = Request(environ) # get a response object here response = ... request.client_session.save_cookie(response) return response(environ, start_response) :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import pickle import base64 from hmac import new as hmac from time import time from hashlib import sha1 as _default_hash from werkzeug._compat import iteritems, text_type, to_bytes from werkzeug.urls import url_quote_plus, url_unquote_plus from werkzeug._internal import _date_to_unix from werkzeug.contrib.sessions import ModificationTrackingDict from werkzeug.security import safe_str_cmp from werkzeug._compat import to_native class UnquoteError(Exception): """Internal exception used to signal failures on quoting.""" class SecureCookie(ModificationTrackingDict): """Represents a secure cookie. You can subclass this class and provide an alternative mac method. The import thing is that the mac method is a function with a similar interface to the hashlib. Required methods are update() and digest(). Example usage: >>> x = SecureCookie({"foo": 42, "baz": (1, 2, 3)}, "deadbeef") >>> x["foo"] 42 >>> x["baz"] (1, 2, 3) >>> x["blafasel"] = 23 >>> x.should_save True :param data: the initial data. Either a dict, list of tuples or `None`. :param secret_key: the secret key. If not set `None` or not specified it has to be set before :meth:`serialize` is called. :param new: The initial value of the `new` flag. """ #: The hash method to use. This has to be a module with a new function #: or a function that creates a hashlib object. Such as `hashlib.md5` #: Subclasses can override this attribute. The default hash is sha1. #: Make sure to wrap this in staticmethod() if you store an arbitrary #: function there such as hashlib.sha1 which might be implemented #: as a function. hash_method = staticmethod(_default_hash) #: the module used for serialization. Unless overriden by subclasses #: the standard pickle module is used. serialization_method = pickle #: if the contents should be base64 quoted. This can be disabled if the #: serialization process returns cookie safe strings only. 
quote_base64 = True def __init__(self, data=None, secret_key=None, new=True): ModificationTrackingDict.__init__(self, data or ()) # explicitly convert it into a bytestring because python 2.6 # no longer performs an implicit string conversion on hmac if secret_key is not None: secret_key = to_bytes(secret_key, 'utf-8') self.secret_key = secret_key self.new = new def __repr__(self): return '<%s %s%s>' % ( self.__class__.__name__, dict.__repr__(self), self.should_save and '*' or '' ) @property def should_save(self): """True if the session should be saved. By default this is only true for :attr:`modified` cookies, not :attr:`new`. """ return self.modified @classmethod def quote(cls, value): """Quote the value for the cookie. This can be any object supported by :attr:`serialization_method`. :param value: the value to quote. """ if cls.serialization_method is not None: value = cls.serialization_method.dumps(value) if cls.quote_base64: value = b''.join(base64.b64encode(value).splitlines()).strip() return value @classmethod def unquote(cls, value): """Unquote the value for the cookie. If unquoting does not work a :exc:`UnquoteError` is raised. :param value: the value to unquote. """ try: if cls.quote_base64: value = base64.b64decode(value) if cls.serialization_method is not None: value = cls.serialization_method.loads(value) return value except Exception: # unfortunately pickle and other serialization modules can # cause pretty every error here. if we get one we catch it # and convert it into an UnquoteError raise UnquoteError() def serialize(self, expires=None): """Serialize the secure cookie into a string. If expires is provided, the session will be automatically invalidated after expiration when you unseralize it. This provides better protection against session cookie theft. :param expires: an optional expiration date for the cookie (a :class:`datetime.datetime` object) """ if self.secret_key is None: raise RuntimeError('no secret key defined') if expires: self['_expires'] = _date_to_unix(expires) result = [] mac = hmac(self.secret_key, None, self.hash_method) for key, value in sorted(self.items()): result.append(('%s=%s' % ( url_quote_plus(key), self.quote(value).decode('ascii') )).encode('ascii')) mac.update(b'|' + result[-1]) return b'?'.join([ base64.b64encode(mac.digest()).strip(), b'&'.join(result) ]) @classmethod def unserialize(cls, string, secret_key): """Load the secure cookie from a serialized string. :param string: the cookie value to unserialize. :param secret_key: the secret key used to serialize the cookie. :return: a new :class:`SecureCookie`. """ if isinstance(string, text_type): string = string.encode('utf-8', 'replace') if isinstance(secret_key, text_type): secret_key = secret_key.encode('utf-8', 'replace') try: base64_hash, data = string.split(b'?', 1) except (ValueError, IndexError): items = () else: items = {} mac = hmac(secret_key, None, cls.hash_method) for item in data.split(b'&'): mac.update(b'|' + item) if b'=' not in item: items = None break key, value = item.split(b'=', 1) # try to make the key a string key = url_unquote_plus(key.decode('ascii')) try: key = to_native(key) except UnicodeError: pass items[key] = value # no parsing error and the mac looks okay, we can now # sercurely unpickle our cookie. 
try: client_hash = base64.b64decode(base64_hash) except TypeError: items = client_hash = None if items is not None and safe_str_cmp(client_hash, mac.digest()): try: for key, value in iteritems(items): items[key] = cls.unquote(value) except UnquoteError: items = () else: if '_expires' in items: if time() > items['_expires']: items = () else: del items['_expires'] else: items = () return cls(items, secret_key, False) @classmethod def load_cookie(cls, request, key='session', secret_key=None): """Loads a :class:`SecureCookie` from a cookie in request. If the cookie is not set, a new :class:`SecureCookie` instanced is returned. :param request: a request object that has a `cookies` attribute which is a dict of all cookie values. :param key: the name of the cookie. :param secret_key: the secret key used to unquote the cookie. Always provide the value even though it has no default! """ data = request.cookies.get(key) if not data: return cls(secret_key=secret_key) return cls.unserialize(data, secret_key) def save_cookie(self, response, key='session', expires=None, session_expires=None, max_age=None, path='/', domain=None, secure=None, httponly=False, force=False): """Saves the SecureCookie in a cookie on response object. All parameters that are not described here are forwarded directly to :meth:`~BaseResponse.set_cookie`. :param response: a response object that has a :meth:`~BaseResponse.set_cookie` method. :param key: the name of the cookie. :param session_expires: the expiration date of the secure cookie stored information. If this is not provided the cookie `expires` date is used instead. """ if force or self.should_save: data = self.serialize(session_expires or expires) response.set_cookie(key, data, expires=expires, max_age=max_age, path=path, domain=domain, secure=secure, httponly=httponly) werkzeug-0.14.1/werkzeug/contrib/sessions.py000066400000000000000000000304411322225165500211700ustar00rootroot00000000000000# -*- coding: utf-8 -*- r""" werkzeug.contrib.sessions ~~~~~~~~~~~~~~~~~~~~~~~~~ This module contains some helper classes that help one to add session support to a python WSGI application. For full client-side session storage see :mod:`~werkzeug.contrib.securecookie` which implements a secure, client-side session storage. Application Integration ======================= :: from werkzeug.contrib.sessions import SessionMiddleware, \ FilesystemSessionStore app = SessionMiddleware(app, FilesystemSessionStore()) The current session will then appear in the WSGI environment as `werkzeug.session`. However it's recommended to not use the middleware but the stores directly in the application. However for very simple scripts a middleware for sessions could be sufficient. This module does not implement methods or ways to check if a session is expired. That should be done by a cronjob and storage specific. For example to prune unused filesystem sessions one could check the modified time of the files. If sessions are stored in the database the new() method should add an expiration timestamp for the session. 
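A small sketch of such a cleanup job for the filesystem store (run it from
cron or any scheduler; the one hour cutoff is only an example)::

    import os
    import time

    from werkzeug.contrib.sessions import FilesystemSessionStore

    store = FilesystemSessionStore()
    max_age = 60 * 60  # one hour

    for sid in store.list():
        filename = store.get_session_filename(sid)
        try:
            if os.path.getmtime(filename) < time.time() - max_age:
                os.unlink(filename)
        except OSError:
            pass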
For better flexibility it's recommended to not use the middleware but the store and session object directly in the application dispatching:: session_store = FilesystemSessionStore() def application(environ, start_response): request = Request(environ) sid = request.cookies.get('cookie_name') if sid is None: request.session = session_store.new() else: request.session = session_store.get(sid) response = get_the_response_object(request) if request.session.should_save: session_store.save(request.session) response.set_cookie('cookie_name', request.session.sid) return response(environ, start_response) :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import re import os import tempfile from os import path from time import time from random import random from hashlib import sha1 from pickle import dump, load, HIGHEST_PROTOCOL from werkzeug.datastructures import CallbackDict from werkzeug.utils import dump_cookie, parse_cookie from werkzeug.wsgi import ClosingIterator from werkzeug.posixemulation import rename from werkzeug._compat import PY2, text_type from werkzeug.filesystem import get_filesystem_encoding _sha1_re = re.compile(r'^[a-f0-9]{40}$') def _urandom(): if hasattr(os, 'urandom'): return os.urandom(30) return text_type(random()).encode('ascii') def generate_key(salt=None): if salt is None: salt = repr(salt).encode('ascii') return sha1(b''.join([ salt, str(time()).encode('ascii'), _urandom() ])).hexdigest() class ModificationTrackingDict(CallbackDict): __slots__ = ('modified',) def __init__(self, *args, **kwargs): def on_update(self): self.modified = True self.modified = False CallbackDict.__init__(self, on_update=on_update) dict.update(self, *args, **kwargs) def copy(self): """Create a flat copy of the dict.""" missing = object() result = object.__new__(self.__class__) for name in self.__slots__: val = getattr(self, name, missing) if val is not missing: setattr(result, name, val) return result def __copy__(self): return self.copy() class Session(ModificationTrackingDict): """Subclass of a dict that keeps track of direct object changes. Changes in mutable structures are not tracked, for those you have to set `modified` to `True` by hand. """ __slots__ = ModificationTrackingDict.__slots__ + ('sid', 'new') def __init__(self, data, sid, new=False): ModificationTrackingDict.__init__(self, data) self.sid = sid self.new = new def __repr__(self): return '<%s %s%s>' % ( self.__class__.__name__, dict.__repr__(self), self.should_save and '*' or '' ) @property def should_save(self): """True if the session should be saved. .. versionchanged:: 0.6 By default the session is now only saved if the session is modified, not if it is new like it was before. """ return self.modified class SessionStore(object): """Baseclass for all session stores. The Werkzeug contrib module does not implement any useful stores besides the filesystem store, application developers are encouraged to create their own stores. :param session_class: The session class to use. Defaults to :class:`Session`. 
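    A minimal round-trip sketch, assuming the filesystem store defined below
    and its default temporary directory::

        store = FilesystemSessionStore()
        session = store.new()
        session['visits'] = 1
        store.save(session)
        assert store.get(session.sid)['visits'] == 1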
""" def __init__(self, session_class=None): if session_class is None: session_class = Session self.session_class = session_class def is_valid_key(self, key): """Check if a key has the correct format.""" return _sha1_re.match(key) is not None def generate_key(self, salt=None): """Simple function that generates a new session key.""" return generate_key(salt) def new(self): """Generate a new session.""" return self.session_class({}, self.generate_key(), True) def save(self, session): """Save a session.""" def save_if_modified(self, session): """Save if a session class wants an update.""" if session.should_save: self.save(session) def delete(self, session): """Delete a session.""" def get(self, sid): """Get a session for this sid or a new session object. This method has to check if the session key is valid and create a new session if that wasn't the case. """ return self.session_class({}, sid, True) #: used for temporary files by the filesystem session store _fs_transaction_suffix = '.__wz_sess' class FilesystemSessionStore(SessionStore): """Simple example session store that saves sessions on the filesystem. This store works best on POSIX systems and Windows Vista / Windows Server 2008 and newer. .. versionchanged:: 0.6 `renew_missing` was added. Previously this was considered `True`, now the default changed to `False` and it can be explicitly deactivated. :param path: the path to the folder used for storing the sessions. If not provided the default temporary directory is used. :param filename_template: a string template used to give the session a filename. ``%s`` is replaced with the session id. :param session_class: The session class to use. Defaults to :class:`Session`. :param renew_missing: set to `True` if you want the store to give the user a new sid if the session was not yet saved. """ def __init__(self, path=None, filename_template='werkzeug_%s.sess', session_class=None, renew_missing=False, mode=0o644): SessionStore.__init__(self, session_class) if path is None: path = tempfile.gettempdir() self.path = path if isinstance(filename_template, text_type) and PY2: filename_template = filename_template.encode( get_filesystem_encoding()) assert not filename_template.endswith(_fs_transaction_suffix), \ 'filename templates may not end with %s' % _fs_transaction_suffix self.filename_template = filename_template self.renew_missing = renew_missing self.mode = mode def get_session_filename(self, sid): # out of the box, this should be a strict ASCII subset but # you might reconfigure the session object to have a more # arbitrary string. if isinstance(sid, text_type) and PY2: sid = sid.encode(get_filesystem_encoding()) return path.join(self.path, self.filename_template % sid) def save(self, session): fn = self.get_session_filename(session.sid) fd, tmp = tempfile.mkstemp(suffix=_fs_transaction_suffix, dir=self.path) f = os.fdopen(fd, 'wb') try: dump(dict(session), f, HIGHEST_PROTOCOL) finally: f.close() try: rename(tmp, fn) os.chmod(fn, self.mode) except (IOError, OSError): pass def delete(self, session): fn = self.get_session_filename(session.sid) try: os.unlink(fn) except OSError: pass def get(self, sid): if not self.is_valid_key(sid): return self.new() try: f = open(self.get_session_filename(sid), 'rb') except IOError: if self.renew_missing: return self.new() data = {} else: try: try: data = load(f) except Exception: data = {} finally: f.close() return self.session_class(data, sid, False) def list(self): """Lists all sessions in the store. .. 
versionadded:: 0.6 """ before, after = self.filename_template.split('%s', 1) filename_re = re.compile(r'%s(.{5,})%s$' % (re.escape(before), re.escape(after))) result = [] for filename in os.listdir(self.path): #: this is a session that is still being saved. if filename.endswith(_fs_transaction_suffix): continue match = filename_re.match(filename) if match is not None: result.append(match.group(1)) return result class SessionMiddleware(object): """A simple middleware that puts the session object of a store provided into the WSGI environ. It automatically sets cookies and restores sessions. However a middleware is not the preferred solution because it won't be as fast as sessions managed by the application itself and will put a key into the WSGI environment only relevant for the application which is against the concept of WSGI. The cookie parameters are the same as for the :func:`~dump_cookie` function just prefixed with ``cookie_``. Additionally `max_age` is called `cookie_age` and not `cookie_max_age` because of backwards compatibility. """ def __init__(self, app, store, cookie_name='session_id', cookie_age=None, cookie_expires=None, cookie_path='/', cookie_domain=None, cookie_secure=None, cookie_httponly=False, environ_key='werkzeug.session'): self.app = app self.store = store self.cookie_name = cookie_name self.cookie_age = cookie_age self.cookie_expires = cookie_expires self.cookie_path = cookie_path self.cookie_domain = cookie_domain self.cookie_secure = cookie_secure self.cookie_httponly = cookie_httponly self.environ_key = environ_key def __call__(self, environ, start_response): cookie = parse_cookie(environ.get('HTTP_COOKIE', '')) sid = cookie.get(self.cookie_name, None) if sid is None: session = self.store.new() else: session = self.store.get(sid) environ[self.environ_key] = session def injecting_start_response(status, headers, exc_info=None): if session.should_save: self.store.save(session) headers.append(('Set-Cookie', dump_cookie(self.cookie_name, session.sid, self.cookie_age, self.cookie_expires, self.cookie_path, self.cookie_domain, self.cookie_secure, self.cookie_httponly))) return start_response(status, headers, exc_info) return ClosingIterator(self.app(environ, injecting_start_response), lambda: self.store.save_if_modified(session)) werkzeug-0.14.1/werkzeug/contrib/testtools.py000066400000000000000000000046251322225165500213670ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.contrib.testtools ~~~~~~~~~~~~~~~~~~~~~~~~~~ This module implements extended wrappers for simplified testing. `TestResponse` A response wrapper which adds various cached attributes for simplified assertions on various content types. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ from werkzeug.utils import cached_property, import_string from werkzeug.wrappers import Response from warnings import warn warn(DeprecationWarning('werkzeug.contrib.testtools is deprecated and ' 'will be removed with Werkzeug 1.0')) class ContentAccessors(object): """ A mixin class for response objects that provides a couple of useful accessors for unittesting. 
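    A hypothetical usage sketch with :class:`werkzeug.test.Client`; the
    application object and the endpoint are assumptions::

        from werkzeug.test import Client

        client = Client(application, response_wrapper=TestResponse)
        response = client.get('/api/items')
        assert response.json == {'items': []}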
""" def xml(self): """Get an etree if possible.""" if 'xml' not in self.mimetype: raise AttributeError( 'Not a XML response (Content-Type: %s)' % self.mimetype) for module in ['xml.etree.ElementTree', 'ElementTree', 'elementtree.ElementTree']: etree = import_string(module, silent=True) if etree is not None: return etree.XML(self.body) raise RuntimeError('You must have ElementTree installed ' 'to use TestResponse.xml') xml = cached_property(xml) def lxml(self): """Get an lxml etree if possible.""" if ('html' not in self.mimetype and 'xml' not in self.mimetype): raise AttributeError('Not an HTML/XML response') from lxml import etree try: from lxml.html import fromstring except ImportError: fromstring = etree.HTML if self.mimetype == 'text/html': return fromstring(self.data) return etree.XML(self.data) lxml = cached_property(lxml) def json(self): """Get the result of simplejson.loads if possible.""" if 'json' not in self.mimetype: raise AttributeError('Not a JSON response') try: from simplejson import loads except ImportError: from json import loads return loads(self.data) json = cached_property(json) class TestResponse(Response, ContentAccessors): """Pass this to `werkzeug.test.Client` for easier unittesting.""" werkzeug-0.14.1/werkzeug/contrib/wrappers.py000066400000000000000000000242361322225165500211720ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.contrib.wrappers ~~~~~~~~~~~~~~~~~~~~~~~~~ Extra wrappers or mixins contributed by the community. These wrappers can be mixed in into request objects to add extra functionality. Example:: from werkzeug.wrappers import Request as RequestBase from werkzeug.contrib.wrappers import JSONRequestMixin class Request(RequestBase, JSONRequestMixin): pass Afterwards this request object provides the extra functionality of the :class:`JSONRequestMixin`. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import codecs try: from simplejson import loads except ImportError: from json import loads from werkzeug.exceptions import BadRequest from werkzeug.utils import cached_property from werkzeug.http import dump_options_header, parse_options_header from werkzeug._compat import wsgi_decoding_dance def is_known_charset(charset): """Checks if the given charset is known to Python.""" try: codecs.lookup(charset) except LookupError: return False return True class JSONRequestMixin(object): """Add json method to a request object. This will parse the input data through simplejson if possible. :exc:`~werkzeug.exceptions.BadRequest` will be raised if the content-type is not json or if the data itself cannot be parsed as json. """ @cached_property def json(self): """Get the result of simplejson.loads if possible.""" if 'json' not in self.environ.get('CONTENT_TYPE', ''): raise BadRequest('Not a JSON request') try: return loads(self.data.decode(self.charset, self.encoding_errors)) except Exception: raise BadRequest('Unable to read JSON request') class ProtobufRequestMixin(object): """Add protobuf parsing method to a request object. This will parse the input data through `protobuf`_ if possible. :exc:`~werkzeug.exceptions.BadRequest` will be raised if the content-type is not protobuf or if the data itself cannot be parsed property. .. _protobuf: http://code.google.com/p/protobuf/ """ #: by default the :class:`ProtobufRequestMixin` will raise a #: :exc:`~werkzeug.exceptions.BadRequest` if the object is not #: initialized. You can bypass that check by setting this #: attribute to `False`. 
protobuf_check_initialization = True def parse_protobuf(self, proto_type): """Parse the data into an instance of proto_type.""" if 'protobuf' not in self.environ.get('CONTENT_TYPE', ''): raise BadRequest('Not a Protobuf request') obj = proto_type() try: obj.ParseFromString(self.data) except Exception: raise BadRequest("Unable to parse Protobuf request") # Fail if not all required fields are set if self.protobuf_check_initialization and not obj.IsInitialized(): raise BadRequest("Partial Protobuf request") return obj class RoutingArgsRequestMixin(object): """This request mixin adds support for the wsgiorg routing args `specification`_. .. _specification: https://wsgi.readthedocs.io/en/latest/specifications/routing_args.html """ def _get_routing_args(self): return self.environ.get('wsgiorg.routing_args', (()))[0] def _set_routing_args(self, value): if self.shallow: raise RuntimeError('A shallow request tried to modify the WSGI ' 'environment. If you really want to do that, ' 'set `shallow` to False.') self.environ['wsgiorg.routing_args'] = (value, self.routing_vars) routing_args = property(_get_routing_args, _set_routing_args, doc=''' The positional URL arguments as `tuple`.''') del _get_routing_args, _set_routing_args def _get_routing_vars(self): rv = self.environ.get('wsgiorg.routing_args') if rv is not None: return rv[1] rv = {} if not self.shallow: self.routing_vars = rv return rv def _set_routing_vars(self, value): if self.shallow: raise RuntimeError('A shallow request tried to modify the WSGI ' 'environment. If you really want to do that, ' 'set `shallow` to False.') self.environ['wsgiorg.routing_args'] = (self.routing_args, value) routing_vars = property(_get_routing_vars, _set_routing_vars, doc=''' The keyword URL arguments as `dict`.''') del _get_routing_vars, _set_routing_vars class ReverseSlashBehaviorRequestMixin(object): """This mixin reverses the trailing slash behavior of :attr:`script_root` and :attr:`path`. This makes it possible to use :func:`~urlparse.urljoin` directly on the paths. Because it changes the behavior or :class:`Request` this class has to be mixed in *before* the actual request class:: class MyRequest(ReverseSlashBehaviorRequestMixin, Request): pass This example shows the differences (for an application mounted on `/application` and the request going to `/application/foo/bar`): +---------------+-------------------+---------------------+ | | normal behavior | reverse behavior | +===============+===================+=====================+ | `script_root` | ``/application`` | ``/application/`` | +---------------+-------------------+---------------------+ | `path` | ``/foo/bar`` | ``foo/bar`` | +---------------+-------------------+---------------------+ """ @cached_property def path(self): """Requested path as unicode. This works a bit like the regular path info in the WSGI environment but will not include a leading slash. """ path = wsgi_decoding_dance(self.environ.get('PATH_INFO') or '', self.charset, self.encoding_errors) return path.lstrip('/') @cached_property def script_root(self): """The root path of the script includling a trailing slash.""" path = wsgi_decoding_dance(self.environ.get('SCRIPT_NAME') or '', self.charset, self.encoding_errors) return path.rstrip('/') + '/' class DynamicCharsetRequestMixin(object): """"If this mixin is mixed into a request class it will provide a dynamic `charset` attribute. This means that if the charset is transmitted in the content type headers it's used from there. 
Because it changes the behavior or :class:`Request` this class has to be mixed in *before* the actual request class:: class MyRequest(DynamicCharsetRequestMixin, Request): pass By default the request object assumes that the URL charset is the same as the data charset. If the charset varies on each request based on the transmitted data it's not a good idea to let the URLs change based on that. Most browsers assume either utf-8 or latin1 for the URLs if they have troubles figuring out. It's strongly recommended to set the URL charset to utf-8:: class MyRequest(DynamicCharsetRequestMixin, Request): url_charset = 'utf-8' .. versionadded:: 0.6 """ #: the default charset that is assumed if the content type header #: is missing or does not contain a charset parameter. The default #: is latin1 which is what HTTP specifies as default charset. #: You may however want to set this to utf-8 to better support #: browsers that do not transmit a charset for incoming data. default_charset = 'latin1' def unknown_charset(self, charset): """Called if a charset was provided but is not supported by the Python codecs module. By default latin1 is assumed then to not lose any information, you may override this method to change the behavior. :param charset: the charset that was not found. :return: the replacement charset. """ return 'latin1' @cached_property def charset(self): """The charset from the content type.""" header = self.environ.get('CONTENT_TYPE') if header: ct, options = parse_options_header(header) charset = options.get('charset') if charset: if is_known_charset(charset): return charset return self.unknown_charset(charset) return self.default_charset class DynamicCharsetResponseMixin(object): """If this mixin is mixed into a response class it will provide a dynamic `charset` attribute. This means that if the charset is looked up and stored in the `Content-Type` header and updates itself automatically. This also means a small performance hit but can be useful if you're working with different charsets on responses. Because the charset attribute is no a property at class-level, the default value is stored in `default_charset`. Because it changes the behavior or :class:`Response` this class has to be mixed in *before* the actual response class:: class MyResponse(DynamicCharsetResponseMixin, Response): pass .. versionadded:: 0.6 """ #: the default charset. default_charset = 'utf-8' def _get_charset(self): header = self.headers.get('content-type') if header: charset = parse_options_header(header)[1].get('charset') if charset: return charset return self.default_charset def _set_charset(self, charset): header = self.headers.get('content-type') ct, options = parse_options_header(header) if not ct: raise TypeError('Cannot set charset if Content-Type ' 'header is missing.') options['charset'] = charset self.headers['Content-Type'] = dump_options_header(ct, options) charset = property(_get_charset, _set_charset, doc=""" The charset for the response. It's stored inside the Content-Type header as a parameter.""") del _get_charset, _set_charset werkzeug-0.14.1/werkzeug/datastructures.py000066400000000000000000002577401322225165500207540ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.datastructures ~~~~~~~~~~~~~~~~~~~~~~~ This module provides mixins and classes with an immutable interface. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
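    A quick taste of the central :class:`MultiDict` type, with assumed values::

        >>> from werkzeug.datastructures import MultiDict
        >>> d = MultiDict([('a', 'b'), ('a', 'c')])
        >>> d['a'], d.getlist('a')
        ('b', ['b', 'c'])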
""" import re import codecs import mimetypes from copy import deepcopy from itertools import repeat from collections import Container, Iterable, MutableSet from werkzeug._internal import _missing, _empty_stream from werkzeug._compat import iterkeys, itervalues, iteritems, iterlists, \ PY2, text_type, integer_types, string_types, make_literal_wrapper, \ to_native from werkzeug.filesystem import get_filesystem_encoding _locale_delim_re = re.compile(r'[_-]') def is_immutable(self): raise TypeError('%r objects are immutable' % self.__class__.__name__) def iter_multi_items(mapping): """Iterates over the items of a mapping yielding keys and values without dropping any from more complex structures. """ if isinstance(mapping, MultiDict): for item in iteritems(mapping, multi=True): yield item elif isinstance(mapping, dict): for key, value in iteritems(mapping): if isinstance(value, (tuple, list)): for value in value: yield key, value else: yield key, value else: for item in mapping: yield item def native_itermethods(names): if not PY2: return lambda x: x def setviewmethod(cls, name): viewmethod_name = 'view%s' % name viewmethod = lambda self, *a, **kw: ViewItems(self, name, 'view_%s' % name, *a, **kw) viewmethod.__doc__ = \ '"""`%s()` object providing a view on %s"""' % (viewmethod_name, name) setattr(cls, viewmethod_name, viewmethod) def setitermethod(cls, name): itermethod = getattr(cls, name) setattr(cls, 'iter%s' % name, itermethod) listmethod = lambda self, *a, **kw: list(itermethod(self, *a, **kw)) listmethod.__doc__ = \ 'Like :py:meth:`iter%s`, but returns a list.' % name setattr(cls, name, listmethod) def wrap(cls): for name in names: setitermethod(cls, name) setviewmethod(cls, name) return cls return wrap class ImmutableListMixin(object): """Makes a :class:`list` immutable. .. versionadded:: 0.5 :private: """ _hash_cache = None def __hash__(self): if self._hash_cache is not None: return self._hash_cache rv = self._hash_cache = hash(tuple(self)) return rv def __reduce_ex__(self, protocol): return type(self), (list(self),) def __delitem__(self, key): is_immutable(self) def __iadd__(self, other): is_immutable(self) __imul__ = __iadd__ def __setitem__(self, key, value): is_immutable(self) def append(self, item): is_immutable(self) remove = append def extend(self, iterable): is_immutable(self) def insert(self, pos, value): is_immutable(self) def pop(self, index=-1): is_immutable(self) def reverse(self): is_immutable(self) def sort(self, cmp=None, key=None, reverse=None): is_immutable(self) class ImmutableList(ImmutableListMixin, list): """An immutable :class:`list`. .. versionadded:: 0.5 :private: """ def __repr__(self): return '%s(%s)' % ( self.__class__.__name__, list.__repr__(self), ) class ImmutableDictMixin(object): """Makes a :class:`dict` immutable. .. 
versionadded:: 0.5 :private: """ _hash_cache = None @classmethod def fromkeys(cls, keys, value=None): instance = super(cls, cls).__new__(cls) instance.__init__(zip(keys, repeat(value))) return instance def __reduce_ex__(self, protocol): return type(self), (dict(self),) def _iter_hashitems(self): return iteritems(self) def __hash__(self): if self._hash_cache is not None: return self._hash_cache rv = self._hash_cache = hash(frozenset(self._iter_hashitems())) return rv def setdefault(self, key, default=None): is_immutable(self) def update(self, *args, **kwargs): is_immutable(self) def pop(self, key, default=None): is_immutable(self) def popitem(self): is_immutable(self) def __setitem__(self, key, value): is_immutable(self) def __delitem__(self, key): is_immutable(self) def clear(self): is_immutable(self) class ImmutableMultiDictMixin(ImmutableDictMixin): """Makes a :class:`MultiDict` immutable. .. versionadded:: 0.5 :private: """ def __reduce_ex__(self, protocol): return type(self), (list(iteritems(self, multi=True)),) def _iter_hashitems(self): return iteritems(self, multi=True) def add(self, key, value): is_immutable(self) def popitemlist(self): is_immutable(self) def poplist(self, key): is_immutable(self) def setlist(self, key, new_list): is_immutable(self) def setlistdefault(self, key, default_list=None): is_immutable(self) class UpdateDictMixin(object): """Makes dicts call `self.on_update` on modifications. .. versionadded:: 0.5 :private: """ on_update = None def calls_update(name): def oncall(self, *args, **kw): rv = getattr(super(UpdateDictMixin, self), name)(*args, **kw) if self.on_update is not None: self.on_update(self) return rv oncall.__name__ = name return oncall def setdefault(self, key, default=None): modified = key not in self rv = super(UpdateDictMixin, self).setdefault(key, default) if modified and self.on_update is not None: self.on_update(self) return rv def pop(self, key, default=_missing): modified = key in self if default is _missing: rv = super(UpdateDictMixin, self).pop(key) else: rv = super(UpdateDictMixin, self).pop(key, default) if modified and self.on_update is not None: self.on_update(self) return rv __setitem__ = calls_update('__setitem__') __delitem__ = calls_update('__delitem__') clear = calls_update('clear') popitem = calls_update('popitem') update = calls_update('update') del calls_update class TypeConversionDict(dict): """Works like a regular dict but the :meth:`get` method can perform type conversions. :class:`MultiDict` and :class:`CombinedMultiDict` are subclasses of this class and provide the same feature. .. versionadded:: 0.5 """ def get(self, key, default=None, type=None): """Return the default value if the requested data doesn't exist. If `type` is provided and is a callable it should convert the value, return it or raise a :exc:`ValueError` if that is not possible. In this case the function will return the default as if the value was not found: >>> d = TypeConversionDict(foo='42', bar='blub') >>> d.get('foo', type=int) 42 >>> d.get('bar', -1, type=int) -1 :param key: The key to be looked up. :param default: The default value to be returned if the key can't be looked up. If not further specified `None` is returned. :param type: A callable that is used to cast the value in the :class:`MultiDict`. If a :exc:`ValueError` is raised by this callable the default value is returned. 
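        A sketch of the fallback when the conversion fails (values assumed)::

            >>> d = TypeConversionDict(foo='blub')
            >>> d.get('foo', -1, type=int)
            -1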
""" try: rv = self[key] except KeyError: return default if type is not None: try: rv = type(rv) except ValueError: rv = default return rv class ImmutableTypeConversionDict(ImmutableDictMixin, TypeConversionDict): """Works like a :class:`TypeConversionDict` but does not support modifications. .. versionadded:: 0.5 """ def copy(self): """Return a shallow mutable copy of this object. Keep in mind that the standard library's :func:`copy` function is a no-op for this class like for any other python immutable type (eg: :class:`tuple`). """ return TypeConversionDict(self) def __copy__(self): return self class ViewItems(object): def __init__(self, multi_dict, method, repr_name, *a, **kw): self.__multi_dict = multi_dict self.__method = method self.__repr_name = repr_name self.__a = a self.__kw = kw def __get_items(self): return getattr(self.__multi_dict, self.__method)(*self.__a, **self.__kw) def __repr__(self): return '%s(%r)' % (self.__repr_name, list(self.__get_items())) def __iter__(self): return iter(self.__get_items()) @native_itermethods(['keys', 'values', 'items', 'lists', 'listvalues']) class MultiDict(TypeConversionDict): """A :class:`MultiDict` is a dictionary subclass customized to deal with multiple values for the same key which is for example used by the parsing functions in the wrappers. This is necessary because some HTML form elements pass multiple values for the same key. :class:`MultiDict` implements all standard dictionary methods. Internally, it saves all values for a key as a list, but the standard dict access methods will only return the first value for a key. If you want to gain access to the other values, too, you have to use the `list` methods as explained below. Basic Usage: >>> d = MultiDict([('a', 'b'), ('a', 'c')]) >>> d MultiDict([('a', 'b'), ('a', 'c')]) >>> d['a'] 'b' >>> d.getlist('a') ['b', 'c'] >>> 'a' in d True It behaves like a normal dict thus all dict functions will only return the first value when multiple values for one key are found. From Werkzeug 0.3 onwards, the `KeyError` raised by this class is also a subclass of the :exc:`~exceptions.BadRequest` HTTP exception and will render a page for a ``400 BAD REQUEST`` if caught in a catch-all for HTTP exceptions. A :class:`MultiDict` can be constructed from an iterable of ``(key, value)`` tuples, a dict, a :class:`MultiDict` or from Werkzeug 0.2 onwards some keyword parameters. :param mapping: the initial value for the :class:`MultiDict`. Either a regular dict, an iterable of ``(key, value)`` tuples or `None`. """ def __init__(self, mapping=None): if isinstance(mapping, MultiDict): dict.__init__(self, ((k, l[:]) for k, l in iterlists(mapping))) elif isinstance(mapping, dict): tmp = {} for key, value in iteritems(mapping): if isinstance(value, (tuple, list)): if len(value) == 0: continue value = list(value) else: value = [value] tmp[key] = value dict.__init__(self, tmp) else: tmp = {} for key, value in mapping or (): tmp.setdefault(key, []).append(value) dict.__init__(self, tmp) def __getstate__(self): return dict(self.lists()) def __setstate__(self, value): dict.clear(self) dict.update(self, value) def __getitem__(self, key): """Return the first data value for this key; raises KeyError if not found. :param key: The key to be looked up. :raise KeyError: if the key does not exist. """ if key in self: lst = dict.__getitem__(self, key) if len(lst) > 0: return lst[0] raise exceptions.BadRequestKeyError(key) def __setitem__(self, key, value): """Like :meth:`add` but removes an existing key first. 
:param key: the key for the value. :param value: the value to set. """ dict.__setitem__(self, key, [value]) def add(self, key, value): """Adds a new value for the key. .. versionadded:: 0.6 :param key: the key for the value. :param value: the value to add. """ dict.setdefault(self, key, []).append(value) def getlist(self, key, type=None): """Return the list of items for a given key. If that key is not in the `MultiDict`, the return value will be an empty list. Just as `get` `getlist` accepts a `type` parameter. All items will be converted with the callable defined there. :param key: The key to be looked up. :param type: A callable that is used to cast the value in the :class:`MultiDict`. If a :exc:`ValueError` is raised by this callable the value will be removed from the list. :return: a :class:`list` of all the values for the key. """ try: rv = dict.__getitem__(self, key) except KeyError: return [] if type is None: return list(rv) result = [] for item in rv: try: result.append(type(item)) except ValueError: pass return result def setlist(self, key, new_list): """Remove the old values for a key and add new ones. Note that the list you pass the values in will be shallow-copied before it is inserted in the dictionary. >>> d = MultiDict() >>> d.setlist('foo', ['1', '2']) >>> d['foo'] '1' >>> d.getlist('foo') ['1', '2'] :param key: The key for which the values are set. :param new_list: An iterable with the new values for the key. Old values are removed first. """ dict.__setitem__(self, key, list(new_list)) def setdefault(self, key, default=None): """Returns the value for the key if it is in the dict, otherwise it returns `default` and sets that value for `key`. :param key: The key to be looked up. :param default: The default value to be returned if the key is not in the dict. If not further specified it's `None`. """ if key not in self: self[key] = default else: default = self[key] return default def setlistdefault(self, key, default_list=None): """Like `setdefault` but sets multiple values. The list returned is not a copy, but the list that is actually used internally. This means that you can put new values into the dict by appending items to the list: >>> d = MultiDict({"foo": 1}) >>> d.setlistdefault("foo").extend([2, 3]) >>> d.getlist("foo") [1, 2, 3] :param key: The key to be looked up. :param default_list: An iterable of default values. It is either copied (in case it was a list) or converted into a list before returned. :return: a :class:`list` """ if key not in self: default_list = list(default_list or ()) dict.__setitem__(self, key, default_list) else: default_list = dict.__getitem__(self, key) return default_list def items(self, multi=False): """Return an iterator of ``(key, value)`` pairs. :param multi: If set to `True` the iterator returned will have a pair for each value of each key. Otherwise it will only contain pairs for the first value of each key. """ for key, values in iteritems(dict, self): if multi: for value in values: yield key, value else: yield key, values[0] def lists(self): """Return a list of ``(key, values)`` pairs, where values is the list of all values associated with the key.""" for key, values in iteritems(dict, self): yield key, list(values) def keys(self): return iterkeys(dict, self) __iter__ = keys def values(self): """Returns an iterator of the first value on every key's value list.""" for values in itervalues(dict, self): yield values[0] def listvalues(self): """Return an iterator of all values associated with a key. 
Zipping :meth:`keys` and this is the same as calling :meth:`lists`: >>> d = MultiDict({"foo": [1, 2, 3]}) >>> zip(d.keys(), d.listvalues()) == d.lists() True """ return itervalues(dict, self) def copy(self): """Return a shallow copy of this object.""" return self.__class__(self) def deepcopy(self, memo=None): """Return a deep copy of this object.""" return self.__class__(deepcopy(self.to_dict(flat=False), memo)) def to_dict(self, flat=True): """Return the contents as regular dict. If `flat` is `True` the returned dict will only have the first item present, if `flat` is `False` all values will be returned as lists. :param flat: If set to `False` the dict returned will have lists with all the values in it. Otherwise it will only contain the first value for each key. :return: a :class:`dict` """ if flat: return dict(iteritems(self)) return dict(self.lists()) def update(self, other_dict): """update() extends rather than replaces existing key lists: >>> a = MultiDict({'x': 1}) >>> b = MultiDict({'x': 2, 'y': 3}) >>> a.update(b) >>> a MultiDict([('y', 3), ('x', 1), ('x', 2)]) If the value list for a key in ``other_dict`` is empty, no new values will be added to the dict and the key will not be created: >>> x = {'empty_list': []} >>> y = MultiDict() >>> y.update(x) >>> y MultiDict([]) """ for key, value in iter_multi_items(other_dict): MultiDict.add(self, key, value) def pop(self, key, default=_missing): """Pop the first item for a list on the dict. Afterwards the key is removed from the dict, so additional values are discarded: >>> d = MultiDict({"foo": [1, 2, 3]}) >>> d.pop("foo") 1 >>> "foo" in d False :param key: the key to pop. :param default: if provided the value to return if the key was not in the dictionary. """ try: lst = dict.pop(self, key) if len(lst) == 0: raise exceptions.BadRequestKeyError() return lst[0] except KeyError as e: if default is not _missing: return default raise exceptions.BadRequestKeyError(str(e)) def popitem(self): """Pop an item from the dict.""" try: item = dict.popitem(self) if len(item[1]) == 0: raise exceptions.BadRequestKeyError() return (item[0], item[1][0]) except KeyError as e: raise exceptions.BadRequestKeyError(str(e)) def poplist(self, key): """Pop the list for a key from the dict. If the key is not in the dict an empty list is returned. .. versionchanged:: 0.5 If the key does no longer exist a list is returned instead of raising an error. """ return dict.pop(self, key, []) def popitemlist(self): """Pop a ``(key, list)`` tuple from the dict.""" try: return dict.popitem(self) except KeyError as e: raise exceptions.BadRequestKeyError(str(e)) def __copy__(self): return self.copy() def __deepcopy__(self, memo): return self.deepcopy(memo=memo) def __repr__(self): return '%s(%r)' % (self.__class__.__name__, list(iteritems(self, multi=True))) class _omd_bucket(object): """Wraps values in the :class:`OrderedMultiDict`. This makes it possible to keep an order over multiple different keys. It requires a lot of extra memory and slows down access a lot, but makes it possible to access elements in O(1) and iterate in O(n). 
""" __slots__ = ('prev', 'key', 'value', 'next') def __init__(self, omd, key, value): self.prev = omd._last_bucket self.key = key self.value = value self.next = None if omd._first_bucket is None: omd._first_bucket = self if omd._last_bucket is not None: omd._last_bucket.next = self omd._last_bucket = self def unlink(self, omd): if self.prev: self.prev.next = self.next if self.next: self.next.prev = self.prev if omd._first_bucket is self: omd._first_bucket = self.next if omd._last_bucket is self: omd._last_bucket = self.prev @native_itermethods(['keys', 'values', 'items', 'lists', 'listvalues']) class OrderedMultiDict(MultiDict): """Works like a regular :class:`MultiDict` but preserves the order of the fields. To convert the ordered multi dict into a list you can use the :meth:`items` method and pass it ``multi=True``. In general an :class:`OrderedMultiDict` is an order of magnitude slower than a :class:`MultiDict`. .. admonition:: note Due to a limitation in Python you cannot convert an ordered multi dict into a regular dict by using ``dict(multidict)``. Instead you have to use the :meth:`to_dict` method, otherwise the internal bucket objects are exposed. """ def __init__(self, mapping=None): dict.__init__(self) self._first_bucket = self._last_bucket = None if mapping is not None: OrderedMultiDict.update(self, mapping) def __eq__(self, other): if not isinstance(other, MultiDict): return NotImplemented if isinstance(other, OrderedMultiDict): iter1 = iteritems(self, multi=True) iter2 = iteritems(other, multi=True) try: for k1, v1 in iter1: k2, v2 = next(iter2) if k1 != k2 or v1 != v2: return False except StopIteration: return False try: next(iter2) except StopIteration: return True return False if len(self) != len(other): return False for key, values in iterlists(self): if other.getlist(key) != values: return False return True __hash__ = None def __ne__(self, other): return not self.__eq__(other) def __reduce_ex__(self, protocol): return type(self), (list(iteritems(self, multi=True)),) def __getstate__(self): return list(iteritems(self, multi=True)) def __setstate__(self, values): dict.clear(self) for key, value in values: self.add(key, value) def __getitem__(self, key): if key in self: return dict.__getitem__(self, key)[0].value raise exceptions.BadRequestKeyError(key) def __setitem__(self, key, value): self.poplist(key) self.add(key, value) def __delitem__(self, key): self.pop(key) def keys(self): return (key for key, value in iteritems(self)) __iter__ = keys def values(self): return (value for key, value in iteritems(self)) def items(self, multi=False): ptr = self._first_bucket if multi: while ptr is not None: yield ptr.key, ptr.value ptr = ptr.next else: returned_keys = set() while ptr is not None: if ptr.key not in returned_keys: returned_keys.add(ptr.key) yield ptr.key, ptr.value ptr = ptr.next def lists(self): returned_keys = set() ptr = self._first_bucket while ptr is not None: if ptr.key not in returned_keys: yield ptr.key, self.getlist(ptr.key) returned_keys.add(ptr.key) ptr = ptr.next def listvalues(self): for key, values in iterlists(self): yield values def add(self, key, value): dict.setdefault(self, key, []).append(_omd_bucket(self, key, value)) def getlist(self, key, type=None): try: rv = dict.__getitem__(self, key) except KeyError: return [] if type is None: return [x.value for x in rv] result = [] for item in rv: try: result.append(type(item.value)) except ValueError: pass return result def setlist(self, key, new_list): self.poplist(key) for value in new_list: self.add(key, 
value) def setlistdefault(self, key, default_list=None): raise TypeError('setlistdefault is unsupported for ' 'ordered multi dicts') def update(self, mapping): for key, value in iter_multi_items(mapping): OrderedMultiDict.add(self, key, value) def poplist(self, key): buckets = dict.pop(self, key, ()) for bucket in buckets: bucket.unlink(self) return [x.value for x in buckets] def pop(self, key, default=_missing): try: buckets = dict.pop(self, key) except KeyError as e: if default is not _missing: return default raise exceptions.BadRequestKeyError(str(e)) for bucket in buckets: bucket.unlink(self) return buckets[0].value def popitem(self): try: key, buckets = dict.popitem(self) except KeyError as e: raise exceptions.BadRequestKeyError(str(e)) for bucket in buckets: bucket.unlink(self) return key, buckets[0].value def popitemlist(self): try: key, buckets = dict.popitem(self) except KeyError as e: raise exceptions.BadRequestKeyError(str(e)) for bucket in buckets: bucket.unlink(self) return key, [x.value for x in buckets] def _options_header_vkw(value, kw): return dump_options_header(value, dict((k.replace('_', '-'), v) for k, v in kw.items())) def _unicodify_header_value(value): if isinstance(value, bytes): value = value.decode('latin-1') if not isinstance(value, text_type): value = text_type(value) return value @native_itermethods(['keys', 'values', 'items']) class Headers(object): """An object that stores some headers. It has a dict-like interface but is ordered and can store the same keys multiple times. This data structure is useful if you want a nicer way to handle WSGI headers which are stored as tuples in a list. From Werkzeug 0.3 onwards, the :exc:`KeyError` raised by this class is also a subclass of the :class:`~exceptions.BadRequest` HTTP exception and will render a page for a ``400 BAD REQUEST`` if caught in a catch-all for HTTP exceptions. Headers is mostly compatible with the Python :class:`wsgiref.headers.Headers` class, with the exception of `__getitem__`. :mod:`wsgiref` will return `None` for ``headers['missing']``, whereas :class:`Headers` will raise a :class:`KeyError`. To create a new :class:`Headers` object pass it a list or dict of headers which are used as default values. This does not reuse the list passed to the constructor for internal usage. :param defaults: The list of default values for the :class:`Headers`. .. versionchanged:: 0.9 This data structure now stores unicode values similar to how the multi dicts do it. The main difference is that bytes can be set as well which will automatically be latin1 decoded. .. versionchanged:: 0.9 The :meth:`linked` function was removed without replacement as it was an API that does not support the changes to the encoding model. """ def __init__(self, defaults=None): self._list = [] if defaults is not None: if isinstance(defaults, (list, Headers)): self._list.extend(defaults) else: self.extend(defaults) def __getitem__(self, key, _get_mode=False): if not _get_mode: if isinstance(key, integer_types): return self._list[key] elif isinstance(key, slice): return self.__class__(self._list[key]) if not isinstance(key, string_types): raise exceptions.BadRequestKeyError(key) ikey = key.lower() for k, v in self._list: if k.lower() == ikey: return v # micro optimization: if we are in get mode we will catch that # exception one stack level down so we can raise a standard # key error instead of our special one. 
if _get_mode: raise KeyError() raise exceptions.BadRequestKeyError(key) def __eq__(self, other): return other.__class__ is self.__class__ and \ set(other._list) == set(self._list) __hash__ = None def __ne__(self, other): return not self.__eq__(other) def get(self, key, default=None, type=None, as_bytes=False): """Return the default value if the requested data doesn't exist. If `type` is provided and is a callable it should convert the value, return it or raise a :exc:`ValueError` if that is not possible. In this case the function will return the default as if the value was not found: >>> d = Headers([('Content-Length', '42')]) >>> d.get('Content-Length', type=int) 42 If a headers object is bound you must not add unicode strings because no encoding takes place. .. versionadded:: 0.9 Added support for `as_bytes`. :param key: The key to be looked up. :param default: The default value to be returned if the key can't be looked up. If not further specified `None` is returned. :param type: A callable that is used to cast the value in the :class:`Headers`. If a :exc:`ValueError` is raised by this callable the default value is returned. :param as_bytes: return bytes instead of unicode strings. """ try: rv = self.__getitem__(key, _get_mode=True) except KeyError: return default if as_bytes: rv = rv.encode('latin1') if type is None: return rv try: return type(rv) except ValueError: return default def getlist(self, key, type=None, as_bytes=False): """Return the list of items for a given key. If that key is not in the :class:`Headers`, the return value will be an empty list. Just as :meth:`get` :meth:`getlist` accepts a `type` parameter. All items will be converted with the callable defined there. .. versionadded:: 0.9 Added support for `as_bytes`. :param key: The key to be looked up. :param type: A callable that is used to cast the value in the :class:`Headers`. If a :exc:`ValueError` is raised by this callable the value will be removed from the list. :return: a :class:`list` of all the values for the key. :param as_bytes: return bytes instead of unicode strings. """ ikey = key.lower() result = [] for k, v in self: if k.lower() == ikey: if as_bytes: v = v.encode('latin1') if type is not None: try: v = type(v) except ValueError: continue result.append(v) return result def get_all(self, name): """Return a list of all the values for the named field. This method is compatible with the :mod:`wsgiref` :meth:`~wsgiref.headers.Headers.get_all` method. """ return self.getlist(name) def items(self, lower=False): for key, value in self: if lower: key = key.lower() yield key, value def keys(self, lower=False): for key, _ in iteritems(self, lower): yield key def values(self): for _, value in iteritems(self): yield value def extend(self, iterable): """Extend the headers with a dict or an iterable yielding keys and values. """ if isinstance(iterable, dict): for key, value in iteritems(iterable): if isinstance(value, (tuple, list)): for v in value: self.add(key, v) else: self.add(key, value) else: for key, value in iterable: self.add(key, value) def __delitem__(self, key, _index_operation=True): if _index_operation and isinstance(key, (integer_types, slice)): del self._list[key] return key = key.lower() new = [] for k, v in self._list: if k.lower() != key: new.append((k, v)) self._list[:] = new def remove(self, key): """Remove a key. :param key: The key to be removed. """ return self.__delitem__(key, _index_operation=False) def pop(self, key=None, default=_missing): """Removes and returns a key or index. 
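        A sketch of both lookup styles, with assumed header values::

            >>> h = Headers([('Content-Type', 'text/plain'), ('X-Foo', '1')])
            >>> h.pop('X-Foo')
            '1'
            >>> h.pop(0)
            ('Content-Type', 'text/plain')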
:param key: The key to be popped. If this is an integer the item at that position is removed, if it's a string the value for that key is. If the key is omitted or `None` the last item is removed. :return: an item. """ if key is None: return self._list.pop() if isinstance(key, integer_types): return self._list.pop(key) try: rv = self[key] self.remove(key) except KeyError: if default is not _missing: return default raise return rv def popitem(self): """Removes a key or index and returns a (key, value) item.""" return self.pop() def __contains__(self, key): """Check if a key is present.""" try: self.__getitem__(key, _get_mode=True) except KeyError: return False return True has_key = __contains__ def __iter__(self): """Yield ``(key, value)`` tuples.""" return iter(self._list) def __len__(self): return len(self._list) def add(self, _key, _value, **kw): """Add a new header tuple to the list. Keyword arguments can specify additional parameters for the header value, with underscores converted to dashes:: >>> d = Headers() >>> d.add('Content-Type', 'text/plain') >>> d.add('Content-Disposition', 'attachment', filename='foo.png') The keyword argument dumping uses :func:`dump_options_header` behind the scenes. .. versionadded:: 0.4.1 keyword arguments were added for :mod:`wsgiref` compatibility. """ if kw: _value = _options_header_vkw(_value, kw) _value = _unicodify_header_value(_value) self._validate_value(_value) self._list.append((_key, _value)) def _validate_value(self, value): if not isinstance(value, text_type): raise TypeError('Value should be unicode.') if u'\n' in value or u'\r' in value: raise ValueError('Detected newline in header value. This is ' 'a potential security problem') def add_header(self, _key, _value, **_kw): """Add a new header tuple to the list. An alias for :meth:`add` for compatibility with the :mod:`wsgiref` :meth:`~wsgiref.headers.Headers.add_header` method. """ self.add(_key, _value, **_kw) def clear(self): """Clears all headers.""" del self._list[:] def set(self, _key, _value, **kw): """Remove all header tuples for `key` and add a new one. The newly added key either appears at the end of the list if there was no entry or replaces the first one. Keyword arguments can specify additional parameters for the header value, with underscores converted to dashes. See :meth:`add` for more information. .. versionchanged:: 0.6.1 :meth:`set` now accepts the same arguments as :meth:`add`. :param key: The key to be inserted. :param value: The value to be inserted. """ if kw: _value = _options_header_vkw(_value, kw) _value = _unicodify_header_value(_value) self._validate_value(_value) if not self._list: self._list.append((_key, _value)) return listiter = iter(self._list) ikey = _key.lower() for idx, (old_key, old_value) in enumerate(listiter): if old_key.lower() == ikey: # replace first ocurrence self._list[idx] = (_key, _value) break else: self._list.append((_key, _value)) return self._list[idx + 1:] = [t for t in listiter if t[0].lower() != ikey] def setdefault(self, key, default): """Returns the value for the key if it is in the dict, otherwise it returns `default` and sets that value for `key`. :param key: The key to be looked up. :param default: The default value to be returned if the key is not in the dict. If not further specified it's `None`. 
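        A sketch with an assumed custom header::

            >>> h = Headers()
            >>> h.setdefault('X-Foo', 'bar')
            'bar'
            >>> 'X-Foo' in h
            True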
""" if key in self: return self[key] self.set(key, default) return default def __setitem__(self, key, value): """Like :meth:`set` but also supports index/slice based setting.""" if isinstance(key, (slice, integer_types)): if isinstance(key, integer_types): value = [value] value = [(k, _unicodify_header_value(v)) for (k, v) in value] [self._validate_value(v) for (k, v) in value] if isinstance(key, integer_types): self._list[key] = value[0] else: self._list[key] = value else: self.set(key, value) def to_list(self, charset='iso-8859-1'): """Convert the headers into a list suitable for WSGI.""" from warnings import warn warn(DeprecationWarning('Method removed, use to_wsgi_list instead'), stacklevel=2) return self.to_wsgi_list() def to_wsgi_list(self): """Convert the headers into a list suitable for WSGI. The values are byte strings in Python 2 converted to latin1 and unicode strings in Python 3 for the WSGI server to encode. :return: list """ if PY2: return [(to_native(k), v.encode('latin1')) for k, v in self] return list(self) def copy(self): return self.__class__(self._list) def __copy__(self): return self.copy() def __str__(self): """Returns formatted headers suitable for HTTP transmission.""" strs = [] for key, value in self.to_wsgi_list(): strs.append('%s: %s' % (key, value)) strs.append('\r\n') return '\r\n'.join(strs) def __repr__(self): return '%s(%r)' % ( self.__class__.__name__, list(self) ) class ImmutableHeadersMixin(object): """Makes a :class:`Headers` immutable. We do not mark them as hashable though since the only usecase for this datastructure in Werkzeug is a view on a mutable structure. .. versionadded:: 0.5 :private: """ def __delitem__(self, key): is_immutable(self) def __setitem__(self, key, value): is_immutable(self) set = __setitem__ def add(self, item): is_immutable(self) remove = add_header = add def extend(self, iterable): is_immutable(self) def insert(self, pos, value): is_immutable(self) def pop(self, index=-1): is_immutable(self) def popitem(self): is_immutable(self) def setdefault(self, key, default): is_immutable(self) class EnvironHeaders(ImmutableHeadersMixin, Headers): """Read only version of the headers from a WSGI environment. This provides the same interface as `Headers` and is constructed from a WSGI environment. From Werkzeug 0.3 onwards, the `KeyError` raised by this class is also a subclass of the :exc:`~exceptions.BadRequest` HTTP exception and will render a page for a ``400 BAD REQUEST`` if caught in a catch-all for HTTP exceptions. """ def __init__(self, environ): self.environ = environ def __eq__(self, other): return self.environ is other.environ __hash__ = None def __getitem__(self, key, _get_mode=False): # _get_mode is a no-op for this class as there is no index but # used because get() calls it. if not isinstance(key, string_types): raise KeyError(key) key = key.upper().replace('-', '_') if key in ('CONTENT_TYPE', 'CONTENT_LENGTH'): return _unicodify_header_value(self.environ[key]) return _unicodify_header_value(self.environ['HTTP_' + key]) def __len__(self): # the iter is necessary because otherwise list calls our # len which would call list again and so forth. 
return len(list(iter(self))) def __iter__(self): for key, value in iteritems(self.environ): if key.startswith('HTTP_') and key not in \ ('HTTP_CONTENT_TYPE', 'HTTP_CONTENT_LENGTH'): yield (key[5:].replace('_', '-').title(), _unicodify_header_value(value)) elif key in ('CONTENT_TYPE', 'CONTENT_LENGTH') and value: yield (key.replace('_', '-').title(), _unicodify_header_value(value)) def copy(self): raise TypeError('cannot create %r copies' % self.__class__.__name__) @native_itermethods(['keys', 'values', 'items', 'lists', 'listvalues']) class CombinedMultiDict(ImmutableMultiDictMixin, MultiDict): """A read only :class:`MultiDict` that you can pass multiple :class:`MultiDict` instances as sequence and it will combine the return values of all wrapped dicts: >>> from werkzeug.datastructures import CombinedMultiDict, MultiDict >>> post = MultiDict([('foo', 'bar')]) >>> get = MultiDict([('blub', 'blah')]) >>> combined = CombinedMultiDict([get, post]) >>> combined['foo'] 'bar' >>> combined['blub'] 'blah' This works for all read operations and will raise a `TypeError` for methods that usually change data which isn't possible. From Werkzeug 0.3 onwards, the `KeyError` raised by this class is also a subclass of the :exc:`~exceptions.BadRequest` HTTP exception and will render a page for a ``400 BAD REQUEST`` if caught in a catch-all for HTTP exceptions. """ def __reduce_ex__(self, protocol): return type(self), (self.dicts,) def __init__(self, dicts=None): self.dicts = dicts or [] @classmethod def fromkeys(cls): raise TypeError('cannot create %r instances by fromkeys' % cls.__name__) def __getitem__(self, key): for d in self.dicts: if key in d: return d[key] raise exceptions.BadRequestKeyError(key) def get(self, key, default=None, type=None): for d in self.dicts: if key in d: if type is not None: try: return type(d[key]) except ValueError: continue return d[key] return default def getlist(self, key, type=None): rv = [] for d in self.dicts: rv.extend(d.getlist(key, type)) return rv def _keys_impl(self): """This function exists so __len__ can be implemented more efficiently, saving one list creation from an iterator. Using this for Python 2's ``dict.keys`` behavior would be useless since `dict.keys` in Python 2 returns a list, while we have a set here. """ rv = set() for d in self.dicts: rv.update(iterkeys(d)) return rv def keys(self): return iter(self._keys_impl()) __iter__ = keys def items(self, multi=False): found = set() for d in self.dicts: for key, value in iteritems(d, multi): if multi: yield key, value elif key not in found: found.add(key) yield key, value def values(self): for key, value in iteritems(self): yield value def lists(self): rv = {} for d in self.dicts: for key, values in iterlists(d): rv.setdefault(key, []).extend(values) return iteritems(rv) def listvalues(self): return (x[1] for x in self.lists()) def copy(self): """Return a shallow copy of this object.""" return self.__class__(self.dicts[:]) def to_dict(self, flat=True): """Return the contents as regular dict. If `flat` is `True` the returned dict will only have the first item present, if `flat` is `False` all values will be returned as lists. :param flat: If set to `False` the dict returned will have lists with all the values in it. Otherwise it will only contain the first item for each key. 
:return: a :class:`dict` """ rv = {} for d in reversed(self.dicts): rv.update(d.to_dict(flat)) return rv def __len__(self): return len(self._keys_impl()) def __contains__(self, key): for d in self.dicts: if key in d: return True return False has_key = __contains__ def __repr__(self): return '%s(%r)' % (self.__class__.__name__, self.dicts) class FileMultiDict(MultiDict): """A special :class:`MultiDict` that has convenience methods to add files to it. This is used for :class:`EnvironBuilder` and generally useful for unittesting. .. versionadded:: 0.5 """ def add_file(self, name, file, filename=None, content_type=None): """Adds a new file to the dict. `file` can be a file name or a :class:`file`-like or a :class:`FileStorage` object. :param name: the name of the field. :param file: a filename or :class:`file`-like object :param filename: an optional filename :param content_type: an optional content type """ if isinstance(file, FileStorage): value = file else: if isinstance(file, string_types): if filename is None: filename = file file = open(file, 'rb') if filename and content_type is None: content_type = mimetypes.guess_type(filename)[0] or \ 'application/octet-stream' value = FileStorage(file, filename, name, content_type) self.add(name, value) class ImmutableDict(ImmutableDictMixin, dict): """An immutable :class:`dict`. .. versionadded:: 0.5 """ def __repr__(self): return '%s(%s)' % ( self.__class__.__name__, dict.__repr__(self), ) def copy(self): """Return a shallow mutable copy of this object. Keep in mind that the standard library's :func:`copy` function is a no-op for this class like for any other python immutable type (eg: :class:`tuple`). """ return dict(self) def __copy__(self): return self class ImmutableMultiDict(ImmutableMultiDictMixin, MultiDict): """An immutable :class:`MultiDict`. .. versionadded:: 0.5 """ def copy(self): """Return a shallow mutable copy of this object. Keep in mind that the standard library's :func:`copy` function is a no-op for this class like for any other python immutable type (eg: :class:`tuple`). """ return MultiDict(self) def __copy__(self): return self class ImmutableOrderedMultiDict(ImmutableMultiDictMixin, OrderedMultiDict): """An immutable :class:`OrderedMultiDict`. .. versionadded:: 0.6 """ def _iter_hashitems(self): return enumerate(iteritems(self, multi=True)) def copy(self): """Return a shallow mutable copy of this object. Keep in mind that the standard library's :func:`copy` function is a no-op for this class like for any other python immutable type (eg: :class:`tuple`). """ return OrderedMultiDict(self) def __copy__(self): return self @native_itermethods(['values']) class Accept(ImmutableList): """An :class:`Accept` object is just a list subclass for lists of ``(value, quality)`` tuples. It is automatically sorted by specificity and quality. All :class:`Accept` objects work similar to a list but provide extra functionality for working with the data. Containment checks are normalized to the rules of that header: >>> a = CharsetAccept([('ISO-8859-1', 1), ('utf-8', 0.7)]) >>> a.best 'ISO-8859-1' >>> 'iso-8859-1' in a True >>> 'UTF8' in a True >>> 'utf7' in a False To get the quality for an item you can use normal item lookup: >>> print a['utf-8'] 0.7 >>> a['utf7'] 0 .. versionchanged:: 0.5 :class:`Accept` objects are forced immutable now. 
""" def __init__(self, values=()): if values is None: list.__init__(self) self.provided = False elif isinstance(values, Accept): self.provided = values.provided list.__init__(self, values) else: self.provided = True values = sorted(values, key=lambda x: (self._specificity(x[0]), x[1], x[0]), reverse=True) list.__init__(self, values) def _specificity(self, value): """Returns a tuple describing the value's specificity.""" return value != '*', def _value_matches(self, value, item): """Check if a value matches a given accept item.""" return item == '*' or item.lower() == value.lower() def __getitem__(self, key): """Besides index lookup (getting item n) you can also pass it a string to get the quality for the item. If the item is not in the list, the returned quality is ``0``. """ if isinstance(key, string_types): return self.quality(key) return list.__getitem__(self, key) def quality(self, key): """Returns the quality of the key. .. versionadded:: 0.6 In previous versions you had to use the item-lookup syntax (eg: ``obj[key]`` instead of ``obj.quality(key)``) """ for item, quality in self: if self._value_matches(key, item): return quality return 0 def __contains__(self, value): for item, quality in self: if self._value_matches(value, item): return True return False def __repr__(self): return '%s([%s])' % ( self.__class__.__name__, ', '.join('(%r, %s)' % (x, y) for x, y in self) ) def index(self, key): """Get the position of an entry or raise :exc:`ValueError`. :param key: The key to be looked up. .. versionchanged:: 0.5 This used to raise :exc:`IndexError`, which was inconsistent with the list API. """ if isinstance(key, string_types): for idx, (item, quality) in enumerate(self): if self._value_matches(key, item): return idx raise ValueError(key) return list.index(self, key) def find(self, key): """Get the position of an entry or return -1. :param key: The key to be looked up. """ try: return self.index(key) except ValueError: return -1 def values(self): """Iterate over all values.""" for item in self: yield item[0] def to_header(self): """Convert the header set into an HTTP header string.""" result = [] for value, quality in self: if quality != 1: value = '%s;q=%s' % (value, quality) result.append(value) return ','.join(result) def __str__(self): return self.to_header() def _best_single_match(self, match): for client_item, quality in self: if self._value_matches(match, client_item): # self is sorted by specificity descending, we can exit return client_item, quality def best_match(self, matches, default=None): """Returns the best match from a list of possible matches based on the specificity and quality of the client. If two items have the same quality and specificity, the one is returned that comes first. :param matches: a list of matches to check for :param default: the value that is returned if none match """ result = default best_quality = -1 best_specificity = (-1,) for server_item in matches: match = self._best_single_match(server_item) if not match: continue client_item, quality = match specificity = self._specificity(client_item) if quality <= 0 or quality < best_quality: continue # better quality or same quality but more specific => better match if quality > best_quality or specificity > best_specificity: result = server_item best_quality = quality best_specificity = specificity return result @property def best(self): """The best match as value.""" if self: return self[0][0] class MIMEAccept(Accept): """Like :class:`Accept` but with special methods and behavior for mimetypes. 
""" def _specificity(self, value): return tuple(x != '*' for x in value.split('/', 1)) def _value_matches(self, value, item): def _normalize(x): x = x.lower() return x == '*' and ('*', '*') or x.split('/', 1) # this is from the application which is trusted. to avoid developer # frustration we actually check these for valid values if '/' not in value: raise ValueError('invalid mimetype %r' % value) value_type, value_subtype = _normalize(value) if value_type == '*' and value_subtype != '*': raise ValueError('invalid mimetype %r' % value) if '/' not in item: return False item_type, item_subtype = _normalize(item) if item_type == '*' and item_subtype != '*': return False return ( (item_type == item_subtype == '*' or value_type == value_subtype == '*') or (item_type == value_type and (item_subtype == '*' or value_subtype == '*' or item_subtype == value_subtype)) ) @property def accept_html(self): """True if this object accepts HTML.""" return ( 'text/html' in self or 'application/xhtml+xml' in self or self.accept_xhtml ) @property def accept_xhtml(self): """True if this object accepts XHTML.""" return ( 'application/xhtml+xml' in self or 'application/xml' in self ) @property def accept_json(self): """True if this object accepts JSON.""" return 'application/json' in self class LanguageAccept(Accept): """Like :class:`Accept` but with normalization for languages.""" def _value_matches(self, value, item): def _normalize(language): return _locale_delim_re.split(language.lower()) return item == '*' or _normalize(value) == _normalize(item) class CharsetAccept(Accept): """Like :class:`Accept` but with normalization for charsets.""" def _value_matches(self, value, item): def _normalize(name): try: return codecs.lookup(name).name except LookupError: return name.lower() return item == '*' or _normalize(value) == _normalize(item) def cache_property(key, empty, type): """Return a new property object for a cache header. Useful if you want to add support for a cache extension in a subclass.""" return property(lambda x: x._get_cache_value(key, empty, type), lambda x, v: x._set_cache_value(key, v, type), lambda x: x._del_cache_value(key), 'accessor for %r' % key) class _CacheControl(UpdateDictMixin, dict): """Subclass of a dict that stores values for a Cache-Control header. It has accessors for all the cache-control directives specified in RFC 2616. The class does not differentiate between request and response directives. Because the cache-control directives in the HTTP header use dashes the python descriptors use underscores for that. To get a header of the :class:`CacheControl` object again you can convert the object into a string or call the :meth:`to_header` method. If you plan to subclass it and add your own items have a look at the sourcecode for that class. .. versionchanged:: 0.4 Setting `no_cache` or `private` to boolean `True` will set the implicit none-value which is ``*``: >>> cc = ResponseCacheControl() >>> cc.no_cache = True >>> cc >>> cc.no_cache '*' >>> cc.no_cache = None >>> cc In versions before 0.5 the behavior documented here affected the now no longer existing `CacheControl` class. 
""" no_cache = cache_property('no-cache', '*', None) no_store = cache_property('no-store', None, bool) max_age = cache_property('max-age', -1, int) no_transform = cache_property('no-transform', None, None) def __init__(self, values=(), on_update=None): dict.__init__(self, values or ()) self.on_update = on_update self.provided = values is not None def _get_cache_value(self, key, empty, type): """Used internally by the accessor properties.""" if type is bool: return key in self if key in self: value = self[key] if value is None: return empty elif type is not None: try: value = type(value) except ValueError: pass return value def _set_cache_value(self, key, value, type): """Used internally by the accessor properties.""" if type is bool: if value: self[key] = None else: self.pop(key, None) else: if value is None: self.pop(key) elif value is True: self[key] = None else: self[key] = value def _del_cache_value(self, key): """Used internally by the accessor properties.""" if key in self: del self[key] def to_header(self): """Convert the stored values into a cache control header.""" return dump_header(self) def __str__(self): return self.to_header() def __repr__(self): return '<%s %s>' % ( self.__class__.__name__, " ".join( "%s=%r" % (k, v) for k, v in sorted(self.items()) ), ) class RequestCacheControl(ImmutableDictMixin, _CacheControl): """A cache control for requests. This is immutable and gives access to all the request-relevant cache control headers. To get a header of the :class:`RequestCacheControl` object again you can convert the object into a string or call the :meth:`to_header` method. If you plan to subclass it and add your own items have a look at the sourcecode for that class. .. versionadded:: 0.5 In previous versions a `CacheControl` class existed that was used both for request and response. """ max_stale = cache_property('max-stale', '*', int) min_fresh = cache_property('min-fresh', '*', int) no_transform = cache_property('no-transform', None, None) only_if_cached = cache_property('only-if-cached', None, bool) class ResponseCacheControl(_CacheControl): """A cache control for responses. Unlike :class:`RequestCacheControl` this is mutable and gives access to response-relevant cache control headers. To get a header of the :class:`ResponseCacheControl` object again you can convert the object into a string or call the :meth:`to_header` method. If you plan to subclass it and add your own items have a look at the sourcecode for that class. .. versionadded:: 0.5 In previous versions a `CacheControl` class existed that was used both for request and response. """ public = cache_property('public', None, bool) private = cache_property('private', '*', None) must_revalidate = cache_property('must-revalidate', None, bool) proxy_revalidate = cache_property('proxy-revalidate', None, bool) s_maxage = cache_property('s-maxage', None, None) # attach cache_property to the _CacheControl as staticmethod # so that others can reuse it. _CacheControl.cache_property = staticmethod(cache_property) class CallbackDict(UpdateDictMixin, dict): """A dict that calls a function passed every time something is changed. The function is passed the dict instance. """ def __init__(self, initial=None, on_update=None): dict.__init__(self, initial or ()) self.on_update = on_update def __repr__(self): return '<%s %s>' % ( self.__class__.__name__, dict.__repr__(self) ) class HeaderSet(MutableSet): """Similar to the :class:`ETags` class this implements a set-like structure. 
Unlike :class:`ETags` this is case insensitive and used for vary, allow, and content-language headers. If not constructed using the :func:`parse_set_header` function the instantiation works like this: >>> hs = HeaderSet(['foo', 'bar', 'baz']) >>> hs HeaderSet(['foo', 'bar', 'baz']) """ def __init__(self, headers=None, on_update=None): self._headers = list(headers or ()) self._set = set([x.lower() for x in self._headers]) self.on_update = on_update def add(self, header): """Add a new header to the set.""" self.update((header,)) def remove(self, header): """Remove a header from the set. This raises an :exc:`KeyError` if the header is not in the set. .. versionchanged:: 0.5 In older versions a :exc:`IndexError` was raised instead of a :exc:`KeyError` if the object was missing. :param header: the header to be removed. """ key = header.lower() if key not in self._set: raise KeyError(header) self._set.remove(key) for idx, key in enumerate(self._headers): if key.lower() == header: del self._headers[idx] break if self.on_update is not None: self.on_update(self) def update(self, iterable): """Add all the headers from the iterable to the set. :param iterable: updates the set with the items from the iterable. """ inserted_any = False for header in iterable: key = header.lower() if key not in self._set: self._headers.append(header) self._set.add(key) inserted_any = True if inserted_any and self.on_update is not None: self.on_update(self) def discard(self, header): """Like :meth:`remove` but ignores errors. :param header: the header to be discarded. """ try: return self.remove(header) except KeyError: pass def find(self, header): """Return the index of the header in the set or return -1 if not found. :param header: the header to be looked up. """ header = header.lower() for idx, item in enumerate(self._headers): if item.lower() == header: return idx return -1 def index(self, header): """Return the index of the header in the set or raise an :exc:`IndexError`. :param header: the header to be looked up. """ rv = self.find(header) if rv < 0: raise IndexError(header) return rv def clear(self): """Clear the set.""" self._set.clear() del self._headers[:] if self.on_update is not None: self.on_update(self) def as_set(self, preserve_casing=False): """Return the set as real python set type. When calling this, all the items are converted to lowercase and the ordering is lost. :param preserve_casing: if set to `True` the items in the set returned will have the original case like in the :class:`HeaderSet`, otherwise they will be lowercase. 
""" if preserve_casing: return set(self._headers) return set(self._set) def to_header(self): """Convert the header set into an HTTP header string.""" return ', '.join(map(quote_header_value, self._headers)) def __getitem__(self, idx): return self._headers[idx] def __delitem__(self, idx): rv = self._headers.pop(idx) self._set.remove(rv.lower()) if self.on_update is not None: self.on_update(self) def __setitem__(self, idx, value): old = self._headers[idx] self._set.remove(old.lower()) self._headers[idx] = value self._set.add(value.lower()) if self.on_update is not None: self.on_update(self) def __contains__(self, header): return header.lower() in self._set def __len__(self): return len(self._set) def __iter__(self): return iter(self._headers) def __nonzero__(self): return bool(self._set) def __str__(self): return self.to_header() def __repr__(self): return '%s(%r)' % ( self.__class__.__name__, self._headers ) class ETags(Container, Iterable): """A set that can be used to check if one etag is present in a collection of etags. """ def __init__(self, strong_etags=None, weak_etags=None, star_tag=False): self._strong = frozenset(not star_tag and strong_etags or ()) self._weak = frozenset(weak_etags or ()) self.star_tag = star_tag def as_set(self, include_weak=False): """Convert the `ETags` object into a python set. Per default all the weak etags are not part of this set.""" rv = set(self._strong) if include_weak: rv.update(self._weak) return rv def is_weak(self, etag): """Check if an etag is weak.""" return etag in self._weak def is_strong(self, etag): """Check if an etag is strong.""" return etag in self._strong def contains_weak(self, etag): """Check if an etag is part of the set including weak and strong tags.""" return self.is_weak(etag) or self.contains(etag) def contains(self, etag): """Check if an etag is part of the set ignoring weak tags. It is also possible to use the ``in`` operator. """ if self.star_tag: return True return self.is_strong(etag) def contains_raw(self, etag): """When passed a quoted tag it will check if this tag is part of the set. If the tag is weak it is checked against weak and strong tags, otherwise strong only.""" etag, weak = unquote_etag(etag) if weak: return self.contains_weak(etag) return self.contains(etag) def to_header(self): """Convert the etags set into a HTTP header string.""" if self.star_tag: return '*' return ', '.join( ['"%s"' % x for x in self._strong] + ['W/"%s"' % x for x in self._weak] ) def __call__(self, etag=None, data=None, include_weak=False): if [etag, data].count(None) != 1: raise TypeError('either tag or data required, but at least one') if etag is None: etag = generate_etag(data) if include_weak: if etag in self._weak: return True return etag in self._strong def __bool__(self): return bool(self.star_tag or self._strong or self._weak) __nonzero__ = __bool__ def __str__(self): return self.to_header() def __iter__(self): return iter(self._strong) def __contains__(self, etag): return self.contains(etag) def __repr__(self): return '<%s %r>' % (self.__class__.__name__, str(self)) class IfRange(object): """Very simple object that represents the `If-Range` header in parsed form. It will either have neither a etag or date or one of either but never both. .. versionadded:: 0.7 """ def __init__(self, etag=None, date=None): #: The etag parsed and unquoted. Ranges always operate on strong #: etags so the weakness information is not necessary. self.etag = etag #: The date in parsed format or `None`. 
self.date = date def to_header(self): """Converts the object back into an HTTP header.""" if self.date is not None: return http_date(self.date) if self.etag is not None: return quote_etag(self.etag) return '' def __str__(self): return self.to_header() def __repr__(self): return '<%s %r>' % (self.__class__.__name__, str(self)) class Range(object): """Represents a range header. All the methods are only supporting bytes as unit. It does store multiple ranges but :meth:`range_for_length` will only work if only one range is provided. .. versionadded:: 0.7 """ def __init__(self, units, ranges): #: The units of this range. Usually "bytes". self.units = units #: A list of ``(begin, end)`` tuples for the range header provided. #: The ranges are non-inclusive. self.ranges = ranges def range_for_length(self, length): """If the range is for bytes, the length is not None and there is exactly one range and it is satisfiable it returns a ``(start, stop)`` tuple, otherwise `None`. """ if self.units != 'bytes' or length is None or len(self.ranges) != 1: return None start, end = self.ranges[0] if end is None: end = length if start < 0: start += length if is_byte_range_valid(start, end, length): return start, min(end, length) def make_content_range(self, length): """Creates a :class:`~werkzeug.datastructures.ContentRange` object from the current range and given content length. """ rng = self.range_for_length(length) if rng is not None: return ContentRange(self.units, rng[0], rng[1], length) def to_header(self): """Converts the object back into an HTTP header.""" ranges = [] for begin, end in self.ranges: if end is None: ranges.append(begin >= 0 and '%s-' % begin or str(begin)) else: ranges.append('%s-%s' % (begin, end - 1)) return '%s=%s' % (self.units, ','.join(ranges)) def to_content_range_header(self, length): """Converts the object into `Content-Range` HTTP header, based on given length """ range_for_length = self.range_for_length(length) if range_for_length is not None: return '%s %d-%d/%d' % (self.units, range_for_length[0], range_for_length[1] - 1, length) return None def __str__(self): return self.to_header() def __repr__(self): return '<%s %r>' % (self.__class__.__name__, str(self)) class ContentRange(object): """Represents the content range header. .. versionadded:: 0.7 """ def __init__(self, units, start, stop, length=None, on_update=None): assert is_byte_range_valid(start, stop, length), \ 'Bad range provided' self.on_update = on_update self.set(start, stop, length, units) def _callback_property(name): def fget(self): return getattr(self, name) def fset(self, value): setattr(self, name, value) if self.on_update is not None: self.on_update(self) return property(fget, fset) #: The units to use, usually "bytes" units = _callback_property('_units') #: The start point of the range or `None`. start = _callback_property('_start') #: The stop point of the range (non-inclusive) or `None`. Can only be #: `None` if also start is `None`. stop = _callback_property('_stop') #: The length of the range or `None`. length = _callback_property('_length') def set(self, start, stop, length=None, units='bytes'): """Simple method to update the ranges.""" assert is_byte_range_valid(start, stop, length), \ 'Bad range provided' self._units = units self._start = start self._stop = stop self._length = length if self.on_update is not None: self.on_update(self) def unset(self): """Sets the units to `None` which indicates that the header should no longer be used. 
""" self.set(None, None, units=None) def to_header(self): if self.units is None: return '' if self.length is None: length = '*' else: length = self.length if self.start is None: return '%s */%s' % (self.units, length) return '%s %s-%s/%s' % ( self.units, self.start, self.stop - 1, length ) def __nonzero__(self): return self.units is not None __bool__ = __nonzero__ def __str__(self): return self.to_header() def __repr__(self): return '<%s %r>' % (self.__class__.__name__, str(self)) class Authorization(ImmutableDictMixin, dict): """Represents an `Authorization` header sent by the client. You should not create this kind of object yourself but use it when it's returned by the `parse_authorization_header` function. This object is a dict subclass and can be altered by setting dict items but it should be considered immutable as it's returned by the client and not meant for modifications. .. versionchanged:: 0.5 This object became immutable. """ def __init__(self, auth_type, data=None): dict.__init__(self, data or {}) self.type = auth_type username = property(lambda x: x.get('username'), doc=''' The username transmitted. This is set for both basic and digest auth all the time.''') password = property(lambda x: x.get('password'), doc=''' When the authentication type is basic this is the password transmitted by the client, else `None`.''') realm = property(lambda x: x.get('realm'), doc=''' This is the server realm sent back for HTTP digest auth.''') nonce = property(lambda x: x.get('nonce'), doc=''' The nonce the server sent for digest auth, sent back by the client. A nonce should be unique for every 401 response for HTTP digest auth.''') uri = property(lambda x: x.get('uri'), doc=''' The URI from Request-URI of the Request-Line; duplicated because proxies are allowed to change the Request-Line in transit. HTTP digest auth only.''') nc = property(lambda x: x.get('nc'), doc=''' The nonce count value transmitted by clients if a qop-header is also transmitted. HTTP digest auth only.''') cnonce = property(lambda x: x.get('cnonce'), doc=''' If the server sent a qop-header in the ``WWW-Authenticate`` header, the client has to provide this value for HTTP digest auth. See the RFC for more details.''') response = property(lambda x: x.get('response'), doc=''' A string of 32 hex digits computed as defined in RFC 2617, which proves that the user knows a password. Digest auth only.''') opaque = property(lambda x: x.get('opaque'), doc=''' The opaque header from the server returned unchanged by the client. It is recommended that this string be base64 or hexadecimal data. Digest auth only.''') qop = property(lambda x: x.get('qop'), doc=''' Indicates what "quality of protection" the client has applied to the message for HTTP digest auth. 
Note that this is a single token, not a quoted list of alternatives as in WWW-Authenticate.''') class WWWAuthenticate(UpdateDictMixin, dict): """Provides simple access to `WWW-Authenticate` headers.""" #: list of keys that require quoting in the generated header _require_quoting = frozenset(['domain', 'nonce', 'opaque', 'realm', 'qop']) def __init__(self, auth_type=None, values=None, on_update=None): dict.__init__(self, values or ()) if auth_type: self['__auth_type__'] = auth_type self.on_update = on_update def set_basic(self, realm='authentication required'): """Clear the auth info and enable basic auth.""" dict.clear(self) dict.update(self, {'__auth_type__': 'basic', 'realm': realm}) if self.on_update: self.on_update(self) def set_digest(self, realm, nonce, qop=('auth',), opaque=None, algorithm=None, stale=False): """Clear the auth info and enable digest auth.""" d = { '__auth_type__': 'digest', 'realm': realm, 'nonce': nonce, 'qop': dump_header(qop) } if stale: d['stale'] = 'TRUE' if opaque is not None: d['opaque'] = opaque if algorithm is not None: d['algorithm'] = algorithm dict.clear(self) dict.update(self, d) if self.on_update: self.on_update(self) def to_header(self): """Convert the stored values into a WWW-Authenticate header.""" d = dict(self) auth_type = d.pop('__auth_type__', None) or 'basic' return '%s %s' % (auth_type.title(), ', '.join([ '%s=%s' % (key, quote_header_value(value, allow_token=key not in self._require_quoting)) for key, value in iteritems(d) ])) def __str__(self): return self.to_header() def __repr__(self): return '<%s %r>' % ( self.__class__.__name__, self.to_header() ) def auth_property(name, doc=None): """A static helper function for subclasses to add extra authentication system properties onto a class:: class FooAuthenticate(WWWAuthenticate): special_realm = auth_property('special_realm') For more information have a look at the sourcecode to see how the regular properties (:attr:`realm` etc.) are implemented. """ def _set_value(self, value): if value is None: self.pop(name, None) else: self[name] = str(value) return property(lambda x: x.get(name), _set_value, doc=doc) def _set_property(name, doc=None): def fget(self): def on_update(header_set): if not header_set and name in self: del self[name] elif header_set: self[name] = header_set.to_header() return parse_set_header(self.get(name), on_update) return property(fget, doc=doc) type = auth_property('__auth_type__', doc=''' The type of the auth mechanism. HTTP currently specifies `Basic` and `Digest`.''') realm = auth_property('realm', doc=''' A string to be displayed to users so they know which username and password to use. This string should contain at least the name of the host performing the authentication and might additionally indicate the collection of users who might have access.''') domain = _set_property('domain', doc=''' A list of URIs that define the protection space. If a URI is an absolute path, it is relative to the canonical root URL of the server being accessed.''') nonce = auth_property('nonce', doc=''' A server-specified data string which should be uniquely generated each time a 401 response is made. It is recommended that this string be base64 or hexadecimal data.''') opaque = auth_property('opaque', doc=''' A string of data, specified by the server, which should be returned by the client unchanged in the Authorization header of subsequent requests with URIs in the same protection space. 
It is recommended that this string be base64 or hexadecimal data.''') algorithm = auth_property('algorithm', doc=''' A string indicating a pair of algorithms used to produce the digest and a checksum. If this is not present it is assumed to be "MD5". If the algorithm is not understood, the challenge should be ignored (and a different one used, if there is more than one).''') qop = _set_property('qop', doc=''' A set of quality-of-privacy directives such as auth and auth-int.''') def _get_stale(self): val = self.get('stale') if val is not None: return val.lower() == 'true' def _set_stale(self, value): if value is None: self.pop('stale', None) else: self['stale'] = value and 'TRUE' or 'FALSE' stale = property(_get_stale, _set_stale, doc=''' A flag, indicating that the previous request from the client was rejected because the nonce value was stale.''') del _get_stale, _set_stale # make auth_property a staticmethod so that subclasses of # `WWWAuthenticate` can use it for new properties. auth_property = staticmethod(auth_property) del _set_property class FileStorage(object): """The :class:`FileStorage` class is a thin wrapper over incoming files. It is used by the request object to represent uploaded files. All the attributes of the wrapper stream are proxied by the file storage so it's possible to do ``storage.read()`` instead of the long form ``storage.stream.read()``. """ def __init__(self, stream=None, filename=None, name=None, content_type=None, content_length=None, headers=None): self.name = name self.stream = stream or _empty_stream # if no filename is provided we can attempt to get the filename # from the stream object passed. There we have to be careful to # skip things like , etc. Python marks these # special filenames with angular brackets. if filename is None: filename = getattr(stream, 'name', None) s = make_literal_wrapper(filename) if filename and filename[0] == s('<') and filename[-1] == s('>'): filename = None # On Python 3 we want to make sure the filename is always unicode. # This might not be if the name attribute is bytes due to the # file being opened from the bytes API. if not PY2 and isinstance(filename, bytes): filename = filename.decode(get_filesystem_encoding(), 'replace') self.filename = filename if headers is None: headers = Headers() self.headers = headers if content_type is not None: headers['Content-Type'] = content_type if content_length is not None: headers['Content-Length'] = str(content_length) def _parse_content_type(self): if not hasattr(self, '_parsed_content_type'): self._parsed_content_type = \ parse_options_header(self.content_type) @property def content_type(self): """The content-type sent in the header. Usually not available""" return self.headers.get('content-type') @property def content_length(self): """The content-length sent in the header. Usually not available""" return int(self.headers.get('content-length') or 0) @property def mimetype(self): """Like :attr:`content_type`, but without parameters (eg, without charset, type etc.) and always lowercase. For example if the content type is ``text/HTML; charset=utf-8`` the mimetype would be ``'text/html'``. .. versionadded:: 0.7 """ self._parse_content_type() return self._parsed_content_type[0].lower() @property def mimetype_params(self): """The mimetype parameters as dict. For example if the content type is ``text/html; charset=utf-8`` the params would be ``{'charset': 'utf-8'}``. .. 
versionadded:: 0.7 """ self._parse_content_type() return self._parsed_content_type[1] def save(self, dst, buffer_size=16384): """Save the file to a destination path or file object. If the destination is a file object you have to close it yourself after the call. The buffer size is the number of bytes held in memory during the copy process. It defaults to 16KB. For secure file saving also have a look at :func:`secure_filename`. :param dst: a filename or open file object the uploaded file is saved to. :param buffer_size: the size of the buffer. This works the same as the `length` parameter of :func:`shutil.copyfileobj`. """ from shutil import copyfileobj close_dst = False if isinstance(dst, string_types): dst = open(dst, 'wb') close_dst = True try: copyfileobj(self.stream, dst, buffer_size) finally: if close_dst: dst.close() def close(self): """Close the underlying file if possible.""" try: self.stream.close() except Exception: pass def __nonzero__(self): return bool(self.filename) __bool__ = __nonzero__ def __getattr__(self, name): return getattr(self.stream, name) def __iter__(self): return iter(self.stream) def __repr__(self): return '<%s: %r (%r)>' % ( self.__class__.__name__, self.filename, self.content_type ) # circular dependencies from werkzeug.http import dump_options_header, dump_header, generate_etag, \ quote_header_value, parse_set_header, unquote_etag, quote_etag, \ parse_options_header, http_date, is_byte_range_valid from werkzeug import exceptions werkzeug-0.14.1/werkzeug/debug/000077500000000000000000000000001322225165500163745ustar00rootroot00000000000000werkzeug-0.14.1/werkzeug/debug/__init__.py000066400000000000000000000420541322225165500205120ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.debug ~~~~~~~~~~~~~~ WSGI application traceback debugger. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import os import re import sys import uuid import json import time import getpass import hashlib import mimetypes from itertools import chain from os.path import join, dirname, basename, isfile from werkzeug.wrappers import BaseRequest as Request, BaseResponse as Response from werkzeug.http import parse_cookie from werkzeug.debug.tbtools import get_current_traceback, render_console_html from werkzeug.debug.console import Console from werkzeug.security import gen_salt from werkzeug._internal import _log from werkzeug._compat import text_type # DEPRECATED #: import this here because it once was documented as being available #: from this module. In case there are users left ... from werkzeug.debug.repr import debug_repr # noqa # A week PIN_TIME = 60 * 60 * 24 * 7 def hash_pin(pin): if isinstance(pin, text_type): pin = pin.encode('utf-8', 'replace') return hashlib.md5(pin + b'shittysalt').hexdigest()[:12] _machine_id = None def get_machine_id(): global _machine_id rv = _machine_id if rv is not None: return rv def _generate(): # Potential sources of secret information on linux. The machine-id # is stable across boots, the boot id is not for filename in '/etc/machine-id', '/proc/sys/kernel/random/boot_id': try: with open(filename, 'rb') as f: return f.readline().strip() except IOError: continue # On OS X we can use the computer's serial number assuming that # ioreg exists and can spit out that information. try: # Also catch import errors: subprocess may not be available, e.g. 
# Google App Engine # See https://github.com/pallets/werkzeug/issues/925 from subprocess import Popen, PIPE dump = Popen(['ioreg', '-c', 'IOPlatformExpertDevice', '-d', '2'], stdout=PIPE).communicate()[0] match = re.search(b'"serial-number" = <([^>]+)', dump) if match is not None: return match.group(1) except (OSError, ImportError): pass # On Windows we can use winreg to get the machine guid wr = None try: import winreg as wr except ImportError: try: import _winreg as wr except ImportError: pass if wr is not None: try: with wr.OpenKey(wr.HKEY_LOCAL_MACHINE, 'SOFTWARE\\Microsoft\\Cryptography', 0, wr.KEY_READ | wr.KEY_WOW64_64KEY) as rk: machineGuid, wrType = wr.QueryValueEx(rk, 'MachineGuid') if (wrType == wr.REG_SZ): return machineGuid.encode('utf-8') else: return machineGuid except WindowsError: pass _machine_id = rv = _generate() return rv class _ConsoleFrame(object): """Helper class so that we can reuse the frame console code for the standalone console. """ def __init__(self, namespace): self.console = Console(namespace) self.id = 0 def get_pin_and_cookie_name(app): """Given an application object this returns a semi-stable 9 digit pin code and a random key. The hope is that this is stable between restarts to not make debugging particularly frustrating. If the pin was forcefully disabled this returns `None`. Second item in the resulting tuple is the cookie name for remembering. """ pin = os.environ.get('WERKZEUG_DEBUG_PIN') rv = None num = None # Pin was explicitly disabled if pin == 'off': return None, None # Pin was provided explicitly if pin is not None and pin.replace('-', '').isdigit(): # If there are separators in the pin, return it directly if '-' in pin: rv = pin else: num = pin modname = getattr(app, '__module__', getattr(app.__class__, '__module__')) try: # `getpass.getuser()` imports the `pwd` module, # which does not exist in the Google App Engine sandbox. username = getpass.getuser() except ImportError: username = None mod = sys.modules.get(modname) # This information only exists to make the cookie unique on the # computer, not as a security feature. probably_public_bits = [ username, modname, getattr(app, '__name__', getattr(app.__class__, '__name__')), getattr(mod, '__file__', None), ] # This information is here to make it harder for an attacker to # guess the cookie name. They are unlikely to be contained anywhere # within the unauthenticated debug page. private_bits = [ str(uuid.getnode()), get_machine_id(), ] h = hashlib.md5() for bit in chain(probably_public_bits, private_bits): if not bit: continue if isinstance(bit, text_type): bit = bit.encode('utf-8') h.update(bit) h.update(b'cookiesalt') cookie_name = '__wzd' + h.hexdigest()[:20] # If we need to generate a pin we salt it a bit more so that we don't # end up with the same value and generate out 9 digits if num is None: h.update(b'pinsalt') num = ('%09d' % int(h.hexdigest(), 16))[:9] # Format the pincode in groups of digits for easier remembering if # we don't have a result yet. if rv is None: for group_size in 5, 4, 3: if len(num) % group_size == 0: rv = '-'.join(num[x:x + group_size].rjust(group_size, '0') for x in range(0, len(num), group_size)) break else: rv = num return rv, cookie_name class DebuggedApplication(object): """Enables debugging support for a given application:: from werkzeug.debug import DebuggedApplication from myapp import app app = DebuggedApplication(app, evalex=True) The `evalex` keyword argument allows evaluating expressions in a traceback's frame context. .. 
versionadded:: 0.9 The `lodgeit_url` parameter was deprecated. :param app: the WSGI application to run debugged. :param evalex: enable exception evaluation feature (interactive debugging). This requires a non-forking server. :param request_key: The key that points to the request object in ths environment. This parameter is ignored in current versions. :param console_path: the URL for a general purpose console. :param console_init_func: the function that is executed before starting the general purpose console. The return value is used as initial namespace. :param show_hidden_frames: by default hidden traceback frames are skipped. You can show them by setting this parameter to `True`. :param pin_security: can be used to disable the pin based security system. :param pin_logging: enables the logging of the pin system. """ def __init__(self, app, evalex=False, request_key='werkzeug.request', console_path='/console', console_init_func=None, show_hidden_frames=False, lodgeit_url=None, pin_security=True, pin_logging=True): if lodgeit_url is not None: from warnings import warn warn(DeprecationWarning('Werkzeug now pastes into gists.')) if not console_init_func: console_init_func = None self.app = app self.evalex = evalex self.frames = {} self.tracebacks = {} self.request_key = request_key self.console_path = console_path self.console_init_func = console_init_func self.show_hidden_frames = show_hidden_frames self.secret = gen_salt(20) self._failed_pin_auth = 0 self.pin_logging = pin_logging if pin_security: # Print out the pin for the debugger on standard out. if os.environ.get('WERKZEUG_RUN_MAIN') == 'true' and \ pin_logging: _log('warning', ' * Debugger is active!') if self.pin is None: _log('warning', ' * Debugger PIN disabled. ' 'DEBUGGER UNSECURED!') else: _log('info', ' * Debugger PIN: %s' % self.pin) else: self.pin = None def _get_pin(self): if not hasattr(self, '_pin'): self._pin, self._pin_cookie = get_pin_and_cookie_name(self.app) return self._pin def _set_pin(self, value): self._pin = value pin = property(_get_pin, _set_pin) del _get_pin, _set_pin @property def pin_cookie_name(self): """The name of the pin cookie.""" if not hasattr(self, '_pin_cookie'): self._pin, self._pin_cookie = get_pin_and_cookie_name(self.app) return self._pin_cookie def debug_application(self, environ, start_response): """Run the application and conserve the traceback frames.""" app_iter = None try: app_iter = self.app(environ, start_response) for item in app_iter: yield item if hasattr(app_iter, 'close'): app_iter.close() except Exception: if hasattr(app_iter, 'close'): app_iter.close() traceback = get_current_traceback( skip=1, show_hidden_frames=self.show_hidden_frames, ignore_system_exceptions=True) for frame in traceback.frames: self.frames[frame.id] = frame self.tracebacks[traceback.id] = traceback try: start_response('500 INTERNAL SERVER ERROR', [ ('Content-Type', 'text/html; charset=utf-8'), # Disable Chrome's XSS protection, the debug # output can cause false-positives. ('X-XSS-Protection', '0'), ]) except Exception: # if we end up here there has been output but an error # occurred. in that situation we can do nothing fancy any # more, better log something into the error log and fall # back gracefully. 
environ['wsgi.errors'].write( 'Debugging middleware caught exception in streamed ' 'response at a point where response headers were already ' 'sent.\n') else: is_trusted = bool(self.check_pin_trust(environ)) yield traceback.render_full(evalex=self.evalex, evalex_trusted=is_trusted, secret=self.secret) \ .encode('utf-8', 'replace') traceback.log(environ['wsgi.errors']) def execute_command(self, request, command, frame): """Execute a command in a console.""" return Response(frame.console.eval(command), mimetype='text/html') def display_console(self, request): """Display a standalone shell.""" if 0 not in self.frames: if self.console_init_func is None: ns = {} else: ns = dict(self.console_init_func()) ns.setdefault('app', self.app) self.frames[0] = _ConsoleFrame(ns) is_trusted = bool(self.check_pin_trust(request.environ)) return Response(render_console_html(secret=self.secret, evalex_trusted=is_trusted), mimetype='text/html') def paste_traceback(self, request, traceback): """Paste the traceback and return a JSON response.""" rv = traceback.paste() return Response(json.dumps(rv), mimetype='application/json') def get_resource(self, request, filename): """Return a static resource from the shared folder.""" filename = join(dirname(__file__), 'shared', basename(filename)) if isfile(filename): mimetype = mimetypes.guess_type(filename)[0] \ or 'application/octet-stream' f = open(filename, 'rb') try: return Response(f.read(), mimetype=mimetype) finally: f.close() return Response('Not Found', status=404) def check_pin_trust(self, environ): """Checks if the request passed the pin test. This returns `True` if the request is trusted on a pin/cookie basis and returns `False` if not. Additionally if the cookie's stored pin hash is wrong it will return `None` so that appropriate action can be taken. """ if self.pin is None: return True val = parse_cookie(environ).get(self.pin_cookie_name) if not val or '|' not in val: return False ts, pin_hash = val.split('|', 1) if not ts.isdigit(): return False if pin_hash != hash_pin(self.pin): return None return (time.time() - PIN_TIME) < int(ts) def _fail_pin_auth(self): time.sleep(self._failed_pin_auth > 5 and 5.0 or 0.5) self._failed_pin_auth += 1 def pin_auth(self, request): """Authenticates with the pin.""" exhausted = False auth = False trust = self.check_pin_trust(request.environ) # If the trust return value is `None` it means that the cookie is # set but the stored pin hash value is bad. This means that the # pin was changed. In this case we count a bad auth and unset the # cookie. This way it becomes harder to guess the cookie name # instead of the pin as we still count up failures. bad_cookie = False if trust is None: self._fail_pin_auth() bad_cookie = True # If we're trusted, we're authenticated. elif trust: auth = True # If we failed too many times, then we're locked out. 
elif self._failed_pin_auth > 10: exhausted = True # Otherwise go through pin based authentication else: entered_pin = request.args.get('pin') if entered_pin.strip().replace('-', '') == \ self.pin.replace('-', ''): self._failed_pin_auth = 0 auth = True else: self._fail_pin_auth() rv = Response(json.dumps({ 'auth': auth, 'exhausted': exhausted, }), mimetype='application/json') if auth: rv.set_cookie(self.pin_cookie_name, '%s|%s' % ( int(time.time()), hash_pin(self.pin) ), httponly=True) elif bad_cookie: rv.delete_cookie(self.pin_cookie_name) return rv def log_pin_request(self): """Log the pin if needed.""" if self.pin_logging and self.pin is not None: _log('info', ' * To enable the debugger you need to ' 'enter the security pin:') _log('info', ' * Debugger pin code: %s' % self.pin) return Response('') def __call__(self, environ, start_response): """Dispatch the requests.""" # important: don't ever access a function here that reads the incoming # form data! Otherwise the application won't have access to that data # any more! request = Request(environ) response = self.debug_application if request.args.get('__debugger__') == 'yes': cmd = request.args.get('cmd') arg = request.args.get('f') secret = request.args.get('s') traceback = self.tracebacks.get(request.args.get('tb', type=int)) frame = self.frames.get(request.args.get('frm', type=int)) if cmd == 'resource' and arg: response = self.get_resource(request, arg) elif cmd == 'paste' and traceback is not None and \ secret == self.secret: response = self.paste_traceback(request, traceback) elif cmd == 'pinauth' and secret == self.secret: response = self.pin_auth(request) elif cmd == 'printpin' and secret == self.secret: response = self.log_pin_request() elif self.evalex and cmd is not None and frame is not None \ and self.secret == secret and \ self.check_pin_trust(environ): response = self.execute_command(request, cmd, frame) elif self.evalex and self.console_path is not None and \ request.path == self.console_path: response = self.display_console(request) return response(environ, start_response) werkzeug-0.14.1/werkzeug/debug/console.py000066400000000000000000000127471322225165500204230ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.debug.console ~~~~~~~~~~~~~~~~~~~~~~ Interactive console support. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. 
""" import sys import code from types import CodeType from werkzeug.utils import escape from werkzeug.local import Local from werkzeug.debug.repr import debug_repr, dump, helper _local = Local() class HTMLStringO(object): """A StringO version that HTML escapes on write.""" def __init__(self): self._buffer = [] def isatty(self): return False def close(self): pass def flush(self): pass def seek(self, n, mode=0): pass def readline(self): if len(self._buffer) == 0: return '' ret = self._buffer[0] del self._buffer[0] return ret def reset(self): val = ''.join(self._buffer) del self._buffer[:] return val def _write(self, x): if isinstance(x, bytes): x = x.decode('utf-8', 'replace') self._buffer.append(x) def write(self, x): self._write(escape(x)) def writelines(self, x): self._write(escape(''.join(x))) class ThreadedStream(object): """Thread-local wrapper for sys.stdout for the interactive console.""" def push(): if not isinstance(sys.stdout, ThreadedStream): sys.stdout = ThreadedStream() _local.stream = HTMLStringO() push = staticmethod(push) def fetch(): try: stream = _local.stream except AttributeError: return '' return stream.reset() fetch = staticmethod(fetch) def displayhook(obj): try: stream = _local.stream except AttributeError: return _displayhook(obj) # stream._write bypasses escaping as debug_repr is # already generating HTML for us. if obj is not None: _local._current_ipy.locals['_'] = obj stream._write(debug_repr(obj)) displayhook = staticmethod(displayhook) def __setattr__(self, name, value): raise AttributeError('read only attribute %s' % name) def __dir__(self): return dir(sys.__stdout__) def __getattribute__(self, name): if name == '__members__': return dir(sys.__stdout__) try: stream = _local.stream except AttributeError: stream = sys.__stdout__ return getattr(stream, name) def __repr__(self): return repr(sys.__stdout__) # add the threaded stream as display hook _displayhook = sys.displayhook sys.displayhook = ThreadedStream.displayhook class _ConsoleLoader(object): def __init__(self): self._storage = {} def register(self, code, source): self._storage[id(code)] = source # register code objects of wrapped functions too. for var in code.co_consts: if isinstance(var, CodeType): self._storage[id(var)] = source def get_source_by_code(self, code): try: return self._storage[id(code)] except KeyError: pass def _wrap_compiler(console): compile = console.compile def func(source, filename, symbol): code = compile(source, filename, symbol) console.loader.register(code, source) return code console.compile = func class _InteractiveConsole(code.InteractiveInterpreter): def __init__(self, globals, locals): code.InteractiveInterpreter.__init__(self, locals) self.globals = dict(globals) self.globals['dump'] = dump self.globals['help'] = helper self.globals['__loader__'] = self.loader = _ConsoleLoader() self.more = False self.buffer = [] _wrap_compiler(self) def runsource(self, source): source = source.rstrip() + '\n' ThreadedStream.push() prompt = self.more and '... 
' or '>>> ' try: source_to_eval = ''.join(self.buffer + [source]) if code.InteractiveInterpreter.runsource(self, source_to_eval, '', 'single'): self.more = True self.buffer.append(source) else: self.more = False del self.buffer[:] finally: output = ThreadedStream.fetch() return prompt + escape(source) + output def runcode(self, code): try: eval(code, self.globals, self.locals) except Exception: self.showtraceback() def showtraceback(self): from werkzeug.debug.tbtools import get_current_traceback tb = get_current_traceback(skip=1) sys.stdout._write(tb.render_summary()) def showsyntaxerror(self, filename=None): from werkzeug.debug.tbtools import get_current_traceback tb = get_current_traceback(skip=4) sys.stdout._write(tb.render_summary()) def write(self, data): sys.stdout.write(data) class Console(object): """An interactive console.""" def __init__(self, globals=None, locals=None): if locals is None: locals = {} if globals is None: globals = {} self._ipy = _InteractiveConsole(globals, locals) def eval(self, code): _local._current_ipy = self._ipy old_sys_stdout = sys.stdout try: return self._ipy.runsource(code) finally: sys.stdout = old_sys_stdout werkzeug-0.14.1/werkzeug/debug/repr.py000066400000000000000000000221741322225165500177240ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.debug.repr ~~~~~~~~~~~~~~~~~~~ This module implements object representations for debugging purposes. Unlike the default repr these reprs expose a lot more information and produce HTML instead of ASCII. Together with the CSS and JavaScript files of the debugger this gives a colorful and more compact output. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ import sys import re import codecs from traceback import format_exception_only try: from collections import deque except ImportError: # pragma: no cover deque = None from werkzeug.utils import escape from werkzeug._compat import iteritems, PY2, text_type, integer_types, \ string_types missing = object() _paragraph_re = re.compile(r'(?:\r\n|\r|\n){2,}') RegexType = type(_paragraph_re) HELP_HTML = '''\

<div class=box>
  <h3>%(title)s</h3>
  <pre class=help>%(text)s</pre>
</div>\
'''

OBJECT_DUMP_HTML = '''\
<div class=box>
  <h3>%(title)s</h3>
  %(repr)s
  <table>%(items)s</table>
</div>
\ ''' def debug_repr(obj): """Creates a debug repr of an object as HTML unicode string.""" return DebugReprGenerator().repr(obj) def dump(obj=missing): """Print the object details to stdout._write (for the interactive console of the web debugger. """ gen = DebugReprGenerator() if obj is missing: rv = gen.dump_locals(sys._getframe(1).f_locals) else: rv = gen.dump_object(obj) sys.stdout._write(rv) class _Helper(object): """Displays an HTML version of the normal help, for the interactive debugger only because it requires a patched sys.stdout. """ def __repr__(self): return 'Type help(object) for help about object.' def __call__(self, topic=None): if topic is None: sys.stdout._write('%s' % repr(self)) return import pydoc pydoc.help(topic) rv = sys.stdout.reset() if isinstance(rv, bytes): rv = rv.decode('utf-8', 'ignore') paragraphs = _paragraph_re.split(rv) if len(paragraphs) > 1: title = paragraphs[0] text = '\n\n'.join(paragraphs[1:]) else: # pragma: no cover title = 'Help' text = paragraphs[0] sys.stdout._write(HELP_HTML % {'title': title, 'text': text}) helper = _Helper() def _add_subclass_info(inner, obj, base): if isinstance(base, tuple): for base in base: if type(obj) is base: return inner elif type(obj) is base: return inner module = '' if obj.__class__.__module__ not in ('__builtin__', 'exceptions'): module = '%s.' % obj.__class__.__module__ return '%s%s(%s)' % (module, obj.__class__.__name__, inner) class DebugReprGenerator(object): def __init__(self): self._stack = [] def _sequence_repr_maker(left, right, base=object(), limit=8): def proxy(self, obj, recursive): if recursive: return _add_subclass_info(left + '...' + right, obj, base) buf = [left] have_extended_section = False for idx, item in enumerate(obj): if idx: buf.append(', ') if idx == limit: buf.append('') have_extended_section = True buf.append(self.repr(item)) if have_extended_section: buf.append('') buf.append(right) return _add_subclass_info(u''.join(buf), obj, base) return proxy list_repr = _sequence_repr_maker('[', ']', list) tuple_repr = _sequence_repr_maker('(', ')', tuple) set_repr = _sequence_repr_maker('set([', '])', set) frozenset_repr = _sequence_repr_maker('frozenset([', '])', frozenset) if deque is not None: deque_repr = _sequence_repr_maker('collections.' 
'deque([', '])', deque) del _sequence_repr_maker def regex_repr(self, obj): pattern = repr(obj.pattern) if PY2: pattern = pattern.decode('string-escape', 'ignore') else: pattern = codecs.decode(pattern, 'unicode-escape', 'ignore') if pattern[:1] == 'u': pattern = 'ur' + pattern[1:] else: pattern = 'r' + pattern return u're.compile(%s)' % pattern def string_repr(self, obj, limit=70): buf = [''] a = repr(obj[:limit]) b = repr(obj[limit:]) if isinstance(obj, text_type) and PY2: buf.append('u') a = a[1:] b = b[1:] if b != "''": buf.extend((escape(a[:-1]), '', escape(b[1:]), '')) else: buf.append(escape(a)) buf.append('') return _add_subclass_info(u''.join(buf), obj, (bytes, text_type)) def dict_repr(self, d, recursive, limit=5): if recursive: return _add_subclass_info(u'{...}', d, dict) buf = ['{'] have_extended_section = False for idx, (key, value) in enumerate(iteritems(d)): if idx: buf.append(', ') if idx == limit - 1: buf.append('') have_extended_section = True buf.append('%s: ' '%s' % (self.repr(key), self.repr(value))) if have_extended_section: buf.append('') buf.append('}') return _add_subclass_info(u''.join(buf), d, dict) def object_repr(self, obj): r = repr(obj) if PY2: r = r.decode('utf-8', 'replace') return u'%s' % escape(r) def dispatch_repr(self, obj, recursive): if obj is helper: return u'%r' % helper if isinstance(obj, (integer_types, float, complex)): return u'%r' % obj if isinstance(obj, string_types): return self.string_repr(obj) if isinstance(obj, RegexType): return self.regex_repr(obj) if isinstance(obj, list): return self.list_repr(obj, recursive) if isinstance(obj, tuple): return self.tuple_repr(obj, recursive) if isinstance(obj, set): return self.set_repr(obj, recursive) if isinstance(obj, frozenset): return self.frozenset_repr(obj, recursive) if isinstance(obj, dict): return self.dict_repr(obj, recursive) if deque is not None and isinstance(obj, deque): return self.deque_repr(obj, recursive) return self.object_repr(obj) def fallback_repr(self): try: info = ''.join(format_exception_only(*sys.exc_info()[:2])) except Exception: # pragma: no cover info = '?' if PY2: info = info.decode('utf-8', 'ignore') return u'<broken repr (%s)>' \ u'' % escape(info.strip()) def repr(self, obj): recursive = False for item in self._stack: if item is obj: recursive = True break self._stack.append(obj) try: try: return self.dispatch_repr(obj, recursive) except Exception: return self.fallback_repr() finally: self._stack.pop() def dump_object(self, obj): repr = items = None if isinstance(obj, dict): title = 'Contents of' items = [] for key, value in iteritems(obj): if not isinstance(key, string_types): items = None break items.append((key, self.repr(value))) if items is None: items = [] repr = self.repr(obj) for key in dir(obj): try: items.append((key, self.repr(getattr(obj, key)))) except Exception: pass title = 'Details for' title += ' ' + object.__repr__(obj)[1:-1] return self.render_object_dump(items, title, repr) def dump_locals(self, d): items = [(key, self.repr(value)) for key, value in d.items()] return self.render_object_dump(items, 'Local variables in frame') def render_object_dump(self, items, title, repr=None): html_items = [] for key, value in items: html_items.append('%s
%s
' % (escape(key), value)) if not html_items: html_items.append('Nothing') return OBJECT_DUMP_HTML % { 'title': escape(title), 'repr': repr and '
%s
' % repr or '', 'items': '\n'.join(html_items) } werkzeug-0.14.1/werkzeug/debug/shared/000077500000000000000000000000001322225165500176425ustar00rootroot00000000000000werkzeug-0.14.1/werkzeug/debug/shared/console.png000066400000000000000000000007731322225165500220210ustar00rootroot00000000000000PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<IDAT8˥S=KAE@sv ib#RF< ),Ԁ v邠GLvy]\ Yfgw͛UL,}|t5;"${FL촑h1;-~?O[e}O/K^JvO75utlI.j{FhǗ'ۤ* m֡jT`ǩoW*.t:jZ P0th4ZR^wvE;_N6m$IzvCF1M Zt3G| I@tFM-~"{de9}= kZL2#0y4V') .attr('title', 'Open an interactive python shell in this frame') .click(function() { consoleNode = openShell(consoleNode, target, frameID); return false; }) .prependTo(target); } }); /** * toggle traceback types on click. */ $('h2.traceback').click(function() { $(this).next().slideToggle('fast'); $('div.plain').slideToggle('fast'); }).css('cursor', 'pointer'); $('div.plain').hide(); /** * Add extra info (this is here so that only users with JavaScript * enabled see it.) */ $('span.nojavascript') .removeClass('nojavascript') .html('

To switch between the interactive traceback and the plaintext ' + 'one, you can click on the "Traceback" headline. From the text ' + 'traceback you can also create a paste of it. ' + (!EVALEX ? '' : 'For code execution mouse-over the frame you want to debug and ' + 'click on the console icon on the right side.' + '

You can execute arbitrary Python code in the stack frames and ' + 'there are some extra helpers available for introspection:' + '

  • dump() shows all variables in the frame' + '
  • dump(obj) dumps all that\'s known about the object
')); /** * Add the pastebin feature */ $('div.plain form') .submit(function() { var label = $('input[type="submit"]', this); var old_val = label.val(); label.val('submitting...'); $.ajax({ dataType: 'json', url: document.location.pathname, data: {__debugger__: 'yes', tb: TRACEBACK, cmd: 'paste', s: SECRET}, success: function(data) { $('div.plain span.pastemessage') .removeClass('pastemessage') .text('Paste created: ') .append($('#' + data.id + '').attr('href', data.url)); }, error: function() { alert('Error: Could not submit paste. No network connection?'); label.val(old_val); } }); return false; }); // if we have javascript we submit by ajax anyways, so no need for the // not scaling textarea. var plainTraceback = $('div.plain textarea'); plainTraceback.replaceWith($('
').text(plainTraceback.text()));
});
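
/**
 * Illustrative sketch, not part of the original file: the "create paste"
 * form handled above reduces to a single request against the current page,
 * using the TRACEBACK and SECRET globals injected by the debugger template:
 *
 *   ?__debugger__=yes&cmd=paste&tb=<TRACEBACK>&s=<SECRET>
 *   -> {"id": "<paste id>", "url": "<paste url>"}
 *
 * The helper name `createPaste` and its `done` callback are hypothetical.
 */
function createPaste(done) {
  $.ajax({
    dataType: 'json',
    url: document.location.pathname,
    data: {__debugger__: 'yes', tb: TRACEBACK, cmd: 'paste', s: SECRET},
    success: function(data) {
      // the server answers with the id and URL of the newly created paste
      done(data.url, data.id);
    }
  });
}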

function initPinBox() {
  $('.pin-prompt form').submit(function(evt) {
    evt.preventDefault();
    var pin = this.pin.value;
    var btn = this.btn;
    btn.disabled = true;
    $.ajax({
      dataType: 'json',
      url: document.location.pathname,
      data: {__debugger__: 'yes', cmd: 'pinauth', pin: pin,
             s: SECRET},
      success: function(data) {
        btn.disabled = false;
        if (data.auth) {
          EVALEX_TRUSTED = true;
          $('.pin-prompt').fadeOut();
        } else {
          if (data.exhausted) {
            alert('Error: too many attempts.  Restart the server to retry.');
          } else {
            alert('Error: incorrect PIN.');
          }
        }
        console.log(data);
      },
      error: function() {
        btn.disabled = false;
        alert('Error: Could not verify PIN.  Network error?');
      }
    });
  });
}
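
/**
 * A minimal sketch of the PIN exchange wired up in initPinBox() above
 * (illustrative, not part of the original file).  The round trip is:
 *
 *   ?__debugger__=yes&cmd=pinauth&pin=<entered pin>&s=<SECRET>
 *   -> {"auth": true}                       PIN accepted, console unlocked
 *   -> {"auth": false, "exhausted": false}  wrong PIN, the user may retry
 *   -> {"auth": false, "exhausted": true}   too many attempts, locked out
 *
 * The helper name `verifyPin` and its callback are hypothetical.
 */
function verifyPin(pin, done) {
  $.ajax({
    dataType: 'json',
    url: document.location.pathname,
    data: {__debugger__: 'yes', cmd: 'pinauth', pin: pin, s: SECRET},
    success: function(data) { done(data.auth, data.exhausted); },
    error: function() { done(false, false); }
  });
}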

function promptForPin() {
  if (!EVALEX_TRUSTED) {
    $.ajax({
      url: document.location.pathname,
      data: {__debugger__: 'yes', cmd: 'printpin', s: SECRET}
    });
    $('.pin-prompt').fadeIn(function() {
      $('.pin-prompt input[name="pin"]').focus();
    });
  }
}
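
/**
 * Illustrative sketch, not part of the original file, of the round trip that
 * openShell() below performs for every console command: the Python
 * expression typed into the console is sent as `cmd` together with the
 * frame id (`frm`) and SECRET, and the server replies with an HTML fragment
 * that gets appended to the console output.  `evalInFrame` is a hypothetical
 * name.
 */
function evalInFrame(frameID, expression, onHtml) {
  $.get(document.location.pathname, {
    __debugger__: 'yes',
    cmd: expression,  // e.g. 'dump()' to list the frame's local variables
    frm: frameID,
    s: SECRET
  }, onHtml);         // onHtml receives rendered HTML, not JSON
}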


/**
 * Helper function for shell initialization
 */
function openShell(consoleNode, target, frameID) {
  promptForPin();
  if (consoleNode)
    return consoleNode.slideToggle('fast');
  consoleNode = $('
')
    .appendTo(target.parent())
    .hide()
  var historyPos = 0, history = [''];
  var output = $('
[console ready]
') .appendTo(consoleNode); var form = $('
>>>
') .submit(function() { var cmd = command.val(); $.get('', { __debugger__: 'yes', cmd: cmd, frm: frameID, s: SECRET}, function(data) { var tmp = $('
').html(data); $('span.extended', tmp).each(function() { var hidden = $(this).wrap('').hide(); hidden .parent() .append($('  ') .click(function() { hidden.toggle(); $(this).toggleClass('open') return false; })); }); output.append(tmp); command.focus(); consoleNode.scrollTop(consoleNode.get(0).scrollHeight); var old = history.pop(); history.push(cmd); if (typeof old != 'undefined') history.push(old); historyPos = history.length - 1; }); command.val(''); return false; }). appendTo(consoleNode); var command = $('') .appendTo(form) .keydown(function(e) { if (e.charCode == 100 && e.ctrlKey) { output.text('--- screen cleared ---'); return false; } else if (e.charCode == 0 && (e.keyCode == 38 || e.keyCode == 40)) { if (e.keyCode == 38 && historyPos > 0) historyPos--; else if (e.keyCode == 40 && historyPos < history.length) historyPos++; command.val(history[historyPos]); return false; } }); return consoleNode.slideDown('fast', function() { command.focus(); }); } werkzeug-0.14.1/werkzeug/debug/shared/less.png000066400000000000000000000002771322225165500213240ustar00rootroot00000000000000PNG  IHDR w&sRGBbKGDt pHYs  tIME( :_ ?IDATc@$`a```(//'@;,FFF ǮYn&݃D)2 MA?IENDB`werkzeug-0.14.1/werkzeug/debug/shared/more.png000066400000000000000000000003101322225165500213040ustar00rootroot00000000000000PNG  IHDR w&sRGBbKGD pHYs  tIME(3&HIDATӝA 1y{9 Aa-XLrov=Wǽ AWgXZ4/;P`XeIENDB`werkzeug-0.14.1/werkzeug/debug/shared/source.png000066400000000000000000000014621322225165500216530ustar00rootroot00000000000000PNG  IHDRagAMA7tEXtSoftwareAdobe ImageReadyqe<IDAT=Khe%!5 Jt!dcA.7tBJpĽ A7QX`X iƙIswwNfU8 Kp#d~jL]t=7NʐC;[\K#ߟiEz4Ƚy CTܠp#%gSND~_+="!JBEc| g8+ǁJlT@EPTA6wwoDx;Yx1c(-4iΩF\`vF B m 6OC&eC^UW&kQ##'dQP)?Ym,*yEw}PZwxHZār|Z31^BB JaP.vFbFwgV_ E叵6OWT*JP$G/}2R}u+ϾFa0pjIOL 98%tM;nrxlބh.tqsϼ1K7/S]98IDzR,y@~\)0q\5!FTXtvjF~]uwmrr&`YEqwd "1 r;Q\IENDB`werkzeug-0.14.1/werkzeug/debug/shared/style.css000066400000000000000000000141761322225165500215250ustar00rootroot00000000000000@font-face { font-family: 'Ubuntu'; font-style: normal; font-weight: normal; src: local('Ubuntu'), local('Ubuntu-Regular'), url('?__debugger__=yes&cmd=resource&f=ubuntu.ttf') format('truetype'); } body, input { font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif; color: #000; text-align: center; margin: 1em; padding: 0; font-size: 15px; } h1, h2, h3 { font-family: 'Ubuntu', 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif; font-weight: normal; } input { background-color: #fff; margin: 0; text-align: left; outline: none !important; } input[type="submit"] { padding: 3px 6px; } a { color: #11557C; } a:hover { color: #177199; } pre, code, textarea { font-family: 'Consolas', 'Monaco', 'Bitstream Vera Sans Mono', monospace; font-size: 14px; } div.debugger { text-align: left; padding: 12px; margin: auto; background-color: white; } h1 { font-size: 36px; margin: 0 0 0.3em 0; } div.detail p { margin: 0 0 8px 13px; font-size: 14px; white-space: pre-wrap; font-family: monospace; } div.explanation { margin: 20px 13px; font-size: 15px; color: #555; } div.footer { font-size: 13px; text-align: right; margin: 30px 0; color: #86989B; } h2 { font-size: 16px; margin: 1.3em 0 0.0 0; padding: 9px; background-color: #11557C; color: white; } h2 em, h3 em { font-style: normal; color: #A5D6D9; font-weight: normal; } div.traceback, div.plain { border: 1px solid #ddd; margin: 0 0 1em 0; padding: 10px; } div.plain p { margin: 0; } div.plain textarea, div.plain pre { margin: 10px 0 0 0; padding: 4px; background-color: #E8EFF0; 
border: 1px solid #D3E7E9; } div.plain textarea { width: 99%; height: 300px; } div.traceback h3 { font-size: 1em; margin: 0 0 0.8em 0; } div.traceback ul { list-style: none; margin: 0; padding: 0 0 0 1em; } div.traceback h4 { font-size: 13px; font-weight: normal; margin: 0.7em 0 0.1em 0; } div.traceback pre { margin: 0; padding: 5px 0 3px 15px; background-color: #E8EFF0; border: 1px solid #D3E7E9; } div.traceback pre:hover { background-color: #DDECEE; color: black; cursor: pointer; } div.traceback div.source.expanded pre + pre { border-top: none; } div.traceback span.ws { display: none; } div.traceback pre.before, div.traceback pre.after { display: none; background: white; } div.traceback div.source.expanded pre.before, div.traceback div.source.expanded pre.after { display: block; } div.traceback div.source.expanded span.ws { display: inline; } div.traceback blockquote { margin: 1em 0 0 0; padding: 0; } div.traceback img { float: right; padding: 2px; margin: -3px 2px 0 0; display: none; } div.traceback img:hover { background-color: #ddd; cursor: pointer; border-color: #BFDDE0; } div.traceback pre:hover img { display: block; } div.traceback cite.filename { font-style: normal; color: #3B666B; } pre.console { border: 1px solid #ccc; background: white!important; color: black; padding: 5px!important; margin: 3px 0 0 0!important; cursor: default!important; max-height: 400px; overflow: auto; } pre.console form { color: #555; } pre.console input { background-color: transparent; color: #555; width: 90%; font-family: 'Consolas', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 14px; border: none!important; } span.string { color: #30799B; } span.number { color: #9C1A1C; } span.help { color: #3A7734; } span.object { color: #485F6E; } span.extended { opacity: 0.5; } span.extended:hover { opacity: 1; } a.toggle { text-decoration: none; background-repeat: no-repeat; background-position: center center; background-image: url(?__debugger__=yes&cmd=resource&f=more.png); } a.toggle:hover { background-color: #444; } a.open { background-image: url(?__debugger__=yes&cmd=resource&f=less.png); } pre.console div.traceback, pre.console div.box { margin: 5px 10px; white-space: normal; border: 1px solid #11557C; padding: 10px; font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif; } pre.console div.box h3, pre.console div.traceback h3 { margin: -10px -10px 10px -10px; padding: 5px; background: #11557C; color: white; } pre.console div.traceback pre:hover { cursor: default; background: #E8EFF0; } pre.console div.traceback pre.syntaxerror { background: inherit; border: none; margin: 20px -10px -10px -10px; padding: 10px; border-top: 1px solid #BFDDE0; background: #E8EFF0; } pre.console div.noframe-traceback pre.syntaxerror { margin-top: -10px; border: none; } pre.console div.box pre.repr { padding: 0; margin: 0; background-color: white; border: none; } pre.console div.box table { margin-top: 6px; } pre.console div.box pre { border: none; } pre.console div.box pre.help { background-color: white; } pre.console div.box pre.help:hover { cursor: default; } pre.console table tr { vertical-align: top; } div.console { border: 1px solid #ccc; padding: 4px; background-color: #fafafa; } div.traceback pre, div.console pre { white-space: pre-wrap; /* css-3 should we be so lucky... */ white-space: -moz-pre-wrap; /* Mozilla, since 1999 */ white-space: -pre-wrap; /* Opera 4-6 ?? */ white-space: -o-pre-wrap; /* Opera 7 ?? 
*/ word-wrap: break-word; /* Internet Explorer 5.5+ */ _white-space: pre; /* IE only hack to re-specify in addition to word-wrap */ } div.pin-prompt { position: absolute; display: none; top: 0; bottom: 0; left: 0; right: 0; background: rgba(255, 255, 255, 0.8); } div.pin-prompt .inner { background: #eee; padding: 10px 50px; width: 350px; margin: 10% auto 0 auto; border: 1px solid #ccc; border-radius: 2px; } werkzeug-0.14.1/werkzeug/debug/tbtools.py000066400000000000000000000440231322225165500204370ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.debug.tbtools ~~~~~~~~~~~~~~~~~~~~~~ This module provides various traceback related utility functions. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD. """ import re import os import sys import json import inspect import traceback import codecs from tokenize import TokenError from werkzeug.utils import cached_property, escape from werkzeug.debug.console import Console from werkzeug._compat import range_type, PY2, text_type, string_types, \ to_native, to_unicode from werkzeug.filesystem import get_filesystem_encoding _coding_re = re.compile(br'coding[:=]\s*([-\w.]+)') _line_re = re.compile(br'^(.*?)$', re.MULTILINE) _funcdef_re = re.compile(r'^(\s*def\s)|(.*(? %(title)s // Werkzeug Debugger
''' FOOTER = u'''\

Console Locked

The console is locked and needs to be unlocked by entering the PIN. You can find the PIN printed out on the standard output of your shell that runs the server.

PIN:

''' PAGE_HTML = HEADER + u'''\

%(exception_type)s

%(exception)s

Traceback (most recent call last)

%(summary)s

This is the Copy/Paste friendly version of the traceback. You can also paste this traceback into a gist:

The debugger caught an exception in your WSGI application. You can now look at the traceback which led to the error. If you enable JavaScript you can also use additional features such as code execution (if the evalex feature is enabled), automatic pasting of the exceptions and much more.
''' + FOOTER + ''' ''' CONSOLE_HTML = HEADER + u'''\

Interactive Console

In this console you can execute Python expressions in the context of the application. The initial namespace was created by the debugger automatically.
The Console requires JavaScript.
''' + FOOTER SUMMARY_HTML = u'''\
%(title)s
    %(frames)s
%(description)s
''' FRAME_HTML = u'''\

File "%(filename)s", line %(lineno)s, in %(function_name)s

%(lines)s
''' SOURCE_LINE_HTML = u'''\ %(lineno)s %(code)s ''' def render_console_html(secret, evalex_trusted=True): return CONSOLE_HTML % { 'evalex': 'true', 'evalex_trusted': evalex_trusted and 'true' or 'false', 'console': 'true', 'title': 'Console', 'secret': secret, 'traceback_id': -1 } def get_current_traceback(ignore_system_exceptions=False, show_hidden_frames=False, skip=0): """Get the current exception info as `Traceback` object. Per default calling this method will reraise system exceptions such as generator exit, system exit or others. This behavior can be disabled by passing `False` to the function as first parameter. """ exc_type, exc_value, tb = sys.exc_info() if ignore_system_exceptions and exc_type in system_exceptions: raise for x in range_type(skip): if tb.tb_next is None: break tb = tb.tb_next tb = Traceback(exc_type, exc_value, tb) if not show_hidden_frames: tb.filter_hidden_frames() return tb class Line(object): """Helper for the source renderer.""" __slots__ = ('lineno', 'code', 'in_frame', 'current') def __init__(self, lineno, code): self.lineno = lineno self.code = code self.in_frame = False self.current = False def classes(self): rv = ['line'] if self.in_frame: rv.append('in-frame') if self.current: rv.append('current') return rv classes = property(classes) def render(self): return SOURCE_LINE_HTML % { 'classes': u' '.join(self.classes), 'lineno': self.lineno, 'code': escape(self.code) } class Traceback(object): """Wraps a traceback.""" def __init__(self, exc_type, exc_value, tb): self.exc_type = exc_type self.exc_value = exc_value if not isinstance(exc_type, str): exception_type = exc_type.__name__ if exc_type.__module__ not in ('__builtin__', 'exceptions'): exception_type = exc_type.__module__ + '.' + exception_type else: exception_type = exc_type self.exception_type = exception_type # we only add frames to the list that are not hidden. This follows # the the magic variables as defined by paste.exceptions.collector self.frames = [] while tb: self.frames.append(Frame(exc_type, exc_value, tb)) tb = tb.tb_next def filter_hidden_frames(self): """Remove the frames according to the paste spec.""" if not self.frames: return new_frames = [] hidden = False for frame in self.frames: hide = frame.hide if hide in ('before', 'before_and_this'): new_frames = [] hidden = False if hide == 'before_and_this': continue elif hide in ('reset', 'reset_and_this'): hidden = False if hide == 'reset_and_this': continue elif hide in ('after', 'after_and_this'): hidden = True if hide == 'after_and_this': continue elif hide or hidden: continue new_frames.append(frame) # if we only have one frame and that frame is from the codeop # module, remove it. 
if len(new_frames) == 1 and self.frames[0].module == 'codeop': del self.frames[:] # if the last frame is missing something went terrible wrong :( elif self.frames[-1] in new_frames: self.frames[:] = new_frames def is_syntax_error(self): """Is it a syntax error?""" return isinstance(self.exc_value, SyntaxError) is_syntax_error = property(is_syntax_error) def exception(self): """String representation of the exception.""" buf = traceback.format_exception_only(self.exc_type, self.exc_value) rv = ''.join(buf).strip() return rv.decode('utf-8', 'replace') if PY2 else rv exception = property(exception) def log(self, logfile=None): """Log the ASCII traceback into a file object.""" if logfile is None: logfile = sys.stderr tb = self.plaintext.rstrip() + u'\n' if PY2: tb = tb.encode('utf-8', 'replace') logfile.write(tb) def paste(self): """Create a paste and return the paste id.""" data = json.dumps({ 'description': 'Werkzeug Internal Server Error', 'public': False, 'files': { 'traceback.txt': { 'content': self.plaintext } } }).encode('utf-8') try: from urllib2 import urlopen except ImportError: from urllib.request import urlopen rv = urlopen('https://api.github.com/gists', data=data) resp = json.loads(rv.read().decode('utf-8')) rv.close() return { 'url': resp['html_url'], 'id': resp['id'] } def render_summary(self, include_title=True): """Render the traceback for the interactive console.""" title = '' frames = [] classes = ['traceback'] if not self.frames: classes.append('noframe-traceback') if include_title: if self.is_syntax_error: title = u'Syntax Error' else: title = u'Traceback (most recent call last):' for frame in self.frames: frames.append(u'%s' % ( frame.info and u' title="%s"' % escape(frame.info) or u'', frame.render() )) if self.is_syntax_error: description_wrapper = u'
%s
' else: description_wrapper = u'
%s
' return SUMMARY_HTML % { 'classes': u' '.join(classes), 'title': title and u'

%s

' % title or u'', 'frames': u'\n'.join(frames), 'description': description_wrapper % escape(self.exception) } def render_full(self, evalex=False, secret=None, evalex_trusted=True): """Render the Full HTML page with the traceback info.""" exc = escape(self.exception) return PAGE_HTML % { 'evalex': evalex and 'true' or 'false', 'evalex_trusted': evalex_trusted and 'true' or 'false', 'console': 'false', 'title': exc, 'exception': exc, 'exception_type': escape(self.exception_type), 'summary': self.render_summary(include_title=False), 'plaintext': escape(self.plaintext), 'plaintext_cs': re.sub('-{2,}', '-', self.plaintext), 'traceback_id': self.id, 'secret': secret } def generate_plaintext_traceback(self): """Like the plaintext attribute but returns a generator""" yield u'Traceback (most recent call last):' for frame in self.frames: yield u' File "%s", line %s, in %s' % ( frame.filename, frame.lineno, frame.function_name ) yield u' ' + frame.current_line.strip() yield self.exception def plaintext(self): return u'\n'.join(self.generate_plaintext_traceback()) plaintext = cached_property(plaintext) id = property(lambda x: id(x)) class Frame(object): """A single frame in a traceback.""" def __init__(self, exc_type, exc_value, tb): self.lineno = tb.tb_lineno self.function_name = tb.tb_frame.f_code.co_name self.locals = tb.tb_frame.f_locals self.globals = tb.tb_frame.f_globals fn = inspect.getsourcefile(tb) or inspect.getfile(tb) if fn[-4:] in ('.pyo', '.pyc'): fn = fn[:-1] # if it's a file on the file system resolve the real filename. if os.path.isfile(fn): fn = os.path.realpath(fn) self.filename = to_unicode(fn, get_filesystem_encoding()) self.module = self.globals.get('__name__') self.loader = self.globals.get('__loader__') self.code = tb.tb_frame.f_code # support for paste's traceback extensions self.hide = self.locals.get('__traceback_hide__', False) info = self.locals.get('__traceback_info__') if info is not None: try: info = text_type(info) except UnicodeError: info = str(info).decode('utf-8', 'replace') self.info = info def render(self): """Render a single frame in a traceback.""" return FRAME_HTML % { 'id': self.id, 'filename': escape(self.filename), 'lineno': self.lineno, 'function_name': escape(self.function_name), 'lines': self.render_line_context(), } def render_line_context(self): before, current, after = self.get_context_lines() rv = [] def render_line(line, cls): line = line.expandtabs().rstrip() stripped_line = line.strip() prefix = len(line) - len(stripped_line) rv.append( '
%s%s
' % ( cls, ' ' * prefix, escape(stripped_line) or ' ')) for line in before: render_line(line, 'before') render_line(current, 'current') for line in after: render_line(line, 'after') return '\n'.join(rv) def get_annotated_lines(self): """Helper function that returns lines with extra information.""" lines = [Line(idx + 1, x) for idx, x in enumerate(self.sourcelines)] # find function definition and mark lines if hasattr(self.code, 'co_firstlineno'): lineno = self.code.co_firstlineno - 1 while lineno > 0: if _funcdef_re.match(lines[lineno].code): break lineno -= 1 try: offset = len(inspect.getblock([x.code + '\n' for x in lines[lineno:]])) except TokenError: offset = 0 for line in lines[lineno:lineno + offset]: line.in_frame = True # mark current line try: lines[self.lineno - 1].current = True except IndexError: pass return lines def eval(self, code, mode='single'): """Evaluate code in the context of the frame.""" if isinstance(code, string_types): if PY2 and isinstance(code, unicode): # noqa code = UTF8_COOKIE + code.encode('utf-8') code = compile(code, '', mode) return eval(code, self.globals, self.locals) @cached_property def sourcelines(self): """The sourcecode of the file as list of unicode strings.""" # get sourcecode from loader or file source = None if self.loader is not None: try: if hasattr(self.loader, 'get_source'): source = self.loader.get_source(self.module) elif hasattr(self.loader, 'get_source_by_code'): source = self.loader.get_source_by_code(self.code) except Exception: # we munch the exception so that we don't cause troubles # if the loader is broken. pass if source is None: try: f = open(to_native(self.filename, get_filesystem_encoding()), mode='rb') except IOError: return [] try: source = f.read() finally: f.close() # already unicode? return right away if isinstance(source, text_type): return source.splitlines() # yes. it should be ascii, but we don't want to reject too many # characters in the debugger if something breaks charset = 'utf-8' if source.startswith(UTF8_COOKIE): source = source[3:] else: for idx, match in enumerate(_line_re.finditer(source)): match = _coding_re.search(match.group()) if match is not None: charset = match.group(1) break if idx > 1: break # on broken cookies we fall back to utf-8 too charset = to_native(charset) try: codecs.lookup(charset) except LookupError: charset = 'utf-8' return source.decode(charset, 'replace').splitlines() def get_context_lines(self, context=5): before = self.sourcelines[self.lineno - context - 1:self.lineno - 1] past = self.sourcelines[self.lineno:self.lineno + context] return ( before, self.current_line, past, ) @property def current_line(self): try: return self.sourcelines[self.lineno - 1] except IndexError: return u'' @cached_property def console(self): return Console(self.globals, self.locals) id = property(lambda x: id(x)) werkzeug-0.14.1/werkzeug/exceptions.py000066400000000000000000000500311322225165500200400ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.exceptions ~~~~~~~~~~~~~~~~~~~ This module implements a number of Python exceptions you can raise from within your views to trigger a standard non-200 response. 
Usage Example ------------- :: from werkzeug.wrappers import BaseRequest from werkzeug.wsgi import responder from werkzeug.exceptions import HTTPException, NotFound def view(request): raise NotFound() @responder def application(environ, start_response): request = BaseRequest(environ) try: return view(request) except HTTPException as e: return e As you can see from this example those exceptions are callable WSGI applications. Because of Python 2.4 compatibility those do not extend from the response objects but only from the python exception class. As a matter of fact they are not Werkzeug response objects. However you can get a response object by calling ``get_response()`` on a HTTP exception. Keep in mind that you have to pass an environment to ``get_response()`` because some errors fetch additional information from the WSGI environment. If you want to hook in a different exception page to say, a 404 status code, you can add a second except for a specific subclass of an error:: @responder def application(environ, start_response): request = BaseRequest(environ) try: return view(request) except NotFound, e: return not_found(request) except HTTPException, e: return e :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import sys # Because of bootstrapping reasons we need to manually patch ourselves # onto our parent module. import werkzeug werkzeug.exceptions = sys.modules[__name__] from werkzeug._internal import _get_environ from werkzeug._compat import iteritems, integer_types, text_type, \ implements_to_string from werkzeug.wrappers import Response @implements_to_string class HTTPException(Exception): """ Baseclass for all HTTP exceptions. This exception can be called as WSGI application to render a default error page or you can catch the subclasses of it independently and render nicer error messages. """ code = None description = None def __init__(self, description=None, response=None): Exception.__init__(self) if description is not None: self.description = description self.response = response @classmethod def wrap(cls, exception, name=None): """This method returns a new subclass of the exception provided that also is a subclass of `BadRequest`. """ class newcls(cls, exception): def __init__(self, arg=None, *args, **kwargs): cls.__init__(self, *args, **kwargs) exception.__init__(self, arg) newcls.__module__ = sys._getframe(1).f_globals.get('__name__') newcls.__name__ = name or cls.__name__ + exception.__name__ return newcls @property def name(self): """The status name.""" return HTTP_STATUS_CODES.get(self.code, 'Unknown Error') def get_description(self, environ=None): """Get the description.""" return u'

%s

' % escape(self.description) def get_body(self, environ=None): """Get the HTML body.""" return text_type(( u'\n' u'%(code)s %(name)s\n' u'

%(name)s

\n' u'%(description)s\n' ) % { 'code': self.code, 'name': escape(self.name), 'description': self.get_description(environ) }) def get_headers(self, environ=None): """Get a list of headers.""" return [('Content-Type', 'text/html')] def get_response(self, environ=None): """Get a response object. If one was passed to the exception it's returned directly. :param environ: the optional environ for the request. This can be used to modify the response depending on how the request looked like. :return: a :class:`Response` object or a subclass thereof. """ if self.response is not None: return self.response if environ is not None: environ = _get_environ(environ) headers = self.get_headers(environ) return Response(self.get_body(environ), self.code, headers) def __call__(self, environ, start_response): """Call the exception as WSGI application. :param environ: the WSGI environment. :param start_response: the response callable provided by the WSGI server. """ response = self.get_response(environ) return response(environ, start_response) def __str__(self): code = self.code if self.code is not None else '???' return '%s %s: %s' % (code, self.name, self.description) def __repr__(self): code = self.code if self.code is not None else '???' return "<%s '%s: %s'>" % (self.__class__.__name__, code, self.name) class BadRequest(HTTPException): """*400* `Bad Request` Raise if the browser sends something to the application the application or server cannot handle. """ code = 400 description = ( 'The browser (or proxy) sent a request that this server could ' 'not understand.' ) class ClientDisconnected(BadRequest): """Internal exception that is raised if Werkzeug detects a disconnected client. Since the client is already gone at that point attempting to send the error message to the client might not work and might ultimately result in another exception in the server. Mainly this is here so that it is silenced by default as far as Werkzeug is concerned. Since disconnections cannot be reliably detected and are unspecified by WSGI to a large extent this might or might not be raised if a client is gone. .. versionadded:: 0.8 """ class SecurityError(BadRequest): """Raised if something triggers a security error. This is otherwise exactly like a bad request error. .. versionadded:: 0.9 """ class BadHost(BadRequest): """Raised if the submitted host is badly formatted. .. versionadded:: 0.11.2 """ class Unauthorized(HTTPException): """*401* `Unauthorized` Raise if the user is not authorized. Also used if you want to use HTTP basic auth. """ code = 401 description = ( 'The server could not verify that you are authorized to access ' 'the URL requested. You either supplied the wrong credentials (e.g. ' 'a bad password), or your browser doesn\'t understand how to supply ' 'the credentials required.' ) class Forbidden(HTTPException): """*403* `Forbidden` Raise if the user doesn't have the permission for the requested resource but was authenticated. """ code = 403 description = ( 'You don\'t have the permission to access the requested resource. ' 'It is either read-protected or not readable by the server.' ) class NotFound(HTTPException): """*404* `Not Found` Raise if a resource does not exist and never existed. """ code = 404 description = ( 'The requested URL was not found on the server. ' 'If you entered the URL manually please check your spelling and ' 'try again.' ) class MethodNotAllowed(HTTPException): """*405* `Method Not Allowed` Raise if the server used a method the resource does not handle. 
For example `POST` if the resource is view only. Especially useful for REST. The first argument for this exception should be a list of allowed methods. Strictly speaking the response would be invalid if you don't provide valid methods in the header which you can do with that list. """ code = 405 description = 'The method is not allowed for the requested URL.' def __init__(self, valid_methods=None, description=None): """Takes an optional list of valid http methods starting with werkzeug 0.3 the list will be mandatory.""" HTTPException.__init__(self, description) self.valid_methods = valid_methods def get_headers(self, environ): headers = HTTPException.get_headers(self, environ) if self.valid_methods: headers.append(('Allow', ', '.join(self.valid_methods))) return headers class NotAcceptable(HTTPException): """*406* `Not Acceptable` Raise if the server can't return any content conforming to the `Accept` headers of the client. """ code = 406 description = ( 'The resource identified by the request is only capable of ' 'generating response entities which have content characteristics ' 'not acceptable according to the accept headers sent in the ' 'request.' ) class RequestTimeout(HTTPException): """*408* `Request Timeout` Raise to signalize a timeout. """ code = 408 description = ( 'The server closed the network connection because the browser ' 'didn\'t finish the request within the specified time.' ) class Conflict(HTTPException): """*409* `Conflict` Raise to signal that a request cannot be completed because it conflicts with the current state on the server. .. versionadded:: 0.7 """ code = 409 description = ( 'A conflict happened while processing the request. The resource ' 'might have been modified while the request was being processed.' ) class Gone(HTTPException): """*410* `Gone` Raise if a resource existed previously and went away without new location. """ code = 410 description = ( 'The requested URL is no longer available on this server and there ' 'is no forwarding address. If you followed a link from a foreign ' 'page, please contact the author of this page.' ) class LengthRequired(HTTPException): """*411* `Length Required` Raise if the browser submitted data but no ``Content-Length`` header which is required for the kind of processing the server does. """ code = 411 description = ( 'A request with this method requires a valid Content-' 'Length header.' ) class PreconditionFailed(HTTPException): """*412* `Precondition Failed` Status code used in combination with ``If-Match``, ``If-None-Match``, or ``If-Unmodified-Since``. """ code = 412 description = ( 'The precondition on the request for the URL failed positive ' 'evaluation.' ) class RequestEntityTooLarge(HTTPException): """*413* `Request Entity Too Large` The status code one should return if the data submitted exceeded a given limit. """ code = 413 description = ( 'The data value transmitted exceeds the capacity limit.' ) class RequestURITooLarge(HTTPException): """*414* `Request URI Too Large` Like *413* but for too long URLs. """ code = 414 description = ( 'The length of the requested URL exceeds the capacity limit ' 'for this server. The request cannot be processed.' ) class UnsupportedMediaType(HTTPException): """*415* `Unsupported Media Type` The status code returned if the server is unable to handle the media type the client transmitted. """ code = 415 description = ( 'The server does not support the media type transmitted in ' 'the request.' 
) class RequestedRangeNotSatisfiable(HTTPException): """*416* `Requested Range Not Satisfiable` The client asked for an invalid part of the file. .. versionadded:: 0.7 """ code = 416 description = ( 'The server cannot provide the requested range.' ) def __init__(self, length=None, units="bytes", description=None): """Takes an optional `Content-Range` header value based on ``length`` parameter. """ HTTPException.__init__(self, description) self.length = length self.units = units def get_headers(self, environ): headers = HTTPException.get_headers(self, environ) if self.length is not None: headers.append( ('Content-Range', '%s */%d' % (self.units, self.length))) return headers class ExpectationFailed(HTTPException): """*417* `Expectation Failed` The server cannot meet the requirements of the Expect request-header. .. versionadded:: 0.7 """ code = 417 description = ( 'The server could not meet the requirements of the Expect header' ) class ImATeapot(HTTPException): """*418* `I'm a teapot` The server should return this if it is a teapot and someone attempted to brew coffee with it. .. versionadded:: 0.7 """ code = 418 description = ( 'This server is a teapot, not a coffee machine' ) class UnprocessableEntity(HTTPException): """*422* `Unprocessable Entity` Used if the request is well formed, but the instructions are otherwise incorrect. """ code = 422 description = ( 'The request was well-formed but was unable to be followed ' 'due to semantic errors.' ) class Locked(HTTPException): """*423* `Locked` Used if the resource that is being accessed is locked. """ code = 423 description = ( 'The resource that is being accessed is locked.' ) class PreconditionRequired(HTTPException): """*428* `Precondition Required` The server requires this request to be conditional, typically to prevent the lost update problem, which is a race condition between two or more clients attempting to update a resource through PUT or DELETE. By requiring each client to include a conditional header ("If-Match" or "If-Unmodified- Since") with the proper value retained from a recent GET request, the server ensures that each client has at least seen the previous revision of the resource. """ code = 428 description = ( 'This request is required to be conditional; try using "If-Match" ' 'or "If-Unmodified-Since".' ) class TooManyRequests(HTTPException): """*429* `Too Many Requests` The server is limiting the rate at which this user receives responses, and this request exceeds that rate. (The server may use any convenient method to identify users and their request rates). The server may include a "Retry-After" header to indicate how long the user should wait before retrying. """ code = 429 description = ( 'This user has exceeded an allotted request count. Try again later.' ) class RequestHeaderFieldsTooLarge(HTTPException): """*431* `Request Header Fields Too Large` The server refuses to process the request because the header fields are too large. One or more individual fields may be too large, or the set of all headers is too large. """ code = 431 description = ( 'One or more header fields exceeds the maximum size.' ) class UnavailableForLegalReasons(HTTPException): """*451* `Unavailable For Legal Reasons` This status code indicates that the server is denying access to the resource as a consequence of a legal demand. """ code = 451 description = ( 'Unavailable for legal reasons.' ) class InternalServerError(HTTPException): """*500* `Internal Server Error` Raise if an internal server error occurred. 
This is a good fallback if an unknown error occurred in the dispatcher. """ code = 500 description = ( 'The server encountered an internal error and was unable to ' 'complete your request. Either the server is overloaded or there ' 'is an error in the application.' ) class NotImplemented(HTTPException): """*501* `Not Implemented` Raise if the application does not support the action requested by the browser. """ code = 501 description = ( 'The server does not support the action requested by the ' 'browser.' ) class BadGateway(HTTPException): """*502* `Bad Gateway` If you do proxying in your application you should return this status code if you received an invalid response from the upstream server it accessed in attempting to fulfill the request. """ code = 502 description = ( 'The proxy server received an invalid response from an upstream ' 'server.' ) class ServiceUnavailable(HTTPException): """*503* `Service Unavailable` Status code you should return if a service is temporarily unavailable. """ code = 503 description = ( 'The server is temporarily unable to service your request due to ' 'maintenance downtime or capacity problems. Please try again ' 'later.' ) class GatewayTimeout(HTTPException): """*504* `Gateway Timeout` Status code you should return if a connection to an upstream server times out. """ code = 504 description = ( 'The connection to an upstream server timed out.' ) class HTTPVersionNotSupported(HTTPException): """*505* `HTTP Version Not Supported` The server does not support the HTTP protocol version used in the request. """ code = 505 description = ( 'The server does not support the HTTP protocol version used in the ' 'request.' ) default_exceptions = {} __all__ = ['HTTPException'] def _find_exceptions(): for name, obj in iteritems(globals()): try: is_http_exception = issubclass(obj, HTTPException) except TypeError: is_http_exception = False if not is_http_exception or obj.code is None: continue __all__.append(obj.__name__) old_obj = default_exceptions.get(obj.code, None) if old_obj is not None and issubclass(obj, old_obj): continue default_exceptions[obj.code] = obj _find_exceptions() del _find_exceptions class Aborter(object): """ When passed a dict of code -> exception items it can be used as callable that raises exceptions. If the first argument to the callable is an integer it will be looked up in the mapping, if it's a WSGI application it will be raised in a proxy exception. The rest of the arguments are forwarded to the exception constructor. """ def __init__(self, mapping=None, extra=None): if mapping is None: mapping = default_exceptions self.mapping = dict(mapping) if extra is not None: self.mapping.update(extra) def __call__(self, code, *args, **kwargs): if not args and not kwargs and not isinstance(code, integer_types): raise HTTPException(response=code) if code not in self.mapping: raise LookupError('no exception for %r' % code) raise self.mapping[code](*args, **kwargs) def abort(status, *args, **kwargs): ''' Raises an :py:exc:`HTTPException` for the given status code or WSGI application:: abort(404) # 404 Not Found abort(Response('Hello World')) Can be passed a WSGI application or a status code. 
If a status code is given it's looked up in the list of exceptions and will raise that exception, if passed a WSGI application it will wrap it in a proxy WSGI exception and raise that:: abort(404) abort(Response('Hello World')) ''' return _aborter(status, *args, **kwargs) _aborter = Aborter() #: an exception that is used internally to signal both a key error and a #: bad request. Used by a lot of the datastructures. BadRequestKeyError = BadRequest.wrap(KeyError) # imported here because of circular dependencies of werkzeug.utils from werkzeug.utils import escape from werkzeug.http import HTTP_STATUS_CODES werkzeug-0.14.1/werkzeug/filesystem.py000066400000000000000000000041771322225165500200550ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.filesystem ~~~~~~~~~~~~~~~~~~~ Various utilities for the local filesystem. :copyright: (c) 2015 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import codecs import sys import warnings # We do not trust traditional unixes. has_likely_buggy_unicode_filesystem = \ sys.platform.startswith('linux') or 'bsd' in sys.platform def _is_ascii_encoding(encoding): """ Given an encoding this figures out if the encoding is actually ASCII (which is something we don't actually want in most cases). This is necessary because ASCII comes under many names such as ANSI_X3.4-1968. """ if encoding is None: return False try: return codecs.lookup(encoding).name == 'ascii' except LookupError: return False class BrokenFilesystemWarning(RuntimeWarning, UnicodeWarning): '''The warning used by Werkzeug to signal a broken filesystem. Will only be used once per runtime.''' _warned_about_filesystem_encoding = False def get_filesystem_encoding(): """ Returns the filesystem encoding that should be used. Note that this is different from the Python understanding of the filesystem encoding which might be deeply flawed. Do not use this value against Python's unicode APIs because it might be different. See :ref:`filesystem-encoding` for the exact behavior. The concept of a filesystem encoding in generally is not something you should rely on. As such if you ever need to use this function except for writing wrapper code reconsider. """ global _warned_about_filesystem_encoding rv = sys.getfilesystemencoding() if has_likely_buggy_unicode_filesystem and not rv \ or _is_ascii_encoding(rv): if not _warned_about_filesystem_encoding: warnings.warn( 'Detected a misconfigured UNIX filesystem: Will use UTF-8 as ' 'filesystem encoding instead of {0!r}'.format(rv), BrokenFilesystemWarning) _warned_about_filesystem_encoding = True return 'utf-8' return rv werkzeug-0.14.1/werkzeug/formparser.py000066400000000000000000000523321322225165500200450ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.formparser ~~~~~~~~~~~~~~~~~~~ This module implements the form parsing. It supports url-encoded forms as well as non-nested multipart uploads. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import re import codecs # there are some platforms where SpooledTemporaryFile is not available. # In that case we need to provide a fallback. 
try: from tempfile import SpooledTemporaryFile except ImportError: from tempfile import TemporaryFile SpooledTemporaryFile = None from itertools import chain, repeat, tee from functools import update_wrapper from werkzeug._compat import to_native, text_type, BytesIO from werkzeug.urls import url_decode_stream from werkzeug.wsgi import make_line_iter, \ get_input_stream, get_content_length from werkzeug.datastructures import Headers, FileStorage, MultiDict from werkzeug.http import parse_options_header #: an iterator that yields empty strings _empty_string_iter = repeat('') #: a regular expression for multipart boundaries _multipart_boundary_re = re.compile('^[ -~]{0,200}[!-~]$') #: supported http encodings that are also available in python we support #: for multipart messages. _supported_multipart_encodings = frozenset(['base64', 'quoted-printable']) def default_stream_factory(total_content_length, filename, content_type, content_length=None): """The stream factory that is used per default.""" max_size = 1024 * 500 if SpooledTemporaryFile is not None: return SpooledTemporaryFile(max_size=max_size, mode='wb+') if total_content_length is None or total_content_length > max_size: return TemporaryFile('wb+') return BytesIO() def parse_form_data(environ, stream_factory=None, charset='utf-8', errors='replace', max_form_memory_size=None, max_content_length=None, cls=None, silent=True): """Parse the form data in the environ and return it as tuple in the form ``(stream, form, files)``. You should only call this method if the transport method is `POST`, `PUT`, or `PATCH`. If the mimetype of the data transmitted is `multipart/form-data` the files multidict will be filled with `FileStorage` objects. If the mimetype is unknown the input stream is wrapped and returned as first argument, else the stream is empty. This is a shortcut for the common usage of :class:`FormDataParser`. Have a look at :ref:`dealing-with-request-data` for more details. .. versionadded:: 0.5 The `max_form_memory_size`, `max_content_length` and `cls` parameters were added. .. versionadded:: 0.5.1 The optional `silent` flag was added. :param environ: the WSGI environment to be used for parsing. :param stream_factory: An optional callable that returns a new read and writeable file descriptor. This callable works the same as :meth:`~BaseResponse._get_file_stream`. :param charset: The character set for URL and url encoded form data. :param errors: The encoding error behavior. :param max_form_memory_size: the maximum number of bytes to be accepted for in-memory stored form data. If the data exceeds the value specified an :exc:`~exceptions.RequestEntityTooLarge` exception is raised. :param max_content_length: If this is provided and the transmitted data is longer than this value an :exc:`~exceptions.RequestEntityTooLarge` exception is raised. :param cls: an optional dict class to use. If this is not specified or `None` the default :class:`MultiDict` is used. :param silent: If set to False parsing errors will not be caught. :return: A tuple in the form ``(stream, form, files)``. 
""" return FormDataParser(stream_factory, charset, errors, max_form_memory_size, max_content_length, cls, silent).parse_from_environ(environ) def exhaust_stream(f): """Helper decorator for methods that exhausts the stream on return.""" def wrapper(self, stream, *args, **kwargs): try: return f(self, stream, *args, **kwargs) finally: exhaust = getattr(stream, 'exhaust', None) if exhaust is not None: exhaust() else: while 1: chunk = stream.read(1024 * 64) if not chunk: break return update_wrapper(wrapper, f) class FormDataParser(object): """This class implements parsing of form data for Werkzeug. By itself it can parse multipart and url encoded form data. It can be subclassed and extended but for most mimetypes it is a better idea to use the untouched stream and expose it as separate attributes on a request object. .. versionadded:: 0.8 :param stream_factory: An optional callable that returns a new read and writeable file descriptor. This callable works the same as :meth:`~BaseResponse._get_file_stream`. :param charset: The character set for URL and url encoded form data. :param errors: The encoding error behavior. :param max_form_memory_size: the maximum number of bytes to be accepted for in-memory stored form data. If the data exceeds the value specified an :exc:`~exceptions.RequestEntityTooLarge` exception is raised. :param max_content_length: If this is provided and the transmitted data is longer than this value an :exc:`~exceptions.RequestEntityTooLarge` exception is raised. :param cls: an optional dict class to use. If this is not specified or `None` the default :class:`MultiDict` is used. :param silent: If set to False parsing errors will not be caught. """ def __init__(self, stream_factory=None, charset='utf-8', errors='replace', max_form_memory_size=None, max_content_length=None, cls=None, silent=True): if stream_factory is None: stream_factory = default_stream_factory self.stream_factory = stream_factory self.charset = charset self.errors = errors self.max_form_memory_size = max_form_memory_size self.max_content_length = max_content_length if cls is None: cls = MultiDict self.cls = cls self.silent = silent def get_parse_func(self, mimetype, options): return self.parse_functions.get(mimetype) def parse_from_environ(self, environ): """Parses the information from the environment as form data. :param environ: the WSGI environment to be used for parsing. :return: A tuple in the form ``(stream, form, files)``. """ content_type = environ.get('CONTENT_TYPE', '') content_length = get_content_length(environ) mimetype, options = parse_options_header(content_type) return self.parse(get_input_stream(environ), mimetype, content_length, options) def parse(self, stream, mimetype, content_length, options=None): """Parses the information from the given stream, mimetype, content length and mimetype parameters. :param stream: an input stream :param mimetype: the mimetype of the data :param content_length: the content length of the incoming data :param options: optional mimetype parameters (used for the multipart boundary for instance) :return: A tuple in the form ``(stream, form, files)``. 
""" if self.max_content_length is not None and \ content_length is not None and \ content_length > self.max_content_length: raise exceptions.RequestEntityTooLarge() if options is None: options = {} parse_func = self.get_parse_func(mimetype, options) if parse_func is not None: try: return parse_func(self, stream, mimetype, content_length, options) except ValueError: if not self.silent: raise return stream, self.cls(), self.cls() @exhaust_stream def _parse_multipart(self, stream, mimetype, content_length, options): parser = MultiPartParser(self.stream_factory, self.charset, self.errors, max_form_memory_size=self.max_form_memory_size, cls=self.cls) boundary = options.get('boundary') if boundary is None: raise ValueError('Missing boundary') if isinstance(boundary, text_type): boundary = boundary.encode('ascii') form, files = parser.parse(stream, boundary, content_length) return stream, form, files @exhaust_stream def _parse_urlencoded(self, stream, mimetype, content_length, options): if self.max_form_memory_size is not None and \ content_length is not None and \ content_length > self.max_form_memory_size: raise exceptions.RequestEntityTooLarge() form = url_decode_stream(stream, self.charset, errors=self.errors, cls=self.cls) return stream, form, self.cls() #: mapping of mimetypes to parsing functions parse_functions = { 'multipart/form-data': _parse_multipart, 'application/x-www-form-urlencoded': _parse_urlencoded, 'application/x-url-encoded': _parse_urlencoded } def is_valid_multipart_boundary(boundary): """Checks if the string given is a valid multipart boundary.""" return _multipart_boundary_re.match(boundary) is not None def _line_parse(line): """Removes line ending characters and returns a tuple (`stripped_line`, `is_terminated`). """ if line[-2:] in ['\r\n', b'\r\n']: return line[:-2], True elif line[-1:] in ['\r', '\n', b'\r', b'\n']: return line[:-1], True return line, False def parse_multipart_headers(iterable): """Parses multipart headers from an iterable that yields lines (including the trailing newline symbol). The iterable has to be newline terminated. The iterable will stop at the line where the headers ended so it can be further consumed. :param iterable: iterable of strings that are newline terminated """ result = [] for line in iterable: line = to_native(line) line, line_terminated = _line_parse(line) if not line_terminated: raise ValueError('unexpected end of line in multipart header') if not line: break elif line[0] in ' \t' and result: key, value = result[-1] result[-1] = (key, value + '\n ' + line[1:]) else: parts = line.split(':', 1) if len(parts) == 2: result.append((parts[0].strip(), parts[1].strip())) # we link the list to the headers, no need to create a copy, the # list was not shared anyways. 
return Headers(result) _begin_form = 'begin_form' _begin_file = 'begin_file' _cont = 'cont' _end = 'end' class MultiPartParser(object): def __init__(self, stream_factory=None, charset='utf-8', errors='replace', max_form_memory_size=None, cls=None, buffer_size=64 * 1024): self.charset = charset self.errors = errors self.max_form_memory_size = max_form_memory_size self.stream_factory = default_stream_factory if stream_factory is None else stream_factory self.cls = MultiDict if cls is None else cls # make sure the buffer size is divisible by four so that we can base64 # decode chunk by chunk assert buffer_size % 4 == 0, 'buffer size has to be divisible by 4' # also the buffer size has to be at least 1024 bytes long or long headers # will freak out the system assert buffer_size >= 1024, 'buffer size has to be at least 1KB' self.buffer_size = buffer_size def _fix_ie_filename(self, filename): """Internet Explorer 6 transmits the full file name if a file is uploaded. This function strips the full path if it thinks the filename is Windows-like absolute. """ if filename[1:3] == ':\\' or filename[:2] == '\\\\': return filename.split('\\')[-1] return filename def _find_terminator(self, iterator): """The terminator might have some additional newlines before it. There is at least one application that sends additional newlines before headers (the python setuptools package). """ for line in iterator: if not line: break line = line.strip() if line: return line return b'' def fail(self, message): raise ValueError(message) def get_part_encoding(self, headers): transfer_encoding = headers.get('content-transfer-encoding') if transfer_encoding is not None and \ transfer_encoding in _supported_multipart_encodings: return transfer_encoding def get_part_charset(self, headers): # Figure out input charset for current part content_type = headers.get('content-type') if content_type: mimetype, ct_params = parse_options_header(content_type) return ct_params.get('charset', self.charset) return self.charset def start_file_streaming(self, filename, headers, total_content_length): if isinstance(filename, bytes): filename = filename.decode(self.charset, self.errors) filename = self._fix_ie_filename(filename) content_type = headers.get('content-type') try: content_length = int(headers['content-length']) except (KeyError, ValueError): content_length = 0 container = self.stream_factory(total_content_length, content_type, filename, content_length) return filename, container def in_memory_threshold_reached(self, bytes): raise exceptions.RequestEntityTooLarge() def validate_boundary(self, boundary): if not boundary: self.fail('Missing boundary') if not is_valid_multipart_boundary(boundary): self.fail('Invalid boundary: %s' % boundary) if len(boundary) > self.buffer_size: # pragma: no cover # this should never happen because we check for a minimum size # of 1024 and boundaries may not be longer than 200. The only # situation when this happens is for non debug builds where # the assert is skipped. 
self.fail('Boundary longer than buffer size') def parse_lines(self, file, boundary, content_length, cap_at_buffer=True): """Generate parts of ``('begin_form', (headers, name))`` ``('begin_file', (headers, name, filename))`` ``('cont', bytestring)`` ``('end', None)`` Always obeys the grammar parts = ( begin_form cont* end | begin_file cont* end )* """ next_part = b'--' + boundary last_part = next_part + b'--' iterator = chain(make_line_iter(file, limit=content_length, buffer_size=self.buffer_size, cap_at_buffer=cap_at_buffer), _empty_string_iter) terminator = self._find_terminator(iterator) if terminator == last_part: return elif terminator != next_part: self.fail('Expected boundary at start of multipart data') while terminator != last_part: headers = parse_multipart_headers(iterator) disposition = headers.get('content-disposition') if disposition is None: self.fail('Missing Content-Disposition header') disposition, extra = parse_options_header(disposition) transfer_encoding = self.get_part_encoding(headers) name = extra.get('name') # Accept filename* to support non-ascii filenames as per rfc2231 filename = extra.get('filename') or extra.get('filename*') # if no content type is given we stream into memory. A list is # used as a temporary container. if filename is None: yield _begin_form, (headers, name) # otherwise we parse the rest of the headers and ask the stream # factory for something we can write in. else: yield _begin_file, (headers, name, filename) buf = b'' for line in iterator: if not line: self.fail('unexpected end of stream') if line[:2] == b'--': terminator = line.rstrip() if terminator in (next_part, last_part): break if transfer_encoding is not None: if transfer_encoding == 'base64': transfer_encoding = 'base64_codec' try: line = codecs.decode(line, transfer_encoding) except Exception: self.fail('could not decode transfer encoded chunk') # we have something in the buffer from the last iteration. # this is usually a newline delimiter. if buf: yield _cont, buf buf = b'' # If the line ends with windows CRLF we write everything except # the last two bytes. In all other cases however we write # everything except the last byte. If it was a newline, that's # fine, otherwise it does not matter because we will write it # the next iteration. this ensures we do not write the # final newline into the stream. That way we do not have to # truncate the stream. However we do have to make sure that # if something else than a newline is in there we write it # out. if line[-2:] == b'\r\n': buf = b'\r\n' cutoff = -2 else: buf = line[-1:] cutoff = -1 yield _cont, line[:cutoff] else: # pragma: no cover raise ValueError('unexpected end of part') # if we have a leftover in the buffer that is not a newline # character we have to flush it, otherwise we will chop of # certain values. if buf not in (b'', b'\r', b'\n', b'\r\n'): yield _cont, buf yield _end, None def parse_parts(self, file, boundary, content_length): """Generate ``('file', (name, val))`` and ``('form', (name, val))`` parts. 
""" in_memory = 0 for ellt, ell in self.parse_lines(file, boundary, content_length): if ellt == _begin_file: headers, name, filename = ell is_file = True guard_memory = False filename, container = self.start_file_streaming( filename, headers, content_length) _write = container.write elif ellt == _begin_form: headers, name = ell is_file = False container = [] _write = container.append guard_memory = self.max_form_memory_size is not None elif ellt == _cont: _write(ell) # if we write into memory and there is a memory size limit we # count the number of bytes in memory and raise an exception if # there is too much data in memory. if guard_memory: in_memory += len(ell) if in_memory > self.max_form_memory_size: self.in_memory_threshold_reached(in_memory) elif ellt == _end: if is_file: container.seek(0) yield ('file', (name, FileStorage(container, filename, name, headers=headers))) else: part_charset = self.get_part_charset(headers) yield ('form', (name, b''.join(container).decode( part_charset, self.errors))) def parse(self, file, boundary, content_length): formstream, filestream = tee( self.parse_parts(file, boundary, content_length), 2) form = (p[1] for p in formstream if p[0] == 'form') files = (p[1] for p in filestream if p[0] == 'file') return self.cls(form), self.cls(files) from werkzeug import exceptions werkzeug-0.14.1/werkzeug/http.py000066400000000000000000001162171322225165500166470ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.http ~~~~~~~~~~~~~ Werkzeug comes with a bunch of utilities that help Werkzeug to deal with HTTP data. Most of the classes and functions provided by this module are used by the wrappers, but they are useful on their own, too, especially if the response and request objects are not used. This covers some of the more HTTP centric features of WSGI, some other utilities such as cookie handling are documented in the `werkzeug.utils` module. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import re import warnings from time import time, gmtime try: from email.utils import parsedate_tz except ImportError: # pragma: no cover from email.Utils import parsedate_tz try: from urllib.request import parse_http_list as _parse_list_header from urllib.parse import unquote_to_bytes as _unquote except ImportError: # pragma: no cover from urllib2 import parse_http_list as _parse_list_header, \ unquote as _unquote from datetime import datetime, timedelta from hashlib import md5 import base64 from werkzeug._internal import _cookie_quote, _make_cookie_domain, \ _cookie_parse_impl from werkzeug._compat import to_unicode, iteritems, text_type, \ string_types, try_coerce_native, to_bytes, PY2, \ integer_types _cookie_charset = 'latin1' # for explanation of "media-range", etc. see Sections 5.3.{1,2} of RFC 7231 _accept_re = re.compile( r'''( # media-range capturing-parenthesis [^\s;,]+ # type/subtype (?:[ \t]*;[ \t]* # ";" (?: # parameter non-capturing-parenthesis [^\s;,q][^\s;,]* # token that doesn't start with "q" | # or q[^\s;,=][^\s;,]* # token that is more than just "q" ) )* # zero or more parameters ) # end of media-range (?:[ \t]*;[ \t]*q= # weight is a "q" parameter (\d*(?:\.\d+)?) # qvalue capturing-parentheses [^,]* # "extension" accept params: who cares? )? 
# accept params are optional ''', re.VERBOSE) _token_chars = frozenset("!#$%&'*+-.0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ" '^_`abcdefghijklmnopqrstuvwxyz|~') _etag_re = re.compile(r'([Ww]/)?(?:"(.*?)"|(.*?))(?:\s*,\s*|$)') _unsafe_header_chars = set('()<>@,;:\"/[]?={} \t') _option_header_piece_re = re.compile(r''' ;\s* (?P "[^"\\]*(?:\\.[^"\\]*)*" # quoted string | [^\s;,=*]+ # token ) \s* (?: # optionally followed by =value (?: # equals sign, possibly with encoding \*\s*=\s* # * indicates extended notation (?P[^\s]+?) '(?P[^\s]*?)' | =\s* # basic notation ) (?P "[^"\\]*(?:\\.[^"\\]*)*" # quoted string | [^;,]+ # token )? )? \s* ''', flags=re.VERBOSE) _option_header_start_mime_type = re.compile(r',\s*([^;,\s]+)([;,]\s*.+)?') _entity_headers = frozenset([ 'allow', 'content-encoding', 'content-language', 'content-length', 'content-location', 'content-md5', 'content-range', 'content-type', 'expires', 'last-modified' ]) _hop_by_hop_headers = frozenset([ 'connection', 'keep-alive', 'proxy-authenticate', 'proxy-authorization', 'te', 'trailer', 'transfer-encoding', 'upgrade' ]) HTTP_STATUS_CODES = { 100: 'Continue', 101: 'Switching Protocols', 102: 'Processing', 200: 'OK', 201: 'Created', 202: 'Accepted', 203: 'Non Authoritative Information', 204: 'No Content', 205: 'Reset Content', 206: 'Partial Content', 207: 'Multi Status', 226: 'IM Used', # see RFC 3229 300: 'Multiple Choices', 301: 'Moved Permanently', 302: 'Found', 303: 'See Other', 304: 'Not Modified', 305: 'Use Proxy', 307: 'Temporary Redirect', 400: 'Bad Request', 401: 'Unauthorized', 402: 'Payment Required', # unused 403: 'Forbidden', 404: 'Not Found', 405: 'Method Not Allowed', 406: 'Not Acceptable', 407: 'Proxy Authentication Required', 408: 'Request Timeout', 409: 'Conflict', 410: 'Gone', 411: 'Length Required', 412: 'Precondition Failed', 413: 'Request Entity Too Large', 414: 'Request URI Too Long', 415: 'Unsupported Media Type', 416: 'Requested Range Not Satisfiable', 417: 'Expectation Failed', 418: 'I\'m a teapot', # see RFC 2324 422: 'Unprocessable Entity', 423: 'Locked', 424: 'Failed Dependency', 426: 'Upgrade Required', 428: 'Precondition Required', # see RFC 6585 429: 'Too Many Requests', 431: 'Request Header Fields Too Large', 449: 'Retry With', # proprietary MS extension 451: 'Unavailable For Legal Reasons', 500: 'Internal Server Error', 501: 'Not Implemented', 502: 'Bad Gateway', 503: 'Service Unavailable', 504: 'Gateway Timeout', 505: 'HTTP Version Not Supported', 507: 'Insufficient Storage', 510: 'Not Extended' } def wsgi_to_bytes(data): """coerce wsgi unicode represented bytes to real ones """ if isinstance(data, bytes): return data return data.encode('latin1') # XXX: utf8 fallback? def bytes_to_wsgi(data): assert isinstance(data, bytes), 'data must be bytes' if isinstance(data, str): return data else: return data.decode('latin1') def quote_header_value(value, extra_chars='', allow_token=True): """Quote a header value if necessary. .. versionadded:: 0.5 :param value: the value to quote. :param extra_chars: a list of extra characters to skip quoting. :param allow_token: if this is enabled token values are returned unchanged. """ if isinstance(value, bytes): value = bytes_to_wsgi(value) value = str(value) if allow_token: token_chars = _token_chars | set(extra_chars) if set(value).issubset(token_chars): return value return '"%s"' % value.replace('\\', '\\\\').replace('"', '\\"') def unquote_header_value(value, is_filename=False): r"""Unquotes a header value. (Reversal of :func:`quote_header_value`). 
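    For example (values chosen for illustration):

    >>> unquote_header_value('"hello world"')
    'hello world'
    >>> unquote_header_value('token')
    'token'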
This does not use the real unquoting but what browsers are actually using for quoting. .. versionadded:: 0.5 :param value: the header value to unquote. """ if value and value[0] == value[-1] == '"': # this is not the real unquoting, but fixing this so that the # RFC is met will result in bugs with internet explorer and # probably some other browsers as well. IE for example is # uploading files with "C:\foo\bar.txt" as filename value = value[1:-1] # if this is a filename and the starting characters look like # a UNC path, then just return the value without quotes. Using the # replace sequence below on a UNC path has the effect of turning # the leading double slash into a single slash and then # _fix_ie_filename() doesn't work correctly. See #458. if not is_filename or value[:2] != '\\\\': return value.replace('\\\\', '\\').replace('\\"', '"') return value def dump_options_header(header, options): """The reverse function to :func:`parse_options_header`. :param header: the header to dump :param options: a dict of options to append. """ segments = [] if header is not None: segments.append(header) for key, value in iteritems(options): if value is None: segments.append(key) else: segments.append('%s=%s' % (key, quote_header_value(value))) return '; '.join(segments) def dump_header(iterable, allow_token=True): """Dump an HTTP header again. This is the reversal of :func:`parse_list_header`, :func:`parse_set_header` and :func:`parse_dict_header`. This also quotes strings that include an equals sign unless you pass it as dict of key, value pairs. >>> dump_header({'foo': 'bar baz'}) 'foo="bar baz"' >>> dump_header(('foo', 'bar baz')) 'foo, "bar baz"' :param iterable: the iterable or dict of values to quote. :param allow_token: if set to `False` tokens as values are disallowed. See :func:`quote_header_value` for more details. """ if isinstance(iterable, dict): items = [] for key, value in iteritems(iterable): if value is None: items.append(key) else: items.append('%s=%s' % ( key, quote_header_value(value, allow_token=allow_token) )) else: items = [quote_header_value(x, allow_token=allow_token) for x in iterable] return ', '.join(items) def parse_list_header(value): """Parse lists as described by RFC 2068 Section 2. In particular, parse comma-separated lists where the elements of the list may include quoted-strings. A quoted-string could contain a comma. A non-quoted string could have quotes in the middle. Quotes are removed automatically after parsing. It basically works like :func:`parse_set_header` just that items may appear multiple times and case sensitivity is preserved. The return value is a standard :class:`list`: >>> parse_list_header('token, "quoted value"') ['token', 'quoted value'] To create a header from the :class:`list` again, use the :func:`dump_header` function. :param value: a string with a list header. 
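    Quoted items may themselves contain commas and are kept intact, for
    example:

    >>> parse_list_header('"a, b", c')
    ['a, b', 'c']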
:return: :class:`list` """ result = [] for item in _parse_list_header(value): if item[:1] == item[-1:] == '"': item = unquote_header_value(item[1:-1]) result.append(item) return result def parse_dict_header(value, cls=dict): """Parse lists of key, value pairs as described by RFC 2068 Section 2 and convert them into a python dict (or any other mapping object created from the type with a dict like interface provided by the `cls` argument): >>> d = parse_dict_header('foo="is a fish", bar="as well"') >>> type(d) is dict True >>> sorted(d.items()) [('bar', 'as well'), ('foo', 'is a fish')] If there is no value for a key it will be `None`: >>> parse_dict_header('key_without_value') {'key_without_value': None} To create a header from the :class:`dict` again, use the :func:`dump_header` function. .. versionchanged:: 0.9 Added support for `cls` argument. :param value: a string with a dict header. :param cls: callable to use for storage of parsed results. :return: an instance of `cls` """ result = cls() if not isinstance(value, text_type): # XXX: validate value = bytes_to_wsgi(value) for item in _parse_list_header(value): if '=' not in item: result[item] = None continue name, value = item.split('=', 1) if value[:1] == value[-1:] == '"': value = unquote_header_value(value[1:-1]) result[name] = value return result def parse_options_header(value, multiple=False): """Parse a ``Content-Type`` like header into a tuple with the content type and the options: >>> parse_options_header('text/html; charset=utf8') ('text/html', {'charset': 'utf8'}) This should not be used to parse ``Cache-Control`` like headers that use a slightly different format. For these headers use the :func:`parse_dict_header` function. .. versionadded:: 0.5 :param value: the header to parse. :param multiple: Whether try to parse and return multiple MIME types :return: (mimetype, options) or (mimetype, options, mimetype, options, …) if multiple=True """ if not value: return '', {} result = [] value = "," + value.replace("\n", ",") while value: match = _option_header_start_mime_type.match(value) if not match: break result.append(match.group(1)) # mimetype options = {} # Parse options rest = match.group(2) while rest: optmatch = _option_header_piece_re.match(rest) if not optmatch: break option, encoding, _, option_value = optmatch.groups() option = unquote_header_value(option) if option_value is not None: option_value = unquote_header_value( option_value, option == 'filename') if encoding is not None: option_value = _unquote(option_value).decode(encoding) options[option] = option_value rest = rest[optmatch.end():] result.append(options) if multiple is False: return tuple(result) value = rest return tuple(result) if result else ('', {}) def parse_accept_header(value, cls=None): """Parses an HTTP Accept-* header. This does not implement a complete valid algorithm but one that supports at least value and quality extraction. Returns a new :class:`Accept` object (basically a list of ``(value, quality)`` tuples sorted by the quality with some additional accessor methods). The second parameter can be a subclass of :class:`Accept` that is created with the parsed values and returned. :param value: the accept header string to be parsed. :param cls: the wrapper class for the return value (can be :class:`Accept` or a subclass thereof) :return: an instance of `cls`. 
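    A short illustration (the header value is made up; ``best`` and the
    string lookup below come from the :class:`Accept` API)::

        accept = parse_accept_header('text/html, application/xml;q=0.9')
        accept.best                # 'text/html'
        accept['application/xml']  # 0.9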
""" if cls is None: cls = Accept if not value: return cls(None) result = [] for match in _accept_re.finditer(value): quality = match.group(2) if not quality: quality = 1 else: quality = max(min(float(quality), 1), 0) result.append((match.group(1), quality)) return cls(result) def parse_cache_control_header(value, on_update=None, cls=None): """Parse a cache control header. The RFC differs between response and request cache control, this method does not. It's your responsibility to not use the wrong control statements. .. versionadded:: 0.5 The `cls` was added. If not specified an immutable :class:`~werkzeug.datastructures.RequestCacheControl` is returned. :param value: a cache control header to be parsed. :param on_update: an optional callable that is called every time a value on the :class:`~werkzeug.datastructures.CacheControl` object is changed. :param cls: the class for the returned object. By default :class:`~werkzeug.datastructures.RequestCacheControl` is used. :return: a `cls` object. """ if cls is None: cls = RequestCacheControl if not value: return cls(None, on_update) return cls(parse_dict_header(value), on_update) def parse_set_header(value, on_update=None): """Parse a set-like header and return a :class:`~werkzeug.datastructures.HeaderSet` object: >>> hs = parse_set_header('token, "quoted value"') The return value is an object that treats the items case-insensitively and keeps the order of the items: >>> 'TOKEN' in hs True >>> hs.index('quoted value') 1 >>> hs HeaderSet(['token', 'quoted value']) To create a header from the :class:`HeaderSet` again, use the :func:`dump_header` function. :param value: a set header to be parsed. :param on_update: an optional callable that is called every time a value on the :class:`~werkzeug.datastructures.HeaderSet` object is changed. :return: a :class:`~werkzeug.datastructures.HeaderSet` """ if not value: return HeaderSet(None, on_update) return HeaderSet(parse_list_header(value), on_update) def parse_authorization_header(value): """Parse an HTTP basic/digest authorization header transmitted by the web browser. The return value is either `None` if the header was invalid or not given, otherwise an :class:`~werkzeug.datastructures.Authorization` object. :param value: the authorization header to parse. :return: a :class:`~werkzeug.datastructures.Authorization` object or `None`. """ if not value: return value = wsgi_to_bytes(value) try: auth_type, auth_info = value.split(None, 1) auth_type = auth_type.lower() except ValueError: return if auth_type == b'basic': try: username, password = base64.b64decode(auth_info).split(b':', 1) except Exception: return return Authorization('basic', {'username': bytes_to_wsgi(username), 'password': bytes_to_wsgi(password)}) elif auth_type == b'digest': auth_map = parse_dict_header(auth_info) for key in 'username', 'realm', 'nonce', 'uri', 'response': if key not in auth_map: return if 'qop' in auth_map: if not auth_map.get('nc') or not auth_map.get('cnonce'): return return Authorization('digest', auth_map) def parse_www_authenticate_header(value, on_update=None): """Parse an HTTP WWW-Authenticate header into a :class:`~werkzeug.datastructures.WWWAuthenticate` object. :param value: a WWW-Authenticate header to parse. :param on_update: an optional callable that is called every time a value on the :class:`~werkzeug.datastructures.WWWAuthenticate` object is changed. :return: a :class:`~werkzeug.datastructures.WWWAuthenticate` object. 
""" if not value: return WWWAuthenticate(on_update=on_update) try: auth_type, auth_info = value.split(None, 1) auth_type = auth_type.lower() except (ValueError, AttributeError): return WWWAuthenticate(value.strip().lower(), on_update=on_update) return WWWAuthenticate(auth_type, parse_dict_header(auth_info), on_update) def parse_if_range_header(value): """Parses an if-range header which can be an etag or a date. Returns a :class:`~werkzeug.datastructures.IfRange` object. .. versionadded:: 0.7 """ if not value: return IfRange() date = parse_date(value) if date is not None: return IfRange(date=date) # drop weakness information return IfRange(unquote_etag(value)[0]) def parse_range_header(value, make_inclusive=True): """Parses a range header into a :class:`~werkzeug.datastructures.Range` object. If the header is missing or malformed `None` is returned. `ranges` is a list of ``(start, stop)`` tuples where the ranges are non-inclusive. .. versionadded:: 0.7 """ if not value or '=' not in value: return None ranges = [] last_end = 0 units, rng = value.split('=', 1) units = units.strip().lower() for item in rng.split(','): item = item.strip() if '-' not in item: return None if item.startswith('-'): if last_end < 0: return None try: begin = int(item) except ValueError: return None end = None last_end = -1 elif '-' in item: begin, end = item.split('-', 1) begin = begin.strip() end = end.strip() if not begin.isdigit(): return None begin = int(begin) if begin < last_end or last_end < 0: return None if end: if not end.isdigit(): return None end = int(end) + 1 if begin >= end: return None else: end = None last_end = end ranges.append((begin, end)) return Range(units, ranges) def parse_content_range_header(value, on_update=None): """Parses a range header into a :class:`~werkzeug.datastructures.ContentRange` object or `None` if parsing is not possible. .. versionadded:: 0.7 :param value: a content range header to be parsed. :param on_update: an optional callable that is called every time a value on the :class:`~werkzeug.datastructures.ContentRange` object is changed. """ if value is None: return None try: units, rangedef = (value or '').strip().split(None, 1) except ValueError: return None if '/' not in rangedef: return None rng, length = rangedef.split('/', 1) if length == '*': length = None elif length.isdigit(): length = int(length) else: return None if rng == '*': return ContentRange(units, None, None, length, on_update=on_update) elif '-' not in rng: return None start, stop = rng.split('-', 1) try: start = int(start) stop = int(stop) + 1 except ValueError: return None if is_byte_range_valid(start, stop, length): return ContentRange(units, start, stop, length, on_update=on_update) def quote_etag(etag, weak=False): """Quote an etag. :param etag: the etag to quote. :param weak: set to `True` to tag it "weak". """ if '"' in etag: raise ValueError('invalid etag') etag = '"%s"' % etag if weak: etag = 'W/' + etag return etag def unquote_etag(etag): """Unquote a single etag: >>> unquote_etag('W/"bar"') ('bar', True) >>> unquote_etag('"bar"') ('bar', False) :param etag: the etag identifier to unquote. :return: a ``(etag, weak)`` tuple. """ if not etag: return None, None etag = etag.strip() weak = False if etag.startswith(('W/', 'w/')): weak = True etag = etag[2:] if etag[:1] == etag[-1:] == '"': etag = etag[1:-1] return etag, weak def parse_etags(value): """Parse an etag header. :param value: the tag header to parse :return: an :class:`~werkzeug.datastructures.ETags` object. 
""" if not value: return ETags() strong = [] weak = [] end = len(value) pos = 0 while pos < end: match = _etag_re.match(value, pos) if match is None: break is_weak, quoted, raw = match.groups() if raw == '*': return ETags(star_tag=True) elif quoted: raw = quoted if is_weak: weak.append(raw) else: strong.append(raw) pos = match.end() return ETags(strong, weak) def generate_etag(data): """Generate an etag for some data.""" return md5(data).hexdigest() def parse_date(value): """Parse one of the following date formats into a datetime object: .. sourcecode:: text Sun, 06 Nov 1994 08:49:37 GMT ; RFC 822, updated by RFC 1123 Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036 Sun Nov 6 08:49:37 1994 ; ANSI C's asctime() format If parsing fails the return value is `None`. :param value: a string with a supported date format. :return: a :class:`datetime.datetime` object. """ if value: t = parsedate_tz(value.strip()) if t is not None: try: year = t[0] # unfortunately that function does not tell us if two digit # years were part of the string, or if they were prefixed # with two zeroes. So what we do is to assume that 69-99 # refer to 1900, and everything below to 2000 if year >= 0 and year <= 68: year += 2000 elif year >= 69 and year <= 99: year += 1900 return datetime(*((year,) + t[1:7])) - \ timedelta(seconds=t[-1] or 0) except (ValueError, OverflowError): return None def _dump_date(d, delim): """Used for `http_date` and `cookie_date`.""" if d is None: d = gmtime() elif isinstance(d, datetime): d = d.utctimetuple() elif isinstance(d, (integer_types, float)): d = gmtime(d) return '%s, %02d%s%s%s%s %02d:%02d:%02d GMT' % ( ('Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun')[d.tm_wday], d.tm_mday, delim, ('Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun', 'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec')[d.tm_mon - 1], delim, str(d.tm_year), d.tm_hour, d.tm_min, d.tm_sec ) def cookie_date(expires=None): """Formats the time to ensure compatibility with Netscape's cookie standard. Accepts a floating point number expressed in seconds since the epoch in, a datetime object or a timetuple. All times in UTC. The :func:`parse_date` function can be used to parse such a date. Outputs a string in the format ``Wdy, DD-Mon-YYYY HH:MM:SS GMT``. :param expires: If provided that date is used, otherwise the current. """ return _dump_date(expires, '-') def http_date(timestamp=None): """Formats the time to match the RFC1123 date format. Accepts a floating point number expressed in seconds since the epoch in, a datetime object or a timetuple. All times in UTC. The :func:`parse_date` function can be used to parse such a date. Outputs a string in the format ``Wdy, DD Mon YYYY HH:MM:SS GMT``. :param timestamp: If provided that date is used, otherwise the current. """ return _dump_date(timestamp, ' ') def parse_age(value=None): """Parses a base-10 integer count of seconds into a timedelta. If parsing fails, the return value is `None`. :param value: a string consisting of an integer represented in base-10 :return: a :class:`datetime.timedelta` object or `None`. """ if not value: return None try: seconds = int(value) except ValueError: return None if seconds < 0: return None try: return timedelta(seconds=seconds) except OverflowError: return None def dump_age(age=None): """Formats the duration as a base-10 integer. :param age: should be an integer number of seconds, a :class:`datetime.timedelta` object, or, if the age is unknown, `None` (default). 
""" if age is None: return if isinstance(age, timedelta): # do the equivalent of Python 2.7's timedelta.total_seconds(), # but disregarding fractional seconds age = age.seconds + (age.days * 24 * 3600) age = int(age) if age < 0: raise ValueError('age cannot be negative') return str(age) def is_resource_modified(environ, etag=None, data=None, last_modified=None, ignore_if_range=True): """Convenience method for conditional requests. :param environ: the WSGI environment of the request to be checked. :param etag: the etag for the response for comparison. :param data: or alternatively the data of the response to automatically generate an etag using :func:`generate_etag`. :param last_modified: an optional date of the last modification. :param ignore_if_range: If `False`, `If-Range` header will be taken into account. :return: `True` if the resource was modified, otherwise `False`. """ if etag is None and data is not None: etag = generate_etag(data) elif data is not None: raise TypeError('both data and etag given') if environ['REQUEST_METHOD'] not in ('GET', 'HEAD'): return False unmodified = False if isinstance(last_modified, string_types): last_modified = parse_date(last_modified) # ensure that microsecond is zero because the HTTP spec does not transmit # that either and we might have some false positives. See issue #39 if last_modified is not None: last_modified = last_modified.replace(microsecond=0) if_range = None if not ignore_if_range and 'HTTP_RANGE' in environ: # http://tools.ietf.org/html/rfc7233#section-3.2 # A server MUST ignore an If-Range header field received in a request # that does not contain a Range header field. if_range = parse_if_range_header(environ.get('HTTP_IF_RANGE')) if if_range is not None and if_range.date is not None: modified_since = if_range.date else: modified_since = parse_date(environ.get('HTTP_IF_MODIFIED_SINCE')) if modified_since and last_modified and last_modified <= modified_since: unmodified = True if etag: etag, _ = unquote_etag(etag) if if_range is not None and if_range.etag is not None: unmodified = parse_etags(if_range.etag).contains(etag) else: if_none_match = parse_etags(environ.get('HTTP_IF_NONE_MATCH')) if if_none_match: # http://tools.ietf.org/html/rfc7232#section-3.2 # "A recipient MUST use the weak comparison function when comparing # entity-tags for If-None-Match" unmodified = if_none_match.contains_weak(etag) # https://tools.ietf.org/html/rfc7232#section-3.1 # "Origin server MUST use the strong comparison function when # comparing entity-tags for If-Match" if_match = parse_etags(environ.get('HTTP_IF_MATCH')) if if_match: unmodified = not if_match.is_strong(etag) return not unmodified def remove_entity_headers(headers, allowed=('expires', 'content-location')): """Remove all entity headers from a list or :class:`Headers` object. This operation works in-place. `Expires` and `Content-Location` headers are by default not removed. The reason for this is :rfc:`2616` section 10.3.5 which specifies some entity headers that should be sent. .. versionchanged:: 0.5 added `allowed` parameter. :param headers: a list or :class:`Headers` object. :param allowed: a list of headers that should still be allowed even though they are entity headers. """ allowed = set(x.lower() for x in allowed) headers[:] = [(key, value) for key, value in headers if not is_entity_header(key) or key.lower() in allowed] def remove_hop_by_hop_headers(headers): """Remove all HTTP/1.1 "Hop-by-Hop" headers from a list or :class:`Headers` object. This operation works in-place. .. 
versionadded:: 0.5 :param headers: a list or :class:`Headers` object. """ headers[:] = [(key, value) for key, value in headers if not is_hop_by_hop_header(key)] def is_entity_header(header): """Check if a header is an entity header. .. versionadded:: 0.5 :param header: the header to test. :return: `True` if it's an entity header, `False` otherwise. """ return header.lower() in _entity_headers def is_hop_by_hop_header(header): """Check if a header is an HTTP/1.1 "Hop-by-Hop" header. .. versionadded:: 0.5 :param header: the header to test. :return: `True` if it's an HTTP/1.1 "Hop-by-Hop" header, `False` otherwise. """ return header.lower() in _hop_by_hop_headers def parse_cookie(header, charset='utf-8', errors='replace', cls=None): """Parse a cookie. Either from a string or WSGI environ. Per default encoding errors are ignored. If you want a different behavior you can set `errors` to ``'replace'`` or ``'strict'``. In strict mode a :exc:`HTTPUnicodeError` is raised. .. versionchanged:: 0.5 This function now returns a :class:`TypeConversionDict` instead of a regular dict. The `cls` parameter was added. :param header: the header to be used to parse the cookie. Alternatively this can be a WSGI environment. :param charset: the charset for the cookie values. :param errors: the error behavior for the charset decoding. :param cls: an optional dict class to use. If this is not specified or `None` the default :class:`TypeConversionDict` is used. """ if isinstance(header, dict): header = header.get('HTTP_COOKIE', '') elif header is None: header = '' # If the value is an unicode string it's mangled through latin1. This # is done because on PEP 3333 on Python 3 all headers are assumed latin1 # which however is incorrect for cookies, which are sent in page encoding. # As a result we if isinstance(header, text_type): header = header.encode('latin1', 'replace') if cls is None: cls = TypeConversionDict def _parse_pairs(): for key, val in _cookie_parse_impl(header): key = to_unicode(key, charset, errors, allow_none_charset=True) val = to_unicode(val, charset, errors, allow_none_charset=True) yield try_coerce_native(key), val return cls(_parse_pairs()) def dump_cookie(key, value='', max_age=None, expires=None, path='/', domain=None, secure=False, httponly=False, charset='utf-8', sync_expires=True, max_size=4093, samesite=None): """Creates a new Set-Cookie header without the ``Set-Cookie`` prefix The parameters are the same as in the cookie Morsel object in the Python standard library but it accepts unicode data, too. On Python 3 the return value of this function will be a unicode string, on Python 2 it will be a native string. In both cases the return value is usually restricted to ascii as the vast majority of values are properly escaped, but that is no guarantee. If a unicode string is returned it's tunneled through latin1 as required by PEP 3333. The return value is not ASCII safe if the key contains unicode characters. This is technically against the specification but happens in the wild. It's strongly recommended to not use non-ASCII values for the keys. :param max_age: should be a number of seconds, or `None` (default) if the cookie should last only as long as the client's browser session. Additionally `timedelta` objects are accepted, too. :param expires: should be a `datetime` object or unix timestamp. :param path: limits the cookie to a given path, per default it will span the whole domain. :param domain: Use this if you want to set a cross-domain cookie. 
For example, ``domain=".example.com"`` will set a cookie that is readable by the domain ``www.example.com``, ``foo.example.com`` etc. Otherwise, a cookie will only be readable by the domain that set it. :param secure: The cookie will only be available via HTTPS :param httponly: disallow JavaScript to access the cookie. This is an extension to the cookie standard and probably not supported by all browsers. :param charset: the encoding for unicode values. :param sync_expires: automatically set expires if max_age is defined but expires not. :param max_size: Warn if the final header value exceeds this size. The default, 4093, should be safely `supported by most browsers `_. Set to 0 to disable this check. :param samesite: Limits the scope of the cookie such that it will only be attached to requests if those requests are "same-site". .. _`cookie`: http://browsercookielimits.squawky.net/ """ key = to_bytes(key, charset) value = to_bytes(value, charset) if path is not None: path = iri_to_uri(path, charset) domain = _make_cookie_domain(domain) if isinstance(max_age, timedelta): max_age = (max_age.days * 60 * 60 * 24) + max_age.seconds if expires is not None: if not isinstance(expires, string_types): expires = cookie_date(expires) elif max_age is not None and sync_expires: expires = to_bytes(cookie_date(time() + max_age)) samesite = samesite.title() if samesite else None if samesite not in ('Strict', 'Lax', None): raise ValueError("invalid SameSite value; must be 'Strict', 'Lax' or None") buf = [key + b'=' + _cookie_quote(value)] # XXX: In theory all of these parameters that are not marked with `None` # should be quoted. Because stdlib did not quote it before I did not # want to introduce quoting there now. for k, v, q in ((b'Domain', domain, True), (b'Expires', expires, False,), (b'Max-Age', max_age, False), (b'Secure', secure, None), (b'HttpOnly', httponly, None), (b'Path', path, False), (b'SameSite', samesite, False)): if q is None: if v: buf.append(k) continue if v is None: continue tmp = bytearray(k) if not isinstance(v, (bytes, bytearray)): v = to_bytes(text_type(v), charset) if q: v = _cookie_quote(v) tmp += b'=' + v buf.append(bytes(tmp)) # The return value will be an incorrectly encoded latin1 header on # Python 3 for consistency with the headers object and a bytestring # on Python 2 because that's how the API makes more sense. rv = b'; '.join(buf) if not PY2: rv = rv.decode('latin1') # Warn if the final value of the cookie is less than the limit. If the # cookie is too large, then it may be silently ignored, which can be quite # hard to debug. cookie_size = len(rv) if max_size and cookie_size > max_size: value_size = len(value) warnings.warn( 'The "{key}" cookie is too large: the value was {value_size} bytes' ' but the header required {extra_size} extra bytes. The final size' ' was {cookie_size} bytes but the limit is {max_size} bytes.' ' Browsers may silently ignore cookies larger than this.'.format( key=key, value_size=value_size, extra_size=cookie_size - value_size, cookie_size=cookie_size, max_size=max_size ), stacklevel=2 ) return rv def is_byte_range_valid(start, stop, length): """Checks if a given byte content range is valid for the given length. .. 
versionadded:: 0.7 """ if (start is None) != (stop is None): return False elif start is None: return length is None or length >= 0 elif length is None: return 0 <= start < stop elif start >= stop: return False return 0 <= start < length # circular dependency fun from werkzeug.datastructures import Accept, HeaderSet, ETags, Authorization, \ WWWAuthenticate, TypeConversionDict, IfRange, Range, ContentRange, \ RequestCacheControl # DEPRECATED # backwards compatible imports from werkzeug.datastructures import ( # noqa MIMEAccept, CharsetAccept, LanguageAccept, Headers ) from werkzeug.urls import iri_to_uri werkzeug-0.14.1/werkzeug/local.py000066400000000000000000000343311322225165500167560ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.local ~~~~~~~~~~~~~~ This module implements context-local objects. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import copy from functools import update_wrapper from werkzeug.wsgi import ClosingIterator from werkzeug._compat import PY2, implements_bool # since each thread has its own greenlet we can just use those as identifiers # for the context. If greenlets are not available we fall back to the # current thread ident depending on where it is. try: from greenlet import getcurrent as get_ident except ImportError: try: from thread import get_ident except ImportError: from _thread import get_ident def release_local(local): """Releases the contents of the local for the current context. This makes it possible to use locals without a manager. Example:: >>> loc = Local() >>> loc.foo = 42 >>> release_local(loc) >>> hasattr(loc, 'foo') False With this function one can release :class:`Local` objects as well as :class:`LocalStack` objects. However it is not possible to release data held by proxies that way, one always has to retain a reference to the underlying local object in order to be able to release it. .. versionadded:: 0.6.1 """ local.__release_local__() class Local(object): __slots__ = ('__storage__', '__ident_func__') def __init__(self): object.__setattr__(self, '__storage__', {}) object.__setattr__(self, '__ident_func__', get_ident) def __iter__(self): return iter(self.__storage__.items()) def __call__(self, proxy): """Create a proxy for a name.""" return LocalProxy(self, proxy) def __release_local__(self): self.__storage__.pop(self.__ident_func__(), None) def __getattr__(self, name): try: return self.__storage__[self.__ident_func__()][name] except KeyError: raise AttributeError(name) def __setattr__(self, name, value): ident = self.__ident_func__() storage = self.__storage__ try: storage[ident][name] = value except KeyError: storage[ident] = {name: value} def __delattr__(self, name): try: del self.__storage__[self.__ident_func__()][name] except KeyError: raise AttributeError(name) class LocalStack(object): """This class works similar to a :class:`Local` but keeps a stack of objects instead. This is best explained with an example:: >>> ls = LocalStack() >>> ls.push(42) >>> ls.top 42 >>> ls.push(23) >>> ls.top 23 >>> ls.pop() 23 >>> ls.top 42 They can be force released by using a :class:`LocalManager` or with the :func:`release_local` function but the correct way is to pop the item from the stack after using. When the stack is empty it will no longer be bound to the current context (and as such released). By calling the stack without arguments it returns a proxy that resolves to the topmost item on the stack. .. 
versionadded:: 0.6.1 """ def __init__(self): self._local = Local() def __release_local__(self): self._local.__release_local__() def _get__ident_func__(self): return self._local.__ident_func__ def _set__ident_func__(self, value): object.__setattr__(self._local, '__ident_func__', value) __ident_func__ = property(_get__ident_func__, _set__ident_func__) del _get__ident_func__, _set__ident_func__ def __call__(self): def _lookup(): rv = self.top if rv is None: raise RuntimeError('object unbound') return rv return LocalProxy(_lookup) def push(self, obj): """Pushes a new item to the stack""" rv = getattr(self._local, 'stack', None) if rv is None: self._local.stack = rv = [] rv.append(obj) return rv def pop(self): """Removes the topmost item from the stack, will return the old value or `None` if the stack was already empty. """ stack = getattr(self._local, 'stack', None) if stack is None: return None elif len(stack) == 1: release_local(self._local) return stack[-1] else: return stack.pop() @property def top(self): """The topmost item on the stack. If the stack is empty, `None` is returned. """ try: return self._local.stack[-1] except (AttributeError, IndexError): return None class LocalManager(object): """Local objects cannot manage themselves. For that you need a local manager. You can pass a local manager multiple locals or add them later by appending them to `manager.locals`. Every time the manager cleans up, it will clean up all the data left in the locals for this context. The `ident_func` parameter can be added to override the default ident function for the wrapped locals. .. versionchanged:: 0.6.1 Instead of a manager the :func:`release_local` function can be used as well. .. versionchanged:: 0.7 `ident_func` was added. """ def __init__(self, locals=None, ident_func=None): if locals is None: self.locals = [] elif isinstance(locals, Local): self.locals = [locals] else: self.locals = list(locals) if ident_func is not None: self.ident_func = ident_func for local in self.locals: object.__setattr__(local, '__ident_func__', ident_func) else: self.ident_func = get_ident def get_ident(self): """Return the context identifier the local objects use internally for this context. You cannot override this method to change the behavior but use it to link other context local objects (such as SQLAlchemy's scoped sessions) to the Werkzeug locals. .. versionchanged:: 0.7 You can pass a different ident function to the local manager that will then be propagated to all the locals passed to the constructor. """ return self.ident_func() def cleanup(self): """Manually clean up the data in the locals for this context. Call this at the end of the request or use `make_middleware()`. """ for local in self.locals: release_local(local) def make_middleware(self, app): """Wrap a WSGI application so that cleaning up happens after request end. """ def application(environ, start_response): return ClosingIterator(app(environ, start_response), self.cleanup) return application def middleware(self, func): """Like `make_middleware` but for decorating functions. Example usage:: @manager.middleware def application(environ, start_response): ... The difference to `make_middleware` is that the function passed will have all the arguments copied from the inner application (name, docstring, module). """ return update_wrapper(self.make_middleware(func), func) def __repr__(self): return '<%s storages: %d>' % ( self.__class__.__name__, len(self.locals) ) @implements_bool class LocalProxy(object): """Acts as a proxy for a werkzeug local. 
Forwards all operations to a proxied object. The only operations not supported for forwarding are right handed operands and any kind of assignment. Example usage:: from werkzeug.local import Local l = Local() # these are proxies request = l('request') user = l('user') from werkzeug.local import LocalStack _response_local = LocalStack() # this is a proxy response = _response_local() Whenever something is bound to l.user / l.request the proxy objects will forward all operations. If no object is bound a :exc:`RuntimeError` will be raised. To create proxies to :class:`Local` or :class:`LocalStack` objects, call the object as shown above. If you want to have a proxy to an object looked up by a function, you can (as of Werkzeug 0.6.1) pass a function to the :class:`LocalProxy` constructor:: session = LocalProxy(lambda: get_current_request().session) .. versionchanged:: 0.6.1 The class can be instantiated with a callable as well now. """ __slots__ = ('__local', '__dict__', '__name__', '__wrapped__') def __init__(self, local, name=None): object.__setattr__(self, '_LocalProxy__local', local) object.__setattr__(self, '__name__', name) if callable(local) and not hasattr(local, '__release_local__'): # "local" is a callable that is not an instance of Local or # LocalManager: mark it as a wrapped function. object.__setattr__(self, '__wrapped__', local) def _get_current_object(self): """Return the current object. This is useful if you want the real object behind the proxy at a time for performance reasons or because you want to pass the object into a different context. """ if not hasattr(self.__local, '__release_local__'): return self.__local() try: return getattr(self.__local, self.__name__) except AttributeError: raise RuntimeError('no object bound to %s' % self.__name__) @property def __dict__(self): try: return self._get_current_object().__dict__ except RuntimeError: raise AttributeError('__dict__') def __repr__(self): try: obj = self._get_current_object() except RuntimeError: return '<%s unbound>' % self.__class__.__name__ return repr(obj) def __bool__(self): try: return bool(self._get_current_object()) except RuntimeError: return False def __unicode__(self): try: return unicode(self._get_current_object()) # noqa except RuntimeError: return repr(self) def __dir__(self): try: return dir(self._get_current_object()) except RuntimeError: return [] def __getattr__(self, name): if name == '__members__': return dir(self._get_current_object()) return getattr(self._get_current_object(), name) def __setitem__(self, key, value): self._get_current_object()[key] = value def __delitem__(self, key): del self._get_current_object()[key] if PY2: __getslice__ = lambda x, i, j: x._get_current_object()[i:j] def __setslice__(self, i, j, seq): self._get_current_object()[i:j] = seq def __delslice__(self, i, j): del self._get_current_object()[i:j] __setattr__ = lambda x, n, v: setattr(x._get_current_object(), n, v) __delattr__ = lambda x, n: delattr(x._get_current_object(), n) __str__ = lambda x: str(x._get_current_object()) __lt__ = lambda x, o: x._get_current_object() < o __le__ = lambda x, o: x._get_current_object() <= o __eq__ = lambda x, o: x._get_current_object() == o __ne__ = lambda x, o: x._get_current_object() != o __gt__ = lambda x, o: x._get_current_object() > o __ge__ = lambda x, o: x._get_current_object() >= o __cmp__ = lambda x, o: cmp(x._get_current_object(), o) # noqa __hash__ = lambda x: hash(x._get_current_object()) __call__ = lambda x, *a, **kw: x._get_current_object()(*a, **kw) __len__ = lambda x: 
len(x._get_current_object()) __getitem__ = lambda x, i: x._get_current_object()[i] __iter__ = lambda x: iter(x._get_current_object()) __contains__ = lambda x, i: i in x._get_current_object() __add__ = lambda x, o: x._get_current_object() + o __sub__ = lambda x, o: x._get_current_object() - o __mul__ = lambda x, o: x._get_current_object() * o __floordiv__ = lambda x, o: x._get_current_object() // o __mod__ = lambda x, o: x._get_current_object() % o __divmod__ = lambda x, o: x._get_current_object().__divmod__(o) __pow__ = lambda x, o: x._get_current_object() ** o __lshift__ = lambda x, o: x._get_current_object() << o __rshift__ = lambda x, o: x._get_current_object() >> o __and__ = lambda x, o: x._get_current_object() & o __xor__ = lambda x, o: x._get_current_object() ^ o __or__ = lambda x, o: x._get_current_object() | o __div__ = lambda x, o: x._get_current_object().__div__(o) __truediv__ = lambda x, o: x._get_current_object().__truediv__(o) __neg__ = lambda x: -(x._get_current_object()) __pos__ = lambda x: +(x._get_current_object()) __abs__ = lambda x: abs(x._get_current_object()) __invert__ = lambda x: ~(x._get_current_object()) __complex__ = lambda x: complex(x._get_current_object()) __int__ = lambda x: int(x._get_current_object()) __long__ = lambda x: long(x._get_current_object()) # noqa __float__ = lambda x: float(x._get_current_object()) __oct__ = lambda x: oct(x._get_current_object()) __hex__ = lambda x: hex(x._get_current_object()) __index__ = lambda x: x._get_current_object().__index__() __coerce__ = lambda x, o: x._get_current_object().__coerce__(x, o) __enter__ = lambda x: x._get_current_object().__enter__() __exit__ = lambda x, *a, **kw: x._get_current_object().__exit__(*a, **kw) __radd__ = lambda x, o: o + x._get_current_object() __rsub__ = lambda x, o: o - x._get_current_object() __rmul__ = lambda x, o: o * x._get_current_object() __rdiv__ = lambda x, o: o / x._get_current_object() if PY2: __rtruediv__ = lambda x, o: x._get_current_object().__rtruediv__(o) else: __rtruediv__ = __rdiv__ __rfloordiv__ = lambda x, o: o // x._get_current_object() __rmod__ = lambda x, o: o % x._get_current_object() __rdivmod__ = lambda x, o: x._get_current_object().__rdivmod__(o) __copy__ = lambda x: copy.copy(x._get_current_object()) __deepcopy__ = lambda x, memo: copy.deepcopy(x._get_current_object(), memo) werkzeug-0.14.1/werkzeug/posixemulation.py000066400000000000000000000066771322225165500207600ustar00rootroot00000000000000# -*- coding: utf-8 -*- r""" werkzeug.posixemulation ~~~~~~~~~~~~~~~~~~~~~~~ Provides a POSIX emulation for some features that are relevant to web applications. The main purpose is to simplify support for systems such as Windows NT that are not 100% POSIX compatible. Currently this only implements a :func:`rename` function that follows POSIX semantics. Eg: if the target file already exists it will be replaced without asking. This module was introduced in 0.6.1 and is not a public interface. It might become one in later versions of Werkzeug. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
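    Example (paths are illustrative; semantics match :func:`os.rename`
    except that an existing target is replaced on every platform)::

        from werkzeug.posixemulation import rename

        rename('settings.tmp', 'settings.ini')  # overwrites settings.ini if present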
""" import sys import os import errno import time import random from ._compat import to_unicode from .filesystem import get_filesystem_encoding can_rename_open_file = False if os.name == 'nt': # pragma: no cover _rename = lambda src, dst: False _rename_atomic = lambda src, dst: False try: import ctypes _MOVEFILE_REPLACE_EXISTING = 0x1 _MOVEFILE_WRITE_THROUGH = 0x8 _MoveFileEx = ctypes.windll.kernel32.MoveFileExW def _rename(src, dst): src = to_unicode(src, get_filesystem_encoding()) dst = to_unicode(dst, get_filesystem_encoding()) if _rename_atomic(src, dst): return True retry = 0 rv = False while not rv and retry < 100: rv = _MoveFileEx(src, dst, _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH) if not rv: time.sleep(0.001) retry += 1 return rv # new in Vista and Windows Server 2008 _CreateTransaction = ctypes.windll.ktmw32.CreateTransaction _CommitTransaction = ctypes.windll.ktmw32.CommitTransaction _MoveFileTransacted = ctypes.windll.kernel32.MoveFileTransactedW _CloseHandle = ctypes.windll.kernel32.CloseHandle can_rename_open_file = True def _rename_atomic(src, dst): ta = _CreateTransaction(None, 0, 0, 0, 0, 1000, 'Werkzeug rename') if ta == -1: return False try: retry = 0 rv = False while not rv and retry < 100: rv = _MoveFileTransacted(src, dst, None, None, _MOVEFILE_REPLACE_EXISTING | _MOVEFILE_WRITE_THROUGH, ta) if rv: rv = _CommitTransaction(ta) break else: time.sleep(0.001) retry += 1 return rv finally: _CloseHandle(ta) except Exception: pass def rename(src, dst): # Try atomic or pseudo-atomic rename if _rename(src, dst): return # Fall back to "move away and replace" try: os.rename(src, dst) except OSError as e: if e.errno != errno.EEXIST: raise old = "%s-%08x" % (dst, random.randint(0, sys.maxint)) os.rename(dst, old) os.rename(src, dst) try: os.unlink(old) except Exception: pass else: rename = os.rename can_rename_open_file = True werkzeug-0.14.1/werkzeug/routing.py000066400000000000000000002032161322225165500173530ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.routing ~~~~~~~~~~~~~~~~ When it comes to combining multiple controller or view functions (however you want to call them) you need a dispatcher. A simple way would be applying regular expression tests on the ``PATH_INFO`` and calling registered callback functions that return the value then. This module implements a much more powerful system than simple regular expression matching because it can also convert values in the URLs and build URLs. Here a simple example that creates an URL map for an application with two subdomains (www and kb) and some URL rules: >>> m = Map([ ... # Static URLs ... Rule('/', endpoint='static/index'), ... Rule('/about', endpoint='static/about'), ... Rule('/help', endpoint='static/help'), ... # Knowledge Base ... Subdomain('kb', [ ... Rule('/', endpoint='kb/index'), ... Rule('/browse/', endpoint='kb/browse'), ... Rule('/browse//', endpoint='kb/browse'), ... Rule('/browse//', endpoint='kb/browse') ... ]) ... ], default_subdomain='www') If the application doesn't use subdomains it's perfectly fine to not set the default subdomain and not use the `Subdomain` rule factory. The endpoint in the rules can be anything, for example import paths or unique identifiers. The WSGI application can use those endpoints to get the handler for that URL. It doesn't have to be a string at all but it's recommended. 
Now it's possible to create a URL adapter for one of the subdomains and build URLs: >>> c = m.bind('example.com') >>> c.build("kb/browse", dict(id=42)) 'http://kb.example.com/browse/42/' >>> c.build("kb/browse", dict()) 'http://kb.example.com/browse/' >>> c.build("kb/browse", dict(id=42, page=3)) 'http://kb.example.com/browse/42/3' >>> c.build("static/about") '/about' >>> c.build("static/index", force_external=True) 'http://www.example.com/' >>> c = m.bind('example.com', subdomain='kb') >>> c.build("static/about") 'http://www.example.com/about' The first argument to bind is the server name *without* the subdomain. Per default it will assume that the script is mounted on the root, but often that's not the case so you can provide the real mount point as second argument: >>> c = m.bind('example.com', '/applications/example') The third argument can be the subdomain, if not given the default subdomain is used. For more details about binding have a look at the documentation of the `MapAdapter`. And here is how you can match URLs: >>> c = m.bind('example.com') >>> c.match("/") ('static/index', {}) >>> c.match("/about") ('static/about', {}) >>> c = m.bind('example.com', '/', 'kb') >>> c.match("/") ('kb/index', {}) >>> c.match("/browse/42/23") ('kb/browse', {'id': 42, 'page': 23}) If matching fails you get a `NotFound` exception, if the rule thinks it's a good idea to redirect (for example because the URL was defined to have a slash at the end but the request was missing that slash) it will raise a `RequestRedirect` exception. Both are subclasses of the `HTTPException` so you can use those errors as responses in the application. If matching succeeded but the URL rule was incompatible to the given method (for example there were only rules for `GET` and `HEAD` and routing system tried to match a `POST` request) a `MethodNotAllowed` exception is raised. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import difflib import re import uuid import posixpath from pprint import pformat from threading import Lock from werkzeug.urls import url_encode, url_quote, url_join from werkzeug.utils import redirect, format_string from werkzeug.exceptions import HTTPException, NotFound, MethodNotAllowed, \ BadHost from werkzeug._internal import _get_environ, _encode_idna from werkzeug._compat import itervalues, iteritems, to_unicode, to_bytes, \ text_type, string_types, native_string_result, \ implements_to_string, wsgi_decoding_dance from werkzeug.datastructures import ImmutableDict, MultiDict from werkzeug.utils import cached_property _rule_re = re.compile(r''' (?P[^<]*) # static rule data < (?: (?P[a-zA-Z_][a-zA-Z0-9_]*) # converter name (?:\((?P.*?)\))? # converter arguments \: # variable delimiter )? (?P[a-zA-Z_][a-zA-Z0-9_]*) # variable name > ''', re.VERBOSE) _simple_rule_re = re.compile(r'<([^>]+)>') _converter_args_re = re.compile(r''' ((?P\w+)\s*=\s*)? 
(?P True|False| \d+.\d+| \d+.| \d+| [\w\d_.]+| [urUR]?(?P"[^"]*?"|'[^']*') )\s*, ''', re.VERBOSE | re.UNICODE) _PYTHON_CONSTANTS = { 'None': None, 'True': True, 'False': False } def _pythonize(value): if value in _PYTHON_CONSTANTS: return _PYTHON_CONSTANTS[value] for convert in int, float: try: return convert(value) except ValueError: pass if value[:1] == value[-1:] and value[0] in '"\'': value = value[1:-1] return text_type(value) def parse_converter_args(argstr): argstr += ',' args = [] kwargs = {} for item in _converter_args_re.finditer(argstr): value = item.group('stringval') if value is None: value = item.group('value') value = _pythonize(value) if not item.group('name'): args.append(value) else: name = item.group('name') kwargs[name] = value return tuple(args), kwargs def parse_rule(rule): """Parse a rule and return it as generator. Each iteration yields tuples in the form ``(converter, arguments, variable)``. If the converter is `None` it's a static url part, otherwise it's a dynamic one. :internal: """ pos = 0 end = len(rule) do_match = _rule_re.match used_names = set() while pos < end: m = do_match(rule, pos) if m is None: break data = m.groupdict() if data['static']: yield None, None, data['static'] variable = data['variable'] converter = data['converter'] or 'default' if variable in used_names: raise ValueError('variable name %r used twice.' % variable) used_names.add(variable) yield converter, data['args'] or None, variable pos = m.end() if pos < end: remaining = rule[pos:] if '>' in remaining or '<' in remaining: raise ValueError('malformed url rule: %r' % rule) yield None, None, remaining class RoutingException(Exception): """Special exceptions that require the application to redirect, notifying about missing urls, etc. :internal: """ class RequestRedirect(HTTPException, RoutingException): """Raise if the map requests a redirect. This is for example the case if `strict_slashes` are activated and an url that requires a trailing slash. The attribute `new_url` contains the absolute destination url. """ code = 301 def __init__(self, new_url): RoutingException.__init__(self, new_url) self.new_url = new_url def get_response(self, environ): return redirect(self.new_url, self.code) class RequestSlash(RoutingException): """Internal exception.""" class RequestAliasRedirect(RoutingException): """This rule is an alias and wants to redirect to the canonical URL.""" def __init__(self, matched_values): self.matched_values = matched_values @implements_to_string class BuildError(RoutingException, LookupError): """Raised if the build system cannot find a URL for an endpoint with the values provided. 
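    For example (the map below is made up for illustration)::

        m = Map([Rule('/user/<name>', endpoint='user')])
        urls = m.bind('example.com')
        urls.build('missing')   # raises BuildError: no such endpoint
        urls.build('user')      # also raises it: 'name' has no value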
""" def __init__(self, endpoint, values, method, adapter=None): LookupError.__init__(self, endpoint, values, method) self.endpoint = endpoint self.values = values self.method = method self.adapter = adapter @cached_property def suggested(self): return self.closest_rule(self.adapter) def closest_rule(self, adapter): def _score_rule(rule): return sum([ 0.98 * difflib.SequenceMatcher( None, rule.endpoint, self.endpoint ).ratio(), 0.01 * bool(set(self.values or ()).issubset(rule.arguments)), 0.01 * bool(rule.methods and self.method in rule.methods) ]) if adapter and adapter.map._rules: return max(adapter.map._rules, key=_score_rule) def __str__(self): message = [] message.append('Could not build url for endpoint %r' % self.endpoint) if self.method: message.append(' (%r)' % self.method) if self.values: message.append(' with values %r' % sorted(self.values.keys())) message.append('.') if self.suggested: if self.endpoint == self.suggested.endpoint: if self.method and self.method not in self.suggested.methods: message.append(' Did you mean to use methods %r?' % sorted( self.suggested.methods )) missing_values = self.suggested.arguments.union( set(self.suggested.defaults or ()) ) - set(self.values.keys()) if missing_values: message.append( ' Did you forget to specify values %r?' % sorted(missing_values) ) else: message.append( ' Did you mean %r instead?' % self.suggested.endpoint ) return u''.join(message) class ValidationError(ValueError): """Validation error. If a rule converter raises this exception the rule does not match the current URL and the next URL is tried. """ class RuleFactory(object): """As soon as you have more complex URL setups it's a good idea to use rule factories to avoid repetitive tasks. Some of them are builtin, others can be added by subclassing `RuleFactory` and overriding `get_rules`. """ def get_rules(self, map): """Subclasses of `RuleFactory` have to override this method and return an iterable of rules.""" raise NotImplementedError() class Subdomain(RuleFactory): """All URLs provided by this factory have the subdomain set to a specific domain. For example if you want to use the subdomain for the current language this can be a good setup:: url_map = Map([ Rule('/', endpoint='#select_language'), Subdomain('', [ Rule('/', endpoint='index'), Rule('/about', endpoint='about'), Rule('/help', endpoint='help') ]) ]) All the rules except for the ``'#select_language'`` endpoint will now listen on a two letter long subdomain that holds the language code for the current request. """ def __init__(self, subdomain, rules): self.subdomain = subdomain self.rules = rules def get_rules(self, map): for rulefactory in self.rules: for rule in rulefactory.get_rules(map): rule = rule.empty() rule.subdomain = self.subdomain yield rule class Submount(RuleFactory): """Like `Subdomain` but prefixes the URL rule with a given string:: url_map = Map([ Rule('/', endpoint='index'), Submount('/blog', [ Rule('/', endpoint='blog/index'), Rule('/entry/', endpoint='blog/show') ]) ]) Now the rule ``'blog/show'`` matches ``/blog/entry/``. """ def __init__(self, path, rules): self.path = path.rstrip('/') self.rules = rules def get_rules(self, map): for rulefactory in self.rules: for rule in rulefactory.get_rules(map): rule = rule.empty() rule.rule = self.path + rule.rule yield rule class EndpointPrefix(RuleFactory): """Prefixes all endpoints (which must be strings for this factory) with another string. 
This can be useful for sub applications:: url_map = Map([ Rule('/', endpoint='index'), EndpointPrefix('blog/', [Submount('/blog', [ Rule('/', endpoint='index'), Rule('/entry/', endpoint='show') ])]) ]) """ def __init__(self, prefix, rules): self.prefix = prefix self.rules = rules def get_rules(self, map): for rulefactory in self.rules: for rule in rulefactory.get_rules(map): rule = rule.empty() rule.endpoint = self.prefix + rule.endpoint yield rule class RuleTemplate(object): """Returns copies of the rules wrapped and expands string templates in the endpoint, rule, defaults or subdomain sections. Here a small example for such a rule template:: from werkzeug.routing import Map, Rule, RuleTemplate resource = RuleTemplate([ Rule('/$name/', endpoint='$name.list'), Rule('/$name/', endpoint='$name.show') ]) url_map = Map([resource(name='user'), resource(name='page')]) When a rule template is called the keyword arguments are used to replace the placeholders in all the string parameters. """ def __init__(self, rules): self.rules = list(rules) def __call__(self, *args, **kwargs): return RuleTemplateFactory(self.rules, dict(*args, **kwargs)) class RuleTemplateFactory(RuleFactory): """A factory that fills in template variables into rules. Used by `RuleTemplate` internally. :internal: """ def __init__(self, rules, context): self.rules = rules self.context = context def get_rules(self, map): for rulefactory in self.rules: for rule in rulefactory.get_rules(map): new_defaults = subdomain = None if rule.defaults: new_defaults = {} for key, value in iteritems(rule.defaults): if isinstance(value, string_types): value = format_string(value, self.context) new_defaults[key] = value if rule.subdomain is not None: subdomain = format_string(rule.subdomain, self.context) new_endpoint = rule.endpoint if isinstance(new_endpoint, string_types): new_endpoint = format_string(new_endpoint, self.context) yield Rule( format_string(rule.rule, self.context), new_defaults, subdomain, rule.methods, rule.build_only, new_endpoint, rule.strict_slashes ) @implements_to_string class Rule(RuleFactory): """A Rule represents one URL pattern. There are some options for `Rule` that change the way it behaves and are passed to the `Rule` constructor. Note that besides the rule-string all arguments *must* be keyword arguments in order to not break the application on Werkzeug upgrades. `string` Rule strings basically are just normal URL paths with placeholders in the format ```` where the converter and the arguments are optional. If no converter is defined the `default` converter is used which means `string` in the normal configuration. URL rules that end with a slash are branch URLs, others are leaves. If you have `strict_slashes` enabled (which is the default), all branch URLs that are matched without a trailing slash will trigger a redirect to the same URL with the missing slash appended. The converters are defined on the `Map`. `endpoint` The endpoint for this rule. This can be anything. A reference to a function, a string, a number etc. The preferred way is using a string because the endpoint is used for URL generation. `defaults` An optional dict with defaults for other rules with the same endpoint. This is a bit tricky but useful if you want to have unique URLs:: url_map = Map([ Rule('/all/', defaults={'page': 1}, endpoint='all_entries'), Rule('/all/page/', endpoint='all_entries') ]) If a user now visits ``http://example.com/all/page/1`` he will be redirected to ``http://example.com/all/``. 
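        For illustration, with ``redirect_defaults`` left enabled (the
        default), matching the longer form raises a :exc:`RequestRedirect`
        to the canonical URL, while other page numbers match normally (the
        bind arguments are placeholders)::

            urls = url_map.bind('example.com', '/')
            urls.match('/all/page/1')   # raises RequestRedirect to http://example.com/all/
            urls.match('/all/page/2')   # -> ('all_entries', {'page': 2})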
If `redirect_defaults` is disabled on the `Map` instance this will only affect the URL generation. `subdomain` The subdomain rule string for this rule. If not specified the rule only matches for the `default_subdomain` of the map. If the map is not bound to a subdomain this feature is disabled. Can be useful if you want to have user profiles on different subdomains and all subdomains are forwarded to your application:: url_map = Map([ Rule('/', subdomain='', endpoint='user/homepage'), Rule('/stats', subdomain='', endpoint='user/stats') ]) `methods` A sequence of http methods this rule applies to. If not specified, all methods are allowed. For example this can be useful if you want different endpoints for `POST` and `GET`. If methods are defined and the path matches but the method matched against is not in this list or in the list of another rule for that path the error raised is of the type `MethodNotAllowed` rather than `NotFound`. If `GET` is present in the list of methods and `HEAD` is not, `HEAD` is added automatically. .. versionchanged:: 0.6.1 `HEAD` is now automatically added to the methods if `GET` is present. The reason for this is that existing code often did not work properly in servers not rewriting `HEAD` to `GET` automatically and it was not documented how `HEAD` should be treated. This was considered a bug in Werkzeug because of that. `strict_slashes` Override the `Map` setting for `strict_slashes` only for this rule. If not specified the `Map` setting is used. `build_only` Set this to True and the rule will never match but will create a URL that can be build. This is useful if you have resources on a subdomain or folder that are not handled by the WSGI application (like static data) `redirect_to` If given this must be either a string or callable. In case of a callable it's called with the url adapter that triggered the match and the values of the URL as keyword arguments and has to return the target for the redirect, otherwise it has to be a string with placeholders in rule syntax:: def foo_with_slug(adapter, id): # ask the database for the slug for the old id. this of # course has nothing to do with werkzeug. return 'foo/' + Foo.get_slug_for_id(id) url_map = Map([ Rule('/foo/', endpoint='foo'), Rule('/some/old/url/', redirect_to='foo/'), Rule('/other/old/url/', redirect_to=foo_with_slug) ]) When the rule is matched the routing system will raise a `RequestRedirect` exception with the target for the redirect. Keep in mind that the URL will be joined against the URL root of the script so don't use a leading slash on the target URL unless you really mean root of that domain. `alias` If enabled this rule serves as an alias for another rule with the same endpoint and arguments. `host` If provided and the URL map has host matching enabled this can be used to provide a match rule for the whole host. This also means that the subdomain feature is disabled. .. versionadded:: 0.7 The `alias` and `host` parameters were added. 
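    A minimal sketch of the ``host`` option (the host names are placeholders);
    note that it only takes effect if ``host_matching=True`` is set on the
    :class:`Map`::

        url_map = Map([
            Rule('/', endpoint='www_index', host='www.example.com'),
            Rule('/', endpoint='help_index', host='help.example.com')
        ], host_matching=True)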
""" def __init__(self, string, defaults=None, subdomain=None, methods=None, build_only=False, endpoint=None, strict_slashes=None, redirect_to=None, alias=False, host=None): if not string.startswith('/'): raise ValueError('urls must start with a leading slash') self.rule = string self.is_leaf = not string.endswith('/') self.map = None self.strict_slashes = strict_slashes self.subdomain = subdomain self.host = host self.defaults = defaults self.build_only = build_only self.alias = alias if methods is None: self.methods = None else: if isinstance(methods, str): raise TypeError('param `methods` should be `Iterable[str]`, not `str`') self.methods = set([x.upper() for x in methods]) if 'HEAD' not in self.methods and 'GET' in self.methods: self.methods.add('HEAD') self.endpoint = endpoint self.redirect_to = redirect_to if defaults: self.arguments = set(map(str, defaults)) else: self.arguments = set() self._trace = self._converters = self._regex = self._argument_weights = None def empty(self): """ Return an unbound copy of this rule. This can be useful if want to reuse an already bound URL for another map. See ``get_empty_kwargs`` to override what keyword arguments are provided to the new copy. """ return type(self)(self.rule, **self.get_empty_kwargs()) def get_empty_kwargs(self): """ Provides kwargs for instantiating empty copy with empty() Use this method to provide custom keyword arguments to the subclass of ``Rule`` when calling ``some_rule.empty()``. Helpful when the subclass has custom keyword arguments that are needed at instantiation. Must return a ``dict`` that will be provided as kwargs to the new instance of ``Rule``, following the initial ``self.rule`` value which is always provided as the first, required positional argument. """ defaults = None if self.defaults: defaults = dict(self.defaults) return dict(defaults=defaults, subdomain=self.subdomain, methods=self.methods, build_only=self.build_only, endpoint=self.endpoint, strict_slashes=self.strict_slashes, redirect_to=self.redirect_to, alias=self.alias, host=self.host) def get_rules(self, map): yield self def refresh(self): """Rebinds and refreshes the URL. Call this if you modified the rule in place. :internal: """ self.bind(self.map, rebind=True) def bind(self, map, rebind=False): """Bind the url to a map and create a regular expression based on the information from the rule itself and the defaults from the map. :internal: """ if self.map is not None and not rebind: raise RuntimeError('url rule %r already bound to map %r' % (self, self.map)) self.map = map if self.strict_slashes is None: self.strict_slashes = map.strict_slashes if self.subdomain is None: self.subdomain = map.default_subdomain self.compile() def get_converter(self, variable_name, converter_name, args, kwargs): """Looks up the converter for the given parameter. .. 
versionadded:: 0.9 """ if converter_name not in self.map.converters: raise LookupError('the converter %r does not exist' % converter_name) return self.map.converters[converter_name](self.map, *args, **kwargs) def compile(self): """Compiles the regular expression and stores it.""" assert self.map is not None, 'rule not bound' if self.map.host_matching: domain_rule = self.host or '' else: domain_rule = self.subdomain or '' self._trace = [] self._converters = {} self._static_weights = [] self._argument_weights = [] regex_parts = [] def _build_regex(rule): index = 0 for converter, arguments, variable in parse_rule(rule): if converter is None: regex_parts.append(re.escape(variable)) self._trace.append((False, variable)) for part in variable.split('/'): if part: self._static_weights.append((index, -len(part))) else: if arguments: c_args, c_kwargs = parse_converter_args(arguments) else: c_args = () c_kwargs = {} convobj = self.get_converter( variable, converter, c_args, c_kwargs) regex_parts.append('(?P<%s>%s)' % (variable, convobj.regex)) self._converters[variable] = convobj self._trace.append((True, variable)) self._argument_weights.append(convobj.weight) self.arguments.add(str(variable)) index = index + 1 _build_regex(domain_rule) regex_parts.append('\\|') self._trace.append((False, '|')) _build_regex(self.is_leaf and self.rule or self.rule.rstrip('/')) if not self.is_leaf: self._trace.append((False, '/')) if self.build_only: return regex = r'^%s%s$' % ( u''.join(regex_parts), (not self.is_leaf or not self.strict_slashes) and '(?/?)' or '' ) self._regex = re.compile(regex, re.UNICODE) def match(self, path, method=None): """Check if the rule matches a given path. Path is a string in the form ``"subdomain|/path"`` and is assembled by the map. If the map is doing host matching the subdomain part will be the host instead. If the rule matches a dict with the converted values is returned, otherwise the return value is `None`. :internal: """ if not self.build_only: m = self._regex.search(path) if m is not None: groups = m.groupdict() # we have a folder like part of the url without a trailing # slash and strict slashes enabled. raise an exception that # tells the map to redirect to the same url but with a # trailing slash if self.strict_slashes and not self.is_leaf and \ not groups.pop('__suffix__') and \ (method is None or self.methods is None or method in self.methods): raise RequestSlash() # if we are not in strict slashes mode we have to remove # a __suffix__ elif not self.strict_slashes: del groups['__suffix__'] result = {} for name, value in iteritems(groups): try: value = self._converters[name].to_python(value) except ValidationError: return result[str(name)] = value if self.defaults: result.update(self.defaults) if self.alias and self.map.redirect_defaults: raise RequestAliasRedirect(result) return result def build(self, values, append_unknown=True): """Assembles the relative url for that rule and the subdomain. If building doesn't work for some reasons `None` is returned. :internal: """ tmp = [] add = tmp.append processed = set(self.arguments) for is_dynamic, data in self._trace: if is_dynamic: try: add(self._converters[data].to_url(values[data])) except ValidationError: return processed.add(data) else: add(url_quote(to_bytes(data, self.map.charset), safe='/:|+')) domain_part, url = (u''.join(tmp)).split(u'|', 1) if append_unknown: query_vars = MultiDict(values) for key in processed: if key in query_vars: del query_vars[key] if query_vars: url += u'?' 
+ url_encode(query_vars, charset=self.map.charset, sort=self.map.sort_parameters, key=self.map.sort_key) return domain_part, url def provides_defaults_for(self, rule): """Check if this rule has defaults for a given rule. :internal: """ return not self.build_only and self.defaults and \ self.endpoint == rule.endpoint and self != rule and \ self.arguments == rule.arguments def suitable_for(self, values, method=None): """Check if the dict of values has enough data for url generation. :internal: """ # if a method was given explicitly and that method is not supported # by this rule, this rule is not suitable. if method is not None and self.methods is not None \ and method not in self.methods: return False defaults = self.defaults or () # all arguments required must be either in the defaults dict or # the value dictionary otherwise it's not suitable for key in self.arguments: if key not in defaults and key not in values: return False # in case defaults are given we ensure taht either the value was # skipped or the value is the same as the default value. if defaults: for key, value in iteritems(defaults): if key in values and value != values[key]: return False return True def match_compare_key(self): """The match compare key for sorting. Current implementation: 1. rules without any arguments come first for performance reasons only as we expect them to match faster and some common ones usually don't have any arguments (index pages etc.) 2. rules with more static parts come first so the second argument is the negative length of the number of the static weights. 3. we order by static weights, which is a combination of index and length 4. The more complex rules come first so the next argument is the negative length of the number of argument weights. 5. lastly we order by the actual argument weights. :internal: """ return bool(self.arguments), -len(self._static_weights), self._static_weights,\ -len(self._argument_weights), self._argument_weights def build_compare_key(self): """The build compare key for sorting. :internal: """ return self.alias and 1 or 0, -len(self.arguments), \ -len(self.defaults or ()) def __eq__(self, other): return self.__class__ is other.__class__ and \ self._trace == other._trace __hash__ = None def __ne__(self, other): return not self.__eq__(other) def __str__(self): return self.rule @native_string_result def __repr__(self): if self.map is None: return u'<%s (unbound)>' % self.__class__.__name__ tmp = [] for is_dynamic, data in self._trace: if is_dynamic: tmp.append(u'<%s>' % data) else: tmp.append(data) return u'<%s %s%s -> %s>' % ( self.__class__.__name__, repr((u''.join(tmp)).lstrip(u'|')).lstrip(u'u'), self.methods is not None and u' (%s)' % u', '.join(self.methods) or u'', self.endpoint ) class BaseConverter(object): """Base class for all converters.""" regex = '[^/]+' weight = 100 def __init__(self, map): self.map = map def to_python(self, value): return value def to_url(self, value): return url_quote(value, charset=self.map.charset) class UnicodeConverter(BaseConverter): """This converter is the default converter and accepts any string but only one path segment. Thus the string can not include a slash. This is the default validator. Example:: Rule('/pages/'), Rule('/') :param map: the :class:`Map`. :param minlength: the minimum length of the string. Must be greater or equal 1. :param maxlength: the maximum length of the string. :param length: the exact length of the string. 
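    For example, to limit a segment to exactly two characters (the endpoint
    name here is arbitrary)::

        Rule('/<string(length=2):lang_code>', endpoint='set_language')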
""" def __init__(self, map, minlength=1, maxlength=None, length=None): BaseConverter.__init__(self, map) if length is not None: length = '{%d}' % int(length) else: if maxlength is None: maxlength = '' else: maxlength = int(maxlength) length = '{%s,%s}' % ( int(minlength), maxlength ) self.regex = '[^/]' + length class AnyConverter(BaseConverter): """Matches one of the items provided. Items can either be Python identifiers or strings:: Rule('/') :param map: the :class:`Map`. :param items: this function accepts the possible items as positional arguments. """ def __init__(self, map, *items): BaseConverter.__init__(self, map) self.regex = '(?:%s)' % '|'.join([re.escape(x) for x in items]) class PathConverter(BaseConverter): """Like the default :class:`UnicodeConverter`, but it also matches slashes. This is useful for wikis and similar applications:: Rule('/') Rule('//edit') :param map: the :class:`Map`. """ regex = '[^/].*?' weight = 200 class NumberConverter(BaseConverter): """Baseclass for `IntegerConverter` and `FloatConverter`. :internal: """ weight = 50 def __init__(self, map, fixed_digits=0, min=None, max=None): BaseConverter.__init__(self, map) self.fixed_digits = fixed_digits self.min = min self.max = max def to_python(self, value): if (self.fixed_digits and len(value) != self.fixed_digits): raise ValidationError() value = self.num_convert(value) if (self.min is not None and value < self.min) or \ (self.max is not None and value > self.max): raise ValidationError() return value def to_url(self, value): value = self.num_convert(value) if self.fixed_digits: value = ('%%0%sd' % self.fixed_digits) % value return str(value) class IntegerConverter(NumberConverter): """This converter only accepts integer values:: Rule('/page/') This converter does not support negative values. :param map: the :class:`Map`. :param fixed_digits: the number of fixed digits in the URL. If you set this to ``4`` for example, the application will only match if the url looks like ``/0001/``. The default is variable length. :param min: the minimal value. :param max: the maximal value. """ regex = r'\d+' num_convert = int class FloatConverter(NumberConverter): """This converter only accepts floating point values:: Rule('/probability/') This converter does not support negative values. :param map: the :class:`Map`. :param min: the minimal value. :param max: the maximal value. """ regex = r'\d+\.\d+' num_convert = float def __init__(self, map, min=None, max=None): NumberConverter.__init__(self, map, 0, min, max) class UUIDConverter(BaseConverter): """This converter only accepts UUID strings:: Rule('/object/') .. versionadded:: 0.10 :param map: the :class:`Map`. """ regex = r'[A-Fa-f0-9]{8}-[A-Fa-f0-9]{4}-' \ r'[A-Fa-f0-9]{4}-[A-Fa-f0-9]{4}-[A-Fa-f0-9]{12}' def to_python(self, value): return uuid.UUID(value) def to_url(self, value): return str(value) #: the default converter mapping for the map. DEFAULT_CONVERTERS = { 'default': UnicodeConverter, 'string': UnicodeConverter, 'any': AnyConverter, 'path': PathConverter, 'int': IntegerConverter, 'float': FloatConverter, 'uuid': UUIDConverter, } class Map(object): """The map class stores all the URL rules and some configuration parameters. Some of the configuration values are only stored on the `Map` instance since those affect all rules, others are just defaults and can be overridden for each rule. Note that you have to specify all arguments besides the `rules` as keyword arguments! :param rules: sequence of url rules for this map. 
:param default_subdomain: The default subdomain for rules without a subdomain defined. :param charset: charset of the url. defaults to ``"utf-8"`` :param strict_slashes: Take care of trailing slashes. :param redirect_defaults: This will redirect to the default rule if it wasn't visited that way. This helps creating unique URLs. :param converters: A dict of converters that adds additional converters to the list of converters. If you redefine one converter this will override the original one. :param sort_parameters: If set to `True` the url parameters are sorted. See `url_encode` for more details. :param sort_key: The sort key function for `url_encode`. :param encoding_errors: the error method to use for decoding :param host_matching: if set to `True` it enables the host matching feature and disables the subdomain one. If enabled the `host` parameter to rules is used instead of the `subdomain` one. .. versionadded:: 0.5 `sort_parameters` and `sort_key` was added. .. versionadded:: 0.7 `encoding_errors` and `host_matching` was added. """ #: .. versionadded:: 0.6 #: a dict of default converters to be used. default_converters = ImmutableDict(DEFAULT_CONVERTERS) def __init__(self, rules=None, default_subdomain='', charset='utf-8', strict_slashes=True, redirect_defaults=True, converters=None, sort_parameters=False, sort_key=None, encoding_errors='replace', host_matching=False): self._rules = [] self._rules_by_endpoint = {} self._remap = True self._remap_lock = Lock() self.default_subdomain = default_subdomain self.charset = charset self.encoding_errors = encoding_errors self.strict_slashes = strict_slashes self.redirect_defaults = redirect_defaults self.host_matching = host_matching self.converters = self.default_converters.copy() if converters: self.converters.update(converters) self.sort_parameters = sort_parameters self.sort_key = sort_key for rulefactory in rules or (): self.add(rulefactory) def is_endpoint_expecting(self, endpoint, *arguments): """Iterate over all rules and check if the endpoint expects the arguments provided. This is for example useful if you have some URLs that expect a language code and others that do not and you want to wrap the builder a bit so that the current language code is automatically added if not provided but endpoints expect it. :param endpoint: the endpoint to check. :param arguments: this function accepts one or more arguments as positional arguments. Each one of them is checked. """ self.update() arguments = set(arguments) for rule in self._rules_by_endpoint[endpoint]: if arguments.issubset(rule.arguments): return True return False def iter_rules(self, endpoint=None): """Iterate over all rules or the rules of an endpoint. :param endpoint: if provided only the rules for that endpoint are returned. :return: an iterator """ self.update() if endpoint is not None: return iter(self._rules_by_endpoint[endpoint]) return iter(self._rules) def add(self, rulefactory): """Add a new rule or factory to the map and bind it. Requires that the rule is not bound to another map. :param rulefactory: a :class:`Rule` or :class:`RuleFactory` """ for rule in rulefactory.get_rules(self): rule.bind(self) self._rules.append(rule) self._rules_by_endpoint.setdefault(rule.endpoint, []).append(rule) self._remap = True def bind(self, server_name, script_name=None, subdomain=None, url_scheme='http', default_method='GET', path_info=None, query_args=None): """Return a new :class:`MapAdapter` with the details specified to the call. 
Note that `script_name` will default to ``'/'`` if not further specified or `None`. The `server_name` at least is a requirement because the HTTP RFC requires absolute URLs for redirects and so all redirect exceptions raised by Werkzeug will contain the full canonical URL. If no path_info is passed to :meth:`match` it will use the default path info passed to bind. While this doesn't really make sense for manual bind calls, it's useful if you bind a map to a WSGI environment which already contains the path info. `subdomain` will default to the `default_subdomain` for this map if no defined. If there is no `default_subdomain` you cannot use the subdomain feature. .. versionadded:: 0.7 `query_args` added .. versionadded:: 0.8 `query_args` can now also be a string. """ server_name = server_name.lower() if self.host_matching: if subdomain is not None: raise RuntimeError('host matching enabled and a ' 'subdomain was provided') elif subdomain is None: subdomain = self.default_subdomain if script_name is None: script_name = '/' try: server_name = _encode_idna(server_name) except UnicodeError: raise BadHost() return MapAdapter(self, server_name, script_name, subdomain, url_scheme, path_info, default_method, query_args) def bind_to_environ(self, environ, server_name=None, subdomain=None): """Like :meth:`bind` but you can pass it an WSGI environment and it will fetch the information from that dictionary. Note that because of limitations in the protocol there is no way to get the current subdomain and real `server_name` from the environment. If you don't provide it, Werkzeug will use `SERVER_NAME` and `SERVER_PORT` (or `HTTP_HOST` if provided) as used `server_name` with disabled subdomain feature. If `subdomain` is `None` but an environment and a server name is provided it will calculate the current subdomain automatically. Example: `server_name` is ``'example.com'`` and the `SERVER_NAME` in the wsgi `environ` is ``'staging.dev.example.com'`` the calculated subdomain will be ``'staging.dev'``. If the object passed as environ has an environ attribute, the value of this attribute is used instead. This allows you to pass request objects. Additionally `PATH_INFO` added as a default of the :class:`MapAdapter` so that you don't have to pass the path info to the match method. .. versionchanged:: 0.5 previously this method accepted a bogus `calculate_subdomain` parameter that did not have any effect. It was removed because of that. .. versionchanged:: 0.8 This will no longer raise a ValueError when an unexpected server name was passed. :param environ: a WSGI environment. :param server_name: an optional server name hint (see above). :param subdomain: optionally the current subdomain (see above). 
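        A small illustrative sketch inside a WSGI application (``url_map`` is
        assumed to be defined elsewhere and error handling is omitted)::

            def application(environ, start_response):
                urls = url_map.bind_to_environ(environ)
                endpoint, args = urls.match()
                ...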
""" environ = _get_environ(environ) if 'HTTP_HOST' in environ: wsgi_server_name = environ['HTTP_HOST'] if environ['wsgi.url_scheme'] == 'http' \ and wsgi_server_name.endswith(':80'): wsgi_server_name = wsgi_server_name[:-3] elif environ['wsgi.url_scheme'] == 'https' \ and wsgi_server_name.endswith(':443'): wsgi_server_name = wsgi_server_name[:-4] else: wsgi_server_name = environ['SERVER_NAME'] if (environ['wsgi.url_scheme'], environ['SERVER_PORT']) not \ in (('https', '443'), ('http', '80')): wsgi_server_name += ':' + environ['SERVER_PORT'] wsgi_server_name = wsgi_server_name.lower() if server_name is None: server_name = wsgi_server_name else: server_name = server_name.lower() if subdomain is None and not self.host_matching: cur_server_name = wsgi_server_name.split('.') real_server_name = server_name.split('.') offset = -len(real_server_name) if cur_server_name[offset:] != real_server_name: # This can happen even with valid configs if the server was # accesssed directly by IP address under some situations. # Instead of raising an exception like in Werkzeug 0.7 or # earlier we go by an invalid subdomain which will result # in a 404 error on matching. subdomain = '' else: subdomain = '.'.join(filter(None, cur_server_name[:offset])) def _get_wsgi_string(name): val = environ.get(name) if val is not None: return wsgi_decoding_dance(val, self.charset) script_name = _get_wsgi_string('SCRIPT_NAME') path_info = _get_wsgi_string('PATH_INFO') query_args = _get_wsgi_string('QUERY_STRING') return Map.bind(self, server_name, script_name, subdomain, environ['wsgi.url_scheme'], environ['REQUEST_METHOD'], path_info, query_args=query_args) def update(self): """Called before matching and building to keep the compiled rules in the correct order after things changed. """ if not self._remap: return with self._remap_lock: if not self._remap: return self._rules.sort(key=lambda x: x.match_compare_key()) for rules in itervalues(self._rules_by_endpoint): rules.sort(key=lambda x: x.build_compare_key()) self._remap = False def __repr__(self): rules = self.iter_rules() return '%s(%s)' % (self.__class__.__name__, pformat(list(rules))) class MapAdapter(object): """Returned by :meth:`Map.bind` or :meth:`Map.bind_to_environ` and does the URL matching and building based on runtime information. """ def __init__(self, map, server_name, script_name, subdomain, url_scheme, path_info, default_method, query_args=None): self.map = map self.server_name = to_unicode(server_name) script_name = to_unicode(script_name) if not script_name.endswith(u'/'): script_name += u'/' self.script_name = script_name self.subdomain = to_unicode(subdomain) self.url_scheme = to_unicode(url_scheme) self.path_info = to_unicode(path_info) self.default_method = to_unicode(default_method) self.query_args = query_args def dispatch(self, view_func, path_info=None, method=None, catch_http_exceptions=False): """Does the complete dispatching process. `view_func` is called with the endpoint and a dict with the values for the view. It should look up the view function, call it, and return a response object or WSGI application. http exceptions are not caught by default so that applications can display nicer error messages by just catching them by hand. If you want to stick with the default error messages you can pass it ``catch_http_exceptions=True`` and it will catch the http exceptions. 
Here a small example for the dispatch usage:: from werkzeug.wrappers import Request, Response from werkzeug.wsgi import responder from werkzeug.routing import Map, Rule def on_index(request): return Response('Hello from the index') url_map = Map([Rule('/', endpoint='index')]) views = {'index': on_index} @responder def application(environ, start_response): request = Request(environ) urls = url_map.bind_to_environ(environ) return urls.dispatch(lambda e, v: views[e](request, **v), catch_http_exceptions=True) Keep in mind that this method might return exception objects, too, so use :class:`Response.force_type` to get a response object. :param view_func: a function that is called with the endpoint as first argument and the value dict as second. Has to dispatch to the actual view function with this information. (see above) :param path_info: the path info to use for matching. Overrides the path info specified on binding. :param method: the HTTP method used for matching. Overrides the method specified on binding. :param catch_http_exceptions: set to `True` to catch any of the werkzeug :class:`HTTPException`\s. """ try: try: endpoint, args = self.match(path_info, method) except RequestRedirect as e: return e return view_func(endpoint, args) except HTTPException as e: if catch_http_exceptions: return e raise def match(self, path_info=None, method=None, return_rule=False, query_args=None): """The usage is simple: you just pass the match method the current path info as well as the method (which defaults to `GET`). The following things can then happen: - you receive a `NotFound` exception that indicates that no URL is matching. A `NotFound` exception is also a WSGI application you can call to get a default page not found page (happens to be the same object as `werkzeug.exceptions.NotFound`) - you receive a `MethodNotAllowed` exception that indicates that there is a match for this URL but not for the current request method. This is useful for RESTful applications. - you receive a `RequestRedirect` exception with a `new_url` attribute. This exception is used to notify you about a request Werkzeug requests from your WSGI application. This is for example the case if you request ``/foo`` although the correct URL is ``/foo/`` You can use the `RequestRedirect` instance as response-like object similar to all other subclasses of `HTTPException`. - you get a tuple in the form ``(endpoint, arguments)`` if there is a match (unless `return_rule` is True, in which case you get a tuple in the form ``(rule, arguments)``) If the path info is not passed to the match method the default path info of the map is used (defaults to the root URL if not defined explicitly). All of the exceptions raised are subclasses of `HTTPException` so they can be used as WSGI responses. They will all render generic error or redirect pages. Here is a small example for matching: >>> m = Map([ ... Rule('/', endpoint='index'), ... Rule('/downloads/', endpoint='downloads/index'), ... Rule('/downloads/', endpoint='downloads/show') ... ]) >>> urls = m.bind("example.com", "/") >>> urls.match("/", "GET") ('index', {}) >>> urls.match("/downloads/42") ('downloads/show', {'id': 42}) And here is what happens on redirect and missing URLs: >>> urls.match("/downloads") Traceback (most recent call last): ... RequestRedirect: http://example.com/downloads/ >>> urls.match("/missing") Traceback (most recent call last): ... NotFound: 404 Not Found :param path_info: the path info to use for matching. Overrides the path info specified on binding. 
:param method: the HTTP method used for matching. Overrides the method specified on binding. :param return_rule: return the rule that matched instead of just the endpoint (defaults to `False`). :param query_args: optional query arguments that are used for automatic redirects as string or dictionary. It's currently not possible to use the query arguments for URL matching. .. versionadded:: 0.6 `return_rule` was added. .. versionadded:: 0.7 `query_args` was added. .. versionchanged:: 0.8 `query_args` can now also be a string. """ self.map.update() if path_info is None: path_info = self.path_info else: path_info = to_unicode(path_info, self.map.charset) if query_args is None: query_args = self.query_args method = (method or self.default_method).upper() path = u'%s|%s' % ( self.map.host_matching and self.server_name or self.subdomain, path_info and '/%s' % path_info.lstrip('/') ) have_match_for = set() for rule in self.map._rules: try: rv = rule.match(path, method) except RequestSlash: raise RequestRedirect(self.make_redirect_url( url_quote(path_info, self.map.charset, safe='/:|+') + '/', query_args)) except RequestAliasRedirect as e: raise RequestRedirect(self.make_alias_redirect_url( path, rule.endpoint, e.matched_values, method, query_args)) if rv is None: continue if rule.methods is not None and method not in rule.methods: have_match_for.update(rule.methods) continue if self.map.redirect_defaults: redirect_url = self.get_default_redirect(rule, method, rv, query_args) if redirect_url is not None: raise RequestRedirect(redirect_url) if rule.redirect_to is not None: if isinstance(rule.redirect_to, string_types): def _handle_match(match): value = rv[match.group(1)] return rule._converters[match.group(1)].to_url(value) redirect_url = _simple_rule_re.sub(_handle_match, rule.redirect_to) else: redirect_url = rule.redirect_to(self, **rv) raise RequestRedirect(str(url_join('%s://%s%s%s' % ( self.url_scheme or 'http', self.subdomain and self.subdomain + '.' or '', self.server_name, self.script_name ), redirect_url))) if return_rule: return rule, rv else: return rule.endpoint, rv if have_match_for: raise MethodNotAllowed(valid_methods=list(have_match_for)) raise NotFound() def test(self, path_info=None, method=None): """Test if a rule would match. Works like `match` but returns `True` if the URL matches, or `False` if it does not exist. :param path_info: the path info to use for matching. Overrides the path info specified on binding. :param method: the HTTP method used for matching. Overrides the method specified on binding. """ try: self.match(path_info, method) except RequestRedirect: pass except HTTPException: return False return True def allowed_methods(self, path_info=None): """Returns the valid methods that match for a given path. .. versionadded:: 0.7 """ try: self.match(path_info, method='--') except MethodNotAllowed as e: return e.valid_methods except HTTPException as e: pass return [] def get_host(self, domain_part): """Figures out the full host name for the given domain part. The domain part is a subdomain in case host matching is disabled or a full host name. """ if self.map.host_matching: if domain_part is None: return self.server_name return to_unicode(domain_part, 'ascii') subdomain = domain_part if subdomain is None: subdomain = self.subdomain else: subdomain = to_unicode(subdomain, 'ascii') return (subdomain and subdomain + u'.' or u'') + self.server_name def get_default_redirect(self, rule, method, values, query_args): """A helper that returns the URL to redirect to if it finds one. 
This is used for default redirecting only. :internal: """ assert self.map.redirect_defaults for r in self.map._rules_by_endpoint[rule.endpoint]: # every rule that comes after this one, including ourself # has a lower priority for the defaults. We order the ones # with the highest priority up for building. if r is rule: break if r.provides_defaults_for(rule) and \ r.suitable_for(values, method): values.update(r.defaults) domain_part, path = r.build(values) return self.make_redirect_url( path, query_args, domain_part=domain_part) def encode_query_args(self, query_args): if not isinstance(query_args, string_types): query_args = url_encode(query_args, self.map.charset) return query_args def make_redirect_url(self, path_info, query_args=None, domain_part=None): """Creates a redirect URL. :internal: """ suffix = '' if query_args: suffix = '?' + self.encode_query_args(query_args) return str('%s://%s/%s%s' % ( self.url_scheme or 'http', self.get_host(domain_part), posixpath.join(self.script_name[:-1].lstrip('/'), path_info.lstrip('/')), suffix )) def make_alias_redirect_url(self, path, endpoint, values, method, query_args): """Internally called to make an alias redirect URL.""" url = self.build(endpoint, values, method, append_unknown=False, force_external=True) if query_args: url += '?' + self.encode_query_args(query_args) assert url != path, 'detected invalid alias setting. No canonical ' \ 'URL found' return url def _partial_build(self, endpoint, values, method, append_unknown): """Helper for :meth:`build`. Returns subdomain and path for the rule that accepts this endpoint, values and method. :internal: """ # in case the method is none, try with the default method first if method is None: rv = self._partial_build(endpoint, values, self.default_method, append_unknown) if rv is not None: return rv # default method did not match or a specific method is passed, # check all and go with first result. for rule in self.map._rules_by_endpoint.get(endpoint, ()): if rule.suitable_for(values, method): rv = rule.build(values, append_unknown) if rv is not None: return rv def build(self, endpoint, values=None, method=None, force_external=False, append_unknown=True): """Building URLs works pretty much the other way round. Instead of `match` you call `build` and pass it the endpoint and a dict of arguments for the placeholders. The `build` function also accepts an argument called `force_external` which, if you set it to `True` will force external URLs. Per default external URLs (include the server name) will only be used if the target URL is on a different subdomain. >>> m = Map([ ... Rule('/', endpoint='index'), ... Rule('/downloads/', endpoint='downloads/index'), ... Rule('/downloads/', endpoint='downloads/show') ... ]) >>> urls = m.bind("example.com", "/") >>> urls.build("index", {}) '/' >>> urls.build("downloads/show", {'id': 42}) '/downloads/42' >>> urls.build("downloads/show", {'id': 42}, force_external=True) 'http://example.com/downloads/42' Because URLs cannot contain non ASCII data you will always get bytestrings back. Non ASCII characters are urlencoded with the charset defined on the map instance. 
Additional values are converted to unicode and appended to the URL as URL querystring parameters: >>> urls.build("index", {'q': 'My Searchstring'}) '/?q=My+Searchstring' When processing those additional values, lists are furthermore interpreted as multiple values (as per :py:class:`werkzeug.datastructures.MultiDict`): >>> urls.build("index", {'q': ['a', 'b', 'c']}) '/?q=a&q=b&q=c' If a rule does not exist when building a `BuildError` exception is raised. The build method accepts an argument called `method` which allows you to specify the method you want to have an URL built for if you have different methods for the same endpoint specified. .. versionadded:: 0.6 the `append_unknown` parameter was added. :param endpoint: the endpoint of the URL to build. :param values: the values for the URL to build. Unhandled values are appended to the URL as query parameters. :param method: the HTTP method for the rule if there are different URLs for different methods on the same endpoint. :param force_external: enforce full canonical external URLs. If the URL scheme is not provided, this will generate a protocol-relative URL. :param append_unknown: unknown parameters are appended to the generated URL as query string argument. Disable this if you want the builder to ignore those. """ self.map.update() if values: if isinstance(values, MultiDict): valueiter = iteritems(values, multi=True) else: valueiter = iteritems(values) values = dict((k, v) for k, v in valueiter if v is not None) else: values = {} rv = self._partial_build(endpoint, values, method, append_unknown) if rv is None: raise BuildError(endpoint, values, method, self) domain_part, path = rv host = self.get_host(domain_part) # shortcut this. if not force_external and ( (self.map.host_matching and host == self.server_name) or (not self.map.host_matching and domain_part == self.subdomain) ): return str(url_join(self.script_name, './' + path.lstrip('/'))) return str('%s//%s%s/%s' % ( self.url_scheme + ':' if self.url_scheme else '', host, self.script_name[:-1], path.lstrip('/') )) werkzeug-0.14.1/werkzeug/security.py000066400000000000000000000217511322225165500175350ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.security ~~~~~~~~~~~~~~~~~ Security related helpers such as secure password hashing tools. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import os import hmac import hashlib import posixpath import codecs from struct import Struct from random import SystemRandom from operator import xor from itertools import starmap from werkzeug._compat import range_type, PY2, text_type, izip, to_bytes, \ string_types, to_native SALT_CHARS = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789' DEFAULT_PBKDF2_ITERATIONS = 50000 _pack_int = Struct('>I').pack _builtin_safe_str_cmp = getattr(hmac, 'compare_digest', None) _sys_rng = SystemRandom() _os_alt_seps = list(sep for sep in [os.path.sep, os.path.altsep] if sep not in (None, '/')) def _find_hashlib_algorithms(): algos = getattr(hashlib, 'algorithms', None) if algos is None: algos = ('md5', 'sha1', 'sha224', 'sha256', 'sha384', 'sha512') rv = {} for algo in algos: func = getattr(hashlib, algo, None) if func is not None: rv[algo] = func return rv _hash_funcs = _find_hashlib_algorithms() def pbkdf2_hex(data, salt, iterations=DEFAULT_PBKDF2_ITERATIONS, keylen=None, hashfunc=None): """Like :func:`pbkdf2_bin`, but returns a hex-encoded string. .. versionadded:: 0.9 :param data: the data to derive. 
:param salt: the salt for the derivation. :param iterations: the number of iterations. :param keylen: the length of the resulting key. If not provided, the digest size will be used. :param hashfunc: the hash function to use. This can either be the string name of a known hash function, or a function from the hashlib module. Defaults to sha256. """ rv = pbkdf2_bin(data, salt, iterations, keylen, hashfunc) return to_native(codecs.encode(rv, 'hex_codec')) _has_native_pbkdf2 = hasattr(hashlib, 'pbkdf2_hmac') def pbkdf2_bin(data, salt, iterations=DEFAULT_PBKDF2_ITERATIONS, keylen=None, hashfunc=None): """Returns a binary digest for the PBKDF2 hash algorithm of `data` with the given `salt`. It iterates `iterations` times and produces a key of `keylen` bytes. By default, SHA-256 is used as hash function; a different hashlib `hashfunc` can be provided. .. versionadded:: 0.9 :param data: the data to derive. :param salt: the salt for the derivation. :param iterations: the number of iterations. :param keylen: the length of the resulting key. If not provided the digest size will be used. :param hashfunc: the hash function to use. This can either be the string name of a known hash function or a function from the hashlib module. Defaults to sha256. """ if isinstance(hashfunc, string_types): hashfunc = _hash_funcs[hashfunc] elif not hashfunc: hashfunc = hashlib.sha256 data = to_bytes(data) salt = to_bytes(salt) # If we're on Python with pbkdf2_hmac we can try to use it for # compatible digests. if _has_native_pbkdf2: _test_hash = hashfunc() if hasattr(_test_hash, 'name') and \ _test_hash.name in _hash_funcs: return hashlib.pbkdf2_hmac(_test_hash.name, data, salt, iterations, keylen) mac = hmac.HMAC(data, None, hashfunc) if not keylen: keylen = mac.digest_size def _pseudorandom(x, mac=mac): h = mac.copy() h.update(x) return bytearray(h.digest()) buf = bytearray() for block in range_type(1, -(-keylen // mac.digest_size) + 1): rv = u = _pseudorandom(salt + _pack_int(block)) for i in range_type(iterations - 1): u = _pseudorandom(bytes(u)) rv = bytearray(starmap(xor, izip(rv, u))) buf.extend(rv) return bytes(buf[:keylen]) def safe_str_cmp(a, b): """This function compares strings in somewhat constant time. This requires that the length of at least one string is known in advance. Returns `True` if the two strings are equal, or `False` if they are not. .. versionadded:: 0.7 """ if isinstance(a, text_type): a = a.encode('utf-8') if isinstance(b, text_type): b = b.encode('utf-8') if _builtin_safe_str_cmp is not None: return _builtin_safe_str_cmp(a, b) if len(a) != len(b): return False rv = 0 if PY2: for x, y in izip(a, b): rv |= ord(x) ^ ord(y) else: for x, y in izip(a, b): rv |= x ^ y return rv == 0 def gen_salt(length): """Generate a random string of SALT_CHARS with specified ``length``.""" if length <= 0: raise ValueError('Salt length must be positive') return ''.join(_sys_rng.choice(SALT_CHARS) for _ in range_type(length)) def _hash_internal(method, salt, password): """Internal password hash helper. Supports plaintext without salt, unsalted and salted passwords. In case salted passwords are used hmac is used. 
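    For illustration, the public helpers below build on this function; a
    rough sketch of the formats they produce (salts are random and the
    digests are abbreviated here)::

        generate_password_hash('secret')                 # 'pbkdf2:sha256:50000$<salt>$<hash>'
        generate_password_hash('secret', method='sha1')  # 'sha1$<salt>$<hash>'
        check_password_hash(generate_password_hash('secret'), 'secret')  # True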
""" if method == 'plain': return password, method if isinstance(password, text_type): password = password.encode('utf-8') if method.startswith('pbkdf2:'): args = method[7:].split(':') if len(args) not in (1, 2): raise ValueError('Invalid number of arguments for PBKDF2') method = args.pop(0) iterations = args and int(args[0] or 0) or DEFAULT_PBKDF2_ITERATIONS is_pbkdf2 = True actual_method = 'pbkdf2:%s:%d' % (method, iterations) else: is_pbkdf2 = False actual_method = method hash_func = _hash_funcs.get(method) if hash_func is None: raise TypeError('invalid method %r' % method) if is_pbkdf2: if not salt: raise ValueError('Salt is required for PBKDF2') rv = pbkdf2_hex(password, salt, iterations, hashfunc=hash_func) elif salt: if isinstance(salt, text_type): salt = salt.encode('utf-8') rv = hmac.HMAC(salt, password, hash_func).hexdigest() else: h = hash_func() h.update(password) rv = h.hexdigest() return rv, actual_method def generate_password_hash(password, method='pbkdf2:sha256', salt_length=8): """Hash a password with the given method and salt with a string of the given length. The format of the string returned includes the method that was used so that :func:`check_password_hash` can check the hash. The format for the hashed string looks like this:: method$salt$hash This method can **not** generate unsalted passwords but it is possible to set param method='plain' in order to enforce plaintext passwords. If a salt is used, hmac is used internally to salt the password. If PBKDF2 is wanted it can be enabled by setting the method to ``pbkdf2:method:iterations`` where iterations is optional:: pbkdf2:sha256:80000$salt$hash pbkdf2:sha256$salt$hash :param password: the password to hash. :param method: the hash method to use (one that hashlib supports). Can optionally be in the format ``pbkdf2:[:iterations]`` to enable PBKDF2. :param salt_length: the length of the salt in letters. """ salt = method != 'plain' and gen_salt(salt_length) or '' h, actual_method = _hash_internal(method, salt, password) return '%s$%s$%s' % (actual_method, salt, h) def check_password_hash(pwhash, password): """check a password against a given salted and hashed password value. In order to support unsalted legacy passwords this method supports plain text passwords, md5 and sha1 hashes (both salted and unsalted). Returns `True` if the password matched, `False` otherwise. :param pwhash: a hashed string like returned by :func:`generate_password_hash`. :param password: the plaintext password to compare against the hash. """ if pwhash.count('$') < 2: return False method, salt, hashval = pwhash.split('$', 2) return safe_str_cmp(_hash_internal(method, salt, password)[0], hashval) def safe_join(directory, *pathnames): """Safely join `directory` and one or more untrusted `pathnames`. If this cannot be done, this function returns ``None``. :param directory: the base directory. :param pathnames: the untrusted pathnames relative to that directory. """ parts = [directory] for filename in pathnames: if filename != '': filename = posixpath.normpath(filename) for sep in _os_alt_seps: if sep in filename: return None if os.path.isabs(filename) or \ filename == '..' or \ filename.startswith('../'): return None parts.append(filename) return posixpath.join(*parts) werkzeug-0.14.1/werkzeug/serving.py000066400000000000000000000762661322225165500173560ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.serving ~~~~~~~~~~~~~~~~ There are many ways to serve a WSGI application. 
While you're developing it you usually don't want a full blown webserver like Apache but a simple standalone one. From Python 2.5 onwards there is the `wsgiref`_ server in the standard library. If you're using older versions of Python you can download the package from the cheeseshop. However there are some caveats. Sourcecode won't reload itself when changed and each time you kill the server using ``^C`` you get an `KeyboardInterrupt` error. While the latter is easy to solve the first one can be a pain in the ass in some situations. The easiest way is creating a small ``start-myproject.py`` that runs the application:: #!/usr/bin/env python # -*- coding: utf-8 -*- from myproject import make_app from werkzeug.serving import run_simple app = make_app(...) run_simple('localhost', 8080, app, use_reloader=True) You can also pass it a `extra_files` keyword argument with a list of additional files (like configuration files) you want to observe. For bigger applications you should consider using `click` (http://click.pocoo.org) instead of a simple start file. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ from __future__ import with_statement import io import os import socket import sys import signal can_fork = hasattr(os, "fork") try: import termcolor except ImportError: termcolor = None try: import ssl except ImportError: class _SslDummy(object): def __getattr__(self, name): raise RuntimeError('SSL support unavailable') ssl = _SslDummy() def _get_openssl_crypto_module(): try: from OpenSSL import crypto except ImportError: raise TypeError('Using ad-hoc certificates requires the pyOpenSSL ' 'library.') else: return crypto try: import SocketServer as socketserver from BaseHTTPServer import HTTPServer, BaseHTTPRequestHandler except ImportError: import socketserver from http.server import HTTPServer, BaseHTTPRequestHandler ThreadingMixIn = socketserver.ThreadingMixIn if can_fork: ForkingMixIn = socketserver.ForkingMixIn else: class ForkingMixIn(object): pass # important: do not use relative imports here or python -m will break import werkzeug from werkzeug._internal import _log from werkzeug._compat import PY2, WIN, reraise, wsgi_encoding_dance from werkzeug.urls import url_parse, url_unquote from werkzeug.exceptions import InternalServerError LISTEN_QUEUE = 128 can_open_by_fd = not WIN and hasattr(socket, 'fromfd') class DechunkedInput(io.RawIOBase): """An input stream that handles Transfer-Encoding 'chunked'""" def __init__(self, rfile): self._rfile = rfile self._done = False self._len = 0 def readable(self): return True def read_chunk_len(self): try: line = self._rfile.readline().decode('latin1') _len = int(line.strip(), 16) except ValueError: raise IOError('Invalid chunk header') if _len < 0: raise IOError('Negative chunk length not allowed') return _len def readinto(self, buf): read = 0 while not self._done and read < len(buf): if self._len == 0: # This is the first chunk or we fully consumed the previous # one. Read the next length of the next chunk self._len = self.read_chunk_len() if self._len == 0: # Found the final chunk of size 0. The stream is now exhausted, # but there is still a final newline that should be consumed self._done = True if self._len > 0: # There is data (left) in this chunk, so append it to the # buffer. If this operation fully consumes the chunk, this will # reset self._len to 0. 
n = min(len(buf), self._len) buf[read:read + n] = self._rfile.read(n) self._len -= n read += n if self._len == 0: # Skip the terminating newline of a chunk that has been fully # consumed. This also applies to the 0-sized final chunk terminator = self._rfile.readline() if terminator not in (b'\n', b'\r\n', b'\r'): raise IOError('Missing chunk terminating newline') return read class WSGIRequestHandler(BaseHTTPRequestHandler, object): """A request handler that implements WSGI dispatching.""" @property def server_version(self): return 'Werkzeug/' + werkzeug.__version__ def make_environ(self): request_url = url_parse(self.path) def shutdown_server(): self.server.shutdown_signal = True url_scheme = self.server.ssl_context is None and 'http' or 'https' path_info = url_unquote(request_url.path) environ = { 'wsgi.version': (1, 0), 'wsgi.url_scheme': url_scheme, 'wsgi.input': self.rfile, 'wsgi.errors': sys.stderr, 'wsgi.multithread': self.server.multithread, 'wsgi.multiprocess': self.server.multiprocess, 'wsgi.run_once': False, 'werkzeug.server.shutdown': shutdown_server, 'SERVER_SOFTWARE': self.server_version, 'REQUEST_METHOD': self.command, 'SCRIPT_NAME': '', 'PATH_INFO': wsgi_encoding_dance(path_info), 'QUERY_STRING': wsgi_encoding_dance(request_url.query), 'REMOTE_ADDR': self.address_string(), 'REMOTE_PORT': self.port_integer(), 'SERVER_NAME': self.server.server_address[0], 'SERVER_PORT': str(self.server.server_address[1]), 'SERVER_PROTOCOL': self.request_version } for key, value in self.headers.items(): key = key.upper().replace('-', '_') if key not in ('CONTENT_TYPE', 'CONTENT_LENGTH'): key = 'HTTP_' + key environ[key] = value if environ.get('HTTP_TRANSFER_ENCODING', '').strip().lower() == 'chunked': environ['wsgi.input_terminated'] = True environ['wsgi.input'] = DechunkedInput(environ['wsgi.input']) if request_url.scheme and request_url.netloc: environ['HTTP_HOST'] = request_url.netloc return environ def run_wsgi(self): if self.headers.get('Expect', '').lower().strip() == '100-continue': self.wfile.write(b'HTTP/1.1 100 Continue\r\n\r\n') self.environ = environ = self.make_environ() headers_set = [] headers_sent = [] def write(data): assert headers_set, 'write() before start_response' if not headers_sent: status, response_headers = headers_sent[:] = headers_set try: code, msg = status.split(None, 1) except ValueError: code, msg = status, "" code = int(code) self.send_response(code, msg) header_keys = set() for key, value in response_headers: self.send_header(key, value) key = key.lower() header_keys.add(key) if not ('content-length' in header_keys or environ['REQUEST_METHOD'] == 'HEAD' or code < 200 or code in (204, 304)): self.close_connection = True self.send_header('Connection', 'close') if 'server' not in header_keys: self.send_header('Server', self.version_string()) if 'date' not in header_keys: self.send_header('Date', self.date_time_string()) self.end_headers() assert isinstance(data, bytes), 'applications must write bytes' self.wfile.write(data) self.wfile.flush() def start_response(status, response_headers, exc_info=None): if exc_info: try: if headers_sent: reraise(*exc_info) finally: exc_info = None elif headers_set: raise AssertionError('Headers already set') headers_set[:] = [status, response_headers] return write def execute(app): application_iter = app(environ, start_response) try: for data in application_iter: write(data) if not headers_sent: write(b'') finally: if hasattr(application_iter, 'close'): application_iter.close() application_iter = None try: execute(self.server.app) 
except (socket.error, socket.timeout) as e: self.connection_dropped(e, environ) except Exception: if self.server.passthrough_errors: raise from werkzeug.debug.tbtools import get_current_traceback traceback = get_current_traceback(ignore_system_exceptions=True) try: # if we haven't yet sent the headers but they are set # we roll back to be able to set them again. if not headers_sent: del headers_set[:] execute(InternalServerError()) except Exception: pass self.server.log('error', 'Error on request:\n%s', traceback.plaintext) def handle(self): """Handles a request ignoring dropped connections.""" rv = None try: rv = BaseHTTPRequestHandler.handle(self) except (socket.error, socket.timeout) as e: self.connection_dropped(e) except Exception: if self.server.ssl_context is None or not is_ssl_error(): raise if self.server.shutdown_signal: self.initiate_shutdown() return rv def initiate_shutdown(self): """A horrible, horrible way to kill the server for Python 2.6 and later. It's the best we can do. """ # Windows does not provide SIGKILL, go with SIGTERM then. sig = getattr(signal, 'SIGKILL', signal.SIGTERM) # reloader active if os.environ.get('WERKZEUG_RUN_MAIN') == 'true': os.kill(os.getpid(), sig) # python 2.7 self.server._BaseServer__shutdown_request = True # python 2.6 self.server._BaseServer__serving = False def connection_dropped(self, error, environ=None): """Called if the connection was closed by the client. By default nothing happens. """ def handle_one_request(self): """Handle a single HTTP request.""" self.raw_requestline = self.rfile.readline() if not self.raw_requestline: self.close_connection = 1 elif self.parse_request(): return self.run_wsgi() def send_response(self, code, message=None): """Send the response header and log the response code.""" self.log_request(code) if message is None: message = code in self.responses and self.responses[code][0] or '' if self.request_version != 'HTTP/0.9': hdr = "%s %d %s\r\n" % (self.protocol_version, code, message) self.wfile.write(hdr.encode('ascii')) def version_string(self): return BaseHTTPRequestHandler.version_string(self).strip() def address_string(self): if getattr(self, 'environ', None): return self.environ['REMOTE_ADDR'] else: return self.client_address[0] def port_integer(self): return self.client_address[1] def log_request(self, code='-', size='-'): msg = self.requestline code = str(code) if termcolor: color = termcolor.colored if code[0] == '1': # 1xx - Informational msg = color(msg, attrs=['bold']) elif code[0] == '2': # 2xx - Success msg = color(msg, color='white') elif code == '304': # 304 - Resource Not Modified msg = color(msg, color='cyan') elif code[0] == '3': # 3xx - Redirection msg = color(msg, color='green') elif code == '404': # 404 - Resource Not Found msg = color(msg, color='yellow') elif code[0] == '4': # 4xx - Client Error msg = color(msg, color='red', attrs=['bold']) else: # 5xx, or any other response msg = color(msg, color='magenta', attrs=['bold']) self.log('info', '"%s" %s %s', msg, code, size) def log_error(self, *args): self.log('error', *args) def log_message(self, format, *args): self.log('info', format, *args) def log(self, type, message, *args): _log(type, '%s - - [%s] %s\n' % (self.address_string(), self.log_date_time_string(), message % args)) #: backwards compatible name if someone is subclassing it BaseRequestHandler = WSGIRequestHandler def generate_adhoc_ssl_pair(cn=None): from random import random crypto = _get_openssl_crypto_module() # pretty damn sure that this is not actually accepted by anyone if cn 
is None: cn = '*' cert = crypto.X509() cert.set_serial_number(int(random() * sys.maxsize)) cert.gmtime_adj_notBefore(0) cert.gmtime_adj_notAfter(60 * 60 * 24 * 365) subject = cert.get_subject() subject.CN = cn subject.O = 'Dummy Certificate' # noqa: E741 issuer = cert.get_issuer() issuer.CN = 'Untrusted Authority' issuer.O = 'Self-Signed' # noqa: E741 pkey = crypto.PKey() pkey.generate_key(crypto.TYPE_RSA, 2048) cert.set_pubkey(pkey) cert.sign(pkey, 'sha256') return cert, pkey def make_ssl_devcert(base_path, host=None, cn=None): """Creates an SSL key for development. This should be used instead of the ``'adhoc'`` key which generates a new cert on each server start. It accepts a path for where it should store the key and cert and either a host or CN. If a host is given it will use the CN ``*.host/CN=host``. For more information see :func:`run_simple`. .. versionadded:: 0.9 :param base_path: the path to the certificate and key. The extension ``.crt`` is added for the certificate, ``.key`` is added for the key. :param host: the name of the host. This can be used as an alternative for the `cn`. :param cn: the `CN` to use. """ from OpenSSL import crypto if host is not None: cn = '*.%s/CN=%s' % (host, host) cert, pkey = generate_adhoc_ssl_pair(cn=cn) cert_file = base_path + '.crt' pkey_file = base_path + '.key' with open(cert_file, 'wb') as f: f.write(crypto.dump_certificate(crypto.FILETYPE_PEM, cert)) with open(pkey_file, 'wb') as f: f.write(crypto.dump_privatekey(crypto.FILETYPE_PEM, pkey)) return cert_file, pkey_file def generate_adhoc_ssl_context(): """Generates an adhoc SSL context for the development server.""" crypto = _get_openssl_crypto_module() import tempfile import atexit cert, pkey = generate_adhoc_ssl_pair() cert_handle, cert_file = tempfile.mkstemp() pkey_handle, pkey_file = tempfile.mkstemp() atexit.register(os.remove, pkey_file) atexit.register(os.remove, cert_file) os.write(cert_handle, crypto.dump_certificate(crypto.FILETYPE_PEM, cert)) os.write(pkey_handle, crypto.dump_privatekey(crypto.FILETYPE_PEM, pkey)) os.close(cert_handle) os.close(pkey_handle) ctx = load_ssl_context(cert_file, pkey_file) return ctx def load_ssl_context(cert_file, pkey_file=None, protocol=None): """Loads SSL context from cert/private key files and optional protocol. Many parameters are directly taken from the API of :py:class:`ssl.SSLContext`. :param cert_file: Path of the certificate to use. :param pkey_file: Path of the private key to use. If not given, the key will be obtained from the certificate file. :param protocol: One of the ``PROTOCOL_*`` constants in the stdlib ``ssl`` module. Defaults to ``PROTOCOL_SSLv23``. 
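A brief usage sketch (``app`` stands for any WSGI application; the port and
file names are only placeholders)::

    ctx = load_ssl_context('dev.crt', 'dev.key')
    run_simple('localhost', 4443, app, ssl_context=ctx)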
""" if protocol is None: protocol = ssl.PROTOCOL_SSLv23 ctx = _SSLContext(protocol) ctx.load_cert_chain(cert_file, pkey_file) return ctx class _SSLContext(object): '''A dummy class with a small subset of Python3's ``ssl.SSLContext``, only intended to be used with and by Werkzeug.''' def __init__(self, protocol): self._protocol = protocol self._certfile = None self._keyfile = None self._password = None def load_cert_chain(self, certfile, keyfile=None, password=None): self._certfile = certfile self._keyfile = keyfile or certfile self._password = password def wrap_socket(self, sock, **kwargs): return ssl.wrap_socket(sock, keyfile=self._keyfile, certfile=self._certfile, ssl_version=self._protocol, **kwargs) def is_ssl_error(error=None): """Checks if the given error (or the current one) is an SSL error.""" exc_types = (ssl.SSLError,) try: from OpenSSL.SSL import Error exc_types += (Error,) except ImportError: pass if error is None: error = sys.exc_info()[1] return isinstance(error, exc_types) def select_ip_version(host, port): """Returns AF_INET4 or AF_INET6 depending on where to connect to.""" # disabled due to problems with current ipv6 implementations # and various operating systems. Probably this code also is # not supposed to work, but I can't come up with any other # ways to implement this. # try: # info = socket.getaddrinfo(host, port, socket.AF_UNSPEC, # socket.SOCK_STREAM, 0, # socket.AI_PASSIVE) # if info: # return info[0][0] # except socket.gaierror: # pass if ':' in host and hasattr(socket, 'AF_INET6'): return socket.AF_INET6 return socket.AF_INET def get_sockaddr(host, port, family): """Returns a fully qualified socket address, that can properly used by socket.bind""" try: res = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM, socket.SOL_TCP) except socket.gaierror: return (host, port) return res[0][4] class BaseWSGIServer(HTTPServer, object): """Simple single-threaded, single-process WSGI server.""" multithread = False multiprocess = False request_queue_size = LISTEN_QUEUE def __init__(self, host, port, app, handler=None, passthrough_errors=False, ssl_context=None, fd=None): if handler is None: handler = WSGIRequestHandler self.address_family = select_ip_version(host, port) if fd is not None: real_sock = socket.fromfd(fd, self.address_family, socket.SOCK_STREAM) port = 0 HTTPServer.__init__(self, get_sockaddr(host, int(port), self.address_family), handler) self.app = app self.passthrough_errors = passthrough_errors self.shutdown_signal = False self.host = host self.port = self.socket.getsockname()[1] # Patch in the original socket. 
if fd is not None: self.socket.close() self.socket = real_sock self.server_address = self.socket.getsockname() if ssl_context is not None: if isinstance(ssl_context, tuple): ssl_context = load_ssl_context(*ssl_context) if ssl_context == 'adhoc': ssl_context = generate_adhoc_ssl_context() # If we are on Python 2 the return value from socket.fromfd # is an internal socket object but what we need for ssl wrap # is the wrapper around it :( sock = self.socket if PY2 and not isinstance(sock, socket.socket): sock = socket.socket(sock.family, sock.type, sock.proto, sock) self.socket = ssl_context.wrap_socket(sock, server_side=True) self.ssl_context = ssl_context else: self.ssl_context = None def log(self, type, message, *args): _log(type, message, *args) def serve_forever(self): self.shutdown_signal = False try: HTTPServer.serve_forever(self) except KeyboardInterrupt: pass finally: self.server_close() def handle_error(self, request, client_address): if self.passthrough_errors: raise return HTTPServer.handle_error(self, request, client_address) def get_request(self): con, info = self.socket.accept() return con, info class ThreadedWSGIServer(ThreadingMixIn, BaseWSGIServer): """A WSGI server that does threading.""" multithread = True daemon_threads = True class ForkingWSGIServer(ForkingMixIn, BaseWSGIServer): """A WSGI server that does forking.""" multiprocess = True def __init__(self, host, port, app, processes=40, handler=None, passthrough_errors=False, ssl_context=None, fd=None): if not can_fork: raise ValueError('Your platform does not support forking.') BaseWSGIServer.__init__(self, host, port, app, handler, passthrough_errors, ssl_context, fd) self.max_children = processes def make_server(host=None, port=None, app=None, threaded=False, processes=1, request_handler=None, passthrough_errors=False, ssl_context=None, fd=None): """Create a new server instance that is either threaded, or forks or just processes one request after another. """ if threaded and processes > 1: raise ValueError("cannot have a multithreaded and " "multi process server.") elif threaded: return ThreadedWSGIServer(host, port, app, request_handler, passthrough_errors, ssl_context, fd=fd) elif processes > 1: return ForkingWSGIServer(host, port, app, processes, request_handler, passthrough_errors, ssl_context, fd=fd) else: return BaseWSGIServer(host, port, app, request_handler, passthrough_errors, ssl_context, fd=fd) def is_running_from_reloader(): """Checks if the application is running from within the Werkzeug reloader subprocess. .. versionadded:: 0.10 """ return os.environ.get('WERKZEUG_RUN_MAIN') == 'true' def run_simple(hostname, port, application, use_reloader=False, use_debugger=False, use_evalex=True, extra_files=None, reloader_interval=1, reloader_type='auto', threaded=False, processes=1, request_handler=None, static_files=None, passthrough_errors=False, ssl_context=None): """Start a WSGI application. Optional features include a reloader, multithreading and fork support. This function has a command-line interface too:: python -m werkzeug.serving --help .. versionadded:: 0.5 `static_files` was added to simplify serving of static files as well as `passthrough_errors`. .. versionadded:: 0.6 support for SSL was added. .. versionadded:: 0.8 Added support for automatically loading a SSL context from certificate file and private key. .. versionadded:: 0.9 Added command-line interface. .. versionadded:: 0.10 Improved the reloader and added support for changing the backend through the `reloader_type` parameter. 
See :ref:`reloader` for more information. :param hostname: The host for the application. eg: ``'localhost'`` :param port: The port for the server. eg: ``8080`` :param application: the WSGI application to execute :param use_reloader: should the server automatically restart the python process if modules were changed? :param use_debugger: should the werkzeug debugging system be used? :param use_evalex: should the exception evaluation feature be enabled? :param extra_files: a list of files the reloader should watch additionally to the modules. For example configuration files. :param reloader_interval: the interval for the reloader in seconds. :param reloader_type: the type of reloader to use. The default is auto detection. Valid values are ``'stat'`` and ``'watchdog'``. See :ref:`reloader` for more information. :param threaded: should the process handle each request in a separate thread? :param processes: if greater than 1 then handle each request in a new process up to this maximum number of concurrent processes. :param request_handler: optional parameter that can be used to replace the default one. You can use this to replace it with a different :class:`~BaseHTTPServer.BaseHTTPRequestHandler` subclass. :param static_files: a list or dict of paths for static files. This works exactly like :class:`SharedDataMiddleware`, it's actually just wrapping the application in that middleware before serving. :param passthrough_errors: set this to `True` to disable the error catching. This means that the server will die on errors but it can be useful to hook debuggers in (pdb etc.) :param ssl_context: an SSL context for the connection. Either an :class:`ssl.SSLContext`, a tuple in the form ``(cert_file, pkey_file)``, the string ``'adhoc'`` if the server should automatically create one, or ``None`` to disable SSL (which is the default). """ if not isinstance(port, int): raise TypeError('port must be an integer') if use_debugger: from werkzeug.debug import DebuggedApplication application = DebuggedApplication(application, use_evalex) if static_files: from werkzeug.wsgi import SharedDataMiddleware application = SharedDataMiddleware(application, static_files) def log_startup(sock): display_hostname = hostname not in ('', '*') and hostname or 'localhost' if ':' in display_hostname: display_hostname = '[%s]' % display_hostname quit_msg = '(Press CTRL+C to quit)' port = sock.getsockname()[1] _log('info', ' * Running on %s://%s:%d/ %s', ssl_context is None and 'http' or 'https', display_hostname, port, quit_msg) def inner(): try: fd = int(os.environ['WERKZEUG_SERVER_FD']) except (LookupError, ValueError): fd = None srv = make_server(hostname, port, application, threaded, processes, request_handler, passthrough_errors, ssl_context, fd=fd) if fd is None: log_startup(srv.socket) srv.serve_forever() if use_reloader: # If we're not running already in the subprocess that is the # reloader we want to open up a socket early to make sure the # port is actually available. if os.environ.get('WERKZEUG_RUN_MAIN') != 'true': if port == 0 and not can_open_by_fd: raise ValueError('Cannot bind to a random port with enabled ' 'reloader if the Python interpreter does ' 'not support socket opening by fd.') # Create and destroy a socket so that any exceptions are # raised before we spawn a separate Python interpreter and # lose this ability. 
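# If sockets can be opened by file descriptor, the probe socket is kept
# open and its fd is published via WERKZEUG_SERVER_FD so that the
# reloaded process can reuse it; otherwise it is closed again right away.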
address_family = select_ip_version(hostname, port) s = socket.socket(address_family, socket.SOCK_STREAM) s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s.bind(get_sockaddr(hostname, port, address_family)) if hasattr(s, 'set_inheritable'): s.set_inheritable(True) # If we can open the socket by file descriptor, then we can just # reuse this one and our socket will survive the restarts. if can_open_by_fd: os.environ['WERKZEUG_SERVER_FD'] = str(s.fileno()) s.listen(LISTEN_QUEUE) log_startup(s) else: s.close() # Do not use relative imports, otherwise "python -m werkzeug.serving" # breaks. from werkzeug._reloader import run_with_reloader run_with_reloader(inner, extra_files, reloader_interval, reloader_type) else: inner() def run_with_reloader(*args, **kwargs): # People keep using undocumented APIs. Do not use this function # please, we do not guarantee that it continues working. from werkzeug._reloader import run_with_reloader return run_with_reloader(*args, **kwargs) def main(): '''A simple command-line interface for :py:func:`run_simple`.''' # in contrast to argparse, this works at least under Python < 2.7 import optparse from werkzeug.utils import import_string parser = optparse.OptionParser( usage='Usage: %prog [options] app_module:app_object') parser.add_option('-b', '--bind', dest='address', help='The hostname:port the app should listen on.') parser.add_option('-d', '--debug', dest='use_debugger', action='store_true', default=False, help='Use Werkzeug\'s debugger.') parser.add_option('-r', '--reload', dest='use_reloader', action='store_true', default=False, help='Reload Python process if modules change.') options, args = parser.parse_args() hostname, port = None, None if options.address: address = options.address.split(':') hostname = address[0] if len(address) > 1: port = address[1] if len(args) != 1: sys.stdout.write('No application supplied, or too much. See --help\n') sys.exit(1) app = import_string(args[0]) run_simple( hostname=(hostname or '127.0.0.1'), port=int(port or 5000), application=app, use_reloader=options.use_reloader, use_debugger=options.use_debugger ) if __name__ == '__main__': main() werkzeug-0.14.1/werkzeug/test.py000066400000000000000000001064041322225165500166440ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.test ~~~~~~~~~~~~~ This module implements a client to WSGI applications for testing. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
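A minimal round trip (``app`` stands in for any WSGI application)::

    from werkzeug.test import Client
    client = Client(app)
    app_iter, status, headers = client.get('/')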
""" import sys import mimetypes from time import time from random import random from itertools import chain from tempfile import TemporaryFile from io import BytesIO try: from urllib2 import Request as U2Request except ImportError: from urllib.request import Request as U2Request try: from http.cookiejar import CookieJar except ImportError: # Py2 from cookielib import CookieJar from werkzeug._compat import iterlists, iteritems, itervalues, to_bytes, \ string_types, text_type, reraise, wsgi_encoding_dance, \ make_literal_wrapper from werkzeug._internal import _empty_stream, _get_environ from werkzeug.wrappers import BaseRequest from werkzeug.urls import url_encode, url_fix, iri_to_uri, url_unquote, \ url_unparse, url_parse from werkzeug.wsgi import get_host, get_current_url, ClosingIterator from werkzeug.utils import dump_cookie, get_content_type from werkzeug.datastructures import FileMultiDict, MultiDict, \ CombinedMultiDict, Headers, FileStorage, CallbackDict from werkzeug.http import dump_options_header, parse_options_header def stream_encode_multipart(values, use_tempfile=True, threshold=1024 * 500, boundary=None, charset='utf-8'): """Encode a dict of values (either strings or file descriptors or :class:`FileStorage` objects.) into a multipart encoded string stored in a file descriptor. """ if boundary is None: boundary = '---------------WerkzeugFormPart_%s%s' % (time(), random()) _closure = [BytesIO(), 0, False] if use_tempfile: def write_binary(string): stream, total_length, on_disk = _closure if on_disk: stream.write(string) else: length = len(string) if length + _closure[1] <= threshold: stream.write(string) else: new_stream = TemporaryFile('wb+') new_stream.write(stream.getvalue()) new_stream.write(string) _closure[0] = new_stream _closure[2] = True _closure[1] = total_length + length else: write_binary = _closure[0].write def write(string): write_binary(string.encode(charset)) if not isinstance(values, MultiDict): values = MultiDict(values) for key, values in iterlists(values): for value in values: write('--%s\r\nContent-Disposition: form-data; name="%s"' % (boundary, key)) reader = getattr(value, 'read', None) if reader is not None: filename = getattr(value, 'filename', getattr(value, 'name', None)) content_type = getattr(value, 'content_type', None) if content_type is None: content_type = filename and \ mimetypes.guess_type(filename)[0] or \ 'application/octet-stream' if filename is not None: write('; filename="%s"\r\n' % filename) else: write('\r\n') write('Content-Type: %s\r\n\r\n' % content_type) while 1: chunk = reader(16384) if not chunk: break write_binary(chunk) else: if not isinstance(value, string_types): value = str(value) value = to_bytes(value, charset) write('\r\n\r\n') write_binary(value) write('\r\n') write('--%s--\r\n' % boundary) length = int(_closure[0].tell()) _closure[0].seek(0) return _closure[0], length, boundary def encode_multipart(values, boundary=None, charset='utf-8'): """Like `stream_encode_multipart` but returns a tuple in the form (``boundary``, ``data``) where data is a bytestring. 
""" stream, length, boundary = stream_encode_multipart( values, use_tempfile=False, boundary=boundary, charset=charset) return boundary, stream.read() def File(fd, filename=None, mimetype=None): """Backwards compat.""" from warnings import warn warn(DeprecationWarning('werkzeug.test.File is deprecated, use the ' 'EnvironBuilder or FileStorage instead')) return FileStorage(fd, filename=filename, content_type=mimetype) class _TestCookieHeaders(object): """A headers adapter for cookielib """ def __init__(self, headers): self.headers = headers def getheaders(self, name): headers = [] name = name.lower() for k, v in self.headers: if k.lower() == name: headers.append(v) return headers def get_all(self, name, default=None): rv = [] for k, v in self.headers: if k.lower() == name.lower(): rv.append(v) return rv or default or [] class _TestCookieResponse(object): """Something that looks like a httplib.HTTPResponse, but is actually just an adapter for our test responses to make them available for cookielib. """ def __init__(self, headers): self.headers = _TestCookieHeaders(headers) def info(self): return self.headers class _TestCookieJar(CookieJar): """A cookielib.CookieJar modified to inject and read cookie headers from and to wsgi environments, and wsgi application responses. """ def inject_wsgi(self, environ): """Inject the cookies as client headers into the server's wsgi environment. """ cvals = [] for cookie in self: cvals.append('%s=%s' % (cookie.name, cookie.value)) if cvals: environ['HTTP_COOKIE'] = '; '.join(cvals) def extract_wsgi(self, environ, headers): """Extract the server's set-cookie headers as cookies into the cookie jar. """ self.extract_cookies( _TestCookieResponse(headers), U2Request(get_current_url(environ)), ) def _iter_data(data): """Iterates over a `dict` or :class:`MultiDict` yielding all keys and values. This is used to iterate over the data passed to the :class:`EnvironBuilder`. """ if isinstance(data, MultiDict): for key, values in iterlists(data): for value in values: yield key, value else: for key, values in iteritems(data): if isinstance(values, list): for value in values: yield key, value else: yield key, values class EnvironBuilder(object): """This class can be used to conveniently create a WSGI environment for testing purposes. It can be used to quickly create WSGI environments or request objects from arbitrary data. The signature of this class is also used in some other places as of Werkzeug 0.5 (:func:`create_environ`, :meth:`BaseResponse.from_values`, :meth:`Client.open`). Because of this most of the functionality is available through the constructor alone. Files and regular form data can be manipulated independently of each other with the :attr:`form` and :attr:`files` attributes, but are passed with the same argument to the constructor: `data`. `data` can be any of these values: - a `str` or `bytes` object: The object is converted into an :attr:`input_stream`, the :attr:`content_length` is set and you have to provide a :attr:`content_type`. - a `dict` or :class:`MultiDict`: The keys have to be strings. The values have to be either any of the following objects, or a list of any of the following objects: - a :class:`file`-like object: These are converted into :class:`FileStorage` objects automatically. - a `tuple`: The :meth:`~FileMultiDict.add_file` method is called with the key and the unpacked `tuple` items as positional arguments. - a `str`: The string is set as form data for the associated key. 
- a file-like object: The object content is loaded in memory and then handled like a regular `str` or a `bytes`. .. versionadded:: 0.6 `path` and `base_url` can now be unicode strings that are encoded using the :func:`iri_to_uri` function. :param path: the path of the request. In the WSGI environment this will end up as `PATH_INFO`. If the `query_string` is not defined and there is a question mark in the `path` everything after it is used as query string. :param base_url: the base URL is a URL that is used to extract the WSGI URL scheme, host (server name + server port) and the script root (`SCRIPT_NAME`). :param query_string: an optional string or dict with URL parameters. :param method: the HTTP method to use, defaults to `GET`. :param input_stream: an optional input stream. Do not specify this and `data`. As soon as an input stream is set you can't modify :attr:`args` and :attr:`files` unless you set the :attr:`input_stream` to `None` again. :param content_type: The content type for the request. As of 0.5 you don't have to provide this when specifying files and form data via `data`. :param content_length: The content length for the request. You don't have to specify this when providing data via `data`. :param errors_stream: an optional error stream that is used for `wsgi.errors`. Defaults to :data:`stderr`. :param multithread: controls `wsgi.multithread`. Defaults to `False`. :param multiprocess: controls `wsgi.multiprocess`. Defaults to `False`. :param run_once: controls `wsgi.run_once`. Defaults to `False`. :param headers: an optional list or :class:`Headers` object of headers. :param data: a string or dict of form data or a file-object. See explanation above. :param environ_base: an optional dict of environment defaults. :param environ_overrides: an optional dict of environment overrides. :param charset: the charset used to encode unicode data. """ #: the server protocol to use. defaults to HTTP/1.1 server_protocol = 'HTTP/1.1' #: the wsgi version to use. 
defaults to (1, 0) wsgi_version = (1, 0) #: the default request class for :meth:`get_request` request_class = BaseRequest def __init__(self, path='/', base_url=None, query_string=None, method='GET', input_stream=None, content_type=None, content_length=None, errors_stream=None, multithread=False, multiprocess=False, run_once=False, headers=None, data=None, environ_base=None, environ_overrides=None, charset='utf-8', mimetype=None): path_s = make_literal_wrapper(path) if query_string is None and path_s('?') in path: path, query_string = path.split(path_s('?'), 1) self.charset = charset self.path = iri_to_uri(path) if base_url is not None: base_url = url_fix(iri_to_uri(base_url, charset), charset) self.base_url = base_url if isinstance(query_string, (bytes, text_type)): self.query_string = query_string else: if query_string is None: query_string = MultiDict() elif not isinstance(query_string, MultiDict): query_string = MultiDict(query_string) self.args = query_string self.method = method if headers is None: headers = Headers() elif not isinstance(headers, Headers): headers = Headers(headers) self.headers = headers if content_type is not None: self.content_type = content_type if errors_stream is None: errors_stream = sys.stderr self.errors_stream = errors_stream self.multithread = multithread self.multiprocess = multiprocess self.run_once = run_once self.environ_base = environ_base self.environ_overrides = environ_overrides self.input_stream = input_stream self.content_length = content_length self.closed = False if data: if input_stream is not None: raise TypeError('can\'t provide input stream and data') if hasattr(data, 'read'): data = data.read() if isinstance(data, text_type): data = data.encode(self.charset) if isinstance(data, bytes): self.input_stream = BytesIO(data) if self.content_length is None: self.content_length = len(data) else: for key, value in _iter_data(data): if isinstance(value, (tuple, dict)) or \ hasattr(value, 'read'): self._add_file_from_data(key, value) else: self.form.setlistdefault(key).append(value) if mimetype is not None: self.mimetype = mimetype def _add_file_from_data(self, key, value): """Called in the EnvironBuilder to add files from the data dict.""" if isinstance(value, tuple): self.files.add_file(key, *value) elif isinstance(value, dict): from warnings import warn warn(DeprecationWarning('it\'s no longer possible to pass dicts ' 'as `data`. 
Use tuples or FileStorage ' 'objects instead'), stacklevel=2) value = dict(value) mimetype = value.pop('mimetype', None) if mimetype is not None: value['content_type'] = mimetype self.files.add_file(key, **value) else: self.files.add_file(key, value) def _get_base_url(self): return url_unparse((self.url_scheme, self.host, self.script_root, '', '')).rstrip('/') + '/' def _set_base_url(self, value): if value is None: scheme = 'http' netloc = 'localhost' script_root = '' else: scheme, netloc, script_root, qs, anchor = url_parse(value) if qs or anchor: raise ValueError('base url must not contain a query string ' 'or fragment') self.script_root = script_root.rstrip('/') self.host = netloc self.url_scheme = scheme base_url = property(_get_base_url, _set_base_url, doc=''' The base URL is a URL that is used to extract the WSGI URL scheme, host (server name + server port) and the script root (`SCRIPT_NAME`).''') del _get_base_url, _set_base_url def _get_content_type(self): ct = self.headers.get('Content-Type') if ct is None and not self._input_stream: if self._files: return 'multipart/form-data' elif self._form: return 'application/x-www-form-urlencoded' return None return ct def _set_content_type(self, value): if value is None: self.headers.pop('Content-Type', None) else: self.headers['Content-Type'] = value content_type = property(_get_content_type, _set_content_type, doc=''' The content type for the request. Reflected from and to the :attr:`headers`. Do not set if you set :attr:`files` or :attr:`form` for auto detection.''') del _get_content_type, _set_content_type def _get_content_length(self): return self.headers.get('Content-Length', type=int) def _get_mimetype(self): ct = self.content_type if ct: return ct.split(';')[0].strip() def _set_mimetype(self, value): self.content_type = get_content_type(value, self.charset) def _get_mimetype_params(self): def on_update(d): self.headers['Content-Type'] = \ dump_options_header(self.mimetype, d) d = parse_options_header(self.headers.get('content-type', ''))[1] return CallbackDict(d, on_update) mimetype = property(_get_mimetype, _set_mimetype, doc=''' The mimetype (content type without charset etc.) .. versionadded:: 0.14 ''') mimetype_params = property(_get_mimetype_params, doc=''' The mimetype parameters as dict. For example if the content type is ``text/html; charset=utf-8`` the params would be ``{'charset': 'utf-8'}``. .. versionadded:: 0.14 ''') del _get_mimetype, _set_mimetype, _get_mimetype_params def _set_content_length(self, value): if value is None: self.headers.pop('Content-Length', None) else: self.headers['Content-Length'] = str(value) content_length = property(_get_content_length, _set_content_length, doc=''' The content length as integer. Reflected from and to the :attr:`headers`. Do not set if you set :attr:`files` or :attr:`form` for auto detection.''') del _get_content_length, _set_content_length def form_property(name, storage, doc): key = '_' + name def getter(self): if self._input_stream is not None: raise AttributeError('an input stream is defined') rv = getattr(self, key) if rv is None: rv = storage() setattr(self, key, rv) return rv def setter(self, value): self._input_stream = None setattr(self, key, value) return property(getter, setter, doc=doc) form = form_property('form', MultiDict, doc=''' A :class:`MultiDict` of form values.''') files = form_property('files', FileMultiDict, doc=''' A :class:`FileMultiDict` of uploaded files. 
You can use the :meth:`~FileMultiDict.add_file` method to add new files to the dict.''') del form_property def _get_input_stream(self): return self._input_stream def _set_input_stream(self, value): self._input_stream = value self._form = self._files = None input_stream = property(_get_input_stream, _set_input_stream, doc=''' An optional input stream. If you set this it will clear :attr:`form` and :attr:`files`.''') del _get_input_stream, _set_input_stream def _get_query_string(self): if self._query_string is None: if self._args is not None: return url_encode(self._args, charset=self.charset) return '' return self._query_string def _set_query_string(self, value): self._query_string = value self._args = None query_string = property(_get_query_string, _set_query_string, doc=''' The query string. If you set this to a string :attr:`args` will no longer be available.''') del _get_query_string, _set_query_string def _get_args(self): if self._query_string is not None: raise AttributeError('a query string is defined') if self._args is None: self._args = MultiDict() return self._args def _set_args(self, value): self._query_string = None self._args = value args = property(_get_args, _set_args, doc=''' The URL arguments as :class:`MultiDict`.''') del _get_args, _set_args @property def server_name(self): """The server name (read-only, use :attr:`host` to set)""" return self.host.split(':', 1)[0] @property def server_port(self): """The server port as integer (read-only, use :attr:`host` to set)""" pieces = self.host.split(':', 1) if len(pieces) == 2 and pieces[1].isdigit(): return int(pieces[1]) elif self.url_scheme == 'https': return 443 return 80 def __del__(self): try: self.close() except Exception: pass def close(self): """Closes all files. If you put real :class:`file` objects into the :attr:`files` dict you can call this method to automatically close them all in one go. 
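A sketch (``example.txt`` is only a placeholder file name)::

    builder = EnvironBuilder(method='POST',
                             data={'file': open('example.txt', 'rb')})
    try:
        environ = builder.get_environ()
    finally:
        builder.close()  # closes the file passed in through ``data``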
""" if self.closed: return try: files = itervalues(self.files) except AttributeError: files = () for f in files: try: f.close() except Exception: pass self.closed = True def get_environ(self): """Return the built environ.""" input_stream = self.input_stream content_length = self.content_length mimetype = self.mimetype content_type = self.content_type if input_stream is not None: start_pos = input_stream.tell() input_stream.seek(0, 2) end_pos = input_stream.tell() input_stream.seek(start_pos) content_length = end_pos - start_pos elif mimetype == 'multipart/form-data': values = CombinedMultiDict([self.form, self.files]) input_stream, content_length, boundary = \ stream_encode_multipart(values, charset=self.charset) content_type = mimetype + '; boundary="%s"' % boundary elif mimetype == 'application/x-www-form-urlencoded': # XXX: py2v3 review values = url_encode(self.form, charset=self.charset) values = values.encode('ascii') content_length = len(values) input_stream = BytesIO(values) else: input_stream = _empty_stream result = {} if self.environ_base: result.update(self.environ_base) def _path_encode(x): return wsgi_encoding_dance(url_unquote(x, self.charset), self.charset) qs = wsgi_encoding_dance(self.query_string) result.update({ 'REQUEST_METHOD': self.method, 'SCRIPT_NAME': _path_encode(self.script_root), 'PATH_INFO': _path_encode(self.path), 'QUERY_STRING': qs, 'SERVER_NAME': self.server_name, 'SERVER_PORT': str(self.server_port), 'HTTP_HOST': self.host, 'SERVER_PROTOCOL': self.server_protocol, 'CONTENT_TYPE': content_type or '', 'CONTENT_LENGTH': str(content_length or '0'), 'wsgi.version': self.wsgi_version, 'wsgi.url_scheme': self.url_scheme, 'wsgi.input': input_stream, 'wsgi.errors': self.errors_stream, 'wsgi.multithread': self.multithread, 'wsgi.multiprocess': self.multiprocess, 'wsgi.run_once': self.run_once }) for key, value in self.headers.to_wsgi_list(): result['HTTP_%s' % key.upper().replace('-', '_')] = value if self.environ_overrides: result.update(self.environ_overrides) return result def get_request(self, cls=None): """Returns a request with the data. If the request class is not specified :attr:`request_class` is used. :param cls: The request wrapper to use. """ if cls is None: cls = self.request_class return cls(self.get_environ()) class ClientRedirectError(Exception): """ If a redirect loop is detected when using follow_redirects=True with the :cls:`Client`, then this exception is raised. """ class Client(object): """This class allows to send requests to a wrapped application. The response wrapper can be a class or factory function that takes three arguments: app_iter, status and headers. The default response wrapper just returns a tuple. Example:: class ClientResponse(BaseResponse): ... client = Client(MyApplication(), response_wrapper=ClientResponse) The use_cookies parameter indicates whether cookies should be stored and sent for subsequent requests. This is True by default, but passing False will disable this behaviour. If you want to request some subdomain of your application you may set `allow_subdomain_redirects` to `True` as if not no external redirects are allowed. .. versionadded:: 0.5 `use_cookies` is new in this version. Older versions did not provide builtin cookie support. .. versionadded:: 0.14 The `mimetype` parameter was added. 
""" def __init__(self, application, response_wrapper=None, use_cookies=True, allow_subdomain_redirects=False): self.application = application self.response_wrapper = response_wrapper if use_cookies: self.cookie_jar = _TestCookieJar() else: self.cookie_jar = None self.allow_subdomain_redirects = allow_subdomain_redirects def set_cookie(self, server_name, key, value='', max_age=None, expires=None, path='/', domain=None, secure=None, httponly=False, charset='utf-8'): """Sets a cookie in the client's cookie jar. The server name is required and has to match the one that is also passed to the open call. """ assert self.cookie_jar is not None, 'cookies disabled' header = dump_cookie(key, value, max_age, expires, path, domain, secure, httponly, charset) environ = create_environ(path, base_url='http://' + server_name) headers = [('Set-Cookie', header)] self.cookie_jar.extract_wsgi(environ, headers) def delete_cookie(self, server_name, key, path='/', domain=None): """Deletes a cookie in the test client.""" self.set_cookie(server_name, key, expires=0, max_age=0, path=path, domain=domain) def run_wsgi_app(self, environ, buffered=False): """Runs the wrapped WSGI app with the given environment.""" if self.cookie_jar is not None: self.cookie_jar.inject_wsgi(environ) rv = run_wsgi_app(self.application, environ, buffered=buffered) if self.cookie_jar is not None: self.cookie_jar.extract_wsgi(environ, rv[2]) return rv def resolve_redirect(self, response, new_location, environ, buffered=False): """Resolves a single redirect and triggers the request again directly on this redirect client. """ scheme, netloc, script_root, qs, anchor = url_parse(new_location) base_url = url_unparse((scheme, netloc, '', '', '')).rstrip('/') + '/' cur_server_name = netloc.split(':', 1)[0].split('.') real_server_name = get_host(environ).rsplit(':', 1)[0].split('.') if cur_server_name == ['']: # this is a local redirect having autocorrect_location_header=False cur_server_name = real_server_name base_url = EnvironBuilder(environ).base_url if self.allow_subdomain_redirects: allowed = cur_server_name[-len(real_server_name):] == real_server_name else: allowed = cur_server_name == real_server_name if not allowed: raise RuntimeError('%r does not support redirect to ' 'external targets' % self.__class__) status_code = int(response[1].split(None, 1)[0]) if status_code == 307: method = environ['REQUEST_METHOD'] else: method = 'GET' # For redirect handling we temporarily disable the response # wrapper. This is not threadsafe but not a real concern # since the test client must not be shared anyways. old_response_wrapper = self.response_wrapper self.response_wrapper = None try: return self.open(path=script_root, base_url=base_url, query_string=qs, as_tuple=True, buffered=buffered, method=method) finally: self.response_wrapper = old_response_wrapper def open(self, *args, **kwargs): """Takes the same arguments as the :class:`EnvironBuilder` class with some additions: You can provide a :class:`EnvironBuilder` or a WSGI environment as only argument instead of the :class:`EnvironBuilder` arguments and two optional keyword arguments (`as_tuple`, `buffered`) that change the type of the return value or the way the application is executed. .. versionchanged:: 0.5 If a dict is provided as file in the dict for the `data` parameter the content type has to be called `content_type` now instead of `mimetype`. This change was made for consistency with :class:`werkzeug.FileWrapper`. The `follow_redirects` parameter was added to :func:`open`. 
Additional parameters: :param as_tuple: Returns a tuple in the form ``(environ, result)`` :param buffered: Set this to True to buffer the application run. This will automatically close the application for you as well. :param follow_redirects: Set this to True if the `Client` should follow HTTP redirects. """ as_tuple = kwargs.pop('as_tuple', False) buffered = kwargs.pop('buffered', False) follow_redirects = kwargs.pop('follow_redirects', False) environ = None if not kwargs and len(args) == 1: if isinstance(args[0], EnvironBuilder): environ = args[0].get_environ() elif isinstance(args[0], dict): environ = args[0] if environ is None: builder = EnvironBuilder(*args, **kwargs) try: environ = builder.get_environ() finally: builder.close() response = self.run_wsgi_app(environ, buffered=buffered) # handle redirects redirect_chain = [] while 1: status_code = int(response[1].split(None, 1)[0]) if status_code not in (301, 302, 303, 305, 307) \ or not follow_redirects: break new_location = response[2]['location'] new_redirect_entry = (new_location, status_code) if new_redirect_entry in redirect_chain: raise ClientRedirectError('loop detected') redirect_chain.append(new_redirect_entry) environ, response = self.resolve_redirect(response, new_location, environ, buffered=buffered) if self.response_wrapper is not None: response = self.response_wrapper(*response) if as_tuple: return environ, response return response def get(self, *args, **kw): """Like open but method is enforced to GET.""" kw['method'] = 'GET' return self.open(*args, **kw) def patch(self, *args, **kw): """Like open but method is enforced to PATCH.""" kw['method'] = 'PATCH' return self.open(*args, **kw) def post(self, *args, **kw): """Like open but method is enforced to POST.""" kw['method'] = 'POST' return self.open(*args, **kw) def head(self, *args, **kw): """Like open but method is enforced to HEAD.""" kw['method'] = 'HEAD' return self.open(*args, **kw) def put(self, *args, **kw): """Like open but method is enforced to PUT.""" kw['method'] = 'PUT' return self.open(*args, **kw) def delete(self, *args, **kw): """Like open but method is enforced to DELETE.""" kw['method'] = 'DELETE' return self.open(*args, **kw) def options(self, *args, **kw): """Like open but method is enforced to OPTIONS.""" kw['method'] = 'OPTIONS' return self.open(*args, **kw) def trace(self, *args, **kw): """Like open but method is enforced to TRACE.""" kw['method'] = 'TRACE' return self.open(*args, **kw) def __repr__(self): return '<%s %r>' % ( self.__class__.__name__, self.application ) def create_environ(*args, **kwargs): """Create a new WSGI environ dict based on the values passed. The first parameter should be the path of the request which defaults to '/'. The second one can either be an absolute path (in that case the host is localhost:80) or a full path to the request with scheme, netloc port and the path to the script. This accepts the same arguments as the :class:`EnvironBuilder` constructor. .. versionchanged:: 0.5 This function is now a thin wrapper over :class:`EnvironBuilder` which was added in 0.5. The `headers`, `environ_base`, `environ_overrides` and `charset` parameters were added. """ builder = EnvironBuilder(*args, **kwargs) try: return builder.get_environ() finally: builder.close() def run_wsgi_app(app, environ, buffered=False): """Return a tuple in the form (app_iter, status, headers) of the application output. This works best if you pass it an application that returns an iterator all the time. 
Sometimes applications may use the `write()` callable returned by the `start_response` function. This tries to resolve such edge cases automatically. But if you don't get the expected output you should set `buffered` to `True` which enforces buffering. If passed an invalid WSGI application the behavior of this function is undefined. Never pass non-conforming WSGI applications to this function. :param app: the application to execute. :param buffered: set to `True` to enforce buffering. :return: tuple in the form ``(app_iter, status, headers)`` """ environ = _get_environ(environ) response = [] buffer = [] def start_response(status, headers, exc_info=None): if exc_info is not None: reraise(*exc_info) response[:] = [status, headers] return buffer.append app_rv = app(environ, start_response) close_func = getattr(app_rv, 'close', None) app_iter = iter(app_rv) # when buffering we emit the close call early and convert the # application iterator into a regular list if buffered: try: app_iter = list(app_iter) finally: if close_func is not None: close_func() # otherwise we iterate the application iter until we have a response, chain # the already received data with the already collected data and wrap it in # a new `ClosingIterator` if we need to restore a `close` callable from the # original return value. else: while not response: buffer.append(next(app_iter)) if buffer: app_iter = chain(buffer, app_iter) if close_func is not None and app_iter is not app_rv: app_iter = ClosingIterator(app_iter, close_func) return app_iter, response[0], Headers(response[1]) werkzeug-0.14.1/werkzeug/testapp.py000066400000000000000000000222641322225165500173460ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.testapp ~~~~~~~~~~~~~~~~ Provide a small test application that can be used to test a WSGI server and check it for WSGI compliance. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
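A quick way to try it from the command line (a sketch; this simply runs the
module's ``__main__`` block with its defaults)::

    python -m werkzeug.testapp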
""" import os import sys import werkzeug from textwrap import wrap from werkzeug.wrappers import BaseRequest as Request, BaseResponse as Response from werkzeug.utils import escape import base64 logo = Response(base64.b64decode(''' R0lGODlhoACgAOMIAAEDACwpAEpCAGdgAJaKAM28AOnVAP3rAP///////// //////////////////////yH5BAEKAAgALAAAAACgAKAAAAT+EMlJq704680R+F0ojmRpnuj0rWnrv nB8rbRs33gu0bzu/0AObxgsGn3D5HHJbCUFyqZ0ukkSDlAidctNFg7gbI9LZlrBaHGtzAae0eloe25 7w9EDOX2fst/xenyCIn5/gFqDiVVDV4aGeYiKkhSFjnCQY5OTlZaXgZp8nJ2ekaB0SQOjqphrpnOiq ncEn65UsLGytLVmQ6m4sQazpbtLqL/HwpnER8bHyLrLOc3Oz8PRONPU1crXN9na263dMt/g4SzjMeX m5yDpLqgG7OzJ4u8lT/P69ej3JPn69kHzN2OIAHkB9RUYSFCFQYQJFTIkCDBiwoXWGnowaLEjRm7+G p9A7Hhx4rUkAUaSLJlxHMqVMD/aSycSZkyTplCqtGnRAM5NQ1Ly5OmzZc6gO4d6DGAUKA+hSocWYAo SlM6oUWX2O/o0KdaVU5vuSQLAa0ADwQgMEMB2AIECZhVSnTno6spgbtXmHcBUrQACcc2FrTrWS8wAf 78cMFBgwIBgbN+qvTt3ayikRBk7BoyGAGABAdYyfdzRQGV3l4coxrqQ84GpUBmrdR3xNIDUPAKDBSA ADIGDhhqTZIWaDcrVX8EsbNzbkvCOxG8bN5w8ly9H8jyTJHC6DFndQydbguh2e/ctZJFXRxMAqqPVA tQH5E64SPr1f0zz7sQYjAHg0In+JQ11+N2B0XXBeeYZgBZFx4tqBToiTCPv0YBgQv8JqA6BEf6RhXx w1ENhRBnWV8ctEX4Ul2zc3aVGcQNC2KElyTDYyYUWvShdjDyMOGMuFjqnII45aogPhz/CodUHFwaDx lTgsaOjNyhGWJQd+lFoAGk8ObghI0kawg+EV5blH3dr+digkYuAGSaQZFHFz2P/cTaLmhF52QeSb45 Jwxd+uSVGHlqOZpOeJpCFZ5J+rkAkFjQ0N1tah7JJSZUFNsrkeJUJMIBi8jyaEKIhKPomnC91Uo+NB yyaJ5umnnpInIFh4t6ZSpGaAVmizqjpByDegYl8tPE0phCYrhcMWSv+uAqHfgH88ak5UXZmlKLVJhd dj78s1Fxnzo6yUCrV6rrDOkluG+QzCAUTbCwf9SrmMLzK6p+OPHx7DF+bsfMRq7Ec61Av9i6GLw23r idnZ+/OO0a99pbIrJkproCQMA17OPG6suq3cca5ruDfXCCDoS7BEdvmJn5otdqscn+uogRHHXs8cbh EIfYaDY1AkrC0cqwcZpnM6ludx72x0p7Fo/hZAcpJDjax0UdHavMKAbiKltMWCF3xxh9k25N/Viud8 ba78iCvUkt+V6BpwMlErmcgc502x+u1nSxJSJP9Mi52awD1V4yB/QHONsnU3L+A/zR4VL/indx/y64 gqcj+qgTeweM86f0Qy1QVbvmWH1D9h+alqg254QD8HJXHvjQaGOqEqC22M54PcftZVKVSQG9jhkv7C JyTyDoAJfPdu8v7DRZAxsP/ky9MJ3OL36DJfCFPASC3/aXlfLOOON9vGZZHydGf8LnxYJuuVIbl83y Az5n/RPz07E+9+zw2A2ahz4HxHo9Kt79HTMx1Q7ma7zAzHgHqYH0SoZWyTuOLMiHwSfZDAQTn0ajk9 YQqodnUYjByQZhZak9Wu4gYQsMyEpIOAOQKze8CmEF45KuAHTvIDOfHJNipwoHMuGHBnJElUoDmAyX c2Qm/R8Ah/iILCCJOEokGowdhDYc/yoL+vpRGwyVSCWFYZNljkhEirGXsalWcAgOdeAdoXcktF2udb qbUhjWyMQxYO01o6KYKOr6iK3fE4MaS+DsvBsGOBaMb0Y6IxADaJhFICaOLmiWTlDAnY1KzDG4ambL cWBA8mUzjJsN2KjSaSXGqMCVXYpYkj33mcIApyhQf6YqgeNAmNvuC0t4CsDbSshZJkCS1eNisKqlyG cF8G2JeiDX6tO6Mv0SmjCa3MFb0bJaGPMU0X7c8XcpvMaOQmCajwSeY9G0WqbBmKv34DsMIEztU6Y2 KiDlFdt6jnCSqx7Dmt6XnqSKaFFHNO5+FmODxMCWBEaco77lNDGXBM0ECYB/+s7nKFdwSF5hgXumQe EZ7amRg39RHy3zIjyRCykQh8Zo2iviRKyTDn/zx6EefptJj2Cw+Ep2FSc01U5ry4KLPYsTyWnVGnvb UpyGlhjBUljyjHhWpf8OFaXwhp9O4T1gU9UeyPPa8A2l0p1kNqPXEVRm1AOs1oAGZU596t6SOR2mcB Oco1srWtkaVrMUzIErrKri85keKqRQYX9VX0/eAUK1hrSu6HMEX3Qh2sCh0q0D2CtnUqS4hj62sE/z aDs2Sg7MBS6xnQeooc2R2tC9YrKpEi9pLXfYXp20tDCpSP8rKlrD4axprb9u1Df5hSbz9QU0cRpfgn kiIzwKucd0wsEHlLpe5yHXuc6FrNelOl7pY2+11kTWx7VpRu97dXA3DO1vbkhcb4zyvERYajQgAADs ='''), mimetype='image/png') TEMPLATE = u'''\ WSGI Information

WSGI Information

This page displays all available information about the WSGI server and the underlying Python interpreter.

Python Interpreter

Python Version %(python_version)s
Platform %(platform)s [%(os)s]
API Version %(api_version)s
Byteorder %(byteorder)s
Werkzeug Version %(werkzeug_version)s

WSGI Environment

%(wsgi_env)s

Installed Eggs

The following Python packages are installed on the system as Python eggs:

    %(python_eggs)s

System Path

The following paths are the current contents of the load path; these entries are searched for Python packages. Note that not all items in this path are folders. Gray and underlined items point to invalid resources or are used by custom import hooks such as the zip importer.

Items with a bright background were expanded for display from a relative path. If you encounter such paths in the output you might want to check your setup, as relative paths are usually problematic in multithreaded environments.

    %(sys_path)s
''' def iter_sys_path(): if os.name == 'posix': def strip(x): prefix = os.path.expanduser('~') if x.startswith(prefix): x = '~' + x[len(prefix):] return x else: strip = lambda x: x cwd = os.path.abspath(os.getcwd()) for item in sys.path: path = os.path.join(cwd, item or os.path.curdir) yield strip(os.path.normpath(path)), \ not os.path.isdir(path), path != item def render_testapp(req): try: import pkg_resources except ImportError: eggs = () else: eggs = sorted(pkg_resources.working_set, key=lambda x: x.project_name.lower()) python_eggs = [] for egg in eggs: try: version = egg.version except (ValueError, AttributeError): version = 'unknown' python_eggs.append('
  • %s [%s]' % ( escape(egg.project_name), escape(version) )) wsgi_env = [] sorted_environ = sorted(req.environ.items(), key=lambda x: repr(x[0]).lower()) for key, value in sorted_environ: wsgi_env.append('%s%s' % ( escape(str(key)), ' '.join(wrap(escape(repr(value)))) )) sys_path = [] for item, virtual, expanded in iter_sys_path(): class_ = [] if virtual: class_.append('virtual') if expanded: class_.append('exp') sys_path.append('%s' % ( class_ and ' class="%s"' % ' '.join(class_) or '', escape(item) )) return (TEMPLATE % { 'python_version': '
    '.join(escape(sys.version).splitlines()), 'platform': escape(sys.platform), 'os': escape(os.name), 'api_version': sys.api_version, 'byteorder': sys.byteorder, 'werkzeug_version': werkzeug.__version__, 'python_eggs': '\n'.join(python_eggs), 'wsgi_env': '\n'.join(wsgi_env), 'sys_path': '\n'.join(sys_path) }).encode('utf-8') def test_app(environ, start_response): """Simple test application that dumps the environment. You can use it to check if Werkzeug is working properly: .. sourcecode:: pycon >>> from werkzeug.serving import run_simple >>> from werkzeug.testapp import test_app >>> run_simple('localhost', 3000, test_app) * Running on http://localhost:3000/ The application displays important information from the WSGI environment, the Python interpreter and the installed libraries. """ req = Request(environ, populate_request=False) if req.args.get('resource') == 'logo': response = logo else: response = Response(render_testapp(req), mimetype='text/html') return response(environ, start_response) if __name__ == '__main__': from werkzeug.serving import run_simple run_simple('localhost', 5000, test_app, use_reloader=True) werkzeug-0.14.1/werkzeug/urls.py000066400000000000000000001076431322225165500166600ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.urls ~~~~~~~~~~~~~ ``werkzeug.urls`` used to provide several wrapper functions for Python 2 urlparse, whose main purpose were to work around the behavior of the Py2 stdlib and its lack of unicode support. While this was already a somewhat inconvenient situation, it got even more complicated because Python 3's ``urllib.parse`` actually does handle unicode properly. In other words, this module would wrap two libraries with completely different behavior. So now this module contains a 2-and-3-compatible backport of Python 3's ``urllib.parse``, which is mostly API-compatible. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import os import re from werkzeug._compat import text_type, PY2, to_unicode, \ to_native, implements_to_string, try_coerce_native, \ normalize_string_tuple, make_literal_wrapper, \ fix_tuple_repr from werkzeug._internal import _encode_idna, _decode_idna from werkzeug.datastructures import MultiDict, iter_multi_items from collections import namedtuple # A regular expression for what a valid schema looks like _scheme_re = re.compile(r'^[a-zA-Z0-9+-.]+$') # Characters that are safe in any part of an URL. _always_safe = (b'abcdefghijklmnopqrstuvwxyz' b'ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789_.-+') _hexdigits = '0123456789ABCDEFabcdef' _hextobyte = dict( ((a + b).encode(), int(a + b, 16)) for a in _hexdigits for b in _hexdigits ) _bytetohex = [ ('%%%02X' % char).encode('ascii') for char in range(256) ] _URLTuple = fix_tuple_repr(namedtuple( '_URLTuple', ['scheme', 'netloc', 'path', 'query', 'fragment'] )) class BaseURL(_URLTuple): '''Superclass of :py:class:`URL` and :py:class:`BytesURL`.''' __slots__ = () def replace(self, **kwargs): """Return an URL with the same values, except for those parameters given new values by whichever keyword arguments are specified.""" return self._replace(**kwargs) @property def host(self): """The host part of the URL if available, otherwise `None`. The host is either the hostname or the IP address mentioned in the URL. It will not contain the port. """ return self._split_host()[0] @property def ascii_host(self): """Works exactly like :attr:`host` but will return a result that is restricted to ASCII. 
If it finds a netloc that is not ASCII it will attempt to idna decode it. This is useful for socket operations when the URL might include internationalized characters. """ rv = self.host if rv is not None and isinstance(rv, text_type): try: rv = _encode_idna(rv) except UnicodeError: rv = rv.encode('ascii', 'ignore') return to_native(rv, 'ascii', 'ignore') @property def port(self): """The port in the URL as an integer if it was present, `None` otherwise. This does not fill in default ports. """ try: rv = int(to_native(self._split_host()[1])) if 0 <= rv <= 65535: return rv except (ValueError, TypeError): pass @property def auth(self): """The authentication part in the URL if available, `None` otherwise. """ return self._split_netloc()[0] @property def username(self): """The username if it was part of the URL, `None` otherwise. This undergoes URL decoding and will always be a unicode string. """ rv = self._split_auth()[0] if rv is not None: return _url_unquote_legacy(rv) @property def raw_username(self): """The username if it was part of the URL, `None` otherwise. Unlike :attr:`username` this one is not being decoded. """ return self._split_auth()[0] @property def password(self): """The password if it was part of the URL, `None` otherwise. This undergoes URL decoding and will always be a unicode string. """ rv = self._split_auth()[1] if rv is not None: return _url_unquote_legacy(rv) @property def raw_password(self): """The password if it was part of the URL, `None` otherwise. Unlike :attr:`password` this one is not being decoded. """ return self._split_auth()[1] def decode_query(self, *args, **kwargs): """Decodes the query part of the URL. Ths is a shortcut for calling :func:`url_decode` on the query argument. The arguments and keyword arguments are forwarded to :func:`url_decode` unchanged. """ return url_decode(self.query, *args, **kwargs) def join(self, *args, **kwargs): """Joins this URL with another one. This is just a convenience function for calling into :meth:`url_join` and then parsing the return value again. """ return url_parse(url_join(self, *args, **kwargs)) def to_url(self): """Returns a URL string or bytes depending on the type of the information stored. This is just a convenience function for calling :meth:`url_unparse` for this URL. """ return url_unparse(self) def decode_netloc(self): """Decodes the netloc part into a string.""" rv = _decode_idna(self.host or '') if ':' in rv: rv = '[%s]' % rv port = self.port if port is not None: rv = '%s:%d' % (rv, port) auth = ':'.join(filter(None, [ _url_unquote_legacy(self.raw_username or '', '/:%@'), _url_unquote_legacy(self.raw_password or '', '/:%@'), ])) if auth: rv = '%s@%s' % (auth, rv) return rv def to_uri_tuple(self): """Returns a :class:`BytesURL` tuple that holds a URI. This will encode all the information in the URL properly to ASCII using the rules a web browser would follow. It's usually more interesting to directly call :meth:`iri_to_uri` which will return a string. """ return url_parse(iri_to_uri(self).encode('ascii')) def to_iri_tuple(self): """Returns a :class:`URL` tuple that holds a IRI. This will try to decode as much information as possible in the URL without losing information similar to how a web browser does it for the URL bar. It's usually more interesting to directly call :meth:`uri_to_iri` which will return a string. """ return url_parse(uri_to_iri(self)) def get_file_location(self, pathformat=None): """Returns a tuple with the location of the file in the form ``(server, location)``. 
If the netloc is empty in the URL or points to localhost, it's represented as ``None``. The `pathformat` by default is autodetection but needs to be set when working with URLs of a specific system. The supported values are ``'windows'`` when working with Windows or DOS paths and ``'posix'`` when working with posix paths. If the URL does not point to to a local file, the server and location are both represented as ``None``. :param pathformat: The expected format of the path component. Currently ``'windows'`` and ``'posix'`` are supported. Defaults to ``None`` which is autodetect. """ if self.scheme != 'file': return None, None path = url_unquote(self.path) host = self.netloc or None if pathformat is None: if os.name == 'nt': pathformat = 'windows' else: pathformat = 'posix' if pathformat == 'windows': if path[:1] == '/' and path[1:2].isalpha() and path[2:3] in '|:': path = path[1:2] + ':' + path[3:] windows_share = path[:3] in ('\\' * 3, '/' * 3) import ntpath path = ntpath.normpath(path) # Windows shared drives are represented as ``\\host\\directory``. # That results in a URL like ``file://///host/directory``, and a # path like ``///host/directory``. We need to special-case this # because the path contains the hostname. if windows_share and host is None: parts = path.lstrip('\\').split('\\', 1) if len(parts) == 2: host, path = parts else: host = parts[0] path = '' elif pathformat == 'posix': import posixpath path = posixpath.normpath(path) else: raise TypeError('Invalid path format %s' % repr(pathformat)) if host in ('127.0.0.1', '::1', 'localhost'): host = None return host, path def _split_netloc(self): if self._at in self.netloc: return self.netloc.split(self._at, 1) return None, self.netloc def _split_auth(self): auth = self._split_netloc()[0] if not auth: return None, None if self._colon not in auth: return auth, None return auth.split(self._colon, 1) def _split_host(self): rv = self._split_netloc()[1] if not rv: return None, None if not rv.startswith(self._lbracket): if self._colon in rv: return rv.split(self._colon, 1) return rv, None idx = rv.find(self._rbracket) if idx < 0: return rv, None host = rv[1:idx] rest = rv[idx + 1:] if rest.startswith(self._colon): return host, rest[1:] return host, None @implements_to_string class URL(BaseURL): """Represents a parsed URL. This behaves like a regular tuple but also has some extra attributes that give further insight into the URL. """ __slots__ = () _at = '@' _colon = ':' _lbracket = '[' _rbracket = ']' def __str__(self): return self.to_url() def encode_netloc(self): """Encodes the netloc part to an ASCII safe URL as bytes.""" rv = self.ascii_host or '' if ':' in rv: rv = '[%s]' % rv port = self.port if port is not None: rv = '%s:%d' % (rv, port) auth = ':'.join(filter(None, [ url_quote(self.raw_username or '', 'utf-8', 'strict', '/:%'), url_quote(self.raw_password or '', 'utf-8', 'strict', '/:%'), ])) if auth: rv = '%s@%s' % (auth, rv) return to_native(rv) def encode(self, charset='utf-8', errors='replace'): """Encodes the URL to a tuple made out of bytes. The charset is only being used for the path, query and fragment. 
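A round-trip sketch::

    bytes_url = url_parse(u'http://example.com/path').encode()
    text_url = bytes_url.decode()  # back to a :class:`URL` of text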
""" return BytesURL( self.scheme.encode('ascii'), self.encode_netloc(), self.path.encode(charset, errors), self.query.encode(charset, errors), self.fragment.encode(charset, errors) ) class BytesURL(BaseURL): """Represents a parsed URL in bytes.""" __slots__ = () _at = b'@' _colon = b':' _lbracket = b'[' _rbracket = b']' def __str__(self): return self.to_url().decode('utf-8', 'replace') def encode_netloc(self): """Returns the netloc unchanged as bytes.""" return self.netloc def decode(self, charset='utf-8', errors='replace'): """Decodes the URL to a tuple made out of strings. The charset is only being used for the path, query and fragment. """ return URL( self.scheme.decode('ascii'), self.decode_netloc(), self.path.decode(charset, errors), self.query.decode(charset, errors), self.fragment.decode(charset, errors) ) def _unquote_to_bytes(string, unsafe=''): if isinstance(string, text_type): string = string.encode('utf-8') if isinstance(unsafe, text_type): unsafe = unsafe.encode('utf-8') unsafe = frozenset(bytearray(unsafe)) bits = iter(string.split(b'%')) result = bytearray(next(bits, b'')) for item in bits: try: char = _hextobyte[item[:2]] if char in unsafe: raise KeyError() result.append(char) result.extend(item[2:]) except KeyError: result.extend(b'%') result.extend(item) return bytes(result) def _url_encode_impl(obj, charset, encode_keys, sort, key): iterable = iter_multi_items(obj) if sort: iterable = sorted(iterable, key=key) for key, value in iterable: if value is None: continue if not isinstance(key, bytes): key = text_type(key).encode(charset) if not isinstance(value, bytes): value = text_type(value).encode(charset) yield url_quote_plus(key) + '=' + url_quote_plus(value) def _url_unquote_legacy(value, unsafe=''): try: return url_unquote(value, charset='utf-8', errors='strict', unsafe=unsafe) except UnicodeError: return url_unquote(value, charset='latin1', unsafe=unsafe) def url_parse(url, scheme=None, allow_fragments=True): """Parses a URL from a string into a :class:`URL` tuple. If the URL is lacking a scheme it can be provided as second argument. Otherwise, it is ignored. Optionally fragments can be stripped from the URL by setting `allow_fragments` to `False`. The inverse of this function is :func:`url_unparse`. :param url: the URL to parse. :param scheme: the default schema to use if the URL is schemaless. :param allow_fragments: if set to `False` a fragment will be removed from the URL. 
""" s = make_literal_wrapper(url) is_text_based = isinstance(url, text_type) if scheme is None: scheme = s('') netloc = query = fragment = s('') i = url.find(s(':')) if i > 0 and _scheme_re.match(to_native(url[:i], errors='replace')): # make sure "iri" is not actually a port number (in which case # "scheme" is really part of the path) rest = url[i + 1:] if not rest or any(c not in s('0123456789') for c in rest): # not a port number scheme, url = url[:i].lower(), rest if url[:2] == s('//'): delim = len(url) for c in s('/?#'): wdelim = url.find(c, 2) if wdelim >= 0: delim = min(delim, wdelim) netloc, url = url[2:delim], url[delim:] if (s('[') in netloc and s(']') not in netloc) or \ (s(']') in netloc and s('[') not in netloc): raise ValueError('Invalid IPv6 URL') if allow_fragments and s('#') in url: url, fragment = url.split(s('#'), 1) if s('?') in url: url, query = url.split(s('?'), 1) result_type = is_text_based and URL or BytesURL return result_type(scheme, netloc, url, query, fragment) def url_quote(string, charset='utf-8', errors='strict', safe='/:', unsafe=''): """URL encode a single string with a given encoding. :param s: the string to quote. :param charset: the charset to be used. :param safe: an optional sequence of safe characters. :param unsafe: an optional sequence of unsafe characters. .. versionadded:: 0.9.2 The `unsafe` parameter was added. """ if not isinstance(string, (text_type, bytes, bytearray)): string = text_type(string) if isinstance(string, text_type): string = string.encode(charset, errors) if isinstance(safe, text_type): safe = safe.encode(charset, errors) if isinstance(unsafe, text_type): unsafe = unsafe.encode(charset, errors) safe = frozenset(bytearray(safe) + _always_safe) - frozenset(bytearray(unsafe)) rv = bytearray() for char in bytearray(string): if char in safe: rv.append(char) else: rv.extend(_bytetohex[char]) return to_native(bytes(rv)) def url_quote_plus(string, charset='utf-8', errors='strict', safe=''): """URL encode a single string with the given encoding and convert whitespace to "+". :param s: The string to quote. :param charset: The charset to be used. :param safe: An optional sequence of safe characters. """ return url_quote(string, charset, errors, safe + ' ', '+').replace(' ', '+') def url_unparse(components): """The reverse operation to :meth:`url_parse`. This accepts arbitrary as well as :class:`URL` tuples and returns a URL as a string. :param components: the parsed URL as tuple which should be converted into a URL string. """ scheme, netloc, path, query, fragment = \ normalize_string_tuple(components) s = make_literal_wrapper(scheme) url = s('') # We generally treat file:///x and file:/x the same which is also # what browsers seem to do. This also allows us to ignore a schema # register for netloc utilization or having to differenciate between # empty and missing netloc. if netloc or (scheme and path.startswith(s('/'))): if path and path[:1] != s('/'): path = s('/') + path url = s('//') + (netloc or s('')) + path elif path: url += path if scheme: url = scheme + s(':') + url if query: url = url + s('?') + query if fragment: url = url + s('#') + fragment return url def url_unquote(string, charset='utf-8', errors='replace', unsafe=''): """URL decode a single string with a given encoding. If the charset is set to `None` no unicode decoding is performed and raw bytes are returned. :param s: the string to unquote. :param charset: the charset of the query string. If set to `None` no unicode decoding will take place. 
:param errors: the error handling for the charset decoding. """ rv = _unquote_to_bytes(string, unsafe) if charset is not None: rv = rv.decode(charset, errors) return rv def url_unquote_plus(s, charset='utf-8', errors='replace'): """URL decode a single string with the given `charset` and decode "+" to whitespace. Per default encoding errors are ignored. If you want a different behavior you can set `errors` to ``'replace'`` or ``'strict'``. In strict mode a :exc:`HTTPUnicodeError` is raised. :param s: The string to unquote. :param charset: the charset of the query string. If set to `None` no unicode decoding will take place. :param errors: The error handling for the `charset` decoding. """ if isinstance(s, text_type): s = s.replace(u'+', u' ') else: s = s.replace(b'+', b' ') return url_unquote(s, charset, errors) def url_fix(s, charset='utf-8'): r"""Sometimes you get an URL by a user that just isn't a real URL because it contains unsafe characters like ' ' and so on. This function can fix some of the problems in a similar way browsers handle data entered by the user: >>> url_fix(u'http://de.wikipedia.org/wiki/Elf (Begriffskl\xe4rung)') 'http://de.wikipedia.org/wiki/Elf%20(Begriffskl%C3%A4rung)' :param s: the string with the URL to fix. :param charset: The target charset for the URL if the url was given as unicode string. """ # First step is to switch to unicode processing and to convert # backslashes (which are invalid in URLs anyways) to slashes. This is # consistent with what Chrome does. s = to_unicode(s, charset, 'replace').replace('\\', '/') # For the specific case that we look like a malformed windows URL # we want to fix this up manually: if s.startswith('file://') and s[7:8].isalpha() and s[8:10] in (':/', '|/'): s = 'file:///' + s[7:] url = url_parse(s) path = url_quote(url.path, charset, safe='/%+$!*\'(),') qs = url_quote_plus(url.query, charset, safe=':&%=+$!*\'(),') anchor = url_quote_plus(url.fragment, charset, safe=':&%=+$!*\'(),') return to_native(url_unparse((url.scheme, url.encode_netloc(), path, qs, anchor))) def uri_to_iri(uri, charset='utf-8', errors='replace'): r""" Converts a URI in a given charset to a IRI. Examples for URI versus IRI: >>> uri_to_iri(b'http://xn--n3h.net/') u'http://\u2603.net/' >>> uri_to_iri(b'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th') u'http://\xfcser:p\xe4ssword@\u2603.net/p\xe5th' Query strings are left unchanged: >>> uri_to_iri('/?foo=24&x=%26%2f') u'/?foo=24&x=%26%2f' .. versionadded:: 0.6 :param uri: The URI to convert. :param charset: The charset of the URI. :param errors: The error handling on decode. """ if isinstance(uri, tuple): uri = url_unparse(uri) uri = url_parse(to_unicode(uri, charset)) path = url_unquote(uri.path, charset, errors, '%/;?') query = url_unquote(uri.query, charset, errors, '%;/?:@&=+,$#') fragment = url_unquote(uri.fragment, charset, errors, '%;/?:@&=+,$#') return url_unparse((uri.scheme, uri.decode_netloc(), path, query, fragment)) def iri_to_uri(iri, charset='utf-8', errors='strict', safe_conversion=False): r""" Converts any unicode based IRI to an acceptable ASCII URI. Werkzeug always uses utf-8 URLs internally because this is what browsers and HTTP do as well. In some places where it accepts an URL it also accepts a unicode IRI and converts it into a URI. 
Examples for IRI versus URI: >>> iri_to_uri(u'http://☃.net/') 'http://xn--n3h.net/' >>> iri_to_uri(u'http://üser:pässword@☃.net/påth') 'http://%C3%BCser:p%C3%A4ssword@xn--n3h.net/p%C3%A5th' There is a general problem with IRI and URI conversion with some protocols that appear in the wild that are in violation of the URI specification. In places where Werkzeug goes through a forced IRI to URI conversion it will set the `safe_conversion` flag which will not perform a conversion if the end result is already ASCII. This can mean that the return value is not an entirely correct URI but it will not destroy such invalid URLs in the process. As an example consider the following two IRIs:: magnet:?xt=uri:whatever itms-services://?action=download-manifest The internal representation after parsing of those URLs is the same and there is no way to reconstruct the original one. If safe conversion is enabled however this function becomes a noop for both of those strings as they both can be considered URIs. .. versionadded:: 0.6 .. versionchanged:: 0.9.6 The `safe_conversion` parameter was added. :param iri: The IRI to convert. :param charset: The charset for the URI. :param safe_conversion: indicates if a safe conversion should take place. For more information see the explanation above. """ if isinstance(iri, tuple): iri = url_unparse(iri) if safe_conversion: try: native_iri = to_native(iri) ascii_iri = to_native(iri).encode('ascii') if ascii_iri.split() == [ascii_iri]: return native_iri except UnicodeError: pass iri = url_parse(to_unicode(iri, charset, errors)) netloc = iri.encode_netloc() path = url_quote(iri.path, charset, errors, '/:~+%') query = url_quote(iri.query, charset, errors, '%&[]:;$*()+,!?*/=') fragment = url_quote(iri.fragment, charset, errors, '=%&[]:;$()+,!?*/') return to_native(url_unparse((iri.scheme, netloc, path, query, fragment))) def url_decode(s, charset='utf-8', decode_keys=False, include_empty=True, errors='replace', separator='&', cls=None): """ Parse a querystring and return it as :class:`MultiDict`. There is a difference in key decoding on different Python versions. On Python 3 keys will always be fully decoded whereas on Python 2, keys will remain bytestrings if they fit into ASCII. On 2.x keys can be forced to be unicode by setting `decode_keys` to `True`. If the charset is set to `None` no unicode decoding will happen and raw bytes will be returned. Per default a missing value for a key will default to an empty key. If you don't want that behavior you can set `include_empty` to `False`. Per default encoding errors are ignored. If you want a different behavior you can set `errors` to ``'replace'`` or ``'strict'``. In strict mode a `HTTPUnicodeError` is raised. .. versionchanged:: 0.5 In previous versions ";" and "&" could be used for url decoding. This changed in 0.5 where only "&" is supported. If you want to use ";" instead a different `separator` can be provided. The `cls` parameter was added. :param s: a string with the query string to decode. :param charset: the charset of the query string. If set to `None` no unicode decoding will take place. :param decode_keys: Used on Python 2.x to control whether keys should be forced to be unicode objects. If set to `True` then keys will be unicode in all cases. Otherwise, they remain `str` if they fit into ASCII. :param include_empty: Set to `False` if you don't want empty values to appear in the dict. :param errors: the decoding error behavior. 
:param separator: the pair separator to be used, defaults to ``&`` :param cls: an optional dict class to use. If this is not specified or `None` the default :class:`MultiDict` is used. """ if cls is None: cls = MultiDict if isinstance(s, text_type) and not isinstance(separator, text_type): separator = separator.decode(charset or 'ascii') elif isinstance(s, bytes) and not isinstance(separator, bytes): separator = separator.encode(charset or 'ascii') return cls(_url_decode_impl(s.split(separator), charset, decode_keys, include_empty, errors)) def url_decode_stream(stream, charset='utf-8', decode_keys=False, include_empty=True, errors='replace', separator='&', cls=None, limit=None, return_iterator=False): """Works like :func:`url_decode` but decodes a stream. The behavior of stream and limit follows functions like :func:`~werkzeug.wsgi.make_line_iter`. The generator of pairs is directly fed to the `cls` so you can consume the data while it's parsed. .. versionadded:: 0.8 :param stream: a stream with the encoded querystring :param charset: the charset of the query string. If set to `None` no unicode decoding will take place. :param decode_keys: Used on Python 2.x to control whether keys should be forced to be unicode objects. If set to `True`, keys will be unicode in all cases. Otherwise, they remain `str` if they fit into ASCII. :param include_empty: Set to `False` if you don't want empty values to appear in the dict. :param errors: the decoding error behavior. :param separator: the pair separator to be used, defaults to ``&`` :param cls: an optional dict class to use. If this is not specified or `None` the default :class:`MultiDict` is used. :param limit: the content length of the URL data. Not necessary if a limited stream is provided. :param return_iterator: if set to `True` the `cls` argument is ignored and an iterator over all decoded pairs is returned """ from werkzeug.wsgi import make_chunk_iter if return_iterator: cls = lambda x: x elif cls is None: cls = MultiDict pair_iter = make_chunk_iter(stream, separator, limit) return cls(_url_decode_impl(pair_iter, charset, decode_keys, include_empty, errors)) def _url_decode_impl(pair_iter, charset, decode_keys, include_empty, errors): for pair in pair_iter: if not pair: continue s = make_literal_wrapper(pair) equal = s('=') if equal in pair: key, value = pair.split(equal, 1) else: if not include_empty: continue key = pair value = s('') key = url_unquote_plus(key, charset, errors) if charset is not None and PY2 and not decode_keys: key = try_coerce_native(key) yield key, url_unquote_plus(value, charset, errors) def url_encode(obj, charset='utf-8', encode_keys=False, sort=False, key=None, separator=b'&'): """URL encode a dict/`MultiDict`. If a value is `None` it will not appear in the result string. Per default only values are encoded into the target charset strings. If `encode_keys` is set to ``True`` unicode keys are supported too. If `sort` is set to `True` the items are sorted by `key` or the default sorting algorithm. .. versionadded:: 0.5 `sort`, `key`, and `separator` were added. :param obj: the object to encode into a query string. :param charset: the charset of the query string. :param encode_keys: set to `True` if you have unicode keys. (Ignored on Python 3.x) :param sort: set to `True` if you want parameters to be sorted by `key`. :param separator: the separator to be used for the pairs. :param key: an optional function to be used for sorting. For more details check out the :func:`sorted` documentation. 
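    A quick illustration (made-up values; `sort` is used for a stable order)::

        url_encode({'q': u'two words', 'page': 2}, sort=True)
        # -> 'page=2&q=two+words'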
""" separator = to_native(separator, 'ascii') return separator.join(_url_encode_impl(obj, charset, encode_keys, sort, key)) def url_encode_stream(obj, stream=None, charset='utf-8', encode_keys=False, sort=False, key=None, separator=b'&'): """Like :meth:`url_encode` but writes the results to a stream object. If the stream is `None` a generator over all encoded pairs is returned. .. versionadded:: 0.8 :param obj: the object to encode into a query string. :param stream: a stream to write the encoded object into or `None` if an iterator over the encoded pairs should be returned. In that case the separator argument is ignored. :param charset: the charset of the query string. :param encode_keys: set to `True` if you have unicode keys. (Ignored on Python 3.x) :param sort: set to `True` if you want parameters to be sorted by `key`. :param separator: the separator to be used for the pairs. :param key: an optional function to be used for sorting. For more details check out the :func:`sorted` documentation. """ separator = to_native(separator, 'ascii') gen = _url_encode_impl(obj, charset, encode_keys, sort, key) if stream is None: return gen for idx, chunk in enumerate(gen): if idx: stream.write(separator) stream.write(chunk) def url_join(base, url, allow_fragments=True): """Join a base URL and a possibly relative URL to form an absolute interpretation of the latter. :param base: the base URL for the join operation. :param url: the URL to join. :param allow_fragments: indicates whether fragments should be allowed. """ if isinstance(base, tuple): base = url_unparse(base) if isinstance(url, tuple): url = url_unparse(url) base, url = normalize_string_tuple((base, url)) s = make_literal_wrapper(base) if not base: return url if not url: return base bscheme, bnetloc, bpath, bquery, bfragment = \ url_parse(base, allow_fragments=allow_fragments) scheme, netloc, path, query, fragment = \ url_parse(url, bscheme, allow_fragments) if scheme != bscheme: return url if netloc: return url_unparse((scheme, netloc, path, query, fragment)) netloc = bnetloc if path[:1] == s('/'): segments = path.split(s('/')) elif not path: segments = bpath.split(s('/')) if not query: query = bquery else: segments = bpath.split(s('/'))[:-1] + path.split(s('/')) # If the rightmost part is "./" we want to keep the slash but # remove the dot. if segments[-1] == s('.'): segments[-1] = s('') # Resolve ".." and "." segments = [segment for segment in segments if segment != s('.')] while 1: i = 1 n = len(segments) - 1 while i < n: if segments[i] == s('..') and \ segments[i - 1] not in (s(''), s('..')): del segments[i - 1:i + 1] break i += 1 else: break # Remove trailing ".." if the URL is absolute unwanted_marker = [s(''), s('..')] while segments[:2] == unwanted_marker: del segments[1] path = s('/').join(segments) return url_unparse((scheme, netloc, path, query, fragment)) class Href(object): """Implements a callable that constructs URLs with the given base. The function can be called with any number of positional and keyword arguments which than are used to assemble the URL. Works with URLs and posix paths. Positional arguments are appended as individual segments to the path of the URL: >>> href = Href('/foo') >>> href('bar', 23) '/foo/bar/23' >>> href('foo', bar=23) '/foo/foo?bar=23' If any of the arguments (positional or keyword) evaluates to `None` it will be skipped. 
If no keyword arguments are given the last argument can be a :class:`dict` or :class:`MultiDict` (or any other dict subclass), otherwise the keyword arguments are used for the query parameters, cutting off the first trailing underscore of the parameter name: >>> href(is_=42) '/foo?is=42' >>> href({'foo': 'bar'}) '/foo?foo=bar' Combining of both methods is not allowed: >>> href({'foo': 'bar'}, bar=42) Traceback (most recent call last): ... TypeError: keyword arguments and query-dicts can't be combined Accessing attributes on the href object creates a new href object with the attribute name as prefix: >>> bar_href = href.bar >>> bar_href("blub") '/foo/bar/blub' If `sort` is set to `True` the items are sorted by `key` or the default sorting algorithm: >>> href = Href("/", sort=True) >>> href(a=1, b=2, c=3) '/?a=1&b=2&c=3' .. versionadded:: 0.5 `sort` and `key` were added. """ def __init__(self, base='./', charset='utf-8', sort=False, key=None): if not base: base = './' self.base = base self.charset = charset self.sort = sort self.key = key def __getattr__(self, name): if name[:2] == '__': raise AttributeError(name) base = self.base if base[-1:] != '/': base += '/' return Href(url_join(base, name), self.charset, self.sort, self.key) def __call__(self, *path, **query): if path and isinstance(path[-1], dict): if query: raise TypeError('keyword arguments and query-dicts ' 'can\'t be combined') query, path = path[-1], path[:-1] elif query: query = dict([(k.endswith('_') and k[:-1] or k, v) for k, v in query.items()]) path = '/'.join([to_unicode(url_quote(x, self.charset), 'ascii') for x in path if x is not None]).lstrip('/') rv = self.base if path: if not rv.endswith('/'): rv += '/' rv = url_join(rv, './' + path) if query: rv += '?' + to_unicode(url_encode(query, self.charset, sort=self.sort, key=self.key), 'ascii') return to_native(rv) werkzeug-0.14.1/werkzeug/useragents.py000066400000000000000000000133511322225165500200430ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.useragents ~~~~~~~~~~~~~~~~~~~ This module provides a helper to inspect user agent strings. This module is far from complete but should work for most of the currently available browsers. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import re class UserAgentParser(object): """A simple user agent parser. Used by the `UserAgent`.""" platforms = ( ('cros', 'chromeos'), ('iphone|ios', 'iphone'), ('ipad', 'ipad'), (r'darwin|mac|os\s*x', 'macos'), ('win', 'windows'), (r'android', 'android'), ('netbsd', 'netbsd'), ('openbsd', 'openbsd'), ('freebsd', 'freebsd'), ('dragonfly', 'dragonflybsd'), ('(sun|i86)os', 'solaris'), (r'x11|lin(\b|ux)?', 'linux'), (r'nintendo\s+wii', 'wii'), ('irix', 'irix'), ('hp-?ux', 'hpux'), ('aix', 'aix'), ('sco|unix_sv', 'sco'), ('bsd', 'bsd'), ('amiga', 'amiga'), ('blackberry|playbook', 'blackberry'), ('symbian', 'symbian') ) browsers = ( ('googlebot', 'google'), ('msnbot', 'msn'), ('yahoo', 'yahoo'), ('ask jeeves', 'ask'), (r'aol|america\s+online\s+browser', 'aol'), ('opera', 'opera'), ('edge', 'edge'), ('chrome', 'chrome'), ('seamonkey', 'seamonkey'), ('firefox|firebird|phoenix|iceweasel', 'firefox'), ('galeon', 'galeon'), ('safari|version', 'safari'), ('webkit', 'webkit'), ('camino', 'camino'), ('konqueror', 'konqueror'), ('k-meleon', 'kmeleon'), ('netscape', 'netscape'), (r'msie|microsoft\s+internet\s+explorer|trident/.+? 
rv:', 'msie'), ('lynx', 'lynx'), ('links', 'links'), ('Baiduspider', 'baidu'), ('bingbot', 'bing'), ('mozilla', 'mozilla') ) _browser_version_re = r'(?:%s)[/\sa-z(]*(\d+[.\da-z]+)?' _language_re = re.compile( r'(?:;\s*|\s+)(\b\w{2}\b(?:-\b\w{2}\b)?)\s*;|' r'(?:\(|\[|;)\s*(\b\w{2}\b(?:-\b\w{2}\b)?)\s*(?:\]|\)|;)' ) def __init__(self): self.platforms = [(b, re.compile(a, re.I)) for a, b in self.platforms] self.browsers = [(b, re.compile(self._browser_version_re % a, re.I)) for a, b in self.browsers] def __call__(self, user_agent): for platform, regex in self.platforms: match = regex.search(user_agent) if match is not None: break else: platform = None for browser, regex in self.browsers: match = regex.search(user_agent) if match is not None: version = match.group(1) break else: browser = version = None match = self._language_re.search(user_agent) if match is not None: language = match.group(1) or match.group(2) else: language = None return platform, browser, version, language class UserAgent(object): """Represents a user agent. Pass it a WSGI environment or a user agent string and you can inspect some of the details from the user agent string via the attributes. The following attributes exist: .. attribute:: string the raw user agent string .. attribute:: platform the browser platform. The following platforms are currently recognized: - `aix` - `amiga` - `android` - `blackberry` - `bsd` - `chromeos` - `dragonflybsd` - `freebsd` - `hpux` - `ipad` - `iphone` - `irix` - `linux` - `macos` - `netbsd` - `openbsd` - `sco` - `solaris` - `symbian` - `wii` - `windows` .. attribute:: browser the name of the browser. The following browsers are currently recognized: - `aol` * - `ask` * - `baidu` * - `bing` * - `camino` - `chrome` - `firefox` - `galeon` - `google` * - `kmeleon` - `konqueror` - `links` - `lynx` - `mozilla` - `msie` - `msn` - `netscape` - `opera` - `safari` - `seamonkey` - `webkit` - `yahoo` * (Browsers marked with a star (``*``) are crawlers.) .. attribute:: version the version of the browser .. attribute:: language the language of the browser """ _parser = UserAgentParser() def __init__(self, environ_or_string): if isinstance(environ_or_string, dict): environ_or_string = environ_or_string.get('HTTP_USER_AGENT', '') self.string = environ_or_string self.platform, self.browser, self.version, self.language = \ self._parser(environ_or_string) def to_header(self): return self.string def __str__(self): return self.string def __nonzero__(self): return bool(self.browser) __bool__ = __nonzero__ def __repr__(self): return '<%s %r/%s>' % ( self.__class__.__name__, self.browser, self.version ) # conceptionally this belongs in this module but because we want to lazily # load the user agent module (which happens in wrappers.py) we have to import # it afterwards. The class itself has the module set to this module so # pickle, inspect and similar modules treat the object as if it was really # implemented here. from werkzeug.wrappers import UserAgentMixin # noqa werkzeug-0.14.1/werkzeug/utils.py000066400000000000000000000546741322225165500170400ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.utils ~~~~~~~~~~~~~~ This module implements various utilities for WSGI applications. Most of them are used by the request and response wrappers but especially for middleware development it makes sense to use them without the wrappers. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
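    A couple of the helpers in action (a small, illustrative sketch)::

        from werkzeug.utils import format_string, secure_filename

        format_string('$name.txt', dict(name='report'))  # -> 'report.txt'
        secure_filename('My cool movie.mov')             # -> 'My_cool_movie.mov'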
""" import re import os import sys import pkgutil try: from html.entities import name2codepoint except ImportError: from htmlentitydefs import name2codepoint from werkzeug._compat import unichr, text_type, string_types, iteritems, \ reraise, PY2 from werkzeug._internal import _DictAccessorProperty, \ _parse_signature, _missing _format_re = re.compile(r'\$(?:(%s)|\{(%s)\})' % (('[a-zA-Z_][a-zA-Z0-9_]*',) * 2)) _entity_re = re.compile(r'&([^;]+);') _filename_ascii_strip_re = re.compile(r'[^A-Za-z0-9_.-]') _windows_device_files = ('CON', 'AUX', 'COM1', 'COM2', 'COM3', 'COM4', 'LPT1', 'LPT2', 'LPT3', 'PRN', 'NUL') class cached_property(property): """A decorator that converts a function into a lazy property. The function wrapped is called the first time to retrieve the result and then that calculated result is used the next time you access the value:: class Foo(object): @cached_property def foo(self): # calculate something important here return 42 The class has to have a `__dict__` in order for this property to work. """ # implementation detail: A subclass of python's builtin property # decorator, we override __get__ to check for a cached value. If one # choses to invoke __get__ by hand the property will still work as # expected because the lookup logic is replicated in __get__ for # manual invocation. def __init__(self, func, name=None, doc=None): self.__name__ = name or func.__name__ self.__module__ = func.__module__ self.__doc__ = doc or func.__doc__ self.func = func def __set__(self, obj, value): obj.__dict__[self.__name__] = value def __get__(self, obj, type=None): if obj is None: return self value = obj.__dict__.get(self.__name__, _missing) if value is _missing: value = self.func(obj) obj.__dict__[self.__name__] = value return value class environ_property(_DictAccessorProperty): """Maps request attributes to environment variables. This works not only for the Werzeug request object, but also any other class with an environ attribute: >>> class Test(object): ... environ = {'key': 'value'} ... test = environ_property('key') >>> var = Test() >>> var.test 'value' If you pass it a second value it's used as default if the key does not exist, the third one can be a converter that takes a value and converts it. If it raises :exc:`ValueError` or :exc:`TypeError` the default value is used. If no default value is provided `None` is used. Per default the property is read only. You have to explicitly enable it by passing ``read_only=False`` to the constructor. """ read_only = True def lookup(self, obj): return obj.environ class header_property(_DictAccessorProperty): """Like `environ_property` but for headers.""" def lookup(self, obj): return obj.headers class HTMLBuilder(object): """Helper object for HTML generation. Per default there are two instances of that class. The `html` one, and the `xhtml` one for those two dialects. The class uses keyword parameters and positional parameters to generate small snippets of HTML. Keyword parameters are converted to XML/SGML attributes, positional arguments are used as children. Because Python accepts positional arguments before keyword arguments it's a good idea to use a list with the star-syntax for some children: >>> html.p(class_='foo', *[html.a('foo', href='foo.html'), ' ', ... html.a('bar', href='bar.html')]) u'

<p class="foo"><a href="foo.html">foo</a> <a href="bar.html">bar</a></p>'

    This class works around some browser limitations and can not be used
    for arbitrary SGML/XML generation.  For that purpose lxml and similar
    libraries exist.

    Calling the builder escapes the string passed:

    >>> html.p(html("<foo>"))
    u'<p>&lt;foo&gt;</p>
    ' """ _entity_re = re.compile(r'&([^;]+);') _entities = name2codepoint.copy() _entities['apos'] = 39 _empty_elements = set([ 'area', 'base', 'basefont', 'br', 'col', 'command', 'embed', 'frame', 'hr', 'img', 'input', 'keygen', 'isindex', 'link', 'meta', 'param', 'source', 'wbr' ]) _boolean_attributes = set([ 'selected', 'checked', 'compact', 'declare', 'defer', 'disabled', 'ismap', 'multiple', 'nohref', 'noresize', 'noshade', 'nowrap' ]) _plaintext_elements = set(['textarea']) _c_like_cdata = set(['script', 'style']) def __init__(self, dialect): self._dialect = dialect def __call__(self, s): return escape(s) def __getattr__(self, tag): if tag[:2] == '__': raise AttributeError(tag) def proxy(*children, **arguments): buffer = '<' + tag for key, value in iteritems(arguments): if value is None: continue if key[-1] == '_': key = key[:-1] if key in self._boolean_attributes: if not value: continue if self._dialect == 'xhtml': value = '="' + key + '"' else: value = '' else: value = '="' + escape(value) + '"' buffer += ' ' + key + value if not children and tag in self._empty_elements: if self._dialect == 'xhtml': buffer += ' />' else: buffer += '>' return buffer buffer += '>' children_as_string = ''.join([text_type(x) for x in children if x is not None]) if children_as_string: if tag in self._plaintext_elements: children_as_string = escape(children_as_string) elif tag in self._c_like_cdata and self._dialect == 'xhtml': children_as_string = '/**/' buffer += children_as_string + '' return buffer return proxy def __repr__(self): return '<%s for %r>' % ( self.__class__.__name__, self._dialect ) html = HTMLBuilder('html') xhtml = HTMLBuilder('xhtml') def get_content_type(mimetype, charset): """Returns the full content type string with charset for a mimetype. If the mimetype represents text the charset will be appended as charset parameter, otherwise the mimetype is returned unchanged. :param mimetype: the mimetype to be used as content type. :param charset: the charset to be appended in case it was a text mimetype. :return: the content type. """ if mimetype.startswith('text/') or \ mimetype == 'application/xml' or \ (mimetype.startswith('application/') and mimetype.endswith('+xml')): mimetype += '; charset=' + charset return mimetype def format_string(string, context): """String-template format a string: >>> format_string('$foo and ${foo}s', dict(foo=42)) '42 and 42s' This does not do any attribute lookup etc. For more advanced string formattings have a look at the `werkzeug.template` module. :param string: the format string. :param context: a dict with the variables to insert. """ def lookup_arg(match): x = context[match.group(1) or match.group(2)] if not isinstance(x, string_types): x = type(string)(x) return x return _format_re.sub(lookup_arg, string) def secure_filename(filename): r"""Pass it a filename and it will return a secure version of it. This filename can then safely be stored on a regular file system and passed to :func:`os.path.join`. The filename returned is an ASCII only string for maximum portability. On windows systems the function also makes sure that the file is not named after one of the special device files. >>> secure_filename("My cool movie.mov") 'My_cool_movie.mov' >>> secure_filename("../../../etc/passwd") 'etc_passwd' >>> secure_filename(u'i contain cool \xfcml\xe4uts.txt') 'i_contain_cool_umlauts.txt' The function might return an empty filename. 
It's your responsibility to ensure that the filename is unique and that you generate random filename if the function returned an empty one. .. versionadded:: 0.5 :param filename: the filename to secure """ if isinstance(filename, text_type): from unicodedata import normalize filename = normalize('NFKD', filename).encode('ascii', 'ignore') if not PY2: filename = filename.decode('ascii') for sep in os.path.sep, os.path.altsep: if sep: filename = filename.replace(sep, ' ') filename = str(_filename_ascii_strip_re.sub('', '_'.join( filename.split()))).strip('._') # on nt a couple of special files are present in each folder. We # have to ensure that the target file is not such a filename. In # this case we prepend an underline if os.name == 'nt' and filename and \ filename.split('.')[0].upper() in _windows_device_files: filename = '_' + filename return filename def escape(s, quote=None): """Replace special characters "&", "<", ">" and (") to HTML-safe sequences. There is a special handling for `None` which escapes to an empty string. .. versionchanged:: 0.9 `quote` is now implicitly on. :param s: the string to escape. :param quote: ignored. """ if s is None: return '' elif hasattr(s, '__html__'): return text_type(s.__html__()) elif not isinstance(s, string_types): s = text_type(s) if quote is not None: from warnings import warn warn(DeprecationWarning('quote parameter is implicit now'), stacklevel=2) s = s.replace('&', '&').replace('<', '<') \ .replace('>', '>').replace('"', """) return s def unescape(s): """The reverse function of `escape`. This unescapes all the HTML entities, not only the XML entities inserted by `escape`. :param s: the string to unescape. """ def handle_match(m): name = m.group(1) if name in HTMLBuilder._entities: return unichr(HTMLBuilder._entities[name]) try: if name[:2] in ('#x', '#X'): return unichr(int(name[2:], 16)) elif name.startswith('#'): return unichr(int(name[1:])) except ValueError: pass return u'' return _entity_re.sub(handle_match, s) def redirect(location, code=302, Response=None): """Returns a response object (a WSGI application) that, if called, redirects the client to the target location. Supported codes are 301, 302, 303, 305, and 307. 300 is not supported because it's not a real redirect and 304 because it's the answer for a request with a request with defined If-Modified-Since headers. .. versionadded:: 0.6 The location can now be a unicode string that is encoded using the :func:`iri_to_uri` function. .. versionadded:: 0.10 The class used for the Response object can now be passed in. :param location: the location the response should redirect to. :param code: the redirect status code. defaults to 302. :param class Response: a Response class to use when instantiating a response. The default is :class:`werkzeug.wrappers.Response` if unspecified. """ if Response is None: from werkzeug.wrappers import Response display_location = escape(location) if isinstance(location, text_type): # Safe conversion is necessary here as we might redirect # to a broken URI scheme (for instance itms-services). from werkzeug.urls import iri_to_uri location = iri_to_uri(location, safe_conversion=True) response = Response( '\n' 'Redirecting...\n' '

<h1>Redirecting...</h1>\n' '<p>

    You should be redirected automatically to target URL: ' '%s. If not click the link.' % (escape(location), display_location), code, mimetype='text/html') response.headers['Location'] = location return response def append_slash_redirect(environ, code=301): """Redirects to the same URL but with a slash appended. The behavior of this function is undefined if the path ends with a slash already. :param environ: the WSGI environment for the request that triggers the redirect. :param code: the status code for the redirect. """ new_path = environ['PATH_INFO'].strip('/') + '/' query_string = environ.get('QUERY_STRING') if query_string: new_path += '?' + query_string return redirect(new_path, code) def import_string(import_name, silent=False): """Imports an object based on a string. This is useful if you want to use import paths as endpoints or something similar. An import path can be specified either in dotted notation (``xml.sax.saxutils.escape``) or with a colon as object delimiter (``xml.sax.saxutils:escape``). If `silent` is True the return value will be `None` if the import fails. :param import_name: the dotted name for the object to import. :param silent: if set to `True` import errors are ignored and `None` is returned instead. :return: imported object """ # force the import name to automatically convert to strings # __import__ is not able to handle unicode strings in the fromlist # if the module is a package import_name = str(import_name).replace(':', '.') try: try: __import__(import_name) except ImportError: if '.' not in import_name: raise else: return sys.modules[import_name] module_name, obj_name = import_name.rsplit('.', 1) try: module = __import__(module_name, None, None, [obj_name]) except ImportError: # support importing modules not yet set up by the parent module # (or package for that matter) module = import_string(module_name) try: return getattr(module, obj_name) except AttributeError as e: raise ImportError(e) except ImportError as e: if not silent: reraise( ImportStringError, ImportStringError(import_name, e), sys.exc_info()[2]) def find_modules(import_path, include_packages=False, recursive=False): """Finds all the modules below a package. This can be useful to automatically import all views / controllers so that their metaclasses / function decorators have a chance to register themselves on the application. Packages are not returned unless `include_packages` is `True`. This can also recursively list modules but in that case it will import all the packages to get the correct load path of that module. :param import_path: the dotted name for the package to find child modules. :param include_packages: set to `True` if packages should be returned, too. :param recursive: set to `True` if recursion should happen. :return: generator """ module = import_string(import_path) path = getattr(module, '__path__', None) if path is None: raise ValueError('%r is not a package' % import_path) basename = module.__name__ + '.' for importer, modname, ispkg in pkgutil.iter_modules(path): modname = basename + modname if ispkg: if include_packages: yield modname if recursive: for item in find_modules(modname, include_packages, True): yield item else: yield modname def validate_arguments(func, args, kwargs, drop_extra=True): """Checks if the function accepts the arguments and keyword arguments. Returns a new ``(args, kwargs)`` tuple that can safely be passed to the function without causing a `TypeError` because the function signature is incompatible. 
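    A small illustration (``add`` is just an example function)::

        def add(a, b=2):
            return a + b

        args, kwargs = validate_arguments(add, (1,), {'junk': 42})
        # the unknown 'junk' keyword is dropped, so add(*args, **kwargs) == 3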
If `drop_extra` is set to `True` (which is the default) any extra positional or keyword arguments are dropped automatically. The exception raised provides three attributes: `missing` A set of argument names that the function expected but where missing. `extra` A dict of keyword arguments that the function can not handle but where provided. `extra_positional` A list of values that where given by positional argument but the function cannot accept. This can be useful for decorators that forward user submitted data to a view function:: from werkzeug.utils import ArgumentValidationError, validate_arguments def sanitize(f): def proxy(request): data = request.values.to_dict() try: args, kwargs = validate_arguments(f, (request,), data) except ArgumentValidationError: raise BadRequest('The browser failed to transmit all ' 'the data expected.') return f(*args, **kwargs) return proxy :param func: the function the validation is performed against. :param args: a tuple of positional arguments. :param kwargs: a dict of keyword arguments. :param drop_extra: set to `False` if you don't want extra arguments to be silently dropped. :return: tuple in the form ``(args, kwargs)``. """ parser = _parse_signature(func) args, kwargs, missing, extra, extra_positional = parser(args, kwargs)[:5] if missing: raise ArgumentValidationError(tuple(missing)) elif (extra or extra_positional) and not drop_extra: raise ArgumentValidationError(None, extra, extra_positional) return tuple(args), kwargs def bind_arguments(func, args, kwargs): """Bind the arguments provided into a dict. When passed a function, a tuple of arguments and a dict of keyword arguments `bind_arguments` returns a dict of names as the function would see it. This can be useful to implement a cache decorator that uses the function arguments to build the cache key based on the values of the arguments. :param func: the function the arguments should be bound for. :param args: tuple of positional arguments. :param kwargs: a dict of keyword arguments. :return: a :class:`dict` of bound keyword arguments. """ args, kwargs, missing, extra, extra_positional, \ arg_spec, vararg_var, kwarg_var = _parse_signature(func)(args, kwargs) values = {} for (name, has_default, default), value in zip(arg_spec, args): values[name] = value if vararg_var is not None: values[vararg_var] = tuple(extra_positional) elif extra_positional: raise TypeError('too many positional arguments') if kwarg_var is not None: multikw = set(extra) & set([x[0] for x in arg_spec]) if multikw: raise TypeError('got multiple values for keyword argument ' + repr(next(iter(multikw)))) values[kwarg_var] = extra elif extra: raise TypeError('got unexpected keyword argument ' + repr(next(iter(extra)))) return values class ArgumentValidationError(ValueError): """Raised if :func:`validate_arguments` fails to validate""" def __init__(self, missing=None, extra=None, extra_positional=None): self.missing = set(missing or ()) self.extra = extra or {} self.extra_positional = extra_positional or [] ValueError.__init__(self, 'function arguments invalid. (' '%d missing, %d additional)' % ( len(self.missing), len(self.extra) + len(self.extra_positional) )) class ImportStringError(ImportError): """Provides information about a failed :func:`import_string` attempt.""" #: String in dotted notation that failed to be imported. import_name = None #: Wrapped exception. exception = None def __init__(self, import_name, exception): self.import_name = import_name self.exception = exception msg = ( 'import_string() failed for %r. 
Possible reasons are:\n\n' '- missing __init__.py in a package;\n' '- package or module path not included in sys.path;\n' '- duplicated package or module name taking precedence in ' 'sys.path;\n' '- missing module, class, function or variable;\n\n' 'Debugged import:\n\n%s\n\n' 'Original exception:\n\n%s: %s') name = '' tracked = [] for part in import_name.replace(':', '.').split('.'): name += (name and '.') + part imported = import_string(name, silent=True) if imported: tracked.append((name, getattr(imported, '__file__', None))) else: track = ['- %r found in %r.' % (n, i) for n, i in tracked] track.append('- %r not found.' % name) msg = msg % (import_name, '\n'.join(track), exception.__class__.__name__, str(exception)) break ImportError.__init__(self, msg) def __repr__(self): return '<%s(%r, %r)>' % (self.__class__.__name__, self.import_name, self.exception) # DEPRECATED # these objects were previously in this module as well. we import # them here for backwards compatibility with old pickles. from werkzeug.datastructures import ( # noqa MultiDict, CombinedMultiDict, Headers, EnvironHeaders) from werkzeug.http import parse_cookie, dump_cookie # noqa werkzeug-0.14.1/werkzeug/wrappers.py000066400000000000000000002450371322225165500175360ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.wrappers ~~~~~~~~~~~~~~~~~ The wrappers are simple request and response objects which you can subclass to do whatever you want them to do. The request object contains the information transmitted by the client (webbrowser) and the response object contains all the information sent back to the browser. An important detail is that the request object is created with the WSGI environ and will act as high-level proxy whereas the response object is an actual WSGI application. Like everything else in Werkzeug these objects will work correctly with unicode data. Incoming form data parsed by the response object will be decoded into an unicode object if possible and if it makes sense. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. 
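    A minimal sketch of the intended use (illustrative only)::

        from werkzeug.wrappers import Request, Response

        @Request.application
        def application(request):
            name = request.args.get('name', 'World')
            return Response('Hello %s!' % name)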
""" from functools import update_wrapper from datetime import datetime, timedelta from warnings import warn from werkzeug.http import HTTP_STATUS_CODES, \ parse_accept_header, parse_cache_control_header, parse_etags, \ parse_date, generate_etag, is_resource_modified, unquote_etag, \ quote_etag, parse_set_header, parse_authorization_header, \ parse_www_authenticate_header, remove_entity_headers, \ parse_options_header, dump_options_header, http_date, \ parse_if_range_header, parse_cookie, dump_cookie, \ parse_range_header, parse_content_range_header, dump_header, \ parse_age, dump_age from werkzeug.urls import url_decode, iri_to_uri, url_join from werkzeug.formparser import FormDataParser, default_stream_factory from werkzeug.utils import cached_property, environ_property, \ header_property, get_content_type from werkzeug.wsgi import get_current_url, get_host, \ ClosingIterator, get_input_stream, get_content_length, _RangeWrapper from werkzeug.datastructures import MultiDict, CombinedMultiDict, Headers, \ EnvironHeaders, ImmutableMultiDict, ImmutableTypeConversionDict, \ ImmutableList, MIMEAccept, CharsetAccept, LanguageAccept, \ ResponseCacheControl, RequestCacheControl, CallbackDict, \ ContentRange, iter_multi_items from werkzeug._internal import _get_environ from werkzeug._compat import to_bytes, string_types, text_type, \ integer_types, wsgi_decoding_dance, wsgi_get_bytes, \ to_unicode, to_native, BytesIO def _run_wsgi_app(*args): """This function replaces itself to ensure that the test module is not imported unless required. DO NOT USE! """ global _run_wsgi_app from werkzeug.test import run_wsgi_app as _run_wsgi_app return _run_wsgi_app(*args) def _warn_if_string(iterable): """Helper for the response objects to check if the iterable returned to the WSGI server is not a string. """ if isinstance(iterable, string_types): warn(Warning('response iterable was set to a string. This appears ' 'to work but means that the server will send the ' 'data to the client char, by char. This is almost ' 'never intended behavior, use response.data to assign ' 'strings to the response object.'), stacklevel=2) def _assert_not_shallow(request): if request.shallow: raise RuntimeError('A shallow request tried to consume ' 'form data. If you really want to do ' 'that, set `shallow` to False.') def _iter_encoded(iterable, charset): for item in iterable: if isinstance(item, text_type): yield item.encode(charset) else: yield item def _clean_accept_ranges(accept_ranges): if accept_ranges is True: return "bytes" elif accept_ranges is False: return "none" elif isinstance(accept_ranges, text_type): return to_native(accept_ranges) raise ValueError("Invalid accept_ranges value") class BaseRequest(object): """Very basic request object. This does not implement advanced stuff like entity tag parsing or cache controls. The request object is created with the WSGI environment as first argument and will add itself to the WSGI environment as ``'werkzeug.request'`` unless it's created with `populate_request` set to False. There are a couple of mixins available that add additional functionality to the request object, there is also a class called `Request` which subclasses `BaseRequest` and all the important mixins. It's a good idea to create a custom subclass of the :class:`BaseRequest` and add missing functionality either via mixins or direct implementation. 
Here an example for such subclasses:: from werkzeug.wrappers import BaseRequest, ETagRequestMixin class Request(BaseRequest, ETagRequestMixin): pass Request objects are **read only**. As of 0.5 modifications are not allowed in any place. Unlike the lower level parsing functions the request object will use immutable objects everywhere possible. Per default the request object will assume all the text data is `utf-8` encoded. Please refer to `the unicode chapter `_ for more details about customizing the behavior. Per default the request object will be added to the WSGI environment as `werkzeug.request` to support the debugging system. If you don't want that, set `populate_request` to `False`. If `shallow` is `True` the environment is initialized as shallow object around the environ. Every operation that would modify the environ in any way (such as consuming form data) raises an exception unless the `shallow` attribute is explicitly set to `False`. This is useful for middlewares where you don't want to consume the form data by accident. A shallow request is not populated to the WSGI environment. .. versionchanged:: 0.5 read-only mode was enforced by using immutables classes for all data. """ #: the charset for the request, defaults to utf-8 charset = 'utf-8' #: the error handling procedure for errors, defaults to 'replace' encoding_errors = 'replace' #: the maximum content length. This is forwarded to the form data #: parsing function (:func:`parse_form_data`). When set and the #: :attr:`form` or :attr:`files` attribute is accessed and the #: parsing fails because more than the specified value is transmitted #: a :exc:`~werkzeug.exceptions.RequestEntityTooLarge` exception is raised. #: #: Have a look at :ref:`dealing-with-request-data` for more details. #: #: .. versionadded:: 0.5 max_content_length = None #: the maximum form field size. This is forwarded to the form data #: parsing function (:func:`parse_form_data`). When set and the #: :attr:`form` or :attr:`files` attribute is accessed and the #: data in memory for post data is longer than the specified value a #: :exc:`~werkzeug.exceptions.RequestEntityTooLarge` exception is raised. #: #: Have a look at :ref:`dealing-with-request-data` for more details. #: #: .. versionadded:: 0.5 max_form_memory_size = None #: the class to use for `args` and `form`. The default is an #: :class:`~werkzeug.datastructures.ImmutableMultiDict` which supports #: multiple values per key. alternatively it makes sense to use an #: :class:`~werkzeug.datastructures.ImmutableOrderedMultiDict` which #: preserves order or a :class:`~werkzeug.datastructures.ImmutableDict` #: which is the fastest but only remembers the last key. It is also #: possible to use mutable structures, but this is not recommended. #: #: .. versionadded:: 0.6 parameter_storage_class = ImmutableMultiDict #: the type to be used for list values from the incoming WSGI environment. #: By default an :class:`~werkzeug.datastructures.ImmutableList` is used #: (for example for :attr:`access_list`). #: #: .. versionadded:: 0.6 list_storage_class = ImmutableList #: the type to be used for dict values from the incoming WSGI environment. #: By default an #: :class:`~werkzeug.datastructures.ImmutableTypeConversionDict` is used #: (for example for :attr:`cookies`). #: #: .. versionadded:: 0.6 dict_storage_class = ImmutableTypeConversionDict #: The form data parser that shoud be used. Can be replaced to customize #: the form date parsing. 
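    #:
    #: A subclass might swap in its own parser roughly like this
    #: (an illustrative sketch; ``MyFormDataParser`` is a made-up name)::
    #:
    #:     class MyRequest(BaseRequest):
    #:         form_data_parser_class = MyFormDataParser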
form_data_parser_class = FormDataParser #: Optionally a list of hosts that is trusted by this request. By default #: all hosts are trusted which means that whatever the client sends the #: host is will be accepted. #: #: This is the recommended setup as a webserver should manually be set up #: to only route correct hosts to the application, and remove the #: `X-Forwarded-Host` header if it is not being used (see #: :func:`werkzeug.wsgi.get_host`). #: #: .. versionadded:: 0.9 trusted_hosts = None #: Indicates whether the data descriptor should be allowed to read and #: buffer up the input stream. By default it's enabled. #: #: .. versionadded:: 0.9 disable_data_descriptor = False def __init__(self, environ, populate_request=True, shallow=False): self.environ = environ if populate_request and not shallow: self.environ['werkzeug.request'] = self self.shallow = shallow def __repr__(self): # make sure the __repr__ even works if the request was created # from an invalid WSGI environment. If we display the request # in a debug session we don't want the repr to blow up. args = [] try: args.append("'%s'" % to_native(self.url, self.url_charset)) args.append('[%s]' % self.method) except Exception: args.append('(invalid WSGI environ)') return '<%s %s>' % ( self.__class__.__name__, ' '.join(args) ) @property def url_charset(self): """The charset that is assumed for URLs. Defaults to the value of :attr:`charset`. .. versionadded:: 0.6 """ return self.charset @classmethod def from_values(cls, *args, **kwargs): """Create a new request object based on the values provided. If environ is given missing values are filled from there. This method is useful for small scripts when you need to simulate a request from an URL. Do not use this method for unittesting, there is a full featured client object (:class:`Client`) that allows to create multipart requests, support for cookies etc. This accepts the same options as the :class:`~werkzeug.test.EnvironBuilder`. .. versionchanged:: 0.5 This method now accepts the same arguments as :class:`~werkzeug.test.EnvironBuilder`. Because of this the `environ` parameter is now called `environ_overrides`. :return: request object """ from werkzeug.test import EnvironBuilder charset = kwargs.pop('charset', cls.charset) kwargs['charset'] = charset builder = EnvironBuilder(*args, **kwargs) try: return builder.get_request(cls) finally: builder.close() @classmethod def application(cls, f): """Decorate a function as responder that accepts the request as first argument. This works like the :func:`responder` decorator but the function is passed the request object as first argument and the request object will be closed automatically:: @Request.application def my_wsgi_app(request): return Response('Hello World!') As of Werkzeug 0.14 HTTP exceptions are automatically caught and converted to responses instead of failing. :param f: the WSGI callable to decorate :return: a new WSGI callable """ #: return a callable that wraps the -2nd argument with the request #: and calls the function with all the arguments up to that one and #: the request. The return value is then called with the latest #: two arguments. This makes it possible to use this decorator for #: both methods and standalone WSGI functions. 
from werkzeug.exceptions import HTTPException def application(*args): request = cls(args[-2]) with request: try: resp = f(*args[:-2] + (request,)) except HTTPException as e: resp = e.get_response(args[-2]) return resp(*args[-2:]) return update_wrapper(application, f) def _get_file_stream(self, total_content_length, content_type, filename=None, content_length=None): """Called to get a stream for the file upload. This must provide a file-like class with `read()`, `readline()` and `seek()` methods that is both writeable and readable. The default implementation returns a temporary file if the total content length is higher than 500KB. Because many browsers do not provide a content length for the files only the total content length matters. :param total_content_length: the total content length of all the data in the request combined. This value is guaranteed to be there. :param content_type: the mimetype of the uploaded file. :param filename: the filename of the uploaded file. May be `None`. :param content_length: the length of this file. This value is usually not provided because webbrowsers do not provide this value. """ return default_stream_factory( total_content_length=total_content_length, content_type=content_type, filename=filename, content_length=content_length) @property def want_form_data_parsed(self): """Returns True if the request method carries content. As of Werkzeug 0.9 this will be the case if a content type is transmitted. .. versionadded:: 0.8 """ return bool(self.environ.get('CONTENT_TYPE')) def make_form_data_parser(self): """Creates the form data parser. Instantiates the :attr:`form_data_parser_class` with some parameters. .. versionadded:: 0.8 """ return self.form_data_parser_class(self._get_file_stream, self.charset, self.encoding_errors, self.max_form_memory_size, self.max_content_length, self.parameter_storage_class) def _load_form_data(self): """Method used internally to retrieve submitted data. After calling this sets `form` and `files` on the request object to multi dicts filled with the incoming form data. As a matter of fact the input stream will be empty afterwards. You can also call this method to force the parsing of the form data. .. versionadded:: 0.8 """ # abort early if we have already consumed the stream if 'form' in self.__dict__: return _assert_not_shallow(self) if self.want_form_data_parsed: content_type = self.environ.get('CONTENT_TYPE', '') content_length = get_content_length(self.environ) mimetype, options = parse_options_header(content_type) parser = self.make_form_data_parser() data = parser.parse(self._get_stream_for_parsing(), mimetype, content_length, options) else: data = (self.stream, self.parameter_storage_class(), self.parameter_storage_class()) # inject the values into the instance dict so that we bypass # our cached_property non-data descriptor. d = self.__dict__ d['stream'], d['form'], d['files'] = data def _get_stream_for_parsing(self): """This is the same as accessing :attr:`stream` with the difference that if it finds cached data from calling :meth:`get_data` first it will create a new stream out of the cached data. .. versionadded:: 0.9.3 """ cached_data = getattr(self, '_cached_data', None) if cached_data is not None: return BytesIO(cached_data) return self.stream def close(self): """Closes associated resources of this request object. This closes all file handles explicitly. You can also use the request object in a with statement which will automatically close it. .. 
versionadded:: 0.9 """ files = self.__dict__.get('files') for key, value in iter_multi_items(files or ()): value.close() def __enter__(self): return self def __exit__(self, exc_type, exc_value, tb): self.close() @cached_property def stream(self): """ If the incoming form data was not encoded with a known mimetype the data is stored unmodified in this stream for consumption. Most of the time it is a better idea to use :attr:`data` which will give you that data as a string. The stream only returns the data once. Unlike :attr:`input_stream` this stream is properly guarded that you can't accidentally read past the length of the input. Werkzeug will internally always refer to this stream to read data which makes it possible to wrap this object with a stream that does filtering. .. versionchanged:: 0.9 This stream is now always available but might be consumed by the form parser later on. Previously the stream was only set if no parsing happened. """ _assert_not_shallow(self) return get_input_stream(self.environ) input_stream = environ_property('wsgi.input', """ The WSGI input stream. In general it's a bad idea to use this one because you can easily read past the boundary. Use the :attr:`stream` instead. """) @cached_property def args(self): """The parsed URL parameters (the part in the URL after the question mark). By default an :class:`~werkzeug.datastructures.ImmutableMultiDict` is returned from this function. This can be changed by setting :attr:`parameter_storage_class` to a different type. This might be necessary if the order of the form data is important. """ return url_decode(wsgi_get_bytes(self.environ.get('QUERY_STRING', '')), self.url_charset, errors=self.encoding_errors, cls=self.parameter_storage_class) @cached_property def data(self): """ Contains the incoming request data as string in case it came with a mimetype Werkzeug does not handle. """ if self.disable_data_descriptor: raise AttributeError('data descriptor is disabled') # XXX: this should eventually be deprecated. # We trigger form data parsing first which means that the descriptor # will not cache the data that would otherwise be .form or .files # data. This restores the behavior that was there in Werkzeug # before 0.9. New code should use :meth:`get_data` explicitly as # this will make behavior explicit. return self.get_data(parse_form_data=True) def get_data(self, cache=True, as_text=False, parse_form_data=False): """This reads the buffered incoming data from the client into one bytestring. By default this is cached but that behavior can be changed by setting `cache` to `False`. Usually it's a bad idea to call this method without checking the content length first as a client could send dozens of megabytes or more to cause memory problems on the server. Note that if the form data was already parsed this method will not return anything as form data parsing does not cache the data like this method does. To implicitly invoke form data parsing function set `parse_form_data` to `True`. When this is done the return value of this method will be an empty string if the form parser handles the data. This generally is not necessary as if the whole data is cached (which is the default) the form parser will used the cached data to parse the form data. Please be generally aware of checking the content length first in any case before calling this method to avoid exhausting server memory. If `as_text` is set to `True` the return value will be a decoded unicode string. .. 
versionadded:: 0.9 """ rv = getattr(self, '_cached_data', None) if rv is None: if parse_form_data: self._load_form_data() rv = self.stream.read() if cache: self._cached_data = rv if as_text: rv = rv.decode(self.charset, self.encoding_errors) return rv @cached_property def form(self): """The form parameters. By default an :class:`~werkzeug.datastructures.ImmutableMultiDict` is returned from this function. This can be changed by setting :attr:`parameter_storage_class` to a different type. This might be necessary if the order of the form data is important. Please keep in mind that file uploads will not end up here, but instead in the :attr:`files` attribute. .. versionchanged:: 0.9 Previous to Werkzeug 0.9 this would only contain form data for POST and PUT requests. """ self._load_form_data() return self.form @cached_property def values(self): """A :class:`werkzeug.datastructures.CombinedMultiDict` that combines :attr:`args` and :attr:`form`.""" args = [] for d in self.args, self.form: if not isinstance(d, MultiDict): d = MultiDict(d) args.append(d) return CombinedMultiDict(args) @cached_property def files(self): """:class:`~werkzeug.datastructures.MultiDict` object containing all uploaded files. Each key in :attr:`files` is the name from the ````. Each value in :attr:`files` is a Werkzeug :class:`~werkzeug.datastructures.FileStorage` object. It basically behaves like a standard file object you know from Python, with the difference that it also has a :meth:`~werkzeug.datastructures.FileStorage.save` function that can store the file on the filesystem. Note that :attr:`files` will only contain data if the request method was POST, PUT or PATCH and the ``
<form>
    `` that posted to the request had ``enctype="multipart/form-data"``. It will be empty otherwise. See the :class:`~werkzeug.datastructures.MultiDict` / :class:`~werkzeug.datastructures.FileStorage` documentation for more details about the used data structure. """ self._load_form_data() return self.files @cached_property def cookies(self): """A :class:`dict` with the contents of all cookies transmitted with the request.""" return parse_cookie(self.environ, self.charset, self.encoding_errors, cls=self.dict_storage_class) @cached_property def headers(self): """The headers from the WSGI environ as immutable :class:`~werkzeug.datastructures.EnvironHeaders`. """ return EnvironHeaders(self.environ) @cached_property def path(self): """Requested path as unicode. This works a bit like the regular path info in the WSGI environment but will always include a leading slash, even if the URL root is accessed. """ raw_path = wsgi_decoding_dance(self.environ.get('PATH_INFO') or '', self.charset, self.encoding_errors) return '/' + raw_path.lstrip('/') @cached_property def full_path(self): """Requested path as unicode, including the query string.""" return self.path + u'?' + to_unicode(self.query_string, self.url_charset) @cached_property def script_root(self): """The root path of the script without the trailing slash.""" raw_path = wsgi_decoding_dance(self.environ.get('SCRIPT_NAME') or '', self.charset, self.encoding_errors) return raw_path.rstrip('/') @cached_property def url(self): """The reconstructed current URL as IRI. See also: :attr:`trusted_hosts`. """ return get_current_url(self.environ, trusted_hosts=self.trusted_hosts) @cached_property def base_url(self): """Like :attr:`url` but without the querystring See also: :attr:`trusted_hosts`. """ return get_current_url(self.environ, strip_querystring=True, trusted_hosts=self.trusted_hosts) @cached_property def url_root(self): """The full URL root (with hostname), this is the application root as IRI. See also: :attr:`trusted_hosts`. """ return get_current_url(self.environ, True, trusted_hosts=self.trusted_hosts) @cached_property def host_url(self): """Just the host with scheme as IRI. See also: :attr:`trusted_hosts`. """ return get_current_url(self.environ, host_only=True, trusted_hosts=self.trusted_hosts) @cached_property def host(self): """Just the host including the port if available. See also: :attr:`trusted_hosts`. """ return get_host(self.environ, trusted_hosts=self.trusted_hosts) query_string = environ_property( 'QUERY_STRING', '', read_only=True, load_func=wsgi_get_bytes, doc='The URL parameters as raw bytestring.') method = environ_property( 'REQUEST_METHOD', 'GET', read_only=True, load_func=lambda x: x.upper(), doc="The request method. (For example ``'GET'`` or ``'POST'``).") @cached_property def access_route(self): """If a forwarded header exists this is a list of all ip addresses from the client ip to the last proxy server. 
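# Sketch of the URL, path and forwarding properties described above, using
# ``werkzeug.test.create_environ`` (host and address values are examples only).
from werkzeug.test import create_environ
from werkzeug.wrappers import Request

environ = create_environ('/index?lang=en', 'http://example.org/app')
environ['HTTP_X_FORWARDED_FOR'] = '203.0.113.7, 10.0.0.2'
req = Request(environ)
assert req.path == u'/index'
assert req.script_root == u'/app'
assert req.host == 'example.org'
# access_route lists the client and every proxy from X-Forwarded-For.
assert list(req.access_route) == ['203.0.113.7', '10.0.0.2']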
""" if 'HTTP_X_FORWARDED_FOR' in self.environ: addr = self.environ['HTTP_X_FORWARDED_FOR'].split(',') return self.list_storage_class([x.strip() for x in addr]) elif 'REMOTE_ADDR' in self.environ: return self.list_storage_class([self.environ['REMOTE_ADDR']]) return self.list_storage_class() @property def remote_addr(self): """The remote address of the client.""" return self.environ.get('REMOTE_ADDR') remote_user = environ_property('REMOTE_USER', doc=''' If the server supports user authentication, and the script is protected, this attribute contains the username the user has authenticated as.''') scheme = environ_property('wsgi.url_scheme', doc=''' URL scheme (http or https). .. versionadded:: 0.7''') @property def is_xhr(self): """True if the request was triggered via a JavaScript XMLHttpRequest. This only works with libraries that support the ``X-Requested-With`` header and set it to "XMLHttpRequest". Libraries that do that are prototype, jQuery and Mochikit and probably some more. .. deprecated:: 0.13 ``X-Requested-With`` is not standard and is unreliable. """ warn(DeprecationWarning( 'Request.is_xhr is deprecated. Given that the X-Requested-With ' 'header is not a part of any spec, it is not reliable' ), stacklevel=2) return self.environ.get( 'HTTP_X_REQUESTED_WITH', '' ).lower() == 'xmlhttprequest' is_secure = property(lambda x: x.environ['wsgi.url_scheme'] == 'https', doc='`True` if the request is secure.') is_multithread = environ_property('wsgi.multithread', doc=''' boolean that is `True` if the application is served by a multithreaded WSGI server.''') is_multiprocess = environ_property('wsgi.multiprocess', doc=''' boolean that is `True` if the application is served by a WSGI server that spawns multiple processes.''') is_run_once = environ_property('wsgi.run_once', doc=''' boolean that is `True` if the application will be executed only once in a process lifetime. This is the case for CGI for example, but it's not guaranteed that the execution only happens one time.''') class BaseResponse(object): """Base response class. The most important fact about a response object is that it's a regular WSGI application. It's initialized with a couple of response parameters (headers, body, status code etc.) and will start a valid WSGI response when called with the environ and start response callable. Because it's a WSGI application itself processing usually ends before the actual response is sent to the server. This helps debugging systems because they can catch all the exceptions before responses are started. Here a small example WSGI application that takes advantage of the response objects:: from werkzeug.wrappers import BaseResponse as Response def index(): return Response('Index page') def application(environ, start_response): path = environ.get('PATH_INFO') or '/' if path == '/': response = index() else: response = Response('Not Found', status=404) return response(environ, start_response) Like :class:`BaseRequest` which object is lacking a lot of functionality implemented in mixins. This gives you a better control about the actual API of your response objects, so you can create subclasses and add custom functionality. A full featured response object is available as :class:`Response` which implements a couple of useful mixins. To enforce a new type of already existing responses you can use the :meth:`force_type` method. This is useful if you're working with different subclasses of response objects and you want to post process them with a known interface. 
Per default the response object will assume all the text data is `utf-8` encoded. Please refer to `the unicode chapter `_ for more details about customizing the behavior. Response can be any kind of iterable or string. If it's a string it's considered being an iterable with one item which is the string passed. Headers can be a list of tuples or a :class:`~werkzeug.datastructures.Headers` object. Special note for `mimetype` and `content_type`: For most mime types `mimetype` and `content_type` work the same, the difference affects only 'text' mimetypes. If the mimetype passed with `mimetype` is a mimetype starting with `text/`, the charset parameter of the response object is appended to it. In contrast the `content_type` parameter is always added as header unmodified. .. versionchanged:: 0.5 the `direct_passthrough` parameter was added. :param response: a string or response iterable. :param status: a string with a status or an integer with the status code. :param headers: a list of headers or a :class:`~werkzeug.datastructures.Headers` object. :param mimetype: the mimetype for the response. See notice above. :param content_type: the content type for the response. See notice above. :param direct_passthrough: if set to `True` :meth:`iter_encoded` is not called before iteration which makes it possible to pass special iterators through unchanged (see :func:`wrap_file` for more details.) """ #: the charset of the response. charset = 'utf-8' #: the default status if none is provided. default_status = 200 #: the default mimetype if none is provided. default_mimetype = 'text/plain' #: if set to `False` accessing properties on the response object will #: not try to consume the response iterator and convert it into a list. #: #: .. versionadded:: 0.6.2 #: #: That attribute was previously called `implicit_seqence_conversion`. #: (Notice the typo). If you did use this feature, you have to adapt #: your code to the name change. implicit_sequence_conversion = True #: Should this response object correct the location header to be RFC #: conformant? This is true by default. #: #: .. versionadded:: 0.8 autocorrect_location_header = True #: Should this response object automatically set the content-length #: header if possible? This is true by default. #: #: .. versionadded:: 0.8 automatically_set_content_length = True #: Warn if a cookie header exceeds this size. The default, 4093, should be #: safely `supported by most browsers `_. A cookie larger than #: this size will still be sent, but it may be ignored or handled #: incorrectly by some browsers. Set to 0 to disable this check. #: #: .. versionadded:: 0.13 #: #: .. 
_`cookie`: http://browsercookielimits.squawky.net/ max_cookie_size = 4093 def __init__(self, response=None, status=None, headers=None, mimetype=None, content_type=None, direct_passthrough=False): if isinstance(headers, Headers): self.headers = headers elif not headers: self.headers = Headers() else: self.headers = Headers(headers) if content_type is None: if mimetype is None and 'content-type' not in self.headers: mimetype = self.default_mimetype if mimetype is not None: mimetype = get_content_type(mimetype, self.charset) content_type = mimetype if content_type is not None: self.headers['Content-Type'] = content_type if status is None: status = self.default_status if isinstance(status, integer_types): self.status_code = status else: self.status = status self.direct_passthrough = direct_passthrough self._on_close = [] # we set the response after the headers so that if a class changes # the charset attribute, the data is set in the correct charset. if response is None: self.response = [] elif isinstance(response, (text_type, bytes, bytearray)): self.set_data(response) else: self.response = response def call_on_close(self, func): """Adds a function to the internal list of functions that should be called as part of closing down the response. Since 0.7 this function also returns the function that was passed so that this can be used as a decorator. .. versionadded:: 0.6 """ self._on_close.append(func) return func def __repr__(self): if self.is_sequence: body_info = '%d bytes' % sum(map(len, self.iter_encoded())) else: body_info = 'streamed' if self.is_streamed else 'likely-streamed' return '<%s %s [%s]>' % ( self.__class__.__name__, body_info, self.status ) @classmethod def force_type(cls, response, environ=None): """Enforce that the WSGI response is a response object of the current type. Werkzeug will use the :class:`BaseResponse` internally in many situations like the exceptions. If you call :meth:`get_response` on an exception you will get back a regular :class:`BaseResponse` object, even if you are using a custom subclass. This method can enforce a given response type, and it will also convert arbitrary WSGI callables into response objects if an environ is provided:: # convert a Werkzeug response object into an instance of the # MyResponseClass subclass. response = MyResponseClass.force_type(response) # convert any WSGI application into a response object response = MyResponseClass.force_type(response, environ) This is especially useful if you want to post-process responses in the main dispatcher and use functionality provided by your subclass. Keep in mind that this will modify response objects in place if possible! :param response: a response object or wsgi application. :param environ: a WSGI environment object. :return: a response object. """ if not isinstance(response, BaseResponse): if environ is None: raise TypeError('cannot convert WSGI application into ' 'response objects without an environ') response = BaseResponse(*_run_wsgi_app(response, environ)) response.__class__ = cls return response @classmethod def from_app(cls, app, environ, buffered=False): """Create a new response object from an application output. This works best if you pass it an application that returns a generator all the time. Sometimes applications may use the `write()` callable returned by the `start_response` function. This tries to resolve such edge cases automatically. But if you don't get the expected output you should set `buffered` to `True` which enforces buffering. 
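# Sketch of ``from_app`` as documented above: run a plain WSGI app against
# a test environ and capture its output as a response object (``wsgi_app``
# is an example name).
from werkzeug.test import create_environ
from werkzeug.wrappers import Response

def wsgi_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

resp = Response.from_app(wsgi_app, create_environ('/'))
assert resp.status_code == 200
assert resp.get_data() == b'hello'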
:param app: the WSGI application to execute. :param environ: the WSGI environment to execute against. :param buffered: set to `True` to enforce buffering. :return: a response object. """ return cls(*_run_wsgi_app(app, environ, buffered)) def _get_status_code(self): return self._status_code def _set_status_code(self, code): self._status_code = code try: self._status = '%d %s' % (code, HTTP_STATUS_CODES[code].upper()) except KeyError: self._status = '%d UNKNOWN' % code status_code = property(_get_status_code, _set_status_code, doc='The HTTP Status code as number') del _get_status_code, _set_status_code def _get_status(self): return self._status def _set_status(self, value): try: self._status = to_native(value) except AttributeError: raise TypeError('Invalid status argument') try: self._status_code = int(self._status.split(None, 1)[0]) except ValueError: self._status_code = 0 self._status = '0 %s' % self._status except IndexError: raise ValueError('Empty status argument') status = property(_get_status, _set_status, doc='The HTTP Status code') del _get_status, _set_status def get_data(self, as_text=False): """The string representation of the request body. Whenever you call this property the request iterable is encoded and flattened. This can lead to unwanted behavior if you stream big data. This behavior can be disabled by setting :attr:`implicit_sequence_conversion` to `False`. If `as_text` is set to `True` the return value will be a decoded unicode string. .. versionadded:: 0.9 """ self._ensure_sequence() rv = b''.join(self.iter_encoded()) if as_text: rv = rv.decode(self.charset) return rv def set_data(self, value): """Sets a new string as response. The value set must either by a unicode or bytestring. If a unicode string is set it's encoded automatically to the charset of the response (utf-8 by default). .. versionadded:: 0.9 """ # if an unicode string is set, it's encoded directly so that we # can set the content length if isinstance(value, text_type): value = value.encode(self.charset) else: value = bytes(value) self.response = [value] if self.automatically_set_content_length: self.headers['Content-Length'] = str(len(value)) data = property(get_data, set_data, doc=''' A descriptor that calls :meth:`get_data` and :meth:`set_data`. This should not be used and will eventually get deprecated. ''') def calculate_content_length(self): """Returns the content length if available or `None` otherwise.""" try: self._ensure_sequence() except RuntimeError: return None return sum(len(x) for x in self.iter_encoded()) def _ensure_sequence(self, mutable=False): """This method can be called by methods that need a sequence. If `mutable` is true, it will also ensure that the response sequence is a standard Python list. .. versionadded:: 0.6 """ if self.is_sequence: # if we need a mutable object, we ensure it's a list. if mutable and not isinstance(self.response, list): self.response = list(self.response) return if self.direct_passthrough: raise RuntimeError('Attempted implicit sequence conversion ' 'but the response object is in direct ' 'passthrough mode.') if not self.implicit_sequence_conversion: raise RuntimeError('The response object required the iterable ' 'to be a sequence, but the implicit ' 'conversion was disabled. Call ' 'make_sequence() yourself.') self.make_sequence() def make_sequence(self): """Converts the response iterator in a list. By default this happens automatically if required. 
If `implicit_sequence_conversion` is disabled, this method is not automatically called and some properties might raise exceptions. This also encodes all the items. .. versionadded:: 0.6 """ if not self.is_sequence: # if we consume an iterable we have to ensure that the close # method of the iterable is called if available when we tear # down the response close = getattr(self.response, 'close', None) self.response = list(self.iter_encoded()) if close is not None: self.call_on_close(close) def iter_encoded(self): """Iter the response encoded with the encoding of the response. If the response object is invoked as WSGI application the return value of this method is used as application iterator unless :attr:`direct_passthrough` was activated. """ if __debug__: _warn_if_string(self.response) # Encode in a separate function so that self.response is fetched # early. This allows us to wrap the response with the return # value from get_app_iter or iter_encoded. return _iter_encoded(self.response, self.charset) def set_cookie(self, key, value='', max_age=None, expires=None, path='/', domain=None, secure=False, httponly=False, samesite=None): """Sets a cookie. The parameters are the same as in the cookie `Morsel` object in the Python standard library but it accepts unicode data, too. A warning is raised if the size of the cookie header exceeds :attr:`max_cookie_size`, but the header will still be set. :param key: the key (name) of the cookie to be set. :param value: the value of the cookie. :param max_age: should be a number of seconds, or `None` (default) if the cookie should last only as long as the client's browser session. :param expires: should be a `datetime` object or UNIX timestamp. :param path: limits the cookie to a given path, per default it will span the whole domain. :param domain: if you want to set a cross-domain cookie. For example, ``domain=".example.com"`` will set a cookie that is readable by the domain ``www.example.com``, ``foo.example.com`` etc. Otherwise, a cookie will only be readable by the domain that set it. :param secure: If `True`, the cookie will only be available via HTTPS :param httponly: disallow JavaScript to access the cookie. This is an extension to the cookie standard and probably not supported by all browsers. :param samesite: Limits the scope of the cookie such that it will only be attached to requests if those requests are "same-site". """ self.headers.add('Set-Cookie', dump_cookie( key, value=value, max_age=max_age, expires=expires, path=path, domain=domain, secure=secure, httponly=httponly, charset=self.charset, max_size=self.max_cookie_size, samesite=samesite )) def delete_cookie(self, key, path='/', domain=None): """Delete a cookie. Fails silently if key doesn't exist. :param key: the key (name) of the cookie to be deleted. :param path: if the cookie that should be deleted was limited to a path, the path has to be defined here. :param domain: if the cookie that should be deleted was limited to a domain, that domain has to be defined here. """ self.set_cookie(key, expires=0, max_age=0, path=path, domain=domain) @property def is_streamed(self): """If the response is streamed (the response is not an iterable with a length information) this property is `True`. In this case streamed means that there is no information about the number of iterations. This is usually `True` if a generator is passed to the response object. This is useful for checking before applying some sort of post filtering that should not take place for streamed responses. 
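# Cookie sketch for ``set_cookie``/``delete_cookie`` documented above
# (cookie names and values are examples only).
from werkzeug.wrappers import Response

resp = Response('ok')
resp.set_cookie('session', 'abc123', max_age=3600,
                secure=True, httponly=True, samesite='Lax')
resp.delete_cookie('old_session')
# Both calls end up as Set-Cookie headers on the response.
assert len(resp.headers.getlist('Set-Cookie')) == 2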
""" try: len(self.response) except (TypeError, AttributeError): return True return False @property def is_sequence(self): """If the iterator is buffered, this property will be `True`. A response object will consider an iterator to be buffered if the response attribute is a list or tuple. .. versionadded:: 0.6 """ return isinstance(self.response, (tuple, list)) def close(self): """Close the wrapped response if possible. You can also use the object in a with statement which will automatically close it. .. versionadded:: 0.9 Can now be used in a with statement. """ if hasattr(self.response, 'close'): self.response.close() for func in self._on_close: func() def __enter__(self): return self def __exit__(self, exc_type, exc_value, tb): self.close() def freeze(self): """Call this method if you want to make your response object ready for being pickled. This buffers the generator if there is one. It will also set the `Content-Length` header to the length of the body. .. versionchanged:: 0.6 The `Content-Length` header is now set. """ # we explicitly set the length to a list of the *encoded* response # iterator. Even if the implicit sequence conversion is disabled. self.response = list(self.iter_encoded()) self.headers['Content-Length'] = str(sum(map(len, self.response))) def get_wsgi_headers(self, environ): """This is automatically called right before the response is started and returns headers modified for the given environment. It returns a copy of the headers from the response with some modifications applied if necessary. For example the location header (if present) is joined with the root URL of the environment. Also the content length is automatically set to zero here for certain status codes. .. versionchanged:: 0.6 Previously that function was called `fix_headers` and modified the response object in place. Also since 0.6, IRIs in location and content-location headers are handled properly. Also starting with 0.6, Werkzeug will attempt to set the content length if it is able to figure it out on its own. This is the case if all the strings in the response iterable are already encoded and the iterable is buffered. :param environ: the WSGI environment of the request. :return: returns a new :class:`~werkzeug.datastructures.Headers` object. """ headers = Headers(self.headers) location = None content_location = None content_length = None status = self.status_code # iterate over the headers to find all values in one go. Because # get_wsgi_headers is used each response that gives us a tiny # speedup. for key, value in headers: ikey = key.lower() if ikey == u'location': location = value elif ikey == u'content-location': content_location = value elif ikey == u'content-length': content_length = value # make sure the location header is an absolute URL if location is not None: old_location = location if isinstance(location, text_type): # Safe conversion is necessary here as we might redirect # to a broken URI scheme (for instance itms-services). 
location = iri_to_uri(location, safe_conversion=True) if self.autocorrect_location_header: current_url = get_current_url(environ, root_only=True) if isinstance(current_url, text_type): current_url = iri_to_uri(current_url) location = url_join(current_url, location) if location != old_location: headers['Location'] = location # make sure the content location is a URL if content_location is not None and \ isinstance(content_location, text_type): headers['Content-Location'] = iri_to_uri(content_location) if status in (304, 412): remove_entity_headers(headers) # if we can determine the content length automatically, we # should try to do that. But only if this does not involve # flattening the iterator or encoding of unicode strings in # the response. We however should not do that if we have a 304 # response. if self.automatically_set_content_length and \ self.is_sequence and content_length is None and \ status not in (204, 304) and \ not (100 <= status < 200): try: content_length = sum(len(to_bytes(x, 'ascii')) for x in self.response) except UnicodeError: # aha, something non-bytestringy in there, too bad, we # can't safely figure out the length of the response. pass else: headers['Content-Length'] = str(content_length) return headers def get_app_iter(self, environ): """Returns the application iterator for the given environ. Depending on the request method and the current status code the return value might be an empty response rather than the one from the response. If the request method is `HEAD` or the status code is in a range where the HTTP specification requires an empty response, an empty iterable is returned. .. versionadded:: 0.6 :param environ: the WSGI environment of the request. :return: a response iterable. """ status = self.status_code if environ['REQUEST_METHOD'] == 'HEAD' or \ 100 <= status < 200 or status in (204, 304, 412): iterable = () elif self.direct_passthrough: if __debug__: _warn_if_string(self.response) return self.response else: iterable = self.iter_encoded() return ClosingIterator(iterable, self.close) def get_wsgi_response(self, environ): """Returns the final WSGI response as tuple. The first item in the tuple is the application iterator, the second the status and the third the list of headers. The response returned is created specially for the given environment. For example if the request method in the WSGI environment is ``'HEAD'`` the response will be empty and only the headers and status code will be present. .. versionadded:: 0.6 :param environ: the WSGI environment of the request. :return: an ``(app_iter, status, headers)`` tuple. """ headers = self.get_wsgi_headers(environ) app_iter = self.get_app_iter(environ) return app_iter, self.status, headers.to_wsgi_list() def __call__(self, environ, start_response): """Process this response as WSGI application. :param environ: the WSGI environment. :param start_response: the response callable provided by the WSGI server. :return: an application iterator """ app_iter, status, headers = self.get_wsgi_response(environ) start_response(status, headers) return app_iter class AcceptMixin(object): """A mixin for classes with an :attr:`~BaseResponse.environ` attribute to get all the HTTP accept headers as :class:`~werkzeug.datastructures.Accept` objects (or subclasses thereof). """ @cached_property def accept_mimetypes(self): """List of mimetypes this client supports as :class:`~werkzeug.datastructures.MIMEAccept` object. 
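# Sketch of the Accept-* parsing provided by AcceptMixin (header values
# are examples only).
from werkzeug.test import create_environ
from werkzeug.wrappers import Request

env = create_environ('/', headers={
    'Accept': 'text/html,application/json;q=0.8',
    'Accept-Language': 'de;q=0.9, en',
})
req = Request(env)
assert req.accept_mimetypes.best == 'text/html'
assert req.accept_languages.best == 'en'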
""" return parse_accept_header(self.environ.get('HTTP_ACCEPT'), MIMEAccept) @cached_property def accept_charsets(self): """List of charsets this client supports as :class:`~werkzeug.datastructures.CharsetAccept` object. """ return parse_accept_header(self.environ.get('HTTP_ACCEPT_CHARSET'), CharsetAccept) @cached_property def accept_encodings(self): """List of encodings this client accepts. Encodings in a HTTP term are compression encodings such as gzip. For charsets have a look at :attr:`accept_charset`. """ return parse_accept_header(self.environ.get('HTTP_ACCEPT_ENCODING')) @cached_property def accept_languages(self): """List of languages this client accepts as :class:`~werkzeug.datastructures.LanguageAccept` object. .. versionchanged 0.5 In previous versions this was a regular :class:`~werkzeug.datastructures.Accept` object. """ return parse_accept_header(self.environ.get('HTTP_ACCEPT_LANGUAGE'), LanguageAccept) class ETagRequestMixin(object): """Add entity tag and cache descriptors to a request object or object with a WSGI environment available as :attr:`~BaseRequest.environ`. This not only provides access to etags but also to the cache control header. """ @cached_property def cache_control(self): """A :class:`~werkzeug.datastructures.RequestCacheControl` object for the incoming cache control headers. """ cache_control = self.environ.get('HTTP_CACHE_CONTROL') return parse_cache_control_header(cache_control, None, RequestCacheControl) @cached_property def if_match(self): """An object containing all the etags in the `If-Match` header. :rtype: :class:`~werkzeug.datastructures.ETags` """ return parse_etags(self.environ.get('HTTP_IF_MATCH')) @cached_property def if_none_match(self): """An object containing all the etags in the `If-None-Match` header. :rtype: :class:`~werkzeug.datastructures.ETags` """ return parse_etags(self.environ.get('HTTP_IF_NONE_MATCH')) @cached_property def if_modified_since(self): """The parsed `If-Modified-Since` header as datetime object.""" return parse_date(self.environ.get('HTTP_IF_MODIFIED_SINCE')) @cached_property def if_unmodified_since(self): """The parsed `If-Unmodified-Since` header as datetime object.""" return parse_date(self.environ.get('HTTP_IF_UNMODIFIED_SINCE')) @cached_property def if_range(self): """The parsed `If-Range` header. .. versionadded:: 0.7 :rtype: :class:`~werkzeug.datastructures.IfRange` """ return parse_if_range_header(self.environ.get('HTTP_IF_RANGE')) @cached_property def range(self): """The parsed `Range` header. .. versionadded:: 0.7 :rtype: :class:`~werkzeug.datastructures.Range` """ return parse_range_header(self.environ.get('HTTP_RANGE')) class UserAgentMixin(object): """Adds a `user_agent` attribute to the request object which contains the parsed user agent of the browser that triggered the request as a :class:`~werkzeug.useragents.UserAgent` object. """ @cached_property def user_agent(self): """The current user agent.""" from werkzeug.useragents import UserAgent return UserAgent(self.environ) class AuthorizationMixin(object): """Adds an :attr:`authorization` property that represents the parsed value of the `Authorization` header as :class:`~werkzeug.datastructures.Authorization` object. """ @cached_property def authorization(self): """The `Authorization` object in parsed form.""" header = self.environ.get('HTTP_AUTHORIZATION') return parse_authorization_header(header) class StreamOnlyMixin(object): """If mixed in before the request object this will change the bahavior of it to disable handling of form parsing. 
This disables the :attr:`files`, :attr:`form` attributes and will just provide a :attr:`stream` attribute that however is always available. .. versionadded:: 0.9 """ disable_data_descriptor = True want_form_data_parsed = False class ETagResponseMixin(object): """Adds extra functionality to a response object for etag and cache handling. This mixin requires an object with at least a `headers` object that implements a dict like interface similar to :class:`~werkzeug.datastructures.Headers`. If you want the :meth:`freeze` method to automatically add an etag, you have to mixin this method before the response base class. The default response class does not do that. """ @property def cache_control(self): """The Cache-Control general-header field is used to specify directives that MUST be obeyed by all caching mechanisms along the request/response chain. """ def on_update(cache_control): if not cache_control and 'cache-control' in self.headers: del self.headers['cache-control'] elif cache_control: self.headers['Cache-Control'] = cache_control.to_header() return parse_cache_control_header(self.headers.get('cache-control'), on_update, ResponseCacheControl) def _wrap_response(self, start, length): """Wrap existing Response in case of Range Request context.""" if self.status_code == 206: self.response = _RangeWrapper(self.response, start, length) def _is_range_request_processable(self, environ): """Return ``True`` if `Range` header is present and if underlying resource is considered unchanged when compared with `If-Range` header. """ return ( 'HTTP_IF_RANGE' not in environ or not is_resource_modified( environ, self.headers.get('etag'), None, self.headers.get('last-modified'), ignore_if_range=False ) ) and 'HTTP_RANGE' in environ def _process_range_request(self, environ, complete_length=None, accept_ranges=None): """Handle Range Request related headers (RFC7233). If `Accept-Ranges` header is valid, and Range Request is processable, we set the headers as described by the RFC, and wrap the underlying response in a RangeWrapper. Returns ``True`` if Range Request can be fulfilled, ``False`` otherwise. :raises: :class:`~werkzeug.exceptions.RequestedRangeNotSatisfiable` if `Range` header could not be parsed or satisfied. """ from werkzeug.exceptions import RequestedRangeNotSatisfiable if accept_ranges is None: return False self.headers['Accept-Ranges'] = accept_ranges if not self._is_range_request_processable(environ) or complete_length is None: return False parsed_range = parse_range_header(environ.get('HTTP_RANGE')) if parsed_range is None: raise RequestedRangeNotSatisfiable(complete_length) range_tuple = parsed_range.range_for_length(complete_length) content_range_header = parsed_range.to_content_range_header(complete_length) if range_tuple is None or content_range_header is None: raise RequestedRangeNotSatisfiable(complete_length) content_length = range_tuple[1] - range_tuple[0] # Be sure not to send 206 response # if requested range is the full content. if content_length != complete_length: self.headers['Content-Length'] = content_length self.content_range = content_range_header self.status_code = 206 self._wrap_response(range_tuple[0], content_length) return True return False def make_conditional(self, request_or_environ, accept_ranges=False, complete_length=None): """Make the response conditional to the request. This method works best if an etag was defined for the response already. The `add_etag` method can be used to do that. If called without etag just the date header is set. 
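# Conditional response sketch for ``make_conditional`` as described above:
# with a matching ``If-None-Match`` header the status becomes 304 (body and
# URL are examples only).
from werkzeug.test import create_environ
from werkzeug.wrappers import Response

resp = Response('cacheable body')
resp.add_etag()
etag, _ = resp.get_etag()

env = create_environ('/', headers={'If-None-Match': '"%s"' % etag})
resp.make_conditional(env)
assert resp.status_code == 304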
This does nothing if the request method in the request or environ is anything but GET or HEAD. For optimal performance when handling range requests, it's recommended that your response data object implements `seekable`, `seek` and `tell` methods as described by :py:class:`io.IOBase`. Objects returned by :meth:`~werkzeug.wsgi.wrap_file` automatically implement those methods. It does not remove the body of the response because that's something the :meth:`__call__` function does for us automatically. Returns self so that you can do ``return resp.make_conditional(req)`` but modifies the object in-place. :param request_or_environ: a request object or WSGI environment to be used to make the response conditional against. :param accept_ranges: This parameter dictates the value of `Accept-Ranges` header. If ``False`` (default), the header is not set. If ``True``, it will be set to ``"bytes"``. If ``None``, it will be set to ``"none"``. If it's a string, it will use this value. :param complete_length: Will be used only in valid Range Requests. It will set `Content-Range` complete length value and compute `Content-Length` real value. This parameter is mandatory for successful Range Requests completion. :raises: :class:`~werkzeug.exceptions.RequestedRangeNotSatisfiable` if `Range` header could not be parsed or satisfied. """ environ = _get_environ(request_or_environ) if environ['REQUEST_METHOD'] in ('GET', 'HEAD'): # if the date is not in the headers, add it now. We however # will not override an already existing header. Unfortunately # this header will be overriden by many WSGI servers including # wsgiref. if 'date' not in self.headers: self.headers['Date'] = http_date() accept_ranges = _clean_accept_ranges(accept_ranges) is206 = self._process_range_request(environ, complete_length, accept_ranges) if not is206 and not is_resource_modified( environ, self.headers.get('etag'), None, self.headers.get('last-modified') ): if parse_etags(environ.get('HTTP_IF_MATCH')): self.status_code = 412 else: self.status_code = 304 if self.automatically_set_content_length and 'content-length' not in self.headers: length = self.calculate_content_length() if length is not None: self.headers['Content-Length'] = length return self def add_etag(self, overwrite=False, weak=False): """Add an etag for the current response if there is none yet.""" if overwrite or 'etag' not in self.headers: self.set_etag(generate_etag(self.get_data()), weak) def set_etag(self, etag, weak=False): """Set the etag, and override the old one if there was one.""" self.headers['ETag'] = quote_etag(etag, weak) def get_etag(self): """Return a tuple in the form ``(etag, is_weak)``. If there is no ETag the return value is ``(None, None)``. """ return unquote_etag(self.headers.get('ETag')) def freeze(self, no_etag=False): """Call this method if you want to make your response object ready for pickeling. This buffers the generator if there is one. This also sets the etag unless `no_etag` is set to `True`. """ if not no_etag: self.add_etag() super(ETagResponseMixin, self).freeze() accept_ranges = header_property('Accept-Ranges', doc=''' The `Accept-Ranges` header. Even though the name would indicate that multiple values are supported, it must be one string token only. The values ``'bytes'`` and ``'none'`` are common. .. 
versionadded:: 0.7''') def _get_content_range(self): def on_update(rng): if not rng: del self.headers['content-range'] else: self.headers['Content-Range'] = rng.to_header() rv = parse_content_range_header(self.headers.get('content-range'), on_update) # always provide a content range object to make the descriptor # more user friendly. It provides an unset() method that can be # used to remove the header quickly. if rv is None: rv = ContentRange(None, None, None, on_update=on_update) return rv def _set_content_range(self, value): if not value: del self.headers['content-range'] elif isinstance(value, string_types): self.headers['Content-Range'] = value else: self.headers['Content-Range'] = value.to_header() content_range = property(_get_content_range, _set_content_range, doc=''' The `Content-Range` header as :class:`~werkzeug.datastructures.ContentRange` object. Even if the header is not set it wil provide such an object for easier manipulation. .. versionadded:: 0.7''') del _get_content_range, _set_content_range class ResponseStream(object): """A file descriptor like object used by the :class:`ResponseStreamMixin` to represent the body of the stream. It directly pushes into the response iterable of the response object. """ mode = 'wb+' def __init__(self, response): self.response = response self.closed = False def write(self, value): if self.closed: raise ValueError('I/O operation on closed file') self.response._ensure_sequence(mutable=True) self.response.response.append(value) self.response.headers.pop('Content-Length', None) return len(value) def writelines(self, seq): for item in seq: self.write(item) def close(self): self.closed = True def flush(self): if self.closed: raise ValueError('I/O operation on closed file') def isatty(self): if self.closed: raise ValueError('I/O operation on closed file') return False def tell(self): self.response._ensure_sequence() return sum(map(len, self.response.response)) @property def encoding(self): return self.response.charset class ResponseStreamMixin(object): """Mixin for :class:`BaseRequest` subclasses. Classes that inherit from this mixin will automatically get a :attr:`stream` property that provides a write-only interface to the response iterable. """ @cached_property def stream(self): """The response iterable as write-only stream.""" return ResponseStream(self) class CommonRequestDescriptorsMixin(object): """A mixin for :class:`BaseRequest` subclasses. Request objects that mix this class in will automatically get descriptors for a couple of HTTP headers with automatic type conversion. .. versionadded:: 0.5 """ content_type = environ_property('CONTENT_TYPE', doc=''' The Content-Type entity-header field indicates the media type of the entity-body sent to the recipient or, in the case of the HEAD method, the media type that would have been sent had the request been a GET.''') @cached_property def content_length(self): """The Content-Length entity-header field indicates the size of the entity-body in bytes or, in the case of the HEAD method, the size of the entity-body that would have been sent had the request been a GET. """ return get_content_length(self.environ) content_encoding = environ_property('HTTP_CONTENT_ENCODING', doc=''' The Content-Encoding entity-header field is used as a modifier to the media-type. When present, its value indicates what additional content codings have been applied to the entity-body, and thus what decoding mechanisms must be applied in order to obtain the media-type referenced by the Content-Type header field. .. 
versionadded:: 0.9''') content_md5 = environ_property('HTTP_CONTENT_MD5', doc=''' The Content-MD5 entity-header field, as defined in RFC 1864, is an MD5 digest of the entity-body for the purpose of providing an end-to-end message integrity check (MIC) of the entity-body. (Note: a MIC is good for detecting accidental modification of the entity-body in transit, but is not proof against malicious attacks.) .. versionadded:: 0.9''') referrer = environ_property('HTTP_REFERER', doc=''' The Referer[sic] request-header field allows the client to specify, for the server's benefit, the address (URI) of the resource from which the Request-URI was obtained (the "referrer", although the header field is misspelled).''') date = environ_property('HTTP_DATE', None, parse_date, doc=''' The Date general-header field represents the date and time at which the message was originated, having the same semantics as orig-date in RFC 822.''') max_forwards = environ_property('HTTP_MAX_FORWARDS', None, int, doc=''' The Max-Forwards request-header field provides a mechanism with the TRACE and OPTIONS methods to limit the number of proxies or gateways that can forward the request to the next inbound server.''') def _parse_content_type(self): if not hasattr(self, '_parsed_content_type'): self._parsed_content_type = \ parse_options_header(self.environ.get('CONTENT_TYPE', '')) @property def mimetype(self): """Like :attr:`content_type`, but without parameters (eg, without charset, type etc.) and always lowercase. For example if the content type is ``text/HTML; charset=utf-8`` the mimetype would be ``'text/html'``. """ self._parse_content_type() return self._parsed_content_type[0].lower() @property def mimetype_params(self): """The mimetype parameters as dict. For example if the content type is ``text/html; charset=utf-8`` the params would be ``{'charset': 'utf-8'}``. """ self._parse_content_type() return self._parsed_content_type[1] @cached_property def pragma(self): """The Pragma general-header field is used to include implementation-specific directives that might apply to any recipient along the request/response chain. All pragma directives specify optional behavior from the viewpoint of the protocol; however, some systems MAY require that behavior be consistent with the directives. """ return parse_set_header(self.environ.get('HTTP_PRAGMA', '')) class CommonResponseDescriptorsMixin(object): """A mixin for :class:`BaseResponse` subclasses. Response objects that mix this class in will automatically get descriptors for a couple of HTTP headers with automatic type conversion. """ def _get_mimetype(self): ct = self.headers.get('content-type') if ct: return ct.split(';')[0].strip() def _set_mimetype(self, value): self.headers['Content-Type'] = get_content_type(value, self.charset) def _get_mimetype_params(self): def on_update(d): self.headers['Content-Type'] = \ dump_options_header(self.mimetype, d) d = parse_options_header(self.headers.get('content-type', ''))[1] return CallbackDict(d, on_update) mimetype = property(_get_mimetype, _set_mimetype, doc=''' The mimetype (content type without charset etc.)''') mimetype_params = property(_get_mimetype_params, doc=''' The mimetype parameters as dict. For example if the content type is ``text/html; charset=utf-8`` the params would be ``{'charset': 'utf-8'}``. .. 
versionadded:: 0.5 ''') location = header_property('Location', doc=''' The Location response-header field is used to redirect the recipient to a location other than the Request-URI for completion of the request or identification of a new resource.''') age = header_property('Age', None, parse_age, dump_age, doc=''' The Age response-header field conveys the sender's estimate of the amount of time since the response (or its revalidation) was generated at the origin server. Age values are non-negative decimal integers, representing time in seconds.''') content_type = header_property('Content-Type', doc=''' The Content-Type entity-header field indicates the media type of the entity-body sent to the recipient or, in the case of the HEAD method, the media type that would have been sent had the request been a GET. ''') content_length = header_property('Content-Length', None, int, str, doc=''' The Content-Length entity-header field indicates the size of the entity-body, in decimal number of OCTETs, sent to the recipient or, in the case of the HEAD method, the size of the entity-body that would have been sent had the request been a GET.''') content_location = header_property('Content-Location', doc=''' The Content-Location entity-header field MAY be used to supply the resource location for the entity enclosed in the message when that entity is accessible from a location separate from the requested resource's URI.''') content_encoding = header_property('Content-Encoding', doc=''' The Content-Encoding entity-header field is used as a modifier to the media-type. When present, its value indicates what additional content codings have been applied to the entity-body, and thus what decoding mechanisms must be applied in order to obtain the media-type referenced by the Content-Type header field.''') content_md5 = header_property('Content-MD5', doc=''' The Content-MD5 entity-header field, as defined in RFC 1864, is an MD5 digest of the entity-body for the purpose of providing an end-to-end message integrity check (MIC) of the entity-body. (Note: a MIC is good for detecting accidental modification of the entity-body in transit, but is not proof against malicious attacks.) ''') date = header_property('Date', None, parse_date, http_date, doc=''' The Date general-header field represents the date and time at which the message was originated, having the same semantics as orig-date in RFC 822.''') expires = header_property('Expires', None, parse_date, http_date, doc=''' The Expires entity-header field gives the date/time after which the response is considered stale. 
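# Sketch of the typed header descriptors provided by
# CommonResponseDescriptorsMixin (values are examples only).
from datetime import datetime, timedelta
from werkzeug.wrappers import Response

resp = Response('payload')
resp.mimetype = 'application/json'
resp.expires = datetime.utcnow() + timedelta(hours=1)
assert resp.headers['Content-Type'] == 'application/json'
assert 'Expires' in resp.headers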
A stale cache entry may not normally be returned by a cache.''') last_modified = header_property('Last-Modified', None, parse_date, http_date, doc=''' The Last-Modified entity-header field indicates the date and time at which the origin server believes the variant was last modified.''') def _get_retry_after(self): value = self.headers.get('retry-after') if value is None: return elif value.isdigit(): return datetime.utcnow() + timedelta(seconds=int(value)) return parse_date(value) def _set_retry_after(self, value): if value is None: if 'retry-after' in self.headers: del self.headers['retry-after'] return elif isinstance(value, datetime): value = http_date(value) else: value = str(value) self.headers['Retry-After'] = value retry_after = property(_get_retry_after, _set_retry_after, doc=''' The Retry-After response-header field can be used with a 503 (Service Unavailable) response to indicate how long the service is expected to be unavailable to the requesting client. Time in seconds until expiration or date.''') def _set_property(name, doc=None): def fget(self): def on_update(header_set): if not header_set and name in self.headers: del self.headers[name] elif header_set: self.headers[name] = header_set.to_header() return parse_set_header(self.headers.get(name), on_update) def fset(self, value): if not value: del self.headers[name] elif isinstance(value, string_types): self.headers[name] = value else: self.headers[name] = dump_header(value) return property(fget, fset, doc=doc) vary = _set_property('Vary', doc=''' The Vary field value indicates the set of request-header fields that fully determines, while the response is fresh, whether a cache is permitted to use the response to reply to a subsequent request without revalidation.''') content_language = _set_property('Content-Language', doc=''' The Content-Language entity-header field describes the natural language(s) of the intended audience for the enclosed entity. Note that this might not be equivalent to all the languages used within the entity-body.''') allow = _set_property('Allow', doc=''' The Allow entity-header field lists the set of methods supported by the resource identified by the Request-URI. The purpose of this field is strictly to inform the recipient of valid methods associated with the resource. An Allow header field MUST be present in a 405 (Method Not Allowed) response.''') del _set_property, _get_mimetype, _set_mimetype, _get_retry_after, \ _set_retry_after class WWWAuthenticateMixin(object): """Adds a :attr:`www_authenticate` property to a response object.""" @property def www_authenticate(self): """The `WWW-Authenticate` header in a parsed form.""" def on_update(www_auth): if not www_auth and 'www-authenticate' in self.headers: del self.headers['www-authenticate'] elif www_auth: self.headers['WWW-Authenticate'] = www_auth.to_header() header = self.headers.get('www-authenticate') return parse_www_authenticate_header(header, on_update) class Request(BaseRequest, AcceptMixin, ETagRequestMixin, UserAgentMixin, AuthorizationMixin, CommonRequestDescriptorsMixin): """Full featured request object implementing the following mixins: - :class:`AcceptMixin` for accept header parsing - :class:`ETagRequestMixin` for etag and cache control handling - :class:`UserAgentMixin` for user agent introspection - :class:`AuthorizationMixin` for http auth handling - :class:`CommonRequestDescriptorsMixin` for common headers """ class PlainRequest(StreamOnlyMixin, Request): """A request object without special form parsing capabilities. .. 
versionadded:: 0.9 """ class Response(BaseResponse, ETagResponseMixin, ResponseStreamMixin, CommonResponseDescriptorsMixin, WWWAuthenticateMixin): """Full featured response object implementing the following mixins: - :class:`ETagResponseMixin` for etag and cache control handling - :class:`ResponseStreamMixin` to add support for the `stream` property - :class:`CommonResponseDescriptorsMixin` for various HTTP descriptors - :class:`WWWAuthenticateMixin` for HTTP authentication support """ werkzeug-0.14.1/werkzeug/wsgi.py000066400000000000000000001403031322225165500166320ustar00rootroot00000000000000# -*- coding: utf-8 -*- """ werkzeug.wsgi ~~~~~~~~~~~~~ This module implements WSGI related helpers. :copyright: (c) 2014 by the Werkzeug Team, see AUTHORS for more details. :license: BSD, see LICENSE for more details. """ import io try: import httplib except ImportError: from http import client as httplib import mimetypes import os import posixpath import re import socket from datetime import datetime from functools import partial, update_wrapper from itertools import chain from time import mktime, time from zlib import adler32 from werkzeug._compat import BytesIO, PY2, implements_iterator, iteritems, \ make_literal_wrapper, string_types, text_type, to_bytes, to_unicode, \ try_coerce_native, wsgi_get_bytes from werkzeug._internal import _empty_stream, _encode_idna from werkzeug.filesystem import get_filesystem_encoding from werkzeug.http import http_date, is_resource_modified, \ is_hop_by_hop_header from werkzeug.urls import uri_to_iri, url_join, url_parse, url_quote from werkzeug.datastructures import EnvironHeaders def responder(f): """Marks a function as responder. Decorate a function with it and it will automatically call the return value as WSGI application. Example:: @responder def application(environ, start_response): return Response('Hello World!') """ return update_wrapper(lambda *a: f(*a)(*a[-2:]), f) def get_current_url(environ, root_only=False, strip_querystring=False, host_only=False, trusted_hosts=None): """A handy helper function that recreates the full URL as IRI for the current request or parts of it. Here's an example: >>> from werkzeug.test import create_environ >>> env = create_environ("/?param=foo", "http://localhost/script") >>> get_current_url(env) 'http://localhost/script/?param=foo' >>> get_current_url(env, root_only=True) 'http://localhost/script/' >>> get_current_url(env, host_only=True) 'http://localhost/' >>> get_current_url(env, strip_querystring=True) 'http://localhost/script/' This optionally it verifies that the host is in a list of trusted hosts. If the host is not in there it will raise a :exc:`~werkzeug.exceptions.SecurityError`. Note that the string returned might contain unicode characters as the representation is an IRI not an URI. If you need an ASCII only representation you can use the :func:`~werkzeug.urls.iri_to_uri` function: >>> from werkzeug.urls import iri_to_uri >>> iri_to_uri(get_current_url(env)) 'http://localhost/script/?param=foo' :param environ: the WSGI environment to get the current URL from. :param root_only: set `True` if you only want the root URL. :param strip_querystring: set to `True` if you don't want the querystring. :param host_only: set to `True` if the host URL should be returned. :param trusted_hosts: a list of trusted hosts, see :func:`host_is_trusted` for more information. 
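# Usage sketch for get_current_url() and host_is_trusted() documented here
# (host names are examples only).
from werkzeug.test import create_environ
from werkzeug.wsgi import get_current_url, host_is_trusted

env = create_environ('/', 'http://example.org/')
assert get_current_url(env, root_only=True) == 'http://example.org/'
# A leading dot in the trusted list matches the domain and its subdomains;
# the port is normalized away before comparison.
assert host_is_trusted('example.org:80', ['.example.org'])
assert host_is_trusted('api.example.org', ['.example.org'])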
""" tmp = [environ['wsgi.url_scheme'], '://', get_host(environ, trusted_hosts)] cat = tmp.append if host_only: return uri_to_iri(''.join(tmp) + '/') cat(url_quote(wsgi_get_bytes(environ.get('SCRIPT_NAME', ''))).rstrip('/')) cat('/') if not root_only: cat(url_quote(wsgi_get_bytes(environ.get('PATH_INFO', '')).lstrip(b'/'))) if not strip_querystring: qs = get_query_string(environ) if qs: cat('?' + qs) return uri_to_iri(''.join(tmp)) def host_is_trusted(hostname, trusted_list): """Checks if a host is trusted against a list. This also takes care of port normalization. .. versionadded:: 0.9 :param hostname: the hostname to check :param trusted_list: a list of hostnames to check against. If a hostname starts with a dot it will match against all subdomains as well. """ if not hostname: return False if isinstance(trusted_list, string_types): trusted_list = [trusted_list] def _normalize(hostname): if ':' in hostname: hostname = hostname.rsplit(':', 1)[0] return _encode_idna(hostname) try: hostname = _normalize(hostname) except UnicodeError: return False for ref in trusted_list: if ref.startswith('.'): ref = ref[1:] suffix_match = True else: suffix_match = False try: ref = _normalize(ref) except UnicodeError: return False if ref == hostname: return True if suffix_match and hostname.endswith(b'.' + ref): return True return False def get_host(environ, trusted_hosts=None): """Return the real host for the given WSGI environment. This first checks the `X-Forwarded-Host` header, then the normal `Host` header, and finally the `SERVER_NAME` environment variable (using the first one it finds). Optionally it verifies that the host is in a list of trusted hosts. If the host is not in there it will raise a :exc:`~werkzeug.exceptions.SecurityError`. :param environ: the WSGI environment to get the host of. :param trusted_hosts: a list of trusted hosts, see :func:`host_is_trusted` for more information. """ if 'HTTP_X_FORWARDED_HOST' in environ: rv = environ['HTTP_X_FORWARDED_HOST'].split(',', 1)[0].strip() elif 'HTTP_HOST' in environ: rv = environ['HTTP_HOST'] else: rv = environ['SERVER_NAME'] if (environ['wsgi.url_scheme'], environ['SERVER_PORT']) not \ in (('https', '443'), ('http', '80')): rv += ':' + environ['SERVER_PORT'] if trusted_hosts is not None: if not host_is_trusted(rv, trusted_hosts): from werkzeug.exceptions import SecurityError raise SecurityError('Host "%s" is not trusted' % rv) return rv def get_content_length(environ): """Returns the content length from the WSGI environment as integer. If it's not available or chunked transfer encoding is used, ``None`` is returned. .. versionadded:: 0.9 :param environ: the WSGI environ to fetch the content length from. """ if environ.get('HTTP_TRANSFER_ENCODING', '') == 'chunked': return None content_length = environ.get('CONTENT_LENGTH') if content_length is not None: try: return max(0, int(content_length)) except (ValueError, TypeError): pass def get_input_stream(environ, safe_fallback=True): """Returns the input stream from the WSGI environment and wraps it in the most sensible way possible. The stream returned is not the raw WSGI stream in most cases but one that is safe to read from without taking into account the content length. If content length is not set, the stream will be empty for safety reasons. If the WSGI server supports chunked or infinite streams, it should set the ``wsgi.input_terminated`` value in the WSGI environ to indicate that. .. versionadded:: 0.9 :param environ: the WSGI environ to fetch the stream from. 
:param safe_fallback: use an empty stream as a safe fallback when the content length is not set. Disabling this allows infinite streams, which can be a denial-of-service risk. """ stream = environ['wsgi.input'] content_length = get_content_length(environ) # A wsgi extension that tells us if the input is terminated. In # that case we return the stream unchanged as we know we can safely # read it until the end. if environ.get('wsgi.input_terminated'): return stream # If the request doesn't specify a content length, returning the stream is # potentially dangerous because it could be infinite, malicious or not. If # safe_fallback is true, return an empty stream instead for safety. if content_length is None: return safe_fallback and _empty_stream or stream # Otherwise limit the stream to the content length return LimitedStream(stream, content_length) def get_query_string(environ): """Returns the `QUERY_STRING` from the WSGI environment. This also takes care about the WSGI decoding dance on Python 3 environments as a native string. The string returned will be restricted to ASCII characters. .. versionadded:: 0.9 :param environ: the WSGI environment object to get the query string from. """ qs = wsgi_get_bytes(environ.get('QUERY_STRING', '')) # QUERY_STRING really should be ascii safe but some browsers # will send us some unicode stuff (I am looking at you IE). # In that case we want to urllib quote it badly. return try_coerce_native(url_quote(qs, safe=':&%=+$!*\'(),')) def get_path_info(environ, charset='utf-8', errors='replace'): """Returns the `PATH_INFO` from the WSGI environment and properly decodes it. This also takes care about the WSGI decoding dance on Python 3 environments. if the `charset` is set to `None` a bytestring is returned. .. versionadded:: 0.9 :param environ: the WSGI environment object to get the path from. :param charset: the charset for the path info, or `None` if no decoding should be performed. :param errors: the decoding error handling. """ path = wsgi_get_bytes(environ.get('PATH_INFO', '')) return to_unicode(path, charset, errors, allow_none_charset=True) def get_script_name(environ, charset='utf-8', errors='replace'): """Returns the `SCRIPT_NAME` from the WSGI environment and properly decodes it. This also takes care about the WSGI decoding dance on Python 3 environments. if the `charset` is set to `None` a bytestring is returned. .. versionadded:: 0.9 :param environ: the WSGI environment object to get the path from. :param charset: the charset for the path, or `None` if no decoding should be performed. :param errors: the decoding error handling. """ path = wsgi_get_bytes(environ.get('SCRIPT_NAME', '')) return to_unicode(path, charset, errors, allow_none_charset=True) def pop_path_info(environ, charset='utf-8', errors='replace'): """Removes and returns the next segment of `PATH_INFO`, pushing it onto `SCRIPT_NAME`. Returns `None` if there is nothing left on `PATH_INFO`. If the `charset` is set to `None` a bytestring is returned. If there are empty segments (``'/foo//bar``) these are ignored but properly pushed to the `SCRIPT_NAME`: >>> env = {'SCRIPT_NAME': '/foo', 'PATH_INFO': '/a/b'} >>> pop_path_info(env) 'a' >>> env['SCRIPT_NAME'] '/foo/a' >>> pop_path_info(env) 'b' >>> env['SCRIPT_NAME'] '/foo/a/b' .. versionadded:: 0.5 .. versionchanged:: 0.9 The path is now decoded and a charset and encoding parameter can be provided. :param environ: the WSGI environment that is modified. 
""" path = environ.get('PATH_INFO') if not path: return None script_name = environ.get('SCRIPT_NAME', '') # shift multiple leading slashes over old_path = path path = path.lstrip('/') if path != old_path: script_name += '/' * (len(old_path) - len(path)) if '/' not in path: environ['PATH_INFO'] = '' environ['SCRIPT_NAME'] = script_name + path rv = wsgi_get_bytes(path) else: segment, path = path.split('/', 1) environ['PATH_INFO'] = '/' + path environ['SCRIPT_NAME'] = script_name + segment rv = wsgi_get_bytes(segment) return to_unicode(rv, charset, errors, allow_none_charset=True) def peek_path_info(environ, charset='utf-8', errors='replace'): """Returns the next segment on the `PATH_INFO` or `None` if there is none. Works like :func:`pop_path_info` without modifying the environment: >>> env = {'SCRIPT_NAME': '/foo', 'PATH_INFO': '/a/b'} >>> peek_path_info(env) 'a' >>> peek_path_info(env) 'a' If the `charset` is set to `None` a bytestring is returned. .. versionadded:: 0.5 .. versionchanged:: 0.9 The path is now decoded and a charset and encoding parameter can be provided. :param environ: the WSGI environment that is checked. """ segments = environ.get('PATH_INFO', '').lstrip('/').split('/', 1) if segments: return to_unicode(wsgi_get_bytes(segments[0]), charset, errors, allow_none_charset=True) def extract_path_info(environ_or_baseurl, path_or_url, charset='utf-8', errors='replace', collapse_http_schemes=True): """Extracts the path info from the given URL (or WSGI environment) and path. The path info returned is a unicode string, not a bytestring suitable for a WSGI environment. The URLs might also be IRIs. If the path info could not be determined, `None` is returned. Some examples: >>> extract_path_info('http://example.com/app', '/app/hello') u'/hello' >>> extract_path_info('http://example.com/app', ... 'https://example.com/app/hello') u'/hello' >>> extract_path_info('http://example.com/app', ... 'https://example.com/app/hello', ... collapse_http_schemes=False) is None True Instead of providing a base URL you can also pass a WSGI environment. .. versionadded:: 0.6 :param environ_or_baseurl: a WSGI environment dict, a base URL or base IRI. This is the root of the application. :param path_or_url: an absolute path from the server root, a relative path (in which case it's the path info) or a full URL. Also accepts IRIs and unicode parameters. :param charset: the charset for byte data in URLs :param errors: the error handling on decode :param collapse_http_schemes: if set to `False` the algorithm does not assume that http and https on the same server point to the same resource. """ def _normalize_netloc(scheme, netloc): parts = netloc.split(u'@', 1)[-1].split(u':', 1) if len(parts) == 2: netloc, port = parts if (scheme == u'http' and port == u'80') or \ (scheme == u'https' and port == u'443'): port = None else: netloc = parts[0] port = None if port is not None: netloc += u':' + port return netloc # make sure whatever we are working on is a IRI and parse it path = uri_to_iri(path_or_url, charset, errors) if isinstance(environ_or_baseurl, dict): environ_or_baseurl = get_current_url(environ_or_baseurl, root_only=True) base_iri = uri_to_iri(environ_or_baseurl, charset, errors) base_scheme, base_netloc, base_path = url_parse(base_iri)[:3] cur_scheme, cur_netloc, cur_path, = \ url_parse(url_join(base_iri, path))[:3] # normalize the network location base_netloc = _normalize_netloc(base_scheme, base_netloc) cur_netloc = _normalize_netloc(cur_scheme, cur_netloc) # is that IRI even on a known HTTP scheme? 
if collapse_http_schemes: for scheme in base_scheme, cur_scheme: if scheme not in (u'http', u'https'): return None else: if not (base_scheme in (u'http', u'https') and base_scheme == cur_scheme): return None # are the netlocs compatible? if base_netloc != cur_netloc: return None # are we below the application path? base_path = base_path.rstrip(u'/') if not cur_path.startswith(base_path): return None return u'/' + cur_path[len(base_path):].lstrip(u'/') class ProxyMiddleware(object): """This middleware routes some requests to the provided WSGI app and proxies some requests to an external server. This is not something that can generally be done on the WSGI layer and some HTTP requests will not tunnel through correctly (for instance websocket requests cannot be proxied through WSGI). As a result this is only really useful for some basic requests that can be forwarded. Example configuration:: app = ProxyMiddleware(app, { '/static/': { 'target': 'http://127.0.0.1:5001/', } }) For each host options can be specified. The following options are supported: ``target``: the target URL to dispatch to ``remove_prefix``: if set to `True` the prefix is chopped off the URL before dispatching it to the server. ``host``: When set to ``''`` which is the default the host header is automatically rewritten to the URL of the target. If set to `None` then the host header is unmodified from the client request. Any other value overwrites the host header with that value. ``headers``: An optional dictionary of headers that should be sent with the request to the target host. ``ssl_context``: In case this is an HTTPS target host then an SSL context can be provided here (:class:`ssl.SSLContext`). This can be used for instance to disable SSL verification. In this case everything below ``'/static/'`` is proxied to the server on port 5001. The host header is automatically rewritten and so are request URLs (eg: the leading `/static/` prefix here gets chopped off). .. 
versionadded:: 0.14 """ def __init__(self, app, targets, chunk_size=2 << 13, timeout=10): def _set_defaults(opts): opts.setdefault('remove_prefix', False) opts.setdefault('host', '') opts.setdefault('headers', {}) opts.setdefault('ssl_context', None) return opts self.app = app self.targets = dict(('/%s/' % k.strip('/'), _set_defaults(v)) for k, v in iteritems(targets)) self.chunk_size = chunk_size self.timeout = timeout def proxy_to(self, opts, path, prefix): target = url_parse(opts['target']) def application(environ, start_response): headers = list(EnvironHeaders(environ).items()) headers[:] = [(k, v) for k, v in headers if not is_hop_by_hop_header(k) and k.lower() not in ('content-length', 'host')] headers.append(('Connection', 'close')) if opts['host'] == '': headers.append(('Host', target.ascii_host)) elif opts['host'] is None: headers.append(('Host', environ['HTTP_HOST'])) else: headers.append(('Host', opts['host'])) headers.extend(opts['headers'].items()) remote_path = path if opts['remove_prefix']: remote_path = '%s/%s' % ( target.path.rstrip('/'), remote_path[len(prefix):].lstrip('/') ) content_length = environ.get('CONTENT_LENGTH') chunked = False if content_length not in ('', None): headers.append(('Content-Length', content_length)) elif content_length is not None: headers.append(('Transfer-Encoding', 'chunked')) chunked = True try: if target.scheme == 'http': con = httplib.HTTPConnection( target.ascii_host, target.port or 80, timeout=self.timeout) elif target.scheme == 'https': con = httplib.HTTPSConnection( target.ascii_host, target.port or 443, timeout=self.timeout, context=opts['ssl_context']) con.connect() con.putrequest(environ['REQUEST_METHOD'], url_quote(remote_path), skip_host=True) for k, v in headers: if k.lower() == 'connection': v = 'close' con.putheader(k, v) con.endheaders() stream = get_input_stream(environ) while 1: data = stream.read(self.chunk_size) if not data: break if chunked: con.send(b'%x\r\n%s\r\n' % (len(data), data)) else: con.send(data) resp = con.getresponse() except socket.error: from werkzeug.exceptions import BadGateway return BadGateway()(environ, start_response) start_response('%d %s' % (resp.status, resp.reason), [(k.title(), v) for k, v in resp.getheaders() if not is_hop_by_hop_header(k)]) def read(): while 1: try: data = resp.read(self.chunk_size) except socket.error: break if not data: break yield data return read() return application def __call__(self, environ, start_response): path = environ['PATH_INFO'] app = self.app for prefix, opts in iteritems(self.targets): if path.startswith(prefix): app = self.proxy_to(opts, path, prefix) break return app(environ, start_response) class SharedDataMiddleware(object): """A WSGI middleware that provides static content for development environments or simple server setups. Usage is quite simple:: import os from werkzeug.wsgi import SharedDataMiddleware app = SharedDataMiddleware(app, { '/shared': os.path.join(os.path.dirname(__file__), 'shared') }) The contents of the folder ``./shared`` will now be available on ``http://example.com/shared/``. This is pretty useful during development because a standalone media server is not required. One can also mount files on the root folder and still continue to use the application because the shared data middleware forwards all unhandled requests to the application, even if the requests are below one of the shared folders. 
If `pkg_resources` is available you can also tell the middleware to serve files from package data:: app = SharedDataMiddleware(app, { '/shared': ('myapplication', 'shared_files') }) This will then serve the ``shared_files`` folder in the `myapplication` Python package. The optional `disallow` parameter can be a list of :func:`~fnmatch.fnmatch` rules for files that are not accessible from the web. If `cache` is set to `False` no caching headers are sent. Currently the middleware does not support non ASCII filenames. If the encoding on the file system happens to be the encoding of the URI it may work but this could also be by accident. We strongly suggest using ASCII only file names for static files. The middleware will guess the mimetype using the Python `mimetype` module. If it's unable to figure out the charset it will fall back to `fallback_mimetype`. .. versionchanged:: 0.5 The cache timeout is configurable now. .. versionadded:: 0.6 The `fallback_mimetype` parameter was added. :param app: the application to wrap. If you don't want to wrap an application you can pass it :exc:`NotFound`. :param exports: a list or dict of exported files and folders. :param disallow: a list of :func:`~fnmatch.fnmatch` rules. :param fallback_mimetype: the fallback mimetype for unknown files. :param cache: enable or disable caching headers. :param cache_timeout: the cache timeout in seconds for the headers. """ def __init__(self, app, exports, disallow=None, cache=True, cache_timeout=60 * 60 * 12, fallback_mimetype='text/plain'): self.app = app self.exports = [] self.cache = cache self.cache_timeout = cache_timeout if hasattr(exports, 'items'): exports = iteritems(exports) for key, value in exports: if isinstance(value, tuple): loader = self.get_package_loader(*value) elif isinstance(value, string_types): if os.path.isfile(value): loader = self.get_file_loader(value) else: loader = self.get_directory_loader(value) else: raise TypeError('unknown def %r' % value) self.exports.append((key, loader)) if disallow is not None: from fnmatch import fnmatch self.is_allowed = lambda x: not fnmatch(x, disallow) self.fallback_mimetype = fallback_mimetype def is_allowed(self, filename): """Subclasses can override this method to disallow the access to certain files. However by providing `disallow` in the constructor this method is overwritten. 
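A minimal sketch of such a subclass (the class name and the dotfile rule are illustrative assumptions, not something Werkzeug ships)::

    class HideDotfilesMiddleware(SharedDataMiddleware):

        def is_allowed(self, filename):
            # refuse anything whose basename starts with a dot
            return not os.path.basename(filename).startswith('.')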
""" return True def _opener(self, filename): return lambda: ( open(filename, 'rb'), datetime.utcfromtimestamp(os.path.getmtime(filename)), int(os.path.getsize(filename)) ) def get_file_loader(self, filename): return lambda x: (os.path.basename(filename), self._opener(filename)) def get_package_loader(self, package, package_path): from pkg_resources import DefaultProvider, ResourceManager, \ get_provider loadtime = datetime.utcnow() provider = get_provider(package) manager = ResourceManager() filesystem_bound = isinstance(provider, DefaultProvider) def loader(path): if path is None: return None, None path = posixpath.join(package_path, path) if not provider.has_resource(path): return None, None basename = posixpath.basename(path) if filesystem_bound: return basename, self._opener( provider.get_resource_filename(manager, path)) s = provider.get_resource_string(manager, path) return basename, lambda: ( BytesIO(s), loadtime, len(s) ) return loader def get_directory_loader(self, directory): def loader(path): if path is not None: path = os.path.join(directory, path) else: path = directory if os.path.isfile(path): return os.path.basename(path), self._opener(path) return None, None return loader def generate_etag(self, mtime, file_size, real_filename): if not isinstance(real_filename, bytes): real_filename = real_filename.encode(get_filesystem_encoding()) return 'wzsdm-%d-%s-%s' % ( mktime(mtime.timetuple()), file_size, adler32(real_filename) & 0xffffffff ) def __call__(self, environ, start_response): cleaned_path = get_path_info(environ) if PY2: cleaned_path = cleaned_path.encode(get_filesystem_encoding()) # sanitize the path for non unix systems cleaned_path = cleaned_path.strip('/') for sep in os.sep, os.altsep: if sep and sep != '/': cleaned_path = cleaned_path.replace(sep, '/') path = '/' + '/'.join(x for x in cleaned_path.split('/') if x and x != '..') file_loader = None for search_path, loader in self.exports: if search_path == path: real_filename, file_loader = loader(None) if file_loader is not None: break if not search_path.endswith('/'): search_path += '/' if path.startswith(search_path): real_filename, file_loader = loader(path[len(search_path):]) if file_loader is not None: break if file_loader is None or not self.is_allowed(real_filename): return self.app(environ, start_response) guessed_type = mimetypes.guess_type(real_filename) mime_type = guessed_type[0] or self.fallback_mimetype f, mtime, file_size = file_loader() headers = [('Date', http_date())] if self.cache: timeout = self.cache_timeout etag = self.generate_etag(mtime, file_size, real_filename) headers += [ ('Etag', '"%s"' % etag), ('Cache-Control', 'max-age=%d, public' % timeout) ] if not is_resource_modified(environ, etag, last_modified=mtime): f.close() start_response('304 Not Modified', headers) return [] headers.append(('Expires', http_date(time() + timeout))) else: headers.append(('Cache-Control', 'public')) headers.extend(( ('Content-Type', mime_type), ('Content-Length', str(file_size)), ('Last-Modified', http_date(mtime)) )) start_response('200 OK', headers) return wrap_file(environ, f) class DispatcherMiddleware(object): """Allows one to mount middlewares or applications in a WSGI application. 
This is useful if you want to combine multiple WSGI applications:: app = DispatcherMiddleware(app, { '/app2': app2, '/app3': app3 }) """ def __init__(self, app, mounts=None): self.app = app self.mounts = mounts or {} def __call__(self, environ, start_response): script = environ.get('PATH_INFO', '') path_info = '' while '/' in script: if script in self.mounts: app = self.mounts[script] break script, last_item = script.rsplit('/', 1) path_info = '/%s%s' % (last_item, path_info) else: app = self.mounts.get(script, self.app) original_script_name = environ.get('SCRIPT_NAME', '') environ['SCRIPT_NAME'] = original_script_name + script environ['PATH_INFO'] = path_info return app(environ, start_response) @implements_iterator class ClosingIterator(object): """The WSGI specification requires that all middlewares and gateways respect the `close` callback of an iterator. Because it is useful to add another close action to a returned iterator and adding a custom iterator is a boring task this class can be used for that:: return ClosingIterator(app(environ, start_response), [cleanup_session, cleanup_locals]) If there is just one close function it can be passed instead of the list. A closing iterator is not needed if the application uses response objects and finishes the processing if the response is started:: try: return response(environ, start_response) finally: cleanup_session() cleanup_locals() """ def __init__(self, iterable, callbacks=None): iterator = iter(iterable) self._next = partial(next, iterator) if callbacks is None: callbacks = [] elif callable(callbacks): callbacks = [callbacks] else: callbacks = list(callbacks) iterable_close = getattr(iterator, 'close', None) if iterable_close: callbacks.insert(0, iterable_close) self._callbacks = callbacks def __iter__(self): return self def __next__(self): return self._next() def close(self): for callback in self._callbacks: callback() def wrap_file(environ, file, buffer_size=8192): """Wraps a file. This uses the WSGI server's file wrapper if available or otherwise the generic :class:`FileWrapper`. .. versionadded:: 0.5 If the file wrapper from the WSGI server is used it's important to not iterate over it from inside the application but to pass it through unchanged. If you want to pass out a file wrapper inside a response object you have to set :attr:`~BaseResponse.direct_passthrough` to `True`. More information about file wrappers are available in :pep:`333`. :param file: a :class:`file`-like object with a :meth:`~file.read` method. :param buffer_size: number of bytes for one iteration. """ return environ.get('wsgi.file_wrapper', FileWrapper)(file, buffer_size) @implements_iterator class FileWrapper(object): """This class can be used to convert a :class:`file`-like object into an iterable. It yields `buffer_size` blocks until the file is fully read. You should not use this class directly but rather use the :func:`wrap_file` function that uses the WSGI server's file wrapper support if it's available. .. versionadded:: 0.5 If you're using this object together with a :class:`BaseResponse` you have to use the `direct_passthrough` mode. :param file: a :class:`file`-like object with a :meth:`~file.read` method. :param buffer_size: number of bytes for one iteration. 
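A rough usage sketch (the application and the payload below are assumptions for illustration, not part of this module)::

    from io import BytesIO
    from werkzeug.wrappers import Response

    def application(environ, start_response):
        wrapped = wrap_file(environ, BytesIO(b'hello'))
        response = Response(wrapped, direct_passthrough=True,
                            mimetype='text/plain')
        return response(environ, start_response)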
""" def __init__(self, file, buffer_size=8192): self.file = file self.buffer_size = buffer_size def close(self): if hasattr(self.file, 'close'): self.file.close() def seekable(self): if hasattr(self.file, 'seekable'): return self.file.seekable() if hasattr(self.file, 'seek'): return True return False def seek(self, *args): if hasattr(self.file, 'seek'): self.file.seek(*args) def tell(self): if hasattr(self.file, 'tell'): return self.file.tell() return None def __iter__(self): return self def __next__(self): data = self.file.read(self.buffer_size) if data: return data raise StopIteration() @implements_iterator class _RangeWrapper(object): # private for now, but should we make it public in the future ? """This class can be used to convert an iterable object into an iterable that will only yield a piece of the underlying content. It yields blocks until the underlying stream range is fully read. The yielded blocks will have a size that can't exceed the original iterator defined block size, but that can be smaller. If you're using this object together with a :class:`BaseResponse` you have to use the `direct_passthrough` mode. :param iterable: an iterable object with a :meth:`__next__` method. :param start_byte: byte from which read will start. :param byte_range: how many bytes to read. """ def __init__(self, iterable, start_byte=0, byte_range=None): self.iterable = iter(iterable) self.byte_range = byte_range self.start_byte = start_byte self.end_byte = None if byte_range is not None: self.end_byte = self.start_byte + self.byte_range self.read_length = 0 self.seekable = hasattr(iterable, 'seekable') and iterable.seekable() self.end_reached = False def __iter__(self): return self def _next_chunk(self): try: chunk = next(self.iterable) self.read_length += len(chunk) return chunk except StopIteration: self.end_reached = True raise def _first_iteration(self): chunk = None if self.seekable: self.iterable.seek(self.start_byte) self.read_length = self.iterable.tell() contextual_read_length = self.read_length else: while self.read_length <= self.start_byte: chunk = self._next_chunk() if chunk is not None: chunk = chunk[self.start_byte - self.read_length:] contextual_read_length = self.start_byte return chunk, contextual_read_length def _next(self): if self.end_reached: raise StopIteration() chunk = None contextual_read_length = self.read_length if self.read_length == 0: chunk, contextual_read_length = self._first_iteration() if chunk is None: chunk = self._next_chunk() if self.end_byte is not None and self.read_length >= self.end_byte: self.end_reached = True return chunk[:self.end_byte - contextual_read_length] return chunk def __next__(self): chunk = self._next() if chunk: return chunk self.end_reached = True raise StopIteration() def close(self): if hasattr(self.iterable, 'close'): self.iterable.close() def _make_chunk_iter(stream, limit, buffer_size): """Helper for the line and chunk iter functions.""" if isinstance(stream, (bytes, bytearray, text_type)): raise TypeError('Passed a string or byte object instead of ' 'true iterator or stream.') if not hasattr(stream, 'read'): for item in stream: if item: yield item return if not isinstance(stream, LimitedStream) and limit is not None: stream = LimitedStream(stream, limit) _read = stream.read while 1: item = _read(buffer_size) if not item: break yield item def make_line_iter(stream, limit=None, buffer_size=10 * 1024, cap_at_buffer=False): """Safely iterates line-based over an input stream. 
If the input stream is not a :class:`LimitedStream` the `limit` parameter is mandatory. This uses the stream's :meth:`~file.read` method internally as opposite to the :meth:`~file.readline` method that is unsafe and can only be used in violation of the WSGI specification. The same problem applies to the `__iter__` function of the input stream which calls :meth:`~file.readline` without arguments. If you need line-by-line processing it's strongly recommended to iterate over the input stream using this helper function. .. versionchanged:: 0.8 This function now ensures that the limit was reached. .. versionadded:: 0.9 added support for iterators as input stream. .. versionadded:: 0.11.10 added support for the `cap_at_buffer` parameter. :param stream: the stream or iterate to iterate over. :param limit: the limit in bytes for the stream. (Usually content length. Not necessary if the `stream` is a :class:`LimitedStream`. :param buffer_size: The optional buffer size. :param cap_at_buffer: if this is set chunks are split if they are longer than the buffer size. Internally this is implemented that the buffer size might be exhausted by a factor of two however. """ _iter = _make_chunk_iter(stream, limit, buffer_size) first_item = next(_iter, '') if not first_item: return s = make_literal_wrapper(first_item) empty = s('') cr = s('\r') lf = s('\n') crlf = s('\r\n') _iter = chain((first_item,), _iter) def _iter_basic_lines(): _join = empty.join buffer = [] while 1: new_data = next(_iter, '') if not new_data: break new_buf = [] buf_size = 0 for item in chain(buffer, new_data.splitlines(True)): new_buf.append(item) buf_size += len(item) if item and item[-1:] in crlf: yield _join(new_buf) new_buf = [] elif cap_at_buffer and buf_size >= buffer_size: rv = _join(new_buf) while len(rv) >= buffer_size: yield rv[:buffer_size] rv = rv[buffer_size:] new_buf = [rv] buffer = new_buf if buffer: yield _join(buffer) # This hackery is necessary to merge 'foo\r' and '\n' into one item # of 'foo\r\n' if we were unlucky and we hit a chunk boundary. previous = empty for item in _iter_basic_lines(): if item == lf and previous[-1:] == cr: previous += item item = empty if previous: yield previous previous = item if previous: yield previous def make_chunk_iter(stream, separator, limit=None, buffer_size=10 * 1024, cap_at_buffer=False): """Works like :func:`make_line_iter` but accepts a separator which divides chunks. If you want newline based processing you should use :func:`make_line_iter` instead as it supports arbitrary newline markers. .. versionadded:: 0.8 .. versionadded:: 0.9 added support for iterators as input stream. .. versionadded:: 0.11.10 added support for the `cap_at_buffer` parameter. :param stream: the stream or iterate to iterate over. :param separator: the separator that divides chunks. :param limit: the limit in bytes for the stream. (Usually content length. Not necessary if the `stream` is otherwise already limited). :param buffer_size: The optional buffer size. :param cap_at_buffer: if this is set chunks are split if they are longer than the buffer size. Internally this is implemented that the buffer size might be exhausted by a factor of two however. 
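A short sketch of splitting a byte stream on a separator (the payload and the matching ``limit`` are assumptions for illustration)::

    from io import BytesIO

    stream = BytesIO(b'foo;bar;baz')
    chunks = list(make_chunk_iter(stream, b';', limit=11))
    # chunks is now [b'foo', b'bar', b'baz']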
""" _iter = _make_chunk_iter(stream, limit, buffer_size) first_item = next(_iter, '') if not first_item: return _iter = chain((first_item,), _iter) if isinstance(first_item, text_type): separator = to_unicode(separator) _split = re.compile(r'(%s)' % re.escape(separator)).split _join = u''.join else: separator = to_bytes(separator) _split = re.compile(b'(' + re.escape(separator) + b')').split _join = b''.join buffer = [] while 1: new_data = next(_iter, '') if not new_data: break chunks = _split(new_data) new_buf = [] buf_size = 0 for item in chain(buffer, chunks): if item == separator: yield _join(new_buf) new_buf = [] buf_size = 0 else: buf_size += len(item) new_buf.append(item) if cap_at_buffer and buf_size >= buffer_size: rv = _join(new_buf) while len(rv) >= buffer_size: yield rv[:buffer_size] rv = rv[buffer_size:] new_buf = [rv] buf_size = len(rv) buffer = new_buf if buffer: yield _join(buffer) @implements_iterator class LimitedStream(io.IOBase): """Wraps a stream so that it doesn't read more than n bytes. If the stream is exhausted and the caller tries to get more bytes from it :func:`on_exhausted` is called which by default returns an empty string. The return value of that function is forwarded to the reader function. So if it returns an empty string :meth:`read` will return an empty string as well. The limit however must never be higher than what the stream can output. Otherwise :meth:`readlines` will try to read past the limit. .. admonition:: Note on WSGI compliance calls to :meth:`readline` and :meth:`readlines` are not WSGI compliant because it passes a size argument to the readline methods. Unfortunately the WSGI PEP is not safely implementable without a size argument to :meth:`readline` because there is no EOF marker in the stream. As a result of that the use of :meth:`readline` is discouraged. For the same reason iterating over the :class:`LimitedStream` is not portable. It internally calls :meth:`readline`. We strongly suggest using :meth:`read` only or using the :func:`make_line_iter` which safely iterates line-based over a WSGI input stream. :param stream: the stream to wrap. :param limit: the limit for the stream, must not be longer than what the string can provide if the stream does not end with `EOF` (like `wsgi.input`) """ def __init__(self, stream, limit): self._read = stream.read self._readline = stream.readline self._pos = 0 self.limit = limit def __iter__(self): return self @property def is_exhausted(self): """If the stream is exhausted this attribute is `True`.""" return self._pos >= self.limit def on_exhausted(self): """This is called when the stream tries to read past the limit. The return value of this function is returned from the reading function. """ # Read null bytes from the stream so that we get the # correct end of stream marker. return self._read(0) def on_disconnect(self): """What should happen if a disconnect is detected? The return value of this function is returned from read functions in case the client went away. By default a :exc:`~werkzeug.exceptions.ClientDisconnected` exception is raised. """ from werkzeug.exceptions import ClientDisconnected raise ClientDisconnected() def exhaust(self, chunk_size=1024 * 64): """Exhaust the stream. This consumes all the data left until the limit is reached. :param chunk_size: the size for a chunk. It will read the chunk until the stream is exhausted and throw away the results. 
""" to_read = self.limit - self._pos chunk = chunk_size while to_read > 0: chunk = min(to_read, chunk) self.read(chunk) to_read -= chunk def read(self, size=None): """Read `size` bytes or if size is not provided everything is read. :param size: the number of bytes read. """ if self._pos >= self.limit: return self.on_exhausted() if size is None or size == -1: # -1 is for consistence with file size = self.limit to_read = min(self.limit - self._pos, size) try: read = self._read(to_read) except (IOError, ValueError): return self.on_disconnect() if to_read and len(read) != to_read: return self.on_disconnect() self._pos += len(read) return read def readline(self, size=None): """Reads one line from the stream.""" if self._pos >= self.limit: return self.on_exhausted() if size is None: size = self.limit - self._pos else: size = min(size, self.limit - self._pos) try: line = self._readline(size) except (ValueError, IOError): return self.on_disconnect() if size and not line: return self.on_disconnect() self._pos += len(line) return line def readlines(self, size=None): """Reads a file into a list of strings. It calls :meth:`readline` until the file is read to the end. It does support the optional `size` argument if the underlaying stream supports it for `readline`. """ last_pos = self._pos result = [] if size is not None: end = min(self.limit, last_pos + size) else: end = self.limit while 1: if size is not None: size -= last_pos - self._pos if self._pos >= end: break result.append(self.readline(size)) if size is not None: last_pos = self._pos return result def tell(self): """Returns the position of the stream. .. versionadded:: 0.9 """ return self._pos def __next__(self): line = self.readline() if not line: raise StopIteration() return line def readable(self): return True
werkzeug-0.14.1/docs/_static/style.css000066400000000000000000000154011322225165500176740ustar00rootroot00000000000000body { font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif; font-size: 14px; letter-spacing: -0.01em; line-height: 150%; text-align: center; background: #AFC1C4 url(background.png); color: black; margin: 0; padding: 0; } a { color: #CA7900; text-decoration: none; } a:hover { color: #2491CF; } pre { font-family: 'Consolas', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 0.85em; letter-spacing: 0.015em; padding: 0.3em 0.7em; border: 1px solid #aaa; border-right-color: #ddd; border-bottom-color: #ddd; background: #f8f8f8 url(codebackground.png); } cite, code, tt { font-family: 'Consolas', 'Deja Vu Sans Mono', 'Bitstream Vera Sans Mono', monospace; font-size: 0.95em; letter-spacing: 0.01em; font-style: normal; } tt { background-color: #f2f2f2; border-bottom: 1px solid #ddd; color: #333; } tt.func-signature { font-family: 'Lucida Grande', 'Lucida Sans Unicode', 'Geneva', 'Verdana', sans-serif; font-size: 0.85em; background-color: transparent; border-bottom: none; color: #555; } dt { margin-top: 0.8em; } dd p.first { margin-top: 0; } dd p.last { margin-bottom: 0; } pre { line-height: 150%; } pre a { color: inherit; text-decoration: underline; } div.syntax { background-color: transparent; } div.page { background: white url(contents.png) 0 130px; border: 1px solid #aaa; width: 740px; margin: 20px auto 20px auto; text-align: left; } div.header { background-image: url(header.png); height: 100px; border-bottom: 1px solid #aaa; } div.header h1 { float: right; position: absolute; margin: -43px 0 0 585px; height: 180px; width: 180px; } div.header h1 a { display: block; background-image: url(werkzeug.png); background-repeat: no-repeat; height: 180px; width: 180px; text-decoration: none; color: white!important; } div.header span { display: none; } div.header p { background-image: url(header_invert.png); margin: 0; padding: 10px; height: 80px; color: white; display: none; } ul.navigation { background-image: url(navigation.png); height: 2em; list-style: none; border-top: 1px solid #ddd; border-bottom: 1px solid #ddd; margin: 0; padding: 0; } ul.navigation li { margin: 0; padding: 0; height: 2em; line-height: 1.75em; float: left; } ul.navigation li a { margin: 0; padding: 0 10px 0 10px; color: #EE9816; } ul.navigation li a:hover { color: #3CA8E7; } ul.navigation li.active { background-image: url(navigation_active.png); } ul.navigation li.active a { color: black; } ul.navigation li.indexlink a { font-size: 0.9em; font-weight: bold; color: #11557C; } div.body
{ margin: 0 20px 0 20px; padding: 0.5em 0 20px 0; } p { margin: 0.8em 0 0.5em 0; } h1 { margin: 0; padding: 0.7em 0 0.3em 0; font-size: 1.5em; color: #11557C; } h2 { margin: 1.3em 0 0.2em 0; font-size: 1.35em; padding: 0; } h3 { margin: 1em 0 -0.3em 0; } h2 a, h3 a, h4 a, h5 a, h6 a { color: black!important; } a.headerlink { color: #B4B4B4!important; font-size: 0.8em; padding: 0 4px 0 4px; text-decoration: none!important; visibility: hidden; } h1:hover > a.headerlink, h2:hover > a.headerlink, h3:hover > a.headerlink, h4:hover > a.headerlink, h5:hover > a.headerlink, h6:hover > a.headerlink, dt:hover > a.headerlink { visibility: visible; } a.headerlink:hover { background-color: #B4B4B4; color: #F0F0F0!important; } table { border-collapse: collapse; margin: 0 -0.5em 0 -0.5em; } table td, table th { padding: 0.2em 0.5em 0.2em 0.5em; } div.footer { background-color: #E3EFF1; color: #86989B; padding: 3px 8px 3px 0; clear: both; font-size: 0.8em; text-align: right; } div.footer a { color: #86989B; text-decoration: underline; } div.toc { float: right; background-color: white; border: 1px solid #86989B; padding: 0; margin: 0 0 1em 1em; width: 10em; } div.toc h4 { margin: 0; font-size: 0.9em; padding: 0.1em 0 0.1em 0.6em; margin: 0; color: white; border-bottom: 1px solid #86989B; background-color: #AFC1C4; } div.toc ul { margin: 1em 0 1em 0; padding: 0 0 0 1em; list-style: none; } div.toc ul li { margin: 0.5em 0 0.5em 0; font-size: 0.9em; line-height: 130%; } div.toc ul li p { margin: 0; padding: 0; } div.toc ul ul { margin: 0.2em 0 0.2em 0; padding: 0 0 0 1.8em; } div.toc ul ul li { padding: 0; } div.admonition, div.warning, div#toc { font-size: 0.9em; margin: 1em 0 0 0; border: 1px solid #86989B; background-color: #f7f7f7; } div.admonition p, div.warning p, div#toc p { margin: 0.5em 1em 0.5em 1em; padding: 0; } div.admonition pre, div.warning pre, div#toc pre { margin: 0.4em 1em 0.4em 1em; } div.admonition p.admonition-title, div.warning p.admonition-title, div#toc h3 { margin: 0; padding: 0.1em 0 0.1em 0.5em; color: white; border-bottom: 1px solid #86989B; font-weight: bold; background-color: #AFC1C4; } div.warning { border: 1px solid #940000; } div.warning p.admonition-title { background-color: #CF0000; border-bottom-color: #940000; } div.admonition ul, div.admonition ol, div.warning ul, div.warning ol, div#toc ul, div#toc ol { margin: 0.1em 0.5em 0.5em 3em; padding: 0; } div#toc div.inner { border-top: 1px solid #86989B; padding: 10px; } div#toc h3 { border-bottom: none; cursor: pointer; font-size: 13px; } div#toc h3:hover { background-color: #86989B; } div#toc ul { margin: 2px 0 2px 20px; padding: 0; } div#toc ul li { line-height: 125%; } dl.function dt, dl.class dt, dl.exception dt, dl.method dt, dl.attribute dt { font-weight: normal; } dt .descname { font-weight: bold; margin-right: 4px; } dt .descname, dt .descclassname { padding: 0; background: transparent; border-bottom: 1px solid #111; } dt .descclassname { margin-left: 2px; } dl dt big { font-size: 100%; } dl p { margin: 0; } dl p + p { margin-top: 10px; } span.versionmodified { color: #4B4A49; font-weight: bold; } span.versionadded { color: #30691A; font-weight: bold; } table.field-list td.field-body ul.simple { margin: 0; padding: 0!important; list-style: none; } table.indextable td { width: 50%; vertical-align: top; } table.indextable dt { margin: 0; } table.indextable dd dt a { color: black!important; font-size: 0.8em; } div.jumpbox { padding: 1em 0 0.4em 0; border-bottom: 1px solid #ddd; color: #aaa; } 
werkzeug-0.14.1/docs/_static/werkzeug.js000066400000000000000000000003631322225165500202240ustar00rootroot00000000000000(function() { Werkzeug = {}; $(function() { $('#toc h3').click(function() { $(this).next().slideToggle(); $(this).parent().toggleClass('toc-collapsed'); }).next().hide().parent().addClass('toc-collapsed'); }); })();
werkzeug-0.14.1/docs/_static/werkzeug.png000066400000000000000000000452451322225165500204040ustar00rootroot00000000000000[binary PNG image data omitted]
werkzeug-0.14.1/docs/_templates/000077500000000000000000000000001322225165500165305ustar00rootroot00000000000000werkzeug-0.14.1/docs/_templates/sidebarintro.html000066400000000000000000000012141322225165500221010ustar00rootroot00000000000000