calendarserver-9.1+dfsg/.gitignore

*.egg-info/ *.o *.py[co] *.so _trial_temp*/ build/ dropin.cache *~ *.lock /.develop/ /data/ /bin/calendarserver_* /calendarserver/version.py /conf/caldavd-dev.plist /subprojects/ /_run/

calendarserver-9.1+dfsg/.project

CalendarServer caldavclientlibrary cx_Oracle kerberos osxframeworks pycalendar pysecuretransport twextpy org.python.pydev.PyDevBuilder org.python.pydev.pythonNature 0 10 org.eclipse.ui.ide.multiFilter 1.0-projectRelativePath-matches-true-false-.develop 0 10 org.eclipse.ui.ide.multiFilter 1.0-name-matches-true-false-subprojects

calendarserver-9.1+dfsg/.pydevproject

python 2.7 /${PROJECT_DIR_NAME} Default

calendarserver-9.1+dfsg/CONTRIBUTING.md

By submitting a request, you represent that you have the right to license your contribution to Apple and the community, and agree that your contributions are licensed under the [Apache License Version 2.0](LICENSE.txt). For existing files modified by your request, you represent that you have retained any existing copyright notices and licensing terms. For each new file in your request, you represent that you have added to the file a copyright notice (including the year and the copyright owner's name) and the Calendar and Contacts Server's licensing terms.
Before submitting the request, please make sure that your request follows the [Calendar and Contacts Server's guidelines for contributing code](../../../ccs-calendarserver/blob/master/HACKING.rst).

calendarserver-9.1+dfsg/HACKING.rst

Developer's Guide to Contributing to the Calendar Server ======================================================== If you are interested in contributing to the Calendar and Contacts Server project, please read this document. Participating in the Community ============================== The Calendar and Contacts Server began in 2006 -- an open source project sponsored and hosted by Apple Inc. (http://www.apple.com/). The project is now hosted at GitHub, and although it lives within the "apple" namespace it's still a true open-source project under an Apache license. Contributions from other developers are welcome, and, as with all open development projects, may lead to "commit access" and a voice in the future of the project. The community exists mainly through mailing lists and a GitHub repository. To participate, go to: https://trac.calendarserver.org/wiki/MailLists and join the appropriate mailing lists. We also use IRC, as described here: https://trac.calendarserver.org/wiki/IRC There are many ways to join the project. One may write code, test the software and file bugs, write documentation, etc. The issue tracking database is here: https://github.com/apple/ccs-calendarserver/issues To help manage the issues database, read over the issue summaries, looking and testing for issues that are either invalid, or are duplicates of other issues. Both kinds are very common, the first because bugs often get unknowingly fixed as side effects of other changes in the code, and the second because people sometimes file an issue without noticing that it has already been reported.
If you are not sure about an issue, post a question to calendarserver-dev@lists.macosforge.org. Before filing bugs, please take a moment to perform a quick search to see if someone else has already filed your bug. In that case, add a comment to the existing bug if appropriate and monitor it, rather than filing a duplicate. Obtaining the Code ================== The source code to the Calendar and Contacts Server is available via Git at this repository URL: https://github.com/apple/ccs-calendarserver.git Directory Layout ================ A rough guide to the source tree: * ``doc/`` - User and developer documentation, including relevant protocol specifications and extensions. * ``bin/`` - Executable programs. * ``conf/`` - Configuration files. * ``calendarserver/`` - Source code for the Calendar and Contacts Server * ``twistedcaldav/`` - Source code for CalDAV library * ``twisted/`` - Files required to set up the Calendar and Contacts Server as a Twisted service. Twisted (http://twistedmatrix.com/) is a networking framework upon which the Calendar and Contacts Server is built. * ``locales/`` - Localization files. * ``contrib/`` - Extra stuff that works with the Calendar and Contacts Server, or that helps integrate with other software (including operating systems), but that the Calendar and Contacts Server does not depend on. * ``support/`` - Support files of possible use to developers. Coding Standards ================ The vast majority of the Calendar and Contacts Server is written in the Python programming language. When writing Python code for the Calendar and Contacts Server, please observe the following conventions. Please note that not all of our code at present follows these standards, but that does not mean that one shouldn't bother to do so. On the contrary, code changes that do nothing but reformat code to comply with these standards are welcome, and code changes that do not conform to these standards are discouraged.
**We require Python 2.6 or higher.** It therefore is OK to write code that does not work with Python versions older than 2.6. Read PEP-8: https://www.python.org/dev/peps/pep-0008/ For the most part, our code should follow PEP-8, with a few exceptions and a few additions. It is also useful to review the Twisted Coding Standard, from which we borrow some standards, though we don't strictly follow it: https://twistedmatrix.com/documents/current/core/development/policy/coding-standard.html Key items to follow, and specifics: * Indent level is 4 spaces. * Never indent code with tabs. Always use spaces. PEP-8 items we do not follow: * PEP-8 recommends using a backslash to break long lines up: :: if width == 0 and height == 0 and \ color == 'red' and emphasis == 'strong' or \ highlight > 100: raise ValueError("sorry, you lose") Don't do that, it's gross, and the indentation for the ``raise`` line gets confusing. Use parentheses: :: if ( width == 0 and height == 0 and color == "red" and emphasis == "strong" or highlight > 100 ): raise ValueError("sorry, you lose") Just don't do it the way PEP-8 suggests: :: if width == 0 and height == 0 and (color == 'red' or emphasis is None): raise ValueError("I don't think so") Because that's just silly. Additions: * Close parentheses and brackets such as ``()``, ``[]`` and ``{}`` at the same indent level as the line in which you opened it: :: launchAtTarget( target="David", object=PaperWad( message="Yo!", crumpleFactor=0.7, ), speed=0.4, ) * Long lines are often due to long strings. Try to break strings up into multiple lines: :: processString( "This is a very long string with a lot of text. " "Fortunately, it is easy to break it up into parts " "like this." ) Similarly, callables that take many arguments can be broken up into multiple lines, as in the ``launchAtTarget()`` example above. * Breaking generator expressions and list comprehensions into multiple lines can improve readability. 
For example: :: myStuff = ( item.obtainUsefulValue() for item in someDataStore if item.owner() == me ) * Import symbols (especially class names) from modules instead of importing modules and referencing the symbol via the module unless it doesn't make sense to do so. For example: :: from subprocess import Popen process = Popen(...) Instead of: :: import subprocess process = subprocess.Popen(...) This makes code shorter and makes it easier to replace one implementation with another. * All files should have an ``__all__`` specification. Put them at the top of the file, before imports (PEP-8 puts them at the top, but after the imports), so you can see what the public symbols are for a file right at the top. * It is more important that symbol names are meaningful than it is that they be concise. ``x`` is rarely an appropriate name for a variable. Avoid contractions: ``transmogrifierStatus`` is more useful to the reader than ``trmgStat``. * A deferred that will be immediately returned may be called ``d``: :: d = doThisAndThat() d.addCallback(onResult) d.addErrback(onError) return d * Do not use ``deferredGenerator``. Use ``inlineCallbacks`` instead. * That said, avoid using ``inlineCallbacks`` when chaining deferreds is straightforward, as they are more expensive. Use ``inlineCallbacks`` when necessary for keeping code maintainable, such as when creating serialized deferreds in a for loop. * ``_`` may be used to denote unused callback arguments: :: def onCompletion(_): # Don't care about result of doThisAndThat() in here; # we only care that it has completed. doNextThing() d = doThisAndThat() d.addCallback(onCompletion) return d * Do not prefix symbols with ``_`` unless they might otherwise be exposed as a public symbol: a private method name should begin with ``_``, but a locally scoped variable should not, as there is no danger of it being exposed. Locally scoped variables are already private. 
* Per twisted convention, use camel-case (``fuzzyWidget``, ``doThisAndThat()``) for symbol names instead of using underscores (``fuzzy_widget``, ``do_this_and_that()``). Use of underscores is reserved for implied dispatching and the like (eg. ``http_FOO()``). See the Twisted Coding Standard for details. * Do not use ``%``-formatting: :: error = "Unexpected value: %s" % (value,) Use PEP-3101 formatting instead: :: error = "Unexpected value: {value}".format(value=value) * If you must use ``%``-formatting for some reason, always use a tuple as the format argument, even when only one value is being provided: :: error = "Unexpected value: %s" % (value,) Never use the non-tuple form: :: error = "Unexpected value: %s" % value Which is allowed in Python, but results in a programming error if ``type(value) is tuple and len(value) != 1``. * Don't use a trailing ``,`` at the end of a tuple if it's on one line: :: numbers = (1,2,3,) # No numbers = (1,2,3) # Yes The trailing comma is desirable on multiple lines, though, as that makes re-ordering items easy, and avoids a diff on the last line when adding another: :: strings = ( "This is a string.", "And so is this one.", "And here is yet another string.", ) * Docstrings are important. All public symbols (anything declared in ``__all__``) must have a correct docstring. The script ``docs/Developer/gendocs`` will generate the API documentation using ``pydoctor``. See the ``pydoctor`` documentation for details on the formatting: http://codespeak.net/~mwh/pydoctor/ Note: existing docstrings need a complete review. * Use PEP-257 as a guideline for docstrings. * Begin all multi-line docstrings with 3 double quotes and a newline: :: def doThisAndThat(...): """ Do this, and that. ... """ Best Practices ============== * If a callable is going to return a Deferred some of the time, it should return a deferred all of the time. Return ``succeed(value)`` instead of ``value`` if necessary. 
This avoids forcing the caller to check whether the value is a deferred or not (eg. by using ``maybeDeferred()``), which is both annoying to code and potentially expensive at runtime. * Be proactive about closing files and file-like objects. For a lot of Python software, letting Python close the stream for you works fine, but in a long-lived server that's processing many data streams at a time, it is important to close them as soon as possible. On some platforms (eg. Windows), deleting a file will fail if the file is still open. By leaving it up to Python to decide when to close a file, you may find yourself being unable to reliably delete it. The most reliable way to ensure that a stream is closed is to put the call to ``close()`` in a ``finally`` block: :: stream = file(somePath) try: ... do something with stream ... finally: stream.close() Testing ======= Be sure that all of the unit tests pass before you commit new code. Code that breaks unit tests may be reverted without further discussion; it is up to the committer to fix the problem and try again. Note that repeatedly committing code that breaks unit tests presents a possible time sink for other developers, and is not looked upon favorably. Unit tests can be run rather easily by executing the ``./bin/test`` script at the top of the Calendar and Contacts Server source tree. By default, it will run all of the Calendar and Contacts Server tests followed by all of the Twisted tests. You can run specific tests by specifying them as arguments like this: :: ./bin/test twistedcaldav.static All non-trivial public callables must have unit tests. (Note we don't totally comply with this rule; that's a problem we'd like to fix.) All other callables should have unit tests. Unit tests are written using the ``twisted.trial`` framework. Test module names should start with ``test_``.
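To make the conventions above concrete, here is a minimal sketch of a test module; the module and class names are invented for illustration. It is written against the stdlib ``unittest`` API, with which ``twisted.trial.unittest.TestCase`` is compatible; in the real tree you would subclass the trial class and run the module via ``./bin/test``. As a bonus, the second test demonstrates the ``%``-formatting tuple pitfall described in the coding standards:

```python
# test_example.py -- hypothetical test module; per the conventions above,
# the module name starts with "test_" and method names use camelCase.
import unittest


class FormattingTests(unittest.TestCase):
    """
    Sanity tests for the string-formatting rules in the coding standards.
    """

    def test_pep3101Formatting(self):
        # Preferred PEP-3101 style.
        self.assertEqual(
            "Unexpected value: {value}".format(value=42),
            "Unexpected value: 42",
        )

    def test_percentFormattingNeedsTuple(self):
        # The non-tuple form breaks when the value is itself a tuple:
        # "%s" % (1, 2) treats the tuple as two format arguments.
        value = (1, 2)
        self.assertRaises(
            TypeError, lambda: "Unexpected value: %s" % value
        )
        # Wrapping the value in a one-tuple is always safe.
        self.assertEqual(
            "Unexpected value: %s" % (value,),
            "Unexpected value: (1, 2)",
        )
```

With plain Python, ``python -m unittest test_example`` would also run such a module.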
Twisted has some tips on writing tests here: https://twistedmatrix.com/documents/current/core/howto/testing.html https://twistedmatrix.com/documents/current/core/development/policy/test-standard.html We also use CalDAVTester (which is a companion to the Calendar and Contacts Server), which performs more "black box"-type testing against the server to ensure compliance with the CalDAV protocol. That requires running the server with a test configuration and then running CalDAVTester against it. Information about CalDAVTester is available here: https://github.com/apple/ccs-caldavtester

calendarserver-9.1+dfsg/LICENSE.txt

Apache License Version 2.0, January 2004 http://www.apache.org/licenses/ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 1. Definitions. "License" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document. "Licensor" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License. "Legal Entity" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, "control" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity. "You" (or "Your") shall mean an individual or Legal Entity exercising permissions granted by this License. "Source" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.
"Object" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types. "Work" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below). "Derivative Works" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof. "Contribution" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, "submitted" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as "Not a Contribution." "Contributor" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work. 2. 
Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form. 3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed. 4. Redistribution. 
You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions: (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and (b) You must cause any modified files to carry prominent notices stating that You changed the files; and (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and (d) If the Work includes a "NOTICE" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License. You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License. 5. 
Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions. 6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file. 7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License. 8. Limitation of Liability. 
In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages. 9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability. END OF TERMS AND CONDITIONS APPENDIX: How to apply the Apache License to your work. To apply the Apache License to your work, attach the following boilerplate notice, with the fields enclosed by brackets "[]" replaced with your own identifying information. (Don't include the brackets!) The text should be enclosed in the appropriate comment syntax for the file format. We also recommend that a file or class name and description of purpose be included on the same "printed page" as the copyright notice for easier identification within third-party archives. 
Copyright [yyyy] [name of copyright owner] Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

calendarserver-9.1+dfsg/PULL_REQUEST_TEMPLATE.md

By submitting a request, you represent that you have the right to license your contribution to Apple and the community, and agree that your contributions are licensed under the [Apache License Version 2.0](LICENSE.txt). For existing files modified by your request, you represent that you have retained any existing copyright notices and licensing terms. For each new file in your request, you represent that you have added to the file a copyright notice (including the year and the copyright owner's name) and the Calendar and Contacts Server's licensing terms. Before submitting the request, please make sure that your request follows the [Calendar and Contacts Server's guidelines for contributing code](../../../ccs-calendarserver/blob/master/HACKING.rst).

calendarserver-9.1+dfsg/README.rst

========================== Getting Started ========================== This is the core code base for the Calendar and Contacts Server, which is a CalDAV, CardDAV, WebDAV, and HTTP server. For general information about the server, see https://apple.github.io/ccs-calendarserver. ========================== Copyright and License ========================== Copyright (c) 2005-2017 Apple Inc. All rights reserved.
This software is licensed under the Apache License, Version 2.0. The Apache License is a well-established open source license, enabling collaborative open source software development. See the "LICENSE" file for the full text of the license terms. ========================== QuickStart ========================== **WARNING**: these instructions are for running a server from the source tree, which is useful for development. These are not the correct steps for running the server in deployment or as part of an OS install. You should not be using the run script in system startup files (eg. /etc/init.d); it does things (like download software) that you don't want to happen in that context. Begin by creating a directory to contain Calendar and Contacts Server and all its dependencies:: mkdir ~/CalendarServer cd CalendarServer Next, check out the source code from the Git repository. To check out the latest code:: git clone https://github.com/apple/ccs-calendarserver.git Note: if you have two-factor authentication activated on GitHub, you'll need to use a personal access token instead of your password. You can generate personal access tokens at https://github.com/settings/tokens Pip is used to retrieve the Python dependencies and stage them for use by virtualenv. If your system does not have pip (and virtualenv), install it by running: python -m ensurepip If this yields a permission denied error, you are likely using a system-wide installation of Python, so either retry with a user installation of Python or prefix the command with 'sudo'. The server requires various external libraries in order to operate. The bin/develop script in the sources will retrieve these dependencies and install them to the .develop directory. Note that this behavior is currently also a side-effect of bin/run, but that is likely to change in the future:: cd ccs-calendarserver ./bin/develop ____________________________________________________________ Using system version of libffi.
____________________________________________________________ Using system version of OpenLDAP. ____________________________________________________________ Using system version of SASL. ____________________________________________________________ ... Before you can run the server, you need to set up a configuration file for development. There is a provided test configuration that you can use to start with, conf/caldavd-test.plist, which can be copied to conf/caldavd-dev.plist (the default config file used by the bin/run script). If conf/caldavd-dev.plist is not present when the server starts, you will be prompted to create a new one from conf/caldavd-test.plist. You will need to choose a directory service to use to populate your server's principals (users, groups, resources, and locations). A directory service provides the Calendar and Contacts Server with information about these principals. The directory services supported by Calendar and Contacts Server are: - XMLDirectoryService: this service is configurable via an XML file that contains principal information. The file conf/auth/accounts.xml provides an example principals configuration. - OpenDirectoryService: this service uses Apple's OpenDirectory client, the bulk of the configuration for which is handled external to Calendar and Contacts Server (e.g. System Preferences --> Users & Groups --> Login Options --> Network Account Server). - LdapDirectoryService: a highly flexible LDAP client that can leverage existing LDAP servers. See `twistedcaldav/stdconfig.py `_ for the available LdapDirectoryService options and their defaults. The caldavd-test.plist configuration uses XMLDirectoryService by default, set up to use conf/auth/accounts-test.xml. This is a generally useful configuration for development and testing. This file contains a user principal, named admin, with password admin, which is set up (in caldavd-test.plist) to have administrative permissions on the server. 
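For orientation, an XMLDirectoryService principals file has roughly the following shape. This is a hypothetical excerpt written in the style of conf/auth/accounts-test.xml; the element names (``directory``, ``record``, ``uid``, ``short-name``, ``full-name``) and the realm value are illustrative assumptions and should be checked against that file in the source tree before use:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- Hypothetical excerpt: defines one user principal named "admin"
     with password "admin", as described above. -->
<directory realm="Test Realm">
  <record type="user">
    <uid>admin</uid>
    <short-name>admin</short-name>
    <password>admin</password>
    <full-name>Super User</full-name>
  </record>
</directory>
```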
Start the server using the bin/run script, and use the -n option to bypass dependency setup:: bin/run -n Using /Users/andre/CalendarServer/ccs-calendarserver/.develop/roots/py_modules/bin/python as Python Missing config file: /Users/andre/CalendarServer/ccs-calendarserver/conf/caldavd-dev.plist You might want to start by copying the test configuration: cp conf/caldavd-test.plist conf/caldavd-dev.plist Would you like to copy the test configuration now? [y/n]y Copying test configuration... Starting server... The server should then start up and bind to port 8008 for HTTP and 8443 for HTTPS. You should then be able to connect to the server using your web browser (eg. Safari, Firefox) or with a CalDAV client (eg. Calendar).

calendarserver-9.1+dfsg/bin/_build.sh

# -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## . "${wd}/bin/_py.sh"; # Provide a default value: if the variable named by the first argument is # empty, set it to the default in the second argument.
conditional_set () { local var="$1"; shift; local default="$1"; shift; if [ -z "$(eval echo "\${${var}:-}")" ]; then eval "${var}=\${default:-}"; fi; } c_macro () { local sys_header="$1"; shift; local version_macro="$1"; shift; local value="$(printf "#include <${sys_header}>\n${version_macro}\n" | cc -x c -E - 2>/dev/null | tail -1)"; if [ "${value}" = "${version_macro}" ]; then # Macro was not replaced return 1; fi; echo "${value}"; } # Checks for presence of a C header, optionally with a version comparison. # With only a header file name, try to include it, returning nonzero if absent. # With 3 params, also attempt a version check, returning nonzero if too old. # Param 2 is a minimum acceptable version number # Param 3 is a #define from the source that holds the installed version number # Examples: # Assert that ldap.h is present # find_header "ldap.h" # Assert that ldap.h is present with a version >= 20344 # find_header "ldap.h" 20344 "LDAP_VENDOR_VERSION" find_header () { local sys_header="$1"; shift; if [ $# -ge 1 ]; then local min_version="$1"; shift; local version_macro="$1"; shift; fi; # No min_version given: # Check for presence of a header. 
We use the "-c" cc option because we don't # need to emit a file; cc exits nonzero if it can't find the header if [ -z "${min_version:-}" ]; then echo "#include <${sys_header}>" | cc -x c -c - -o /dev/null 2> /dev/null; return "$?"; fi; # Check for presence of a header of specified version local found_version="$(c_macro "${sys_header}" "${version_macro}")"; if [ -n "${found_version}" ] && cmp_version "${min_version}" "${found_version}"; then return 0; else return 1; fi; }; has_valid_code_signature () { local p="$1"; shift; if [ -f ${p} ] && [ -x ${p} ]; then if /usr/bin/codesign -v ${p} > /dev/null 2>&1; then return 0; else return 1; fi; fi; }; ad_hoc_sign_if_code_signature_is_invalid () { local p="$1"; shift; if [ -f ${p} ] && [ -x ${p} ]; then if has_valid_code_signature ${p}; then # If there is a valid code signature, we're done. echo "${p} has a valid code signature."; return 0; else # If codesign exits non-zero, that could mean either 'no signature' or # 'invalid signature'. We need to determine which one. # An unsigned binary has no LC_CODE_SIGNATURE load command. if /usr/bin/otool -l ${p} | /usr/bin/grep LC_CODE_SIGNATURE > /dev/null 2>&1; then # Has LC_CODE_SIGNATURE, but signature isn't valid. Ad-hoc sign it. echo "${p} has an invalid code signature, attempting to repair..." /usr/bin/codesign -f -s - ${p}; # Validate the ad-hoc signature if has_valid_code_signature ${p}; then echo "Invalid code signature replaced via ad-hoc signing."; return 0; fi; else # No code signature. echo "${p} is not code signed. This is OK."; return 0; fi; fi; fi; return 1; }; # Initialize all the global state required to use this library. init_build () { cd "${wd}"; init_py; # These variables are defaults for things which might be configured by # environment; only set them if they're un-set. 
conditional_set wd "$(pwd)"; conditional_set do_get "true"; conditional_set do_setup "true"; conditional_set force_setup "false"; conditional_set virtualenv_opts ""; dev_home="${wd}/.develop"; dev_roots="${dev_home}/roots"; dep_packages="${dev_home}/pkg"; dep_sources="${dev_home}/src"; py_virtualenv="${dev_home}/virtualenv"; py_bindir="${py_virtualenv}/bin"; python="${bootstrap_python}"; export PYTHON="${python}"; if [ -z "${TWEXT_PKG_CACHE-}" ]; then dep_packages="${dev_home}/pkg"; else dep_packages="${TWEXT_PKG_CACHE}"; fi; project="$(setup_print name)" || project=""; dev_patches="${dev_home}/patches"; patches="${wd}/lib-patches"; # Find some hashing commands # sha1() = sha1 hash, if available # md5() = md5 hash, if available # hash() = default hash function # $hash = name of the type of hash used by hash() hash=""; if find_cmd openssl > /dev/null; then if [ -z "${hash}" ]; then hash="md5"; fi; # remove "(stdin)= " from the front which openssl emits on some platforms md5 () { "$(find_cmd openssl)" dgst -md5 "$@" | sed 's/^.* //'; } elif find_cmd md5 > /dev/null; then if [ -z "${hash}" ]; then hash="md5"; fi; md5 () { "$(find_cmd md5)" "$@"; } elif find_cmd md5sum > /dev/null; then if [ -z "${hash}" ]; then hash="md5"; fi; md5 () { "$(find_cmd md5sum)" "$@"; } fi; if find_cmd sha1sum > /dev/null; then if [ -z "${hash}" ]; then hash="sha1sum"; fi; sha1 () { "$(find_cmd sha1sum)" "$@"; } fi; if find_cmd shasum > /dev/null; then if [ -z "${hash}" ]; then hash="sha1"; fi; sha1 () { "$(find_cmd shasum)" "$@"; } fi; if [ "${hash}" = "sha1" ]; then hash () { sha1 "$@"; } elif [ "${hash}" = "md5" ]; then hash () { md5 "$@"; } elif find_cmd cksum > /dev/null; then hash="hash"; hash () { cksum "$@" | cut -f 1 -d " "; } elif find_cmd sum > /dev/null; then hash="hash"; hash () { sum "$@" | cut -f 1 -d " "; } else hash () { echo "INTERNAL ERROR: No hash function."; exit 1; } fi; default_requirements="${wd}/requirements-default.txt"; use_openssl="true"; # Use SecureTransport 
instead of OpenSSL if possible, unless told otherwise if [ -z "${USE_OPENSSL-}" ]; then if [ $(uname -s) = "Darwin" ]; then # SecureTransport requires >= 10.11 (Darwin >= 15). REV=$(uname -r) if (( ${REV%%.*} >= 15 )); then default_requirements="${wd}/requirements-osx.txt"; use_openssl="false"; else # Needed to build OpenSSL 64-bit on OS X export KERNEL_BITS=64; fi; fi; fi; conditional_set requirements "${default_requirements}" } setup_print () { local what="$1"; shift; PYTHONPATH="${wd}:${PYTHONPATH:-}" "${bootstrap_python}" - 2>/dev/null << EOF from __future__ import print_function import setup print(setup.${what}) EOF } # Apply patches from lib-patches to the given dependency codebase. apply_patches () { local name="$1"; shift; local path="$1"; shift; if [ -d "${patches}/${name}" ]; then echo ""; echo "Applying patches to ${name} in ${path}..."; cd "${path}"; find "${patches}/${name}" \ -type f \ -name '*.patch' \ -print \ -exec patch -p0 --forward -i '{}' ';'; cd /; fi; echo ""; if [ -e "${path}/setup.py" ]; then echo "Removing build directory ${path}/build..."; rm -rf "${path}/build"; echo "Removing pyc files from ${path}..."; find "${path}" -type f -name '*.pyc' -print0 | xargs -0 rm -f; fi; } # If do_get is turned on, get an archive file containing a dependency via HTTP. www_get () { if ! "${do_get}"; then return 0; fi; local md5=""; local sha1=""; local OPTIND=1; while getopts "m:s:" option; do case "${option}" in 'm') md5="${OPTARG}"; ;; 's') sha1="${OPTARG}"; ;; esac; done; shift $((${OPTIND} - 1)); local name="$1"; shift; local path="$1"; shift; local url="$1"; shift; if "${force_setup}"; then rm -rf "${path}"; fi; if [ ! 
-d "${path}" ]; then local ext="$(echo "${url}" | sed 's|^.*\.\([^.]*\)$|\1|')"; local decompress=""; local unpack=""; untar () { tar -xvf -; } unzipstream () { local tmp="$(mktemp -t ccsXXXXX)"; cat > "${tmp}"; unzip "${tmp}"; rm "${tmp}"; } case "${ext}" in gz|tgz) decompress="gzip -d -c"; unpack="untar"; ;; bz2) decompress="bzip2 -d -c"; unpack="untar"; ;; tar) decompress="untar"; unpack="untar"; ;; zip) decompress="cat"; unpack="unzipstream"; ;; *) echo "Error in www_get of URL ${url}: Unknown extension ${ext}"; exit 1; ;; esac; echo ""; if [ -n "${dep_packages}" ] && [ -n "${hash}" ]; then mkdir -p "${dep_packages}"; local cache_basename="$(echo ${name} | tr '[ ]' '_')-$(echo "${url}" | hash)-$(basename "${url}")"; local cache_file="${dep_packages}/${cache_basename}"; check_hash () { local file="$1"; shift; local sum="$(md5 "${file}" | perl -pe 's|^.*([0-9a-f]{32}).*$|\1|')"; if [ -n "${md5}" ]; then echo "Checking MD5 sum for ${name}..."; if [ "${md5}" != "${sum}" ]; then echo "ERROR: MD5 sum for downloaded file is wrong: ${sum} != ${md5}"; return 1; fi; else echo "MD5 sum for ${name} is ${sum}"; fi; local sum="$(sha1 "${file}" | perl -pe 's|^.*([0-9a-f]{40}).*$|\1|')"; if [ -n "${sha1}" ]; then echo "Checking SHA1 sum for ${name}..."; if [ "${sha1}" != "${sum}" ]; then echo "ERROR: SHA1 sum for downloaded file is wrong: ${sum} != ${sha1}"; return 1; fi; else echo "SHA1 sum for ${name} is ${sum}"; fi; } if [ ! -f "${cache_file}" ]; then echo "No cache file: ${cache_file}"; echo "Downloading ${name}..."; # # Try getting a copy from the upstream source. # local tmp="$(mktemp "/tmp/${cache_basename}.XXXXXX")"; curl -L "${url}" -o "${tmp}"; echo ""; if [ ! -s "${tmp}" ] || grep '404 Not Found' "${tmp}" > /dev/null; then rm -f "${tmp}"; echo "${name} is not available from upstream source: ${url}"; exit 1; elif ! 
check_hash "${tmp}"; then rm -f "${tmp}"; echo "${name} from upstream source is invalid: ${url}"; exit 1; fi; # # OK, we should be good # mv "${tmp}" "${cache_file}"; else # # We have the file cached, just verify hash # if ! check_hash "${cache_file}"; then exit 1; fi; fi; echo "Unpacking ${name} from cache..."; get () { cat "${cache_file}"; } else echo "Downloading ${name}..."; get () { curl -L "${url}"; } fi; rm -rf "${path}"; cd "$(dirname "${path}")"; get | ${decompress} | ${unpack}; apply_patches "${name}" "${path}"; cd "${wd}"; fi; } # Run 'make' with the given command line, prepending a -j option appropriate to # the number of CPUs on the current machine, if that can be determined. jmake () { local ncpu=""; case "$(uname -s)" in Darwin|Linux) ncpu="$(getconf _NPROCESSORS_ONLN)"; ;; FreeBSD) ncpu="$(sysctl hw.ncpu)"; ncpu="${ncpu##hw.ncpu: }"; ;; esac; if [ -n "${ncpu:-}" ] && [[ "${ncpu}" =~ ^[0-9]+$ ]]; then make -j "${ncpu}" "$@"; else make "$@"; fi; } # Declare a dependency on a C project built with autotools. # Support for custom configure, prebuild, build, and install commands # prebuild_cmd, build_cmd, and install_cmd phases may be skipped by # passing the corresponding option with the empty string as the value. # By default, do: ./configure --prefix ... ; jmake ; make install c_dependency () { local f_hash=""; local configure="configure"; local prebuild_cmd=""; local build_cmd="jmake"; local install_cmd="make install"; local OPTIND=1; while getopts "m:s:c:p:b:" option; do case "${option}" in 'm') f_hash="-m ${OPTARG}"; ;; 's') f_hash="-s ${OPTARG}"; ;; 'c') configure="${OPTARG}"; ;; 'p') prebuild_cmd="${OPTARG}"; ;; 'b') build_cmd="${OPTARG}"; ;; esac; done; shift $((${OPTIND} - 1)); local name="$1"; shift; local path="$1"; shift; local uri="$1"; shift; # Extra arguments are processed below, as arguments to configure. 
mkdir -p "${dep_sources}"; local srcdir="${dep_sources}/${path}"; local dstroot="${dev_roots}/${name}"; www_get ${f_hash} "${name}" "${srcdir}" "${uri}"; export PATH="${dstroot}/bin:${PATH}"; export C_INCLUDE_PATH="${dstroot}/include:${C_INCLUDE_PATH:-}"; export LD_LIBRARY_PATH="${dstroot}/lib:${dstroot}/lib64:${LD_LIBRARY_PATH:-}"; export CPPFLAGS="-I${dstroot}/include ${CPPFLAGS:-} "; export LDFLAGS="-L${dstroot}/lib -L${dstroot}/lib64 ${LDFLAGS:-} "; export DYLD_LIBRARY_PATH="${dstroot}/lib:${dstroot}/lib64:${DYLD_LIBRARY_PATH:-}"; export PKG_CONFIG_PATH="${dstroot}/lib/pkgconfig:${PKG_CONFIG_PATH:-}"; if "${do_setup}"; then if "${force_setup}"; then rm -rf "${dstroot}"; fi; if [ ! -d "${dstroot}" ]; then echo "Building ${name}..."; cd "${srcdir}"; "./${configure}" --prefix="${dstroot}" "$@"; if [ ! -z "${prebuild_cmd}" ]; then eval ${prebuild_cmd}; fi; eval ${build_cmd}; eval ${install_cmd}; cd "${wd}"; else echo "Using built ${name}."; echo ""; fi; fi; } ruler () { if "${do_setup}"; then echo "____________________________________________________________"; echo ""; if [ $# -gt 0 ]; then echo "$@"; fi; fi; } using_system () { if "${do_setup}"; then local name="$1"; shift; echo "Using system version of ${name}."; echo ""; fi; } # # Build C dependencies # c_dependencies () { local c_glue_root="${dev_roots}/c_glue"; local c_glue_include="${c_glue_root}/include"; export C_INCLUDE_PATH="${c_glue_include}:${C_INCLUDE_PATH:-}"; # The OpenSSL version number is special. Our strategy is to get the integer # value of OPENSSL_VERSION_NUMBER for use in inequality comparison.
if [ "${use_openssl}" = "true" ]; then ruler; local min_ssl_version="268443791"; # OpenSSL 1.0.2h local ssl_version="$(c_macro openssl/ssl.h OPENSSL_VERSION_NUMBER)"; if [ -z "${ssl_version}" ]; then ssl_version="0x0"; fi; ssl_version="$("${bootstrap_python}" -c "print ${ssl_version}")"; if [ "${ssl_version}" -ge "${min_ssl_version}" ]; then using_system "OpenSSL"; else local v="1.0.2h"; local n="openssl"; local p="${n}-${v}"; # use 'config' instead of 'configure'; 'make' instead of 'jmake'. # also pass 'shared' to config to build shared libs. c_dependency -c "config" -s "577585f5f5d299c44dd3c993d3c0ac7a219e4949" \ -p "make depend" -b "make" \ "openssl" "${p}" \ "http://www.openssl.org/source/${p}.tar.gz" "shared"; fi; fi; ruler; if find_header ffi.h; then using_system "libffi"; elif find_header ffi/ffi.h; then if "${do_setup}"; then mkdir -p "${c_glue_include}"; echo "#include " > "${c_glue_include}/ffi.h" using_system "libffi"; fi; else local v="3.2.1"; local n="libffi"; local p="${n}-${v}"; c_dependency -m "83b89587607e3eb65c70d361f13bab43" \ "libffi" "${p}" \ "ftp://sourceware.org/pub/libffi/${p}.tar.gz" fi; ruler; if find_header ldap.h 20428 LDAP_VENDOR_VERSION; then using_system "OpenLDAP"; else local v="2.4.38"; local n="openldap"; local p="${n}-${v}"; c_dependency -m "39831848c731bcaef235a04e0d14412f" \ "OpenLDAP" "${p}" \ "http://www.openldap.org/software/download/OpenLDAP/${n}-release/${p}.tgz" \ --disable-bdb --disable-hdb --with-tls=openssl; fi; ruler; if find_header sasl.h; then using_system "SASL"; elif find_header sasl/sasl.h; then if "${do_setup}"; then mkdir -p "${c_glue_include}"; echo "#include " > "${c_glue_include}/sasl.h" using_system "SASL"; fi; else local v="2.1.26"; local n="cyrus-sasl"; local p="${n}-${v}"; c_dependency -m "a7f4e5e559a0e37b3ffc438c9456e425" \ "CyrusSASL" "${p}" \ "ftp://ftp.cyrusimap.org/cyrus-sasl/${p}.tar.gz" \ --disable-macos-framework; fi; ruler; if command -v memcached > /dev/null; then using_system "memcached"; else 
local v="2.0.21-stable"; local n="libevent"; local p="${n}-${v}"; local configure_openssl="--enable-openssl=yes"; if [ "${use_openssl}" = "false" ]; then local configure_openssl="--enable-openssl=no"; fi; c_dependency -m "b2405cc9ebf264aa47ff615d9de527a2" \ "libevent" "${p}" \ "https://github.com/downloads/libevent/libevent/${p}.tar.gz" \ ${configure_openssl}; local v="1.4.24"; local n="memcached"; local p="${n}-${v}"; c_dependency -s "32a798a37ef782da10a09d74aa1e5be91f2861db" \ "memcached" "${p}" \ "http://www.memcached.org/files/${p}.tar.gz" \ "--disable-docs"; fi; ruler; if command -v postgres > /dev/null; then using_system "Postgres"; else local v="9.5.3"; local n="postgresql"; local p="${n}-${v}"; if command -v dtrace > /dev/null; then local enable_dtrace="--enable-dtrace"; else local enable_dtrace=""; fi; c_dependency -m "3f0c388566c688c82b01a0edf1e6b7a0" \ "PostgreSQL" "${p}" \ "http://ftp.postgresql.org/pub/source/v${v}/${p}.tar.bz2" \ ${enable_dtrace}; fi; } # # Special cx_Oracle patch handling # cx_Oracle_patch() { local f_hash="-m 6a49e1aa0e5b48589f8edfe5884ff5a5"; local v="5.2"; local n="cx_Oracle"; local p="${n}-${v}"; local srcdir="${dev_patches}/${p}"; www_get ${f_hash} "${n}" "${srcdir}" "https://pypi.python.org/packages/source/c/${n}/${p}.tar.gz"; cd "${dev_patches}"; tar zcf "${p}.tar.gz" "${p}"; cd "${wd}"; rm -rf "${srcdir}"; } # # Build Python dependencies # py_dependencies () { python="${py_bindir}/python"; py_ve_tools="${dev_home}/ve_tools"; export PATH="${py_virtualenv}/bin:${PATH}"; export PYTHON="${python}"; export PYTHONPATH="${py_ve_tools}/lib:${wd}:${PYTHONPATH:-}"; mkdir -p "${dev_patches}"; # Work around a change in Xcode tools that breaks Python modules in OS X # 10.9.2 and prior due to a hard error if the -mno-fused-madd flag is used, as # it was in the system Python, and is therefore passed along by distutils.
if [ "$(uname -s)" = "Darwin" ]; then if "${bootstrap_python}" -c 'import distutils.sysconfig; print distutils.sysconfig.get_config_var("CFLAGS")' \ | grep -e -mno-fused-madd > /dev/null; then export ARCHFLAGS="-Wno-error=unused-command-line-argument-hard-error-in-future"; fi; fi; if ! "${do_setup}"; then return 0; fi; # Set up virtual environment if "${force_setup}"; then # Nuke the virtual environment first rm -rf "${py_virtualenv}"; fi; if [ ! -d "${py_virtualenv}" ]; then bootstrap_virtualenv; "${bootstrap_python}" -m virtualenv \ --system-site-packages \ --no-setuptools \ ${virtualenv_opts} \ "${py_virtualenv}"; if [ "${use_openssl}" = "false" ]; then # Interacting with keychain requires either a valid code signature, or no # code signature. An *invalid* code signature won't work. if ! ad_hoc_sign_if_code_signature_is_invalid ${python}; then cat << EOF SecureTransport support is enabled, but we are unable to validate or fix the code signature for ${python}. Keychain interactions used with SecureTransport require either no code signature, or a valid code signature. To switch to OpenSSL, delete the .develop directory, export USE_OPENSSL=1, then run ./bin/develop again. EOF fi; fi; fi; cd "${wd}"; # Make sure setup got called enough to write the version file. PYTHONPATH="${PYTHONPATH}" "${python}" "${wd}/setup.py" check > /dev/null; if [ -d "${dev_home}/pip_downloads" ]; then pip_install="pip_install_from_cache"; else pip_install="pip_download_and_install"; fi; ruler "Preparing Python requirements"; echo ""; "${pip_install}" --requirement="${requirements}"; for extra in $("${bootstrap_python}" -c 'import setup; print "\n".join(setup.extras_requirements.keys())'); do ruler "Preparing Python requirements for optional feature: ${extra}"; echo ""; if [ "${extra}" = "oracle" ]; then cx_Oracle_patch; fi; if ! 
"${pip_install}" --editable="${wd}[${extra}]"; then echo "Feature ${extra} is optional; continuing."; fi; done; ruler "Preparing Python requirements for patching"; echo ""; "${pip_install}" --ignore-installed --no-deps --requirement="${wd}/requirements-ignore-installed.txt"; ruler "Patching Python requirements"; echo ""; twisted_version=$("${python}" -c 'from twisted import version; print version.base()'); if [ ! -e "${py_virtualenv}/lib/python2.7/site-packages/twisted/.patch_applied.${twisted_version}" ]; then apply_patches "Twisted" "${py_virtualenv}/lib/python2.7/site-packages" find "${py_virtualenv}/lib/python2.7/site-packages/twisted" -type f -name '.patch_applied*' -print0 | xargs -0 rm -f; touch "${py_virtualenv}/lib/python2.7/site-packages/twisted/.patch_applied.${twisted_version}"; fi; echo ""; } macos_oracle () { if [ "${ORACLE_HOME-}" ]; then case "$(uname -s)" in Darwin) echo "macOS Oracle init." export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:-}:${ORACLE_HOME}"; export DYLD_LIBRARY_PATH="${DYLD_LIBRARY_PATH:-}:${ORACLE_HOME}"; ;; esac; fi; } bootstrap_virtualenv () { mkdir -p "${py_ve_tools}"; export PYTHONUSERBASE="${py_ve_tools}" for pkg in \ setuptools==18.5 \ pip==8.1.2 \ virtualenv==15.0.2 \ ; do ruler "Installing ${pkg}"; "${bootstrap_python}" -m pip install -I --user "${pkg}"; done; } pip_download () { mkdir -p "${dev_home}/pip_downloads"; "${python}" -m pip download \ --disable-pip-version-check \ -d "${dev_home}/pip_downloads" \ --pre \ --no-cache-dir \ --log-file="${dev_home}/pip.log" \ "$@"; } pip_install_from_cache () { "${python}" -m pip install \ --disable-pip-version-check \ --pre \ --no-index \ --no-cache-dir \ --find-links="${dev_patches}" \ --find-links="${dev_home}/pip_downloads" \ --log-file="${dev_home}/pip.log" \ "$@"; } pip_download_and_install () { "${python}" -m pip install \ --disable-pip-version-check \ --pre \ --no-cache-dir \ --find-links="${dev_patches}" \ --log-file="${dev_home}/pip.log" \ "$@"; } # # Set up for development # 
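The Twisted patching step above guards itself with a per-version stamp file, so patches are applied at most once per installed version. A minimal sketch of that stamp-file idempotence pattern follows; the directory and version number are invented for the example.

```shell
# Sketch of the stamp-file pattern: do the work at most once per
# version, recording completion in a hidden marker file. Stamps from
# older versions are cleared before the new stamp is written.
site="$(mktemp -d)";
version="17.1.0";

apply_once () {
  if [ ! -e "${site}/.patch_applied.${version}" ]; then
    rm -f "${site}"/.patch_applied.*;      # drop stamps from older versions
    touch "${site}/.patch_applied.${version}";
    echo "patched";
  else
    echo "already patched";
  fi;
}

first="$(apply_once)";
second="$(apply_once)";
echo "first run: ${first}; second run: ${second}";
# -> first run: patched; second run: already patched
rm -rf "${site}";
```

Embedding the version in the stamp's name is what forces a fresh patch pass whenever the patched package is upgraded.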
develop () { init_build; c_dependencies; py_dependencies; macos_oracle; } develop_clean () { init_build; # Clean rm -rf "${dev_roots}"; rm -rf "${py_virtualenv}"; } develop_distclean () { init_build; # Clean rm -rf "${dev_home}"; } calendarserver-9.1+dfsg/bin/_py.sh000066400000000000000000000077001315003562600172020ustar00rootroot00000000000000# -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## find_cmd () { local cmd="$1"; shift; local path="$(type "${cmd}" 2>/dev/null | sed "s|^${cmd} is \(a tracked alias for \)\{0,1\}||")"; if [ -z "${path}" ]; then return 1; fi; echo "${path}"; } # Echo the major.minor version of the given Python interpreter. py_version () { local python="$1"; shift; echo "$("${python}" -c "from distutils.sysconfig import get_python_version; print get_python_version()")"; } # # Test if a particular python interpreter is available, given the full path to # that interpreter. # try_python () { local python="$1"; shift; if [ -z "${python}" ]; then return 1; fi; if ! type "${python}" > /dev/null 2>&1; then return 1; fi; local py_version="$(py_version "${python}")"; if [ "$(echo "${py_version}" | sed 's|\.||')" -lt "25" ]; then return 1; fi; return 0; } # # Detect which version of Python to use, then print out which one was detected. # # This will prefer the python interpreter in the PYTHON environment variable. 
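The `find_cmd` helper defined just above resolves a command name to a path via the shell's own `type` lookup, failing with a non-zero status for unknown commands. A condensed, standalone copy for illustration:

```shell
# Condensed copy of find_cmd for illustration: parse the output of
# "type" to recover the command's path; return 1 when nothing matches.
find_cmd () {
  local cmd="$1"; shift;
  local path="$(type "${cmd}" 2>/dev/null | sed "s|^${cmd} is \(a tracked alias for \)\{0,1\}||")";
  if [ -z "${path}" ]; then return 1; fi;
  echo "${path}";
}

sh_path="$(find_cmd sh)";
echo "sh resolves to: ${sh_path}";
if ! find_cmd surely_no_such_command_xyz > /dev/null; then
  echo "unknown command: lookup failed as expected";
fi
```

The `sed` expression strips both the plain `"cmd is /path"` prefix and the `"cmd is a tracked alias for /path"` variant some shells emit.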
# If that's not found, it will check for "python2.7" and "python", looking for # each in your PATH. # detect_python_version () { local v; local p; for v in "2.7" "" do for p in \ "${PYTHON:=}" \ "python${v}" \ ; do if p="$(find_cmd "${p}")"; then if try_python "${p}"; then echo "${p}"; return 0; fi; fi; done; done; return 1; } # # Compare version numbers # cmp_version () { local v="$1"; shift; local mv="$1"; shift; local vh; local mvh; local result; while true; do vh="${v%%.*}"; # Get highest-order segment mvh="${mv%%.*}"; if [ "${vh}" -gt "${mvh}" ]; then result=1; break; fi; if [ "${vh}" -lt "${mvh}" ]; then result=0; break; fi; if [ "${v}" = "${v#*.}" ]; then # No dots left, so we're ok result=0; break; fi; if [ "${mv}" = "${mv#*.}" ]; then # No dots left, so we're not gonna match result=1; break; fi; v="${v#*.}"; mv="${mv#*.}"; done; return ${result}; } # # Detect which python to use, and store it in the 'python' variable, as well as # setting up variables related to version and build configuration. # init_py () { # First, detect the appropriate version of Python to use, based on our version # requirements and the environment. Note that all invocations of python in # our build scripts should therefore be '"${python}"', not 'python'; this is # important on systems with older system pythons (2.4 or earlier) with an # alternate install of Python, or alternate python installation mechanisms # like virtualenv. bootstrap_python="$(detect_python_version)"; # Set the $PYTHON environment variable to an absolute path pointing at the # appropriate python executable, a standard-ish mechanism used by certain # non-distutils things that need to find the "right" python. For instance, # the part of the PostgreSQL build process which builds pl_python. Note that # detect_python_version, above, already honors $PYTHON, so if this is already # set it won't be stomped on, it will just be re-set to the same value. 
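The `cmp_version` helper above compares dot-separated version strings segment by segment, numerically from the left. A condensed copy, exercised with the kind of minimum-version checks `find_header` performs:

```shell
# Condensed copy of cmp_version for illustration: exit 0 when the
# first (minimum) version is <= the second (found) version.
cmp_version () {
  local v="$1"; shift;
  local mv="$1"; shift;
  local vh; local mvh; local result;
  while true; do
    vh="${v%%.*}"; mvh="${mv%%.*}";
    if [ "${vh}" -gt "${mvh}" ]; then result=1; break; fi;
    if [ "${vh}" -lt "${mvh}" ]; then result=0; break; fi;
    if [ "${v}" = "${v#*.}" ]; then result=0; break; fi;    # minimum exhausted: acceptable
    if [ "${mv}" = "${mv#*.}" ]; then result=1; break; fi;  # found exhausted: too old
    v="${v#*.}"; mv="${mv#*.}";
  done;
  return ${result};
}

cmp_version 2.4.38 2.4.44 && new_enough="yes" || new_enough="no";
cmp_version 1.0.2 1.0.1   && old_ok="yes"     || old_ok="no";
echo "2.4.44 new enough: ${new_enough}; 1.0.1 acceptable: ${old_ok}";
# -> 2.4.44 new enough: yes; 1.0.1 acceptable: no
```

Because segments are compared as integers, `1.0.10` correctly ranks above `1.0.9`, which a plain string comparison would get wrong.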
export PYTHON="$(find_cmd ${bootstrap_python})"; if [ -z "${bootstrap_python:-}" ]; then echo "No suitable python found. Python 2.6 or 2.7 is required."; exit 1; fi; } calendarserver-9.1+dfsg/bin/benchmark000077500000000000000000000014331315003562600177340ustar00rootroot00000000000000#!/bin/sh ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e; set -u; wd="$(cd "$(dirname "$0")/.." && pwd -L)"; . "${wd}/bin/_build.sh"; init_build > /dev/null; exec "${python}" "${wd}/contrib/performance/benchmark" "$@"; calendarserver-9.1+dfsg/bin/benchreport000077500000000000000000000020211315003562600203070ustar00rootroot00000000000000#!/bin/sh ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e; set -u; wd="$(cd "$(dirname "$0")/.." && pwd -L)"; . 
"${wd}/bin/_build.sh"; init_build > /dev/null; args="$@" echo "Parameter: 1"; "${python}" "${wd}/contrib/performance/report" ${args} 1 HTTP summarize; echo; echo "Parameter: 9"; "${python}" "${wd}/contrib/performance/report" ${args} 9 HTTP summarize; echo; echo "Parameter: 81"; "${python}" "${wd}/contrib/performance/report" ${args} 81 HTTP summarize; calendarserver-9.1+dfsg/bin/caldavd000077500000000000000000000112031315003562600173740ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## #PATH #PYTHONPATH daemonize=""; errorlogenabled=""; username=""; groupname=""; configfile=""; twistdpath="$(command -v twistd)"; plugin_name="caldav"; service_type=""; profile=""; twistd_reactor=""; child_reactor="" extra=""; py_version () { local python="$1"; shift; echo "$("${python}" -c "from distutils.sysconfig import get_python_version; print get_python_version()")"; } try_python () { local python="$1"; shift; if [ -z "${python}" ]; then return 1; fi; if ! 
type "${python}" > /dev/null 2>&1; then return 1; fi; local py_version="$(py_version "${python}")"; if [ "$(echo "${py_version}" | sed 's|\.||')" -lt "25" ]; then return 1; fi; return 0; } for v in "2.7" ""; do for p in \ "${PYTHON:=}" \ "python${v}" \ "/usr/local/bin/python${v}" \ "/usr/local/python/bin/python${v}" \ "/usr/local/python${v}/bin/python${v}" \ "/opt/bin/python${v}" \ "/opt/python/bin/python${v}" \ "/opt/python${v}/bin/python${v}" \ "/Library/Frameworks/Python.framework/Versions/${v}/bin/python" \ "/opt/local/bin/python${v}" \ "/sw/bin/python${v}" \ ; do if try_python "${p}"; then python="${p}"; break; fi; done; if [ -n "${python:-}" ]; then break; fi; done; if [ -z "${python:-}" ]; then echo "No suitable python found."; exit 1; fi; usage () { program="$(basename "$0")"; if [ "${1--}" != "-" ]; then echo "$@"; echo; fi; echo "Usage: ${program} [-hX] [-u username] [-g groupname] [-T twistd] [-t type] [-f caldavd.plist] [-p statsfile]"; echo "Options:"; echo " -h Print this help and exit"; echo " -X Do not daemonize"; echo " -L Do not log errors to file; instead use stdout"; echo " -u User name to run as"; echo " -g Group name to run as"; echo " -f Configuration file to read"; echo " -T Path to twistd binary"; echo " -t Process type (Master, Slave or Combined)"; echo " -p Path to the desired pstats file."; echo " -R The Twisted Reactor to run [${reactor}]"; if [ "${1-}" = "-" ]; then return 0; fi; exit 64; } while getopts 'hXLu:g:f:T:P:t:p:R:o:' option; do case "${option}" in '?') usage; ;; 'h') usage -; exit 0; ;; 'X') daemonize="-n"; ;; 'L') errorlogenabled="-o ErrorLogEnabled=False"; ;; 'f') configfile="-f ${OPTARG}"; ;; 'T') twistdpath="${OPTARG}"; ;; 'u') username="-u ${OPTARG}"; ;; 'g') groupname="-g ${OPTARG}"; ;; 'P') plugin_name="${OPTARG}"; ;; 't') service_type="-o ProcessType=${OPTARG}"; ;; 'p') twistd_profile="--profiler=cprofile-cpu --profile=${OPTARG}/master.pstats --savestats"; profile="-o Profiling/Enabled=True -o 
Profiling/BaseDirectory=${OPTARG}"; ;; 'R') twistd_reactor="--reactor=${OPTARG}"; child_reactor="-o Twisted/reactor=${OPTARG}"; ;; 'o') extra="${extra} -o ${OPTARG}"; ;; esac; done; shift $((${OPTIND} - 1)); if [ $# != 0 ]; then usage "Unrecognized arguments:" "$@"; fi; export PYTHONHASHSEED=random export PYTHONPATH if [ "${ORACLE_HOME-}" ]; then case "$(uname -s)" in Darwin) echo "macOS Oracle init." export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:-}:${ORACLE_HOME}"; export DYLD_LIBRARY_PATH="${DYLD_LIBRARY_PATH:-}:${ORACLE_HOME}"; ;; esac; fi; exec "${python}" "${twistdpath}" ${twistd_profile} ${twistd_reactor} ${daemonize} ${username} ${groupname} "${plugin_name}" ${configfile} ${service_type} ${errorlogenabled} ${profile} ${child_reactor} ${extra}; calendarserver-9.1+dfsg/bin/dependencies000077500000000000000000000042061315003562600204310ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2015-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e set -u wd="$(cd "$(dirname "$0")/.." && pwd)"; . 
"${wd}/bin/_build.sh"; do_setup="false"; develop > /dev/null; ## # Argument handling ## all_extras="false"; output="nested"; extras () { PYTHONPATH="${wd}:${PYTHONPATH:-}" "${bootstrap_python}" - 2>/dev/null << EOF from __future__ import print_function import setup print(" ".join(setup.extras_requirements.keys())) EOF } usage () { program="$(basename "$0")"; if [ "${1--}" != "-" ]; then echo "$@"; echo; fi; echo "Usage: ${program} [-hanl] [extra ...]"; echo "Supported extras: $(extras)"; echo "Options:"; echo " -h Print this help and exit"; echo " -a Enable all extras"; echo " -n Output a nested list (default)"; echo " -l Output a simple list"; if [ "${1-}" = "-" ]; then return 0; fi; exit 64; } while getopts 'hanl' option; do case "${option}" in '?') usage; ;; 'h') usage -; exit 0; ;; 'a') all_extras="true"; ;; 'n') output="nested"; ;; 'l') output="list"; ;; esac; done; shift $((${OPTIND} - 1)); ## # Do The Right Thing ## cmd="${py_bindir}/eggdeps"; if "${all_extras}"; then extras="$(extras)"; else extras="$@"; fi; extras="$(echo ${extras} | sed 's| |,|g')"; spec="calendarserver[${extras}]"; case "${output}" in 'nested') "${cmd}" "${spec}"; ;; 'list') PYTHONPATH="${wd}:${PYTHONPATH:-}" "${bootstrap_python}" - 2>/dev/null << EOF from __future__ import print_function import setup print("\n".join($("${cmd}" --requirements "${spec}"))) EOF esac; calendarserver-9.1+dfsg/bin/develop000077500000000000000000000046141315003562600174440ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e; set -u; if [ -z "${wd:-}" ]; then wd="$(cd "$(dirname "$0")/.." && pwd)"; fi; . "${wd}/bin/_build.sh"; develop; cd ${wd}; # # Link to scripts for convenience # find .develop/virtualenv/bin -type f -name "calendarserver_*" | { while read source; do target="${wd}/bin/$(basename ${source})"; rm -f "${target}"; case "$(basename ${target})" in calendarserver_dashboard|calendarserver_dashcollect|calendarserver_dashview) cat << __END__ > "${target}" #!/bin/sh export PATH="${PATH:-}"; export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:-}"; export DYLD_LIBRARY_PATH="${DYLD_LIBRARY_PATH:-}"; exec "\$(dirname \$0)/../${source}" "\$@"; __END__ ;; *) cat << __END__ > "${target}" #!/bin/sh export PATH="${PATH:-}"; export LD_LIBRARY_PATH="${LD_LIBRARY_PATH:-}"; export DYLD_LIBRARY_PATH="${DYLD_LIBRARY_PATH:-}"; exec "\$(dirname \$0)/../${source}" --config "${wd}/conf/caldavd-dev.plist" "\$@"; __END__ ;; esac chmod +x "${target}"; done; } # # Create a subprojects directory with -e checkouts for convenience. # find .develop/virtualenv/src -mindepth 1 -maxdepth 1 -type d | { while read source; do deps="${wd}/subprojects"; mkdir -p "${deps}"; target="${deps}/$(basename ${source} | sed 's|twextpy|twext|')"; if [ -L "${target}" ]; then rm "${target}"; fi; ln -s "../${source}" "${target}"; # NO! This causes bin/develop to hang the second time it is run #rm -f "${target}/.develop"; #ln -s "${wd}/.develop" "${target}/.develop"; done; } # # Do keychain init on OS X # if [ -z "${USE_OPENSSL-}" ]; then case "$(uname -s)" in Darwin) echo "Keychain init." 
bin/python bin/keychain_init.py ;; esac; fi; echo "Dependency setup complete." calendarserver-9.1+dfsg/bin/environment000077500000000000000000000016701315003562600203510ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e set -u wd="$(cd "$(dirname "$0")/.." && pwd)"; . "${wd}/bin/_build.sh"; do_setup="false"; develop > /dev/null; if [ $# -gt 0 ]; then for name in "$@"; do # Use python to avoid using eval. "${python}" -c 'import os; print os.environ.get("'${name}'", "")'; done; else env; fi; calendarserver-9.1+dfsg/bin/keychain_init.py000077500000000000000000000113341315003562600212500ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2015-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from subprocess import Popen, PIPE, STDOUT import os import re import sys import OpenSSL identity_preference = "org.calendarserver.test" certname_regex = re.compile(r'"alis"="(.*)"') certificate_name = "localhost" identity_file = "./twistedcaldav/test/data/server.pem" certificate_file = "./twistedcaldav/test/data/cert.pem" def identityExists(): child = Popen( args=[ "/usr/bin/security", "get-identity-preference", "-s", identity_preference, "login.keychain", ], stdout=PIPE, stderr=STDOUT, ) output, _ignore_error = child.communicate() if child.returncode: print("Could not find identity '{}'".format(identity_preference)) return False else: match = certname_regex.search(output) if not match: raise RuntimeError("No certificate found for identity '{}'".format(identity_preference)) else: print("Found certificate '{}' for identity '{}'".format(match.group(1), identity_preference)) return True def identityCreate(): child = Popen( args=[ "/usr/bin/security", "set-identity-preference", "-s", identity_preference, "-c", certificate_name, ], stdout=PIPE, stderr=STDOUT, ) output, error = child.communicate() if child.returncode: raise RuntimeError(error if error else output) else: print("Created identity '{}' for certificate '{}'".format(identity_preference, certificate_name)) return True def certificateExists(): child = Popen( args=[ "/usr/bin/security", "find-certificate", "-c", certificate_name, "login.keychain", ], stdout=PIPE, stderr=STDOUT, ) _ignore_output, _ignore_error = child.communicate() if child.returncode: print("No certificate '{}' found for identity '{}'".format(certificate_name, identity_preference)) return False else: print("Found certificate '{}'".format(certificate_name)) return True def certificateImport(importFile): child = Popen( args=[ "/usr/bin/security", "import", importFile, "-k", "login.keychain", "-A", ], stdout=PIPE, stderr=STDOUT, ) output, error = child.communicate() if child.returncode: raise RuntimeError(error if error else output) else: 
print("Imported certificate '{}'".format(certificate_name)) return True def certificateTrust(): child = Popen( args=[ "/usr/bin/security", "add-trusted-cert", "-p", "ssl", "-p", "basic", certificate_file, ], stdout=PIPE, stderr=STDOUT, ) output, error = child.communicate() if child.returncode: raise RuntimeError(error if error else output) else: print("Trusted certificate '{}'".format(certificate_name)) return True def checkCertificate(): # Validate identity error = OpenSSL.crypto.check_keychain_identity(identity_preference, allowInteraction=True) if error: raise RuntimeError( "The configured TLS Keychain Identity ({cert}) cannot be used: {reason}".format( cert=identity_preference, reason=error ) ) else: print("Certificate/key can be used.") if __name__ == '__main__': if os.path.isfile("/usr/bin/security"): # If the identity exists we are done if identityExists(): checkCertificate() sys.exit(0) # Check for certificate and import if not present if not certificateExists(): try: # Try cert + pkey first certificateImport(identity_file) except RuntimeError: # Try just the cert certificateImport(certificate_file) certificateTrust() # Create the identity identityCreate() checkCertificate() else: raise RuntimeError("Keychain access utility ('security') not found") calendarserver-9.1+dfsg/bin/keychain_unlock.py000077500000000000000000000041441315003562600216010ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2015-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.
##

from subprocess import Popen, PIPE, STDOUT
import os
import sys

password_file = os.path.expanduser("~/.keychain")


def isKeychainUnlocked():
    child = Popen(
        args=[
            "/usr/bin/security", "show-keychain-info",
            "login.keychain",
        ],
        stdout=PIPE, stderr=STDOUT,
    )
    _ignore_output, _ignore_error = child.communicate()
    return child.returncode == 0


def unlockKeychain():
    if not os.path.isfile(password_file):
        print("Could not unlock login.keychain: no password available.")
        return False

    with open(password_file) as f:
        password = f.read().strip()

    child = Popen(
        args=[
            "/usr/bin/security", "-i",
        ],
        stdin=PIPE, stdout=PIPE, stderr=STDOUT,
    )
    output, error = child.communicate("unlock-keychain -p {} login.keychain\n".format(password))
    if child.returncode:
        print("Could not unlock login.keychain: {}".format(error if error else output))
        return False
    else:
        print("Unlocked login.keychain")
        return True


if __name__ == '__main__':
    if os.path.isfile("/usr/bin/security"):
        if isKeychainUnlocked():
            print("Keychain already unlocked")
            result = True
        else:
            result = unlockKeychain()
    else:
        print("Keychain access utility ('security') not found")
        result = False
    sys.exit(0 if result else 1)

calendarserver-9.1+dfsg/bin/make-ssl-ca
#!/bin/sh
# -*- sh-basic-offset: 2 -*-

set -e
set -u

##
# Handle command line
##

usage () {
  program=$(basename "$0");

  if [ $# != 0 ]; then echo "$@"; echo ""; fi;

  echo "usage: ${program} name";
}

if [ $# != 1 ]; then
  usage;
  exit 1;
fi;

name="$1";

##
# Do The Right Thing
##

newfile () {
  # New file is not readable and empty
  local name="$1";
  rm -f "${name}";
  tmp="$(mktemp "${name}")";
  if [ "${tmp}" != "${name}" ]; then
    mv "${tmp}" "${name}";
  fi;
}

if [ !
-s "${name}.key" ]; then echo "Generating certificate authority key..."; newfile "${name}.key"; openssl genrsa -des3 -out "${name}.key" 2048; echo ""; else echo "Key for ${name} already exists."; fi; if [ ! -s "${name}.crt" ]; then echo "Generating certificate..."; openssl req -new -x509 -days 3650 -key "${name}.key" -out "${name}.crt"; chmod 644 "${name}.crt"; echo ""; else echo "Certificate for ${name} already exists."; fi; # Print the certificate openssl x509 -in "${name}.crt" -text -noout; calendarserver-9.1+dfsg/bin/make-ssl-key000077500000000000000000000045151315003562600203100ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- set -e set -u ## # Handle command line ## authority="ca"; usage () { program=$(basename "$0"); if [ $# != 0 ]; then echo "$@"; echo ""; fi; echo "usage: ${program} [options] host_name"; echo ""; echo " -h Show this help"; echo " -a authority Use given certificate authority [${authority}]."; } while getopts 'ha:' option; do case "$option" in '?') usage; exit 64; ;; 'h') usage; exit 0; ;; 'a') authority="${OPTARG}"; ;; esac; done; shift $((${OPTIND} - 1)); if [ $# != 1 ]; then usage; exit 64; fi; host="$1"; ## # Do The Right Thing ## if [ "${authority}" != "NONE" ]; then if [ ! -s "${authority}.key" ]; then echo "Not a certificate authority key: ${authority}.key"; exit 1; fi; if [ ! -s "${authority}.crt" ]; then echo "Not a certificate authority certificate: ${authority}.crt"; exit 1; fi; fi; newfile () { # New file is not readable and empty name="$1"; rm -f "${name}"; tmp="$(mktemp "${name}")"; if [ "${tmp}" != "${name}" ]; then mv "${tmp}" "${name}"; fi; } # # FIXME: # Remove requirement that user type in a pass phrase here, which # we then simply strip out. # if [ ! 
-s "${host}.key" ]; then echo "Generating host key..."; newfile "${host}.key.tmp"; openssl genrsa -des3 -out "${host}.key.tmp" 1024; echo ""; echo "Removing pass phrase from key..."; newfile "${host}.key"; openssl rsa -in "${host}.key.tmp" -out "${host}.key"; rm "${host}.key.tmp"; echo ""; else echo "Key for ${host} already exists."; fi; # # FIXME: # Remove requirement that user type the common name, which we # already know ($hostname). # if [ ! -s "${host}.csr" ]; then echo "Generating certificate request..."; newfile "${host}.csr"; openssl req -new -key "${host}.key" -out "${host}.csr"; echo ""; else echo "Certificate request for ${host} already exists."; fi; if [ "${authority}" != "NONE" ]; then if [ ! -s "${host}.crt" ]; then echo "Generating certificate..."; openssl x509 -req -in "${host}.csr" -out "${host}.crt" -sha1 -days 3650 \ -CA "${authority}.crt" -CAkey "${authority}.key" -CAcreateserial; chmod 644 "${host}.crt"; echo ""; else echo "Certificate for ${host} already exists."; fi; # Print the certificate openssl x509 -in "${host}.crt" -text -noout; fi; calendarserver-9.1+dfsg/bin/newsim000077500000000000000000000016121315003562600173030ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e; set -u; wd="$(cd "$(dirname "$0")/.." && pwd -L)"; . 
"${wd}/bin/_build.sh"; do_setup="false"; develop > /dev/null; exec "${python}" "${wd}/clientsim/framework/sim.py" "--config=simplugin/config.plist" "--clients=simplugin/clients.plist" "$@"; calendarserver-9.1+dfsg/bin/package000077500000000000000000000102461315003562600173770ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## ## # WARNING: This script is intended for use by developers working on # the Calendar Server code base. It is not intended for use in a # deployment configuration. # # DO NOT use this script as a system startup tool (eg. in /etc/init.d, # /Library/StartupItems, launchd plists, etc.) # # For those uses, install the server properly (eg. with "./run -i # /tmp/foo && cd /tmp/foo && pax -pe -rvw . /") and use the caldavd # executable to start the server. ## set -e; set -u; wd="$(cd "$(dirname "$0")/.." 
&& pwd)";

#
# Usage
#

clean="false";

usage () {
  program="$(basename "$0")";

  if [ "${1--}" != "-" ]; then echo "$@"; echo; fi;

  echo "Usage: ${program} [-hFfn] destination";
  echo "Options:";
  echo "  -h  Print this help and exit";
  echo "  -F  Clean and force setup to run";
  echo "  -f  Force setup to run";
  echo "  -n  Do not run setup";

  if [ "${1-}" = "-" ]; then return 0; fi;
  exit 64;
}

parse_options () {
  local OPTIND=1;

  while getopts "hFfn" option; do
    case "${option}" in
      '?') usage; ;;
      'h') usage -; exit 0; ;;
      'F') do_setup="true" ; force_setup="true" ; clean="true" ; ;;
      'f') do_setup="true" ; force_setup="true" ; clean="false"; ;;
      'n') do_setup="false"; force_setup="false"; clean="false"; ;;
    esac;
  done;
  shift $((${OPTIND} - 1));

  if [ $# -le 0 ]; then
    usage "No destination provided.";
  fi;

  destination="$1"; shift;

  if [ $# != 0 ]; then
    usage "Unrecognized arguments:" "$@";
  fi;
}

main () {
  . "${wd}/bin/_build.sh";

  parse_options "$@";

  #
  # Build everything
  #

  if "${clean}"; then
    develop_clean;
  fi;

  install -d "${destination}";
  local destination="$(cd "${destination}" && pwd)";

  init_build;

  dev_roots="${destination}/roots";
  py_virtualenv="${destination}/virtualenv";
  py_bindir="${py_virtualenv}/bin";
  py_ve_tools="${dev_home}/ve_tools";
  virtualenv_opts="--always-copy";

  c_dependencies;
  py_dependencies;

  install -d "${destination}/bin";
  install -d "${destination}/lib";

  cd "${destination}/bin";
  find ../virtualenv/bin \
    "(" -name "caldavd" -o -name 'calendarserver_*' -o -name "python" ")" \
    -exec ln -fs "{}" .
";" \ ; for executable in \ "memcached/bin/memcached" \ "OpenLDAP/bin/ldapsearch" \ "openssl/bin/openssl" \ "PostgreSQL/bin/initdb" \ "PostgreSQL/bin/pg_ctl" \ "PostgreSQL/bin/psql" \ ; do if [ -e "../roots/${executable}" ]; then ln -s "../roots/${executable}" .; else echo "No executable: ${executable}"; fi; done; cd "${destination}/lib"; find "../roots" "(" -type f -o -type l ")" \ "(" \ -name '*.so' -o \ -name '*.so.*' -o \ -name '*.dylib' \ ")" -print0 \ | xargs -0 -I % ln -s % .; # Write out environment.sh local dst="${destination}"; cat > "${dst}/environment.sh" << __EOF__ export PATH="${dst}/bin:\${PATH}"; export C_INCLUDE_PATH="${dst}/include:\${C_INCLUDE_PATH:-}"; export LD_LIBRARY_PATH="${dst}/lib:${dst}/lib64:\${LD_LIBRARY_PATH:-}:\$ORACLE_HOME"; export CPPFLAGS="-I${dst}/include \${CPPFLAGS:-} "; export LDFLAGS="-L${dst}/lib -L${dst}/lib64 \${LDFLAGS:-} "; export DYLD_LIBRARY_PATH="${dst}/lib:${dst}/lib64:\${DYLD_LIBRARY_PATH:-}:\$ORACLE_HOME"; __EOF__ # Install CalendarServer into venv cd ${wd} ${python} setup.py install } main "$@"; calendarserver-9.1+dfsg/bin/pyflakes000077500000000000000000000016521315003562600176230ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e; set -u; wd="$(cd "$(dirname "$0")/.." && pwd -L)"; . 
"${wd}/bin/_build.sh"; do_setup="false"; develop > /dev/null; if [ $# -eq 0 ]; then set - calendarserver contrib twisted twistedcaldav txdav txweb2; fi; echo "Checking modules:" "$@"; exec "${python}" -m pyflakes "$@"; calendarserver-9.1+dfsg/bin/python000077500000000000000000000014341315003562600173240ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e set -u wd="$(cd "$(dirname "$0")/.." && pwd)"; . "${wd}/bin/_build.sh"; do_setup="false"; develop > /dev/null; exec "${python}" "$@"; calendarserver-9.1+dfsg/bin/run000077500000000000000000000145621315003562600166150ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## ## # WARNING: This script is intended for use by developers working on # the Calendar Server code base. 
It is not intended for use in a # deployment configuration. # # DO NOT use this script as a system startup tool (eg. in /etc/init.d, # /Library/StartupItems, launchd plists, etc.) # # For those uses, install the server properly (eg. with "./run -i # /tmp/foo && cd /tmp/foo && pax -pe -rvw . /") and use the caldavd # executable to start the server. ## set -e; set -u; wd="$(cd "$(dirname "$0")/.." && pwd)"; # # Usage # config="${wd}/conf/caldavd-dev.plist"; reactor=""; plugin_name="caldav"; service_type="Combined"; do_setup="true"; run="true"; daemonize="-X -L"; kill="false"; restart="false"; profile=""; usage () { program="$(basename "$0")"; if [ "${1--}" != "-" ]; then echo "$@"; echo; fi; echo "Usage: ${program} [-hvgsfnpdkrR] [-K key] [-iIb dst] [-t type] [-S statsdirectory] [-P plugin]"; echo "Options:"; echo " -h Print this help and exit"; echo " -s Run setup only; don't run server"; echo " -f Force setup to run"; echo " -n Do not run setup"; echo " -p Print PYTHONPATH value for server and exit"; echo " -e Print =-separated environment variables required to run and exit"; echo " -d Run caldavd as a daemon"; echo " -k Stop caldavd"; echo " -r Restart caldavd"; echo " -K Print value of configuration key and exit"; echo " -t Select the server process type (Master, Slave or Combined) [${service_type}]"; echo " -S Write a pstats object for each process to the given directory when the server is stopped."; echo " -P Select the twistd plugin name [${plugin_name}]"; echo " -R Twisted Reactor plugin to execute [${reactor}]"; if [ "${1-}" = "-" ]; then return 0; fi; exit 64; } # Parse command-line options to set up state which controls the behavior of the # functions in build.sh. 
parse_options () {
  OPTIND=1;

  print_path="false";

  while getopts "ahsfnpedkrc:K:i:I:b:t:S:P:R:" option; do
    case "${option}" in
      '?') usage; ;;
      'h') usage -; exit 0; ;;
      'f') force_setup="true"; do_setup="true"; ;;
      'k') kill="true"; do_setup="false"; ;;
      'r') restart="true"; do_setup="false"; ;;
      'd') daemonize=""; ;;
      'P') plugin_name="${OPTARG}"; ;;
      'c') config="${OPTARG}"; ;;
      'R') reactor="-R ${OPTARG}"; ;;
      't') service_type="${OPTARG}"; ;;
      'K') read_key="${OPTARG}"; ;;
      'S') profile="-p ${OPTARG}"; ;;
      's') do_setup="true"; run="false"; ;;
      'p') do_get="false"; do_setup="false"; run="false"; print_path="true"; ;;
      'n') do_setup="false"; force_setup="false"; ;;
    esac;
  done;
  shift $((${OPTIND} - 1));

  if [ $# != 0 ]; then
    usage "Unrecognized arguments:" "$@";
  fi;
}

# Actually run the server. (Or, exit, if things aren't sufficiently set up in
# order to do that.)
run () {
  echo "Using ${python} as Python";

  if "${run}"; then
    if [ ! -f "${config}" ]; then
      echo "";
      echo "Missing config file: ${config}";
      echo "You might want to start by copying the test configuration:";
      echo "";
      echo "  cp conf/caldavd-test.plist conf/caldavd-dev.plist";
      echo "";
      if [ -t 0 ]; then
        # Interactive shell
        echo -n "Would you like to copy the test configuration now? [y/n] ";
        read answer;
        case "${answer}" in
          y|yes|Y|YES|Yes)
            echo "Copying test configuration...";
            cp "${wd}/conf/caldavd-test.plist" "${wd}/conf/caldavd-dev.plist";
            ;;
          *)
            exit 1;
            ;;
        esac;
      else
        exit 1;
      fi;
    fi;

    cd "${wd}";

    if [ ! -d "${wd}/data" ]; then
      mkdir "${wd}/data";
    fi;

    #if [ "$(uname -s)" = "Darwin" ] && [ "$(uname -r | cut -d .
-f 1)" -ge 9 ]; then # caldavd_wrapper_command="launchctl /"; #else caldavd_wrapper_command=""; #fi; # # Do keychain unlock on OS X # if [ -z "${USE_OPENSSL-}" ]; then case "$(uname -s)" in Darwin) bin/python bin/keychain_unlock.py ;; esac; fi; echo ""; echo "Starting server..."; export PYTHON="${python}"; exec ${caldavd_wrapper_command} \ "${wd}/bin/caldavd" ${daemonize} \ -f "${config}" \ -P "${plugin_name}" \ -t "${service_type}" \ ${reactor} \ ${profile}; cd /; fi; } # The main-point of the 'run' script: parse all options, decide what to do, # then do it. run_main () { . "${wd}/bin/_build.sh"; parse_options "$@"; # If we've been asked to read a configuration key, just read it and exit. if [ -n "${read_key:-}" ]; then value="$("${wd}/bin/calendarserver_config" "${read_key}")"; IFS="="; set ${value}; echo "$2"; unset IFS; exit $?; fi; if "${print_path}"; then exec "${wd}/bin/environment" PYTHONPATH; fi; # About to do something for real; let's enumerate (and depending on options, # possibly download and/or install) the dependencies. develop; if "${kill}" || "${restart}"; then pidfile="$("${wd}/bin/calendarserver_config" "-f" "${config}" "PIDFile")"; # Split key and value on "=" and just grab the value IFS="="; set ${pidfile}; pidfile="$2"; unset IFS; if [ ! -r "${pidfile}" ]; then echo "Unreadable PID file: ${pidfile}"; exit 1 fi; pid="$(cat "${pidfile}" | head -1)"; if [ -z "${pid}" ]; then echo "No PID in PID file: ${pidfile}"; exit 1; fi; echo "Killing process ${pid}"; kill -TERM "${pid}"; if ! "${restart}"; then exit 0; fi; fi; run; } run_main "$@"; calendarserver-9.1+dfsg/bin/sim000077500000000000000000000015011315003562600165660ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e; set -u; wd="$(cd "$(dirname "$0")/.." && pwd -L)"; . "${wd}/bin/_build.sh"; do_setup="false"; develop > /dev/null; exec "${python}" "${wd}/contrib/performance/sim" "$@"; calendarserver-9.1+dfsg/bin/test000077500000000000000000000126531315003562600167670ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e; set -u; # # Initialize build support # wd="$(cd "$(dirname "$0")/.." && pwd -L)"; . 
"${wd}/bin/_build.sh"; init_build > /dev/null; # # Options # do_setup="false"; do_get="false"; viewlog="false"; random="--random=$(date "+%s")"; no_color=""; until_fail=""; coverage=""; numjobs=""; reactor=""; if [ "$(uname -s)" = "Darwin" ]; then reactor="--reactor=kqueue"; fi; usage () { program="$(basename "$0")"; if [ "${1--}" != "-" ]; then echo "$@"; echo; fi; echo "Usage: ${program} [options]"; echo "Options:"; echo " -h Print this help and exit"; echo " -n Do not use color"; echo " -o Do not run tests in random order."; echo " -r Use specified seed to determine order."; echo " -u Run until the tests fail."; echo " -c Generate coverage reports."; echo " -L Show logging output in a separate view."; if [ "${1-}" = "-" ]; then return 0; fi; exit 64; } while getopts "hor:nucj:L" option; do case "${option}" in '?') usage; ;; 'h') usage -; exit 0; ;; 'o') random=""; ;; 'r') random="--random=$OPTARG"; ;; 'n') no_color="--reporter=bwverbose"; ;; 'u') until_fail="--until-failure"; ;; 'c') coverage="--coverage"; ;; 'j') numjobs="-j $OPTARG"; ;; 'L') viewlog="true"; esac; done; shift $((${OPTIND} - 1)); # # Dependencies # develop > /dev/null "${dev_home}/setup.log"; echo "Using ${python} as Python:"; "${python}" -V; echo ""; # # Clean up # find "${wd}" -name \*.pyc -print0 | xargs -0 rm; # # Do keychain unlock on OS X # if [ -z "${USE_OPENSSL-}" ]; then case "$(uname -s)" in Darwin) bin/python bin/keychain_unlock.py ;; esac; fi; # # Unit tests # trial_temp="${dev_home}/trial"; if "${viewlog}"; then if [ "${TERM_PROGRAM}" = "Apple_Terminal" ]; then if [ ! 
-d "${trial_temp}" ]; then mkdir "${trial_temp}"; fi; touch "${trial_temp}/_trial_marker"; cp /dev/null "${trial_temp}/test.log"; open -a Console "${trial_temp}/test.log"; else echo "Unable to view log."; fi; fi; error=0; if [ $# -eq 0 ]; then lint="true"; # Test these modules with a single invocation of trial set - calendarserver twistedcaldav txdav txweb2 contrib cd "${wd}" && \ "${wd}/bin/trial" \ --temp-directory="${dev_home}/trial" \ --rterrors \ ${reactor} \ ${random} \ ${until_fail} \ ${no_color} \ ${coverage} \ ${numjobs} \ "$@" \ || error=$?; else lint="false"; # Loop over the passed-in modules for module in "$@"; do if [ -f "${module}" ]; then module="--testmodule=${module}"; fi; cd "${wd}" && \ "${wd}/bin/trial" \ --temp-directory="${dev_home}/trial" \ --rterrors \ ${reactor} \ ${random} \ ${until_fail} \ ${no_color} \ ${coverage} \ ${numjobs} \ "${module}" \ || error=$? ; done; fi; if [ ${error} -ne 0 ]; then exit ${error}; fi; # # Linting # if ! "${lint}"; then exit 0; fi; echo ""; echo "Running pyflakes..."; "${python}" -m pip install pyflakes --upgrade >> "${dev_home}/setup.log"; tmp="$(mktemp -t "twext_flakes.XXXXX")"; cd "${wd}" && "${python}" -m pyflakes "$@" | tee "${tmp}" 2>&1; if [ -s "${tmp}" ]; then echo "**** Pyflakes says you have some code to clean up. ****"; exit 1; fi; rm -f "${tmp}"; # # Empty files # echo ""; echo "Checking for empty files..."; tmp="$(mktemp -t "twext_test_empty.XXXXX")"; find "${wd}" \ '!' '(' \ -type d \ '(' -path '*/.*' -or -name data -or -name build ')' \ -prune \ ')' \ -type f -size 0 \ > "${tmp}"; if [ -s "${tmp}" ]; then echo "**** Empty files: ****"; cat "${tmp}"; exit 1; fi; rm -f "${tmp}"; calendarserver-9.1+dfsg/bin/testpods000077500000000000000000000135041315003562600176510ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e; set -u; wd="$(cd "$(dirname "$0")/.." && pwd -L)"; . "${wd}/bin/_build.sh"; init_build > /dev/null; cdt="${py_virtualenv}/src/caldavtester"; ## # Command line handling ## verbose=""; serverinfo="${cdt}/scripts/server/serverinfo-pod.xml"; printres=""; subdir=""; random="--random"; seed=""; ssl="--ssl"; cdtdebug=""; usage () { program="$(basename "$0")"; echo "Usage: ${program} [-v] [-s serverinfo]"; echo "Options:"; echo " -d Set the script subdirectory"; echo " -h Print this help and exit"; echo " -o Execute tests in order"; echo " -r Print request and response"; echo " -s Set the serverinfo.xml"; echo " -t Set the CalDAVTester directory"; echo " -x Random seed to use."; echo " -v Verbose."; echo " -z Do not use SSL."; echo " -D Turn on CalDAVTester debugging"; if [ "${1-}" = "-" ]; then return 0; fi; exit 64; } while getopts 'Dhvrozt:s:d:x:' option; do case "$option" in '?') usage; ;; 'h') usage -; exit 0; ;; 't') cdt="${OPTARG}"; serverinfo="${OPTARG}/scripts/server/serverinfo-pod.xml"; ;; 'd') subdir="--subdir=${OPTARG}"; ;; 's') serverinfo="${OPTARG}"; ;; 'r') printres="--always-print-request --always-print-response"; ;; 'v') verbose="v"; ;; 'o') random=""; ;; 'x') seed="--random-seed ${OPTARG}"; ;; 'z') ssl=""; ;; 'D') cdtdebug="--debug"; ;; esac; done; shift $((${OPTIND} - 1)); if [ $# = 0 ]; then set - "--all"; fi; ## # Do The Right Thing ## do_setup="false"; develop > /dev/null; # Set up sandbox 
sandboxdir="/tmp/cdt_server_sandbox💣" sandboxdir_u="/tmp/cdt_server_sandbox\ud83d\udca3" if [ -d "${sandboxdir}" ]; then rm -rf "${sandboxdir}" fi; configdir="${sandboxdir}/Config" serverrootA="${sandboxdir}/podA" serverrootB="${sandboxdir}/podB" configdir_u="${sandboxdir_u}/Config" serverrootA_u="${sandboxdir_u}/podA" serverrootB_u="${sandboxdir_u}/podB" mkdir -p "${configdir}/auth" mkdir -p "${serverrootA}/Logs" "${serverrootA}/Run" "${serverrootA}/Data/Documents" mkdir -p "${serverrootB}/Logs" "${serverrootB}/Run" "${serverrootB}/Data/Documents" cp conf/caldavd-test.plist "${configdir}/caldavd-cdt.plist" cp conf/caldavd-test-podA.plist "${configdir}/caldavd-cdt-podA.plist" cp conf/caldavd-test-podB.plist "${configdir}/caldavd-cdt-podB.plist" cp conf/auth/proxies-test-pod.xml "${configdir}/auth/proxies-cdt.xml" cp conf/auth/resources-test-pod.xml "${configdir}/auth/resources-cdt.xml" cp conf/auth/augments-test-pod.xml "${configdir}/auth/augments-cdt.xml" cp conf/auth/accounts-test-pod.xml "${configdir}/auth/accounts-cdt.xml" # Modify the plists python -c "import plistlib; f=plistlib.readPlist('${configdir}/caldavd-cdt.plist'); f['ConfigRoot'] = u'${configdir_u}'; f['RunRoot'] = 'Run'; f['Authentication']['Kerberos']['Enabled'] = False; plistlib.writePlist(f, '${configdir}/caldavd-cdt.plist');" python -c "import plistlib; f=plistlib.readPlist('${configdir}/caldavd-cdt-podA.plist'); f['ImportConfig'] = u'${configdir_u}/caldavd-cdt.plist'; f['ServerRoot'] = u'${serverrootA_u}'; f['ConfigRoot'] = u'${configdir_u}'; f['ProxyLoadFromFile'] = u'${configdir_u}/auth/proxies-cdt.xml'; f['ResourceService']['params']['xmlFile'] = u'${configdir_u}/auth/resources-cdt.xml'; f['DirectoryService']['params']['xmlFile'] = u'${configdir_u}/auth/accounts-cdt.xml'; f['AugmentService']['params']['xmlFiles'] = [u'${configdir_u}/auth/augments-cdt.xml']; plistlib.writePlist(f, '${configdir}/caldavd-cdt-podA.plist');" python -c "import plistlib; 
f=plistlib.readPlist('${configdir}/caldavd-cdt-podB.plist'); f['ImportConfig'] = u'${configdir_u}/caldavd-cdt.plist'; f['ServerRoot'] = u'${serverrootB_u}'; f['ConfigRoot'] = u'${configdir_u}'; f['ProxyLoadFromFile'] = u'${configdir_u}/auth/proxies-cdt.xml'; f['ResourceService']['params']['xmlFile'] = u'${configdir_u}/auth/resources-cdt.xml'; f['DirectoryService']['params']['xmlFile'] = u'${configdir_u}/auth/accounts-cdt.xml'; f['AugmentService']['params']['xmlFiles'] = [u'${configdir_u}/auth/augments-cdt.xml']; plistlib.writePlist(f, '${configdir}/caldavd-cdt-podB.plist');" runpod() { local podsuffix="$1"; shift; # Start the server "${wd}/bin/run" -nd -c "${configdir}/caldavd-cdt-${podsuffix}.plist" /bin/echo -n "Waiting for server ${podsuffix} to start up..." while [ ! -f "${sandboxdir}/${podsuffix}/Run/caldav-instance-0.pid" ]; do sleep 1 /bin/echo -n "." done; echo "Server ${podsuffix} has started" } stoppod() { local podsuffix="$1"; shift; echo "Stopping server ${podsuffix}" "${wd}/bin/run" -nk -c "${configdir}/caldavd-cdt-${podsuffix}.plist" } runpod "podA"; runpod "podB"; # Don't exit if testcaldav.py fails, because we need to clean up afterwards. set +e # Run CDT echo "" echo "Starting CDT run" cd "${cdt}" && "${python}" testcaldav.py ${random} ${seed} ${ssl} ${cdtdebug} --print-details-onfail ${printres} -s "${serverinfo}" -x scripts/tests-pod ${subdir} "$@"; # Capture exit status of testcaldav.py to use as this script's exit status. STATUS=$? # Re-enable exit on failure incase run -nk fails set -e stoppod "podA"; stoppod "podB"; # Exit with the exit status of testcaldav.py, to reflect the test suite's result exit $STATUS calendarserver-9.1+dfsg/bin/testpodssim000077500000000000000000000131501315003562600203570ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e; set -u; wd="$(cd "$(dirname "$0")/.." && pwd -L)"; . "${wd}/bin/_build.sh"; init_build > /dev/null; sim="${wd}/contrib/performance/loadtest/sim.py"; ## # Command line handling ## verbose=""; config="${wd}/contrib/performance/loadtest/standard-configs/accelerated-activity-config.plist"; clients="${wd}/contrib/performance/loadtest/standard-configs/accelerated-activity-clients.plist"; runtime="--runtime 300"; logfile=""; usage () { program="$(basename "$0")"; echo "Usage: ${program} [-r RUNTIME] [-l LOGFILE]"; echo "Options:"; echo " -r Set the runtime"; echo " -l Log file"; if [ "${1-}" = "-" ]; then return 0; fi; exit 64; } while getopts 'hl:r:' option; do case "$option" in '?') usage; ;; 'h') usage -; exit 0; ;; 'r') runtime="--runtime=${OPTARG}"; ;; 'l') logfile="--logfile=${OPTARG}"; ;; esac; done; shift $((${OPTIND} - 1)); ## # Do The Right Thing ## do_setup="false"; develop > /dev/null; # Set up sandbox sandboxdir="/tmp/sim_server_sandbox💣" sandboxdir_u="/tmp/sim_server_sandbox\ud83d\udca3" if [ -d "${sandboxdir}" ]; then rm -rf "${sandboxdir}" fi; configdir="${sandboxdir}/Config" serverrootA="${sandboxdir}/podA" serverrootB="${sandboxdir}/podB" configdir_u="${sandboxdir_u}/Config" serverrootA_u="${sandboxdir_u}/podA" serverrootB_u="${sandboxdir_u}/podB" mkdir -p "${configdir}/auth" mkdir -p "${serverrootA}/Logs" "${serverrootA}/Run" "${serverrootA}/Data/Documents" mkdir -p "${serverrootB}/Logs" "${serverrootB}/Run" 
"${serverrootB}/Data/Documents" cp conf/caldavd-test.plist "${configdir}/caldavd-sim.plist" cp conf/caldavd-test-podA.plist "${configdir}/caldavd-sim-podA.plist" cp conf/caldavd-test-podB.plist "${configdir}/caldavd-sim-podB.plist" cp conf/auth/proxies-test-pod.xml "${configdir}/auth/proxies-sim.xml" cp conf/auth/resources-test-pod.xml "${configdir}/auth/resources-sim.xml" cp conf/auth/augments-test-pod.xml "${configdir}/auth/augments-sim.xml" cp conf/auth/accounts-test-pod.xml "${configdir}/auth/accounts-sim.xml" # Modify the plists python -c "import plistlib; f=plistlib.readPlist('${configdir}/caldavd-sim.plist'); f['ConfigRoot'] = u'${configdir_u}'; f['RunRoot'] = 'Run'; f['Authentication']['Kerberos']['Enabled'] = False; plistlib.writePlist(f, '${configdir}/caldavd-sim.plist');" python -c "import plistlib; f=plistlib.readPlist('${configdir}/caldavd-sim-podA.plist'); f['ImportConfig'] = u'${configdir_u}/caldavd-sim.plist'; f['ServerRoot'] = u'${serverrootA_u}'; f['ConfigRoot'] = u'${configdir_u}'; f['ProxyLoadFromFile'] = u'${configdir_u}/auth/proxies-sim.xml'; f['ResourceService']['params']['xmlFile'] = u'${configdir_u}/auth/resources-sim.xml'; f['DirectoryService']['params']['xmlFile'] = u'${configdir_u}/auth/accounts-sim.xml'; f['AugmentService']['params']['xmlFiles'] = [u'${configdir_u}/auth/augments-sim.xml']; plistlib.writePlist(f, '${configdir}/caldavd-sim-podA.plist');" python -c "import plistlib; f=plistlib.readPlist('${configdir}/caldavd-sim-podB.plist'); f['ImportConfig'] = u'${configdir_u}/caldavd-sim.plist'; f['ServerRoot'] = u'${serverrootB_u}'; f['ConfigRoot'] = u'${configdir_u}'; f['ProxyLoadFromFile'] = u'${configdir_u}/auth/proxies-sim.xml'; f['ResourceService']['params']['xmlFile'] = u'${configdir_u}/auth/resources-sim.xml'; f['DirectoryService']['params']['xmlFile'] = u'${configdir_u}/auth/accounts-sim.xml'; f['AugmentService']['params']['xmlFiles'] = [u'${configdir_u}/auth/augments-sim.xml']; plistlib.writePlist(f, 
'${configdir}/caldavd-sim-podB.plist');" # Modify config to update ports and other bits cp "${config}" "${configdir}/sim-config.plist" config="${configdir}/sim-config.plist" python -c "import plistlib; f=plistlib.readPlist('${configdir}/sim-config.plist'); f['servers']['PodB']['enabled'] = True; f['clientDataSerialization']['UseOldData'] = False; plistlib.writePlist(f, '${configdir}/sim-config.plist');" # Modify clients to update ports and other bits cp "${clients}" "${configdir}/sim-clients.plist" clients="${configdir}/sim-clients.plist" runpod() { local podsuffix="$1"; shift; # Start the server "${wd}/bin/run" -nd -c "${configdir}/caldavd-sim-${podsuffix}.plist" /bin/echo -n "Waiting for server ${podsuffix} to start up..." while [ ! -f "${sandboxdir}/${podsuffix}/Run/caldav-instance-0.pid" ]; do sleep 1 /bin/echo -n "." done; echo "Server ${podsuffix} has started" } stoppod() { local podsuffix="$1"; shift; echo "Stopping server ${podsuffix}" "${wd}/bin/run" -nk -c "${configdir}/caldavd-sim-${podsuffix}.plist" } runpod "podA"; runpod "podB"; # Don't exit if sim.py fails, because we need to clean up afterwards. set +e # Run Sim echo "" echo "Starting Sim run" "${python}" "${sim}" --config "${config}" --clients "${clients}" "${runtime}" "${logfile}" # Capture exit status of sim.py to use as this script's exit status. STATUS=$? # Re-enable exit on failure incase run -nk fails set -e stoppod "podA"; stoppod "podB"; # Exit with the exit status of sim.py, to reflect the test suite's result exit $STATUS calendarserver-9.1+dfsg/bin/testserver000077500000000000000000000114241315003562600202110ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e; set -u; wd="$(cd "$(dirname "$0")/.." && pwd -L)"; . "${wd}/bin/_build.sh"; init_build > /dev/null; cdt="${py_virtualenv}/src/caldavtester"; ## # Command line handling ## verbose=""; serverinfo="${cdt}/scripts/server/serverinfo.xml"; pretests=""; printres=""; subdir=""; random="--random"; seed=""; ssl=""; cdtdebug=""; usage () { program="$(basename "$0")"; echo "Usage: ${program} [-v] [-s serverinfo]"; echo "Options:"; echo " -d Set the script subdirectory"; echo " -h Print this help and exit"; echo " -o Execute tests in order"; echo " -p Run pretests"; echo " -r Print request and response"; echo " -s Set the serverinfo.xml"; echo " -t Set the CalDAVTester directory"; echo " -x Random seed to use."; echo " -v Verbose."; echo " -z Use SSL."; echo " -D Turn on CalDAVTester debugging"; if [ "${1-}" = "-" ]; then return 0; fi; exit 64; } while getopts 'Dhvprozt:s:d:x:' option; do case "$option" in '?') usage; ;; 'h') usage -; exit 0; ;; 't') cdt="${OPTARG}"; serverinfo="${OPTARG}/scripts/server/serverinfo.xml"; ;; 'd') subdir="--subdir=${OPTARG}"; ;; 'p') pretests="--pretest CalDAV/pretest.xml --posttest CalDAV/pretest.xml"; ;; 's') serverinfo="${OPTARG}"; ;; 'r') printres="--always-print-request --always-print-response"; ;; 'v') verbose="v"; ;; 'o') random=""; ;; 'x') seed="--random-seed ${OPTARG}"; ;; 'z') ssl="--ssl"; ;; 'D') cdtdebug="--debug"; ;; esac; done; shift $((${OPTIND} - 1)); if [ $# = 0 ]; then set - "--all"; fi; ## # Do The Right Thing ## do_setup="false"; develop > /dev/null; # Set up sandbox sandboxdir="/tmp/cdt_server_sandbox💣" 
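The sandbox path set just above deliberately contains a non-ASCII character, and the `*_u` variables that follow carry the same path with the emoji spelled as a UTF-16 surrogate-pair escape (`\ud83d\udca3`) so it can be embedded safely in the `python -c` plist-editing one-liners. A minimal illustrative sketch (not part of the script; shown with Python 3 string semantics) of why the two spellings name the same directory:

```python
# Illustrative only: decode the surrogate-pair escape used by the *_u
# variables and check that it matches the literal emoji in the real path.
escaped = "\ud83d\udca3"  # the form embedded in the plist-editing one-liners

# Re-joining the two UTF-16 code units yields the single code point U+1F4A3.
decoded = escaped.encode("utf-16", "surrogatepass").decode("utf-16")

print(decoded == "\U0001f4a3")  # prints True: same directory name as 💣
```

The scripts rely on this equivalence: the plists are written with the escaped form, and the server resolves them to the emoji directory created with `mkdir -p` below.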
sandboxdir_u="/tmp/cdt_server_sandbox\ud83d\udca3" if [ -d "${sandboxdir}" ]; then rm -rf "${sandboxdir}" fi; configdir="${sandboxdir}/Config" datadir="${sandboxdir}/Data" configdir_u="${sandboxdir_u}/Config" datadir_u="${sandboxdir_u}/Data" mkdir -p "${sandboxdir}/Config" "${sandboxdir}/Logs" "${sandboxdir}/Run" "${datadir}/Documents" cp conf/caldavd-test.plist "${configdir}/caldavd-cdt.plist" cp conf/auth/proxies-test.xml "${datadir}/proxies-cdt.xml" cp conf/auth/resources-test.xml "${datadir}/resources-cdt.xml" cp conf/auth/augments-test.xml "${datadir}/augments-cdt.xml" cp conf/auth/accounts-test.xml "${datadir}/accounts-cdt.xml" # Modify the plist python -c "import plistlib; f=plistlib.readPlist('${configdir}/caldavd-cdt.plist'); f['HTTPPort'] = 18008; f['BindHTTPPorts'] = [18008]; f['SSLPort'] = 18443; f['BindSSLPorts'] = [18443]; f['Notifications']['Services']['AMP']['Port'] = 62312; f['ServerRoot'] = u'${sandboxdir_u}'; f['ConfigRoot'] = 'Config'; f['RunRoot'] = 'Run'; f['ProxyLoadFromFile'] = u'${datadir_u}/proxies-cdt.xml'; f['ResourceService']['params']['xmlFile'] = u'${datadir_u}/resources-cdt.xml'; f['DirectoryService']['params']['xmlFile'] = u'${datadir_u}/accounts-cdt.xml'; f['AugmentService']['params']['xmlFiles'] = [u'${datadir_u}/augments-cdt.xml']; f['Authentication']['Kerberos']['Enabled'] = False; plistlib.writePlist(f, '${configdir}/caldavd-cdt.plist');" # Modify serverinfo to update ports sed "s/8008/18008/g;s/8443/18443/g" "${serverinfo}" > "${configdir}/serverinfo-cdt.xml" serverinfo="${configdir}/serverinfo-cdt.xml" # Start the server "${wd}/bin/run" -nd -c "${configdir}/caldavd-cdt.plist" /bin/echo -n "Waiting for server to start up..." while [ ! -f "${sandboxdir}/Run/caldav-instance-0.pid" ]; do sleep 1 /bin/echo -n "." done; echo "Server has started" # Don't exit if testcaldav.py fails, because we need to clean up afterwards. 
set +e

# Run CDT
echo "Starting CDT run"
cd "${cdt}" && "${python}" testcaldav.py ${random} ${seed} ${ssl} ${cdtdebug} ${pretests} --print-details-onfail ${printres} -s "${serverinfo}" ${subdir} "$@";

# Capture exit status of testcaldav.py to use as this script's exit status.
STATUS=$?

# Re-enable exit on failure in case run -nk fails
set -e

echo "Stopping server"
"${wd}/bin/run" -nk -c "${configdir}/caldavd-cdt.plist"

# Exit with the exit status of testcaldav.py, to reflect the test suite's result
exit $STATUS

calendarserver-9.1+dfsg/bin/testsim

#!/bin/sh
# -*- sh-basic-offset: 2 -*-

##
# Copyright (c) 2005-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

set -e;
set -u;

wd="$(cd "$(dirname "$0")/.." && pwd -L)";

.
"${wd}/bin/_build.sh"; init_build > /dev/null; sim="${wd}/contrib/performance/loadtest/sim.py"; ## # Command line handling ## verbose=""; config="${wd}/contrib/performance/loadtest/standard-configs/accelerated-activity-config.plist"; clients="${wd}/contrib/performance/loadtest/standard-configs/accelerated-activity-clients.plist"; runtime="--runtime 300"; logfile=""; usage () { program="$(basename "$0")"; echo "Usage: ${program} [-r RUNTIME] [-l LOGFILE]"; echo "Options:"; echo " -r Set the runtime"; echo " -l Log file"; if [ "${1-}" = "-" ]; then return 0; fi; exit 64; } while getopts 'hl:r:' option; do case "$option" in '?') usage; ;; 'h') usage -; exit 0; ;; 'r') runtime="--runtime=${OPTARG}"; ;; 'l') logfile="--logfile=${OPTARG}"; ;; esac; done; shift $((${OPTIND} - 1)); ## # Do The Right Thing ## do_setup="false"; develop > /dev/null; # Set up sandbox sandboxdir="/tmp/sim_server_sandbox💣" sandboxdir_u="/tmp/sim_server_sandbox\ud83d\udca3" if [ -d "${sandboxdir}" ]; then rm -rf "${sandboxdir}" fi; configdir="${sandboxdir}/Config" datadir="${sandboxdir}/Data" configdir_u="${sandboxdir_u}/Config" datadir_u="${sandboxdir_u}/Data" mkdir -p "${sandboxdir}/Config" "${sandboxdir}/Logs" "${sandboxdir}/Run" "${datadir}/Documents" cp conf/caldavd-test.plist "${configdir}/caldavd-sim.plist" cp conf/auth/proxies-test.xml "${datadir}/proxies-sim.xml" cp conf/auth/resources-test.xml "${datadir}/resources-sim.xml" cp conf/auth/augments-test.xml "${datadir}/augments-sim.xml" cp conf/auth/accounts-test.xml "${datadir}/accounts-sim.xml" # Modify the plist python -c "import plistlib; f=plistlib.readPlist('${configdir}/caldavd-sim.plist'); f['HTTPPort'] = 18008; f['BindHTTPPorts'] = [18008]; f['SSLPort'] = 18443; f['BindSSLPorts'] = [18443]; f['Notifications']['Services']['AMP']['Port'] = 62312; f['ServerRoot'] = u'${sandboxdir_u}'; f['ConfigRoot'] = 'Config'; f['RunRoot'] = 'Run'; f['ProxyLoadFromFile'] = u'${datadir_u}/proxies-sim.xml'; f['ResourceService']['params']['xmlFile'] = 
u'${datadir_u}/resources-sim.xml'; f['DirectoryService']['params']['xmlFile'] = u'${datadir_u}/accounts-sim.xml'; f['AugmentService']['params']['xmlFiles'] = [u'${datadir_u}/augments-sim.xml']; f['Authentication']['Kerberos']['Enabled'] = False; plistlib.writePlist(f, '${configdir}/caldavd-sim.plist');" # Modify config to update ports and other bits cp "${config}" "${configdir}/sim-config.plist" config="${configdir}/sim-config.plist" python -c "import plistlib; f=plistlib.readPlist('${configdir}/sim-config.plist'); f['servers']['PodA']['uri'] = f['servers']['PodA']['uri'].rsplit(':', 1)[0] + ':18443'; f['clientDataSerialization']['UseOldData'] = False; f['servers']['PodA']['ampPushPort'] = 62312; plistlib.writePlist(f, '${configdir}/sim-config.plist');" # Modify clients to update ports and other bits cp "${clients}" "${configdir}/sim-clients.plist" clients="${configdir}/sim-clients.plist" # Start the server "${wd}/bin/run" -nd -c "${configdir}/caldavd-sim.plist" /bin/echo -n "Waiting for server to start up..." while [ ! -f "${sandboxdir}/Run/caldav-instance-0.pid" ]; do sleep 1 /bin/echo -n "." done; echo "Server has started" # Don't exit if sim.py fails, because we need to clean up afterwards. set +e # Run Sim echo "Starting Sim run" "${python}" "${sim}" --config "${config}" --clients "${clients}" "${runtime}" "${logfile}" # Capture exit status of sim.py to use as this script's exit status. STATUS=$? # Re-enable exit on failure incase run -nk fails set -e echo "Stopping server" "${wd}/bin/run" -nk -c "${configdir}/caldavd-sim.plist" # Exit with the exit status of sim.py, to reflect the sim's result exit $STATUS calendarserver-9.1+dfsg/bin/testsqlusage000077500000000000000000000067761315003562600205450ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e; set -u; wd="$(cd "$(dirname "$0")/.." && pwd -L)"; . "${wd}/bin/_build.sh"; init_build > /dev/null; sqlusage="${wd}/contrib/performance/sqlusage/sqlusage.py"; ## # Command line handling ## usage () { program="$(basename "$0")"; echo "Usage: ${program}"; echo "Options:"; if [ "${1-}" = "-" ]; then return 0; fi; exit 64; } while getopts 'h' option; do case "$option" in '?') usage; ;; 'h') usage -; exit 0; ;; esac; done; shift $((${OPTIND} - 1)); ## # Do The Right Thing ## do_setup="false"; develop > /dev/null; # Set up sandbox sandboxdir="/tmp/sqlusage_sandbox💣" sandboxdir_u="/tmp/sqlusage_sandbox\ud83d\udca3" if [ -d "${sandboxdir}" ]; then rm -rf "${sandboxdir}" fi; configdir="${sandboxdir}/Config" datadir="${sandboxdir}/Data" configdir_u="${sandboxdir_u}/Config" datadir_u="${sandboxdir_u}/Data" mkdir -p "${sandboxdir}/Config" "${sandboxdir}/Logs" "${sandboxdir}/Run" "${datadir}/Documents" cp conf/caldavd-test.plist "${configdir}/caldavd-sqlusage.plist" cp conf/auth/proxies-test.xml "${datadir}/proxies-sqlusage.xml" cp conf/auth/resources-test.xml "${datadir}/resources-sqlusage.xml" cp conf/auth/augments-test.xml "${datadir}/augments-sqlusage.xml" cp conf/auth/accounts-test.xml "${datadir}/accounts-sqlusage.xml" # Modify the plist python -c "import plistlib; f=plistlib.readPlist('${configdir}/caldavd-sqlusage.plist'); f['HTTPPort'] = 18008; f['BindHTTPPorts'] = [18008]; f['SSLPort'] = 18443; f['BindSSLPorts'] = [18443]; f['Notifications']['Services']['AMP']['Port'] = 62312; f['ServerRoot'] = u'${sandboxdir_u}'; f['ConfigRoot'] = 'Config'; f['RunRoot'] = 
'Run'; f['ProxyLoadFromFile'] = u'${datadir_u}/proxies-sqlusage.xml'; f['ResourceService']['params']['xmlFile'] = u'${datadir_u}/resources-sqlusage.xml'; f['DirectoryService']['params']['xmlFile'] = u'${datadir_u}/accounts-sqlusage.xml'; f['AugmentService']['params']['xmlFiles'] = [u'${datadir_u}/augments-sqlusage.xml']; f['Authentication']['Kerberos']['Enabled'] = False; f['LogDatabase'] = {}; f['LogDatabase']['Statistics'] = True; plistlib.writePlist(f, '${configdir}/caldavd-sqlusage.plist');" # Start the server "${wd}/bin/run" -nd -c "${configdir}/caldavd-sqlusage.plist" /bin/echo -n "Waiting for server to start up..." while [ ! -f "${sandboxdir}/Run/caldav-instance-0.pid" ]; do sleep 1 /bin/echo -n "." done; echo "Server has started" # Don't exit if sqlusage.py fails, because we need to clean up afterwards. set +e # Run sqlusage echo "Starting sqlusage run" "${python}" "${sqlusage}" --port 18008 --event --share "${sandboxdir}/Logs/sqlstats.log" # Capture exit status of sqlusage.py to use as this script's exit status. STATUS=$? # Re-enable exit on failure incase run -nk fails set -e echo "Stopping server" "${wd}/bin/run" -nk -c "${configdir}/caldavd-sqlusage.plist" # Exit with the exit status of sqlusage.py, to reflect the sqlusage result exit $STATUS calendarserver-9.1+dfsg/bin/trial000077500000000000000000000016471315003562600171240ustar00rootroot00000000000000#!/bin/sh # -*- sh-basic-offset: 2 -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.
##

set -e
set -u

wd="$(cd "$(dirname "$0")/.." && pwd)";

. "${wd}/bin/_build.sh";

do_setup="false";
develop > /dev/null;

exec "${python}" \
  -c "\
import sys;\
import os;\
sys.path.insert(0, os.path.abspath(os.getcwd()));\
from twisted.scripts.trial import run;\
run()\
" \
  "$@";

calendarserver-9.1+dfsg/bin/twistd

#!/bin/sh
# -*- sh-basic-offset: 2 -*-

##
# Copyright (c) 2005-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

set -e
set -u

wd="$(cd "$(dirname "$0")/.." && pwd)";

. "${wd}/bin/_build.sh";

do_setup="false";
develop > /dev/null;

exec "${python}" \
  -c "\
import sys;\
import os;\
sys.path.insert(0, os.path.abspath(os.getcwd()));\
from twisted.scripts.twistd import run;\
run()\
" \
  "$@";

calendarserver-9.1+dfsg/bin/update_copyrights

#!/bin/sh
# -*- sh-basic-offset: 2 -*-

##
# Copyright (c) 2013-2016 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

set -e;
set -u;

find_files () {
  where="$1"; shift;

  find "${where}"             \
    ! \(                      \
      -type d                 \
      \(                      \
        -name .git -o         \
        -name build -o        \
        -name data -o         \
        -name '_trial_temp*'  \
      \)                      \
      -prune                  \
    \)                        \
    -type f                   \
    ! -name '.#*'             \
    ! -name '#*#'             \
    ! -name '*~'              \
    ! -name '*.pyc'           \
    ! -name '*.log'           \
    ! -name update_copyrights \
    -print0;
}

wd="$(pwd -P)";

this_year="$(date "+%Y")";
last_year=$((${this_year} - 1));

tmp="$(mktemp -t "$$")";

find_files "${wd}" > "${tmp}";

ff () { cat "${tmp}"; }

echo "Updating copyrights from ${last_year} to ${this_year}...";

ff | xargs -0 perl -i -pe 's|(Copyright \(c\) .*-)'"${last_year}"'( Apple)|${1}'"${this_year}"'${2}|';
ff | xargs -0 perl -i -pe 's|(Copyright \(c\) )'"${last_year}"'( Apple)|${1}'"${last_year}-${this_year}"'${2}|';

ff | xargs -0 grep -e 'Copyright (c) .* Apple' \
   | grep -v -e 'Copyright (c) .*'"${this_year}"' Apple' \
   ;

rm "${tmp}";

calendarserver-9.1+dfsg/bin/watch_memcached

#!/bin/sh
# -*- sh-basic-offset: 2 -*-

set -e
set -u

if [ $# -gt 0 ]; then
  interval="$1"; shift;
else
  interval="5";
fi;

watch () {
  while :; do
    echo stats;
    sleep "${interval}";
    echo "----------------------------------------" >&2;
  done | nc localhost 11211 2>&1;
}

if !
watch; then
  echo "Error contacting memcached for stats.";
  exit 1;
fi;

calendarserver-9.1+dfsg/calendarserver/

calendarserver-9.1+dfsg/calendarserver/__init__.py

# -*- test-case-name: calendarserver -*-

##
# Copyright (c) 2005-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
CalendarServer application code.
"""

#
# setuptools is annoying
#

from warnings import filterwarnings
filterwarnings("ignore", "Module (.*) was already imported (.*)")
del filterwarnings

#
# Set __version__
#

try:
    from calendarserver.version import version as __version__
except ImportError:
    __version__ = None

#
# Get imap4 module to STFU
#

# from twisted.mail.imap4 import Command
# Command._1_RESPONSES += tuple(['BYE'])

calendarserver-9.1+dfsg/calendarserver/accesslog.py

##
# Copyright (c) 2006-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Access logs. """ __all__ = [ "DirectoryLogWrapperResource", "RotatingFileAccessLoggingObserver", "AMPCommonAccessLoggingObserver", "AMPLoggingFactory", ] import collections import datetime import json import os try: import psutil except ImportError: psutil = None from sys import platform import time from calendarserver.logAnalysis import getAdjustedMethodName from twext.python.log import Logger from twext.who.idirectory import RecordType from txweb2 import iweb from txweb2.log import BaseCommonAccessLoggingObserver from txweb2.log import LogWrapperResource from twisted.internet import protocol, task from twisted.protocols import amp from twistedcaldav.config import config log = Logger() class DirectoryLogWrapperResource(LogWrapperResource): def __init__(self, resource, directory): super(DirectoryLogWrapperResource, self).__init__(resource) self.directory = directory def getDirectory(self): return self.directory class CommonAccessLoggingObserverExtensions(BaseCommonAccessLoggingObserver): """ A base class for our extension to the L{BaseCommonAccessLoggingObserver} """ def emit(self, eventDict): format = None formatArgs = None if eventDict.get("interface") is iweb.IRequest: request = eventDict["request"] response = eventDict["response"] loginfo = eventDict["loginfo"] # Try to determine authentication and authorization identifiers uid = "-" if getattr(request, "authnUser", None) is not None: def convertPrincipaltoShortName(principal): if principal.record.recordType == RecordType.user: return principal.record.shortNames[0] else: return 
"({rtype}){name}".format(rtype=principal.record.recordType, name=principal.record.shortNames[0],) uidn = convertPrincipaltoShortName(request.authnUser) uidz = convertPrincipaltoShortName(request.authzUser) if uidn != uidz: uid = '"{authn} as {authz}"'.format(authn=uidn, authz=uidz,) else: uid = uidn # # For some methods which basically allow you to tunnel a # custom request (eg. REPORT, POST), the method name # itself doesn't tell you much about what action is being # requested. This allows a method to tack a submethod # attribute to the request, so we can provide a little # more detail here. # if config.EnableExtendedAccessLog and hasattr(request, "submethod"): method = "%s(%s)" % (request.method, request.submethod) else: method = request.method # Standard Apache access log fields format = ( '%(host)s - %(uid)s [%(date)s]' ' "%(method)s %(uri)s HTTP/%(protocolVersion)s"' ' %(statusCode)s %(bytesSent)d' ' "%(referer)s" "%(userAgent)s"' ) formatArgs = { "host": request.remoteAddr.host, "uid": uid, "date": self.logDateString(response.headers.getHeader("date", 0)), "method": method, "uri": request.uri.replace('"', "%22"), "protocolVersion": ".".join(str(x) for x in request.clientproto), "statusCode": response.code, "bytesSent": loginfo.bytesSent, "referer": request.headers.getHeader("referer", "-"), "userAgent": request.headers.getHeader("user-agent", "-"), } # Add extended items to format and formatArgs if config.EnableExtendedAccessLog: format += ' i=%(serverInstance)s' formatArgs["serverInstance"] = config.LogID if config.LogID else "0" if request.chanRequest: # This can be None during tests format += ' or=%(outstandingRequests)s' formatArgs["outstandingRequests"] = request.chanRequest.channel.factory.outstandingRequests # Tags for time stamps collected along the way - the first one in the list is the initial # time for request creation - we use that to track the entire request/response time nowtime = time.time() if config.EnableExtendedTimingAccessLog: basetime = 
request.timeStamps[0][1] request.timeStamps[0] = ("t", time.time(),) for tag, timestamp in request.timeStamps: format += " %s=%%(%s).1f" % (tag, tag,) formatArgs[tag] = (timestamp - basetime) * 1000 if tag != "t": basetime = timestamp if len(request.timeStamps) > 1: format += " t-log=%(t-log).1f" formatArgs["t-log"] = (timestamp - basetime) * 1000 else: format += " t=%(t).1f" formatArgs["t"] = (nowtime - request.timeStamps[0][1]) * 1000 if hasattr(request, "extendedLogItems"): for k, v in sorted(request.extendedLogItems.iteritems(), key=lambda x: x[0]): k = str(k).replace('"', "%22") v = str(v).replace('"', "%22") if " " in v: v = '"%s"' % (v,) format += " %s=%%(%s)s" % (k, k,) formatArgs[k] = v # Add the name of the XML error element for debugging purposes if hasattr(response, "error"): format += " err=%(err)s" formatArgs["err"] = response.error.qname()[1] fwdHeaders = request.headers.getRawHeaders("x-forwarded-for", "") if fwdHeaders: # Limit each x-forwarded-header to 50 in case someone is # trying to overwhelm the logs forwardedFor = ",".join([hdr[:50] for hdr in fwdHeaders]) forwardedFor = forwardedFor.replace(" ", "") format += " fwd=%(fwd)s" formatArgs["fwd"] = forwardedFor if formatArgs["host"] == "0.0.0.0": fwdHeaders = request.headers.getRawHeaders("x-forwarded-for", "") if fwdHeaders: formatArgs["host"] = fwdHeaders[-1].split(",")[-1].strip() format += " unix=%(unix)s" formatArgs["unix"] = "true" elif "overloaded" in eventDict: overloaded = eventDict.get("overloaded") format = ( '%(host)s - %(uid)s [%(date)s]' ' "%(method)s"' ' %(statusCode)s %(bytesSent)d' ' "%(referer)s" "%(userAgent)s"' ) formatArgs = { "host": overloaded.transport.hostname, "uid": "-", "date": self.logDateString(time.time()), "method": "???", "uri": "", "protocolVersion": "", "statusCode": 503, "bytesSent": 0, "referer": "-", "userAgent": "-", } if config.EnableExtendedAccessLog: format += ' p=%(serverPort)s' formatArgs["serverPort"] = overloaded.transport.server.port format += ' 
or=%(outstandingRequests)s' formatArgs["outstandingRequests"] = overloaded.outstandingRequests # Write anything we got to the log and stats if format is not None: # sanitize output to mitigate log injection for k, v in formatArgs.items(): if not isinstance(v, basestring): continue v = v.replace("\r", "\\r") v = v.replace("\n", "\\n") v = v.replace("\"", "\\\"") formatArgs[k] = v formatArgs["type"] = "access-log" formatArgs["log-format"] = format self.logStats(formatArgs) class RotatingFileAccessLoggingObserver(CommonAccessLoggingObserverExtensions): """ Class to do "apache" style access logging to a rotating log file. The log file is rotated after midnight each day. This class also currently handles the collection of system and log statistics. """ def __init__(self, logpath): self.logpath = logpath self.systemStats = None self.statsByMinute = [] self.stats1m = None self.stats5m = None self.stats1h = None def accessLog(self, message, allowrotate=True): """ Log a message to the file and possibly rotate if date has changed. @param message: C{str} for the message to log. @param allowrotate: C{True} if log rotate allowed, C{False} to log to current file without testing for rotation. """ if self.shouldRotate() and allowrotate: self.flush() self.rotate() if isinstance(message, unicode): message = message.encode("utf-8") self.f.write(message + "\n") def start(self): """ Start logging. Open the log file and log an "open" message. """ super(RotatingFileAccessLoggingObserver, self).start() self._open() self.accessLog("Log opened - server start: [%s]." % (datetime.datetime.now().ctime(),)) def stop(self): """ Stop logging. Close the log file and log a "close" message. """ self.accessLog("Log closed - server stop: [%s]." % (datetime.datetime.now().ctime(),), False) super(RotatingFileAccessLoggingObserver, self).stop() self._close() if self.systemStats is not None: self.systemStats.stop() def _open(self): """ Open the log file.
""" self.f = open(self.logpath, "a", 1) self.lastDate = self.toDate(os.stat(self.logpath)[8]) def _close(self): """ Close the log file. """ self.f.close() def flush(self): """ Flush the log file. """ self.f.flush() def shouldRotate(self): """ Rotate when the date has changed since last write """ if config.RotateAccessLog: return self.toDate() > self.lastDate else: return False def toDate(self, *args): """ Convert a unixtime to (year, month, day) localtime tuple, or return the current (year, month, day) localtime tuple. This function primarily exists so you may overload it with gmtime, or some cruft to make unit testing possible. """ # primarily so this can be unit tested easily return time.localtime(*args)[:3] def suffix(self, tupledate): """ Return the suffix given a (year, month, day) tuple or unixtime """ try: return "_".join(map(str, tupledate)) except: # try taking a float unixtime return "_".join(map(str, self.toDate(tupledate))) def rotate(self): """ Rotate the file and create a new one. If it's not possible to open new logfile, this will fail silently, and continue logging to old logfile. """ newpath = "%s.%s" % (self.logpath, self.suffix(self.lastDate)) if os.path.exists(newpath): log.info("Cannot rotate log file to '{path}' because it already exists.", path=newpath) return self.accessLog("Log closed - rotating: [%s]." % (datetime.datetime.now().ctime(),), False) log.info("Rotating log file to: '{path}'", path=newpath, system="Logging") self.f.close() os.rename(self.logpath, newpath) self._open() self.accessLog("Log opened - rotated: [%s]." 
% (datetime.datetime.now().ctime(),), False) def logStats(self, stats): """ Update stats """ # Only use the L{SystemMonitor} when stats socket is in use if config.Stats.EnableUnixStatsSocket or config.Stats.EnableTCPStatsSocket: # Initialize a L{SystemMonitor} on the first call if self.systemStats is None: self.systemStats = SystemMonitor() # Currently only storing stats for access log type if "type" not in stats or stats["type"] != "access-log": return currentStats = self.ensureSequentialStats() self.updateStats(currentStats, stats) if stats["type"] == "access-log": self.accessLog(stats["log-format"] % stats) def getStats(self): """ Return the stats """ # Only use the L{SystemMonitor} when stats socket is in use if not config.Stats.EnableUnixStatsSocket and not config.Stats.EnableTCPStatsSocket: return {} # Initialize a L{SystemMonitor} on the first call if self.systemStats is None: self.systemStats = SystemMonitor() # The current stats currentStats = self.ensureSequentialStats() # Get previous minute details if self.stats1m is None: index = min(2, len(self.statsByMinute)) if index > 0: self.stats1m = self.statsByMinute[-index][1] else: self.stats1m = self.initStats() # Do five minute aggregate if self.stats5m is None: self.stats5m = self.initStats() index = min(6, len(self.statsByMinute)) for i in range(-index, -1): stat = self.statsByMinute[i][1] self.mergeStats(self.stats5m, stat) # Do one hour aggregate if self.stats1h is None: self.stats1h = self.initStats() index = min(61, len(self.statsByMinute)) for i in range(-index, -1): stat = self.statsByMinute[i][1] self.mergeStats(self.stats1h, stat) printStats = { "system": self.systemStats.items, "current": currentStats, "1m": self.stats1m, "5m": self.stats5m, "1h": self.stats1h, } return printStats def ensureSequentialStats(self): """ Make sure the list of timed stats is contiguous wrt time. 
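ensureSequentialStats keeps the per-minute stats list contiguous by appending empty buckets for any idle minutes between the last recorded minute and now. A minimal standalone sketch of that gap-filling (plain dicts stand in for initStats, and the helper names here are illustrative, not the server's API):

```python
def minute_bucket(ts):
    # Truncate a unix timestamp to the start of its minute, as the
    # observer does when computing dtindex for statsByMinute.
    return int(ts / 60.0) * 60

def fill_gaps(buckets, now):
    # Append an empty stats dict for every minute between the last
    # recorded bucket and the current minute, so the list stays
    # contiguous even across idle periods.
    dtindex = minute_bucket(now)
    if not buckets:
        buckets.append((dtindex, {}))
        return buckets
    oldindex = buckets[-1][0]
    while oldindex < dtindex:
        oldindex += 60
        buckets.append((oldindex, {}))
    return buckets

buckets = [(minute_bucket(1000), {"requests": 1})]
fill_gaps(buckets, 1180)  # 180s later: adds empty buckets for minutes 1020, 1080, 1140
```

The same truncation-to-minute index is what lets the caller cheaply detect "a new minute has started" by comparing the last bucket's index to the current one.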
""" dtindex = int(time.time() / 60.0) * 60 if len(self.statsByMinute) > 0: if self.statsByMinute[-1][0] != dtindex: oldindex = self.statsByMinute[-1][0] if oldindex != dtindex: # Adding a new minutes worth of data - clear out any cached # historical data self.stats1m = None self.stats5m = None self.stats1h = None # Add enough new stats to account for any idle minutes between # the last recorded stat and the current one while oldindex != dtindex: oldindex += 60 self.statsByMinute.append((oldindex, self.initStats(),)) else: self.statsByMinute.append((dtindex, self.initStats(),)) self.stats1m = None self.stats5m = None self.stats1h = None # We only need up to 1 hour's worth of data, so truncate the cached data # to avoid filling memory threshold = 65 if len(self.statsByMinute) > threshold: self.statsByMinute = self.statsByMinute[-threshold:] return self.statsByMinute[-1][1] def initStats(self): def initTimeHistogram(): return { "<10ms": 0, "10ms<->100ms": 0, "100ms<->1s": 0, "1s<->10s": 0, "10s<->30s": 0, "30s<->60s": 0, ">60s": 0, "Over 1s": 0, "Over 10s": 0, } return { "requests": 0, "method": collections.defaultdict(int), "method-t": collections.defaultdict(float), "500": 0, "401": 0, "t": 0.0, "t-resp-wr": 0.0, "slots": 0, "max-slots": 0, "T": initTimeHistogram(), "T-RESP-WR": initTimeHistogram(), "T-MAX": 0.0, "cpu": self.systemStats.items["cpu use"], } def updateStats(self, current, stats): # Gather specific information and aggregate into our persistent stats adjustedMethod = getAdjustedMethodName(stats) if current["requests"] == 0: current["cpu"] = 0.0 current["requests"] += 1 current["method"][adjustedMethod] += 1 current["method-t"][adjustedMethod] += stats.get("t", 0.0) if stats["statusCode"] >= 500: current["500"] += 1 elif stats["statusCode"] == 401: current["401"] += 1 current["t"] += stats.get("t", 0.0) current["t-resp-wr"] += stats.get("t-resp-wr", 0.0) current["slots"] += stats.get("outstandingRequests", 0) current["max-slots"] = 
max(current["max-slots"], self.limiter.maxOutstandingRequests if hasattr(self, "limiter") else 0) current["cpu"] += self.systemStats.items["cpu use"] def histogramUpdate(t, key): if t >= 60000.0: current[key][">60s"] += 1 elif t >= 30000.0: current[key]["30s<->60s"] += 1 elif t >= 10000.0: current[key]["10s<->30s"] += 1 elif t >= 1000.0: current[key]["1s<->10s"] += 1 elif t >= 100.0: current[key]["100ms<->1s"] += 1 elif t >= 10.0: current[key]["10ms<->100ms"] += 1 else: current[key]["<10ms"] += 1 if t >= 1000.0: current[key]["Over 1s"] += 1 if t >= 10000.0: current[key]["Over 10s"] += 1 t = stats.get("t", None) if t is not None: histogramUpdate(t, "T") current["T-MAX"] = max(current["T-MAX"], t) t = stats.get("t-resp-wr", None) if t is not None: histogramUpdate(t, "T-RESP-WR") def mergeStats(self, current, stats): # Gather specific information and aggregate into our persistent stats if current["requests"] == 0: current["cpu"] = 0.0 current["requests"] += stats["requests"] for method in stats["method"].keys(): current["method"][method] += stats["method"][method] for method in stats["method-t"].keys(): current["method-t"][method] += stats["method-t"][method] current["500"] += stats["500"] current["401"] += stats["401"] current["t"] += stats["t"] current["t-resp-wr"] += stats["t-resp-wr"] current["slots"] += stats["slots"] current["max-slots"] = max(current["max-slots"], stats["max-slots"]) current["cpu"] += stats["cpu"] def histogramUpdate(t, key): if t >= 60000.0: current[key][">60s"] += 1 elif t >= 30000.0: current[key]["30s<->60s"] += 1 elif t >= 10000.0: current[key]["10s<->30s"] += 1 elif t >= 1000.0: current[key]["1s<->10s"] += 1 elif t >= 100.0: current[key]["100ms<->1s"] += 1 elif t >= 10.0: current[key]["10ms<->100ms"] += 1 else: current[key]["<10ms"] += 1 if t >= 1000.0: current[key]["Over 1s"] += 1 if t >= 10000.0: current[key]["Over 10s"] += 1 for bin in stats["T"].keys(): current["T"][bin] += stats["T"][bin] current["T-MAX"] = max(current["T-MAX"], 
stats["T-MAX"]) for bin in stats["T-RESP-WR"].keys(): current["T-RESP-WR"][bin] += stats["T-RESP-WR"][bin] class SystemMonitor(object): """ Keeps track of system usage information. This installs a reactor task to run about once per second and track system use. """ CPUStats = collections.namedtuple("CPUStats", ("total", "idle",)) def __init__(self): self.items = { "cpu count": psutil.cpu_count() if psutil is not None else -1, "cpu use": 0.0, "memory used": 0, "memory percent": 0.0, "start time": time.time(), } if psutil is not None: times = psutil.cpu_times() self.previous_cpu = SystemMonitor.CPUStats(sum(times), times.idle,) else: self.previous_cpu = SystemMonitor.CPUStats(0, 0) self.task = task.LoopingCall(self.update) self.task.start(1.0) def stop(self): """ Just stop the task """ self.task.stop() def update(self): # CPU usage based on diff'ing CPU times if psutil is not None: times = psutil.cpu_times() cpu_now = SystemMonitor.CPUStats(sum(times), times.idle,) try: self.items["cpu use"] = 100.0 * (1.0 - (cpu_now.idle - self.previous_cpu.idle) / (cpu_now.total - self.previous_cpu.total)) except ZeroDivisionError: self.items["cpu use"] = 0.0 self.previous_cpu = cpu_now # Memory usage if psutil is not None and 'freebsd' not in platform: mem = psutil.virtual_memory() self.items["memory used"] = mem.used self.items["memory percent"] = mem.percent class LogStats(amp.Command): arguments = [("message", amp.String())] class AMPCommonAccessLoggingObserver(CommonAccessLoggingObserverExtensions): def __init__(self): self.protocol = None self._buffer = [] def flushBuffer(self): if self._buffer: for msg in self._buffer: self.logStats(msg) def addClient(self, connectedClient): """ An AMP client connected; hook it up to this observer. 
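SystemMonitor derives CPU use by diffing successive cumulative psutil.cpu_times() samples: the non-idle fraction of the time elapsed between two samples. A self-contained sketch of that calculation (the sample counter values are made up):

```python
import collections

CPUStats = collections.namedtuple("CPUStats", ("total", "idle"))

def cpu_use(previous, current):
    # Percentage of CPU time spent non-idle between two samples,
    # mirroring SystemMonitor.update's diff of cumulative counters.
    try:
        return 100.0 * (1.0 - (current.idle - previous.idle) /
                        (current.total - previous.total))
    except ZeroDivisionError:
        # No time elapsed between samples
        return 0.0

prev = CPUStats(total=1000.0, idle=800.0)
curr = CPUStats(total=1010.0, idle=802.0)
print(cpu_use(prev, curr))  # 2 of the 10 elapsed units were idle, so ~80% busy
```

Because the counters are cumulative since boot, only the deltas matter; the ZeroDivisionError guard covers two samples taken at effectively the same instant.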
""" self.protocol = connectedClient self.flushBuffer() def logStats(self, message): """ Log server stats via the remote AMP Protocol """ if self.protocol is not None: message = json.dumps(message) if isinstance(message, unicode): message = message.encode("utf-8") d = self.protocol.callRemote(LogStats, message=message) d.addErrback(log.error) else: self._buffer.append(message) class AMPLoggingProtocol(amp.AMP): """ A server side protocol for logging to the given observer. """ def __init__(self, observer): self.observer = observer super(AMPLoggingProtocol, self).__init__() def logStats(self, message): stats = json.loads(message) self.observer.logStats(stats) return {} LogStats.responder(logStats) class AMPLoggingFactory(protocol.ServerFactory): def __init__(self, observer): self.observer = observer def doStart(self): self.observer.start() def doStop(self): self.observer.stop() def buildProtocol(self, addr): return AMPLoggingProtocol(self.observer) calendarserver-9.1+dfsg/calendarserver/controlsocket.py000066400000000000000000000066131315003562600235540ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Multiplexing control socket. Currently used for messages related to queueing and logging, but extensible to more. 
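The control socket multiplexes several AMP factories over one connection by tagging every outgoing box with a `_route` key and dispatching on that key at the far end. A toy version of the scheme, with plain dicts and callables standing in for AMP boxes and box receivers (names are illustrative):

```python
def route_box(box, route):
    # Sender side: copy the box and tag it with its route, as
    # DispatchingSender.sendBox does before forwarding.
    tagged = dict(box)
    tagged['_route'] = route
    return tagged

def dispatch_box(receiver_map, box):
    # Receiver side: hand the box to whichever receiver is
    # registered under its '_route' key, as
    # DispatchingBoxReceiver.ampBoxReceived does.
    receiver_map[box['_route']](box)

received = []
receiver_map = {
    'log': lambda box: received.append(('log', box['message'])),
    'queue': lambda box: received.append(('queue', box['message'])),
}
dispatch_box(receiver_map, route_box({'message': 'rotate'}, 'log'))
dispatch_box(receiver_map, route_box({'message': 'enqueue'}, 'queue'))
```

The design keeps one listening socket in the master process while letting independent subsystems (queueing, logging) each speak their own AMP dialect.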
""" from zope.interface import implements from twisted.internet.protocol import Factory from twisted.protocols.amp import BinaryBoxProtocol, IBoxReceiver, IBoxSender from twisted.application.service import Service class DispatchingSender(object): implements(IBoxSender) def __init__(self, sender, route): self.sender = sender self.route = route def sendBox(self, box): box['_route'] = self.route self.sender.sendBox(box) def unhandledError(self, failure): self.sender.unhandledError(failure) class DispatchingBoxReceiver(object): implements(IBoxReceiver) def __init__(self, receiverMap): self.receiverMap = receiverMap def startReceivingBoxes(self, boxSender): for key, receiver in self.receiverMap.items(): receiver.startReceivingBoxes(DispatchingSender(boxSender, key)) def ampBoxReceived(self, box): self.receiverMap[box['_route']].ampBoxReceived(box) def stopReceivingBoxes(self, reason): for receiver in self.receiverMap.values(): receiver.stopReceivingBoxes(reason) class ControlSocket(Factory, object): """ An AMP control socket that aggregates other AMP factories. This is the service that listens in the master process. """ def __init__(self): """ Initialize this L{ControlSocket}. """ self._factoryMap = {} def addFactory(self, key, otherFactory): """ Add another L{Factory} - one that returns L{AMP} instances - to this socket. """ self._factoryMap[key] = otherFactory def buildProtocol(self, addr): """ Build a thing that will multiplex AMP to all the relevant sockets. """ receiverMap = {} for k, f in self._factoryMap.items(): receiverMap[k] = f.buildProtocol(addr) return BinaryBoxProtocol(DispatchingBoxReceiver(receiverMap)) def doStart(self): """ Relay start notification to all added factories. """ for f in self._factoryMap.values(): f.doStart() def doStop(self): """ Relay stop notification to all added factories. 
""" for f in self._factoryMap.values(): f.doStop() class ControlSocketConnectingService(Service, object): def __init__(self, endpointFactory, controlSocket): super(ControlSocketConnectingService, self).__init__() self.endpointFactory = endpointFactory self.controlSocket = controlSocket def privilegedStartService(self): from twisted.internet import reactor endpoint = self.endpointFactory(reactor) endpoint.connect(self.controlSocket) calendarserver-9.1+dfsg/calendarserver/dashboard_service.py000066400000000000000000000152511315003562600243300ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from twext.enterprise.jobs.jobitem import JobItem from twisted.internet.defer import inlineCallbacks, succeed, returnValue from twisted.internet.error import ConnectError from twisted.internet.protocol import Factory from twisted.protocols.basic import LineReceiver from twistedcaldav.config import config from txdav.common.datastore.work.load_work import TestWork from txdav.dps.client import DirectoryService as DirectoryProxyClientService from txdav.who.cache import CachingDirectoryService from twext.enterprise.jobs.queue import WorkerConnectionPool import json """ Protocol and other items that enable introspection of the server. This will include server protocol analysis statistics, job queue load, and other useful information that a server admin or developer would like to keep an eye on. 
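The dashboard protocol defined below accepts a JSON array of item names per line and replies with a JSON object; each requested name is resolved dynamically to a data_<name> method. A stripped-down sketch of that dispatch (the jobcount value here is a dummy, not what the real server would report):

```python
import json

class DashboardSketch(object):
    # Minimal stand-in for DashboardProtocol.process_data: each
    # requested name resolves to a data_<name> method if one exists,
    # otherwise to an empty string.
    def data_jobcount(self):
        return 3  # dummy value; the real method queries JobItem

    def process(self, line):
        results = {}
        for name in json.loads(line):
            method = getattr(self, "data_{}".format(name), None)
            results[name] = method() if method is not None else ""
        return json.dumps(results)

sketch = DashboardSketch()
print(sketch.process('["jobcount", "nonsense"]'))
```

Unknown names degrade to empty strings rather than errors, so a dashboard client can probe for data items without version-matching the server.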
""" class DashboardProtocol (LineReceiver): """ A protocol that receives a line containing a JSON object representing a request, and returns a line containing a JSON object as a response. """ unknown_cmd = json.dumps({"result": "unknown command"}) bad_cmd = json.dumps({"result": "bad command"}) def lineReceived(self, line): """ Process a request which is expected to be a JSON object. @param line: The line which was received with the delimiter removed. @type line: C{bytes} """ # Look for exit if line in ("exit", "quit"): self.sendLine("Done") self.transport.loseConnection() return def _write(result): self.sendLine(result) try: j = json.loads(line) if isinstance(j, list): self.process_data(j) except (ValueError, KeyError): _write(self.bad_cmd) @inlineCallbacks def process_data(self, j): results = {} for data in j: if hasattr(self, "data_{}".format(data)): result = yield getattr(self, "data_{}".format(data))() elif data.startswith("stats_"): result = yield self.data_stats() result = result.get(data[6:], "") else: result = "" results[data] = result self.sendLine(json.dumps(results)) def data_stats(self): """ Return the logging protocol statistics. @return: a string containing the JSON result. @rtype: L{str} """ return succeed(self.factory.logger.getStats()) def data_slots(self): """ Return the logging protocol statistics. @return: a string containing the JSON result. @rtype: L{str} """ states = tuple(self.factory.limiter.dispatcher.slavestates) if self.factory.limiter is not None else () results = [] for num, status in states: result = {"slot": num} result.update(status.items()) results.append(result) return succeed({"slots": results, "overloaded": self.factory.limiter.overloaded if self.factory.limiter is not None else False}) def data_jobcount(self): """ Return a count of job types. @return: the JSON result. @rtype: L{int} """ return succeed(JobItem.numberOfWorkTypes()) @inlineCallbacks def data_jobs(self): """ Return a summary of the job queue. 
@return: a string containing the JSON result. @rtype: L{str} """ if self.factory.store: txn = self.factory.store.newTransaction("DashboardProtocol.data_jobs") records = (yield JobItem.histogram(txn)) yield txn.commit() else: records = {} returnValue(records) def data_job_assignments(self): """ Return a summary of the job assignments to workers. @return: a string containing the JSON result. @rtype: L{str} """ if self.factory.store: pool = self.factory.store.pool loads = pool.workerPool.eachWorkerLoad() level = pool.workerPool.loadLevel() else: loads = [] level = 0 return succeed({"workers": loads, "level": level}) @inlineCallbacks def data_test_work(self): """ Return the number of TEST_WORK items in the job queue. @return: a string containing the JSON result. @rtype: L{str} """ results = {} if self.factory.store: txn = self.factory.store.newTransaction() results["queued"] = yield TestWork.count(txn) results["assigned"] = yield JobItem.count(txn, where=(JobItem.assigned != None)) results["completed"] = WorkerConnectionPool.completed.get(TestWork.workType(), 0) yield txn.commit() else: results["queued"] = 0 results["assigned"] = 0 results["completed"] = 0 returnValue(results) def data_directory(self): """ Return a summary of directory service calls. @return: the JSON result. 
@rtype: L{str} """ try: return self.factory.directory.stats() if self.factory.directory is not None else succeed({}) except (AttributeError, ConnectError): return succeed({}) class DashboardServer(Factory): protocol = DashboardProtocol def __init__(self, logObserver, limiter): self.logger = logObserver self.limiter = limiter self.logger.limiter = self.limiter self.store = None self.directory = None def makeDirectoryProxyClient(self): if config.DirectoryProxy.Enabled: self.directory = DirectoryProxyClientService(config.DirectoryRealmName) if config.DirectoryCaching.CachingSeconds: self.directory = CachingDirectoryService( self.directory, expireSeconds=config.DirectoryCaching.CachingSeconds, lookupsBetweenPurges=config.DirectoryCaching.LookupsBetweenPurges, negativeCaching=config.DirectoryCaching.NegativeCachingEnabled, ) calendarserver-9.1+dfsg/calendarserver/logAnalysis.py ## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
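getAdjustedMethodName below classifies requests by splitting the request URI into path segments and then testing segment count and known prefixes. A small sketch of that segmentation step (the sample URI follows the /calendars/... layout the classifier assumes; the helper name is mine):

```python
def uri_segments(uri):
    # Strip the trailing slash and drop the leading empty segment,
    # as getAdjustedMethodName does before classifying the request.
    bits = uri.rstrip("/").split('/')[1:]
    # For degenerate URIs like "/" fall back to the raw string.
    return bits if bits else [uri]

print(uri_segments("/calendars/__uids__/user01/calendar/"))
# ['calendars', '__uids__', 'user01', 'calendar'] -> four segments,
# the shape the classifier treats as a calendar collection
```

Segment counts then drive the mapping: three segments under "calendars" is a home, four is a collection or inbox, five an individual calendar object.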
## # Adjust method names # PROPFINDs METHOD_PROPFIND_CALENDAR_HOME = "PROPFIND Calendar Home" METHOD_PROPFIND_CACHED_CALENDAR_HOME = "PROPFIND cached Calendar Home" METHOD_PROPFIND_CALENDAR = "PROPFIND Calendar" METHOD_PROPFIND_INBOX = "PROPFIND Inbox" METHOD_PROPFIND_ADDRESSBOOK_HOME = "PROPFIND Adbk Home" METHOD_PROPFIND_CACHED_ADDRESSBOOK_HOME = "PROPFIND cached Adbk Home" METHOD_PROPFIND_ADDRESSBOOK = "PROPFIND Adbk" METHOD_PROPFIND_DIRECTORY = "PROPFIND Directory" METHOD_PROPFIND_PRINCIPALS = "PROPFIND Principals" METHOD_PROPFIND_CACHED_PRINCIPALS = "PROPFIND cached Principals" # PROPPATCHs METHOD_PROPPATCH_CALENDAR = "PROPPATCH Calendar" METHOD_PROPPATCH_ADDRESSBOOK = "PROPPATCH Adbk Home" # REPORTs METHOD_REPORT_CALENDAR_MULTIGET = "REPORT cal-multi" METHOD_REPORT_CALENDAR_QUERY = "REPORT cal-query" METHOD_REPORT_CALENDAR_FREEBUSY = "REPORT freebusy" METHOD_REPORT_CALENDAR_HOME_SYNC = "REPORT cal-home-sync" METHOD_REPORT_CALENDAR_SYNC = "REPORT cal-sync" METHOD_REPORT_ADDRESSBOOK_MULTIGET = "REPORT adbk-multi" METHOD_REPORT_ADDRESSBOOK_QUERY = "REPORT adbk-query" METHOD_REPORT_DIRECTORY_QUERY = "REPORT dir-query" METHOD_REPORT_ADDRESSBOOK_HOME_SYNC = "REPORT adbk-home-sync" METHOD_REPORT_ADDRESSBOOK_SYNC = "REPORT adbk-sync" METHOD_REPORT_P_SEARCH_P_SET = "REPORT p-set" METHOD_REPORT_P_P_SEARCH = "REPORT p-search" METHOD_REPORT_EXPAND_P = "REPORT expand" # POSTs METHOD_POST_CALENDAR_HOME = "POST Calendar Home" METHOD_POST_CALENDAR = "POST Calendar" METHOD_POST_CALENDAR_ADD_MEMBER = "POST Calendar-add" METHOD_POST_CALENDAR_OBJECT = "POST ics" METHOD_POST_CALENDAR_OBJECT_SPLIT = "POST split" METHOD_POST_CALENDAR_OBJECT_ATTACHMENT_ADD = "POST att-add" METHOD_POST_CALENDAR_OBJECT_ATTACHMENT_UPDATE = "POST att-update" METHOD_POST_CALENDAR_OBJECT_ATTACHMENT_REMOVE = "POST att-remove" METHOD_POST_ADDRESSBOOK_HOME = "POST Adbk Home" METHOD_POST_ADDRESSBOOK = "POST Adbk" METHOD_POST_ADDRESSBOOK_ADD_MEMBER = "POST Adbk-add" METHOD_POST_ISCHEDULE_FREEBUSY = "POST 
Freebusy iSchedule" METHOD_POST_ISCHEDULE = "POST iSchedule" METHOD_POST_TIMEZONES = "POST Timezones" METHOD_POST_FREEBUSY = "POST Freebusy" METHOD_POST_ORGANIZER = "POST Organizer" METHOD_POST_ATTENDEE = "POST Attendee" METHOD_POST_OUTBOX = "POST Outbox" METHOD_POST_APNS = "POST apns" METHOD_POST_CONDUIT = "POST conduit" # PUTs METHOD_PUT_ICS = "PUT ics" METHOD_PUT_ORGANIZER = "PUT Organizer" METHOD_PUT_ATTENDEE = "PUT Attendee" METHOD_PUT_DROPBOX = "PUT dropbox" METHOD_PUT_VCF = "PUT VCF" # GETs METHOD_GET_CALENDAR_HOME = "GET Calendar Home" METHOD_GET_CALENDAR = "GET Calendar" METHOD_GET_ICS = "GET ics" METHOD_GET_INBOX_ICS = "GET inbox ics" METHOD_GET_DROPBOX = "GET dropbox" METHOD_GET_ADDRESSBOOK_HOME = "GET Adbk Home" METHOD_GET_ADDRESSBOOK = "GET Adbk" METHOD_GET_VCF = "GET VCF" METHOD_GET_TIMEZONES = "GET Timezones" # DELETEs METHOD_DELETE_CALENDAR_HOME = "DELETE Calendar Home" METHOD_DELETE_CALENDAR = "DELETE Calendar" METHOD_DELETE_ICS = "DELETE ics" METHOD_DELETE_INBOX_ICS = "DELETE inbox ics" METHOD_DELETE_DROPBOX = "DELETE dropbox" METHOD_DELETE_ADDRESSBOOK_HOME = "DELETE Adbk Home" METHOD_DELETE_ADDRESSBOOK = "DELETE Adbk" METHOD_DELETE_VCF = "DELETE vcf" def getAdjustedMethodName(stats, method=None, uri=None): if method is None: method = stats["method"] if uri is None: uri = stats["uri"] uribits = uri.rstrip("/").split('/')[1:] if len(uribits) == 0: uribits = [uri] calendar_specials = ("attachments", "dropbox", "notification", "freebusy", "outbox",) adbk_specials = ("notification",) def _PROPFIND(): cached = "cached" in stats if uribits[0] == "calendars": if len(uribits) == 3: return METHOD_PROPFIND_CACHED_CALENDAR_HOME if cached else METHOD_PROPFIND_CALENDAR_HOME elif len(uribits) > 3: if uribits[3] in calendar_specials: return "PROPFIND %s" % (uribits[3],) elif len(uribits) == 4: if uribits[3] == "inbox": return METHOD_PROPFIND_INBOX else: return METHOD_PROPFIND_CALENDAR elif uribits[0] == "addressbooks": if len(uribits) == 3: return 
METHOD_PROPFIND_CACHED_ADDRESSBOOK_HOME if cached else METHOD_PROPFIND_ADDRESSBOOK_HOME elif len(uribits) > 3: if uribits[3] in adbk_specials: return "PROPFIND %s" % (uribits[3],) elif len(uribits) == 4: return METHOD_PROPFIND_ADDRESSBOOK elif uribits[0] == "directory": return METHOD_PROPFIND_DIRECTORY elif uribits[0] == "principals": return METHOD_PROPFIND_CACHED_PRINCIPALS if cached else METHOD_PROPFIND_PRINCIPALS return method def _REPORT(): if "(" in method: report_type = method.split("}" if "}" in method else ":")[1][:-1] if report_type == "addressbook-query": if uribits[0] == "directory": report_type = "directory-query" if report_type == "sync-collection": if uribits[0] == "calendars": if len(uribits) == 3: report_type = "cal-home-sync" else: report_type = "cal-sync" elif uribits[0] == "addressbooks": if len(uribits) == 3: report_type = "adbk-home-sync" else: report_type = "adbk-sync" mappedNames = { "calendar-multiget": METHOD_REPORT_CALENDAR_MULTIGET, "calendar-query": METHOD_REPORT_CALENDAR_QUERY, "free-busy-query": METHOD_REPORT_CALENDAR_FREEBUSY, "cal-home-sync": METHOD_REPORT_CALENDAR_HOME_SYNC, "cal-sync": METHOD_REPORT_CALENDAR_SYNC, "addressbook-multiget": METHOD_REPORT_ADDRESSBOOK_MULTIGET, "addressbook-query": METHOD_REPORT_ADDRESSBOOK_QUERY, "directory-query": METHOD_REPORT_DIRECTORY_QUERY, "adbk-home-sync": METHOD_REPORT_ADDRESSBOOK_HOME_SYNC, "adbk-sync": METHOD_REPORT_ADDRESSBOOK_SYNC, "principal-search-property-set": METHOD_REPORT_P_SEARCH_P_SET, "principal-property-search": METHOD_REPORT_P_P_SEARCH, "expand-property": METHOD_REPORT_EXPAND_P, } return mappedNames.get(report_type, "REPORT %s" % (report_type,)) return method def _PROPPATCH(): if uribits[0] == "calendars": return METHOD_PROPPATCH_CALENDAR elif uribits[0] == "addressbooks": return METHOD_PROPPATCH_ADDRESSBOOK return method def _POST(): if uribits[0] == "calendars": if "(" in method: post_type = method.split("(")[1][:-1] mappedNames = { "add-member": 
METHOD_POST_CALENDAR_ADD_MEMBER, "split": METHOD_POST_CALENDAR_OBJECT_SPLIT, "attachment-add": METHOD_POST_CALENDAR_OBJECT_ATTACHMENT_ADD, "attachment-update": METHOD_POST_CALENDAR_OBJECT_ATTACHMENT_UPDATE, "attachment-remove": METHOD_POST_CALENDAR_OBJECT_ATTACHMENT_REMOVE, } return mappedNames.get(post_type, "POST %s" % (post_type,)) elif len(uribits) == 3: return METHOD_POST_CALENDAR_HOME elif len(uribits) == 4: if uribits[3] == "outbox": if "recipients" in stats: return METHOD_POST_FREEBUSY elif "freebusy" in stats: return METHOD_POST_FREEBUSY elif "itip.request" in stats or "itip.cancel" in stats: return METHOD_POST_ORGANIZER elif "itip.reply" in stats: return METHOD_POST_ATTENDEE else: return METHOD_POST_OUTBOX elif uribits[3] in calendar_specials: pass else: return METHOD_POST_CALENDAR elif len(uribits) == 5: return METHOD_POST_CALENDAR_OBJECT elif uribits[0] == "addressbooks": if len(uribits) == 3: return METHOD_POST_ADDRESSBOOK_HOME elif len(uribits) == 4: if uribits[3] in adbk_specials: pass else: return METHOD_POST_ADDRESSBOOK elif uribits[0] == "ischedule": if "fb-cached" in stats or "fb-uncached" in stats or "freebusy" in stats: return METHOD_POST_ISCHEDULE_FREEBUSY else: return METHOD_POST_ISCHEDULE elif uribits[0].startswith("timezones"): return METHOD_POST_TIMEZONES elif uribits[0].startswith("apns"): return METHOD_POST_APNS elif uribits[0].startswith("conduit"): return METHOD_POST_CONDUIT return method def _PUT(): if uribits[0] == "calendars": if len(uribits) > 3: if uribits[3] in calendar_specials: return "PUT %s" % (uribits[3],) elif len(uribits) == 4: pass else: if "itip.requests" in stats: return METHOD_PUT_ORGANIZER elif "itip.reply" in stats: return METHOD_PUT_ATTENDEE else: return METHOD_PUT_ICS elif uribits[0] == "addressbooks": if len(uribits) > 3: if uribits[3] in adbk_specials: return "PUT %s" % (uribits[3],) elif len(uribits) == 4: pass else: return METHOD_PUT_VCF return method def _GET(): if uribits[0] == "calendars": if len(uribits) == 
3: return METHOD_GET_CALENDAR_HOME elif len(uribits) > 3: if uribits[3] in calendar_specials: return "GET %s" % (uribits[3],) elif len(uribits) == 4: return METHOD_GET_CALENDAR elif uribits[3] == "inbox": return METHOD_GET_INBOX_ICS else: return METHOD_GET_ICS elif uribits[0] == "addressbooks": if len(uribits) == 3: return METHOD_GET_ADDRESSBOOK_HOME elif len(uribits) > 3: if uribits[3] in adbk_specials: return "GET %s" % (uribits[3],) elif len(uribits) == 4: return METHOD_GET_ADDRESSBOOK else: return METHOD_GET_VCF elif uribits[0].startswith("timezones"): return METHOD_GET_TIMEZONES return method def _DELETE(): if uribits[0] == "calendars": if len(uribits) == 3: return METHOD_DELETE_CALENDAR_HOME elif len(uribits) > 3: if uribits[3] in calendar_specials: return "DELETE %s" % (uribits[3],) elif len(uribits) == 4: return METHOD_DELETE_CALENDAR elif uribits[3] == "inbox": return METHOD_DELETE_INBOX_ICS else: return METHOD_DELETE_ICS elif uribits[0] == "addressbooks": if len(uribits) == 3: return METHOD_DELETE_ADDRESSBOOK_HOME elif len(uribits) > 3: if uribits[3] in adbk_specials: return "DELETE %s" % (uribits[3],) elif len(uribits) == 4: return METHOD_DELETE_ADDRESSBOOK else: return METHOD_DELETE_VCF return method def _ANY(): return method return { "DELETE": _DELETE, "GET": _GET, "POST": _POST, "PROPFIND": _PROPFIND, "PROPPATCH": _PROPPATCH, "PUT": _PUT, "REPORT": _REPORT, }.get(method.split("(")[0], _ANY)() osClients = ( "Mac OS X/", "Mac+OS+X/", "Mac_OS_X/", "iOS/", ) versionClients = ( "iCal/", "iPhone/", "CalendarAgent/", "Calendar/", "CoreDAV/", "Safari/", "dataaccessd/", "Preferences/", "curl/", "DAVKit/", ) quickclients = ( ("InterMapper/", "InterMapper"), ("CardDAVPlugin/", "CardDAVPlugin"), ("Address%20Book/", "AddressBook"), ("AddressBook/", "AddressBook"), ("Mail/", "Mail"), ("iChat/", "iChat"), ) def getAdjustedClientName(stats): userAgent = stats["userAgent"] os = "" for client in osClients: index = userAgent.find(client) if index != -1: l = len(client) 
endex = userAgent.find(' ', index + l) os = (userAgent[index:] if endex == -1 else userAgent[index:endex]) + " " for client in versionClients: index = userAgent.find(client) if index != -1: if os: return os + client[:-1] else: l = len(client) endex = userAgent.find(' ', index + l) return os + (userAgent[index:] if endex == -1 else userAgent[index:endex]) for quick, result in quickclients: index = userAgent.find(quick) if index != -1: return os + result return userAgent[:20] calendarserver-9.1+dfsg/calendarserver/profiling.py ## # Copyright (c) 2014-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
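The decorators in profiling.py wrap a call in a cProfile.Profile and print cumulative stats once it returns. A simplified, self-contained variant (it decorates the function directly rather than via a factory as the module does, and captures the report in a StringIO instead of printing it):

```python
import cProfile
import io
import pstats

def profile_method(method):
    # Wrap a call in a profiler; format the stats after the call
    # completes, stash the report, and return the original result.
    def inner(*args, **kwargs):
        prof = cProfile.Profile()
        prof.enable()
        result = method(*args, **kwargs)
        prof.disable()
        report = io.StringIO()
        pstats.Stats(prof, stream=report).sort_stats('cumulative').print_stats(5)
        inner.last_report = report.getvalue()
        return result
    return inner

@profile_method
def work(n):
    return sum(i * i for i in range(n))

print(work(1000))  # → 332833500; profile text lands in work.last_report
```

Capturing into a stream rather than stdout keeps the decorator usable inside a server process where stdout may be redirected or interleaved.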
##

from twisted.internet.defer import inlineCallbacks, returnValue

import cProfile as profile
import pstats


def profile_method():
    """
    Decorator to profile a function
    """
    def wrapper(method):
        def inner(*args, **kwargs):
            prof = profile.Profile()
            prof.enable()
            result = method(*args, **kwargs)
            prof.disable()
            stats = pstats.Stats(prof)
            stats.sort_stats('cumulative').print_stats()
            return result
        return inner
    return wrapper


def profile_inline_callback():
    """
    Decorator to profile an inlineCallback function
    """
    def wrapper(method):
        @inlineCallbacks
        def inner(*args, **kwargs):
            prof = profile.Profile()
            prof.enable()
            result = yield method(*args, **kwargs)
            prof.disable()
            stats = pstats.Stats(prof)
            stats.sort_stats('cumulative').print_stats()
            returnValue(result)
        return inner
    return wrapper
calendarserver-9.1+dfsg/calendarserver/provision/000077500000000000000000000000001315003562600223335ustar00rootroot00000000000000
calendarserver-9.1+dfsg/calendarserver/provision/__init__.py000066400000000000000000000011361315003562600244450ustar00rootroot00000000000000
##
# Copyright (c) 2009-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
calendarserver-9.1+dfsg/calendarserver/provision/root.py000066400000000000000000000400061315003562600236700ustar00rootroot00000000000000
# -*- test-case-name: calendarserver.provision.test.test_root -*-

##
# Copyright (c) 2005-2017 Apple Inc. All rights reserved.
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## __all__ = [ "RootResource", ] try: from twext.python.sacl import checkSACL except ImportError: # OS X Server SACLs not supported on this system, make SACL check a no-op checkSACL = lambda *ignored: True from twext.python.log import Logger from twisted.cred.error import LoginFailed, UnauthorizedLogin from twisted.internet.defer import inlineCallbacks, returnValue, succeed from twisted.python.reflect import namedClass from twisted.web.error import Error as WebError from twistedcaldav.cache import DisabledCache from twistedcaldav.cache import MemcacheResponseCache, MemcacheChangeNotifier from twistedcaldav.cache import _CachedResponseResource from twistedcaldav.config import config from twistedcaldav.directory.principal import DirectoryPrincipalResource from twistedcaldav.extensions import DAVFile, CachingPropertyStore from twistedcaldav.extensions import DirectoryPrincipalPropertySearchMixIn from twistedcaldav.extensions import ReadOnlyResourceMixIn from twistedcaldav.resource import CalDAVComplianceMixIn from txdav.who.delegates import CachingDelegates from txdav.who.wiki import DirectoryService as WikiDirectoryService from txdav.who.wiki import uidForAuthToken from txweb2 import responsecode from txweb2.auth.wrapper import UnauthorizedResponse from txweb2.dav.xattrprops import xattrPropertyStore from txweb2.http import HTTPError, StatusResponse, RedirectResponse log = Logger() class RootResource( ReadOnlyResourceMixIn, 
DirectoryPrincipalPropertySearchMixIn, CalDAVComplianceMixIn, DAVFile ): """ A special root resource that contains support checking SACLs as well as adding responseFilters. """ useSacls = False # Mapping of top-level resource paths to SACLs. If a request path # starts with any of these, then the list of SACLs are checked. If the # request path does not start with any of these, then no SACLs are checked. saclMap = { "addressbooks": ("addressbook",), "calendars": ("calendar",), "directory": ("addressbook",), "principals": ("addressbook", "calendar"), "webcal": ("calendar",), } # If a top-level resource path starts with any of these, an unauthenticated # request is redirected to the auth url (config.WebCalendarAuthPath) authServiceMap = { "webcal": True, } def __init__(self, path, *args, **kwargs): super(RootResource, self).__init__(path, *args, **kwargs) if config.EnableSACLs: self.useSacls = True self.contentFilters = [] if ( config.EnableResponseCache and config.Memcached.Pools.Default.ClientEnabled ): self.responseCache = MemcacheResponseCache(self.fp) # These class attributes need to be setup with our memcache\ # notifier DirectoryPrincipalResource.cacheNotifierFactory = MemcacheChangeNotifier CachingDelegates.cacheNotifier = MemcacheChangeNotifier(None, cacheHandle="PrincipalToken") else: self.responseCache = DisabledCache() if config.ResponseCompression: from txweb2.filter import gzip self.contentFilters.append((gzip.gzipfilter, True)) def deadProperties(self): if not hasattr(self, "_dead_properties"): # Get the property store from super deadProperties = ( namedClass(config.RootResourcePropStoreClass)(self) ) # Wrap the property store in a memory store if isinstance(deadProperties, xattrPropertyStore): deadProperties = CachingPropertyStore(deadProperties) self._dead_properties = deadProperties return self._dead_properties def defaultAccessControlList(self): return succeed(config.RootResourceACL) @inlineCallbacks def checkSACL(self, request): """ Check SACLs 
against the current request """ topLevel = request.path.strip("/").split("/")[0] saclServices = self.saclMap.get(topLevel, None) if not saclServices: returnValue(True) try: authnUser, authzUser = yield self.authenticate(request) except Exception: response = (yield UnauthorizedResponse.makeResponse( request.credentialFactories, request.remoteAddr )) raise HTTPError(response) # SACLs are enabled in the plist, but there may not actually # be a SACL group assigned to this service. Let's see if # unauthenticated users are allowed by calling CheckSACL # with an empty string. if authzUser is None: for saclService in saclServices: if checkSACL("", saclService): # No group actually exists for this SACL, so allow # unauthenticated access returnValue(True) # There is a SACL group for at least one of the SACLs, so no # unauthenticated access response = (yield UnauthorizedResponse.makeResponse( request.credentialFactories, request.remoteAddr )) log.info("Unauthenticated user denied by SACLs") raise HTTPError(response) # Cache the authentication details request.authnUser = authnUser request.authzUser = authzUser # Figure out the "username" from the davxml.Principal object username = authzUser.record.shortNames[0] access = False for saclService in saclServices: if checkSACL(username, saclService): # Access is allowed access = True break # Mark SACLs as having been checked so we can avoid doing it # multiple times request.checkedSACL = True if access: returnValue(True) log.warn( "User {user!r} is not enabled with the {sacl!r} SACL(s)", user=username, sacl=saclServices ) raise HTTPError(responsecode.FORBIDDEN) @inlineCallbacks def locateChild(self, request, segments): for filter in self.contentFilters: request.addResponseFilter(filter[0], atEnd=filter[1]) # Examine cookies for wiki auth token; if there, ask the paired wiki # server for the corresponding record name. If that maps to a # principal, assign that to authnuser. 
# Also, certain non-browser clients send along the wiki auth token # sometimes, so we now also look for the presence of x-requested-with # header that the webclient sends. However, in the case of a GET on # /webcal that header won't be sent so therefore we allow wiki auth # for any path in the authServiceMap even if that header is missing. allowWikiAuth = False topLevel = request.path.strip("/").split("/")[0] if self.authServiceMap.get(topLevel, False): allowWikiAuth = True if not hasattr(request, "checkedWiki"): # Only do this once per request request.checkedWiki = True wikiConfig = config.Authentication.Wiki cookies = request.headers.getHeader("cookie") requestedWith = request.headers.hasHeader("x-requested-with") if ( wikiConfig["Enabled"] and (requestedWith or allowWikiAuth) and cookies is not None ): for cookie in cookies: if cookie.name == wikiConfig["Cookie"]: token = cookie.value break else: token = None if token is not None and token != "unauthenticated": log.debug( "Wiki sessionID cookie value: {token}", token=token ) try: uid = yield uidForAuthToken(token, wikiConfig["EndpointDescriptor"]) if uid == "unauthenticated": uid = None except WebError as w: uid = None # FORBIDDEN status means it's an unknown token if int(w.status) == responsecode.NOT_FOUND: log.debug( "Unknown wiki token: {token}", token=token ) else: log.error( "Failed to look up wiki token {token}: {msg}", token=token, msg=w.message ) except Exception as e: log.error( "Failed to look up wiki token: {error}", error=e ) uid = None if uid is not None: log.debug( "Wiki lookup returned uid: {uid}", uid=uid ) principal = yield self.principalForUID(request, uid) if principal: log.debug( "Wiki-authenticated principal {uid} " "being assigned to authnUser and authzUser", uid=uid ) request.authzUser = request.authnUser = principal if not hasattr(request, "authzUser") and config.WebCalendarAuthPath: topLevel = request.path.strip("/").split("/")[0] if self.authServiceMap.get(topLevel, False): # We've not 
been authenticated and the auth service is enabled # for this resource, so redirect. # Use config.ServerHostName if no x-forwarded-host header, # otherwise use the final hostname in x-forwarded-host. host = request.headers.getRawHeaders( "x-forwarded-host", [config.ServerHostName] )[-1].split(",")[-1].strip() port = 443 if (config.EnableSSL or config.BehindTLSProxy) else 80 scheme = "https" if config.EnableSSL else "http" response = RedirectResponse( request.unparseURL( host=host, port=port, scheme=scheme, path=config.WebCalendarAuthPath, querystring="redirect={}://{}{}".format( scheme, host, request.path ) ), temporary=True ) raise HTTPError(response) # We don't want the /inbox resource to pay attention to SACLs because # we just want it to use the hard-coded ACL for the imip reply user. # The /timezones resource is used by the wiki web calendar, so open # up that resource. if segments[0] in ("inbox", "timezones"): request.checkedSACL = True elif ( ( len(segments) > 2 and segments[0] in ("calendars", "principals") and ( segments[1] == "wikis" or ( segments[1] == "__uids__" and segments[2].startswith(WikiDirectoryService.uidPrefix) ) ) ) ): # This is a wiki-related calendar resource. SACLs are not checked. request.checkedSACL = True # The authzuser value is set to that of the wiki principal if # not already set. 
if not hasattr(request, "authzUser") and segments[2]: wikiUid = None if segments[1] == "wikis": wikiUid = "{}{}".format(WikiDirectoryService.uidPrefix, segments[2]) else: wikiUid = segments[2] if wikiUid: log.debug( "Wiki principal {name} being assigned to authzUser", name=wikiUid ) request.authzUser = yield self.principalForUID(request, wikiUid) elif ( self.useSacls and not hasattr(request, "checkedSACL") ): yield self.checkSACL(request) if config.RejectClients: # # Filter out unsupported clients # agent = request.headers.getHeader("user-agent") if agent is not None: for reject in config.RejectClients: if reject.search(agent) is not None: log.info("Rejecting user-agent: {agent}", agent=agent) raise HTTPError(StatusResponse( responsecode.FORBIDDEN, "Your client software ({}) is not allowed to " "access this service." .format(agent) )) if not hasattr(request, "authnUser"): try: authnUser, authzUser = yield self.authenticate(request) request.authnUser = authnUser request.authzUser = authzUser except (UnauthorizedLogin, LoginFailed): response = yield UnauthorizedResponse.makeResponse( request.credentialFactories, request.remoteAddr ) raise HTTPError(response) if ( config.EnableResponseCache and request.method == "PROPFIND" and not getattr(request, "notInCache", False) and len(segments) > 1 ): try: if not getattr(request, "checkingCache", False): request.checkingCache = True response = yield self.responseCache.getResponseForRequest( request ) if response is None: request.notInCache = True raise KeyError("Not found in cache.") returnValue((_CachedResponseResource(response), [])) except KeyError: pass child = yield super(RootResource, self).locateChild( request, segments ) returnValue(child) @inlineCallbacks def principalForUID(self, request, uid): principal = None directory = request.site.resource.getDirectory() record = yield directory.recordWithUID(uid) if record is not None: username = record.shortNames[0] log.debug( "Wiki user record for user {user}: {record}", 
user=username, record=record ) for collection in self.principalCollections(): principal = yield collection.principalForRecord(record) if principal is not None: break returnValue(principal) def http_COPY(self, request): return responsecode.FORBIDDEN def http_MOVE(self, request): return responsecode.FORBIDDEN def http_DELETE(self, request): return responsecode.FORBIDDEN calendarserver-9.1+dfsg/calendarserver/provision/test/000077500000000000000000000000001315003562600233125ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/provision/test/__init__.py000066400000000000000000000011361315003562600254240ustar00rootroot00000000000000## # Copyright (c) 2009-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## calendarserver-9.1+dfsg/calendarserver/provision/test/test_root.py000066400000000000000000000252601315003562600257130ustar00rootroot00000000000000## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. ## from twisted.internet.defer import inlineCallbacks, maybeDeferred, returnValue from txweb2 import http_headers from txweb2 import responsecode from txdav.xml import element as davxml from txweb2.http import HTTPError from txweb2.iweb import IResponse from twistedcaldav.test.util import StoreTestCase, SimpleStoreRequest from twistedcaldav.directory.principal import DirectoryPrincipalProvisioningResource import calendarserver.provision.root # for patching checkSACL TEST_SACLS = { "calendar": [ "dreid" ], "addressbook": [ "dreid" ], } def stubCheckSACL(username, service): if service not in TEST_SACLS: return True if username in TEST_SACLS[service]: return True return False class RootTests(StoreTestCase): @inlineCallbacks def setUp(self): yield super(RootTests, self).setUp() self.patch(calendarserver.provision.root, "checkSACL", stubCheckSACL) class ComplianceTests(RootTests): """ Tests to verify CalDAV compliance of the root resource. """ @inlineCallbacks def issueRequest(self, segments, method="GET"): """ Get a resource from a particular path from the root URI, and return a Deferred which will fire with (something adaptable to) an HTTP response object. """ request = SimpleStoreRequest(self, method, ("/".join([""] + segments))) rsrc = self.actualRoot while segments: rsrc, segments = (yield maybeDeferred( rsrc.locateChild, request, segments )) result = yield rsrc.renderHTTP(request) returnValue(result) @inlineCallbacks def test_optionsIncludeCalendar(self): """ OPTIONS request should include a DAV header that mentions the addressbook capability. """ response = yield self.issueRequest([""], "OPTIONS") self.assertIn("addressbook", response.headers.getHeader("DAV")) class SACLTests(RootTests): @inlineCallbacks def test_noSacls(self): """ Test the behaviour of locateChild when SACLs are not enabled. 
should return a valid resource """ self.actualRoot.useSacls = False request = SimpleStoreRequest(self, "GET", "/principals/") resrc, segments = (yield maybeDeferred( self.actualRoot.locateChild, request, ["principals"] )) self.failUnless( isinstance(resrc, DirectoryPrincipalProvisioningResource), "Did not get a DirectoryPrincipalProvisioningResource: %s" % (resrc,) ) self.assertEquals(segments, []) @inlineCallbacks def test_inSacls(self): """ Test the behavior of locateChild when SACLs are enabled and the user is in the SACL group should return a valid resource """ self.actualRoot.useSacls = True principal = yield self.actualRoot.findPrincipalForAuthID("dreid") request = SimpleStoreRequest( self, "GET", "/principals/", authPrincipal=principal ) resrc, segments = (yield maybeDeferred( self.actualRoot.locateChild, request, ["principals"] )) self.failUnless( isinstance(resrc, DirectoryPrincipalProvisioningResource), "Did not get a DirectoryPrincipalProvisioningResource: %s" % (resrc,) ) self.assertEquals(segments, []) self.assertEquals( request.authzUser.principalElement(), davxml.Principal( davxml.HRef( "/principals/__uids__/5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1/" ) ) ) @inlineCallbacks def test_notInSacls(self): """ Test the behavior of locateChild when SACLs are enabled and the user is not in the SACL group should return a 403 forbidden response """ self.actualRoot.useSacls = True principal = yield self.actualRoot.findPrincipalForAuthID("wsanchez") request = SimpleStoreRequest( self, "GET", "/principals/", authPrincipal=principal ) try: _ignore_resrc, _ignore_segments = (yield maybeDeferred( self.actualRoot.locateChild, request, ["principals"] )) raise AssertionError( "RootResource.locateChild did not return an error" ) except HTTPError, e: self.assertEquals(e.response.code, 403) @inlineCallbacks def test_unauthenticated(self): """ Test the behavior of locateChild when SACLs are enabled and the request is unauthenticated should return a 401 UnauthorizedResponse """ 
self.actualRoot.useSacls = True request = SimpleStoreRequest( self, "GET", "/principals/" ) try: _ignore_resrc, _ignore_segments = (yield maybeDeferred( self.actualRoot.locateChild, request, ["principals"] )) raise AssertionError( "RootResource.locateChild did not return an error" ) except HTTPError, e: self.assertEquals(e.response.code, 401) @inlineCallbacks def test_badCredentials(self): """ Test the behavior of locateChild when SACLS are enabled, and incorrect credentials are given. should return a 401 UnauthorizedResponse """ self.actualRoot.useSacls = True request = SimpleStoreRequest( self, "GET", "/principals/", headers=http_headers.Headers( { "Authorization": [ "basic", "%s" % ("dreid:dreid".encode("base64"),) ] } ) ) try: _ignore_resrc, _ignore_segments = (yield maybeDeferred( self.actualRoot.locateChild, request, ["principals"] )) raise AssertionError( "RootResource.locateChild did not return an error" ) except HTTPError, e: self.assertEquals(e.response.code, 401) def test_DELETE(self): def do_test(response): response = IResponse(response) if response.code != responsecode.FORBIDDEN: self.fail("Incorrect response for DELETE /: %s" % (response.code,)) request = SimpleStoreRequest(self, "DELETE", "/") return self.send(request, do_test) def test_COPY(self): def do_test(response): response = IResponse(response) if response.code != responsecode.FORBIDDEN: self.fail("Incorrect response for COPY /: %s" % (response.code,)) request = SimpleStoreRequest( self, "COPY", "/", headers=http_headers.Headers({"Destination": "/copy/"}) ) return self.send(request, do_test) def test_MOVE(self): def do_test(response): response = IResponse(response) if response.code != responsecode.FORBIDDEN: self.fail("Incorrect response for MOVE /: %s" % (response.code,)) request = SimpleStoreRequest( self, "MOVE", "/", headers=http_headers.Headers({"Destination": "/copy/"}) ) return self.send(request, do_test) class SACLCacheTests(RootTests): class StubResponseCacheResource(object): def 
__init__(self): self.cache = {} self.responseCache = self self.cacheHitCount = 0 def getResponseForRequest(self, request): if str(request) in self.cache: self.cacheHitCount += 1 return self.cache[str(request)] def cacheResponseForRequest(self, request, response): self.cache[str(request)] = response return response @inlineCallbacks def setUp(self): yield super(SACLCacheTests, self).setUp() self.actualRoot.responseCache = SACLCacheTests.StubResponseCacheResource() @inlineCallbacks def test_PROPFIND(self): self.actualRoot.useSacls = True body = """ """ principal = yield self.actualRoot.findPrincipalForAuthID("dreid") request = SimpleStoreRequest( self, "PROPFIND", "/principals/users/dreid/", headers=http_headers.Headers({ 'Depth': '1', }), authPrincipal=principal, content=body ) response = yield self.send(request) response = IResponse(response) if response.code != responsecode.MULTI_STATUS: self.fail("Incorrect response for PROPFIND /principals/: %s" % (response.code,)) request = SimpleStoreRequest( self, "PROPFIND", "/principals/users/dreid/", headers=http_headers.Headers({ 'Depth': '1', }), authPrincipal=principal, content=body ) response = yield self.send(request) response = IResponse(response) if response.code != responsecode.MULTI_STATUS: self.fail("Incorrect response for PROPFIND /principals/: %s" % (response.code,)) self.assertEqual(self.actualRoot.responseCache.cacheHitCount, 1) class WikiTests(RootTests): @inlineCallbacks def test_oneTime(self): """ Make sure wiki auth lookup is only done once per request; request.checkedWiki will be set to True """ request = SimpleStoreRequest(self, "GET", "/principals/") _ignore_resrc, _ignore_segments = (yield maybeDeferred( self.actualRoot.locateChild, request, ["principals"] )) self.assertTrue(request.checkedWiki) 
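The SACL tests above are all driven through `stubCheckSACL`, which the suite patches in place of the real `checkSACL`: a service with no SACL configured is open to everyone, otherwise only listed members get through. A self-contained sketch of that policy (the `TEST_SACLS` table and user names are test fixtures from this suite, not the production SACL mechanism):

```python
# Test fixture mapping service names to their allowed members,
# as used by the SACL tests in test_root.py.
TEST_SACLS = {
    "calendar": ["dreid"],
    "addressbook": ["dreid"],
}


def stubCheckSACL(username, service):
    # A service with no SACL group configured is open to everyone.
    if service not in TEST_SACLS:
        return True
    # Otherwise only listed members may use the service.
    return username in TEST_SACLS[service]


print(stubCheckSACL("dreid", "calendar"))     # True: member of the group
print(stubCheckSACL("wsanchez", "calendar"))  # False: not a member (403 in the tests)
print(stubCheckSACL("anybody", "wiki"))       # True: no SACL for "wiki"
```

This is exactly the shape of check `RootResource.checkSACL` performs per request, including the "empty group means unauthenticated access is allowed" probe with an empty username.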
calendarserver-9.1+dfsg/calendarserver/push/000077500000000000000000000000001315003562600212625ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/push/__init__.py000066400000000000000000000012161315003562600233730ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ CalendarServer push notification code. """ calendarserver-9.1+dfsg/calendarserver/push/amppush.py000066400000000000000000000243071315003562600233170ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from calendarserver.push.ipush import PushPriority from calendarserver.push.util import PushScheduler from twext.python.log import Logger from twisted.internet.defer import inlineCallbacks, returnValue from twisted.internet.endpoints import TCP4ClientEndpoint from twisted.internet.protocol import Factory, ServerFactory from twisted.protocols import amp import time import uuid log = Logger() # Control socket message-routing constants PUSH_ROUTE = "push" # AMP Commands sent to server class SubscribeToID(amp.Command): arguments = [('token', amp.String()), ('id', amp.String())] response = [('status', amp.String())] class UnsubscribeFromID(amp.Command): arguments = [('id', amp.String())] response = [('status', amp.String())] # AMP Commands sent to client (and forwarded to Master) class NotificationForID(amp.Command): arguments = [('id', amp.String()), ('dataChangedTimestamp', amp.Integer(optional=True)), ('priority', amp.Integer(optional=True))] response = [('status', amp.String())] # Server classes class AMPPushForwardingFactory(Factory): log = Logger() def __init__(self, forwarder): self.forwarder = forwarder def buildProtocol(self, addr): protocol = amp.AMP() self.forwarder.protocols.append(protocol) return protocol class AMPPushForwarder(object): """ Runs in the slaves, forwards notifications to the master via AMP """ log = Logger() def __init__(self, controlSocket): self.protocols = [] controlSocket.addFactory(PUSH_ROUTE, AMPPushForwardingFactory(self)) @inlineCallbacks def enqueue( self, transaction, id, dataChangedTimestamp=None, priority=PushPriority.high ): if dataChangedTimestamp is None: dataChangedTimestamp = int(time.time()) for protocol in self.protocols: yield protocol.callRemote( NotificationForID, id=id, dataChangedTimestamp=dataChangedTimestamp, priority=priority.value) class AMPPushMasterListeningProtocol(amp.AMP): """ Listens for notifications coming in over AMP from the slaves """ log = Logger() def __init__(self, master): 
super(AMPPushMasterListeningProtocol, self).__init__() self.master = master @NotificationForID.responder def enqueueFromWorker( self, id, dataChangedTimestamp=None, priority=PushPriority.high.value ): if dataChangedTimestamp is None: dataChangedTimestamp = int(time.time()) self.master.enqueue( None, id, dataChangedTimestamp=dataChangedTimestamp, priority=PushPriority.lookupByValue(priority)) return {"status": "OK"} class AMPPushMasterListenerFactory(Factory): log = Logger() def __init__(self, master): self.master = master def buildProtocol(self, addr): protocol = AMPPushMasterListeningProtocol(self.master) return protocol class AMPPushMaster(object): """ AMPPushNotifierService allows clients to use AMP to subscribe to, and receive, change notifications. """ log = Logger() def __init__( self, controlSocket, parentService, port, enableStaggering, staggerSeconds, reactor=None ): if reactor is None: from twisted.internet import reactor from twisted.application.strports import service as strPortsService if port: # Service which listens for client subscriptions and sends # notifications to them strPortsService( str(port), AMPPushNotifierFactory(self), reactor=reactor).setServiceParent(parentService) if controlSocket is not None: # Set up the listener which gets notifications from the slaves controlSocket.addFactory( PUSH_ROUTE, AMPPushMasterListenerFactory(self) ) self.subscribers = [] if enableStaggering: self.scheduler = PushScheduler( reactor, self.sendNotification, staggerSeconds=staggerSeconds) else: self.scheduler = None def addSubscriber(self, p): self.log.debug("Added subscriber") self.subscribers.append(p) def removeSubscriber(self, p): self.log.debug("Removed subscriber") self.subscribers.remove(p) def enqueue( self, transaction, pushKey, dataChangedTimestamp=None, priority=PushPriority.high ): """ Sends an AMP push notification to any clients subscribing to this pushKey. 
@param pushKey: The identifier of the resource that was updated, including a prefix indicating whether this is CalDAV or CardDAV related. "/CalDAV/abc/def/" @type pushKey: C{str} @param dataChangedTimestamp: Timestamp (epoch seconds) for the data change which triggered this notification (Only used for unit tests) @type key: C{int} """ # Unit tests can pass this value in; otherwise it defaults to now if dataChangedTimestamp is None: dataChangedTimestamp = int(time.time()) tokens = [] for subscriber in self.subscribers: token = subscriber.subscribedToID(pushKey) if token is not None: tokens.append(token) if tokens: return self.scheduleNotifications( tokens, pushKey, dataChangedTimestamp, priority) @inlineCallbacks def sendNotification(self, token, id, dataChangedTimestamp, priority): for subscriber in self.subscribers: if subscriber.subscribedToID(id): yield subscriber.notify( token, id, dataChangedTimestamp, priority) @inlineCallbacks def scheduleNotifications(self, tokens, id, dataChangedTimestamp, priority): if self.scheduler is not None: self.scheduler.schedule(tokens, id, dataChangedTimestamp, priority) else: for token in tokens: yield self.sendNotification( token, id, dataChangedTimestamp, priority) class AMPPushNotifierProtocol(amp.AMP): log = Logger() def __init__(self, service): super(AMPPushNotifierProtocol, self).__init__() self.service = service self.subscriptions = {} self.any = None def subscribe(self, token, id): if id == "any": self.any = token else: self.subscriptions[id] = token return {"status": "OK"} SubscribeToID.responder(subscribe) def unsubscribe(self, id): try: del self.subscriptions[id] except KeyError: pass return {"status": "OK"} UnsubscribeFromID.responder(unsubscribe) def notify(self, token, id, dataChangedTimestamp, priority): if self.subscribedToID(id) == token: self.log.debug("Sending notification for {id} to {token}", id=id, token=token) return self.callRemote( NotificationForID, id=id, dataChangedTimestamp=dataChangedTimestamp, 
priority=priority.value) def subscribedToID(self, id): if self.any is not None: return self.any return self.subscriptions.get(id, None) def connectionLost(self, reason=None): self.service.removeSubscriber(self) class AMPPushNotifierFactory(ServerFactory): log = Logger() protocol = AMPPushNotifierProtocol def __init__(self, service): self.service = service def buildProtocol(self, addr): p = self.protocol(self.service) self.service.addSubscriber(p) p.service = self.service return p # Client classes class AMPPushClientProtocol(amp.AMP): """ Implements the client side of the AMP push protocol. Whenever the NotificationForID Command arrives, the registered callback will be called with the id. """ def __init__(self, callback): super(AMPPushClientProtocol, self).__init__() self.callback = callback @inlineCallbacks def notificationForID(self, id, dataChangedTimestamp, priority): yield self.callback(id, dataChangedTimestamp, PushPriority.lookupByValue(priority)) returnValue({"status": "OK"}) NotificationForID.responder(notificationForID) class AMPPushClientFactory(Factory): log = Logger() protocol = AMPPushClientProtocol def __init__(self, callback): self.callback = callback def buildProtocol(self, addr): p = self.protocol(self.callback) return p # Client helper methods @inlineCallbacks def subscribeToIDs(host, port, ids, callback, reactor=None): """ Clients can call this helper method to register a callback which will get called whenever a push notification is fired for any id in the ids list. @param host: AMP host name to connect to @type host: string @param port: AMP port to connect to @type port: integer @param ids: The push IDs to subscribe to @type ids: list of strings @param callback: The method to call whenever a notification is received. 
@type callback: callable which is passed an id (string) """ if reactor is None: from twisted.internet import reactor token = str(uuid.uuid4()) endpoint = TCP4ClientEndpoint(reactor, host, port) factory = AMPPushClientFactory(callback) protocol = yield endpoint.connect(factory) for id in ids: yield protocol.callRemote(SubscribeToID, token=token, id=id) returnValue(factory) calendarserver-9.1+dfsg/calendarserver/push/applepush.py000066400000000000000000001055621315003562600236460ustar00rootroot00000000000000# -*- test-case-name: calendarserver.push.test.test_applepush -*- ## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
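Both the AMP and APNS services route notifications by push keys of the form "/CalDAV/abc/def/". A minimal sketch of pulling out the protocol segment the way the `enqueue()` methods do with `pushKey.split("/")[1]` (the helper name is hypothetical, not part of the server):

```python
# Hypothetical helper, not part of calendarserver: extract the protocol
# segment ("CalDAV" or "CardDAV") from a push key.  A well-formed key
# starts with "/", so split("/") yields an empty first element and the
# protocol as the second.
def protocolFromPushKey(pushKey):
    parts = pushKey.split("/")
    if len(parts) < 2 or not parts[1]:
        return None  # no protocol segment; caller can log and bail out
    return parts[1]
```

For example, `protocolFromPushKey("/CalDAV/abc/def/")` yields `"CalDAV"`, while a key with no slashes yields `None` instead of raising.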
## from twext.internet.ssl import ChainingOpenSSLContextFactory from twext.python.log import Logger from twext.enterprise.dal.record import fromTable from twext.enterprise.jobs.workitem import RegeneratingWorkItem from txdav.common.datastore.sql_tables import schema from txweb2 import responsecode from txdav.xml import element as davxml from txweb2.dav.noneprops import NonePropertyStore from txweb2.http import Response from txweb2.http_headers import MimeType from txweb2.server import parsePOSTData from twisted.application import service from twisted.internet.protocol import Protocol from twisted.internet.defer import inlineCallbacks, returnValue, succeed from twisted.internet.protocol import ClientFactory, ReconnectingClientFactory from twistedcaldav.extensions import DAVResource, DAVResourceWithoutChildrenMixin from twistedcaldav.resource import ReadOnlyNoCopyResourceMixIn import json import OpenSSL import struct import time from txdav.common.icommondatastore import InvalidSubscriptionValues from calendarserver.push.ipush import PushPriority from calendarserver.push.util import validToken, TokenHistory, PushScheduler from twext.internet.adaptendpoint import connect from twext.internet.gaiendpoint import GAIEndpoint from twisted.python.constants import Values, ValueConstant log = Logger() class ApplePushPriority(Values): """ Maps calendarserver.push.util.PushPriority values to APNS-specific values """ low = ValueConstant(PushPriority.low.value) medium = ValueConstant(PushPriority.medium.value) high = ValueConstant(PushPriority.high.value) class ApplePushNotifierService(service.MultiService): """ ApplePushNotifierService is a MultiService responsible for setting up the APN provider and feedback connections. Once connected, calling its enqueue( ) method sends notifications to any device token which is subscribed to the enqueued key. 
The Apple Push Notification protocol is described here: https://developer.apple.com/library/ios/documentation/NetworkingInternet/Conceptual/RemoteNotificationsPG/Chapters/CommunicatingWIthAPS.html """ log = Logger() @classmethod def makeService( cls, settings, store, testConnectorClass=None, reactor=None ): """ Creates the various "subservices" that work together to implement APN, including "provider" and "feedback" services for CalDAV and CardDAV. @param settings: The portion of the configuration specific to APN @type settings: C{dict} @param store: The db store for storing/retrieving subscriptions @type store: L{IDataStore} @param testConnectorClass: Used for unit testing; implements connect( ) and receiveData( ) @type testConnectorClass: C{class} @param reactor: Used for unit testing; allows tests to advance the clock in order to test the feedback polling service. @type reactor: L{twisted.internet.task.Clock} @return: instance of L{ApplePushNotifierService} """ service = cls() service.store = store service.providers = {} service.feedbacks = {} service.purgeCall = None service.purgeIntervalSeconds = settings["SubscriptionPurgeIntervalSeconds"] service.purgeSeconds = settings["SubscriptionPurgeSeconds"] for protocol in ("CalDAV", "CardDAV"): if settings[protocol].Enabled: providerTestConnector = None feedbackTestConnector = None if testConnectorClass is not None: providerTestConnector = testConnectorClass() feedbackTestConnector = testConnectorClass() provider = APNProviderService( service.store, settings["ProviderHost"], settings["ProviderPort"], settings[protocol]["CertificatePath"], settings[protocol]["PrivateKeyPath"], chainPath=settings[protocol]["AuthorityChainPath"], passphrase=settings[protocol]["Passphrase"], keychainIdentity=settings[protocol]["KeychainIdentity"], staggerNotifications=settings["EnableStaggering"], staggerSeconds=settings["StaggerSeconds"], testConnector=providerTestConnector, reactor=reactor, ) provider.setServiceParent(service) 
service.providers[protocol] = provider service.log.info( "APNS {proto} topic: {topic}", proto=protocol, topic=settings[protocol]["Topic"] ) feedback = APNFeedbackService( service.store, settings["FeedbackUpdateSeconds"], settings["FeedbackHost"], settings["FeedbackPort"], settings[protocol]["CertificatePath"], settings[protocol]["PrivateKeyPath"], chainPath=settings[protocol]["AuthorityChainPath"], passphrase=settings[protocol]["Passphrase"], keychainIdentity=settings[protocol]["KeychainIdentity"], testConnector=feedbackTestConnector, reactor=reactor, ) feedback.setServiceParent(service) service.feedbacks[protocol] = feedback return service def startService(self): """ In addition to starting the provider and feedback sub-services, start a LoopingCall whose job it is to purge old subscriptions """ self.log.debug("ApplePushNotifierService startService") APNPurgingWork.purgeSeconds = self.purgeSeconds APNPurgingWork.purgeIntervalSeconds = self.purgeIntervalSeconds service.MultiService.startService(self) def stopService(self): """ In addition to stopping the provider and feedback sub-services, stop the LoopingCall """ self.log.debug("ApplePushNotifierService stopService") service.MultiService.stopService(self) @inlineCallbacks def enqueue( self, transaction, pushKey, dataChangedTimestamp=None, priority=PushPriority.high ): """ Sends an Apple Push Notification to any device token subscribed to this pushKey. @param pushKey: The identifier of the resource that was updated, including a prefix indicating whether this is CalDAV or CardDAV related. 
"/CalDAV/abc/def/" @type pushKey: C{str} @param dataChangedTimestamp: Timestamp (epoch seconds) for the data change which triggered this notification (Only used for unit tests) @type key: C{int} @param priority: the priority level @type priority: L{PushPriority} """ try: protocol = pushKey.split("/")[1] except ValueError: # pushKey has no protocol, so we can't do anything with it self.log.error("Push key '{key}' is missing protocol", key=pushKey) return # Unit tests can pass this value in; otherwise it defaults to now if dataChangedTimestamp is None: dataChangedTimestamp = int(time.time()) provider = self.providers.get(protocol, None) if provider is not None: # Look up subscriptions for this key subscriptions = (yield transaction.apnSubscriptionsByKey(pushKey)) numSubscriptions = len(subscriptions) if numSubscriptions > 0: self.log.debug( "Sending {num} APNS notifications for {key}", num=numSubscriptions, key=pushKey ) tokens = [record.token for record in subscriptions if record.token and record.subscriberGUID] if tokens: provider.scheduleNotifications( tokens, pushKey, dataChangedTimestamp, priority) class APNProviderProtocol(Protocol): """ Implements the Provider portion of APNS """ log = Logger() # Sent by provider COMMAND_PROVIDER = 2 # Received by provider COMMAND_ERROR = 8 # Returned only for an error. Successful notifications get no response. 
STATUS_CODES = { 0: "No errors encountered", 1: "Processing error", 2: "Missing device token", 3: "Missing topic", 4: "Missing payload", 5: "Invalid token size", 6: "Invalid topic size", 7: "Invalid payload size", 8: "Invalid token", 255: "None (unknown)", } # If error code comes back as one of these, remove the associated device # token TOKEN_REMOVAL_CODES = (5, 8) MESSAGE_LENGTH = 6 def makeConnection(self, transport): self.history = TokenHistory() self.log.debug("ProviderProtocol makeConnection") Protocol.makeConnection(self, transport) def connectionMade(self): self.log.debug("ProviderProtocol connectionMade") self.buffer = "" # Store a reference to ourself on the factory so the service can # later call us self.factory.connection = self self.factory.clientConnectionMade() def connectionLost(self, reason=None): # self.log.debug("ProviderProtocol connectionLost: {reason}", reason=reason) # Clear the reference to us from the factory self.factory.connection = None @inlineCallbacks def dataReceived(self, data, fn=None): """ Buffer and divide up received data into error messages which are always 6 bytes long """ if fn is None: fn = self.processError self.log.debug("ProviderProtocol dataReceived {len} bytes", len=len(data)) self.buffer += data while len(self.buffer) >= self.MESSAGE_LENGTH: message = self.buffer[:self.MESSAGE_LENGTH] self.buffer = self.buffer[self.MESSAGE_LENGTH:] try: command, status, identifier = struct.unpack("!BBI", message) if command == self.COMMAND_ERROR: yield fn(status, identifier) except Exception, e: self.log.warn( "ProviderProtocol could not process error: {code} ({ex})", code=message.encode("hex"), ex=e ) @inlineCallbacks def processError(self, status, identifier): """ Handles an error message we've received on the provider channel. If the error code is one that indicates a bad token, remove all subscriptions corresponding to that token. 
        @param status: The status value returned from APN Feedback server
        @type status: C{int}

        @param identifier: The identifier of the outbound push notification
            message which had a problem.
        @type identifier: C{int}
        """
        msg = self.STATUS_CODES.get(status, "Unknown status code")
        self.log.info("Received APN error {status} on identifier {id}: {msg}",
                      status=status, id=identifier, msg=msg)
        if status in self.TOKEN_REMOVAL_CODES:
            token = self.history.extractIdentifier(identifier)
            if token is not None:
                self.log.debug(
                    "Removing subscriptions for bad token: {token}",
                    token=token,
                )
                txn = self.factory.store.newTransaction(
                    label="APNProviderProtocol.processError")
                subscriptions = (yield txn.apnSubscriptionsByToken(token))
                for record in subscriptions:
                    self.log.debug(
                        "Removing subscription: {token} {key}",
                        token=token, key=record.resourceKey
                    )
                    yield txn.removeAPNSubscription(token, record.resourceKey)
                yield txn.commit()

    def sendNotification(self, token, key, dataChangedTimestamp, priority):
        """
        Sends a push notification message for the key to the device associated
        with the token.
        @param token: The device token subscribed to the key
        @type token: C{str}
        @param key: The key we're sending a notification about
        @type key: C{str}
        @param dataChangedTimestamp: Timestamp (epoch seconds) for the data
            change which triggered this notification
        @type dataChangedTimestamp: C{int}
        """
        if not (token and key and dataChangedTimestamp):
            return

        try:
            binaryToken = token.replace(" ", "").decode("hex")
        except:
            self.log.error("Invalid APN token in database: {token}",
                           token=token)
            return

        identifier = self.history.add(token)
        apnsPriority = ApplePushPriority.lookupByValue(priority.value).value
        payload = json.dumps(
            {
                "key": key,
                "dataChangedTimestamp": dataChangedTimestamp,
                "pushRequestSubmittedTimestamp": int(time.time()),
            }
        )
        payloadLength = len(payload)
        self.log.debug(
            "Sending APNS notification to {token}: id={id} payload={payload} priority={priority}",
            token=token, id=identifier, payload=payload,
            priority=apnsPriority)

        """
        Notification format

        Top level:  Command (1 byte), Frame length (4 bytes), Frame data
            (variable)
        Within Frame data:  Item ...
        Item:  Item number (1 byte), Item data length (2 bytes), Item data
            (variable)
        Item 1: Device token (32 bytes)
        Item 2: Payload (variable length) in JSON format, not null-terminated
        Item 3: Notification ID (4 bytes) an opaque value used for reporting
            errors
        Item 4: Expiration date (4 bytes) UNIX epoch in seconds UTC
        Item 5: Priority (1 byte): 10 (push sent immediately) or 5 (push sent
            at a time that conserves power on the device receiving it)
        """

        # Frame struct.pack format:
        # !  Network byte order
        command = self.COMMAND_PROVIDER                      # B
        frameLength = (                                      # I
            # Item 1 (Device token)
            1 +              # Item number                   # B
            2 +              # Item length                   # H
            32 +             # device token                  # 32s
            # Item 2 (Payload)
            1 +              # Item number                   # B
            2 +              # Item length                   # H
            payloadLength +  # the JSON payload              # %d s
            # Item 3 (Notification ID)
            1 +              # Item number                   # B
            2 +              # Item length                   # H
            4 +              # Notification ID               # I
            # Item 4 (Expiration)
            1 +              # Item number                   # B
            2 +              # Item length                   # H
            4 +              # Expiration seconds since epoch  # I
            # Item 5 (Priority)
            1 +              # Item number                   # B
            2 +              # Item length                   # H
            1                # Priority                      # B
        )
        self.transport.write(
            struct.pack(
                "!BIBH32sBH%dsBHIBHIBHB" % (payloadLength,),
                command,         # Command
                frameLength,     # Frame length
                1,               # Item 1 (Device token)
                32,              # Token Length
                binaryToken,     # Token
                2,               # Item 2 (Payload)
                payloadLength,   # Payload length
                payload,         # Payload
                3,               # Item 3 (Notification ID)
                4,               # Notification ID Length
                identifier,      # Notification ID
                4,               # Item 4 (Expiration)
                4,               # Expiration length
                int(time.time()) + 72 * 60 * 60,  # Expires in 72 hours
                5,               # Item 5 (Priority)
                1,               # Priority length
                apnsPriority,    # Priority
            )
        )


class APNProviderFactory(ReconnectingClientFactory):

    log = Logger()

    protocol = APNProviderProtocol

    def __init__(self, service, store):
        self.service = service
        self.store = store
        self.noisy = True
        self.maxDelay = 30  # max seconds between connection attempts
        self.shuttingDown = False

    def clientConnectionMade(self):
        self.log.info("Connection to APN server made")
        self.service.clientConnectionMade()
        self.delay = 1.0

    def clientConnectionLost(self, connector, reason):
        if not self.shuttingDown:
            self.log.error("Connection to APN server lost: {reason}",
                           reason=reason)
            if reason.type == OpenSSL.SSL.Error:
                # If we're failing due to a certificate issue, stop retrying.
self.log.error("Ensure APNS certificate is not expired") ReconnectingClientFactory.stopTrying(self) ReconnectingClientFactory.clientConnectionLost(self, connector, reason) def clientConnectionFailed(self, connector, reason): self.log.error("Unable to connect to APN server: {reason}", reason=reason) self.connected = False ReconnectingClientFactory.clientConnectionFailed( self, connector, reason) def retry(self, connector=None): self.log.info("Reconnecting to APN server") ReconnectingClientFactory.retry(self, connector) def stopTrying(self): self.shuttingDown = True ReconnectingClientFactory.stopTrying(self) class APNConnectionService(service.Service): log = Logger() def __init__( self, host, port, certPath, keyPath, chainPath="", passphrase="", keychainIdentity="", sslMethod="TLSv1_METHOD", testConnector=None, reactor=None ): self.host = host self.port = port self.certPath = certPath self.keyPath = keyPath self.chainPath = chainPath self.passphrase = passphrase self.keychainIdentity = keychainIdentity self.sslMethod = sslMethod self.testConnector = testConnector if reactor is None: from twisted.internet import reactor self.reactor = reactor def connect(self, factory): if self.testConnector is not None: # For testing purposes self.testConnector.connect(self, factory) else: if self.passphrase: passwdCallback = lambda *ignored: self.passphrase else: passwdCallback = None context = ChainingOpenSSLContextFactory( self.keyPath, self.certPath, certificateChainFile=self.chainPath, passwdCallback=passwdCallback, keychainIdentity=self.keychainIdentity, sslmethod=getattr(OpenSSL.SSL, self.sslMethod), peerName=self.host, ) connect(GAIEndpoint(self.reactor, self.host, self.port, context), factory) class APNProviderService(APNConnectionService): def __init__( self, store, host, port, certPath, keyPath, chainPath="", passphrase="", keychainIdentity="", sslMethod="TLSv1_METHOD", staggerNotifications=False, staggerSeconds=3, testConnector=None, reactor=None ): 
APNConnectionService.__init__( self, host, port, certPath, keyPath, chainPath=chainPath, passphrase=passphrase, keychainIdentity=keychainIdentity, sslMethod=sslMethod, testConnector=testConnector, reactor=reactor) self.store = store self.factory = None self.queue = [] if staggerNotifications: self.scheduler = PushScheduler( self.reactor, self.sendNotification, staggerSeconds=staggerSeconds) else: self.scheduler = None def startService(self): self.log.debug("APNProviderService startService") self.factory = APNProviderFactory(self, self.store) self.reactor.callWhenRunning(self.connect, self.factory) def stopService(self): self.log.debug("APNProviderService stopService") if self.factory is not None: self.factory.stopTrying() if self.scheduler is not None: self.scheduler.stop() def clientConnectionMade(self): # Service the queue if self.queue: # Copy and clear the queue. Any notifications that don't get # sent will be put back into the queue. queued = list(self.queue) self.queue = [] for (token, key), dataChangedTimestamp, priority in queued: if token and key and dataChangedTimestamp and priority: self.sendNotification( token, key, dataChangedTimestamp, priority) def scheduleNotifications(self, tokens, key, dataChangedTimestamp, priority): """ The starting point for getting notifications to the APNS server. If there is a connection to the APNS server, these notifications are scheduled (or directly sent if there is no scheduler). If there is no connection, the notifications are saved for later. 
@param tokens: The device tokens to schedule notifications for @type tokens: List of strings @param key: The key to use for this batch of notifications @type key: String @param dataChangedTimestamp: Timestamp (epoch seconds) for the data change which triggered this notification @type key: C{int} """ # Service has reference to factory has reference to protocol instance connection = getattr(self.factory, "connection", None) if connection is not None: if self.scheduler is not None: self.scheduler.schedule(tokens, key, dataChangedTimestamp, priority) else: for token in tokens: self.sendNotification(token, key, dataChangedTimestamp, priority) else: self._saveForWhenConnected(tokens, key, dataChangedTimestamp, priority) def _saveForWhenConnected(self, tokens, key, dataChangedTimestamp, priority): """ Called in order to save notifications that can't be sent now because there is no connection to the APNS server. (token, key) tuples are appended to the queue which is serviced during clientConnectionMade() @param tokens: The device tokens to schedule notifications for @type tokens: List of C{str} @param key: The key to use for this batch of notifications @type key: C{str} @param dataChangedTimestamp: Timestamp (epoch seconds) for the data change which triggered this notification @type key: C{int} """ for token in tokens: tokenKeyPair = (token, key) for existingPair, _ignore_timstamp, priority in self.queue: if tokenKeyPair == existingPair: self.log.debug("APNProviderService has no connection; skipping duplicate: {token} {key}", token=token, key=key) break # Already scheduled else: self.log.debug("APNProviderService has no connection; queuing: {token} {key}", token=token, key=key) self.queue.append(((token, key), dataChangedTimestamp, priority)) def sendNotification(self, token, key, dataChangedTimestamp, priority): """ If there is a connection the notification is sent right away, otherwise the notification is saved for later. 
        @param token: The device token to send a notification to
        @type token: C{str}
        @param key: The key to use for this notification
        @type key: C{str}
        @param dataChangedTimestamp: Timestamp (epoch seconds) for the data
            change which triggered this notification
        @type dataChangedTimestamp: C{int}
        """
        if not (token and key and dataChangedTimestamp and priority):
            return

        # Service has reference to factory has reference to protocol instance
        connection = getattr(self.factory, "connection", None)
        if connection is None:
            self._saveForWhenConnected([token], key, dataChangedTimestamp,
                                       priority)
        else:
            connection.sendNotification(token, key, dataChangedTimestamp,
                                        priority)


class APNFeedbackProtocol(Protocol):
    """
    Implements the Feedback portion of APNS
    """
    log = Logger()

    MESSAGE_LENGTH = 38

    def connectionMade(self):
        self.log.debug("FeedbackProtocol connectionMade")
        self.buffer = ""

    @inlineCallbacks
    def dataReceived(self, data, fn=None):
        """
        Buffer and divide up received data into feedback messages
        which are always 38 bytes long
        """

        if fn is None:
            fn = self.processFeedback

        self.log.debug("FeedbackProtocol dataReceived {len} bytes",
                       len=len(data))
        self.buffer += data

        while len(self.buffer) >= self.MESSAGE_LENGTH:
            message = self.buffer[:self.MESSAGE_LENGTH]
            self.buffer = self.buffer[self.MESSAGE_LENGTH:]

            try:
                timestamp, _ignore_tokenLength, binaryToken = struct.unpack(
                    "!IH32s", message)
                token = binaryToken.encode("hex").lower()
                yield fn(timestamp, token)
            except Exception, e:
                self.log.warn(
                    "FeedbackProtocol could not process message: {code} ({ex})",
                    code=message.encode("hex"), ex=e
                )

    @inlineCallbacks
    def processFeedback(self, timestamp, token):
        """
        Handles a feedback message indicating that the given token is no
        longer active as of the timestamp, and its subscription should be
        removed as long as that device has not re-subscribed since the
        timestamp.
@param timestamp: Seconds since the epoch @type timestamp: C{int} @param token: The device token to unsubscribe @type token: C{str} """ self.log.debug( "FeedbackProtocol processFeedback time={time} token={token}", time=timestamp, token=token ) txn = self.factory.store.newTransaction(label="APNFeedbackProtocol.processFeedback") subscriptions = (yield txn.apnSubscriptionsByToken(token)) for record in subscriptions: if timestamp > record.modified: self.log.debug( "FeedbackProtocol removing subscription: {token} {key}", token=token, key=record.resourceKey, ) yield txn.removeAPNSubscription(token, record.resourceKey) yield txn.commit() class APNFeedbackFactory(ClientFactory): log = Logger() protocol = APNFeedbackProtocol def __init__(self, store): self.store = store def clientConnectionFailed(self, connector, reason): self.log.error( "Unable to connect to APN feedback server: {reason}", reason=reason, ) self.connected = False ClientFactory.clientConnectionFailed(self, connector, reason) class APNFeedbackService(APNConnectionService): def __init__( self, store, updateSeconds, host, port, certPath, keyPath, chainPath="", passphrase="", keychainIdentity="", sslMethod="TLSv1_METHOD", testConnector=None, reactor=None ): APNConnectionService.__init__( self, host, port, certPath, keyPath, chainPath=chainPath, passphrase=passphrase, keychainIdentity=keychainIdentity, sslMethod=sslMethod, testConnector=testConnector, reactor=reactor) self.store = store self.updateSeconds = updateSeconds def startService(self): self.log.debug("APNFeedbackService startService") self.factory = APNFeedbackFactory(self.store) self.reactor.callWhenRunning(self.checkForFeedback) def stopService(self): self.log.debug("APNFeedbackService stopService") if self.nextCheck is not None: self.nextCheck.cancel() def checkForFeedback(self): self.nextCheck = None self.log.debug("APNFeedbackService checkForFeedback") self.connect(self.factory) self.nextCheck = self.reactor.callLater( self.updateSeconds, 
            self.checkForFeedback)


class APNSubscriptionResource(
    ReadOnlyNoCopyResourceMixIn, DAVResourceWithoutChildrenMixin,
    DAVResource
):
    """
    The DAV resource allowing clients to subscribe to Apple push
    notifications.  To subscribe, a client should first determine the key
    they are interested in by examining the "pushkey" DAV property on the
    home or collection they want to monitor.  Next the client sends an
    authenticated HTTP GET or POST request to this resource, passing their
    device token and the key in either the URL params or in the POST body.
    """
    log = Logger()

    def __init__(self, parent, store):
        DAVResource.__init__(
            self, principalCollections=parent.principalCollections()
        )
        self.parent = parent
        self.store = store

    def deadProperties(self):
        if not hasattr(self, "_dead_properties"):
            self._dead_properties = NonePropertyStore(self)
        return self._dead_properties

    def etag(self):
        return succeed(None)

    def checkPreconditions(self, request):
        return None

    def defaultAccessControlList(self):
        return succeed(
            davxml.ACL(
                # DAV:Read for authenticated principals
                davxml.ACE(
                    davxml.Principal(davxml.Authenticated()),
                    davxml.Grant(
                        davxml.Privilege(davxml.Read()),
                    ),
                    davxml.Protected(),
                ),
                # DAV:Write for authenticated principals
                davxml.ACE(
                    davxml.Principal(davxml.Authenticated()),
                    davxml.Grant(
                        davxml.Privilege(davxml.Write()),
                    ),
                    davxml.Protected(),
                ),
            )
        )

    def contentType(self):
        return MimeType.fromString("text/html; charset=utf-8")

    def resourceType(self):
        return None

    def isCollection(self):
        return False

    def isCalendarCollection(self):
        return False

    def isPseudoCalendarCollection(self):
        return False

    @inlineCallbacks
    def http_POST(self, request):
        yield self.authorize(request, (davxml.Write(),))
        yield parsePOSTData(request)
        code, msg = (yield self.processSubscription(request))
        returnValue(self.renderResponse(code, body=msg))

    http_GET = http_POST

    @inlineCallbacks
    def processSubscription(self, request):
        """
        Given an authenticated request, use the token and key arguments
        to add a subscription
entry to the database. @param request: The request to process @type request: L{txweb2.server.Request} """ token = request.args.get("token", ("",))[0].replace(" ", "").lower() key = request.args.get("key", ("",))[0] userAgent = request.headers.getHeader("user-agent", "-") host = request.remoteAddr.host fwdHeaders = request.headers.getRawHeaders("x-forwarded-for", []) if fwdHeaders: host = fwdHeaders[0] if not (key and token): code = responsecode.BAD_REQUEST msg = "Invalid request: both 'token' and 'key' must be provided" elif not validToken(token): code = responsecode.BAD_REQUEST msg = "Invalid request: bad 'token' %s" % (token,) else: uid = request.authnUser.record.uid try: yield self.addSubscription(token, key, uid, userAgent, host) code = responsecode.OK msg = None except InvalidSubscriptionValues: code = responsecode.BAD_REQUEST msg = "Invalid subscription values" returnValue((code, msg)) @inlineCallbacks def addSubscription(self, token, key, uid, userAgent, host): """ Add a subscription (or update its timestamp if already there). 
@param token: The device token, must be lowercase @type token: C{str} @param key: The push key @type key: C{str} @param uid: The uid of the subscriber principal @type uid: C{str} @param userAgent: The user-agent requesting the subscription @type key: C{str} @param host: The host requesting the subscription @type key: C{str} """ now = int(time.time()) # epoch seconds txn = self.store.newTransaction(label="APNSubscriptionResource.addSubscription") yield txn.addAPNSubscription(token, key, now, uid, userAgent, host) yield txn.commit() def renderResponse(self, code, body=None): response = Response(code, {}, body) response.headers.setHeader("content-type", MimeType("text", "html")) return response class APNPurgingWork(RegeneratingWorkItem, fromTable(schema.APN_PURGING_WORK)): group = "apn_purging" purgeIntervalSeconds = 12 * 60 * 60 # 12 hours by default purgeSeconds = 14 * 24 * 60 * 60 # 14 days by default @classmethod def initialSchedule(cls, store, seconds): def _enqueue(txn): return APNPurgingWork.reschedule(txn, seconds) return store.inTransaction("APNPurgingWork.initialSchedule", _enqueue) def regenerateInterval(self): """ Return the interval in seconds between regenerating instances. """ return self.purgeIntervalSeconds def doWork(self): return self.transaction.purgeOldAPNSubscriptions( int(time.time()) - self.purgeSeconds ) calendarserver-9.1+dfsg/calendarserver/push/ipush.py000066400000000000000000000015071315003562600227670ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. ## from twisted.python.constants import Values, ValueConstant class PushPriority(Values): """ Constants to use for push priorities """ low = ValueConstant(1) medium = ValueConstant(5) high = ValueConstant(10) calendarserver-9.1+dfsg/calendarserver/push/notifier.py000066400000000000000000000230211315003562600234510ustar00rootroot00000000000000## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
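The `PushPriority` constants above map symbolic levels to the integer values APNS uses on the wire (10 = deliver immediately, 5 = power-conserving delivery). A rough sketch of the `lookupByValue()` round-trip using a plain dict rather than `twisted.python.constants` (illustrative only; the real API raises `ValueError` for unknown values, while this sketch raises `KeyError`):

```python
# Plain-dict stand-in for the Values/ValueConstant machinery: an integer
# priority stored in a work item row is converted back to a symbolic
# level before the notification is distributed.
PUSH_PRIORITIES = {1: "low", 5: "medium", 10: "high"}

def lookup_by_value(value):
    # Mirrors PushPriority.lookupByValue(value).
    return PUSH_PRIORITIES[value]
```

This is why work items can store `priority.value` in the database and later reconstruct the constant with `PushPriority.lookupByValue(maxPriority)`.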
## """ Notification framework for Calendar Server """ from twext.enterprise.dal.record import fromTable from twext.enterprise.dal.syntax import Delete, Select, Parameter from twext.enterprise.jobs.jobitem import JobItem from twext.enterprise.jobs.workitem import WorkItem, WORK_PRIORITY_HIGH, \ WORK_WEIGHT_1 from twext.python.log import Logger from twisted.internet.defer import inlineCallbacks from txdav.common.datastore.sql_tables import schema from txdav.idav import IStoreNotifierFactory, IStoreNotifier from zope.interface.declarations import implements import datetime from calendarserver.push.ipush import PushPriority log = Logger() class PushNotificationWork(WorkItem, fromTable(schema.PUSH_NOTIFICATION_WORK)): group = property(lambda self: (self.table.PUSH_ID == self.pushID)) default_priority = WORK_PRIORITY_HIGH default_weight = WORK_WEIGHT_1 @inlineCallbacks def doWork(self): # Find all work items with the same push ID and find the highest # priority. Delete matching work items. results = (yield Select( [self.table.WORK_ID, self.table.JOB_ID, self.table.PUSH_PRIORITY], From=self.table, Where=self.table.PUSH_ID == self.pushID).on( self.transaction)) maxPriority = self.pushPriority # If there are other enqueued work items for this push ID, find the # highest priority one and use that value. Note that L{results} will # not contain this work item as job processing behavior will have already # deleted it. So we need to make sure the max priority calculation includes # this one. if results: workIDs, jobIDs, priorities = zip(*results) maxPriority = max(priorities + (self.pushPriority,)) # Delete the work items and jobs we selected - deleting the job will ensure that there are no # orphaned" jobs left in the job queue which would otherwise get to run at some later point, # though not do anything because there is no related work item. 
yield Delete( From=self.table, Where=self.table.WORK_ID.In(Parameter("workIDs", len(workIDs))) ).on(self.transaction, workIDs=workIDs) yield Delete( From=JobItem.table, # @UndefinedVariable Where=JobItem.jobID.In(Parameter("jobIDs", len(jobIDs))) # @UndefinedVariable ).on(self.transaction, jobIDs=jobIDs) pushDistributor = self.transaction._pushDistributor if pushDistributor is not None: # Convert the integer priority value back into a constant priority = PushPriority.lookupByValue(maxPriority) yield pushDistributor.enqueue(self.transaction, self.pushID, priority=priority) # - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - # Classes used within calendarserver itself # class Notifier(object): """ Provides a hook for sending change notifications to the L{NotifierFactory}. """ log = Logger() implements(IStoreNotifier) def __init__(self, notifierFactory, storeObject): self._notifierFactory = notifierFactory self._storeObject = storeObject self._notify = True def enableNotify(self, arg): self.log.debug("enableNotify: {id}", id=self._ids['default'][1]) self._notify = True def disableNotify(self): self.log.debug("disableNotify: {id}", id=self._ids['default'][1]) self._notify = False @inlineCallbacks def notify(self, txn, priority=PushPriority.high): """ Send the notification. For a home object we just push using the home id. For a home child we push both the owner home id and the owned home child id. @param txn: The transaction to create the work item with @type txn: L{CommonStoreTransaction} @param priority: the priority level @type priority: L{PushPriority} """ # Push ids from the store objects are a tuple of (prefix, name,) and we need to compose that # into a single token. ids = (self._storeObject.notifierID(),) # For resources that are children of a home, we need to add the home id too. 
        if hasattr(self._storeObject, "parentNotifierID"):
            ids += (self._storeObject.parentNotifierID(),)

        for prefix, id in ids:
            if self._notify:
                self.log.debug(
                    "Notifications are enabled: {obj} {prefix}/{id} priority={priority}",
                    obj=self._storeObject, prefix=prefix, id=id,
                    priority=priority.value
                )
                yield self._notifierFactory.send(
                    prefix, id, txn, priority=priority)
            else:
                self.log.debug(
                    "Skipping notification for: {obj} {prefix}/{id}",
                    obj=self._storeObject, prefix=prefix, id=id,
                )

    def clone(self, storeObject):
        return self.__class__(self._notifierFactory, storeObject)

    def nodeName(self):
        """
        The pushkey is the notifier id of the home collection for home and
        owned home child objects. For a shared home child, the push key is
        the notifier id of the owner's home child.
        """
        if hasattr(self._storeObject, "ownerHome"):
            if self._storeObject.owned():
                prefix, id = self._storeObject.ownerHome().notifierID()
            else:
                prefix, id = self._storeObject.notifierID()
        else:
            prefix, id = self._storeObject.notifierID()
        return self._notifierFactory.pushKeyForId(prefix, id)


class NotifierFactory(object):
    """
    Notifier Factory

    Creates Notifier instances and forwards notifications from them to the
    work queue.
    """
    log = Logger()

    implements(IStoreNotifierFactory)

    def __init__(self, hostname, coalesceSeconds, reactor=None):
        self.store = None   # Initialized after the store is created
        self.hostname = hostname
        self.coalesceSeconds = coalesceSeconds

        if reactor is None:
            from twisted.internet import reactor
        self.reactor = reactor

    @inlineCallbacks
    def send(self, prefix, id, txn, priority=PushPriority.high):
        """
        Enqueue a push notification work item on the provided transaction.
        """
        yield txn.enqueue(
            PushNotificationWork,
            pushID=self.pushKeyForId(prefix, id),
            notBefore=datetime.datetime.utcnow() + datetime.timedelta(seconds=self.coalesceSeconds),
            pushPriority=priority.value
        )

    def newNotifier(self, storeObject):
        return Notifier(self, storeObject)

    def pushKeyForId(self, prefix, id):
        key = "/%s/%s/%s/" % (prefix, self.hostname, id)
        return key[:255]


def getPubSubAPSConfiguration(notifierID, config):
    """
    Returns the Apple push notification settings specific to the pushKey
    """
    try:
        protocol, ignored = notifierID
    except ValueError:
        # id has no protocol, so we can't look up APS config
        return None

    # If we are directly talking to apple push, advertise those settings
    applePushSettings = config.Notifications.Services.APNS
    if applePushSettings.Enabled:
        settings = {}
        settings["APSBundleID"] = applePushSettings[protocol]["Topic"]
        if config.EnableSSL or config.BehindTLSProxy:
            url = "https://%s:%s/%s" % (
                config.ServerHostName, config.SSLPort,
                applePushSettings.SubscriptionURL)
        else:
            url = "http://%s:%s/%s" % (
                config.ServerHostName, config.HTTPPort,
                applePushSettings.SubscriptionURL)
        settings["SubscriptionURL"] = url
        settings["SubscriptionRefreshIntervalSeconds"] = applePushSettings.SubscriptionRefreshIntervalSeconds
        settings["APSEnvironment"] = applePushSettings.Environment
        return settings

    return None


class PushDistributor(object):
    """
    Distributes notifications to the protocol-specific subservices
    """

    def __init__(self, observers):
        """
        @param observers: the list of observers to distribute pushKeys to
        @type observers: C{list}
        """
        # TODO: add an IPushObservers interface?
        self.observers = observers

    @inlineCallbacks
    def enqueue(self, transaction, pushKey, priority=PushPriority.high):
        """
        Pass along enqueued pushKey to any observers

        @param transaction: a transaction to use, if needed
        @type transaction: L{CommonStoreTransaction}
        @param pushKey: the push key to distribute to the observers
        @type pushKey: C{str}
        @param priority: the priority level
        @type priority: L{PushPriority}
        """
        for observer in self.observers:
            yield observer.enqueue(
                transaction, pushKey,
                dataChangedTimestamp=None, priority=priority)


calendarserver-9.1+dfsg/calendarserver/push/test/__init__.py

##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
CalendarServer push notification code.
"""

calendarserver-9.1+dfsg/calendarserver/push/test/test_amppush.py

##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from calendarserver.push.amppush import AMPPushMaster, AMPPushNotifierProtocol from calendarserver.push.amppush import NotificationForID from twistedcaldav.test.util import StoreTestCase from twisted.internet.task import Clock from calendarserver.push.ipush import PushPriority class AMPPushMasterTests(StoreTestCase): def test_AMPPushMaster(self): # Set up the service clock = Clock() service = AMPPushMaster(None, None, 0, True, 3, reactor=clock) self.assertEquals(service.subscribers, []) client1 = TestProtocol(service) client1.subscribe("token1", "/CalDAV/localhost/user01/") client1.subscribe("token1", "/CalDAV/localhost/user02/") client2 = TestProtocol(service) client2.subscribe("token2", "/CalDAV/localhost/user01/") client3 = TestProtocol(service) client3.subscribe("token3", "any") service.addSubscriber(client1) service.addSubscriber(client2) service.addSubscriber(client3) self.assertEquals(len(service.subscribers), 3) self.assertTrue(client1.subscribedToID("/CalDAV/localhost/user01/")) self.assertTrue(client1.subscribedToID("/CalDAV/localhost/user02/")) self.assertFalse(client1.subscribedToID("nonexistent")) self.assertTrue(client2.subscribedToID("/CalDAV/localhost/user01/")) self.assertFalse(client2.subscribedToID("/CalDAV/localhost/user02/")) self.assertTrue(client3.subscribedToID("/CalDAV/localhost/user01/")) self.assertTrue(client3.subscribedToID("/CalDAV/localhost/user02/")) self.assertTrue(client3.subscribedToID("/CalDAV/localhost/user03/")) dataChangedTimestamp = 1354815999 service.enqueue( None, "/CalDAV/localhost/user01/", 
dataChangedTimestamp=dataChangedTimestamp, priority=PushPriority.high) self.assertEquals(len(client1.history), 0) self.assertEquals(len(client2.history), 0) self.assertEquals(len(client3.history), 0) clock.advance(1) self.assertEquals( client1.history, [ ( NotificationForID, { 'id': '/CalDAV/localhost/user01/', 'dataChangedTimestamp': 1354815999, 'priority': PushPriority.high.value, } ) ] ) self.assertEquals(len(client2.history), 0) self.assertEquals(len(client3.history), 0) clock.advance(3) self.assertEquals( client2.history, [ ( NotificationForID, { 'id': '/CalDAV/localhost/user01/', 'dataChangedTimestamp': 1354815999, 'priority': PushPriority.high.value, } ) ] ) self.assertEquals(len(client3.history), 0) clock.advance(3) self.assertEquals( client3.history, [ ( NotificationForID, { 'id': '/CalDAV/localhost/user01/', 'dataChangedTimestamp': 1354815999, 'priority': PushPriority.high.value, } ) ] ) client1.reset() client2.reset() client2.unsubscribe("/CalDAV/localhost/user01/") service.enqueue( None, "/CalDAV/localhost/user01/", dataChangedTimestamp=dataChangedTimestamp, priority=PushPriority.low) self.assertEquals(len(client1.history), 0) clock.advance(1) self.assertEquals( client1.history, [ ( NotificationForID, { 'id': '/CalDAV/localhost/user01/', 'dataChangedTimestamp': 1354815999, 'priority': PushPriority.low.value, } ) ] ) self.assertEquals(len(client2.history), 0) clock.advance(3) self.assertEquals(len(client2.history), 0) # Turn off staggering service.scheduler = None client1.reset() client2.reset() client2.subscribe("token2", "/CalDAV/localhost/user01/") service.enqueue( None, "/CalDAV/localhost/user01/", dataChangedTimestamp=dataChangedTimestamp, priority=PushPriority.medium) self.assertEquals( client1.history, [ ( NotificationForID, { 'id': '/CalDAV/localhost/user01/', 'dataChangedTimestamp': 1354815999, 'priority': PushPriority.medium.value, } ) ] ) self.assertEquals( client2.history, [ ( NotificationForID, { 'id': '/CalDAV/localhost/user01/', 
                        'dataChangedTimestamp': 1354815999,
                        'priority': PushPriority.medium.value,
                    }
                )
            ]
        )


class TestProtocol(AMPPushNotifierProtocol):

    def __init__(self, service):
        super(TestProtocol, self).__init__(service)
        self.reset()

    def callRemote(self, cls, **kwds):
        self.history.append((cls, kwds))

    def reset(self):
        self.history = []


calendarserver-9.1+dfsg/calendarserver/push/test/test_applepush.py

##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
## import json import struct import time from calendarserver.push.applepush import ( ApplePushNotifierService, APNProviderProtocol, ApplePushPriority ) from calendarserver.push.ipush import PushPriority from calendarserver.push.util import validToken, TokenHistory from twistedcaldav.test.util import StoreTestCase from twisted.internet.defer import inlineCallbacks, succeed from twisted.internet.task import Clock from txdav.common.icommondatastore import InvalidSubscriptionValues from twistedcaldav.config import ConfigDict class ApplePushNotifierServiceTests(StoreTestCase): @inlineCallbacks def test_ApplePushNotifierService(self): settings = ConfigDict({ "Enabled": True, "SubscriptionURL": "apn", "SubscriptionPurgeSeconds": 24 * 60 * 60, "SubscriptionPurgeIntervalSeconds": 24 * 60 * 60, "ProviderHost": "gateway.push.apple.com", "ProviderPort": 2195, "FeedbackHost": "feedback.push.apple.com", "FeedbackPort": 2196, "FeedbackUpdateSeconds": 300, "EnableStaggering": True, "StaggerSeconds": 3, "CalDAV": { "Enabled": True, "CertificatePath": "caldav.cer", "PrivateKeyPath": "caldav.pem", "AuthorityChainPath": "chain.pem", "Passphrase": "", "KeychainIdentity": "org.calendarserver.test", "Topic": "caldav_topic", }, "CardDAV": { "Enabled": True, "CertificatePath": "carddav.cer", "PrivateKeyPath": "carddav.pem", "AuthorityChainPath": "chain.pem", "Passphrase": "", "KeychainIdentity": "org.calendarserver.test", "Topic": "carddav_topic", }, }) # Add subscriptions txn = self._sqlCalendarStore.newTransaction() # Ensure empty values don't get through try: yield txn.addAPNSubscription("", "", "", "", "", "") except InvalidSubscriptionValues: pass try: yield txn.addAPNSubscription("", "1", "2", "3", "", "") except InvalidSubscriptionValues: pass token = "2d0d55cd7f98bcb81c6e24abcdc35168254c7846a43e2828b1ba5a8f82e219df" token2 = "3d0d55cd7f98bcb81c6e24abcdc35168254c7846a43e2828b1ba5a8f82e219df" key1 = "/CalDAV/calendars.example.com/user01/calendar/" timestamp1 = 1000 uid = 
"D2256BCC-48E2-42D1-BD89-CBA1E4CCDFFB" userAgent = "test agent" ipAddr = "127.0.0.1" yield txn.addAPNSubscription(token, key1, timestamp1, uid, userAgent, ipAddr) yield txn.addAPNSubscription(token2, key1, timestamp1, uid, userAgent, ipAddr) key2 = "/CalDAV/calendars.example.com/user02/calendar/" timestamp2 = 3000 yield txn.addAPNSubscription(token, key2, timestamp2, uid, userAgent, ipAddr) subscriptions = (yield txn.apnSubscriptionsBySubscriber(uid)) subscriptions = [[record.token, record.resourceKey, record.modified, record.userAgent, record.ipAddr] for record in subscriptions] self.assertTrue([token, key1, timestamp1, userAgent, ipAddr] in subscriptions) self.assertTrue([token, key2, timestamp2, userAgent, ipAddr] in subscriptions) self.assertTrue([token2, key1, timestamp1, userAgent, ipAddr] in subscriptions) # Verify an update to a subscription with a different uid takes on # the new uid timestamp3 = 5000 uid2 = "D8FFB335-9D36-4CE8-A3B9-D1859E38C0DA" yield txn.addAPNSubscription(token, key2, timestamp3, uid2, userAgent, ipAddr) subscriptions = (yield txn.apnSubscriptionsBySubscriber(uid)) subscriptions = [[record.token, record.resourceKey, record.modified, record.userAgent, record.ipAddr] for record in subscriptions] self.assertTrue([token, key1, timestamp1, userAgent, ipAddr] in subscriptions) self.assertFalse([token, key2, timestamp3, userAgent, ipAddr] in subscriptions) subscriptions = (yield txn.apnSubscriptionsBySubscriber(uid2)) subscriptions = [[record.token, record.resourceKey, record.modified, record.userAgent, record.ipAddr] for record in subscriptions] self.assertTrue([token, key2, timestamp3, userAgent, ipAddr] in subscriptions) # Change it back yield txn.addAPNSubscription(token, key2, timestamp2, uid, userAgent, ipAddr) yield txn.commit() # Set up the service (note since Clock has no 'callWhenRunning' we add our own) def callWhenRunning(callable, *args): callable(*args) clock = Clock() clock.callWhenRunning = callWhenRunning service = (yield 
ApplePushNotifierService.makeService( settings, self._sqlCalendarStore, testConnectorClass=TestConnector, reactor=clock)) self.assertEquals(set(service.providers.keys()), set(["CalDAV", "CardDAV"])) self.assertEquals(set(service.feedbacks.keys()), set(["CalDAV", "CardDAV"])) # First, enqueue a notification while we have no connection, in this # case by doing it prior to startService() # Notification arrives from calendar server dataChangedTimestamp = 1354815999 txn = self._sqlCalendarStore.newTransaction() yield service.enqueue( txn, "/CalDAV/calendars.example.com/user01/calendar/", dataChangedTimestamp=dataChangedTimestamp, priority=PushPriority.high) yield txn.commit() # The notifications should be in the queue self.assertTrue( ((token, key1), dataChangedTimestamp, PushPriority.high) in service.providers["CalDAV"].queue) self.assertTrue( ((token2, key1), dataChangedTimestamp, PushPriority.high) in service.providers["CalDAV"].queue) # Start the service, making the connection which should service the # queue service.startService() # The queue should be empty self.assertEquals(service.providers["CalDAV"].queue, []) # Verify data sent to APN providerConnector = service.providers["CalDAV"].testConnector rawData = providerConnector.transport.data self.assertEquals(len(rawData), 199) data = struct.unpack("!BI", rawData[:5]) self.assertEquals(data[0], 2) # command self.assertEquals(data[1], 194) # frame length # Item 1 (device token) data = struct.unpack("!BH32s", rawData[5:40]) self.assertEquals(data[0], 1) self.assertEquals(data[1], 32) self.assertEquals(data[2].encode("hex"), token.replace(" ", "")) # token # Item 2 (payload) data = struct.unpack("!BH", rawData[40:43]) self.assertEquals(data[0], 2) payloadLength = data[1] self.assertEquals(payloadLength, 138) payload = struct.unpack("!%ds" % (payloadLength,), rawData[43:181]) payload = json.loads(payload[0]) self.assertEquals(payload["key"], u"/CalDAV/calendars.example.com/user01/calendar/") 
self.assertEquals(payload["dataChangedTimestamp"], dataChangedTimestamp) self.assertTrue("pushRequestSubmittedTimestamp" in payload) # Item 3 (notification id) data = struct.unpack("!BHI", rawData[181:188]) self.assertEquals(data[0], 3) self.assertEquals(data[1], 4) self.assertEquals(data[2], 2) # Item 4 (expiration) data = struct.unpack("!BHI", rawData[188:195]) self.assertEquals(data[0], 4) self.assertEquals(data[1], 4) # Item 5 (priority) data = struct.unpack("!BHB", rawData[195:199]) self.assertEquals(data[0], 5) self.assertEquals(data[1], 1) self.assertEquals(data[2], ApplePushPriority.high.value) # Verify token history is updated self.assertTrue(token in [t for (_ignore_i, t) in providerConnector.service.protocol.history.history]) self.assertTrue(token2 in [t for (_ignore_i, t) in providerConnector.service.protocol.history.history]) # # Verify staggering behavior # # Reset sent data providerConnector.transport.data = None # Send notification while service is connected txn = self._sqlCalendarStore.newTransaction() yield service.enqueue( txn, "/CalDAV/calendars.example.com/user01/calendar/", priority=PushPriority.low) yield txn.commit() clock.advance(1) # so that first push is sent self.assertEquals(len(providerConnector.transport.data), 199) # Ensure that the priority is "low" data = struct.unpack("!BHB", providerConnector.transport.data[195:199]) self.assertEquals(data[0], 5) self.assertEquals(data[1], 1) self.assertEquals(data[2], ApplePushPriority.low.value) # Reset sent data providerConnector.transport.data = None clock.advance(3) # so that second push is sent self.assertEquals(len(providerConnector.transport.data), 199) history = [] def errorTestFunction(status, identifier): history.append((status, identifier)) return succeed(None) # Simulate an error errorData = struct.pack("!BBI", APNProviderProtocol.COMMAND_ERROR, 1, 2) yield providerConnector.receiveData(errorData, fn=errorTestFunction) clock.advance(301) # Simulate multiple errors and dataReceived 
called # with amounts of data not fitting message boundaries # Send 1st 4 bytes history = [] errorData = struct.pack( "!BBIBBI", APNProviderProtocol.COMMAND_ERROR, 3, 4, APNProviderProtocol.COMMAND_ERROR, 5, 6, ) yield providerConnector.receiveData(errorData[:4], fn=errorTestFunction) # Send remaining bytes yield providerConnector.receiveData(errorData[4:], fn=errorTestFunction) self.assertEquals(history, [(3, 4), (5, 6)]) # Buffer is empty self.assertEquals(len(providerConnector.service.protocol.buffer), 0) # Sending 7 bytes yield providerConnector.receiveData("!" * 7, fn=errorTestFunction) # Buffer has 1 byte remaining self.assertEquals(len(providerConnector.service.protocol.buffer), 1) # Prior to feedback, there are 2 subscriptions txn = self._sqlCalendarStore.newTransaction() subscriptions = (yield txn.apnSubscriptionsByToken(token)) yield txn.commit() self.assertEquals(len(subscriptions), 2) # Simulate feedback with a single token feedbackConnector = service.feedbacks["CalDAV"].testConnector timestamp = 2000 binaryToken = token.decode("hex") feedbackData = struct.pack( "!IH32s", timestamp, len(binaryToken), binaryToken) yield feedbackConnector.receiveData(feedbackData) # Simulate feedback with multiple tokens, and dataReceived called # with amounts of data not fitting message boundaries history = [] def feedbackTestFunction(timestamp, token): history.append((timestamp, token)) return succeed(None) timestamp = 2000 binaryToken = token.decode("hex") feedbackData = struct.pack( "!IH32sIH32s", timestamp, len(binaryToken), binaryToken, timestamp, len(binaryToken), binaryToken, ) # Send 1st 10 bytes yield feedbackConnector.receiveData(feedbackData[:10], fn=feedbackTestFunction) # Send remaining bytes yield feedbackConnector.receiveData(feedbackData[10:], fn=feedbackTestFunction) self.assertEquals(history, [(timestamp, token), (timestamp, token)]) # Buffer is empty self.assertEquals(len(feedbackConnector.service.protocol.buffer), 0) # Sending 39 bytes yield 
feedbackConnector.receiveData("!" * 39, fn=feedbackTestFunction) # Buffer has 1 byte remaining self.assertEquals(len(feedbackConnector.service.protocol.buffer), 1) # The second subscription should now be gone txn = self._sqlCalendarStore.newTransaction() subscriptions = (yield txn.apnSubscriptionsByToken(token)) yield txn.commit() self.assertEquals(len(subscriptions), 1) self.assertEqual(subscriptions[0].resourceKey, "/CalDAV/calendars.example.com/user02/calendar/") self.assertEqual(subscriptions[0].modified, 3000) self.assertEqual(subscriptions[0].subscriberGUID, "D2256BCC-48E2-42D1-BD89-CBA1E4CCDFFB") # Verify processError removes associated subscriptions and history # First find the id corresponding to token2 for (id, t) in providerConnector.service.protocol.history.history: if t == token2: break yield providerConnector.service.protocol.processError(8, id) # The token for this identifier is gone self.assertTrue((id, token2) not in providerConnector.service.protocol.history.history) # All subscriptions for this token should now be gone txn = self._sqlCalendarStore.newTransaction() subscriptions = (yield txn.apnSubscriptionsByToken(token2)) yield txn.commit() self.assertEquals(subscriptions, []) # # Verify purgeOldAPNSubscriptions # # Create two subscriptions, one old and one new txn = self._sqlCalendarStore.newTransaction() now = int(time.time()) yield txn.addAPNSubscription(token2, key1, now - 2 * 24 * 60 * 60, uid, userAgent, ipAddr) # old yield txn.addAPNSubscription(token2, key2, now, uid, userAgent, ipAddr) # recent yield txn.commit() # Purge old subscriptions txn = self._sqlCalendarStore.newTransaction() yield txn.purgeOldAPNSubscriptions(now - 60 * 60) yield txn.commit() # Check that only the recent subscription remains txn = self._sqlCalendarStore.newTransaction() subscriptions = (yield txn.apnSubscriptionsByToken(token2)) yield txn.commit() self.assertEquals(len(subscriptions), 1) self.assertEquals(subscriptions[0].resourceKey, key2) 
service.stopService() def test_validToken(self): self.assertTrue(validToken("2d0d55cd7f98bcb81c6e24abcdc35168254c7846a43e2828b1ba5a8f82e219df")) self.assertFalse(validToken("d0d55cd7f98bcb81c6e24abcdc35168254c7846a43e2828b1ba5a8f82e219df")) self.assertFalse(validToken("foo")) self.assertFalse(validToken("")) def test_TokenHistory(self): history = TokenHistory(maxSize=5) # Ensure returned identifiers increment for id, token in enumerate( ("one", "two", "three", "four", "five"), start=1 ): self.assertEquals(id, history.add(token)) self.assertEquals(len(history.history), 5) # History size never exceeds maxSize id = history.add("six") self.assertEquals(id, 6) self.assertEquals(len(history.history), 5) self.assertEquals( history.history, [(2, "two"), (3, "three"), (4, "four"), (5, "five"), (6, "six")] ) id = history.add("seven") self.assertEquals(id, 7) self.assertEquals(len(history.history), 5) self.assertEquals( history.history, [(3, "three"), (4, "four"), (5, "five"), (6, "six"), (7, "seven")] ) # Look up non-existent identifier token = history.extractIdentifier(9999) self.assertEquals(token, None) self.assertEquals( history.history, [(3, "three"), (4, "four"), (5, "five"), (6, "six"), (7, "seven")] ) # Look up oldest identifier in history token = history.extractIdentifier(3) self.assertEquals(token, "three") self.assertEquals( history.history, [(4, "four"), (5, "five"), (6, "six"), (7, "seven")] ) # Look up latest identifier in history token = history.extractIdentifier(7) self.assertEquals(token, "seven") self.assertEquals( history.history, [(4, "four"), (5, "five"), (6, "six")] ) # Look up an identifier in the middle token = history.extractIdentifier(5) self.assertEquals(token, "five") self.assertEquals( history.history, [(4, "four"), (6, "six")] ) class TestConnector(object): def connect(self, service, factory): self.service = service service.protocol = factory.buildProtocol(None) service.connected = 1 self.transport = StubTransport() 
        service.protocol.makeConnection(self.transport)

    def receiveData(self, data, fn=None):
        return self.service.protocol.dataReceived(data, fn=fn)


class StubTransport(object):

    def __init__(self):
        self.data = None

    def write(self, data):
        self.data = data


calendarserver-9.1+dfsg/calendarserver/push/test/test_notifier.py

##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from twistedcaldav.test.util import StoreTestCase
from calendarserver.push.notifier import PushDistributor
from calendarserver.push.notifier import getPubSubAPSConfiguration
from calendarserver.push.notifier import PushNotificationWork
from twisted.internet.defer import inlineCallbacks, succeed
from twistedcaldav.config import ConfigDict
from txdav.common.datastore.test.util import populateCalendarsFrom
from txdav.common.datastore.sql_tables import _BIND_MODE_WRITE
from calendarserver.push.ipush import PushPriority
from txdav.idav import ChangeCategory
from twext.enterprise.jobs.jobitem import JobItem
from twisted.internet import reactor


class StubService(object):

    def __init__(self):
        self.reset()

    def reset(self):
        self.history = []

    def enqueue(
        self, transaction, id, dataChangedTimestamp=None, priority=None
    ):
        self.history.append((id, priority))
        return(succeed(None))


class PushDistributorTests(StoreTestCase):

    @inlineCallbacks
    def test_enqueue(self):
        stub = StubService()
        dist =
PushDistributor([stub]) yield dist.enqueue(None, "testing", PushPriority.high) self.assertEquals(stub.history, [("testing", PushPriority.high)]) def test_getPubSubAPSConfiguration(self): config = ConfigDict({ "EnableSSL": True, "BehindTLSProxy": False, "ServerHostName": "calendars.example.com", "SSLPort": 8443, "HTTPPort": 8008, "Notifications": { "Services": { "APNS": { "CalDAV": { "Topic": "test topic", }, "SubscriptionRefreshIntervalSeconds": 42, "SubscriptionURL": "apns", "Environment": "prod", "Enabled": True, }, }, }, }) result = getPubSubAPSConfiguration(("CalDAV", "foo",), config) self.assertEquals( result, { "SubscriptionRefreshIntervalSeconds": 42, "SubscriptionURL": "https://calendars.example.com:8443/apns", "APSBundleID": "test topic", "APSEnvironment": "prod" } ) config = ConfigDict({ "EnableSSL": False, "BehindTLSProxy": True, "ServerHostName": "calendars.example.com", "SSLPort": 8443, "HTTPPort": 8008, "Notifications": { "Services": { "APNS": { "CalDAV": { "Topic": "test topic", }, "SubscriptionRefreshIntervalSeconds": 42, "SubscriptionURL": "apns", "Environment": "prod", "Enabled": True, }, }, }, }) result = getPubSubAPSConfiguration(("CalDAV", "foo",), config) self.assertEquals( result, { "SubscriptionRefreshIntervalSeconds": 42, "SubscriptionURL": "https://calendars.example.com:8443/apns", "APSBundleID": "test topic", "APSEnvironment": "prod" } ) result = getPubSubAPSConfiguration(("CalDAV", "foo",), config) self.assertEquals( result, { "SubscriptionRefreshIntervalSeconds": 42, "SubscriptionURL": "https://calendars.example.com:8443/apns", "APSBundleID": "test topic", "APSEnvironment": "prod" } ) config = ConfigDict({ "EnableSSL": False, "BehindTLSProxy": False, "ServerHostName": "calendars.example.com", "SSLPort": 8443, "HTTPPort": 8008, "Notifications": { "Services": { "APNS": { "CalDAV": { "Topic": "test topic", }, "SubscriptionRefreshIntervalSeconds": 42, "SubscriptionURL": "apns", "Environment": "prod", "Enabled": True, }, }, }, }) result = 
getPubSubAPSConfiguration(("CalDAV", "foo",), config) self.assertEquals( result, { "SubscriptionRefreshIntervalSeconds": 42, "SubscriptionURL": "http://calendars.example.com:8008/apns", "APSBundleID": "test topic", "APSEnvironment": "prod" } ) class StubDistributor(object): def __init__(self): self.reset() def reset(self): self.history = [] def enqueue( self, transaction, pushID, dataChangedTimestamp=None, priority=None ): self.history.append((pushID, priority)) return(succeed(None)) class PushNotificationWorkTests(StoreTestCase): @inlineCallbacks def test_work(self): self.patch(JobItem, "failureRescheduleInterval", 2) pushDistributor = StubDistributor() def decorateTransaction(txn): txn._pushDistributor = pushDistributor self._sqlCalendarStore.callWithNewTransactions(decorateTransaction) txn = self._sqlCalendarStore.newTransaction() yield txn.enqueue( PushNotificationWork, pushID="/CalDAV/localhost/foo/", pushPriority=PushPriority.high.value ) yield txn.commit() yield JobItem.waitEmpty(self.storeUnderTest().newTransaction, reactor, 60) self.assertEquals( pushDistributor.history, [("/CalDAV/localhost/foo/", PushPriority.high)]) pushDistributor.reset() txn = self._sqlCalendarStore.newTransaction() yield txn.enqueue( PushNotificationWork, pushID="/CalDAV/localhost/bar/", pushPriority=PushPriority.high.value ) yield txn.enqueue( PushNotificationWork, pushID="/CalDAV/localhost/bar/", pushPriority=PushPriority.high.value ) yield txn.enqueue( PushNotificationWork, pushID="/CalDAV/localhost/bar/", pushPriority=PushPriority.high.value ) # Enqueue a different pushID to ensure those are not grouped with # the others: yield txn.enqueue( PushNotificationWork, pushID="/CalDAV/localhost/baz/", pushPriority=PushPriority.high.value ) yield txn.commit() yield JobItem.waitEmpty(self.storeUnderTest().newTransaction, reactor, 60) self.assertEquals( set(pushDistributor.history), set([ ("/CalDAV/localhost/bar/", PushPriority.high), ("/CalDAV/localhost/baz/", PushPriority.high) ]) ) # 
Ensure only the high-water-mark priority push goes out, by # enqueuing low, medium, and high notifications pushDistributor.reset() txn = self._sqlCalendarStore.newTransaction() yield txn.enqueue( PushNotificationWork, pushID="/CalDAV/localhost/bar/", pushPriority=PushPriority.low.value ) yield txn.enqueue( PushNotificationWork, pushID="/CalDAV/localhost/bar/", pushPriority=PushPriority.high.value ) yield txn.enqueue( PushNotificationWork, pushID="/CalDAV/localhost/bar/", pushPriority=PushPriority.medium.value ) yield txn.commit() yield JobItem.waitEmpty(self.storeUnderTest().newTransaction, reactor, 60) self.assertEquals( pushDistributor.history, [("/CalDAV/localhost/bar/", PushPriority.high)]) class NotifierFactory(StoreTestCase): requirements = { "user01": { "calendar_1": {} }, "user02": { "calendar_1": {} }, } @inlineCallbacks def populate(self): # Need to bypass normal validation inside the store yield populateCalendarsFrom(self.requirements, self.storeUnderTest()) self.notifierFactory.reset() def test_storeInit(self): self.assertTrue("push" in self._sqlCalendarStore._notifierFactories) @inlineCallbacks def test_homeNotifier(self): home = yield self.homeUnderTest(name="user01") yield home.notifyChanged(category=ChangeCategory.default) self.assertEquals( self.notifierFactory.history, [("/CalDAV/example.com/user01/", PushPriority.high)]) yield self.commit() @inlineCallbacks def test_calendarNotifier(self): calendar = yield self.calendarUnderTest(home="user01") yield calendar.notifyChanged(category=ChangeCategory.default) self.assertEquals( set(self.notifierFactory.history), set([ ("/CalDAV/example.com/user01/", PushPriority.high), ("/CalDAV/example.com/user01/calendar_1/", PushPriority.high)]) ) yield self.commit() @inlineCallbacks def test_shareWithNotifier(self): calendar = yield self.calendarUnderTest(home="user01") yield calendar.inviteUIDToShare("user02", _BIND_MODE_WRITE, "") self.assertEquals( set(self.notifierFactory.history), set([ 
                ("/CalDAV/example.com/user01/", PushPriority.high),
                ("/CalDAV/example.com/user01/calendar_1/", PushPriority.high),
                ("/CalDAV/example.com/user02/", PushPriority.high),
                ("/CalDAV/example.com/user02/notification/", PushPriority.high),
            ])
        )
        yield self.commit()

        calendar = yield self.calendarUnderTest(home="user01")
        yield calendar.uninviteUIDFromShare("user02")
        self.assertEquals(
            set(self.notifierFactory.history),
            set([
                ("/CalDAV/example.com/user01/", PushPriority.high),
                ("/CalDAV/example.com/user01/calendar_1/", PushPriority.high),
                ("/CalDAV/example.com/user02/", PushPriority.high),
                ("/CalDAV/example.com/user02/notification/", PushPriority.high),
            ])
        )
        yield self.commit()

    @inlineCallbacks
    def test_sharedCalendarNotifier(self):
        calendar = yield self.calendarUnderTest(home="user01")
        shareeView = yield calendar.inviteUIDToShare("user02", _BIND_MODE_WRITE, "")
        yield shareeView.acceptShare("")
        shareName = shareeView.name()
        yield self.commit()
        self.notifierFactory.reset()

        shared = yield self.calendarUnderTest(home="user02", name=shareName)
        yield shared.notifyChanged(category=ChangeCategory.default)
        self.assertEquals(
            set(self.notifierFactory.history),
            set([
                ("/CalDAV/example.com/user01/", PushPriority.high),
                ("/CalDAV/example.com/user01/calendar_1/", PushPriority.high)])
        )
        yield self.commit()

    @inlineCallbacks
    def test_notificationNotifier(self):
        notifications = yield self.transactionUnderTest().notificationsWithUID("user01", create=True)
        yield notifications.notifyChanged(category=ChangeCategory.default)
        self.assertEquals(
            set(self.notifierFactory.history),
            set([
                ("/CalDAV/example.com/user01/", PushPriority.high),
                ("/CalDAV/example.com/user01/notification/", PushPriority.high)])
        )
        yield self.commit()


calendarserver-9.1+dfsg/calendarserver/push/util.py

##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from twext.python.log import Logger

from twistedcaldav.util import getPasswordFromKeychain, KeychainAccessError, \
    KeychainPasswordNotFound

import OpenSSL
from OpenSSL import crypto

import os


def getAPNTopicFromConfig(protocol, accountName, protoConfig):
    """
    Given the APNS protocol config, extract the APN topic.

    @param accountName: account name
    @type accountName: L{str}
    @param protocol: APNS protocol name
    @type protocol: L{str}
    @param protoConfig: APNS specific config
    @type protoConfig: L{dict}

    @raise: ValueError
    """
    if hasattr(OpenSSL, "__SecureTransport__"):
        if protoConfig.KeychainIdentity:
            # Verify the identity exists
            error = OpenSSL.crypto.check_keychain_identity(protoConfig.KeychainIdentity)
            if error:
                raise ValueError(
                    "The {proto} APNS Keychain Identity ({cert}) cannot be used: {reason}".format(
                        proto=protocol,
                        cert=protoConfig.KeychainIdentity,
                        reason=error
                    )
                )

            # Verify we can extract the topic
            if not protoConfig.Topic:
                topic = getAPNTopicFromIdentity(protoConfig.KeychainIdentity)
                protoConfig.Topic = topic
            if not protoConfig.Topic:
                raise ValueError("Cannot extract {proto} APNS topic".format(proto=protocol))
        else:
            raise ValueError(
                "No {proto} APNS Keychain Identity was set".format(proto=protocol))
    else:
        # Verify the cert exists
        if not os.path.exists(protoConfig.CertificatePath):
            raise ValueError(
                "The {proto} APNS certificate ({cert}) is missing".format(
                    proto=protocol,
                    cert=protoConfig.CertificatePath
                )
            )

        # Verify we can extract the topic
        if not protoConfig.Topic:
            topic = getAPNTopicFromCertificate(protoConfig.CertificatePath)
            protoConfig.Topic = topic
        if not protoConfig.Topic:
            raise ValueError("Cannot extract {proto} APNS topic".format(proto=protocol))

        # Verify we can acquire the passphrase
        if not protoConfig.Passphrase:
            try:
                passphrase = getPasswordFromKeychain(accountName)
                protoConfig.Passphrase = passphrase
            except KeychainAccessError:
                # The system doesn't support keychain
                pass
            except KeychainPasswordNotFound:
                # The password doesn't exist in the keychain.
                raise ValueError("Cannot retrieve {proto} APNS passphrase from keychain".format(proto=protocol))


def getAPNTopicFromCertificate(certPath):
    """
    Given the path to a certificate, extract the UID value portion of the
    subject, which in this context is used for the associated APN topic.

    @param certPath: file path of the certificate
    @type certPath: C{str}

    @return: C{str} topic, or empty string if value is not found
    """
    with open(certPath) as f:
        certData = f.read()
    return getAPNTopicFromX509(crypto.load_certificate(crypto.FILETYPE_PEM, certData))


def getAPNTopicFromIdentity(identity):
    """
    Given a keychain identity certificate, extract the UID value portion
    of the subject, which in this context is used for the associated APN
    topic.

    @param identity: keychain identity to lookup
    @type identity: C{str}

    @return: C{str} topic, or empty string if value is not found
    """
    return getAPNTopicFromX509(crypto.load_certificate(None, identity))


def getAPNTopicFromX509(x509):
    """
    Given an L{X509} certificate, extract the UID value portion of the
    subject, which in this context is used for the associated APN topic.

    @param x509: the certificate
    @type x509: L{X509}

    @return: C{str} topic, or empty string if value is not found
    """
    subject = x509.get_subject()
    components = subject.get_components()
    for name, value in components:
        if name == "UID":
            return value
    return ""


def validToken(token):
    """
    Return True if token is in hex and is 64 characters long, False
    otherwise
    """
    if len(token) != 64:
        return False
    try:
        token.decode("hex")
    except TypeError:
        return False
    return True


class TokenHistory(object):
    """
    Manages a queue of tokens and corresponding identifiers.  Queue is
    always kept below maxSize in length (older entries removed as needed).
    """

    def __init__(self, maxSize=200):
        """
        @param maxSize: How large the history is allowed to grow.  Once
            this size is reached, older entries are pruned as needed.
        @type maxSize: C{int}
        """
        self.maxSize = maxSize
        self.identifier = 0
        self.history = []

    def add(self, token):
        """
        Add a token to the history, and return the new identifier
        associated with this token.  Identifiers begin at 1 and increase
        each time this is called.  If the number of items in the history
        exceeds maxSize, older entries are removed to get the size down
        to maxSize.

        @param token: The token to store
        @type token: C{str}
        @returns: the message identifier associated with this token, C{int}
        """
        self.identifier += 1
        self.history.append((self.identifier, token))
        del self.history[:-self.maxSize]
        return self.identifier

    def extractIdentifier(self, identifier):
        """
        Look for the token associated with the identifier.  Remove the
        identifier-token pair from the history and return the token.
        Return None if the identifier is not found in the history.

        @param identifier: The identifier to look up
        @type identifier: C{int}
        @returns: the token associated with this message identifier,
            C{str}, or None if not found
        """
        for index, (id, token) in enumerate(self.history):
            if id == identifier:
                del self.history[index]
                return token
        return None


class PushScheduler(object):
    """
    Allows staggered scheduling of push notifications
    """
    log = Logger()

    def __init__(self, reactor, callback, staggerSeconds=1):
        """
        @param callback: The method to call when it's time to send a push
        @type callback: callable
        @param staggerSeconds: The number of seconds to stagger between
            each push
        @type staggerSeconds: integer
        """
        self.outstanding = {}
        self.reactor = reactor
        self.callback = callback
        self.staggerSeconds = staggerSeconds

    def schedule(self, tokens, key, dataChangedTimestamp, priority):
        """
        Schedules a batch of notifications for the given tokens, staggered
        with self.staggerSeconds between each one.  Duplicates are ignored,
        so if a token/key pair is already scheduled and not yet sent, a new
        one will not be scheduled for that pair.

        @param tokens: The device tokens to schedule notifications for
        @type tokens: List of C{str}
        @param key: The key to use for this batch of notifications
        @type key: C{str}
        @param dataChangedTimestamp: Timestamp (epoch seconds) for the
            data change which triggered this notification
        @type key: C{int}
        """
        scheduleTime = 0.0
        for token in tokens:
            internalKey = (token, key)
            if internalKey in self.outstanding:
                self.log.debug(
                    "PushScheduler already has this scheduled: {key}",
                    key=internalKey,
                )
            else:
                self.outstanding[internalKey] = self.reactor.callLater(
                    scheduleTime, self.send, token, key,
                    dataChangedTimestamp, priority)
                self.log.debug(
                    "PushScheduler scheduled: {key} in {time:.0f} sec",
                    key=internalKey, time=scheduleTime
                )
                scheduleTime += self.staggerSeconds

    def send(self, token, key, dataChangedTimestamp, priority):
        """
        This method is what actually gets scheduled.  Its job is to remove
        its corresponding entry from the outstanding dict and call the
        callback.

        @param token: The device token to send a notification to
        @type token: C{str}
        @param key: The notification key
        @type key: C{str}
        @param dataChangedTimestamp: Timestamp (epoch seconds) for the
            data change which triggered this notification
        @type key: C{int}
        """
        self.log.debug("PushScheduler fired for {token} {key} {time}",
                       token=token, key=key, time=dataChangedTimestamp)
        del self.outstanding[(token, key)]
        return self.callback(token, key, dataChangedTimestamp, priority)

    def stop(self):
        """
        Cancel all outstanding delayed calls
        """
        for (token, key), delayed in self.outstanding.iteritems():
            self.log.debug("PushScheduler cancelling {token} {key}",
                           token=token, key=key)
            delayed.cancel()
calendarserver-9.1+dfsg/calendarserver/tap/000077500000000000000000000000001315003562600210675ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/tap/__init__.py000066400000000000000000000014341315003562600232020ustar00rootroot00000000000000##
# Copyright (c) 2005-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
CalendarServer TAP plugin support.
""" from calendarserver.tap import profiling # Pre-imported for side-effect __all__ = [ "caldav", "carddav", "profiling", "util" ] calendarserver-9.1+dfsg/calendarserver/tap/caldav.py000066400000000000000000003042011315003562600226730ustar00rootroot00000000000000# -*- test-case-name: calendarserver.tap.test.test_caldav -*- ## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function __all__ = [ "CalDAVService", "CalDAVOptions", "CalDAVServiceMaker", ] import sys from collections import OrderedDict from os import getuid, getgid, geteuid, umask, remove, environ, stat, chown, W_OK from os.path import exists, basename import signal import socket from stat import S_ISSOCK import time from subprocess import Popen, PIPE from pwd import getpwuid, getpwnam from grp import getgrnam import OpenSSL from OpenSSL.SSL import Error as SSLError from zope.interface import implements from twistedcaldav.stdconfig import config from twisted.python.log import ILogObserver from twisted.python.logfile import LogFile from twisted.python.usage import Options, UsageError from twisted.python.util import uidFromString, gidFromString from twisted.plugin import IPlugin from twisted.internet.defer import ( gatherResults, Deferred, inlineCallbacks, succeed ) from twisted.internet.endpoints import UNIXClientEndpoint, TCP4ClientEndpoint from twisted.internet.process import ProcessExitedAlready from twisted.internet.protocol 
import ProcessProtocol from twisted.internet.protocol import Factory from twisted.application.internet import TCPServer, UNIXServer from twisted.application.service import MultiService, IServiceMaker from twisted.application.service import Service from twisted.protocols.amp import AMP from twisted.logger import LogLevel from twext.python.filepath import CachingFilePath from twext.python.log import Logger from twext.internet.fswatch import ( DirectoryChangeListener, IDirectoryChangeListenee ) from twext.internet.ssl import ChainingOpenSSLContextFactory from twext.internet.tcp import MaxAcceptTCPServer, MaxAcceptSSLServer from twext.enterprise.adbapi2 import ConnectionPool from twext.enterprise.ienterprise import POSTGRES_DIALECT, ORACLE_DIALECT, DatabaseType from twext.enterprise.jobs.queue import NonPerformingQueuer from twext.enterprise.jobs.queue import ControllerQueue from twext.enterprise.jobs.queue import WorkerFactory as QueueWorkerFactory from twext.application.service import ReExecService from txdav.who.groups import GroupCacherPollingWork from calendarserver.tools.purge import PrincipalPurgePollingWork from calendarserver.tools.util import checkDirectory from txweb2.channel.http import ( LimitingHTTPFactory, SSLRedirectRequest, HTTPChannel ) from txweb2.metafd import ConnectionLimiter, ReportingHTTPService from txweb2.server import Site from txdav.base.datastore.dbapiclient import DBAPIConnector from txdav.caldav.datastore.scheduling.imip.inbound import MailRetriever from txdav.caldav.datastore.scheduling.imip.inbound import scheduleNextMailPoll from txdav.common.datastore.upgrade.migrate import UpgradeToDatabaseStep from txdav.common.datastore.upgrade.sql.upgrade import ( UpgradeDatabaseCalendarDataStep, UpgradeDatabaseOtherStep, UpgradeDatabaseSchemaStep, UpgradeDatabaseAddressBookDataStep, UpgradeAcquireLockStep, UpgradeReleaseLockStep, UpgradeDatabaseNotificationDataStep ) from txdav.common.datastore.work.inbox_cleanup import InboxCleanupWork from 
txdav.common.datastore.work.revision_cleanup import FindMinValidRevisionWork from txdav.who.groups import GroupCacher from twistedcaldav import memcachepool from twistedcaldav.cache import MemcacheURLPatternChangeNotifier from twistedcaldav.config import ConfigurationError from twistedcaldav.localization import processLocalizationFiles from twistedcaldav.stdconfig import DEFAULT_CONFIG, DEFAULT_CONFIG_FILE from twistedcaldav.upgrade import ( UpgradeFileSystemFormatStep, PostDBImportStep, ) from calendarserver.accesslog import AMPCommonAccessLoggingObserver from calendarserver.accesslog import AMPLoggingFactory from calendarserver.accesslog import RotatingFileAccessLoggingObserver from calendarserver.controlsocket import ControlSocket from calendarserver.controlsocket import ControlSocketConnectingService from calendarserver.dashboard_service import DashboardServer from calendarserver.push.amppush import AMPPushMaster, AMPPushForwarder from calendarserver.push.applepush import ApplePushNotifierService, APNPurgingWork from calendarserver.push.notifier import PushDistributor from calendarserver.tap.util import ( ConnectionDispenser, Stepper, checkDirectories, getRootResource, pgServiceFromConfig, getDBPool, MemoryLimitService, storeFromConfig, getSSLPassphrase, preFlightChecks, storeFromConfigWithDPSClient, storeFromConfigWithoutDPS, serverRootLocation, AlertPoster ) try: from calendarserver.version import version except ImportError: from twisted.python.modules import getModule sys.path.insert( 0, getModule(__name__).pathEntry.filePath.child("support").path ) from version import version as getVersion version = "{} ({}*)".format(getVersion()) from txweb2.server import VERSION as TWISTED_VERSION TWISTED_VERSION = "CalendarServer/{} {}".format( version.replace(" ", ""), TWISTED_VERSION, ) log = Logger() # Control socket message-routing constants. 
_LOG_ROUTE = "log" _QUEUE_ROUTE = "queue" _CONTROL_SERVICE_NAME = "control" def getid(uid, gid): if uid is not None: uid = uidFromString(uid) if gid is not None: gid = gidFromString(gid) return (uid, gid) def conflictBetweenIPv4AndIPv6(): """ Is there a conflict between binding an IPv6 and an IPv4 port? Return True if there is, False if there isn't. This is a temporary workaround until maybe Twisted starts setting C{IPPROTO_IPV6 / IPV6_V6ONLY} on IPv6 sockets. @return: C{True} if listening on IPv4 conflicts with listening on IPv6. """ s4 = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s6 = socket.socket(socket.AF_INET6, socket.SOCK_STREAM) try: s4.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s6.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1) s4.bind(("", 0)) s4.listen(1) usedport = s4.getsockname()[1] try: s6.bind(("::", usedport)) except socket.error: return True else: return False finally: s4.close() s6.close() def _computeEnvVars(parent): """ Compute environment variables to be propagated to child processes. """ result = {} requiredVars = [ "PATH", "PYTHONPATH", "LD_LIBRARY_PATH", "LD_PRELOAD", "DYLD_LIBRARY_PATH", "DYLD_INSERT_LIBRARIES", ] optionalVars = [ "PYTHONHASHSEED", "KRB5_KTNAME", "ORACLE_HOME", "VERSIONER_PYTHON_PREFER_32_BIT", ] for varname in requiredVars: result[varname] = parent.get(varname, "") for varname in optionalVars: if varname in parent: result[varname] = parent[varname] # Worker's stdXXX are piped, which ends up setting their encoding to None; # Have them inherit the master's encoding, even when being piped. result["PYTHONIOENCODING"] = sys.stdout.encoding or "UTF-8" return result PARENT_ENVIRONMENT = _computeEnvVars(environ) class ErrorLoggingMultiService(MultiService, object): """ Registers a rotating file logger for error logging, if config.ErrorLogEnabled is True. 
""" def __init__(self, logEnabled, logPath, logRotateLength, logMaxFiles, logRotateOnStart): """ @param logEnabled: Whether to write to a log file @type logEnabled: C{boolean} @param logPath: the full path to the log file @type logPath: C{str} @param logRotateLength: rotate when files exceed this many bytes @type logRotateLength: C{int} @param logMaxFiles: keep at most this many files @type logMaxFiles: C{int} @param logRotateOnStart: rotate when service starts @type logRotateOnStart: C{bool} """ MultiService.__init__(self) self.logEnabled = logEnabled self.logPath = logPath self.logRotateLength = logRotateLength self.logMaxFiles = logMaxFiles self.logRotateOnStart = logRotateOnStart self.name = "elms" def setServiceParent(self, app): MultiService.setServiceParent(self, app) if self.logEnabled: errorLogFile = LogFile.fromFullPath( self.logPath, rotateLength=self.logRotateLength, maxRotatedFiles=self.logMaxFiles ) if self.logRotateOnStart: errorLogFile.rotate() encoding = "utf-8" else: errorLogFile = sys.stdout encoding = sys.stdout.encoding withTime = not (not config.ErrorLogFile and config.ProcessType in ("Slave", "DPS",)) errorLogObserver = Logger.makeFilteredFileLogObserver(errorLogFile, withTime=withTime) # We need to do this because the current Twisted logger implementation fails to setup the encoding # correctly when a L{twisted.python.logile.LogFile} is passed to a {twisted.logger._file.FileLogObserver} errorLogObserver._observers[0]._encoding = encoding # Registering ILogObserver with the Application object # gets our observer picked up within AppLogger.start( ) app.setComponent(ILogObserver, errorLogObserver) class CalDAVService (ErrorLoggingMultiService): # The ConnectionService is a MultiService which bundles all the connection # services together for the purposes of being able to stop them and wait # for all of their connections to close before shutting down. 
connectionServiceName = "cs" def __init__(self, logObserver): self.logObserver = logObserver # accesslog observer ErrorLoggingMultiService.__init__( self, config.ErrorLogEnabled, config.ErrorLogFile, config.ErrorLogRotateMB * 1024 * 1024, config.ErrorLogMaxRotatedFiles, config.ErrorLogRotateOnStart, ) self.name = "cds" def privilegedStartService(self): MultiService.privilegedStartService(self) self.logObserver.start() @inlineCallbacks def stopService(self): """ Wait for outstanding requests to finish @return: a Deferred which fires when all outstanding requests are complete """ connectionService = self.getServiceNamed(self.connectionServiceName) # Note: removeService() also calls stopService() yield self.removeService(connectionService) # At this point, all outstanding requests have been responded to yield super(CalDAVService, self).stopService() self.logObserver.stop() class CalDAVOptions (Options): log = Logger() optParameters = [[ "config", "f", DEFAULT_CONFIG_FILE, "Path to configuration file." 
]] zsh_actions = {"config": "_files -g '*.plist'"} def __init__(self, *args, **kwargs): super(CalDAVOptions, self).__init__(*args, **kwargs) self.overrides = {} @staticmethod def coerceOption(configDict, key, value): """ Coerce the given C{val} to type of C{configDict[key]} """ if key in configDict: if isinstance(configDict[key], bool): value = value == "True" elif isinstance(configDict[key], (int, float, long)): value = type(configDict[key])(value) elif isinstance(configDict[key], (list, tuple)): value = value.split(",") elif isinstance(configDict[key], dict): raise UsageError( "Dict options not supported on the command line" ) elif value == "None": value = None return value @classmethod def setOverride(cls, configDict, path, value, overrideDict): """ Set the value at path in configDict """ key = path[0] if len(path) == 1: overrideDict[key] = cls.coerceOption(configDict, key, value) return if key in configDict: if not isinstance(configDict[key], dict): raise UsageError( "Found intermediate path element that is not a dictionary" ) if key not in overrideDict: overrideDict[key] = {} cls.setOverride( configDict[key], path[1:], value, overrideDict[key] ) def opt_option(self, option): """ Set an option to override a value in the config file. True, False, int, and float options are supported, as well as comma separated lists. Only one option may be given for each --option flag, however multiple --option flags may be specified. 
""" if "=" in option: path, value = option.split("=") self.setOverride( DEFAULT_CONFIG, path.split("/"), value, self.overrides ) else: self.opt_option("{}=True".format(option)) opt_o = opt_option def postOptions(self): try: if geteuid() == 0: self.waitForServerRoot() self.loadConfiguration() self.checkConfiguration() except ConfigurationError, e: print("Invalid configuration:", e) sys.exit(1) def waitForServerRoot(self): """ Block until server root directory is available """ serverRootPath = serverRootLocation() if serverRootPath: checkDirectory( serverRootPath, "Server root", access=W_OK, wait=True # Wait in a loop until ServerRoot exists ) def loadConfiguration(self): if not exists(self["config"]): raise ConfigurationError( "Config file {} not found. Exiting." .format(self["config"]) ) print("Reading configuration from file:", self["config"]) config.load(self["config"]) for path in config.getProvider().importedFiles: print("Imported configuration from file:", path) for path in config.getProvider().includedFiles: print("Adding configuration from file:", path) for path in config.getProvider().missingFiles: print("Missing configuration file:", path) config.updateDefaults(self.overrides) def checkDirectories(self, config): checkDirectories(config) def checkConfiguration(self): uid, gid = None, None if self.parent["uid"] or self.parent["gid"]: uid, gid = getid(self.parent["uid"], self.parent["gid"]) def gottaBeRoot(): if getuid() != 0: username = getpwuid(getuid()).pw_name raise UsageError( "Only root can drop privileges. 
You are: {}" .format(username) ) if uid and uid != getuid(): gottaBeRoot() if gid and gid != getgid(): gottaBeRoot() self.parent["pidfile"] = config.PIDFile self.checkDirectories(config) # Check current umask and warn if changed oldmask = umask(config.umask) if oldmask != config.umask: self.log.info( "WARNING: changing umask from: 0{old:03o} to 0{new:03o}", old=oldmask, new=config.umask ) self.parent["umask"] = config.umask class GroupOwnedUNIXServer(UNIXServer, object): """ A L{GroupOwnedUNIXServer} is a L{UNIXServer} which changes the group ownership of its socket immediately after binding its port. @ivar gid: the group ID which should own the socket after it is bound. """ def __init__(self, gid, *args, **kw): super(GroupOwnedUNIXServer, self).__init__(*args, **kw) self.gid = gid def privilegedStartService(self): """ Bind the UNIX socket and then change its group. """ super(GroupOwnedUNIXServer, self).privilegedStartService() # Unfortunately, there's no public way to access this. -glyph fileName = self._port.port chown(fileName, getuid(), self.gid) class SlaveSpawnerService(Service): """ Service to add all Python subprocesses that need to do work to a L{DelayedStartupProcessMonitor}: - regular slave processes (CalDAV workers) """ def __init__( self, maker, monitor, dispenser, dispatcher, stats, configPath, inheritFDs=None, inheritSSLFDs=None ): self.maker = maker self.monitor = monitor self.dispenser = dispenser self.dispatcher = dispatcher self.stats = stats self.configPath = configPath self.inheritFDs = inheritFDs self.inheritSSLFDs = inheritSSLFDs def startService(self): if config.DirectoryProxy.Enabled: # Add the directory proxy sidecar first so it at least get spawned # prior to the caldavd worker processes: log.info("Adding directory proxy service") dpsArgv = [ sys.executable, sys.argv[0], ] if config.UserName: dpsArgv.extend(("-u", config.UserName)) if config.GroupName: dpsArgv.extend(("-g", config.GroupName)) if config.Profiling.Enabled: 
dpsArgv.append("--profile={}/{}.pstats".format( config.Profiling.BaseDirectory, "dps")) dpsArgv.extend(("--savestats", "--profiler", "cprofile-cpu")) dpsArgv.extend(( "--reactor={}".format(config.Twisted.reactor), "-n", "caldav_directoryproxy", "-f", self.configPath, "-o", "ProcessType=DPS", "-o", "ErrorLogFile=None", "-o", "ErrorLogEnabled=False", )) self.monitor.addProcess( "directoryproxy", dpsArgv, env=PARENT_ENVIRONMENT ) else: log.info("Directory proxy service is not enabled") for slaveNumber in xrange(0, config.MultiProcess.ProcessCount): if config.UseMetaFD: extraArgs = dict( metaSocket=self.dispatcher.addSocket(slaveNumber) ) else: extraArgs = dict(inheritFDs=self.inheritFDs, inheritSSLFDs=self.inheritSSLFDs) if self.dispenser is not None: extraArgs.update(ampSQLDispenser=self.dispenser) process = TwistdSlaveProcess( sys.argv[0], self.maker.tapname, self.configPath, slaveNumber, config.BindAddresses, **extraArgs ) self.monitor.addProcessObject(process, PARENT_ENVIRONMENT) # Hook up the stats service directory to the DPS server after a short delay if self.stats is not None: self.monitor._reactor.callLater(5, self.stats.makeDirectoryProxyClient) class WorkSchedulingService(Service): """ A Service to kick off the initial scheduling of periodic work items. 
""" log = Logger() def __init__(self, store, doImip, doGroupCaching, doPrincipalPurging): """ @param store: the Store to use for enqueuing work @param doImip: whether to schedule imip polling @type doImip: boolean @param doGroupCaching: whether to schedule group caching @type doImip: boolean """ self.store = store self.doImip = doImip self.doGroupCaching = doGroupCaching self.doPrincipalPurging = doPrincipalPurging def startService(self): # Do the actual DB work after the reactor has started up from twisted.internet import reactor reactor.callWhenRunning(self.initializeWork) @inlineCallbacks def initializeWork(self): # Note: the "seconds in the future" args are being set to the LogID # numbers to spread them out. This is only needed until # ultimatelyPerform( ) handles groups correctly. Once that is fixed # these can be set to zero seconds in the future. if self.doImip: yield scheduleNextMailPoll( self.store, int(config.LogID) if config.LogID else 5 ) if self.doGroupCaching: yield GroupCacherPollingWork.initialSchedule( self.store, int(config.LogID) if config.LogID else 5 ) if self.doPrincipalPurging: yield PrincipalPurgePollingWork.initialSchedule( self.store, int(config.LogID) if config.LogID else 5 ) yield FindMinValidRevisionWork.initialSchedule( self.store, int(config.LogID) if config.LogID else 5 ) yield InboxCleanupWork.initialSchedule( self.store, int(config.LogID) if config.LogID else 5 ) yield APNPurgingWork.initialSchedule( self.store, int(config.LogID) if config.LogID else 5 ) class PreProcessingService(Service): """ A Service responsible for running any work that needs to be finished prior to the main service starting. Once that work is done, it instantiates the main service and adds it to the Service hierarchy (specifically to its parent). If the final work step does not return a Failure, that is an indication the store is ready and it is passed to the main service. Otherwise, None is passed to the main service in place of a store. 
This is mostly useful in the case of command line utilities that need to do something different if the store is not available (e.g. utilities that aren't allowed to upgrade the database). """ def __init__( self, serviceCreator, connectionPool, store, logObserver, storageService, reactor=None ): """ @param serviceCreator: callable which will be passed the connection pool, store, and log observer, and should return a Service @param connectionPool: connection pool to pass to serviceCreator @param store: the store object being processed @param logObserver: log observer to pass to serviceCreator @param storageService: the service responsible for starting/stopping the data store """ self.serviceCreator = serviceCreator self.connectionPool = connectionPool self.store = store self.logObserver = logObserver self.storageService = storageService self.stepper = Stepper() if reactor is None: from twisted.internet import reactor self.reactor = reactor def stepWithResult(self, result): """ The final "step"; if we get here we know our store is ready, so we create the main service and pass in the store. """ service = self.serviceCreator( self.connectionPool, self.store, self.logObserver, self.storageService ) if self.parent is not None: self.reactor.callLater(0, service.setServiceParent, self.parent) return succeed(None) def stepWithFailure(self, failure): """ The final "step", but if we get here we know our store is not ready, so we create the main service and pass in a None for the store. """ try: service = self.serviceCreator( self.connectionPool, None, self.logObserver, self.storageService ) if self.parent is not None: self.reactor.callLater( 0, service.setServiceParent, self.parent ) except StoreNotAvailable: log.error("Data store not available; shutting down") if config.ServiceDisablingProgram: # If the store is not available, we don't want launchd to # repeatedly launch us, we want our job to get unloaded. 
            # If the config.ServiceDisablingProgram is assigned and exists
            # we execute it now.  Its job is to carry out the platform-
            # specific tasks of disabling the service.
            if exists(config.ServiceDisablingProgram):
                log.error(
                    "Disabling service via {exe}",
                    exe=config.ServiceDisablingProgram
                )
                Popen(
                    args=[config.ServiceDisablingProgram],
                    stdout=PIPE,
                    stderr=PIPE,
                ).communicate()
                time.sleep(60)

            self.reactor.stop()

        return succeed(None)

    def addStep(self, step):
        """
        Hand the step to our Stepper

        @param step: an object implementing stepWithResult( )
        """
        self.stepper.addStep(step)
        return self

    def startService(self):
        """
        Add ourself as the final step, and then tell the coordinator to start
        working on each step one at a time.
        """
        self.addStep(self)
        self.stepper.start()


class StoreNotAvailable(Exception):
    """
    Raised when we want to give up because the store is not available
    """


class CalDAVServiceMaker (object):
    log = Logger()

    implements(IPlugin, IServiceMaker)

    tapname = "caldav"
    description = "Calendar and Contacts Server"
    options = CalDAVOptions

    def makeService(self, options):
        """
        Create the top-level service.
        """
        self.log.info(
            "{log_source.description} {version} starting "
            "{config.ProcessType} process...",
            version=version, config=config
        )

        try:
            from setproctitle import setproctitle
        except ImportError:
            pass
        else:
            execName = basename(sys.argv[0])

            if config.LogID:
                logID = " #{}".format(config.LogID)
            else:
                logID = ""

            if config.ProcessType != "Utility":
                execName = ""

            setproctitle(
                "CalendarServer {} [{}{}] {}"
                .format(version, config.ProcessType, logID, execName)
            )

        serviceMethod = getattr(
            self, "makeService_{}".format(config.ProcessType), None
        )

        if not serviceMethod:
            raise UsageError(
                "Unknown server type {}. "
                "Please choose: Slave, Single or Combined"
                .format(config.ProcessType)
            )
        else:
            # Always want a thread pool - so start it here before we start
            # anything else so that it is started before any other
            # callWhenRunning callables.  This avoids a race condition that
            # could cause a deadlock with our long-lived ADBAPI2 connections
            # which grab and hold a thread.
            from twisted.internet import reactor
            reactor.getThreadPool()

            #
            # Configure Memcached Client Pool
            #
            memcachepool.installPools(
                config.Memcached.Pools,
                config.Memcached.MaxClients,
            )

            if config.ProcessType in ("Combined", "Single"):
                # Process localization string files
                processLocalizationFiles(config.Localization)

            try:
                service = serviceMethod(options)
            except ConfigurationError, e:
                sys.stderr.write("Configuration error: {}\n".format(e))
                sys.exit(1)

            #
            # Note: if there is a stopped process in the same session
            # as the calendar server and the calendar server is the
            # group leader then when twistd forks to drop privileges a
            # SIGHUP may be sent by the kernel, which can cause the
            # process to exit.  This SIGHUP should be, at a minimum,
            # ignored.
            #

            def location(frame):
                if frame is None:
                    return "Unknown"
                else:
                    return "{frame.f_code.co_name}: {frame.f_lineno}".format(
                        frame=frame
                    )

            if config.Manhole.Enabled:
                namespace = dict({service.name: service})
                for n, s in service.namedServices.iteritems():
                    namespace[n] = s
                self._makeManhole(namespace=namespace, parent=service)

            return service

    def createContextFactory(self):
        """
        Create an SSL context factory for use with any SSL socket talking to
        this server.
        """
        return ChainingOpenSSLContextFactory(
            config.SSLPrivateKey,
            config.SSLCertificate,
            certificateChainFile=config.SSLAuthorityChain,
            passwdCallback=getSSLPassphrase,
            keychainIdentity=config.SSLKeychainIdentity,
            sslmethod=getattr(OpenSSL.SSL, config.SSLMethod),
            ciphers=config.SSLCiphers.strip(),
            verifyClient=config.Authentication.ClientCertificate.Enabled,
            requireClientCertificate=config.Authentication.ClientCertificate.Required,
            clientCACertFileNames=config.Authentication.ClientCertificate.CAFiles,
            sendCAsToClient=config.Authentication.ClientCertificate.SendCAsToClient,
        )

    def makeService_Slave(self, options):
        """
        Create a "slave" service, a subprocess of a service created with
        L{makeService_Combined}, which does the work of actually handling
        CalDAV and CardDAV requests.
        """
        pool, txnFactory = getDBPool(config)
        if config.DirectoryProxy.Enabled:
            store = storeFromConfigWithDPSClient(config, txnFactory)
        else:
            store = storeFromConfigWithoutDPS(config, txnFactory)
        directory = store.directoryService()
        logObserver = AMPCommonAccessLoggingObserver()
        result = self.requestProcessingService(options, store, logObserver)

        # SIGUSR1 causes in-process directory cache reset
        def flushDirectoryCache(signalNum, ignored):
            if config.EnableControlAPI:
                directory.flush()

        signal.signal(signal.SIGUSR1, flushDirectoryCache)

        if pool is not None:
            pool.setName("db")
            pool.setServiceParent(result)

        if config.ControlSocket:
            id = config.ControlSocket
            self.log.info("Control via AF_UNIX: {id}", id=id)
            endpointFactory = lambda reactor: UNIXClientEndpoint(
                reactor, id
            )
        else:
            id = int(config.ControlPort)
            self.log.info("Control via AF_INET: {id}", id=id)
            endpointFactory = lambda reactor: TCP4ClientEndpoint(
                reactor, "127.0.0.1", id
            )
        controlSocketClient = ControlSocket()

        class LogClient(AMP):
            def startReceivingBoxes(self, sender):
                super(LogClient, self).startReceivingBoxes(sender)
                logObserver.addClient(self)

        f = Factory()
        f.protocol = LogClient
        controlSocketClient.addFactory(_LOG_ROUTE, f)

        from txdav.common.datastore.sql import CommonDataStore as SQLStore

        if isinstance(store, SQLStore):
            def queueMasterAvailable(connectionFromMaster):
                store.queuer = connectionFromMaster

            queueFactory = QueueWorkerFactory(
                store.newTransaction, queueMasterAvailable
            )
            controlSocketClient.addFactory(_QUEUE_ROUTE, queueFactory)

        controlClient = ControlSocketConnectingService(
            endpointFactory, controlSocketClient
        )
        controlClient.setName("control")
        controlClient.setServiceParent(result)

        # Optionally set up push notifications
        pushDistributor = None
        if config.Notifications.Enabled:
            observers = []
            if config.Notifications.Services.APNS.Enabled:
                pushSubService = ApplePushNotifierService.makeService(
                    config.Notifications.Services.APNS, store)
                observers.append(pushSubService)
                pushSubService.setName("APNS")
                pushSubService.setServiceParent(result)
            if config.Notifications.Services.AMP.Enabled:
                pushSubService = AMPPushForwarder(controlSocketClient)
                observers.append(pushSubService)
            if observers:
                pushDistributor = PushDistributor(observers)

        # Optionally set up mail retrieval
        if config.Scheduling.iMIP.Enabled:
            mailRetriever = MailRetriever(
                store, directory, config.Scheduling.iMIP.Receiving
            )
            mailRetriever.setName("MailRetriever")
            mailRetriever.setServiceParent(result)
        else:
            mailRetriever = None

        # Optionally set up group cacher
        if config.GroupCaching.Enabled:
            cacheNotifier = MemcacheURLPatternChangeNotifier(
                "/principals/__uids__/{token}/",
                cacheHandle="PrincipalToken"
            ) if config.EnableResponseCache else None
            groupCacher = GroupCacher(
                directory,
                updateSeconds=config.GroupCaching.UpdateSeconds,
                initialSchedulingDelaySeconds=config.GroupCaching.InitialSchedulingDelaySeconds,
                batchSize=config.GroupCaching.BatchSize,
                batchSchedulingIntervalSeconds=config.GroupCaching.BatchSchedulingIntervalSeconds,
                useDirectoryBasedDelegates=config.GroupCaching.UseDirectoryBasedDelegates,
                cacheNotifier=cacheNotifier,
            )
        else:
            groupCacher = None

        # Allow worker to post alerts to master
        AlertPoster.setupForWorker(controlSocketClient)

        def decorateTransaction(txn):
            txn._pushDistributor = pushDistributor
            txn._rootResource = result.rootResource
            txn._mailRetriever = mailRetriever
            txn._groupCacher = groupCacher

        store.callWithNewTransactions(decorateTransaction)

        return result

    def requestProcessingService(self, options, store, logObserver):
        """
        Make a service that will actually process HTTP requests.

        This may be a 'Slave' service, which runs as a worker subprocess of
        the 'Combined' configuration, or a 'Single' service, which is a
        stand-alone process that answers CalDAV/CardDAV requests by itself.
        """
        #
        # Change default log level to "info" as it's useful to have
        # that during startup
        #
        oldLogLevel = log.levels().logLevelForNamespace(None)
        log.levels().setLogLevelForNamespace(None, LogLevel.info)

        # Note: 'additional' was used for the IMIP reply resource, and
        # perhaps we can remove this
        additional = []

        #
        # Configure the service
        #
        self.log.info("Setting up service")

        self.log.info(
            "Configuring access log observer: {observer}",
            observer=logObserver
        )
        service = CalDAVService(logObserver)

        rootResource = getRootResource(config, store, additional)
        service.rootResource = rootResource

        underlyingSite = Site(rootResource)

        # Need to cache SSL port info here so we can access it in a Request
        # to deal with the possibility of being behind an SSL decoder
        underlyingSite.EnableSSL = config.EnableSSL
        underlyingSite.BehindTLSProxy = config.BehindTLSProxy
        underlyingSite.SSLPort = config.SSLPort
        underlyingSite.BindSSLPorts = config.BindSSLPorts

        requestFactory = underlyingSite

        if (config.EnableSSL or config.BehindTLSProxy) and config.RedirectHTTPToHTTPS:
            self.log.info(
                "Redirecting to HTTPS port {port}", port=config.SSLPort
            )

            def requestFactory(*args, **kw):
                return SSLRedirectRequest(site=underlyingSite, *args, **kw)

        # Setup HTTP connection behaviors
        HTTPChannel.allowPersistentConnections = config.EnableKeepAlive
        HTTPChannel.betweenRequestsTimeOut = config.PipelineIdleTimeOut
        HTTPChannel.inputTimeOut = config.IncomingDataTimeOut
        HTTPChannel.idleTimeOut = config.IdleConnectionTimeOut
        HTTPChannel.closeTimeOut = config.CloseConnectionTimeOut

        # Add the Strict-Transport-Security header to all secured requests
        # if enabled.
        if config.StrictTransportSecuritySeconds:
            previousRequestFactory = requestFactory

            def requestFactory(*args, **kw):
                request = previousRequestFactory(*args, **kw)

                def responseFilter(ignored, response):
                    ignored, secure = request.chanRequest.getHostInfo()
                    if secure:
                        response.headers.addRawHeader(
                            "Strict-Transport-Security",
                            "max-age={max_age:d}".format(
                                max_age=config.StrictTransportSecuritySeconds
                            )
                        )
                    return response

                responseFilter.handleErrors = True
                request.addResponseFilter(responseFilter)
                return request

        httpFactory = LimitingHTTPFactory(
            requestFactory,
            maxRequests=config.MaxRequests,
            maxAccepts=config.MaxAccepts,
            betweenRequestsTimeOut=config.IdleConnectionTimeOut,
            vary=True,
        )

        def updateFactory(configDict, reloading=False):
            httpFactory.maxRequests = configDict.MaxRequests
            httpFactory.maxAccepts = configDict.MaxAccepts

        config.addPostUpdateHooks((updateFactory,))

        # Bundle the various connection services within a single MultiService
        # that can be stopped before the others for graceful shutdown.
        connectionService = MultiService()
        connectionService.setName(CalDAVService.connectionServiceName)
        connectionService.setServiceParent(service)

        # Service to schedule initial work
        WorkSchedulingService(
            store,
            config.Scheduling.iMIP.Enabled,
            config.GroupCaching.Enabled,
            config.AutomaticPurging.Enabled
        ).setServiceParent(service)

        # For calendarserver.tap.test.test_caldav
        # .BaseServiceMakerTests.getSite():
        connectionService.underlyingSite = underlyingSite

        if config.InheritFDs or config.InheritSSLFDs:
            # Inherit sockets to call accept() on them individually.
            if config.EnableSSL:
                for fdAsStr in config.InheritSSLFDs:
                    try:
                        contextFactory = self.createContextFactory()
                    except SSLError, e:
                        log.error(
                            "Unable to set up SSL context factory: {error}",
                            error=e
                        )
                    else:
                        MaxAcceptSSLServer(
                            int(fdAsStr), httpFactory,
                            contextFactory,
                            backlog=config.ListenBacklog,
                            inherit=True
                        ).setServiceParent(connectionService)

            for fdAsStr in config.InheritFDs:
                MaxAcceptTCPServer(
                    int(fdAsStr), httpFactory,
                    backlog=config.ListenBacklog,
                    inherit=True
                ).setServiceParent(connectionService)

        elif config.MetaFD:
            # Inherit a single socket to receive accept()ed connections via
            # recvmsg() and SCM_RIGHTS.

            if config.SocketFiles.Enabled:
                # TLS will be handled by a front-end web proxy
                contextFactory = None
            else:
                try:
                    contextFactory = self.createContextFactory()
                except SSLError, e:
                    self.log.error(
                        "Unable to set up SSL context factory: {error}",
                        error=e
                    )
                    # None is okay as a context factory for
                    # ReportingHTTPService as long as we will never receive a
                    # file descriptor with the 'SSL' tag on it, since that's
                    # the only time it's used.
                    contextFactory = None

            reportingService = ReportingHTTPService(
                requestFactory, int(config.MetaFD), contextFactory,
                usingSocketFile=config.SocketFiles.Enabled,
            )
            reportingService.setName("http-{}".format(int(config.MetaFD)))
            reportingService.setServiceParent(connectionService)

        else:
            # Not inheriting, therefore we open our own:
            for bindAddress in self._allBindAddresses():
                self._validatePortConfig()
                if config.EnableSSL:
                    for port in config.BindSSLPorts:
                        self.log.info(
                            "Adding SSL server at {address}:{port}",
                            address=bindAddress, port=port
                        )

                        try:
                            contextFactory = self.createContextFactory()
                        except SSLError, e:
                            self.log.error(
                                "Unable to set up SSL context factory: "
                                "{error} Disabling SSL port: {port}",
                                error=e, port=port
                            )
                        else:
                            httpsService = MaxAcceptSSLServer(
                                int(port), httpFactory,
                                contextFactory, interface=bindAddress,
                                backlog=config.ListenBacklog,
                                inherit=False
                            )
                            httpsService.setName(
                                "https-{}:{}".format(bindAddress, int(port)))
                            httpsService.setServiceParent(connectionService)

                for port in config.BindHTTPPorts:
                    MaxAcceptTCPServer(
                        int(port), httpFactory,
                        interface=bindAddress,
                        backlog=config.ListenBacklog,
                        inherit=False
                    ).setServiceParent(connectionService)

        # Change log level back to what it was before
        log.levels().setLogLevelForNamespace(None, oldLogLevel)
        return service

    def _validatePortConfig(self):
        """
        If BindHTTPPorts is specified, HTTPPort must also be specified to
        indicate which is the preferred port (the one to be used in URL
        generation, etc).  If only HTTPPort is specified, BindHTTPPorts
        should be set to a list containing only that port number.  Similarly
        for BindSSLPorts/SSLPort.

        @raise UsageError: if configuration is not valid.
        """
        if config.BindHTTPPorts:
            if config.HTTPPort == 0:
                raise UsageError(
                    "HTTPPort required if BindHTTPPorts is not empty"
                )
        elif config.HTTPPort != 0:
            config.BindHTTPPorts = [config.HTTPPort]

        if config.BindSSLPorts:
            if config.SSLPort == 0:
                raise UsageError(
                    "SSLPort required if BindSSLPorts is not empty"
                )
        elif config.SSLPort != 0:
            config.BindSSLPorts = [config.SSLPort]

    def _allBindAddresses(self):
        """
        An empty array for the config value of BindAddresses should be
        equivalent to an array containing two BindAddresses; one with a
        single empty string, and one with "::", meaning "bind everything on
        both IPv4 and IPv6".
        """
        if not config.BindAddresses:
            if getattr(socket, "has_ipv6", False):
                if conflictBetweenIPv4AndIPv6():
                    # If there's a conflict between v4 and v6, then almost by
                    # definition, v4 is mapped into the v6 space, so we will
                    # listen "only" on v6.
                    config.BindAddresses = ["::"]
                else:
                    config.BindAddresses = ["", "::"]
            else:
                config.BindAddresses = [""]
        return config.BindAddresses

    def _spawnMemcached(self, monitor=None):
        """
        Optionally start memcached through the specified ProcessMonitor, or
        if monitor is None, use Popen.
        """
        for name, pool in config.Memcached.Pools.items():
            if pool.ServerEnabled:
                memcachedArgv = [
                    config.Memcached.memcached,
                    "-U", "0",
                ]
                # Use Unix domain sockets by default
                if pool.get("MemcacheSocket"):
                    memcachedArgv.extend([
                        "-s", str(pool.MemcacheSocket),
                    ])
                else:
                    # ... or INET sockets
                    memcachedArgv.extend([
                        "-p", str(pool.Port),
                        "-l", pool.BindAddress,
                    ])
                if config.Memcached.MaxMemory != 0:
                    memcachedArgv.extend(
                        ["-m", str(config.Memcached.MaxMemory)]
                    )
                if config.UserName:
                    memcachedArgv.extend(["-u", config.UserName])
                memcachedArgv.extend(config.Memcached.Options)
                if monitor is not None:
                    monitor.addProcess(
                        "memcached-{}".format(name), memcachedArgv,
                        env=PARENT_ENVIRONMENT
                    )
                else:
                    Popen(memcachedArgv)

    def _makeManhole(self, namespace=None, parent=None):
        try:
            import inspect
            import objgraph
        except ImportError:
            pass

        try:
            if 'inspect' in locals():
                namespace['ins'] = inspect
            if 'objgraph' in locals():
                namespace['og'] = objgraph
            from pprint import pprint
            namespace.update({
                'pp': pprint,
                'cfg': config,
            })
            from twisted.conch.manhole_tap import (
                makeService as manholeMakeService
            )
            portOffset = 0 if config.LogID == '' else int(config.LogID) + 1
            portString = "tcp:{:d}:interface=127.0.0.1".format(
                config.Manhole.StartingPortNumber + portOffset
            )
            manholeService = manholeMakeService({
                "passwd": config.Manhole.PasswordFilePath,
                "telnetPort": portString if config.Manhole.UseSSH is False else None,
                "sshPort": portString if config.Manhole.UseSSH is True else None,
                "sshKeyDir": config.DataRoot,
                "sshKeyName": config.Manhole.sshKeyName,
                "sshKeySize": config.Manhole.sshKeySize,
                "namespace": namespace,
            })
            manholeService.setName("manhole")
            if parent is not None:
                manholeService.setServiceParent(parent)
            # Using print() because logging isn't ready at this point
            print("Manhole access enabled:", portString)

        except ImportError:
            print(
                "Manhole access could not be enabled because "
                "manhole_tap could not be imported."
            )
            import platform
            if platform.system() == "Darwin":
                if config.Manhole.UseSSH:
                    print(
                        "Set Manhole.UseSSH to false or rebuild CS with the "
                        "USE_OPENSSL environment variable set."
                    )

    def makeService_Single(self, options):
        """
        Create a service to be used in a single-process, stand-alone
        configuration.  Memcached will be spawned automatically.
        """
        def slaveSvcCreator(pool, store, logObserver, storageService):
            if store is None:
                raise StoreNotAvailable()

            result = self.requestProcessingService(
                options, store, logObserver
            )

            # Optionally set up push notifications
            pushDistributor = None
            if config.Notifications.Enabled:
                observers = []
                if config.Notifications.Services.APNS.Enabled:
                    pushSubService = ApplePushNotifierService.makeService(
                        config.Notifications.Services.APNS, store
                    )
                    observers.append(pushSubService)
                    pushSubService.setName("APNS")
                    pushSubService.setServiceParent(result)
                if config.Notifications.Services.AMP.Enabled:
                    pushSubService = AMPPushMaster(
                        None, result,
                        config.Notifications.Services.AMP.Port,
                        config.Notifications.Services.AMP.EnableStaggering,
                        config.Notifications.Services.AMP.StaggerSeconds,
                    )
                    observers.append(pushSubService)
                if observers:
                    pushDistributor = PushDistributor(observers)

            directory = store.directoryService()

            # Job queues always required
            from twisted.internet import reactor
            pool = ControllerQueue(
                reactor, store.newTransaction,
                useWorkerPool=False,
                disableWorkProcessing=config.MigrationOnly,
            )
            store.queuer = store.pool = pool
            pool.setServiceParent(result)

            # Optionally set up mail retrieval
            if config.Scheduling.iMIP.Enabled:
                mailRetriever = MailRetriever(
                    store, directory, config.Scheduling.iMIP.Receiving
                )
                mailRetriever.setName("mailRetriever")
                mailRetriever.setServiceParent(result)
            else:
                mailRetriever = None

            # Start listening on the stats socket, for administrators to
            # inspect the current stats on the server.
            stats = None
            if config.Stats.EnableUnixStatsSocket:
                stats = DashboardServer(logObserver, None)
                stats.store = store
                statsService = GroupOwnedUNIXServer(
                    gid, config.Stats.UnixStatsSocket, stats, mode=0660
                )
                statsService.setName("unix-stats")
                statsService.setServiceParent(result)

            if config.Stats.EnableTCPStatsSocket:
                stats = DashboardServer(logObserver, None)
                stats.store = store
                statsService = TCPServer(
                    config.Stats.TCPStatsPort, stats, interface=""
                )
                statsService.setName("tcp-stats")
                statsService.setServiceParent(result)

            # Optionally set up group cacher
            if config.GroupCaching.Enabled:
                cacheNotifier = MemcacheURLPatternChangeNotifier(
                    "/principals/__uids__/{token}/",
                    cacheHandle="PrincipalToken"
                ) if config.EnableResponseCache else None
                groupCacher = GroupCacher(
                    directory,
                    updateSeconds=config.GroupCaching.UpdateSeconds,
                    initialSchedulingDelaySeconds=config.GroupCaching.InitialSchedulingDelaySeconds,
                    batchSize=config.GroupCaching.BatchSize,
                    batchSchedulingIntervalSeconds=config.GroupCaching.BatchSchedulingIntervalSeconds,
                    useDirectoryBasedDelegates=config.GroupCaching.UseDirectoryBasedDelegates,
                    cacheNotifier=cacheNotifier,
                )
            else:
                groupCacher = None

            def decorateTransaction(txn):
                txn._pushDistributor = pushDistributor
                txn._rootResource = result.rootResource
                txn._mailRetriever = mailRetriever
                txn._groupCacher = groupCacher

            store.callWithNewTransactions(decorateTransaction)

            return result

        uid, gid = getSystemIDs(config.UserName, config.GroupName)

        # Make sure no old socket files are lying around.
        self.deleteStaleSocketFiles()

        logObserver = RotatingFileAccessLoggingObserver(
            config.AccessLogFile,
        )

        # Maybe spawn memcached.  Note, this is not going through a
        # ProcessMonitor because there is code elsewhere that needs to
        # access memcached before startService() gets called
        self._spawnMemcached(monitor=None)

        return self.storageService(
            slaveSvcCreator, logObserver, uid=uid, gid=gid
        )

    def makeService_Utility(self, options):
        """
        Create a service to be used in a command-line utility

        Specify the actual utility service class in
        config.UtilityServiceClass.  When created, that service will have
        access to the storage facilities.
        """

        def toolServiceCreator(pool, store, ignored, storageService):
            return config.UtilityServiceClass(store)

        uid, gid = getSystemIDs(config.UserName, config.GroupName)
        return self.storageService(
            toolServiceCreator, None, uid=uid, gid=gid
        )

    def makeService_Agent(self, options):
        """
        Create an agent service which listens for configuration requests
        """

        # Don't use memcached initially -- calendar server might take it away
        # at any moment.  However, when we run a command through the gateway,
        # it will conditionally set ClientEnabled at that time.
        def agentPostUpdateHook(configDict, reloading=False):
            configDict.Memcached.Pools.Default.ClientEnabled = False

        config.addPostUpdateHooks((agentPostUpdateHook,))
        config.reload()

        # Verify that server root actually exists and is not phantom
        checkDirectory(
            config.ServerRoot,
            "Server root",
            access=W_OK,
            # Wait in a loop until ServerRoot exists and is not phantom
            wait=True
        )

        # These we need to set in order to open the store
        config.EnableCalDAV = config.EnableCardDAV = True

        def agentServiceCreator(pool, store, ignored, storageService):
            from calendarserver.tools.agent import makeAgentService
            if storageService is not None:
                # Shut down if DataRoot becomes unavailable
                from twisted.internet import reactor
                dataStoreWatcher = DirectoryChangeListener(
                    reactor,
                    config.DataRoot,
                    DataStoreMonitor(reactor, storageService)
                )
                dataStoreWatcher.startListening()
            if store is not None:
                store.queuer = NonPerformingQueuer()
            return makeAgentService(store)

        uid, gid = getSystemIDs(config.UserName, config.GroupName)
        svc = self.storageService(
            agentServiceCreator, None, uid=uid, gid=gid
        )
        agentLoggingService = ErrorLoggingMultiService(
            config.ErrorLogEnabled,
            config.AgentLogFile,
            config.ErrorLogRotateMB * 1024 * 1024,
            config.ErrorLogMaxRotatedFiles,
            config.ErrorLogRotateOnStart,
        )
        svc.setName("agent")
        svc.setServiceParent(agentLoggingService)
        return agentLoggingService

    def storageService(
        self, createMainService, logObserver, uid=None, gid=None
    ):
        """
        If necessary, create a service to be started used for storage; for
        example, starting a database backend.  This service will then start
        the main service.

        This has the effect of delaying any child process spawning or
        stand-alone port-binding until the backing for the selected data
        store implementation is ready to process requests.

        @param createMainService: This is the service that will be doing the
            main work of the current process.  If the configured storage mode
            does not require any particular setup, then this may return the
            C{mainService} argument.
        @type createMainService: C{callable} that takes C{(connectionPool,
            store)} and returns L{IService}

        @param uid: the user ID to run the backend as, if this process is
            running as root (also the uid to chown Attachments to).
        @type uid: C{int}

        @param gid: the group ID to run the backend as, if this process is
            running as root (also the gid to chown Attachments to).
        @type gid: C{int}

        @param directory: The directory service to use.
        @type directory: L{IStoreDirectoryService} or None

        @return: the appropriate service to start.
        @rtype: L{IService}
        """

        def createSubServiceFactory(dbtype, dbfeatures=()):
            if dbtype == "":
                dialect = POSTGRES_DIALECT
                paramstyle = "pyformat"
            elif dbtype == "postgres":
                dialect = POSTGRES_DIALECT
                paramstyle = "pyformat"
            elif dbtype == "oracle":
                dialect = ORACLE_DIALECT
                paramstyle = "numeric"

            def subServiceFactory(connectionFactory, storageService):
                ms = MultiService()
                cp = ConnectionPool(
                    connectionFactory,
                    dbtype=DatabaseType(dialect, paramstyle, dbfeatures),
                    maxConnections=config.MaxDBConnectionsPerPool
                )
                cp.setName("db")
                cp.setServiceParent(ms)
                store = storeFromConfigWithoutDPS(config, cp.connection)

                pps = PreProcessingService(
                    createMainService, cp, store, logObserver, storageService
                )

                # The following "steps" will run sequentially when the
                # service hierarchy is started.  If any of the steps raise an
                # exception the subsequent steps' stepWithFailure methods
                # will be called instead, until one of them returns a
                # non-Failure.
                pps.addStep(
                    UpgradeAcquireLockStep(store)
                )

                # Still need this for Snow Leopard support
                pps.addStep(
                    UpgradeFileSystemFormatStep(config, store)
                )

                pps.addStep(
                    UpgradeDatabaseSchemaStep(
                        store, uid=overrideUID, gid=overrideGID,
                        failIfUpgradeNeeded=config.FailIfUpgradeNeeded,
                        checkExistingSchema=config.CheckExistingSchema,
                    )
                )

                pps.addStep(
                    UpgradeDatabaseAddressBookDataStep(
                        store, uid=overrideUID, gid=overrideGID
                    )
                )

                pps.addStep(
                    UpgradeDatabaseCalendarDataStep(
                        store, uid=overrideUID, gid=overrideGID
                    )
                )

                pps.addStep(
                    UpgradeDatabaseNotificationDataStep(
                        store, uid=overrideUID, gid=overrideGID
                    )
                )

                pps.addStep(
                    UpgradeToDatabaseStep(
                        UpgradeToDatabaseStep.fileStoreFromPath(
                            CachingFilePath(config.DocumentRoot)
                        ),
                        store, uid=overrideUID, gid=overrideGID,
                        merge=config.MergeUpgrades
                    )
                )

                pps.addStep(
                    UpgradeDatabaseOtherStep(
                        store, uid=overrideUID, gid=overrideGID
                    )
                )

                pps.addStep(
                    PostDBImportStep(
                        store, config, getattr(self, "doPostImport", True)
                    )
                )

                pps.addStep(
                    UpgradeReleaseLockStep(store)
                )

                pps.setName("pre")
                pps.setServiceParent(ms)
                return ms

            return subServiceFactory

        # FIXME: this is replicating the logic of getDBPool(), except for the
        # part where the pgServiceFromConfig service is actually started
        # here, and discarded in that function.  This should be refactored to
        # simply use getDBPool.

        if config.UseDatabase:

            if getuid() == 0:  # Only override if root
                overrideUID = uid
                overrideGID = gid
            else:
                overrideUID = None
                overrideGID = None

            if config.DBType == '':
                # Spawn our own database as an inferior process, then connect
                # to it.
                pgserv = pgServiceFromConfig(
                    config,
                    createSubServiceFactory("", config.DBFeatures),
                    uid=overrideUID, gid=overrideGID
                )
                return pgserv
            else:
                # Connect to a database that is already running.
                return createSubServiceFactory(
                    config.DBType, config.DBFeatures
                )(
                    DBAPIConnector.connectorFor(
                        config.DBType, **config.DatabaseConnection
                    ).connect,
                    None
                )
        else:
            store = storeFromConfig(config, None, None)
            return createMainService(None, store, logObserver, None)

    def makeService_Combined(self, options):
        """
        Create a master service to coordinate a multi-process configuration,
        spawning subprocesses that use L{makeService_Slave} to perform work.
        """
        s = ErrorLoggingMultiService(
            config.ErrorLogEnabled,
            config.ErrorLogFile,
            config.ErrorLogRotateMB * 1024 * 1024,
            config.ErrorLogMaxRotatedFiles,
            config.ErrorLogRotateOnStart,
        )

        # Perform early pre-flight checks.  If this returns True, continue
        # on.  Otherwise one of two things will happen:
        #   A) preFlightChecks( ) will have registered an "after startup"
        #      system event trigger to request a shutdown via the
        #      ServiceDisablingProgram, and we want to return our
        #      ErrorLoggingMultiService so that logging is set up enough to
        #      emit our reason for shutting down into the error.log.
        #   B) preFlightChecks( ) will sys.exit(1) if there is not a
        #      ServiceDisablingProgram configured.
        if not preFlightChecks(config):
            return s

        # Add a service to re-exec the master when it receives SIGHUP
        ReExecService(config.PIDFile).setServiceParent(s)

        # Make sure no old socket files are lying around.
        self.deleteStaleSocketFiles()

        # The logger service must come before the monitor service, otherwise
        # we won't know which logging port to pass to the slaves' command
        # lines
        logger = AMPLoggingFactory(
            RotatingFileAccessLoggingObserver(config.AccessLogFile)
        )

        if config.GroupName:
            try:
                gid = getgrnam(config.GroupName).gr_gid
            except KeyError:
                raise ConfigurationError(
                    "Invalid group name: {}".format(config.GroupName)
                )
        else:
            gid = getgid()

        if config.UserName:
            try:
                uid = getpwnam(config.UserName).pw_uid
            except KeyError:
                raise ConfigurationError(
                    "Invalid user name: {}".format(config.UserName)
                )
        else:
            uid = getuid()

        controlSocket = ControlSocket()
        controlSocket.addFactory(_LOG_ROUTE, logger)

        # Allow master to receive alert posts from workers
        AlertPoster.setupForMaster(controlSocket)

        # Optionally set up AMPPushMaster
        if (
            config.Notifications.Enabled and
            config.Notifications.Services.AMP.Enabled
        ):
            ampSettings = config.Notifications.Services.AMP
            AMPPushMaster(
                controlSocket,
                s,
                ampSettings["Port"],
                ampSettings["EnableStaggering"],
                ampSettings["StaggerSeconds"]
            )

        if config.ControlSocket:
            controlSocketService = GroupOwnedUNIXServer(
                gid, config.ControlSocket, controlSocket, mode=0660
            )
        else:
            controlSocketService = ControlPortTCPServer(
                config.ControlPort, controlSocket, interface="127.0.0.1"
            )
        controlSocketService.setName(_CONTROL_SERVICE_NAME)
        controlSocketService.setServiceParent(s)

        monitor = DelayedStartupProcessMonitor()
        s.processMonitor = monitor
        monitor.setName("pm")
        monitor.setServiceParent(s)

        # Allow cache flushing
        def forwardSignalToWorkers(signalNum, ignored):
            if config.EnableControlAPI:
                monitor.signalAll(signalNum, startswithname="caldav")

        signal.signal(signal.SIGUSR1, forwardSignalToWorkers)

        if config.MemoryLimiter.Enabled:
            memoryLimiter = MemoryLimitService(
                monitor, config.MemoryLimiter.Seconds,
                config.MemoryLimiter.Bytes,
                config.MemoryLimiter.ResidentOnly
            )
            memoryLimiter.setName("ml")
            memoryLimiter.setServiceParent(s)

        # Maybe spawn memcached through a ProcessMonitor
        self._spawnMemcached(monitor=monitor)

        # Open the socket(s) to be inherited by the slaves
        inheritFDs = []
        inheritSSLFDs = []

        if config.UseMetaFD:
            cl = ConnectionLimiter(
                config.MaxAccepts,
                (config.MaxRequests * config.MultiProcess.ProcessCount)
            )
            dispatcher = cl.dispatcher
        else:
            # keep a reference to these so they don't close
            s._inheritedSockets = []
            dispatcher = None

        if config.SocketFiles.Enabled:
            if config.SocketFiles.Secured:
                # TLS-secured requests will arrive via this Unix domain
                # socket file
                cl.addSocketFileService(
                    "SSL", config.SocketFiles.Secured, config.ListenBacklog
                )
            if config.SocketFiles.Unsecured:
                # Unsecured requests will arrive via this Unix domain
                # socket file
                cl.addSocketFileService(
                    "TCP", config.SocketFiles.Unsecured,
                    config.ListenBacklog
                )
        else:
            for bindAddress in self._allBindAddresses():
                self._validatePortConfig()
                if config.UseMetaFD:
                    portsList = [(config.BindHTTPPorts, "TCP")]
                    if config.EnableSSL:
                        portsList.append((config.BindSSLPorts, "SSL"))
                    for ports, description in portsList:
                        for port in ports:
                            cl.addPortService(
                                description, port, bindAddress,
                                config.ListenBacklog
                            )
                else:
                    def _openSocket(addr, port):
                        log.info(
                            "Opening socket for inheritance at "
                            "{address}:{port}",
                            address=addr, port=port
                        )
                        sock = socket.socket(
                            socket.AF_INET, socket.SOCK_STREAM
                        )
                        sock.setblocking(0)
                        sock.setsockopt(
                            socket.SOL_SOCKET, socket.SO_REUSEADDR, 1
                        )
                        sock.bind((addr, port))
                        sock.listen(config.ListenBacklog)
                        s._inheritedSockets.append(sock)
                        return sock

                    for portNum in config.BindHTTPPorts:
                        sock = _openSocket(bindAddress, int(portNum))
                        inheritFDs.append(sock.fileno())

                    if config.EnableSSL:
                        for portNum in config.BindSSLPorts:
                            sock = _openSocket(bindAddress, int(portNum))
                            inheritSSLFDs.append(sock.fileno())

        # Start listening on the stats socket, for administrators to inspect
        # the current stats on the server.
        stats = None
        if config.Stats.EnableUnixStatsSocket:
            stats = DashboardServer(
                logger.observer, cl if config.UseMetaFD else None
            )
            statsService = GroupOwnedUNIXServer(
                gid, config.Stats.UnixStatsSocket, stats, mode=0660
            )
            statsService.setName("unix-stats")
            statsService.setServiceParent(s)
        elif config.Stats.EnableTCPStatsSocket:
            stats = DashboardServer(
                logger.observer, cl if config.UseMetaFD else None
            )
            statsService = TCPServer(
                config.Stats.TCPStatsPort, stats, interface=""
            )
            statsService.setName("tcp-stats")
            statsService.setServiceParent(s)

        # Finally, let's get the real show on the road.  Create a service
        # that will spawn all of our worker processes when started, and wrap
        # that service in zero to two necessary layers before it's started:
        # first, the service which spawns a subsidiary database (if that's
        # necessary, and we don't have an external, already-running database
        # to connect to), and second, the service which does an upgrade from
        # the filesystem to the database (if that's necessary, and there is
        # filesystem data in need of upgrading).
        def spawnerSvcCreator(pool, store, ignored, storageService):
            if store is None:
                raise StoreNotAvailable()

            if stats is not None:
                stats.store = store

            from twisted.internet import reactor
            pool = ControllerQueue(
                reactor, store.newTransaction,
                disableWorkProcessing=config.MigrationOnly,
            )

            # The master should not perform queued work
            store.queuer = NonPerformingQueuer()
            store.pool = pool

            controlSocket.addFactory(
                _QUEUE_ROUTE, pool.workerListenerFactory()
            )
            # TODO: now that we have the shared control socket, we should get
            # rid of the connection dispenser and make a shared / async
            # connection pool implementation that can dispense transactions
            # synchronously as the interface requires.
            if pool is not None and config.SharedConnectionPool:
                self.log.warn("Using Shared Connection Pool")
                dispenser = ConnectionDispenser(pool)
            else:
                dispenser = None

            multi = MultiService()
            multi.setName("multi")
            pool.setServiceParent(multi)

            spawner = SlaveSpawnerService(
                self, monitor, dispenser, dispatcher, stats,
                options["config"],
                inheritFDs=inheritFDs, inheritSSLFDs=inheritSSLFDs
            )
            spawner.setName("spawner")
            spawner.setServiceParent(multi)

            if config.UseMetaFD:
                cl.setServiceParent(multi)

            directory = store.directoryService()

            rootResource = getRootResource(config, store, [])

            # Optionally set up mail retrieval
            if config.Scheduling.iMIP.Enabled:
                mailRetriever = MailRetriever(
                    store, directory, config.Scheduling.iMIP.Receiving
                )
                mailRetriever.setName("MailRetriever")
                mailRetriever.setServiceParent(multi)
            else:
                mailRetriever = None

            # Optionally set up group cacher
            if config.GroupCaching.Enabled:
                cacheNotifier = MemcacheURLPatternChangeNotifier(
                    "/principals/__uids__/{token}/",
                    cacheHandle="PrincipalToken"
                ) if config.EnableResponseCache else None
                groupCacher = GroupCacher(
                    directory,
                    updateSeconds=config.GroupCaching.UpdateSeconds,
                    initialSchedulingDelaySeconds=config.GroupCaching.InitialSchedulingDelaySeconds,
                    batchSize=config.GroupCaching.BatchSize,
                    batchSchedulingIntervalSeconds=config.GroupCaching.BatchSchedulingIntervalSeconds,
                    useDirectoryBasedDelegates=config.GroupCaching.UseDirectoryBasedDelegates,
                    cacheNotifier=cacheNotifier,
                )
            else:
                groupCacher = None

            def decorateTransaction(txn):
                txn._pushDistributor = None
                txn._rootResource = rootResource
                txn._mailRetriever = mailRetriever
                txn._groupCacher = groupCacher

            store.callWithNewTransactions(decorateTransaction)

            return multi

        ssvc = self.storageService(
            spawnerSvcCreator, None, uid, gid
        )
        ssvc.setName("ssvc")
        ssvc.setServiceParent(s)
        return s

    def deleteStaleSocketFiles(self):

        # Check all socket files we use.
        for checkSocket in [
            config.ControlSocket, config.Stats.UnixStatsSocket
        ]:
            # See if the file exists.
if (exists(checkSocket)): # See if the file represents a socket. If not, delete it. if (not S_ISSOCK(stat(checkSocket).st_mode)): self.log.warn( "Deleting stale socket file (not a socket): {socket}", socket=checkSocket ) remove(checkSocket) else: # It looks like a socket. # See if it's accepting connections. tmpSocket = socket.socket( socket.AF_INET, socket.SOCK_STREAM ) numConnectFailures = 0 testPorts = [config.HTTPPort, config.SSLPort] for testPort in testPorts: try: tmpSocket.connect(("127.0.0.1", testPort)) tmpSocket.shutdown(2) except: numConnectFailures = numConnectFailures + 1 # If the file didn't connect on any expected ports, # consider it stale and remove it. if numConnectFailures == len(testPorts): self.log.warn( "Deleting stale socket file " "(not accepting connections): {socket}", socket=checkSocket ) remove(checkSocket) class TwistdSlaveProcess(object): """ A L{TwistdSlaveProcess} is information about how to start a slave process running a C{twistd} plugin, to be used by L{DelayedStartupProcessMonitor.addProcessObject}. @ivar twistd: The path to the twistd executable to launch. @type twistd: C{str} @ivar tapname: The name of the twistd plugin to launch. @type tapname: C{str} @ivar id: The instance identifier for this slave process. @type id: C{int} @ivar interfaces: A sequence of interface addresses which the process will be configured to bind to. @type interfaces: sequence of C{str} @ivar inheritFDs: File descriptors to be inherited for calling accept() on in the subprocess. @type inheritFDs: C{list} of C{int}, or C{None} @ivar inheritSSLFDs: File descriptors to be inherited for calling accept() on in the subprocess, and speaking TLS on the resulting sockets. @type inheritSSLFDs: C{list} of C{int}, or C{None} @ivar metaSocket: an AF_UNIX/SOCK_DGRAM socket (initialized from the dispatcher passed to C{__init__}) that is to be inherited by the subprocess and used to accept incoming connections. 
@type metaSocket: L{socket.socket}

    @ivar ampSQLDispenser: a factory for AF_UNIX/SOCK_STREAM sockets that
        are to be inherited by subprocesses and used for sending AMP SQL
        commands back to its parent.
    """

    prefix = "caldav"

    def __init__(self, twistd, tapname, configFile, id, interfaces,
                 inheritFDs=None, inheritSSLFDs=None, metaSocket=None,
                 ampSQLDispenser=None):

        self.twistd = twistd
        self.tapname = tapname
        self.configFile = configFile
        self.id = id

        def emptyIfNone(x):
            if x is None:
                return []
            else:
                return x

        self.inheritFDs = emptyIfNone(inheritFDs)
        self.inheritSSLFDs = emptyIfNone(inheritSSLFDs)
        self.metaSocket = metaSocket
        self.interfaces = interfaces
        self.ampSQLDispenser = ampSQLDispenser
        self.ampDBSocket = None

    def starting(self):
        """
        Called when the process is being started (or restarted). Allows for
        various initialization operations to be done. The child process will
        itself signal back to the master when it is ready to accept sockets -
        until then the master socket is marked as "starting" which means it is
        not active and won't be dispatched to.
        """
        # Always tell any metafd socket that we have started, so it can
        # re-initialize state.
        if self.metaSocket is not None:
            self.metaSocket.start()

    def stopped(self):
        """
        Called when the process has stopped (died). The socket is marked as
        "stopped" which means it is not active and won't be dispatched to.
        """
        # Always tell any metafd socket that we have stopped, so it can
        # re-initialize state.
        if self.metaSocket is not None:
            self.metaSocket.stop()

    def getName(self):
        return "{}-{}".format(self.prefix, self.id)

    def getFileDescriptors(self):
        """
        Get the file descriptors that will be passed to the subprocess, as a
        mapping that will be used with L{IReactorProcess.spawnProcess}.

        If this server is configured to use a meta FD, pass the client end of
        the meta FD. If this server is configured to use an AMP database
        connection pool, pass a pre-connected AMP socket.
Note that, contrary to the documentation for L{twext.internet.sendfdport.InheritedSocketDispatcher.addSocket}, this does I{not} close the added child socket; this method (C{getFileDescriptors}) is called repeatedly to start a new process with the same C{LogID} if the previous one exits. Therefore we consistently re-use the same file descriptor and leave it open in the master, relying upon process-exit notification rather than noticing the meta-FD socket was closed in the subprocess. @return: a mapping of file descriptor numbers for the new (child) process to file descriptor numbers in the current (master) process. """ fds = {} extraFDs = [] if self.metaSocket is not None: extraFDs.append(self.metaSocket.childSocket().fileno()) if self.ampSQLDispenser is not None: self.ampDBSocket = self.ampSQLDispenser.dispense() extraFDs.append(self.ampDBSocket.fileno()) for fd in self.inheritSSLFDs + self.inheritFDs + extraFDs: fds[fd] = fd return fds def getCommandLine(self): """ @return: a list of command-line arguments, including the executable to be used to start this subprocess. 
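getFileDescriptors() returns an identity mapping (child fd N is master fd N) so inherited sockets keep their numbers across spawnProcess, and the process monitor later merges that with the standard stdio pipes. The shape of the resulting childFDs argument can be sketched like this; `buildChildFDs` is an illustrative helper, not server API:

```python
def buildChildFDs(inheritedFDs):
    """
    Build a spawnProcess-style childFDs mapping: stdin is a pipe the
    parent writes ("w"), stdout/stderr are pipes the parent reads ("r")
    for the logging protocol, and every inherited listening socket is
    passed through at the same fd number in the child.
    """
    childFDs = {0: "w", 1: "r", 2: "r"}
    for fd in inheritedFDs:
        childFDs[fd] = fd
    return childFDs
```

This mirrors how reallyStartProcess() combines `{0: "w", 1: "r", 2: "r"}` with the identity map from getFileDescriptors() before calling spawnProcess.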
@rtype: C{list} of C{str} """ args = [sys.executable, self.twistd] if config.UserName: args.extend(("-u", config.UserName)) if config.GroupName: args.extend(("-g", config.GroupName)) if config.Profiling.Enabled: args.append("--profile={}/{}.pstats".format( config.Profiling.BaseDirectory, self.getName() )) args.extend(("--savestats", "--profiler", "cprofile-cpu")) args.extend([ "--reactor={}".format(config.Twisted.reactor), "-n", self.tapname, "-f", self.configFile, "-o", "ProcessType=Slave", "-o", "BindAddresses={}".format(",".join(self.interfaces)), "-o", "PIDFile={}-instance-{}.pid".format(self.tapname, self.id), "-o", "ErrorLogFile=None", "-o", "ErrorLogEnabled=False", "-o", "LogID={}".format(self.id), "-o", "MultiProcess/ProcessCount={:d}".format( config.MultiProcess.ProcessCount ), "-o", "ControlPort={:d}".format(config.ControlPort), ]) if self.inheritFDs: args.extend([ "-o", "InheritFDs={}".format( ",".join(map(str, self.inheritFDs)) ) ]) if self.inheritSSLFDs: args.extend([ "-o", "InheritSSLFDs={}".format( ",".join(map(str, self.inheritSSLFDs)) ) ]) if self.metaSocket is not None: args.extend([ "-o", "MetaFD={}".format( self.metaSocket.childSocket().fileno() ) ]) if self.ampDBSocket is not None: args.extend([ "-o", "DBAMPFD={}".format(self.ampDBSocket.fileno()) ]) return args class ControlPortTCPServer(TCPServer): """ This TCPServer retrieves the port number that was actually assigned when the service was started, and stores that into config.ControlPort """ def startService(self): TCPServer.startService(self) # Record the port we were actually assigned config.ControlPort = self._port.getHost().port class DelayedStartupProcessMonitor(Service, object): """ A L{DelayedStartupProcessMonitor} is a L{procmon.ProcessMonitor} that defers building its command lines until the service is actually ready to start. It also specializes process-starting to allow for process objects to determine their arguments as they are started up rather than entirely ahead of time. 
Also, unlike L{procmon.ProcessMonitor}, its C{stopService} returns a L{Deferred} which fires only when all processes have shut down, to allow for a clean service shutdown. @ivar reactor: an L{IReactorProcess} for spawning processes, defaulting to the global reactor. @ivar delayInterval: the amount of time to wait between starting subsequent processes. @ivar stopping: a flag used to determine whether it is time to fire the Deferreds that track service shutdown. """ threshold = 1 killTime = 5 minRestartDelay = 1 maxRestartDelay = 3600 def __init__(self, reactor=None): super(DelayedStartupProcessMonitor, self).__init__() if reactor is None: from twisted.internet import reactor self._reactor = reactor self.processes = OrderedDict() self.protocols = {} self.delay = {} self.timeStarted = {} self.murder = {} self.restart = {} self.stopping = False if config.MultiProcess.StaggeredStartup.Enabled: self.delayInterval = config.MultiProcess.StaggeredStartup.Interval else: self.delayInterval = 0 def addProcess(self, name, args, uid=None, gid=None, env={}): """ Add a new monitored process and start it immediately if the L{DelayedStartupProcessMonitor} service is running. Note that args are passed to the system call, not to the shell. If running the shell is desired, the common idiom is to use C{ProcessMonitor.addProcess("name", ['/bin/sh', '-c', shell_script])} @param name: A name for this process. This value must be unique across all processes added to this monitor. @type name: C{str} @param args: The argv sequence for the process to launch. @param uid: The user ID to use to run the process. If C{None}, the current UID is used. @type uid: C{int} @param gid: The group ID to use to run the process. If C{None}, the current GID is used. @type uid: C{int} @param env: The environment to give to the launched process. See L{IReactorProcess.spawnProcess}'s C{env} parameter. 
@type env: C{dict} @raises: C{KeyError} if a process with the given name already exists """ class SimpleProcessObject(object): def starting(self): pass def stopped(self): pass def getName(self): return name def getCommandLine(self): return args def getFileDescriptors(self): return [] self.addProcessObject(SimpleProcessObject(), env, uid, gid) def addProcessObject(self, process, env, uid=None, gid=None): """ Add a process object to be run when this service is started. @param env: a dictionary of environment variables. @param process: a L{TwistdSlaveProcesses} object to be started upon service startup. """ name = process.getName() self.processes[name] = (process, env, uid, gid) self.delay[name] = self.minRestartDelay if self.running: self.startProcess(name) def startService(self): # Now we're ready to build the command lines and actually add the # processes to procmon. super(DelayedStartupProcessMonitor, self).startService() # This will be in the order added, since self.processes is an # OrderedDict for name in self.processes: self.startProcess(name) def stopService(self): """ Return a deferred that fires when all child processes have ended. """ self.stopping = True self.deferreds = {} for name in self.processes: self.deferreds[name] = Deferred() super(DelayedStartupProcessMonitor, self).stopService() # Cancel any outstanding restarts for name, delayedCall in self.restart.items(): if delayedCall.active(): delayedCall.cancel() # Stop processes in the reverse order from which they were added and # started for name in reversed(self.processes): self.stopProcess(name) return gatherResults(self.deferreds.values()) def removeProcess(self, name): """ Stop the named process and remove it from the list of monitored processes. @type name: C{str} @param name: A string that uniquely identifies the process. 
""" self.stopProcess(name) del self.processes[name] def stopProcess(self, name): """ @param name: The name of the process to be stopped """ if name not in self.processes: raise KeyError("Unrecognized process name: {}".format(name)) proto = self.protocols.get(name, None) if proto is not None: proc = proto.transport try: proc.signalProcess('TERM') except ProcessExitedAlready: pass else: self.murder[name] = self._reactor.callLater( self.killTime, self._forceStopProcess, proc ) def processEnded(self, name): """ When a child process has ended it calls me so I can fire the appropriate deferred which was created in stopService. Also make sure to signal the dispatcher so that the socket is marked as inactive. """ # Cancel the scheduled _forceStopProcess function if the process # dies naturally if name in self.murder: if self.murder[name].active(): self.murder[name].cancel() del self.murder[name] self.processes[name][0].stopped() del self.protocols[name] if self._reactor.seconds() - self.timeStarted[name] < self.threshold: # The process died too fast - back off nextDelay = self.delay[name] self.delay[name] = min(self.delay[name] * 2, self.maxRestartDelay) else: # Process had been running for a significant amount of time # restart immediately nextDelay = 0 self.delay[name] = self.minRestartDelay # Schedule a process restart if the service is running if self.running and name in self.processes: self.restart[name] = self._reactor.callLater(nextDelay, self.startProcess, name) if self.stopping: deferred = self.deferreds.pop(name, None) if deferred is not None: deferred.callback(None) def _forceStopProcess(self, proc): """ @param proc: An L{IProcessTransport} provider """ try: proc.signalProcess('KILL') except ProcessExitedAlready: pass def signalAll(self, signal, startswithname=None): """ Send a signal to all child processes. 
@param signal: the signal to send @type signal: C{int} @param startswithname: is set only signal those processes whose name starts with this string @type signal: C{str} """ for name in self.processes.keys(): if startswithname is None or name.startswith(startswithname): self.signalProcess(signal, name) def signalProcess(self, signal, name): """ Send a signal to a single monitored process, by name, if that process is running; otherwise, do nothing. @param signal: the signal to send @type signal: C{int} @param name: the name of the process to signal. @type signal: C{str} """ if name not in self.protocols: return proc = self.protocols[name].transport try: proc.signalProcess(signal) except ProcessExitedAlready: pass def reallyStartProcess(self, name): """ Actually start a process. (Re-implemented here rather than just using the inherited implementation of startService because ProcessMonitor doesn't allow customization of subprocess environment). """ if name in self.protocols: return p = self.protocols[name] = DelayedStartupLoggingProtocol() p.service = self p.name = name procObj, env, uid, gid = self.processes[name] self.timeStarted[name] = time.time() childFDs = {0: "w", 1: "r", 2: "r"} childFDs.update(procObj.getFileDescriptors()) procObj.starting() args = procObj.getCommandLine() self._reactor.spawnProcess( p, args[0], args, uid=uid, gid=gid, env=env, childFDs=childFDs ) _pendingStarts = 0 def startProcess(self, name): """ Start process named 'name'. If another process has started recently, wait until some time has passed before actually starting this process. @param name: the name of the process to start. """ interval = (self.delayInterval * self._pendingStarts) self._pendingStarts += 1 def delayedStart(): self._pendingStarts -= 1 self.reallyStartProcess(name) self._reactor.callLater(interval, delayedStart) def restartAll(self): """ Restart all processes. 
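startProcess() staggers launches when StaggeredStartup is enabled: each pending start is scheduled `delayInterval` seconds after the previous one via the `_pendingStarts` counter, so N workers come up at offsets 0, d, 2d, and so on. As a sketch (`staggeredStartOffsets` is an illustrative helper):

```python
def staggeredStartOffsets(processCount, delayInterval):
    """
    Offsets (in seconds) at which successive child processes are
    launched: the Nth pending start waits N * delayInterval, exactly as
    startProcess() computes from _pendingStarts.
    """
    return [n * delayInterval for n in range(processCount)]
```

With the interval set to 0 (staggered startup disabled) every worker is scheduled immediately.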
This is useful for third party management services to allow a user to
        restart servers because of an outside change in circumstances -- for
        example, a new version of a library is installed.
        """
        for name in self.processes:
            self.stopProcess(name)

    def __repr__(self):
        l = []
        for name, (procObj, _ignore_env, uid, gid) in self.processes.items():
            uidgid = ''
            if uid is not None:
                uidgid = str(uid)
            if gid is not None:
                uidgid += ':' + str(gid)
            if uidgid:
                uidgid = '(' + uidgid + ')'
            l.append("{}{}: {}".format(name, uidgid, procObj))
        return (
            "<{self.__class__.__name__} {l}>"
            .format(self=self, l=" ".join(l))
        )


class DelayedStartupLineLogger(object):
    """
    A line logger that can handle very long lines.
    """

    MAX_LENGTH = 1024
    CONTINUED_TEXT = " (truncated, continued)"
    tag = None
    exceeded = False  # Am I in the middle of parsing a long line?
    _buffer = ''

    def makeConnection(self, transport):
        """
        Ignore this IProtocol method, since I don't need a transport.
        """
        pass

    def dataReceived(self, data):
        lines = (self._buffer + data).split("\n")
        while len(lines) > 1:
            line = lines.pop(0)
            if len(line) > self.MAX_LENGTH:
                self.lineLengthExceeded(line)
            elif self.exceeded:
                self.lineLengthExceeded(line)
                self.exceeded = False
            else:
                self.lineReceived(line)
        lastLine = lines.pop(0)
        if len(lastLine) > self.MAX_LENGTH:
            self.lineLengthExceeded(lastLine)
            self.exceeded = True
            self._buffer = ''
        else:
            self._buffer = lastLine

    def lineReceived(self, line):
        # Hand off slave log entry to the logging system as critical to
        # ensure it is always displayed (the slave will have filtered out
        # unwanted entries itself).
        log.critical("{msg}", msg=line, log_system=self.tag, isError=False)

    def lineLengthExceeded(self, line):
        """
        A very long line is being received. Log it immediately and forget
        about buffering it.
        """
        segments = self._breakLineIntoSegments(line)
        for segment in segments:
            self.lineReceived(segment)

    def _breakLineIntoSegments(self, line):
        """
        Break a line into segments no longer than self.MAX_LENGTH.
Each segment (except for the final one) has self.CONTINUED_TEXT appended. Returns the array of segments. @param line: The line to break up @type line: C{str} @return: array of C{str} """ length = len(line) numSegments = ( length / self.MAX_LENGTH + (1 if length % self.MAX_LENGTH else 0) ) segments = [] for i in range(numSegments): msg = line[i * self.MAX_LENGTH:(i + 1) * self.MAX_LENGTH] if i < numSegments - 1: # not the last segment msg += self.CONTINUED_TEXT segments.append(msg) return segments class DelayedStartupLoggingProtocol(ProcessProtocol): """ Logging protocol that handles lines which are too long. """ service = None name = None empty = 1 def connectionMade(self): """ Replace the superclass's output monitoring logic with one that can handle lineLengthExceeded. """ self.output = DelayedStartupLineLogger() self.output.makeConnection(self.transport) self.output.tag = self.name def outReceived(self, data): self.output.dataReceived(data) self.empty = data[-1] == '\n' errReceived = outReceived def processEnded(self, reason): """ Let the service know that this child process has ended """ if not self.empty: self.output.dataReceived('\n') self.service.processEnded(self.name) def getSystemIDs(userName, groupName): """ Return the system ID numbers corresponding to either: A) the userName and groupName if non-empty, or B) the real user ID and group ID of the process @param userName: The name of the user to look up the ID of. An empty value indicates the real user ID of the process should be returned instead. @type userName: C{str} @param groupName: The name of the group to look up the ID of. An empty value indicates the real group ID of the process should be returned instead. 
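The segmentation in _breakLineIntoSegments slices an overlong line into MAX_LENGTH-sized chunks, with the continuation marker on every chunk except the last. The same arithmetic as a standalone function (under Python 2 the `/` in the original is already integer division; `//` below makes that explicit):

```python
def break_into_segments(line, max_length=1024,
                        continued=" (truncated, continued)"):
    """Split ``line`` into chunks of at most ``max_length`` characters."""
    length = len(line)
    # Number of max_length-sized chunks, rounding up.
    num_segments = length // max_length + (1 if length % max_length else 0)
    segments = []
    for i in range(num_segments):
        msg = line[i * max_length:(i + 1) * max_length]
        if i < num_segments - 1:  # not the last segment
            msg += continued
        segments.append(msg)
    return segments
```

Note that an empty line yields zero segments, so nothing is logged for it.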
@type groupName: C{str}
    """
    if userName:
        try:
            uid = getpwnam(userName).pw_uid
        except KeyError:
            raise ConfigurationError(
                "Invalid user name: {}".format(userName)
            )
    else:
        uid = getuid()

    if groupName:
        try:
            gid = getgrnam(groupName).gr_gid
        except KeyError:
            raise ConfigurationError(
                "Invalid group name: {}".format(groupName)
            )
    else:
        gid = getgid()

    return uid, gid


class DataStoreMonitor(object):
    implements(IDirectoryChangeListenee)

    def __init__(self, reactor, storageService):
        """
        @param storageService: the service making use of the DataStore
            directory; we send it a hardStop() to shut it down
        """
        self._reactor = reactor
        self._storageService = storageService

    def disconnected(self):
        self._storageService.hardStop()
        self._reactor.stop()

    def deleted(self):
        self._storageService.hardStop()
        self._reactor.stop()

    def renamed(self):
        self._storageService.hardStop()
        self._reactor.stop()

    def connectionLost(self, reason):
        pass

calendarserver-9.1+dfsg/calendarserver/tap/profiling.py

##
# Copyright (c) 2010-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

import time

from twisted.application.app import CProfileRunner, AppProfiler


class CProfileCPURunner(CProfileRunner):
    """
    Runner for the cProfile module which uses C{time.clock} to measure
    CPU usage instead of the default wallclock time.
    """

    def run(self, reactor):
        """
        Run reactor under the cProfile profiler.
""" try: import cProfile import pstats except ImportError, e: self._reportImportError("cProfile", e) p = cProfile.Profile(time.clock) p.runcall(reactor.run) if self.saveStats: p.dump_stats(self.profileOutput) else: with open(self.profileOutput, 'w') as stream: s = pstats.Stats(p, stream=stream) s.strip_dirs() s.sort_stats(-1) s.print_stats() AppProfiler.profilers["cprofile-cpu"] = CProfileCPURunner calendarserver-9.1+dfsg/calendarserver/tap/test/000077500000000000000000000000001315003562600220465ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/tap/test/__init__.py000066400000000000000000000012201315003562600241520ustar00rootroot00000000000000## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Tests for the calendarserver.tap module. """ calendarserver-9.1+dfsg/calendarserver/tap/test/longlines.py000066400000000000000000000014061315003562600244130ustar00rootroot00000000000000## # Copyright (c) 2007-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.
##

import sys

length = int(sys.argv[1])
data = (("x" * length) + ("y" * (length + 1)) + "\n" + ("z" + "\n"))
sys.stdout.write(data)
sys.stdout.flush()

calendarserver-9.1+dfsg/calendarserver/tap/test/reexec.tac

from twisted.application import service
from calendarserver.tap.caldav import ReExecService


class TestService(service.Service):

    def startService(self):
        print "START"

    def stopService(self):
        print "STOP"


application = service.Application("ReExec Tester")

reExecService = ReExecService("twistd.pid")
reExecService.setServiceParent(application)

testService = TestService()
testService.setServiceParent(reExecService)

calendarserver-9.1+dfsg/calendarserver/tap/test/test_caldav.py

##
# Copyright (c) 2007-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
## import sys import os import stat import grp import random from time import sleep from os.path import dirname, abspath from collections import namedtuple from zope.interface import implements from twisted.application import service from twisted.python import log as logging from twisted.python.threadable import isInIOThread from twisted.internet.reactor import callFromThread from twisted.python.usage import Options, UsageError from twisted.python.procutils import which from twisted.runner.procmon import ProcessMonitor, LoggingProtocol from twisted.internet.interfaces import IProcessTransport, IReactorProcess from twisted.internet.protocol import ServerFactory from twisted.internet.defer import Deferred, inlineCallbacks, succeed, gatherResults from twisted.internet.task import Clock from twisted.internet import reactor from twisted.application.service import IService, IServiceCollection, Application from twisted.application import internet from twext.python.log import Logger from twext.python.filepath import CachingFilePath as FilePath from plistlib import writePlist # @UnresolvedImport from txweb2.dav import auth from txweb2.log import LogWrapperResource from twext.internet.tcp import MaxAcceptTCPServer, MaxAcceptSSLServer from twistedcaldav import memcacheclient from twistedcaldav.config import config, ConfigDict, ConfigurationError from twistedcaldav.resource import AuthenticationWrapper from twistedcaldav.stdconfig import DEFAULT_CONFIG from twistedcaldav.directory.calendar import DirectoryCalendarHomeProvisioningResource from twistedcaldav.directory.principal import DirectoryPrincipalProvisioningResource from twistedcaldav.test.util import StoreTestCase, CapturingProcessProtocol, \ TestCase from calendarserver.tap.caldav import ( CalDAVOptions, CalDAVServiceMaker, CalDAVService, GroupOwnedUNIXServer, DelayedStartupProcessMonitor, DelayedStartupLineLogger, TwistdSlaveProcess, _CONTROL_SERVICE_NAME, getSystemIDs, PreProcessingService, DataStoreMonitor, 
ErrorLoggingMultiService) from calendarserver.provision.root import RootResource from StringIO import StringIO import tempfile from twisted.python.log import ILogObserver log = Logger() # Points to top of source tree. sourceRoot = dirname(dirname(dirname(dirname(abspath(__file__))))) class NotAProcessTransport(object): """ Simple L{IProcessTransport} stub. """ implements(IProcessTransport) def __init__(self, processProtocol, executable, args, env, path, uid, gid, usePTY, childFDs): """ Hold on to all the attributes passed to spawnProcess. """ self.processProtocol = processProtocol self.executable = executable self.args = args self.env = env self.path = path self.uid = uid self.gid = gid self.usePTY = usePTY self.childFDs = childFDs class InMemoryProcessSpawner(Clock): """ Stub out L{IReactorProcess} and L{IReactorClock} so that we can examine the interaction of L{DelayedStartupProcessMonitor} and the reactor. """ implements(IReactorProcess) def __init__(self): """ Create some storage to hold on to all the fake processes spawned. """ super(InMemoryProcessSpawner, self).__init__() self.processTransports = [] self.waiting = [] def waitForOneProcess(self, amount=10.0): """ Wait for an L{IProcessTransport} to be created by advancing the clock. If none are created in the specified amount of time, raise an AssertionError. """ self.advance(amount) if self.processTransports: return self.processTransports.pop(0) else: raise AssertionError( "There were no process transports available. 
Calls: " + repr(self.calls) ) def spawnProcess(self, processProtocol, executable, args=(), env={}, path=None, uid=None, gid=None, usePTY=0, childFDs=None): transport = NotAProcessTransport( processProtocol, executable, args, env, path, uid, gid, usePTY, childFDs ) transport.startedAt = self.seconds() self.processTransports.append(transport) if self.waiting: self.waiting.pop(0).callback(transport) return transport class TestCalDAVOptions (CalDAVOptions): """ A fake implementation of CalDAVOptions that provides empty implementations of checkDirectory and checkFile. """ def checkDirectory(self, *args, **kwargs): pass def checkFile(self, *args, **kwargs): pass def checkDirectories(self, *args, **kwargs): pass def loadConfiguration(self): """ Simple wrapper to avoid printing during test runs. """ oldout = sys.stdout newout = sys.stdout = StringIO() try: return CalDAVOptions.loadConfiguration(self) finally: sys.stdout = oldout log.info( "load configuration console output: {result}", result=newout.getvalue() ) class CalDAVOptionsTest(StoreTestCase): """ Test various parameters of our usage.Options subclass """ @inlineCallbacks def setUp(self): """ Set up our options object, giving it a parent, and forcing the global config to be loaded from defaults. 
""" yield super(CalDAVOptionsTest, self).setUp() self.config = TestCalDAVOptions() self.config.parent = Options() self.config.parent["uid"] = 0 self.config.parent["gid"] = 0 self.config.parent["nodaemon"] = False def tearDown(self): config.setDefaults(DEFAULT_CONFIG) config.reload() def test_overridesConfig(self): """ Test that values on the command line's -o and --option options overide the config file """ myConfig = ConfigDict(DEFAULT_CONFIG) myConfigFile = self.mktemp() writePlist(myConfig, myConfigFile) argv = [ "-f", myConfigFile, "-o", "EnableSACLs", "-o", "HTTPPort=80", "-o", "BindAddresses=127.0.0.1,127.0.0.2,127.0.0.3", "-o", "DocumentRoot=/dev/null", "-o", "UserName=None", "-o", "EnableProxyPrincipals=False", ] self.config.parseOptions(argv) self.assertEquals(config.EnableSACLs, True) self.assertEquals(config.HTTPPort, 80) self.assertEquals(config.BindAddresses, ["127.0.0.1", "127.0.0.2", "127.0.0.3"]) self.assertEquals(config.DocumentRoot, "/dev/null") self.assertEquals(config.UserName, None) self.assertEquals(config.EnableProxyPrincipals, False) argv = ["-o", "Authentication=This Doesn't Matter"] self.assertRaises(UsageError, self.config.parseOptions, argv) def test_setsParent(self): """ Test that certain values are set on the parent (i.e. twistd's Option's object) """ myConfig = ConfigDict(DEFAULT_CONFIG) myConfigFile = self.mktemp() writePlist(myConfig, myConfigFile) argv = [ "-f", myConfigFile, "-o", "PIDFile=/dev/null", "-o", "umask=63", # integers in plists & calendarserver command line are always # decimal; umask is traditionally in octal. ] self.config.parseOptions(argv) self.assertEquals(self.config.parent["pidfile"], "/dev/null") self.assertEquals(self.config.parent["umask"], 0077) def test_specifyConfigFile(self): """ Test that specifying a config file from the command line loads the global config with those values properly. 
""" myConfig = ConfigDict(DEFAULT_CONFIG) myConfig.Authentication.Basic.Enabled = False myConfig.HTTPPort = 80 myConfig.ServerHostName = "calendar.calenderserver.org" myConfigFile = self.mktemp() writePlist(myConfig, myConfigFile) args = ["-f", myConfigFile] self.config.parseOptions(args) self.assertEquals(config.ServerHostName, myConfig["ServerHostName"]) self.assertEquals(config.HTTPPort, myConfig.HTTPPort) self.assertEquals( config.Authentication.Basic.Enabled, myConfig.Authentication.Basic.Enabled ) def test_specifyDictPath(self): """ Test that we can specify command line overrides to leafs using a "/" seperated path. Such as "-o MultiProcess/ProcessCount=1" """ myConfig = ConfigDict(DEFAULT_CONFIG) myConfigFile = self.mktemp() writePlist(myConfig, myConfigFile) argv = [ "-o", "MultiProcess/ProcessCount=102", "-f", myConfigFile, ] self.config.parseOptions(argv) self.assertEquals(config.MultiProcess["ProcessCount"], 102) def inServiceHierarchy(svc, predicate): """ Find services in the service collection which satisfy the given predicate. """ for subsvc in svc.services: if IServiceCollection.providedBy(subsvc): for value in inServiceHierarchy(subsvc, predicate): yield value if predicate(subsvc): yield subsvc def determineAppropriateGroupID(): """ Determine a secondary group ID which can be used for testing, or None if the executing user has no additional unix group memberships. """ currentGroups = os.getgroups() if len(currentGroups) < 2: return None else: return currentGroups[1] class SocketGroupOwnership(StoreTestCase): """ Tests for L{GroupOwnedUNIXServer}. """ def test_groupOwnedUNIXSocket(self): """ When a L{GroupOwnedUNIXServer} is started, it will change the group of its socket. """ alternateGroup = determineAppropriateGroupID() if alternateGroup is None: self.skipTest (( "This test requires that the user running it is a member of at" " least two unix groups." 
)) socketName = self.mktemp() gous = GroupOwnedUNIXServer(alternateGroup, socketName, ServerFactory(), mode=0660) gous.privilegedStartService() self.addCleanup(gous.stopService) filestat = os.stat(socketName) self.assertTrue(stat.S_ISSOCK(filestat.st_mode)) self.assertEquals(filestat.st_gid, alternateGroup) self.assertEquals(filestat.st_uid, os.getuid()) # Tests for the various makeService_ flavors: class CalDAVServiceMakerTestBase(StoreTestCase): @inlineCallbacks def setUp(self): yield super(CalDAVServiceMakerTestBase, self).setUp() self.options = TestCalDAVOptions() self.options.parent = Options() self.options.parent["gid"] = None self.options.parent["uid"] = None self.options.parent["nodaemon"] = None class CalDAVServiceMakerTestSingle(CalDAVServiceMakerTestBase): def configure(self): super(CalDAVServiceMakerTestSingle, self).configure() config.ProcessType = "Single" def test_makeService(self): CalDAVServiceMaker().makeService(self.options) # No error class CalDAVServiceMakerTestSlave(CalDAVServiceMakerTestBase): def configure(self): super(CalDAVServiceMakerTestSlave, self).configure() config.ProcessType = "Slave" def test_makeService(self): CalDAVServiceMaker().makeService(self.options) # No error class CalDAVServiceMakerTestUnknown(CalDAVServiceMakerTestBase): def configure(self): super(CalDAVServiceMakerTestUnknown, self).configure() config.ProcessType = "Unknown" def test_makeService(self): self.assertRaises(UsageError, CalDAVServiceMaker().makeService, self.options) # error class ModesOnUNIXSocketsTests(CalDAVServiceMakerTestBase): def configure(self): super(ModesOnUNIXSocketsTests, self).configure() config.ProcessType = "Combined" config.HTTPPort = 0 self.alternateGroup = determineAppropriateGroupID() # If the current user isn't a member of >1 unix groups, # this test should proceed anyway, so use the primary group ID if self.alternateGroup is None: self.alternateGroup = os.getgroups()[0] config.GroupName = grp.getgrgid(self.alternateGroup).gr_name 
config.Stats.EnableUnixStatsSocket = True def test_modesOnUNIXSockets(self): """ The logging and stats UNIX sockets that are bound as part of the 'Combined' service hierarchy should have a secure mode specified: only the executing user should be able to open and send to them. """ svc = CalDAVServiceMaker().makeService(self.options) for serviceName in [_CONTROL_SERVICE_NAME]: socketService = svc.getServiceNamed(serviceName) self.assertIsInstance(socketService, GroupOwnedUNIXServer) m = socketService.kwargs.get("mode", 0666) self.assertEquals( m, int("660", 8), "Wrong mode on %s: %s" % (serviceName, oct(m)) ) self.assertEquals(socketService.gid, self.alternateGroup) for serviceName in ["unix-stats"]: socketService = svc.getServiceNamed(serviceName) self.assertIsInstance(socketService, GroupOwnedUNIXServer) m = socketService.kwargs.get("mode", 0666) self.assertEquals( m, int("660", 8), "Wrong mode on %s: %s" % (serviceName, oct(m)) ) self.assertEquals(socketService.gid, self.alternateGroup) class TestLoggingProtocol(LoggingProtocol): def processEnded(self, reason): LoggingProtocol.processEnded(self, reason) self.service.processEnded(self.name) class TestErrorLoggingMultiService(TestCase): def test_nonAsciiLog(self): """ Make sure that the file based error log can write non ascii data """ logpath = self.mktemp() service = ErrorLoggingMultiService( True, logpath, 10000, 10, False, ) app = Application("non-ascii") service.setServiceParent(app) observer = app.getComponent(ILogObserver, None) self.assertTrue(observer is not None) log = Logger(observer=observer) log.error(u"Couldn\u2019t be wrong") with open(logpath) as f: logentry = f.read() self.assertIn("Couldn\xe2\x80\x99t be wrong", logentry) class TestProcessMonitor(ProcessMonitor): def startProcess(self, name): """ @param name: The name of the process to be started """ # If a protocol instance already exists, it means the process is # already running if name in self.protocols: return args, uid, gid, env = 
self.processes[name] proto = TestLoggingProtocol() proto.service = self proto.name = name self.protocols[name] = proto self.timeStarted[name] = self._reactor.seconds() self._reactor.spawnProcess( proto, args[0], args, uid=uid, gid=gid, env=env ) def stopService(self): """ Return a deferred that fires when all child processes have ended. """ service.Service.stopService(self) self.stopping = True self.deferreds = {} for name in self.processes: self.deferreds[name] = Deferred() # Cancel any outstanding restarts for name, delayedCall in self.restart.items(): if delayedCall.active(): delayedCall.cancel() for name in self.processes: self.stopProcess(name) return gatherResults(self.deferreds.values()) def processEnded(self, name): if self.stopping: deferred = self.deferreds.pop(name, None) if deferred is not None: deferred.callback(None) class MemcacheSpawner(TestCase): def setUp(self): super(MemcacheSpawner, self).setUp() self.monitor = TestProcessMonitor() self.monitor.startService() self.socket = os.path.join(tempfile.gettempdir(), "memcache.sock") self.patch(config.Memcached.Pools.Default, "ServerEnabled", True) def test_memcacheUnix(self): """ Spawn a memcached process listening on a unix socket that becomes connectable in no more than one second. Connect and interact. Verify secure file permissions on the socket file. """ self.patch(config.Memcached.Pools.Default, "MemcacheSocket", self.socket) CalDAVServiceMaker()._spawnMemcached(monitor=self.monitor) sleep(1) mc = memcacheclient.Client(["unix:{}".format(self.socket)], debug=1) rando = random.random() mc.set("the_answer", rando) self.assertEquals(rando, mc.get("the_answer")) # The socket file should not be usable to other users st = os.stat(self.socket) self.assertTrue(str(oct(st.st_mode)).endswith("00")) mc.disconnect_all() def test_memcacheINET(self): """ Spawn a memcached process listening on a network socket that becomes connectable in no more than one second. Interact with it. 
""" self.patch(config.Memcached.Pools.Default, "MemcacheSocket", "") ba = config.Memcached.Pools.Default.BindAddress bp = config.Memcached.Pools.Default.Port CalDAVServiceMaker()._spawnMemcached(monitor=self.monitor) sleep(1) mc = memcacheclient.Client(["{}:{}".format(ba, bp)], debug=1) rando = random.random() mc.set("the_password", rando) self.assertEquals(rando, mc.get("the_password")) mc.disconnect_all() def tearDown(self): """ Verify that our spawned memcached can be reaped. """ return self.monitor.stopService() class ProcessMonitorTests(CalDAVServiceMakerTestBase): def configure(self): super(ProcessMonitorTests, self).configure() config.ProcessType = "Combined" def test_processMonitor(self): """ In the master, there should be exactly one L{DelayedStartupProcessMonitor} in the service hierarchy so that it will be started by startup. """ self.assertEquals( 1, len( list(inServiceHierarchy( CalDAVServiceMaker().makeService(self.options), lambda x: isinstance(x, DelayedStartupProcessMonitor))) ) ) class SlaveServiceTests(CalDAVServiceMakerTestBase): """ Test various configurations of the Slave service """ def configure(self): super(SlaveServiceTests, self).configure() config.ProcessType = "Slave" config.HTTPPort = 8008 config.SSLPort = 8443 pemFile = os.path.join(sourceRoot, "twistedcaldav/test/data/server.pem") config.SSLPrivateKey = pemFile config.SSLCertificate = pemFile config.SSLKeychainIdentity = "org.calendarserver.test" config.EnableSSL = True def test_defaultService(self): """ Test the value of a Slave service in it's simplest configuration. 
""" service = CalDAVServiceMaker().makeService(self.options) self.failUnless( IService(service), "%s does not provide IService" % (service,) ) self.failUnless( service.services, "No services configured" ) self.failUnless( isinstance(service, CalDAVService), "%s is not a CalDAVService" % (service,) ) def test_defaultListeners(self): """ Test that the Slave service has sub services with the default TCP and SSL configuration """ # Note: the listeners are bundled within a MultiService named "ConnectionService" service = CalDAVServiceMaker().makeService(self.options) service = service.getServiceNamed(CalDAVService.connectionServiceName) expectedSubServices = dict(( (MaxAcceptTCPServer, config.HTTPPort), (MaxAcceptSSLServer, config.SSLPort), )) configuredSubServices = [(s.__class__, getattr(s, 'args', None)) for s in service.services] checked = 0 for serviceClass, serviceArgs in configuredSubServices: if serviceClass in expectedSubServices: checked += 1 self.assertEquals( serviceArgs[0], dict(expectedSubServices)[serviceClass] ) # TCP+SSL services for each bind address self.assertEquals(checked, 2 * len(config.BindAddresses)) def test_SSLKeyConfiguration(self): """ Test that the configuration of the SSLServer reflect the config file's SSL Private Key and SSL Certificate """ # Note: the listeners are bundled within a MultiService named "ConnectionService" service = CalDAVServiceMaker().makeService(self.options) service = service.getServiceNamed(CalDAVService.connectionServiceName) sslService = None for s in service.services: if isinstance(s, internet.SSLServer): sslService = s break self.failIf(sslService is None, "No SSL Service found") context = sslService.args[2] self.assertEquals( config.SSLPrivateKey, context.privateKeyFileName ) self.assertEquals( config.SSLCertificate, context.certificateFileName, ) class NoSSLTests(CalDAVServiceMakerTestBase): def configure(self): super(NoSSLTests, self).configure() config.ProcessType = "Slave" config.HTTPPort = 8008 # pemFile = 
os.path.join(sourceRoot, "twistedcaldav/test/data/server.pem") # config.SSLPrivateKey = pemFile # config.SSLCertificate = pemFile # config.EnableSSL = True def test_noSSL(self): """ Test the single service to make sure there is no SSL Service when SSL is disabled """ # Note: the listeners are bundled within a MultiService named "ConnectionService" service = CalDAVServiceMaker().makeService(self.options) service = service.getServiceNamed(CalDAVService.connectionServiceName) self.assertNotIn( internet.SSLServer, [s.__class__ for s in service.services] ) class NoHTTPTests(CalDAVServiceMakerTestBase): def configure(self): super(NoHTTPTests, self).configure() config.ProcessType = "Slave" config.SSLPort = 8443 pemFile = os.path.join(sourceRoot, "twistedcaldav/test/data/server.pem") config.SSLPrivateKey = pemFile config.SSLCertificate = pemFile config.SSLKeychainIdentity = "org.calendarserver.test" config.EnableSSL = True def test_noHTTP(self): """ Test the single service to make sure there is no TCPServer when HTTPPort is not configured """ # Note: the listeners are bundled within a MultiService named "ConnectionService" service = CalDAVServiceMaker().makeService(self.options) service = service.getServiceNamed(CalDAVService.connectionServiceName) self.assertNotIn( internet.TCPServer, [s.__class__ for s in service.services] ) class SingleBindAddressesTests(CalDAVServiceMakerTestBase): def configure(self): super(SingleBindAddressesTests, self).configure() config.ProcessType = "Slave" config.HTTPPort = 8008 config.BindAddresses = ["127.0.0.1"] def test_singleBindAddresses(self): """ Test that the TCPServer and SSLServers are bound to the proper address """ # Note: the listeners are bundled within a MultiService named "ConnectionService" service = CalDAVServiceMaker().makeService(self.options) service = service.getServiceNamed(CalDAVService.connectionServiceName) for s in service.services: if isinstance(s, (internet.TCPServer, internet.SSLServer)): 
self.assertEquals(s.kwargs["interface"], "127.0.0.1") class MultipleBindAddressesTests(CalDAVServiceMakerTestBase): def configure(self): super(MultipleBindAddressesTests, self).configure() config.ProcessType = "Slave" config.HTTPPort = 8008 config.SSLPort = 8443 pemFile = os.path.join(sourceRoot, "twistedcaldav/test/data/server.pem") config.SSLPrivateKey = pemFile config.SSLCertificate = pemFile config.SSLKeychainIdentity = "org.calendarserver.test" config.EnableSSL = True config.BindAddresses = [ "127.0.0.1", "10.0.0.2", "172.53.13.123", ] def test_multipleBindAddresses(self): """ Test that the TCPServer and SSLServers are bound to the proper addresses. """ # Note: the listeners are bundled within a MultiService named "ConnectionService" service = CalDAVServiceMaker().makeService(self.options) service = service.getServiceNamed(CalDAVService.connectionServiceName) tcpServers = [] sslServers = [] for s in service.services: if isinstance(s, internet.TCPServer): tcpServers.append(s) elif isinstance(s, internet.SSLServer): sslServers.append(s) self.assertEquals(len(tcpServers), len(config.BindAddresses)) self.assertEquals(len(sslServers), len(config.BindAddresses)) for addr in config.BindAddresses: for s in tcpServers: if s.kwargs["interface"] == addr: tcpServers.remove(s) for s in sslServers: if s.kwargs["interface"] == addr: sslServers.remove(s) self.assertEquals(len(tcpServers), 0) self.assertEquals(len(sslServers), 0) class ListenBacklogTests(CalDAVServiceMakerTestBase): def configure(self): super(ListenBacklogTests, self).configure() config.ProcessType = "Slave" config.ListenBacklog = 1024 config.HTTPPort = 8008 config.SSLPort = 8443 pemFile = os.path.join(sourceRoot, "twistedcaldav/test/data/server.pem") config.SSLPrivateKey = pemFile config.SSLCertificate = pemFile config.SSLKeychainIdentity = "org.calendarserver.test" config.EnableSSL = True config.BindAddresses = [ "127.0.0.1", "10.0.0.2", "172.53.13.123", ] def test_listenBacklog(self): """ Test that the 
backlog argument is set on the TCPServer and SSLServer instances
        """
        # Note: the listeners are bundled within a MultiService named "ConnectionService"
        service = CalDAVServiceMaker().makeService(self.options)
        service = service.getServiceNamed(CalDAVService.connectionServiceName)

        for s in service.services:
            if isinstance(s, (internet.TCPServer, internet.SSLServer)):
                self.assertEquals(s.kwargs["backlog"], 1024)


class AuthWrapperAllEnabledTests(CalDAVServiceMakerTestBase):

    def configure(self):
        super(AuthWrapperAllEnabledTests, self).configure()
        config.HTTPPort = 8008
        config.Authentication.Digest.Enabled = True
        config.Authentication.Kerberos.Enabled = True
        config.Authentication.Kerberos.ServicePrincipal = "http/hello@bob"
        config.Authentication.Basic.Enabled = True

    def test_AuthWrapperAllEnabled(self):
        """
        Test the configuration of the authentication wrapper when all
        schemes are enabled.
        """
        authWrapper = self.rootResource.resource
        self.failUnless(
            isinstance(
                authWrapper, auth.AuthenticationWrapper
            )
        )

        expectedSchemes = ["negotiate", "digest", "basic"]

        for scheme in authWrapper.credentialFactories:
            self.failUnless(scheme in expectedSchemes)

        self.assertEquals(len(expectedSchemes), len(authWrapper.credentialFactories))

        ncf = authWrapper.credentialFactories["negotiate"]
        self.assertEquals(ncf.service, "http@HELLO")
        self.assertEquals(ncf.realm, "bob")


class ServicePrincipalNoneTests(CalDAVServiceMakerTestBase):

    def configure(self):
        super(ServicePrincipalNoneTests, self).configure()
        config.HTTPPort = 8008
        config.Authentication.Digest.Enabled = True
        config.Authentication.Kerberos.Enabled = True
        config.Authentication.Kerberos.ServicePrincipal = ""
        config.Authentication.Basic.Enabled = True

    def test_servicePrincipalNone(self):
        """
        Test that the Kerberos principal lookup is attempted if the
        principal is empty.
""" authWrapper = self.rootResource.resource self.assertFalse("negotiate" in authWrapper.credentialFactories) class AuthWrapperPartialEnabledTests(CalDAVServiceMakerTestBase): def configure(self): super(AuthWrapperPartialEnabledTests, self).configure() config.Authentication.Digest.Enabled = True config.Authentication.Kerberos.Enabled = False config.Authentication.Basic.Enabled = False def test_AuthWrapperPartialEnabled(self): """ Test that the expected credential factories exist when only a partial set of authentication schemes is enabled. """ authWrapper = self.rootResource.resource expectedSchemes = ["digest"] for scheme in authWrapper.credentialFactories: self.failUnless(scheme in expectedSchemes) self.assertEquals( len(expectedSchemes), len(authWrapper.credentialFactories) ) class ResourceTests(CalDAVServiceMakerTestBase): def test_LogWrapper(self): """ Test the configuration of the log wrapper """ self.failUnless(isinstance(self.rootResource, LogWrapperResource)) def test_AuthWrapper(self): """ Test the configuration of the auth wrapper """ self.failUnless(isinstance(self.rootResource.resource, AuthenticationWrapper)) def test_rootResource(self): """ Test the root resource """ self.failUnless(isinstance(self.rootResource.resource.resource, RootResource)) @inlineCallbacks def test_principalResource(self): """ Test the principal resource """ self.failUnless(isinstance( (yield self.actualRoot.getChild("principals")), DirectoryPrincipalProvisioningResource )) @inlineCallbacks def test_calendarResource(self): """ Test the calendar resource """ self.failUnless(isinstance( (yield self.actualRoot.getChild("calendars")), DirectoryCalendarHomeProvisioningResource )) @inlineCallbacks def test_sameDirectory(self): """ Test that the principal hierarchy has a reference to the same DirectoryService as the calendar hierarchy """ principals = yield self.actualRoot.getChild("principals") calendars = yield self.actualRoot.getChild("calendars") 
self.assertEquals(principals.directory, calendars.directory) class DummyProcessObject(object): """ Simple stub for Process Object API which just has an executable and some arguments. This is a stand in for L{TwistdSlaveProcess}. """ def __init__(self, scriptname, *args): self.scriptname = scriptname self.args = args def starting(self): pass def stopped(self): pass def getCommandLine(self): """ Simple command line. """ return [self.scriptname] + list(self.args) def getFileDescriptors(self): """ Return a dummy, empty mapping of file descriptors. """ return {} def getName(self): """ Get a dummy name. """ return 'Dummy' class ScriptProcessObject(DummyProcessObject): """ Simple stub for the Process Object API that will run a test script. """ def getCommandLine(self): """ Get the command line to invoke this script. """ return [ sys.executable, FilePath(__file__).sibling(self.scriptname).path ] + list(self.args) class DelayedStartupProcessMonitorTests(StoreTestCase): """ Test cases for L{DelayedStartupProcessMonitor}. """ def test_lineAfterLongLine(self): """ A "long" line of output from a monitored process (longer than L{LineReceiver.MAX_LENGTH}) should be logged in chunks rather than all at once, to avoid resource exhaustion. """ dspm = DelayedStartupProcessMonitor() dspm.addProcessObject( ScriptProcessObject( 'longlines.py', str(DelayedStartupLineLogger.MAX_LENGTH) ), os.environ ) dspm.startService() self.addCleanup(dspm.stopService) logged = [] def tempObserver(event): # Probably won't be a problem, but let's not have any intermittent # test issues that stem from multi-threaded log messages randomly # going off... 
if not isInIOThread(): callFromThread(tempObserver, event) return if event.get('isError'): d.errback() if event.get("log_system") == u'Dummy': logged.append(event["msg"]) if event["msg"] == u'z': d.callback("done") logging.addObserver(tempObserver) self.addCleanup(logging.removeObserver, tempObserver) d = Deferred() def assertions(result): self.assertEquals(["x", "y", "y", # final segment "z"], [msg[0] for msg in logged]) self.assertEquals([" (truncated, continued)", " (truncated, continued)", "y", "z"], [msg[-len(" (truncated, continued)"):] for msg in logged]) d.addCallback(assertions) return d def test_breakLineIntoSegments(self): """ Exercise the line-breaking logic with various key lengths """ testLogger = DelayedStartupLineLogger() testLogger.MAX_LENGTH = 10 for input, output in [ ("", []), ("a", ["a"]), ("abcde", ["abcde"]), ("abcdefghij", ["abcdefghij"]), ( "abcdefghijk", [ "abcdefghij (truncated, continued)", "k" ] ), ( "abcdefghijklmnopqrst", [ "abcdefghij (truncated, continued)", "klmnopqrst" ] ), ( "abcdefghijklmnopqrstuv", [ "abcdefghij (truncated, continued)", "klmnopqrst (truncated, continued)", "uv" ] ), ]: self.assertEquals(output, testLogger._breakLineIntoSegments(input)) def test_acceptDescriptorInheritance(self): """ If a L{TwistdSlaveProcess} specifies some file descriptors to be inherited, they should be inherited by the subprocess. """ imps = InMemoryProcessSpawner() dspm = DelayedStartupProcessMonitor(imps) # Most arguments here will be ignored, so these are bogus values. slave = TwistdSlaveProcess( twistd="bleh", tapname="caldav", configFile="/does/not/exist", id=10, interfaces='127.0.0.1', inheritFDs=[3, 7], inheritSSLFDs=[19, 25], ) dspm.addProcessObject(slave, {}) dspm.startService() # We can easily stub out spawnProcess, because caldav calls it, but a # bunch of callLater calls are buried in procmon itself, so we need to # use the real clock. 
oneProcessTransport = imps.waitForOneProcess() self.assertEquals(oneProcessTransport.childFDs, {0: 'w', 1: 'r', 2: 'r', 3: 3, 7: 7, 19: 19, 25: 25}) def test_changedArgumentEachSpawn(self): """ If the result of C{getCommandLine} changes on subsequent calls, subsequent calls should result in different arguments being passed to C{spawnProcess} each time. """ imps = InMemoryProcessSpawner() dspm = DelayedStartupProcessMonitor(imps) slave = DummyProcessObject('scriptname', 'first') dspm.addProcessObject(slave, {}) dspm.startService() oneProcessTransport = imps.waitForOneProcess() self.assertEquals(oneProcessTransport.args, ['scriptname', 'first']) slave.args = ['second'] oneProcessTransport.processProtocol.processEnded(None) twoProcessTransport = imps.waitForOneProcess() self.assertEquals(twoProcessTransport.args, ['scriptname', 'second']) def test_metaDescriptorInheritance(self): """ If a L{TwistdSlaveProcess} specifies a meta-file-descriptor to be inherited, it should be inherited by the subprocess, and a configuration argument should be passed that indicates to the subprocess. """ imps = InMemoryProcessSpawner() dspm = DelayedStartupProcessMonitor(imps) # Most arguments here will be ignored, so these are bogus values. slave = TwistdSlaveProcess( twistd="bleh", tapname="caldav", configFile="/does/not/exist", id=10, interfaces='127.0.0.1', metaSocket=FakeDispatcher().addSocket() ) dspm.addProcessObject(slave, {}) dspm.startService() oneProcessTransport = imps.waitForOneProcess() self.assertIn("MetaFD=4", oneProcessTransport.args) self.assertEquals( oneProcessTransport.args[oneProcessTransport.args.index("MetaFD=4") - 1], '-o', "MetaFD argument was not passed as an option" ) self.assertEquals(oneProcessTransport.childFDs, {0: 'w', 1: 'r', 2: 'r', 4: 4}) def test_startServiceDelay(self): """ Starting a L{DelayedStartupProcessMonitor} should result in the process objects that have been added to it being started once per delayInterval. 
""" imps = InMemoryProcessSpawner() dspm = DelayedStartupProcessMonitor(imps) dspm.delayInterval = 3.0 sampleCounter = range(0, 5) for counter in sampleCounter: slave = TwistdSlaveProcess( twistd="bleh", tapname="caldav", configFile="/does/not/exist", id=counter * 10, interfaces='127.0.0.1', metaSocket=FakeDispatcher().addSocket() ) dspm.addProcessObject(slave, {"SAMPLE_ENV_COUNTER": str(counter)}) dspm.startService() # Advance the clock a bunch of times, allowing us to time things with a # comprehensible resolution. imps.pump([0] + [dspm.delayInterval / 2.0] * len(sampleCounter) * 3) expectedValues = [dspm.delayInterval * n for n in sampleCounter] self.assertEquals([x.startedAt for x in imps.processTransports], expectedValues) class FakeFD(object): def __init__(self, n): self.fd = n def fileno(self): return self.fd class FakeSubsocket(object): def __init__(self, fakefd): self.fakefd = fakefd def childSocket(self): return self.fakefd def start(self): pass def restarted(self): pass def stop(self): pass class FakeDispatcher(object): n = 3 def addSocket(self): self.n += 1 return FakeSubsocket(FakeFD(self.n)) class TwistdSlaveProcessTests(StoreTestCase): """ Tests for L{TwistdSlaveProcess}. """ def test_pidfile(self): """ The result of L{TwistdSlaveProcess.getCommandLine} includes an option setting the name of the pidfile to something including the instance id. """ slave = TwistdSlaveProcess("/path/to/twistd", "something", "config", 7, []) commandLine = slave.getCommandLine() option = 'PIDFile=something-instance-7.pid' self.assertIn(option, commandLine) self.assertEquals(commandLine[commandLine.index(option) - 1], '-o') class ReExecServiceTests(StoreTestCase): @inlineCallbacks def test_reExecService(self): """ Verify that sending a HUP to the test reexec.tac causes startService and stopService to be called again by counting the number of times START and STOP appear in the process output. 
""" # Inherit the reactor used to run trial reactorArg = "--reactor=select" for arg in sys.argv: if arg.startswith("--reactor"): reactorArg = arg break tacFilePath = os.path.join(os.path.dirname(__file__), "reexec.tac") twistd = which("twistd")[0] deferred = Deferred() proc = reactor.spawnProcess( CapturingProcessProtocol(deferred, None), sys.executable, [sys.executable, '-W', 'ignore', twistd, reactorArg, '-n', '-y', tacFilePath], env=os.environ ) reactor.callLater(3, proc.signalProcess, "HUP") reactor.callLater(6, proc.signalProcess, "TERM") output = yield deferred self.assertEquals(output.count("START"), 2) self.assertEquals(output.count("STOP"), 2) class SystemIDsTests(StoreTestCase): """ Verifies the behavior of calendarserver.tap.caldav.getSystemIDs """ def _wrappedFunction(self): """ Return a copy of the getSystemIDs function with test implementations of the ID lookup functions swapped into the namespace. """ def _getpwnam(name): if name == "exists": Getpwnam = namedtuple("Getpwnam", ("pw_uid")) return Getpwnam(42) else: raise KeyError(name) def _getgrnam(name): if name == "exists": Getgrnam = namedtuple("Getgrnam", ("gr_gid")) return Getgrnam(43) else: raise KeyError(name) def _getuid(): return 44 def _getgid(): return 45 return type(getSystemIDs)( getSystemIDs.func_code, # @UndefinedVariable { "getpwnam": _getpwnam, "getgrnam": _getgrnam, "getuid": _getuid, "getgid": _getgid, "KeyError": KeyError, "ConfigurationError": ConfigurationError, } ) def test_getSystemIDs_UserNameNotFound(self): """ If userName is passed in but is not found on the system, raise a ConfigurationError """ self.assertRaises( ConfigurationError, self._wrappedFunction(), "nonexistent", "exists" ) def test_getSystemIDs_GroupNameNotFound(self): """ If groupName is passed in but is not found on the system, raise a ConfigurationError """ self.assertRaises( ConfigurationError, self._wrappedFunction(), "exists", "nonexistent" ) def test_getSystemIDs_NamesNotSpecified(self): """ If names are 
not provided, use the IDs of the process """ self.assertEquals(self._wrappedFunction()("", ""), (44, 45)) def test_getSystemIDs_NamesSpecified(self): """ If names are provided, use the IDs corresponding to those names """ self.assertEquals(self._wrappedFunction()("exists", "exists"), (42, 43)) # # Tests for PreProcessingService # class Step(object): def __init__(self, recordCallback, shouldFail): self._recordCallback = recordCallback self._shouldFail = shouldFail def stepWithResult(self, result): self._recordCallback(self.successValue, None) if self._shouldFail: 1 / 0 return succeed(result) def stepWithFailure(self, failure): self._recordCallback(self.errorValue, failure) if self._shouldFail: return failure class StepOne(Step): successValue = "one success" errorValue = "one failure" class StepTwo(Step): successValue = "two success" errorValue = "two failure" class StepThree(Step): successValue = "three success" errorValue = "three failure" class StepFour(Step): successValue = "four success" errorValue = "four failure" class PreProcessingServiceTestCase(TestCase): def fakeServiceCreator(self, cp, store, lo, storageService): self.history.append(("serviceCreator", store, storageService)) def setUp(self): self.history = [] self.clock = Clock() self.pps = PreProcessingService( self.fakeServiceCreator, None, "store", None, "storageService", reactor=self.clock ) def _record(self, value, failure): self.history.append(value) def test_allSuccess(self): self.pps.addStep( StepOne(self._record, False) ).addStep( StepTwo(self._record, False) ).addStep( StepThree(self._record, False) ).addStep( StepFour(self._record, False) ) self.pps.startService() self.assertEquals( self.history, [ 'one success', 'two success', 'three success', 'four success', ('serviceCreator', 'store', 'storageService') ] ) def test_allFailure(self): self.pps.addStep( StepOne(self._record, True) ).addStep( StepTwo(self._record, True) ).addStep( StepThree(self._record, True) ).addStep( StepFour(self._record, 
                True)
        )
        self.pps.startService()
        self.assertEquals(
            self.history,
            [
                'one success', 'two failure', 'three failure', 'four failure',
                ('serviceCreator', None, 'storageService')
            ]
        )

    def test_partialFailure(self):
        self.pps.addStep(
            StepOne(self._record, True)
        ).addStep(
            StepTwo(self._record, False)
        ).addStep(
            StepThree(self._record, True)
        ).addStep(
            StepFour(self._record, False)
        )
        self.pps.startService()
        self.assertEquals(
            self.history,
            [
                'one success', 'two failure', 'three success', 'four failure',
                ('serviceCreator', 'store', 'storageService')
            ]
        )


class StubStorageService(object):

    def __init__(self):
        self.hardStopCalled = False

    def hardStop(self):
        self.hardStopCalled = True


class StubReactor(object):

    def __init__(self):
        self.stopCalled = False

    def stop(self):
        self.stopCalled = True


class DataStoreMonitorTestCase(TestCase):

    def test_monitor(self):
        storageService = StubStorageService()
        stubReactor = StubReactor()
        monitor = DataStoreMonitor(stubReactor, storageService)

        monitor.disconnected()
        self.assertTrue(storageService.hardStopCalled)
        self.assertTrue(stubReactor.stopCalled)

        storageService.hardStopCalled = False
        stubReactor.stopCalled = False
        monitor.deleted()
        self.assertTrue(storageService.hardStopCalled)
        self.assertTrue(stubReactor.stopCalled)

        storageService.hardStopCalled = False
        stubReactor.stopCalled = False
        monitor.renamed()
        self.assertTrue(storageService.hardStopCalled)
        self.assertTrue(stubReactor.stopCalled)

calendarserver-9.1+dfsg/calendarserver/tap/test/test_util.py

##
# Copyright (c) 2007-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from calendarserver.tap.util import (
    MemoryLimitService, Stepper, verifyTLSCertificate, memoryForPID,
    AlertPoster, AMPAlertProtocol, AMPAlertSender
)
from twisted.internet.defer import succeed, inlineCallbacks
from twisted.internet.task import Clock
from twisted.protocols.amp import AMP
from twisted.python.filepath import FilePath
from twisted.test.testutils import returnConnected
from twistedcaldav.config import ConfigDict
from twistedcaldav.test.util import TestCase
from twistedcaldav.util import computeProcessCount
import os


class ProcessCountTestCase(TestCase):

    def test_count(self):
        data = (
            # minimum, perCPU, perGB, cpu, memory (in GB), expected:
            (0, 1, 1, 0, 0, 0),
            (1, 2, 2, 0, 0, 1),
            (1, 2, 1, 1, 1, 1),
            (1, 2, 2, 1, 1, 2),
            (1, 2, 2, 2, 1, 2),
            (1, 2, 2, 1, 2, 2),
            (1, 2, 2, 2, 2, 4),
            (4, 1, 2, 8, 2, 4),
            (6, 2, 2, 2, 2, 6),
            (1, 2, 1, 4, 99, 8),
            (2, 1, 2, 2, 2, 2),    # 2 cores, 2GB = 2
            (2, 1, 2, 2, 4, 2),    # 2 cores, 4GB = 2
            (2, 1, 2, 8, 6, 8),    # 8 cores, 6GB = 8
            (2, 1, 2, 8, 16, 8),   # 8 cores, 16GB = 8
        )

        for min, perCPU, perGB, cpu, mem, expected in data:
            mem *= (1024 * 1024 * 1024)
            self.assertEquals(
                expected,
                computeProcessCount(min, perCPU, perGB, cpuCount=cpu, memSize=mem)
            )


# Stub classes for MemoryLimitServiceTestCase

class StubProtocol(object):

    def __init__(self, transport):
        self.transport = transport


class StubProcess(object):

    def __init__(self, pid):
        self.pid = pid


class StubProcessMonitor(object):

    def __init__(self, processes, protocols):
        self.processes = processes
        self.protocols = protocols
        self.history = []

    def stopProcess(self, name):
        self.history.append(name)


class MemoryLimitServiceTestCase(TestCase):

    def test_checkMemory(self):
        """
        Set up stub objects to verify MemoryLimitService.checkMemory()
        only stops the processes whose memory usage exceeds the configured
        limit, and skips memcached
        """
        data = {
            # PID : (name, resident memory-in-bytes, virtual memory-in-bytes)
            101: ("process #1", 10, 1010),
            102: ("process #2", 30, 1030),
            103: ("process #3", 50, 1050),
            99: ("memcached-Default", 10, 1010),
        }

        processes = []
        protocols = {}
        for pid, (name, _ignore_resident, _ignore_virtual) in data.iteritems():
            protocols[name] = StubProtocol(StubProcess(pid))
            processes.append(name)
        processMonitor = StubProcessMonitor(processes, protocols)
        clock = Clock()
        service = MemoryLimitService(processMonitor, 10, 15, True, reactor=clock)

        # For testing, use a stub implementation of memory-usage lookup
        def testMemoryForPID(pid, residentOnly):
            return data[pid][1 if residentOnly else 2]
        service._memoryForPID = testMemoryForPID

        # After 5 seconds, nothing should have happened, since the interval is 10 seconds
        service.startService()
        clock.advance(5)
        self.assertEquals(processMonitor.history, [])

        # After 7 more seconds, processes 2 and 3 should have been stopped since their
        # memory usage exceeds 10 bytes
        clock.advance(7)
        self.assertEquals(processMonitor.history, ['process #2', 'process #3'])

        # Now switch to looking at virtual memory, in which case all 3 processes
        # should be stopped
        service._residentOnly = False
        processMonitor.history = []
        clock.advance(10)
        self.assertEquals(processMonitor.history, ['process #1', 'process #2', 'process #3'])

    def test_memoryForPID(self):
        """
        Test that L{memoryForPID} returns a valid result.
        """
        memory = memoryForPID(os.getpid())
        self.assertNotEqual(memory, 0)


#
# Tests for Stepper
#

class Step(object):

    def __init__(self, recordCallback, shouldFail):
        self._recordCallback = recordCallback
        self._shouldFail = shouldFail

    def stepWithResult(self, result):
        self._recordCallback(self.successValue, None)
        if self._shouldFail:
            1 / 0
        return succeed(result)

    def stepWithFailure(self, failure):
        self._recordCallback(self.errorValue, failure)
        if self._shouldFail:
            return failure


class StepOne(Step):
    successValue = "one success"
    errorValue = "one failure"


class StepTwo(Step):
    successValue = "two success"
    errorValue = "two failure"


class StepThree(Step):
    successValue = "three success"
    errorValue = "three failure"


class StepFour(Step):
    successValue = "four success"
    errorValue = "four failure"


class StepperTestCase(TestCase):

    def setUp(self):
        self.history = []
        self.stepper = Stepper()

    def _record(self, value, failure):
        self.history.append(value)

    @inlineCallbacks
    def test_allSuccess(self):
        self.stepper.addStep(
            StepOne(self._record, False)
        ).addStep(
            StepTwo(self._record, False)
        ).addStep(
            StepThree(self._record, False)
        ).addStep(
            StepFour(self._record, False)
        )
        result = (yield self.stepper.start("abc"))
        self.assertEquals(result, "abc")  # original result passed through
        self.assertEquals(
            self.history,
            ['one success', 'two success', 'three success', 'four success'])

    def test_allFailure(self):
        self.stepper.addStep(StepOne(self._record, True))
        self.stepper.addStep(StepTwo(self._record, True))
        self.stepper.addStep(StepThree(self._record, True))
        self.stepper.addStep(StepFour(self._record, True))
        self.failUnlessFailure(self.stepper.start(), ZeroDivisionError)
        self.assertEquals(
            self.history,
            ['one success', 'two failure', 'three failure', 'four failure'])

    @inlineCallbacks
    def test_partialFailure(self):
        self.stepper.addStep(StepOne(self._record, True))
        self.stepper.addStep(StepTwo(self._record, False))
        self.stepper.addStep(StepThree(self._record, True))
        self.stepper.addStep(StepFour(self._record, False))
        result = (yield self.stepper.start("abc"))
        self.assertEquals(result, None)  # original result is gone
        self.assertEquals(
            self.history,
            ['one success', 'two failure', 'three success', 'four failure'])


class PreFlightChecksTestCase(TestCase):
    """
    Verify that missing, empty, or bogus TLS Certificates are detected
    """

    def test_missingCertificate(self):
        success, _ignore_reason = verifyTLSCertificate(
            ConfigDict(
                {
                    "SSLCertificate": "missing",
                    "SSLKeychainIdentity": "missing",
                }
            )
        )
        self.assertFalse(success)

    def test_emptyCertificate(self):
        certFilePath = FilePath(self.mktemp())
        certFilePath.setContent("")
        success, _ignore_reason = verifyTLSCertificate(
            ConfigDict(
                {
                    "SSLCertificate": certFilePath.path,
                    "SSLKeychainIdentity": "missing",
                }
            )
        )
        self.assertFalse(success)

    def test_bogusCertificate(self):
        certFilePath = FilePath(self.mktemp())
        certFilePath.setContent("bogus")
        keyFilePath = FilePath(self.mktemp())
        keyFilePath.setContent("bogus")

        success, _ignore_reason = verifyTLSCertificate(
            ConfigDict(
                {
                    "SSLCertificate": certFilePath.path,
                    "SSLPrivateKey": keyFilePath.path,
                    "SSLAuthorityChain": "",
                    "SSLMethod": "SSLv3_METHOD",
                    "SSLCiphers": "ALL:!aNULL:!ADH:!eNULL:!LOW:!EXP:RC4+RSA:+HIGH:+MEDIUM",
                    "SSLKeychainIdentity": "missing",
                }
            )
        )
        self.assertFalse(success)


class AlertTestCase(TestCase):

    def test_secondsSinceLastPost(self):
        timestampsDir = self.mktemp()
        os.mkdir(timestampsDir)

        # Non existent timestamp file
        self.assertEquals(
            AlertPoster.secondsSinceLastPost("TestAlert", timestampsDirectory=timestampsDir, now=10),
            10
        )

        # Existing valid, past timestamp file
        AlertPoster.recordTimeStamp("TestAlert", timestampsDirectory=timestampsDir, now=5)
        self.assertEquals(
            AlertPoster.secondsSinceLastPost("TestAlert", timestampsDirectory=timestampsDir, now=12),
            7
        )

        # Existing valid, future timestamp file
        AlertPoster.recordTimeStamp("TestAlert", timestampsDirectory=timestampsDir, now=20)
        self.assertEquals(
            AlertPoster.secondsSinceLastPost("TestAlert", timestampsDirectory=timestampsDir, now=12),
            -8
        )

        # Existing invalid timestamp file
        dirFP = FilePath(timestampsDir)
        child = dirFP.child(".TestAlert.timestamp")
        child.setContent("not a number")
        self.assertEquals(
            AlertPoster.secondsSinceLastPost("TestAlert", timestampsDirectory=timestampsDir, now=12),
            12
        )

    def stubPostAlert(self, alertType, ignoreWithinSeconds, args):
        self.alertType = alertType
        self.ignoreWithinSeconds = ignoreWithinSeconds
        self.args = args

    def test_protocol(self):
        self.patch(AlertPoster, "postAlert", self.stubPostAlert)
        client = AMP()
        server = AMPAlertProtocol()
        pump = returnConnected(server, client)
        sender = AMPAlertSender(protocol=client)
        sender.sendAlert("alertType", ["arg1", "arg2"])
        pump.flush()
        self.assertEquals(self.alertType, "alertType")
        self.assertEquals(self.ignoreWithinSeconds, 0)
        self.assertEquals(self.args, ["arg1", "arg2"])

calendarserver-9.1+dfsg/calendarserver/tap/util.py

# -*- test-case-name: calendarserver.tap.test.test_caldav -*-
##
# Copyright (c) 2005-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
Utilities for assembling the service and resource hierarchy.
"""

__all__ = [
    "FakeRequest",
    "getDBPool",
    "getRootResource",
    "getSSLPassphrase",
    "MemoryLimitService",
    "AlertPoster",
    "preFlightChecks",
]

from calendarserver.accesslog import DirectoryLogWrapperResource
from calendarserver.provision.root import RootResource
from calendarserver.push.applepush import APNSubscriptionResource
from calendarserver.push.notifier import NotifierFactory
from calendarserver.push.util import getAPNTopicFromConfig
from calendarserver.tools import diagnose
from calendarserver.tools.util import checkDirectory
from calendarserver.webadmin.delegation import WebAdminResource
from calendarserver.webcal.resource import WebCalendarResource
from socket import fromfd, AF_UNIX, SOCK_STREAM, socketpair
from subprocess import Popen, PIPE
from twext.enterprise.adbapi2 import ConnectionPool, ConnectionPoolConnection
from twext.enterprise.adbapi2 import ConnectionPoolClient
from twext.enterprise.ienterprise import POSTGRES_DIALECT, ORACLE_DIALECT, DatabaseType
from twext.internet.ssl import ChainingOpenSSLContextFactory
from twext.python.filepath import CachingFilePath
from twext.python.filepath import CachingFilePath as FilePath
from twext.python.log import Logger
from twext.who.checker import HTTPDigestCredentialChecker
from twext.who.checker import UsernamePasswordCredentialChecker
from twisted.application.service import Service
from twisted.cred.error import UnauthorizedLogin
from twisted.cred.portal import Portal
from twisted.internet import reactor as _reactor
from twisted.internet.defer import inlineCallbacks, returnValue, Deferred, succeed
from twisted.internet.reactor import addSystemEventTrigger
from twisted.internet.protocol import Factory
from twisted.internet.tcp import Connection
from twisted.protocols import amp
from twisted.python.procutils import which
from twisted.python.usage import UsageError
from twistedcaldav.bind import doBind
from twistedcaldav.cache import CacheStoreNotifierFactory
from twistedcaldav.config import ConfigurationError
from twistedcaldav.controlapi import ControlAPIResource
from twistedcaldav.directory.addressbook import DirectoryAddressBookHomeProvisioningResource
from twistedcaldav.directory.calendar import DirectoryCalendarHomeProvisioningResource
from twistedcaldav.directory.digest import QopDigestCredentialFactory
from twistedcaldav.directory.principal import DirectoryPrincipalProvisioningResource
from twistedcaldav.directorybackedaddressbook import DirectoryBackedAddressBookResource
from twistedcaldav.resource import AuthenticationWrapper
from twistedcaldav.serverinfo import ServerInfoResource
from twistedcaldav.simpleresource import SimpleResource, SimpleRedirectResource, \
    SimpleUnavailableResource
from twistedcaldav.stdconfig import config
from twistedcaldav.timezones import TimezoneCache
from twistedcaldav.timezoneservice import TimezoneServiceResource
from twistedcaldav.timezonestdservice import TimezoneStdServiceResource
from txdav.base.datastore.dbapiclient import DBAPIConnector
from txdav.base.datastore.subpostgres import PostgresService
from txdav.caldav.datastore.scheduling.ischedule.dkim import DKIMUtils, DomainKeyResource
from txdav.caldav.datastore.scheduling.ischedule.localservers import buildServersDB
from txdav.caldav.datastore.scheduling.ischedule.resource import IScheduleInboxResource
from txdav.common.datastore.podding.resource import ConduitResource
from txdav.common.datastore.sql import current_sql_schema
from txdav.common.datastore.upgrade.sql.upgrade import NotAllowedToUpgrade
from txdav.dps.client import DirectoryService as DirectoryProxyClientService
from txdav.who.cache import CachingDirectoryService
from txdav.who.util import directoryFromConfig
from txweb2.auth.basic import BasicCredentialFactory
from txweb2.auth.tls import TLSCredentialsFactory, TLSCredentials
from txweb2.dav import auth
from txweb2.dav.auth import IPrincipalCredentials
from txweb2.dav.util import joinURL
from txweb2.http_headers import Headers
from txweb2.resource import Resource
from txweb2.static import File as FileResource

from plistlib import readPlist
from urllib import quote

import OpenSSL
import errno
import os
import psutil
import sys
import time

try:
    from twistedcaldav.authkerb import NegotiateCredentialFactory
    NegotiateCredentialFactory  # pacify pyflakes
except ImportError:
    NegotiateCredentialFactory = None

log = Logger()


def pgServiceFromConfig(config, subServiceFactory, uid=None, gid=None):
    """
    Construct a L{PostgresService} from a given configuration and subservice.

    @param config: the configuration to derive postgres configuration
        parameters from.

    @param subServiceFactory: A factory for the service to start once the
        L{PostgresService} has been initialized.

    @param uid: The user-ID to run the PostgreSQL server as.

    @param gid: The group-ID to run the PostgreSQL server as.

    @return: a service which can start postgres.

    @rtype: L{PostgresService}
    """
    dbRoot = CachingFilePath(config.DatabaseRoot)
    # Construct a PostgresService exactly as the parent would, so that we
    # can establish connection information.
    return PostgresService(
        dbRoot, subServiceFactory, current_sql_schema,
        databaseName=config.Postgres.DatabaseName,
        clusterName=config.Postgres.ClusterName,
        logFile=config.Postgres.LogFile,
        logDirectory=config.LogRoot if config.Postgres.LogRotation else "",
        socketDir=config.Postgres.SocketDirectory,
        socketName=config.Postgres.SocketName,
        listenAddresses=config.Postgres.ListenAddresses,
        txnTimeoutSeconds=config.Postgres.TxnTimeoutSeconds,
        sharedBuffers=config.Postgres.SharedBuffers,
        maxConnections=config.Postgres.MaxConnections,
        options=config.Postgres.Options,
        uid=uid, gid=gid,
        spawnedDBUser=config.SpawnedDBUser,
        pgCtl=config.Postgres.Ctl,
        initDB=config.Postgres.Init,
    )


class ConnectionWithPeer(Connection):

    connected = True

    def getPeer(self):
        # NOTE: the format-string contents were lost in extraction (the
        # angle-bracketed text was stripped); reconstructed here as a
        # plausible peer repr.
        return "<peer: %r %r>" % (self.socket.fileno(), id(self))

    def getHost(self):
        # NOTE: reconstructed for the same reason as getPeer() above.
        return "<host: %r %r>" % (self.socket.fileno(), id(self))


def transactionFactoryFromFD(dbampfd, dbtype):
    """
    Create a transaction factory from an inherited file descriptor, such as
    one created by L{ConnectionDispenser}.
    """
    skt = fromfd(dbampfd, AF_UNIX, SOCK_STREAM)
    os.close(dbampfd)
    protocol = ConnectionPoolClient(dbtype=dbtype)
    transport = ConnectionWithPeer(skt, protocol)
    protocol.makeConnection(transport)
    transport.startReading()
    return protocol.newTransaction


class ConnectionDispenser(object):
    """
    A L{ConnectionDispenser} can dispense already-connected file descriptors,
    for use with subprocess spawning.
    """
    # Very long term FIXME: this mechanism should ideally be eliminated, by
    # making all subprocesses have a single stdio AMP connection that
    # multiplexes between multiple protocols.

    def __init__(self, connectionPool):
        self.pool = connectionPool

    def dispense(self):
        """
        Dispense a socket object, already connected to a server, for a client
        in a subprocess.
        """
        # FIXME: these sockets need to be re-dispensed when the process is
        # respawned, and they currently won't be.
        c, s = socketpair(AF_UNIX, SOCK_STREAM)
        protocol = ConnectionPoolConnection(self.pool)
        transport = ConnectionWithPeer(s, protocol)
        protocol.makeConnection(transport)
        transport.startReading()
        return c


def storeFromConfigWithoutDPS(config, txnFactory):
    store = storeFromConfig(config, txnFactory, None)
    directory = directoryFromConfig(config, store)
    store.setDirectoryService(directory)
    return store


def storeFromConfigWithDPSClient(config, txnFactory):
    store = storeFromConfig(config, txnFactory, None)
    directory = DirectoryProxyClientService(config.DirectoryRealmName)
    if config.Servers.Enabled:
        directory.setServersDB(buildServersDB(config.Servers.MaxClients))
    if config.DirectoryCaching.CachingSeconds:
        directory = CachingDirectoryService(
            directory,
            expireSeconds=config.DirectoryCaching.CachingSeconds,
            lookupsBetweenPurges=config.DirectoryCaching.LookupsBetweenPurges,
            negativeCaching=config.DirectoryCaching.NegativeCachingEnabled,
        )
    store.setDirectoryService(directory)
    return store


def storeFromConfigWithDPSServer(config, txnFactory):
    store = storeFromConfig(config, txnFactory, None)
    directory = directoryFromConfig(config, store)
    store.setDirectoryService(directory)
    return store


def storeFromConfig(config, txnFactory, directoryService):
    """
    Produce an L{IDataStore} from the given configuration, transaction
    factory, and notifier factory.

    If the transaction factory is C{None}, we will create a filesystem
    store.  Otherwise, a SQL store, using that connection information.
    """
    #
    # Configure NotifierFactory
    #
    notifierFactories = {}
    if config.Notifications.Enabled:
        notifierFactories["push"] = NotifierFactory(
            config.ServerHostName,
            config.Notifications.CoalesceSeconds)

    if config.EnableResponseCache and config.Memcached.Pools.Default.ClientEnabled:
        notifierFactories["cache"] = CacheStoreNotifierFactory()

    quota = config.UserQuota
    if quota == 0:
        quota = None

    if txnFactory is not None:
        if config.EnableSSL or config.BehindTLSProxy:
            uri = "https://{config.ServerHostName}:{config.SSLPort}".format(config=config)
        else:
            uri = "https://{config.ServerHostName}:{config.HTTPPort}".format(config=config)
        attachments_uri = uri + "/calendars/__uids__/%(home)s/dropbox/%(dropbox_id)s/%(name)s"

        from txdav.common.datastore.sql import CommonDataStore as CommonSQLDataStore
        store = CommonSQLDataStore(
            txnFactory, notifierFactories,
            directoryService,
            FilePath(config.AttachmentsRoot), attachments_uri,
            config.EnableCalDAV, config.EnableCardDAV,
            config.EnableManagedAttachments,
            quota=quota,
            logLabels=config.LogDatabase.LabelsInSQL,
            logStats=config.LogDatabase.Statistics,
            logStatsLogFile=config.LogDatabase.StatisticsLogFile,
            logSQL=config.LogDatabase.SQLStatements,
            logTransactionWaits=config.LogDatabase.TransactionWaitSeconds,
            timeoutTransactions=config.TransactionTimeoutSeconds,
            cacheQueries=config.QueryCaching.Enabled,
            cachePool=config.QueryCaching.MemcachedPool,
            cacheExpireSeconds=config.QueryCaching.ExpireSeconds
        )
    else:
        from txdav.common.datastore.file import CommonDataStore as CommonFileDataStore
        store = CommonFileDataStore(
            FilePath(config.DocumentRoot),
            notifierFactories, directoryService,
            config.EnableCalDAV, config.EnableCardDAV,
            quota=quota
        )

    # FIXME: NotifierFactories need a reference to the store in order
    # to get a txn in order to possibly create a Work item
    for notifierFactory in notifierFactories.values():
        notifierFactory.store = store

    return store


# MOVE2WHO -- should we move this class somewhere else?
class PrincipalCredentialChecker(object): credentialInterfaces = (IPrincipalCredentials,) @inlineCallbacks def requestAvatarId(self, credentials): credentials = IPrincipalCredentials(credentials) if credentials.authnPrincipal is None: raise UnauthorizedLogin( "No such user: {user}".format( user=credentials.credentials.username ) ) # See if record is enabledForLogin if not credentials.authnPrincipal.record.isLoginEnabled(): raise UnauthorizedLogin( "User not allowed to log in: {user}".format( user=credentials.credentials.username ) ) # Handle Kerberos as a separate behavior try: from twistedcaldav.authkerb import NegotiateCredentials except ImportError: NegotiateCredentials = None if NegotiateCredentials and isinstance(credentials.credentials, NegotiateCredentials): # If we get here with Kerberos, then authentication has already succeeded returnValue( ( credentials.authnPrincipal, credentials.authzPrincipal, ) ) # Handle TLS Client Certificate elif isinstance(credentials.credentials, TLSCredentials): # If we get here with TLS, then authentication (certificate verification) has already succeeded returnValue( ( credentials.authnPrincipal, credentials.authzPrincipal, ) ) else: if (yield credentials.authnPrincipal.record.verifyCredentials(credentials.credentials)): returnValue( ( credentials.authnPrincipal, credentials.authzPrincipal, ) ) else: raise UnauthorizedLogin( "Incorrect credentials for user: {user}".format( user=credentials.credentials.username ) ) def getRootResource(config, newStore, resources=None): """ Set up directory service and resource hierarchy based on config. Return root resource. Additional resources can be added to the hierarchy by passing a list of tuples containing: path, resource class, __init__ args list, and optional authentication schemes list ("basic", "digest"). If the store is specified, then it has already been constructed, so use it. Otherwise build one with L{storeFromConfig}. 
""" if newStore is None: raise RuntimeError("Internal error, 'newStore' must be specified.") if resources is None: resources = [] # FIXME: this is only here to workaround circular imports doBind() # # Default resource classes # rootResourceClass = RootResource calendarResourceClass = DirectoryCalendarHomeProvisioningResource iScheduleResourceClass = IScheduleInboxResource conduitResourceClass = ConduitResource timezoneServiceResourceClass = TimezoneServiceResource timezoneStdServiceResourceClass = TimezoneStdServiceResource webCalendarResourceClass = WebCalendarResource webAdminResourceClass = WebAdminResource addressBookResourceClass = DirectoryAddressBookHomeProvisioningResource directoryBackedAddressBookResourceClass = DirectoryBackedAddressBookResource apnSubscriptionResourceClass = APNSubscriptionResource serverInfoResourceClass = ServerInfoResource principalResourceClass = DirectoryPrincipalProvisioningResource controlResourceClass = ControlAPIResource directory = newStore.directoryService() principalCollection = principalResourceClass("/principals/", directory) # # Configure the Site and Wrappers # wireEncryptedCredentialFactories = [] wireUnencryptedCredentialFactories = [] portal = Portal(auth.DavRealm()) portal.registerChecker(UsernamePasswordCredentialChecker(directory)) portal.registerChecker(HTTPDigestCredentialChecker(directory)) portal.registerChecker(PrincipalCredentialChecker()) realm = directory.realmName.encode("utf-8") or "" log.info("Configuring authentication for realm: {realm}", realm=realm) for scheme, schemeConfig in config.Authentication.iteritems(): scheme = scheme.lower() credFactory = None if schemeConfig["Enabled"]: log.info("Setting up scheme: {scheme}", scheme=scheme) if scheme == "kerberos": if not NegotiateCredentialFactory: log.info("Kerberos support not available") continue try: principal = schemeConfig["ServicePrincipal"] if not principal: credFactory = NegotiateCredentialFactory( serviceType="HTTP", 
hostname=config.ServerHostName, ) else: credFactory = NegotiateCredentialFactory( principal=principal, ) except ValueError: log.info("Could not start Kerberos") continue elif scheme == "digest": credFactory = QopDigestCredentialFactory( schemeConfig["Algorithm"], schemeConfig["Qop"], realm, ) elif scheme == "basic": credFactory = BasicCredentialFactory(realm) elif scheme == TLSCredentialsFactory.scheme: credFactory = TLSCredentialsFactory(realm) elif scheme == "wiki": pass else: log.error("Unknown scheme: {scheme}", scheme=scheme) if credFactory: wireEncryptedCredentialFactories.append(credFactory) if schemeConfig.get("AllowedOverWireUnencrypted", False): wireUnencryptedCredentialFactories.append(credFactory) # # Setup Resource hierarchy # log.info("Setting up document root at: {root}", root=config.DocumentRoot) if config.EnableCalDAV: log.info("Setting up calendar collection: {cls}", cls=calendarResourceClass) calendarCollection = calendarResourceClass( directory, "/calendars/", newStore, ) principalCollection.calendarCollection = calendarCollection if config.EnableCardDAV: log.info("Setting up address book collection: {cls}", cls=addressBookResourceClass) addressBookCollection = addressBookResourceClass( directory, "/addressbooks/", newStore, ) principalCollection.addressBookCollection = addressBookCollection if config.DirectoryAddressBook.Enabled and config.EnableSearchAddressBook: log.info( "Setting up directory address book: {cls}", cls=directoryBackedAddressBookResourceClass) directoryBackedAddressBookCollection = directoryBackedAddressBookResourceClass( principalCollections=(principalCollection,), principalDirectory=directory, uri=joinURL("/", config.DirectoryAddressBook.name, "/") ) if _reactor._started: directoryBackedAddressBookCollection.provisionDirectory() else: addSystemEventTrigger("after", "startup", directoryBackedAddressBookCollection.provisionDirectory) else: # remove /directory from previous runs that may have created it directoryPath = 
os.path.join(config.DocumentRoot, config.DirectoryAddressBook.name) try: FilePath(directoryPath).remove() log.info("Deleted: {path}", path=directoryPath) except (OSError, IOError), e: if e.errno != errno.ENOENT: log.error("Could not delete: {path} : {error}", path=directoryPath, error=e) if config.MigrationOnly: unavailable = SimpleUnavailableResource((principalCollection,)) else: unavailable = None log.info("Setting up root resource: {cls}", cls=rootResourceClass) root = rootResourceClass( config.DocumentRoot, principalCollections=(principalCollection,), ) root.putChild("principals", principalCollection if unavailable is None else unavailable) if config.EnableCalDAV: root.putChild("calendars", calendarCollection if unavailable is None else unavailable) if config.EnableCardDAV: root.putChild('addressbooks', addressBookCollection if unavailable is None else unavailable) if config.DirectoryAddressBook.Enabled and config.EnableSearchAddressBook: root.putChild(config.DirectoryAddressBook.name, directoryBackedAddressBookCollection if unavailable is None else unavailable) # /.well-known if config.EnableWellKnown: log.info("Setting up .well-known collection resource") wellKnownResource = SimpleResource( principalCollections=(principalCollection,), isdir=True, defaultACL=SimpleResource.allReadACL ) root.putChild(".well-known", wellKnownResource) for enabled, wellknown_name, redirected_to in ( (config.EnableCalDAV, "caldav", "/principals/",), (config.EnableCardDAV, "carddav", "/principals/",), (config.TimezoneService.Enabled, "timezone", config.TimezoneService.URI,), (config.Scheduling.iSchedule.Enabled, "ischedule", "/ischedule"), ): if enabled: if config.EnableSSL or config.BehindTLSProxy: scheme = "https" port = config.SSLPort else: scheme = "http" port = config.HTTPPort wellKnownResource.putChild( wellknown_name, SimpleRedirectResource( principalCollections=(principalCollection,), isdir=False, defaultACL=SimpleResource.allReadACL, scheme=scheme, port=port, 
path=redirected_to) ) for alias in config.Aliases: url = alias.get("url", None) path = alias.get("path", None) if not url or not path or url[0] != "/": log.error("Invalid alias: URL: {url} Path: {path}", url=url, path=path) continue urlbits = url[1:].split("/") parent = root for urlpiece in urlbits[:-1]: child = parent.getChild(urlpiece) if child is None: child = Resource() parent.putChild(urlpiece, child) parent = child if parent.getChild(urlbits[-1]) is not None: log.error("Invalid alias: URL: {url} Path: {path} already exists", url=url, path=path) continue resource = FileResource(path) parent.putChild(urlbits[-1], resource) log.info("Added alias {url} -> {path}", url=url, path=path) # Need timezone cache before setting up any timezone service log.info("Setting up Timezone Cache") TimezoneCache.create() # Timezone service is optional if config.EnableTimezoneService: log.info( "Setting up time zone service resource: {cls}", cls=timezoneServiceResourceClass) timezoneService = timezoneServiceResourceClass( root, ) root.putChild("timezones", timezoneService) # Standard Timezone service is optional if config.TimezoneService.Enabled: log.info( "Setting up standard time zone service resource: {cls}", cls=timezoneStdServiceResourceClass) timezoneStdService = timezoneStdServiceResourceClass( root, ) root.putChild("stdtimezones", timezoneStdService) # TODO: we only want the master to do this if _reactor._started: _reactor.callLater(0, timezoneStdService.onStartup) else: addSystemEventTrigger("after", "startup", timezoneStdService.onStartup) # # iSchedule/cross-pod service for podding # if config.Servers.Enabled: log.info("Setting up iSchedule podding inbox resource: {cls}", cls=iScheduleResourceClass) ischedule = iScheduleResourceClass( root, newStore, podding=True ) root.putChild(config.Servers.InboxName, ischedule if unavailable is None else unavailable) log.info("Setting up podding conduit resource: {cls}", cls=conduitResourceClass) conduit = conduitResourceClass( root, 
newStore, ) root.putChild(config.Servers.ConduitName, conduit) # # iSchedule service (not used for podding) # if config.Scheduling.iSchedule.Enabled: log.info("Setting up iSchedule inbox resource: {cls}", cls=iScheduleResourceClass) ischedule = iScheduleResourceClass( root, newStore, ) root.putChild("ischedule", ischedule if unavailable is None else unavailable) # Do DomainKey resources DKIMUtils.validConfiguration(config) if config.Scheduling.iSchedule.DKIM.Enabled: log.info("Setting up domainkey resource: {res}", res=DomainKeyResource) domain = config.Scheduling.iSchedule.DKIM.Domain if config.Scheduling.iSchedule.DKIM.Domain else config.ServerHostName dk = DomainKeyResource( domain, config.Scheduling.iSchedule.DKIM.KeySelector, config.Scheduling.iSchedule.DKIM.PublicKeyFile, ) wellKnownResource.putChild("domainkey", dk) # # WebCal # if config.WebCalendarRoot: log.info( "Setting up WebCalendar resource: {res}", res=config.WebCalendarRoot) webCalendar = webCalendarResourceClass( config.WebCalendarRoot, principalCollections=(principalCollection,), ) root.putChild("webcal", webCalendar if unavailable is None else unavailable) # # WebAdmin # if config.EnableWebAdmin: log.info("Setting up WebAdmin resource") webAdmin = webAdminResourceClass( config.WebCalendarRoot, root, directory, newStore, principalCollections=(principalCollection,), ) root.putChild("admin", webAdmin) # # Control API # if config.EnableControlAPI: log.info("Setting up Control API resource") controlAPI = controlResourceClass( root, directory, newStore, principalCollections=(principalCollection,), ) root.putChild("control", controlAPI) # # Apple Push Notification Subscriptions # apnConfig = config.Notifications.Services.APNS if apnConfig.Enabled: log.info( "Setting up APNS resource at /{url}", url=apnConfig["SubscriptionURL"]) apnResource = apnSubscriptionResourceClass(root, newStore) root.putChild(apnConfig["SubscriptionURL"], apnResource) # Server info document if config.EnableServerInfo: log.info( 
"Setting up server-info resource: {cls}", cls=serverInfoResourceClass) serverInfo = serverInfoResourceClass( root, ) root.putChild("server-info", serverInfo) # # Configure ancillary data # # MOVE2WHO log.info("Configuring authentication wrapper") overrides = {} if resources: for path, cls, args, schemes in resources: # putChild doesn't want "/" starting the path root.putChild(path, cls(root, newStore, *args)) # overrides requires "/" prepended path = "/" + path overrides[path] = [] for scheme in schemes: if scheme == "basic": overrides[path].append(BasicCredentialFactory(realm)) elif scheme == "digest": schemeConfig = config.Authentication.Digest overrides[path].append(QopDigestCredentialFactory( schemeConfig["Algorithm"], schemeConfig["Qop"], realm, )) log.info( "Overriding {path} with {cls} ({schemes})", path=path, cls=cls, schemes=schemes) authWrapper = AuthenticationWrapper( root, portal, wireEncryptedCredentialFactories, wireUnencryptedCredentialFactories, (auth.IPrincipal,), overrides=overrides ) logWrapper = DirectoryLogWrapperResource( authWrapper, directory, ) # FIXME: Storing a reference to the root resource on the store # until scheduling no longer needs resource objects newStore.rootResource = root return logWrapper def getDBPool(config): """ Inspect configuration to determine what database connection pool to set up. return: (L{ConnectionPool}, transactionFactory) """ if config.DBType == 'oracle': dialect = ORACLE_DIALECT paramstyle = 'numeric' else: dialect = POSTGRES_DIALECT paramstyle = 'pyformat' pool = None if config.DBAMPFD: txnFactory = transactionFactoryFromFD( int(config.DBAMPFD), DatabaseType(dialect, paramstyle, config.DBFeatures) ) elif not config.UseDatabase: txnFactory = None elif not config.SharedConnectionPool: if config.DBType == '': # get a PostgresService to tell us what the local connection # info is, but *don't* start it (that would start one postgres # master per slave, resulting in all kinds of mayhem...) 
connectionFactory = pgServiceFromConfig(config, None).produceConnection else: connectionFactory = DBAPIConnector.connectorFor(config.DBType, **config.DatabaseConnection).connect pool = ConnectionPool(connectionFactory, dbtype=DatabaseType(dialect, paramstyle, config.DBFeatures), maxConnections=config.MaxDBConnectionsPerPool) txnFactory = pool.connection else: raise UsageError( "trying to use DB in slave, but no connection info from parent" ) return (pool, txnFactory) class FakeRequest(object): def __init__(self, rootResource, method, path, uri='/', transaction=None): self.rootResource = rootResource self.method = method self.path = path self.uri = uri self._resourcesByURL = {} self._urlsByResource = {} self.headers = Headers() if transaction is not None: self._newStoreTransaction = transaction @inlineCallbacks def _getChild(self, resource, segments): if not segments: returnValue(resource) child, remaining = (yield resource.locateChild(self, segments)) returnValue((yield self._getChild(child, remaining))) @inlineCallbacks def locateResource(self, url): url = url.strip("/") segments = url.split("/") resource = (yield self._getChild(self.rootResource, segments)) if resource: self._rememberResource(resource, url) returnValue(resource) @inlineCallbacks def locateChildResource(self, parent, childName): if parent is None or childName is None: returnValue(None) parentURL = self.urlForResource(parent) if not parentURL.endswith("/"): parentURL += "/" url = parentURL + quote(childName) segment = childName resource = (yield self._getChild(parent, [segment])) if resource: self._rememberResource(resource, url) returnValue(resource) def _rememberResource(self, resource, url): self._resourcesByURL[url] = resource self._urlsByResource[resource] = url return resource def _forgetResource(self, resource, url): if url in self._resourcesByURL: del self._resourcesByURL[url] if resource in self._urlsByResource: del self._urlsByResource[resource] def urlForResource(self, resource): url = 
self._urlsByResource.get(resource, None) if url is None: class NoURLForResourceError(RuntimeError): pass raise NoURLForResourceError(resource) return url def addResponseFilter(self, *args, **kwds): pass def memoryForPID(pid, residentOnly=True): """ Return the amount of memory in use for the given process. If residentOnly is True, then RSS is returned; if False, then virtual memory is returned. @param pid: process id @type pid: C{int} @param residentOnly: Whether only resident memory should be included @type residentOnly: C{boolean} @return: Memory used by process in bytes @rtype: C{int} """ memoryInfo = psutil.Process(pid).memory_info() return memoryInfo.rss if residentOnly else memoryInfo.vms class MemoryLimitService(Service, object): """ A service which when paired with a DelayedStartupProcessMonitor will periodically examine the memory usage of the monitored processes and stop any which exceed a configured limit. Memcached processes are ignored. """ def __init__(self, processMonitor, intervalSeconds, limitBytes, residentOnly, reactor=None): """ @param processMonitor: the DelayedStartupProcessMonitor @param intervalSeconds: how often to check @type intervalSeconds: C{int} @param limitBytes: any monitored process over this limit is stopped @type limitBytes: C{int} @param residentOnly: whether only resident memory should be included @type residentOnly: C{boolean} @param reactor: for testing """ self._processMonitor = processMonitor self._seconds = intervalSeconds self._bytes = limitBytes self._residentOnly = residentOnly self._delayedCall = None if reactor is None: from twisted.internet import reactor self._reactor = reactor # Unit tests can swap out _memoryForPID self._memoryForPID = memoryForPID def startService(self): """ Start scheduling the memory checks """ super(MemoryLimitService, self).startService() self._delayedCall = self._reactor.callLater(self._seconds, self.checkMemory) def stopService(self): """ Stop checking memory """ super(MemoryLimitService, 
self).stopService() if self._delayedCall is not None and self._delayedCall.active(): self._delayedCall.cancel() self._delayedCall = None def checkMemory(self): """ Stop any processes monitored by our paired processMonitor whose resident memory exceeds our configured limitBytes. Reschedule intervalSeconds in the future. """ try: for name in self._processMonitor.processes: if name.startswith("memcached"): continue proto = self._processMonitor.protocols.get(name, None) if proto is not None: proc = proto.transport pid = proc.pid try: memory = self._memoryForPID(pid, self._residentOnly) except Exception, e: log.error( "Unable to determine memory usage of PID: {pid} ({err})", pid=pid, err=e) continue if memory > self._bytes: log.warn( "Killing large process: {name} PID:{pid} {memtype}:{mem}", name=name, pid=pid, memtype=("Resident" if self._residentOnly else "Virtual"), mem=memory) self._processMonitor.stopProcess(name) finally: self._delayedCall = self._reactor.callLater(self._seconds, self.checkMemory) def checkDirectories(config): """ Make sure that various key directories exist (and create if needed) """ # # Verify that server root actually exists # checkDirectory( config.ServerRoot, "Server root", # Require write access because one might not allow editing on / access=os.W_OK, wait=True # Wait in a loop until ServerRoot exists ) # # Verify that other root paths are OK # if config.DataRoot.startswith(config.ServerRoot + os.sep): checkDirectory( config.DataRoot, "Data root", access=os.W_OK, create=(0750, config.UserName, config.GroupName), ) if config.DocumentRoot.startswith(config.DataRoot + os.sep): checkDirectory( config.DocumentRoot, "Document root", # Don't require write access because one might not allow editing on / access=os.R_OK, create=(0750, config.UserName, config.GroupName), ) if config.ConfigRoot.startswith(config.ServerRoot + os.sep): checkDirectory( config.ConfigRoot, "Config root", access=os.W_OK, create=(0750, config.UserName, config.GroupName), ) if 
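The decision logic inside `checkMemory` above — skip memcached workers, flag any other process whose memory exceeds the limit — can be reduced to a pure function that is easy to unit-test. A sketch; the dict-based interface is an assumption for illustration, not the `DelayedStartupProcessMonitor` API:

```python
def processes_to_stop(memory_by_name, limit_bytes, ignore_prefix="memcached"):
    # Mirrors the decision made by MemoryLimitService.checkMemory:
    # memcached workers are exempt, and any other process whose memory
    # exceeds the limit is flagged for stopping.
    flagged = []
    for name, memory in sorted(memory_by_name.items()):
        if name.startswith(ignore_prefix):
            continue
        if memory > limit_bytes:
            flagged.append(name)
    return flagged
```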
config.SocketFiles.Enabled: checkDirectory( config.SocketRoot, "Socket file root", access=os.W_OK, create=( config.SocketFiles.Permissions, config.SocketFiles.Owner, config.SocketFiles.Group ) ) # Always create these: checkDirectory( config.LogRoot, "Log root", access=os.W_OK, create=(0750, config.UserName, config.GroupName), ) checkDirectory( config.RunRoot, "Run root", access=os.W_OK, create=(0770, config.UserName, config.GroupName), ) class Stepper(object): """ Manages the sequential, deferred execution of "steps" which are objects implementing these methods: - stepWithResult(result) @param result: the result returned from the previous step @returns: Deferred - stepWithFailure(failure) @param failure: a Failure encapsulating the exception from the previous step @returns: Failure to continue down the errback chain, or a Deferred returning a non-Failure to switch back to the callback chain "Step" objects are added in order by calling addStep(), and when start() is called, the Stepper will call the stepWithResult() of the first step. If stepWithResult() doesn't raise an Exception, the Stepper will call the next step's stepWithResult(). If a stepWithResult() raises an Exception, the Stepper will call the next step's stepWithFailure() -- if it's implemented -- passing it a Failure object. If the stepWithFailure() decides it can handle the Failure and proceed, it can return a non-Failure which is an indicator to the Stepper to call the next step's stepWithResult(). TODO: Create an IStep interface (?) 
""" def __init__(self): self.steps = [] self.failure = None self.result = None self.running = False def addStep(self, step): """ Adds a step object to the ordered list of steps @param step: the object to add @type step: an object implementing stepWithResult() @return: the Stepper object itself so addStep() calls can be chained """ if self.running: raise RuntimeError("Can't add step after start") self.steps.append(step) return self def defaultStepWithResult(self, result): return succeed(result) def defaultStepWithFailure(self, failure, step): if failure.type not in ( NotAllowedToUpgrade, ConfigurationError, OpenSSL.SSL.Error ): log.failure("Step failure: {name}", name=step.__class__.__name__, failure=failure) return failure def start(self, result=None): """ Begin executing the added steps in sequence. If a step object does not implement a stepWithResult/stepWithFailure method, a default implementation will be used. @param result: an optional value to pass to the first step @return: the Deferred that will fire when steps are done """ self.running = True self.deferred = Deferred() for step in self.steps: # See if we need to use a default implementation of the step methods: if hasattr(step, "stepWithResult"): callBack = step.stepWithResult # callBack = self.protectStep(step.stepWithResult) else: callBack = self.defaultStepWithResult if hasattr(step, "stepWithFailure"): errBack = step.stepWithFailure errbackArgs = () else: errBack = self.defaultStepWithFailure errbackArgs = (step,) # Add callbacks to the Deferred self.deferred.addCallbacks(callBack, errBack, errbackArgs=errbackArgs) # Get things going self.deferred.callback(result) return self.deferred def requestShutdown(programPath, reason): """ Log the shutdown reason and call the shutdown-requesting program. In the case the service is spawned by launchd (or equivalent), if our service decides it needs to shut itself down, because of a misconfiguration, for example, we can't just exit. 
We may need to go through the system machinery to unload our job, manage reverse proxies, update admin UI, etc. Therefore you can configure the ServiceDisablingProgram plist key to point to a program to run which will stop our service. @param programPath: the full path to a program to call (with no args) @type programPath: C{str} @param reason: a shutdown reason to log @type reason: C{str} """ log.error("Shutting down Calendar and Contacts server") log.error(reason) Popen( args=[programPath], stdout=PIPE, stderr=PIPE, ).communicate() def preFlightChecks(config): """ Perform checks prior to spawning any processes. Returns True if the checks are ok, False if they fail and we have a ServiceDisablingProgram configured. Otherwise exits. """ success, reason = verifyConfig(config) if success: success, reason = verifyServerRoot(config) if success: success, reason = verifyTLSCertificate(config) if success: success, reason = verifyAPNSCertificate(config) if not success: if config.ServiceDisablingProgram: # If pre-flight checks fail, we don't want launchd to # repeatedly launch us, we want our job to get unloaded. # If the config.ServiceDisablingProgram is assigned and exists # we schedule it to run after startService finishes. # Its job is to carry out the platform-specific tasks of disabling # the service.
if os.path.exists(config.ServiceDisablingProgram): addSystemEventTrigger( "after", "startup", requestShutdown, config.ServiceDisablingProgram, reason ) return False else: print(reason) sys.exit(1) return True def verifyConfig(config): """ At least one of EnableCalDAV or EnableCardDAV must be True """ if config.EnableCalDAV or config.EnableCardDAV: return True, "A protocol is enabled" return False, "Neither CalDAV nor CardDAV are enabled" def verifyServerRoot(config): """ Ensure server root is not on a phantom volume """ result = diagnose.detectPhantomVolume(config.ServerRoot) if result == diagnose.EXIT_CODE_SERVER_ROOT_MISSING: return False, "ServerRoot is missing" if result == diagnose.EXIT_CODE_PHANTOM_DATA_VOLUME: return False, "ServerRoot is supposed to be on a non-boot-volume but it's not" return True, "ServerRoot is ok" def verifyTLSCertificate(config): """ If a TLS certificate is configured, make sure it exists, is non empty, and that it's valid. """ if hasattr(OpenSSL, "__SecureTransport__"): if config.SSLKeychainIdentity: # Fall through to see if we can load the identity from the keychain certificate_title = "Keychain: {}".format(config.SSLKeychainIdentity) error = OpenSSL.crypto.check_keychain_identity(config.SSLKeychainIdentity) if error: message = ( "The configured TLS Keychain Identity ({cert}) cannot be used: {reason}".format( cert=certificate_title, reason=error ) ) return False, message else: return True, "TLS disabled" else: if config.SSLCertificate: if not os.path.exists(config.SSLCertificate): message = ( "The configured TLS certificate ({cert}) is missing".format( cert=config.SSLCertificate ) ) AlertPoster.postAlert("MissingCertificateAlert", 0, ["path", config.SSLCertificate]) return False, message length = os.stat(config.SSLCertificate).st_size if length == 0: message = ( "The configured TLS certificate ({cert}) is empty".format( cert=config.SSLCertificate ) ) return False, message certificate_title = config.SSLCertificate else: return True, 
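`preFlightChecks` chains several `verify*` functions, each returning a `(success, reason)` tuple, and stops at the first failure so its reason can be reported. The pattern generalizes to a small helper (hypothetical, for illustration — the server chains `verifyConfig`/`verifyServerRoot`/etc. explicitly):

```python
def run_preflight(checks):
    # The preFlightChecks pattern: each check is a callable returning a
    # (success, reason) tuple; evaluation stops at the first failing check
    # so its reason can be logged or passed to a shutdown handler.
    for check in checks:
        success, reason = check()
        if not success:
            return False, reason
    return True, "all checks passed"
```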
"TLS disabled" try: ChainingOpenSSLContextFactory( config.SSLPrivateKey, config.SSLCertificate, certificateChainFile=config.SSLAuthorityChain, passwdCallback=getSSLPassphrase, keychainIdentity=config.SSLKeychainIdentity, sslmethod=getattr(OpenSSL.SSL, config.SSLMethod), ciphers=config.SSLCiphers.strip() ) except Exception as e: if hasattr(OpenSSL, "__SecureTransport__"): message = ( "The configured TLS Keychain Identity ({cert}) cannot be used: {reason}".format( cert=certificate_title, reason=str(e) ) ) else: message = ( "The configured TLS certificate ({cert}) cannot be used: {reason}".format( cert=certificate_title, reason=str(e) ) ) return False, message return True, "TLS enabled" def verifyAPNSCertificate(config): """ If APNS certificates are configured, make sure they're valid. """ if config.Notifications.Services.APNS.Enabled: for protocol, accountName in ( ("CalDAV", "apns:com.apple.calendar"), ("CardDAV", "apns:com.apple.contact"), ): protoConfig = config.Notifications.Services.APNS[protocol] if not protoConfig.Enabled: continue if not hasattr(OpenSSL, "__SecureTransport__"): if not checkCertExpiration(protoConfig.CertificatePath): return False, "APNS certificate expired {}".format(protoConfig.CertificatePath) try: getAPNTopicFromConfig(protocol, accountName, protoConfig) except ValueError as e: AlertPoster.postAlert("PushNotificationCertificateAlert", 0, []) return False, str(e) # Let OpenSSL try to use the cert try: if protoConfig.Passphrase: passwdCallback = lambda *ignored: protoConfig.Passphrase else: passwdCallback = None ChainingOpenSSLContextFactory( protoConfig.PrivateKeyPath, protoConfig.CertificatePath, certificateChainFile=protoConfig.AuthorityChainPath, passwdCallback=passwdCallback, keychainIdentity=protoConfig.KeychainIdentity, sslmethod=getattr(OpenSSL.SSL, "TLSv1_METHOD"), ) except Exception as e: if hasattr(OpenSSL, "__SecureTransport__"): message = ( "The {proto} APNS Keychain Identity ({cert}) cannot be used: {reason}".format( 
proto=protocol, cert=protoConfig.KeychainIdentity, reason=str(e) ) ) else: message = ( "The {proto} APNS certificate ({cert}) cannot be used: {reason}".format( proto=protocol, cert=protoConfig.CertificatePath, reason=str(e) ) ) AlertPoster.postAlert("PushNotificationCertificateAlert", 0, []) return False, message return True, "APNS enabled" else: return True, "APNS disabled" def checkCertExpiration(certPath): """ See if the given certificate is expired. @param certPath: the path of the certificate @type certPath: C{str} @return: True if the cert has not expired (or we can't check because we can't find the openssl command line utility); False otherwise """ try: opensslTool = which("openssl")[0] args = [opensslTool, "x509", "-checkend", "0", "-noout", "-in", certPath] child = Popen(args=args, stdout=PIPE, stderr=PIPE) child.communicate() exitStatus = child.returncode return exitStatus == 0 except IndexError: # We can't check return True def getSSLPassphrase(*ignored): if not config.SSLPrivateKey: return None if config.SSLCertAdmin and os.path.isfile(config.SSLCertAdmin): child = Popen( args=[ "sudo", config.SSLCertAdmin, "--get-private-key-passphrase", config.SSLPrivateKey, ], stdout=PIPE, stderr=PIPE, ) output, error = child.communicate() if child.returncode: log.error( "Could not get passphrase for {key}: {error}", key=config.SSLPrivateKey, error=error ) else: log.info( "Obtained passphrase for {key}", key=config.SSLPrivateKey ) return output.strip() if ( config.SSLPassPhraseDialog and os.path.isfile(config.SSLPassPhraseDialog) ): with open(config.SSLPrivateKey) as sslPrivKey: keyType = None for line in sslPrivKey.readlines(): if "-----BEGIN RSA PRIVATE KEY-----" in line: keyType = "RSA" break elif "-----BEGIN DSA PRIVATE KEY-----" in line: keyType = "DSA" break if keyType is None: log.error( "Could not get private key type for {key}", key=config.SSLPrivateKey ) else: child = Popen( args=[ config.SSLPassPhraseDialog, "{}:{}".format(config.ServerHostName, 
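`getSSLPassphrase` above determines the key type by scanning the PEM private-key file for its `-----BEGIN ...-----` header line. Factored out as a standalone helper for clarity (a sketch; the server performs this scan inline):

```python
def detect_pem_key_type(pem_text):
    # The same header scan getSSLPassphrase performs on the private key
    # file: the "-----BEGIN ... PRIVATE KEY-----" line names the key type.
    # Returns "RSA", "DSA", or None (unknown), mirroring the inline loop.
    for line in pem_text.splitlines():
        if "-----BEGIN RSA PRIVATE KEY-----" in line:
            return "RSA"
        if "-----BEGIN DSA PRIVATE KEY-----" in line:
            return "DSA"
    return None
```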
config.SSLPort), keyType, ], stdout=PIPE, stderr=PIPE, ) output, error = child.communicate() if child.returncode: log.error( "Could not get passphrase for {key}: {error}", key=config.SSLPrivateKey, error=error ) else: return output.strip() return None # # Server Alert Posting # class AlertPoster(object): """ Encapsulates the posting of server alerts via a singleton which can be configured differently depending on whether you want to directly spawn the external alert program from this process, or send an AMP request to another process to spawn. Workers should call setupForWorker( ), and then calls to postAlert( ) will send an AMP message to the master. The master should call setupForMaster( ), and then calls to postAlert( ) within the master will spawn the external alert program. """ # Control socket message-routing constants ALERT_ROUTE = "alert" _alertPoster = None @classmethod def setupForMaster(cls, controlSocket): cls._alertPoster = cls() AMPAlertReceiver(controlSocket) @classmethod def setupForWorker(cls, controlSocket): cls._alertPoster = cls(controlSocket) @classmethod def _getAlertPoster(cls): if cls._alertPoster is None: cls._alertPoster = cls() return cls._alertPoster def __init__(self, controlSocket=None): if controlSocket is None: self.sender = None else: self.sender = AMPAlertSender(controlSocket) @classmethod def postAlert(cls, alertType, ignoreWithinSeconds, args): poster = cls._getAlertPoster() if not config.AlertPostingProgram: return if not os.path.exists(config.AlertPostingProgram): return if ignoreWithinSeconds: seconds = cls.secondsSinceLastPost(alertType) if seconds < ignoreWithinSeconds: return if poster.sender is None: # Just do it cls.recordTimeStamp(alertType) try: commandLine = [config.AlertPostingProgram, alertType] commandLine.extend(args) Popen( commandLine, stdout=PIPE, stderr=PIPE, ).communicate() except Exception, e: log.error( "Could not post alert: {alertType} {args} ({error})", alertType=alertType, args=args, error=e ) else: # 
Send request to master over AMP poster.sender.sendAlert(alertType, args) @classmethod def secondsSinceLastPost(cls, alertType, timestampsDirectory=None, now=None): if timestampsDirectory is None: timestampsDirectory = config.DataRoot if now is None: now = int(time.time()) dirFP = FilePath(timestampsDirectory) childFP = dirFP.child(".{}.timestamp".format(alertType)) if not childFP.exists(): timestamp = 0 else: with childFP.open() as child: try: line = child.readline().strip() timestamp = int(line) except: timestamp = 0 return now - timestamp @classmethod def recordTimeStamp(cls, alertType, timestampsDirectory=None, now=None): if timestampsDirectory is None: timestampsDirectory = config.DataRoot if now is None: now = int(time.time()) dirFP = FilePath(timestampsDirectory) childFP = dirFP.child(".{}.timestamp".format(alertType)) childFP.setContent(str(now)) class AMPAlertSendingFactory(Factory): def __init__(self, sender): self.sender = sender def buildProtocol(self, addr): protocol = amp.AMP() self.sender.protocol = protocol return protocol class AMPAlertSender(object): """ Runs in the workers, sends alerts to the master via AMP """ def __init__(self, controlSocket=None, protocol=None): self.protocol = protocol if controlSocket is not None: controlSocket.addFactory(AlertPoster.ALERT_ROUTE, AMPAlertSendingFactory(self)) def sendAlert(self, alertType, args): return self.protocol.callRemote( PostAlert, alertType=alertType, args=args ) class AMPAlertReceiverFactory(Factory): def buildProtocol(self, addr): return AMPAlertProtocol() class AMPAlertReceiver(object): """ Runs in the master, receives alerts from workers, executes the alert posting program """ def __init__(self, controlSocket): if controlSocket is not None: # Set up the listener which gets alerts from the slaves controlSocket.addFactory( AlertPoster.ALERT_ROUTE, AMPAlertReceiverFactory() ) class PostAlert(amp.Command): arguments = [ ('alertType', amp.String()), ('args', amp.ListOf(amp.String())), ] response = [ 
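`secondsSinceLastPost` and `recordTimeStamp` implement a simple file-based throttle: a hidden `.<alertType>.timestamp` file holds the epoch time of the last post, and a missing or unparsable file is treated as timestamp 0 (i.e. "long ago"). A stdlib sketch of the same scheme (function names are illustrative):

```python
import os
import time


def seconds_since_last_post(alert_type, directory, now=None):
    # File-based throttle read, as in AlertPoster.secondsSinceLastPost:
    # a hidden ".<alertType>.timestamp" file holds the epoch time of the
    # last post; a missing or unparsable file counts as timestamp 0.
    if now is None:
        now = int(time.time())
    path = os.path.join(directory, ".{}.timestamp".format(alert_type))
    try:
        with open(path) as f:
            timestamp = int(f.readline().strip())
    except (IOError, OSError, ValueError):
        timestamp = 0
    return now - timestamp


def record_timestamp(alert_type, directory, now=None):
    # Counterpart of AlertPoster.recordTimeStamp: persist the current time.
    if now is None:
        now = int(time.time())
    path = os.path.join(directory, ".{}.timestamp".format(alert_type))
    with open(path, "w") as f:
        f.write(str(now))
```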
('status', amp.String()), ] class AMPAlertProtocol(amp.AMP): """ Defines the AMP protocol for sending alerts from worker to master """ @PostAlert.responder def postAlert(self, alertType, args): """ The "PostAlert" handler in the master """ AlertPoster.postAlert(alertType, 0, args) return { "status": "OK" } def serverRootLocation(): """ Return the ServerRoot value from the OS X preferences plist. If plist not present, return empty string. @rtype: C{unicode} """ defaultPlistPath = "/Library/Server/Preferences/Calendar.plist" serverRoot = u"" if os.path.exists(defaultPlistPath): serverRoot = readPlist(defaultPlistPath).get("ServerRoot", serverRoot) if isinstance(serverRoot, str): serverRoot = serverRoot.decode("utf-8") return serverRoot calendarserver-9.1+dfsg/calendarserver/test/000077500000000000000000000000001315003562600212625ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/test/__init__.py000066400000000000000000000012141315003562600233710ustar00rootroot00000000000000## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Tests for the calendarserver module. """ calendarserver-9.1+dfsg/calendarserver/test/test_accesslog.py000066400000000000000000000130161315003562600246370ustar00rootroot00000000000000## # Copyright (c) 2015-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from twisted.trial.unittest import TestCase from calendarserver.accesslog import SystemMonitor, \ RotatingFileAccessLoggingObserver from twistedcaldav.stdconfig import config as stdconfig from twistedcaldav.config import config import time import collections hasattr(stdconfig, "Servers") # Quell pyflakes class AccessLog(TestCase): """ Tests for L{calendarserver.accesslog}. """ def test_systemMonitor(self): """ L{SystemMonitor} generates the correct data. """ monitor = SystemMonitor() self.assertNotEqual(monitor.items["cpu count"], 0) self.assertEqual(monitor.items["cpu use"], 0.0) monitor.update() self.assertNotEqual(monitor.items["cpu count"], 0) monitor.stop() self.assertNotEqual(monitor.items["cpu count"], 0) def test_disableSystemMonitor(self): """ L{SystemMonitor} is not created when stats socket not in use. 
""" logpath = self.mktemp() stats = { "type": "access-log", "log-format": "", "method": "GET", "uri": "/index.html", "statusCode": 200, } # Disabled self.patch(config.Stats, "EnableUnixStatsSocket", False) self.patch(config.Stats, "EnableTCPStatsSocket", False) logger = RotatingFileAccessLoggingObserver(logpath) self.assertTrue(logger.systemStats is None) logger.start() stats["log-format"] = "test1" logger.logStats(stats) self.assertTrue(logger.systemStats is None) logger.getStats() self.assertTrue(logger.systemStats is None) logger.stop() with open(logpath) as f: self.assertIn("test1", f.read()) # Enabled self.patch(config.Stats, "EnableUnixStatsSocket", True) self.patch(config.Stats, "EnableTCPStatsSocket", False) logger = RotatingFileAccessLoggingObserver(logpath) self.assertTrue(logger.systemStats is None) logger.start() stats["log-format"] = "test2" logger.logStats(stats) self.assertTrue(logger.systemStats is not None) logger.stop() with open(logpath) as f: self.assertIn("test2", f.read()) # Enabled self.patch(config.Stats, "EnableUnixStatsSocket", False) self.patch(config.Stats, "EnableTCPStatsSocket", True) logger = RotatingFileAccessLoggingObserver(logpath) self.assertTrue(logger.systemStats is None) logger.start() stats["log-format"] = "test3" logger.logStats(stats) self.assertTrue(logger.systemStats is not None) logger.stop() with open(logpath) as f: self.assertIn("test3", f.read()) # Enabled self.patch(config.Stats, "EnableUnixStatsSocket", True) self.patch(config.Stats, "EnableTCPStatsSocket", True) logger = RotatingFileAccessLoggingObserver(logpath) self.assertTrue(logger.systemStats is None) logger.start() stats["log-format"] = "test4" logger.logStats(stats) self.assertTrue(logger.systemStats is not None) logger.stop() with open(logpath) as f: self.assertIn("test4", f.read()) SystemMonitor.CPUStats = collections.namedtuple("CPUStats", ("total", "idle",)) def test_unicodeLog(self): """ Make sure L{RotatingFileAccessLoggingObserver} handles non-ascii 
data properly. """ logpath = self.mktemp() observer = RotatingFileAccessLoggingObserver(logpath) observer.start() observer.accessLog(u"Can\u2019t log this") observer.stop() with open(logpath) as f: self.assertIn("log this", f.read()) def test_truncateStats(self): """ Make sure L{RotatingFileAccessLoggingObserver.ensureSequentialStats} properly truncates stats data. """ logpath = self.mktemp() observer = RotatingFileAccessLoggingObserver(logpath) observer.systemStats = SystemMonitor() observer.start() # Fill stats with some old entries t = int(time.time() / 60.0) * 60 t -= 100 * 60 for i in range(10): observer.statsByMinute.append((t + i * 60, observer.initStats(),)) self.assertEqual(len(observer.statsByMinute), 10) observer.ensureSequentialStats() self.assertEqual(len(observer.statsByMinute), 65) observer.stop() def test_smallerStats(self): """ Make sure "uid" and "user-agent" are not in the L{RotatingFileAccessLoggingObserver} stats data. """ logpath = self.mktemp() observer = RotatingFileAccessLoggingObserver(logpath) observer.systemStats = SystemMonitor() observer.start() stats = observer.initStats() observer.stop() self.assertTrue("uid" not in stats) self.assertTrue("user-agent" not in stats) calendarserver-9.1+dfsg/calendarserver/test/test_logAnalysis.py000066400000000000000000000105721315003562600251650ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
## from twisted.trial.unittest import TestCase from calendarserver.logAnalysis import getAdjustedMethodName, \ getAdjustedClientName class LogAnalysis(TestCase): def test_getAdjustedMethodName(self): """ L{getAdjustedMethodName} returns the appropriate method. """ data = ( ("PROPFIND", "/calendars/users/user01/", {}, "PROPFIND Calendar Home",), ("PROPFIND", "/calendars/users/user01/", {"cached": "1"}, "PROPFIND cached Calendar Home",), ("PROPFIND", "/calendars/users/user01/ABC/", {}, "PROPFIND Calendar",), ("PROPFIND", "/calendars/users/user01/inbox/", {}, "PROPFIND Inbox",), ("PROPFIND", "/addressbooks/users/user01/", {}, "PROPFIND Adbk Home",), ("PROPFIND", "/addressbooks/users/user01/", {"cached": "1"}, "PROPFIND cached Adbk Home",), ("PROPFIND", "/addressbooks/users/user01/ABC/", {}, "PROPFIND Adbk",), ("PROPFIND", "/addressbooks/users/user01/inbox/", {}, "PROPFIND Adbk",), ("PROPFIND", "/principals/users/user01/", {}, "PROPFIND Principals",), ("PROPFIND", "/principals/users/user01/", {"cached": "1"}, "PROPFIND cached Principals",), ("PROPFIND", "/.well-known/caldav", {}, "PROPFIND",), ("REPORT(CalDAV:sync-collection)", "/calendars/users/user01/ABC/", {}, "REPORT cal-sync",), ("REPORT(CalDAV:calendar-query)", "/calendars/users/user01/ABC/", {}, "REPORT cal-query",), ("POST", "/calendars/users/user01/", {}, "POST Calendar Home",), ("POST", "/calendars/users/user01/outbox/", {"recipients": "1"}, "POST Freebusy",), ("POST", "/calendars/users/user01/outbox/", {}, "POST Outbox",), ("POST", "/apns", {}, "POST apns",), ("PUT", "/calendars/users/user01/calendar/1.ics", {}, "PUT ics",), ("PUT", "/calendars/users/user01/calendar/1.ics", {"itip.requests": "1"}, "PUT Organizer",), ("PUT", "/calendars/users/user01/calendar/1.ics", {"itip.reply": "1"}, "PUT Attendee",), ("GET", "/calendars/users/user01/", {}, "GET Calendar Home",), ("GET", "/calendars/users/user01/calendar/", {}, "GET Calendar",), ("GET", "/calendars/users/user01/calendar/1.ics", {}, "GET ics",), ("DELETE", 
"/calendars/users/user01/", {}, "DELETE Calendar Home",), ("DELETE", "/calendars/users/user01/calendar/", {}, "DELETE Calendar",), ("DELETE", "/calendars/users/user01/calendar/1.ics", {}, "DELETE ics",), ("DELETE", "/calendars/users/user01/inbox/1.ics", {}, "DELETE inbox ics",), ("ACL", "/calendars/users/user01/", {}, "ACL",), ) for method, uri, extras, result in data: extras["method"] = method extras["uri"] = uri self.assertEqual(getAdjustedMethodName(extras), result, "Failed getAdjustedMethodName: %s" % (result,)) def test_getAdjustedClientName(self): """ L{getAdjustedClientName} returns the appropriate method. """ data = ( ("Mac OS X/10.8.2 (12C60) CalendarAgent/55", "Mac OS X/10.8.2 CalendarAgent",), ("CalendarStore/5.0.3 (1204.2); iCal/5.0.3 (1605.4); Mac OS X/10.7.5 (11G63b)", "Mac OS X/10.7.5 iCal",), ("DAVKit/5.0 (767); iCalendar/5.0 (79); iPhone/4.2.1 8C148", "iPhone/4.2.1",), ("iOS/6.0 (10A405) Preferences/1.0", "iOS/6.0 Preferences",), ("iOS/6.0.1 (10A523) dataaccessd/1.0", "iOS/6.0.1 dataaccessd",), ("InterMapper/5.4.3", "InterMapper",), ("Mac OS X/10.8.2 (12C60) AddressBook/1167", "Mac OS X/10.8.2 AddressBook",), ) for ua, result in data: self.assertEqual(getAdjustedClientName({"userAgent": ua}), result, "Failed getAdjustedClientName: %s" % (ua,)) calendarserver-9.1+dfsg/calendarserver/tools/000077500000000000000000000000001315003562600214435ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/tools/__init__.py000066400000000000000000000011751315003562600235600ustar00rootroot00000000000000## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ CalendarServer tools. """ calendarserver-9.1+dfsg/calendarserver/tools/agent.py000077500000000000000000000250111315003562600231150ustar00rootroot00000000000000#!/usr/bin/env python # -*- test-case-name: calendarserver.tools.test.test_agent -*- ## # Copyright (c) 2013-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ A service spawned on-demand by launchd, meant to handle configuration requests from Server.app. When a request comes in on the socket specified in the launchd agent.plist, launchd will run "caldavd -t Agent" which ends up creating this service. Requests are made using HTTP POSTS to /gateway, and are authenticated by OpenDirectory. 
""" from __future__ import print_function __all__ = [ "makeAgentService", ] import cStringIO from plistlib import readPlistFromString, writePlistToString import socket from twext.python.launchd import launchActivateSocket from twext.python.log import Logger from twext.who.checker import HTTPDigestCredentialChecker from twext.who.opendirectory import ( DirectoryService as OpenDirectoryDirectoryService, NoQOPDigestCredentialFactory ) from twisted.application.internet import StreamServerEndpointService from twisted.cred.portal import IRealm, Portal from twisted.internet.defer import inlineCallbacks, returnValue from twisted.internet.endpoints import AdoptedStreamServerEndpoint from twisted.internet.protocol import Factory from twisted.protocols import amp from twisted.web.guard import HTTPAuthSessionWrapper from twisted.web.resource import IResource, Resource, ForbiddenResource from twisted.web.server import Site, NOT_DONE_YET from zope.interface import implements log = Logger() class AgentRealm(object): """ Only allow a specified list of avatar IDs to access the site """ implements(IRealm) def __init__(self, root, allowedAvatarIds): """ @param root: The root resource of the site @param allowedAvatarIds: The list of IDs to allow access to """ self.root = root self.allowedAvatarIds = allowedAvatarIds def requestAvatar(self, avatarId, mind, *interfaces): if IResource in interfaces: if avatarId.shortNames[0] in self.allowedAvatarIds: return (IResource, self.root, lambda: None) else: return (IResource, ForbiddenResource(), lambda: None) raise NotImplementedError() class AgentGatewayResource(Resource): """ The gateway resource which forwards incoming requests through gateway.Runner. 
""" isLeaf = True def __init__(self, store, directory, inactivityDetector): """ @param store: an already opened store @param directory: a directory service @param inactivityDetector: the InactivityDetector to tell when requests come in """ Resource.__init__(self) self.store = store self.directory = directory self.inactivityDetector = inactivityDetector def render_POST(self, request): """ Take the body of the POST request and feed it to gateway.Runner(); return the result as the response body. """ self.inactivityDetector.activity() def onSuccess(result, output): txt = output.getvalue() output.close() request.write(txt) request.finish() def onError(failure): message = failure.getErrorMessage() tbStringIO = cStringIO.StringIO() failure.printTraceback(file=tbStringIO) tbString = tbStringIO.getvalue() tbStringIO.close() error = { "Error": message, "Traceback": tbString, } log.error("command failed {error}", error=failure) request.write(writePlistToString(error)) request.finish() from calendarserver.tools.gateway import Runner body = request.content.read() command = readPlistFromString(body) output = cStringIO.StringIO() runner = Runner(self.store, [command], output=output) d = runner.run() d.addCallback(onSuccess, output) d.addErrback(onError) return NOT_DONE_YET def makeAgentService(store): """ Returns a service which will process GatewayAMPCommands, using a socket file descripter acquired by launchd @param store: an already opened store @returns: service """ from twisted.internet import reactor sockets = launchActivateSocket("AgentSocket") fd = sockets[0] family = socket.AF_INET endpoint = AdoptedStreamServerEndpoint(reactor, fd, family) directory = store.directoryService() def becameInactive(): log.warn("Agent inactive; shutting down") reactor.stop() from twistedcaldav.config import config inactivityDetector = InactivityDetector( reactor, config.AgentInactivityTimeoutSeconds, becameInactive ) root = Resource() root.putChild( "gateway", AgentGatewayResource( store, 
directory, inactivityDetector ) ) # We need this service to be able to return com.apple.calendarserver, # so tell it not to suppress system accounts. directory = OpenDirectoryDirectoryService( "/Local/Default", suppressSystemRecords=False ) portal = Portal( AgentRealm(root, [u"com.apple.calendarserver"]), [HTTPDigestCredentialChecker(directory)] ) credentialFactory = NoQOPDigestCredentialFactory( "md5", "/Local/Default" ) wrapper = HTTPAuthSessionWrapper(portal, [credentialFactory]) site = Site(wrapper) return StreamServerEndpointService(endpoint, site) class InactivityDetector(object): """ If no 'activity' takes place for a specified amount of time, a method will get called. Activity causes the inactivity time threshold to be reset. """ def __init__(self, reactor, timeoutSeconds, becameInactive): """ @param reactor: the reactor @timeoutSeconds: the number of seconds considered to mean inactive @becameInactive: the method to call (with no arguments) when inactivity is reached """ self._reactor = reactor self._timeoutSeconds = timeoutSeconds self._becameInactive = becameInactive if self._timeoutSeconds > 0: self._delayedCall = self._reactor.callLater( self._timeoutSeconds, self._inactivityThresholdReached ) def _inactivityThresholdReached(self): """ The delayed call has fired. We're inactive. Call the becameInactive method. """ self._becameInactive() def activity(self): """ Call this to let the InactivityMonitor that there has been activity. It will reset the timeout. 
""" if self._timeoutSeconds > 0: if self._delayedCall.active(): self._delayedCall.reset(self._timeoutSeconds) else: self._delayedCall = self._reactor.callLater( self._timeoutSeconds, self._inactivityThresholdReached ) def stop(self): """ Cancels the delayed call """ if self._timeoutSeconds > 0: if self._delayedCall.active(): self._delayedCall.cancel() # # Alternate implementation using AMP instead of HTTP # class GatewayAMPCommand(amp.Command): """ A command to be executed by gateway.Runner """ arguments = [('command', amp.String())] response = [('result', amp.String())] class GatewayAMPProtocol(amp.AMP): """ Passes commands to gateway.Runner and returns the results """ def __init__(self, store, directory): """ @param store: an already opened store operations @param directory: a directory service """ amp.AMP.__init__(self) self.store = store self.directory = directory @GatewayAMPCommand.responder @inlineCallbacks def gatewayCommandReceived(self, command): """ Process a command via gateway.Runner @param command: GatewayAMPCommand @returns: a deferred returning a dict """ command = readPlistFromString(command) output = cStringIO.StringIO() from calendarserver.tools.gateway import Runner runner = Runner( self.store, [command], output=output ) try: yield runner.run() result = output.getvalue() output.close() except Exception as e: error = {"Error": str(e)} result = writePlistToString(error) output.close() returnValue(dict(result=result)) class GatewayAMPFactory(Factory): """ Builds GatewayAMPProtocols """ protocol = GatewayAMPProtocol def __init__(self, store): """ @param store: an already opened store """ self.store = store self.directory = self.store.directoryService() def buildProtocol(self, addr): return GatewayAMPProtocol( self.store, self.davRootResource, self.directory ) # # A test AMP client # command = """ command getLocationAndResourceList """ def getList(): # For the sample client, below: from twisted.internet import reactor from twisted.internet.protocol 
import ClientCreator creator = ClientCreator(reactor, amp.AMP) host = '127.0.0.1' import sys if len(sys.argv) > 1: host = sys.argv[1] d = creator.connectTCP(host, 62308) def connected(ampProto): return ampProto.callRemote(GatewayAMPCommand, command=command) d.addCallback(connected) def resulted(result): return result['result'] d.addCallback(resulted) def done(result): print('Done: %s' % (result,)) reactor.stop() d.addCallback(done) reactor.run() if __name__ == '__main__': getList() calendarserver-9.1+dfsg/calendarserver/tools/ampnotifications.py000077500000000000000000000064631315003562600254000ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from __future__ import print_function from calendarserver.push.amppush import subscribeToIDs from calendarserver.tools.cmdline import utilityMain, WorkerService from getopt import getopt, GetoptError from twext.python.log import Logger from twisted.internet.defer import inlineCallbacks, succeed import os import sys log = Logger() def usage(e=None): name = os.path.basename(sys.argv[0]) print("usage: %s [options] [pushkey ...]" % (name,)) print("") print(" Monitor AMP Push Notifications") print("") print("options:") print(" -h --help: print this help and exit") print(" -f --config : Specify caldavd.plist configuration path") print(" -p --port : AMP port to connect to") print(" -s --server : AMP server to connect to") print(" --debug: verbose logging") print("") if e: sys.stderr.write("%s\n" % (e,)) sys.exit(64) else: sys.exit(0) class MonitorAMPNotifications(WorkerService): ids = [] hostname = None port = None def doWork(self): return monitorAMPNotifications(self.hostname, self.port, self.ids) def postStartService(self): """ Don't quit right away """ pass def main(): try: (optargs, args) = getopt( sys.argv[1:], "f:hp:s:v", [ "config=", "debug", "help", "port=", "server=", ], ) except GetoptError, e: usage(e) # # Get configuration # configFileName = None hostname = "localhost" port = 62311 debug = False for opt, arg in optargs: if opt in ("-h", "--help"): usage() elif opt in ("-f", "--config"): configFileName = arg elif opt in ("-p", "--port"): port = int(arg) elif opt in ("-s", "--server"): hostname = arg elif opt in ("--debug"): debug = True else: raise NotImplementedError(opt) if not args: usage("Not enough arguments") MonitorAMPNotifications.ids = args MonitorAMPNotifications.hostname = hostname MonitorAMPNotifications.port = port utilityMain( configFileName, MonitorAMPNotifications, verbose=debug ) def notificationCallback(id, dataChangedTimestamp, priority): print("Received notification for:", id, "Priority", priority) return succeed(True) @inlineCallbacks 
def monitorAMPNotifications(hostname, port, ids): print("Subscribing to notifications...") yield subscribeToIDs(hostname, port, ids, notificationCallback) print("Waiting for notifications...") calendarserver-9.1+dfsg/calendarserver/tools/anonymize.py000077500000000000000000000517711315003562600240440ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2006-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function from __future__ import with_statement from getopt import getopt, GetoptError from subprocess import Popen, PIPE, STDOUT import datetime import hashlib import os import random import shutil import sys import urllib import uuid import xattr import zlib from plistlib import readPlistFromString from pycalendar.icalendar.calendar import Calendar from pycalendar.parameter import Parameter COPY_CAL_XATTRS = ( 'WebDAV:{DAV:}resourcetype', 'WebDAV:{urn:ietf:params:xml:ns:caldav}calendar-timezone', 'WebDAV:{http:%2F%2Fapple.com%2Fns%2Fical%2F}calendar-color', 'WebDAV:{http:%2F%2Fapple.com%2Fns%2Fical%2F}calendar-order', ) COPY_EVENT_XATTRS = ('WebDAV:{DAV:}getcontenttype',) def usage(e=None): if e: print(e) print("") name = os.path.basename(sys.argv[0]) print("usage: %s [options] source destination" % (name,)) print("") print(" Anonymizes calendar data") print("") print(" source and destination should refer to document root directories") print("") print("options:") print(" -h --help: print this 
help and exit") print(" -n --node : Directory node (defaults to /Search)") print("") if e: sys.exit(64) else: sys.exit(0) def main(): try: (optargs, args) = getopt( sys.argv[1:], "hn:", [ "help", "node=", ], ) except GetoptError, e: usage(e) # # Get configuration # directoryNode = "/Search" for opt, arg in optargs: if opt in ("-h", "--help"): usage() elif opt in ("-n", "--node"): directoryNode = arg if len(args) != 2: usage("Source and destination directories must be specified.") sourceDirectory, destDirectory = args directoryMap = DirectoryMap(directoryNode) anonymizeRoot(directoryMap, sourceDirectory, destDirectory) directoryMap.printStats() directoryMap.dumpDsImports(os.path.join(destDirectory, "dsimports")) def anonymizeRoot(directoryMap, sourceDirectory, destDirectory): # sourceDirectory and destDirectory are DocumentRoots print("Anonymizing calendar data from %s into %s" % (sourceDirectory, destDirectory)) homes = 0 calendars = 0 resources = 0 if not os.path.exists(sourceDirectory): print("Can't find source: %s" % (sourceDirectory,)) sys.exit(1) if not os.path.exists(destDirectory): os.makedirs(destDirectory) sourceCalRoot = os.path.join(sourceDirectory, "calendars") destCalRoot = os.path.join(destDirectory, "calendars") if not os.path.exists(destCalRoot): os.makedirs(destCalRoot) sourceUidHomes = os.path.join(sourceCalRoot, "__uids__") if os.path.exists(sourceUidHomes): destUidHomes = os.path.join(destCalRoot, "__uids__") if not os.path.exists(destUidHomes): os.makedirs(destUidHomes) homeList = [] for first in os.listdir(sourceUidHomes): if len(first) == 2: firstPath = os.path.join(sourceUidHomes, first) for second in os.listdir(firstPath): if len(second) == 2: secondPath = os.path.join(firstPath, second) for home in os.listdir(secondPath): record = directoryMap.lookupCUA(home) if not record: print("Couldn't find %s, skipping." 
% (home,)) continue sourceHome = os.path.join(secondPath, home) destHome = os.path.join( destUidHomes, record['guid'][0:2], record['guid'][2:4], record['guid']) homeList.append((sourceHome, destHome, record)) else: home = first sourceHome = os.path.join(sourceUidHomes, home) if not os.path.isdir(sourceHome): continue record = directoryMap.lookupCUA(home) if not record: print("Couldn't find %s, skipping." % (home,)) continue sourceHome = os.path.join(sourceUidHomes, home) destHome = os.path.join(destUidHomes, record['guid']) homeList.append((sourceHome, destHome, record)) print("Processing %d calendar homes..." % (len(homeList),)) for sourceHome, destHome, record in homeList: quotaUsed = 0 if not os.path.exists(destHome): os.makedirs(destHome) homes += 1 # Iterate calendars freeBusies = [] for cal in os.listdir(sourceHome): # Skip these: if cal in ("dropbox", "notifications"): continue # Don't include these in freebusy list if cal not in ("inbox", "outbox"): freeBusies.append(cal) sourceCal = os.path.join(sourceHome, cal) destCal = os.path.join(destHome, cal) if not os.path.exists(destCal): os.makedirs(destCal) calendars += 1 # Copy calendar xattrs for attr, value in xattr.xattr(sourceCal).iteritems(): if attr in COPY_CAL_XATTRS: xattr.setxattr(destCal, attr, value) # Copy index sourceIndex = os.path.join(sourceCal, ".db.sqlite") destIndex = os.path.join(destCal, ".db.sqlite") if os.path.exists(sourceIndex): shutil.copyfile(sourceIndex, destIndex) # Iterate resources for resource in os.listdir(sourceCal): if resource.startswith("."): continue sourceResource = os.path.join(sourceCal, resource) # Skip directories if os.path.isdir(sourceResource): continue with open(sourceResource) as res: data = res.read() data = anonymizeData(directoryMap, data) if data is None: # Ignore data we can't parse continue destResource = os.path.join(destCal, resource) with open(destResource, "w") as res: res.write(data) quotaUsed += len(data) for attr, value in 
xattr.xattr(sourceResource).iteritems(): if attr in COPY_EVENT_XATTRS: xattr.setxattr(destResource, attr, value) # Set new etag xml = "\r\n%s\r\n" % (hashlib.md5(data).hexdigest(),) xattr.setxattr(destResource, "WebDAV:{http:%2F%2Ftwistedmatrix.com%2Fxml_namespace%2Fdav%2F}getcontentmd5", zlib.compress(xml)) resources += 1 # Store new ctag on calendar xml = "\r\n%s\r\n" % (str(datetime.datetime.now()),) xattr.setxattr(destCal, "WebDAV:{http:%2F%2Fcalendarserver.org%2Fns%2F}getctag", zlib.compress(xml)) # Calendar home quota xml = "\r\n%d\r\n" % (quotaUsed,) xattr.setxattr(destHome, "WebDAV:{http:%2F%2Ftwistedmatrix.com%2Fxml_namespace%2Fdav%2Fprivate%2F}quota-used", zlib.compress(xml)) # Inbox free busy calendars list destInbox = os.path.join(destHome, "inbox") if not os.path.exists(destInbox): os.makedirs(destInbox) xml = "\n" for freeBusy in freeBusies: xml += "/calendars/__uids__/%s/%s/\n" % (record['guid'], freeBusy) xml += "\n" xattr.setxattr( destInbox, "WebDAV:{urn:ietf:params:xml:ns:caldav}calendar-free-busy-set", zlib.compress(xml) ) if not (homes % 100): print(" %d..." 
% (homes,)) print("Done.") print("") print("Calendar totals:") print(" Calendar homes: %d" % (homes,)) print(" Calendars: %d" % (calendars,)) print(" Events: %d" % (resources,)) print("") def anonymizeData(directoryMap, data): try: pyobj = Calendar.parseText(data) except Exception, e: print("Failed to parse (%s): %s" % (e, data)) return None # Delete property from the top level try: for prop in pyobj.getProperties('x-wr-calname'): prop.setValue(anonymize(prop.getValue().getValue())) except KeyError: pass for comp in pyobj.getComponents(): # Replace with anonymized CUAs: for propName in ('organizer', 'attendee'): try: for prop in tuple(comp.getProperties(propName)): cua = prop.getValue().getValue() record = directoryMap.lookupCUA(cua) if record is None: # print("Can't find record for", cua) record = directoryMap.addRecord(cua=cua) if record is None: comp.removeProperty(prop) continue prop.setValue("urn:uuid:%s" % (record['guid'],)) if prop.hasParameter('X-CALENDARSERVER-EMAIL'): prop.replaceParameter(Parameter('X-CALENDARSERVER-EMAIL', record['email'])) else: prop.removeParameters('EMAIL') prop.addParameter(Parameter('EMAIL', record['email'])) prop.removeParameters('CN') prop.addParameter(Parameter('CN', record['name'])) except KeyError: pass # Replace with anonymized text: for propName in ('summary', 'location', 'description'): try: for prop in comp.getProperties(propName): prop.setValue(anonymize(prop.getValue().getValue())) except KeyError: pass # Replace with anonymized URL: try: for prop in comp.getProperties('url'): prop.setValue("http://example.com/%s/" % (anonymize(prop.getValue().getValue()),)) except KeyError: pass # Remove properties: for propName in ('x-apple-dropbox', 'attach'): try: for prop in tuple(comp.getProperties(propName)): comp.removeProperty(prop) except KeyError: pass return pyobj.getText(includeTimezones=Calendar.ALL_TIMEZONES) class DirectoryMap(object): def __init__(self, node): self.map = {} self.byType = { 'users': [], 'groups': [], 
'locations': [], 'resources': [], } self.counts = { 'users': 0, 'groups': 0, 'locations': 0, 'resources': 0, 'unknown': 0, } self.strings = { 'users': ('Users', 'user'), 'groups': ('Groups', 'group'), 'locations': ('Places', 'location'), 'resources': ('Resources', 'resource'), } print("Fetching records from directory: %s" % (node,)) for internalType, (recordType, _ignore_friendlyType) in self.strings.iteritems(): print(" %s..." % (internalType,)) child = Popen( args=[ "/usr/bin/dscl", "-plist", node, "-readall", "/%s" % (recordType,), "GeneratedUID", "RecordName", "EMailAddress", "GroupMembers" ], stdout=PIPE, stderr=STDOUT, ) output, error = child.communicate() if child.returncode: raise DirectoryError(error) else: records = readPlistFromString(output) random.shuffle(records) # so we don't go alphabetically for record in records: origGUID = record.get('dsAttrTypeStandard:GeneratedUID', [None])[0] if not origGUID: continue origRecordNames = record['dsAttrTypeStandard:RecordName'] origEmails = record.get('dsAttrTypeStandard:EMailAddress', []) origMembers = record.get('dsAttrTypeStandard:GroupMembers', []) self.addRecord( internalType=internalType, guid=origGUID, names=origRecordNames, emails=origEmails, members=origMembers) print("Done.") print("") def addRecord( self, internalType="users", guid=None, names=None, emails=None, members=None, cua=None ): if cua: keys = [self.cua2key(cua)] self.counts['unknown'] += 1 else: keys = self.getKeys(guid, names, emails) if keys: self.counts[internalType] += 1 count = self.counts[internalType] namePrefix = randomName(6) typeStr = self.strings[internalType][1] record = { 'guid': str(uuid.uuid4()).upper(), 'name': "%s %s%d" % (namePrefix, typeStr, count,), 'first': namePrefix, 'last': "%s%d" % (typeStr, count,), 'recordName': "%s%d" % (typeStr, count,), 'email': ("%s%d@example.com" % (typeStr, count,)), 'type': self.strings[internalType][0], 'cua': cua, 'members': members, } for key in keys: self.map[key] = record 
self.byType[internalType].append(record) return record else: return None def getKeys(self, guid, names, emails): keys = [] if guid: keys.append(guid.lower()) if names: for name in names: try: name = name.encode('utf-8') name = urllib.quote(name).lower() keys.append(name) except: # print("Failed to urllib.quote( ) %s. Skipping." % (name,)) pass if emails: for email in emails: email = email.lower() keys.append(email) return keys def cua2key(self, cua): key = cua.lower() if key.startswith("mailto:"): key = key[7:] elif key.startswith("urn:uuid:"): key = key[9:] elif (key.startswith("/") or key.startswith("http")): key = key.rstrip("/") key = key.split("/")[-1] return key def lookupCUA(self, cua): key = self.cua2key(cua) if key and key in self.map: return self.map[key] else: return None def printStats(self): print("Directory totals:") for internalType, (recordType, ignore) in self.strings.iteritems(): print(" %s: %d" % (recordType, self.counts[internalType])) unknown = self.counts['unknown'] if unknown: print(" Principals not found in directory: %d" % (unknown,)) def dumpDsImports(self, dirPath): if not os.path.exists(dirPath): os.makedirs(dirPath) uid = 1000000 filePath = os.path.join(dirPath, "users.dsimport") with open(filePath, "w") as out: out.write("0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Users 12 dsAttrTypeStandard:RecordName dsAttrTypeStandard:AuthMethod dsAttrTypeStandard:Password dsAttrTypeStandard:UniqueID dsAttrTypeStandard:GeneratedUID dsAttrTypeStandard:PrimaryGroupID dsAttrTypeStandard:RealName dsAttrTypeStandard:FirstName dsAttrTypeStandard:LastName dsAttrTypeStandard:NFSHomeDirectory dsAttrTypeStandard:UserShell dsAttrTypeStandard:EMailAddress\n") for record in self.byType['users']: fields = [] fields.append(record['recordName']) fields.append("dsAuthMethodStandard\\:dsAuthClearText") fields.append("test") # password fields.append(str(uid)) fields.append(record['guid']) fields.append("20") # primary group id fields.append(record['name']) 
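The cua2key() normalization above maps the several calendar-user-address forms (mailto:, urn:uuid:, and HTTP/path principal URLs) onto a single lowercase lookup key. A self-contained copy of that logic, usable for a quick check:

```python
def cua2key(cua):
    # Same normalization as DirectoryMap.cua2key: lowercase, then strip
    # the scheme, or for path/URL forms keep only the last path segment.
    key = cua.lower()
    if key.startswith("mailto:"):
        key = key[7:]
    elif key.startswith("urn:uuid:"):
        key = key[9:]
    elif key.startswith("/") or key.startswith("http"):
        key = key.rstrip("/")
        key = key.split("/")[-1]
    return key
```

Because every address form collapses to the same key, one `self.map` entry per record is enough to resolve whatever form shows up in ORGANIZER/ATTENDEE properties.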
fields.append(record['first']) fields.append(record['last']) fields.append("/var/empty") fields.append("/usr/bin/false") fields.append(record['email']) out.write(":".join(fields)) out.write("\n") uid += 1 gid = 2000000 filePath = os.path.join(dirPath, "groups.dsimport") with open(filePath, "w") as out: out.write("0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Groups 5 dsAttrTypeStandard:RecordName dsAttrTypeStandard:PrimaryGroupID dsAttrTypeStandard:GeneratedUID dsAttrTypeStandard:RealName dsAttrTypeStandard:GroupMembership\n") for record in self.byType['groups']: fields = [] fields.append(record['recordName']) fields.append(str(gid)) fields.append(record['guid']) fields.append(record['name']) anonMembers = [] for member in record['members']: memberRec = self.lookupCUA("urn:uuid:%s" % (member,)) if memberRec: anonMembers.append(memberRec['guid']) if anonMembers: # skip empty groups fields.append(",".join(anonMembers)) out.write(":".join(fields)) out.write("\n") gid += 1 filePath = os.path.join(dirPath, "resources.dsimport") with open(filePath, "w") as out: out.write("0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Resources 3 dsAttrTypeStandard:RecordName dsAttrTypeStandard:GeneratedUID dsAttrTypeStandard:RealName\n") for record in self.byType['resources']: fields = [] fields.append(record['recordName']) fields.append(record['guid']) fields.append(record['name']) out.write(":".join(fields)) out.write("\n") filePath = os.path.join(dirPath, "places.dsimport") with open(filePath, "w") as out: out.write("0x0A 0x5C 0x3A 0x2C dsRecTypeStandard:Places 3 dsAttrTypeStandard:RecordName dsAttrTypeStandard:GeneratedUID dsAttrTypeStandard:RealName\n") for record in self.byType['locations']: fields = [] fields.append(record['recordName']) fields.append(record['guid']) fields.append(record['name']) out.write(":".join(fields)) out.write("\n") class DirectoryError(Exception): """ Error trying to access dscl """ class DatabaseError(Exception): """ Error trying to access sqlite3 """ def 
anonymize(text):
    """
    Return a string whose value is the hex digest of text, repeated as
    needed to create a string of the same length as text.  Useful for
    anonymizing strings in a deterministic manner.
    """
    if isinstance(text, unicode):
        try:
            text = text.encode('utf-8')
        except UnicodeEncodeError:
            print("Failed to anonymize:", text)
            text = "Anonymize me!"
    h = hashlib.md5(text)
    h = h.hexdigest()
    l = len(text)
    return (h * ((l / 32) + 1))[:-(32 - (l % 32))]


nameChars = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"


def randomName(length):
    l = []
    for _ignore in xrange(length):
        l.append(random.choice(nameChars))
    return "".join(l)


if __name__ == "__main__":
    main()

calendarserver-9.1+dfsg/calendarserver/tools/calverify.py

#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_calverify -*-
##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
This tool scans the calendar store to analyze organizer/attendee event
states to verify that the organizer's view of attendee state matches up
with the attendees' views.  It can optionally apply a fix to bring the two
views back into line.

In theory the implicit scheduling model should eliminate the possibility
of mismatches, however, because we store separate resources for organizer
and attendee events, there is a possibility of mismatch.
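The anonymize() helper in anonymize.py (earlier in this dump) scrubs a string by repeating its MD5 hex digest and trimming to the original length, so the same input always yields the same same-length output. A Python 3 friendly sketch of that idea (trimming with `[:n]`, which is equivalent to the original's negative-slice arithmetic; ASCII input assumed, since for multi-byte text the byte and character lengths diverge):

```python
import hashlib


def anonymize(text):
    # Deterministic, length-preserving scrub: repeat the 32-character hex
    # digest of the input and trim the result to the input's length.
    data = text.encode("utf-8") if isinstance(text, str) else text
    digest = hashlib.md5(data).hexdigest()
    n = len(text)
    return (digest * (n // 32 + 1))[:n]


scrubbed = anonymize("Board meeting in room 3")
```

Determinism is the point: the same summary or location anonymizes identically across every event, so recurring patterns in the data survive the scrub.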
This is greatly lessened via the new transaction model of database changes, but it is possible there are edge cases or actual implicit processing errors we have missed. This tool will allow us to track mismatches to help determine these errors and get them fixed. Even in the long term if we move to a "single instance" store where the organizer event resource is the only one we store (with attendee views derived from that), in a situation where we have server-to-server scheduling it is possible for mismatches to creep in. In that case having a way to analyze multiple DBs for inconsistency would be good too. """ from __future__ import print_function from uuid import uuid4 import collections import itertools import sys import time import traceback from calendarserver.tools import tables from calendarserver.tools.cmdline import utilityMain, WorkerService from pycalendar.datetime import DateTime from pycalendar.exceptions import ErrorBase from pycalendar.icalendar import definitions from pycalendar.icalendar.calendar import Calendar from pycalendar.period import Period from pycalendar.timezone import Timezone from twext.enterprise.dal.syntax import Select, Parameter, Count, Update from twext.python.log import Logger from twisted.internet.defer import inlineCallbacks, returnValue from twisted.python import usage from twisted.python.usage import Options from twistedcaldav.datafilters.peruserdata import PerUserDataFilter from twistedcaldav.dateops import pyCalendarToSQLTimestamp from twistedcaldav.ical import Component, InvalidICalendarDataError, Property, PERUSER_COMPONENT from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE from twistedcaldav.timezones import TimezoneCache from txdav.caldav.datastore.scheduling.icalsplitter import iCalSplitter from txdav.caldav.datastore.scheduling.implicit import ImplicitScheduler from txdav.caldav.datastore.scheduling.itip import iTipGenerator from txdav.caldav.datastore.sql import CalendarStoreFeatures from 
txdav.caldav.datastore.util import normalizationLookup from txdav.caldav.icalendarstore import ComponentUpdateState from txdav.common.datastore.sql_tables import schema, _BIND_MODE_OWN from txdav.common.icommondatastore import InternalDataStoreError from txdav.who.idirectory import RecordType as CalRecordType, AutoScheduleMode log = Logger() # Monkey patch def new_validRecurrenceIDs(self, doFix=True): fixed = [] unfixed = [] # Detect invalid occurrences and fix by adding RDATEs for them master = self.masterComponent() if master is not None: # Get the set of all recurrence IDs all_rids = set(self.getComponentInstances()) if None in all_rids: all_rids.remove(None) # If the master has no recurrence properties treat any other components as invalid if master.isRecurring(): # Remove all EXDATEs with a matching RECURRENCE-ID. Do this before we start # processing of valid instances just in case the matching R-ID is also not valid and # thus will need RDATE added. exdates = {} for property in list(master.properties("EXDATE")): for exdate in property.value(): exdates[exdate.getValue()] = property for rid in all_rids: if rid in exdates: if doFix: property = exdates[rid] for value in property.value(): if value.getValue() == rid: property.value().remove(value) break master.removeProperty(property) if len(property.value()) > 0: master.addProperty(property) del exdates[rid] fixed.append("Removed EXDATE for valid override: %s" % (rid,)) else: unfixed.append("EXDATE for valid override: %s" % (rid,)) # Get the set of all valid recurrence IDs valid_rids = self.validInstances(all_rids, ignoreInvalidInstances=True) # Get the set of all RDATEs and add those to the valid set rdates = [] for property in master.properties("RDATE"): rdates.extend([_rdate.getValue() for _rdate in property.value()]) valid_rids.update(set(rdates)) # Remove EXDATEs predating master dtstart = master.propertyValue("DTSTART") if dtstart is not None: for property in list(master.properties("EXDATE")): newValues = [] 
changed = False for exdate in property.value(): exdateValue = exdate.getValue() if exdateValue < dtstart: if doFix: fixed.append("Removed earlier EXDATE: %s" % (exdateValue,)) else: unfixed.append("EXDATE earlier than master: %s" % (exdateValue,)) changed = True else: newValues.append(exdateValue) if changed and doFix: # Remove the property... master.removeProperty(property) if newValues: # ...and add it back only if it still has values property.setValue(newValues) master.addProperty(property) else: valid_rids = set() # Determine the invalid recurrence IDs by set subtraction invalid_rids = all_rids - valid_rids # Add RDATEs for the invalid ones, or remove any EXDATE. for invalid_rid in invalid_rids: brokenComponent = self.overriddenComponent(invalid_rid) brokenRID = brokenComponent.propertyValue("RECURRENCE-ID") if doFix: master.addProperty(Property("RDATE", [brokenRID, ])) fixed.append( "Added RDATE for invalid occurrence: %s" % (brokenRID,)) else: unfixed.append("Invalid occurrence: %s" % (brokenRID,)) return fixed, unfixed def new_hasDuplicateAlarms(self, doFix=False): """ test and optionally remove alarms that have the same ACTION and TRIGGER values in the same component. 
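new_hasDuplicateAlarms() treats two VALARMs as duplicates when their (ACTION, TRIGGER) pair repeats within the same component, keeping the first and dropping the rest when doFix is set. The core seen-set walk can be sketched without the iCalendar classes (alarms reduced to plain tuples here, a deliberate simplification):

```python
def dedupe_alarms(alarms):
    # Keep the first alarm for each (action, trigger) pair, mirroring the
    # seen-set walk in new_hasDuplicateAlarms(doFix=True).
    seen = set()
    kept = []
    for action, trigger in alarms:
        if (action, trigger) in seen:
            continue  # duplicate: the real code removes this component
        seen.add((action, trigger))
        kept.append((action, trigger))
    return kept
```

The production version walks `self.subcomponents()` and calls `removeComponent` instead of building a new list, but the duplicate test is the same pair membership check.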
""" changed = False if self.name() in ("VCALENDAR", PERUSER_COMPONENT,): for component in self.subcomponents(): if component.name() in ("VTIMEZONE",): continue changed = component.hasDuplicateAlarms(doFix) or changed else: action_trigger = set() for component in tuple(self.subcomponents()): if component.name() == "VALARM": item = (component.propertyValue("ACTION"), component.propertyValue("TRIGGER"),) if item in action_trigger: if doFix: self.removeComponent(component) changed = True else: action_trigger.add(item) return changed Component.validRecurrenceIDs = new_validRecurrenceIDs if not hasattr(Component, "maxAlarmCounts"): Component.hasDuplicateAlarms = new_hasDuplicateAlarms VERSION = "13" def printusage(e=None): if e: print(e) print("") try: CalVerifyOptions().opt_help() except SystemExit: pass if e: sys.exit(64) else: sys.exit(0) description = """ Usage: calendarserver_verify_data [options] Version: %s This tool scans the calendar store to look for and correct any problems. OPTIONS: Modes of operation: -h : print help and exit. --ical : verify iCalendar data. --mismatch : verify scheduling state. --missing : display orphaned calendar homes - can be used. with either --ical or --mismatch. --double : detect double-bookings. --dark-purge : purge room/resource events with invalid organizer --split : split recurring event --upgrade : upgrade iCalendar data. --nuke PATH|UID|RID : remove specific calendar resources - can only be used by itself. PATH is the full /calendars/__uids__/XXX/YYY/ZZZ.ics object resource path. UID is the iCalendar UID, prefixed with "uid:", of the resources to remove. RID is the SQL DB resource-id. Options for all modes: --fix : changes are only made when this is present. --config : caldavd.plist file for the server. -v : verbose logging Options for --ical / --upgrade: --uuid : only scan specified calendar homes. Can be a partial GUID to scan all GUIDs with that as a prefix. --uid : scan only calendar data with the specific iCalendar UID. 
--calendar : When scanning calendar homes, only scan calendars with the
             specified calendar resource name.
--path     : Scan the calendar home or calendar identified by the
             specified URI.

Options for --mismatch:

--uid     : look for mismatches with the specified iCalendar UID only.
--details : log extended details on each mismatch.
--tzid    : timezone to adjust details to.

Options for --double:

--uuid    : only scan specified calendar homes. Can be a partial GUID
            to scan all GUIDs with that as a prefix, or "*" for all
            GUIDs (that are marked as resources or locations in the
            directory).
--tzid    : timezone to adjust details to.
--summary : report only which GUIDs have double-bookings - no details.
--days    : number of days ahead to scan [DEFAULT: 365]

Options for --dark-purge:

--uuid    : only scan specified calendar homes. Can be a partial GUID
            to scan all GUIDs with that as a prefix, or "*" for all
            GUIDs (that are marked as resources or locations in the
            directory).
--summary : report only which GUIDs have dark events - no details.
--no-organizer       : only detect events without an organizer.
--invalid-organizer  : only detect events with an organizer not in the
                       directory.
--disabled-organizer : only detect events with an organizer disabled
                       for calendaring.

If none of --no-organizer, --invalid-organizer, or --disabled-organizer
is present, the default is --invalid-organizer plus --disabled-organizer.

Options for --missing-location:

--uuid    : only scan specified calendar homes. Can be a partial GUID
            to scan all GUIDs with that as a prefix, or "*" for all
            GUIDs (that are marked as locations in the directory).
--summary : report only which GUIDs have bad events - no details.

Options for --split:

--path    : URI path to resource to split.
--rid     : UTC date-time where split occurs (YYYYMMDDTHHMMSSZ).
--summary : only print a list of recurrences in the resource - no
            splitting.
CHANGES

v8:  Detects ORGANIZER or ATTENDEE properties with mailto: calendar user
     addresses for users that have valid directory records. Fix is to
     replace the value with a urn:x-uid: form.
v9:  Detects double-bookings.
v10: Purges data for invalid users.
v11: Allows manual splitting of recurring events.
v12: Fix double-booking false positives caused by
     timezones-by-reference.
v13: Add new options for --nuke and --ical. Add fix for invalid GEO.

""" % (VERSION,)


def safePercent(x, y, multiplier=100.0):
    return ((multiplier * x) / y) if y else 0


class CalVerifyOptions(Options):
    """
    Command-line options for 'calendarserver_verify_data'
    """

    synopsis = description

    optFlags = [
        ['ical', 'i', "Calendar data check."],
        ['debug', 'D', "Debug logging."],
        ['mismatch', 's', "Detect organizer/attendee mismatches."],
        ['missing', 'm', "Show 'orphaned' homes."],
        ['double', 'd', "Detect double-bookings."],
        ['dark-purge', 'p', "Purge room/resource events with invalid organizer."],
        ['missing-location', 'g', "Room events with missing location."],
        ['split', 'l', "Split an event."],
        ['upgrade', 'u', "Upgrade calendar data."],
        ['fix', 'x', "Fix problems."],
        ['verbose', 'v', "Verbose logging."],
        ['details', 'V', "Detailed logging."],
        ['summary', 'S', "Summary of double-bookings/split."],
        ['tzid', 't', "Timezone to adjust displayed times to."],
        ['no-organizer', '', "Detect dark events without an organizer"],
        ['invalid-organizer', '', "Detect dark events with an organizer not in the directory"],
        ['disabled-organizer', '', "Detect dark events with a disabled organizer"],
    ]

    optParameters = [
        ['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
        ['uuid', 'u', "", "Only check this user."],
        ['uid', 'U', "", "Only this event UID."],
        ['calendar', '', "", "Only this calendar resource name."],
        ['nuke', 'e', "", "Remove event given its path."],
        ['days', 'T', "365", "Number of days for scanning events into the future."],
        ['path', '', "", "Split event given its path."],
        ['rid',
         '', "", "Split date-time."],
    ]

    def __init__(self):
        super(CalVerifyOptions, self).__init__()
        self.outputName = '-'

    def getUsage(self, width=None):
        return ""

    def opt_output(self, filename):
        """
        Specify output file path (default: '-', meaning stdout).
        """
        self.outputName = filename

    opt_o = opt_output

    def openOutput(self):
        """
        Open the appropriate output file based on the '--output' option.
        """
        if self.outputName == '-':
            return sys.stdout
        else:
            return open(self.outputName, 'wb')


class CalVerifyService(WorkerService, object):
    """
    Base class for common service behaviors.
    """

    def __init__(self, store, options, output, reactor, config):
        super(CalVerifyService, self).__init__(store)
        self.options = options
        self.output = output
        self.reactor = reactor
        self.config = config
        self._directory = store.directoryService()
        self._principalCollection = self.rootResource().getChild("principals")

        self.cuaCache = {}

        self.results = {}
        self.summary = []
        self.total = 0
        self.totalErrors = None
        self.totalExceptions = None

        TimezoneCache.create()

    def title(self):
        return ""

    @inlineCallbacks
    def doWork(self):
        """
        Do the operation stopping the reactor when done.
""" self.output.write("\n---- CalVerify %s version: %s ----\n" % (self.title(), VERSION,)) try: yield self.doAction() except Exception as e: self.output.write("CalVerify Failure: %s\n" % (str(e),)) log.failure("CalVerify Failure") except: self.output.write("CalVerify Failure: unknown exception\n") log.failure("CalVerify Failure") finally: self.output.close() def directoryService(self): """ Return the directory service """ return self._directory @inlineCallbacks def getAllHomeUIDs(self): ch = schema.CALENDAR_HOME rows = (yield Select( [ch.OWNER_UID, ], From=ch, ).on(self.txn)) returnValue(tuple([uid[0] for uid in rows])) @inlineCallbacks def getMatchingHomeUIDs(self, uuid): ch = schema.CALENDAR_HOME kwds = {"uuid": uuid} rows = (yield Select( [ch.OWNER_UID, ], From=ch, Where=(ch.OWNER_UID.StartsWith(Parameter("uuid"))), ).on(self.txn, **kwds)) returnValue(tuple([uid[0] for uid in rows])) @inlineCallbacks def countHomeContents(self, uid): ch = schema.CALENDAR_HOME cb = schema.CALENDAR_BIND co = schema.CALENDAR_OBJECT kwds = {"UID": uid} rows = (yield Select( [Count(co.RESOURCE_ID), ], From=ch.join( cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN)).join( co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)), Where=(ch.OWNER_UID == Parameter("UID")) ).on(self.txn, **kwds)) returnValue(int(rows[0][0]) if rows else 0) @inlineCallbacks def getAllResourceInfo(self, inbox=False): co = schema.CALENDAR_OBJECT cb = schema.CALENDAR_BIND ch = schema.CALENDAR_HOME if inbox: cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN) else: cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN).And( cb.CALENDAR_RESOURCE_NAME != "inbox") kwds = {} rows = (yield Select( [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED], From=ch.join( cb, type="inner", 
on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join( co, type="inner", on=cojoin), GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,), ).on(self.txn, **kwds)) returnValue(tuple(rows)) @inlineCallbacks def getAllResourceInfoWithUUID(self, uuid, inbox=False, calendar=None): co = schema.CALENDAR_OBJECT cb = schema.CALENDAR_BIND ch = schema.CALENDAR_HOME if calendar: cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN).And( cb.CALENDAR_RESOURCE_NAME == calendar) elif inbox: cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN) else: cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN).And( cb.CALENDAR_RESOURCE_NAME != "inbox") kwds = {"uuid": uuid} if len(uuid) != 36: where = (ch.OWNER_UID.StartsWith(Parameter("uuid"))) else: where = (ch.OWNER_UID == Parameter("uuid")) rows = (yield Select( [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED], From=ch.join( cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join( co, type="inner", on=cojoin), Where=where, GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,), ).on(self.txn, **kwds)) returnValue(tuple(rows)) @inlineCallbacks def getAllResourceInfoTimeRange(self, start): co = schema.CALENDAR_OBJECT cb = schema.CALENDAR_BIND ch = schema.CALENDAR_HOME tr = schema.TIME_RANGE kwds = { "Start": pyCalendarToSQLTimestamp(start), "Max": pyCalendarToSQLTimestamp(DateTime(1900, 1, 1, 0, 0, 0)) } rows = (yield Select( [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED], From=ch.join( cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join( co, type="inner", 
on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN).And( cb.CALENDAR_RESOURCE_NAME != "inbox").And( co.ORGANIZER != "")).join( tr, type="left", on=(co.RESOURCE_ID == tr.CALENDAR_OBJECT_RESOURCE_ID)), Where=(tr.START_DATE >= Parameter("Start")).Or(co.RECURRANCE_MAX <= Parameter("Start")), GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,), ).on(self.txn, **kwds)) returnValue(tuple(rows)) @inlineCallbacks def getAllResourceInfoWithUID(self, uid, inbox=False): co = schema.CALENDAR_OBJECT cb = schema.CALENDAR_BIND ch = schema.CALENDAR_HOME if inbox: cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN) else: cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN).And( cb.CALENDAR_RESOURCE_NAME != "inbox") kwds = { "UID": uid, } rows = (yield Select( [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED], From=ch.join( cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join( co, type="inner", on=cojoin), Where=(co.ICALENDAR_UID == Parameter("UID")), GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,), ).on(self.txn, **kwds)) returnValue(tuple(rows)) @inlineCallbacks def getAllResourceInfoTimeRangeWithUUID(self, start, uuid): co = schema.CALENDAR_OBJECT cb = schema.CALENDAR_BIND ch = schema.CALENDAR_HOME tr = schema.TIME_RANGE kwds = { "Start": pyCalendarToSQLTimestamp(start), "Max": pyCalendarToSQLTimestamp(DateTime(1900, 1, 1, 0, 0, 0)), "UUID": uuid, } rows = (yield Select( [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED], From=ch.join( cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join( co, 
type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN).And( cb.CALENDAR_RESOURCE_NAME != "inbox")).join( tr, type="left", on=(co.RESOURCE_ID == tr.CALENDAR_OBJECT_RESOURCE_ID)), Where=(ch.OWNER_UID == Parameter("UUID")).And((tr.START_DATE >= Parameter("Start")).Or(co.RECURRANCE_MAX <= Parameter("Start"))), GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,), ).on(self.txn, **kwds)) returnValue(tuple(rows)) @inlineCallbacks def getAllResourceInfoTimeRangeWithUUIDForAllUID(self, start, uuid): co = schema.CALENDAR_OBJECT cb = schema.CALENDAR_BIND ch = schema.CALENDAR_HOME tr = schema.TIME_RANGE cojoin = (cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN).And( cb.CALENDAR_RESOURCE_NAME != "inbox") kwds = { "Start": pyCalendarToSQLTimestamp(start), "Max": pyCalendarToSQLTimestamp(DateTime(1900, 1, 1, 0, 0, 0)), "UUID": uuid, } rows = (yield Select( [ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED], From=ch.join( cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join( co, type="inner", on=cojoin), Where=(co.ICALENDAR_UID.In(Select( [co.ICALENDAR_UID], From=ch.join( cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join( co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN).And( cb.CALENDAR_RESOURCE_NAME != "inbox").And( co.ORGANIZER != "")).join( tr, type="left", on=(co.RESOURCE_ID == tr.CALENDAR_OBJECT_RESOURCE_ID)), Where=(ch.OWNER_UID == Parameter("UUID")).And((tr.START_DATE >= Parameter("Start")).Or(co.RECURRANCE_MAX <= Parameter("Start"))), GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,), ))), GroupBy=(ch.OWNER_UID, co.RESOURCE_ID, 
co.ICALENDAR_UID, cb.CALENDAR_RESOURCE_NAME, co.MD5, co.ORGANIZER, co.CREATED, co.MODIFIED,), ).on(self.txn, **kwds)) returnValue(tuple(rows)) @inlineCallbacks def getAllResourceInfoForResourceID(self, resid): co = schema.CALENDAR_OBJECT cb = schema.CALENDAR_BIND ch = schema.CALENDAR_HOME kwds = {"resid": resid} rows = (yield Select( [ch.RESOURCE_ID, cb.CALENDAR_RESOURCE_ID, ], From=ch.join( cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join( co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN)), Where=(co.RESOURCE_ID == Parameter("resid")), ).on(self.txn, **kwds)) returnValue(rows[0]) @inlineCallbacks def getResourceID(self, home, calendar, resource): co = schema.CALENDAR_OBJECT cb = schema.CALENDAR_BIND ch = schema.CALENDAR_HOME kwds = { "home": home, "calendar": calendar, "resource": resource, } rows = (yield Select( [co.RESOURCE_ID], From=ch.join( cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join( co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)), Where=(ch.OWNER_UID == Parameter("home")).And( cb.CALENDAR_RESOURCE_NAME == Parameter("calendar")).And( co.RESOURCE_NAME == Parameter("resource") ), ).on(self.txn, **kwds)) returnValue(rows[0][0] if rows else None) @inlineCallbacks def getCalendar(self, resid, doFix=False): self.parseError = None self.parseException = None co = schema.CALENDAR_OBJECT kwds = {"ResourceID": resid} rows = (yield Select( [co.ICALENDAR_TEXT], From=co, Where=( co.RESOURCE_ID == Parameter("ResourceID") ), ).on(self.txn, **kwds)) try: caldata = Calendar.parseText(rows[0][0]) if rows else None except ErrorBase as e: self.parseError = "Failed to parse" self.parseException = str(e) returnValue(None) returnValue(caldata) @inlineCallbacks def getCalendarForOwnerByUID(self, owner, uid): co = schema.CALENDAR_OBJECT cb = schema.CALENDAR_BIND ch = schema.CALENDAR_HOME kwds = {"OWNER": owner, "UID": uid} rows = (yield 
            Select(
                [co.ICALENDAR_TEXT, co.RESOURCE_ID, co.CREATED, co.MODIFIED, ],
                From=ch.join(
                    cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join(
                    co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And(
                        cb.BIND_MODE == _BIND_MODE_OWN).And(
                        cb.CALENDAR_RESOURCE_NAME != "inbox")),
                Where=(ch.OWNER_UID == Parameter("OWNER")).And(co.ICALENDAR_UID == Parameter("UID")),
            ).on(self.txn, **kwds))

        try:
            caldata = Calendar.parseText(rows[0][0]) if rows else None
        except ErrorBase:
            returnValue((None, None, None, None,))

        returnValue((caldata, rows[0][1], rows[0][2], rows[0][3],) if rows else (None, None, None, None,))

    @inlineCallbacks
    def removeEvent(self, resid):
        """
        Remove the calendar resource specified by resid - this is a
        force remove - no implicit scheduling is required so we use
        store apis directly.
        """
        try:
            homeID, calendarID = yield self.getAllResourceInfoForResourceID(resid)
            home = yield self.txn.calendarHomeWithResourceID(homeID)
            calendar = yield home.childWithID(calendarID)
            calendarObj = yield calendar.objectResourceWithID(resid)
            objname = calendarObj.name()
            yield calendarObj.purge(implicitly=False)
            yield self.txn.commit()
            self.txn = self.store.newTransaction()

            self.results.setdefault("Fix remove", set()).add((home.name(), calendar.name(), objname,))
            returnValue(True)
        except Exception as e:
            print("Failed to remove resource whilst fixing: %d\n%s" % (resid, e,))
            returnValue(False)

    def logResult(self, key, value, total=None):
        self.output.write("%s: %s\n" % (key, value,))
        self.results[key] = value
        self.addToSummary(key, value, total)

    def addToSummary(self, title, count, total=None):
        if total is not None:
            percent = safePercent(count, total)
        else:
            percent = ""
        self.summary.append((title, count, percent))

    def addSummaryBreak(self):
        self.summary.append(None)

    def printSummary(self):
        # Print summary of results
        table = tables.Table()
        table.addHeader(("Item", "Count", "%"))
        table.setDefaultColumnFormats(
            (
                tables.Table.ColumnFormat("%s",
tables.Table.ColumnFormat.LEFT_JUSTIFY), tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY), ) ) for item in self.summary: table.addRow(item) if self.totalErrors is not None: table.addRow(None) table.addRow(("Total Errors", self.totalErrors, safePercent(self.totalErrors, self.total),)) self.output.write("\n") self.output.write("Overall Summary:\n") table.printTable(os=self.output) class NukeService(CalVerifyService): """ Service which removes specific events. """ def title(self): return "Nuke Service" @inlineCallbacks def doAction(self): """ Remove a resource using either its path or resource id. When doing this do not read the iCalendar data which may be corrupt. """ self.output.write("\n---- Removing calendar resource ----\n") self.txn = self.store.newTransaction() nuke = self.options["nuke"] if nuke.startswith("/calendars/__uids__/"): pathbits = nuke.split("/") if len(pathbits) != 6: printusage("Not a valid calendar object resource path: %s" % (nuke,)) homeName = pathbits[3] calendarName = pathbits[4] resourceName = pathbits[5] rid = yield self.getResourceID(homeName, calendarName, resourceName) if rid is None: yield self.txn.commit() self.txn = None self.output.write("\n") self.output.write("Path does not exist. 
Nothing nuked.\n") returnValue(None) rids = [(int(rid), homeName, calendarName,)] elif nuke.startswith("uid:"): uid = nuke[4:] rows = yield self.getAllResourceInfoWithUID(uid, inbox=True) rids = [] for owner, resid, _ignore_uid, calname, _ignore_md5, _ignore_organizer, _ignore_created, _ignore_modified in rows: rids.append((resid, owner, calname,)) else: try: rids = [(int(nuke), "-", "-",)] except ValueError: printusage("nuke argument must be a calendar object path or an SQL resource-id") self.output.write("\nNumber of resources: %d\n\n" % (len(rids),)) for rid, owner, calname in rids: if self.options["fix"]: result = yield self.removeEvent(rid) if result: self.output.write("Removed resource: %s. Owner: %s. Calendar: %s\n" % (rid, owner, calname)) else: self.output.write("Resource: %s. Owner: %s. Calendar: %s\n" % (rid, owner, calname)) yield self.txn.commit() self.txn = None class OrphansService(CalVerifyService): """ Service which detects orphaned calendar homes. """ def title(self): return "Orphans Service" @inlineCallbacks def doAction(self): """ Report on home collections for which there are no directory records, or record is for user on a different pod, or a user not enabled for calendaring. 
""" self.output.write("\n---- Finding calendar homes with missing or disabled directory records ----\n") self.txn = self.store.newTransaction() if self.options["verbose"]: t = time.time() uids = yield self.getAllHomeUIDs() if self.options["verbose"]: self.output.write("getAllHomeUIDs time: %.1fs\n" % (time.time() - t,)) missing = [] wrong_server = [] disabled = [] uids_len = len(uids) uids_div = 1 if uids_len < 100 else uids_len / 100 self.addToSummary("Total Homes", uids_len) for ctr, uid in enumerate(uids): if self.options["verbose"] and divmod(ctr, uids_div)[1] == 0: self.output.write(("\r%d of %d (%d%%)" % ( ctr + 1, uids_len, ((ctr + 1) * 100 / uids_len), )).ljust(80)) self.output.flush() record = yield self.directoryService().recordWithUID(uid) if record is None: contents = yield self.countHomeContents(uid) missing.append((uid, contents,)) elif not record.thisServer(): contents = yield self.countHomeContents(uid) wrong_server.append((uid, contents,)) elif not record.hasCalendars: contents = yield self.countHomeContents(uid) disabled.append((uid, contents,)) # To avoid holding locks on all the rows scanned, commit every 100 resources if divmod(ctr, 100)[1] == 0: yield self.txn.commit() self.txn = self.store.newTransaction() yield self.txn.commit() self.txn = None if self.options["verbose"]: self.output.write("\r".ljust(80) + "\n") # Print table of results table = tables.Table() table.addHeader(("Owner UID", "Calendar Objects")) for uid, count in sorted(missing, key=lambda x: x[0]): table.addRow(( uid, count, )) self.output.write("\n") self.logResult("Homes without a matching directory record", len(missing), uids_len) table.printTable(os=self.output) # Print table of results table = tables.Table() table.addHeader(("Owner UID", "Calendar Objects")) for uid, count in sorted(wrong_server, key=lambda x: x[0]): record = yield self.directoryService().recordWithUID(uid) table.addRow(( "%s/%s (%s)" % (record.recordType if record else "-", record.shortNames[0] if record 
else "-", uid,), count, )) self.output.write("\n") self.logResult("Homes not hosted on this server", len(wrong_server), uids_len) table.printTable(os=self.output) # Print table of results table = tables.Table() table.addHeader(("Owner UID", "Calendar Objects")) for uid, count in sorted(disabled, key=lambda x: x[0]): record = yield self.directoryService().recordWithUID(uid) table.addRow(( "%s/%s (%s)" % (record.recordType if record else "-", record.shortNames[0] if record else "-", uid,), count, )) self.output.write("\n") self.logResult("Homes without an enabled directory record", len(disabled), uids_len) table.printTable(os=self.output) self.printSummary() class BadDataService(CalVerifyService): """ Service which scans for bad calendar data. """ def title(self): return "Bad Data Service" @inlineCallbacks def doAction(self): self.output.write("\n---- Scanning calendar data ----\n") self.now = DateTime.getNowUTC() self.start = DateTime.getToday() self.start.setDateOnly(False) self.end = self.start.duplicate() self.end.offsetYear(1) self.fix = self.options["fix"] self.tzid = Timezone(tzid=self.options["tzid"] if self.options["tzid"] else "America/Los_Angeles") self.txn = self.store.newTransaction() if self.options["verbose"]: t = time.time() descriptor = None if self.options["uuid"]: rows = yield self.getAllResourceInfoWithUUID(self.options["uuid"], inbox=True, calendar=self.options["calendar"]) descriptor = "getAllResourceInfoWithUUID" elif self.options["uid"]: rows = yield self.getAllResourceInfoWithUID(self.options["uid"], inbox=True) descriptor = "getAllResourceInfoWithUID" elif self.options["path"]: segments = self.options["path"].strip("/").split("/") if len(segments) not in (4, 3) or segments[0] != "calendars" or segments[1] != "__uids__": self.output.write("Invalid path for bad data scan\n") returnValue(None) else: uuid = segments[2] calendar = segments[3] if len(segments) == 4 else None rows = yield self.getAllResourceInfoWithUUID(uuid, inbox=True, 
calendar=calendar) descriptor = "getAllResourceInfoWithUUID" else: rows = yield self.getAllResourceInfo(inbox=True) descriptor = "getAllResourceInfo" yield self.txn.commit() self.txn = None if self.options["verbose"]: self.output.write("%s time: %.1fs\n" % (descriptor, time.time() - t,)) self.total = len(rows) self.logResult("Number of events to process", self.total) self.addSummaryBreak() yield self.calendarDataCheck(rows) self.printSummary() @inlineCallbacks def calendarDataCheck(self, rows): """ Check each calendar resource for valid iCalendar data. """ self.output.write("\n---- Verifying each calendar object resource ----\n") self.txn = self.store.newTransaction() if self.options["verbose"]: t = time.time() results_bad = [] count = 0 total = len(rows) badlen = 0 rjust = 10 for owner, resid, uid, calname, _ignore_md5, _ignore_organizer, _ignore_created, _ignore_modified in rows: try: result, message = yield self.validCalendarData(resid, calname == "inbox") except Exception, e: result = False message = "Exception for validCalendarData" if self.options["verbose"]: print(e) if not result: results_bad.append((owner, uid, resid, message)) badlen += 1 count += 1 if self.options["verbose"]: if count == 1: self.output.write("Bad".rjust(rjust) + "Current".rjust(rjust) + "Total".rjust(rjust) + "Complete".rjust(rjust) + "\n") last_output = time.time() if divmod(count, 100)[1] == 0 or time.time() - last_output > 1: self.output.write(( "\r" + ("%s" % badlen).rjust(rjust) + ("%s" % count).rjust(rjust) + ("%s" % total).rjust(rjust) + ("%d%%" % safePercent(count, total)).rjust(rjust) + " " + calname + " " + str(resid) ).ljust(80)) self.output.flush() last_output = time.time() # To avoid holding locks on all the rows scanned, commit every 100 resources if divmod(count, 100)[1] == 0: yield self.txn.commit() self.txn = self.store.newTransaction() yield self.txn.commit() self.txn = None if self.options["verbose"]: self.output.write(( "\r" + ("%s" % badlen).rjust(rjust) + ("%s" % 
count).rjust(rjust) + ("%s" % total).rjust(rjust) + ("%d%%" % safePercent(count, total)).rjust(rjust) ).ljust(80) + "\n") # Print table of results table = tables.Table() table.addHeader(("Owner", "Event UID", "RID", "Problem",)) for item in sorted(results_bad, key=lambda x: (x[0], x[1])): owner, uid, resid, message = item owner_record = yield self.directoryService().recordWithUID(owner) table.addRow(( "%s/%s (%s)" % (owner_record.recordType if owner_record else "-", owner_record.shortNames[0] if owner_record else "-", owner,), uid, resid, message, )) self.output.write("\n") self.logResult("Bad iCalendar data", len(results_bad), total) self.results["Bad iCalendar data"] = results_bad table.printTable(os=self.output) if self.options["verbose"]: diff_time = time.time() - t self.output.write("Time: %.2f s Average: %.1f ms/resource\n" % ( diff_time, safePercent(diff_time, total, 1000.0), )) errorPrefix = "Calendar data had unfixable problems:\n " @inlineCallbacks def validCalendarData(self, resid, isinbox): """ Check the calendar resource for valid iCalendar data. 
""" result = True message = "" caldata = yield self.getCalendar(resid, self.fix) # Look for possible GEO fix if caldata is None and self.parseError and "GEO value incorrect" in self.parseException: if self.fix: caldata = yield self.fixInvalidGEOProperty(resid) if caldata is not None: result = False message = "Fixed: Removed bad GEO" else: self.parseError += ": GEO" if caldata is None: if self.parseError: returnValue((False, self.parseError)) else: returnValue((True, "Nothing to scan")) component = Component(None, pycalendar=caldata) if getattr(self.config, "MaxInstancesForRRULE", 0): component.truncateRecurrence(self.config.MaxInstancesForRRULE) try: if self.options["ical"]: component.validCalendarData(doFix=False, validateRecurrences=True) component.validCalendarForCalDAV(methodAllowed=isinbox) component.validOrganizerForScheduling(doFix=False) if component.hasDuplicateAlarms(doFix=False): raise InvalidICalendarDataError("Duplicate VALARMS") yield self.noPrincipalPathCUAddresses(component, doFix=False) if self.options["ical"]: self.attendeesWithoutOrganizer(component, doFix=False) except ValueError, e: result = False message = str(e) if message.startswith(self.errorPrefix): message = message[len(self.errorPrefix):] lines = message.splitlines() message = lines[0] + (" ++" if len(lines) > 1 else "") if self.fix: fixresult, fixmessage = yield self.fixCalendarData(resid, isinbox) if fixresult: message = "Fixed: " + message else: message = fixmessage + message returnValue((result, message,)) @inlineCallbacks def noPrincipalPathCUAddresses(self, component, doFix): @inlineCallbacks def recordWithCalendarUserAddress(address): principal = yield self._principalCollection.principalForCalendarUserAddress(address) returnValue(principal.record) @inlineCallbacks def lookupFunction(cuaddr, recordFunction, conf): # Return cached results, if any.add if cuaddr in self.cuaCache: returnValue(self.cuaCache[cuaddr]) result = yield normalizationLookup(cuaddr, recordFunction, conf) 
_ignore_name, guid, _ignore_cutype, _ignore_cuaddrs = result if guid is None: if cuaddr.find("__uids__") != -1: guid = cuaddr[cuaddr.find("__uids__/") + 9:][:36] result = ("", guid, "", set(),) # Cache the result self.cuaCache[cuaddr] = result returnValue(result) for subcomponent in component.subcomponents(ignore=True): organizer = subcomponent.getProperty("ORGANIZER") if organizer: cuaddr = organizer.value() # http(s) principals need to be converted to urn:uuid if cuaddr.startswith("http"): if doFix: yield component.normalizeCalendarUserAddresses(lookupFunction, recordWithCalendarUserAddress) else: raise InvalidICalendarDataError("iCalendar ORGANIZER starts with 'http(s)'") elif cuaddr.startswith("mailto:"): if (yield lookupFunction(cuaddr, recordWithCalendarUserAddress, self.config))[1] is not None: if doFix: yield component.normalizeCalendarUserAddresses(lookupFunction, recordWithCalendarUserAddress) else: raise InvalidICalendarDataError("iCalendar ORGANIZER starts with 'mailto:' and record exists") else: if ("@" in cuaddr) and (":" not in cuaddr) and ("/" not in cuaddr): if doFix: # Add back in mailto: then re-normalize to urn:uuid if possible organizer.setValue("mailto:%s" % (cuaddr,)) yield component.normalizeCalendarUserAddresses(lookupFunction, recordWithCalendarUserAddress) # Remove any SCHEDULE-AGENT=NONE if organizer.parameterValue("SCHEDULE-AGENT", "SERVER") == "NONE": organizer.removeParameter("SCHEDULE-AGENT") else: raise InvalidICalendarDataError("iCalendar ORGANIZER missing mailto:") for attendee in subcomponent.properties("ATTENDEE"): cuaddr = attendee.value() # http(s) principals need to be converted to urn:uuid if cuaddr.startswith("http"): if doFix: yield component.normalizeCalendarUserAddresses(lookupFunction, recordWithCalendarUserAddress) else: raise InvalidICalendarDataError("iCalendar ATTENDEE starts with 'http(s)'") elif cuaddr.startswith("mailto:"): if (yield lookupFunction(cuaddr, recordWithCalendarUserAddress, self.config))[1] is not 
None: if doFix: yield component.normalizeCalendarUserAddresses(lookupFunction, recordWithCalendarUserAddress) else: raise InvalidICalendarDataError("iCalendar ATTENDEE starts with 'mailto:' and record exists") else: if ("@" in cuaddr) and (":" not in cuaddr) and ("/" not in cuaddr): if doFix: # Add back in mailto: then re-normalize to urn:uuid if possible attendee.setValue("mailto:%s" % (cuaddr,)) yield component.normalizeCalendarUserAddresses(lookupFunction, recordWithCalendarUserAddress) else: raise InvalidICalendarDataError("iCalendar ATTENDEE missing mailto:") def attendeesWithoutOrganizer(self, component, doFix): """ Look for events with ATTENDEE properties and no ORGANIZER property. """ organizer = component.getOrganizer() attendees = component.getAttendees() if organizer is None and attendees: if doFix: raise ValueError("ATTENDEEs without ORGANIZER") else: raise InvalidICalendarDataError("ATTENDEEs without ORGANIZER") @inlineCallbacks def fixCalendarData(self, resid, isinbox): """ Fix problems in calendar data using store APIs. 
""" homeID, calendarID = yield self.getAllResourceInfoForResourceID(resid) home = yield self.txn.calendarHomeWithResourceID(homeID) calendar = yield home.childWithID(calendarID) calendarObj = yield calendar.objectResourceWithID(resid) try: component = yield calendarObj.component() except InternalDataStoreError: returnValue((False, "Failed parse: ")) result = True message = "" try: if self.options["ical"]: component.validCalendarData(doFix=True, validateRecurrences=True) component.validCalendarForCalDAV(methodAllowed=isinbox) component.validOrganizerForScheduling(doFix=True) component.hasDuplicateAlarms(doFix=True) yield self.noPrincipalPathCUAddresses(component, doFix=True) if self.options["ical"]: self.attendeesWithoutOrganizer(component, doFix=True) except ValueError: result = False message = "Failed fix: " if result: # Write out fix, commit and get a new transaction try: # Use _migrating to ignore possible overridden instance errors - we are either correcting or ignoring those self.txn._migrating = True component = yield calendarObj._setComponentInternal(component, internal_state=ComponentUpdateState.RAW) except Exception, e: print(e, component) print(traceback.print_exc()) result = False message = "Exception fix: " yield self.txn.commit() self.txn = self.store.newTransaction() returnValue((result, message,)) @inlineCallbacks def fixInvalidGEOProperty(self, resid, doFix=False): self.parseError = None self.parseException = None co = schema.CALENDAR_OBJECT kwds = {"ResourceID": resid} rows = (yield Select( [co.ICALENDAR_TEXT], From=co, Where=( co.RESOURCE_ID == Parameter("ResourceID") ), ).on(self.txn, **kwds)) caltextdata = rows[0][0] if rows else None if caltextdata is None: returnValue(None) caltextdata = "\n".join(filter(lambda x: not (x.startswith("GEO:") and ";" not in x), caltextdata.splitlines())) try: caldata = Calendar.parseText(caltextdata) except ErrorBase as e: self.parseError = "Failed to parse" self.parseException = str(e) returnValue(None) # We now 
know it is valid, so write it back yield Update( {co.ICALENDAR_TEXT: caltextdata}, Where=co.RESOURCE_ID == Parameter("ResourceID") ).on(self.txn, **kwds) returnValue(caldata) class SchedulingMismatchService(CalVerifyService): """ Service which detects mismatched scheduled events. """ metadata = { "accessMode": "PUBLIC", "isScheduleObject": True, "scheduleTag": "abc", "scheduleEtags": (), "hasPrivateComment": False, } metadata_inbox = { "accessMode": "PUBLIC", "isScheduleObject": False, "scheduleTag": "", "scheduleEtags": (), "hasPrivateComment": False, } def __init__(self, store, options, output, reactor, config): super(SchedulingMismatchService, self).__init__(store, options, output, reactor, config) self.validForCalendaringUUIDs = {} self.fixAttendeesForOrganizerMissing = 0 self.fixAttendeesForOrganizerMismatch = 0 self.fixOrganizersForAttendeeMissing = 0 self.fixOrganizersForAttendeeMismatch = 0 self.fixFailed = 0 self.fixedAutoAccepts = [] def title(self): return "Scheduling Mismatch Service" @inlineCallbacks def doAction(self): self.output.write("\n---- Scanning calendar data ----\n") self.now = DateTime.getNowUTC() self.start = self.options["start"] if "start" in self.options else DateTime.getToday() self.start.setDateOnly(False) self.end = self.start.duplicate() self.end.offsetYear(1) self.fix = self.options["fix"] self.tzid = Timezone(tzid=self.options["tzid"] if self.options["tzid"] else "America/Los_Angeles") self.txn = self.store.newTransaction() if self.options["verbose"]: t = time.time() descriptor = None if self.options["uid"]: rows = yield self.getAllResourceInfoWithUID(self.options["uid"]) descriptor = "getAllResourceInfoWithUID" elif self.options["uuid"]: rows = yield self.getAllResourceInfoTimeRangeWithUUIDForAllUID(self.start, self.options["uuid"]) descriptor = "getAllResourceInfoTimeRangeWithUUIDForAllUID" self.options["uuid"] = None else: rows = yield self.getAllResourceInfoTimeRange(self.start) descriptor = "getAllResourceInfoTimeRange" yield 
self.txn.commit() self.txn = None if self.options["verbose"]: self.output.write("%s time: %.1fs\n" % (descriptor, time.time() - t,)) self.total = len(rows) self.logResult("Number of events to process", self.total) # Split into organizer events and attendee events self.organized = [] self.organized_byuid = {} self.attended = [] self.attended_byuid = collections.defaultdict(list) self.matched_attendee_to_organizer = collections.defaultdict(set) skipped, inboxes = yield self.buildResourceInfo(rows) self.logResult("Number of organizer events to process", len(self.organized), self.total) self.logResult("Number of attendee events to process", len(self.attended), self.total) self.logResult("Number of skipped events", skipped, self.total) self.logResult("Number of inbox events", inboxes) self.addSummaryBreak() self.totalErrors = 0 yield self.verifyAllAttendeesForOrganizer() yield self.verifyAllOrganizersForAttendee() # Need to add fix summary information if self.fix: self.addSummaryBreak() self.logResult("Fixed missing attendee events", self.fixAttendeesForOrganizerMissing) self.logResult("Fixed mismatched attendee events", self.fixAttendeesForOrganizerMismatch) self.logResult("Fixed missing organizer events", self.fixOrganizersForAttendeeMissing) self.logResult("Fixed mismatched organizer events", self.fixOrganizersForAttendeeMismatch) self.logResult("Fix failures", self.fixFailed) self.logResult("Fixed Auto-Accepts", len(self.fixedAutoAccepts)) self.results["Auto-Accepts"] = self.fixedAutoAccepts self.printAutoAccepts() self.printSummary() @inlineCallbacks def buildResourceInfo(self, rows, onlyOrganizer=False, onlyAttendee=False): """ For each resource, determine whether it is an organizer or attendee event, and also cache the attendee partstats. 
        @param rows: set of DB query rows
        @type rows: C{list}
        @param onlyOrganizer: whether organizer information only is required
        @type onlyOrganizer: C{bool}
        @param onlyAttendee: whether attendee information only is required
        @type onlyAttendee: C{bool}
        """

        skipped = 0
        inboxes = 0
        for owner, resid, uid, calname, md5, organizer, created, modified in rows:

            # Skip owners not enabled for calendaring
            if not (yield self.testForCalendaringUUID(owner)):
                skipped += 1
                continue

            # Skip inboxes
            if calname == "inbox":
                inboxes += 1
                continue

            # If targeting a specific organizer, skip events belonging to others
            if self.options["uuid"]:
                if not organizer.startswith("urn:x-uid:") or self.options["uuid"] != organizer[10:]:
                    continue

            # Cache organizer/attendee states
            if organizer.startswith("urn:x-uid:") and owner == organizer[10:]:
                if not onlyAttendee:
                    self.organized.append((owner, resid, uid, md5, organizer, created, modified,))
                    self.organized_byuid[uid] = (owner, resid, uid, md5, organizer, created, modified,)
            else:
                if not onlyOrganizer:
                    self.attended.append((owner, resid, uid, md5, organizer, created, modified,))
                    self.attended_byuid[uid].append((owner, resid, uid, md5, organizer, created, modified,))

        returnValue((skipped, inboxes))

    @inlineCallbacks
    def testForCalendaringUUID(self, uuid):
        """
        Determine if the specified directory UUID is valid for calendaring. Keep a cache of
        valid and invalid so we can do this quickly.

        @param uuid: the directory UUID to test
        @type uuid: C{str}

        @return: C{True} if valid, C{False} if not
        """

        if uuid not in self.validForCalendaringUUIDs:
            record = yield self.directoryService().recordWithUID(uuid)
            self.validForCalendaringUUIDs[uuid] = record is not None and record.hasCalendars and record.thisServer()
        returnValue(self.validForCalendaringUUIDs[uuid])

    @inlineCallbacks
    def verifyAllAttendeesForOrganizer(self):
        """
        Make sure that for each organizer, each referenced attendee has a consistent view of the organizer's event.
        We will look for events that an organizer has and are missing for the attendee,
        and events that an organizer's view of attendee status does not match the attendee's
        view of their own status.
        """

        self.output.write("\n---- Verifying Organizer events against Attendee copies ----\n")
        self.txn = self.store.newTransaction()

        results_missing = []
        results_mismatch = []
        attendeeResIDs = {}
        organized_len = len(self.organized)
        organizer_div = 1 if organized_len < 100 else organized_len / 100

        # Test organized events
        t = time.time()
        for ctr, organizerEvent in enumerate(self.organized):

            if self.options["verbose"] and divmod(ctr, organizer_div)[1] == 0:
                self.output.write(("\r%d of %d (%d%%) Missing: %d Mismatched: %s" % (
                    ctr + 1,
                    organized_len,
                    ((ctr + 1) * 100 / organized_len),
                    len(results_missing),
                    len(results_mismatch),
                )).ljust(80))
                self.output.flush()

            # To avoid holding locks on all the rows scanned, commit every 10 seconds
            if time.time() - t > 10:
                yield self.txn.commit()
                self.txn = self.store.newTransaction()
                t = time.time()

            # Get the organizer's view of attendee states
            organizer, resid, uid, _ignore_md5, _ignore_organizer, org_created, org_modified = organizerEvent
            calendar = yield self.getCalendar(resid)
            if calendar is None:
                continue
            if self.options["verbose"] and self.masterComponent(calendar) is None:
                self.output.write("Missing master for organizer: %s, resid: %s, uid: %s\n" % (organizer, resid, uid,))
            organizerViewOfAttendees = self.buildAttendeeStates(calendar, self.start, self.end)
            try:
                del organizerViewOfAttendees[organizer]
            except KeyError:
                # Odd - the organizer is not an attendee - this usually does not happen
                pass
            if len(organizerViewOfAttendees) == 0:
                continue

            # Get attendee states for matching UID
            eachAttendeesOwnStatus = {}
            attendeeCreatedModified = {}
            for attendeeEvent in self.attended_byuid.get(uid, ()):
                owner, attresid, attuid, _ignore_md5, _ignore_organizer, att_created, att_modified = attendeeEvent
                attendeeCreatedModified[owner] = (att_created, att_modified,)
                calendar = yield self.getCalendar(attresid)
                if calendar is None:
                    continue
                eachAttendeesOwnStatus[owner] = self.buildAttendeeStates(calendar, self.start, self.end, attendee_only=owner)
                attendeeResIDs[(owner, attuid)] = attresid

            # Look at each attendee in the organizer's meeting
            for organizerAttendee, organizerViewOfStatus in organizerViewOfAttendees.iteritems():
                missing = False
                mismatch = False
                self.matched_attendee_to_organizer[uid].add(organizerAttendee)

                # Skip attendees not enabled for calendaring
                if not (yield self.testForCalendaringUUID(organizerAttendee)):
                    continue

                # Double check the missing attendee situation in case we missed it during the original query
                if organizerAttendee not in eachAttendeesOwnStatus:
                    # Try to reload the attendee data
                    calendar, attresid, att_created, att_modified = yield self.getCalendarForOwnerByUID(organizerAttendee, uid)
                    if calendar is not None:
                        eachAttendeesOwnStatus[organizerAttendee] = self.buildAttendeeStates(calendar, self.start, self.end, attendee_only=organizerAttendee)
                        attendeeResIDs[(organizerAttendee, uid)] = attresid
                        attendeeCreatedModified[organizerAttendee] = (att_created, att_modified,)
                        # print("Reloaded missing attendee data")

                # If an entry for the attendee exists, then check whether attendee status matches
                if organizerAttendee in eachAttendeesOwnStatus:
                    attendeeOwnStatus = eachAttendeesOwnStatus[organizerAttendee].get(organizerAttendee, set())
                    att_created, att_modified = attendeeCreatedModified[organizerAttendee]

                    if organizerViewOfStatus != attendeeOwnStatus:
                        # Check that the difference is only cancelled or declined on the organizers side
                        for _organizerInstance, partstat in organizerViewOfStatus.difference(attendeeOwnStatus):
                            if partstat not in ("DECLINED", "CANCELLED"):
                                results_mismatch.append((uid, resid, organizer, org_created, org_modified, organizerAttendee, att_created, att_modified))
                                self.results.setdefault("Mismatch Attendee", set()).add((uid, organizer, organizerAttendee,))
                                mismatch = True
                                if self.options["details"]:
                                    self.output.write("Mismatch: on Organizer's side:\n")
                                    self.output.write("  UID: %s\n" % (uid,))
                                    self.output.write("  Organizer: %s\n" % (organizer,))
                                    self.output.write("  Attendee: %s\n" % (organizerAttendee,))
                                    self.output.write("  Instance: %s\n" % (_organizerInstance,))
                                break

                        # Check that the difference is only cancelled on the attendees side
                        for _attendeeInstance, partstat in attendeeOwnStatus.difference(organizerViewOfStatus):
                            if partstat not in ("CANCELLED",):
                                if not mismatch:
                                    results_mismatch.append((uid, resid, organizer, org_created, org_modified, organizerAttendee, att_created, att_modified))
                                    self.results.setdefault("Mismatch Attendee", set()).add((uid, organizer, organizerAttendee,))
                                    mismatch = True
                                if self.options["details"]:
                                    self.output.write("Mismatch: on Attendee's side:\n")
                                    self.output.write("  Organizer: %s\n" % (organizer,))
                                    self.output.write("  Attendee: %s\n" % (organizerAttendee,))
                                    self.output.write("  Instance: %s\n" % (_attendeeInstance,))
                                break

                # Check that the status for this attendee is always declined which means a missing copy of the event is OK
                else:
                    for _ignore_instance_id, partstat in organizerViewOfStatus:
                        if partstat not in ("DECLINED", "CANCELLED"):
                            results_missing.append((uid, resid, organizer, organizerAttendee, org_created, org_modified))
                            self.results.setdefault("Missing Attendee", set()).add((uid, organizer, organizerAttendee,))
                            missing = True
                            break

                # If there was a problem we can fix it
                if (missing or mismatch) and self.fix:
                    fix_result = (yield self.fixByReinvitingAttendee(resid, attendeeResIDs.get((organizerAttendee, uid)), organizerAttendee))
                    if fix_result:
                        if missing:
                            self.fixAttendeesForOrganizerMissing += 1
                        else:
                            self.fixAttendeesForOrganizerMismatch += 1
                    else:
                        self.fixFailed += 1

        yield self.txn.commit()
        self.txn = None
        if self.options["verbose"]:
            self.output.write("\r".ljust(80) + "\n")

        # Print table of results
        table = tables.Table()
        table.addHeader(("Organizer", "Attendee", "Event UID", "Organizer RID", "Created", "Modified",))
        results_missing.sort()
        for item in results_missing:
            uid, resid, organizer, attendee, created, modified = item
            organizer_record = yield self.directoryService().recordWithUID(organizer)
            attendee_record = yield self.directoryService().recordWithUID(attendee)
            table.addRow((
                "%s/%s (%s)" % (organizer_record.recordType if organizer_record else "-", organizer_record.shortNames[0] if organizer_record else "-", organizer,),
                "%s/%s (%s)" % (attendee_record.recordType if attendee_record else "-", attendee_record.shortNames[0] if attendee_record else "-", attendee,),
                uid,
                resid,
                created,
                "" if modified == created else modified,
            ))

        self.output.write("\n")
        self.logResult("Events missing from Attendee's calendars", len(results_missing), self.total)
        table.printTable(os=self.output)
        self.totalErrors += len(results_missing)

        # Print table of results
        table = tables.Table()
        table.addHeader(("Organizer", "Attendee", "Event UID", "Organizer RID", "Created", "Modified", "Attendee RID", "Created", "Modified",))
        results_mismatch.sort()
        for item in results_mismatch:
            uid, org_resid, organizer, org_created, org_modified, attendee, att_created, att_modified = item
            organizer_record = yield self.directoryService().recordWithUID(organizer)
            attendee_record = yield self.directoryService().recordWithUID(attendee)
            table.addRow((
                "%s/%s (%s)" % (organizer_record.recordType if organizer_record else "-", organizer_record.shortNames[0] if organizer_record else "-", organizer,),
                "%s/%s (%s)" % (attendee_record.recordType if attendee_record else "-", attendee_record.shortNames[0] if attendee_record else "-", attendee,),
                uid,
                org_resid,
                org_created,
                "" if org_modified == org_created else org_modified,
                attendeeResIDs[(attendee, uid)],
                att_created,
                "" if att_modified == att_created else att_modified,
            ))

        self.output.write("\n")
        self.logResult("Events mismatched between Organizer's and Attendee's calendars", len(results_mismatch), self.total)
        table.printTable(os=self.output)
        self.totalErrors += len(results_mismatch)

    @inlineCallbacks
    def verifyAllOrganizersForAttendee(self):
        """
        Make sure that for each attendee, there is a matching event for the organizer.
        """

        self.output.write("\n---- Verifying Attendee events against Organizer copies ----\n")
        self.txn = self.store.newTransaction()

        # Now try to match up each attendee event
        missing = []
        mismatched = []
        attended_len = len(self.attended)
        attended_div = 1 if attended_len < 100 else attended_len / 100
        t = time.time()
        for ctr, attendeeEvent in enumerate(tuple(self.attended)):  # self.attended might mutate during the loop

            if self.options["verbose"] and divmod(ctr, attended_div)[1] == 0:
                self.output.write(("\r%d of %d (%d%%) Missing: %d Mismatched: %s" % (
                    ctr + 1,
                    attended_len,
                    ((ctr + 1) * 100 / attended_len),
                    len(missing),
                    len(mismatched),
                )).ljust(80))
                self.output.flush()

            # To avoid holding locks on all the rows scanned, commit every 10 seconds
            if time.time() - t > 10:
                yield self.txn.commit()
                self.txn = self.store.newTransaction()
                t = time.time()

            attendee, resid, uid, _ignore_md5, organizer, att_created, att_modified = attendeeEvent
            calendar = yield self.getCalendar(resid)
            if calendar is None:
                continue
            eachAttendeesOwnStatus = self.buildAttendeeStates(calendar, self.start, self.end, attendee_only=attendee)
            if attendee not in eachAttendeesOwnStatus:
                continue

            # Only care about data for hosted organizers
            if not organizer.startswith("urn:x-uid:"):
                continue
            organizer = organizer[10:]

            # Skip organizers not enabled for calendaring
            if not (yield self.testForCalendaringUUID(organizer)):
                continue

            # Double check the missing attendee situation in case we missed it during the original query
            if uid not in self.organized_byuid:
                # Try to reload the organizer info data
                rows = yield self.getAllResourceInfoWithUID(uid)
                yield self.buildResourceInfo(rows, onlyOrganizer=True)

                # if uid in self.organized_byuid:
                #     print("Reloaded missing organizer data: %s" % (uid,))

            if uid not in self.organized_byuid:

                # Check whether attendee has all instances cancelled
                if self.allCancelled(eachAttendeesOwnStatus):
                    continue

                missing.append((uid, attendee, organizer, resid, att_created, att_modified,))
                self.results.setdefault("Missing Organizer", set()).add((uid, attendee, organizer,))

                # If there is a miss we fix by removing the attendee data
                if self.fix:
                    # This is where we attempt a fix
                    fix_result = (yield self.removeEvent(resid))
                    if fix_result:
                        self.fixOrganizersForAttendeeMissing += 1
                    else:
                        self.fixFailed += 1

            elif attendee not in self.matched_attendee_to_organizer[uid]:

                # Check whether attendee has all instances cancelled
                if self.allCancelled(eachAttendeesOwnStatus):
                    continue

                mismatched.append((uid, attendee, organizer, resid, att_created, att_modified,))
                self.results.setdefault("Mismatch Organizer", set()).add((uid, attendee, organizer,))

                # If there is a mismatch we fix by re-inviting the attendee
                if self.fix:
                    fix_result = (yield self.fixByReinvitingAttendee(self.organized_byuid[uid][1], resid, attendee))
                    if fix_result:
                        self.fixOrganizersForAttendeeMismatch += 1
                    else:
                        self.fixFailed += 1

        yield self.txn.commit()
        self.txn = None
        if self.options["verbose"]:
            self.output.write("\r".ljust(80) + "\n")

        # Print table of results
        table = tables.Table()
        table.addHeader(("Organizer", "Attendee", "UID", "Attendee RID", "Created", "Modified",))
        missing.sort()
        unique_set = set()
        for item in missing:
            uid, attendee, organizer, resid, created, modified = item
            unique_set.add(uid)
            if organizer:
                organizerRecord = yield self.directoryService().recordWithUID(organizer)
                organizer = "%s/%s (%s)" % (organizerRecord.recordType if organizerRecord else "-", organizerRecord.shortNames[0] if organizerRecord else "-", organizer,)
            attendeeRecord = yield self.directoryService().recordWithUID(attendee)
            table.addRow((
                organizer,
                "%s/%s (%s)" % (attendeeRecord.recordType if attendeeRecord else "-", attendeeRecord.shortNames[0] if attendeeRecord else "-", attendee,),
                uid,
                resid,
                created,
                "" if modified == created else modified,
            ))

        self.output.write("\n")
        self.output.write("Attendee events missing in Organizer's calendar (total=%d, unique=%d):\n" % (len(missing), len(unique_set),))
        table.printTable(os=self.output)
        self.addToSummary("Attendee events missing in Organizer's calendar", len(missing), self.total)
        self.totalErrors += len(missing)

        # Print table of results
        table = tables.Table()
        table.addHeader(("Organizer", "Attendee", "UID", "Organizer RID", "Created", "Modified", "Attendee RID", "Created", "Modified",))
        mismatched.sort()
        for item in mismatched:
            uid, attendee, organizer, resid, att_created, att_modified = item
            if organizer:
                organizerRecord = yield self.directoryService().recordWithUID(organizer)
                organizer = "%s/%s (%s)" % (organizerRecord.recordType if organizerRecord else "-", organizerRecord.shortNames[0] if organizerRecord else "-", organizer,)
            attendeeRecord = yield self.directoryService().recordWithUID(attendee)
            table.addRow((
                organizer,
                "%s/%s (%s)" % (attendeeRecord.recordType if attendeeRecord else "-", attendeeRecord.shortNames[0] if attendeeRecord else "-", attendee,),
                uid,
                self.organized_byuid[uid][1],
                self.organized_byuid[uid][5],
                self.organized_byuid[uid][6],
                resid,
                att_created,
                "" if att_modified == att_created else att_modified,
            ))

        self.output.write("\n")
        self.logResult("Attendee events mismatched in Organizer's calendar", len(mismatched), self.total)
        table.printTable(os=self.output)
        self.totalErrors += len(mismatched)

    @inlineCallbacks
    def fixByReinvitingAttendee(self, orgresid, attresid, attendee):
        """
        Fix a mismatch/missing error by having the organizer send a REQUEST for the entire event to the attendee
        to trigger implicit scheduling to resync the attendee event.

        We do not have implicit apis in the store, but really want to use store-only apis here to avoid having to
        create "fake" HTTP requests and manipulate HTTP resources.
        So what we will do is emulate implicit behavior by copying the organizer resource to the attendee (filtering
        it for the attendee's view of the event) and deposit an inbox item for the same event. Right now that will
        wipe out any per-attendee data - notably alarms.
        """

        try:
            cuaddr = "urn:x-uid:%s" % attendee

            # Get the organizer's calendar data
            calendar = (yield self.getCalendar(orgresid))
            calendar = Component(None, pycalendar=calendar)

            # Generate an iTip message for the entire event filtered for the attendee's view
            itipmsg = iTipGenerator.generateAttendeeRequest(calendar, (cuaddr,), None)

            # Handle the case where the attendee is not actually in the organizer event at all by
            # removing the attendee event instead of re-inviting
            if itipmsg.resourceUID() is None:
                yield self.removeEvent(attresid)
                returnValue(True)

            # Convert iTip message into actual calendar data - just remove METHOD
            attendee_calendar = itipmsg.duplicate()
            attendee_calendar.removeProperty(attendee_calendar.getProperty("METHOD"))

            # Adjust TRANSP to match PARTSTAT
            self.setTransparencyForAttendee(attendee_calendar, cuaddr)

            # Get attendee home store object
            home = (yield self.txn.calendarHomeWithUID(attendee))
            if home is None:
                raise ValueError("Cannot find home")
            inbox = (yield home.calendarWithName("inbox"))
            if inbox is None:
                raise ValueError("Cannot find inbox")

            details = {}

            # Replace existing resource data, or create a new one
            if attresid:
                # TODO: transfer over per-attendee data - valarms
                _ignore_homeID, calendarID = yield self.getAllResourceInfoForResourceID(attresid)
                calendar = yield home.childWithID(calendarID)
                calendarObj = yield calendar.objectResourceWithID(attresid)
                calendarObj.scheduleTag = str(uuid4())
                yield calendarObj._setComponentInternal(attendee_calendar, internal_state=ComponentUpdateState.RAW)
                self.results.setdefault("Fix change event", set()).add((home.name(), calendar.name(), attendee_calendar.resourceUID(),))

                details["path"] = "/calendars/__uids__/%s/%s/%s" % (home.name(), calendar.name(), calendarObj.name(),)
                details["rid"] = attresid
            else:
                # Find default calendar for VEVENTs
                defaultCalendar = (yield self.defaultCalendarForAttendee(home))
                if defaultCalendar is None:
                    raise ValueError("Cannot find suitable default calendar")
                new_name = str(uuid4()) + ".ics"
                calendarObj = (yield defaultCalendar._createCalendarObjectWithNameInternal(new_name, attendee_calendar, internal_state=ComponentUpdateState.RAW, options=self.metadata))
                self.results.setdefault("Fix add event", set()).add((home.name(), defaultCalendar.name(), attendee_calendar.resourceUID(),))

                details["path"] = "/calendars/__uids__/%s/%s/%s" % (home.name(), defaultCalendar.name(), new_name,)
                details["rid"] = calendarObj._resourceID

            details["uid"] = attendee_calendar.resourceUID()
            instances = attendee_calendar.expandTimeRanges(self.end)
            for key in instances:
                instance = instances[key]
                if instance.start > self.now:
                    break
            details["start"] = instance.start.adjustTimezone(self.tzid)
            details["title"] = instance.component.propertyValue("SUMMARY")

            # Write new itip message to attendee inbox
            yield inbox.createCalendarObjectWithName(str(uuid4()) + ".ics", itipmsg, options=self.metadata_inbox)
            self.results.setdefault("Fix add inbox", set()).add((home.name(), itipmsg.resourceUID(),))

            yield self.txn.commit()
            self.txn = self.store.newTransaction()

            # Need to know whether the attendee is a location or resource with auto-accept set
            record = yield self.directoryService().recordWithUID(attendee)
            autoScheduleMode = getattr(record, "autoScheduleMode", None)
            if autoScheduleMode not in (None, AutoScheduleMode.none):
                # Log details about the event so we can have a human manually process
                self.fixedAutoAccepts.append(details)

            returnValue(True)

        except Exception, e:
            print("Failed to fix resource: %d for attendee: %s\n%s" % (orgresid, attendee, e,))
            returnValue(False)

    @inlineCallbacks
    def defaultCalendarForAttendee(self, home):
        # Check for property
        calendar = (yield home.defaultCalendar("VEVENT"))
        returnValue(calendar)

    def printAutoAccepts(self):
        # Print summary of results
        table = tables.Table()
        table.addHeader(("Path", "RID", "UID", "Start Time", "Title"))
        for item in sorted(self.fixedAutoAccepts, key=lambda x: x["path"]):
            table.addRow((
                item["path"],
                item["rid"],
                item["uid"],
                item["start"],
                item["title"],
            ))

        self.output.write("\n")
        self.output.write("Auto-Accept Fixes:\n")
        table.printTable(os=self.output)

    def masterComponent(self, calendar):
        """
        Return the master iCal component in this calendar.

        @return: the L{Component} for the master component, or C{None} if there isn't one.
        """
        for component in calendar.getComponents(definitions.cICalComponent_VEVENT):
            if not component.hasProperty("RECURRENCE-ID"):
                return component
        return None

    def buildAttendeeStates(self, calendar, start, end, attendee_only=None):
        # Expand events into instances in the start/end range
        results = []
        calendar.getVEvents(
            Period(
                start=start,
                end=end,
            ),
            results
        )

        # Need to do iCal fake master fixup
        overrides = len(calendar.getComponents(definitions.cICalComponent_VEVENT)) > 1

        # Create map of each attendee's instances with the instance id (start time) and attendee part-stat
        attendees = {}
        for item in results:

            # Fake master fixup
            if overrides:
                if not item.getOwner().isRecurrenceInstance():
                    if item.getOwner().getRecurrenceSet() is None or not item.getOwner().getRecurrenceSet().hasRecurrence():
                        continue

            # Get Status - ignore cancelled events
            status = item.getOwner().loadValueString(definitions.cICalProperty_STATUS)
            cancelled = status == definitions.cICalProperty_STATUS_CANCELLED

            # Get instance start
            item.getInstanceStart().adjustToUTC()
            instance_id = item.getInstanceStart().getText()

            props = item.getOwner().getProperties().get(definitions.cICalProperty_ATTENDEE, [])
            for prop in props:
                caladdr = prop.getCalAddressValue().getValue()
                if caladdr.startswith("urn:x-uid:"):
                    caladdr = caladdr[10:]
                else:
                    continue
                if attendee_only is not None and attendee_only != caladdr:
                    continue
                if cancelled:
                    partstat = "CANCELLED"
                else:
                    if not prop.hasParameter(definitions.cICalParameter_PARTSTAT):
                        partstat = definitions.cICalParameter_PARTSTAT_NEEDSACTION
                    else:
                        partstat = prop.getParameterValue(definitions.cICalParameter_PARTSTAT)

                attendees.setdefault(caladdr, set()).add((instance_id, partstat))

        return attendees

    def allCancelled(self, attendeesStatus):
        # Check whether attendees have all instances cancelled
        all_cancelled = True
        for _ignore_guid, states in attendeesStatus.iteritems():
            for _ignore_instance_id, partstat in states:
                if partstat not in ("CANCELLED", "DECLINED",):
                    all_cancelled = False
                    break
            if not all_cancelled:
                break
        return all_cancelled

    def setTransparencyForAttendee(self, calendar, attendee):
        """
        Set the TRANSP property based on the PARTSTAT value on matching ATTENDEE properties
        in each component.
        """
        for component in calendar.subcomponents(ignore=True):
            prop = component.getAttendeeProperty(attendee)
            addTransp = False
            if prop:
                partstat = prop.parameterValue("PARTSTAT", "NEEDS-ACTION")
                addTransp = partstat in ("NEEDS-ACTION", "DECLINED",)
            component.replaceProperty(Property("TRANSP", "TRANSPARENT" if addTransp else "OPAQUE"))


class DoubleBookingService(CalVerifyService):
    """
    Service which detects double-booked events.
""" def title(self): return "Double Booking Service" @inlineCallbacks def doAction(self): if self.options["fix"]: self.output.write("\nFixing is not supported.\n") returnValue(None) self.output.write("\n---- Scanning calendar data ----\n") self.tzid = Timezone(tzid=self.options["tzid"] if self.options["tzid"] else "America/Los_Angeles") self.now = DateTime.getNowUTC() self.start = DateTime.getToday() self.start.setDateOnly(False) self.start.setTimezone(self.tzid) self.end = self.start.duplicate() self.end.offsetYear(1) self.fix = self.options["fix"] if self.options["verbose"] and self.options["summary"]: ot = time.time() # Check loop over uuid UUIDDetails = collections.namedtuple("UUIDDetails", ("uuid", "rname", "auto", "doubled",)) self.uuid_details = [] if len(self.options["uuid"]) != 36: self.txn = self.store.newTransaction() if self.options["uuid"]: homes = yield self.getMatchingHomeUIDs(self.options["uuid"]) else: homes = yield self.getAllHomeUIDs() yield self.txn.commit() self.txn = None uuids = [] for uuid in sorted(homes): record = yield self.directoryService().recordWithUID(uuid) if record is not None and record.recordType in (CalRecordType.location, CalRecordType.resource): uuids.append(uuid) else: uuids = [self.options["uuid"], ] count = 0 for uuid in uuids: self.results = {} self.summary = [] self.total = 0 count += 1 record = yield self.directoryService().recordWithUID(uuid) if record is None: continue if not record.thisServer() or not record.hasCalendars: continue rname = record.displayName autoScheduleMode = getattr(record, "autoSchedule", AutoScheduleMode.none) if len(uuids) > 1 and not self.options["summary"]: self.output.write("\n\n-----------------------------\n") self.txn = self.store.newTransaction() if self.options["verbose"]: t = time.time() rows = yield self.getTimeRangeInfoWithUUID(uuid, self.start) descriptor = "getTimeRangeInfoWithUUID" yield self.txn.commit() self.txn = None if self.options["verbose"]: if not self.options["summary"]: 
self.output.write("%s time: %.1fs\n" % (descriptor, time.time() - t,)) else: self.output.write("%s (%d/%d)" % (uuid, count, len(uuids),)) self.output.flush() self.total = len(rows) if not self.options["summary"]: self.logResult("UUID to process", uuid) self.logResult("Record name", rname) self.logResult("Auto-schedule-mode", autoScheduleMode.description) self.addSummaryBreak() self.logResult("Number of events to process", self.total) if rows: if not self.options["summary"]: self.addSummaryBreak() doubled = yield self.doubleBookCheck(rows, uuid, self.start) else: doubled = False self.uuid_details.append(UUIDDetails(uuid, rname, autoScheduleMode, doubled)) if not self.options["summary"]: self.printSummary() else: self.output.write(" - %s\n" % ("Double-booked" if doubled else "OK",)) self.output.flush() if self.options["summary"]: table = tables.Table() table.addHeader(("GUID", "Name", "Auto-Schedule", "Double-Booked",)) doubled = 0 for item in sorted(self.uuid_details): if not item.doubled: continue table.addRow(( item.uuid, item.rname, item.autoScheduleMode, item.doubled, )) doubled += 1 table.addFooter(("Total", "", "", "%d of %d" % (doubled, len(self.uuid_details),),)) self.output.write("\n") table.printTable(os=self.output) if self.options["verbose"]: self.output.write("%s time: %.1fs\n" % ("Summary", time.time() - ot,)) @inlineCallbacks def getTimeRangeInfoWithUUID(self, uuid, start): co = schema.CALENDAR_OBJECT cb = schema.CALENDAR_BIND ch = schema.CALENDAR_HOME tr = schema.TIME_RANGE kwds = { "uuid": uuid, "Start": pyCalendarToSQLTimestamp(start), } rows = (yield Select( [co.RESOURCE_ID, ], From=ch.join( cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID)).join( co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN).And( cb.CALENDAR_RESOURCE_NAME != "inbox").And( co.ORGANIZER != "")).join( tr, type="left", on=(co.RESOURCE_ID == tr.CALENDAR_OBJECT_RESOURCE_ID)), Where=(ch.OWNER_UID == 
Parameter("uuid")).And((tr.START_DATE >= Parameter("Start")).Or(co.RECURRANCE_MAX <= Parameter("Start"))), Distinct=True, ).on(self.txn, **kwds)) returnValue(tuple(rows)) @inlineCallbacks def doubleBookCheck(self, rows, uuid, start): """ Check each calendar resource by expanding instances within the next year, and looking for any that overlap with status not CANCELLED and PARTSTAT ACCEPTED. """ if not self.options["summary"]: self.output.write("\n---- Checking instances for double-booking ----\n") self.txn = self.store.newTransaction() if self.options["verbose"]: t = time.time() InstanceDetails = collections.namedtuple("InstanceDetails", ("resid", "uid", "start", "end", "organizer", "summary",)) end = start.duplicate() end.offsetDay(int(self.options["days"])) count = 0 total = len(rows) total_instances = 0 booked_instances = 0 details = [] rjust = 10 tzid = None hasFloating = False for resid in rows: resid = resid[0] caldata = yield self.getCalendar(resid, self.fix) if caldata is None: if self.parseError: returnValue((False, self.parseError)) else: returnValue((True, "Nothing to scan")) cal = Component(None, pycalendar=caldata) cal = PerUserDataFilter(uuid).filter(cal) uid = cal.resourceUID() instances = cal.expandTimeRanges(end, start, ignoreInvalidInstances=False) count += 1 for instance in instances.instances.values(): total_instances += 1 # See if it is CANCELLED or TRANSPARENT if instance.component.propertyValue("STATUS") == "CANCELLED": continue if instance.component.propertyValue("TRANSP") == "TRANSPARENT": continue dtstart = instance.component.propertyValue("DTSTART") if tzid is None and dtstart.getTimezoneID(): tzid = Timezone(tzid=dtstart.getTimezoneID()) hasFloating |= dtstart.isDateOnly() or dtstart.floating() details.append(InstanceDetails(resid, uid, instance.start, instance.end, instance.component.getOrganizer(), instance.component.propertyValue("SUMMARY"))) booked_instances += 1 if self.options["verbose"] and not self.options["summary"]: if count == 
1: self.output.write("Instances".rjust(rjust) + "Current".rjust(rjust) + "Total".rjust(rjust) + "Complete".rjust(rjust) + "\n") if divmod(count, 100)[1] == 0: self.output.write(( "\r" + ("%s" % total_instances).rjust(rjust) + ("%s" % count).rjust(rjust) + ("%s" % total).rjust(rjust) + ("%d%%" % safePercent(count, total)).rjust(rjust) ).ljust(80)) self.output.flush() # To avoid holding locks on all the rows scanned, commit every 100 resources if divmod(count, 100)[1] == 0: yield self.txn.commit() self.txn = self.store.newTransaction() yield self.txn.commit() self.txn = None if self.options["verbose"] and not self.options["summary"]: self.output.write(( "\r" + ("%s" % total_instances).rjust(rjust) + ("%s" % count).rjust(rjust) + ("%s" % total).rjust(rjust) + ("%d%%" % safePercent(count, total)).rjust(rjust) ).ljust(80) + "\n") if not self.options["summary"]: self.logResult("Number of instances in time-range", total_instances) self.logResult("Number of booked instances", booked_instances) # Adjust floating and sort if hasFloating and tzid is not None: utc = Timezone.UTCTimezone for item in details: if item.start.floating(): item.start.setTimezone(tzid) item.start.adjustTimezone(utc) if item.end.floating(): item.end.setTimezone(tzid) item.end.adjustTimezone(utc) details.sort(key=lambda x: x.start) # Now look for double-bookings DoubleBookedDetails = collections.namedtuple("DoubleBookedDetails", ("resid1", "uid1", "resid2", "uid2", "start",)) double_booked = [] current = details[0] if details else None for next in details[1:]: if current.end > next.start and current.resid != next.resid and not (current.organizer == next.organizer and current.summary == next.summary): dt = next.start.duplicate() dt.adjustTimezone(self.tzid) double_booked.append(DoubleBookedDetails(current.resid, current.uid, next.resid, next.uid, dt,)) current = next # Print table of results if double_booked and not self.options["summary"]: table = tables.Table() table.addHeader(("RID #1", "UID #1", "RID 
#2", "UID #2", "Start",)) previous1 = None previous2 = None unique_events = 0 for item in sorted(double_booked): if previous1 != item.resid1: unique_events += 1 resid1 = item.resid1 if previous1 != item.resid1 else "." uid1 = item.uid1 if previous1 != item.resid1 else "." resid2 = item.resid2 if previous2 != item.resid2 else "." uid2 = item.uid2 if previous2 != item.resid2 else "." table.addRow(( resid1, uid1, resid2, uid2, item.start, )) previous1 = item.resid1 previous2 = item.resid2 self.output.write("\n") self.logResult("Number of double-bookings", len(double_booked)) self.logResult("Number of unique double-bookings", unique_events) table.printTable(os=self.output) self.results["Double-bookings"] = double_booked if self.options["verbose"] and not self.options["summary"]: diff_time = time.time() - t self.output.write("Time: %.2f s Average: %.1f ms/resource\n" % ( diff_time, safePercent(diff_time, total, 1000.0), )) returnValue(len(double_booked) != 0) class DarkPurgeService(CalVerifyService): """ Service which detects room/resource events that have an invalid organizer. 
""" def title(self): return "Dark Purge Service" @inlineCallbacks def doAction(self): if not self.options["no-organizer"] and not self.options["invalid-organizer"] and not self.options["disabled-organizer"]: self.options["invalid-organizer"] = self.options["disabled-organizer"] = True self.output.write("\n---- Scanning calendar data ----\n") self.tzid = Timezone(tzid=self.options["tzid"] if self.options["tzid"] else "America/Los_Angeles") self.now = DateTime.getNowUTC() self.start = self.options["start"] if "start" in self.options else DateTime.getToday() self.start.setDateOnly(False) self.start.setTimezone(self.tzid) self.fix = self.options["fix"] if self.options["verbose"] and self.options["summary"]: ot = time.time() # Check loop over uuid UUIDDetails = collections.namedtuple("UUIDDetails", ("uuid", "rname", "purged",)) self.uuid_details = [] if len(self.options["uuid"]) != 36: self.txn = self.store.newTransaction() if self.options["uuid"]: homes = yield self.getMatchingHomeUIDs(self.options["uuid"]) else: homes = yield self.getAllHomeUIDs() yield self.txn.commit() self.txn = None uuids = [] if self.options["verbose"]: self.output.write("%d uuids to check\n" % (len(homes,))) for uuid in sorted(homes): record = yield self.directoryService().recordWithUID(uuid) if record is not None and record.recordType in (CalRecordType.location, CalRecordType.resource): uuids.append(uuid) else: uuids = [self.options["uuid"], ] if self.options["verbose"]: self.output.write("%d uuids to scan\n" % (len(uuids,))) count = 0 for uuid in uuids: self.results = {} self.summary = [] self.total = 0 count += 1 record = yield self.directoryService().recordWithUID(uuid) if record is None: continue if not record.thisServer() or not record.hasCalendars: continue rname = record.displayName if len(uuids) > 1 and not self.options["summary"]: self.output.write("\n\n-----------------------------\n") self.txn = self.store.newTransaction() if self.options["verbose"]: t = time.time() rows = yield 
self.getAllResourceInfoTimeRangeWithUUID(self.start, uuid) descriptor = "getAllResourceInfoTimeRangeWithUUID" yield self.txn.commit() self.txn = None if self.options["verbose"]: if not self.options["summary"]: self.output.write("%s time: %.1fs\n" % (descriptor, time.time() - t,)) else: self.output.write("%s (%d/%d)" % (uuid, count, len(uuids),)) self.output.flush() self.total = len(rows) if not self.options["summary"]: self.logResult("UUID to process", uuid) self.logResult("Record name", rname) self.addSummaryBreak() self.logResult("Number of events to process", self.total) if rows: if not self.options["summary"]: self.addSummaryBreak() purged = yield self.darkPurge(rows, uuid) else: purged = False self.uuid_details.append(UUIDDetails(uuid, rname, purged)) if not self.options["summary"]: self.printSummary() else: self.output.write(" - %s\n" % ("Dark Events" if purged else "OK",)) self.output.flush() if count == 0: self.output.write("Nothing to scan\n") if self.options["summary"]: table = tables.Table() table.addHeader(("GUID", "Name", "RID", "UID", "Organizer",)) purged = 0 for item in sorted(self.uuid_details): if not item.purged: continue uuid = item.uuid rname = item.rname for detail in item.purged: table.addRow(( uuid, rname, detail.resid, detail.uid, detail.organizer, )) uuid = "" rname = "" purged += 1 table.addFooter(("Total", "%d" % (purged,), "", "", "",)) self.output.write("\n") table.printTable(os=self.output) if self.options["verbose"]: self.output.write("%s time: %.1fs\n" % ("Summary", time.time() - ot,)) @inlineCallbacks def darkPurge(self, rows, uuid): """ Check each calendar resource by looking at any ORGANIZER property value and verifying it is valid.
""" if not self.options["summary"]: self.output.write("\n---- Checking for dark events ----\n") self.txn = self.store.newTransaction() if self.options["verbose"]: t = time.time() Details = collections.namedtuple("Details", ("resid", "uid", "organizer",)) count = 0 total = len(rows) details = [] fixed = 0 rjust = 10 for resid in rows: resid = resid[1] caldata = yield self.getCalendar(resid, self.fix) if caldata is None: if self.parseError: returnValue((False, self.parseError)) else: returnValue((True, "Nothing to scan")) cal = Component(None, pycalendar=caldata) uid = cal.resourceUID() fail = False organizer = cal.getOrganizer() if organizer is None: if self.options["no-organizer"]: fail = True else: principal = yield self.directoryService().recordWithCalendarUserAddress(organizer) # FIXME: Why the mix of records and principals here? if principal is None and organizer.startswith("urn:x-uid:"): principal = yield self.directoryService().principalCollection.principalForUID(organizer[10:]) if principal is None: if self.options["invalid-organizer"]: fail = True elif not principal.calendarsEnabled(): if self.options["disabled-organizer"]: fail = True if fail: details.append(Details(resid, uid, organizer,)) if self.fix: yield self.removeEvent(resid) fixed += 1 if self.options["verbose"] and not self.options["summary"]: if count == 1: self.output.write("Current".rjust(rjust) + "Total".rjust(rjust) + "Complete".rjust(rjust) + "\n") if divmod(count, 100)[1] == 0: self.output.write(( "\r" + ("%s" % count).rjust(rjust) + ("%s" % total).rjust(rjust) + ("%d%%" % safePercent(count, total)).rjust(rjust) ).ljust(80)) self.output.flush() # To avoid holding locks on all the rows scanned, commit every 100 resources if divmod(count, 100)[1] == 0: yield self.txn.commit() self.txn = self.store.newTransaction() yield self.txn.commit() self.txn = None if self.options["verbose"] and not self.options["summary"]: self.output.write(( "\r" + ("%s" % count).rjust(rjust) + ("%s" % 
total).rjust(rjust) + ("%d%%" % safePercent(count, total)).rjust(rjust) ).ljust(80) + "\n") # Print table of results if not self.options["summary"]: self.logResult("Number of dark events", len(details)) self.results["Dark Events"] = details if self.fix: self.results["Fix dark events"] = fixed if self.options["verbose"] and not self.options["summary"]: diff_time = time.time() - t self.output.write("Time: %.2f s Average: %.1f ms/resource\n" % ( diff_time, safePercent(diff_time, total, 1000.0), )) returnValue(details) class MissingLocationService(CalVerifyService): """ Service which detects room/resource events that have an ATTENDEE;CUTYPE=ROOM property but no Location. """ def title(self): return "Missing Location Service" @inlineCallbacks def doAction(self): self.output.write("\n---- Scanning calendar data ----\n") self.tzid = Timezone(tzid=self.options["tzid"] if self.options["tzid"] else "America/Los_Angeles") self.now = DateTime.getNowUTC() self.start = self.options["start"] if "start" in self.options else DateTime.getToday() self.start.setDateOnly(False) self.start.setTimezone(self.tzid) self.fix = self.options["fix"] if self.options["verbose"] and self.options["summary"]: ot = time.time() # Check loop over uuid UUIDDetails = collections.namedtuple("UUIDDetails", ("uuid", "rname", "missing",)) self.uuid_details = [] if len(self.options["uuid"]) != 36: self.txn = self.store.newTransaction() if self.options["uuid"]: homes = yield self.getMatchingHomeUIDs(self.options["uuid"]) else: homes = yield self.getAllHomeUIDs() yield self.txn.commit() self.txn = None uuids = [] if self.options["verbose"]: self.output.write("%d uuids to check\n" % (len(homes,))) for uuid in sorted(homes): record = yield self.directoryService().recordWithUID(uuid) if record is not None and record.recordType == CalRecordType.location: uuids.append(uuid) else: uuids = [self.options["uuid"], ] if self.options["verbose"]: self.output.write("%d uuids to scan\n" % (len(uuids,))) count = 0 for uuid 
in uuids: self.results = {} self.summary = [] self.total = 0 count += 1 record = yield self.directoryService().recordWithUID(uuid) if record is None: continue if not record.thisServer() or not record.hasCalendars: continue rname = record.displayName if len(uuids) > 1 and not self.options["summary"]: self.output.write("\n\n-----------------------------\n") self.txn = self.store.newTransaction() if self.options["verbose"]: t = time.time() rows = yield self.getAllResourceInfoTimeRangeWithUUID(self.start, uuid) descriptor = "getAllResourceInfoTimeRangeWithUUID" yield self.txn.commit() self.txn = None if self.options["verbose"]: if not self.options["summary"]: self.output.write("%s time: %.1fs\n" % (descriptor, time.time() - t,)) else: self.output.write("%s (%d/%d)" % (uuid, count, len(uuids),)) self.output.flush() self.total = len(rows) if not self.options["summary"]: self.logResult("UUID to process", uuid) self.logResult("Record name", rname) self.addSummaryBreak() self.logResult("Number of events to process", self.total) if rows: if not self.options["summary"]: self.addSummaryBreak() missing = yield self.missingLocation(rows, uuid, rname) else: missing = False self.uuid_details.append(UUIDDetails(uuid, rname, missing)) if not self.options["summary"]: self.printSummary() else: self.output.write(" - %s\n" % ("Bad Events" if missing else "OK",)) self.output.flush() if count == 0: self.output.write("Nothing to scan\n") if self.options["summary"]: table = tables.Table() table.addHeader(("GUID", "Name", "RID", "UID",)) missing = 0 for item in sorted(self.uuid_details): if not item.missing: continue uuid = item.uuid rname = item.rname for detail in item.missing: table.addRow(( uuid, rname, detail.resid, detail.uid, )) uuid = "" rname = "" missing += 1 table.addFooter(("Total", "%d" % (missing,), "", "", "",)) self.output.write("\n") table.printTable(os=self.output) if self.options["verbose"]: self.output.write("%s time: %.1fs\n" % ("Summary", time.time() - ot,)) 
@inlineCallbacks def missingLocation(self, rows, uuid, rname): """ Check each calendar resource by looking at any ORGANIZER property value and verifying it is valid. """ if not self.options["summary"]: self.output.write("\n---- Checking for missing location events ----\n") self.txn = self.store.newTransaction() if self.options["verbose"]: t = time.time() Details = collections.namedtuple("Details", ("resid", "uid",)) count = 0 total = len(rows) details = [] fixed = 0 rjust = 10 cuaddr = "urn:x-uid:{}".format(uuid) for resid in rows: resid = resid[1] caldata = yield self.getCalendar(resid, self.fix) if caldata is None: if self.parseError: returnValue((False, self.parseError)) else: returnValue((True, "Nothing to scan")) cal = Component(None, pycalendar=caldata) uid = cal.resourceUID() fail = False if cal.getOrganizer() is not None: for comp in cal.subcomponents(): if comp.name() != "VEVENT": continue # Look for matching location property location = comp.propertyValue("LOCATION") if location is not None: items = location.split(";") has_location = rname in items else: has_location = False # Look for matching attendee property has_attendee = cuaddr in comp.getAttendees() if not has_location and has_attendee: fail = True break if fail: details.append(Details(resid, uid,)) if self.fix: yield self.fixCalendarData(cal, rname, cuaddr, resid) fixed += 1 if self.options["verbose"] and not self.options["summary"]: if count == 1: self.output.write("Current".rjust(rjust) + "Total".rjust(rjust) + "Complete".rjust(rjust) + "\n") if divmod(count, 100)[1] == 0: self.output.write(( "\r" + ("%s" % count).rjust(rjust) + ("%s" % total).rjust(rjust) + ("%d%%" % safePercent(count, total)).rjust(rjust) ).ljust(80)) self.output.flush() # To avoid holding locks on all the rows scanned, commit every 100 resources if divmod(count, 100)[1] == 0: yield self.txn.commit() self.txn = self.store.newTransaction() yield self.txn.commit() self.txn = None if self.options["verbose"] and not 
self.options["summary"]: self.output.write(( "\r" + ("%s" % count).rjust(rjust) + ("%s" % total).rjust(rjust) + ("%d%%" % safePercent(count, total)).rjust(rjust) ).ljust(80) + "\n") # Print table of results if not self.options["summary"]: self.logResult("Number of bad events", len(details)) self.results["Bad Events"] = details if self.fix: self.results["Fix bad events"] = fixed if self.options["verbose"] and not self.options["summary"]: diff_time = time.time() - t self.output.write("Time: %.2f s Average: %.1f ms/resource\n" % ( diff_time, safePercent(diff_time, total, 1000.0), )) returnValue(details) @inlineCallbacks def fixCalendarData(self, cal, rname, cuaddr, location_resid): """ Fix problems in calendar data using store APIs. """ result = True message = "" # Extract organizer (strip off urn:x-uid:) and UID organizer = cal.getOrganizer()[10:] uid = cal.resourceUID() _ignore_calendar, resid, _ignore_created, _ignore_modified = yield self.getCalendarForOwnerByUID(organizer, uid) if resid is None: # No organizer copy exists - it is safe to delete the location resource yield self.removeEvent(location_resid) returnValue((result, message,)) # Get the organizer's calendar object and data homeID, calendarID = yield self.getAllResourceInfoForResourceID(resid) home = yield self.txn.calendarHomeWithResourceID(homeID) calendar = yield home.childWithID(calendarID) calendarObj = yield calendar.objectResourceWithID(resid) try: component = yield calendarObj.componentForUser() except InternalDataStoreError: returnValue((False, "Failed parse: ")) # Fix each component (need to dup component when modifying) component = component.duplicate() for comp in component.subcomponents(): if comp.name() != "VEVENT": continue location = comp.propertyValue("LOCATION") if location is None: # Just add the location name back in comp.addProperty(Property("LOCATION", rname)) else: # Remove the matching ATTENDEE property attendee = comp.getAttendeeProperty((cuaddr,)) if attendee is not None: 
comp.removeProperty(attendee) # Write out fix, commit and get a new transaction try: yield calendarObj.setComponent(component) except Exception, e: print(e, component) print(traceback.print_exc()) result = False message = "Exception fix: " yield self.txn.commit() self.txn = self.store.newTransaction() returnValue((result, message,)) class EventSplitService(CalVerifyService): """ Service which splits a recurring event at a specific date-time value. """ def title(self): return "Event Split Service" @inlineCallbacks def doAction(self): """ Split a resource using either its path or resource id. """ self.txn = self.store.newTransaction() path = self.options["path"] if path.startswith("/calendars/__uids__/"): try: pathbits = path.split("/") except TypeError: printusage("Not a valid calendar object resource path: %s" % (path,)) if len(pathbits) != 6: printusage("Not a valid calendar object resource path: %s" % (path,)) homeName = pathbits[3] calendarName = pathbits[4] resourceName = pathbits[5] resid = yield self.getResourceID(homeName, calendarName, resourceName) if resid is None: yield self.txn.commit() self.txn = None self.output.write("\n") self.output.write("Path does not exist. 
Nothing split.\n") returnValue(None) resid = int(resid) else: try: resid = int(path) except ValueError: printusage("path argument must be a calendar object path or an SQL resource-id") calendarObj = yield CalendarStoreFeatures(self.txn._store).calendarObjectWithID(self.txn, resid) ical = yield calendarObj.component() # Must be the ORGANIZER's copy organizer = ical.getOrganizer() if organizer is None: printusage("Calendar object has no ORGANIZER property - cannot split") # Only allow organizers to split scheduler = ImplicitScheduler() is_attendee = (yield scheduler.testAttendeeEvent(calendarObj.calendar(), calendarObj, ical,)) if is_attendee: printusage("Calendar object is not owned by the ORGANIZER - cannot split") if self.options["summary"]: result = self.doSummary(ical) else: result = yield self.doSplit(resid, calendarObj, ical) returnValue(result) def doSummary(self, ical): """ Print a summary of the recurrence instances of the specified event. @param ical: calendar to process @type ical: L{Component} """ self.output.write("\n---- Calendar resource instances ----\n") # Find the instance RECURRENCE-ID where a split is going to happen now = DateTime.getNowUTC() now.offsetDay(1) instances = ical.cacheExpandedTimeRanges(now) instances = sorted(instances.instances.values(), key=lambda x: x.start) for instance in instances: self.output.write(instance.rid.getText() + (" *\n" if instance.overridden else "\n")) @inlineCallbacks def doSplit(self, resid, calendarObj, ical): rid = self.options["rid"] try: if rid[-1] != "Z": raise ValueError rid = DateTime.parseText(rid) except ValueError: printusage("rid must be a valid UTC date-time value: 'YYYYMMDDTHHMMSSZ'") self.output.write("\n---- Splitting calendar resource ----\n") # Find actual RECURRENCE-ID of split splitter = iCalSplitter(1024, 14) rid = splitter.whereSplit(ical, break_point=rid, allow_past_the_end=False) if rid is None: printusage("rid is not a valid recurrence instance") self.output.write("\n") 
self.output.write("Actual RECURRENCE-ID: %s.\n" % (rid,)) oldObj = yield calendarObj.split(rid=rid) oldUID = oldObj.uid() oldRelated = (yield oldObj.component()).mainComponent().propertyValue("RELATED-TO") self.output.write("\n") self.output.write("Split Resource: %s at %s, old UID: %s.\n" % (resid, rid, oldUID,)) yield self.txn.commit() self.txn = None returnValue((oldUID, oldRelated,)) class UpgradeDataService(CalVerifyService): """ Service which scans calendar data. """ USE_FAST_MODE = True OLD_CUADDR_PREFIX = "urn:uuid:" NEW_CUADDR_PREFIX = "urn:x-uid:" def title(self): return "Upgrade Data Service" @inlineCallbacks def doAction(self): self.output.write("\n---- Scanning calendar data ----\n") self.now = DateTime.getNowUTC() self.start = DateTime.getToday() self.start.setDateOnly(False) self.end = self.start.duplicate() self.end.offsetYear(1) self.fix = self.options["fix"] self.tzid = Timezone(tzid=self.options["tzid"] if self.options["tzid"] else "America/Los_Angeles") self.txn = self.store.newTransaction() if self.options["verbose"]: t = time.time() descriptor = None if self.options["uuid"]: rows = yield self.getAllResourceInfoWithUUID(self.options["uuid"], inbox=True) descriptor = "getAllResourceInfoWithUUID" elif self.options["uid"]: rows = yield self.getAllResourceInfoWithUID(self.options["uid"], inbox=True) descriptor = "getAllResourceInfoWithUID" else: rows = yield self.getAllResourceInfo(inbox=True) descriptor = "getAllResourceInfo" yield self.txn.commit() self.txn = None if self.options["verbose"]: self.output.write("%s time: %.1fs\n" % (descriptor, time.time() - t,)) self.total = len(rows) self.logResult("Number of events to process", self.total) self.addSummaryBreak() yield self.calendarDataUpgrade(rows) self.printSummary() @inlineCallbacks def calendarDataUpgrade(self, rows): """ Check each calendar resource for valid iCalendar data. 
""" self.output.write("\n---- Upgrading each calendar object resource ----\n") self.txn = self.store.newTransaction() if self.options["verbose"]: t = time.time() results_bad = [] count = 0 total = len(rows) upgradelen = 0 badlen = 0 rjust = 10 for owner, resid, uid, _ignore_calname, _ignore_md5, _ignore_organizer, _ignore_created, _ignore_modified in rows: try: calendarObj = yield CalendarStoreFeatures(self.txn._store).calendarObjectWithID(self.txn, resid) if calendarObj._dataversion < calendarObj._currentDataVersion: upgradelen += 1 if UpgradeDataService.USE_FAST_MODE: # Read data from calendar object and manually upgrade the calendar user # addresses only text = yield calendarObj._text() component = Component.fromString(text) for subcomponent in component.subcomponents(ignore=True): for prop in itertools.chain( subcomponent.properties("ORGANIZER"), subcomponent.properties("ATTENDEE"), ): cuaddr = prop.value() if cuaddr.startswith(UpgradeDataService.OLD_CUADDR_PREFIX): prop.setValue(cuaddr.replace(UpgradeDataService.OLD_CUADDR_PREFIX, UpgradeDataService.NEW_CUADDR_PREFIX)) # Do the update right now (also suppress instance indexing) calendarObj._dataversion = calendarObj._currentDataVersion component.noInstanceIndexing = True yield calendarObj.updateDatabase(component) else: # Read component and force and update if needed yield calendarObj.component(doUpdate=True) result = True except Exception, e: result = False message = "Exception when reading resource" if self.options["verbose"]: print(e) if not result: results_bad.append((owner, uid, resid, message)) badlen += 1 count += 1 if self.options["verbose"]: if count == 1: self.output.write("Upgraded".rjust(rjust) + "Bad".rjust(rjust) + "Current".rjust(rjust) + "Total".rjust(rjust) + "Complete".rjust(rjust) + "\n") if divmod(count, 100)[1] == 0: self.output.write(( "\r" + ("%s" % upgradelen).rjust(rjust) + ("%s" % badlen).rjust(rjust) + ("%s" % count).rjust(rjust) + ("%s" % total).rjust(rjust) + ("%d%%" % 
safePercent(count, total)).rjust(rjust) ).ljust(80)) self.output.flush() # To avoid holding locks on all the rows scanned, commit every 100 resources if divmod(count, 100)[1] == 0: yield self.txn.commit() self.txn = self.store.newTransaction() yield self.txn.commit() self.txn = None if self.options["verbose"]: self.output.write(( "\r" + ("%s" % upgradelen).rjust(rjust) + ("%s" % badlen).rjust(rjust) + ("%s" % count).rjust(rjust) + ("%s" % total).rjust(rjust) + ("%d%%" % safePercent(count, total)).rjust(rjust) ).ljust(80) + "\n") # Print table of results table = tables.Table() table.addHeader(("Owner", "Event UID", "RID", "Problem",)) for item in sorted(results_bad, key=lambda x: (x[0], x[1])): owner, uid, resid, message = item owner_record = yield self.directoryService().recordWithUID(owner) table.addRow(( "%s/%s (%s)" % (owner_record.recordType if owner_record else "-", owner_record.shortNames[0] if owner_record else "-", owner,), uid, resid, message, )) self.output.write("\n") self.logResult("Upgraded iCalendar data", upgradelen, total) self.logResult("Bad iCalendar data", len(results_bad), total) self.results["Upgraded iCalendar data"] = upgradelen self.results["Bad iCalendar data"] = results_bad table.printTable(os=self.output) if self.options["verbose"]: diff_time = time.time() - t self.output.write("Time: %.2f s Average: %.1f ms/resource\n" % ( diff_time, safePercent(diff_time, total, 1000.0), )) def main(argv=sys.argv, stderr=sys.stderr, reactor=None): if reactor is None: from twisted.internet import reactor options = CalVerifyOptions() try: options.parseOptions(argv[1:]) except usage.UsageError, e: printusage(e) try: output = options.openOutput() except IOError, e: stderr.write("Unable to open output file for writing: %s\n" % (e)) sys.exit(1) def makeService(store): from twistedcaldav.config import config config.TransactionTimeoutSeconds = 0 if options["nuke"]: return NukeService(store, options, output, reactor, config) elif options["missing"]: return 
            OrphansService(store, options, output, reactor, config)
        elif options["ical"]:
            return BadDataService(store, options, output, reactor, config)
        elif options["mismatch"]:
            return SchedulingMismatchService(store, options, output, reactor, config)
        elif options["double"]:
            return DoubleBookingService(store, options, output, reactor, config)
        elif options["dark-purge"]:
            return DarkPurgeService(store, options, output, reactor, config)
        elif options["missing-location"]:
            return MissingLocationService(store, options, output, reactor, config)
        elif options["split"]:
            return EventSplitService(store, options, output, reactor, config)
        elif options["upgrade"]:
            return UpgradeDataService(store, options, output, reactor, config)
        else:
            printusage("Invalid operation")
            sys.exit(1)

    utilityMain(options['config'], makeService, reactor)


if __name__ == '__main__':
    main()

calendarserver-9.1+dfsg/calendarserver/tools/calverify_diff.py

#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_calverify -*-
##
# Copyright (c) 2012-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

import getopt
import sys
import os


def analyze(fname):
    with open(os.path.expanduser(fname)) as f:
        lines = f.read().splitlines()
    total = len(lines)
    ctr = 0

    results = {
        "table1": [],
        "table2": [],
        "table3": [],
        "table4": [],
    }

    def _tableParser(ctr, tableName, parseFn):
        ctr += 4
        while ctr < total:
            line = lines[ctr]
            if line.startswith("+------"):
                break
            else:
                results[tableName].append(parseFn(line))
            ctr += 1
        return ctr

    while ctr < total:
        line = lines[ctr]
        if line.startswith("Events missing from Attendee's calendars"):
            ctr = _tableParser(ctr, "table1", parseTableMissing)
        elif line.startswith("Events mismatched between Organizer's and Attendee's calendars"):
            ctr = _tableParser(ctr, "table2", parseTableMismatch)
        elif line.startswith("Attendee events missing in Organizer's calendar"):
            ctr = _tableParser(ctr, "table3", parseTableMissing)
        elif line.startswith("Attendee events mismatched in Organizer's calendar"):
            ctr = _tableParser(ctr, "table4", parseTableMismatch)
        ctr += 1

    return results


def parseTableMissing(line):
    splits = line.split("|")
    organizer = splits[1].strip()
    attendee = splits[2].strip()
    uid = splits[3].strip()
    resid = splits[4].strip()
    return (organizer, attendee, uid, resid,)


def parseTableMismatch(line):
    splits = line.split("|")
    organizer = splits[1].strip()
    attendee = splits[2].strip()
    uid = splits[3].strip()
    organizer_resid = splits[4].strip()
    attendee_resid = splits[7].strip()
    return (organizer, attendee, uid, organizer_resid, attendee_resid,)


def diff(results1, results2):
    print("\n\nEvents missing from Attendee's calendars")
    diffSets(results1["table1"], results2["table1"])

    print("\n\nEvents mismatched between Organizer's and Attendee's calendars")
    diffSets(results1["table2"], results2["table2"])

    print("\n\nAttendee events missing in Organizer's calendar")
    diffSets(results1["table3"], results2["table3"])

    print("\n\nAttendee events mismatched in Organizer's calendar")
    diffSets(results1["table4"], results2["table4"])


def diffSets(results1, results2):
    s1 = set(results1)
    s2 = set(results2)

    d = s1 - s2
    print("\nIn first, not in second: (%d)" % (len(d),))
    for i in sorted(d):
        print(i)

    d = s2 - s1
    print("\nIn second, not in first: (%d)" % (len(d),))
    for i in sorted(d):
        print(i)


def usage(error_msg=None):
    if error_msg:
        print(error_msg)

    print("""Usage: calverify_diff [options] FILE1 FILE2
Options:
    -h    Print this help and exit

Arguments:
    FILE1    File containing calverify output to analyze
    FILE2    File containing calverify output to analyze

Description:
    This utility will analyze the output of two calverify runs and
    show what is different between the two.
""")

    if error_msg:
        raise ValueError(error_msg)
    else:
        sys.exit(0)


if __name__ == '__main__':

    options, args = getopt.getopt(sys.argv[1:], "h", [])

    for option, value in options:
        if option == "-h":
            usage()
        else:
            usage("Unrecognized option: %s" % (option,))

    if len(args) != 2:
        usage("Must have two arguments")
    else:
        fname1 = args[0]
        fname2 = args[1]

    print("*** CalVerify diff from %s to %s" % (
        os.path.basename(fname1), os.path.basename(fname2),
    ))
    results1 = analyze(fname1)
    results2 = analyze(fname2)
    diff(results1, results2)

calendarserver-9.1+dfsg/calendarserver/tools/changeip_calendar.py

#!/usr/bin/env python
#
# changeip script for calendar server
#
# Copyright (c) 2005-2017 Apple Inc. All Rights Reserved.
#
# IMPORTANT NOTE: This file is licensed only for use on Apple-labeled
# computers and is subject to the terms and conditions of the Apple
# Software License Agreement accompanying the package this file is a
# part of. You may not port this file to another platform without
# Apple's written consent.
from __future__ import print_function from __future__ import with_statement import datetime from getopt import getopt, GetoptError import os import plistlib import subprocess import sys SERVER_APP_ROOT = "/Applications/Server.app/Contents/ServerRoot" CALENDARSERVER_CONFIG = "%s/usr/sbin/calendarserver_config" % (SERVER_APP_ROOT,) def serverRootLocation(): """ Return the ServerRoot value from the servermgr_calendar.plist. If not present, return the default. """ plist = "/Library/Server/Preferences/Calendar.plist" serverRoot = u"/Library/Server/Calendar and Contacts" if os.path.exists(plist): serverRoot = plistlib.readPlist(plist).get("ServerRoot", serverRoot) return serverRoot def usage(): name = os.path.basename(sys.argv[0]) print("Usage: %s [-hv] old-ip new-ip [old-hostname new-hostname]" % (name,)) print(" Options:") print(" -h - print this message and exit") print(" -f - path to config file") print(" Arguments:") print(" old-ip - current IPv4 address of the server") print(" new-ip - new IPv4 address of the server") print(" old-hostname - current FQDN for the server") print(" new-hostname - new FQDN for the server") def log(msg): serverRoot = serverRootLocation() logDir = os.path.join(serverRoot, "Logs") logFile = os.path.join(logDir, "changeip.log") try: timestamp = datetime.datetime.now().strftime("%b %d %H:%M:%S") msg = "changeip_calendar: %s %s" % (timestamp, msg) with open(logFile, 'a') as output: output.write("%s\n" % (msg,)) except IOError: # Could not write to log pass def main(): name = os.path.basename(sys.argv[0]) # Since the serveradmin command must be run as root, so must this script if os.getuid() != 0: print("%s must be run as root" % (name,)) sys.exit(1) try: (optargs, args) = getopt( sys.argv[1:], "hf:", [ "help", "config=", ] ) except GetoptError: usage() sys.exit(1) configFile = None for opt, arg in optargs: if opt in ("-h", "--help"): usage() sys.exit(1) elif opt in ("-f", "--config"): configFile = arg oldIP, newIP = args[0:2] try: 
        oldHostname, newHostname = args[2:4]
    except ValueError:
        oldHostname = newHostname = None

    log("args: {}".format(args))

    config = readConfig(configFile=configFile)
    updateConfig(
        config,
        oldIP, newIP,
        oldHostname, newHostname
    )
    writeConfig(config)


def sendCommand(commandDict, configFile=None):
    args = [CALENDARSERVER_CONFIG]
    if configFile is not None:
        args.append("-f {}".format(configFile))
    child = subprocess.Popen(
        args=args,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    commandString = plistlib.writePlistToString(commandDict)
    log("Sending to calendarserver_config: {}".format(commandString))
    output, error = child.communicate(input=commandString)
    log("Output from calendarserver_config: {}".format(output))
    if child.returncode:
        log(
            "Error from calendarserver_config: {}, {}".format(
                child.returncode, error
            )
        )
        return None
    else:
        return plistlib.readPlistFromString(output)["result"]


def readConfig(configFile=None):
    """
    Ask calendarserver_config for the current configuration
    """
    command = {
        "command": "readConfig"
    }
    return sendCommand(command)


def writeConfig(valuesDict, configFile=None):
    """
    Ask calendarserver_config to update the configuration
    """
    command = {
        "command": "writeConfig",
        "Values": valuesDict,
    }
    return sendCommand(command)


def updateConfig(
    config,
    oldIP, newIP,
    oldHostname, newHostname,
    configFile=None
):
    keys = (
        ("Scheduling", "iMIP", "Receiving", "Server"),
        ("Scheduling", "iMIP", "Sending", "Server"),
        ("Scheduling", "iMIP", "Sending", "Address"),
        ("ServerHostName",),
    )

    def _replace(value, oldIP, newIP, oldHostname, newHostname):
        newValue = value.replace(oldIP, newIP)
        if oldHostname and newHostname:
            newValue = newValue.replace(oldHostname, newHostname)
        if value != newValue:
            log("Changed %s -> %s" % (value, newValue))
        return newValue

    for keyPath in keys:
        parent = config
        path = keyPath[:-1]
        key = keyPath[-1]

        for step in path:
            if step not in parent:
                parent = None
                break
            parent = parent[step]

        if parent:
            if key in parent:
                value = parent[key]
                if isinstance(value, list):
                    newValue = []
                    for item in value:
                        item = _replace(
                            item, oldIP, newIP, oldHostname, newHostname
                        )
                        newValue.append(item)
                else:
                    newValue = _replace(
                        value, oldIP, newIP, oldHostname, newHostname
                    )
                parent[key] = newValue


if __name__ == '__main__':
    main()

calendarserver-9.1+dfsg/calendarserver/tools/checkdatabaseschema.py

##
# Copyright (c) 2014-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

from getopt import getopt, GetoptError

from twext.enterprise.dal.model import Schema, Table, Column, Sequence, SQLType, \
    Constraint, Function
from twext.enterprise.dal.parseschema import addSQLToSchema, schemaFromPath

from twisted.python.filepath import FilePath

from txdav.common.datastore.sql_dump import DTYPE_MAP_POSTGRES, \
    DEFAULTVALUE_MAP_POSTGRES

import os
import re
import subprocess
import sys

USERNAME = "caldav"
DATABASENAME = "caldav"
PGSOCKETDIR = "127.0.0.1"
SCHEMADIR = "./txdav/common/datastore/sql_schema/"

# Executables:
PSQL = "../postgresql/_root/bin/psql"


def usage(e=None):
    name = os.path.basename(sys.argv[0])
    print("usage: %s [options] username" % (name,))
    print("")
    print(" Check calendar server postgres database and schema")
    print("")
    print("options:")
    print(" -d: path to server's sql_schema directory [./txdav/common/datastore/sql_schema/]")
    print(" -k: postgres socket path (value for psql -h argument [127.0.0.1])")
    print(" -p: location of psql tool if not on PATH already [psql]")
    print(" -x: use default values for OS X server")
    print(" -h --help: print this help and exit")
    print(" -v --verbose: print additional information")
    print("")

    if e:
        sys.stderr.write("%s\n" % (e,))
        sys.exit(64)
    else:
        sys.exit(0)


def execSQL(title, stmt, verbose=False):
    """
    Execute the provided SQL statement, return results as a list of rows.
    @param stmt: the SQL to execute
    @type stmt: L{str}
    """
    cmdArgs = [
        PSQL,
        "-h", PGSOCKETDIR,
        "-d", DATABASENAME,
        "-U", USERNAME,
        "-t",
        "-c", stmt,
    ]
    try:
        if verbose:
            print("\n{}".format(title))
            print("Executing: {}".format(" ".join(cmdArgs)))
        out = subprocess.check_output(cmdArgs, stderr=subprocess.STDOUT)
        if verbose:
            print(out)
    except subprocess.CalledProcessError, e:
        if verbose:
            print(e.output)
        raise CheckSchemaError(
            "%s failed:\n%s (exit code = %d)" % (PSQL, e.output, e.returncode)
        )
    return [map(lambda x: x.strip(), s.split("|")) for s in out.splitlines()[:-1]]


def getSchemaVersion(verbose=False):
    """
    Return the version number for the schema installed in the database.
    Raise CheckSchemaError if there is an issue.
    """
    out = execSQL(
        "Reading schema version...",
        "select value from calendarserver where name='VERSION';",
        verbose
    )
    try:
        version = int(out[0][0])
    except ValueError, e:
        raise CheckSchemaError(
            "Failed to parse schema version: %s" % (e,)
        )
    return version


def dumpCurrentSchema(verbose=False):

    schemaname = "public"

    schema = Schema("Dumped schema")

    # Sequences
    seqs = {}
    rows = execSQL(
        "Schema sequences...",
        "select sequence_name from information_schema.sequences where sequence_schema = '%s';" % (schemaname,),
        verbose
    )
    for row in rows:
        name = row[0]
        seqs[name.upper()] = Sequence(schema, name.upper())

    # Tables
    tables = {}
    rows = execSQL(
        "Schema tables...",
        "select table_name from information_schema.tables where table_schema = '%s';" % (schemaname,),
        verbose
    )
    for row in rows:
        name = row[0]
        table = Table(schema, name.upper())
        tables[name.upper()] = table

        # Columns
        rows = execSQL(
            "Reading table '{}' columns...".format(name),
            "select column_name, data_type, is_nullable, character_maximum_length, column_default from information_schema.columns where table_schema = '%s' and table_name = '%s';" % (schemaname, name,),
            verbose,
        )
        for name, datatype, is_nullable, charlen, default in rows:
            # TODO: figure out the type
            column = Column(
                table, name.upper(),
                SQLType(DTYPE_MAP_POSTGRES.get(datatype, datatype), int(charlen) if charlen else 0)
            )
            table.columns.append(column)
            if default:
                if default.startswith("nextval("):
                    dname = default.split("'")[1].split(".")[-1]
                    column.default = seqs[dname.upper()]
                elif default in DEFAULTVALUE_MAP_POSTGRES:
                    column.default = DEFAULTVALUE_MAP_POSTGRES[default]
                else:
                    try:
                        column.default = int(default)
                    except ValueError:
                        column.default = default
            if is_nullable == "NO":
                table.tableConstraint(Constraint.NOT_NULL, [column.name, ])

    # Key columns
    keys = {}
    rows = execSQL(
        "Schema key columns...",
        "select constraint_name, table_name, column_name from information_schema.key_column_usage where constraint_schema = '%s';" % (schemaname,),
        verbose
    )
    for conname, tname, cname in rows:
        keys[conname] = (tname, cname)

    # Constraints
    constraints = {}
    rows = execSQL(
        "Schema constraints...",
        "select constraint_name, table_name, column_name from information_schema.constraint_column_usage where constraint_schema = '%s';" % (schemaname,),
        verbose
    )
    for conname, tname, cname in rows:
        constraints[conname] = (tname, cname)

    # References - referential_constraints
    rows = execSQL(
        "Schema referential constraints...",
        "select constraint_name, unique_constraint_name, delete_rule from information_schema.referential_constraints where constraint_schema = '%s';" % (schemaname,),
        verbose
    )
    for conname, uconname, delete in rows:
        table = tables[keys[conname][0].upper()]
        column = table.columnNamed(keys[conname][1].upper())
        column.doesReferenceName(constraints[uconname][0].upper())
        if delete != "NO ACTION":
            column.deleteAction = delete.lower()

    # Indexes
    # TODO: handle implicit indexes created via primary key() and unique()
    # statements within CREATE TABLE
    rows = execSQL(
        "Schema indexes...",
        "select indexdef from pg_indexes where schemaname = '%s';" % (schemaname,),
        verbose
    )
    for indexdef in rows:
        addSQLToSchema(schema, indexdef[0].replace("%s." % (schemaname,), "").upper())

    # Functions
    rows = execSQL(
        "Schema functions",
        "select routine_name from information_schema.routines where routine_schema = '%s';" % (schemaname,),
        verbose
    )
    for row in rows:
        name = row[0]
        Function(schema, name)

    return schema


def checkSchema(dbversion, verbose=False):
    """
    Compare schema in the database with the expected schema file.
    """
    dbschema = dumpCurrentSchema(verbose)

    # Find current schema
    fp = FilePath(SCHEMADIR)
    fpschema = fp.child("old").child("postgres-dialect").child("v{}.sql".format(dbversion))
    if not fpschema.exists():
        fpschema = fp.child("current.sql")
    expectedSchema = schemaFromPath(fpschema)

    mismatched = dbschema.compare(expectedSchema)
    if mismatched:
        print("\nCurrent schema in database is mismatched:\n\n" + "\n".join(mismatched))
    else:
        print("\nCurrent schema in database is a match to the expected server version")


class CheckSchemaError(Exception):
    pass


def error(s):
    sys.stderr.write("%s\n" % (s,))
    sys.exit(1)


def main():
    try:
        (optargs, _ignore_args) = getopt(
            sys.argv[1:], "d:hk:p:vx", [
                "help",
                "verbose",
            ],
        )
    except GetoptError, e:
        usage(e)

    verbose = False

    global SCHEMADIR, PGSOCKETDIR, PSQL
    for opt, arg in optargs:
        if opt in ("-h", "--help"):
            usage()
        elif opt in ("-d",):
            SCHEMADIR = arg
        elif opt in ("-k",):
            PGSOCKETDIR = arg
        elif opt in ("-p",):
            PSQL = arg
        elif opt in ("-x",):
            sktdir = FilePath("/var/run/caldavd")
            for skt in sktdir.children():
                if skt.basename().startswith("ccs_postgres_"):
                    PGSOCKETDIR = skt.path
            PSQL = "/Applications/Server.app/Contents/ServerRoot/usr/bin/psql"
            SCHEMADIR = "/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/lib/python2.7/site-packages/txdav/common/datastore/sql_schema/"
        elif opt in ("-v", "--verbose"):
            verbose = True
        else:
            raise NotImplementedError(opt)

    # Retrieve the db_version number of the installed schema
    try:
        db_version = getSchemaVersion(verbose=verbose)
    except CheckSchemaError, e:
        db_version = 0

    # Retrieve the version number from the schema file
    currentschema = FilePath(SCHEMADIR).child("current.sql")
    try:
        data = currentschema.getContent()
    except IOError:
        print("Unable to open the current schema file: %s" % (currentschema.path,))
    else:
        found = re.search("insert into CALENDARSERVER values \('VERSION', '(\d+)'\);", data)
        if found is None:
            print("Schema is missing required schema VERSION insert statement: %s" % (currentschema.path,))
        else:
            current_version = int(found.group(1))
            if db_version == current_version:
                print("Schema version {} is current".format(db_version))
            else:
                # upgrade needed
                print("Schema needs to be upgraded from {} to {}".format(db_version, current_version))

    checkSchema(db_version, verbose)


if __name__ == "__main__":
    main()

calendarserver-9.1+dfsg/calendarserver/tools/cmdline.py

##
# Copyright (c) 2005-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
Shared main-point between utilities.
"""

from calendarserver.tap.util import checkDirectories, getRootResource
from calendarserver.tools.util import loadConfig, autoDisableMemcached

from twext.enterprise.jobs.queue import NonPerformingQueuer
from twext.python.log import Logger

from twistedcaldav.config import ConfigurationError
from twistedcaldav.timezones import TimezoneCache

from twisted.application.service import Service
from twisted.internet.defer import inlineCallbacks, succeed
from twisted.logger import FileLogObserver, formatEventAsClassicLogText
from twisted.python.logfile import LogFile

import sys
from errno import ENOENT, EACCES


# TODO: direct unit tests for these functions.


def utilityMain(
    configFileName, serviceClass, reactor=None, serviceMaker=None,
    patchConfig=None, onShutdown=None, verbose=False, loadTimezones=False
):
    """
    Shared main-point for utilities.

    This function will:

        - Load the configuration file named by C{configFileName},
        - launch a L{CalDAVServiceMaker} with the C{ProcessType} of
          C{"Utility"},
        - run the reactor, with start/stop events hooked up to the service's
          C{startService}/C{stopService} methods.

    It is C{serviceClass}'s responsibility to stop the reactor when it's
    complete.

    @param configFileName: the name of the configuration file to load.
    @type configFileName: C{str}

    @param serviceClass: a 1-argument callable which takes an object that
        provides L{ICalendarStore} and/or L{IAddressbookStore} and returns an
        L{IService}.

    @param patchConfig: a 1-argument callable which takes a config object
        and makes any changes necessary for the tool.

    @param onShutdown: a 0-argument callable which will run on shutdown.

    @param reactor: if specified, the L{IReactorTime} / L{IReactorThreads} /
        L{IReactorTCP} (etc) provider to use.  If C{None}, the default reactor
        will be imported and used.
    """
    from calendarserver.tap.caldav import CalDAVServiceMaker, CalDAVOptions

    if serviceMaker is None:
        serviceMaker = CalDAVServiceMaker

    # We want to validate that the actual service is always an instance of
    # WorkerService, so wrap the service maker callback inside a function
    # that does that check
    def _makeValidService(store):
        service = serviceClass(store)
        assert isinstance(service, WorkerService)
        return service

    # Install std i/o observer
    observers = []
    if verbose:
        observers.append(FileLogObserver(sys.stdout, lambda event: formatEventAsClassicLogText(event)))

    if reactor is None:
        from twisted.internet import reactor
    try:
        config = loadConfig(configFileName)
        if patchConfig is not None:
            patchConfig(config)
        checkDirectories(config)

        utilityLogFile = LogFile.fromFullPath(
            config.UtilityLogFile,
            rotateLength=config.ErrorLogRotateMB * 1024 * 1024,
            maxRotatedFiles=config.ErrorLogMaxRotatedFiles
        )
        utilityLogObserver = FileLogObserver(utilityLogFile, lambda event: formatEventAsClassicLogText(event))
        utilityLogObserver._encoding = "utf-8"
        observers.append(utilityLogObserver)

        Logger.beginLoggingTo(observers, redirectStandardIO=False)

        config.ProcessType = "Utility"
        config.UtilityServiceClass = _makeValidService

        autoDisableMemcached(config)

        maker = serviceMaker()

        # Only perform post-import duties if someone has explicitly said to
        maker.doPostImport = getattr(maker, "doPostImport", False)

        options = CalDAVOptions
        service = maker.makeService(options)

        reactor.addSystemEventTrigger("during", "startup", service.startService)
        reactor.addSystemEventTrigger("before", "shutdown", service.stopService)
        if onShutdown is not None:
            reactor.addSystemEventTrigger("before", "shutdown", onShutdown)

        if loadTimezones:
            TimezoneCache.create()

    except (ConfigurationError, OSError), e:
        sys.stderr.write("Error: %s\n" % (e,))
        return

    reactor.run()


class WorkerService(Service):

    def __init__(self, store):
        self.store = store

    def rootResource(self):
        try:
            from twistedcaldav.config import config
            rootResource = getRootResource(config, self.store)
        except OSError, e:
            if e.errno == ENOENT:
                # Trying to re-write resources.xml but its parent directory does
                # not exist.  The server's never been started, so we're missing
                # state required to do any work.
                raise ConfigurationError(
                    "It appears that the server has never been started.\n"
                    "Please start it at least once before running this tool.")
            elif e.errno == EACCES:
                # Trying to re-write resources.xml but it is not writable by the
                # current user.  This most likely means we're in a system
                # configuration and the user doesn't have sufficient privileges
                # to do the other things the tool might need to do either.
                raise ConfigurationError("You must run this tool as root.")
            else:
                raise
        return rootResource

    @inlineCallbacks
    def startService(self):
        try:
            # Work can be queued but will not be performed by the command
            # line tool
            if self.store is not None:
                self.store.queuer = NonPerformingQueuer()
                yield self.doWork()
            else:
                yield self.doWorkWithoutStore()
        except ConfigurationError, ce:
            sys.stderr.write("Error: %s\n" % (str(ce),))
        except Exception, e:
            sys.stderr.write("Error: %s\n" % (e,))
            raise
        finally:
            self.postStartService()

    def doWorkWithoutStore(self):
        """
        Subclasses can override doWorkWithoutStore if there is any work they
        can accomplish without access to the store, or if they want to emit
        their own error message.
        """
        sys.stderr.write("Error: Data store is not available\n")
        return succeed(None)

    def postStartService(self):
        """
        By default, stop the reactor after doWork( ) finishes.  Subclasses
        can override this if they want different behavior.
        """
        if hasattr(self, "reactor"):
            self.reactor.stop()
        else:
            from twisted.internet import reactor
            reactor.stop()

calendarserver-9.1+dfsg/calendarserver/tools/config.py

#!/usr/bin/env python

##
# Copyright (c) 2010-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

# Suppress warning that occurs on Linux
import sys
if sys.platform.startswith("linux"):
    from Crypto.pct_warnings import PowmInsecureWarning
    import warnings
    warnings.simplefilter("ignore", PowmInsecureWarning)

"""
This tool gets and sets Calendar Server configuration keys
"""

from getopt import getopt, GetoptError
from itertools import chain
import os
import plistlib
import signal
import subprocess
import xml

from plistlib import readPlistFromString, writePlistToString

from twistedcaldav.config import config, ConfigDict, ConfigurationError, mergeData
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE

from twisted.python.filepath import FilePath

WRITABLE_CONFIG_KEYS = [
    "AccountingCategories",
    "Authentication.Basic.AllowedOverWireUnencrypted",
    "Authentication.Basic.Enabled",
    "Authentication.Digest.AllowedOverWireUnencrypted",
    "Authentication.Digest.Enabled",
    "Authentication.Kerberos.AllowedOverWireUnencrypted",
    "Authentication.Kerberos.Enabled",
    "Authentication.Wiki.Enabled",
    "BehindTLSProxy",
    "DefaultLogLevel",
    "DirectoryAddressBook.params.queryPeopleRecords",
    "DirectoryAddressBook.params.queryUserRecords",
    "EnableCalDAV",
    "EnableCardDAV",
    "EnableSACLs",
    "EnableSearchAddressBook",
    "EnableSSL",
    "HTTPPort",
    "LogLevels",
    "Notifications.Services.APNS.Enabled",
    "RedirectHTTPToHTTPS",
    "Scheduling.iMIP.Enabled",
    "Scheduling.iMIP.Receiving.Port",
    "Scheduling.iMIP.Receiving.Server",
    "Scheduling.iMIP.Receiving.Type",
    "Scheduling.iMIP.Receiving.Username",
    "Scheduling.iMIP.Receiving.UseSSL",
    "Scheduling.iMIP.Sending.Address",
    "Scheduling.iMIP.Sending.Port",
    "Scheduling.iMIP.Sending.Server",
    "Scheduling.iMIP.Sending.Username",
    "Scheduling.iMIP.Sending.UseSSL",
    "ServerHostName",
    "SSLAuthorityChain",
    "SSLCertificate",
    "SSLPort",
    "SSLPrivateKey",
    "SSLKeychainIdentity",
]

READONLY_CONFIG_KEYS = [
    "ServerRoot",
]

ACCOUNTING_CATEGORIES = {
    "http": ("HTTP",),
    "itip": ("iTIP", "iTIP-VFREEBUSY",),
    "implicit": ("Implicit Errors",),
    "autoscheduling": ("AutoScheduling",),
    "ischedule": ("iSchedule",),
    "migration": ("migration",),
}

LOGGING_CATEGORIES = {
    "directory": ("twext^who", "txdav^who",),
    "imip": ("txdav^caldav^datastore^scheduling^imip",),
}


def usage(e=None):
    if e:
        print(e)
        print("")

    name = os.path.basename(sys.argv[0])
    print("usage: %s [options] config_key" % (name,))
    print("")
    print("Print the value of the given config key.")
    print("options:")
    print("  -a --accounting: Specify accounting categories (combinations of http, itip, implicit, autoscheduling, ischedule, migration, or off)")
    print("  -h --help: print this help and exit")
    print("  -f --config: Specify caldavd.plist configuration path")
    print("  -l --logging: Specify logging categories (combinations of directory, imip, or default)")
    print("  -r --restart: Restart the calendar service")
    print("  --start: Start the calendar service and agent")
    print("  --stop: Stop the calendar service and agent")
    print("  -w --writeconfig: Specify caldavd.plist configuration path for writing")

    if e:
        sys.exit(64)
    else:
        sys.exit(0)


def runAsRootCheck():
    """
    If we're running in Server.app context and are not running as root, exit.
    """
    if os.path.abspath(__file__).startswith("/Applications/Server.app/"):
        if os.getuid() != 0:
            print("Must be run as root")
            sys.exit(1)


def main():

    runAsRootCheck()

    try:
        (optargs, args) = getopt(
            sys.argv[1:], "hf:rw:a:l:", [
                "help",
                "config=",
                "writeconfig=",
                "restart",
                "start",
                "stop",
                "accounting=",
                "logging=",
            ],
        )
    except GetoptError, e:
        usage(e)

    configFileName = DEFAULT_CONFIG_FILE
    writeConfigFileName = ""
    accountingCategories = None
    loggingCategories = None
    doStop = False
    doStart = False
    doRestart = False

    for opt, arg in optargs:
        if opt in ("-h", "--help"):
            usage()
        elif opt in ("-f", "--config"):
            configFileName = arg
        elif opt in ("-w", "--writeconfig"):
            writeConfigFileName = arg
        elif opt in ("-a", "--accounting"):
            accountingCategories = arg.split(",")
        elif opt in ("-l", "--logging"):
            loggingCategories = arg.split(",")

        if opt == "--stop":
            doStop = True

        if opt == "--start":
            doStart = True

        if opt in ("-r", "--restart"):
            doRestart = True

    try:
        config.load(configFileName)
    except ConfigurationError, e:
        sys.stdout.write("%s\n" % (e,))
        sys.exit(1)

    if not writeConfigFileName:
        # If --writeconfig was not passed, use WritableConfigFile from
        # main plist.  If that's an empty string, writes will happen to
        # the main file.
        writeConfigFileName = config.WritableConfigFile
        if not writeConfigFileName:
            writeConfigFileName = configFileName

    if doStop:
        setServiceState("org.calendarserver.agent", "disable")
        setServiceState("org.calendarserver.calendarserver", "disable")
        setEnabled(False)

    if doStart:
        setServiceState("org.calendarserver.agent", "enable")
        setServiceState("org.calendarserver.calendarserver", "enable")
        setEnabled(True)

    if doStart or doStop:
        setReverseProxies()
        sys.exit(0)

    if doRestart:
        restartService(config.PIDFile)
        sys.exit(0)

    writable = WritableConfig(config, writeConfigFileName)
    writable.read()

    # Convert logging categories to actual config key changes
    if loggingCategories is not None:
        args = ["LogLevels="]
        if loggingCategories != ["default"]:
            for cat in loggingCategories:
                if cat in LOGGING_CATEGORIES:
                    for moduleName in LOGGING_CATEGORIES[cat]:
                        args.append("LogLevels.{}=debug".format(moduleName))

    # Convert accounting categories to actual config key changes
    if accountingCategories is not None:
        args = ["AccountingCategories="]
        if accountingCategories != ["off"]:
            for cat in accountingCategories:
                if cat in ACCOUNTING_CATEGORIES:
                    for key in ACCOUNTING_CATEGORIES[cat]:
                        args.append("AccountingCategories.{}=True".format(key))

    processArgs(writable, args)


def setServiceState(service, state):
    """
    Invoke serverctl to enable/disable a service
    """
    SERVERCTL = "/Applications/Server.app/Contents/ServerRoot/usr/sbin/serverctl"
    child = subprocess.Popen(
        args=[SERVERCTL, state, "service={}".format(service)],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    _ignore_output, error = child.communicate()
    if child.returncode:
        sys.stdout.write(
            "Error from serverctl: %d, %s" % (child.returncode, error)
        )


def setEnabled(enabled):
    command = {
        "command": "writeConfig",
        "Values": {
            "EnableCalDAV": enabled,
            "EnableCardDAV": enabled,
        },
    }

    runner = Runner([command], quiet=True)
    runner.run()


def setReverseProxies():
    """
    Invoke calendarserver_reverse_proxies
    """
    SERVERCTL = "/Applications/Server.app/Contents/ServerRoot/usr/libexec/calendarserver_reverse_proxies"
    child = subprocess.Popen(
        args=[SERVERCTL, ],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        stderr=subprocess.PIPE,
    )
    child.communicate()


def processArgs(writable, args, restart=True):
    """
    Perform the read/write operations requested in the command line args.
    If there are no args, stdin is read, and plist-formatted commands are
    processed from there.

    @param writable: the WritableConfig
    @param args: a list of utf-8 encoded strings
    @param restart: whether to restart the calendar server after making a
        config change.
    """
    if args:
        for configKey in args:

            # args come in as utf-8 encoded strings
            configKey = configKey.decode("utf-8")

            if "=" in configKey:
                # This is an assignment
                configKey, stringValue = configKey.split("=")
                if stringValue:
                    value = writable.convertToValue(stringValue)
                    valueDict = setKeyPath({}, configKey, value)
                    writable.set(valueDict)
                else:
                    writable.delete(configKey)
            else:
                # This is a read
                c = config
                for subKey in configKey.split("."):
                    c = c.get(subKey, None)
                    if c is None:
                        sys.stderr.write("No such config key: %s\n" % configKey)
                        break
                sys.stdout.write("%s=%s\n" % (configKey.encode("utf-8"), c))

        writable.save(restart=restart)

    else:
        # Read plist commands from stdin
        rawInput = sys.stdin.read()
        try:
            plist = readPlistFromString(rawInput)
            # Note: values in plist will already be unicode
        except xml.parsers.expat.ExpatError, e:  # @UndefinedVariable
            respondWithError(str(e))
            return

        # If the plist is an array, each element of the array is a separate
        # command dictionary.
        if isinstance(plist, list):
            commands = plist
        else:
            commands = [plist]

        runner = Runner(commands)
        runner.run()


class Runner(object):
    """
    A class which carries out commands, which are plist strings containing
    dictionaries with a "command" key, plus command-specific data.
    """

    def __init__(self, commands, quiet=False):
        """
        @param commands: the commands to run
        @type commands: list of plist strings
        """
        self.commands = commands
        self.quiet = quiet

    def validate(self):
        """
        Validate all the commands by making sure this class implements
        all the command keys.

        @return: True if all commands are valid, False otherwise
        """
        # Make sure commands are valid
        for command in self.commands:
            if 'command' not in command:
                respondWithError("'command' missing from plist")
                return False
            commandName = command['command']
            methodName = "command_%s" % (commandName,)
            if not hasattr(self, methodName):
                respondWithError("Unknown command '%s'" % (commandName,))
                return False
        return True

    def run(self):
        """
        Find the appropriate method for each command and call them.
        """
        try:
            for command in self.commands:
                commandName = command['command']
                methodName = "command_%s" % (commandName,)
                if hasattr(self, methodName):
                    getattr(self, methodName)(command)
                else:
                    respondWithError("Unknown command '%s'" % (commandName,))
        except Exception, e:
            respondWithError("Command failed: '%s'" % (str(e),))
            raise

    def command_readConfig(self, command):
        """
        Return current configuration

        @param command: the dictionary parsed from the plist read from stdin
        @type command: C{dict}
        """
        result = {}
        for keyPath in chain(WRITABLE_CONFIG_KEYS, READONLY_CONFIG_KEYS):
            value = getKeyPath(config, keyPath)
            if value is not None:
                # Note: config contains utf-8 encoded strings, but plistlib
                # wants unicode, so decode here:
                if isinstance(value, str):
                    value = value.decode("utf-8")
                setKeyPath(result, keyPath, value)
        respond(command, result)

    def command_writeConfig(self, command):
        """
        Write config to secondary, writable plist

        @param command: the dictionary parsed from the plist read from stdin
        @type command: C{dict}
        """
        writable = WritableConfig(config, config.WritableConfigFile)
        writable.read()
        valuesToWrite = command.get("Values", {})
        # Note: values are unicode if they contain non-ascii
        for keyPath, value in flattenDictionary(valuesToWrite):
            if keyPath in WRITABLE_CONFIG_KEYS:
                writable.set(setKeyPath(ConfigDict(), keyPath, value))
        try:
            writable.save(restart=False)
        except Exception, e:
            respond(command, {"error": str(e)})
        else:
            config.reload()
            if not self.quiet:
                self.command_readConfig(command)


def setKeyPath(parent, keyPath, value):
    """
    Allows the setting of arbitrary nested dictionary keys via a single
    dot-separated string.  For example, setKeyPath(parent, "foo.bar.baz",
    "xyzzy") would create any intermediate missing directories (or whatever
    class parent is, such as ConfigDict) so that the following structure
    results:  parent = { "foo" : { "bar" : { "baz" : "xyzzy" } } }

    @param parent: the object to modify
    @type parent: any dict-like object
    @param keyPath: a dot-delimited string specifying the path of keys to
        traverse
    @type keyPath: C{str}
    @param value: the value to set
    @type value: c{object}
    @return: parent
    """
    original = parent

    # Embedded ^ should be replaced by .
    # That way, we can still use . as key path separator even though sometimes
    # a key part wants to have a . in it (such as a LogLevels module).
    # For example:  LogLevels.twext^who will get converted to a "LogLevels"
    # dict containing the key "twext.who"
    parts = [p.replace("^", ".") for p in keyPath.split(".")]
    for part in parts[:-1]:
        child = parent.get(part, None)
        if child is None:
            parent[part] = child = parent.__class__()
        parent = child
    parent[parts[-1]] = value
    return original


def getKeyPath(parent, keyPath):
    """
    Allows the getting of arbitrary nested dictionary keys via a single
    dot-separated string.  For example, getKeyPath(parent, "foo.bar.baz")
    would fetch parent["foo"]["bar"]["baz"].  If any of the keys don't
    exist, None is returned instead.
    @param parent: the object to traverse
    @type parent: any dict-like object
    @param keyPath: a dot-delimited string specifying the path of keys to
        traverse
    @type keyPath: C{str}
    @return: the value at keyPath
    """
    parts = keyPath.split(".")
    for part in parts[:-1]:
        child = parent.get(part, None)
        if child is None:
            return None
        parent = child
    return parent.get(parts[-1], None)


def flattenDictionary(dictionary, current=""):
    """
    Returns a generator of (keyPath, value) tuples for the given dictionary,
    where each keyPath is a dot-separated string representing the complete
    path to a nested key.

    @param dictionary: the dict object to traverse
    @type dictionary: C{dict}
    @param current: do not use; used internally for recursion
    @type current: C{str}
    @return: generator of (keyPath, value) tuples
    """
    for key, value in dictionary.iteritems():
        if isinstance(value, dict):
            for result in flattenDictionary(value, current + key + "."):
                yield result
        else:
            yield (current + key, value)


def restartService(pidFilename):
    """
    Given the path to a PID file, sends a HUP signal to the contained pid
    in order to cause calendar server to restart.

    @param pidFilename: an absolute path to a PID file
    @type pidFilename: C{str}
    """
    if os.path.exists(pidFilename):
        with open(pidFilename, "r") as pidFile:
            pid = pidFile.read().strip()
        try:
            pid = int(pid)
        except ValueError:
            return
        try:
            os.kill(pid, signal.SIGHUP)
        except OSError:
            pass


class WritableConfig(object):
    """
    A wrapper around a Config object which allows writing of values.  The
    idea is a deployment could have a master plist which doesn't change, and
    have it include a plist file which does.  This class facilitates writing
    to that included plist.
    """

    def __init__(self, wrappedConfig, fileName):
        """
        @param wrappedConfig: the Config object to read from
        @type wrappedConfig: C{Config}
        @param fileName: the full path to the modifiable plist
        @type fileName: C{str}
        """
        self.config = wrappedConfig
        self.fileName = fileName
        self.changes = None
        self.currentConfigSubset = ConfigDict()
        self.dirty = False

    def set(self, data):
        """
        Merges data into a ConfigDict of changes intended to be saved to disk
        when save( ) is called.

        @param data: a dict containing new values
        @type data: C{dict}
        """
        if not isinstance(data, ConfigDict):
            data = ConfigDict(mapping=data)
        mergeData(self.currentConfigSubset, data)
        self.dirty = True

    def delete(self, key):
        """
        Deletes the specified key

        @param key: the key to delete
        @type key: C{str}
        """
        try:
            del self.currentConfigSubset[key]
            self.dirty = True
        except:
            pass

    def read(self):
        """
        Reads in the data contained in the writable plist file.

        @return: C{ConfigDict}
        """
        if os.path.exists(self.fileName):
            self.currentConfigSubset = ConfigDict(mapping=plistlib.readPlist(self.fileName))
        else:
            self.currentConfigSubset = ConfigDict()

    def toString(self):
        return plistlib.writePlistToString(self.currentConfigSubset)

    def save(self, restart=False):
        """
        Writes any outstanding changes to the writable plist file.  Optionally
        restart calendar server.

        @param restart: whether to restart the calendar server.
        @type restart: C{bool}
        """
        if self.dirty:
            content = writePlistToString(self.currentConfigSubset)
            fp = FilePath(self.fileName)
            fp.setContent(content)
            self.dirty = False
            if restart:
                restartService(self.config.PIDFile)

    @classmethod
    def convertToValue(cls, string):
        """
        Inspect string and convert the value into an appropriate Python data
        type.
        TODO: change this to look at actual types defined within stdconfig
        """
        if "." in string:
            try:
                value = float(string)
            except ValueError:
                value = string
        else:
            try:
                value = int(string)
            except ValueError:
                if string == "True":
                    value = True
                elif string == "False":
                    value = False
                else:
                    value = string
        return value


def respond(command, result):
    sys.stdout.write(writePlistToString({'command': command['command'], 'result': result}))


def respondWithError(msg, status=1):
    sys.stdout.write(writePlistToString({'error': msg, }))

calendarserver-9.1+dfsg/calendarserver/tools/dashboard.py

##
# Copyright (c) 2012-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
A curses (or plain text) based dashboard for viewing various aspects of the
server as exposed by the L{DashboardProtocol} stats socket.
""" import argparse import collections import curses.panel import errno import fcntl import json import logging import sched import socket import struct import sys import termios import time LOG_FILENAME = 'db.log' #logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG) def main(): parser = argparse.ArgumentParser(description="Dashboard service for CalendarServer.") parser.add_argument("-t", action="store_false", help="text output, not curses") parser.add_argument("-s", nargs="*", default=["localhost:8100"], help="server host (and optional port), or unix socket path prefixed by 'unix:'") args = parser.parse_args() # # Get configuration # useCurses = args.t servers = [] for server in args.s: if not server.startswith("unix:"): server = server.split(":") if len(server) == 1: server.append(8100) else: server[1] = int(server[1]) servers.append(tuple(server)) else: servers.append(server) if useCurses: def _wrapped(stdscrn): try: curses.curs_set(0) # make the cursor invisible curses.use_default_colors() curses.init_pair(1, curses.COLOR_RED, curses.COLOR_WHITE) except: pass d = Dashboard(servers, stdscrn, True) d.run() curses.wrapper(_wrapped) else: d = Dashboard(servers, None, False) d.run() def safeDivision(value, total, factor=1): return value * factor / total if total else 0 def defaultIfNone(x, default): return x if x is not None else default def terminal_size(): h, w, _ignore_hp, _ignore_wp = struct.unpack( 'HHHH', fcntl.ioctl( 0, termios.TIOCGWINSZ, struct.pack('HHHH', 0, 0, 0, 0) ) ) return w, h class Dashboard(object): """ Main dashboard controller. Use Python's L{sched} feature to schedule updates. 
""" screen = None registered_windows = collections.OrderedDict() registered_window_sets = { "H": ("HTTP Panels", [],), "J": ("Jobs Panels", [],), } def __init__(self, servers, screen, usesCurses): self.screen = screen self.usesCurses = usesCurses self.paused = False self.seconds = 0.1 if usesCurses else 1.0 self.sched = sched.scheduler(time.time, time.sleep) self.servers = servers self.current_server = 0 self.server_window = None self.client = DashboardClient(servers[0]) self.client_error = False @classmethod def registerWindow(cls, wtype, keypress): """ Register a window type along with a key press action. This allows the controller to select the appropriate window when its key is pressed, and also provides help information to the L{HelpWindow} for each available window type. """ cls.registered_windows[keypress] = wtype @classmethod def registerWindowSet(cls, wtype, keypress): """ Register a set of window types along with a key press action. This allows the controller to select the appropriate set of windows when its key is pressed, and also provides help information to the L{HelpWindow} for each available window set type. """ cls.registered_window_sets[keypress][1].append(wtype) def run(self): """ Create the initial window and run the L{scheduler}. """ self.windows = [] self.displayWindow(None) self.sched.enter(self.seconds, 0, self.updateDisplay, ()) self.sched.run() def displayWindow(self, wtype): """ Toggle the specified window, or reset to launch state if None. 
""" # Toggle a specific window on or off if isinstance(wtype, type): if wtype not in [type(w) for w in self.windows]: self.windows.append(wtype(self.usesCurses, self).makeWindow()) self.windows[-1].activate() else: for window in self.windows: if type(window) == wtype: window.deactivate() self.windows.remove(window) if len(self.windows) == 0: self.displayWindow(self.registered_windows["h"]) self.resetWindows() # Reset the screen to the default config else: if self.windows: for window in self.windows: window.deactivate() self.windows = [] top = 0 if len(self.servers) > 1: self.server_window = ServersMenu(self.usesCurses, self).makeWindow() self.windows.append(self.server_window) self.windows[-1].activate() top += self.windows[-1].nlines + 1 help_top = top if wtype is None: # All windows in registered order ordered_windows = self.registered_windows.values() else: ordered_windows = list(wtype) for wtype in filter(lambda x: x.all, ordered_windows): new_win = wtype(self.usesCurses, self).makeWindow(top=top) logging.debug('created %r at panel level %r' % (new_win, new_win.z_order)) self.windows.append(wtype(self.usesCurses, self).makeWindow(top=top)) self.windows[-1].activate() top += self.windows[-1].nlines + 1 # Don't display help panel if the window is too narrow if self.usesCurses: term_w, term_h = terminal_size() logging.debug("HelpWindow: rows: %s cols: %s" % (term_h, term_w)) if int(term_w) > 100: logging.debug('HelpWindow: term_w > 100, making window with top at %d' % (top)) self.windows.append(HelpWindow(self.usesCurses, self).makeWindow(top=help_top)) self.windows[-1].activate() if self.usesCurses: curses.panel.update_panels() self.updateDisplay(True) def resetWindows(self): """ Reset the current set of windows. 
""" if self.windows: logging.debug('resetting windows: %r' % (self.windows)) for window in self.windows: window.deactivate() old_windows = self.windows self.windows = [] top = 0 for old in old_windows: logging.debug('processing window of type %r' % (type(old))) self.windows.append(old.__class__(self.usesCurses, self).makeWindow(top=top)) self.windows[-1].activate() # Allow the help window to float on the right edge if old.__class__.__name__ != "HelpWindow": top += self.windows[-1].nlines + 1 def updateDisplay(self, initialUpdate=False): """ Periodic update of the current window and check for a key press. """ self.client.update() client_error = len(self.client.currentData) == 0 if client_error ^ self.client_error: self.client_error = client_error self.resetWindows() elif filter(lambda x: x.requiresReset(), self.windows): self.resetWindows() try: if not self.paused or initialUpdate: for window in filter( lambda x: x.requiresUpdate() or initialUpdate, self.windows ): window.update() except Exception as e: logging.debug(str(e)) pass if not self.usesCurses: print("-------------") # Check keystrokes if self.usesCurses: self.processKeys() if not initialUpdate: self.sched.enter(self.seconds, 0, self.updateDisplay, ()) def processKeys(self): """ Check for a key press. 
""" try: self.windows[-1].window.keypad(1) c = self.windows[-1].window.getkey() except: c = -1 if c == "q": sys.exit(0) elif c == " ": self.paused = not self.paused elif c == "t": self.seconds = 1.0 if self.seconds == 0.1 else 0.1 elif c == "a": self.displayWindow(None) elif c == "n": if self.windows: for window in self.windows: window.deactivate() self.windows = [] self.displayWindow(self.registered_windows["h"]) elif c in self.registered_windows: self.displayWindow(self.registered_windows[c]) elif c in self.registered_window_sets: self.displayWindow(self.registered_window_sets[c][1]) elif c in (curses.keyname(curses.KEY_LEFT), curses.keyname(curses.KEY_RIGHT)) and self.server_window: self.current_server += (-1 if c == curses.keyname(curses.KEY_LEFT) else 1) if self.current_server < 0: self.current_server = 0 elif self.current_server >= len(self.servers): self.current_server = len(self.servers) - 1 self.client = DashboardClient(self.servers[self.current_server]) self.resetWindows() class DashboardClient(object): """ Client that connects to a server and fetches information. """ def __init__(self, sockname): self.socket = None if isinstance(sockname, str): self.sockname = sockname[5:] self.useTCP = False else: self.sockname = sockname self.useTCP = True self.currentData = {} self.items = [] def readSock(self, items): """ Open a socket, send the specified request, and retrieve the response. Keep the socket open. 
""" try: if self.socket is None: self.socket = socket.socket(socket.AF_INET if self.useTCP else socket.AF_UNIX, socket.SOCK_STREAM) self.socket.connect(self.sockname) self.socket.setblocking(0) self.socket.sendall(json.dumps(items) + "\r\n") data = "" t = time.time() while not data.endswith("\n"): try: d = self.socket.recv(1024) except socket.error as se: if se.args[0] != errno.EWOULDBLOCK: raise if time.time() - t > 5: raise socket.error continue if d: data += d else: break data = json.loads(data) except socket.error: data = {} self.socket = None except ValueError: data = {} return data def update(self): """ Update the current data from the server. """ # Only read each item once self.currentData = self.readSock(list(set(self.items))) def getOneItem(self, item): """ Update the current data from the server. """ data = self.readSock([item]) return data[item] if data else None def addItem(self, item): """ Add a server data item to monitor. """ self.items.append(item) def removeItem(self, item): """ No need to monitor this item. """ try: self.items.remove(item) except ValueError: # Don't care if the item is not present pass class Point(object): def __init__(self, x=0, y=0): self.x = x self.y = y def xplus(self, xdiff=1): self.x += xdiff def yplus(self, ydiff=1): self.y += ydiff class BaseWindow(object): """ Common behavior for window types. 
""" help = "Not Implemented" all = True clientItem = None windowTitle = "" formatWidth = 0 additionalRows = 0 def __init__(self, usesCurses, dashboard): self.usesCurses = usesCurses self.dashboard = dashboard self.rowCount = 0 self.needsReset = False self.z_order = 'bottom' def makeWindow(self, top=0, left=0): self.updateRowCount() self._createWindow( self.windowTitle, self.rowCount + self.additionalRows, self.formatWidth, begin_y=top, begin_x=left ) return self def updateRowCount(self): """ Update L{self.rowCount} based on the current data """ raise NotImplementedError() def _createWindow( self, title, nlines, ncols, begin_y=0, begin_x=0 ): """ Initialize a curses window based on the sizes required. """ if self.usesCurses: self.window = curses.newwin(nlines, ncols, begin_y, begin_x) self.window.nodelay(1) self.panel = curses.panel.new_panel(self.window) eval("self.panel.%s()" % (self.z_order,)) else: self.window = None self.title = title self.nlines = nlines self.ncols = ncols self.iter = 0 self.lastResult = {} def requiresUpdate(self): """ Indicates whether a window type has dynamic data that should be refreshed on each update, or whether it is static data (e.g., L{HelpWindow}) that only needs to be drawn once. """ return True def requiresReset(self): """ Indicates that the window needs a full reset, because e.g., the number of items it displays has changed. """ return self.needsReset def activate(self): """ About to start displaying. """ if self.clientItem: self.dashboard.client.addItem(self.clientItem) # Update once when activated if not self.requiresUpdate(): self.update() def deactivate(self): """ Clear any drawing done by the current window type. """ if self.clientItem: self.dashboard.client.removeItem(self.clientItem) if self.usesCurses: self.window.erase() self.window.refresh() def update(self): """ Periodic window update - redraw the window. """ raise NotImplementedError() def tableHeader(self, hdrs, count): """ Generate the header rows. 
""" if self.usesCurses: self.window.erase() self.window.border() self.window.addstr( 0, 2, self.title + " {} ({})".format(count, self.iter) ) pt = Point(1, 1) if not self.usesCurses: print("------------ {}".format(self.title)) for hdr in hdrs: if self.usesCurses: self.window.addstr(pt.y, pt.x, hdr, curses.A_REVERSE) else: print(hdr) pt.yplus() return pt def tableFooter(self, feet, pt): """ Generate the footer rows. """ if self.usesCurses: self.window.hline(pt.y, pt.x, "-", self.formatWidth - 2) pt.yplus() for footer in feet: self.window.addstr(pt.y, pt.x, footer) pt.yplus() else: for footer in feet: print(footer) pt.yplus() print("") def tableRow(self, text, pt, style=curses.A_NORMAL): """ Generate a single row. """ try: if self.usesCurses: self.window.addstr( pt.y, pt.x, text, style ) else: print(text) except curses.error: pass pt.yplus() def clientData(self): return self.dashboard.client.currentData.get(self.clientItem) def readItem(self, item): return self.dashboard.client.getOneItem(item) class ServersMenu(BaseWindow): """ Top menu if multiple servers are present. 
""" help = "servers help" all = False windowTitle = "Servers" formatWidth = 0 additionalRows = 0 def makeWindow(self, top=0, left=0): term_w, _ignore_term_h = terminal_size() self.formatWidth = term_w - 50 return super(ServersMenu, self).makeWindow(0, 0) def updateRowCount(self): self.rowCount = 1 def requiresUpdate(self): return False def update(self): if not self.usesCurses: return self.window.erase() pt = Point() s = "Servers: |" self.window.addstr(pt.y, pt.x, s) pt.xplus(len(s)) selected_server = None for ctr, server in enumerate(self.dashboard.servers): s = " {:02d} ".format(ctr + 1) self.window.addstr( pt.y, pt.x, s, curses.A_REVERSE if ctr == self.dashboard.current_server else curses.A_NORMAL ) pt.xplus(len(s)) self.window.addstr(pt.y, pt.x, "|") pt.xplus() if ctr == self.dashboard.current_server: selected_server = "{}:{}".format(*server) self.window.addstr(pt.y, pt.x, " {}".format(selected_server)) self.window.refresh() class HelpWindow(BaseWindow): """ Display help for the dashboard. 
""" help = "Help" all = False helpItems = ( " a - All Panels", " n - No Panels", "", " - (space) Pause", " t - Toggle Update Speed", "", " q - Quit", ) windowTitle = "Help" formatWidth = 28 additionalRows = 3 def makeWindow(self, top=0, left=0): term_w, _ignore_term_h = terminal_size() help_x_offset = term_w - self.formatWidth return super(HelpWindow, self).makeWindow(0, help_x_offset) def updateRowCount(self): self.rowCount = len(self.helpItems) + len(filter(lambda x: len(x[1]) != 0, Dashboard.registered_window_sets.values())) + len(Dashboard.registered_windows) def requiresUpdate(self): return False def update(self): if self.usesCurses: self.window.erase() self.window.border() self.window.addstr(0, 2, "Hotkeys") pt = Point(1, 1) if not self.usesCurses: print("------------ {}".format(self.title)) items = [" {} - {}".format(keypress, wtype.help) for keypress, wtype in Dashboard.registered_windows.items()] items.append("") items.extend([" {} - {}".format(key, value[0]) for key, value in Dashboard.registered_window_sets.items() if value[1]]) items.extend(self.helpItems) for item in items: self.tableRow(item, pt) if self.usesCurses: self.window.refresh() class SystemWindow(BaseWindow): """ Displays the system information provided by the server. 
""" help = "System Status" clientItem = "stats_system" windowTitle = "System" formatWidth = 52 additionalRows = 3 def updateRowCount(self): self.rowCount = len(defaultIfNone(self.readItem(self.clientItem), (1, 2, 3, 4,))) def update(self): records = defaultIfNone(self.clientData(), { "cpu use": 0.0, "memory percent": 0.0, "memory used": 0, "start time": time.time(), }) if len(records) != self.rowCount: self.needsReset = True return self.iter += 1 s = " {:<30}{:>18} ".format("Item", "Value") pt = self.tableHeader((s,), len(records)) records["cpu use"] = "{:.2f}".format(records["cpu use"]) records["memory percent"] = "{:.1f}".format(records["memory percent"]) records["memory used"] = "{:.2f} GB".format( records["memory used"] / (1000.0 * 1000.0 * 1000.0) ) records["uptime"] = int(time.time() - records["start time"]) hours, mins = divmod(records["uptime"] / 60, 60) records["uptime"] = "{}:{:02d} hh:mm".format(hours, mins) del records["start time"] for item, value in sorted(records.items(), key=lambda x: x[0]): changed = ( item in self.lastResult and self.lastResult[item] != value ) s = " {:<30}{:>18} ".format(item, value) self.tableRow( s, pt, curses.A_REVERSE if changed else curses.A_NORMAL, ) if self.usesCurses: self.window.refresh() self.lastResult = records class RequestStatsWindow(BaseWindow): """ Displays the status of the server's master process worker slave slots. 
""" help = "HTTP Requests" clientItem = "stats" windowTitle = "Request Statistics" formatWidth = 84 additionalRows = 4 def updateRowCount(self): self.rowCount = 4 def update(self): records = defaultIfNone(self.clientData(), {}) self.iter += 1 s1 = " {:<8}{:>8}{:>10}{:>10}{:>10}{:>10}{:>8}{:>8}{:>8} ".format( "Period", "Reqs", "Av-Reqs", "Av-Resp", "Av-NoWr", "Max-Resp", "Slot", "CPU ", "500's" ) s2 = " {:<8}{:>8}{:>10}{:>10}{:>10}{:>10}{:>8}{:>8}{:>8} ".format( "", "", "per sec", "(ms)", "(ms)", "(ms)", "Avg.", "Avg.", "" ) pt = self.tableHeader((s1, s2,), len(records)) for key, seconds in (("current", 60,), ("1m", 60,), ("5m", 5 * 60,), ("1h", 60 * 60,),): stat = records.get(key, { "requests": 0, "t": 0.0, "t-resp-wr": 0.0, "T-MAX": 0.0, "slots": 0, "cpu": 0.0, "500": 0, }) s = " {:<8}{:>8}{:>10.1f}{:>10.1f}{:>10.1f}{:>10.1f}{:>8.2f}{:>7.1f}%{:>8} ".format( key, stat["requests"], safeDivision(float(stat["requests"]), seconds), safeDivision(stat["t"], stat["requests"]), safeDivision(stat["t"] - stat["t-resp-wr"], stat["requests"]), stat["T-MAX"], safeDivision(float(stat["slots"]), stat["requests"]), safeDivision(stat["cpu"], stat["requests"]), stat["500"], ) self.tableRow(s, pt) if self.usesCurses: self.window.refresh() self.lastResult = records class HTTPSlotsWindow(BaseWindow): """ Displays the status of the server's master process worker slave slots. 
""" help = "HTTP Slots" clientItem = "slots" windowTitle = "HTTP Slots" formatWidth = 72 additionalRows = 5 def updateRowCount(self): self.rowCount = len(defaultIfNone(self.readItem(self.clientItem), {"slots": ()})["slots"]) def update(self): data = defaultIfNone(self.clientData(), {"slots": {}, "overloaded": False}) records = data["slots"] if len(records) != self.rowCount: self.needsReset = True return self.iter += 1 s = " {:>4}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8} ".format( "Slot", "unack", "ack", "uncls", "total", "start", "strting", "stopped", "abd" ) pt = self.tableHeader((s,), len(records)) for record in sorted(records, key=lambda x: x["slot"]): changed = ( record["slot"] in self.lastResult and self.lastResult[record["slot"]] != record ) s = " {:>4}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8} ".format( record["slot"], record["unacknowledged"], record["acknowledged"], record["unclosed"], record["total"], record["started"], record["starting"], record["stopped"], record["abandoned"], ) count = record["unacknowledged"] + record["acknowledged"] self.tableRow( s, pt, curses.A_REVERSE if changed else ( curses.A_BOLD if count else curses.A_NORMAL ), ) s = " {:<12}{:>8}{:>16}".format( "Total:", sum( [ record["unacknowledged"] + record["acknowledged"] for record in records ] ), sum([record["total"] for record in records]), ) if data["overloaded"]: s += " OVERLOADED" self.tableFooter((s,), pt) if self.usesCurses: self.window.refresh() self.lastResult = records class MethodsWindow(BaseWindow): """ Display the status of the server's request methods. 
""" help = "HTTP Methods" clientItem = "stats" stats_keys = ("current", "1m", "5m", "1h",) windowTitle = "Methods" formatWidth = 116 additionalRows = 8 def updateRowCount(self): stats = defaultIfNone(self.clientData(), {}) methods = set() for key in self.stats_keys: methods.update(stats.get(key, {}).get("method", {}).keys()) nlines = len(methods) self.rowCount = nlines def update(self): stats = defaultIfNone(self.clientData(), {}) methods = set() for key in self.stats_keys: methods.update(stats.get(key, {}).get("method", {}).keys()) if len(methods) != self.rowCount: self.needsReset = True return records = {} records_t = {} for key in self.stats_keys: records[key] = defaultIfNone(self.clientData(), {}).get(key, {}).get("method", {}) records_t[key] = defaultIfNone(self.clientData(), {}).get(key, {}).get("method-t", {}) self.iter += 1 s1 = " {:<40}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10} ".format( "", "------", "current---", "------", "1m--------", "------", "5m--------", "------", "1h--------", ) s2 = " {:<40}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10} ".format( "Method", "Number", "Av-Time", "Number", "Av-Time", "Number", "Av-Time", "Number", "Av-Time", ) s3 = " {:<40}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10} ".format( "", "", "(ms)", "", "(ms)", "", "(ms)", "", "(ms)", ) pt = self.tableHeader((s1, s2, s3,), len(records)) total_methods = dict([(key, 0) for key in self.stats_keys]) total_time = dict([(key, 0.0) for key in self.stats_keys]) for method_type in sorted(methods): for key in self.stats_keys: total_methods[key] += records[key].get(method_type, 0) total_time[key] += records_t[key].get(method_type, 0.0) changed = self.lastResult.get(method_type, 0) != records["current"].get(method_type, 0) items = [method_type] for key in self.stats_keys: items.append(records[key].get(method_type, 0)) items.append(safeDivision(records_t[key].get(method_type, 0), records[key].get(method_type, 0))) s = " {:<40}{:>8}{:>10.1f}{:>8}{:>10.1f}{:>8}{:>10.1f}{:>8}{:>10.1f} 
".format( *items ) self.tableRow( s, pt, curses.A_REVERSE if changed else curses.A_NORMAL, ) items = ["Total:"] for key in self.stats_keys: items.append(total_methods[key]) items.append(safeDivision(total_time[key], total_methods[key])) s1 = " {:<40}{:>8}{:>10.1f}{:>8}{:>10.1f}{:>8}{:>10.1f}{:>8}{:>10.1f} ".format( *items ) items = ["401s:"] for key in self.stats_keys: items.append(defaultIfNone(self.clientData(), {}).get(key, {}).get("401", 0)) items.append("") s2 = " {:<40}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10} ".format( *items ) self.tableFooter((s1, s2,), pt) if self.usesCurses: self.window.refresh() self.lastResult = defaultIfNone(self.clientData(), {}).get("current", {}).get("method", {}) class AssignmentsWindow(BaseWindow): """ Displays the status of the server's master process worker slave slots. """ help = "Job Assignments" clientItem = "job_assignments" windowTitle = "Job Assignments" formatWidth = 40 additionalRows = 5 def updateRowCount(self): self.rowCount = len(defaultIfNone(self.readItem(self.clientItem), {"workers": ()})["workers"]) def update(self): data = defaultIfNone(self.clientData(), {"workers": {}, "level": 0}) records = data["workers"] if len(records) != self.rowCount: self.needsReset = True return self.iter += 1 s = " {:>4}{:>12}{:>8}{:>12} ".format( "Slot", "assigned", "load", "completed" ) pt = self.tableHeader((s,), len(records)) total_assigned = 0 total_completed = 0 for ctr, details in enumerate(records): assigned, load, completed = details total_assigned += assigned total_completed += completed changed = ( ctr in self.lastResult and self.lastResult[ctr] != assigned ) s = " {:>4}{:>12}{:>8}{:>12} ".format( ctr, assigned, load, completed, ) self.tableRow( s, pt, curses.A_REVERSE if changed else curses.A_NORMAL, ) s = " {:<6}{:>10}{:>8}{:>12}".format( "Total:", total_assigned, "{}%".format(data["level"]), total_completed, ) self.tableFooter((s,), pt) if self.usesCurses: self.window.refresh() self.lastResult = records class 
JobsWindow(BaseWindow): """ Display the status of the server's job queue. """ help = "Job Activity" clientItem = "jobs" windowTitle = "Jobs" formatWidth = 98 additionalRows = 6 def updateRowCount(self): self.rowCount = defaultIfNone(self.readItem("jobcount"), 0) def update(self): records = defaultIfNone(self.clientData(), {}) if len(records) != self.rowCount: self.needsReset = True return self.iter += 1 s1 = " {:<40}{:>8}{:>10}{:>8}{:>8}{:>10}{:>10} ".format( "Work Type", "Queued", "Assigned", "Late", "Failed", "Completed", "Av-Time", ) s2 = " {:<40}{:>8}{:>10}{:>8}{:>8}{:>10}{:>10} ".format( "", "", "", "", "", "", "(ms)", ) pt = self.tableHeader((s1, s2,), len(records)) total_queued = 0 total_assigned = 0 total_late = 0 total_failed = 0 total_completed = 0 total_time = 0.0 for work_type, details in sorted(records.items(), key=lambda x: x[0]): total_queued += details["queued"] total_assigned += details["assigned"] total_late += details["late"] total_failed += details["failed"] total_completed += details["completed"] total_time += details["time"] changed = ( work_type in self.lastResult and self.lastResult[work_type]["queued"] != details["queued"] ) s = "{}{:<40}{:>8}{:>10}{:>8}{:>8}{:>10}{:>10.1f} ".format( ">" if details["queued"] else " ", work_type, details["queued"], details["assigned"], details["late"], details["failed"], details["completed"], safeDivision(details["time"], details["completed"], 1000.0) ) self.tableRow( s, pt, curses.A_REVERSE if changed else ( curses.A_BOLD if details["queued"] else curses.A_NORMAL ), ) s = " {:<40}{:>8}{:>10}{:>8}{:>8}{:>10}{:>10.1f} ".format( "Total:", total_queued, total_assigned, total_late, total_failed, total_completed, safeDivision(total_time, total_completed, 1000.0) ) self.tableFooter((s,), pt) if self.usesCurses: self.window.refresh() self.lastResult = records class DirectoryStatsWindow(BaseWindow): """ Displays the status of the server's directory service calls """ help = "Directory Service" clientItem = 
"directory" windowTitle = "Directory Service" formatWidth = 89 additionalRows = 8 def updateRowCount(self): self.rowCount = len(defaultIfNone(self.readItem("directory"), {})) def update(self): records = defaultIfNone(self.clientData(), {}) if len(records) != self.rowCount: self.needsReset = True return self.iter += 1 s1 = " {:<40}{:>15}{:>15}{:>15} ".format( "Method", "Calls", "Total", "Average" ) s2 = " {:<40}{:>15}{:>15}{:>15} ".format( "", "", "(sec)", "(ms)" ) pt = self.tableHeader((s1, s2,), len(records)) overallCount = 0 overallCountRatio = 0 overallCountCached = 0 overallCountUncached = 0 overallTimeSpent = 0.0 for methodName, result in sorted(records.items(), key=lambda x: x[0]): if isinstance(result, int): count, timeSpent = result, 0.0 else: count, timeSpent = result overallCount += count if methodName.endswith("-hit"): overallCountRatio += count overallCountCached += count if methodName.endswith("-miss") or methodName.endswith("-expired"): overallCountRatio += count overallCountUncached += count overallTimeSpent += timeSpent s = " {:<40}{:>15d}{:>15.1f}{:>15.3f} ".format( methodName, count, timeSpent, safeDivision(timeSpent, count, 1000.0), ) self.tableRow(s, pt) s = " {:<40}{:>15d}{:>15.1f}{:>15.3f} ".format( "Total:", overallCount, overallTimeSpent, safeDivision(overallTimeSpent, overallCount, 1000.0) ) s_cached = " {:<40}{:>15d}{:>14.1f}%{:>15s} ".format( "Total Cached:", overallCountCached, safeDivision(overallCountCached, overallCountRatio, 100.0), "", ) s_uncached = " {:<40}{:>15d}{:>14.1f}%{:>15s} ".format( "Total Uncached:", overallCountUncached, safeDivision(overallCountUncached, overallCountRatio, 100.0), "", ) self.tableFooter((s, s_cached, s_uncached), pt) if self.usesCurses: self.window.refresh() Dashboard.registerWindow(HelpWindow, "h") Dashboard.registerWindow(SystemWindow, "s") Dashboard.registerWindow(RequestStatsWindow, "r") Dashboard.registerWindow(HTTPSlotsWindow, "c") Dashboard.registerWindow(MethodsWindow, "m") 
Dashboard.registerWindow(AssignmentsWindow, "w")
Dashboard.registerWindow(JobsWindow, "j")
Dashboard.registerWindow(DirectoryStatsWindow, "d")

Dashboard.registerWindowSet(SystemWindow, "H")
Dashboard.registerWindowSet(RequestStatsWindow, "H")
Dashboard.registerWindowSet(HTTPSlotsWindow, "H")
Dashboard.registerWindowSet(MethodsWindow, "H")
Dashboard.registerWindowSet(SystemWindow, "J")
Dashboard.registerWindowSet(AssignmentsWindow, "J")
Dashboard.registerWindowSet(JobsWindow, "J")

if __name__ == "__main__":
    main()
calendarserver-9.1+dfsg/calendarserver/tools/dashcollect.py000077500000000000000000000347201315003562600243110ustar00rootroot00000000000000
#!/usr/bin/env python
##
# Copyright (c) 2015-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
A service that logs dashboard data from multiple hosts and stores them in
log files. It can also, optionally, make the most recent data available
for retrieval via a simple TCP socket read on a specific port.

This tool uses its own config file, specified with the '-f' option.
A sample is shown below:

{
    "title": "My CalDAV service",
    "pods": {
        "podA": {
            "description": "Main pod",
            "servers": [
                "podAhost1.example.com:8100",
                "podAhost2.example.com:8100"
            ]
        },
        "podB": {
            "description": "Development pod",
            "servers": [
                "podBhost1.example.com:8100",
                "podBhost2.example.com:8100"
            ]
        }
    }
}
"""

from argparse import HelpFormatter, SUPPRESS, OPTIONAL, ZERO_OR_MORE, \
    ArgumentParser
from collections import OrderedDict
from datetime import datetime, date
from threading import Thread
import SocketServer
import errno
import json
import os
import sched
import socket
import sys
import time
import zlib

verbose = False


def _verbose(log):
    if verbose:
        print(log)


class MyHelpFormatter(HelpFormatter):
    """
    Help message formatter which adds default values to argument help and
    retains formatting of all help text.
    """

    def _fill_text(self, text, width, indent):
        return ''.join([indent + line for line in text.splitlines(True)])

    def _get_help_string(self, action):
        help = action.help
        if '%(default)' not in action.help:
            if action.default is not SUPPRESS:
                defaulting_nargs = [OPTIONAL, ZERO_OR_MORE]
                if action.option_strings or action.nargs in defaulting_nargs:
                    help += ' (default: %(default)s)'
        return help


def main():
    try:
        # to produce a docstring target
        thisFile = __file__
    except NameError:
        # unlikely but possible...
        thisFile = sys.argv[0]

    parser = ArgumentParser(
        formatter_class=MyHelpFormatter,
        description="Dashboard service for CalendarServer.",
        epilog="To view the docstring, run: pydoc {}".format(thisFile),
    )
    parser.add_argument("-f", default=SUPPRESS, required=True, help="Server config file (see docstring for details)")
    parser.add_argument("-l", default=SUPPRESS, required=True, help="Log file directory")
    parser.add_argument("-n", action="store_true", help="Create a new log file when starting, existing log file is deleted")
    parser.add_argument("-s", default="localhost:8200", help="Make JSON data available on the specified host:port")
    parser.add_argument("-t", action="store_true", help="Rotate log files every hour, otherwise once per day")
    parser.add_argument("-z", action="store_true", help="zlib compress json records in log files")
    parser.add_argument("-v", action="store_true", help="Verbose")
    args = parser.parse_args()
    if args.v:
        global verbose
        verbose = True

    config = Config()
    try:
        config.loadFromFile(args.f)
    except:
        parser.print_usage()
        sys.exit(1)

    # Remove any existing logfile if asked
    if args.n:
        logfile = DashboardCollector.logfile(args.l, args.t)
        if os.path.exists(logfile):
            os.remove(logfile)

    print("Running DashboardCollector...")
    dash = DashboardCollector(config, args.l, args.t, args.z)
    dash_thread = Thread(target=dash.run)
    dash_thread.start()

    if args.s:
        print("Running the CollectorService...")
        host = args.s
        if not host.startswith("unix:"):
            host = host.split(":")
            if len(host) == 1:
                host.append(8200)
            else:
                host[1] = int(host[1])
            host = tuple(host)
        try:
            server = CollectorService(host, CollectorRequestHandler)
            server.dashboard = dash
            server_thread = Thread(target=server.serve_forever)
            server_thread.daemon = True
            server_thread.start()
        except Exception as e:
            # If startup fails, "server" and "server_thread" may not exist
            # yet, so only stop the collector before exiting
            print("Terminating service due to error: {}".format(e))
            dash.stop()
            sys.exit(1)

    while dash_thread.isAlive():
        try:
            dash_thread.join(1000)
        except KeyboardInterrupt:
            print("Terminating service")
            dash.stop()
            server.shutdown()
            server_thread.join()


class Config(object):
    """
    Loads the config and creates a list of L{Pod}'s.

    JSON schema for server config file:

    ; root object
    OBJECT (title, pods)

    ; Title/description of this config
    MEMBER title "title" : STRING

    ; pods - set of pods to monitor
    MEMBER pods "pod" : Object ( *pod )

    ; An event type
    MEMBER pod "" : Object( ?description, servers )

    ; The description of a pod
    MEMBER description "description" : STRING

    ; Servers associated with a pod
    ; Server names are either "host:port", or "unix:path"
    MEMBER servers "servers" : ARRAY STRING
    """

    def loadFromFile(self, path):
        _verbose("Loading config file {}".format(path))
        with open(path) as f:
            try:
                jsondata = json.loads(f.read(), object_pairs_hook=OrderedDict)
            except Exception:
                print("Could not read JSON data from {}".format(path))
                raise RuntimeError("Could not read JSON data from {}".format(path))
        try:
            self.title = jsondata["title"]
            _verbose("Config '{}'".format(self.title))
            self.pods = [Pod(podname, data) for podname, data in jsondata["pods"].items()]
        except Exception:
            print("No valid JSON data in {}".format(path))
            raise RuntimeError("No valid JSON data in {}".format(path))


class Pod(object):
    """
    Model object that represents an L{Pod}.
    """

    def __init__(self, title, jsondata):
        """
        Parse the pod details from the JSON data and create the list of
        L{Server}'s.
""" self.title = title self.description = jsondata.get("description", "") _verbose(" Pod '{}': {}".format(self.title, self.description)) self.servers = [Server(data) for data in jsondata.get("servers", [])] # Setup each L{Server} with the set of stats items they need to read for ctr, server in enumerate(self.servers): _verbose(" Server: {}".format(server.sockname)) server.addItem("stats_system") server.addItem("stats") server.addItem("slots") server.addItem("job_assignments") server.addItem("jobcount") server.addItem("directory") # Only read this once as otherwise too much load if ctr == 0: server.addItem("jobs") def sendSock(self): """ Update the data for each L{Server} in this L{Pod}. """ _verbose(" Pod send: {}".format(self.title)) for server in self.servers: server.sendSock() def update(self, data): """ Update the data for each L{Server} in this L{Pod}. """ _verbose(" Pod read: {}".format(self.title)) data[self.title] = OrderedDict() for server in self.servers: server.update(data[self.title]) class Server(object): """ Model object that represents a server in a pod. """ def __init__(self, host): """ Setup the appropriate socket connection details. """ self.host = host self.socket = None if host.startswith("unix:"): self.sockname = host[5:] self.useTCP = False else: host = host.split(":") if len(host) == 1: host.append(8100) else: host[1] = int(host[1]) self.sockname = tuple(host) self.useTCP = True self.currentData = {} self.items = [] def sendSock(self): """ Open a socket, send the specified request, and retrieve the response. Keep the socket open. 
""" items = list(set(self.items)) try: if self.socket is None: if self.useTCP: self.socket = socket.create_connection(self.sockname, 1.0) else: self.socket = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) self.socket.connect(self.sockname) self.socket.setblocking(0) self.socket.sendall(json.dumps(items) + "\r\n") except socket.error as e: self.socket = None _verbose(" server failed: {} {}".format(self.host, e)) except ValueError as e: _verbose(" server failed: {} {}".format(self.host, e)) def readSock(self, items): """ Open a socket, send the specified request, and retrieve the response. Keep the socket open. """ if self.socket is None: return {} try: data = "" t = time.time() while not data.endswith("\n"): try: d = self.socket.recv(1024) except socket.error as se: if se.args[0] not in (errno.EWOULDBLOCK, errno.EAGAIN): raise if time.time() - t > 5: raise socket.error continue if d: data += d else: break data = json.loads(data, object_pairs_hook=OrderedDict) except socket.error as e: data = {} self.socket = None _verbose(" server failed: {} {}".format(self.host, e)) except ValueError as e: data = {} _verbose(" server failed: {} {}".format(self.host, e)) return data def update(self, data): """ Update the current data from the server. """ # Only read each item once self.currentData = self.readSock(list(set(self.items))) data[self.host] = self.currentData _verbose(" Server read: {} {}".format(self.host, len(data[self.host]))) #_verbose(" Data: {}".format(self.currentData)) def getOneItem(self, item): """ Update the current data from the server. """ data = self.readSock([item]) return data[item] if data else None def addItem(self, item): """ Add a server data item to monitor. """ self.items.append(item) def removeItem(self, item): """ No need to monitor this item. """ try: self.items.remove(item) except ValueError: # Don't care if the item is not present pass class DashboardCollector(object): """ Main dashboard controller. 
Use Python's L{sched} feature to schedule updates. """ def __init__(self, config, logdir, loghourly, compress_log): self.logdir = logdir self.loghourly = loghourly self.compress_log = compress_log self.title = config.title self.pods = config.pods self.sched = sched.scheduler(time.time, time.sleep) self.seconds = 1 self.lastData = {} self._stop = False def run(self): """ Start the L{scheduler}. """ _verbose("Starting Dashboard") self.sched.enter(self.seconds, 0, self.update, ()) self.sched.run() _verbose("Stopped Dashboard") def stop(self): self._stop = True @staticmethod def logfile(logdir, loghourly): """ Log file name based on current date so it rotates once a day, or hourly. """ today = date.today().isoformat() if loghourly: hour = datetime.now().hour return os.path.join(logdir, "dashboard-{today}T{hour:02d}00.log".format(today=today, hour=hour)) else: return os.path.join(logdir, "dashboard-{today}.log".format(today=today)) def update(self): """ Update data from each pod. """ _verbose("Update pods") j = OrderedDict() j["timestamp"] = datetime.now().replace(microsecond=0).isoformat() j["pods"] = OrderedDict() for pod in self.pods: pod.sendSock() for pod in self.pods: pod.update(j["pods"]) # Filter out unwanted data for jpod in j["pods"].values(): for jhost in jpod.values(): for stattype in ("current", "1m", "5m", "1h"): for statkey in ("uid", "user-agent"): try: del jhost["stats"][stattype][statkey] except KeyError: pass # Append to log file with open(self.logfile(self.logdir, self.loghourly), "a") as f: jstr = json.dumps(j) zstr = zlib.compress(jstr).encode("base64").replace("\n", "") if self.compress_log else jstr f.write("\x1e{}\n".format(zstr)) self.lastData = j if not self._stop: self.sched.enter(self.seconds, 0, self.update, ()) class CollectorService(SocketServer.ThreadingTCPServer): """ L{ThreadingTCPServer} that sends out the current data from the L{DashbordCollector}. 
""" def data(self): if hasattr(self, "dashboard"): return json.dumps(self.dashboard.lastData) else: return "{}" class CollectorRequestHandler(SocketServer.BaseRequestHandler): """ Request handler for L{CollectorService} that just sends back the current data. """ def handle(self): self.request.sendall(self.server.data()) if __name__ == "__main__": main() calendarserver-9.1+dfsg/calendarserver/tools/dashtime.py000077500000000000000000000637101315003562600236250ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2015-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Tool that extracts time series data from a dashcollect log. """ from argparse import SUPPRESS, OPTIONAL, ZERO_OR_MORE, HelpFormatter, \ ArgumentParser from bz2 import BZ2File from collections import OrderedDict, defaultdict from zlib import decompress import json import matplotlib.pyplot as plt import operator import os import sys verbose = False def _verbose(log): if verbose: print(log) def safeDivision(value, total, factor=1): return value * factor / total if total else 0 class MyHelpFormatter(HelpFormatter): """ Help message formatter which adds default values to argument help and retains formatting of all help text. 
""" def _fill_text(self, text, width, indent): return ''.join([indent + line for line in text.splitlines(True)]) def _get_help_string(self, action): help = action.help if '%(default)' not in action.help: if action.default is not SUPPRESS: defaulting_nargs = [OPTIONAL, ZERO_OR_MORE] if action.option_strings or action.nargs in defaulting_nargs: help += ' (default: %(default)s)' return help class DataType(object): """ Base class for object that can process the different types of data in a dashcollect log. """ allTypes = OrderedDict() key = "" # This indicates whether the class of data is based on a 1 minute average - # which means the data represents a 60 second delay compared to the "real- # time" value. If it is the average then setting this flag will cause the # first 60 data items to be skipped. skip60 = False @staticmethod def getTitle(measurement): if "-" in measurement: measurement, item = measurement.split("-", 1) else: item = "" return DataType.allTypes[measurement].title(item) @staticmethod def getMaxY(measurement, numHosts): if "-" in measurement: measurement = measurement.split("-", 1)[0] return DataType.allTypes[measurement].maxY(numHosts) @staticmethod def skip(measurement): if "-" in measurement: measurement = measurement.split("-", 1)[0] return DataType.allTypes[measurement].skip60 @staticmethod def process(measurement, stats, host): if "-" in measurement: measurement, item = measurement.split("-", 1) else: item = "" return DataType.allTypes[measurement].calculate(stats, item, host) @staticmethod def title(item): raise NotImplementedError @staticmethod def maxY(numHosts): raise NotImplementedError @staticmethod def calculate(stats, item, hosts): """ If hosts is L{None} then data from all hosts will be aggregated. @param stats: per-Pod L{dict} of data from each host in the pod. 
        @type stats: L{dict}
        @param item: additional L{dict} key for data of interest
        @type item: L{str}
        @param hosts: list of hosts to process
        @type hosts: L{list}
        """
        raise NotImplementedError


class CPUDataType(DataType):
    """
    CPU use.
    """

    key = "cpu"

    @staticmethod
    def title(item):
        return "CPU Use %"

    @staticmethod
    def maxY(numHosts):
        return 100 * numHosts

    @staticmethod
    def calculate(stats, item, hosts):
        return sum([stats[onehost]["stats_system"]["cpu use"] if stats[onehost] else 0 for onehost in hosts])


class MemoryDataType(DataType):
    """
    Memory use.
    """

    key = "mem"

    @staticmethod
    def title(item):
        return "Memory Use %"

    @staticmethod
    def maxY(numHosts):
        return 100 * numHosts

    @staticmethod
    def calculate(stats, item, hosts):
        return sum([stats[onehost]["stats_system"]["memory percent"] if stats[onehost] else 0 for onehost in hosts])


class RequestsDataType(DataType):
    """
    Number of requests.
    """

    key = "reqs"
    skip60 = True

    @staticmethod
    def title(item):
        return "Requests/sec"

    @staticmethod
    def maxY(numHosts):
        return None

    @staticmethod
    def calculate(stats, item, hosts):
        return sum([stats[onehost]["stats"]["1m"]["requests"] if stats[onehost] else 0 for onehost in hosts]) / 60.0


class ResponseDataType(DataType):
    """
    Average response time.
    """

    key = "respt"
    skip60 = True

    @staticmethod
    def title(item):
        return "Av. Response Time (ms)"

    @staticmethod
    def maxY(numHosts):
        return None

    @staticmethod
    def calculate(stats, item, hosts):
        tsum = sum([stats[onehost]["stats"]["1m"]["t"] if stats[onehost] else 0 for onehost in hosts])
        rsum = sum([stats[onehost]["stats"]["1m"]["requests"] if stats[onehost] else 0 for onehost in hosts])
        return safeDivision(tsum, rsum)


class Code500DataType(DataType):
    """
    Number of 500 requests.
""" key = "500" skip60 = True @staticmethod def title(item): return "Code 500/sec" @staticmethod def maxY(numHosts): return None @staticmethod def calculate(stats, item, hosts): return sum([stats[onehost]["stats"]["1m"]["500"] if stats[onehost] else 0 for onehost in hosts]) / 60.0 class Code401DataType(DataType): """ Number of 401 requests. """ key = "401" skip60 = True @staticmethod def title(item): return "Code 401/sec" @staticmethod def maxY(numHosts): return None @staticmethod def calculate(stats, item, hosts): return sum([stats[onehost]["stats"]["1m"]["401"] if stats[onehost] else 0 for onehost in hosts]) / 60.0 class MaxSlotsDataType(DataType): """ Max slots. """ key = "slots" skip60 = True @staticmethod def title(item): return "Max. Slots" @staticmethod def maxY(numHosts): return None @staticmethod def calculate(stats, item, hosts): return sum([stats[onehost]["stats"]["1m"]["max-slots"] if stats[onehost] else 0 for onehost in hosts]) class JobsCompletedDataType(DataType): """ Job completion count from job assignments. """ key = "jcomp" lastCompleted = defaultdict(int) @staticmethod def title(item): return "Completed" @staticmethod def maxY(numHosts): return None @staticmethod def calculate(stats, item, hosts): result = 0 for onehost in hosts: completed = sum(map(operator.itemgetter(2), stats[onehost]["job_assignments"]["workers"])) if stats[onehost] else 0 delta = completed - JobsCompletedDataType.lastCompleted[onehost] if JobsCompletedDataType.lastCompleted[onehost] else 0 if delta >= 0: result += delta JobsCompletedDataType.lastCompleted[onehost] = completed return result class MethodCountDataType(DataType): """ Count of specified methods. L{item} should be set to the full name of the "decorated" method seen in dashview. 
""" key = "methodc" skip60 = True @staticmethod def title(item): return item @staticmethod def maxY(numHosts): return None @staticmethod def calculate(stats, item, hosts): return sum([stats[onehost]["stats"]["1m"]["method"].get(item, 0) if stats[onehost] else 0 for onehost in hosts]) class MethodResponseDataType(DataType): """ Average response time of specified methods. L{item} should be set to the full name of the "decorated" method seen in dashview. """ key = "methodr" skip60 = True @staticmethod def title(item): return item @staticmethod def maxY(numHosts): return None @staticmethod def calculate(stats, item, hosts): tsum = sum([stats[onehost]["stats"]["1m"]["method-t"].get(item, 0) if stats[onehost] else 0 for onehost in hosts]) rsum = sum([stats[onehost]["stats"]["1m"]["method"].get(item, 0) if stats[onehost] else 0 for onehost in hosts]) return safeDivision(tsum, rsum) class JobQueueDataType(DataType): """ Count of queued job items. L{item} should be set to the full name or prefix of job types to process. Or if set to L{None}, all jobs are counted. 
""" key = "jqueue" @staticmethod def title(item): return ("JQ " + "_".join(map(operator.itemgetter(0), item.split("_")))) if item else "Jobs Queued" @staticmethod def maxY(numHosts): return None @staticmethod def calculate(stats, item, hosts): # Job queue stat only read for first host onehost = sorted(stats.keys())[0] if len(stats[onehost]) == 0: return 0 if item: return sum(map(operator.itemgetter("queued"), {k: v for k, v in stats[onehost]["jobs"].items() if k.startswith(item)}.values())) else: return sum(map(operator.itemgetter("queued"), stats[onehost]["jobs"].values())) # Register the known L{DataType}s for dtype in DataType.__subclasses__(): DataType.allTypes[dtype.key] = dtype class Calculator(object): def __init__(self, args): if args.v: global verbose verbose = True # Get the log file self.logname = os.path.expanduser(args.l) try: if self.logname.endswith(".bz2"): self.logfile = BZ2File(self.logname) else: self.logfile = open(self.logname) except: print("Failed to open logfile {}".format(args.l)) sys.exit(1) self.pod = getattr(args, "p", None) self.single_server = getattr(args, "s", None) self.save = args.save self.savedir = getattr(args, "i", None) self.noshow = args.noshow self.mode = args.mode # Start/end lines in log file to process self.line_start = args.start self.line_count = args.count # Plot arrays that will be generated self.x = [] self.y = OrderedDict() self.titles = {} self.ymaxes = {} def singleHost(self, valuekeys): """ Generate data for a single host only. @param valuekeys: L{DataType} keys to process @type valuekeys: L{list} or L{str} """ self._plotHosts(valuekeys, (self.single_server,)) def combinedHosts(self, valuekeys): """ Generate data for all hosts. @param valuekeys: L{DataType} keys to process @type valuekeys: L{list} or L{str} """ self._plotHosts(valuekeys, None) def _plotHosts(self, valuekeys, hosts): """ Generate data for a the specified list of hosts. 
        @param valuekeys: L{DataType} keys to process
        @type valuekeys: L{list} or L{str}
        @param hosts: list of hosts to process
        @type hosts: L{list} or L{str}
        """

        # For each log file line, process the data for each required measurement
        with self.logfile:
            line = self.logfile.readline()
            ctr = 0
            while line:
                if ctr < self.line_start:
                    ctr += 1
                    line = self.logfile.readline()
                    continue

                if line[0] == "\x1e":
                    line = line[1:]
                if line[0] != "{":
                    line = decompress(line.decode("base64"))
                jline = json.loads(line)

                timestamp = jline["timestamp"]
                self.x.append(int(timestamp[14:16]) * 60 + int(timestamp[17:19]))

                ctr += 1

                # Initialize the plot arrays when we know how many hosts there are
                if len(self.y) == 0:
                    if self.pod is None:
                        self.pod = sorted(jline["pods"].keys())[0]
                    if hosts is None:
                        hosts = sorted(jline["pods"][self.pod].keys())
                    for measurement in valuekeys:
                        self.y[measurement] = []
                        self.titles[measurement] = DataType.getTitle(measurement)
                        self.ymaxes[measurement] = DataType.getMaxY(measurement, len(hosts))

                for measurement in valuekeys:
                    stats = jline["pods"][self.pod]
                    try:
                        self.y[measurement].append(DataType.process(measurement, stats, hosts))
                    except KeyError:
                        self.y[measurement].append(None)

                line = self.logfile.readline()
                if self.line_count != -1 and ctr > self.line_start + self.line_count:
                    break

        # Offset data that is averaged over the previous minute
        for measurement in valuekeys:
            if DataType.skip(measurement):
                self.y[measurement] = self.y[measurement][60:]
                self.y[measurement].extend([None] * 60)

    def perHost(self, perhostkeys, combinedkeys):
        """
        Generate a set of per-host plots, together with a set of plots for
        all-host data.
@param perhostkeys: L{DataType} keys for per-host data to process @type perhostkeys: L{list} or L{str} @param combinedkeys: L{DataType} keys for all-host data to process @type combinedkeys: L{list} or L{str} """ # For each log file line, process the data for each required measurement with self.logfile: line = self.logfile.readline() ctr = 0 while line: if ctr < self.line_start: ctr += 1 line = self.logfile.readline() continue if line[0] == "\x1e": line = line[1:] if line[0] != "{": line = decompress(line.decode("base64")) jline = json.loads(line) timestamp = jline["timestamp"] self.x.append(int(timestamp[14:16]) * 60 + int(timestamp[17:19])) ctr += 1 # Initialize the plot arrays when we know how many hosts there are if len(self.y) == 0: if self.pod is None: self.pod = sorted(jline["pods"].keys())[0] hosts = sorted(jline["pods"][self.pod].keys()) for host in hosts: for measurement in perhostkeys: ykey = "{}={}".format(measurement, host) self.y[ykey] = [] self.titles[ykey] = DataType.getTitle(measurement) self.ymaxes[ykey] = DataType.getMaxY(measurement, 1) for measurement in combinedkeys: self.y[measurement] = [] self.titles[measurement] = DataType.getTitle(measurement) self.ymaxes[measurement] = DataType.getMaxY(measurement, len(hosts)) # Get actual measurement data for host in hosts: for measurement in perhostkeys: ykey = "{}={}".format(measurement, host) stats = jline["pods"][self.pod] self.y[ykey].append(DataType.process(measurement, stats, (host,))) for measurement in combinedkeys: stats = jline["pods"][self.pod] self.y[measurement].append(DataType.process(measurement, stats, hosts)) line = self.logfile.readline() if self.line_count != -1 and ctr > self.line_start + self.line_count: break # Offset data that is averaged over the previous minute. Also determine # the highest max value of all the per-host measurements and scale each # per-host plot to the same range. 
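As an aside, the 60-sample offset described in the comment above can be illustrated in isolation. This is a standalone sketch with a hypothetical helper name (`offset_minute_average`); it is not part of the tool itself:

```python
# Measurements flagged skip60 are one-minute averages, so each sample
# describes the minute that ended at its timestamp. Dropping the first 60
# samples and padding the tail with None realigns the series with the
# instantaneous measurements plotted alongside it, keeping its length fixed.
def offset_minute_average(series, window=60):
    shifted = series[window:]
    shifted.extend([None] * window)
    return shifted

samples = list(range(120))
aligned = offset_minute_average(samples)
```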
overall_ymax = defaultdict(int) for host in hosts: for measurement in perhostkeys: ykey = "{}={}".format(measurement, host) overall_ymax[measurement] = max(overall_ymax[measurement], max(self.y[ykey])) if DataType.skip(measurement): self.y[ykey] = self.y[ykey][60:] self.y[ykey].extend([None] * 60) for host in hosts: for measurement in perhostkeys: ykey = "{}={}".format(measurement, host) self.ymaxes[ykey] = overall_ymax[measurement] for measurement in combinedkeys: if DataType.skip(measurement): self.y[measurement] = self.y[measurement][60:] self.y[measurement].extend([None] * 60) def run(self, mode, *args): getattr(self, mode)(*args) def plot(self): # Generate a single stacked plot of the data if self.mode.startswith("scatter"): plt.figure(figsize=(10, 10)) keys = self.y.keys() x_key = CPUDataType.key keys.remove(x_key) plotmax = len(keys) for plotnum, y_key in enumerate(keys): plt.subplot(len(keys), 1, plotnum + 1) plotScatter( self.titles[y_key], self.y[x_key], self.y[y_key], (0, self.ymaxes[x_key],), (0, self.ymaxes[y_key]), plotnum == plotmax - 1, ) else: plt.figure(figsize=(18.5, min(5 + len(self.y.keys()), 18))) plotmax = len(self.y.keys()) for plotnum, measurement in enumerate(self.y.keys()): plt.subplot(len(self.y), 1, plotnum + 1) plotSeries(self.titles[measurement], self.x, self.y[measurement], 0, self.ymaxes[measurement], plotnum == plotmax - 1) if self.save: if self.savedir: dirpath = self.savedir if not os.path.exists(dirpath): os.makedirs(dirpath) else: dirpath = os.path.dirname(self.logname) fname = ".".join((os.path.basename(self.logname), self.mode, "png"),) plt.savefig(os.path.join(dirpath, fname), orientation="landscape", format="png") if not self.noshow: plt.show() def main(): selectMode = { "basic": # Generic aggregated data for all hosts ( "combinedHosts", ( CPUDataType.key, RequestsDataType.key, ResponseDataType.key, JobsCompletedDataType.key, JobQueueDataType.key, ) ), "basicjob": # Data aggregated for all hosts - job detail ( 
"combinedHosts", ( CPUDataType.key, RequestsDataType.key, ResponseDataType.key, JobsCompletedDataType.key, JobQueueDataType.key + "-SCHEDULE", JobQueueDataType.key + "-PUSH", JobQueueDataType.key, ), ), "basicschedule": # Data aggregated for all hosts - job detail ( "combinedHosts", ( CPUDataType.key, JobsCompletedDataType.key, JobQueueDataType.key + "-SCHEDULE_ORGANIZER_WORK", JobQueueDataType.key + "-SCHEDULE_ORGANIZER_SEND_WORK", JobQueueDataType.key + "-SCHEDULE_REPLY_WORK", JobQueueDataType.key + "-SCHEDULE_AUTO_REPLY_WORK", JobQueueDataType.key + "-SCHEDULE_REFRESH_WORK", JobQueueDataType.key + "-PUSH", JobQueueDataType.key, ), ), "basicmethod": # Data aggregated for all hosts - method detail ( "combinedHosts", ( CPUDataType.key, RequestsDataType.key, ResponseDataType.key, MethodCountDataType.key + "-PUT ics", MethodCountDataType.key + "-REPORT cal-home-sync", MethodCountDataType.key + "-PROPFIND Calendar Home", MethodCountDataType.key + "-REPORT cal-sync", MethodCountDataType.key + "-PROPFIND Calendar", ), ), "hostrequests": # Per-host requests, and total requests & CPU ( "perHost", (RequestsDataType.key,), ( RequestsDataType.key, CPUDataType.key, ), ), "hostcpu": # Per-host CPU, and total CPU ( "perHost", (CPUDataType.key,), ( RequestsDataType.key, CPUDataType.key, ), ), "hostcompleted": # Per-host job completion, and total CPU, total jobs queued ( "perHost", (JobsCompletedDataType.key,), ( CPUDataType.key, JobQueueDataType.key, ), ), "scatter": # Scatter plots of request count and response time vs CPU ( "combinedHosts", ( CPUDataType.key, RequestsDataType.key, ResponseDataType.key, ) ), } parser = ArgumentParser( formatter_class=MyHelpFormatter, description="Dashboard time series processor.", epilog="""Available modes: basic - stacked plots of total CPU, total request count, total average response time, completed jobs, and job queue size. basicjob - as per basic but with queued SCHEDULE_*_WORK and queued PUSH_NOTIFICATION_WORK plots. 
basicschedule - stacked plots of total CPU, completed jobs, each queued SCHEDULE_*_WORK, queued, PUSH_NOTIFICATION_WORK, and overall job queue size. basicmethod - stacked plots of total CPU, total request count, total average response time, PUT-ics, REPORT cal-home-sync, PROPFIND Calendar Home, REPORT cal-sync, and PROPFIND Calendar. hostrequests - stacked plots of per-host request counts, total request count, and total CPU. hostcpu - stacked plots of per-host CPU, total request count, and total CPU. hostcompleted - stacked plots of per-host completed jobs, total CPU, and job queue size. scatter - scatter plot of request count and response time vs CPU. """, ) parser.add_argument("-l", default=SUPPRESS, required=True, help="Log file to process") parser.add_argument("-i", default=SUPPRESS, help="Directory to store image (default: log file directory)") parser.add_argument("-p", default=SUPPRESS, help="Name of pod to analyze") parser.add_argument("-s", default=SUPPRESS, help="Name of server to analyze") parser.add_argument("--save", action="store_true", help="Save plot PNG image") parser.add_argument("--noshow", action="store_true", help="Don't show the plot on screen") parser.add_argument("--start", type=int, default=0, help="Log line to start from") parser.add_argument("--count", type=int, default=-1, help="Number of log lines to process from start") parser.add_argument("--mode", default="basic", choices=sorted(selectMode.keys()), help="Type of plot to produce") parser.add_argument("-v", action="store_true", help="Verbose") args = parser.parse_args() calculator = Calculator(args) calculator.run(*selectMode[args.mode]) calculator.plot() def plotSeries(title, x, y, ymin=None, ymax=None, last_subplot=True): """ Plot the chosen dataset key for each scanned data file. 
    @param title: y-axis label for this subplot
    @type title: L{str}
    @param x: x-axis data points
    @type x: L{list}
    @param y: y-axis data points
    @type y: L{list}
    @param ymin: minimum value for y-axis or L{None} for default
    @type ymin: L{int} or L{float}
    @param ymax: maximum value for y-axis or L{None} for default
    @type ymax: L{int} or L{float}
    """

    plt.plot(x, y)

    if ymin is not None:
        plt.ylim(ymin=ymin)
    if ymax is not None:
        plt.ylim(ymax=ymax)
    plt.xlim(0, 3600)

    frame = plt.gca()

    if last_subplot:
        plt.xlabel("Time (minutes)")
    plt.ylabel(title, fontsize="small", horizontalalignment="right", rotation="horizontal")

    # Setup axes - want 0 - 60 minute scale for x-axis
    plt.tick_params(labelsize="small")
    plt.xticks(range(0, 3601, 300), range(0, 61, 5) if last_subplot else [])
    frame.set_xticks(range(0, 3601, 60), minor=True)
    plt.grid(True, "minor", "x", alpha=0.5, linewidth=0.5)


def plotScatter(title, x, y, xlim=None, ylim=None, last_subplot=True):
    """
    Plot the chosen dataset key for each scanned data file.

    @param title: y-axis label for this subplot
    @type title: L{str}
    @param x: x-axis data points
    @type x: L{list}
    @param y: y-axis data points
    @type y: L{list}
    @param xlim: minimum, maximum value for x-axis or L{None} for default
    @type xlim: L{tuple} of two L{int} or L{float}
    @param ylim: minimum, maximum value for y-axis or L{None} for default
    @type ylim: L{tuple} of two L{int} or L{float}
    """

    # Remove any None values
    x, y = zip(*filter(lambda xy: xy[0] is not None and xy[1] is not None, zip(x, y)))

    plt.scatter(x, y, marker=".")

    plt.xlabel("CPU")
    plt.ylabel(title, fontsize="small", verticalalignment="center", rotation="vertical")

    if xlim is not None:
        plt.xlim(*xlim)
    if ylim is not None:
        plt.ylim(*ylim)


if __name__ == "__main__":
    main()
calendarserver-9.1+dfsg/calendarserver/tools/dashview.py
#!/usr/bin/env python
##
# Copyright (c) 2012-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ A curses (or plain text) based dashboard for viewing various aspects of the server as exposed by the L{DashboardProtocol} stats socket. """ from argparse import HelpFormatter, SUPPRESS, OPTIONAL, ZERO_OR_MORE, \ ArgumentParser from collections import OrderedDict, defaultdict from operator import itemgetter from zlib import decompress import curses.panel import errno import fcntl import json import logging import os import sched import socket import struct import sys import termios import time LOG_FILENAME = "db.log" #logging.basicConfig(filename=LOG_FILENAME, level=logging.DEBUG) class MyHelpFormatter(HelpFormatter): """ Help message formatter which adds default values to argument help and retains formatting of all help text. 
""" def _fill_text(self, text, width, indent): return ''.join([indent + line for line in text.splitlines(True)]) def _get_help_string(self, action): help = action.help if '%(default)' not in action.help: if action.default is not SUPPRESS: defaulting_nargs = [OPTIONAL, ZERO_OR_MORE] if action.option_strings or action.nargs in defaulting_nargs: help += ' (default: %(default)s)' return help def main(): parser = ArgumentParser( formatter_class=MyHelpFormatter, description="Dashboard collector viewer service for CalendarServer.", epilog="One of -s or -l should be specified", ) group = parser.add_mutually_exclusive_group() group.add_argument("-s", default="localhost:8200", help="Dashboard collector service host:port") group.add_argument("-l", default=SUPPRESS, help="Dashboard collector log file to replay data from") args = parser.parse_args() # # Get configuration # if hasattr(args, "l"): server = None logfile = open(os.path.expanduser(args.l)) else: server = args.s if not server.startswith("unix:"): server = server.split(":") if len(server) == 1: server.append(8100) else: server[1] = int(server[1]) server = tuple(server) logfile = None def _wrapped(stdscrn): try: curses.curs_set(0) # make the cursor invisible curses.use_default_colors() curses.init_pair(1, curses.COLOR_RED, curses.COLOR_WHITE) except: pass d = Dashboard(server, logfile, stdscrn) d.run() curses.wrapper(_wrapped) # client = DashboardClient(self, server) # client.getOneItem("podA", "localhost:8100", "jobcount") # print(json.dumps(client.currentData["pods"]["podA"]["aggregate"], indent=1)) # dashboard = Dashboard(server, logfile, None) # client = DashboardLogfile(dashboard, logfile) # client.getOneItem("podA", "localhost:8100", "jobcount") # print(json.dumps(client.currentData["pods"]["podA"]["cal-prod-01.pixar.com:8100"], indent=1)) def safeDivision(value, total, factor=1): return value * factor / total if total else 0 def defaultIfNone(x, default): return x if x is not None else default def 
terminal_size(): h, w, _ignore_hp, _ignore_wp = struct.unpack( 'HHHH', fcntl.ioctl( 0, termios.TIOCGWINSZ, struct.pack('HHHH', 0, 0, 0, 0) ) ) return w, h class Dashboard(object): """ Main dashboard controller. Use Python's L{sched} feature to schedule updates. """ MODE_SERVER = 1 MODE_LOGREPLAY = 2 screen = None registered_windows = OrderedDict() registered_window_sets = { "D": ("Directory Panels", [],), "H": ("HTTP Panels", [],), "J": ("Jobs Panels", [],), } def __init__(self, server, logfile, screen): self.screen = screen self.paused = False self.seconds = 1.0 self.sched = sched.scheduler(time.time, time.sleep) self.aggregate = False self.selected_server = Point() self.server_window = None if server: self.client = DashboardClient(self, server) self.mode = self.MODE_SERVER else: self.client = DashboardLogfile(self, logfile) self.mode = self.MODE_LOGREPLAY self.client_error = False @classmethod def registerWindow(cls, wtype, keypress): """ Register a window type along with a key press action. This allows the controller to select the appropriate window when its key is pressed, and also provides help information to the L{HelpWindow} for each available window type. """ cls.registered_windows[keypress] = wtype @classmethod def registerWindowSet(cls, wtype, keypress): """ Register a set of window types along with a key press action. This allows the controller to select the appropriate set of windows when its key is pressed, and also provides help information to the L{HelpWindow} for each available window set type. """ cls.registered_window_sets[keypress][1].append(wtype) def run(self): """ Create the initial window and run the L{scheduler}. """ self.windows = [] self.client.update() self.displayWindow(None) self.sched.enter(self.seconds, 0, self.updateDisplay, ()) self.sched.run() def displayWindow(self, wtype): """ Toggle the specified window, or reset to launch state if None. 
""" # Toggle a specific window on or off if isinstance(wtype, type): if wtype not in [type(w) for w in self.windows]: self.windows.append(wtype(self).makeWindow()) self.windows[-1].activate() else: for window in self.windows: if type(window) == wtype: window.deactivate() self.windows.remove(window) if len(self.windows) == 0: self.displayWindow(self.registered_windows["h"]) self.resetWindows() # Reset the screen to the default config else: if self.windows: for window in self.windows: window.deactivate() self.windows = [] top = 0 self.server_window = ServersMenu(self).makeWindow() self.windows.append(self.server_window) self.windows[-1].activate() top += self.windows[-1].nlines + 1 help_top = top if wtype is None: # All windows in registered order ordered_windows = self.registered_windows.values() else: ordered_windows = list(wtype) for wtype in filter(lambda x: x.all, ordered_windows): new_win = wtype(self).makeWindow(top=top) logging.debug('created %r at panel level %r' % (new_win, new_win.z_order)) self.windows.append(wtype(self).makeWindow(top=top)) self.windows[-1].activate() top += self.windows[-1].nlines + 1 # Don't display help panel if the window is too narrow term_w, term_h = terminal_size() logging.debug("HelpWindow: rows: %s cols: %s" % (term_h, term_w)) if int(term_w) > 100: logging.debug("HelpWindow: term_w > 100, making window with top at %d" % (top)) self.windows.append(HelpWindow(self).makeWindow(top=help_top)) self.windows[-1].activate() curses.panel.update_panels() self.updateDisplay(True) def resetWindows(self): """ Reset the current set of windows. 
""" if self.windows: logging.debug("resetting windows: %r" % (self.windows)) for window in self.windows: window.deactivate() old_windows = self.windows self.windows = [] top = 0 for old in old_windows: logging.debug("processing window of type %r" % (type(old))) self.windows.append(old.__class__(self).makeWindow(top=top)) self.windows[-1].activate() # Allow the help window to float on the right edge if old.__class__.__name__ != "HelpWindow": top += self.windows[-1].nlines + 1 def updateDisplay(self, initialUpdate=False): """ Periodic update of the current window and check for a key press. """ if not self.paused: self.client.update() client_error = len(self.client.currentData) == 0 if client_error ^ self.client_error: self.client_error = client_error self.resetWindows() elif filter(lambda x: x.requiresReset(), self.windows): self.resetWindows() try: if not self.paused or initialUpdate: for window in filter( lambda x: x.requiresUpdate() or initialUpdate, self.windows ): logging.debug("updating: {}".format(window)) window.update() except Exception as e: logging.debug("updateDisplay failed: {}".format(e)) # Check keystrokes self.processKeys() if not initialUpdate: self.sched.enter(self.seconds, 0, self.updateDisplay, ()) def processKeys(self): """ Check for a key press. 
""" try: self.windows[-1].window.keypad(1) c = self.windows[-1].window.getkey() except: c = -1 if c == "q": sys.exit(0) elif c == " ": self.paused = not self.paused elif c == "t": self.seconds = 1.0 if self.seconds == 0.1 else 0.1 elif c == "a": self.displayWindow(None) elif c == "n": if self.windows: for window in self.windows: window.deactivate() self.windows = [] self.displayWindow(self.registered_windows["h"]) elif c == "x": self.aggregate = not self.aggregate self.client.update() self.displayWindow(None) elif c in self.registered_windows: self.displayWindow(self.registered_windows[c]) elif c in self.registered_window_sets: self.displayWindow(self.registered_window_sets[c][1]) elif c in (curses.keyname(curses.KEY_LEFT), curses.keyname(curses.KEY_RIGHT)) and self.server_window: self.selected_server.xplus(-1 if c == curses.keyname(curses.KEY_LEFT) else 1) if self.selected_server.x < 0: self.selected_server.x = 0 elif self.selected_server.x >= len(self.serversForPod(self.pods()[self.selected_server.y])): self.selected_server.x = len(self.serversForPod(self.pods()[self.selected_server.y])) - 1 self.resetWindows() elif c in (curses.keyname(curses.KEY_UP), curses.keyname(curses.KEY_DOWN)) and self.server_window: self.selected_server.yplus(-1 if c == curses.keyname(curses.KEY_UP) else 1) if self.selected_server.y < 0: self.selected_server.y = 0 elif self.selected_server.y >= len(self.pods()): self.selected_server.y = len(self.pods()) - 1 if self.selected_server.x >= len(self.serversForPod(self.pods()[self.selected_server.y])): self.selected_server.x = len(self.serversForPod(self.pods()[self.selected_server.y])) - 1 self.resetWindows() elif c == "1" and self.mode == self.MODE_LOGREPLAY: self.client.skipLines(60) elif c == "5" and self.mode == self.MODE_LOGREPLAY: self.client.skipLines(300) def dataForItem(self, item): return self.client.getOneItem( self.selectedPod(), self.selectedServer(), item, ) def timestamp(self): return self.client.currentData["timestamp"] def 
pods(self): return sorted(self.client.currentData.get("pods", {}).keys()) def selectedPod(self): return self.pods()[self.selected_server.y] def serversForPod(self, pod): return sorted(self.client.currentData.get("pods", {pod: {}})[pod].keys()) def selectedServer(self): return self.serversForPod(self.selectedPod())[self.selected_server.x] def selectedAggregateCount(self): return self.client.currentServerCount.get(self.selectedPod(), 1) if self.aggregate else 1 class BaseDashboardClient(object): """ Base class for clients that get dashboard data. """ def __init__(self, dashboard): self.dashboard = dashboard self.currentData = {} self.currentServerCount = {} def readData(self): """ Get the data """ raise NotImplementedError def update(self): """ Update the current data from the server. """ # Only read each item once self.currentData = self.readData() for pod, servers in self.currentData["pods"].items(): self.currentServerCount[pod] = len(servers) if self.dashboard.aggregate: self.aggregateData() def getOneItem(self, pod, server, item): """ Update the current data from the server. """ if len(self.currentData) == 0: self.update() # jobs are only requested from the first server in a pod because otherwise # it would be too expensive to run the DB query for all servers. So when we # need the jobs data, always substitute the first server's data if item in ("jobs", "jobcount"): server = sorted(self.currentData["pods"][pod].keys())[0] return self.currentData["pods"][pod][server].get(item) def aggregateData(self): """ Aggregate the data from all hosts into one for each pod. """ results = OrderedDict() results["timestamp"] = self.currentData["timestamp"] results["pods"] = OrderedDict() for pod in self.currentData["pods"].keys(): results["pods"][pod] = OrderedDict() results["pods"][pod]["aggregate"] = self.aggregatePodData(self.currentData["pods"][pod]) self.currentData = results def aggregatePodData(self, data): """ Aggregate the data from a pod from all hosts into one. 
@param data: host data to aggregate @type data: L{dict} """ results = OrderedDict() results["aggregateCount"] = len(data) # Get all items available in all servers first items = defaultdict(list) for server in data.keys(): for item in data[server].keys(): items[item].append(data[server][item]) # Iterate each item to get the data sets and aggregate for item, allHostData in items.items(): method_name = "aggregator_{}".format(item) if not hasattr(Aggregator, method_name): print("Missing aggregator for {}".format(item)) logging.error("Missing aggregator for {}".format(item)) results[item] = allHostData[0] else: results[item] = getattr(Aggregator, method_name)(allHostData) return results class DashboardClient(BaseDashboardClient): """ Client that connects to a server and fetches information. """ def __init__(self, dashboard, sockname): super(DashboardClient, self).__init__(dashboard) self.socket = None if isinstance(sockname, str): self.sockname = sockname[5:] self.useTCP = False else: self.sockname = sockname self.useTCP = True def readData(self): """ Open a socket, send the specified request, and retrieve the response. The socket closes. """ try: self.socket = socket.socket(socket.AF_INET if self.useTCP else socket.AF_UNIX, socket.SOCK_STREAM) self.socket.connect(self.sockname) self.socket.setblocking(0) data = "" t = time.time() while not data.endswith("\n"): try: d = self.socket.recv(1024) except socket.error as se: if se.args[0] != errno.EWOULDBLOCK: raise if time.time() - t > 5: raise socket.error continue if d: data += d else: break data = json.loads(data, object_pairs_hook=OrderedDict) logging.debug("data: {}".format(len(data))) self.socket.close() self.socket = None except socket.error as e: data = {} self.socket = None logging.debug("readData: failed: {}".format(e)) except ValueError as e: data = {} logging.debug("readData: failed: {}".format(e)) return data class DashboardLogfile(BaseDashboardClient): """ Client that gets data from a log file. 
""" def __init__(self, dashboard, logfile): super(DashboardLogfile, self).__init__(dashboard) self.logfile = logfile def readData(self): """ Read one line of data from the logile """ try: line = self.logfile.readline() if line[0] == "\x1e": line = line[1:] if line[0] != "{": line = decompress(line.decode("base64")) data = json.loads(line) except ValueError as e: data = {} logging.debug("readData: failed: {}".format(e)) return data def skipLines(self, count): try: for _ignore in range(count): self.logfile.readline() except ValueError as e: logging.debug("skipLines: failed: {}".format(e)) class Aggregator(object): @staticmethod def aggregator_directory(serversdata): results = OrderedDict() for server_data in serversdata: for operation_name, operation_details in server_data.items(): if operation_name not in results: results[operation_name] = operation_details elif isinstance(results[operation_name], list): results[operation_name][0] += operation_details[0] results[operation_name][1] += operation_details[1] else: results[operation_name] += operation_details return results @staticmethod def aggregator_job_assignments(serversdata): results = OrderedDict() results["workers"] = [[0, 0, 0]] * max(map(len, map(itemgetter("workers"), serversdata))) results["level"] = sum(map(itemgetter("level"), serversdata)) for server_data in serversdata: for ctr, item in enumerate(server_data["workers"]): for i in range(3): results["workers"][ctr][i] += item[i] return results @staticmethod def aggregator_jobcount(serversdata): return serversdata[0] @staticmethod def aggregator_jobs(serversdata): return serversdata[0] @staticmethod def aggregator_stats(serversdata): results = OrderedDict() for stat in ("current", "1m", "5m", "1h"): if stat in serversdata[0]: results[stat] = Aggregator.serverStat(map(itemgetter(stat), serversdata)) # NB ignore the "system" key as it is not used return results @staticmethod def serverStat(serversdata): results = OrderedDict() # Values that are summed for key 
in ("requests", "t", "t-resp-wr", "401", "500", "cpu", "slots", "max-slots",): if key in serversdata[0]: results[key] = sum(map(itemgetter(key), serversdata)) # Averaged for key in ("cpu", "max-slots",): if key in results: results[key] = safeDivision(results[key], len(serversdata)) # Values that are maxed for key in ("T-MAX",): if key in serversdata[0]: results[key] = max(map(itemgetter(key), serversdata)) # Values that are summed dict values for key in ("method", "method-t", "uid", "user-agent", "T", "T-RESP-WR",): if key in serversdata[0]: results[key] = Aggregator.dictValueSums(map(itemgetter(key), serversdata)) return results @staticmethod def aggregator_slots(serversdata): results = OrderedDict() # Sum all items in each dict slot_count = len(serversdata[0]["slots"]) results["slots"] = [] for i in range(slot_count): results["slots"].append(Aggregator.dictValueSums(map( itemgetter(i), map(itemgetter("slots"), serversdata) ))) results["slots"][i]["slot"] = i # Check for any one being overloaded results["overloaded"] = any(map(itemgetter("overloaded"), serversdata)) return results @staticmethod def aggregator_stats_system(serversdata): results = OrderedDict() for server_data in serversdata: for stat_name, stat_value in server_data.items(): if stat_name not in results: results[stat_name] = stat_value else: results[stat_name] += stat_value # Some values should be averaged for stat_name in ("memory used", "cpu use", "memory percent"): results[stat_name] = safeDivision(results[stat_name], len(serversdata)) # Want the earliest time results["start time"] = min(map(itemgetter("start time"), serversdata)) return results @staticmethod def dictValueSums(listOfDicts): """ Sum the values of a list of dicts. 
@param listOfDicts: list of dicts to sum @type listOfDicts: L{list} of L{dict} """ results = OrderedDict() for result in listOfDicts: for key, value in result.items(): if key not in results: results[key] = value else: results[key] += value return results class Point(object): def __init__(self, x=0, y=0): self.x = x self.y = y def __eq__(self, other): return self.x == other.x and self.y == other.y def xplus(self, xdiff=1): self.x += xdiff def yplus(self, ydiff=1): self.y += ydiff class BaseWindow(object): """ Common behavior for window types. """ help = "Not Implemented" all = True clientItem = None windowTitle = "" formatWidth = 0 additionalRows = 0 def __init__(self, dashboard): self.dashboard = dashboard self.rowCount = 0 self.needsReset = False self.z_order = "bottom" def makeWindow(self, top=0, left=0): self.updateRowCount() self._createWindow( self.windowTitle, self.rowCount + self.additionalRows, self.formatWidth, begin_y=top, begin_x=left ) return self def updateRowCount(self): """ Update L{self.rowCount} based on the current data """ raise NotImplementedError() def _createWindow( self, title, nlines, ncols, begin_y=0, begin_x=0 ): """ Initialize a curses window based on the sizes required. """ self.window = curses.newwin(nlines, ncols, begin_y, begin_x) self.window.nodelay(1) self.panel = curses.panel.new_panel(self.window) eval("self.panel.%s()" % (self.z_order,)) self.title = title self.nlines = nlines self.ncols = ncols self.iter = 0 self.lastResult = {} def requiresUpdate(self): """ Indicates whether a window type has dynamic data that should be refreshed on each update, or whether it is static data (e.g., L{HelpWindow}) that only needs to be drawn once. """ return True def requiresReset(self): """ Indicates that the window needs a full reset, because e.g., the number of items it displays has changed. """ return self.needsReset def activate(self): """ About to start displaying. 
""" # Update once when activated if not self.requiresUpdate(): self.update() def deactivate(self): """ Clear any drawing done by the current window type. """ self.window.erase() self.window.refresh() def update(self): """ Periodic window update - redraw the window. """ raise NotImplementedError() def tableHeader(self, hdrs, count): """ Generate the header rows. """ self.window.erase() self.window.border() self.window.addstr( 0, 2, self.title + " {} ({})".format(count, self.iter) ) pt = Point(1, 1) for hdr in hdrs: self.window.addstr(pt.y, pt.x, hdr, curses.A_REVERSE) pt.yplus() return pt def tableFooter(self, feet, pt): """ Generate the footer rows. """ self.window.hline(pt.y, pt.x, "-", self.formatWidth - 2) pt.yplus() for footer in feet: self.window.addstr(pt.y, pt.x, footer) pt.yplus() def tableRow(self, text, pt, style=curses.A_NORMAL): """ Generate a single row. """ try: self.window.addstr( pt.y, pt.x, text, style ) except curses.error as e: logging.debug("tableRow: failed: {}".format(e)) pt.yplus() def clientData(self, item=None): return self.dashboard.dataForItem(item if item else self.clientItem) class ServersMenu(BaseWindow): """ Top menu if multiple servers are present. 
""" help = "servers help" all = False windowTitle = "Servers" formatWidth = 0 additionalRows = 0 def makeWindow(self, top=0, left=0): term_w, _ignore_term_h = terminal_size() self.formatWidth = term_w - 50 return super(ServersMenu, self).makeWindow(0, 0) def updateRowCount(self): self.rowCount = len(self.dashboard.pods()) def requiresUpdate(self): return False def update(self): self.window.erase() pods = self.dashboard.pods() width = max(map(len, pods)) if pods else 0 pt = Point() for row, pod in enumerate(pods): pt.x = 0 s = ("Pod: {:>" + str(width) + "} | Servers: |").format(pod) self.window.addstr(pt.y, pt.x, s) pt.xplus(len(s)) selected_server = None for column, server in enumerate(self.dashboard.serversForPod(pod)): cell = Point(column, row) selected = cell == self.dashboard.selected_server s = " {:02d} ".format(column + 1) self.window.addstr( pt.y, pt.x, s, curses.A_REVERSE if selected else curses.A_NORMAL ) pt.xplus(len(s)) self.window.addstr(pt.y, pt.x, "|") pt.xplus() if selected: selected_server = server self.window.addstr(pt.y, pt.x, " {}".format(selected_server if selected_server else "")) pt.yplus() self.window.refresh() class HelpWindow(BaseWindow): """ Display help for the dashboard. 
""" help = "Help" all = False helpItems = ( " a - All Panels", " n - No Panels", "", " - (space) Pause", " t - Toggle Update Speed", " x - Toggle Aggregate Mode", "", " q - Quit", ) helpItemsReplay = ( " a - All Panels", " n - No Panels", "", " - (space) Pause", " 1 - ahead one minute", " 5 - ahead five minutes", " t - Toggle Update Speed", " x - Toggle Aggregate Mode", "", " q - Quit", ) windowTitle = "Help" formatWidth = 28 additionalRows = 3 def __init__(self, dashboard): super(HelpWindow, self).__init__(dashboard) if self.dashboard.mode == self.dashboard.MODE_LOGREPLAY: self.helpItems = self.helpItemsReplay def makeWindow(self, top=0, left=0): term_w, _ignore_term_h = terminal_size() help_x_offset = term_w - self.formatWidth return super(HelpWindow, self).makeWindow(0, help_x_offset) def updateRowCount(self): self.rowCount = len(self.helpItems) + len(filter(lambda x: len(x[1]) != 0, Dashboard.registered_window_sets.values())) + len(Dashboard.registered_windows) def requiresUpdate(self): return False def update(self): self.window.erase() self.window.border() self.window.addstr(0, 2, "Hotkeys") pt = Point(1, 1) items = [" {} - {}".format(keypress, wtype.help) for keypress, wtype in Dashboard.registered_windows.items()] items.append("") items.extend([" {} - {}".format(key, value[0]) for key, value in Dashboard.registered_window_sets.items() if value[1]]) items.extend(self.helpItems) for item in items: self.tableRow(item, pt) self.window.refresh() class SystemWindow(BaseWindow): """ Displays the system information provided by the server. 
""" help = "System Status" clientItem = "stats_system" windowTitle = "System" formatWidth = 54 additionalRows = 3 def updateRowCount(self): self.rowCount = len(defaultIfNone(self.clientData(), (1, 2, 3, 4,))) + 1 def update(self): records = defaultIfNone(self.clientData(), { "cpu use": 0.0, "memory percent": 0.0, "memory used": 0, "start time": time.time(), }) records["timestamp"] = self.dashboard.timestamp() if len(records) != self.rowCount: self.needsReset = True return self.iter += 1 s = " {:<30}{:>20} ".format("Item", "Value") pt = self.tableHeader((s,), len(records)) records["cpu use"] = "{:.2f}".format(records["cpu use"]) records["memory percent"] = "{:.1f}".format(records["memory percent"]) records["memory used"] = "{:.2f} GB".format( records["memory used"] / (1000.0 * 1000.0 * 1000.0) ) records["uptime"] = int(time.time() - records["start time"]) hours, mins = divmod(records["uptime"] / 60, 60) records["uptime"] = "{}:{:02d} hh:mm".format(hours, mins) del records["start time"] records["timestamp"] = records["timestamp"].replace("T", " ") for item, value in sorted(records.items(), key=lambda x: x[0]): changed = ( item in self.lastResult and self.lastResult[item] != value ) s = " {:<30}{:>20} ".format(item, value) self.tableRow( s, pt, curses.A_REVERSE if changed else curses.A_NORMAL, ) self.window.refresh() self.lastResult = records class RequestStatsWindow(BaseWindow): """ Displays the status of the server's master process worker slave slots. 
""" help = "HTTP Requests" clientItem = "stats" windowTitle = "Request Statistics" formatWidth = 92 additionalRows = 4 def updateRowCount(self): self.rowCount = 4 def update(self): records = defaultIfNone(self.clientData(), {}) self.iter += 1 s1 = " {:<8}{:>8}{:>10}{:>10}{:>10}{:>10}{:>8}{:>8}{:>8}{:>8} ".format( "Period", "Reqs", "Av-Reqs", "Av-Resp", "Av-NoWr", "Max-Resp", "Slot", "Slot", "CPU ", "500's" ) s2 = " {:<8}{:>8}{:>10}{:>10}{:>10}{:>10}{:>8}{:>8}{:>8}{:>8} ".format( "", "", "per sec", "(ms)", "(ms)", "(ms)", "Avg.", "Max", "Avg.", "" ) pt = self.tableHeader((s1, s2,), len(records)) for key, seconds in (("current", 60,), ("1m", 60,), ("5m", 5 * 60,), ("1h", 60 * 60,),): stat = records.get(key, { "requests": 0, "t": 0.0, "t-resp-wr": 0.0, "T-MAX": 0.0, "slots": 0, "cpu": 0.0, "500": 0, }) s = " {:<8}{:>8}{:>10.1f}{:>10.1f}{:>10.1f}{:>10.1f}{:>8.2f}{:>8}{:>7.1f}%{:>8} ".format( key, stat["requests"], safeDivision(float(stat["requests"]), seconds), safeDivision(stat["t"], stat["requests"]), safeDivision(stat["t"] - stat["t-resp-wr"], stat["requests"]), stat["T-MAX"], safeDivision(float(stat["slots"]), stat["requests"]), stat.get("max-slots", 0), safeDivision(stat["cpu"], stat["requests"]), stat["500"], ) self.tableRow(s, pt) self.window.refresh() self.lastResult = records class HTTPSlotsWindow(BaseWindow): """ Displays the status of the server's master process worker slave slots. 
""" help = "HTTP Slots" clientItem = "slots" windowTitle = "HTTP Slots" formatWidth = 72 additionalRows = 5 def updateRowCount(self): self.rowCount = len(defaultIfNone(self.clientData(), {"slots": ()})["slots"]) def update(self): data = defaultIfNone(self.clientData(), {"slots": {}, "overloaded": False}) records = data["slots"] if len(records) != self.rowCount: self.needsReset = True return self.iter += 1 s = " {:>4}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8} ".format( "Slot", "unack", "ack", "uncls", "total", "start", "strting", "stopped", "abd" ) pt = self.tableHeader((s,), len(records)) for record in sorted(records, key=lambda x: x["slot"]): changed = ( record["slot"] in self.lastResult and self.lastResult[record["slot"]] != record ) s = " {:>4}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8}{:>8} ".format( record["slot"], record["unacknowledged"], record["acknowledged"], record["unclosed"], record["total"], record["started"], record["starting"], record["stopped"], record["abandoned"], ) count = record["unacknowledged"] + record["acknowledged"] self.tableRow( s, pt, curses.A_REVERSE if changed else ( curses.A_BOLD if count else curses.A_NORMAL ), ) active_count = sum( [ record["unacknowledged"] + record["acknowledged"] for record in records ] ) active_slots = len(records) * self.dashboard.selectedAggregateCount() s = " {:<12}{:>8}{:>16}".format( "Total:", active_count, sum([record["total"] for record in records]), ) s += " {:>3d}%".format(safeDivision(active_count, active_slots, 100)) if data["overloaded"]: s += " OVERLOADED" self.tableFooter((s,), pt) self.window.refresh() self.lastResult = records class MethodsWindow(BaseWindow): """ Display the status of the server's request methods. 
""" help = "HTTP Methods" clientItem = "stats" stats_keys = ("current", "1m", "5m", "1h",) windowTitle = "Methods" formatWidth = 116 additionalRows = 8 def updateRowCount(self): stats = defaultIfNone(self.clientData(), {}) methods = set() for key in self.stats_keys: methods.update(stats.get(key, {}).get("method", {}).keys()) nlines = len(methods) self.rowCount = nlines def update(self): stats = defaultIfNone(self.clientData(), {}) methods = set() for key in self.stats_keys: methods.update(stats.get(key, {}).get("method", {}).keys()) if len(methods) != self.rowCount: self.needsReset = True return records = {} records_t = {} for key in self.stats_keys: records[key] = defaultIfNone(self.clientData(), {}).get(key, {}).get("method", {}) records_t[key] = defaultIfNone(self.clientData(), {}).get(key, {}).get("method-t", {}) self.iter += 1 s1 = " {:<40}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10} ".format( "", "------", "current---", "------", "1m--------", "------", "5m--------", "------", "1h--------", ) s2 = " {:<40}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10} ".format( "Method", "Number", "Av-Time", "Number", "Av-Time", "Number", "Av-Time", "Number", "Av-Time", ) s3 = " {:<40}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10} ".format( "", "", "(ms)", "", "(ms)", "", "(ms)", "", "(ms)", ) pt = self.tableHeader((s1, s2, s3,), len(records)) total_methods = dict([(key, 0) for key in self.stats_keys]) total_time = dict([(key, 0.0) for key in self.stats_keys]) for method_type in sorted(methods): for key in self.stats_keys: total_methods[key] += records[key].get(method_type, 0) total_time[key] += records_t[key].get(method_type, 0.0) changed = self.lastResult.get(method_type, 0) != records["current"].get(method_type, 0) items = [method_type] for key in self.stats_keys: items.append(records[key].get(method_type, 0)) items.append(safeDivision(records_t[key].get(method_type, 0), records[key].get(method_type, 0))) s = " {:<40}{:>8}{:>10.1f}{:>8}{:>10.1f}{:>8}{:>10.1f}{:>8}{:>10.1f} 
".format( *items ) self.tableRow( s, pt, curses.A_REVERSE if changed else curses.A_NORMAL, ) items = ["Total:"] for key in self.stats_keys: items.append(total_methods[key]) items.append(safeDivision(total_time[key], total_methods[key])) s1 = " {:<40}{:>8}{:>10.1f}{:>8}{:>10.1f}{:>8}{:>10.1f}{:>8}{:>10.1f} ".format( *items ) items = ["401s:"] for key in self.stats_keys: items.append(defaultIfNone(self.clientData(), {}).get(key, {}).get("401", 0)) items.append("") s2 = " {:<40}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10}{:>8}{:>10} ".format( *items ) self.tableFooter((s1, s2,), pt) self.window.refresh() self.lastResult = defaultIfNone(self.clientData(), {}).get("current", {}).get("method", {}) class AssignmentsWindow(BaseWindow): """ Displays the status of the server's master process worker slave slots. """ help = "Job Assignments" clientItem = "job_assignments" windowTitle = "Job Assignments" formatWidth = 40 additionalRows = 5 def updateRowCount(self): self.rowCount = len(defaultIfNone(self.clientData(), {"workers": ()})["workers"]) def update(self): data = defaultIfNone(self.clientData(), {"workers": {}, "level": 0}) records = data["workers"] if len(records) != self.rowCount: self.needsReset = True return self.iter += 1 s = " {:>4}{:>12}{:>8}{:>12} ".format( "Slot", "assigned", "load", "completed" ) pt = self.tableHeader((s,), len(records)) total_assigned = 0 total_completed = 0 for ctr, details in enumerate(records): assigned, load, completed = details total_assigned += assigned total_completed += completed changed = ( ctr in self.lastResult and self.lastResult[ctr] != assigned ) s = " {:>4}{:>12}{:>8}{:>12} ".format( ctr, assigned, load, completed, ) self.tableRow( s, pt, curses.A_REVERSE if changed else curses.A_NORMAL, ) s = " {:<6}{:>10}{:>8}{:>12}".format( "Total:", total_assigned, "{}%".format(data["level"]), total_completed, ) self.tableFooter((s,), pt) self.window.refresh() self.lastResult = records class JobsWindow(BaseWindow): """ Display the status of the 
server's job queue. """ help = "Job Activity" clientItem = "jobs" windowTitle = "Jobs" formatWidth = 98 additionalRows = 6 def updateRowCount(self): self.rowCount = defaultIfNone(self.clientData("jobcount"), 0) def update(self): records = defaultIfNone(self.clientData(), {}) if len(records) != self.rowCount: self.needsReset = True return self.iter += 1 s1 = " {:<40}{:>8}{:>10}{:>8}{:>8}{:>10}{:>10} ".format( "Work Type", "Queued", "Assigned", "Late", "Failed", "Completed", "Av-Time", ) s2 = " {:<40}{:>8}{:>10}{:>8}{:>8}{:>10}{:>10} ".format( "", "", "", "", "", "", "(ms)", ) pt = self.tableHeader((s1, s2,), len(records)) total_queued = 0 total_assigned = 0 total_late = 0 total_failed = 0 total_completed = 0 total_time = 0.0 for work_type, details in sorted(records.items(), key=lambda x: x[0]): total_queued += details["queued"] total_assigned += details["assigned"] total_late += details["late"] total_failed += details["failed"] total_completed += details["completed"] total_time += details["time"] changed = ( work_type in self.lastResult and self.lastResult[work_type]["queued"] != details["queued"] ) s = "{}{:<40}{:>8}{:>10}{:>8}{:>8}{:>10}{:>10.1f} ".format( ">" if details["queued"] else " ", work_type, details["queued"], details["assigned"], details["late"], details["failed"], details["completed"], safeDivision(details["time"], details["completed"], 1000.0) ) self.tableRow( s, pt, curses.A_REVERSE if changed else ( curses.A_BOLD if details["queued"] else curses.A_NORMAL ), ) s = " {:<40}{:>8}{:>10}{:>8}{:>8}{:>10}{:>10.1f} ".format( "Total:", total_queued, total_assigned, total_late, total_failed, total_completed, safeDivision(total_time, total_completed, 1000.0) ) self.tableFooter((s,), pt) self.window.refresh() self.lastResult = records class DirectoryStatsWindow(BaseWindow): """ Displays the status of the server's directory service calls """ help = "Directory Service" clientItem = "directory" windowTitle = "Directory Service" formatWidth = 89 additionalRows = 8 
def updateRowCount(self): self.rowCount = len(defaultIfNone(self.clientData(), {})) def update(self): records = defaultIfNone(self.clientData(), {}) if len(records) != self.rowCount: self.needsReset = True return self.iter += 1 s1 = " {:<40}{:>15}{:>15}{:>15} ".format( "Method", "Calls", "Total", "Average" ) s2 = " {:<40}{:>15}{:>15}{:>15} ".format( "", "", "(sec)", "(ms)" ) pt = self.tableHeader((s1, s2,), len(records)) overallCount = 0 overallCountRatio = 0 overallCountCached = 0 overallCountUncached = 0 overallTimeSpent = 0.0 for methodName, result in sorted(records.items(), key=lambda x: x[0]): if isinstance(result, int): count, timeSpent = result, 0.0 else: count, timeSpent = result overallCount += count if methodName.endswith("-hit"): overallCountRatio += count overallCountCached += count if methodName.endswith("-miss") or methodName.endswith("-expired"): overallCountRatio += count overallCountUncached += count overallTimeSpent += timeSpent s = " {:<40}{:>15d}{:>15.1f}{:>15.3f} ".format( methodName, count, timeSpent, safeDivision(timeSpent, count, 1000.0), ) self.tableRow(s, pt) s = " {:<40}{:>15d}{:>15.1f}{:>15.3f} ".format( "Total:", overallCount, overallTimeSpent, safeDivision(overallTimeSpent, overallCount, 1000.0) ) s_cached = " {:<40}{:>15d}{:>14.1f}%{:>15s} ".format( "Total Cached:", overallCountCached, safeDivision(overallCountCached, overallCountRatio, 100.0), "", ) s_uncached = " {:<40}{:>15d}{:>14.1f}%{:>15s} ".format( "Total Uncached:", overallCountUncached, safeDivision(overallCountUncached, overallCountRatio, 100.0), "", ) self.tableFooter((s, s_cached, s_uncached), pt) self.window.refresh() Dashboard.registerWindow(HelpWindow, "h") Dashboard.registerWindow(SystemWindow, "s") Dashboard.registerWindow(RequestStatsWindow, "r") Dashboard.registerWindow(HTTPSlotsWindow, "c") Dashboard.registerWindow(MethodsWindow, "m") Dashboard.registerWindow(AssignmentsWindow, "w") Dashboard.registerWindow(JobsWindow, "j") 
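The aggregate mode toggled with "x" above collapses per-host dashboard samples into one record per pod: counters are summed key by key (as `Aggregator.dictValueSums` does) and derived rates are computed with a division that tolerates empty samples (as `safeDivision` does). A minimal, self-contained sketch of that pattern — the `safe_division` and `dict_value_sums` names and the sample host counters below are illustrative only, not part of the tool:

```python
# Sketch of the dashboard's per-pod aggregation: sum each host's method
# counters key by key, then compute averages with a zero-safe division.
from collections import OrderedDict


def safe_division(value, total, factor=1):
    # Avoid ZeroDivisionError when no requests were recorded.
    return value * factor / total if total else 0


def dict_value_sums(list_of_dicts):
    # Sum the values of a list of dicts, key by key.
    results = OrderedDict()
    for d in list_of_dicts:
        for key, value in d.items():
            results[key] = results.get(key, 0) + value
    return results


hosts = [
    {"PROPFIND": 10, "REPORT": 2},   # host 1 method counts (sample data)
    {"PROPFIND": 5, "GET": 1},       # host 2 method counts (sample data)
]
totals = dict_value_sums(hosts)
print(totals["PROPFIND"])                      # 15
print(safe_division(totals.get("PUT", 0), 0))  # 0, not a crash
```

The same two helpers are all the per-pod "aggregate" view needs: every summable counter goes through the dict merge, and every per-request average goes through the guarded division so an idle pod renders as 0 rather than raising.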
Dashboard.registerWindow(DirectoryStatsWindow, "d")

Dashboard.registerWindowSet(SystemWindow, "H")
Dashboard.registerWindowSet(RequestStatsWindow, "H")
Dashboard.registerWindowSet(HTTPSlotsWindow, "H")
Dashboard.registerWindowSet(MethodsWindow, "H")

Dashboard.registerWindowSet(SystemWindow, "J")
Dashboard.registerWindowSet(AssignmentsWindow, "J")
Dashboard.registerWindowSet(JobsWindow, "J")

Dashboard.registerWindowSet(DirectoryStatsWindow, "D")

if __name__ == "__main__":
    main()

calendarserver-9.1+dfsg/calendarserver/tools/dbinspect.py

#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_calverify -*-
##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

"""
This tool allows data in the database to be directly inspected using a set of
simple commands.
""" from calendarserver.tools import tables from calendarserver.tools.cmdline import utilityMain, WorkerService from pycalendar.datetime import DateTime from twext.enterprise.dal.syntax import Select, Parameter, Count, Delete, \ ALL_COLUMNS from twext.who.idirectory import RecordType from twisted.internet.defer import inlineCallbacks, returnValue, succeed from twisted.python.filepath import FilePath from twisted.python.text import wordWrap from twisted.python.usage import Options from twistedcaldav import caldavxml from twistedcaldav.config import config from twistedcaldav.directory import calendaruserproxy from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE from txdav.caldav.datastore.query.filter import Filter from txdav.common.datastore.sql_tables import schema, _BIND_MODE_OWN from txdav.who.idirectory import RecordType as CalRecordType from uuid import UUID import os import sys import traceback def usage(e=None): if e: print(e) print("") try: DBInspectOptions().opt_help() except SystemExit: pass if e: sys.exit(64) else: sys.exit(0) description = '\n'.join( wordWrap( """ Usage: calendarserver_dbinspect [options] [input specifiers]\n """, int(os.environ.get('COLUMNS', '80')) ) ) class DBInspectOptions(Options): """ Command-line options for 'calendarserver_dbinspect' """ synopsis = description optFlags = [ ['verbose', 'v', "Verbose logging."], ['debug', 'D', "Debug logging."], ['purging', 'p', "Enable Purge command."], ] optParameters = [ ['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."], ] def __init__(self): super(DBInspectOptions, self).__init__() self.outputName = '-' @inlineCallbacks def UserNameFromUID(txn, uid): record = yield txn.directoryService().recordWithUID(uid) returnValue(record.shortNames[0] if record else "(%s)" % (uid,)) @inlineCallbacks def UIDFromInput(txn, value): try: returnValue(str(UUID(value)).upper()) except (ValueError, TypeError): pass record = yield 
txn.directoryService().recordWithShortName(RecordType.user, value) if record is None: record = yield txn.directoryService().recordWithShortName(CalRecordType.location, value) if record is None: record = yield txn.directoryService().recordWithShortName(CalRecordType.resource, value) if record is None: record = yield txn.directoryService().recordWithShortName(RecordType.group, value) returnValue(record.uid if record else None) class Cmd(object): _name = None @classmethod def name(cls): return cls._name def doIt(self, txn): raise NotImplementedError class TableSizes(Cmd): _name = "Show Size of each Table" @inlineCallbacks def doIt(self, txn): results = [] for dbtable in schema.model.tables: # @UndefinedVariable dbtable = getattr(schema, dbtable.name) count = yield self.getTableSize(txn, dbtable) results.append((dbtable.model.name, count,)) # Print table of results table = tables.Table() table.addHeader(("Table", "Row Count")) for dbtable, count in sorted(results): table.addRow(( dbtable, count, )) print("\n") print("Database Tables (total=%d):\n" % (len(results),)) table.printTable() @inlineCallbacks def getTableSize(self, txn, dbtable): rows = (yield Select( [Count(ALL_COLUMNS), ], From=dbtable, ).on(txn)) returnValue(rows[0][0]) class CalendarHomes(Cmd): _name = "List Calendar Homes" @inlineCallbacks def doIt(self, txn): uids = yield self.getAllHomeUIDs(txn) # Print table of results missing = 0 table = tables.Table() table.addHeader(("Owner UID", "Short Name")) for uid in sorted(uids): shortname = yield UserNameFromUID(txn, uid) if shortname.startswith("("): missing += 1 table.addRow(( uid, shortname, )) print("\n") print("Calendar Homes (total=%d, missing=%d):\n" % (len(uids), missing,)) table.printTable() @inlineCallbacks def getAllHomeUIDs(self, txn): ch = schema.CALENDAR_HOME rows = (yield Select( [ch.OWNER_UID, ], From=ch, ).on(txn)) returnValue(tuple([row[0] for row in rows])) class CalendarHomesSummary(Cmd): _name = "List Calendar Homes with summary 
information" @inlineCallbacks def doIt(self, txn): uids = yield self.getCalendars(txn) results = {} for uid, calname, count in sorted(uids, key=lambda x: x[0]): totalname, totalcount = results.get(uid, (0, 0,)) if calname != "inbox": totalname += 1 totalcount += count results[uid] = (totalname, totalcount,) # Print table of results table = tables.Table() table.addHeader(("Owner UID", "Short Name", "Calendars", "Resources")) totals = [0, 0, 0] for uid in sorted(results.keys()): shortname = yield UserNameFromUID(txn, uid) table.addRow(( uid, shortname, results[uid][0], results[uid][1], )) totals[0] += 1 totals[1] += results[uid][0] totals[2] += results[uid][1] table.addFooter(("Total", totals[0], totals[1], totals[2])) table.addFooter(( "Average", "", "%.2f" % ((1.0 * totals[1]) / totals[0] if totals[0] else 0,), "%.2f" % ((1.0 * totals[2]) / totals[0] if totals[0] else 0,), )) print("\n") print("Calendars with resource count (total=%d):\n" % (len(results),)) table.printTable() @inlineCallbacks def getCalendars(self, txn): ch = schema.CALENDAR_HOME cb = schema.CALENDAR_BIND co = schema.CALENDAR_OBJECT rows = (yield Select( [ ch.OWNER_UID, cb.CALENDAR_RESOURCE_NAME, Count(co.RESOURCE_ID), ], From=ch.join( cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And( cb.BIND_MODE == _BIND_MODE_OWN)).join( co, type="left", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)), GroupBy=(ch.OWNER_UID, cb.CALENDAR_RESOURCE_NAME) ).on(txn)) returnValue(tuple(rows)) class Calendars(Cmd): _name = "List Calendars" @inlineCallbacks def doIt(self, txn): uids = yield self.getCalendars(txn) # Print table of results table = tables.Table() table.addHeader(("Owner UID", "Short Name", "Calendar", "Resources")) for uid, calname, count in sorted(uids, key=lambda x: (x[0], x[1])): shortname = yield UserNameFromUID(txn, uid) table.addRow(( uid, shortname, calname, count )) print("\n") print("Calendars with resource count (total=%d):\n" % (len(uids),)) table.printTable() 
    @inlineCallbacks
    def getCalendars(self, txn):
        ch = schema.CALENDAR_HOME
        cb = schema.CALENDAR_BIND
        co = schema.CALENDAR_OBJECT
        rows = (yield Select(
            [
                ch.OWNER_UID,
                cb.CALENDAR_RESOURCE_NAME,
                Count(co.RESOURCE_ID),
            ],
            From=ch.join(
                cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
                    cb.BIND_MODE == _BIND_MODE_OWN)).join(
                co, type="left", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
            GroupBy=(ch.OWNER_UID, cb.CALENDAR_RESOURCE_NAME)
        ).on(txn))
        returnValue(tuple(rows))


class CalendarsByOwner(Cmd):

    _name = "List Calendars for Owner UID/Short Name"

    @inlineCallbacks
    def doIt(self, txn):

        uid = raw_input("Owner UID/Name: ")
        uid = yield UIDFromInput(txn, uid)
        uids = yield self.getCalendars(txn, uid)

        # Print table of results
        table = tables.Table()
        table.addHeader(("Owner UID", "Short Name", "Calendars", "ID", "Resources"))
        totals = [0, 0, ]
        for uid, calname, resid, count in sorted(uids, key=lambda x: x[1]):
            shortname = yield UserNameFromUID(txn, uid)
            table.addRow((
                uid if totals[0] == 0 else "",
                shortname if totals[0] == 0 else "",
                calname,
                resid,
                count,
            ))
            totals[0] += 1
            totals[1] += count
        table.addFooter(("Total", "", totals[0], "", totals[1]))

        print("\n")
        print("Calendars with resource count (total=%d):\n" % (len(uids),))
        table.printTable()

    @inlineCallbacks
    def getCalendars(self, txn, uid):
        ch = schema.CALENDAR_HOME
        cb = schema.CALENDAR_BIND
        co = schema.CALENDAR_OBJECT
        rows = (yield Select(
            [
                ch.OWNER_UID,
                cb.CALENDAR_RESOURCE_NAME,
                co.CALENDAR_RESOURCE_ID,
                Count(co.RESOURCE_ID),
            ],
            From=ch.join(
                cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
                    cb.BIND_MODE == _BIND_MODE_OWN)).join(
                co, type="left", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
            Where=(ch.OWNER_UID == Parameter("UID")),
            GroupBy=(ch.OWNER_UID, cb.CALENDAR_RESOURCE_NAME, co.CALENDAR_RESOURCE_ID)
        ).on(txn, **{"UID": uid}))
        returnValue(tuple(rows))


class Events(Cmd):

    _name = "List Events"

    @inlineCallbacks
    def doIt(self, txn):

        uids = yield self.getEvents(txn)

        # Print table of results
        table = tables.Table()
        table.addHeader(("Owner UID", "Short Name", "Calendar", "ID", "Type", "UID"))
        for uid, calname, id, caltype, caluid in sorted(uids, key=lambda x: (x[0], x[1])):
            shortname = yield UserNameFromUID(txn, uid)
            table.addRow((
                uid,
                shortname,
                calname,
                id,
                caltype,
                caluid
            ))

        print("\n")
        print("Calendar events (total=%d):\n" % (len(uids),))
        table.printTable()

    @inlineCallbacks
    def getEvents(self, txn):
        ch = schema.CALENDAR_HOME
        cb = schema.CALENDAR_BIND
        co = schema.CALENDAR_OBJECT
        rows = (yield Select(
            [
                ch.OWNER_UID,
                cb.CALENDAR_RESOURCE_NAME,
                co.RESOURCE_ID,
                co.ICALENDAR_TYPE,
                co.ICALENDAR_UID,
            ],
            From=ch.join(
                cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
                    cb.BIND_MODE == _BIND_MODE_OWN)).join(
                co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
        ).on(txn))
        returnValue(tuple(rows))


class EventsByCalendar(Cmd):

    _name = "List Events for a specific calendar"

    @inlineCallbacks
    def doIt(self, txn):

        rid = raw_input("Resource-ID: ")
        try:
            int(rid)
        except ValueError:
            print('Resource ID must be an integer')
            returnValue(None)

        uids = yield self.getEvents(txn, rid)

        # Print table of results
        table = tables.Table()
        table.addHeader(("Type", "UID", "Resource Name", "Resource ID",))
        for caltype, caluid, rname, rid in sorted(uids, key=lambda x: x[1]):
            table.addRow((
                caltype,
                caluid,
                rname,
                rid,
            ))

        print("\n")
        print("Calendar events (total=%d):\n" % (len(uids),))
        table.printTable()

    @inlineCallbacks
    def getEvents(self, txn, rid):
        ch = schema.CALENDAR_HOME
        cb = schema.CALENDAR_BIND
        co = schema.CALENDAR_OBJECT
        rows = (yield Select(
            [
                co.ICALENDAR_TYPE,
                co.ICALENDAR_UID,
                co.RESOURCE_NAME,
                co.RESOURCE_ID,
            ],
            From=ch.join(
                cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
                    cb.BIND_MODE == _BIND_MODE_OWN)).join(
                co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
            Where=(co.CALENDAR_RESOURCE_ID == Parameter("RID")),
        ).on(txn, **{"RID": rid}))
        returnValue(tuple(rows))


class EventDetails(Cmd):
    """
    Base class for common event details commands.
    """

    @inlineCallbacks
    def printEventDetails(self, txn, details):
        owner, calendar, resource_id, resource, created, modified, data = details
        shortname = yield UserNameFromUID(txn, owner)
        table = tables.Table()
        table.addRow(("Owner UID:", owner,))
        table.addRow(("User Name:", shortname,))
        table.addRow(("Calendar:", calendar,))
        table.addRow(("Resource Name:", resource))
        table.addRow(("Resource ID:", resource_id))
        table.addRow(("Created", created))
        table.addRow(("Modified", modified))
        print("\n")
        table.printTable()
        print(data)

    @inlineCallbacks
    def getEventData(self, txn, whereClause, whereParams):
        ch = schema.CALENDAR_HOME
        cb = schema.CALENDAR_BIND
        co = schema.CALENDAR_OBJECT
        rows = (yield Select(
            [
                ch.OWNER_UID,
                cb.CALENDAR_RESOURCE_NAME,
                co.RESOURCE_ID,
                co.RESOURCE_NAME,
                co.CREATED,
                co.MODIFIED,
                co.ICALENDAR_TEXT,
            ],
            From=ch.join(
                cb, type="inner", on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
                    cb.BIND_MODE == _BIND_MODE_OWN)).join(
                co, type="inner", on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
            Where=whereClause,
        ).on(txn, **whereParams))
        returnValue(tuple(rows))


class Event(EventDetails):

    _name = "Get Event Data by Resource-ID"

    @inlineCallbacks
    def doIt(self, txn):

        rid = raw_input("Resource-ID: ")
        try:
            int(rid)
        except ValueError:
            print('Resource ID must be an integer')
            returnValue(None)

        result = yield self.getData(txn, rid)
        if result:
            yield self.printEventDetails(txn, result[0])
        else:
            print("Could not find resource")

    def getData(self, txn, rid):
        co = schema.CALENDAR_OBJECT
        return self.getEventData(txn, (co.RESOURCE_ID == Parameter("RID")), {"RID": rid})


class EventsByUID(EventDetails):

    _name = "Get Event Data by iCalendar UID"

    @inlineCallbacks
    def doIt(self, txn):

        uid = raw_input("UID: ")

        rows = yield self.getData(txn, uid)
        if rows:
            for result in rows:
                yield self.printEventDetails(txn, result)
        else:
            print("Could not find icalendar data")

    def getData(self, txn, uid):
        co = schema.CALENDAR_OBJECT
        return self.getEventData(txn, (co.ICALENDAR_UID == Parameter("UID")), {"UID": uid})


class EventsByName(EventDetails):

    _name = "Get Event Data by resource name"

    @inlineCallbacks
    def doIt(self, txn):

        name = raw_input("Resource Name: ")

        rows = yield self.getData(txn, name)
        if rows:
            for result in rows:
                yield self.printEventDetails(txn, result)
        else:
            print("Could not find icalendar data")

    def getData(self, txn, name):
        co = schema.CALENDAR_OBJECT
        return self.getEventData(txn, (co.RESOURCE_NAME == Parameter("NAME")), {"NAME": name})


class EventsByOwner(EventDetails):

    _name = "Get Event Data by Owner UID/Short Name"

    @inlineCallbacks
    def doIt(self, txn):

        uid = raw_input("Owner UID/Name: ")
        uid = yield UIDFromInput(txn, uid)

        rows = yield self.getData(txn, uid)
        if rows:
            for result in rows:
                yield self.printEventDetails(txn, result)
        else:
            print("Could not find icalendar data")

    def getData(self, txn, uid):
        ch = schema.CALENDAR_HOME
        return self.getEventData(txn, (ch.OWNER_UID == Parameter("UID")), {"UID": uid})


class EventsByOwnerCalendar(EventDetails):

    _name = "Get Event Data by Owner UID/Short Name and calendar name"

    @inlineCallbacks
    def doIt(self, txn):

        uid = raw_input("Owner UID/Name: ")
        uid = yield UIDFromInput(txn, uid)
        name = raw_input("Calendar resource name: ")

        rows = yield self.getData(txn, uid, name)
        if rows:
            for result in rows:
                yield self.printEventDetails(txn, result)
        else:
            print("Could not find icalendar data")

    def getData(self, txn, uid, name):
        ch = schema.CALENDAR_HOME
        cb = schema.CALENDAR_BIND
        return self.getEventData(txn, (ch.OWNER_UID == Parameter("UID")).And(cb.CALENDAR_RESOURCE_NAME == Parameter("NAME")), {"UID": uid, "NAME": name})


class EventsByPath(EventDetails):

    _name = "Get Event Data by HTTP Path"

    @inlineCallbacks
    def doIt(self, txn):

        path = raw_input("Path: ")
        pathbits = path.split("/")
        if len(pathbits) != 6:
            print("Not a valid calendar object resource path")
            returnValue(None)
        homeName = pathbits[3]
        calendarName = pathbits[4]
        resourceName = pathbits[5]

        rows = yield self.getData(txn, homeName, calendarName, resourceName)
        if rows:
            for result in rows:
                yield self.printEventDetails(txn, result)
        else:
            print("Could not find icalendar data")

    def getData(self, txn, homeName, calendarName, resourceName):
        ch = schema.CALENDAR_HOME
        cb = schema.CALENDAR_BIND
        co = schema.CALENDAR_OBJECT
        return self.getEventData(
            txn,
            (
                ch.OWNER_UID == Parameter("HOME")).And(
                cb.CALENDAR_RESOURCE_NAME == Parameter("CALENDAR")).And(
                co.RESOURCE_NAME == Parameter("RESOURCE")
            ),
            {"HOME": homeName, "CALENDAR": calendarName, "RESOURCE": resourceName}
        )


class EventsByContent(EventDetails):

    _name = "Get Event Data by Searching its Text Data"

    @inlineCallbacks
    def doIt(self, txn):

        uid = raw_input("Search for: ")

        rows = yield self.getData(txn, uid)
        if rows:
            for result in rows:
                yield self.printEventDetails(txn, result)
        else:
            print("Could not find icalendar data")

    def getData(self, txn, text):
        co = schema.CALENDAR_OBJECT
        return self.getEventData(txn, (co.ICALENDAR_TEXT.Contains(Parameter("TEXT"))), {"TEXT": text})


class EventsInTimerange(Cmd):

    _name = "Get Event Data within a specified time range"

    @inlineCallbacks
    def doIt(self, txn):

        uid = raw_input("Owner UID/Name: ")
        start = raw_input("Start Time (UTC YYYYMMDDTHHMMSSZ or YYYYMMDD): ")
        if len(start) == 8:
            start += "T000000Z"
        end = raw_input("End Time (UTC YYYYMMDDTHHMMSSZ or YYYYMMDD): ")
        if len(end) == 8:
            end += "T000000Z"

        try:
            start = DateTime.parseText(start)
        except ValueError:
            print("Invalid start value")
            returnValue(None)
        try:
            end = DateTime.parseText(end)
        except ValueError:
            print("Invalid end value")
            returnValue(None)
        timerange = caldavxml.TimeRange(start=start.getText(), end=end.getText())

        home = yield txn.calendarHomeWithUID(uid)
        if home is None:
            print("Could not find calendar home")
            returnValue(None)

        yield self.eventsForEachCalendar(home, uid, timerange)

    @inlineCallbacks
    def eventsForEachCalendar(self, home, uid, timerange):
        calendars = yield home.calendars()
        for calendar in calendars:
            if calendar.name() == "inbox":
                continue
            yield self.eventsInTimeRange(calendar, uid, timerange)

    @inlineCallbacks
    def eventsInTimeRange(self, calendar, uid, timerange):

        # Create fake filter element to match time-range
        filter = caldavxml.Filter(
            caldavxml.ComponentFilter(
                caldavxml.ComponentFilter(
                    timerange,
                    name=("VEVENT",),
                ),
                name="VCALENDAR",
            )
        )
        filter = Filter(filter)
        filter.settimezone(None)

        matches = yield calendar.search(filter, useruid=uid, fbtype=False)
        if matches is None:
            returnValue(None)
        for name, _ignore_uid, _ignore_type in matches:
            event = yield calendar.calendarObjectWithName(name)
            ical_data = yield event.componentForUser()
            ical_data.stripStandardTimezones()

            table = tables.Table()
            table.addRow(("Calendar:", calendar.name(),))
            table.addRow(("Resource Name:", name))
            table.addRow(("Resource ID:", event._resourceID))
            table.addRow(("Created", event.created()))
            table.addRow(("Modified", event.modified()))
            print("\n")
            table.printTable()
            print(ical_data.getTextWithoutTimezones())


class Purge(Cmd):

    _name = "Purge all data from tables"

    @inlineCallbacks
    def doIt(self, txn):

        if raw_input("Do you really want to remove all data [y/n]: ")[0].lower() != 'y':
            print("No data removed")
            returnValue(None)

        wipeout = (
            # These are ordered in such a way as to ensure key constraints are not
            # violated as data is removed

            schema.RESOURCE_PROPERTY,

            schema.CALENDAR_OBJECT_REVISIONS,

            schema.CALENDAR,
            # schema.CALENDAR_BIND, - cascades
            # schema.CALENDAR_OBJECT, - cascades
            # schema.TIME_RANGE, - cascades
            # schema.TRANSPARENCY, - cascades

            schema.CALENDAR_HOME,
            # schema.CALENDAR_HOME_METADATA - cascades

            schema.ATTACHMENT,

            schema.ADDRESSBOOK_OBJECT_REVISIONS,

            schema.ADDRESSBOOK_HOME,
            # schema.ADDRESSBOOK_HOME_METADATA, - cascades
            # schema.ADDRESSBOOK_BIND, - cascades
            # schema.ADDRESSBOOK_OBJECT, - cascades

            schema.NOTIFICATION_HOME,
            schema.NOTIFICATION,
            # schema.NOTIFICATION_OBJECT_REVISIONS - cascades,
        )

        for tableschema in wipeout:
            yield self.removeTableData(txn, tableschema)
            print("Removed rows in table %s" % (tableschema,))

        if calendaruserproxy.ProxyDBService is not None:
            calendaruserproxy.ProxyDBService.clean()  # @UndefinedVariable
            print("Removed all proxies")
        else:
            print("No proxy database to clean.")

        fp = FilePath(config.AttachmentsRoot)
        if fp.exists():
            for child in fp.children():
                child.remove()
            print("Removed attachments.")
        else:
            print("No attachments path to delete.")

    @inlineCallbacks
    def removeTableData(self, txn, tableschema):
        yield Delete(
            From=tableschema,
            Where=None  # Deletes all rows
        ).on(txn)


class DBInspectService(WorkerService, object):
    """
    Service which runs, exports the appropriate records, then stops the reactor.
    """

    def __init__(self, store, options, reactor, config):
        super(DBInspectService, self).__init__(store)
        self.options = options
        self.reactor = reactor
        self.config = config
        self.commands = []
        self.commandMap = {}

    def doWork(self):
        """
        Start the service.
        """

        # Register commands
        self.registerCommand(TableSizes)
        self.registerCommand(CalendarHomes)
        self.registerCommand(CalendarHomesSummary)
        self.registerCommand(Calendars)
        self.registerCommand(CalendarsByOwner)
        self.registerCommand(Events)
        self.registerCommand(EventsByCalendar)
        self.registerCommand(Event)
        self.registerCommand(EventsByUID)
        self.registerCommand(EventsByName)
        self.registerCommand(EventsByOwner)
        self.registerCommand(EventsByOwnerCalendar)
        self.registerCommand(EventsByPath)
        self.registerCommand(EventsByContent)
        self.registerCommand(EventsInTimerange)
        self.doDBInspect()
        return succeed(None)

    def postStartService(self):
        """
        Don't quit right away
        """
        pass

    def registerCommand(self, cmd):
        self.commands.append(cmd.name())
        self.commandMap[cmd.name()] = cmd

    @inlineCallbacks
    def runCommandByPosition(self, position):
        try:
            yield self.runCommandByName(self.commands[position])
        except IndexError:
            print("Position %d not available" % (position,))
            returnValue(None)

    @inlineCallbacks
    def runCommandByName(self, name):
        try:
            yield self.runCommand(self.commandMap[name])
        except IndexError:
            print("Unknown command: '%s'" % (name,))

    @inlineCallbacks
    def runCommand(self, cmd):
        txn = self.store.newTransaction()
        try:
            yield cmd().doIt(txn)
            yield txn.commit()
        except Exception, e:
            traceback.print_exc()
            print("Command '%s' failed because of: %s" % (cmd.name(), e,))
            yield txn.abort()

    def printCommands(self):
        print("\n<---- Commands ---->")
        for ctr, name in enumerate(self.commands):
            print("%d. %s" % (ctr + 1, name,))
        if self.options["purging"]:
            print("P. Purge\n")
        print("Q. Quit\n")

    @inlineCallbacks
    def doDBInspect(self):
        """
        Poll for commands, stopping the reactor when done.
        """

        while True:
            self.printCommands()
            cmd = raw_input("Command: ")
            if cmd.lower() == 'q':
                break
            if self.options["purging"] and cmd.lower() == 'p':
                yield self.runCommand(Purge)
            else:
                try:
                    position = int(cmd)
                except ValueError:
                    print("Invalid command. Try again.\n")
                    continue

                yield self.runCommandByPosition(position - 1)

        self.reactor.stop()

    def stopService(self):
        """
        Stop the service.  Nothing to do; everything should be finished by this
        time.
        """
        # TODO: stopping this service mid-export should really stop the export
        # loop, but this is not implemented because nothing will actually do it
        # except hitting ^C (which also calls reactor.stop(), so that will exit
        # anyway).


def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
    """
    Do the export.
    """
    if reactor is None:
        from twisted.internet import reactor

    options = DBInspectOptions()
    options.parseOptions(argv[1:])

    def makeService(store):
        return DBInspectService(store, options, reactor, config)

    utilityMain(options['config'], makeService, reactor, verbose=options['debug'])


if __name__ == '__main__':
    main()

calendarserver-9.1+dfsg/calendarserver/tools/delegatesmigration.py

#!/usr/bin/env python
##
# Copyright (c) 2015-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

# Suppress warning that occurs on Linux
import sys
if sys.platform.startswith("linux"):
    from Crypto.pct_warnings import PowmInsecureWarning
    import warnings
    warnings.simplefilter("ignore", PowmInsecureWarning)

from getopt import getopt, GetoptError

from calendarserver.tools.cmdline import utilityMain, WorkerService
from twext.python.log import Logger
from twext.who.expression import MatchExpression, MatchType
from twext.who.idirectory import RecordType, FieldName
from twext.who.util import uniqueResult
from twisted.internet.defer import inlineCallbacks, returnValue
from twistedcaldav.config import config
from twistedcaldav.directory import calendaruserproxy
from txdav.who.delegates import Delegates

log = Logger()


def usage(e=None):
    if e:
        print(e)
        print("")

    print("")
    print(" Migrates delegate assignments from external Postgres DB to Calendar Server Store")
    print("")
    print("options:")
    print(" -h --help: print this help and exit")
    print(" -f --config <path>: Specify caldavd.plist configuration path")
    print(" -s --server <server>")
    print(" -u --user <user>")
    print(" -p --password <password>")
    print(" -P --pod <pod>")
    print(" -d --database <database> (default = 'proxies')")
    print(" -t --dbtype <dbtype> (default = 'ProxyDB')")
    print("")

    if e:
        sys.exit(64)
    else:
        sys.exit(0)


class DelegatesMigrationService(WorkerService):
    """
    """

    function = None
    params = []

    @inlineCallbacks
    def doWork(self):
        """
        Calls the function that's been assigned to "function" and passes the
        root resource, directory, store, and whatever has been assigned to
        "params".
        """
        if self.function is not None:
            yield self.function(self.store, *self.params)


def main():

    try:
        (optargs, _ignore_args) = getopt(
            sys.argv[1:], "d:f:hp:P:s:t:u:", [
                "help",
                "config=",
                "server=",
                "user=",
                "password=",
                "pod=",
                "database=",
                "dbtype=",
            ],
        )
    except GetoptError, e:
        usage(e)

    #
    # Get configuration
    #
    configFileName = None
    server = None
    user = None
    password = None
    pod = None
    database = "proxies"
    dbtype = "ProxyDB"

    for opt, arg in optargs:

        # Args come in as encoded bytes
        arg = arg.decode("utf-8")

        if opt in ("-h", "--help"):
            usage()

        elif opt in ("-f", "--config"):
            configFileName = arg

        elif opt in ("-s", "--server"):
            server = arg

        elif opt in ("-u", "--user"):
            user = arg

        elif opt in ("-P", "--pod"):
            pod = arg

        elif opt in ("-p", "--password"):
            password = arg

        elif opt in ("-d", "--database"):
            database = arg

        elif opt in ("-t", "--dbtype"):
            dbtype = arg

        else:
            raise NotImplementedError(opt)

    DelegatesMigrationService.function = migrateDelegates
    DelegatesMigrationService.params = [server, user, password, pod, database, dbtype]
    utilityMain(configFileName, DelegatesMigrationService)


@inlineCallbacks
def getAssignments(db):
    """
    Returns all the delegate assignments from the db.

    @return: a list of (delegator group, delegate) tuples
    """
    print("Fetching delegate assignments...")
    rows = yield db.query("select GROUPNAME, MEMBER from GROUPS;")
    print("Fetched {} delegate assignments".format(len(rows)))
    returnValue(rows)


@inlineCallbacks
def copyAssignments(assignments, pod, directory, store):
    """
    Go through the list of assignments from the old db, and selectively copy
    them into the new store.

    @param assignments: the assignments from the old db
    @type assignments: a list of (delegator group, delegate) tuples
    @param pod: the name of the pod you want to migrate assignments for;
        assignments for delegators who don't reside on this pod will be
        ignored.  Set this to None to copy all assignments.
    @param directory: the directory service
    @param store: the store
    """

    delegatorsMissingPodInfo = set()
    numCopied = 0
    numOtherPod = 0
    numDirectoryBased = 0
    numExamined = 0

    # If locations and resources' delegate assignments come from the directory,
    # then we're only interested in copying assignments where the delegator is a
    # user.
    if config.GroupCaching.Enabled and config.GroupCaching.UseDirectoryBasedDelegates:
        delegatorRecordTypes = (RecordType.user,)
    else:
        delegatorRecordTypes = None

    # When faulting in delegates, only worry about users and groups.
    delegateRecordTypes = (RecordType.user, RecordType.group)

    total = len(assignments)

    for groupname, delegateUID in assignments:
        numExamined += 1

        if numExamined % 100 == 0:
            print("Processed: {} of {}...".format(numExamined, total))

        if "#" in groupname:
            delegatorUID, permission = groupname.split("#")
            try:
                delegatorRecords = yield directory.recordsFromExpression(
                    MatchExpression(FieldName.uid, delegatorUID, matchType=MatchType.equals),
                    recordTypes=delegatorRecordTypes
                )
                delegatorRecord = uniqueResult(delegatorRecords)
            except Exception, e:
                print("Failed to look up record for {}: {}".format(delegatorUID, str(e)))
                continue

            if delegatorRecord is None:
                continue

            if config.GroupCaching.Enabled and config.GroupCaching.UseDirectoryBasedDelegates:
                if delegatorRecord.recordType != RecordType.user:
                    print("Skipping non-user")
                    numDirectoryBased += 1
                    continue

            if pod:
                try:
                    if delegatorRecord.serviceNodeUID != pod:
                        numOtherPod += 1
                        continue
                except AttributeError:
                    print("Record missing serviceNodeUID", delegatorRecord.fullNames)
                    delegatorsMissingPodInfo.add(delegatorUID)
                    continue

            try:
                delegateRecords = yield directory.recordsFromExpression(
                    MatchExpression(FieldName.uid, delegateUID, matchType=MatchType.equals),
                    recordTypes=delegateRecordTypes
                )
                delegateRecord = uniqueResult(delegateRecords)
            except Exception, e:
                print("Failed to look up record for {}: {}".format(delegateUID, str(e)))
                continue

            if delegateRecord is None:
                continue

            txn = store.newTransaction(label="DelegatesMigrationService")
            yield Delegates.addDelegate(
                txn, delegatorRecord, delegateRecord,
                (permission == "calendar-proxy-write")
            )
            numCopied += 1
            yield txn.commit()

    print("Total delegate assignments examined: {}".format(numExamined))
    print("Total delegate assignments migrated: {}".format(numCopied))
    if pod:
        print("Total ignored assignments because they're on another pod: {}".format(numOtherPod))
    if numDirectoryBased:
        print("Total ignored assignments because they come from the directory: {}".format(numDirectoryBased))
    if delegatorsMissingPodInfo:
        print("Delegators missing pod info:")
        for uid in delegatorsMissingPodInfo:
            print(uid)


@inlineCallbacks
def migrateDelegates(service, store, server, user, password, pod, database, dbtype):
    print("Migrating from server {}".format(server))
    try:
        calendaruserproxy.ProxyDBService = calendaruserproxy.ProxyPostgreSQLDB(server, database, user, password, dbtype)
        calendaruserproxy.ProxyDBService.open()  # @UndefinedVariable
        assignments = yield getAssignments(calendaruserproxy.ProxyDBService)
        yield copyAssignments(assignments, pod, store.directoryService(), store)
        calendaruserproxy.ProxyDBService.close()  # @UndefinedVariable
    except IOError:
        log.error("Could not start proxydb service")
        raise


if __name__ == "__main__":
    main()

calendarserver-9.1+dfsg/calendarserver/tools/diagnose.py

#!/usr/bin/env python
##
# Copyright (c) 2014-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

import sys
from getopt import getopt, GetoptError
import os
from plistlib import readPlist, readPlistFromString
import re
import subprocess
import urllib2


PREFS_PLIST = "/Library/Server/Preferences/Calendar.plist"
ServerHostName = ""


class FileNotFound(Exception):
    """
    Missing file exception
    """


def usage(e=None):
    if e:
        print(e)
        print("")

    name = os.path.basename(sys.argv[0])
    print("usage: {} [options]".format(name))
    print("options:")
    print(" -h --help: print this help and exit")
    if e:
        sys.exit(64)
    else:
        sys.exit(0)


def main():

    if os.getuid() != 0:
        usage("This program must be run as root")

    try:
        optargs, _ignore_args = getopt(
            sys.argv[1:], "h", [
                "help",
                "phantom",
            ],
        )
    except GetoptError, e:
        usage(e)

    for opt, arg in optargs:

        # Args come in as encoded bytes
        arg = arg.decode("utf-8")

        if opt in ("-h", "--help"):
            usage()

        elif opt == "--phantom":
            result = detectPhantomVolume()
            sys.exit(result)

    osBuild = getOSBuild()
    print("OS Build: {}".format(osBuild))

    serverBuild = getServerBuild()
    print("Server Build: {}".format(serverBuild))

    print()

    try:
        if checkPlist(PREFS_PLIST):
            print("{} exists and can be parsed".format(PREFS_PLIST))
        else:
            print("{} exists but cannot be parsed".format(PREFS_PLIST))
    except FileNotFound:
        print("{} does not exist (but that's ok)".format(PREFS_PLIST))

    serverRoot = getServerRoot()
    print("Prefs plist says ServerRoot directory is: {}".format(serverRoot.encode("utf-8")))

    result = detectPhantomVolume(serverRoot)
    if result == EXIT_CODE_OK:
        print("ServerRoot volume ok")
    elif result == EXIT_CODE_SERVER_ROOT_MISSING:
        print("ServerRoot directory missing")
    elif result == EXIT_CODE_PHANTOM_DATA_VOLUME:
        print("Phantom ServerRoot volume detected")

    systemPlist = os.path.join(serverRoot, "Config", "caldavd-system.plist")
    try:
        if checkPlist(systemPlist):
            print("{} exists and can be parsed".format(systemPlist.encode("utf-8")))
        else:
            print("{} exists but cannot be parsed".format(systemPlist.encode("utf-8")))
    except FileNotFound:
        print("{} does not exist".format(systemPlist.encode("utf-8")))

    userPlist = os.path.join(serverRoot, "Config", "caldavd-user.plist")
    try:
        if checkPlist(userPlist):
            print("{} exists and can be parsed".format(userPlist.encode("utf-8")))
        else:
            print("{} exists but cannot be parsed".format(userPlist.encode("utf-8")))
    except FileNotFound:
        print("{} does not exist".format(userPlist.encode("utf-8")))

    keys = showConfigKeys()

    showProcesses()

    showServerctlStatus()

    showDiskSpace(serverRoot)

    postgresRunning = showPostgresStatus(serverRoot)

    if postgresRunning:
        showPostgresContent()

    password = getPasswordFromKeychain("com.apple.calendarserver")

    connectToAgent(password)

    connectToCaldavd(keys)

    showWebApps()


EXIT_CODE_OK = 0
EXIT_CODE_SERVER_ROOT_MISSING = 1
EXIT_CODE_PHANTOM_DATA_VOLUME = 2


def detectPhantomVolume(serverRoot=None):
    """
    Check to see if serverRoot directory exists in a "phantom" volume, meaning
    it's simply a directory under /Volumes residing on the boot volume, rather
    than a real separate volume.
""" if not serverRoot: serverRoot = getServerRoot() if not os.path.exists(serverRoot): return EXIT_CODE_SERVER_ROOT_MISSING if serverRoot.startswith("/Volumes/"): bootDevice = os.stat("/").st_dev dataDevice = os.stat(serverRoot).st_dev if bootDevice == dataDevice: return EXIT_CODE_PHANTOM_DATA_VOLUME return EXIT_CODE_OK def showProcesses(): print() print("Calendar and Contacts service processes:") _ignore_code, stdout, _ignore_stderr = runCommand( "/bin/ps", "ax", "-o user", "-o pid", "-o %cpu", "-o %mem", "-o rss", "-o etime", "-o lstart", "-o command" ) for line in stdout.split("\n"): if "_calendar" in line or "CalendarServer" in line or "COMMAND" in line: print(line) def showServerctlStatus(): print() print("Serverd status:") _ignore_code, stdout, _ignore_stderr = runCommand( "/Applications/Server.app/Contents/ServerRoot/usr/sbin/serverctl", "list", ) services = { "org.calendarserver.agent": False, "org.calendarserver.calendarserver": False, "org.calendarserver.relocate": False, } enabledBucket = False for line in stdout.split("\n"): if "enabledServices" in line: enabledBucket = True if "disabledServices" in line: enabledBucket = False for service in services: if service in line: services[service] = enabledBucket for service, enabled in services.iteritems(): print( "{service} is {enabled}".format( service=service, enabled="enabled" if enabled else "disabled" ) ) def showDiskSpace(serverRoot): print() print("Disk space on boot volume:") _ignore_code, stdout, _ignore_stderr = runCommand( "/bin/df", "-H", "/", ) print(stdout) print("Disk space on service data volume:") _ignore_code, stdout, _ignore_stderr = runCommand( "/bin/df", "-H", serverRoot ) print(stdout) print("Disk space used by Calendar and Contacts service:") _ignore_code, stdout, _ignore_stderr = runCommand( "/usr/bin/du", "-sh", os.path.join(serverRoot, "Config"), os.path.join(serverRoot, "Data"), os.path.join(serverRoot, "Logs"), ) print(stdout) def showPostgresStatus(serverRoot): clusterPath = 
os.path.join(serverRoot, "Data", "Database.xpg", "cluster.pg") print() print("Postgres status for cluster {}:".format(clusterPath.encode("utf-8"))) code, stdout, stderr = runCommand( "/usr/bin/sudo", "-u", "calendar", "/Applications/Server.app/Contents/ServerRoot/usr/bin/pg_ctl", "status", "-D", clusterPath ) if stdout: print(stdout) if stderr: print(stderr) if code: return False return True def runSQLQuery(query): _ignore_code, stdout, stderr = runCommand( "/Applications/Server.app/Contents/ServerRoot/usr/bin/psql", "-h", "/var/run/caldavd/PostgresSocket", "--dbname=caldav", "--username=caldav", "--command={}".format(query), ) if stdout: print(stdout) if stderr: print(stderr) def countFromSQLQuery(query): _ignore_code, stdout, _ignore_stderr = runCommand( "/Applications/Server.app/Contents/ServerRoot/usr/bin/psql", "-h", "/var/run/caldavd/PostgresSocket", "--dbname=caldav", "--username=caldav", "--command={}".format(query), ) lines = stdout.split("\n") try: count = int(lines[2]) except IndexError: count = 0 return count def listDatabases(): _ignore_code, stdout, stderr = runCommand( "/Applications/Server.app/Contents/ServerRoot/usr/bin/psql", "-h", "/var/run/caldavd/PostgresSocket", "--dbname=caldav", "--username=caldav", "--list", ) if stdout: print(stdout) if stderr: print(stderr) def showPostgresContent(): print() print("Postgres content:") print() listDatabases() print("'calendarserver' table...") runSQLQuery("select * from calendarserver;") count = countFromSQLQuery("select count(*) from calendar_home;") print("Number of calendar homes: {}".format(count)) count = countFromSQLQuery("select count(*) from calendar_object;") print("Number of calendar events: {}".format(count)) count = countFromSQLQuery("select count(*) from addressbook_home;") print("Number of contacts homes: {}".format(count)) count = countFromSQLQuery("select count(*) from addressbook_object;") print("Number of contacts cards: {}".format(count)) count = countFromSQLQuery("select count(*) from 
delegates;") print("Number of non-group delegate assignments: {}".format(count)) count = countFromSQLQuery("select count(*) from delegate_groups;") print("Number of group delegate assignments: {}".format(count)) print("'job' table...") runSQLQuery("select * from job;") def showConfigKeys(): print() print("Configuration:") _ignore_code, stdout, _ignore_stderr = runCommand( "/Applications/Server.app/Contents/ServerRoot/usr/sbin/calendarserver_config", "EnableCalDAV", "EnableCardDAV", "Notifications.Services.APNS.Enabled", "Scheduling.iMIP.Enabled", "Authentication.Basic.Enabled", "Authentication.Digest.Enabled", "Authentication.Kerberos.Enabled", "ServerHostName", "HTTPPort", "SSLPort", ) hidden = [ "ServerHostName", ] keys = {} for line in stdout.split("\n"): if "=" in line: key, value = line.strip().split("=", 1) keys[key] = value if key not in hidden: print("{key} : {value}".format(key=key, value=value)) return keys def runCommand(commandPath, *args): """ Run a command line tool and return the output """ if not os.path.exists(commandPath): raise FileNotFound commandLine = [commandPath] if args: commandLine.extend(args) child = subprocess.Popen( args=commandLine, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE, ) output, error = child.communicate() return child.returncode, output, error def getOSBuild(): try: code, stdout, _ignore_stderr = runCommand("/usr/bin/sw_vers", "-buildVersion") if not code: return stdout.strip() except: return "Unknown" def getServerBuild(): try: code, stdout, _ignore_stderr = runCommand("/usr/sbin/serverinfo", "--buildversion") if not code: return stdout.strip() except: pass return "Unknown" def getServerRoot(): """ Return the ServerRoot value from the servermgr_calendar.plist. If not present, return the default. 
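`getServerRoot()` reads the `ServerRoot` key from the preferences plist, falling back to a default when the file or key is absent. The same pattern in Python 3, where `plistlib.loads` replaces the removed `readPlist` (the inline plist below is sample data, not a real preferences file):

```python
import plistlib

SAMPLE_PLIST = b"""<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
    <key>ServerRoot</key>
    <string>/Volumes/Data/Calendar and Contacts</string>
</dict>
</plist>
"""

def server_root_from_plist(data, default="/Library/Server/Calendar and Contacts"):
    # Parse the plist bytes; fall back to the default both when the
    # key is missing (.get) and when the data cannot be parsed at all.
    try:
        return plistlib.loads(data).get("ServerRoot", default)
    except Exception:
        return default
```

Swallowing parse errors is deliberate here: for a diagnostic tool, a best-effort default beats an exception.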
@rtype: C{unicode} """ try: serverRoot = u"/Library/Server/Calendar and Contacts" if os.path.exists(PREFS_PLIST): serverRoot = readPlist(PREFS_PLIST).get("ServerRoot", serverRoot) if isinstance(serverRoot, str): serverRoot = serverRoot.decode("utf-8") return serverRoot except: return "Unknown" def checkPlist(plistPath): if not os.path.exists(plistPath): raise FileNotFound try: readPlist(plistPath) except: return False return True def showWebApps(): print() print("Web apps:") _ignore_code, stdout, _ignore_stderr = runCommand( "/Applications/Server.app/Contents/ServerRoot/usr/sbin/webappctl", "status", "-" ) print(stdout) ## # Keychain access ## passwordRegExp = re.compile(r'password: "(.*)"') def getPasswordFromKeychain(account): code, _ignore_stdout, stderr = runCommand( "/usr/bin/security", "find-generic-password", "-a", account, "-g", ) if code: return None else: match = passwordRegExp.search(stderr) if not match: print( "Password for {} not found in keychain".format(account) ) return None else: return match.group(1) readCommand = """ command readConfig """ def connectToAgent(password): print() print("Agent:") url = "http://localhost:62308/gateway/" user = "com.apple.calendarserver" auth_handler = urllib2.HTTPDigestAuthHandler() auth_handler.add_password( realm="/Local/Default", uri=url, user=user, passwd=password ) opener = urllib2.build_opener(auth_handler) # ...and install it globally so it can be used with urlopen. 
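`getPasswordFromKeychain()` scrapes the secret from the stderr of `security find-generic-password -g`, which prints it in the form `password: "..."`. A minimal sketch of that extraction (the sample output string is illustrative, not captured from a real keychain):

```python
import re

# `security find-generic-password -a <account> -g` emits the secret on
# stderr as: password: "the-secret"
password_re = re.compile(r'password: "(.*)"')

def password_from_security_output(stderr_text):
    match = password_re.search(stderr_text)
    return match.group(1) if match else None
```

Returning `None` on a miss mirrors the original, letting the caller report "not found in keychain" rather than raising.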
urllib2.install_opener(opener) # Send HTTP POST request request = urllib2.Request(url, readCommand) try: print("Attempting to send a request to the agent...") response = urllib2.urlopen(request, timeout=30) except Exception as e: print("Can't connect to agent: {}".format(e)) return False html = response.read() code = response.getcode() if code == 200: try: data = readPlistFromString(html) except Exception as e: print( "Could not parse response from agent: {error}\n{html}".format( error=e, html=html ) ) return False if "result" in data: print("...success") else: print("Error in agent's response:\n{}".format(html)) return False else: print("Got an error back from the agent: {code} {html}".format( code=code, html=html) ) return True def connectToCaldavd(keys): print() print("Server connection:") url = "https://{host}/principals/".format(host=keys["ServerHostName"]) try: print("Attempting to send a request to port 443...") response = urllib2.urlopen(url, timeout=30) html = response.read() code = response.getcode() print(code, html) if code == 200: print("Received 200 response") except urllib2.HTTPError as e: code = e.code reason = e.reason if code == 401: print("Got the expected response") else: print( "Got an unexpected response: {code} {reason}".format( code=code, reason=reason ) ) except Exception as e: print("Can't connect to port 443: {error}".format(error=e)) if __name__ == "__main__": main()
calendarserver-9.1+dfsg/calendarserver/tools/dkimtool.py
#!/usr/bin/env python ## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## import os import sys from Crypto.PublicKey import RSA from StringIO import StringIO from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks from twisted.logger import LogLevel, STDLibLogObserver from twisted.python.usage import Options from twext.python.log import Logger from txweb2.http_headers import Headers from txdav.caldav.datastore.scheduling.ischedule.dkim import RSA256, DKIMRequest, \ PublicKeyLookup, DKIMVerifier, DKIMVerificationError log = Logger() def _doKeyGeneration(options): key = RSA.generate(options["key-size"]) output = key.exportKey() lineBreak = False if options["key"]: with open(options["key"], "w") as f: f.write(output) else: print(output) lineBreak = True output = key.publickey().exportKey() if options["pub-key"]: with open(options["pub-key"], "w") as f: f.write(output) else: if lineBreak: print() print(output) lineBreak = True if options["txt"]: output = "".join(output.splitlines()[1:-1]) txt = "v=DKIM1; p=%s" % (output,) if lineBreak: print() print(txt) @inlineCallbacks def _doRequest(options): if options["verbose"]: log.levels().setLogLevelForNamespace("txdav.caldav.datastore.scheduling.ischedule.dkim", LogLevel.debug) # Parse the HTTP file with open(options["request"]) as f: request = f.read() method, uri, headers, stream = _parseRequest(request) # Setup signing headers sign_headers = options["signing"] if sign_headers is None: sign_headers = [] for hdr in ("Host", "Content-Type", "Originator", "Recipient+"): if headers.hasHeader(hdr.rstrip("+")): sign_headers.append(hdr) else: sign_headers =
sign_headers.split(":") dkim = DKIMRequest( method, uri, headers, stream, options["domain"], options["selector"], options["key"], options["algorithm"], sign_headers, True, True, False, int(options["expire"]), ) if options["fake-time"]: dkim.time = "100" dkim.expire = "200" dkim.message_id = "1" yield dkim.sign() s = StringIO() _writeRequest(dkim, s) print(s.getvalue()) @inlineCallbacks def _doVerify(options): # Parse the HTTP file with open(os.path.expanduser(options["verify"])) as f: verify = f.read() _method, _uri, headers, body = _parseRequest(verify) # Check for local public key if options["pub-key"]: PublicKeyLookup_File.pubkeyfile = os.path.expanduser(options["pub-key"]) lookup = (PublicKeyLookup_File,) else: lookup = None dkim = DKIMVerifier(headers, body, lookup) if options["fake-time"]: dkim.time = 0 try: yield dkim.verify() except DKIMVerificationError, e: print("Verification Failed: %s" % (e,)) else: print("Verification Succeeded") def _parseRequest(request): lines = request.splitlines(True) method, uri, _ignore_version = lines.pop(0).split() hdrs = [] body = None for line in lines: if body is not None: body.append(line) elif line.strip() == "": body = [] elif line[0] in (" ", "\t"): hdrs[-1] += line else: hdrs.append(line) headers = Headers() for hdr in hdrs: name, value = hdr.split(':', 1) headers.addRawHeader(name, value.strip()) stream = "".join(body) return method, uri, headers, stream def _writeRequest(request, f): f.write("%s %s HTTP/1.1\r\n" % (request.method, request.uri,)) for name, valuelist in request.headers.getAllRawHeaders(): for value in valuelist: f.write("%s: %s\r\n" % (name, value)) f.write("\r\n") f.write(request.stream.read()) class PublicKeyLookup_File(PublicKeyLookup): method = "*" pubkeyfile = None def getPublicKey(self): """ Do the key lookup using the actual lookup method. 
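`_parseRequest()` splits a raw HTTP/1.1 request into request line, headers, and body, folding continuation lines (those starting with a space or tab) into the previous header. A self-contained version of the same parse, returning the headers as a plain dict instead of a `txweb2` `Headers` object:

```python
def parse_request(request_text):
    """Split a raw HTTP/1.1 request into (method, uri, headers, body)."""
    lines = request_text.splitlines(True)  # keep line endings
    method, uri, _version = lines.pop(0).split()
    raw_headers, body = [], None
    for line in lines:
        if body is not None:
            body.append(line)            # everything after the blank line
        elif line.strip() == "":
            body = []                    # blank line ends the header section
        elif line[0] in (" ", "\t"):
            raw_headers[-1] += line      # folded (continuation) header
        else:
            raw_headers.append(line)
    headers = {}
    for hdr in raw_headers:
        name, value = hdr.split(":", 1)
        headers[name] = value.strip()
    return method, uri, headers, "".join(body or [])
```

Note the folded-header case only strips the outermost whitespace of the joined value, so interior line breaks survive, as they do in the original.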
""" with open(self.pubkeyfile) as f: data = f.read() return RSA.importKey(data) def usage(e=None): if e: print(e) print("") try: DKIMToolOptions().opt_help() except SystemExit: pass if e: sys.exit(64) else: sys.exit(0) description = """Usage: dkimtool [options] Options: -h Print this help and exit # Key Generation --key-gen Generate private/public key files --key FILE Private key file to create [stdout] --pub-key FILE Public key file to create [stdout] --key-size SIZE Key size [1024] --txt Also generate the public key TXT record --fake-time Use fake t=, x= values when signing and also ignore expiration on verification # Request --request FILE An HTTP request to sign --algorithm ALGO Signature algorithm [rsa-sha256] --domain DOMAIN Signature domain [example.com] --selector SELECTOR Signature selector [dkim] --key FILE Private key to use --signing HEADERS List of headers to sign [automatic] --expire SECONDS When to expire signature [no expiry] # Verify --verify FILE An HTTP request to verify --pkey FILE Public key to use in place of q= lookup Description: This utility is for testing DKIM signed HTTP requests. Key operations are: --key-gen: generate a private/public RSA key. --request: sign an HTTP request. --verify: verify a signed HTTP request. 
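The `--txt` option described above strips the PEM armor from the public key and joins the base64 body into the DNS TXT value, exactly as `_doKeyGeneration` does with `output.splitlines()[1:-1]`. Isolated (the PEM string in the test is a stand-in, not a real key):

```python
def dkim_txt_record(pem_public_key):
    # Drop the -----BEGIN/END----- armor lines, join the base64 body
    # into one string, and wrap it in the v=/p= form published as the
    # selector's DNS TXT record.
    body = "".join(pem_public_key.splitlines()[1:-1])
    return "v=DKIM1; p=%s" % (body,)
```

This assumes the single-line armor produced by `exportKey()`; a PEM with extra headers would need the body located explicitly.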
""" class DKIMToolOptions(Options): """ Command-line options for 'calendarserver_dkimtool' """ synopsis = description optFlags = [ ['verbose', 'v', "Verbose logging."], ['key-gen', 'g', "Generate private/public key files"], ['txt', 't', "Also generate the public key TXT record"], ['fake-time', 'f', "Fake time values for signing/verification"], ] optParameters = [ ['key', 'k', None, "Private key file to create [default: stdout]"], ['pub-key', 'p', None, 'Public key file to create [default: stdout]'], ['key-size', 'x', 1024, 'Key size'], ['request', 'r', None, 'An HTTP request to sign'], ['algorithm', 'a', RSA256, 'Signature algorithm'], ['domain', 'd', 'example.com', 'Signature domain'], ['selector', 's', 'dkim', 'Signature selector'], ['signing', 'h', None, 'List of headers to sign [automatic]'], ['expire', 'e', 3600, 'When to expire signature'], ['verify', 'w', None, 'An HTTP request to verify'], ] def __init__(self): super(DKIMToolOptions, self).__init__() self.outputName = '-' @inlineCallbacks def _runInReactor(fn, options): try: yield fn(options) except Exception, e: print(e) finally: reactor.stop() def main(argv=sys.argv, stderr=sys.stderr): options = DKIMToolOptions() options.parseOptions(argv[1:]) # # Send logging output to stdout # observer = STDLibLogObserver() observer.start() if options["verbose"]: log.levels().setLogLevelForNamespace("txdav.caldav.datastore.scheduling.ischedule.dkim", LogLevel.debug) if options["key-gen"]: _doKeyGeneration(options) elif options["request"]: reactor.callLater(0, _runInReactor, _doRequest, options) reactor.run() elif options["verify"]: reactor.callLater(0, _runInReactor, _doVerify, options) reactor.run() else: usage("Invalid options") if __name__ == '__main__': main()
calendarserver-9.1+dfsg/calendarserver/tools/export.py
#!/usr/bin/env python # -*- test-case-name: calendarserver.tools.test.test_export -*- ## # Copyright (c) 2006-2017 Apple Inc.
All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ This tool reads calendar data from a series of inputs and generates a single iCalendar file which can be opened in many calendar applications. This can be used to quickly create an iCalendar file from a user's calendars. This tool requires access to the calendar server's configuration and data storage; it does not operate by talking to the server via the network. It therefore does not apply any of the access restrictions that the server would. As such, one should be mindful that data exported via this tool may be sensitive. Please also note that this is not an appropriate tool for backups, as there is data associated with users and calendars beyond the iCalendar as visible to the owner of that calendar, including DAV properties, information about sharing, and per-user data such as alarms. 
""" from __future__ import print_function import itertools import os import shutil import sys from calendarserver.tools.cmdline import utilityMain, WorkerService from twext.python.log import Logger from twisted.internet.defer import inlineCallbacks, returnValue, succeed from twisted.python.text import wordWrap from twisted.python.usage import Options, UsageError from twistedcaldav import customxml from twistedcaldav.ical import Component, Property from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE from txdav.base.propertystore.base import PropertyName from txdav.xml import element as davxml log = Logger() def usage(e=None): if e: print(e) print("") try: ExportOptions().opt_help() except SystemExit: pass if e: sys.exit(64) else: sys.exit(0) description = '\n'.join( wordWrap( """ Usage: calendarserver_export [options] [input specifiers]\n """ + __doc__, int(os.environ.get('COLUMNS', '80')) ) ) class ExportOptions(Options): """ Command-line options for 'calendarserver_export' @ivar exporters: a list of L{DirectoryExporter} objects which can identify the calendars to export, given a directory service. This list is built by parsing --record and --collection options. """ synopsis = description optFlags = [ ['debug', 'D', "Debug logging."], ] optParameters = [ ['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."], ] def __init__(self): super(ExportOptions, self).__init__() self.exporters = [] self.outputName = '-' self.outputDirectoryName = None def opt_uid(self, uid): """ Add a calendar home directly by its UID (which is usually a directory service's GUID). """ self.exporters.append(UIDExporter(uid)) def opt_record(self, recordName): """ Add a directory record's calendar home (format: 'recordType:shortName'). """ recordType, shortName = recordName.split(":", 1) self.exporters.append(DirectoryExporter(recordType, shortName)) opt_r = opt_record def opt_collection(self, collectionName): """ Add a calendar collection. 
This option must be passed after --record (or a synonym, like --user). For example, to export user1's calendars called 'meetings' and 'team', invoke 'calendarserver_export --user=user1 --collection=meetings --collection=team'. """ self.exporters[-1].collections.append(collectionName) opt_c = opt_collection def opt_directory(self, dirname): """ Specify output directory path. """ self.outputDirectoryName = dirname opt_d = opt_directory def opt_output(self, filename): """ Specify output file path (default: '-', meaning stdout). """ self.outputName = filename opt_o = opt_output def opt_user(self, user): """ Add a user's calendar home (shorthand for '-r users:shortName'). """ self.opt_record("users:" + user) opt_u = opt_user def openOutput(self): """ Open the appropriate output file based on the '--output' option. """ if self.outputName == '-': return sys.stdout else: return open(self.outputName, 'wb') class _ExporterBase(object): """ Base exporter implementation that works from a home UID. @ivar collections: A list of the names of collections that this exporter should enumerate. @type collections: C{list} of C{str} """ def __init__(self): self.collections = [] def getHomeUID(self, exportService): """ Subclasses must implement this. """ raise NotImplementedError() @inlineCallbacks def listCalendars(self, txn, exportService): """ Enumerate all calendars based on the directory record and/or calendars for this calendar home. """ uid = yield self.getHomeUID(exportService) home = yield txn.calendarHomeWithUID(uid, create=True) result = [] if self.collections: for collection in self.collections: result.append((yield home.calendarWithName(collection))) else: for collection in (yield home.calendars()): if collection.name() != 'inbox': result.append(collection) returnValue(result) class UIDExporter(_ExporterBase): """ An exporter that constructs a list of calendars based on the UID of the home, and an optional list of calendar names.
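The selection logic in `listCalendars()` has two modes: export exactly the collections named on the command line, or export every collection except the special 'inbox'. That logic, minus the Deferreds (the dict-based `home` here is a stand-in for the real calendar-home object, not the actual API):

```python
class ExporterBase:
    """Select which of a home's calendars to export: the explicitly
    named collections if any were given, else everything but 'inbox'."""

    def __init__(self, collections=None):
        self.collections = collections or []

    def list_calendars(self, home):
        # `home` maps collection name -> calendar data in this sketch.
        if self.collections:
            return [home[name] for name in self.collections]
        # 'inbox' holds scheduling messages, not user calendar data,
        # so it is skipped in the export-everything case.
        return [cal for name, cal in home.items() if name != "inbox"]
```

Keeping the selection on a base class is what lets `UIDExporter` and `DirectoryExporter` differ only in how they resolve the home UID.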
@ivar uid: The UID of a calendar home to export. @type uid: C{str} """ def __init__(self, uid): super(UIDExporter, self).__init__() self.uid = uid def getHomeUID(self, exportService): return succeed(self.uid) class DirectoryExporter(_ExporterBase): """ An exporter that constructs a list of calendars based on the directory services record ID of the home, and an optional list of calendar names. @ivar recordType: The directory record type to export. For example: 'users'. @type recordType: C{str} @ivar shortName: The shortName of the directory record to export, according to C{recordType}. @type shortName: C{str} """ def __init__(self, recordType, shortName): super(DirectoryExporter, self).__init__() self.recordType = recordType self.shortName = shortName @inlineCallbacks def getHomeUID(self, exportService): """ Retrieve the home UID. """ directory = exportService.directoryService() record = yield directory.recordWithShortName( directory.oldNameToRecordType(self.recordType), self.shortName ) returnValue(record.uid) @inlineCallbacks def exportToFile(calendars, fileobj): """ Export some calendars to a file as their owner would see them. @param calendars: an iterable of L{ICalendar} providers (or L{Deferred}s of same). @param fileobj: an object with a C{write} method that will accept some iCalendar data. @return: a L{Deferred} which fires when the export is complete. (Note that the file will not be closed.) @rtype: L{Deferred} that fires with C{None} """ comp = Component.newCalendar() for calendar in calendars: calendar = yield calendar for obj in (yield calendar.calendarObjects()): evt = yield obj.filteredComponent( calendar.ownerCalendarHome().uid(), True ) for sub in evt.subcomponents(): if sub.name() != 'VTIMEZONE': # Omit all VTIMEZONE components here - we will include them later # when we serialize the whole calendar. 
comp.addComponent(sub) fileobj.write(comp.getTextWithTimezones(True)) @inlineCallbacks def exportToDirectory(calendars, dirname): """ Export some calendars to a directory as their owner would see them. @param calendars: an iterable of L{ICalendar} providers (or L{Deferred}s of same). @param dirname: the path to a directory to store calendar files in; each calendar being exported will have its own .ics file @return: a L{Deferred} which fires when the export is complete. @rtype: L{Deferred} that fires with C{None} """ for calendar in calendars: homeUID = calendar.ownerCalendarHome().uid() calendarProperties = calendar.properties() comp = Component.newCalendar() for element, propertyName in ( (davxml.DisplayName, "NAME"), (customxml.CalendarColor, "COLOR"), ): value = calendarProperties.get(PropertyName.fromElement(element), None) if value: comp.addProperty(Property(propertyName, str(value))) source = "/calendars/__uids__/{}/{}/".format(homeUID, calendar.name()) comp.addProperty(Property("SOURCE", source)) for obj in (yield calendar.calendarObjects()): evt = yield obj.filteredComponent(homeUID, True) for sub in evt.subcomponents(): if sub.name() != 'VTIMEZONE': # Omit all VTIMEZONE components here - we will include them later # when we serialize the whole calendar. comp.addComponent(sub) filename = os.path.join(dirname, "{}_{}.ics".format(homeUID, calendar.name())) with open(filename, 'wb') as fileobj: fileobj.write(comp.getTextWithTimezones(True)) class ExporterService(WorkerService, object): """ Service which runs, exports the appropriate records, then stops the reactor. """ def __init__(self, store, options, output, reactor, config): super(ExporterService, self).__init__(store) self.options = options self.output = output self.reactor = reactor self.config = config self._directory = self.store.directoryService() @inlineCallbacks def doWork(self): """ Do the export, stopping the reactor when done.
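`doWork()` merges each exporter's calendar list into the single stream that `exportToFile()`/`exportToDirectory()` consume, using `itertools.chain`. In miniature:

```python
import itertools

def flatten(per_exporter_lists):
    # chain(*lists) yields every calendar from every exporter, in
    # order, as one flat iterable.
    return list(itertools.chain(*per_exporter_lists))
```

In the real service the inner lists arrive from Deferreds, but the flattening step is the same.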
""" txn = self.store.newTransaction() try: allCalendars = itertools.chain( *[(yield exporter.listCalendars(txn, self)) for exporter in self.options.exporters] ) if self.options.outputDirectoryName: dirname = self.options.outputDirectoryName if os.path.exists(dirname): shutil.rmtree(dirname) os.mkdir(dirname) yield exportToDirectory(allCalendars, dirname) else: yield exportToFile(allCalendars, self.output) self.output.close() yield txn.commit() # TODO: should be read-only, so commit/abort shouldn't make a # difference. commit() for now, in case any transparent cache / # update stuff needed to happen, don't want to undo it. except: log.failure("doWork()") def directoryService(self): """ Get an appropriate directory service. """ return self._directory def stopService(self): """ Stop the service. Nothing to do; everything should be finished by this time. """ # TODO: stopping this service mid-export should really stop the export # loop, but this is not implemented because nothing will actually do it # except hitting ^C (which also calls reactor.stop(), so that will exit # anyway). def main(argv=sys.argv, stderr=sys.stderr, reactor=None): """ Do the export. """ if reactor is None: from twisted.internet import reactor options = ExportOptions() try: options.parseOptions(argv[1:]) except UsageError, e: usage(e) if options.outputDirectoryName: output = None else: try: output = options.openOutput() except IOError, e: stderr.write( "Unable to open output file for writing: %s\n" % (e) ) sys.exit(1) def makeService(store): from twistedcaldav.config import config return ExporterService(store, options, output, reactor, config) utilityMain(options["config"], makeService, reactor, verbose=options["debug"])
calendarserver-9.1+dfsg/calendarserver/tools/gateway.py
#!/usr/bin/env python ## # Copyright (c) 2006-2017 Apple Inc. All rights reserved.
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function # Suppress warning that occurs on Linux import sys if sys.platform.startswith("linux"): from Crypto.pct_warnings import PowmInsecureWarning import warnings warnings.simplefilter("ignore", PowmInsecureWarning) from getopt import getopt, GetoptError import os from plistlib import readPlistFromString, writePlistToString import uuid import xml from calendarserver.tools.cmdline import utilityMain from calendarserver.tools.config import ( WRITABLE_CONFIG_KEYS, setKeyPath, getKeyPath, flattenDictionary, WritableConfig ) from calendarserver.tools.purge import ( WorkerService, PurgeOldEventsService, DEFAULT_BATCH_SIZE, DEFAULT_RETAIN_DAYS, PrincipalPurgeWork ) from calendarserver.tools.util import ( recordForPrincipalID, autoDisableMemcached, getProxies, setProxies ) from pycalendar.datetime import DateTime from twext.who.directory import DirectoryRecord from twisted.internet.defer import inlineCallbacks, succeed, returnValue from twistedcaldav.config import config, ConfigDict from txdav.who.idirectory import RecordType as CalRecordType from twext.who.idirectory import FieldName from twisted.python.constants import Names, NamedConstant from txdav.who.delegates import Delegates, RecordType as DelegateRecordType attrMap = { 'GeneratedUID': {'attr': 'uid', }, 'RealName': {'attr': 'fullNames', }, 'RecordName': {'attr': 'shortNames', }, 'AutoScheduleMode': {'attr': 'autoScheduleMode', }, 'AutoAcceptGroup': 
{'attr': 'autoAcceptGroup', }, # 'Comment': {'extras': True, 'attr': 'comment', }, # 'Description': {'extras': True, 'attr': 'description', }, # 'Type': {'extras': True, 'attr': 'type', }, # For "Locations", i.e. scheduled spaces 'Capacity': {'attr': 'capacity', }, 'Floor': {'attr': 'floor', }, 'AssociatedAddress': {'attr': 'associatedAddress', }, # For "Addresses", i.e. nonscheduled areas containing Locations 'AbbreviatedName': {'attr': 'abbreviatedName', }, 'StreetAddress': {'attr': 'streetAddress', }, 'GeographicLocation': {'attr': 'geographicLocation', }, } def usage(e=None): name = os.path.basename(sys.argv[0]) print("usage: %s [options]" % (name,)) print("") print(" TODO: describe usage") print("") print("options:") print(" -h --help: print this help and exit") print(" -e --error: send stderr to stdout") print(" -f --config : Specify caldavd.plist configuration path") print("") if e: sys.exit(64) else: sys.exit(0) class RunnerService(WorkerService): """ A wrapper around Runner which uses utilityMain to get the store """ commands = None @inlineCallbacks def doWork(self): """ Create/run a Runner to execute the commands """ runner = Runner(self.store, self.commands) if runner.validate(): yield runner.run() def doWorkWithoutStore(self): respondWithError("Database is not available") return succeed(None) def main(): try: (optargs, _ignore_args) = getopt( sys.argv[1:], "hef:", [ "help", "error", "config=", ], ) except GetoptError, e: usage(e) # # Get configuration # configFileName = None debug = False for opt, arg in optargs: if opt in ("-h", "--help"): usage() if opt in ("-e", "--error"): debug = True elif opt in ("-f", "--config"): configFileName = arg else: raise NotImplementedError(opt) # # Read commands from stdin # rawInput = sys.stdin.read() try: plist = readPlistFromString(rawInput) except xml.parsers.expat.ExpatError, e: # @UndefinedVariable respondWithError(str(e)) return # If the plist is an array, each element of the array is a separate # command 
dictionary. if isinstance(plist, list): commands = plist else: commands = [plist] RunnerService.commands = commands utilityMain(configFileName, RunnerService, verbose=debug) class Runner(object): def __init__(self, store, commands, output=None): self.store = store self.dir = store.directoryService() self.commands = commands if output is None: output = sys.stdout self.output = output def validate(self): # Make sure commands are valid for command in self.commands: if 'command' not in command: self.respondWithError("'command' missing from plist") return False commandName = command['command'] methodName = "command_%s" % (commandName,) if not hasattr(self, methodName): self.respondWithError("Unknown command '%s'" % (commandName,)) return False return True @inlineCallbacks def run(self): # This method can be called as the result of an agent request. We # check to see if memcached is there for each call because the server # could have stopped/started since the last time. for pool in config.Memcached.Pools.itervalues(): pool.ClientEnabled = True autoDisableMemcached(config) try: for command in self.commands: commandName = command['command'] methodName = "command_%s" % (commandName,) if hasattr(self, methodName): (yield getattr(self, methodName)(command)) else: self.respondWithError("Unknown command '%s'" % (commandName,)) except Exception, e: self.respondWithError("Command failed: '%s'" % (str(e),)) raise # Locations # deferred def command_getLocationList(self, command): return self.respondWithRecordsOfTypes(self.dir, command, ["locations"]) @inlineCallbacks def _saveRecord(self, typeName, recordType, command, oldFields=None): """ Save a record using the values in the command plist, starting with any fields in the optional oldFields. @param typeName: one of "locations", "resources", "addresses"; used to return the appropriate list of records afterwards. 
@param recordType: the type of record to save @param command: the command containing values @type command: C{dict} @param oldFields: the optional fields to start with, which will be overridden by values from command @type oldFiles: C{dict} """ if oldFields is None: fields = { self.dir.fieldName.recordType: recordType, self.dir.fieldName.hasCalendars: True, self.dir.fieldName.hasContacts: True, } create = True else: fields = oldFields.copy() create = False for key, info in attrMap.iteritems(): if key in command: attrName = info['attr'] field = self.dir.fieldName.lookupByName(attrName) valueType = self.dir.fieldName.valueType(field) value = command[key] # For backwards compatibility, convert to a list if needed if ( self.dir.fieldName.isMultiValue(field) and not isinstance(value, list) ): value = [value] if valueType == int: value = int(value) elif issubclass(valueType, Names): if value is not None: value = valueType.lookupByName(value) else: if isinstance(value, list): newList = [] for item in value: if isinstance(item, str): newList.append(item.decode("utf-8")) else: newList.append(item) value = newList elif isinstance(value, str): value = value.decode("utf-8") fields[field] = value if FieldName.uid not in fields: # No uid provided, so generate one fields[FieldName.uid] = unicode(uuid.uuid4()).upper() if FieldName.shortNames not in fields: # No short names were provided, so copy from uid fields[FieldName.shortNames] = [fields[FieldName.uid]] record = DirectoryRecord(self.dir, fields) yield self.dir.updateRecords([record], create=create) readProxies = command.get("ReadProxies", None) if readProxies: proxyRecords = [] for proxyUID in readProxies: proxyRecord = yield self.dir.recordWithUID(proxyUID) if proxyRecord is not None: proxyRecords.append(proxyRecord) readProxies = proxyRecords writeProxies = command.get("WriteProxies", None) if writeProxies: proxyRecords = [] for proxyUID in writeProxies: proxyRecord = yield self.dir.recordWithUID(proxyUID) if proxyRecord is 
not None: proxyRecords.append(proxyRecord) writeProxies = proxyRecords yield setProxies(record, readProxies, writeProxies) yield self.respondWithRecordsOfTypes(self.dir, command, [typeName]) def command_createLocation(self, command): return self._saveRecord("locations", CalRecordType.location, command) def command_createResource(self, command): return self._saveRecord("resources", CalRecordType.resource, command) def command_createAddress(self, command): return self._saveRecord("addresses", CalRecordType.address, command) @inlineCallbacks def command_setLocationAttributes(self, command): uid = command['GeneratedUID'] record = yield self.dir.recordWithUID(uid) yield self._saveRecord( "locations", CalRecordType.location, command, oldFields=record.fields ) @inlineCallbacks def command_setResourceAttributes(self, command): uid = command['GeneratedUID'] record = yield self.dir.recordWithUID(uid) yield self._saveRecord( "resources", CalRecordType.resource, command, oldFields=record.fields ) @inlineCallbacks def command_setAddressAttributes(self, command): uid = command['GeneratedUID'] record = yield self.dir.recordWithUID(uid) yield self._saveRecord( "addresses", CalRecordType.address, command, oldFields=record.fields ) @inlineCallbacks def command_getLocationAttributes(self, command): uid = command['GeneratedUID'] record = yield self.dir.recordWithUID(uid) if record is None: self.respondWithError("Principal not found: %s" % (uid,)) return recordDict = recordToDict(record) # recordDict['AutoSchedule'] = yield principal.getAutoSchedule() try: recordDict['AutoAcceptGroup'] = record.autoAcceptGroup except AttributeError: pass readProxies, writeProxies = yield getProxies(record) recordDict['ReadProxies'] = [r.uid for r in readProxies] recordDict['WriteProxies'] = [r.uid for r in writeProxies] self.respond(command, recordDict) command_getResourceAttributes = command_getLocationAttributes command_getAddressAttributes = command_getLocationAttributes # Resources def 
command_getResourceList(self, command): return self.respondWithRecordsOfTypes(self.dir, command, ["resources"]) # deferred def command_getLocationAndResourceList(self, command): return self.respondWithRecordsOfTypes(self.dir, command, ["locations", "resources"]) # Addresses def command_getAddressList(self, command): return self.respondWithRecordsOfTypes(self.dir, command, ["addresses"]) @inlineCallbacks def _delete(self, typeName, command): uid = command['GeneratedUID'] yield self.dir.removeRecords([uid]) yield self.respondWithRecordsOfTypes(self.dir, command, [typeName]) @inlineCallbacks def command_deleteLocation(self, command): txn = self.store.newTransaction() uid = command['GeneratedUID'] yield txn.enqueue(PrincipalPurgeWork, uid=uid) yield txn.commit() yield self._delete("locations", command) @inlineCallbacks def command_deleteResource(self, command): txn = self.store.newTransaction() uid = command['GeneratedUID'] yield txn.enqueue(PrincipalPurgeWork, uid=uid) yield txn.commit() yield self._delete("resources", command) def command_deleteAddress(self, command): return self._delete("addresses", command) # Config def command_readConfig(self, command): """ Return current configuration @param command: the dictionary parsed from the plist read from stdin @type command: C{dict} """ config.reload() # config.Memcached.Pools.Default.ClientEnabled = False result = {} for keyPath in WRITABLE_CONFIG_KEYS: value = getKeyPath(config, keyPath) if value is not None: # Note: config contains utf-8 encoded strings, but plistlib # wants unicode, so decode here: if isinstance(value, str): value = value.decode("utf-8") setKeyPath(result, keyPath, value) self.respond(command, result) def command_writeConfig(self, command): """ Write config to secondary, writable plist @param command: the dictionary parsed from the plist read from stdin @type command: C{dict} """ writable = WritableConfig(config, config.WritableConfigFile) writable.read() valuesToWrite = command.get("Values", {}) # 
Note: values are unicode if they contain non-ascii for keyPath, value in flattenDictionary(valuesToWrite): if keyPath in WRITABLE_CONFIG_KEYS: writable.set(setKeyPath(ConfigDict(), keyPath, value)) try: writable.save(restart=False) except Exception, e: self.respond(command, {"error": str(e)}) else: self.command_readConfig(command) # Proxies def command_listWriteProxies(self, command): return self._listProxies(command, "write") def command_listReadProxies(self, command): return self._listProxies(command, "read") @inlineCallbacks def _listProxies(self, command, proxyType): record = yield recordForPrincipalID(self.dir, command['Principal']) if record is None: self.respondWithError("Principal not found: %s" % (command['Principal'],)) returnValue(None) yield self.respondWithProxies(command, record, proxyType) def command_addReadProxy(self, command): return self._addProxy(command, "read") def command_addWriteProxy(self, command): return self._addProxy(command, "write") @inlineCallbacks def _addProxy(self, command, proxyType): record = yield recordForPrincipalID(self.dir, command['Principal']) if record is None: self.respondWithError("Principal not found: %s" % (command['Principal'],)) returnValue(None) proxyRecord = yield recordForPrincipalID(self.dir, command['Proxy']) if proxyRecord is None: self.respondWithError("Proxy not found: %s" % (command['Proxy'],)) returnValue(None) txn = self.store.newTransaction() yield Delegates.addDelegate(txn, record, proxyRecord, (proxyType == "write")) yield txn.commit() yield self.respondWithProxies(command, record, proxyType) def command_removeReadProxy(self, command): return self._removeProxy(command, "read") def command_removeWriteProxy(self, command): return self._removeProxy(command, "write") @inlineCallbacks def _removeProxy(self, command, proxyType): record = yield recordForPrincipalID(self.dir, command['Principal']) if record is None: self.respondWithError("Principal not found: %s" % (command['Principal'],)) returnValue(None) 
proxyRecord = yield recordForPrincipalID(self.dir, command['Proxy']) if proxyRecord is None: self.respondWithError("Proxy not found: %s" % (command['Proxy'],)) returnValue(None) txn = self.store.newTransaction() yield Delegates.removeDelegate(txn, record, proxyRecord, (proxyType == "write")) yield txn.commit() yield self.respondWithProxies(command, record, proxyType) @inlineCallbacks def command_purgeOldEvents(self, command): """ Convert RetainDays from the command dictionary into a date, then purge events older than that date. @param command: the dictionary parsed from the plist read from stdin @type command: C{dict} """ retainDays = command.get("RetainDays", DEFAULT_RETAIN_DAYS) cutoff = DateTime.getToday() cutoff.setDateOnly(False) cutoff.offsetDay(-retainDays) eventCount = (yield PurgeOldEventsService.purgeOldEvents(self.store, None, cutoff, DEFAULT_BATCH_SIZE)) self.respond(command, {'EventsRemoved': eventCount, "RetainDays": retainDays}) @inlineCallbacks def respondWithProxies(self, command, record, proxyType): proxies = [] recordType = { "read": DelegateRecordType.readDelegateGroup, "write": DelegateRecordType.writeDelegateGroup, }[proxyType] proxyGroup = yield self.dir.recordWithShortName(recordType, record.uid) for member in (yield proxyGroup.members()): proxies.append(member.uid) self.respond(command, { 'Principal': record.uid, 'Proxies': proxies }) @inlineCallbacks def respondWithRecordsOfTypes(self, directory, command, recordTypes): result = [] for recordType in recordTypes: recordType = directory.oldNameToRecordType(recordType) for record in (yield directory.recordsWithRecordType(recordType)): recordDict = recordToDict(record) result.append(recordDict) self.respond(command, result) def respond(self, command, result): self.output.write(writePlistToString({'command': command['command'], 'result': result})) def respondWithError(self, msg, status=1): self.output.write(writePlistToString({'error': msg, })) def recordToDict(record): recordDict = {} for key, 
info in attrMap.iteritems(): try: value = record.fields[record.service.fieldName.lookupByName(info['attr'])] if value is None: continue # For backwards compatibility, present fullName/RealName as single # value even though twext.who now has it as multiValue if key == "RealName": value = value[0] if isinstance(value, str): value = value.decode("utf-8") elif isinstance(value, NamedConstant): value = value.name recordDict[key] = value except KeyError: pass return recordDict def respondWithError(msg, status=1): sys.stdout.write(writePlistToString({'error': msg, })) if __name__ == "__main__": main() calendarserver-9.1+dfsg/calendarserver/tools/icalsplit.py000066400000000000000000000060471315003562600240100ustar00rootroot00000000000000## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function import os import sys from getopt import getopt, GetoptError from twistedcaldav.ical import Component as iComponent def splitICalendarFile(inputFileName, outputDirectory): """ Reads iCalendar data from a file and outputs a separate file for each iCalendar component into a directory. This is useful for converting a monolithic iCalendar object into a set of objects that comply with CalDAV's requirements on resources. 
""" with open(inputFileName) as inputFile: calendar = iComponent.fromStream(inputFile) assert calendar.name() == "VCALENDAR" topLevelProperties = tuple(calendar.properties()) for subcomponent in calendar.subcomponents(): subcalendar = iComponent("VCALENDAR") # # Add top-level properties from monolithic calendar to # top-level properties of subcalendar. # for property in topLevelProperties: subcalendar.addProperty(property) subcalendar.addComponent(subcomponent) uid = subcalendar.resourceUID() subFileName = os.path.join(outputDirectory, uid + ".ics") print("Writing %s" % (subFileName,)) subcalendar_file = file(subFileName, "w") try: subcalendar_file.write(str(subcalendar)) finally: subcalendar_file.close() def usage(e=None): if e: print(e) print("") name = os.path.basename(sys.argv[0]) print("usage: %s [options] input_file output_directory" % (name,)) print("") print(" Splits up monolithic iCalendar data into separate files for each") print(" subcomponent so as to comply with CalDAV requirements for") print(" individual resources.") print("") print("options:") print(" -h --help: print this help and exit") print("") if e: sys.exit(64) else: sys.exit(0) def main(): try: (optargs, args) = getopt( sys.argv[1:], "h", [ "help", ], ) except GetoptError, e: usage(e) for opt, _ignore_arg in optargs: if opt in ("-h", "--help"): usage() try: inputFileName, outputDirectory = args except ValueError: if len(args) > 2: many = "many" else: many = "few" usage("Too %s arguments" % (many,)) splitICalendarFile(inputFileName, outputDirectory) if __name__ == "__main__": main() calendarserver-9.1+dfsg/calendarserver/tools/importer.py000077500000000000000000000330161315003562600236640ustar00rootroot00000000000000#!/usr/bin/env python # -*- test-case-name: calendarserver.tools.test.test_importer -*- ## # Copyright (c) 2014-2017 Apple Inc. All rights reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ This tool imports calendar data This tool requires access to the calendar server's configuration and data storage; it does not operate by talking to the server via the network. It therefore does not apply any of the access restrictions that the server would. """ from __future__ import print_function import os import sys import uuid from calendarserver.tools.cmdline import utilityMain, WorkerService from twext.python.log import Logger from twisted.internet.defer import inlineCallbacks from twisted.python.text import wordWrap from twisted.python.usage import Options, UsageError from twistedcaldav import customxml from twistedcaldav.ical import Component from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE from twistedcaldav.timezones import TimezoneCache from txdav.base.propertystore.base import PropertyName from txdav.caldav.datastore.scheduling.cuaddress import LocalCalendarUser from txdav.caldav.datastore.scheduling.itip import iTipGenerator from txdav.caldav.datastore.scheduling.processing import ImplicitProcessor from txdav.caldav.icalendarstore import ComponentUpdateState from txdav.common.icommondatastore import UIDExistsError from txdav.xml import element as davxml log = Logger() def usage(e=None): if e: print(e) print("") try: ImportOptions().opt_help() except SystemExit: pass if e: sys.exit(64) else: sys.exit(0) description = '\n'.join( wordWrap( """ Usage: calendarserver_import [options] [input specifiers]\n """ + 
__doc__, int(os.environ.get('COLUMNS', '80')) ) ) class ImportException(Exception): """ An error occurred during import """ class ImportOptions(Options): """ Command-line options for 'calendarserver_import' """ synopsis = description optFlags = [ ['debug', 'D', "Debug logging."], ] optParameters = [ ['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."], ] def __init__(self): super(ImportOptions, self).__init__() self.inputName = '-' self.inputDirectoryName = None def opt_directory(self, dirname): """ Specify input directory path. """ self.inputDirectoryName = dirname opt_d = opt_directory def opt_input(self, filename): """ Specify input file path (default: '-', meaning stdin). """ self.inputName = filename opt_i = opt_input def openInput(self): """ Open the appropriate input file based on the '--input' option. """ if self.inputName == '-': return sys.stdin else: return open(self.inputName, 'r') # These could probably live on the collection class: def setCollectionPropertyValue(collection, element, value): collectionProperties = collection.properties() collectionProperties[PropertyName.fromElement(element)] = ( element.fromString(value) ) def getCollectionPropertyValue(collection, element): collectionProperties = collection.properties() name = PropertyName.fromElement(element) if name in collectionProperties: return str(collectionProperties[name]) else: return None @inlineCallbacks def importCollectionComponent(store, component): """ Import a component representing a collection (e.g. VCALENDAR) into the store. The homeUID and collection resource name the component will be imported into is derived from the SOURCE property on the VCALENDAR (which must be present). The code assumes it will be a URI with slash-separated parts with the penultimate part specifying the homeUID and the last part specifying the calendar resource name. 
The NAME property will be used to set the DAV:display-name, while the COLOR property will be used to set calendar-color. Subcomponents (e.g. VEVENTs) are grouped into resources by UID. Objects which have a UID already in use within the home will be skipped. @param store: The db store to add the component to @type store: L{IDataStore} @param component: The component to store @type component: L{twistedcaldav.ical.Component} """ sourceURI = component.propertyValue("SOURCE") if not sourceURI: raise ImportException("Calendar is missing SOURCE property") ownerUID, collectionResourceName = sourceURI.strip("/").split("/")[-2:] dir = store.directoryService() ownerRecord = yield dir.recordWithUID(ownerUID) if not ownerRecord: raise ImportException("{} is not in the directory".format(ownerUID)) # Set properties on the collection txn = store.newTransaction() home = yield txn.calendarHomeWithUID(ownerUID, create=True) collection = yield home.childWithName(collectionResourceName) if not collection: print("Creating calendar: {}".format(collectionResourceName)) collection = yield home.createChildWithName(collectionResourceName) for propertyName, element in ( ("NAME", davxml.DisplayName), ("COLOR", customxml.CalendarColor), ): value = component.propertyValue(propertyName) if value is not None: setCollectionPropertyValue(collection, element, value) print( "Setting {name} to {value}".format(name=propertyName, value=value) ) yield txn.commit() # Populate the collection; NB we use a txn for each object, and we might # want to batch them? 
    groupedComponents = Component.componentsFromComponent(component)
    for groupedComponent in groupedComponents:

        try:
            uid = list(groupedComponent.subcomponents())[0].propertyValue("UID")
        except:
            continue

        # If event is unscheduled or the organizer matches homeUID, store the
        # component

        print("Event UID: {}".format(uid))
        storeDirectly = True
        organizer = groupedComponent.getOrganizer()
        if organizer is not None:
            organizerRecord = yield dir.recordWithCalendarUserAddress(organizer)
            if organizerRecord is None:
                # Organizer does not exist, so skip this event
                continue
            else:
                if ownerRecord.uid != organizerRecord.uid:
                    # Owner is not the organizer
                    storeDirectly = False

        if storeDirectly:
            resourceName = "{}.ics".format(str(uuid.uuid4()))

            try:
                yield storeComponentInHomeAndCalendar(
                    store, groupedComponent, ownerUID,
                    collectionResourceName, resourceName
                )
                print("Imported: {}".format(uid))
            except UIDExistsError:
                # That event is already in the home
                print("Skipping since UID already exists: {}".format(uid))
            except Exception, e:
                print(
                    "Failed to import due to: {error}\n{comp}".format(
                        error=e, comp=groupedComponent
                    )
                )

        else:
            # Owner is an attendee, not the organizer
            # Apply the PARTSTATs from the import and from the possibly
            # existing event (existing event takes precedence) to the
            # organizer's copy.

            # Put the attendee copy into the right calendar now otherwise it
            # could end up on the default calendar when the change to the
            # organizer's copy causes an attendee update
            resourceName = "{}.ics".format(str(uuid.uuid4()))
            try:
                yield storeComponentInHomeAndCalendar(
                    store, groupedComponent, ownerUID,
                    collectionResourceName, resourceName, asAttendee=True
                )
                print("Imported: {}".format(uid))
            except UIDExistsError:
                # No need since the event is already in the home
                pass

            # Now use the iTip reply processing to update the organizer's copy
            # with the PARTSTATs from the component we're restoring.
            attendeeCUA = ownerRecord.canonicalCalendarUserAddress()
            organizerCUA = organizerRecord.canonicalCalendarUserAddress()
            processor = ImplicitProcessor()
            newComponent = iTipGenerator.generateAttendeeReply(groupedComponent, attendeeCUA, method="X-RESTORE")
            if newComponent is not None:
                txn = store.newTransaction()
                yield processor.doImplicitProcessing(
                    txn,
                    newComponent,
                    LocalCalendarUser(attendeeCUA, ownerRecord),
                    LocalCalendarUser(organizerCUA, organizerRecord)
                )
                yield txn.commit()


@inlineCallbacks
def storeComponentInHomeAndCalendar(
    store, component, homeUID, collectionResourceName, objectResourceName,
    asAttendee=False
):
    """
    Add a component to the store as an objectResource

    If the calendar home does not yet exist for homeUID it will be created.
    If the collection by the name collectionResourceName does not yet exist
    it will be created.

    @param store: The db store to add the component to
    @type store: L{IDataStore}
    @param component: The component to store
    @type component: L{twistedcaldav.ical.Component}
    @param homeUID: uid of the home collection
    @type homeUID: C{str}
    @param collectionResourceName: name of the collection resource
    @type collectionResourceName: C{str}
    @param objectResourceName: name of the objectresource
    @type objectResourceName: C{str}
    """
    txn = store.newTransaction()
    home = yield txn.calendarHomeWithUID(homeUID, create=True)
    collection = yield home.childWithName(collectionResourceName)
    if not collection:
        collection = yield home.createChildWithName(collectionResourceName)

    yield collection._createCalendarObjectWithNameInternal(
        objectResourceName,
        component,
        (
            ComponentUpdateState.ATTENDEE_ITIP_UPDATE
            if asAttendee
            else ComponentUpdateState.NORMAL
        )
    )
    yield txn.commit()


class ImporterService(WorkerService, object):
    """
    Service which runs, imports the data, then stops the reactor.
    """

    def __init__(self, store, options, reactor, config):
        super(ImporterService, self).__init__(store)
        self.options = options
        self.reactor = reactor
        self.config = config
        self._directory = self.store.directoryService()
        TimezoneCache.create()

    @inlineCallbacks
    def doWork(self):
        """
        Do the import, stopping the reactor when done.
        """
        try:
            if self.options.inputDirectoryName:
                dirname = self.options.inputDirectoryName
                if not os.path.exists(dirname):
                    sys.stderr.write(
                        "Directory does not exist: {}\n".format(dirname)
                    )
                    sys.exit(1)
                for filename in os.listdir(dirname):
                    fullpath = os.path.join(dirname, filename)
                    print("Importing {}".format(fullpath))
                    with open(fullpath, 'r') as fileobj:
                        component = Component.allFromStream(fileobj)
                        yield importCollectionComponent(self.store, component)

            else:
                try:
                    input = self.options.openInput()
                except IOError, e:
                    sys.stderr.write(
                        "Unable to open input file for reading: %s\n" % (e)
                    )
                    sys.exit(1)

                component = Component.allFromStream(input)
                input.close()
                yield importCollectionComponent(self.store, component)
        except:
            log.failure("doWork()")

    def directoryService(self):
        """
        Get an appropriate directory service.
        """
        return self._directory

    def stopService(self):
        """
        Stop the service.  Nothing to do; everything should be finished by this
        time.
        """
        # TODO: stopping this service mid-import should really stop the import
        # loop, but this is not implemented because nothing will actually do it
        # except hitting ^C (which also calls reactor.stop(), so that will exit
        # anyway).


def main(argv=sys.argv, reactor=None):
    """
    Do the import.
    """
    if reactor is None:
        from twisted.internet import reactor

    options = ImportOptions()
    try:
        options.parseOptions(argv[1:])
    except UsageError, e:
        usage(e)

    def makeService(store):
        from twistedcaldav.config import config
        return ImporterService(store, options, reactor, config)

    utilityMain(options["config"], makeService, reactor, verbose=options["debug"])

calendarserver-9.1+dfsg/calendarserver/tools/managetimezones.py000077500000000000000000000275131315003562600252140ustar00rootroot00000000000000

#!/usr/bin/env python
##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

from calendarserver.tools.util import loadConfig
from pycalendar.datetime import DateTime
from pycalendar.icalendar.calendar import Calendar
from twext.python.log import Logger
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks
from twisted.logger import FileLogObserver, formatEventAsClassicLogText
from twisted.python.filepath import FilePath
from twistedcaldav.config import ConfigurationError
from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from twistedcaldav.timezones import TimezoneCache
from twistedcaldav.timezonestdservice import PrimaryTimezoneDatabase, \
    SecondaryTimezoneDatabase
from zonal.tzconvert import tzconvert

import getopt
import os
import sys
import tarfile
import tempfile
import urllib


def _doPrimaryActions(action, tzpath, xmlfile, changed, tzvers):
    tzdb = PrimaryTimezoneDatabase(tzpath, xmlfile)
    if action == "refresh":
        _doRefresh(tzpath, xmlfile, tzdb, tzvers)
    elif action == "create":
        _doCreate(xmlfile, tzdb)
    elif action == "update":
        _doUpdate(xmlfile, tzdb)
    elif action == "list":
        _doList(xmlfile, tzdb)
    elif action == "changed":
        _doChanged(xmlfile, changed, tzdb)
    else:
        usage("Invalid action: {}".format(action,))


def _doRefresh(tzpath, xmlfile, tzdb, tzvers):
    """
    Refresh data from IANA.
    """
    print("Downloading latest data from IANA")
    if tzvers:
        path = "https://www.iana.org/time-zones/repository/releases/tzdata{}.tar.gz".format(tzvers,)
    else:
        path = "https://www.iana.org/time-zones/repository/tzdata-latest.tar.gz"
    data = urllib.urlretrieve(path)
    print("Extract data at: {}".format(data[0]))
    rootdir = tempfile.mkdtemp()
    zonedir = os.path.join(rootdir, "tzdata")
    os.mkdir(zonedir)
    with tarfile.open(data[0], "r:gz") as t:
        t.extractall(zonedir)

    # Get the version from the Makefile
    try:
        with open(os.path.join(zonedir, "Makefile")) as f:
            makefile = f.read()
        lines = makefile.splitlines()
        for line in lines:
            if line.startswith("VERSION="):
                tzvers = line[8:].strip()
                break
    except IOError:
        pass

    # Work around change to MAKEFILE by trying to extract the version from
    # the NEWS file
    if tzvers == "unknown":
        tzvers = None
        try:
            with open(os.path.join(zonedir, "NEWS")) as f:
                makefile = f.read()
            lines = makefile.splitlines()
            for line in lines:
                if line.startswith("Release "):
                    tzvers = line.split()[1]
                    break
        except IOError:
            pass

    if not tzvers:
        tzvers = DateTime.getToday().getText()

    print("Converting data (version: {}) at: {}".format(tzvers, zonedir,))
    startYear = 1800
    endYear = DateTime.getToday().getYear() + 10
    Calendar.sProdID = "-//calendarserver.org//Zonal//EN"

    zonefiles = "northamerica", "southamerica", "europe", "africa", "asia", "australasia", "antarctica", "etcetera", "backward"

    parser = tzconvert()
    for file in zonefiles:
        parser.parse(os.path.join(zonedir, file))

    # Try tzextras
    extras = TimezoneCache._getTZExtrasPath()
    if os.path.exists(extras):
        print("Converting extra data at: {}".format(extras,))
        parser.parse(extras)
    else:
        print("No extra data to convert")

    # Check for windows aliases
    print("Downloading latest data from unicode.org")
    path = "http://unicode.org/repos/cldr/tags/latest/common/supplemental/windowsZones.xml"
    data = urllib.urlretrieve(path)
    wpath = data[0]

    # Generate the iCalendar data
    print("Generating iCalendar data")
    parser.generateZoneinfoFiles(os.path.join(rootdir, "zoneinfo"), startYear, endYear, windowsAliases=wpath, filterzones=())

    print("Copy new zoneinfo to destination: {}".format(tzpath,))
    z = FilePath(os.path.join(rootdir, "zoneinfo"))
    tz = FilePath(tzpath)
    z.copyTo(tz)

    print("Updating XML file at: {}".format(xmlfile,))
    tzdb.readDatabase()
    tzdb.updateDatabase()
    print("Current total: {}".format(len(tzdb.timezones),))
    print("Total Changed: {}".format(tzdb.changeCount,))
    if tzdb.changeCount:
        print("Changed:")
        for k in sorted(tzdb.changed):
            print("  {}".format(k,))

    versfile = os.path.join(os.path.dirname(xmlfile), "version.txt")
    print("Updating version file at: {}".format(versfile,))
    with open(versfile, "w") as f:
        f.write(TimezoneCache.IANA_VERSION_PREFIX + tzvers)


def _doCreate(xmlfile, tzdb):
    """
    Create new xml file.
    """
    print("Creating new XML file at: {}".format(xmlfile,))
    tzdb.createNewDatabase()
    print("Current total: {}".format(len(tzdb.timezones),))


def _doUpdate(xmlfile, tzdb):
    """
    Update xml file.
    """
    print("Updating XML file at: {}".format(xmlfile,))
    tzdb.readDatabase()
    tzdb.updateDatabase()
    print("Current total: {}".format(len(tzdb.timezones),))
    print("Total Changed: {}".format(tzdb.changeCount,))
    if tzdb.changeCount:
        print("Changed:")
        for k in sorted(tzdb.changed):
            print("  {}".format(k,))


def _doList(xmlfile, tzdb):
    """
    List current timezones from xml file.
    """
    print("Listing XML file at: {}".format(xmlfile,))
    tzdb.readDatabase()
    print("Current timestamp: {}".format(tzdb.dtstamp,))
    print("Timezones:")
    for k in sorted(tzdb.timezones.keys()):
        print("  {}".format(k,))


def _doChanged(xmlfile, changed, tzdb):
    """
    Check for local timezone changes.
    """
    print("Changes from XML file at: {}".format(xmlfile,))
    tzdb.readDatabase()
    print("Check timestamp: {}".format(changed,))
    print("Current timestamp: {}".format(tzdb.dtstamp,))
    results = [k for k, v in tzdb.timezones.items() if v.dtstamp > changed]
    print("Total Changed: {}".format(len(results),))
    if results:
        print("Changed:")
        for k in sorted(results):
            print("  {}".format(k,))


@inlineCallbacks
def _runInReactor(tzdb):
    try:
        new, changed = yield tzdb.syncWithServer()
        print("New: {}".format(new,))
        print("Changed: {}".format(changed,))
        print("Current total: {}".format(len(tzdb.timezones),))
    except Exception, e:
        print("Could not sync with server: {}".format(str(e),))
    finally:
        reactor.stop()


def _doSecondaryActions(action, tzpath, xmlfile, url):
    tzdb = SecondaryTimezoneDatabase(tzpath, xmlfile, url)
    try:
        tzdb.readDatabase()
    except:
        pass
    if action == "cache":
        print("Caching from secondary server: {}".format(url,))
        observer = FileLogObserver(sys.stdout, lambda event: formatEventAsClassicLogText(event))
        Logger.beginLoggingTo([observer], redirectStandardIO=False)
        reactor.callLater(0, _runInReactor, tzdb)
        reactor.run()
    else:
        usage("Invalid action: {}".format(action,))


def usage(error_msg=None):
    if error_msg:
        print(error_msg)
        print("")

    print("""Usage: managetimezones [options]

Options:
    -h            Print this help and exit
    -f            config file path [REQUIRED]
    -x            XML file path
    -z            zoneinfo file path
    --tzvers      year/release letter of IANA data to refresh
                  default: use the latest release
    --url         URL or domain of secondary service

    # Primary service
    --refresh     refresh data from IANA
    --refreshpkg  refresh package data from IANA
    --create      create new XML file
    --update      update XML file
    --list        list timezones in XML file
    --changed     changed since timestamp

    # Secondary service
    --cache       Cache data from service

Description:
    This utility will create, update, or list an XML timezone database
    summary file, or refresh iCalendar timezone from IANA (Olson).  It can
    also be used to update the server's own zoneinfo database from IANA.
    It also creates aliases for the unicode.org windowsZones.
""")

    if error_msg:
        raise ValueError(error_msg)
    else:
        sys.exit(0)


def main():
    configFileName = DEFAULT_CONFIG_FILE
    primary = False
    secondary = False
    action = None
    tzpath = None
    xmlfile = None
    changed = None
    url = None
    tzvers = None
    updatepkg = False

    # Get options
    options, _ignore_args = getopt.getopt(
        sys.argv[1:],
        "f:hx:z:",
        [
            "refresh",
            "refreshpkg",
            "create",
            "update",
            "list",
            "changed=",
            "url=",
            "cache",
            "tzvers=",
        ]
    )

    for option, value in options:
        if option == "-h":
            usage()
        elif option == "-f":
            configFileName = value
        elif option == "-x":
            xmlfile = value
        elif option == "-z":
            tzpath = value
        elif option == "--refresh":
            action = "refresh"
            primary = True
        elif option == "--refreshpkg":
            action = "refresh"
            primary = True
            updatepkg = True
        elif option == "--create":
            action = "create"
            primary = True
        elif option == "--update":
            action = "update"
            primary = True
        elif option == "--list":
            action = "list"
            primary = True
        elif option == "--changed":
            action = "changed"
            primary = True
            changed = value
        elif option == "--url":
            url = value
            secondary = True
        elif option == "--cache":
            action = "cache"
            secondary = True
        elif option == "--tzvers":
            tzvers = value
        else:
            usage("Unrecognized option: {}".format(option,))

    if configFileName is None:
        usage("A configuration file must be specified")
    try:
        loadConfig(configFileName)
    except ConfigurationError, e:
        sys.stdout.write("{}\n".format(e,))
        sys.exit(1)

    if action is None:
        action = "list"
        primary = True

    if tzpath is None:
        if updatepkg:
            try:
                import pkg_resources
            except ImportError:
                tzpath = os.path.join(os.path.dirname(__file__), "zoneinfo")
            else:
                tzpath = pkg_resources.resource_filename("twistedcaldav", "zoneinfo")  # @UndefinedVariable
        else:
            # Setup the correct zoneinfo path based on the config
            tzpath = TimezoneCache.getDBPath()
            TimezoneCache.validatePath()
    if xmlfile is None:
        xmlfile = os.path.join(tzpath, "timezones.xml")

    if primary and not os.path.isdir(tzpath):
        usage("Invalid zoneinfo path: {}".format(tzpath,))
    if primary and not os.path.isfile(xmlfile) and action != "create":
        usage("Invalid XML file path: {}".format(xmlfile,))

    if primary and secondary:
        usage("Cannot use primary and secondary options together")

    if primary:
        _doPrimaryActions(action, tzpath, xmlfile, changed, tzvers)
    else:
        _doSecondaryActions(action, tzpath, xmlfile, url)


if __name__ == '__main__':
    main()

calendarserver-9.1+dfsg/calendarserver/tools/migrate.py000077500000000000000000000046031315003562600234530ustar00rootroot00000000000000

#!/usr/bin/env python
##
# Copyright (c) 2006-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

"""
This tool migrates existing calendar data from any previous calendar server
version to the current version.

This tool requires access to the calendar server's configuration and data
storage; it does not operate by talking to the server via the network.
""" import os import sys from getopt import getopt, GetoptError from twistedcaldav.config import ConfigurationError from twistedcaldav.upgrade import upgradeData from calendarserver.tools.util import loadConfig def usage(e=None): if e: print(e) print("") name = os.path.basename(sys.argv[0]) print("usage: %s [options]" % (name,)) print("") print("Migrate calendar data to current version") print(__doc__) print("options:") print(" -h --help: print this help and exit") print(" -f --config: Specify caldavd.plist configuration path") if e: sys.exit(64) else: sys.exit(0) def main(): try: (optargs, args) = getopt( sys.argv[1:], "hf:", [ "config=", "help", ], ) except GetoptError, e: usage(e) configFileName = None for opt, arg in optargs: if opt in ("-h", "--help"): usage() elif opt in ("-f", "--config"): configFileName = arg if args: usage("Too many arguments: %s" % (" ".join(args),)) try: config = loadConfig(configFileName) except ConfigurationError, e: sys.stdout.write("%s\n" % (e,)) sys.exit(1) profiling = False if profiling: import cProfile cProfile.runctx("upgradeData(c)", globals(), {"c": config}, "/tmp/upgrade.prof") else: upgradeData(config) if __name__ == "__main__": main() calendarserver-9.1+dfsg/calendarserver/tools/migrate_verify.py000077500000000000000000000302741315003562600250420ustar00rootroot00000000000000#!/usr/bin/env python # -*- test-case-name: calendarserver.tools.test.test_calverify -*- ## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

"""
This tool takes a list of file paths from a file store being migrated
and compares that to the results of a migration to an SQL store. Items
not migrated are logged.
"""

import os
import sys

from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.python.text import wordWrap
from twisted.python.usage import Options

from twext.enterprise.dal.syntax import Select, Parameter
from twext.python.log import Logger

from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE

from txdav.common.datastore.sql_tables import schema, _BIND_MODE_OWN

from calendarserver.tools.cmdline import utilityMain, WorkerService

log = Logger()

VERSION = "1"


def usage(e=None):
    if e:
        print(e)
        print("")
    try:
        MigrateVerifyOptions().opt_help()
    except SystemExit:
        pass
    if e:
        sys.exit(64)
    else:
        sys.exit(0)


description = ''.join(
    wordWrap(
        """
        Usage: calendarserver_migrate_verify [options] [input specifiers]
        """,
        int(os.environ.get('COLUMNS', '80'))
    )
)
description += "\nVersion: %s" % (VERSION,)


class ConfigError(Exception):
    pass


class MigrateVerifyOptions(Options):
    """
    Command-line options for 'calendarserver_migrate_verify'
    """

    synopsis = description

    optFlags = [
        ['debug', 'D', "Debug logging."],
    ]

    optParameters = [
        ['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
        ['data', 'd', "./paths.txt", "List of file paths for migrated data."],
    ]

    def __init__(self):
        super(MigrateVerifyOptions, self).__init__()
        self.outputName = '-'

    def opt_output(self, filename):
        """
        Specify output file path (default: '-', meaning stdout).
        """
        self.outputName = filename

    opt_o = opt_output

    def openOutput(self):
        """
        Open the appropriate output file based on the '--output' option.
        """
        if self.outputName == '-':
            return sys.stdout
        else:
            return open(self.outputName, 'wb')


class MigrateVerifyService(WorkerService, object):
    """
    Service which runs, does its stuff, then stops the reactor.
    """

    def __init__(self, store, options, output, reactor, config):
        super(MigrateVerifyService, self).__init__(store)
        self.options = options
        self.output = output
        self.reactor = reactor
        self.config = config

        self.pathsByGUID = {}
        self.badPaths = []
        self.validPaths = 0
        self.ignoreInbox = 0
        self.ignoreDropbox = 0
        self.missingGUIDs = []
        self.missingCalendars = []
        self.missingResources = []

    @inlineCallbacks
    def doWork(self):
        """
        Do the work, stopping the reactor when done.
        """
        self.output.write("\n---- Migrate Verify version: %s ----\n" % (VERSION,))

        try:
            self.readPaths()
            yield self.doCheck()
            self.output.close()
        except ConfigError:
            pass
        except:
            log.failure("doWork()")

    def readPaths(self):
        self.output.write("-- Reading data file: %s\n" % (self.options["data"]))

        with open(os.path.expanduser(self.options["data"])) as datafile:
            total = 0
            invalidGUIDs = set()
            for line in datafile:
                line = line.strip()
                total += 1

                segments = line.split("/")
                while segments and segments[0] != "__uids__":
                    segments.pop(0)
                if segments and len(segments) >= 6:
                    guid = segments[3]
                    calendar = segments[4]
                    resource = segments[5]
                    if calendar == "inbox":
                        self.ignoreInbox += 1
                        invalidGUIDs.add(guid)
                    elif calendar == "dropbox":
                        self.ignoreDropbox += 1
                        invalidGUIDs.add(guid)
                    elif len(segments) > 6:
                        self.badPaths.append(line)
                        invalidGUIDs.add(guid)
                    else:
                        self.pathsByGUID.setdefault(guid, {}).setdefault(calendar, set()).add(resource)
                        self.validPaths += 1
                else:
                    if segments and len(segments) >= 4:
                        invalidGUIDs.add(segments[3])
                    self.badPaths.append(line)

        # Remove any invalid GUIDs that actually were valid
        invalidGUIDs = [pguid for pguid in invalidGUIDs if pguid not in self.pathsByGUID]

        self.output.write("\nTotal lines read: %d\n" % (total,))
        self.output.write("Total guids: valid: %d invalid: %d overall: %d\n" % (
len(self.pathsByGUID), len(invalidGUIDs), len(self.pathsByGUID) + len(invalidGUIDs), )) self.output.write("Total valid calendars: %d\n" % (sum([len(v) for v in self.pathsByGUID.values()]),)) self.output.write("Total valid resources: %d\n" % (self.validPaths,)) self.output.write("Total inbox resources: %d\n" % (self.ignoreInbox,)) self.output.write("Total dropbox resources: %d\n" % (self.ignoreDropbox,)) self.output.write("Total bad paths: %d\n" % (len(self.badPaths),)) self.output.write("\n-- Invalid GUIDs\n") for invalidGUID in sorted(invalidGUIDs): self.output.write("Invalid GUID: %s\n" % (invalidGUID,)) self.output.write("\n-- Bad paths\n") for badPath in sorted(self.badPaths): self.output.write("Bad path: %s\n" % (badPath,)) @inlineCallbacks def doCheck(self): """ Check path data against the SQL store. """ self.output.write("\n-- Scanning database for missed migrations\n") # Get list of distinct resource_property resource_ids to delete self.txn = self.store.newTransaction() total = len(self.pathsByGUID) totalMissingCalendarResources = 0 count = 0 for guid in self.pathsByGUID: if divmod(count, 10)[1] == 0: self.output.write(("\r%d of %d (%d%%)" % ( count, total, (count * 100 / total), )).ljust(80)) self.output.flush() # First check the presence of each guid and the calendar count homeID = (yield self.guid2ResourceID(guid)) if homeID is None: self.missingGUIDs.append(guid) continue # Now get the list of calendar names and calendar resource IDs results = (yield self.calendarsForUser(homeID)) if results is None: results = [] calendars = dict(results) for calendar in self.pathsByGUID[guid].keys(): if calendar not in calendars: self.missingCalendars.append("%s/%s (resources: %d)" % (guid, calendar, len(self.pathsByGUID[guid][calendar]))) totalMissingCalendarResources += len(self.pathsByGUID[guid][calendar]) else: # Now get list of all calendar resources results = (yield self.resourcesForCalendar(calendars[calendar])) if results is None: results = [] results = 
[result[0] for result in results] db_resources = set(results) # Also check for split calendar if "%s-vtodo" % (calendar,) in calendars: results = (yield self.resourcesForCalendar(calendars["%s-vtodo" % (calendar,)])) if results is None: results = [] results = [result[0] for result in results] db_resources.update(results) # Also check for split calendar if "%s-vevent" % (calendar,) in calendars: results = (yield self.resourcesForCalendar(calendars["%s-vevent" % (calendar,)])) if results is None: results = [] results = [result[0] for result in results] db_resources.update(results) old_resources = set(self.pathsByGUID[guid][calendar]) self.missingResources.extend(["%s/%s/%s" % (guid, calendar, resource,) for resource in old_resources.difference(db_resources)]) # Commit every 10 time through if divmod(count + 1, 10)[1] == 0: yield self.txn.commit() self.txn = self.store.newTransaction() count += 1 yield self.txn.commit() self.txn = None self.output.write("\n\nTotal missing GUIDs: %d\n" % (len(self.missingGUIDs),)) for guid in sorted(self.missingGUIDs): self.output.write("%s\n" % (guid,)) self.output.write("\nTotal missing Calendars: %d (resources: %d)\n" % (len(self.missingCalendars), totalMissingCalendarResources,)) for calendar in sorted(self.missingCalendars): self.output.write("%s\n" % (calendar,)) self.output.write("\nTotal missing Resources: %d\n" % (len(self.missingResources),)) for resource in sorted(self.missingResources): self.output.write("%s\n" % (resource,)) @inlineCallbacks def guid2ResourceID(self, guid): ch = schema.CALENDAR_HOME kwds = {"GUID": guid} rows = (yield Select( [ ch.RESOURCE_ID, ], From=ch, Where=( ch.OWNER_UID == Parameter("GUID") ), ).on(self.txn, **kwds)) returnValue(rows[0][0] if rows else None) @inlineCallbacks def calendarsForUser(self, rid): cb = schema.CALENDAR_BIND kwds = {"RID": rid} rows = (yield Select( [ cb.CALENDAR_RESOURCE_NAME, cb.CALENDAR_RESOURCE_ID, ], From=cb, Where=( cb.CALENDAR_HOME_RESOURCE_ID == Parameter("RID") 
).And(cb.BIND_MODE == _BIND_MODE_OWN), ).on(self.txn, **kwds)) returnValue(rows) @inlineCallbacks def resourcesForCalendar(self, rid): co = schema.CALENDAR_OBJECT kwds = {"RID": rid} rows = (yield Select( [ co.RESOURCE_NAME, ], From=co, Where=( co.CALENDAR_RESOURCE_ID == Parameter("RID") ), ).on(self.txn, **kwds)) returnValue(rows) def stopService(self): """ Stop the service. Nothing to do; everything should be finished by this time. """ def main(argv=sys.argv, stderr=sys.stderr, reactor=None): """ Do the export. """ if reactor is None: from twisted.internet import reactor options = MigrateVerifyOptions() options.parseOptions(argv[1:]) try: output = options.openOutput() except IOError, e: stderr.write("Unable to open output file for writing: %s\n" % (e)) sys.exit(1) def makeService(store): from twistedcaldav.config import config config.TransactionTimeoutSeconds = 0 return MigrateVerifyService(store, options, output, reactor, config) utilityMain(options['config'], makeService, reactor, verbose=options["debug"]) if __name__ == '__main__': main() calendarserver-9.1+dfsg/calendarserver/tools/notifications.py000066400000000000000000000573101315003562600246740ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from __future__ import print_function from calendarserver.tools.util import loadConfig from datetime import datetime from getopt import getopt, GetoptError from getpass import getpass from twisted.application.service import Service from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks, returnValue from twisted.web import client from twisted.words.protocols.jabber import xmlstream from twisted.words.protocols.jabber.client import XMPPAuthenticator, IQAuthInitializer from twisted.words.protocols.jabber.jid import JID from twisted.words.protocols.jabber.xmlstream import IQ from twisted.words.xish import domish from twext.internet.gaiendpoint import GAIEndpoint from twext.internet.adaptendpoint import connect from twext.internet.ssl import simpleClientContextFactory from twistedcaldav.config import config, ConfigurationError from twistedcaldav.util import AuthorizedHTTPGetter from xml.etree import ElementTree import os import signal import sys import uuid def usage(e=None): name = os.path.basename(sys.argv[0]) print("usage: %s [options] username" % (name,)) print("") print(" Monitor push notification events from calendar server") print("") print("options:") print(" -a --admin : Specify an administrator username") print(" -f --config : Specify caldavd.plist configuration path") print(" -h --help: print this help and exit") print(" -H --host : calendar server host name") print(" -n --node : pubsub node to subscribe to *") print(" -p --port : calendar server port number") print(" -s --ssl: use https (default is http)") print(" -v --verbose: print additional information including XMPP traffic") print("") print(" * The --node option is only required for calendar servers that") print(" don't advertise the push-transports DAV property (such as a Snow") print(" Leopard server). 
In this case, --host should specify the name") print(" of the XMPP server and --port should specify the port XMPP is") print(" is listening on.") print("") if e: sys.stderr.write("%s\n" % (e,)) sys.exit(64) else: sys.exit(0) def main(): try: (optargs, args) = getopt( sys.argv[1:], "a:f:hH:n:p:sv", [ "admin=", "config=", "help", "host=", "node=", "port=", "ssl", "verbose", ], ) except GetoptError, e: usage(e) admin = None configFileName = None host = None nodes = None port = None useSSL = False verbose = False for opt, arg in optargs: if opt in ("-h", "--help"): usage() elif opt in ("-f", "--config"): configFileName = arg elif opt in ("-a", "--admin"): admin = arg elif opt in ("-H", "--host"): host = arg elif opt in ("-n", "--node"): nodes = [arg] elif opt in ("-p", "--port"): port = int(arg) elif opt in ("-s", "--ssl"): useSSL = True elif opt in ("-v", "--verbose"): verbose = True else: raise NotImplementedError(opt) if len(args) != 1: usage("Username not specified") username = args[0] if host is None: # No host specified, so try loading config to look up settings try: loadConfig(configFileName) except ConfigurationError, e: print("Error in configuration: %s" % (e,)) sys.exit(1) useSSL = config.EnableSSL or config.BehindTLSProxy host = config.ServerHostName port = config.SSLPort if useSSL else config.HTTPPort if port is None: usage("Must specify a port number") if admin: password = getpass("Password for administrator %s: " % (admin,)) else: password = getpass("Password for %s: " % (username,)) admin = username monitorService = PushMonitorService( useSSL, host, port, nodes, admin, username, password, verbose) reactor.addSystemEventTrigger( "during", "startup", monitorService.startService) reactor.addSystemEventTrigger( "before", "shutdown", monitorService.stopService) reactor.run() class PubSubClientFactory(xmlstream.XmlStreamFactory): """ An XMPP pubsub client that subscribes to nodes and prints a message whenever a notification arrives. 
""" pubsubNS = 'http://jabber.org/protocol/pubsub' def __init__(self, jid, password, service, nodes, verbose, sigint=True): resource = "pushmonitor.%s" % (uuid.uuid4().hex,) self.jid = "%s/%s" % (jid, resource) self.service = service self.nodes = nodes self.verbose = verbose if self.verbose: print("JID:", self.jid, "Pubsub service:", self.service) self.presenceSeconds = 60 self.presenceCall = None self.xmlStream = None self.doKeepAlive = True xmlstream.XmlStreamFactory.__init__( self, XMPPAuthenticator(JID(self.jid), password)) self.addBootstrap(xmlstream.STREAM_CONNECTED_EVENT, self.connected) self.addBootstrap(xmlstream.STREAM_END_EVENT, self.disconnected) self.addBootstrap(xmlstream.INIT_FAILED_EVENT, self.initFailed) self.addBootstrap(xmlstream.STREAM_AUTHD_EVENT, self.authenticated) self.addBootstrap( IQAuthInitializer.INVALID_USER_EVENT, self.authFailed) self.addBootstrap( IQAuthInitializer.AUTH_FAILED_EVENT, self.authFailed) if sigint: signal.signal(signal.SIGINT, self.sigint_handler) @inlineCallbacks def sigint_handler(self, num, frame): print(" Shutting down...") yield self.unsubscribeAll() reactor.stop() @inlineCallbacks def unsubscribeAll(self): if self.xmlStream is not None: for node, (_ignore_url, name, kind) in self.nodes.iteritems(): yield self.unsubscribe(node, name, kind) def connected(self, xmlStream): self.xmlStream = xmlStream if self.verbose: print("XMPP connection successful") xmlStream.rawDataInFn = self.rawDataIn xmlStream.rawDataOutFn = self.rawDataOut xmlStream.addObserver("/message/event/items", self.handleMessageEventItems) def disconnected(self, xmlStream): self.xmlStream = None if self.presenceCall is not None: self.presenceCall.cancel() self.presenceCall = None if self.verbose: print("XMPP disconnected") def initFailed(self, failure): self.xmlStream = None print("XMPP connection failure: %s" % (failure,)) reactor.stop() @inlineCallbacks def authenticated(self, xmlStream): if self.verbose: print("XMPP authentication successful") 
self.sendPresence() for node, (_ignore_url, name, kind) in self.nodes.iteritems(): yield self.subscribe(node, name, kind) print("Awaiting notifications (hit Control-C to end)") def authFailed(self, e): print("XMPP authentication failed") reactor.stop() def sendPresence(self): if self.doKeepAlive and self.xmlStream is not None: presence = domish.Element(('jabber:client', 'presence')) self.xmlStream.send(presence) self.presenceCall = reactor.callLater( self.presenceSeconds, self.sendPresence) def handleMessageEventItems(self, iq): item = iq.firstChildElement().firstChildElement() if item: node = item.getAttribute("node") if node: node = str(node) url, name, kind = self.nodes.get(node, ("Not subscribed", "Unknown", "Unknown")) timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S") print('%s | Notification for "%s" (%s)' % (timestamp, name, kind)) if self.verbose: print(" node = %s" % (node,)) print(" url = %s" % (url,)) @inlineCallbacks def subscribe(self, node, name, kind): iq = IQ(self.xmlStream) pubsubElement = iq.addElement("pubsub", defaultUri=self.pubsubNS) subElement = pubsubElement.addElement("subscribe") subElement["node"] = node subElement["jid"] = self.jid print('Subscribing to "%s" (%s)' % (name, kind)) if self.verbose: print(node) try: yield iq.send(to=self.service) print("OK") except Exception, e: print("Subscription failure: %s %s" % (node, e)) @inlineCallbacks def unsubscribe(self, node, name, kind): iq = IQ(self.xmlStream) pubsubElement = iq.addElement("pubsub", defaultUri=self.pubsubNS) subElement = pubsubElement.addElement("unsubscribe") subElement["node"] = node subElement["jid"] = self.jid print('Unsubscribing from "%s" (%s)' % (name, kind)) if self.verbose: print(node) try: yield iq.send(to=self.service) print("OK") except Exception, e: print("Unsubscription failure: %s %s" % (node, e)) @inlineCallbacks def retrieveSubscriptions(self): # This isn't supported by Apple's pubsub service iq = IQ(self.xmlStream) pubsubElement = 
iq.addElement("pubsub", defaultUri=self.pubsubNS) pubsubElement.addElement("subscriptions") print("Requesting list of subscriptions") try: yield iq.send(to=self.service) except Exception, e: print("Subscription list failure: %s" % (e,)) def rawDataIn(self, buf): print("RECV: %s" % unicode(buf, 'utf-8').encode('ascii', 'replace')) def rawDataOut(self, buf): print("SEND: %s" % unicode(buf, 'utf-8').encode('ascii', 'replace')) class PropfindRequestor(AuthorizedHTTPGetter): handleStatus_207 = lambda self: self.handleStatus_200 class PushMonitorService(Service): """ A service which uses CalDAV to determine which pubsub node(s) correspond to any calendar the user has access to (including any belonging to users who have delegated access to the user). Those nodes are subscribed to using XMPP and monitored for updates. """ def __init__( self, useSSL, host, port, nodes, authname, username, password, verbose ): self.useSSL = useSSL self.host = host self.port = port self.nodes = nodes self.authname = authname self.username = username self.password = password self.verbose = verbose @inlineCallbacks def startService(self): try: subscribeNodes = {} if self.nodes is None: paths = set() principal = "/principals/users/%s/" % (self.username,) name, homes = (yield self.getPrincipalDetails(principal)) if self.verbose: print(name, homes) for home in homes: paths.add(home) for principal in (yield self.getProxyFor()): name, homes = (yield self.getPrincipalDetails( principal, includeCardDAV=False)) if self.verbose: print(name, homes) for home in homes: if home.startswith("/"): # Only support homes on the same server for now. 
paths.add(home) for path in paths: host, port, nodes = (yield self.getPushInfo(path)) subscribeNodes.update(nodes) else: for node in self.nodes: subscribeNodes[node] = ("Unknown", "Unknown", "Unknown") host = self.host port = self.port # TODO: support talking to multiple hosts (to support delegates # from other calendar servers) if subscribeNodes: self.startMonitoring(host, port, subscribeNodes) else: print("No nodes to monitor") reactor.stop() except Exception, e: print("Error:", e) reactor.stop() @inlineCallbacks def getPrincipalDetails(self, path, includeCardDAV=True): """ Given a principal path, retrieve and return the corresponding displayname and set of calendar/addressbook homes. """ name = "" homes = [] headers = { "Depth": "0", } body = """ """ try: responseBody = (yield self.makeRequest( path, "PROPFIND", headers, body)) try: doc = ElementTree.fromstring(responseBody) for response in doc.findall("{DAV:}response"): href = response.find("{DAV:}href") for propstat in response.findall("{DAV:}propstat"): status = propstat.find("{DAV:}status") if "200 OK" in status.text: for prop in propstat.findall("{DAV:}prop"): calendarHomeSet = prop.find("{urn:ietf:params:xml:ns:caldav}calendar-home-set") if calendarHomeSet is not None: for href in calendarHomeSet.findall("{DAV:}href"): href = href.text if href: homes.append(href) if includeCardDAV: addressbookHomeSet = prop.find("{urn:ietf:params:xml:ns:carddav}addressbook-home-set") if addressbookHomeSet is not None: for href in addressbookHomeSet.findall("{DAV:}href"): href = href.text if href: homes.append(href) displayName = prop.find("{DAV:}displayname") if displayName is not None: displayName = displayName.text if displayName: name = displayName except Exception, e: print("Unable to parse principal details", e) print(responseBody) raise except Exception, e: print("Unable to look up principal details", e) raise returnValue((name, homes)) @inlineCallbacks def getProxyFor(self): """ Retrieve and return the principal 
paths for any user that has delegated calendar access to this user. """ proxies = set() headers = { "Depth": "0", } body = """ """ path = "/principals/users/%s/" % (self.username,) try: responseBody = (yield self.makeRequest( path, "PROPFIND", headers, body)) try: doc = ElementTree.fromstring(responseBody) for response in doc.findall("{DAV:}response"): href = response.find("{DAV:}href") for propstat in response.findall("{DAV:}propstat"): status = propstat.find("{DAV:}status") if "200 OK" in status.text: for prop in propstat.findall("{DAV:}prop"): for element in ( "{http://calendarserver.org/ns/}calendar-proxy-read-for", "{http://calendarserver.org/ns/}calendar-proxy-write-for", ): proxyFor = prop.find(element) if proxyFor is not None: for href in proxyFor.findall("{DAV:}href"): href = href.text if href: proxies.add(href) except Exception, e: print("Unable to parse proxy information", e) print(responseBody) raise except Exception, e: print("Unable to look up who %s is a proxy for" % (self.username,)) raise returnValue(proxies) @inlineCallbacks def getPushInfo(self, path): """ Given a calendar home path, retrieve push notification info including xmpp hostname and port, the pushkey for the home and for each shared collection bound into the home. 
""" headers = { "Depth": "1", } body = """ """ try: responseBody = (yield self.makeRequest( path, "PROPFIND", headers, body)) host = None port = None nodes = {} try: doc = ElementTree.fromstring(responseBody) for response in doc.findall("{DAV:}response"): href = response.find("{DAV:}href") key = None name = None if path.startswith("/calendars"): kind = "Calendar home" else: kind = "AddressBook home" for propstat in response.findall("{DAV:}propstat"): status = propstat.find("{DAV:}status") if "200 OK" in status.text: for prop in propstat.findall("{DAV:}prop"): displayName = prop.find("{DAV:}displayname") if displayName is not None: displayName = displayName.text if displayName: name = displayName resourceType = prop.find("{DAV:}resourcetype") if resourceType is not None: shared = resourceType.find("{http://calendarserver.org/ns/}shared") if shared is not None: kind = "Shared calendar" pushKey = prop.find("{http://calendarserver.org/ns/}pushkey") if pushKey is not None: pushKey = pushKey.text if pushKey: key = pushKey pushTransports = prop.find("{http://calendarserver.org/ns/}push-transports") if pushTransports is not None: if self.verbose: print("push-transports:\n\n", ElementTree.tostring(pushTransports)) for transport in pushTransports.findall("{http://calendarserver.org/ns/}transport"): if transport.attrib["type"] == "XMPP": xmppServer = transport.find("{http://calendarserver.org/ns/}xmpp-server") if xmppServer is not None: xmppServer = xmppServer.text if xmppServer: if ":" in xmppServer: host, port = xmppServer.split(":") port = int(port) else: host = xmppServer port = 5222 if key and key not in nodes: nodes[key] = (href.text, name, kind) except Exception, e: print("Unable to parse push information", e) print(responseBody) raise except Exception, e: print("Unable to look up push information for %s" % (self.username,)) raise if host is None: raise Exception("Unable to determine xmpp server name") if port is None: raise Exception("Unable to determine xmpp server 
port") returnValue((host, port, nodes)) def startMonitoring(self, host, port, nodes): service = "pubsub.%s" % (host,) jid = "%s@%s" % (self.authname, host) pubsubFactory = PubSubClientFactory( jid, self.password, service, nodes, self.verbose) connect(GAIEndpoint(reactor, host, port), pubsubFactory) def makeRequest(self, path, method, headers, body): scheme = "https:" if self.useSSL else "http:" url = "%s//%s:%d%s" % (scheme, self.host, self.port, path) caldavFactory = client.HTTPClientFactory( url, method=method, headers=headers, postdata=body, agent="Push Monitor") caldavFactory.username = self.authname caldavFactory.password = self.password caldavFactory.noisy = False caldavFactory.protocol = PropfindRequestor if self.useSSL: connect(GAIEndpoint(reactor, self.host, self.port, simpleClientContextFactory(self.host)), caldavFactory) else: connect(GAIEndpoint(reactor, self.host, self.port), caldavFactory) return caldavFactory.deferred if __name__ == "__main__": main() calendarserver-9.1+dfsg/calendarserver/tools/obliterate.py000077500000000000000000000434371315003562600241650ustar00rootroot00000000000000#!/usr/bin/env python # -*- test-case-name: calendarserver.tools.test.test_calverify -*- ## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function """ This tool scans wipes out user data without using slow store object apis that attempt to keep the DB consistent. 
Instead it assumes facts about the schema and how the various table data
are related. Normally the purge principal tool should be used to
"correctly" remove user data. This is an emergency tool needed when data
has been accidentally migrated into the DB but no users actually have
access to it as they are not enabled on the server.
"""

import os
import sys
import time

from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.python.text import wordWrap
from twisted.python.usage import Options

from twext.enterprise.dal.syntax import Parameter, Delete, Select, Union, \
    CompoundComparison, ExpressionSyntax, Count
from twext.python.log import Logger

from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE

from txdav.common.datastore.sql_tables import schema, _BIND_MODE_OWN

from calendarserver.tools.cmdline import utilityMain, WorkerService

log = Logger()

VERSION = "1"


def usage(e=None):
    if e:
        print(e)
        print("")
    try:
        ObliterateOptions().opt_help()
    except SystemExit:
        pass
    if e:
        sys.exit(64)
    else:
        sys.exit(0)


description = ''.join(
    wordWrap(
        """
        Usage: calendarserver_obliterate [options] [input specifiers]
        """,
        int(os.environ.get('COLUMNS', '80'))
    )
)
description += "\nVersion: %s" % (VERSION,)


class ConfigError(Exception):
    pass


class ObliterateOptions(Options):
    """
    Command-line options for 'calendarserver_obliterate'
    """

    synopsis = description

    optFlags = [
        ['verbose', 'v', "Verbose logging."],
        ['debug', 'D', "Debug logging."],
        ['fix-props', 'p', "Fix orphaned resource properties only."],
        ['dry-run', 'n', "Do not make any changes."],
    ]

    optParameters = [
        ['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
        ['data', 'd', "./uuids.txt", "Path where list of uuids to obliterate is."],
        ['uuid', 'u', "", "Obliterate this user's data."],
    ]

    def __init__(self):
        super(ObliterateOptions, self).__init__()
        self.outputName = '-'

    def opt_output(self, filename):
        """
        Specify output file path (default: '-', meaning stdout).
        """
        self.outputName = filename

    opt_o = opt_output

    def openOutput(self):
        """
        Open the appropriate output file based on the '--output' option.
        """
        if self.outputName == '-':
            return sys.stdout
        else:
            return open(self.outputName, 'wb')


# Need to patch this in if not present in actual server code
def NotIn(self, subselect):
    # Can't be Select.__contains__ because __contains__ gets __nonzero__
    # called on its result by the 'in' syntax.
    return CompoundComparison(self, 'not in', subselect)


if not hasattr(ExpressionSyntax, "NotIn"):
    ExpressionSyntax.NotIn = NotIn


class ObliterateService(WorkerService, object):
    """
    Service which runs, does its stuff, then stops the reactor.
    """

    def __init__(self, store, options, output, reactor, config):
        super(ObliterateService, self).__init__(store)
        self.options = options
        self.output = output
        self.reactor = reactor
        self.config = config

        self.results = {}
        self.summary = []
        self.totalHomes = 0
        self.totalCalendars = 0
        self.totalResources = 0
        self.attachments = set()

    @inlineCallbacks
    def doWork(self):
        """
        Do the work, stopping the reactor when done.
        """
        self.output.write("\n---- Obliterate version: %s ----\n" % (VERSION,))
        if self.options["dry-run"]:
            self.output.write("---- DRY RUN No Changes Being Made ----\n")

        try:
            if self.options["fix-props"]:
                yield self.obliterateOrphanedProperties()
            else:
                yield self.obliterateUUIDs()
            self.output.close()
        except ConfigError:
            pass
        except:
            log.failure("doWork()")

    @inlineCallbacks
    def obliterateOrphanedProperties(self):
        """
        Obliterate orphaned data in the RESOURCE_PROPERTY table.
        """

        # Get list of distinct resource_property resource_ids to delete
        self.txn = self.store.newTransaction()

        ch = schema.CALENDAR_HOME
        ca = schema.CALENDAR
        co = schema.CALENDAR_OBJECT
        ah = schema.ADDRESSBOOK_HOME
        ao = schema.ADDRESSBOOK_OBJECT
        rp = schema.RESOURCE_PROPERTY

        rows = (yield Select(
            [rp.RESOURCE_ID, ],
            Distinct=True,
            From=rp,
            Where=(rp.RESOURCE_ID.NotIn(
                Select(
                    [ch.RESOURCE_ID],
                    From=ch,
                    SetExpression=Union(
                        Select(
                            [ca.RESOURCE_ID],
                            From=ca,
                            SetExpression=Union(
                                Select(
                                    [co.RESOURCE_ID],
                                    From=co,
                                    SetExpression=Union(
                                        Select(
                                            [ah.RESOURCE_ID],
                                            From=ah,
                                            SetExpression=Union(
                                                Select(
                                                    [ao.RESOURCE_ID],
                                                    From=ao,
                                                ),
                                            ),
                                        ),
                                    ),
                                ),
                            ),
                        ),
                    ),
                ),
            ))
        ).on(self.txn))

        if not rows:
            self.output.write("No orphaned resource properties\n")
            returnValue(None)

        resourceIDs = [row[0] for row in rows]
        resourceIDs_len = len(resourceIDs)
        t = time.time()
        for ctr, resourceID in enumerate(resourceIDs):
            self.output.write("%d of %d (%d%%): ResourceID: %s\n" % (
                ctr + 1,
                resourceIDs_len,
                ((ctr + 1) * 100 / resourceIDs_len),
                resourceID,
            ))
            yield self.removePropertiesForResourceID(resourceID)

            # Commit every 10 DELETEs
            if divmod(ctr + 1, 10)[1] == 0:
                yield self.txn.commit()
                self.txn = self.store.newTransaction()

        yield self.txn.commit()
        self.txn = None

        self.output.write("Obliteration time: %.1fs\n" % (time.time() - t,))

    @inlineCallbacks
    def obliterateUUIDs(self):
        """
        Obliterate specified UUIDs.
        """

        if self.options["uuid"]:
            uuids = [self.options["uuid"], ]
        elif self.options["data"]:
            if not os.path.exists(self.options["data"]):
                self.output.write("%s is not a valid file\n" % (self.options["data"],))
                raise ConfigError
            with open(self.options["data"]) as f:
                uuids = f.read().split()
        else:
            self.output.write("One of --data or --uuid must be specified\n")
            raise ConfigError

        t = time.time()
        uuids_len = len(uuids)
        for ctr, uuid in enumerate(uuids):
            self.txn = self.store.newTransaction()
            self.output.write("%d of %d (%d%%): UUID: %s - " % (
                ctr + 1,
                uuids_len,
                ((ctr + 1) * 100 / uuids_len),
                uuid,
            ))
            result = (yield self.processUUID(uuid))
            self.output.write("%s\n" % (result,))
            yield self.txn.commit()
            self.txn = None

        self.output.write("\nTotal Homes: %d\n" % (self.totalHomes,))
        self.output.write("Total Calendars: %d\n" % (self.totalCalendars,))
        self.output.write("Total Resources: %d\n" % (self.totalResources,))
        if self.attachments:
            self.output.write("Attachments removed: %s\n" % (len(self.attachments),))
            # for attachment in self.attachments:
            #     self.output.write("  %s\n" % (attachment,))

        self.output.write("Obliteration time: %.1fs\n" % (time.time() - t,))

    @inlineCallbacks
    def processUUID(self, uuid):

        # Get the resource-id for the home
        ch = schema.CALENDAR_HOME
        kwds = {"UUID": uuid}
        rows = (yield Select(
            [ch.RESOURCE_ID, ],
            From=ch,
            Where=(
                ch.OWNER_UID == Parameter("UUID")
            ),
        ).on(self.txn, **kwds))

        if not rows:
            returnValue("No home found")
        homeID = rows[0][0]
        self.totalHomes += 1

        # Count resources
        resourceCount = (yield self.countResources(uuid))
        self.totalResources += resourceCount

        # Remove revisions - do before deleting calendars to remove
        # foreign key constraint
        yield self.removeRevisionsForHomeResourceID(homeID)

        # Look at each calendar and unbind/delete-if-owned
        count = (yield self.deleteCalendars(homeID))
        self.totalCalendars += count

        # Remove properties
        yield self.removePropertiesForResourceID(homeID)

        # Remove notifications
        yield self.removeNotificationsForUUID(uuid)
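The pattern used throughout this tool — open a transaction, issue raw DELETEs, and commit periodically so a failure loses at most one batch — can be sketched standalone with sqlite3. The table and column names here are illustrative only, not the real CalendarServer schema:

```python
import sqlite3


def obliterate(conn, uuids, dry_run=False, batch_size=10):
    """Delete each uuid's rows directly, committing every batch_size deletes."""
    deleted = 0
    for ctr, uid in enumerate(uuids):
        if not dry_run:
            # Raw DELETE, bypassing any higher-level store API
            conn.execute("DELETE FROM calendar_home WHERE owner_uid = ?", (uid,))
            deleted += 1
        # Commit periodically so a crash loses at most one batch of work
        if (ctr + 1) % batch_size == 0:
            conn.commit()
    conn.commit()
    return deleted


# Illustrative setup: an in-memory table with 25 fake homes
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE calendar_home (resource_id INTEGER, owner_uid TEXT)")
conn.executemany(
    "INSERT INTO calendar_home VALUES (?, ?)",
    [(i, "uid-%d" % i) for i in range(25)],
)
conn.commit()

n = obliterate(conn, ["uid-%d" % i for i in range(25)])
remaining = conn.execute("SELECT COUNT(*) FROM calendar_home").fetchone()[0]
```

With `dry_run=True` the loop still iterates and commits, but no rows are deleted — mirroring how the tool's `--dry-run` flag skips only the mutating statements.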
        # Remove attachments
        attachmentCount = (yield self.removeAttachments(homeID))

        # Now remove the home
        yield self.removeHomeForResourceID(homeID)

        returnValue("Home, %d calendars, %d resources%s - deleted" % (
            count,
            resourceCount,
            (", %d attachments" % (attachmentCount,)) if attachmentCount else "",
        ))

    @inlineCallbacks
    def countResources(self, uuid):
        ch = schema.CALENDAR_HOME
        cb = schema.CALENDAR_BIND
        co = schema.CALENDAR_OBJECT
        kwds = {"UUID": uuid}
        rows = (yield Select(
            [
                Count(co.RESOURCE_ID),
            ],
            From=ch.join(
                cb, type="inner",
                on=(ch.RESOURCE_ID == cb.CALENDAR_HOME_RESOURCE_ID).And(
                    cb.BIND_MODE == _BIND_MODE_OWN)).join(
                co, type="left",
                on=(cb.CALENDAR_RESOURCE_ID == co.CALENDAR_RESOURCE_ID)),
            Where=(
                ch.OWNER_UID == Parameter("UUID")
            ),
        ).on(self.txn, **kwds))

        returnValue(rows[0][0] if rows else 0)

    @inlineCallbacks
    def deleteCalendars(self, homeID):

        # Get list of binds and bind mode
        cb = schema.CALENDAR_BIND
        kwds = {"resourceID": homeID}
        rows = (yield Select(
            [cb.CALENDAR_RESOURCE_ID, cb.BIND_MODE, ],
            From=cb,
            Where=(
                cb.CALENDAR_HOME_RESOURCE_ID == Parameter("resourceID")
            ),
        ).on(self.txn, **kwds))

        if not rows:
            returnValue(0)

        for resourceID, mode in rows:
            if mode == _BIND_MODE_OWN:
                yield self.deleteCalendar(resourceID)
            else:
                yield self.deleteBind(homeID, resourceID)

        returnValue(len(rows))

    @inlineCallbacks
    def deleteCalendar(self, resourceID):

        # Need to delete any remaining CALENDAR_OBJECT_REVISIONS entries
        yield self.removeRevisionsForCalendarResourceID(resourceID)

        # Delete the CALENDAR entry (will cascade to CALENDAR_BIND and CALENDAR_OBJECT)
        if not self.options["dry-run"]:
            ca = schema.CALENDAR
            kwds = {
                "ResourceID": resourceID,
            }
            yield Delete(
                From=ca,
                Where=(
                    ca.RESOURCE_ID == Parameter("ResourceID")
                ),
            ).on(self.txn, **kwds)

        # Remove properties
        yield self.removePropertiesForResourceID(resourceID)

    @inlineCallbacks
    def deleteBind(self, homeID, resourceID):
        if not self.options["dry-run"]:
            cb = schema.CALENDAR_BIND
            kwds = {
                "HomeID": homeID,
                "ResourceID": resourceID,
            }
            yield Delete(
                From=cb,
                Where=(
                    (cb.CALENDAR_HOME_RESOURCE_ID == Parameter("HomeID")).And
                    (cb.CALENDAR_RESOURCE_ID == Parameter("ResourceID"))
                ),
            ).on(self.txn, **kwds)

    @inlineCallbacks
    def removeRevisionsForHomeResourceID(self, resourceID):
        if not self.options["dry-run"]:
            rev = schema.CALENDAR_OBJECT_REVISIONS
            kwds = {"ResourceID": resourceID}
            yield Delete(
                From=rev,
                Where=(
                    rev.CALENDAR_HOME_RESOURCE_ID == Parameter("ResourceID")
                ),
            ).on(self.txn, **kwds)

    @inlineCallbacks
    def removeRevisionsForCalendarResourceID(self, resourceID):
        if not self.options["dry-run"]:
            rev = schema.CALENDAR_OBJECT_REVISIONS
            kwds = {"ResourceID": resourceID}
            yield Delete(
                From=rev,
                Where=(
                    rev.CALENDAR_RESOURCE_ID == Parameter("ResourceID")
                ),
            ).on(self.txn, **kwds)

    @inlineCallbacks
    def removePropertiesForResourceID(self, resourceID):
        if not self.options["dry-run"]:
            props = schema.RESOURCE_PROPERTY
            kwds = {"ResourceID": resourceID}
            yield Delete(
                From=props,
                Where=(
                    props.RESOURCE_ID == Parameter("ResourceID")
                ),
            ).on(self.txn, **kwds)

    @inlineCallbacks
    def removeNotificationsForUUID(self, uuid):

        # Get NOTIFICATION_HOME.RESOURCE_ID
        nh = schema.NOTIFICATION_HOME
        kwds = {"UUID": uuid}
        rows = (yield Select(
            [nh.RESOURCE_ID, ],
            From=nh,
            Where=(
                nh.OWNER_UID == Parameter("UUID")
            ),
        ).on(self.txn, **kwds))

        if rows:
            resourceID = rows[0][0]

            # Delete NOTIFICATION rows
            if not self.options["dry-run"]:
                no = schema.NOTIFICATION
                kwds = {"ResourceID": resourceID}
                yield Delete(
                    From=no,
                    Where=(
                        no.NOTIFICATION_HOME_RESOURCE_ID == Parameter("ResourceID")
                    ),
                ).on(self.txn, **kwds)

            # Delete NOTIFICATION_HOME (will cascade to NOTIFICATION_OBJECT_REVISIONS)
            if not self.options["dry-run"]:
                kwds = {"UUID": uuid}
                yield Delete(
                    From=nh,
                    Where=(
                        nh.OWNER_UID == Parameter("UUID")
                    ),
                ).on(self.txn, **kwds)

    @inlineCallbacks
    def removeAttachments(self, resourceID):

        # Get ATTACHMENT paths
        at = schema.ATTACHMENT
        kwds = {"resourceID": resourceID}
        rows = (yield Select(
            [at.PATH, ],
            From=at,
            Where=(
                at.CALENDAR_HOME_RESOURCE_ID == Parameter("resourceID")
            ),
        ).on(self.txn, **kwds))

        if rows:
            self.attachments.update([row[0] for row in rows])

            # Delete ATTACHMENT rows
            if not self.options["dry-run"]:
                at = schema.ATTACHMENT
                kwds = {"resourceID": resourceID}
                yield Delete(
                    From=at,
                    Where=(
                        at.CALENDAR_HOME_RESOURCE_ID == Parameter("resourceID")
                    ),
                ).on(self.txn, **kwds)

        returnValue(len(rows) if rows else 0)

    @inlineCallbacks
    def removeHomeForResourceID(self, resourceID):
        if not self.options["dry-run"]:
            ch = schema.CALENDAR_HOME
            kwds = {"ResourceID": resourceID}
            yield Delete(
                From=ch,
                Where=(
                    ch.RESOURCE_ID == Parameter("ResourceID")
                ),
            ).on(self.txn, **kwds)

    def stopService(self):
        """
        Stop the service. Nothing to do; everything should be finished by this
        time.
        """


def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
    """
    Do the obliteration.
    """
    if reactor is None:
        from twisted.internet import reactor

    options = ObliterateOptions()
    options.parseOptions(argv[1:])
    try:
        output = options.openOutput()
    except IOError, e:
        stderr.write("Unable to open output file for writing: %s\n" % (e,))
        sys.exit(1)

    def makeService(store):
        from twistedcaldav.config import config
        config.TransactionTimeoutSeconds = 0
        return ObliterateService(store, options, output, reactor, config)

    utilityMain(options['config'], makeService, reactor, verbose=options["debug"])


if __name__ == '__main__':
    main()


calendarserver-9.1+dfsg/calendarserver/tools/pod_migration.py
#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_calverify -*-
##
# Copyright (c) 2015-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

"""
This tool manages an overall pod migration. Migration is done in a series of
steps, with the system admin triggering each step individually by running
this tool.
"""

import os
import sys

from twisted.internet.defer import inlineCallbacks
from twisted.python.failure import Failure
from twisted.python.text import wordWrap
from twisted.python.usage import Options, UsageError

from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE
from twistedcaldav.timezones import TimezoneCache

from txdav.common.datastore.podding.migration.home_sync import CrossPodHomeSync

from twext.python.log import Logger
from twext.who.idirectory import RecordType

from calendarserver.tools.cmdline import utilityMain, WorkerService

log = Logger()

VERSION = "1"


def usage(e=None):
    if e:
        print(e)
        print("")
    try:
        PodMigrationOptions().opt_help()
    except SystemExit:
        pass
    if e:
        sys.exit(64)
    else:
        sys.exit(0)


description = ''.join(
    wordWrap(
        """
        Usage: calendarserver_pod_migration [options] [input specifiers]
        """,
        int(os.environ.get('COLUMNS', '80'))
    )
)
description += "\nVersion: %s" % (VERSION,)


class ConfigError(Exception):
    pass


class PodMigrationOptions(Options):
    """
    Command-line options for 'calendarserver_pod_migration'
    """

    synopsis = description

    optFlags = [
        ['verbose', 'v', "Verbose logging."],
        ['debug', 'D', "Debug logging."],
        ['step1', '1', "Run step 1 of the migration (initial sync)"],
        ['step2', '2', "Run step 2 of the migration (incremental sync)"],
        ['step3', '3', "Run step 3 of the migration (prepare for final sync)"],
        ['step4', '4', "Run step 4 of the migration (final incremental sync)"],
        ['step5', '5', "Run step 5 of the migration (final reconcile sync)"],
        ['step6', '6', "Run step 6 of the migration (enable new home)"],
        ['step7', '7', "Run step 7 of the migration (remove old home)"],
    ]

    optParameters = [
        ['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
        ['uid', 'u', "", "Directory record uid of user to migrate [REQUIRED]"],
    ]

    longdesc = "Only one step option is allowed."

    def __init__(self):
        super(PodMigrationOptions, self).__init__()
        self.outputName = '-'

    def opt_output(self, filename):
        """
        Specify output file path (default: '-', meaning stdout).
        """
        self.outputName = filename

    opt_o = opt_output

    def openOutput(self):
        """
        Open the appropriate output file based on the '--output' option.
        """
        if self.outputName == '-':
            return sys.stdout
        else:
            return open(self.outputName, 'wb')

    def postOptions(self):
        runstep = None
        for step in range(7):
            if self["step{}".format(step + 1)]:
                if runstep is None:
                    runstep = step
                    self["runstep"] = step + 1
                else:
                    raise UsageError("Only one step option allowed")
        else:
            if runstep is None:
                raise UsageError("One step option must be present")
        if not self["uid"]:
            raise UsageError("A uid is required")


class PodMigrationService(WorkerService, object):
    """
    Service which runs, does its stuff, then stops the reactor.
    """

    def __init__(self, store, options, output, reactor, config):
        super(PodMigrationService, self).__init__(store)
        self.options = options
        self.output = output
        self.reactor = reactor
        self.config = config
        TimezoneCache.create()

    @inlineCallbacks
    def doWork(self):
        """
        Do the work, stopping the reactor when done.
        """
        self.output.write("\n---- Pod Migration version: %s ----\n" % (VERSION,))

        # Map short name to uid
        record = yield self.store.directoryService().recordWithUID(self.options["uid"])
        if record is None:
            record = yield self.store.directoryService().recordWithShortName(
                RecordType.user, self.options["uid"])
            if record is not None:
                self.options["uid"] = record.uid

        try:
            yield getattr(self, "step{}".format(self.options["runstep"]))()
            self.output.close()
        except ConfigError:
            pass
        except:
            self.output.write(str(Failure()))
            log.failure("doWork()")

    @inlineCallbacks
    def step1(self):
        syncer = CrossPodHomeSync(
            self.store, self.options["uid"],
            uselog=self.output if self.options["verbose"] else None
        )
        syncer.accounting("Pod Migration Step 1\n")
        yield syncer.sync()

    @inlineCallbacks
    def step2(self):
        syncer = CrossPodHomeSync(
            self.store, self.options["uid"],
            uselog=self.output if self.options["verbose"] else None
        )
        syncer.accounting("Pod Migration Step 2\n")
        yield syncer.sync()

    @inlineCallbacks
    def step3(self):
        syncer = CrossPodHomeSync(
            self.store, self.options["uid"],
            uselog=self.output if self.options["verbose"] else None
        )
        syncer.accounting("Pod Migration Step 3\n")
        yield syncer.disableRemoteHome()

    @inlineCallbacks
    def step4(self):
        syncer = CrossPodHomeSync(
            self.store, self.options["uid"], final=True,
            uselog=self.output if self.options["verbose"] else None
        )
        syncer.accounting("Pod Migration Step 4\n")
        yield syncer.sync()

    @inlineCallbacks
    def step5(self):
        syncer = CrossPodHomeSync(
            self.store, self.options["uid"], final=True,
            uselog=self.output if self.options["verbose"] else None
        )
        syncer.accounting("Pod Migration Step 5\n")
        yield syncer.finalSync()

    @inlineCallbacks
    def step6(self):
        syncer = CrossPodHomeSync(
            self.store, self.options["uid"],
            uselog=self.output if self.options["verbose"] else None
        )
        syncer.accounting("Pod Migration Step 6\n")
        yield syncer.enableLocalHome()

    @inlineCallbacks
    def step7(self):
        syncer = CrossPodHomeSync(
            self.store, self.options["uid"], final=True,
            uselog=self.output if self.options["verbose"] else None
        )
        syncer.accounting("Pod Migration Step 7\n")
        yield syncer.removeRemoteHome()


def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
    """
    Run the selected pod migration step.
    """
    if reactor is None:
        from twisted.internet import reactor

    options = PodMigrationOptions()
    try:
        options.parseOptions(argv[1:])
    except UsageError as e:
        stderr.write("Invalid options specified\n")
        options.opt_help()
        sys.exit(64)

    try:
        output = options.openOutput()
    except IOError, e:
        stderr.write("Unable to open output file for writing: %s\n" % (e,))
        sys.exit(1)

    def makeService(store):
        from twistedcaldav.config import config
        config.TransactionTimeoutSeconds = 0
        return PodMigrationService(store, options, output, reactor, config)

    utilityMain(options['config'], makeService, reactor, verbose=options["debug"])


if __name__ == '__main__':
    main()


calendarserver-9.1+dfsg/calendarserver/tools/principals.py
#!/usr/bin/env python
##
# Copyright (c) 2006-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

# Suppress warning that occurs on Linux
import sys
if sys.platform.startswith("linux"):
    from Crypto.pct_warnings import PowmInsecureWarning
    import warnings
    warnings.simplefilter("ignore", PowmInsecureWarning)

from getopt import getopt, GetoptError
import operator
import os
import uuid

from calendarserver.tools.cmdline import utilityMain, WorkerService
from calendarserver.tools.util import (
    recordForPrincipalID, prettyRecord, action_addProxy, action_removeProxy
)
from twext.who.directory import DirectoryRecord
from twext.who.idirectory import RecordType, InvalidDirectoryRecordError
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks, returnValue, succeed
from twistedcaldav.config import config
from twistedcaldav.cache import MemcacheChangeNotifier
from txdav.who.delegates import CachingDelegates
from txdav.who.idirectory import AutoScheduleMode
from txdav.who.groups import GroupCacherPollingWork


allowedAutoScheduleModes = {
    "default": None,
    "none": AutoScheduleMode.none,
    "accept-always": AutoScheduleMode.accept,
    "decline-always": AutoScheduleMode.decline,
    "accept-if-free": AutoScheduleMode.acceptIfFree,
    "decline-if-busy": AutoScheduleMode.declineIfBusy,
    "automatic": AutoScheduleMode.acceptIfFreeDeclineIfBusy,
}


def usage(e=None):
    if e:
        print(e)
        print("")

    name = os.path.basename(sys.argv[0])
    print("usage: %s [options] action_flags principal [principal ...]" % (name,))
    print("       %s [options] --list-principal-types" % (name,))
    print("       %s [options] --list-principals type" % (name,))
    print("")
    print("  Performs the given actions against the given principals.")
    print("")
    print("  Principals are identified by one of the following:")
    print("    Type and shortname (eg.: users:wsanchez)")
    # print("    A principal path (eg.: /principals/users/wsanchez/)")
    print("    A GUID (eg.: E415DBA7-40B5-49F5-A7CC-ACC81E4DEC79)")
    print("")
    print("options:")
    print("  -h --help: print this help and exit")
    print("  -f --config <path>: Specify caldavd.plist configuration path")
    print("  -v --verbose: print debugging information")
    print("")
    print("actions:")
    print("  --context <context>: {user|group|location|resource|attendee}; must be used in conjunction with --search")
    print("  --search <tokens>: search using one or more tokens")
    print("  --list-principal-types: list all of the known principal types")
    print("  --list-principals type: list all principals of the given type")
    print("  --list-read-proxies: list proxies with read-only access")
    print("  --list-write-proxies: list proxies with read-write access")
    print("  --list-proxies: list all proxies")
    print("  --list-proxy-for: principals this principal is a proxy for")
    print("  --add-read-proxy=principal: add a read-only proxy")
    print("  --add-write-proxy=principal: add a read-write proxy")
    print("  --remove-proxy=principal: remove a proxy")
    print("  --set-auto-schedule-mode={default|none|accept-always|decline-always|accept-if-free|decline-if-busy|automatic}: set auto-schedule mode")
    print("  --get-auto-schedule-mode: read auto-schedule mode")
    print("  --set-auto-accept-group=principal: set auto-accept-group")
    print("  --get-auto-accept-group: read auto-accept-group")
    print("  --add {locations|resources|addresses} full-name record-name UID: add a principal")
    print("  --remove: remove a principal")
    print("  --set-geo=url: set the geo: url for an address (e.g. geo:37.331741,-122.030333)")
    print("  --get-geo: get the geo: url for an address")
    print("  --set-street-address=streetaddress: set the street address string for an address")
    print("  --get-street-address: get the street address string for an address")
    print("  --set-address=guid: associate principal with an address (by guid)")
    print("  --get-address: get the associated address's guid")
    print("  --refresh-groups: schedule a group membership refresh")
    print("  --print-group-info <group-principals>: prints group delegation and membership")

    if e:
        sys.exit(64)
    else:
        sys.exit(0)


class PrincipalService(WorkerService):
    """
    Executes principals-related functions in a context which has access to the
    store
    """

    function = None
    params = []

    @inlineCallbacks
    def doWork(self):
        """
        Calls the function that's been assigned to "function" and passes the
        root resource, directory, store, and whatever has been assigned to
        "params".
        """
        if (
            config.EnableResponseCache and
            config.Memcached.Pools.Default.ClientEnabled
        ):
            # These class attributes need to be setup with our memcache
            # notifier
            CachingDelegates.cacheNotifier = MemcacheChangeNotifier(None, cacheHandle="PrincipalToken")

        if self.function is not None:
            yield self.function(self.store, *self.params)


def main():
    try:
        (optargs, args) = getopt(
            sys.argv[1:], "a:hf:P:v", [
                "help",
                "config=",
                "add=",
                "remove",
                "context=",
                "search",
                "list-principal-types",
                "list-principals=",

                # Proxies
                "list-read-proxies",
                "list-write-proxies",
                "list-proxies",
                "list-proxy-for",
                "add-read-proxy=",
                "add-write-proxy=",
                "remove-proxy=",

                # Groups
                "list-group-members",
                "add-group-member=",
                "remove-group-member=",
                "print-group-info",
                "refresh-groups",

                # Scheduling
                "set-auto-schedule-mode=",
                "get-auto-schedule-mode",
                "set-auto-accept-group=",
                "get-auto-accept-group",

                # Principal details
                "set-geo=",
                "get-geo",
                "set-address=",
                "get-address",
                "set-street-address=",
                "get-street-address",

                "verbose",
            ],
        )
    except GetoptError, e:
        usage(e)

    #
    # Get configuration
    #
    configFileName = None
    addType = None
    listPrincipalTypes = False
    listPrincipals = None
    searchContext = None
    searchTokens = None
    printGroupInfo = False
    scheduleGroupRefresh = False
    principalActions = []
    verbose = False

    for opt, arg in optargs:

        # Args come in as encoded bytes
        arg = arg.decode("utf-8")

        if opt in ("-h", "--help"):
            usage()

        elif opt in ("-v", "--verbose"):
            verbose = True

        elif opt in ("-f", "--config"):
            configFileName = arg

        elif opt in ("-a", "--add"):
            addType = arg

        elif opt in ("-r", "--remove"):
            principalActions.append((action_removePrincipal,))

        elif opt in ("", "--list-principal-types"):
            listPrincipalTypes = True

        elif opt in ("", "--list-principals"):
            listPrincipals = arg

        elif opt in ("", "--context"):
            searchContext = arg

        elif opt in ("", "--search"):
            searchTokens = args

        elif opt in ("", "--list-read-proxies"):
            principalActions.append((action_listProxies, "read"))

        elif opt in ("", "--list-write-proxies"):
            principalActions.append((action_listProxies, "write"))

        elif opt in ("-L", "--list-proxies"):
            principalActions.append((action_listProxies, "read", "write"))

        elif opt in ("--list-proxy-for"):
            principalActions.append((action_listProxyFor, "read", "write"))

        elif opt in ("--add-read-proxy", "--add-write-proxy"):
            if "read" in opt:
                proxyType = "read"
            elif "write" in opt:
                proxyType = "write"
            else:
                raise AssertionError("Unknown proxy type")
            principalActions.append((action_addProxy, proxyType, arg))

        elif opt in ("", "--remove-proxy"):
            principalActions.append((action_removeProxy, arg))

        elif opt in ("", "--list-group-members"):
            principalActions.append((action_listGroupMembers,))

        elif opt in ("--add-group-member"):
            principalActions.append((action_addGroupMember, arg))

        elif opt in ("", "--remove-group-member"):
            principalActions.append((action_removeGroupMember, arg))

        elif opt in ("", "--print-group-info"):
            printGroupInfo = True

        elif opt in ("", "--refresh-groups"):
            scheduleGroupRefresh = True

        elif opt in ("", "--set-auto-schedule-mode"):
            try:
                if arg not in allowedAutoScheduleModes:
                    raise ValueError("Unknown auto-schedule mode: {mode}".format(mode=arg))
                autoScheduleMode = allowedAutoScheduleModes[arg]
            except ValueError, e:
                abort(e)
            principalActions.append((action_setAutoScheduleMode, autoScheduleMode))

        elif opt in ("", "--get-auto-schedule-mode"):
            principalActions.append((action_getAutoScheduleMode,))

        elif opt in ("", "--set-auto-accept-group"):
            principalActions.append((action_setAutoAcceptGroup, arg))

        elif opt in ("", "--get-auto-accept-group"):
            principalActions.append((action_getAutoAcceptGroup,))

        elif opt in ("", "--set-geo"):
            principalActions.append((action_setValue, u"geographicLocation", arg))

        elif opt in ("", "--get-geo"):
            principalActions.append((action_getValue, u"geographicLocation"))

        elif opt in ("", "--set-street-address"):
            principalActions.append((action_setValue, u"streetAddress", arg))

        elif opt in ("", "--get-street-address"):
            principalActions.append((action_getValue, u"streetAddress"))

        elif opt in ("", "--set-address"):
            principalActions.append((action_setValue, u"associatedAddress", arg))

        elif opt in ("", "--get-address"):
            principalActions.append((action_getValue, u"associatedAddress"))

        else:
            raise NotImplementedError(opt)

    #
    # List principals
    #
    if listPrincipalTypes:
        if args:
            usage("Too many arguments")
        function = runListPrincipalTypes
        params = ()

    elif printGroupInfo:
        function = printGroupCacherInfo
        params = (args,)

    elif scheduleGroupRefresh:
        function = scheduleGroupRefreshJob
        params = ()

    elif addType:
        try:
            addType = matchStrings(
                addType,
                ["locations", "resources", "addresses", "users", "groups"]
            )
        except ValueError, e:
            print(e)
            return

        try:
            fullName, shortName, uid = parseCreationArgs(args)
        except ValueError, e:
            print(e)
            return

        if fullName is not None:
            fullNames = [fullName]
        else:
            fullNames = ()

        if shortName is not None:
            shortNames = [shortName]
        else:
            shortNames = ()

        function = runAddPrincipal
        params = (addType, uid, shortNames, fullNames)

    elif listPrincipals:
        try:
            listPrincipals = matchStrings(
                listPrincipals,
                ["users", "groups", "locations", "resources", "addresses"]
            )
        except ValueError, e:
            print(e)
            return

        if args:
            usage("Too many arguments")
        function = runListPrincipals
        params = (listPrincipals,)

    elif searchTokens:
        function = runSearch
        searchTokens = [t.decode("utf-8") for t in searchTokens]
        params = (searchTokens, searchContext)

    else:
        if not args:
            usage("No principals specified.")
        unicodeArgs = [a.decode("utf-8") for a in args]
        function = runPrincipalActions
        params = (unicodeArgs, principalActions)

    PrincipalService.function = function
    PrincipalService.params = params
    utilityMain(configFileName, PrincipalService, verbose=verbose)


def runListPrincipalTypes(service, store):
    directory = store.directoryService()
    for recordType in directory.recordTypes():
        print(directory.recordTypeToOldName(recordType))
    return succeed(None)


@inlineCallbacks
def runListPrincipals(service, store, listPrincipals):
    directory = store.directoryService()
    recordType = directory.oldNameToRecordType(listPrincipals)
    try:
        records = list((yield directory.recordsWithRecordType(recordType)))
        if records:
            printRecordList(records)
        else:
            print("No records of type %s" % (listPrincipals,))
    except InvalidDirectoryRecordError, e:
        usage(e)
    returnValue(None)


@inlineCallbacks
def runPrincipalActions(service, store, principalIDs, actions):
    directory = store.directoryService()
    for principalID in principalIDs:

        # Resolve the given principal IDs to records
        try:
            record = yield recordForPrincipalID(directory, principalID)
        except ValueError:
            record = None

        if record is None:
            sys.stderr.write("Invalid principal ID: %s\n" % (principalID,))
            continue

        # Performs requested actions
        for action in actions:
            (yield action[0](store, record, *action[1:]))

        print("")


@inlineCallbacks
def runSearch(service, store, tokens, context=None):
    directory = store.directoryService()
    records = list(
        (
            yield directory.recordsMatchingTokens(
                tokens, context=context
            )
        )
    )
    if records:
        records.sort(key=operator.attrgetter('fullNames'))
        print("{n} matches found:".format(n=len(records)))
        for record in records:
            print(
                "\n{d} ({rt})".format(
                    d=record.displayName,
                    rt=record.recordType.name
                )
            )
            print("   UID: {u}".format(u=record.uid,))
            try:
                print(
                    "   Record name{plural}: {names}".format(
                        plural=("s" if len(record.shortNames) > 1 else ""),
                        names=(", ".join(record.shortNames))
                    )
                )
            except AttributeError:
                pass
            try:
                if record.emailAddresses:
                    print(
                        "   Email{plural}: {emails}".format(
                            plural=("s" if len(record.emailAddresses) > 1 else ""),
                            emails=(", ".join(record.emailAddresses))
                        )
                    )
            except AttributeError:
                pass
    else:
        print("No matches found")
    print("")


@inlineCallbacks
def runAddPrincipal(service, store, addType, uid, shortNames, fullNames):
    directory = store.directoryService()
    recordType = directory.oldNameToRecordType(addType)

    # See if that UID is in use
    record = yield directory.recordWithUID(uid)
    if record is not None:
        print("UID already in use: {uid}".format(uid=uid))
        returnValue(None)

    # See if the shortnames are in use
    for shortName in shortNames:
        record = yield directory.recordWithShortName(recordType, shortName)
        if record is not None:
            print("Record name already in use: {name}".format(name=shortName))
            returnValue(None)

    fields = {
        directory.fieldName.recordType: recordType,
        directory.fieldName.uid: uid,
        directory.fieldName.shortNames: shortNames,
        directory.fieldName.fullNames: fullNames,
        directory.fieldName.hasCalendars: True,
        directory.fieldName.hasContacts: True,
    }
    record = DirectoryRecord(directory, fields)
    yield record.service.updateRecords([record], create=True)
    print("Added '{name}'".format(name=fullNames[0]))


@inlineCallbacks
def action_removePrincipal(store, record):
    directory = store.directoryService()
    fullName = record.displayName
    shortNames = ",".join(record.shortNames)

    yield directory.removeRecords([record.uid])
    print(
        "Removed '{full}' {shorts} {uid}".format(
            full=fullName, shorts=shortNames, uid=record.uid
        )
    )


@inlineCallbacks
def action_listProxies(store, record, *proxyTypes):
    directory = store.directoryService()
    for proxyType in proxyTypes:

        groupRecordType = {
            "read": directory.recordType.readDelegateGroup,
            "write": directory.recordType.writeDelegateGroup,
        }.get(proxyType)
        pseudoGroup = yield directory.recordWithShortName(
            groupRecordType, record.uid
        )
        proxies = yield pseudoGroup.members()
        if proxies:
            print("%s proxies for %s:" % (
                {"read": "Read-only", "write": "Read/write"}[proxyType],
                prettyRecord(record)
            ))
            printRecordList(proxies)
            print("")
        else:
            print("No %s proxies for %s" % (proxyType, prettyRecord(record)))


@inlineCallbacks
def action_listProxyFor(store, record, *proxyTypes):
    directory = store.directoryService()
    if record.recordType != directory.recordType.user:
        print("You must pass a user principal to this command")
        returnValue(None)
    for proxyType in proxyTypes:

        groupRecordType = {
            "read": directory.recordType.readDelegatorGroup,
            "write": directory.recordType.writeDelegatorGroup,
        }.get(proxyType)

        pseudoGroup = yield directory.recordWithShortName(
            groupRecordType, record.uid
        )
        proxies = yield pseudoGroup.members()
        if proxies:
            print("%s is a %s proxy for:" % (
                prettyRecord(record),
                {"read": "Read-only", "write": "Read/write"}[proxyType]
            ))
            printRecordList(proxies)
            print("")
        else:
            print(
                "{r} is not a {t} proxy for anyone".format(
                    r=prettyRecord(record),
                    t={"read": "Read-only", "write": "Read/write"}[proxyType]
                )
            )


@inlineCallbacks
def action_listGroupMembers(store, record):
    members = yield record.members()
    if members:
        print("Group members for %s:\n" % (
            prettyRecord(record)
        ))
        printRecordList(members)
        print("")
    else:
        print("No group members for %s" % (prettyRecord(record),))


@inlineCallbacks
def action_addGroupMember(store, record, *memberIDs):
    directory = store.directoryService()
    existingMembers = yield record.members()
    existingMemberUIDs = set([member.uid for member in existingMembers])
    add = set()
    for memberID in memberIDs:
        memberRecord = yield recordForPrincipalID(directory, memberID)
        if memberRecord is None:
            print("Invalid member ID: %s" % (memberID,))
        elif memberRecord.uid in existingMemberUIDs:
            print("Existing member ID: %s" % (memberID,))
        else:
            add.add(memberRecord)
    if add:
        yield record.addMembers(add)
        for memberRecord in add:
            print(
                "Added {member} for {record}".format(
                    member=prettyRecord(memberRecord),
                    record=prettyRecord(record)
                )
            )
        yield record.service.updateRecords([record], create=False)


@inlineCallbacks
def action_removeGroupMember(store, record, *memberIDs):
    directory = store.directoryService()
    existingMembers = yield record.members()
    existingMemberUIDs = set([member.uid for member in existingMembers])
    remove = set()
    for memberID in memberIDs:
        memberRecord = yield recordForPrincipalID(directory, memberID)
        if memberRecord is None:
            print("Invalid member ID: %s" % (memberID,))
        elif memberRecord.uid not in existingMemberUIDs:
            print("Missing member ID: %s" % (memberID,))
        else:
            remove.add(memberRecord)
    if remove:
        yield record.removeMembers(remove)
        for memberRecord in remove:
            print(
                "Removed {member} for {record}".format(
                    member=prettyRecord(memberRecord),
                    record=prettyRecord(record)
                )
            )
        yield record.service.updateRecords([record], create=False)


@inlineCallbacks
def printGroupCacherInfo(service, store, principalIDs):
    """
    Print all groups that have been delegated to, their cached members, and
    who delegated to those groups.
    """
    directory = store.directoryService()
    txn = store.newTransaction()
    if not principalIDs:
        groupUIDs = yield txn.allGroupDelegates()
    else:
        groupUIDs = []
        for principalID in principalIDs:
            record = yield recordForPrincipalID(directory, principalID)
            if record:
                groupUIDs.append(record.uid)

    for groupUID in groupUIDs:
        group = yield txn.groupByUID(groupUID)
        print("Group: \"{name}\" ({uid})".format(name=group.name, uid=group.groupUID))

        for txt, readWrite in (("read-only", False), ("read-write", True)):
            delegatorUIDs = yield txn.delegatorsToGroup(group.groupID, readWrite)
            for delegatorUID in delegatorUIDs:
                delegator = yield directory.recordWithUID(delegatorUID)
                print(
                    "...has {rw} access to {rec}".format(
                        rw=txt, rec=prettyRecord(delegator)
                    )
                )

        print("Group members:")
        memberUIDs = yield txn.groupMemberUIDs(group.groupID)
        for memberUID in memberUIDs:
            record = yield directory.recordWithUID(memberUID)
            print(prettyRecord(record))

        print("Last cached: {} GMT".format(group.modified))
        print()

    yield txn.commit()


@inlineCallbacks
def scheduleGroupRefreshJob(service, store):
    """
    Schedule GroupCacherPollingWork
    """
    txn = store.newTransaction()
    print("Scheduling a group refresh")
    yield GroupCacherPollingWork.reschedule(txn, 0, force=True)
    yield txn.commit()


def action_getAutoScheduleMode(store, record):
    print(
        "Auto-schedule mode for {record} is {mode}".format(
            record=prettyRecord(record),
            mode=(
                record.autoScheduleMode.description if record.autoScheduleMode
                else "Default"
            )
        )
    )


@inlineCallbacks
def action_setAutoScheduleMode(store, record, autoScheduleMode):
    if record.recordType == RecordType.group:
        print(
            "Setting auto-schedule-mode for {record} is not allowed.".format(
                record=prettyRecord(record)
            )
        )

    elif (
        record.recordType == RecordType.user and
        not config.Scheduling.Options.AutoSchedule.AllowUsers
    ):
        print(
            "Setting auto-schedule-mode for {record} is not allowed.".format(
                record=prettyRecord(record)
            )
        )

    else:
        print(
            "Setting auto-schedule-mode to {mode} for {record}".format(
mode=("default" if autoScheduleMode is None else autoScheduleMode.description), record=prettyRecord(record), ) ) yield record.setAutoScheduleMode(autoScheduleMode) @inlineCallbacks def action_setAutoAcceptGroup(store, record, autoAcceptGroup): if record.recordType == RecordType.group: print( "Setting auto-accept-group for {record} is not allowed.".format( record=prettyRecord(record) ) ) elif ( record.recordType == RecordType.user and not config.Scheduling.Options.AutoSchedule.AllowUsers ): print( "Setting auto-accept-group for {record} is not allowed.".format( record=prettyRecord(record) ) ) else: groupRecord = yield recordForPrincipalID(record.service, autoAcceptGroup) if groupRecord is None or groupRecord.recordType != RecordType.group: print("Invalid principal ID: {id}".format(id=autoAcceptGroup)) else: print("Setting auto-accept-group to {group} for {record}".format( group=prettyRecord(groupRecord), record=prettyRecord(record), )) # Get original fields newFields = record.fields.copy() # Set new values newFields[record.service.fieldName.autoAcceptGroup] = groupRecord.uid updatedRecord = DirectoryRecord(record.service, newFields) yield record.service.updateRecords([updatedRecord], create=False) @inlineCallbacks def action_getAutoAcceptGroup(store, record): if record.autoAcceptGroup: groupRecord = yield record.service.recordWithUID( record.autoAcceptGroup ) if groupRecord is not None: print( "Auto-accept-group for {record} is {group}".format( record=prettyRecord(record), group=prettyRecord(groupRecord), ) ) else: print( "Invalid auto-accept-group assigned: {uid}".format( uid=record.autoAcceptGroup ) ) else: print( "No auto-accept-group assigned to {record}".format( record=prettyRecord(record) ) ) @inlineCallbacks def action_setValue(store, record, name, value): print( "Setting {name} to {value} for {record}".format( name=name, value=value, record=prettyRecord(record), ) ) # Get original fields newFields = record.fields.copy() # Set new value 
newFields[record.service.fieldName.lookupByName(name)] = value updatedRecord = DirectoryRecord(record.service, newFields) yield record.service.updateRecords([updatedRecord], create=False) def action_getValue(store, record, name): try: value = record.fields[record.service.fieldName.lookupByName(name)] print( "{name} for {record} is {value}".format( name=name, record=prettyRecord(record), value=value ) ) except KeyError: print( "{name} is not set for {record}".format( name=name, record=prettyRecord(record), ) ) def abort(msg, status=1): sys.stdout.write("%s\n" % (msg,)) try: reactor.stop() except RuntimeError: pass sys.exit(status) def parseCreationArgs(args): """ Look at the command line arguments for --add; the first arg is required and is the full name. If only that one arg is provided, generate a UUID and use it for record name and uid. If two args are provided, use the second arg as the record name and generate a UUID for the uid. If three args are provided, the second arg is the record name and the third arg is the uid. 
""" numArgs = len(args) if numArgs == 0: print( "When adding a principal, you must provide the full-name" ) sys.exit(64) fullName = args[0].decode("utf-8") if numArgs == 1: shortName = uid = unicode(uuid.uuid4()).upper() elif numArgs == 2: shortName = args[1].decode("utf-8") uid = unicode(uuid.uuid4()).upper() else: shortName = args[1].decode("utf-8") uid = args[2].decode("utf-8") return fullName, shortName, uid def isUUID(value): try: uuid.UUID(value) return True except: return False def matchStrings(value, validValues): for validValue in validValues: if validValue.startswith(value): return validValue raise ValueError("'%s' is not a recognized value" % (value,)) def printRecordList(records): results = [] for record in records: try: shortNames = record.shortNames except AttributeError: shortNames = [] results.append( (record.displayName, record.recordType.name, record.uid, shortNames) ) results.sort() format = "%-22s %-10s %-20s %s" print(format % ("Full name", "Type", "UID", "Short names")) print(format % ("---------", "----", "---", "-----------")) for fullName, recordType, uid, shortNames in results: print(format % (fullName, recordType, uid, u", ".join(shortNames))) if __name__ == "__main__": main() calendarserver-9.1+dfsg/calendarserver/tools/purge.py000077500000000000000000001415361315003562600231540ustar00rootroot00000000000000#!/usr/bin/env python # -*- test-case-name: calendarserver.tools.test.test_purge -*- ## # Copyright (c) 2006-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

import collections
import datetime
from getopt import getopt, GetoptError
import os
import sys

from calendarserver.tools import tables
from calendarserver.tools.cmdline import utilityMain, WorkerService

from pycalendar.datetime import DateTime

from twext.enterprise.dal.record import fromTable
from twext.enterprise.dal.syntax import Delete, Select, Union, Parameter, Max
from twext.enterprise.jobs.workitem import WorkItem, RegeneratingWorkItem
from twext.python.log import Logger

from twisted.internet.defer import inlineCallbacks, returnValue, succeed

from twistedcaldav import caldavxml
from twistedcaldav.config import config
from twistedcaldav.dateops import parseSQLDateToPyCalendar, pyCalendarToSQLTimestamp
from twistedcaldav.ical import Component, InvalidICalendarDataError

from txdav.caldav.datastore.query.filter import Filter
from txdav.common.datastore.sql_tables import schema, _HOME_STATUS_NORMAL, _BIND_MODE_OWN


log = Logger()

DEFAULT_BATCH_SIZE = 100
DEFAULT_RETAIN_DAYS = 365


class PrincipalPurgePollingWork(
    RegeneratingWorkItem,
    fromTable(schema.PRINCIPAL_PURGE_POLLING_WORK)
):
    """
    A work item that scans the existing set of provisioned homes in the
    store and creates a work item for each to be checked against the
    directory to see if they need purging.
    """

    group = "principal_purge_polling"

    @classmethod
    def initialSchedule(cls, store, seconds):
        def _enqueue(txn):
            return PrincipalPurgePollingWork.reschedule(txn, seconds)

        if config.AutomaticPurging.Enabled:
            return store.inTransaction("PrincipalPurgePollingWork.initialSchedule", _enqueue)
        else:
            return succeed(None)

    def regenerateInterval(self):
        """
        Return the interval in seconds between regenerating instances.
        """
        return config.AutomaticPurging.PollingIntervalSeconds if config.AutomaticPurging.Enabled else None

    @inlineCallbacks
    def doWork(self):

        # If not enabled, punt here
        if not config.AutomaticPurging.Enabled:
            returnValue(None)

        # Do the scan
        allUIDs = set()
        for home in (schema.CALENDAR_HOME, schema.ADDRESSBOOK_HOME):
            for [uid] in (
                yield Select(
                    [home.OWNER_UID],
                    From=home,
                    Where=(home.STATUS == _HOME_STATUS_NORMAL),
                ).on(self.transaction)
            ):
                allUIDs.add(uid)

        # Spread out the per-uid checks 0 second apart
        seconds = 0
        for uid in allUIDs:
            notBefore = (
                datetime.datetime.utcnow() +
                datetime.timedelta(seconds=config.AutomaticPurging.CheckStaggerSeconds)
            )
            seconds += 1
            yield self.transaction.enqueue(
                PrincipalPurgeCheckWork,
                uid=uid,
                notBefore=notBefore
            )


class PrincipalPurgeCheckWork(
    WorkItem,
    fromTable(schema.PRINCIPAL_PURGE_CHECK_WORK)
):
    """
    Work item for checking for the existence of a UID in the directory.
    This work item is created by L{PrincipalPurgePollingWork} - one for
    each unique user UID to check.
    """

    group = property(lambda self: (self.table.UID == self.uid))

    @inlineCallbacks
    def doWork(self):

        # Delete any other work items for this UID
        yield Delete(
            From=self.table,
            Where=self.group,
        ).on(self.transaction)

        # If not enabled, punt here
        if not config.AutomaticPurging.Enabled:
            returnValue(None)

        log.debug("Checking for existence of {uid} in directory", uid=self.uid)
        directory = self.transaction.store().directoryService()
        record = yield directory.recordWithUID(self.uid)

        if record is None:
            # Schedule purge of this UID a week from now
            notBefore = (
                datetime.datetime.utcnow() +
                datetime.timedelta(seconds=config.AutomaticPurging.PurgeIntervalSeconds)
            )
            log.warn(
                "Principal {uid} is no longer in the directory; scheduling clean-up at {when}",
                uid=self.uid, when=notBefore
            )
            yield self.transaction.enqueue(
                PrincipalPurgeWork,
                uid=self.uid,
                notBefore=notBefore
            )
        else:
            log.debug("{uid} is still in the directory", uid=self.uid)


class PrincipalPurgeWork(
    WorkItem,
    fromTable(schema.PRINCIPAL_PURGE_WORK)
):
    """
    Work item for purging a UID's data
    """

    group = property(lambda self: (self.table.UID == self.uid))

    @inlineCallbacks
    def doWork(self):

        # Delete any other work items for this UID
        yield Delete(
            From=self.table,
            Where=self.group,
        ).on(self.transaction)

        # If not enabled, punt here
        if not config.AutomaticPurging.Enabled:
            returnValue(None)

        # Check for UID in directory again
        log.debug("One last existence check for {uid}", uid=self.uid)
        directory = self.transaction.store().directoryService()
        record = yield directory.recordWithUID(self.uid)

        if record is None:
            # Time to go
            service = PurgePrincipalService(self.transaction.store())
            log.warn(
                "Cleaning up future events for principal {uid} since they are no longer in directory",
                uid=self.uid
            )
            yield service.purgeUIDs(
                self.transaction.store(), directory,
                [self.uid], proxies=True, when=None
            )
        else:
            log.debug("{uid} has re-appeared in the directory", uid=self.uid)


class PrincipalPurgeHomeWork(
    WorkItem,
    fromTable(schema.PRINCIPAL_PURGE_HOME_WORK)
):
    """
    Work item for removing a UID's home
    """

    group = property(lambda self: (self.table.HOME_RESOURCE_ID == self.homeResourceID))

    @inlineCallbacks
    def doWork(self):

        # Delete any other work items for this UID
        yield Delete(
            From=self.table,
            Where=self.group,
        ).on(self.transaction)

        # NB We do not check config.AutomaticPurging.Enabled here because if this work
        # item was enqueued we always need to complete it

        # Check for pending scheduling operations
        sow = schema.SCHEDULE_ORGANIZER_WORK
        sosw = schema.SCHEDULE_ORGANIZER_SEND_WORK
        srw = schema.SCHEDULE_REPLY_WORK
        rows = yield Select(
            [sow.HOME_RESOURCE_ID],
            From=sow,
            Where=(sow.HOME_RESOURCE_ID == self.homeResourceID),
            SetExpression=Union(
                Select(
                    [sosw.HOME_RESOURCE_ID],
                    From=sosw,
                    Where=(sosw.HOME_RESOURCE_ID == self.homeResourceID),
                    SetExpression=Union(
                        Select(
                            [srw.HOME_RESOURCE_ID],
                            From=srw,
                            Where=(srw.HOME_RESOURCE_ID == self.homeResourceID),
                        )
                    ),
                )
            ),
        ).on(self.transaction)

        if rows and len(rows):
            # Regenerate this job
            notBefore = (
                datetime.datetime.utcnow() +
                datetime.timedelta(seconds=config.AutomaticPurging.HomePurgeDelaySeconds)
            )
            yield self.transaction.enqueue(
                PrincipalPurgeHomeWork,
                homeResourceID=self.homeResourceID,
                notBefore=notBefore
            )
        else:
            # Get the home and remove it - only if properly marked as being purged
            home = yield self.transaction.calendarHomeWithResourceID(self.homeResourceID)
            if home.purging():
                yield home.remove()


class PurgeOldEventsService(WorkerService):

    uuid = None
    cutoff = None
    batchSize = None
    dryrun = False
    debug = False

    @classmethod
    def usage(cls, e=None):

        name = os.path.basename(sys.argv[0])
        print("usage: %s [options]" % (name,))
        print("")
        print("  Remove old events from the calendar server")
        print("")
        print("options:")
        print("  -h --help: print this help and exit")
        print("  -f --config <path>: Specify caldavd.plist configuration path")
        print("  -u --uuid <uuid>: Only process this user(s) [REQUIRED]")
        print("  -d --days <number>: specify how many days in the past to retain (default=%d)" % (DEFAULT_RETAIN_DAYS,))
        print("  -n --dry-run: calculate how many events to purge, but do not purge data")
        print("  -v --verbose: print progress information")
        print("  -D --debug: debug logging")
        print("")

        if e:
            sys.stderr.write("%s\n" % (e,))
            sys.exit(64)
        else:
            sys.exit(0)

    @classmethod
    def main(cls):

        try:
            (optargs, args) = getopt(
                sys.argv[1:], "Dd:b:f:hnu:v", [
                    "days=",
                    "batch=",
                    "dry-run",
                    "config=",
                    "uuid=",
                    "help",
                    "verbose",
                    "debug",
                ],
            )
        except GetoptError, e:
            cls.usage(e)

        #
        # Get configuration
        #
        configFileName = None
        uuid = None
        days = DEFAULT_RETAIN_DAYS
        batchSize = DEFAULT_BATCH_SIZE
        dryrun = False
        verbose = False
        debug = False

        for opt, arg in optargs:
            if opt in ("-h", "--help"):
                cls.usage()

            elif opt in ("-d", "--days"):
                try:
                    days = int(arg)
                except ValueError, e:
                    print("Invalid value for --days: %s" % (arg,))
                    cls.usage(e)

            elif opt in ("-b", "--batch"):
                try:
                    batchSize = int(arg)
                except ValueError, e:
                    print("Invalid value for --batch: %s" % (arg,))
                    cls.usage(e)

            elif opt in ("-v", "--verbose"):
                verbose = True

            elif opt in ("-D", "--debug"):
                debug = True

            elif opt in ("-n", "--dry-run"):
                dryrun = True

            elif opt in ("-f", "--config"):
                configFileName = arg

            elif opt in ("-u", "--uuid"):
                uuid = arg

            else:
                raise NotImplementedError(opt)

        if args:
            cls.usage("Too many arguments: %s" % (args,))

        if uuid is None:
            cls.usage("uuid must be specified")
        cls.uuid = uuid

        if dryrun:
            verbose = True

        cutoff = DateTime.getToday()
        cutoff.setDateOnly(False)
        cutoff.offsetDay(-days)
        cls.cutoff = cutoff
        cls.batchSize = batchSize
        cls.dryrun = dryrun
        cls.debug = debug

        utilityMain(
            configFileName,
            cls,
            verbose=verbose,
        )

    @classmethod
    @inlineCallbacks
    def purgeOldEvents(cls, store, uuid, cutoff, batchSize, debug=False, dryrun=False):

        service = cls(store)
        service.uuid = uuid
        service.cutoff = cutoff
        service.batchSize = batchSize
        service.dryrun = dryrun
        service.debug = debug
        result = yield service.doWork()
        returnValue(result)

    @inlineCallbacks
    def getMatchingHomeUIDs(self):
        """
        Find all the calendar homes that match the uuid cli argument.
        """
        log.debug("Searching for calendar homes matching: '{uid}'", uid=self.uuid)
        txn = self.store.newTransaction(label="Find matching homes")
        ch = schema.CALENDAR_HOME
        if self.uuid:
            kwds = {"uuid": self.uuid}
            rows = (yield Select(
                [ch.RESOURCE_ID, ch.OWNER_UID, ],
                From=ch,
                Where=(ch.OWNER_UID.StartsWith(Parameter("uuid"))),
            ).on(txn, **kwds))
        else:
            rows = (yield Select(
                [ch.RESOURCE_ID, ch.OWNER_UID, ],
                From=ch,
            ).on(txn))
        yield txn.commit()
        log.debug("  Found {len} calendar homes", len=len(rows))
        returnValue(sorted(rows, key=lambda x: x[1]))

    @inlineCallbacks
    def getMatchingCalendarIDs(self, home_id, owner_uid):
        """
        Find all the owned calendars for the specified calendar home.

        @param home_id: resource-id of calendar home to check
        @type home_id: L{int}
        @param owner_uid: owner UUID of home to check
        @type owner_uid: L{str}
        """
        log.debug("Checking calendar home: {id} '{uid}'", id=home_id, uid=owner_uid)
        txn = self.store.newTransaction(label="Find matching calendars")
        cb = schema.CALENDAR_BIND
        kwds = {"home_id": home_id}
        rows = (yield Select(
            [cb.CALENDAR_RESOURCE_ID, cb.CALENDAR_RESOURCE_NAME, ],
            From=cb,
            Where=(cb.CALENDAR_HOME_RESOURCE_ID == Parameter("home_id")).And(
                cb.BIND_MODE == _BIND_MODE_OWN
            ),
        ).on(txn, **kwds))
        yield txn.commit()
        log.debug("  Found {len} calendars", len=len(rows))
        returnValue(rows)

    PurgeEvent = collections.namedtuple("PurgeEvent", ("home", "calendar", "resource",))

    @inlineCallbacks
    def getResourceIDsToPurge(self, home_id, calendar_id, calendar_name):
        """
        For the given calendar find which calendar objects are older than
        the cut-off and return the resource-ids of those.

        @param home_id: resource-id of calendar home
        @type home_id: L{int}
        @param calendar_id: resource-id of the calendar to check
        @type calendar_id: L{int}
        @param calendar_name: name of the calendar to check
        @type calendar_name: L{str}
        """
        log.debug("  Checking calendar: {id} '{name}'", id=calendar_id, name=calendar_name)
        purge = set()
        txn = self.store.newTransaction(label="Find matching resources")
        co = schema.CALENDAR_OBJECT
        tr = schema.TIME_RANGE
        kwds = {"calendar_id": calendar_id}
        rows = (yield Select(
            [co.RESOURCE_ID, co.RECURRANCE_MAX, co.RECURRANCE_MIN, Max(tr.END_DATE)],
            From=co.join(tr, on=(co.RESOURCE_ID == tr.CALENDAR_OBJECT_RESOURCE_ID)),
            Where=(co.CALENDAR_RESOURCE_ID == Parameter("calendar_id")).And(
                co.ICALENDAR_TYPE == "VEVENT"
            ),
            GroupBy=(co.RESOURCE_ID, co.RECURRANCE_MAX, co.RECURRANCE_MIN,),
            Having=(
                (co.RECURRANCE_MAX == None).And(Max(tr.END_DATE) < pyCalendarToSQLTimestamp(self.cutoff))
            ).Or(
                (co.RECURRANCE_MAX != None).And(co.RECURRANCE_MAX < pyCalendarToSQLTimestamp(self.cutoff))
            ),
        ).on(txn, **kwds))

        log.debug("    Found {len} resources to check", len=len(rows))
        for resource_id, recurrence_max, recurrence_min, max_end_date in rows:

            recurrence_max = parseSQLDateToPyCalendar(recurrence_max) if recurrence_max else None
            recurrence_min = parseSQLDateToPyCalendar(recurrence_min) if recurrence_min else None
            max_end_date = parseSQLDateToPyCalendar(max_end_date) if max_end_date else None

            # Find events where we know the max(end_date) represents a valid,
            # untruncated expansion
            if recurrence_min is None or recurrence_min < self.cutoff:
                if recurrence_max is None:
                    # Here we know max_end_date is the fully expand final instance
                    if max_end_date < self.cutoff:
                        purge.add(self.PurgeEvent(home_id, calendar_id, resource_id,))
                    continue
                elif recurrence_max > self.cutoff:
                    # Here we know that there are instances newer than the cut-off
                    # but they have not yet been indexed out that far
                    continue

            # Manually detect the max_end_date from the actual calendar data
            calendar = yield self.getCalendar(txn, resource_id)
            if calendar is not None:
                if self.checkLastInstance(calendar):
                    purge.add(self.PurgeEvent(home_id, calendar_id, resource_id,))
        yield txn.commit()
        log.debug("    Found {len} resources to purge", len=len(purge))
        returnValue(purge)

    @inlineCallbacks
    def getCalendar(self, txn, resid):
        """
        Get the calendar data for a calendar object resource.

        @param resid: resource-id of the calendar object resource to load
        @type resid: L{int}
        """
        co = schema.CALENDAR_OBJECT
        kwds = {"ResourceID": resid}
        rows = (yield Select(
            [co.ICALENDAR_TEXT],
            From=co,
            Where=(
                co.RESOURCE_ID == Parameter("ResourceID")
            ),
        ).on(txn, **kwds))
        try:
            caldata = Component.fromString(rows[0][0]) if rows else None
        except InvalidICalendarDataError:
            returnValue(None)

        returnValue(caldata)

    def checkLastInstance(self, calendar):
        """
        Determine the last instance of a calendar event. Try a "static"
        analysis of the data first, and only if needed, do an instance
        expansion.

        @param calendar: the calendar object to examine
        @type calendar: L{Component}
        """

        # Is it recurring
        master = calendar.masterComponent()
        if not calendar.isRecurring() or master is None:
            # Just check the end date
            for comp in calendar.subcomponents():
                if comp.name() == "VEVENT":
                    if comp.getEndDateUTC() > self.cutoff:
                        return False
            else:
                return True
        elif calendar.isRecurringUnbounded():
            return False
        else:
            # First test all sub-components
            # Just check the end date
            for comp in calendar.subcomponents():
                if comp.name() == "VEVENT":
                    if comp.getEndDateUTC() > self.cutoff:
                        return False

            # If we get here we need to test the RRULE - if there is an until use
            # that as the end point, if a count, we have to expand
            rrules = tuple(master.properties("RRULE"))
            if len(rrules):
                if rrules[0].value().getUseUntil():
                    return rrules[0].value().getUntil() < self.cutoff
                else:
                    return not calendar.hasInstancesAfter(self.cutoff)

        return True

    @inlineCallbacks
    def getResourcesToPurge(self, home_id, owner_uid):
        """
        Find all the resource-ids of calendar object resources that need to
        be purged in the specified home.

        @param home_id: resource-id of calendar home to check
        @type home_id: L{int}
        @param owner_uid: owner UUID of home to check
        @type owner_uid: L{str}
        """
        purge = set()
        calendars = yield self.getMatchingCalendarIDs(home_id, owner_uid)
        for calendar_id, calendar_name in calendars:
            purge.update((yield self.getResourceIDsToPurge(home_id, calendar_id, calendar_name)))

        returnValue(purge)

    @inlineCallbacks
    def purgeResources(self, events):
        """
        Remove up to batchSize events and return how many were removed.
        """
        txn = self.store.newTransaction(label="Remove old events")
        count = 0
        last_home = None
        last_calendar = None
        for event in events:
            if event.home != last_home:
                home = (yield txn.calendarHomeWithResourceID(event.home))
                last_home = event.home
            if event.calendar != last_calendar:
                calendar = (yield home.childWithID(event.calendar))
                last_calendar = event.calendar
            resource = (yield calendar.objectResourceWithID(event.resource))
            yield resource.purge(implicitly=False)
            log.debug(
                "Removed resource {id} '{name}' from calendar {pid} '{pname}' of calendar home '{uid}'",
                id=resource.id(),
                name=resource.name(),
                pid=resource.parentCollection().id(),
                pname=resource.parentCollection().name(),
                uid=resource.parentCollection().ownerHome().uid()
            )
            count += 1
        yield txn.commit()
        returnValue(count)

    @inlineCallbacks
    def doWork(self):

        if self.debug:
            # Turn on debug logging for this module
            config.LogLevels[__name__] = "debug"
        else:
            config.LogLevels[__name__] = "info"
        config.update()

        homes = yield self.getMatchingHomeUIDs()
        if not homes:
            log.info("No homes to process")
            returnValue(0)

        if self.dryrun:
            log.info("Purge dry run only")

        log.info("Searching for old events...")
        purge = set()
        homes = yield self.getMatchingHomeUIDs()
        for home_id, owner_uid in homes:
            purge.update((yield self.getResourcesToPurge(home_id, owner_uid)))

        if self.dryrun:
            eventCount = len(purge)
            if eventCount == 0:
                log.info("No events are older than {cutoff}", cutoff=self.cutoff)
            elif eventCount == 1:
                log.info("1 event is older than {cutoff}", cutoff=self.cutoff)
            else:
                log.info("{count} events are older than {cutoff}", count=eventCount, cutoff=self.cutoff)
            returnValue(eventCount)

        purge = list(purge)
        purge.sort()
        totalEvents = len(purge)

        log.info("Removing {len} events older than {cutoff}...", len=len(purge), cutoff=self.cutoff)

        numEventsRemoved = -1
        totalRemoved = 0
        while numEventsRemoved:
            numEventsRemoved = (yield self.purgeResources(purge[:self.batchSize]))
            if numEventsRemoved:
                totalRemoved += numEventsRemoved
                log.debug("  Removed {removed} of {total} events...", removed=totalRemoved, total=totalEvents)
                purge = purge[numEventsRemoved:]

        if totalRemoved == 0:
            log.info("No events were removed")
        elif totalRemoved == 1:
            log.info("1 event was removed in total")
        else:
            log.info("{total} events were removed in total", total=totalRemoved)

        returnValue(totalRemoved)


class PurgeAttachmentsService(WorkerService):

    uuid = None
    cutoff = None
    batchSize = None
    dryrun = False
    verbose = False

    @classmethod
    def usage(cls, e=None):

        name = os.path.basename(sys.argv[0])
        print("usage: %s [options]" % (name,))
        print("")
        print("  Remove old or orphaned attachments from the calendar server")
        print("")
        print("options:")
        print("  -h --help: print this help and exit")
        print("  -f --config <path>: Specify caldavd.plist configuration path")
        print("  -u --uuid <uuid>: target a specific user UID")
        # print("  -b --batch <number>: number of attachments to remove in each transaction (default=%d)" % (DEFAULT_BATCH_SIZE,))
        print("  -d --days <number>: specify how many days in the past to retain (default=%d) zero means no removal of old attachments" % (DEFAULT_RETAIN_DAYS,))
        print("  -n --dry-run: calculate how many attachments to purge, but do not purge data")
        print("  -v --verbose: print progress information")
        print("  -D --debug: debug logging")
        print("")

        if e:
            sys.stderr.write("%s\n" % (e,))
            sys.exit(64)
        else:
            sys.exit(0)

    @classmethod
    def main(cls):

        try:
            (optargs, args) = getopt(
                sys.argv[1:], "Dd:b:f:hnu:v", [
                    "uuid=",
                    "days=",
                    "batch=",
                    "dry-run",
                    "config=",
                    "help",
                    "verbose",
                    "debug",
                ],
            )
        except GetoptError, e:
            cls.usage(e)

        #
        # Get configuration
        #
        configFileName = None
        uuid = None
        days = DEFAULT_RETAIN_DAYS
        batchSize = DEFAULT_BATCH_SIZE
        dryrun = False
        verbose = False
        debug = False

        for opt, arg in optargs:
            if opt in ("-h", "--help"):
                cls.usage()

            elif opt in ("-u", "--uuid"):
                uuid = arg

            elif opt in ("-d", "--days"):
                try:
                    days = int(arg)
                except ValueError, e:
                    print("Invalid value for --days: %s" % (arg,))
                    cls.usage(e)

            elif opt in ("-b", "--batch"):
                try:
                    batchSize = int(arg)
                except ValueError, e:
                    print("Invalid value for --batch: %s" % (arg,))
                    cls.usage(e)

            elif opt in ("-v", "--verbose"):
                verbose = True

            elif opt in ("-D", "--debug"):
                debug = True

            elif opt in ("-n", "--dry-run"):
                dryrun = True

            elif opt in ("-f", "--config"):
                configFileName = arg

            else:
                raise NotImplementedError(opt)

        if args:
            cls.usage("Too many arguments: %s" % (args,))

        if dryrun:
            verbose = True

        cls.uuid = uuid
        if days > 0:
            cutoff = DateTime.getToday()
            cutoff.setDateOnly(False)
            cutoff.offsetDay(-days)
            cls.cutoff = cutoff
        else:
            cls.cutoff = None
        cls.batchSize = batchSize
        cls.dryrun = dryrun
        cls.verbose = verbose

        utilityMain(
            configFileName,
            cls,
            verbose=debug,
        )

    @classmethod
    @inlineCallbacks
    def purgeAttachments(cls, store, uuid, days, limit, dryrun, verbose):

        service = cls(store)
        service.uuid = uuid
        if days > 0:
            cutoff = DateTime.getToday()
            cutoff.setDateOnly(False)
            cutoff.offsetDay(-days)
            service.cutoff = cutoff
        else:
            service.cutoff = None
        service.batchSize = limit
        service.dryrun = dryrun
        service.verbose = verbose
        result = yield service.doWork()
        returnValue(result)

    @inlineCallbacks
    def doWork(self):

        if self.dryrun:
            orphans = yield self._orphansDryRun()
            if self.cutoff is not None:
                dropbox = yield self._dropboxDryRun()
                managed = yield self._managedDryRun()
            else:
                dropbox = ()
                managed = ()
            returnValue(self._dryRunSummary(orphans, dropbox, managed))
        else:
            total = yield self._orphansPurge()
            if self.cutoff is not None:
                total += yield self._dropboxPurge()
                total += yield self._managedPurge()
            returnValue(total)

    @inlineCallbacks
    def _orphansDryRun(self):

        if self.verbose:
            print("(Dry run) Searching for orphaned attachments...")
        txn = self.store.newTransaction(label="Find orphaned attachments")
        orphans = yield txn.orphanedAttachments(self.uuid)
        returnValue(orphans)

    @inlineCallbacks
    def _dropboxDryRun(self):

        if self.verbose:
            print("(Dry run) Searching for old dropbox attachments...")
        txn = self.store.newTransaction(label="Find old dropbox attachments")
        cutoffs = yield txn.oldDropboxAttachments(self.cutoff, self.uuid)
        yield txn.commit()

        returnValue(cutoffs)

    @inlineCallbacks
    def _managedDryRun(self):

        if self.verbose:
            print("(Dry run) Searching for old managed attachments...")
        txn = self.store.newTransaction(label="Find old managed attachments")
        cutoffs = yield txn.oldManagedAttachments(self.cutoff, self.uuid)
        yield txn.commit()

        returnValue(cutoffs)

    def _dryRunSummary(self, orphans, dropbox, managed):

        if self.verbose:
            byuser = {}
            ByUserData = collections.namedtuple(
                'ByUserData',
                ['quota', 'orphanSize', 'orphanCount', 'dropboxSize', 'dropboxCount', 'managedSize', 'managedCount']
            )
            for user, quota, size, count in orphans:
                byuser[user] = ByUserData(quota=quota, orphanSize=size, orphanCount=count, dropboxSize=0, dropboxCount=0, managedSize=0, managedCount=0)

            for user, quota, size, count in dropbox:
                if user in byuser:
                    byuser[user] = byuser[user]._replace(dropboxSize=size, dropboxCount=count)
                else:
                    byuser[user] = ByUserData(quota=quota, orphanSize=0, orphanCount=0, dropboxSize=size, dropboxCount=count, managedSize=0, managedCount=0)

            for user, quota, size, count in managed:
                if user in byuser:
                    byuser[user] = byuser[user]._replace(managedSize=size, managedCount=count)
                else:
                    byuser[user] = ByUserData(quota=quota, orphanSize=0, orphanCount=0, dropboxSize=0, dropboxCount=0, managedSize=size, managedCount=count)

            # Print table of results
            table = tables.Table()
            table.addHeader(("User", "Current Quota", "Orphan Size", "Orphan Count", "Dropbox Size", "Dropbox Count", "Managed Size", "Managed Count", "Total Size", "Total Count"))
            table.setDefaultColumnFormats((
                tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY),
                tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            ))

            totals = [0] * 8
            for user, data in sorted(byuser.items(), key=lambda x: x[0]):
                cols = (
                    data.orphanSize,
                    data.orphanCount,
                    data.dropboxSize,
                    data.dropboxCount,
                    data.managedSize,
                    data.managedCount,
                    data.orphanSize + data.dropboxSize + data.managedSize,
                    data.orphanCount + data.dropboxCount + data.managedCount,
                )
                table.addRow((user, data.quota,) + cols)
                for ctr, value in enumerate(cols):
                    totals[ctr] += value
            table.addFooter(("Total:", "",) + tuple(totals))
            total = totals[7]

            print("\n")
            print("Orphaned/Old Attachments by User:\n")
            table.printTable()
        else:
            total = sum([x[3] for x in orphans]) + sum([x[3] for x in dropbox]) + sum([x[3] for x in managed])

        return total

    @inlineCallbacks
    def _orphansPurge(self):

        if self.verbose:
            print("Removing orphaned attachments...",)

        numOrphansRemoved = -1
        totalRemoved = 0
        while numOrphansRemoved:
            txn = self.store.newTransaction(label="Remove orphaned attachments")
            numOrphansRemoved = yield txn.removeOrphanedAttachments(self.uuid, batchSize=self.batchSize)
            yield txn.commit()
            if numOrphansRemoved:
                totalRemoved += numOrphansRemoved
                if self.verbose:
                    print(" %d," % (totalRemoved,),)
            elif self.verbose:
                print("")

        if self.verbose:
            if totalRemoved == 0:
                print("No orphaned attachments were removed")
            elif totalRemoved == 1:
                print("1 orphaned attachment was removed in total")
            else:
                print("%d orphaned attachments were removed in total" % (totalRemoved,))
            print("")

        returnValue(totalRemoved)

    @inlineCallbacks
    def _dropboxPurge(self):

        if self.verbose:
            print("Removing old dropbox attachments...",)

        numOldRemoved = -1
        totalRemoved = 0
        while numOldRemoved:
            txn = self.store.newTransaction(label="Remove old dropbox attachments")
            numOldRemoved = yield txn.removeOldDropboxAttachments(self.cutoff, self.uuid, batchSize=self.batchSize)
            yield txn.commit()
            if numOldRemoved:
                totalRemoved += numOldRemoved
                if self.verbose:
                    print(" %d," % (totalRemoved,),)
            elif self.verbose:
                print("")

        if self.verbose:
            if totalRemoved == 0:
                print("No old dropbox attachments were removed")
            elif totalRemoved == 1:
                print("1 old dropbox attachment was removed in total")
            else:
                print("%d old dropbox attachments were removed in total" % (totalRemoved,))
            print("")

        returnValue(totalRemoved)

    @inlineCallbacks
    def _managedPurge(self):

        if self.verbose:
            print("Removing old managed attachments...",)

        numOldRemoved = -1
        totalRemoved = 0
        while numOldRemoved:
            txn = self.store.newTransaction(label="Remove old managed attachments")
            numOldRemoved = yield txn.removeOldManagedAttachments(self.cutoff, self.uuid, batchSize=self.batchSize)
            yield txn.commit()
            if numOldRemoved:
                totalRemoved += numOldRemoved
                if self.verbose:
                    print(" %d," % (totalRemoved,),)
            elif self.verbose:
                print("")

        if self.verbose:
            if totalRemoved == 0:
                print("No old managed attachments were removed")
            elif totalRemoved == 1:
                print("1 old managed attachment was removed in total")
            else:
                print("%d old managed attachments were removed in total" % (totalRemoved,))
            print("")

        returnValue(totalRemoved)


class PurgePrincipalService(WorkerService):

    root = None
    directory = None
    uids = None
    dryrun = False
    verbose = 
False proxies = True when = None @classmethod def usage(cls, e=None): name = os.path.basename(sys.argv[0]) print("usage: %s [options]" % (name,)) print("") print(" Remove a principal's events and contacts from the calendar server") print(" Future events are declined or cancelled") print("") print("options:") print(" -h --help: print this help and exit") print(" -f --config : Specify caldavd.plist configuration path") print(" -n --dry-run: calculate how many events and contacts to purge, but do not purge data") print(" -v --verbose: print progress information") print(" -D --debug: debug logging") print("") if e: sys.stderr.write("%s\n" % (e,)) sys.exit(64) else: sys.exit(0) @classmethod def main(cls): try: (optargs, args) = getopt( sys.argv[1:], "Df:hnv", [ "dry-run", "config=", "help", "verbose", "debug", ], ) except GetoptError, e: cls.usage(e) # # Get configuration # configFileName = None dryrun = False verbose = False debug = False for opt, arg in optargs: if opt in ("-h", "--help"): cls.usage() elif opt in ("-v", "--verbose"): verbose = True elif opt in ("-D", "--debug"): debug = True elif opt in ("-n", "--dry-run"): dryrun = True elif opt in ("-f", "--config"): configFileName = arg else: raise NotImplementedError(opt) # args is a list of uids cls.uids = args cls.dryrun = dryrun cls.verbose = verbose utilityMain( configFileName, cls, verbose=debug, ) @classmethod @inlineCallbacks def purgeUIDs(cls, store, directory, uids, verbose=False, dryrun=False, proxies=True, when=None): service = cls(store) service.directory = directory service.uids = uids service.verbose = verbose service.dryrun = dryrun service.proxies = proxies service.when = when result = yield service.doWork() returnValue(result) @inlineCallbacks def doWork(self): if self.directory is None: self.directory = self.store.directoryService() total = 0 for uid in self.uids: count = yield self._purgeUID(uid) total += count if self.verbose: amount = "%d event%s" % (total, "s" if total > 1 else "") if 
self.dryrun: print("Would have modified or deleted %s" % (amount,)) else: print("Modified or deleted %s" % (amount,)) returnValue(total) @inlineCallbacks def _purgeUID(self, uid): if self.when is None: self.when = DateTime.getNowUTC() cuas = set(( "urn:uuid:{}".format(uid), "urn:x-uid:{}".format(uid) )) # See if calendar home is provisioned txn = self.store.newTransaction() storeCalHome = yield txn.calendarHomeWithUID(uid) calHomeProvisioned = storeCalHome is not None # Always, unshare collections, remove notifications if calHomeProvisioned: yield self._cleanHome(txn, storeCalHome) yield txn.commit() count = 0 if calHomeProvisioned: count = yield self._cancelEvents(txn, uid, cuas) # Remove empty calendar collections (and calendar home if no more # calendars) yield self._removeCalendarHome(uid) # Remove VCards count += (yield self._removeAddressbookHome(uid)) if self.proxies and not self.dryrun: if self.verbose: print("Deleting any proxy assignments") yield self._purgeProxyAssignments(self.store, uid) returnValue(count) @inlineCallbacks def _cleanHome(self, txn, storeCalHome): # Process shared and shared-to-me calendars children = list((yield storeCalHome.children())) for child in children: if self.verbose: if self.dryrun: print("Would unshare: %s" % (child.name(),)) else: print("Unsharing: %s" % (child.name(),)) if not self.dryrun: yield child.unshare() if not self.dryrun: yield storeCalHome.removeUnacceptedShares() notificationHome = yield txn.notificationsWithUID(storeCalHome.uid()) if notificationHome is not None: yield notificationHome.purge() @inlineCallbacks def _cancelEvents(self, txn, uid, cuas): # Anything in the past is left alone whenString = self.when.getText() query_filter = caldavxml.Filter( caldavxml.ComponentFilter( caldavxml.ComponentFilter( caldavxml.TimeRange(start=whenString,), name=("VEVENT",), ), name="VCALENDAR", ) ) query_filter = Filter(query_filter) count = 0 txn = self.store.newTransaction() storeCalHome = yield 
txn.calendarHomeWithUID(uid) calendarNames = yield storeCalHome.listCalendars() yield txn.commit() for calendarName in calendarNames: txn = self.store.newTransaction(authz_uid=uid) storeCalHome = yield txn.calendarHomeWithUID(uid) calendar = yield storeCalHome.calendarWithName(calendarName) allChildNames = [] futureChildNames = set() # Only purge owned calendars if calendar.owned(): # all events for childName in (yield calendar.listCalendarObjects()): allChildNames.append(childName) # events matching filter for childName, _ignore_childUid, _ignore_childType in (yield calendar.search(query_filter)): futureChildNames.add(childName) yield txn.commit() for childName in allChildNames: txn = self.store.newTransaction(authz_uid=uid) storeCalHome = yield txn.calendarHomeWithUID(uid) calendar = yield storeCalHome.calendarWithName(calendarName) doScheduling = childName in futureChildNames try: childResource = yield calendar.calendarObjectWithName(childName) uri = "/calendars/__uids__/%s/%s/%s" % (storeCalHome.uid(), calendar.name(), childName) incrementCount = self.dryrun if self.verbose: if self.dryrun: print("Would delete%s: %s" % (" with scheduling" if doScheduling else "", uri,)) else: print("Deleting%s: %s" % (" with scheduling" if doScheduling else "", uri,)) if not self.dryrun: retry = False try: yield childResource.purge(implicitly=doScheduling) incrementCount = True except Exception, e: print("Exception deleting %s: %s" % (uri, str(e))) retry = True if retry and doScheduling: # Try again with implicit scheduling off print("Retrying deletion of %s with scheduling turned off" % (uri,)) try: yield childResource.purge(implicitly=False) incrementCount = True except Exception, e: print("Still couldn't delete %s even with scheduling turned off: %s" % (uri, str(e))) if incrementCount: count += 1 # Commit yield txn.commit() except Exception, e: # Abort yield txn.abort() raise e returnValue(count) @inlineCallbacks def _removeCalendarHome(self, uid): try: txn = 
self.store.newTransaction(authz_uid=uid) # Remove empty calendar collections (and calendar home if no more # calendars) storeCalHome = yield txn.calendarHomeWithUID(uid) if storeCalHome is not None: calendars = list((yield storeCalHome.calendars())) calendars.extend(list((yield storeCalHome.calendars(onlyInTrash=True)))) remainingCalendars = len(calendars) for calColl in calendars: if len(list((yield calColl.calendarObjects()))) == 0: remainingCalendars -= 1 calendarName = calColl.name() if self.verbose: if self.dryrun: print("Would delete calendar: %s" % (calendarName,)) else: print("Deleting calendar: %s" % (calendarName,)) if not self.dryrun: if calColl.owned(): yield storeCalHome.removeChildWithName(calendarName, useTrash=False) else: yield calColl.unshare() if not remainingCalendars: if self.verbose: if self.dryrun: print("Would delete calendar home") else: print("Deleting calendar home") if not self.dryrun: # Queue a job to delete the calendar home after any scheduling operations # are complete notBefore = ( datetime.datetime.utcnow() + datetime.timedelta(seconds=config.AutomaticPurging.HomePurgeDelaySeconds) ) yield txn.enqueue( PrincipalPurgeHomeWork, homeResourceID=storeCalHome.id(), notBefore=notBefore ) # Also mark the home as purging so it won't be looked at again during # purge polling yield storeCalHome.purge() # Commit yield txn.commit() except Exception, e: # Abort yield txn.abort() raise e @inlineCallbacks def _removeAddressbookHome(self, uid): count = 0 txn = self.store.newTransaction(authz_uid=uid) try: # Remove VCards storeAbHome = yield txn.addressbookHomeWithUID(uid) if storeAbHome is not None: for abColl in list((yield storeAbHome.addressbooks())): for card in list((yield abColl.addressbookObjects())): cardName = card.name() if self.verbose: uri = "/addressbooks/__uids__/%s/%s/%s" % (uid, abColl.name(), cardName) if self.dryrun: print("Would delete: %s" % (uri,)) else: print("Deleting: %s" % (uri,)) if not self.dryrun: yield card.remove() 
                        count += 1

                    abName = abColl.name()
                    if self.verbose:
                        if self.dryrun:
                            print("Would delete addressbook: %s" % (abName,))
                        else:
                            print("Deleting addressbook: %s" % (abName,))
                    if not self.dryrun:
                        # Also remove the addressbook collection itself
                        if abColl.owned():
                            yield storeAbHome.removeChildWithName(abName, useTrash=False)
                        else:
                            yield abColl.unshare()

                if self.verbose:
                    if self.dryrun:
                        print("Would delete addressbook home")
                    else:
                        print("Deleting addressbook home")
                if not self.dryrun:
                    yield storeAbHome.remove()

            # Commit
            yield txn.commit()
        except Exception, e:
            # Abort
            yield txn.abort()
            raise e

        returnValue(count)

    @inlineCallbacks
    def _purgeProxyAssignments(self, store, uid):
        txn = store.newTransaction()
        for readWrite in (True, False):
            yield txn.removeDelegates(uid, readWrite)
            yield txn.removeDelegateGroups(uid, readWrite)
        yield txn.commit()


calendarserver-9.1+dfsg/calendarserver/tools/push.py

#!/usr/bin/env python
##
# Copyright (c) 2012-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

from calendarserver.tools.cmdline import utilityMain, WorkerService
from argparse import ArgumentParser
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks
from twext.who.idirectory import RecordType

import time

log = Logger()


class DisplayAPNSubscriptions(WorkerService):

    users = []

    def doWork(self):
        rootResource = self.rootResource()
        directory = rootResource.getDirectory()
        return displayAPNSubscriptions(
            self.store, directory, rootResource, self.users
        )


def main():
    parser = ArgumentParser(description='Display Apple Push Notification subscriptions')
    parser.add_argument('-f', '--config', dest='configFileName',
                        metavar='CONFIGFILE', help='caldavd.plist configuration file path')
    parser.add_argument('-d', '--debug', action='store_true', help='show debug logging')
    parser.add_argument('user', help='one or more users to display', nargs='+')  # Required
    args = parser.parse_args()

    DisplayAPNSubscriptions.users = args.user

    utilityMain(
        args.configFileName,
        DisplayAPNSubscriptions,
        verbose=args.debug,
    )


@inlineCallbacks
def displayAPNSubscriptions(store, directory, root, users):
    for user in users:
        # Note: with print_function in effect, a bare "print" statement is a
        # no-op expression, so print an empty string for the blank line.
        print("")
        record = yield directory.recordWithShortName(RecordType.user, user)
        if record is not None:
            print("User %s (%s)..."
                  % (user, record.uid))

            txn = store.newTransaction(label="Display APN Subscriptions")
            subscriptions = (yield txn.apnSubscriptionsBySubscriber(record.uid))
            (yield txn.commit())

            if subscriptions:
                byKey = {}
                for apnrecord in subscriptions:
                    byKey.setdefault(apnrecord.resourceKey, []).append(apnrecord)

                for key, apnsrecords in byKey.iteritems():
                    print("")
                    protocol, _ignore_host, path = key.strip("/").split("/", 2)
                    resource = {
                        "CalDAV": "calendar",
                        "CardDAV": "addressbook",
                    }[protocol]
                    if "/" in path:
                        uid, collection = path.split("/")
                    else:
                        uid = path
                        collection = None
                    record = yield directory.recordWithUID(uid)
                    user = record.shortNames[0]
                    if collection:
                        print("...is subscribed to a share from %s's %s home" % (user, resource),)
                    else:
                        print("...is subscribed to %s's %s home" % (user, resource),)
                    # print("  (key: %s)\n" % (key,))
                    print("with %d device(s):" % (len(apnsrecords),))
                    # Avoid shadowing the list being iterated with its own
                    # loop variable name.
                    for apn in apnsrecords:
                        print(" %s\n   '%s' from %s\n   %s" % (
                            apn.token, apn.userAgent, apn.ipAddr,
                            time.strftime(
                                "on %a, %d %b %Y at %H:%M:%S %z(%Z)",
                                time.localtime(apn.modified)
                            )
                        ))
            else:
                print(" ...is not subscribed to anything.")
        else:
            print("User %s not found" % (user,))


calendarserver-9.1+dfsg/calendarserver/tools/resources.py

#!/usr/bin/env python
##
# Copyright (c) 2006-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

__all__ = [
    "migrateResources",
]

from getopt import getopt, GetoptError
import os
import sys

from calendarserver.tools.cmdline import utilityMain, WorkerService
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks, returnValue
from txdav.who.directory import CalendarDirectoryRecordMixin
from twext.who.directory import DirectoryRecord as BaseDirectoryRecord
from txdav.who.idirectory import RecordType

log = Logger()


class ResourceMigrationService(WorkerService):

    @inlineCallbacks
    def doWork(self):
        try:
            from txdav.who.opendirectory import (
                DirectoryService as OpenDirectoryService
            )
        except ImportError:
            returnValue(None)
        sourceService = OpenDirectoryService()
        sourceService.recordType = RecordType
        destService = self.store.directoryService()
        yield migrateResources(sourceService, destService)


def usage(e=None):
    # Accept an optional error so the GetoptError handler in main() can
    # call usage(e); the previous zero-argument signature made that call fail.
    name = os.path.basename(sys.argv[0])
    print("usage: %s [options]" % (name,))
    print("")
    print("  Migrates resources and locations from OD to Calendar Server")
    print("")
    print("options:")
    print("  -h --help: print this help and exit")
    print("  -f --config: Specify caldavd.plist configuration path")
    print("  -v --verbose: print debugging information")
    print("")
    if e:
        sys.stderr.write("%s\n" % (e,))
        sys.exit(64)
    else:
        sys.exit(0)


def main():
    try:
        (optargs, _ignore_args) = getopt(
            sys.argv[1:], "hf:v", [
                "help",
                "config=",
                "verbose",
            ],
        )
    except GetoptError, e:
        usage(e)

    #
    # Get configuration
    #
    configFileName = None
    verbose = False

    for opt, arg in optargs:
        if opt in ("-h", "--help"):
            usage()
        elif opt in ("-f", "--config"):
            configFileName = arg
        elif opt in ("-v", "--verbose"):
            # Previously documented in usage() but never parsed.
            verbose = True
        else:
            raise NotImplementedError(opt)

    utilityMain(configFileName, ResourceMigrationService, verbose=verbose)


class DirectoryRecord(BaseDirectoryRecord, CalendarDirectoryRecordMixin):
    pass


@inlineCallbacks
def migrateResources(sourceService, destService, verbose=False):
    """
    Fetch all the locations and resources from sourceService that are
    not already in destService and copy them into destService.
""" destRecords = [] for recordType in ( RecordType.resource, RecordType.location, ): records = yield sourceService.recordsWithRecordType(recordType) for sourceRecord in records: destRecord = yield destService.recordWithUID(sourceRecord.uid) if destRecord is None: if verbose: print( "Migrating {recordType} {uid}".format( recordType=recordType.name, uid=sourceRecord.uid ) ) fields = sourceRecord.fields.copy() fields[destService.fieldName.recordType] = destService.recordType.lookupByName(recordType.name) # Only interested in these fields: fn = destService.fieldName interestingFields = [ fn.recordType, fn.shortNames, fn.uid, fn.fullNames, fn.guid ] for key in fields.keys(): if key not in interestingFields: del fields[key] destRecord = DirectoryRecord(destService, fields) destRecords.append(destRecord) if destRecords: yield destService.updateRecords(destRecords, create=True) if __name__ == "__main__": main() calendarserver-9.1+dfsg/calendarserver/tools/shell/000077500000000000000000000000001315003562600225525ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/tools/shell/__init__.py000066400000000000000000000012051315003562600246610ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Data store interactive shell. 
""" calendarserver-9.1+dfsg/calendarserver/tools/shell/cmd.py000066400000000000000000000527561315003562600237060ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Data store commands. """ __all__ = [ "UsageError", "UnknownArguments", "CommandsBase", "Commands", ] from getopt import getopt from twext.python.log import Logger from twisted.internet.defer import succeed from twisted.internet.defer import inlineCallbacks, returnValue from twisted.conch.manhole import ManholeInterpreter from txdav.common.icommondatastore import NotFoundError from calendarserver.version import version from calendarserver.tools.tables import Table from calendarserver.tools.purge import PurgePrincipalService from calendarserver.tools.shell.vfs import Folder, RootFolder from calendarserver.tools.shell.directory import findRecords, summarizeRecords, recordInfo log = Logger() class UsageError(Exception): """ Usage error. """ class UnknownArguments(UsageError): """ Unknown arguments. """ def __init__(self, arguments): UsageError.__init__(self, "Unknown arguments: %s" % (arguments,)) self.arguments = arguments class InsufficientArguments(UsageError): """ Insufficient arguments. """ def __init__(self): UsageError.__init__(self, "Insufficient arguments.") class CommandsBase(object): """ Base class for commands. @ivar protocol: a protocol for parsing the incoming command line. 
@type protocol: L{calendarserver.tools.shell.terminal.ShellProtocol} """ def __init__(self, protocol): self.protocol = protocol self.wd = RootFolder(protocol.service) @property def terminal(self): return self.protocol.terminal # # Utilities # def documentationForCommand(self, command): """ @return: the documentation for the given C{command} as a string. """ m = getattr(self, "cmd_%s" % (command,), None) if m: doc = m.__doc__.split("\n") # Throw out first and last line if it's empty if doc: if not doc[0].strip(): doc.pop(0) if not doc[-1].strip(): doc.pop() if doc: # Get length of indentation i = len(doc[0]) - len(doc[0].lstrip()) result = [] for line in doc: result.append(line[i:]) return "\n".join(result) else: self.terminal.write("(No documentation available for %s)\n" % (command,)) else: raise NotFoundError("Unknown command: %s" % (command,)) def getTarget(self, tokens, wdFallback=False): """ Pop's the first token from tokens and locates the File indicated by that token. @return: a C{File}. """ if tokens: return self.wd.locate(tokens.pop(0).split("/")) else: if wdFallback: return succeed(self.wd) else: return succeed(None) @inlineCallbacks def getTargets(self, tokens, wdFallback=False): """ For each given C{token}, locate a File to operate on. @return: iterable of C{File} objects. """ if tokens: result = [] for token in tokens: try: target = (yield self.wd.locate(token.split("/"))) except NotFoundError: raise UsageError("No such target: %s" % (token,)) result.append(target) returnValue(result) else: if wdFallback: returnValue((self.wd,)) else: returnValue(()) @inlineCallbacks def directoryRecordWithID(self, id): """ Obtains a directory record corresponding to the given C{id}. C{id} is assumed to be a record UID. For convenience, may also take the form C{type:name}, where C{type} is a record type and C{name} is a record short name. 
@return: an C{IDirectoryRecord} """ directory = self.protocol.service.directory record = yield directory.recordWithUID(id) if not record: # Try type:name form try: recordType, shortName = id.split(":") except ValueError: pass else: record = yield directory.recordWithShortName(recordType, shortName) returnValue(record) def commands(self, showHidden=False): """ @return: an iterable of C{(name, method)} tuples, where C{name} is the name of the command and C{method} is the method that implements it. """ for attr in dir(self): if attr.startswith("cmd_"): m = getattr(self, attr) if showHidden or not hasattr(m, "hidden"): yield (attr[4:], m) @staticmethod def complete(word, items): """ List completions for the given C{word} from the given C{items}. Completions are the remaining portions of words in C{items} that start with C{word}. For example, if C{"foobar"} and C{"foo"} are in C{items}, then C{""} and C{"bar"} are completions when C{word} C{"foo"}. @return: an iterable of completions. """ for item in items: if item.startswith(word): yield item[len(word):] def complete_commands(self, word): """ @return: an iterable of command name completions. """ def complete(showHidden): return self.complete( word, (name for name, method in self.commands(showHidden=showHidden)) ) completions = tuple(complete(False)) # If no completions are found, try hidden commands. if not completions: completions = complete(True) return completions @inlineCallbacks def complete_files(self, tokens, filter=None): """ @return: an iterable of C{File} path completions. 
""" if filter is None: filter = lambda item: True if tokens: token = tokens[-1] i = token.rfind("/") if i == -1: # No "/" in token base = self.wd word = token else: base = (yield self.wd.locate(token[:i].split("/"))) word = token[i + 1:] else: base = self.wd word = "" files = ( entry.toString() for entry in (yield base.list()) if filter(entry) ) if len(tokens) == 0: returnValue(files) else: returnValue(self.complete(word, files)) class Commands(CommandsBase): """ Data store commands. """ # # Basic CLI tools # def cmd_exit(self, tokens): """ Exit the shell. usage: exit """ if tokens: raise UnknownArguments(tokens) self.protocol.exit() def cmd_help(self, tokens): """ Show help. usage: help [command] """ if tokens: command = tokens.pop(0) else: command = None if tokens: raise UnknownArguments(tokens) if command: self.terminal.write(self.documentationForCommand(command)) self.terminal.nextLine() else: self.terminal.write("Available commands:\n") result = [] max_len = 0 for name, m in self.commands(): for line in m.__doc__.split("\n"): line = line.strip() if line: doc = line break else: doc = "(no info available)" if len(name) > max_len: max_len = len(name) result.append((name, doc)) format = " %%%ds - %%s\n" % (max_len,) for info in sorted(result): self.terminal.write(format % (info)) def complete_help(self, tokens): if len(tokens) == 0: return (name for name, method in self.commands()) elif len(tokens) == 1: return self.complete_commands(tokens[0]) else: return () def cmd_emulate(self, tokens): """ Emulate editor behavior. 
The only correct argument is: emacs Other choices include: none usage: emulate editor """ if not tokens: if self.protocol.emulate: self.terminal.write("Emulating %s.\n" % (self.protocol.emulate,)) else: self.terminal.write("Emulation disabled.\n") return editor = tokens.pop(0).lower() if tokens: raise UnknownArguments(tokens) if editor == "none": self.terminal.write("Disabling emulation.\n") editor = None elif editor in self.protocol.emulation_modes: self.terminal.write("Emulating %s.\n" % (editor,)) else: raise UsageError("Unknown editor: %s" % (editor,)) self.protocol.emulate = editor # FIXME: Need to update key registrations cmd_emulate.hidden = "incomplete" def complete_emulate(self, tokens): if len(tokens) == 0: return self.protocol.emulation_modes elif len(tokens) == 1: return self.complete(tokens[0], self.protocol.emulation_modes) else: return () def cmd_log(self, tokens): """ Enable logging. usage: log [file] """ if hasattr(self, "_logFile"): self.terminal.write("Already logging to file: %s\n" % (self._logFile,)) return if tokens: fileName = tokens.pop(0) else: fileName = "/tmp/shell.log" if tokens: raise UnknownArguments(tokens) from twisted.python.log import startLogging try: f = open(fileName, "w") except (IOError, OSError), e: self.terminal.write("Unable to open file %s: %s\n" % (fileName, e)) return startLogging(f) self._logFile = fileName cmd_log.hidden = "debug tool" def cmd_version(self, tokens): """ Print version. usage: version """ if tokens: raise UnknownArguments(tokens) self.terminal.write("%s\n" % (version,)) # # Filesystem tools # def cmd_pwd(self, tokens): """ Print working folder. usage: pwd """ if tokens: raise UnknownArguments(tokens) self.terminal.write("%s\n" % (self.wd,)) @inlineCallbacks def cmd_cd(self, tokens): """ Change working folder. 
usage: cd [folder] """ if tokens: dirname = tokens.pop(0) else: return if tokens: raise UnknownArguments(tokens) wd = (yield self.wd.locate(dirname.split("/"))) if not isinstance(wd, Folder): raise NotFoundError("Not a folder: %s" % (wd,)) # log.info("wd -> {wd}", wd=wd) self.wd = wd @inlineCallbacks def complete_cd(self, tokens): returnValue((yield self.complete_files( tokens, filter=lambda item: True # issubclass(item[0], Folder) ))) @inlineCallbacks def cmd_ls(self, tokens): """ List target. usage: ls [target ...] """ targets = (yield self.getTargets(tokens, wdFallback=True)) multiple = len(targets) > 0 for target in targets: entries = sorted((yield target.list()), key=lambda e: e.fileName) # # FIXME: this can be ugly if, for example, there are zillions # of entries to output. Paging would be good. # table = Table() for entry in entries: table.addRow(entry.toFields()) if multiple: self.terminal.write("%s:\n" % (target,)) if table.rows: table.printTable(self.terminal) self.terminal.nextLine() complete_ls = CommandsBase.complete_files @inlineCallbacks def cmd_info(self, tokens): """ Print information about a target. usage: info [target] """ target = (yield self.getTarget(tokens, wdFallback=True)) if tokens: raise UnknownArguments(tokens) description = (yield target.describe()) self.terminal.write(description) self.terminal.nextLine() complete_info = CommandsBase.complete_files @inlineCallbacks def cmd_cat(self, tokens): """ Show contents of target. usage: cat target [target ...] """ targets = (yield self.getTargets(tokens)) if not targets: raise InsufficientArguments() for target in targets: if hasattr(target, "text"): text = (yield target.text()) self.terminal.write(text) complete_cat = CommandsBase.complete_files @inlineCallbacks def cmd_rm(self, tokens): """ Remove target. usage: rm target [target ...] 
""" options, tokens = getopt(tokens, "", ["no-implicit"]) implicit = True for option, _ignore_value in options: if option == "--no-implicit": # Not in docstring; this is really dangerous. implicit = False else: raise AssertionError("We should't be here.") targets = (yield self.getTargets(tokens)) if not targets: raise InsufficientArguments() for target in targets: if hasattr(target, "delete"): target.delete(implicit=implicit) else: self.terminal.write("Can not delete read-only target: %s\n" % (target,)) cmd_rm.hidden = "Incomplete" complete_rm = CommandsBase.complete_files # # Principal tools # @inlineCallbacks def cmd_find_principals(self, tokens): """ Search for matching principals usage: find_principal search_term """ if not tokens: raise UsageError("No search term") directory = self.protocol.service.directory records = (yield findRecords(directory, tokens)) if records: self.terminal.write((yield summarizeRecords(directory, records))) else: self.terminal.write("No matching principals found.") self.terminal.nextLine() @inlineCallbacks def cmd_print_principal(self, tokens): """ Print information about a principal. usage: print_principal principal_id """ if tokens: id = tokens.pop(0) else: raise UsageError("Principal ID required") if tokens: raise UnknownArguments(tokens) directory = self.protocol.service.directory record = yield self.directoryRecordWithID(id) if record: self.terminal.write((yield recordInfo(directory, record))) else: self.terminal.write("No such principal.") self.terminal.nextLine() # # Data purge tools # @inlineCallbacks def cmd_purge_principals(self, tokens): """ Purge data associated principals. usage: purge_principals principal_id [principal_id ...] 
""" dryRun = True completely = False doimplicit = True directory = self.protocol.service.directory records = [] for id in tokens: record = yield self.directoryRecordWithID(id) records.append(record) if not record: self.terminal.write("Unknown UID: %s\n" % (id,)) if None in records: self.terminal.write("Aborting.\n") return if dryRun: toPurge = "to purge" else: toPurge = "purged" total = 0 for record in records: count, _ignore_assignments = (yield PurgePrincipalService.purgeUIDs( self.protocol.service.store, directory, (record.uid,), verbose=False, dryrun=dryRun, completely=completely, doimplicit=doimplicit, )) total += count self.terminal.write( "%d events %s for UID %s.\n" % (count, toPurge, record.uid) ) self.terminal.write( "%d total events %s.\n" % (total, toPurge) ) cmd_purge_principals.hidden = "incomplete" # # Sharing # @inlineCallbacks def cmd_share(self, tokens): """ Share a resource with a principal. usage: share mode principal_id target [target ...] mode: r (read) or rw (read/write) """ if len(tokens) < 3: raise InsufficientArguments() mode = tokens.pop(0) principalID = tokens.pop(0) record = yield self.directoryRecordWithID(principalID) if not record: self.terminal.write("Principal not found: %s\n" % (principalID,)) targets = yield self.getTargets(tokens) if mode == "r": mode = None elif mode == "rw": mode = None else: raise UsageError("Unknown mode: %s" % (mode,)) for _ignore_target in targets: raise NotImplementedError() cmd_share.hidden = "incomplete" # # Python prompt, for the win # def cmd_python(self, tokens): """ Switch to a python prompt. usage: python """ if tokens: raise UnknownArguments(tokens) if not hasattr(self, "_interpreter"): # Bring in some helpful local variables. 
            from txdav.common.datastore.sql_tables import schema
            from twext.enterprise.dal import syntax

            localVariables = dict(
                self=self,
                store=self.protocol.service.store,
                schema=schema,
            )

            # FIXME: Use syntax.__all__, which needs to be defined
            for key, value in syntax.__dict__.items():
                if not key.startswith("_"):
                    localVariables[key] = value

            class Handler(object):
                def addOutput(innerSelf, bytes, async=False):  # @NoSelf
                    """
                    This is a delegate method, called by ManholeInterpreter.
                    """
                    if async:
                        self.terminal.write("... interrupted for Deferred ...\n")
                    self.terminal.write(bytes)
                    if async:
                        self.terminal.write("\n")
                        self.protocol.drawInputLine()

            self._interpreter = ManholeInterpreter(Handler(), localVariables)

        def evalSomePython(line):
            if line == "exit":
                # Return to normal command mode.
                del self.protocol.lineReceived
                del self.protocol.ps
                try:
                    del self.protocol.pn
                except AttributeError:
                    pass
                self.protocol.drawInputLine()
                return

            more = self._interpreter.push(line)
            self.protocol.pn = bool(more)

            lw = self.terminal.lastWrite
            if not (lw.endswith("\n") or lw.endswith("\x1bE")):
                self.terminal.write("\n")
            self.protocol.drawInputLine()

        self.protocol.lineReceived = evalSomePython
        self.protocol.ps = (">>> ", "... ")

    cmd_python.hidden = "debug tool"

    #
    # SQL prompt, for not as winning
    #

    def cmd_sql(self, tokens):
        """
        Switch to an SQL prompt.

        usage: sql
        """
        if tokens:
            raise UnknownArguments(tokens)

        raise NotImplementedError("Command not implemented")

    cmd_sql.hidden = "not implemented"

    #
    # Test tools
    #

    def cmd_raise(self, tokens):
        """
        Raises an exception.

        usage: raise [message ...]
        """
        raise RuntimeError(" ".join(tokens))

    cmd_raise.hidden = "test tool"

    def cmd_reload(self, tokens):
        """
        Reloads code.
        usage: reload
        """
        if tokens:
            raise UnknownArguments(tokens)

        import calendarserver.tools.shell.vfs
        reload(calendarserver.tools.shell.vfs)

        import calendarserver.tools.shell.directory
        reload(calendarserver.tools.shell.directory)

        self.protocol.reloadCommands()

    cmd_reload.hidden = "test tool"

    def cmd_xyzzy(self, tokens):
        """
        """
        self.terminal.write("Nothing happens.")
        self.terminal.nextLine()

    cmd_xyzzy.hidden = ""
calendarserver-9.1+dfsg/calendarserver/tools/shell/directory.py
##
# Copyright (c) 2012-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
Directory tools
"""

__all__ = [
    "findRecords",
    "recordInfo",
    "recordBasicInfo",
    "recordGroupMembershipInfo",
    "recordProxyAccessInfo",
]

import operator

from twisted.internet.defer import succeed
from twisted.internet.defer import inlineCallbacks, returnValue

from calendarserver.tools.tables import Table


@inlineCallbacks
def findRecords(directory, terms):
    records = tuple((yield directory.recordsMatchingTokens(terms)))
    returnValue(sorted(records, key=operator.attrgetter("fullName")))


@inlineCallbacks
def recordInfo(directory, record):
    """
    Complete record information.
""" info = [] def add(name, subInfo): if subInfo: info.append("%s:" % (name,)) info.append(subInfo) add("Directory record", (yield recordBasicInfo(directory, record))) add("Group memberships", (yield recordGroupMembershipInfo(directory, record))) add("Proxy access", (yield recordProxyAccessInfo(directory, record))) returnValue("\n".join(info)) def recordBasicInfo(directory, record): """ Basic information for a record. """ table = Table() def add(name, value): if value: table.addRow((name, value)) add("Service", record.service) add("Record Type", record.recordType) for shortName in record.shortNames: add("Short Name", shortName) add("GUID", record.guid) add("Full Name", record.fullName) add("First Name", record.firstName) add("Last Name", record.lastName) try: for email in record.emailAddresses: add("Email Address", email) except AttributeError: pass try: for cua in record.calendarUserAddresses: add("Calendar User Address", cua) except AttributeError: pass add("Server ID", record.serverID) add("Enabled", record.enabled) add("Enabled for Calendar", record.hasCalendars) add("Enabled for Contacts", record.hasContacts) return succeed(table.toString()) def recordGroupMembershipInfo(directory, record): """ Group membership info for a record. """ rows = [] for group in record.groups(): rows.append((group.uid, group.shortNames[0], group.fullName)) if not rows: return succeed(None) rows = sorted( rows, key=lambda row: (row[1], row[2]) ) table = Table() table.addHeader(("UID", "Short Name", "Full Name")) for row in rows: table.addRow(row) return succeed(table.toString()) @inlineCallbacks def recordProxyAccessInfo(directory, record): """ Group membership info for a record. """ # FIXME: This proxy finding logic should be in DirectoryRecord. def meAndMyGroups(record=record, groups=set((record,))): for group in record.groups(): groups.add(group) meAndMyGroups(group, groups) return groups # FIXME: This module global is really gross. 
    from twistedcaldav.directory.calendaruserproxy import ProxyDBService

    rows = []
    proxyInfoSeen = set()
    for record in meAndMyGroups():
        proxyUIDs = (yield ProxyDBService.getMemberships(record.uid))

        for proxyUID in proxyUIDs:
            # These are of the form:
            #     F153A05B-FF27-4B6C-BD6D-D1239D0082B0#calendar-proxy-read
            # I don't know how to get DirectoryRecord objects for the proxyUID
            # here, so, let's cheat for now.
            proxyUID, proxyType = proxyUID.split("#")
            if (proxyUID, proxyType) not in proxyInfoSeen:
                proxyRecord = yield directory.recordWithUID(proxyUID)
                rows.append((
                    proxyUID,
                    proxyRecord.recordType,
                    proxyRecord.shortNames[0],
                    proxyRecord.fullName,
                    proxyType,
                ))
                proxyInfoSeen.add((proxyUID, proxyType))

    if not rows:
        returnValue(None)

    rows = sorted(
        rows,
        key=lambda row: (row[1], row[2], row[4])
    )

    table = Table()
    table.addHeader(("UID", "Record Type", "Short Name", "Full Name", "Access"))
    for row in rows:
        table.addRow(row)

    returnValue(table.toString())


def summarizeRecords(directory, records):
    table = Table()

    table.addHeader((
        "UID",
        "Record Type",
        "Short Names",
        "Email Addresses",
        "Full Name",
    ))

    def formatItems(items):
        if items:
            return ", ".join(items)
        else:
            return None

    for record in records:
        table.addRow((
            record.uid,
            record.recordType,
            formatItems(record.shortNames),
            formatItems(record.emailAddresses),
            record.fullName,
        ))

    if table.rows:
        return succeed(table.toString())
    else:
        return succeed(None)
calendarserver-9.1+dfsg/calendarserver/tools/shell/terminal.py
##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

"""
Interactive shell for terminals.
"""

__all__ = [
    "usage",
    "ShellOptions",
    "ShellService",
    "ShellProtocol",
    "main",
]

import string
import os
import sys
import tty
import termios
from shlex import shlex

from twisted.python.failure import Failure
from twisted.python.text import wordWrap
from twisted.python.usage import Options, UsageError
from twisted.internet.defer import Deferred, succeed
from twisted.internet.defer import inlineCallbacks
from twisted.internet.stdio import StandardIO
from twisted.conch.recvline import HistoricRecvLine as ReceiveLineProtocol
from twisted.conch.insults.insults import ServerProtocol

from twext.python.log import Logger

from txdav.common.icommondatastore import NotFoundError

from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE

from calendarserver.tools.cmdline import utilityMain, WorkerService
from calendarserver.tools.shell.cmd import Commands, UsageError as CommandUsageError

log = Logger()


def usage(e=None):
    if e:
        print(e)
        print("")

    try:
        ShellOptions().opt_help()
    except SystemExit:
        pass

    if e:
        sys.exit(64)
    else:
        sys.exit(0)


class ShellOptions(Options):
    """
    Command line options for "calendarserver_shell".
""" synopsis = "\n".join( wordWrap( """ Usage: calendarserver_shell [options]\n """ + __doc__, int(os.environ.get("COLUMNS", "80")) ) ) optParameters = [ ["config", "f", DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."], ] def __init__(self): super(ShellOptions, self).__init__() class ShellService(WorkerService, object): """ A L{ShellService} collects all the information that a shell needs to run; when run, it invokes the shell on stdin/stdout. @ivar store: the calendar / addressbook store. @type store: L{txdav.idav.IDataStore} @ivar directory: the directory service, to look up principals' names @type directory: L{twistedcaldav.directory.idirectory.IDirectoryService} @ivar options: the command-line options used to create this shell service @type options: L{ShellOptions} @ivar reactor: the reactor under which this service is running @type reactor: L{IReactorTCP}, L{IReactorTime}, L{IReactorThreads} etc @ivar config: the configuration associated with this shell service. @type config: L{twistedcaldav.config.Config} """ def __init__(self, store, options, reactor, config): super(ShellService, self).__init__(store) self.directory = store.directoryService() self.options = options self.reactor = reactor self.config = config self.terminalFD = None self.protocol = None def doWork(self): """ Service startup. """ # Set up the terminal for interactive action self.terminalFD = sys.__stdin__.fileno() self._oldTerminalSettings = termios.tcgetattr(self.terminalFD) tty.setraw(self.terminalFD) self.protocol = ServerProtocol(lambda: ShellProtocol(self)) StandardIO(self.protocol) return succeed(None) def postStartService(self): """ Don't quit right away """ pass def stopService(self): """ Stop the service. """ # Restore terminal settings termios.tcsetattr(self.terminalFD, termios.TCSANOW, self._oldTerminalSettings) os.write(self.terminalFD, "\r\x1bc\r") class ShellProtocol(ReceiveLineProtocol): """ Data store shell protocol. 
@ivar service: a service representing the running shell @type service: L{ShellService} """ # FIXME: # * Received lines are being echoed; find out why and stop it. # * Backspace transposes characters in the terminal. ps = ("ds% ", "... ") emulation_modes = ("emacs", "none") def __init__(self, service, commandsClass=Commands): ReceiveLineProtocol.__init__(self) self.service = service self.inputLines = [] self.commands = commandsClass(self) self.activeCommand = None self.emulate = "emacs" def reloadCommands(self): # FIXME: doesn't work for alternative Commands classes passed # to __init__. self.terminal.write("Reloading commands class...\n") import calendarserver.tools.shell.cmd reload(calendarserver.tools.shell.cmd) self.commands = calendarserver.tools.shell.cmd.Commands(self) # # Input handling # def connectionMade(self): ReceiveLineProtocol.connectionMade(self) self.keyHandlers['\x03'] = self.handle_INT # Control-C self.keyHandlers['\x04'] = self.handle_EOF # Control-D self.keyHandlers['\x1c'] = self.handle_QUIT # Control-\ self.keyHandlers['\x0c'] = self.handle_FF # Control-L # self.keyHandlers['\t' ] = self.handle_TAB # Tab if self.emulate == "emacs": # EMACS key bindinds self.keyHandlers['\x10'] = self.handle_UP # Control-P self.keyHandlers['\x0e'] = self.handle_DOWN # Control-N self.keyHandlers['\x02'] = self.handle_LEFT # Control-B self.keyHandlers['\x06'] = self.handle_RIGHT # Control-F self.keyHandlers['\x01'] = self.handle_HOME # Control-A self.keyHandlers['\x05'] = self.handle_END # Control-E def observer(event): if not event["isError"]: return text = log.textFromEventDict(event) if text is None: return self.service.reactor.callFromThread(self.terminal.write, text) log.startLoggingWithObserver(observer) def handle_INT(self): return self.resetInputLine() def handle_EOF(self): if self.lineBuffer: if self.emulate == "emacs": self.handle_DELETE() else: self.terminal.write("\a") else: self.handle_QUIT() def handle_FF(self): """ Handle a "form feed" byte - 
generally used to request a screen refresh/redraw. """ # FIXME: Clear screen != redraw screen. return self.clearScreen() def handle_QUIT(self): return self.exit() def handle_TAB(self): return self.completeLine() # # Utilities # def clearScreen(self): """ Clear the display. """ self.terminal.eraseDisplay() self.terminal.cursorHome() self.drawInputLine() def resetInputLine(self): """ Reset the current input variables to their initial state. """ self.pn = 0 self.lineBuffer = [] self.lineBufferIndex = 0 self.terminal.nextLine() self.drawInputLine() @inlineCallbacks def completeLine(self): """ Perform auto-completion on the input line. """ # Tokenize the text before the cursor tokens = self.tokenize("".join(self.lineBuffer[:self.lineBufferIndex])) if tokens: if len(tokens) == 1 and self.lineBuffer[-1] in string.whitespace: word = "" else: word = tokens[-1] cmd = tokens.pop(0) else: word = cmd = "" if cmd and (tokens or word == ""): # Completing arguments m = getattr(self.commands, "complete_%s" % (cmd,), None) if not m: return try: completions = tuple((yield m(tokens))) except Exception, e: self.handleFailure(Failure(e)) return log.info("COMPLETIONS: {comp}", comp=completions) else: # Completing command name completions = tuple(self.commands.complete_commands(cmd)) if len(completions) == 1: for c in completions.__iter__().next(): self.characterReceived(c, True) # FIXME: Add a space only if we know we've fully completed the term. # self.characterReceived(" ", False) else: self.terminal.nextLine() for completion in completions: # FIXME Emitting these in columns would be swell self.terminal.write("%s%s\n" % (word, completion)) self.drawInputLine() def exit(self): """ Exit. """ self.terminal.loseConnection() self.service.reactor.stop() def handleFailure(self, f): """ Handle a failure raises in the interpreter by printing a traceback and resetting the input line. """ if self.lineBuffer: self.terminal.nextLine() self.terminal.write("Error: %s !!!" 
% (f.value,)) if not f.check(NotImplementedError, NotFoundError): log.info(f.getTraceback()) self.resetInputLine() # # Command dispatch # def lineReceived(self, line): if self.activeCommand is not None: self.inputLines.append(line) return tokens = self.tokenize(line) if tokens: cmd = tokens.pop(0) # print("Arguments: %r" % (tokens,)) m = getattr(self.commands, "cmd_%s" % (cmd,), None) if m: def handleUsageError(f): f.trap(CommandUsageError) self.terminal.write("%s\n" % (f.value,)) doc = self.commands.documentationForCommand(cmd) if doc: self.terminal.nextLine() self.terminal.write(doc) self.terminal.nextLine() def next(_): self.activeCommand = None if self.inputLines: line = self.inputLines.pop(0) self.lineReceived(line) d = self.activeCommand = Deferred() d.addCallback(lambda _: m(tokens)) if True: d.callback(None) else: # Add time to test callbacks self.service.reactor.callLater(4, d.callback, None) d.addErrback(handleUsageError) d.addCallback(lambda _: self.drawInputLine()) d.addErrback(self.handleFailure) d.addCallback(next) else: self.terminal.write("Unknown command: %s\n" % (cmd,)) self.drawInputLine() else: self.drawInputLine() @staticmethod def tokenize(line): """ Tokenize input line. 
        @return: an iterable of tokens
        """
        lexer = shlex(line)
        lexer.whitespace_split = True

        tokens = []

        while True:
            token = lexer.get_token()
            if not token:
                break
            tokens.append(token)

        return tokens


def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
    if reactor is None:
        from twisted.internet import reactor

    options = ShellOptions()
    try:
        options.parseOptions(argv[1:])
    except UsageError, e:
        usage(e)

    def makeService(store):
        from twistedcaldav.config import config
        return ShellService(store, options, reactor, config)

    print("Initializing shell...")

    utilityMain(options["config"], makeService, reactor)
calendarserver-9.1+dfsg/calendarserver/tools/shell/test/
calendarserver-9.1+dfsg/calendarserver/tools/shell/test/__init__.py
##
# Copyright (c) 2012-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
Data store interactive shell.
"""
calendarserver-9.1+dfsg/calendarserver/tools/shell/test/test_cmd.py
#!/usr/bin/env python
##
# Copyright (c) 2012-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
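The shell's `tokenize()` staticmethod above drives `shlex` by hand in whitespace-split mode, so quoted arguments survive as single tokens. A standalone sketch of that same lexing loop (the function name and sample inputs here are illustrative, not part of the shell):

```python
from shlex import shlex


def tokenize(line):
    # Split a command line on whitespace only, honoring quotes,
    # mirroring ShellProtocol.tokenize()'s use of shlex.
    lexer = shlex(line)
    lexer.whitespace_split = True

    tokens = []
    while True:
        token = lexer.get_token()
        if not token:  # shlex returns an empty token at end of input
            break
        tokens.append(token)
    return tokens
```

Because `whitespace_split` is set, punctuation such as `/` stays inside a token, which is what lets paths like `users/wsanchez` arrive at a command as one argument.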
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## import twisted.trial.unittest from twisted.internet.defer import inlineCallbacks from txdav.common.icommondatastore import NotFoundError from calendarserver.tools.shell.cmd import CommandsBase from calendarserver.tools.shell.terminal import ShellProtocol class TestCommandsBase(twisted.trial.unittest.TestCase): def setUp(self): self.protocol = ShellProtocol(None, commandsClass=CommandsBase) self.commands = self.protocol.commands @inlineCallbacks def test_getTargetNone(self): target = (yield self.commands.getTarget([], wdFallback=True)) self.assertEquals(target, self.commands.wd) target = (yield self.commands.getTarget([])) self.assertEquals(target, None) def test_getTargetMissing(self): return self.assertFailure(self.commands.getTarget(["/foo"]), NotFoundError) @inlineCallbacks def test_getTargetOne(self): target = (yield self.commands.getTarget(["users"])) match = (yield self.commands.wd.locate(["users"])) self.assertEquals(target, match) @inlineCallbacks def test_getTargetSome(self): target = (yield self.commands.getTarget(["users", "blah"])) match = (yield self.commands.wd.locate(["users"])) self.assertEquals(target, match) def test_commandsNone(self): allCommands = self.commands.commands() self.assertEquals(sorted(allCommands), []) allCommands = self.commands.commands(showHidden=True) self.assertEquals(sorted(allCommands), []) def test_commandsSome(self): protocol = ShellProtocol(None, commandsClass=SomeCommands) commands = protocol.commands allCommands = commands.commands() self.assertEquals( sorted(allCommands), [ ("a", commands.cmd_a), ("b", 
commands.cmd_b), ] ) allCommands = commands.commands(showHidden=True) self.assertEquals( sorted(allCommands), [ ("a", commands.cmd_a), ("b", commands.cmd_b), ("hidden", commands.cmd_hidden), ] ) def test_complete(self): items = ( "foo", "bar", "foobar", "baz", "quux", ) def c(word): return sorted(CommandsBase.complete(word, items)) self.assertEquals(c(""), sorted(items)) self.assertEquals(c("f"), ["oo", "oobar"]) self.assertEquals(c("foo"), ["", "bar"]) self.assertEquals(c("foobar"), [""]) self.assertEquals(c("foobars"), []) self.assertEquals(c("baz"), [""]) self.assertEquals(c("q"), ["uux"]) self.assertEquals(c("xyzzy"), []) def test_completeCommands(self): protocol = ShellProtocol(None, commandsClass=SomeCommands) commands = protocol.commands def c(word): return sorted(commands.complete_commands(word)) self.assertEquals(c(""), ["a", "b"]) self.assertEquals(c("a"), [""]) self.assertEquals(c("h"), ["idden"]) self.assertEquals(c("f"), []) @inlineCallbacks def _test_completeFiles(self, tests): protocol = ShellProtocol(None, commandsClass=SomeCommands) commands = protocol.commands def c(word): # One token d = commands.complete_files((word,)) d.addCallback(lambda c: sorted(c)) return d def d(word): # Multiple tokens d = commands.complete_files(("XYZZY", word)) d.addCallback(lambda c: sorted(c)) return d def e(word): # No tokens d = commands.complete_files(()) d.addCallback(lambda c: sorted(c)) return d for word, completions in tests: if word is None: self.assertEquals((yield e(word)), completions, "Completing %r" % (word,)) else: self.assertEquals((yield c(word)), completions, "Completing %r" % (word,)) self.assertEquals((yield d(word)), completions, "Completing %r" % (word,)) def test_completeFilesLevelOne(self): return self._test_completeFiles(( (None, ["groups/", "locations/", "resources/", "uids/", "users/"]), ("", ["groups/", "locations/", "resources/", "uids/", "users/"]), ("u", ["ids/", "sers/"]), ("g", ["roups/"]), ("gr", ["oups/"]), ("groups", ["/"]), )) def 
test_completeFilesLevelOneSlash(self): return self._test_completeFiles(( ("/", ["groups/", "locations/", "resources/", "uids/", "users/"]), ("/u", ["ids/", "sers/"]), ("/g", ["roups/"]), ("/gr", ["oups/"]), ("/groups", ["/"]), )) def test_completeFilesDirectory(self): return self._test_completeFiles(( ("users/", ["wsanchez", "admin"]), # FIXME: Look up users )) test_completeFilesDirectory.todo = "Doesn't work yet" def test_completeFilesLevelTwo(self): return self._test_completeFiles(( ("users/w", ["sanchez"]), # FIXME: Look up users? )) test_completeFilesLevelTwo.todo = "Doesn't work yet" class SomeCommands(CommandsBase): def cmd_a(self, tokens): pass def cmd_b(self, tokens): pass def cmd_hidden(self, tokens): pass cmd_hidden.hidden = "Hidden" calendarserver-9.1+dfsg/calendarserver/tools/shell/test/test_vfs.py000066400000000000000000000146071315003562600257500ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
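The completion behavior that `test_complete` above pins down is suffix-based: each candidate starting with the typed prefix contributes only the part left to type, and an exact match contributes the empty string. A minimal sketch of that observable contract (this mirrors what the tests assert about `CommandsBase.complete`, not its actual implementation):

```python
def complete(word, items):
    # For every candidate that starts with the typed prefix,
    # yield the suffix still left to type. An exact match
    # yields "" so callers can tell completion is finished.
    for item in items:
        if item.startswith(word):
            yield item[len(word):]


items = ("foo", "bar", "foobar", "baz", "quux")
```

Yielding suffixes rather than whole candidates is what lets the protocol feed each remaining character straight into `characterReceived()` when exactly one completion survives.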
## from twisted.trial.unittest import TestCase from twisted.internet.defer import succeed, inlineCallbacks # from twext.who.test.test_xml import xmlService # from txdav.common.datastore.test.util import buildStore from calendarserver.tools.shell.vfs import ListEntry from calendarserver.tools.shell.vfs import File, Folder # from calendarserver.tools.shell.vfs import UIDsFolder # from calendarserver.tools.shell.terminal import ShellService class TestListEntry(TestCase): def test_toString(self): self.assertEquals( ListEntry(None, File, "thingo").toString(), "thingo" ) self.assertEquals( ListEntry(None, File, "thingo", Foo="foo").toString(), "thingo" ) self.assertEquals( ListEntry(None, Folder, "thingo").toString(), "thingo/" ) self.assertEquals( ListEntry(None, Folder, "thingo", Foo="foo").toString(), "thingo/" ) def test_fieldNamesImplicit(self): # This test assumes File doesn't set list.fieldNames. assert not hasattr(File.list, "fieldNames") self.assertEquals( set(ListEntry(File(None, ()), File, "thingo").fieldNames), set(("Name",)) ) def test_fieldNamesExplicit(self): def fieldNames(fileClass): return ListEntry( fileClass(None, ()), fileClass, "thingo", Flavor="Coconut", Style="Hard" ) # Full list class MyFile1(File): def list(self): return succeed(()) list.fieldNames = ("Name", "Flavor") self.assertEquals(fieldNames(MyFile1).fieldNames, ("Name", "Flavor")) # Full list, different order class MyFile2(File): def list(self): return succeed(()) list.fieldNames = ("Flavor", "Name") self.assertEquals(fieldNames(MyFile2).fieldNames, ("Flavor", "Name")) # Omits Name, which is implicitly added class MyFile3(File): def list(self): return succeed(()) list.fieldNames = ("Flavor",) self.assertEquals(fieldNames(MyFile3).fieldNames, ("Name", "Flavor")) # Emtpy class MyFile4(File): def list(self): return succeed(()) list.fieldNames = () self.assertEquals(fieldNames(MyFile4).fieldNames, ("Name",)) def test_toFieldsImplicit(self): # This test assumes File doesn't set 
list.fieldNames. assert not hasattr(File.list, "fieldNames") # Name first, rest sorted by field name self.assertEquals( tuple( ListEntry( File(None, ()), File, "thingo", Flavor="Coconut", Style="Hard" ).toFields() ), ("thingo", "Coconut", "Hard") ) def test_toFieldsExplicit(self): def fields(fileClass): return tuple( ListEntry( fileClass(None, ()), fileClass, "thingo", Flavor="Coconut", Style="Hard" ).toFields() ) # Full list class MyFile1(File): def list(self): return succeed(()) list.fieldNames = ("Name", "Flavor") self.assertEquals(fields(MyFile1), ("thingo", "Coconut")) # Full list, different order class MyFile2(File): def list(self): return succeed(()) list.fieldNames = ("Flavor", "Name") self.assertEquals(fields(MyFile2), ("Coconut", "thingo")) # Omits Name, which is implicitly added class MyFile3(File): def list(self): return succeed(()) list.fieldNames = ("Flavor",) self.assertEquals(fields(MyFile3), ("thingo", "Coconut")) # Emtpy class MyFile4(File): def list(self): return succeed(()) list.fieldNames = () self.assertEquals(fields(MyFile4), ("thingo",)) class UIDsFolderTests(TestCase): """ L{UIDsFolder} contains all principals and is keyed by UID. """ # @inlineCallbacks # def setUp(self): # """ # Create a L{UIDsFolder}. # """ # directory = xmlService(self.mktemp()) # self.svc = ShellService( # store=(yield buildStore(self, None, directoryService=directory)), # directory=directory, # options=None, reactor=None, config=None # ) # self.folder = UIDsFolder(self.svc, ()) @inlineCallbacks def test_list(self): """ L{UIDsFolder.list} returns a L{Deferred} firing an iterable of L{ListEntry} objects, reflecting the directory information for all calendars and addressbooks created in the store. 
""" txn = self.svc.store.newTransaction() wsanchez = "6423F94A-6B76-4A3A-815B-D52CFD77935D" dreid = "5FF60DAD-0BDE-4508-8C77-15F0CA5C8DD1" yield txn.calendarHomeWithUID(wsanchez, create=True) yield txn.addressbookHomeWithUID(dreid, create=True) yield txn.commit() listing = list((yield self.folder.list())) self.assertEquals( [x.fields for x in listing], [ { "Record Type": "users", "Short Name": "wsanchez", "Full Name": "Wilfredo Sanchez", "Name": wsanchez }, { "Record Type": "users", "Short Name": "dreid", "Full Name": "David Reid", "Name": dreid }, ] ) test_list.todo = "setup() needs to be reimplemented" calendarserver-9.1+dfsg/calendarserver/tools/shell/vfs.py000066400000000000000000000544021315003562600237270ustar00rootroot00000000000000# -*- test-case-name: calendarserver.tools.shell.test.test_vfs -*- ## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Virtual file system for data store objects. 
""" __all__ = [ "ListEntry", "File", "Folder", "RootFolder", "UIDsFolder", "RecordFolder", "UsersFolder", "LocationsFolder", "ResourcesFolder", "GroupsFolder", "PrincipalHomeFolder", "CalendarHomeFolder", "CalendarFolder", "CalendarObject", "AddressBookHomeFolder", ] from cStringIO import StringIO from time import strftime, localtime from twisted.internet.defer import succeed from twisted.internet.defer import inlineCallbacks, returnValue from twext.python.log import Logger from txdav.common.icommondatastore import NotFoundError from twistedcaldav.ical import InvalidICalendarDataError from calendarserver.tools.tables import Table from calendarserver.tools.shell.directory import recordInfo log = Logger() class ListEntry(object): """ Information about a C{File} as returned by C{File.list()}. """ def __init__(self, parent, Class, Name, **fields): self.parent = parent # The class implementing list() self.fileClass = Class self.fileName = Name self.fields = fields fields["Name"] = Name def __str__(self): return self.toString() def __repr__(self): fields = self.fields.copy() del fields["Name"] if fields: fields = " %s" % (fields,) else: fields = "" return "<%s(%s): %r%s>" % ( self.__class__.__name__, self.fileClass.__name__, self.fileName, fields, ) def isFolder(self): return issubclass(self.fileClass, Folder) def toString(self): if self.isFolder(): return "%s/" % (self.fileName,) else: return self.fileName @property def fieldNames(self): if not hasattr(self, "_fieldNames"): if hasattr(self.parent.list, "fieldNames"): if "Name" in self.parent.list.fieldNames: self._fieldNames = tuple(self.parent.list.fieldNames) else: self._fieldNames = ("Name",) + tuple(self.parent.list.fieldNames) else: self._fieldNames = ["Name"] + sorted(n for n in self.fields if n != "Name") return self._fieldNames def toFields(self): try: return tuple(self.fields[fieldName] for fieldName in self.fieldNames) except KeyError, e: raise AssertionError( "Field %s is not in %r, defined by %s" % (e, 
self.fields.keys(), self.parent.__name__) ) class File(object): """ Object in virtual data hierarchy. """ def __init__(self, service, path): assert type(path) is tuple self.service = service self.path = path def __str__(self): return "/" + "/".join(self.path) def __repr__(self): return "<%s: %s>" % (self.__class__.__name__, self) def __eq__(self, other): if isinstance(other, File): return self.path == other.path else: return NotImplemented def describe(self): return succeed("%s (%s)" % (self, self.__class__.__name__)) def list(self): return succeed(( ListEntry(self, self.__class__, self.path[-1]), )) class Folder(File): """ Location in virtual data hierarchy. """ def __init__(self, service, path): File.__init__(self, service, path) self._children = {} self._childClasses = {} def __str__(self): if self.path: return "/" + "/".join(self.path) + "/" else: return "/" @inlineCallbacks def locate(self, path): if path and path[-1] == "": path.pop() if not path: returnValue(RootFolder(self.service)) name = path[0] if name: target = (yield self.child(name)) if len(path) > 1: target = (yield target.locate(path[1:])) else: target = (yield RootFolder(self.service).locate(path[1:])) returnValue(target) @inlineCallbacks def child(self, name): # FIXME: Move this logic to locate() # if not name: # return succeed(self) # if name == ".": # return succeed(self) # if name == "..": # path = self.path[:-1] # if not path: # path = "/" # return RootFolder(self.service).locate(path) if name in self._children: returnValue(self._children[name]) if name in self._childClasses: child = (yield self._childClasses[name](self.service, self.path + (name,))) self._children[name] = child returnValue(child) raise NotFoundError("Folder %r has no child %r" % (str(self), name)) def list(self): result = {} for name in self._children: result[name] = ListEntry(self, self._children[name].__class__, name) for name in self._childClasses: if name not in result: result[name] = ListEntry(self, 
self._childClasses[name], name) return succeed(result.itervalues()) class RootFolder(Folder): """ Root of virtual data hierarchy. Hierarchy:: / RootFolder uids/ UIDsFolder / PrincipalHomeFolder calendars/ CalendarHomeFolder / CalendarFolder CalendarObject addressbooks/ AddressBookHomeFolder / AddressBookFolder AddressBookObject users/ UsersFolder / PrincipalHomeFolder ... locations/ LocationsFolder / PrincipalHomeFolder ... resources/ ResourcesFolder / PrincipalHomeFolder ... """ def __init__(self, service): Folder.__init__(self, service, ()) self._childClasses["uids"] = UIDsFolder self._childClasses["users"] = UsersFolder self._childClasses["locations"] = LocationsFolder self._childClasses["resources"] = ResourcesFolder self._childClasses["groups"] = GroupsFolder class UIDsFolder(Folder): """ Folder containing all principals by UID. """ def child(self, name): return PrincipalHomeFolder(self.service, self.path + (name,), name) @inlineCallbacks def list(self): results = {} # FIXME: This should be the merged total of calendar homes and address book homes. # FIXME: Merge in directory UIDs also? # FIXME: Add directory info (eg. 
name) to list entry @inlineCallbacks def addResult(ignoredTxn, home): uid = home.uid() record = yield self.service.directory.recordWithUID(uid) if record: info = { "Record Type": record.recordType, "Short Name": record.shortNames[0], "Full Name": record.fullName, } else: info = {} results[uid] = ListEntry(self, PrincipalHomeFolder, uid, **info) yield self.service.store.withEachCalendarHomeDo(addResult) yield self.service.store.withEachAddressbookHomeDo(addResult) returnValue(results.itervalues()) class RecordFolder(Folder): def _recordForName(self, name): recordTypeAttr = "recordType_" + self.recordType recordType = getattr(self.service.directory, recordTypeAttr) return self.service.directory.recordWithShortName(recordType, name) def child(self, name): record = self._recordForName(name) if record is None: return Folder.child(self, name) return PrincipalHomeFolder( self.service, self.path + (name,), record.uid, record=record ) @inlineCallbacks def list(self): names = set() for record in (yield self.service.directory.recordsWithRecordType( self.recordType )): for shortName in record.shortNames: if shortName in names: continue names.add(shortName) yield ListEntry( self, PrincipalHomeFolder, shortName, **{ "UID": record.uid, "Full Name": record.fullName, } ) list.fieldNames = ("UID", "Full Name") class UsersFolder(RecordFolder): """ Folder containing all user principals by name. """ recordType = "users" class LocationsFolder(RecordFolder): """ Folder containing all location principals by name. """ recordType = "locations" class ResourcesFolder(RecordFolder): """ Folder containing all resource principals by name. """ recordType = "resources" class GroupsFolder(RecordFolder): """ Folder containing all group principals by name. """ recordType = "groups" class PrincipalHomeFolder(Folder): """ Folder containing everything related to a given principal. 
""" def __init__(self, service, path, uid, record=None): Folder.__init__(self, service, path) if record is None: # FIXME: recordWithUID returns a Deferred but we cannot return or yield it in an __init__ method record = self.service.directory.recordWithUID(uid) if record is not None: assert uid == record.uid self.uid = uid self.record = record @inlineCallbacks def _initChildren(self): if not hasattr(self, "_didInitChildren"): txn = self.service.store.newTransaction() try: if ( self.record is not None and self.service.config.EnableCalDAV and self.record.hasCalendars ): create = True else: create = False # Try assuming it exists home = (yield txn.calendarHomeWithUID(self.uid, create=False)) if home is None and create: # Doesn't exist, so create it in a different # transaction, to avoid having to commit the live # transaction. txnTemp = self.service.store.newTransaction() try: home = (yield txnTemp.calendarHomeWithUID(self.uid, create=True)) (yield txnTemp.commit()) # Fetch the home again. This time we expect it to be there. home = (yield txn.calendarHomeWithUID(self.uid, create=False)) assert home finally: (yield txn.abort()) if home: self._children["calendars"] = CalendarHomeFolder( self.service, self.path + ("calendars",), home, self.record, ) if ( self.record is not None and self.service.config.EnableCardDAV and self.record.enabledForAddressBooks ): create = True else: create = False # Again, assume it exists home = (yield txn.addressbookHomeWithUID(self.uid)) if not home and create: # Repeat the above dance. txnTemp = self.service.store.newTransaction() try: home = (yield txnTemp.addressbookHomeWithUID(self.uid, create=True)) (yield txnTemp.commit()) # Fetch the home again. This time we expect it to be there. 
home = (yield txn.addressbookHomeWithUID(self.uid, create=False)) assert home finally: (yield txn.abort()) if home: self._children["addressbooks"] = AddressBookHomeFolder( self.service, self.path + ("addressbooks",), home, self.record, ) finally: (yield txn.abort()) self._didInitChildren = True def _needsChildren(m): # @NoSelf def decorate(self, *args, **kwargs): d = self._initChildren() d.addCallback(lambda _: m(self, *args, **kwargs)) return d return decorate @_needsChildren def child(self, name): return Folder.child(self, name) @_needsChildren def list(self): return Folder.list(self) def describe(self): return recordInfo(self.service.directory, self.record) class CalendarHomeFolder(Folder): """ Calendar home folder. """ def __init__(self, service, path, home, record): Folder.__init__(self, service, path) self.home = home self.record = record @inlineCallbacks def child(self, name): calendar = (yield self.home.calendarWithName(name)) if calendar: returnValue(CalendarFolder(self.service, self.path + (name,), calendar)) else: raise NotFoundError("Calendar home %r has no calendar %r" % (self, name)) @inlineCallbacks def list(self): calendars = (yield self.home.calendars()) result = [] for calendar in calendars: displayName = calendar.displayName() if displayName is None: displayName = "(unset)" info = { "Display Name": displayName, "Sync Token": (yield calendar.syncToken()), } result.append(ListEntry(self, CalendarFolder, calendar.name(), **info)) returnValue(result) @inlineCallbacks def describe(self): description = ["Calendar home:\n"] # # Attributes # uid = (yield self.home.uid()) created = (yield self.home.created()) modified = (yield self.home.modified()) quotaUsed = (yield self.home.quotaUsedBytes()) quotaAllowed = (yield self.home.quotaAllowedBytes()) recordType = (yield self.record.recordType) recordShortName = (yield self.record.shortNames[0]) rows = [] rows.append(("UID", uid)) rows.append(("Owner", "(%s)%s" % (recordType, recordShortName))) 
rows.append(("Created", timeString(created))) rows.append(("Last modified", timeString(modified))) if quotaUsed is not None: rows.append(( "Quota", "%s of %s (%.2s%%)" % (quotaUsed, quotaAllowed, quotaUsed / quotaAllowed) )) description.append("Attributes:") description.append(tableString(rows)) # # Properties # properties = (yield self.home.properties()) if properties: description.append(tableStringForProperties(properties)) returnValue("\n".join(description)) class CalendarFolder(Folder): """ Calendar. """ def __init__(self, service, path, calendar): Folder.__init__(self, service, path) self.calendar = calendar @inlineCallbacks def _childWithObject(self, object): uid = (yield object.uid()) returnValue(CalendarObject(self.service, self.path + (uid,), object, uid)) @inlineCallbacks def child(self, name): object = (yield self.calendar.calendarObjectWithUID(name)) if not object: raise NotFoundError("Calendar %r has no object %r" % (str(self), name)) child = (yield self._childWithObject(object)) returnValue(child) @inlineCallbacks def list(self): result = [] for object in (yield self.calendar.calendarObjects()): object = (yield self._childWithObject(object)) items = (yield object.list()) result.append(items[0]) returnValue(result) @inlineCallbacks def describe(self): description = ["Calendar:\n"] # # Attributes # ownerHome = (yield self.calendar.ownerCalendarHome()) # FIXME: Translate into human syncToken = (yield self.calendar.syncToken()) rows = [] rows.append(("Owner", ownerHome)) rows.append(("Sync Token", syncToken)) description.append("Attributes:") description.append(tableString(rows)) # # Properties # properties = (yield self.calendar.properties()) if properties: description.append(tableStringForProperties(properties)) returnValue("\n".join(description)) def delete(self, implicit=True): if implicit: # We need data store-level scheduling support to implement # this. 
raise NotImplementedError("Delete not implemented.") else: self.calendarObject.remove() class CalendarObject(File): """ Calendar object. """ def __init__(self, service, path, calendarObject, uid): File.__init__(self, service, path) self.object = calendarObject self.uid = uid @inlineCallbacks def lookup(self): if not hasattr(self, "component"): component = (yield self.object.component()) try: mainComponent = component.mainComponent() assert self.uid == mainComponent.propertyValue("UID") self.componentType = mainComponent.name() self.summary = mainComponent.propertyValue("SUMMARY") self.mainComponent = mainComponent except InvalidICalendarDataError, e: log.error("{path}: {ex}", path=self.path, ex=e) self.componentType = "?" self.summary = "** Invalid data **" self.mainComponent = None self.component = component @inlineCallbacks def list(self): (yield self.lookup()) returnValue((ListEntry(self, CalendarObject, self.uid, { "Component Type": self.componentType, "Summary": self.summary.replace("\n", " "), }),)) list.fieldNames = ("Component Type", "Summary") @inlineCallbacks def text(self): (yield self.lookup()) returnValue(str(self.component)) @inlineCallbacks def describe(self): (yield self.lookup()) description = ["Calendar object:\n"] # # Calendar object attributes # rows = [] rows.append(("UID", self.uid)) rows.append(("Component Type", self.componentType)) rows.append(("Summary", self.summary)) organizer = self.mainComponent.getProperty("ORGANIZER") if organizer: organizerName = organizer.parameterValue("CN") organizerEmail = organizer.parameterValue("EMAIL") name = " (%s)" % (organizerName,) if organizerName else "" email = " <%s>" % (organizerEmail,) if organizerEmail else "" rows.append(("Organizer", "%s%s%s" % (organizer.value(), name, email))) rows.append(("Created", timeString(self.object.created()))) rows.append(("Modified", timeString(self.object.modified()))) description.append("Attributes:") description.append(tableString(rows)) # # Attachments # 
attachments = (yield self.object.attachments()) for attachment in attachments: contentType = attachment.contentType() contentType = "%s/%s" % (contentType.mediaType, contentType.mediaSubtype) rows = [] rows.append(("Name", attachment.name())) rows.append(("Size", "%s bytes" % (attachment.size(),))) rows.append(("Content Type", contentType)) rows.append(("MD5 Sum", attachment.md5())) rows.append(("Created", timeString(attachment.created()))) rows.append(("Modified", timeString(attachment.modified()))) description.append("Attachment:") description.append(tableString(rows)) # # Properties # properties = (yield self.object.properties()) if properties: description.append(tableStringForProperties(properties)) returnValue("\n".join(description)) class AddressBookHomeFolder(Folder): """ Address book home folder. """ def __init__(self, service, path, home, record): Folder.__init__(self, service, path) self.home = home self.record = record # FIXME def tableString(rows, header=None): table = Table() if header: table.addHeader(header) for row in rows: table.addRow(row) output = StringIO() table.printTable(os=output) return output.getvalue() def tableStringForProperties(properties): return "Properties:\n%s" % (tableString(( (name.toString(), truncateAtNewline(properties[name])) for name in sorted(properties) ))) def timeString(time): if time is None: return "(unknown)" return strftime("%a, %d %b %Y %H:%M:%S %z(%Z)", localtime(time)) def truncateAtNewline(text): text = str(text) try: index = text.index("\n") except ValueError: return text return text[:index] + "..." calendarserver-9.1+dfsg/calendarserver/tools/tables.py000077500000000000000000000231551315003562600233000ustar00rootroot00000000000000## # Copyright (c) 2009-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Tables for fixed-width text display. """ __all__ = [ "Table", ] from sys import stdout import types from cStringIO import StringIO class Table(object): """ Class that allows pretty printing ascii tables. The table supports multiline headers and footers, independent column formatting by row, alternative tab-delimited output. """ class ColumnFormat(object): """ Defines the format string, justification and span for a column. """ LEFT_JUSTIFY = 0 RIGHT_JUSTIFY = 1 CENTER_JUSTIFY = 2 def __init__(self, strFormat="%s", justify=LEFT_JUSTIFY, span=1): self.format = strFormat self.justify = justify self.span = span def __init__(self, table=None): self.headers = [] self.headerColumnFormats = [] self.rows = [] self.footers = [] self.footerColumnFormats = [] self.columnCount = 0 self.defaultColumnFormats = [] self.columnFormatsByRow = {} if table: self.setData(table) def setData(self, table): self.hasTitles = True self.headers.append(table[0]) self.rows = table[1:] self._getMaxColumnCount() def setDefaultColumnFormats(self, columnFormats): self.defaultColumnFormats = columnFormats def addDefaultColumnFormat(self, columnFormat): self.defaultColumnFormats.append(columnFormat) def setHeaders(self, rows, columnFormats=None): self.headers = rows self.headerColumnFormats = columnFormats if columnFormats else [None, ] * len(self.headers) self._getMaxColumnCount() def addHeader(self, row, columnFormats=None): self.headers.append(row) self.headerColumnFormats.append(columnFormats) self._getMaxColumnCount() def addHeaderDivider(self, skipColumns=()): self.headers.append((None, 
skipColumns,)) self.headerColumnFormats.append(None) def setFooters(self, row, columnFormats=None): self.footers = row self.footerColumnFormats = columnFormats if columnFormats else [None, ] * len(self.footers) self._getMaxColumnCount() def addFooter(self, row, columnFormats=None): self.footers.append(row) self.footerColumnFormats.append(columnFormats) self._getMaxColumnCount() def addRow(self, row=None, columnFormats=None): self.rows.append(row) if columnFormats: self.columnFormatsByRow[len(self.rows) - 1] = columnFormats self._getMaxColumnCount() def addDivider(self, skipColumns=()): self.rows.append((None, skipColumns,)) def toString(self): output = StringIO() self.printTable(os=output) return output.getvalue() def printTable(self, os=stdout): maxWidths = self._getMaxWidths() self.printDivider(os, maxWidths, False) if self.headers: for header, format in zip(self.headers, self.headerColumnFormats): self.printRow(os, header, self._getHeaderColumnFormat(format), maxWidths) self.printDivider(os, maxWidths) for ctr, row in enumerate(self.rows): self.printRow(os, row, self._getColumnFormatForRow(ctr), maxWidths) if self.footers: self.printDivider(os, maxWidths, double=True) for footer, format in zip(self.footers, self.footerColumnFormats): self.printRow(os, footer, self._getFooterColumnFormat(format), maxWidths) self.printDivider(os, maxWidths, False) def printRow(self, os, row, format, maxWidths): if row is None or type(row) is tuple and row[0] is None: self.printDivider(os, maxWidths, skipColumns=row[1] if type(row) is tuple else ()) else: if len(row) != len(maxWidths): row = list(row) row.extend([""] * (len(maxWidths) - len(row))) t = "|" ctr = 0 while ctr < len(row): startCtr = ctr maxWidth = 0 for _ignore_span in xrange(format[startCtr].span if format else 1): maxWidth += maxWidths[ctr] ctr += 1 maxWidth += 3 * ((format[startCtr].span - 1) if format else 0) text = self._columnText(row, startCtr, format, width=maxWidth) t += " " + text + " |" t += "\n" os.write(t) 
def printDivider(self, os, maxWidths, intermediate=True, double=False, skipColumns=()): t = "|" if intermediate else "+" for widthctr, width in enumerate(maxWidths): if widthctr in skipColumns: c = " " else: c = "=" if double else "-" t += c * (width + 2) t += "+" if widthctr < len(maxWidths) - 1 else ("|" if intermediate else "+") t += "\n" os.write(t) def printTabDelimitedData(self, os=stdout, footer=True): if self.headers: titles = [""] * len(self.headers[0]) for row, header in enumerate(self.headers): for col, item in enumerate(header): titles[col] += (" " if row and item else "") + item self.printTabDelimitedRow(os, titles, self._getHeaderColumnFormat(self.headerColumnFormats[0])) for ctr, row in enumerate(self.rows): self.printTabDelimitedRow(os, row, self._getColumnFormatForRow(ctr)) if self.footers and footer: for footer in self.footers: self.printTabDelimitedRow(os, footer, self._getFooterColumnFormat(self.footerColumnFormats[0])) def printTabDelimitedRow(self, os, row, format): if row is None: row = [""] * self.columnCount if len(row) != self.columnCount: row = list(row) row.extend([""] * (self.columnCount - len(row))) textItems = [self._columnText(row, ctr, format) for ctr in xrange((len(row)))] os.write("\t".join(textItems) + "\n") def _getMaxColumnCount(self): self.columnCount = 0 if self.headers: for header in self.headers: self.columnCount = max(self.columnCount, len(header) if header else 0) for row in self.rows: self.columnCount = max(self.columnCount, len(row) if row else 0) if self.footers: for footer in self.footers: self.columnCount = max(self.columnCount, len(footer) if footer else 0) def _getMaxWidths(self): maxWidths = [0] * self.columnCount if self.headers: for header, format in zip(self.headers, self.headerColumnFormats): self._updateMaxWidthsFromRow(header, self._getHeaderColumnFormat(format), maxWidths) for ctr, row in enumerate(self.rows): self._updateMaxWidthsFromRow(row, self._getColumnFormatForRow(ctr), maxWidths) if self.footers: 
for footer, format in zip(self.footers, self.footerColumnFormats): self._updateMaxWidthsFromRow(footer, self._getFooterColumnFormat(format), maxWidths) return maxWidths def _updateMaxWidthsFromRow(self, row, format, maxWidths): if row and (type(row) is not tuple or row[0] is not None): ctr = 0 while ctr < len(row): text = self._columnText(row, ctr, format) startCtr = ctr for _ignore_span in xrange(format[startCtr].span if format else 1): maxWidths[ctr] = max(maxWidths[ctr], len(text) / (format[startCtr].span if format else 1)) ctr += 1 def _getHeaderColumnFormat(self, format): if format: return format else: justify = Table.ColumnFormat.CENTER_JUSTIFY if len(self.headers) == 1 else Table.ColumnFormat.LEFT_JUSTIFY return [Table.ColumnFormat(justify=justify)] * self.columnCount def _getFooterColumnFormat(self, format): if format: return format else: return self.defaultColumnFormats def _getColumnFormatForRow(self, ctr): if ctr in self.columnFormatsByRow: return self.columnFormatsByRow[ctr] else: return self.defaultColumnFormats def _columnText(self, row, column, format, width=0): if row is None or column >= len(row): return "" colData = row[column] if colData is None: colData = "" columnFormat = format[column] if format and column < len(format) else Table.ColumnFormat() if type(colData) in types.StringTypes: text = colData else: text = columnFormat.format % colData if width: if columnFormat.justify == Table.ColumnFormat.LEFT_JUSTIFY: text = text.ljust(width) elif columnFormat.justify == Table.ColumnFormat.RIGHT_JUSTIFY: text = text.rjust(width) elif columnFormat.justify == Table.ColumnFormat.CENTER_JUSTIFY: text = text.center(width) return text calendarserver-9.1+dfsg/calendarserver/tools/test/000077500000000000000000000000001315003562600224225ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/tools/test/__init__.py000066400000000000000000000012221315003562600245300ustar00rootroot00000000000000## # Copyright (c) 2005-2017 Apple Inc. 
All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Tests for the calendarserver.tools module. """ calendarserver-9.1+dfsg/calendarserver/tools/test/deprovision/000077500000000000000000000000001315003562600247635ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/tools/test/deprovision/augments.xml000066400000000000000000000017571315003562600273420ustar00rootroot00000000000000 E9E78C86-4829-4520-A35D-70DDADAB2092 true true 291C2C29-B663-4342-8EA1-A055E6A04D65 true true calendarserver-9.1+dfsg/calendarserver/tools/test/deprovision/caldavd.plist000066400000000000000000000406351315003562600274460ustar00rootroot00000000000000 ServerHostName HTTPPort 8008 SSLPort 8443 RedirectHTTPToHTTPS BindAddresses BindHTTPPorts BindSSLPorts ServerRoot %(ServerRoot)s DataRoot Data DocumentRoot Documents ConfigRoot /etc/caldavd LogRoot /var/log/caldavd RunRoot /var/run Aliases UserQuota 104857600 MaximumAttachmentSize 1048576 MaxAttendeesPerInstance 100 DirectoryService type xml params xmlFile accounts.xml recordTypes users groups ResourceService Enabled type xml params xmlFile resources.xml recordTypes resources locations AugmentService Enabled type xml params xmlFiles augments.xml ProxyDBService type twistedcaldav.directory.calendaruserproxy.ProxySqliteDB params dbpath proxies.sqlite ProxyLoadFromFile conf/auth/proxies-test.xml AdminPrincipals /principals/__uids__/admin/ ReadPrincipals EnableProxyPrincipals EnableAnonymousReadRoot EnableAnonymousReadNav 
EnablePrincipalListings EnableMonolithicCalendars Authentication Basic Enabled Digest Enabled Algorithm md5 Qop Kerberos Enabled ServicePrincipal Wiki Enabled Cookie sessionID URL http://127.0.0.1/RPC2 UserMethod userForSession WikiMethod accessLevelForUserWikiCalendar AccessLogFile logs/access.log RotateAccessLog ErrorLogFile logs/error.log DefaultLogLevel warn LogLevels PIDFile logs/caldavd.pid AccountingCategories iTIP HTTP AccountingPrincipals SSLCertificate twistedcaldav/test/data/server.pem SSLAuthorityChain SSLPrivateKey twistedcaldav/test/data/server.pem UserName GroupName ProcessType Combined MultiProcess ProcessCount 2 Notifications CoalesceSeconds 3 InternalNotificationHost localhost InternalNotificationPort 62309 Services SimpleLineNotifier Service twistedcaldav.notify.SimpleLineNotifierService Enabled Port 62308 XMPPNotifier Service twistedcaldav.notify.XMPPNotifierService Enabled Host xmpp.host.name Port 5222 JID jid@xmpp.host.name/resource Password password_goes_here ServiceAddress pubsub.xmpp.host.name NodeConfiguration pubsub#deliver_payloads 1 pubsub#persist_items 1 KeepAliveSeconds 120 HeartbeatMinutes 30 AllowedJIDs Scheduling CalDAV EmailDomain HTTPDomain AddressPatterns OldDraftCompatibility ScheduleTagCompatibility EnablePrivateComments iSchedule Enabled AddressPatterns Servers conf/servertoserver-test.xml iMIP Enabled MailGatewayServer localhost MailGatewayPort 62310 Sending Server Port 587 UseSSL Username Password Address Receiving Server Port 995 Type UseSSL Username Password PollingSeconds 30 AddressPatterns mailto:.* Options AllowGroupAsOrganizer AllowLocationAsOrganizer AllowResourceAsOrganizer FreeBusyURL Enabled TimePeriod 14 AnonymousAccess EnableDropBox EnablePrivateEvents EnableTimezoneService UsePackageTimezones EnableSACLs EnableWebAdmin ResponseCompression HTTPRetryAfter 180 ControlSocket logs/caldavd.sock Memcached MaxClients 5 memcached memcached Options Pools Default ClientEnabled ServerEnabled Twisted twistd 
../Twisted/bin/twistd Localization LocalesDirectory locales Language English calendarserver-9.1+dfsg/calendarserver/tools/test/deprovision/resources-locations.xml000066400000000000000000000020251315003562600315070ustar00rootroot00000000000000 location%02d location%02d location%02d Room %02d resource%02d resource%02d resource%02d Resource %02d calendarserver-9.1+dfsg/calendarserver/tools/test/deprovision/users-groups.xml000066400000000000000000000022361315003562600301660ustar00rootroot00000000000000 deprovisioned E9E78C86-4829-4520-A35D-70DDADAB2092 test Deprovisioned User Deprovisioned User keeper 291C2C29-B663-4342-8EA1-A055E6A04D65 test Keeper User Keeper User calendarserver-9.1+dfsg/calendarserver/tools/test/gateway/000077500000000000000000000000001315003562600240635ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/tools/test/gateway/augments.xml000066400000000000000000000020541315003562600264310ustar00rootroot00000000000000 user01 true true user02 false false location01 true true calendarserver-9.1+dfsg/calendarserver/tools/test/gateway/caldavd.plist000066400000000000000000000440021315003562600265360ustar00rootroot00000000000000 ServerHostName EnableCalDAV EnableCardDAV HTTPPort 8008 SSLPort 8443 RedirectHTTPToHTTPS BindAddresses BindHTTPPorts BindSSLPorts ServerRoot %(ServerRoot)s DataRoot %(DataRoot)s DatabaseRoot %(DatabaseRoot)s DocumentRoot %(DocumentRoot)s ConfigRoot %(ConfigRoot)s LogRoot %(LogRoot)s RunRoot %(RunRoot)s Aliases UserQuota 104857600 MaximumAttachmentSize 1048576 MaxAttendeesPerInstance 100 DirectoryService type xml params xmlFile accounts.xml recordTypes users groups ResourceService Enabled type xml params xmlFile resources.xml recordTypes resources locations addresses AugmentService Enabled type xml params xmlFiles augments.xml ProxyDBService type twistedcaldav.directory.calendaruserproxy.ProxySqliteDB params dbpath proxies.sqlite ProxyLoadFromFile AdminPrincipals /principals/__uids__/admin/ ReadPrincipals 
EnableProxyPrincipals EnableAnonymousReadRoot EnableAnonymousReadNav EnablePrincipalListings EnableMonolithicCalendars Authentication Basic Enabled Digest Enabled Algorithm md5 Qop Kerberos Enabled ServicePrincipal Wiki Enabled Cookie sessionID URL http://127.0.0.1/RPC2 UserMethod userForSession WikiMethod accessLevelForUserWikiCalendar AccessLogFile logs/access.log RotateAccessLog ErrorLogFile logs/error.log DefaultLogLevel warn LogLevels PIDFile logs/caldavd.pid AccountingCategories iTIP HTTP AccountingPrincipals SSLCertificate twistedcaldav/test/data/server.pem SSLAuthorityChain SSLPrivateKey twistedcaldav/test/data/server.pem UserName GroupName ProcessType Combined MultiProcess ProcessCount 2 Notifications CoalesceSeconds 3 InternalNotificationHost localhost InternalNotificationPort 62309 Services APNS Enabled EnableStaggering StaggerSeconds 5 CalDAV CertificatePath /example/calendar.cer PrivateKeyPath /example/calendar.pem CardDAV CertificatePath /example/contacts.cer PrivateKeyPath /example/contacts.pem SimpleLineNotifier Service twistedcaldav.notify.SimpleLineNotifierService Enabled Port 62308 XMPPNotifier Service twistedcaldav.notify.XMPPNotifierService Enabled Host xmpp.host.name Port 5222 JID jid@xmpp.host.name/resource Password password_goes_here ServiceAddress pubsub.xmpp.host.name NodeConfiguration pubsub#deliver_payloads 1 pubsub#persist_items 1 KeepAliveSeconds 120 HeartbeatMinutes 30 AllowedJIDs Scheduling CalDAV EmailDomain HTTPDomain AddressPatterns OldDraftCompatibility ScheduleTagCompatibility EnablePrivateComments iSchedule Enabled AddressPatterns Servers conf/servertoserver-test.xml iMIP Enabled MailGatewayServer localhost MailGatewayPort 62310 Sending Server Port 587 UseSSL Username Password Address Receiving Server Port 995 Type UseSSL Username Password PollingSeconds 30 AddressPatterns mailto:.* Options AllowGroupAsOrganizer AllowLocationAsOrganizer AllowResourceAsOrganizer FreeBusyURL Enabled TimePeriod 14 AnonymousAccess EnableDropBox 
EnablePrivateEvents EnableTimezoneService UsePackageTimezones EnableSACLs EnableWebAdmin ResponseCompression HTTPRetryAfter 180 ControlSocket logs/caldavd.sock Memcached MaxClients 5 memcached memcached Options Pools Default ClientEnabled ServerEnabled EnableResponseCache ResponseCacheTimeout 30 SharedConnectionPool DirectoryProxy InProcessCachingSeconds 0 InSidecarCachingSeconds 0 Twisted twistd ../Twisted/bin/twistd Localization LocalesDirectory locales Language English Includes %(WritablePlist)s WritableConfigFile %(WritablePlist)s calendarserver-9.1+dfsg/calendarserver/tools/test/gateway/resources-locations.xml000066400000000000000000000075061315003562600306200ustar00rootroot00000000000000 location01 location01 Room 01 location02 location02 Room 02 location03 location03 Room 03 location04 location04 Room 04 location05 location05 Room 05 location06 location06 Room 06 location07 location07 Room 07 location08 location08 Room 08 location09 location09 Room 09 location10 location10 Room 10 resource01 resource01 Resource 01 resource02 resource02 Resource 02 resource03 resource03 Resource 03 resource04 resource04 Resource 04 resource05 resource05 Resource 05 resource06 resource06 Resource 06 resource07 resource07 Resource 07 resource08 resource08 Resource 08 resource09 resource09 Resource 09 resource10 resource10 Resource 10 calendarserver-9.1+dfsg/calendarserver/tools/test/gateway/users-groups.xml000066400000000000000000000066211315003562600272700ustar00rootroot00000000000000 user01 user01 user01 User 01 user01@example.com user02 user02 user02 User 02 user02@example.com user03 user03 user03 User 03 user03@example.com user04 user04 user04 User 04 user04@example.com user05 user05 user05 User 05 user05@example.com user06 user06 user06 User 06 user06@example.com user07 user07 user07 User 07 user07@example.com user08 user08 user08 User 08 user08@example.com user09 user09 user09 User 09 user09@example.com user10 user10 user10 User 10 user10@example.com 
e5a6142c-4189-4e9e-90b0-9cd0268b314b testgroup1 Group 01 user01 user02 calendarserver-9.1+dfsg/calendarserver/tools/test/principals/000077500000000000000000000000001315003562600245665ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/tools/test/principals/augments.xml000066400000000000000000000020541315003562600271340ustar00rootroot00000000000000 user01 true true user02 false false location01 true true calendarserver-9.1+dfsg/calendarserver/tools/test/principals/caldavd.plist000066400000000000000000000414731315003562600272520ustar00rootroot00000000000000 ServerHostName HTTPPort 8008 SSLPort 8443 RedirectHTTPToHTTPS BindAddresses BindHTTPPorts BindSSLPorts ServerRoot %(ServerRoot)s DataRoot %(DataRoot)s DatabaseRoot %(DatabaseRoot)s DocumentRoot %(DocumentRoot)s ConfigRoot config LogRoot %(LogRoot)s RunRoot %(LogRoot)s Aliases UserQuota 104857600 MaximumAttachmentSize 1048576 MaxAttendeesPerInstance 100 DirectoryService type xml params xmlFile accounts.xml recordTypes users groups ResourceService Enabled type xml params xmlFile resources.xml recordTypes resources locations addresses AugmentService Enabled type xml params xmlFiles augments.xml ProxyDBService type twistedcaldav.directory.calendaruserproxy.ProxySqliteDB params dbpath proxies.sqlite ProxyLoadFromFile AdminPrincipals /principals/__uids__/admin/ ReadPrincipals EnableProxyPrincipals EnableAnonymousReadRoot EnableAnonymousReadNav EnablePrincipalListings EnableMonolithicCalendars Authentication Basic Enabled Digest Enabled Algorithm md5 Qop Kerberos Enabled ServicePrincipal Wiki Enabled Cookie sessionID URL http://127.0.0.1/RPC2 UserMethod userForSession WikiMethod accessLevelForUserWikiCalendar AccessLogFile logs/access.log RotateAccessLog ErrorLogFile logs/error.log DefaultLogLevel warn LogLevels PIDFile logs/caldavd.pid AccountingCategories iTIP HTTP AccountingPrincipals SSLCertificate twistedcaldav/test/data/server.pem SSLAuthorityChain SSLPrivateKey twistedcaldav/test/data/server.pem 
UserName GroupName ProcessType Combined MultiProcess ProcessCount 2 Notifications CoalesceSeconds 3 InternalNotificationHost localhost InternalNotificationPort 62309 Services SimpleLineNotifier Service twistedcaldav.notify.SimpleLineNotifierService Enabled Port 62308 XMPPNotifier Service twistedcaldav.notify.XMPPNotifierService Enabled Host xmpp.host.name Port 5222 JID jid@xmpp.host.name/resource Password password_goes_here ServiceAddress pubsub.xmpp.host.name NodeConfiguration pubsub#deliver_payloads 1 pubsub#persist_items 1 KeepAliveSeconds 120 HeartbeatMinutes 30 AllowedJIDs Scheduling CalDAV EmailDomain HTTPDomain AddressPatterns OldDraftCompatibility ScheduleTagCompatibility EnablePrivateComments iSchedule Enabled AddressPatterns Servers conf/servertoserver-test.xml iMIP Enabled MailGatewayServer localhost MailGatewayPort 62310 Sending Server Port 587 UseSSL Username Password Address Receiving Server Port 995 Type UseSSL Username Password PollingSeconds 30 AddressPatterns mailto:.* Options AllowGroupAsOrganizer AllowLocationAsOrganizer AllowResourceAsOrganizer FreeBusyURL Enabled TimePeriod 14 AnonymousAccess EnableDropBox EnablePrivateEvents EnableTimezoneService UsePackageTimezones EnableSACLs EnableWebAdmin ResponseCompression HTTPRetryAfter 180 ControlSocket logs/caldavd.sock Memcached MaxClients 5 memcached memcached Options Pools Default ClientEnabled ServerEnabled EnableResponseCache ResponseCacheTimeout 30 SharedConnectionPool Twisted twistd ../Twisted/bin/twistd Localization LocalesDirectory locales Language English calendarserver-9.1+dfsg/calendarserver/tools/test/principals/resources-locations.xml000066400000000000000000000074741315003562600313270ustar00rootroot00000000000000 location01 location01 Room 01 location02 location02 Room 02 location03 location03 Room 03 location04 location04 Room 04 location05 location05 Room 05 location06 location06 Room 06 location07 location07 Room 07 location08 location08 Room 08 location09 location09 Room 09 
location10 location10 Room 10 resource01 resource01 Resource 01 resource02 resource02 Resource 02 resource03 resource03 Resource 03 resource04 resource04 Resource 04 resource05 resource05 Resource 05 resource06 resource06 Resource 06 resource07 resource07 Resource 07 resource08 resource08 Resource 08 resource09 resource09 Resource 09 resource10 resource10 Resource 10 calendarserver-9.1+dfsg/calendarserver/tools/test/principals/users-groups.xml000066400000000000000000000076011315003562600277720ustar00rootroot00000000000000 user01 user01 user01 User 01 user01@example.com user02 user02 user02 User 02 user02@example.com user03 user03 user03 User 03 user03@example.com user04 user04 user04 User 04 user04@example.com user05 user05 user05 User 05 user05@example.com user06 user06 user06 User 06 user06@example.com user07 user07 user07 User 07 user07@example.com user08 user08 user08 User 08 user08@example.com user09 user09 user09 User 09 user09@example.com user10 user10 user10 User 10 user10@example.com testuser1 testuser1 testuser1 Test User One testuser10@example.com e5a6142c-4189-4e9e-90b0-9cd0268b314b testgroup1 Group 01 user01 user02 group2 group2 Test Group Two group3 group3 Test Group Three calendarserver-9.1+dfsg/calendarserver/tools/test/test_agent.py000066400000000000000000000070411315003562600251330ustar00rootroot00000000000000## # Copyright (c) 2013-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
##

try:
    from calendarserver.tools.agent import AgentRealm
    from calendarserver.tools.agent import InactivityDetector
    from twistedcaldav.test.util import TestCase
    from twisted.internet.task import Clock
    from twisted.web.resource import IResource
    from twisted.web.resource import ForbiddenResource
except ImportError:
    pass
else:

    class FakeRecord(object):

        def __init__(self, shortName):
            self.shortNames = [shortName]

    class AgentTestCase(TestCase):

        def test_AgentRealm(self):
            realm = AgentRealm("root", ["abc"])

            # Valid avatar
            _ignore_interface, resource, ignored = realm.requestAvatar(
                FakeRecord("abc"), None, IResource
            )
            self.assertEquals(resource, "root")

            # Not allowed avatar
            _ignore_interface, resource, ignored = realm.requestAvatar(
                FakeRecord("def"), None, IResource
            )
            self.assertTrue(isinstance(resource, ForbiddenResource))

            # Interface unhandled
            try:
                realm.requestAvatar(FakeRecord("def"), None, None)
            except NotImplementedError:
                pass
            else:
                self.fail("Didn't raise NotImplementedError")

    class InactivityDetectorTestCase(TestCase):

        def test_inactivity(self):
            clock = Clock()

            self.inactivityReached = False

            def becameInactive():
                self.inactivityReached = True

            id = InactivityDetector(clock, 5, becameInactive)

            # After 3 seconds, not inactive
            clock.advance(3)
            self.assertFalse(self.inactivityReached)

            # Activity happens, pushing out the inactivity threshold
            id.activity()
            clock.advance(3)
            self.assertFalse(self.inactivityReached)

            # Time passes without activity
            clock.advance(3)
            self.assertTrue(self.inactivityReached)

            id.stop()

            # Verify a timeout of 0 does not ever fire
            id = InactivityDetector(clock, 0, becameInactive)
            self.assertEquals(clock.getDelayedCalls(), [])

    class FakeRequest(object):

        def getClientIP(self):
            return "127.0.0.1"

    class FakeOpenDirectory(object):

        def returnThisRecord(self, response):
            self.recordResponse = response

        def getUserRecord(self, ignored, username):
            return self.recordResponse

        def returnThisAuthResponse(self, response):
            self.authResponse = response

        def authenticateUserDigest(
            self, ignored, node, username, challenge, response, method
        ):
            return self.authResponse

        ODNSerror = "Error"

    class FakeCredentials(object):

        def __init__(self, username, fields):
            self.username = username
            self.fields = fields
            self.method = "POST"

calendarserver-9.1+dfsg/calendarserver/tools/test/test_calverify.py
##
# Copyright (c) 2012-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

from twext.enterprise.dal.syntax import Update
from twext.enterprise.jobs.jobitem import JobItem
from txdav.common.datastore.sql_tables import schema

"""
Tests for calendarserver.tools.calverify
"""

from calendarserver.tools.calverify import BadDataService, \
    SchedulingMismatchService, DoubleBookingService, DarkPurgeService, \
    EventSplitService, MissingLocationService
from pycalendar.datetime import DateTime
from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks
from twistedcaldav.config import config
from twistedcaldav.ical import normalize_iCalStr
from twistedcaldav.test.util import StoreTestCase
from txdav.common.datastore.test.util import populateCalendarsFrom
from StringIO import StringIO


OK_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:OK
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")

# Missing DTSTAMP
BAD1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD1
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")

# Bad recurrence
BAD2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD2
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
RRULE:FREQ=DAILY;COUNT=3
SEQUENCE:2
END:VEVENT
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:BAD2
RECURRENCE-ID:20100307T120000Z
DTEND:20100307T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:20100307T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")

# Bad recurrence
BAD3_ICS =
"""BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD2 DTEND:20100307T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z DTSTAMP:20100303T181220Z RRULE:FREQ=DAILY;COUNT=3 SEQUENCE:2 END:VEVENT BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD2 RECURRENCE-ID:20100307T120000Z DTEND:20100307T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # Missing Organizer BAD3_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD3 DTEND:20100307T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z DTSTAMP:20100303T181220Z RRULE:FREQ=DAILY;COUNT=3 SEQUENCE:2 END:VEVENT BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD3 RECURRENCE-ID:20100307T111500Z DTEND:20100307T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # https Organizer BAD4_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD4 DTEND:20100307T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z DTSTAMP:20100303T181220Z ORGANIZER:http://demo.com:8008/principals/__uids__/D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # https Attendee BAD5_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD5 DTEND:20100307T151500Z 
TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z DTSTAMP:20100303T181220Z ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:http://demo.com:8008/principals/__uids__/D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # https Organizer and Attendee BAD6_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD6 DTEND:20100307T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z DTSTAMP:20100303T181220Z ORGANIZER:http://demo.com:8008/principals/__uids__/D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:http://demo.com:8008/principals/__uids__/D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # Base64 Organizer and Attendee parameter OK8_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:OK8 DTEND:20100307T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z DTSTAMP:20100303T181220Z ORGANIZER;CALENDARSERVER-OLD-CUA="base64-aHR0cDovL2RlbW8uY29tOjgwMDgvcHJpbm NpcGFscy9fX3VpZHNfXy9ENDZGM0Q3MS0wNEI3LTQzQzItQTdCNi02RjkyRjkyRTYxRDA=": urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;CALENDARSERVER-OLD-CUA="base64-aHR0cDovL2RlbW8uY29tOjgwMDgvcHJpbmN pcGFscy9fX3VpZHNfXy9ENDZGM0Q3MS0wNEI3LTQzQzItQTdCNi02RjkyRjkyRTYxRDA=":u rn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # Non-mailto: Organizer BAD10_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD10 DTEND:20100307T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z 
DTSTAMP:20100303T181220Z ORGANIZER;CN=Example User1;SCHEDULE-AGENT=NONE:example1@example.com ATTENDEE;CN=Example User1:example1@example.com ATTENDEE;CN=Example User2:example2@example.com ATTENDEE;CN=Example User3:/principals/users/example3 ATTENDEE;CN=Example User4:http://demo.com:8008/principals/users/example4 SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # Bad recurrence EXDATE BAD11_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD11 DTEND:20100307T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z DTSTAMP:20100303T181220Z EXDATE:20100314T111500Z RRULE:FREQ=WEEKLY SEQUENCE:2 END:VEVENT BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD11 RECURRENCE-ID:20100314T111500Z DTEND:20100314T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100314T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") BAD12_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD12 DTEND:20100307T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z DTSTAMP:20100303T181220Z ORGANIZER:mailto:example2@example.com ATTENDEE:mailto:example1@example.com ATTENDEE:mailto:example2@example.com SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") BAD13_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD13 DTEND:20100307T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z DTSTAMP:20100303T181220Z ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") BAD14_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD14 DTEND:20100307T151500Z 
TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 GEO:40.1;40.1 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") CORRUPT14_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:BAD14 DTEND:20100307T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:20100307T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 GEO:40.1 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") class CalVerifyDataTests(StoreTestCase): """ Tests calverify for iCalendar data problems. """ metadata = { "accessMode": "PUBLIC", "isScheduleObject": True, "scheduleTag": "abc", "scheduleEtags": (), "hasPrivateComment": False, } requirements = { "home1": { "calendar_1": { "ok.ics": (OK_ICS, metadata,), "bad1.ics": (BAD1_ICS, metadata,), "bad2.ics": (BAD2_ICS, metadata,), "bad3.ics": (BAD3_ICS, metadata,), "bad4.ics": (BAD4_ICS, metadata,), "bad5.ics": (BAD5_ICS, metadata,), "bad6.ics": (BAD6_ICS, metadata,), "ok8.ics": (OK8_ICS, metadata,), "bad10.ics": (BAD10_ICS, metadata,), "bad11.ics": (BAD11_ICS, metadata,), "bad12.ics": (BAD12_ICS, metadata,), "bad13.ics": (BAD13_ICS, metadata,), "bad14.ics": (BAD14_ICS, metadata,), } }, } number_to_process = len(requirements["home1"]["calendar_1"]) @inlineCallbacks def populate(self): # Need to bypass normal validation inside the store yield populateCalendarsFrom(self.requirements, self.storeUnderTest()) # Have to manually write these into the database populateTxn = self.storeUnderTest().newTransaction() co = schema.CALENDAR_OBJECT yield Update( {co.ICALENDAR_TEXT: CORRUPT14_ICS}, Where=co.RESOURCE_NAME == "bad14.ics", ).on(populateTxn) yield populateTxn.commit() self.notifierFactory.reset() def storeUnderTest(self): """ Create and return a L{CalendarStore} for testing. 
""" return self._sqlCalendarStore def verifyResultsByUID(self, results, expected): reported = set([(home, uid) for home, uid, _ignore_resid, _ignore_reason in results]) self.assertEqual(reported, expected) @inlineCallbacks def test_scanBadData(self): """ CalVerifyService.doScan without fix. Make sure it detects common errors. Make sure sync-token is not changed. """ sync_token_old = (yield (yield self.calendarUnderTest()).syncToken()) yield self.commit() options = { "ical": True, "fix": False, "nobase64": False, "verbose": False, "uid": "", "uuid": "", "path": "", "tzid": "", } output = StringIO() calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config) calverify.emailDomain = "example.com" yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], self.number_to_process) self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set(( ("home1", "BAD1",), ("home1", "BAD2",), ("home1", "BAD3",), ("home1", "BAD4",), ("home1", "BAD5",), ("home1", "BAD6",), ("home1", "BAD10",), ("home1", "BAD11",), ("home1", "BAD12",), ("home1", "BAD13",), ("home1", "BAD14",), ))) sync_token_new = (yield (yield self.calendarUnderTest()).syncToken()) self.assertEqual(sync_token_old, sync_token_new) @inlineCallbacks def test_fixBadData(self): """ CalVerifyService.doScan with fix. Make sure it detects and fixes as much as it can. Make sure sync-token is changed. 
""" sync_token_old = (yield (yield self.calendarUnderTest()).syncToken()) yield self.commit() options = { "ical": True, "fix": True, "nobase64": False, "verbose": False, "uid": "", "uuid": "", "path": "", "tzid": "", } output = StringIO() # Do fix self.patch(config.Scheduling.Options, "PrincipalHostAliases", "demo.com") self.patch(config, "HTTPPort", 8008) calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config) calverify.emailDomain = "example.com" yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], self.number_to_process) self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set(( ("home1", "BAD1",), ("home1", "BAD2",), ("home1", "BAD3",), ("home1", "BAD4",), ("home1", "BAD5",), ("home1", "BAD6",), ("home1", "BAD10",), ("home1", "BAD11",), ("home1", "BAD12",), ("home1", "BAD13",), ("home1", "BAD14",), ))) # Do scan options["fix"] = False calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config) calverify.emailDomain = "example.com" yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], self.number_to_process) self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set(( ("home1", "BAD1",), ("home1", "BAD13",), ))) sync_token_new = (yield (yield self.calendarUnderTest()).syncToken()) self.assertNotEqual(sync_token_old, sync_token_new) # Make sure mailto: fix results in urn:x-uid value without SCHEDULE-AGENT obj = yield self.calendarObjectUnderTest(name="bad10.ics") ical = yield obj.component() org = ical.getOrganizerProperty() self.assertEqual(org.value(), "urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0") self.assertFalse(org.hasParameter("SCHEDULE-AGENT")) for attendee in ical.getAllAttendeeProperties(): self.assertTrue( attendee.value().startswith("urn:x-uid:") or attendee.value().startswith("/principals") ) @inlineCallbacks def test_scanBadCuaOnly(self): """ CalVerifyService.doScan without fix for 
CALENDARSERVER-OLD-CUA only. Make sure it detects and fixes as much as it can. Make sure sync-token is not changed. """ sync_token_old = (yield (yield self.calendarUnderTest()).syncToken()) yield self.commit() options = { "ical": False, "fix": False, "badcua": True, "nobase64": False, "verbose": False, "uid": "", "uuid": "", "path": "", "tzid": "", } output = StringIO() calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config) calverify.emailDomain = "example.com" yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], self.number_to_process) self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set(( ("home1", "BAD4",), ("home1", "BAD5",), ("home1", "BAD6",), ("home1", "BAD10",), ("home1", "BAD12",), ("home1", "BAD14",), ))) sync_token_new = (yield (yield self.calendarUnderTest()).syncToken()) self.assertEqual(sync_token_old, sync_token_new) @inlineCallbacks def test_fixBadCuaOnly(self): """ CalVerifyService.doScan with fix for CALENDARSERVER-OLD-CUA only. Make sure it detects and fixes as much as it can. Make sure sync-token is changed. 
""" sync_token_old = (yield (yield self.calendarUnderTest()).syncToken()) yield self.commit() options = { "ical": False, "fix": True, "badcua": True, "nobase64": False, "verbose": False, "uid": "", "uuid": "", "path": "", "tzid": "", } output = StringIO() # Do fix self.patch(config.Scheduling.Options, "PrincipalHostAliases", "demo.com") self.patch(config, "HTTPPort", 8008) calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config) calverify.emailDomain = "example.com" yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], self.number_to_process) self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set(( ("home1", "BAD4",), ("home1", "BAD5",), ("home1", "BAD6",), ("home1", "BAD10",), ("home1", "BAD12",), ("home1", "BAD14",), ))) # Do scan options["fix"] = False calverify = BadDataService(self._sqlCalendarStore, options, output, reactor, config) calverify.emailDomain = "example.com" yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], self.number_to_process) self.verifyResultsByUID(calverify.results["Bad iCalendar data"], set(( ))) sync_token_new = (yield (yield self.calendarUnderTest()).syncToken()) self.assertNotEqual(sync_token_old, sync_token_new) class CalVerifyMismatchTestsBase(StoreTestCase): """ Tests calverify for iCalendar mismatch problems. 
""" metadata = { "accessMode": "PUBLIC", "isScheduleObject": True, "scheduleTag": "abc", "scheduleEtags": (), "hasPrivateComment": False, } uuid1 = "D46F3D71-04B7-43C2-A7B6-6F92F92E61D0" uuid2 = "47B16BB4-DB5F-4BF6-85FE-A7DA54230F92" uuid3 = "AC478592-7783-44D1-B2AE-52359B4E8415" uuidl1 = "75EA36BE-F71B-40F9-81F9-CF59BF40CA8F" uuidl2 = "CDAF464F-9C77-4F56-A7A6-98E4ED9903D6" @inlineCallbacks def populate(self): # Need to bypass normal validation inside the store yield populateCalendarsFrom(self.requirements, self.storeUnderTest()) self.notifierFactory.reset() now = DateTime.getToday() now.setDay(1) now.offsetMonth(2) nowYear = now.getYear() nowMonth = now.getMonth() class CalVerifyMismatchTestsNonRecurring(CalVerifyMismatchTestsBase): """ Tests calverify for iCalendar mismatch problems for non-recurring events. """ # Organizer has event, attendees do not MISSING_ATTENDEE_1_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:MISSING_ATTENDEE_ICS DTEND:%(year)s%(month)02d07T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:%(year)s%(month)02d07T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 ATTENDEE:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Attendees have event, organizer does not MISSING_ORGANIZER_2_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:MISSING_ORGANIZER_ICS DTEND:%(year)s%(month)02d07T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:%(year)s%(month)02d07T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 
ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 ATTENDEE:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} MISSING_ORGANIZER_3_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:MISSING_ORGANIZER_ICS DTEND:%(year)s%(month)02d07T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:%(year)s%(month)02d07T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 ATTENDEE:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Attendee partstat mismatch MISMATCH_ATTENDEE_1_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:MISMATCH_ATTENDEE_ICS DTEND:%(year)s%(month)02d07T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:%(year)s%(month)02d07T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} MISMATCH_ATTENDEE_2_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:MISMATCH_ATTENDEE_ICS DTEND:%(year)s%(month)02d07T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:%(year)s%(month)02d07T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} MISMATCH_ATTENDEE_3_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:MISMATCH_ATTENDEE_ICS DTEND:%(year)s%(month)02d07T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:%(year)s%(month)02d07T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Attendee events outside time range MISMATCH2_ATTENDEE_1_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:MISMATCH2_ATTENDEE_ICS DTEND:%(year)s%(month)02d07T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:%(year)s%(month)02d07T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} MISMATCH2_ATTENDEE_2_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:MISMATCH2_ATTENDEE_ICS DTEND:%(year)s%(month)02d07T151500Z TRANSP:OPAQUE SUMMARY:Ancient 
event DTSTART:%(year)s%(month)02d07T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} MISMATCH2_ATTENDEE_3_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:MISMATCH2_ATTENDEE_ICS DTEND:%(year)s%(month)02d07T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:%(year)s%(month)02d07T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Organizer event outside time range MISMATCH_ORGANIZER_1_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:MISMATCH_ORGANIZER_ICS DTEND:%(year)s%(month)02d07T151500Z TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:%(year)s%(month)02d07T111500Z DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92 ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear - 1, "month": nowMonth} MISMATCH_ORGANIZER_2_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN 
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    # Attendee uuid3 has the event with a different organizer
    MISMATCH3_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH3_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    MISMATCH3_ATTENDEE_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH3_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    MISMATCH3_ATTENDEE_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH3_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    MISMATCH_ORGANIZER_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    # Attendee uuid3 has an event they are not invited to
    MISMATCH2_ORGANIZER_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    MISMATCH2_ORGANIZER_2_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    MISMATCH2_ORGANIZER_3_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH2_ORGANIZER_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=DECLINED:urn:x-uid:47B16BB4-DB5F-4BF6-85FE-A7DA54230F92
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:AC478592-7783-44D1-B2AE-52359B4E8415
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    requirements = {
        CalVerifyMismatchTestsBase.uuid1: {
            "calendar": {
                "missing_attendee.ics": (MISSING_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched_attendee.ics": (MISMATCH_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched2_attendee.ics": (MISMATCH2_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched3_attendee.ics": (MISMATCH3_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched_organizer.ics": (MISMATCH_ORGANIZER_1_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched2_organizer.ics": (MISMATCH2_ORGANIZER_1_ICS, CalVerifyMismatchTestsBase.metadata,),
            },
            "inbox": {},
        },
        CalVerifyMismatchTestsBase.uuid2: {
            "calendar": {
                "mismatched_attendee.ics": (MISMATCH_ATTENDEE_2_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched2_attendee.ics": (MISMATCH2_ATTENDEE_2_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched3_attendee.ics": (MISMATCH3_ATTENDEE_2_ICS, CalVerifyMismatchTestsBase.metadata,),
                "missing_organizer.ics": (MISSING_ORGANIZER_2_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched_organizer.ics": (MISMATCH_ORGANIZER_2_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched2_organizer.ics": (MISMATCH2_ORGANIZER_2_ICS, CalVerifyMismatchTestsBase.metadata,),
            },
            "inbox": {},
        },
        CalVerifyMismatchTestsBase.uuid3: {
            "calendar": {
                "mismatched_attendee.ics": (MISMATCH_ATTENDEE_3_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched3_attendee.ics": (MISMATCH3_ATTENDEE_3_ICS, CalVerifyMismatchTestsBase.metadata,),
                "missing_organizer.ics": (MISSING_ORGANIZER_3_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched2_organizer.ics": (MISMATCH2_ORGANIZER_3_ICS, CalVerifyMismatchTestsBase.metadata,),
            },
            "calendar2": {
                "mismatched_organizer.ics": (MISMATCH_ORGANIZER_3_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched2_attendee.ics": (MISMATCH2_ATTENDEE_3_ICS, CalVerifyMismatchTestsBase.metadata,),
            },
            "inbox": {},
        },
    }

    @inlineCallbacks
    def setUp(self):
        yield super(CalVerifyMismatchTestsNonRecurring, self).setUp()
        home = (yield self.homeUnderTest(name=self.uuid3))
        calendar = (yield self.calendarUnderTest(name="calendar2", home=self.uuid3))
        yield home.setDefaultCalendar(calendar, "VEVENT")
        yield self.commit()

    @inlineCallbacks
    def test_scanMismatchOnly(self):
        """
        CalVerifyService.doScan without fix for mismatches. Make sure it detects
        as much as it can. Make sure sync-token is not changed.
        """
        sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
        sync_token_old2 = (yield (yield self.calendarUnderTest(home=self.uuid2, name="calendar")).syncToken())
        sync_token_old3 = (yield (yield self.calendarUnderTest(home=self.uuid3, name="calendar")).syncToken())
        yield self.commit()

        options = {
            "ical": False,
            "badcua": False,
            "mismatch": True,
            "nobase64": False,
            "fix": False,
            "verbose": False,
            "details": False,
            "uid": "",
            "uuid": "",
            "tzid": "",
            "start": DateTime(nowYear, 1, 1, 0, 0, 0),
        }
        output = StringIO()
        calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
        yield calverify.doAction()

        self.assertEqual(calverify.results["Number of events to process"], 17)
        self.assertEqual(calverify.results["Missing Attendee"], set((
            ("MISSING_ATTENDEE_ICS", self.uuid1, self.uuid2,),
            ("MISSING_ATTENDEE_ICS", self.uuid1, self.uuid3,),
        )))
        self.assertEqual(calverify.results["Mismatch Attendee"], set((
            ("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuid2,),
            ("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuid3,),
            ("MISMATCH2_ATTENDEE_ICS", self.uuid1, self.uuid2,),
            ("MISMATCH2_ATTENDEE_ICS", self.uuid1, self.uuid3,),
            ("MISMATCH3_ATTENDEE_ICS", self.uuid1, self.uuid3,),
        )))
        self.assertEqual(calverify.results["Missing Organizer"], set((
            ("MISSING_ORGANIZER_ICS", self.uuid2, self.uuid1,),
            ("MISSING_ORGANIZER_ICS", self.uuid3, self.uuid1,),
        )))
        self.assertEqual(calverify.results["Mismatch Organizer"], set((
            ("MISMATCH_ORGANIZER_ICS", self.uuid2, self.uuid1,),
            ("MISMATCH_ORGANIZER_ICS", self.uuid3, self.uuid1,),
            ("MISMATCH2_ORGANIZER_ICS", self.uuid3, self.uuid1,),
        )))

        self.assertTrue("Fix change event" not in calverify.results)
        self.assertTrue("Fix add event" not in calverify.results)
        self.assertTrue("Fix add inbox" not in calverify.results)
        self.assertTrue("Fix remove" not in calverify.results)
        self.assertTrue("Fix failures" not in calverify.results)
        self.assertTrue("Auto-Accepts" not in calverify.results)

        sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
        sync_token_new2 = (yield (yield self.calendarUnderTest(home=self.uuid2, name="calendar")).syncToken())
        sync_token_new3 = (yield (yield self.calendarUnderTest(home=self.uuid3, name="calendar")).syncToken())
        self.assertEqual(sync_token_old1, sync_token_new1)
        self.assertEqual(sync_token_old2, sync_token_new2)
        self.assertEqual(sync_token_old3, sync_token_new3)

    @inlineCallbacks
    def test_fixMismatch(self):
        """
        CalVerifyService.doScan with fix for mismatches. Make sure it detects
        and fixes as much as it can. Make sure the sync-tokens of the fixed
        calendars change.
        """
        sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
        sync_token_old2 = (yield (yield self.calendarUnderTest(home=self.uuid2, name="calendar")).syncToken())
        sync_token_old3 = (yield (yield self.calendarUnderTest(home=self.uuid3, name="calendar")).syncToken())
        yield self.commit()

        options = {
            "ical": False,
            "badcua": False,
            "mismatch": True,
            "nobase64": False,
            "fix": True,
            "verbose": False,
            "details": False,
            "uid": "",
            "uuid": "",
            "tzid": "",
            "start": DateTime(nowYear, 1, 1, 0, 0, 0),
        }
        output = StringIO()
        calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
        yield calverify.doAction()

        self.assertEqual(calverify.results["Number of events to process"], 17)
        self.assertEqual(calverify.results["Missing Attendee"], set((
            ("MISSING_ATTENDEE_ICS", self.uuid1, self.uuid2,),
            ("MISSING_ATTENDEE_ICS", self.uuid1, self.uuid3,),
        )))
        self.assertEqual(calverify.results["Mismatch Attendee"], set((
            ("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuid2,),
            ("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuid3,),
            ("MISMATCH2_ATTENDEE_ICS", self.uuid1, self.uuid2,),
            ("MISMATCH2_ATTENDEE_ICS", self.uuid1, self.uuid3,),
            ("MISMATCH3_ATTENDEE_ICS", self.uuid1, self.uuid3,),
        )))
        self.assertEqual(calverify.results["Missing Organizer"], set((
            ("MISSING_ORGANIZER_ICS", self.uuid2, self.uuid1,),
            ("MISSING_ORGANIZER_ICS", self.uuid3, self.uuid1,),
        )))
        self.assertEqual(calverify.results["Mismatch Organizer"], set((
            ("MISMATCH_ORGANIZER_ICS", self.uuid2, self.uuid1,),
            ("MISMATCH_ORGANIZER_ICS", self.uuid3, self.uuid1,),
            ("MISMATCH2_ORGANIZER_ICS", self.uuid3, self.uuid1,),
        )))

        self.assertEqual(calverify.results["Fix change event"], set((
            (self.uuid2, "calendar", "MISMATCH_ATTENDEE_ICS",),
            (self.uuid3, "calendar", "MISMATCH_ATTENDEE_ICS",),
            (self.uuid2, "calendar", "MISMATCH2_ATTENDEE_ICS",),
            (self.uuid3, "calendar2", "MISMATCH2_ATTENDEE_ICS",),
            (self.uuid3, "calendar", "MISMATCH3_ATTENDEE_ICS",),
            (self.uuid2, "calendar", "MISMATCH_ORGANIZER_ICS",),
            (self.uuid3, "calendar2", "MISMATCH_ORGANIZER_ICS",),
        )))
        self.assertEqual(calverify.results["Fix add event"], set((
            (self.uuid2, "calendar", "MISSING_ATTENDEE_ICS",),
            (self.uuid3, "calendar2", "MISSING_ATTENDEE_ICS",),
        )))
        self.assertEqual(calverify.results["Fix add inbox"], set((
            (self.uuid2, "MISSING_ATTENDEE_ICS",),
            (self.uuid3, "MISSING_ATTENDEE_ICS",),
            (self.uuid2, "MISMATCH_ATTENDEE_ICS",),
            (self.uuid3, "MISMATCH_ATTENDEE_ICS",),
            (self.uuid2, "MISMATCH2_ATTENDEE_ICS",),
            (self.uuid3, "MISMATCH2_ATTENDEE_ICS",),
            (self.uuid3, "MISMATCH3_ATTENDEE_ICS",),
            (self.uuid2, "MISMATCH_ORGANIZER_ICS",),
            (self.uuid3, "MISMATCH_ORGANIZER_ICS",),
        )))
        self.assertEqual(calverify.results["Fix remove"], set((
            (self.uuid2, "calendar", "missing_organizer.ics",),
            (self.uuid3, "calendar", "missing_organizer.ics",),
            (self.uuid3, "calendar", "mismatched2_organizer.ics",),
        )))
        obj = yield self.calendarObjectUnderTest(home=self.uuid2, calendar_name="calendar", name="missing_organizer.ics")
        self.assertEqual(obj, None)
        obj = yield self.calendarObjectUnderTest(home=self.uuid3, calendar_name="calendar", name="missing_organizer.ics")
        self.assertEqual(obj, None)
        obj = yield self.calendarObjectUnderTest(home=self.uuid3, calendar_name="calendar", name="mismatched2_organizer.ics")
        self.assertEqual(obj, None)

        self.assertEqual(calverify.results["Fix failures"], 0)
        self.assertEqual(calverify.results["Auto-Accepts"], [])

        sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
        sync_token_new2 = (yield (yield self.calendarUnderTest(home=self.uuid2, name="calendar")).syncToken())
        sync_token_new3 = (yield (yield self.calendarUnderTest(home=self.uuid3, name="calendar")).syncToken())
        self.assertEqual(sync_token_old1, sync_token_new1)
        self.assertNotEqual(sync_token_old2, sync_token_new2)
        self.assertNotEqual(sync_token_old3, sync_token_new3)

        # Re-scan after changes to make sure there are no errors
        yield self.commit()
        options["fix"] = False
        calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
        yield calverify.doAction()

        self.assertEqual(calverify.results["Number of events to process"], 14)
        self.assertTrue("Missing Attendee" not in calverify.results)
        self.assertTrue("Mismatch Attendee" not in calverify.results)
        self.assertTrue("Missing Organizer" not in calverify.results)
        self.assertTrue("Mismatch Organizer" not in calverify.results)
        self.assertTrue("Fix add event" not in calverify.results)
        self.assertTrue("Fix add inbox" not in calverify.results)
        self.assertTrue("Fix remove" not in calverify.results)
        self.assertTrue("Fix failures" not in calverify.results)
        self.assertTrue("Auto-Accepts" not in calverify.results)


class CalVerifyMismatchTestsAutoAccept(CalVerifyMismatchTestsBase):
    """
    Tests calverify for iCalendar mismatch problems for auto-accept attendees.
    """

    # Organizer has the event, attendee does not
    MISSING_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISSING_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    # Attendee partstat mismatch
    MISMATCH_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    MISMATCH_ATTENDEE_L1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    requirements = {
        CalVerifyMismatchTestsBase.uuid1: {
            "calendar": {
                "missing_attendee.ics": (MISSING_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched_attendee.ics": (MISMATCH_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
            },
            "inbox": {},
        },
        CalVerifyMismatchTestsBase.uuid2: {
            "calendar": {},
            "inbox": {},
        },
        CalVerifyMismatchTestsBase.uuid3: {
            "calendar": {},
            "inbox": {},
        },
        CalVerifyMismatchTestsBase.uuidl1: {
            "calendar": {
                "mismatched_attendee.ics": (MISMATCH_ATTENDEE_L1_ICS, CalVerifyMismatchTestsBase.metadata,),
            },
            "inbox": {},
        },
    }

    @inlineCallbacks
    def test_scanMismatchOnly(self):
        """
        CalVerifyService.doScan without fix for mismatches. Make sure it detects
        as much as it can. Make sure sync-token is not changed.
        """
        sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
        sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
        yield self.commit()

        options = {
            "ical": False,
            "badcua": False,
            "mismatch": True,
            "nobase64": False,
            "fix": False,
            "verbose": False,
            "details": False,
            "uid": "",
            "uuid": "",
            "tzid": "",
            "start": DateTime(nowYear, 1, 1, 0, 0, 0),
        }
        output = StringIO()
        calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
        yield calverify.doAction()

        self.assertEqual(calverify.results["Number of events to process"], 3)
        self.assertEqual(calverify.results["Missing Attendee"], set((
            ("MISSING_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
        )))
        self.assertEqual(calverify.results["Mismatch Attendee"], set((
            ("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
        )))
        self.assertTrue("Missing Organizer" not in calverify.results)
        self.assertTrue("Mismatch Organizer" not in calverify.results)

        self.assertTrue("Fix change event" not in calverify.results)
        self.assertTrue("Fix add event" not in calverify.results)
        self.assertTrue("Fix add inbox" not in calverify.results)
        self.assertTrue("Fix remove" not in calverify.results)
        self.assertTrue("Fix failures" not in calverify.results)
        self.assertTrue("Auto-Accepts" not in calverify.results)

        sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
        sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
        self.assertEqual(sync_token_old1, sync_token_new1)
        self.assertEqual(sync_token_oldl1, sync_token_newl1)

    @inlineCallbacks
    def test_fixMismatch(self):
        """
        CalVerifyService.doScan with fix for mismatches. Make sure it detects
        and fixes as much as it can. Make sure the sync-token of the fixed
        calendar changes.
        """
        sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
        sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
        yield self.commit()

        options = {
            "ical": False,
            "badcua": False,
            "mismatch": True,
            "nobase64": False,
            "fix": True,
            "verbose": False,
            "details": False,
            "uid": "",
            "uuid": "",
            "tzid": "",
            "start": DateTime(nowYear, 1, 1, 0, 0, 0),
        }
        output = StringIO()
        calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
        yield calverify.doAction()

        self.assertEqual(calverify.results["Number of events to process"], 3)
        self.assertEqual(calverify.results["Missing Attendee"], set((
            ("MISSING_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
        )))
        self.assertEqual(calverify.results["Mismatch Attendee"], set((
            ("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
        )))
        self.assertTrue("Missing Organizer" not in calverify.results)
        self.assertTrue("Mismatch Organizer" not in calverify.results)

        self.assertEqual(calverify.results["Fix change event"], set((
            (self.uuidl1, "calendar", "MISMATCH_ATTENDEE_ICS",),
        )))
        self.assertEqual(calverify.results["Fix add event"], set((
            (self.uuidl1, "calendar", "MISSING_ATTENDEE_ICS",),
        )))
        self.assertEqual(calverify.results["Fix add inbox"], set((
            (self.uuidl1, "MISSING_ATTENDEE_ICS",),
            (self.uuidl1, "MISMATCH_ATTENDEE_ICS",),
        )))
        self.assertTrue("Fix remove" not in calverify.results)
        self.assertEqual(calverify.results["Fix failures"], 0)
        testResults = sorted(calverify.results["Auto-Accepts"], key=lambda x: x["uid"])
        self.assertEqual(testResults[0]["path"], "/calendars/__uids__/%s/calendar/mismatched_attendee.ics" % self.uuidl1)
        self.assertEqual(testResults[0]["uid"], "MISMATCH_ATTENDEE_ICS")
        self.assertEqual(testResults[0]["start"].getText()[:8], "%(year)s%(month)02d07" % {"year": nowYear, "month": nowMonth})
        self.assertEqual(testResults[1]["uid"], "MISSING_ATTENDEE_ICS")
        self.assertEqual(testResults[1]["start"].getText()[:8], "%(year)s%(month)02d07" % {"year": nowYear, "month": nowMonth})

        sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
        sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
        self.assertEqual(sync_token_old1, sync_token_new1)
        self.assertNotEqual(sync_token_oldl1, sync_token_newl1)

        # Re-scan after changes to make sure there are no errors
        yield self.commit()
        options["fix"] = False
        calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
        yield calverify.doAction()

        self.assertEqual(calverify.results["Number of events to process"], 4)
        self.assertTrue("Missing Attendee" not in calverify.results)
        self.assertTrue("Mismatch Attendee" not in calverify.results)
        self.assertTrue("Missing Organizer" not in calverify.results)
        self.assertTrue("Mismatch Organizer" not in calverify.results)
        self.assertTrue("Fix add event" not in calverify.results)
        self.assertTrue("Fix add inbox" not in calverify.results)
        self.assertTrue("Fix remove" not in calverify.results)
        self.assertTrue("Fix failures" not in calverify.results)
        self.assertTrue("Auto-Accepts" not in calverify.results)


class CalVerifyMismatchTestsUUID(CalVerifyMismatchTestsBase):
    """
    Tests calverify for iCalendar mismatch problems when the scan is
    restricted to a single home UUID.
    """

    # Organizer has the event, attendee does not
    MISSING_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISSING_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    # Attendee partstat mismatch
    MISMATCH_ATTENDEE_1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=NEEDS-ACTION:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    MISMATCH_ATTENDEE_L1_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VEVENT
CREATED:20100303T181216Z
UID:MISMATCH_ATTENDEE_ICS
DTEND:%(year)s%(month)02d07T151500Z
TRANSP:OPAQUE
SUMMARY:Ancient event
DTSTART:%(year)s%(month)02d07T111500Z
DTSTAMP:20100303T181220Z
SEQUENCE:2
ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0
ATTENDEE;PARTSTAT=ACCEPTED:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth}

    requirements = {
        CalVerifyMismatchTestsBase.uuid1: {
            "calendar": {
                "missing_attendee.ics": (MISSING_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
                "mismatched_attendee.ics": (MISMATCH_ATTENDEE_1_ICS, CalVerifyMismatchTestsBase.metadata,),
            },
            "inbox": {},
        },
        CalVerifyMismatchTestsBase.uuid2: {
            "calendar": {},
            "inbox": {},
        },
        CalVerifyMismatchTestsBase.uuid3: {
            "calendar": {},
            "inbox": {},
        },
        CalVerifyMismatchTestsBase.uuidl1: {
            "calendar": {
                "mismatched_attendee.ics": (MISMATCH_ATTENDEE_L1_ICS, CalVerifyMismatchTestsBase.metadata,),
            },
            "inbox": {},
        },
    }

    @inlineCallbacks
    def test_scanMismatchOnly(self):
        """
        CalVerifyService.doScan without fix for mismatches. Make sure it detects
        as much as it can. Make sure sync-token is not changed.
        """
        sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
        sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
        yield self.commit()

        options = {
            "ical": False,
            "badcua": False,
            "mismatch": True,
            "nobase64": False,
            "fix": False,
            "verbose": False,
            "details": False,
            "uid": "",
            "uuid": CalVerifyMismatchTestsBase.uuidl1,
            "tzid": "",
            "start": DateTime(nowYear, 1, 1, 0, 0, 0),
        }
        output = StringIO()
        calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
        yield calverify.doAction()

        self.assertEqual(calverify.results["Number of events to process"], 2)
        self.assertTrue("Missing Attendee" not in calverify.results)
        self.assertEqual(calverify.results["Mismatch Attendee"], set((
            ("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
        )))
        self.assertTrue("Missing Organizer" not in calverify.results)
        self.assertTrue("Mismatch Organizer" not in calverify.results)

        self.assertTrue("Fix change event" not in calverify.results)
        self.assertTrue("Fix add event" not in calverify.results)
        self.assertTrue("Fix add inbox" not in calverify.results)
        self.assertTrue("Fix remove" not in calverify.results)
        self.assertTrue("Fix failures" not in calverify.results)
        self.assertTrue("Auto-Accepts" not in calverify.results)
        sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
        sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
        self.assertEqual(sync_token_old1, sync_token_new1)
        self.assertEqual(sync_token_oldl1, sync_token_newl1)

    @inlineCallbacks
    def test_fixMismatch(self):
        """
        CalVerifyService.doScan with fix for mismatches. Make sure it detects
        and fixes as much as it can. Make sure the sync-token of the fixed
        calendar changes.
        """
        sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken())
        sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken())
        yield self.commit()

        options = {
            "ical": False,
            "badcua": False,
            "mismatch": True,
            "nobase64": False,
            "fix": True,
            "verbose": False,
            "details": False,
            "uid": "",
            "uuid": CalVerifyMismatchTestsBase.uuidl1,
            "tzid": "",
            "start": DateTime(nowYear, 1, 1, 0, 0, 0),
        }
        output = StringIO()
        calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config)
        yield calverify.doAction()

        self.assertEqual(calverify.results["Number of events to process"], 2)
        self.assertTrue("Missing Attendee" not in calverify.results)
        self.assertEqual(calverify.results["Mismatch Attendee"], set((
            ("MISMATCH_ATTENDEE_ICS", self.uuid1, self.uuidl1,),
        )))
        self.assertTrue("Missing Organizer" not in calverify.results)
        self.assertTrue("Mismatch Organizer" not in calverify.results)

        self.assertEqual(calverify.results["Fix change event"], set((
            (self.uuidl1, "calendar", "MISMATCH_ATTENDEE_ICS",),
        )))
        self.assertTrue("Fix add event" not in calverify.results)
        self.assertEqual(calverify.results["Fix add inbox"], set((
            (self.uuidl1, "MISMATCH_ATTENDEE_ICS",),
        )))
        self.assertTrue("Fix remove" not in calverify.results)
        self.assertEqual(calverify.results["Fix failures"], 0)
        testResults = sorted(calverify.results["Auto-Accepts"], key=lambda x: x["uid"])
self.assertEqual(testResults[0]["path"], "/calendars/__uids__/%s/calendar/mismatched_attendee.ics" % self.uuidl1) self.assertEqual(testResults[0]["uid"], "MISMATCH_ATTENDEE_ICS") self.assertEqual(testResults[0]["start"].getText()[:8], "%(year)s%(month)02d07" % {"year": nowYear, "month": nowMonth}) sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken()) sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) self.assertEqual(sync_token_old1, sync_token_new1) self.assertNotEqual(sync_token_oldl1, sync_token_newl1) # Re-scan after changes to make sure there are no errors yield self.commit() options["fix"] = False options["uuid"] = CalVerifyMismatchTestsBase.uuidl1 calverify = SchedulingMismatchService(self._sqlCalendarStore, options, output, reactor, config) yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], 2) self.assertTrue("Missing Attendee" not in calverify.results) self.assertTrue("Mismatch Attendee" not in calverify.results) self.assertTrue("Missing Organizer" not in calverify.results) self.assertTrue("Mismatch Organizer" not in calverify.results) self.assertTrue("Fix add event" not in calverify.results) self.assertTrue("Fix add inbox" not in calverify.results) self.assertTrue("Fix remove" not in calverify.results) self.assertTrue("Fix failures" not in calverify.results) self.assertTrue("Auto-Accepts" not in calverify.results) class CalVerifyDoubleBooked(CalVerifyMismatchTestsBase): """ Tests calverify for double-bookings. 
""" # No overlap INVITE_NO_OVERLAP_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP_ICS TRANSP:OPAQUE SUMMARY:INVITE_NO_OVERLAP_ICS DTSTART:%(year)s%(month)02d07T100000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Two overlapping INVITE_NO_OVERLAP1_1_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP1_1_ICS TRANSP:OPAQUE SUMMARY:INVITE_NO_OVERLAP1_1_ICS DTSTART:%(year)s%(month)02d07T110000Z DURATION:PT2H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} INVITE_NO_OVERLAP1_2_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP1_2_ICS TRANSP:OPAQUE SUMMARY:INVITE_NO_OVERLAP1_2_ICS DTSTART:%(year)s%(month)02d07T120000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Two overlapping with one transparent INVITE_NO_OVERLAP2_1_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP2_1_ICS TRANSP:OPAQUE 
SUMMARY:INVITE_NO_OVERLAP2_1_ICS DTSTART:%(year)s%(month)02d07T140000Z DURATION:PT2H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} INVITE_NO_OVERLAP2_2_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP2_2_ICS SUMMARY:INVITE_NO_OVERLAP2_2_ICS DTSTART:%(year)s%(month)02d07T150000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F TRANSP:TRANSPARENT END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Two overlapping with one cancelled INVITE_NO_OVERLAP3_1_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP3_1_ICS TRANSP:OPAQUE SUMMARY:Ancient event DTSTART:%(year)s%(month)02d07T170000Z DURATION:PT2H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} INVITE_NO_OVERLAP3_2_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP3_2_ICS SUMMARY:INVITE_NO_OVERLAP3_2_ICS DTSTART:%(year)s%(month)02d07T180000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 
ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F STATUS:CANCELLED END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Two overlapping recurring INVITE_NO_OVERLAP4_1_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP4_1_ICS TRANSP:OPAQUE SUMMARY:INVITE_NO_OVERLAP4_1_ICS DTSTART:%(year)s%(month)02d08T120000Z DURATION:PT2H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F RRULE:FREQ=DAILY;COUNT=3 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} INVITE_NO_OVERLAP4_2_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP4_2_ICS SUMMARY:INVITE_NO_OVERLAP4_2_ICS DTSTART:%(year)s%(month)02d09T120000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F RRULE:FREQ=DAILY;COUNT=2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Two overlapping on one recurrence instance INVITE_NO_OVERLAP5_1_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP5_1_ICS TRANSP:OPAQUE SUMMARY:INVITE_NO_OVERLAP5_1_ICS DTSTART:%(year)s%(month)02d12T120000Z DURATION:PT2H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F RRULE:FREQ=DAILY;COUNT=3 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": 
nowYear, "month": nowMonth} INVITE_NO_OVERLAP5_2_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP5_2_ICS SUMMARY:INVITE_NO_OVERLAP5_2_ICS DTSTART:%(year)s%(month)02d13T140000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F RRULE:FREQ=DAILY;COUNT=2 END:VEVENT BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP5_2_ICS SUMMARY:INVITE_NO_OVERLAP5_2_ICS RECURRENCE-ID:%(year)s%(month)02d14T140000Z DTSTART:%(year)s%(month)02d14T130000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Two not overlapping - one all-day INVITE_NO_OVERLAP6_1_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:America/Los_Angeles BEGIN:DAYLIGHT DTSTART:20070311T020000 RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3 TZNAME:PDT TZOFFSETFROM:-0800 TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:STANDARD DTSTART:20071104T020000 RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11 TZNAME:PST TZOFFSETFROM:-0700 TZOFFSETTO:-0800 END:STANDARD END:VTIMEZONE BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP6_1_ICS TRANSP:OPAQUE SUMMARY:INVITE_NO_OVERLAP6_1_ICS DTSTART;TZID=America/Los_Angeles:%(year)s%(month)02d20T200000 DURATION:PT2H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} 
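# The fixture definitions above all follow one pattern: a triple-quoted
# iCalendar template written with plain "\n" endings, normalized to the CRLF
# line delimiters RFC 5545 requires, then %-substituted so DTSTART always
# falls in the current year/month. A minimal standalone sketch of that
# pattern (the helper name and template here are illustrative only, not part
# of the test module):

```python
def make_ics(body, year, month):
    """Normalize newlines to CRLF and substitute the date parameters."""
    return body.replace("\n", "\r\n") % {"year": year, "month": month}

ICS_TEMPLATE = """BEGIN:VCALENDAR
VERSION:2.0
BEGIN:VEVENT
UID:EXAMPLE_ICS
DTSTART:%(year)s%(month)02d07T100000Z
DURATION:PT1H
END:VEVENT
END:VCALENDAR
"""

ics = make_ics(ICS_TEMPLATE, 2024, 3)
# "%(month)02d" zero-pads the month, so March 7th renders as 20240307.
assert "DTSTART:20240307T100000Z" in ics
assert "\r\n" in ics
```

# Doing the CRLF replacement before the %-substitution keeps the templates
# readable in source while still producing wire-format iCalendar data.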
INVITE_NO_OVERLAP6_2_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP6_2_ICS TRANSP:OPAQUE SUMMARY:INVITE_NO_OVERLAP6_2_ICS DTSTART;VALUE=DATE:%(year)s%(month)02d21 DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Two overlapping - same organizer and summary INVITE_NO_OVERLAP7_1_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP7_1_ICS TRANSP:OPAQUE SUMMARY:INVITE_NO_OVERLAP7_1_ICS DTSTART:%(year)s%(month)02d23T110000Z DURATION:PT2H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} INVITE_NO_OVERLAP7_2_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_OVERLAP7_2_ICS TRANSP:OPAQUE SUMMARY:INVITE_NO_OVERLAP7_1_ICS DTSTART:%(year)s%(month)02d23T120000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} allEvents = { "invite1.ics": (INVITE_NO_OVERLAP_ICS, CalVerifyMismatchTestsBase.metadata,), "invite2.ics": (INVITE_NO_OVERLAP1_1_ICS, CalVerifyMismatchTestsBase.metadata,), "invite3.ics": (INVITE_NO_OVERLAP1_2_ICS, 
CalVerifyMismatchTestsBase.metadata,), "invite4.ics": (INVITE_NO_OVERLAP2_1_ICS, CalVerifyMismatchTestsBase.metadata,), "invite5.ics": (INVITE_NO_OVERLAP2_2_ICS, CalVerifyMismatchTestsBase.metadata,), "invite6.ics": (INVITE_NO_OVERLAP3_1_ICS, CalVerifyMismatchTestsBase.metadata,), "invite7.ics": (INVITE_NO_OVERLAP3_2_ICS, CalVerifyMismatchTestsBase.metadata,), "invite8.ics": (INVITE_NO_OVERLAP4_1_ICS, CalVerifyMismatchTestsBase.metadata,), "invite9.ics": (INVITE_NO_OVERLAP4_2_ICS, CalVerifyMismatchTestsBase.metadata,), "invite10.ics": (INVITE_NO_OVERLAP5_1_ICS, CalVerifyMismatchTestsBase.metadata,), "invite11.ics": (INVITE_NO_OVERLAP5_2_ICS, CalVerifyMismatchTestsBase.metadata,), "invite12.ics": (INVITE_NO_OVERLAP6_1_ICS, CalVerifyMismatchTestsBase.metadata,), "invite13.ics": (INVITE_NO_OVERLAP6_2_ICS, CalVerifyMismatchTestsBase.metadata,), "invite14.ics": (INVITE_NO_OVERLAP7_1_ICS, CalVerifyMismatchTestsBase.metadata,), "invite15.ics": (INVITE_NO_OVERLAP7_2_ICS, CalVerifyMismatchTestsBase.metadata,), } requirements = { CalVerifyMismatchTestsBase.uuid1: { "calendar": allEvents, "inbox": {}, }, CalVerifyMismatchTestsBase.uuid2: { "calendar": {}, "inbox": {}, }, CalVerifyMismatchTestsBase.uuid3: { "calendar": {}, "inbox": {}, }, CalVerifyMismatchTestsBase.uuidl1: { "calendar": allEvents, "inbox": {}, }, } @inlineCallbacks def test_scanDoubleBookingOnly(self): """ CalVerifyService.doScan without fix for mismatches. Make sure it detects as much as it can. Make sure sync-token is not changed. 
""" sync_token_old1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken()) sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) yield self.commit() options = { "ical": False, "badcua": False, "mismatch": False, "nobase64": False, "double": True, "fix": False, "verbose": False, "details": False, "summary": False, "days": 365, "uid": "", "uuid": self.uuidl1, "tzid": "utc", "start": DateTime(nowYear, 1, 1, 0, 0, 0), } output = StringIO() calverify = DoubleBookingService(self._sqlCalendarStore, options, output, reactor, config) yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"])) self.assertEqual( [(sorted((i.uid1, i.uid2,)), str(i.start),) for i in calverify.results["Double-bookings"]], [ (["INVITE_NO_OVERLAP1_1_ICS", "INVITE_NO_OVERLAP1_2_ICS"], "%(year)s%(month)02d07T120000Z" % {"year": nowYear, "month": nowMonth}), (["INVITE_NO_OVERLAP4_1_ICS", "INVITE_NO_OVERLAP4_2_ICS"], "%(year)s%(month)02d09T120000Z" % {"year": nowYear, "month": nowMonth}), (["INVITE_NO_OVERLAP4_1_ICS", "INVITE_NO_OVERLAP4_2_ICS"], "%(year)s%(month)02d10T120000Z" % {"year": nowYear, "month": nowMonth}), (["INVITE_NO_OVERLAP5_1_ICS", "INVITE_NO_OVERLAP5_2_ICS"], "%(year)s%(month)02d14T130000Z" % {"year": nowYear, "month": nowMonth}), ], ) self.assertEqual(calverify.results["Number of double-bookings"], 4) self.assertEqual(calverify.results["Number of unique double-bookings"], 3) sync_token_new1 = (yield (yield self.calendarUnderTest(home=self.uuid1, name="calendar")).syncToken()) sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) self.assertEqual(sync_token_old1, sync_token_new1) self.assertEqual(sync_token_oldl1, sync_token_newl1) class CalVerifyDarkPurge(CalVerifyMismatchTestsBase): """ Tests calverify for events. 
""" # No organizer INVITE_NO_ORGANIZER_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_ORGANIZER_ICS TRANSP:OPAQUE SUMMARY:INVITE_NO_ORGANIZER_ICS DTSTART:%(year)s%(month)02d07T100000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Valid organizer INVITE_VALID_ORGANIZER_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_VALID_ORGANIZER_ICS TRANSP:OPAQUE SUMMARY:INVITE_VALID_ORGANIZER_ICS DTSTART:%(year)s%(month)02d08T100000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Invalid organizer #1 INVITE_INVALID_ORGANIZER_1_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_INVALID_ORGANIZER_1_ICS TRANSP:OPAQUE SUMMARY:INVITE_INVALID_ORGANIZER_1_ICS DTSTART:%(year)s%(month)02d09T100000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0-1 ATTENDEE:urn:x-uid:D46F3D71-04B7-43C2-A7B6-6F92F92E61D0-1 ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} # Invalid organizer #2 INVITE_INVALID_ORGANIZER_2_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_INVALID_ORGANIZER_2_ICS TRANSP:OPAQUE SUMMARY:INVITE_INVALID_ORGANIZER_2_ICS DTSTART:%(year)s%(month)02d10T100000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 
ORGANIZER:mailto:foobar@example.com ATTENDEE:mailto:foobar@example.com ATTENDEE:urn:x-uid:75EA36BE-F71B-40F9-81F9-CF59BF40CA8F END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": nowYear, "month": nowMonth} allEvents = { "invite1.ics": (INVITE_NO_ORGANIZER_ICS, CalVerifyMismatchTestsBase.metadata,), "invite2.ics": (INVITE_VALID_ORGANIZER_ICS, CalVerifyMismatchTestsBase.metadata,), "invite3.ics": (INVITE_INVALID_ORGANIZER_1_ICS, CalVerifyMismatchTestsBase.metadata,), "invite4.ics": (INVITE_INVALID_ORGANIZER_2_ICS, CalVerifyMismatchTestsBase.metadata,), } requirements = { CalVerifyMismatchTestsBase.uuid1: { "calendar": {}, "inbox": {}, }, CalVerifyMismatchTestsBase.uuid2: { "calendar": {}, "inbox": {}, }, CalVerifyMismatchTestsBase.uuid3: { "calendar": {}, "inbox": {}, }, CalVerifyMismatchTestsBase.uuidl1: { "calendar": allEvents, "inbox": {}, }, } @inlineCallbacks def test_scanDarkEvents(self): """ CalVerifyService.doScan without fix for dark events. Make sure it detects as much as it can. Make sure sync-token is not changed. 
""" sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) yield self.commit() options = { "ical": False, "badcua": False, "mismatch": False, "nobase64": False, "double": True, "dark-purge": False, "fix": False, "verbose": False, "details": False, "summary": False, "days": 365, "uid": "", "uuid": self.uuidl1, "tzid": "utc", "start": DateTime(nowYear, 1, 1, 0, 0, 0), "no-organizer": False, "invalid-organizer": False, "disabled-organizer": False, } output = StringIO() calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config) yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"])) self.assertEqual( sorted([i.uid for i in calverify.results["Dark Events"]]), ["INVITE_INVALID_ORGANIZER_1_ICS", "INVITE_INVALID_ORGANIZER_2_ICS", ] ) self.assertEqual(calverify.results["Number of dark events"], 2) self.assertTrue("Fix dark events" not in calverify.results) self.assertTrue("Fix remove" not in calverify.results) sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) self.assertEqual(sync_token_oldl1, sync_token_newl1) @inlineCallbacks def test_fixDarkEvents(self): """ CalVerifyService.doScan with fix for dark events. Make sure it detects as much as it can. Make sure sync-token is changed. 
""" sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) yield self.commit() options = { "ical": False, "badcua": False, "mismatch": False, "nobase64": False, "double": True, "dark-purge": False, "fix": True, "verbose": False, "details": False, "summary": False, "days": 365, "uid": "", "uuid": self.uuidl1, "tzid": "utc", "start": DateTime(nowYear, 1, 1, 0, 0, 0), "no-organizer": False, "invalid-organizer": False, "disabled-organizer": False, } output = StringIO() calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config) yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"])) self.assertEqual( sorted([i.uid for i in calverify.results["Dark Events"]]), ["INVITE_INVALID_ORGANIZER_1_ICS", "INVITE_INVALID_ORGANIZER_2_ICS", ] ) self.assertEqual(calverify.results["Number of dark events"], 2) self.assertEqual(calverify.results["Fix dark events"], 2) self.assertTrue("Fix remove" in calverify.results) sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) self.assertNotEqual(sync_token_oldl1, sync_token_newl1) # Re-scan after changes to make sure there are no errors yield self.commit() options["fix"] = False options["uuid"] = self.uuidl1 calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config) yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], 2) self.assertEqual(len(calverify.results["Dark Events"]), 0) self.assertTrue("Fix dark events" not in calverify.results) self.assertTrue("Fix remove" not in calverify.results) @inlineCallbacks def test_fixDarkEventsNoOrganizerOnly(self): """ CalVerifyService.doScan with fix for dark events. Make sure it detects as much as it can. Make sure sync-token is changed. 
""" sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) yield self.commit() options = { "ical": False, "badcua": False, "mismatch": False, "nobase64": False, "double": True, "dark-purge": False, "fix": True, "verbose": False, "details": False, "summary": False, "days": 365, "uid": "", "uuid": self.uuidl1, "tzid": "utc", "start": DateTime(nowYear, 1, 1, 0, 0, 0), "no-organizer": True, "invalid-organizer": False, "disabled-organizer": False, } output = StringIO() calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config) yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"])) self.assertEqual( sorted([i.uid for i in calverify.results["Dark Events"]]), ["INVITE_NO_ORGANIZER_ICS", ] ) self.assertEqual(calverify.results["Number of dark events"], 1) self.assertEqual(calverify.results["Fix dark events"], 1) self.assertTrue("Fix remove" in calverify.results) sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) self.assertNotEqual(sync_token_oldl1, sync_token_newl1) # Re-scan after changes to make sure there are no errors yield self.commit() options["fix"] = False options["uuid"] = self.uuidl1 calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config) yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], 3) self.assertEqual(len(calverify.results["Dark Events"]), 0) self.assertTrue("Fix dark events" not in calverify.results) self.assertTrue("Fix remove" not in calverify.results) @inlineCallbacks def test_fixDarkEventsAllTypes(self): """ CalVerifyService.doScan with fix for dark events. Make sure it detects as much as it can. Make sure sync-token is changed. 
""" sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) yield self.commit() options = { "ical": False, "badcua": False, "mismatch": False, "nobase64": False, "double": True, "dark-purge": False, "fix": True, "verbose": False, "details": False, "summary": False, "days": 365, "uid": "", "uuid": self.uuidl1, "tzid": "utc", "start": DateTime(nowYear, 1, 1, 0, 0, 0), "no-organizer": True, "invalid-organizer": True, "disabled-organizer": True, } output = StringIO() calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config) yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"])) self.assertEqual( sorted([i.uid for i in calverify.results["Dark Events"]]), ["INVITE_INVALID_ORGANIZER_1_ICS", "INVITE_INVALID_ORGANIZER_2_ICS", "INVITE_NO_ORGANIZER_ICS", ] ) self.assertEqual(calverify.results["Number of dark events"], 3) self.assertEqual(calverify.results["Fix dark events"], 3) self.assertTrue("Fix remove" in calverify.results) sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) self.assertNotEqual(sync_token_oldl1, sync_token_newl1) # Re-scan after changes to make sure there are no errors yield self.commit() options["fix"] = False options["uuid"] = self.uuidl1 calverify = DarkPurgeService(self._sqlCalendarStore, options, output, reactor, config) yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], 1) self.assertEqual(len(calverify.results["Dark Events"]), 0) self.assertTrue("Fix dark events" not in calverify.results) self.assertTrue("Fix remove" not in calverify.results) class CalVerifyEventPurge(CalVerifyMismatchTestsBase): """ Tests calverify for events. 
""" # No organizer NO_ORGANIZER_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVITE_NO_ORGANIZER_ICS TRANSP:OPAQUE SUMMARY:INVITE_NO_ORGANIZER_ICS DTSTART:%(now_fwd10)s DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # Valid organizer VALID_ORGANIZER_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN BEGIN:VEVENT UID:INVITE_VALID_ORGANIZER_ICS DTSTART:%(now)s DURATION:PT1H ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s ORGANIZER:urn:x-uid:%(uuid1)s RRULE:FREQ=DAILY SUMMARY:INVITE_VALID_ORGANIZER_ICS END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # Valid attendee VALID_ATTENDEE_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN BEGIN:VEVENT UID:INVITE_VALID_ORGANIZER_ICS DTSTART:%(now)s DURATION:PT1H ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s ORGANIZER:urn:x-uid:%(uuid1)s RRULE:FREQ=DAILY SUMMARY:INVITE_VALID_ORGANIZER_ICS END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # Valid organizer VALID_ORGANIZER_FUTURE_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN BEGIN:VEVENT UID:INVITE_VALID_ORGANIZER_ICS DTSTART:%(now_fwd11)s DURATION:PT1H ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s ORGANIZER:urn:x-uid:%(uuid1)s RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s RRULE:FREQ=DAILY SEQUENCE:1 SUMMARY:INVITE_VALID_ORGANIZER_ICS END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # Valid attendee VALID_ATTENDEE_FUTURE_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN BEGIN:VEVENT UID:INVITE_VALID_ORGANIZER_ICS DTSTART:%(now_fwd11)s DURATION:PT1H ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s ORGANIZER:urn:x-uid:%(uuid1)s 
RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s RRULE:FREQ=DAILY SEQUENCE:1 SUMMARY:INVITE_VALID_ORGANIZER_ICS END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # Valid organizer VALID_ORGANIZER_PAST_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN BEGIN:VEVENT UID:%(uid)s DTSTART:%(now)s DURATION:PT1H ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s ORGANIZER:urn:x-uid:%(uuid1)s RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s RRULE:FREQ=DAILY;UNTIL=%(now_fwd11_1)s SEQUENCE:1 SUMMARY:INVITE_VALID_ORGANIZER_ICS END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # Valid attendee VALID_ATTENDEE_PAST_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN BEGIN:VEVENT UID:%(uid)s DTSTART:%(now)s DURATION:PT1H ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s ORGANIZER:urn:x-uid:%(uuid1)s RELATED-TO;RELTYPE=X-CALENDARSERVER-RECURRENCE-SET:%(relID)s RRULE:FREQ=DAILY;UNTIL=%(now_fwd11_1)s SEQUENCE:1 SUMMARY:INVITE_VALID_ORGANIZER_ICS END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # Valid organizer VALID_ORGANIZER_OVERRIDE_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN BEGIN:VEVENT UID:VALID_ORGANIZER_OVERRIDE_ICS DTSTART:%(now)s DURATION:PT1H ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s ORGANIZER:urn:x-uid:%(uuid1)s RRULE:FREQ=DAILY SUMMARY:INVITE_VALID_ORGANIZER_ICS END:VEVENT BEGIN:VEVENT UID:VALID_ORGANIZER_OVERRIDE_ICS RECURRENCE-ID:%(now_fwd11)s DTSTART:%(now_fwd11)s DURATION:PT2H ATTENDEE;PARTSTAT=ACCPETED:urn:x-uid:%(uuid1)s ATTENDEE;RSVP=TRUE:urn:x-uid:%(uuid2)s ORGANIZER:urn:x-uid:%(uuid1)s RRULE:FREQ=DAILY SUMMARY:INVITE_VALID_ORGANIZER_ICS END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") @inlineCallbacks def setUp(self): self.subs = { "uuid1": CalVerifyMismatchTestsBase.uuid1, "uuid2": CalVerifyMismatchTestsBase.uuid2, } self.now = 
DateTime.getNowUTC() self.now.setHHMMSS(0, 0, 0) self.subs["now"] = self.now for i in range(30): attrname = "now_back%s" % (i + 1,) setattr(self, attrname, self.now.duplicate()) getattr(self, attrname).offsetDay(-(i + 1)) self.subs[attrname] = getattr(self, attrname) attrname_12h = "now_back%s_12h" % (i + 1,) setattr(self, attrname_12h, getattr(self, attrname).duplicate()) getattr(self, attrname_12h).offsetHours(12) self.subs[attrname_12h] = getattr(self, attrname_12h) attrname_1 = "now_back%s_1" % (i + 1,) setattr(self, attrname_1, getattr(self, attrname).duplicate()) getattr(self, attrname_1).offsetSeconds(-1) self.subs[attrname_1] = getattr(self, attrname_1) for i in range(30): attrname = "now_fwd%s" % (i + 1,) setattr(self, attrname, self.now.duplicate()) getattr(self, attrname).offsetDay(i + 1) self.subs[attrname] = getattr(self, attrname) attrname_12h = "now_fwd%s_12h" % (i + 1,) setattr(self, attrname_12h, getattr(self, attrname).duplicate()) getattr(self, attrname_12h).offsetHours(12) self.subs[attrname_12h] = getattr(self, attrname_12h) attrname_1 = "now_fwd%s_1" % (i + 1,) setattr(self, attrname_1, getattr(self, attrname).duplicate()) getattr(self, attrname_1).offsetSeconds(-1) self.subs[attrname_1] = getattr(self, attrname_1) self.requirements = { CalVerifyMismatchTestsBase.uuid1: { "calendar": { "invite1.ics": (self.NO_ORGANIZER_ICS % self.subs, CalVerifyMismatchTestsBase.metadata,), "invite2.ics": (self.VALID_ORGANIZER_ICS % self.subs, CalVerifyMismatchTestsBase.metadata,), "invite3.ics": (self.VALID_ORGANIZER_OVERRIDE_ICS % self.subs, CalVerifyMismatchTestsBase.metadata,), }, "inbox": {}, }, CalVerifyMismatchTestsBase.uuid2: { "calendar": { "invite2a.ics": (self.VALID_ATTENDEE_ICS % self.subs, CalVerifyMismatchTestsBase.metadata,), }, "inbox": {}, }, CalVerifyMismatchTestsBase.uuid3: { "calendar": {}, "inbox": {}, }, CalVerifyMismatchTestsBase.uuidl1: { "calendar": {}, "inbox": {}, }, } yield super(CalVerifyEventPurge, self).setUp() @inlineCallbacks 
def test_validSplit(self): """ EventSplitService.doAction for a valid split. Make sure the old and new events contain the expected data for both the organizer and attendee copies. """ options = { "nuke": False, "missing": False, "ical": False, "mismatch": False, "double": True, "dark-purge": False, "split": True, "path": "/calendars/__uids__/%(uuid1)s/calendar/invite2.ics" % self.subs, "rid": "%(now_fwd11)s" % self.subs, "summary": False, } output = StringIO() calverify = EventSplitService(self._sqlCalendarStore, options, output, reactor, config) oldUID, oldRelatedTo = yield calverify.doAction() relsubs = dict(self.subs) relsubs["uid"] = oldUID relsubs["relID"] = oldRelatedTo calendar = yield self.calendarUnderTest(home=CalVerifyMismatchTestsBase.uuid1, name="calendar") objs = yield calendar.listObjectResources() self.assertEqual(len(objs), 4) self.assertTrue("invite2.ics" in objs) oldName = filter(lambda x: not x.startswith("invite"), objs)[0] obj1 = yield calendar.objectResourceWithName("invite2.ics") ical1 = yield obj1.component() self.assertEqual(normalize_iCalStr(ical1), self.VALID_ORGANIZER_FUTURE_ICS % relsubs) obj2 = yield calendar.objectResourceWithName(oldName) ical2 = yield obj2.component() self.assertEqual(normalize_iCalStr(ical2), self.VALID_ORGANIZER_PAST_ICS % relsubs) calendar = yield self.calendarUnderTest(home=CalVerifyMismatchTestsBase.uuid2, name="calendar") objs = yield calendar.listObjectResources() self.assertEqual(len(objs), 2) self.assertTrue("invite2a.ics" in objs) oldName = filter(lambda x: not x.startswith("invite"), objs)[0] obj1 = yield calendar.objectResourceWithName("invite2a.ics") ical1 = yield obj1.component() self.assertEqual(normalize_iCalStr(ical1), self.VALID_ATTENDEE_FUTURE_ICS % relsubs) obj2 = yield calendar.objectResourceWithName(oldName) ical2 = yield obj2.component() self.assertEqual(normalize_iCalStr(ical2), self.VALID_ATTENDEE_PAST_ICS % relsubs) @inlineCallbacks def test_summary(self): """ EventSplitService.doAction with the summary option only. Make sure the list of instances is output with the requested split point marked. """ options = { "nuke": False, "missing": False, "ical": False, "mismatch": False, "double": True, "dark-purge": False, "split": True, "path": "/calendars/__uids__/%(uuid1)s/calendar/invite3.ics" % self.subs, "rid": "%(now_fwd11)s" % self.subs, "summary": True, } output = StringIO() calverify = EventSplitService(self._sqlCalendarStore, options, output, reactor, config) yield calverify.doAction() result = output.getvalue().splitlines() self.assertTrue("%(now)s" % self.subs in result) self.assertTrue("%(now_fwd10)s" % self.subs in result) self.assertTrue("%(now_fwd11)s *" % self.subs in result) self.assertTrue("%(now_fwd12)s" % self.subs in result) class CalVerifyMissingLocations(CalVerifyMismatchTestsBase): """ Tests calverify for missing locations. """ subs = { "year": nowYear, "month": nowMonth, "uuid1": CalVerifyMismatchTestsBase.uuid1, "uuid2": CalVerifyMismatchTestsBase.uuid2, "uuid3": CalVerifyMismatchTestsBase.uuid3, "uuidl1": CalVerifyMismatchTestsBase.uuidl1, "uuidl2": CalVerifyMismatchTestsBase.uuidl2, } # Valid event VALID_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:VALID_ICS TRANSP:OPAQUE SUMMARY:VALID_ICS DTSTART:%(year)s%(month)02d08T100000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:%(uuid1)s ATTENDEE:urn:x-uid:%(uuid1)s ATTENDEE:urn:x-uid:%(uuid2)s ATTENDEE:urn:x-uid:%(uuidl1)s LOCATION:Room 01 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % subs # Valid event VALID_MULTI_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:VALID_MULTI_ICS TRANSP:OPAQUE SUMMARY:VALID_MULTI_ICS DTSTART:%(year)s%(month)02d08T100000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:%(uuid1)s ATTENDEE:urn:x-uid:%(uuid1)s ATTENDEE:urn:x-uid:%(uuid2)s
ATTENDEE:urn:x-uid:%(uuidl1)s ATTENDEE:urn:x-uid:%(uuidl2)s LOCATION:Room 01\\;Room 02 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % subs # Invalid event INVALID_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVALID_ICS TRANSP:OPAQUE SUMMARY:INVALID_ICS DTSTART:%(year)s%(month)02d08T120000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:%(uuid1)s ATTENDEE:urn:x-uid:%(uuid1)s ATTENDEE:urn:x-uid:%(uuid2)s ATTENDEE:urn:x-uid:%(uuidl1)s END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % subs # Invalid event INVALID_MULTI_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:INVALID_MULTI_ICS TRANSP:OPAQUE SUMMARY:INVALID_MULTI_ICS DTSTART:%(year)s%(month)02d08T120000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:%(uuid1)s ATTENDEE:urn:x-uid:%(uuid1)s ATTENDEE:urn:x-uid:%(uuid2)s ATTENDEE:urn:x-uid:%(uuidl1)s ATTENDEE:urn:x-uid:%(uuidl2)s LOCATION:Room 02 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % subs # Room as organizer event ROOM_ORGANIZER_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:ROOM_ORGANIZER_ICS TRANSP:OPAQUE SUMMARY:ROOM_ORGANIZER_ICS DTSTART:%(year)s%(month)02d08T120000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:%(uuidl1)s ATTENDEE:urn:x-uid:%(uuidl1)s END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % subs # Room event without organizer ROOM_NO_ORGANIZER_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:ROOM_NO_ORGANIZER_ICS TRANSP:OPAQUE SUMMARY:ROOM_NO_ORGANIZER_ICS DTSTART:%(year)s%(month)02d08T120000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % subs # Invalid 
event with no organizer copy ROOM_MISSING_ORGANIZER_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20100303T181216Z UID:ROOM_MISSING_ORGANIZER_ICS TRANSP:OPAQUE SUMMARY:ROOM_MISSING_ORGANIZER_ICS DTSTART:%(year)s%(month)02d08T120000Z DURATION:PT1H DTSTAMP:20100303T181220Z SEQUENCE:2 ORGANIZER:urn:x-uid:%(uuid1)s ATTENDEE:urn:x-uid:%(uuid1)s ATTENDEE:urn:x-uid:%(uuid2)s ATTENDEE:urn:x-uid:%(uuidl1)s END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % subs allEvents = { "invite1.ics": (VALID_ICS, CalVerifyMismatchTestsBase.metadata,), "invite2.ics": (INVALID_ICS, CalVerifyMismatchTestsBase.metadata,), "invite3.ics": (VALID_MULTI_ICS, CalVerifyMismatchTestsBase.metadata,), "invite4.ics": (INVALID_MULTI_ICS, CalVerifyMismatchTestsBase.metadata,), } allEvents_Room1 = { "invite1.ics": (VALID_ICS, CalVerifyMismatchTestsBase.metadata,), "invite2.ics": (INVALID_ICS, CalVerifyMismatchTestsBase.metadata,), "invite3.ics": (VALID_MULTI_ICS, CalVerifyMismatchTestsBase.metadata,), "invite4.ics": (INVALID_MULTI_ICS, CalVerifyMismatchTestsBase.metadata,), "invite5.ics": (ROOM_ORGANIZER_ICS, CalVerifyMismatchTestsBase.metadata,), "invite6.ics": (ROOM_NO_ORGANIZER_ICS, CalVerifyMismatchTestsBase.metadata,), "invite7.ics": (ROOM_MISSING_ORGANIZER_ICS, CalVerifyMismatchTestsBase.metadata,), } allEvents_Room2 = { "invite3.ics": (VALID_MULTI_ICS, CalVerifyMismatchTestsBase.metadata,), "invite4.ics": (INVALID_MULTI_ICS, CalVerifyMismatchTestsBase.metadata,), } requirements = { CalVerifyMismatchTestsBase.uuid1: { "calendar": allEvents, "inbox": {}, }, CalVerifyMismatchTestsBase.uuid2: { "calendar": allEvents, "inbox": {}, }, CalVerifyMismatchTestsBase.uuid3: { "calendar": {}, "inbox": {}, }, CalVerifyMismatchTestsBase.uuidl1: { "calendar": allEvents_Room1, "inbox": {}, }, CalVerifyMismatchTestsBase.uuidl2: { "calendar": allEvents_Room2, "inbox": {}, }, } badEvents = set(("INVALID_ICS", "INVALID_MULTI_ICS",
"ROOM_MISSING_ORGANIZER_ICS", "ROOM_ORGANIZER_ICS",)) @inlineCallbacks def test_scanMissingLocations(self): """ MissingLocationService.doAction without fix for missing locations. Make sure it detects as much as it can. Make sure sync-token is not changed. """ sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) yield self.commit() options = { "ical": False, "badcua": False, "mismatch": False, "nobase64": False, "double": False, "dark-purge": False, "missing-location": True, "fix": False, "verbose": False, "details": False, "summary": False, "days": 365, "uid": "", "uuid": self.uuidl1, "tzid": "utc", "start": DateTime(nowYear, 1, 1, 0, 0, 0), "no-organizer": False, "invalid-organizer": False, "disabled-organizer": False, } output = StringIO() calverify = MissingLocationService(self._sqlCalendarStore, options, output, reactor, config) yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"])) self.assertEqual( set([i.uid for i in calverify.results["Bad Events"]]), self.badEvents, ) self.assertEqual(calverify.results["Number of bad events"], len(self.badEvents)) self.assertTrue("Fix bad events" not in calverify.results) sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) self.assertEqual(sync_token_oldl1, sync_token_newl1) @inlineCallbacks def test_fixMissingLocations(self): """ MissingLocationService.doAction with fix for missing locations. Make sure it detects as much as it can. Make sure sync-token is changed. 
""" # Make sure location is in each users event for uid in (self.uuid1, self.uuid2, self.uuidl1,): calobj = yield self.calendarObjectUnderTest(home=uid, calendar_name="calendar", name="invite2.ics") caldata = yield calobj.componentForUser() self.assertTrue("LOCATION:" not in str(caldata)) yield self.commit() sync_token_oldl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) yield self.commit() options = { "ical": False, "badcua": False, "mismatch": False, "nobase64": False, "double": False, "dark-purge": False, "missing-location": True, "fix": True, "verbose": False, "details": False, "summary": False, "days": 365, "uid": "", "uuid": self.uuidl1, "tzid": "utc", "start": DateTime(nowYear, 1, 1, 0, 0, 0), "no-organizer": False, "invalid-organizer": False, "disabled-organizer": False, } output = StringIO() calverify = MissingLocationService(self._sqlCalendarStore, options, output, reactor, config) yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"])) self.assertEqual( set([i.uid for i in calverify.results["Bad Events"]]), self.badEvents, ) self.assertEqual(calverify.results["Number of bad events"], len(self.badEvents)) self.assertEqual(calverify.results["Fix bad events"], len(self.badEvents)) sync_token_newl1 = (yield (yield self.calendarUnderTest(home=self.uuidl1, name="calendar")).syncToken()) self.assertNotEqual(sync_token_oldl1, sync_token_newl1) yield self.commit() # Wait for it to complete yield JobItem.waitEmpty(self._sqlCalendarStore.newTransaction, reactor, 60) # Re-scan after changes to make sure there are no errors options["fix"] = False options["uuid"] = self.uuidl1 calverify = MissingLocationService(self._sqlCalendarStore, options, output, reactor, config) yield calverify.doAction() self.assertEqual(calverify.results["Number of events to process"], 
len(self.requirements[CalVerifyMismatchTestsBase.uuidl1]["calendar"]) - 2) self.assertEqual(len(calverify.results["Bad Events"]), 0) self.assertTrue("Fix bad events" not in calverify.results) # Make sure location is in each users event for uid in (self.uuid1, self.uuid2, self.uuidl1,): calobj = yield self.calendarObjectUnderTest(home=uid, calendar_name="calendar", name="invite2.ics") caldata = yield calobj.componentForUser() self.assertTrue("LOCATION:" in str(caldata)) calobj = yield self.calendarObjectUnderTest(home=self.uuidl1, calendar_name="calendar", name="invite3.ics") self.assertTrue(calobj is not None) calobj = yield self.calendarObjectUnderTest(home=self.uuidl2, calendar_name="calendar", name="invite3.ics") self.assertTrue(calobj is not None) calobj = yield self.calendarObjectUnderTest(home=self.uuidl1, calendar_name="calendar", name="invite4.ics") self.assertTrue(calobj is None) calobj = yield self.calendarObjectUnderTest(home=self.uuidl2, calendar_name="calendar", name="invite4.ics") self.assertTrue(calobj is not None) calobj = yield self.calendarObjectUnderTest(home=self.uuidl1, calendar_name="calendar", name="invite5.ics") caldata = yield calobj.componentForUser() self.assertTrue("LOCATION:" in str(caldata)) calobj = yield self.calendarObjectUnderTest(home=self.uuidl1, calendar_name="calendar", name="invite6.ics") caldata = yield calobj.componentForUser() self.assertTrue("LOCATION:" not in str(caldata)) calobj = yield self.calendarObjectUnderTest(home=self.uuidl1, calendar_name="calendar", name="invite7.ics") self.assertTrue(calobj is None) yield self.commit() calendarserver-9.1+dfsg/calendarserver/tools/test/test_changeip.py000066400000000000000000000036351315003562600256200ustar00rootroot00000000000000## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from twistedcaldav.test.util import TestCase

from calendarserver.tools.changeip_calendar import updateConfig


class ChangeIPTestCase(TestCase):

    def test_updateConfig(self):

        plist = {
            "Untouched": "dont_change_me",
            "ServerHostName": "",
            "Scheduling": {
                "iMIP": {
                    "Receiving": {
                        "Server": "original_hostname",
                    },
                    "Sending": {
                        "Server": "original_hostname",
                        "Address": "user@original_hostname",
                    },
                },
            },
        }

        updateConfig(
            plist,
            "10.1.1.1", "10.1.1.2",
            "original_hostname", "new_hostname"
        )

        self.assertEquals(
            plist,
            {
                "Untouched": "dont_change_me",
                "ServerHostName": "",
                "Scheduling": {
                    "iMIP": {
                        "Receiving": {
                            "Server": "new_hostname",
                        },
                        "Sending": {
                            "Server": "new_hostname",
                            "Address": "user@new_hostname",
                        },
                    },
                },
            }
        )

# ---- calendarserver/tools/test/test_config.py ----

##
# Copyright (c) 2013-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from twistedcaldav.test.util import TestCase
from twistedcaldav.config import ConfigDict
from calendarserver.tools.config import (
    WritableConfig, setKeyPath, getKeyPath, flattenDictionary, processArgs)
from calendarserver.tools.test.test_gateway import RunCommandTestCase
from twisted.internet.defer import inlineCallbacks
from twisted.python.filepath import FilePath
from xml.parsers.expat import ExpatError
import plistlib

PREAMBLE = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
"""


class WritableConfigTestCase(TestCase):

    def setUp(self):
        self.configFile = self.mktemp()
        self.fp = FilePath(self.configFile)

    def test_readSuccessful(self):
        content = """<plist version="1.0">
<dict>
    <key>string</key>
    <string>foo</string>
</dict>
</plist>
"""
        self.fp.setContent(PREAMBLE + content)

        config = ConfigDict()
        writable = WritableConfig(config, self.configFile)
        writable.read()
        self.assertEquals(writable.currentConfigSubset, {"string": "foo"})

    def test_readInvalidXML(self):
        self.fp.setContent("invalid")
        config = ConfigDict()
        writable = WritableConfig(config, self.configFile)
        self.assertRaises(ExpatError, writable.read)

    def test_updates(self):
        content = """<plist version="1.0">
<dict>
    <key>key1</key>
    <string>before</string>
    <key>key2</key>
    <integer>10</integer>
</dict>
</plist>
"""
        self.fp.setContent(PREAMBLE + content)
        config = ConfigDict()
        writable = WritableConfig(config, self.configFile)
        writable.read()
        writable.set({"key1": "after"})
        writable.set({"key2": 15})
        writable.set({"key2": 20})  # override previous set
        writable.set({"key3": ["a", "b", "c"]})
        self.assertEquals(writable.currentConfigSubset, {"key1": "after", "key2": 20, "key3": ["a", "b", "c"]})
        writable.save()

        writable2 = WritableConfig(config, self.configFile)
        writable2.read()
        self.assertEquals(writable2.currentConfigSubset, {"key1": "after", "key2": 20, "key3": ["a", "b", "c"]})

    def test_convertToValue(self):
        self.assertEquals(True, WritableConfig.convertToValue("True"))
        self.assertEquals(False, WritableConfig.convertToValue("False"))
        self.assertEquals(1, WritableConfig.convertToValue("1"))
        self.assertEquals(1.2, WritableConfig.convertToValue("1.2"))
        self.assertEquals("xyzzy", WritableConfig.convertToValue("xyzzy"))
        self.assertEquals("xy.zzy", WritableConfig.convertToValue("xy.zzy"))

    def test_processArgs(self):
        """
        Ensure utf-8 encoded command line args are handled properly
        """
        content = """<plist version="1.0">
<dict>
    <key>key1</key>
    <string>before</string>
</dict>
</plist>
"""
        self.fp.setContent(PREAMBLE + content)
        config = ConfigDict()
        writable = WritableConfig(config, self.configFile)
        writable.read()
        processArgs(writable, ["key1=\xf0\x9f\x92\xa3"], restart=False)
        writable2 = WritableConfig(config, self.configFile)
        writable2.read()
        self.assertEquals(writable2.currentConfigSubset, {'key1': u'\U0001f4a3'})


class ConfigTestCase(RunCommandTestCase):

    @inlineCallbacks
    def test_readConfig(self):
        """
        Verify readConfig returns with only the writable keys
        """
        results = yield self.runCommand(
            command_readConfig,
            script="calendarserver_config")

        self.assertEquals(results["result"]["RedirectHTTPToHTTPS"], False)
        self.assertEquals(results["result"]["EnableSearchAddressBook"], False)
        self.assertEquals(results["result"]["EnableCalDAV"], True)
        self.assertEquals(results["result"]["EnableCardDAV"], True)
        self.assertEquals(results["result"]["EnableSSL"], False)
        self.assertEquals(results["result"]["DefaultLogLevel"], "warn")
        self.assertEquals(results["result"]["Notifications"]["Services"]["APNS"]["Enabled"], False)

        # Verify not all keys are present, such as umask which is not accessible
        self.assertFalse("umask" in results["result"])

    @inlineCallbacks
    def test_writeConfig(self):
        """
        Verify writeConfig updates the writable plist file only
        """
        results = yield self.runCommand(
            command_writeConfig,
            script="calendarserver_config")

        self.assertEquals(results["result"]["EnableCalDAV"], False)
        self.assertEquals(results["result"]["EnableCardDAV"], False)
        self.assertEquals(results["result"]["EnableSSL"], True)
        self.assertEquals(results["result"]["Notifications"]["Services"]["APNS"]["Enabled"], True)
        hostName = "hostname_%s_%s" % (unichr(208), u"\ud83d\udca3")
        self.assertTrue(results["result"]["ServerHostName"].endswith(hostName))

        # We tried to modify ServerRoot, but make sure our change did not take
        self.assertNotEquals(results["result"]["ServerRoot"], "/You/Shall/Not/Pass")

        # The static plist should still have EnableCalDAV = True
        staticPlist = plistlib.readPlist(self.configFileName)
        self.assertTrue(staticPlist["EnableCalDAV"])

    @inlineCallbacks
    def test_error(self):
        """
        Verify sending a bogus command returns an error
        """
        results = yield self.runCommand(
            command_bogusCommand,
            script="calendarserver_config")
        self.assertEquals(results["error"], "Unknown command 'bogus'")

    @inlineCallbacks
    def test_commandLineArgs(self):
        """
        Verify command line assignments work
        """

        # Check current values
        results = yield self.runCommand(
            command_readConfig,
            script="calendarserver_config")
        self.assertEquals(results["result"]["DefaultLogLevel"], "warn")
        self.assertEquals(results["result"]["Scheduling"]["iMIP"]["Enabled"], False)

        # Set a single-level key and a multi-level key
        results = yield self.runCommand(
            None,
            script="calendarserver_config",
            additionalArgs=["DefaultLogLevel=debug", "Scheduling.iMIP.Enabled=True"],
            parseOutput=False
        )
        results = yield self.runCommand(
            command_readConfig,
            script="calendarserver_config")
        self.assertEquals(results["result"]["DefaultLogLevel"], "debug")
        self.assertEquals(results["result"]["Scheduling"]["iMIP"]["Enabled"], True)
        self.assertEquals(results["result"]["LogLevels"], {})

        # Directly set some LogLevels sub-keys
        results = yield self.runCommand(
            None,
            script="calendarserver_config",
            additionalArgs=["LogLevels.testing=debug", "LogLevels.testing2=warn"],
            parseOutput=False
        )
        results = yield self.runCommand(
            command_readConfig,
            script="calendarserver_config")
        self.assertEquals(
            results["result"]["LogLevels"],
            {"testing": "debug", "testing2": "warn"}
        )

        # Test that setting additional sub-keys retains previous sub-keys
        results = yield self.runCommand(
            None,
            script="calendarserver_config",
            additionalArgs=["LogLevels.testing3=info", "LogLevels.testing4=warn"],
            parseOutput=False
        )
        results = yield self.runCommand(
            command_readConfig,
            script="calendarserver_config")
        self.assertEquals(
            results["result"]["LogLevels"],
            {"testing": "debug", "testing2": "warn", "testing3": "info", "testing4": "warn"}
        )

        # Test that an empty value deletes the key
        results = yield self.runCommand(
            None,
            script="calendarserver_config",
            additionalArgs=["LogLevels="],
            parseOutput=False
        )
        results = yield self.runCommand(
            command_readConfig,
            script="calendarserver_config")
        self.assertEquals(
            results["result"]["LogLevels"],  # now an empty dict
            {}
        )

    @inlineCallbacks
    def test_loggingCategories(self):
        """
        Verify logging categories get mapped to the correct python module names
        """

        # Check existing values
        results = yield self.runCommand(
            command_readConfig,
            script="calendarserver_config")
        self.assertEquals(results["result"]["LogLevels"], {})

        # Set to directory and imip
        results = yield self.runCommand(
            None,
            script="calendarserver_config",
            additionalArgs=["--logging=directory,imip"],
            parseOutput=False
        )
        results = yield self.runCommand(
            command_readConfig,
            script="calendarserver_config")
        self.assertEquals(
            results["result"]["LogLevels"],
            {
                "twext.who": "debug",
                "txdav.who": "debug",
                "txdav.caldav.datastore.scheduling.imip": "debug",
            }
        )

        # Set to imip, and the directory modules should go away
        results = yield self.runCommand(
            None,
            script="calendarserver_config",
            additionalArgs=["--logging=imip"],
            parseOutput=False
        )
        results = yield self.runCommand(
            command_readConfig,
            script="calendarserver_config")
        self.assertEquals(
            results["result"]["LogLevels"],
            {
                "txdav.caldav.datastore.scheduling.imip": "debug",
            }
        )

        # Set to default, and all modules should go away
        results = yield self.runCommand(
            None,
            script="calendarserver_config",
            additionalArgs=["--logging=default"],
            parseOutput=False
        )
        results = yield self.runCommand(
            command_readConfig,
            script="calendarserver_config")
        self.assertEquals(
            results["result"]["LogLevels"],
            {}
        )

    @inlineCallbacks
    def test_accountingCategories(self):
        """
        Verify accounting categories get mapped to the correct keys
        """

        # Check existing values
        results = yield self.runCommand(
            command_readConfig,
            script="calendarserver_config")
        self.assertEquals(
            results["result"]["AccountingCategories"],
            {
                'AutoScheduling': False,
                'HTTP': False,
                'Implicit Errors': False,
                'iSchedule': False,
                'xPod': False,
                'iTIP': False,
                'iTIP-VFREEBUSY': False,
                'Invalid Instance': False,
                'migration': False,
            }
        )

        # Set to http and itip
        results = yield self.runCommand(
            None,
            script="calendarserver_config",
            additionalArgs=["--accounting=http,itip"],
            parseOutput=False
        )
        results = yield self.runCommand(
            command_readConfig,
            script="calendarserver_config")
        self.assertEquals(
            results["result"]["AccountingCategories"],
            {
                'AutoScheduling': False,
                'HTTP': True,
                'Implicit Errors': False,
                'iSchedule': False,
                'xPod': False,
                'iTIP': True,
                'iTIP-VFREEBUSY': True,
                'Invalid Instance': False,
                'migration': False,
            }
        )

        # Set to off, so all are off
        results = yield self.runCommand(
            None,
            script="calendarserver_config",
            additionalArgs=["--accounting=off"],
            parseOutput=False
        )
        results = yield self.runCommand(
            command_readConfig,
            script="calendarserver_config")
        self.assertEquals(
            results["result"]["AccountingCategories"],
            {
                'AutoScheduling': False,
                'HTTP': False,
                'Implicit Errors': False,
                'iSchedule': False,
                'xPod': False,
                'iTIP': False,
                'iTIP-VFREEBUSY': False,
                'Invalid Instance': False,
                'migration': False,
            }
        )

    def test_keyPath(self):
        d = ConfigDict()
        setKeyPath(d, "one", "A")
        setKeyPath(d, "one", "B")
        setKeyPath(d, "two.one", "C")
        setKeyPath(d, "two.one", "D")
        setKeyPath(d, "two.two", "E")
        setKeyPath(d, "three.one.one", "F")
        setKeyPath(d, "three.one.two", "G")

        self.assertEquals(d.one, "B")
        self.assertEquals(d.two.one, "D")
        self.assertEquals(d.two.two, "E")
        self.assertEquals(d.three.one.one, "F")
        self.assertEquals(d.three.one.two, "G")

        self.assertEquals(getKeyPath(d, "one"), "B")
        self.assertEquals(getKeyPath(d, "two.one"), "D")
        self.assertEquals(getKeyPath(d, "two.two"), "E")
        self.assertEquals(getKeyPath(d, "three.one.one"), "F")
        self.assertEquals(getKeyPath(d, "three.one.two"), "G")

    def test_flattenDictionary(self):
        dictionary = {
            "one": "A",
            "two": {
                "one": "D",
                "two": "E",
            },
            "three": {
                "one": {
                    "one": "F",
                    "two": "G",
                },
            },
        }
        self.assertEquals(
            set(list(flattenDictionary(dictionary))),
            set([("one", "A"), ("three.one.one", "F"), ("three.one.two", "G"), ("two.one", "D"), ("two.two", "E")])
        )


command_readConfig = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>command</key>
    <string>readConfig</string>
</dict>
</plist>
"""

command_writeConfig = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>command</key>
    <string>writeConfig</string>
    <key>Values</key>
    <dict>
        <key>EnableCalDAV</key>
        <false/>
        <key>EnableCardDAV</key>
        <false/>
        <key>EnableSSL</key>
        <true/>
        <key>Notifications.Services.APNS.Enabled</key>
        <true/>
        <key>ServerRoot</key>
        <string>/You/Shall/Not/Pass</string>
        <key>ServerHostName</key>
        <string>hostname_%s_%s</string>
    </dict>
</dict>
</plist>
""" % (unichr(208), u"\ud83d\udca3")

command_bogusCommand = """<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>command</key>
    <string>bogus</string>
</dict>
</plist>
"""

# ---- calendarserver/tools/test/test_diagnose.py ----

##
# Copyright (c) 2014-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

from calendarserver.tools.diagnose import (
    runCommand, FileNotFound
)

from twisted.trial.unittest import TestCase


class DiagnoseTestCase(TestCase):

    def test_runCommand(self):
        code, stdout, stderr = runCommand(
            "/bin/ls", "-al", "/"
        )
        self.assertEquals(code, 0)
        self.assertEquals(stderr, "")
        self.assertTrue("total" in stdout)

    def test_runCommand_nonExistent(self):
        self.assertRaises(FileNotFound, runCommand, "/xyzzy/plugh/notthere")

# ---- calendarserver/tools/test/test_export.py ----

##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
Unit tests for L{calendarserver.tools.export}.
""" import sys from cStringIO import StringIO from twisted.trial.unittest import TestCase from twisted.internet.defer import inlineCallbacks from twisted.python.modules import getModule from twext.enterprise.ienterprise import AlreadyFinishedError from twistedcaldav.ical import Component from twistedcaldav.datafilters.test.test_peruserdata import dataForTwoUsers from twistedcaldav.datafilters.test.test_peruserdata import resultForUser2 from twistedcaldav.test.util import StoreTestCase from calendarserver.tools import export from calendarserver.tools.export import ExportOptions, main from calendarserver.tools.export import DirectoryExporter, UIDExporter from twisted.python.filepath import FilePath from twisted.internet.defer import Deferred from txdav.common.datastore.test.util import populateCalendarsFrom from calendarserver.tools.export import usage, exportToFile def holiday(uid): return ( getModule("twistedcaldav.test").filePath .sibling("data").child("Holidays").child(uid + ".ics") .getContent() ) def sample(name): return ( getModule("twistedcaldav.test").filePath .sibling("data").child(name + ".ics").getContent() ) valentines = holiday("C31854DA-1ED0-11D9-A5E0-000A958A3252") newYears = holiday("C3184A66-1ED0-11D9-A5E0-000A958A3252") payday = sample("PayDay") one = sample("OneEvent") another = sample("AnotherEvent") third = sample("ThirdEvent") class CommandLine(TestCase): """ Simple tests for command-line parsing. """ def test_usageMessage(self): """ The 'usage' message should print something to standard output (and nothing to standard error) and exit. 
""" orig = sys.stdout orige = sys.stderr try: out = sys.stdout = StringIO() err = sys.stderr = StringIO() self.assertRaises(SystemExit, usage) finally: sys.stdout = orig sys.stderr = orige self.assertEquals(len(out.getvalue()) > 0, True, "No output.") self.assertEquals(len(err.getvalue()), 0) def test_oneRecord(self): """ One '--record' option will result in a single L{DirectoryExporter} object with no calendars in its list. """ eo = ExportOptions() eo.parseOptions(["--record", "users:bob"]) self.assertEquals(len(eo.exporters), 1) exp = eo.exporters[0] self.assertIsInstance(exp, DirectoryExporter) self.assertEquals(exp.recordType, "users") self.assertEquals(exp.shortName, "bob") self.assertEquals(exp.collections, []) def test_oneUID(self): """ One '--uid' option will result in a single L{UIDExporter} object with no calendars in its list. """ eo = ExportOptions() eo.parseOptions(["--uid", "bob's your guid"]) self.assertEquals(len(eo.exporters), 1) exp = eo.exporters[0] self.assertIsInstance(exp, UIDExporter) self.assertEquals(exp.uid, "bob's your guid") def test_homeAndCollections(self): """ The --collection option adds specific calendars to the last home that was exported. """ eo = ExportOptions() eo.parseOptions(["--record", "users:bob", "--collection", "work stuff", "--uid", "jethroUID", "--collection=fun stuff"]) self.assertEquals(len(eo.exporters), 2) exp = eo.exporters[0] self.assertEquals(exp.recordType, "users") self.assertEquals(exp.shortName, "bob") self.assertEquals(exp.collections, ["work stuff"]) exp = eo.exporters[1] self.assertEquals(exp.uid, "jethroUID") self.assertEquals(exp.collections, ["fun stuff"]) def test_outputFileSelection(self): """ The --output option selects the file to write to, '-' or no parameter meaning stdout; the L{ExportOptions.openOutput} method returns that file. 
""" eo = ExportOptions() eo.parseOptions([]) self.assertIdentical(eo.openOutput(), sys.stdout) eo = ExportOptions() eo.parseOptions(["--output", "-"]) self.assertIdentical(eo.openOutput(), sys.stdout) eo = ExportOptions() tmpnam = self.mktemp() eo.parseOptions(["--output", tmpnam]) self.assertEquals(eo.openOutput().name, tmpnam) def test_outputFileError(self): """ If the output file cannot be opened for writing, an error will be displayed to the user on stderr. """ io = StringIO() systemExit = self.assertRaises( SystemExit, main, ['calendarserver_export', '--output', '/not/a/file'], io ) self.assertEquals(systemExit.code, 1) self.assertEquals( io.getvalue(), "Unable to open output file for writing: " "[Errno 2] No such file or directory: '/not/a/file'\n") class IntegrationTests(StoreTestCase): """ Tests for exporting data from a live store. """ @inlineCallbacks def setUp(self): """ Set up a store and fix the imported C{utilityMain} function (normally from L{calendarserver.tools.cmdline.utilityMain}) to point to a temporary method of this class. Also, patch the imported C{reactor}, since the SUT needs to call C{reactor.stop()} in order to work with L{utilityMain}. """ self.mainCalled = False self.patch(export, "utilityMain", self.fakeUtilityMain) yield super(IntegrationTests, self).setUp() self.store = self.storeUnderTest() self.waitToStop = Deferred() def stop(self): """ Emulate reactor.stop(), which the service must call when it is done with work. """ self.waitToStop.callback(None) def fakeUtilityMain(self, configFileName, serviceClass, reactor=None, verbose=False): """ Verify a few basic things. 
""" if self.mainCalled: raise RuntimeError( "Main called twice during this test; duplicate reactor run.") self.mainCalled = True self.usedConfigFile = configFileName self.usedReactor = reactor self.exportService = serviceClass(self.store) self.exportService.startService() self.addCleanup(self.exportService.stopService) @inlineCallbacks def test_serviceState(self): """ export.main() invokes utilityMain with the configuration file specified on the command line, and creates an L{ExporterService} pointed at the appropriate store. """ tempConfig = self.mktemp() main(['calendarserver_export', '--config', tempConfig, '--output', self.mktemp()], reactor=self) self.assertEquals(self.mainCalled, True, "Main not called.") self.assertEquals(self.usedConfigFile, tempConfig) self.assertEquals(self.usedReactor, self) self.assertEquals(self.exportService.store, self.store) yield self.waitToStop @inlineCallbacks def test_emptyCalendar(self): """ Exporting an empty calendar results in an empty calendar. """ io = StringIO() value = yield exportToFile([], io) # it doesn't return anything, it writes to the file. self.assertEquals(value, None) # but it should write a valid component to the file. self.assertEquals(Component.fromString(io.getvalue()), Component.newCalendar()) def txn(self): """ Create a new transaction and automatically clean it up when the test completes. """ aTransaction = self.store.newTransaction() @inlineCallbacks def maybeAbort(): try: yield aTransaction.abort() except AlreadyFinishedError: pass self.addCleanup(maybeAbort) return aTransaction @inlineCallbacks def test_oneEventCalendar(self): """ Exporting an calendar with one event in it will result in just that event. 
""" yield populateCalendarsFrom( { "user01": { "calendar1": { "valentines-day.ics": (valentines, {}) } } }, self.store ) expected = Component.newCalendar() [theComponent] = Component.fromString(valentines).subcomponents() expected.addComponent(theComponent) io = StringIO() yield exportToFile( [(yield self.txn().calendarHomeWithUID("user01")) .calendarWithName("calendar1")], io ) self.assertEquals(Component.fromString(io.getvalue()), expected) @inlineCallbacks def test_twoSimpleEvents(self): """ Exporting a calendar with two events in it will result in a VCALENDAR component with both VEVENTs in it. """ yield populateCalendarsFrom( { "user01": { "calendar1": { "valentines-day.ics": (valentines, {}), "new-years-day.ics": (newYears, {}) } } }, self.store ) expected = Component.newCalendar() a = Component.fromString(valentines) b = Component.fromString(newYears) for comp in a, b: for sub in comp.subcomponents(): expected.addComponent(sub) io = StringIO() yield exportToFile( [(yield self.txn().calendarHomeWithUID("user01")) .calendarWithName("calendar1")], io ) self.assertEquals(Component.fromString(io.getvalue()), expected) @inlineCallbacks def test_onlyOneVTIMEZONE(self): """ C{VTIMEZONE} subcomponents with matching TZIDs in multiple event calendar objects should only be rendered in the resulting output once. (Note that the code to suppor this is actually in PyCalendar, not the export tool itself.) 
""" yield populateCalendarsFrom( { "user01": { "calendar1": { "1.ics": (one, {}), # EST "2.ics": (another, {}), # EST "3.ics": (third, {}) # PST } } }, self.store ) io = StringIO() yield exportToFile( [(yield self.txn().calendarHomeWithUID("user01")) .calendarWithName("calendar1")], io ) result = Component.fromString(io.getvalue()) def filtered(name): for c in result.subcomponents(): if c.name() == name: yield c timezones = list(filtered("VTIMEZONE")) events = list(filtered("VEVENT")) # Sanity check to make sure we picked up all three events: self.assertEquals(len(events), 3) self.assertEquals(len(timezones), 2) self.assertEquals(set([tz.propertyValue("TZID") for tz in timezones]), # Use an intentionally wrong TZID in order to make # sure we don't depend on caching effects elsewhere. set(["America/New_Yrok", "US/Pacific"])) @inlineCallbacks def test_perUserFiltering(self): """ L{exportToFile} performs per-user component filtering based on the owner of that calendar. """ yield populateCalendarsFrom( { "user02": { "calendar1": { "peruser.ics": (dataForTwoUsers, {}), # EST } } }, self.store ) io = StringIO() yield exportToFile( [(yield self.txn().calendarHomeWithUID("user02")) .calendarWithName("calendar1")], io ) self.assertEquals( Component.fromString(resultForUser2), Component.fromString(io.getvalue()) ) @inlineCallbacks def test_full(self): """ Running C{calendarserver_export} on the command line exports an ics file. (Almost-full integration test, starting from the main point, using as few test fakes as possible.) Note: currently the only test for directory interaction. 
""" yield populateCalendarsFrom( { "user02": { # TODO: more direct test for skipping inbox "inbox": { "inbox-item.ics": (valentines, {}) }, "calendar1": { "peruser.ics": (dataForTwoUsers, {}), # EST } } }, self.store ) output = FilePath(self.mktemp()) main(['calendarserver_export', '--output', output.path, '--user', 'user02'], reactor=self) yield self.waitToStop self.assertEquals( Component.fromString(resultForUser2), Component.fromString(output.getContent()) ) calendarserver-9.1+dfsg/calendarserver/tools/test/test_gateway.py000066400000000000000000001021471315003562600255010ustar00rootroot00000000000000## # Copyright (c) 2005-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from __future__ import print_function import os from plistlib import readPlistFromString import plistlib import xml from twistedcaldav.stdconfig import config from twext.python.filepath import CachingFilePath as FilePath from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks, Deferred, returnValue from twisted.trial.unittest import TestCase from twistedcaldav import memcacher from twistedcaldav.memcacheclient import ClientFactory from twistedcaldav.test.util import CapturingProcessProtocol from txdav.common.datastore.test.util import ( theStoreBuilder, StubNotifierFactory ) from txdav.who.idirectory import AutoScheduleMode from txdav.who.util import directoryFromConfig class RunCommandTestCase(TestCase): @inlineCallbacks def setUp(self): self.serverRoot = self.mktemp() os.mkdir(self.serverRoot) self.absoluteServerRoot = os.path.abspath(self.serverRoot) configRoot = os.path.join(self.absoluteServerRoot, "Config") if not os.path.exists(configRoot): os.makedirs(configRoot) dataRoot = os.path.join(self.absoluteServerRoot, "Data") if not os.path.exists(dataRoot): os.makedirs(dataRoot) documentRoot = os.path.join(self.absoluteServerRoot, "Documents") if not os.path.exists(documentRoot): os.makedirs(documentRoot) logRoot = os.path.join(self.absoluteServerRoot, "Logs") if not os.path.exists(logRoot): os.makedirs(logRoot) runRoot = os.path.join(self.absoluteServerRoot, "Run") if not os.path.exists(runRoot): os.makedirs(runRoot) config.reset() testRoot = os.path.join(os.path.dirname(__file__), "gateway") templateName = os.path.join(testRoot, "caldavd.plist") with open(templateName) as templateFile: template = templateFile.read() databaseRoot = os.path.abspath("_spawned_scripts_db" + str(os.getpid())) newConfig = template % { "ServerRoot": self.absoluteServerRoot, "DataRoot": dataRoot, "DatabaseRoot": databaseRoot, "DocumentRoot": documentRoot, "ConfigRoot": configRoot, "LogRoot": logRoot, "RunRoot": runRoot, "WritablePlist": os.path.join( 
                os.path.abspath(configRoot), "caldavd-writable.plist"
            ),
        }
        configFilePath = FilePath(
            os.path.join(configRoot, "caldavd.plist")
        )
        configFilePath.setContent(newConfig)

        self.configFileName = configFilePath.path
        config.load(self.configFileName)

        config.Memcached.Pools.Default.ClientEnabled = False
        config.Memcached.Pools.Default.ServerEnabled = False
        ClientFactory.allowTestCache = True
        memcacher.Memcacher.allowTestCache = True
        memcacher.Memcacher.reset()
        config.DirectoryAddressBook.Enabled = False
        config.UsePackageTimezones = True

        origUsersFile = FilePath(
            os.path.join(
                os.path.dirname(__file__), "gateway", "users-groups.xml"
            )
        )
        copyUsersFile = FilePath(
            os.path.join(config.DataRoot, "accounts.xml")
        )
        origUsersFile.copyTo(copyUsersFile)

        origResourcesFile = FilePath(
            os.path.join(
                os.path.dirname(__file__), "gateway", "resources-locations.xml"
            )
        )
        copyResourcesFile = FilePath(
            os.path.join(config.DataRoot, "resources.xml")
        )
        origResourcesFile.copyTo(copyResourcesFile)

        origAugmentFile = FilePath(
            os.path.join(
                os.path.dirname(__file__), "gateway", "augments.xml"
            )
        )
        copyAugmentFile = FilePath(os.path.join(config.DataRoot, "augments.xml"))
        origAugmentFile.copyTo(copyAugmentFile)

        self.notifierFactory = StubNotifierFactory()
        self.store = yield theStoreBuilder.buildStore(self, self.notifierFactory)
        self.directory = directoryFromConfig(config, self.store)

    @inlineCallbacks
    def runCommand(
        self, command, error=False,
        script="calendarserver_command_gateway",
        additionalArgs=None, parseOutput=True
    ):
        """
        Run the given command by feeding it as standard input to
        calendarserver_command_gateway in a subprocess.
""" if isinstance(command, unicode): command = command.encode("utf-8") sourceRoot = os.path.dirname( os.path.dirname(os.path.dirname(os.path.dirname(__file__))) ) cmd = script # assumes it's on PATH args = [cmd, "-f", self.configFileName] if error: args.append("--error") if additionalArgs: args.extend(additionalArgs) cwd = sourceRoot deferred = Deferred() reactor.spawnProcess(CapturingProcessProtocol(deferred, command), cmd, args, env=os.environ, path=cwd) output = yield deferred if parseOutput: try: plist = readPlistFromString(output) returnValue(plist) except xml.parsers.expat.ExpatError, e: # @UndefinedVariable print("Error (%s) parsing (%s)" % (e, output)) raise else: returnValue(output) class GatewayTestCase(RunCommandTestCase): def _flush(self): # Flush both XML directories self.directory._directory.services[1].flush() self.directory._directory.services[2].flush() @inlineCallbacks def test_getLocationAndResourceList(self): results = yield self.runCommand(command_getLocationAndResourceList) self.assertEquals(len(results["result"]), 20) @inlineCallbacks def test_getLocationList(self): results = yield self.runCommand(command_getLocationList) self.assertEquals(len(results["result"]), 10) @inlineCallbacks def test_getLocationAttributes(self): yield self.runCommand(command_createLocation) # Tell the resources services to flush its cache and re-read XML self._flush() results = yield self.runCommand(command_getLocationAttributes) # self.assertEquals(results["result"]["Capacity"], "40") # self.assertEquals(results["result"]["Description"], "Test Description") self.assertEquals(results["result"]["RecordName"], ["createdlocation01"]) self.assertEquals( results["result"]["RealName"], "Created Location 01 %s %s" % (unichr(208), u"\ud83d\udca3")) # self.assertEquals(results["result"]["Comment"], "Test Comment") self.assertEquals(results["result"]["AutoScheduleMode"], u"acceptIfFree") self.assertEquals(results["result"]["AutoAcceptGroup"], 
"E5A6142C-4189-4E9E-90B0-9CD0268B314B") self.assertEquals(set(results["result"]["ReadProxies"]), set(['user03', 'user04'])) self.assertEquals(set(results["result"]["WriteProxies"]), set(['user05', 'user06'])) @inlineCallbacks def test_getResourceList(self): results = yield self.runCommand(command_getResourceList) self.assertEquals(len(results["result"]), 10) @inlineCallbacks def test_getResourceAttributes(self): yield self.runCommand(command_createResource) # Tell the resources services to flush its cache and re-read XML self._flush() results = yield self.runCommand(command_getResourceAttributes) # self.assertEquals(results["result"]["Comment"], "Test Comment") # self.assertEquals(results["result"]["Type"], "Computer") self.assertEquals(set(results["result"]["ReadProxies"]), set(['user03', 'user04'])) self.assertEquals(set(results["result"]["WriteProxies"]), set(['user05', 'user06'])) @inlineCallbacks def test_createAddress(self): record = yield self.directory.recordWithUID("C701069D-9CA1-4925-A1A9-5CD94767B74B") self.assertEquals(record, None) yield self.runCommand(command_createAddress) # Tell the resources services to flush its cache and re-read XML self._flush() record = yield self.directory.recordWithUID("C701069D-9CA1-4925-A1A9-5CD94767B74B") self.assertEquals( record.displayName, "Created Address 01 %s %s" % (unichr(208), u"\ud83d\udca3") ) self.assertEquals(record.abbreviatedName, "Addr1") self.assertEquals(record.streetAddress, "1 Infinite Loop\nCupertino, 95014\nCA") self.assertEquals(record.geographicLocation, "geo:37.331,-122.030") results = yield self.runCommand(command_getAddressList) self.assertEquals(len(results["result"]), 1) results = yield self.runCommand(command_getAddressAttributes) self.assertEquals(results["result"]["RealName"], u'Created Address 01 \xd0 \U0001f4a3') results = yield self.runCommand(command_setAddressAttributes) results = yield self.runCommand(command_getAddressAttributes) self.assertEquals(results["result"]["RealName"], 
                          u'Updated Address')
        self.assertEquals(results["result"]["StreetAddress"], u'Updated Street Address')
        self.assertEquals(results["result"]["GeographicLocation"], u'Updated Geo')

        results = yield self.runCommand(command_deleteAddress)

        results = yield self.runCommand(command_getAddressList)
        self.assertEquals(len(results["result"]), 0)

    @inlineCallbacks
    def test_createLocation(self):
        record = yield self.directory.recordWithUID("836B1B66-2E9A-4F46-8B1C-3DD6772C20B2")
        self.assertEquals(record, None)

        yield self.runCommand(command_createLocation)

        # Tell the resources services to flush their caches and re-read XML
        self._flush()

        record = yield self.directory.recordWithUID("836B1B66-2E9A-4F46-8B1C-3DD6772C20B2")
        self.assertEquals(
            record.fullNames[0],
            u"Created Location 01 %s %s" % (unichr(208), u"\ud83d\udca3"))
        self.assertNotEquals(record, None)
        # self.assertEquals(record.autoScheduleMode, "")
        self.assertEquals(record.floor, u"First")
        # self.assertEquals(record.extras["capacity"], "40")

        results = yield self.runCommand(command_getLocationAttributes)
        self.assertEquals(set(results["result"]["ReadProxies"]), set(['user03', 'user04']))
        self.assertEquals(set(results["result"]["WriteProxies"]), set(['user05', 'user06']))

    @inlineCallbacks
    def test_createLocationMinimal(self):
        """
        Ensure we can create a location with just the bare minimum of values,
        i.e.
RealName """ yield self.runCommand(command_createLocationMinimal) # Tell the resources services to flush its cache and re-read XML self._flush() records = yield self.directory.recordsWithRecordType(self.directory.recordType.location) for record in records: if record.displayName == u"Minimal": break else: self.fail("We did not find the Minimal record") @inlineCallbacks def test_setLocationAttributes(self): yield self.runCommand(command_createLocation) yield self.runCommand(command_setLocationAttributes) # Tell the resources services to flush its cache and re-read XML self._flush() record = yield self.directory.recordWithUID("836B1B66-2E9A-4F46-8B1C-3DD6772C20B2") # self.assertEquals(record.extras["comment"], "Updated Test Comment") self.assertEquals(record.floor, "Second") # self.assertEquals(record.extras["capacity"], "41") self.assertEquals(record.autoScheduleMode, AutoScheduleMode.acceptIfFree) self.assertEquals(record.autoAcceptGroup, "F5A6142C-4189-4E9E-90B0-9CD0268B314B") results = yield self.runCommand(command_getLocationAttributes) self.assertEquals(results["result"]["AutoScheduleMode"], "acceptIfFree") self.assertEquals(results["result"]["AutoAcceptGroup"], "F5A6142C-4189-4E9E-90B0-9CD0268B314B") self.assertEquals(set(results["result"]["ReadProxies"]), set(['user03'])) self.assertEquals(set(results["result"]["WriteProxies"]), set(['user05', 'user06', 'user07'])) @inlineCallbacks def test_setAddressOnLocation(self): yield self.runCommand(command_createLocation) yield self.runCommand(command_createAddress) yield self.runCommand(command_setAddressOnLocation) results = yield self.runCommand(command_getLocationAttributes) self.assertEquals(results["result"]["AssociatedAddress"], "C701069D-9CA1-4925-A1A9-5CD94767B74B") self._flush() record = yield self.directory.recordWithUID("836B1B66-2E9A-4F46-8B1C-3DD6772C20B2") self.assertEquals(record.associatedAddress, "C701069D-9CA1-4925-A1A9-5CD94767B74B") yield self.runCommand(command_removeAddressFromLocation) results = 
yield self.runCommand(command_getLocationAttributes) self.assertEquals(results["result"]["AssociatedAddress"], "") self._flush() record = yield self.directory.recordWithUID("836B1B66-2E9A-4F46-8B1C-3DD6772C20B2") self.assertEquals(record.associatedAddress, u"") @inlineCallbacks def test_destroyLocation(self): record = yield self.directory.recordWithUID("location01") self.assertNotEquals(record, None) yield self.runCommand(command_deleteLocation) # Tell the resources services to flush its cache and re-read XML self._flush() record = yield self.directory.recordWithUID("location01") self.assertEquals(record, None) @inlineCallbacks def test_createResource(self): record = yield self.directory.recordWithUID("AF575A61-CFA6-49E1-A0F6-B5662C9D9801") self.assertEquals(record, None) yield self.runCommand(command_createResource) # Tell the resources services to flush its cache and re-read XML self._flush() record = yield self.directory.recordWithUID("AF575A61-CFA6-49E1-A0F6-B5662C9D9801") self.assertNotEquals(record, None) @inlineCallbacks def test_setResourceAttributes(self): yield self.runCommand(command_createResource) # Tell the resources services to flush its cache and re-read XML self._flush() record = yield self.directory.recordWithUID("AF575A61-CFA6-49E1-A0F6-B5662C9D9801") self.assertEquals(record.displayName, "Laptop 1") yield self.runCommand(command_setResourceAttributes) # Tell the resources services to flush its cache and re-read XML self._flush() record = yield self.directory.recordWithUID("AF575A61-CFA6-49E1-A0F6-B5662C9D9801") self.assertEquals(record.displayName, "Updated Laptop 1") @inlineCallbacks def test_destroyResource(self): record = yield self.directory.recordWithUID("resource01") self.assertNotEquals(record, None) yield self.runCommand(command_deleteResource) # Tell the resources services to flush its cache and re-read XML self._flush() record = yield self.directory.recordWithUID("resource01") self.assertEquals(record, None) @inlineCallbacks def 
test_addWriteProxy(self): results = yield self.runCommand(command_addWriteProxy) self.assertEquals(len(results["result"]["Proxies"]), 1) @inlineCallbacks def test_removeWriteProxy(self): yield self.runCommand(command_addWriteProxy) results = yield self.runCommand(command_removeWriteProxy) self.assertEquals(len(results["result"]["Proxies"]), 0) @inlineCallbacks def test_purgeOldEvents(self): results = yield self.runCommand(command_purgeOldEvents) self.assertEquals(results["result"]["EventsRemoved"], 0) self.assertEquals(results["result"]["RetainDays"], 42) results = yield self.runCommand(command_purgeOldEventsNoDays) self.assertEquals(results["result"]["RetainDays"], 365) @inlineCallbacks def test_readConfig(self): """ Verify readConfig returns with only the writable keys """ results = yield self.runCommand( command_readConfig, script="calendarserver_config" ) self.assertEquals(results["result"]["RedirectHTTPToHTTPS"], False) self.assertEquals(results["result"]["EnableSearchAddressBook"], False) self.assertEquals(results["result"]["EnableCalDAV"], True) self.assertEquals(results["result"]["EnableCardDAV"], True) self.assertEquals(results["result"]["EnableSSL"], False) self.assertEquals(results["result"]["DefaultLogLevel"], "warn") self.assertEquals(results["result"]["Notifications"]["Services"]["APNS"]["Enabled"], False) # This is a read only key that is returned self.assertEquals(results["result"]["ServerRoot"], self.absoluteServerRoot) # Verify other non-writeable keys are not present, such as DataRoot which is not writable self.assertFalse("DataRoot" in results["result"]) @inlineCallbacks def test_writeConfig(self): """ Verify writeConfig updates the writable plist file only """ results = yield self.runCommand( command_writeConfig, script="calendarserver_config" ) self.assertEquals(results["result"]["EnableCalDAV"], False) self.assertEquals(results["result"]["EnableCardDAV"], False) self.assertEquals(results["result"]["EnableSSL"], True) 
self.assertEquals(results["result"]["Notifications"]["Services"]["APNS"]["Enabled"], True) hostName = "hostname_%s_%s" % (unichr(208), u"\ud83d\udca3") self.assertTrue(results["result"]["ServerHostName"].endswith(hostName)) # The static plist should still have EnableCalDAV = True staticPlist = plistlib.readPlist(self.configFileName) self.assertTrue(staticPlist["EnableCalDAV"]) command_addReadProxy = """ command addReadProxy Principal location01 Proxy user03 """ command_addWriteProxy = """ command addWriteProxy Principal location01 Proxy user01 """ command_createAddress = """ command createAddress GeneratedUID C701069D-9CA1-4925-A1A9-5CD94767B74B RealName Created Address 01 %s %s AbbreviatedName Addr1 RecordName createdaddress01 StreetAddress 1 Infinite Loop\nCupertino, 95014\nCA GeographicLocation geo:37.331,-122.030 """ % (unichr(208), u"\ud83d\udca3") command_createLocation = """ command createLocation AutoScheduleMode acceptIfFree AutoAcceptGroup E5A6142C-4189-4E9E-90B0-9CD0268B314B GeneratedUID 836B1B66-2E9A-4F46-8B1C-3DD6772C20B2 RealName Created Location 01 %s %s RecordName createdlocation01 Comment Test Comment Description Test Description Floor First AssociatedAddress C701069D-9CA1-4925-A1A9-5CD94767B74B ReadProxies user03 user04 WriteProxies user05 user06 """ % (unichr(208), u"\ud83d\udca3") command_createLocationMinimal = """ command createLocation RealName Minimal """ command_createResource = """ command createResource AutoScheduleMode declineIfBusy GeneratedUID AF575A61-CFA6-49E1-A0F6-B5662C9D9801 RealName Laptop 1 RecordName laptop1 ReadProxies user03 user04 WriteProxies user05 user06 """ command_deleteLocation = """ command deleteLocation GeneratedUID location01 """ command_deleteResource = """ command deleteResource GeneratedUID resource01 """ command_deleteAddress = """ command deleteAddress GeneratedUID C701069D-9CA1-4925-A1A9-5CD94767B74B """ command_getLocationAndResourceList = """ command getLocationAndResourceList """ command_getLocationList = 
""" command getLocationList """ command_getResourceList = """ command getResourceList """ command_getAddressList = """ command getAddressList """ command_listReadProxies = """ command listReadProxies Principal 836B1B66-2E9A-4F46-8B1C-3DD6772C20B2 """ command_listWriteProxies = """ command listWriteProxies Principal 836B1B66-2E9A-4F46-8B1C-3DD6772C20B2 """ command_removeReadProxy = """ command removeReadProxy Principal location01 Proxy user03 """ command_removeWriteProxy = """ command removeWriteProxy Principal location01 Proxy user01 """ command_setLocationAttributes = """ command setLocationAttributes AutoSchedule AutoAcceptGroup F5A6142C-4189-4E9E-90B0-9CD0268B314B GeneratedUID 836B1B66-2E9A-4F46-8B1C-3DD6772C20B2 RealName Updated Location 01 RecordName createdlocation01 Comment Updated Test Comment Description Updated Test Description Floor Second ReadProxies user03 WriteProxies user05 user06 user07 """ command_setAddressOnLocation = """ command setLocationAttributes GeneratedUID 836B1B66-2E9A-4F46-8B1C-3DD6772C20B2 AssociatedAddress C701069D-9CA1-4925-A1A9-5CD94767B74B """ command_removeAddressFromLocation = """ command setLocationAttributes GeneratedUID 836B1B66-2E9A-4F46-8B1C-3DD6772C20B2 AssociatedAddress """ command_getLocationAttributes = """ command getLocationAttributes GeneratedUID 836B1B66-2E9A-4F46-8B1C-3DD6772C20B2 """ command_getAddressAttributes = """ command getAddressAttributes GeneratedUID C701069D-9CA1-4925-A1A9-5CD94767B74B """ command_setAddressAttributes = """ command setAddressAttributes GeneratedUID C701069D-9CA1-4925-A1A9-5CD94767B74B RealName Updated Address StreetAddress Updated Street Address GeographicLocation Updated Geo """ command_setResourceAttributes = """ command setResourceAttributes AutoScheduleMode acceptIfFree GeneratedUID AF575A61-CFA6-49E1-A0F6-B5662C9D9801 RealName Updated Laptop 1 RecordName laptop1 """ command_getResourceAttributes = """ command getResourceAttributes GeneratedUID AF575A61-CFA6-49E1-A0F6-B5662C9D9801 """ 
command_purgeOldEvents = """ command purgeOldEvents RetainDays 42 """ command_purgeOldEventsNoDays = """ command purgeOldEvents """ command_readConfig = """ command readConfig """ command_writeConfig = """ command writeConfig Values EnableCalDAV EnableCardDAV EnableSSL Notifications.Services.APNS.Enabled ServerHostName hostname_%s_%s """ % (unichr(208), u"\ud83d\udca3") calendarserver-9.1+dfsg/calendarserver/tools/test/test_importer.py000066400000000000000000000414751315003562600257070ustar00rootroot00000000000000## # Copyright (c) 2014-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Unit tests for L{calendarsever.tools.importer}. 
""" from calendarserver.tools.importer import ( importCollectionComponent, ImportException, storeComponentInHomeAndCalendar ) from twext.enterprise.jobs.jobitem import JobItem from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks from twistedcaldav import customxml from twistedcaldav.ical import Component from twistedcaldav.test.util import StoreTestCase from txdav.base.propertystore.base import PropertyName from txdav.xml import element as davxml DATA_MISSING_SOURCE = """BEGIN:VCALENDAR CALSCALE:GREGORIAN PRODID:-//Apple Computer\, Inc//iCal 2.0//EN VERSION:2.0 END:VCALENDAR """ DATA_NO_SCHEDULING = """BEGIN:VCALENDAR VERSION:2.0 NAME:Sample Import Calendar COLOR:#0E61B9FF SOURCE;VALUE=URI:http://example.com/calendars/__uids__/user01/calendar/ PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN BEGIN:VEVENT UID:5CE3B280-DBC9-4E8E-B0B2-996754020E5F DTSTART;TZID=America/Los_Angeles:20141108T093000 DTEND;TZID=America/Los_Angeles:20141108T103000 CREATED:20141106T192546Z DTSTAMP:20141106T192546Z RRULE:FREQ=DAILY SEQUENCE:0 SUMMARY:repeating event TRANSP:OPAQUE END:VEVENT BEGIN:VEVENT UID:5CE3B280-DBC9-4E8E-B0B2-996754020E5F RECURRENCE-ID;TZID=America/Los_Angeles:20141111T093000 DTSTART;TZID=America/Los_Angeles:20141111T110000 DTEND;TZID=America/Los_Angeles:20141111T120000 CREATED:20141106T192546Z DTSTAMP:20141106T192546Z SEQUENCE:0 SUMMARY:repeating event TRANSP:OPAQUE END:VEVENT BEGIN:VEVENT UID:F6342D53-7D5E-4B5E-9E0A-F0A08977AFE5 DTSTART;TZID=America/Los_Angeles:20141108T080000 DTEND;TZID=America/Los_Angeles:20141108T091500 CREATED:20141104T205338Z DTSTAMP:20141104T205338Z SEQUENCE:0 SUMMARY:simple event TRANSP:OPAQUE END:VEVENT END:VCALENDAR """ DATA_NO_SCHEDULING_REIMPORT = """BEGIN:VCALENDAR VERSION:2.0 NAME:Sample Import Calendar Reimported COLOR:#FFFFFFFF SOURCE;VALUE=URI:http://example.com/calendars/__uids__/user01/calendar/ PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN BEGIN:VEVENT 
UID:5CE3B280-DBC9-4E8E-B0B2-996754020E5F
DTSTART;TZID=America/Los_Angeles:20141108T093000
DTEND;TZID=America/Los_Angeles:20141108T103000
CREATED:20141106T192546Z
DTSTAMP:20141106T192546Z
RRULE:FREQ=DAILY
SEQUENCE:0
SUMMARY:repeating event
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:5CE3B280-DBC9-4E8E-B0B2-996754020E5F
RECURRENCE-ID;TZID=America/Los_Angeles:20141111T093000
DTSTART;TZID=America/Los_Angeles:20141111T110000
DTEND;TZID=America/Los_Angeles:20141111T120000
CREATED:20141106T192546Z
DTSTAMP:20141106T192546Z
SEQUENCE:0
SUMMARY:repeating event
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:CB194340-9B3C-40B1-B4E2-062FDB7C8829
DTSTART;TZID=America/Los_Angeles:20141108T080000
DTEND;TZID=America/Los_Angeles:20141108T091500
CREATED:20141104T205338Z
DTSTAMP:20141104T205338Z
SEQUENCE:0
SUMMARY:new event
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR
"""

DATA_WITH_ORGANIZER = """BEGIN:VCALENDAR
VERSION:2.0
NAME:I'm the organizer
COLOR:#0000FFFF
SOURCE;VALUE=URI:http://example.com/calendars/__uids__/user01/calendar/
PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
BEGIN:VEVENT
UID:AB49C0C0-4238-41A4-8B43-F3E3DDF0E59C
DTSTART;TZID=America/Los_Angeles:20141108T053000
DTEND;TZID=America/Los_Angeles:20141108T070000
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 01:urn:x-uid:user01
SEQUENCE:0
SUMMARY:I'm the organizer
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR
"""

DATA_USER02_INVITES_USER01_ORGANIZER_COPY = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
BEGIN:VEVENT
UID:6DB84FB1-C943-4144-BE65-9B0DD9A9E2C7
DTSTART;TZID=America/Los_Angeles:20141115T074500
DTEND;TZID=America/Los_Angeles:20141115T091500
RRULE:FREQ=DAILY;COUNT=20
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 02:urn:x-uid:user02
SEQUENCE:0
SUMMARY:Other organizer
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:6DB84FB1-C943-4144-BE65-9B0DD9A9E2C7
RECURRENCE-ID;TZID=America/Los_Angeles:20141117T074500
DTSTART;TZID=America/Los_Angeles:20141117T093000
DTEND;TZID=America/Los_Angeles:20141117T110000
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 02:urn:x-uid:user02
SEQUENCE:0
SUMMARY:Other organizer
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:6DB84FB1-C943-4144-BE65-9B0DD9A9E2C7
RECURRENCE-ID;TZID=America/Los_Angeles:20141119T074500
DTSTART;TZID=America/Los_Angeles:20141119T103000
DTEND;TZID=America/Los_Angeles:20141119T120000
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 02:urn:x-uid:user02
SEQUENCE:0
SUMMARY:Other organizer
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR
"""

DATA_USER02_INVITES_USER01_ATTENDEE_COPY = """BEGIN:VCALENDAR
VERSION:2.0
NAME:I'm an attendee
COLOR:#0000FFFF
SOURCE;VALUE=URI:http://example.com/calendars/__uids__/user01/calendar/
PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
BEGIN:VEVENT
UID:6DB84FB1-C943-4144-BE65-9B0DD9A9E2C7
DTSTART;TZID=America/Los_Angeles:20141115T074500
DTEND;TZID=America/Los_Angeles:20141115T091500
RRULE:FREQ=DAILY;COUNT=20
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 02:urn:x-uid:user02
SEQUENCE:0
SUMMARY:Other organizer
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:6DB84FB1-C943-4144-BE65-9B0DD9A9E2C7
RECURRENCE-ID;TZID=America/Los_Angeles:20141117T074500
DTSTART;TZID=America/Los_Angeles:20141117T093000
DTEND;TZID=America/Los_Angeles:20141117T110000
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL;PARTSTAT=TENTATIVE:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 02:urn:x-uid:user02
SEQUENCE:0
SUMMARY:Other organizer
TRANSP:OPAQUE
END:VEVENT
BEGIN:VEVENT
UID:6DB84FB1-C943-4144-BE65-9B0DD9A9E2C7
RECURRENCE-ID;TZID=America/Los_Angeles:20141121T074500
DTSTART;TZID=America/Los_Angeles:20141121T101500
DTEND;TZID=America/Los_Angeles:20141121T114500
ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:x-uid:user01
ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;ROLE=CHAIR:urn:x-uid:user02
ATTENDEE;CN=User 03;CUTYPE=INDIVIDUAL:urn:x-uid:user03
ATTENDEE;CN=Mercury Seven;CUTYPE=ROOM:urn:x-uid:mercury
CREATED:20141107T172645Z
DTSTAMP:20141107T172645Z
LOCATION:Mercury
ORGANIZER;CN=User 02:urn:x-uid:user02
SEQUENCE:0
SUMMARY:Other organizer
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR
"""


class ImportTests(StoreTestCase):
    """
    Tests for importing data to a live store.
""" def configure(self): super(ImportTests, self).configure() # Enable the queue and make it fast self.patch(self.config.Scheduling.Options.WorkQueues, "Enabled", True) self.patch(self.config.Scheduling.Options.WorkQueues, "RequestDelaySeconds", 0.1) self.patch(self.config.Scheduling.Options.WorkQueues, "ReplyDelaySeconds", 0.1) self.patch(self.config.Scheduling.Options.WorkQueues, "AutoReplyDelaySeconds", 0.1) self.patch(self.config.Scheduling.Options.WorkQueues, "AttendeeRefreshBatchDelaySeconds", 0.1) self.patch(self.config.Scheduling.Options.WorkQueues, "AttendeeRefreshBatchIntervalSeconds", 0.1) # Reschedule quickly self.patch(JobItem, "failureRescheduleInterval", 1) self.patch(JobItem, "lockRescheduleInterval", 1) @inlineCallbacks def test_ImportComponentMissingSource(self): component = Component.allFromString(DATA_MISSING_SOURCE) try: yield importCollectionComponent(self.store, component) except ImportException: pass else: self.fail("Did not raise ImportException") @inlineCallbacks def test_ImportComponentNoScheduling(self): component = Component.allFromString(DATA_NO_SCHEDULING) yield importCollectionComponent(self.store, component) txn = self.store.newTransaction() home = yield txn.calendarHomeWithUID("user01") collection = yield home.childWithName("calendar") # Verify properties have been set collectionProperties = collection.properties() for element, value in ( (davxml.DisplayName, "Sample Import Calendar"), (customxml.CalendarColor, "#0E61B9FF"), ): self.assertEquals( value, collectionProperties[PropertyName.fromElement(element)] ) # Verify child objects objects = yield collection.listObjectResources() self.assertEquals(len(objects), 2) yield txn.commit() # Reimport different component into same collection component = Component.allFromString(DATA_NO_SCHEDULING_REIMPORT) yield importCollectionComponent(self.store, component) txn = self.store.newTransaction() home = yield txn.calendarHomeWithUID("user01") collection = yield home.childWithName("calendar") 
        # Verify properties have been changed
        collectionProperties = collection.properties()
        for element, value in (
            (davxml.DisplayName, "Sample Import Calendar Reimported"),
            (customxml.CalendarColor, "#FFFFFFFF"),
        ):
            self.assertEquals(
                value,
                collectionProperties[PropertyName.fromElement(element)]
            )

        # Verify child objects (should be 3 now)
        objects = yield collection.listObjectResources()
        self.assertEquals(len(objects), 3)

        yield txn.commit()

    @inlineCallbacks
    def test_ImportComponentOrganizer(self):
        component = Component.allFromString(DATA_WITH_ORGANIZER)
        yield importCollectionComponent(self.store, component)

        yield JobItem.waitEmpty(self.store.newTransaction, reactor, 60)

        txn = self.store.newTransaction()
        home = yield txn.calendarHomeWithUID("user01")
        collection = yield home.childWithName("calendar")

        # Verify properties have been set
        collectionProperties = collection.properties()
        for element, value in (
            (davxml.DisplayName, "I'm the organizer"),
            (customxml.CalendarColor, "#0000FFFF"),
        ):
            self.assertEquals(
                value,
                collectionProperties[PropertyName.fromElement(element)]
            )

        # Verify the organizer's child objects
        objects = yield collection.listObjectResources()
        self.assertEquals(len(objects), 1)

        # Verify the attendees' child objects
        home = yield txn.calendarHomeWithUID("user02")
        collection = yield home.childWithName("calendar")
        objects = yield collection.listObjectResources()
        self.assertEquals(len(objects), 1)

        home = yield txn.calendarHomeWithUID("user03")
        collection = yield home.childWithName("calendar")
        objects = yield collection.listObjectResources()
        self.assertEquals(len(objects), 1)

        home = yield txn.calendarHomeWithUID("mercury")
        collection = yield home.childWithName("calendar")
        objects = yield collection.listObjectResources()
        self.assertEquals(len(objects), 1)

        yield txn.commit()

    @inlineCallbacks
    def test_ImportComponentAttendee(self):
        # Have user02 invite user01
        yield storeComponentInHomeAndCalendar(
            self.store,
            Component.allFromString(DATA_USER02_INVITES_USER01_ORGANIZER_COPY),
            "user02", "calendar", "invite.ics"
        )

        yield JobItem.waitEmpty(self.store.newTransaction, reactor, 60)

        # Delete the attendee's copy, thus declining the event
        txn = self.store.newTransaction()
        home = yield txn.calendarHomeWithUID("user01")
        collection = yield home.childWithName("calendar")
        objects = yield collection.objectResources()
        self.assertEquals(len(objects), 1)
        yield objects[0].remove()
        yield txn.commit()

        yield JobItem.waitEmpty(self.store.newTransaction, reactor, 60)

        # Make sure attendee's copy is gone
        txn = self.store.newTransaction()
        home = yield txn.calendarHomeWithUID("user01")
        collection = yield home.childWithName("calendar")
        objects = yield collection.objectResources()
        self.assertEquals(len(objects), 0)
        yield txn.commit()

        # Make sure attendee shows as declined to the organizer
        txn = self.store.newTransaction()
        home = yield txn.calendarHomeWithUID("user02")
        collection = yield home.childWithName("calendar")
        objects = yield collection.objectResources()
        self.assertEquals(len(objects), 1)
        component = yield objects[0].component()
        prop = component.getAttendeeProperty(("urn:x-uid:user01",))
        self.assertEquals(
            prop.parameterValue("PARTSTAT"),
            "DECLINED"
        )
        yield txn.commit()

        # When importing the event again, update through the organizer's copy
        # of the event as if it were an iTIP reply
        component = Component.allFromString(DATA_USER02_INVITES_USER01_ATTENDEE_COPY)
        yield importCollectionComponent(self.store, component)

        yield JobItem.waitEmpty(self.store.newTransaction, reactor, 60)

        # Make sure organizer now sees the right partstats
        txn = self.store.newTransaction()
        home = yield txn.calendarHomeWithUID("user02")
        collection = yield home.childWithName("calendar")
        objects = yield collection.objectResources()
        self.assertEquals(len(objects), 1)
        component = yield objects[0].component()
        # print(str(component))
        props = component.getAttendeeProperties(("urn:x-uid:user01",))
        # The master is ACCEPTED
        self.assertEquals(
            props[0].parameterValue("PARTSTAT"),
            "ACCEPTED"
        )
        # 2nd instance is TENTATIVE
        self.assertEquals(
            props[1].parameterValue("PARTSTAT"),
            "TENTATIVE"
        )
        # 3rd instance is not in the attendee's copy, so remains DECLINED
        self.assertEquals(
            props[2].parameterValue("PARTSTAT"),
            "DECLINED"
        )
        yield txn.commit()

        # Make sure attendee now sees the right partstats
        txn = self.store.newTransaction()
        home = yield txn.calendarHomeWithUID("user01")
        collection = yield home.childWithName("calendar")
        objects = yield collection.objectResources()
        self.assertEquals(len(objects), 1)
        component = yield objects[0].component()
        # print(str(component))
        props = component.getAttendeeProperties(("urn:x-uid:user01",))
        # The master is ACCEPTED
        self.assertEquals(
            props[0].parameterValue("PARTSTAT"),
            "ACCEPTED"
        )
        # 2nd instance is TENTATIVE
        self.assertEquals(
            props[1].parameterValue("PARTSTAT"),
            "TENTATIVE"
        )
        # 3rd instance is not in the organizer's copy, so should inherit
        # the value from the master, which is ACCEPTED
        self.assertEquals(
            props[2].parameterValue("PARTSTAT"),
            "ACCEPTED"
        )
        yield txn.commit()

    test_ImportComponentAttendee.todo = "Need to fix iTip reply processing"
calendarserver-9.1+dfsg/calendarserver/tools/test/test_principals.py
##
# Copyright (c) 2005-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## import os from twistedcaldav.stdconfig import config from calendarserver.tools.principals import ( parseCreationArgs, matchStrings, recordForPrincipalID ) from calendarserver.tools.util import ( getProxies, setProxies ) from twext.python.filepath import CachingFilePath as FilePath from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks, Deferred, returnValue from twistedcaldav.test.util import ( TestCase, StoreTestCase, CapturingProcessProtocol, ErrorOutput ) class ManagePrincipalsTestCase(TestCase): def setUp(self): super(ManagePrincipalsTestCase, self).setUp() testRoot = os.path.join(os.path.dirname(__file__), "principals") templateName = os.path.join(testRoot, "caldavd.plist") with open(templateName) as templateFile: template = templateFile.read() databaseRoot = os.path.abspath("_spawned_scripts_db" + str(os.getpid())) newConfig = template % { "ServerRoot": os.path.abspath(config.ServerRoot), "DataRoot": os.path.abspath(config.DataRoot), "DatabaseRoot": databaseRoot, "DocumentRoot": os.path.abspath(config.DocumentRoot), "LogRoot": os.path.abspath(config.LogRoot), } configFilePath = FilePath(os.path.join(config.ConfigRoot, "caldavd.plist")) configFilePath.setContent(newConfig) self.configFileName = configFilePath.path config.load(self.configFileName) origUsersFile = FilePath( os.path.join( os.path.dirname(__file__), "principals", "users-groups.xml" ) ) copyUsersFile = FilePath(os.path.join(config.DataRoot, "accounts.xml")) origUsersFile.copyTo(copyUsersFile) origResourcesFile = FilePath( os.path.join( os.path.dirname(__file__), "principals", "resources-locations.xml" ) ) copyResourcesFile = FilePath(os.path.join(config.DataRoot, "resources.xml")) origResourcesFile.copyTo(copyResourcesFile) origAugmentFile = FilePath( os.path.join( os.path.dirname(__file__), "principals", "augments.xml" ) ) copyAugmentFile = FilePath(os.path.join(config.DataRoot, "augments.xml")) origAugmentFile.copyTo(copyAugmentFile) # Make sure trial puts the 
reactor in the right state, by letting it # run one reactor iteration. (Ignore me, please.) d = Deferred() reactor.callLater(0, d.callback, True) return d @inlineCallbacks def runCommand(self, *additional): """ Run calendarserver_manage_principals, passing additional as args. """ sourceRoot = os.path.dirname(os.path.dirname(os.path.dirname(os.path.dirname(__file__)))) cmd = "calendarserver_manage_principals" # assumes it's on PATH args = [cmd, "-f", self.configFileName] args.extend(additional) cwd = sourceRoot deferred = Deferred() reactor.spawnProcess(CapturingProcessProtocol(deferred, None), cmd, args, env=os.environ, path=cwd) output = yield deferred returnValue(output) @inlineCallbacks def test_help(self): results = yield self.runCommand("--help") self.assertTrue(results.startswith("usage:")) @inlineCallbacks def test_principalTypes(self): results = yield self.runCommand("--list-principal-types") self.assertTrue("groups" in results) self.assertTrue("users" in results) self.assertTrue("locations" in results) self.assertTrue("resources" in results) self.assertTrue("addresses" in results) @inlineCallbacks def test_listPrincipals(self): results = yield self.runCommand("--list-principals=users") for i in xrange(1, 10): self.assertTrue("user%02d" % (i,) in results) @inlineCallbacks def test_search(self): results = yield self.runCommand("--search", "user") self.assertTrue("11 matches found" in results) for i in xrange(1, 10): self.assertTrue("user%02d" % (i,) in results) results = yield self.runCommand("--search", "user", "04") self.assertTrue("1 matches found" in results) self.assertTrue("user04" in results) results = yield self.runCommand("--context=group", "--search", "test") self.assertTrue("2 matches found" in results) self.assertTrue("group2" in results) self.assertTrue("group3" in results) results = yield self.runCommand("--context=attendee", "--search", "test") self.assertTrue("3 matches found" in results) self.assertTrue("testuser1" in results) 
self.assertTrue("group2" in results) self.assertTrue("group3" in results) @inlineCallbacks def test_addRemove(self): results = yield self.runCommand( "--add", "resources", "New Resource", "newresource", "newresourceuid" ) self.assertTrue("Added 'New Resource'" in results) results = yield self.runCommand( "--get-auto-schedule-mode", "resources:newresource" ) self.assertTrue( results.startswith( 'Auto-schedule mode for "New Resource" newresourceuid (resource) newresource is accept if free, decline if busy' ) ) results = yield self.runCommand("--list-principals=resources") self.assertTrue("newresource" in results) results = yield self.runCommand( "--add", "resources", "New Resource", "newresource1", "newresourceuid" ) self.assertTrue("UID already in use: newresourceuid" in results) results = yield self.runCommand( "--add", "resources", "New Resource", "newresource", "uniqueuid" ) self.assertTrue("Record name already in use" in results) results = yield self.runCommand("--remove", "resources:newresource") self.assertTrue("Removed 'New Resource'" in results) results = yield self.runCommand("--list-principals=resources") self.assertFalse("newresource" in results) def test_parseCreationArgs(self): self.assertEquals( ("full name", "short name", "uid"), parseCreationArgs(("full name", "short name", "uid")) ) def test_matchStrings(self): self.assertEquals("abc", matchStrings("a", ("abc", "def"))) self.assertEquals("def", matchStrings("de", ("abc", "def"))) self.assertRaises( ValueError, matchStrings, "foo", ("abc", "def") ) @inlineCallbacks def test_modifyWriteProxies(self): results = yield self.runCommand( "--add-write-proxy=users:user01", "locations:location01" ) self.assertTrue( results.startswith('Added "User 01" user01 (user) user01 as a write proxy for "Room 01" location01 (location) location01') ) results = yield self.runCommand( "--list-write-proxies", "locations:location01" ) self.assertTrue("User 01" in results) results = yield self.runCommand( 
"--remove-proxy=users:user01", "locations:location01" ) results = yield self.runCommand( "--list-write-proxies", "locations:location01" ) self.assertTrue( 'No write proxies for "Room 01" location01 (location) location01' in results ) @inlineCallbacks def test_modifyReadProxies(self): results = yield self.runCommand( "--add-read-proxy=users:user01", "locations:location01" ) self.assertTrue( results.startswith('Added "User 01" user01 (user) user01 as a read proxy for "Room 01" location01 (location) location01') ) results = yield self.runCommand( "--list-read-proxies", "locations:location01" ) self.assertTrue("User 01" in results) results = yield self.runCommand( "--remove-proxy=users:user01", "locations:location01" ) results = yield self.runCommand( "--list-read-proxies", "locations:location01" ) self.assertTrue( 'No read proxies for "Room 01" location01 (location) location01' in results ) @inlineCallbacks def test_autoScheduleMode(self): results = yield self.runCommand( "--get-auto-schedule-mode", "locations:location01" ) self.assertTrue( results.startswith('Auto-schedule mode for "Room 01" location01 (location) location01 is accept if free, decline if busy') ) results = yield self.runCommand( "--set-auto-schedule-mode=accept-if-free", "locations:location01" ) self.assertTrue( results.startswith('Setting auto-schedule-mode to accept if free for "Room 01" location01 (location) location01') ) results = yield self.runCommand( "--get-auto-schedule-mode", "locations:location01" ) self.assertTrue( results.startswith('Auto-schedule mode for "Room 01" location01 (location) location01 is accept if free') ) results = yield self.runCommand( "--set-auto-schedule-mode=decline-if-busy", "users:user01" ) self.assertTrue(results.startswith('Setting auto-schedule-mode for "User 01" user01 (user) user01 is not allowed.')) try: results = yield self.runCommand( "--set-auto-schedule-mode=bogus", "users:user01" ) except ErrorOutput: pass else: self.fail("Expected command failure") 
@inlineCallbacks def test_groupChanges(self): results = yield self.runCommand( "--list-group-members", "groups:testgroup1" ) self.assertTrue("user01" in results) self.assertTrue("user02" in results) self.assertTrue("user03" not in results) results = yield self.runCommand( "--add-group-member", "users:user03", "groups:testgroup1" ) self.assertTrue("Added" in results) self.assertTrue("Existing" not in results) self.assertTrue("Invalid" not in results) results = yield self.runCommand( "--list-group-members", "groups:testgroup1" ) self.assertTrue("user01" in results) self.assertTrue("user02" in results) self.assertTrue("user03" in results) results = yield self.runCommand( "--add-group-member", "users:user03", "groups:testgroup1" ) self.assertTrue("Added" not in results) self.assertTrue("Existing" in results) self.assertTrue("Invalid" not in results) results = yield self.runCommand( "--add-group-member", "users:bogus", "groups:testgroup1" ) self.assertTrue("Added" not in results) self.assertTrue("Existing" not in results) self.assertTrue("Invalid" in results) results = yield self.runCommand( "--remove-group-member", "users:user03", "groups:testgroup1" ) self.assertTrue("Removed" in results) self.assertTrue("Missing" not in results) self.assertTrue("Invalid" not in results) results = yield self.runCommand( "--list-group-members", "groups:testgroup1" ) self.assertTrue("user01" in results) self.assertTrue("user02" in results) self.assertTrue("user03" not in results) results = yield self.runCommand( "--remove-group-member", "users:user03", "groups:testgroup1" ) self.assertTrue("Removed" not in results) self.assertTrue("Missing" in results) self.assertTrue("Invalid" not in results) class SetProxiesTestCase(StoreTestCase): @inlineCallbacks def test_setProxies(self): """ Read and Write proxies can be set en masse """ directory = self.directory record = yield recordForPrincipalID(directory, "users:user01") readProxies, writeProxies = yield getProxies(record) 
        self.assertEquals(readProxies, [])  # initially empty
        self.assertEquals(writeProxies, [])  # initially empty

        readProxies = [
            (yield recordForPrincipalID(directory, "users:user03")),
            (yield recordForPrincipalID(directory, "users:user04")),
        ]
        writeProxies = [
            (yield recordForPrincipalID(directory, "users:user05")),
        ]
        yield setProxies(record, readProxies, writeProxies)

        readProxies, writeProxies = yield getProxies(record)
        self.assertEquals(set([r.uid for r in readProxies]), set(["user03", "user04"]))
        self.assertEquals(set([r.uid for r in writeProxies]), set(["user05"]))

        # Using None for a proxy list indicates a no-op
        yield setProxies(record, [], None)
        readProxies, writeProxies = yield getProxies(record)
        self.assertEquals(readProxies, [])  # now empty
        self.assertEquals(set([r.uid for r in writeProxies]), set(["user05"]))  # unchanged

calendarserver-9.1+dfsg/calendarserver/tools/test/test_purge.py

##
# Copyright (c) 2005-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## from calendarserver.tools.purge import PurgePrincipalService, \ PrincipalPurgeHomeWork, PrincipalPurgePollingWork, PrincipalPurgeCheckWork, \ PrincipalPurgeWork from pycalendar.datetime import DateTime from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks, Deferred from twistedcaldav.config import config from twistedcaldav.test.util import StoreTestCase from txdav.common.datastore.sql_tables import _BIND_MODE_WRITE from txdav.common.datastore.test.util import populateCalendarsFrom from txweb2.http_headers import MimeType import datetime future = DateTime.getNowUTC() future.offsetDay(1) future = future.getText() past = DateTime.getNowUTC() past.offsetDay(-1) past = past.getText() # For test_purgeExistingGUID # No organizer/attendee NON_INVITE_ICS = """BEGIN:VCALENDAR VERSION:2.0 BEGIN:VEVENT UID:151AFC76-6036-40EF-952B-97D1840760BF SUMMARY:Non Invitation DTSTART:%s DURATION:PT1H END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % (past,) # Purging existing organizer; has existing attendee ORGANIZER_ICS = """BEGIN:VCALENDAR VERSION:2.0 BEGIN:VEVENT UID:7ED97931-9A19-4596-9D4D-52B36D6AB803 SUMMARY:Organizer DTSTART:%s DURATION:PT1H ORGANIZER:urn:uuid:10000000-0000-0000-0000-000000000001 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000001 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % (future,) # Purging existing attendee; has existing organizer ATTENDEE_ICS = """BEGIN:VCALENDAR VERSION:2.0 BEGIN:VEVENT UID:1974603C-B2C0-4623-92A0-2436DEAB07EF SUMMARY:Attendee DTSTART:%s DURATION:PT1H ORGANIZER:urn:uuid:10000000-0000-0000-0000-000000000002 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000001 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % (future,) # For 
test_purgeNonExistentGUID # No organizer/attendee, in the past NON_INVITE_PAST_ICS = """BEGIN:VCALENDAR VERSION:2.0 BEGIN:VEVENT UID:151AFC76-6036-40EF-952B-97D1840760BF SUMMARY:Non Invitation DTSTART:%s DURATION:PT1H END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % (past,) # No organizer/attendee, in the future NON_INVITE_FUTURE_ICS = """BEGIN:VCALENDAR VERSION:2.0 BEGIN:VEVENT UID:251AFC76-6036-40EF-952B-97D1840760BF SUMMARY:Non Invitation DTSTART:%s DURATION:PT1H END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % (future,) # Purging non-existent organizer; has existing attendee ORGANIZER_ICS_2 = """BEGIN:VCALENDAR VERSION:2.0 BEGIN:VEVENT UID:7ED97931-9A19-4596-9D4D-52B36D6AB803 SUMMARY:Organizer DTSTART:%s DURATION:PT1H ORGANIZER:urn:uuid:F0000000-0000-0000-0000-000000000001 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000001 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % (future,) # Purging non-existent attendee; has existing organizer ATTENDEE_ICS_2 = """BEGIN:VCALENDAR VERSION:2.0 BEGIN:VEVENT UID:1974603C-B2C0-4623-92A0-2436DEAB07EF SUMMARY:Attendee DTSTART:%s DURATION:PT1H ORGANIZER:urn:uuid:10000000-0000-0000-0000-000000000002 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000001 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % (future,) # Purging non-existent organizer; has existing attendee; repeating REPEATING_ORGANIZER_ICS = """BEGIN:VCALENDAR VERSION:2.0 BEGIN:VEVENT UID:8ED97931-9A19-4596-9D4D-52B36D6AB803 SUMMARY:Repeating Organizer DTSTART:%s DURATION:PT1H RRULE:FREQ=DAILY;COUNT=400 ORGANIZER:urn:uuid:F0000000-0000-0000-0000-000000000001 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000001 
ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % (past,) # For test_purgeMultipleNonExistentGUIDs # No organizer/attendee NON_INVITE_ICS_3 = """BEGIN:VCALENDAR VERSION:2.0 BEGIN:VEVENT UID:151AFC76-6036-40EF-952B-97D1840760BF SUMMARY:Non Invitation DTSTART:%s DURATION:PT1H END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % (past,) # Purging non-existent organizer; has non-existent and existent attendees ORGANIZER_ICS_3 = """BEGIN:VCALENDAR VERSION:2.0 BEGIN:VEVENT UID:7ED97931-9A19-4596-9D4D-52B36D6AB803 SUMMARY:Organizer DTSTART:%s DURATION:PT1H ORGANIZER:urn:uuid:F0000000-0000-0000-0000-000000000001 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000001 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000002 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % (future,) # Purging non-existent attendee; has non-existent organizer and existent attendee # (Note: Implicit scheduling doesn't update this at all for the existing attendee) ATTENDEE_ICS_3 = """BEGIN:VCALENDAR VERSION:2.0 BEGIN:VEVENT UID:1974603C-B2C0-4623-92A0-2436DEAB07EF SUMMARY:Attendee DTSTART:%s DURATION:PT1H ORGANIZER:urn:uuid:F0000000-0000-0000-0000-000000000002 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000002 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000001 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000001 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % (future,) # Purging non-existent attendee; has non-existent attendee and existent organizer ATTENDEE_ICS_4 = """BEGIN:VCALENDAR VERSION:2.0 BEGIN:VEVENT UID:79F26B10-6ECE-465E-9478-53F2A9FCAFEE SUMMARY:2 non-existent attendees DTSTART:%s DURATION:PT1H 
ORGANIZER:urn:uuid:10000000-0000-0000-0000-000000000002 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:10000000-0000-0000-0000-000000000002 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000001 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:F0000000-0000-0000-0000-000000000002 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % (future,) ATTACHMENT_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:US/Pacific BEGIN:DAYLIGHT TZOFFSETFROM:-0800 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU DTSTART:20070311T020000 TZNAME:PDT TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:STANDARD TZOFFSETFROM:-0700 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU DTSTART:20071104T020000 TZNAME:PST TZOFFSETTO:-0800 END:STANDARD END:VTIMEZONE BEGIN:VEVENT CREATED:20100303T195159Z UID:F2F14D94-B944-43D9-8F6F-97F95B2764CA DTEND;TZID=US/Pacific:20100304T141500 TRANSP:OPAQUE SUMMARY:Attachment DTSTART;TZID=US/Pacific:20100304T120000 DTSTAMP:20100303T195203Z SEQUENCE:2 X-APPLE-DROPBOX:/calendars/__uids__/6423F94A-6B76-4A3A-815B-D52CFD77935D/dropbox/F2F14D94-B944-43D9-8F6F-97F95B2764CA.dropbox END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") # Purging non-existent organizer; has existing attendee; repeating REPEATING_PUBLIC_EVENT_ORGANIZER_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN X-CALENDARSERVER-ACCESS:PRIVATE BEGIN:VEVENT UID:8ED97931-9A19-4596-9D4D-52B36D6AB803 SUMMARY:Repeating Organizer DTSTART:%s DURATION:PT1H RRULE:FREQ=DAILY;COUNT=400 ORGANIZER:urn:x-uid:user01 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:x-uid:user01 ATTENDEE;CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:x-uid:user02 DTSTAMP:20100303T195203Z END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % (past,) class PurgePrincipalTests(StoreTestCase): """ Tests for purging the data belonging to a given principal """ uid = "user01" uid2 = "user02" metadata = { "accessMode": "PUBLIC", "isScheduleObject": True, 
"scheduleTag": "abc", "scheduleEtags": (), "hasPrivateComment": False, } requirements = { uid: { "calendar1": { "attachment.ics": (ATTACHMENT_ICS, metadata,), "organizer.ics": (REPEATING_PUBLIC_EVENT_ORGANIZER_ICS, metadata,), }, "inbox": {}, }, uid2: { "calendar2": { "attendee.ics": (REPEATING_PUBLIC_EVENT_ORGANIZER_ICS, metadata,), }, "inbox": {}, }, } @inlineCallbacks def setUp(self): yield super(PurgePrincipalTests, self).setUp() txn = self._sqlCalendarStore.newTransaction() # Add attachment to attachment.ics self._sqlCalendarStore._dropbox_ok = True home = yield txn.calendarHomeWithUID(self.uid) calendar = yield home.calendarWithName("calendar1") event = yield calendar.calendarObjectWithName("attachment.ics") attachment = yield event.createAttachmentWithName("attachment.txt") t = attachment.store(MimeType("text", "x-fixture")) t.write("attachment") t.write(" text") yield t.loseConnection() self._sqlCalendarStore._dropbox_ok = False # Share calendars each way home2 = yield txn.calendarHomeWithUID(self.uid2) calendar2 = yield home2.calendarWithName("calendar2") self.sharedName = yield calendar2.shareWith(home, _BIND_MODE_WRITE) self.sharedName2 = yield calendar.shareWith(home2, _BIND_MODE_WRITE) yield txn.commit() txn = self._sqlCalendarStore.newTransaction() home = yield txn.calendarHomeWithUID(self.uid) calendar2 = yield home.childWithName(self.sharedName) self.assertNotEquals(calendar2, None) home2 = yield txn.calendarHomeWithUID(self.uid2) calendar1 = yield home2.childWithName(self.sharedName2) self.assertNotEquals(calendar1, None) yield txn.commit() # Now remove user01 yield self.directory.removeRecords((self.uid,)) self.patch(config.Scheduling.Options.WorkQueues, "Enabled", False) self.patch(config.AutomaticPurging, "Enabled", True) self.patch(config.AutomaticPurging, "PollingIntervalSeconds", -1) self.patch(config.AutomaticPurging, "CheckStaggerSeconds", 1) self.patch(config.AutomaticPurging, "PurgeIntervalSeconds", 3) self.patch(config.AutomaticPurging, 
"HomePurgeDelaySeconds", 1) @inlineCallbacks def populate(self): yield populateCalendarsFrom(self.requirements, self.storeUnderTest()) self.notifierFactory.reset() @inlineCallbacks def test_purgeUIDs(self): """ Verify purgeUIDs removes homes, and doesn't provision homes that don't exist """ # Now you see it home = yield self.homeUnderTest(name=self.uid) self.assertNotEquals(home, None) calobj2 = yield self.calendarObjectUnderTest(name="attendee.ics", calendar_name="calendar2", home=self.uid2) comp = yield calobj2.componentForUser() self.assertTrue("STATUS:CANCELLED" not in str(comp)) self.assertTrue(";UNTIL=" not in str(comp)) yield self.commit() count = (yield PurgePrincipalService.purgeUIDs( self.storeUnderTest(), self.directory, (self.uid,), verbose=False, proxies=False)) self.assertEquals(count, 2) # 2 events # Wait for queue to process while(True): txn = self.transactionUnderTest() work = yield PrincipalPurgeHomeWork.all(txn) yield self.commit() if len(work) == 0: break d = Deferred() reactor.callLater(1, lambda: d.callback(None)) yield d # Now you don't home = yield self.homeUnderTest(name=self.uid) self.assertEquals(home, None) # Verify calendar1 was unshared to uid2 home2 = yield self.homeUnderTest(name=self.uid2) self.assertEquals((yield home2.childWithName(self.sharedName)), None) yield self.commit() count = yield PurgePrincipalService.purgeUIDs( self.storeUnderTest(), self.directory, (self.uid,), verbose=False, proxies=False, ) self.assertEquals(count, 0) # And you still don't (making sure it's not provisioned) home = yield self.homeUnderTest(name=self.uid) self.assertEquals(home, None) yield self.commit() calobj2 = yield self.calendarObjectUnderTest(name="attendee.ics", calendar_name="calendar2", home=self.uid2) comp = yield calobj2.componentForUser() self.assertTrue("STATUS:CANCELLED" in str(comp)) self.assertTrue(";UNTIL=" not in str(comp)) yield self.commit() class PurgePrincipalTestsWithWorkQueue(PurgePrincipalTests): """ Same as 
L{PurgePrincipalTests} but with the work queue enabled. """ @inlineCallbacks def setUp(self): yield super(PurgePrincipalTestsWithWorkQueue, self).setUp() self.patch(config.Scheduling.Options.WorkQueues, "Enabled", True) self.patch(config.AutomaticPurging, "Enabled", True) self.patch(config.AutomaticPurging, "PollingIntervalSeconds", -1) self.patch(config.AutomaticPurging, "CheckStaggerSeconds", 1) self.patch(config.AutomaticPurging, "PurgeIntervalSeconds", 3) self.patch(config.AutomaticPurging, "HomePurgeDelaySeconds", 1) @inlineCallbacks def test_purgeUIDService(self): """ Test that the full sequence of work items are processed via automatic polling. """ # Now you see it home = yield self.homeUnderTest(name=self.uid) self.assertNotEquals(home, None) calobj2 = yield self.calendarObjectUnderTest(name="attendee.ics", calendar_name="calendar2", home=self.uid2) comp = yield calobj2.componentForUser() self.assertTrue("STATUS:CANCELLED" not in str(comp)) self.assertTrue(";UNTIL=" not in str(comp)) yield self.commit() txn = self.transactionUnderTest() notBefore = ( datetime.datetime.utcnow() + datetime.timedelta(seconds=3) ) yield txn.enqueue(PrincipalPurgePollingWork, notBefore=notBefore) yield self.commit() while True: txn = self.transactionUnderTest() work1 = yield PrincipalPurgePollingWork.all(txn) work2 = yield PrincipalPurgeCheckWork.all(txn) work3 = yield PrincipalPurgeWork.all(txn) work4 = yield PrincipalPurgeHomeWork.all(txn) if len(work4) != 0: home = yield txn.calendarHomeWithUID(self.uid) self.assertTrue(home.purging()) yield self.commit() # print len(work1), len(work2), len(work3), len(work4) if len(work1) + len(work2) + len(work3) + len(work4) == 0: break d = Deferred() reactor.callLater(1, lambda: d.callback(None)) yield d # Now you don't home = yield self.homeUnderTest(name=self.uid) self.assertEquals(home, None) # Verify calendar1 was unshared to uid2 home2 = yield self.homeUnderTest(name=self.uid2) self.assertEquals((yield 
home2.childWithName(self.sharedName)), None)

        calobj2 = yield self.calendarObjectUnderTest(name="attendee.ics", calendar_name="calendar2", home=self.uid2)
        comp = yield calobj2.componentForUser()
        self.assertTrue("STATUS:CANCELLED" in str(comp))
        self.assertTrue(";UNTIL=" not in str(comp))
        yield self.commit()

calendarserver-9.1+dfsg/calendarserver/tools/test/test_purge_old_events.py

##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## """ Tests for calendarserver.tools.purge """ import os from calendarserver.tools.purge import ( PurgeOldEventsService, PurgeAttachmentsService, PurgePrincipalService, PrincipalPurgeHomeWork ) from pycalendar.datetime import DateTime from twext.enterprise.dal.syntax import Update, Delete from twext.enterprise.util import parseSQLTimestamp from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks, returnValue, Deferred from twistedcaldav.config import config from twistedcaldav.test.util import StoreTestCase from twistedcaldav.vcard import Component as VCardComponent from txdav.common.datastore.sql_tables import schema from txdav.common.datastore.test.util import populateCalendarsFrom from txweb2.http_headers import MimeType now = DateTime.getToday().getYear() OLD_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:US/Pacific BEGIN:STANDARD TZOFFSETFROM:-0700 RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYMONTH=10;BYDAY=-1SU DTSTART:19621028T020000 TZNAME:PST TZOFFSETTO:-0800 END:STANDARD BEGIN:DAYLIGHT TZOFFSETFROM:-0800 RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYMONTH=4;BYDAY=1SU DTSTART:19870405T020000 TZNAME:PDT TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:DAYLIGHT TZOFFSETFROM:-0800 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU DTSTART:20070311T020000 TZNAME:PDT TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:STANDARD TZOFFSETFROM:-0700 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU DTSTART:20071104T020000 TZNAME:PST TZOFFSETTO:-0800 END:STANDARD END:VTIMEZONE BEGIN:VEVENT CREATED:20100303T181216Z UID:685BC3A1-195A-49B3-926D-388DDACA78A6 DTEND;TZID=US/Pacific:%(year)s0307T151500 TRANSP:OPAQUE SUMMARY:Ancient event DTSTART;TZID=US/Pacific:%(year)s0307T111500 DTSTAMP:20100303T181220Z SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": now - 5} ATTACHMENT_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:US/Pacific BEGIN:STANDARD 
TZOFFSETFROM:-0700 RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYMONTH=10;BYDAY=-1SU DTSTART:19621028T020000 TZNAME:PST TZOFFSETTO:-0800 END:STANDARD BEGIN:DAYLIGHT TZOFFSETFROM:-0800 RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYMONTH=4;BYDAY=1SU DTSTART:19870405T020000 TZNAME:PDT TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:DAYLIGHT TZOFFSETFROM:-0800 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU DTSTART:20070311T020000 TZNAME:PDT TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:STANDARD TZOFFSETFROM:-0700 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU DTSTART:20071104T020000 TZNAME:PST TZOFFSETTO:-0800 END:STANDARD END:VTIMEZONE BEGIN:VEVENT CREATED:20100303T181216Z UID:57A5D1F6-9A57-4F74-9520-25C617F54B88-%(uid)s TRANSP:OPAQUE SUMMARY:Ancient event with attachment DTSTART;TZID=US/Pacific:%(year)s0308T111500 DTEND;TZID=US/Pacific:%(year)s0308T151500 DTSTAMP:20100303T181220Z X-APPLE-DROPBOX:/calendars/__uids__/user01/dropbox/%(dropboxid)s.dropbox SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") MATTACHMENT_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:US/Pacific BEGIN:STANDARD TZOFFSETFROM:-0700 RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYMONTH=10;BYDAY=-1SU DTSTART:19621028T020000 TZNAME:PST TZOFFSETTO:-0800 END:STANDARD BEGIN:DAYLIGHT TZOFFSETFROM:-0800 RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYMONTH=4;BYDAY=1SU DTSTART:19870405T020000 TZNAME:PDT TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:DAYLIGHT TZOFFSETFROM:-0800 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU DTSTART:20070311T020000 TZNAME:PDT TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:STANDARD TZOFFSETFROM:-0700 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU DTSTART:20071104T020000 TZNAME:PST TZOFFSETTO:-0800 END:STANDARD END:VTIMEZONE BEGIN:VEVENT CREATED:20100303T181216Z UID:57A5D1F6-9A57-4F74-9520-25C617F54B88-%(uid)s TRANSP:OPAQUE SUMMARY:Ancient event with attachment DTSTART;TZID=US/Pacific:%(year)s0308T111500 DTEND;TZID=US/Pacific:%(year)s0308T151500 DTSTAMP:20100303T181220Z SEQUENCE:2 
END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") ENDLESS_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:US/Pacific BEGIN:STANDARD TZOFFSETFROM:-0700 RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYMONTH=10;BYDAY=-1SU DTSTART:19621028T020000 TZNAME:PST TZOFFSETTO:-0800 END:STANDARD BEGIN:DAYLIGHT TZOFFSETFROM:-0800 RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYMONTH=4;BYDAY=1SU DTSTART:19870405T020000 TZNAME:PDT TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:DAYLIGHT TZOFFSETFROM:-0800 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU DTSTART:20070311T020000 TZNAME:PDT TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:STANDARD TZOFFSETFROM:-0700 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU DTSTART:20071104T020000 TZNAME:PST TZOFFSETTO:-0800 END:STANDARD END:VTIMEZONE BEGIN:VEVENT CREATED:20100303T194654Z UID:9FDE0E4C-1495-4CAF-863B-F7F0FB15FE8C DTEND;TZID=US/Pacific:%(year)s0308T151500 RRULE:FREQ=YEARLY;INTERVAL=1 TRANSP:OPAQUE SUMMARY:Ancient Repeating Endless DTSTART;TZID=US/Pacific:%(year)s0308T111500 DTSTAMP:20100303T194710Z SEQUENCE:4 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {"year": now - 5} REPEATING_AWHILE_ICS = """BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.1//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:US/Pacific BEGIN:STANDARD TZOFFSETFROM:-0700 RRULE:FREQ=YEARLY;UNTIL=20061029T090000Z;BYMONTH=10;BYDAY=-1SU DTSTART:19621028T020000 TZNAME:PST TZOFFSETTO:-0800 END:STANDARD BEGIN:DAYLIGHT TZOFFSETFROM:-0800 RRULE:FREQ=YEARLY;UNTIL=20060402T100000Z;BYMONTH=4;BYDAY=1SU DTSTART:19870405T020000 TZNAME:PDT TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:DAYLIGHT TZOFFSETFROM:-0800 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU DTSTART:20070311T020000 TZNAME:PDT TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:STANDARD TZOFFSETFROM:-0700 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU DTSTART:20071104T020000 TZNAME:PST TZOFFSETTO:-0800 END:STANDARD END:VTIMEZONE BEGIN:VEVENT CREATED:20100303T194716Z UID:76236B32-2BC4-4D78-956B-8D42D4086200 
DTEND;TZID=US/Pacific:%(year)s0309T151500
RRULE:FREQ=YEARLY;INTERVAL=1;COUNT=3
TRANSP:OPAQUE
SUMMARY:Ancient Repeat Awhile
DTSTART;TZID=US/Pacific:%(year)s0309T111500
DTSTAMP:20100303T194747Z
SEQUENCE:6
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": now - 5}

STRADDLING_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Pacific
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20100303T213643Z
UID:1C219DAD-D374-4822-8C98-ADBA85E253AB
DTEND;TZID=US/Pacific:%(year)s0508T121500
RRULE:FREQ=MONTHLY;INTERVAL=1;UNTIL=%(until)s0509T065959Z
TRANSP:OPAQUE
SUMMARY:Straddling cut-off
DTSTART;TZID=US/Pacific:%(year)s0508T111500
DTSTAMP:20100303T213704Z
SEQUENCE:5
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": now - 2, "until": now + 1}

RECENT_ICS = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Apple Inc.//iCal 4.0.1//EN
CALSCALE:GREGORIAN
BEGIN:VTIMEZONE
TZID:US/Pacific
BEGIN:DAYLIGHT
TZOFFSETFROM:-0800
RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU
DTSTART:20070311T020000
TZNAME:PDT
TZOFFSETTO:-0700
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0700
RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU
DTSTART:20071104T020000
TZNAME:PST
TZOFFSETTO:-0800
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
CREATED:20100303T195159Z
UID:F2F14D94-B944-43D9-8F6F-97F95B2764CA
DTEND;TZID=US/Pacific:%(year)s0304T141500
TRANSP:OPAQUE
SUMMARY:Recent
DTSTART;TZID=US/Pacific:%(year)s0304T120000
DTSTAMP:20100303T195203Z
SEQUENCE:2
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n") % {"year": now}

VCARD_1 = """BEGIN:VCARD
VERSION:3.0
N:User;Test
FN:Test User
EMAIL;type=INTERNET;PREF:testuser@example.com
UID:12345-67890-1.1
END:VCARD
""".replace("\n", "\r\n")


class PurgeOldEventsTests(StoreTestCase):
    """
    Tests for deleting events older than a given date.
    """

    metadata = {
        "accessMode": "PUBLIC",
        "isScheduleObject": True,
        "scheduleTag": "abc",
        "scheduleEtags": (),
        "hasPrivateComment": False,
    }

    requirements = {
        "home1": {
            "calendar1": {
                "old.ics": (OLD_ICS, metadata,),
                "endless.ics": (ENDLESS_ICS, metadata,),
                "oldattachment1.ics": (ATTACHMENT_ICS % {"year": now - 5, "uid": "1.1", "dropboxid": "1.1"}, metadata,),
                "oldattachment2.ics": (ATTACHMENT_ICS % {"year": now - 5, "uid": "1.2", "dropboxid": "1.2"}, metadata,),
                "currentattachment3.ics": (ATTACHMENT_ICS % {"year": now + 1, "uid": "1.3", "dropboxid": "1.3"}, metadata,),
                "oldmattachment1.ics": (MATTACHMENT_ICS % {"year": now - 5, "uid": "1.1m"}, metadata,),
                "oldmattachment2.ics": (MATTACHMENT_ICS % {"year": now - 5, "uid": "1.2m"}, metadata,),
                "currentmattachment3.ics": (MATTACHMENT_ICS % {"year": now + 1, "uid": "1.3m"}, metadata,),
            },
            "inbox": {},
        },
        "home2": {
            "calendar2": {
                "straddling.ics": (STRADDLING_ICS, metadata,),
                "recent.ics": (RECENT_ICS, metadata,),
                "oldattachment1.ics": (ATTACHMENT_ICS % {"year": now - 5, "uid": "2.1", "dropboxid": "2.1"}, metadata,),
                "currentattachment2.ics": (ATTACHMENT_ICS % {"year": now + 1, "uid": "2.2", "dropboxid": "2.1"}, metadata,),
                "oldattachment3.ics": (ATTACHMENT_ICS % {"year": now - 5, "uid": "2.3", "dropboxid": "2.2"}, metadata,),
                "oldattachment4.ics": (ATTACHMENT_ICS % {"year": now - 5, "uid": "2.4", "dropboxid": "2.2"}, metadata,),
                "oldmattachment1.ics": (MATTACHMENT_ICS % {"year": now - 5, "uid": "2.1"}, metadata,),
                "currentmattachment2.ics": (MATTACHMENT_ICS % {"year": now + 1, "uid": "2.2"}, metadata,),
                "oldmattachment3.ics": (MATTACHMENT_ICS % {"year": now - 5, "uid": "2.3"}, metadata,),
                "oldmattachment4.ics": (MATTACHMENT_ICS % {"year": now - 5, "uid": "2.4"}, metadata,),
            },
            "calendar3": {
                "repeating_awhile.ics": (REPEATING_AWHILE_ICS, metadata,),
            },
            "inbox": {},
        }
    }

    def configure(self):
        super(PurgeOldEventsTests, self).configure()
        # Turn off delayed indexing option so we can have some useful tests
        self.patch(config, "FreeBusyIndexDelayedExpand", False)

        # Tweak queue timing to speed things up
        self.patch(config.Scheduling.Options.WorkQueues, "Enabled", False)
        self.patch(config.AutomaticPurging, "PollingIntervalSeconds", -1)
        self.patch(config.AutomaticPurging, "CheckStaggerSeconds", 1)
        self.patch(config.AutomaticPurging, "PurgeIntervalSeconds", 3)
        self.patch(config.AutomaticPurging, "HomePurgeDelaySeconds", 1)

        # self.patch(config.DirectoryService.params, "xmlFile",
        #     os.path.join(
        #         os.path.dirname(__file__), "purge", "accounts.xml"
        #     )
        # )
        # self.patch(config.ResourceService.params, "xmlFile",
        #     os.path.join(
        #         os.path.dirname(__file__), "purge", "resources.xml"
        #     )
        # )

    @inlineCallbacks
    def populate(self):
        yield populateCalendarsFrom(self.requirements, self.storeUnderTest())
        self.notifierFactory.reset()

        txn = self._sqlCalendarStore.newTransaction()
        Delete(
            From=schema.ATTACHMENT,
            Where=None
        ).on(txn)
        (yield txn.commit())

    @inlineCallbacks
    def test_eventsOlderThan(self):
        cutoff = DateTime(now, 4, 1, 0, 0, 0)
        txn = self._sqlCalendarStore.newTransaction()

        # Query for all old events
        results = (yield txn.eventsOlderThan(cutoff))
        self.assertEquals(
            sorted(results),
            sorted([
                ['home1', 'calendar1', 'old.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
                ['home1', 'calendar1', 'oldattachment1.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
                ['home1', 'calendar1', 'oldattachment2.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
                ['home1', 'calendar1', 'oldmattachment1.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
                ['home1', 'calendar1', 'oldmattachment2.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
                ['home2', 'calendar3', 'repeating_awhile.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
                ['home2', 'calendar2', 'recent.ics', parseSQLTimestamp('%s-03-04 22:15:00' % (now,))],
                ['home2', 'calendar2', 'oldattachment1.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
                ['home2', 'calendar2', 'oldattachment3.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
                ['home2', 'calendar2', 'oldattachment4.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
                ['home2', 'calendar2', 'oldmattachment1.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
                ['home2', 'calendar2', 'oldmattachment3.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
                ['home2', 'calendar2', 'oldmattachment4.ics', parseSQLTimestamp('1901-01-01 01:00:00')],
            ])
        )

        # Query for oldest event - actually with limited time caching, the oldest event
        # cannot be precisely known, all we get back is the first one in the sorted list
        # where each has the 1901 "dummy" time stamp to indicate a partial cache
        results = (yield txn.eventsOlderThan(cutoff, batchSize=1))
        self.assertEquals(len(results), 1)

    @inlineCallbacks
    def test_removeOldEvents(self):
        cutoff = DateTime(now, 4, 1, 0, 0, 0)
        txn = self._sqlCalendarStore.newTransaction()

        # Remove oldest event - except we don't know what that is because of the dummy timestamps
        # used with a partial index. So all we can check is that one event was removed.
        count = (yield txn.removeOldEvents(cutoff, batchSize=1))
        self.assertEquals(count, 1)
        results = (yield txn.eventsOlderThan(cutoff))
        self.assertEquals(len(results), 12)

        # Remove remaining oldest events
        count = (yield txn.removeOldEvents(cutoff))
        self.assertEquals(count, 12)
        results = (yield txn.eventsOlderThan(cutoff))
        self.assertEquals(list(results), [])

        # Remove oldest events (none left)
        count = (yield txn.removeOldEvents(cutoff))
        self.assertEquals(count, 0)

    @inlineCallbacks
    def _addAttachment(self, home, calendar, event, name):
        self._sqlCalendarStore._dropbox_ok = True
        txn = self._sqlCalendarStore.newTransaction()

        # Create an event with an attachment
        home = (yield txn.calendarHomeWithUID(home))
        calendar = (yield home.calendarWithName(calendar))
        event = (yield calendar.calendarObjectWithName(event))
        attachment = (yield event.createAttachmentWithName(name))
        t = attachment.store(MimeType("text", "x-fixture"))
        t.write("%s/%s/%s/%s" % (home, calendar, event, name,))
        t.write(" attachment")
        (yield t.loseConnection())
        (yield txn.commit())
        self._sqlCalendarStore._dropbox_ok = False

        returnValue(attachment)

    @inlineCallbacks
    def _orphanAttachment(self, home, calendar, event):
        txn = self._sqlCalendarStore.newTransaction()

        # Reset dropbox id in calendar_object
        home = (yield txn.calendarHomeWithUID(home))
        calendar = (yield home.calendarWithName(calendar))
        event = (yield calendar.calendarObjectWithName(event))
        co = schema.CALENDAR_OBJECT
        Update(
            {co.DROPBOX_ID: None, },
            Where=co.RESOURCE_ID == event._resourceID,
        ).on(txn)
        (yield txn.commit())

    @inlineCallbacks
    def _addManagedAttachment(self, home, calendar, event, name):
        txn = self._sqlCalendarStore.newTransaction()

        # Create an event with an attachment
        home = (yield txn.calendarHomeWithUID(home))
        calendar = (yield home.calendarWithName(calendar))
        event = (yield calendar.calendarObjectWithName(event))
        attachment = (yield event.createManagedAttachment())
        t = attachment.store(MimeType("text", "x-fixture"), name)
        t.write("%s/%s/%s/%s" % (home, calendar, event, name,))
        t.write(" managed attachment")
        (yield t.loseConnection())
        (yield txn.commit())

        returnValue(attachment)

    @inlineCallbacks
    def test_removeOrphanedAttachments(self):
        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota1 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertEqual(quota1, 0)

        attachment = (yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1"))
        attachmentPath = attachment._path.path
        self.assertTrue(os.path.exists(attachmentPath))

        mattachment1 = (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1"))
        mattachment2 = (yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3"))
        mattachmentPath1 = mattachment1._path.path
        self.assertTrue(os.path.exists(mattachmentPath1))
        mattachmentPath2 = mattachment2._path.path
        self.assertTrue(os.path.exists(mattachmentPath2))

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota2 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota2 > quota1)

        orphans = (yield self.transactionUnderTest().orphanedAttachments())
        self.assertEquals(len(orphans), 0)

        count = (yield self.transactionUnderTest().removeOrphanedAttachments(batchSize=100))
        self.assertEquals(count, 0)

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertNotEqual(quota, 0)

        # Files still exist
        self.assertTrue(os.path.exists(attachmentPath))
        self.assertTrue(os.path.exists(mattachmentPath1))
        self.assertTrue(os.path.exists(mattachmentPath2))

        # Delete all old events (including the event containing the attachment)
        cutoff = DateTime(now, 4, 1, 0, 0, 0)
        count = (yield self.transactionUnderTest().removeOldEvents(cutoff))

        # See which events have gone and which exist
        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        calendar = (yield home.calendarWithName("calendar1"))
        self.assertNotEqual((yield calendar.calendarObjectWithName("endless.ics")), None)
        self.assertNotEqual((yield calendar.calendarObjectWithName("currentmattachment3.ics")), None)
        self.assertEqual((yield calendar.calendarObjectWithName("old.ics")), None)
        self.assertEqual((yield calendar.calendarObjectWithName("oldattachment1.ics")), None)
        self.assertEqual((yield calendar.calendarObjectWithName("oldmattachment1.ics")), None)
        (yield self.commit())

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota3 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota3 < quota2)
        self.assertNotEqual(quota3, 0)

        # Just look for orphaned attachments - none left
        orphans = (yield self.transactionUnderTest().orphanedAttachments())
        self.assertEquals(len(orphans), 0)

        # Files
        self.assertFalse(os.path.exists(attachmentPath))
        self.assertFalse(os.path.exists(mattachmentPath1))
        self.assertTrue(os.path.exists(mattachmentPath2))

    @inlineCallbacks
    def test_purgeOldEvents(self):
        # Dry run
        total = (yield PurgeOldEventsService.purgeOldEvents(
            self._sqlCalendarStore, None, DateTime(now, 4, 1, 0, 0, 0), 2,
            dryrun=True, debug=True
        ))
        self.assertEquals(total, 13)

        # Actually remove
        total = (yield PurgeOldEventsService.purgeOldEvents(
            self._sqlCalendarStore, None, DateTime(now, 4, 1, 0, 0, 0), 2,
            debug=True
        ))
        self.assertEquals(total, 13)

        # There should be no more left
        total = (yield PurgeOldEventsService.purgeOldEvents(
            self._sqlCalendarStore, None, DateTime(now, 4, 1, 0, 0, 0), 2,
            debug=True
        ))
        self.assertEquals(total, 0)

    @inlineCallbacks
    def test_purgeOldEvents_home_filtering(self):
        # Dry run
        total = (yield PurgeOldEventsService.purgeOldEvents(
            self._sqlCalendarStore, "ho", DateTime(now, 4, 1, 0, 0, 0), 2,
            dryrun=True, debug=True
        ))
        self.assertEquals(total, 13)

        # Dry run
        total = (yield PurgeOldEventsService.purgeOldEvents(
            self._sqlCalendarStore, "home", DateTime(now, 4, 1, 0, 0, 0), 2,
            dryrun=True, debug=True
        ))
        self.assertEquals(total, 13)

        # Dry run
        total = (yield PurgeOldEventsService.purgeOldEvents(
            self._sqlCalendarStore, "home1", DateTime(now, 4, 1, 0, 0, 0), 2,
            dryrun=True, debug=True
        ))
        self.assertEquals(total, 5)

        # Dry run
        total = (yield PurgeOldEventsService.purgeOldEvents(
            self._sqlCalendarStore, "home2", DateTime(now, 4, 1, 0, 0, 0), 2,
            dryrun=True, debug=True
        ))
        self.assertEquals(total, 8)

    @inlineCallbacks
    def test_purgeOldEvents_old_cutoff(self):
        # Dry run
        cutoff = DateTime.getToday()
        cutoff.setDateOnly(False)
        cutoff.offsetDay(-400)
        total = (yield PurgeOldEventsService.purgeOldEvents(
            self._sqlCalendarStore, "ho", cutoff, 2,
            dryrun=True, debug=True
        ))
        self.assertEquals(total, 12)

        # Actually remove
        total = (yield PurgeOldEventsService.purgeOldEvents(
            self._sqlCalendarStore, None, cutoff, 2,
            debug=True
        ))
        self.assertEquals(total, 12)

        total = (yield PurgeOldEventsService.purgeOldEvents(
            self._sqlCalendarStore, "ho", cutoff, 2,
            dryrun=True, debug=True
        ))
        self.assertEquals(total, 0)

    @inlineCallbacks
    def test_purgeUID(self):
        txn = self._sqlCalendarStore.newTransaction()

        # Create an addressbook and one CardDAV resource
        abHome = (yield txn.addressbookHomeWithUID("home1", create=True))
        abColl = (yield abHome.addressbookWithName("addressbook"))
        (yield abColl.createAddressBookObjectWithName(
            "card1", VCardComponent.fromString(VCARD_1)))
        self.assertEquals(len((yield abColl.addressbookObjects())), 1)

        # Verify there are 8 events in calendar1
        calHome = (yield txn.calendarHomeWithUID("home1"))
        calColl = (yield calHome.calendarWithName("calendar1"))
        self.assertEquals(len((yield calColl.calendarObjects())), 8)

        # Make the newly created objects available to the purgeUID transaction
        (yield txn.commit())

        # Purge home1 completely
        total = yield PurgePrincipalService.purgeUIDs(
            self._sqlCalendarStore, self.directory,
            ("home1",), verbose=False, proxies=False)

        # Wait for queue to process
        while(True):
            txn = self.transactionUnderTest()
            work = yield PrincipalPurgeHomeWork.all(txn)
            yield self.commit()
            if len(work) == 0:
                break
            d = Deferred()
            reactor.callLater(1, lambda: d.callback(None))
            yield d

        # 9 items deleted: 8 events and 1 vcard
        self.assertEquals(total, 9)

        # Homes have been deleted as well
        txn = self._sqlCalendarStore.newTransaction()
        abHome = (yield txn.addressbookHomeWithUID("home1"))
        self.assertEquals(abHome, None)
        calHome = (yield txn.calendarHomeWithUID("home1"))
        self.assertEquals(calHome, None)

    @inlineCallbacks
    def test_purgeAttachmentsWithoutCutoffWithPurgeOld(self):
        """
        L{PurgeAttachmentsService.purgeAttachments} purges only orphaned
        attachments, not current ones.
        """
        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota1 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertEqual(quota1, 0)

        (yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1"))
        (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1"))
        (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2"))
        (yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3"))
        (yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4"))
        (yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5"))
        (yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6"))
        (yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7"))
        (yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics"))
        (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1"))
        (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1"))
        (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2"))
        (yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3"))
        (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4"))
        (yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5"))
        (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6"))
        (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7"))

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota2 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota2 > quota1)

        # Remove old events first
        total = (yield PurgeOldEventsService.purgeOldEvents(
            self._sqlCalendarStore, None, DateTime(now, 4, 1, 0, 0, 0), 2,
            debug=False
        ))
        self.assertEquals(total, 13)

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota3 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota3 < quota2)

        # Dry run
        total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 0, 2, dryrun=True, verbose=False))
        self.assertEquals(total, 1)

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota4 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota4 == quota3)

        # Actually remove
        total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 0, 2, dryrun=False, verbose=False))
        self.assertEquals(total, 1)

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota5 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota5 < quota4)

        # There should be no more left
        total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 0, 2, dryrun=False, verbose=False))
        self.assertEquals(total, 0)

    @inlineCallbacks
    def test_purgeAttachmentsWithoutCutoff(self):
        """
        L{PurgeAttachmentsService.purgeAttachments} purges only orphaned
        attachments, not current ones.
""" home = (yield self.transactionUnderTest().calendarHomeWithUID("home1")) quota1 = (yield home.quotaUsedBytes()) (yield self.commit()) self.assertEqual(quota1, 0) (yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1")) (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1")) (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2")) (yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3")) (yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4")) (yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5")) (yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6")) (yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7")) (yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics")) (yield self._orphanAttachment("home2", "calendar2", "oldattachment1.ics")) (yield self._orphanAttachment("home2", "calendar2", "currentattachment2.ics")) (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1")) (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1")) (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2")) (yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3")) (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4")) (yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5")) (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6")) (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7")) home = (yield self.transactionUnderTest().calendarHomeWithUID("home1")) quota2 = (yield home.quotaUsedBytes()) (yield self.commit()) self.assertTrue(quota2 > quota1) # Dry run total = (yield 
PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 0, 2, dryrun=True, verbose=False)) self.assertEquals(total, 3) home = (yield self.transactionUnderTest().calendarHomeWithUID("home1")) quota3 = (yield home.quotaUsedBytes()) (yield self.commit()) self.assertTrue(quota3 == quota2) # Actually remove total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 0, 2, dryrun=False, verbose=False)) self.assertEquals(total, 3) home = (yield self.transactionUnderTest().calendarHomeWithUID("home1")) quota4 = (yield home.quotaUsedBytes()) (yield self.commit()) self.assertTrue(quota4 < quota3) # There should be no more left total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 0, 2, dryrun=False, verbose=False)) self.assertEquals(total, 0) @inlineCallbacks def test_purgeAttachmentsWithoutCutoffWithMatchingUUID(self): """ L{PurgeAttachmentsService.purgeAttachments} purges only orphaned attachments, not current ones. """ home = (yield self.transactionUnderTest().calendarHomeWithUID("home1")) quota1 = (yield home.quotaUsedBytes()) (yield self.commit()) self.assertEqual(quota1, 0) (yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1")) (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1")) (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2")) (yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3")) (yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4")) (yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5")) (yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6")) (yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7")) (yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics")) (yield self._orphanAttachment("home2", "calendar2", "oldattachment1.ics")) (yield 
self._orphanAttachment("home2", "calendar2", "currentattachment2.ics")) (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1")) (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1")) (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2")) (yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3")) (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4")) (yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5")) (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6")) (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7")) home = (yield self.transactionUnderTest().calendarHomeWithUID("home1")) quota2 = (yield home.quotaUsedBytes()) (yield self.commit()) self.assertTrue(quota2 > quota1) # Dry run total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home1", 0, 2, dryrun=True, verbose=False)) self.assertEquals(total, 1) home = (yield self.transactionUnderTest().calendarHomeWithUID("home1")) quota3 = (yield home.quotaUsedBytes()) (yield self.commit()) self.assertTrue(quota3 == quota2) # Actually remove total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home1", 0, 2, dryrun=False, verbose=False)) self.assertEquals(total, 1) home = (yield self.transactionUnderTest().calendarHomeWithUID("home1")) quota4 = (yield home.quotaUsedBytes()) (yield self.commit()) self.assertTrue(quota4 < quota3) # There should be no more left total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home1", 0, 2, dryrun=False, verbose=False)) self.assertEquals(total, 0) @inlineCallbacks def test_purgeAttachmentsWithoutCutoffWithoutMatchingUUID(self): """ L{PurgeAttachmentsService.purgeAttachments} purges only orphaned attachments, not current ones. 
""" home = (yield self.transactionUnderTest().calendarHomeWithUID("home1")) quota1 = (yield home.quotaUsedBytes()) (yield self.commit()) self.assertEqual(quota1, 0) (yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1")) (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1")) (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2")) (yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3")) (yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4")) (yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5")) (yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6")) (yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7")) (yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics")) (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1")) (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1")) (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2")) (yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3")) (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4")) (yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5")) (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6")) (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7")) home = (yield self.transactionUnderTest().calendarHomeWithUID("home1")) quota2 = (yield home.quotaUsedBytes()) (yield self.commit()) self.assertTrue(quota2 > quota1) # Dry run total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home2", 0, 2, dryrun=True, verbose=False)) self.assertEquals(total, 0) home = (yield 
self.transactionUnderTest().calendarHomeWithUID("home1")) quota3 = (yield home.quotaUsedBytes()) (yield self.commit()) self.assertTrue(quota3 == quota2) # Actually remove total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home2", 0, 2, dryrun=False, verbose=False)) self.assertEquals(total, 0) home = (yield self.transactionUnderTest().calendarHomeWithUID("home1")) quota4 = (yield home.quotaUsedBytes()) (yield self.commit()) self.assertTrue(quota4 == quota3) # There should be no more left total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home2", 0, 2, dryrun=False, verbose=False)) self.assertEquals(total, 0) @inlineCallbacks def test_purgeAttachmentsWithCutoffOld(self): """ L{PurgeAttachmentsService.purgeAttachments} purges only orphaned attachments, not current ones. """ home = (yield self.transactionUnderTest().calendarHomeWithUID("home1")) quota1 = (yield home.quotaUsedBytes()) (yield self.commit()) self.assertEqual(quota1, 0) (yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1")) (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1")) (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2")) (yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3")) (yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4")) (yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5")) (yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6")) (yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7")) (yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics")) (yield self._orphanAttachment("home2", "calendar2", "oldattachment1.ics")) (yield self._orphanAttachment("home2", "calendar2", "currentattachment2.ics")) (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1")) (yield 
self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1"))
        (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2"))
        (yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3"))
        (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4"))
        (yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5"))
        (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6"))
        (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7"))

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota2 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota2 > quota1)

        # Dry run
        total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 14, 2, dryrun=True, verbose=False))
        self.assertEquals(total, 13)

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota3 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota3 == quota2)

        # Actually remove
        total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 14, 2, dryrun=False, verbose=False))
        self.assertEquals(total, 13)

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota4 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota4 < quota3)

        # There should be no more left
        total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, None, 14, 2, dryrun=False, verbose=False))
        self.assertEquals(total, 0)

    @inlineCallbacks
    def test_purgeAttachmentsWithCutoffOldWithMatchingUUID(self):
        """
        L{PurgeAttachmentsService.purgeAttachments} purges only orphaned
        attachments, not current ones.
        """
        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota1 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertEqual(quota1, 0)

        (yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1"))
        (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1"))
        (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2"))
        (yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3"))
        (yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4"))
        (yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5"))
        (yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6"))
        (yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7"))
        (yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics"))
        (yield self._orphanAttachment("home2", "calendar2", "oldattachment1.ics"))
        (yield self._orphanAttachment("home2", "calendar2", "currentattachment2.ics"))
        (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1"))
        (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1"))
        (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2"))
        (yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3"))
        (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4"))
        (yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5"))
        (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6"))
        (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7"))

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota2 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota2 > quota1)

        # Dry run
        total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home1", 14, 2, dryrun=True, verbose=False))
        self.assertEquals(total, 6)

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota3 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota3 == quota2)

        # Actually remove
        total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home1", 14, 2, dryrun=False, verbose=False))
        self.assertEquals(total, 6)

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota4 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota4 < quota3)

        # There should be no more left
        total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home1", 14, 2, dryrun=False, verbose=False))
        self.assertEquals(total, 0)

    @inlineCallbacks
    def test_purgeAttachmentsWithCutoffOldWithoutMatchingUUID(self):
        """
        L{PurgeAttachmentsService.purgeAttachments} purges only orphaned
        attachments, not current ones.
        """
        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota1 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertEqual(quota1, 0)

        (yield self._addAttachment("home1", "calendar1", "oldattachment1.ics", "att1"))
        (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.1"))
        (yield self._addAttachment("home1", "calendar1", "oldattachment2.ics", "att2.2"))
        (yield self._addAttachment("home1", "calendar1", "currentattachment3.ics", "att3"))
        (yield self._addAttachment("home2", "calendar2", "oldattachment1.ics", "att4"))
        (yield self._addAttachment("home2", "calendar2", "currentattachment2.ics", "att5"))
        (yield self._addAttachment("home2", "calendar2", "oldattachment3.ics", "att6"))
        (yield self._addAttachment("home2", "calendar2", "oldattachment4.ics", "att7"))
        (yield self._orphanAttachment("home1", "calendar1", "oldattachment1.ics"))
        (yield self._orphanAttachment("home2", "calendar2", "oldattachment1.ics"))
        (yield self._orphanAttachment("home2", "calendar2", "currentattachment2.ics"))
        (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment1.ics", "matt1"))
        (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.1"))
        (yield self._addManagedAttachment("home1", "calendar1", "oldmattachment2.ics", "matt2.2"))
        (yield self._addManagedAttachment("home1", "calendar1", "currentmattachment3.ics", "matt3"))
        (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment1.ics", "matt4"))
        (yield self._addManagedAttachment("home2", "calendar2", "currentmattachment2.ics", "matt5"))
        (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment3.ics", "matt6"))
        (yield self._addManagedAttachment("home2", "calendar2", "oldmattachment4.ics", "matt7"))

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota2 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota2 > quota1)

        # Dry run
        total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home2", 14, 2, dryrun=True, verbose=False))
        self.assertEquals(total, 7)

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota3 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota3 == quota2)

        # Actually remove
        total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home2", 14, 2, dryrun=False, verbose=False))
        self.assertEquals(total, 7)

        home = (yield self.transactionUnderTest().calendarHomeWithUID("home1"))
        quota4 = (yield home.quotaUsedBytes())
        (yield self.commit())
        self.assertTrue(quota4 == quota3)

        # There should be no more left
        total = (yield PurgeAttachmentsService.purgeAttachments(self._sqlCalendarStore, "home2", 14, 2, dryrun=False, verbose=False))
        self.assertEquals(total, 0)


# ==== File: calendarserver-9.1+dfsg/calendarserver/tools/test/test_resources.py ====

##
# Copyright (c) 2005-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from twisted.internet.defer import inlineCallbacks

from calendarserver.tools.resources import migrateResources
from twistedcaldav.test.util import StoreTestCase
from txdav.who.test.support import InMemoryDirectoryService
from twext.who.directory import DirectoryRecord
from txdav.who.idirectory import RecordType as CalRecordType
from txdav.who.directory import CalendarDirectoryRecordMixin


class TestRecord(DirectoryRecord, CalendarDirectoryRecordMixin):
    pass


class MigrateResourcesTest(StoreTestCase):

    @inlineCallbacks
    def setUp(self):
        yield super(MigrateResourcesTest, self).setUp()
        self.store = self.storeUnderTest()

        self.sourceService = InMemoryDirectoryService(None)
        fieldName = self.sourceService.fieldName
        records = (
            TestRecord(
                self.sourceService,
                {
                    fieldName.uid: u"location1",
                    fieldName.shortNames: (u"loc1",),
                    fieldName.recordType: CalRecordType.location,
                }
            ),
            TestRecord(
                self.sourceService,
                {
                    fieldName.uid: u"location2",
                    fieldName.shortNames: (u"loc2",),
                    fieldName.recordType: CalRecordType.location,
                }
            ),
            TestRecord(
                self.sourceService,
                {
                    fieldName.uid: u"resource1",
                    fieldName.shortNames: (u"res1",),
                    fieldName.recordType: CalRecordType.resource,
                }
            ),
        )
        yield self.sourceService.updateRecords(records, create=True)

    @inlineCallbacks
    def test_migrateResources(self):
        # Record location1 has not been migrated
        record = yield self.directory.recordWithUID(u"location1")
        self.assertEquals(record, None)

        # Migrate location1, location2, and resource1
        yield migrateResources(self.sourceService, self.directory)

        record = yield self.directory.recordWithUID(u"location1")
        self.assertEquals(record.uid, u"location1")
        self.assertEquals(record.shortNames[0], u"loc1")

        record = yield self.directory.recordWithUID(u"location2")
        self.assertEquals(record.uid, u"location2")
        self.assertEquals(record.shortNames[0], u"loc2")

        record = yield self.directory.recordWithUID(u"resource1")
        self.assertEquals(record.uid, u"resource1")
        self.assertEquals(record.shortNames[0], u"res1")

        # Add a new location to the sourceService, and modify an existing
        # location
        fieldName = self.sourceService.fieldName
        newRecords = (
            TestRecord(
                self.sourceService,
                {
                    fieldName.uid: u"location1",
                    fieldName.shortNames: (u"newloc1",),
                    fieldName.recordType: CalRecordType.location,
                }
            ),
            TestRecord(
                self.sourceService,
                {
                    fieldName.uid: u"location3",
                    fieldName.shortNames: (u"loc3",),
                    fieldName.recordType: CalRecordType.location,
                }
            ),
        )
        yield self.sourceService.updateRecords(newRecords, create=True)

        yield migrateResources(self.sourceService, self.directory)

        # Ensure an existing record does not get migrated again; verified by
        # seeing if shortNames changed, which they should not:
        record = yield self.directory.recordWithUID(u"location1")
        self.assertEquals(record.uid, u"location1")
        self.assertEquals(record.shortNames[0], u"loc1")

        # Ensure new record does get migrated
        record = yield self.directory.recordWithUID(u"location3")
        self.assertEquals(record.uid, u"location3")
        self.assertEquals(record.shortNames[0], u"loc3")


# ==== File: calendarserver-9.1+dfsg/calendarserver/tools/test/test_trash.py ====

# coding=utf-8
##
# Copyright (c) 2005-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from twisted.internet.defer import inlineCallbacks

from calendarserver.tools.trash import listTrashedEventsForPrincipal, listTrashedCollectionsForPrincipal, restoreTrashedEvent
from twistedcaldav.config import config
from txdav.xml import element
from txdav.base.propertystore.base import PropertyName
from txdav.common.datastore.test.test_trash import TrashTestsBase
from twistedcaldav.ical import Component


class TrashTool(TrashTestsBase):
    """
    Test trash tool.
    """

    @inlineCallbacks
    def setUp(self):
        yield super(TrashTool, self).setUp()
        self.patch(config, "EnableTrashCollection", True)

    @inlineCallbacks
    def test_trashEventWithNonAsciiSummary(self):
        """
        Test that non-ascii summary does not cause any problems.
        """
        data1 = """BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN
BEGIN:VEVENT
UID:5CE3B280-DBC9-4E8E-B0B2-996754020E5F
DTSTART;TZID=America/Los_Angeles:20141108T093000
DTEND;TZID=America/Los_Angeles:20141108T103000
CREATED:20141106T192546Z
DTSTAMP:20141106T192546Z
RRULE:FREQ=DAILY
SEQUENCE:0
SUMMARY:repeating évent
TRANSP:OPAQUE
END:VEVENT
END:VCALENDAR
"""

        txn = self.store.newTransaction()
        calendar = yield self._collectionForUser(txn, "user01", "tést-calendar", create=True)
        calendar.properties()[PropertyName.fromElement(element.DisplayName)] = element.DisplayName.fromString("tést-calendar-name")
        yield calendar.createObjectResourceWithName(
            "tést-resource.ics",
            Component.allFromString(data1)
        )
        yield txn.commit()

        txn = self.store.newTransaction()
        resource = yield self._getResource(txn, "user01", "tést-calendar", "tést-resource.ics")
        objID = resource._resourceID
        yield resource.remove()
        names = yield self._getTrashNames(txn, "user01")
        self.assertEquals(len(names), 1)
        yield txn.commit()

        yield listTrashedCollectionsForPrincipal(None, self.store, "user01")
        yield listTrashedEventsForPrincipal(None, self.store, "user01")
        yield restoreTrashedEvent(None, self.store, "user01", objID)


# ==== File: calendarserver-9.1+dfsg/calendarserver/tools/test/test_util.py ====

##
# Copyright (c) 2005-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

import os
import tempfile

from twistedcaldav.test.util import TestCase
from twistedcaldav.config import ConfigurationError
from calendarserver.tools.util import loadConfig, checkDirectory


class UtilTestCase(TestCase):

    def test_loadConfig(self):
        testRoot = os.path.join(os.path.dirname(__file__), "util")
        configPath = os.path.join(testRoot, "caldavd.plist")
        config = loadConfig(configPath)
        self.assertEquals(config.EnableCalDAV, True)
        self.assertEquals(config.EnableCardDAV, True)

    def test_checkDirectory(self):
        tmpDir = tempfile.mkdtemp()
        tmpFile = os.path.join(tmpDir, "tmpFile")
        self.assertRaises(ConfigurationError, checkDirectory, tmpFile, "Test file")
        os.rmdir(tmpDir)


# ==== Directory: calendarserver-9.1+dfsg/calendarserver/tools/test/util/ ====

# ==== File: calendarserver-9.1+dfsg/calendarserver/tools/test/util/caldavd.plist ====

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>EnableCalDAV</key>
    <true/>
    <key>EnableCardDAV</key>
    <true/>
</dict>
</plist>

# ==== File: calendarserver-9.1+dfsg/calendarserver/tools/trash.py ====

#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_trash -*-
##
# Copyright (c) 2015-2017 Apple Inc.
# All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

from argparse import ArgumentParser
import datetime

from calendarserver.tools.cmdline import utilityMain, WorkerService
from calendarserver.tools.util import prettyRecord, displayNameForCollection, agoString, locationString
from pycalendar.datetime import DateTime, Timezone
from twext.python.log import Logger
from twisted.internet.defer import inlineCallbacks, returnValue

log = Logger()


class TrashRestorationService(WorkerService):

    operation = None
    operationArgs = []

    def doWork(self):
        return self.operation(self.store, *self.operationArgs)


def main():

    parser = ArgumentParser(description='Restore events from trash')
    parser.add_argument('-f', '--config', dest='configFileName', metavar='CONFIGFILE', help='caldavd.plist configuration file path')
    parser.add_argument('-d', '--debug', action='store_true', help='show debug logging')
    parser.add_argument('-p', '--principal', dest='principal', help='the principal to use (uid)')
    parser.add_argument('-e', '--events', action='store_true', help='list trashed events')
    parser.add_argument('-c', '--collections', action='store_true', help='list trashed collections for principal (uid)')
    parser.add_argument('-r', '--recover', dest='resourceID', type=int, help='recover trashed collection or event (by resource ID)')
    parser.add_argument('--empty', action='store_true', help='empty the principal\'s trash')
    parser.add_argument('--days', type=int, default=0, help='number of past days to retain')
    args = parser.parse_args()

    if not args.principal:
        print("--principal missing")
        return

    if args.empty:
        operation = emptyTrashForPrincipal
        operationArgs = [args.principal, args.days]
    elif args.collections:
        if args.resourceID:
            operation = restoreTrashedCollection
            operationArgs = [args.principal, args.resourceID]
        else:
            operation = listTrashedCollectionsForPrincipal
            operationArgs = [args.principal]
    elif args.events:
        if args.resourceID:
            operation = restoreTrashedEvent
            operationArgs = [args.principal, args.resourceID]
        else:
            operation = listTrashedEventsForPrincipal
            operationArgs = [args.principal]
    else:
        operation = listTrashedCollectionsForPrincipal
        operationArgs = [args.principal]

    TrashRestorationService.operation = operation
    TrashRestorationService.operationArgs = operationArgs

    utilityMain(
        args.configFileName,
        TrashRestorationService,
        verbose=args.debug,
        loadTimezones=True
    )


@inlineCallbacks
def listTrashedCollectionsForPrincipal(service, store, principalUID):
    directory = store.directoryService()
    record = yield directory.recordWithUID(principalUID)
    if record is None:
        print("No record found for:", principalUID)
        returnValue(None)

    @inlineCallbacks
    def doIt(txn):
        home = yield txn.calendarHomeWithUID(principalUID)
        if home is None:
            print("No home for principal")
            returnValue(None)

        trash = yield home.getTrash()
        if trash is None:
            print("No trash available")
            returnValue(None)

        trashedCollections = yield home.children(onlyInTrash=True)
        if len(trashedCollections) == 0:
            print("No trashed collections for:", prettyRecord(record))
            returnValue(None)

        print("Trashed collections for:", prettyRecord(record))

        nowDT = datetime.datetime.utcnow()
        for collection in trashedCollections:
            displayName = displayNameForCollection(collection)
            whenTrashed = collection.whenTrashed()
            ago = nowDT - whenTrashed
            print()
            print("  Trashed {}:".format(agoString(ago)))
            print(
                "    \"{}\" (collection) Recovery ID = {}".format(
                    displayName.encode("utf-8"),
                    collection._resourceID
                )
            )
            startTime = whenTrashed - datetime.timedelta(minutes=5)
            children = yield trash.trashForCollection(
                collection._resourceID, start=startTime
            )
            print("    ...containing events:")
            for child in children:
                component = yield child.component()
                summary = component.mainComponent().propertyValue("SUMMARY", "")
                print("      \"{}\"".format(summary))

    yield store.inTransaction(label="List trashed collections", operation=doIt)


def startString(pydt):
    return pydt.getLocaleDateTime(DateTime.FULLDATE, False, True, pydt.getTimezoneID())


@inlineCallbacks
def printEventDetails(event):
    nowPyDT = DateTime.getNowUTC()
    nowDT = datetime.datetime.utcnow()
    oneYearInFuture = DateTime.getNowUTC()
    oneYearInFuture.offsetDay(365)

    component = yield event.component()
    mainSummary = component.mainComponent().propertyValue("SUMMARY", u"")
    whenTrashed = event.whenTrashed()
    ago = nowDT - whenTrashed

    print("  Trashed {}:".format(agoString(ago)))

    if component.isRecurring():
        print(
            "    \"{}\" (repeating) Recovery ID = {}".format(
                mainSummary, event._resourceID
            )
        )
        print("    ...upcoming instances:")
        instances = component.cacheExpandedTimeRanges(oneYearInFuture)
        instances = sorted(instances.instances.values(), key=lambda x: x.start)
        limit = 3
        count = 0
        for instance in instances:
            if instance.start >= nowPyDT:
                summary = instance.component.propertyValue("SUMMARY", u"")
                location = locationString(instance.component)
                tzid = instance.component.getProperty("DTSTART").parameterValue("TZID", None)
                dtstart = instance.start
                if tzid is not None:
                    timezone = Timezone(tzid=tzid)
                    dtstart.adjustTimezone(timezone)
                print("      \"{}\" {} {}".format(summary, startString(dtstart), location))
                count += 1
                limit -= 1
                if limit == 0:
                    break
        if not count:
            print("      (none)")

    else:
        print(
            "    \"{}\" (non-repeating) Recovery ID = {}".format(
                mainSummary, event._resourceID
            )
        )
        dtstart = component.mainComponent().propertyValue("DTSTART")
        location = locationString(component.mainComponent())
        print("      {} {}".format(startString(dtstart), location))


@inlineCallbacks
def listTrashedEventsForPrincipal(service, store, principalUID):
    directory = store.directoryService()
    record = yield directory.recordWithUID(principalUID)
    if record is None:
        print("No record found for:", principalUID)
        returnValue(None)

    @inlineCallbacks
    def doIt(txn):
        home = yield txn.calendarHomeWithUID(principalUID)
        if home is None:
            print("No home for principal")
            returnValue(None)

        trash = yield home.getTrash()
        if trash is None:
            print("No trash available")
            returnValue(None)

        untrashedCollections = yield home.children(onlyInTrash=False)
        if len(untrashedCollections) == 0:
            print("No untrashed collections for:", prettyRecord(record))
            returnValue(None)

        for collection in untrashedCollections:
            displayName = displayNameForCollection(collection)
            children = yield trash.trashForCollection(collection._resourceID)
            if len(children) == 0:
                continue

            print("Trashed events in calendar \"{}\":".format(displayName.encode("utf-8")))
            for child in children:
                print()
                yield printEventDetails(child)
            print("")

    yield store.inTransaction(label="List trashed events", operation=doIt)


@inlineCallbacks
def restoreTrashedCollection(service, store, principalUID, resourceID):
    directory = store.directoryService()
    record = yield directory.recordWithUID(principalUID)
    if record is None:
        print("No record found for:", principalUID)
        returnValue(None)

    @inlineCallbacks
    def doIt(txn):
        home = yield txn.calendarHomeWithUID(principalUID)
        if home is None:
            print("No home for principal")
            returnValue(None)

        collection = yield home.childWithID(resourceID, onlyInTrash=True)
        if collection is None:
            print("Collection {} is not in the trash".format(resourceID))
            returnValue(None)

        yield collection.fromTrash(
            restoreChildren=True,
            delta=datetime.timedelta(minutes=5),
            verbose=True
        )

    yield store.inTransaction(label="Restore trashed collection", operation=doIt)


@inlineCallbacks
def restoreTrashedEvent(service, store, principalUID, resourceID):
    directory = store.directoryService()
    record = yield directory.recordWithUID(principalUID)
    if record is None:
        print("No record found for:", principalUID)
        returnValue(None)

    @inlineCallbacks
    def doIt(txn):
        home = yield txn.calendarHomeWithUID(principalUID)
        if home is None:
            print("No home for principal")
            returnValue(None)

        trash = yield home.getTrash()
        if trash is None:
            print("No trash available")
            returnValue(None)

        child = yield trash.objectResourceWithID(resourceID)
        if child is None:
            print("Event not found")
            returnValue(None)

        component = yield child.component()
        summary = component.mainComponent().propertyValue("SUMMARY", "")
        print("Restoring \"{}\"".format(summary))
        yield child.fromTrash()

    yield store.inTransaction(label="Restore trashed event", operation=doIt)


@inlineCallbacks
def emptyTrashForPrincipal(service, store, principalUID, days, txn=None, verbose=True):
    directory = store.directoryService()
    record = yield directory.recordWithUID(principalUID)
    if record is None:
        if verbose:
            print("No record found for:", principalUID)
        returnValue(None)

    @inlineCallbacks
    def doIt(txn):
        home = yield txn.calendarHomeWithUID(principalUID)
        if home is None:
            if verbose:
                print("No home for principal")
            returnValue(None)

        yield home.emptyTrash(days=days, verbose=verbose)

    if txn is None:
        yield store.inTransaction(label="Empty trash", operation=doIt)
    else:
        yield doIt(txn)


if __name__ == "__main__":
    main()


# ==== File: calendarserver-9.1+dfsg/calendarserver/tools/upgrade.py ====

#!/usr/bin/env python
# -*- test-case-name: calendarserver.tools.test.test_upgrade -*-
##
# Copyright (c) 2006-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
This tool allows any necessary upgrade to complete, then exits.
"""

import os
import sys
import time

from txdav.common.datastore.sql import CommonDataStore

from twisted.internet.defer import succeed
from twisted.logger import LogLevel, formatEvent, formatTime
from twisted.logger import FilteringLogObserver, LogLevelFilterPredicate
from twisted.python.text import wordWrap
from twisted.python.usage import Options, UsageError

from twext.python.log import Logger

from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE

from calendarserver.tools.cmdline import utilityMain, WorkerService
from calendarserver.tap.caldav import CalDAVServiceMaker

log = Logger()


def usage(e=None):
    if e:
        print(e)
        print("")
    try:
        UpgradeOptions().opt_help()
    except SystemExit:
        pass
    if e:
        sys.exit(64)
    else:
        sys.exit(0)


description = '\n'.join(
    wordWrap(
        """
        Usage: calendarserver_upgrade [options] [input specifiers]\n
        """ + __doc__,
        int(os.environ.get('COLUMNS', '80'))
    )
)


class UpgradeOptions(Options):
    """
    Command-line options for 'calendarserver_upgrade'

    @ivar upgraders: a list of L{DirectoryUpgradeer} objects which can
        identify the calendars to upgrade, given a directory service.  This
        list is built by parsing --record and --collection options.
    """

    synopsis = description

    optFlags = [
        ['status', 's', "Check database status and exit."],
        ['check', 'c', "Check current database schema and exit."],
        ['postprocess', 'p', "Perform post-database-import processing."],
        ['debug', 'D', "Print log messages to STDOUT."],
    ]

    optParameters = [
        ['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."],
        ['prefix', 'x', "", "Only upgrade homes with the specified GUID prefix - partial upgrade only."],
    ]

    def __init__(self):
        super(UpgradeOptions, self).__init__()
        self.upgradeers = []
        self.outputName = '-'
        self.merge = False

    def opt_output(self, filename):
        """
        Specify output file path (default: '-', meaning stdout).
        """
        self.outputName = filename

    opt_o = opt_output

    def opt_merge(self):
        """
        Rather than skipping homes that exist on the filesystem but not in the
        database, merge their data into the existing homes.
        """
        self.merge = True

    opt_m = opt_merge

    def openOutput(self):
        """
        Open the appropriate output file based on the '--output' option.
        """
        if self.outputName == '-':
            return sys.stdout
        else:
            return open(self.outputName, 'wb')


class UpgraderService(WorkerService, object):
    """
    Service which runs, exports the appropriate records, then stops the
    reactor.
    """

    started = False

    def __init__(self, store, options, output, reactor, config):
        super(UpgraderService, self).__init__(store)
        self.options = options
        self.output = output
        self.reactor = reactor
        self.config = config
        self._directory = None

    def doWork(self):
        """
        Immediately stop.  The upgrade will have been run before this.
        """
        # If we get this far the database is OK
        if self.options["status"]:
            self.output.write("Database OK.\n")
        elif self.options["check"]:
            checkResult = getattr(CommonDataStore, "checkSchemaResults", None)
            if checkResult is not None:
                if checkResult:
                    self.output.write("Database check FAILED:\n{}\n".format("\n".join(checkResult)))
                else:
                    self.output.write("Database check OK.\n")
            else:
                self.output.write("Database check FAILED.\n")
        else:
            self.output.write("Upgrade complete, shutting down.\n")
        UpgraderService.started = True
        return succeed(None)

    def doWorkWithoutStore(self):
        """
        Immediately stop.  The upgrade will have been run before this and
        failed.
        """
        if self.options["status"]:
            self.output.write("Upgrade needed.\n")
        elif self.options["check"]:
            checkResult = getattr(CommonDataStore, "checkSchemaResults", None)
            if checkResult is not None:
                if checkResult:
                    self.output.write("Database check FAILED:\n{}\n".format("\n".join(checkResult)))
                else:
                    self.output.write("Database check OK.\n")
            else:
                self.output.write("Database check FAILED.\n")
        else:
            self.output.write("Upgrade failed.\n")
        UpgraderService.started = True
        return succeed(None)

    def postStartService(self):
        """
        Quit right away
        """
        from twisted.internet import reactor
        from twisted.internet.error import ReactorNotRunning
        try:
            reactor.stop()
        except ReactorNotRunning:
            # I don't care.
            pass

    def stopService(self):
        """
        Stop the service.  Nothing to do; everything should be finished by this
        time.
        """


def main(argv=sys.argv, stderr=sys.stderr, reactor=None):
    """
    Do the export.
    """
    from twistedcaldav.config import config
    if reactor is None:
        from twisted.internet import reactor

    options = UpgradeOptions()

    try:
        options.parseOptions(argv[1:])
    except UsageError, e:
        usage(e)

    try:
        output = options.openOutput()
    except IOError, e:
        stderr.write("Unable to open output file for writing: %s\n" % (e))
        sys.exit(1)

    if options.merge:
        def setMerge(data):
            data.MergeUpgrades = True
        config.addPostUpdateHooks([setMerge])

    def makeService(store):
        return UpgraderService(store, options, output, reactor, config)

    def onlyUpgradeEvents(eventDict):
        text = formatEvent(eventDict)
        output.write(formatTime(eventDict.get("log_time", time.time())) + " " + text + "\n")
        output.flush()

    if not options["status"] and not options["check"]:
        # When doing an upgrade always send L{LogLevel.warn} logging to the
        # tool output
        log.observer.addObserver(FilteringLogObserver(
            onlyUpgradeEvents,
            [LogLevelFilterPredicate(defaultLogLevel=LogLevel.warn), ]
        ))

    def customServiceMaker():
        customService = CalDAVServiceMaker()
        customService.doPostImport = options["postprocess"]
        return customService

    def _patchConfig(config):
        config.FailIfUpgradeNeeded = options["status"] or options["check"]
        config.CheckExistingSchema = options["check"]
        if options["prefix"]:
            config.UpgradeHomePrefix = options["prefix"]
        if not options["status"] and not options["check"]:
            config.DefaultLogLevel = "debug"

    def _onShutdown():
        if not UpgraderService.started:
            print("Failed to start service.")

    utilityMain(options["config"], makeService, reactor, customServiceMaker,
                patchConfig=_patchConfig, onShutdown=_onShutdown,
                verbose=options["debug"])


if __name__ == '__main__':
    main()


# ==== File: calendarserver-9.1+dfsg/calendarserver/tools/util.py ====

##
# Copyright (c) 2008-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Utility functionality shared between calendarserver tools. """ __all__ = [ "loadConfig", "UsageError", "booleanArgument", ] import datetime import os from time import sleep import socket from pwd import getpwnam from grp import getgrnam from calendarserver.tools import diagnose from twistedcaldav.config import config, ConfigurationError from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE from twext.python.log import Logger from twisted.internet.defer import inlineCallbacks, returnValue from twistedcaldav import memcachepool from txdav.base.propertystore.base import PropertyName from txdav.xml import element from pycalendar.datetime import DateTime, Timezone from twext.who.idirectory import RecordType from txdav.who.idirectory import RecordType as CalRecordType from txdav.who.delegates import Delegates, RecordType as DelegateRecordType log = Logger() def loadConfig(configFileName): """ Helper method for command-line utilities to load configuration plist and override certain values. 
""" if configFileName is None: configFileName = DEFAULT_CONFIG_FILE if not os.path.isfile(configFileName): raise ConfigurationError("No config file: %s" % (configFileName,)) config.load(configFileName) # Command-line utilities always want these enabled: config.EnableCalDAV = True config.EnableCardDAV = True return config class UsageError (StandardError): pass def booleanArgument(arg): if arg in ("true", "yes", "yup", "uh-huh", "1", "t", "y"): return True elif arg in ("false", "no", "nope", "nuh-uh", "0", "f", "n"): return False else: raise ValueError("Not a boolean: %s" % (arg,)) def autoDisableMemcached(config): """ Set ClientEnabled to False for each pool whose memcached is not running """ for pool in config.Memcached.Pools.itervalues(): if pool.ClientEnabled: try: if pool.get("MemcacheSocket"): s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) s.connect(pool.MemcacheSocket) s.close() else: s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.connect((pool.BindAddress, pool.Port)) s.close() except socket.error: pool.ClientEnabled = False def setupMemcached(config): # # Connect to memcached # memcachepool.installPools( config.Memcached.Pools, config.Memcached.MaxClients ) autoDisableMemcached(config) def checkDirectory(dirpath, description, access=None, create=None, wait=False): """ Make sure dirpath is an existing directory, and optionally ensure it has the expected permissions. Alternatively the function can create the directory or can wait for someone else to create it. @param dirpath: The directory path we're checking @type dirpath: string @param description: A description of what the directory path represents, used in log messages @type description: string @param access: The type of access we're expecting, either os.W_OK or os.R_OK @param create: A tuple of (file permissions mode, username, groupname) to use when creating the directory. If create=None then no attempt will be made to create the directory. 
@type create: tuple @param wait: Whether the function should wait in a loop for the directory to be created by someone else (or mounted, etc.) @type wait: boolean """ # Note: we have to use print here because the logging mechanism has not # been set up yet. if not os.path.exists(dirpath) or (diagnose.detectPhantomVolume(dirpath) == diagnose.EXIT_CODE_PHANTOM_DATA_VOLUME): if wait: # If we're being told to wait, post an alert that we can't continue # until the volume is mounted if not os.path.exists(dirpath) or (diagnose.detectPhantomVolume(dirpath) == diagnose.EXIT_CODE_PHANTOM_DATA_VOLUME): from calendarserver.tap.util import AlertPoster AlertPoster.postAlert("MissingDataVolumeAlert", 0, ["volumePath", dirpath]) while not os.path.exists(dirpath) or (diagnose.detectPhantomVolume(dirpath) == diagnose.EXIT_CODE_PHANTOM_DATA_VOLUME): if not os.path.exists(dirpath): print("Path does not exist: %s" % (dirpath,)) else: print("Path is not a real volume: %s" % (dirpath,)) sleep(5) else: try: mode, username, groupname = create except TypeError: raise ConfigurationError("%s does not exist: %s" % (description, dirpath)) try: os.mkdir(dirpath) except (OSError, IOError), e: print("Could not create %s: %s" % (dirpath, e)) raise ConfigurationError( "%s does not exist and cannot be created: %s" % (description, dirpath) ) if username: uid = getpwnam(username).pw_uid else: uid = -1 if groupname: gid = getgrnam(groupname).gr_gid else: gid = -1 try: os.chmod(dirpath, mode) os.chown(dirpath, uid, gid) except (OSError, IOError), e: print("Unable to change mode/owner of %s: %s" % (dirpath, e)) print("Created directory: %s" % (dirpath,)) if not os.path.isdir(dirpath): raise ConfigurationError("%s is not a directory: %s" % (description, dirpath)) if access and not os.access(dirpath, access): raise ConfigurationError( "Insufficient permissions for server on %s directory: %s" % (description, dirpath) ) @inlineCallbacks def principalForPrincipalID(principalID, checkOnly=False, directory=None):
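`checkDirectory` above bundles its `create` argument as a `(mode, username, groupname)` tuple and runs `mkdir`/`chmod`/`chown` when the path is missing. A minimal sketch of just the create-and-chmod path, leaving out the `chown` step (which normally requires privileges) and the alert/wait logic; `ensureDirectory` is a hypothetical name for illustration:

```python
import os
import stat
import tempfile

def ensureDirectory(dirpath, mode):
    # Create the directory if missing, then force the requested mode,
    # mirroring the mkdir/chmod portion of checkDirectory above.
    # (os.chmod sets the exact bits; it is not filtered by the umask.)
    if not os.path.exists(dirpath):
        os.mkdir(dirpath)
        os.chmod(dirpath, mode)
    if not os.path.isdir(dirpath):
        raise ValueError("%s is not a directory" % (dirpath,))
    return dirpath

base = tempfile.mkdtemp()
target = os.path.join(base, "data")
ensureDirectory(target, 0o750)
print(oct(stat.S_IMODE(os.stat(target).st_mode)))  # 0o750
```

The real helper additionally raises `ConfigurationError` when `create` is `None` and the path does not exist, and verifies the requested `os.W_OK`/`os.R_OK` access afterwards.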
# Allow a directory parameter to be passed in, but default to config.directory # But config.directory isn't set right away, so only use it when we're doing more # than checking. if not checkOnly and not directory: directory = config.directory if principalID.startswith("/"): segments = principalID.strip("/").split("/") if ( len(segments) == 3 and segments[0] == "principals" and segments[1] == "__uids__" ): uid = segments[2] else: raise ValueError("Can't resolve all paths yet") if checkOnly: returnValue(None) returnValue((yield directory.principalCollection.principalForUID(uid))) if principalID.startswith("("): try: i = principalID.index(")") if checkOnly: returnValue(None) recordType = principalID[1:i] shortName = principalID[i + 1:] if not recordType or not shortName or "(" in recordType: raise ValueError() returnValue((yield directory.principalCollection.principalForShortName(recordType, shortName))) except ValueError: pass if ":" in principalID: if checkOnly: returnValue(None) recordTypeDescription, shortName = principalID.split(":", 1) for recordType in ( RecordType.user, RecordType.group, CalRecordType.location, CalRecordType.resource, CalRecordType.address, ): if recordType.description == recordTypeDescription: break else: returnValue(None) returnValue((yield directory.principalCollection.principalForShortName(recordType, shortName))) try: if checkOnly: returnValue(None) returnValue((yield directory.principalCollection.principalForUID(principalID))) except ValueError: pass raise ValueError("Invalid principal identifier: %s" % (principalID,)) @inlineCallbacks def recordForPrincipalID(directory, principalID, checkOnly=False): if principalID.startswith("/"): segments = principalID.strip("/").split("/") if ( len(segments) == 3 and segments[0] == "principals" and segments[1] == "__uids__" ): uid = segments[2] else: raise ValueError("Can't resolve all paths yet") if checkOnly: returnValue(None) returnValue((yield directory.recordWithUID(uid))) if 
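`principalForPrincipalID` above (and `recordForPrincipalID` below) accept several identifier spellings: a principal path such as `/principals/__uids__/<uid>/`, the legacy `(recordType)shortName` form, `recordType:shortName`, and a bare UID. A standalone sketch of just the classification step, with the function and form names being hypothetical labels for illustration:

```python
def classifyPrincipalID(principalID):
    # Returns (form, recordType, name); form is one of "path", "paren",
    # "colon", or "uid", following the parsing order of
    # principalForPrincipalID above.
    if principalID.startswith("/"):
        segments = principalID.strip("/").split("/")
        if (len(segments) == 3 and segments[0] == "principals"
                and segments[1] == "__uids__"):
            return ("path", None, segments[2])
        raise ValueError("Can't resolve all paths yet")
    if principalID.startswith("("):
        i = principalID.index(")")
        recordType, shortName = principalID[1:i], principalID[i + 1:]
        if recordType and shortName and "(" not in recordType:
            return ("paren", recordType, shortName)
        raise ValueError("Malformed principal ID: %s" % (principalID,))
    if ":" in principalID:
        recordType, shortName = principalID.split(":", 1)
        return ("colon", recordType, shortName)
    return ("uid", None, principalID)

print(classifyPrincipalID("/principals/__uids__/ABC123/"))  # ('path', None, 'ABC123')
print(classifyPrincipalID("users:wsanchez"))                # ('colon', 'users', 'wsanchez')
```

The real functions then resolve the extracted record type and name against the directory service rather than returning them.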
principalID.startswith("("): try: i = principalID.index(")") if checkOnly: returnValue(None) recordType = directory.oldNameToRecordType(principalID[1:i]) shortName = principalID[i + 1:] if not recordType or not shortName or "(" in recordType: raise ValueError() returnValue((yield directory.recordWithShortName(recordType, shortName))) except ValueError: pass if ":" in principalID: if checkOnly: returnValue(None) recordType, shortName = principalID.split(":", 1) recordType = directory.oldNameToRecordType(recordType) if recordType is None: returnValue(None) returnValue((yield directory.recordWithShortName(recordType, shortName))) try: if checkOnly: returnValue(None) returnValue((yield directory.recordWithUID(principalID))) except ValueError: pass raise ValueError("Invalid principal identifier: %s" % (principalID,)) @inlineCallbacks def _addRemoveProxy(msg, fn, store, record, proxyType, *proxyIDs): directory = store.directoryService() readWrite = (proxyType == "write") for proxyID in proxyIDs: proxyRecord = yield recordForPrincipalID(directory, proxyID) if proxyRecord is None: print("Invalid principal ID: %s" % (proxyID,)) else: txn = store.newTransaction() yield fn(txn, record, proxyRecord, readWrite) yield txn.commit() print( "{msg} {proxy} as a {proxyType} proxy for {record}".format( msg=msg, proxy=prettyRecord(proxyRecord), proxyType=proxyType, record=prettyRecord(record) ) ) @inlineCallbacks def action_addProxy(store, record, proxyType, *proxyIDs): if config.GroupCaching.Enabled and config.GroupCaching.UseDirectoryBasedDelegates: if record.recordType in ( record.service.recordType.location, record.service.recordType.resource, ): print("You are not allowed to add proxies for locations or resources via command line when their proxy assignments come from the directory service.") returnValue(None) yield _addRemoveProxy("Added", Delegates.addDelegate, store, record, proxyType, *proxyIDs) @inlineCallbacks def action_removeProxy(store, record, *proxyIDs): if 
config.GroupCaching.Enabled and config.GroupCaching.UseDirectoryBasedDelegates: if record.recordType in ( record.service.recordType.location, record.service.recordType.resource, ): print("You are not allowed to remove proxies for locations or resources via command line when their proxy assignments come from the directory service.") returnValue(None) # Write yield _addRemoveProxy("Removed", Delegates.removeDelegate, store, record, "write", *proxyIDs) # Read yield _addRemoveProxy("Removed", Delegates.removeDelegate, store, record, "read", *proxyIDs) @inlineCallbacks def setProxies(record, readProxyRecords, writeProxyRecords): """ Set read/write proxies en masse for a record @param record: L{IDirectoryRecord} @param readProxyRecords: a list of records @param writeProxyRecords: a list of records """ proxyTypes = [ (DelegateRecordType.readDelegateGroup, readProxyRecords), (DelegateRecordType.writeDelegateGroup, writeProxyRecords), ] for recordType, proxyRecords in proxyTypes: if proxyRecords is None: continue proxyGroup = yield record.service.recordWithShortName( recordType, record.uid ) yield proxyGroup.setMembers(proxyRecords) @inlineCallbacks def getProxies(record): """ Returns a tuple containing the records for read proxies and write proxies of the given record """ allProxies = { DelegateRecordType.readDelegateGroup: [], DelegateRecordType.writeDelegateGroup: [], } for recordType in allProxies.iterkeys(): proxyGroup = yield record.service.recordWithShortName( recordType, record.uid ) allProxies[recordType] = yield proxyGroup.members() returnValue( ( allProxies[DelegateRecordType.readDelegateGroup], allProxies[DelegateRecordType.writeDelegateGroup] ) ) def proxySubprincipal(principal, proxyType): return principal.getChild("calendar-proxy-" + proxyType) def prettyPrincipal(principal): return prettyRecord(principal.record) def prettyRecord(record): try: shortNames = record.shortNames except AttributeError: shortNames = [] return "\"{d}\" {uid} ({rt}) {sn}".format( 
d=record.displayName.encode("utf-8"), rt=record.recordType.name, uid=record.uid, sn=(", ".join([sn.encode("utf-8") for sn in shortNames])) ) def displayNameForCollection(collection): try: displayName = collection.properties()[ PropertyName.fromElement(element.DisplayName) ] displayName = displayName.toString() except: displayName = collection.name() return displayName def agoString(delta): if delta.days: agoString = "{} days ago".format(delta.days) elif delta.seconds: if delta.seconds < 60: agoString = "{} second{} ago".format(delta.seconds, "s" if delta.seconds > 1 else "") else: minutesAgo = delta.seconds / 60 if minutesAgo < 60: agoString = "{} minute{} ago".format(minutesAgo, "s" if minutesAgo > 1 else "") else: hoursAgo = minutesAgo / 60 agoString = "{} hour{} ago".format(hoursAgo, "s" if hoursAgo > 1 else "") else: agoString = "Just now" return agoString def locationString(component): locationProps = component.properties("LOCATION") if locationProps is not None: locations = [] for locationProp in locationProps: locations.append(locationProp.value()) locationString = ", ".join(locations) else: locationString = "" return locationString @inlineCallbacks def getEventDetails(event): detail = {} nowPyDT = DateTime.getNowUTC() nowDT = datetime.datetime.utcnow() oneYearInFuture = DateTime.getNowUTC() oneYearInFuture.offsetDay(365) component = yield event.component() mainSummary = component.mainComponent().propertyValue("SUMMARY", u"") whenTrashed = event.whenTrashed() ago = nowDT - whenTrashed detail["summary"] = mainSummary detail["whenTrashed"] = agoString(ago) detail["recoveryID"] = event._resourceID if component.isRecurring(): detail["recurring"] = True detail["instances"] = [] instances = component.cacheExpandedTimeRanges(oneYearInFuture) instances = sorted(instances.instances.values(), key=lambda x: x.start) limit = 3 count = 0 for instance in instances: if instance.start >= nowPyDT: summary = instance.component.propertyValue("SUMMARY", u"") location = 
locationString(instance.component) tzid = instance.component.getProperty("DTSTART").parameterValue("TZID", None) dtstart = instance.start if tzid is not None: timezone = Timezone(tzid=tzid) dtstart.adjustTimezone(timezone) detail["instances"].append({ "summary": summary, "starttime": dtstart.getLocaleDateTime(DateTime.FULLDATE, False, True, dtstart.getTimezoneID()), "location": location }) count += 1 limit -= 1 if limit == 0: break else: detail["recurring"] = False dtstart = component.mainComponent().propertyValue("DTSTART") detail["starttime"] = dtstart.getLocaleDateTime(DateTime.FULLDATE, False, True, dtstart.getTimezoneID()) detail["location"] = locationString(component.mainComponent()) returnValue(detail) class ProxyError(Exception): """ Raised when proxy assignments cannot be performed """ class ProxyWarning(Exception): """ Raised for harmless proxy assignment failures such as trying to add a duplicate or remove a non-existent assignment. """ calendarserver-9.1+dfsg/calendarserver/tools/validcalendardata.py000077500000000000000000000145131315003562600254470ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function """ This tool takes data from stdin and validates it as iCalendar data suitable for the server. 
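The `agoString` helper in tools/util.py above turns a `timedelta` into a human-readable phrase, branching on days, then seconds, minutes, and hours. A standalone sketch using explicit floor division (the original runs under Python 2, where `/` on integers already floors):

```python
import datetime

def agoString(delta):
    # Render a timedelta as "N days/hours/minutes/seconds ago",
    # following the branch order of the helper above.
    if delta.days:
        return "{} days ago".format(delta.days)
    if delta.seconds:
        if delta.seconds < 60:
            return "{} second{} ago".format(delta.seconds, "s" if delta.seconds > 1 else "")
        minutesAgo = delta.seconds // 60
        if minutesAgo < 60:
            return "{} minute{} ago".format(minutesAgo, "s" if minutesAgo > 1 else "")
        hoursAgo = minutesAgo // 60
        return "{} hour{} ago".format(hoursAgo, "s" if hoursAgo > 1 else "")
    return "Just now"

print(agoString(datetime.timedelta(minutes=5)))  # 5 minutes ago
```

Note that the day branch always pluralizes ("1 days ago"), matching the original's behavior.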
""" from calendarserver.tools.cmdline import utilityMain, WorkerService from twisted.internet.defer import succeed from twisted.python.text import wordWrap from twisted.python.usage import Options from twistedcaldav.config import config from twistedcaldav.ical import Component, InvalidICalendarDataError from twistedcaldav.stdconfig import DEFAULT_CONFIG_FILE import os import sys def usage(e=None): if e: print(e) print("") try: ValidOptions().opt_help() except SystemExit: pass if e: sys.exit(64) else: sys.exit(0) description = '\n'.join( wordWrap( """ Usage: validcalendardata [options] [input specifiers]\n """, int(os.environ.get('COLUMNS', '80')) ) ) class ValidOptions(Options): """ Command-line options for 'validcalendardata' """ synopsis = description optFlags = [ ['verbose', 'v', "Verbose logging."], ['debug', 'D', "Debug logging."], ['parse-only', 'p', "Only validate parsing of the data."], ] optParameters = [ ['config', 'f', DEFAULT_CONFIG_FILE, "Specify caldavd.plist configuration path."], ] def __init__(self): super(ValidOptions, self).__init__() self.outputName = '-' self.inputName = '-' def opt_output(self, filename): """ Specify output file path (default: '-', meaning stdout). """ self.outputName = filename opt_o = opt_output def openOutput(self): """ Open the appropriate output file based on the '--output' option. """ if self.outputName == '-': return sys.stdout else: return open(self.outputName, "wb") def opt_input(self, filename): """ Specify input file path (default: '-', meaning stdin). """ self.inputName = filename opt_i = opt_input def openInput(self): """ Open the appropriate input file based on the '--input' option. """ if self.inputName == '-': return sys.stdin else: return open(os.path.expanduser(self.inputName), "rb") errorPrefix = "Calendar data had unfixable problems:\n " class ValidService(WorkerService, object): """ Service which runs, validates the input data, then stops the reactor.
""" def __init__(self, store, options, output, input, reactor, config): super(ValidService, self).__init__(store) self.options = options self.output = output self.input = input self.reactor = reactor self.config = config self._directory = None def doWork(self): """ Start the service. """ super(ValidService, self).startService() if self.options["parse-only"]: result, message = self.parseCalendarData() else: result, message = self.validCalendarData() if result: print("Calendar data OK") else: print(message) return succeed(None) def parseCalendarData(self): """ Check the calendar data for valid iCalendar data. """ result = True message = "" try: component = Component.fromString(self.input.read()) # Do underlying iCalendar library validation with data fix fixed, unfixed = component._pycalendar.validate(doFix=True) if unfixed: raise InvalidICalendarDataError("Calendar data had unfixable problems:\n %s" % ("\n ".join(unfixed),)) if fixed: print("Calendar data had fixable problems:\n %s" % ("\n ".join(fixed),)) except ValueError, e: result = False message = str(e) if message.startswith(errorPrefix): message = message[len(errorPrefix):] return (result, message,) def validCalendarData(self): """ Check the calendar data for valid iCalendar data. """ result = True message = "" truncated = False try: component = Component.fromString(self.input.read()) if getattr(self.config, "MaxInstancesForRRULE", 0) != 0: truncated = component.truncateRecurrence(config.MaxInstancesForRRULE) component.validCalendarData(doFix=False, validateRecurrences=True) component.validCalendarForCalDAV(methodAllowed=True) component.validOrganizerForScheduling(doFix=False) except ValueError, e: result = False message = str(e) if message.startswith(errorPrefix): message = message[len(errorPrefix):] if truncated: message = "Calendar data RRULE truncated\n" + message return (result, message,) def main(argv=sys.argv, stderr=sys.stderr, reactor=None): """ Do the export. 
""" if reactor is None: from twisted.internet import reactor options = ValidOptions() options.parseOptions(argv[1:]) try: output = options.openOutput() except IOError, e: stderr.write("Unable to open output file for writing: %s\n" % (e)) sys.exit(1) try: input = options.openInput() except IOError, e: stderr.write("Unable to open input file for reading: %s\n" % (e)) sys.exit(1) def makeService(store): return ValidService(store, options, output, input, reactor, config) utilityMain(options["config"], makeService, reactor, verbose=options["debug"]) if __name__ == "__main__": main() calendarserver-9.1+dfsg/calendarserver/webadmin/000077500000000000000000000000001315003562600220715ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/webadmin/__init__.py000066400000000000000000000012051315003562600242000ustar00rootroot00000000000000## # Copyright (c) 2009-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Calendar Server Web Admin UI. """ calendarserver-9.1+dfsg/calendarserver/webadmin/config.py000066400000000000000000000105061315003562600237120ustar00rootroot00000000000000# -*- test-case-name: calendarserver.webadmin.test.test_landing -*- ## # Copyright (c) 2009-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
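Both `parseCalendarData` and `validCalendarData` above strip a known `errorPrefix` off the exception text before reporting, so the user sees only the underlying problem list. The pattern in isolation, using the same prefix string; `reportableMessage` is a hypothetical name for illustration:

```python
errorPrefix = "Calendar data had unfixable problems:\n "

def reportableMessage(exc):
    # Drop the standard prefix, as the tool above does, so only the
    # underlying list of problems is shown to the user.
    message = str(exc)
    if message.startswith(errorPrefix):
        message = message[len(errorPrefix):]
    return message

print(reportableMessage(ValueError(errorPrefix + "DTSTART missing")))  # DTSTART missing
```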
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Calendar Server Configuration UI. """ __all__ = [ "ConfigurationResource", ] from twisted.web.template import renderer, tags as html from twext.python.log import Logger from .resource import PageElement, TemplateResource log = Logger() class ConfigurationPageElement(PageElement): """ Configuration management page element. """ def __init__(self, configuration): super(ConfigurationPageElement, self).__init__(u"config") self.configuration = configuration def pageSlots(self): slots = { u"title": u"Server Configuration", } for key in ( "ServerHostName", "HTTPPort", "SSLPort", "BindAddresses", "BindHTTPPorts", "BindSSLPorts", "EnableSSL", "RedirectHTTPToHTTPS", "SSLCertificate", "SSLPrivateKey", "SSLAuthorityChain", "SSLKeychainIdentity", "SSLMethod", "SSLCiphers", "EnableCalDAV", "EnableCardDAV", "ServerRoot", "EnableSSL", "UserQuota", "MaxCollectionsPerHome", "MaxResourcesPerCollection", "MaxResourceSize", "UserName", "GroupName", "ProcessType", "MultiProcess.MinProcessCount", "MultiProcess.ProcessCount", "MaxRequests", ): value = self.configuration for subkey in key.split("."): value = getattr(value, subkey) def describe(value): if value == u"": return u"(empty string)" if value is None: return u"(no value)" if isinstance(value, str): value = value.decode("utf-8") return html.code(unicode(value)) if isinstance(value, list): if len(value) == 0: result = u"(empty list)" else: result = [] for item in value: result.append(describe(item)) result.append(html.br()) result.pop() else: result = describe(value) slots[key] = result return slots @renderer def 
settings_header(self, request, tag): return tag( html.tr( html.th(u"Option"), html.th(u"Value"), html.th(u"Note"), ), ) @renderer def log_level_row(self, request, tag): def rowsForNamespaces(namespaces): for namespace in namespaces: if namespace is None: name = u"(default)" else: name = html.code(namespace) value = html.code( log.levels().logLevelForNamespace( namespace ).name ) yield tag.clone().fillSlots( log_level_name=name, log_level_value=value, ) # FIXME: Using private attributes is bad. namespaces = log.levels()._logLevelsByNamespace.keys() return rowsForNamespaces(namespaces) class ConfigurationResource(TemplateResource): """ Configuration management page resource. """ addSlash = False def __init__(self, configuration, principalCollections): super(ConfigurationResource, self).__init__( lambda: ConfigurationPageElement(configuration), principalCollections, isdir=False ) calendarserver-9.1+dfsg/calendarserver/webadmin/config.xhtml000066400000000000000000000175371315003562600244310ustar00rootroot00000000000000 <t:slot name="title" />
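`ConfigurationPageElement.pageSlots` above renders each configuration value through a nested `describe()` that special-cases empty strings, `None`, and lists (interleaving `<br/>` tags between list items and popping the trailing one). A plain-text analog of the same logic, assuming only strings, `None`, numbers, and lists appear:

```python
def describe(value):
    # Text rendering of a configuration value, mirroring the
    # HTML-producing describe() in webadmin/config.py above but
    # joining list items with newlines instead of <br/> tags.
    if value == u"":
        return u"(empty string)"
    if value is None:
        return u"(no value)"
    if isinstance(value, list):
        if not value:
            return u"(empty list)"
        return u"\n".join(describe(item) for item in value)
    return u"{}".format(value)

print(describe([]))            # (empty list)
print(describe(["a", None]))
```

Using a join here replaces the original's append-`<br/>`-then-pop idiom, which exists only because Twisted's template tags cannot be joined directly.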

Public Network Address Information

These settings determine how one accesses the server on the Internet. This may or may not be the same address that the server listens to itself. This may, for example, be the address for a load balancer which forwards requests to the server.
Host Name
HTTP Port For most servers, this should be 80.
HTTPS (TLS) Port For most servers, this should be 443.

Bound Network Address Information

These settings determine the actual network address that the server binds to. By default, this is the same as the public network address information provided above.
IP Addresses
HTTP Ports
TLS Ports

Transport Layer Security (TLS) Settings

These settings determine how one accesses the server on the Internet. This may or may not be the same address that the server listens to itself. This may, for example, be the address for a load balancer which forwards requests to the server.
Enable HTTPS (TLS) It is strongly advised that TLS is enabled on production servers.
Redirect To HTTPS Redirect clients to the secure port (recommended).
TLS Certificate
TLS Private Key
TLS Authority Chain
TLS Methods
TLS Ciphers

Service Settings

Enable CalDAV
Enable CardDAV

Data Store

Server Root
Data Source Name Database connection string.
Attachment Quota (Bytes)
Max # Collections Per Account Collections include calendars and address books.
Max # Objects Per Collection Objects include events, to-dos and contacts.
Max Object Size

Process Management

User Name
Group Name
Process Type
Min Process Count
Max Process Count
Max Requests Per Process

Log Levels

Namespace Log Level
calendarserver-9.1+dfsg/calendarserver/webadmin/delegation.html000066400000000000000000000177101315003562600251000ustar00rootroot00000000000000 Calendar Server Web Administration

Calendar Server Web Administration

Resource Management

Search for resource to manage:
No matches found for resource .
ID Full Name Type Short Names Email Addresses
select/admin/?resourceId=

Resource Details:

Auto-Schedule Mode
This resource has no proxies.
Read-Only Proxies Read-Write Proxies
mkWriteProxy| rmProxy| mkReadProxy| rmProxy|
No matches found for proxy resource .
Search to add proxies:
Full Name Type Short Names Email Addresses
mkReadProxy| mkWriteProxy|
calendarserver-9.1+dfsg/calendarserver/webadmin/delegation.py000066400000000000000000000533731315003562600245710ustar00rootroot00000000000000# -*- test-case-name: calendarserver.webadmin.test.test_resource -*- ## # Copyright (c) 2009-2014 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Calendar Server Web Admin UI. """ __all__ = [ "WebAdminResource", "WebAdminPage", ] import urlparse from calendarserver.tools.util import ( recordForPrincipalID, proxySubprincipal, action_addProxy, action_removeProxy, principalForPrincipalID ) from twistedcaldav.config import config from twistedcaldav.extensions import DAVFile, ReadOnlyResourceMixIn from twisted.internet.defer import inlineCallbacks, returnValue, succeed from txweb2.http import Response from txweb2.http_headers import MimeType from txweb2.stream import MemoryStream from twisted.python.modules import getModule from zope.interface.declarations import implements from txdav.xml import element as davxml from twisted.web.iweb import ITemplateLoader from twisted.web.template import ( Element, renderer, XMLFile, flattenString ) from twext.who.idirectory import RecordType from txdav.who.idirectory import RecordType as CalRecordType, AutoScheduleMode allowedAutoScheduleModes = { "default": None, "none": AutoScheduleMode.none, "accept-always": AutoScheduleMode.accept, "decline-always": AutoScheduleMode.decline, "accept-if-free": AutoScheduleMode.acceptIfFree, "decline-if-busy": 
AutoScheduleMode.declineIfBusy, "automatic": AutoScheduleMode.acceptIfFreeDeclineIfBusy, } class WebAdminPage(Element): """ Web administration renderer for HTML. @ivar resource: a L{WebAdminResource}. """ loader = XMLFile( getModule(__name__).filePath.sibling("delegation.html") ) def __init__(self, resource): super(WebAdminPage, self).__init__() self.resource = resource @renderer def main(self, request, tag): """ Main renderer, which fills page-global slots like 'title'. """ searchTerm = request.args.get('resourceSearch', [''])[0] return tag.fillSlots(resourceSearch=searchTerm) @renderer @inlineCallbacks def hasSearchResults(self, request, tag): """ Renderer which detects if there are resource search results and continues if so. """ if 'resourceSearch' not in request.args: returnValue('') if (yield self.performSearch(request)): returnValue(tag) else: returnValue('') @renderer @inlineCallbacks def noSearchResults(self, request, tag): """ Renderer which detects if there are resource search results and continues if so. """ if 'resourceSearch' not in request.args: returnValue('') rows = yield self.performSearch(request) if rows: returnValue("") else: returnValue(tag) _searchResults = None @inlineCallbacks def performSearch(self, request): """ Perform a directory search for users, groups, and resources based on the resourceSearch query parameter. Cache the results of that search so that it will only be done once per request. """ if self._searchResults is not None: returnValue(self._searchResults) searchTerm = request.args.get('resourceSearch', [''])[0] if searchTerm: results = sorted((yield self.resource.search(searchTerm)), key=lambda record: record.fullNames[0]) else: results = [] self._searchResults = results returnValue(results) @renderer def searchResults(self, request, tag): """ Renderer which renders resource search results. 
""" d = self.performSearch(request) return d.addCallback(searchToSlots, tag) @renderer @inlineCallbacks def resourceDetails(self, request, tag): """ Renderer which fills slots for details of the resource selected by the resourceId request parameter. """ resourceId = request.args.get('resourceId', [''])[0] propertyName = request.args.get('davPropertyName', [''])[0] proxySearch = request.args.get('proxySearch', [''])[0] if resourceId: principalResource = yield self.resource.getResourceById( request, resourceId) returnValue( DetailsElement( resourceId, principalResource, propertyName, proxySearch, tag, self.resource ) ) else: returnValue("") def searchToSlots(results, tag): """ Convert the result of doing a search to an iterable of tags. """ for idx, record in enumerate(results): if hasattr(record, "shortNames"): shortName = record.shortNames[0] shortNames = record.shortNames else: shortName = "(none)" shortNames = [shortName] if hasattr(record, "emailAddresses"): emailAddresses = record.emailAddresses else: emailAddresses = ["(none)"] yield tag.clone().fillSlots( rowClass="even" if (idx % 2 == 0) else "odd", type=record.recordType.description, shortName=shortName, name=record.fullNames[0], typeStr={ RecordType.user: "User", RecordType.group: "Group", CalRecordType.location: "Location", CalRecordType.resource: "Resource", CalRecordType.address: "Address", }.get(record.recordType), shortNames=str(", ".join(shortNames)), emails=str(", ".join(emailAddresses)), uid=str(record.uid), ) class stan(object): """ L{ITemplateLoader} wrapper for an existing tag, in the style of Nevow's 'stan' loader. 
""" implements(ITemplateLoader) def __init__(self, tag): self.tag = tag def load(self): return self.tag def recordTitle(record): return u"{} ({} {})".format(record.fullNames[0], record.recordType.description, record.uid) class DetailsElement(Element): def __init__(self, resourceId, principalResource, davPropertyName, proxySearch, tag, adminResource): self.principalResource = principalResource self.adminResource = adminResource self.proxySearch = proxySearch self.record = principalResource.record tag.fillSlots(resourceTitle=recordTitle(self.record), resourceId=resourceId, davPropertyName=davPropertyName, proxySearch=proxySearch) try: namespace, name = davPropertyName.split("#") except Exception: self.namespace = None self.name = None if davPropertyName: self.error = davPropertyName else: self.error = None else: self.namespace = namespace self.name = name self.error = None super(DetailsElement, self).__init__(loader=stan(tag)) @renderer def propertyParseError(self, request, tag): """ Renderer to display an error when the user specifies an invalid property name. """ if self.error is None: return "" else: return tag.fillSlots(davPropertyName=self.error) @renderer @inlineCallbacks def davProperty(self, request, tag): """ Renderer to display the value of the DAV property requested via the davPropertyName parameter. """ if self.name is not None: try: propval = yield self.principalResource.readProperty( (self.namespace, self.name), request ) except: propval = "No such property: " + "#".join([self.namespace, self.name]) else: propval = propval.toxml() returnValue(tag.fillSlots(value=propval)) else: returnValue("") @renderer def autoSchedule(self, request, tag): """ Renderer which elides its tag for non-resource-type principals.
""" if ( self.record.recordType.description != "user" and self.record.recordType.description != "group" or self.record.recordType.description == "user" and config.Scheduling.Options.AutoSchedule.AllowUsers ): return tag return "" @renderer def isAutoSchedule(self, request, tag): """ Renderer which sets the 'selected' attribute on its tag if the resource is auto-schedule. """ if self.record.autoScheduleMode is not AutoScheduleMode.none: tag(selected='selected') return tag @renderer def isntAutoSchedule(self, request, tag): """ Renderer which sets the 'selected' attribute on its tag if the resource is not auto-schedule. """ if self.record.autoScheduleMode is AutoScheduleMode.none: tag(selected='selected') return tag @renderer def autoScheduleModeNone(self, request, tag): """ Renderer which sets the 'selected' attribute on its tag based on the resource auto-schedule-mode. """ if self.record.autoScheduleMode is AutoScheduleMode.none: tag(selected='selected') return tag @renderer def autoScheduleModeAcceptAlways(self, request, tag): """ Renderer which sets the 'selected' attribute on its tag based on the resource auto-schedule-mode. """ if self.record.autoScheduleMode is AutoScheduleMode.accept: tag(selected='selected') return tag @renderer def autoScheduleModeDeclineAlways(self, request, tag): """ Renderer which sets the 'selected' attribute on its tag based on the resource auto-schedule-mode. """ if self.record.autoScheduleMode is AutoScheduleMode.decline: tag(selected='selected') return tag @renderer def autoScheduleModeAcceptIfFree(self, request, tag): """ Renderer which sets the 'selected' attribute on its tag based on the resource auto-schedule-mode. """ if self.record.autoScheduleMode is AutoScheduleMode.acceptIfFree: tag(selected='selected') return tag @renderer def autoScheduleModeDeclineIfBusy(self, request, tag): """ Renderer which sets the 'selected' attribute on its tag based on the resource auto-schedule-mode. 
""" if self.record.autoScheduleMode is AutoScheduleMode.declineIfBusy: tag(selected='selected') return tag @renderer def autoScheduleModeAutomatic(self, request, tag): """ Renderer which sets the 'selected' attribute on its tag based on the resource auto-schedule-mode. """ if self.record.autoScheduleMode is AutoScheduleMode.acceptIfFreeDeclineIfBusy: tag(selected='selected') return tag _matrix = None @inlineCallbacks def proxyMatrix(self, request): """ Compute a matrix of proxies to display in a 2-column table. This value is cached so that multiple renderers may refer to it without causing additional back-end queries. @return: a L{Deferred} which fires with a list of 2-tuples of (readProxy, writeProxy). If there is an unequal number of read and write proxies, the tables will be padded out with C{None}s so that some readProxy or writeProxy values will be C{None} at the end of the table. """ if self._matrix is not None: returnValue(self._matrix) (readSubPrincipal, writeSubPrincipal) = ( (yield proxySubprincipal(self.principalResource, "read")), (yield proxySubprincipal(self.principalResource, "write")) ) if readSubPrincipal or writeSubPrincipal: (readMembers, writeMembers) = ( (yield readSubPrincipal.readProperty(davxml.GroupMemberSet, None)), (yield writeSubPrincipal.readProperty(davxml.GroupMemberSet, None)) ) if readMembers.children or writeMembers.children: # FIXME: 'else' case needs to be handled by separate renderer readProxies = [] writeProxies = [] def getres(ref): return self.adminResource.getResourceById(request, str(proxyHRef)) for proxyHRef in sorted(readMembers.children, key=str): readProxies.append((yield getres(proxyHRef))) for proxyHRef in sorted(writeMembers.children, key=str): writeProxies.append((yield getres(proxyHRef))) lendiff = len(readProxies) - len(writeProxies) if lendiff > 0: writeProxies += [None] * lendiff elif lendiff < 0: readProxies += [None] * -lendiff self._matrix = zip(readProxies, writeProxies) else: self._matrix = [] else: 
self._matrix = [] returnValue(self._matrix) @renderer @inlineCallbacks def noProxies(self, request, tag): """ Renderer which shows its tag if there are no proxies for this resource. """ mtx = yield self.proxyMatrix(request) if mtx: returnValue("") returnValue(tag) @renderer @inlineCallbacks def hasProxies(self, request, tag): """ Renderer which shows its tag if there are any proxies for this resource. """ mtx = yield self.proxyMatrix(request) if mtx: returnValue(tag) returnValue("") @renderer @inlineCallbacks def noProxyResults(self, request, tag): """ Renderer which shows its tag if there are no proxy search results for this request. """ if not self.proxySearch: returnValue("") results = yield self.performProxySearch() if results: returnValue("") else: returnValue(tag) @renderer @inlineCallbacks def hasProxyResults(self, request, tag): """ Renderer which shows its tag if there are any proxy search results for this request. """ results = yield self.performProxySearch() if results: returnValue(tag) else: returnValue("") @renderer @inlineCallbacks def proxyRows(self, request, tag): """ Renderer which does zipping logic to render read-only and read-write rows of existing proxies for the currently-viewed resource. """ result = [] mtx = yield self.proxyMatrix(request) for idx, (readProxy, writeProxy) in enumerate(mtx): result.append(ProxyRow(tag.clone(), idx, readProxy, writeProxy)) returnValue(result) _proxySearchResults = None def performProxySearch(self): if self._proxySearchResults is not None: return succeed(self._proxySearchResults) if self.proxySearch: def nameSorted(records): self._proxySearchResults = sorted(records, key=lambda rec: rec.fullNames[0]) return records return self.adminResource.search( self.proxySearch).addCallback(nameSorted) else: return succeed([]) @renderer def proxySearchRows(self, request, tag): """ Renderer which renders search results for the proxy form. 
""" d = self.performProxySearch() return d.addCallback(searchToSlots, tag) class ProxyRow(Element): def __init__(self, tag, index, readProxy, writeProxy): tag.fillSlots(rowClass="even" if (index % 2 == 0) else "odd") super(ProxyRow, self).__init__(loader=stan(tag)) self.readProxy = readProxy self.writeProxy = writeProxy def proxies(self, proxyResource, tag): if proxyResource is None: return '' return tag.fillSlots(proxy=recordTitle(proxyResource.record), type=proxyResource.record.recordType.description, fullName=proxyResource.record.fullNames[0], uid=proxyResource.record.uid) def noProxies(self, proxyResource, tag): if proxyResource is None: return tag else: return "" @renderer def readOnlyProxies(self, request, tag): return self.proxies(self.readProxy, tag) @renderer def noReadOnlyProxies(self, request, tag): return self.noProxies(self.readProxy, tag) @renderer def readWriteProxies(self, request, tag): return self.proxies(self.writeProxy, tag) @renderer def noReadWriteProxies(self, request, tag): return self.noProxies(self.writeProxy, tag) class WebAdminResource (ReadOnlyResourceMixIn, DAVFile): """ Web administration HTTP resource. 
""" def __init__(self, path, root, directory, store, principalCollections=()): self.root = root self.directory = directory self.store = store super(WebAdminResource, self).__init__( path, principalCollections=principalCollections ) # Only allow administrators to access def defaultAccessControlList(self): return davxml.ACL(*config.AdminACEs) def etag(self): # Can't be calculated here return succeed(None) def contentLength(self): # Can't be calculated here return None def lastModified(self): return None def exists(self): return True def displayName(self): return "Web Admin" def contentType(self): return MimeType.fromString("text/html; charset=utf-8") def contentEncoding(self): return None def createSimilarFile(self, path): return DAVFile(path, principalCollections=self.principalCollections()) @inlineCallbacks def resourceActions(self, request, record): """ Take all actions on the given record based on the given request. """ def queryValue(arg): return request.args.get(arg, [""])[0] def queryValues(arg): query = urlparse.parse_qs(urlparse.urlparse(request.uri).query, True) matches = [] for key in query.keys(): if key.startswith(arg): matches.append(key[len(arg):]) return matches autoScheduleMode = queryValue("autoScheduleMode") makeReadProxies = queryValues("mkReadProxy|") makeWriteProxies = queryValues("mkWriteProxy|") removeProxies = queryValues("rmProxy|") # Update the auto-schedule-mode value if specified. if autoScheduleMode: if ( record.recordType != RecordType.user and record.recordType != RecordType.group or record.recordType == RecordType.user and config.Scheduling.Options.AutoSchedule.AllowUsers ): autoScheduleMode = allowedAutoScheduleModes[autoScheduleMode] yield record.setAutoScheduleMode(autoScheduleMode) record.autoScheduleMode = autoScheduleMode # Update the proxies if specified. 
if removeProxies: yield action_removeProxy(self.store, record, *removeProxies) if makeReadProxies: yield action_addProxy(self.store, record, "read", *makeReadProxies) if makeWriteProxies: yield action_addProxy(self.store, record, "write", *makeWriteProxies) @inlineCallbacks def render(self, request): """ Create a L{WebAdminPage} to render HTML content for this request, and return a response. """ resourceId = request.args.get('resourceId', [''])[0] if resourceId: record = yield recordForPrincipalID(self.directory, resourceId) yield self.resourceActions(request, record) htmlContent = yield flattenString(request, WebAdminPage(self)) response = Response() response.stream = MemoryStream(htmlContent) for (header, value) in ( ("content-type", self.contentType()), ("content-encoding", self.contentEncoding()), ): if value is not None: response.headers.setHeader(header, value) returnValue(response) def getResourceById(self, request, resourceId): if resourceId.startswith("/"): return request.locateResource(resourceId) else: return principalForPrincipalID(resourceId, directory=self.directory) @inlineCallbacks def search(self, searchStr): searchStr = unicode(searchStr, "utf-8") if isinstance(searchStr, str) else searchStr records = list((yield self.directory.recordsMatchingTokens(searchStr.strip().split()))) returnValue(records) calendarserver-9.1+dfsg/calendarserver/webadmin/eventsource.py000066400000000000000000000172411315003562600250120ustar00rootroot00000000000000# -*- test-case-name: calendarserver.webadmin.test.test_principals -*- ## # Copyright (c) 2014-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function """ Calendar Server principal management web UI. """ __all__ = [ "textAsEvent", "EventSourceResource", ] from collections import deque from zope.interface import implementer, Interface from twistedcaldav.simpleresource import SimpleResource from twisted.internet.defer import Deferred, succeed from txweb2.stream import IByteStream, fallbackSplit from txweb2.http_headers import MimeType from txweb2.http import Response def textAsEvent(text, eventID=None, eventClass=None, eventRetry=None): """ Format some text as an HTML5 EventSource event. Since the EventSource data format is text-oriented, this function expects L{unicode}, not L{bytes}; binary data should be encoded as text if it is to be used in an EventSource stream. UTF-8 encoded L{bytes} are returned because http://www.w3.org/TR/eventsource/ states that the only allowed encoding for C{text/event-stream} is UTF-8. @param text: The text (ie. the message) to send in the event. @type text: L{unicode} @param eventID: An unique identifier for the event. @type eventID: L{unicode} @param eventClass: A class name (ie. a categorization) for the event. @type eventClass: L{unicode} @param eventRetry: The retry interval (in milliseconds) for the client to wait before reconnecting if it gets disconnected. @type eventRetry: L{int} @return: An HTML5 EventSource event as text. 
@rtype: UTF-8 encoded L{bytes} """ if text is None: raise TypeError("text may not be None") event = [] if eventID is not None: event.append(u"id: {0}".format(eventID)) if eventClass is not None: event.append(u"event: {0}".format(eventClass)) if eventRetry is not None: event.append(u"retry: {0:d}".format(eventRetry)) event.extend( u"data: {0}".format(l) for l in text.split("\n") ) return (u"\n".join(event) + u"\n\n").encode("utf-8") class IEventDecoder(Interface): """ An object that can be used to extract data from an application-specific event object for encoding into an EventSource data stream. """ def idForEvent(event): """ @return: An unique identifier for the given event. @rtype: L{unicode} """ def classForEvent(event): """ @return: A class name (ie. a categorization) for the event. @rtype: L{unicode} """ def textForEvent(event): """ @return: The text (ie. the message) to send in the event. @rtype: L{unicode} """ def retryForEvent(event): """ @return: The retry interval (in milliseconds) for the client to wait before reconnecting if it gets disconnected. @rtype: L{int} """ class EventSourceResource(SimpleResource): """ Resource that vends HTML5 EventSource events. Events are stored in a ring buffer and streamed to clients. """ addSlash = False def __init__(self, eventDecoder, principalCollections, bufferSize=400): """ @param eventDecoder: An object that can be used to extract data from an event for encoding into an EventSource data stream. @type eventDecoder: L{IEventDecoder} @param bufferSize: The maximum number of events to keep in the ring buffer. 
@type bufferSize: L{int} """ super(EventSourceResource, self).__init__(principalCollections, isdir=False) self._eventDecoder = eventDecoder self._events = deque(maxlen=bufferSize) self._streams = set() def addEvents(self, events): self._events.extend(events) # Notify outbound streams that there is new data to vend for stream in self._streams: stream.didAddEvents() def render(self, request): lastID = request.headers.getRawHeaders(u"last-event-id") response = Response() response.stream = EventStream(self._eventDecoder, self._events, lastID) response.headers.setHeader( b"content-type", MimeType.fromString(b"text/event-stream") ) # Keep track of the event streams def cleanupFilter(_request, _response): self._streams.remove(response.stream) return _response # request.addResponseFilter(cleanupFilter) self._streams.add(response.stream) return response @implementer(IByteStream) class EventStream(object): """ L{IByteStream} that streams out HTML5 EventSource events. """ length = None def __init__(self, eventDecoder, events, lastID): """ @param eventDecoder: An object that can be used to extract data from an event for encoding into an EventSource data stream. @type eventDecoder: L{IEventDecoder} @param events: Application-specific event objects. @type events: sequence of L{object} @param lastID: The identifier for the last event that was vended from C{events}. Vending will resume starting from the following event. 
@type lastID: L{int} """ super(EventStream, self).__init__() self._eventDecoder = eventDecoder self._events = events self._lastID = lastID self._closed = False self._deferredRead = None def didAddEvents(self): d = self._deferredRead if d is not None: self._deferredRead = None d.callback(None) def read(self): if self._closed: return succeed(None) lastID = self._lastID eventID = None idForEvent = self._eventDecoder.idForEvent classForEvent = self._eventDecoder.classForEvent textForEvent = self._eventDecoder.textForEvent retryForEvent = self._eventDecoder.retryForEvent for event in self._events: eventID = idForEvent(event) # If lastID is not None, skip messages up to and including the one # referenced by lastID. if lastID is not None: if eventID == lastID: eventID = None lastID = None continue eventClass = classForEvent(event) eventText = textForEvent(event) eventRetry = retryForEvent(event) self._lastID = eventID return succeed( textAsEvent(eventText, eventID, eventClass, eventRetry) ) if eventID is not None: # We just scanned all the messages, and none are the last one the # client saw. self._lastID = None return succeed(b"") # # This causes the client to poll, which is undesirable, but the # # deferred below doesn't seem to work in real use... # return succeed(None) d = Deferred() self._deferredRead = d d.addCallback(lambda _: self.read()) return d def split(self, point): return fallbackSplit(self, point) def close(self): self._closed = True calendarserver-9.1+dfsg/calendarserver/webadmin/landing.py000066400000000000000000000066021315003562600240630ustar00rootroot00000000000000# -*- test-case-name: calendarserver.webadmin.test.test_landing -*- ## # Copyright (c) 2009-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Calendar Server Web Admin UI. """ __all__ = [ "WebAdminLandingResource", ] # from twisted.web.template import renderer from .resource import PageElement, TemplateResource class WebAdminLandingPageElement(PageElement): """ Web administration langing page element. """ def __init__(self): super(WebAdminLandingPageElement, self).__init__(u"landing") def pageSlots(self): return { u"title": u"Server Administration", } class WebAdminLandingResource(TemplateResource): """ Web administration landing page resource. """ addSlash = True def __init__(self, path, root, directory, store, principalCollections=()): super(WebAdminLandingResource, self).__init__(WebAdminLandingPageElement, principalCollections, True) from twistedcaldav.config import config as configuration self.configuration = configuration self.directory = directory self.store = store # self._path = path # self._root = root # self._principalCollections = principalCollections from .config import ConfigurationResource self.putChild(u"config", ConfigurationResource(configuration, principalCollections)) from .principals import PrincipalsResource self.putChild(u"principals", PrincipalsResource(directory, store, principalCollections)) from .logs import LogsResource self.putChild(u"logs", LogsResource(principalCollections)) from .work import WorkMonitorResource self.putChild(u"work", WorkMonitorResource(store, principalCollections)) # def getChild(self, name): # bound = super(WebAdminLandingResource, self).getChild(name) # if bound is not None: # return bound # # # # Dynamically load and vend child resources not 
bound using putChild() # # in __init__(). This is useful for development, since it allows one # # to comment out the putChild() call above, and then code will be # # re-loaded for each request. # # # from . import config, principals, logs, work # if name == u"config": # reload(config) # return config.ConfigurationResource(self.configuration, self._principalCollections) # elif name == u"principals": # reload(principals) # return principals.PrincipalsResource(self.directory, self.store, self._principalCollections) # elif name == u"logs": # reload(logs) # return logs.LogsResource(self._principalCollections) # elif name == u"work": # reload(work) # return work.WorkMonitorResource(self.store, self._principalCollections) # return None calendarserver-9.1+dfsg/calendarserver/webadmin/landing.xhtml000066400000000000000000000006261315003562600245670ustar00rootroot00000000000000 <t:slot name="title" />

calendarserver-9.1+dfsg/calendarserver/webadmin/logs.py000066400000000000000000000104701315003562600234110ustar00rootroot00000000000000# -*- test-case-name: calendarserver.webadmin.test.test_logs -*- ## # Copyright (c) 2014-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function """ Calendar Server log viewing web UI. """ __all__ = [ "LogsResource", "LogEventsResource", ] from zope.interface import implementer from twisted.python.log import FileLogObserver from calendarserver.accesslog import CommonAccessLoggingObserverExtensions from .eventsource import EventSourceResource, IEventDecoder from .resource import PageElement, TemplateResource class LogsPageElement(PageElement): """ Logs page element. """ def __init__(self): super(LogsPageElement, self).__init__(u"logs") def pageSlots(self): return { u"title": u"Server Logs", } class LogsResource(TemplateResource): """ Logs page resource. """ addSlash = True def __init__(self, principalCollections): super(LogsResource, self).__init__(LogsPageElement, principalCollections, isdir=False) self.putChild(u"events", LogEventsResource(principalCollections)) class LogEventsResource(EventSourceResource): """ Log event vending resource. 
""" def __init__(self, principalCollections): super(LogEventsResource, self).__init__(EventDecoder, principalCollections) self.observers = ( AccessLogObserver(self), ServerLogObserver(self), ) for observer in self.observers: observer.start() class ServerLogObserver(FileLogObserver): """ Log observer that sends events to an L{EventSourceResource} instead of writing to a file. @note: L{ServerLogObserver} is an old-style log observer, as it inherits from L{FileLogObserver}. """ timeFormat = None def __init__(self, resource): class FooIO(object): def write(_, s): self._lastMessage = s def flush(_): pass FileLogObserver.__init__(self, FooIO()) self.lastMessage = None self._resource = resource def emit(self, event): self._resource.addEvents(((self, u"server", event),)) def formatEvent(self, event): self._lastMessage = None FileLogObserver.emit(self, event) return self._lastMessage class AccessLogObserver(CommonAccessLoggingObserverExtensions): """ Log observer that captures apache-style access log text entries in a buffer. @note: L{AccessLogObserver} is an old-style log observer, as it ultimately inherits from L{txweb2.log.BaseCommonAccessLoggingObserver}. """ def __init__(self, resource): super(AccessLogObserver, self).__init__() self._resource = resource def logStats(self, event): # Only look at access log events if event[u"type"] != u"access-log": return self._resource.addEvents(((self, u"access", event),)) @implementer(IEventDecoder) class EventDecoder(object): """ Decodes logging events. 
""" @staticmethod def idForEvent(event): _ignore_observer, _ignore_eventClass, logEvent = event return id(logEvent) @staticmethod def classForEvent(event): _ignore_observer, eventClass, _ignore_logEvent = event return eventClass @staticmethod def textForEvent(event): observer, eventClass, logEvent = event try: if eventClass == u"access": text = logEvent[u"log-format"] % logEvent else: text = observer.formatEvent(logEvent) if text is None: text = u"" except: text = u"*** Error while formatting event ***" return text @staticmethod def retryForEvent(event): return None calendarserver-9.1+dfsg/calendarserver/webadmin/logs.xhtml000066400000000000000000000050611315003562600241150ustar00rootroot00000000000000 <t:slot name="title" />

Access Log


    

Server Log


  


calendarserver-9.1+dfsg/calendarserver/webadmin/principals.py
# -*- test-case-name: calendarserver.webadmin.test.test_principals -*-
##
# Copyright (c) 2014-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

"""
Calendar Server principal management web UI.
"""

__all__ = [
    "PrincipalsResource",
]

from cStringIO import StringIO
from zipfile import ZipFile

from twistedcaldav.simpleresource import SimpleResource

from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.web.template import tags as html, renderer

from txweb2.stream import MemoryStream
from txweb2.http import Response
from txweb2.http_headers import MimeType

from .resource import PageElement, TemplateResource


class PrincipalsPageElement(PageElement):
    """
    Principal management page element.
    """

    def __init__(self, directory):
        super(PrincipalsPageElement, self).__init__(u"principals")

        self._directory = directory

    def pageSlots(self):
        return {
            u"title": u"Principal Management",
        }

    @renderer
    def search_terms(self, request, tag):
        """
        Sets the search terms as the C{value} attribute of C{tag}.
        """
        terms = searchTerms(request)
        if terms:
            return tag(value=u" ".join(terms))
        else:
            return tag

    @renderer
    def if_search_results(self, request, tag):
        """
        Renders C{tag} if the request includes search terms, otherwise removes it.
        """
        if searchTerms(request):
            return tag
        else:
            return u""

    @renderer
    def search_results_row(self, request, tag):
        def rowsForRecords(records):
            for record in records:
                yield tag.clone().fillSlots(
                    **slotsForRecord(record)
                )

        d = self.recordsForSearchTerms(request)
        d.addCallback(rowsForRecords)
        return d

    @inlineCallbacks
    def recordsForSearchTerms(self, request):
        if not hasattr(request, "_search_result_records"):
            terms = searchTerms(request)
            records = yield self._directory.recordsMatchingTokens(terms)
            request._search_result_records = tuple(records)

        returnValue(request._search_result_records)
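`recordsForSearchTerms` above caches its directory query as an attribute on the request object, so that several renderers running during the same page render share a single backend lookup. A minimal standalone sketch of that memoize-on-the-request pattern (all names here are illustrative, not part of the server API):

```python
class FakeRequest(object):
    """Stand-in for a txweb2 request object (illustrative only)."""
    pass


def recordsFor(request, lookup):
    # Cache the expensive lookup on the request itself, so every
    # renderer that runs during the same page render reuses it.
    if not hasattr(request, "_cached_records"):
        request._cached_records = tuple(lookup())
    return request._cached_records


calls = []

def lookup():
    calls.append(1)  # count backend queries
    return [u"alice", u"bob"]

request = FakeRequest()
first = recordsFor(request, lookup)
second = recordsFor(request, lookup)

assert first == second == (u"alice", u"bob")
assert len(calls) == 1  # directory queried only once per request
```

Because the cache lives on the request, it is discarded automatically when the request completes, so no explicit invalidation is needed.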


class PrincipalsResource(TemplateResource):
    """
    Principal management page resource.
    """

    addSlash = True

    def __init__(self, directory, store, principalCollections):
        super(PrincipalsResource, self).__init__(
            lambda: PrincipalsPageElement(directory), principalCollections, isdir=False
        )

        self._directory = directory
        self._store = store

    @inlineCallbacks
    def getChild(self, name):
        if name == "":
            returnValue(self)

        record = yield self._directory.recordWithUID(name)

        if record:
            returnValue(PrincipalResource(record, self._store, self._principalCollections))
        else:
            returnValue(None)


class PrincipalPageElement(PageElement):
    """
    Principal editing page element.
    """

    def __init__(self, record):
        super(PrincipalPageElement, self).__init__(u"principals_edit")

        self._record = record

    def pageSlots(self):
        slots = slotsForRecord(self._record)

        slots[u"title"] = u"Calendar & Contacts Server Principal Information"
        slots[u"service"] = (
            u"{self._record.service.__class__.__name__}: "
            "{self._record.service.realmName}"
            .format(self=self)
        )

        return slots


class PrincipalResource(TemplateResource):
    """
    Principal editing resource.
    """

    addSlash = True

    def __init__(self, record, store, principalCollections):
        super(PrincipalResource, self).__init__(
            lambda: PrincipalPageElement(record), principalCollections, isdir=False
        )

        self._record = record
        self._store = store

    def getChild(self, name):
        if name == "":
            return self

        if name == "calendars_combined":
            return PrincipalCalendarsExportResource(self._record, self._store, self._principalCollections)


class PrincipalCalendarsExportResource(SimpleResource):
    """
    Resource that vends a principal's calendars as a ZIP archive of iCalendar data.
    """

    addSlash = False

    def __init__(self, record, store, principalCollections):
        super(PrincipalCalendarsExportResource, self).__init__(principalCollections, isdir=False)

        self._record = record
        self._store = store

    @inlineCallbacks
    def calendarComponents(self):
        uid = self._record.uid

        calendarComponents = []

        txn = self._store.newTransaction(label="PrincipalCalendarsExportResource")
        try:
            calendarHome = yield txn.calendarHomeWithUID(uid)

            if calendarHome is None:
                raise RuntimeError("No calendar home for UID: {}".format(uid))

            for calendar in (yield calendarHome.calendars()):
                name = calendar.displayName()

                for calendarObject in (yield calendar.calendarObjects()):
                    perUser = yield calendarObject.filteredComponent(uid, True)
                    calendarComponents.append((name, perUser))

        finally:
            txn.abort()

        returnValue(calendarComponents)

    @inlineCallbacks
    def iCalendarZipArchiveData(self):
        calendarComponents = yield self.calendarComponents()

        fileHandle = StringIO()
        try:
            zipFile = ZipFile(fileHandle, "w", allowZip64=True)
            try:
                zipFile.comment = (
                    "Calendars for UID: {}".format(self._record.uid)
                )

                names = set()

                for name, component in calendarComponents:
                    if name in names:
                        i = 0
                        while True:
                            i += 1
                            nextName = "{} {:d}".format(name, i)
                            if nextName not in names:
                                name = nextName
                                break
                            assert i < len(calendarComponents)

                    # Record the chosen name so later duplicates are detected
                    names.add(name)

                    text = component.getText()

                    zipFile.writestr(name, text)

            finally:
                zipFile.close()

            data = fileHandle.getvalue()
        finally:
            fileHandle.close()

        returnValue(data)
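The archive writer needs unique entry names, since two calendars may share a display name; the loop above probes "name 1", "name 2", … until a free name is found. A self-contained sketch of that deduplication step (the function name is hypothetical; the chosen name is recorded in the set so later duplicates are detected):

```python
def uniqueName(name, names):
    """Return name, or the first free 'name N', and record it as taken."""
    if name in names:
        i = 0
        while True:
            i += 1
            nextName = "{} {:d}".format(name, i)
            if nextName not in names:
                name = nextName
                break
    names.add(name)
    return name


taken = set()
assert uniqueName("home", taken) == "home"
assert uniqueName("home", taken) == "home 1"
assert uniqueName("home", taken) == "home 2"
assert uniqueName("work", taken) == "work"
```

The probe always terminates because at most one suffix per already-written entry can be taken, so some index no larger than the number of entries is free.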

    @inlineCallbacks
    def render(self, request):
        response = Response()
        response.stream = MemoryStream((yield self.iCalendarZipArchiveData()))

        # FIXME: Use content-encoding instead?
        response.headers.setHeader(
            b"content-type",
            MimeType.fromString(b"application/zip")
        )

        returnValue(response)


def searchTerms(request):
    if not hasattr(request, "_search_terms"):
        terms = set()

        if request.args:

            for query in request.args.get(u"search", []):
                for term in query.split(u" "):
                    if term:
                        terms.add(term)

            for term in request.args.get(u"term", []):
                if term:
                    terms.add(term)

        request._search_terms = terms

    return request._search_terms
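`searchTerms` merges two kinds of query arguments into one token set: whitespace-split `search` values and whole `term` values. The same parsing can be sketched as a plain function over a parsed-args dict (the function name is illustrative):

```python
def parseSearchTerms(args):
    """Collect unique, non-empty tokens from parsed query arguments."""
    terms = set()
    # 'search' values are split on spaces into individual tokens...
    for query in args.get(u"search", []):
        for term in query.split(u" "):
            if term:
                terms.add(term)
    # ...while 'term' values are taken whole.
    for term in args.get(u"term", []):
        if term:
            terms.add(term)
    return terms


assert parseSearchTerms(
    {u"search": [u"alice  bob"], u"term": [u"carol", u""]}
) == {u"alice", u"bob", u"carol"}
```

Empty strings produced by repeated spaces, and empty `term` values, are dropped by the truthiness checks.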


#
# This should work when we switch to twext.who
#
def slotsForRecord(record):
    def asText(obj):
        if obj is None:
            return u"(no value)"
        else:
            try:
                return unicode(obj)
            except UnicodeDecodeError:
                try:
                    return unicode(repr(obj))
                except UnicodeDecodeError:
                    return u"(error rendering value)"

    def joinWithBR(elements):
        noValues = True

        for element in elements:
            if not noValues:
                yield html.br()

            yield asText(element)

            noValues = False

        if noValues:
            yield u"(no values)"

    # slots = {}

    # for field, values in record.fields.iteritems():
    #     if not record.service.fieldName.isMultiValue(field):
    #         values = (values,)

    #     slots[field.name] = joinWithBR(asText(value) for value in values)

    # return slots

    return {
        u"service": (
            u"{record.service.__class__.__name__}: {record.service.realmName}"
            .format(record=record)
        ),
        u"uid": joinWithBR((record.uid,)),
        u"guid": joinWithBR((record.guid,)),
        u"recordType": joinWithBR((record.recordType,)),
        u"shortNames": joinWithBR(record.shortNames) if hasattr(record, "shortNames") else "",
        u"fullNames": joinWithBR(record.fullNames),
        u"emailAddresses": joinWithBR(record.emailAddresses) if hasattr(record, "emailAddresses") else "",
        u"calendarUserAddresses": joinWithBR(record.calendarUserAddresses),
        u"serverURI": joinWithBR((record.serverURI(),)),
    }
calendarserver-9.1+dfsg/calendarserver/webadmin/principals.xhtml000066400000000000000000000033531315003562600253170ustar00rootroot00000000000000


  
    <t:slot name="title" />
    
    
  

  

    

Search for a principal:
window.open("/admin/principals/");
Records
Full name Record Type Short Name Email Address Calendar User Address Server
calendarserver-9.1+dfsg/calendarserver/webadmin/principals_edit.xhtml
<t:slot name="title" />

Record Information
Field Name Field Value
Directory Service
UID
GUID
Record Type
Short Names
Full Names
Email Addresses
Calendar User Addresses
Server ID
calendarserver-9.1+dfsg/calendarserver/webadmin/resource.py000066400000000000000000000055701315003562600243010ustar00rootroot00000000000000# -*- test-case-name: calendarserver.webadmin.test.test_resource -*- ## # Copyright (c) 2009-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Calendar Server Web Admin UI. """ __all__ = [ "PageElement", "TemplateResource", ] from twistedcaldav.simpleresource import SimpleResource from twisted.internet.defer import inlineCallbacks, returnValue from twisted.python.modules import getModule from twisted.web.template import ( Element, renderer, XMLFile, flattenString, tags ) from txweb2.stream import MemoryStream from txweb2.http import Response from txweb2.http_headers import MimeType class PageElement(Element): """ Page element. """ def __init__(self, templateName): super(PageElement, self).__init__() self.loader = XMLFile( getModule(__name__).filePath.sibling( u"{name}.xhtml".format(name=templateName) ) ) def pageSlots(self): return {} @renderer def main(self, request, tag): """ Main renderer, which fills page-global slots like 'title'. """ tag.fillSlots(**self.pageSlots()) return tag @renderer def stylesheet(self, request, tag): return tags.link( rel=u"stylesheet", media=u"screen", href=u"/style.css", type=u"text/css", ) class TemplateResource(SimpleResource): """ Resource that renders a template. 
""" # @staticmethod # def queryValue(request, argument): # for value in request.args.get(argument, []): # return value # return u"" # @staticmethod # def queryValues(request, arguments): # return request.args.get(arguments, []) def __init__(self, elementClass, pc, isdir): super(TemplateResource, self).__init__(pc, isdir=isdir) self.elementClass = elementClass # def handleQueryArguments(self, request): # return succeed(None) @inlineCallbacks def render(self, request): # yield self.handleQueryArguments(request) htmlContent = yield flattenString(request, self.elementClass()) response = Response() response.stream = MemoryStream(htmlContent) response.headers.setHeader( b"content-type", MimeType.fromString(b"text/html; charset=utf-8") ) returnValue(response) calendarserver-9.1+dfsg/calendarserver/webadmin/test/000077500000000000000000000000001315003562600230505ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/webadmin/test/__init__.py000066400000000000000000000011361315003562600251620ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## calendarserver-9.1+dfsg/calendarserver/webadmin/test/test_eventsource.py000066400000000000000000000170751315003562600270350ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Tests for L{calendarserver.webadmin.eventsource}. """ from __future__ import print_function from zope.interface import implementer from twisted.internet.defer import inlineCallbacks from twisted.trial.unittest import TestCase from txweb2.server import Request from txweb2.http_headers import Headers from ..eventsource import textAsEvent, EventSourceResource, IEventDecoder class EventGenerationTests(TestCase): """ Tests for emitting HTML5 EventSource events. """ todo = "Disabling new webadmin" def test_textAsEvent(self): """ Generate an event from some text. """ self.assertEquals( textAsEvent(u"Hello, World!"), b"data: Hello, World!\n\n" ) def test_textAsEvent_newlines(self): """ Text with newlines generates multiple C{data:} lines in event. """ self.assertEquals( textAsEvent(u"AAPL\nApple Inc.\n527.10"), b"data: AAPL\ndata: Apple Inc.\ndata: 527.10\n\n" ) def test_textAsEvent_newlineTrailing(self): """ Text with trailing newline generates trailing blank C{data:} line. """ self.assertEquals( textAsEvent(u"AAPL\nApple Inc.\n527.10\n"), b"data: AAPL\ndata: Apple Inc.\ndata: 527.10\ndata: \n\n" ) def test_textAsEvent_encoding(self): """ Event text is encoded as UTF-8. """ self.assertEquals( textAsEvent(u"S\xe1nchez"), b"data: S\xc3\xa1nchez\n\n" ) def test_textAsEvent_emptyText(self): """ Text may not be L{None}. """ self.assertEquals( textAsEvent(u""), b"data: \n\n" # Note that b"data\n\n" is a valid result here also ) def test_textAsEvent_NoneText(self): """ Text may not be L{None}. 
""" self.assertRaises(TypeError, textAsEvent, None) def test_textAsEvent_eventID(self): """ Event ID goes into the C{id} field. """ self.assertEquals( textAsEvent(u"", eventID=u"1234"), b"id: 1234\ndata: \n\n" # Note order is allowed to vary ) def test_textAsEvent_eventClass(self): """ Event class goes into the C{event} field. """ self.assertEquals( textAsEvent(u"", eventClass=u"buckets"), b"event: buckets\ndata: \n\n" # Note order is allowed to vary ) class EventSourceResourceTests(TestCase): """ Tests for L{EventSourceResource}. """ todo = "Disabling new webadmin" def eventSourceResource(self): return EventSourceResource(DictionaryEventDecoder, None) def render(self, resource): headers = Headers() request = Request( chanRequest=None, command=None, path="/", version=None, contentLength=None, headers=headers ) return resource.render(request) def test_status(self): """ Response status is C{200}. """ resource = self.eventSourceResource() response = self.render(resource) self.assertEquals(response.code, 200) def test_contentType(self): """ Response content type is C{text/event-stream}. """ resource = self.eventSourceResource() response = self.render(resource) self.assertEquals( response.headers.getRawHeaders(b"Content-Type"), ["text/event-stream"] ) def test_contentLength(self): """ Response content length is not provided. """ resource = self.eventSourceResource() response = self.render(resource) self.assertEquals( response.headers.getRawHeaders(b"Content-Length"), None ) @inlineCallbacks def test_streamBufferedEvents(self): """ Events already buffered are vended immediately, then we get EOF. 
""" events = ( dict(eventID=u"1", eventText=u"A"), dict(eventID=u"2", eventText=u"B"), dict(eventID=u"3", eventText=u"C"), dict(eventID=u"4", eventText=u"D"), ) resource = self.eventSourceResource() resource.addEvents(events) response = self.render(resource) # Each result from read() is another event for i in range(len(events)): result = yield response.stream.read() self.assertEquals( result, textAsEvent( text=events[i]["eventText"], eventID=events[i]["eventID"] ) ) @inlineCallbacks def test_streamWaitForEvents(self): """ Stream reading blocks on additional events. """ resource = self.eventSourceResource() response = self.render(resource) # Read should block on new events. d = response.stream.read() self.assertFalse(d.called) d.addErrback(lambda f: None) d.cancel() test_streamWaitForEvents.todo = "Feature disabled; needs debugging" @inlineCallbacks def test_streamNewEvents(self): """ Events not already buffered are vended after they are posted. """ events = ( dict(eventID=u"1", eventText=u"A"), dict(eventID=u"2", eventText=u"B"), dict(eventID=u"3", eventText=u"C"), dict(eventID=u"4", eventText=u"D"), ) resource = self.eventSourceResource() response = self.render(resource) # The first read should block on new events. d = response.stream.read() self.assertFalse(d.called) # Add some events resource.addEvents(events) # We should now be unblocked self.assertTrue(d.called) # Each result from read() is another event for i in range(len(events)): if d is None: result = yield response.stream.read() else: result = yield d d = None self.assertEquals( result, textAsEvent( text=events[i]["eventText"], eventID=(events[i]["eventID"]) ) ) # The next read should block on new events. d = response.stream.read() self.assertFalse(d.called) d.addErrback(lambda f: None) d.cancel() test_streamNewEvents.todo = "Feature disabled; needs debugging" @implementer(IEventDecoder) class DictionaryEventDecoder(object): """ Decodes events represented as dictionaries. 
""" @staticmethod def idForEvent(event): return event.get("eventID") @staticmethod def classForEvent(event): return event.get("eventClass") @staticmethod def textForEvent(event): return event.get("eventText") @staticmethod def retryForEvent(event): return None calendarserver-9.1+dfsg/calendarserver/webadmin/work.py000066400000000000000000000165651315003562600234420ustar00rootroot00000000000000# -*- test-case-name: calendarserver.webadmin.test.test_principals -*- ## # Copyright (c) 2014-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function """ Calendar Server principal management web UI. 
""" __all__ = [ "WorkMonitorResource", ] from time import time from json import dumps from zope.interface import implementer from twisted.internet.defer import inlineCallbacks, returnValue from twisted.python.failure import Failure from twext.python.log import Logger from twext.enterprise.jobs.jobitem import JobItem from txdav.caldav.datastore.scheduling.imip.inbound import ( IMIPPollingWork, IMIPReplyWork ) from txdav.who.groups import GroupCacherPollingWork from calendarserver.push.notifier import PushNotificationWork from txdav.caldav.datastore.scheduling.work import ( ScheduleOrganizerWork, ScheduleRefreshWork, ScheduleReplyWork, ScheduleAutoReplyWork, ) from .eventsource import EventSourceResource, IEventDecoder from .resource import PageElement, TemplateResource class WorkMonitorPageElement(PageElement): """ Principal management page element. """ def __init__(self): super(WorkMonitorPageElement, self).__init__(u"work") def pageSlots(self): return { u"title": u"Workload Monitor", } class WorkMonitorResource(TemplateResource): """ Principal management page resource. """ addSlash = True def __init__(self, store, principalCollections): super(WorkMonitorResource, self).__init__( lambda: WorkMonitorPageElement(), principalCollections, isdir=False ) self.putChild(u"events", WorkEventsResource(store, principalCollections)) class WorkEventsResource(EventSourceResource): """ Resource that vends work queue information via HTML5 EventSource events. 
""" log = Logger() def __init__(self, store, principalCollections, pollInterval=1000): super(WorkEventsResource, self).__init__(EventDecoder, principalCollections, bufferSize=100) self._store = store self._pollInterval = pollInterval self._polling = False @inlineCallbacks def render(self, request): yield self.poll() returnValue(super(WorkEventsResource, self).render(request)) @inlineCallbacks def poll(self): if self._polling: returnValue(None) self._polling = True txn = self._store.newTransaction(label="WorkEventsResource.poll") try: # Look up all of the jobs events = [] jobsByTypeName = {} for job in (yield JobItem.all(txn)): jobsByTypeName.setdefault(job.workType, []).append(job) totalsByTypeName = {} for workType in JobItem.workTypes(): typeName = workType.table.model.name jobs = jobsByTypeName.get(typeName, []) totalsByTypeName[typeName] = len(jobs) jobDicts = [] for job in jobs: def formatTime(datetime): if datetime is None: return None else: # FIXME: Use HTTP time format return datetime.ctime() jobDict = dict( job_jobID=job.jobID, job_priority=job.priority, job_weight=job.weight, job_notBefore=formatTime(job.notBefore), ) work = yield job.workItem() attrs = ("workID",) if workType == PushNotificationWork: attrs += ("pushID", "pushPriority") elif workType == ScheduleOrganizerWork: attrs += ("icalendarUID", "attendeeCount") elif workType == ScheduleRefreshWork: attrs += ("icalendarUID", "attendeeCount") elif workType == ScheduleReplyWork: attrs += ("icalendarUID",) elif workType == ScheduleAutoReplyWork: attrs += ("icalendarUID",) elif workType == GroupCacherPollingWork: attrs += () elif workType == IMIPPollingWork: attrs += () elif workType == IMIPReplyWork: attrs += ("organizer", "attendee") else: attrs = () if attrs: if work is None: self.log.error( "workItem() returned None for job: {job}", job=job ) # jobDict.update((attr, None) for attr in attrs) for attr in attrs: jobDict["work_{}".format(attr)] = None else: # jobDict.update( # ("work_{}".format(attr), 
getattr(work, attr)) # for attr in attrs # ) for attr in attrs: jobDict["work_{}".format(attr)] = ( getattr(work, attr) ) jobDicts.append(jobDict) if jobDicts: events.append(dict( eventClass=typeName, eventID=time(), eventText=asJSON(jobDicts), )) events.append(dict( eventClass=u"work-total", eventID=time(), eventText=asJSON(totalsByTypeName), eventRetry=(self._pollInterval), )) # Send data self.addEvents(events) except Exception: f = Failure() self._polling = False yield txn.abort() returnValue(f) else: self._polling = False yield txn.commit() # Schedule the next poll if not hasattr(self, "_clock"): from twisted.internet import reactor self._clock = reactor self._clock.callLater(self._pollInterval / 1000, self.poll) @implementer(IEventDecoder) class EventDecoder(object): @staticmethod def idForEvent(event): return event.get("eventID") @staticmethod def classForEvent(event): return event.get("eventClass") @staticmethod def textForEvent(event): return event.get("eventText") @staticmethod def retryForEvent(event): return event.get("eventRetry") def asJSON(obj): return dumps(obj, separators=(',', ':')) calendarserver-9.1+dfsg/calendarserver/webadmin/work.xhtml000066400000000000000000000307711315003562600241410ustar00rootroot00000000000000 <t:slot name="title" />

Work Item Details
Job ID Priority Weight Not Before

calendarserver-9.1+dfsg/calendarserver/webcal/000077500000000000000000000000001315003562600215405ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/webcal/__init__.py000066400000000000000000000011771315003562600236570ustar00rootroot00000000000000## # Copyright (c) 2009-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Calendar Server Web UI. """ calendarserver-9.1+dfsg/calendarserver/webcal/resource.py000066400000000000000000000144231315003562600237450ustar00rootroot00000000000000## # Copyright (c) 2009-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Calendar Server Web UI. 
""" __all__ = [ "WebCalendarResource", ] import os from time import time from urlparse import urlparse from cgi import parse_qs from twisted.internet.defer import succeed from txweb2 import responsecode from txweb2.http import Response from txweb2.http_headers import MimeType from txweb2.stream import MemoryStream from txdav.xml import element as davxml from txweb2.dav.resource import TwistedACLInheritable from twistedcaldav.config import config from twistedcaldav.extensions import DAVFile, ReadOnlyResourceMixIn from twistedcaldav.timezones import hasTZ DEFAULT_TIMEZONE = "America/Los_Angeles" class WebCalendarResource (ReadOnlyResourceMixIn, DAVFile): def defaultAccessControlList(self): return succeed( davxml.ACL( davxml.ACE( davxml.Principal(davxml.Authenticated()), davxml.Grant( davxml.Privilege(davxml.Read()), ), davxml.Protected(), TwistedACLInheritable(), ), ) ) def etag(self): # Can't be calculated here return succeed(None) def contentLength(self): # Can't be calculated here return None def lastModified(self): return None def exists(self): return True def displayName(self): return "Web Calendar" def contentType(self): return MimeType.fromString("text/html; charset=utf-8") def contentEncoding(self): return None def createSimilarFile(self, path): return DAVFile(path, principalCollections=self.principalCollections()) _htmlContent_lastCheck = 0 _htmlContent_statInfo = 0 _htmlContentDebug_lastCheck = 0 _htmlContentDebug_statInfo = 0 def htmlContent(self, debug=False): if debug: cacheAttr = "_htmlContentDebug" templateFileName = "debug_standalone.html" else: cacheAttr = "_htmlContent" templateFileName = "standalone.html" templateFileName = os.path.join( config.WebCalendarRoot, templateFileName ) # # See if the file changed, and dump the cached template if so. # Don't bother to check if we've checked in the past minute. # We don't cache if debug is true. 
# if not debug and hasattr(self, cacheAttr): currentTime = time() if currentTime - getattr(self, cacheAttr + "_lastCheck") > 60: statInfo = os.stat(templateFileName) statInfo = (statInfo.st_mtime, statInfo.st_size) if statInfo != getattr(self, cacheAttr + "_statInfo"): delattr(self, cacheAttr) setattr(self, cacheAttr + "_statInfo", statInfo) setattr(self, cacheAttr + "_lastCheck", currentTime) # # If we don't have a cached template, load it up. # if not hasattr(self, cacheAttr): with open(templateFileName) as templateFile: htmlContent = templateFile.read() if debug: # Don't cache return htmlContent setattr(self, cacheAttr, htmlContent) return getattr(self, cacheAttr) def render(self, request): if not self.fp.isdir(): return responsecode.NOT_FOUND # # Get URL of authenticated principal. # Don't need to authenticate here because the ACL will have already # required it. # authenticatedPrincipalURL = request.authnUser.principalURL() def queryValue(arg): query = parse_qs(urlparse(request.uri).query, True) return query.get(arg, [""])[0] # # Parse debug query arg # debug = queryValue("debug") debug = debug is not None and debug.lower() in ("1", "true", "yes") # # Parse TimeZone query arg # tzid = queryValue("tzid") if not tzid: tzid = getLocalTimezone() self.log.debug("Determined timezone to be {tzid}", tzid=tzid) # # Make some HTML # try: htmlContent = self.htmlContent(debug) % { "tzid": tzid, "principalURL": authenticatedPrincipalURL, } except IOError, e: self.log.error("Unable to obtain WebCalendar template: %s" % (e,)) return responsecode.NOT_FOUND response = Response() response.stream = MemoryStream(htmlContent) for (header, value) in ( ("content-type", self.contentType()), ("content-encoding", self.contentEncoding()), ): if value is not None: response.headers.setHeader(header, value) return response try: from osx.utils import CFTimeZoneRef def lookupSystemTimezone(): try: return CFTimeZoneRef.defaultTimeZoneName() except: return "" except ImportError: def 
lookupSystemTimezone(): return "" def getLocalTimezone(): """ Returns the default timezone for the server. The order of precedence is: config.DefaultTimezone, lookupSystemTimezone( ), DEFAULT_TIMEZONE. Also, if neither of the first two values in that list are in the timezone database, DEFAULT_TIMEZONE is returned. @return: The server's local timezone name @rtype: C{str} """ if config.DefaultTimezone: if hasTZ(config.DefaultTimezone): return config.DefaultTimezone systemTimezone = lookupSystemTimezone() if hasTZ(systemTimezone): return systemTimezone return DEFAULT_TIMEZONE calendarserver-9.1+dfsg/calendarserver/webcal/test/000077500000000000000000000000001315003562600225175ustar00rootroot00000000000000calendarserver-9.1+dfsg/calendarserver/webcal/test/__init__.py000066400000000000000000000012051315003562600246260ustar00rootroot00000000000000## # Copyright (c) 2009-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Calendar Server Web UI tests. """ calendarserver-9.1+dfsg/calendarserver/webcal/test/test_resource.py000066400000000000000000000050711315003562600257620ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from twistedcaldav.test.util import TestCase from twistedcaldav.config import config import calendarserver.webcal.resource from calendarserver.webcal.resource import getLocalTimezone, DEFAULT_TIMEZONE class DefaultTimezoneTests(TestCase): def stubLookup(self): return self._storedLookup def stubHasTZ(self, ignored): return self._storedHasTZ.pop() def setUp(self): self.patch( calendarserver.webcal.resource, "lookupSystemTimezone", self.stubLookup ) self.patch( calendarserver.webcal.resource, "hasTZ", self.stubHasTZ ) def test_getLocalTimezone(self): # Empty config, system timezone known = use system timezone self.patch(config, "DefaultTimezone", "") self._storedLookup = "America/New_York" self._storedHasTZ = [True] self.assertEquals(getLocalTimezone(), "America/New_York") # Empty config, system timezone unknown = use DEFAULT_TIMEZONE self.patch(config, "DefaultTimezone", "") self._storedLookup = "Unknown/Unknown" self._storedHasTZ = [False] self.assertEquals(getLocalTimezone(), DEFAULT_TIMEZONE) # Known config value = use config value self.patch(config, "DefaultTimezone", "America/New_York") self._storedHasTZ = [True] self.assertEquals(getLocalTimezone(), "America/New_York") # Unknown config value, system timezone known = use system timezone self.patch(config, "DefaultTimezone", "Unknown/Unknown") self._storedLookup = "America/New_York" self._storedHasTZ = [True, False] self.assertEquals(getLocalTimezone(), "America/New_York") # Unknown config value, system timezone unknown = use DEFAULT_TIMEZONE self.patch(config, "DefaultTimezone", "Unknown/Unknown") 
self._storedLookup = "Unknown/Unknown" self._storedHasTZ = [False, False] self.assertEquals(getLocalTimezone(), DEFAULT_TIMEZONE) calendarserver-9.1+dfsg/conf/000077500000000000000000000000001315003562600162305ustar00rootroot00000000000000calendarserver-9.1+dfsg/conf/auth/000077500000000000000000000000001315003562600171715ustar00rootroot00000000000000calendarserver-9.1+dfsg/conf/auth/accounts-test-pod.xml000077500000000000000000002417501315003562600233030ustar00rootroot00000000000000 0C8BDE62-E600-4696-83D3-8B5ECABDFD2E 0C8BDE62-E600-4696-83D3-8B5ECABDFD2E admin admin Super User admin@example.com 10000000-0000-0000-0000-000000000001 10000000-0000-0000-0000-000000000001 user01 user01 User 01 user01@example.com 10000000-0000-0000-0000-000000000002 10000000-0000-0000-0000-000000000002 user02 user02 User 02 user02@example.com 10000000-0000-0000-0000-000000000003 10000000-0000-0000-0000-000000000003 user03 user03 User 03 user03@example.com 10000000-0000-0000-0000-000000000004 10000000-0000-0000-0000-000000000004 user04 user04 User 04 user04@example.com 10000000-0000-0000-0000-000000000005 10000000-0000-0000-0000-000000000005 user05 user05 User 05 user05@example.com 10000000-0000-0000-0000-000000000006 10000000-0000-0000-0000-000000000006 user06 user06 User 06 user06@example.com 10000000-0000-0000-0000-000000000007 10000000-0000-0000-0000-000000000007 user07 user07 User 07 user07@example.com 10000000-0000-0000-0000-000000000008 10000000-0000-0000-0000-000000000008 user08 user08 User 08 user08@example.com 10000000-0000-0000-0000-000000000009 10000000-0000-0000-0000-000000000009 user09 user09 User 09 user09@example.com 10000000-0000-0000-0000-000000000010 10000000-0000-0000-0000-000000000010 user10 user10 User 10 user10@example.com 10000000-0000-0000-0000-000000000011 10000000-0000-0000-0000-000000000011 user11 user11 User 11 user11@example.com 10000000-0000-0000-0000-000000000012 10000000-0000-0000-0000-000000000012 user12 user12 User 12 user12@example.com 
10000000-0000-0000-0000-000000000013 10000000-0000-0000-0000-000000000013 user13 user13 User 13 user13@example.com 10000000-0000-0000-0000-000000000014 10000000-0000-0000-0000-000000000014 user14 user14 User 14 user14@example.com 10000000-0000-0000-0000-000000000015 10000000-0000-0000-0000-000000000015 user15 user15 User 15 user15@example.com 10000000-0000-0000-0000-000000000016 10000000-0000-0000-0000-000000000016 user16 user16 User 16 user16@example.com 10000000-0000-0000-0000-000000000017 10000000-0000-0000-0000-000000000017 user17 user17 User 17 user17@example.com 10000000-0000-0000-0000-000000000018 10000000-0000-0000-0000-000000000018 user18 user18 User 18 user18@example.com 10000000-0000-0000-0000-000000000019 10000000-0000-0000-0000-000000000019 user19 user19 User 19 user19@example.com 10000000-0000-0000-0000-000000000020 10000000-0000-0000-0000-000000000020 user20 user20 User 20 user20@example.com 10000000-0000-0000-0000-000000000021 10000000-0000-0000-0000-000000000021 user21 user21 User 21 user21@example.com 10000000-0000-0000-0000-000000000022 10000000-0000-0000-0000-000000000022 user22 user22 User 22 user22@example.com 10000000-0000-0000-0000-000000000023 10000000-0000-0000-0000-000000000023 user23 user23 User 23 user23@example.com 10000000-0000-0000-0000-000000000024 10000000-0000-0000-0000-000000000024 user24 user24 User 24 user24@example.com 10000000-0000-0000-0000-000000000025 10000000-0000-0000-0000-000000000025 user25 user25 User 25 user25@example.com 10000000-0000-0000-0000-000000000026 10000000-0000-0000-0000-000000000026 user26 user26 User 26 user26@example.com 10000000-0000-0000-0000-000000000027 10000000-0000-0000-0000-000000000027 user27 user27 User 27 user27@example.com 10000000-0000-0000-0000-000000000028 10000000-0000-0000-0000-000000000028 user28 user28 User 28 user28@example.com 10000000-0000-0000-0000-000000000029 10000000-0000-0000-0000-000000000029 user29 user29 User 29 user29@example.com 10000000-0000-0000-0000-000000000030 
10000000-0000-0000-0000-000000000030 user30 user30 User 30 user30@example.com 10000000-0000-0000-0000-000000000031 10000000-0000-0000-0000-000000000031 user31 user31 User 31 user31@example.com 10000000-0000-0000-0000-000000000032 10000000-0000-0000-0000-000000000032 user32 user32 User 32 user32@example.com 10000000-0000-0000-0000-000000000033 10000000-0000-0000-0000-000000000033 user33 user33 User 33 user33@example.com 10000000-0000-0000-0000-000000000034 10000000-0000-0000-0000-000000000034 user34 user34 User 34 user34@example.com 10000000-0000-0000-0000-000000000035 10000000-0000-0000-0000-000000000035 user35 user35 User 35 user35@example.com 10000000-0000-0000-0000-000000000036 10000000-0000-0000-0000-000000000036 user36 user36 User 36 user36@example.com 10000000-0000-0000-0000-000000000037 10000000-0000-0000-0000-000000000037 user37 user37 User 37 user37@example.com 10000000-0000-0000-0000-000000000038 10000000-0000-0000-0000-000000000038 user38 user38 User 38 user38@example.com 10000000-0000-0000-0000-000000000039 10000000-0000-0000-0000-000000000039 user39 user39 User 39 user39@example.com 10000000-0000-0000-0000-000000000040 10000000-0000-0000-0000-000000000040 user40 user40 User 40 user40@example.com 10000000-0000-0000-0000-000000000041 10000000-0000-0000-0000-000000000041 user41 user41 User 41 user41@example.com 10000000-0000-0000-0000-000000000042 10000000-0000-0000-0000-000000000042 user42 user42 User 42 user42@example.com 10000000-0000-0000-0000-000000000043 10000000-0000-0000-0000-000000000043 user43 user43 User 43 user43@example.com 10000000-0000-0000-0000-000000000044 10000000-0000-0000-0000-000000000044 user44 user44 User 44 user44@example.com 10000000-0000-0000-0000-000000000045 10000000-0000-0000-0000-000000000045 user45 user45 User 45 user45@example.com 10000000-0000-0000-0000-000000000046 10000000-0000-0000-0000-000000000046 user46 user46 User 46 user46@example.com 10000000-0000-0000-0000-000000000047 10000000-0000-0000-0000-000000000047 user47 
user47 User 47 user47@example.com 10000000-0000-0000-0000-000000000048 10000000-0000-0000-0000-000000000048 user48 user48 User 48 user48@example.com 10000000-0000-0000-0000-000000000049 10000000-0000-0000-0000-000000000049 user49 user49 User 49 user49@example.com 10000000-0000-0000-0000-000000000050 10000000-0000-0000-0000-000000000050 user50 user50 User 50 user50@example.com 10000000-0000-0000-0000-000000000051 10000000-0000-0000-0000-000000000051 user51 user51 User 51 user51@example.com 10000000-0000-0000-0000-000000000052 10000000-0000-0000-0000-000000000052 user52 user52 User 52 user52@example.com 10000000-0000-0000-0000-000000000053 10000000-0000-0000-0000-000000000053 user53 user53 User 53 user53@example.com 10000000-0000-0000-0000-000000000054 10000000-0000-0000-0000-000000000054 user54 user54 User 54 user54@example.com 10000000-0000-0000-0000-000000000055 10000000-0000-0000-0000-000000000055 user55 user55 User 55 user55@example.com 10000000-0000-0000-0000-000000000056 10000000-0000-0000-0000-000000000056 user56 user56 User 56 user56@example.com 10000000-0000-0000-0000-000000000057 10000000-0000-0000-0000-000000000057 user57 user57 User 57 user57@example.com 10000000-0000-0000-0000-000000000058 10000000-0000-0000-0000-000000000058 user58 user58 User 58 user58@example.com 10000000-0000-0000-0000-000000000059 10000000-0000-0000-0000-000000000059 user59 user59 User 59 user59@example.com 10000000-0000-0000-0000-000000000060 10000000-0000-0000-0000-000000000060 user60 user60 User 60 user60@example.com 10000000-0000-0000-0000-000000000061 10000000-0000-0000-0000-000000000061 user61 user61 User 61 user61@example.com 10000000-0000-0000-0000-000000000062 10000000-0000-0000-0000-000000000062 user62 user62 User 62 user62@example.com 10000000-0000-0000-0000-000000000063 10000000-0000-0000-0000-000000000063 user63 user63 User 63 user63@example.com 10000000-0000-0000-0000-000000000064 10000000-0000-0000-0000-000000000064 user64 user64 User 64 user64@example.com 
10000000-0000-0000-0000-000000000065 10000000-0000-0000-0000-000000000065 user65 user65 User 65 user65@example.com 10000000-0000-0000-0000-000000000066 10000000-0000-0000-0000-000000000066 user66 user66 User 66 user66@example.com 10000000-0000-0000-0000-000000000067 10000000-0000-0000-0000-000000000067 user67 user67 User 67 user67@example.com 10000000-0000-0000-0000-000000000068 10000000-0000-0000-0000-000000000068 user68 user68 User 68 user68@example.com 10000000-0000-0000-0000-000000000069 10000000-0000-0000-0000-000000000069 user69 user69 User 69 user69@example.com 10000000-0000-0000-0000-000000000070 10000000-0000-0000-0000-000000000070 user70 user70 User 70 user70@example.com 10000000-0000-0000-0000-000000000071 10000000-0000-0000-0000-000000000071 user71 user71 User 71 user71@example.com 10000000-0000-0000-0000-000000000072 10000000-0000-0000-0000-000000000072 user72 user72 User 72 user72@example.com 10000000-0000-0000-0000-000000000073 10000000-0000-0000-0000-000000000073 user73 user73 User 73 user73@example.com 10000000-0000-0000-0000-000000000074 10000000-0000-0000-0000-000000000074 user74 user74 User 74 user74@example.com 10000000-0000-0000-0000-000000000075 10000000-0000-0000-0000-000000000075 user75 user75 User 75 user75@example.com 10000000-0000-0000-0000-000000000076 10000000-0000-0000-0000-000000000076 user76 user76 User 76 user76@example.com 10000000-0000-0000-0000-000000000077 10000000-0000-0000-0000-000000000077 user77 user77 User 77 user77@example.com 10000000-0000-0000-0000-000000000078 10000000-0000-0000-0000-000000000078 user78 user78 User 78 user78@example.com 10000000-0000-0000-0000-000000000079 10000000-0000-0000-0000-000000000079 user79 user79 User 79 user79@example.com 10000000-0000-0000-0000-000000000080 10000000-0000-0000-0000-000000000080 user80 user80 User 80 user80@example.com 10000000-0000-0000-0000-000000000081 10000000-0000-0000-0000-000000000081 user81 user81 User 81 user81@example.com 10000000-0000-0000-0000-000000000082 
10000000-0000-0000-0000-000000000082 user82 user82 User 82 user82@example.com 10000000-0000-0000-0000-000000000083 10000000-0000-0000-0000-000000000083 user83 user83 User 83 user83@example.com 10000000-0000-0000-0000-000000000084 10000000-0000-0000-0000-000000000084 user84 user84 User 84 user84@example.com 10000000-0000-0000-0000-000000000085 10000000-0000-0000-0000-000000000085 user85 user85 User 85 user85@example.com 10000000-0000-0000-0000-000000000086 10000000-0000-0000-0000-000000000086 user86 user86 User 86 user86@example.com 10000000-0000-0000-0000-000000000087 10000000-0000-0000-0000-000000000087 user87 user87 User 87 user87@example.com 10000000-0000-0000-0000-000000000088 10000000-0000-0000-0000-000000000088 user88 user88 User 88 user88@example.com 10000000-0000-0000-0000-000000000089 10000000-0000-0000-0000-000000000089 user89 user89 User 89 user89@example.com 10000000-0000-0000-0000-000000000090 10000000-0000-0000-0000-000000000090 user90 user90 User 90 user90@example.com 10000000-0000-0000-0000-000000000091 10000000-0000-0000-0000-000000000091 user91 user91 User 91 user91@example.com 10000000-0000-0000-0000-000000000092 10000000-0000-0000-0000-000000000092 user92 user92 User 92 user92@example.com 10000000-0000-0000-0000-000000000093 10000000-0000-0000-0000-000000000093 user93 user93 User 93 user93@example.com 10000000-0000-0000-0000-000000000094 10000000-0000-0000-0000-000000000094 user94 user94 User 94 user94@example.com 10000000-0000-0000-0000-000000000095 10000000-0000-0000-0000-000000000095 user95 user95 User 95 user95@example.com 10000000-0000-0000-0000-000000000096 10000000-0000-0000-0000-000000000096 user96 user96 User 96 user96@example.com 10000000-0000-0000-0000-000000000097 10000000-0000-0000-0000-000000000097 user97 user97 User 97 user97@example.com 10000000-0000-0000-0000-000000000098 10000000-0000-0000-0000-000000000098 user98 user98 User 98 user98@example.com 10000000-0000-0000-0000-000000000099 10000000-0000-0000-0000-000000000099 user99 
user99 User 99 user99@example.com 10000000-0000-0000-0000-000000000100 10000000-0000-0000-0000-000000000100 user100 user100 User 100 user100@example.com 60000000-0000-0000-0000-000000000001 60000000-0000-0000-0000-000000000001 puser01 puser01 Puser 01 puser01@example.com 60000000-0000-0000-0000-000000000002 60000000-0000-0000-0000-000000000002 puser02 puser02 Puser 02 puser02@example.com 60000000-0000-0000-0000-000000000003 60000000-0000-0000-0000-000000000003 puser03 puser03 Puser 03 puser03@example.com 60000000-0000-0000-0000-000000000004 60000000-0000-0000-0000-000000000004 puser04 puser04 Puser 04 puser04@example.com 60000000-0000-0000-0000-000000000005 60000000-0000-0000-0000-000000000005 puser05 puser05 Puser 05 puser05@example.com 60000000-0000-0000-0000-000000000006 60000000-0000-0000-0000-000000000006 puser06 puser06 Puser 06 puser06@example.com 60000000-0000-0000-0000-000000000007 60000000-0000-0000-0000-000000000007 puser07 puser07 Puser 07 puser07@example.com 60000000-0000-0000-0000-000000000008 60000000-0000-0000-0000-000000000008 puser08 puser08 Puser 08 puser08@example.com 60000000-0000-0000-0000-000000000009 60000000-0000-0000-0000-000000000009 puser09 puser09 Puser 09 puser09@example.com 60000000-0000-0000-0000-000000000010 60000000-0000-0000-0000-000000000010 puser10 puser10 Puser 10 puser10@example.com 60000000-0000-0000-0000-000000000011 60000000-0000-0000-0000-000000000011 puser11 puser11 Puser 11 puser11@example.com 60000000-0000-0000-0000-000000000012 60000000-0000-0000-0000-000000000012 puser12 puser12 Puser 12 puser12@example.com 60000000-0000-0000-0000-000000000013 60000000-0000-0000-0000-000000000013 puser13 puser13 Puser 13 puser13@example.com 60000000-0000-0000-0000-000000000014 60000000-0000-0000-0000-000000000014 puser14 puser14 Puser 14 puser14@example.com 60000000-0000-0000-0000-000000000015 60000000-0000-0000-0000-000000000015 puser15 puser15 Puser 15 puser15@example.com 60000000-0000-0000-0000-000000000016 
60000000-0000-0000-0000-000000000016 puser16 puser16 Puser 16 puser16@example.com 60000000-0000-0000-0000-000000000017 60000000-0000-0000-0000-000000000017 puser17 puser17 Puser 17 puser17@example.com 60000000-0000-0000-0000-000000000018 60000000-0000-0000-0000-000000000018 puser18 puser18 Puser 18 puser18@example.com 60000000-0000-0000-0000-000000000019 60000000-0000-0000-0000-000000000019 puser19 puser19 Puser 19 puser19@example.com 60000000-0000-0000-0000-000000000020 60000000-0000-0000-0000-000000000020 puser20 puser20 Puser 20 puser20@example.com 60000000-0000-0000-0000-000000000021 60000000-0000-0000-0000-000000000021 puser21 puser21 Puser 21 puser21@example.com 60000000-0000-0000-0000-000000000022 60000000-0000-0000-0000-000000000022 puser22 puser22 Puser 22 puser22@example.com 60000000-0000-0000-0000-000000000023 60000000-0000-0000-0000-000000000023 puser23 puser23 Puser 23 puser23@example.com 60000000-0000-0000-0000-000000000024 60000000-0000-0000-0000-000000000024 puser24 puser24 Puser 24 puser24@example.com 60000000-0000-0000-0000-000000000025 60000000-0000-0000-0000-000000000025 puser25 puser25 Puser 25 puser25@example.com 60000000-0000-0000-0000-000000000026 60000000-0000-0000-0000-000000000026 puser26 puser26 Puser 26 puser26@example.com 60000000-0000-0000-0000-000000000027 60000000-0000-0000-0000-000000000027 puser27 puser27 Puser 27 puser27@example.com 60000000-0000-0000-0000-000000000028 60000000-0000-0000-0000-000000000028 puser28 puser28 Puser 28 puser28@example.com 60000000-0000-0000-0000-000000000029 60000000-0000-0000-0000-000000000029 puser29 puser29 Puser 29 puser29@example.com 60000000-0000-0000-0000-000000000030 60000000-0000-0000-0000-000000000030 puser30 puser30 Puser 30 puser30@example.com 60000000-0000-0000-0000-000000000031 60000000-0000-0000-0000-000000000031 puser31 puser31 Puser 31 puser31@example.com 60000000-0000-0000-0000-000000000032 60000000-0000-0000-0000-000000000032 puser32 puser32 Puser 32 puser32@example.com 
60000000-0000-0000-0000-000000000033 60000000-0000-0000-0000-000000000033 puser33 puser33 Puser 33 puser33@example.com 60000000-0000-0000-0000-000000000034 60000000-0000-0000-0000-000000000034 puser34 puser34 Puser 34 puser34@example.com 60000000-0000-0000-0000-000000000035 60000000-0000-0000-0000-000000000035 puser35 puser35 Puser 35 puser35@example.com 60000000-0000-0000-0000-000000000036 60000000-0000-0000-0000-000000000036 puser36 puser36 Puser 36 puser36@example.com 60000000-0000-0000-0000-000000000037 60000000-0000-0000-0000-000000000037 puser37 puser37 Puser 37 puser37@example.com 60000000-0000-0000-0000-000000000038 60000000-0000-0000-0000-000000000038 puser38 puser38 Puser 38 puser38@example.com 60000000-0000-0000-0000-000000000039 60000000-0000-0000-0000-000000000039 puser39 puser39 Puser 39 puser39@example.com 60000000-0000-0000-0000-000000000040 60000000-0000-0000-0000-000000000040 puser40 puser40 Puser 40 puser40@example.com 60000000-0000-0000-0000-000000000041 60000000-0000-0000-0000-000000000041 puser41 puser41 Puser 41 puser41@example.com 60000000-0000-0000-0000-000000000042 60000000-0000-0000-0000-000000000042 puser42 puser42 Puser 42 puser42@example.com 60000000-0000-0000-0000-000000000043 60000000-0000-0000-0000-000000000043 puser43 puser43 Puser 43 puser43@example.com 60000000-0000-0000-0000-000000000044 60000000-0000-0000-0000-000000000044 puser44 puser44 Puser 44 puser44@example.com 60000000-0000-0000-0000-000000000045 60000000-0000-0000-0000-000000000045 puser45 puser45 Puser 45 puser45@example.com 60000000-0000-0000-0000-000000000046 60000000-0000-0000-0000-000000000046 puser46 puser46 Puser 46 puser46@example.com 60000000-0000-0000-0000-000000000047 60000000-0000-0000-0000-000000000047 puser47 puser47 Puser 47 puser47@example.com 60000000-0000-0000-0000-000000000048 60000000-0000-0000-0000-000000000048 puser48 puser48 Puser 48 puser48@example.com 60000000-0000-0000-0000-000000000049 60000000-0000-0000-0000-000000000049 puser49 puser49 Puser 
49 puser49@example.com 60000000-0000-0000-0000-000000000050 60000000-0000-0000-0000-000000000050 puser50 puser50 Puser 50 puser50@example.com 60000000-0000-0000-0000-000000000051 60000000-0000-0000-0000-000000000051 puser51 puser51 Puser 51 puser51@example.com 60000000-0000-0000-0000-000000000052 60000000-0000-0000-0000-000000000052 puser52 puser52 Puser 52 puser52@example.com 60000000-0000-0000-0000-000000000053 60000000-0000-0000-0000-000000000053 puser53 puser53 Puser 53 puser53@example.com 60000000-0000-0000-0000-000000000054 60000000-0000-0000-0000-000000000054 puser54 puser54 Puser 54 puser54@example.com 60000000-0000-0000-0000-000000000055 60000000-0000-0000-0000-000000000055 puser55 puser55 Puser 55 puser55@example.com 60000000-0000-0000-0000-000000000056 60000000-0000-0000-0000-000000000056 puser56 puser56 Puser 56 puser56@example.com 60000000-0000-0000-0000-000000000057 60000000-0000-0000-0000-000000000057 puser57 puser57 Puser 57 puser57@example.com 60000000-0000-0000-0000-000000000058 60000000-0000-0000-0000-000000000058 puser58 puser58 Puser 58 puser58@example.com 60000000-0000-0000-0000-000000000059 60000000-0000-0000-0000-000000000059 puser59 puser59 Puser 59 puser59@example.com 60000000-0000-0000-0000-000000000060 60000000-0000-0000-0000-000000000060 puser60 puser60 Puser 60 puser60@example.com 60000000-0000-0000-0000-000000000061 60000000-0000-0000-0000-000000000061 puser61 puser61 Puser 61 puser61@example.com 60000000-0000-0000-0000-000000000062 60000000-0000-0000-0000-000000000062 puser62 puser62 Puser 62 puser62@example.com 60000000-0000-0000-0000-000000000063 60000000-0000-0000-0000-000000000063 puser63 puser63 Puser 63 puser63@example.com 60000000-0000-0000-0000-000000000064 60000000-0000-0000-0000-000000000064 puser64 puser64 Puser 64 puser64@example.com 60000000-0000-0000-0000-000000000065 60000000-0000-0000-0000-000000000065 puser65 puser65 Puser 65 puser65@example.com 60000000-0000-0000-0000-000000000066 
60000000-0000-0000-0000-000000000066 puser66 puser66 Puser 66 puser66@example.com 60000000-0000-0000-0000-000000000067 60000000-0000-0000-0000-000000000067 puser67 puser67 Puser 67 puser67@example.com 60000000-0000-0000-0000-000000000068 60000000-0000-0000-0000-000000000068 puser68 puser68 Puser 68 puser68@example.com 60000000-0000-0000-0000-000000000069 60000000-0000-0000-0000-000000000069 puser69 puser69 Puser 69 puser69@example.com 60000000-0000-0000-0000-000000000070 60000000-0000-0000-0000-000000000070 puser70 puser70 Puser 70 puser70@example.com 60000000-0000-0000-0000-000000000071 60000000-0000-0000-0000-000000000071 puser71 puser71 Puser 71 puser71@example.com 60000000-0000-0000-0000-000000000072 60000000-0000-0000-0000-000000000072 puser72 puser72 Puser 72 puser72@example.com 60000000-0000-0000-0000-000000000073 60000000-0000-0000-0000-000000000073 puser73 puser73 Puser 73 puser73@example.com 60000000-0000-0000-0000-000000000074 60000000-0000-0000-0000-000000000074 puser74 puser74 Puser 74 puser74@example.com 60000000-0000-0000-0000-000000000075 60000000-0000-0000-0000-000000000075 puser75 puser75 Puser 75 puser75@example.com 60000000-0000-0000-0000-000000000076 60000000-0000-0000-0000-000000000076 puser76 puser76 Puser 76 puser76@example.com 60000000-0000-0000-0000-000000000077 60000000-0000-0000-0000-000000000077 puser77 puser77 Puser 77 puser77@example.com 60000000-0000-0000-0000-000000000078 60000000-0000-0000-0000-000000000078 puser78 puser78 Puser 78 puser78@example.com 60000000-0000-0000-0000-000000000079 60000000-0000-0000-0000-000000000079 puser79 puser79 Puser 79 puser79@example.com 60000000-0000-0000-0000-000000000080 60000000-0000-0000-0000-000000000080 puser80 puser80 Puser 80 puser80@example.com 60000000-0000-0000-0000-000000000081 60000000-0000-0000-0000-000000000081 puser81 puser81 Puser 81 puser81@example.com 60000000-0000-0000-0000-000000000082 60000000-0000-0000-0000-000000000082 puser82 puser82 Puser 82 puser82@example.com 
60000000-0000-0000-0000-000000000083 60000000-0000-0000-0000-000000000083 puser83 puser83 Puser 83 puser83@example.com 60000000-0000-0000-0000-000000000084 60000000-0000-0000-0000-000000000084 puser84 puser84 Puser 84 puser84@example.com 60000000-0000-0000-0000-000000000085 60000000-0000-0000-0000-000000000085 puser85 puser85 Puser 85 puser85@example.com 60000000-0000-0000-0000-000000000086 60000000-0000-0000-0000-000000000086 puser86 puser86 Puser 86 puser86@example.com 60000000-0000-0000-0000-000000000087 60000000-0000-0000-0000-000000000087 puser87 puser87 Puser 87 puser87@example.com 60000000-0000-0000-0000-000000000088 60000000-0000-0000-0000-000000000088 puser88 puser88 Puser 88 puser88@example.com 60000000-0000-0000-0000-000000000089 60000000-0000-0000-0000-000000000089 puser89 puser89 Puser 89 puser89@example.com 60000000-0000-0000-0000-000000000090 60000000-0000-0000-0000-000000000090 puser90 puser90 Puser 90 puser90@example.com 60000000-0000-0000-0000-000000000091 60000000-0000-0000-0000-000000000091 puser91 puser91 Puser 91 puser91@example.com 60000000-0000-0000-0000-000000000092 60000000-0000-0000-0000-000000000092 puser92 puser92 Puser 92 puser92@example.com 60000000-0000-0000-0000-000000000093 60000000-0000-0000-0000-000000000093 puser93 puser93 Puser 93 puser93@example.com 60000000-0000-0000-0000-000000000094 60000000-0000-0000-0000-000000000094 puser94 puser94 Puser 94 puser94@example.com 60000000-0000-0000-0000-000000000095 60000000-0000-0000-0000-000000000095 puser95 puser95 Puser 95 puser95@example.com 60000000-0000-0000-0000-000000000096 60000000-0000-0000-0000-000000000096 puser96 puser96 Puser 96 puser96@example.com 60000000-0000-0000-0000-000000000097 60000000-0000-0000-0000-000000000097 puser97 puser97 Puser 97 puser97@example.com 60000000-0000-0000-0000-000000000098 60000000-0000-0000-0000-000000000098 puser98 puser98 Puser 98 puser98@example.com 60000000-0000-0000-0000-000000000099 60000000-0000-0000-0000-000000000099 puser99 puser99 Puser 
99 puser99@example.com 60000000-0000-0000-0000-000000000100 60000000-0000-0000-0000-000000000100 puser100 puser100 Puser 100 puser100@example.com 20000000-0000-0000-0000-000000000001 20000000-0000-0000-0000-000000000001 group01 Group 01 group01@example.com 10000000-0000-0000-0000-000000000001 20000000-0000-0000-0000-000000000002 20000000-0000-0000-0000-000000000002 group02 Group 02 group02@example.com 10000000-0000-0000-0000-000000000006 10000000-0000-0000-0000-000000000007 20000000-0000-0000-0000-000000000003 20000000-0000-0000-0000-000000000003 group03 Group 03 group03@example.com 10000000-0000-0000-0000-000000000008 10000000-0000-0000-0000-000000000009 20000000-0000-0000-0000-000000000004 20000000-0000-0000-0000-000000000004 group04 Group 04 group04@example.com 20000000-0000-0000-0000-000000000002 20000000-0000-0000-0000-000000000003 10000000-0000-0000-0000-000000000010 20000000-0000-0000-0000-000000000005 20000000-0000-0000-0000-000000000005 group05 Group 05 group05@example.com 20000000-0000-0000-0000-000000000006 10000000-0000-0000-0000-000000000020 20000000-0000-0000-0000-000000000006 20000000-0000-0000-0000-000000000006 group06 Group 06 group06@example.com 10000000-0000-0000-0000-000000000021 20000000-0000-0000-0000-000000000007 20000000-0000-0000-0000-000000000007 group07 Group 07 group07@example.com 10000000-0000-0000-0000-000000000022 10000000-0000-0000-0000-000000000023 10000000-0000-0000-0000-000000000024 20000000-0000-0000-0000-000000000008 20000000-0000-0000-0000-000000000008 group08 Group 08 group08@example.com 20000000-0000-0000-0000-000000000009 20000000-0000-0000-0000-000000000009 group09 Group 09 group09@example.com 20000000-0000-0000-0000-000000000010 20000000-0000-0000-0000-000000000010 group10 Group 10 group10@example.com 20000000-0000-0000-0000-000000000011 20000000-0000-0000-0000-000000000011 group11 Group 11 group11@example.com 20000000-0000-0000-0000-000000000012 20000000-0000-0000-0000-000000000012 group12 Group 12 group12@example.com 
20000000-0000-0000-0000-000000000013 20000000-0000-0000-0000-000000000013 group13 Group 13 group13@example.com 20000000-0000-0000-0000-000000000014 20000000-0000-0000-0000-000000000014 group14 Group 14 group14@example.com 20000000-0000-0000-0000-000000000015 20000000-0000-0000-0000-000000000015 group15 Group 15 group15@example.com 20000000-0000-0000-0000-000000000016 20000000-0000-0000-0000-000000000016 group16 Group 16 group16@example.com 20000000-0000-0000-0000-000000000017 20000000-0000-0000-0000-000000000017 group17 Group 17 group17@example.com 20000000-0000-0000-0000-000000000018 20000000-0000-0000-0000-000000000018 group18 Group 18 group18@example.com 20000000-0000-0000-0000-000000000019 20000000-0000-0000-0000-000000000019 group19 Group 19 group19@example.com 20000000-0000-0000-0000-000000000020 20000000-0000-0000-0000-000000000020 group20 Group 20 group20@example.com 20000000-0000-0000-0000-000000000021 20000000-0000-0000-0000-000000000021 group21 Group 21 group21@example.com 20000000-0000-0000-0000-000000000022 20000000-0000-0000-0000-000000000022 group22 Group 22 group22@example.com 20000000-0000-0000-0000-000000000023 20000000-0000-0000-0000-000000000023 group23 Group 23 group23@example.com 20000000-0000-0000-0000-000000000024 20000000-0000-0000-0000-000000000024 group24 Group 24 group24@example.com 20000000-0000-0000-0000-000000000025 20000000-0000-0000-0000-000000000025 group25 Group 25 group25@example.com 20000000-0000-0000-0000-000000000026 20000000-0000-0000-0000-000000000026 group26 Group 26 group26@example.com 20000000-0000-0000-0000-000000000027 20000000-0000-0000-0000-000000000027 group27 Group 27 group27@example.com 20000000-0000-0000-0000-000000000028 20000000-0000-0000-0000-000000000028 group28 Group 28 group28@example.com 20000000-0000-0000-0000-000000000029 20000000-0000-0000-0000-000000000029 group29 Group 29 group29@example.com 20000000-0000-0000-0000-000000000030 20000000-0000-0000-0000-000000000030 group30 Group 30 group30@example.com 
20000000-0000-0000-0000-000000000031 20000000-0000-0000-0000-000000000031 group31 Group 31 group31@example.com 20000000-0000-0000-0000-000000000032 20000000-0000-0000-0000-000000000032 group32 Group 32 group32@example.com 20000000-0000-0000-0000-000000000033 20000000-0000-0000-0000-000000000033 group33 Group 33 group33@example.com 20000000-0000-0000-0000-000000000034 20000000-0000-0000-0000-000000000034 group34 Group 34 group34@example.com 20000000-0000-0000-0000-000000000035 20000000-0000-0000-0000-000000000035 group35 Group 35 group35@example.com 20000000-0000-0000-0000-000000000036 20000000-0000-0000-0000-000000000036 group36 Group 36 group36@example.com 20000000-0000-0000-0000-000000000037 20000000-0000-0000-0000-000000000037 group37 Group 37 group37@example.com 20000000-0000-0000-0000-000000000038 20000000-0000-0000-0000-000000000038 group38 Group 38 group38@example.com 20000000-0000-0000-0000-000000000039 20000000-0000-0000-0000-000000000039 group39 Group 39 group39@example.com 20000000-0000-0000-0000-000000000040 20000000-0000-0000-0000-000000000040 group40 Group 40 group40@example.com 20000000-0000-0000-0000-000000000041 20000000-0000-0000-0000-000000000041 group41 Group 41 group41@example.com 20000000-0000-0000-0000-000000000042 20000000-0000-0000-0000-000000000042 group42 Group 42 group42@example.com 20000000-0000-0000-0000-000000000043 20000000-0000-0000-0000-000000000043 group43 Group 43 group43@example.com 20000000-0000-0000-0000-000000000044 20000000-0000-0000-0000-000000000044 group44 Group 44 group44@example.com 20000000-0000-0000-0000-000000000045 20000000-0000-0000-0000-000000000045 group45 Group 45 group45@example.com 20000000-0000-0000-0000-000000000046 20000000-0000-0000-0000-000000000046 group46 Group 46 group46@example.com 20000000-0000-0000-0000-000000000047 20000000-0000-0000-0000-000000000047 group47 Group 47 group47@example.com 20000000-0000-0000-0000-000000000048 20000000-0000-0000-0000-000000000048 group48 Group 48 group48@example.com 
20000000-0000-0000-0000-000000000049 20000000-0000-0000-0000-000000000049 group49 Group 49 group49@example.com 20000000-0000-0000-0000-000000000050 20000000-0000-0000-0000-000000000050 group50 Group 50 group50@example.com 20000000-0000-0000-0000-000000000051 20000000-0000-0000-0000-000000000051 group51 Group 51 group51@example.com 20000000-0000-0000-0000-000000000052 20000000-0000-0000-0000-000000000052 group52 Group 52 group52@example.com 20000000-0000-0000-0000-000000000053 20000000-0000-0000-0000-000000000053 group53 Group 53 group53@example.com 20000000-0000-0000-0000-000000000054 20000000-0000-0000-0000-000000000054 group54 Group 54 group54@example.com 20000000-0000-0000-0000-000000000055 20000000-0000-0000-0000-000000000055 group55 Group 55 group55@example.com 20000000-0000-0000-0000-000000000056 20000000-0000-0000-0000-000000000056 group56 Group 56 group56@example.com 20000000-0000-0000-0000-000000000057 20000000-0000-0000-0000-000000000057 group57 Group 57 group57@example.com 20000000-0000-0000-0000-000000000058 20000000-0000-0000-0000-000000000058 group58 Group 58 group58@example.com 20000000-0000-0000-0000-000000000059 20000000-0000-0000-0000-000000000059 group59 Group 59 group59@example.com 20000000-0000-0000-0000-000000000060 20000000-0000-0000-0000-000000000060 group60 Group 60 group60@example.com 20000000-0000-0000-0000-000000000061 20000000-0000-0000-0000-000000000061 group61 Group 61 group61@example.com 20000000-0000-0000-0000-000000000062 20000000-0000-0000-0000-000000000062 group62 Group 62 group62@example.com 20000000-0000-0000-0000-000000000063 20000000-0000-0000-0000-000000000063 group63 Group 63 group63@example.com 20000000-0000-0000-0000-000000000064 20000000-0000-0000-0000-000000000064 group64 Group 64 group64@example.com 20000000-0000-0000-0000-000000000065 20000000-0000-0000-0000-000000000065 group65 Group 65 group65@example.com 20000000-0000-0000-0000-000000000066 20000000-0000-0000-0000-000000000066 group66 Group 66 group66@example.com 
20000000-0000-0000-0000-000000000067 20000000-0000-0000-0000-000000000067 group67 Group 67 group67@example.com 20000000-0000-0000-0000-000000000068 20000000-0000-0000-0000-000000000068 group68 Group 68 group68@example.com 20000000-0000-0000-0000-000000000069 20000000-0000-0000-0000-000000000069 group69 Group 69 group69@example.com 20000000-0000-0000-0000-000000000070 20000000-0000-0000-0000-000000000070 group70 Group 70 group70@example.com 20000000-0000-0000-0000-000000000071 20000000-0000-0000-0000-000000000071 group71 Group 71 group71@example.com 20000000-0000-0000-0000-000000000072 20000000-0000-0000-0000-000000000072 group72 Group 72 group72@example.com 20000000-0000-0000-0000-000000000073 20000000-0000-0000-0000-000000000073 group73 Group 73 group73@example.com 20000000-0000-0000-0000-000000000074 20000000-0000-0000-0000-000000000074 group74 Group 74 group74@example.com 20000000-0000-0000-0000-000000000075 20000000-0000-0000-0000-000000000075 group75 Group 75 group75@example.com 20000000-0000-0000-0000-000000000076 20000000-0000-0000-0000-000000000076 group76 Group 76 group76@example.com 20000000-0000-0000-0000-000000000077 20000000-0000-0000-0000-000000000077 group77 Group 77 group77@example.com 20000000-0000-0000-0000-000000000078 20000000-0000-0000-0000-000000000078 group78 Group 78 group78@example.com 20000000-0000-0000-0000-000000000079 20000000-0000-0000-0000-000000000079 group79 Group 79 group79@example.com 20000000-0000-0000-0000-000000000080 20000000-0000-0000-0000-000000000080 group80 Group 80 group80@example.com 20000000-0000-0000-0000-000000000081 20000000-0000-0000-0000-000000000081 group81 Group 81 group81@example.com 20000000-0000-0000-0000-000000000082 20000000-0000-0000-0000-000000000082 group82 Group 82 group82@example.com 20000000-0000-0000-0000-000000000083 20000000-0000-0000-0000-000000000083 group83 Group 83 group83@example.com 20000000-0000-0000-0000-000000000084 20000000-0000-0000-0000-000000000084 group84 Group 84 group84@example.com 
20000000-0000-0000-0000-000000000085 20000000-0000-0000-0000-000000000085 group85 Group 85 group85@example.com 20000000-0000-0000-0000-000000000086 20000000-0000-0000-0000-000000000086 group86 Group 86 group86@example.com 20000000-0000-0000-0000-000000000087 20000000-0000-0000-0000-000000000087 group87 Group 87 group87@example.com 20000000-0000-0000-0000-000000000088 20000000-0000-0000-0000-000000000088 group88 Group 88 group88@example.com 20000000-0000-0000-0000-000000000089 20000000-0000-0000-0000-000000000089 group89 Group 89 group89@example.com 20000000-0000-0000-0000-000000000090 20000000-0000-0000-0000-000000000090 group90 Group 90 group90@example.com 20000000-0000-0000-0000-000000000091 20000000-0000-0000-0000-000000000091 group91 Group 91 group91@example.com 20000000-0000-0000-0000-000000000092 20000000-0000-0000-0000-000000000092 group92 Group 92 group92@example.com 20000000-0000-0000-0000-000000000093 20000000-0000-0000-0000-000000000093 group93 Group 93 group93@example.com 20000000-0000-0000-0000-000000000094 20000000-0000-0000-0000-000000000094 group94 Group 94 group94@example.com 20000000-0000-0000-0000-000000000095 20000000-0000-0000-0000-000000000095 group95 Group 95 group95@example.com 20000000-0000-0000-0000-000000000096 20000000-0000-0000-0000-000000000096 group96 Group 96 group96@example.com 20000000-0000-0000-0000-000000000097 20000000-0000-0000-0000-000000000097 group97 Group 97 group97@example.com 20000000-0000-0000-0000-000000000098 20000000-0000-0000-0000-000000000098 group98 Group 98 group98@example.com 20000000-0000-0000-0000-000000000099 20000000-0000-0000-0000-000000000099 group99 Group 99 group99@example.com 20000000-0000-0000-0000-000000000100 20000000-0000-0000-0000-000000000100 group100 Group 100 group100@example.com calendarserver-9.1+dfsg/conf/auth/accounts-test-s2s.xml 82A5A2FA-F8FD-4761-B1CB-F1664C1F376A 82A5A2FA-F8FD-4761-B1CB-F1664C1F376A admin
admin Super User admin@example.org 70000000-0000-0000-0000-000000000001 70000000-0000-0000-0000-000000000001 other01 other01 Other 01 other01@example.org 70000000-0000-0000-0000-000000000002 70000000-0000-0000-0000-000000000002 other02 other02 Other 02 other02@example.org 70000000-0000-0000-0000-000000000003 70000000-0000-0000-0000-000000000003 other03 other03 Other 03 other03@example.org 70000000-0000-0000-0000-000000000004 70000000-0000-0000-0000-000000000004 other04 other04 Other 04 other04@example.org 70000000-0000-0000-0000-000000000005 70000000-0000-0000-0000-000000000005 other05 other05 Other 05 other05@example.org 70000000-0000-0000-0000-000000000006 70000000-0000-0000-0000-000000000006 other06 other06 Other 06 other06@example.org 70000000-0000-0000-0000-000000000007 70000000-0000-0000-0000-000000000007 other07 other07 Other 07 other07@example.org 70000000-0000-0000-0000-000000000008 70000000-0000-0000-0000-000000000008 other08 other08 Other 08 other08@example.org 70000000-0000-0000-0000-000000000009 70000000-0000-0000-0000-000000000009 other09 other09 Other 09 other09@example.org 70000000-0000-0000-0000-000000000010 70000000-0000-0000-0000-000000000010 other10 other10 Other 10 other10@example.org 70000000-0000-0000-0000-000000000011 70000000-0000-0000-0000-000000000011 other11 other11 Other 11 other11@example.org 70000000-0000-0000-0000-000000000012 70000000-0000-0000-0000-000000000012 other12 other12 Other 12 other12@example.org 70000000-0000-0000-0000-000000000013 70000000-0000-0000-0000-000000000013 other13 other13 Other 13 other13@example.org 70000000-0000-0000-0000-000000000014 70000000-0000-0000-0000-000000000014 other14 other14 Other 14 other14@example.org 70000000-0000-0000-0000-000000000015 70000000-0000-0000-0000-000000000015 other15 other15 Other 15 other15@example.org 70000000-0000-0000-0000-000000000016 70000000-0000-0000-0000-000000000016 other16 other16 Other 16 other16@example.org 70000000-0000-0000-0000-000000000017 
70000000-0000-0000-0000-000000000017 other17 other17 Other 17 other17@example.org 70000000-0000-0000-0000-000000000018 70000000-0000-0000-0000-000000000018 other18 other18 Other 18 other18@example.org 70000000-0000-0000-0000-000000000019 70000000-0000-0000-0000-000000000019 other19 other19 Other 19 other19@example.org 70000000-0000-0000-0000-000000000020 70000000-0000-0000-0000-000000000020 other20 other20 Other 20 other20@example.org 70000000-0000-0000-0000-000000000021 70000000-0000-0000-0000-000000000021 other21 other21 Other 21 other21@example.org 70000000-0000-0000-0000-000000000022 70000000-0000-0000-0000-000000000022 other22 other22 Other 22 other22@example.org 70000000-0000-0000-0000-000000000023 70000000-0000-0000-0000-000000000023 other23 other23 Other 23 other23@example.org 70000000-0000-0000-0000-000000000024 70000000-0000-0000-0000-000000000024 other24 other24 Other 24 other24@example.org 70000000-0000-0000-0000-000000000025 70000000-0000-0000-0000-000000000025 other25 other25 Other 25 other25@example.org 70000000-0000-0000-0000-000000000026 70000000-0000-0000-0000-000000000026 other26 other26 Other 26 other26@example.org 70000000-0000-0000-0000-000000000027 70000000-0000-0000-0000-000000000027 other27 other27 Other 27 other27@example.org 70000000-0000-0000-0000-000000000028 70000000-0000-0000-0000-000000000028 other28 other28 Other 28 other28@example.org 70000000-0000-0000-0000-000000000029 70000000-0000-0000-0000-000000000029 other29 other29 Other 29 other29@example.org 70000000-0000-0000-0000-000000000030 70000000-0000-0000-0000-000000000030 other30 other30 Other 30 other30@example.org 70000000-0000-0000-0000-000000000031 70000000-0000-0000-0000-000000000031 other31 other31 Other 31 other31@example.org 70000000-0000-0000-0000-000000000032 70000000-0000-0000-0000-000000000032 other32 other32 Other 32 other32@example.org 70000000-0000-0000-0000-000000000033 70000000-0000-0000-0000-000000000033 other33 other33 Other 33 other33@example.org 
70000000-0000-0000-0000-000000000034 70000000-0000-0000-0000-000000000034 other34 other34 Other 34 other34@example.org 70000000-0000-0000-0000-000000000035 70000000-0000-0000-0000-000000000035 other35 other35 Other 35 other35@example.org 70000000-0000-0000-0000-000000000036 70000000-0000-0000-0000-000000000036 other36 other36 Other 36 other36@example.org 70000000-0000-0000-0000-000000000037 70000000-0000-0000-0000-000000000037 other37 other37 Other 37 other37@example.org 70000000-0000-0000-0000-000000000038 70000000-0000-0000-0000-000000000038 other38 other38 Other 38 other38@example.org 70000000-0000-0000-0000-000000000039 70000000-0000-0000-0000-000000000039 other39 other39 Other 39 other39@example.org 70000000-0000-0000-0000-000000000040 70000000-0000-0000-0000-000000000040 other40 other40 Other 40 other40@example.org 70000000-0000-0000-0000-000000000041 70000000-0000-0000-0000-000000000041 other41 other41 Other 41 other41@example.org 70000000-0000-0000-0000-000000000042 70000000-0000-0000-0000-000000000042 other42 other42 Other 42 other42@example.org 70000000-0000-0000-0000-000000000043 70000000-0000-0000-0000-000000000043 other43 other43 Other 43 other43@example.org 70000000-0000-0000-0000-000000000044 70000000-0000-0000-0000-000000000044 other44 other44 Other 44 other44@example.org 70000000-0000-0000-0000-000000000045 70000000-0000-0000-0000-000000000045 other45 other45 Other 45 other45@example.org 70000000-0000-0000-0000-000000000046 70000000-0000-0000-0000-000000000046 other46 other46 Other 46 other46@example.org 70000000-0000-0000-0000-000000000047 70000000-0000-0000-0000-000000000047 other47 other47 Other 47 other47@example.org 70000000-0000-0000-0000-000000000048 70000000-0000-0000-0000-000000000048 other48 other48 Other 48 other48@example.org 70000000-0000-0000-0000-000000000049 70000000-0000-0000-0000-000000000049 other49 other49 Other 49 other49@example.org 70000000-0000-0000-0000-000000000050 70000000-0000-0000-0000-000000000050 other50 other50 Other 
50 other50@example.org 70000000-0000-0000-0000-000000000051 70000000-0000-0000-0000-000000000051 other51 other51 Other 51 other51@example.org 70000000-0000-0000-0000-000000000052 70000000-0000-0000-0000-000000000052 other52 other52 Other 52 other52@example.org 70000000-0000-0000-0000-000000000053 70000000-0000-0000-0000-000000000053 other53 other53 Other 53 other53@example.org 70000000-0000-0000-0000-000000000054 70000000-0000-0000-0000-000000000054 other54 other54 Other 54 other54@example.org 70000000-0000-0000-0000-000000000055 70000000-0000-0000-0000-000000000055 other55 other55 Other 55 other55@example.org 70000000-0000-0000-0000-000000000056 70000000-0000-0000-0000-000000000056 other56 other56 Other 56 other56@example.org 70000000-0000-0000-0000-000000000057 70000000-0000-0000-0000-000000000057 other57 other57 Other 57 other57@example.org 70000000-0000-0000-0000-000000000058 70000000-0000-0000-0000-000000000058 other58 other58 Other 58 other58@example.org 70000000-0000-0000-0000-000000000059 70000000-0000-0000-0000-000000000059 other59 other59 Other 59 other59@example.org 70000000-0000-0000-0000-000000000060 70000000-0000-0000-0000-000000000060 other60 other60 Other 60 other60@example.org 70000000-0000-0000-0000-000000000061 70000000-0000-0000-0000-000000000061 other61 other61 Other 61 other61@example.org 70000000-0000-0000-0000-000000000062 70000000-0000-0000-0000-000000000062 other62 other62 Other 62 other62@example.org 70000000-0000-0000-0000-000000000063 70000000-0000-0000-0000-000000000063 other63 other63 Other 63 other63@example.org 70000000-0000-0000-0000-000000000064 70000000-0000-0000-0000-000000000064 other64 other64 Other 64 other64@example.org 70000000-0000-0000-0000-000000000065 70000000-0000-0000-0000-000000000065 other65 other65 Other 65 other65@example.org 70000000-0000-0000-0000-000000000066 70000000-0000-0000-0000-000000000066 other66 other66 Other 66 other66@example.org 70000000-0000-0000-0000-000000000067 
70000000-0000-0000-0000-000000000067 other67 other67 Other 67 other67@example.org 70000000-0000-0000-0000-000000000068 70000000-0000-0000-0000-000000000068 other68 other68 Other 68 other68@example.org 70000000-0000-0000-0000-000000000069 70000000-0000-0000-0000-000000000069 other69 other69 Other 69 other69@example.org 70000000-0000-0000-0000-000000000070 70000000-0000-0000-0000-000000000070 other70 other70 Other 70 other70@example.org 70000000-0000-0000-0000-000000000071 70000000-0000-0000-0000-000000000071 other71 other71 Other 71 other71@example.org 70000000-0000-0000-0000-000000000072 70000000-0000-0000-0000-000000000072 other72 other72 Other 72 other72@example.org 70000000-0000-0000-0000-000000000073 70000000-0000-0000-0000-000000000073 other73 other73 Other 73 other73@example.org 70000000-0000-0000-0000-000000000074 70000000-0000-0000-0000-000000000074 other74 other74 Other 74 other74@example.org 70000000-0000-0000-0000-000000000075 70000000-0000-0000-0000-000000000075 other75 other75 Other 75 other75@example.org 70000000-0000-0000-0000-000000000076 70000000-0000-0000-0000-000000000076 other76 other76 Other 76 other76@example.org 70000000-0000-0000-0000-000000000077 70000000-0000-0000-0000-000000000077 other77 other77 Other 77 other77@example.org 70000000-0000-0000-0000-000000000078 70000000-0000-0000-0000-000000000078 other78 other78 Other 78 other78@example.org 70000000-0000-0000-0000-000000000079 70000000-0000-0000-0000-000000000079 other79 other79 Other 79 other79@example.org 70000000-0000-0000-0000-000000000080 70000000-0000-0000-0000-000000000080 other80 other80 Other 80 other80@example.org 70000000-0000-0000-0000-000000000081 70000000-0000-0000-0000-000000000081 other81 other81 Other 81 other81@example.org 70000000-0000-0000-0000-000000000082 70000000-0000-0000-0000-000000000082 other82 other82 Other 82 other82@example.org 70000000-0000-0000-0000-000000000083 70000000-0000-0000-0000-000000000083 other83 other83 Other 83 other83@example.org 
70000000-0000-0000-0000-000000000084 70000000-0000-0000-0000-000000000084 other84 other84 Other 84 other84@example.org 70000000-0000-0000-0000-000000000085 70000000-0000-0000-0000-000000000085 other85 other85 Other 85 other85@example.org 70000000-0000-0000-0000-000000000086 70000000-0000-0000-0000-000000000086 other86 other86 Other 86 other86@example.org 70000000-0000-0000-0000-000000000087 70000000-0000-0000-0000-000000000087 other87 other87 Other 87 other87@example.org 70000000-0000-0000-0000-000000000088 70000000-0000-0000-0000-000000000088 other88 other88 Other 88 other88@example.org 70000000-0000-0000-0000-000000000089 70000000-0000-0000-0000-000000000089 other89 other89 Other 89 other89@example.org 70000000-0000-0000-0000-000000000090 70000000-0000-0000-0000-000000000090 other90 other90 Other 90 other90@example.org 70000000-0000-0000-0000-000000000091 70000000-0000-0000-0000-000000000091 other91 other91 Other 91 other91@example.org 70000000-0000-0000-0000-000000000092 70000000-0000-0000-0000-000000000092 other92 other92 Other 92 other92@example.org 70000000-0000-0000-0000-000000000093 70000000-0000-0000-0000-000000000093 other93 other93 Other 93 other93@example.org 70000000-0000-0000-0000-000000000094 70000000-0000-0000-0000-000000000094 other94 other94 Other 94 other94@example.org 70000000-0000-0000-0000-000000000095 70000000-0000-0000-0000-000000000095 other95 other95 Other 95 other95@example.org 70000000-0000-0000-0000-000000000096 70000000-0000-0000-0000-000000000096 other96 other96 Other 96 other96@example.org 70000000-0000-0000-0000-000000000097 70000000-0000-0000-0000-000000000097 other97 other97 Other 97 other97@example.org 70000000-0000-0000-0000-000000000098 70000000-0000-0000-0000-000000000098 other98 other98 Other 98 other98@example.org 70000000-0000-0000-0000-000000000099 70000000-0000-0000-0000-000000000099 other99 other99 Other 99 other99@example.org 70000000-0000-0000-0000-000000000100 70000000-0000-0000-0000-000000000100 other100 other100 
Other 100 other100@example.org
calendarserver-9.1+dfsg/conf/auth/accounts-test.xml
0C8BDE62-E600-4696-83D3-8B5ECABDFD2E 0C8BDE62-E600-4696-83D3-8B5ECABDFD2E admin admin Super User admin@example.com 29B6C503-11DF-43EC-8CCA-40C7003149CE 29B6C503-11DF-43EC-8CCA-40C7003149CE apprentice apprentice Apprentice Super User apprentice@example.com 860B3EE9-6D7C-4296-9639-E6B998074A78 860B3EE9-6D7C-4296-9639-E6B998074A78 i18nuser i18nuser まだ i18nuser@example.com 10000000-0000-0000-0000-000000000001 10000000-0000-0000-0000-000000000001 user01 user01 User 01 user01@example.com 10000000-0000-0000-0000-000000000002 10000000-0000-0000-0000-000000000002 user02 user02 User 02 user02@example.com 10000000-0000-0000-0000-000000000003 10000000-0000-0000-0000-000000000003 user03 user03 User 03 user03@example.com 10000000-0000-0000-0000-000000000004 10000000-0000-0000-0000-000000000004 user04 user04 User 04 user04@example.com 10000000-0000-0000-0000-000000000005 10000000-0000-0000-0000-000000000005 user05 user05 User 05 user05@example.com 10000000-0000-0000-0000-000000000006 10000000-0000-0000-0000-000000000006 user06 user06 User 06 user06@example.com 10000000-0000-0000-0000-000000000007 10000000-0000-0000-0000-000000000007 user07 user07 User 07 user07@example.com 10000000-0000-0000-0000-000000000008 10000000-0000-0000-0000-000000000008 user08 user08 User 08 user08@example.com 10000000-0000-0000-0000-000000000009 10000000-0000-0000-0000-000000000009 user09 user09 User 09 user09@example.com 10000000-0000-0000-0000-000000000010 10000000-0000-0000-0000-000000000010 user10 user10 User 10 user10@example.com 10000000-0000-0000-0000-000000000011 10000000-0000-0000-0000-000000000011 user11 user11 User 11 user11@example.com 10000000-0000-0000-0000-000000000012 10000000-0000-0000-0000-000000000012 user12 user12 User 12 user12@example.com 10000000-0000-0000-0000-000000000013 10000000-0000-0000-0000-000000000013 user13
user13 User 13 user13@example.com 10000000-0000-0000-0000-000000000014 10000000-0000-0000-0000-000000000014 user14 user14 User 14 user14@example.com 10000000-0000-0000-0000-000000000015 10000000-0000-0000-0000-000000000015 user15 user15 User 15 user15@example.com 10000000-0000-0000-0000-000000000016 10000000-0000-0000-0000-000000000016 user16 user16 User 16 user16@example.com 10000000-0000-0000-0000-000000000017 10000000-0000-0000-0000-000000000017 user17 user17 User 17 user17@example.com 10000000-0000-0000-0000-000000000018 10000000-0000-0000-0000-000000000018 user18 user18 User 18 user18@example.com 10000000-0000-0000-0000-000000000019 10000000-0000-0000-0000-000000000019 user19 user19 User 19 user19@example.com 10000000-0000-0000-0000-000000000020 10000000-0000-0000-0000-000000000020 user20 user20 User 20 user20@example.com 10000000-0000-0000-0000-000000000021 10000000-0000-0000-0000-000000000021 user21 user21 User 21 user21@example.com 10000000-0000-0000-0000-000000000022 10000000-0000-0000-0000-000000000022 user22 user22 User 22 user22@example.com 10000000-0000-0000-0000-000000000023 10000000-0000-0000-0000-000000000023 user23 user23 User 23 user23@example.com 10000000-0000-0000-0000-000000000024 10000000-0000-0000-0000-000000000024 user24 user24 User 24 user24@example.com 10000000-0000-0000-0000-000000000025 10000000-0000-0000-0000-000000000025 user25 user25 User 25 user25@example.com 10000000-0000-0000-0000-000000000026 10000000-0000-0000-0000-000000000026 user26 user26 User 26 user26@example.com 10000000-0000-0000-0000-000000000027 10000000-0000-0000-0000-000000000027 user27 user27 User 27 user27@example.com 10000000-0000-0000-0000-000000000028 10000000-0000-0000-0000-000000000028 user28 user28 User 28 user28@example.com 10000000-0000-0000-0000-000000000029 10000000-0000-0000-0000-000000000029 user29 user29 User 29 user29@example.com 10000000-0000-0000-0000-000000000030 10000000-0000-0000-0000-000000000030 user30 user30 User 30 user30@example.com 
10000000-0000-0000-0000-000000000031 10000000-0000-0000-0000-000000000031 user31 user31 User 31 user31@example.com 10000000-0000-0000-0000-000000000032 10000000-0000-0000-0000-000000000032 user32 user32 User 32 user32@example.com 10000000-0000-0000-0000-000000000033 10000000-0000-0000-0000-000000000033 user33 user33 User 33 user33@example.com 10000000-0000-0000-0000-000000000034 10000000-0000-0000-0000-000000000034 user34 user34 User 34 user34@example.com 10000000-0000-0000-0000-000000000035 10000000-0000-0000-0000-000000000035 user35 user35 User 35 user35@example.com 10000000-0000-0000-0000-000000000036 10000000-0000-0000-0000-000000000036 user36 user36 User 36 user36@example.com 10000000-0000-0000-0000-000000000037 10000000-0000-0000-0000-000000000037 user37 user37 User 37 user37@example.com 10000000-0000-0000-0000-000000000038 10000000-0000-0000-0000-000000000038 user38 user38 User 38 user38@example.com 10000000-0000-0000-0000-000000000039 10000000-0000-0000-0000-000000000039 user39 user39 User 39 user39@example.com 10000000-0000-0000-0000-000000000040 10000000-0000-0000-0000-000000000040 user40 user40 User 40 user40@example.com 10000000-0000-0000-0000-000000000041 10000000-0000-0000-0000-000000000041 user41 user41 User 41 user41@example.com 10000000-0000-0000-0000-000000000042 10000000-0000-0000-0000-000000000042 user42 user42 User 42 user42@example.com 10000000-0000-0000-0000-000000000043 10000000-0000-0000-0000-000000000043 user43 user43 User 43 user43@example.com 10000000-0000-0000-0000-000000000044 10000000-0000-0000-0000-000000000044 user44 user44 User 44 user44@example.com 10000000-0000-0000-0000-000000000045 10000000-0000-0000-0000-000000000045 user45 user45 User 45 user45@example.com 10000000-0000-0000-0000-000000000046 10000000-0000-0000-0000-000000000046 user46 user46 User 46 user46@example.com 10000000-0000-0000-0000-000000000047 10000000-0000-0000-0000-000000000047 user47 user47 User 47 user47@example.com 10000000-0000-0000-0000-000000000048 
10000000-0000-0000-0000-000000000048 user48 user48 User 48 user48@example.com 10000000-0000-0000-0000-000000000049 10000000-0000-0000-0000-000000000049 user49 user49 User 49 user49@example.com 10000000-0000-0000-0000-000000000050 10000000-0000-0000-0000-000000000050 user50 user50 User 50 user50@example.com 10000000-0000-0000-0000-000000000051 10000000-0000-0000-0000-000000000051 user51 user51 User 51 user51@example.com 10000000-0000-0000-0000-000000000052 10000000-0000-0000-0000-000000000052 user52 user52 User 52 user52@example.com 10000000-0000-0000-0000-000000000053 10000000-0000-0000-0000-000000000053 user53 user53 User 53 user53@example.com 10000000-0000-0000-0000-000000000054 10000000-0000-0000-0000-000000000054 user54 user54 User 54 user54@example.com 10000000-0000-0000-0000-000000000055 10000000-0000-0000-0000-000000000055 user55 user55 User 55 user55@example.com 10000000-0000-0000-0000-000000000056 10000000-0000-0000-0000-000000000056 user56 user56 User 56 user56@example.com 10000000-0000-0000-0000-000000000057 10000000-0000-0000-0000-000000000057 user57 user57 User 57 user57@example.com 10000000-0000-0000-0000-000000000058 10000000-0000-0000-0000-000000000058 user58 user58 User 58 user58@example.com 10000000-0000-0000-0000-000000000059 10000000-0000-0000-0000-000000000059 user59 user59 User 59 user59@example.com 10000000-0000-0000-0000-000000000060 10000000-0000-0000-0000-000000000060 user60 user60 User 60 user60@example.com 10000000-0000-0000-0000-000000000061 10000000-0000-0000-0000-000000000061 user61 user61 User 61 user61@example.com 10000000-0000-0000-0000-000000000062 10000000-0000-0000-0000-000000000062 user62 user62 User 62 user62@example.com 10000000-0000-0000-0000-000000000063 10000000-0000-0000-0000-000000000063 user63 user63 User 63 user63@example.com 10000000-0000-0000-0000-000000000064 10000000-0000-0000-0000-000000000064 user64 user64 User 64 user64@example.com 10000000-0000-0000-0000-000000000065 10000000-0000-0000-0000-000000000065 user65 
user65 User 65 user65@example.com 10000000-0000-0000-0000-000000000066 10000000-0000-0000-0000-000000000066 user66 user66 User 66 user66@example.com 10000000-0000-0000-0000-000000000067 10000000-0000-0000-0000-000000000067 user67 user67 User 67 user67@example.com 10000000-0000-0000-0000-000000000068 10000000-0000-0000-0000-000000000068 user68 user68 User 68 user68@example.com 10000000-0000-0000-0000-000000000069 10000000-0000-0000-0000-000000000069 user69 user69 User 69 user69@example.com 10000000-0000-0000-0000-000000000070 10000000-0000-0000-0000-000000000070 user70 user70 User 70 user70@example.com 10000000-0000-0000-0000-000000000071 10000000-0000-0000-0000-000000000071 user71 user71 User 71 user71@example.com 10000000-0000-0000-0000-000000000072 10000000-0000-0000-0000-000000000072 user72 user72 User 72 user72@example.com 10000000-0000-0000-0000-000000000073 10000000-0000-0000-0000-000000000073 user73 user73 User 73 user73@example.com 10000000-0000-0000-0000-000000000074 10000000-0000-0000-0000-000000000074 user74 user74 User 74 user74@example.com 10000000-0000-0000-0000-000000000075 10000000-0000-0000-0000-000000000075 user75 user75 User 75 user75@example.com 10000000-0000-0000-0000-000000000076 10000000-0000-0000-0000-000000000076 user76 user76 User 76 user76@example.com 10000000-0000-0000-0000-000000000077 10000000-0000-0000-0000-000000000077 user77 user77 User 77 user77@example.com 10000000-0000-0000-0000-000000000078 10000000-0000-0000-0000-000000000078 user78 user78 User 78 user78@example.com 10000000-0000-0000-0000-000000000079 10000000-0000-0000-0000-000000000079 user79 user79 User 79 user79@example.com 10000000-0000-0000-0000-000000000080 10000000-0000-0000-0000-000000000080 user80 user80 User 80 user80@example.com 10000000-0000-0000-0000-000000000081 10000000-0000-0000-0000-000000000081 user81 user81 User 81 user81@example.com 10000000-0000-0000-0000-000000000082 10000000-0000-0000-0000-000000000082 user82 user82 User 82 user82@example.com 
10000000-0000-0000-0000-000000000083 10000000-0000-0000-0000-000000000083 user83 user83 User 83 user83@example.com 10000000-0000-0000-0000-000000000084 10000000-0000-0000-0000-000000000084 user84 user84 User 84 user84@example.com 10000000-0000-0000-0000-000000000085 10000000-0000-0000-0000-000000000085 user85 user85 User 85 user85@example.com 10000000-0000-0000-0000-000000000086 10000000-0000-0000-0000-000000000086 user86 user86 User 86 user86@example.com 10000000-0000-0000-0000-000000000087 10000000-0000-0000-0000-000000000087 user87 user87 User 87 user87@example.com 10000000-0000-0000-0000-000000000088 10000000-0000-0000-0000-000000000088 user88 user88 User 88 user88@example.com 10000000-0000-0000-0000-000000000089 10000000-0000-0000-0000-000000000089 user89 user89 User 89 user89@example.com 10000000-0000-0000-0000-000000000090 10000000-0000-0000-0000-000000000090 user90 user90 User 90 user90@example.com 10000000-0000-0000-0000-000000000091 10000000-0000-0000-0000-000000000091 user91 user91 User 91 user91@example.com 10000000-0000-0000-0000-000000000092 10000000-0000-0000-0000-000000000092 user92 user92 User 92 user92@example.com 10000000-0000-0000-0000-000000000093 10000000-0000-0000-0000-000000000093 user93 user93 User 93 user93@example.com 10000000-0000-0000-0000-000000000094 10000000-0000-0000-0000-000000000094 user94 user94 User 94 user94@example.com 10000000-0000-0000-0000-000000000095 10000000-0000-0000-0000-000000000095 user95 user95 User 95 user95@example.com 10000000-0000-0000-0000-000000000096 10000000-0000-0000-0000-000000000096 user96 user96 User 96 user96@example.com 10000000-0000-0000-0000-000000000097 10000000-0000-0000-0000-000000000097 user97 user97 User 97 user97@example.com 10000000-0000-0000-0000-000000000098 10000000-0000-0000-0000-000000000098 user98 user98 User 98 user98@example.com 10000000-0000-0000-0000-000000000099 10000000-0000-0000-0000-000000000099 user99 user99 User 99 user99@example.com 10000000-0000-0000-0000-000000000100 
10000000-0000-0000-0000-000000000100 user100 user100 User 100 user100@example.com 10000000-0000-0000-0000-000000000101 10000000-0000-0000-0000-000000000101 user101 user101 User 101 user101@example.com 50000000-0000-0000-0000-000000000001 50000000-0000-0000-0000-000000000001 public01 public01 Public 01 public01@example.com 50000000-0000-0000-0000-000000000002 50000000-0000-0000-0000-000000000002 public02 public02 Public 02 public02@example.com 50000000-0000-0000-0000-000000000003 50000000-0000-0000-0000-000000000003 public03 public03 Public 03 public03@example.com 50000000-0000-0000-0000-000000000004 50000000-0000-0000-0000-000000000004 public04 public04 Public 04 public04@example.com 50000000-0000-0000-0000-000000000005 50000000-0000-0000-0000-000000000005 public05 public05 Public 05 public05@example.com 50000000-0000-0000-0000-000000000006 50000000-0000-0000-0000-000000000006 public06 public06 Public 06 public06@example.com 50000000-0000-0000-0000-000000000007 50000000-0000-0000-0000-000000000007 public07 public07 Public 07 public07@example.com 50000000-0000-0000-0000-000000000008 50000000-0000-0000-0000-000000000008 public08 public08 Public 08 public08@example.com 50000000-0000-0000-0000-000000000009 50000000-0000-0000-0000-000000000009 public09 public09 Public 09 public09@example.com 50000000-0000-0000-0000-000000000010 50000000-0000-0000-0000-000000000010 public10 public10 Public 10 public10@example.com 20000000-0000-0000-0000-000000000001 20000000-0000-0000-0000-000000000001 group01 Group 01 group01@example.com 10000000-0000-0000-0000-000000000001 20000000-0000-0000-0000-000000000002 20000000-0000-0000-0000-000000000002 group02 Group 02 group02@example.com 10000000-0000-0000-0000-000000000006 10000000-0000-0000-0000-000000000007 20000000-0000-0000-0000-000000000003 20000000-0000-0000-0000-000000000003 group03 Group 03 group03@example.com 10000000-0000-0000-0000-000000000008 10000000-0000-0000-0000-000000000009 20000000-0000-0000-0000-000000000004 
20000000-0000-0000-0000-000000000004 group04 Group 04 group04@example.com 20000000-0000-0000-0000-000000000002 20000000-0000-0000-0000-000000000003 10000000-0000-0000-0000-000000000010 20000000-0000-0000-0000-000000000005 20000000-0000-0000-0000-000000000005 group05 Group 05 group05@example.com 20000000-0000-0000-0000-000000000006 10000000-0000-0000-0000-000000000020 20000000-0000-0000-0000-000000000006 20000000-0000-0000-0000-000000000006 group06 Group 06 group06@example.com 10000000-0000-0000-0000-000000000021 20000000-0000-0000-0000-000000000007 20000000-0000-0000-0000-000000000007 group07 Group 07 group07@example.com 10000000-0000-0000-0000-000000000022 10000000-0000-0000-0000-000000000023 10000000-0000-0000-0000-000000000024 20000000-0000-0000-0000-000000000008 20000000-0000-0000-0000-000000000008 group08 Group 08 group08@example.com 20000000-0000-0000-0000-000000000009 20000000-0000-0000-0000-000000000009 group09 Group 09 group09@example.com 20000000-0000-0000-0000-000000000010 20000000-0000-0000-0000-000000000010 group10 Group 10 group10@example.com 20000000-0000-0000-0000-000000000011 20000000-0000-0000-0000-000000000011 group11 Group 11 group11@example.com 20000000-0000-0000-0000-000000000012 20000000-0000-0000-0000-000000000012 group12 Group 12 group12@example.com 20000000-0000-0000-0000-000000000013 20000000-0000-0000-0000-000000000013 group13 Group 13 group13@example.com 20000000-0000-0000-0000-000000000014 20000000-0000-0000-0000-000000000014 group14 Group 14 group14@example.com 20000000-0000-0000-0000-000000000015 20000000-0000-0000-0000-000000000015 group15 Group 15 group15@example.com 20000000-0000-0000-0000-000000000016 20000000-0000-0000-0000-000000000016 group16 Group 16 group16@example.com 20000000-0000-0000-0000-000000000017 20000000-0000-0000-0000-000000000017 group17 Group 17 group17@example.com 20000000-0000-0000-0000-000000000018 20000000-0000-0000-0000-000000000018 group18 Group 18 group18@example.com 20000000-0000-0000-0000-000000000019 
20000000-0000-0000-0000-000000000019 group19 Group 19 group19@example.com 20000000-0000-0000-0000-000000000020 20000000-0000-0000-0000-000000000020 group20 Group 20 group20@example.com 20000000-0000-0000-0000-000000000021 20000000-0000-0000-0000-000000000021 group21 Group 21 group21@example.com 20000000-0000-0000-0000-000000000022 20000000-0000-0000-0000-000000000022 group22 Group 22 group22@example.com 20000000-0000-0000-0000-000000000023 20000000-0000-0000-0000-000000000023 group23 Group 23 group23@example.com 20000000-0000-0000-0000-000000000024 20000000-0000-0000-0000-000000000024 group24 Group 24 group24@example.com 20000000-0000-0000-0000-000000000025 20000000-0000-0000-0000-000000000025 group25 Group 25 group25@example.com 20000000-0000-0000-0000-000000000026 20000000-0000-0000-0000-000000000026 group26 Group 26 group26@example.com 20000000-0000-0000-0000-000000000027 20000000-0000-0000-0000-000000000027 group27 Group 27 group27@example.com 20000000-0000-0000-0000-000000000028 20000000-0000-0000-0000-000000000028 group28 Group 28 group28@example.com 20000000-0000-0000-0000-000000000029 20000000-0000-0000-0000-000000000029 group29 Group 29 group29@example.com 20000000-0000-0000-0000-000000000030 20000000-0000-0000-0000-000000000030 group30 Group 30 group30@example.com 20000000-0000-0000-0000-000000000031 20000000-0000-0000-0000-000000000031 group31 Group 31 group31@example.com 20000000-0000-0000-0000-000000000032 20000000-0000-0000-0000-000000000032 group32 Group 32 group32@example.com 20000000-0000-0000-0000-000000000033 20000000-0000-0000-0000-000000000033 group33 Group 33 group33@example.com 20000000-0000-0000-0000-000000000034 20000000-0000-0000-0000-000000000034 group34 Group 34 group34@example.com 20000000-0000-0000-0000-000000000035 20000000-0000-0000-0000-000000000035 group35 Group 35 group35@example.com 20000000-0000-0000-0000-000000000036 20000000-0000-0000-0000-000000000036 group36 Group 36 group36@example.com 20000000-0000-0000-0000-000000000037 
20000000-0000-0000-0000-000000000037 group37 Group 37 group37@example.com 20000000-0000-0000-0000-000000000038 20000000-0000-0000-0000-000000000038 group38 Group 38 group38@example.com 20000000-0000-0000-0000-000000000039 20000000-0000-0000-0000-000000000039 group39 Group 39 group39@example.com 20000000-0000-0000-0000-000000000040 20000000-0000-0000-0000-000000000040 group40 Group 40 group40@example.com 20000000-0000-0000-0000-000000000041 20000000-0000-0000-0000-000000000041 group41 Group 41 group41@example.com 20000000-0000-0000-0000-000000000042 20000000-0000-0000-0000-000000000042 group42 Group 42 group42@example.com 20000000-0000-0000-0000-000000000043 20000000-0000-0000-0000-000000000043 group43 Group 43 group43@example.com 20000000-0000-0000-0000-000000000044 20000000-0000-0000-0000-000000000044 group44 Group 44 group44@example.com 20000000-0000-0000-0000-000000000045 20000000-0000-0000-0000-000000000045 group45 Group 45 group45@example.com 20000000-0000-0000-0000-000000000046 20000000-0000-0000-0000-000000000046 group46 Group 46 group46@example.com 20000000-0000-0000-0000-000000000047 20000000-0000-0000-0000-000000000047 group47 Group 47 group47@example.com 20000000-0000-0000-0000-000000000048 20000000-0000-0000-0000-000000000048 group48 Group 48 group48@example.com 20000000-0000-0000-0000-000000000049 20000000-0000-0000-0000-000000000049 group49 Group 49 group49@example.com 20000000-0000-0000-0000-000000000050 20000000-0000-0000-0000-000000000050 group50 Group 50 group50@example.com 20000000-0000-0000-0000-000000000051 20000000-0000-0000-0000-000000000051 group51 Group 51 group51@example.com 20000000-0000-0000-0000-000000000052 20000000-0000-0000-0000-000000000052 group52 Group 52 group52@example.com 20000000-0000-0000-0000-000000000053 20000000-0000-0000-0000-000000000053 group53 Group 53 group53@example.com 20000000-0000-0000-0000-000000000054 20000000-0000-0000-0000-000000000054 group54 Group 54 group54@example.com 20000000-0000-0000-0000-000000000055 
20000000-0000-0000-0000-000000000055 group55 Group 55 group55@example.com 20000000-0000-0000-0000-000000000056 20000000-0000-0000-0000-000000000056 group56 Group 56 group56@example.com 20000000-0000-0000-0000-000000000057 20000000-0000-0000-0000-000000000057 group57 Group 57 group57@example.com 20000000-0000-0000-0000-000000000058 20000000-0000-0000-0000-000000000058 group58 Group 58 group58@example.com 20000000-0000-0000-0000-000000000059 20000000-0000-0000-0000-000000000059 group59 Group 59 group59@example.com 20000000-0000-0000-0000-000000000060 20000000-0000-0000-0000-000000000060 group60 Group 60 group60@example.com 20000000-0000-0000-0000-000000000061 20000000-0000-0000-0000-000000000061 group61 Group 61 group61@example.com 20000000-0000-0000-0000-000000000062 20000000-0000-0000-0000-000000000062 group62 Group 62 group62@example.com 20000000-0000-0000-0000-000000000063 20000000-0000-0000-0000-000000000063 group63 Group 63 group63@example.com 20000000-0000-0000-0000-000000000064 20000000-0000-0000-0000-000000000064 group64 Group 64 group64@example.com 20000000-0000-0000-0000-000000000065 20000000-0000-0000-0000-000000000065 group65 Group 65 group65@example.com 20000000-0000-0000-0000-000000000066 20000000-0000-0000-0000-000000000066 group66 Group 66 group66@example.com 20000000-0000-0000-0000-000000000067 20000000-0000-0000-0000-000000000067 group67 Group 67 group67@example.com 20000000-0000-0000-0000-000000000068 20000000-0000-0000-0000-000000000068 group68 Group 68 group68@example.com 20000000-0000-0000-0000-000000000069 20000000-0000-0000-0000-000000000069 group69 Group 69 group69@example.com 20000000-0000-0000-0000-000000000070 20000000-0000-0000-0000-000000000070 group70 Group 70 group70@example.com 20000000-0000-0000-0000-000000000071 20000000-0000-0000-0000-000000000071 group71 Group 71 group71@example.com 20000000-0000-0000-0000-000000000072 20000000-0000-0000-0000-000000000072 group72 Group 72 group72@example.com 20000000-0000-0000-0000-000000000073 
20000000-0000-0000-0000-000000000073 group73 Group 73 group73@example.com 20000000-0000-0000-0000-000000000074 20000000-0000-0000-0000-000000000074 group74 Group 74 group74@example.com 20000000-0000-0000-0000-000000000075 20000000-0000-0000-0000-000000000075 group75 Group 75 group75@example.com 20000000-0000-0000-0000-000000000076 20000000-0000-0000-0000-000000000076 group76 Group 76 group76@example.com 20000000-0000-0000-0000-000000000077 20000000-0000-0000-0000-000000000077 group77 Group 77 group77@example.com 20000000-0000-0000-0000-000000000078 20000000-0000-0000-0000-000000000078 group78 Group 78 group78@example.com 20000000-0000-0000-0000-000000000079 20000000-0000-0000-0000-000000000079 group79 Group 79 group79@example.com 20000000-0000-0000-0000-000000000080 20000000-0000-0000-0000-000000000080 group80 Group 80 group80@example.com 20000000-0000-0000-0000-000000000081 20000000-0000-0000-0000-000000000081 group81 Group 81 group81@example.com 20000000-0000-0000-0000-000000000082 20000000-0000-0000-0000-000000000082 group82 Group 82 group82@example.com 20000000-0000-0000-0000-000000000083 20000000-0000-0000-0000-000000000083 group83 Group 83 group83@example.com 20000000-0000-0000-0000-000000000084 20000000-0000-0000-0000-000000000084 group84 Group 84 group84@example.com 20000000-0000-0000-0000-000000000085 20000000-0000-0000-0000-000000000085 group85 Group 85 group85@example.com 20000000-0000-0000-0000-000000000086 20000000-0000-0000-0000-000000000086 group86 Group 86 group86@example.com 20000000-0000-0000-0000-000000000087 20000000-0000-0000-0000-000000000087 group87 Group 87 group87@example.com 20000000-0000-0000-0000-000000000088 20000000-0000-0000-0000-000000000088 group88 Group 88 group88@example.com 20000000-0000-0000-0000-000000000089 20000000-0000-0000-0000-000000000089 group89 Group 89 group89@example.com 20000000-0000-0000-0000-000000000090 20000000-0000-0000-0000-000000000090 group90 Group 90 group90@example.com 20000000-0000-0000-0000-000000000091 
20000000-0000-0000-0000-000000000091 group91 Group 91 group91@example.com 20000000-0000-0000-0000-000000000092 20000000-0000-0000-0000-000000000092 group92 Group 92 group92@example.com 20000000-0000-0000-0000-000000000093 20000000-0000-0000-0000-000000000093 group93 Group 93 group93@example.com 20000000-0000-0000-0000-000000000094 20000000-0000-0000-0000-000000000094 group94 Group 94 group94@example.com 20000000-0000-0000-0000-000000000095 20000000-0000-0000-0000-000000000095 group95 Group 95 group95@example.com 20000000-0000-0000-0000-000000000096 20000000-0000-0000-0000-000000000096 group96 Group 96 group96@example.com 20000000-0000-0000-0000-000000000097 20000000-0000-0000-0000-000000000097 group97 Group 97 group97@example.com 20000000-0000-0000-0000-000000000098 20000000-0000-0000-0000-000000000098 group98 Group 98 group98@example.com 20000000-0000-0000-0000-000000000099 20000000-0000-0000-0000-000000000099 group99 Group 99 group99@example.com 20000000-0000-0000-0000-000000000100 20000000-0000-0000-0000-000000000100 group100 Group 100 group100@example.com
calendarserver-9.1+dfsg/conf/auth/accounts.dtd
calendarserver-9.1+dfsg/conf/auth/accounts.xml
0C8BDE62-E600-4696-83D3-8B5ECABDFD2E 0C8BDE62-E600-4696-83D3-8B5ECABDFD2E admin admin Super User admin@example.com 29B6C503-11DF-43EC-8CCA-40C7003149CE 29B6C503-11DF-43EC-8CCA-40C7003149CE test test Test User test@example.com
calendarserver-9.1+dfsg/conf/auth/augments-default.xml
Default true true
calendarserver-9.1+dfsg/conf/auth/augments-test-pod.xml
Default A true true 60000000-0000-0000-0000-000000000001 B true true 60000000-0000-0000-0000-000000000002 B true true 60000000-0000-0000-0000-000000000003
B true true 60000000-0000-0000-0000-000000000004 B true true 60000000-0000-0000-0000-000000000005 B true true 60000000-0000-0000-0000-000000000006 B true true 60000000-0000-0000-0000-000000000007 B true true 60000000-0000-0000-0000-000000000008 B true true 60000000-0000-0000-0000-000000000009 B true true 60000000-0000-0000-0000-000000000010 B true true 60000000-0000-0000-0000-000000000011 B true true 60000000-0000-0000-0000-000000000012 B true true 60000000-0000-0000-0000-000000000013 B true true 60000000-0000-0000-0000-000000000014 B true true 60000000-0000-0000-0000-000000000015 B true true 60000000-0000-0000-0000-000000000016 B true true 60000000-0000-0000-0000-000000000017 B true true 60000000-0000-0000-0000-000000000018 B true true 60000000-0000-0000-0000-000000000019 B true true 60000000-0000-0000-0000-000000000020 B true true 60000000-0000-0000-0000-000000000021 B true true 60000000-0000-0000-0000-000000000022 B true true 60000000-0000-0000-0000-000000000023 B true true 60000000-0000-0000-0000-000000000024 B true true 60000000-0000-0000-0000-000000000025 B true true 60000000-0000-0000-0000-000000000026 B true true 60000000-0000-0000-0000-000000000027 B true true 60000000-0000-0000-0000-000000000028 B true true 60000000-0000-0000-0000-000000000029 B true true 60000000-0000-0000-0000-000000000030 B true true 60000000-0000-0000-0000-000000000031 B true true 60000000-0000-0000-0000-000000000032 B true true 60000000-0000-0000-0000-000000000033 B true true 60000000-0000-0000-0000-000000000034 B true true 60000000-0000-0000-0000-000000000035 B true true 60000000-0000-0000-0000-000000000036 B true true 60000000-0000-0000-0000-000000000037 B true true 60000000-0000-0000-0000-000000000038 B true true 60000000-0000-0000-0000-000000000039 B true true 60000000-0000-0000-0000-000000000040 B true true 60000000-0000-0000-0000-000000000041 B true true 60000000-0000-0000-0000-000000000042 B true true 60000000-0000-0000-0000-000000000043 B true true 
60000000-0000-0000-0000-000000000044 B true true 60000000-0000-0000-0000-000000000045 B true true 60000000-0000-0000-0000-000000000046 B true true 60000000-0000-0000-0000-000000000047 B true true 60000000-0000-0000-0000-000000000048 B true true 60000000-0000-0000-0000-000000000049 B true true 60000000-0000-0000-0000-000000000050 B true true 60000000-0000-0000-0000-000000000051 B true true 60000000-0000-0000-0000-000000000052 B true true 60000000-0000-0000-0000-000000000053 B true true 60000000-0000-0000-0000-000000000054 B true true 60000000-0000-0000-0000-000000000055 B true true 60000000-0000-0000-0000-000000000056 B true true 60000000-0000-0000-0000-000000000057 B true true 60000000-0000-0000-0000-000000000058 B true true 60000000-0000-0000-0000-000000000059 B true true 60000000-0000-0000-0000-000000000060 B true true 60000000-0000-0000-0000-000000000061 B true true 60000000-0000-0000-0000-000000000062 B true true 60000000-0000-0000-0000-000000000063 B true true 60000000-0000-0000-0000-000000000064 B true true 60000000-0000-0000-0000-000000000065 B true true 60000000-0000-0000-0000-000000000066 B true true 60000000-0000-0000-0000-000000000067 B true true 60000000-0000-0000-0000-000000000068 B true true 60000000-0000-0000-0000-000000000069 B true true 60000000-0000-0000-0000-000000000070 B true true 60000000-0000-0000-0000-000000000071 B true true 60000000-0000-0000-0000-000000000072 B true true 60000000-0000-0000-0000-000000000073 B true true 60000000-0000-0000-0000-000000000074 B true true 60000000-0000-0000-0000-000000000075 B true true 60000000-0000-0000-0000-000000000076 B true true 60000000-0000-0000-0000-000000000077 B true true 60000000-0000-0000-0000-000000000078 B true true 60000000-0000-0000-0000-000000000079 B true true 60000000-0000-0000-0000-000000000080 B true true 60000000-0000-0000-0000-000000000081 B true true 60000000-0000-0000-0000-000000000082 B true true 60000000-0000-0000-0000-000000000083 B true true 60000000-0000-0000-0000-000000000084 B 
true true 60000000-0000-0000-0000-000000000085 B true true 60000000-0000-0000-0000-000000000086 B true true 60000000-0000-0000-0000-000000000087 B true true 60000000-0000-0000-0000-000000000088 B true true 60000000-0000-0000-0000-000000000089 B true true 60000000-0000-0000-0000-000000000090 B true true 60000000-0000-0000-0000-000000000091 B true true 60000000-0000-0000-0000-000000000092 B true true 60000000-0000-0000-0000-000000000093 B true true 60000000-0000-0000-0000-000000000094 B true true 60000000-0000-0000-0000-000000000095 B true true 60000000-0000-0000-0000-000000000096 B true true 60000000-0000-0000-0000-000000000097 B true true 60000000-0000-0000-0000-000000000098 B true true 60000000-0000-0000-0000-000000000099 B true true 60000000-0000-0000-0000-000000000100 B true true calendarserver-9.1+dfsg/conf/auth/augments-test-s2s.xml000066400000000000000000000015021315003562600232160ustar00rootroot00000000000000 Default true true calendarserver-9.1+dfsg/conf/auth/augments-test.xml000066400000000000000000000054721315003562600225230ustar00rootroot00000000000000 Default true true Location-Default true true automatic Resource-Default true true automatic 40000000-0000-0000-0000-000000000005 true true none 40000000-0000-0000-0000-000000000006 true true accept-always 40000000-0000-0000-0000-000000000007 true true decline-always 40000000-0000-0000-0000-000000000008 true true accept-if-free 40000000-0000-0000-0000-000000000009 true true decline-if-busy 40000000-0000-0000-0000-000000000010 true true automatic 40000000-0000-0000-0000-000000000011 true true decline-always 20000000-0000-0000-0000-000000000001 calendarserver-9.1+dfsg/conf/auth/augments.dtd000066400000000000000000000020731315003562600215130ustar00rootroot00000000000000 calendarserver-9.1+dfsg/conf/auth/generate_test_accounts.py000066400000000000000000000756171315003562600243130ustar00rootroot00000000000000#!/usr/bin/env python # Generates test directory records in accounts-test.xml, resources-test.xml, # 
augments-test.xml and proxies-test.xml (overwriting them if they exist in # the current directory). prefix = """ """ EXTRA_GROUPS = False # The uids and guids for CDT test accounts are the same # The short name is of the form userNN USERGUIDS = "10000000-0000-0000-0000-000000000%03d" GROUPGUIDS = "20000000-0000-0000-0000-000000000%03d" LOCATIONGUIDS = "30000000-0000-0000-0000-000000000%03d" RESOURCEGUIDS = "40000000-0000-0000-0000-000000000%03d" PUBLICGUIDS = "50000000-0000-0000-0000-0000000000%02d" PUSERGUIDS = "60000000-0000-0000-0000-000000000%03d" S2SUSERGUIDS = "70000000-0000-0000-0000-000000000%03d" # accounts-test.xml out = file("accounts-test.xml", "w") out.write(prefix) out.write('\n\n') out.write('\n') for uid, fullName, guid in ( ("admin", "Super User", "0C8BDE62-E600-4696-83D3-8B5ECABDFD2E"), ("apprentice", "Apprentice Super User", "29B6C503-11DF-43EC-8CCA-40C7003149CE"), ("i18nuser", u"\u307e\u3060".encode("utf-8"), "860B3EE9-6D7C-4296-9639-E6B998074A78"), ): out.write(""" {guid} {guid} {uid} {uid} {fullName} {uid}@example.com """.format(uid=uid, guid=guid, fullName=fullName)) # user01-101 for i in xrange(1, 501 if EXTRA_GROUPS else 102): out.write(""" {guid} {guid} user{ctr:02d} user{ctr:02d} User {ctr:02d} user{ctr:02d}@example.com """.format(guid=(USERGUIDS % i), ctr=i)) # public01-10 for i in xrange(1, 11): out.write(""" {guid} {guid} public{ctr:02d} public{ctr:02d} Public {ctr:02d} public{ctr:02d}@example.com """.format(guid=PUBLICGUIDS % i, ctr=i)) # group01-100 members = { GROUPGUIDS % 1: (USERGUIDS % 1,), GROUPGUIDS % 2: (USERGUIDS % 6, USERGUIDS % 7), GROUPGUIDS % 3: (USERGUIDS % 8, USERGUIDS % 9), GROUPGUIDS % 4: (GROUPGUIDS % 2, GROUPGUIDS % 3, USERGUIDS % 10), GROUPGUIDS % 5: (GROUPGUIDS % 6, USERGUIDS % 20), GROUPGUIDS % 6: (USERGUIDS % 21,), GROUPGUIDS % 7: (USERGUIDS % 22, USERGUIDS % 23, USERGUIDS % 24), } if EXTRA_GROUPS: members.update({ GROUPGUIDS % 8: ( USERGUIDS % 1, USERGUIDS % 2, USERGUIDS % 3, USERGUIDS % 4, USERGUIDS % 5, ), 
        GROUPGUIDS % 9: tuple(USERGUIDS % i for i in xrange(1, 101)),
        GROUPGUIDS % 10: tuple(USERGUIDS % i for i in xrange(1, 498)) + (USERGUIDS
% 498, USERGUIDS % 499, USERGUIDS % 500, ), }) for i in xrange(1, 101): memberElements = [] groupUID = GROUPGUIDS % i if groupUID in members: for uid in members[groupUID]: memberElements.append("{}".format(uid)) memberString = " " + "\n ".join(memberElements) + "\n" else: memberString = "" out.write(""" {guid} {guid} group{ctr:02d} Group {ctr:02d} group{ctr:02d}@example.com {members} """.format(guid=GROUPGUIDS % i, ctr=i, members=memberString)) out.write("\n") out.close() # accounts-test-pod.xml out = file("accounts-test-pod.xml", "w") out.write(prefix) out.write('\n\n') out.write('\n') for uid, fullName, guid in ( ("admin", "Super User", "0C8BDE62-E600-4696-83D3-8B5ECABDFD2E"), ): out.write(""" {guid} {guid} {uid} {uid} {fullName} {uid}@example.com """.format(uid=uid, guid=guid, fullName=fullName)) # user01-100 for i in xrange(1, 101): out.write(""" {guid} {guid} user{ctr:02d} user{ctr:02d} User {ctr:02d} user{ctr:02d}@example.com """.format(guid=USERGUIDS % i, ctr=i)) # puser01-100 for i in xrange(1, 101): out.write(""" {guid} {guid} puser{ctr:02d} puser{ctr:02d} Puser {ctr:02d} puser{ctr:02d}@example.com """.format(guid=PUSERGUIDS % i, ctr=i)) out.write("\n") out.close() # accounts-test-s2s.xml out = file("accounts-test-s2s.xml", "w") out.write(prefix) out.write('\n\n') out.write('\n') for uid, fullName, guid in ( ("admin", "Super User", "82A5A2FA-F8FD-4761-B1CB-F1664C1F376A"), ): out.write(""" {guid} {guid} {uid} {uid} {fullName} {uid}@example.org """.format(uid=uid, guid=guid, fullName=fullName)) # other01-100 for i in xrange(1, 101): out.write(""" {guid} {guid} other{ctr:02d} other{ctr:02d} Other {ctr:02d} other{ctr:02d}@example.org """.format(guid=S2SUSERGUIDS % i, ctr=i)) out.write("\n") out.close() # resources-test.xml out = file("resources-test.xml", "w") out.write(prefix) out.write('\n\n') out.write('\n') out.write(""" pretend pretend Pretend Conference Room il1 il1 il1 IL1 1 Infinite Loop, Cupertino, CA 95014 geo:37.331741,-122.030333 fantastic 
fantastic Fantastic Conference Room il2 il2 il2 IL2 2 Infinite Loop, Cupertino, CA 95014 geo:37.332633,-122.030502 delegatedroom delegatedroom Delegated Conference Room """) for i in xrange(1, 101): out.write(""" {guid} {guid} location{ctr:02d} Location {ctr:02d} """.format(guid=LOCATIONGUIDS % i, ctr=i)) for i in xrange(1, 101): out.write(""" {guid} {guid} resource{ctr:02d} Resource {ctr:02d} """.format(guid=RESOURCEGUIDS % i, ctr=i)) out.write("\n") out.close() # resources-test-pod.xml out = file("resources-test-pod.xml", "w") out.write(prefix) out.write('\n\n') out.write('\n') out.close() # resources-test-s2s.xml out = file("resources-test-s2s.xml", "w") out.write(prefix) out.write('\n\n') out.write('\n') out.close() # augments-test.xml out = file("augments-test.xml", "w") out.write(prefix) out.write('\n\n') out.write("\n") augments = ( # resource05 (RESOURCEGUIDS % 5, ( ("enable-calendar", "true"), ("enable-addressbook", "true"), ("auto-schedule-mode", "none"), )), # resource06 (RESOURCEGUIDS % 6, ( ("enable-calendar", "true"), ("enable-addressbook", "true"), ("auto-schedule-mode", "accept-always"), )), # resource07 (RESOURCEGUIDS % 7, ( ("enable-calendar", "true"), ("enable-addressbook", "true"), ("auto-schedule-mode", "decline-always"), )), # resource08 (RESOURCEGUIDS % 8, ( ("enable-calendar", "true"), ("enable-addressbook", "true"), ("auto-schedule-mode", "accept-if-free"), )), # resource09 (RESOURCEGUIDS % 9, ( ("enable-calendar", "true"), ("enable-addressbook", "true"), ("auto-schedule-mode", "decline-if-busy"), )), # resource10 (RESOURCEGUIDS % 10, ( ("enable-calendar", "true"), ("enable-addressbook", "true"), ("auto-schedule-mode", "automatic"), )), # resource11 (RESOURCEGUIDS % 11, ( ("enable-calendar", "true"), ("enable-addressbook", "true"), ("auto-schedule-mode", "decline-always"), ("auto-accept-group", GROUPGUIDS % 1), )), ) out.write(""" Default true true """) out.write(""" Location-Default true true automatic """) out.write(""" Resource-Default 
true true automatic """) for uid, settings in augments: elements = [] for key, value in settings: elements.append("<{key}>{value}".format(key=key, value=value)) elementsString = "\n ".join(elements) out.write(""" {uid} {elements} """.format(uid=uid, elements=elementsString)) out.write("\n") out.close() # augments-test-pod.xml out = file("augments-test-pod.xml", "w") out.write(prefix) out.write('\n\n') out.write("\n") out.write(""" Default A true true """) # puser01-100 for i in xrange(1, 101): out.write(""" {guid} B true true """.format(guid=PUSERGUIDS % i)) out.write("\n") out.close() # augments-test-s2s.xml out = file("augments-test-s2s.xml", "w") out.write(prefix) out.write('\n\n') out.write("\n") out.write(""" Default true true """) out.write("\n") out.close() # proxies-test.xml out = file("proxies-test.xml", "w") out.write(prefix) out.write('\n\n') out.write("\n") proxies = ( (RESOURCEGUIDS % 1, { "write-proxies": (USERGUIDS % 1,), "read-proxies": (USERGUIDS % 3,), }), (RESOURCEGUIDS % 2, { "write-proxies": (USERGUIDS % 1,), "read-proxies": (USERGUIDS % 3,), }), (RESOURCEGUIDS % 3, { "write-proxies": (USERGUIDS % 1,), "read-proxies": (USERGUIDS % 3,), }), (RESOURCEGUIDS % 4, { "write-proxies": (USERGUIDS % 1,), "read-proxies": (USERGUIDS % 3,), }), (RESOURCEGUIDS % 5, { "write-proxies": (USERGUIDS % 1,), "read-proxies": (USERGUIDS % 3,), }), (RESOURCEGUIDS % 6, { "write-proxies": (USERGUIDS % 1,), "read-proxies": (USERGUIDS % 3,), }), (RESOURCEGUIDS % 7, { "write-proxies": (USERGUIDS % 1,), "read-proxies": (USERGUIDS % 3,), }), (RESOURCEGUIDS % 8, { "write-proxies": (USERGUIDS % 1,), "read-proxies": (USERGUIDS % 3,), }), (RESOURCEGUIDS % 9, { "write-proxies": (USERGUIDS % 1,), "read-proxies": (USERGUIDS % 3,), }), (RESOURCEGUIDS % 10, { "write-proxies": (USERGUIDS % 1,), "read-proxies": (USERGUIDS % 3,), }), ("delegatedroom", { "write-proxies": (GROUPGUIDS % 5,), "read-proxies": (), }), ) for uid, settings in proxies: elements = [] for key, values in 
settings.iteritems(): elements.append("<{key}>".format(key=key)) for value in values: elements.append("{value}".format(value=value)) elements.append("".format(key=key)) elementsString = "\n ".join(elements) out.write(""" {uid} {elements} """.format(uid=uid, elements=elementsString)) out.write("\n") out.close() # proxies-test-pod.xml out = file("proxies-test-pod.xml", "w") out.write(prefix) out.write('\n\n') out.write("\n") out.close() # proxies-test-s2s.xml out = file("proxies-test-s2s.xml", "w") out.write(prefix) out.write('\n\n') out.write("\n") out.close() calendarserver-9.1+dfsg/conf/auth/proxies-test-pod.xml000066400000000000000000000012551315003562600231440ustar00rootroot00000000000000 calendarserver-9.1+dfsg/conf/auth/proxies-test-s2s.xml000066400000000000000000000012551315003562600230710ustar00rootroot00000000000000 calendarserver-9.1+dfsg/conf/auth/proxies-test.xml000066400000000000000000000067641315003562600223760ustar00rootroot00000000000000 40000000-0000-0000-0000-000000000001 10000000-0000-0000-0000-000000000001 10000000-0000-0000-0000-000000000003 40000000-0000-0000-0000-000000000002 10000000-0000-0000-0000-000000000001 10000000-0000-0000-0000-000000000003 40000000-0000-0000-0000-000000000003 10000000-0000-0000-0000-000000000001 10000000-0000-0000-0000-000000000003 40000000-0000-0000-0000-000000000004 10000000-0000-0000-0000-000000000001 10000000-0000-0000-0000-000000000003 40000000-0000-0000-0000-000000000005 10000000-0000-0000-0000-000000000001 10000000-0000-0000-0000-000000000003 40000000-0000-0000-0000-000000000006 10000000-0000-0000-0000-000000000001 10000000-0000-0000-0000-000000000003 40000000-0000-0000-0000-000000000007 10000000-0000-0000-0000-000000000001 10000000-0000-0000-0000-000000000003 40000000-0000-0000-0000-000000000008 10000000-0000-0000-0000-000000000001 10000000-0000-0000-0000-000000000003 40000000-0000-0000-0000-000000000009 10000000-0000-0000-0000-000000000001 10000000-0000-0000-0000-000000000003 
40000000-0000-0000-0000-000000000010 10000000-0000-0000-0000-000000000001 10000000-0000-0000-0000-000000000003 delegatedroom 20000000-0000-0000-0000-000000000005 calendarserver-9.1+dfsg/conf/auth/proxies.dtd000066400000000000000000000015271315003562600213640ustar00rootroot00000000000000 calendarserver-9.1+dfsg/conf/auth/resources-test-pod.xml000077500000000000000000000034451315003562600234730ustar00rootroot00000000000000 30000000-0000-0000-0000-000000000001 30000000-0000-0000-0000-000000000001 location01 Location 01 30000000-0000-0000-0000-000000000002 30000000-0000-0000-0000-000000000002 location02 Location 02 30000000-0000-0000-0000-000000000003 30000000-0000-0000-0000-000000000003 location03 Location 03 30000000-0000-0000-0000-000000000004 30000000-0000-0000-0000-000000000004 location04 Location 04 30000000-0000-0000-0000-000000000005 30000000-0000-0000-0000-000000000005 location05 Location 05 calendarserver-9.1+dfsg/conf/auth/resources-test-s2s.xml000077500000000000000000000013201315003562600234060ustar00rootroot00000000000000 calendarserver-9.1+dfsg/conf/auth/resources-test.xml000077500000000000000000001313171315003562600227130ustar00rootroot00000000000000 pretend pretend Pretend Conference Room il1 il1 il1 IL1 1 Infinite Loop, Cupertino, CA 95014 geo:37.331741,-122.030333 fantastic fantastic Fantastic Conference Room il2 il2 il2 IL2 2 Infinite Loop, Cupertino, CA 95014 geo:37.332633,-122.030502 delegatedroom delegatedroom Delegated Conference Room 30000000-0000-0000-0000-000000000001 30000000-0000-0000-0000-000000000001 location01 Location 01 30000000-0000-0000-0000-000000000002 30000000-0000-0000-0000-000000000002 location02 Location 02 30000000-0000-0000-0000-000000000003 30000000-0000-0000-0000-000000000003 location03 Location 03 30000000-0000-0000-0000-000000000004 30000000-0000-0000-0000-000000000004 location04 Location 04 30000000-0000-0000-0000-000000000005 30000000-0000-0000-0000-000000000005 location05 Location 05 30000000-0000-0000-0000-000000000006 
30000000-0000-0000-0000-000000000006 location06 Location 06 30000000-0000-0000-0000-000000000007 30000000-0000-0000-0000-000000000007 location07 Location 07 30000000-0000-0000-0000-000000000008 30000000-0000-0000-0000-000000000008 location08 Location 08 30000000-0000-0000-0000-000000000009 30000000-0000-0000-0000-000000000009 location09 Location 09 30000000-0000-0000-0000-000000000010 30000000-0000-0000-0000-000000000010 location10 Location 10 30000000-0000-0000-0000-000000000011 30000000-0000-0000-0000-000000000011 location11 Location 11 30000000-0000-0000-0000-000000000012 30000000-0000-0000-0000-000000000012 location12 Location 12 30000000-0000-0000-0000-000000000013 30000000-0000-0000-0000-000000000013 location13 Location 13 30000000-0000-0000-0000-000000000014 30000000-0000-0000-0000-000000000014 location14 Location 14 30000000-0000-0000-0000-000000000015 30000000-0000-0000-0000-000000000015 location15 Location 15 30000000-0000-0000-0000-000000000016 30000000-0000-0000-0000-000000000016 location16 Location 16 30000000-0000-0000-0000-000000000017 30000000-0000-0000-0000-000000000017 location17 Location 17 30000000-0000-0000-0000-000000000018 30000000-0000-0000-0000-000000000018 location18 Location 18 30000000-0000-0000-0000-000000000019 30000000-0000-0000-0000-000000000019 location19 Location 19 30000000-0000-0000-0000-000000000020 30000000-0000-0000-0000-000000000020 location20 Location 20 30000000-0000-0000-0000-000000000021 30000000-0000-0000-0000-000000000021 location21 Location 21 30000000-0000-0000-0000-000000000022 30000000-0000-0000-0000-000000000022 location22 Location 22 30000000-0000-0000-0000-000000000023 30000000-0000-0000-0000-000000000023 location23 Location 23 30000000-0000-0000-0000-000000000024 30000000-0000-0000-0000-000000000024 location24 Location 24 30000000-0000-0000-0000-000000000025 30000000-0000-0000-0000-000000000025 location25 Location 25 30000000-0000-0000-0000-000000000026 30000000-0000-0000-0000-000000000026 location26 Location 26 
30000000-0000-0000-0000-000000000027 30000000-0000-0000-0000-000000000027 location27 Location 27 30000000-0000-0000-0000-000000000028 30000000-0000-0000-0000-000000000028 location28 Location 28 30000000-0000-0000-0000-000000000029 30000000-0000-0000-0000-000000000029 location29 Location 29 30000000-0000-0000-0000-000000000030 30000000-0000-0000-0000-000000000030 location30 Location 30 30000000-0000-0000-0000-000000000031 30000000-0000-0000-0000-000000000031 location31 Location 31 30000000-0000-0000-0000-000000000032 30000000-0000-0000-0000-000000000032 location32 Location 32 30000000-0000-0000-0000-000000000033 30000000-0000-0000-0000-000000000033 location33 Location 33 30000000-0000-0000-0000-000000000034 30000000-0000-0000-0000-000000000034 location34 Location 34 30000000-0000-0000-0000-000000000035 30000000-0000-0000-0000-000000000035 location35 Location 35 30000000-0000-0000-0000-000000000036 30000000-0000-0000-0000-000000000036 location36 Location 36 30000000-0000-0000-0000-000000000037 30000000-0000-0000-0000-000000000037 location37 Location 37 30000000-0000-0000-0000-000000000038 30000000-0000-0000-0000-000000000038 location38 Location 38 30000000-0000-0000-0000-000000000039 30000000-0000-0000-0000-000000000039 location39 Location 39 30000000-0000-0000-0000-000000000040 30000000-0000-0000-0000-000000000040 location40 Location 40 30000000-0000-0000-0000-000000000041 30000000-0000-0000-0000-000000000041 location41 Location 41 30000000-0000-0000-0000-000000000042 30000000-0000-0000-0000-000000000042 location42 Location 42 30000000-0000-0000-0000-000000000043 30000000-0000-0000-0000-000000000043 location43 Location 43 30000000-0000-0000-0000-000000000044 30000000-0000-0000-0000-000000000044 location44 Location 44 30000000-0000-0000-0000-000000000045 30000000-0000-0000-0000-000000000045 location45 Location 45 30000000-0000-0000-0000-000000000046 30000000-0000-0000-0000-000000000046 location46 Location 46 30000000-0000-0000-0000-000000000047 
30000000-0000-0000-0000-000000000047 location47 Location 47
30000000-0000-0000-0000-000000000048 30000000-0000-0000-0000-000000000048 location48 Location 48
30000000-0000-0000-0000-000000000049 30000000-0000-0000-0000-000000000049 location49 Location 49
30000000-0000-0000-0000-000000000050 30000000-0000-0000-0000-000000000050 location50 Location 50
30000000-0000-0000-0000-000000000051 30000000-0000-0000-0000-000000000051 location51 Location 51
30000000-0000-0000-0000-000000000052 30000000-0000-0000-0000-000000000052 location52 Location 52
30000000-0000-0000-0000-000000000053 30000000-0000-0000-0000-000000000053 location53 Location 53
30000000-0000-0000-0000-000000000054 30000000-0000-0000-0000-000000000054 location54 Location 54
30000000-0000-0000-0000-000000000055 30000000-0000-0000-0000-000000000055 location55 Location 55
30000000-0000-0000-0000-000000000056 30000000-0000-0000-0000-000000000056 location56 Location 56
30000000-0000-0000-0000-000000000057 30000000-0000-0000-0000-000000000057 location57 Location 57
30000000-0000-0000-0000-000000000058 30000000-0000-0000-0000-000000000058 location58 Location 58
30000000-0000-0000-0000-000000000059 30000000-0000-0000-0000-000000000059 location59 Location 59
30000000-0000-0000-0000-000000000060 30000000-0000-0000-0000-000000000060 location60 Location 60
30000000-0000-0000-0000-000000000061 30000000-0000-0000-0000-000000000061 location61 Location 61
30000000-0000-0000-0000-000000000062 30000000-0000-0000-0000-000000000062 location62 Location 62
30000000-0000-0000-0000-000000000063 30000000-0000-0000-0000-000000000063 location63 Location 63
30000000-0000-0000-0000-000000000064 30000000-0000-0000-0000-000000000064 location64 Location 64
30000000-0000-0000-0000-000000000065 30000000-0000-0000-0000-000000000065 location65 Location 65
30000000-0000-0000-0000-000000000066 30000000-0000-0000-0000-000000000066 location66 Location 66
30000000-0000-0000-0000-000000000067 30000000-0000-0000-0000-000000000067 location67 Location 67
30000000-0000-0000-0000-000000000068 30000000-0000-0000-0000-000000000068 location68 Location 68
30000000-0000-0000-0000-000000000069 30000000-0000-0000-0000-000000000069 location69 Location 69
30000000-0000-0000-0000-000000000070 30000000-0000-0000-0000-000000000070 location70 Location 70
30000000-0000-0000-0000-000000000071 30000000-0000-0000-0000-000000000071 location71 Location 71
30000000-0000-0000-0000-000000000072 30000000-0000-0000-0000-000000000072 location72 Location 72
30000000-0000-0000-0000-000000000073 30000000-0000-0000-0000-000000000073 location73 Location 73
30000000-0000-0000-0000-000000000074 30000000-0000-0000-0000-000000000074 location74 Location 74
30000000-0000-0000-0000-000000000075 30000000-0000-0000-0000-000000000075 location75 Location 75
30000000-0000-0000-0000-000000000076 30000000-0000-0000-0000-000000000076 location76 Location 76
30000000-0000-0000-0000-000000000077 30000000-0000-0000-0000-000000000077 location77 Location 77
30000000-0000-0000-0000-000000000078 30000000-0000-0000-0000-000000000078 location78 Location 78
30000000-0000-0000-0000-000000000079 30000000-0000-0000-0000-000000000079 location79 Location 79
30000000-0000-0000-0000-000000000080 30000000-0000-0000-0000-000000000080 location80 Location 80
30000000-0000-0000-0000-000000000081 30000000-0000-0000-0000-000000000081 location81 Location 81
30000000-0000-0000-0000-000000000082 30000000-0000-0000-0000-000000000082 location82 Location 82
30000000-0000-0000-0000-000000000083 30000000-0000-0000-0000-000000000083 location83 Location 83
30000000-0000-0000-0000-000000000084 30000000-0000-0000-0000-000000000084 location84 Location 84
30000000-0000-0000-0000-000000000085 30000000-0000-0000-0000-000000000085 location85 Location 85
30000000-0000-0000-0000-000000000086 30000000-0000-0000-0000-000000000086 location86 Location 86
30000000-0000-0000-0000-000000000087 30000000-0000-0000-0000-000000000087 location87 Location 87
30000000-0000-0000-0000-000000000088 30000000-0000-0000-0000-000000000088 location88 Location 88
30000000-0000-0000-0000-000000000089 30000000-0000-0000-0000-000000000089 location89 Location 89
30000000-0000-0000-0000-000000000090 30000000-0000-0000-0000-000000000090 location90 Location 90
30000000-0000-0000-0000-000000000091 30000000-0000-0000-0000-000000000091 location91 Location 91
30000000-0000-0000-0000-000000000092 30000000-0000-0000-0000-000000000092 location92 Location 92
30000000-0000-0000-0000-000000000093 30000000-0000-0000-0000-000000000093 location93 Location 93
30000000-0000-0000-0000-000000000094 30000000-0000-0000-0000-000000000094 location94 Location 94
30000000-0000-0000-0000-000000000095 30000000-0000-0000-0000-000000000095 location95 Location 95
30000000-0000-0000-0000-000000000096 30000000-0000-0000-0000-000000000096 location96 Location 96
30000000-0000-0000-0000-000000000097 30000000-0000-0000-0000-000000000097 location97 Location 97
30000000-0000-0000-0000-000000000098 30000000-0000-0000-0000-000000000098 location98 Location 98
30000000-0000-0000-0000-000000000099 30000000-0000-0000-0000-000000000099 location99 Location 99
30000000-0000-0000-0000-000000000100 30000000-0000-0000-0000-000000000100 location100 Location 100
40000000-0000-0000-0000-000000000001 40000000-0000-0000-0000-000000000001 resource01 Resource 01
40000000-0000-0000-0000-000000000002 40000000-0000-0000-0000-000000000002 resource02 Resource 02
40000000-0000-0000-0000-000000000003 40000000-0000-0000-0000-000000000003 resource03 Resource 03
40000000-0000-0000-0000-000000000004 40000000-0000-0000-0000-000000000004 resource04 Resource 04
40000000-0000-0000-0000-000000000005 40000000-0000-0000-0000-000000000005 resource05 Resource 05
40000000-0000-0000-0000-000000000006 40000000-0000-0000-0000-000000000006 resource06 Resource 06
40000000-0000-0000-0000-000000000007 40000000-0000-0000-0000-000000000007 resource07 Resource 07
40000000-0000-0000-0000-000000000008 40000000-0000-0000-0000-000000000008 resource08 Resource 08
40000000-0000-0000-0000-000000000009 40000000-0000-0000-0000-000000000009 resource09 Resource 09
40000000-0000-0000-0000-000000000010 40000000-0000-0000-0000-000000000010 resource10 Resource 10
40000000-0000-0000-0000-000000000011 40000000-0000-0000-0000-000000000011 resource11 Resource 11
40000000-0000-0000-0000-000000000012 40000000-0000-0000-0000-000000000012 resource12 Resource 12
40000000-0000-0000-0000-000000000013 40000000-0000-0000-0000-000000000013 resource13 Resource 13
40000000-0000-0000-0000-000000000014 40000000-0000-0000-0000-000000000014 resource14 Resource 14
40000000-0000-0000-0000-000000000015 40000000-0000-0000-0000-000000000015 resource15 Resource 15
40000000-0000-0000-0000-000000000016 40000000-0000-0000-0000-000000000016 resource16 Resource 16
40000000-0000-0000-0000-000000000017 40000000-0000-0000-0000-000000000017 resource17 Resource 17
40000000-0000-0000-0000-000000000018 40000000-0000-0000-0000-000000000018 resource18 Resource 18
40000000-0000-0000-0000-000000000019 40000000-0000-0000-0000-000000000019 resource19 Resource 19
40000000-0000-0000-0000-000000000020 40000000-0000-0000-0000-000000000020 resource20 Resource 20
40000000-0000-0000-0000-000000000021 40000000-0000-0000-0000-000000000021 resource21 Resource 21
40000000-0000-0000-0000-000000000022 40000000-0000-0000-0000-000000000022 resource22 Resource 22
40000000-0000-0000-0000-000000000023 40000000-0000-0000-0000-000000000023 resource23 Resource 23
40000000-0000-0000-0000-000000000024 40000000-0000-0000-0000-000000000024 resource24 Resource 24
40000000-0000-0000-0000-000000000025 40000000-0000-0000-0000-000000000025 resource25 Resource 25
40000000-0000-0000-0000-000000000026 40000000-0000-0000-0000-000000000026 resource26 Resource 26
40000000-0000-0000-0000-000000000027 40000000-0000-0000-0000-000000000027 resource27 Resource 27
40000000-0000-0000-0000-000000000028 40000000-0000-0000-0000-000000000028 resource28 Resource 28
40000000-0000-0000-0000-000000000029 40000000-0000-0000-0000-000000000029 resource29 Resource 29
40000000-0000-0000-0000-000000000030 40000000-0000-0000-0000-000000000030 resource30 Resource 30
40000000-0000-0000-0000-000000000031 40000000-0000-0000-0000-000000000031 resource31 Resource 31
40000000-0000-0000-0000-000000000032 40000000-0000-0000-0000-000000000032 resource32 Resource 32
40000000-0000-0000-0000-000000000033 40000000-0000-0000-0000-000000000033 resource33 Resource 33
40000000-0000-0000-0000-000000000034 40000000-0000-0000-0000-000000000034 resource34 Resource 34
40000000-0000-0000-0000-000000000035 40000000-0000-0000-0000-000000000035 resource35 Resource 35
40000000-0000-0000-0000-000000000036 40000000-0000-0000-0000-000000000036 resource36 Resource 36
40000000-0000-0000-0000-000000000037 40000000-0000-0000-0000-000000000037 resource37 Resource 37
40000000-0000-0000-0000-000000000038 40000000-0000-0000-0000-000000000038 resource38 Resource 38
40000000-0000-0000-0000-000000000039 40000000-0000-0000-0000-000000000039 resource39 Resource 39
40000000-0000-0000-0000-000000000040 40000000-0000-0000-0000-000000000040 resource40 Resource 40
40000000-0000-0000-0000-000000000041 40000000-0000-0000-0000-000000000041 resource41 Resource 41
40000000-0000-0000-0000-000000000042 40000000-0000-0000-0000-000000000042 resource42 Resource 42
40000000-0000-0000-0000-000000000043 40000000-0000-0000-0000-000000000043 resource43 Resource 43
40000000-0000-0000-0000-000000000044 40000000-0000-0000-0000-000000000044 resource44 Resource 44
40000000-0000-0000-0000-000000000045 40000000-0000-0000-0000-000000000045 resource45 Resource 45
40000000-0000-0000-0000-000000000046 40000000-0000-0000-0000-000000000046 resource46 Resource 46
40000000-0000-0000-0000-000000000047 40000000-0000-0000-0000-000000000047 resource47 Resource 47
40000000-0000-0000-0000-000000000048 40000000-0000-0000-0000-000000000048 resource48 Resource 48
40000000-0000-0000-0000-000000000049 40000000-0000-0000-0000-000000000049 resource49 Resource 49
40000000-0000-0000-0000-000000000050 40000000-0000-0000-0000-000000000050 resource50 Resource 50
40000000-0000-0000-0000-000000000051 40000000-0000-0000-0000-000000000051 resource51 Resource 51
40000000-0000-0000-0000-000000000052 40000000-0000-0000-0000-000000000052 resource52 Resource 52
40000000-0000-0000-0000-000000000053 40000000-0000-0000-0000-000000000053 resource53 Resource 53
40000000-0000-0000-0000-000000000054 40000000-0000-0000-0000-000000000054 resource54 Resource 54
40000000-0000-0000-0000-000000000055 40000000-0000-0000-0000-000000000055 resource55 Resource 55
40000000-0000-0000-0000-000000000056 40000000-0000-0000-0000-000000000056 resource56 Resource 56
40000000-0000-0000-0000-000000000057 40000000-0000-0000-0000-000000000057 resource57 Resource 57
40000000-0000-0000-0000-000000000058 40000000-0000-0000-0000-000000000058 resource58 Resource 58
40000000-0000-0000-0000-000000000059 40000000-0000-0000-0000-000000000059 resource59 Resource 59
40000000-0000-0000-0000-000000000060 40000000-0000-0000-0000-000000000060 resource60 Resource 60
40000000-0000-0000-0000-000000000061 40000000-0000-0000-0000-000000000061 resource61 Resource 61
40000000-0000-0000-0000-000000000062 40000000-0000-0000-0000-000000000062 resource62 Resource 62
40000000-0000-0000-0000-000000000063 40000000-0000-0000-0000-000000000063 resource63 Resource 63
40000000-0000-0000-0000-000000000064 40000000-0000-0000-0000-000000000064 resource64 Resource 64
40000000-0000-0000-0000-000000000065 40000000-0000-0000-0000-000000000065 resource65 Resource 65
40000000-0000-0000-0000-000000000066 40000000-0000-0000-0000-000000000066 resource66 Resource 66
40000000-0000-0000-0000-000000000067 40000000-0000-0000-0000-000000000067 resource67 Resource 67
40000000-0000-0000-0000-000000000068 40000000-0000-0000-0000-000000000068 resource68 Resource 68
40000000-0000-0000-0000-000000000069 40000000-0000-0000-0000-000000000069 resource69 Resource 69
40000000-0000-0000-0000-000000000070 40000000-0000-0000-0000-000000000070 resource70 Resource 70
40000000-0000-0000-0000-000000000071 40000000-0000-0000-0000-000000000071 resource71 Resource 71
40000000-0000-0000-0000-000000000072 40000000-0000-0000-0000-000000000072 resource72 Resource 72
40000000-0000-0000-0000-000000000073 40000000-0000-0000-0000-000000000073 resource73 Resource 73
40000000-0000-0000-0000-000000000074 40000000-0000-0000-0000-000000000074 resource74 Resource 74
40000000-0000-0000-0000-000000000075 40000000-0000-0000-0000-000000000075 resource75 Resource 75
40000000-0000-0000-0000-000000000076 40000000-0000-0000-0000-000000000076 resource76 Resource 76
40000000-0000-0000-0000-000000000077 40000000-0000-0000-0000-000000000077 resource77 Resource 77
40000000-0000-0000-0000-000000000078 40000000-0000-0000-0000-000000000078 resource78 Resource 78
40000000-0000-0000-0000-000000000079 40000000-0000-0000-0000-000000000079 resource79 Resource 79
40000000-0000-0000-0000-000000000080 40000000-0000-0000-0000-000000000080 resource80 Resource 80
40000000-0000-0000-0000-000000000081 40000000-0000-0000-0000-000000000081 resource81 Resource 81
40000000-0000-0000-0000-000000000082 40000000-0000-0000-0000-000000000082 resource82 Resource 82
40000000-0000-0000-0000-000000000083 40000000-0000-0000-0000-000000000083 resource83 Resource 83
40000000-0000-0000-0000-000000000084 40000000-0000-0000-0000-000000000084 resource84 Resource 84
40000000-0000-0000-0000-000000000085 40000000-0000-0000-0000-000000000085 resource85 Resource 85
40000000-0000-0000-0000-000000000086 40000000-0000-0000-0000-000000000086 resource86 Resource 86
40000000-0000-0000-0000-000000000087 40000000-0000-0000-0000-000000000087 resource87 Resource 87
40000000-0000-0000-0000-000000000088 40000000-0000-0000-0000-000000000088 resource88 Resource 88
40000000-0000-0000-0000-000000000089 40000000-0000-0000-0000-000000000089 resource89 Resource 89
40000000-0000-0000-0000-000000000090 40000000-0000-0000-0000-000000000090 resource90 Resource 90
40000000-0000-0000-0000-000000000091 40000000-0000-0000-0000-000000000091 resource91 Resource 91 40000000-0000-0000-0000-000000000092 40000000-0000-0000-0000-000000000092 resource92 Resource 92 40000000-0000-0000-0000-000000000093 40000000-0000-0000-0000-000000000093 resource93 Resource 93 40000000-0000-0000-0000-000000000094 40000000-0000-0000-0000-000000000094 resource94 Resource 94 40000000-0000-0000-0000-000000000095 40000000-0000-0000-0000-000000000095 resource95 Resource 95 40000000-0000-0000-0000-000000000096 40000000-0000-0000-0000-000000000096 resource96 Resource 96 40000000-0000-0000-0000-000000000097 40000000-0000-0000-0000-000000000097 resource97 Resource 97 40000000-0000-0000-0000-000000000098 40000000-0000-0000-0000-000000000098 resource98 Resource 98 40000000-0000-0000-0000-000000000099 40000000-0000-0000-0000-000000000099 resource99 Resource 99 40000000-0000-0000-0000-000000000100 40000000-0000-0000-0000-000000000100 resource100 Resource 100 calendarserver-9.1+dfsg/conf/caldavd-apple.plist000066400000000000000000000341511315003562600220060ustar00rootroot00000000000000 ServerHostName EnableCalDAV EnableCardDAV SocketFiles Enabled Owner _calendar Group _www SocketRoot /var/run/caldavd_requests HTTPPort 80 SSLPort 443 EnableSSL BehindTLSProxy RedirectHTTPToHTTPS BindAddresses BindHTTPPorts BindSSLPorts ServerRoot /Library/Server/Calendar and Contacts DBType DBFeatures Postgres Ctl xpg_ctl Options -c log_lock_waits=TRUE -c deadlock_timeout=10 -c log_line_prefix='%m [%p] ' ExtraConnections 20 ClusterName cluster.pg LogFile xpg_ctl.log SocketDirectory /var/run/caldavd/PostgresSocket SocketName LogRotation DataRoot Data DatabaseRoot Database.xpg DocumentRoot Documents ConfigRoot Config RunRoot /var/run/caldavd Aliases UserQuota 104857600 MaximumAttachmentSize 10485760 MaxCollectionsPerHome 50 MaxResourcesPerCollection 10000 MaxResourceSize 1048576 MaxAttendeesPerInstance 100 MaxAllowedInstances 3000 DirectoryService type opendirectory params node /Search 
recordTypes users groups AugmentService type xml params xmlFiles augments.xml DirectoryFilterStartsWith AdminPrincipals ReadPrincipals EnableProxyPrincipals EnableAnonymousReadRoot EnableAnonymousReadNav EnablePrincipalListings EnableMonolithicCalendars Authentication Basic Enabled Digest Enabled Algorithm md5 Qop Kerberos Enabled ServicePrincipal Wiki Enabled LogRoot Logs AccessLogFile access.log RotateAccessLog ErrorLogFile error.log DefaultLogLevel warn PIDFile caldavd.pid SSLKeychainIdentity UserName _calendar GroupName _calendar ProcessType Combined MultiProcess ProcessCount 0 Notifications CoalesceSeconds 3 Services Scheduling CalDAV EmailDomain HTTPDomain AddressPatterns iSchedule Enabled AddressPatterns RemoteServers remoteservers.xml iMIP Enabled MailGatewayServer localhost MailGatewayPort 62310 Sending Server Port 587 UseSSL Username Password Address Receiving Server Port 995 Type UseSSL Username Password PollingSeconds 30 AddressPatterns mailto:.* FreeBusyURL Enabled TimePeriod 14 AnonymousAccess EnableDropBox EnableManagedAttachments EnablePrivateEvents EnableTimezoneService TimezoneService Enabled Sharing Enabled EnableSACLs EnableWebAdmin WebCalendarAuthPath /auth DirectoryAddressBook Enabled type opendirectory params queryUserRecords queryPeopleRecords EnableSearchAddressBook AutomaticPurging Enabled AlertPostingProgram /Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/Setup/CalendarServerAlert Includes /Library/Server/Preferences/Calendar.plist caldavd-system.plist caldavd-user.plist WritableConfigFile caldavd-system.plist calendarserver-9.1+dfsg/conf/caldavd-stdconfig.plist000066400000000000000000001537151315003562600226750ustar00rootroot00000000000000 ServerHostName HTTPPort 0 SSLPort 0 EnableSSL BehindTLSProxy RedirectHTTPToHTTPS SSLMethod SSLv23_METHOD SSLCiphers RC4-SHA:HIGH:!ADH StrictTransportSecuritySeconds 604800 SocketFiles Enabled Secured secured.sock Unsecured unsecured.sock Owner Group Permissions 504 SocketRoot 
/tmp/calendarserver BindAddresses BindHTTPPorts BindSSLPorts InheritFDs InheritSSLFDs UseMetaFD MetaFD 0 UseDatabase DBType DBFeatures skip-locked SpawnedDBUser caldav DatabaseConnection endpoint database user password ssl SharedConnectionPool DBAMPFD 0 FailIfUpgradeNeeded CheckExistingSchema UpgradeHomePrefix TransactionTimeoutSeconds 300 TransactionHTTPRetrySeconds 300 WorkQueue queuePollInterval 0.1 queueOverdueTimeout 300 queuePollingBackoff 60 60 5 1 overloadLevel 95 highPriorityLevel 80 mediumPriorityLevel 50 rowLimit 1 failureRescheduleInterval 60 lockRescheduleInterval 60 workParameters EnableCalDAV EnableCardDAV MigrationOnly ServerRoot /var/db/caldavd DataRoot Data DatabaseRoot Database AttachmentsRoot Attachments DocumentRoot Documents ConfigRoot Config LogRoot /var/log/caldavd RunRoot /var/run/caldavd WebCalendarRoot /Applications/Server.app/Contents/ServerRoot/usr/share/collabd/webcal/public UserQuota 104857600 MaximumAttachmentSize 10485760 MaximumAttachmentsPerInstance 5 MaxCollectionsPerHome 50 MaxResourcesPerCollection 10000 MaxResourceSize 1048576 MaxAttendeesPerInstance 100 MaxAllowedInstances 3000 WebCalendarAuthPath Aliases DirectoryService Enabled type xml params recordTypes users groups xmlFile accounts.xml DirectoryRealmName DirectoryFilterStartsWith ResourceService Enabled type xml params recordTypes locations resources addresses xmlFile resources.xml AugmentService type xml params xmlFiles statSeconds 15 ProxyLoadFromFile AdminPrincipals ReadPrincipals EnableProxyPrincipals EnableAnonymousReadRoot EnableAnonymousReadNav EnablePrincipalListings EnableMonolithicCalendars RejectClients Authentication Basic Enabled AllowedOverWireUnencrypted Digest Enabled Algorithm md5 Qop AllowedOverWireUnencrypted Kerberos Enabled ServicePrincipal AllowedOverWireUnencrypted ClientCertificate Enabled AllowedOverWireUnencrypted Required CAFiles SendCAsToClient Wiki Enabled Cookie cc.collabd_session_guid EndpointDescriptor unix:path=/var/run/collabd 
AccessLogFile access.log ErrorLogFile error.log AgentLogFile agent.log UtilityLogFile utility.log ErrorLogEnabled ErrorLogRotateMB 10 ErrorLogMaxRotatedFiles 5 ErrorLogRotateOnStart PIDFile caldavd.pid RotateAccessLog EnableExtendedAccessLog EnableExtendedTimingAccessLog DefaultLogLevel LogLevels LogID AccountingCategories HTTP iTIP iTIP-VFREEBUSY Implicit Errors AutoScheduling iSchedule xPod Invalid Instance migration AccountingPrincipals AccountingLogRoot accounting Stats EnableUnixStatsSocket UnixStatsSocket caldavd-stats.sock EnableTCPStatsSocket TCPStatsPort 8100 LogDatabase LabelsInSQL Statistics StatisticsLogFile sqlstats.log SQLStatements TransactionWaitSeconds 0 SSLCertificate SSLPrivateKey SSLAuthorityChain SSLPassPhraseDialog /etc/apache2/getsslpassphrase SSLCertAdmin /Applications/Server.app/Contents/ServerRoot/usr/sbin/certadmin SSLKeychainIdentity UserName GroupName ProcessType Combined MultiProcess ProcessCount 0 MinProcessCount 2 PerCPU 1 PerGB 1 StaggeredStartup Enabled Interval 15 MemoryLimiter Enabled Seconds 60 Bytes 2147483648 ResidentOnly EnableSACLs EnableReadOnlyServer EnableAddMember EnableSyncReport EnableSyncReportHome EnableConfigSyncToken EnableWellKnown EnableCalendarQueryExtended EnableManagedAttachments EnableServerInfo EnableJSONData EnableDropBox EnablePrivateEvents EnableTimezoneService TimezoneService Enabled URI /stdtimezones Mode primary BasePath XMLInfoPath PrettyPrintJSON SecondaryService Host URI UpdateIntervalMinutes 1440 EnableTimezonesByReference UsePackageTimezones EnableBatchUpload MaxResourcesBatchUpload 100 MaxBytesBatchUpload 10485760 Sharing Enabled AllowExternalUsers Calendars Enabled IgnorePerUserProperties X-APPLE-STRUCTURED-LOCATION CollectionProperties Shadowable {urn:ietf:params:xml:ns:caldav}calendar-description ProxyOverride {urn:ietf:params:xml:ns:caldav}calendar-description {com.apple.ical:}calendarcolor {http://apple.com/ns/ical/}calendar-color {http://apple.com/ns/ical/}calendar-order Global Groups 
Enabled ReconciliationDelaySeconds 5 AddressBooks Enabled CollectionProperties Shadowable {urn:ietf:params:xml:ns:carddav}addressbook-description ProxyOverride Global Groups Enabled RestrictCalendarsToOneComponentType SupportedComponents VEVENT VTODO EnableTrashCollection ExposeTrashCollection ParallelUpgrades MergeUpgrades EnableDefaultAlarms RemoveDuplicateAlarms RemoveDuplicatePrivateComments HostedStatus Enabled Parameter X-APPLE-HOSTED-STATUS Values local external EXTERNAL RevisionCleanup Enabled SyncTokenLifetimeDays 14.0 CleanupPeriodDays 2.0 InboxCleanup Enabled ItemLifetimeDays 14.0 CleanupPeriodDays 2.0 StartDelaySeconds 300 StaggerSeconds 0.5 InboxRemoveWorkThreshold 5 RemovalStaggerSeconds 0.5 DirectoryAddressBook Enabled type opendirectory params queryPeopleRecords peopleNode /Search/Contacts queryUserRecords userNode /Search/Contacts maxDSQueryRecords 0 queryDSLocal ignoreSystemRecords dsLocalCacheTimeout 30 liveQuery fakeETag cacheQuery cacheTimeout 30 standardizeSyntheticUIDs addDSAttrXProperties appleInternalServer additionalAttributes allowedAttributes name directory MaxQueryResults 1000 EnableSearchAddressBook AnonymousDirectoryAddressBookAccess EnableWebAdmin EnableControlAPI Scheduling CalDAV EmailDomain HTTPDomain AddressPatterns OldDraftCompatibility ScheduleTagCompatibility EnablePrivateComments PerAttendeeProperties X-APPLE-NEEDS-REPLY X-APPLE-TRAVEL-DURATION X-APPLE-TRAVEL-START X-APPLE-TRAVEL-RETURN-DURATION X-APPLE-TRAVEL-RETURN OrganizerPublicProperties X-APPLE-DROPBOX X-APPLE-STRUCTURED-LOCATION OrganizerPublicParameters AttendeePublicProperties AttendeePublicParameters iSchedule Enabled AddressPatterns RemoteServers remoteservers.xml SerialNumber 1 DNSDebug DKIM Enabled Domain KeySelector ischedule SignatureAlgorithm rsa-sha256 UseDNSKey UseHTTPKey UsePrivateExchangeKey ExpireSeconds 3600 PrivateKeyFile PublicKeyFile PrivateExchanges ProtocolDebug iMIP Enabled Sending Server Port 587 Address UseSSL Username Password SuppressionDays 7 
Receiving Server Port 0 UseSSL Type PollingSeconds 30 Username Password AddressPatterns MailTemplatesDirectory /Applications/Server.app/Contents/ServerRoot/usr/share/caldavd/share/email_templates MailIconsDirectory /Applications/Server.app/Contents/ServerRoot/usr/share/caldavd/share/date_icons InvitationDaysToLive 90 Options AllowGroupAsOrganizer AllowLocationAsOrganizer AllowResourceAsOrganizer AllowLocationWithoutOrganizer AllowResourceWithoutOrganizer TrackUnscheduledLocationData TrackUnscheduledResourceData FakeResourceLocationEmail LimitFreeBusyAttendees 30 AttendeeRefreshBatch 5 AttendeeRefreshCountLimit 50 UIDLockTimeoutSeconds 60 UIDLockExpirySeconds 300 PrincipalHostAliases TimestampAttendeePartStatChanges DelegeteRichFreeBusy RoomResourceRichFreeBusy AutoSchedule Enabled Always AllowUsers DefaultMode automatic FutureFreeBusyDays 1095 WorkQueues Enabled RequestDelaySeconds 5 ReplyDelaySeconds 1 AutoReplyDelaySeconds 5 AttendeeRefreshBatchDelaySeconds 5 AttendeeRefreshBatchIntervalSeconds 5 TemporaryFailureDelay 60 MaxTemporaryFailures 10 Splitting Enabled Size 102400 PastDays 14 Delay 60 FreeBusyURL Enabled TimePeriod 14 AnonymousAccess Notifications Enabled CoalesceSeconds 3 Services APNS Enabled SubscriptionURL apns SubscriptionRefreshIntervalSeconds 172800 SubscriptionPurgeIntervalSeconds 43200 SubscriptionPurgeSeconds 1209600 ProviderHost gateway.push.apple.com ProviderPort 2195 FeedbackHost feedback.push.apple.com FeedbackPort 2196 FeedbackUpdateSeconds 28800 Environment PRODUCTION EnableStaggering StaggerSeconds 3 CalDAV Enabled CertificatePath Certificates/apns:com.apple.calendar.cert.pem PrivateKeyPath Certificates/apns:com.apple.calendar.key.pem AuthorityChainPath Certificates/apns:com.apple.calendar.chain.pem Passphrase KeychainIdentity apns:com.apple.calendar Topic CardDAV Enabled CertificatePath Certificates/apns:com.apple.contact.cert.pem PrivateKeyPath Certificates/apns:com.apple.contact.key.pem AuthorityChainPath 
Certificates/apns:com.apple.contact.chain.pem Passphrase KeychainIdentity apns:com.apple.contact Topic AMP Enabled Port 62311 EnableStaggering StaggerSeconds 3 DirectoryProxy Enabled SocketPath directory-proxy.sock InSidecarCachingSeconds 120 DirectoryCaching CachingSeconds 60 NegativeCachingEnabled LookupsBetweenPurges 10000 Servers Enabled ConfigFile localservers.xml MaxClients 5 InboxName podding ConduitName conduit MaxRequests 3 MaxAccepts 1 MaxDBConnectionsPerPool 10 ListenBacklog 2024 IncomingDataTimeOut 60 PipelineIdleTimeOut 15 IdleConnectionTimeOut 360 CloseConnectionTimeOut 15 UIDReservationTimeOut 1800 MaxMultigetWithDataHrefs 5000 MaxQueryWithDataResults 1000 MaxPrincipalSearchReportResults 500 PrincipalSearchReportTimeout 10 ClientFixes ForceAttendeeTRANSP iOS/8\\.0(\\..*)? iOS/8\\.1(\\..*)? iOS/8\\.2(\\..*)? Localization TranslationsDirectory /Applications/Server.app/Contents/ServerRoot/usr/share/caldavd/share/translations LocalesDirectory locales Language Twisted reactor select umask 18 ControlPort 0 ControlSocket caldavd.sock ResponseCompression HTTPRetryAfter 180 Profiling Enabled BaseDirectory /tmp/stats Memcached MaxClients 5 Pools Default MemcacheSocket memcache.sock ClientEnabled ServerEnabled BindAddress 127.0.0.1 Port 11311 HandleCacheTypes Default memcached memcached MaxMemory 0 Options ProxyDBKeyNormalization Postgres DatabaseName caldav ClusterName cluster LogFile postgres.log LogRotation SocketDirectory SocketName ListenAddresses TxnTimeoutSeconds 30 SharedBuffers 0 MaxConnections 0 ExtraConnections 3 BuffersToConnectionsRatio 1.5 Options -c standard_conforming_strings=on Ctl pg_ctl Init initdb QueryCaching Enabled MemcachedPool Default ExpireSeconds 3600 GroupCaching Enabled UpdateSeconds 300 UseDirectoryBasedDelegates InitialSchedulingDelaySeconds 10 BatchSize 100 BatchSchedulingIntervalSeconds 2 GroupAttendees Enabled ReconciliationDelaySeconds 5 AutoUpdateSecondsFromNow 3600 AutomaticPurging Enabled PollingIntervalSeconds 604800 
CheckStaggerSeconds 0 PurgeIntervalSeconds 604800 HomePurgeDelaySeconds 60 GroupPurgeIntervalSeconds 604800 Manhole Enabled UseSSH StartingPortNumber 5000 DPSPortNumber 4999 PasswordFilePath sshKeyName manhole.key sshKeySize 4096 EnableKeepAlive EnableResponseCache ResponseCacheTimeout 30 EnableFreeBusyCache FreeBusyCacheDaysBack 7 FreeBusyCacheDaysForward 84 FreeBusyIndexLowerLimitDays 365 FreeBusyIndexExpandAheadDays 365 FreeBusyIndexExpandMaxDays 1825 FreeBusyIndexDelayedExpand FreeBusyIndexSmartUpdate RootResourcePropStoreClass txweb2.dav.xattrprops.xattrPropertyStore UtilityServiceClass MigratedInboxDaysCutoff 60 DefaultTimezone AgentInactivityTimeoutSeconds 300 ServiceDisablingProgram AlertPostingProgram ImportConfig Includes WritableConfigFile calendarserver-9.1+dfsg/conf/caldavd-test-podA.plist000066400000000000000000000103731315003562600225450ustar00rootroot00000000000000 ImportConfig ./conf/caldavd-test.plist HTTPPort 8008 SSLPort 8443 BindHTTPPorts BindSSLPorts ServerRoot ./data/podA ConfigRoot ./conf DirectoryService type xml params xmlFile ./conf/auth/accounts-test-pod.xml ResourceService Enabled type xml params xmlFile ./conf/auth/resources-test-pod.xml AugmentService Enabled type xml params xmlFiles ./conf/auth/augments-test-pod.xml ProxyLoadFromFile ./conf/auth/proxies-test-pod.xml Servers Enabled ConfigFile ./conf/localservers-test.xml MaxClients 5 Notifications Services AMP Enabled Port 62311 EnableStaggering StaggerSeconds 3 Memcached Pools Default ClientEnabled ServerEnabled BindAddress localhost Port 11211 AllPods ClientEnabled ServerEnabled BindAddress localhost Port 11311 HandleCacheTypes ProxyDB DelegatesDB PrincipalToken DIGESTCREDENTIALS MaxClients 5 Options calendarserver-9.1+dfsg/conf/caldavd-test-podB.plist000066400000000000000000000103741315003562600225470ustar00rootroot00000000000000 ImportConfig ./conf/caldavd-test.plist HTTPPort 8108 SSLPort 8543 BindHTTPPorts BindSSLPorts ServerRoot ./data/podB ConfigRoot ./conf DirectoryService 
type xml params xmlFile ./conf/auth/accounts-test-pod.xml ResourceService Enabled type xml params xmlFile ./conf/auth/resources-test-pod.xml AugmentService Enabled type xml params xmlFiles ./conf/auth/augments-test-pod.xml ProxyLoadFromFile ./conf/auth/proxies-test-pod.xml Servers Enabled ConfigFile ./conf/localservers-test.xml MaxClients 5 Notifications Services AMP Enabled Port 62312 EnableStaggering StaggerSeconds 3 Memcached Pools Default ClientEnabled ServerEnabled BindAddress localhost Port 11411 AllPods ClientEnabled ServerEnabled BindAddress localhost Port 11311 HandleCacheTypes ProxyDB DelegatesDB PrincipalToken DIGESTCREDENTIALS MaxClients 5 Options calendarserver-9.1+dfsg/conf/caldavd-test-s2s.plist000066400000000000000000000112001315003562600223570ustar00rootroot00000000000000 ImportConfig ./conf/caldavd-test.plist HTTPPort 8208 SSLPort 8643 BindHTTPPorts BindSSLPorts ServerRoot ./data/s2s ConfigRoot ./conf DirectoryService type xml params xmlFile ./conf/auth/accounts-test-s2s.xml ResourceService Enabled type xml params xmlFile ./conf/auth/resources-test-s2s.xml AugmentService Enabled type xml params xmlFiles ./conf/auth/augments-test-s2s.xml ProxyLoadFromFile ./conf/auth/proxies-test-s2s.xml Scheduling iSchedule Enabled AddressPatterns RemoteServers remoteservers-test-s2s.xml DNSDebug test-db.zones DKIM Enabled Domain example.org KeySelector ischedule2 UseDNSKey UseHTTPKey UsePrivateExchangeKey ExpireSeconds 3600 PrivateKeyFile dkim-test-s2s/priv.pem PublicKeyFile dkim-test-s2s/pub.pem PrivateExchanges dkim-test-s2s/other_keys ProtocolDebug Notifications Services AMP Enabled Memcached Pools Default ClientEnabled ServerEnabled BindAddress localhost Port 11511 MaxClients 5 memcached ../memcached/_root/bin/memcached Options calendarserver-9.1+dfsg/conf/caldavd-test.plist000066400000000000000000000571711315003562600216730ustar00rootroot00000000000000 ServerHostName localhost EnableCalDAV EnableCardDAV SocketFiles Enabled SocketRoot /tmp/caldavd_requests 
HTTPPort 8008 SSLPort 8443 EnableSSL RedirectHTTPToHTTPS BindAddresses BindHTTPPorts 8008 8800 BindSSLPorts 8443 8843 ServerRoot ./data DataRoot Data DatabaseRoot Database DocumentRoot Documents ConfigRoot ./conf RunRoot Logs/state Aliases FailIfUpgradeNeeded UserQuota 104857600 MaximumAttachmentSize 10485760 MaxCollectionsPerHome 50 MaxResourcesPerCollection 10000 MaxResourceSize 1048576 MaxAttendeesPerInstance 100 MaxAllowedInstances 3000 DirectoryProxy Enabled InSidecarCachingSeconds 30 DirectoryCaching CachingSeconds 10 NegativeCachingEnabled LookupsBetweenPurges 100 DirectoryService type xml params xmlFile ./conf/auth/accounts-test.xml DirectoryRealmName Test Realm ResourceService Enabled type xml params xmlFile ./conf/auth/resources-test.xml AugmentService type xml params xmlFiles ./conf/auth/augments-test.xml ProxyLoadFromFile ./conf/auth/proxies-test.xml AdminPrincipals /principals/__uids__/0C8BDE62-E600-4696-83D3-8B5ECABDFD2E/ ReadPrincipals EnableProxyPrincipals EnableAnonymousReadRoot EnableAnonymousReadNav EnablePrincipalListings EnableMonolithicCalendars Authentication Basic Enabled AllowedOverWireUnencrypted Digest Enabled AllowedOverWireUnencrypted Algorithm md5 Qop Kerberos Enabled AllowedOverWireUnencrypted ServicePrincipal ClientCertificate Enabled AllowedOverWireUnencrypted Required CAFiles twistedcaldav/test/data/demoCA/cacert.pem SendCAsToClient Wiki Enabled Cookie sessionID EndpointDescriptor unix:path=/var/run/collabd LogRoot Logs AccessLogFile access.log RotateAccessLog ErrorLogFile error.log DefaultLogLevel info LogLevels PIDFile caldavd.pid AccountingCategories HTTP iTIP iTIP-VFREEBUSY Implicit Errors AutoScheduling iSchedule xPod AccountingPrincipals SSLCertificate twistedcaldav/test/data/server.pem SSLAuthorityChain SSLPrivateKey twistedcaldav/test/data/server.pem SSLKeychainIdentity org.calendarserver.test UserName GroupName ProcessType Combined MultiProcess ProcessCount 0 Notifications CoalesceSeconds 3 Services AMP Enabled Port 62311 
EnableStaggering StaggerSeconds 3 Scheduling CalDAV EmailDomain HTTPDomain AddressPatterns OldDraftCompatibility ScheduleTagCompatibility EnablePrivateComments OrganizerPublicProperties X-APPLE-DROPBOX X-APPLE-STRUCTURED-LOCATION X-TEST-ORGANIZER-PROP1 X-TEST-ORGANIZER-PROP2 X-TEST-ORGANIZER-PROP3 X-TEST-ORGANIZER-PROP4 X-TEST-ORGANIZER-PROP5 OrganizerPublicParameters X-TEST-ORGANIZER-PARAM1 X-TEST-ORGANIZER-PARAM2 X-TEST-ORGANIZER-PARAM3 X-TEST-ORGANIZER-PARAM4 X-TEST-ORGANIZER-PARAM5 AttendeePublicProperties X-TEST-ALL-PROP1 X-TEST-ALL-PROP2 X-TEST-ALL-PROP3 X-TEST-ALL-PROP4 X-TEST-ALL-PROP5 AttendeePublicParameters X-TEST-ALL-PARAM1 X-TEST-ALL-PARAM2 X-TEST-ALL-PARAM3 X-TEST-ALL-PARAM4 X-TEST-ALL-PARAM5 iSchedule Enabled AddressPatterns RemoteServers remoteservers-test.xml iMIP Enabled Sending Server Port 587 UseSSL Username Password Address SupressionDays 7 Receiving Server Port 995 Type UseSSL Username Password PollingSeconds 30 AddressPatterns mailto:.* Options WorkQueues Enabled RequestDelaySeconds 0.1 ReplyDelaySeconds 0.5 AutoReplyDelaySeconds 1.0 AttendeeRefreshBatchDelaySeconds 0.1 AttendeeRefreshBatchIntervalSeconds 0.1 WorkQueue failureRescheduleInterval 1 lockRescheduleInterval 1 FreeBusyURL Enabled TimePeriod 14 AnonymousAccess EnableDropBox EnableManagedAttachments EnablePrivateEvents RemoveDuplicatePrivateComments EnableTimezoneService TimezoneService Enabled Mode primary BasePath XMLInfoPath SecondaryService Host URI UpdateIntervalMinutes 1440 UsePackageTimezones EnableBatchUpload Sharing Enabled AllowExternalUsers Calendars Enabled Groups Enabled ReconciliationDelaySeconds 0.1 AddressBooks Enabled Groups Enabled EnableSACLs EnableReadOnlyServer EnableWebAdmin EnableControlAPI ResponseCompression HTTPRetryAfter 180 ControlSocket caldavd.sock Memcached MaxClients 5 memcached memcached Options EnableResponseCache ResponseCacheTimeout 30 Postgres Options QueryCaching Enabled MemcachedPool Default ExpireSeconds 3600 GroupCaching Enabled UpdateSeconds 
300 UseDirectoryBasedDelegates GroupAttendees Enabled ReconciliationDelaySeconds 5 MaxPrincipalSearchReportResults 500 Twisted twistd ../Twisted/bin/twistd Localization TranslationsDirectory locales LocalesDirectory locales Language en EnableSearchAddressBook EnableTrashCollection calendarserver-9.1+dfsg/conf/caldavd.plist000066400000000000000000000267471315003562600207230ustar00rootroot00000000000000 ServerHostName HTTPPort 80 RedirectHTTPToHTTPS BindAddresses BindHTTPPorts BindSSLPorts ServerRoot /var/db/caldavd DataRoot Data DocumentRoot Documents ConfigRoot /etc/caldavd RunRoot /var/run Aliases UserQuota 104857600 MaximumAttachmentSize 10485760 MaxCollectionsPerHome 50 MaxResourcesPerCollection 10000 MaxResourceSize 1048576 MaxAttendeesPerInstance 100 MaxAllowedInstances 3000 DirectoryService type xml params xmlFile accounts.xml AdminPrincipals ReadPrincipals EnableProxyPrincipals EnableAnonymousReadRoot EnableAnonymousReadNav EnablePrincipalListings EnableMonolithicCalendars Authentication Basic Enabled Digest Enabled Algorithm md5 Qop Kerberos Enabled ServicePrincipal LogRoot /var/log/caldavd AccessLogFile access.log RotateAccessLog ErrorLogFile error.log DefaultLogLevel warn PIDFile caldavd.pid SSLCertificate SSLAuthorityChain SSLPrivateKey UserName daemon GroupName daemon ProcessType Combined MultiProcess ProcessCount 0 Notifications CoalesceSeconds 3 Services XMPPNotifier Service twistedcaldav.notify.XMPPNotifierService Enabled Host xmpp.host.name Port 5222 JID jid@xmpp.host.name/resource Password password_goes_here ServiceAddress pubsub.xmpp.host.name Scheduling CalDAV EmailDomain HTTPDomain AddressPatterns iSchedule Enabled AddressPatterns RemoteServers remoteservers.xml iMIP Enabled MailGatewayServer localhost MailGatewayPort 62310 Sending Server Port 587 UseSSL Username Password Address Receiving Server Port 995 Type UseSSL Username Password PollingSeconds 30 AddressPatterns mailto:.* FreeBusyURL Enabled TimePeriod 14 AnonymousAccess 
EnablePrivateEvents Sharing Enabled EnableWebAdmin
calendarserver-9.1+dfsg/conf/dkim-test-s2s/other_keys/example.com#ischedule
v=DKIM1; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDtocSHvSS1Nn0uIL4Sg+0wp6KcW31WRC4Fww8P+jvsVAazVOxvxkShNSd18EvApiNa55P8WgKVEu02OQePjnjKNqfgJPeajkWy/0CJn+d6rX/ncPMGX2EYzqXy/CyVqpcnVAosToymo6VHL6ufhzlyLJFDznLtV121CZLUZlAySQIDAQAB
calendarserver-9.1+dfsg/conf/dkim-test-s2s/priv.pem
-----BEGIN RSA PRIVATE KEY----- MIICWwIBAAKBgQCwwGYRfC59jFg7D8cvwdFDoxhnyZQh77dcl/1dvyMOz4leU/JS Cp7M/cCh5Bgw440deV483sr1jCkyychIgBZCaPSHqItfwRgKPntnifuCzWHfv5gZ E+JI+tWBXvUs4ao4dWH7VjmUyj49QfZ/NGNKKG2Xtr3In7/7qdbfg7tVRQIDAQAB AoGAVSS5hl69vnjm37ygBR9mgSCF1ylBlH93YsFMqeYzKyVKVQg3SNIY4UKzksjf 5l0XU0Vt4gCo4FQeXHrbYiFhltrB6l9uIH3bUHRK4pCI7IUe6pjKGTKxN4pX6+eA 62PzBzbVLuc1rI2D57yzUgkpR8e1PkdVCZLH+eJC1kSffZkCQQDiqJVDi6ZeAINO 5rxlAB+mnboQ2qvhJ0Z0nM5aauCpHkvypQTLxncez/fAvi2C/pbOxflUgV5ef7FZ dgLI/xQHAkEAx6Hor29mxejI1R5RwYWF5/VvjsCnbip1I9ToiqO8heLwmUcgL24S 9Mfdiv2nJCxV7UtjLwJU0PY73KQx3JmxUwJAPzGBbDOjTtIVygnKvN4r9OhE2C4f fcbVfe26Grtxp7Uqt5wKmkXbMFwLV1GunrcclMndmhH3naE8cRTV8fQsQQJARh19 xjBQXm52KzQs7tVgxKmVdwP/SlgrMFyVGCyOCFA+xPcQPNhiXAreqvSQAcp4m5GA 0n/1Hjd9qu8YfCyW9QJAa8fiAT75cDtxRS+YsUjAwawmleaDclQDDPO4bMkq0nTd 2dQpov79tiNCPV8CFh64GPdGC2Lyv7ZXusEo4QBzJw== -----END RSA PRIVATE KEY-----
calendarserver-9.1+dfsg/conf/dkim-test-s2s/pub.pem
-----BEGIN PUBLIC KEY----- MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCwwGYRfC59jFg7D8cvwdFDoxhn yZQh77dcl/1dvyMOz4leU/JSCp7M/cCh5Bgw440deV483sr1jCkyychIgBZCaPSH
qItfwRgKPntnifuCzWHfv5gZE+JI+tWBXvUs4ao4dWH7VjmUyj49QfZ/NGNKKG2X tr3In7/7qdbfg7tVRQIDAQAB -----END PUBLIC KEY-----
calendarserver-9.1+dfsg/conf/dkim-test/other_keys/example.org#ischedule2
v=DKIM1; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCwwGYRfC59jFg7D8cvwdFDoxhnyZQh77dcl/1dvyMOz4leU/JSCp7M/cCh5Bgw440deV483sr1jCkyychIgBZCaPSHqItfwRgKPntnifuCzWHfv5gZE+JI+tWBXvUs4ao4dWH7VjmUyj49QfZ/NGNKKG2Xtr3In7/7qdbfg7tVRQIDAQAB
calendarserver-9.1+dfsg/conf/dkim-test/priv.pem
-----BEGIN RSA PRIVATE KEY----- MIICXgIBAAKBgQDtocSHvSS1Nn0uIL4Sg+0wp6KcW31WRC4Fww8P+jvsVAazVOxv xkShNSd18EvApiNa55P8WgKVEu02OQePjnjKNqfgJPeajkWy/0CJn+d6rX/ncPMG X2EYzqXy/CyVqpcnVAosToymo6VHL6ufhzlyLJFDznLtV121CZLUZlAySQIDAQAB AoGBAJhsO/hpTUNjKRZOcDzGHH0p+bbbRGDyKKcPf/japFcpaobbATGM9naE9sPC l4SBzInBov2p6qAeXMN7/yqI01Z9EQEXBHXfMFSrvbAMQFwhqLTwDQWh1S32GAPQ Fkf2NkUjK6DDqgeT3LJrehTmSjwZTN6T+WRpkGaAvYAQIywRAkEA+4uIInt+OuZN 7ZXBwfms5oDuFOdToiDedglnNVzQmJ4AcPqakgnF9tsQipDnvlOVHqjujmYXpOgK 7MRNMycMNQJBAPHXJ+GdSfiuVrPbRQOg+9v6AdaP3+7fUJ8I8kGhr/vrn8lZziPZ /3W96pG2Dly6j+eQ3vk/3ATHD7PziAlPSEUCQQCiJTZSq/IZe30+Keuk4xFt4CwX 82l4t+FOiw8pWbPovOih6xiaDIy8bEeEWpXXnL8h7VkhF3QkS6NHLd5pm8EFAkEA 2l8GEvH8/kEl5wfCXJF7ellYWY7WjJI28TOZ1GuUReyv/pdJzROmWYHgkiwK8e4/ zMACppvkJqg8ZKgtGQLu5QJAQ3D2RjXwZukcwmcOfykU7A3FoItC6JVxVBrZwq0x 9/7WfuHF5ayWDvo1xMJkzlSsL5lsHOQzmvf0JbKhkToazw== -----END RSA PRIVATE KEY-----
calendarserver-9.1+dfsg/conf/dkim-test/pub.pem
-----BEGIN PUBLIC KEY----- MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDtocSHvSS1Nn0uIL4Sg+0wp6Kc
W31WRC4Fww8P+jvsVAazVOxvxkShNSd18EvApiNa55P8WgKVEu02OQePjnjKNqfg JPeajkWy/0CJn+d6rX/ncPMGX2EYzqXy/CyVqpcnVAosToymo6VHL6ufhzlyLJFD znLtV121CZLUZlAySQIDAQAB -----END PUBLIC KEY-----
calendarserver-9.1+dfsg/conf/localservers-test.xml
A https://localhost:8443 127.0.0.1 ::1 ::ffff:127.0.0.1 A B https://localhost:8543 127.0.0.1 ::1 ::ffff:127.0.0.1 B
calendarserver-9.1+dfsg/conf/localservers.dtd
calendarserver-9.1+dfsg/conf/localservers.xml
calendarserver-9.1+dfsg/conf/mime.types
# This is a comment. I love comments.
# This file controls what Internet media types are sent to the client for
# given file extension(s). Sending the correct media type to the client
# is important so they know how to handle the content of the file.
# Extra types can either be added here or by using an AddType directive
# in your config files. For more information about Internet media types,
# please read RFC 2045, 2046, 2047, 2048, and 2077. The Internet media type
# registry is at .
# MIME type Extensions application/activemessage application/andrew-inset ez application/applefile application/atom+xml atom application/atomicmail application/batch-smtp application/beep+xml application/cals-1840 application/cnrp+xml application/commonground application/cpl+xml application/cybercash application/dca-rft application/dec-dx application/dvcs application/edi-consent application/edifact application/edi-x12 application/eshop application/font-tdpfr application/http application/hyperstudio application/iges application/index application/index.cmd application/index.obj application/index.response application/index.vnd application/iotp application/ipp application/isup application/mac-binhex40 hqx application/mac-compactpro cpt application/macwriteii application/marc application/mathematica application/mathml+xml mathml application/msword doc application/news-message-id application/news-transmission application/ocsp-request application/ocsp-response application/octet-stream bin dms lha lzh exe class so dll dmg application/oda oda application/ogg ogg application/parityfec application/pdf pdf application/pgp-encrypted application/pgp-keys application/pgp-signature application/pkcs10 application/pkcs7-mime application/pkcs7-signature application/pkix-cert application/pkix-crl application/pkixcmp application/postscript ai eps ps application/prs.alvestrand.titrax-sheet application/prs.cww application/prs.nprend application/prs.plucker application/qsig application/rdf+xml rdf application/reginfo+xml application/remote-printing application/riscos application/rtf application/sdp application/set-payment application/set-payment-initiation application/set-registration application/set-registration-initiation application/sgml application/sgml-open-catalog application/sieve application/slate application/smil smi smil application/srgs gram application/srgs+xml grxml application/timestamp-query application/timestamp-reply application/tve-trigger application/vemmi 
application/vnd.3gpp.pic-bw-large application/vnd.3gpp.pic-bw-small application/vnd.3gpp.pic-bw-var application/vnd.3gpp.sms application/vnd.3m.post-it-notes application/vnd.accpac.simply.aso application/vnd.accpac.simply.imp application/vnd.acucobol application/vnd.acucorp application/vnd.adobe.xfdf application/vnd.aether.imp application/vnd.amiga.ami application/vnd.anser-web-certificate-issue-initiation application/vnd.anser-web-funds-transfer-initiation application/vnd.audiograph application/vnd.blueice.multipass application/vnd.bmi application/vnd.businessobjects application/vnd.canon-cpdl application/vnd.canon-lips application/vnd.cinderella application/vnd.claymore application/vnd.commerce-battelle application/vnd.commonspace application/vnd.contact.cmsg application/vnd.cosmocaller application/vnd.criticaltools.wbs+xml application/vnd.ctc-posml application/vnd.cups-postscript application/vnd.cups-raster application/vnd.cups-raw application/vnd.curl application/vnd.cybank application/vnd.data-vision.rdz application/vnd.dna application/vnd.dpgraph application/vnd.dreamfactory application/vnd.dxr application/vnd.ecdis-update application/vnd.ecowin.chart application/vnd.ecowin.filerequest application/vnd.ecowin.fileupdate application/vnd.ecowin.series application/vnd.ecowin.seriesrequest application/vnd.ecowin.seriesupdate application/vnd.enliven application/vnd.epson.esf application/vnd.epson.msf application/vnd.epson.quickanime application/vnd.epson.salt application/vnd.epson.ssf application/vnd.ericsson.quickcall application/vnd.eudora.data application/vnd.fdf application/vnd.ffsns application/vnd.fints application/vnd.flographit application/vnd.framemaker application/vnd.fsc.weblaunch application/vnd.fujitsu.oasys application/vnd.fujitsu.oasys2 application/vnd.fujitsu.oasys3 application/vnd.fujitsu.oasysgp application/vnd.fujitsu.oasysprs application/vnd.fujixerox.ddd application/vnd.fujixerox.docuworks application/vnd.fujixerox.docuworks.binder 
application/vnd.fut-misnet application/vnd.grafeq application/vnd.groove-account application/vnd.groove-help application/vnd.groove-identity-message application/vnd.groove-injector application/vnd.groove-tool-message application/vnd.groove-tool-template application/vnd.groove-vcard application/vnd.hbci application/vnd.hhe.lesson-player application/vnd.hp-hpgl application/vnd.hp-hpid application/vnd.hp-hps application/vnd.hp-pcl application/vnd.hp-pclxl application/vnd.httphone application/vnd.hzn-3d-crossword application/vnd.ibm.afplinedata application/vnd.ibm.electronic-media application/vnd.ibm.minipay application/vnd.ibm.modcap application/vnd.ibm.rights-management application/vnd.ibm.secure-container application/vnd.informix-visionary application/vnd.intercon.formnet application/vnd.intertrust.digibox application/vnd.intertrust.nncp application/vnd.intu.qbo application/vnd.intu.qfx application/vnd.irepository.package+xml application/vnd.is-xpr application/vnd.japannet-directory-service application/vnd.japannet-jpnstore-wakeup application/vnd.japannet-payment-wakeup application/vnd.japannet-registration application/vnd.japannet-registration-wakeup application/vnd.japannet-setstore-wakeup application/vnd.japannet-verification application/vnd.japannet-verification-wakeup application/vnd.jisp application/vnd.kde.karbon application/vnd.kde.kchart application/vnd.kde.kformula application/vnd.kde.kivio application/vnd.kde.kontour application/vnd.kde.kpresenter application/vnd.kde.kspread application/vnd.kde.kword application/vnd.kenameaapp application/vnd.koan application/vnd.liberty-request+xml application/vnd.llamagraphics.life-balance.desktop application/vnd.llamagraphics.life-balance.exchange+xml application/vnd.lotus-1-2-3 application/vnd.lotus-approach application/vnd.lotus-freelance application/vnd.lotus-notes application/vnd.lotus-organizer application/vnd.lotus-screencam application/vnd.lotus-wordpro application/vnd.mcd application/vnd.mediastation.cdkey 
application/vnd.meridian-slingshot application/vnd.micrografx.flo application/vnd.micrografx.igx application/vnd.mif mif application/vnd.minisoft-hp3000-save application/vnd.mitsubishi.misty-guard.trustweb application/vnd.mobius.daf application/vnd.mobius.dis application/vnd.mobius.mbk application/vnd.mobius.mqy application/vnd.mobius.msl application/vnd.mobius.plc application/vnd.mobius.txf application/vnd.mophun.application application/vnd.mophun.certificate application/vnd.motorola.flexsuite application/vnd.motorola.flexsuite.adsi application/vnd.motorola.flexsuite.fis application/vnd.motorola.flexsuite.gotap application/vnd.motorola.flexsuite.kmr application/vnd.motorola.flexsuite.ttc application/vnd.motorola.flexsuite.wem application/vnd.mozilla.xul+xml xul application/vnd.ms-artgalry application/vnd.ms-asf application/vnd.ms-excel xls application/vnd.ms-lrm application/vnd.ms-powerpoint ppt application/vnd.ms-project application/vnd.ms-tnef application/vnd.ms-works application/vnd.ms-wpl application/vnd.mseq application/vnd.msign application/vnd.music-niff application/vnd.musician application/vnd.netfpx application/vnd.noblenet-directory application/vnd.noblenet-sealer application/vnd.noblenet-web application/vnd.novadigm.edm application/vnd.novadigm.edx application/vnd.novadigm.ext application/vnd.obn application/vnd.osa.netdeploy application/vnd.palm application/vnd.pg.format application/vnd.pg.osasli application/vnd.powerbuilder6 application/vnd.powerbuilder6-s application/vnd.powerbuilder7 application/vnd.powerbuilder7-s application/vnd.powerbuilder75 application/vnd.powerbuilder75-s application/vnd.previewsystems.box application/vnd.publishare-delta-tree application/vnd.pvi.ptid1 application/vnd.pwg-multiplexed application/vnd.pwg-xhtml-print+xml application/vnd.quark.quarkxpress application/vnd.rapid application/vnd.s3sms application/vnd.sealed.net application/vnd.seemail application/vnd.shana.informed.formdata 
application/vnd.shana.informed.formtemplate application/vnd.shana.informed.interchange application/vnd.shana.informed.package application/vnd.smaf application/vnd.sss-cod application/vnd.sss-dtf application/vnd.sss-ntf application/vnd.street-stream application/vnd.svd application/vnd.swiftview-ics application/vnd.triscape.mxs application/vnd.trueapp application/vnd.truedoc application/vnd.ufdl application/vnd.uplanet.alert application/vnd.uplanet.alert-wbxml application/vnd.uplanet.bearer-choice application/vnd.uplanet.bearer-choice-wbxml application/vnd.uplanet.cacheop application/vnd.uplanet.cacheop-wbxml application/vnd.uplanet.channel application/vnd.uplanet.channel-wbxml application/vnd.uplanet.list application/vnd.uplanet.list-wbxml application/vnd.uplanet.listcmd application/vnd.uplanet.listcmd-wbxml application/vnd.uplanet.signal application/vnd.vcx application/vnd.vectorworks application/vnd.vidsoft.vidconference application/vnd.visio application/vnd.visionary application/vnd.vividence.scriptfile application/vnd.vsf application/vnd.wap.sic application/vnd.wap.slc application/vnd.wap.wbxml wbxml application/vnd.wap.wmlc wmlc application/vnd.wap.wmlscriptc wmlsc application/vnd.webturbo application/vnd.wrq-hp3000-labelled application/vnd.wt.stf application/vnd.wv.csp+wbxml application/vnd.xara application/vnd.xfdl application/vnd.yamaha.hv-dic application/vnd.yamaha.hv-script application/vnd.yamaha.hv-voice application/vnd.yellowriver-custom-menu application/voicexml+xml vxml application/watcherinfo+xml application/whoispp-query application/whoispp-response application/wita application/wordperfect5.1 application/x-bcpio bcpio application/x-cdlink vcd application/x-chess-pgn pgn application/x-compress application/x-cpio cpio application/x-csh csh application/x-director dcr dir dxr application/x-dvi dvi application/x-futuresplash spl application/x-gtar gtar application/x-gzip application/x-hdf hdf application/x-javascript js application/x-koan skp skd skt skm 
application/x-latex latex application/x-netcdf nc cdf application/x-sh sh application/x-shar shar application/x-shockwave-flash swf application/x-stuffit sit application/x-sv4cpio sv4cpio application/x-sv4crc sv4crc application/x-tar tar application/x-tcl tcl application/x-tex tex application/x-texinfo texinfo texi application/x-troff t tr roff application/x-troff-man man application/x-troff-me me application/x-troff-ms ms application/x-ustar ustar application/x-wais-source src application/x400-bp application/xhtml+xml xhtml xht application/xslt+xml xslt application/xml xml xsl application/xml-dtd dtd application/xml-external-parsed-entity application/zip zip audio/32kadpcm audio/amr audio/amr-wb audio/basic au snd audio/cn audio/dat12 audio/dsr-es201108 audio/dvi4 audio/evrc audio/evrc0 audio/g722 audio/g.722.1 audio/g723 audio/g726-16 audio/g726-24 audio/g726-32 audio/g726-40 audio/g728 audio/g729 audio/g729D audio/g729E audio/gsm audio/gsm-efr audio/l8 audio/l16 audio/l20 audio/l24 audio/lpc audio/midi mid midi kar audio/mpa audio/mpa-robust audio/mp4a-latm audio/mpeg mpga mp2 mp3 audio/parityfec audio/pcma audio/pcmu audio/prs.sid audio/qcelp audio/red audio/smv audio/smv0 audio/telephone-event audio/tone audio/vdvi audio/vnd.3gpp.iufp audio/vnd.cisco.nse audio/vnd.cns.anp1 audio/vnd.cns.inf1 audio/vnd.digital-winds audio/vnd.everad.plj audio/vnd.lucent.voice audio/vnd.nortel.vbk audio/vnd.nuera.ecelp4800 audio/vnd.nuera.ecelp7470 audio/vnd.nuera.ecelp9600 audio/vnd.octel.sbc audio/vnd.qcelp audio/vnd.rhetorex.32kadpcm audio/vnd.vmx.cvsd audio/x-aiff aif aiff aifc audio/x-alaw-basic audio/x-mpegurl m3u audio/x-pn-realaudio ram ra audio/x-pn-realaudio-plugin application/vnd.rn-realmedia rm audio/x-wav wav chemical/x-pdb pdb chemical/x-xyz xyz image/bmp bmp image/cgm cgm image/g3fax image/gif gif image/ief ief image/jpeg jpeg jpg jpe image/naplps image/png png image/prs.btif image/prs.pti image/svg+xml svg image/t38 image/tiff tiff tif image/tiff-fx 
image/vnd.cns.inf2 image/vnd.djvu djvu djv image/vnd.dwg image/vnd.dxf image/vnd.fastbidsheet image/vnd.fpx image/vnd.fst image/vnd.fujixerox.edmics-mmr image/vnd.fujixerox.edmics-rlc image/vnd.globalgraphics.pgb image/vnd.mix image/vnd.ms-modi image/vnd.net-fpx image/vnd.svf image/vnd.wap.wbmp wbmp image/vnd.xiff image/x-cmu-raster ras image/x-icon ico image/x-portable-anymap pnm image/x-portable-bitmap pbm image/x-portable-graymap pgm image/x-portable-pixmap ppm image/x-rgb rgb image/x-xbitmap xbm image/x-xpixmap xpm image/x-xwindowdump xwd message/delivery-status message/disposition-notification message/external-body message/http message/news message/partial message/rfc822 message/s-http message/sip message/sipfrag model/iges igs iges model/mesh msh mesh silo model/vnd.dwf model/vnd.flatland.3dml model/vnd.gdl model/vnd.gs-gdl model/vnd.gtw model/vnd.mts model/vnd.parasolid.transmit.binary model/vnd.parasolid.transmit.text model/vnd.vtu model/vrml wrl vrml multipart/alternative multipart/appledouble multipart/byteranges multipart/digest multipart/encrypted multipart/form-data multipart/header-set multipart/mixed multipart/parallel multipart/related multipart/report multipart/signed multipart/voice-message text/calendar ics ifb text/css css text/directory text/enriched text/html html htm text/parityfec text/plain asc txt text/prs.lines.tag text/rfc822-headers text/richtext rtx text/rtf rtf text/sgml sgml sgm text/t140 text/tab-separated-values tsv text/uri-list text/vnd.abc text/vnd.curl text/vnd.dmclientscript text/vnd.fly text/vnd.fmi.flexstor text/vnd.in3d.3dml text/vnd.in3d.spot text/vnd.iptc.nitf text/vnd.iptc.newsml text/vnd.latex-z text/vnd.motorola.reflex text/vnd.ms-mediapackage text/vnd.net2phone.commcenter.command text/vnd.sun.j2me.app-descriptor text/vnd.wap.si text/vnd.wap.sl text/vnd.wap.wml wml text/vnd.wap.wmlscript wmls text/x-setext etx text/xml text/xml-external-parsed-entity video/bmpeg video/bt656 video/celb video/dv video/h261 video/h263 
video/h263-1998 video/h263-2000 video/jpeg video/mp1s video/mp2p video/mp2t video/mp4v-es video/mpv video/mpeg mpeg mpg mpe video/nv video/parityfec video/pointer video/quicktime qt mov video/smpte292m video/vnd.fvt video/vnd.motorola.video video/vnd.motorola.videop video/vnd.mpegurl mxu m4u video/vnd.nokia.interleaved-multimedia video/vnd.objectvideo video/vnd.vivo video/x-msvideo avi video/x-sgi-movie movie x-conference/x-cooltalk ice
calendarserver-9.1+dfsg/conf/remoteservers-test-s2s.xml
https://localhost:8443/ischedule example.com 127.0.0.1
calendarserver-9.1+dfsg/conf/remoteservers-test.xml
https://localhost:8643/ischedule example.org 127.0.0.1
calendarserver-9.1+dfsg/conf/remoteservers.dtd
calendarserver-9.1+dfsg/conf/remoteservers.xml
calendarserver-9.1+dfsg/conf/resources.xml
calendarserver-9.1+dfsg/conf/resources/caldavd-resources.plist
ServerHostName HTTPPort 8008 SSLPort 8443 RedirectHTTPToHTTPS BindAddresses BindHTTPPorts BindSSLPorts DataRoot data/ DocumentRoot twistedcaldav/test/data/ Aliases UserQuota 104857600 MaximumAttachmentSize 1048576 MaxAttendeesPerInstance 100 DirectoryService type xml params xmlFile conf/resources/users-groups.xml recordTypes users groups ResourceService Enabled type xml params xmlFile conf/resources/locations-resources.xml recordTypes locations resources AugmentService Enabled type xml params xmlFiles conf/auth/augments-test.xml
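The mime.types listing above follows the Apache-style format its header comment describes: each non-comment line is a media type optionally followed by the file extensions mapped to it. A minimal sketch of parsing that format into an extension-to-type table (an illustrative helper, not part of the server code):

```python
# Parse mime.types-style text: "media/type ext1 ext2 ..." per line.
# Lines starting with "#" are comments; types with no extensions are skipped.
def parse_mime_types(text):
    mapping = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        parts = line.split()
        mime_type, extensions = parts[0], parts[1:]
        for ext in extensions:
            mapping[ext] = mime_type
    return mapping

sample = """
# comment
text/calendar ics ifb
application/pdf pdf
application/activemessage
"""
table = parse_mime_types(sample)
# table["ics"] == "text/calendar"
```

Note that entries with no extensions (like `application/activemessage` above) are registered types that simply have no default extension mapping.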
ProxyDBService type twistedcaldav.directory.calendaruserproxy.ProxySqliteDB params dbpath data/proxies.sqlite ProxyLoadFromFile conf/auth/proxies-test.xml AdminPrincipals /principals/__uids__/admin/ ReadPrincipals EnableProxyPrincipals EnableAnonymousReadRoot EnableAnonymousReadNav EnablePrincipalListings EnableMonolithicCalendars Authentication Basic Enabled Digest Enabled Algorithm md5 Qop Kerberos Enabled ServicePrincipal Wiki Enabled Cookie sessionID URL http://127.0.0.1/RPC2 UserMethod userForSession WikiMethod accessLevelForUserWikiCalendar AccessLogFile logs/access.log RotateAccessLog ErrorLogFile logs/error.log DefaultLogLevel info LogLevels ServerStatsFile logs/stats.plist PIDFile logs/caldavd.pid AccountingCategories iTIP HTTP AccountingPrincipals SSLCertificate twistedcaldav/test/data/server.pem SSLAuthorityChain SSLPrivateKey twistedcaldav/test/data/server.pem UserName GroupName ProcessType Combined MultiProcess ProcessCount 2 Notifications CoalesceSeconds 3 InternalNotificationHost localhost InternalNotificationPort 62309 Services SimpleLineNotifier Service twistedcaldav.notify.SimpleLineNotifierService Enabled Port 62308 XMPPNotifier Service twistedcaldav.notify.XMPPNotifierService Enabled Host xmpp.host.name Port 5222 JID jid@xmpp.host.name/resource Password password_goes_here ServiceAddress pubsub.xmpp.host.name NodeConfiguration pubsub#deliver_payloads 1 pubsub#persist_items 1 KeepAliveSeconds 120 HeartbeatMinutes 30 AllowedJIDs Scheduling CalDAV EmailDomain HTTPDomain AddressPatterns OldDraftCompatibility ScheduleTagCompatibility EnablePrivateComments iSchedule Enabled AddressPatterns Servers conf/servertoserver-test.xml iMIP Enabled MailGatewayServer localhost MailGatewayPort 62310 Sending Server Port 587 UseSSL Username Password Address Receiving Server Port 995 Type UseSSL Username Password PollingSeconds 30 AddressPatterns mailto:.* Options AllowGroupAsOrganizer AllowLocationAsOrganizer AllowResourceAsOrganizer FreeBusyURL Enabled TimePeriod 
14 AnonymousAccess EnableDropBox EnablePrivateEvents EnableTimezoneService EnableSACLs EnableWebAdmin ResponseCompression HTTPRetryAfter 180 ControlSocket logs/caldavd.sock Memcached MaxClients 5 memcached memcached Options EnableResponseCache ResponseCacheTimeout 30 Twisted twistd ../Twisted/bin/twistd Localization LocalesDirectory locales Language English
calendarserver-9.1+dfsg/conf/resources/locations-resources-orig.xml
location%02d location%02d location%02d Room %02d resource%02d resource%02d resource%02d Resource %02d
calendarserver-9.1+dfsg/conf/resources/locations-resources.xml
location%02d location%02d location%02d Room %02d resource%02d resource%02d resource%02d Resource %02d
calendarserver-9.1+dfsg/conf/resources/users-groups.xml
admin admin admin Super User Super User apprentice apprentice apprentice Apprentice Super User Apprentice Super User user%02d User %02d user%02d user%02d User %02d User %02d user%02d@example.com public%02d public%02d public%02d Public %02d Public %02d group01 group01 group01 Group 01 user01 group02 group02 group02 Group 02 user06 user07 group03 group03 group03 Group 03 user08 user09 group04 group04 group04 Group 04 group02 group03 user10 disabledgroup disabledgroup disabledgroup Disabled Group user01
calendarserver-9.1+dfsg/conf/test-db.zones
example.com. 10800 IN SOA ns.example.com. admin.example.com. ( 2012090810 ; serial 3600 ; refresh (1 hour) 900 ; retry (15 minutes) 1209600 ; expire (2 weeks) 86400 ; minimum (1 day) )
10800 IN NS ns.example.com.
10800 IN A 127.0.0.1
ns.example.com. 10800 IN A 127.0.0.1
_caldavs._tcp.example.com. 10800 IN SRV 0 0 8443 example.com.
_ischedules._tcp.example.com.
10800 IN SRV 0 0 8443 example.com.
ischedule._domainkey.example.com. 10800 IN TXT "v=DKIM1; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDtocSHvSS1Nn0uIL4Sg+0wp6KcW31WRC4Fww8P+jvsVAazVOxvxkShNSd18EvApiNa55P8WgKVEu02OQePjnjKNqfgJPeajkWy/0CJn+d6rX/ncPMGX2EYzqXy/CyVqpcnVAosToymo6VHL6ufhzlyLJFDznLtV121CZLUZlAySQIDAQAB"
example.org. 10800 IN SOA ns.example.org. admin.example.org. ( 2012090810 ; serial 3600 ; refresh (1 hour) 900 ; retry (15 minutes) 1209600 ; expire (2 weeks) 86400 ; minimum (1 day) )
10800 IN NS ns.example.org.
10800 IN A 127.0.0.1
ns.example.org. 10800 IN A 127.0.0.1
_caldavs._tcp.example.org. 10800 IN SRV 0 0 8643 example.org.
_ischedules._tcp.example.org. 10800 IN SRV 0 0 8643 calendar.example.org.
ischedule2._domainkey.example.org. 10800 IN TXT "v=DKIM1; s=ischedule; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQCwwGYRfC59jFg7D8cvwdFDoxhnyZQh77dcl/1dvyMOz4leU/JSCp7M/cCh5Bgw440deV483sr1jCkyychIgBZCaPSHqItfwRgKPntnifuCzWHfv5gZE+JI+tWBXvUs4ao4dWH7VjmUyj49QfZ/NGNKKG2Xtr3In7/7qdbfg7tVRQIDAQAB"
calendarserver-9.1+dfsg/conf/test/accounts.xml
admin admin admin Super User apprentice apprentice Apprentice Super User apprentice wsanchez wsanchez wsanchez@example.com Wilfredo Sanchez Vega test cdaboo cdaboo cdaboo@example.com Cyrus Daboo test sagen sagen sagen@example.com Morgen Sagen test andre dre dre@example.com Andre LaBranche test glyph glyph glyph@example.com Glyph Lefkowitz test i18nuser i18nuser i18nuser@example.com まだ i18nuser user%02d user%02d User %02d User %02d user%02d@example.com user%02d public%02d public%02d Public %02d public%02d group01 group01 Group 01 group01 user01 group02 group02 Group 02 group02 user06 user07 group03 group03 Group 03 group03 user08 user09 group04 group04 Group 04 group04 group02 group03 user10 group05 group05 Group 05 group05 group06
user20 group06 group06 Group 06 group06 user21 group07 group07 Group 07 group07 user22 user23 user24 disabledgroup disabledgroup Disabled Group disabledgroup user01
calendarserver-9.1+dfsg/contrib/CalendarServer.png
[binary PNG image data omitted]
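The test-db.zones file above publishes the DKIM public keys from the `conf/dkim-test*` directories as DNS TXT records named `<selector>._domainkey.<domain>` (e.g. `ischedule._domainkey.example.com.`), which is how an iSchedule peer locates the signing key. A minimal sketch of splitting such a record into its tag=value pairs, illustrative only and not the server's own parser (real DKIM parsing per RFC 6376 also handles quoting and whitespace folding):

```python
# Parse a DKIM-style TXT record ("tag=value; tag=value; ...") into a dict.
def parse_dkim_record(txt):
    tags = {}
    for part in txt.split(";"):
        part = part.strip()
        if not part:
            continue
        name, _, value = part.partition("=")
        tags[name.strip()] = value.strip()
    return tags

def selector_name(selector, domain):
    # DNS owner name a verifier queries to fetch the public key
    return "%s._domainkey.%s" % (selector, domain)

record = parse_dkim_record("v=DKIM1; s=ischedule; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GN")
# record["v"] == "DKIM1"
```

The `p=` value is the base64 DER-encoded public key, matching the contents of the corresponding `pub.pem` above with its PEM armor stripped.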
òöÊonn¢×뱓@v¡ÏFõt%ºŒ*õù|½ CQ=Qã<ÍOîû>søIy¶µZ­ƒ¢zpZMÔÑ÷y•ú|_©ßjµX¡¯¿ßØØ`ýë©¢Ÿô÷Ôhgvv–MÊ£<=mÛ4͔ԮÓncïà;ÛÛØÞÚ‚—Ô;äFõyR;qÐì&;ÕŽèw Š,wyø•´ÔNCj§* ö÷Y^2솶QdÙȲèùù$^üæ7áû>677qíÚ5ÎÀ ªo6›,²ãOnz¾‰ =uߣ穩 E‡4@‡rúDß{žÇràÙŠ~,ª§Â<¢åé5öÔSŸhyê©ÏÓòÄ(ðÎë爫Îêêêþ^QÆ }O£n=ÏKIò¨¿>¨ªŠv§3© ¤œþ÷iº^Híätóœq¤vNý„¾w¹©vÙí£‹ò²÷ùß-9¼•-§íg)úQQ¾ l¸ øgiªÝqب<QZæ¬9”b´m†ÞCàòårÇ.JŠåßE¸D ðx^'–(?3‰}Œà{Ïx®ŸDd±ôÏ÷øàÂ8§/I"Š è®+WðÌÓÏBSZ°¬¸+`%Gv–i°IYÞgµ¼´QF$aF/Q’ðÉO~÷Ý{/AJÀ]©TXŸ†,Ø0óT:ß:—•×éÓ-Û>—¢zZåô)ª'°WUÝn7ÕSqq±°§~E©œþÊÊ ªÕêPO}^Ï÷Ô§:~R^E,úŸ››ÃÚÚZJ¾gËÿ“ —ÚmïìàÆq*(—Ú%€ÏÓöÙˆ>¯(¾ßžu=¾ç# Hbßsø.¢ÀǨJÀË—/!`™ sòä€S;œ-..æßÈ3þÉIØj·ñÈ#@Å”:ƒ¦—‘ü*Û>—ŸkÏÓ÷<L3Qó4˜†œÚç`ÔšçyÐí6­‡än•Je¬J}’ùQô¯(qá© ¤{êŸ;w²,§zåg'åQU?ßSŸ…­­-8I›Øn¯‡n§‹ƒö`ª¢ªCQ½œ)²ºåTágsôãDõÅSí¸þ÷•´Ô®V«¡’°qŽm£up€kívLá÷zèg¤vÙßÿÛU”W–«'Ëö¼÷—ýîGÑõÙe’$áÒ¥KLjG@OìÓÔŽfEפ¼ÁiKÐõŽ€~÷Aàaqq ó%ï @xq ¸¨H2\'¦ÿ×ÛþØo|±;‰}žˆÐWü¯¯®zëy’ç$A·:+…uw]¹Œf£Ó4 *},//³ün^1`öñÔòM’$ÌÎÎæHóèÿ(iÎòâ‹/â<À$y¨Ôb— ó(ª'P&€æOß÷¡ë: Ã`ÏSÅ|¥RaԽ뺬/ïÐrh¦}»ÝNåô)¯N•ú|O}rHø>ýÍf3õ:š”G…~Ô@§Ùl²:ªÂ§nyQ%¹û>Úv ô© ¤vù³ê9ºýHR»´6<ÛGŸœéA®>z¹‚J­Ê€>+µÛÚÝE;©Àï+J®Ô®ÌŠŠò€a/£ú³ÿ²|=¿, ìãDõçÎj sîܹC÷ÔòmT à4EëÕˆ-Ï÷Ô§:‚ °×e'åñ…~F#Õ}¯ÕJ§¦Xÿûƒööv±½µ Ã4c—˜ÓÔóR;>¢çþSD>¾Ô.?ªÎÝóR;ÊÓWÙg IíZ-tz=V˜7ê‚\ÊüóeEyüû²ÎAöùQç|öó)k‹ ÄR»ìøZꔘw¬Sϲiþ»)Z–ÇÞœ¶@Öè·J×Ó4¡«mD^ûÚ»KÞ%=røž ߳㮹à{>\ׇãú_Ô~O¬ªÎ ¥ëŽë=èz>æ¤Y¾ ßs5€„PÒð¾ûîÁsÏ=M9€išp‡uL£“vÊŒ6þsËË˸zíZÞ+sßsÓ"DQ€xù•WðÆ` O9/¾}.u#ã øx©ëºè÷ûÃ9ö4‡ÀžòïÔ¼‡R¼ÔޝÔçœòÿÙIyN§pREï$¤¶¸@Zj·—HíöööÙgß©]µÂ¦Ò”Úe"ÿñ¤våSíä$wŸ’ÚÕj¨UãÏš¾î½Ý]ìl¤vý~?5ÕŽô< ={Þ•å Â@^—ùg×WôY¸Œ¾ÏFø†šçlllJí¦vsVTË‘¥ñóÈÓÆEÿÄnòRb×êCEÜwß½#Ö"b¥œï;dÀq=ØŽQ¬¿ˆøŒ=v€›˜‰Õ=×q“ ~"Bø®ßsB„PìÕ½á ÷!àÚ:“CñÑàÔŽfKKKH»Ÿ× ùG''bªÉãÍÍM¼ñXßxpÒÁ-O`OSèx§€ºê{ðl¥>ýÓ>ªÏ›”Gmq³ú{*ôËVê[–Å™~¿N·‹ÝÝ]ìîîb{gNR!þTDQ@¥ZI ÍßåM¸GjÇ?Î+Êã¥vÔ)Oâ$Õj¨Öꩪ µ¿? 
ï“ ü¼ ZYŸµ¼ùôe ïä-/£÷Ë€H<-Ÿ››KUÞÓí´G”w’Jçä9§©€¶O€išè¶wØØ¸ˆf³QøÞ8ÿ"ô]xžϱ!‹<߇ëzp€øÄ¤ö}b€­™¿í4ðS®ãÂsH² ϵã4€ï&uraÀÒâ"Ñë÷Ñë`yy™ÒÊýÝÉ–AÅRÀ’&Übq\EU¡( ^ÿú×CL  ì)ÿM'åÕkµk CÑ?/µ£”?§¨§>¯éç‹íò&å­¬¬ Ñhä6Ðá¥v]nªÝöÎÚí6W}ÏEõ•*š9O€.³êÏÈKßg£{ÕóSí2 tb©–Ú%òzÝnªOûŽsΣ²s(¶Ï^ˆ‹èý, œ—ûÍn» ìó>GšhÇåÍÍåÏ`Ÿ^+N—Ñ÷Q”8ƳU”Ê´,+žÒÛG¯ýëF¬%Bzð|¾g!ðmT›UXº ÇqáøQÿ—~ý“û“:†‰9ÿü7?ýôoì{mÇõêžç¡ÑˆõÖžk%ZG’\E™*ý ܇'Ÿ| Joº~ ¶m£Ùl²4@Ö¦i€Ñ¶´´÷øÏh‹¾…¼Ïùï¿ñ ,,,°b»jµÊ€žNŠÀ©à^GEtÔSŸ¢u]×™Òƒ€žzêó“ò4MC«ÕBE,OO“òdYNEõÝn–e1ù_·×Hí¶·qãÆ4… ÒÔW!Šhæ<9ËÏó ÏEõÙ |úüÆ—Úů¥‰v|¾S/µÛÝÞA§“´Åí÷‡¤vY'#kyÑVŸ÷¾ìЛ¼u•­{\ Ÿîçͪ¿xñbîñMítXY±_Ѳ¢÷ß*+ÂìC€k©"<ð†ûJÖçÿƒÀGà; ¡ 7¡ÿ=?zj2GÛD;ë¸~ø´ã¸os=³³MøžϵàQ¡CBÁÏo¸_øÂSpL…UŒS€¿¨NA|[L€[ÌœŒ‡±ÝÝ]4 ÌÏϧ¢u^jÇkõ³•ú$‰ãõ÷TýŸ”çy^JO“òÈÛ¶m›½›´‰íõûlªIízý>Gß' œ€nLßóyúôÔ¹Tc•Ú Šò*² 9#µ“y©]Òÿž¯ÀÏ“ÚCß—Ðò‹X¶QEzeÛ.jœÃ?¦û²,ãâÅ‹¬ï=ÝšÍfv¶¬ ÔÇ©¸•–=ϨØÙ0 (ýßB³ÙÄ]w])\GˆB¾ ϱà»6$ƒë¥í²íÿÊÿ“uôÿIJœ·¹Ž? I<ÇL¨QÔDI…i€{^ÿº¤Õª…~¯ƒ¥¥%V&Ë2Óš§6:u˜e©Y ¤Å…t:#N‚ àSŸþ4Þ÷øã©Q·T©Ï7С†@DáçUꫪšš”G©ƒóçÏC’“òø¨žŠõz½ÚIT¿³³ƒ››CƒnA`òŠæ¾(/§-nÔ1Z6Zj'A®TânyÕ´ÔŽÖÓ:8ÀT*BÌû€âè)ï\È£î‹^“gy}^.–wŽèqY”ÏϪ'*µ»s¬ðùçŠîŸ„c0*ú§®¤º®£ßݰåµ&Q<íÏsãÀص W*ð\¶íÂvÜhë õéIÙD€N_ûøêÒ̯۶ ×ó!K|ÏŽ½/n ,–(*• xà>|õk_‡ÒÛ…¦­Á0 6ž·i-@¾åEhËKK¬ ÞQLlnm1€ªª)ý=Ÿû'ý=ß-/«¿§:ôü¤Ÿ²–¸üòz½Î"yŠì×××OŬú©Œåý¾ø`Žê²’ozÝ­¶ìþû¾Ï:|*ŠÇR!Âÿ©d-¢0Œ›ÿx<ׄçÚ¨Í6ajlÛ…åWÿ×ÿãé‰4"›¨ð¿÷åÍßøåïÞµlç‚ëx˜›­C×M¸®‘è=D2E?ùNÀý _ýê×áÙ*4Mƒ®ë¬0¯)0eF)Žn¢0“_ü"¾ã±Ç+C4?5Ð!©]^¥>9|[\0 ½^/.ÊKfÕïììÄ[Í€JµKíQ½ÀºØQdO¦+è¦w R;9i…œ–ÚUSjvª]?é8XfEQM–vÏÞ„a©]Y_‘sÑ—Eðü2ºO³êù¢¼åååÒãžÚík£X£,“u’~™›û×4 J¿ƒ(¡ÿïQ%ô¿ïZðQè‚ÇqaY6LËùøDˆ³IO×LÃþ–eÿŒã8˜››Aà;pmž÷« ” zÃý÷¡ÑlÆe½–––`Yë07­8¼ÑExüÏ+K-‚$âÅ_Ä[ßò–ÂIy4h‡ cÛö!¤vqT/—Fõø"$‘ëOÅ{cDõ!EÕÿiÍýLœðP«Õà9\;vB߃(÷ø—$ o~ä!<ñÄ“0ÕTuº®³4É‹r™wºåÃÒX O„ò‡‚UÓ°³»‹·¾å-ìD œ~«ÕbQ®¦i©Ýþ>vvvâœ?tC³ê®^ªQNŠn—“åBLí‹}t©Ý`ªÝ«WѦ¶Ijcœï‚ÿ éÈç=ή§¨`¯ˆÂÏs2FE÷üòK—.a}}åë×××±°°0ÖñNmj¼müðŸlÑ)=R,AÃÌGÿJ2ìÊs ðío}sÉZ#DÉðÏ3á:<ÇDs¦ÝtaÙ6,ÛUþõ¿ý»çP&“;›h üÚÿþå'ÿ§þÝŠaÚ ¶íb¶Qƒapžg" æEô…æë[ßò0þæ‰/À÷LhšUU177‡F£ÁŠ»€á/kÊ äËËË9'KÎg_²ˆ¢ëÏ}þ󘟛c'C¿ßgEy[[[ØÚÚbƒ ¯¥Hs¨%®˜ÎÕ‹bF[/¦åu’8ZjGEy‡—ÚmÇQ}¯‡^¯5‘Ú•]pÊrõüóã~67?Ê)Èn· Üó(ü¹¹¹u§ÍªŸÚä­¨¸þVZvŠ¢]סª*z] r°ºº‚×¾ö5…ë" äñ ×6àû6qŽ­Á4m¦û 
~Ó8ƒ*f†eÿ…eÙ?êØ.æf›<%NØ&¼¦9l ŠÕw]¹Œ‹ëëØÙ݃ÒÝÅÂÂ0;;Ë:ŸMÁ¾ÜøÏgnn²\IF5s'Úx>[Ÿ(ŠØßßÇÇÿôO±·¿NªÕKRq[Üœÿìñô½˜-MjW¡™R»vR˜×IÚâòR»,›úì2–ØE9ù2€__øg# <µBö1}Ž¢(bccƒÑöT”w§ÍªŸÚÉØ¨ˆ=ï|Éžo§!E±ôÏ4A©ctøö·½eÄZ¢¸õ¯gÁub V«Âõ×îCQÌÏϧZóùÆi* my€³´´ˆƒVkx:Û!Ö+&QöÖÖlÇÉ‘Ú ) gn8' ø‰Ã@û(J\‹\LPj×ïÃÈ‘Úñq0óŸ+ÿ8 îyzYÑÞ8NƨÂÙÇËËËX__Oåë×ÖÖÆ:¦©MmRÆŸ¤È^³ø´R–Ç&óûëû>ëùßï÷¡ô»ˆÂxŒùÛ¿ý­%kTÿ»Ž ×Rá:ffš0L¦åÀ²=í÷þì•/Mðð˜ˆð§þÊ—þËÿüœjšÖ¼m;˜™®ëp¬8g4æ ˵Ҧ@o{ë#øŸø 躥w€……6¾•`Jýc‹KK8hµŽœ`"¯T*¨ÕëìÄ¥¼=ö—Ó'?®ÞÏŒ°Ò‘©|\R»V Dj×íõJ ]P—ÝQôeLÿÞì{x' ŒÂÏ[V©T†&ÚMgÕOí,Ú­Žøað¢ˆéþUU…ªª0Ô} Šðm¾óóùí§ã÷aäÃ÷l¸¶ÇÖú.q–¥Á2,è¦ó‰n·;Üåkv"Àµ~?0lÿ†aزÌÎÎ <8–ÇÖQwmT*Mbq R©àïx;þòSŸ…cvS,@½^OEƒEéÔ¶²´tÓ‰%1Éå7›M‰Ž{¨ÿ={>ø)šþ¤vDjG2ÄÃXQ=˜‹Úßæ9 ük³ï¥×ðŸ OáÓ²"Ð?þ|ªÎúú:VWWuÜS›ÚY³Qõ7cyë¢ó¢&ûS(Š‚À· x÷»ßY¶×@"ðâèß±5Ø–«¥¦eÃ0-¼z£ý›˜pñÙ‰8ð«­ß\š¯}Ø0-Ì9N|ЖÇÒà:ªõYˆ² APtì=ú|úÓ0´¡ô»˜››ÃÜÜcø4À´ °ÜhôéÍ|&‚ ¤:2G¤i“d.GŸÈêò¤vr¥3 IT_­V ¥v4Âö¨€²=-/£ï‹òõy÷³–^—7?€7 ô£fÕOmj§Õnö<´*üã»þ©É`4­¿1òqåò%¼nDñ_|®¥Á6¸¶ŽÅ¥eôU†aÁ°¼Îïÿû§_œÄ±åÙ‰9ôgßxù¡û×[†i³-K‹s0;-Øf®½ ¿±©RƒPR 8??‡‡úøê×¾Ko£ß_`ÕPQW{';Y »ùf@y‘iFj'¥){ªGjç{Þ@j×jÅQ}2E¶ÍïǸŸÿYð€?NÑ^Q_Þ6ò?Ÿñòõ^\\L5С?ÏîÄßáÔv³àvØßÏqnï8€9›È.+:§ùçËøõñ?Qÿ½^ºÚ†$øX\\À;Þñ¶’µÇÒ?ßÛþÚ–ÛÔÐh4aÛLÓ‚n˜PÍèD´ÿ¼MÒÈ~;6þH7Ì¡éfg4š³°- ¶©ÂÕQ«ÏB’«¥’À7Ü/î~ͼzõ:l½…~?ž)Oóç)¯ÌWnò6eb[^^F_Qe/%÷RR»Ê@jW+‘ÚµÐévÑëvÑ)‘Ú‘e/ Ey¾":Ÿ7ê4˜½XÐ6òÀ?ûºlóœQ€/Ërj¢ýtVýe'¡w~û´ìGÞù•w¿hû£ÒƒãìCÑ~éÂ?EQXôù:"!Âãï} rÉÜ‹¸øÏ‡ïÚpM¶©ÂsLÌÍ­¡ÓS ë&4à ?þ™gþ°hG:°1l@„a䦈>ñ™gþèCxã¯Ì릠&Î[†¡íÂ6û° µÆ<äJIÇŠì{¿ç;ñ?ÿ/ÿD8дØ+k6›h4qÕ8WLUy¶¾¾Žv§ËîPd_« òôÕj•}™yR»QSíÈòòóEì̸Q}QŸ]Wv² œÂ_]]*̛Ϊ¿=ì,Òè7»ž[E«g·Ÿ=óœþöþqîKÑã(ÈþTUeÃ,½9‰þ}çÛK¶!Š’Ü¿£Ã6ئŠJU†ízÐu ºnÂñÄg_zi?{10áZ€I§¢ìý§_Ü1>ø‡¾®ëæÃ†³õz¶¡Âžé£a/¢ZkB”*#Y€{ï}=^|ñeøvŠÒJžœ§tîTÐÏ;ö¥Å%,//cv6fN( Ö­½½¸ú>™Sßï÷¼=ž®§Çdy _Äd×™µ¬Þ>{Ë.§Ç@ øµZ ¸¹|ùòTjwl à·ÎÆÙ¢×äï8LÀ$-o{|¿¢þUU ‘|à»Þ7bê_,ýó=¶¥Á48¶†…Å(š ]7¡(zšðôœP p25Y' ÒàOÅÈ~xnÞÄ̬‰•¥yô;û0êÆªõ9Èòhà‡ð{ñ?þêo@‚MSP¯×Y1`…kÃGŸEàr'X<ççç°vþ\×ÅÞÎnlnâÙgžŒä-ò¼³ù±¼‚:jo›·Ý¢ý)rÒÊòø´oã‚=ÝÖÖÖ†¢úé¬ú³gg=~³ûp³6ênäËŒwêùs”Îõ¢ -{¹+JÓú£(‚ëº ü;úý>l£ƒªècmí<Þ9*÷|Ž­Ã6z°Œ>“þéºM3 
;aøÅ¯½úW8aðN®0uPŸû»oþÇï|ôžiª!ÎÍ61;ÓDµZ…m(°Í>3‹¨Ö#Y€+W.áÍ<„¯|õëˆ\ªÚ`irHŽ–,²;™XYYÁïüÎïàé§ŸNd^µÉØí˜¿;?ÌëŽjãäíÇeë&µ?YG$KýSô/F¢øÁïÿÀˆÊÿ$úw 8–KïÃ6Ì-,B7LhºE3¢ò•/ýýUžbpBÎÀ‰ª‹¾òëýï|÷_ï)æ#³sfgg°´¸€~{†ÖE£¹€j}’Tƒ ÖK¿øøþïÆ3Ͼ8L½‡^¯†F£Áj(ÀÛ´ 6Qñó?ÿóø¥_ú%´Ûmþ¼ž—//ŠÜé~M_´Œ÷øaÚ¯àiMµËJíŠfÕOíÖØÀÓv\öIøa¶=ÎóyŒc™30îºÇÙŸ¼u„aÛ¶¡iºÝ.:N\øçö ‹!^ÿº»Gêþ)÷ïX:L­ÃèAB„¡é}¨ªvׂÊŸdoºRÙƒIÝ7-÷³fè=²¤™Ðç Ì4ë¨Öª°>L£‡ZsÕjc¤"àü¹U|à»ÇÇÿü¢x4c§ÓA­VJ°3"½Ý,ï¸çææð ¿ð øØÇ>ß÷YE}^C›¼õåÉv²4]Þ²,¸—åþ²†èû‹/–óíl§ÌƱã¢Ð“ÊŸÄú»®[}<7ð£¬è;ÏFöE²À"ö6»G9׳ƒ?åý üûý>L½ŠèA’%üø?þPéz£ƒžcÂ2XzŽ©bfn1!lBQMDbÏ>¿õ×þ¢ûÇj'™HݾüìõO¾ýÁ»þÛžbcf.‡¸¸°¥»Cë N,€L,Pä<þÞÇð·_x ­6\«E© ^¯3€RT ç›ï èï½÷^|ô£ÅüÁ°e”oÏæ¢}ö½yÛ/¢õò–K’„TdñâÅÛNjw–†õµó%[‰%…Ž­ÁÒ;0ôˆ CUUhªŽVÏ(VŸþ«'¿µ  X«Í.êyT« ˆ² Q¬ èû–e úÐâ·~ûßB]ºŠ^µŠz½Žz½ÎR45°Èî'È?Î|àxá…ðÔSO¾¯LI‘]–'«£móiÙÊÊÊPT_4«þ4ÚYñ)€Of›‡]÷IxÙúnÀºý<Ð矛øg·A÷}߇mÛ,ïO²?ÏQP,,.à{¾ûñÒíDQ„0ôâè_ïÁк°  KËÐ ª¦CÑ,˜–I>Ë¿åLÀ±Û­(di™Þ§üÀÿ¹žæbvFÇÌL«KKèµwafX±*£lFÀ›þÁxóÃ߆¯~íšVKM“£ÞDo“eíNqÈøãý¹Ÿû9ܸq»»»ìù2Iÿ? ðEô=?«žšè¬¯¯£–Œ¾U6ð㵓­Inó0ë¿€?®ý8)g½7ûYr ³¾Qï¡k Iþ(ïßn·ã¦†Ž  ~䇾oĵ*‰þ]¶¥ÂÔ:°ô.dI„ŠÐ4ªf £¨Vd¼x½ý9ÄÑYÊ|bv’i»½øêÁ§¸gõç4P5Í™š:Í& ½‡Z³Zs•j¢,Cª…,üè‡~Ï<÷"Çm«P”Áüøj2|†@(ËÜ)òÀ²ãm4øÅ_üEüò/ÿ2<Ï‹¦/{nmm ëëëcͪ¿ÙÏû8ó”'i·ÀËŽù¸×}žMEÆ&àÇùž›ùlÊòæ‡ÝãØ§²ßFv£XÚq¶¢÷ÐõŠj—x½? 
&SUB AB<øàøö·=RvdˆÂ(–ý9fLýk˜†‚Å•5(ª‰¾¢CU-–‡zEzù/þú¹—Ø›ùD# Ü€èSO¼ðâƒ÷ªr?ö~¨t[±ìχçY°Í ­Cï¢^«ÃñB(ŠU5Ð×#ø¾Áß Žþ)ÿ_„—À„œ‚“.Ì>ŽL'øË( \³˜QMÌÌèh6ê˜[X‚®õQo¶QoÎ'Ý%•rYàûÿ|ýÏá[/¿ 1Сiƒ™ó|A ?6ÈÏoßiNó»ßýn¼üòËxâ‰'†À~cc#ö/^<óR»;Fw]·Ò¡™øá¶y’Nóqÿ~»ï“^ïßn·SÔUpøà}/ΟÏg/c#ÙŸÇRaªèj®©añÜÚª¦£¯90íYÀµ>UÿGàD¥€'U˜çÕ„Âí½ÞgïÚXüqßó š2šŠŽf³Æ¹eDa‚ñßÿ:à8°] ª;•J…±Ôú•lïy`ÑqòË?ò‘ÀqÌÍͱœý]wÝ•jü“]GÞã";-ÿi¡Ñëó8­4úYðÓ”Vàí0צIý+ 0jÝ7˜ñL/­OAÛ¶¡( :£þ5Mƒ¥o¸ÿ<þÞw<ž0ðà:L­ MmÁÔºhÌÎÁ´]¨šUÕ¡Ú"ßFµ"íâ³Ïÿ=Xx¢¹²I9bžžÿÏ?;Øÿïo^úúñãoW½ÀŸwýÕB£¡£Q¯aaézÔ-Ôs¨ÔšIA "ŠR««+øð‡~¿ÿ‡ŒŠèÂ2 (ò€ àY~j 0UÐ}×uñÃ?üéÝn—›(²‰‡¡ñ&±ß§aݧÀof·kü4}GµÓà“²ãr"ó^›e3y½¯×C«Õb |WC]öÑl6ñÿäÃ¥ÛŠ«þ}x® ËT`¨mjo£2¿‚n«EÑÐ×^„ pá@z9iñ¼Õ}ÐG°“ì˜eB¡çy¾íøŸ„†aÓ“ ªZ, ¬ÇUüºÒF¥6³•DI†$³ðè»ÞޝþýÓxú™ç!Á€®Ç EL±‚  Z­8^ó,§ã8°m;7º‚A0Æ Š¢”S@ùIŒ£¶9I;)êû8×3¥ÑOî}§!~3¯=ÊëoµMúøò¨ÿì-¸® Ã0Ðívã‘æí6Em™¨Kà£þa¬,/•l-¢çÁ±5˜jºzSëbay šnBQtôU¶/! T$›Úg‘OûrŽÕNr =ÎÞ½ŽöùKçæ>>¨UsÐhè¨×k8·²‚~{ºÚB½‘8rRÑ_Ò!~â?ý1üÊ¿ü5hšÛסëb ü ´A`ÎÀêÐñù¾]×YKàØ» Ðg»ýƒ<¸®Ë–C@7J½v¿Žóu'±ž)€ÏûÆyïiŒÂownMMÄQÞŸ~ Žü}߇a¬â¿Õj¡ßïC×uÈ‚ ›ù6¼ýmo.Ý^ùp–ÖªìCSÚ¨T*p}Š¢CQ4V„0 ø>$IT?û¹ŸÂ-}ÞÆ/Ó>¼ Üþ&r7 €Ôêií‡Xÿ(€Š,WHˆr%ná;3Ó„©ö Ê2äJrµIª&ª€b' ^¯áüùU|éË_ƒ,†°] ùŠ\ù”@žÝn5EÇEEa€Ï×”­#ï19AÀ÷}¸® Û¶aY<σïû©”CÑí(6I/Û×Qû|”÷å³(zÏ8Û/Û‡£Ø8~ßÿqÒÍ~¯gÁNz?Ob{ü6è~ö{ნ^¯‡ƒƒìïï£Óé$ÝþtT¤ óøoþ«ŸEµZ6ê7Öü{Ž Sï@íí@ioÃÒ»˜]¾€^_C§ÓG»§Ã d„A×1á‡Ñç¿øÍ¿àøÉÿV€O•OÌN:@ÿCîØéX¦ãOHBøþz­Hh–‡ZOC½VCíü2DI„Þo¡Z›Aµ:(• xóÃoÂûÞû>õ™Ï£*Ú°L±0-BJºF ÆGÿ·Pt º®#‚T¤Ÿuøûa¦N¬qÒEl±3‡‘æãQž»™õç¶Š€ø¨6I?;n€¸UÎÃIÚid?n•ö¹ `š&EA»ÝÆÁÁÿÀ3Q—}‚ˆŸýé‚™™²–ã1õï{.l[ƒ¡¶ öã6ös‹«0tŠ¢AQt¸¾ f „8h;ŸÆ0З±œ“Td‹3N€ñÄùÕæû?D¥!*P4õº†z­ŠååsPZ›Ðûq Ö€$W ˆ3¥ªøÐü®]ßÄKßzrdÁ4%ñó`#êõ:{Žîvt²fY\×eठA†¿(%À[¶s`‘…aˆ0 áy^êõü÷sÔ4oG¹ç{NšÕç}§ÄOÊѺ•6ðã±2ðç¯TñOr?*úSUŽc¡!Åɇ~äûqÿ}¯/ÝfÌnzp¦Ö‚Ö?€¡´!J¡Š¾ÒE¿¯AÓ]@¬! ‚„þ—œÏå•¿EøO,âÏÚ$SÀ0ýŸM°4€a{û_wî# U*Uˆ²ß $YDµRC³Y‡¡õ H*r •j²\‡(‘ž?ÿG.Š"Þôàøâß}žk# }¸¾x“yò?¬ÛM"Èï»çyÐu@ôé1Y¥Lÿyšm}:. 
Mû§Çëºlr!G™Spœ”øQÞÿï(6N~XÚú(ۙ亦4úéÛÞY°qh)¹_«ÕÂÁÁâ)¦‰ªhB#¼õÍá#?6ªáO„0ðê¿ ¥³ ¥³Së`~å"ú= íN®/’ ˆ"‚ €e[üðïžøò?GLÿûÉ>p[à €ò<ŸðÚV¿ïyÁ—…Ð}gØð!G@– [.j}µZ kç—!JZÂ4Q©%ã‚Å…¤APä,,ÌãŸýìOàWÿÕo!Š|Xž ÃRi¾(°Hp»0´ïQ¥À_Ùgî0N€"‡â0ûš]Ÿïûð}?õ\¶è¾Ï²uuŽó½ÓüÖØÀ϶þ‚ °ÿ|äßjµ ( LÓ„ ’ââ…uüä?ýȈ­& <¶¥@Wö¡õ÷ )mÌ-ƒ¦Ûèõ5(ŠÛDI@†B`¿oýÒ Ï§€Û¤@Ö²¹¡€ ÕÕ?}ayæU "¡j:jõ jµ VWΣßÚ„ÚÛC¥Ú@¥ZOœ ’T>+àÞ{^‹ýG?€ÿëÿþ8ê² Ë• ëBngÀÙÙÙ±Z~M×õ! Ÿ?&:™Š¨þìñç}yÍ–²ï/³¬£è}Ù4½ŽOËs;‹~ØõEpšøi‡a"yÚ¿Óé¤"Ã0€ÐDEÑh4ðóÿì§P9è'B๰m†Ú‚Ò݃Ú;@¥Z…UÐï÷ÐW4¨† A!Šü Vˆ’è|ù›|ûß%=rŽñ#+´[áä‚?€ð©¯o}îƒï{ƒãN- +%‘$ÂõCôûZÒη‚ù¥µøC¯ÔQ­5âæ@’ QQ.=Qßÿøwà[/_Ãß}ék¨Š6Ð’¨7†Ç  çÏšEêõx3Ic¨:Ÿ}²,¸Û0»Žq/¤Y§â¨à<Ç ¯èðfj nfÿŽk}g NrŸÏâç3µ´ø÷z=躎зP—}DðÓÿô£¸°~¾tÛ,ïïÆÝþÔÞ.4e¾kbnå2:G}ôì IDATý¾ E5FdY„˜ð]®ç?µ¹««Pýãäÿo›F@dCôr ¶ötÅuý/ >¹ I‰€i;±P© RYF½Þ„¦´P©6 W¥*DA†\P60ˆ¿ìV«ƒW¯^Gl[&¿™™™¡nô<ÿ?»ü,X†‚€ÑþÔ#!Îqàäsí@±³S¶œÿØvÁyŸ{ÑkÇ5R"d×Á×PÁáa–£îÓÍÔ­øíô>ßÌöÎÒ¹9µa+ ºò‚3ß÷S´ÿþþ>X›_ϵЬPÑß÷áͼ©tÛ,ïïš0õ´þÔÞt¥ƒ…• P5ݾ†¾¢ÁñÈ¢Q„!|ÏCzh÷l¾ùϨ@'òct €Â~¸"@ºùâ|u¶!½§R©B–%ˆ¢ Bž¨•%ó KpL¾W®Wª5H•*$©QKë$I›y¾òµoÀ4uQÇ‹A4ðDäŽóø4ožîŸ2×B™WKðŒ P^˜Ç®do|&{;ÌºŠ˜œq^G*:ô<ŽãÀó<öYѺòÞ?Îq–Ýn…ÝÌþeŸOz{S»=ì°‘¿ã8Ð4-ùÀßF]v!Â{ßó(>ô#ß?bëIÞßµa=¨ÝmôZ›Pº»h4çà†2:>:Ý ÓfØ!Š‚0€c9ˆÂÐùÿê•i»¾…îŸúdõÿñFOÀNÚ ÿE΀¸ß2w¿íþs€JµV$%_0€ |Ï… ˆd KK0ú-Dã&ArìˆR‚(–:µZßö¦ðŧ¾ ßs€(€© =0Bž"¾]œß÷‡À?€ý¹e@r²eƒJ¥Y’†j(Žza?*(äÄaA¤Ì[âyS#d•·¤¦>µÛÝþÔß_Ó´ÜÈßulÔ$¢ᑇÄöSÿxÄï2J(|¶­@ëí¡×Þ„ÒÙFú¨4WbðïôÑW @ ¢r@×õa™l'øÜ“_ßþ$ÒÕÿEÍNŒª:I€îÓ-+H¶ëGo|ÝÊeYÆýUb’W PAE2ÈgfvšÒŠFª@ªTãt€$C¤Ò/wvf÷ßw¾ðÔ—E~ÜÕ)RQ1A%œ®ÔAŸ‹ÇuãKQkô˜–qŽ=GŽBÊ)¨TPMØ‚l«å£Þ²ïŠA¨Œ9˜P•Iù!J£j ¦>µ©åÛ8àÏ;éDûw»]þNš¦Á¶mT%ÿ?{o$KV ~wq÷Xsy™o«ªW+UP EQPTUl’êžÑ´º[C[Ô€$BK·¤1›1µLcšiÆFm­îiµÔ²–ÔH$@ T%  ¶·/™±Gø¾ÝùqýzÞðtˆÌ—¯^¾Ç;fžáááîá~Ïw¾ós¸åæcøé¾wΜ%"«8 xØ£ »g0잆çŒÑ:p=úÃ1zÝ!†ÃÒ”€òÌ_P‚$?DxáÌø_¿pftÓ@•Ꚁ«”EþÛÁkÔyÿèZí(¡0-œeÑIVæWe‹:¿ÙF¦tÓ‚|к f!šöB¥ ÃØ:ï :𲈿ì8.– ¸Øˆ]·EÑ¡®-¸Ø4Â5»fûÑæ9}QŒ>±Ï… ÐétÐï÷á8ŽŒü‰¤ýMÓÀGúÇqÇí·ÎÜ!ä qèeŠÿsèwNaÐ= ‘İZ‡ÐíÐë0Û ”qÎdÙ!@œ$ð½¶ãbâòÅ'/| [οŠxÑéàò3êõ¢€…q*n;Ön›÷›†Ã4À¹d(! 
@d—¥(Š@çÚK˘ .HA)ç¸!;Ê郋»3m÷¾üN8®‹çž?Nã¬2@ä ¨úÖËâfÙ~Ó«ªlOègBfœñí©‚âÿišNU$PJAµÎ~S à"›ùìµí•ƒÖAA[P¼.¯Ù5»RlÊ__G5õr]ÃáNùä>Žã Cpâ+çÿ¡Ÿ;Á*÷‹£¾;ÄxxƒÍ“tN#<´Ü€~Öç4gâgš9 Æ(ÒT #x®‡(ŒÒ'ŸéÿÏç:î&¶‹ÿ.;ý¼x˜ ÊÒ¬;Ÿ¿óæ¥H1-Ó€™±ò‚¢„ #ıŒúM«Žf«ñ !7Á¹ JùÜò@xŽwa4žà…NÓqJ‘$Û™`KèVT¼çZ¸˜‹¯_.+6÷YĶ¡ðìõ™Û©pþ„ [È €Pzž = ʪ5ÔR¬¸\Ír.†˜eר‚kv%Û"Î_Ý¿jŽ’0 á8ƒANû«Èßu]DQ$sþ™óÿÐÞƒ»ïzéœ=HÓD–ûyCL†0Ø<…~÷4\g„¥õc tºC cIUs Î8g`4#¦ <7ÀdlÃöâ¿üÄÎü.¶;ÿ}Aÿ—’çº 0:^”Üvl©ÉizŸi0M4C[ÊiLP%§«µ-Xƒ;éK¡ ã Ü©ÆAÈ|ðÊûîA¯?Äñ“§s& [äb­„NÏMiä©°@@Ç}µÍÚRæØ«à†`‘õŠŒŠ¢C=• —sêû³—Ôÿ¢éKµÝ2`P¬l™÷^³kv)må_dôÒ4Í¿ßG§ÓÁ… Ðív1 ò)Í9ñÁiÎ9>ô“ïÆ½/¿sΞd=þã7†=ÚÀ sƒÎIØã–×oÀx ×b0HsÁc„’¼<:BØŽ ?ðÔ³½_8·é©è_ÏÿÏ£ÿ_4»œ €zœÉŒ'áñ;n\þ‡”Á0-¦ÁÁ@(ÉòÀ¡€DùœõÍöˆˆà:cP!@“]™‘U̯ºÿ^t»}œöÜzËì:ÿiçoÃu0ìœB/«õ7¬:`¬düŒàùê S*þ9§LN=ŸQ#ð#ض ßÓ'¿=ø…s]o’ú/–þÍsþWeÝz*€ ½àù—ݼüvJÑ0MYÈ¥£”P0ŠL!/"Ç‘ €2†¥Õƒð&D¡ˆ4£y³Ù)]Üþ’[pôÈ!|õk_D 8!S ˜O-‚Ý*{±µ‹–γâì€Êйw`kÐÏ…r`P ÜÕö¶Yéy™¡!Yó¡yU¦C˜§eÐöG?ž)mA¡<ñjʵ祚¶@O]MÇ{ͳªô–z,Ž À;©:ûé=ý;†Ã!lÛFEH“8wþGÄ/ýËáè‘Csö*ü%¡tþㆽÓèožÀ°{VN(W[C¯7Êœ¿‹fÃç²Å/§Ù µ™ð/I¢0„ãø&ŒƒÿÙ—Îÿ¦…zî¿LüwÙl¿õ¼¬)À¼ N¯šãf½Á4LœÁ0¸ì¾DˆlÂéòçŒÂ¶}U&»Ž:ˆ" ²«ŸlÌ5& ¸[Óvã±ëqËÍÇð•¿ùD Aœ²)a`YDT6AͶ³@š`/l7%€efšfþ|  LeëM¥²hzf¡ìfÒòÕiƒ…Ïr¸ÐK"‹éÉLI}רÕå° ¾Ø¶—Lƒ2u®±ßy6+½©__zÕ”¢üUOÿ~¿¿-ßﺮL·¦1ÌÌùß|Ó ø…Ÿý¬®ÌÙ«Bä?é`Ø=…ÞÆ ºgÁ¸V_G¯?D¿7‚ë¹h6k2ïo00B4Õ¿<–(Šà{!†ã B?œüù—.|lìF”Gÿj¸ŒÍt»¨nê¯={Ú>yß«"Ö %¤ŒPFÀ(2çO³ôe€=q!0ʰt`îhS2Ù\T•æL@q§íè‘C¸ç®—≿yQ€"E*(R!r6@oçZL TÑijR{9Hê w³@‹r˶£[•³Wÿ+§Wd Šçj‘4!¤‚(Ø ‡?óÈŽ~A0sý2Ö[ÇXZ¢x‰@Áì]ñ€Æ<` ïÓ5Û6k «û€­?Û¶óúþ t»]ôû}L&“\€Êàƒ“ðÆ7¼|ÿÀ²¬9{¦•úej)ø;A÷!0Z‡%íßÂu=,-7` §0L5T( i*sjò,Ê£}òŸªèÿ;¢ @·* lQZ €Fá7^r¬õ½ Â2 çàœfó1”eÔj&Ø P €a<¶%KLx6ƒ`aèBöÈQ²ÀÜœs<òº`Ûž{þ(b@¤H›´ŠùÏü$œÜ¬¨z/D‚J¬u1W›ÞIOíCYsžY€@_f{ÕºelAÕzUQˆ®ÜW½$f Öf‰«Ä‡e6ÇYEˆeçvL§òKûü~³‹Ù'Ù¶"[pÍ.•®â½\æüE–* ‚`[K_•ïÇys‘&àðÁHŒZÍÂG~ê=xý#¯]`µö¾îöhýÎIô2Á¥ÌÖIûFð\+«K°L–™Eþ„€±-qœÀ÷CLl£á7úÜ}öÌ¿EµóŸ—û¿l¶€þ¼* :v¢pm¥6n×Ù#†!Ã`YU@6Àõ«ÖÁ ²„ƒ2 ÒD”byå<»‹0p!ÒD~žñŒ`  4 "Dö XYnã©¿ÿVV!"t›8°8PE‚j{ú¶ËžÏzm–íj€,DðŒ±J¾Z¹jf±U¬BØ(®·H*‘¸Sç0 ýNê<Î_3·â1*ËI‚ŒýØ/‘ý¥bʃklÁÞÛ<ç_Œöõ{]Eý“ɽ^oŠò y}’$€ˆÁ‰Ÿ+ýág/½ã¶¹û'oÕÛ„Éðú›'Ñß8aW–ú±úzÖäG6Z[[†eqXfVïϳÊ2B „šº;EÆp¼ã¡ Ï&þåìpéøÐÕÿû2ú./ÊA@ñÿbe@Ÿ?=9qÏm˯ 
"=jf3†¢EYF•fs45U#ͦndm¤©!KÖ8ÞB¤(ãYJ@u $%»8m·Ýz3îzÙKð•¿}*&`H¶1º6ØîœÔkS'dÒ ±í l«u¯r¸Eà2o«"µÝYë¿»¸nY Þ,` ZyS#•‹×é\› 澟­,¦YЭÈ"Øt¦ÒûÕ¿ñn·¥@ Š¢ÃýtüûÙªèþ*š_¿¶T®_5öÑ»úõû} ‡Ã|2Ÿ$I@ENH±ßøW?÷<¸6w…iŒ8ôá¹Lç1ÈhÿQï<ÌzÂXE¯'iÿ(±¾¶Ó2`ã0Œ­Içä}”B$)‚0†ë…˜ŒŒÇœ:ïþúW¾ÙûÌþíËèØŸ`ž _|?yúÆ#·ô Y«if!ÙÀœÉŽTÂ90Æ`˜£áq’ h¯¬#òGðÝ Æ€P^¸d¨Äó‰ƒ×ðЃ¯ÂÓß~Ãá‘D‚æMƒ(£-uT;ÏE_+Z®C˜s¢ QËôM­”ÿeVµ¯eêNY€¢³¯Z¯žf‚‹a Š]‹%Šdç¾µS¶}dû'ÊRSÇ© ÷K%Bq?wûù2+²Å龫Îãw¢Íoªîuo«ò>UÛ¯û(¡Ÿ¢üåx˜€!ËÄ~ß÷=oÆûÞû.Ôjµ9{™ÝÃI‚(tṌ‡çÐß<)ÿj­U$´…^oˆþ`„4I±~p¦Éa˜F.ü£” D €Èªü Ædâb8Ãö¿û½Oú5ÌwþûJù¯Ûå@5ªK)ÚÎ ‡¢ÆÅœK§.«8“í9ç[©¢RÒ‘™–‰ÁpŒ8Š´–Ö‘D.\{ˆ4²È@1 ,O%ÌšÍ{ôu°]Ï>$똤lªYоk£lsfEÛ>`·=”B½j»êõ²|uÕºelDx¨bf­[µÝyëé€dj"¡L„>à•M=W[P\æ¼?Ï*‹²ïÒ˜õÝD£&<,:Ë™kW×Á^îC•¶@ßiú‚²{­ ˜éÏõ÷õ\¿jç«—÷ ƒ¼«_ËÔ)ƒ‚FúÉÅÛ¿ëñ…Ä~B$q„ tàÚ}Œúg1Ø8ÞæI h.B˜ZèöFèF€õõUÔ,žEÿ †¡1Æ9ÿ4ME ü0„c{ m~~þk›ëƒ.¶(,šûÿެÐm§,€ÎçθÏÞuËò}ÉÓä`¦l¤¨[ʶdÂä¼Í$}QÔê5 Gc„QŒT¥5 ñáÚ}$q(ÅT² ,¯ÔÐ,b–RŠûï»7»_ýÚבÄQ–¥‚z aÕ5­ªÏz#Ÿ0íæÛvbKPûn;`ªmoé:;ˆ–fEi—‚-¸ØÔÀ,`¢¶ªj„2¶`Þù™k;eŠÜ"ŸÑAAàì%S°µ‹‹Qñó]»Ù·²4B±¥öÕÄTýUç½xí !…~*ׯ ýŠQ†B€"ƒÛn½ ¿ôóķߺÀÞj5þ wÜÁ¨ýèwNa<ì ¹r^ÈÑë0ŒÀ¹ƒë+™àOæýe½V%$R‘"Mäoø!<7Àxä`4²qú¼ûo¾ðdG¯ù×güSKqÆ?à2;ü¢íTçÿ+¿zž¦)L§n»¡ñ6a™‡Á &Ë^šG1DFò ­/[×ë5ŒÇ6‚ DšÔ[+0=î"‰Cˆ4›9/*%0À±Žâá_…§¿µ•RA¶U €ž›,F™»jÛ-:0W}v'ëídýyÀ`Ö€^ÑWƒâº;I#”ý޳i)ÈYÐáïèwÚ((‚Ÿ*`°“kfÖõ< 0\N+c ®tmÁ,Ç?Ïé«Ï©¨ß¶mŒF#t»Ý\è§rýzÔTx`”ÿw¿í1|è?º@s Xæg61êFïÂqô7OÁ±Gh­ƒí¥è÷¥ó¯5,¬¯¯fŽß€™‰þ8'`D þ€4ý~× 0ÛôÇ;ñ—~÷S'ÓÔqâ½ßÿ¾Œþý Êœñÿ)0œ„~»iž_mñ72Jaš<ûAu ªd£ÆI!¨7°mžç#MŒZÍV “ÁâØGšÆòË…ê°H… û¼ñ ¯Ãx<ÁóÇO A A8’dšnT @§õsPÏËnb!R!¦Tý‹šaSoV¾ˆídPœµî<¶ l}½b9aqý‹qöeëVý–óØ’E%UÇ2×úúSÿ–¯T9yRY AÏ¢û=ëýý €ÙlÁ~ÕÌsüúó"Pÿ«Ô¦šºWµòUùþáp8UÞ—¦) bPx HP«Yøàû~ßÿ½oY€òוþ>|wŒÉpƒÎIô6Ž£ß9ƒ0ðÐX=†ÑÄG¯?Æh4ÆÊê2X†iHç_³dä/óþ²Ø_dÍÜ’8E^äG® IDAT…ðýŽíb8Áó£ÁŸ|îÜG&ÞT»ß+.÷¯l¿ Ú‹€þ<×?kŸ½ýÆåCŒÄ·¦!Ó†‘u Ô!šu d T:tB$íÓlÖ„ÆcI*@™…öÒ*œñ&¢ÀEšd¬Ý‹ŠcxàU÷â†ëŽâɧžF… bš Ð'R ‚`júÕ²ˆ ˜Žîç9J!DšîhðTÏ€.‚+‹Zw¢X$g_¶^ÙºÛòØ%NwÞ>,º^Ù~*+–œéë÷³lݪßwŠÍ*t9\(/]ÔT™úÞEÖ…(ïPü ´×€éTIYú`ácºL¶“}+^Ç Ì—Úv¼Š×!°õû)bZä§Óý*×?¶fðËÆ3*|0„ ¸ý%7ã_ýÜpçË^²Àhb¿À…ç 0œGózÇ1蜃 æÒu ôú#L&9„¥v¦eäå~†Á³@‘€ä:‚i*Æ!/†ã¸'°'žzvø‹_aXløSVöWùï«èØß È ¨ÿ§¢푞8;~ò®ÛVÞ(Dºdpãà‡Á@ JçL D‚Ôë5ƒÁIš„¡µ¼ŽÐÀw'H© @þY™! 
±7»¾áAœ>{.l‚d×J*X>¨´€ê}­nÕÁ¯u–9ñ‪;èÒ“_² }MÓ0*?STÆ—å¹ç9΢U9å¢#Õ·½ R¶­YlA1â)ÛGý»ËØ‚ªuwÃl+S܃r¾KÂ Ìø\@*˜eààr[€Þ-£U ʦëÞ‹cŸ·Ÿ³¢}µèbP!ÄTiŸÞͯÓé Ûíb8ÊV¹Û¢~)ôãœã‡þûwà}?öϰ´ å¯fó BΤ‡Qÿú›'²ÿ 0¬&ˆµŽ~_Õø‡¸áØõh5j¨ÕdÎß49L#k GåØ-R I¥óOâŽÂuÜŒúa³ïÿþþÜaÚùWµý-æÿ÷…Ã/Ú~Àt„_öZq4@D"MSñÍÕßJ„`Ü0ÀMŒ2\:j5£“ÒL‰¶ˆd j–ò0è E²½osiqhÃsFˆã"•ÓƒP¬Â€,˜hÔkxôõâÀê2žúúÓH“1(„ Û‚ú¢¢`;í\æôZÿ\å Ø˜9ç3©øy=j-sR‹nKY=_¶Ïj2±È`]5¨ïV3Peëéû9@ì$P öÀqíÌÔ±ÍÜxéËE¶  ¼ØKÕùÙíg‹VL#èLÁNµëøõó¯,I’|òE÷«\§ÓÉE~Žãäã–!\ÝüÒÏ=øª…Ž%§ü#™ïwFjBŸãèmžÄx°‰úÒA„hÈÿÁBÇn¼º™9~™ó7 &ubóÇ!& â$E§Âžëa2¶Ñï áxÑ·þó'Oý/Q’÷ú¯¢ÿ«„û.úö/(û¿ÊùëïÑó]´²dn®´ø#rnÓ0d{ß,ÏC(ÏZýJúG 4ª“ çÍfýÁa"µæ*IáŒ6Ç’$†@šÊäã")¸íÖ›ðЃ¯Â³Ï@¿?ÈÚ ¤`9ºVÚ•¨ åmoó“—€²÷K)²¹©W·ÝP–Åeañ[áó;ù®¬[õý³ñ:û²õªœ½¾nUt? @Ì—L¯9Î'ßSá˜^l¶àrŽ"cP•FÒ÷qÞ1TOYE‹N÷Û¶ÓýÊùët¿ïû[‹HÀ„/Ç8ïøÞ·àÃ|7Ơ̈˜¢üC¾3Ädxƒn–ïß<Ç¡±|=ì€J±ßp„F£ëo¸5Ë”¥~&‡e°­NªÔO¤HD*#ÿ$A„p=éüƒ\7|â‹gfsô˜ŽþËfû+ÒÿÓ³l¿`vôÂÿz* O¡DŠw „àî;oÇk_}ž}î8ƒÔˆ$c0U2¨Ø”M›ª¶­l›XK{½ø\m^>¿ÌöBÔ4k@׿gžHpÖº³ö¿¸^™Yt{Åu‹ÇY´ªßREaÅõª‡þ{VMÃ\9Añœkÿl?¯³OÂÎÖŸõ¹Âu[¶]•Æ*snӛ߷côBVumAÙy›äéHñ«<¿îøÕ”½®ëN‰ü "Pág:'à{¾ëq|ôCïÅuG/tLÛ(ÿqÃÞ鬧ÿI {ÀÍhý0úƒ Ã1lÇÅ7Þˆ«+°,VÍDÝ2`˜fÖžRJ²Þþi*' ŠS„aß`Û²Óßp8Æñ³“ßø³¿>ÿYÈÈ_þË&ü)ýíKê_Ù~ÀìT€ú¿Êù$MSlôܯÝ~ÓÒ£HÓ&ç& ÆÁ N¹ìÀT¤O² #õ©÷2¥?¥f V­ŽÎfQ#ÐXZI#Ø£’(@šD"j‘)²Ã˜¶•å%¼íÍo@³ÙÄ7Ÿ~qe•ÈôÕi•P7¼î´tú®ÝU1úÿU €.Ç YÈe výïtf9ɪs[\·ø»\*¶`žó[„-˜™‹Ÿyƶ۞Dç‹|¯¶ŽÚW= UÆ”9ÆË¹”»y×}hVº‚"@Rë(Ÿã8Syþb _Çqàû~>þ‘‚DȨÿ¦¯ÇÏÿÌûð–7½~A-Q¶IŒ8ôà;#­Äïú›§`úh,ŸÔÐïgýÁ-·ÞŠV»«f ^“Ñ¿ir˜gª-¼Èʯe£Ÿ8N%»°'.Fƒ1z½>ºÃàS¿ó‰ÿÛÿ<ê_ ÿtÛ¯˜vîúk³RS€`ìÄa‹§o8\{‚1nHñ‡!0)„ r槼̊ç:)”¦a¢½´„á`×õ‘$2%PoÔad©`‡’ ZɕڀE@¼ôö[ñøB§ÛÇé3ç²”@ ­Y©2þïçl€ÜèbASé`; ˜ŸÐÂóY`ÖÀôbÚ¢ `‘È{Öw,ú¹YƒöNrµeξj›‹¤v²î"¢ Šù⋲]‰ÝlCg äG¶3ûáW¦ŸßYûU|¯èä‹Ç`Û˜¢"þ¢ÀOÏó«N~ys‚Š ¯ëÿ§?ôøÀOüs¬¯Xèøäu—"‰C¾ gÒÃxpFªü7O`Ø9‡0ôQ_=†‘£ßa8£Ý^Æ7ÞˆF£†zÍ@­ff´¿þ\§tü‰J­& ’$†ç…°m£á½Þ¶=û>þü/G‰p±åüË¢ÿ‹Qÿû \ `Þ{U,@Îw¼a«ÆÏX6„)-c`L®Æ˜l Äòtã ”ó¬{ deXZ^†çùƲQ³ÐZ:ßîÁó&ˆ#I!ŸY0ë°`£^Ã#=€Ûn½ ßúösp]'cxŽàõN‚ ø¾Ÿ3E1°]=žŸÈ’€œOaNo?ض¼‚ÎVVEóWEO³Ö­bFU¿Aq=}]=r›·Í½` ôugE«—B ·'×ׂÀ@ÿÎKu<‹Ú",@|•ýúó"°WÎ_5òQy~½wUžH@DQÜ/~þgÞ‡î¿wÁó³%ô‹C?Æd´Q÷4zdÔ?ì]ehã0úCƒÁöÄÁÑë®Ç¡Ã‡PË¿‰šeÂ2,Ëç”i¢$FKêß 8Ž‹ñp‚^wÇ /ü—OŸú™Þ(`¶ó/Fÿe´ÿ¾¶ý 
€ÅSe¯ååÏŸ±ÏY«»­y€"õŒQŒ€LH´9 `àL¥T$OÐl6AG·ÓE'HA½µ’†p&]D¡‡$ R9Q‘ÅæPvýuGðÖ7¿I’àÛÏ—m‰E 13- ³zgA5x—åG‹NØJÚ‹¥Øk+(gY1¯<ë3³„WUÛ[ô»‹ÇP´ªf2E@2k=õ걬mtäètrÙz³œÑ¹ø‹aÔ÷ïâ£EBÙïQd=öèÛœ,Ë>Sv®•ÓW€Qª–_5òéõzyÔßï÷óI{TįÊú„HARH@¤8°º‚÷½÷]øáüh5 ¥Øêå:píA&ô;þ…èwÎÀž `µ!@C ý#ÄQ‚›n¹ËËK¨×MÔj[‹eJág”@"bÄQŠ$ŽF)’8ïðãñÝn¶íÿô¯ÏýÌs§GgQ-ú+Ëû©ÿ}ýW˜• ¦éëäé†ÏßzÃRÃäéy«U*e”Î@X†Œ3Ú_oaÊòÖ¬[7¤e™¨7›èwûðüI’‚ÕÚ¨×[pǾƒ8òsm€Šþ ÙÀ9Ç}¯¸ ¯yõ}8sæ<6;!«H–SÝõj dÕ@ÖPH9³’è@tôÀyÑé¶oŸ±E+ w³¯;ùL p)4jÝE×ÛI¡ìZ˜çÌËŽ]¿ö*Ó :¾…íbEa;¥/çUDÛ§ˆ^l³å糊i™õ~‘µÐsü*âw“ÉÃá0§ûUŽ_o䣯üZIÔ€s†ïùîÇð¡ŸüÜtÓõ€XdŒP€J¶ó ¼1œQ£þiô²Yü½óHR³uÆN„Á`Œá`Œz£o¾Fõš‰zÍD­fe‘¿¤ýc Lz‘Èãó9bÙâ×õ1žLÐë`Oìðó_Ýø—_}ºû ¦#ÿbô_Tü_qÔ¿²ý€ùཊ …÷É“ÏôŸºû¶Õ‰obŒgZf¨VYΛËÎ~£Ù ‚Z Ö,—¯n$Æ––—„!†Ã!â$ÀÑ\ZG:ð&}Ä¡‡8›TYì­f#Ü °º²Œ7=ö0n¾é<ÿÂIØöDD`òÊ+0ÅÒÁ0 ²÷â)± Îè%„Ŷ³êõyNjÛ´ÏÁPôcŸÇ·µ›ïžåD‹ç¼hUÀ`ÑõÀÞ¤t›ÅÌè] ¶`/®Î²mT‹zo7÷Q•Ã/~—þ{é%Å*â/:~}š^åø§ªŒD"| (¿öÕ¯ÄG?ô<òº`š†,¹ž3s¨žëg«o甜ȧs ö¨³q [Á`hc0Ãq<9r‡D­f¡^Ï–Zm+ïŸ1¼”f¡y–*âI$sþnÂw}™êè1Mð÷ÏôõÓOœÿk”;ÿbÉ_Y§¿}íìËìJÀìTÀ6G_ò˜§¾ùÜàoï½cýþ4 ×(c0¸‘ ¥j_®Ê9!L¾G)8ã`Lå’¥Ó&jÀ 2goš½!ü D’%XV öpa Ù€$Q@  ÙdD‹W r†Áï~ë£h6øö3/ C@„°[– ê@@•*}€Ô”Ï5L—ý)ØîøõÇâû³l¿¡7>(±2`0oÝéí—‡yÛÑ··È6gƒE~§*g¿ˆ-šRÙÉo¿ˆèpo„zdë^Þ#[ô\”9êEî­*‡¯@{Q« üTįšø(º_åømÛÎKúô># ú@Bà–›á§Þÿ#øw¼ íVsj¶jÛžëwF¬¼ïDVÞwQÂj_‡±'ÐŒ1ŽA)ño@»ÝD­n¢^³2PËšýÈ9`8“uþ€H“<à‰ãq¢ÒÆ#ƒþ£áÇÏMþýïêÄÅvÚ¿Lñ_ý]qÔ¿²+̺³ªØ€Ü¢DˆçÏL¾|ç­+¤q¼Âx–ïçÙMÙÚ„dúYÀ(ÕRrÀ [`²Ä¤ÙlÂu=L&6â8¨fë¢` ×"]Īd)äà£nÔÅ+(¥xÙKoÃÛÞú(<ÏÇóÇOA¤1fÂTÂs§®@€<Ï›bô©L·òÄ$–E)Ë ìtÝKe Ì-š)Þ Pë/¾/³#ܲs¯[0Øíz;]·l÷ò:¨b ÊáE_S{ @H^†;sß²õÊèûyÑ~ñúQ-{}ßÏËùôî}ŠêWªþjÇÈ«+ËøŸÞõƒxÏþ:¸Æhîø«KdI’)ü'²©Ïà›Yy_ç &ƒ˜Ù†0×1Ééw'ö+VpøðAÔ, õZñ×Õs–eJF—rù ‘Õ÷'ˆBéøe?ƒÌù'è÷úö‡8sÁþƒßþø³ÿÓå~óêý¯hê_Ù•€êѹlä.Kä‹ëÅñésö/½yé5I-1Ʋ_M,#rªšQÙ3™ ,» ³V’ùB$hÔk Œ¡ß" #$)`4V`pg$µR$˜•  Dh Äâl€ešxõ«îÅ#½ ›ο oØL+i  7RŒÀ”`0Ž‘–´æ; Ýv#Üͺ—ÊÄôŸ™VŒŠB¹EÄeëT9ðE„„;µE„„³Ö+¾7KÄX¶^Õq—Ëâ6Ë€AÕ{UÇP8Èí¿ÝìO”Zñ8f] eǬLwøU3CÈ{(aŸã8y9ŸísüÛª9þµš‰w~ß[ñáþ Ü~û-ù÷36gÝüÒqàÃs†°Ç›öN£·yýÍ“ôÎ! 
|˜­#°†~_vôKÒG¯;‚¥võš…F]æù%í/ÿ–eÂäœq¨ÝHR)ò £(«õáù><×Çd’8„HS…"2-Ââ"AX^jãÑ׿Üÿr\¸ÐÁ…M@Ä€P“m5@¨B ä¬@ÖÔ#Ž%8Ñg'S  Š˜úq*åF„ ¹s‹œªKješz}‘Ïîô;ŠŸ)ËïïÄ.³ßØ‚"ƒ3/'?µìà8J÷‘yW,Uß_&ÒUÕJº°OŸ¤G¯ã/:~½Ž_U m‘ú’îGÓ0ñöïzûðáû_Ã0wüJäY]c5{ß…èožÁdØ3ÛHu ƆÃ1Æã š­&^ËhþLä§@€rþ¦ ÃäYšR€ˆ’qe5þ)¢(†ð\“±-§‡Í¾÷™ßüý§[‘Ñùëýuþ— ZЮtP|mVª`(NBÿ\×ýÊ7.=œÄQƒ1Ƥ&€3¾%Ô£J(o`Ji–@–Ç—X„’}ƒìºW«×£¡0ŒÇFm¦YƒgwxD‘8ö‘¦Ñ”ƒ%„f` êзÛÚÚ*ÞôØC¸ïÞ»pöü:îL P úYz@±jp˜Š 2SOUAÕàp1ÌÀnÖ¿œ¶S@0k@UÛ™—¸NürÚ¥úó*G»è÷W9ü2AJÖŸåô«t„LϪúôÇã©>ý~?/åÇp]7wüº8øD¤08ÇÛÞü|ì#ïÁÃ>Ë4ÁØöþÛMÑýšÈÏ`2¨šèzâ,„Sf¤êýâãn˜²ïº’¬ ÌZ‘íåêú«Ùvr½,ºÞ¢À`§¡¸^P•××…¹¦îWåøu§_,åÇ9°¢Išn1f"…H¤ã‡H@)Å[Þôz|äCïÆ£¼õZ j–Ôù¢P5yO„8ôà¹#8ãF½3è«Ò¾ÎYL&}˜u„d уÁpŒÉÄA³UÇúeÔrª?sþuuËB­Vƒeq† Óà LI‚$IŠ8J«&?I’—úMF#©}Øè ?öŸøÿôÍ_ “ÄCyÞ?ÆìV¿WtÞ_·+Õ^°*êןÓØìûöh=yóõ­“(®+M€¤ØŒLø—EÿÈP<¡A>¥$È>¡B:l!@ˆ¼*(!°L„ŒÆ6B?B'`fõæ2»Ï!]D/›%‰¼¢rp±³j8zä Þüø#¸íÖc8sîƒ!„ˆ!D,?‘—€Êš ©ÁCuÔÛ /2-ñnÀÀw X4]°ÓWÙŦ.·]Jg¿Èz³X®*00k}}ÝmÛª(áÓ•üžçÁuÝ©h¿ÓéLEü£Ñ(wü*ݧ„¾ÈÄ}"õ! rN€7=ö0>òSïÆão|šÆ”ãŸõo©û}o gÒ•tçD&ò;q)8Xë:LœXÖõ'R¬­-£ÕlÀ² ÔjJÝ_·P³ 45˜‡aÈ¿ R§•ˆiœMz” ÿâ8響‡ú½:›èý/ýæïã ¤ªÅ¯¾EªŽûŠÎûëv¥`6Xt”Ë×Ýèy“îÀÿÛ[nh½6Ž£fÞˆs° @ÝD6•”"A¹J„d É‚Ìq#ÿ#° #+ÇñdyJ X­508£BßAºˆ#I"MTÙ`6xì\Ý|×[Åí/¹n?K $i”¹M]#PÔ ¸®›Ó‡ èK±Š 8ø…„ÅçS{\Bs_ õ¥¤ öÈ,*8¼’lÑãØÉz³R)ó¶[öé‘z1ª×?Sé²Õ¥Oß¶®×ñ![]=ðÖsûƒÁ`{ÓžáãÑ“É$Ÿ™o[ó" ăH‘¢Ù¨á{ßþ8>öá÷âõ¿ívyÊsîµ±=Ïï;L†vOc jú»gáL`Ö*b¾ŠÑØÇ`0ÆhlƒRÕå&êjyÔoÁ² Ôë&¬šìéofÍ}L#›Î‚i*&rœ‰by¬Q$#×u1ÐëvÐëöp¡kú7þÓ7~-£ýUΞâ–󿢩eW2ªA@Õó¢óß¶þ8ðOŸ<ñÒ›–ï‹ã`…QU˜õ•f i*r¤±åÔ D*¶HˆL+ r!_~ce©B ƒ"‰#ØOŠ“Ôl¡ÖXBèàÙ„AV6Y#¡"S¼‚…„‚[ŒümíÞðÈkðÈC œ:sIËA"k[L™œºSŸ}°Ø]Pô‰ˆtñ`¨ªo®r|Åi/Á~w†³ÎË^ÏÕl‹œÃy@ Ùë¯ëÏg5éQ‹îôÕ=¦ßWz§>åøõ>í—–ñ„tø‰‘JÇ¿º²ŒòÞ¾ÿGðê^˲rÁ³b-«mËñ§Iˆ(pá»#8£M {g0èœD_Õô:´Z?Œ±›`8š`4š ´Û5´šõ¬ŒONâÓ¨[°L  ` †iÀà2cŒ"Òù'‰@šù £lúb£ÑÝNý^ç6íOþú|êÿÌ%üÓE:(Sû_¢¿¢] w{™“×ÙàÚbd‹©-V¶3_, IDATÔ®?Ô\{×;ïüÅ¥ví–£GŽb}ý–W—ÑhÔa™–Ò¬9Ô@ÈÜZœÄH‘"ÊråÒaF"Ùˆ"ôCø~?ˆà! 
DFüµz ­Vív‹¹]†ÖÒ*ÚˇÐZ9„öò!4ÚP«/ì5À™BYÞG`žéuÕÐë ñÿ3|òÏþž7A(¡Vžþ(n†aÀ0 IÓÕj¨Õjh4h4h6›h4¨×ë¨×ë°, ¦i‚s™Ç+Ò¥UµÞUÅc©:Æ‹µýN£ëé÷®t °Wû?+ò/û‹UüŒÆEÝT¿ã8Sé7m›jé?&Òñ§™ÖÀ ×Á;ßñV¼ù±‡ä(ت«Mwüâ(ÈÕýŽÝÃd¸Épö¨ g2­­Ãñcض ÇñàêC½.©~Ó4òß²¸|n°²÷8ç0-”ùL­I*‚ ID$i’EþÛÁh€öòA´–¡µ´ŽF{ V­ Ãj‚s㢀!_ý»oàO?õ9|ñK_Ý:±”ƒP”™[¯i¥4ðMS x; ³ Ȳ+gx>ëâ´ò¸È”íónÁ"ïïÆö8Л#íåþ¼Ø€àr¤:vâðÕcÑùëý6”ÓW\9s]K£¢ü©¦\­_òMM¿œÑñBÄY q»ÕÄão|¾÷»ÇuG©£ÈS‘ómËñ§IŒ8öeÄïŽàNz°‡LÆ[`Ö bÒ€c{˜ØÒñ§iËb¨™,3sþ–jâ³Ìlú^ƸŒü9¥”SA ‚f@H9~"„ò\Ú¶‹ñp€n·ƒÉÄö¿øµ³¿öñ¿xAMì£;þYÍ~f)þ‹sc_þ|ìj@5 ˜;¬÷üà}ïºíÆåw¬8€õƒ‡Ð^ZF£QC½&Kû($ÈE R\'Ò$Aœ©Tã8B*ÔÁ BDA/ ø¢P‚ D±@šRÔj54› 4›54kˆ]4m´– ½r­¥ƒh.D£µ «¶ÃjìÊYlå7;}üñ'>ƒ?ûóÏab;òä&Á3AÈ4+ „ÐÚ„²)0`šfôt¦iæÌ€Lëv¦^“*¸ àr|GÕwîvàÅ;qö»¡öÕÿú¹Uº¦Eù¢GúºóW_Ñûz©m’U©mçß—†Òñ§Qþþ-7Ã÷½ýq¼ñõ¯…intý~žm[T¿H’Üñûî®Ý…=’Q¿tü}„¾V[ALZ°ŽëÂuˆˆPo.¡Ù>€öò:šKÑZZC½¹ «Þ†a6À¹%Õýd'}¶Û‰“gñ©Ï|ŸýÜ—0Žó×)1@˜1•"ªÁ€:ЀÔúÅTRWë͈€ÅzoÑÿg9ÿý@ûï…íä8ök:`–£/þ_VY¢?×£{BÈ ¯O¹]&äÓ½žËW¿Ø]SÙÔoQüiMåöo¿íf¼ùñGðØ£¯E»Õ\è¼l·ér>™ã—Ê~×îÃwጺòÑÂw0³…”/ÃqØŽÏ•@†1Îå,ª£`œJš¿fÀ2­\ägš\ÞóœÁ0LpCνB3¦RºÜ/ N i¬“ƒž`2±1èÐïuÑOüÆï|íÿ;á[_©ý«jýwÚçÿê¸Á5»PThªÒªB`QƒG8v÷w=|Ó‡›ukùàáCXZ^F»Õ„i08ÃV‡ÀŒÉ‹—d €ˆ"MÅ¢(F'ˆÃa# äkA!ŒÄa"ßÏ@’É$)’D€ž¥êh4ꨛ‰×EŠZs ÍöJÖÑl¯£ÞZE­Þ†i5¥Xqìd â*{âoŸÂŸú øë'žDœd³B¦(3¦„ƒÀ4Pšµ(0 èÌ@;PÖÀ¤,ò/‚‚2ç>ëý:ü+ ì;°¨ít‹”ñ-êìËõ~Å(¿Hí둾¾èQ~Q½_éô!ä”á™ãW¶ººŒ·<þ{ÃkqóM7@]’;?õ[é³$G>BßÖθ {܃;Â÷PÞDj¬Àó"8®×õà> pCNŠf0Ù*s Ã`yÞ¿nÕ¤ÈÏ”"?Ã0À‡að,( {¨¦™+Žå¬éHb0Šá9&¶ƒ~¯‡ñh„g&ÿ¿~÷k¿ò\Y§¿²Èÿ;ÎùW/ö¨2Ám)›¯_>òÃï¸ë#í†qëú¡ƒ8°ºŠF«)i.Îd4k L @@éV…@gƒ@!ŒBDq’åÿ$0PÎ?ŽåŒV*'R›¦Q”B€¡^¯eU¤~ê6ší –Ö%#Ð:€Z} ¦Ù3,Yî—†»¿$&¶ƒÏþå—ðçŸù+<ûü‰­Oá …*` k¤uv@/3ÔuEaQ;Pœ.µŒÂÝ A᢯-jW*P˜eqòe¯-*Ü+û‹_ðé¹|ÝéëÓk«uô~qO ø¦„|™¥IˆHNžíç¾úxË›ÁkxÅŽÎÝv“½ú…z$Ýñ;vöhθgÒƒ;!…ÙùþÒÙïÖù£äÿ«Â®fì(aà¼^æRÓlÿópÏ»l¼~yåV¬ Ý–--¹ÅÁ3MÉÉfJ"M¤Â6‰#DI"|çÌ@'ˆ’TNr§Hb(NeÛËT …L $RPÔë5ÔëÍ:&…ˆ±z£…Fk­¥u4Ûkh.­eŒÀÌZ ܨIÁ Y¼—À,;qò >ÿWƒ¿øÜ—pîüæÖ@3°ôXwÔ ¨ªåÜP @g TÕ¾¾Zôê]G O§ªl@°ý.—³/žóËa»Ñ,ÝÏ{¶³@Êá+ñ^1Ê×óùeѽRë—Ñú Hèß;µ/©Šô¥Ó—Q½À÷¿¾ö•xô‘×d]úvka_ }„þž3„k¶¢ýqŽ3Bø Ö2Ú„í…ð\®+×0LƒÙÑ”2’Eýc`œ€1 ”°fY0 ˜ê¬Ê²L, $HR‘‚TÈ ' å> G {=8žë|ö‰3ÿúÓ}úIT—ø•Õø_sþš]íØ; ©”Àÿ]ï¼ë–Õj5›tuí€:,“ƒq5ø¨ ‘Éq%I 
!b$©Ì÷§‰²·uæü³%I’X²‰°’¬v’HÊÌcˆDÀªY4êhÔ8H4B»°ju4Z+h¶WÑ\ZC³½†zóêÍ%˜V †U—M…Û“ô<ÿÂ)|þ‹_Á_|î˸°ÑÙú!(¥ÆL0 õˆ^O¨G]T¨—êì@UuÞ¢Øú>ìì”Ø Ø ÁN@ÄNÿ"B½²GÝô¨»*Â×;]FQ”wµÔ•¯G÷ºÃ×;_–•ÖB!¦Q–×!× à¾{ïÄë~5^ÿЫ±¼Ü^øœ–Û4ÍŸD¢ÈƒïNà9¸v_Fûvîd×#Ž"°Ú "4àz¤ù=A¢fòÿŸ½7ÿ•$»Îľ{cÍí½ªêµª»«zc³ÙÜw‘"%JEPžÁŒcØžùeÆlÀ û¯ð/ü‹C†a<ðŒÇ°–ÑŒdR”¨¦H‘’šd³Iv7{©®^««»«ÞË%Ö{¯8çܸ•™/_Uõž/‘ËËŒˆŒóï|çïø£HCG î¨Z!Ž¢X!Öšú¢h…8Ž&QBUiœ@ó8a@Ä~4ÉÏ´¿u\ãÏο¬[‹¦xãµ×Q,êWÿïo>õ?üì©×.`=í¿ªÌ¯_ê÷¾vþÀûGƒ€P(û) –@ ûí_=ûé/}òÌ;d£“§NaïÄFÃyN]嚤5 œñÖY(Zc`-QýÖXk`CÛ:Ž$àéÿÖX Œu0¬’­ªMk‘f)†œèæ¶™!É2 G{ŽO`8! 0ŸÄ`tùp‚$pz a pý¬Ðoÿå÷ñê¥×»¢´g´N€Ôl±‡bÂ0e € ¬6‚Ü „ "|¯uó×ÃϰŽ:—›î¯Zßö±ùšãÚµ°ÇqøÛÒø«žÛßÿÒ!/töa„:ï0Âsö}…þ59|Θöêœ> ð‰=xƒ>ˆIèÓüÅ!wî{ƒ)þ70Ÿ *¦0ÆAg'P»‹‹¢DQ”0M‹|@â=­,wäEù^ìGš'ꈪ¡#M€€A@œÄHbúAƪ³.ÛA‘×uо”ú94M‹²ª0-0=<ĕׯ`6¯ÿý?üùÿøÒ¥ÅëXîéßËUSývÎ…½_°=+âÞ-dú@ {è¾›îüÇ¿y÷?&wßtÓ)ìíía4¦\u’D¬ ÇïYçgà` G1¢à¾×ÖÁVì:k´–Zb 80–~DÆX”eƒª¬¤)éx¶v‚¶>Didƒ1F“} '§(=0>…Áèòá>’|„$ÉÜ8VèÀÀßüàGxö¹–Óš« 4õ1®vpaº Ì÷‡´¿€‚Y> èW¬J„LAè„´Ö^Ï.Ån[ðNŒþŶ¥íûÛ¶¹/Ößoá-tøýȾ/Ú Ç]‡½,Ã:üð‰U€/üÌÖ6p¦aõ~'ö‹ãŸúćñÙO_þâgqòäþÆýºû´hÛ M] ò¾˘O_Çüð ³Ë(æ‡(Š”NdUcQV(%в„åaAYC)êvªyiEËH!ÖŠº¡j ¥3ðÓ£ˆÄ€T†L×@(š_b-à aEhéà€ºiÑV UÅ|Ž+—0›ÏñòÅâ›ÿû>öûEaæXŽöWÕ÷·X¦þwÎ…½Ÿpí @–«Ò2ižF£ñ>ô/î¸}ô[ã½1Nìïc<a0ä˜V ©~¤ ÊC A'Û,ÓaŽkaECT¿£³—±Ö*b¬cç/­2kQ—5fó¢8Fže  2¤q Ôp¶Æ`0Æ`´‡Áø$F{§°N Œ—YPÛÃ^{ý2¾ÿƒá‡÷üø'¿@U×ÝAÒuc@Umfú:‚þ-†ËðêB– ¬:X5å­/:@°ÎI¾Árø!XÚT­!Î]œý*_"{qüâàò»~¾>÷…¹û0ºßÆáK ½5 åõƒçÝ|ÓI|î³Çç?óq|ê“F–.—É^›¹}g`Ú¦¡Æ=4–÷ EüÓ×1Ÿ¾ÅìÅâMUA%#Øh‚²¶X,J%i´Ö˜Œ‡4yGšGìàµrDáë ýy¤- €ƒ¦Ñd€VÐÊû©#Ä1‰NQ´ÇÑ¿#ǯ ù~CŸ¥(1ÎqåòTU]ýÝc—~ï¿óìÃX¦úûB¿m¦úUçëïY{¿`5[¬ê¸.%°þᯟûò§?|ë™ÒìäɘL& rdi‚(ÖP ¦ÖÖ9hE‘‹g‰ P€Bfv€q·Æ”HßQw,cXIд,T¿\f³9¬ƒŸ¹=æÈS m¦pÍI–!Ž1Àhr ƒñ J OP?l„$ÍG)—^AhU]ãÇ?ù~øw?Á÷ð#¼öúå¥ÇµŽ™ˆybáf@°Š%£ûpöoaÊ  úa] !œ·êóõû¬£Ã“Jx³lu¼¾ ËÝE,×w¼«ílŠîCñÞª\}¿o•³ï’«¿§egßÒÒ-«úzð~|æÓÅ>÷IÜwïÙãîâ5‚" Ó6>·_— ”ÅAçøg—±˜]Æbv€j1Ck-¢t (Š‹¢BQ–¨ëy–b4"M)H¡hžÔý‘ꀀÖÂ`yÔ¹âQèZ¦¢:šF¤ë‘¢’cë´? `¡`[‡Æ´(Êóé³é W._Á¢lÏÿ¿þìÿôÄù+/âè¿0ê¡ßÎùo°÷#® ôS«Ø€%‘à‡ï=uöüúÙÿn8ˆÏž8u{“ 5 JÄ SéÎA9Ç €‚íô]+¥ˆ!èꡉ`Ŭ±ÌkqÃŒe6€ª¬u0­EÕÔXÌ U…4M0È)=ç)b,àšChä ŒN`8! 
0ž@6ÚGžOdDqŽÈ—óܸسç_Àß=òS<úØãxô§O (Ë¥Ç5ƒ¡m}GÝÕÚwðý´Â*À ÛÂtDøÞ«FÄöŇáãág–û›ÀÂ*•ù:à°Ê6Œu*úð5ýh>ŒÚ«Åx}~?¢ï7ÛY±¯£ëÃ×õY‚>¸8*º÷ßÏR„Oâ=³ôø­·Ü„}äƒøÌ§?†OâÃ7 Ÿ¿ôß—)~S£©K4ÕUqˆrqàýâôËÅe¹€Žr¦ù#Oó—UÓj/> Žb¦õ©Ž_Êú”¦%9yë™êtÊdœ¢ñæÖZhE¬#íG×QþŠ€ºäü!Îß)*wnZ”eÙlŽ+‡(fs¼ôZñ­ÿãŸøý²6 ¬wü«ú¬곊ò_;àý €Í  V¥Ö• .iF£düϾqÿ¿<}óà+ãÑ 2®™UpÖÀ±@kë72››–’oÖôÒ°ˆÆ© eЀÖ8À)î!`Y4HÂÁÖXÌçsÌf3DQäYl! T;L8Í1Ž÷I88>‰ÁøÃ}?’˜R)¢ˆô72EÚ£=ŽG{ýì—xô±ÇýT4 uL€@nA{âU¶Š)èƒþ²_¢¸Š X•vè‹ ûÝ CP΋ÁÀ*Qb¨?Ÿ·ÊV£ž»Î¹÷×å¹}µý¦[(Îë;îMN=\ÁB"ÖEõÛ€!ç G÷íRë]±S'÷ñá‡À§?ñ|ê“Æí·Ýrä{Ï‚ÏkM0oªœ¡ZbáiþË(æ(‡¨Š9ÚÖ@§c˜hŒ¢¤Q¹BóÇQŒñxŒÁ CœŠ?ÒT¶Çb­M @¤”vt b÷-ÝMᔦ¾Ôí”¶Ñþ–ô„ƒL¥oÄ—W¥a[*g.« eQc6¥|U×Õ~ñúÿú'_ø.Öçú¢üû9ÿó_aïg ÈÙË2L ¬bR¬a¾þÅ;¿ø©‡núÏi:Ù;9Áx4Ä`˜#Ž"Ä1ýÈ`½&ÊSm)Rš³lŠÐºsô Í95%°œKj(5à|wlI/`¸³ 5EYb:¡i×É` Ï$®Ú9Zdƒòáƒá ÇûŽN @6ÜCÎ-‡“$g0’~“À@Û¶øÉOÇ£=Gú8ûù“W=§«0ˆ=8ØÆVƒ>@è—öY„¾p×W1}`tÄ> X'P<®õû*'¿*ºï;ý~$¿*﯇ãnûÏóÿá²:úŸõ(#'Ûrdß.5â›LFøÈCàc}ŸÿÌÇq×§¯yÿnø$W;ý–êö«r†ª˜r´À‘þZ_ÌÑÖ%à¢1*£,*”eEóFª £ñãÉiœøZ}éÚG­³‚Ÿ®72ßÄ)"ëµ`œ¶Œ(aáàŒñ Ê,—8€S Å,P”²´@Ó¶¨Ê‹¢ÄôpŠÃƒ)¦‹ö‰ÿððóÿKòßõ÷küÃÈŸ¥;ç¿ÎÞïØ¬bVMܘ8}ëðæúÕ{þë½qòÑñxˆ½=BáYFÍ1b­;]€(i#Bçôce'Q©†TÄ“%â–^ÚšÏê†iPOª,h}›Ö¢á®áA]7XÌç8<œ"Šy|§€TA›\;§¦<ƒòÁƒá ìsƒ¡’”ôQœ¼é` @ðóÇŸÂãO<ƒŸ?þ~ñøSxãòÁUÏ# --·µ>0ØVéV‰×éÖi úï·îslb6Ù*§¾ŠÖ_E¥¯êõ£ýp]ž à*'¿J p­N¾ûnÔŽÛY*»¥Hß\õ¼sgïÀCÞ<ô|èÁûß$‡\íô[˜¶äf=3Vòr„€Åü Êù!Êb†ª˜Ã!â5@Y6(Ê uEóE"¥1Ùc<"Š#®Ã'ÇŸ$|h‰þ¹¿‚L6íü%¥+ ƒ"*_÷/h-+#”¨Y™åJ&( ­Øã¸é™EUV˜N8<œ¡®ªö™gø¯þýSÿ«éþÐù÷sýëÊü;ç¿Òv€l[ ËuMƒŽb2Éïþƹ¯ðî½–dI¶?c4Αæ)²„Zk¥èŸzñ†V„Ô#M?bkÄŠffC 1µæ[q´Îi/¸qÞÑ·­õº€Öð¨bc¹²€~ÀÆZ,Š ³ÃŠb4K‘çòŒ´©n€ö°’4G–1N ÷0À`´|¸ç» 3!JRš Bõ抯^z?ü)üüOá‰' ¬2jYqú âõÍ郕ﳂ9õ£œõQ‘þ*Qá6·k±u @¬Z_Ç ¬GÝäóû;ˆ£w­¿ßëÝ?øÀ½xèÁûñ¡ïǃ܇ÑhpMûm‹OÅ }u¤_—sTåeAúÊù¡§÷Ëb†º\ i[¨xOPÖEÉý ÊÖXŒ&#Œ'#äiJ× Ý±‰qB©Ç8¢.~qùûZ)Ö9*Y†ãýEìdKuÈõãÔ‹Ö´0­4&£`”æK¦s¨jƒºi±X”X̸reвµ/}ëû/ýÏ>ùúS¸Úé÷›ú„‘èø79ÿ®ÓRxv¶mÀÑâÀuÝW±éƒçNœýÚnÿoytÏd<Âx2D>ȦÒ8‚Ž5ý­ tÇÄ:¢Vš±F¤cÄQ)RÙF4wŒX•ѯ€¼(5¶¯ pL·2Pà|^Û´˜Îæ8¼r‹,K‘f)éÒ‰ª3LMUƒòÁùhƒÑ>i<!Iˆ“QÌ Ôk¦ IDATšžFàeÕá¸qÖ¶-žzæž~æ~ùÔ³xú™ xê™ Wi qâÑ õë±~•ÃÞ”ïßôœmtÛÚ:'¼Mz`ÕëE¸êý®ãSr4oºèÞÙ¥Éy¡:¹ûî=‹ûî=‡{Î݉Ü÷›ÝŸ‘8,lÛÂØmS£e]ÍPS”>Òçœþb†ª\Ðù à¢!*› *k¢øëuU# °¿·‡|˜sG>í—Q¤ 9ÝØQÿQP×OiF¥´‹Ž-œµPŠ;–Z ë 
lk`…i[º~´†S‹-¬!à8 e¶Œ5(Kš'@Qƒ—^-ÿôß~ûÙÿk>ofØÞùoúí”þǰX¶U Xß+ ß=p]Ï€U"Á$Š0ø'_=÷»çný£4ãñ˜†ùdyŠ,øy1`“Jã:ÑH¢˜~Ô‘F¤¨HQYi®WPˆà”‚VŠE:ÄäÃí6ƒ±äômZÓ ’mÖ9Te…ÃÃCriœtÚKç1b[BÙœ©fdƒ¡’.ȇ{HsšL˜æCÄÉq’"Ò tßð>ÛØó/¼ìÁÀÓÏ<‡gÏ¿pU bhèˆÒ<ÔäzÁÁÚÿ8ó£îo*ÓÛÆVUô·¯º}Ž|³ù^ùÞÙ8kWFôbçÎÞûî=‹ûï=‹ûî=‹{î¾ §nH㣬€¨NßêÌ×Ô%êzºœ¢*ˆâ'šÿe1Eµ˜¢ª ´M DC¸xˆÊ$¨ªÚßêºV'Nîc<Ó2­©?¿¦”¡wø‘8{¦ú¹‘O¤5ˆ,¤@ƒ§bÀ9 º.XëGLéú”OJïlfM—J4’°MeQVæó‡SÔ-^ûÛÇ^ý½ïþøÒO±,î »ùm#ôÛ‰ý®Ãv`µG¸*%°©\ð*Fàƒ÷LÎýƧnÿ¯F}ßp8Àh<ÀhD=Ò4f  ¥+DQŒ8QHâZiÄi„˜EnqKQ”ªy¸ÖÕà*pu¥¸Ë #êÎZç£k,™CÐZnKl¸ñÍ$(‹‡‡‡˜ÏH/@ÝõÒ €Y¶F’ ådùùpŒ|°Ç¬ÀÙ`‚4Ó¨â$G’dÐqŠ(b:ê-pp0Å‹/_ijç_Às^Ä+¯\ÂË/áÙó/l||fa6<Ã! ÂuFæï5“JÉÅ[Ó‚"{jr³ XdiŠsgÏàž»ïÂí·ßŒ;N߆;ï<ûï=ëS)o]å[ÛÂHÉ^] ©äà‹)Êâå‚ï/¦¨Êêr¶mà¢иeY£¬(¯_×  €û'0Oå)ÕækÅ€°ƒšÅ}Jsk^~\û¶½a•‘Gû‘ó&ÿjZc š¶¥fC-;þÖpêР1†SŽžoHÄY·UÕ XÔ˜Ï ÔMm_¼X|óO¾û¿¹2mqtÄ¿Êù÷küwb¿k´ÝUh½]¯. ÂVÂ}F þÃ/ßñ;÷Þ1ü“$Îöö†4Úw˜#Mcd)•‘‘3—’qQ: ár³H³èަn)¥YèÆøÞ‚š ¡¾áœLâꀀ4Í€OØå”e½À|±Àôà³Ù”˜,¥j‚,!F%` ÀUHâ”Ø|„lHBÂl0a@À` Òp¢„š阕üê­Kl²ç.¼ˆW.^‹/]Äs^ÂsÏ¿ˆË—ðâK·z=¥XÀ"Dåé}í×¶U ï$“¨€§â×ŽÞ c»Ù´Ö¸ÿÞ³8yòî8sÎ=ƒswÝ3gn{‹"úU¶ìðɼÃù|jÐCùêbŠbÁ~1£[¹@S—$~Ô9œ¢v êªEYÕhêUÕÊaÿ&“=dYÊù|iÉ«»!—ò‰v(Žº–¼Ôž—¾b¡ŸVþ»øÊëà\PѶh ÍhšÖqæ""¶üüºn©£_Q£(+,æ@š‘åCdùijÙ€X,!ɆHÒ¢8Cœ$œöˆ+z;Ah/¾tÓÙO?ó¦Ó9^~å._>Àk¯¿—/^Ât:¿æ÷îJ9ô(ÿë, Í‚¾Î,O±ãçNýZìŽ3·a4àÜÙ;¦)øÀÝä9î¼ó4NßvËuŽÃ½Q:üN±oMƒ¶­ÑÖ;ý…wú”¿Ÿ²b†ªœ£. 
calendarserver-9.1+dfsg/contrib/__init__.py
##
# Copyright (c) 2010-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

__import__("twext")

calendarserver-9.1+dfsg/contrib/iCalServer.ico
[binary icon data omitted]

calendarserver-9.1+dfsg/contrib/launchd/
calendarserver-9.1+dfsg/contrib/launchd/calendarserver.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>org.calendarserver.calendarserver</string>
    <key>Disabled</key>
    <true/>
    <key>ProgramArguments</key>
    <array>
        <string>/Applications/Server.app/Contents/ServerRoot/usr/sbin/caldavd</string>
        <string>-X</string>
        <string>-R</string>
        <string>kqueue</string>
        <string>-o</string>
        <string>FailIfUpgradeNeeded=False</string>
        <string>-o</string>
        <string>ServiceDisablingProgram=/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/bin/calendarserver_stop</string>
    </array>
    <key>EnvironmentVariables</key>
    <dict>
        <key>PATH</key>
        <string>/Applications/Server.app/Contents/ServerRoot/Library/CalendarServer/bin:/Applications/Server.app/Contents/ServerRoot/usr/bin:/Applications/Server.app/Contents/ServerRoot/usr/sbin:/usr/bin:/usr/sbin:/bin:/sbin</string>
    </dict>
    <key>InitGroups</key>
    <true/>
    <key>AbandonProcessGroup</key>
    <true/>
    <key>KeepAlive</key>
    <true/>
    <key>ThrottleInterval</key>
    <integer>60</integer>
    <key>HardResourceLimits</key>
    <dict>
        <key>NumberOfFiles</key>
        <integer>12000</integer>
    </dict>
    <key>SoftResourceLimits</key>
    <dict>
        <key>NumberOfFiles</key>
        <integer>12000</integer>
    </dict>
    <key>PreventsSleep</key>
    <true/>
    <key>StandardOutPath</key>
    <string>/Library/Server/Logs/caldavd.log</string>
    <key>StandardErrorPath</key>
    <string>/Library/Server/Logs/caldavd.log</string>
</dict>
</plist>

calendarserver-9.1+dfsg/contrib/od/
calendarserver-9.1+dfsg/contrib/od/__init__.py
##
# Copyright (c) 2010-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

calendarserver-9.1+dfsg/contrib/od/dsattributes.py
#
#
# Copyright (c) 2006-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
#

"""
Record types and attribute names from Directory Service.

This comes directly (with C->Python conversion) from
"""

# Specific match types
eDSExact = 0x2001
eDSStartsWith = 0x2002
eDSEndsWith = 0x2003
eDSContains = 0x2004
eDSLessThan = 0x2005
eDSGreaterThan = 0x2006
eDSLessEqual = 0x2007
eDSGreaterEqual = 0x2008

# Specific Record Type Constants

"""
DirectoryService Specific Record Type Constants
"""

"""
kDSStdRecordTypeAccessControls
Record type that contains directory access control directives.
"""
kDSStdRecordTypeAccessControls = "dsRecTypeStandard:AccessControls"

"""
kDSStdRecordTypeAFPServer
Record type of AFP server records.
"""
kDSStdRecordTypeAFPServer = "dsRecTypeStandard:AFPServer"

"""
kDSStdRecordTypeAFPUserAliases
Record type of AFP user aliases used exclusively by AFP processes.
""" kDSStdRecordTypeAFPUserAliases = "dsRecTypeStandard:AFPUserAliases" """ kDSStdRecordTypeAliases Used to represent alias records. """ kDSStdRecordTypeAliases = "dsRecTypeStandard:Aliases" """ kDSStdRecordTypeAugments Used to store augmented record data. """ kDSStdRecordTypeAugments = "dsRecTypeStandard:Augments" """ kDSStdRecordTypeAutomount Used to store automount record data. """ kDSStdRecordTypeAutomount = "dsRecTypeStandard:Automount" """ kDSStdRecordTypeAutomountMap Used to store automountMap record data. """ kDSStdRecordTypeAutomountMap = "dsRecTypeStandard:AutomountMap" """ kDSStdRecordTypeAutoServerSetup Used to discover automated server setup information. """ kDSStdRecordTypeAutoServerSetup = "dsRecTypeStandard:AutoServerSetup" """ kDSStdRecordTypeBootp Record in the local node for storing bootp info. """ kDSStdRecordTypeBootp = "dsRecTypeStandard:Bootp" """ kDSStdRecordTypeCertificateAuthority Record type that contains certificate authority information. """ kDSStdRecordTypeCertificateAuthorities = "dsRecTypeStandard:CertificateAuthorities" """ kDSStdRecordTypeComputerLists Identifies computer list records. """ kDSStdRecordTypeComputerLists = "dsRecTypeStandard:ComputerLists" """ kDSStdRecordTypeComputerGroups Identifies computer group records. """ kDSStdRecordTypeComputerGroups = "dsRecTypeStandard:ComputerGroups" """ kDSStdRecordTypeComputers Identifies computer records. """ kDSStdRecordTypeComputers = "dsRecTypeStandard:Computers" """ kDSStdRecordTypeConfig Identifies config records. """ kDSStdRecordTypeConfig = "dsRecTypeStandard:Config" """ kDSStdRecordTypeEthernets Record in the local node for storing ethernets. """ kDSStdRecordTypeEthernets = "dsRecTypeStandard:Ethernets" """ kDSStdRecordTypeFileMakerServers FileMaker servers record type. Describes available FileMaker servers, used for service discovery. """ kDSStdRecordTypeFileMakerServers = "dsRecTypeStandard:FileMakerServers" """ kDSStdRecordTypeFTPServer Identifies ftp server records. 
""" kDSStdRecordTypeFTPServer = "dsRecTypeStandard:FTPServer" """ kDSStdRecordTypeGroupAliases No longer supported in Mac OS X 10.4 or later. """ kDSStdRecordTypeGroupAliases = "dsRecTypeStandard:GroupAliases" """ kDSStdRecordTypeGroups Identifies group records. """ kDSStdRecordTypeGroups = "dsRecTypeStandard:Groups" """ kDSStdRecordTypeHostServices Record in the local node for storing host services. """ kDSStdRecordTypeHostServices = "dsRecTypeStandard:HostServices" """ kDSStdRecordTypeHosts Identifies host records. """ kDSStdRecordTypeHosts = "dsRecTypeStandard:Hosts" """ kDSStdRecordTypeLDAPServer Identifies LDAP server records. """ kDSStdRecordTypeLDAPServer = "dsRecTypeStandard:LDAPServer" """ kDSStdRecordTypeLocations Location record type. """ kDSStdRecordTypeLocations = "dsRecTypeStandard:Locations" """ kDSStdRecordTypeMachines Identifies machine records. """ kDSStdRecordTypeMachines = "dsRecTypeStandard:Machines" """ kDSStdRecordTypeMaps Identifies map records. """ kDSStdRecordTypeMaps = "dsRecTypeStandard:Maps" """ kDSStdRecordTypeMeta Identifies meta records. """ kDSStdRecordTypeMeta = "dsRecTypeStandard:AppleMetaRecord" """ kDSStdRecordTypeMounts Identifies mount records. """ kDSStdRecordTypeMounts = "dsRecTypeStandard:Mounts" """ kDSStdRecordTypMounts Supported only for backward compatibility to kDSStdRecordTypeMounts. """ kDSStdRecordTypMounts = "dsRecTypeStandard:Mounts" """ kDSStdRecordTypeNeighborhoods Neighborhood record type. Describes a list of computers and other neighborhoods, used for network browsing. """ kDSStdRecordTypeNeighborhoods = "dsRecTypeStandard:Neighborhoods" """ kDSStdRecordTypeNFS Identifies NFS records. """ kDSStdRecordTypeNFS = "dsRecTypeStandard:NFS" """ kDSStdRecordTypeNetDomains Record in the local node for storing net domains. """ kDSStdRecordTypeNetDomains = "dsRecTypeStandard:NetDomains" """ kDSStdRecordTypeNetGroups Record in the local node for storing net groups. 
""" kDSStdRecordTypeNetGroups = "dsRecTypeStandard:NetGroups" """ kDSStdRecordTypeNetworks Identifies network records. """ kDSStdRecordTypeNetworks = "dsRecTypeStandard:Networks" """ kDSStdRecordTypePasswordServer Used to discover password servers via Bonjour. """ kDSStdRecordTypePasswordServer = "dsRecTypeStandard:PasswordServer" """ kDSStdRecordTypePeople Record type that contains "People" records used for contact information. """ kDSStdRecordTypePeople = "dsRecTypeStandard:People" """ kDSStdRecordTypePlaces Identifies places (rooms) records. """ kDSStdRecordTypePlaces = "dsRecTypeStandard:Places" """ kDSStdRecordTypePresetComputers The computer record type used for presets in record creation. """ kDSStdRecordTypePresetComputers = "dsRecTypeStandard:PresetComputers" """ kDSStdRecordTypePresetComputerGroups The computer group record type used for presets in record creation. """ kDSStdRecordTypePresetComputerGroups = "dsRecTypeStandard:PresetComputerGroups" """ kDSStdRecordTypePresetComputerLists The computer list record type used for presets in record creation. """ kDSStdRecordTypePresetComputerLists = "dsRecTypeStandard:PresetComputerLists" """ kDSStdRecordTypePresetGroups The group record type used for presets in record creation. """ kDSStdRecordTypePresetGroups = "dsRecTypeStandard:PresetGroups" """ kDSStdRecordTypePresetUsers The user record type used for presets in record creation. """ kDSStdRecordTypePresetUsers = "dsRecTypeStandard:PresetUsers" """ kDSStdRecordTypePrintService Identifies print service records. """ kDSStdRecordTypePrintService = "dsRecTypeStandard:PrintService" """ kDSStdRecordTypePrintServiceUser Record in the local node for storing quota usage for a user. """ kDSStdRecordTypePrintServiceUser = "dsRecTypeStandard:PrintServiceUser" """ kDSStdRecordTypePrinters Identifies printer records. """ kDSStdRecordTypePrinters = "dsRecTypeStandard:Printers" """ kDSStdRecordTypeProtocols Identifies protocol records. 
""" kDSStdRecordTypeProtocols = "dsRecTypeStandard:Protocols" """ kDSStdRecordTypProtocols Supported only for backward compatibility to kDSStdRecordTypeProtocols. """ kDSStdRecordTypProtocols = "dsRecTypeStandard:Protocols" """ kDSStdRecordTypeQTSServer Identifies quicktime streaming server records. """ kDSStdRecordTypeQTSServer = "dsRecTypeStandard:QTSServer" """ kDSStdRecordTypeResources Identifies resources used in group services. """ kDSStdRecordTypeResources = "dsRecTypeStandard:Resources" """ kDSStdRecordTypeRPC Identifies remote procedure call records. """ kDSStdRecordTypeRPC = "dsRecTypeStandard:RPC" """ kDSStdRecordTypRPC Supported only for backward compatibility to kDSStdRecordTypeRPC. """ kDSStdRecordTypRPC = "dsRecTypeStandard:RPC" """ kDSStdRecordTypeSMBServer Identifies SMB server records. """ kDSStdRecordTypeSMBServer = "dsRecTypeStandard:SMBServer" """ kDSStdRecordTypeServer Identifies generic server records. """ kDSStdRecordTypeServer = "dsRecTypeStandard:Server" """ kDSStdRecordTypeServices Identifies directory based service records. """ kDSStdRecordTypeServices = "dsRecTypeStandard:Services" """ kDSStdRecordTypeSharePoints Share point record type. """ kDSStdRecordTypeSharePoints = "dsRecTypeStandard:SharePoints" """ kDSStdRecordTypeUserAliases No longer supported in Mac OS X 10.4 or later. """ kDSStdRecordTypeUserAliases = "dsRecTypeStandard:UserAliases" """ kDSStdRecordTypeUsers Identifies user records. """ kDSStdRecordTypeUsers = "dsRecTypeStandard:Users" """ kDSStdRecordTypeWebServer Identifies web server records. """ kDSStdRecordTypeWebServer = "dsRecTypeStandard:WebServer" # Specific Attribute Type Constants """ DirectoryService Specific Attribute Type Constants As a guideline for the attribute types the following legend is used: eDS1xxxxxx Single Valued Attribute eDSNxxxxxx Multi-Valued Attribute NOTE #1: Access controls may prevent any particular client from reading/writing various attribute types. 
In addition some attribute types may not be stored at all and could also represent "real-time" data generated by the directory node plug-in. NOTE #2: Attributes in the model are available for records and directory nodes. """ # Single Valued Specific Attribute Type Constants """ DirectoryService Single Valued Specific Attribute Type Constants """ """ kDS1AttrAdminLimits XML plist indicating what an admin user can edit. Found in kDSStdRecordTypeUsers records. """ kDS1AttrAdminLimits = "dsAttrTypeStandard:AdminLimits" """ kDS1AttrAliasData Used to identify alias data. """ kDS1AttrAliasData = "dsAttrTypeStandard:AppleAliasData" """ kDS1AttrAlternateDatastoreLocation Unix path used for determining where a user's email is stored. """ kDS1AttrAlternateDatastoreLocation = "dsAttrTypeStandard:AlternateDatastoreLocation" """ kDS1AttrAuthenticationHint Used to identify the authentication hint phrase. """ kDS1AttrAuthenticationHint = "dsAttrTypeStandard:AuthenticationHint" """ kDSNAttrAttributeTypes Used to indicated recommended attribute types for a record type in the Config node. """ kDSNAttrAttributeTypes = "dsAttrTypeStandard:AttributeTypes" """ kDS1AttrAuthorityRevocationList Attribute containing the binary of the authority revocation list. A certificate revocation list that defines certificate authority certificates which are no longer trusted. No user certificates are included in this list. Usually found in kDSStdRecordTypeCertificateAuthority records. """ kDS1AttrAuthorityRevocationList = "dsAttrTypeStandard:AuthorityRevocationList" """ kDS1AttrBirthday Single-valued attribute that defines the user's birthday. Format is x.208 standard YYYYMMDDHHMMSSZ which we will require as GMT time. """ kDS1AttrBirthday = "dsAttrTypeStandard:Birthday" """ kDS1AttrBootFile Attribute type in host or machine records for the name of the kernel that this machine will use by default when NetBooting. 
""" kDS1AttrBootFile = "dsAttrTypeStandard:BootFile" """ kDS1AttrCACertificate Attribute containing the binary of the certificate of a certificate authority. Its corresponding private key is used to sign certificates. Usually found in kDSStdRecordTypeCertificateAuthority records. """ kDS1AttrCACertificate = "dsAttrTypeStandard:CACertificate" """ kDS1AttrCapabilities Used with directory nodes so that clients can "discover" the API capabilities for this Directory Node. """ kDS1AttrCapabilities = "dsAttrTypeStandard:Capabilities" """ kDS1AttrCapacity Attribute type for the capacity of a resource. found in resource records (kDSStdRecordTypeResources). Example: 50 """ kDS1AttrCapacity = "dsAttrTypeStandard:Capacity" """ kDS1AttrCategory The category of an item used for browsing """ kDS1AttrCategory = "dsAttrTypeStandard:Category" """ kDS1AttrCertificateRevocationList Attribute containing the binary of the certificate revocation list. This is a list of certificates which are no longer trusted. Usually found in kDSStdRecordTypeCertificateAuthority records. """ kDS1AttrCertificateRevocationList = "dsAttrTypeStandard:CertificateRevocationList" """ kDS1AttrChange Retained for backward compatibility. """ kDS1AttrChange = "dsAttrTypeStandard:Change" """ kDS1AttrComment Attribute used for unformatted comment. """ kDS1AttrComment = "dsAttrTypeStandard:Comment" """ kDS1AttrContactGUID Attribute type for the contact GUID of a group. found in group records (kDSStdRecordTypeGroups). """ kDS1AttrContactGUID = "dsAttrTypeStandard:ContactGUID" """ kDS1AttrContactPerson Attribute type for the contact person of the machine. Found in host or machine records. """ kDS1AttrContactPerson = "dsAttrTypeStandard:ContactPerson" """ kDS1AttrCreationTimestamp Attribute showing date/time of record creation. Format is x.208 standard YYYYMMDDHHMMSSZ which we will require as GMT time. 
""" kDS1AttrCreationTimestamp = "dsAttrTypeStandard:CreationTimestamp" """ kDS1AttrCrossCertificatePair Attribute containing the binary of a pair of certificates which verify each other. Both certificates have the same level of authority. Usually found in kDSStdRecordTypeCertificateAuthority records. """ kDS1AttrCrossCertificatePair = "dsAttrTypeStandard:CrossCertificatePair" """ kDS1AttrDataStamp checksum/meta data """ kDS1AttrDataStamp = "dsAttrTypeStandard:DataStamp" """ kDS1AttrDistinguishedName Users distinguished or real name """ kDS1AttrDistinguishedName = "dsAttrTypeStandard:RealName" """ kDS1AttrDNSDomain DNS Resolver domain attribute. """ kDS1AttrDNSDomain = "dsAttrTypeStandard:DNSDomain" """ kDS1AttrDNSNameServer DNS Resolver nameserver attribute. """ kDS1AttrDNSNameServer = "dsAttrTypeStandard:DNSNameServer" """ kDS1AttrENetAddress Single-valued attribute for hardware Ethernet address (MAC address). Found in machine records (kDSStdRecordTypeMachines) and computer records (kDSStdRecordTypeComputers). """ kDS1AttrENetAddress = "dsAttrTypeStandard:ENetAddress" """ kDS1AttrExpire Used for expiration date or time depending on association. """ kDS1AttrExpire = "dsAttrTypeStandard:Expire" """ kDS1AttrFirstName Used for first name of user or person record. """ kDS1AttrFirstName = "dsAttrTypeStandard:FirstName" """ kDS1AttrGeneratedUID Used for 36 character (128 bit) unique ID. Usually found in user, group, and computer records. An example value is "A579E95E-CDFE-4EBC-B7E7-F2158562170F". The standard format contains 32 hex characters and four hyphen characters. """ kDS1AttrGeneratedUID = "dsAttrTypeStandard:GeneratedUID" """ kDS1AttrHomeDirectoryQuota Represents the allowed usage for a user's home directory in bytes. Found in user records (kDSStdRecordTypeUsers). 
""" kDS1AttrHomeDirectoryQuota = "dsAttrTypeStandard:HomeDirectoryQuota" """ kDS1AttrHomeDirectorySoftQuota Used to define home directory size limit in bytes when user is notified that the hard limit is approaching. """ kDS1AttrHomeDirectorySoftQuota = "dsAttrTypeStandard:HomeDirectorySoftQuota" """ kDS1AttrHomeLocOwner Represents the owner of a workgroup's shared home directory. Typically found in kDSStdRecordTypeGroups records. """ kDS1AttrHomeLocOwner = "dsAttrTypeStandard:HomeLocOwner" """ kDS1StandardAttrHomeLocOwner Retained for backward compatibility. """ kDS1StandardAttrHomeLocOwner = kDS1AttrHomeLocOwner """ kDS1AttrInternetAlias Used to track internet alias. """ kDS1AttrInternetAlias = "dsAttrTypeStandard:InetAlias" """ kDS1AttrKDCConfigData Contents of the kdc.conf file. """ kDS1AttrKDCConfigData = "dsAttrTypeStandard:KDCConfigData" """ kDS1AttrLastName Used for the last name of user or person record. """ kDS1AttrLastName = "dsAttrTypeStandard:LastName" """ kDS1AttrLDAPSearchBaseSuffix Search base suffix for a LDAP server. """ kDS1AttrLDAPSearchBaseSuffix = "dsAttrTypeStandard:LDAPSearchBaseSuffix" """ kDS1AttrLocation Represents the location a service is available from (usually domain name). Typically found in service record types including kDSStdRecordTypeAFPServer, kDSStdRecordTypeLDAPServer, and kDSStdRecordTypeWebServer. """ kDS1AttrLocation = "dsAttrTypeStandard:Location" """ kDS1AttrMapGUID Represents the GUID for a record's map. """ kDS1AttrMapGUID = "dsAttrTypeStandard:MapGUID" """ kDS1AttrMCXFlags Used by MCX. """ kDS1AttrMCXFlags = "dsAttrTypeStandard:MCXFlags" """ kDS1AttrMCXSettings Used by MCX. """ kDS1AttrMCXSettings = "dsAttrTypeStandard:MCXSettings" """ kDS1AttrMailAttribute Holds the mail account config data. """ kDS1AttrMailAttribute = "dsAttrTypeStandard:MailAttribute" """ kDS1AttrMetaAutomountMap Used to query for kDSStdRecordTypeAutomount entries associated with a specific kDSStdRecordTypeAutomountMap. 
""" kDS1AttrMetaAutomountMap = "dsAttrTypeStandard:MetaAutomountMap" """ kDS1AttrMiddleName Used for the middle name of user or person record. """ kDS1AttrMiddleName = "dsAttrTypeStandard:MiddleName" """ kDS1AttrModificationTimestamp Attribute showing date/time of record modification. Format is x.208 standard YYYYMMDDHHMMSSZ which we will require as GMT time. """ kDS1AttrModificationTimestamp = "dsAttrTypeStandard:ModificationTimestamp" """ kDSNAttrNeighborhoodAlias Attribute type in Neighborhood records describing sub-neighborhood records. """ kDSNAttrNeighborhoodAlias = "dsAttrTypeStandard:NeighborhoodAlias" """ kDS1AttrNeighborhoodType Attribute type in Neighborhood records describing their function. """ kDS1AttrNeighborhoodType = "dsAttrTypeStandard:NeighborhoodType" """ kDS1AttrNetworkView The name of the managed network view a computer should use for browsing. """ kDS1AttrNetworkView = "dsAttrTypeStandard:NetworkView" """ kDS1AttrNFSHomeDirectory Defines a user's home directory mount point on the local machine. """ kDS1AttrNFSHomeDirectory = "dsAttrTypeStandard:NFSHomeDirectory" """ kDS1AttrNote Note attribute. Commonly used in printer records. """ kDS1AttrNote = "dsAttrTypeStandard:Note" """ kDS1AttrOwner Attribute type for the owner of a record. Typically the value is a LDAP distinguished name. """ kDS1AttrOwner = "dsAttrTypeStandard:Owner" """ kDS1AttrOwnerGUID Attribute type for the owner GUID of a group. found in group records (kDSStdRecordTypeGroups). """ kDS1AttrOwnerGUID = "dsAttrTypeStandard:OwnerGUID" """ kDS1AttrPassword Holds the password or credential value. """ kDS1AttrPassword = "dsAttrTypeStandard:Password" """ kDS1AttrPasswordPlus Holds marker data to indicate possible authentication redirection. """ kDS1AttrPasswordPlus = "dsAttrTypeStandard:PasswordPlus" """ kDS1AttrPasswordPolicyOptions Collection of password policy options in single attribute. Used in user presets record. 
""" kDS1AttrPasswordPolicyOptions = "dsAttrTypeStandard:PasswordPolicyOptions" """ kDS1AttrPasswordServerList Represents the attribute for storing the password server's replication information. """ kDS1AttrPasswordServerList = "dsAttrTypeStandard:PasswordServerList" """ kDS1AttrPasswordServerLocation Specifies the IP address or domain name of the Password Server associated with a given directory node. Found in a config record named PasswordServer. """ kDS1AttrPasswordServerLocation = "dsAttrTypeStandard:PasswordServerLocation" """ kDS1AttrPicture Represents the path of the picture for each user displayed in the login window. Found in user records (kDSStdRecordTypeUsers). """ kDS1AttrPicture = "dsAttrTypeStandard:Picture" """ kDS1AttrPort Represents the port number a service is available on. Typically found in service record types including kDSStdRecordTypeAFPServer, kDSStdRecordTypeLDAPServer, and kDSStdRecordTypeWebServer. """ kDS1AttrPort = "dsAttrTypeStandard:Port" """ kDS1AttrPresetUserIsAdmin Flag to indicate whether users created from this preset are administrators by default. Found in kDSStdRecordTypePresetUsers records. """ kDS1AttrPresetUserIsAdmin = "dsAttrTypeStandard:PresetUserIsAdmin" """ kDS1AttrPrimaryComputerGUID Single-valued attribute that defines a primary computer of the computer group. added via extensible object for computer group record type (kDSStdRecordTypeComputerGroups) """ kDS1AttrPrimaryComputerGUID = "dsAttrTypeStandard:PrimaryComputerGUID" """ kDS1AttrPrimaryComputerList The GUID of the computer list with which this computer record is associated. """ kDS1AttrPrimaryComputerList = "dsAttrTypeStandard:PrimaryComputerList" """ kDS1AttrPrimaryGroupID This is the 32 bit unique ID that represents the primary group a user is part of, or the ID of a group. Format is a signed 32 bit integer represented as a string. 
""" kDS1AttrPrimaryGroupID = "dsAttrTypeStandard:PrimaryGroupID" """ kDS1AttrPrinter1284DeviceID Single-valued attribute that defines the IEEE 1284 DeviceID of a printer. This is used when configuring a printer. """ kDS1AttrPrinter1284DeviceID = "dsAttrTypeStandard:Printer1284DeviceID" """ kDS1AttrPrinterLPRHost Standard attribute type for kDSStdRecordTypePrinters. """ kDS1AttrPrinterLPRHost = "dsAttrTypeStandard:PrinterLPRHost" """ kDS1AttrPrinterLPRQueue Standard attribute type for kDSStdRecordTypePrinters. """ kDS1AttrPrinterLPRQueue = "dsAttrTypeStandard:PrinterLPRQueue" """ kDS1AttrPrinterMakeAndModel Single-valued attribute for definition of the Printer Make and Model. An example Value would be "HP LaserJet 2200". This would be used to determine the proper PPD file to be used when configuring a printer from the Directory. This attribute is based on the IPP Printing Specification RFC and IETF IPP-LDAP Printer Record. """ kDS1AttrPrinterMakeAndModel = "dsAttrTypeStandard:PrinterMakeAndModel" """ kDS1AttrPrinterType Standard attribute type for kDSStdRecordTypePrinters. """ kDS1AttrPrinterType = "dsAttrTypeStandard:PrinterType" """ kDS1AttrPrinterURI Single-valued attribute that defines the URI of a printer "ipp://address" or "smb://server/queue". This is used when configuring a printer. This attribute is based on the IPP Printing Specification RFC and IETF IPP-LDAP Printer Record. """ kDS1AttrPrinterURI = "dsAttrTypeStandard:PrinterURI" """ kDS1AttrPrinterXRISupported Multi-valued attribute that defines additional URIs supported by a printer. This is used when configuring a printer. This attribute is based on the IPP Printing Specification RFC and IETF IPP-LDAP Printer Record. """ kDSNAttrPrinterXRISupported = "dsAttrTypeStandard:PrinterXRISupported" """ kDS1AttrPrintServiceInfoText Standard attribute type for kDSStdRecordTypePrinters. 
""" kDS1AttrPrintServiceInfoText = "dsAttrTypeStandard:PrintServiceInfoText" """ kDS1AttrPrintServiceInfoXML Standard attribute type for kDSStdRecordTypePrinters. """ kDS1AttrPrintServiceInfoXML = "dsAttrTypeStandard:PrintServiceInfoXML" """ kDS1AttrPrintServiceUserData Single-valued attribute for print quota configuration or statistics (XML data). Found in user records (kDSStdRecordTypeUsers) or print service statistics records (kDSStdRecordTypePrintServiceUser). """ kDS1AttrPrintServiceUserData = "dsAttrTypeStandard:PrintServiceUserData" """ kDS1AttrRealUserID Used by MCX. """ kDS1AttrRealUserID = "dsAttrTypeStandard:RealUserID" """ kDS1AttrRelativeDNPrefix Used to map the first native LDAP attribute type required in the building of the Relative Distinguished Name for LDAP record creation. """ kDS1AttrRelativeDNPrefix = "dsAttrTypeStandard:RelativeDNPrefix" """ kDS1AttrSMBAcctFlags Account control flag. """ kDS1AttrSMBAcctFlags = "dsAttrTypeStandard:SMBAccountFlags" """ kDS1AttrSMBGroupRID Constant for supporting PDC SMB interaction with DS. """ kDS1AttrSMBGroupRID = "dsAttrTypeStandard:SMBGroupRID" """ kDS1AttrSMBHome UNC address of Windows homedirectory mount point (\\server\\sharepoint). """ kDS1AttrSMBHome = "dsAttrTypeStandard:SMBHome" """ kDS1AttrSMBHomeDrive Drive letter for homedirectory mount point. """ kDS1AttrSMBHomeDrive = "dsAttrTypeStandard:SMBHomeDrive" """ kDS1AttrSMBKickoffTime Attribute in support of SMB interaction. """ kDS1AttrSMBKickoffTime = "dsAttrTypeStandard:SMBKickoffTime" """ kDS1AttrSMBLogoffTime Attribute in support of SMB interaction. """ kDS1AttrSMBLogoffTime = "dsAttrTypeStandard:SMBLogoffTime" """ kDS1AttrSMBLogonTime Attribute in support of SMB interaction. """ kDS1AttrSMBLogonTime = "dsAttrTypeStandard:SMBLogonTime" """ kDS1AttrSMBPrimaryGroupSID SMB Primary Group Security ID, stored as a string attribute of up to 64 bytes. 
Found in user, group, and computer records (kDSStdRecordTypeUsers, kDSStdRecordTypeGroups, kDSStdRecordTypeComputers). """ kDS1AttrSMBPrimaryGroupSID = "dsAttrTypeStandard:SMBPrimaryGroupSID" """ kDS1AttrSMBPWDLastSet Attribute in support of SMB interaction. """ kDS1AttrSMBPWDLastSet = "dsAttrTypeStandard:SMBPasswordLastSet" """ kDS1AttrSMBProfilePath Desktop management info (dock, desktop links, etc.). """ kDS1AttrSMBProfilePath = "dsAttrTypeStandard:SMBProfilePath" """ kDS1AttrSMBRID Attribute in support of SMB interaction. """ kDS1AttrSMBRID = "dsAttrTypeStandard:SMBRID" """ kDS1AttrSMBScriptPath Login script path. """ kDS1AttrSMBScriptPath = "dsAttrTypeStandard:SMBScriptPath" """ kDS1AttrSMBSID SMB Security ID, stored as a string attribute of up to 64 bytes. Found in user, group, and computer records (kDSStdRecordTypeUsers, kDSStdRecordTypeGroups, kDSStdRecordTypeComputers). """ kDS1AttrSMBSID = "dsAttrTypeStandard:SMBSID" """ kDS1AttrSMBUserWorkstations List of workstations a user can log in from (machine account names). """ kDS1AttrSMBUserWorkstations = "dsAttrTypeStandard:SMBUserWorkstations" """ kDS1AttrServiceType Represents the service type for the service. This is the raw service type of the service. For example, a service record type of kDSStdRecordTypeWebServer might have a service type of "http" or "https". """ kDS1AttrServiceType = "dsAttrTypeStandard:ServiceType" """ kDS1AttrSetupAdvertising Used for Setup Assistant automatic population. """ kDS1AttrSetupAdvertising = "dsAttrTypeStandard:SetupAssistantAdvertising" """ kDS1AttrSetupAutoRegister Used for Setup Assistant automatic population. """ kDS1AttrSetupAutoRegister = "dsAttrTypeStandard:SetupAssistantAutoRegister" """ kDS1AttrSetupLocation Used for Setup Assistant automatic population. """ kDS1AttrSetupLocation = "dsAttrTypeStandard:SetupAssistantLocation" """ kDS1AttrSetupOccupation Used for Setup Assistant automatic population.
""" kDS1AttrSetupOccupation = "dsAttrTypeStandard:Occupation" """ kDS1AttrTimeToLive Attribute recommending how long to cache the record's attribute values. Format is an unsigned 32 bit representing seconds. i.e. 300 is 5 minutes. """ kDS1AttrTimeToLive = "dsAttrTypeStandard:TimeToLive" """ kDS1AttrUniqueID This is the 32 bit unique ID that represents the user in the legacy manner. Format is a signed integer represented as a string. """ kDS1AttrUniqueID = "dsAttrTypeStandard:UniqueID" """ kDS1AttrUserCertificate Attribute containing the binary of the user's certificate. Usually found in user records. The certificate is data which identifies a user. This data is attested to by a known party, and can be independently verified by a third party. """ kDS1AttrUserCertificate = "dsAttrTypeStandard:UserCertificate" """ kDS1AttrUserPKCS12Data Attribute containing binary data in PKCS #12 format. Usually found in user records. The value can contain keys, certificates, and other related information and is encrypted with a passphrase. """ kDS1AttrUserPKCS12Data = "dsAttrTypeStandard:UserPKCS12Data" """ kDS1AttrUserShell Used to represent the user's shell setting. """ kDS1AttrUserShell = "dsAttrTypeStandard:UserShell" """ kDS1AttrUserSMIMECertificate Attribute containing the binary of the user's SMIME certificate. Usually found in user records. The certificate is data which identifies a user. This data is attested to by a known party, and can be independently verified by a third party. SMIME certificates are often used for signed or encrypted emails. """ kDS1AttrUserSMIMECertificate = "dsAttrTypeStandard:UserSMIMECertificate" """ kDS1AttrVFSDumpFreq Attribute used to support mount records. """ kDS1AttrVFSDumpFreq = "dsAttrTypeStandard:VFSDumpFreq" """ kDS1AttrVFSLinkDir Attribute used to support mount records. """ kDS1AttrVFSLinkDir = "dsAttrTypeStandard:VFSLinkDir" """ kDS1AttrVFSPassNo Attribute used to support mount records. 
""" kDS1AttrVFSPassNo = "dsAttrTypeStandard:VFSPassNo" """ kDS1AttrVFSType Attribute used to support mount records. """ kDS1AttrVFSType = "dsAttrTypeStandard:VFSType" """ kDS1AttrWeblogURI Single-valued attribute that defines the URI of a user's weblog. Usually found in user records (kDSStdRecordTypeUsers). Example: http://example.com/blog/jsmith """ kDS1AttrWeblogURI = "dsAttrTypeStandard:WeblogURI" """ kDS1AttrXMLPlist SA config settings plist. """ kDS1AttrXMLPlist = "dsAttrTypeStandard:XMLPlist" """ kDS1AttrProtocolNumber Single-valued attribute that defines a protocol number. Usually found in protocol records (kDSStdRecordTypeProtocols) """ kDS1AttrProtocolNumber = "dsAttrTypeStandard:ProtocolNumber" """ kDS1AttrRPCNumber Single-valued attribute that defines an RPC number. Usually found in RPC records (kDSStdRecordTypeRPC) """ kDS1AttrRPCNumber = "dsAttrTypeStandard:RPCNumber" """ kDS1AttrNetworkNumber Single-valued attribute that defines a network number. Usually found in network records (kDSStdRecordTypeNetworks) """ kDS1AttrNetworkNumber = "dsAttrTypeStandard:NetworkNumber" # Multiple Valued Specific Attribute Type Constants """ DirectoryService Multiple Valued Specific Attribute Type Constants """ """ kDSNAttrAccessControlEntry Attribute type which stores directory access control directives. """ kDSNAttrAccessControlEntry = "dsAttrTypeStandard:AccessControlEntry" """ kDSNAttrAddressLine1 Line one of multiple lines of address data for a user. """ kDSNAttrAddressLine1 = "dsAttrTypeStandard:AddressLine1" """ kDSNAttrAddressLine2 Line two of multiple lines of address data for a user. """ kDSNAttrAddressLine2 = "dsAttrTypeStandard:AddressLine2" """ kDSNAttrAddressLine3 Line three of multiple lines of address data for a user. """ kDSNAttrAddressLine3 = "dsAttrTypeStandard:AddressLine3" """ kDSNAttrAltSecurityIdentities Alternative Security Identities such as Kerberos principals. 
""" kDSNAttrAltSecurityIdentities = "dsAttrTypeStandard:AltSecurityIdentities" """ kDSNAttrAreaCode Area code of a user's phone number. """ kDSNAttrAreaCode = "dsAttrTypeStandard:AreaCode" """ kDSNAttrAuthenticationAuthority Determines what mechanism is used to verify or set a user's password. If multiple values are present, the first attributes returned take precedence. Typically found in User records (kDSStdRecordTypeUsers). """ kDSNAttrAuthenticationAuthority = "dsAttrTypeStandard:AuthenticationAuthority" """ kDSNAttrAutomountInformation Used to store automount information in kDSStdRecordTypeAutomount records. """ kDSNAttrAutomountInformation = "dsAttrTypeStandard:AutomountInformation" """ kDSNAttrBootParams Attribute type in host or machine records for storing boot params. """ kDSNAttrBootParams = "dsAttrTypeStandard:BootParams" """ kDSNAttrBuilding Represents the building name for a user or person record. Usually found in user or people records (kDSStdRecordTypeUsers or kDSStdRecordTypePeople). """ kDSNAttrBuilding = "dsAttrTypeStandard:Building" """ kDSNAttrCalendarPrincipalURI the URI for a record's calendar """ kDSNAttrCalendarPrincipalURI = "dsAttrTypeStandard:CalendarPrincipalURI" """ kDSNAttrCity Usually, city for a user or person record. Usually found in user or people records (kDSStdRecordTypeUsers or kDSStdRecordTypePeople). """ kDSNAttrCity = "dsAttrTypeStandard:City" """ kDSNAttrCompany attribute that defines the user's company. Example: Apple Computer, Inc """ kDSNAttrCompany = "dsAttrTypeStandard:Company" """ kDSNAttrComputerAlias Attribute type in Neighborhood records describing computer records pointed to by this neighborhood. """ kDSNAttrComputerAlias = "dsAttrTypeStandard:ComputerAlias" """ kDSNAttrComputers List of computers. """ kDSNAttrComputers = "dsAttrTypeStandard:Computers" """ kDSNAttrCountry Represents country of a record entry. Usually found in user or people records (kDSStdRecordTypeUsers or kDSStdRecordTypePeople). 
""" kDSNAttrCountry = "dsAttrTypeStandard:Country" """ kDSNAttrDepartment Represents the department name of a user or person. Usually found in user or people records (kDSStdRecordTypeUsers or kDSStdRecordTypePeople). """ kDSNAttrDepartment = "dsAttrTypeStandard:Department" """ kDSNAttrDNSName Domain Name Service name. """ kDSNAttrDNSName = "dsAttrTypeStandard:DNSName" """ kDSNAttrEMailAddress Email address of usually a user record. """ kDSNAttrEMailAddress = "dsAttrTypeStandard:EMailAddress" """ kDSNAttrEMailContacts multi-valued attribute that defines a record's custom email addresses . found in user records (kDSStdRecordTypeUsers). Example: home:johndoe@mymail.com """ kDSNAttrEMailContacts = "dsAttrTypeStandard:EMailContacts" """ kDSNAttrFaxNumber Represents the FAX numbers of a user or person. Usually found in user or people records (kDSStdRecordTypeUsers or kDSStdRecordTypePeople). """ kDSNAttrFaxNumber = "dsAttrTypeStandard:FAXNumber" """ kDSNAttrGroup List of groups. """ kDSNAttrGroup = "dsAttrTypeStandard:Group" """ kDSNAttrGroupMembers Attribute type in group records containing lists of GUID values for members other than groups. """ kDSNAttrGroupMembers = "dsAttrTypeStandard:GroupMembers" """ kDSNAttrGroupMembership Usually a list of users that below to a given group record. """ kDSNAttrGroupMembership = "dsAttrTypeStandard:GroupMembership" """ kDSNAttrGroupServices xml-plist attribute that defines a group's services . found in group records (kDSStdRecordTypeGroups). """ kDSNAttrGroupServices = "dsAttrTypeStandard:GroupServices" """ kDSNAttrHomePhoneNumber Home telephone number of a user or person. """ kDSNAttrHomePhoneNumber = "dsAttrTypeStandard:HomePhoneNumber" """ kDSNAttrHTML HTML location. """ kDSNAttrHTML = "dsAttrTypeStandard:HTML" """ kDSNAttrHomeDirectory Network home directory URL. """ kDSNAttrHomeDirectory = "dsAttrTypeStandard:HomeDirectory" """ kDSNAttrIMHandle Represents the Instant Messaging handles of a user. 
Values should be prefixed with the appropriate IM type (e.g. AIM:, Jabber:, MSN:, Yahoo:, or ICQ:). Usually found in user records (kDSStdRecordTypeUsers). """ kDSNAttrIMHandle = "dsAttrTypeStandard:IMHandle" """ kDSNAttrIPAddress IP address expressed either as domain or IP notation. """ kDSNAttrIPAddress = "dsAttrTypeStandard:IPAddress" """ kDSNAttrIPAddressAndENetAddress A pairing of IPv4 or IPv6 addresses with Ethernet addresses (e.g., "10.1.1.1/00:16:cb:92:56:41"). Usually found on kDSStdRecordTypeComputers for use by services that need specific pairing of the two values. This should be in addition to kDSNAttrIPAddress, kDSNAttrIPv6Address and kDS1AttrENetAddress. This is necessary because not all directories return attribute values in a guaranteed order. """ kDSNAttrIPAddressAndENetAddress = "dsAttrTypeStandard:IPAddressAndENetAddress" """ kDSNAttrIPv6Address IPv6 address expressed in the standard notation (e.g., "fe80::236:caff:fcc2:5641"). Usually found on kDSStdRecordTypeComputers, kDSStdRecordTypeHosts, and kDSStdRecordTypeMachines. """ kDSNAttrIPv6Address = "dsAttrTypeStandard:IPv6Address" """ kDSNAttrJPEGPhoto Used to store binary picture data in JPEG format. Usually found in user, people or group records (kDSStdRecordTypeUsers, kDSStdRecordTypePeople, kDSStdRecordTypeGroups). """ kDSNAttrJPEGPhoto = "dsAttrTypeStandard:JPEGPhoto" """ kDSNAttrJobTitle Represents the job title of a user. Usually found in user or people records (kDSStdRecordTypeUsers or kDSStdRecordTypePeople). """ kDSNAttrJobTitle = "dsAttrTypeStandard:JobTitle" """ kDSNAttrKDCAuthKey KDC master key RSA encrypted with realm public key. """ kDSNAttrKDCAuthKey = "dsAttrTypeStandard:KDCAuthKey" """ kDSNAttrKeywords Keywords used for searching. """ kDSNAttrKeywords = "dsAttrTypeStandard:Keywords" """ kDSNAttrLDAPReadReplicas List of LDAP server URLs which can each be used to read directory data.
""" kDSNAttrLDAPReadReplicas = "dsAttrTypeStandard:LDAPReadReplicas" """ kDSNAttrLDAPWriteReplicas List of LDAP server URLs which can each be used to write directory data. """ kDSNAttrLDAPWriteReplicas = "dsAttrTypeStandard:LDAPWriteReplicas" """ kDSNAttrMachineServes Attribute type in host or machine records for storing NetInfo domains served. """ kDSNAttrMachineServes = "dsAttrTypeStandard:MachineServes" """ kDSNAttrMapCoordinates attribute that defines coordinates for a user's location . * found in user records (kDSStdRecordTypeUsers) and resource records (kDSStdRecordTypeResources). Example: 7.7,10.6 """ kDSNAttrMapCoordinates = "dsAttrTypeStandard:MapCoordinates" """ kDSNAttrMapURI attribute that defines the URI of a user's location. Usually found in user records (kDSStdRecordTypeUsers). Example: http://example.com/bldg1 """ kDSNAttrMapURI = "dsAttrTypeStandard:MapURI" """ kDSNAttrMCXSettings Used by MCX. """ kDSNAttrMCXSettings = "dsAttrTypeStandard:MCXSettings" """ kDSNAttrMIME Data contained in this attribute type is a fully qualified MIME Type. """ kDSNAttrMIME = "dsAttrTypeStandard:MIME" """ kDSNAttrMember List of member records. """ kDSNAttrMember = "dsAttrTypeStandard:Member" """ kDSNAttrMobileNumber Represents the mobile numbers of a user or person. Usually found in user or people records (kDSStdRecordTypeUsers or kDSStdRecordTypePeople). """ kDSNAttrMobileNumber = "dsAttrTypeStandard:MobileNumber" """ kDSNAttrNBPEntry Appletalk data. """ kDSNAttrNBPEntry = "dsAttrTypeStandard:NBPEntry" """ kDSNAttrNestedGroups Attribute type in group records for the list of GUID values for nested groups. """ kDSNAttrNestedGroups = "dsAttrTypeStandard:NestedGroups" """ kDSNAttrNetGroups Attribute type that indicates which netgroups its record is a member of. Found in user, host, and netdomain records. """ kDSNAttrNetGroups = "dsAttrTypeStandard:NetGroups" """ kDSNAttrNickName Represents the nickname of a user or person. 
Usually found in user or people records (kDSStdRecordTypeUsers or kDSStdRecordTypePeople). """ kDSNAttrNickName = "dsAttrTypeStandard:NickName" """ kDSNAttrNodePathXMLPlist Attribute type in Neighborhood records describing the DS Node to search while looking up aliases in this neighborhood. """ kDSNAttrNodePathXMLPlist = "dsAttrTypeStandard:NodePathXMLPlist" """ kDSNAttrOrganizationInfo Usually the organization info of a user. """ kDSNAttrOrganizationInfo = "dsAttrTypeStandard:OrganizationInfo" """ kDSNAttrOrganizationName Usually the organization of a user. """ kDSNAttrOrganizationName = "dsAttrTypeStandard:OrganizationName" """ kDSNAttrPagerNumber Represents the pager numbers of a user or person. Usually found in user or people records (kDSStdRecordTypeUsers or kDSStdRecordTypePeople). """ kDSNAttrPagerNumber = "dsAttrTypeStandard:PagerNumber" """ kDSNAttrPhoneContacts Multi-valued attribute that defines a record's custom phone numbers. Found in user records (kDSStdRecordTypeUsers). Example: home fax:408-555-4444 """ kDSNAttrPhoneContacts = "dsAttrTypeStandard:PhoneContacts" """ kDSNAttrPhoneNumber Telephone number of a user. """ kDSNAttrPhoneNumber = "dsAttrTypeStandard:PhoneNumber" """ kDSNAttrPGPPublicKey Pretty Good Privacy public encryption key. """ kDSNAttrPGPPublicKey = "dsAttrTypeStandard:PGPPublicKey" """ kDSNAttrPostalAddress The postal address usually excluding postal code. """ kDSNAttrPostalAddress = "dsAttrTypeStandard:PostalAddress" """ kDSNAttrPostalAddressContacts Multi-valued attribute that defines a record's alternate postal addresses. Found in user records (kDSStdRecordTypeUsers) and resource records (kDSStdRecordTypeResources). """ kDSNAttrPostalAddressContacts = "dsAttrTypeStandard:PostalAddressContacts" """ kDSNAttrPostalCode The postal code such as zip code in the USA. """ kDSNAttrPostalCode = "dsAttrTypeStandard:PostalCode" """ kDSNAttrNamePrefix Represents the title prefix of a user or person, e.g. Mr., Ms., Mrs., Dr., etc.
Usually found in user or people records (kDSStdRecordTypeUsers or kDSStdRecordTypePeople). """ kDSNAttrNamePrefix = "dsAttrTypeStandard:NamePrefix" """ kDSNAttrProtocols List of protocols. """ kDSNAttrProtocols = "dsAttrTypeStandard:Protocols" """ kDSNAttrRecordName List of names/keys for this record. """ kDSNAttrRecordName = "dsAttrTypeStandard:RecordName" """ kDSNAttrRelationships Multi-valued attribute that defines the relationship to the record type. Found in user records (kDSStdRecordTypeUsers). Example: brother:John """ kDSNAttrRelationships = "dsAttrTypeStandard:Relationships" """ kDSNAttrResourceInfo Multi-valued attribute that defines a resource record's info. """ kDSNAttrResourceInfo = "dsAttrTypeStandard:ResourceInfo" """ kDSNAttrResourceType Attribute type for the kind of resource. Found in resource records (kDSStdRecordTypeResources). Example: ConferenceRoom """ kDSNAttrResourceType = "dsAttrTypeStandard:ResourceType" """ kDSNAttrServicesLocator Attribute describing the services hosted for the record. """ kDSNAttrServicesLocator = "dsAttrTypeStandard:ServicesLocator" """ kDSNAttrState The state or province of a country. """ kDSNAttrState = "dsAttrTypeStandard:State" """ kDSNAttrStreet Represents the street address of a user or person. Usually found in user or people records (kDSStdRecordTypeUsers or kDSStdRecordTypePeople). """ kDSNAttrStreet = "dsAttrTypeStandard:Street" """ kDSNAttrNameSuffix Represents the name suffix of a user or person, e.g. Jr., Sr., etc. Usually found in user or people records (kDSStdRecordTypeUsers or kDSStdRecordTypePeople). """ kDSNAttrNameSuffix = "dsAttrTypeStandard:NameSuffix" """ kDSNAttrURL List of URLs. """ kDSNAttrURL = "dsAttrTypeStandard:URL" """ kDSNAttrURLForNSL List of URLs used by NSL. """ kDSNAttrURLForNSL = "dsAttrTypeStandard:URLForNSL" """ kDSNAttrVFSOpts Used in support of mount records.
""" kDSNAttrVFSOpts = "dsAttrTypeStandard:VFSOpts" # Other Attribute Type Constants """ DirectoryService Other Attribute Type Constants Not Mapped by Directory Node Plugins Mainly used internally by the DirectoryService Daemon or made available via dsGetDirNodeInfo() """ """ kDS1AttrAdminStatus Retained only for backward compatibility. """ kDS1AttrAdminStatus = "dsAttrTypeStandard:AdminStatus" """ kDS1AttrAlias Alias attribute, contain pointer to another node/record/attribute. """ kDS1AttrAlias = "dsAttrTypeStandard:Alias" """ kDS1AttrAuthCredential An "auth" credential, to be used to authenticate to other Directory nodes. """ kDS1AttrAuthCredential = "dsAttrTypeStandard:AuthCredential" """ kDS1AttrCopyTimestamp Timestamp used in local account caching. """ kDS1AttrCopyTimestamp = "dsAttrTypeStandard:CopyTimestamp" """ kDS1AttrDateRecordCreated Date of record creation. """ kDS1AttrDateRecordCreated = "dsAttrTypeStandard:DateRecordCreated" """ kDS1AttrKerberosRealm Supports Kerberized SMB Server services. """ kDS1AttrKerberosRealm = "dsAttrTypeStandard:KerberosRealm" """ kDS1AttrNTDomainComputerAccount Supports Kerberized SMB Server services. """ kDS1AttrNTDomainComputerAccount = "dsAttrTypeStandard:NTDomainComputerAccount" """ kDSNAttrOriginalHomeDirectory Home directory URL used in local account caching. """ kDSNAttrOriginalHomeDirectory = "dsAttrTypeStandard:OriginalHomeDirectory" """ kDS1AttrOriginalNFSHomeDirectory NFS home directory used in local account caching. """ kDS1AttrOriginalNFSHomeDirectory = "dsAttrTypeStandard:OriginalNFSHomeDirectory" """ kDS1AttrOriginalNodeName Nodename used in local account caching. """ kDS1AttrOriginalNodeName = "dsAttrTypeStandard:OriginalNodeName" """ kDS1AttrPrimaryNTDomain Supports Kerberized SMB Server services. """ kDS1AttrPrimaryNTDomain = "dsAttrTypeStandard:PrimaryNTDomain" """ kDS1AttrPwdAgingPolicy Contains the password aging policy data for an authentication capable record. 
""" kDS1AttrPwdAgingPolicy = "dsAttrTypeStandard:PwdAgingPolicy" """ kDS1AttrRARA Retained only for backward compatibility. """ kDS1AttrRARA = "dsAttrTypeStandard:RARA" """ kDS1AttrReadOnlyNode Can be found using dsGetDirNodeInfo and will return one of ReadOnly, ReadWrite, or WriteOnly strings. Note that ReadWrite does not imply fully readable or writable """ kDS1AttrReadOnlyNode = "dsAttrTypeStandard:ReadOnlyNode" """ kDS1AttrRecordImage A binary image of the record and all it's attributes. Has never been supported. """ kDS1AttrRecordImage = "dsAttrTypeStandard:RecordImage" """ kDS1AttrSMBGroupRID Attributefor supporting PDC SMB interaction. """ kDS1AttrSMBGroupRID = "dsAttrTypeStandard:SMBGroupRID" """ kDS1AttrTimePackage Data of Create, Modify, Backup time in UTC. """ kDS1AttrTimePackage = "dsAttrTypeStandard:TimePackage" """ kDS1AttrTotalSize checksum/meta data. """ kDS1AttrTotalSize = "dsAttrTypeStandard:TotalSize" """ kDSNAttrAllNames Backward compatibility only - all possible names for a record. Has never been supported. """ kDSNAttrAllNames = "dsAttrTypeStandard:AllNames" """ kDSNAttrAuthMethod Authentication method for an authentication capable record. """ kDSNAttrAuthMethod = "dsAttrTypeStandard:AuthMethod" """ kDSNAttrMetaNodeLocation Meta attribute returning registered node name by directory node plugin. """ kDSNAttrMetaNodeLocation = "dsAttrTypeStandard:AppleMetaNodeLocation" """ kDSNAttrNodePath Sub strings of a Directory Service Node given in order. """ kDSNAttrNodePath = "dsAttrTypeStandard:NodePath" """ kDSNAttrPlugInInfo Information (version, signature, about, credits, etc.) about the plug-in that is actually servicing a particular directory node. Has never been supported. """ kDSNAttrPlugInInfo = "dsAttrTypeStandard:PlugInInfo" """ kDSNAttrRecordAlias No longer supported in Mac OS X 10.4 or later. """ kDSNAttrRecordAlias = "dsAttrTypeStandard:RecordAlias" """ kDSNAttrRecordType Single Valued for a Record, Multi-valued for a Directory Node. 
""" kDSNAttrRecordType = "dsAttrTypeStandard:RecordType" """ kDSNAttrSchema List of attribute types. """ kDSNAttrSchema = "dsAttrTypeStandard:Scheama" """ kDSNAttrSetPasswdMethod Retained only for backward compatibility. """ kDSNAttrSetPasswdMethod = "dsAttrTypeStandard:SetPasswdMethod" """ kDSNAttrSubNodes Attribute of a node which lists the available subnodes of that node. """ kDSNAttrSubNodes = "dsAttrTypeStandard:SubNodes" """ kStandardSourceAlias No longer supported in Mac OS X 10.4 or later. """ kStandardSourceAlias = "dsAttrTypeStandard:AppleMetaAliasSource" """ kStandardTargetAlias No longer supported in Mac OS X 10.4 or later. """ kStandardTargetAlias = "dsAttrTypeStandard:AppleMetaAliasTarget" """ kDSNAttrNetGroupTriplet Multivalued attribute that defines the host, user and domain triplet combinations to support NetGroups. Each attribute value is comma separated string to maintain the triplet (e.g., host,user,domain). """ kDSNAttrNetGroupTriplet = "dsAttrTypeStandard:NetGroupTriplet" # Search Node attribute type Constants """ Search Node attribute type Constants """ """ kDS1AttrSearchPath Search path used by the search node. """ kDS1AttrSearchPath = "dsAttrTypeStandard:SearchPath" """ kDSNAttrSearchPath Retained only for backward compatibility. """ kDSNAttrSearchPath = "dsAttrTypeStandard:SearchPath" """ kDS1AttrSearchPolicy Search policy for the search node. """ kDS1AttrSearchPolicy = "dsAttrTypeStandard:SearchPolicy" """ kDS1AttrNSPSearchPath Automatic search path defined by the search node. """ kDS1AttrNSPSearchPath = "dsAttrTypeStandard:NSPSearchPath" """ kDSNAttrNSPSearchPath Retained only for backward compatibility. """ kDSNAttrNSPSearchPath = "dsAttrTypeStandard:NSPSearchPath" """ kDS1AttrLSPSearchPath Local only search path defined by the search node. """ kDS1AttrLSPSearchPath = "dsAttrTypeStandard:LSPSearchPath" """ kDSNAttrLSPSearchPath Retained only for backward compatibility. 
""" kDSNAttrLSPSearchPath = "dsAttrTypeStandard:LSPSearchPath" """ kDS1AttrCSPSearchPath Admin user configured custom search path defined by the search node. """ kDS1AttrCSPSearchPath = "dsAttrTypeStandard:CSPSearchPath" """ kDSNAttrCSPSearchPath Retained only for backward compatibility. """ kDSNAttrCSPSearchPath = "dsAttrTypeStandard:CSPSearchPath" calendarserver-9.1+dfsg/contrib/od/odframework.py000066400000000000000000000004151315003562600222370ustar00rootroot00000000000000import objc as _objc __bundle__ = _objc.initFrameworkWrapper( "OpenDirectory", frameworkIdentifier="com.apple.OpenDirectory", frameworkPath=_objc.pathForFramework( "/System/Library/Frameworks/OpenDirectory.framework" ), globals=globals() ) calendarserver-9.1+dfsg/contrib/od/setup_directory.py000066400000000000000000000373771315003562600231640ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from __future__ import print_function import os import sys import odframework import dsattributes from getopt import getopt, GetoptError # TODO: Nested groups # TODO: GroupMembership masterNodeName = "/LDAPv3/127.0.0.1" localNodeName = "/Local/Default" saclGroupNodeName = "/Local/Default" saclGroupNames = ("com.apple.access_calendar", "com.apple.access_addressbook") masterUsers = [ ( "odtestamanda", { dsattributes.kDS1AttrFirstName: ["Amanda"], dsattributes.kDS1AttrLastName: ["Test"], dsattributes.kDS1AttrDistinguishedName: ["Amanda Test"], dsattributes.kDSNAttrEMailAddress: ["amanda@example.com"], dsattributes.kDS1AttrGeneratedUID: ["9DC04A70-E6DD-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrUniqueID: ["33300"], dsattributes.kDS1AttrPrimaryGroupID: ["20"], }, ), ( "odtestbetty", { dsattributes.kDS1AttrFirstName: ["Betty"], dsattributes.kDS1AttrLastName: ["Test"], dsattributes.kDS1AttrDistinguishedName: ["Betty Test"], dsattributes.kDSNAttrEMailAddress: ["betty@example.com"], dsattributes.kDS1AttrGeneratedUID: ["9DC04A71-E6DD-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrUniqueID: ["33301"], dsattributes.kDS1AttrPrimaryGroupID: ["20"], }, ), ( "odtestcarlene", { dsattributes.kDS1AttrFirstName: ["Carlene"], dsattributes.kDS1AttrLastName: ["Test"], dsattributes.kDS1AttrDistinguishedName: ["Carlene Test"], dsattributes.kDSNAttrEMailAddress: ["carlene@example.com"], dsattributes.kDS1AttrGeneratedUID: ["9DC04A72-E6DD-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrUniqueID: ["33302"], dsattributes.kDS1AttrPrimaryGroupID: ["20"], }, ), ( "odtestdenise", { dsattributes.kDS1AttrFirstName: ["Denise"], dsattributes.kDS1AttrLastName: ["Test"], dsattributes.kDS1AttrDistinguishedName: ["Denise Test"], dsattributes.kDSNAttrEMailAddress: ["denise@example.com"], dsattributes.kDS1AttrGeneratedUID: ["9DC04A73-E6DD-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrUniqueID: ["33303"], dsattributes.kDS1AttrPrimaryGroupID: ["20"], }, ), ( "odtestunicode", { 
dsattributes.kDS1AttrFirstName: ["Unicode " + unichr(208)], dsattributes.kDS1AttrLastName: ["Test"], dsattributes.kDS1AttrDistinguishedName: ["Unicode Test " + unichr(208)], dsattributes.kDSNAttrEMailAddress: ["unicodetest@example.com"], dsattributes.kDS1AttrGeneratedUID: ["CA795296-D77A-4E09-A72F-869920A3D284"], dsattributes.kDS1AttrUniqueID: ["33304"], dsattributes.kDS1AttrPrimaryGroupID: ["20"], }, ), ( "odtestat@sign", { dsattributes.kDS1AttrFirstName: ["AtSign"], dsattributes.kDS1AttrLastName: ["Test"], dsattributes.kDS1AttrDistinguishedName: ["At Sign Test"], dsattributes.kDSNAttrEMailAddress: ["attsign@example.com"], dsattributes.kDS1AttrGeneratedUID: ["71646A3A-1CEF-4744-AB1D-0AC855E25DC8"], dsattributes.kDS1AttrUniqueID: ["33305"], dsattributes.kDS1AttrPrimaryGroupID: ["20"], }, ), ( "odtestsatou", { dsattributes.kDS1AttrFirstName: ["\xe4\xbd\x90\xe8\x97\xa4\xe4\xbd\x90\xe8\x97\xa4\xe4\xbd\x90\xe8\x97\xa4".decode("utf-8")], dsattributes.kDS1AttrLastName: ["Test \xe4\xbd\x90\xe8\x97\xa4".decode("utf-8")], dsattributes.kDS1AttrDistinguishedName: ["\xe4\xbd\x90\xe8\x97\xa4\xe4\xbd\x90\xe8\x97\xa4\xe4\xbd\x90\xe8\x97\xa4 Test \xe4\xbd\x90\xe8\x97\xa4".decode("utf-8")], dsattributes.kDSNAttrEMailAddress: ["satou@example.com"], dsattributes.kDS1AttrGeneratedUID: ["C662F833-75AD-4589-9879-5FF102943CEF"], dsattributes.kDS1AttrUniqueID: ["33306"], dsattributes.kDS1AttrPrimaryGroupID: ["20"], }, ), ( "anotherodtestamanda", { dsattributes.kDS1AttrFirstName: ["Amanda"], dsattributes.kDS1AttrLastName: ["Test"], dsattributes.kDS1AttrDistinguishedName: ["Amanda Test"], dsattributes.kDSNAttrEMailAddress: ["anotheramanda@example.com"], dsattributes.kDS1AttrGeneratedUID: ["E7666814-6D92-49EC-8562-8C4C3D64A4B0"], dsattributes.kDS1AttrUniqueID: ["33307"], dsattributes.kDS1AttrPrimaryGroupID: ["20"], }, ), ] masterGroups = [ ( "odtestsubgroupb", { dsattributes.kDS1AttrGeneratedUID: ["6C6CD282-E6E3-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrDistinguishedName: ["OD Test 
Subgroup B"], dsattributes.kDSNAttrGroupMembers: ["9DC04A72-E6DD-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrPrimaryGroupID: ["33401"], }, ), ( "odtestgrouptop", { dsattributes.kDS1AttrGeneratedUID: ["6C6CD280-E6E3-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrDistinguishedName: ["OD Test Group Top"], dsattributes.kDSNAttrGroupMembers: ["9DC04A70-E6DD-11DF-9492-0800200C9A66", "9DC04A71-E6DD-11DF-9492-0800200C9A66"], dsattributes.kDSNAttrNestedGroups: ["6C6CD282-E6E3-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrPrimaryGroupID: ["33400"], }, ), ( "odtestgroupbetty", { dsattributes.kDS1AttrGeneratedUID: ["2A1F3ED9-D1B3-40F2-8FC4-05E197C1F90C"], dsattributes.kDS1AttrDistinguishedName: ["OD Test Group Betty"], dsattributes.kDSNAttrGroupMembers: [], dsattributes.kDSNAttrNestedGroups: [], dsattributes.kDS1AttrPrimaryGroupID: ["33403"], }, ), ] localUsers = [ ( "odtestalbert", { dsattributes.kDS1AttrFirstName: ["Albert"], dsattributes.kDS1AttrLastName: ["Test"], dsattributes.kDS1AttrDistinguishedName: ["Albert Test"], dsattributes.kDSNAttrEMailAddress: ["albert@example.com"], dsattributes.kDS1AttrGeneratedUID: ["9DC04A74-E6DD-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrUniqueID: ["33350"], dsattributes.kDS1AttrPrimaryGroupID: ["20"], }, ), ( "odtestbill", { dsattributes.kDS1AttrFirstName: ["Bill"], dsattributes.kDS1AttrLastName: ["Test"], dsattributes.kDS1AttrDistinguishedName: ["Bill Test"], dsattributes.kDSNAttrEMailAddress: ["bill@example.com"], dsattributes.kDS1AttrGeneratedUID: ["9DC04A75-E6DD-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrUniqueID: ["33351"], dsattributes.kDS1AttrPrimaryGroupID: ["20"], }, ), ( "odtestcarl", { dsattributes.kDS1AttrFirstName: ["Carl"], dsattributes.kDS1AttrLastName: ["Test"], dsattributes.kDS1AttrDistinguishedName: ["Carl Test"], dsattributes.kDSNAttrEMailAddress: ["carl@example.com"], dsattributes.kDS1AttrGeneratedUID: ["9DC04A76-E6DD-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrUniqueID: ["33352"], 
dsattributes.kDS1AttrPrimaryGroupID: ["20"], }, ), ( "odtestdavid", { dsattributes.kDS1AttrFirstName: ["David"], dsattributes.kDS1AttrLastName: ["Test"], dsattributes.kDS1AttrDistinguishedName: ["David Test"], dsattributes.kDSNAttrEMailAddress: ["david@example.com"], dsattributes.kDS1AttrGeneratedUID: ["9DC04A77-E6DD-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrUniqueID: ["33353"], dsattributes.kDS1AttrPrimaryGroupID: ["20"], }, ), ( "anotherodtestalbert", { dsattributes.kDS1AttrFirstName: ["Albert"], dsattributes.kDS1AttrLastName: ["Test"], dsattributes.kDS1AttrDistinguishedName: ["Albert Test"], dsattributes.kDSNAttrEMailAddress: ["anotheralbert@example.com"], dsattributes.kDS1AttrGeneratedUID: ["8F059F1B-1CD0-42B5-BEA2-6A36C9B5620F"], dsattributes.kDS1AttrUniqueID: ["33354"], dsattributes.kDS1AttrPrimaryGroupID: ["20"], }, ), ] localGroups = [ ( "odtestsubgroupa", { dsattributes.kDS1AttrGeneratedUID: ["6C6CD281-E6E3-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrDistinguishedName: ["OD Test Subgroup A"], dsattributes.kDSNAttrGroupMembers: ["9DC04A74-E6DD-11DF-9492-0800200C9A66", "9DC04A75-E6DD-11DF-9492-0800200C9A66"], dsattributes.kDS1AttrPrimaryGroupID: ["33402"], }, ), ( "odtestgroupalbert", { dsattributes.kDS1AttrGeneratedUID: ["3F4D01B8-FDFD-4805-A853-DE9879A2D951"], dsattributes.kDS1AttrDistinguishedName: ["OD Test Group Albert"], dsattributes.kDSNAttrGroupMembers: [], dsattributes.kDS1AttrPrimaryGroupID: ["33404"], }, ), ] def usage(e=None): name = os.path.basename(sys.argv[0]) print("usage: %s [options] local_user local_password odmaster_user odmaster_password" % (name,)) print("") print(" Configures local and OD master directories for testing") print("") print("options:") print(" -h --help: print this help and exit") if e: sys.exit(1) else: sys.exit(0) def lookupRecordName(node, recordType, name): query, error = odframework.ODQuery.queryWithNode_forRecordTypes_attribute_matchType_queryValues_returnAttributes_maximumResults_error_( node, recordType, 
        dsattributes.kDSNAttrRecordName,
        dsattributes.eDSExact,
        name,
        None,
        0,
        None)
    if error:
        raise ODError(error)
    records, error = query.resultsAllowingPartial_error_(False, None)
    if error:
        raise ODError(error)
    if len(records) < 1:
        return None
    if len(records) > 1:
        raise ODError("Multiple records for '%s' were found" % (name,))
    return records[0]


def createRecord(node, recordType, recordName, attrs):
    record, error = node.createRecordWithRecordType_name_attributes_error_(
        recordType, recordName, attrs, None)
    if error:
        print(error)
        raise ODError(error)
    return record


def main():
    try:
        (optargs, args) = getopt(sys.argv[1:], "h", ["help"])
    except GetoptError, e:
        usage(e)

    for opt, _ignore_arg in optargs:
        if opt in ("-h", "--help"):
            usage()

    if len(args) != 4:
        usage()

    localUser, localPassword, masterUser, masterPassword = args

    userInfo = {
        masterNodeName: {
            "user": masterUser,
            "password": masterPassword,
            "users": masterUsers,
            "groups": masterGroups,
        },
        localNodeName: {
            "user": localUser,
            "password": localPassword,
            "users": localUsers,
            "groups": localGroups,
        },
    }

    session = odframework.ODSession.defaultSession()
    userRecords = []

    for nodeName, info in userInfo.iteritems():
        userName = info["user"]
        password = info["password"]
        users = info["users"]
        groups = info["groups"]

        node, error = odframework.ODNode.nodeWithSession_name_error_(session, nodeName, None)
        if error:
            print(error)
            raise ODError(error)

        result, error = node.setCredentialsWithRecordType_recordName_password_error_(
            dsattributes.kDSStdRecordTypeUsers,
            userName,
            password,
            None
        )
        if error:
            print("Unable to authenticate with directory %s: %s" % (nodeName, error))
            raise ODError(error)

        print("Successfully authenticated with directory %s" % (nodeName,))

        print("Creating users within %s:" % (nodeName,))
        for recordName, attrs in users:
            record = lookupRecordName(node, dsattributes.kDSStdRecordTypeUsers, recordName)
            if record is None:
                print("Creating user %s" % (recordName,))
                try:
                    record = createRecord(node, dsattributes.kDSStdRecordTypeUsers, recordName, attrs)
                    print("Successfully created user %s" % (recordName,))
                    result, error = record.changePassword_toPassword_error_(
                        None, "password", None)
                    if error or not result:
                        print("Failed to set password for %s: %s" % (recordName, error))
                    else:
                        print("Successfully set password for %s" % (recordName,))
                except ODError, e:
                    print("Failed to create user %s: %s" % (recordName, e))
            else:
                print("User %s already exists" % (recordName,))

            if record is not None:
                userRecords.append(record)

        print("Creating groups within %s:" % (nodeName,))
        for recordName, attrs in groups:
            record = lookupRecordName(node, dsattributes.kDSStdRecordTypeGroups, recordName)
            if record is None:
                print("Creating group %s" % (recordName,))
                try:
                    record = createRecord(node, dsattributes.kDSStdRecordTypeGroups, recordName, attrs)
                    print("Successfully created group %s" % (recordName,))
                except ODError, e:
                    print("Failed to create group %s: %s" % (recordName, e))
            else:
                print("Group %s already exists" % (recordName,))

        print

    # Populate SACL groups
    node, error = odframework.ODNode.nodeWithSession_name_error_(session, saclGroupNodeName, None)
    result, error = node.setCredentialsWithRecordType_recordName_password_error_(
        dsattributes.kDSStdRecordTypeUsers,
        userInfo[saclGroupNodeName]["user"],
        userInfo[saclGroupNodeName]["password"],
        None
    )
    if not error:
        for saclGroupName in saclGroupNames:
            saclGroupRecord = lookupRecordName(node, dsattributes.kDSStdRecordTypeGroups, saclGroupName)
            if saclGroupRecord:
                print("Populating %s SACL group:" % (saclGroupName,))
                for userRecord in userRecords:
                    details, error = userRecord.recordDetailsForAttributes_error_(None, None)
                    recordName = details.get(dsattributes.kDSNAttrRecordName, [None])[0]
                    result, error = saclGroupRecord.isMemberRecord_error_(userRecord, None)
                    if result:
                        print("%s is already in the %s SACL group" % (recordName, saclGroupName))
                    else:
                        result, error = saclGroupRecord.addMemberRecord_error_(userRecord, None)
                        print("Adding %s to the %s SACL group" % (recordName, saclGroupName))

                print("")


class ODError(Exception):
    def __init__(self, error):
        self.message = (str(error), error.code())


if __name__ == "__main__":
    main()

calendarserver-9.1+dfsg/contrib/od/test/000077500000000000000000000000001315003562600203245ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/od/test/__init__.py000066400000000000000000000011361315003562600224360ustar00rootroot00000000000000
##
# Copyright (c) 2014-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
calendarserver-9.1+dfsg/contrib/od/test/test_live.py000066400000000000000000000341201315003562600226740ustar00rootroot00000000000000
##
# Copyright (c) 2014-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
"""
OpenDirectory live service tests.
"""

from __future__ import print_function

from itertools import chain
from uuid import UUID

from twisted.trial import unittest
from twisted.internet.defer import inlineCallbacks, returnValue

try:
    from twext.who.opendirectory import DirectoryService
    moduleImported = True
except:
    moduleImported = False
    print("Could not import OpenDirectory")

if moduleImported:

    from twext.who.expression import (
        CompoundExpression, Operand, MatchExpression, MatchType, MatchFlags
    )
    from txdav.who.directory import CalendarDirectoryServiceMixin
    from txdav.who.opendirectory import DirectoryService as OpenDirectoryService

    class CalOpenDirectoryService(OpenDirectoryService, CalendarDirectoryServiceMixin):
        pass

    LOCAL_SHORTNAMES = "odtestalbert odtestbill odtestcarl odtestdavid odtestsubgroupa".split()
    NETWORK_SHORTNAMES = "odtestamanda odtestbetty odtestcarlene odtestdenise odtestsubgroupb odtestgrouptop".split()

    def onlyIfPopulated(func):
        """
        Only run the decorated test method if the "odtestamanda" record exists
        """
        @inlineCallbacks
        def checkThenRun(self):
            record = yield self.service.recordWithShortName(self.service.recordType.user, u"odtestamanda")
            if record is not None:
                result = yield func(self)
                returnValue(result)
            else:
                print("OD not populated, skipping {}".format(func.func_name))
        return checkThenRun

    class LiveOpenDirectoryServiceTestCase(unittest.TestCase):
        """
        Live service tests for L{DirectoryService}.
        """

        def setUp(self):
            self.service = DirectoryService()

        def tearDown(self):
            self.service._deletePool()

        def verifyResults(self, records, expected, unexpected):
            shortNames = []
            for record in records:
                for shortName in record.shortNames:
                    shortNames.append(shortName)

            for name in expected:
                self.assertTrue(name in shortNames)

            for name in unexpected:
                self.assertFalse(name in shortNames)

        @onlyIfPopulated
        @inlineCallbacks
        def test_shortNameStartsWith(self):
            records = yield self.service.recordsFromExpression(
                MatchExpression(
                    self.service.fieldName.shortNames, u"odtest",
                    matchType=MatchType.startsWith
                )
            )
            self.verifyResults(
                records,
                chain(LOCAL_SHORTNAMES, NETWORK_SHORTNAMES),
                ["anotherodtestamanda", "anotherodtestalbert"]
            )

        @onlyIfPopulated
        @inlineCallbacks
        def test_uid(self):
            for uid, name in (
                (u"9DC04A71-E6DD-11DF-9492-0800200C9A66", u"odtestbetty"),
                (u"9DC04A75-E6DD-11DF-9492-0800200C9A66", u"odtestbill"),
            ):
                record = yield self.service.recordWithUID(uid)
                self.assertTrue(record is not None)
                self.assertEquals(record.shortNames[0], name)

        @onlyIfPopulated
        @inlineCallbacks
        def test_guid(self):
            for guid, name in (
                (UUID("9DC04A71-E6DD-11DF-9492-0800200C9A66"), u"odtestbetty"),
                (UUID("9DC04A75-E6DD-11DF-9492-0800200C9A66"), u"odtestbill"),
            ):
                record = yield self.service.recordWithGUID(guid)
                self.assertTrue(record is not None)
                self.assertEquals(record.shortNames[0], name)

        @onlyIfPopulated
        @inlineCallbacks
        def test_compoundWithoutRecordType(self):
            expression = CompoundExpression(
                [
                    CompoundExpression(
                        [
                            MatchExpression(
                                self.service.fieldName.fullNames, u"be",
                                matchType=MatchType.contains
                            ),
                            MatchExpression(
                                self.service.fieldName.emailAddresses, u"be",
                                matchType=MatchType.startsWith
                            ),
                        ],
                        Operand.OR
                    ),
                    CompoundExpression(
                        [
                            MatchExpression(
                                self.service.fieldName.fullNames, u"test",
                                matchType=MatchType.contains
                            ),
                            MatchExpression(
                                self.service.fieldName.emailAddresses, u"test",
                                matchType=MatchType.startsWith
                            ),
                        ],
                        Operand.OR
                    ),
                ],
                Operand.AND
            )
            records = yield self.service.recordsFromExpression(expression)
            # We should get back users and groups since we did not specify a type:
            self.verifyResults(
                records,
                [
                    "odtestbetty", "odtestalbert", "anotherodtestalbert",
                    "odtestgroupbetty", "odtestgroupalbert"
                ],
                ["odtestamanda", "odtestbill", "odtestgroupa", "odtestgroupb"]
            )

        @onlyIfPopulated
        @inlineCallbacks
        def test_compoundWithExplicitRecordType(self):
            expression = CompoundExpression(
                [
                    CompoundExpression(
                        [
                            MatchExpression(
                                self.service.fieldName.fullNames, u"be",
                                matchType=MatchType.contains
                            ),
                            MatchExpression(
                                self.service.fieldName.emailAddresses, u"be",
                                matchType=MatchType.startsWith
                            ),
                        ],
                        Operand.OR
                    ),
                    CompoundExpression(
                        [
                            MatchExpression(
                                self.service.fieldName.fullNames, u"test",
                                matchType=MatchType.contains
                            ),
                            MatchExpression(
                                self.service.fieldName.emailAddresses, u"test",
                                matchType=MatchType.startsWith
                            ),
                        ],
                        Operand.OR
                    ),
                ],
                Operand.AND
            )
            records = yield self.service.recordsFromExpression(
                expression, recordTypes=[self.service.recordType.user]
            )
            # We should get back users but not groups:
            self.verifyResults(
                records,
                ["odtestbetty", "odtestalbert", "anotherodtestalbert"],
                ["odtestamanda", "odtestbill", "odtestgroupa", "odtestgroupb"]
            )

        @onlyIfPopulated
        @inlineCallbacks
        def test_compoundWithMultipleExplicitRecordTypes(self):
            expression = CompoundExpression(
                [
                    CompoundExpression(
                        [
                            MatchExpression(
                                self.service.fieldName.fullNames, u"be",
                                matchType=MatchType.contains
                            ),
                            MatchExpression(
                                self.service.fieldName.emailAddresses, u"be",
                                matchType=MatchType.startsWith
                            ),
                        ],
                        Operand.OR
                    ),
                    CompoundExpression(
                        [
                            MatchExpression(
                                self.service.fieldName.fullNames, u"test",
                                matchType=MatchType.contains
                            ),
                            MatchExpression(
                                self.service.fieldName.emailAddresses, u"test",
                                matchType=MatchType.startsWith
                            ),
                        ],
                        Operand.OR
                    ),
                ],
                Operand.AND
            )
            records = yield self.service.recordsFromExpression(
                expression, recordTypes=[
                    self.service.recordType.user,
                    self.service.recordType.group
                ]
            )
            # We should get back users and groups:
            self.verifyResults(
                records,
                [
                    "odtestbetty", "odtestalbert", "anotherodtestalbert",
                    "odtestgroupbetty", "odtestgroupalbert"
                ],
                ["odtestamanda", "odtestbill", "odtestgroupa", "odtestgroupb"]
            )

        @onlyIfPopulated
        @inlineCallbacks
        def test_recordsMatchingTokens(self):
            self.calService = CalOpenDirectoryService()
            records = yield self.calService.recordsMatchingTokens([u"be", u"test"])
            self.verifyResults(
                records,
                [
                    "odtestbetty", "odtestalbert", "anotherodtestalbert",
                    "odtestgroupbetty", "odtestgroupalbert"
                ],
                ["odtestamanda", "odtestbill", "odtestgroupa", "odtestgroupb"]
            )

        @onlyIfPopulated
        @inlineCallbacks
        def test_recordsMatchingTokensWithContextUser(self):
            self.calService = CalOpenDirectoryService()
            records = yield self.calService.recordsMatchingTokens(
                [u"be", u"test"],
                context=self.calService.searchContext_user
            )
            self.verifyResults(
                records,
                [
                    "odtestbetty", "odtestalbert", "anotherodtestalbert",
                ],
                [
                    "odtestamanda", "odtestbill", "odtestgroupa", "odtestgroupb",
                    "odtestgroupbetty", "odtestgroupalbert"
                ]
            )

        @onlyIfPopulated
        @inlineCallbacks
        def test_recordsMatchingTokensWithContextGroup(self):
            self.calService = CalOpenDirectoryService()
            records = yield self.calService.recordsMatchingTokens(
                [u"be", u"test"],
                context=self.calService.searchContext_group
            )
            self.verifyResults(
                records,
                [
                    "odtestgroupbetty", "odtestgroupalbert"
                ],
                [
                    "odtestamanda", "odtestbill", "odtestgroupa", "odtestgroupb",
                    "odtestbetty", "odtestalbert", "anotherodtestalbert"
                ]
            )

        @onlyIfPopulated
        @inlineCallbacks
        def test_recordsMatchingMultipleFieldsNoRecordType(self):
            self.calService = CalOpenDirectoryService()
            fields = (
                (u"fullNames", u"be", MatchFlags.caseInsensitive, MatchType.contains),
                (u"fullNames", u"test", MatchFlags.caseInsensitive, MatchType.contains),
            )
            records = (yield self.calService.recordsMatchingFields(
                fields, operand=Operand.AND, recordType=None
            ))
            self.verifyResults(
                records,
                [
                    "odtestgroupbetty", "odtestgroupalbert",
                    "odtestbetty", "odtestalbert", "anotherodtestalbert"
                ],
                [
                    "odtestamanda",
                ]
            )

        @onlyIfPopulated
        @inlineCallbacks
        def test_recordsMatchingSingleFieldNoRecordType(self):
            self.calService = CalOpenDirectoryService()
            fields = (
                (u"fullNames", u"test", MatchFlags.caseInsensitive, MatchType.contains),
            )
            records = (yield self.calService.recordsMatchingFields(
                fields, operand=Operand.AND, recordType=None
            ))
            self.verifyResults(
                records,
                [
                    "odtestgroupbetty", "odtestgroupalbert",
                    "odtestbetty", "odtestalbert", "anotherodtestalbert",
                    "odtestamanda",
                ],
                [
                    "nobody",
                ]
            )

        @onlyIfPopulated
        @inlineCallbacks
        def test_recordsMatchingFieldsWithRecordType(self):
            self.calService = CalOpenDirectoryService()
            fields = (
                (u"fullNames", u"be", MatchFlags.caseInsensitive, MatchType.contains),
                (u"fullNames", u"test", MatchFlags.caseInsensitive, MatchType.contains),
            )
            records = (yield self.calService.recordsMatchingFields(
                fields, operand=Operand.AND, recordType=self.calService.recordType.user
            ))
            self.verifyResults(
                records,
                [
                    "odtestbetty", "odtestalbert", "anotherodtestalbert"
                ],
                [
                    "odtestamanda", "odtestgroupalbert", "odtestgroupbetty",
                ]
            )
calendarserver-9.1+dfsg/contrib/performance/000077500000000000000000000000001315003562600212445ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/LogNormalVisualization.numbers000066400000000000000000005444231315003562600273300ustar00rootroot00000000000000
[binary .numbers archive begins: QuickLook/Thumbnail.jpg, a JPEG preview with an embedded sRGB IEC61966-2.1 ICC profile]
[binary contents omitted: the QuickLook/Thumbnail.jpg JPEG preview, followed by chart archive E84127C33D91498C9356DF9CFC687ECB.chrtshr — a table with columns median, mode, mu, sigma, and x whose rows sample a log-normal density at x = 0, 30, 60, ..., 990 (values rise from 0.0277 at x = 30 to a peak of about 0.1217 at x = 300, then fall to about 0.0817 at x = 990); the archive is truncated here]
0 0 rowValues 1020 0.080111338029518267 rowName 35 rowValueTypes 0 0 rowValues 1050 0.078522433487636933 rowName 36 rowValueTypes 0 0 rowValues 1080 0.07697389821225481 rowName 37 rowValueTypes 0 0 rowValues 1110 0.075465023886039845 rowName 38 rowValueTypes 0 0 rowValues 1140 0.073995021970366767 rowName 39 rowValueTypes 0 0 rowValues 1170 0.072563044524728254 rowName 40 rowValueTypes 0 0 rowValues 1200 0.071168201144637303 rowName 41 rowValueTypes 0 0 rowValues 1230 0.069809572729667785 rowName 42 rowValueTypes 0 0 rowValues 1260 0.068486222657071258 rowName 43 rowValueTypes 0 0 rowValues 1290 0.067197205827848036 rowName 44 rowValueTypes 0 0 rowValues 1320 0.065941575965267979 rowName 45 rowValueTypes 0 0 rowValues 1350 0.064718391476032977 rowName 46 rowValueTypes 0 0 rowValues 1380 0.063526720127982411 rowName 47 rowValueTypes 0 0 rowValues 1410 0.062365642752692504 rowName 48 rowValueTypes 0 0 rowValues 1440 0.061234256144337737 rowName 49 rowValueTypes 0 0 rowValues 1470 0.060131675296063808 rowName 50 rowValueTypes 0 0 rowValues 1500 0.059057035090520446 rowName 51 rowValueTypes 0 0 rowValues 1530 0.058009491541051429 rowName 52 rowValueTypes 0 0 rowValues 1560 0.056988222663492051 rowName 53 rowValueTypes 0 0 rowValues 1590 0.055992429044901218 rowName 54 rowValueTypes 0 0 rowValues 1620 0.055021334164315532 rowName 55 rowValueTypes 0 0 rowValues 1650 0.054074184511318513 rowName 56 rowValueTypes 0 0 rowValues 1680 0.053150249540516732 rowName 57 rowValueTypes 0 0 rowValues 1710 0.052248821493623206 rowName 58 rowValueTypes 0 0 rowValues 1740 0.051369215115532448 rowName 59 rowValueTypes 0 0 rowValues 1770 0.050510767286345536 rowName 60 rowValueTypes 0 0 rowValues 1800 0.049672836587612006 rowName 61 rowValueTypes 0 0 rowValues 1830 0.048854802817972096 rowName 62 rowValueTypes 0 0 rowValues 1860 0.048056066470806444 rowName 63 rowValueTypes 0 0 rowValues 1890 0.047276048184343542 rowName 64 rowValueTypes 0 0 rowValues 1920 0.046514188172868967 rowName 65 
rowValueTypes 0 0 rowValues 1950 0.045769945646167744 rowName 66 rowValueTypes 0 0 rowValues 1980 0.045042798223061169 rowName 67 rowValueTypes 0 0 rowValues 2010 0.044332241343836565 rowName 68 rowValueTypes 0 0 rowValues 2040 0.043637787685475028 rowName 69 rowValueTypes 0 0 rowValues 2070 0.04295896658283422 rowName 70 rowValueTypes 0 0 rowValues 2100 0.0422953234583162 rowName 71 rowValueTypes 0 0 rowValues 2130 0.041646419262024101 rowName 72 rowValueTypes 0 0 rowValues 2160 0.04101182992397253 rowName 73 rowValueTypes 0 0 rowValues 2190 0.040391145819548419 rowName 74 rowValueTypes 0 0 rowValues 2220 0.039783971249112433 rowName 75 rowValueTypes 0 0 rowValues 2250 0.03918992393237615 rowName 76 rowValueTypes 0 0 rowValues 2280 0.038608634517977462 rowName 77 rowValueTypes 0 0 rowValues 2310 0.038039746108501936 rowName 78 rowValueTypes 0 0 rowValues 2340 0.037482913801052696 rowName 79 rowValueTypes 0 0 rowValues 2370 0.036937804243352919 rowName 80 rowValueTypes 0 0 rowValues 2400 0.036404095205268543 rowName 81 rowValueTypes 0 0 rowValues 2430 0.035881475165560402 rowName 82 rowValueTypes 0 0 rowValues 2460 0.035369642913612567 rowName 83 rowValueTypes 0 0 rowValues 2490 0.034868307165834139 rowName 84 rowValueTypes 0 0 rowValues 2520 0.034377186196393399 rowName 85 rowValueTypes 0 0 rowValues 2550 0.033896007481913758 rowName 86 rowValueTypes 0 0 rowValues 2580 0.033424507359740029 rowName 87 rowValueTypes 0 0 rowValues 2610 0.032962430699367891 rowName 88 rowValueTypes 0 0 rowValues 2640 0.03250953058662024 rowName 89 rowValueTypes 0 0 rowValues 2670 0.032065568020148955 rowName 90 rowValueTypes 0 0 rowValues 2700 0.031630311619838807 rowName 91 rowValueTypes 0 0 rowValues 2730 0.031203537346692088 rowName 92 rowValueTypes 0 0 rowValues 2760 0.030785028233776591 rowName 93 rowValueTypes 0 0 rowValues 2790 0.03037457412782513 rowName 94 rowValueTypes 0 0 rowValues 2820 0.029971971441082997 rowName 95 rowValueTypes 0 0 rowValues 2850 0.029577022913007876 
rowName 96 rowValueTypes 0 0 rowValues 2880 0.029189537381437376 rowName 97 rowValueTypes 0 0 rowValues 2910 0.02880932956284896 rowName 98 rowValueTypes 0 0 rowValues 2940 0.028436219841348875 rowName 99 rowValueTypes 0 0 rowValues 2970 0.0280700340660377 rowName 100 rowValueTypes 0 0 rowValues 3000 0.027710603356411734 PKøUAÖVµbPpPp(04701F2A486C48649734C8D34A92C98E.chrtshr columns mean mode mu sigma x rows rowName 0 rowValueTypes 0 -1 rowValues 0.0 rowName 1 rowValueTypes 0 0 rowValues 0.10000000000000001 0.021832787327691162 rowName 2 rowValueTypes 0 0 rowValues 0.20000000000000001 0.067927819059293931 rowName 3 rowValueTypes 0 0 rowValues 0.29999999999999999 0.10950168574032383 rowName 4 rowValueTypes 0 0 rowValues 0.40000000000000002 0.14135289560042083 rowName 5 rowValueTypes 0 0 rowValues 0.5 0.16428252409497668 rowName 6 rowValueTypes 0 0 rowValues 0.59999999999999998 0.18009254962539389 rowName 7 rowValueTypes 0 0 rowValues 0.69999999999999996 0.1904607671815651 rowName 8 rowValueTypes 0 0 rowValues 0.80000000000000004 0.19673439741667847 rowName 9 rowValueTypes 0 0 rowValues 0.90000000000000002 0.19994663004925858 rowName 10 rowValueTypes 0 0 rowValues 1 0.20087786830921753 rowName 11 rowValueTypes 0 0 rowValues 1.1000000000000001 0.20011549654979063 rowName 12 rowValueTypes 0 0 rowValues 1.2 0.19810217317010445 rowName 13 rowValueTypes 0 0 rowValues 1.3 0.19517253090501871 rowName 14 rowValueTypes 0 0 rowValues 1.3999999999999999 0.19158045143548327 rowName 15 rowValueTypes 0 0 rowValues 1.5 0.1875192008893565 rowName 16 rowValueTypes 0 0 rowValues 1.6000000000000001 0.18313630231250541 rowName 17 rowValueTypes 0 0 rowValues 1.7 0.17854456391733259 rowName 18 rowValueTypes 0 0 rowValues 1.8 0.17383030219698545 rowName 19 rowValueTypes 0 0 rowValues 1.8999999999999999 0.16905951172126946 rowName 20 rowValueTypes 0 0 rowValues 2 0.1642825240949766 rowName 21 rowValueTypes 0 0 rowValues 2.1000000000000001 0.15953754831632691 rowName 22 rowValueTypes 0 0 
rowValues 2.2000000000000002 0.15485337744014777 rowName 23 rowValueTypes 0 0 rowValues 2.2999999999999998 0.15025146970883999 rowName 24 rowValueTypes 0 0 rowValues 2.3999999999999999 0.14574755723771321 rowName 25 rowValueTypes 0 0 rowValues 2.5 0.14135289560042077 rowName 26 rowValueTypes 0 0 rowValues 2.6000000000000001 0.13707523880585662 rowName 27 rowValueTypes 0 0 rowValues 2.7000000000000002 0.13291960306853717 rowName 28 rowValueTypes 0 0 rowValues 2.7999999999999998 0.12888886725604989 rowName 29 rowValueTypes 0 0 rowValues 2.8999999999999999 0.12498424640102478 rowName 30 rowValueTypes 0 0 rowValues 3 0.12120566609242865 rowName 31 rowValueTypes 0 0 rowValues 3.1000000000000001 0.11755205912739201 rowName 32 rowValueTypes 0 0 rowValues 3.2000000000000002 0.11402160094668295 rowName 33 rowValueTypes 0 0 rowValues 3.2999999999999998 0.11061189668679929 rowName 34 rowValueTypes 0 0 rowValues 3.3999999999999999 0.10732012986278996 rowName 35 rowValueTypes 0 0 rowValues 3.5 0.10414318053099769 rowName 36 rowValueTypes 0 0 rowValues 3.6000000000000001 0.10107771910968129 rowName 37 rowValueTypes 0 0 rowValues 3.7000000000000002 0.098120280739050242 rowName 38 rowValueTypes 0 0 rowValues 3.7999999999999998 0.095267324051906077 rowName 39 rowValueTypes 0 0 rowValues 3.8999999999999999 0.092515277435238699 rowName 40 rowValueTypes 0 0 rowValues 4 0.089860575241519372 rowName 41 rowValueTypes 0 0 rowValues 4.0999999999999996 0.087299685917916189 rowName 42 rowValueTypes 0 0 rowValues 4.2000000000000002 0.08482913363315725 rowName 43 rowValueTypes 0 0 rowValues 4.2999999999999998 0.082445514672979253 rowName 44 rowValueTypes 0 0 rowValues 4.4000000000000004 0.08014550962886334 rowName 45 rowValueTypes 0 0 rowValues 4.5 0.077925892207794945 rowName 46 rowValueTypes 0 0 rowValues 4.5999999999999996 0.075783535332772059 rowName 47 rowValueTypes 0 0 rowValues 4.7000000000000002 0.073715415076682542 rowName 48 rowValueTypes 0 0 rowValues 4.7999999999999998 
0.071718612869668241 rowName 49 rowValueTypes 0 0 rowValues 4.9000000000000004 0.06979031633724038 rowName 50 rowValueTypes 0 0 rowValues 5 0.067927819059293917 rowName 51 rowValueTypes 0 0 rowValues 5.0999999999999996 0.066128519485693732 rowName 52 rowValueTypes 0 0 rowValues 5.2000000000000002 0.06438991919981317 rowName 53 rowValueTypes 0 0 rowValues 5.2999999999999998 0.062709620685334283 rowName 54 rowValueTypes 0 0 rowValues 5.4000000000000004 0.061085324722203689 rowName 55 rowValueTypes 0 0 rowValues 5.5 0.059514827513622005 rowName 56 rowValueTypes 0 0 rowValues 5.5999999999999996 0.05799601762631576 rowName 57 rowValueTypes 0 0 rowValues 5.7000000000000002 0.056526872810289813 rowName 58 rowValueTypes 0 0 rowValues 5.7999999999999998 0.055105456751121856 rowName 59 rowValueTypes 0 0 rowValues 5.9000000000000004 0.053729915797110735 rowName 60 rowValueTypes 0 0 rowValues 6 0.052398475694791453 rowName 61 rowValueTypes 0 0 rowValues 6.0999999999999996 0.0511094383591311 rowName 62 rowValueTypes 0 0 rowValues 6.2000000000000002 0.049861178698835468 rowName 63 rowValueTypes 0 0 rowValues 6.2999999999999998 0.048652141512389072 rowName 64 rowValueTypes 0 0 rowValues 6.4000000000000004 0.047480838466532747 rowName 65 rowValueTypes 0 0 rowValues 6.5 0.046345845165694469 rowName 66 rowValueTypes 0 0 rowValues 6.5999999999999996 0.045245798318300293 rowName 67 rowValueTypes 0 0 rowValues 6.7000000000000002 0.044179393003800714 rowName 68 rowValueTypes 0 0 rowValues 6.7999999999999998 0.043145380042561757 rowName 69 rowValueTypes 0 0 rowValues 6.9000000000000004 0.042142563469421253 rowName 70 rowValueTypes 0 0 rowValues 7 0.041169798110638131 rowName 71 rowValueTypes 0 0 rowValues 7.0999999999999996 0.040225987263116547 rowName 72 rowValueTypes 0 0 rowValues 7.2000000000000002 0.039310080474128421 rowName 73 rowValueTypes 0 0 rowValues 7.2999999999999998 0.038421071419252592 rowName 74 rowValueTypes 0 0 rowValues 7.4000000000000004 0.037557995875869145 rowName 75 
rowValueTypes 0 0 rowValues 7.5 0.036719929789270148 rowName 76 rowValueTypes 0 0 rowValues 7.5999999999999996 0.035905987428254953 rowName 77 rowValueTypes 0 0 rowValues 7.7000000000000002 0.035115319626952586 rowName 78 rowValueTypes 0 0 rowValues 7.7999999999999998 0.03434711210954295 rowName 79 rowValueTypes 0 0 rowValues 7.9000000000000004 0.033600583894521786 rowName 80 rowValueTypes 0 0 rowValues 8 0.032874985775163491 rowName 81 rowValueTypes 0 0 rowValues 8.0999999999999996 0.032169598872871352 rowName 82 rowValueTypes 0 0 rowValues 8.1999999999999993 0.031483733260163717 rowName 83 rowValueTypes 0 0 rowValues 8.3000000000000007 0.030816726650118233 rowName 84 rowValueTypes 0 0 rowValues 8.4000000000000004 0.030167943149184962 rowName 85 rowValueTypes 0 0 rowValues 8.5 0.029536772070374109 rowName 86 rowValueTypes 0 0 rowValues 8.5999999999999996 0.028922626803928684 rowName 87 rowValueTypes 0 0 rowValues 8.6999999999999993 0.028324943742698266 rowName 88 rowValueTypes 0 0 rowValues 8.8000000000000007 0.027743181259540377 rowName 89 rowValueTypes 0 0 rowValues 8.9000000000000004 0.027176818734186117 rowName 90 rowValueTypes 0 0 rowValues 9 0.026625355627116848 rowName 91 rowValueTypes 0 0 rowValues 9.0999999999999996 0.026088310598108184 rowName 92 rowValueTypes 0 0 rowValues 9.1999999999999993 0.025565220667203983 rowName 93 rowValueTypes 0 0 rowValues 9.3000000000000007 0.02505564041598812 rowName 94 rowValueTypes 0 0 rowValues 9.4000000000000004 0.024559141227123559 rowName 95 rowValueTypes 0 0 rowValues 9.5 0.02407531056022634 rowName 96 rowValueTypes 0 0 rowValues 9.5999999999999996 0.023603751262237651 rowName 97 rowValueTypes 0 0 rowValues 9.6999999999999993 0.023144080910548367 rowName 98 rowValueTypes 0 0 rowValues 9.8000000000000007 0.022695931187218278 rowName 99 rowValueTypes 0 0 rowValues 9.9000000000000004 0.022258947282716726 rowName 100 rowValueTypes 0 0 rowValues 10 0.021832787327691151 PKøUA4.¸õèèbuildVersionHistory.plist local build-Jun 
29 2012 PKøUAßÚüÚ}E index.xmlìýë“Û¸–'Š~¿…Ã}&¢{î¤ïGñœð£\Ûq쪺NwWïù²C™bfª­”r$e¹¼ÿú ð%P"H€E)÷Ì® ‚ °°°¿µÖÿüþz˜¿ø3Y­gËÅë—ðxùÿü¯ÿÏÿœ¯š.ož’Åæ…j°Xÿ´¾¼~y¿Ù<þty9MþLæËÇdõjòø8O^Ý,.“‡dý8¹IÖ—ªéËò)÷‡ŠgþZÏʇ¾ÿþê;~µ\Ý]"àå~þtusŸ ·¼{+ß"òiãï³Åtù]5~Aá Hx!Ôÿe?¸x‘¶[ÿyñ}6ÝÜ«ÙG¼¸rŸÌîî7¯_"HÓK½~ÉÒ?~¨Ó?Ôú\Üܼ~)ˆL߬’‹ÇÕRõˆ œ½ü_z×›ód}Ÿ$›òÓ®>\]•WÕ‡ý¯ÿ¹¾ÍÛ¥þ™Ül–«‹ôŠñÐ×ÿH¯§^¤3õÓlªæçõËÍäúi>Y]¼?%/þ–L¦jf/n—ËM¢zI'«‰îÏìöb6Í^«¬Öióãâaò˜^Pï¹Ú¬–ß’ßó;ùèô%c0_²Fù8fúUó™úïë—$½”Ï(NÜL_¿¼~ÚlÒ_ÿµœ©UOŸÉq³œ/W/Ùý´ùñ¨VW¿æf2Ÿ]«'Ó‹ÕÝõEÚäBßÎHd¥VáBœP ä˜A!ÏîÝé{P!f„BH„ù½k}H!g„ –@f÷Ôþ‚//³i™lÔèûü{v#ÿîÇûÉ:IéAýÈǾœ—s›µU=^ne?²ùÌþ®ŸquõíäæÛÝjù´˜Vî,ž®“U:´ìÏ|“|7//ËŽ-],ÕFœÙ»ƒfw·EwåC—{ts¹CµÎd ëÈø—ÕäÇÅÝj~‡¡YØB³pPšÅ˜+jDK )ÀØ YD Š.%œS„…A³sŽ BHÑ3eØŸda$Ù$‹êHvšÜNžæ›‚é^/§?†¡]4&írªDuv*¶ ¹dB´k¹wÝpχx‘x“‡ÇÍH¼ŽÄ‹kÅ%ê$’ð˜ò& 2õÿ0HbÎA¿I!Wò%+ÙР_„€†„âЀAúÅ‘ù† _b“^|˜Íç©Ð€‡!\rÆûý^5Ý#Ýïåû‰‚àh-ܬg7/þõ×å‹_V³é¿ )«Ò²S‚!=2Ê£‘òBP³jø%«ÃÐ;1VÇ"Á… 8nÕÅïS“Ò€Z S«˜"I â„õ15r˽ë†{>”Ë£V‚r…ƒJ>°^#Z( É3?å‰È3CPž´Š‡Ã3M9²h¨8 ÀAªþ‰RªQ(ŠáÉÚÑÛýAÃ"cß,õóP"9Q'1Ä4J  d €8&ƒºÀsD©PÁ!ë`Q^ 0Ôku•Ô;87…#{…RY ªþ’œ£ŠW(§T !€ñŠWH3`&˜„";x2aô …!âFÇЀê:D=Ñ'Q¤Ü ”‹­šÿàþLˆÏTõ‡8êþAˆ—4BHòAr¦’èV CµÔ¦¯mYîJ ¼ÃP/•z%B*ò”ŒK¸‰Ú#„CL! 
¤x«I½Œ1 ˆ+MO‘5¤]d]™nò­uN}J¦wŠ JUm@fc°Ì¬ )*D(>k*k2Å•*òåAÂ2¥† ®*ãŠEËN"oôu…!`n心Ò-?WÆ=]aèV4 !<ˆixK±[úì@"Rcj”v8©V¾Ð@(Çä¢#‚HáqyËNšLµ‚÷Gu”‰Þ޲0„ ­bë€TG–Z•²”습 º+µ¦a~Œ ")ä;R«þ'ÅDt“ZQt…![ÔŒeÍ-C´:1\+Šþ­0ä‡Û‘­ÃéKŸ¼¾„¢¯* !§ØÑi‘œ«ýVa(˜Ú5þâ£ú®¸HÁXX`¢DR)¨IÂJ ¨ÿQ¢ªÓÌ+ 8å ;x^Q « CÁÌ* ¨ø³3E¹ è¯ Cµ¼Qñ傯vW¦øóH¶AÈVØãcDÄ¢6ÕI„Æ ãŠÑ:ÙØ$›­Oƒ*\òô••ÿ ©w@fj@o)gŠ•ÂÑé†pÛ£³4ù’È÷\ã²pt>…!_Ô ™NëǧæpÂÑá†äÚÓì Huø< ¤8:©Â/qÈi1œÆ„ÉsÐÝñqù›NVwÇöü{‡JXÚæoÂÇOŒÑuÔJŒß—«oiy‚zütõGqÇH¼?ùi}{3Ÿ¬×¯_¦³©‹¼~Y¶4iv§ãÂÛo!Nãu¿Oî’_Óoüm5S]M6³åÂkngÅdxõj4þ”Ün>OVw³Ö÷bV» í}-þ}­¦x²Úl‡Ø’Ü»4ZfšìÇÅ:Ù´½¶oµuVÙï«ÙBm²lRÖ!¾µ¡G£á»¥j´Ø\)þ“xuŸjíÌhóv¹Ù,܉’¶7Z{3Ú|]>†z]}Wæ LNÖfÓÒ—Ñä‹.KjSÚ:«p=¯- Ûã~T-f ÕjÚ{_Ô÷´ÃC³Xæú!_qœÚÎê¾É]c­)Æ¡ŒŠ¤±à ŽHŠ0oUo‡Ö(ø³Ðp "qº®pŠÐ^(NÍJ£wÃPžtòë „-Ãò\ýz1ˆ7L)Ð^ f H'Æ2I„B„¡9èZf(ʃ#ËŽ]ˆ/Â_ !´ö}±Jnó±|ùùÃNHãe\‚›³Ÿ UˆlÔ#f?!†nI«Ü?8H…àq¥ÿ±ò§’R CôÙj2<Ÿ\=H)/åy¸A#>z¦N‘,aˆ˜·a¢‡óà‘1=xaB¢‹¼4 ' ôP¶~¦h£›ÃoCõ¿ÿgÜLbps˜úä Ù5PT9×¼$zïÂÐ-lM$5¤JÇÍÄ;ZŽÀ0äÛ’Ñt@±•³<Ñèÿ Cs¸9™Î@ÙŸ):yÓê²4rê\F²BUÄÑ’>àa|úr)Š¡ ÔHê˜H‰äL ëÇ3:ý²–$Oe¼££z„FLòDcbç0„ËLñCɘìLMñ4:4ïh໇IAÛÜIxP“hö`Ä8!ÀL­OäŽþdðå"‡ ÇÀÄ0$ÜŽs˜øD*ΕG¿hJfåõDÄ£:¨F¬tB˱zºD ãr‡³F0púv±˜<: 5¢Ön9Ññþ<ºÏ¯ûl`!‚?§Ð< Í;]Rk†:áseš1t9 %3÷”£ÃÙº89Sc-™sÃ1w@á ˜áô9føq9lO„ÈE£Ùu Ü#œ§ÕŠGgmªmqT8׌‡0tËíV¤!ÝÞ’ž©)â7ÂЭp¨õ1(’4b>]ða(X6ºà‡2ñ3uÁËè‚B¶€f~@aWœ˜ /£«3ÍA'SûP¦#y¦¦v½è¹Ø D0\{<óSö£ß‘Ÿƒ«Ex§æë‘Ñ¿ˆêH[ªP4͵ùzžk®Pý”h—6*çd(» ÔhÚytv"\ÖîaŒzÉ©õщˆê¸“~†"G޾Ì@$l¯ƒ40û=¡¢GM„x\îÍ“M!mÎE3”•>§@¢˜ŒfŸ®¬œ%F(Xìùy! #l<¥{1·¦T SÓÊc. 0¦ìŒ×ÂîÛRÔÕ'ÇR·%’NÄõ³¨E„@“Jz+‘—C‘@3žñb_&± Wº™“‡£áç”÷/“÷ µ•Õ  GÂ1l»ðÏDæØ›vacÂ! !Ï»(T;#e‘‘î#j­¯w€ìB`8¿:{…S2²ÕÞ”ŒÛ‚†’ 8Ó …±‰·7ñ’¦d.ƒ©Ih\¯Þh¹\ wf$ÜÞ„Kí„;$–ZFJ,dêÿ)ÊDsf )•pI0%‹Š¥;•} `]A0‚‘.k±! 
åDø\m0ÂÑn{ØÎ* bËù…ïžÖHɽ)Y´ þ€èÂŽÒcâ„@ÄÜÈs :D†Ã (wáa(ƒ6 Òæ~®ÈÈce–0ô ›í ƒÙpÅÙÚ¢_7é:§Š̰pŠn0:ÀÑn¯Ì6àáÁ¹Zµˆ‡ &íÙ‡r(àSËvX8ï"ãìMv´-…Ìl«Î„Pd›aè—9Y†$b|rÌGâ C|¼=máP©±ñ©¥-,fñÌîMvÂEÙ>tÓ‘•îj2,¥¹%gòt=9w¢ñt B|´zLJ$¼Qý9c&Hi\i»ÌÐM,÷®îùpôG"`h '?@9a1Ž\/j–ѰÁâ›°çX o¥ÀùäÇòi³K€üþ)½žQ`S´®ú©Z,&ÉEÖUÙ=N¦ÓÙâÎü»òâß³kùK7ËÇ"gžÜnŠ¿¯—›Íò¡øµšÝݧ·ÊEܾâa¶˜=<=üm¹šýs¹ØLæëdc›ST;§ }Ü.çóåwõ2c¾òÎçóüqk›?ÕÄÌñ¿™Ïîjv/+—¿&åW²™ýôkò”˜«[>”nH}Ü]^/çS¿¼`åzðW÷µ 0ÙF:¹åŒ/Ÿîî/«ö6•G¿ªþ·ßw5ûgb®åõ•Ç,ö&«¯ËÇÙͯéóÙUÅk–Ú`"yƒÚÅß»—»ªå¾RÇÞzçû®raIJ‡ŒÛ)¡ÞTð)ù3™o9î³Íý¯åé2)ާSqVÌTõ±©: ÞMóÝ3»3™V…ïþv{«ÏK›€R=¾ k9ës{z¨žˆzÏO˧µõ†È“õ’(!zO§®6œ“Y¾Á\ô^-¿_l{±Ÿý$ºLXÔãÒ’Áj*:'ã³vb~xqcWlqa뜥óµ+2¾È(@=u‘?[i‘ïËì,ê¸g¥FâÚŸ˜ÝFÍs³{”ô`Õ­ì‚8°‹ rÌVjÙÊ(mIë±Ú…žGgÆ×KípÕÀàVƒ†+450¸§9pàKG¶S×&;Œ?,W“ÕnÒWŒ¯úw³eÎѳFÙJW. ù/ÿã_þ¼ú—ù—ÿû¢üÓl—懲ǹ’VÖ;Ü<­Ô~½ù¡Hwª^ðïWïÍ»‹D‘³?“l÷î¾_é›{uˆMÓõÖ²£˜Æê)1[Þ*mJK¤“õ¾ÉÍ×/óuË<­}O‰„½¦ùÛn'óu²s4Ol]£ì¶¹¬L–ÒZfàÐ[]“÷Š|=$Ƹ¦êRÍrë–ÕÅ~PôùùrzùãÇ‹ûŸ^LšÆRó¢ºf•¤5Û„K¯µãTk®—ßœüRݤTZ›sÿâAMÊz½nœ–ý×ݺ[wÿg–w–«ÿPŽ¡€uZeñ–ºQµÉûæš“¹á³w_PÏÔ¶ ³”d5ìgVCѬ6°YÍiQ£QíÙÕâ ¤IvâÙ±$ê•q"*ñìŠùÒaí°x.Õ¢Q-Õ¢Qí™ÕZEq´§U$Eì4]gfNs›”XÓŽÇ$¦¤‚™ÐÑÈN¨™Ob©ÑUiÊ>7#éÂZ~JSVoK<KÚž²à´÷Ë%¬ÜUÃa”±îÚÒqšç`4Ï=Cóܾ¦j¥;ws<¤9Ή8QÁ@ruŽTª@I@9Ô)Ù‡óJZW N)¬‰Ä>ˆ7âµI±¨=iõ1ʱ`49–8NXPIÖõ8&Ñ6:°mÔuùÙ£!õÈ ©®PïhÐÓ1€Âhݱ‰\ž´)sõ ¤þ¢ÁÕßLoúoÔ;è¿è(ô_È © "T Mû)„¥*.’bÌ*eM(ƒ À™ºG„#è¿äàú/nÏø…¢'§¢Sç);3oŽûÄOé´nvóêX‚b®k'Āǀ €+“ -oJA‰q p¤¯.…ê4J¬q aAdE¾ V KÄl¯g{u[ÕX ëù•Â:‡<&‚‰`c%¬X ë¨+açÞžIî«>Íó•–Ùá8[¨3j1™¿~¹ã¢KÐMêï´ø÷ZNZ5¸÷µdiS2âÿäFéd0'ôõ ‹ˆùÕxìbTéñŽbº4wðzì2YQVyƲJo;¢Ø1²Øqˆã |§¢íÛ‰ZŽ©¶°h>Q9ëzTRgI ˆ, ÝçdïßžÌo—Ó{F ·òžN0£kÕ,Tµ„a¿‰;7¿#ñ›žˆ^‹èµÀAüD iD¯_ôú&Mžl” \¿ãÒQ–Û¶ùE© j}Y~Gå>è /uAv?â9sç&ùQÏù‰¢_ýÎUô‹Ô£èw¶€/<„N:¤ÐÙ.Pª¶™ÿï]êlý’<&“ͺ³˜ÖÖ—jbŒÉÉô:MþœÝ$öÕçH¹(MÌ·€” †„ZcÊ©Ì\{˜->ÅX~»ÖTÛ4Ö]êUÁDøi9±¹i[ß²û€zGU¹|1O·MŸ ]¦ïÜ” Öe’bÖD¥Ä[)AQ)‰JÉÙÙ£ 7Ìa‚¤›ÀºÔ àê?”c(à(á%lHm£EàÜ6ü¦½ )0 ¶/0;¾e_[JÑ08Ü ýa¹½ÅÒž¶Š´*›\½ÖšÉz‰á3ÛRÕö¦dô[ë•‹áuñ]]®H™?ï<“ã{u»â,Øh€hÆæËIzJ¤·“…FçL3g:[%éióúåõrsŸ^›l6“›{-­eOe–ŠpÉ´¦×ì~ÙÓdž¶yHã}Ôìd'ódþx?Qà*Q'á|š–*¦ªvêæ¯›ªçz‰éOmª󘾳Tó¸ïÅd¨QÅ‹*ž·Š«ÐGï U<U<ó¬PÅk4wuÁÌDEhî°ÖåÚ¤áEÉL{ÙàNÖ xó´Y~IÖê¤ì,±Zº(ýkêýau–ŽÊ …ä~òçl¹ò‚ëÏöBêzÙ?œþ Û2bo´¿À¢Ã¢-ª6ô 
0&[…<BX%`Û¨@û¨œí¬…ÛÔvdŒ+Y>$›ÕÊí»ü¢qR½s{;[$EóÜ¡7öúÓòæ[igÈÆ©c¢&s-cG Áè¤rÔÑBqq¢Ü«ƒ§Þ×ËõL~é¿Êí«D,ŒË|\Å×ì-Iݧïv³¤6S‘Ûnþ¸˜³ï»XÐw±˜('›Õ.N]ûb€¶¥hø­n’œAg\íþv]ü&!C¥?íõK,¶¡ÙÔT¶i4ÕŽP9‘+5àœzß÷F‹hpx6½¢7v 6oY#e¥ÚëΚkS?uq$á€C-ý º¨›ÆèÊš¹Ñ¢æXÑð›Á³T»MÒqÃéû'>]Ý3jŽÃhŽ,jŽQs³Û¼s­f}S¬ž¶º¡ÂÖÝ.?çËÓ<ùCÓãnWæEµC>.¾þö®È5·¸{*¢ÀŒ÷T®Ïgëz…ºzc³¼©mU¹>Ù¬Ÿ>(ÅÿC¢3=&yÊ;/“‰W´U¥ãr¦>B±1Ìý›¥½ FL®3+Ô™%f«õFS‚I@Oêï•^÷|IžÁ­oV³Çü¾â):¬j¡¶Ù<_¦ä¶B‚ŠÑÌ6jFþ™j"»Ë÷²\Mÿ±óªòg-qÕܽŸMÕ[/keºHÌ[Eõw+¥IO÷V}ÿîýÇûd¡Ö°Ø$«E¹c¼,Ho—óimÄJº›´…¸è·GüŒþ‚+¼GÆûÕ†ÑV™±Úåíí:Q ¾¢XétA&Ö{fšLgOëÂT”›´õ€†Uš”Ý/ÔìÉdj) •tµ|º»¿¬î£,|R«ç—n¶DÒnKÜL® ® 7Ï×åãì&3½dW—O›‚6X\Þ –Ä÷îMˤ?W›åãzçû®òSÐÂ)ŒÛév¼© Í®¼=þ˜mî-3Ž)g9UgÕ»ÉcÎ#fw&K­œ ¿¥äxéciõ9Ñ·‚fu:|úw%ûlœêwd쮯,Ì­XhÚñj¤Í6K)YJUÚüàÖÒÌÌÙb.ÅÞæR÷YNWN/Ú¿¯“l¥»Qµ´¯¢àÝyÍàOV7j“)H›uú Ü'd%˜ÿ–SËר~(2qœér]ËeôÓß`‹þFèovÀ-àB „$Q€M“ À!"0àc „éÊCX`©NãLˆj 65p½œ—Žy5°m ê–x–ôÝA(î ûj˜ÞtÉ´aD³Ãô9Íý|Y-´³Z¿¾ÓGÞNVÖ½»ïn0M0æ‹ë{ÑÖ<='™P[ƒÜ®{AÚ6,‚–>r™H-yöÁZíèg´rè.µÊjtŠ’í¾ÎæZÏôY¯ª¹ÞÞOz[/§¯Ôv`¶wÙ8Ö]SÕi|Ló-¢n§MÔ&?×ï©]¶ÿãA)É]È8ôVÃåª^×ݯ޵Ôºr}úT­^­–+ÅfþS &~Ò:5k1†AÅ„¶©SM ‚÷÷!Y]Hî½VC^`8á~~˜­×Jðæ;%Ûƒ¤K~ÒËËåjupœŽç 5x¾ äKg[¶ªÆÃŒ­9•³³v¬éS’špƒÔTðÑ…«þhïwkÿ ³«¼•ƒ£[ˆ†êÆMb·¥d£ gÃkë­Äbôý®ú>LÌB›µ®çààÒðÒ2I‹–‰ž+Œ †‘9,ÞRÛÄÚú*"ÈB‹ÞÜ&z;½-úÔ2²g'»SZÉ…{sئCC[ä¼!=9s Y?eÛ™îβtcò2_û À~G±ŸüÝb?1%¤€9ÛRïžcƒ ›­/,n=êŽ×N«8šz¼S§Õ–'ÿ´©õåÍàËé˜jAßü-;S;C­Ô˜hœmÀ®´´õÖEÒ²I÷Iuîµâ÷:xÙxH €€ÛqMƒ*M“]œæ§Â+¬„’C ô®Óº}hµJØÔ*——EÆ@þ ».RVàí¬39SQFãÅ\ù´š£4ï°aÌ;­òYs¨QZv•ûJKõ=lsd…b9wZXjO娴à!2‡dI[_Íå0©š_i¦{»Ÿ-tº¢ö䯨=¸°©¯}Á|ð)hýúZ•²¯òÑÞ¥%æ^’ļS3“¢áá{«³‹ü<½K:—oëJµ0r[¦¤½þM f±™lúž8wªÚn³ŽoÓüv{µI×ÃP\û¬õól±\}Ý|û”,îÚSfZßîØe¶¡ÒCí’W“¿·oMH¡RËö§µ;u;/õ)¹SªÒ[³©:êqë¿ëNnJ«,mvDÀWœ$g”!ZîO,óÜåZ‚bňòþ.ÝÞ®³¿*.p·\ýØ&Z­;¢ŒaV¬‹`ß¶X[÷ùn¤žyMá~^Óºx$ºŸ×”Ƽ¦1¯é±å5%âäÜ?<æ5­[È“KP‹IP{輦²ŽùǼ¦1¯iÌkóšK^SóšÆ¼¦1¯éÙä5Å1¯iÌkêC0$fe9¯)m«K1h^S"$E$-[Ť€ÂˆP‚HÝ'€Ò˜UÉ– Âfºv!ì‹(ŸEbS·ÐÙ'6•1±éi%6ˆï”Ùtد9XjS bjÓáS›Rxjc6zLmS›ÆÔ¦çšÚ”cjS lÞ¤˜ÚÔÕÖ-bjÓ˜Ú4¦64µ©-µ)xÅ„Òa¸¤œFhž¡¿(°ž)9PçÈD1I¥z„"8R°ãJv:´&³Æl§Ç–í4OazÖéNéÉU!ÃT9dºÓ¡ eÏ0ßéÐçÓ³OxŠHØžÂS m*qLxzl O!ŒOûfJ!b䌧CŸç™òtèãó`9O‡þg›ô´fcǤ§1éé!“žÒ¶Òè¹bÌ Œ1;›¬§yq“ù;f=u–‹Ÿ_¹½SO{JÉÉ™Ýðñ§=¥`è´§x+ ~NiOm๘öt„´§;2ÿ˜iO‚ yOcÞÓÖ“—Ž©„ 
„p;ÁÉéfBš}ÄT¨Z×Â6]+¦B=ÊT¨ž@.TÊ ê§Õ‡cõ‰ÉPc2ÔQO˘ 5fC}&ÙP©W ‘Õžµš5ó?Û)‚" [ÎÿÜ_‚tß¼Ÿé ›Šßü²š¥2fêÞovd©-üuù¨s“®{ÉÖn ð–îŸîÓêÝCïÕ¤d)†ú¯lcWÉÝ$‹$TœuKg:‘ªÞºYÕ ê ©¨7íýÓhj(ÈœV—¾,P’°2‰’5ÝšKV¼Ý|kû9žl‰ùö’”ÕçjªK gI,X›f®C«^ùáì©ýGÜ®8Ö¥»ÛM‰·—¦Ñ–¼l‡¦*éÄv²#öH·“Ë+'Y5—äVù­O"Y›æq×ÎP›³Ò3b]®µ½”q•¤~•üdNé!K£Ãå^V5kÒ¹´‚5I5½3vÈ™§­ó2O8{™Ø?-ï&+Åhf7e·Ká¤*¥ùØ Ù¨¯ÜãÜu‘^¸gŸK?f³íl ­rï7%>Ì—ËÕ˜‰Z`~QQ%à}r3™îÑF¹\É4²Åf¦¾g²V‹ùy9Múd+jê*ýÖ?&óyø¤ídÿ¨d€0B„a²°Ü»n¸··BÍ£VM¶“|«6ÖÔȹߺ™Œ]û¬Q¬*Û¼·=ϯïô‘/ZZÈ’Þ¯oîkŸÔ»gAc1ódM>´öé5ý…:<{2. ãTæ}òØ^|¾Âº„´½n¿mø-ím®@KW;ÕF²ov¶ÇÛØ–c—Y¨aÞÖÂ\¼EmEór#é`¨¸W,û'˜ÚìB@*»8}BWÀóžashúù<ù¯¬8Ðçɪ‡εÏm žTxw/DI _C„•Ë‚„x%™Ú„” X¸+4NR § „ NkÙÆ‘›Q—ßµn£„v_k7y@îõÎw=ù‘¬ú¨ö~ @†Öë~þ?O)Çëktê0s-7›åC i¡¥³ZâèpïµU¦òb{âY°=˜íùMo]18·Þ¹¶\íQoÖ`+X¢ßêŸ\Ý"‚®´ã æ²¹G3u*PúXNVåÚ°Vk[GI꫉Êò<¥šºÒ@Œ´j^o Ýñ¶4õýÜ=-ÑÉ,ýœ,—Ñg}&ŽùÛ˜øÚ%f4a$àQa$ŒïÙžrÛ¹XYì~–QY´þÞuýý|Âm/„ÓŽwõt{;û«ÍH÷ër±gléÅ”P¢0ØŽ\àòŽ(Dù`dùFù ÊÏB>pfêuçÄûl‰Oýbá¼ÝîæºYYÕÊÆÂ¶N•ë¤VM4¡ý N½îY…ûc]Ûz+mú)²£§_`¿2g–l™š"9b¢"’ãŠHN›Ç£é{Ò{áÐè³Êôz¡Ñ3Ș8¡æ÷«&:¾+Ùü2yl§²²W[/U˜woGMSOËî§dq×n”gçF{—ÕC.c)Ÿ–7“^^q·Óe.æ?u¸,¥%ߨU›¢/otê0O•·øS]ùº,ŒÉJšQ_gð™[U»¹^hŸ ÷`CµVù&¸Œy~ÔŽùnŒ‚Ûk¢‚ä¨ G9*È!‚ÕÞ™L&xŸé?Î Õ3@šØˆú{× ÷,µ–Cøò„K¡}ºw[K “©¥«D]^òþÉ}£0pö馉+ò‹‡YÆ–®öêz{-$>¹…DC,¤uêòjá¡eÊ:æÿÇ—û÷d®Z¹¤*ÿX$Ø]1¨æØïpŠÛ$ðZ!kGøsRæ³õ¦V ªÞp–ìòóR‡z t»ZeG8ßÓjtµ= NWØUÕêÔÊ^ªÏ®|î-¡î½±;PB¥=­³Gâ粬©ìÇ+Š%†A&Ö¦· ~Uñðåðr[]:ÍȦ^ZõšVE˜8$çÍ´’b÷|]>În2a2[§ày)®µVVSíË_òIç(´ÙMJûŒ5 ©VN®>VQVwÔ÷z{‹—®ç|¤[êG&Khka³ÝÊo}“_ð˜ü¢&ù…ó,W 5’êÝ?Ëû^qÀdu£C’ï‚dKûƒ¬óßrjùoM¤ž ™8Îô~ C?®­29:f%ERJ©™&ZˆÄ ú_Ãdƒ„κ¤Ä̸” ¡.z ì®k[ƒúJ›}wŒ;¨©ió¶)ªŸw.¸ Û+«;ÜÍjhv(²Äð^%ÎZêKç¤'æÒÖG.©%Ï>Xëý¬VÝ™•B{gnLŒÃ–˜†í]6Žu×VuÓ-!ÏÜÜSQ¯òTN'K)ã­qsž¿%¯´\kvøóbZ¹¯Õ† %1˜{'ow‘wT—ÉôõËÍê)sý®ÕÜ'ÅO G³Û™µî«ÜI÷e¿^½Mþ©¨Íx‘ƒåt%¾êÿ>½ ùÿÂWTý¤ÿýß/Ò»ùW#¼xÔXT|@EÕcÍV_–ö–ìcööuŸ^–Šm–ƒ6³œÛë¢].‚¼#È;ÚûÜí}D{ŸGXr´÷…·÷yœ¤¥ÓööÄ¢VÄÇé@P¨€wvî;ÀŒ=üúú‘SÙkÇšç;e 7(k5HJ»•pµ[ÙûÝšâÃìj'øÖàa­†É) n¤©Âê§ô„ åNhë­§ö.Ï6YÙ¬ÒuU…jÄ©:à§ÆÃ˸ÕVK=W\}èò“˰gKëGbm}1õ¡5~dÓøÞþĉŠù³TÌaTÌ#gTÅÜý¼+eÁÞr^“èªÝ‘^â˜89_#œ§¦~ʶ3Ý?hÆžž´”¨|GtçÅyôw‹óÈÔÓÆ-Ù–zWš\åm}aaöÓ£ qí´ÞÒšèÔiµåIEåÙŒ‹åÍàËé˜Oßì[†ÂÚI¡Ø f즓¶Þ"D´ÔÓ}R{­@½^9¦d 
À¤Û¶LyÓôçû©p+"³äÙÃÆ±ù§AÑ6ÇÐæl3÷¸¼,Z{"¼#Â;¢ÉbE²ëبTU"ªâLŒ7·W%šÁGÊã‡(oz”Þ.9Œ·«UQ ã¶Çº…ÖÆ]ñþ•U-:[žD=l<Œs§…ãúT$t *=Ãg…ñ·õe1¡&ufó+Íz÷³…–…ÛK½¢öäSM}í[Ÿ‚Ö¯¯µmõµ‚´wYêíWÑÈ;5Kmìœ. @¹Õƒ%¶Q»ö™Çä÷kzõeæ>=—…©¾hÝOwݳ•¥£|þ‰g©¼_ê„ÔhÊd¿Ô ‰¥Nb©“c+uÂá©yˆ¥NêòäjÖ°AjÖ¸Ô ©eþ±ÔI4“F3i,ur,¥Nò±ÔI,uKœA© b©“Xêć``,Ô0r©ÞV«rÐR'S*À‚H '̰٠Ì1%@uSÎ “ á#Æ´ÕAŠ8좢çP꣸ƒÎ½Ô G±ÔÉi•:†íTêdÐ9X©Žc©“áKprræâ°Ñb©“Xê$–:9×R'è(KP›/é¥NvmèýÑÁA¢¦]-Ýø‚Ñc•Xeà„« 2b•%"q … Œ`SËÁXé2„b€¨€\ªLªæ°TÍáê2Uš+Ü<ÇRe``M4VˆUŽ­Ê¢±Ê?¹’£l˜’£‡¬20°¡ìVøtzöU؉bØT`6ø U†Q‰€& òêªy³g§y<ß7‡#çûød<Ï|ßžË÷=ðw<Û|ß5»:æûŽù¾™ï›·³CÏ\FƒËÎ'ß7·ÉÞ‡Ì÷}âÞ¨“¨Q~V©v9?9ƒ;þT»; ­Ríò,(ü9¥ÚµáæbªÝRírx4©v ©vcªÝÖƒWŒ©‚ „p;Àùé¦Ú˜{ÄT»ZÓ6M뀩v£Ã#I)«KRzlI/å£4ùˆaL>1éeLz9æY“^Ƥ—Ï$é%)Ò™¶kÿ…ˆžsë®høw´çÿíöj“<®;gÚtê°À:½Ó³s¥-W•Û&7›å ›æ/Õþ?Ò«Ù9öWZ„#ÉÓÒ 1 r@Þm‡ (e…"€0†4¯áðO}*‘êxä ¡=ü•eLéíóå2›DR)`BÊ“¨T·2UŒA ¸@BpB…¦*¦>G}šÀJ ãJãÔâG®­I[®fÿTòßÄ1ÏÊþù]ýßʃžoQÍ·­2÷ÎòûgÅqž2Á°k–G·ªõûL5ø”Ü)ÕòöVtÙjéÊœ 0Ê“¬(O­ÝS—jr2ÈÆV—­¿*¶“п&{›K’½Ýômû£lyþöržÕg~ªË4gÉSX›µ®CN¬^éæì™û*¤uÙóv3ìíe}´¥BÛ!§Jr²d‹Ûº;©º¼–U3Mnuäú“µI w͵-=“%Ö%bÛË'WIùWI^æ”S«Kh%îël£Î³ºÕúS¯¨þkòSºö;yÓK§~wЬ¤ÇÛä¿B™ÝföJ]ê$¹g“Õý½Í=UM]Åp>-o&N¾óYcŸæÞ¬]”I¿Õvé\C¹ºf=ê¾ /ÛÞ¼#Kö]6ueÈÊ]Œ™°0fÂWb„E„ .A~Χ¶L Žm̉„ˆ2­YÛ2á+$! 
”ð ÞnÎ,—ØQ´lo_¥ß`&<×>ë¹eEäìû÷¯]qZúÜòÏ®ÑÐM~WI[éIýÆCÀi|]k‡…Ù\íǤ³ƒÅÒÅîþó’%N® ¡óÿ6ÍÜÎÉš x}÷‡[ÚåžšÅ]÷VϽ½‡ÝšZRÔ¹×#ºªÃ *bõòn²šmîf7ýõ[g¹,ã¡dÊS2m1_—-œ£º,ª~禾ŠU0ý[WEvé~ ÛÞeò—f“¹˜ûc•»eŒ£»ö~&!Ø4‹.l Êνº.7²ƒ?¹B_V¹?ƒ*÷8V¹Uî}†ÄÝ#W¹mõn­r”’`"8#JZÉF DrÆ(q\ =çJ‘C 2‚e€vQIh˜Û(eîiÜBç^æ^Xæþ´ÊÜÄ÷Fªs?ì׬н: c¡ûÁ Ý vrã°% b¡ûXè>º?×B÷ü ÝSds'²Ð}ô'Пtý>ÏÈïãîò¹ôóàÀS¨—sK¢órÐ\ÂÕâ~Þm‹ùõ·ì)Ÿk1n¨Òª0§ÕEmTNÿivY…ÊPEùZ»+ëõRkþ=½S±…¯³‡ÄM—ý|9½üñ£IµwW)Ýà–ŠQSX*ª†¥îrEDûYÌ2¢\0*Íd­„ È1!qµ•QÃÖÂ3[ ±ç´Þ‰fmšš"£ÒÉØÃ¬e-ŠTò}ÉÛÞO MúÔJ¥®µRí…RæswÊãë}­µÙõ5Ç9uÙ%…Zî-)Ôª1½Œ'Wü³AŒ·Ñû3×Û-õir• ˜ææžŠ¨ “9Ÿ,¥$¶NÄA zþ¶¼Òví[-R P-RŒR-ò?-Õ"³R¡ sØf˜s{]´ÌE¤wDzG‹Ÿ‡ÅD‹ŸO¡ºhñ nñó8Ikj´v~à†š¯™ü„à@¨T‚;û÷ÀÆ®}ýÈÉíµƒ5RžžÌ‡Ø¶@¥ñJ¸¯ìýníña6¶†kððŒÖÈ ³^|ðˆÝ’›&œO¡­·£Ú÷»êû0±œmÞƒ:ü§ÐÃËÂ5j•íQñõ<0¾Þaö,jýh¬­¯"¸>´ÖOlZ¿ÓÛ¢Òá8Q9–Ê9Êy„㌪œ»Ÿw¥0Ø¿òEƒìª’^ò˜<9£œ°¦~ʶ3Ý?tÆ^¨”¨|Hr’Åôw‹ÉTÔF/YóïHӃ뼭/,LzÔ!Á#®Ú ¬÷7<8uZmyZÁy6cy3øz:&ÃÓ7û±vRc-vB»ñ¤­·†@-ötŸTç^+Ð@Ÿ“W‚1-!Ãà&Ýð°•øšf¿8ßO†{X™%Ó6œÍ?!ж:†¶÷P›½ÇåeÑÜ1ãÍH3’]ÉÆeÚ­8ëóáU jð’òQîð(ý] ãïjUÃ8ˆí1o¡õqWU¼‘T[Ô,ŸzظçN ßõÉÈèpzÓ çoëËbE9LÍæW¾$Úa§ õëkÍ[}í í]ÖŸ{û5òNͲ•Ò!ºNjc5’ÚSîäj> ôtrš¿”œÿ˜Ìç>;CØwFCW;•ï¨ål§’]ûb»XÄ.±‹Eìb»Eìz~´cÏŠZt¿¢V]°Þ¯¨…cE­XQëØ*jÉ“«¨%bE­Ú…<¹ŠZâYTÔâuÌ?VÔŠ>¸èƒ‹µŽ¦¢‹µbE­XQël*j±XQ+VÔò!Ë\QKŽZQ‹c J¥àEþÈÔf9C !§€@d˜l .B@ Jºà\Åó(¨%â:÷‚Z2Ô:±‚Zð½‘êi ú1+§%c9­”Ó’'WNKÄrZ±œV,§Ëi…(§Å޲œ–°ù’YNk׆Þè$'‡«¥›ŸBª“XÉ&V²9åJ6bÄJ62Ór”®‚) ‚šè"¤þA#Ú¥…¹©æ ,¡`B0œã*e3°&+ÙÄJ6GWÉFÆJ6òä*ÙˆÓ¯d3°¡ì²øtzöulrö6YqZ<ëJ6Ò¦´’Í0Jñð„A2*8êÞ<;Ýûà5%äØ5%>ϳ¤ÄÀÇçÁ*J ü϶ „Œ%bA‰q JÈQ JŒ /‹õ$:ÊÞ ØdïCÖ“8mTF²ÏJ&>õTîòäR¹‹Hå.Oåž©ª¡m(ð9%s—1™ûñ$s—Ç“ÌýÈИË=æro9z5—û@È·#ü„“¹Ì=b.w­kA›®uÀ\îÑËáž›€º,ØÇ•UQ³*hô‘1©rLªyÜÜwžâÖÎ2$‘>M¿,7“=‰ÿO5Ë6=Lªý¤WóT'©×‹L,ÔÞœ õ¢ìç? 
çNË‹JdVÚäJ;©ÜBŠ€Wp$!!R*°ÈUúÔALÇJaà@©L©²ã+$Dˆ0$‰NìØ`Ì2¬Bú¤{*Ü|Sl¯ Áò>Tvı¦^2¬|ž¬{NûoE¯®MÊúÛr5û§b¡“ üºÍˆjò¨O=-Mn6•«ñ‹«åÃï«äÏÙòiÉÒÞ!Ü_¡íÉ]²˜vYœŠÕÁÚK>8ËÊù¿Ð˜Î$Àj rŽÔ¦™Î†ú{©%€¥QMaA  \nuAõÈoªç‘ÚùÄݶŒ‹ß©röò'Ȧr;2w[öÞ2fª¢¯÷³›onö†ü,©·84t¶SÚ@›hÊ*:Ÿ'«oÝ5gn‹Iù;JÏàåÝd¥”¯‡ÙMç—;tWæRò‰.$ÈyE‹¼júœbBªéÒ’ÛÙ_m§Ï¯ËE²;­-½´I¬Šü9é<êÛ»¹érRœ=<=d’®KÜzÌ„ÍX¶zL)¤½Ccþ‡>‰åvŠÎV;—þê†7¸ ³í}ÆØ·òR¨Ùhî1Ý–_’ÛyF«ë47²;·3óÆÎ,¢a¿ä~=–’X½TÓíÒoü0_.WákÈ\Ô¡’†ød¹wÝpo/Ô¤eØûŠf_NàÐ]©šнں) ã…+'Ê”€ f GÙ/„H "ÚÓUx1aRª{aÌf¶à ÚÁf¾YÅÕ¿èoÍlZ=ð M=™kT/lÖõþ–š{S÷K×#°¡›tv?«Å]ÍÔaÜ䚺Èu†‡d5qÉ­œÜÚI‰Ò6Ë/Wÿçi²JzÞNæŒ^[u?-o¾u^ k'éâõwfZ˜Øf(ôÒµñtm»¼ýŠeÿ£˜©ì‚Audçy­jAí¥e›@Ǩ^µjê4W|üö‡ùPMXn6ˇŒqv¹ØA:4wWýºLÏùíö}r3™î¹MQÆñ¡­m×ÓpKK ò+\-gWÚmÕuÜq(ѵµ!"…Ùî›–k ´5¶ÍŠfÕÛuàݽzj+qú­ˆûs;–ŠN%˜·NZN…ƒr*×IÝi›ãÒ_)™}Jn7zhëž^}?…ˢˮgÅ®G¯DH@,”8ËÍmÐ?Ò;©m8Jæ¹GïúW sLTò3FR`²ëE±s‚m=íÉeëö)Yܵ/kµx6uX'…Óà8ãk"ž&âi"ž&âi"ž&žÆ‡µWNŒBtp;‚HûÔÐaU”ßVh ᆲôVÊõzݺ³êïØk#ʶ¤Õ›d>ß«`þõºš±¨’ÔAч6€^ü²šü¸¸[Í/ÀŶƒ‹R”Ü=(hŒétª“?eu:{'æ77>M~(®¶ÝÖóô÷¾Pb´»È•³–éÊzÚÎÚ¾8RiÑ,…Ý+±•âò©wÛŠ’cUr¤Nóun¢£Û¬œ—ìˆN,ÂT pBM° ÄR`Ìt\1 *;j0H (Å €3ÿÌQmoÚ–°˜Xÿ¸§P‰÷„Ê^RíqJ¤D‰4J¤ç*‘ÂCK¤ª˜9ãS¢¸¹ ¡œ $’”Q%v2iœœ!0C\H¨¤£H¤ôð)²™¡^ü-™L“Õúâ>ý¯ží§‡EP«*ó˜¾óVýgè‚«vçùQSùÄù$ô:1%;%Q ŒáóM¢ß•3«ÅíÃÆ‚æ?Œ|ˆ%ÔaBNcŠ«Åp\pÎ(¸ƒbLbD”©šŒ!²Ãˇ¸îøþ¤sU¬¢dØ$r§‰;O™ÐgnŽIœ- 6¸Î!‡T±%¡8ƒ›æO¥é€æSÜ‹±Šë\¬nS %ä´0ž—T¹C«™8™å–üi³|,"o4ΰøû: ­( ukÄ¡þq"(Šh”@ÏOeG!J"bÉ ÄÌj£x0#è` G¦§J ®M›Ú¶©DT)F‘@yp Ô`†u0e*ÈóðžV©•4ÑÑ^#­ · ;7AÕqZÎÊÕ¾eW[æ—¡­™ðd„I…É(LžŸ0ÉB˜dsÎ8…ibI¡!Lr¨Ùj§6ର d)1„™Í€+©²6cÌð¤.LGôžl¨ÎÑy’K‘™^o¶¼X¨#âb¦“©-&ó×/‹£Y7Ô·¶ÉZjļq›0©‘w’ù=³`·,øy?(, •¾°LåWã± iN£Ç+¶­‹¬Ã_–ßÉÞ:¸ Ü´if´¤DŽRâãIÜÒqÆ‚ŠÜžB©è ,¶÷²ö`‰—•Ë_Ë€ìlÎ>OVw³Åú2˜¾àº¨( {Qϵ‘ÁÕdK ùn rMàm‡8Z[ |m˜óNøµSþ|¶®'—ê §pe{ûžáÛ²+Ôf Ø ßË@P“-a'¿.Z7YB]b‡^Év5gïñ°óJà{Ï^ʇX¤Ú æv·e>°†Ü©˜Ìdõuù8»ÉT š< u¹¼r*Ô&éðÂÌH¿$M8jK鱓:¤’a'GD}R‰Ìó^9¿ô~œ-ž”¶{9²Å Fƒ¬PT™ÏŽ­#C…HM ¡B`XɃ HpʤW á”2È€VÕ=*à(:² «#_z*~…êüv¢øôT·qJv;MþœÝ$ >0) #ˆ¨ „˜(}‰³&9«$º•¦2$Á^—è¶uÄÛ x»œþè¨ñ²&QÿZõފ‹ۄ™‹‰;NË)¹˜ú¢Îa„Ÿ/ˆ äBƒ;…2oNô E·‹WHÒ+äeaZRå`HIÕ.¨ÕJ³¨£PÇÛì·Ç QªƒŽ3ݧãÆà®‹ÝÑÝÑÝÑÝQÇ‹nŒèưI”‡qc «’˜AâÞ¥¬æKò¨¢ÎÙÆ[ûRMŒ1ñ˜p”ú>˜@i®½H])HP")§˜ábŽô 
ŒÁR’ZIÓXw1…W…¥,r‘r±õ-»¨¡wTùE“®“#¯–ߣÞ_Õû‘Ï´›OÇkrŽ;ÝÑáãŽFÌxô,£Œê}OäÔ}OQ.ry”ËûÊåhH¹¼E4ÛàSK !Yò}ÉÒí%–ö8À *“íõ’mûËå&¬^=äø’ÝôBk'x×ÊÔ½)‘6 Ñ*dÛÝs±ö~]ÏÝt…ÆðŽÛ´ó¨+ìë ØaÚÎROpž˜3Ñ"pÌMx§Ï8F#p,ÇÎ8&Ï8†‡ÔAZD¾]e%³é;‰±°EŒmî°Ö{ºÉ¥ÍÐúã-I:¢HJg,ÂÖN¶æº¨¶ak¶ak¶ak¶5Ìè‹î1›Dyذj§ož6Ë/ÉZq÷Îh5K%ŽM½? ÎÒQ‘ŸÜOþœ-WýCö^ö3ßMðAˆí¹õìo°¸÷`ˆ!ÜâD„ž£B!FH˨ç¨@¹jh”«O¶95k;2Æ•,’ÍêGåö]~ÑPzß¿¹½U²QÑü"3kë½½þ´¼ù–Ló¬¶Ù8µ4™kѰÐ~0Ëz¯N)ò|êëÖËõLŸ é¿Ê=úCÿ•k1Ú½)¯û´]]  ‹lÞd·íúq17&×w-°çZ&Ê™fuKQÛ ÇJ4|^Á]µ¢Ôߺ֘ t£ß“e\ݵ®9›ÍêÝb,€[,šØMl®+[cb‹†³Ñ g—ÑÀ \GmàÂÑÀ0Ó‰›ÅÜ$’Z…wô¤¶f>ÀÑ“Z•õ˜ãŒEOê ‰y®‹=©Ñ“=©QÐŒžÔèIžÔèIžÔèIµI”‡ñ¤Z¿ ™‘Š2„«¢&ù…›—£L[¼R^èdئS×|±i™Ÿœ§I^]§YãÏý1ɬ¶[z/ž©M¥è¶›L\›"ãm2»KŽ8°Ž' §ù:7ØmVN©Æ–³keÌS|Ž À{4X#ÄR(ŽÈ±ÀDÀÍ|aÄJ1Ÿ‡$¼ç“÷³pÔ?î dÀu@†î"ú‘ŠÌ| ‘ .2§xï(1G‰¹»[ ZDVG€„@@а1bæG@ÎVò1äˆAã`\2u€¨€BI€DcˆÈ"¬ˆìîÎÀESÿÍMfŽ¹Ï… Ø~ýòúi“}Ç-g‹×/Óg^ö#º*‰¥³ÑŸ²?Ôß³0«¹ôÓãýd]2èü­Ëy)Ãg­Ëƒ/ýU† ¤ÓrY"EڪͺT^žw¨ °¯|­|ÜKŒ 犆ÛDÖç(Üœ:ÛQJ=¼ˆ}B†ÝzèƒØBˆ} è1¡d#ð.7 q‚cþã*I‡g(.Ë ~=OÓÏJ¾ `ã¨ÿäÊZD€lý•¦y'Ÿֻ-³³EvȽ~©à/2ëÚ8~÷.ï(ëæý6›Ï~z¢PpY„ì)Z@³Ášì¶ÎTÌB³PÄàÀ³P 9 …'VßhÐÚºSe^š¤Yy^‰8¥qÃ…&Ó꼜pÆ$‚Rr³“sJ2e¬{êb'"yQ%¯(yŸä…"ò“Q"!×øu‚„Ò™M›œåÞuýƒÂŠ>P¶$d•ÍðÐ0Õaì}©&ƘœŒ²ÓäÏÙMb_}žV˜@i†¶ ˜d…Zfʩ̴ÇBÌ‘¾± XJRkmë®ôªà©\±TÚ<±­o©K¯ÓM®ç­2XŒV° ÷Ø{îÎMÂ'Þ3cb¾š¨ˆ´("0*"Q9?E„…"‚%ÔaBÂ!ÂÌ4:ÁŒ`‚ Î¥E*…ëìbˆ1‰\ý?1†"B†OAc•Ewsb†”¥!¶Ô›Ât¥š 0,,[JbÏA‘ƒ‚-ƒ"®‰SCåmè­6´¤¸x¡±dßÕåŠúóÎ3ù‰¢ÛÈF;}3v2_NÒ£%½,4:gjTÓÙ*I¨×/¯—›ûôÚd³™ÜÜk.{*û°TÊK¦5½f÷Ëž&ó,è%­òQ†ŠOæ÷uj®u|ΧiÖª—M6uó×M ­búmú†¨î+ÄuîÎR¤^³“•FÈÑ©BŽò‰ÏrDQÔ7£¾y~ú&=]ÈRÏk7-9IÃðªr:ÊÄÒ œó•îˆÃÔgÚÎMf>“Oð4²Ü‡£Üå¾³­‡íY È©ÎÒŒ £• Ζ{× ÷ëY8TA8`:ß(gOkÑðl…âŠq%ˇd³úQ¹}—_4“¤½¹½-’¢ùEf¥×{ýiyó­ôðdãÔ쓹–% t Áì•b¥HÉ€ÄJ¨Ö˜ D †B9Šèˆƒ‹Žþ–gšÛUÓk³›=¹óý/Ù<Ø?ÝÿYTRñÄLÇíœý†lÑÔ¬^D¨­æd*µ÷×[‡Yö§וyÓ.PVýX‰VÚt7Ÿ©ÿæò…ö7ê`­ìûn&¯_^?m²ïø¯¥®B•>ó²âTi,‹ÓŸ²?Ôß³Pf²þãýd]ròì­ÉÃãæÇK³›ò„L•ΗtZ.«S¤h[µY×^(/Ïí§úî­Æ×ód1ý¬Î©ËzGý&¨Ð®› Åì–ÝëåôGT¡ª*v™®sS¡ˆË¤œ™  üG«CáQ¡ˆ*TT¡ÎÏÏO¶œ–$‡·×¦8ý”L!Õ5"q›°sâ¨Û´œ•·Ú¶"Z”ǂ۴O¥¾)ƒQ ‹ÙÙÚ´ùÁmÚJMŒQ—œjÚ´Óm$°ºÎÔ}Ó¦Í%Zµ&AÉ(á2t|“6†®Paaõß§‰T"Rx_Œ¤Îsvn’$sž™Ç £š9‚0()B@3Q9kÇ&TBÆI‘!à@„t!$‘q¾ˆŒ<Óa”^£ôzVæDqºæD6¾‰˜«)mÇþ6¿JDäî‘ÌyÎÎMˆäÎ3s^Žeˆ‘TÚ®Òj•>L8f¦cY­)S °Ò|)3sH`)À F0Ŭ?á%Ax*’ Ž’`”ÏÖŽI_»€"*c†BHÅ›ÍÚ Ä 
*6¦´þßݯ€zÛéõb¯Å‚ž]:ÉÆ‡áóÐër—ÜÜ´böý8w#³FÛŽ'6qÎˤ˜¯Ñòf[ü¦ˆFRbŽKgœçÈlËÛ”¨R*t—­É.oâx²‚NÒÏÿà&'byq¾LV ˆ,W("ƒL´Ÿ©öP>øáú:ŠÅŸàÎϺk-~•±4m*Iç®î²L­Ó¬¿’5X& ¿7&…q-~ëÃ`EbŸyš»)Ðï¾Ù<ݟ׾é±VûXÍâ©ëëòêĸAÖ—·ØwÕ\í ú‹‰<÷mõ“\vÉ–£¼Ñ›¿»d—¹Âðoá*÷$Ÿ×|vÿòâóÃ.Çóþ'Õ˜Ú왋ϔȈ(Œüƒ('“êgä\F50÷pò¿æ7&˜šÝØål[j<3tUR*o]&‹È~• ˳dÁÓVúÁÕ}VèÞß®[º8 9Ó¼IÓ닎ŽdÑ4qeyls(~düËÓfIú¤Xý}Y’ËΖð1"ÿ>‘Œ†Yg†áaùQ|üÈG~ üÓVãÇ&מ-§ãjàHt$H)ÈïC)ˆQ)„aCÙi¤wàUK’¿?Ê1  Ãi¤è@LÈ£ï`WCL„ QÔ•Ël &D&,@Ý#žÊ„ÈÁ]ÌO4_þî~bÞýÄS¹;lf ÇäûàF{úp¬…Y͵ŒöìJaR'²ŒEyÇr–=j±ÔXÏaÉU×.½/Àmºô{ñ÷çt·KïŠ_/ý»T¾4à¢1¯ú÷•W½RhkŸPݪ÷øÓt1}||±üáîîÅl¬>ÆÇJ˜;l%ÌãžYçØÞmâø…&±mõýìv¦’߯õ£…Ç¿Þij/¯c`Œ8¿¸ñÔBþëKßHÖñöSzï–ñÆO2~u³+~†ˆ@¸^&7¦s­/¿À#›‡½AµŒQÓíòu>>¬âßtXTïʾúðýúÓ/WEæ–õíC‘ýɧr]Ÿk4Õ»tÞØªr}¶Û>¼þ.Ö§+ãí´ê@Ô’´¼k¨Çwx³´Ð ÜÛdÈ›d³Ýé³"¡x£×Çîc;ß$÷æ>„ :ëѲ•!'v»ˆKpa“¿2Q¯/‹é%Ý,~¯ Uþld‚†»ù¡´i££©sïc»ƒÕ9¼»|¼_ÆkP53gg¦‡Ó¼¯f^·u±öý¨y#‹÷m‰ÉÞ»$Fúp»¬-~ž’K+íiÅSç4 æÑk÷)½O湩ͯ‚ËÓ@OKLƒF Ü[”yò®ÁÛÖfzm"ôFµngÜ0¯L ;b¾W¿%»åÏ@»Z`–”ÓUv€=#þ­-C5ðËÍ DMSW'Ù9¨´Ïÿ¬-×ÊV¬¡¢ql§§»áÅIÀWUR—?•´Ìg÷S–óz;=QVTaKsÑîI+ƒdýÞætJ‹Ø«R'u*aøìv¸YÆ€q t5VÖ³k_Üicµ)›+ºqƒµ¦^™ áž©Žî´ùŠV{¥~T>F½~{ÕB¯? 
õz–]S:îš>»j”tÜ%ã.iÅF/\¼œiqŒ7‡úª¸•^ã×±m®Ë^ŠÂ• ªÛ“XV†tt9s>°üÉ"CA–1¥Ì±‚;ˆ ¨©Ñ-­º¥ÜjÏÎ-•î´yÚniçÖ·Æ­ƒ³ø¶lômGßöÙVZçÏÕ›•á½ÙN?RvŸ}êo(œöìÜBåD–ñ‹ÐèÖ=_·ŽnÝèÖnÝssëÔÜ:Õ•ìstëªnt"ØssëhäD–Ñ­ݺçë։ѭݺѭ{fn>¡ãþÜ#ipqŠÃ%yäcµ~ÖÉ&ë†fŸã•æ*SÄWgö?,—arÔg=¯ôÙÖêåó+;Ì qëlÿZOtÚ·'¬'¬'ª'ë ë ë‰ë‰6ô4-w«ÆÖå%Sç.9Îîì€Ý*çYÓÛWÏËÙ=óRAã›<û†)­’¯LšC) Éè[“¯Én9ÑǃöÕdú/Î<¾<óøê¼ã1>ßøèÌãã3OÎ<> 6~›R«i°òz~Æò¨ZãN pnÌëíNh×¶|ö2é‹zÌaéÄÉK—›&‡†;6D‘tmI©kKéú>¹¾É»çÒRJ—…¬¬YË.Fá¿{lYªŽŒðÏsǹPeܰ7,ŸÊÉD'açG6â³{YM¹i?zp¼;ëßg]ÂVÖ%\ɺ„í¬Kø€ë§wôÝG_a§…mPXcª•³§ZùÖÇǤ.O>©KKݘ¦çÙ†1sLÐÌ1§œ*ßÌ7È«âcÆHW ŒÑïþ®ün’qZO9¤hjŒ‡#ÔIµç¹@Ýèò7?œ8j±q÷àÀvòß×,ó&îÿhtȵݬÎÝ#ì”Îù¹í½uS˜žËñaÑèøŒŽÏxæè¹%çßþ(9wû²7‡Cßñ\ÆÆa4£qxn{²$¸q(´ý}ø¤‘‰FљӑvÓ³˜¬uæÙêåEf-¬¨ûßf¥»ö›<„þW–p6‡*Yµ¶+z½RTû€ŒÕÝ‘Í;cÑßMXXÛ’8S´} :6^h~иÀÊ´O`xŽ UQ‰%вC†wOä×éâñ€ekþIÃâU·ø–®½×}›7éÃ=˜ŽéW|Ú *ÛàνCۜî2ÈÖÇø>ží¶½Ý›®¾ ‰5''ŽXÄ$ó¸õËü¥ÀÀHÈHДܩ¿”ˆqɱdT1ÁdÄ÷ßî/"ë„Hà E9èØ\ë"z]¸A4iÓÃÔ€¹ŸÈX¢m.­]ï¾Ë /ai#çã8Ì¡ddLbZˆ´ËßñQZÚÓ ÄR³¢Ý³Ê‰«[û¹^º!i[ªÆÞ .Ê,v U.fçM'_áreÃüm홉ùŠªþøN_ÍÝöU:Ë<õìv¼Öbaùý‹dgÿË‹Ïén™ŸŽÝífó¥F5åOåo–AâEC¯ùý²'S_ñ.;¹Q~ä­î—3B61D#«…VIåþq#šxšÅŠŽ­M‡XÛz>Wï°Ñ¢E'ª5vÔ^F­$xõ°K?ÆÛ䯸·™lé¢4 0~KÜÒQéÅËÙIºñúÈŸ¸XM½z°‹Ç jµûÈí#´(?dFQ‡ŠEž³ÂA?î˜öœU„V´cVQ÷¬œ Q—ÎiìÈš—É’P¹ÝXáÍ«››dï“zä^K÷öC:ÿR˜|¢úÆl¥á©Ehã$7J“œDBZ»¶÷NïEC¾õ’rš„Çu .!ä,;ƒ,ÜΠËâ6®’NTÇ!¹ {q=«¤£Ôç*:ÄÎnû{*Çu þžÍóvu‘ Kq8ÆKÆTŸ‡xŸ‡dŸ‡ÄÁ)ˆ"0߸ïîçV•ϹÕc|é™GåTt ? 
ØM›ÖÃ’éjkB–e¬O:ÁÏ‚µ,ñ¹ž'‡¿pÞ¡¦ž;ÔefØ#ßÚÚÔ÷©i¹OMºÄìp³ºˆ×“N¬Æ'v¸Wyü.]Ä«ºý¤¯N¨…åû÷Zÿ»x‘Ý6˜¾—ê «·o^¿^¿’ ]ñ(¯^½½z™ÿòñÛ"áµ¾¼ç×ÊeÓÚ¤·*`Å9Úp;ù#Ù&Ÿ÷ø_}ëÖhÑ=Ô`Ïzr³IÿŠ× wÍPµýmÏ'Û‚; ïÕlߦ›Ç_ô]CæÚ̺™è̆‡9 e­íö>ž·Žx 7Í€ù+¿¼àŒf¨8[ëWMv•%‡“hºQ›ÄlŸ…¾k¦ š=U]´•6‡ÙÊWúÝÖé_PAÌ6›y8YLåG¸hfε"¥÷]¶ÖOÛ9sæyËFQõbÞÔ0ë|nëÃ_9ÝòñÊÉî{‘çTÜ-‹dÛ ã›x³Ñ»°•ë7É.ÇÁäWQt)#¥”`L2E330;³ã(¾£L÷+VI;Ñcg697B/Š$à­ðÌèì¬ÀÖRóìg÷r½cŒ:ÕªÞžbõÎt|ò¹<É.yÛ£ù­Sž-øÛèµyšýmx]ókžú´Oﲃ>w³ûûûùÚ#i@XŸn~7<ù»yò¸ßÎZ°ô3a]ëá£Þ7o-®öyÙb²)š\Ø/RÜÎ"$¤ŠlÉåþÚÞvÃàMâ óx«ßÍ•µ÷ºc}-Ö`{À•$ª4°»ßK¶Ýb²(IeufѸè/Ó÷•ž¨«Ò²ÿ9s‚íNZz˜Vgâ2'ý™¡Ç”^¯fë/?ÀEÓ[€y½ýs·™éo®³j9~d?š7ñÉõÃýý*Ö28[™÷»ÎçàC¬7Ý4™6ò[åz×"?®¥>\›®õqÇ·í}LÿçÇUúy¶zÿÆ„[·ÙÏI²(ƒG¢¬|¼'¯?)ù{:|þT5é©´:>¼WXëÅÇÙúÖ2\U†Úè›6+»“9»ø!™ÍÚ÷üâÞxåŽÅ¢ÞL_ªX¸üKÙ*¨Yóª¬w;U…õcÛÚ“·Ígm>êeó™¿ÍG6ûiO6ÚüÑæ#Öëobó‰×òomóñhóG›ÿl>Àæ#›Üm>~š6Ÿ¶ùQ/›Ïým>î´ùÔO{ŠÑæ6ÿ±^}›Ïü¸V~k›OF›?Úü'`óñ6?r±ù‘»Í'OÓæ‹À6÷²ùÂßæ“N›Ïý´§ú»ÚüQç|›õMl¾ðãZ}k£OG£?:ÝOÀ蓌>v1úØÝèÓobô§mX€ê¹È̬æ'Æì#bù™±"MEöJv!dblûWTžâ²±ÏVo5Ðy‚ˆ9¥§Ï|ŠïîWÚ˜^/g›x1½ß¤»t’ÃO®î¶—»äæÆdEÍòT宲6Ûãlâ´b ¬æŒÊͶøM¤Ä|Ÿ)Ö32) m:T锹ËÏèéË›8ž¬œ“ôóâ¹9vW^œ/“ÕØðŠ‘û.T”ÃÏT{(üp}­ñ½ñ'¸ó³îZïcTÆÒ¤©HÛ¯«Y²Öý·Î³þNÖh™ ýZÜ0ñ]Ù0CÎ1‹ ·ùèwßlžn:&¶o{´Ùþ`£ÅT××åÕ"@öÍÑ‚£ã¥]2§EÁH¿Ô’ÿd—lAÊM°Éq—ì²ccðoqüÀœÂÉÙr>»yñùa—gþOª³÷fÏ\œ˜½^FDaÄàD9á˜TVåBªSNþ×üÆÄÀøá~9Û–è&3tUR*o]–¥Ë~.uw幨úi›8›ÑçÖÖý“ÞµtqZ޾æºzܘµ3¡9ƒY¢Ü‡âGÑÁQqtçi³$R,‰þ¾,y´tvØu F”ß#Š‘Ã0â‘Úkæ„ü°ü¨¾~”#?†áGÚi«ƒñcSªÈ=[NÇÕÀFôZÌÔâq ¥ ¢ïC)¨Q)„aCÞi¤†vàê`Iòôù‘Gc@†Åñ] :â¿ÿ®G#†aBÙU5y &$ßâ‘ Ã0¡rpׇóéwá'šú磟x*7’Èa3c8ndß7Ò‘Ãp#:n ÉP|ø|ûâã·¯@Lˆ3a4Šï€ ŸÖ×®'É„ ¦EÔÅ€.Ù¢C|Ó“*êíö¶:­4ež>RŸ¯Rà¸Z˜8,ïkO·ÃôÜ•LÒBÞ»dÜ=Üý«LÁø~]äLiJ¾ÝHí#}ä3©Vj(T¡ÉAÉ!æD–²Çc”©7:N“hïÓìs®8²²ÐâÚFÚ˜š¼úØ HW,ÔhQ-À9ÿŠWÄ:ñëÏñ>GOå!ýCçoeá¨q™*éWZ}ºhÛÍíç]»WK·å_ŸË¿*õÙöC5«–*/ä×Zç&]­Ò¯ºvAMÄÖ«U9bK›ûÙbQàƒÍß5ÖË®MHnÃvé}o[Å7»âïÏén—Þ¿6&ãéEiÌÍ/ki‚öÄõ õ7ÚØÉ%}ÒP2<¦¡´ÒP:‘¶©‘K©æ\ª Ö[S“7 ‹Ÿ’»Øš×BŸ"9\pÝÒ,·IÏ{úÓt1}||±üáîîÅìØ\:µfƒ;!Zzmœ'¬¹^~› æRQ*­'Æ;É(³\._ÜU¶ÛíQºŽwr=ËèREHñHÀ?L$‹bÏm.fÕi\Êú±c/^¢»††§…çV͇w›8~¡Il[}?»©ä÷kýháñ߯¯7ñìËë#6ØrÐ_@-俾Äñý‡do?¥·ñnoLM ð$ãW7»â÷gˆt‚êëerc:×úò <²yØTË5Ý._çãÃ*þM‡Eõ®ì‹ ߯?ýrUÔˆ\ß>uf­q*×u½„FÓX½±Kç­*×g»íÃ;°áïb]Š$ÞN«D­仂W­îo–º{› 
y“#l¶;½böBC$oôúÒ=clç›äÞ܇T×W]C@¶2äÃn÷qi.lòW&êõe1½¤›ÅﵡʟLÐp7/à0mt4õ ¢}lw°:‡w—÷ËX,˜9;£0­8œæ}5óêð¨¸ý9]-ö£æS-Þ·%&{ï’éÃí²¶øyñ_­´§Oµ<ÌQ0^»Oé}2ÏMm~\žzZ’`4RøàÞ¢¬È} ~ܶ6Ók¡·0ªu;ã†yeYÙ¬½Úø-Ù-ÚÕ‚³´ œ®²¢\ñomª¨_nn jšº:ÉÎA¥]Oçgm¹V¶bõc;=Ý /*[¼ª’ª¸ü©¤e>»Ÿf›Ûd½(+ª°¥)¯a÷¤•A²~osêä²{Véåä°´ï‘Åb]ÇÏs@8‘eÜwzì4ûÇåF@°þÜicµûTK®èÆ Öšzå.„{¦:VºÓæ(Zí•úqPùDõúíUc½þ,ÔëYvMɸkúîš2guóôÙï’¢q—´b#Cï’Η3-ŽñæÐB_·òÑ«qü:¶ÍuÙ‹±µZU·—x¯ éèrbŸbÑÇœ×*ÑÏÉ-îT{vn©r§ÍÓvK;·ð¸u0nœÅ·¥£o;ú¶ß§oÛD„ À3õfUxo¶Ó$]ç'Gß°êJ'‚=7·ENd¿nÝóuëØèÖnÝèÖ=3·ŽEgpëhW²ÏÑ­«ºuʉ`ÏέCNdݺѭ{¾nݺѭݺçæÖ¡Oè¸7ÇH\œâpÉDùX­Ÿu²Éºá‡Ùçx¥¹*O=?ÓÉý ™ÒƒYÏ+ýD¶„ºòÂÁüÊuÑŽ‰9[œsÄZOtÚ·'¬'¬'¬'ª'ë ë ë‰4ô4-w«ÆÖå¥ãô.û}œÝÙ»ßæÏ=6OÏôúX°{楂 Æ7yö-èÁ²’GšC)lzvkò5Ù-'úx±áÓ¾ãó3/Î<¾<óøê¼ãá?ßøèÌãã3O‚ߦÔj¬¼žŸ±<ªÖL1Á®87æõÇv'´k[>{™ôE½ æ°tâä¥#ܱ¡ÀŽ Q$][RêÚRº¾F®/„™ëa)]²²f-»…ÿî¾ey,S–[ñyîXbªŒ–ã†åS9™è&ìG6â³*{YQ¹i?zp¼;ë’Üg]ÂVÖ%\ɺ„í¬Kø€ë§wôÝG_§…mPXcª•³§ZùÖÇǤ.O>©KKݘ¦çÙ†1sLÐÌ1§œ*ßÌ7È«âcÆhW ŒÑïþ®ün';ù¢©0R<Œº©ö<7˜]þæ‡G-6îœî$F¸Ówz~‰7ð˜x£bdŸøI9'hã';¢ûoŎvA;Sj‡j‹äh‹F[ôül j‹lwvx˃[¯ô¿Ò'ýo×éxô݆ØNKI/¦ßÙØn[‡/¦}ÉEú?й¶›Õ…³{ä–Îù¹í½uS˜ÍñQ£ã3:>㙣ç–!Hœá(¹tòFãpè;žÉ8ðh4£qÃsÛ“¥ÁC¡íìÃ'L4ãšóƒvÓ³˜¬uæÙêåEf-¬¨ûßf¥»ö›<„þW–p6‡*Yµ¶+z½RTû€ŒÕÝØ&¡ÇEôwÓÖ¶$Îãm_ƒŽš4.°2ížcBETÛEÙ!Ã;Œ¿'òëtñxÀ²5ÿ¤añª[üK×Þë¾Í›ôáLÇÇô+>q ÌÓÝ;´Í9ì*ƒl}ŒïãÙnÛÛ½éê šXsrâˆEüG2[¿Ì_ Œnb$ EJî¿Ô_JĸäX2ª˜`2âûo÷— õ B$p¢tl®u½.܈ šµi‚ÎaêÀÜOd,Ü6—Ö®÷ ße—°´i5Ça%#`’Ó"¨]þŽÒÒž!–è˜ížUN\½ØÚÏõÒ IÛR5öqQf±‹©r1;o:ù —+æokÏLP¾i®þøN_ÍÝöU:Ë<õìv¼Öbaùý‹dgÿË‹Ïén™ŸŽÝífó¥F5åOåo–AâEC¯ùý²'S_ñ.;¹Q~ä­î—3B61D#«…VI¥ ÝH†&ž&ÖìØÚtˆµ­çƒpÅñ-ZtâûGGíeÔJ‚W»ôc¼MþŠ{›É–.J ㇱÄ-•ÎQ¼œý‘¤¯üÉ‹ÕÔË¡»x ¢ÐT»Ü>B‹òCAf$;T,òœ2«¨cVØsVQ#‰;fuÏÊÙuéœÆŽ¬y™, •Û ‰Þ¼º¹IÖq™Tä`-ÝÛéüKi`ò‰ê³•†§ñ %ø’Fþ<Ò"~_Â;0uYÔÜú?rŸnmƒ²{–rü¨ÿ2ä(^è`UšÞ¾n)B 5‰Úl‘›P¿_¯¬ð^0î»`\–äæQÓò458a1޼_kL~Òæ1ê ùv(9ÑD‹£&º¥÷{ýoÊ#_ù«¿ß¾I¶÷@ÂxÑçÚgu!ˆ-`݂͛åááñmëåÛ¡GeÔêÛí7?-á½Ö³dõþ®8Ùû9YÏ@]Œü1kò:¿+ê’ÓÖ›½Á6qÓBwlÖôm°¹v~lŒ“÷ÃÑñÝokWXaÛ[œõÓ´Q.Jêð[*¡Ôib„:5÷ÍMzâLÈQné}›A¶þlÝü˜ò¤4ÁÅ3Í(*ŸN¯ûb}xŸ‡ToÈýÑb'`ÔöË /˜Sët=±~wCT²GúiãG)[Ú¨:´Äi hRѨG&a·sÿ6†:gáü­Ëgã¥e]ò\YD‰z¬KmœÞ1nû¢SÇ÷ë·è§EÍ>ÎUÛú!G¦¸~ÈÃÆ¾ï…Þ‹„|/Òv¶½—›¢!ß‹zÉ9Mޘ㺗7rŽ}FBÂí3øÜ¬z~nîý©«¸#Ÿç:ÄÎnû{ G¿&ø{6ÏÛÕEve)‰N 
t/a²úò>{Må²im’4•w2ÌÜvòG²M>ïQ¬úÖ­Ñûæû{ÖГ›MúW¼n¸k†ª5èGh{>ÙFÒy¯f»ø6Ý<þ¢ï2×fÞÐÍDçç;Ìd,km·÷ñ¼uÄk¸iÌ_ùågÌø)è®õ«&»Ê–ÿá$šnÔ&±ƒÛg¡ïšiÀ‚fOUm¥•z¶ò•~·uúT³Íf`NSù.š¤óõÎâ> ËÖ¤Ñ?sÞÌöÞ!rqàbF¥ƒi3 ÕÃ[ÍmTlþi5ú)¾»_õ¶Óë%Dk‹)„I»t’ÃO~]Í’õå.¹¹1¨Ãì;pî f¶Ol✗ Ø0_£åͶøM¤Ä—N7¸À‘Ì©eS¢J©lÐ]vD&»¼‰ãÉ:I?ÿBˆœˆåÅù2Y- =\¡\ˆòê`•Žª=”~¸¾N€bñ'¸ó³îZKme,M›Jò¸«e<ÿ¢½%=Fë\ëïe˜‰Ã¯Å‰Y¾ò·>ÙUdé)`¤¹Ãýî›ÍÓÃäöí;›îC0‹Å®¯Ë«SÅú â¶y »J©vî;yJÛê—¶ì’-Vy£ 6ç7ï’]æÝ¿…÷g\ŽœEç³û—Ÿv9L÷?©†ÊfÏ\œxTDFDaÄàD9á˜T¿ç"«ñ¶‡“ÿ5¿1ÁÜl¼,gÛRšy¤«’Ryë2Dö«ÌCžåž¶Ò®î“=÷þ$ÝÒÅi€˜æ]žƒïQnLx¤Ì± ËÓ˜Cñ£êàǨð_ž6KÊ'Å’èoË’ôh›,&†Í×п=#ª‘Ã0"êL<,?¢ï‚MYª‘OæGÜi«ƒñc.kÏ–Óq50%¹AR øûP hT aØv©¡xƒjigIò7àG<”aø‘ßÕ C1!ýûïj˜ì¢#žÌ„¼+EÙ@LȾ&¤#†aBáà®ç'òïÃOd£Ÿ†¥ÃfÆpÜ(¾nä#7†áFÕ•`z >ü¾}‰ñÛW&dÑq&Œ†bBõ0áÓúÚõ$™°û8íbÀ±Äe5 †Áž]…KîD–±ÖîX¥²G‰•ë™Ú*¹ Û¥÷ÖMWt/þþœîvé]ñkcЦ—‚–éÒ¿Ãté•úYû<é|_:ë§ébúøøbùÃÝÝ‹ÙXTŒŽ.+6vØ—Ç-¼°Ž¦½ÛÄñ MbÛêûÙíL%¿_ëG ÿ6~½‰g_^ÇÀq~qã ¨…ü×—8¾ÿ¬ãí§ô6Þ-ã9üždüêfWüþ >3p½LnLçZ_~G6{ƒj£¦Ûåë||Xſ鰨ޕ}ôáûõ§_®Š„,ëÛ‡"©“5Nåºø6šÆê]:olU¹>ÛmÞ ëC“ñvZu j¹WÞ5”Ù;¼YZèîm2äMŽ@²ÙîôŠÙ ‘P¼ÑëcH÷Œ±o’{sBPÌh ÙÊ »ÝÄ¥ ¸°É_™¨×—Åô’n¿×†*62AÃÝü Ù´ÑÑÔ©‡÷±ÝÁêÞ]>Þ/ã5¨š‚™³# ÓŠÃiÞW3¯ŠÛºû~Ô<=‘Åû¶Ädï]#}¸]Ö?Ï´¥•ö´â©N‹£—óèµû”Þ'óÜÔæWÁåi §% ¦A#…î-Êôw×àÇmk3½6z £Z·3n˜W&Q߫ߒÝòg ]-H0K Êé*;ŸÿÖ–¡Šøå梦©«“ìTÚÇ~Ö–ke+VPÑ8¶ÓÓÝðâtß«*©ŠËŸJZæ³û)Ke½(+ª°¥9bh÷¤•A²~os:HEêU€“;U&|v;Ò‰,ãÀ¸ºÈªêYdµ¯ î´±Ê:?½çŠnÜ`­©WáB¸gªc•;m¾¢Õ^©•O„Q¯ß^5†ÐëÏB½že×»¦Ï®È$wIÙ¸KZ±‘Áë/gZãÍ¡…¾*nå£Wãøul›ë²—¢%¨êö¼š•!]Î<0µüÉ"WA–@¥L¹Ò}†kjjtK«n©t§ÚssKyäN›§í–vnàqë`Ü:8‹oKFßvômŸmuúL½Y…÷f;ýHÑu~rô «¾¡r"سs ‘YÆ/B£[÷|Ý::ºu£[7ºuÏÍ­Cgpë:“}Žn]Å­“‘Áž[‡È2ºu£[÷|Ý:6ºu£[7ºuÏͭßÐqÿnŽ‘4¸8Åá’ ŠŽ|­Ö;eÝðÃìs¼ÒleŠóêLÿå3¨),”õ¼ÒOdkX«#–O°ì0/(F­Ãýk=ÑißžX°žx°žD°žd°žT¨žx¬'¬'ÜÐÓ´`Üý¬[——Lýºä8»£vo¨ˆgM¯¬ŠgØ=sSAã›<ý†)µ’S!Í¢”…eô­É×d·œèóAûê2ýÆggŸŸy|qæñå™ÇWçߨŒóÎ<>6~›R«i°òz~Èò¨ZÃNpnÌëíNh×¶|ö2é‹zÌaéÈÉK—WUvh(°cCI×–”º¶”®ïƒ‘ë aæúFeùÙã YY³–mŒÂ÷سD)áŸç–%q¡Ê¸c9îX>•£‰nÂ~d'>+¹—U–;ö£'Ç;Ó.™ÚHYÚ%l¥]•´KØN»„ˆ0¼Úxzgß}ôuZØ…5æZ9{®•o}qÌêò䳺ô·Ô™`znSÇMsÊ‘¡Î|ƒÄ*>fŒuÕÀýîïÊïv‚!©“O)šbã)ÅÈ€tRíynp7ºüÍO'ŽZlÜ=8Þ‰x§ïôü2oð1óFÅÈžý”˜tqDUt²#ºÿvðÄ\Ðhh´3§¦ù»J&¡ï6ÄvZJv1ýÎ>ÀvÛ:r1íK.ÚÿQ|ȵݬ.Ý#ä”Ïù¹í½uS˜ŸÍñ‘£ã3:>ã¡£ç–ðR~û³ä 
;ù@£q8ôÏeÔhFã0‡ç¶'Ë‚‡BÛ؇O™hÆ5'ìì¦9f1Yë ̳ÕË‹ÌZX'P÷¿ÍJwì7yý¯,ãlU²ŠmWôz¥ªök…âíC½cÑßMXX›£Ç)Ú¾/4?h\`eÚ'0<Ç„*Š¨ÄŒEÙ!Ã;Œ¿'òëtñxÀ²5ÿ¤añª[üK×Þë¾Í›ôáLÇÇô+>mRΠ»wh›sØUÙúßdzݶ·{ÓÕ4±æäÄ‹ød·~™¿ IB‘’û/õ—1.9–Œ*&˜ŒøþÛý¥BD`}ƒ ¤h#›k]D¯ 7"„& ªMtSæ~"cñ¶¹´v½oø.ƒ¼¥ ¥‡óqæP22& 2-Õ.ÇGiiOƒÌŠuÌŠvÏ*'®^líçz醤m©{ƒ¸(³ØE€T¹˜7|…Ë• ó·µg&(ß4× |§¿¯ænû*ežzv;^k ±°üþE²‰3ÿåÅçt·ÌOÇîv³ùR£šò§ò7Ë Nñ¢¡×ü~Ù“)°x—Ü(?òÎV÷Ë!›¢‘ÕB«¤r¥ÉÐDÀÓÄZ[›±¶õ|®8Þa£E‹N|zÔ^F­$xõ°K?ÆÛ䯸·™lé¢4 0~KÜÒQéÅËÙIºñúÈŸ¸XM½z°‹Ç M´ûÈí#´(?dF¼CÅ"ÏYá „ì˜öœUbV,ê˜UÔ=+gCÔ¥s;²æe²$Tn7$Vxóêæ&YÇeR’ÑÒ½ýο”&Ÿ¨>€1[ixjÏP‚/i!!àÀ#-â÷%¼S¹]0z>rŸnmƒ²{–rü¨ÿ2ä(^è`UšÞ¾n)‚ µl³EnBý~½²VÀ{Á”ï‚qY’»Dm»œ°GÞ¯5&?iów†ü ;ȇF”œT££&º¥÷{ýoÊ#_ù«¿ß¾I¶÷@ÂxÑçÚgu!ˆx‘æÍ‰nÁòððx‡¶õòíЉŒƒ2jõíö‹Ÿ–ð^ëY²zWœìýœ¬g .Fþ˜5yßuÉiëÍ^Š`ˆ»P[‡5}l.¨]§ãäýpt|÷ÛÚVØöçcý4m”„‹’:ü–J(uZˆÄ:5÷ÍMzâLÄQné}›A¶þlÝü˜ò¤œâ™æ•Ï '×}±>¢ÇC2ê ùb²?ZìŒZ)±Ù#&ÅÔ:]O¬ß¶²? U Ôƒl‚–UTZÓ73!»#r8M¡³ÆÑ´ñÛ˜°Óp*¸MÃÑÐÓ Ž 2 OPGJôâ‰îᙣT È’nS@C.w”ŠAøœ:6KçCA!á="Þƒ‡÷东¿¾'Ï&çï „¼“Ž’ÚKLœ°ÝvëîÃêûá`¸]Urú®jÀà[ö ¾Ãï±>ûGCÄ/Šô‰_†ü\ÍPàÏÕA¾¥0î[ÊéÅ{BjûœÏèéŸóÃkb¸×·&§{Z§«­‰—±>T?‹Gj ùsS’çá¿pý`’Nº L]*Œ¤`ûO07~µéhZÿ@€ËñK„Ùág‚"R>4© kVµ¬v ]yü<®ÕÒW'ÜBQþ{­ÿ]¼Èn4åË E®^s¢^E¯Æ³(ÂäíÛ«·‘ù/ïàáü\½‘¤S+m‹Tãúò>ãR‰ø¶Z›Äbå ç¹ü‘l“Ï{ÏLߺ5ü·yìïYCƒ ˜þ¯îš¡j úÚžO¶ùy@Þ«Ù.¾M7¿è»†Ìµ™7t3Ñ9%³oÓZÛí} }×L4{ªºh+­H²•¯ô»­Ó¿ ‚8˜m6óp²8˜ÊpÑÌ kOSÜt)ãøiX6gÁ<ÃcÙ¨v±ÂÆó¹­J åtXgÉkËÉî{‘g³Ü¢ òRFHrÊ1‘¢æ+ó=¼} ¡Òbr¼ÝM²ËJy+$.13³Ò/:=62F—)¥B¢P^ 76«+/÷ZÌ ’ïv¢§…Á¹ŠQd3Ÿh=jXP§™ºZ”C‘E:¸—‹±‰£iñªÅ\ªw¦½Ÿ,ȇÆ>Æ>ºú˜®Í3¹–'ã]~1ÛͶéÃf^ÖÓõ|ïâL½&×Y“ÂÝ™Î&WÖîhÙ‰ÒßÌr½—9$¹)Ðúˆ_*ÆŽÆŽNîhZaãÂ6'yÂÁmºipdØEy³Ù‰¹†;–“5»R8”ÙïÌ©ªÑ׋ÍàmnãÝAÏï²;×ñnb—_‹š³l$=‰Â]Ø}7MÛt¾ŸxÞ6ŸzTØ×XÄ]Y‡çKühêòDÅoôòÂú¿È>À§wh"«û†Ü?_¿Ir€2D=E‚°èéæ÷¯³,³ôïæIk]~zÈ’Â[O¢ÒϪÄyœffy¤Và ,’TàÈø®vؕђ­ ­ÞjÕ7AÅ—íh|ŠïîW@½íôz9RN!ÐÜ¥“l|~rµŒç_²‚Ièr—ÜÜôw†ÇÉÛ¬é¶õ¹ýS›8çh³ž¯Öòf[ü¦ˆFRÅŒsµŽ$ʽi›&Ušeïòý\}yÇ“5v’~þDÓ99Ë‹óe²ZlâõáZQRåpÅ3ÕÊ?\_'@»øÜùYw­Ie,MŸJ"Ï뇻틒2z Ö ×_Î6“Œ_‹“ÂÍ,~ë­øâCIëÏièwßlžn\g¸È­ý~×Áâ½ëëò*„ûˆÜç[5ï*sÝù-¬ØâɳWAÙ%[ÞòFlÒÒÝ%»,ˆƒM^×ÂóÏ9v>»yñùa—Ÿ øOªO1dÏ\œxŠOFDaÄàD9á˜T;¹,룇“ÿ5¿1Á"Òî—³m©Í<ÒUI©¼u™ž'ûU–ˆÈÒ³O[éW÷yø{£…Zº8 «Ø¼Öóã¸hgB³#R~IŠQ?F…Wý¤YRFOŠ%Ñß—%å±|˜ÙÖÓ@Œˆ¿FD##†aDÕ™Ó}X~$ß?⑃ð£)[{ÌVãÇ&Èìž-§ãj`ÞUs 
¥@¿¥@F¥† »ËØ îÀ³–$~¤c@†Éñ] :ò¿ÿ®†d#†aÂÎ Ú1¡ø˜L† ™ƒ»>œŸ(¿?QŒ~bnä›Ãq£ú>¸QŽÜ†EWîÿAøDßÁ·/9~û Ä„ò8FC1!úû3¡zZ_»ž$vµS] 8V®f6`N{vŇ…YÆ2ècáÕ¯j¬——½¢¹ Û¥÷ômß슿?§»]zWüÚtôߥְ@c%‹ï°’E¥´á¾„…UÕð§ébúøøbùÃÝÝ‹ÙXïQ޵‡+6vØÚÃÇ-¼´N`¾ÛÄñ MbÛêûÙíL%¿_ëG ÿ6~½‰g_^ÇÀq~qã ¨…ü×—8¾ÿ¬ãí§ô6Þ-ã9é ždüêfWüþ >Ùr½LnLçZ_~G6{ƒj£¦Ûåë||Xſ鰨ޕ}ôáûõ§_®Š\YëÛ‡"ßž5Nåºø6šÆê]:olU¹>ÛmÞ ësÂñvZu ji±Þ5T@=¼YZèîm2äMŽ@²ÙîôŠÙ ‘P¼ÑëcH÷Œ±o’{sBPgn ÙÊ »ÝÄ¥ ¸°É_™¨×—Åô’n¿×†*62AÃÝü8å´ÑÑÔYá÷±ÝÁêÞ]>Þ/ã5¨š‚™³³ÓŠÃiÞW3¯ŠÛŸÓÕb?jž9Îâ}[b²÷.‰‘>Ü.k‹Ÿ'AÔJ{ZñT§Å ã‚yôÚ}Jï“ynjó«àò4ÐÓ’Ó ‘Â÷efÒkðã¶µ™^›½…Q­Û7Ì+ÈŽÿïÕÆoÉnù3Ю$˜¥åt•å‹ÉˆkËPE ürsQÓÔÕIv*íÓí?k˵²«G¨hÛéénxq0öU•TÅåO%-óÙý”UØNG”UØÒƜε{ÒÊ Y?€·9¤¶-óªÜ¹qüh_r¦ßøôÌã³3ÏÏ<¾8óøòÌã«óŽoÍùÆGÁÆoSj5 V^ÏYUkòÂ)Îyý±Ý íÚ–Ï^&}Qïƒ9,:yéwl(°cCI×–”º¶”®ïƒ‘ë éRÓŽ-¥tYÈÊšµlc¼Çž%îH ÿ<·,© UÆËqÇò©Mtö#;ñY¾¬¼Ü´=9Þv ïÓ.a+í®¤]ÂvÚ%|@„áÕÆÓ;û˜ÓÂ6(¬1×ÊÙs­|ëóˆcV—'ŸÕ¥¿¥nÌÓópØ:&hê˜SŽˆ •pæ$Vñ1c¼«ÆèwW~· ‰ž|JÑ O)F´“jÏs@¸Ñåo~:qÔbãîÁyðN|Ä;x§g—yÃ\3oFö  Ÿs†6v²#ºÿvðÄ\Ðhh´3§¦:zÂaP[$F[4Ú¢çg‹PP[d»³Ã[ÜrxéªÈ'ÿ¯bN»Zßaˆí´”übú}€íæz1íK.ÖÿQrȵݩ®•³{Äò9?·½·n ‹³9>rt|FÇgÐh}Çs5‡Ñ8ŒÆá¹íÉòàÆ¡Ðööá“F&šqÍ9K»iŽYLÖºólõò"³Ö ÔýïF³Ò]ûMBÿ+Ë8›C•¬bÛ½^©ª}@ÆZn;¡´ÇEôwÓÖ¶$Îãm_ƒŽš4.°2ížcBETbF‰¢ìáÆßùuºx<`ÙšÒ°xÕ-þƒ¥kïußæÇMúp¦ãcúŸ8r8ƒîÞ¡mÎaWdëc|ÏvÛÞîMW_ÐÄš“G,â?’yÜúeþR``ˆH#I(Rrÿ¥þR"Æ%Ç’Qœ߻¿Tˆ¬o"ƒmä cs­‹èuáF„ÐL´i‚ÎaêÀÜOd,Ô6—Ö®÷ ße— ´áøp>ŽÃJFÀ$A¦%Úåïø(-íiY‘ŽYÑîYåÄÕ‹­ý\/Ý´-Ucoe»*³ó¦“¯p¹²aþ¶öÌ囿ºaáïô÷ÕÜm_¥³ÌSÏnÇkm!–ß¿H6qæñ¿¼øœî–ùéØÝn6_jTSþTþfÔ)^4ôšß/{2ﲓåGÞÙê~9ƒ dC4²Zh•TZF24ð4±¦ÇÖ¦C¬m=„+ŽwØhÑ¢“Þ)uÔ^F­$xõ°K?ÆÛ䯸·™lé¢4 0~KÜÒQéÅËÙIºñúÈŸ¸XM½z°‹Ç µûÈí#´(?dF´CÅ"ÏYá ï˜öœU„V²cVQ÷¬œ Q—ÎiìÈš—É’P¹ÝXáÍ«››d—Ih¾C¤¥{û!) 
L>Q}c¶ÒðÔ"ž¡_Ò(BBÀÿ€GZÄïKx¦.‹ïv[ÿGîÓm¢mPvïÏRŽõ_†Å ¬JÓÛ×-E0¡æm¶ÈM¨ß¯WÖ x/ö]0.Kró¨iyšœ°GÞ¯5&?ió˜t†ü ;ȇF”œè¢ð£&º¥÷{ýoÊ#_ù«¿ß¾I¶÷@ÂxÑçÚgu!ˆx¡æÍ‰nÁòððx‡¶õòíЉŒ#Ž2jõíö‹Ÿ–ð^ëY²zWœìýœ¬g .Fþ˜5yßuÉiëÍ^ŠPˆL6-tÇÖaMß›‹j×éÇÆ8y?ßý¶v¶½ÅùX?M%ᢤ¿¥J&FQG æ¾¹IOœ >*Ð-½o³#ÈÖŸ ²›Sž”oZ<Óü¢ò™á$ðº/ÖGöxH¡Þ/®ú£ÅNÀ¨•Ëœ=bRL­ÓõÄúm+!û£PÕä©dSñŠJkúæc&d·sÿ8Å:gáü±Égç£ K!ˆuA*¦‡_ëzì£ô2[ݤÓë~¿~‹~ZØêãÝ´­ŸS£ë‡<”<éù^ÈmÝHÈ÷"!WÛ{!§÷¢!ß‹zÉ9MÞ㺗72ÄF_û{bG¹ þžÈK¯à¾|Še@>Å^|JO[?â(‡Áמcë—«^[¿®¯«ÏŽ¢ƒN~ású¼ly6¡›Ë‰Ï)ÌcÌâ|3$¬RFÁ*ñà°Êù7ÂUæyѽq•žˆçñÁ}¤{ÊB¾B*ûL+rRJH=S÷Uô·E@?-QõAó“ÁN¢ëÅòåæceÿbºÚš½e¬DÂÏ‚mkå$ò}„¼ŠÄ…ó§,âù)‹0u©0’‚u~sìhZÿ¼…ËÏ[RðK„ÙáG®bŸçp?¥ºÕm{›¨òøÃꀌ?é«aa€ÿ½Öÿ.^d· øå…"W¯9Qo£× cŠeaòöíÕÛÈü—wðpŠHoƒjí²-åëË{¥S¹lZ›´x…0æ(åíäd›|Þ ¾ukܵ=DiϤ3ý+^7Ü5CÕô#´=Ÿlëþ€¼W³]|›nÑw ™k3oèf¢3¢äŽgQ­íö>ž·Žx 7Í€ù+¿¼àŒf¨8[ëWMv•¬‡“hºQ›ÄlŸ…¾k¦ š=U]´•ö»³•¯ô»­Ó¿ ‚8˜m6óp²8˜ÊpÑÌ kkˆJfÂÀȤøi˜3çÎþûí_ýWamòÿ]ÛŒ·ºç¡žwÂMà^¾LB¦ ýùÛßTEîîþa/ò¨-óûLÞÎM*ñ¶~w¯˜ýï›Òb˜}~cTçÚük4ù?{­Xx©Iž8v›n\zãyè›Íîü5ܱ\ù¬!¸>%ãëß™W^-8¦¯ !ˆ»¶ñî çwÙëx7±KVFM!J6’žDá0]Ø}7MÛt¾ŸxÞ6Ÿú>çpŽÛ®¬©ö%~45Ö¢â7zya¡•ò—Ùo™ÍÓ»ìÊÝìþ¾!ÑÏ×o’ü°ÉæqRl‚;šn~ÿ:˪ünž´Ö槇¬À‡õ$*ác•h:ßµ°·)ò}‹Óf‘¥…Cìâ`!*÷lkõðV[Ó‰Qg÷3í&ŠAÂzÛéõr¤œÞoÒ]:ÉÆ‡á'×wÛWËxþ%«€‡.wÉÍ9Γ,óˆ?k¿=þðþÑMœó·Q&ZºÙ¿)¢‘”ú0Z±á’m‡ä›:Uêe£ïò-'}yÇ“5y’~þO<7BåÅù2Y-@a®Z+OkÕû-ºh™Æ6q›,âÏpy>›/ãÖÁ´<@ãûYbïüûWø=1q­¾7Éj2ÔYôÃõoéæKVbb˜`{3_Ͷ "¿wJàô²çéÚŽsÿmfcÆ«vq§Ÿ©Ïá¶®úxÞèÈðuZàúøùÝIæBV÷Y´À´m³dÛ{åæ8p÷®¥ùÕ•¾YiîÜ?òëßÊcVÒ¡é‚a³§isPÎS™ñY»Þq}[îÃõµ¶Úñ'`ÓŸ5—Š _Î'µ'Úû–õ¾µ:¨$"ÏóLkÍl‰tß2“ÙS´iƒh»½°r{áÿü9}¡·þ+ЫG¡^õ~uu¾ûZ@ü0õ¼ÿ #§~‘Ãv·^‡zqÑÿűۋësûÞZ…zkÙÿ­I·lÇÉmèi¨7Výߘv¾ñ‡xq›ï¯Ì½2ú¿2ë^äÕC¨5F¡Ô¶9eÔë¹ÓÖb$Ô‹ãþ/.Ü^< ¡ÞšxÆ a6dðØÃBšü²E:߇ž®¯ÒûÇ".|µ…XÙ:€ Üò¸ô”7ø)ý#‰;‡@G†hèáÃõ›tþ ãúk Ðg³s=úÛcó ߺûùpïqí²‹0üû7•i4TæYÏ&«x·Ë•êÔ¡ýòv¯g›ÿÕîK3§þ@žfÄÿ÷C2ÿò!M¿üº‰È_ûrBg_ûYé‚y›­ã+F¯ØÞÙ¾ÍÕl}¯@J¯ã[ý{{ê Ç;b,Ó¯ùa¢Þälî¡ÂJ›,I[7CþëW*Htwsÿû›Wüw²Yüþn•¦›ßN7»e#“6ö¼ýî!_q¢^½zaõŠRöZ¼¦âÍ+Ê"IŽ^¿¢‡$µûœVT–Îÿïfz«+ݓݪµ,V¹W6šƒþÈ 
calendarserver-9.1+dfsg/contrib/performance/__init__.py000066400000000000000000000011631315003562600233560ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## __import__("twext") calendarserver-9.1+dfsg/contrib/performance/_event_change.py000066400000000000000000000073531315003562600244110ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Benchmark a server's handling of event summary changes.
""" from itertools import count from urllib2 import HTTPDigestAuthHandler from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks, returnValue from twisted.web.client import Agent from twisted.web.http_headers import Headers from twisted.web.http import NO_CONTENT from httpauth import AuthHandlerAgent from httpclient import StringProducer from benchlib import initialize, sample from _event_create import makeEvent @inlineCallbacks def measure(host, port, dtrace, attendeeCount, samples, fieldName, replacer, eventPerSample=False): user = password = "user01" root = "/" principal = "/" calendar = "event-%s-benchmark" % (fieldName,) authinfo = HTTPDigestAuthHandler() authinfo.add_password( realm="Test Realm", uri="http://%s:%d/" % (host, port), user=user, passwd=password) agent = AuthHandlerAgent(Agent(reactor), authinfo) # Set up the calendar first yield initialize(agent, host, port, user, password, root, principal, calendar) if eventPerSample: # Create an event for each sample that will be taken, so that no event # is used for two different samples. f = _selfish_sample else: # Just create one event and re-use it for all samples. f = _generous_sample data = yield f( dtrace, replacer, agent, host, port, user, calendar, fieldName, attendeeCount, samples) returnValue(data) @inlineCallbacks def _selfish_sample(dtrace, replacer, agent, host, port, user, calendar, fieldName, attendeeCount, samples): url = 'http://%s:%s/calendars/__uids__/%s/%s/%s-change-%%d.ics' % ( host, port, user, calendar, fieldName) headers = Headers({"content-type": ["text/calendar"]}) events = [ # The organizerSequence here (1) may need to be a parameter. # See also the makeEvent call below. (makeEvent(i, 1, attendeeCount), url % (i,)) for i in range(samples)] for (event, url) in events: yield agent.request('PUT', url, headers, StringProducer(event)) # Sample changing the event according to the replacer. 
samples = yield sample( dtrace, samples, agent, (('PUT', url, headers, StringProducer(replacer(event, i))) for i, (event, url) in enumerate(events)).next, NO_CONTENT) returnValue(samples) @inlineCallbacks def _generous_sample(dtrace, replacer, agent, host, port, user, calendar, fieldName, attendeeCount, samples): url = 'http://%s:%s/calendars/__uids__/%s/%s/%s-change.ics' % ( host, port, user, calendar, fieldName) headers = Headers({"content-type": ["text/calendar"]}) # See the makeEvent call above. event = makeEvent(0, 1, attendeeCount) yield agent.request('PUT', url, headers, StringProducer(event)) # Sample changing the event according to the replacer. samples = yield sample( dtrace, samples, agent, (('PUT', url, headers, StringProducer(replacer(event, i))) for i in count(1)).next, NO_CONTENT) returnValue(samples) calendarserver-9.1+dfsg/contrib/performance/_event_create.py000066400000000000000000000107271315003562600244300ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Various helpers for event-creation benchmarks. 
""" from uuid import uuid4 from urllib2 import HTTPDigestAuthHandler from datetime import datetime, timedelta from twisted.internet.defer import inlineCallbacks, returnValue from twisted.web.http_headers import Headers from twisted.web.http import CREATED from twisted.web.client import Agent from twisted.internet import reactor from httpauth import AuthHandlerAgent from benchlib import initialize, sample from httpclient import StringProducer # XXX Represent these as pycalendar objects? Would make it easier to add more vevents. event = """\ BEGIN:VCALENDAR VERSION:2.0 CALSCALE:GREGORIAN PRODID:-//Apple Inc.//iCal 4.0.3//EN BEGIN:VTIMEZONE TZID:America/Los_Angeles BEGIN:STANDARD DTSTART:20071104T020000 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU TZNAME:PST TZOFFSETFROM:-0700 TZOFFSETTO:-0800 END:STANDARD BEGIN:DAYLIGHT DTSTART:20070311T020000 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU TZNAME:PDT TZOFFSETFROM:-0800 TZOFFSETTO:-0700 END:DAYLIGHT END:VTIMEZONE %(VEVENTS)s\ END:VCALENDAR """ SUMMARY = "Some random thing" vevent = """\ BEGIN:VEVENT UID:%(UID)s DTSTART;TZID=America/Los_Angeles:%(START)s DTEND;TZID=America/Los_Angeles:%(END)s %(RRULE)s\ CREATED:20100729T193912Z DTSTAMP:20100729T195557Z %(ORGANIZER)s\ %(ATTENDEES)s\ SEQUENCE:0 SUMMARY:%(SUMMARY)s TRANSP:OPAQUE END:VEVENT """ attendee = """\ ATTENDEE;CN=User %(SEQUENCE)02d;CUTYPE=INDIVIDUAL;EMAIL=user%(SEQUENCE)02d@example.com;PARTSTAT=NE EDS-ACTION;ROLE=REQ-PARTICIPANT;RSVP=TRUE:urn:x-uid:user%(SEQUENCE)02d """ organizer = """\ ORGANIZER;CN=User %(SEQUENCE)02d;EMAIL=user%(SEQUENCE)02d@example.com:urn:x-uid:user%(SEQUENCE)02d ATTENDEE;CN=User %(SEQUENCE)02d;EMAIL=user%(SEQUENCE)02d@example.com;PARTSTAT=ACCEPTE D:urn:x-uid:user%(SEQUENCE)02d """ def formatDate(d): return ''.join(filter(str.isalnum, d.isoformat())) def makeOrganizer(sequence): return organizer % {'SEQUENCE': sequence} def makeAttendees(count): return [ attendee % {'SEQUENCE': n} for n in range(2, count + 2)] def makeVCalendar(uid, start, end, recurrence, 
organizerSequence, attendees): if recurrence is None: rrule = "" else: rrule = recurrence + "\n" cal = event % { 'VEVENTS': vevent % { 'UID': uid, 'START': formatDate(start), 'END': formatDate(end), 'SUMMARY': SUMMARY, 'ORGANIZER': makeOrganizer(organizerSequence), 'ATTENDEES': ''.join(attendees), 'RRULE': rrule, }, } return cal.replace("\n", "\r\n") def makeEvent(i, organizerSequence, attendeeCount): base = datetime(2010, 7, 30, 11, 15, 00) interval = timedelta(0, 5) duration = timedelta(0, 3) return makeVCalendar( uuid4(), base + i * interval, base + i * interval + duration, None, organizerSequence, makeAttendees(attendeeCount)) @inlineCallbacks def measure(calendar, organizerSequence, events, host, port, dtrace, samples): """ Benchmark event creation. """ user = password = "user%02d" % (organizerSequence,) root = "/" principal = "/" authinfo = HTTPDigestAuthHandler() authinfo.add_password( realm="Test Realm", uri="http://%s:%d/" % (host, port), user=user, passwd=password) agent = AuthHandlerAgent(Agent(reactor), authinfo) # First set things up yield initialize(agent, host, port, user, password, root, principal, calendar) method = 'PUT' uri = 'http://%s:%d/calendars/__uids__/%s/%s/foo-%%d.ics' % ( host, port, user, calendar) headers = Headers({"content-type": ["text/calendar"]}) # Sample it a bunch of times samples = yield sample( dtrace, samples, agent, ((method, uri % (i,), headers, StringProducer(body)) for (i, body) in events).next, CREATED) returnValue(samples) calendarserver-9.1+dfsg/contrib/performance/benchlib.py000066400000000000000000000153121315003562600233660ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function import pickle from time import time from twisted.internet.defer import ( FirstError, DeferredList, inlineCallbacks, returnValue) from twisted.web.http_headers import Headers from twisted.python.log import msg from twisted.web.http import NO_CONTENT, NOT_FOUND from stats import Duration from httpclient import StringProducer, readBody class CalDAVAccount(object): def __init__(self, agent, netloc, user, password, root, principal): self.agent = agent self.netloc = netloc self.user = user self.password = password self.root = root self.principal = principal def _makeURL(self, path): if not path.startswith('/'): raise ValueError("Pass a relative URL with an absolute path") return 'http://%s%s' % (self.netloc, path) def deleteResource(self, path): url = self._makeURL(path) d = self.agent.request('DELETE', url) def deleted(response): if response.code not in (NO_CONTENT, NOT_FOUND): raise Exception( "Unexpected response to DELETE %s: %d" % ( url, response.code)) d.addCallback(deleted) return d def makeCalendar(self, path): return self.agent.request('MKCALENDAR', self._makeURL(path)) def writeData(self, path, data, contentType): return self.agent.request( 'PUT', self._makeURL(path), Headers({'content-type': [contentType]}), StringProducer(data)) @inlineCallbacks def _serial(fs): for (f, args) in fs: yield f(*args) returnValue(None) def initialize(agent, host, port, user, password, root, principal, calendar): """ If the specified calendar exists, delete it. Then re-create it empty. 
""" account = CalDAVAccount( agent, "%s:%d" % (host, port), user=user, password=password, root=root, principal=principal) cal = "/calendars/users/%s/%s/" % (user, calendar) d = _serial([ (account.deleteResource, (cal,)), (account.makeCalendar, (cal,))]) d.addCallback(lambda ignored: account) return d def firstResult(deferreds): """ Return a L{Deferred} which fires when the first L{Deferred} from C{deferreds} fires. @param deferreds: A sequence of Deferreds to wait on. """ @inlineCallbacks def sample(dtrace, sampleTime, agent, paramgen, responseCode, concurrency=1): urlopen = Duration('HTTP') data = {urlopen: []} def once(): msg('emitting request') before = time() params = paramgen() d = agent.request(*params) def cbResponse(response): if response.code != responseCode: raise Exception( "Request %r received unexpected response code: %d" % ( params, response.code)) d = readBody(response) def cbBody(ignored): after = time() msg('response received') # Give things a moment to settle down. This is a hack # to try to collect the last of the dtrace output # which may still be sitting in the write buffer of # the dtrace process. It would be nice if there were # a more reliable way to know when we had it all, but # no luck on that front so far. The implementation of # mark is supposed to take care of that, but the # assumption it makes about ordering of events appears # to be invalid. # XXX Disabled until I get a chance to seriously # measure what affect, if any, it has. 
# d = deferLater(reactor, 0.5, dtrace.mark) d = dtrace.mark() def cbStats(stats): msg('stats collected') for k, v in stats.iteritems(): data.setdefault(k, []).append(v) data[urlopen].append(after - before) d.addCallback(cbStats) return d d.addCallback(cbBody) return d d.addCallback(cbResponse) return d msg('starting dtrace') yield dtrace.start() msg('dtrace started') start = time() requests = [] for _ignore_i in range(concurrency): requests.append(once()) while requests: try: _ignore_result, index = yield DeferredList(requests, fireOnOneCallback=True, fireOnOneErrback=True) except FirstError, e: e.subFailure.raiseException() # Get rid of the completed Deferred del requests[index] if time() > start + sampleTime: # Wait for the rest of the outstanding requests to keep things tidy yield DeferredList(requests) # And then move on break else: # And start a new operation to replace it try: requests.append(once()) except StopIteration: # Ran out of work to do, so paramgen raised a # StopIteration. This is pretty sad. Catch it or it # will demolish inlineCallbacks. if len(requests) == concurrency - 1: msg('exhausted parameter generator') msg('stopping dtrace') leftOver = yield dtrace.stop() msg('dtrace stopped') for (k, v) in leftOver.items(): if v: print('Extra', k, ':', v) returnValue(data) def select(statistics, benchmark, parameter, statistic): for stat, samples in statistics[benchmark][int(parameter)].iteritems(): if stat.name == statistic: return (stat, samples) raise ValueError("Unknown statistic %r" % (statistic,)) def load_stats(statfiles): data = [] for fname in statfiles: fname, bench, param, stat = fname.split(',') stats, samples = select( pickle.load(file(fname)), bench, param, stat) data.append((stats, samples)) if data: assert len(samples) == len(data[0][1]) return data calendarserver-9.1+dfsg/contrib/performance/benchlib.sh000066400000000000000000000071771315003562600233620ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. 
All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## set -e # Break on error shopt -s nullglob # Expand foo* to nothing if nothing matches # Names of database backends that can be benchmarked. BACKENDS=(filesystem postgresql) # Location of the CalendarServer source. Will automatically be # updated to the appropriate version, config edited to use the right # backend, and PID files will be discovered beneath it. SOURCE=~/Projects/CalendarServer/trunk # The plist the server will respect. CONF=$SOURCE/conf/caldavd-dev.plist # Names of benchmarks we can run. Since ordering makes a difference to how # benchmarks are split across multiple hosts, new benchmarks should be appended # to this list, not inserted earlier on. BENCHMARKS="find_calendars find_events event_move event_delete_attendee event_add_attendee event_change_date event_change_summary event_delete vfreebusy event bounded_recurrence unbounded_recurrence event_autoaccept bounded_recurrence_autoaccept unbounded_recurrence_autoaccept vfreebusy_vary_attendees" # Custom scaling parameters for benchmarks that merit it. Be careful # not to exceed the 99 user limit for benchmarks where the scaling # parameter represents a number of users! SCALE_PARAMETERS="--parameters find_events:1,10,100,1000,10000 --parameters vfreebusy_vary_attendees:1,9,30" # Names of metrics we can collect. STATISTICS=(HTTP SQL read write pagein pageout) # Codespeed add-result location. 
ADDURL=http://localhost:8000/result/add/ EXTRACT=$(PWD)/extractconf # Change the config beneath $SOURCE to use a particular database backend. function setbackend() { ./setbackend $SOURCE/conf/caldavd-test.plist $1 > $CONF } # Clean up $SOURCE, update to the specified revision, and build the # extensions (in-place). function update_and_build() { pushd $SOURCE stop svn st --no-ignore | grep '^[?I]' | cut -c9- | xargs rm -r svn up -r$1 . python setup.py build_ext -i popd } # Ensure that the required configuration file is present, exit if not. function check_conf() { if [ ! -e $CONF ]; then echo "Configuration file $CONF is missing." exit 1 fi } # Start a CalendarServer in the current directory. Only return after # the specified number of slave processes have written their PID files # (which is only a weak metric for "the server is ready to use"). function start() { NUM_INSTANCES=$1 check_conf PIDDIR=$SOURCE/$($EXTRACT $CONF ServerRoot)/$($EXTRACT $CONF RunRoot) shift ./run -d $* while sleep 2; do instances=($PIDDIR/*instance*) if [ "${#instances[*]}" -eq "$NUM_INSTANCES" ]; then echo "instance pid files: ${instances[*]}" break fi done } # Stop the CalendarServer in the current directory. Only return after # it has exited. function stop() { if [ ! -e $CONF ]; then return fi PIDFILE=$SOURCE/$($EXTRACT $CONF ServerRoot)/$($EXTRACT $CONF RunRoot)/$($EXTRACT $CONF PIDFile) ./run -k || true while :; do pid=$(cat $PIDFILE 2>/dev/null || true) if [ ! -e $PIDFILE ]; then break fi if ! $(kill -0 $pid); then break fi echo "Waiting for server to exit..." sleep 1 done } calendarserver-9.1+dfsg/contrib/performance/benchmark000077500000000000000000000012521315003562600231240ustar00rootroot00000000000000#!/usr/bin/python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from benchmark import main raise SystemExit(main()) calendarserver-9.1+dfsg/contrib/performance/benchmark.py000066400000000000000000000357161315003562600235640ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function from contrib.performance.stats import SQLDuration, Bytes from datetime import datetime from os.path import dirname from pickle import dump from signal import SIGINT from twisted.internet import reactor from twisted.internet.defer import Deferred, inlineCallbacks, gatherResults from twisted.internet.protocol import ProcessProtocol from twisted.protocols.basic import LineReceiver from twisted.python.filepath import FilePath from twisted.python.log import msg from twisted.python.modules import getModule from twisted.python.usage import UsageError, Options, portCoerce import sys import os import plistlib class DTraceBug(Exception): """ Represents some kind of problem related to a shortcoming in dtrace itself. 
""" class IOMeasureConsumer(ProcessProtocol): def __init__(self, started, done, parser): self.started = started self.done = done self.parser = parser def connectionMade(self): self._out = '' self._err = '' def mark(self): return self.parser.mark() def errReceived(self, bytes): self._err += bytes if 'Interrupted system call' in self._err: started = self.started self.started = None started.errback(DTraceBug(self._err)) def outReceived(self, bytes): if self.started is None: self.parser.dataReceived(bytes) else: self._out += bytes if self._out.startswith('READY\n'): self.parser.dataReceived(self._out[len('READY\n'):]) del self._out started = self.started self.started = None started.callback(None) def processEnded(self, reason): if self.started is None: self.done.callback(None) else: self.started.errback(RuntimeError("Exited too soon: %r/%r" % (self._out, self._err))) def masterPID(directory): return int(directory.child('caldavd.pid').getContent()) def instancePIDs(directory): pids = [] for pidfile in directory.children(): if pidfile.basename().startswith('caldav-instance-'): pidtext = pidfile.getContent() pid = int(pidtext) pids.append(pid) return pids class _DTraceParser(LineReceiver): delimiter = '\n\1' sql = None start = None _marked = None def __init__(self, collector): self.collector = collector def lineReceived(self, dtrace): # dtrace puts some extra newlines in the output sometimes. Get rid of them. 
dtrace = dtrace.strip() if dtrace: op, rest = dtrace.split(None, 1) getattr(self, '_op_' + op)(op, rest) def mark(self): self._marked = Deferred() return self._marked def _op_MARK(self, cmd, rest): marked = self._marked self._marked = None if marked is not None: marked.callback(None) def _op_EXECUTE(self, cmd, rest): try: which, when = rest.split(None, 1) except ValueError: msg('Bad EXECUTE line: %r' % (rest,)) return if which == 'SQL': self.sql = when return when = int(when) if which == 'ENTRY': if self.start is not None: msg('entry without return at %s in %s' % (when, cmd)) self.start = when elif which == 'RETURN': if self.start is None: msg('return without entry at %s in %s' % (when, cmd)) elif self.sql is None: msg('return without SQL at %s in %s' % (when, cmd)) else: diff = when - self.start if diff < 0: msg('Completely bogus EXECUTE %s %s' % (self.start, when)) else: if cmd == 'EXECUTE': accum = self.collector._execute elif cmd == 'ITERNEXT': accum = self.collector._iternext accum.append((self.sql, diff)) self.start = None _op_ITERNEXT = _op_EXECUTE def _op_B_READ(self, cmd, rest): self.collector._bread.append(int(rest)) def _op_B_WRITE(self, cmd, rest): self.collector._bwrite.append(int(rest)) def _op_READ(self, cmd, rest): self.collector._read.append(int(rest)) def _op_WRITE(self, cmd, rest): self.collector._write.append(int(rest)) class DTraceCollector(object): def __init__(self, script, pids): self._dScript = script self.pids = pids self._init_stats() def _init_stats(self): self._bread = [] self._bwrite = [] self._read = [] self._write = [] self._execute = [] self._iternext = [] def stats(self): results = { Bytes('pagein'): self._bread, Bytes('pageout'): self._bwrite, Bytes('read'): self._read, Bytes('write'): self._write, SQLDuration('execute'): self._execute, # Time spent in the execute phase of SQL execution SQLDuration('iternext'): self._iternext, # Time spent fetching rows from the execute phase SQLDuration('SQL'): self._execute + self._iternext, # 
Combination of the previous two } self._init_stats() return results def start(self): ready = [] self.finished = [] self.dtraces = {} # Trace each child process specifically. Necessary because of # the way SQL execution is measured, which requires the # $target dtrace variable (which can only take on a single # value). for p in self.pids: started, stopped = self._startDTrace(self._dScript, p) ready.append(started) self.finished.append(stopped) if self.pids: # If any tracing is to be done, then also trace postgres # i/o operations. This involves no target, because the # dtrace code just looks for processes named "postgres". # We skip it if we don't have any pids because that's # heuristically the "don't do any dtracing" setting (it # might be nice to make this explicit). started, stopped = self._startDTrace("pgsql.d", None) ready.append(started) self.finished.append(stopped) return gatherResults(ready) def _startDTrace(self, script, pid): """ Launch a dtrace process. @param script: A C{str} giving the path to the dtrace program to run. @param pid: A C{int} to target dtrace at a particular process, or C{None} not to. @return: A two-tuple of L{Deferred}s. The first will fire when the dtrace process is ready to go, the second will fire when it exits. """ started = Deferred() stopped = Deferred() proto = IOMeasureConsumer(started, stopped, _DTraceParser(self)) command = [ "/usr/sbin/dtrace", # process preprocessor macros "-C", # search for include targets in the source directory containing this file "-I", dirname(__file__), # suppress most implicitly generated output (which would mess up our parser) "-q", # load this script "-s", script] if pid is not None: # make this pid the target command.extend(["-p", str(pid)]) process = reactor.spawnProcess(proto, command[0], command) def eintr(reason): reason.trap(DTraceBug) msg('Dtrace startup failed (%s), retrying.' 
% (reason.getErrorMessage().strip(),)) return self._startDTrace(script, pid) def ready(passthrough): # Once the dtrace process is ready, save the state and # have the stopped Deferred deal with the results. We # don't want to do either of these for failed dtrace # processes. msg("dtrace tracking pid=%s" % (pid,)) self.dtraces[pid] = (process, proto) stopped.addCallback(self._cleanup, pid) return passthrough started.addCallbacks(ready, eintr) return started, stopped def _cleanup(self, passthrough, pid): del self.dtraces[pid] return passthrough def mark(self): marks = [] for (_ignore_process, protocol) in self.dtraces.itervalues(): marks.append(protocol.mark()) d = gatherResults(marks) d.addCallback(lambda ign: self.stats()) try: os.execve( "CalendarServer dtrace benchmarking signal", [], {}) except OSError: pass return d def stop(self): for (process, _ignore_protocol) in self.dtraces.itervalues(): process.signalProcess(SIGINT) d = gatherResults(self.finished) d.addCallback(lambda ign: self.stats()) return d @inlineCallbacks def benchmark(host, port, pids, label, scalingParameters, benchmarks): # Collect samples for 2 minutes. This should give plenty of data # for quick benchmarks. It will leave lots of error (due to small # sample size) for very slow benchmarks, but the error isn't as # interesting as the fact that a single operation takes # double-digit seconds or longer to complete. 
sampleTime = 60 * 2 statistics = {} for (name, measure) in benchmarks: statistics[name] = {} parameters = scalingParameters.get(name, [1, 9, 81]) for p in parameters: print('%s, parameter=%s' % (name, p)) dtrace = DTraceCollector("io_measure.d", pids) data = yield measure(host, port, dtrace, p, sampleTime) statistics[name][p] = data fObj = file( '%s-%s' % (label, datetime.now().isoformat()), 'w') dump(statistics, fObj, 2) fObj.close() def logsCoerce(directory): path = FilePath(directory) if not path.isdir(): raise ValueError("%r is not a directory" % (path.path,)) return path class BenchmarkOptions(Options): optParameters = [ ('host', 'h', 'localhost', 'Hostname or IPv4 address on which a CalendarServer is listening'), ('port', 'p', 8008, 'Port number on which a CalendarServer is listening', portCoerce), ('source-directory', 'd', None, 'The base of the CalendarServer source checkout being benchmarked ' '(if and only if the CalendarServer is on the same host as this ' 'benchmark process and dtrace-based metrics are desired)', logsCoerce), ('label', 'l', 'data', 'A descriptive string to attach to the output filename.'), ('hosts-count', None, None, 'For distributed benchmark collection, the number of hosts participating in collection.', int), ('host-index', None, None, 'For distributed benchmark collection, the (zero-based) index of this host in the collection.', int), ] optFlags = [ ('debug', None, 'Enable various debugging helpers'), ] def __init__(self): Options.__init__(self) self['parameters'] = {} def _selectBenchmarks(self, benchmarks): """ Select the benchmarks to run, based on those named and on the values passed for I{--hosts-count} and I{--host-index}. 
""" count = self['hosts-count'] index = self['host-index'] if (count is None) != (index is None): raise UsageError("Specify neither or both of hosts-count and host-index") if count is not None: if count < 0: raise UsageError("Specify a positive integer for hosts-count") if index < 0: raise UsageError("Specify a positive integer for host-index") if index >= count: raise UsageError("host-index must be less than hosts-count") benchmarks = [ benchmark for (i, benchmark) in enumerate(benchmarks) if i % self['hosts-count'] == self['host-index']] return benchmarks def opt_parameters(self, which): """ Specify the scaling parameters for a particular benchmark. The format of the value is :,...,. The given benchmark will be run with a scaling parameter set to each of the given values. This option may be specified multiple times to specify parameters for multiple benchmarks. """ benchmark, values = which.split(':') values = map(int, values.split(',')) self['parameters'][benchmark] = values def parseArgs(self, *benchmarks): if not benchmarks: raise UsageError("Specify at least one benchmark") self['benchmarks'] = self._selectBenchmarks(list(benchmarks)) def whichPIDs(source, conf): """ Return a list of PIDs to dtrace. 
""" run = source.preauthChild(conf['ServerRoot']).preauthChild(conf['RunRoot']) return [run.child(conf['PIDFile']).getContent()] + [ pid.getContent() for pid in run.globChildren('*instance*')] _benchmarks = getModule("contrib.performance.benchmarks") def resolveBenchmark(name): for module in _benchmarks.iterModules(): if module.name == ".".join((_benchmarks.name, name)): return module.load() raise ValueError("Unknown benchmark: %r" % (name,)) def main(): from twisted.python.log import startLogging, err options = BenchmarkOptions() try: options.parseOptions(sys.argv[1:]) except UsageError, e: print(e) return 1 if options['debug']: from twisted.python.failure import startDebugMode startDebugMode() if options['source-directory']: source = options['source-directory'] conf = source.child('conf').child('caldavd-dev.plist') pids = whichPIDs(source, plistlib.PlistParser().parse(conf.open())) else: pids = [] msg("Using dtrace to monitor pids %r" % (pids,)) startLogging(file('benchmark.log', 'a'), False) d = benchmark( options['host'], options['port'], pids, options['label'], options['parameters'], [(arg, resolveBenchmark(arg).measure) for arg in options['benchmarks']]) d.addErrback(err, "Failure at benchmark runner top-level") reactor.callWhenRunning(d.addCallback, lambda ign: reactor.stop()) reactor.run() calendarserver-9.1+dfsg/contrib/performance/benchmarks/000077500000000000000000000000001315003562600233615ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/benchmarks/__init__.py000066400000000000000000000001121315003562600254640ustar00rootroot00000000000000""" Package containing implementations of all of the microbenchmarks. """ calendarserver-9.1+dfsg/contrib/performance/benchmarks/bounded_recurrence.py000066400000000000000000000034301315003562600275700ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Benchmark a server's handling of events with a bounded recurrence. """ from uuid import uuid4 from itertools import count from datetime import datetime, timedelta from contrib.performance._event_create import ( makeAttendees, makeVCalendar, formatDate, measure as _measure) def makeEvent(i, organizerSequence, attendeeCount): """ Create a new half-hour long event that starts soon and recurs daily for the next five days. """ now = datetime.now() start = now.replace(minute=15, second=0, microsecond=0) + timedelta(hours=i) end = start + timedelta(minutes=30) until = start + timedelta(days=5) rrule = "RRULE:FREQ=DAILY;INTERVAL=1;UNTIL=" + formatDate(until) return makeVCalendar( uuid4(), start, end, rrule, organizerSequence, makeAttendees(attendeeCount)) def measure(host, port, dtrace, attendeeCount, samples): calendar = "bounded-recurrence" organizerSequence = 1 # An infinite stream of recurring VEVENTS to PUT to the server. events = ((i, makeEvent(i, organizerSequence, attendeeCount)) for i in count(2)) return _measure( calendar, organizerSequence, events, host, port, dtrace, samples) calendarserver-9.1+dfsg/contrib/performance/benchmarks/bounded_recurrence_autoaccept.py000066400000000000000000000037671315003562600320150ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Benchmark a server's handling of events with a bounded recurrence. """ from uuid import uuid4 from itertools import count from datetime import datetime, timedelta from contrib.performance._event_create import ( makeAttendees, makeVCalendar, formatDate, measure as _measure) def makeEvent(i, organizerSequence, attendeeCount): """ Create a new half-hour long event that starts soon and recurs daily for the next five days. """ now = datetime.now() start = now.replace(minute=15, second=0, microsecond=0) + timedelta(hours=i) end = start + timedelta(minutes=30) until = start + timedelta(days=5) rrule = "RRULE:FREQ=DAILY;INTERVAL=1;UNTIL=" + formatDate(until) attendees = makeAttendees(attendeeCount) attendees.append( 'ATTENDEE;CN="Resource 01";CUTYPE=INDIVIDUAL;PARTSTAT=NEEDS-ACTION;RSVP=T\n' ' RUE;SCHEDULE-STATUS="1.2":urn:x-uid:40000000-0000-0000-0000-000000000001\n') return makeVCalendar( uuid4(), start, end, rrule, organizerSequence, attendees) def measure(host, port, dtrace, attendeeCount, samples): calendar = "bounded-recurrence-autoaccept" organizerSequence = 1 # An infinite stream of recurring VEVENTS to PUT to the server. 
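Both bounded-recurrence benchmarks cap their daily recurrence with an `UNTIL` clause built from `formatDate()` (defined in `_event_create`, which is not part of this dump). A small sketch of that RRULE construction, assuming `formatDate` emits the basic iCalendar local date-time layout `YYYYMMDDTHHMMSS` (RFC 5545 UTC values would add a trailing `Z`):

```python
from datetime import datetime, timedelta


def format_date(dt):
    # Assumed layout of formatDate() in _event_create: floating
    # local time, no 'Z' suffix.
    return dt.strftime('%Y%m%dT%H%M%S')


def bounded_daily_rrule(start, days):
    """Build the capped recurrence rule the way makeEvent() does."""
    until = start + timedelta(days=days)
    return 'RRULE:FREQ=DAILY;INTERVAL=1;UNTIL=' + format_date(until)
```

Because the rule is bounded, the server can expand the full instance set up front, which is exactly the cost these benchmarks measure against the unbounded variants.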
events = ((i, makeEvent(i, organizerSequence, attendeeCount)) for i in count(2)) return _measure( calendar, organizerSequence, events, host, port, dtrace, samples) calendarserver-9.1+dfsg/contrib/performance/benchmarks/event.py000066400000000000000000000021761315003562600250620ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Benchmark a server's handling of event creation. """ from itertools import count from contrib.performance._event_create import makeEvent, measure as _measure def measure(host, port, dtrace, attendeeCount, samples): calendar = "event-creation-benchmark" organizerSequence = 1 # An infinite stream of VEVENTs to PUT to the server. events = ((i, makeEvent(i, organizerSequence, attendeeCount)) for i in count(2)) return _measure( calendar, organizerSequence, events, host, port, dtrace, samples) calendarserver-9.1+dfsg/contrib/performance/benchmarks/event_add_attendee.py000066400000000000000000000023541315003562600275410ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from contrib.performance._event_change import measure as _measure from contrib.performance._event_create import makeAttendees def measure(host, port, dtrace, attendeeCount, samples): attendees = makeAttendees(attendeeCount) def addAttendees(event, i): """ Add C{i} new attendees to the given event. """ # Find the last CREATED line created = event.rfind('CREATED') # Insert the attendees before it. return event[:created] + ''.join(attendees) + event[created:] return _measure( host, port, dtrace, 0, samples, "add-attendee", addAttendees, eventPerSample=True) calendarserver-9.1+dfsg/contrib/performance/benchmarks/event_autoaccept.py000066400000000000000000000035011315003562600272630ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Benchmark a server's handling of event creation involving an auto-accept resource. 
""" from itertools import count from uuid import uuid4 from datetime import datetime, timedelta from contrib.performance._event_create import ( makeAttendees, makeVCalendar, measure as _measure) def makeEvent(i, organizerSequence, attendeeCount): base = datetime(2010, 7, 30, 11, 15, 00) interval = timedelta(0, 5) duration = timedelta(0, 3) attendees = makeAttendees(attendeeCount) attendees.append( 'ATTENDEE;CN="Resource 01";CUTYPE=INDIVIDUAL;PARTSTAT=NEEDS-ACTION;RSVP=T\n' ' RUE;SCHEDULE-STATUS="1.2":urn:x-uid:40000000-0000-0000-0000-000000000001\n') return makeVCalendar( uuid4(), base + i * interval, base + i * interval + duration, None, organizerSequence, attendees) def measure(host, port, dtrace, attendeeCount, samples): calendar = "event-autoaccept-creation-benchmark" organizerSequence = 1 # An infinite stream of VEVENTs to PUT to the server. events = ((i, makeEvent(i, organizerSequence, attendeeCount)) for i in count(2)) return _measure( calendar, organizerSequence, events, host, port, dtrace, samples) calendarserver-9.1+dfsg/contrib/performance/benchmarks/event_change_date.py000066400000000000000000000031511315003562600273560ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Benchmark a change in the date boundaries of an event. 
""" import datetime from contrib.performance import _event_change TIME_FORMAT = '%Y%m%dT%H%M%S' def _increment(event, marker, amount): # Find the last occurrence of the marker dtstart = event.rfind(marker) # Find the end of that line eol = event.find('\n', dtstart) # Find the : preceding the date on that line colon = event.find(':', dtstart) # Replace the text between the colon and the eol with the new timestamp old = datetime.datetime.strptime(event[colon + 1:eol], TIME_FORMAT) new = old + amount return event[:colon + 1] + new.strftime(TIME_FORMAT) + event[eol:] def replaceTimestamp(event, i): offset = datetime.timedelta(hours=i) return _increment( _increment(event, 'DTSTART', offset), 'DTEND', offset) def measure(host, port, dtrace, attendeeCount, samples): return _event_change.measure( host, port, dtrace, attendeeCount, samples, "change-date", replaceTimestamp) calendarserver-9.1+dfsg/contrib/performance/benchmarks/event_change_summary.py000066400000000000000000000017541315003562600301450ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from contrib.performance._event_create import SUMMARY from contrib.performance._event_change import measure as _measure def replaceSummary(event, i): return event.replace(SUMMARY, 'Replacement summary %d' % (i,)) def measure(host, port, dtrace, attendeeCount, samples): return _measure( host, port, dtrace, attendeeCount, samples, "change-summary", replaceSummary) calendarserver-9.1+dfsg/contrib/performance/benchmarks/event_delete.py000066400000000000000000000047531315003562600264070ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Benchmark a server's handling of event deletion. 
""" from itertools import count from urllib2 import HTTPDigestAuthHandler from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks, returnValue from twisted.web.client import Agent from twisted.web.http_headers import Headers from twisted.web.http import NO_CONTENT from contrib.performance.httpauth import AuthHandlerAgent from contrib.performance.httpclient import StringProducer from contrib.performance.benchlib import initialize, sample from contrib.performance.benchmarks.event import makeEvent @inlineCallbacks def measure(host, port, dtrace, attendeeCount, samples): organizerSequence = 1 user = password = "user%02d" % (organizerSequence,) root = "/" principal = "/" calendar = "event-deletion-benchmark" authinfo = HTTPDigestAuthHandler() authinfo.add_password( realm="Test Realm", uri="http://%s:%d/" % (host, port), user=user, passwd=password) agent = AuthHandlerAgent(Agent(reactor), authinfo) # Set up the calendar first yield initialize(agent, host, port, user, password, root, principal, calendar) # An infinite stream of VEVENTs to PUT to the server. events = ((i, makeEvent(i, organizerSequence, attendeeCount)) for i in count(2)) # Create enough events to delete uri = 'http://%s:%d/calendars/__uids__/%s/%s/foo-%%d.ics' % ( host, port, user, calendar) headers = Headers({"content-type": ["text/calendar"]}) urls = [] for i, body in events: urls.append(uri % (i,)) yield agent.request( 'PUT', urls[-1], headers, StringProducer(body)) if len(urls) == samples: break # Now delete them all samples = yield sample( dtrace, samples, agent, (('DELETE', url) for url in urls).next, NO_CONTENT) returnValue(samples) calendarserver-9.1+dfsg/contrib/performance/benchmarks/event_delete_attendee.py000066400000000000000000000024251315003562600302520ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from contrib.performance import _event_change def measure(host, port, dtrace, attendeeCount, samples): def deleteAttendees(event, i): """ Add C{i} new attendees to the given event. """ for _ignore_n in range(attendeeCount): # Find the beginning of an ATTENDEE line attendee = event.find('ATTENDEE') # And the end of it eol = event.find('\n', attendee) # And remove it event = event[:attendee] + event[eol:] return event return _event_change.measure( host, port, dtrace, attendeeCount, samples, "delete-attendee", deleteAttendees, eventPerSample=True) calendarserver-9.1+dfsg/contrib/performance/benchmarks/event_move.py000066400000000000000000000052241315003562600261050ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
##

from itertools import count, cycle
from urllib2 import HTTPDigestAuthHandler

from twisted.internet import reactor
from twisted.internet.defer import inlineCallbacks, returnValue
from twisted.web.client import Agent
from twisted.web.http_headers import Headers
from twisted.web.http import CREATED

from contrib.performance.httpauth import AuthHandlerAgent
from contrib.performance.httpclient import StringProducer
from contrib.performance.benchlib import initialize, sample
from contrib.performance.benchmarks.event import makeEvent


@inlineCallbacks
def measure(host, port, dtrace, attendeeCount, samples):
    organizerSequence = 1
    user = password = "user%02d" % (organizerSequence,)
    root = "/"
    principal = "/"

    # Two calendars between which to move the event.
    fooCalendar = "event-move-foo-benchmark"
    barCalendar = "event-move-bar-benchmark"

    authinfo = HTTPDigestAuthHandler()
    authinfo.add_password(
        realm="Test Realm",
        uri="http://%s:%d/" % (host, port),
        user=user,
        passwd=password)
    agent = AuthHandlerAgent(Agent(reactor), authinfo)

    # Set up the calendars first
    for calendar in [fooCalendar, barCalendar]:
        yield initialize(
            agent, host, port, user, password, root, principal,
            calendar)

    fooURI = 'http://%s:%d/calendars/__uids__/%s/%s/some-event.ics' % (
        host, port, user, fooCalendar)
    barURI = 'http://%s:%d/calendars/__uids__/%s/%s/some-event.ics' % (
        host, port, user, barCalendar)

    # Create the event that will move around
    headers = Headers({"content-type": ["text/calendar"]})
    yield agent.request(
        'PUT', fooURI, headers,
        StringProducer(makeEvent(1, organizerSequence, attendeeCount)))

    # Move it around sooo much
    source = cycle([fooURI, barURI])
    dest = cycle([barURI, fooURI])
    params = (
        ('MOVE', source.next(),
         Headers({"destination": [dest.next()], "overwrite": ["F"]}))
        for i in count(1))

    samples = yield sample(dtrace, samples, agent, params.next, CREATED)
    returnValue(samples)
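The alternating source/destination trick used by the MOVE benchmark — two `cycle` iterators offset by one element, so every MOVE undoes the previous one — can be sketched in isolation. This is a hedged, Python 3-style illustration (`next()` in place of the Python 2 `.next` method); the URLs and the plain-dict headers are placeholders, not the real Twisted `Headers` objects or server paths:

```python
from itertools import cycle, islice


def move_params(foo_uri, bar_uri):
    """Yield an endless stream of MOVE request tuples that bounce an
    event back and forth between two calendar URLs."""
    # Offset cycles: when the source is foo, the destination is bar,
    # and vice versa, so each MOVE reverses the one before it.
    source = cycle([foo_uri, bar_uri])
    dest = cycle([bar_uri, foo_uri])
    while True:
        yield ('MOVE', next(source),
               {"destination": [next(dest)], "overwrite": ["F"]})


params = move_params("/calendars/foo/some-event.ics",
                     "/calendars/bar/some-event.ics")
first_three = list(islice(params, 3))
```

Because both cycles have period two and start one element apart, the stream alternates foo→bar, bar→foo, foo→bar, ... indefinitely, which is what lets the benchmark issue an unbounded number of MOVEs against just two calendars.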
calendarserver-9.1+dfsg/contrib/performance/benchmarks/find_calendars.py000066400000000000000000000062611315003562600266740ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from itertools import count from urllib2 import HTTPDigestAuthHandler from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks, returnValue from twisted.web.client import Agent from twisted.web.http_headers import Headers from twisted.web.http import MULTI_STATUS from contrib.performance.httpauth import AuthHandlerAgent from contrib.performance.httpclient import StringProducer from contrib.performance.benchlib import CalDAVAccount, sample PROPFIND = """\ """ @inlineCallbacks def measure(host, port, dtrace, numCalendars, samples): # There's already the "calendar" calendar numCalendars -= 1 user = password = "user10" root = "/" principal = "/" authinfo = HTTPDigestAuthHandler() authinfo.add_password( realm="Test Realm", uri="http://%s:%d/" % (host, port), user=user, passwd=password) agent = AuthHandlerAgent(Agent(reactor), authinfo) # Create the number of calendars necessary account = CalDAVAccount( agent, "%s:%d" % (host, port), user=user, password=password, root=root, principal=principal) cal = "/calendars/users/%s/propfind-%%d/" % (user,) for i in range(numCalendars): yield account.makeCalendar(cal % (i,)) body = StringProducer(PROPFIND) params = ( ('PROPFIND', 
'http://%s:%d/calendars/__uids__/%s/' % (host, port, user), Headers({"depth": ["1"], "content-type": ["text/xml"]}), body) for i in count(1)) samples = yield sample(dtrace, samples, agent, params.next, MULTI_STATUS) # Delete the calendars we created to leave the server in roughly # the same state as we found it. for i in range(numCalendars): yield account.deleteResource(cal % (i,)) returnValue(samples) calendarserver-9.1+dfsg/contrib/performance/benchmarks/find_events.py000066400000000000000000000061601315003562600262420ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from itertools import count from urllib2 import HTTPDigestAuthHandler from twisted.internet import reactor from twisted.internet.defer import inlineCallbacks, returnValue, gatherResults from twisted.internet.task import cooperate from twisted.web.client import Agent from twisted.web.http_headers import Headers from twisted.web.http import MULTI_STATUS from contrib.performance.httpauth import AuthHandlerAgent from contrib.performance.httpclient import StringProducer from contrib.performance.benchlib import CalDAVAccount, sample from contrib.performance.benchmarks.event import makeEvent PROPFIND = """\ """ def uploadEvents(numEvents, agent, uri, cal): def worker(): for i in range(numEvents): event = makeEvent(i, 1, 0) yield agent.request( 'PUT', '%s%s%d.ics' % (uri, cal, i), Headers({"content-type": ["text/calendar"]}), StringProducer(event)) worker = worker() return gatherResults([ cooperate(worker).whenDone() for _ignore_i in range(3)]) @inlineCallbacks def measure(host, port, dtrace, numEvents, samples): user = password = "user11" root = "/" principal = "/" uri = "http://%s:%d/" % (host, port) authinfo = HTTPDigestAuthHandler() authinfo.add_password( realm="Test Realm", uri=uri, user=user, passwd=password) agent = AuthHandlerAgent(Agent(reactor), authinfo) # Create the number of calendars necessary account = CalDAVAccount( agent, "%s:%d" % (host, port), user=user, password=password, root=root, principal=principal) cal = "calendars/users/%s/find-events/" % (user,) yield account.makeCalendar("/" + cal) # Create the indicated number of events on the calendar yield uploadEvents(numEvents, agent, uri, cal) body = StringProducer(PROPFIND) params = ( ('PROPFIND', '%scalendars/__uids__/%s/find-events/' % (uri, user), Headers({"depth": ["1"], "content-type": ["text/xml"]}), body) for i in count(1)) samples = yield sample(dtrace, samples, agent, params.next, MULTI_STATUS) # Delete the calendar we created to leave the server in roughly # the same state as we found it. 
    yield account.deleteResource("/" + cal)
    returnValue(samples)
calendarserver-9.1+dfsg/contrib/performance/benchmarks/unbounded_recurrence.py000066400000000000000000000032651315003562600301410ustar00rootroot00000000000000##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
Benchmark a server's handling of events with an unbounded recurrence.
"""

from uuid import uuid4
from itertools import count
from datetime import datetime, timedelta

from contrib.performance._event_create import (
    makeAttendees, makeVCalendar, measure as _measure)


def makeEvent(i, organizerSequence, attendeeCount):
    """
    Create a new half-hour long event that starts soon and recurs
    weekly for as long as the server allows.
    """
    now = datetime.now()
    start = now.replace(minute=15, second=0, microsecond=0) + timedelta(hours=i)
    end = start + timedelta(minutes=30)
    return makeVCalendar(
        uuid4(), start, end, "RRULE:FREQ=WEEKLY", organizerSequence,
        makeAttendees(attendeeCount))


def measure(host, port, dtrace, attendeeCount, samples):
    calendar = "unbounded-recurrence"
    organizerSequence = 1

    # An infinite stream of recurring VEVENTS to PUT to the server.
    events = ((i, makeEvent(i, organizerSequence, attendeeCount))
              for i in count(2))
    return _measure(
        calendar, organizerSequence, events, host, port, dtrace, samples)
calendarserver-9.1+dfsg/contrib/performance/benchmarks/unbounded_recurrence_autoaccept.py000066400000000000000000000036241315003562600323500ustar00rootroot00000000000000##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

"""
Benchmark a server's handling of events with an unbounded recurrence.
"""

from uuid import uuid4
from itertools import count
from datetime import datetime, timedelta

from contrib.performance._event_create import (
    makeAttendees, makeVCalendar, measure as _measure)


def makeEvent(i, organizerSequence, attendeeCount):
    """
    Create a new half-hour long event that starts soon and recurs
    weekly for as long as the server allows.
    """
    now = datetime.now()
    start = now.replace(minute=15, second=0, microsecond=0) + timedelta(hours=i)
    end = start + timedelta(minutes=30)
    attendees = makeAttendees(attendeeCount)
    attendees.append(
        'ATTENDEE;CN="Resource 01";CUTYPE=INDIVIDUAL;PARTSTAT=NEEDS-ACTION;RSVP=T\n'
        ' RUE;SCHEDULE-STATUS="1.2":urn:x-uid:40000000-0000-0000-0000-000000000001\n')
    return makeVCalendar(
        uuid4(), start, end, "RRULE:FREQ=WEEKLY", organizerSequence,
        attendees)


def measure(host, port, dtrace, attendeeCount, samples):
    calendar = "unbounded-recurrence-autoaccept"
    organizerSequence = 1

    # An infinite stream of recurring VEVENTS to PUT to the server.
events = ((i, makeEvent(i, organizerSequence, attendeeCount)) for i in count(2)) return _measure( calendar, organizerSequence, events, host, port, dtrace, samples) calendarserver-9.1+dfsg/contrib/performance/benchmarks/vfreebusy.py000066400000000000000000000113541315003562600257510ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Benchmark a server's handling of VFREEBUSY requests with a varying number of events on the target's calendar. """ from urllib2 import HTTPDigestAuthHandler from uuid import uuid4 from datetime import datetime, timedelta from twisted.internet.defer import inlineCallbacks, returnValue from twisted.internet import reactor from twisted.web.client import Agent from twisted.web.http_headers import Headers from twisted.web.http import OK from contrib.performance.httpauth import AuthHandlerAgent from contrib.performance.httpclient import StringProducer from contrib.performance.benchlib import initialize, sample # XXX Represent these as pycalendar objects? Would make it easier to add more vevents. 
event = """\ BEGIN:VCALENDAR VERSION:2.0 CALSCALE:GREGORIAN PRODID:-//Apple Inc.//iCal 4.0.3//EN BEGIN:VTIMEZONE TZID:America/New_York BEGIN:STANDARD DTSTART:20071104T020000 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:20070311T020000 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT END:VTIMEZONE %(VEVENTS)s\ END:VCALENDAR """ VFREEBUSY = """\ BEGIN:VCALENDAR CALSCALE:GREGORIAN VERSION:2.0 METHOD:REQUEST PRODID:-//Apple Inc.//iCal 4.0.3//EN BEGIN:VFREEBUSY UID:81F582C8-4E7F-491C-85F4-E541864BE0FA DTEND:%(end)s %(attendees)sDTSTART:%(start)s X-CALENDARSERVER-MASK-UID:EC75A61B-08A3-44FD-BFBB-2457BBD0D490 DTSTAMP:20100729T174751Z ORGANIZER:mailto:user01@example.com SUMMARY:Availability for urn:x-uid:user02 END:VFREEBUSY END:VCALENDAR """ def formatDate(d): return ''.join(filter(str.isalnum, d.isoformat())) def makeEvent(i): # Backwards compat interface, don't delete it for a little while. 
return makeEventNear(datetime(2010, 7, 30, 11, 15, 00), i) def makeEventNear(base, i): s = """\ BEGIN:VEVENT UID:%(UID)s DTSTART;TZID=America/New_York:%(START)s DTEND;TZID=America/New_York:%(END)s CREATED:20100729T193912Z DTSTAMP:20100729T195557Z SEQUENCE:%(SEQUENCE)s SUMMARY:STUFF IS THINGS TRANSP:OPAQUE END:VEVENT """ interval = timedelta(hours=2) duration = timedelta(hours=1) data = event % { 'VEVENTS': s % { 'UID': uuid4(), 'START': formatDate(base + i * interval), 'END': formatDate(base + i * interval + duration), 'SEQUENCE': i, }, } return data.replace("\n", "\r\n") def makeEvents(base, n): return [makeEventNear(base, i) for i in range(n)] @inlineCallbacks def measure(host, port, dtrace, events, samples): user = password = "user01" uid = "10000000-0000-0000-0000-000000000001" root = "/" principal = "/" calendar = "vfreebusy-benchmark" authinfo = HTTPDigestAuthHandler() authinfo.add_password( realm="Test Realm", uri="http://%s:%d/" % (host, port), user=user, passwd=password) agent = AuthHandlerAgent(Agent(reactor), authinfo) # First set things up account = yield initialize( agent, host, port, user, password, root, principal, calendar) base = "/calendars/users/%s/%s/foo-%%d.ics" % (user, calendar) baseTime = datetime.now().replace(hour=12, minute=15, second=0, microsecond=0) for i, cal in enumerate(makeEvents(baseTime, events)): yield account.writeData(base % (i,), cal, "text/calendar") method = 'POST' uri = 'http://%s:%d/calendars/__uids__/%s/outbox/' % (host, port, user) headers = Headers({ "content-type": ["text/calendar"], "originator": ["mailto:%s@example.com" % (user,)], "recipient": ["urn:x-uid:%s, urn:x-uid:10000000-0000-0000-0000-000000000002" % (uid,)]}) vfb = VFREEBUSY % { "attendees": "".join([ "ATTENDEE:urn:x-uid:%s\n" % (uid,), "ATTENDEE:urn:x-uid:10000000-0000-0000-0000-000000000002\n"]), "start": formatDate(baseTime.replace(hour=0, minute=0)) + 'Z', "end": formatDate( baseTime.replace(hour=0, minute=0) + timedelta(days=1)) + 'Z'} body = 
StringProducer(vfb.replace("\n", "\r\n")) samples = yield sample( dtrace, samples, agent, lambda: (method, uri, headers, body), OK) returnValue(samples) calendarserver-9.1+dfsg/contrib/performance/benchmarks/vfreebusy_vary_attendees.py000066400000000000000000000072661315003562600310550ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Benchmark a server's handling of VFREEBUSY requests with a varying number of attendees in the request. 
""" from datetime import datetime, timedelta from urllib2 import HTTPDigestAuthHandler from twisted.internet.defer import inlineCallbacks, returnValue from twisted.web.http import OK from twisted.web.http_headers import Headers from twisted.web.client import Agent from twisted.internet import reactor from contrib.performance.httpauth import AuthHandlerAgent from contrib.performance.httpclient import StringProducer from contrib.performance.benchlib import CalDAVAccount, sample from contrib.performance.benchmarks.vfreebusy import VFREEBUSY, formatDate, makeEventNear @inlineCallbacks def measure(host, port, dtrace, attendees, samples): userNumber = 1 user = password = "user%02d" % (userNumber,) root = "/" principal = "/" calendar = "vfreebusy-vary-attendees-benchmark" targets = range(2, attendees + 2) authinfo = HTTPDigestAuthHandler() # Set up authentication info for our own user and all the other users that # may need an event created on one of their calendars. for i in [userNumber] + targets: targetUser = "user%02d" % (i,) for path in ["calendars/users/%s/" % (targetUser,), "calendars/__uids__/10000000-0000-0000-0000-000000000%03d/" % (i,)]: authinfo.add_password( realm="Test Realm", uri="http://%s:%d/%s" % (host, port, path), user=targetUser, passwd=targetUser) agent = AuthHandlerAgent(Agent(reactor), authinfo) # Set up events on about half of the target accounts baseTime = datetime.now().replace(minute=45, second=0, microsecond=0) for i in targets[::2]: targetUser = "user%02d" % (i,) account = CalDAVAccount( agent, "%s:%d" % (host, port), user=targetUser, password=password, root=root, principal=principal) cal = "/calendars/users/%s/%s/" % (targetUser, calendar) yield account.deleteResource(cal) yield account.makeCalendar(cal) yield account.writeData(cal + "foo.ics", makeEventNear(baseTime, i), "text/calendar") # And now issue the actual VFREEBUSY request method = 'POST' uri = 'http://%s:%d/calendars/__uids__/10000000-0000-0000-0000-000000000001/outbox/' % (host, 
port) headers = Headers({ "content-type": ["text/calendar"], "originator": ["mailto:%s@example.com" % (user,)], "recipient": [", ".join(["urn:x-uid:10000000-0000-0000-0000-000000000%03d" % (i,) for i in [userNumber] + targets])]}) body = StringProducer(VFREEBUSY % { "attendees": "".join([ "ATTENDEE:urn:x-uid:10000000-0000-0000-0000-000000000%03d\n" % (i,) for i in [userNumber] + targets]), "start": formatDate(baseTime.replace(hour=0, minute=0)) + 'Z', "end": formatDate( baseTime.replace(hour=0, minute=0) + timedelta(days=1)) + 'Z'}) samples = yield sample( dtrace, samples, agent, lambda: (method, uri, headers, body), OK) returnValue(samples) calendarserver-9.1+dfsg/contrib/performance/compare000077500000000000000000000012261315003562600226210ustar00rootroot00000000000000#!/usr/bin/python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from compare import main main() calendarserver-9.1+dfsg/contrib/performance/compare.py000066400000000000000000000045001315003562600232430ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function import sys import stats from benchlib import load_stats try: from scipy.stats import ttest_1samp except ImportError: from math import pi from ctypes import CDLL, c_double for lib in ['libc.dylib', 'libm.so']: try: libc = CDLL(lib) except OSError: pass else: break gamma = libc.tgamma gamma.argtypes = [c_double] gamma.restype = c_double def ttest_1samp(a, popmean): # T statistic - http://mathworld.wolfram.com/Studentst-Distribution.html t = (stats.mean(a) - popmean) / (stats.stddev(a) / len(a) ** 0.5) v = len(a) - 1.0 p = gamma((v + 1) / 2) / ((v * pi) ** 0.5 * gamma(v / 2)) * (1 + t ** 2 / v) ** (-(v + 1) / 2) return (t, p) def trim(sequence, amount): sequence.sort() n = len(sequence) t = int(n * amount / 2.0) if t: del sequence[:t] del sequence[-t:] else: raise RuntimeError( "Cannot trim length %d sequence by %d%%" % (n, int(amount * 100))) return sequence def main(): [(_ignore_stat, first), (_ignore_stat, second)] = load_stats(sys.argv[1:]) # Attempt to increase robustness by dropping the outlying 10% of values. 
first = trim(first, 0.1) second = trim(second, 0.1) fmean = stats.mean(first) smean = stats.mean(second) p = ttest_1samp(second, fmean)[1] if p >= 0.95: # rejected the null hypothesis print(sys.argv[1], 'mean of', fmean, 'differs from', sys.argv[2], 'mean of', smean, '(%2.0f%%)' % (p * 100,)) else: # failed to reject the null hypothesis print('cannot prove means (%s, %s) differ (%2.0f%%)' % (fmean, smean, p * 100,)) calendarserver-9.1+dfsg/contrib/performance/display-calendar-events.py000066400000000000000000000017731315003562600263440ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from __future__ import print_function import eventkitframework as EventKit from Cocoa import NSDate store = EventKit.EKEventStore.alloc().init() calendars = store.calendarsForEntityType_(0) print(calendars) raise SystemExit predicate = store.predicateForEventsWithStartDate_endDate_calendars_( NSDate.date(), NSDate.distantFuture(), [calendars[2]]) print(store.eventsMatchingPredicate_(predicate)) calendarserver-9.1+dfsg/contrib/performance/eventkitframework.py000066400000000000000000000003761315003562600253730ustar00rootroot00000000000000import objc as _objc __bundle__ = _objc.initFrameworkWrapper( "EventKit", frameworkIdentifier="com.apple.EventKit", frameworkPath=_objc.pathForFramework( "/System/Library/Frameworks/EventKit.framework" ), globals=globals() ) calendarserver-9.1+dfsg/contrib/performance/extractconf000077500000000000000000000013251315003562600235130ustar00rootroot00000000000000#!/usr/bin/python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## import sys import plistlib print plistlib.PlistParser().parse(file(sys.argv[1]))[sys.argv[2]] calendarserver-9.1+dfsg/contrib/performance/fix-units.sql000066400000000000000000000027141315003562600237170ustar00rootroot00000000000000-- -- Copyright (c) 2010-2017 Apple Inc. All rights reserved. -- -- Licensed under the Apache License, Version 2.0 (the "License"); -- you may not use this file except in compliance with the License. 
-- You may obtain a copy of the License at -- -- http://www.apache.org/licenses/LICENSE-2.0 -- -- Unless required by applicable law or agreed to in writing, software -- distributed under the License is distributed on an "AS IS" BASIS, -- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -- See the License for the specific language governing permissions and -- limitations under the License. -- -- Set the units of all of the benchmarks to what they really are. update codespeed_benchmark set units_title = 'Statements' where name like '%-SQLcount'; update codespeed_benchmark set units_title = 'Bytes' where name like '%-read'; update codespeed_benchmark set units_title = 'Bytes' where name like '%-write'; update codespeed_benchmark set units_title = 'Pages' where name like '%-pagein'; update codespeed_benchmark set units_title = 'Pages' where name like '%-pageout'; update codespeed_benchmark set units = 'statements' where name like '%-SQLcount'; update codespeed_benchmark set units = 'bytes' where name like '%-read'; update codespeed_benchmark set units = 'bytes' where name like '%-write'; update codespeed_benchmark set units = 'pages' where name like '%-pagein'; update codespeed_benchmark set units = 'pages' where name like '%-pageout'; calendarserver-9.1+dfsg/contrib/performance/graph000077500000000000000000000012241315003562600222720ustar00rootroot00000000000000#!/usr/bin/python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. ## from graph import main main() calendarserver-9.1+dfsg/contrib/performance/graph.py000066400000000000000000000022211315003562600227140ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## import sys from matplotlib import pyplot import numpy from benchlib import load_stats def main(): fig = pyplot.figure() ax = fig.add_subplot(111) data = [samples for (_ignore_stat, samples) in load_stats(sys.argv[1:])] bars = [] color = iter('rgbcmy').next w = 1.0 / len(data) xs = numpy.arange(len(data[0])) for i, s in enumerate(data): bars.append(ax.bar(xs + i * w, s, width=w, color=color())[0]) ax.set_xlabel('sample #') ax.set_ylabel('seconds') ax.legend(bars, sys.argv[1:]) pyplot.show() calendarserver-9.1+dfsg/contrib/performance/httpauth.py000066400000000000000000000136021315003562600234610ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function from caldavclientlibrary.protocol.http.authentication.digest import Digest from twisted.python.log import msg from twisted.web.http import UNAUTHORIZED from twisted.web.http_headers import Headers import urlparse import urllib2 class BasicChallenge(object): def __init__(self, realm): self.realm = realm def response(self, uri, method, keyring): if type(keyring) is dict: keyring = keyring['basic'] username, password = keyring.passwd.find_user_password(self.realm, uri) credentials = ('%s:%s' % (username, password)).encode('base64').strip() authorization = 'basic ' + credentials return {'authorization': [authorization]} class DigestChallenge(object): def __init__(self, realm, **fields): self.realm = realm self.fields = fields self.fields['realm'] = realm def response(self, uri, method, keyring): if type(keyring) is dict: keyring = keyring['digest'] username, password = keyring.passwd.find_user_password(self.realm, uri) if username is None: raise RuntimeError("Credentials for realm=%s uri=%s not found" % (self.realm, uri)) digest = Digest(username, password, []) digest.fields.update(self.fields) authorization = [] class BigSigh: def getURL(self): return uri BigSigh.method = method BigSigh.url = uri digest.addHeaders(authorization, BigSigh()) return {'authorization': [value for (_ignore_name, value) in authorization]} class AuthHandlerAgent(object): def __init__(self, agent, authinfo): self._agent = agent self._authinfo = authinfo self._challenged = {} def _authKey(self, method, uri): return urlparse.urlparse(uri)[:2] def 
request(self, method, uri, headers=None, bodyProducer=None): return self._requestWithAuth(method, uri, headers, bodyProducer) def _requestWithAuth(self, method, uri, headers, bodyProducer): key = self._authKey(method, uri) if key in self._challenged: d = self._respondToChallenge(self._challenged[key], method, uri, headers, bodyProducer) else: d = self._agent.request(method, uri, headers, bodyProducer) d.addCallback(self._authenticate, method, uri, headers, bodyProducer) return d def _parse(self, authorization): try: scheme, rest = authorization.split(None, 1) except ValueError: # Probably "negotiate", which we don't support scheme = authorization rest = "" args = urllib2.parse_keqv_list(urllib2.parse_http_list(rest)) challengeType = { 'basic': BasicChallenge, 'digest': DigestChallenge, }.get(scheme.lower()) if challengeType is None: return "", None return scheme.lower(), challengeType(**args) def _respondToChallenge(self, challenge, method, uri, headers, bodyProducer): if headers is None: headers = Headers() else: headers = Headers(dict(headers.getAllRawHeaders())) for k, vs in challenge.response(uri, method, self._authinfo).iteritems(): for v in vs: headers.addRawHeader(k, v) return self._agent.request(method, uri, headers, bodyProducer) def _authenticate(self, response, method, uri, headers, bodyProducer): if response.code == UNAUTHORIZED: if headers is None: authorization = None else: authorization = headers.getRawHeaders('authorization') msg("UNAUTHORIZED response to %s %s (Authorization=%r)" % ( method, uri, authorization)) # Look for a challenge authorization = response.headers.getRawHeaders('www-authenticate') if authorization is None: raise Exception( "UNAUTHORIZED response with no WWW-Authenticate header") # Always choose digest over basic if both present challenges = dict([self._parse(auth) for auth in authorization]) if 'digest' in challenges: key = 'digest' elif 'basic' in challenges: key = 'basic' else: key = None if key: 
self._challenged[self._authKey(method, uri)] = challenges[key] return self._respondToChallenge(challenges[key], method, uri, headers, bodyProducer) return response if __name__ == '__main__': from urllib2 import HTTPDigestAuthHandler handler = HTTPDigestAuthHandler() handler.add_password( realm="Test Realm", uri="http://localhost:8008/", user="user01", passwd="user01") from twisted.web.client import Agent from twisted.internet import reactor from twisted.python.log import err agent = AuthHandlerAgent(Agent(reactor), handler) d = agent.request( 'DELETE', 'http://localhost:8008/calendars/users/user01/monkeys3/') def deleted(response): print(response.code) print(response.headers) reactor.stop() d.addCallback(deleted) d.addErrback(err) d.addCallback(lambda ign: reactor.stop()) reactor.run() calendarserver-9.1+dfsg/contrib/performance/httpclient.py000066400000000000000000000042421315003562600237760ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
##

from StringIO import StringIO

from zope.interface import implements

from twisted.internet.protocol import Protocol
from twisted.internet.defer import Deferred, succeed
from twisted.web.iweb import IBodyProducer
from twisted.internet.interfaces import IConsumer


class _BufferReader(Protocol):

    def __init__(self, finished):
        self.finished = finished
        self.received = StringIO()

    def dataReceived(self, bytes):
        self.received.write(bytes)

    def connectionLost(self, reason):
        self.finished.callback(self.received.getvalue())


def readBody(response):
    if response.length == 0:
        return succeed(None)
    finished = Deferred()
    response.deliverBody(_BufferReader(finished))
    return finished


class StringProducer(object):
    implements(IBodyProducer)

    def __init__(self, body):
        if not isinstance(body, str):
            raise TypeError(
                "StringProducer body must be str, not %r" % (type(body),))
        self._body = body
        self.length = len(self._body)

    def startProducing(self, consumer):
        consumer.write(self._body)
        return succeed(None)

    def stopProducing(self):
        pass

    def resumeProducing(self):
        pass

    def pauseProducing(self):
        pass


class MemoryConsumer(object):
    implements(IConsumer)

    def __init__(self):
        self._buffer = []

    def write(self, bytes):
        self._buffer.append(bytes)

    def value(self):
        if len(self._buffer) > 1:
            self._buffer = ["".join(self._buffer)]
        return "".join(self._buffer)

calendarserver-9.1+dfsg/contrib/performance/io_measure.d

/*
 * Copyright (c) 2010-2017 Apple Inc. All rights reserved.
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 * http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
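The `MemoryConsumer` above coalesces many small `write()` chunks into a single cached string, so repeated `value()` calls do not re-join the same data. A minimal, framework-free sketch of that coalescing idea (the class name `CoalescingBuffer` is illustrative only; it drops the zope `implements(IConsumer)` declaration so it runs standalone):

```python
class CoalescingBuffer(object):
    """Collect written chunks; join them lazily and cache the result."""

    def __init__(self):
        self._buffer = []

    def write(self, data):
        self._buffer.append(data)

    def value(self):
        # Collapse many small chunks into one string, so that a
        # second value() call on unchanged data is a cheap join of
        # a single element rather than of every chunk again.
        if len(self._buffer) > 1:
            self._buffer = ["".join(self._buffer)]
        return "".join(self._buffer)


buf = CoalescingBuffer()
buf.write("hello, ")
buf.write("world")
print(buf.value())  # hello, world
```

The lazy join is the same trade the original makes: writes stay O(1) appends, and the join cost is paid once per read of new data instead of once per write.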
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

/*
 * Trace information about I/O and SQL events.
 */

#pragma D option switchrate=10hz

#include "sql_measure.d"

/*
 * Low-level I/O stuff
 */

io:::start
/args[0]->b_flags & B_READ/
{
	printf("B_READ %d\n\1", args[0]->b_bcount);
}

io:::start
/!(args[0]->b_flags & B_READ)/
{
	printf("B_WRITE %d\n\1", args[0]->b_bcount);
}

#define READ(fname) \
pid$target::fname:return \
{ \
	printf("READ %d\n\1", arg1); \
}

READ(read)
READ(pread)
READ(readv)

#define WRITE(fname) \
pid$target::fname:return \
{ \
	printf("WRITE %d\n\1", arg1); \
}

WRITE(write)
WRITE(pwrite)
WRITE(writev)

syscall::execve:entry
/copyinstr(arg0) == "CalendarServer dtrace benchmarking signal"/
{
	printf("MARK x\n\1");
}

calendarserver-9.1+dfsg/contrib/performance/jobqueue/__init__.py

##
# Copyright (c) 2015-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

calendarserver-9.1+dfsg/contrib/performance/jobqueue/loadtest.py

#!/usr/bin/env python
##
# Copyright (c) 2012-2017 Apple Inc. All rights reserved.
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from caldavclientlibrary.client.httpshandler import SmartHTTPConnection from multiprocessing import Process, Value import getopt import itertools import json import random import sys import time from urlparse import urlparse import collections PRIORITY = { "low": 0, "medium": 1, "high": 2, } def httploop(ctr, config, complete): # Random time delay time.sleep(random.randint(0, config["interval"]) / 1000.0) ServerDetail = collections.namedtuple("ServerDetails", ("host", "port", "use_ssl", "headers",)) server_details = [] for server in config["server"].split(","): url = urlparse(server) use_ssl = url[0] == "https" if "@" in url[1]: auth, net_loc = url[1].split("@") else: auth = "admin:admin" net_loc = url[1] host, port = net_loc.split(":") port = int(port) user, pswd = auth.split(":") headers = {} headers["User-Agent"] = "httploop/1" headers["Depth"] = "1" headers["Authorization"] = "Basic " + "{}:{}".format(user, pswd).encode("base64")[:-1] headers["Content-Type"] = "application/json" server_details.append(ServerDetail(host, port, use_ssl, headers)) random.shuffle(server_details) interval = config["interval"] / 1000.0 total = config["limit"] / config["numProcesses"] count = 0 jstr = json.dumps({ "action": "testwork", "when": config["when"], "delay": config["delay"], "jobs": config["jobs"], "priority": PRIORITY[config["priority"]], "weight": config["weight"], }) base_time = time.time() iter = itertools.cycle(server_details) while not 
complete.value: server = iter.next() count += 1 http = SmartHTTPConnection(server.host, server.port, server.use_ssl, False) try: server.headers["User-Agent"] = "httploop-{}/{}".format(ctr, count,) http.request("POST", "/control", jstr, server.headers) response = http.getresponse() response.read() except Exception as e: print("Count: {}".format(count,)) print(repr(e)) raise finally: http.close() if total != 0 and count >= total: break base_time += interval sleep_time = base_time - time.time() if sleep_time < 0: print("Interval less than zero: process #{}".format(ctr)) base_time = time.time() else: time.sleep(sleep_time) def usage(error_msg=None): if error_msg: print(error_msg) print("""Usage: loadtest [options] Options: -h Print this help and exit -n NUM Number of child processes [10] -i MSEC Millisecond delay between each request [1000] -r RATE Requests/second rate [10] -j JOBS Number of jobs per HTTP request [1] -s URL URL to connect to [https://localhost:8443], can also be a comma separated list of URLs to cycle through -b SEC Number of seconds for notBefore [0] -d MSEC Number of milliseconds for the work [10] -l NUM Total number of requests from all processes or zero for unlimited [0] -t SEC Number of seconds to run [unlimited] -p low|medium|high Work priority level [high] -w NUM Work weight level 1..10 [1] Arguments: None Description: This utility will use the control API to generate test work items at the specified rate using multiple processes to generate each HTTP request. Each child process will execute HTTP requests at a specific interval (with a random start offset for each process). The total number of processes can also be set to give an effective rate. Alternatively, the rate can be set directly and the tool will pick a suitable number of processes and interval to use. 
The following combinations of -n, -i and -r are allowed: -n, -i -n, -r -r """) if error_msg: raise ValueError(error_msg) else: sys.exit(0) if __name__ == '__main__': done = Value("i", 0) config = { "numProcesses": 10, "interval": 1000, "jobs": 1, "server": "https://localhost:8443", "when": 0, "delay": 10, "priority": "high", "weight": 1, "limit": 0, } numProcesses = None interval = None rate = None runtime = None options, args = getopt.getopt(sys.argv[1:], "b:d:hi:j:l:n:p:r:s:t:w:", []) for option, value in options: if option == "-h": usage() elif option == "-n": numProcesses = int(value) elif option == "-i": interval = int(value) elif option == "-j": config["jobs"] = int(value) elif option == "-r": rate = int(value) elif option == "-s": config["server"] = value elif option == "-b": config["when"] = int(value) elif option == "-d": config["delay"] = int(value) elif option == "-l": config["limit"] = int(value) elif option == "-t": runtime = int(value) elif option == "-p": if value not in PRIORITY.keys(): usage("Unrecognized priority: {}".format(value,)) config["priority"] = value elif option == "-w": config["weight"] = int(value) else: usage("Unrecognized option: {}".format(option,)) # Determine the actual number of processes and interval if numProcesses is not None and interval is not None and rate is None: config["numProcesses"] = numProcesses config["interval"] = interval elif numProcesses is not None and interval is None and rate is not None: config["numProcesses"] = numProcesses config["interval"] = (config["numProcesses"] * 1000) / rate elif numProcesses is None and interval is None and rate is not None: if rate <= 100: config["numProcesses"] = rate config["interval"] = 1000 else: config["numProcesses"] = 100 config["interval"] = (config["numProcesses"] * 1000) / rate elif numProcesses is None and interval is None and rate is None: pass else: usage("Wrong combination of -n, -i and -r") effective_rate = (config["numProcesses"] * 1000) / config["interval"] if 
runtime:
        config["limit"] = effective_rate * runtime

    print("Run details:")
    print(" Number of servers: {}".format(len(config["server"].split(","))))
    print(" Number of processes: {}".format(config["numProcesses"]))
    print(" Interval between requests: {} ms".format(config["interval"]))
    print(" Effective request rate: {} req/sec".format(effective_rate))
    print(" Jobs per request: {}".format(config["jobs"]))
    print(" Effective job rate: {} jobs/sec".format(effective_rate * config["jobs"]))
    print(" Total number of requests: {}".format(config["limit"] if config["limit"] != 0 else "unlimited"))
    print("")
    print("Work details:")
    print(" Priority: {}".format(config["priority"]))
    print(" Weight: {}".format(config["weight"]))
    print(" Start delay: {} s".format(config["when"]))
    print(" Execution time: {} ms".format(config["delay"]))
    print(" Average queue depth: {}".format((effective_rate * config["delay"]) / 1000))
    print("")
    print("Starting up...")

    # Create a set of processes, then start them all
    procs = [Process(target=httploop, args=(ctr + 1, config, done,)) for ctr in range(config["numProcesses"])]
    map(lambda p: p.start(), procs)

    # Wait for user to cancel, then signal end and wait for processes to finish
    time.sleep(1)
    raw_input("\n\nEnd run (hit return)")
    done.value = 1
    map(lambda p: p.join(), procs)

    print("\nRun complete")

calendarserver-9.1+dfsg/contrib/performance/jobqueue/workrate.py

#!/usr/bin/env python
##
# Copyright (c) 2012-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
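loadtest.py derives the process count and per-process interval from whichever of `-n`, `-i` and `-r` were supplied. A standalone sketch of that derivation (the helper name `derive_schedule` is ours, not the tool's; `//` is used where the Python 2 original relies on integer `/` truncation):

```python
def derive_schedule(num_processes=None, interval_ms=None, rate=None):
    """Mirror the -n/-i/-r combinations accepted by loadtest.py.

    Returns (num_processes, interval_ms); raises ValueError for
    combinations the tool rejects with its usage message.
    """
    if num_processes is not None and interval_ms is not None and rate is None:
        pass  # both given explicitly; the rate is implied
    elif num_processes is not None and interval_ms is None and rate is not None:
        interval_ms = (num_processes * 1000) // rate
    elif num_processes is None and interval_ms is None and rate is not None:
        # One process per req/sec up to 100 processes, then spread the load.
        num_processes = rate if rate <= 100 else 100
        interval_ms = (num_processes * 1000) // rate
    elif num_processes is None and interval_ms is None and rate is None:
        num_processes, interval_ms = 10, 1000  # the tool's defaults
    else:
        raise ValueError("Wrong combination of -n, -i and -r")
    return num_processes, interval_ms


# 250 req/sec caps at 100 processes, each firing every 400 ms.
print(derive_schedule(rate=250))  # (100, 400)
```

Note that the effective rate is always `num_processes * 1000 / interval_ms`, which is why capping at 100 processes while shrinking the interval preserves the requested rate.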
# See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function from getopt import getopt, GetoptError import errno import json import os import sched import socket import sys import time def usage(e=None): name = os.path.basename(sys.argv[0]) print("usage: %s [options]" % (name,)) print("") print("options:") print(" -h --help: print this help and exit") print(" -s: server host (and optional port) [localhost:8100]") print(" or unix socket path prefixed by 'unix:'") print("") print("This tool monitors the server's job assignment rate.") if e: sys.exit(64) else: sys.exit(0) def main(): try: (optargs, _ignore_args) = getopt( sys.argv[1:], "hs:", [ "help", ], ) except GetoptError, e: usage(e) # # Get configuration # server = ("localhost", 8100) for opt, arg in optargs: if opt in ("-h", "--help"): usage() elif opt in ("-s"): if not arg.startswith("unix:"): server = arg.split(":") if len(server) == 1: server.append(8100) else: server[1] = int(server[1]) server = tuple(server) else: server = arg else: raise NotImplementedError(opt) d = Monitor(server) d.run() class Monitor(object): """ Main monitor controller. Use Python's L{sched} feature to schedule updates. """ screen = None registered_windows = {} registered_order = [] def __init__(self, server): self.paused = False self.seconds = 1.0 self.sched = sched.scheduler(time.time, time.sleep) self.client = MonitorClient(server) self.client.addItem("test_work") self.last_queued = None self.last_completed = None self.last_time = None def run(self): """ Create the initial window and run the L{scheduler}. """ self.sched.enter(self.seconds, 0, self.updateResults, ()) self.sched.run() def updateResults(self): """ Periodic update of the current window and check for a key press. 
""" t = time.time() self.client.update() if len(self.client.currentData) == 0: print("Failed to read any valid data from the server - exiting") sys.exit(1) queued = self.client.currentData["test_work"]["queued"] completed = self.client.currentData["test_work"]["completed"] assigned = self.client.currentData["test_work"]["assigned"] if self.last_queued is not None: diff_queued = (self.last_queued - queued) / (t - self.last_time) diff_completed = (completed - self.last_completed) / (t - self.last_time) else: diff_queued = 0 diff_completed = 0 self.last_queued = queued self.last_completed = completed self.last_time = t print("{}\t{}\t{:.1f}\t{:.1f}".format(queued, assigned, diff_queued, diff_completed,)) self.sched.enter(max(self.seconds - (time.time() - t), 0), 0, self.updateResults, ()) class MonitorClient(object): """ Client that connects to a server and fetches information. """ def __init__(self, sockname): self.socket = None if isinstance(sockname, str): self.sockname = sockname[5:] self.useTCP = False else: self.sockname = sockname self.useTCP = True self.currentData = {} self.items = [] def readSock(self, items): """ Open a socket, send the specified request, and retrieve the response. Keep the socket open. """ try: if self.socket is None: self.socket = socket.socket(socket.AF_INET if self.useTCP else socket.AF_UNIX, socket.SOCK_STREAM) self.socket.connect(self.sockname) self.socket.setblocking(0) self.socket.sendall(json.dumps(items) + "\r\n") data = "" t = time.time() while not data.endswith("\n"): try: d = self.socket.recv(1024) except socket.error as se: if se.args[0] != errno.EWOULDBLOCK: raise if time.time() - t > 5: raise socket.error continue if d: data += d else: break data = json.loads(data) except socket.error: data = {} self.socket = None except ValueError: data = {} return data def update(self): """ Update the current data from the server. 
""" # Only read each item once self.currentData = self.readSock(list(set(self.items))) def getOneItem(self, item): """ Update the current data from the server. """ data = self.readSock([item]) return data[item] if data else None def addItem(self, item): """ Add a server data item to monitor. """ self.items.append(item) def removeItem(self, item): """ No need to monitor this item. """ self.items.remove(item) if __name__ == "__main__": main() calendarserver-9.1+dfsg/contrib/performance/loadtest/000077500000000000000000000000001315003562600230635ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/__init__.py000066400000000000000000000011721315003562600251750ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Load-testing tool. 
""" calendarserver-9.1+dfsg/contrib/performance/loadtest/accounts.csv000066400000000000000000000406761315003562600254340ustar00rootroot00000000000000user01,user01,User 01,user01@example.com,10000000-0000-0000-0000-000000000001,PodA user02,user02,User 02,user02@example.com,10000000-0000-0000-0000-000000000002,PodA user03,user03,User 03,user03@example.com,10000000-0000-0000-0000-000000000003,PodA user04,user04,User 04,user04@example.com,10000000-0000-0000-0000-000000000004,PodA user05,user05,User 05,user05@example.com,10000000-0000-0000-0000-000000000005,PodA user06,user06,User 06,user06@example.com,10000000-0000-0000-0000-000000000006,PodA user07,user07,User 07,user07@example.com,10000000-0000-0000-0000-000000000007,PodA user08,user08,User 08,user08@example.com,10000000-0000-0000-0000-000000000008,PodA user09,user09,User 09,user09@example.com,10000000-0000-0000-0000-000000000009,PodA user10,user10,User 10,user10@example.com,10000000-0000-0000-0000-000000000010,PodA user11,user11,User 11,user11@example.com,10000000-0000-0000-0000-000000000011,PodA user12,user12,User 12,user12@example.com,10000000-0000-0000-0000-000000000012,PodA user13,user13,User 13,user13@example.com,10000000-0000-0000-0000-000000000013,PodA user14,user14,User 14,user14@example.com,10000000-0000-0000-0000-000000000014,PodA user15,user15,User 15,user15@example.com,10000000-0000-0000-0000-000000000015,PodA user16,user16,User 16,user16@example.com,10000000-0000-0000-0000-000000000016,PodA user17,user17,User 17,user17@example.com,10000000-0000-0000-0000-000000000017,PodA user18,user18,User 18,user18@example.com,10000000-0000-0000-0000-000000000018,PodA user19,user19,User 19,user19@example.com,10000000-0000-0000-0000-000000000019,PodA user20,user20,User 20,user20@example.com,10000000-0000-0000-0000-000000000020,PodA user21,user21,User 21,user21@example.com,10000000-0000-0000-0000-000000000021,PodA user22,user22,User 22,user22@example.com,10000000-0000-0000-0000-000000000022,PodA user23,user23,User 
23,user23@example.com,10000000-0000-0000-0000-000000000023,PodA user24,user24,User 24,user24@example.com,10000000-0000-0000-0000-000000000024,PodA user25,user25,User 25,user25@example.com,10000000-0000-0000-0000-000000000025,PodA user26,user26,User 26,user26@example.com,10000000-0000-0000-0000-000000000026,PodA user27,user27,User 27,user27@example.com,10000000-0000-0000-0000-000000000027,PodA user28,user28,User 28,user28@example.com,10000000-0000-0000-0000-000000000028,PodA user29,user29,User 29,user29@example.com,10000000-0000-0000-0000-000000000029,PodA user30,user30,User 30,user30@example.com,10000000-0000-0000-0000-000000000030,PodA user31,user31,User 31,user31@example.com,10000000-0000-0000-0000-000000000031,PodA user32,user32,User 32,user32@example.com,10000000-0000-0000-0000-000000000032,PodA user33,user33,User 33,user33@example.com,10000000-0000-0000-0000-000000000033,PodA user34,user34,User 34,user34@example.com,10000000-0000-0000-0000-000000000034,PodA user35,user35,User 35,user35@example.com,10000000-0000-0000-0000-000000000035,PodA user36,user36,User 36,user36@example.com,10000000-0000-0000-0000-000000000036,PodA user37,user37,User 37,user37@example.com,10000000-0000-0000-0000-000000000037,PodA user38,user38,User 38,user38@example.com,10000000-0000-0000-0000-000000000038,PodA user39,user39,User 39,user39@example.com,10000000-0000-0000-0000-000000000039,PodA user40,user40,User 40,user40@example.com,10000000-0000-0000-0000-000000000040,PodA user41,user41,User 41,user41@example.com,10000000-0000-0000-0000-000000000041,PodA user42,user42,User 42,user42@example.com,10000000-0000-0000-0000-000000000042,PodA user43,user43,User 43,user43@example.com,10000000-0000-0000-0000-000000000043,PodA user44,user44,User 44,user44@example.com,10000000-0000-0000-0000-000000000044,PodA user45,user45,User 45,user45@example.com,10000000-0000-0000-0000-000000000045,PodA user46,user46,User 46,user46@example.com,10000000-0000-0000-0000-000000000046,PodA user47,user47,User 
47,user47@example.com,10000000-0000-0000-0000-000000000047,PodA user48,user48,User 48,user48@example.com,10000000-0000-0000-0000-000000000048,PodA user49,user49,User 49,user49@example.com,10000000-0000-0000-0000-000000000049,PodA user50,user50,User 50,user50@example.com,10000000-0000-0000-0000-000000000050,PodA user51,user51,User 51,user51@example.com,10000000-0000-0000-0000-000000000051,PodA user52,user52,User 52,user52@example.com,10000000-0000-0000-0000-000000000052,PodA user53,user53,User 53,user53@example.com,10000000-0000-0000-0000-000000000053,PodA user54,user54,User 54,user54@example.com,10000000-0000-0000-0000-000000000054,PodA user55,user55,User 55,user55@example.com,10000000-0000-0000-0000-000000000055,PodA user56,user56,User 56,user56@example.com,10000000-0000-0000-0000-000000000056,PodA user57,user57,User 57,user57@example.com,10000000-0000-0000-0000-000000000057,PodA user58,user58,User 58,user58@example.com,10000000-0000-0000-0000-000000000058,PodA user59,user59,User 59,user59@example.com,10000000-0000-0000-0000-000000000059,PodA user60,user60,User 60,user60@example.com,10000000-0000-0000-0000-000000000060,PodA user61,user61,User 61,user61@example.com,10000000-0000-0000-0000-000000000061,PodA user62,user62,User 62,user62@example.com,10000000-0000-0000-0000-000000000062,PodA user63,user63,User 63,user63@example.com,10000000-0000-0000-0000-000000000063,PodA user64,user64,User 64,user64@example.com,10000000-0000-0000-0000-000000000064,PodA user65,user65,User 65,user65@example.com,10000000-0000-0000-0000-000000000065,PodA user66,user66,User 66,user66@example.com,10000000-0000-0000-0000-000000000066,PodA user67,user67,User 67,user67@example.com,10000000-0000-0000-0000-000000000067,PodA user68,user68,User 68,user68@example.com,10000000-0000-0000-0000-000000000068,PodA user69,user69,User 69,user69@example.com,10000000-0000-0000-0000-000000000069,PodA user70,user70,User 70,user70@example.com,10000000-0000-0000-0000-000000000070,PodA user71,user71,User 
71,user71@example.com,10000000-0000-0000-0000-000000000071,PodA user72,user72,User 72,user72@example.com,10000000-0000-0000-0000-000000000072,PodA user73,user73,User 73,user73@example.com,10000000-0000-0000-0000-000000000073,PodA user74,user74,User 74,user74@example.com,10000000-0000-0000-0000-000000000074,PodA user75,user75,User 75,user75@example.com,10000000-0000-0000-0000-000000000075,PodA user76,user76,User 76,user76@example.com,10000000-0000-0000-0000-000000000076,PodA user77,user77,User 77,user77@example.com,10000000-0000-0000-0000-000000000077,PodA user78,user78,User 78,user78@example.com,10000000-0000-0000-0000-000000000078,PodA user79,user79,User 79,user79@example.com,10000000-0000-0000-0000-000000000079,PodA user80,user80,User 80,user80@example.com,10000000-0000-0000-0000-000000000080,PodA user81,user81,User 81,user81@example.com,10000000-0000-0000-0000-000000000081,PodA user82,user82,User 82,user82@example.com,10000000-0000-0000-0000-000000000082,PodA user83,user83,User 83,user83@example.com,10000000-0000-0000-0000-000000000083,PodA user84,user84,User 84,user84@example.com,10000000-0000-0000-0000-000000000084,PodA user85,user85,User 85,user85@example.com,10000000-0000-0000-0000-000000000085,PodA user86,user86,User 86,user86@example.com,10000000-0000-0000-0000-000000000086,PodA user87,user87,User 87,user87@example.com,10000000-0000-0000-0000-000000000087,PodA user88,user88,User 88,user88@example.com,10000000-0000-0000-0000-000000000088,PodA user89,user89,User 89,user89@example.com,10000000-0000-0000-0000-000000000089,PodA user90,user90,User 90,user90@example.com,10000000-0000-0000-0000-000000000090,PodA user91,user91,User 91,user91@example.com,10000000-0000-0000-0000-000000000091,PodA user92,user92,User 92,user92@example.com,10000000-0000-0000-0000-000000000092,PodA user93,user93,User 93,user93@example.com,10000000-0000-0000-0000-000000000093,PodA user94,user94,User 94,user94@example.com,10000000-0000-0000-0000-000000000094,PodA user95,user95,User 
95,user95@example.com,10000000-0000-0000-0000-000000000095,PodA user96,user96,User 96,user96@example.com,10000000-0000-0000-0000-000000000096,PodA user97,user97,User 97,user97@example.com,10000000-0000-0000-0000-000000000097,PodA user98,user98,User 98,user98@example.com,10000000-0000-0000-0000-000000000098,PodA user99,user99,User 99,user99@example.com,10000000-0000-0000-0000-000000000099,PodA puser01,puser01,Puser 01,puser01@example.com,60000000-0000-0000-0000-000000000001,PodB puser02,puser02,Puser 02,puser02@example.com,60000000-0000-0000-0000-000000000002,PodB puser03,puser03,Puser 03,puser03@example.com,60000000-0000-0000-0000-000000000003,PodB puser04,puser04,Puser 04,puser04@example.com,60000000-0000-0000-0000-000000000004,PodB puser05,puser05,Puser 05,puser05@example.com,60000000-0000-0000-0000-000000000005,PodB puser06,puser06,Puser 06,puser06@example.com,60000000-0000-0000-0000-000000000006,PodB puser07,puser07,Puser 07,puser07@example.com,60000000-0000-0000-0000-000000000007,PodB puser08,puser08,Puser 08,puser08@example.com,60000000-0000-0000-0000-000000000008,PodB puser09,puser09,Puser 09,puser09@example.com,60000000-0000-0000-0000-000000000009,PodB puser10,puser10,Puser 10,puser10@example.com,60000000-0000-0000-0000-000000000010,PodB puser11,puser11,Puser 11,puser11@example.com,60000000-0000-0000-0000-000000000011,PodB puser12,puser12,Puser 12,puser12@example.com,60000000-0000-0000-0000-000000000012,PodB puser13,puser13,Puser 13,puser13@example.com,60000000-0000-0000-0000-000000000013,PodB puser14,puser14,Puser 14,puser14@example.com,60000000-0000-0000-0000-000000000014,PodB puser15,puser15,Puser 15,puser15@example.com,60000000-0000-0000-0000-000000000015,PodB puser16,puser16,Puser 16,puser16@example.com,60000000-0000-0000-0000-000000000016,PodB puser17,puser17,Puser 17,puser17@example.com,60000000-0000-0000-0000-000000000017,PodB puser18,puser18,Puser 18,puser18@example.com,60000000-0000-0000-0000-000000000018,PodB puser19,puser19,Puser 
19,puser19@example.com,60000000-0000-0000-0000-000000000019,PodB
puser20,puser20,Puser 20,puser20@example.com,60000000-0000-0000-0000-000000000020,PodB
puser21,puser21,Puser 21,puser21@example.com,60000000-0000-0000-0000-000000000021,PodB
puser22,puser22,Puser 22,puser22@example.com,60000000-0000-0000-0000-000000000022,PodB
puser23,puser23,Puser 23,puser23@example.com,60000000-0000-0000-0000-000000000023,PodB
puser24,puser24,Puser 24,puser24@example.com,60000000-0000-0000-0000-000000000024,PodB
puser25,puser25,Puser 25,puser25@example.com,60000000-0000-0000-0000-000000000025,PodB
puser26,puser26,Puser 26,puser26@example.com,60000000-0000-0000-0000-000000000026,PodB
puser27,puser27,Puser 27,puser27@example.com,60000000-0000-0000-0000-000000000027,PodB
puser28,puser28,Puser 28,puser28@example.com,60000000-0000-0000-0000-000000000028,PodB
puser29,puser29,Puser 29,puser29@example.com,60000000-0000-0000-0000-000000000029,PodB
puser30,puser30,Puser 30,puser30@example.com,60000000-0000-0000-0000-000000000030,PodB
puser31,puser31,Puser 31,puser31@example.com,60000000-0000-0000-0000-000000000031,PodB
puser32,puser32,Puser 32,puser32@example.com,60000000-0000-0000-0000-000000000032,PodB
puser33,puser33,Puser 33,puser33@example.com,60000000-0000-0000-0000-000000000033,PodB
puser34,puser34,Puser 34,puser34@example.com,60000000-0000-0000-0000-000000000034,PodB
puser35,puser35,Puser 35,puser35@example.com,60000000-0000-0000-0000-000000000035,PodB
puser36,puser36,Puser 36,puser36@example.com,60000000-0000-0000-0000-000000000036,PodB
puser37,puser37,Puser 37,puser37@example.com,60000000-0000-0000-0000-000000000037,PodB
puser38,puser38,Puser 38,puser38@example.com,60000000-0000-0000-0000-000000000038,PodB
puser39,puser39,Puser 39,puser39@example.com,60000000-0000-0000-0000-000000000039,PodB
puser40,puser40,Puser 40,puser40@example.com,60000000-0000-0000-0000-000000000040,PodB
puser41,puser41,Puser 41,puser41@example.com,60000000-0000-0000-0000-000000000041,PodB
puser42,puser42,Puser 42,puser42@example.com,60000000-0000-0000-0000-000000000042,PodB
puser43,puser43,Puser 43,puser43@example.com,60000000-0000-0000-0000-000000000043,PodB
puser44,puser44,Puser 44,puser44@example.com,60000000-0000-0000-0000-000000000044,PodB
puser45,puser45,Puser 45,puser45@example.com,60000000-0000-0000-0000-000000000045,PodB
puser46,puser46,Puser 46,puser46@example.com,60000000-0000-0000-0000-000000000046,PodB
puser47,puser47,Puser 47,puser47@example.com,60000000-0000-0000-0000-000000000047,PodB
puser48,puser48,Puser 48,puser48@example.com,60000000-0000-0000-0000-000000000048,PodB
puser49,puser49,Puser 49,puser49@example.com,60000000-0000-0000-0000-000000000049,PodB
puser50,puser50,Puser 50,puser50@example.com,60000000-0000-0000-0000-000000000050,PodB
puser51,puser51,Puser 51,puser51@example.com,60000000-0000-0000-0000-000000000051,PodB
puser52,puser52,Puser 52,puser52@example.com,60000000-0000-0000-0000-000000000052,PodB
puser53,puser53,Puser 53,puser53@example.com,60000000-0000-0000-0000-000000000053,PodB
puser54,puser54,Puser 54,puser54@example.com,60000000-0000-0000-0000-000000000054,PodB
puser55,puser55,Puser 55,puser55@example.com,60000000-0000-0000-0000-000000000055,PodB
puser56,puser56,Puser 56,puser56@example.com,60000000-0000-0000-0000-000000000056,PodB
puser57,puser57,Puser 57,puser57@example.com,60000000-0000-0000-0000-000000000057,PodB
puser58,puser58,Puser 58,puser58@example.com,60000000-0000-0000-0000-000000000058,PodB
puser59,puser59,Puser 59,puser59@example.com,60000000-0000-0000-0000-000000000059,PodB
puser60,puser60,Puser 60,puser60@example.com,60000000-0000-0000-0000-000000000060,PodB
puser61,puser61,Puser 61,puser61@example.com,60000000-0000-0000-0000-000000000061,PodB
puser62,puser62,Puser 62,puser62@example.com,60000000-0000-0000-0000-000000000062,PodB
puser63,puser63,Puser 63,puser63@example.com,60000000-0000-0000-0000-000000000063,PodB
puser64,puser64,Puser 64,puser64@example.com,60000000-0000-0000-0000-000000000064,PodB
puser65,puser65,Puser 65,puser65@example.com,60000000-0000-0000-0000-000000000065,PodB
puser66,puser66,Puser 66,puser66@example.com,60000000-0000-0000-0000-000000000066,PodB
puser67,puser67,Puser 67,puser67@example.com,60000000-0000-0000-0000-000000000067,PodB
puser68,puser68,Puser 68,puser68@example.com,60000000-0000-0000-0000-000000000068,PodB
puser69,puser69,Puser 69,puser69@example.com,60000000-0000-0000-0000-000000000069,PodB
puser70,puser70,Puser 70,puser70@example.com,60000000-0000-0000-0000-000000000070,PodB
puser71,puser71,Puser 71,puser71@example.com,60000000-0000-0000-0000-000000000071,PodB
puser72,puser72,Puser 72,puser72@example.com,60000000-0000-0000-0000-000000000072,PodB
puser73,puser73,Puser 73,puser73@example.com,60000000-0000-0000-0000-000000000073,PodB
puser74,puser74,Puser 74,puser74@example.com,60000000-0000-0000-0000-000000000074,PodB
puser75,puser75,Puser 75,puser75@example.com,60000000-0000-0000-0000-000000000075,PodB
puser76,puser76,Puser 76,puser76@example.com,60000000-0000-0000-0000-000000000076,PodB
puser77,puser77,Puser 77,puser77@example.com,60000000-0000-0000-0000-000000000077,PodB
puser78,puser78,Puser 78,puser78@example.com,60000000-0000-0000-0000-000000000078,PodB
puser79,puser79,Puser 79,puser79@example.com,60000000-0000-0000-0000-000000000079,PodB
puser80,puser80,Puser 80,puser80@example.com,60000000-0000-0000-0000-000000000080,PodB
puser81,puser81,Puser 81,puser81@example.com,60000000-0000-0000-0000-000000000081,PodB
puser82,puser82,Puser 82,puser82@example.com,60000000-0000-0000-0000-000000000082,PodB
puser83,puser83,Puser 83,puser83@example.com,60000000-0000-0000-0000-000000000083,PodB
puser84,puser84,Puser 84,puser84@example.com,60000000-0000-0000-0000-000000000084,PodB
puser85,puser85,Puser 85,puser85@example.com,60000000-0000-0000-0000-000000000085,PodB
puser86,puser86,Puser 86,puser86@example.com,60000000-0000-0000-0000-000000000086,PodB
puser87,puser87,Puser 87,puser87@example.com,60000000-0000-0000-0000-000000000087,PodB
puser88,puser88,Puser 88,puser88@example.com,60000000-0000-0000-0000-000000000088,PodB
puser89,puser89,Puser 89,puser89@example.com,60000000-0000-0000-0000-000000000089,PodB
puser90,puser90,Puser 90,puser90@example.com,60000000-0000-0000-0000-000000000090,PodB
puser91,puser91,Puser 91,puser91@example.com,60000000-0000-0000-0000-000000000091,PodB
puser92,puser92,Puser 92,puser92@example.com,60000000-0000-0000-0000-000000000092,PodB
puser93,puser93,Puser 93,puser93@example.com,60000000-0000-0000-0000-000000000093,PodB
puser94,puser94,Puser 94,puser94@example.com,60000000-0000-0000-0000-000000000094,PodB
puser95,puser95,Puser 95,puser95@example.com,60000000-0000-0000-0000-000000000095,PodB
puser96,puser96,Puser 96,puser96@example.com,60000000-0000-0000-0000-000000000096,PodB
puser97,puser97,Puser 97,puser97@example.com,60000000-0000-0000-0000-000000000097,PodB
puser98,puser98,Puser 98,puser98@example.com,60000000-0000-0000-0000-000000000098,PodB
puser99,puser99,Puser 99,puser99@example.com,60000000-0000-0000-0000-000000000099,PodB

calendarserver-9.1+dfsg/contrib/performance/loadtest/amphub.py

##
# Copyright (c) 2016-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
##

from __future__ import print_function

from calendarserver.push.amppush import AMPPushClientFactory, SubscribeToID
from twisted.internet.endpoints import TCP4ClientEndpoint
from twisted.internet.defer import inlineCallbacks
from twisted.internet import reactor
from uuid import uuid4


class AMPHub(object):

    _hub = None

    def __init__(self):
        # self.hostsAndPorts = hostsAndPorts
        self.protocols = []
        self.callbacks = {}

    @classmethod
    @inlineCallbacks
    def start(cls, hostsAndPorts):
        """
        Instantiates the AMPHub singleton and connects to the hosts.

        @param hostsAndPorts: The hosts and ports to connect to
        @type hostsAndPorts: A list of (hostname, port) tuples
        """
        cls._hub = cls()
        for host, port in hostsAndPorts:
            endpoint = TCP4ClientEndpoint(reactor, host, port)
            factory = AMPPushClientFactory(cls._hub._pushReceived)
            protocol = yield endpoint.connect(factory)
            cls._hub.protocols.append(protocol)

    @classmethod
    @inlineCallbacks
    def subscribeToIDs(cls, ids, callback):
        """
        Clients can call this method to register a callback which will get
        called whenever a push notification is fired for any id in the ids
        list.

        @param ids: The push IDs to subscribe to
        @type ids: list of strings

        @param callback: The method to call whenever a notification is
            received.
        @type callback: callable which is passed an id (string), a timestamp
            of the change in data triggering this push (integer), and the
            priority level (integer)
        """
        hub = cls._hub
        for id in ids:
            hub.callbacks.setdefault(id, []).append(callback)
            for protocol in hub.protocols:
                yield protocol.callRemote(SubscribeToID, token=str(uuid4()), id=id)

    def _pushReceived(self, id, dataChangedTimestamp, priority=5):
        """
        Called for every incoming push notification, this method then calls
        each callback registered for the given ID.
""" for callback in self.callbacks.setdefault(id, []): callback(id, dataChangedTimestamp, priority=priority) calendarserver-9.1+dfsg/contrib/performance/loadtest/ampsim.py000066400000000000000000000131171315003562600247260ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ## """ AMP-based simulator. """ if __name__ == '__main__': # When run as a script, this is the worker process, receiving commands over # stdin. def runmain(): import traceback try: __import__("twext") from twisted.python.log import startLogging from sys import exit, stderr startLogging(stderr) from twisted.internet import reactor from twisted.internet.stdio import StandardIO from contrib.performance.loadtest.ampsim import Worker # @UnresolvedImport from contrib.performance.loadtest.sim import LagTrackingReactor StandardIO(Worker(LagTrackingReactor(reactor))) reactor.run() except: traceback.print_exc() exit(1) else: exit(0) runmain() from copy import deepcopy from plistlib import writePlistToString, readPlistFromString from twisted.python.log import msg, addObserver from twisted.protocols.amp import AMP, Command, String, Unicode from twext.enterprise.adbapi2 import Pickle from contrib.performance.loadtest.sim import _DirectoryRecord, LoadSimulator class Configure(Command): """ Configure this worker process with the text of an XML property list. 
""" arguments = [("plist", String())] # Pass OSError exceptions through, presenting the exception message to the user. errors = {OSError: 'OSError'} class LogMessage(Command): """ This message represents an observed log message being relayed from a worker process to the manager process. """ arguments = [("event", Pickle())] class Account(Command): """ This message represents a L{_DirectoryRecord} loaded by the manager process being relayed to a worker. """ arguments = [ ("uid", Unicode()), ("password", Unicode()), ("commonName", Unicode()), ("email", Unicode()), ("guid", Unicode()), ] class Worker(AMP): """ Protocol to be run in the worker process, to handle messages from its manager. """ def __init__(self, reactor): super(Worker, self).__init__() self.reactor = reactor self.records = [] @Account.responder def account(self, **kw): self.records.append(_DirectoryRecord(**kw)) return {} @Configure.responder def config(self, plist): from sys import stderr cfg = readPlistFromString(plist) addObserver(self.emit) sim = LoadSimulator.fromConfig(cfg) sim.records = self.records sim.attachServices(stderr) return {} def emit(self, eventDict): if 'type' in eventDict: self.reactor.callFromThread( self.callRemote, LogMessage, event=eventDict ) def connectionLost(self, reason): super(Worker, self).connectionLost(reason) msg("Standard IO connection lost.") self.reactor.stop() class Manager(AMP): """ Protocol to be run in the coordinating process, to respond to messages from a single worker. 
""" def __init__(self, loadsim, whichWorker, numWorkers, output): super(Manager, self).__init__() self.loadsim = loadsim self.whichWorker = whichWorker self.numWorkers = numWorkers self.output = output def connectionMade(self): super(Manager, self).connectionMade() for record in self.loadsim.records: self.callRemote(Account, uid=record.uid, password=record.password, commonName=record.commonName, email=record.email, guid=record.guid) workerConfig = deepcopy(self.loadsim.configTemplate) # The list of workers is for the manager only; the workers themselves # know they're workers because they _don't_ receive this list. del workerConfig["workers"] # The manager loads the accounts via the configured loader, then sends # them out to the workers (right above), which look at the state at an # instance level and therefore don't need a globally-named directory # record loader. del workerConfig["accounts"] workerConfig["workerID"] = self.whichWorker workerConfig["workerCount"] = self.numWorkers workerConfig["observers"] = [] workerConfig.pop("accounts", None) plist = writePlistToString(workerConfig) self.output.write("Initiating worker configuration\n") def completed(x): self.output.write("Worker configuration complete.\n") def failed(reason): self.output.write("Worker configuration failed. 
{}\n".format(reason)) self.callRemote(Configure, plist=plist).addCallback(completed).addErrback(failed) @LogMessage.responder def observed(self, event): # from pprint import pformat # self.output.write(pformat(event)+"\n") msg(**event) return {} calendarserver-9.1+dfsg/contrib/performance/loadtest/benchmarks.json000066400000000000000000000045701315003562600261010ustar00rootroot00000000000000{ "GET{event}" : {"mean": 0.100, "weight": 1.000}, "PUT{event}" : {"mean": 0.250, "weight": 1.000}, "PUT{update}" : {"mean": 0.250, "weight": 1.000}, "PUT{attendee-small}" : {"mean": 0.500, "weight": 1.000}, "PUT{attendee-medium}" : {"mean": 1.000, "weight": 0.900}, "PUT{attendee-large}" : {"mean": 1.500, "weight": 0.750}, "PUT{attendee-huge}" : {"mean": 2.000, "weight": 0.500}, "PUT{organizer-small}" : {"mean": 0.250, "weight": 1.000}, "PUT{organizer-medium}" : {"mean": 0.500, "weight": 0.900}, "PUT{organizer-large}" : {"mean": 1.000, "weight": 0.750}, "PUT{organizer-huge}" : {"mean": 3.000, "weight": 0.500}, "DELETE{event}" : {"mean": 0.500, "weight": 0.750}, "POST{fb-small}" : {"mean": 0.500, "weight": 1.000}, "POST{fb-medium}" : {"mean": 0.750, "weight": 1.000}, "POST{fb-large}" : {"mean": 1.000, "weight": 0.800}, "POST{fb-huge}" : {"mean": 2.000, "weight": 0.700}, "PROPFIND{well-known}" : {"mean": 0.100, "weight": 0.100}, "PROPFIND{find-principal}" : {"mean": 0.250, "weight": 0.100}, "PROPFIND{principal}" : {"mean": 0.250, "weight": 0.100}, "PROPFIND{home}" : {"mean": 0.250, "weight": 1.000}, "PROPFIND{calendar}" : {"mean": 0.250, "weight": 1.000}, "PROPFIND{notification}" : {"mean": 0.100, "weight": 0.500}, "PROPFIND{notification-items}" : {"mean": 0.100, "weight": 0.500}, "PROPPATCH{calendar}" : {"mean": 0.250, "weight": 0.500}, "REPORT{pset}" : {"mean": 0.250, "weight": 0.100}, "REPORT{expand}" : {"mean": 0.250, "weight": 1.000}, "REPORT{psearch}" : {"mean": 0.250, "weight": 0.100}, "REPORT{sync-init}" : {"mean": 0.250, "weight": 0.500}, "REPORT{sync}" : {"mean": 0.250, 
"weight": 1.000}, "REPORT{vevent}" : {"mean": 0.250, "weight": 1.000}, "REPORT{vtodo}" : {"mean": 0.250, "weight": 1.000}, "REPORT{multiget-small}" : {"mean": 0.250, "weight": 1.000}, "REPORT{multiget-medium}" : {"mean": 0.500, "weight": 0.900}, "REPORT{multiget-large}" : {"mean": 1.000, "weight": 0.750}, "REPORT{multiget-huge}" : {"mean": 2.000, "weight": 0.500} } calendarserver-9.1+dfsg/contrib/performance/loadtest/clients-old.plist000066400000000000000000000477111315003562600263670ustar00rootroot00000000000000 clients software contrib.performance.loadtest.ical.OS_X_10_7 params title 10.7 calendarHomePollInterval 30 supportPush supportAmpPush profiles class contrib.performance.loadtest.profiles.Eventer params enabled interval 60 eventStartDistribution type contrib.performance.stats.WorkDistribution params daysOfWeek mon tue wed thu fri beginHour 8 endHour 16 tzname America/Los_Angeles recurrenceDistribution type contrib.performance.stats.RecurrenceDistribution params allowRecurrence weights none 50 daily 10 weekly 20 monthly 2 yearly 1 dailylimit 2 weeklylimit 5 workdays 10 class contrib.performance.loadtest.profiles.AlarmAcknowledger params enabled interval 15 pastTheHour 0 15 30 45 eventStartDistribution type contrib.performance.stats.WorkDistribution params daysOfWeek mon tue wed thu fri beginHour 8 endHour 16 tzname America/Los_Angeles recurrenceDistribution type contrib.performance.stats.RecurrenceDistribution params allowRecurrence weights none 50 daily 25 weekly 25 monthly 0 yearly 0 dailylimit 0 weeklylimit 0 workdays 0 class contrib.performance.loadtest.profiles.TitleChanger params enabled interval 120 class contrib.performance.loadtest.profiles.Attacher params enabled interval 120 fileSizeDistribution type contrib.performance.stats.NormalDistribution params mu 500000 sigma 100000 class contrib.performance.loadtest.profiles.EventCountLimiter params enabled interval 60 eventCountLimit 100 class contrib.performance.loadtest.profiles.CalendarSharer params 
enabled interval 300 class contrib.performance.loadtest.profiles.Inviter params enabled sendInvitationDistribution type contrib.performance.stats.NormalDistribution params mu 60 sigma 5 inviteeDistribution type contrib.performance.stats.UniformIntegerDistribution params min 0 max 99 inviteeClumping inviteeCountDistribution type contrib.performance.stats.LogNormalDistribution params mode 1 median 6 maximum 60 eventStartDistribution type contrib.performance.stats.WorkDistribution params daysOfWeek mon tue wed thu fri beginHour 8 endHour 16 tzname America/Los_Angeles recurrenceDistribution type contrib.performance.stats.RecurrenceDistribution params allowRecurrence weights none 50 daily 10 weekly 20 monthly 2 yearly 1 dailylimit 2 weeklylimit 5 workdays 10 class contrib.performance.loadtest.profiles.Accepter params enabled acceptDelayDistribution type contrib.performance.stats.LogNormalDistribution params mode 300 median 1800 class contrib.performance.loadtest.profiles.Tasker params enabled interval 300 taskDueDistribution type contrib.performance.stats.WorkDistribution params daysOfWeek mon tue wed thu fri beginHour 8 endHour 16 tzname America/Los_Angeles weight 1 calendarserver-9.1+dfsg/contrib/performance/loadtest/clients.plist000066400000000000000000000626511315003562600256130ustar00rootroot00000000000000 clients software contrib.performance.loadtest.ical.OS_X_10_11 params title main calendarHomePollInterval 30 supportPush supportAmpPush unauthenticatedPercentage 40 profiles class contrib.performance.loadtest.profiles.Eventer params enabled interval 180 eventStartDistribution type contrib.performance.stats.WorkDistribution params daysOfWeek mon tue wed thu fri beginHour 8 endHour 16 tzname America/Los_Angeles recurrenceDistribution type contrib.performance.stats.RecurrenceDistribution params allowRecurrence weights none 50 daily 10 weekly 20 monthly 2 yearly 1 dailylimit 2 weeklylimit 5 workdays 10 fileAttachPercentage 1 fileSizeDistribution type 
contrib.performance.stats.NormalDistribution params mu 500000 sigma 100000 class contrib.performance.loadtest.profiles.AlarmAcknowledger params enabled interval 15 pastTheHour 0 15 30 45 eventStartDistribution type contrib.performance.stats.WorkDistribution params daysOfWeek mon tue wed thu fri beginHour 8 endHour 16 tzname America/Los_Angeles recurrenceDistribution type contrib.performance.stats.RecurrenceDistribution params allowRecurrence weights none 50 daily 25 weekly 25 monthly 0 yearly 0 dailylimit 0 weeklylimit 0 workdays 0 class contrib.performance.loadtest.profiles.TitleChanger params enabled interval 60 class contrib.performance.loadtest.profiles.DescriptionChanger params enabled interval 60 descriptionLengthDistribution type contrib.performance.stats.LogNormalDistribution params mode 450 median 650 maximum 1000000 class contrib.performance.loadtest.profiles.EventCountLimiter params enabled interval 60 eventCountLimit 100 class contrib.performance.loadtest.profiles.CalendarSharer params enabled interval 1800 maxSharees 3 class contrib.performance.loadtest.profiles.Inviter params enabled sendInvitationDistribution type contrib.performance.stats.NormalDistribution params mu 720 sigma 5 inviteeDistribution type contrib.performance.stats.UniformIntegerDistribution params min 0 max 99 inviteeClumping inviteeCountDistribution type contrib.performance.stats.LogNormalDistribution params mode 6 median 8 maximum 60 inviteeLookupPercentage 50 eventStartDistribution type contrib.performance.stats.WorkDistribution params daysOfWeek mon tue wed thu fri beginHour 8 endHour 16 tzname America/Los_Angeles recurrenceDistribution type contrib.performance.stats.RecurrenceDistribution params allowRecurrence weights none 50 daily 10 weekly 20 monthly 2 yearly 1 dailylimit 2 weeklylimit 5 workdays 10 fileAttachPercentage 1 fileSizeDistribution type contrib.performance.stats.NormalDistribution params mu 500000 sigma 100000 class contrib.performance.loadtest.profiles.Accepter 
params enabled acceptDelayDistribution type contrib.performance.stats.LogNormalDistribution params mode 300 median 1800 class contrib.performance.loadtest.profiles.AttachmentDownloader params enabled class contrib.performance.loadtest.profiles.Tasker params enabled interval 300 taskDueDistribution type contrib.performance.stats.WorkDistribution params daysOfWeek mon tue wed thu fri beginHour 8 endHour 16 tzname America/Los_Angeles class contrib.performance.loadtest.profiles.TimeRanger params enabled interval 300 class contrib.performance.loadtest.profiles.DeepRefresher params enabled interval 60 class contrib.performance.loadtest.profiles.APNSSubscriber params enabled interval 600 class contrib.performance.loadtest.profiles.Resetter params enabled interval 600 weight 1 software contrib.performance.loadtest.ical.OS_X_10_11 params title idle calendarHomePollInterval 30 supportPush supportAmpPush unauthenticatedPercentage 40 profiles class contrib.performance.loadtest.profiles.AttachmentDownloader params enabled weight 2 calendarserver-9.1+dfsg/contrib/performance/loadtest/config-old.plist000066400000000000000000000156631315003562600261740ustar00rootroot00000000000000 servers PodA enabled uri https://localhost:8443 ampPushHosts localhost ampPushPort 62311 stats enabled Port 8100 PodB enabled uri https://localhost:8543 ampPushHosts localhost ampPushPort 62312 stats enabled Port 8101 principalPathTemplate /principals/users/%s/ webadmin enabled HTTPPort 8080 clientDataSerialization UseOldData Path /tmp/sim accounts loader contrib.performance.loadtest.sim.recordsFromCSVFile params path contrib/performance/loadtest/accounts.csv interleavePods arrival factory contrib.performance.loadtest.population.SmoothRampUp params groups 20 groupSize 1 interval 3 clientsPerUser 1 observers type contrib.performance.loadtest.population.ReportStatistics params thresholdsPath contrib/performance/loadtest/thresholds.json benchmarksPath contrib/performance/loadtest/benchmarks.json failCutoff 
1.0 type contrib.performance.loadtest.ical.RequestLogger params type contrib.performance.loadtest.profiles.OperationLogger params thresholdsPath contrib/performance/loadtest/thresholds.json lagCutoff 1.0 failCutoff 1.0 failIfNoPush calendarserver-9.1+dfsg/contrib/performance/loadtest/config.dist.plist000066400000000000000000000167341315003562600263620ustar00rootroot00000000000000 workers ./bin/python contrib/performance/loadtest/ampsim.py ./bin/python contrib/performance/loadtest/ampsim.py ./bin/python contrib/performance/loadtest/ampsim.py ./bin/python contrib/performance/loadtest/ampsim.py ./bin/python contrib/performance/loadtest/ampsim.py ./bin/python contrib/performance/loadtest/ampsim.py servers PodA enabled uri https://localhost:8443 ampPushHosts localhost ampPushPort 62311 stats enabled Port 8100 PodB enabled uri https://localhost:8543 ampPushHosts localhost ampPushPort 62312 stats enabled Port 8101 principalPathTemplate /principals/users/%s/ webadmin enabled HTTPPort 8080 clientDataSerialization UseOldData Path /tmp/sim accounts loader contrib.performance.loadtest.sim.recordsFromCSVFile params path contrib/performance/loadtest/accounts.csv interleavePods arrival factory contrib.performance.loadtest.population.SmoothRampUp params groups 99 groupSize 1 interval 3 clientsPerUser 1 observers type contrib.performance.loadtest.population.ReportStatistics params thresholdsPath contrib/performance/loadtest/thresholds.json benchmarksPath contrib/performance/loadtest/benchmarks.json failCutoff 1.0 type contrib.performance.loadtest.ical.RequestLogger params type contrib.performance.loadtest.profiles.OperationLogger params thresholdsPath contrib/performance/loadtest/thresholds.json lagCutoff 1.0 failCutoff 1.0 calendarserver-9.1+dfsg/contrib/performance/loadtest/config.plist000066400000000000000000000165371315003562600254210ustar00rootroot00000000000000 servers PodA enabled uri https://localhost:8443 ampPushHosts localhost ampPushPort 62311 stats enabled Port 8100 
PodB enabled uri https://localhost:8543 ampPushHosts localhost ampPushPort 62312 stats enabled Port 8101 principalPathTemplate /principals/users/%s/ webadmin enabled HTTPPort 8080 clientDataSerialization UseOldData Path /tmp/sim accounts loader contrib.performance.loadtest.sim.recordsFromCSVFile params path contrib/performance/loadtest/accounts.csv interleavePods arrival factory contrib.performance.loadtest.population.SmoothRampUp params groups 15 groupSize 1 interval 15 clientsPerUser 3 observers type contrib.performance.loadtest.population.ReportStatistics params thresholdsPath contrib/performance/loadtest/thresholds.json benchmarksPath contrib/performance/loadtest/benchmarks.json failCutoff 1.0 type contrib.performance.loadtest.ical.RequestLogger params type contrib.performance.loadtest.ical.ErrorLogger params directory /tmp/sim_errors durationThreshold 10.0 type contrib.performance.loadtest.profiles.OperationLogger params thresholdsPath contrib/performance/loadtest/thresholds.json lagCutoff 1.0 failCutoff 1.0 failIfNoPush calendarserver-9.1+dfsg/contrib/performance/loadtest/ical.py000066400000000000000000003176571315003562600243700ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
#
##

from __future__ import print_function

from caldavclientlibrary.protocol.caldav.definitions import caldavxml
from caldavclientlibrary.protocol.caldav.definitions import csxml
from caldavclientlibrary.protocol.calendarserver.invite import AddInvitees, RemoveInvitee, InviteUser
from caldavclientlibrary.protocol.calendarserver.notifications import InviteNotification
from caldavclientlibrary.protocol.url import URL
from caldavclientlibrary.protocol.utils.xmlhelpers import BetterElementTree
from caldavclientlibrary.protocol.webdav.definitions import davxml
from caldavclientlibrary.protocol.webdav.propfindparser import PropFindParser

from calendarserver.tools.notifications import PubSubClientFactory

from contrib.performance.loadtest.amphub import AMPHub
from contrib.performance.httpauth import AuthHandlerAgent
from contrib.performance.httpclient import StringProducer, readBody
from contrib.performance.loadtest.subscribe import Periodical

from pycalendar.datetime import DateTime
from pycalendar.duration import Duration
from pycalendar.timezone import Timezone

from twext.internet.adaptendpoint import connect
from twext.internet.gaiendpoint import GAIEndpoint

from twisted.internet.ssl import ClientContextFactory
from twisted.internet.defer import Deferred, inlineCallbacks, returnValue, \
    succeed
from twisted.internet.task import LoopingCall
from twisted.python.filepath import FilePath
from twisted.python.log import addObserver, err, msg
from twisted.python.util import FancyEqMixin
from twisted.web.client import Agent, ContentDecoderAgent, GzipDecoder, \
    _DeprecatedToCurrentPolicyForHTTPS
from twisted.web.http import (
    OK, MULTI_STATUS, CREATED, NO_CONTENT, PRECONDITION_FAILED,
    MOVED_PERMANENTLY, FORBIDDEN, FOUND, NOT_FOUND
)
from twisted.web.http_headers import Headers

from twistedcaldav.ical import Component, Property

from urlparse import urlparse, urlunparse, urlsplit, urljoin
from uuid import uuid4
from xml.etree.ElementTree import ElementTree, Element, SubElement, QName
from StringIO import StringIO

import json
import os
import random
import shutil

# The format string below was eaten by tag-stripping; '<QName %r>' is the
# repr format used in the original source.
QName.__repr__ = lambda self: '<QName %r>' % (self.text,)


def loadRequestBody(clientType, label):
    return FilePath(__file__).sibling('request-data').child(clientType).child(label + '.request').getContent()


SUPPORTED_REPORT_SET = '{DAV:}supported-report-set'


class IncorrectResponseCode(Exception):
    """
    Raised when a response has a code other than the one expected.

    @ivar expected: The response codes which was expected.
    @type expected: C{tuple} of C{int}

    @ivar response: The response which was received
    @type response: L{twisted.web.client.Response}

    @ivar responseBody: The body of the received response
    @type responseBody: C{str}
    """
    def __init__(self, expected, response, responseBody):
        self.expected = expected
        self.response = response
        self.responseBody = responseBody


class MissingCalendarHome(Exception):
    """
    Raised when the calendar home for a user is 404
    """


class XMPPPush(object, FancyEqMixin):
    """
    This represents an XMPP PubSub location where push notifications for
    particular calendar home might be received.
    """
    compareAttributes = ('server', 'uri', 'pushkey')

    def __init__(self, server, uri, pushkey):
        self.server = server
        self.uri = uri
        self.pushkey = pushkey


def u2str(data):
    return data.encode("utf-8") if type(data) is unicode else data


class Event(object):
    def __init__(self, serializeBasePath, url, etag, component=None):
        self.serializeBasePath = serializeBasePath
        self.url = url
        self.etag = etag
        self.scheduleTag = None
        if component is not None:
            self.component = component
        self.uid = component.resourceUID() if component is not None else None

    def getUID(self):
        """
        Return the UID of the calendar resource.
        """
        return self.uid

    def serializePath(self):
        if self.serializeBasePath:
            calendar = os.path.join(self.serializeBasePath, self.url.split("/")[-2])
            if not os.path.exists(calendar):
                os.makedirs(calendar)
            return os.path.join(calendar, self.url.split("/")[-1])
        else:
            return None

    def serialize(self):
        """
        Create a dict of the data so we can serialize as JSON.
        """
        result = {}
        for attr in ("url", "etag", "scheduleTag", "uid",):
            result[attr] = getattr(self, attr)
        return result

    @staticmethod
    def deserialize(serializeLocation, data):
        """
        Convert dict (deserialized from JSON) into an L{Event}.
        """
        event = Event(serializeLocation, None, None)
        for attr in ("url", "etag", "scheduleTag", "uid",):
            setattr(event, attr, u2str(data[attr]))
        return event

    @property
    def component(self):
        """
        Data always read from disk - never cached in the object.
        """
        path = self.serializePath()
        if path and os.path.exists(path):
            with open(path) as f:
                comp = Component.fromString(f.read())
            return comp
        else:
            return None

    @component.setter
    def component(self, component):
        """
        Data always written to disk - never cached on the object.
        """
        path = self.serializePath()
        if path:
            if component is None:
                os.remove(path)
            else:
                with open(path, "w") as f:
                    f.write(str(component))
        self.uid = component.resourceUID() if component is not None else None

    def removed(self):
        """
        Resource no longer exists on the server - remove associated data.
        """
        path = self.serializePath()
        if path and os.path.exists(path):
            os.remove(path)


class Calendar(object):
    def __init__(
        self, resourceType, componentTypes, name, url, changeToken,
        shared=False, sharedByMe=False, invitees=None
    ):
        self.resourceType = resourceType
        self.componentTypes = componentTypes
        self.name = name
        self.url = url
        self.changeToken = changeToken
        self.events = {}
        self.shared = shared
        self.sharedByMe = sharedByMe
        self.invitees = invitees if invitees else []
        if self.name is None and self.url is not None:
            self.name = self.url.rstrip("/").split("/")[-1]

    def serialize(self):
        """
        Create a dict of the data so we can serialize as JSON.
        """
        result = {}
        for attr in ("resourceType", "name", "url", "changeToken", "shared", "sharedByMe"):
            result[attr] = getattr(self, attr)
        result["componentTypes"] = list(sorted(self.componentTypes))
        result["events"] = sorted(self.events.keys())
        result["invitees"] = sorted(self.invitees)
        return result

    @staticmethod
    def deserialize(data, events):
        """
        Convert dict (deserialized from JSON) into an L{Calendar}.
        """
        calendar = Calendar(None, None, None, None, None)
        for attr in ("resourceType", "name", "url", "changeToken", "shared", "sharedByMe"):
            setattr(calendar, attr, u2str(data[attr]))
        calendar.componentTypes = set(map(u2str, data["componentTypes"]))
        calendar.invitees = map(u2str, data["invitees"])
        for event in data["events"]:
            url = urljoin(calendar.url, event)
            if url in events:
                calendar.events[event] = events[url]
            else:
                # Ughh - an event is missing - force changeToken to empty to trigger full resync
                calendar.changeToken = ""
        return calendar

    @staticmethod
    def addInviteeXML(uid, summary, readwrite=True):
        return AddInvitees(None, '/', [uid], readwrite, summary=summary).request_data.text

    @staticmethod
    def removeInviteeXML(uid):
        invitee = InviteUser()
        # Usually an InviteUser is populated through .parseFromUser, but we only care about a uid
        invitee.user_uid = uid
        return RemoveInvitee(None, '/', invitee).request_data.text


class NotificationCollection(object):
    def __init__(self, url, changeToken):
        self.url = url
        self.changeToken = changeToken
        self.notifications = {}
        self.name = "notification"

    def serialize(self):
        """
        Create a dict of the data so we can serialize as JSON.
        """
        result = {}
        for attr in ("url", "changeToken"):
            result[attr] = getattr(self, attr)
        result["notifications"] = sorted(self.notifications.keys())
        return result

    @staticmethod
    def deserialize(data, notifications):
        """
        Convert dict (deserialized from JSON) into an L{Calendar}.
        """
        coll = NotificationCollection(None, "")
        for attr in ("url", "changeToken"):
            if attr in data:
                setattr(coll, attr, u2str(data[attr]))
        if "notifications" in data:
            for notification in data["notifications"]:
                url = urljoin(coll.url, notification)
                if url in notifications:
                    coll.notifications[notification] = notifications[url]
                else:
                    # Ughh - a notification is missing - force changeToken to empty to trigger full resync
                    coll.changeToken = ""
        return coll


class BaseClient(object):
    """
    Base interface for all simulated clients.
    """

    user = None                     # User account details
    _events = None                  # Cache of events keyed by href
    _calendars = None               # Cache of calendars keyed by href
    _notificationCollection = None  # Cache of the notification collection
    started = False                 # Whether or not startup() has been executed
    _client_type = None             # Type of this client used in logging
    _client_id = None               # Unique id for the client itself

    def _setEvent(self, href, event):
        """
        Cache the provided event
        """
        self._events[href] = event
        calendar, basePath = href.rsplit('/', 1)
        self._calendars[calendar + '/'].events[basePath] = event

    def _removeEvent(self, href):
        """
        Remove event from local cache.
        """
        self._events[href].removed()
        del self._events[href]
        calendar, basePath = href.rsplit('/', 1)
        del self._calendars[calendar + '/'].events[basePath]

    def eventByHref(self, href):
        """
        Return the locally cached event by its href
        """
        return self._events[href]

    def addEvent(self, href, calendar):
        """
        Called when a profile needs to add an event (no scheduling).
        """
        raise NotImplementedError("%r does not implement addEvent" % (self.__class__,))

    def addInvite(self, href, calendar):
        """
        Called when a profile needs to add a new invite. The iCalendar data
        will already contain ATTENDEEs.
        """
        raise NotImplementedError("%r does not implement addInvite" % (self.__class__,))

    def changeEvent(self, href, calendar):
        """
        Called when a profile needs to change an event (no scheduling).
        """
        raise NotImplementedError("%r does not implement changeEvent" % (self.__class__,))

    def deleteEvent(self, href):
        """
        Called when a profile needs to delete an event.
        """
        raise NotImplementedError("%r does not implement deleteEvent" % (self.__class__,))

    def addEventAttendee(self, href, attendee):
        """
        Called when a profile needs to add an attendee to an existing event.
        """
        raise NotImplementedError("%r does not implement addEventAttendee" % (self.__class__,))

    def changeEventAttendee(self, href, oldAttendee, newAttendee):
        """
        Called when a profile needs to change an attendee on an existing event.
        Used when an attendee is accepting.
        """
        raise NotImplementedError("%r does not implement changeEventAttendee" % (self.__class__,))


class _PubSubClientFactory(PubSubClientFactory):
    """
    Factory for XMPP pubsub functionality.
    """
    def __init__(self, client, *args, **kwargs):
        PubSubClientFactory.__init__(self, *args, **kwargs)
        self._client = client

    def initFailed(self, reason):
        print('XMPP initialization failed', reason)

    def authFailed(self, reason):
        print('XMPP Authentication failed', reason)

    def handleMessageEventItems(self, iq):
        item = iq.firstChildElement().firstChildElement()
        if item:
            node = item.getAttribute("node")
            if node:
                url, _ignore_name, _ignore_kind = self.nodes.get(node, (None, None, None))
                if url is not None:
                    self._client._checkCalendarsForEvents(url, push=True)


class WebClientContextFactory(ClientContextFactory):
    """
    A web context factory which ignores the hostname and port and does no
    certificate verification.
    """
    def getContext(self, hostname, port):
        return ClientContextFactory.getContext(self)


class BaseAppleClient(BaseClient):
    """
    Implementation of common OS X/iOS client behavior.
    """

    _client_type = "Generic"

    _managed_attachments_server_url = None

    USER_AGENT = None   # Override this for specific clients

    # The default interval, used if none is specified in external
    # configuration.
CALENDAR_HOME_POLL_INTERVAL = 15 * 60 # The maximum number of resources to retrieve in a single multiget MULTIGET_BATCH_SIZE = 200 # Override and turn on if client supports Sync REPORT _SYNC_REPORT = False # Override and turn on if client syncs using time-range queries _SYNC_TIMERANGE = False # Override and turn off if client does not support attendee lookups _ATTENDEE_LOOKUPS = True # Request body data _LOAD_PATH = None _STARTUP_WELL_KNOWN = None _STARTUP_PRINCIPAL_PROPFIND_INITIAL = None _STARTUP_PRINCIPAL_PROPFIND = None _STARTUP_PRINCIPALS_REPORT = None _STARTUP_PRINCIPAL_EXPAND = None _STARTUP_PROPPATCH_CALENDAR_COLOR = None _STARTUP_PROPPATCH_CALENDAR_ORDER = None _STARTUP_PROPPATCH_CALENDAR_TIMEZONE = None _POLL_CALENDARHOME_PROPFIND = None _POLL_CALENDAR_PROPFIND = None _POLL_CALENDAR_PROPFIND_D1 = None _POLL_CALENDAR_MULTIGET_REPORT = None _POLL_CALENDAR_MULTIGET_REPORT_HREF = None _POLL_CALENDAR_SYNC_REPORT = None _POLL_NOTIFICATION_PROPFIND = None _POLL_NOTIFICATION_PROPFIND_D1 = None _POLL_INBOX_PROPFIND = None _NOTIFICATION_SYNC_REPORT = None _CALENDARHOME_SYNC_REPORT = None _USER_LIST_PRINCIPAL_PROPERTY_SEARCH = None _POST_AVAILABILITY = None _CALENDARSERVER_PRINCIPAL_SEARCH_REPORT = None email = None def __init__( self, reactor, server, principalPathTemplate, serializePath, record, auth, instanceNumber, title=None, calendarHomePollInterval=None, supportPush=True, supportAmpPush=True, unauthenticatedPercentage=30, ): uuid = uuid4() self._client_id = str(uuid) self._deviceToken = uuid.hex + uuid.hex # 64 character hex for fake APNS token self._instanceNumber = instanceNumber self.reactor = reactor # The server might use gzip encoding agent = Agent( self.reactor, contextFactory=_DeprecatedToCurrentPolicyForHTTPS(WebClientContextFactory()), ) self.noAuthAgent = ContentDecoderAgent(agent, [("gzip", GzipDecoder)]) self.agent = AuthHandlerAgent(self.noAuthAgent, auth) self.unauthenticatedPercentage = unauthenticatedPercentage self.server = server 
self.principalPathTemplate = principalPathTemplate self.record = record self.title = title if title else self._client_type if calendarHomePollInterval is None: calendarHomePollInterval = self.CALENDAR_HOME_POLL_INTERVAL self.calendarHomePollInterval = calendarHomePollInterval self.calendarHomeHref = None self.calendarHomeToken = "" self.supportPush = supportPush self.supportAmpPush = supportAmpPush ampPushHosts = self.server.get("ampPushHosts") if ampPushHosts is None: ampPushHosts = [urlparse(self.server["uri"])[1].split(":")[0]] self.ampPushHosts = ampPushHosts self.ampPushPort = self.server.get("ampPushPort", 62311) self.serializePath = serializePath self.supportSync = self._SYNC_REPORT self.supportNotificationSync = self._NOTIFICATION_SYNC_REPORT self.supportCalendarHomeSync = self._CALENDARHOME_SYNC_REPORT self.supportEnhancedAttendeeAutoComplete = self._CALENDARSERVER_PRINCIPAL_SEARCH_REPORT # Keep track of the calendars on this account, keys are # Calendar URIs, values are Calendar instances. self._calendars = {} # The principalURL found during discovery self.principalURL = None # The principal collection found during startup self.principalCollection = None # Keep track of the events on this account, keys are event # URIs (which are unambiguous across different calendars # because they start with the uri of the calendar they are # part of), values are Event instances. self._events = {} # Keep track of which calendar homes are being polled self._checking = set() # Keep track of XMPP parameters for calendar homes we encounter. This # dictionary has calendar home URLs as keys and XMPPPush instances as # values. self.xmpp = {} self.ampPushKeys = {} self.subscribedAmpPushKeys = set() self._busyWithPush = False # Keep track of push factories so we can unsubscribe at shutdown self._pushFactories = [] # Allow events to go out into the world. 
        self.catalog = {
            "eventChanged": Periodical(),
        }

        # Keep track of previously downloaded attachments
        self._attachments = {}

    def _addDefaultHeaders(self, headers):
        """
        Add the client's default set of headers to the ones being used in a
        request. Default is to add User-Agent; sub-classes should override to
        add other client-specific things (Accept etc.).
        """
        headers.setRawHeaders(
            'User-Agent', ["{} {} {}".format(
                self.USER_AGENT, self.title, self._instanceNumber
            )]
        )

    @inlineCallbacks
    def _request(self, expectedResponseCodes, method, url, headers=None, body=None, method_label=None):
        """
        Execute a request and check against the expected response codes.
        """
        if type(expectedResponseCodes) is int:
            expectedResponseCodes = (expectedResponseCodes,)
        if headers is None:
            headers = Headers({})
        self._addDefaultHeaders(headers)

        msg(
            type="request",
            method=method_label if method_label else method,
            url=url,
            user=self.record.uid,
            client_type="({} {})".format(self.title, self._instanceNumber),
            client_id=self._client_id,
        )

        choice = random.randint(1, 100)
        if choice < self.unauthenticatedPercentage:
            # First send an unauthenticated request
            response = yield self.noAuthAgent.request(method, url, headers, body)
            yield readBody(response)

        before = self.reactor.seconds()
        response = yield self.agent.request(method, url, headers, body)
        responseBody = yield readBody(response)
        after = self.reactor.seconds()

        success = response.code in expectedResponseCodes

        msg(
            type="response",
            success=success,
            method=method_label if method_label else method,
            headers=headers,
            body=body,
            responseBody=responseBody,
            code=response.code,
            user=self.record.uid,
            client_type="({} {})".format(self.title, self._instanceNumber),
            client_id=self._client_id,
            duration=(after - before),
            url=url,
        )

        if success:
            returnValue((response, responseBody))
        raise IncorrectResponseCode(expectedResponseCodes, response, responseBody)

    def _parseMultiStatus(self, responseBody, otherTokens=False):
        """
        Parse a multistatus response body - might need to return other
        top-level elements in the response - e.g. DAV:sync-token from a
        I{PROPFIND} request for the principal URL.

        @type responseBody: C{str}
        @rtype: C{cls}
        """
        parser = PropFindParser()
        try:
            parser.parseData(responseBody)
        except:
            print("=" * 80)
            print("COULD NOT PARSE RESPONSE:")
            print(responseBody)
            print("=" * 80)
            raise
        if otherTokens:
            return (parser.getResults(), parser.getOthers(),)
        else:
            return parser.getResults()

    _CALENDAR_TYPES = set([
        caldavxml.calendar,
        caldavxml.schedule_inbox,
    ])

    @inlineCallbacks
    def _propfind(self, url, body, depth='0', allowedStatus=(MULTI_STATUS,), method_label=None):
        """
        Issue a PROPFIND on the chosen URL
        """
        hdrs = Headers({'content-type': ['text/xml']})
        if depth is not None:
            hdrs.addRawHeader('depth', depth)
        response, responseBody = yield self._request(
            allowedStatus,
            'PROPFIND',
            self.server["uri"] + url.encode('utf-8'),
            hdrs,
            StringProducer(body),
            method_label=method_label,
        )

        result = self._parseMultiStatus(responseBody) if response.code == MULTI_STATUS else None

        returnValue((response, result,))

    @inlineCallbacks
    def _proppatch(self, url, body, method_label=None):
        """
        Issue a PROPPATCH on the chosen URL
        """
        hdrs = Headers({'content-type': ['text/xml']})
        response, responseBody = yield self._request(
            (OK, MULTI_STATUS,),
            'PROPPATCH',
            self.server["uri"] + url.encode('utf-8'),
            hdrs,
            StringProducer(body),
            method_label=method_label,
        )
        if response.code == MULTI_STATUS:
            result = self._parseMultiStatus(responseBody)
            returnValue(result)
        else:
            returnValue(None)

    @inlineCallbacks
    def _get(self, url, allowedStatus=(MULTI_STATUS,), method_label=None):
        """
        Issue a GET on the chosen URL
        """
        _ignore_response, responseBody = yield self._request(
            allowedStatus,
            'GET',
            url,
            method_label=method_label,
        )
        returnValue(responseBody)

    @inlineCallbacks
    def _report(self, url, body, depth='0', allowedStatus=(MULTI_STATUS,), otherTokens=False, method_label=None):
        """
        Issue a REPORT on the chosen URL
        """
        hdrs = Headers({'content-type': ['text/xml']})
        if depth is not None:
            hdrs.addRawHeader('depth',
depth) response, responseBody = yield self._request( allowedStatus, 'REPORT', self.server["uri"] + url.encode('utf-8'), hdrs, StringProducer(body), method_label=method_label, ) result = self._parseMultiStatus(responseBody, otherTokens) if response.code == MULTI_STATUS else None returnValue(result) @inlineCallbacks def _startupPropfindWellKnown(self): """ Issue a PROPFIND on the /.well-known/caldav/ URL """ location = "/.well-known/caldav/" response, result = yield self._propfind( location, self._STARTUP_WELL_KNOWN, allowedStatus=(MULTI_STATUS, MOVED_PERMANENTLY, FOUND,), method_label="PROPFIND{well-known}", ) # Follow any redirect if response.code in (MOVED_PERMANENTLY, FOUND,): location = response.headers.getRawHeaders("location")[0] location = urlsplit(location)[2] response, result = yield self._propfind( location, self._STARTUP_WELL_KNOWN, allowedStatus=(MULTI_STATUS), method_label="PROPFIND{well-known}", ) returnValue(result[location]) @inlineCallbacks def _principalPropfindInitial(self, user): """ Issue a PROPFIND on the /principals/users/ URL to retrieve the /principals/__uids__/ principal URL """ principalPath = self.principalPathTemplate % (user,) _ignore_response, result = yield self._propfind( principalPath, self._STARTUP_PRINCIPAL_PROPFIND_INITIAL, method_label="PROPFIND{find-principal}", ) returnValue(result[principalPath]) @inlineCallbacks def _principalPropfind(self): """ Issue a PROPFIND on the likely principal URL for the given user and return a L{Principal} instance constructed from the response. 
""" _ignore_response, result = yield self._propfind( self.principalURL, self._STARTUP_PRINCIPAL_PROPFIND, method_label="PROPFIND{principal}", ) returnValue(result[self.principalURL]) def _principalSearchPropertySetReport(self, principalCollectionSet): """ Issue a principal-search-property-set REPORT against the chosen URL """ return self._report( principalCollectionSet, self._STARTUP_PRINCIPALS_REPORT, allowedStatus=(OK,), method_label="REPORT{pset}", ) @inlineCallbacks def _calendarHomePropfind(self, calendarHomeSet): """ Do the poll Depth:1 PROPFIND on the calendar home. """ if not calendarHomeSet.endswith('/'): calendarHomeSet = calendarHomeSet + '/' _ignore_response, result = yield self._propfind( calendarHomeSet, self._POLL_CALENDARHOME_PROPFIND, depth='1', method_label="PROPFIND{home}", ) calendars, notificationCollection, calendarHomeToken = self._extractCalendars( result, calendarHomeSet ) returnValue((calendars, notificationCollection, calendarHomeToken, result,)) @inlineCallbacks def _calendarHomeSync(self, calendarHomeSet): if not calendarHomeSet.endswith('/'): calendarHomeSet = calendarHomeSet + '/' result = yield self._report( calendarHomeSet, self._CALENDARHOME_SYNC_REPORT % {'sync-token': self.calendarHomeToken}, depth='1', allowedStatus=(MULTI_STATUS, FORBIDDEN,), otherTokens=True, method_label="REPORT{sync-home}", ) responseHrefs, others = result calendars, notificationCollection, ignored = self._extractCalendars( responseHrefs, calendarHomeSet ) # if notificationCollection is None that means it hasn't changed, so # we'll return the notificationCollection we already have if not notificationCollection: notificationCollection = self._notificationCollection # by default we'll keep our old calendars... 
        newCalendars = {}
        for href, oldCalendar in self._calendars.iteritems():
            newCalendars[href] = oldCalendar

        # ...but if a calendar shows up from _extractCalendars we know it's
        # either new or modified, so replace our old copy with the new
        for cal in calendars:
            # print("SYNC DETECTED CHANGE IN", cal.url)
            newCalendars[cal.url] = cal

        # Now scan through the responses for 404s which let us know a collection
        # has been removed
        for responseHref in responseHrefs:
            if responseHrefs[responseHref].getStatus() == 404:
                if responseHref in newCalendars:
                    del newCalendars[responseHref]
                    # print("DELETED CALENDAR", responseHref)

        # get the new token
        newCalendarHomeToken = ""
        for other in others:
            if other.tag == davxml.sync_token:
                newCalendarHomeToken = other.text
                break

        returnValue((newCalendars.values(), notificationCollection, newCalendarHomeToken))

    def timeRangeQuery(self, url, start, end):
        # Standard CalDAV (RFC 4791) calendar-query REPORT body restricted to
        # a VEVENT time range (reconstructed template - the original markup
        # was lost from this copy of the file).
        requestBody = """<?xml version="1.0" encoding="utf-8"?>
<C:calendar-query xmlns:D="DAV:" xmlns:C="urn:ietf:params:xml:ns:caldav">
  <D:prop>
    <D:getetag/>
    <C:calendar-data/>
  </D:prop>
  <C:filter>
    <C:comp-filter name="VCALENDAR">
      <C:comp-filter name="VEVENT">
        <C:time-range start="{start}" end="{end}"/>
      </C:comp-filter>
    </C:comp-filter>
  </C:filter>
</C:calendar-query>
""".format(start=start, end=end)
        return self._report(
            url,
            requestBody,
            allowedStatus=(MULTI_STATUS,),
            method_label="REPORT{calendar-query}"
        )

    @inlineCallbacks
    def deepRefresh(self):
        calendars, _ignore_notificationCollection, _ignore_calendarHomeToken, _ignore_results = yield self._calendarHomePropfind(self.calendarHomeHref)
        for calendar in calendars:
            yield self._propfind(
                calendar.url,
                self._POLL_CALENDAR_PROPFIND_D1,
                depth='1',
                method_label="PROPFIND{calendar}"
            )

    @inlineCallbacks
    def apnsSubscribe(self):
        url = "{}/apns/".format(self.server["uri"])
        headers = Headers({
            'Content-Type': ['application/x-www-form-urlencoded']
        })
        content = "token={}&key=/CalDAV/example".format(self._deviceToken)
        _ignore_response, _ignore_responseBody = yield self._request(
            OK,
            'POST',
            url,
            headers=headers,
            body=StringProducer(content),
            method_label="POST{apns}"
        )

    @inlineCallbacks
    def unshareAll(self):
        for calendar in self._calendars.values():
            for invitee in calendar.invitees:
                body = Calendar.removeInviteeXML(invitee)
                yield self.postXML(
                    calendar.url,
                    body,
                    label="POST{unshare-calendar}"
                )
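The cache-merge step in `_calendarHomeSync` above (keep the cached calendars, overlay collections the sync REPORT returned as new or modified, drop the ones it marked 404) can be sketched standalone. `merge_calendars` and the plain-string values below are illustrative stand-ins, not the real `Calendar` objects:

```python
def merge_calendars(cached, changed, deleted_hrefs):
    """Keep cached entries, overlay new/modified ones, drop deleted hrefs."""
    merged = dict(cached)        # start from everything we already know about
    for href, cal in changed.items():
        merged[href] = cal       # new or modified collections replace old copies
    for href in deleted_hrefs:
        merged.pop(href, None)   # a 404 in the sync response means it was removed
    return merged

cached = {"/cal/A/": "A-v1", "/cal/B/": "B-v1"}
changed = {"/cal/B/": "B-v2", "/cal/C/": "C-v1"}
merged = merge_calendars(cached, changed, {"/cal/A/"})
# merged == {"/cal/B/": "B-v2", "/cal/C/": "C-v1"}
```

Note the original cache is left untouched; a fresh dict is returned, mirroring how the real code builds `newCalendars` before swapping it in.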
@inlineCallbacks def _extractPrincipalDetails(self): # Using the actual principal URL, retrieve principal information principal = yield self._principalPropfind() hrefs = principal.getHrefProperties() # Remember our outbox and ignore notifications self.outbox = hrefs[caldavxml.schedule_outbox_URL].toString() self.notificationURL = None # Remember our own email-like principal address self.email = None self.uuid = None cuaddrs = hrefs[caldavxml.calendar_user_address_set] if isinstance(cuaddrs, URL): cuaddrs = (cuaddrs,) for cuaddr in cuaddrs: if cuaddr.toString().startswith(u"mailto:"): self.email = cuaddr.toString() elif cuaddr.toString().startswith(u"urn:x-uid"): self.uuid = cuaddr.toString() elif cuaddr.toString().startswith(u"urn:uuid") and self.uuid is None: self.uuid = cuaddr.toString() if self.email is None: raise ValueError("Cannot operate without a mail-style principal URL") # Do another kind of thing I guess self.principalCollection = hrefs[davxml.principal_collection_set].toString() yield self._principalSearchPropertySetReport(self.principalCollection) returnValue(principal) def _extractCalendars(self, results, calendarHome=None): """ Parse a calendar home PROPFIND response and create local state representing the calendars it contains. If XMPP push is enabled, also look for and record information about that from the response. 
""" calendars = [] notificationCollection = None calendarHomeToken = "" changeTag = davxml.sync_token if self.supportSync else csxml.getctag for href in results: if href == calendarHome: text = results[href].getTextProperties() hrefs = results[href].getHrefProperties() # Extract managed attachments url self._managed_attachments_server_url = None try: url = hrefs[caldavxml.managed_attachments_server_url] if url.toString(): self._managed_attachments_server_url = url.toString() except KeyError: pass if self._managed_attachments_server_url is None: self._managed_attachments_server_url = self.server['uri'] try: pushkey = text[csxml.pushkey] except KeyError: pass else: if pushkey: self.ampPushKeys[href] = pushkey try: server = text[csxml.xmpp_server] uri = text[csxml.xmpp_uri] pushkey = text[csxml.pushkey] except KeyError: pass else: if server and uri: self.xmpp[href] = XMPPPush(server, uri, pushkey) try: calendarHomeToken = text[davxml.sync_token] except KeyError: pass if results[href].getStatus() == 404: continue nodes = results[href].getNodeProperties() isCalendar = False isNotifications = False isShared = False isSharedByMe = False resourceType = None for nodeType in nodes[davxml.resourcetype]: if nodeType.tag in self._CALENDAR_TYPES: isCalendar = True resourceType = nodeType.tag elif nodeType.tag == csxml.notification: isNotifications = True elif nodeType.tag.startswith("{http://calendarserver.org/ns/}shared"): isShared = True if nodeType.tag == "{http://calendarserver.org/ns/}shared-owner": isSharedByMe = True if isCalendar: textProps = results[href].getTextProperties() componentTypes = set() if caldavxml.supported_calendar_component_set in nodes: for comp in nodes[caldavxml.supported_calendar_component_set]: componentTypes.add(comp.get("name").upper()) # Keep track of sharing invitees invitees = [] if isShared and isSharedByMe: try: invite = nodes[csxml.invite] for user in invite.findall(str(csxml.user)): invitee = user.find(str(davxml.href)).text 
invitees.append(invitee) except KeyError: pass calendars.append(Calendar( resourceType, componentTypes, textProps.get(davxml.displayname, None), href, textProps.get(changeTag, None), shared=isShared, sharedByMe=isSharedByMe, invitees=invitees )) # Also monitor shared-to-me calendars if isShared and not isSharedByMe: try: pushkey = textProps[csxml.pushkey] except KeyError: pass else: if pushkey: self.ampPushKeys[href] = pushkey elif isNotifications: textProps = results[href].getTextProperties() notificationCollection = NotificationCollection( href, textProps.get(changeTag, None) ) return calendars, notificationCollection, calendarHomeToken def _updateCalendar(self, calendar, newToken, fetchEvents=True): """ Update the local cached data for a calendar in an appropriate manner. """ if self.supportSync: return self._updateCalendar_SYNC(calendar, newToken, fetchEvents=fetchEvents) else: return self._updateCalendar_PROPFIND(calendar, newToken) @inlineCallbacks def _updateCalendar_PROPFIND(self, calendar, newToken): """ Sync a collection by doing a full PROPFIND Depth:1 on it and then sync the results with local cached data. """ # Grab old hrefs prior to the PROPFIND so we sync with the old state. We need this because # the sim can fire a PUT between the PROPFIND and when process the removals. old_hrefs = set([calendar.url + child for child in calendar.events.keys()]) _ignore_response, result = yield self._propfind( calendar.url, self._POLL_CALENDAR_PROPFIND_D1, depth='1', method_label="PROPFIND{calendar}" ) yield self._updateApplyChanges(calendar, result, old_hrefs) # Now update calendar to the new token self._calendars[calendar.url].changeToken = newToken @inlineCallbacks def _updateCalendar_SYNC(self, calendar, newToken, fetchEvents=True): """ Execute a sync REPORT against a calendar and apply changes to the local cache. The new token from the changed collection is passed in and must be applied to the existing calendar once sync is done. 
""" # Grab old hrefs prior to the REPORT so we sync with the old state. We need this because # the sim can fire a PUT between the REPORT and when process the removals. old_hrefs = set([calendar.url + child for child in calendar.events.keys()]) # Get changes from sync REPORT (including the other nodes at the top-level # which will have the new sync token. fullSync = not calendar.changeToken result = yield self._report( calendar.url, self._POLL_CALENDAR_SYNC_REPORT % {'sync-token': calendar.changeToken}, depth='1', allowedStatus=(MULTI_STATUS, FORBIDDEN,), otherTokens=True, method_label="REPORT{sync-coll}" if calendar.changeToken else "REPORT{sync-coll-init}", ) if result is None: if not fullSync: fullSync = True result = yield self._report( calendar.url, self._POLL_CALENDAR_SYNC_REPORT % {'sync-token': ''}, depth='1', otherTokens=True, method_label="REPORT{sync-coll}" if calendar.changeToken else "REPORT{sync-coll-init}", ) else: raise IncorrectResponseCode((MULTI_STATUS,), None, None) result, others = result if fetchEvents: changed = [] for responseHref in result: if responseHref == calendar.url: continue try: etag = result[responseHref].getTextProperties()[davxml.getetag] except KeyError: # XXX Ignore things with no etag? Seems to be dropbox. 
continue # Differentiate a remove vs new/update result if result[responseHref].getStatus() / 100 == 2: if responseHref not in self._events: self._setEvent(responseHref, Event(self.serializeLocation(), responseHref, None)) event = self._events[responseHref] if event.etag != etag: changed.append(responseHref) elif result[responseHref].getStatus() == 404: self._removeEvent(responseHref) yield self._updateChangedEvents(calendar, changed) # Handle removals only when doing an initial sync if fullSync: # Detect removed items and purge them remove_hrefs = old_hrefs - set(changed) for href in remove_hrefs: self._removeEvent(href) # Now update calendar to the new token taken from the report for node in others: if node.tag == davxml.sync_token: newToken = node.text break self._calendars[calendar.url].changeToken = newToken @inlineCallbacks def _updateApplyChanges(self, calendar, multistatus, old_hrefs): """ Given a multistatus for an entire collection, sync the reported items against the cached items. """ # Detect changes and new items all_hrefs = [] changed_hrefs = [] for responseHref in multistatus: if responseHref == calendar.url: continue all_hrefs.append(responseHref) try: etag = multistatus[responseHref].getTextProperties()[davxml.getetag] except KeyError: # XXX Ignore things with no etag? Seems to be dropbox. continue if responseHref not in self._events: self._setEvent(responseHref, Event(self.serializeLocation(), responseHref, None)) event = self._events[responseHref] if event.etag != etag: changed_hrefs.append(responseHref) # Retrieve changes yield self._updateChangedEvents(calendar, changed_hrefs) # Detect removed items and purge them remove_hrefs = old_hrefs - set(all_hrefs) for href in remove_hrefs: self._removeEvent(href) @inlineCallbacks def _updateChangedEvents(self, calendar, changed): """ Given a set of changed hrefs, batch multiget them all to update the local cache. 
""" changed.sort() while changed: batchedHrefs = changed[:self.MULTIGET_BATCH_SIZE] changed = changed[self.MULTIGET_BATCH_SIZE:] multistatus = yield self._eventReport(calendar.url, batchedHrefs) for responseHref in batchedHrefs: try: res = multistatus[responseHref] except KeyError: # Resource might have been deleted continue if res.getStatus() == 200: text = res.getTextProperties() etag = text[davxml.getetag] try: scheduleTag = text[caldavxml.schedule_tag] except KeyError: scheduleTag = None body = text[caldavxml.calendar_data] self.eventChanged(responseHref, etag, scheduleTag, body) def eventChanged(self, href, etag, scheduleTag, body): event = self._events[href] event.etag = etag if scheduleTag is not None: event.scheduleTag = scheduleTag event.component = Component.fromString(body) self.catalog["eventChanged"].issue(href) def _eventReport(self, calendar, events): # Next do a REPORT on events that might have information # we don't know about. hrefs = "".join([self._POLL_CALENDAR_MULTIGET_REPORT_HREF % {'href': event} for event in events]) label_suffix = "small" if len(events) > 5: label_suffix = "medium" if len(events) > 20: label_suffix = "large" if len(events) > 75: label_suffix = "huge" return self._report( calendar, self._POLL_CALENDAR_MULTIGET_REPORT % {'hrefs': hrefs}, depth=None, method_label="REPORT{multiget-%s}" % (label_suffix,), ) @inlineCallbacks def _checkCalendarsForEvents(self, calendarHomeSet, firstTime=False, push=False): """ The actions a client does when polling for changes, or in response to a push notification of a change. There are some actions done on the first poll we should emulate. 
""" result = True try: result = yield self._newOperation("push" if push else "poll", self._poll(calendarHomeSet, firstTime)) finally: if result: try: self._checking.remove(calendarHomeSet) except KeyError: pass returnValue(result) @inlineCallbacks def _updateNotifications(self, oldToken, newToken): fullSync = not oldToken # Get the list of notification xml resources result = yield self._report( self._notificationCollection.url, self._NOTIFICATION_SYNC_REPORT % {'sync-token': oldToken}, depth='1', allowedStatus=(MULTI_STATUS, FORBIDDEN,), otherTokens=True, method_label="REPORT{sync-notif}" if oldToken else "REPORT{sync-notif-init}", ) if result is None: if not fullSync: fullSync = True result = yield self._report( self._notificationCollection.url, self._NOTIFICATION_SYNC_REPORT % {'sync-token': ''}, depth='1', otherTokens=True, method_label="REPORT{sync-notif}" if oldToken else "REPORT{sync-notif-init}", ) else: raise IncorrectResponseCode((MULTI_STATUS,), None, None) result, _ignore_others = result # Scan for the sharing invites inviteNotifications = [] toDelete = [] for responseHref in result: if responseHref == self._notificationCollection.url: continue # try: # etag = result[responseHref].getTextProperties()[davxml.getetag] # except KeyError: # # XXX Ignore things with no etag? Seems to be dropbox. 
#                 continue
            toDelete.append(responseHref)
            if result[responseHref].getStatus() / 100 == 2:
                # Get the notification
                response, responseBody = yield self._request(
                    (OK, NOT_FOUND),
                    'GET',
                    self.server["uri"] + responseHref.encode('utf-8'),
                    method_label="GET{notification}",
                )
                if response.code == OK:
                    node = ElementTree(file=StringIO(responseBody)).getroot()
                    if node.tag == str(csxml.notification):
                        nurl = URL(url=responseHref)
                        for child in node.getchildren():
                            if child.tag == str(csxml.invite_notification):
                                if child.find(str(csxml.invite_noresponse)) is not None:
                                    inviteNotifications.append(
                                        InviteNotification().parseFromNotification(
                                            nurl, child
                                        )
                                    )

        # Accept the invites
        for notification in inviteNotifications:
            # Create an invite-reply; the body built below has this shape
            # (reconstructed sample - the markup was stripped from this copy):
            """
            <invite-reply xmlns='http://calendarserver.org/ns/'>
              <href xmlns='DAV:'>urn:x-uid:10000000-0000-0000-0000-000000000002</href>
              <invite-accepted/>
              <hosturl>
                <href xmlns='DAV:'>/calendars/__uids__/10000000-0000-0000-0000-000000000001/A1DDC58B-651E-4B1C-872A-C6588CA09ADB</href>
              </hosturl>
              <in-reply-to>d2683fa9-7a50-4390-82bb-cbcea5e0fa86</in-reply-to>
              <summary>to share</summary>
            </invite-reply>
            """
            reply = Element(csxml.invite_reply)
            href = SubElement(reply, davxml.href)
            href.text = notification.user_uid
            SubElement(reply, csxml.invite_accepted)
            hosturl = SubElement(reply, csxml.hosturl)
            href = SubElement(hosturl, davxml.href)
            href.text = notification.hosturl
            inReplyTo = SubElement(reply, csxml.in_reply_to)
            inReplyTo.text = notification.uid
            summary = SubElement(reply, csxml.summary)
            summary.text = notification.summary

            xmldoc = BetterElementTree(reply)
            os = StringIO()
            xmldoc.writeUTF8(os)

            # Post to my calendar home
            response = yield self.postXML(
                self.calendarHomeHref,
                os.getvalue(),
                "POST{invite-accept}"
            )

        # Delete all the notification resources
        for responseHref in toDelete:
            response, responseBody = yield self._request(
                (NO_CONTENT, NOT_FOUND),
                'DELETE',
                self.server["uri"] + responseHref.encode('utf-8'),
                method_label="DELETE{invite}",
            )

        self._notificationCollection.changeToken = newToken

    @inlineCallbacks
    def _poll(self, calendarHomeSet, firstTime):
        """
        This gets called during a normal poll or in response to a push
        """
        if calendarHomeSet in
self._checking: returnValue(False) self._checking.add(calendarHomeSet) if firstTime or not self.calendarHomeToken: calendars, notificationCollection, newCalendarHomeToken, results = yield self._calendarHomePropfind(calendarHomeSet) else: calendars, notificationCollection, newCalendarHomeToken = yield self._calendarHomeSync(calendarHomeSet) results = None self.calendarHomeToken = newCalendarHomeToken # First time operations if firstTime and results: yield self._pollFirstTime1(results[calendarHomeSet], calendars) # Normal poll for cal in calendars: newToken = cal.changeToken if cal.url not in self._calendars: # Calendar seen for the first time - reload it self._calendars[cal.url] = cal cal.changeToken = "" # If this is the first time this run and we have no cached copy # of this calendar, do an update but don't fetch the events. # We'll only take notice of ongoing activity and not bother # with existing events. yield self._updateCalendar( self._calendars[cal.url], newToken, fetchEvents=(not firstTime) ) elif self._calendars[cal.url].changeToken != newToken: # Calendar changed - reload it yield self._updateCalendar(self._calendars[cal.url], newToken) self._calendars[cal.url].invitees = cal.invitees if self._instanceNumber == 0 and cal.name == "inbox": yield self._inboxPropfind() # Clean out previously seen collections that are no longer on the server currentCalendarUris = [c.url for c in calendars] for previouslySeenCalendarUri in self._calendars.keys(): if previouslySeenCalendarUri not in currentCalendarUris: calendarToDelete = self._calendars[previouslySeenCalendarUri] for eventUri in calendarToDelete.events.keys(): eventUri = urljoin(previouslySeenCalendarUri, eventUri) self._removeEvent(eventUri) del self._calendars[previouslySeenCalendarUri] if notificationCollection is not None: if self.supportNotificationSync: if self._notificationCollection: oldToken = self._notificationCollection.changeToken else: oldToken = "" self._notificationCollection = 
notificationCollection newToken = notificationCollection.changeToken if oldToken != newToken: yield self._updateNotifications(oldToken, newToken) else: # When there is no sync REPORT, clients have to do a full PROPFIND # on the notification collection because there is no ctag self._notificationCollection = notificationCollection yield self._notificationPropfind(self._notificationCollection.url) yield self._notificationChangesPropfind(self._notificationCollection.url) # One time delegate expansion if firstTime: yield self._pollFirstTime2() pushKeys = self.ampPushKeys.values() self._monitorAmpPush(calendarHomeSet, pushKeys) # if firstTime: # # clear out shared calendars # yield self.unshareAll() returnValue(True) @inlineCallbacks def _pollFirstTime1(self, homeNode, calendars): # Detect sync report if needed if self.supportSync: nodes = homeNode.getNodeProperties() syncnodes = nodes[davxml.supported_report_set].findall( str(davxml.supported_report) + "/" + str(davxml.report) + "/" + str(davxml.sync_collection) ) self.supportSync = len(syncnodes) != 0 # Patch calendar properties for cal in calendars: if cal.name != "inbox": yield self._proppatch( cal.url, self._STARTUP_PROPPATCH_CALENDAR_COLOR, method_label="PROPPATCH{calendar}", ) yield self._proppatch( cal.url, self._STARTUP_PROPPATCH_CALENDAR_ORDER, method_label="PROPPATCH{calendar}", ) yield self._proppatch( cal.url, self._STARTUP_PROPPATCH_CALENDAR_TIMEZONE, method_label="PROPPATCH{calendar}", ) def _pollFirstTime2(self): return self._principalExpand(self.principalURL) @inlineCallbacks def _notificationPropfind(self, notificationURL): _ignore_response, result = yield self._propfind( notificationURL, self._POLL_NOTIFICATION_PROPFIND, method_label="PROPFIND{notification}", ) returnValue(result) @inlineCallbacks def _notificationChangesPropfind(self, notificationURL): _ignore_response, result = yield self._propfind( notificationURL, self._POLL_NOTIFICATION_PROPFIND_D1, depth='1', 
            method_label="PROPFIND{notification-items}",
        )
        returnValue(result)

    @inlineCallbacks
    def _inboxPropfind(self):
        for cal in self._calendars.itervalues():
            if cal.name == "inbox":
                inboxURL = cal.url
                yield self._propfind(
                    inboxURL,
                    self._POLL_INBOX_PROPFIND,
                    depth='1',
                    method_label="PROPFIND{inbox}",
                )
                break

    @inlineCallbacks
    def _principalExpand(self, principalURL):
        result = yield self._report(
            principalURL,
            self._STARTUP_PRINCIPAL_EXPAND,
            depth=None,
            method_label="REPORT{expand}",
        )
        returnValue(result)

    def startup(self):
        raise NotImplementedError

    def _calendarCheckLoop(self, calendarHome):
        """
        Periodically check the calendar home for changes to calendars.
        """
        pollCalendarHome = LoopingCall(
            self._checkCalendarsForEvents, calendarHome)
        return pollCalendarHome.start(self.calendarHomePollInterval, now=False)

    @inlineCallbacks
    def _newOperation(self, label, deferred):
        before = self.reactor.seconds()
        msg(
            type="operation",
            phase="start",
            user=self.record.uid,
            client_type="({} {})".format(self.title, self._instanceNumber),
            client_id=self._client_id,
            label=label,
        )

        try:
            result = yield deferred
        except IncorrectResponseCode:
            # Let this through
            success = False
            result = None
        except:
            # Anything else is fatal
            raise
        else:
            success = True

        after = self.reactor.seconds()
        msg(
            type="operation",
            phase="end",
            duration=after - before,
            user=self.record.uid,
            client_type="({} {})".format(self.title, self._instanceNumber),
            client_id=self._client_id,
            label=label,
            success=success,
        )
        returnValue(result)

    def _monitorPubSub(self, home, params):
        """
        Start monitoring the
        """
        host, port = params.server.split(':')
        port = int(port)

        service, _ignore_stuff = params.uri.split('?')
        service = service.split(':', 1)[1]

        # XXX What is the domain of the 2nd argument supposed to be?  The
        # hostname we use to connect, or the same as the email address in the
        # user record?
        factory = _PubSubClientFactory(
            self, "%s@%s" % (self.record.uid, host),
            self.record.password, service,
            {params.pushkey: (home, home, "Calendar home")}, False,
            sigint=False)
        self._pushFactories.append(factory)
        connect(GAIEndpoint(self.reactor, host, port), factory)

    @inlineCallbacks
    def _receivedPush(self, inboundID, dataChangedTimestamp, priority=5):
        if not self._busyWithPush:
            self._busyWithPush = True
            try:
                for _ignore_href, myId in self.ampPushKeys.iteritems():
                    if inboundID == myId:
                        yield self._checkCalendarsForEvents(self.calendarHomeHref, push=True)
                        break
                else:
                    # somehow we are not subscribed to this id
                    pass
            finally:
                self._busyWithPush = False

    def _monitorAmpPush(self, home, pushKeys):
        """
        Start monitoring for AMP-based push notifications
        """
        # Only subscribe to keys we haven't previously subscribed to
        subscribeTo = []
        for pushKey in pushKeys:
            if pushKey not in self.subscribedAmpPushKeys:
                subscribeTo.append(pushKey)
                self.subscribedAmpPushKeys.add(pushKey)

        if subscribeTo:
            AMPHub.subscribeToIDs(subscribeTo, self._receivedPush)

    @inlineCallbacks
    def _unsubscribePubSub(self):
        for factory in self._pushFactories:
            yield factory.unsubscribeAll()

    @inlineCallbacks
    def run(self):
        """
        Emulate a CalDAV client.
        """
        @inlineCallbacks
        def startup():
            principal = yield self.startup()
            hrefs = principal.getHrefProperties()
            calendarHome = hrefs[caldavxml.calendar_home_set].toString()
            if calendarHome is None:
                raise MissingCalendarHome
            else:
                self.calendarHomeHref = calendarHome

            yield self._checkCalendarsForEvents(
                calendarHome,
                # If we already have calendarHomeToken from deserialize,
                # it's not our first time
                firstTime=(not self.calendarHomeToken)
            )
            returnValue(calendarHome)

        calendarHome = yield self._newOperation("startup", startup())

        self.started = True

        # Start monitoring PubSub notifications, if possible.
        # _checkCalendarsForEvents populates self.xmpp if it finds
        # anything.
        if self.supportPush and calendarHome in self.xmpp:
            self._monitorPubSub(calendarHome, self.xmpp[calendarHome])
            # Run indefinitely.
            yield Deferred()
        elif self.supportAmpPush and calendarHome in self.ampPushKeys:
            pushKeys = self.ampPushKeys.values()
            self._monitorAmpPush(calendarHome, pushKeys)
            # Run indefinitely.
            yield Deferred()
        else:
            # This completes when the calendar home poll loop completes, which
            # currently it never will except due to an unexpected error.
            yield self._calendarCheckLoop(calendarHome)

    def stop(self):
        """
        Called before connections are closed, giving a chance to clean up
        """
        self.serialize()
        return self._unsubscribePubSub()

    def serializeLocation(self):
        """
        Return the path to the directory where data for this user is serialized.
        """
        if self.serializePath is None or not os.path.isdir(self.serializePath):
            return None

        key = "%s-%s" % (self.record.uid, self.title.replace(" ", "_"))
        path = os.path.join(self.serializePath, key)
        if not os.path.exists(path):
            os.mkdir(path)
        elif not os.path.isdir(path):
            return None

        return path

    def serialize(self):
        """
        Write current state to disk.
        """
        path = self.serializeLocation()
        if path is None:
            return

        # Create dict for all the data we need to store
        data = {
            "homeToken": self.calendarHomeToken,
            "attachmentsUrl": self._managed_attachments_server_url,
            "principalURL": self.principalURL,
            "calendars": [calendar.serialize() for calendar in sorted(self._calendars.values(), key=lambda x: x.name)],
            "events": [event.serialize() for event in sorted(self._events.values(), key=lambda x: x.url)],
            "notificationCollection": self._notificationCollection.serialize() if self._notificationCollection else {},
            "attachments": self._attachments
        }

        # Write JSON data
        with open(os.path.join(path, "index.json"), "w") as f:
            json.dump(data, f, indent=2)

    def deserialize(self):
        """
        Read state from disk.
        """
        self._calendars = {}
        self._events = {}
        self._attachments = {}

        path = self.serializeLocation()
        if path is None:
            return

        # Parse JSON data for calendars
        try:
            with open(os.path.join(path, "index.json")) as f:
                data = json.load(f)
        except IOError:
            return

        self.principalURL = data["principalURL"]
        self.calendarHomeToken = data["homeToken"].encode("utf-8")
        self._managed_attachments_server_url = data["attachmentsUrl"].encode("utf-8")

        # Extract all the events first, then do the calendars (which reference the events)
        for event in data["events"]:
            event = Event.deserialize(self.serializeLocation(), event)
            self._events[event.url] = event
        for calendar in data["calendars"]:
            calendar = Calendar.deserialize(calendar, self._events)
            self._calendars[calendar.url] = calendar
        if data.get("notificationCollection"):
            self._notificationCollection = NotificationCollection.deserialize(data, {})
        self._attachments = data.get("attachments", {})

    @inlineCallbacks
    def reset(self):
        path = self.serializeLocation()
        if path is not None and os.path.exists(path):
            shutil.rmtree(path)
        yield self.startup()
        yield self._checkCalendarsForEvents(self.calendarHomeHref)

    def _makeSelfAttendee(self):
        attendee = Property(
            name=u'ATTENDEE',
            value=self.email,
            params={
                'CN': self.record.commonName,
                'CUTYPE': 'INDIVIDUAL',
                'PARTSTAT': 'ACCEPTED',
            },
        )
        return attendee

    def _makeSelfOrganizer(self):
        organizer = Property(
            name=u'ORGANIZER',
            value=self.email,
            params={
                'CN': self.record.commonName,
            },
        )
        return organizer

    @inlineCallbacks
    def addEventAttendee(self, href, attendee):
        event = self._events[href]
        component = event.component

        # Trigger auto-complete behavior
        yield self._attendeeAutoComplete(component, attendee)

        # If the event has no attendees, add ourselves as an attendee.
        attendees = list(component.mainComponent().properties('ATTENDEE'))
        if len(attendees) == 0:
            # First add ourselves as a participant and as the
            # organizer.  In the future for this event we should
            # already have those roles.
            component.mainComponent().addProperty(self._makeSelfOrganizer())
            component.mainComponent().addProperty(self._makeSelfAttendee())
        attendees.append(attendee)
        component.mainComponent().addProperty(attendee)

        label_suffix = "small"
        if len(attendees) > 5:
            label_suffix = "medium"
        if len(attendees) > 20:
            label_suffix = "large"
        if len(attendees) > 75:
            label_suffix = "huge"

        headers = Headers({
            'content-type': ['text/calendar'],
            'if-match': [event.etag]})
        if event.etag is None:
            headers.removeHeader('if-match')

        # At last, upload the new event definition
        response, _ignore_responseBody = yield self._request(
            (NO_CONTENT, PRECONDITION_FAILED,),
            'PUT',
            self.server["uri"] + href.encode('utf-8'),
            headers,
            StringProducer(component.getTextWithTimezones(includeTimezones=True)),
            method_label="PUT{organizer-%s}" % (label_suffix,)
        )

        # Finally, re-retrieve the event to update the etag
        if not response.headers.hasHeader("etag"):
            response = yield self.updateEvent(href)

    @inlineCallbacks
    def _attendeeAutoComplete(self, component, attendee):
        if self._ATTENDEE_LOOKUPS:
            # Temporarily use some non-test names (some which will return
            # many results, and others which will return fewer) because the
            # test account names are all too similar
            # name = attendee.parameterValue('CN').encode("utf-8")
            # prefix = name[:4].lower()
            prefix = random.choice([
                "chris", "cyru", "dre", "eric", "morg",
                "well", "wilfr", "witz"
            ])

            email = attendee.value()
            if email.startswith("mailto:"):
                email = email[7:]
            elif attendee.hasParameter('EMAIL'):
                email = attendee.parameterValue('EMAIL').encode("utf-8")

            if self.supportEnhancedAttendeeAutoComplete:
                # New calendar server enhanced query
                search = "{}".format(prefix)
                body = self._CALENDARSERVER_PRINCIPAL_SEARCH_REPORT.format(
                    context="attendee", searchTokens=search)
                yield self._report(
                    self.principalCollection,
                    body,
                    depth=None,
                    method_label="REPORT{cpsearch}",
                )
            else:
                # First try to discover some names to supply to the
                # auto-completion
                yield self._report(
                    self.principalCollection,
                    self._USER_LIST_PRINCIPAL_PROPERTY_SEARCH % {
                        'displayname': prefix,
                        'email': prefix,
                        'firstname': prefix,
                        'lastname': prefix,
                    },
                    depth=None,
                    method_label="REPORT{psearch}",
                )

            # Now learn about the attendee's availability
            yield self.requestAvailability(
                component.mainComponent().getStartDateUTC(),
                component.mainComponent().getEndDateUTC(),
                [self.email, u'mailto:' + email],
                [component.resourceUID()]
            )

    @inlineCallbacks
    def changeEventAttendee(self, href, oldAttendee, newAttendee):
        event = self._events[href]
        component = event.component

        # Change the event to have the new attendee instead of the old attendee
        component.mainComponent().removeProperty(oldAttendee)
        component.mainComponent().addProperty(newAttendee)
        okCodes = NO_CONTENT
        headers = Headers({
            'content-type': ['text/calendar'],
        })
        if event.scheduleTag is not None:
            headers.addRawHeader('if-schedule-tag-match', event.scheduleTag)
            okCodes = (NO_CONTENT, PRECONDITION_FAILED,)

        attendees = list(component.mainComponent().properties('ATTENDEE'))
        label_suffix = "small"
        if len(attendees) > 5:
            label_suffix = "medium"
        if len(attendees) > 20:
            label_suffix = "large"
        if len(attendees) > 75:
            label_suffix = "huge"

        response, _ignore_responseBody = yield self._request(
            okCodes,
            'PUT',
            self.server["uri"] + href.encode('utf-8'),
            headers,
            StringProducer(component.getTextWithTimezones(includeTimezones=True)),
            method_label="PUT{attendee-%s}" % (label_suffix,),
        )

        # Finally, re-retrieve the event to update the etag
        if not response.headers.hasHeader("etag"):
            response = yield self.updateEvent(href)

    @inlineCallbacks
    def deleteEvent(self, href):
        """
        Issue a DELETE for the given URL and remove local state
        associated with that event.
        """
        self._removeEvent(href)

        response, _ignore_responseBody = yield self._request(
            (NO_CONTENT, NOT_FOUND),
            'DELETE',
            self.server["uri"] + href.encode('utf-8'),
            method_label="DELETE{event}",
        )
        returnValue(response)

    @inlineCallbacks
    def addEvent(self, href, component, invite=False, attachmentSize=0):
        headers = Headers({
            'content-type': ['text/calendar'],
        })

        attendees = list(component.mainComponent().properties('ATTENDEE'))
        label_suffix = "small"
        if len(attendees) > 5:
            label_suffix = "medium"
        if len(attendees) > 20:
            label_suffix = "large"
        if len(attendees) > 75:
            label_suffix = "huge"

        response, _ignore_responseBody = yield self._request(
            CREATED,
            'PUT',
            self.server["uri"] + href.encode('utf-8'),
            headers,
            StringProducer(component.getTextWithTimezones(includeTimezones=True)),
            method_label="PUT{organizer-%s}" % (label_suffix,) if invite else "PUT{event}",
        )
        self._localUpdateEvent(response, href, component)

        if not response.headers.hasHeader("etag"):
            response = yield self.updateEvent(href)

        # Add optional attachment
        if attachmentSize:
            yield self.postAttachment(href, 'x' * attachmentSize)

    @inlineCallbacks
    def addInvite(self, href, component, attachmentSize=0, lookupPercentage=100):
        """
        Add an event that is an invite - i.e., has attendees.  We will do
        attendee lookups and freebusy checks on each attendee to simulate
        what happens when an organizer creates a new invite.
        """
        # Do lookup and free busy of each attendee (not self)
        attendees = list(component.mainComponent().properties('ATTENDEE'))
        for attendee in attendees:
            if attendee.value() in (self.uuid, self.email):
                continue
            choice = random.randint(1, 100)
            if choice <= lookupPercentage:
                yield self._attendeeAutoComplete(component, attendee)

        # Now do a normal PUT
        yield self.addEvent(href, component, invite=True, attachmentSize=attachmentSize)

    @inlineCallbacks
    def changeEvent(self, href):
        event = self._events[href]
        component = event.component

        headers = Headers({
            'content-type': ['text/calendar'],
            'if-match': [event.etag]})
        if event.etag is None:
            headers.removeHeader('if-match')

        # At last, upload the new event definition
        response, _ignore_responseBody = yield self._request(
            (NO_CONTENT, PRECONDITION_FAILED,),
            'PUT',
            self.server["uri"] + href.encode('utf-8'),
            headers,
            StringProducer(component.getTextWithTimezones(includeTimezones=True)),
            method_label="PUT{update}"
        )

        # Finally, re-retrieve the event to update the etag
        if not response.headers.hasHeader("etag"):
            response = yield self.updateEvent(href)

    def _localUpdateEvent(self, response, href, component):
        headers = response.headers
        etag = headers.getRawHeaders("etag", [None])[0]
        scheduleTag = headers.getRawHeaders("schedule-tag", [None])[0]

        event = Event(self.serializeLocation(), href, etag, component)
        event.scheduleTag = scheduleTag
        self._setEvent(href, event)

    @inlineCallbacks
    def updateEvent(self, href):
        response, responseBody = yield self._request(
            OK, 'GET',
            self.server["uri"] + href.encode('utf-8'),
            method_label="GET{event}",
        )

        headers = response.headers
        etag = headers.getRawHeaders('etag')[0]
        scheduleTag = headers.getRawHeaders('schedule-tag', [None])[0]
        self.eventChanged(href, etag, scheduleTag, responseBody)
        returnValue(response)

    @inlineCallbacks
    def requestAvailability(self, start, end, users, mask=set()):
        """
        Issue a VFREEBUSY request for I{roughly} the given date range for the
        given users.
        The date range is quantized to one day.  Because of this it is an
        error for the range to span more than 24 hours.

        @param start: A C{datetime} instance giving the beginning of the
            desired range.

        @param end: A C{datetime} instance giving the end of the desired
            range.

        @param users: An iterable of user UUIDs which will be included in
            the request.

        @param mask: An iterable of event UIDs which are to be ignored for
            the purposes of this availability lookup.

        @return: A C{Deferred} which fires with a C{dict}.  Keys in the dict
            are user UUIDs (those requested) and values are something else.
        """
        outbox = self.server["uri"] + self.outbox

        if mask:
            maskStr = u'\r\n'.join(['X-CALENDARSERVER-MASK-UID:' + uid
                                    for uid in mask]) + u'\r\n'
        else:
            maskStr = u''
        maskStr = maskStr.encode('utf-8')

        attendeeStr = '\r\n'.join(['ATTENDEE:' + uuid.encode('utf-8')
                                   for uuid in users]) + '\r\n'

        # iCal issues 24 hour wide vfreebusy requests, starting and ending at 4am.
        if start.compareDate(end):
            msg("Availability request spanning multiple days (%r to %r), "
                "dropping the end date."
                % (start, end))

        start.setTimezone(Timezone.UTCTimezone)
        start.setHHMMSS(0, 0, 0)
        end = start + Duration(hours=24)

        start = start.getText()
        end = end.getText()
        now = DateTime.getNowUTC().getText()

        label_suffix = "small"
        if len(users) > 5:
            label_suffix = "medium"
        if len(users) > 20:
            label_suffix = "large"
        if len(users) > 75:
            label_suffix = "huge"

        _ignore_response, responseBody = yield self._request(
            OK, 'POST', outbox,
            Headers({
                'content-type': ['text/calendar'],
                'originator': [self.email],
                'recipient': [u', '.join(users).encode('utf-8')]}),
            StringProducer(self._POST_AVAILABILITY % {
                'attendees': attendeeStr,
                'summary': (u'Availability for %s' % (', '.join(users),)).encode('utf-8'),
                'organizer': self.email.encode('utf-8'),
                'vfreebusy-uid': str(uuid4()).upper(),
                'event-mask': maskStr,
                'start': start,
                'end': end,
                'now': now,
            }),
            method_label="POST{fb-%s}" % (label_suffix,),
        )
        returnValue(responseBody)

    @inlineCallbacks
    def postAttachment(self, href, content):
        url = self.server["uri"] + "{0}?{1}".format(href, "action=attachment-add")
        filename = 'file-{}.txt'.format(len(content))
        headers = Headers({
            'Content-Disposition': ['attachment; filename="{}"'.format(filename)]
        })
        response, responseBody = yield self._request(
            CREATED, 'POST', url,
            headers=headers,
            body=StringProducer(content),
            method_label="POST{attachment}"
        )

        # We don't want to download an attachment we uploaded, so look for the
        # Cal-Managed-Id: and Location: headers and remember those
        managedId = response.headers.getRawHeaders("Cal-Managed-Id")[0]
        location = response.headers.getRawHeaders("Location")[0]
        self._attachments[managedId] = location
        yield self.updateEvent(href)
        returnValue(responseBody)

    @inlineCallbacks
    def getAttachment(self, href, managedId):
        # If we've already downloaded this managedId, skip it.
        if managedId not in self._attachments:
            self._attachments[managedId] = href
            yield self._newOperation(
                "download",
                self._get(href, (OK, FORBIDDEN), method_label="GET{attachment}")
            )

    @inlineCallbacks
    def postXML(self, href, content, label):
        headers = Headers({
            'content-type': ['text/xml']
        })
        _ignore_response, responseBody = yield self._request(
            (OK, CREATED, MULTI_STATUS), 'POST',
            self.server["uri"] + href,
            headers=headers,
            body=StringProducer(content),
            method_label=label
        )
        returnValue(responseBody)


class OS_X_10_6(BaseAppleClient):
    """
    Implementation of the OS X 10.6 iCal network behavior.  Anything
    OS X 10.6 iCal does on its own, or any particular network behaviors
    it takes in response to a user action, belong on this class.

    Usage-profile based behaviors ("the user modifies an event every 3.2
    minutes") belong elsewhere.
    """

    _client_type = "OS X 10.6"

    USER_AGENT = "DAVKit/4.0.3 (732); CalendarStore/4.0.3 (991); iCal/4.0.3 (1388); Mac OS X/10.6.4 (10F569)"

    # The default interval, used if none is specified in external
    # configuration.  This is also the actual value used by Snow
    # Leopard iCal.
    CALENDAR_HOME_POLL_INTERVAL = 15 * 60

    # The maximum number of resources to retrieve in a single multiget
    MULTIGET_BATCH_SIZE = 200

    # Override and turn on if client supports Sync REPORT
    _SYNC_REPORT = False

    # Override and turn on if client syncs using time-range queries
    _SYNC_TIMERANGE = False

    # Override and turn off if client does not support attendee lookups
    _ATTENDEE_LOOKUPS = True

    # Request body data
    _LOAD_PATH = "OS_X_10_6"

    _STARTUP_WELL_KNOWN = loadRequestBody(_LOAD_PATH, 'startup_well_known')
    _STARTUP_PRINCIPAL_PROPFIND_INITIAL = loadRequestBody(_LOAD_PATH, 'startup_principal_propfind_initial')
    _STARTUP_PRINCIPAL_PROPFIND = loadRequestBody(_LOAD_PATH, 'startup_principal_propfind')
    _STARTUP_PRINCIPALS_REPORT = loadRequestBody(_LOAD_PATH, 'startup_principals_report')
    _STARTUP_PRINCIPAL_EXPAND = loadRequestBody(_LOAD_PATH, 'startup_principal_expand')
    _STARTUP_PROPPATCH_CALENDAR_COLOR = loadRequestBody(_LOAD_PATH, 'startup_calendar_color_proppatch')
    _STARTUP_PROPPATCH_CALENDAR_ORDER = loadRequestBody(_LOAD_PATH, 'startup_calendar_order_proppatch')
    _STARTUP_PROPPATCH_CALENDAR_TIMEZONE = loadRequestBody(_LOAD_PATH, 'startup_calendar_timezone_proppatch')

    _POLL_CALENDARHOME_PROPFIND = loadRequestBody(_LOAD_PATH, 'poll_calendarhome_propfind')
    _POLL_CALENDAR_PROPFIND = loadRequestBody(_LOAD_PATH, 'poll_calendar_propfind')
    _POLL_CALENDAR_PROPFIND_D1 = loadRequestBody(_LOAD_PATH, 'poll_calendar_propfind_d1')
    _POLL_CALENDAR_MULTIGET_REPORT = loadRequestBody(_LOAD_PATH, 'poll_calendar_multiget')
    _POLL_CALENDAR_MULTIGET_REPORT_HREF = loadRequestBody(_LOAD_PATH, 'poll_calendar_multiget_hrefs')
    _POLL_CALENDAR_SYNC_REPORT = None
    _POLL_NOTIFICATION_PROPFIND = loadRequestBody(_LOAD_PATH, 'poll_calendar_propfind')
    _POLL_NOTIFICATION_PROPFIND_D1 = loadRequestBody(_LOAD_PATH, 'poll_notification_propfind_d1')

    _USER_LIST_PRINCIPAL_PROPERTY_SEARCH = loadRequestBody(_LOAD_PATH, 'user_list_principal_property_search')
    _POST_AVAILABILITY = loadRequestBody(_LOAD_PATH, 'post_availability')
    @inlineCallbacks
    def startup(self):
        # Try to read data from disk - if it succeeds self.principalURL will be set
        self.deserialize()

        if self.principalURL is None:
            # PROPFIND principal path to retrieve actual principal-URL
            response = yield self._principalPropfindInitial(self.record.uid)
            hrefs = response.getHrefProperties()
            self.principalURL = hrefs[davxml.principal_URL].toString()

        # Using the actual principal URL, retrieve principal information
        principal = (yield self._extractPrincipalDetails())
        returnValue(principal)


class OS_X_10_7(BaseAppleClient):
    """
    Implementation of the OS X 10.7 iCal network behavior.
    """

    _client_type = "OS X 10.7"

    USER_AGENT = "CalendarStore/5.0.2 (1166); iCal/5.0.2 (1571); Mac OS X/10.7.3 (11D50)"

    # The default interval, used if none is specified in external
    # configuration.  This is also the actual value used by Snow
    # Leopard iCal.
    CALENDAR_HOME_POLL_INTERVAL = 15 * 60

    # The maximum number of resources to retrieve in a single multiget
    MULTIGET_BATCH_SIZE = 50

    # Override and turn on if client supports Sync REPORT
    _SYNC_REPORT = True

    # Override and turn on if client syncs using time-range queries
    _SYNC_TIMERANGE = False

    # Override and turn off if client does not support attendee lookups
    _ATTENDEE_LOOKUPS = True

    # Request body data
    _LOAD_PATH = "OS_X_10_7"

    _STARTUP_WELL_KNOWN = loadRequestBody(_LOAD_PATH, 'startup_well_known')
    _STARTUP_PRINCIPAL_PROPFIND_INITIAL = loadRequestBody(_LOAD_PATH, 'startup_principal_propfind_initial')
    _STARTUP_PRINCIPAL_PROPFIND = loadRequestBody(_LOAD_PATH, 'startup_principal_propfind')
    _STARTUP_PRINCIPALS_REPORT = loadRequestBody(_LOAD_PATH, 'startup_principals_report')
    _STARTUP_PRINCIPAL_EXPAND = loadRequestBody(_LOAD_PATH, 'startup_principal_expand')
    _STARTUP_PROPPATCH_CALENDAR_COLOR = loadRequestBody(_LOAD_PATH, 'startup_calendar_color_proppatch')
    _STARTUP_PROPPATCH_CALENDAR_ORDER = loadRequestBody(_LOAD_PATH, 'startup_calendar_order_proppatch')
    _STARTUP_PROPPATCH_CALENDAR_TIMEZONE = loadRequestBody(_LOAD_PATH,
        'startup_calendar_timezone_proppatch')

    _POLL_CALENDARHOME_PROPFIND = loadRequestBody(_LOAD_PATH, 'poll_calendarhome_propfind')
    _POLL_CALENDAR_PROPFIND = loadRequestBody(_LOAD_PATH, 'poll_calendar_propfind')
    _POLL_CALENDAR_PROPFIND_D1 = loadRequestBody(_LOAD_PATH, 'poll_calendar_propfind_d1')
    _POLL_CALENDAR_MULTIGET_REPORT = loadRequestBody(_LOAD_PATH, 'poll_calendar_multiget')
    _POLL_CALENDAR_MULTIGET_REPORT_HREF = loadRequestBody(_LOAD_PATH, 'poll_calendar_multiget_hrefs')
    _POLL_CALENDAR_SYNC_REPORT = loadRequestBody(_LOAD_PATH, 'poll_calendar_sync')
    _POLL_NOTIFICATION_PROPFIND = loadRequestBody(_LOAD_PATH, 'poll_calendar_propfind')
    _POLL_NOTIFICATION_PROPFIND_D1 = loadRequestBody(_LOAD_PATH, 'poll_notification_propfind_d1')

    _USER_LIST_PRINCIPAL_PROPERTY_SEARCH = loadRequestBody(_LOAD_PATH, 'user_list_principal_property_search')
    _POST_AVAILABILITY = loadRequestBody(_LOAD_PATH, 'post_availability')

    def _addDefaultHeaders(self, headers):
        """
        Add the client's default set of headers to ones being used in a request.
        Default is to add User-Agent; sub-classes should override to add other
        client-specific things (Accept, etc.).
        """
        super(OS_X_10_7, self)._addDefaultHeaders(headers)
        headers.setRawHeaders('Accept', ['*/*'])
        headers.setRawHeaders('Accept-Language', ['en-us'])
        headers.setRawHeaders('Accept-Encoding', ['gzip,deflate'])
        headers.setRawHeaders('Connection', ['keep-alive'])

    @inlineCallbacks
    def startup(self):
        # Try to read data from disk - if it succeeds self.principalURL will be set
        self.deserialize()

        if self.principalURL is None:
            # PROPFIND well-known with redirect
            response = yield self._startupPropfindWellKnown()
            hrefs = response.getHrefProperties()
            if davxml.current_user_principal in hrefs:
                self.principalURL = hrefs[davxml.current_user_principal].toString()
            elif davxml.principal_URL in hrefs:
                self.principalURL = hrefs[davxml.principal_URL].toString()
            else:
                # PROPFIND principal path to retrieve actual principal-URL
                response = yield self._principalPropfindInitial(self.record.uid)
                hrefs = response.getHrefProperties()
                self.principalURL = hrefs[davxml.principal_URL].toString()

        # Using the actual principal URL, retrieve principal information
        principal = yield self._extractPrincipalDetails()
        returnValue(principal)


class OS_X_10_11(BaseAppleClient):
    """
    Implementation of the OS X 10.11 Calendar.app network behavior.
    """

    _client_type = "OS X 10.11"

    USER_AGENT = "Mac+OS+X/10.11 (15A283) CalendarAgent/361 Simulator"

    # The default interval, used if none is specified in external
    # configuration.  This is also the actual value used by El
    # Capitan Calendar.app.
    CALENDAR_HOME_POLL_INTERVAL = 15 * 60  # in seconds

    # The maximum number of resources to retrieve in a single multiget
    MULTIGET_BATCH_SIZE = 50

    # Override and turn on if client supports Sync REPORT
    _SYNC_REPORT = True

    # Override and turn off if client does not support attendee lookups
    _ATTENDEE_LOOKUPS = True

    # Request body data
    _LOAD_PATH = "OS_X_10_11"

    _STARTUP_WELL_KNOWN = loadRequestBody(_LOAD_PATH, 'startup_well_known_propfind')
    _STARTUP_PRINCIPAL_PROPFIND_INITIAL = loadRequestBody(_LOAD_PATH, 'startup_principal_initial_propfind')
    _STARTUP_PRINCIPAL_PROPFIND = loadRequestBody(_LOAD_PATH, 'startup_principal_propfind')
    _STARTUP_PRINCIPALS_REPORT = loadRequestBody(_LOAD_PATH, 'startup_principals_report')
    _STARTUP_PRINCIPAL_EXPAND = loadRequestBody(_LOAD_PATH, 'startup_principal_expand')
    _STARTUP_CREATE_CALENDAR = loadRequestBody(_LOAD_PATH, 'startup_create_calendar')
    _STARTUP_PROPPATCH_CALENDAR_COLOR = loadRequestBody(_LOAD_PATH, 'startup_calendar_color_proppatch')
    # _STARTUP_PROPPATCH_CALENDAR_NAME = loadRequestBody(_LOAD_PATH, 'startup_calendar_displayname_proppatch')
    _STARTUP_PROPPATCH_CALENDAR_ORDER = loadRequestBody(_LOAD_PATH, 'startup_calendar_order_proppatch')
    _STARTUP_PROPPATCH_CALENDAR_TIMEZONE = loadRequestBody(_LOAD_PATH, 'startup_calendar_timezone_proppatch')

    _POLL_CALENDARHOME_PROPFIND = loadRequestBody(_LOAD_PATH, 'poll_calendarhome_depth1_propfind')
    _POLL_CALENDAR_PROPFIND = loadRequestBody(_LOAD_PATH, 'poll_calendar_propfind')
    _POLL_CALENDAR_PROPFIND_D1 = loadRequestBody(_LOAD_PATH, 'poll_calendar_depth1_propfind')
    _POLL_CALENDAR_MULTIGET_REPORT = loadRequestBody('OS_X_10_7', 'poll_calendar_multiget')
    _POLL_CALENDAR_MULTIGET_REPORT_HREF = loadRequestBody('OS_X_10_7', 'poll_calendar_multiget_hrefs')
    _POLL_CALENDAR_SYNC_REPORT = loadRequestBody('OS_X_10_7', 'poll_calendar_sync')
    _POLL_NOTIFICATION_PROPFIND = loadRequestBody(_LOAD_PATH, 'poll_calendar_propfind')
    _POLL_NOTIFICATION_PROPFIND_D1 = loadRequestBody(_LOAD_PATH,
        'poll_notification_depth1_propfind')
    _POLL_INBOX_PROPFIND = loadRequestBody(_LOAD_PATH, 'poll_inbox_propfind')
    _NOTIFICATION_SYNC_REPORT = loadRequestBody(_LOAD_PATH, 'notification_sync')

    _CALENDARHOME_SYNC_REPORT = loadRequestBody(_LOAD_PATH, 'poll_calendarhome_sync')

    _USER_LIST_PRINCIPAL_PROPERTY_SEARCH = loadRequestBody('OS_X_10_7', 'user_list_principal_property_search')
    _POST_AVAILABILITY = loadRequestBody('OS_X_10_7', 'post_availability')

    _CALENDARSERVER_PRINCIPAL_SEARCH_REPORT = loadRequestBody(_LOAD_PATH, 'principal_search_report')

    def _addDefaultHeaders(self, headers):
        """
        Add the client's default set of headers to ones being used in a request.
        Default is to add User-Agent; sub-classes should override to add other
        client-specific things (Accept, etc.).
        """
        super(OS_X_10_11, self)._addDefaultHeaders(headers)
        headers.setRawHeaders('Accept', ['*/*'])
        headers.setRawHeaders('Accept-Language', ['en-us'])
        headers.setRawHeaders('Accept-Encoding', ['gzip,deflate'])
        headers.setRawHeaders('Connection', ['keep-alive'])

    @inlineCallbacks
    def startup(self):
        # Try to read data from disk - if it succeeds self.principalURL will be set
        self.deserialize()

        if self.principalURL is None:
            # PROPFIND well-known with redirect
            response = yield self._startupPropfindWellKnown()
            hrefs = response.getHrefProperties()
            if davxml.current_user_principal in hrefs:
                self.principalURL = hrefs[davxml.current_user_principal].toString()
            elif davxml.principal_URL in hrefs:
                self.principalURL = hrefs[davxml.principal_URL].toString()
            else:
                # PROPFIND principal path to retrieve actual principal-URL
                response = yield self._principalPropfindInitial(self.record.uid)
                hrefs = response.getHrefProperties()
                self.principalURL = hrefs[davxml.principal_URL].toString()

        # Using the actual principal URL, retrieve principal information
        principal = yield self._extractPrincipalDetails()
        returnValue(principal)


class iOS_5(BaseAppleClient):
    """
    Implementation of the iOS 5 network behavior.
    """

    _client_type = "iOS 5"

    USER_AGENT = "iOS/5.1 (9B179) dataaccessd/1.0"

    # The default interval, used if none is specified in external
    # configuration.  This is also the actual value used by Snow
    # Leopard iCal.
    CALENDAR_HOME_POLL_INTERVAL = 15 * 60

    # The maximum number of resources to retrieve in a single multiget
    MULTIGET_BATCH_SIZE = 50

    # Override and turn on if client supports Sync REPORT
    _SYNC_REPORT = False

    # Override and turn on if client syncs using time-range queries
    _SYNC_TIMERANGE = True

    # Override and turn off if client does not support attendee lookups
    _ATTENDEE_LOOKUPS = False

    # Request body data
    _LOAD_PATH = "iOS_5"

    _STARTUP_WELL_KNOWN = loadRequestBody(_LOAD_PATH, 'startup_well_known')
    _STARTUP_PRINCIPAL_PROPFIND_INITIAL = loadRequestBody(_LOAD_PATH, 'startup_principal_propfind_initial')
    _STARTUP_PRINCIPAL_PROPFIND = loadRequestBody(_LOAD_PATH, 'startup_principal_propfind')
    _STARTUP_PRINCIPALS_REPORT = loadRequestBody(_LOAD_PATH, 'startup_principals_report')
    _STARTUP_PROPPATCH_CALENDAR_COLOR = loadRequestBody(_LOAD_PATH, 'startup_calendar_color_proppatch')
    _STARTUP_PROPPATCH_CALENDAR_ORDER = loadRequestBody(_LOAD_PATH, 'startup_calendar_order_proppatch')

    _POLL_CALENDARHOME_PROPFIND = loadRequestBody(_LOAD_PATH, 'poll_calendarhome_propfind')
    _POLL_CALENDAR_PROPFIND = loadRequestBody(_LOAD_PATH, 'poll_calendar_propfind')
    _POLL_CALENDAR_VEVENT_TR_QUERY = loadRequestBody(_LOAD_PATH, 'poll_calendar_vevent_tr_query')
    _POLL_CALENDAR_VTODO_QUERY = loadRequestBody(_LOAD_PATH, 'poll_calendar_vtodo_query')
    _POLL_CALENDAR_PROPFIND_D1 = loadRequestBody(_LOAD_PATH, 'poll_calendar_propfind_d1')
    _POLL_CALENDAR_MULTIGET_REPORT = loadRequestBody(_LOAD_PATH, 'poll_calendar_multiget')
    _POLL_CALENDAR_MULTIGET_REPORT_HREF = loadRequestBody(_LOAD_PATH, 'poll_calendar_multiget_hrefs')

    def _addDefaultHeaders(self, headers):
        """
        Add the client's default set of headers to ones being used in a request.
        Default is to add User-Agent; sub-classes should override to add other
        client-specific things (Accept, etc.).
        """
        super(iOS_5, self)._addDefaultHeaders(headers)
        headers.setRawHeaders('Accept', ['*/*'])
        headers.setRawHeaders('Accept-Language', ['en-us'])
        headers.setRawHeaders('Accept-Encoding', ['gzip,deflate'])
        headers.setRawHeaders('Connection', ['keep-alive'])

    @inlineCallbacks
    def _pollFirstTime1(self, homeNode, calendars):
        # Patch calendar properties
        for cal in calendars:
            if cal.name != "inbox":
                yield self._proppatch(
                    cal.url,
                    self._STARTUP_PROPPATCH_CALENDAR_COLOR,
                    method_label="PROPPATCH{calendar}",
                )
                yield self._proppatch(
                    cal.url,
                    self._STARTUP_PROPPATCH_CALENDAR_ORDER,
                    method_label="PROPPATCH{calendar}",
                )

    def _pollFirstTime2(self):
        # Nothing here
        return succeed(None)

    def _updateCalendar(self, calendar, newToken):
        """
        Update the local cached data for a calendar in an appropriate manner.
        """
        if calendar.name == "inbox":
            # Inbox is done as a PROPFIND Depth:1
            return self._updateCalendar_PROPFIND(calendar, newToken)
        elif "VEVENT" in calendar.componentTypes:
            # VEVENTs done as time-range VEVENT-only queries
            return self._updateCalendar_VEVENT(calendar, newToken)
        elif "VTODO" in calendar.componentTypes:
            # VTODOs done as VTODO-only queries
            return self._updateCalendar_VTODO(calendar, newToken)

    @inlineCallbacks
    def _updateCalendar_VEVENT(self, calendar, newToken):
        """
        Sync all locally cached VEVENTs using a VEVENT-only time-range query.
        """
        # Grab old hrefs prior to the PROPFIND so we sync with the old state.  We need this because
        # the sim can fire a PUT between the PROPFIND and when we process the removals.
        old_hrefs = set([calendar.url + child for child in calendar.events.keys()])

        now = DateTime.getNowUTC()
        now.setDateOnly(True)
        now.offsetMonth(-1)  # 1 month back default
        result = yield self._report(
            calendar.url,
            self._POLL_CALENDAR_VEVENT_TR_QUERY % {"start-date": now.getText()},
            depth='1',
            method_label="REPORT{vevent}",
        )

        yield self._updateApplyChanges(calendar, result, old_hrefs)

        # Now update calendar to the new token
        self._calendars[calendar.url].changeToken = newToken

    @inlineCallbacks
    def _updateCalendar_VTODO(self, calendar, newToken):
        """
        Sync all locally cached VTODOs using a VTODO-only query.
        """
        # Grab old hrefs prior to the PROPFIND so we sync with the old state. We need this because
        # the sim can fire a PUT between the PROPFIND and when we process the removals.
        old_hrefs = set([calendar.url + child for child in calendar.events.keys()])

        result = yield self._report(
            calendar.url,
            self._POLL_CALENDAR_VTODO_QUERY,
            depth='1',
            method_label="REPORT{vtodo}",
        )

        yield self._updateApplyChanges(calendar, result, old_hrefs)

        # Now update calendar to the new token
        self._calendars[calendar.url].changeToken = newToken

    @inlineCallbacks
    def startup(self):
        # Try to read data from disk - if it succeeds self.principalURL will be set
        self.deserialize()

        if self.principalURL is None:
            # PROPFIND well-known with redirect
            response = yield self._startupPropfindWellKnown()
            hrefs = response.getHrefProperties()
            if davxml.current_user_principal in hrefs:
                self.principalURL = hrefs[davxml.current_user_principal].toString()
            elif davxml.principal_URL in hrefs:
                self.principalURL = hrefs[davxml.principal_URL].toString()
            else:
                # PROPFIND principal path to retrieve actual principal-URL
                response = yield self._principalPropfindInitial(self.record.uid)
                hrefs = response.getHrefProperties()
                self.principalURL = hrefs[davxml.principal_URL].toString()

        # Using the actual principal URL, retrieve principal information
        principal = yield self._extractPrincipalDetails()
        returnValue(principal)


class RequestLogger(object):
    format = u"%(user)s %(client)s request %(code)s%(success)s[%(duration)5.2f s] %(method)8s %(url)s"
    success = u"\N{CHECK MARK}"
    failure = u"\N{BALLOT X}"

    def observe(self, event):
        if event.get("type") == "response":
            formatArgs = dict(
                user=event['user'],
                method=event['method'],
                url=urlunparse(('', '') + urlparse(event['url'])[2:]),
                code=event['code'],
                duration=event['duration'],
                client=event['client_type'],
            )

            if event['success']:
                formatArgs['success'] = self.success
            else:
                formatArgs['success'] = self.failure
            print((self.format % formatArgs).encode('utf-8'))

            if not event['success']:
                body = event['body']
                if isinstance(body, StringProducer):
                    body = body._body
                if body:
                    body = body[:5000]
                responseBody = event['responseBody']
                if responseBody:
                    responseBody = responseBody[:5000]
                print("=" * 80)
                print("INCORRECT ERROR CODE: {}".format(event['code']))
                print("URL: {}".format(event['url']))
                print("METHOD: {}".format(event['method']))
                print("USER: {}".format(event['user']))
                print("REQUEST BODY:\n{}".format(body))
                print("RESPONSE BODY:\n{}".format(responseBody))
                print("=" * 80)

#             elif event["duration"] > 9.0:
#                 body = event['body']
#                 if isinstance(body, StringProducer):
#                     body = body._body
#                 if body:
#                     body = body[:5000]
#                 responseBody = event['responseBody']
#                 if responseBody:
#                     responseBody = responseBody[:5000]
#                 print("=" * 80)
#                 print("LONG RESPONSE: {}".format(event['duration']))
#                 print("URL: {}".format(event['url']))
#                 print("METHOD: {}".format(event['method']))
#                 print("USER: {}".format(event['user']))
#                 print("REQUEST BODY:\n{}".format(body))
#                 print("RESPONSE BODY:\n{}".format(responseBody))
#                 print("=" * 80)

    def report(self, output):
        pass

    def failures(self):
        return []


class ErrorLogger(object):
    """
    Requests which get an incorrect response code or take too long are logged
    """

    def __init__(self, directory, durationThreshold):
        self.directory = directory
        self.durationThreshold = durationThreshold
        if os.path.isdir(directory):
            shutil.rmtree(directory)
        os.mkdir(directory)
        self.errorCounter = 0
        self.durationCounter = 0

    def _getBodies(self, event):
        body = event['body']
        if isinstance(body, StringProducer):
            body = body._body
        if body:
            body = body[:5000]
        responseBody = event['responseBody']
        return body, responseBody

    def observe(self, event):
        if event.get("type") == "response":
            if not event['success']:
                body, responseBody = self._getBodies(event)
                self.errorCounter += 1
                filename = "error-{:08d}.txt".format(self.errorCounter)
                fullname = os.path.join(self.directory, filename)
                with open(fullname, "w") as f:
                    f.write("RESPONSE CODE: {}\n".format(event['code']))
                    f.write("URL: {}\n".format(event['url']))
                    f.write("METHOD: {}\n".format(event['method']))
                    f.write("USER: {}\n".format(event['user']))
                    f.write("REQUEST BODY:\n{}\n".format(body))
                    f.write("RESPONSE BODY:\n{}\n".format(responseBody))
                print("Incorrect Response Code logged to {}".format(fullname))

            elif event["duration"] > self.durationThreshold:
                body, responseBody = self._getBodies(event)
                self.durationCounter += 1
                filename = "duration-{:08d}.txt".format(self.durationCounter)
                fullname = os.path.join(self.directory, filename)
                with open(fullname, "w") as f:
                    f.write("LONG RESPONSE: {:.1f} sec\n".format(event['duration']))
                    f.write("RESPONSE CODE: {}\n".format(event['code']))
                    f.write("URL: {}\n".format(event['url']))
                    f.write("METHOD: {}\n".format(event['method']))
                    f.write("USER: {}\n".format(event['user']))
                    f.write("REQUEST BODY:\n{}\n".format(body))
                    f.write("RESPONSE BODY:\n{}\n".format(responseBody))
                print("Long Duration logged to {}".format(fullname))

    def report(self, output):
        pass

    def failures(self):
        return []


def main():
    from urllib2 import HTTPDigestAuthHandler
    from twisted.internet import reactor

    auth = HTTPDigestAuthHandler()
    auth.add_password(
        realm="Test Realm",
        uri="http://127.0.0.1:8008/",
        user="user01",
        passwd="user01")

    addObserver(RequestLogger().observe)

    from sim import _DirectoryRecord
    client = OS_X_10_6(
        reactor, 'http://127.0.0.1:8008/',
        _DirectoryRecord(
            u'user01', u'user01', u'User 01', u'user01@example.org'),
        auth)
    d = client.run()
    d.addErrback(err, "10.6 client run() problem")
    d.addCallback(lambda ignored: reactor.stop())
    reactor.run()


if __name__ == '__main__':
    main()

calendarserver-9.1+dfsg/contrib/performance/loadtest/logger.py
##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
##

from contrib.performance.stats import mean, median, stddev


class SummarizingMixin(object):

    def printHeader(self, output, fields):
        """
        Print a header for the summarization data which will be reported.

        @param fields: A C{list} of two-tuples.  Each tuple describes one
            column in the summary.  The first element gives a label to
            appear at the top of the column.  The second element gives the
            width of the column.
""" format = [] labels = [] for (label, width) in fields: format.append('%%%ds' % (width,)) labels.append(label) header = ' '.join(format) % tuple(labels) output.write("%s\n" % header) output.write("%s\n" % ("-" * len(header),)) def _summarizeData(self, operation, data, total_count): failed = 0 thresholds = [0] * len(self._thresholds) durations = [] for (success, duration) in data: if not success: failed += 1 for ctr, item in enumerate(self._thresholds): threshold, _ignore_fail_at = item if duration > threshold: thresholds[ctr] += 1 durations.append(duration) # Determine PASS/FAIL failure = False count = len(data) if failed * 100.0 / count > self._fail_cut_off: failure = True for ctr, item in enumerate(self._thresholds): _ignore_threshold, fail_at = item fail_at = fail_at.get(operation, fail_at["default"]) if thresholds[ctr] * 100.0 / count > fail_at: failure = True return (operation, count, ((100.0 * count) / total_count) if total_count else 0.0, failed,) + \ tuple(thresholds) + \ (mean(durations), median(durations), stddev(durations), "FAIL" if failure else "") def _printRow(self, output, formats, values): format = ' '.join(formats) output.write("%s\n" % format % values) def printData(self, output, formats, perOperationTimes): """ Print one or more rows of data with the given formatting. @param formats: A C{list} of C{str} giving formats into which each data field will be interpolated. @param perOperationTimes: A C{list} of all of the data to summarize. Each element is a two-tuple of whether the operation succeeded (C{True} if so, C{False} if not) and how long the operation took. 
""" total_count = sum(map(lambda x: len(x[1]), perOperationTimes)) for method, data in perOperationTimes: self._printRow(output, formats, self._summarizeData(method, data, total_count)) calendarserver-9.1+dfsg/contrib/performance/loadtest/population.py000066400000000000000000000536051315003562600256400ustar00rootroot00000000000000# -*- test-case-name: contrib.performance.loadtest.test_population -*- ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ## from __future__ import print_function from __future__ import division """ Tools for generating a population of CalendarServer users based on certain usage parameters. 
""" from tempfile import mkdtemp from itertools import izip from datetime import datetime from urllib2 import HTTPBasicAuthHandler from urllib2 import HTTPDigestAuthHandler from urllib2 import HTTPPasswordMgrWithDefaultRealm import collections import json import os from twisted.internet.defer import DeferredList from twisted.python.failure import Failure from twisted.python.filepath import FilePath from twisted.python.util import FancyEqMixin from twisted.python.log import msg, err from twistedcaldav.timezones import TimezoneCache from contrib.performance.stats import mean, median, stddev, mad from contrib.performance.loadtest.trafficlogger import loggedReactor from contrib.performance.loadtest.logger import SummarizingMixin from contrib.performance.loadtest.ical import OS_X_10_6, RequestLogger from contrib.performance.loadtest.profiles import Eventer, Inviter, Accepter class ProfileType(object, FancyEqMixin): """ @ivar profileType: A L{ProfileBase} subclass, or an L{ICalendarUserProfile} implementation. @ivar params: A C{dict} which will be passed to C{profileType} as keyword arguments to create a new profile instance. """ compareAttributes = ("profileType", "params") def __init__(self, profileType, params): self.profileType = profileType self.params = params def __call__(self, reactor, simulator, client, number): return self.profileType(reactor, simulator, client, number, **self.params) class ClientType(object, FancyEqMixin): """ @ivar clientType: An L{ICalendarClient} implementation @ivar profileTypes: A list of L{ProfileType} instances """ compareAttributes = ("clientType", "profileTypes") def __init__(self, clientType, clientParams, profileTypes): self.clientType = clientType self.clientParams = clientParams self.profileTypes = profileTypes def new(self, reactor, server, principalPathTemplate, serializationPath, userRecord, authInfo, instanceNumber): """ Create a new instance of this client type. 
""" return self.clientType( reactor, server, principalPathTemplate, serializationPath, userRecord, authInfo, instanceNumber, **self.clientParams ) class PopulationParameters(object, FancyEqMixin): """ Descriptive statistics about a population of Calendar Server users. """ compareAttributes = ("clients",) def __init__(self): self.clients = [] def addClient(self, weight, clientType): """ Add another type of client to these parameters. @param weight: A C{int} giving the weight of this client type. The higher the weight, the more frequently a client of this type will show up in the population described by these parameters. @param clientType: A L{ClientType} instance describing the type of client to add. """ self.clients.append((weight, clientType)) def clientTypes(self): """ Return a list of two-tuples giving the weights and types of clients in the population. """ return self.clients class Populator(object): """ @ivar userPattern: A C{str} giving a formatting pattern to use to construct usernames. The string will be interpolated with a single integer, the incrementing counter of how many users have thus far been "used". @ivar passwordPattern: Similar to C{userPattern}, but for passwords. """ def __init__(self, random): self._random = random def _cycle(self, elements): while True: for (weight, value) in elements: for _ignore_i in range(weight): yield value def populate(self, parameters): """ Generate individuals such as might be randomly selected from a population with the given parameters. 
@type parameters: L{PopulationParameters} @rtype: generator of L{ClientType} instances """ for (clientType,) in izip(self._cycle(parameters.clientTypes())): yield clientType class CalendarClientSimulator(object): def __init__(self, records, populator, random, parameters, reactor, servers, principalPathTemplate, serializationPath, workerIndex=0, workerCount=1): self._records = records self.populator = populator self._random = random self.reactor = reactor self.servers = servers self.principalPathTemplate = principalPathTemplate self.serializationPath = serializationPath self._pop = self.populator.populate(parameters) self._user = 0 self._stopped = False self.workerIndex = workerIndex self.workerCount = workerCount self.clients = [] TimezoneCache.create() def getUserRecord(self, index): return self._records[index] def getRandomUserRecord(self, besides=None): count = len(self._records) if count == 0: # No records! return None if count == 1 and besides == 0: # There is only one item and caller doesn't want it! return None for _i in xrange(100): # Try to find one that is not "besides" n = self._random.randint(0, count - 1) if besides != n: # Got it. break else: # Give up return None return self._records[n] def _nextUserNumber(self): result = self._user self._user += 1 return result def _createUser(self, number): record = self._records[number] user = record.uid authBasic = HTTPBasicAuthHandler(password_mgr=HTTPPasswordMgrWithDefaultRealm()) authBasic.add_password( realm=None, uri=self.servers[record.podID]["uri"], user=user.encode('utf-8'), passwd=record.password.encode('utf-8')) authDigest = HTTPDigestAuthHandler(passwd=HTTPPasswordMgrWithDefaultRealm()) authDigest.add_password( realm=None, uri=self.servers[record.podID]["uri"], user=user.encode('utf-8'), passwd=record.password.encode('utf-8')) return record, user, {"basic": authBasic, "digest": authDigest, } def stop(self): """ Indicate that the simulation is over. 
CalendarClientSimulator doesn't actively react to this, but it does cause all future failures to be disregarded (as some are expected, as the simulation will always stop while some requests are in flight). """ # Give all the clients a chance to stop (including unsubscribe from push) deferreds = [] for client in self.clients: deferreds.append(client.stop()) self._stopped = True return DeferredList(deferreds) def add(self, numClients, clientsPerUser): instanceNumber = 0 for _ignore_n in range(numClients): number = self._nextUserNumber() for _ignore_peruser in range(clientsPerUser): clientType = self._pop.next() if (number % self.workerCount) != self.workerIndex: # If we're in a distributed work scenario and we are worker N, # we have to skip all but every Nth request (since every node # runs the same arrival policy). continue record, _ignore_user, auth = self._createUser(number) reactor = loggedReactor(self.reactor) client = clientType.new( reactor, self.servers[record.podID], self.principalPathTemplate, self.serializationPath, record, auth, instanceNumber ) instanceNumber += 1 self.clients.append(client) d = client.run() d.addErrback(self._clientFailure, reactor) for profileType in clientType.profileTypes: profile = profileType(reactor, self, client, number) if profile.enabled: d = profile.initialize() def _run(result): d2 = profile.run() d2.addErrback(self._profileFailure, profileType, reactor) return d2 d.addCallback(_run) # XXX this status message is prone to be slightly inaccurate, but isn't # really used by much anyway. 
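The `add()` method above shards simulated users across workers with a modulo rule: since every node runs the same arrival policy, worker `i` of `N` skips every user whose number does not satisfy `number % N == i`. A standalone sketch of the same partitioning rule (function name is illustrative):

```python
def users_for_worker(user_numbers, worker_index, worker_count):
    """Select the subset of user numbers this worker is responsible for.

    Every worker sees the same arrival sequence; taking number % worker_count
    gives disjoint, roughly equal partitions with no coordination needed.
    """
    return [n for n in user_numbers if n % worker_count == worker_index]
```

The partitions for all workers are disjoint and together cover every user, which is what lets each node run independently.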
msg(type="status", clientCount=self._user - 1) def _dumpLogs(self, loggingReactor, reason): path = FilePath(mkdtemp()) logstate = loggingReactor.getLogFiles() i = 0 for i, log in enumerate(logstate.finished): path.child('%03d.log' % (i,)).setContent(log.getvalue()) for i, log in enumerate(logstate.active, i): path.child('%03d.log' % (i,)).setContent(log.getvalue()) path.child('reason.log').setContent(reason.getTraceback()) return path def _clientFailure(self, reason, reactor): if not self._stopped: where = self._dumpLogs(reactor, reason) err(reason, "Client stopped with error; recent traffic in %r" % ( where.path,)) if not isinstance(reason, Failure): reason = Failure(reason) msg(type="client-failure", reason="%s: %s" % (reason.type, reason.value,)) def _profileFailure(self, reason, profileType, reactor): if not self._stopped: where = self._dumpLogs(reactor, reason) err(reason, "Profile stopped with error; recent traffic in %r" % ( where.path,)) def _simFailure(self, reason, reactor): if not self._stopped: msg(type="sim-failure", reason=reason) class SmoothRampUp(object): def __init__(self, reactor, groups, groupSize, interval, clientsPerUser): self.reactor = reactor self.groups = groups self.groupSize = groupSize self.interval = interval self.clientsPerUser = clientsPerUser def run(self, simulator): for i in range(self.groups): self.reactor.callLater( self.interval * i, simulator.add, self.groupSize, self.clientsPerUser) class StatisticsBase(object): def observe(self, event): if event.get('type') == 'response': self.eventReceived(event) elif event.get('type') == 'client-failure': self.clientFailure(event) elif event.get('type') == 'sim-failure': self.simFailure(event) def report(self, output): pass def failures(self): return [] class SimpleStatistics(StatisticsBase): def __init__(self): self._times = [] self._failures = collections.defaultdict(int) self._simFailures = collections.defaultdict(int) def eventReceived(self, event): 
self._times.append(event['duration']) if len(self._times) == 200: print('mean:', mean(self._times)) print('median:', median(self._times)) print('stddev:', stddev(self._times)) print('mad:', mad(self._times)) del self._times[:100] def clientFailure(self, event): self._failures[event] += 1 def simFailure(self, event): self._simFailures[event] += 1 class ReportStatistics(StatisticsBase, SummarizingMixin): """ @ivar _users: A C{set} containing all user UIDs which have been observed in events. When generating the final report, the size of this set is reported as the number of users in the simulation. """ # the response time thresholds to display together with failing % count threshold _thresholds_default = { "requests": { "limits": [0.1, 0.5, 1.0, 3.0, 5.0, 10.0, 30.0], "thresholds": { "default": [100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], } } } _fail_cut_off = 1.0 # % of total count at which failed requests will cause a failure _fields_init = [ ('request', -30, '%-30s'), ('count', 8, '%8s'), ('%', 6, '%6.1f'), ('failed', 8, '%8s'), ] _fields_extend = [ ('mean', 8, '%8.4f'), ('median', 8, '%8.4f'), ('stddev', 8, '%8.4f'), ('QoS', 8, '%8.4f'), ('STATUS', 8, '%8s'), ] def __init__(self, **params): self._perMethodTimes = {} self._users = set() self._clients = set() self._failed_clients = [] self._failed_sim = collections.defaultdict(int) self._startTime = datetime.now() self._expired_data = {} # Load parameters from config if "thresholdsPath" in params: with open(params["thresholdsPath"]) as f: jsondata = json.load(f) elif "thresholds" in params: jsondata = params["thresholds"] else: jsondata = self._thresholds_default self._thresholds = [[limit, {}] for limit in jsondata["requests"]["limits"]] for ctr, item in enumerate(self._thresholds): for k, v in jsondata["requests"]["thresholds"].items(): item[1][k] = v[ctr] self._fields = self._fields_init[:] for threshold, _ignore_fail_at in self._thresholds: self._fields.append(('>%g sec' % (threshold,), 10, '%10s')) 
self._fields.extend(self._fields_extend) if "benchmarksPath" in params: with open(params["benchmarksPath"]) as f: self.benchmarks = json.load(f) else: self.benchmarks = {} if "failCutoff" in params: self._fail_cut_off = params["failCutoff"] def observe(self, event): if event.get('type') == 'sim-expired': self.simExpired(event) else: super(ReportStatistics, self).observe(event) def countUsers(self): return len(self._users) def countClients(self): return len(self._clients) def countClientFailures(self): return len(self._failed_clients) def countSimFailures(self): return len(self._failed_sim) def eventReceived(self, event): dataset = self._perMethodTimes.setdefault(event['method'], []) dataset.append((event['success'], event['duration'])) self._users.add(event['user']) self._clients.add(event['client_id']) def clientFailure(self, event): self._failed_clients.append(event['reason']) def simFailure(self, event): self._failed_sim[event['reason']] += 1 def simExpired(self, event): self._expired_data[event['podID']] = event['reason'] def printMiscellaneous(self, output, items): maxColumnWidth = str(len(max(items.iterkeys(), key=len))) fmt = "%" + maxColumnWidth + "s : %-s\n" for k in sorted(items.iterkeys()): output.write(fmt % (k.title(), items[k],)) def qos(self): """ Determine a "quality of service" value that can be used for comparisons between runs. This value is based on the percentage deviation of means of each request from a set of "benchmarks" for each type of request. """ # Get means for each type of method means = {} for method, results in self._perMethodTimes.items(): try: means[method] = mean([duration for success, duration in results if success]) except ZeroDivisionError: pass # Ignore the case where all samples were unsuccessful? 
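The `qos()` / `qos_value()` pair above scores a run by comparing each method's mean response time against a benchmark mean, weighting the fractional deviation, and averaging across methods. A simplified standalone sketch of that computation (the benchmark numbers below are made up for illustration; the real values come from the `benchmarksPath` JSON):

```python
def qos_value(value, benchmark_mean, weight):
    """Weighted deviation of an observed mean from its benchmark; 1.0 == on-benchmark."""
    return ((value / benchmark_mean) - 1.0) * weight + 1.0

def qos(means, benchmarks):
    """Average the per-method scores; methods without a benchmark are skipped."""
    scores = [
        qos_value(value, benchmarks[method]["mean"], benchmarks[method]["weight"])
        for method, value in means.items()
        if method in benchmarks
    ]
    return sum(scores) / len(scores) if scores else None
```

So a method running exactly at its benchmark contributes 1.0, and a method twice as slow with weight 1.0 contributes 2.0; lower composite values indicate a faster run relative to the benchmarks.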
# Determine percentage differences with weighting differences = [] for method, value in means.items(): result = self.qos_value(method, value) if result is not None: differences.append(result) return ("%-8.4f" % mean(differences)) if differences else "None" def qos_value(self, method, value): benchmark = self.benchmarks.get(method) if benchmark is None: return None test_mean, weight = (benchmark["mean"], benchmark["weight"],) return ((value / test_mean) - 1.0) * weight + 1.0 def _summarizeData(self, operation, data, total_count): data = SummarizingMixin._summarizeData(self, operation, data, total_count) value = self.qos_value(operation, data[-4]) if value is None: value = 0.0 return data[:-1] + (value,) + data[-1:] def report(self, output): output.write("\n") output.write("** REPORT **\n") output.write("\n") runtime = datetime.now() - self._startTime cpu = os.times() cpuUser = cpu[0] + cpu[2] cpuSys = cpu[1] + cpu[3] cpuTotal = cpuUser + cpuSys runHours, remainder = divmod(runtime.seconds, 3600) runMinutes, runSeconds = divmod(remainder, 60) cpuHours, remainder = divmod(cpuTotal, 3600) cpuMinutes, cpuSeconds = divmod(remainder, 60) items = { 'Users': self.countUsers(), 'Clients': self.countClients(), 'Start time': self._startTime.strftime('%m/%d %H:%M:%S'), 'Run time': "%02d:%02d:%02d" % (runHours, runMinutes, runSeconds), 'CPU Time': "user %-5.2f sys %-5.2f total %02d:%02d:%02d" % (cpuUser, cpuSys, cpuHours, cpuMinutes, cpuSeconds,), 'QoS': self.qos(), } if self.countClientFailures() > 0: items['Failed clients'] = self.countClientFailures() for ctr, reason in enumerate(self._failed_clients, 1): items['Failure #%d' % (ctr,)] = reason if self.countSimFailures() > 0: for reason, count in self._failed_sim.items(): items['Failed operation'] = "%s : %d times" % (reason, count,) output.write("* Client\n") self.printMiscellaneous(output, items) output.write("\n") for podID, data in self._expired_data.items(): items = { "Req/sec": "%.1f" % (data[0],), "Response": "%.1f 
(ms)" % (data[1],), "Slots": "%.2f" % (data[2],), "CPU": "%.1f%%" % (data[3],), } output.write("* Server {podID} (Last 5 minutes)\n".format(podID=podID)) self.printMiscellaneous(output, items) output.write("\n") output.write("* Details\n") self.printHeader(output, [ (label, width) for (label, width, _ignore_fmt) in self._fields ]) self.printData( output, [fmt for (label, width, fmt) in self._fields], sorted(self._perMethodTimes.items()) ) _FAILED_REASON = "Greater than %(cutoff)g%% %(method)s failed" _REASON_1 = "Greater than %(cutoff)g%% %(method)s exceeded " _REASON_2 = "%g second response time" def failures(self): # TODO reasons = [] for (method, times) in self._perMethodTimes.iteritems(): failures = 0 overDurations = [0] * len(self._thresholds) for success, duration in times: if not success: failures += 1 for ctr, item in enumerate(self._thresholds): threshold, _ignore_fail_at = item if duration > threshold: overDurations[ctr] += 1 checks = [ (failures, self._fail_cut_off, self._FAILED_REASON), ] for ctr, item in enumerate(self._thresholds): threshold, fail_at = item fail_at = fail_at.get(method, fail_at["default"]) checks.append( (overDurations[ctr], fail_at, self._REASON_1 + self._REASON_2 % (threshold,)) ) for count, cutoff, reason in checks: if count * 100.0 / len(times) > cutoff: reasons.append(reason % dict(method=method, cutoff=cutoff)) if self.countClientFailures() != 0: reasons.append("Client failures: %d" % (self.countClientFailures(),)) if self.countSimFailures() != 0: reasons.append("Overall failures: %d" % (self.countSimFailures(),)) return reasons def main(): import random from twisted.internet import reactor from twisted.python.log import addObserver from twisted.python.failure import startDebugMode startDebugMode() report = ReportStatistics() addObserver(SimpleStatistics().observe) addObserver(report.observe) addObserver(RequestLogger().observe) r = random.Random() r.seed(100) populator = Populator(r) parameters = PopulationParameters() 
    parameters.addClient(
        1, ClientType(OS_X_10_6, {}, [Eventer, Inviter, Accepter]))
    simulator = CalendarClientSimulator(
        populator, r, parameters, reactor, '127.0.0.1', 8008)

    arrivalPolicy = SmoothRampUp(
        reactor, groups=10, groupSize=1, interval=3, clientsPerUser=1)
    arrivalPolicy.run(simulator)
    reactor.run()
    report.report()


if __name__ == '__main__':
    main()

calendarserver-9.1+dfsg/contrib/performance/loadtest/profiles.py
##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
##

"""
Implementation of specific end-user behaviors.
""" from __future__ import division import json import random import sys from urlparse import urljoin, urlparse from uuid import uuid4 from caldavclientlibrary.protocol.caldav.definitions import caldavxml from twisted.python import context from twisted.python.log import msg from twisted.python.failure import Failure from twisted.internet.defer import Deferred, succeed, inlineCallbacks, returnValue from twisted.internet.task import LoopingCall from twisted.web.http import PRECONDITION_FAILED from twistedcaldav.ical import Property, Component from contrib.performance.stats import NearFutureDistribution, NormalDistribution, UniformDiscreteDistribution, mean, median from contrib.performance.stats import LogNormalDistribution, RecurrenceDistribution from contrib.performance.loadtest.logger import SummarizingMixin from contrib.performance.loadtest.ical import Calendar, IncorrectResponseCode from pycalendar.datetime import DateTime from pycalendar.duration import Duration from datetime import datetime, timedelta import dateutil class ProfileBase(object): """ Base class which provides some conveniences for profile implementations. """ random = random def __init__(self, reactor, simulator, client, userNumber, **params): self._reactor = reactor self._sim = simulator self._client = client self._number = userNumber self.setParameters(**params) def setParameters(self): pass def initialize(self): """ Called before the profile runs for real. Can be used to initialize client state. 
@return: a L{Deferred} that fires when initialization is done """ return succeed(None) def _calendarsOfType(self, calendarType, componentType, justOwned=False): results = [] for cal in self._client._calendars.itervalues(): if cal.resourceType == calendarType and componentType in cal.componentTypes: if justOwned: if (not cal.shared) or cal.sharedByMe: results.append(cal) else: results.append(cal) return results def _getRandomCalendarOfType(self, componentType, justOwned=False): """ Return a random L{Calendar} object from the current user or C{None} if there are no calendars to work with """ calendars = self._calendarsOfType(caldavxml.calendar, componentType, justOwned=justOwned) if not calendars: return None # Choose a random calendar calendar = self.random.choice(calendars) return calendar def _getRandomEventOfType(self, componentType, justOwned=True): """ Return a random L{Event} object from the current user or C{None} if there are no events to work with """ calendars = self._calendarsOfType(caldavxml.calendar, componentType, justOwned=justOwned) while calendars: calendar = self.random.choice(calendars) calendars.remove(calendar) if not calendar.events: continue events = calendar.events.keys() while events: href = self.random.choice(events) events.remove(href) event = calendar.events[href] if not event.component: continue if event.scheduleTag: continue return event return None def _isSelfAttendee(self, attendee): """ Try to match one of the attendee's identifiers against one of C{self._client}'s identifiers. Return C{True} if something matches, C{False} otherwise. """ return attendee.parameterValue('EMAIL') == self._client.email[len('mailto:'):] def _newOperation(self, label, deferred): """ Helper to emit a log event when a new operation is started and another one when it completes. """ # If this is a scheduled request, record the lag in the # scheduling now so it can be reported when the response is # received. 
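`_newOperation()` above brackets a deferred with matching start/end log events and derives the duration from reactor timestamps taken on either side. A minimal synchronous sketch of that bracketing pattern, using plain callables and `time.time()` instead of Twisted deferreds (names are illustrative):

```python
import time

def timed_operation(label, fn, emit):
    """Emit a start event, run fn, then emit an end event with duration and success."""
    emit({"type": "operation", "phase": "start", "label": label})
    before = time.time()
    try:
        result = fn()
        success = True
    except Exception:
        result = None
        success = False
    emit({
        "type": "operation", "phase": "end", "label": label,
        "duration": time.time() - before, "success": success,
    })
    return result
```

The real implementation attaches the end event via `deferred.addBoth`, so success is decided by whether a `Failure` propagated rather than by a try/except.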
lag = context.get('lag', None) before = self._reactor.seconds() msg( type="operation", phase="start", user=self._client.record.uid, client_type=self._client.title, client_id=self._client._client_id, label=label, lag=lag, ) def finished(passthrough): success = not isinstance(passthrough, Failure) if not success: passthrough.trap(IncorrectResponseCode) passthrough = passthrough.value.response after = self._reactor.seconds() msg( type="operation", phase="end", duration=after - before, user=self._client.record.uid, client_type=self._client.title, client_id=self._client._client_id, label=label, success=success, ) return passthrough deferred.addBoth(finished) return deferred def _failedOperation(self, label, reason): """ Helper to emit a log event when an operation fails. """ msg( type="operation", phase="failed", user=self._client.record.uid, client_type=self._client.title, client_id=self._client._client_id, label=label, reason=reason, ) self._sim._simFailure("%s: %s" % (label, reason,), self._reactor) class CannotAddAttendee(Exception): """ Indicates no new attendees can be invited to a particular event. """ pass def loopWithDistribution(reactor, distribution, function): result = Deferred() def repeat(ignored): reactor.callLater(distribution.sample(), iterate) def iterate(): d = function() if d is not None: d.addCallbacks(repeat, result.errback) else: repeat(None) repeat(None) return result class Inviter(ProfileBase): """ A Calendar user who invites other users to new events. 
""" _eventTemplate = Component.fromString("""\ BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.3//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20101018T155431Z UID:C98AD237-55AD-4F7D-9009-0D355D835822 DTEND;TZID=America/New_York:20101021T130000 TRANSP:OPAQUE SUMMARY:Simple event DTSTART;TZID=America/New_York:20101021T120000 DTSTAMP:20101018T155438Z SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n")) def setParameters( self, enabled=True, sendInvitationDistribution=NormalDistribution(600, 60), inviteeDistribution=UniformDiscreteDistribution(range(-10, 11)), inviteeClumping=True, inviteeCountDistribution=LogNormalDistribution(1.2, 1.2), eventStartDistribution=NearFutureDistribution(), eventDurationDistribution=UniformDiscreteDistribution([ 15 * 60, 30 * 60, 45 * 60, 60 * 60, 120 * 60 ]), recurrenceDistribution=RecurrenceDistribution(False), fileAttachPercentage=30, fileSizeDistribution=NormalDistribution(1024, 1), inviteeLookupPercentage=50, ): self.enabled = enabled self._sendInvitationDistribution = sendInvitationDistribution self._inviteeDistribution = inviteeDistribution self._inviteeClumping = inviteeClumping self._inviteeCountDistribution = inviteeCountDistribution self._eventStartDistribution = eventStartDistribution self._eventDurationDistribution = eventDurationDistribution self._recurrenceDistribution = recurrenceDistribution self._fileAttachPercentage = fileAttachPercentage self._fileSizeDistribution = fileSizeDistribution self._inviteeLookupPercentage = inviteeLookupPercentage def run(self): return loopWithDistribution( self._reactor, self._sendInvitationDistribution, self._invite) def _addAttendee(self, event, attendees): """ Create a new attendee to add to the list of attendees for the given event. 
""" selfRecord = self._sim.getUserRecord(self._number) invitees = set([u'mailto:%s' % (selfRecord.email,)]) for att in attendees: invitees.add(att.value()) for _ignore_i in range(10): sample = self._inviteeDistribution.sample() if self._inviteeClumping: sample = self._number + sample invitee = max(0, sample) try: record = self._sim.getUserRecord(invitee) except IndexError: continue cuaddr = u'mailto:%s' % (record.email,) if cuaddr not in invitees: break else: raise CannotAddAttendee("Can't find uninvited user to invite.") attendee = Property( name=u'ATTENDEE', value=cuaddr.encode("utf-8"), params={ 'CN': record.commonName, 'CUTYPE': 'INDIVIDUAL', 'PARTSTAT': 'NEEDS-ACTION', 'ROLE': 'REQ-PARTICIPANT', 'RSVP': 'TRUE', }, ) event.addProperty(attendee) attendees.append(attendee) def _invite(self): """ Try to add a new event, or perhaps remove an existing attendee from an event. @return: C{None} if there are no events to play with, otherwise a L{Deferred} which fires when the attendee change has been made. """ if not self._client.started: return succeed(None) # Find calendars which are eligible for invites calendars = self._calendarsOfType(caldavxml.calendar, "VEVENT", justOwned=True) while calendars: # Pick one at random from which to try to create an event # to modify. calendar = self.random.choice(calendars) calendars.remove(calendar) # Copy the template event and fill in some of its fields # to make a new event to create on the calendar. 
vcalendar = self._eventTemplate.duplicate() vevent = vcalendar.mainComponent() uid = str(uuid4()) dtstart = self._eventStartDistribution.sample() dtend = dtstart + Duration(seconds=self._eventDurationDistribution.sample()) vevent.replaceProperty(Property("CREATED", DateTime.getNowUTC())) vevent.replaceProperty(Property("DTSTAMP", DateTime.getNowUTC())) vevent.replaceProperty(Property("DTSTART", dtstart)) vevent.replaceProperty(Property("DTEND", dtend)) vevent.replaceProperty(Property("UID", uid)) rrule = self._recurrenceDistribution.sample() if rrule is not None: vevent.addProperty(Property(None, None, None, pycalendar=rrule)) vevent.addProperty(self._client._makeSelfOrganizer()) vevent.addProperty(self._client._makeSelfAttendee()) attendees = list(vevent.properties('ATTENDEE')) for _ignore in range(int(self._inviteeCountDistribution.sample())): try: self._addAttendee(vevent, attendees) except CannotAddAttendee: continue choice = self.random.randint(1, 100) if choice <= self._fileAttachPercentage: attachmentSize = int(self._fileSizeDistribution.sample()) else: attachmentSize = 0 # no attachment href = '%s%s.ics' % (calendar.url, uid) d = self._client.addInvite( href, vcalendar, attachmentSize=attachmentSize, lookupPercentage=self._inviteeLookupPercentage ) return self._newOperation("invite", d) class Accepter(ProfileBase): """ A Calendar user who accepts invitations to events. As well as accepting requests, this will also remove cancels and replies. 
""" def setParameters( self, enabled=True, acceptDelayDistribution=NormalDistribution(1200, 60) ): self.enabled = enabled self._accepting = set() self._acceptDelayDistribution = acceptDelayDistribution def run(self): self._subscription = self._client.catalog["eventChanged"].subscribe(self.eventChanged) # TODO: Propagate errors from eventChanged and _acceptInvitation to this Deferred return Deferred() def eventChanged(self, href): # Just respond to normal calendar events calendar = href.rsplit('/', 1)[0] + '/' try: calendar = self._client._calendars[calendar] except KeyError: return if calendar.resourceType == caldavxml.schedule_inbox: # Handle inbox differently self.inboxEventChanged(calendar, href) elif calendar.resourceType == caldavxml.calendar: self.calendarEventChanged(calendar, href) else: return def calendarEventChanged(self, calendar, href): if href in self._accepting: return component = self._client._events[href].component # Check to see if this user is in the attendee list in the # NEEDS-ACTION PARTSTAT. 
attendees = tuple(component.mainComponent().properties('ATTENDEE')) for attendee in attendees: if self._isSelfAttendee(attendee): if attendee.parameterValue('PARTSTAT') == 'NEEDS-ACTION': delay = self._acceptDelayDistribution.sample() self._accepting.add(href) self._reactor.callLater( delay, self._acceptInvitation, href, attendee) def inboxEventChanged(self, calendar, href): if href in self._accepting: return component = self._client._events[href].component method = component.propertyValue('METHOD') if method == "REPLY": # Replies are immediately deleted self._accepting.add(href) self._reactor.callLater( 0, self._handleReply, href) elif method == "CANCEL": # Cancels are handled after a user delay delay = self._acceptDelayDistribution.sample() self._accepting.add(href) self._reactor.callLater( delay, self._handleCancel, href) def _acceptInvitation(self, href, attendee): def change(): accepted = self._makeAcceptedAttendee(attendee) return self._client.changeEventAttendee(href, attendee, accepted) d = change() def scheduleError(reason): reason.trap(IncorrectResponseCode) if reason.value.response.code != PRECONDITION_FAILED: return reason.value.response.code # Download the event again and attempt to make the change # to the attendee list again. d = self._client.updateEvent(href) def cbUpdated(ignored): d = change() d.addErrback(scheduleError) return d d.addCallback(cbUpdated) return d d.addErrback(scheduleError) def accepted(ignored): # Find the corresponding event in the inbox and delete it. 
uid = self._client._events[href].getUID() for cal in self._client._calendars.itervalues(): if cal.resourceType == caldavxml.schedule_inbox: for event in cal.events.itervalues(): if uid == event.getUID(): return self._client.deleteEvent(event.url) d.addCallback(accepted) def finished(passthrough): self._accepting.remove(href) return passthrough d.addBoth(finished) return self._newOperation("accept", d) def _handleReply(self, href): d = self._client.deleteEvent(href) d.addBoth(self._finishRemoveAccepting, href) return self._newOperation("reply done", d) def _finishRemoveAccepting(self, passthrough, href): self._accepting.remove(href) if isinstance(passthrough, Failure): passthrough.trap(IncorrectResponseCode) passthrough = passthrough.value.response return passthrough def _handleCancel(self, href): uid = self._client._events[href].getUID() d = self._client.deleteEvent(href) def removed(ignored): # Find the corresponding event in any calendar and delete it. for cal in self._client._calendars.itervalues(): if cal.resourceType == caldavxml.calendar: for event in cal.events.itervalues(): if uid == event.getUID(): return self._client.deleteEvent(event.url) d.addCallback(removed) d.addBoth(self._finishRemoveAccepting, href) return self._newOperation("cancelled", d) def _makeAcceptedAttendee(self, attendee): accepted = attendee.duplicate() accepted.setParameter('PARTSTAT', 'ACCEPTED') accepted.removeParameter('RSVP') return accepted class AttachmentDownloader(ProfileBase): """ A Calendar user who downloads attachments. 
""" def setParameters( self, enabled=True, ): self.enabled = enabled def run(self): self._subscription = self._client.catalog["eventChanged"].subscribe(self.eventChanged) return Deferred() def eventChanged(self, href): # Just respond to normal calendar events calendar = href.rsplit('/', 1)[0] + '/' try: calendar = self._client._calendars[calendar] except KeyError: return if calendar.resourceType == caldavxml.calendar: self.calendarEventChanged(calendar, href) def calendarEventChanged(self, calendar, href): component = self._client._events[href].component attachments = tuple(component.mainComponent().properties('ATTACH')) if attachments: for attachment in attachments: attachmentHref = attachment.value() managedId = attachment.parameterValue('MANAGED-ID') attachmentHref = self._normalizeHref(attachmentHref) self._reactor.callLater( 0, self._client.getAttachment, attachmentHref, managedId ) def _normalizeHref(self, href): return urljoin(self._client._managed_attachments_server_url, urlparse(href).path) class Eventer(ProfileBase): """ A Calendar user who creates new events. 
""" _eventTemplate = Component.fromString("""\ BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.3//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20101018T155431Z UID:C98AD237-55AD-4F7D-9009-0D355D835822 DTEND;TZID=America/New_York:20101021T130000 TRANSP:OPAQUE SUMMARY:Simple event DTSTART;TZID=America/New_York:20101021T120000 DTSTAMP:20101018T155438Z SEQUENCE:2 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n")) def setParameters( self, enabled=True, interval=25, eventStartDistribution=NearFutureDistribution(), eventDurationDistribution=UniformDiscreteDistribution([ 15 * 60, 30 * 60, 45 * 60, 60 * 60, 120 * 60 ]), recurrenceDistribution=RecurrenceDistribution(False), fileAttachPercentage=30, fileSizeDistribution=NormalDistribution(1024, 1), ): self.enabled = enabled self._interval = interval self._eventStartDistribution = eventStartDistribution self._eventDurationDistribution = eventDurationDistribution self._recurrenceDistribution = recurrenceDistribution self._fileAttachPercentage = fileAttachPercentage self._fileSizeDistribution = fileSizeDistribution def run(self): self._call = LoopingCall(self._addEvent) self._call.clock = self._reactor self._reactor.callLater( self.random.randint(1, self._interval), self._call.start, self._interval ) return Deferred() def _addEvent(self): # Don't perform any operations until the client is up and running if not self._client.started: return succeed(None) calendar = self._getRandomCalendarOfType('VEVENT') if not calendar: # No VEVENT calendars, so no new event... return succeed(None) # Copy the template event and fill in some of its fields # to make a new event to create on the calendar. 
vcalendar = self._eventTemplate.duplicate() vevent = vcalendar.mainComponent() uid = str(uuid4()) dtstart = self._eventStartDistribution.sample() dtend = dtstart + Duration(seconds=self._eventDurationDistribution.sample()) vevent.replaceProperty(Property("CREATED", DateTime.getNowUTC())) vevent.replaceProperty(Property("DTSTAMP", DateTime.getNowUTC())) vevent.replaceProperty(Property("DTSTART", dtstart)) vevent.replaceProperty(Property("DTEND", dtend)) vevent.replaceProperty(Property("UID", uid)) rrule = self._recurrenceDistribution.sample() if rrule is not None: vevent.addProperty(Property(None, None, None, pycalendar=rrule)) choice = self.random.randint(1, 100) if choice <= self._fileAttachPercentage: attachmentSize = int(self._fileSizeDistribution.sample()) else: attachmentSize = 0 # no attachment href = '%s%s.ics' % (calendar.url, uid) d = self._client.addEvent(href, vcalendar, attachmentSize=attachmentSize) return self._newOperation("create", d) class EventUpdaterBase(ProfileBase): @inlineCallbacks def action(self): # Don't perform any operations until the client is up and running if not self._client.started: returnValue(None) event = self._getRandomEventOfType('VEVENT', justOwned=True) if not event: returnValue(None) component = event.component vevent = component.mainComponent() label = yield self.modifyEvent(event.url, vevent) if label: vevent.replaceProperty(Property("DTSTAMP", DateTime.getNowUTC())) event.component = component yield self._newOperation( label, self._client.changeEvent(event.url) ) def run(self): self._call = LoopingCall(self.action) self._call.clock = self._reactor self._reactor.callLater( self.random.randint(1, self._interval), self._call.start, self._interval ) return Deferred() def modifyEvent(self, href, vevent): """Overridden by subclasses""" pass class TitleChanger(EventUpdaterBase): def setParameters( self, enabled=True, interval=60, titleLengthDistribution=NormalDistribution(10, 2) ): self.enabled = enabled self._interval = interval 
self._titleLength = titleLengthDistribution def modifyEvent(self, _ignore_href, vevent): length = max(5, int(self._titleLength.sample())) vevent.replaceProperty(Property("SUMMARY", "Event" + "." * (length - 5))) return succeed("update") class DescriptionChanger(EventUpdaterBase): def setParameters( self, enabled=True, interval=60, descriptionLengthDistribution=NormalDistribution(10, 2) ): self.enabled = enabled self._interval = interval self._descriptionLength = descriptionLengthDistribution def modifyEvent(self, _ignore_href, vevent): length = int(self._descriptionLength.sample()) vevent.replaceProperty(Property("DESCRIPTION", "." * length)) return succeed("update") class Attacher(EventUpdaterBase): def setParameters( self, enabled=True, interval=60, fileSizeDistribution=NormalDistribution(1024, 1), ): self.enabled = enabled self._interval = interval self._fileSize = fileSizeDistribution @inlineCallbacks def modifyEvent(self, href, vevent): fileSize = int(self._fileSize.sample()) yield self._client.postAttachment(href, 'x' * fileSize) returnValue(None) class EventCountLimiter(EventUpdaterBase): """ Examines the number of events in each calendar collection, and when that count exceeds eventCountLimit, events are randomly removed until the count falls back to the limit. """ def setParameters( self, enabled=True, interval=60, eventCountLimit=1000 ): self.enabled = enabled self._interval = interval self._limit = eventCountLimit @inlineCallbacks def action(self): # Don't perform any operations until the client is up and running if not self._client.started: returnValue(None) for calendar in self._calendarsOfType(caldavxml.calendar, "VEVENT", justOwned=True): while len(calendar.events) > self._limit: event = calendar.events[self.random.choice(calendar.events.keys())] yield self._client.deleteEvent(event.url) class CalendarSharer(ProfileBase): """ A Calendar user who shares calendars to other random users. 
""" def setParameters( self, enabled=True, interval=60, maxSharees=3 ): self.enabled = enabled self._interval = interval self._maxSharees = maxSharees def run(self): self._call = LoopingCall(self.action) self._call.clock = self._reactor self._reactor.callLater( self.random.randint(1, self._interval), self._call.start, self._interval ) return Deferred() @inlineCallbacks def action(self): # Don't perform any operations until the client is up and running if not self._client.started: returnValue(None) yield self.shareCalendar() @inlineCallbacks def shareCalendar(self): # pick a calendar calendar = self._getRandomCalendarOfType('VEVENT', justOwned=True) if not calendar: returnValue(None) # don't exceed maxSharees if len(calendar.invitees) >= self._maxSharees: returnValue(None) # pick a random sharee shareeRecord = self._sim.getRandomUserRecord(besides=self._number) if shareeRecord is None: returnValue(None) # POST the sharing invite mailto = "mailto:{}".format(shareeRecord.email) body = Calendar.addInviteeXML(mailto, calendar.name, readwrite=True) yield self._client.postXML( calendar.url, body, label="POST{share-calendar}" ) class AlarmAcknowledger(ProfileBase): """ A Calendar user who creates a new event, and then updates its alarm. 
""" _eventTemplate = Component.fromString("""\ BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.3//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20101018T155431Z UID:C98AD237-55AD-4F7D-9009-0D355D835822 DTEND;TZID=America/New_York:20101021T130000 TRANSP:OPAQUE SUMMARY:Simple event DTSTART;TZID=America/New_York:20101021T120000 DTSTAMP:20101018T155438Z SEQUENCE:2 BEGIN:VALARM X-WR-ALARMUID:D9D1AC84-F629-4B9D-9B6B-4A6CA9A11FEF UID:D9D1AC84-F629-4B9D-9B6B-4A6CA9A11FEF DESCRIPTION:Event reminder TRIGGER:-PT8M ACTION:DISPLAY END:VALARM END:VEVENT END:VCALENDAR """.replace("\n", "\r\n")) def setParameters( self, enabled=True, interval=5, pastTheHour=[0, 15, 30, 45], eventStartDistribution=NearFutureDistribution(), eventDurationDistribution=UniformDiscreteDistribution([ 15 * 60, 30 * 60, 45 * 60, 60 * 60, 120 * 60 ]), recurrenceDistribution=RecurrenceDistribution(False), ): self.enabled = enabled self._interval = interval self._pastTheHour = pastTheHour self._eventStartDistribution = eventStartDistribution self._eventDurationDistribution = eventDurationDistribution self._recurrenceDistribution = recurrenceDistribution self._lastMinuteChecked = -1 def initialize(self): """ Called before the profile runs for real. Can be used to initialize client state. 
@return: a L{Deferred} that fires when initialization is done """ self.myEventHref = None return self._initEvent() def run(self): self._call = LoopingCall(self._updateEvent) self._call.clock = self._reactor self._reactor.callLater( self.random.randint(1, self._interval), self._call.start, self._interval ) return Deferred() def _initEvent(self): # Don't perform any operations until the client is up and running if not self._client.started: return succeed(None) try: calendar = self._calendarsOfType(caldavxml.calendar, "VEVENT")[0] except IndexError: # There is no calendar return succeed(None) self.myEventHref = '{}{}.ics'.format(calendar.url, str(uuid4())) # Copy the template event and fill in some of its fields # to make a new event to create on the calendar. vcalendar = self._eventTemplate.duplicate() vevent = vcalendar.mainComponent() uid = str(uuid4()) dtstart = self._eventStartDistribution.sample() dtend = dtstart + Duration(seconds=self._eventDurationDistribution.sample()) vevent.replaceProperty(Property("CREATED", DateTime.getNowUTC())) vevent.replaceProperty(Property("DTSTAMP", DateTime.getNowUTC())) vevent.replaceProperty(Property("DTSTART", dtstart)) vevent.replaceProperty(Property("DTEND", dtend)) vevent.replaceProperty(Property("UID", uid)) vevent.replaceProperty(Property("DESCRIPTION", "AlarmAcknowledger")) rrule = self._recurrenceDistribution.sample() if rrule is not None: vevent.addProperty(Property(None, None, None, pycalendar=rrule)) d = self._client.addEvent(self.myEventHref, vcalendar) return self._newOperation("create", d) def _shouldUpdate(self, minutePastTheHour): """ We want to only acknowledge our alarm at the "past the hour" minutes we've been configured for. 
""" should = False if minutePastTheHour in self._pastTheHour: # This is one of the minutes we should update on, but only update # as we pass into this minute, and not subsequent times if minutePastTheHour != self._lastMinuteChecked: should = True self._lastMinuteChecked = minutePastTheHour return should def _updateEvent(self): """ Set the ACKNOWLEDGED property on an event. @return: C{None} if there are no events to play with, otherwise a L{Deferred} which fires when the acknowledged change has been made. """ # Only do updates when we reach of the designated minutes past the hour if not self._shouldUpdate(datetime.now().minute): return succeed(None) if not self._client.started: return succeed(None) if self.myEventHref is None: return self._initEvent() event = self._client.eventByHref(self.myEventHref) # Add/update the ACKNOWLEDGED property component = event.component.mainComponent() component.replaceProperty(Property("ACKNOWLEDGED", DateTime.getNowUTC())) d = self._client.changeEvent(event.url) return self._newOperation("update", d) class Tasker(ProfileBase): """ A Calendar user who creates new tasks. 
""" _taskTemplate = Component.fromString("""\ BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.3//EN CALSCALE:GREGORIAN BEGIN:VTODO CREATED:20101018T155431Z UID:C98AD237-55AD-4F7D-9009-0D355D835822 SUMMARY:Simple task DUE;TZID=America/New_York:20101021T120000 DTSTAMP:20101018T155438Z END:VTODO END:VCALENDAR """.replace("\n", "\r\n")) def setParameters( self, enabled=True, interval=25, taskDueDistribution=NearFutureDistribution(), ): self.enabled = enabled self._interval = interval self._taskStartDistribution = taskDueDistribution def run(self): self._call = LoopingCall(self._addTask) self._call.clock = self._reactor self._reactor.callLater( self.random.randint(1, self._interval), self._call.start, self._interval ) return Deferred() def _addTask(self): # Don't perform any operations until the client is up and running if not self._client.started: return succeed(None) calendars = self._calendarsOfType(caldavxml.calendar, "VTODO") while calendars: calendar = self.random.choice(calendars) calendars.remove(calendar) # Copy the template task and fill in some of its fields # to make a new task to create on the calendar. 
vcalendar = self._taskTemplate.duplicate() vtodo = vcalendar.mainComponent() uid = str(uuid4()) due = self._taskStartDistribution.sample() vtodo.replaceProperty(Property("CREATED", DateTime.getNowUTC())) vtodo.replaceProperty(Property("DTSTAMP", DateTime.getNowUTC())) vtodo.replaceProperty(Property("DUE", due)) vtodo.replaceProperty(Property("UID", uid)) href = '%s%s.ics' % (calendar.url, uid) d = self._client.addEvent(href, vcalendar) return self._newOperation("create", d) class TimeRanger(ProfileBase): """ A profile which does time queries """ def setParameters( self, enabled=True, interval=25, ): self.enabled = enabled self._interval = interval def run(self): self._call = LoopingCall(self._runQuery) self._call.clock = self._reactor self._reactor.callLater( self.random.randint(1, self._interval), self._call.start, self._interval ) return Deferred() def _runQuery(self): # Don't perform any operations until the client is up and running if not self._client.started: return succeed(None) now = datetime.now(dateutil.tz.tzutc()) start = now.strftime("%Y%m%dT%H%M%SZ") end = (now + timedelta(seconds=24 * 60 * 60)).strftime("%Y%m%dT%H%M%SZ") calendar = self._getRandomCalendarOfType('VEVENT') return self._client.timeRangeQuery(calendar.url, start, end) class DeepRefresher(ProfileBase): """ A profile which PROPFINDs all collection in a home """ def setParameters( self, enabled=True, interval=25, ): self.enabled = enabled self._interval = interval def run(self): self._call = LoopingCall(self._deepRefresh) self._call.clock = self._reactor self._reactor.callLater( self.random.randint(1, self._interval), self._call.start, self._interval ) return Deferred() def _deepRefresh(self): # Don't perform any operations until the client is up and running if not self._client.started: return succeed(None) return self._client.deepRefresh() class APNSSubscriber(ProfileBase): """ A profile which POSTs to /apns """ def setParameters( self, enabled=True, interval=60, ): self.enabled = enabled 
        self._interval = interval

    def run(self):
        self._call = LoopingCall(self._apnsSubscribe)
        self._call.clock = self._reactor
        self._reactor.callLater(
            self.random.randint(1, self._interval),
            self._call.start,
            self._interval
        )
        return Deferred()

    def _apnsSubscribe(self):
        # Don't perform any operations until the client is up and running
        if not self._client.started:
            return succeed(None)

        return self._client.apnsSubscribe()


class Resetter(ProfileBase):
    """
    A Calendar user who resets their account and re-downloads everything.
    """

    def setParameters(
        self,
        enabled=True,
        interval=600,
    ):
        self.enabled = enabled
        self._interval = interval

    def run(self):
        self._call = LoopingCall(self._resetAccount)
        self._call.clock = self._reactor
        self._reactor.callLater(
            self.random.randint(1, self._interval),
            self._call.start,
            self._interval
        )
        return Deferred()

    def _resetAccount(self):
        # Don't perform any operations until the client is up and running
        if not self._client.started:
            return succeed(None)

        return self._client.reset()


class OperationLogger(SummarizingMixin):
    """
    Profiles will initiate operations which may span multiple requests.  Start
    and stop log messages are emitted for these operations and logged by this
    logger.
    """
    formats = {
        u"start": u"%(user)s - - - - - - - - - - - %(label)8s BEGIN %(lag)s",
        u"end": u"%(user)s - - - - - - - - - - - %(label)8s END [%(duration)5.2f s]",
        u"failed": u"%(user)s x x x x x x x x x x x %(label)8s FAILED %(reason)s",
    }

    lagFormat = u'{lag %5.2f ms}'

    # the response time thresholds to display together with failing % count threshold
    _thresholds_default = {
        "operations": {
            "limits": [0.1, 0.5, 1.0, 3.0, 5.0, 10.0, 30.0],
            "thresholds": {
                "default": [100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0],
            }
        }
    }
    _lag_cut_off = 1.0      # Maximum allowed median scheduling latency, seconds
    _fail_cut_off = 1.0     # % of total count at which failed requests will cause a failure

    _fields_init = [
        ('operation', -30, '%-30s'),
        ('count', 8, '%8s'),
        ('%', 6, '%6.1f'),
        ('failed', 8, '%8s'),
    ]

    _fields_extend = [
        ('mean', 8, '%8.4f'),
        ('median', 8, '%8.4f'),
        ('stddev', 8, '%8.4f'),
        ('avglag (ms)', 12, '%12.4f'),
        ('STATUS', 8, '%8s'),
    ]

    def __init__(self, outfile=None, **params):
        self._perOperationTimes = {}
        self._perOperationLags = {}
        if outfile is None:
            outfile = sys.stdout
        self._outfile = outfile

        # Load parameters from config
        if "thresholdsPath" in params:
            with open(params["thresholdsPath"]) as f:
                jsondata = json.load(f)
        elif "thresholds" in params:
            jsondata = params["thresholds"]
        else:
            jsondata = self._thresholds_default
        self._thresholds = [[limit, {}] for limit in jsondata["operations"]["limits"]]
        for ctr, item in enumerate(self._thresholds):
            for k, v in jsondata["operations"]["thresholds"].items():
                item[1][k] = v[ctr]

        self._fields = self._fields_init[:]
        for threshold, _ignore_fail_at in self._thresholds:
            self._fields.append(('>%g sec' % (threshold,), 10, '%10s'))
        self._fields.extend(self._fields_extend)

        if "lagCutoff" in params:
            self._lag_cut_off = params["lagCutoff"]
        if "failCutoff" in params:
            self._fail_cut_off = params["failCutoff"]
        self._fail_if_no_push = params.get("failIfNoPush", False)

    def observe(self, event):
        if event.get("type") == "operation":
            event = event.copy()
            lag = event.get('lag')
            if lag is None:
                event['lag'] = ''
            else:
                event['lag'] = self.lagFormat % (lag * 1000.0,)

            self._outfile.write(
                (self.formats[event[u'phase']] % event).encode('utf-8') + '\n')

            if event[u'phase'] == u'end':
                dataset = self._perOperationTimes.setdefault(event[u'label'], [])
                dataset.append((event[u'success'], event[u'duration']))
            elif lag is not None:
                dataset = self._perOperationLags.setdefault(event[u'label'], [])
                dataset.append(lag)

    def _summarizeData(self, operation, data, total_count):
        avglag = mean(self._perOperationLags.get(operation, [0.0])) * 1000.0
        data = SummarizingMixin._summarizeData(self, operation, data, total_count)
        return data[:-1] + (avglag,) + data[-1:]

    def report(self, output):
        output.write("\n")
        self.printHeader(output, [
            (label, width)
            for (label, width, _ignore_fmt)
            in self._fields
        ])
        self.printData(
            output,
            [fmt for (label, width, fmt) in self._fields],
            sorted(self._perOperationTimes.items())
        )

    _LATENCY_REASON = "Median %(operation)s scheduling lag greater than %(cutoff)sms"
    _FAILED_REASON = "Greater than %(cutoff).0f%% %(operation)s failed"
    _PUSH_MISSING_REASON = "Push was configured but no pushes were received by clients"
    _REASON_1 = "Greater than %(cutoff)g%% %(method)s exceeded "
    _REASON_2 = "%g second total time"

    def failures(self):
        reasons = []

        for operation, lags in self._perOperationLags.iteritems():
            if median(lags) > self._lag_cut_off:
                reasons.append(self._LATENCY_REASON % dict(
                    operation=operation.upper(), cutoff=self._lag_cut_off * 1000))

        for operation, times in self._perOperationTimes.iteritems():
            failures = len([success for (success, _ignore_duration) in times if not success])
            if failures * 100.0 / len(times) > self._fail_cut_off:
                reasons.append(self._FAILED_REASON % dict(
                    operation=operation.upper(), cutoff=self._fail_cut_off))

        for operation, times in self._perOperationTimes.iteritems():
            overDurations = [0] * len(self._thresholds)
            for success, duration in times:
                for ctr, item in enumerate(self._thresholds):
                    threshold, _ignore_fail_at = item
                    if duration > threshold:
                        overDurations[ctr] += 1

            checks = []
            for ctr, item in enumerate(self._thresholds):
                threshold, fail_at = item
                fail_at = fail_at.get(operation, fail_at["default"])
                checks.append(
                    (overDurations[ctr], fail_at, self._REASON_1 + self._REASON_2 % (threshold,))
                )

            for count, cutoff, reason in checks:
                if count * 100.0 / len(times) > cutoff:
                    reasons.append(reason % dict(method=operation.upper(), cutoff=cutoff))

        if self._fail_if_no_push and "push" not in self._perOperationTimes:
            reasons.append(self._PUSH_MISSING_REASON)

        return reasons

calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11/Profile

PROPFIND /.well-known/caldav -> /principals/
    current-user-principal
    principal-URL
    resourcetype

PROPFIND /principals/ ->
    current-user-principal    /principals/__uids__/
    principal-URL             ----
    resourcetype              collection

OPTIONS /principals/__uids__//

PROPFIND /principals/__uids__//
    calendar-home-set            /calendars/__uids__//
    calendar-user-address-set    mailto:user#@example.com  urn:uuid:  urn:x-uid:
    current-user-principal       /principals/__uids__//
    displayname                  User #
    dropbox-home-URL             /calendars/__uids__//dropbox/
    email-address-set            user#@example.com
    notification-URL             /calendars/__uids__//notification/
    principal-collection-set     /principals/
    principal-URL                /principals/__uids__//
    resource-id                  urn:x-uid:
    schedule-inbox-URL           /calendars/__uids__//inbox/
    schedule-outbox-URL          /calendars/__uids__//outbox/
    supported-report-set         acl-principal-prop-set principal-match principal-property-search expand-property calendarserver-principal-search
OPTIONS /principals/__uids__/

REPORT /principals/ ->
    principal-search-property-set
        displayname
        email-address-set
        calendar-user-address-set
        calendar-user-type

PROPFIND /calendars/__uids__//inbox/ ->
    calendar-availability  ???

PROPFIND /calendars/__uids__//  Depth 1
    add-member
    allowed-sharing-modes
    autoprovisioned
    bulk-requests
    calendar-alarm
    calendar-color
    calendar-description
    calendar-free-busy-set
    calendar-order
    calendar-timezone
    current-user-privilege-set        all/read/read-free-busy/write/write-properties/write-content/bind/unbind/unlock/read-acl/write-acl/read-current-user-privilege-set
    default-alarm-vevent-date
    default-alarm-vevent-datetime
    displayname                       User #
    getctag
    invite
    language-code
    location-code
    owner                             /principals/__uids__//
    pre-publish-url
    publish-url
    push-transports
    pushkey                           /CalDAV/localhost//
    quota-available-bytes             104857600
    quota-used-bytes                  0
    refreshrate
    resource-id
    resourcetype                      collection
    schedule-calendar-transp
    schedule-default-calendar-URL
    source
    subscribed-strip-alarms
    subscribed-strip-attachments
    subscribed-strip-todos
    supported-calendar-component-set  VEVENT/VTODO
    supported-calendar-component-sets
    supported-report-set              acl-principal-prop-set/principal-match/principal-property-search/expand-property/calendarserver-principal-search/calendar-query/calendar-multiget/free-busy-query/addressbook-query/addressbook-multiget/sync-collection
    sync-token                        data:,36_58/
    ** and more **

PROPPATCH /calendars/__uids__// -> default-alarm-vevent-date
PROPPATCH /calendars/__uids__// -> default-alarm-vevent-datetime
PROPPATCH /calendars/__uids__//calendar/ -> calendar-order
PROPPATCH /calendars/__uids__//calendar/ -> displayname
PROPPATCH /calendars/__uids__//calendar/ -> calendar-color
PROPPATCH /calendars/__uids__//calendar/ -> calendar-order
PROPPATCH /calendars/__uids__//calendar/ -> calendar-timezone
PROPPATCH /calendars/__uids__//tasks/ -> calendar-order
PROPPATCH /calendars/__uids__//tasks/ -> displayname
PROPPATCH /calendars/__uids__//tasks/ -> calendar-color
PROPPATCH /calendars/__uids__//tasks/ -> calendar-order
PROPPATCH /calendars/__uids__//tasks/ -> calendar-timezone

PROPFIND /calendars/__uids__//calendar/ ->
    getctag      37_63
    sync-token   data:,37_63/

REPORT /calendars/__uids__//calendar/ ->
    getcontenttype
    getetag

REPORT /calendars/__uids__//calendar/
    getcontenttype
    getetag

PROPFIND /calendars/__uids__// -> checksum-versions ???
PROPFIND /calendars/__uids__//calendar/ -> getctag sync-token
PROPFIND /calendars/__uids__//calendar/
    getcontenttype   httpd/unix-directory
    getetag          ""
PROPFIND /calendars/__uids__// -> (again?) checksum-versions
PROPFIND /calendars/__uids__//tasks/ -> getctag sync-token
PROPFIND /calendars/__uids__//tasks/ -> getcontenttype getetag
PROPFIND /calendars/__uids__//inbox/ -> getctag sync-token
PROPFIND /calendars/__uids__//inbox/ -> getcontenttype getetag
PROPFIND /calendars/__uids__//tasks/ -> getctag sync-token
PROPFIND /calendars/__uids__//tasks/ -> getcontenttype getetag
PROPFIND /calendars/__uids__//notification/ -> getctag sync-token
PROPFIND /calendars/__uids__//notification/ -> notificationtype getetag

REPORT /principals/__uids__//
    calendar-proxy-write-for  calendar-user-address-set email-address-set displayname
    calendar-proxy-read-for   calendar-user-address-set email-address-set displayname

REPORT /calendars/__uids__//
    sync-collection
        sync-token
        sync-level
        *lots of properties*

PROPFIND /calendars/__uids__//inbox/ getctag sync-token

PROPFIND /principals/__uids__//
    calendar-proxy-write-for  calendar-user-address-set email-address-set displayname
    calendar-proxy-read-for   calendar-user-address-set email-address-set displayname

----------------------------------------------------------------
Deep Refresh (CMD + SHIFT + R)

PROPFIND /principals/__uids__//
OPTIONS /principals/__uids__/10000000-0000-0000-0000-000000000001/
REPORT /principals/ principal-search-property-set
PROPFIND /calendars/__uids__/10000000-0000-0000-0000-000000000001/inbox/ calendar-availability
PROPFIND /calendars/__uids__/10000000-0000-0000-0000-000000000001/ Depth 1
PROPFIND on calendar/tasks/inbox/notifications as before

calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11/StartupProfile

PROPFIND /.well-known/caldav - startup_well_known_propfind
PROPFIND /principals/ - startup_principal_initial_propfind
PROPFIND /principals/__uids__// - startup_principal_propfind
REPORT /principals/ - startup_principals_report
PROPFIND /calendars/__uids__//inbox/ - ??? calendar-availability
PROPFIND /calendars/__uids__// - poll_calendar_home_depth1_propfind
PROPPATCH /calendars/__uids__// - startup_calendarhome_default_alarm_date_proppatch
PROPPATCH /calendars/__uids__// - startup_calendarhome_default_alarm_datetime_proppatch
PROPPATCH /calendars/__uids__//calendar/ - startup_calendar_order_proppatch
PROPPATCH /calendars/__uids__//calendar/ - startup_calendar_displayname_proppatch
PROPPATCH /calendars/__uids__//calendar/ - startup_calendar_color_proppatch
PROPPATCH /calendars/__uids__//calendar/ - startup_calendar_timezone_proppatch
PROPPATCH /calendars/__uids__//tasks/ - startup_calendar_order_proppatch
PROPPATCH /calendars/__uids__//tasks/ - startup_calendar_displayname_proppatch
PROPPATCH /calendars/__uids__//tasks/ - startup_calendar_color_proppatch
PROPPATCH /calendars/__uids__//tasks/ - startup_calendar_timezone_proppatch
PROPFIND /calendars/__uids__//calendar/ - poll_calendar_propfind
REPORT /calendars/__uids__//calendar/ - startup_query_events_depth1_report.request
PROPFIND /calendars/__uids__//calendar/ - poll_calendar_propfind
PROPFIND /calendars/__uids__//calendar/ - poll_calendar_depth1_propfind
PROPFIND /calendars/__uids__//tasks/ - poll_calendar_propfind
PROPFIND /calendars/__uids__//tasks/ - poll_calendar_depth1_propfind
PROPFIND /calendars/__uids__//inbox/ - poll_calendar_propfind
PROPFIND /calendars/__uids__//inbox/ - poll_calendar_depth1_propfind
PROPFIND
/calendars/__uids__//tasks/ - poll_calendar_propfind PROPFIND /calendars/__uids__//tasks/ - poll_calendar_depth1_propfind PROPFIND /calendars/__uids__//notification/ - poll_calendar_propfind PROPFIND /calendars/__uids__//notification/ - poll_notification_depth1_propfind REPORT /principals/__uids__// calendar-proxy-write-for calendar-user-address-set email-address-set displayname calendar-proxy-read-for calendar-user-address-set email-address-set displayname REPORT /calendars/__uids__// sync-collection sync-token sync-level *lots of properties* PROPFIND /calendars/__uids__//inbox/ getctag sync-token PROPFIND /principals/__uids__// calendar-proxy-write-for calendar-user-address-set email-address-set displayname calendar-proxy-read-for calendar-user-address-set email-address-set displayname notification_multiget_report_hrefs.request000066400000000000000000000000531315003562600376060ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 %(href)s notification_sync.request000066400000000000000000000004301315003562600341450ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 %(sync-token)s 1 poll_calendar_depth1_propfind.request000066400000000000000000000002211315003562600363660ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 poll_calendar_propfind.request000066400000000000000000000002651315003562600351310ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 poll_calendarhome_depth1_propfind.request000066400000000000000000000043711315003562600372510ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 poll_calendarhome_sync.request000066400000000000000000000044061315003562600351360ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 %(sync-token)s 1 
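The `%(sync-token)s 1` fragments above (notification_sync.request, poll_calendarhome_sync.request) show that the sync request templates use Python %-style dict substitution, with the token echoed from the previous sync-collection REPORT response. A minimal sketch of that round trip; the surrounding XML is an assumption here, since the dump stripped the templates' markup:

```python
# Sketch of filling a sync-collection REPORT template. The %-placeholder
# style matches the %(sync-token)s fragments in the request files; the
# XML wrapper below is illustrative, not copied from the stripped template.
SYNC_TEMPLATE = (
    '<?xml version="1.0" encoding="utf-8"?>\n'
    '<sync-collection xmlns="DAV:">\n'
    '  <sync-token>%(sync-token)s</sync-token>\n'
    '  <sync-level>1</sync-level>\n'
    '  <prop><getetag/></prop>\n'
    '</sync-collection>\n'
)


def render_sync_request(token):
    # The first poll sends an empty token; subsequent polls echo back
    # whatever token the server returned last time.
    return SYNC_TEMPLATE % {"sync-token": token or ""}


body = render_sync_request("data:,37_63")
```

The client stores the `DAV:sync-token` from each multistatus response and feeds it into the next poll, so only changes since the previous poll are reported.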
poll_inbox_propfind.request000066400000000000000000000002221315003562600344700ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 poll_notification_depth1_propfind.request000066400000000000000000000002731315003562600373120ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11/post_freebusy.request000066400000000000000000000005251315003562600334000ustar00rootroot00000000000000BEGIN:VCALENDAR VERSION:2.0 METHOD:REPLY PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN BEGIN:VFREEBUSY UID:4288F0F3-5C5B-4DF4-9AD8-B1E5FE3F5B97 DTSTART:20150804T211500Z DTEND:20150804T231500Z ATTENDEE:urn:uuid:30000000-0000-0000-0000-000000000005 DTSTAMP:20150727T203410Z ORGANIZER:mailto:user01@example.com END:VFREEBUSY END:VCALENDARprincipal_search_report.request000066400000000000000000000007641315003562600353360ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 {searchTokens} report_principal_search.request000066400000000000000000000010201315003562600353200ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 %(search)s startup_calendar_color_proppatch.request000066400000000000000000000003451315003562600372410ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 #FD8208FFstartup_calendar_description_proppatch.request.xml000066400000000000000000000003441315003562600412440ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 some descriptionstartup_calendar_displayname_proppatch.request000066400000000000000000000002421315003562600404250ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 
calendarstartup_calendar_order_proppatch.request000066400000000000000000000003061315003562600372330ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 1 startup_calendar_timezone_proppatch.request000066400000000000000000000013541315003562600377560ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//Mac OS X 10.11//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:America/Los_Angeles BEGIN:DAYLIGHT TZOFFSETFROM:-0800 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU DTSTART:20070311T020000 TZNAME:PDT TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:STANDARD TZOFFSETFROM:-0700 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU DTSTART:20071104T020000 TZNAME:PST TZOFFSETTO:-0800 END:STANDARD END:VTIMEZONE END:VCALENDAR startup_calendar_transparent_proppatch.request000066400000000000000000000007251315003562600404660ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 startup_calendarhome_default_alarm_date_proppatch.request000066400000000000000000000006511315003562600425710ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 BEGIN:VALARM X-WR-ALARMUID:49F29226-D2D7-4464-AE22-0147EDEFB2B4 UID:49F29226-D2D7-4464-AE22-0147EDEFB2B4 TRIGGER:-PT15H ATTACH;VALUE=URI:Basso ACTION:AUDIO END:VALARM startup_calendarhome_default_alarm_datetime_proppatch.request000066400000000000000000000006561315003562600434550ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 BEGIN:VALARM X-WR-ALARMUID:4AD03A33-54A6-42BE-A157-47273DD60803 UID:4AD03A33-54A6-42BE-A157-47273DD60803 TRIGGER;VALUE=DATE-TIME:19760401T005545Z ACTION:NONE END:VALARM startup_create_calendar.request000066400000000000000000000023131315003562600353030ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 
{order} #{color} BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//Mac OS X 10.11//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:America/Los_Angeles BEGIN:DAYLIGHT TZOFFSETFROM:-0800 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU DTSTART:20070311T020000 TZNAME:PDT TZOFFSETTO:-0700 END:DAYLIGHT BEGIN:STANDARD TZOFFSETFROM:-0700 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU DTSTART:20071104T020000 TZNAME:PST TZOFFSETTO:-0800 END:STANDARD END:VTIMEZONE END:VCALENDAR {name} startup_delegate_principal_propfind.request000066400000000000000000000014671315003562600377340ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 startup_principal_expand.request000066400000000000000000000014001315003562600355230ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 startup_principal_initial_propfind.request000066400000000000000000000002651315003562600376060ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 startup_principal_propfind.request000066400000000000000000000013471315003562600360770ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 startup_principals_report.request000066400000000000000000000001311315003562600357420ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 startup_query_events_depth1_report.request000066400000000000000000000006351315003562600376050ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 startup_well_known_propfind.request000066400000000000000000000002651315003562600362730ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_11 
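The VTIMEZONE blocks in the timezone proppatch and create-calendar templates above encode the US daylight-saving transitions as RRULEs: `FREQ=YEARLY;BYMONTH=3;BYDAY=2SU` (second Sunday in March) and `FREQ=YEARLY;BYMONTH=11;BYDAY=1SU` (first Sunday in November). A small illustration of evaluating those two rules for a given year (helper names are my own):

```python
import datetime


def nth_weekday(year, month, weekday, n):
    """Date of the n-th given weekday (Mon=0 .. Sun=6) in a month."""
    first = datetime.date(year, month, 1)
    offset = (weekday - first.weekday()) % 7
    return first + datetime.timedelta(days=offset + 7 * (n - 1))


def us_dst_bounds(year):
    # BYMONTH=3;BYDAY=2SU and BYMONTH=11;BYDAY=1SU from the VTIMEZONE RRULEs
    start = nth_weekday(year, 3, 6, 2)   # second Sunday in March -> DAYLIGHT
    end = nth_weekday(year, 11, 6, 1)    # first Sunday in November -> STANDARD
    return start, end
```

For 2007 this yields March 11 and November 4, matching the `DTSTART:20070311T020000` and `DTSTART:20071104T020000` anchors in the templates.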
calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6/000077500000000000000000000000001315003562600270175ustar00rootroot00000000000000poll_calendar_multiget.request000066400000000000000000000003641315003562600350660ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 %(hrefs)s poll_calendar_multiget_hrefs.request000066400000000000000000000000361315003562600362510ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 %(href)s poll_calendar_propfind.request000066400000000000000000000002651315003562600350550ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 poll_calendar_propfind_d1.request000066400000000000000000000003221315003562600354330ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 poll_calendarhome_propfind.request000066400000000000000000000015371315003562600357310ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 poll_notification_propfind_d1.request000066400000000000000000000002731315003562600363550ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 post_availability.request000066400000000000000000000004521315003562600340720ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6BEGIN:VCALENDAR CALSCALE:GREGORIAN VERSION:2.0 METHOD:REQUEST PRODID:-//Apple Inc.//iCal 4.0.3//EN BEGIN:VFREEBUSY UID:%(vfreebusy-uid)s DTEND:%(end)s %(attendees)sDTSTART:%(start)s %(event-mask)sDTSTAMP:%(now)s ORGANIZER:%(organizer)s SUMMARY:%(summary)s END:VFREEBUSY END:VCALENDAR startup_calendar_color_proppatch.request000066400000000000000000000003161315003562600371630ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 #882F00FF 
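The post_availability.request template above places its placeholder at the start of a line (`%(attendees)sDTSTART:`), which implies the substituted value must carry its own trailing newline per ATTENDEE line. A rendering sketch, with an abbreviated template for clarity (the real one also fills `%(event-mask)s`, `%(now)s`, `%(organizer)s`, and `%(summary)s`):

```python
# Rendering sketch for the VFREEBUSY POST template. Note how the attendee
# block must end each ATTENDEE line with "\n" so that the following
# DTSTART property lands on its own line after %-substitution.
TEMPLATE = (
    "BEGIN:VCALENDAR\n"
    "VERSION:2.0\n"
    "METHOD:REQUEST\n"
    "BEGIN:VFREEBUSY\n"
    "UID:%(vfreebusy-uid)s\n"
    "DTEND:%(end)s\n"
    "%(attendees)sDTSTART:%(start)s\n"
    "END:VFREEBUSY\n"
    "END:VCALENDAR\n"
)


def render_freebusy(uid, start, end, attendees):
    attendee_block = "".join("ATTENDEE:%s\n" % a for a in attendees)
    return TEMPLATE % {
        "vfreebusy-uid": uid,
        "start": start,
        "end": end,
        "attendees": attendee_block,
    }
```

Forgetting the trailing newline in the attendee block would fuse the last ATTENDEE property with DTSTART, producing an invalid iCalendar body.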
startup_calendar_order_proppatch.request000066400000000000000000000003061315003562600371570ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 1 startup_calendar_timezone_proppatch.request000066400000000000000000000013461315003562600377030ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 5.0.2//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:America/New_York BEGIN:DAYLIGHT TZOFFSETFROM:-0500 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU DTSTART:20070311T020000 TZNAME:EDT TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD TZOFFSETFROM:-0400 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU DTSTART:20071104T020000 TZNAME:EST TZOFFSETTO:-0500 END:STANDARD END:VTIMEZONE END:VCALENDAR startup_notification_propfind.request000066400000000000000000000003221315003562600365200ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 startup_principal_expand.request000066400000000000000000000011451315003562600354550ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 startup_principal_propfind.request000066400000000000000000000007331315003562600360210ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 startup_principal_propfind_initial.request000066400000000000000000000002651315003562600375320ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 startup_principals_report.request000066400000000000000000000001341315003562600356710ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 startup_well_known.request000066400000000000000000000002651315003562600343160ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 
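The startup_well_known request above probes `/.well-known/caldav`, the RFC 6764 bootstrap path: the server answers with a redirect whose Location header points at the real CalDAV context path, and the client resolves it against the original URL. A sketch of that resolution step (function name is my own):

```python
# Sketch of the .well-known bootstrap implied by startup_well_known.request:
# PROPFIND /.well-known/caldav, then resolve the redirect's Location header,
# which may be absolute or relative (RFC 6764).
try:
    from urllib.parse import urljoin  # Python 3
except ImportError:
    from urlparse import urljoin      # Python 2, as used elsewhere in the sim


def resolve_context_path(base, location_header):
    # urljoin handles both absolute and server-relative Location values.
    return urljoin(base + "/.well-known/caldav", location_header)
```

Only after this resolution does the client issue the authenticated principal PROPFINDs seen in the profiles.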
user_list_principal_property_search.request000066400000000000000000000016521315003562600377210ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_6 %(displayname)s%(email)s%(firstname)s%(lastname)s calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7/000077500000000000000000000000001315003562600270205ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7/Profile000066400000000000000000000066521315003562600303540ustar00rootroot00000000000000** Initial account setup PROPFIND well-known unauthorized - startup_well_known.request PROPFIND / unauthorized - startup_well_known.request HEAD well-known unauthorized PROPFIND / unauthorized - startup_well_known.request HEAD / unauthorized PROPFIND / unauthorized - startup_well_known.request PROPFIND /calendar/dav/user01/user/ unauthorized - startup_principal_propfind_initial.request PROPFIND /principals/users/user01/ unauthorized - startup_principal_propfind_initial.request PROPFIND /principals/users/user01/ - startup_principal_propfind_initial.request PROPFIND /principals/users/user01/ unauthorized - startup_principal_propfind.request PROPFIND /principals/users/user01/ - startup_principal_propfind.request OPTIONS /principals/__uids__/user01/ PROPFIND /principals/users/user01/ - startup_principal_propfind.request OPTIONS /principals/__uids__/user01/ REPORT /principals/ unauthorized - startup_principals_report.request REPORT /principals/ - startup_principals_report.request PROPFIND /calendars/__uids__/user01/ unauthorized - poll_calendarhome_propfind.request PROPFIND /calendars/__uids__/user01/ - poll_calendarhome_propfind.request PROPPATCH /calendars/__uids__/user01/tasks/ - startup_calendar_color_proppatch.request PROPPATCH /calendars/__uids__/user01/tasks/ - startup_calendar_order_proppatch.request PROPPATCH /calendars/__uids__/user01/tasks/ - startup_calendar_timezone_proppatch.request PROPPATCH 
/calendars/__uids__/user01/calendar/ - startup_calendar_color_proppatch.request PROPPATCH /calendars/__uids__/user01/calendar/ - startup_calendar_order_proppatch.request PROPPATCH /calendars/__uids__/user01/calendar/ - startup_calendar_timezone_proppatch.request PROPFIND /calendars/__uids__/user01/tasks/ - poll_calendar_propfind.request PROPFIND /calendars/__uids__/user01/tasks/ - poll_calendar_propfind_d1.request PROPFIND /calendars/__uids__/user01/calendar/ - poll_calendar_propfind.request PROPFIND /calendars/__uids__/user01/calendar/ - poll_calendar_propfind_d1.request PROPFIND /calendars/__uids__/user01/inbox/ - poll_calendar_propfind.request PROPFIND /calendars/__uids__/user01/inbox/ - poll_calendar_propfind_d1.request PROPFIND /calendars/__uids__/user01/notification/ - poll_calendar_propfind.request PROPFIND /calendars/__uids__/user01/notification/ - poll_notification_propfind_d1.request REPORT /principals/__uids__/user01/ - startup_principal_expand.request ** for each delegate PROPFIND /principals/__uids__/resource10/ - startup_delegate_principal_propfind.request OPTIONS /principals/__uids__/user01/ REPORT /principals/ - startup_principals_report.request ** Polling - with sync PROPFIND /calendars/__uids__/user01/ - poll_calendarhome_propfind.request REPORT /calendars/__uids__/user01/calendar/ - poll_calendar_sync.request PROPFIND /calendars/__uids__/user01/calendar/ - poll_calendar_multiget.request ** Polling - without sync PROPFIND /calendars/__uids__/user01/ - poll_calendarhome_propfind.request PROPFIND /calendars/__uids__/user01/calendar/ - poll_calendar_propfind.request PROPFIND /calendars/__uids__/user01/calendar/ - poll_calendar_propfind_d1.request REPORT /calendars/__uids__/user01/calendar/ - poll_calendar_multiget.request PROPFIND /calendars/__uids__/user01/notification/ - poll_calendar_propfind.request PROPFIND /calendars/__uids__/user01/notification/ - poll_notification_propfind_d1.request 
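The OS_X_10_7 Profile above distinguishes "Polling - with sync" (a sync-collection REPORT followed by a multiget) from "Polling - without sync" (Depth:1 PROPFIND plus multiget). The selection hinges on whether the collection advertised a sync-token. A shape-only sketch of that branch; the callables stand in for the poll_calendar_sync and poll_calendar_propfind_d1 requests and are hypothetical names:

```python
# Shape of the two polling paths from the Profile: collections that expose
# a DAV:sync-token get the cheaper sync-collection REPORT; the rest fall
# back to a full Depth:1 PROPFIND of the collection.
def poll_calendar(calendar, do_sync_report, do_depth1_propfind):
    """calendar: dict with an optional 'sync-token' key; the two callables
    issue the actual requests and return whatever hrefs changed."""
    token = calendar.get("sync-token")
    if token:
        return do_sync_report(token)
    return do_depth1_propfind()
```

Either path ends the same way: the changed hrefs are fetched in bulk with a calendar-multiget REPORT.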
poll_calendar_multiget.request000066400000000000000000000003521315003562600350640ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 %(hrefs)s poll_calendar_multiget_hrefs.request000066400000000000000000000000531315003562600362510ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 %(href)s poll_calendar_propfind.request000066400000000000000000000002651315003562600350560ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 poll_calendar_propfind_d1.request000066400000000000000000000002211315003562600354320ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 poll_calendar_sync.request000066400000000000000000000003151315003562600342050ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 %(sync-token)s poll_calendarhome_propfind.request000066400000000000000000000036051315003562600357300ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 poll_notification_propfind_d1.request000066400000000000000000000002731315003562600363560ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 post_availability.request000066400000000000000000000004521315003562600340730ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7BEGIN:VCALENDAR CALSCALE:GREGORIAN VERSION:2.0 METHOD:REQUEST PRODID:-//Apple Inc.//iCal 4.0.3//EN BEGIN:VFREEBUSY UID:%(vfreebusy-uid)s DTEND:%(end)s %(attendees)sDTSTART:%(start)s %(event-mask)sDTSTAMP:%(now)s ORGANIZER:%(organizer)s SUMMARY:%(summary)s END:VFREEBUSY END:VCALENDAR 
startup_calendar_color_proppatch.request000066400000000000000000000003161315003562600371640ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 #882F00FF startup_calendar_order_proppatch.request000066400000000000000000000003061315003562600371600ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 1 startup_calendar_timezone_proppatch.request000066400000000000000000000013461315003562600377040ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 5.0.2//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:America/New_York BEGIN:DAYLIGHT TZOFFSETFROM:-0500 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU DTSTART:20070311T020000 TZNAME:EDT TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD TZOFFSETFROM:-0400 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU DTSTART:20071104T020000 TZNAME:EST TZOFFSETTO:-0500 END:STANDARD END:VTIMEZONE END:VCALENDAR startup_delegate_principal_propfind.request000066400000000000000000000014671315003562600376610ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 startup_principal_expand.request000066400000000000000000000014001315003562600354500ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 startup_principal_propfind.request000066400000000000000000000014671315003562600360270ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 startup_principal_propfind_initial.request000066400000000000000000000002651315003562600375330ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 startup_principals_report.request000066400000000000000000000001341315003562600356720ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 
startup_well_known.request000066400000000000000000000002651315003562600343170ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 user_list_principal_property_search.request000066400000000000000000000016521315003562600377220ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/OS_X_10_7 %(displayname)s%(email)s%(firstname)s%(lastname)s calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5/000077500000000000000000000000001315003562600264005ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5/Profile000066400000000000000000000044461315003562600277330ustar00rootroot00000000000000** Initial account setup PROPFIND well-known unauthorized - startup_well_known.request PROPFIND / unauthorized - startup_well_known.request PROPFIND /principals/ unauthorized - startup_principal_propfind_initial.request PROPFIND /principals/ authorized - startup_principal_propfind_initial.request OPTIONS /principals/__uids__/user01/ unauthorized OPTIONS /principals/__uids__/user01/ authorized PROPFIND /principals/users/user01/ unauthorized - startup_principal_propfind.request PROPFIND /principals/users/user01/ - startup_principal_propfind.request REPORT /principals/ unauthorized - startup_principals_report.request REPORT /principals/ - startup_principals_report.request PROPFIND /calendars/__uids__/user01/ unauthorized - poll_calendarhome_propfind.request PROPFIND /calendars/__uids__/user01/ - poll_calendarhome_propfind.request PROPPATCH /calendars/__uids__/user01/tasks/ - startup_calendar_color_proppatch.request PROPPATCH /calendars/__uids__/user01/tasks/ - startup_calendar_order_proppatch.request PROPPATCH /calendars/__uids__/user01/calendar/ - startup_calendar_color_proppatch.request PROPPATCH /calendars/__uids__/user01/calendar/ - startup_calendar_order_proppatch.request PROPFIND /calendars/__uids__/user01/tasks/ - 
poll_calendar_propfind.request PROPFIND /calendars/__uids__/user01/tasks/ - poll_calendar_vtodo_query.request PROPFIND /calendars/__uids__/user01/calendar/ - poll_calendar_propfind.request PROPFIND /calendars/__uids__/user01/calendar/ - poll_calendar_vevent_tr_query.request PROPFIND /calendars/__uids__/user01/inbox/ - poll_calendar_propfind.request PROPFIND /calendars/__uids__/user01/inbox/ - poll_calendar_propfind_d1.request ** Polling - without sync PROPFIND /calendars/__uids__/user01/ - poll_calendarhome_propfind.request PROPFIND /calendars/__uids__/user01/calendar/ - poll_calendar_propfind.request REPORT /calendars/__uids__/user01/calendar/ - poll_calendar_propfind_d1.request REPORT /calendars/__uids__/user01/calendar/ - poll_calendar_multiget.request PROPFIND /calendars/__uids__/user01/calendar/ - poll_calendar_propfind.request REPORT /calendars/__uids__/user01/tasks/ - poll_calendar_vevent_tr_query.request REPORT /calendars/__uids__/user01/tasks/ - poll_calendar_vtodo_query.request poll_calendar_multiget.request000066400000000000000000000003521315003562600344440ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5 %(hrefs)s poll_calendar_multiget_hrefs.request000066400000000000000000000000531315003562600356310ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5 %(href)s poll_calendar_propfind.request000066400000000000000000000002651315003562600344360ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5 poll_calendar_propfind_d1.request000066400000000000000000000002211315003562600350120ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5 poll_calendar_vevent_tr_query.request000066400000000000000000000006141315003562600360540ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5 
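The XML of poll_calendar_vevent_tr_query.request was stripped from this dump, but an RFC 4791 calendar-query with a VEVENT time-range filter, which is what the file name and the iOS_5 Profile suggest it contained, typically has the following shape. Built here with ElementTree as an illustration, not a reconstruction of the exact template:

```python
import xml.etree.ElementTree as ET

C = "urn:ietf:params:xml:ns:caldav"
D = "DAV:"


def vevent_time_range_query(start, end):
    """Build a CALDAV:calendar-query body asking for the etag and data of
    every VEVENT overlapping [start, end) (UTC timestamps)."""
    ET.register_namespace("D", D)
    ET.register_namespace("C", C)
    query = ET.Element("{%s}calendar-query" % C)
    prop = ET.SubElement(query, "{%s}prop" % D)
    ET.SubElement(prop, "{%s}getetag" % D)
    ET.SubElement(prop, "{%s}calendar-data" % C)
    filt = ET.SubElement(query, "{%s}filter" % C)
    vcal = ET.SubElement(filt, "{%s}comp-filter" % C, name="VCALENDAR")
    vevent = ET.SubElement(vcal, "{%s}comp-filter" % C, name="VEVENT")
    ET.SubElement(vevent, "{%s}time-range" % C, start=start, end=end)
    return ET.tostring(query)


body = vevent_time_range_query("20150801T000000Z", "20150901T000000Z")
```

The companion poll_calendar_vtodo_query presumably differs only in filtering on a VTODO comp-filter instead of VEVENT with a time range.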
poll_calendar_vtodo_query.request000066400000000000000000000004761315003562600352010ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5 poll_calendarhome_propfind.request000066400000000000000000000036051315003562600353100ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5 startup_calendar_color_proppatch.request000066400000000000000000000003141315003562600365420ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5 #711A76 startup_calendar_order_proppatch.request000066400000000000000000000003061315003562600365400ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5 0 startup_principal_propfind.request000066400000000000000000000014671315003562600354070ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5 startup_principal_propfind_initial.request000066400000000000000000000002651315003562600371130ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5 startup_principals_report.request000066400000000000000000000001311315003562600352470ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5 calendarserver-9.1+dfsg/contrib/performance/loadtest/request-data/iOS_5/startup_well_known.request000066400000000000000000000002651315003562600337560ustar00rootroot00000000000000 calendarserver-9.1+dfsg/contrib/performance/loadtest/setup_directory.py000066400000000000000000000124231315003562600266630ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function from contrib.od import dsattributes from contrib.od import odframework from getopt import getopt, GetoptError from getpass import getpass import os import sys def usage(e=None): name = os.path.basename(sys.argv[0]) print("usage: %s [options]" % (name,)) print("") print(" Configures OD master directories for performance simulator") print(" testing and generates the simulators accounts CSV file.") print("") print("options:") print(" -n : number of users to generate [100]") print(" -u : directory admin user name [diradmin]") print(" -p : directory admin password") print(" -h --help : print this help and exit") if e: sys.exit(1) else: sys.exit(0) def lookupRecordName(node, recordType, ctr): recordName = "user{:02d}".format(ctr) query, error = odframework.ODQuery.queryWithNode_forRecordTypes_attribute_matchType_queryValues_returnAttributes_maximumResults_error_( node, recordType, dsattributes.kDSNAttrRecordName, dsattributes.eDSExact, recordName, None, 0, None) if error: raise ODError(error) records, error = query.resultsAllowingPartial_error_(False, None) if error: raise ODError(error) if len(records) < 1: return None if len(records) > 1: raise ODError("Multiple records for '%s' were found" % (recordName,)) return records[0] def createRecord(node, recordType, ctr): recordName = "user{:02d}".format(ctr) attrs = { dsattributes.kDS1AttrFirstName: ["User"], dsattributes.kDS1AttrLastName: ["{:02d}".format(ctr)], dsattributes.kDS1AttrDistinguishedName: ["User {:02d}".format(ctr)], dsattributes.kDSNAttrEMailAddress: 
["user{:02d}@example.com".format(ctr)], dsattributes.kDS1AttrGeneratedUID: ["10000000-0000-0000-0000-000000000{:03d}".format(ctr)], dsattributes.kDS1AttrUniqueID: ["33{:03d}".format(ctr)], dsattributes.kDS1AttrPrimaryGroupID: ["20"], } record, error = node.createRecordWithRecordType_name_attributes_error_( recordType, recordName, attrs, None) if error: print(error) raise ODError(error) return record def main(): numUsers = 100 masterUser = "diradmin" masterPassword = "" try: (optargs, args) = getopt(sys.argv[1:], "hn:p:u:", ["help"]) except GetoptError, e: usage(e) for opt, arg in optargs: if opt in ("-h", "--help"): usage() elif opt == "-n": numUsers = int(arg) elif opt == "-u": masterUser = arg elif opt == "-p": masterPassword = arg if len(args) != 0: usage() if not masterPassword: masterPassword = getpass("Directory Admin Password: ") nodeName = "/LDAPv3/127.0.0.1" session = odframework.ODSession.defaultSession() userRecords = [] node, error = odframework.ODNode.nodeWithSession_name_error_(session, nodeName, None) if error: print(error) raise ODError(error) result, error = node.setCredentialsWithRecordType_recordName_password_error_( dsattributes.kDSStdRecordTypeUsers, masterUser, masterPassword, None ) if error: print("Unable to authenticate with directory %s: %s" % (nodeName, error)) raise ODError(error) print("Successfully authenticated with directory %s" % (nodeName,)) print("Creating users within %s:" % (nodeName,)) for ctr in range(numUsers): recordName = "user{:02d}".format(ctr + 1) record = lookupRecordName(node, dsattributes.kDSStdRecordTypeUsers, ctr + 1) if record is None: print("Creating user %s" % (recordName,)) try: record = createRecord(node, dsattributes.kDSStdRecordTypeUsers, ctr + 1) print("Successfully created user %s" % (recordName,)) result, error = record.changePassword_toPassword_error_(None, recordName, None) if error or not result: print("Failed to set password for %s: %s" % (recordName, error)) else: print("Successfully set password for 
%s" % (recordName,)) except ODError, e: print("Failed to create user %s: %s" % (recordName, e)) else: print("User %s already exists" % (recordName,)) if record is not None: userRecords.append(record) class ODError(Exception): def __init__(self, error): self.message = (str(error), error.code()) if __name__ == "__main__": main() calendarserver-9.1+dfsg/contrib/performance/loadtest/sim.py000066400000000000000000000565411315003562600242400ustar00rootroot00000000000000# -*- test-case-name: contrib.performance.loadtest.test_sim -*- ## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
# ## from __future__ import print_function import sys if "twisted.internet.reactor" not in sys.modules: try: from twisted.internet import kqreactor kqreactor.install() except ImportError: from twisted.internet import epollreactor epollreactor.install() from collections import namedtuple, defaultdict from os import environ, mkdir from os.path import isdir from plistlib import readPlist from random import Random from sys import argv, stdout from urlparse import urlsplit from xml.parsers.expat import ExpatError import itertools import json import shutil import socket from twisted.python import context from twisted.python.filepath import FilePath from twisted.python.log import startLogging, addObserver, removeObserver, msg from twisted.python.usage import UsageError, Options from twisted.python.reflect import namedAny from twisted.application.service import Service from twisted.application.service import MultiService from twisted.internet.defer import Deferred from twisted.internet.defer import gatherResults from twisted.internet.defer import inlineCallbacks from twisted.internet.protocol import ProcessProtocol from twisted.web.server import Site from contrib.performance.loadtest.ical import OS_X_10_6 from contrib.performance.loadtest.profiles import Eventer, Inviter, Accepter from contrib.performance.loadtest.population import ( Populator, ProfileType, ClientType, PopulationParameters, SmoothRampUp, CalendarClientSimulator) from contrib.performance.loadtest.webadmin import LoadSimAdminResource from contrib.performance.loadtest.amphub import AMPHub class _DirectoryRecord(object): def __init__(self, uid, password, commonName, email, guid, podID="PodA"): self.uid = uid self.password = password self.commonName = commonName self.email = email self.guid = guid self.podID = podID def __repr__(self): return "user='{}'".format(self.uid) def safeDivision(value, total, factor=1): return value * factor / total if total else 0 def generateRecords( count, uidPattern="user%d", 
passwordPattern="user%d", namePattern="User %d", emailPattern="user%d@example.com", guidPattern="user%d", podID="PodA", ): for i in xrange(count): i += 1 uid = uidPattern % (i,) password = passwordPattern % (i,) name = namePattern % (i,) email = emailPattern % (i,) guid = guidPattern % (i,) yield _DirectoryRecord(uid, password, name, email, guid, podID) def recordsFromCSVFile(path, interleavePods): if path: pathObj = FilePath(path) else: pathObj = FilePath(__file__).sibling("accounts.csv") records = [ _DirectoryRecord(*line.decode('utf-8').split(u',')) for line in pathObj.getContent().splitlines() ] if interleavePods: # For Pods we re-order the record list so that we alternate between the pods recordsByPod = defaultdict(list) for record in records: recordsByPod[record.podID].append(record) records = [] for items in itertools.izip(*[recordsByPod[k] for k in sorted(recordsByPod.keys())]): records.extend(items) return records class LagTrackingReactor(object): """ This reactor wraps another reactor and proxies all attribute access (including method calls). It only changes the behavior of L{IReactorTime.callLater} to insert a C{"lag"} key into the context which delayed function calls are invoked with. This key has a float value which gives the difference in time between when the call was original scheduled and when the call actually took place. """ def __init__(self, reactor): self._reactor = reactor def __getattr__(self, name): return getattr(self._reactor, name) def callLater(self, delay, function, *args, **kwargs): expected = self._reactor.seconds() + delay def modifyContext(): now = self._reactor.seconds() context.call({'lag': now - expected}, function, *args, **kwargs) return self._reactor.callLater(delay, modifyContext) class SimOptions(Options): """ Command line configuration options for the load simulator. 
""" config = None _defaultConfig = FilePath(__file__).sibling("config.plist") _defaultClients = FilePath(__file__).sibling("clients.plist") optParameters = [ ("runtime", "t", None, "Specify the limit (seconds) on the time to run the simulation.", int), ("config", None, _defaultConfig, "Configuration plist file name from which to read simulation parameters.", FilePath), ("clients", None, _defaultClients, "Configuration plist file name from which to read client parameters.", FilePath), ] def opt_logfile(self, filename): """ Enable normal logging to some file. - for stdout. """ if filename == "-": fObj = stdout else: fObj = file(filename, "a") startLogging(fObj, setStdout=False) def opt_debug(self): """ Enable Deferred and Failure debugging. """ self.opt_debug_deferred() self.opt_debug_failure() def opt_debug_deferred(self): """ Enable Deferred debugging. """ from twisted.internet.defer import setDebugging setDebugging(True) def opt_debug_failure(self): """ Enable Failure debugging. """ from twisted.python.failure import startDebugMode startDebugMode() def postOptions(self): try: configFile = self['config'].open() except IOError, e: raise UsageError("--config %s: %s" % ( self['config'].path, e.strerror)) try: try: self.config = readPlist(configFile) except ExpatError, e: raise UsageError("--config %s: %s" % (self['config'].path, e)) finally: configFile.close() try: clientFile = self['clients'].open() except IOError, e: raise UsageError("--clients %s: %s" % ( self['clients'].path, e.strerror)) try: try: client_config = readPlist(clientFile) self.config["clients"] = client_config["clients"] if "arrivalInterval" in client_config: self.config["arrival"]["params"]["interval"] = client_config["arrivalInterval"] except ExpatError, e: raise UsageError("--clients %s: %s" % (self['clients'].path, e)) finally: clientFile.close() Arrival = namedtuple('Arrival', 'factory parameters') class LoadSimulator(object): """ A L{LoadSimulator} simulates some configuration of calendar 
clients. @type server: C{str} @type arrival: L{Arrival} @type parameters: L{PopulationParameters} @ivar records: A C{list} of L{_DirectoryRecord} instances giving user information about the accounts on the server being put under load. """ def __init__(self, servers, principalPathTemplate, webadminPort, serializationPath, arrival, parameters, observers=None, records=None, reactor=None, runtime=None, workers=None, configTemplate=None, workerID=None, workerCount=1): if reactor is None: from twisted.internet import reactor self.servers = servers self.principalPathTemplate = principalPathTemplate self.webadminPort = webadminPort self.serializationPath = serializationPath self.arrival = arrival self.parameters = parameters self.observers = observers self.reporter = None self.records = records self.reactor = LagTrackingReactor(reactor) self.runtime = runtime self.workers = workers self.configTemplate = configTemplate self.workerID = workerID self.workerCount = workerCount @classmethod def fromCommandLine(cls, args=None, output=stdout): if args is None: args = argv[1:] options = SimOptions() try: options.parseOptions(args) except UsageError, e: raise SystemExit(str(e)) return cls.fromConfig(options.config, options['runtime'], output) @classmethod def fromConfig(cls, config, runtime=None, output=stdout, reactor=None): """ Create a L{LoadSimulator} from a parsed instance of a configuration property list. 
""" servers = config.get('servers') workers = config.get("workers") if workers is None: # Client / place where the simulator actually runs configuration workerID = config.get("workerID", 0) workerCount = config.get("workerCount", 1) configTemplate = None principalPathTemplate = config.get('principalPathTemplate', '/principals/users/%s/') serializationPath = None if 'clientDataSerialization' in config: serializationPath = config['clientDataSerialization']['Path'] if not config['clientDataSerialization']['UseOldData']: if isdir(serializationPath): shutil.rmtree(serializationPath) serializationPath = config['clientDataSerialization']['Path'] if not isdir(serializationPath): try: mkdir(serializationPath) except OSError: print("Unable to create client data serialization directory: %s" % (serializationPath)) print("Please consult the clientDataSerialization stanza of contrib/performance/loadtest/config.plist") raise if 'arrival' in config: arrival = Arrival( namedAny(config['arrival']['factory']), config['arrival']['params']) else: arrival = Arrival( SmoothRampUp, dict(groups=10, groupSize=1, interval=3)) parameters = PopulationParameters() if 'clients' in config: for clientConfig in config['clients']: parameters.addClient( clientConfig["weight"], ClientType( namedAny(clientConfig["software"]), cls._convertParams(clientConfig["params"]), [ ProfileType( namedAny(profile["class"]), cls._convertParams(profile["params"]) ) for profile in clientConfig["profiles"] ])) if not parameters.clients: parameters.addClient(1, ClientType(OS_X_10_6, {}, [Eventer, Inviter, Accepter])) else: # Manager / observer process. 
principalPathTemplate = '' serializationPath = None arrival = None parameters = None workerID = 0 configTemplate = config workerCount = 1 webadminPort = None if 'webadmin' in config: if config['webadmin']['enabled']: webadminPort = config['webadmin']['HTTPPort'] observers = [] if 'observers' in config: for observer in config['observers']: observerName = observer["type"] observerParams = observer["params"] observers.append(namedAny(observerName)(**observerParams)) records = [] if 'accounts' in config: loader = config['accounts']['loader'] params = config['accounts']['params'] records.extend(namedAny(loader)(**params)) records = cls.filterRecords(records, servers) output.write("Loaded {0} accounts.\n".format(len(records))) return cls( servers, principalPathTemplate, webadminPort, serializationPath, arrival, parameters, observers=observers, records=records, runtime=runtime, reactor=reactor, workers=workers, configTemplate=configTemplate, workerID=workerID, workerCount=workerCount, ) @classmethod def _convertParams(cls, params): """ Find parameter values which should be more structured than plistlib is capable of constructing and replace them with the more structured form. Specifically, find keys that end with C{"Distribution"} and convert them into some kind of distribution object using the associated dictionary of keyword arguments. """ for k, v in params.iteritems(): if k.endswith('Distribution'): params[k] = cls._convertDistribution(v) return params @classmethod def _convertDistribution(cls, value): """ Construct and return a new distribution object using the type and params specified by C{value}. """ return namedAny(value['type'])(**value['params']) @classmethod def filterRecords(cls, records, servers): """ Remove records that do not correspond to an enabled pod. 
@param records: list of records to process @type records: L{list} @param servers: dictionary of servers @type servers: L{dict} """ # Get all enabled pod ids enabled = set(filter(lambda podID: servers[podID]["enabled"], servers.keys())) return filter(lambda record: record.podID in enabled, records) @classmethod def main(cls, args=None): simulator = cls.fromCommandLine(args) exitCode = simulator.run() print("Exit code: {}".format(exitCode)) raise SystemExit(exitCode) def createSimulator(self): populator = Populator(Random()) return CalendarClientSimulator( self.records, populator, Random(), self.parameters, self.reactor, self.servers, self.principalPathTemplate, self.serializationPath, self.workerID, self.workerCount, ) def createArrivalPolicy(self): return self.arrival.factory(self.reactor, **self.arrival.parameters) def serviceClasses(self): """ Return a list of L{SimService} subclasses for C{attachServices} to instantiate and attach to the reactor. """ if self.workers is not None: return [ ObserverService, WorkerSpawnerService, ReporterService, ] return [ ObserverService, SimulatorService, ReporterService, ] def attachServices(self, output): self.ms = MultiService() for svcclass in self.serviceClasses(): svcclass(self, output).setServiceParent(self.ms) attachService(self.reactor, self, self.ms) self.startAmpHub() def startAmpHub(self): hostsAndPorts = [] for serverInfo in self.servers.itervalues(): if serverInfo["enabled"]: for host in serverInfo.get("ampPushHosts", []): hostsAndPorts.append((host, serverInfo["ampPushPort"])) if hostsAndPorts: AMPHub.start(hostsAndPorts) def run(self, output=stdout): self.attachServices(output) if self.runtime is not None: self.reactor.callLater(self.runtime, self.stopAndReport) if self.webadminPort: self.reactor.listenTCP(self.webadminPort, Site(LoadSimAdminResource(self))) self.reactor.run() # Return code to indicate pass or fail return 0 if all([len(obs.failures()) == 0 for obs in self.observers]) else 1 def stop(self): if 
self.ms.running: self.updateStats() self.ms.stopService() self.reactor.callLater(5, self.stopAndReport) def shutdown(self): if self.ms.running: self.updateStats() return self.ms.stopService() def updateStats(self): """ Capture server stats and stop. """ for podID, server in self.servers.items(): if server["enabled"] and server["stats"]["enabled"]: _ignore_scheme, hostname, _ignore_path, _ignore_query, _ignore_fragment = urlsplit(server["uri"]) data = self.readStatsSock((hostname.split(":")[0], server["stats"]["Port"],), True) if "Failed" not in data: data = data["stats"]["5m"] if "stats" in data else data["5 Minutes"] result = ( safeDivision(float(data["requests"]), 5 * 60), safeDivision(data["t"], data["requests"]), safeDivision(float(data["slots"]), data["requests"]), safeDivision(data["cpu"], data["requests"]), ) msg(type="sim-expired", podID=podID, reason=result) def stopAndReport(self): """ Runtime has expired - capture server stats and stop. """ self.updateStats() self.reactor.stop() def readStatsSock(self, sockname, useTCP): try: s = socket.socket(socket.AF_INET if useTCP else socket.AF_UNIX, socket.SOCK_STREAM) s.connect(sockname) s.sendall('["stats"]' + "\r\n") data = "" while not data.endswith("\n"): d = s.recv(1024) if d: data += d else: break s.close() data = json.loads(data) except socket.error: data = {"Failed": "Unable to read statistics from server: %s" % (sockname,)} except ValueError: data = {"Failed": "Unable to parse statistics from server: %s" % (sockname,)} data["Server"] = sockname return data def attachService(reactor, loadsim, service): """ Attach a given L{IService} provider to the given L{IReactorCore}; cause it to be started when the reactor starts, and stopped when the reactor stops. """ reactor.callWhenRunning(service.startService) reactor.addSystemEventTrigger('before', 'shutdown', loadsim.shutdown) class SimService(Service, object): """ Base class for services associated with the L{LoadSimulator}. 
""" def __init__(self, loadsim, output): super(SimService, self).__init__() self.loadsim = loadsim self.output = output class ObserverService(SimService): """ A service that adds and removes a L{LoadSimulator}'s set of observers at start and stop time. """ def startService(self): """ Start observing. """ super(ObserverService, self).startService() for obs in self.loadsim.observers: addObserver(obs.observe) def stopService(self): super(ObserverService, self).stopService() for obs in self.loadsim.observers: removeObserver(obs.observe) class SimulatorService(SimService): """ A service that starts the L{CalendarClientSimulator} associated with the L{LoadSimulator} and stops it at shutdown. """ def startService(self): super(SimulatorService, self).startService() self.clientsim = self.loadsim.createSimulator() arrivalPolicy = self.loadsim.createArrivalPolicy() arrivalPolicy.run(self.clientsim) @inlineCallbacks def stopService(self): yield super(SimulatorService, self).stopService() yield self.clientsim.stop() class ReporterService(SimService): """ A service which reports all the results from all the observers on a load simulator when it is stopped. """ def startService(self): """ Start observing. """ super(ReporterService, self).startService() self.loadsim.reporter = self def stopService(self): """ Emit the report to the specified output file. """ super(ReporterService, self).stopService() self.generateReport(self.output) def generateReport(self, output): """ Emit the report to the specified output file. 
""" failures = [] for obs in self.loadsim.observers: obs.report(output) failures.extend(obs.failures()) if failures: output.write('\n*** FAIL\n') output.write('\n'.join(failures)) output.write('\n') else: output.write('\n*** PASS\n') class ProcessProtocolBridge(ProcessProtocol): def __init__(self, spawner, proto): self.spawner = spawner self.proto = proto self.deferred = Deferred() def connectionMade(self): self.transport.getPeer = self.getPeer self.transport.getHost = self.getHost self.proto.makeConnection(self.transport) def getPeer(self): return "Peer:PID:" + str(self.transport.pid) def getHost(self): return "Host:PID:" + str(self.transport.pid) def outReceived(self, data): self.proto.dataReceived(data) def errReceived(self, error): msg("stderr received from " + str(self.transport.pid)) msg(" " + repr(error)) def processEnded(self, reason): self.proto.connectionLost(reason) self.deferred.callback(None) self.spawner.bridges.remove(self) class WorkerSpawnerService(SimService): def startService(self): from contrib.performance.loadtest.ampsim import Manager super(WorkerSpawnerService, self).startService() self.bridges = [] for workerID, worker in enumerate(self.loadsim.workers): bridge = ProcessProtocolBridge( self, Manager(self.loadsim, workerID, len(self.loadsim.workers), self.output) ) self.bridges.append(bridge) sh = '/bin/sh' self.loadsim.reactor.spawnProcess( bridge, sh, [sh, "-c", worker], env=environ ) def stopService(self): TERMINATE_TIMEOUT = 30.0 def killThemAll(name): for bridge in self.bridges: bridge.transport.signalProcess(name) killThemAll("TERM") self.loadsim.reactor.callLater(TERMINATE_TIMEOUT, killThemAll, "KILL") return gatherResults([bridge.deferred for bridge in self.bridges]) main = LoadSimulator.main if __name__ == '__main__': main() 
calendarserver-9.1+dfsg/contrib/performance/loadtest/standard-configs/000077500000000000000000000000001315003562600263115ustar00rootroot00000000000000accelerated-activity-clients.plist000066400000000000000000000626531315003562600350500ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/standard-configs clients software contrib.performance.loadtest.ical.OS_X_10_11 params title main calendarHomePollInterval 30 supportPush supportAmpPush unauthenticatedPercentage 40 profiles class contrib.performance.loadtest.profiles.Eventer params enabled interval 120 eventStartDistribution type contrib.performance.stats.WorkDistribution params daysOfWeek mon tue wed thu fri beginHour 8 endHour 16 tzname America/Los_Angeles recurrenceDistribution type contrib.performance.stats.RecurrenceDistribution params allowRecurrence weights none 50 daily 10 weekly 20 monthly 2 yearly 1 dailylimit 2 weeklylimit 5 workdays 10 fileAttachPercentage 1 fileSizeDistribution type contrib.performance.stats.NormalDistribution params mu 500000 sigma 100000 class contrib.performance.loadtest.profiles.AlarmAcknowledger params enabled interval 15 pastTheHour 0 15 30 45 eventStartDistribution type contrib.performance.stats.WorkDistribution params daysOfWeek mon tue wed thu fri beginHour 8 endHour 16 tzname America/Los_Angeles recurrenceDistribution type contrib.performance.stats.RecurrenceDistribution params allowRecurrence weights none 50 daily 25 weekly 25 monthly 0 yearly 0 dailylimit 0 weeklylimit 0 workdays 0 class contrib.performance.loadtest.profiles.TitleChanger params enabled interval 120 class contrib.performance.loadtest.profiles.DescriptionChanger params enabled interval 120 descriptionLengthDistribution type contrib.performance.stats.LogNormalDistribution params mode 450 median 650 maximum 1000000 class contrib.performance.loadtest.profiles.EventCountLimiter params enabled interval 60 eventCountLimit 100 class contrib.performance.loadtest.profiles.CalendarSharer 
params enabled interval 300 maxSharees 3 class contrib.performance.loadtest.profiles.Inviter params enabled sendInvitationDistribution type contrib.performance.stats.NormalDistribution params mu 120 sigma 5 inviteeDistribution type contrib.performance.stats.UniformIntegerDistribution params min 0 max 99 inviteeClumping inviteeCountDistribution type contrib.performance.stats.LogNormalDistribution params mode 6 median 8 maximum 60 inviteeLookupPercentage 50 eventStartDistribution type contrib.performance.stats.WorkDistribution params daysOfWeek mon tue wed thu fri beginHour 8 endHour 16 tzname America/Los_Angeles recurrenceDistribution type contrib.performance.stats.RecurrenceDistribution params allowRecurrence weights none 50 daily 10 weekly 20 monthly 2 yearly 1 dailylimit 2 weeklylimit 5 workdays 10 fileAttachPercentage 1 fileSizeDistribution type contrib.performance.stats.NormalDistribution params mu 500000 sigma 100000 class contrib.performance.loadtest.profiles.Accepter params enabled acceptDelayDistribution type contrib.performance.stats.LogNormalDistribution params mode 300 median 1800 class contrib.performance.loadtest.profiles.AttachmentDownloader params enabled class contrib.performance.loadtest.profiles.Tasker params enabled interval 300 taskDueDistribution type contrib.performance.stats.WorkDistribution params daysOfWeek mon tue wed thu fri beginHour 8 endHour 16 tzname America/Los_Angeles class contrib.performance.loadtest.profiles.TimeRanger params enabled interval 300 class contrib.performance.loadtest.profiles.DeepRefresher params enabled interval 60 class contrib.performance.loadtest.profiles.APNSSubscriber params enabled interval 60 class contrib.performance.loadtest.profiles.Resetter params enabled interval 600 weight 1 software contrib.performance.loadtest.ical.OS_X_10_11 params title idle calendarHomePollInterval 30 supportPush supportAmpPush unauthenticatedPercentage 40 profiles class contrib.performance.loadtest.profiles.AttachmentDownloader 
params enabled weight 1 accelerated-activity-config.plist000066400000000000000000000165361315003562600346530ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/loadtest/standard-configs servers PodA enabled uri https://localhost:8443 ampPushHosts localhost ampPushPort 62311 stats enabled Port 8100 PodB enabled uri https://localhost:8543 ampPushHosts localhost ampPushPort 62312 stats enabled Port 8101 principalPathTemplate /principals/users/%s/ webadmin enabled HTTPPort 8080 clientDataSerialization UseOldData Path /tmp/sim accounts loader contrib.performance.loadtest.sim.recordsFromCSVFile params path contrib/performance/loadtest/accounts.csv interleavePods arrival factory contrib.performance.loadtest.population.SmoothRampUp params groups 10 groupSize 3 interval 3 clientsPerUser 2 observers type contrib.performance.loadtest.population.ReportStatistics params thresholdsPath contrib/performance/loadtest/thresholds.json benchmarksPath contrib/performance/loadtest/benchmarks.json failCutoff 1.0 type contrib.performance.loadtest.ical.RequestLogger params type contrib.performance.loadtest.ical.ErrorLogger params directory /tmp/sim_errors durationThreshold 10.0 type contrib.performance.loadtest.profiles.OperationLogger params thresholdsPath contrib/performance/loadtest/thresholds.json lagCutoff 1.0 failCutoff 1.0 failIfNoPush calendarserver-9.1+dfsg/contrib/performance/loadtest/standard-configs/constant-invites.plist000066400000000000000000000245541315003562600327100ustar00rootroot00000000000000 clients software contrib.performance.loadtest.ical.OS_X_10_11 params title main calendarHomePollInterval 30 supportPush supportAmpPush unauthenticatedPercentage 40 profiles class contrib.performance.loadtest.profiles.Inviter params enabled sendInvitationDistribution type contrib.performance.stats.FixedDistribution params value 60 inviteeDistribution type contrib.performance.stats.UniformIntegerDistribution params min 0 max 99 inviteeClumping 
inviteeCountDistribution type contrib.performance.stats.FixedDistribution params value 6 inviteeLookupPercentage 0 eventStartDistribution type contrib.performance.stats.WorkDistribution params daysOfWeek mon tue wed thu fri beginHour 8 endHour 16 tzname America/Los_Angeles recurrenceDistribution type contrib.performance.stats.RecurrenceDistribution params allowRecurrence weights fileAttachPercentage 0 fileSizeDistribution type contrib.performance.stats.NormalDistribution params mu 500000 sigma 100000 class contrib.performance.loadtest.profiles.Accepter params enabled acceptDelayDistribution type contrib.performance.stats.FixedDistribution params value 30 class contrib.performance.loadtest.profiles.EventCountLimiter params enabled interval 60 eventCountLimit 100 weight 1 software contrib.performance.loadtest.ical.OS_X_10_11 params title idle calendarHomePollInterval 30 supportPush supportAmpPush unauthenticatedPercentage 40 profiles weight 2 calendarserver-9.1+dfsg/contrib/performance/loadtest/subscribe.py000066400000000000000000000011641315003562600254200ustar00rootroot00000000000000 class Subscription(object): def __init__(self, periodical, subscriber): self.periodical = periodical self.subscriber = subscriber def cancel(self): self.periodical.subscriptions.remove(self) def issue(self, issue): self.subscriber(issue) class Periodical(object): def __init__(self): self.subscriptions = [] def subscribe(self, who): subscription = Subscription(self, who) self.subscriptions.append(subscription) return subscription def issue(self, issue): for subscr in self.subscriptions: subscr.issue(issue) calendarserver-9.1+dfsg/contrib/performance/loadtest/test_amphub.py000066400000000000000000000034331315003562600257530ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ## from twisted.internet.defer import inlineCallbacks, succeed from twisted.trial.unittest import TestCase from contrib.performance.loadtest.amphub import AMPHub class StubProtocol(object): def __init__(self, history): self.history = history def callRemote(self, *args, **kwds): self.history.append((args, kwds)) return succeed(None) class AMPHubTestCase(TestCase): def callback(self, id, dataChangedTimestamp, priority=5): self.callbackHistory.append(id) @inlineCallbacks def test_amphub(self): amphub = AMPHub() subscribeHistory = [] protocol = StubProtocol(subscribeHistory) amphub.protocols.append(protocol) AMPHub._hub = amphub keys = ("a", "b", "c") yield AMPHub.subscribeToIDs(keys, self.callback) self.assertEquals(len(subscribeHistory), 3) for key in keys: self.assertEquals(len(amphub.callbacks[key]), 1) self.callbackHistory = [] amphub._pushReceived("a", 0) amphub._pushReceived("b", 0) amphub._pushReceived("a", 0) amphub._pushReceived("c", 0) self.assertEquals(self.callbackHistory, ["a", "b", "a", "c"]) calendarserver-9.1+dfsg/contrib/performance/loadtest/test_ical.py000066400000000000000000002113441315003562600254110ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ## from caldavclientlibrary.protocol.caldav.definitions import caldavxml from caldavclientlibrary.protocol.caldav.definitions import csxml from caldavclientlibrary.protocol.url import URL from caldavclientlibrary.protocol.webdav.definitions import davxml from contrib.performance.httpclient import MemoryConsumer, StringProducer from contrib.performance.loadtest.ical import XMPPPush, Event, Calendar, OS_X_10_11, NotificationCollection from contrib.performance.loadtest.sim import _DirectoryRecord from pycalendar.datetime import DateTime from pycalendar.timezone import Timezone from twisted.internet.defer import Deferred, inlineCallbacks, returnValue, succeed from twisted.internet.protocol import ProtocolToConsumerAdapter from twisted.python.failure import Failure from twisted.trial.unittest import TestCase from twisted.web.client import ResponseDone from twisted.web.http import OK, NO_CONTENT, CREATED, MULTI_STATUS, NOT_FOUND, FORBIDDEN from twisted.web.http_headers import Headers from twistedcaldav.ical import Component from twistedcaldav.timezones import TimezoneCache import json import os EVENT_UID = 'D94F247D-7433-43AF-B84B-ADD684D023B0' EVENT = """\ BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.3//EN CALSCALE:GREGORIAN BEGIN:VEVENT CREATED:20101018T155454Z UID:%(UID)s DTEND;TZID=America/New_York:20101028T130000 ATTENDEE;CN="User 03";CUTYPE=INDIVIDUAL;EMAIL="user03@example.com";PARTS TAT=NEEDS-ACTION;ROLE=REQ-PARTICIPANT;RSVP=TRUE:mailto:user03@example.co m ATTENDEE;CN="User 01";CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:mailto:user01@ example.com 
TRANSP:OPAQUE SUMMARY:Attended Event DTSTART;TZID=America/New_York:20101028T120000 DTSTAMP:20101018T155513Z ORGANIZER;CN="User 01":mailto:user01@example.com SEQUENCE:3 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {'UID': EVENT_UID} EVENT_INVITE = """\ BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.3//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:18831118T120358 RDATE:18831118T120358 TZNAME:EST TZOFFSETFROM:-045602 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:19180331T020000 RRULE:FREQ=YEARLY;UNTIL=19190330T070000Z;BYDAY=-1SU;BYMONTH=3 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:19181027T020000 RRULE:FREQ=YEARLY;UNTIL=19191026T060000Z;BYDAY=-1SU;BYMONTH=10 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:STANDARD DTSTART:19200101T000000 RDATE:19200101T000000 RDATE:19420101T000000 RDATE:19460101T000000 RDATE:19670101T000000 TZNAME:EST TZOFFSETFROM:-0500 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:19200328T020000 RDATE:19200328T020000 RDATE:19740106T020000 RDATE:19750223T020000 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:19201031T020000 RDATE:19201031T020000 RDATE:19450930T020000 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:19210424T020000 RRULE:FREQ=YEARLY;UNTIL=19410427T070000Z;BYDAY=-1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:19210925T020000 RRULE:FREQ=YEARLY;UNTIL=19410928T060000Z;BYDAY=-1SU;BYMONTH=9 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:19420209T020000 RDATE:19420209T020000 TZNAME:EWT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:DAYLIGHT DTSTART:19450814T190000 RDATE:19450814T190000 TZNAME:EPT TZOFFSETFROM:-0400 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:DAYLIGHT DTSTART:19460428T020000 
RRULE:FREQ=YEARLY;UNTIL=19660424T070000Z;BYDAY=-1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:19460929T020000 RRULE:FREQ=YEARLY;UNTIL=19540926T060000Z;BYDAY=-1SU;BYMONTH=9 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:STANDARD DTSTART:19551030T020000 RRULE:FREQ=YEARLY;UNTIL=19661030T060000Z;BYDAY=-1SU;BYMONTH=10 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:19670430T020000 RRULE:FREQ=YEARLY;UNTIL=19730429T070000Z;BYDAY=-1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:19671029T020000 RRULE:FREQ=YEARLY;UNTIL=20061029T060000Z;BYDAY=-1SU;BYMONTH=10 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:19760425T020000 RRULE:FREQ=YEARLY;UNTIL=19860427T070000Z;BYDAY=-1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:DAYLIGHT DTSTART:19870405T020000 RRULE:FREQ=YEARLY;UNTIL=20060402T070000Z;BYDAY=1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:DAYLIGHT DTSTART:20070311T020000 RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:20071104T020000 RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD END:VTIMEZONE BEGIN:VEVENT CREATED:20101018T155454Z UID:%(UID)s DTEND;TZID=America/New_York:20101028T130000 ATTENDEE;CN="User 02";CUTYPE=INDIVIDUAL;EMAIL="user02@example.com";PARTS TAT=NEEDS-ACTION;ROLE=REQ-PARTICIPANT;RSVP=TRUE:mailto:user02@example.co m ATTENDEE;CN="User 03";CUTYPE=INDIVIDUAL;EMAIL="user03@example.com";PARTS TAT=NEEDS-ACTION;ROLE=REQ-PARTICIPANT;RSVP=TRUE:mailto:user03@example.co m ATTENDEE;CN="User 01";CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:urn:uuid:user01 TRANSP:OPAQUE SUMMARY:Attended Event DTSTART;TZID=America/New_York:20101028T120000 DTSTAMP:20101018T155513Z ORGANIZER;CN="User 01":urn:uuid:user01 SEQUENCE:3 
END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {'UID': EVENT_UID} EVENT_AND_TIMEZONE = """\ BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.3//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:America/New_York X-LIC-LOCATION:America/New_York BEGIN:STANDARD DTSTART:18831118T120358 RDATE:18831118T120358 TZNAME:EST TZOFFSETFROM:-045602 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:19180331T020000 RRULE:FREQ=YEARLY;UNTIL=19190330T070000Z;BYDAY=-1SU;BYMONTH=3 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:19181027T020000 RRULE:FREQ=YEARLY;UNTIL=19191026T060000Z;BYDAY=-1SU;BYMONTH=10 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:STANDARD DTSTART:19200101T000000 RDATE:19200101T000000 RDATE:19420101T000000 RDATE:19460101T000000 RDATE:19670101T000000 TZNAME:EST TZOFFSETFROM:-0500 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:19200328T020000 RDATE:19200328T020000 RDATE:19740106T020000 RDATE:19750223T020000 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:19201031T020000 RDATE:19201031T020000 RDATE:19450930T020000 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:19210424T020000 RRULE:FREQ=YEARLY;UNTIL=19410427T070000Z;BYDAY=-1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:19210925T020000 RRULE:FREQ=YEARLY;UNTIL=19410928T060000Z;BYDAY=-1SU;BYMONTH=9 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:19420209T020000 RDATE:19420209T020000 TZNAME:EWT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:DAYLIGHT DTSTART:19450814T190000 RDATE:19450814T190000 TZNAME:EPT TZOFFSETFROM:-0400 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:DAYLIGHT DTSTART:19460428T020000 RRULE:FREQ=YEARLY;UNTIL=19660424T070000Z;BYDAY=-1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:19460929T020000 
RRULE:FREQ=YEARLY;UNTIL=19540926T060000Z;BYDAY=-1SU;BYMONTH=9 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:STANDARD DTSTART:19551030T020000 RRULE:FREQ=YEARLY;UNTIL=19661030T060000Z;BYDAY=-1SU;BYMONTH=10 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:19670430T020000 RRULE:FREQ=YEARLY;UNTIL=19730429T070000Z;BYDAY=-1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:19671029T020000 RRULE:FREQ=YEARLY;UNTIL=20061029T060000Z;BYDAY=-1SU;BYMONTH=10 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:19760425T020000 RRULE:FREQ=YEARLY;UNTIL=19860427T070000Z;BYDAY=-1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:DAYLIGHT DTSTART:19870405T020000 RRULE:FREQ=YEARLY;UNTIL=20060402T070000Z;BYDAY=1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:DAYLIGHT DTSTART:20070311T020000 RRULE:FREQ=YEARLY;BYDAY=2SU;BYMONTH=3 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:20071104T020000 RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=11 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD END:VTIMEZONE BEGIN:VEVENT CREATED:20101018T155454Z UID:%(UID)s DTEND;TZID=America/New_York:20101028T130000 ATTENDEE;CN="User 03";CUTYPE=INDIVIDUAL;EMAIL="user03@example.com";PARTS TAT=NEEDS-ACTION;ROLE=REQ-PARTICIPANT;RSVP=TRUE:mailto:user03@example.co m ATTENDEE;CN="User 01";CUTYPE=INDIVIDUAL;PARTSTAT=ACCEPTED:mailto:user01@ example.com TRANSP:OPAQUE SUMMARY:Attended Event DTSTART;TZID=America/New_York:20101028T120000 DTSTAMP:20101018T155513Z ORGANIZER;CN="User 01":mailto:user01@example.com SEQUENCE:3 END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") % {'UID': EVENT_UID} class EventTests(TestCase): """ Tests for L{Event}. """ def test_uid(self): """ When the C{vevent} attribute of an L{Event} instance is set, L{Event.getUID} returns the UID value from it. 
""" event = Event(None, u'/foo/bar', u'etag', Component.fromString(EVENT)) self.assertEquals(event.getUID(), EVENT_UID) def test_withoutUID(self): """ When an L{Event} has a C{vevent} attribute set to C{None}, L{Event.getUID} returns C{None}. """ event = Event(None, u'/bar/baz', u'etag') self.assertIdentical(event.getUID(), None) PRINCIPAL_PROPFIND_RESPONSE = """\ /principals/__uids__/user01/ /principals/ /calendars/__uids__/user01 /principals/__uids__/user01/ /principals/users/user01/ /calendars/__uids__/user01/inbox/ /calendars/__uids__/user01/outbox/ /calendars/__uids__/user01/dropbox/ /calendars/__uids__/user01/notification/ User 01 /principals/__uids__/user01/ HTTP/1.1 200 OK HTTP/1.1 404 Not Found """ _CALENDAR_HOME_PROPFIND_RESPONSE_TEMPLATE = """\ /calendars/__uids__/user01/ %(xmpp)s User 01 /principals/__uids__/user01/ 104855434 2166 /Some/Unique/Value HTTP/1.1 200 OK HTTP/1.1 404 Not Found /calendars/__uids__/user01/notification/ SYNCTOKEN3 notification /principals/__uids__/user01/ 104855434 2166 HTTP/1.1 200 OK HTTP/1.1 404 Not Found /calendars/__uids__/user01/dropbox/ /principals/__uids__/user01/ 104855434 2166 HTTP/1.1 200 OK HTTP/1.1 404 Not Found /calendars/__uids__/user01/calendar/ c2696540-4c4c-4a31-adaf-c99630776828#3 SYNCTOKEN1 calendar #0252D4FF 1 /principals/__uids__/user01/ 104855434 2166 HTTP/1.1 200 OK HTTP/1.1 404 Not Found /calendars/__uids__/user01/outbox/ /principals/__uids__/user01/ 104855434 2166 HTTP/1.1 200 OK HTTP/1.1 404 Not Found /calendars/__uids__/user01/freebusy /principals/__uids__/user01/ 104855434 2166 HTTP/1.1 200 OK HTTP/1.1 404 Not Found /calendars/__uids__/user01/inbox/ a483dab3-1391-445b-b1c3-5ae9dfc81c2f#0 SYNCTOKEN2 inbox /principals/__uids__/user01/ /calendars/__uids__/user01/calendar /calendars/__uids__/user01/calendar 104855434 2166 HTTP/1.1 200 OK HTTP/1.1 404 Not Found """ CALENDAR_HOME_PROPFIND_RESPONSE = _CALENDAR_HOME_PROPFIND_RESPONSE_TEMPLATE % { "xmpp": """\ """, } CALENDAR_HOME_PROPFIND_RESPONSE_WITH_XMPP 
= _CALENDAR_HOME_PROPFIND_RESPONSE_TEMPLATE % {
    "xmpp": """\
xmpp.example.invalid:1952
xmpp:pubsub.xmpp.example.invalid?pubsub;node=/CalDAV/another.example.invalid/user01/""",
}

CALENDAR_HOME_PROPFIND_RESPONSE_XMPP_MISSING = _CALENDAR_HOME_PROPFIND_RESPONSE_TEMPLATE % {"xmpp": ""}


class MemoryResponse(object):
    def __init__(self, version, code, phrase, headers, bodyProducer):
        self.version = version
        self.code = code
        self.phrase = phrase
        self.headers = headers
        self.length = bodyProducer.length
        self._bodyProducer = bodyProducer

    def deliverBody(self, protocol):
        protocol.makeConnection(self._bodyProducer)
        d = self._bodyProducer.startProducing(ProtocolToConsumerAdapter(protocol))
        d.addCallback(lambda ignored: protocol.connectionLost(Failure(ResponseDone())))


class OS_X_10_11Mixin:
    """
    Mixin for L{TestCase}s for L{OS_X_10_11}.
    """

    def setUp(self):
        TimezoneCache.create()
        self.record = _DirectoryRecord(
            u"user91", u"user91", u"User 91", u"user91@example.org", u"user91",
        )
        serializePath = self.mktemp()
        os.mkdir(serializePath)
        self.client = OS_X_10_11(
            None,
            {
                "uri": "http://127.0.0.1",
            },
            "/principals/users/%s/",
            serializePath,
            self.record,
            None,
            1
        )

    def interceptRequests(self):
        requests = []

        def request(*args, **kwargs):
            result = Deferred()
            requests.append((result, args))
            return result

        self.client._request = request
        return requests


class OS_X_10_11Tests(OS_X_10_11Mixin, TestCase):
    """
    Tests for L{OS_X_10_11}.
    """

    def test_parsePrincipalPROPFINDResponse(self):
        """
        L{Principal._parsePROPFINDResponse} accepts an XML document like the
        one in the response to a I{PROPFIND} request for
        I{/principals/__uids__//} and returns a C{PropFindResult}
        representing the data from it.
        """
        principals = self.client._parseMultiStatus(PRINCIPAL_PROPFIND_RESPONSE)
        principal = principals['/principals/__uids__/user01/']
        self.assertEquals(
            principal.getHrefProperties(),
            {
                davxml.principal_collection_set: URL(path='/principals/'),
                caldavxml.calendar_home_set: URL(path='/calendars/__uids__/user01'),
                caldavxml.calendar_user_address_set: (
                    URL(path='/principals/__uids__/user01/'),
                    URL(path='/principals/users/user01/'),
                ),
                caldavxml.schedule_inbox_URL: URL(path='/calendars/__uids__/user01/inbox/'),
                caldavxml.schedule_outbox_URL: URL(path='/calendars/__uids__/user01/outbox/'),
                csxml.dropbox_home_URL: URL(path='/calendars/__uids__/user01/dropbox/'),
                csxml.notification_URL: URL(path='/calendars/__uids__/user01/notification/'),
                davxml.principal_URL: URL(path='/principals/__uids__/user01/'),
            }
        )
        self.assertEquals(
            principal.getTextProperties(),
            {davxml.displayname: 'User 01'})

        # self.assertEquals(
        #     principal.getSomething(),
        #     {SUPPORTED_REPORT_SET: (
        #         '{DAV:}acl-principal-prop-set',
        #         '{DAV:}principal-match',
        #         '{DAV:}principal-property-search',
        #         '{DAV:}expand-property',
        #     )})

    def test_extractCalendars(self):
        """
        L{OS_X_10_11._extractCalendars} accepts a calendar home PROPFIND
        response body and returns a list of calendar objects constructed
        from the data extracted from the response.
        """
        home = "/calendars/__uids__/user01/"
        calendars, notificationCollection, _ignore_homeToken = self.client._extractCalendars(
            self.client._parseMultiStatus(CALENDAR_HOME_PROPFIND_RESPONSE), home)
        calendars.sort(key=lambda cal: cal.resourceType)
        calendar, inbox = calendars

        self.assertEquals(calendar.resourceType, caldavxml.calendar)
        self.assertEquals(calendar.name, "calendar")
        self.assertEquals(calendar.url, "/calendars/__uids__/user01/calendar/")
        self.assertEquals(calendar.changeToken, "SYNCTOKEN1")

        self.assertEquals(inbox.resourceType, caldavxml.schedule_inbox)
        self.assertEquals(inbox.name, "inbox")
        self.assertEquals(inbox.url, "/calendars/__uids__/user01/inbox/")
        self.assertEquals(inbox.changeToken, "SYNCTOKEN2")

        self.assertEquals(notificationCollection.changeToken, "SYNCTOKEN3")

        self.assertEqual({}, self.client.xmpp)

    def test_extractCalendarsXMPP(self):
        """
        If there is XMPP push information in a calendar home PROPFIND
        response, L{OS_X_10_11._extractCalendars} finds it and records it.
        """
        home = "/calendars/__uids__/user01/"
        self.client._extractCalendars(
            self.client._parseMultiStatus(CALENDAR_HOME_PROPFIND_RESPONSE_WITH_XMPP), home
        )
        self.assertEqual({
            home: XMPPPush(
                "xmpp.example.invalid:1952",
                "xmpp:pubsub.xmpp.example.invalid?pubsub;node=/CalDAV/another.example.invalid/user01/",
                "/Some/Unique/Value"
            )},
            self.client.xmpp
        )

    def test_handleMissingXMPP(self):
        home = "/calendars/__uids__/user01/"
        self.client._extractCalendars(
            self.client._parseMultiStatus(CALENDAR_HOME_PROPFIND_RESPONSE_XMPP_MISSING), home)
        self.assertEqual({}, self.client.xmpp)

    @inlineCallbacks
    def test_changeEventAttendee(self):
        """
        OS_X_10_11.changeEventAttendee removes one attendee from an existing
        event and appends another.
        """
        requests = self.interceptRequests()

        vevent = Component.fromString(EVENT)
        attendees = tuple(vevent.mainComponent().properties("ATTENDEE"))
        old = attendees[0]
        new = old.duplicate()
        new.setParameter('CN', 'Some Other Guy')
        event = Event(self.client.serializeLocation(), u'/some/calendar/1234.ics', None, vevent)
        self.client._events[event.url] = event
        self.client.changeEventAttendee(event.url, old, new)

        _ignore_result, req = requests.pop(0)

        # iCal PUTs the new VCALENDAR object.
        _ignore_expectedResponseCode, method, url, headers, body = req
        self.assertEquals(method, 'PUT')
        self.assertEquals(url, 'http://127.0.0.1' + event.url)
        self.assertIsInstance(url, str)
        self.assertEquals(headers.getRawHeaders('content-type'), ['text/calendar'])

        consumer = MemoryConsumer()
        yield body.startProducing(consumer)
        vevent = Component.fromString(consumer.value())
        attendees = tuple(vevent.mainComponent().properties("ATTENDEE"))
        self.assertEquals(len(attendees), 2)
        self.assertEquals(attendees[0].parameterValue('CN'), 'User 01')
        self.assertEquals(attendees[1].parameterValue('CN'), 'Some Other Guy')

    def test_addEvent(self):
        """
        L{OS_X_10_11.addEvent} PUTs the event passed to it to the server and
        updates local state to reflect its existence.
        """
        requests = self.interceptRequests()

        calendar = Calendar(caldavxml.calendar, set(('VEVENT',)), u'calendar', u'/mumble/', None)
        self.client._calendars[calendar.url] = calendar

        vcalendar = Component.fromString(EVENT)
        d = self.client.addEvent(u'/mumble/frotz.ics', vcalendar)

        result, req = requests.pop(0)

        # iCal PUTs the new VCALENDAR object.
        expectedResponseCode, method, url, headers, body = req
        self.assertEqual(expectedResponseCode, CREATED)
        self.assertEqual(method, 'PUT')
        self.assertEqual(url, 'http://127.0.0.1/mumble/frotz.ics')
        self.assertIsInstance(url, str)
        self.assertEqual(headers.getRawHeaders('content-type'), ['text/calendar'])

        consumer = MemoryConsumer()
        finished = body.startProducing(consumer)

        def cbFinished(ignored):
            self.assertEqual(
                Component.fromString(consumer.value()),
                Component.fromString(EVENT_AND_TIMEZONE))
        finished.addCallback(cbFinished)

        def requested(ignored):
            response = MemoryResponse(
                ('HTTP', '1', '1'), CREATED, "Created",
                Headers({"etag": ["foo"]}), StringProducer(""))
            result.callback((response, ""))
        finished.addCallback(requested)

        return d

    @inlineCallbacks
    def test_addInvite(self):
        """
        L{OS_X_10_11.addInvite} PUTs the event passed to it to the server and
        updates local state to reflect its existence, but it also does
        attendee auto-complete and free-busy checks before the PUT.
        """
        calendar = Calendar(caldavxml.calendar, set(('VEVENT',)), u'calendar', u'/mumble/', None)
        self.client._calendars[calendar.url] = calendar
        vcalendar = Component.fromString(EVENT_INVITE)
        self.client.uuid = u'urn:uuid:user01'
        self.client.email = u'mailto:user01@example.com'
        self.client.principalCollection = "/principals/"
        self.client.outbox = "/calendars/__uids__/user01/outbox/"

        @inlineCallbacks
        def _testReport(*args, **kwargs):
            expectedResponseCode, method, url, headers, body = args
            self.assertEqual(expectedResponseCode, (MULTI_STATUS,))
            self.assertEqual(method, 'REPORT')
            self.assertEqual(url, 'http://127.0.0.1/principals/')
            self.assertIsInstance(url, str)
            self.assertEqual(headers.getRawHeaders('content-type'), ['text/xml'])

            consumer = MemoryConsumer()
            yield body.startProducing(consumer)

            response = MemoryResponse(
                ('HTTP', '1', '1'), MULTI_STATUS, "MultiStatus", Headers({}),
                StringProducer(""))
            returnValue((response, ""))

        @inlineCallbacks
        def _testPost(*args, **kwargs):
            expectedResponseCode, method, \
                url, headers, body = args
            self.assertEqual(expectedResponseCode, OK)
            self.assertEqual(method, 'POST')
            self.assertEqual(url, 'http://127.0.0.1/calendars/__uids__/user01/outbox/')
            self.assertIsInstance(url, str)
            self.assertEqual(headers.getRawHeaders('content-type'), ['text/calendar'])

            consumer = MemoryConsumer()
            yield body.startProducing(consumer)
            self.assertNotEqual(consumer.value().find(kwargs["attendee"]), -1)

            response = MemoryResponse(
                ('HTTP', '1', '1'), OK, "OK", Headers({}),
                StringProducer(""))
            returnValue((response, ""))

        def _testPost02(*args, **kwargs):
            return _testPost(*args, attendee="ATTENDEE:mailto:user02@example.com", **kwargs)

        def _testPost03(*args, **kwargs):
            return _testPost(*args, attendee="ATTENDEE:mailto:user03@example.com", **kwargs)

        @inlineCallbacks
        def _testPut(*args, **kwargs):
            expectedResponseCode, method, url, headers, body = args
            self.assertEqual(expectedResponseCode, CREATED)
            self.assertEqual(method, 'PUT')
            self.assertEqual(url, 'http://127.0.0.1/mumble/frotz.ics')
            self.assertIsInstance(url, str)
            self.assertEqual(headers.getRawHeaders('content-type'), ['text/calendar'])

            consumer = MemoryConsumer()
            yield body.startProducing(consumer)
            self.assertEqual(
                Component.fromString(consumer.value()),
                Component.fromString(EVENT_INVITE))

            response = MemoryResponse(
                ('HTTP', '1', '1'), CREATED, "Created", Headers({}),
                StringProducer(""))
            returnValue((response, ""))

        def _testGet(*args, **kwargs):
            expectedResponseCode, method, url = args
            self.assertEqual(expectedResponseCode, OK)
            self.assertEqual(method, 'GET')
            self.assertEqual(url, 'http://127.0.0.1/mumble/frotz.ics')
            self.assertIsInstance(url, str)

            response = MemoryResponse(
                ('HTTP', '1', '1'), OK, "OK", Headers({"etag": ["foo"]}),
                StringProducer(EVENT_INVITE))
            return succeed((response, EVENT_INVITE))

        requests = [_testReport, _testPost02, _testReport, _testPost03, _testPut, _testGet]

        def _requestHandler(*args, **kwargs):
            handler = requests.pop(0)
            return handler(*args, **kwargs)

        self.client._request = \
            _requestHandler

        yield self.client.addInvite('/mumble/frotz.ics', vcalendar)

    def test_deleteEvent(self):
        """
        L{OS_X_10_11.deleteEvent} DELETEs the event at the relative URL passed
        to it and updates local state to reflect its removal.
        """
        requests = self.interceptRequests()

        calendar = Calendar(caldavxml.calendar, set(('VEVENT',)), u'calendar', u'/foo/', None)
        event = Event(None, calendar.url + u'bar.ics', None)
        self.client._calendars[calendar.url] = calendar
        self.client._setEvent(event.url, event)

        d = self.client.deleteEvent(event.url)

        result, req = requests.pop()

        expectedResponseCode, method, url = req

        self.assertEqual(expectedResponseCode, (NO_CONTENT, NOT_FOUND))
        self.assertEqual(method, 'DELETE')
        self.assertEqual(url, 'http://127.0.0.1' + event.url)
        self.assertIsInstance(url, str)

        self.assertNotIn(event.url, self.client._events)
        self.assertNotIn(u'bar.ics', calendar.events)

        response = MemoryResponse(
            ('HTTP', '1', '1'), NO_CONTENT, "No Content", None,
            StringProducer(""))
        result.callback((response, ""))
        return d

    def test_serialization(self):
        """
        L{OS_X_10_11.serialize} properly generates a JSON document.
        """
        clientPath = os.path.join(self.client.serializePath, "user91-OS_X_10.11")
        self.assertFalse(os.path.exists(clientPath))
        indexPath = os.path.join(clientPath, "index.json")
        self.assertFalse(os.path.exists(indexPath))

        cal1 = """BEGIN:VCALENDAR
VERSION:2.0
CALSCALE:GREGORIAN
PRODID:-//Apple Inc.//iCal 4.0.3//EN
BEGIN:VEVENT
UID:004f8e41-b071-4b30-bb3b-6aada4adcc10
DTSTART:20120817T113000
DTEND:20120817T114500
DTSTAMP:20120815T154420Z
SEQUENCE:2
SUMMARY:Simple event
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")

        cal2 = """BEGIN:VCALENDAR
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:REQUEST
PRODID:-//Apple Inc.//iCal 4.0.3//EN
BEGIN:VEVENT
UID:00a79cad-857b-418e-a54a-340b5686d747
DTSTART:20120817T113000
DTEND:20120817T114500
DTSTAMP:20120815T154420Z
SEQUENCE:2
SUMMARY:Simple event
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")

        events = (
            Event(self.client.serializeLocation(), u'/home/calendar/1.ics', u'123.123', Component.fromString(cal1)),
            Event(self.client.serializeLocation(), u'/home/inbox/i1.ics', u'123.123', Component.fromString(cal2)),
        )
        self.client._events.update(dict([[event.url, event] for event in events]))

        calendars = (
            Calendar(str(caldavxml.calendar), set(('VEVENT',)), u'calendar', u'/home/calendar/', "123", invitees=["a", "b", "c"]),
            Calendar(str(caldavxml.calendar), set(('VTODO',)), u'tasks', u'/home/tasks/', "456"),
            Calendar(str(caldavxml.schedule_inbox), set(('VEVENT', "VTODO",)), u'calendar', u'/home/inbox/', "789"),
        )
        self.client._calendars.update(dict([[calendar.url, calendar] for calendar in calendars]))
        self.client._calendars["/home/calendar/"].events["1.ics"] = events[0]
        self.client._calendars["/home/inbox/"].events["i1.ics"] = events[1]
        self.client._notificationCollection = NotificationCollection("/home/notification", "123")
        self.client._managed_attachments_server_url = "attachmentsurl"
        self.client.calendarHomeToken = "hometoken"

        self.client.serialize()
        self.assertTrue(os.path.exists(clientPath))
        self.assertTrue(os.path.exists(indexPath))

        def \
_normDict(d):
            return dict([
                (
                    k,
                    sorted(
                        v,
                        key=lambda x: x["changeToken" if k == "calendars" else "url"]
                    ) if isinstance(v, list) else v,
                )
                for k, v in d.items()
            ])

        with open(indexPath) as f:
            jdata = f.read()
        self.assertEqual(_normDict(json.loads(jdata)), _normDict(json.loads("""{
            "notificationCollection": {
                "url": "/home/notification",
                "notifications": [],
                "changeToken": "123"
            },
            "calendars": [
                {
                    "changeToken": "123",
                    "name": "calendar",
                    "shared": false,
                    "sharedByMe": false,
                    "resourceType": "{urn:ietf:params:xml:ns:caldav}calendar",
                    "componentTypes": ["VEVENT"],
                    "invitees": ["a", "b", "c"],
                    "url": "/home/calendar/",
                    "events": ["1.ics"]
                },
                {
                    "changeToken": "789",
                    "name": "calendar",
                    "shared": false,
                    "sharedByMe": false,
                    "resourceType": "{urn:ietf:params:xml:ns:caldav}schedule-inbox",
                    "componentTypes": ["VEVENT", "VTODO"],
                    "invitees": [],
                    "url": "/home/inbox/",
                    "events": ["i1.ics"]
                },
                {
                    "changeToken": "456",
                    "name": "tasks",
                    "shared": false,
                    "sharedByMe": false,
                    "resourceType": "{urn:ietf:params:xml:ns:caldav}calendar",
                    "componentTypes": ["VTODO"],
                    "invitees": [],
                    "url": "/home/tasks/",
                    "events": []
                }
            ],
            "principalURL": null,
            "homeToken": "hometoken",
            "attachmentsUrl": "attachmentsurl",
            "events": [
                {
                    "url": "/home/calendar/1.ics",
                    "scheduleTag": null,
                    "etag": "123.123",
                    "uid": "004f8e41-b071-4b30-bb3b-6aada4adcc10"
                },
                {
                    "url": "/home/inbox/i1.ics",
                    "scheduleTag": null,
                    "etag": "123.123",
                    "uid": "00a79cad-857b-418e-a54a-340b5686d747"
                }
            ],
            "attachments": {}
        }""")))

        event1Path = os.path.join(clientPath, "calendar", "1.ics")
        self.assertTrue(os.path.exists(event1Path))
        with open(event1Path) as f:
            data = f.read()
        self.assertEqual(data, cal1)

        event2Path = os.path.join(clientPath, "inbox", "i1.ics")
        self.assertTrue(os.path.exists(event2Path))
        with open(event2Path) as f:
            data = f.read()
        self.assertEqual(data, cal2)

    def test_deserialization(self):
        """
        L{OS_X_10_11.deserialize} properly parses a JSON document.
        """
        cal1 = """BEGIN:VCALENDAR
VERSION:2.0
CALSCALE:GREGORIAN
PRODID:-//Apple Inc.//iCal 4.0.3//EN
BEGIN:VEVENT
UID:004f8e41-b071-4b30-bb3b-6aada4adcc10
DTSTART:20120817T113000
DTEND:20120817T114500
DTSTAMP:20120815T154420Z
SEQUENCE:2
SUMMARY:Simple event
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")

        cal2 = """BEGIN:VCALENDAR
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:REQUEST
PRODID:-//Apple Inc.//iCal 4.0.3//EN
BEGIN:VEVENT
UID:00a79cad-857b-418e-a54a-340b5686d747
DTSTART:20120817T113000
DTEND:20120817T114500
DTSTAMP:20120815T154420Z
SEQUENCE:2
SUMMARY:Simple event
END:VEVENT
END:VCALENDAR
""".replace("\n", "\r\n")

        clientPath = os.path.join(self.client.serializePath, "user91-OS_X_10.11")
        os.mkdir(clientPath)
        indexPath = os.path.join(clientPath, "index.json")
        with open(indexPath, "w") as f:
            f.write("""{
            "calendars": [
                {
                    "changeToken": "321",
                    "attachmentsUrl": "https://example.com/attachments/",
                    "name": "calendar",
                    "shared": false,
                    "sharedByMe": false,
                    "resourceType": "{urn:ietf:params:xml:ns:caldav}calendar",
                    "componentTypes": ["VEVENT"],
                    "invitees": ["a", "b", "c"],
                    "url": "/home/calendar/",
                    "events": ["2.ics"]
                },
                {
                    "changeToken": "987",
                    "name": "calendar",
                    "shared": false,
                    "sharedByMe": false,
                    "resourceType": "{urn:ietf:params:xml:ns:caldav}schedule-inbox",
                    "componentTypes": ["VEVENT", "VTODO"],
                    "invitees": ["a", "b", "c"],
                    "url": "/home/inbox/",
                    "events": ["i2.ics"]
                },
                {
                    "changeToken": "654",
                    "name": "tasks",
                    "shared": false,
                    "sharedByMe": false,
                    "resourceType": "{urn:ietf:params:xml:ns:caldav}calendar",
                    "componentTypes": ["VTODO"],
                    "invitees": ["a", "b", "c"],
                    "url": "/home/tasks/",
                    "events": []
                }
            ],
            "principalURL": null,
            "homeToken": "hometoken",
            "attachmentsUrl": "attachmentsurl",
            "events": [
                {
                    "url": "/home/calendar/2.ics",
                    "scheduleTag": null,
                    "etag": "321.321",
                    "uid": "004f8e41-b071-4b30-bb3b-6aada4adcc10"
                },
                {
                    "url": "/home/inbox/i2.ics",
                    "scheduleTag": null,
                    "etag": "987.987",
                    "uid": "00a79cad-857b-418e-a54a-340b5686d747"
                }
            ]
        }""")
        os.mkdir(os.path.join(clientPath, "calendar"))
        event1Path = os.path.join(clientPath, "calendar", "2.ics")
        with open(event1Path, "w") as f:
            f.write(cal1)
        os.mkdir(os.path.join(clientPath, "inbox"))
        event1Path = os.path.join(clientPath, "inbox", "i2.ics")
        with open(event1Path, "w") as f:
            f.write(cal2)

        self.client.deserialize()

        self.assertEqual(len(self.client._calendars), 3)
        self.assertTrue("/home/calendar/" in self.client._calendars)
        self.assertEqual(self.client._calendars["/home/calendar/"].changeToken, "321")
        self.assertEqual(self.client._calendars["/home/calendar/"].name, "calendar")
        self.assertEqual(self.client._calendars["/home/calendar/"].resourceType, "{urn:ietf:params:xml:ns:caldav}calendar")
        self.assertEqual(self.client._calendars["/home/calendar/"].componentTypes, set(("VEVENT",)))
        self.assertEqual(self.client._calendars["/home/calendar/"].invitees, ["a", "b", "c"])
        self.assertTrue("/home/tasks/" in self.client._calendars)
        self.assertTrue("/home/inbox/" in self.client._calendars)
        self.assertEqual(self.client._calendars["/home/inbox/"].componentTypes, set(("VEVENT", "VTODO",)))

        self.assertEqual(len(self.client._events), 2)
        self.assertTrue("/home/calendar/2.ics" in self.client._events)
        self.assertEqual(self.client._events["/home/calendar/2.ics"].scheduleTag, None)
        self.assertEqual(self.client._events["/home/calendar/2.ics"].etag, "321.321")
        self.assertEqual(self.client._events["/home/calendar/2.ics"].getUID(), "004f8e41-b071-4b30-bb3b-6aada4adcc10")
        self.assertEqual(str(self.client._events["/home/calendar/2.ics"].component), cal1)
        self.assertTrue("/home/inbox/i2.ics" in self.client._events)
        self.assertEqual(self.client._events["/home/inbox/i2.ics"].scheduleTag, None)
        self.assertEqual(self.client._events["/home/inbox/i2.ics"].etag, "987.987")
        self.assertEqual(self.client._events["/home/inbox/i2.ics"].getUID(), "00a79cad-857b-418e-a54a-340b5686d747")
        self.assertEqual(str(self.client._events["/home/inbox/i2.ics"].component), cal2)
        self.assertEqual(self.client.calendarHomeToken, "hometoken")
        self.assertEqual(self.client._managed_attachments_server_url, "attachmentsurl")


class UpdateCalendarTests(OS_X_10_11Mixin, TestCase):
    """
    Tests for L{OS_X_10_11._updateCalendar}.
    """

    _CALENDAR_PROPFIND_RESPONSE_BODY = """\
/something/anotherthing.ics
"None"
HTTP/1.1 200 OK
HTTP/1.1 404 Not Found
/something/else.ics
"None"
HTTP/1.1 200 OK
"""

    _CALENDAR_REPORT_RESPONSE_BODY = """\
/something/anotherthing.ics
HTTP/1.1 404 Not Found
/something/else.ics
"ef70beb4cb7da4b2e2950350b09e9a01"
HTTP/1.1 200 OK
"""

    _CALENDAR_REPORT_RESPONSE_BODY_1 = """\
/something/anotherthing.ics
"ef70beb4cb7da4b2e2950350b09e9a01"
HTTP/1.1 200 OK
"""

    _CALENDAR_REPORT_RESPONSE_BODY_2 = """\
/something/else.ics
"ef70beb4cb7da4b2e2950350b09e9a01"
HTTP/1.1 200 OK
"""

    def test_eventMissing(self):
        """
        If an event included in the calendar sync REPORT response no longer
        exists by the time a REPORT is issued for that event, the 404 is
        handled and the rest of the normal update logic for that event is
        skipped.
        """
        requests = self.interceptRequests()

        calendar = Calendar(None, set(('VEVENT',)), 'calendar', '/something/', None)
        self.client._calendars[calendar.url] = calendar
        self.client._updateCalendar(calendar, "1234")
        result, req = requests.pop(0)
        expectedResponseCode, method, url, _ignore_headers, _ignore_body = req
        self.assertEqual('REPORT', method)
        self.assertEqual('http://127.0.0.1/something/', url)
        self.assertEqual((MULTI_STATUS, FORBIDDEN), expectedResponseCode)
        result.callback(
            (
                MemoryResponse(
                    ('HTTP', '1', '1'), MULTI_STATUS, "Multi-status", None,
                    StringProducer(self._CALENDAR_PROPFIND_RESPONSE_BODY)),
                self._CALENDAR_PROPFIND_RESPONSE_BODY
            )
        )

        result, req = requests.pop(0)
        expectedResponseCode, method, url, _ignore_headers, _ignore_body = req
        self.assertEqual('REPORT', method)
        self.assertEqual('http://127.0.0.1/something/', url)
        self.assertEqual((MULTI_STATUS,), expectedResponseCode)

        # Someone else comes along and gets rid of the event
        del self.client._events["/something/anotherthing.ics"]

        result.callback(
            (
                MemoryResponse(
                    ('HTTP', '1', '1'), MULTI_STATUS, "Multi-status", None,
                    StringProducer(self._CALENDAR_REPORT_RESPONSE_BODY)),
                self._CALENDAR_REPORT_RESPONSE_BODY
            )
        )

        # Verify that processing proceeded to the response after the one with a
        # 404 status.
        self.assertIn('/something/else.ics', self.client._events)

    def test_multigetBatch(self):
        """
        If an event included in the calendar sync REPORT response no longer
        exists by the time a REPORT is issued for that event, the 404 is
        handled and the rest of the normal update logic for that event is
        skipped.
        """
        requests = self.interceptRequests()

        self.patch(self.client, "MULTIGET_BATCH_SIZE", 1)

        calendar = Calendar(None, set(('VEVENT',)), 'calendar', '/something/', None)
        self.client._calendars[calendar.url] = calendar
        self.client._updateCalendar(calendar, "1234")
        result, req = requests.pop(0)
        expectedResponseCode, method, url, _ignore_headers, _ignore_body = req
        self.assertEqual('REPORT', method)
        self.assertEqual('http://127.0.0.1/something/', url)
        self.assertEqual((MULTI_STATUS, FORBIDDEN), expectedResponseCode)
        result.callback(
            (
                MemoryResponse(
                    ('HTTP', '1', '1'), MULTI_STATUS, "Multi-status", None,
                    StringProducer(self._CALENDAR_PROPFIND_RESPONSE_BODY)),
                self._CALENDAR_PROPFIND_RESPONSE_BODY
            )
        )

        result, req = requests.pop(0)
        expectedResponseCode, method, url, _ignore_headers, _ignore_body = req
        self.assertEqual('REPORT', method)
        self.assertEqual('http://127.0.0.1/something/', url)
        self.assertEqual((MULTI_STATUS,), expectedResponseCode)
        result.callback(
            (
                MemoryResponse(
                    ('HTTP', '1', '1'), MULTI_STATUS, "Multi-status", None,
                    StringProducer(self._CALENDAR_REPORT_RESPONSE_BODY_1)),
                self._CALENDAR_REPORT_RESPONSE_BODY_1
            )
        )

        self.assertTrue(self.client._events['/something/anotherthing.ics'].etag is not None)
        self.assertTrue(self.client._events['/something/else.ics'].etag is None)

        result, req = requests.pop(0)
        expectedResponseCode, method, url, _ignore_headers, _ignore_body = req
        self.assertEqual('REPORT', method)
        self.assertEqual('http://127.0.0.1/something/', url)
        self.assertEqual((MULTI_STATUS,), expectedResponseCode)
        result.callback(
            (
                MemoryResponse(
                    ('HTTP', '1', '1'), MULTI_STATUS, "Multi-status", None,
                    StringProducer(self._CALENDAR_REPORT_RESPONSE_BODY_2)),
                self._CALENDAR_REPORT_RESPONSE_BODY_2
            )
        )

        self.assertTrue(self.client._events['/something/anotherthing.ics'].etag is not None)
        self.assertTrue(self.client._events['/something/else.ics'].etag is not None)


class VFreeBusyTests(OS_X_10_11Mixin, TestCase):
    """
    Tests for L{OS_X_10_11.requestAvailability}.
    """

    def test_requestAvailability(self):
        """
        L{OS_X_10_11.requestAvailability} accepts a date range and a set of
        account uuids and issues a VFREEBUSY request.  It returns a Deferred
        which fires with a dict mapping account uuids to availability range
        information.
        """
        self.client.uuid = u'urn:uuid:user01'
        self.client.email = u'mailto:user01@example.com'
        self.client.outbox = "/calendars/__uids__/%s/outbox/" % (self.record.uid,)
        requests = self.interceptRequests()

        start = DateTime(2011, 6, 10, 10, 45, 0, tzid=Timezone.UTCTimezone)
        end = DateTime(2011, 6, 10, 11, 15, 0, tzid=Timezone.UTCTimezone)
        d = self.client.requestAvailability(
            start, end, [u"urn:uuid:user05", u"urn:uuid:user10"])

        result, req = requests.pop(0)
        expectedResponseCode, method, url, headers, body = req

        self.assertEqual(OK, expectedResponseCode)
        self.assertEqual('POST', method)
        self.assertEqual(
            'http://127.0.0.1/calendars/__uids__/%s/outbox/' % (self.record.uid,),
            url)

        self.assertEqual(headers.getRawHeaders('originator'), ['mailto:user01@example.com'])
        self.assertEqual(headers.getRawHeaders('recipient'), ['urn:uuid:user05, urn:uuid:user10'])
        self.assertEqual(headers.getRawHeaders('content-type'), ['text/calendar'])

        consumer = MemoryConsumer()
        finished = body.startProducing(consumer)

        def cbFinished(ignored):
            vevent = Component.fromString(consumer.value())
            uid = vevent.resourceUID()
            dtstamp = vevent.mainComponent().propertyValue("DTSTAMP")
            dtstamp = dtstamp.getText()
            self.assertEqual("""BEGIN:VCALENDAR
CALSCALE:GREGORIAN
VERSION:2.0
METHOD:REQUEST
PRODID:-//Apple Inc.//iCal 4.0.3//EN
BEGIN:VFREEBUSY
UID:%(uid)s
DTEND:20110611T000000Z
ATTENDEE:urn:uuid:user05
ATTENDEE:urn:uuid:user10
DTSTART:20110610T000000Z
DTSTAMP:%(dtstamp)s
ORGANIZER:mailto:user01@example.com
SUMMARY:Availability for urn:uuid:user05, urn:uuid:user10
END:VFREEBUSY
END:VCALENDAR
""".replace('\n', '\r\n') % {'uid': uid, 'dtstamp': dtstamp},
                consumer.value())
        finished.addCallback(cbFinished)

        def requested(ignored):
            response = MemoryResponse(
('HTTP', '1', '1'), OK, "Ok", Headers({}), StringProducer("")) result.callback((response, "")) finished.addCallback(requested) return d calendarserver-9.1+dfsg/contrib/performance/loadtest/test_population.py000066400000000000000000000361721315003562600266770ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ## """ Tests for some things in L{loadtest.population}. """ from twisted.trial.unittest import TestCase from contrib.performance.loadtest.population import ReportStatistics class ReportStatisticsTests(TestCase): """ Tests for L{loadtest.population.ReportStatistics}. """ def test_countUsers(self): """ L{ReportStatistics.countUsers} returns the number of users observed to have acted in the simulation. """ logger = ReportStatistics() users = ['user01', 'user02', 'user03'] for user in users: logger.observe(dict( type='response', method='GET', success=True, duration=1.23, user=user, client_type="test", client_id="1234" )) self.assertEqual(len(users), logger.countUsers()) def test_countClients(self): """ L{ReportStatistics.countClients} returns the number of clients observed to have acted in the simulation. 
""" logger = ReportStatistics() clients = ['c01', 'c02', 'c03'] for client in clients: logger.observe(dict( type='response', method='GET', success=True, duration=1.23, user="user01", client_type="test", client_id=client )) self.assertEqual(len(clients), logger.countClients()) def test_clientFailures(self): """ L{ReportStatistics.countClientFailures} returns the number of clients observed to have failed in the simulation. """ logger = ReportStatistics() clients = ['c01', 'c02', 'c03'] for client in clients: logger.observe(dict( type='client-failure', reason="testing %s" % (client,) )) self.assertEqual(len(clients), logger.countClientFailures()) def test_simFailures(self): """ L{ReportStatistics.countSimFailures} returns the number of clients observed to have caused an error in the simulation. """ logger = ReportStatistics() clients = ['c01', 'c02', 'c03'] for client in clients: logger.observe(dict( type='sim-failure', reason="testing %s" % (client,) )) self.assertEqual(len(clients), logger.countSimFailures()) def test_noFailures(self): """ If fewer than 1% of requests fail, fewer than 1% of requests take 5 seconds or more, and fewer than 5% of requests take 3 seconds or more, L{ReportStatistics.failures} returns an empty list. """ logger = ReportStatistics() logger.observe(dict( type='response', method='GET', success=True, duration=2.5, user='user01', client_type="test", client_id="1234" )) self.assertEqual([], logger.failures()) def test_requestFailures(self): """ If more than 1% of requests fail, L{ReportStatistics.failures} returns a list containing a string describing this. 
""" logger = ReportStatistics() for _ignore in range(98): logger.observe(dict( type='response', method='GET', success=True, duration=2.5, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='GET', success=False, duration=2.5, user='user01', client_type="test", client_id="1234" )) self.assertEqual( ["Greater than 1% GET failed"], logger.failures()) def test_threeSecondFailure(self): """ If more than 5% of requests take longer than 3 seconds, L{ReportStatistics.failures} returns a list containing a string describing that. """ logger = ReportStatistics() for _ignore in range(94): logger.observe(dict( type='response', method='GET', success=True, duration=2.5, user='user01', client_type="test", client_id="1234" )) for _ignore in range(5): logger.observe(dict( type='response', method='GET', success=True, duration=3.5, user='user02', client_type="test", client_id="1234" )) self.assertEqual( ["Greater than 5% GET exceeded 3 second response time"], logger.failures()) def test_fiveSecondFailure(self): """ If more than 1% of requests take longer than 5 seconds, L{ReportStatistics.failures} returns a list containing a string describing that. """ logger = ReportStatistics() for _ignore in range(98): logger.observe(dict( type='response', method='GET', success=True, duration=2.5, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='GET', success=True, duration=5.5, user='user01', client_type="test", client_id="1234" )) self.assertEqual( ["Greater than 1% GET exceeded 5 second response time"], logger.failures()) def test_methodsCountedSeparately(self): """ The counts for one method do not affect the results of another method. 
""" logger = ReportStatistics() for _ignore in range(99): logger.observe(dict( type='response', method='GET', success=True, duration=2.5, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='POST', success=True, duration=2.5, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='GET', success=False, duration=2.5, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='POST', success=False, duration=2.5, user='user01', client_type="test", client_id="1234" )) self.assertEqual([], logger.failures()) def test_bucketRequest(self): """ PUT(xxx-huge/large/medium/small} have different thresholds. Test that requests straddling each of those are correctly determined to be failures or not. """ _thresholds = { "requests": { "limits": [0.1, 0.5, 1.0, 3.0, 5.0, 10.0, 30.0], "thresholds": { "default": [100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "PUT{organizer-small}": [100.0, 50.0, 25.0, 5.0, 1.0, 0.5, 0.0], "PUT{organizer-medium}": [100.0, 100.0, 50.0, 25.0, 5.0, 1.0, 0.5], "PUT{organizer-large}": [100.0, 100.0, 100.0, 50.0, 25.0, 5.0, 1.0], "PUT{organizer-huge}": [100.0, 100.0, 100.0, 100.0, 100.0, 50.0, 25.0], } } } # -small below threshold logger = ReportStatistics(thresholds=_thresholds) logger.observe(dict( type='response', method='PUT{organizer-small}', success=True, duration=0.2, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-small}', success=True, duration=0.2, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-small}', success=True, duration=0.2, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-small}', success=True, duration=0.2, user='user01', client_type="test", client_id="1234" )) self.assertEqual([], logger.failures()) # -small above 
0.5 threshold logger = ReportStatistics(thresholds=_thresholds) logger.observe(dict( type='response', method='PUT{organizer-small}', success=True, duration=0.2, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-small}', success=True, duration=0.6, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-small}', success=True, duration=0.6, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-small}', success=True, duration=0.6, user='user01', client_type="test", client_id="1234" )) self.assertEqual( ["Greater than 50% PUT{organizer-small} exceeded 0.5 second response time"], logger.failures() ) # -medium below 0.5 threshold logger = ReportStatistics(thresholds=_thresholds) logger.observe(dict( type='response', method='PUT{organizer-medium}', success=True, duration=0.2, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-medium}', success=True, duration=0.6, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-medium}', success=True, duration=0.6, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-medium}', success=True, duration=0.6, user='user01', client_type="test", client_id="1234" )) self.assertEqual( [], logger.failures() ) # -medium above 1.0 threshold logger = ReportStatistics(thresholds=_thresholds) logger.observe(dict( type='response', method='PUT{organizer-medium}', success=True, duration=0.2, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-medium}', success=True, duration=1.6, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-medium}', success=True, 
duration=1.6, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-medium}', success=True, duration=1.6, user='user01', client_type="test", client_id="1234" )) self.assertEqual( ["Greater than 50% PUT{organizer-medium} exceeded 1 second response time"], logger.failures() ) # -large below 1.0 threshold logger = ReportStatistics(thresholds=_thresholds) logger.observe(dict( type='response', method='PUT{organizer-large}', success=True, duration=0.2, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-large}', success=True, duration=1.6, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-large}', success=True, duration=1.6, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-large}', success=True, duration=1.6, user='user01', client_type="test", client_id="1234" )) self.assertEqual( [], logger.failures() ) # -large above 3.0 threshold logger = ReportStatistics(thresholds=_thresholds) logger.observe(dict( type='response', method='PUT{organizer-large}', success=True, duration=0.2, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-large}', success=True, duration=3.6, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-large}', success=True, duration=3.6, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-large}', success=True, duration=3.6, user='user01', client_type="test", client_id="1234" )) self.assertEqual( ["Greater than 50% PUT{organizer-large} exceeded 3 second response time"], logger.failures() ) # -huge below 10.0 threshold logger = ReportStatistics(thresholds=_thresholds) logger.observe(dict( type='response', 
method='PUT{organizer-huge}', success=True, duration=12.0, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-huge}', success=True, duration=8, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-huge}', success=True, duration=11.0, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-huge}', success=True, duration=9.0, user='user01', client_type="test", client_id="1234" )) self.assertEqual( [], logger.failures() ) # -huge above 10.0 threshold logger = ReportStatistics(thresholds=_thresholds) logger.observe(dict( type='response', method='PUT{organizer-huge}', success=True, duration=12.0, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-huge}', success=True, duration=9.0, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-huge}', success=True, duration=12.0, user='user01', client_type="test", client_id="1234" )) logger.observe(dict( type='response', method='PUT{organizer-huge}', success=True, duration=42.42, user='user01', client_type="test", client_id="1234" )) self.assertEqual( ["Greater than 50% PUT{organizer-huge} exceeded 10 second response time"], logger.failures() ) calendarserver-9.1+dfsg/contrib/performance/loadtest/test_profiles.py000066400000000000000000001226241315003562600263260ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ## """ Tests for loadtest.profiles. """ from StringIO import StringIO from caldavclientlibrary.protocol.caldav.definitions import caldavxml, csxml from twisted.trial.unittest import TestCase from twisted.internet.task import Clock from twisted.internet.defer import succeed, fail from twisted.web.http import NO_CONTENT, PRECONDITION_FAILED from twisted.web.client import Response from twistedcaldav.ical import Component, Property from contrib.performance.loadtest.profiles import Eventer, Inviter, Accepter, OperationLogger, AlarmAcknowledger, AttachmentDownloader from contrib.performance.loadtest.population import Populator, CalendarClientSimulator from contrib.performance.loadtest.ical import IncorrectResponseCode, Calendar, Event, BaseClient from contrib.performance.loadtest.sim import _DirectoryRecord import os SIMPLE_EVENT = """\ BEGIN:VCALENDAR VERSION:2.0 PRODID:-//Apple Inc.//iCal 4.0.3//EN CALSCALE:GREGORIAN BEGIN:VTIMEZONE TZID:America/New_York BEGIN:DAYLIGHT TZOFFSETFROM:-0500 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU DTSTART:20070311T020000 TZNAME:EDT TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD TZOFFSETFROM:-0400 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU DTSTART:20071104T020000 TZNAME:EST TZOFFSETTO:-0500 END:STANDARD END:VTIMEZONE BEGIN:VEVENT CREATED:20101018T155431Z UID:C98AD237-55AD-4F7D-9009-0D355D835822 DTEND;TZID=America/New_York:20101021T130000 TRANSP:OPAQUE SUMMARY:Simple event DTSTART;TZID=America/New_York:20101021T120000 DTSTAMP:20101018T155438Z SEQUENCE:2 END:VEVENT END:VCALENDAR """ INVITED_EVENT = """\ BEGIN:VCALENDAR VERSION:2.0 CALSCALE:GREGORIAN 
PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN BEGIN:VTIMEZONE TZID:America/New_York BEGIN:STANDARD DTSTART:20071104T020000 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:20070311T020000 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:882C3D50-0DAE-45CB-A2E7-DA75DA9BE452 DTSTART;TZID=America/New_York:20110131T130000 DTEND;TZID=America/New_York:20110131T140000 ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL;EMAIL=user01@example.com;PARTSTAT=AC CEPTED:urn:uuid:user01 ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;EMAIL=user02@example.com;PARTSTAT=NE EDS-ACTION;ROLE=REQ-PARTICIPANT;RSVP=TRUE:urn:uuid:user02 CREATED:20110124T170357Z DTSTAMP:20110124T170425Z ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01 SEQUENCE:3 SUMMARY:Some Event For You TRANSP:TRANSPARENT X-APPLE-NEEDS-REPLY:TRUE END:VEVENT END:VCALENDAR """ ACCEPTED_EVENT = """\ BEGIN:VCALENDAR VERSION:2.0 CALSCALE:GREGORIAN PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN BEGIN:VTIMEZONE TZID:America/New_York BEGIN:STANDARD DTSTART:20071104T020000 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD BEGIN:DAYLIGHT DTSTART:20070311T020000 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:882C3D50-0DAE-45CB-A2E7-DA75DA9BE452 DTSTART;TZID=America/New_York:20110131T130000 DTEND;TZID=America/New_York:20110131T140000 ATTENDEE;CN=User 01;CUTYPE=INDIVIDUAL;EMAIL=user01@example.com;PARTSTAT=AC CEPTED:urn:uuid:user01 ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;EMAIL=user02@example.com;PARTSTAT=AC CEPTED:urn:uuid:user02 CREATED:20110124T170357Z DTSTAMP:20110124T170425Z ORGANIZER;CN=User 01;EMAIL=user01@example.com:urn:uuid:user01 SEQUENCE:3 SUMMARY:Some Event For You TRANSP:TRANSPARENT X-APPLE-NEEDS-REPLY:TRUE END:VEVENT END:VCALENDAR """ INBOX_REPLY 
= """\ BEGIN:VCALENDAR METHOD:REPLY VERSION:2.0 PRODID:-//CALENDARSERVER.ORG//NONSGML Version 1//EN BEGIN:VEVENT UID:12345-67890 ORGANIZER;CN="User 01":mailto:user1@example.com ATTENDEE;PARTSTAT=ACCEPTED:mailto:user1@example.com END:VEVENT END:VCALENDAR """ class AnyUser(object): def __getitem__(self, index): return _AnyRecord(index) class _AnyRecord(object): def __init__(self, index): self.uid = u"user%02d" % (index,) self.password = u"user%02d" % (index,) self.commonName = u"User %02d" % (index,) self.email = u"user%02d@example.com" % (index,) self.guid = u"user%02d" % (index,) class Deterministic(object): def __init__(self, value=None): self.value = value def gauss(self, mean, stddev): """ Pretend to return a value from a gaussian distribution with mu parameter C{mean} and sigma parameter C{stddev}. But actually always return C{mean + 1}. """ return mean + 1 def choice(self, sequence): return sequence[0] def sample(self): return self.value class StubClient(BaseClient): """ Stand in for an iCalendar client. @ivar rescheduled: A set of event URLs which will not allow attendee changes due to a changed schedule tag. @ivar _pendingFailures: dict mapping URLs to failure objects """ def __init__(self, number, serializePath): self.serializePath = serializePath os.mkdir(self.serializePath) self.title = "StubClient" self._events = {} self._calendars = {} self._pendingFailures = {} self.record = _DirectoryRecord( "user%02d" % (number,), "user%02d" % (number,), "User %02d" % (number,), "user%02d@example.org" % (number,), "user%02d" % (number,)) self.email = "mailto:user%02d@example.com" % (number,) self.uuid = "urn:uuid:user%02d" % (number,) self.rescheduled = set() self.started = True def _failDeleteWithObject(self, href, failureObject): """ Accessor for inserting intentional failures for deletes. """ self._pendingFailures[href] = failureObject def serializeLocation(self): """ Return the path to the directory where data for this user is serialized. 
""" if self.serializePath is None or not os.path.isdir(self.serializePath): return None key = "%s-%s" % (self.record.uid, "StubClient") path = os.path.join(self.serializePath, key) if not os.path.exists(path): os.mkdir(path) elif not os.path.isdir(path): return None return path def addEvent(self, href, vevent, attachmentSize=0): self._events[href] = Event(self.serializePath, href, None, vevent) return succeed(None) def addInvite(self, href, vevent, attachmentSize=0, lookupPercentage=0): return self.addEvent(href, vevent, attachmentSize=attachmentSize) def deleteEvent(self, href,): del self._events[href] calendar, uid = href.rsplit('/', 1) del self._calendars[calendar + '/'].events[uid] if href in self._pendingFailures: failureObject = self._pendingFailures.pop(href) return fail(failureObject) else: return succeed(None) def updateEvent(self, href): self.rescheduled.remove(href) return succeed(None) def addEventAttendee(self, href, attendee): vevent = self._events[href].component vevent.mainComponent().addProperty(attendee) self._events[href].component = vevent def changeEventAttendee(self, href, old, new): if href in self.rescheduled: return fail( IncorrectResponseCode( NO_CONTENT, Response( ('HTTP', 1, 1), PRECONDITION_FAILED, 'Precondition Failed', None, None), None ), ) vevent = self._events[href].component vevent.mainComponent().removeProperty(old) vevent.mainComponent().addProperty(new) self._events[href].component = vevent return succeed(None) def _makeSelfAttendee(self): attendee = Property( name=u'ATTENDEE', value=self.email, params={ 'CN': self.record.commonName, 'CUTYPE': 'INDIVIDUAL', 'PARTSTAT': 'ACCEPTED', }, ) return attendee def _makeSelfOrganizer(self): organizer = Property( name=u'ORGANIZER', value=self.email, params={ 'CN': self.record.commonName, }, ) return organizer class SequentialDistribution(object): def __init__(self, values): self.values = values def sample(self): return self.values.pop(0) # These were for the old Inviter class, before 
RealisticInviter was renamed # to Inviter. Keeping them around in case there are some we want to copy # over to the new Inviter: # # class InviterTests(TestCase): # """ # Tests for loadtest.profiles.Inviter. # """ # def setUp(self): # self.sim = CalendarClientSimulator( # AnyUser(), Populator(None), None, None, None, None, None, None) # def _simpleAccount(self, userNumber, eventText): # client = StubClient(userNumber, self.mktemp()) # vevent = Component.fromString(eventText) # calendar = Calendar( # caldavxml.calendar, set(('VEVENT',)), u'calendar', u'/cal/', None) # client._calendars.update({calendar.url: calendar}) # event = Event(client.serializeLocation(), calendar.url + u'1234.ics', None, vevent) # client._events.update({event.url: event}) # calendar.events = {u'1234.ics': event} # return vevent, event, calendar, client # def test_enabled(self): # userNumber = 13 # client = StubClient(userNumber, self.mktemp()) # inviter = Inviter(None, self.sim, client, userNumber, **{"enabled": False}) # self.assertEqual(inviter.enabled, False) # inviter = Inviter(None, self.sim, client, userNumber, **{"enabled": True}) # self.assertEqual(inviter.enabled, True) # def test_doNotAddAttendeeToInbox(self): # """ # When the only calendar with any events is a schedule inbox, no # attempt is made to add attendees to an event on that calendar. # """ # userNumber = 10 # vevent, _ignore_event, calendar, client = self._simpleAccount( # userNumber, SIMPLE_EVENT) # calendar.resourceType = caldavxml.schedule_inbox # inviter = Inviter(None, self.sim, client, userNumber) # inviter._invite() # self.assertFalse(vevent.mainComponent().hasProperty('ATTENDEE')) # def test_doNotAddAttendeeToNoCalendars(self): # """ # When there are no calendars and no events at all, the inviter # does nothing. 
# """ # userNumber = 13 # client = StubClient(userNumber, self.mktemp()) # inviter = Inviter(None, self.sim, client, userNumber) # inviter._invite() # self.assertEquals(client._events, {}) # self.assertEquals(client._calendars, {}) # def test_doNotAddAttendeeToUninitializedEvent(self): # """ # When there is an L{Event} on a calendar but the details of the # event have not yet been retrieved, no attempt is made to add # invitees to that event. # """ # userNumber = 19 # _ignore_vevent, event, calendar, client = self._simpleAccount( # userNumber, SIMPLE_EVENT) # event.component = event.etag = event.scheduleTag = None # inviter = Inviter(None, self.sim, client, userNumber) # inviter._invite() # self.assertEquals(client._events, {event.url: event}) # self.assertEquals(client._calendars, {calendar.url: calendar}) # def test_addAttendeeToEvent(self): # """ # When there is a normal calendar with an event, inviter adds an # attendee to it. # """ # userNumber = 16 # _ignore_vevent, event, _ignore_calendar, client = self._simpleAccount( # userNumber, SIMPLE_EVENT) # inviter = Inviter(Clock(), self.sim, client, userNumber) # inviter.setParameters(inviteeDistribution=Deterministic(1)) # inviter._invite() # attendees = tuple(event.component.mainComponent().properties('ATTENDEE')) # self.assertEquals(len(attendees), 1) # for paramname, paramvalue in { # 'CN': 'User %d' % (userNumber + 1,), # 'CUTYPE': 'INDIVIDUAL', # 'PARTSTAT': 'NEEDS-ACTION', # 'ROLE': 'REQ-PARTICIPANT', # 'RSVP': 'TRUE' # }.items(): # self.assertTrue(attendees[0].hasParameter(paramname)) # self.assertEqual(attendees[0].parameterValue(paramname), paramvalue) # def test_doNotAddSelfToEvent(self): # """ # If the inviter randomly selects its own user to be added to # the attendee list, a different user is added instead. 
# """ # selfNumber = 12 # _ignore_vevent, event, _ignore_calendar, client = self._simpleAccount( # selfNumber, SIMPLE_EVENT) # otherNumber = 20 # values = [selfNumber - selfNumber, otherNumber - selfNumber] # inviter = Inviter(Clock(), self.sim, client, selfNumber) # inviter.setParameters(inviteeDistribution=SequentialDistribution(values)) # inviter._invite() # attendees = tuple(event.component.mainComponent().properties('ATTENDEE')) # self.assertEquals(len(attendees), 1) # for paramname, paramvalue in { # 'CN': 'User %d' % (otherNumber,), # 'CUTYPE': 'INDIVIDUAL', # 'PARTSTAT': 'NEEDS-ACTION', # 'ROLE': 'REQ-PARTICIPANT', # 'RSVP': 'TRUE' # }.items(): # self.assertTrue(attendees[0].hasParameter(paramname)) # self.assertEqual(attendees[0].parameterValue(paramname), paramvalue) # def test_doNotAddExistingToEvent(self): # """ # If the inviter randomly selects a user which is already an # invitee on the event, a different user is added instead. # """ # selfNumber = 1 # _ignore_vevent, event, _ignore_calendar, client = self._simpleAccount( # selfNumber, INVITED_EVENT) # invitee = tuple(event.component.mainComponent().properties('ATTENDEE'))[0] # inviteeNumber = int(invitee.parameterValue('CN').split()[1]) # anotherNumber = inviteeNumber + 5 # values = [inviteeNumber - selfNumber, anotherNumber - selfNumber] # inviter = Inviter(Clock(), self.sim, client, selfNumber) # inviter.setParameters(inviteeDistribution=SequentialDistribution(values)) # inviter._invite() # attendees = tuple(event.component.mainComponent().properties('ATTENDEE')) # self.assertEquals(len(attendees), 3) # for paramname, paramvalue in { # 'CN': 'User %02d' % (anotherNumber,), # 'CUTYPE': 'INDIVIDUAL', # 'PARTSTAT': 'NEEDS-ACTION', # 'ROLE': 'REQ-PARTICIPANT', # 'RSVP': 'TRUE' # }.items(): # self.assertTrue(attendees[2].hasParameter(paramname)) # self.assertEqual(attendees[2].parameterValue(paramname), paramvalue) # def test_everybodyInvitedAlready(self): # """ # If the first so-many randomly selected 
users we come across # are already attendees on the event, the invitation attempt is # abandoned. # """ # selfNumber = 1 # vevent, _ignore_event, _ignore_calendar, client = self._simpleAccount( # selfNumber, INVITED_EVENT) # inviter = Inviter(Clock(), self.sim, client, selfNumber) # # Always return a user number which has already been invited. # inviter.setParameters(inviteeDistribution=Deterministic(2 - selfNumber)) # inviter._invite() # attendees = tuple(vevent.mainComponent().properties('ATTENDEE')) # self.assertEquals(len(attendees), 2) # def test_doNotInviteToSomeoneElsesEvent(self): # """ # If there are events on our calendar which are being organized # by someone else, the inviter does not attempt to invite new # users to them. # """ # selfNumber = 2 # vevent, _ignore_event, _ignore_calendar, client = self._simpleAccount( # selfNumber, INVITED_EVENT) # inviter = Inviter(None, self.sim, client, selfNumber) # # Try to send an invitation, but with only one event on the # # calendar, of which we are not the organizer. It should be # # unchanged afterwards. # inviter._invite() # attendees = tuple(vevent.mainComponent().properties('ATTENDEE')) # self.assertEqual(len(attendees), 2) # self.assertEqual(attendees[0].parameterValue('CN'), 'User 01') # self.assertEqual(attendees[1].parameterValue('CN'), 'User 02') class InviterTests(TestCase): """ Tests for loadtest.profiles.Inviter. 
""" def setUp(self): self.sim = CalendarClientSimulator( AnyUser(), Populator(None), None, None, None, None, None, None) def _simpleAccount(self, userNumber, eventText): client = StubClient(userNumber, self.mktemp()) vevent = Component.fromString(eventText) calendar = Calendar( caldavxml.calendar, set(('VEVENT',)), u'calendar', u'/cal/', None) event = Event(client.serializeLocation(), calendar.url + u'1234.ics', None, vevent) calendar.events = {u'1234.ics': event} client._events.update({event.url: event}) client._calendars.update({calendar.url: calendar}) return vevent, event, calendar, client def test_enabled(self): userNumber = 13 client = StubClient(userNumber, self.mktemp()) inviter = Inviter(None, self.sim, client, userNumber, **{"enabled": False}) self.assertEqual(inviter.enabled, False) inviter = Inviter(None, self.sim, client, userNumber, **{"enabled": True}) self.assertEqual(inviter.enabled, True) def test_doNotAddInviteToInbox(self): """ When the only calendar with any events is a schedule inbox, no attempt is made to add attendees to that calendar. """ calendar = Calendar( caldavxml.schedule_inbox, set(), u'inbox', u'/sched/inbox', None) userNumber = 13 client = StubClient(userNumber, self.mktemp()) client._calendars.update({calendar.url: calendar}) inviter = Inviter(None, self.sim, client, userNumber, **{"enabled": False}) inviter._invite() self.assertEquals(client._events, {}) def test_doNotAddInviteToNoCalendars(self): """ When there are no calendars and no events at all, the inviter does nothing. """ userNumber = 13 client = StubClient(userNumber, self.mktemp()) inviter = Inviter(None, self.sim, client, userNumber) inviter._invite() self.assertEquals(client._events, {}) self.assertEquals(client._calendars, {}) def test_addInvite(self): """ When there is a normal calendar, inviter adds an invite to it. 
        """
        calendar = Calendar(
            caldavxml.calendar, set(('VEVENT',)), u'personal stuff',
            u'/cals/personal', None)
        userNumber = 16
        serializePath = self.mktemp()
        os.mkdir(serializePath)
        client = StubClient(userNumber, self.mktemp())
        client._calendars.update({calendar.url: calendar})
        inviter = Inviter(Clock(), self.sim, client, userNumber)
        inviter.setParameters(
            inviteeDistribution=Deterministic(1),
            inviteeCountDistribution=Deterministic(1)
        )
        inviter._invite()
        self.assertEquals(len(client._events), 1)
        attendees = tuple(client._events.values()[0].component.mainComponent().properties('ATTENDEE'))
        expected = set((
            "mailto:user%02d@example.com" % (userNumber,),
            "mailto:user%02d@example.com" % (userNumber + 1,),
        ))
        for attendee in attendees:
            expected.remove(attendee.value())
        self.assertEqual(len(expected), 0)

    def test_doNotAddSelfToEvent(self):
        """
        If the inviter randomly selects its own user to be added to the
        attendee list, a different user is added instead.
        """
        calendar = Calendar(
            caldavxml.calendar, set(('VEVENT',)), u'personal stuff',
            u'/cals/personal', None)
        selfNumber = 12
        client = StubClient(selfNumber, self.mktemp())
        client._calendars.update({calendar.url: calendar})

        otherNumber = 20
        values = [selfNumber - selfNumber, otherNumber - selfNumber]

        inviter = Inviter(Clock(), self.sim, client, selfNumber)
        inviter.setParameters(
            inviteeDistribution=SequentialDistribution(values),
            inviteeCountDistribution=Deterministic(1)
        )
        inviter._invite()
        self.assertEquals(len(client._events), 1)
        attendees = tuple(client._events.values()[0].component.mainComponent().properties('ATTENDEE'))
        expected = set((
            "mailto:user%02d@example.com" % (selfNumber,),
            "mailto:user%02d@example.com" % (otherNumber,),
        ))
        for attendee in attendees:
            expected.remove(attendee.value())
        self.assertEqual(len(expected), 0)

    def test_doNotAddExistingToEvent(self):
        """
        If the inviter randomly selects a user which is already an invitee
        on the event, a different user is added instead.
        """
        calendar = Calendar(
            caldavxml.calendar, set(('VEVENT',)), u'personal stuff',
            u'/cals/personal', None)
        selfNumber = 1
        client = StubClient(selfNumber, self.mktemp())
        client._calendars.update({calendar.url: calendar})

        inviteeNumber = 20
        anotherNumber = inviteeNumber + 5
        values = [
            inviteeNumber - selfNumber, inviteeNumber - selfNumber,
            anotherNumber - selfNumber]

        inviter = Inviter(Clock(), self.sim, client, selfNumber)
        inviter.setParameters(
            inviteeDistribution=SequentialDistribution(values),
            inviteeCountDistribution=Deterministic(2)
        )
        inviter._invite()
        self.assertEquals(len(client._events), 1)
        attendees = tuple(client._events.values()[0].component.mainComponent().properties('ATTENDEE'))
        expected = set((
            "mailto:user%02d@example.com" % (selfNumber,),
            "mailto:user%02d@example.com" % (inviteeNumber,),
            "mailto:user%02d@example.com" % (anotherNumber,),
        ))
        for attendee in attendees:
            expected.remove(attendee.value())
        self.assertEqual(len(expected), 0)

    def test_everybodyInvitedAlready(self):
        """
        If the first so-many randomly selected users we come across are
        already attendees on the event, the invitation attempt is
        abandoned.
        """
        calendar = Calendar(
            caldavxml.calendar, set(('VEVENT',)), u'personal stuff',
            u'/cals/personal', None)
        userNumber = 1
        client = StubClient(userNumber, self.mktemp())
        client._calendars.update({calendar.url: calendar})
        inviter = Inviter(Clock(), self.sim, client, userNumber)
        inviter.setParameters(
            inviteeDistribution=Deterministic(1),
            inviteeCountDistribution=Deterministic(2)
        )
        inviter._invite()
        self.assertEquals(len(client._events), 0)

    test_everybodyInvitedAlready.todo = "Inviter logic has changed and it may add fewer-than-requested attendees"


class AccepterTests(TestCase):
    """
    Tests for loadtest.profiles.Accepter.
    """

    def setUp(self):
        self.sim = CalendarClientSimulator(
            AnyUser(), Populator(None), None, None, None, None, None, None)

    def test_enabled(self):
        userNumber = 13
        client = StubClient(userNumber, self.mktemp())

        accepter = Accepter(None, self.sim, client, userNumber, **{"enabled": False})
        self.assertEqual(accepter.enabled, False)

        accepter = Accepter(None, self.sim, client, userNumber, **{"enabled": True})
        self.assertEqual(accepter.enabled, True)

    def test_ignoreEventOnUnknownCalendar(self):
        """
        If an event on an unknown calendar changes, it is ignored.
        """
        userNumber = 13
        client = StubClient(userNumber, self.mktemp())
        accepter = Accepter(None, self.sim, client, userNumber)
        accepter.eventChanged('/some/calendar/1234.ics')

    def test_ignoreNonCalendar(self):
        """
        If an event is on a calendar which is not of type
        {CALDAV:}calendar, it is ignored.
        """
        userNumber = 14
        calendarURL = '/some/calendar/'
        calendar = Calendar(
            csxml.dropbox_home, set(), u'notification', calendarURL, None)
        client = StubClient(userNumber, self.mktemp())
        client._calendars[calendarURL] = calendar
        accepter = Accepter(None, self.sim, client, userNumber)
        accepter.eventChanged(calendarURL + '1234.ics')

    def test_ignoreAccepted(self):
        """
        If the client is an attendee on an event but the PARTSTAT is not
        NEEDS-ACTION, the event is ignored.
        """
        vevent = Component.fromString(ACCEPTED_EVENT)
        attendees = tuple(vevent.mainComponent().properties('ATTENDEE'))
        userNumber = int(attendees[1].parameterValue('CN').split(None, 1)[1])
        calendarURL = '/some/calendar/'
        calendar = Calendar(
            caldavxml.calendar, set(('VEVENT',)), u'calendar', calendarURL, None)
        client = StubClient(userNumber, self.mktemp())
        client._calendars[calendarURL] = calendar
        event = Event(client.serializeLocation(), calendarURL + u'1234.ics', None, vevent)
        client._events[event.url] = event
        accepter = Accepter(None, self.sim, client, userNumber)
        accepter.eventChanged(event.url)

    def test_ignoreAlreadyAccepting(self):
        """
        If the client sees an event change a second time before responding
        to an invitation found on it during the first change notification,
        the second change notification does not generate another accept
        attempt.
        """
        clock = Clock()
        randomDelay = 7
        vevent = Component.fromString(INVITED_EVENT)
        attendees = tuple(vevent.mainComponent().properties('ATTENDEE'))
        userNumber = int(attendees[1].parameterValue('CN').split(None, 1)[1])
        calendarURL = '/some/calendar/'
        calendar = Calendar(
            caldavxml.calendar, set(('VEVENT',)), u'calendar', calendarURL, None)
        client = StubClient(userNumber, self.mktemp())
        client._calendars[calendarURL] = calendar
        event = Event(client.serializeLocation(), calendarURL + u'1234.ics', None, vevent)
        client._events[event.url] = event
        accepter = Accepter(clock, self.sim, client, userNumber)
        accepter.random = Deterministic()

        def _gauss(mu, sigma):
            return randomDelay
        accepter.random.gauss = _gauss
        accepter.eventChanged(event.url)
        accepter.eventChanged(event.url)
        clock.advance(randomDelay)

    def test_inboxReply(self):
        """
        When an inbox item that contains a reply is seen by the client, it
        deletes it immediately.
        """
        userNumber = 1
        clock = Clock()
        inboxURL = '/some/inbox/'
        vevent = Component.fromString(INBOX_REPLY)
        inbox = Calendar(
            caldavxml.schedule_inbox, set(), u'the inbox', inboxURL, None)
        client = StubClient(userNumber, self.mktemp())
        client._calendars[inboxURL] = inbox
        inboxEvent = Event(client.serializeLocation(), inboxURL + u'4321.ics', None, vevent)
        client._setEvent(inboxEvent.url, inboxEvent)
        accepter = Accepter(clock, self.sim, client, userNumber)
        accepter.eventChanged(inboxEvent.url)
        clock.advance(3)
        self.assertNotIn(inboxEvent.url, client._events)
        self.assertNotIn('4321.ics', inbox.events)

    def test_inboxReplyFailedDelete(self):
        """
        When an inbox item that contains a reply is seen by the client, it
        deletes it immediately.  If the delete fails, the appropriate
        response code is returned.
        """
        userNumber = 1
        clock = Clock()
        inboxURL = '/some/inbox/'
        vevent = Component.fromString(INBOX_REPLY)
        inbox = Calendar(
            caldavxml.schedule_inbox, set(), u'the inbox', inboxURL, None)
        client = StubClient(userNumber, self.mktemp())
        client._calendars[inboxURL] = inbox
        inboxEvent = Event(client.serializeLocation(), inboxURL + u'4321.ics', None, vevent)
        client._setEvent(inboxEvent.url, inboxEvent)
        client._failDeleteWithObject(
            inboxEvent.url,
            IncorrectResponseCode(
                NO_CONTENT,
                Response(
                    ('HTTP', 1, 1), PRECONDITION_FAILED,
                    'Precondition Failed', None, None),
                None)
        )
        accepter = Accepter(clock, self.sim, client, userNumber)
        accepter.eventChanged(inboxEvent.url)
        clock.advance(3)
        self.assertNotIn(inboxEvent.url, client._events)
        self.assertNotIn('4321.ics', inbox.events)

    def test_acceptInvitation(self):
        """
        If the client is an attendee on an event and the PARTSTAT is
        NEEDS-ACTION, a response is generated which accepts the invitation
        and the corresponding event in the I{schedule-inbox} is deleted.
        """
        clock = Clock()
        randomDelay = 7
        vevent = Component.fromString(INVITED_EVENT)
        attendees = tuple(vevent.mainComponent().properties('ATTENDEE'))
        userNumber = int(attendees[1].parameterValue('CN').split(None, 1)[1])
        client = StubClient(userNumber, self.mktemp())

        calendarURL = '/some/calendar/'
        calendar = Calendar(
            caldavxml.calendar, set(('VEVENT',)), u'calendar', calendarURL, None)
        client._calendars[calendarURL] = calendar

        inboxURL = '/some/inbox/'
        inbox = Calendar(
            caldavxml.schedule_inbox, set(), u'the inbox', inboxURL, None)
        client._calendars[inboxURL] = inbox

        event = Event(client.serializeLocation(), calendarURL + u'1234.ics', None, vevent)
        client._setEvent(event.url, event)

        inboxEvent = Event(client.serializeLocation(), inboxURL + u'4321.ics', None, vevent)
        client._setEvent(inboxEvent.url, inboxEvent)

        accepter = Accepter(clock, self.sim, client, userNumber)
        accepter.setParameters(acceptDelayDistribution=Deterministic(randomDelay))
        accepter.eventChanged(event.url)
        clock.advance(randomDelay)

        vevent = client._events[event.url].component
        attendees = tuple(vevent.mainComponent().properties('ATTENDEE'))
        self.assertEquals(len(attendees), 2)
        self.assertEquals(
            attendees[1].parameterValue('CN'), 'User %02d' % (userNumber,))
        self.assertEquals(
            attendees[1].parameterValue('PARTSTAT'), 'ACCEPTED')
        self.assertFalse(attendees[1].hasParameter('RSVP'))
        self.assertNotIn(inboxEvent.url, client._events)
        self.assertNotIn('4321.ics', inbox.events)

    def test_reacceptInvitation(self):
        """
        If a client accepts an invitation on an event and then is later
        re-invited to the same event, the invitation is again accepted.
        """
        clock = Clock()
        randomDelay = 7
        vevent = Component.fromString(INVITED_EVENT)
        attendees = tuple(vevent.mainComponent().properties('ATTENDEE'))
        userNumber = int(attendees[1].parameterValue('CN').split(None, 1)[1])
        calendarURL = '/some/calendar/'
        calendar = Calendar(
            caldavxml.calendar, set(('VEVENT',)), u'calendar', calendarURL, None)
        client = StubClient(userNumber, self.mktemp())
        client._calendars[calendarURL] = calendar
        event = Event(client.serializeLocation(), calendarURL + u'1234.ics', None, vevent)
        client._events[event.url] = event
        accepter = Accepter(clock, self.sim, client, userNumber)
        accepter.setParameters(acceptDelayDistribution=Deterministic(randomDelay))
        accepter.eventChanged(event.url)
        clock.advance(randomDelay)

        # Now re-set the event so it has to be accepted again
        event.component = Component.fromString(INVITED_EVENT)

        # And now re-deliver it
        accepter.eventChanged(event.url)
        clock.advance(randomDelay)

        # And ensure that it was accepted again
        vevent = client._events[event.url].component
        attendees = tuple(vevent.mainComponent().properties('ATTENDEE'))
        self.assertEquals(len(attendees), 2)
        self.assertEquals(
            attendees[1].parameterValue('CN'), 'User %02d' % (userNumber,))
        self.assertEquals(
            attendees[1].parameterValue('PARTSTAT'), 'ACCEPTED')
        self.assertFalse(attendees[1].hasParameter('RSVP'))

    def test_changeEventAttendeePreconditionFailed(self):
        """
        If the attempt to accept an invitation fails because of an unmet
        precondition (412), the event is re-retrieved and the PUT is
        re-issued with the new data.
        """
        clock = Clock()
        userNumber = 2
        client = StubClient(userNumber, self.mktemp())
        randomDelay = 3
        calendarURL = '/some/calendar/'
        calendar = Calendar(
            caldavxml.calendar, set(('VEVENT',)), u'calendar', calendarURL, None)
        client._calendars[calendarURL] = calendar

        vevent = Component.fromString(INVITED_EVENT)
        event = Event(client.serializeLocation(), calendarURL + u'1234.ics', None, vevent)
        client._setEvent(event.url, event)

        accepter = Accepter(clock, self.sim, client, userNumber)
        accepter.setParameters(acceptDelayDistribution=Deterministic(randomDelay))
        client.rescheduled.add(event.url)

        accepter.eventChanged(event.url)
        clock.advance(randomDelay)


class EventerTests(TestCase):
    """
    Tests for loadtest.profiles.Eventer, a profile which adds new events
    on calendars.
    """

    def setUp(self):
        self.sim = CalendarClientSimulator(
            AnyUser(), Populator(None), None, None, None, None, None, None)

    def test_enabled(self):
        userNumber = 13
        client = StubClient(userNumber, self.mktemp())

        eventer = Eventer(None, self.sim, client, None, **{"enabled": False})
        self.assertEqual(eventer.enabled, False)

        eventer = Eventer(None, self.sim, client, None, **{"enabled": True})
        self.assertEqual(eventer.enabled, True)

    def test_doNotAddEventOnInbox(self):
        """
        When the only calendar is a schedule inbox, no attempt is made to
        add events on it.
        """
        calendar = Calendar(
            caldavxml.schedule_inbox, set(), u'inbox', u'/sched/inbox', None)
        client = StubClient(21, self.mktemp())
        client._calendars.update({calendar.url: calendar})

        eventer = Eventer(None, self.sim, client, None)
        eventer._addEvent()

        self.assertEquals(client._events, {})

    def test_addEvent(self):
        """
        When there is a normal calendar to add events to,
        L{Eventer._addEvent} adds an event to it.
        """
        calendar = Calendar(
            caldavxml.calendar, set(('VEVENT',)), u'personal stuff',
            u'/cals/personal', None)
        client = StubClient(31, self.mktemp())
        client._calendars.update({calendar.url: calendar})

        eventer = Eventer(Clock(), self.sim, client, None)
        eventer._addEvent()

        self.assertEquals(len(client._events), 1)

        # XXX Vary the event period/interval and the uid


class AttachmentDownloaderTests(TestCase):
    """
    Tests for L{AttachmentDownloader}.
    """

    def setUp(self):
        self.sim = CalendarClientSimulator(
            AnyUser(), Populator(None), None, None, None, None, None, None)

    def test_normalize(self):
        client = StubClient(1, self.mktemp())
        client._managed_attachments_server_url = "https://thisserver:8443"
        downloader = AttachmentDownloader(Clock(), self.sim, client, None)
        self.assertEquals(
            downloader._normalizeHref("http://otherserver/attachment/url"),
            "https://thisserver:8443/attachment/url"
        )


class OperationLoggerTests(TestCase):
    """
    Tests for L{OperationLogger}.
    """

    def test_noFailures(self):
        """
        If the median lag is below 1 second and the failure rate is below
        1%, L{OperationLogger.failures} returns an empty list.
        """
        logger = OperationLogger(outfile=StringIO())
        logger.observe(dict(
            type='operation', phase='start', user='user01',
            label='testing', lag=0.5)
        )
        logger.observe(dict(
            type='operation', phase='end', user='user01', duration=0.35,
            label='testing', success=True)
        )
        self.assertEqual([], logger.failures())

    def test_lagLimitExceeded(self):
        """
        If the median scheduling lag for any operation in the simulation
        exceeds 1 second, L{OperationLogger.failures} returns a list
        containing a string describing that issue.
        """
        logger = OperationLogger(outfile=StringIO())
        for lag in [100.0, 1100.0, 1200.0]:
            logger.observe(dict(
                type='operation', phase='start', user='user01',
                label='testing', lag=lag)
            )
        self.assertEqual(
            ["Median TESTING scheduling lag greater than 1000.0ms"],
            logger.failures())

    def test_failureLimitExceeded(self):
        """
        If the failure rate for any operation exceeds 1%,
        L{OperationLogger.failures} returns a list containing a string
        describing that issue.
        """
        logger = OperationLogger(outfile=StringIO())
        for _ignore in range(98):
            logger.observe(dict(
                type='operation', phase='end', user='user01',
                duration=0.25, label='testing', success=True)
            )
        logger.observe(dict(
            type='operation', phase='end', user='user01',
            duration=0.25, label='testing', success=False)
        )
        self.assertEqual(
            ["Greater than 1% TESTING failed"],
            logger.failures())

    def test_failureNoPush(self):
        """
        If there are no pushes, L{OperationLogger.failures} will/will not
        fail depending on whether the "failIfNoPush" parameter is set.
        """
        # No pushes, no param
        logger = OperationLogger(outfile=StringIO())
        for _ignore in range(98):
            logger.observe(dict(
                type='operation', phase='end', user='user01',
                duration=0.25, label='testing', success=True)
            )
        self.assertEqual(
            [],
            logger.failures())

        # No pushes, have param True
        logger = OperationLogger(outfile=StringIO(), failIfNoPush=True)
        for _ignore in range(98):
            logger.observe(dict(
                type='operation', phase='end', user='user01',
                duration=0.25, label='testing', success=True)
            )
        self.assertEqual(
            [OperationLogger._PUSH_MISSING_REASON],
            logger.failures())

        # Pushes, have param False
        logger = OperationLogger(outfile=StringIO(), failIfNoPush=False)
        for _ignore in range(98):
            logger.observe(dict(
                type='operation', phase='end', user='user01',
                duration=0.25, label='testing', success=True)
            )
        logger.observe(dict(
            type='operation', phase='end', user='user01',
            duration=0.25, label='push', success=True)
        )
        self.assertEqual(
            [],
            logger.failures())

        # Pushes, have param True
        logger = OperationLogger(outfile=StringIO(), failIfNoPush=True)
        for _ignore in range(98):
            logger.observe(dict(
                type='operation', phase='end', user='user01',
                duration=0.25, label='testing', success=True)
            )
        logger.observe(dict(
            type='operation', phase='end', user='user01',
            duration=0.25, label='push', success=True)
        )
        self.assertEqual(
            [],
            logger.failures())


class AlarmAcknowledgerTests(TestCase):

    def test_pastTheHour(self):
        acknowledger = AlarmAcknowledger(None, None, None, {})
        self.assertTrue(acknowledger._shouldUpdate(0))
        self.assertFalse(acknowledger._shouldUpdate(0))
        self.assertFalse(acknowledger._shouldUpdate(1))
        self.assertFalse(acknowledger._shouldUpdate(1))
        self.assertFalse(acknowledger._shouldUpdate(14))
        self.assertTrue(acknowledger._shouldUpdate(15))
        self.assertFalse(acknowledger._shouldUpdate(15))
        self.assertFalse(acknowledger._shouldUpdate(16))
        self.assertTrue(acknowledger._shouldUpdate(30))
        self.assertFalse(acknowledger._shouldUpdate(30))
        self.assertFalse(acknowledger._shouldUpdate(59))
        self.assertTrue(acknowledger._shouldUpdate(0))
        self.assertFalse(acknowledger._shouldUpdate(0))

calendarserver-9.1+dfsg/contrib/performance/loadtest/test_sim.py

##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
##

from plistlib import writePlistToString

from cStringIO import StringIO

from twisted.python.log import msg
from twisted.python.usage import UsageError
from twisted.python.filepath import FilePath
from twisted.internet.defer import Deferred, succeed
from twisted.trial.unittest import TestCase

from contrib.performance.stats import NormalDistribution
from contrib.performance.loadtest.ical import OS_X_10_6
from contrib.performance.loadtest.profiles import Eventer, Inviter, Accepter
from contrib.performance.loadtest.population import (
    SmoothRampUp, ClientType, PopulationParameters, Populator,
    CalendarClientSimulator, ProfileType, SimpleStatistics
)
from contrib.performance.loadtest.sim import (
    Arrival, SimOptions, LoadSimulator, LagTrackingReactor, _DirectoryRecord
)

VALID_CONFIG = {
    'servers': {
        "PodA": {
            "enabled": True,
            "uri": 'http://example.org:1234/',
            "stats": {"enabled": False},
        },
    },
    'webadmin': {
        'enabled': True,
        'HTTPPort': 8080,
    },
    'arrival': {
        'factory': 'contrib.performance.loadtest.population.SmoothRampUp',
        'params': {
            'groups': 10,
            'groupSize': 1,
            'interval': 3,
        },
    },
}

VALID_CONFIG_PLIST = writePlistToString(VALID_CONFIG)


class SimOptionsTests(TestCase):

    def test_defaultConfig(self):
        """
        If the I{config} option is not specified, the default config.plist
        in the source tree is used.
        """
        options = SimOptions()
        self.assertEqual(options['config'], FilePath(__file__).sibling('config.plist'))

    def test_configFileNotFound(self):
        """
        If the filename given to the I{config} option is not found,
        L{SimOptions.parseOptions} raises a L{UsageError} indicating this.
        """
        name = FilePath(self.mktemp())
        options = SimOptions()
        exc = self.assertRaises(
            UsageError, options.parseOptions, ['--config', name.path])
        self.assertEquals(
            str(exc), "--config %s: No such file or directory" % (name.path,))

    def test_configFileNotParseable(self):
        """
        If the contents of the file given to the I{config} option cannot be
        parsed by L{ConfigParser}, L{SimOptions.parseOptions} raises a
        L{UsageError} indicating this.
        """
        config = FilePath(self.mktemp())
        config.setContent("some random junk")
        options = SimOptions()
        exc = self.assertRaises(
            UsageError, options.parseOptions, ['--config', config.path])
        self.assertEquals(
            str(exc),
            "--config %s: syntax error: line 1, column 0" % (config.path,))


class CalendarClientSimulatorTests(TestCase):
    """
    Tests for L{CalendarClientSimulator} which adds running clients to a
    simulation.
    """
    realmName = 'stub'

    def _user(self, name):
        password = 'password-' + name
        email = name + "@example.com"
        record = _DirectoryRecord(name, password, name, email, name)
        return record

    def test_createUser(self):
        """
        Subsequent calls to L{CalendarClientSimulator._createUser} with
        different user numbers return user details from different
        directory records.
        """
        calsim = CalendarClientSimulator(
            [self._user('alice'), self._user('bob'), self._user('carol')],
            Populator(None), None, None, None,
            {
                "PodA": {
                    "enabled": True,
                    "uri": 'http://example.org:1234/',
                    "stats": {"enabled": False},
                },
            }, None, None)
        users = sorted([
            calsim._createUser(0)[1],
            calsim._createUser(1)[1],
            calsim._createUser(2)[1],
        ])
        self.assertEqual(['alice', 'bob', 'carol'], users)

    def test_createUserAuthInfo(self):
        """
        The auth handler returned by L{CalendarClientSimulator._createUser}
        includes the password taken from the user's directory record.
        """
        calsim = CalendarClientSimulator(
            [self._user('alice')],
            Populator(None), None, None, None,
            {
                "PodA": {
                    "enabled": True,
                    "uri": 'http://example.org:1234/',
                    "stats": {"enabled": False},
                },
            }, None, None)
        _ignore_record, user, auth = calsim._createUser(0)
        self.assertEqual(
            auth['basic'].passwd.find_user_password('Test Realm', 'http://example.org:1234/')[1],
            'password-' + user)
        self.assertEqual(
            auth['digest'].passwd.find_user_password('Test Realm', 'http://example.org:1234/')[1],
            'password-' + user)

    def test_stop(self):
        """
        After L{CalendarClientSimulator.stop} is called, failed clients and
        profiles are not logged.
        """
        class BrokenClient(object):
            def __init__(self, reactor, serverAddress, principalPathTemplate,
                         serializationPath, userInfo, auth, instanceNumber, runResult):
                self._runResult = runResult

            def run(self):
                return self._runResult

            def stop(self):
                return succeed(None)

        class BrokenProfile(object):
            def __init__(self, reactor, simulator, client, userNumber, runResult):
                self._runResult = runResult
                self.enabled = True

            def initialize(self):
                return succeed(None)

            def run(self):
                return self._runResult

        clientRunResult = Deferred()
        profileRunResult = Deferred()

        params = PopulationParameters()
        params.addClient(1, ClientType(
            BrokenClient, {'runResult': clientRunResult},
            [ProfileType(BrokenProfile, {'runResult': profileRunResult})])
        )
        sim = CalendarClientSimulator(
            [self._user('alice')], Populator(None), None, params, None,
            {
                "PodA": {
                    "enabled": True,
                    "uri": 'http://example.org:1234/',
                    "stats": {"enabled": False},
                },
            }, None, None)
        sim.add(1, 1)
        sim.stop()
        clientRunResult.errback(RuntimeError("Some fictional client problem"))
        profileRunResult.errback(RuntimeError("Some fictional profile problem"))
        self.assertEqual([], self.flushLoggedErrors())


class Reactor(object):
    message = "some event to be observed"

    def __init__(self):
        self._triggers = []
        self._whenRunning = []

    def run(self):
        for thunk in self._whenRunning:
            thunk()
        msg(thingo=self.message)
        for _ignore_phase, event, thunk in self._triggers:
            if event == 'shutdown':
                thunk()

    def callWhenRunning(self, thunk):
        self._whenRunning.append(thunk)

    def addSystemEventTrigger(self, phase, event, thunk):
        self._triggers.append((phase, event, thunk))


class Observer(object):
    def __init__(self):
        self.reported = False
        self.events = []

    def observe(self, event):
        self.events.append(event)

    def report(self, output):
        self.reported = True

    def failures(self):
        return []


class NullArrival(object):
    def run(self, sim):
        pass


class StubSimulator(LoadSimulator):
    def run(self):
        return 3


class LoadSimulatorTests(TestCase):

    def test_main(self):
        """
        L{LoadSimulator.main} raises L{SystemExit} with the result of
        L{LoadSimulator.run}.
        """
        config = FilePath(self.mktemp())
        config.setContent(VALID_CONFIG_PLIST)
        exc = self.assertRaises(
            SystemExit, StubSimulator.main, ['--config', config.path])
        self.assertEquals(
            exc.args, (StubSimulator(None, None, None, None, None, None, None).run(),))

    def test_createSimulator(self):
        """
        L{LoadSimulator.createSimulator} creates a
        L{CalendarClientSimulator} with its own reactor and host and port
        information from the configuration file.
        """
        servers = {
            "PodA": {
                "enabled": True,
                "uri": 'http://example.org:1234/',
                "stats": {"enabled": False},
            },
        }
        reactor = object()
        sim = LoadSimulator(servers, None, None, None, None, None, None, reactor=reactor)
        calsim = sim.createSimulator()
        self.assertIsInstance(calsim, CalendarClientSimulator)
        self.assertIsInstance(calsim.reactor, LagTrackingReactor)
        self.assertIdentical(calsim.reactor._reactor, reactor)
        self.assertEquals(calsim.servers, servers)

    def test_loadAccountsFromFile(self):
        """
        L{LoadSimulator.fromCommandLine} takes an account loader from the
        config file and uses it to create user records for use in the
        simulation.
        """
        accounts = FilePath(self.mktemp())
        accounts.setContent("foo,bar,baz,quux,goo\nfoo2,bar2,baz2,quux2,goo2\n")
        config = VALID_CONFIG.copy()
        config["accounts"] = {
            "loader": "contrib.performance.loadtest.sim.recordsFromCSVFile",
            "params": {
                "path": accounts.path,
                "interleavePods": True,
            },
        }
        configpath = FilePath(self.mktemp())
        configpath.setContent(writePlistToString(config))
        io = StringIO()
        sim = LoadSimulator.fromCommandLine(['--config', configpath.path], io)
        self.assertEquals(io.getvalue(), "Loaded 2 accounts.\n")
        self.assertEqual(2, len(sim.records))
        self.assertEqual(sim.records[0].uid, 'foo')
        self.assertEqual(sim.records[0].password, 'bar')
        self.assertEqual(sim.records[0].commonName, 'baz')
        self.assertEqual(sim.records[0].email, 'quux')
        self.assertEqual(sim.records[1].uid, 'foo2')
        self.assertEqual(sim.records[1].password, 'bar2')
        self.assertEqual(sim.records[1].commonName, 'baz2')
        self.assertEqual(sim.records[1].email, 'quux2')

    def test_loadDefaultAccountsFromFile(self):
        """
        L{LoadSimulator.fromCommandLine} takes an account loader (with an
        empty path) from the config file and uses it to create user records
        for use in the simulation.
        """
        config = VALID_CONFIG.copy()
        config["accounts"] = {
            "loader": "contrib.performance.loadtest.sim.recordsFromCSVFile",
            "params": {
                "path": "",
                "interleavePods": True,
            },
        }
        configpath = FilePath(self.mktemp())
        configpath.setContent(writePlistToString(config))
        sim = LoadSimulator.fromCommandLine(['--config', configpath.path], StringIO())
        self.assertEqual(99, len(sim.records))
        self.assertEqual(sim.records[0].uid, 'user01')
        self.assertEqual(sim.records[0].password, 'user01')
        self.assertEqual(sim.records[0].commonName, 'User 01')
        self.assertEqual(sim.records[0].email, 'user01@example.com')
        self.assertEqual(sim.records[98].uid, 'user99')
        self.assertEqual(sim.records[98].password, 'user99')
        self.assertEqual(sim.records[98].commonName, 'User 99')
        self.assertEqual(sim.records[98].email, 'user99@example.com')

    def test_generateRecordsDefaultPatterns(self):
        """
        L{LoadSimulator.fromCommandLine} takes an account loader from the
        config file and uses it to generate user records for use in the
        simulation.
        """
        config = VALID_CONFIG.copy()
        config["accounts"] = {
            "loader": "contrib.performance.loadtest.sim.generateRecords",
            "params": {
                "count": 2
            },
        }
        configpath = FilePath(self.mktemp())
        configpath.setContent(writePlistToString(config))
        sim = LoadSimulator.fromCommandLine(['--config', configpath.path], StringIO())
        self.assertEqual(2, len(sim.records))
        self.assertEqual(sim.records[0].uid, 'user1')
        self.assertEqual(sim.records[0].password, 'user1')
        self.assertEqual(sim.records[0].commonName, 'User 1')
        self.assertEqual(sim.records[0].email, 'user1@example.com')
        self.assertEqual(sim.records[1].uid, 'user2')
        self.assertEqual(sim.records[1].password, 'user2')
        self.assertEqual(sim.records[1].commonName, 'User 2')
        self.assertEqual(sim.records[1].email, 'user2@example.com')

    def test_generateRecordsNonDefaultPatterns(self):
        """
        L{LoadSimulator.fromCommandLine} takes an account loader from the
        config file and uses it to generate user records, honoring any
        non-default name patterns, for use in the simulation.
        """
        config = VALID_CONFIG.copy()
        config["accounts"] = {
            "loader": "contrib.performance.loadtest.sim.generateRecords",
            "params": {
                "count": 3,
                "uidPattern": "USER%03d",
                "passwordPattern": "PASSWORD%03d",
                "namePattern": "Test User %03d",
                "emailPattern": "USER%03d@example2.com",
            },
        }
        configpath = FilePath(self.mktemp())
        configpath.setContent(writePlistToString(config))
        sim = LoadSimulator.fromCommandLine(['--config', configpath.path], StringIO())
        self.assertEqual(3, len(sim.records))
        self.assertEqual(sim.records[0].uid, 'USER001')
        self.assertEqual(sim.records[0].password, 'PASSWORD001')
        self.assertEqual(sim.records[0].commonName, 'Test User 001')
        self.assertEqual(sim.records[0].email, 'USER001@example2.com')
        self.assertEqual(sim.records[2].uid, 'USER003')
        self.assertEqual(sim.records[2].password, 'PASSWORD003')
        self.assertEqual(sim.records[2].commonName, 'Test User 003')
        self.assertEqual(sim.records[2].email, 'USER003@example2.com')

    def test_specifyRuntime(self):
        """
        L{LoadSimulator.fromCommandLine} recognizes the I{--runtime} option
        to specify a limit on how long the simulation will run.
        """
        config = FilePath(self.mktemp())
        config.setContent(VALID_CONFIG_PLIST)
        sim = LoadSimulator.fromCommandLine(['--config', config.path, '--runtime', '123'])
        self.assertEqual(123, sim.runtime)

    def test_loadServerConfig(self):
        """
        The Calendar Server host and port are loaded from the [server]
        section of the configuration file specified.
        """
        config = FilePath(self.mktemp())
        config.setContent(
            writePlistToString({"servers": {
                "PodA": {
                    "enabled": True,
                    "uri": 'https://127.0.0.3:8432/',
                    "stats": {"enabled": False},
                },
            }})
        )
        sim = LoadSimulator.fromCommandLine(['--config', config.path])
        self.assertEquals(sim.servers["PodA"]["uri"], "https://127.0.0.3:8432/")

    def test_loadArrivalConfig(self):
        """
        The arrival policy type and arguments are loaded from the [arrival]
        section of the configuration file specified.
        """
        config = FilePath(self.mktemp())
        config.setContent(
            writePlistToString({
                "arrival": {
                    "factory": "contrib.performance.loadtest.population.SmoothRampUp",
                    "params": {
                        "groups": 10,
                        "groupSize": 1,
                        "interval": 3,
                    },
                },
            })
        )
        sim = LoadSimulator.fromCommandLine(['--config', config.path])
        self.assertEquals(
            sim.arrival,
            Arrival(SmoothRampUp, dict(groups=10, groupSize=1, interval=3)))

    def test_createArrivalPolicy(self):
        """
        L{LoadSimulator.createArrivalPolicy} creates an arrival policy
        based on the L{Arrival} passed to its initializer.
        """
        class FakeArrival(object):
            def __init__(self, reactor, x, y):
                self.reactor = reactor
                self.x = x
                self.y = y

        reactor = object()
        sim = LoadSimulator(
            None, None, None, None, Arrival(FakeArrival, {'x': 3, 'y': 2}), None, reactor=reactor)
        arrival = sim.createArrivalPolicy()
        self.assertIsInstance(arrival, FakeArrival)
        self.assertIdentical(arrival.reactor, sim.reactor)
        self.assertEquals(arrival.x, 3)
        self.assertEquals(arrival.y, 2)

    def test_loadPopulationParameters(self):
        """
        Client weights and profiles are loaded from the [clients] section
        of the configuration file specified.
        """
        config = FilePath(self.mktemp())
        config.setContent(
            writePlistToString(
                {
                    "clients": [
                        {
                            "software": "contrib.performance.loadtest.ical.OS_X_10_6",
                            "params": {"foo": "bar"},
                            "profiles": [
                                {
                                    "params": {
                                        "interval": 25,
                                        "eventStartDistribution": {
                                            "type": "contrib.performance.stats.NormalDistribution",
                                            "params": {
                                                "mu": 123,
                                                "sigma": 456,
                                            }
                                        }
                                    },
                                    "class": "contrib.performance.loadtest.profiles.Eventer"
                                }
                            ],
                            "weight": 3,
                        }
                    ]
                }
            )
        )

        sim = LoadSimulator.fromCommandLine(
            ['--config', config.path, '--clients', config.path]
        )
        expectedParameters = PopulationParameters()
        expectedParameters.addClient(
            3,
            ClientType(
                OS_X_10_6,
                {"foo": "bar"},
                [
                    ProfileType(
                        Eventer, {
                            "interval": 25,
                            "eventStartDistribution": NormalDistribution(123, 456)
                        }
                    )
                ]
            )
        )
        self.assertEquals(sim.parameters, expectedParameters)

    def test_requireClient(self):
        """
        At least one client is required, so if a configuration with an
        empty clients array is specified, a single default client type is
        used.
        """
        config = FilePath(self.mktemp())
        config.setContent(writePlistToString({"clients": []}))
        sim = LoadSimulator.fromCommandLine(
            ['--config', config.path, '--clients', config.path]
        )
        expectedParameters = PopulationParameters()
        expectedParameters.addClient(
            1, ClientType(OS_X_10_6, {}, [Eventer, Inviter, Accepter]))
        self.assertEquals(sim.parameters, expectedParameters)

    def test_loadLogObservers(self):
        """
        Log observers specified in the [observers] section of the
        configuration file are added to the logging system.
        """
        config = FilePath(self.mktemp())
        config.setContent(
            writePlistToString(
                {
                    "observers": [
                        {
                            "type": "contrib.performance.loadtest.population.SimpleStatistics",
                            "params": {},
                        },
                    ]
                }
            )
        )
        sim = LoadSimulator.fromCommandLine(['--config', config.path])
        self.assertEquals(len(sim.observers), 1)
        self.assertIsInstance(sim.observers[0], SimpleStatistics)

    def test_observeRunReport(self):
        """
        Each log observer is added to the log publisher before the
        simulation run is started and has its C{report} method called
        after the simulation run completes.
        """
        observers = [Observer()]
        sim = LoadSimulator(
            {
                "PodA": {
                    "enabled": True,
                    "uri": 'http://example.org:1234/',
                    "stats": {"enabled": False},
                },
            },
            "/principals/users/%s/",
            None, None,
            Arrival(lambda reactor: NullArrival(), {}),
            None, observers, reactor=Reactor())
        io = StringIO()
        sim.run(io)
        self.assertEquals(io.getvalue(), "\n*** PASS\n")
        self.assertTrue(observers[0].reported)
        self.assertEquals(
            [e for e in observers[0].events if "thingo" in e][0]["thingo"],
            Reactor.message
        )

calendarserver-9.1+dfsg/contrib/performance/loadtest/test_trafficlogger.py

##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
##

from zope.interface import Interface, implements

from twisted.internet.protocol import ClientFactory, Protocol
from twisted.trial.unittest import TestCase
from twisted.test.proto_helpers import StringTransport, MemoryReactor
from twisted.protocols.wire import Discard

from contrib.performance.loadtest.trafficlogger import _TrafficLoggingFactory, loggedReactor


class IProbe(Interface):
    """
    An interface which can be used to verify some interface-related
    behavior of L{loggedReactor}.
    """

    def probe():  # @NoSelf
        pass


class Probe(object):
    implements(IProbe)

    _probed = False

    def __init__(self, result=None):
        self._result = result

    def probe(self):
        self._probed = True
        return self._result


class TrafficLoggingReactorTests(TestCase):
    """
    Tests for L{loggedReactor}.
    """

    def test_nothing(self):
        """
        L{loggedReactor} returns the object passed to it, if the object
        passed to it doesn't provide any interfaces.  This is mostly for
        testing convenience rather than a particularly useful feature.
        """
        probe = object()
        self.assertIdentical(probe, loggedReactor(probe))

    def test_interfaces(self):
        """
        The object returned by L{loggedReactor} provides all of the
        interfaces provided by the object passed to it.
        """
        probe = Probe()
        reactor = loggedReactor(probe)
        self.assertTrue(IProbe.providedBy(reactor))

    def test_passthrough(self):
        """
        Methods on interfaces on the object passed to L{loggedReactor} can
        be called by calling them on the object returned by
        L{loggedReactor}.
        """
        expected = object()
        probe = Probe(expected)
        reactor = loggedReactor(probe)
        result = reactor.probe()
        self.assertTrue(probe._probed)
        self.assertIdentical(expected, result)

    def test_connectTCP(self):
        """
        Called on the object returned by L{loggedReactor}, C{connectTCP}
        calls the wrapped reactor's C{connectTCP} method with the original
        factory wrapped in a L{_TrafficLoggingFactory}.
        """
        class RecordDataProtocol(Protocol):
            def dataReceived(self, data):
                self.data = data

        proto = RecordDataProtocol()
        factory = ClientFactory()
        factory.protocol = lambda: proto
        reactor = MemoryReactor()
        logged = loggedReactor(reactor)
        logged.connectTCP('192.168.1.2', 1234, factory, 21, '127.0.0.2')
        [(host, port, factory, timeout, bindAddress)] = reactor.tcpClients
        self.assertEqual('192.168.1.2', host)
        self.assertEqual(1234, port)
        self.assertIsInstance(factory, _TrafficLoggingFactory)
        self.assertEqual(21, timeout)
        self.assertEqual('127.0.0.2', bindAddress)

        # Verify that the factory and protocol specified are really being used
        protocol = factory.buildProtocol(None)
        protocol.makeConnection(None)
        protocol.dataReceived("foo")
        self.assertEqual(proto.data, "foo")

    def test_getLogFiles(self):
        """
        The reactor returned by L{loggedReactor} has a C{getLogFiles}
        method which returns a L{logstate} instance containing the active
        and completed log files tracked by the logging wrapper.
        """
        wrapped = ClientFactory()
        wrapped.protocol = Discard
        reactor = MemoryReactor()
        logged = loggedReactor(reactor)
        logged.connectTCP('127.0.0.1', 1234, wrapped)
        factory = reactor.tcpClients[0][2]

        finished = factory.buildProtocol(None)
        finished.makeConnection(StringTransport())
        finished.dataReceived('finished')
        finished.connectionLost(None)

        active = factory.buildProtocol(None)
        active.makeConnection(StringTransport())
        active.dataReceived('active')

        logs = logged.getLogFiles()
        self.assertEqual(1, len(logs.finished))
        self.assertIn('finished', logs.finished[0].getvalue())
        self.assertEqual(1, len(logs.active))
        self.assertIn('active', logs.active[0].getvalue())


class TrafficLoggingFactoryTests(TestCase):
    """
    Tests for L{_TrafficLoggingFactory}.
    """

    def setUp(self):
        self.wrapped = ClientFactory()
        self.wrapped.protocol = Discard
        self.factory = _TrafficLoggingFactory(self.wrapped)

    def test_receivedBytesLogged(self):
        """
        When bytes are delivered through a protocol created by
        L{_TrafficLoggingFactory}, they are added to a log kept on that
        factory.
        """
        protocol = self.factory.buildProtocol(None)
        # The factory should now have a new StringIO log file
        self.assertEqual(1, len(self.factory.logs))
        transport = StringTransport()
        protocol.makeConnection(transport)
        protocol.dataReceived("hello, world")
        self.assertEqual(
            "*\nC 0: 'hello, world'\n", self.factory.logs[0].getvalue())

    def test_finishedLogs(self):
        """
        When connections are lost, the corresponding log files are moved
        into C{_TrafficLoggingFactory.finishedLogs}.
        """
        protocol = self.factory.buildProtocol(None)
        transport = StringTransport()
        protocol.makeConnection(transport)
        logfile = self.factory.logs[0]
        protocol.connectionLost(None)
        self.assertEqual(0, len(self.factory.logs))
        self.assertEqual([logfile], self.factory.finishedLogs)

    def test_finishedLogsLimit(self):
        """
        Only the most recent C{_TrafficLoggingFactory.LOGFILE_LIMIT}
        logfiles are kept in C{_TrafficLoggingFactory.finishedLogs}.
        """
        self.factory.LOGFILE_LIMIT = 2

        first = self.factory.buildProtocol(None)
        first.makeConnection(StringTransport())

        second = self.factory.buildProtocol(None)
        second.makeConnection(StringTransport())

        third = self.factory.buildProtocol(None)
        third.makeConnection(StringTransport())

        second.connectionLost(None)
        first.connectionLost(None)
        third.connectionLost(None)

        self.assertEqual(
            [first.logfile, third.logfile], self.factory.finishedLogs)

calendarserver-9.1+dfsg/contrib/performance/loadtest/test_webadmin.py

##
# Copyright (c) 2012-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ## from twisted.trial.unittest import TestCase from contrib.performance.loadtest.webadmin import LoadSimAdminResource class WebAdminTests(TestCase): """ Tests for L{LoadSimAdminResource}. """ class FakeReporter(object): def generateReport(self, output): output.write("FakeReporter") class FakeReactor(object): def __init__(self): self.running = True def stop(self): self.running = False class FakeLoadSim(object): def __init__(self): self.reactor = WebAdminTests.FakeReactor() self.reporter = WebAdminTests.FakeReporter() self.running = True def stop(self): self.running = False class FakeRequest(object): def __init__(self, **kwargs): self.args = kwargs def test_resourceGET(self): """ Test render_GET """ loadsim = WebAdminTests.FakeLoadSim() resource = LoadSimAdminResource(loadsim) response = resource.render_GET(WebAdminTests.FakeRequest()) self.assertTrue(response.startswith("")) self.assertTrue(response.find(resource.token) != -1) def test_resourcePOST_Stop(self): """ Test render_POST when Stop button is clicked """ loadsim = WebAdminTests.FakeLoadSim() resource = LoadSimAdminResource(loadsim) self.assertTrue(loadsim.reactor.running) response = resource.render_POST(WebAdminTests.FakeRequest( token=(resource.token,), stop=None, )) self.assertTrue(response.startswith("")) self.assertTrue(response.find(resource.token) == -1) self.assertTrue(response.find("FakeReporter") != -1) self.assertFalse(loadsim.running) def test_resourcePOST_Stop_BadToken(self): """ Test render_POST when Stop button is clicked but token is wrong """ loadsim = WebAdminTests.FakeLoadSim() resource = 
LoadSimAdminResource(loadsim) self.assertTrue(loadsim.reactor.running) response = resource.render_POST(WebAdminTests.FakeRequest( token=("xyz",), stop=None, )) self.assertTrue(response.startswith("")) self.assertTrue(response.find(resource.token) != -1) self.assertTrue(response.find("FakeReporter") == -1) self.assertTrue(loadsim.running) def test_resourcePOST_Results(self): """ Test render_POST when Results button is clicked """ loadsim = WebAdminTests.FakeLoadSim() resource = LoadSimAdminResource(loadsim) self.assertTrue(loadsim.reactor.running) response = resource.render_POST(WebAdminTests.FakeRequest( token=(resource.token,), results=None, )) self.assertTrue(response.startswith("")) self.assertTrue(response.find(resource.token) != -1) self.assertTrue(response.find("FakeReporter") != -1) self.assertTrue(loadsim.running) def test_resourcePOST_Results_BadToken(self): """ Test render_POST when Results button is clicked and token is wrong """ loadsim = WebAdminTests.FakeLoadSim() resource = LoadSimAdminResource(loadsim) self.assertTrue(loadsim.reactor.running) response = resource.render_POST(WebAdminTests.FakeRequest( token=("xyz",), results=None, )) self.assertTrue(response.startswith("")) self.assertTrue(response.find(resource.token) != -1) self.assertTrue(response.find("FakeReporter") == -1) self.assertTrue(loadsim.running) calendarserver-9.1+dfsg/contrib/performance/loadtest/thresholds.json000066400000000000000000000106271315003562600261430ustar00rootroot00000000000000{ "requests": { "limits" : [ 0.1, 0.5, 1.0, 3.0, 5.0, 10.0, 30.0], "thresholds": { "default" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "GET{event}" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 25.0, 5.0], "PUT{event}" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 25.0, 0.5], "PUT{update}" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 25.0, 0.5], "PUT{attendee-small}" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 25.0, 5.0], "PUT{attendee-medium}" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 25.0, 5.0], "PUT{attendee-large}" : [ 100.0, 
100.0, 100.0, 100.0, 100.0, 100.0, 25.0], "PUT{attendee-huge}" : [ 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0], "PUT{organizer-small}" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 25.0, 5.0], "PUT{organizer-medium}" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 25.0, 5.0], "PUT{organizer-large}" : [ 100.0, 100.0, 100.0, 100.0, 100.0, 75.0, 25.0], "PUT{organizer-huge}" : [ 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0], "DELETE{event}" : [ 100.0, 100.0, 50.0, 25.0, 10.0, 5.0, 1.0], "POST{fb-small}" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 25.0, 5.0], "POST{fb-medium}" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 25.0, 5.0], "POST{fb-large}" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 25.0, 5.0], "POST{fb-huge}" : [ 100.0, 100.0, 100.0, 100.0, 75.0, 50.0, 25.0], "PROPFIND{well-known}" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "PROPFIND{find-principal}" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "PROPFIND{principal}" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "PROPFIND{home}" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 25.0, 10.0], "PROPFIND{calendar}" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "PROPFIND{notification}" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "PROPFIND{notification-items}" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "PROPPATCH{calendar}" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "REPORT{pset}" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "REPORT{expand}" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "REPORT{psearch}" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "REPORT{sync-init}" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "REPORT{sync}" : [ 100.0, 100.0, 100.0, 50.0, 10.0, 5.0, 1.0], "REPORT{vevent}" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "REPORT{vtodo}" : [ 100.0, 100.0, 100.0, 5.0, 1.0, 0.5, 0.0], "REPORT{multiget-small}" : [ 100.0, 100.0, 50.0, 25.0, 10.0, 5.0, 1.0], "REPORT{multiget-medium}" : [ 100.0, 100.0, 75.0, 50.0, 25.0, 10.0, 5.0], "REPORT{multiget-large}" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 25.0, 10.0], "REPORT{multiget-huge}" : [ 100.0, 
100.0, 100.0, 100.0, 75.0, 50.0, 25.0] } }, "operations": { "limits" : [ 0.1, 0.5, 1.0, 3.0, 5.0, 10.0, 30.0], "thresholds": { "default" : [ 100.0, 100.0, 100.0, 100.0, 100.0, 100.0, 100.0], "accept" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 10.0, 5.0], "create" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 10.0, 5.0], "download" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 10.0, 5.0], "invite" : [ 100.0, 100.0, 100.0, 100.0, 75.0, 50.0, 25.0], "poll" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 10.0, 5.0], "push" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 10.0, 5.0], "reply done" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 10.0, 5.0], "startup" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 10.0, 5.0], "update" : [ 100.0, 100.0, 100.0, 75.0, 50.0, 10.0, 5.0] } } } calendarserver-9.1+dfsg/contrib/performance/loadtest/trafficlogger.py000066400000000000000000000065461315003562600262660ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ## """ This module implements a reactor wrapper which will cause all traffic on connections set up using that reactor to be logged. 
""" __all__ = ['loggedReactor'] from weakref import ref from StringIO import StringIO from collections import namedtuple from zope.interface import providedBy from twisted.python.components import proxyForInterface from twisted.internet.interfaces import IReactorTCP from twisted.protocols.policies import WrappingFactory, TrafficLoggingProtocol logstate = namedtuple('logstate', 'active finished') def loggedReactor(reactor): """ Construct and return a wrapper around the given C{reactor} which provides all of the same interfaces, but which will log all traffic over outgoing TCP connections it establishes. """ bases = [] for iface in providedBy(reactor): if iface is IReactorTCP: bases.append(_TCPTrafficLoggingReactor) else: bases.append(proxyForInterface(iface, '_reactor')) if bases: return type('(Logged Reactor)', tuple(bases), {})(reactor) return reactor class _TCPTrafficLoggingReactor(proxyForInterface(IReactorTCP, '_reactor')): """ A mixin for a reactor wrapper which defines C{connectTCP} so as to cause traffic to be logged. """ _factories = None @property def factories(self): if self._factories is None: self._factories = [] return self._factories def getLogFiles(self): active = [] finished = [] for factoryref in self.factories: factory = factoryref() active.extend(factory.logs) finished.extend(factory.finishedLogs) return logstate(active, finished) def connectTCP(self, host, port, factory, *args, **kwargs): wrapper = _TrafficLoggingFactory(factory) self.factories.append(ref(wrapper, self.factories.remove)) return self._reactor.connectTCP( host, port, wrapper, *args, **kwargs) class _TrafficLoggingFactory(WrappingFactory): """ A wrapping factory which applies L{TrafficLoggingProtocolWrapper}. 
""" LOGFILE_LIMIT = 20 protocol = TrafficLoggingProtocol noisy = False def __init__(self, wrappedFactory): WrappingFactory.__init__(self, wrappedFactory) self.logs = [] self.finishedLogs = [] def unregisterProtocol(self, protocol): WrappingFactory.unregisterProtocol(self, protocol) self.logs.remove(protocol.logfile) self.finishedLogs.append(protocol.logfile) del self.finishedLogs[:-self.LOGFILE_LIMIT] def buildProtocol(self, addr): logfile = StringIO() self.logs.append(logfile) return self.protocol( self, self.wrappedFactory.buildProtocol(addr), logfile, None, 0) calendarserver-9.1+dfsg/contrib/performance/loadtest/webadmin.py000066400000000000000000000056431315003562600252330ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ## """ loadsim Web Admin UI. """ __all__ = [ "LoadSimAdminResource", ] import cStringIO as StringIO import uuid from time import clock from twisted.web import resource class LoadSimAdminResource (resource.Resource): """ Web administration HTTP resource. """ isLeaf = True HEAD = """\

Load Simulator Web Admin

""" BODY = """\ """ BODY_RESULTS = """\
%s
Generated in %.1f milliseconds
""" BODY_RESULTS_STOPPED = "

LoadSim Stopped - Final Results

" + BODY_RESULTS def __init__(self, loadsim): self.loadsim = loadsim self.token = str(uuid.uuid4()) def render_GET(self, request): return self._renderReport() def render_POST(self, request): html = self.HEAD + self.BODY if 'token' not in request.args or request.args['token'][0] != self.token: return html % (self.token,) if 'stop' in request.args: self.loadsim.stop() return self._renderReport(True) elif 'results' in request.args: return self._renderReport() return html % (self.token,) def _renderReport(self, stopped=False): report = StringIO.StringIO() before = clock() self.loadsim.reporter.generateReport(report) after = clock() ms = (after - before) * 1000 if stopped: html = self.HEAD + self.BODY_RESULTS_STOPPED return html % (None, report.getvalue(), ms) else: html = self.HEAD + self.BODY_RESULTS return html % (self.token, report.getvalue(), ms) calendarserver-9.1+dfsg/contrib/performance/massupload000077500000000000000000000012311315003562600233370ustar00rootroot00000000000000#!/usr/bin/python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from massupload import main main() calendarserver-9.1+dfsg/contrib/performance/massupload.py000066400000000000000000000060101315003562600237630ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function from benchlib import select from twisted.internet import reactor from twisted.internet.task import coiterate from twisted.python.log import err from twisted.python.usage import UsageError from upload import UploadOptions, upload import sys import pickle class MassUploadOptions(UploadOptions): optParameters = [ ("benchmarks", None, None, ""), ("statistics", None, None, "")] opt_statistic = None def parseArgs(self, filename): self['filename'] = filename UploadOptions.parseArgs(self) def main(): options = MassUploadOptions() try: options.parseOptions(sys.argv[1:]) except UsageError, e: print(e) return 1 fname = options['filename'] raw = pickle.load(file(fname)) if not options['benchmarks']: benchmarks = raw.keys() else: benchmarks = options['benchmarks'].split() def go(): for benchmark in benchmarks: for param in raw[benchmark].keys(): for statistic in options['statistics'].split(): stat, samples = select( raw, benchmark, param, statistic) samples = stat.squash(samples) yield upload( reactor, options['url'], options['project'], options['revision'], options['revision-date'], benchmark, param, statistic, options['backend'], options['environment'], samples) # This is somewhat hard-coded to the currently # collected stats. 
if statistic == 'SQL': stat, samples = select( raw, benchmark, param, 'execute') samples = stat.squash(samples, 'count') yield upload( reactor, options['url'], options['project'], options['revision'], options['revision-date'], benchmark, param, statistic + 'count', options['backend'], options['environment'], samples) d = coiterate(go()) d.addErrback(err, "Mass upload failed") reactor.callWhenRunning(d.addCallback, lambda ign: reactor.stop()) reactor.run() calendarserver-9.1+dfsg/contrib/performance/nightly.sh000077500000000000000000000016231315003562600232630ustar00rootroot00000000000000#!/bin/bash -x ## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## cd ~codespeed/Projects/CalendarServer/trunk if [ "`svn diff -rBASE:HEAD`" = "" ]; then # There are no changes exit else svn up REV="$(./contrib/performance/svn-revno .)" cd ~codespeed/performance ./sample.sh "$REV" ~codespeed/Projects/CalendarServer/trunk "$1" fi calendarserver-9.1+dfsg/contrib/performance/pgsql.d000066400000000000000000000022631315003562600225420ustar00rootroot00000000000000/* * Copyright (c) 2010-2017 Apple Inc. All rights reserved. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. 
* You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ #define READ(fname) \ syscall::fname:return, syscall::fname ## _nocancel:return \ /execname == "postgres"/ \ { \ printf("READ %d\n\1", arg0); \ } READ(read) READ(readv) READ(pread) #define WRITE(fname) \ syscall::fname:entry, syscall::fname ## _nocancel:entry \ /execname == "postgres"/ \ { \ printf("WRITE %d\n\1", arg2); \ } WRITE(write) WRITE(writev) WRITE(pwrite) dtrace:::BEGIN { /* Let the watcher know things are alright. */ printf("READY\n"); } syscall::execve:entry /copyinstr(arg0) == "CalendarServer dtrace benchmarking signal"/ { printf("MARK x\n\1"); } calendarserver-9.1+dfsg/contrib/performance/profile.sh000077500000000000000000000023741315003562600232510ustar00rootroot00000000000000#!/bin/bash -x ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## . 
./benchlib.sh set -e # Break on error sudo -v # Force up to date sudo token before the user walks away REV_SPEC="$1" update_and_build "$REV_SPEC" REV="$(./svn-revno "$SOURCE_DIR")" DATE="`./svn-committime $SOURCE $REV`" for backend in ${BACKENDS[*]}; do setbackend $backend for benchmark in $BENCHMARKS; do pushd $SOURCE mkdir -p profiling/$backend/$benchmark start 0 -t Single -S profiling/$backend/$benchmark popd # Chances are sudo will throw out PYTHONPATH unless we tell it not to. sudo ./run.sh ./benchmark --label r$REV-$backend $benchmark pushd $SOURCE stop popd done done calendarserver-9.1+dfsg/contrib/performance/report000077500000000000000000000012251315003562600225050ustar00rootroot00000000000000#!/usr/bin/python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from report import main main() calendarserver-9.1+dfsg/contrib/performance/report.py000066400000000000000000000023251315003562600231330ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
# See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function from benchlib import select import sys import pickle def main(): if len(sys.argv) < 5: print('Usage: %s <datafile> <benchmark> <param> <statistic> [command]' % (sys.argv[0],)) else: stat, samples = select(pickle.load(file(sys.argv[1])), *sys.argv[2:5]) if len(sys.argv) == 5: print('Samples') print('\t' + '\n\t'.join(map(str, stat.squash(samples)))) print('Commands') print('\t' + '\n\t'.join(stat.commands)) else: print(getattr(stat, sys.argv[5])(samples, *sys.argv[6:])) calendarserver-9.1+dfsg/contrib/performance/report_principals.py000066400000000000000000000052701315003562600253610ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Benchmark a server's response to a simple displayname startswith report.
""" from urllib2 import HTTPDigestAuthHandler from twisted.internet.defer import inlineCallbacks, returnValue from twisted.internet import reactor from twisted.web.client import Agent from twisted.web.http_headers import Headers from httpauth import AuthHandlerAgent from httpclient import StringProducer from benchlib import initialize, sample body = """\ useruseruseruser""" @inlineCallbacks def measure(host, port, dtrace, attendeeCount, samples): user = password = "user01" root = "/" principal = "/" calendar = "report-principal" authinfo = HTTPDigestAuthHandler() authinfo.add_password( realm="Test Realm", uri="http://%s:%d/" % (host, port), user=user, passwd=password) agent = AuthHandlerAgent(Agent(reactor), authinfo) # Set up the calendar first yield initialize(agent, host, port, user, password, root, principal, calendar) url = 'http://%s:%d/principals/' % (host, port) headers = Headers({"content-type": ["text/xml"]}) samples = yield sample( dtrace, samples, agent, lambda: ('REPORT', url, headers, StringProducer(body))) returnValue(samples) calendarserver-9.1+dfsg/contrib/performance/reupload.sh000077500000000000000000000023631315003562600234220ustar00rootroot00000000000000#!/bin/bash -x ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## BENCHMARKS="event_move event_delete_attendee event_add_attendee event_change_date event_change_summary event_delete vfreebusy event" for rev in 6446; do for f in eighth-try/r$rev-*; do base=`basename $f` revision=${base:1:4} backend=${base:6:10} date="`./svn-committime ~/Projects/CalendarServer/trunk $revision`" for b in $BENCHMARKS; do for p in 1 9 81; do for s in pagein pageout; do ./upload \ --url http://localhost:8000/result/add/ \ --revision $revision --revision-date "$date" \ --environment nmosbuilder --backend $backend --statistic "$f,$b,$p,$s" done done done done done calendarserver-9.1+dfsg/contrib/performance/sample-many.sh000077500000000000000000000014001315003562600240210ustar00rootroot00000000000000#!/usr/bin/env bash ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## for i in `python -c "for i in range($START, $STOP, $STEP): print i,"`; do ./sample.sh $i ~/Projects/CalendarServer/trunk/ "$1" done calendarserver-9.1+dfsg/contrib/performance/sample.sh000077500000000000000000000031561315003562600230710ustar00rootroot00000000000000#!/usr/bin/env bash ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## . ./benchlib.sh sudo -v # Force up to date sudo token before the user walks away REV_SPEC="$1" SOURCE_DIR="$2" RESULTS="$3" # Just force the conf file to be written. We need it to use start and # stop, even if it doesn't have a meaningful backend set. setbackend ${BACKENDS[0]} update_and_build "$REV_SPEC" REV="$(./svn-revno "$SOURCE_DIR")" if [ "$HOSTS_COUNT" != "" ]; then CONCURRENT="--hosts-count $HOSTS_COUNT --host-index $HOST_INDEX" else CONCURRENT="" fi DATE="`./svn-committime $SOURCE $REV`" for backend in ${BACKENDS[*]}; do setbackend $backend pushd $SOURCE stop rm -rf data/ start 2 popd sudo ./run.sh ./benchmark $CONCURRENT --label r$REV-$backend --source-directory $SOURCE_DIR $SCALE_PARAMETERS $BENCHMARKS data=`echo -n r$REV-$backend*` ./run.sh ./massupload \ --url $ADDURL --revision $REV \ --revision-date "$DATE" --environment nmosbuilder \ --backend $backend \ --statistics "${STATISTICS[*]}" \ $data mv $data $RESULTS done calendarserver-9.1+dfsg/contrib/performance/setbackend000077500000000000000000000012531315003562600232760ustar00rootroot00000000000000#!/usr/bin/python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from setbackend import main raise SystemExit(main()) calendarserver-9.1+dfsg/contrib/performance/setbackend.py000066400000000000000000000031321315003562600237200ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Generate a new calendar server configuration file based on an existing one, with a few values changed to satisfy requirements of the benchmarking tools. 
""" import sys from xml.etree import ElementTree def main(): conf = ElementTree.parse(file(sys.argv[1])) if sys.argv[2] == 'postgresql': value = 'true' elif sys.argv[2] == 'filesystem': value = 'false' else: raise RuntimeError("Don't know what to do with %r" % (sys.argv[2],)) # Here are the config changes we make - use the specified backend replace(conf.getiterator(), 'UseDatabase', value) # - and disable the response cache replace(conf.getiterator(), 'EnableResponseCache', 'false') conf.write(sys.stdout) def replace(elements, key, value): found = False for ele in elements: if found: ele.tag = value return if ele.tag == 'key' and ele.text == key: found = True raise RuntimeError("Failed to find UseDatabase") calendarserver-9.1+dfsg/contrib/performance/sim000077500000000000000000000012341315003562600217620ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # ## from loadtest.sim import main main() calendarserver-9.1+dfsg/contrib/performance/simanalysis/000077500000000000000000000000001315003562600236005ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/simanalysis/__init__.py000066400000000000000000000012251315003562600257110ustar00rootroot00000000000000## # Copyright (c) 2015-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Tools to manage sim runs and associated data. """ calendarserver-9.1+dfsg/contrib/performance/simanalysis/sim_regress.py000066400000000000000000000156231315003562600265030ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2015-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## import argparse import os import plistlib import shutil import subprocess import sys import time class SimRegress(object): """ Class that manages running the sim against a range of SVN revisions. 
""" TMP_DIR = "/tmp/CalendarServer-SimRegress" def __init__(self, startRev, stopRev, stepRev): self.startRev = startRev self.currentRev = startRev self.stopRev = stopRev self.stepRev = stepRev self.cwd = os.getcwd() self.results = [] def run(self): # Get the actual SVN revisions we want to use svn_revs = self.getRevisions() print("SVN Revisions to analyze: {}".format(svn_revs)) # Create the tmp dir and do initial checkout for revision in svn_revs: self.currentRev = revision logfile = os.path.join(self.cwd, "Log-rev-{}.txt".format(self.currentRev)) with open(logfile, "w") as f: self.log = f self.checkRevision() self.buildServer() self.runServer() qos = self.runSim() self.stopServer() with open(logfile) as f: qos = filter(lambda line: line.strip().startswith("Qos : "), f.read().splitlines()) try: qos = float(qos[0].strip()[len("Qos : "):]) except (IndexError, ValueError): qos = None print("Revision: {} Qos: {}".format(self.currentRev, qos)) self.results.append((self.currentRev, qos)) print("All revisions complete") print("\n*** Results:") print("Rev\tQos") for result in self.results: rev, qos = result qos = "{:.4f}".format(qos) if qos is not None else "-" print("{}\t{}".format(rev, qos)) def getRevisions(self): if self.stopRev is None: print("Determining HEAD revision") out = subprocess.check_output("svn info -r HEAD".split()) rev = filter(lambda line: line.startswith("Revision: "), out.splitlines()) if rev: self.stopRev = int(rev[0][len("Revision: "):]) print("Using HEAD revision: {}".format(self.stopRev)) return range(self.startRev, self.stopRev, self.stepRev) + [self.stopRev] def checkRevision(self): # Create the tmp dir and do initial checkout if not os.path.exists(SimRegress.TMP_DIR): os.makedirs(SimRegress.TMP_DIR) os.chdir(SimRegress.TMP_DIR) self.checkoutInitialRevision() else: os.chdir(SimRegress.TMP_DIR) actualRevision = self.currentRevision() if actualRevision > self.currentRev: # If actualRevision code is newer than what we want, always wipe it # and 
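`getRevisions()` builds its sample points as a stepped range plus the endpoint. Because `range()` is half-open, the stop revision (HEAD, when `--stop` is omitted) is never produced by the range itself, so appending it can't create a duplicate. A standalone sketch of that computation:

```python
def revision_steps(start, stop, step):
    # range() excludes `stop`, so appending it is always safe and
    # guarantees the final (e.g. HEAD) revision gets sampled too.
    return list(range(start, stop, step)) + [stop]
```

For example, `revision_steps(100, 450, 100)` samples 100, 200, 300, 400 and then 450 itself.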
start from scratch shutil.rmtree(SimRegress.TMP_DIR) self.checkRevision() elif actualRevision < self.currentRev: # Upgrade from older revision to what we want self.updateRevision() def checkoutInitialRevision(self): print("Checking out revision: {}".format(self.startRev)) subprocess.call( "svn checkout http://svn.calendarserver.org/repository/calendarserver/CalendarServer/trunk -r {start} .".format(start=self.startRev).split(), stdout=self.log, stderr=self.log, ) def currentRevision(self): print("Checking current revision") out = subprocess.check_output("svn info".split()) rev = filter(lambda line: line.startswith("Revision: "), out.splitlines()) return int(rev[0][len("Revision: "):]) if rev else None def updateRevision(self): print("Updating to revision: {}".format(self.currentRev)) subprocess.call( "svn up -r {rev} .".format(rev=self.currentRev).split(), stdout=self.log, stderr=self.log, ) def patchConfig(self, configPath): """ Patch the plist config file to use settings that make sense for the sim. 
@param configPath: path to plist file to patch @type configPath: L{str} """ f = plistlib.readPlist(configPath) f['Authentication']['Kerberos']['Enabled'] = False plistlib.writePlist(f, configPath) def buildServer(self): print("Building revision: {}".format(self.currentRev)) subprocess.call("./bin/develop".split(), stdout=self.log, stderr=self.log) def runServer(self): print("Running revision: {}".format(self.currentRev)) shutil.copyfile("conf/caldavd-test.plist", "conf/caldavd-dev.plist") self.patchConfig("conf/caldavd-dev.plist") if os.path.exists("data"): shutil.rmtree("data") subprocess.call("./bin/run -nd".split(), stdout=self.log, stderr=self.log) # Wait for first child pid to appear then wait another 10 seconds t = time.time() while time.time() - t < 60: if os.path.exists("data/Logs/state/caldav-instance-0.pid"): time.sleep(10) break def stopServer(self): print("Stopping revision: {}".format(self.currentRev)) subprocess.call("./bin/run -k".split(), stdout=self.log, stderr=self.log) def runSim(self): print("Running sim") if os.path.exists("/tmp/sim"): shutil.rmtree("/tmp/sim") subprocess.call("{exe} {sim} --config {config} --clients {clients} --runtime 300".format( exe=sys.executable, sim=os.path.join(self.cwd, "contrib/performance/loadtest/sim.py"), config=os.path.join(self.cwd, "contrib/performance/loadtest/config-old.plist"), clients=os.path.join(self.cwd, "contrib/performance/loadtest/clients-old.plist"), ).split(), stdout=self.log, stderr=self.log) if __name__ == '__main__': parser = argparse.ArgumentParser(description="Run the sim tool against a specific range of server revisions.") parser.add_argument("--start", type=int, required=True, help="Revision number to start at") parser.add_argument("--stop", type=int, help="Revision number to stop at") parser.add_argument("--step", default=100, type=int, help="Revision number steps") args = parser.parse_args() SimRegress(args.start, args.stop, args.step).run() 
calendarserver-9.1+dfsg/contrib/performance/some-more-data.sh000077500000000000000000000017201315003562600244150ustar00rootroot00000000000000#!/bin/bash -x ## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## . ./benchlib.sh sudo -v # Force up to date sudo token before the user walks away SOURCE_DIR="$1" RESULTS="$2" NOW="$(date +%s)" WHEN=($((60*60*24*7)) $((60*60*24*31)) $((60*60*24*31*3))) for when in ${WHEN[*]}; do THEN=$(($NOW-$when)) REV_SPEC="{$(date -r "$THEN" +"%Y-%m-%d")}" ./sample.sh "$REV_SPEC" "$SOURCE_DIR" "$RESULTS" done calendarserver-9.1+dfsg/contrib/performance/speedcenter.tac000066400000000000000000000031671315003562600242450ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## import os from twisted.python.threadpool import ThreadPool from twisted.python.modules import getModule from twisted.application.service import Application, Service from twisted.application.internet import TCPServer from twisted.web.server import Site from twisted.web.wsgi import WSGIResource from twisted.internet import reactor class ThreadService(Service): def __init__(self): self.pool = ThreadPool(minthreads=0, maxthreads=1) def startService(self): self.pool.start() def stopService(self): self.pool.stop() speedcenter = getModule("speedcenter").filePath django = speedcenter.sibling("wsgi").child("django.wsgi") namespace = {"__file__": django.path} execfile(django.path, namespace, namespace) threads = ThreadService() application = Application("SpeedCenter") threads.setServiceParent(application) resource = WSGIResource(reactor, threads.pool, namespace["application"]) site = Site(resource) # , 'httpd.log') TCPServer(int(os.environ.get('PORT', 8000)), site).setServiceParent(application) calendarserver-9.1+dfsg/contrib/performance/sql_measure.d000066400000000000000000000035331315003562600237350ustar00rootroot00000000000000/* * Copyright (c) 2010-2017 Apple Inc. All rights reserved. * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance with the License. * You may obtain a copy of the License at * * http://www.apache.org/licenses/LICENSE-2.0 * * Unless required by applicable law or agreed to in writing, software * distributed under the License is distributed on an "AS IS" BASIS, * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. * See the License for the specific language governing permissions and * limitations under the License. */ /* * Make almost all query strings fit. Please don't have SQL longer than this. :( */ #pragma D option strsize=32768 dtrace:::BEGIN { /* Let the watcher know things are alright. 
*/ printf("READY\n"); } /* * SQLite3 stuff */ pid$target:_sqlite3.so:_pysqlite_query_execute:entry { self->executing = 1; self->sql = ""; printf("EXECUTE ENTRY %d\n\1", timestamp); } pid$target:_sqlite3.so:_pysqlite_query_execute:return { self->executing = 0; printf("EXECUTE SQL %s\n\1", self->sql); printf("EXECUTE RETURN %d\n\1", timestamp); } pid$target::PyString_AsString:return /self->executing/ { self->sql = copyinstr(arg1); self->executing = 0; } pid$target:_sqlite3.so:pysqlite_cursor_iternext:entry { printf("ITERNEXT ENTRY %d\n\1", timestamp); } pid$target:_sqlite3.so:pysqlite_cursor_iternext:return { printf("ITERNEXT RETURN %d\n\1", timestamp); } /* * PyGreSQL stuff */ pid$target::PQexec:entry { printf("EXECUTE ENTRY %d\n\1", timestamp); printf("EXECUTE SQL %s\n\1", copyinstr(arg1)); } pid$target::PQexec:return { printf("EXECUTE RETURN %d\n\1", timestamp); } pid$target::pgsource_fetch:entry { printf("ITERNEXT ENTRY %d\n\1", timestamp); } pid$target::pgsource_fetch:return { printf("ITERNEXT RETURN %d\n\1", timestamp); } calendarserver-9.1+dfsg/contrib/performance/sqlusage/000077500000000000000000000000001315003562600230705ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/sqlusage/__init__.py000066400000000000000000000012001315003562600251720ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ SQL usage analysis tool. 
""" calendarserver-9.1+dfsg/contrib/performance/sqlusage/requests/000077500000000000000000000000001315003562600247435ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/performance/sqlusage/requests/__init__.py000066400000000000000000000011731315003562600270560ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ SQL usage requests. """ calendarserver-9.1+dfsg/contrib/performance/sqlusage/requests/httpTests.py000066400000000000000000000064041315003562600273230ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Defines a set of HTTP requests to execute and return results. """ class HTTPTestBase(object): """ Base class for an HTTP request that executes and results are returned for. 
""" class SQLResults(object): def __init__(self, count, rows, timing): self.count = count self.rows = rows self.timing = timing def __init__(self, label, sessions, logFilePath, logFilePrefix): """ @param label: label used to identify the test @type label: C{str} """ self.label = label self.sessions = sessions self.logFilePath = logFilePath self.logFilePrefix = logFilePrefix self.result = None def execute(self, count): """ Execute the HTTP request and read the results. """ self.prepare() self.clearLog() self.doRequest() self.collectResults(count) self.cleanup() return self.result def prepare(self): """ Do some setup prior to the real request. """ pass def clearLog(self): """ Clear the server's SQL log file. """ with open(self.logFilePath, "w") as f: f.write("") def doRequest(self): """ Execute the actual HTTP request. Sub-classes override. """ raise NotImplementedError def collectResults(self, event_count): """ Parse the server log file to extract the details we need. """ def extractInt(line): pos = line.find(": ") return int(line[pos + 2:]) def extractFloat(line): pos = line.find(": ") return float(line[pos + 2:]) # Need to skip over stats that are unlabeled with open(self.logFilePath) as f: data = f.read() lines = data.splitlines() offset = 0 while True: if lines[offset] == "*** SQL Stats ***": if lines[offset + 2].startswith("Label: <"): count = extractInt(lines[offset + 4]) rows = extractInt(lines[offset + 5]) timing = extractFloat(lines[offset + 6]) self.result = HTTPTestBase.SQLResults(count, rows, timing) break offset += 1 else: self.result = HTTPTestBase.SQLResults(-1, -1, 0.0) # Archive current sqlstats file with open("%s-%s-%d-%s" % (self.logFilePath, self.logFilePrefix, event_count, self.label), "w") as f: f.write(data) def cleanup(self): """ Do some cleanup after the real request. 
""" pass calendarserver-9.1+dfsg/contrib/performance/sqlusage/requests/invite.py000066400000000000000000000064321315003562600266200ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from caldavclientlibrary.protocol.url import URL from contrib.performance.sqlusage.requests.httpTests import HTTPTestBase from pycalendar.datetime import DateTime from txweb2.dav.util import joinURL from caldavclientlibrary.protocol.webdav.definitions import davxml ICAL = """BEGIN:VCALENDAR CALSCALE:GREGORIAN PRODID:-//Example Inc.//Example Calendar//EN VERSION:2.0 BEGIN:VTIMEZONE LAST-MODIFIED:20040110T032845Z TZID:US/Eastern BEGIN:DAYLIGHT DTSTART:20000404T020000 RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:20001026T020000 RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD END:VTIMEZONE BEGIN:VEVENT DTSTAMP:20051222T205953Z CREATED:20060101T150000Z DTSTART;TZID=US/Eastern:{year}0101T100000 DURATION:PT1H SUMMARY:event {count} UID:invite-{count}-ics ORGANIZER:mailto:user02@example.com ATTENDEE:mailto:user02@example.com {attendees} END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") ATTENDEE = "ATTENDEE:mailto:user%02d@example.com" class InviteTest(HTTPTestBase): """ A PUT operation (invite) """ def __init__(self, label, sessions, logFilePath, logFilePrefix, count): super(InviteTest, 
self).__init__(label, sessions, logFilePath, logFilePrefix) self.count = count def doRequest(self): """ Execute the actual HTTP request. """ # Invite as user02 now = DateTime.getNowUTC() href = joinURL(self.sessions[1].calendarHref, "organizer.ics") attendees = "\r\n".join(["ATTENDEE:mailto:user01@example.com"] + [ATTENDEE % (ctr + 3,) for ctr in range(self.count - 1)]) self.sessions[1].writeData( URL(path=href), ICAL.format(year=now.getYear() + 1, count=self.count, attendees=attendees), "text/calendar", ) def cleanup(self): """ Do some cleanup after the real request. """ # Remove created resources href = joinURL(self.sessions[1].calendarHref, "organizer.ics") self.sessions[1].deleteResource(URL(path=href)) # Remove the attendee event and inbox items props = (davxml.resourcetype,) results = self.sessions[0].getPropertiesOnHierarchy(URL(path=self.sessions[0].calendarHref), props) for href in results.keys(): if len(href.split("/")[-1]) > 10: self.sessions[0].deleteResource(URL(path=href)) results = self.sessions[0].getPropertiesOnHierarchy(URL(path=self.sessions[0].inboxHref), props) for href in results.keys(): if href != self.sessions[0].inboxHref: self.sessions[0].deleteResource(URL(path=href)) calendarserver-9.1+dfsg/contrib/performance/sqlusage/requests/multiget.py000066400000000000000000000040631315003562600271520ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
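The attendee list that `InviteTest.doRequest()` splices into the iCalendar template deserves a note: user02 is the organizer (and already listed as an attendee in the template itself), so the generated lines are user01 plus user03 onward, giving `count` extra attendees in total. The construction in isolation:

```python
ATTENDEE = "ATTENDEE:mailto:user%02d@example.com"

def attendee_lines(count):
    # user02 is the organizer, so skip it: the list is user01 followed
    # by user03 .. user(count+1), joined with CRLF as iCalendar requires.
    return "\r\n".join(
        [ATTENDEE % (1,)] + [ATTENDEE % (n + 3,) for n in range(count - 1)])
```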
## from caldavclientlibrary.protocol.caldav.definitions import caldavxml from caldavclientlibrary.protocol.caldav.multiget import Multiget from caldavclientlibrary.protocol.http.data.string import ResponseDataString from caldavclientlibrary.protocol.webdav.definitions import davxml, statuscodes from contrib.performance.sqlusage.requests.httpTests import HTTPTestBase from txweb2.dav.util import joinURL class MultigetTest(HTTPTestBase): """ A multiget operation """ def __init__(self, label, sessions, logFilePath, logFilePrefix, count): super(MultigetTest, self).__init__(label, sessions, logFilePath, logFilePrefix) self.count = count def doRequest(self): """ Execute the actual HTTP request. """ hrefs = [joinURL(self.sessions[0].calendarHref, "%d.ics" % (i + 1,)) for i in range(self.count)] props = ( davxml.getetag, caldavxml.calendar_data, caldavxml.schedule_tag, ) # Create CalDAV multiget request = Multiget(self.sessions[0], self.sessions[0].calendarHref, hrefs, props) result = ResponseDataString() request.setOutput(result) # Process it self.sessions[0].runSession(request) # If it's a 207 we want to parse the XML if request.getStatusCode() == statuscodes.MultiStatus: pass else: raise RuntimeError("Multiget request failed: %s" % (request.getStatusCode(),)) calendarserver-9.1+dfsg/contrib/performance/sqlusage/requests/propfind.py000066400000000000000000000035741315003562600271470ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and # limitations under the License. ## from caldavclientlibrary.protocol.http.data.string import ResponseDataString from caldavclientlibrary.protocol.webdav.definitions import davxml, statuscodes, \ headers from caldavclientlibrary.protocol.webdav.propfind import PropFind from contrib.performance.sqlusage.requests.httpTests import HTTPTestBase class PropfindTest(HTTPTestBase): """ A propfind operation """ def __init__(self, label, sessions, logFilePath, logFilePrefix, depth=1): super(PropfindTest, self).__init__(label, sessions, logFilePath, logFilePrefix) self.depth = headers.Depth1 if depth == 1 else headers.Depth0 def doRequest(self): """ Execute the actual HTTP request. """ props = ( davxml.getetag, davxml.getcontenttype, ) # Create WebDAV propfind request = PropFind(self.sessions[0], self.sessions[0].calendarHref, self.depth, props) result = ResponseDataString() request.setOutput(result) # Process it self.sessions[0].runSession(request) # If its a 207 we want to parse the XML if request.getStatusCode() == statuscodes.MultiStatus: pass else: raise RuntimeError("Propfind request failed: %s" % (request.getStatusCode(),)) calendarserver-9.1+dfsg/contrib/performance/sqlusage/requests/propfind_invite.py000066400000000000000000000036351315003562600305230ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from caldavclientlibrary.protocol.http.data.string import ResponseDataString from caldavclientlibrary.protocol.webdav.definitions import statuscodes, \ headers from caldavclientlibrary.protocol.webdav.propfind import PropFind from contrib.performance.sqlusage.requests.httpTests import HTTPTestBase from caldavclientlibrary.protocol.caldav.definitions import csxml class PropfindInviteTest(HTTPTestBase): """ A propfind operation """ def __init__(self, label, sessions, logFilePath, logFilePrefix, depth=1): super(PropfindInviteTest, self).__init__(label, sessions, logFilePath, logFilePrefix) self.depth = headers.Depth1 if depth == 1 else headers.Depth0 def doRequest(self): """ Execute the actual HTTP request. """ props = ( csxml.invite, ) # Create WebDAV propfind request = PropFind(self.sessions[0], self.sessions[0].calendarHref, self.depth, props) result = ResponseDataString() request.setOutput(result) # Process it self.sessions[0].runSession(request) # If its a 207 we want to parse the XML if request.getStatusCode() == statuscodes.MultiStatus: pass else: raise RuntimeError("Propfind request failed: %s" % (request.getStatusCode(),)) calendarserver-9.1+dfsg/contrib/performance/sqlusage/requests/put.py000066400000000000000000000040351315003562600261270ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from caldavclientlibrary.protocol.url import URL from contrib.performance.sqlusage.requests.httpTests import HTTPTestBase from pycalendar.datetime import DateTime from txweb2.dav.util import joinURL ICAL = """BEGIN:VCALENDAR CALSCALE:GREGORIAN PRODID:-//Example Inc.//Example Calendar//EN VERSION:2.0 BEGIN:VTIMEZONE LAST-MODIFIED:20040110T032845Z TZID:US/Eastern BEGIN:DAYLIGHT DTSTART:20000404T020000 RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:20001026T020000 RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD END:VTIMEZONE BEGIN:VEVENT DTSTAMP:20051222T205953Z CREATED:20060101T150000Z DTSTART;TZID=US/Eastern:%d0101T100000 DURATION:PT1H SUMMARY:event 1 UID:put-ics END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") class PutTest(HTTPTestBase): """ A PUT operation (non-invite) """ def doRequest(self): """ Execute the actual HTTP request. """ now = DateTime.getNowUTC() href = joinURL(self.sessions[0].calendarHref, "put.ics") self.sessions[0].writeData(URL(path=href), ICAL % (now.getYear() + 1,), "text/calendar") def cleanup(self): """ Do some cleanup after the real request. """ # Remove created resources href = joinURL(self.sessions[0].calendarHref, "put.ics") self.sessions[0].deleteResource(URL(path=href)) calendarserver-9.1+dfsg/contrib/performance/sqlusage/requests/query.py000066400000000000000000000066761315003562600265010ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from caldavclientlibrary.protocol.url import URL from caldavclientlibrary.protocol.webdav.definitions import davxml, statuscodes from contrib.performance.sqlusage.requests.httpTests import HTTPTestBase from txweb2.dav.util import joinURL from pycalendar.datetime import DateTime from caldavclientlibrary.protocol.caldav.query import QueryVEVENTTimeRange from caldavclientlibrary.protocol.http.data.string import ResponseDataString ICAL = """BEGIN:VCALENDAR CALSCALE:GREGORIAN PRODID:-//Example Inc.//Example Calendar//EN VERSION:2.0 BEGIN:VTIMEZONE LAST-MODIFIED:20040110T032845Z TZID:US/Eastern BEGIN:DAYLIGHT DTSTART:20000404T020000 RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:20001026T020000 RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD END:VTIMEZONE BEGIN:VEVENT DTSTAMP:20051222T205953Z CREATED:20060101T150000Z DTSTART:%s DURATION:PT1H SUMMARY:event 1 UID:sync-collection-%d-ics END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") class QueryTest(HTTPTestBase): """ A sync operation """ def __init__(self, label, sessions, logFilePath, logFilePrefix, count): super(QueryTest, self).__init__(label, sessions, logFilePath, logFilePrefix) self.count = count def prepare(self): """ Do some setup prior to the real request. 
""" # Add resources to create required number of changes self.start = DateTime.getNowUTC() self.start.setHHMMSS(12, 0, 0) self.end = self.start.duplicate() self.end.offsetHours(1) for i in range(self.count): href = joinURL(self.sessions[0].calendarHref, "tr-query-%d.ics" % (i + 1,)) self.sessions[0].writeData(URL(path=href), ICAL % (self.start.getText(), i + 1,), "text/calendar") def doRequest(self): """ Execute the actual HTTP request. """ props = ( davxml.getetag, davxml.getcontenttype, ) # Create CalDAV query request = QueryVEVENTTimeRange(self.sessions[0], self.sessions[0].calendarHref, self.start.getText(), self.end.getText(), props) result = ResponseDataString() request.setOutput(result) # Process it self.sessions[0].runSession(request) # If its a 207 we want to parse the XML if request.getStatusCode() == statuscodes.MultiStatus: pass else: raise RuntimeError("Query request failed: %s" % (request.getStatusCode(),)) def cleanup(self): """ Do some cleanup after the real request. """ # Remove created resources for i in range(self.count): href = joinURL(self.sessions[0].calendarHref, "tr-query-%d.ics" % (i + 1,)) self.sessions[0].deleteResource(URL(path=href)) calendarserver-9.1+dfsg/contrib/performance/sqlusage/requests/sync.py000066400000000000000000000062441315003562600262770ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from caldavclientlibrary.protocol.url import URL from caldavclientlibrary.protocol.webdav.definitions import davxml from contrib.performance.sqlusage.requests.httpTests import HTTPTestBase from txweb2.dav.util import joinURL from pycalendar.datetime import DateTime ICAL = """BEGIN:VCALENDAR CALSCALE:GREGORIAN PRODID:-//Example Inc.//Example Calendar//EN VERSION:2.0 BEGIN:VTIMEZONE LAST-MODIFIED:20040110T032845Z TZID:US/Eastern BEGIN:DAYLIGHT DTSTART:20000404T020000 RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:20001026T020000 RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD END:VTIMEZONE BEGIN:VEVENT DTSTAMP:20051222T205953Z CREATED:20060101T150000Z DTSTART;TZID=US/Eastern:%d0101T100000 DURATION:PT1H SUMMARY:event 1 UID:sync-collection-%d-ics END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") class SyncTest(HTTPTestBase): """ A sync operation """ def __init__(self, label, sessions, logFilePath, logFilePrefix, full, count): super(SyncTest, self).__init__(label, sessions, logFilePath, logFilePrefix) self.full = full self.count = count self.synctoken = "" def prepare(self): """ Do some setup prior to the real request. """ if not self.full: # Get current sync token results, _ignore_bad = self.sessions[0].getProperties(URL(path=self.sessions[0].calendarHref), (davxml.sync_token,)) self.synctoken = results[davxml.sync_token] # Add resources to create required number of changes now = DateTime.getNowUTC() for i in range(self.count): href = joinURL(self.sessions[0].calendarHref, "sync-collection-%d.ics" % (i + 1,)) self.sessions[0].writeData(URL(path=href), ICAL % (now.getYear() + 1, i + 1,), "text/calendar") def doRequest(self): """ Execute the actual HTTP request. 
""" props = ( davxml.getetag, davxml.getcontenttype, ) # Run sync collection self.sessions[0].syncCollection(URL(path=self.sessions[0].calendarHref), self.synctoken, props) def cleanup(self): """ Do some cleanup after the real request. """ if not self.full: # Remove created resources for i in range(self.count): href = joinURL(self.sessions[0].calendarHref, "sync-collection-%d.ics" % (i + 1,)) self.sessions[0].deleteResource(URL(path=href)) calendarserver-9.1+dfsg/contrib/performance/sqlusage/sqlusage.py000066400000000000000000000346351315003562600253010ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from __future__ import print_function from StringIO import StringIO from caldavclientlibrary.client.clientsession import CalDAVSession from caldavclientlibrary.protocol.url import URL from caldavclientlibrary.protocol.webdav.definitions import davxml from calendarserver.tools import tables from contrib.performance.sqlusage.requests.invite import InviteTest from contrib.performance.sqlusage.requests.multiget import MultigetTest from contrib.performance.sqlusage.requests.propfind import PropfindTest from contrib.performance.sqlusage.requests.propfind_invite import PropfindInviteTest from contrib.performance.sqlusage.requests.put import PutTest from contrib.performance.sqlusage.requests.query import QueryTest from contrib.performance.sqlusage.requests.sync import SyncTest from pycalendar.datetime import DateTime from txweb2.dav.util import joinURL import getopt import itertools import sys from caldavclientlibrary.client.principal import principalCache """ This tool is designed to analyze how SQL is being used for various HTTP requests. It will execute a series of HTTP requests against a test server configuration and count the total number of SQL statements per request, the total number of rows returned per request and the total SQL execution time per request. Each series will be repeated against a varying calendar size so the variation in SQL use with calendar size can be plotted. 
""" EVENT_COUNTS = (0, 1, 5, 10, 50, 100, 500, 1000,) SHAREE_COUNTS = (0, 1, 5, 10, 50, 100,) ICAL = """BEGIN:VCALENDAR CALSCALE:GREGORIAN PRODID:-//Example Inc.//Example Calendar//EN VERSION:2.0 BEGIN:VTIMEZONE LAST-MODIFIED:20040110T032845Z TZID:US/Eastern BEGIN:DAYLIGHT DTSTART:20000404T020000 RRULE:FREQ=YEARLY;BYDAY=1SU;BYMONTH=4 TZNAME:EDT TZOFFSETFROM:-0500 TZOFFSETTO:-0400 END:DAYLIGHT BEGIN:STANDARD DTSTART:20001026T020000 RRULE:FREQ=YEARLY;BYDAY=-1SU;BYMONTH=10 TZNAME:EST TZOFFSETFROM:-0400 TZOFFSETTO:-0500 END:STANDARD END:VTIMEZONE BEGIN:VEVENT DTSTAMP:20051222T205953Z CREATED:20060101T150000Z DTSTART;TZID=US/Eastern:%d0101T100000 DURATION:PT1H SUMMARY:event 1 UID:%d-ics END:VEVENT END:VCALENDAR """.replace("\n", "\r\n") class SQLUsageSession(CalDAVSession): def __init__(self, server, port=None, ssl=False, afunix=None, user="", pswd="", principal=None, root=None, calendar="calendar", logging=False): super(SQLUsageSession, self).__init__(server, port, ssl, afunix, user, pswd, principal, root, logging) self.homeHref = "/calendars/users/%s/" % (self.user,) self.calendarHref = "/calendars/users/%s/%s/" % (self.user, calendar,) self.inboxHref = "/calendars/users/%s/inbox/" % (self.user,) self.notificationHref = "/calendars/users/%s/notification/" % (self.user,) class EventSQLUsage(object): def __init__(self, server, port, users, pswds, logFilePath, compact): self.server = server self.port = port self.users = users self.pswds = pswds self.logFilePath = logFilePath self.compact = compact self.requestLabels = [] self.results = {} self.currentCount = 0 def runLoop(self, event_counts): # Make the sessions sessions = [ SQLUsageSession(self.server, self.port, user=user, pswd=pswd, root="/") for user, pswd in itertools.izip(self.users, self.pswds) ] # Set of requests to execute requests = [ MultigetTest("mget-1" if self.compact else "multiget-1", sessions, self.logFilePath, "event", 1), MultigetTest("mget-50" if self.compact else "multiget-50", sessions, 
self.logFilePath, "event", 50), PropfindTest("prop-cal" if self.compact else "propfind-cal", sessions, self.logFilePath, "event", 1), SyncTest("s-full" if self.compact else "sync-full", sessions, self.logFilePath, "event", True, 0), SyncTest("s-1" if self.compact else "sync-1", sessions, self.logFilePath, "event", False, 1), QueryTest("q-1" if self.compact else "query-1", sessions, self.logFilePath, "event", 1), QueryTest("q-10" if self.compact else "query-10", sessions, self.logFilePath, "event", 10), PutTest("put", sessions, self.logFilePath, "event"), InviteTest("invite-1", sessions, self.logFilePath, "event", 1), InviteTest("invite-5", sessions, self.logFilePath, "event", 5), ] self.requestLabels = [request.label for request in requests] def _warmUp(): # Warm-up server by doing calendar home and child collection propfinds. # Do this twice because the very first time might provision DB objects and # blow any DB cache - the second time will warm the DB cache. props = (davxml.resourcetype,) for _ignore in range(2): for session in sessions: session.getPropertiesOnHierarchy(URL(path=session.homeHref), props) session.getPropertiesOnHierarchy(URL(path=session.calendarHref), props) session.getPropertiesOnHierarchy(URL(path=session.inboxHref), props) session.getPropertiesOnHierarchy(URL(path=session.notificationHref), props) # Now loop over sets of events for count in event_counts: print("Testing count = %d" % (count,)) self.ensureEvents(sessions[0], sessions[0].calendarHref, count) result = {} for request in requests: print(" Test = %s" % (request.label,)) _warmUp() result[request.label] = request.execute(count) self.results[count] = result def report(self): self._printReport("SQL Statement Count", "count", "%d") self._printReport("SQL Rows Returned", "rows", "%d") self._printReport("SQL Time", "timing", "%.1f") def _printReport(self, title, attr, colFormat): table = tables.Table() print(title) headers = ["Events"] + self.requestLabels table.addHeader(headers) formats 
= [tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY)] + \ [tables.Table.ColumnFormat(colFormat, tables.Table.ColumnFormat.RIGHT_JUSTIFY)] * len(self.requestLabels) table.setDefaultColumnFormats(formats) for k in sorted(self.results.keys()): row = [k] + [getattr(self.results[k][item], attr) for item in self.requestLabels] table.addRow(row) os = StringIO() table.printTable(os=os) print(os.getvalue()) print("") def ensureEvents(self, session, calendarhref, n): """ Make sure the required number of events are present in the calendar. @param n: number of events @type n: C{int} """ now = DateTime.getNowUTC() for i in range(n - self.currentCount): index = self.currentCount + i + 1 href = joinURL(calendarhref, "%d.ics" % (index,)) session.writeData(URL(path=href), ICAL % (now.getYear() + 1, index,), "text/calendar") self.currentCount = n class SharerSQLUsage(object): def __init__(self, server, port, users, pswds, logFilePath, compact): self.server = server self.port = port self.users = users self.pswds = pswds self.logFilePath = logFilePath self.compact = compact self.requestLabels = [] self.results = {} self.currentCount = 0 def runLoop(self, sharee_counts): # Make the sessions sessions = [ SQLUsageSession(self.server, self.port, user=user, pswd=pswd, root="/", calendar="shared") for user, pswd in itertools.izip(self.users, self.pswds) ] sessions = sessions[0:1] # Create the calendar first sessions[0].makeCalendar(URL(path=sessions[0].calendarHref)) # Set of requests to execute requests = [ MultigetTest("mget-1" if self.compact else "multiget-1", sessions, self.logFilePath, "share", 1), MultigetTest("mget-50" if self.compact else "multiget-50", sessions, self.logFilePath, "share", 50), PropfindInviteTest("propfind", sessions, self.logFilePath, "share", 1), SyncTest("s-full" if self.compact else "sync-full", sessions, self.logFilePath, "share", True, 0), SyncTest("s-1" if self.compact else "sync-1", sessions, self.logFilePath, "share", False, 1), 
QueryTest("q-1" if self.compact else "query-1", sessions, self.logFilePath, "share", 1), QueryTest("q-10" if self.compact else "query-10", sessions, self.logFilePath, "share", 10), PutTest("put", sessions, self.logFilePath, "share"), ] self.requestLabels = [request.label for request in requests] # Warm-up server by doing shared calendar propfinds props = (davxml.resourcetype,) for session in sessions: session.getPropertiesOnHierarchy(URL(path=session.calendarHref), props) # Now loop over sets of events for count in sharee_counts: print("Testing count = %d" % (count,)) self.ensureSharees(sessions[0], sessions[0].calendarHref, count) result = {} for request in requests: print(" Test = %s" % (request.label,)) result[request.label] = request.execute(count) self.results[count] = result def report(self): self._printReport("SQL Statement Count", "count", "%d") self._printReport("SQL Rows Returned", "rows", "%d") self._printReport("SQL Time", "timing", "%.1f") def _printReport(self, title, attr, colFormat): table = tables.Table() print(title) headers = ["Sharees"] + self.requestLabels table.addHeader(headers) formats = [tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY)] + \ [tables.Table.ColumnFormat(colFormat, tables.Table.ColumnFormat.RIGHT_JUSTIFY)] * len(self.requestLabels) table.setDefaultColumnFormats(formats) for k in sorted(self.results.keys()): row = [k] + [getattr(self.results[k][item], attr) for item in self.requestLabels] table.addRow(row) os = StringIO() table.printTable(os=os) print(os.getvalue()) print("") def ensureSharees(self, session, calendarhref, n): """ Make sure the required number of sharees are present in the calendar. 
        @param n: number of sharees
        @type n: C{int}
        """

        users = []
        uids = []
        for i in range(n - self.currentCount):
            index = self.currentCount + i + 2
            users.append("user%02d" % (index,))
            uids.append("urn:x-uid:10000000-0000-0000-0000-000000000%03d" % (index,))
        session.addInvitees(URL(path=calendarhref), uids, True)

        # Now accept each one
        for user in users:
            acceptor = SQLUsageSession(self.server, self.port, user=user, pswd=user, root="/", calendar="shared")
            notifications = acceptor.getNotifications(URL(path=acceptor.notificationHref))
            principal = principalCache.getPrincipal(acceptor, acceptor.principalPath)
            acceptor.processNotification(principal, notifications[0], True)

        self.currentCount = n


def usage(error_msg=None):
    if error_msg:
        print(error_msg)

    print("""Usage: sqlusage.py [options] [FILE]
Options:
    -h               Print this help and exit
    --server         Server hostname
    --port           Server port
    --user           User name
    --pswd           Password
    --event          Do event scaling
    --share          Do sharee scaling
    --event-counts   Comma-separated list of event counts to test
    --sharee-counts  Comma-separated list of sharee counts to test
    --compact        Make printed tables as thin as possible

Arguments:
    FILE      File name for sqlstats.log to analyze.

Description:
    This utility will analyze the output of the pg_stat_statement table.
""") if error_msg: raise ValueError(error_msg) else: sys.exit(0) if __name__ == '__main__': server = "localhost" port = 8008 users = ("user01", "user02",) pswds = ("user01", "user02",) file = "sqlstats.logs" event_counts = EVENT_COUNTS sharee_counts = SHAREE_COUNTS compact = False do_all = True do_event = False do_share = False options, args = getopt.getopt( sys.argv[1:], "h", [ "server=", "port=", "user=", "pswd=", "compact", "event", "share", "event-counts=", "sharee-counts=", ] ) for option, value in options: if option == "-h": usage() elif option == "--server": server = value elif option == "--port": port = int(value) elif option == "--user": users = value.split(",") elif option == "--pswd": pswds = value.split(",") elif option == "--compact": compact = True elif option == "--event": do_all = False do_event = True elif option == "--share": do_all = False do_share = True elif option == "--event-counts": event_counts = [int(i) for i in value.split(",")] elif option == "--sharee-counts": sharee_counts = [int(i) for i in value.split(",")] else: usage("Unrecognized option: %s" % (option,)) # Process arguments if len(args) == 1: file = args[0] elif len(args) != 0: usage("Must zero or one file arguments") if do_all or do_event: sql = EventSQLUsage(server, port, users, pswds, file, compact) sql.runLoop(event_counts) sql.report() if do_all or do_share: sql = SharerSQLUsage(server, port, users, pswds, file, compact) sql.runLoop(sharee_counts) sql.report() calendarserver-9.1+dfsg/contrib/performance/sqlwatch000077500000000000000000000012271315003562600230220ustar00rootroot00000000000000#!/usr/bin/python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from sqlwatch import main
main()

calendarserver-9.1+dfsg/contrib/performance/sqlwatch.py
##
# Copyright (c) 2010-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## from __future__ import print_function from benchmark import DTraceCollector, instancePIDs from twisted.internet import reactor from twisted.internet.defer import Deferred, inlineCallbacks from twisted.python.failure import Failure from twisted.python.log import err import sys import signal import time class Stop(Exception): pass interrupted = 0.0 def waitForInterrupt(): if signal.getsignal(signal.SIGINT) != signal.default_int_handler: raise RuntimeError("Already waiting") d = Deferred() def fire(*ignored): global interrupted signal.signal(signal.SIGINT, signal.default_int_handler) now = time.time() if now - interrupted < 4: reactor.callFromThread(lambda: d.errback(Failure(Stop()))) else: interrupted = now reactor.callFromThread(d.callback, None) signal.signal(signal.SIGINT, fire) return d @inlineCallbacks def collect(directory): while True: pids = instancePIDs(directory) dtrace = DTraceCollector("sql_measure.d", pids) print('Starting') yield dtrace.start() print('Started') try: yield waitForInterrupt() except Stop: yield dtrace.stop() break print('Stopping') stats = yield dtrace.stop() for s in stats: if s.name == 'execute': s.statements(stats[s]) print('Stopped') def main(): from twisted.python.failure import startDebugMode startDebugMode() d = collect(sys.argv[1]) d.addErrback(err, "Problem collecting SQL") d.addBoth(lambda ign: reactor.stop()) reactor.run(installSignalHandlers=False) calendarserver-9.1+dfsg/contrib/performance/stackedbar.py000066400000000000000000000027711315003562600237300ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

# a stacked bar plot with errorbars

import numpy as np
import matplotlib.pyplot as plt
from operator import add

N = 5
menMeans = (20, 35, 30, 35, 27)
womenMeans = (25, 32, 34, 20, 25)
menStd = (2, 3, 4, 1, 2)
womenStd = (3, 5, 2, 3, 3)
otherMeans = (15, 30, 25, 40, 35)

ind = np.arange(N)  # the x locations for the groups
width = 0.35        # the width of the bars: can also be len(x) sequence

p1 = plt.bar(ind, menMeans, width, color='r', yerr=womenStd)
p2 = plt.bar(ind, womenMeans, width, color='y', bottom=menMeans, yerr=menStd)
p3 = plt.bar(ind, otherMeans, width, color='g', bottom=map(add, menMeans, womenMeans), yerr=menStd)

plt.ylabel('Scores')
plt.title('Scores by group and gender')
plt.xticks(ind + width / 2., ('G1', 'G2', 'G3', 'G4', 'G5'))
plt.yticks(np.arange(0, 81, 10))
plt.legend((p1[0], p2[0], p3[0]), ('Men', 'Women', 'Other'))
plt.show()

calendarserver-9.1+dfsg/contrib/performance/stats.py
##
# Copyright (c) 2010-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
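The stacked-bar script above offsets the third series by the elementwise sum of the two series below it, via `bottom=map(add, menMeans, womenMeans)`. A minimal sketch of just that bottom-offset computation, independent of matplotlib (the data values are the ones from the script):

```python
from operator import add

# Two stacked series: each series is drawn with its bars starting at
# the accumulated height of the series below it.
men = (20, 35, 30, 35, 27)
women = (25, 32, 34, 20, 25)

# Elementwise sum gives the "bottom" offsets for the third series,
# which is what map(add, menMeans, womenMeans) produces in the script.
bottoms = list(map(add, men, women))
print(bottoms)  # [45, 67, 64, 55, 52]
```

For more than three series the same idea generalizes to a running elementwise sum.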
## from __future__ import print_function from math import log, sqrt from time import mktime import random import sqlparse from pycalendar.datetime import DateTime from pycalendar.duration import Duration as PyDuration from pycalendar.icalendar.property import Property from pycalendar.timezone import Timezone from zope.interface import Interface, implements from twisted.python.util import FancyEqMixin NANO = 1000000000.0 def mean(samples): return sum(samples) / len(samples) def median(samples): return sorted(samples)[len(samples) / 2] def residuals(samples, from_): return [from_ - s for s in samples] def stddev(samples): m = mean(samples) variance = sum([datum ** 2 for datum in residuals(samples, m)]) / len(samples) return variance ** 0.5 def mad(samples): """ Return the median absolute deviation of the given data set. """ med = median(samples) res = map(abs, residuals(samples, med)) return median(res) class _Statistic(object): commands = ['summarize'] def __init__(self, name): self.name = name def __eq__(self, other): if isinstance(other, _Statistic): return self.name == other.name return NotImplemented def __hash__(self): return hash((self.__class__, self.name)) def __repr__(self): return '' % (self.name,) def squash(self, samples, mode=None): """ Normalize the sample data into float values (one per sample) in seconds (I hope time is the only thing you measure). 
""" return samples def summarize(self, data): return ''.join([ self.name, ' mean ', str(mean(data)), '\n', self.name, ' median ', str(median(data)), '\n', self.name, ' stddev ', str(stddev(data)), '\n', self.name, ' median absolute deviation ', str(mad(data)), '\n', self.name, ' sum ', str(sum(data)), '\n']) def write(self, basename, data): fObj = file(basename % (self.name,), 'w') fObj.write('\n'.join(map(str, data)) + '\n') fObj.close() class Duration(_Statistic): pass class SQLDuration(_Statistic): commands = ['summarize', 'statements', 'transcript'] def _is_literal(self, token): if token.ttype in sqlparse.tokens.Literal: return True if token.ttype == sqlparse.tokens.Keyword and token.value in (u'True', u'False'): return True return False def _substitute(self, expression, replacement): try: expression.tokens except AttributeError: return for i, token in enumerate(expression.tokens): if self._is_literal(token): expression.tokens[i] = replacement elif token.is_whitespace(): expression.tokens[i] = sqlparse.sql.Token('Whitespace', ' ') else: self._substitute(token, replacement) def normalize(self, sql): (statement,) = sqlparse.parse(sql) # Replace any literal values with placeholders qmark = sqlparse.sql.Token('Operator', '?') self._substitute(statement, qmark) return sqlparse.format(unicode(statement).encode('ascii')) def squash(self, samples, mode="duration"): """ Summarize the execution of a number of SQL statements. @param mode: C{"duration"} to squash the durations into the result. C{"count"} to squash the count of statements executed into the result. 
""" results = [] for data in samples: if mode == "duration": value = sum([interval for (_ignore_sql, interval) in data]) / NANO else: value = len(data) results.append(value) return results def summarize(self, samples): times = [] statements = {} for data in samples: total = 0 for (sql, interval) in data: sql = self.normalize(sql) statements[sql] = statements.get(sql, 0) + 1 total += interval times.append(total / NANO * 1000) return ''.join([ '%d: %s\n' % (count, statement) for (statement, count) in statements.iteritems()]) + _Statistic.summarize(self, times) def statements(self, samples): statements = {} for data in samples: for (sql, interval) in data: sql = self.normalize(sql) statements.setdefault(sql, []).append(interval) byTime = [] for statement, times in statements.iteritems(): byTime.append((sum(times), len(times), statement)) byTime.sort() byTime.reverse() if byTime: header = '%10s %10s %10s %s' row = '%10.5f %10.5f %10d %s' print(header % ('TOTAL MS', 'PERCALL MS', 'NCALLS', 'STATEMENT')) for (time, count, statement) in byTime: time = time / NANO * 1000 print(row % (time, time / count, count, statement)) def transcript(self, samples): statements = [] data = samples[len(samples) / 2] for (sql, _ignore_interval) in data: statements.append(self.normalize(sql)) return '\n'.join(statements) + '\n' class Bytes(_Statistic): def squash(self, samples): return [sum(bytes) for bytes in samples] def summarize(self, samples): return _Statistic.summarize(self, self.squash(samples)) def quantize(data): """ Given some continuous data, quantize it into appropriately sized discrete buckets (eg, as would be suitable for constructing a histogram of the values). 
""" # buckets = {} return [] class IPopulation(Interface): def sample(): # @NoSelf pass class UniformDiscreteDistribution(object, FancyEqMixin): """ """ implements(IPopulation) compareAttributes = ['_values'] def __init__(self, values, randomize=True): self._values = values self._randomize = randomize self._refill() def _refill(self): self._remaining = self._values[:] if self._randomize: random.shuffle(self._remaining) def sample(self): if not self._remaining: self._refill() return self._remaining.pop() class LogNormalDistribution(object, FancyEqMixin): """ """ implements(IPopulation) compareAttributes = ['_mu', '_sigma', '_maximum'] def __init__(self, mu=None, sigma=None, mean=None, mode=None, median=None, maximum=None): if mu is not None and sigma is not None: scale = 1.0 elif not (mu is None and sigma is None): raise ValueError("mu and sigma must both be defined or both not defined") elif mode is None: raise ValueError("When mu and sigma are not defined, mode must be defined") elif median is not None: scale = mode median /= float(mode) mode = 1.0 mu = log(median) sigma = sqrt(log(median) - log(mode)) elif mean is not None: scale = mode mean /= float(mode) mode = 1.0 mu = log(mean) + log(mode) / 2.0 sigma = sqrt(log(mean) - log(mode) / 2.0) else: raise ValueError("When using mode one of median or mean must be defined") self._mode = mode self._median = median self._mu = mu self._sigma = sigma self._scale = scale self._maximum = maximum def sample(self): result = self._scale * random.lognormvariate(self._mu, self._sigma) if self._maximum is not None and result > self._maximum: for _ignore in range(10): result = self._scale * random.lognormvariate(self._mu, self._sigma) if result <= self._maximum: break else: raise ValueError("Unable to generate LogNormalDistribution sample within required range") return result class FixedDistribution(object, FancyEqMixin): """ """ implements(IPopulation) compareAttributes = ['_value'] def __init__(self, value): self._value = value 
def sample(self): return self._value class NearFutureDistribution(object, FancyEqMixin): compareAttributes = ['_offset'] def __init__(self): self._offset = LogNormalDistribution(7, 0.8) def sample(self): now = DateTime.getNowUTC() now.offsetSeconds(int(self._offset.sample())) return now class NormalDistribution(object, FancyEqMixin): compareAttributes = ['_mu', '_sigma'] def __init__(self, mu, sigma): self._mu = mu self._sigma = sigma def sample(self): # Only return positive values or zero v = random.normalvariate(self._mu, self._sigma) while v < 0: v = random.normalvariate(self._mu, self._sigma) return v class UniformIntegerDistribution(object, FancyEqMixin): compareAttributes = ['_min', '_max'] def __init__(self, min, max): self._min = min self._max = max def sample(self): return int(random.uniform(self._min, self._max)) NUM_WEEKDAYS = 7 class WorkDistribution(object, FancyEqMixin): compareAttributes = ["_daysOfWeek", "_beginHour", "_endHour"] _weekdayNames = ["sun", "mon", "tue", "wed", "thu", "fri", "sat"] def __init__(self, daysOfWeek=["mon", "tue", "wed", "thu", "fri"], beginHour=8, endHour=17, tzname="UTC"): self._daysOfWeek = [self._weekdayNames.index(day) for day in daysOfWeek] self._beginHour = beginHour self._endHour = endHour self._tzname = tzname self._helperDistribution = NormalDistribution( # Mean 6 workdays in the future 60 * 60 * 8 * 6, # Standard deviation of 4 workdays 60 * 60 * 8 * 4) self.now = DateTime.getNow def astimestamp(self, dt): return mktime(dt.timetuple()) def _findWorkAfter(self, when): """ Return a two-tuple of the start and end of work hours following C{when}. If C{when} falls within work hours, then the start time will be equal to when. """ # Find a workday that follows the timestamp weekday = when.getDayOfWeek() for i in range(NUM_WEEKDAYS): day = when + PyDuration(days=i) if (weekday + i) % NUM_WEEKDAYS in self._daysOfWeek: # Joy, a day on which work might occur. Find the first hour on # this day when work may start. 
day.setHHMMSS(self._beginHour, 0, 0) begin = day end = begin.duplicate() end.setHHMMSS(self._endHour, 0, 0) if end > when: return begin, end def sample(self): offset = PyDuration(seconds=int(self._helperDistribution.sample())) beginning = self.now(Timezone(tzid=self._tzname)) while offset: start, end = self._findWorkAfter(beginning) if end - start > offset: result = start + offset result.setMinutes(result.getMinutes() // 15 * 15) result.setSeconds(0) return result offset.setDuration(offset.getTotalSeconds() - (end - start).getTotalSeconds()) beginning = end class RecurrenceDistribution(object, FancyEqMixin): compareAttributes = ["_allowRecurrence", "_weights"] _model_rrules = { "none": None, "daily": "RRULE:FREQ=DAILY", "weekly": "RRULE:FREQ=WEEKLY", "monthly": "RRULE:FREQ=MONTHLY", "yearly": "RRULE:FREQ=YEARLY", "dailylimit": "RRULE:FREQ=DAILY;COUNT=14", "weeklylimit": "RRULE:FREQ=WEEKLY;COUNT=4", "workdays": "RRULE:FREQ=DAILY;BYDAY=MO,TU,WE,TH,FR" } def __init__(self, allowRecurrence, weights={}): self._allowRecurrence = allowRecurrence self._rrules = [] if self._allowRecurrence: for rrule, count in sorted(weights.items(), key=lambda x: x[0]): for _ignore in range(count): self._rrules.append(self._model_rrules[rrule]) self._helperDistribution = UniformIntegerDistribution(0, len(self._rrules) - 1) def sample(self): if self._allowRecurrence: index = self._helperDistribution.sample() rrule = self._rrules[index] if rrule: prop = Property.parseText(rrule) return prop return None calendarserver-9.1+dfsg/contrib/performance/stats_analysis.py000066400000000000000000000171141315003562600246630ustar00rootroot00000000000000## # Copyright (c) 2015-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
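`WorkDistribution.sample` above snaps each generated time down to a quarter-hour boundary with `result.setMinutes(result.getMinutes() // 15 * 15)`. The same rounding expressed with the standard library's `datetime` (used here purely for illustration; the module itself uses pycalendar's `DateTime`):

```python
from datetime import datetime

def snap_to_quarter_hour(dt):
    # Floor the minutes to the previous 15-minute boundary and drop
    # seconds/microseconds, mirroring setMinutes(m // 15 * 15) plus
    # setSeconds(0) in the sampler.
    return dt.replace(minute=dt.minute // 15 * 15, second=0, microsecond=0)

print(snap_to_quarter_hour(datetime(2017, 6, 1, 9, 44, 30)))
# 2017-06-01 09:30:00
```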
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from collections import defaultdict, namedtuple from contrib.performance.stats import LogNormalDistribution from scipy.optimize import curve_fit import itertools import matplotlib.pyplot as plt import numpy as np import os class LogNormal(object): Params = namedtuple("Params", ("mu", "sigma", "scale_x", "scale_y")) @staticmethod def fn(x, mu, sigma, scale_x, scale_y): """ Calculate the LogNormal F(x) value for the given parameters. @param x: x value to use @type x: L{float} @param mu: mean @type mu: L{float} @param sigma: standard deviation @type sigma: L{float} @param scale_x: X-scaling factor (to make mode == 1.0) @type scale_x: L{float} @param scale_y: Y-scaling factor for peak height @type scale_y: L{float} """ return scale_y * ( np.exp(-(np.log(x / scale_x) - mu) ** 2 / (2 * sigma ** 2)) / (x / scale_x * sigma * np.sqrt(2 * np.pi)) ) @staticmethod def estimate(x, y): """ Given a set of x-y data that is likely a LogNormal distribution, try and estimate the mode and median, and then derive mu, sigma, scale_x and scale_y values. 
@param x: sequence of values @type x: L{list} of L{float} @param y: sequence of values @type y: L{list} of L{float} """ estimate_mode = 0.0 estimate_median = 0.0 max_y = 0.0 accumulated_y = 0.0 half_y = sum(y) / 2.0 for x_val, y_val in itertools.izip(x, y): if y_val > max_y: max_y = y_val estimate_mode = x_val accumulated_y += y_val if half_y is not None and accumulated_y > half_y: estimate_median = x_val half_y = None estimate_scale_x = estimate_mode - 0.5 estimate_mode = 1.0 estimate_median /= estimate_scale_x estimate_mu = np.log(estimate_median) estimate_sigma = np.sqrt(np.log(estimate_median) - np.log(estimate_mode)) peak_y = LogNormal.fn(estimate_scale_x, estimate_mu, estimate_sigma, estimate_scale_x, 1.0) estimate_scale_y = max(y) / peak_y return ( estimate_mode, estimate_median, LogNormal.Params(estimate_mu, estimate_sigma, estimate_scale_x, estimate_scale_y), ) @staticmethod def plot(min_x, max_x, params, color): """ Plot a LogNormal distribution over the specified x-axis range using the supplied distribution parameters. @param min_x: minimum x-axis value @type min_x: L{float} @param max_x: maximum x-asix value @type max_x: L{float} @param params: distribution parameters @type params: L{LogNormal.Params} @param color: color to use for line in plot @type color: L{str} """ xl = np.linspace(min_x, max_x, 10000) yl = ( np.exp(-(np.log(xl / params.scale_x) - params.mu) ** 2 / (2 * params.sigma ** 2)) / (xl / params.scale_x * params.sigma * np.sqrt(2 * np.pi)) ) plt.plot(xl, params.scale_y * yl, linewidth=2, color=color) @staticmethod def plotCSV(path, cutoff, bucket, color): """ Plot data from a CSV file. 
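The scan inside `LogNormal.estimate` reduces to two passes over the histogram: the mode is the x at the peak y, and the median is the first x past half of the accumulated mass. A minimal sketch of just that scan (without the class's `scale_x` rescaling step):

```python
def estimate_mode_median(x, y):
    # Mode: x value at the largest y.  Median: first x at which the
    # running total of y exceeds half the overall total.  This mirrors
    # the single loop in LogNormal.estimate.
    mode = x[max(range(len(y)), key=lambda i: y[i])]
    half = sum(y) / 2.0
    running = 0.0
    for xv, yv in zip(x, y):
        running += yv
        if running > half:
            return mode, xv
    return mode, x[-1]
```

From a rescaled mode and median, the class then derives `mu = log(median)` and `sigma = sqrt(log(median) - log(mode))`, the standard log-normal identities.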
@param path: file to read from @type path: L{str} @param cutoff: maximum x-value to process @type cutoff: L{float} @param bucket: size of x-value buckets to use @type bucket: L{int} @param color: color to use for line in plot @type color: L{str} """ with open(os.path.expanduser(path)) as f: data = f.read() result = defaultdict(int) for line in data.splitlines(): sp = line.split(",") x_val = int(sp[0]) y_val = int(sp[1]) if x_val < cutoff: result[(x_val / bucket) * bucket] += y_val x, y = zip(*sorted(result.items())) plt.plot(x, y) return (x, y,) @staticmethod def distributionPlot(mode, median, maximum, color): """ Plot data from a randomly generated LogNornal distribution. @param mode: distribution mode @type mode: L{float} @param median: distribution median @type median: L{float} @param maximum: highest x-value to allow @type maximum: L{float} @param color: color to use for line in plot @type color: L{str} """ distribution = LogNormalDistribution(mode=mode, median=median, maximum=maximum) result = defaultdict(int) for _ignore in range(1000000): s = int(distribution.sample()) + 0.5 result[s] += 1 x, y = zip(*sorted(result.items())) plt.plot(x, y, color=color) peak_y = LogNormal.fn(distribution._scale, distribution._mu, distribution._sigma, distribution._scale, 1.0) scale_y = sum(sorted(y, reverse=True)[0:100]) / 100.0 / peak_y return (x, y, LogNormal.Params(distribution._mu, distribution._sigma, distribution._scale, scale_y),) @staticmethod def fit(x, y): """ Try and fit a LogNormal distribution to the supplied x-y data. 
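The bucketing step in `plotCSV` — snap each x to the bottom of a fixed-width bucket and accumulate y — can be isolated like this. Note that the original's `(x_val / bucket) * bucket` relies on Python 2 integer division; in Python 3 it would need `//`:

```python
from collections import defaultdict

def bucket_counts(pairs, cutoff, bucket):
    # Discard samples at or past the cutoff, then accumulate counts
    # into buckets keyed by the bucket's lower edge, exactly as
    # plotCSV does with its CSV rows.
    result = defaultdict(int)
    for x_val, y_val in pairs:
        if x_val < cutoff:
            result[(x_val // bucket) * bucket] += y_val
    return sorted(result.items())
```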
@param x: sequence of values @type x: L{list} of L{float} @param y: sequence of values @type y: L{list} of L{float} """ estimate_mode, estimate_median, estimate_params = LogNormal.estimate(x, y) print("\n==== Estimates") print("mode: {}".format(estimate_mode * estimate_params.scale_x)) print("median: {}".format(estimate_median * estimate_params.scale_x)) print("mu: {}".format(estimate_params.mu)) print("sigma: {}".format(estimate_params.sigma)) print("scale_x: {}".format(estimate_params.scale_x)) print("scale_y: {}".format(estimate_params.scale_y)) popt, _ignore_pcov = curve_fit(LogNormal.fn, x, y, ( estimate_params.mu, estimate_params.sigma, estimate_params.scale_x, estimate_params.scale_y, )) print("\n==== Fit results") print("mode: {:.2f}".format(popt[2] * np.exp(popt[0] - popt[1] ** 2))) print("median: {:.2f}".format(popt[2] * np.exp(popt[0]))) print("mu: {:.2f}".format(popt[0])) print("sigma: {:.2f}".format(popt[1])) print("scale_x: {:.2f}".format(popt[2])) print("scale_y: {:.2f}".format(popt[3])) return ( LogNormal.Params(*popt), estimate_params, ) if __name__ == '__main__': #x, y = LogNormal.plotCSV("~/data.txt", 10000, 10, color="b") #estimate_mode, estimate_median, estimate_params = LogNormal.estimate(x, y) mode_val = 450 median_val = 650 x, y, estimate_params = LogNormal.distributionPlot(mode_val, median_val, 10000, color="b") LogNormal.plot(1, 10000, estimate_params, color="g") popt, estimate = LogNormal.fit(x, y) LogNormal.plot(1, 10000, popt, color="r") plt.xlabel("Samples") plt.ylabel("LogNormal") plt.show() calendarserver-9.1+dfsg/contrib/performance/sudo-run.sh000077500000000000000000000012131315003562600233540ustar00rootroot00000000000000#!/usr/bin/env bash ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## sudo ./run.sh "$@" calendarserver-9.1+dfsg/contrib/performance/svn-committime000077500000000000000000000015641315003562600241530ustar00rootroot00000000000000#!/usr/bin/python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## import sys import datetime import pysvn wc = sys.argv[1] revision = int(sys.argv[2]) client = pysvn.Client() [entry] = client.log(wc, pysvn.Revision(pysvn.opt_revision_kind.number, revision), limit=1) print datetime.datetime.fromtimestamp(entry['date']) calendarserver-9.1+dfsg/contrib/performance/svn-revno000077500000000000000000000013101315003562600231220ustar00rootroot00000000000000#!/usr/bin/python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## import sys import pysvn print pysvn.Client().info(sys.argv[1])['revision'].number calendarserver-9.1+dfsg/contrib/performance/test_benchmark.py000066400000000000000000000136171315003562600246170ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Unit tests for L{benchmark}. """ from twisted.trial.unittest import TestCase from twisted.python.usage import UsageError from benchmark import BenchmarkOptions class BenchmarkOptionsTests(TestCase): """ Tests for L{benchmark.BenchmarkOptions}. """ def setUp(self): """ Create a L{BenchmarkOptions} instance to test. """ self.options = BenchmarkOptions() def test_parameters(self): """ The I{--parameters} option can be specified multiple time and each time specifies the parameters for a particular benchmark as a comma separated list of integers. 
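The `--parameters foo:1,10,100` syntax that `test_parameters` exercises splits into a benchmark name and a list of integers. A sketch of that parse (the option class's real implementation lives in `benchmark.py`, outside this chunk):

```python
def parse_parameters(spec):
    # "foo:1,10,100" -> ("foo", [1, 10, 100]), the shape the test
    # asserts for self.options['parameters'].
    name, _, values = spec.partition(":")
    return name, [int(v) for v in values.split(",")]
```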
""" self.options.parseOptions(["--parameters", "foo:1,10,100", "foo"]) self.assertEquals( self.options['parameters'], {"foo": [1, 10, 100]}) def test_filterBenchmarksWithoutDistribution(self): """ If neither I{--hosts-count} nor I{--host-index} are supplied, L{BenchmarkOptions} takes all positional arguments as the benchmarks to run. """ self.options.parseOptions(["foo", "bar", "baz"]) self.assertEquals(self.options['benchmarks'], ["foo", "bar", "baz"]) def test_hostsCountWithoutIndex(self): """ If I{--hosts-count} is provided but I{--host-index} is not, a L{UsageError} is raised. """ exc = self.assertRaises( UsageError, self.options.parseOptions, ["--hosts-count=3", "foo"]) self.assertEquals( str(exc), "Specify neither or both of hosts-count and host-index") def test_hostIndexWithoutCount(self): """ If I{--host-index} is provided by I{--hosts-count} is not, a L{UsageError} is raised. """ exc = self.assertRaises( UsageError, self.options.parseOptions, ["--host-index=3", "foo"]) self.assertEquals( str(exc), "Specify neither or both of hosts-count and host-index") def test_negativeHostsCount(self): """ If a negative value is provided for I{--hosts-count}, a L{UsageError} is raised. """ exc = self.assertRaises( UsageError, self.options.parseOptions, ["--host-index=1", "--hosts-count=-1", "foo"]) self.assertEquals( str(exc), "Specify a positive integer for hosts-count") def test_nonIntegerHostsCount(self): """ If a string which cannot be converted to an integer is provided for I{--hosts-count}, a L{UsageError} is raised. """ exc = self.assertRaises( UsageError, self.options.parseOptions, ["--host-index=1", "--hosts-count=hello", "foo"]) self.assertEquals( str(exc), "Parameter type enforcement failed: invalid literal for int() with base 10: 'hello'") def test_negativeHostIndex(self): """ If a negative value is provided for I{--host-index}, a L{UsageError} is raised. 
""" exc = self.assertRaises( UsageError, self.options.parseOptions, ["--host-index=-1", "--hosts-count=2", "foo"]) self.assertEquals( str(exc), "Specify a positive integer for host-index") def test_nonIntegerHostIndex(self): """ If a string which cannot be converted to an integer is provided for I{--host-index}, a L{UsageError} is raised. """ exc = self.assertRaises( UsageError, self.options.parseOptions, ["--host-index=hello", "--hosts-count=2", "foo"]) self.assertEquals( str(exc), "Parameter type enforcement failed: invalid literal for int() with base 10: 'hello'") def test_largeHostIndex(self): """ If the integer supplied to I{--host-index} is greater than or equal to the integer supplied to I{--hosts-count}, a L{UsageError} is raised. """ exc = self.assertRaises( UsageError, self.options.parseOptions, ["--hosts-count=2", "--host-index=2", "foo"]) self.assertEquals( str(exc), "host-index must be less than hosts-count") def test_hostIndexAndCount(self): """ If I{--hosts-count} and I{--host-index} are supplied, of the benchmarks supplied as positional arguments, only a subset is taken as the benchmarks to run. The subset is constructed so that for a particular value of I{hosts-count}, each benchmark will only appear in the subset returned for a single value of I{--host-index}, and all benchmarks will appear in one such subset. 
""" self.options.parseOptions([ "--hosts-count=3", "--host-index=0", "foo", "bar", "baz", "quux"]) self.assertEquals(self.options['benchmarks'], ["foo", "quux"]) self.options.parseOptions([ "--hosts-count=3", "--host-index=1", "foo", "bar", "baz", "quux"]) self.assertEquals(self.options['benchmarks'], ["bar"]) self.options.parseOptions([ "--hosts-count=3", "--host-index=2", "foo", "bar", "baz", "quux"]) self.assertEquals(self.options['benchmarks'], ["baz"]) calendarserver-9.1+dfsg/contrib/performance/test_event_change_date.py000066400000000000000000000043461315003562600263070ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from twisted.trial.unittest import TestCase from benchmarks.event_change_date import replaceTimestamp calendarHead = """\ BEGIN:VCALENDAR VERSION:2.0 CALSCALE:GREGORIAN PRODID:-//Apple Inc.//iCal 4.0.3//EN BEGIN:VTIMEZONE TZID:America/Los_Angeles BEGIN:STANDARD DTSTART:20071104T020000 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU TZNAME:PST TZOFFSETFROM:-0700 TZOFFSETTO:-0800 END:STANDARD BEGIN:DAYLIGHT DTSTART:20070311T020000 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU TZNAME:PDT TZOFFSETFROM:-0800 TZOFFSETTO:-0700 END:DAYLIGHT END:VTIMEZONE BEGIN:VEVENT UID:f0b54c7b-c8ae-4dca-807f-5909da771964 """ calendarDates = """\ DTSTART;TZID=America/Los_Angeles:20100730T111505 DTEND;TZID=America/Los_Angeles:20100730T111508 """ calendarTail = """\ ATTENDEE;CN=User 02;CUTYPE=INDIVIDUAL;EMAIL=user02@example.com;PARTSTAT=NE EDS-ACTION;ROLE=REQ-PARTICIPANT;RSVP=TRUE;SCHEDULE-STATUS=1.2:urn:x-uid:use r02 CREATED:20100729T193912Z DTSTAMP:20100729T195557Z ORGANIZER;CN=User 03;EMAIL=user03@example.com:urn:x-uid:user03 SEQUENCE:1 SUMMARY:STUFF IS THINGS TRANSP:OPAQUE END:VEVENT END:VCALENDAR """ class TimestampReplaceTests(TestCase): def test_replaceTimestamp(self): """ replaceTimestamp adjusts the DTSTART and DTEND timestamp values by one hour times the counter parameter. """ oldCalendar = calendarHead + calendarDates + calendarTail newCalendar = replaceTimestamp(oldCalendar, 3) self.assertEquals( calendarHead + "DTSTART;TZID=America/Los_Angeles:20100730T141505\n" "DTEND;TZID=America/Los_Angeles:20100730T141508\n" + calendarTail, newCalendar) calendarserver-9.1+dfsg/contrib/performance/test_httpauth.py000066400000000000000000000023601315003562600245170ustar00rootroot00000000000000## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
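The hour-shift that `test_replaceTimestamp` checks — bump the hour field of the DTSTART/DTEND values by the counter — can be sketched with a regex. The real helper lives in `benchmarks.event_change_date` and may work differently; this only reproduces the observable behaviour of the test:

```python
import re

def shift_hours(ics_text, hours):
    # Match DTSTART/DTEND property lines of the form
    # NAME;PARAMS:YYYYMMDDThhmmss and bump the hh field, wrapping
    # at midnight.
    def bump(match):
        hh = (int(match.group(2)) + hours) % 24
        return "%sT%02d%s" % (match.group(1), hh, match.group(3))
    return re.sub(
        r"((?:DTSTART|DTEND)[^:]*:\d{8})T(\d{2})(\d{4})", bump, ics_text)

line = "DTSTART;TZID=America/Los_Angeles:20100730T111505"
```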
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from twisted.trial.unittest import TestCase from httpauth import AuthHandlerAgent class HTTPAuthTests(TestCase): def test_AuthHandlerAgent_parse(self): """ Make sure auth method is handled case-insensitively. """ headers = ( "Basic realm=\"foo\"", "basic realm=\"foo\"", "BASIC realm=\"foo\"", "Digest realm=\"foo\"", "digest realm=\"foo\"", "DIGEST realm=\"foo\"", ) for hdrvalue in headers: hdlr = AuthHandlerAgent(None, None) ch = hdlr._parse(hdrvalue) self.assertTrue(ch is not None) calendarserver-9.1+dfsg/contrib/performance/test_stats.py000066400000000000000000000152431315003562600240200ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
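The property `test_AuthHandlerAgent_parse` pins down is that the scheme token (`Basic`, `basic`, `BASIC`, …) is compared case-insensitively. A minimal challenge parser with that behaviour — a sketch only; `AuthHandlerAgent._parse` itself is in `httpauth.py`, outside this chunk:

```python
def parse_challenge(header):
    # Split 'Basic realm="foo"' into a scheme plus parameters,
    # lower-casing the scheme so all spellings compare equal.
    scheme, _, rest = header.partition(" ")
    params = {}
    for part in rest.split(","):
        key, _, value = part.strip().partition("=")
        params[key] = value.strip('"')
    return scheme.lower(), params
```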
## from twisted.trial.unittest import TestCase from stats import ( SQLDuration, LogNormalDistribution, UniformDiscreteDistribution, UniformIntegerDistribution, WorkDistribution, quantize, RecurrenceDistribution) from pycalendar.datetime import DateTime from pycalendar.timezone import Timezone class SQLDurationTests(TestCase): def setUp(self): self.stat = SQLDuration('foo') def test_normalize_integer(self): self.assertEquals( self.stat.normalize('SELECT foo FROM bar WHERE 1'), 'SELECT foo FROM bar WHERE ?') self.assertEquals( self.stat.normalize('SELECT foo FROM bar WHERE x = 1'), 'SELECT foo FROM bar WHERE x = ?') self.assertEquals( self.stat.normalize('SELECT foo + 1 FROM bar'), 'SELECT foo + ? FROM bar') def test_normalize_boolean(self): self.assertEquals( self.stat.normalize('SELECT foo FROM bar WHERE True'), 'SELECT foo FROM bar WHERE ?') class DistributionTests(TestCase): def test_lognormal(self): dist = LogNormalDistribution(mu=1, sigma=1) for _ignore_i in range(100): value = dist.sample() self.assertIsInstance(value, float) self.assertTrue(value >= 0.0, "negative value %r" % (value,)) self.assertTrue(value <= 1000, "implausibly high value %r" % (value,)) dist = LogNormalDistribution(mode=1, median=2) for _ignore_i in range(100): value = dist.sample() self.assertIsInstance(value, float) self.assertTrue(value >= 0.0, "negative value %r" % (value,)) self.assertTrue(value <= 1000, "implausibly high value %r" % (value,)) dist = LogNormalDistribution(mode=1, mean=2) for _ignore_i in range(100): value = dist.sample() self.assertIsInstance(value, float) self.assertTrue(value >= 0.0, "negative value %r" % (value,)) self.assertTrue(value <= 1000, "implausibly high value %r" % (value,)) self.assertRaises(ValueError, LogNormalDistribution, mu=1) self.assertRaises(ValueError, LogNormalDistribution, sigma=1) self.assertRaises(ValueError, LogNormalDistribution, mode=1) self.assertRaises(ValueError, LogNormalDistribution, mean=1) self.assertRaises(ValueError, 
LogNormalDistribution, median=1) def test_uniformdiscrete(self): population = [1, 5, 6, 9] counts = dict.fromkeys(population, 0) dist = UniformDiscreteDistribution(population) for _ignore_i in range(len(population) * 10): counts[dist.sample()] += 1 self.assertEqual(dict.fromkeys(population, 10), counts) def test_workdistribution(self): tzname = "US/Eastern" dist = WorkDistribution(["mon", "wed", "thu", "sat"], 10, 20, tzname) dist._helperDistribution = UniformDiscreteDistribution([35 * 60 * 60 + 30 * 60]) dist.now = lambda tzname = None: DateTime(2011, 5, 29, 18, 5, 36, tzid=tzname) value = dist.sample() self.assertEqual( # Move past three workdays - monday, wednesday, thursday - using 30 # of the hours, and then five and a half hours into the fourth # workday, saturday. Workday starts at 10am, so the sample value # is 3:30pm, ie 1530 hours. DateTime(2011, 6, 4, 15, 30, 0, tzid=Timezone(tzid=tzname)), value ) dist = WorkDistribution(["mon", "tue", "wed", "thu", "fri"], 10, 20, tzname) dist._helperDistribution = UniformDiscreteDistribution([35 * 60 * 60 + 30 * 60]) value = dist.sample() self.assertTrue(isinstance(value, DateTime)) # twisted.trial.unittest.FailTest: not equal: # a = datetime.datetime(2011, 6, 4, 15, 30, tzinfo=) # b = datetime.datetime(2011, 6, 4, 19, 30, tzinfo=) # test_workdistribution.todo = "Somehow timezones mess this up" def test_recurrencedistribution(self): dist = RecurrenceDistribution(False) for _ignore in range(100): value = dist.sample() self.assertTrue(value is None) dist = RecurrenceDistribution(True, {"daily": 1, "none": 2, "weekly": 1}) dist._helperDistribution = UniformDiscreteDistribution([0, 3, 2, 1, 0], randomize=False) value = dist.sample() self.assertTrue(value is not None) value = dist.sample() self.assertTrue(value is None) value = dist.sample() self.assertTrue(value is None) value = dist.sample() self.assertTrue(value is not None) value = dist.sample() self.assertTrue(value is not None) def test_uniform(self): dist = 
UniformIntegerDistribution(-5, 10) for _ignore_i in range(100): value = dist.sample() self.assertTrue(-5 <= value < 10) self.assertIsInstance(value, int) class QuantizationTests(TestCase): """ Tests for L{quantize} which constructs discrete datasets of dynamic quantization from continuous datasets. """ skip = "nothing implemented yet, maybe not necessary" def test_one(self): """ A single data point is put into a bucket equal to its value and returned. """ dataset = [5.0] expected = [(5.0, [5.0])] self.assertEqual(quantize(dataset), expected) def test_two(self): """ Each of two values are put into buckets the size of the standard deviation of the sample. """ dataset = [2.0, 5.0] expected = [(1.5, [2.0]), (4.5, [5.0])] self.assertEqual(quantize(dataset), expected) def xtest_three(self): """ If two out of three values fall within one bucket defined by the standard deviation of the sample, that bucket is split in half so each bucket has one value. """ pass def xtest_alpha(self): """ This exercises one simple case of L{quantize} with a small amount of data. """ calendarserver-9.1+dfsg/contrib/performance/upload000077500000000000000000000012471315003562600224620ustar00rootroot00000000000000#!/usr/bin/python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
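`quantize` is unimplemented in the source (the whole test class is skipped), but the two spelled-out expectations pin down one consistent reading: buckets one population standard deviation wide, keyed by the value snapped down to a multiple of the width. A sketch under exactly that assumption — not the project's eventual implementation:

```python
import math
from collections import defaultdict

def quantize(dataset):
    # test_one: a lone point gets a bucket equal to its own value.
    if len(dataset) == 1:
        return [(dataset[0], [dataset[0]])]
    # test_two: [2.0, 5.0] has population std-dev 1.5, and 2.0 and 5.0
    # snap down to the 1.5 and 4.5 bucket edges respectively.
    mean = sum(dataset) / len(dataset)
    width = math.sqrt(sum((v - mean) ** 2 for v in dataset) / len(dataset))
    buckets = defaultdict(list)
    for v in dataset:
        buckets[(v // width) * width].append(v)
    return sorted(buckets.items())
```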
## from upload import main raise SystemExit(main()) calendarserver-9.1+dfsg/contrib/performance/upload.py000066400000000000000000000102431315003562600231020ustar00rootroot00000000000000## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function import sys import pickle from urllib import urlencode from datetime import datetime from twisted.python.log import err from twisted.python.usage import UsageError, Options from twisted.internet import reactor from twisted.web.client import Agent from stats import median, mad from benchlib import select from httpclient import StringProducer, readBody class UploadOptions(Options): optParameters = [ ('url', None, None, 'Location of Codespeed server to which to upload.'), ('project', None, 'CalendarServer', 'Name of the project to which the data relates ' '(as recognized by the Codespeed server)'), ('revision', None, None, 'Revision number of the code which produced this data.'), ('revision-date', None, None, 'Timestamp at which the specified revision was created.'), ('environment', None, None, 'Name of the environment in which the data was produced.'), ('statistic', None, None, 'Identifier for the file/benchmark/parameter'), ('backend', None, None, 'Which storage backend produced this data.'), ] def postOptions(self): assert self['url'] assert self['backend'] in ('filesystem', 'postgresql') def _upload(reactor, url, project, revision, revision_date, 
benchmark, executable, environment, result_value, result_date, std_dev, max_value, min_value): data = { 'commitid': str(revision), 'revision_date': revision_date, 'project': project, 'benchmark': benchmark, 'environment': environment, 'executable': executable, 'result_value': str(result_value), 'result_date': result_date, 'std_dev': str(std_dev), 'max': str(max_value), 'min': str(min_value), } print('uploading', data) agent = Agent(reactor) d = agent.request('POST', url, None, StringProducer(urlencode(data))) def check(response): d = readBody(response) def read(body): print('body', repr(body)) if response.code != 200: raise Exception("Upload failed: %r" % (response.code,)) d.addCallback(read) return d d.addCallback(check) return d def upload(reactor, url, project, revision, revision_date, benchmark, param, statistic, backend, environment, samples): d = _upload( reactor, url=url, project=project, revision=revision, revision_date=revision_date, benchmark='%s-%s-%s' % (benchmark, param, statistic), executable='%s-backend' % (backend,), environment=environment, result_value=median(samples), result_date=datetime.now(), std_dev=mad(samples), # Not really! 
max_value=max(samples), min_value=min(samples)) d.addErrback(err, "Upload failed") return d def main(): options = UploadOptions() try: options.parseOptions(sys.argv[1:]) except UsageError, e: print(e) return 1 fname, benchmark, param, statistic = options['statistic'].split(',') stat, samples = select( pickle.load(file(fname)), benchmark, param, statistic) samples = stat.squash(samples) d = upload( reactor, options['url'], options['project'], options['revision'], options['revision-date'], benchmark, param, statistic, options['backend'], options['environment'], samples) reactor.callWhenRunning(d.addCallback, lambda ign: reactor.stop()) reactor.run() calendarserver-9.1+dfsg/contrib/tools/000077500000000000000000000000001315003562600201035ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/tools/__init__.py000066400000000000000000000011361315003562600222150ustar00rootroot00000000000000## # Copyright (c) 2014-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## calendarserver-9.1+dfsg/contrib/tools/anonymous_log.py000077500000000000000000000114371315003562600233570ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
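`upload` reports `median(samples)` as the result and `mad(samples)` as the spread (flagged "Not really!" because MAD is not a standard deviation). Those helpers come from `stats.py`, outside this chunk; the standard definitions look like this:

```python
def median(values):
    # Middle element of the sorted data, or the mean of the two middle
    # elements for an even-length sample.
    ordered = sorted(values)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2.0

def mad(values):
    # Median absolute deviation: the median distance from the median.
    # Robust to outliers, which is why it stands in for std_dev here.
    m = median(values)
    return median([abs(v - m) for v in values])
```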
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function from gzip import GzipFile import getopt import os import sys import traceback class CalendarServerLogAnalyzer(object): def __init__(self): self.userCtr = 1 self.users = {} self.guidCtr = 1 self.guids = {} self.resourceCtr = 1 self.resources = {} def anonymizeLogFile(self, logFilePath): fpath = os.path.expanduser(logFilePath) if fpath.endswith(".gz"): f = GzipFile(fpath) else: f = open(fpath) try: for line in f: if not line.startswith("Log"): line = self.anonymizeLine(line) print(line, end="") except Exception, e: print("Exception: %s for %s" % (e, line,)) raise finally: f.close() def anonymizeLine(self, line): startPos = line.find("- ") endPos = line.find(" [") userid = line[startPos + 2:endPos] if userid != "-": if userid not in self.users: self.users[userid] = "user%05d" % (self.userCtr,) self.userCtr += 1 line = line[:startPos + 2] + self.users[userid] + line[endPos:] endPos = line.find(" [") startPos = endPos + 1 startPos = line.find(']', startPos + 21) + 3 endPos = line.find(' ', startPos) if line[startPos] != '?': startPos = endPos + 1 endPos = line.find(" HTTP/", startPos) uri = line[startPos:endPos] splits = uri.split("/") if len(splits) >= 4: if splits[1] in ("calendars", "principals"): if splits[3] not in self.guids: self.guids[splits[3]] = "guid%05d" % (self.guidCtr,) self.guidCtr += 1 splits[3] = self.guids[splits[3]] if len(splits) > 4: if splits[4] not in ("", "calendar", "inbox", "outbox", "dropbox"): if splits[4] not in self.resources: self.resources[splits[4]] = "resource%d" % (self.resourceCtr,) 
self.resourceCtr += 1 splits[4] = self.resources[splits[4]] if len(splits) > 5: for x in range(5, len(splits)): if splits[x]: if splits[x] not in self.resources: self.resources[splits[x]] = "resource%d%s" % (self.resourceCtr, os.path.splitext(splits[x])[1]) self.resourceCtr += 1 splits[x] = self.resources[splits[x]] line = line[:startPos] + "/".join(splits) + line[endPos:] return line def usage(error_msg=None): if error_msg: print(error_msg) print("""Usage: anonymous_log [options] [FILE] Options: -h Print this help and exit Arguments: FILE File names for the access logs to anonymize Description: This utility will anonymize the content of an access log. """) if error_msg: raise ValueError(error_msg) else: sys.exit(0) if __name__ == "__main__": try: options, args = getopt.getopt(sys.argv[1:], "h", []) for option, value in options: if option == "-h": usage() else: usage("Unrecognized option: %s" % (option,)) # Process arguments if len(args) == 0: args = ("/var/log/caldavd/access.log",) pwd = os.getcwd() analyzers = [] for arg in args: arg = os.path.expanduser(arg) if not arg.startswith("/"): arg = os.path.join(pwd, arg) if arg.endswith("/"): arg = arg[:-1] if not os.path.exists(arg): print("Path does not exist: '%s'. Ignoring." % (arg,)) continue CalendarServerLogAnalyzer().anonymizeLogFile(arg) except Exception, e: sys.exit(str(e)) print(traceback.print_exc()) calendarserver-9.1+dfsg/contrib/tools/buildbot_analyze.py000077500000000000000000000057541315003562600240220ustar00rootroot00000000000000#!/usr/bin/env python # coding=utf-8 ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
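The anonymizer's core idea — hand out stable, counter-based pseudonyms so repeated identities always map to the same label — is the same counter-plus-dict scheme it applies to users (`user%05d`), GUIDs, and resources. Distilled into one small class (illustrative names, not the script's API):

```python
class Pseudonymizer(object):
    # First identity seen becomes user00001, the second user00002,
    # and any repeat maps back to its existing label.
    def __init__(self, prefix="user"):
        self._prefix = prefix
        self._known = {}

    def __call__(self, identity):
        if identity not in self._known:
            self._known[identity] = "%s%05d" % (
                self._prefix, len(self._known) + 1)
        return self._known[identity]
```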
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from bz2 import BZ2File from getopt import getopt import os import sys import cPickle def qos(): logdir = os.path.expanduser("~/buildbot/master/PerfBladeSim") logfiles = [path.replace("simlog", "stdio") for path in os.listdir(logdir) if "-simlog" in path] data = {} for logfile in logfiles: run = int(logfile.split("-")[0]) logfile = os.path.join(logdir, logfile) # Get svn revision buildfile = os.path.join(logdir, "{run}".format(run=run)) try: with open(buildfile) as f: builddata = f.read() builddata = cPickle.loads(builddata) svnvers = int(builddata.properties["got_revision"]) except Exception: svnvers = "?" # Get Qos value if not os.path.exists(logfile): continue if logfile.endswith(".bz2"): opener = BZ2File else: opener = open try: with opener(logfile) as f: lines = f.readlines() except IOError: print(logfile) raise for line in lines: if "Qos :" in line: try: data[run] = (svnvers, float(line.split()[-1]),) except ValueError: pass break for key in sorted(data.keys()): print("{}\t{}\t{}".format(key, data[key][0], data[key][1])) def usage(error_msg=None): if error_msg: print(error_msg) print("""Usage: buildbot_analyze [options] Options: -h Print this help and exit --qos Look at PerfBlade Qos values Arguments: None Description: This utility will analyze the output of BuildBot runs to extract historical data for all the runs. 
""") if error_msg: raise ValueError(error_msg) else: sys.exit(0) if __name__ == "__main__": do_qos = False try: options, args = getopt(sys.argv[1:], "h", ["qos", ]) for option, value in options: if option == "-h": usage() elif option == "--qos": do_qos = True else: usage("Unrecognized option: %s" % (option,)) if len(args) != 0: usage("Must have no arguments") if do_qos: qos() except Exception, e: raise sys.exit(str(e)) calendarserver-9.1+dfsg/contrib/tools/dtraceanalyze.py000077500000000000000000000400341315003562600233070ustar00rootroot00000000000000#!/usr/bin/env python # coding=utf-8 ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## from __future__ import print_function from __future__ import with_statement import collections import getopt import os import re import sys import tables class Dtrace(object): class DtraceLine(object): prefix_maps = { "/usr/share/caldavd/lib/python/": "{caldavd}/", "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.6": "{Python}", "/System/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6": "{Python}", "/System/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5": "{Python}", "/System/Library/Frameworks/Python.framework/Versions/2.7/Extras/lib/python": "{Extras}", "/System/Library/Frameworks/Python.framework/Versions/2.6/Extras/lib/python": "{Extras}", "/System/Library/Frameworks/Python.framework/Versions/2.5/Extras/lib/python": "{Extras}", } contains_maps = { "/CalendarServer": "{caldavd}", "/Twisted": "{Twisted}", "/pycalendar": "{pycalendar}", } def __init__(self, line, lineno): self.entering = True self.function_name = "" self.file_location = "" self.parent = None self.children = [] self.lineno = lineno re_matched = re.match("(..) 
([^ ]+) \(([^\)]+)\)", line) if re_matched is None: print(line) results = re_matched.groups() if results[0] == "<-": self.entering = False elif results[0] == "->": self.entering = True else: raise ValueError("Invalid start of line at %d" % (lineno,)) self.function_name = results[1] self.file_location = results[2] for key, value in Dtrace.DtraceLine.prefix_maps.iteritems(): if self.file_location.startswith(key): self.file_location = value + self.file_location[len(key):] break else: for key, value in Dtrace.DtraceLine.contains_maps.iteritems(): found1 = self.file_location.find(key) if found1 != -1: found2 = self.file_location[found1 + 1:].find('/') if found2 != -1: self.file_location = value + self.file_location[found1 + found2 + 1:] else: self.file_location = value break def __repr__(self): return "%s (%s)" % self.getKey() def getKey(self): return (self.file_location, self.function_name,) def getPartialKey(self): return (self.filePath(), self.function_name,) def addChild(self, child): child.parent = self self.children.append(child) def checkForCollapse(self, other): if self.entering and not other.entering: if self.function_name == other.function_name and self.function_name != "mainLoop": if self.filePath() == other.filePath(): return True return False def filePath(self): return self.file_location[0:self.file_location.rfind(':')] def prettyPrint(self, indent, indents, sout): indenter = "" for level in indents: if level > 0: indenter += "⎢ " elif level < 0: indenter += "⎿ " else: indenter += " " sout.write("%s%s (%s)\n" % (indenter, self.function_name, self.file_location,)) def stackName(self): return self.function_name, self.filePath() class DtraceStack(object): def __init__(self, lines, no_collapse): self.start_indent = 0 self.stack = [] self.called_by = {} self.call_into = {} self.processLines(lines, no_collapse) def processLines(self, lines, no_collapse): new_lines = [] last_line = None for line in lines: if last_line: if not no_collapse and 
line.checkForCollapse(last_line): new_lines.pop() last_line = None continue new_lines.append(line) last_line = line indent = 0 min_indent = 0 current_line = None blocks = [[]] backstack = [] for line in new_lines: stackName = line.stackName() if line.entering: if line.function_name == "mainLoop": if min_indent < 0: newstack = [] for oldindent, oldline in blocks[-1]: newstack.append((oldindent - min_indent, oldline,)) blocks[-1] = newstack min_indent = 0 indent = 0 blocks.append([]) backstack = [] else: indent += 1 backstack.append(stackName) blocks[-1].append((indent, line,)) if current_line: current_line.addChild(line) current_line = line else: if len(blocks) == 1 or line.function_name != "mainLoop" and indent: indent -= 1 while backstack and indent and stackName != backstack[-1]: indent -= 1 backstack.pop() if backstack: backstack.pop() if indent < 0: print("help") current_line = current_line.parent if current_line else None min_indent = min(min_indent, indent) for block in blocks: self.stack.extend(block) if min_indent < 0: self.start_indent = -min_indent else: self.start_indent = 0 self.generateCallInfo() def generateCallInfo(self): for _ignore, line in self.stack: key = line.getKey() if line.parent: parent_key = line.parent.getKey() parent_calls = self.called_by.setdefault(key, {}).get(parent_key, 0) self.called_by[key][parent_key] = parent_calls + 1 for child in line.children: child_key = child.getKey() child_calls = self.call_into.setdefault(key, {}).get(child_key, 0) self.call_into[key][child_key] = child_calls + 1 def prettyPrint(self, sout): indents = [1] * self.start_indent ctr = 0 maxctr = len(self.stack) - 1 for indent, line in self.stack: current_indent = self.start_indent + indent next_indent = (self.start_indent + self.stack[ctr + 1][0]) if ctr < maxctr else 10000 if len(indents) == current_indent: pass elif len(indents) < current_indent: indents.append(current_indent) else: indents = indents[0:current_indent] if next_indent < current_indent: 
indents = indents[0:next_indent] + [-1] * (current_indent - next_indent) line.prettyPrint(self.start_indent + indent, indents, sout) ctr += 1 def __init__(self, filepath): self.filepath = filepath self.calltimes = collections.defaultdict(lambda: [0, 0, 0]) self.exclusiveTotal = 0 def analyze(self, do_stack, no_collapse): print("Parsing dtrace output.") # Parse the trace lines first and look for the start of the call times lines = [] traces = True index = -1 with file(filepath) as f: for lineno, line in enumerate(f): if traces: if line.strip() and line[0:3] in ("-> ", "<- "): lines.append(Dtrace.DtraceLine(line, lineno + 1)) elif line.startswith("Count,"): traces = False else: if line[0] != ' ': continue line = line.strip() if line.startswith("FILE"): index += 1 if index >= 0: self.parseCallTimeLine(line, index) self.printTraceDetails(lines, do_stack, no_collapse) for ctr, title in enumerate(("Sorted by Count", "Sorted by Exclusive", "Sorted by Inclusive",)): print(title) self.printCallTimeTotals(ctr) def printTraceDetails(self, lines, do_stack, no_collapse): print("Found %d lines" % (len(lines),)) print("============================") print("") self.stack = Dtrace.DtraceStack(lines, no_collapse) if do_stack: with file("stacked.txt", "w") as f: self.stack.prettyPrint(f) print("Wrote stack calls to 'stacked.txt'") print("============================") print("") # Get stats for each call stats = {} last_exit = None for line in lines: key = line.getKey() if line.entering: counts = stats.get(key, (0, 0)) counts = (counts[0] + (1 if no_collapse else 0), counts[1] + (0 if no_collapse else 1)) if line.getPartialKey() != last_exit: counts = (counts[0] + (0 if no_collapse else 1), counts[1] + (1 if no_collapse else 0)) stats[key] = counts else: last_exit = line.getPartialKey() print("Function Call Counts") print("") table = tables.Table() table.addHeader(("Count", "Function", "File",)) for key, value in sorted(stats.iteritems(), key=lambda x: x[1][0], reverse=True): 
table.addRow(("%d (%d)" % value, key[1], key[0],)) table.printTable() print("") print("Called By Counts") print("") table = tables.Table() table.addHeader(("Function", "Caller", "Count",)) for main_key in sorted(self.stack.called_by.keys(), key=lambda x: x[1] + x[0]): first = True for key, value in sorted(self.stack.called_by[main_key].iteritems(), key=lambda x: x[1], reverse=True): table.addRow(( ("%s (%s)" % (main_key[1], main_key[0],)) if first else "", "%s (%s)" % (key[1], key[0],), str(value), )) first = False table.printTable() print("") print("Call Into Counts") print("") table = tables.Table() table.addHeader(("Function", "Calls", "Count",)) for main_key in sorted(self.stack.call_into.keys(), key=lambda x: x[1] + x[0]): first = True for key, value in sorted(self.stack.call_into[main_key].iteritems(), key=lambda x: x[1], reverse=True): table.addRow(( ("%s (%s)" % (main_key[1], main_key[0],)) if first else "", "%s (%s)" % (key[1], key[0],), str(value), )) first = False table.printTable() print("") def parseCallTimeLine(self, line, index): file, type, name, value = line.split() if file in ("-", "FILE"): return else: self.calltimes[(file, name)][index] = int(value) if index == 1: self.exclusiveTotal += int(value) def printCallTimeTotals(self, sortIndex): table = tables.Table() table.setDefaultColumnFormats(( tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY), tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY), tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY), )) table.addHeader(("File", "Name", "Count", "Inclusive", "Exclusive", "Children",)) for key, value in sorted(self.calltimes.items(), key=lambda x: x[1][sortIndex], reverse=True): table.addRow(( key[0], key[1], value[0], 
value[2], "%s (%6.3f%%)" % (value[1], (100.0 * value[1]) / self.exclusiveTotal), value[2] - value[1], )) table.addRow() table.addRow(( "Total:", "", "", "", self.exclusiveTotal, "", )) table.printTable() print("") def usage(error_msg=None): if error_msg: print(error_msg) print("""Usage: dtraceanalyze [options] FILE Options: -h Print this help and exit --stack Save indented stack to file --raw-count Display call counts based on full trace, else display counts on collapsed values. Arguments: FILE File name containing dtrace output to analyze Description: This utility will analyze the output of the trace.d dtrace script to produce useful statistics, and other performance related data. To use this do the following (where PID is the pid of the Python process to monitor: > sudo ./trace.d PID > results.txt ... > ./dtraceanalyze.py results.txt """) if error_msg: raise ValueError(error_msg) else: sys.exit(0) if __name__ == "__main__": sys.setrecursionlimit(10000) do_stack = False no_collapse = False try: options, args = getopt.getopt(sys.argv[1:], "h", ["stack", "no-collapse"]) for option, value in options: if option == "-h": usage() elif option == "--stack": do_stack = True elif option == "--no-collapse": no_collapse = True else: usage("Unrecognized option: %s" % (option,)) if len(args) == 0: fname = "results.txt" elif len(args) != 1: usage("Must have one argument") else: fname = args[0] filepath = os.path.expanduser(fname) if not os.path.exists(filepath): usage("File '%s' does not exist" % (filepath,)) print("CalendarServer dtrace analysis tool tool") print("=====================================") print("") if do_stack: print("Generating nested stack call file.") if no_collapse: print("Consecutive function calls will not be removed.") else: print("Consecutive function calls will be removed.") print("============================") print("") Dtrace(filepath).analyze(do_stack, no_collapse) except Exception, e: raise sys.exit(str(e)) 
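The dtraceanalyze tool above keys its trace parsing on the `-> ` and `<- ` direction markers that the trace.d script prints for each function entry and return. As a minimal, standalone sketch of that first parsing step (the regex variant and the sample trace lines here are illustrative, not taken verbatim from the tool):

```python
import re
from collections import Counter

# Same "-> name (file:line)" / "<- name (file:line)" shape that trace.d
# emits and DtraceLine parses: direction marker, function name, location.
LINE_RE = re.compile(r"(->|<-) ([^ ]+) \(([^)]+)\)")


def count_entries(lines):
    """Count function entries ("->") keyed by (file_location, function_name)."""
    counts = Counter()
    for raw in lines:
        matched = LINE_RE.match(raw.strip())
        if matched and matched.group(1) == "->":
            counts[(matched.group(3), matched.group(2))] += 1
    return counts


# Invented sample input for illustration.
sample = [
    "-> mainLoop (tcp.py:10)",
    "-> doRead (tcp.py:22)",
    "<- doRead (tcp.py:22)",
    "-> doRead (tcp.py:22)",
]

print(count_entries(sample)[("tcp.py:22", "doRead")])  # prints 2
```

The full tool layers collapse of matched entry/exit pairs, caller/callee tables, and call-time totals on top of this per-line parse.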
calendarserver-9.1+dfsg/contrib/tools/fakecalendardata.py000077500000000000000000000140621315003562600237150ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function import datetime import getopt import os import random import sys import uuid outputFile = None fileCount = 0 lastWeek = None calendar_template = """BEGIN:VCALENDAR VERSION:2.0 CALSCALE:GREGORIAN PRODID:-//Apple Inc.//iCal 4.0.3//EN BEGIN:VTIMEZONE TZID:America/Los_Angeles BEGIN:STANDARD DTSTART:20071104T020000 RRULE:FREQ=YEARLY;BYMONTH=11;BYDAY=1SU TZNAME:PST TZOFFSETFROM:-0700 TZOFFSETTO:-0800 END:STANDARD BEGIN:DAYLIGHT DTSTART:20070311T020000 RRULE:FREQ=YEARLY;BYMONTH=3;BYDAY=2SU TZNAME:PDT TZOFFSETFROM:-0800 TZOFFSETTO:-0700 END:DAYLIGHT END:VTIMEZONE %(VEVENTS)s\ END:VCALENDAR """ vevent_template = """\ BEGIN:VEVENT UID:%(UID)s DTSTART;TZID=America/Los_Angeles:%(START)s DURATION:P1H %(RRULE)s\ CREATED:20100729T193912Z DTSTAMP:20100729T195557Z %(ORGANIZER)s\ %(ATTENDEES)s\ SEQUENCE:0 SUMMARY:%(SUMMARY)s TRANSP:OPAQUE END:VEVENT """ attendee_template = """\ ATTENDEE;CN=User %(SEQUENCE)02d;CUTYPE=INDIVIDUAL;EMAIL=user%(SEQUENCE)02d@example.com;PARTSTAT=NE EDS-ACTION;ROLE=REQ-PARTICIPANT;RSVP=TRUE:urn:uuid:user%(SEQUENCE)02d """ organizer_template = """\ ORGANIZER;CN=User %(SEQUENCE)02d;EMAIL=user%(SEQUENCE)02d@example.com:urn:uuid:user%(SEQUENCE)02d ATTENDEE;CN=User 
%(SEQUENCE)02d;EMAIL=user%(SEQUENCE)02d@example.com;PARTSTAT=ACCEPTE
 D:urn:uuid:user%(SEQUENCE)02d
"""

summary_template = "Event %d"

rrules_template = (
    "RRULE:FREQ=DAILY;COUNT=5\n",
    "RRULE:FREQ=DAILY;BYDAY=MO,TU,WE,TH,FR\n",
    "RRULE:FREQ=YEARLY\n",
)


def makeVEVENT(recurring, attendees, date, hour, count):
    subs = {
        "UID": str(uuid.uuid4()),
        "START": "",
        "RRULE": "",
        "ORGANIZER": "",
        "ATTENDEES": "",
        "SUMMARY": summary_template % (count,)
    }

    if recurring:
        subs["RRULE"] = random.choice(rrules_template)

    if attendees:
        subs["ORGANIZER"] = organizer_template % {"SEQUENCE": 1}
        for ctr in range(2, random.randint(2, 10)):
            subs["ATTENDEES"] += attendee_template % {"SEQUENCE": ctr}

    subs["START"] = "%04d%02d%02dT%02d0000" % (date.year, date.month, date.day, hour)

    return vevent_template % subs


def argPath(path):
    fpath = os.path.expanduser(path)
    if not fpath.startswith("/"):
        fpath = os.path.join(pwd, fpath)
    return fpath


def usage(error_msg=None):
    if error_msg:
        print(error_msg)

    print("""Usage: fakecalendardata [options]

Options:
    -h             Print this help and exit
    -a             Percentage of events that should include attendees
    -c             Total number of events to generate
    -d             Directory to store separate .ics files into
    -r             Percentage of recurring events to create
    -p             Numbers of years in the past to start at
    -f             Number of years into the future to end at

Arguments:
    None

Description:
    This utility will generate fake iCalendar data either into a single .ics
    file or into multiple .ics files.
""") if error_msg: raise ValueError(error_msg) else: sys.exit(0) if __name__ == "__main__": outputDir = None totalCount = 10 percentRecurring = 20 percentWithAttendees = 10 yearsPast = 2 yearsFuture = 1 options, args = getopt.getopt(sys.argv[1:], "a:c:f:hd:p:r:", []) for option, value in options: if option == "-h": usage() elif option == "-a": percentWithAttendees = int(value) elif option == "-c": totalCount = int(value) elif option == "-d": outputDir = argPath(value) elif option == "-f": yearsFuture = int(value) elif option == "-p": yearsPast = int(value) elif option == "-r": percentRecurring = int(value) else: usage("Unrecognized option: %s" % (option,)) if outputDir and not os.path.isdir(outputDir): usage("Must specify a valid output directory.") # Process arguments if len(args) != 0: usage("No arguments allowed") pwd = os.getcwd() totalRecurring = (totalCount * percentRecurring) / 100 totalRecurringWithAttendees = (totalRecurring * percentWithAttendees) / 100 totalRecurringWithoutAttendees = totalRecurring - totalRecurringWithAttendees totalNonRecurring = totalCount - totalRecurring totalNonRecurringWithAttendees = (totalNonRecurring * percentWithAttendees) / 100 totalNonRecurringWithoutAttendees = totalNonRecurring - totalNonRecurringWithAttendees eventTypes = [] eventTypes.extend([(True, True) for _ignore in range(totalRecurringWithAttendees)]) eventTypes.extend([(True, False) for _ignore in range(totalRecurringWithoutAttendees)]) eventTypes.extend([(False, True) for _ignore in range(totalNonRecurringWithAttendees)]) eventTypes.extend([(False, False) for _ignore in range(totalNonRecurringWithoutAttendees)]) random.shuffle(eventTypes) totalYears = yearsPast + yearsFuture totalDays = totalYears * 365 startDate = datetime.date.today() - datetime.timedelta(days=yearsPast * 365) for i in range(len(eventTypes)): eventTypes[i] += ( startDate + datetime.timedelta(days=random.randint(0, totalDays)), random.randint(8, 18), ) vevents = [] for count, (recurring, 
attendees, date, hour) in enumerate(eventTypes): # print(recurring, attendees, date, hour) vevents.append(makeVEVENT(recurring, attendees, date, hour, count + 1)) print(calendar_template % {"VEVENTS": "".join(vevents)}) calendarserver-9.1+dfsg/contrib/tools/fix_calendar000077500000000000000000000062441315003562600224560ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2006-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## import getopt import hashlib import os import sys import xattr """ Fix xattrs on calendar collections and calendar object resources """ def usage(): print """Usage: xattr_fix CALENDARS Options: CALENDARS - a list of directories that are to be treated as calendars Description: This utility will add xattrs to the specified directories and their contents to make them appear to be calendars and calendar resources when used with iCal Server. It can be used to fix xattrs lost as a result of restoring an iCal Server document root without properly preserving the xattrs. 
""" def fixCalendar(path): # First fix the resourcetype & getctag on the calendar x = xattr.xattr(path) x["WebDAV:{DAV:}resourcetype"] = """ """ x["WebDAV:{http:%2F%2Fcalendarserver.org%2Fns%2F}getctag"] = """ Dummy Value """ # Now deal with contenttype on child .ics files for child in os.listdir(path): if not child.endswith(".ics"): continue fullpath = os.path.join(path, child) # getcontenttype x = xattr.xattr(fullpath) x["WebDAV:{DAV:}getcontenttype"] = """ text/calendar """ # md5 data = open(fullpath).read() x["WebDAV:{http:%2F%2Ftwistedmatrix.com%2Fxml_namespace%2Fdav%2F}getcontentmd5"] = """ %s """ % (hashlib.md5(data).hexdigest(),) if __name__ == "__main__": try: options, args = getopt.getopt(sys.argv[1:], "") # Process arguments if len(args) == 0: print "No arguments given." usage() raise ValueError pwd = os.getcwd() for arg in args: if not arg.startswith("/"): arg = os.path.join(pwd, arg) if arg.endswith("/"): arg = arg[:-1] if not os.path.exists(arg): print "Path does not exist: '%s'. Ignoring." % (arg,) continue if os.path.basename(arg) in ("inbox", "outbox", "dropbox",): print "Cannot be used on inbox, outbox or dropbox." continue fixCalendar(arg) except Exception, e: sys.exit(str(e)) calendarserver-9.1+dfsg/contrib/tools/flow.d000077500000000000000000000037241315003562600212300ustar00rootroot00000000000000#!/usr/sbin/dtrace -Zs /* * py_flow.d - snoop Python execution showing function flow. * Written for the Python DTrace provider. * * $Id: py_flow.d 51 2007-09-24 00:55:23Z brendan $ * * This traces Python activity from all Python programs on the system * running with Python provider support. * * USAGE: flow.d # hit Ctrl-C to end * * This watches Python function entries and returns, and indents child * function calls. 
* * FIELDS: * C CPU-id * TIME(us) Time since boot, us * FILE Filename that this function belongs to * FUNC Function name * * LEGEND: * -> function entry * <- function return * * WARNING: Watch the first column carefully, it prints the CPU-id. If it * changes, then it is very likely that the output has been shuffled. * * COPYRIGHT: Copyright (c) 2007 Brendan Gregg. * * CDDL HEADER START * * The contents of this file are subject to the terms of the * Common Development and Distribution License, Version 1.0 only * (the "License"). You may not use this file except in compliance * with the License. * * You can obtain a copy of the license at Docs/cddl1.txt * or http://www.opensolaris.org/os/licensing. * See the License for the specific language governing permissions * and limitations under the License. * * CDDL HEADER END * * 09-Sep-2007 Brendan Gregg Created this. * * This is a variation of py_flow.d that takes a pid. */ #pragma D option quiet #pragma D option switchrate=10 self int depth; dtrace:::BEGIN { printf("%3s %-16s %-16s -- %s\n", "C", "TIME(us)", "FILE", "FUNC"); } python*:::function-entry /pid == $1/ { printf("%3d %-16d %-16s %*s-> %s\n", cpu, timestamp / 1000, basename(copyinstr(arg0)), self->depth * 2, "", copyinstr(arg1)); self->depth++; } python*:::function-return /pid == $1/ { self->depth -= self->depth > 0 ? 1 : 0; printf("%3d %-16d %-16s %*s<- %s\n", cpu, timestamp / 1000, basename(copyinstr(arg0)), self->depth * 2, "", copyinstr(arg1)); }calendarserver-9.1+dfsg/contrib/tools/harpoon.py000077500000000000000000000037551315003562600221400ustar00rootroot00000000000000#!/ngs/app/ical/code/bin/python ## # Copyright (c) 2012-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function # Sends SIGTERM to any calendar server child process whose VSIZE exceeds 2GB # Only for use in a specific environment import os import signal CUTOFFBYTES = 2 * 1024 * 1024 * 1024 PROCDIR = "/proc" PYTHON = "/ngs/app/ical/code/bin/python" CMDARG = "LogID" serverProcessCount = 0 numKilled = 0 for pidString in sorted(os.listdir(PROCDIR)): try: pidNumber = int(pidString) except ValueError: # Not a process number continue pidDir = os.path.join(PROCDIR, pidString) statsFile = os.path.join(pidDir, "stat") with open(statsFile) as f: statLine = f.read() stats = statLine.split() vsize = int(stats[22]) cmdFile = os.path.join(pidDir, "cmdline") if os.path.exists(cmdFile): with open(cmdFile) as f: cmdLine = f.read().split('\x00') if cmdLine[0].startswith(PYTHON): for arg in cmdLine[1:]: if arg.startswith(CMDARG): break else: continue serverProcessCount += 1 if vsize > CUTOFFBYTES: print("Killing process %d with VSIZE %d" % (pidNumber, vsize)) os.kill(pidNumber, signal.SIGTERM) numKilled += 1 print("Examined %d server processes" % (serverProcessCount,)) print("Killed %d processes" % (numKilled,)) calendarserver-9.1+dfsg/contrib/tools/lldb_utils.py000066400000000000000000000145211315003562600226150ustar00rootroot00000000000000## # Copyright (c) 2015-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. 
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## """ Below are a bunch of useful functions that can be used inside of lldb to debug a Python process (this applies to cPython only). The steps are as follows: > lldb (lldb) target symbols add (lldb) command script import contrib/tools/lldb_utils.py Run commands using: (lldb) pybt ... (lldb) pybtall ... (lldb) pylocals ... or inside the python shell: (lldb) script Python Interactive Interpreter. To exit, type 'quit()', 'exit()' or Ctrl-D. >>> lldb_utils.pybt() ... >>> lldb_utils.pybtall() ... >>> lldb_utils.pylocals() ... pybt - generate a python function call back trace of the currently selected thread pybtall - generate a python function call back trace of all threads pylocals - generate a list of the name and values of all locals in the current python frame (only works when the currently selected frame is a Python call frame as found by the pybt command). """ import lldb # @UnresolvedImport def _toStr(obj, pystring_t): return obj.Cast(pystring_t).GetValueForExpressionPath("->ob_sval").summary def pybt(debugger=None, command=None, result=None, dict=None, thread=None): """ An lldb command that prints a Python call back trace for the specified thread or the currently selected thread. 
@param debugger: debugger to use @type debugger: L{lldb.SBDebugger} @param command: ignored @type command: ignored @param result: ignored @type result: ignored @param dict: ignored @type dict: ignored @param thread: the specific thread to target @type thread: L{lldb.SBThread} """ if debugger is None: debugger = lldb.debugger target = debugger.GetSelectedTarget() if not isinstance(thread, lldb.SBThread): thread = target.GetProcess().GetSelectedThread() pystring_t = target.FindFirstType("PyStringObject").GetPointerType() num_frames = thread.GetNumFrames() for i in range(num_frames - 1): fr = thread.GetFrameAtIndex(i) if fr.GetFunctionName() == "PyEval_EvalFrameEx": fr_next = thread.GetFrameAtIndex(i + 1) if fr_next.GetFunctionName() == "PyEval_EvalCodeEx": f = fr.GetValueForVariablePath("f") filename = _toStr(f.GetValueForExpressionPath("->f_code->co_filename"), pystring_t) name = _toStr(f.GetValueForExpressionPath("->f_code->co_name"), pystring_t) lineno = f.GetValueForExpressionPath("->f_lineno").GetValue() print("#{}: {} - {}:{}".format( fr.GetFrameID(), filename[1:-1] if filename else ".", name[1:-1] if name else ".", lineno if lineno else ".", )) def pybtall(debugger=None, command=None, result=None, dict=None): """ An lldb command that prints a Python call back trace for all threads. @param debugger: debugger to use @type debugger: L{lldb.SBDebugger} @param command: ignored @type command: ignored @param result: ignored @type result: ignored @param dict: ignored @type dict: ignored """ if debugger is None: debugger = lldb.debugger process = debugger.GetSelectedTarget().GetProcess() numthreads = process.GetNumThreads() for i in range(numthreads): thread = process.GetThreadAtIndex(i) print("----- Thread: {} -----".format(i + 1)) pybt(debugger=debugger, thread=thread) def pylocals(debugger=None, command=None, result=None, dict=None): """ An lldb command that prints a list of Python local variables for the currently selected frame. 
@param debugger: debugger to use @type debugger: L{lldb.SBDebugger} @param command: ignored @type command: ignored @param result: ignored @type result: ignored @param dict: ignored @type dict: ignored """ if debugger is None: debugger = lldb.debugger target = debugger.GetSelectedTarget() frame = target.GetProcess().GetSelectedThread().GetSelectedFrame() pystring_t = target.FindFirstType("PyStringObject").GetPointerType() pytuple_t = target.FindFirstType("PyTupleObject").GetPointerType() f = frame.GetValueForVariablePath("f") try: numlocals = int(f.GetValueForExpressionPath("->f_code->co_nlocals").GetValue()) except TypeError: print("Current frame is not a Python function") return print("Locals in frame #{}".format(frame.GetFrameID())) tstate = frame.EvaluateExpression("PyGILState_Ensure()") names = f.GetValueForExpressionPath("->f_code->co_varnames").Cast(pytuple_t) for i in range(numlocals): localname = _toStr(names.GetValueForExpressionPath("->ob_item[{}]".format(i)), pystring_t) local = frame.EvaluateExpression("PyString_AsString(PyObject_Repr(f->f_localsplus[{}]))".format(i)).summary localtype = frame.EvaluateExpression("PyString_AsString(PyObject_Repr(PyObject_Type(f->f_localsplus[{}])))".format(i)).summary print("{}: {} = {}".format( localtype[1:-1] if localtype else ".", localname[1:-1] if localname else ".", local[1:-1] if local else ".", )) frame.EvaluateExpression("PyGILState_Release({})".format(tstate)) CMDS = ("pybt", "pybtall", "pylocals",) def __lldb_init_module(debugger, dict): """ Register each command with lldb so they are available directly within lldb as well as within its Python script shell. 
@param debugger: debugger to use @type debugger: L{lldb.SBDebugger} @param dict: ignored @type dict: ignored """ for cmd in CMDS: debugger.HandleCommand( "command script add -f lldb_utils.{cmd} {cmd}".format(cmd=cmd) ) calendarserver-9.1+dfsg/contrib/tools/monitoranalysis.py000077500000000000000000000236041315003562600237200ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function import matplotlib.pyplot as plt import getopt import sys import os import datetime dataset = [] initialDate = None def analyze(fpath, noweekends, startDate=None, endDate=None, title=None): print("Analyzing data for %s" % (fpath,)) data = [] firstDate = None global initialDate with open(fpath) as f: for line in f: try: if line.startswith("2010/0"): date = line[:10] if startDate and date < startDate or endDate and date > endDate: continue if noweekends: dt = datetime.date(int(date[0:4]), int(date[5:7]), int(date[8:10])) if dt.weekday() > 4: continue digits = line[11:13] if digits in ("05", "06"): for _ignore in range(3): f.next() continue dtstamp = line[:19] if firstDate is None: firstDate = date.replace("/", "") if initialDate is None: initialDate = firstDate if "Listenq" in line: lqnon = line[len("2010/05/12 22:27:24 Listenq (ssl+non): "):].split("+", 1)[1] else: lqnon = line[len("2010/01/05 19:47:23 Listen queue: "):] lqnon = int(lqnon.split(" ", 1)[0]) line = 
f.next() cpu = int(line[len("CPU idle %: "):].split(" ", 1)[0]) line = f.next() if line.startswith("Memory"): line = f.next() reqs = int(float(line.split(" ", 1)[0])) line = f.next() resp = line[len("Response time: average "):].split(" ", 1)[0] resp = int(float(resp) / 10.0) * 10 if reqs <= 80: data.append((dtstamp, reqs, resp, lqnon, cpu)) # print("%s %d %d %d %d" % (dtstamp, reqs, resp, lqnon, cpu)) except StopIteration: break if not title: if startDate and endDate: title = "Between %s and %s" % (startDate, endDate,) elif startDate: title = "Since %s" % (startDate,) elif endDate: title = "Up to %s" % (endDate,) else: title = "Start at %s" % (firstDate,) dataset.append((title, data,)) print("Stored %d data points" % (len(data),)) def plotListenQBands(data, first, last, xlim, ylim): x1 = [] y1 = [] x2 = [] y2 = [] x3 = [] y3 = [] for _ignore_dt, reqs, resp, lq, _ignore_cpu in data: if lq == 0: x1.append(reqs) y1.append(resp) elif lq < 50: x2.append(reqs) y2.append(resp) else: x3.append(reqs) y3.append(resp) plt.plot(x1, y1, "b+", x2, y2, "g+", x3, y3, "y+") if first: plt.legend( ('ListenQ at zero', 'ListenQ < 50', 'ListenQ >= 50'), 'upper left', shadow=True, fancybox=True ) if last: plt.xlabel("Requests/second") plt.ylabel("Av. Response Time (ms)") plt.xlim(0, xlim) plt.ylim(0, ylim) def plotCPUBands(data, first, last, xlim, ylim): x = [[], [], [], []] y = [[], [], [], []] for _ignore_dt, reqs, resp, _ignore_lq, cpu in data: if cpu > 75: x[0].append(reqs) y[0].append(resp) elif cpu > 50: x[1].append(reqs) y[1].append(resp) elif cpu > 25: x[2].append(reqs) y[2].append(resp) else: x[3].append(reqs) y[3].append(resp) plt.plot( x[0], y[0], "b+", x[1], y[1], "g+", x[2], y[2], "y+", x[3], y[3], "m+", ) if first: plt.legend( ('CPU < 1/4', 'CPU < 1/2', 'CPU < 3/4', "CPU High"), 'upper left', shadow=True, fancybox=True ) if last: plt.xlabel("Requests/second") plt.ylabel("Av. 
Response Time (ms)") plt.xlim(0, xlim) plt.ylim(0, ylim) def plot(figure, noshow, nosave, pngDir, xlim, ylim): print("Plotting data") plt.figure(figure, figsize=(16, 5 * len(dataset))) nplots = len(dataset) subplot = nplots * 100 + 20 for ctr, item in enumerate(dataset): title, data = item if not title: title = "#%d" % (ctr + 1,) plt.subplot(subplot + 2 * ctr + 1) plotListenQBands(data, first=(ctr == 0), last=(ctr + 1 == len(dataset)), xlim=xlim, ylim=ylim) plt.title("ListenQ %s" % (title,)) plt.subplot(subplot + 2 * ctr + 2) plotCPUBands(data, first=(ctr == 0), last=(ctr + 1 == len(dataset)), xlim=xlim, ylim=ylim) plt.title("CPU %s" % (title,)) def argPath(path): fpath = os.path.expanduser(path) if not fpath.startswith("/"): fpath = os.path.join(pwd, fpath) return fpath def expandDate(date): return "%s/%s/%s" % (date[0:4], date[4:6], date[6:8],) def usage(error_msg=None): if error_msg: print(error_msg) print("""Usage: monitoranalysis [options] [FILE+] Options: -h Print this help and exit -d Directory to save PNGs to -s Directory to scan for data instead of FILEs --no-weekends Ignore data for Saturday and Sunday --no-show Do not show plots on screen --no-save Do not save plots to file --xlim x-axis limit [80] --ylim y-axim limit [4000] Arguments: FILE File names for the requests.log to analyze. A date range can be specified by append a comma, then a dash seperated pair of YYYYMMDD dates, e.g.: ~/request.log,20100614-20100619. Multiple ranges can be specified for multiple plots. Description: This utility will analyze the output of the request monitor tool and generate some pretty plots of data. 
""") if error_msg: raise ValueError(error_msg) else: sys.exit(0) if __name__ == "__main__": pngDir = None scanDir = None noweekends = False noshow = False nosave = False xlim = 80 ylim = 4000 options, args = getopt.getopt(sys.argv[1:], "hd:s:", ["no-weekends", "no-show", "no-save", "xlim=", "ylim="]) for option, value in options: if option == "-h": usage() elif option == "-d": pngDir = os.path.expanduser(value) elif option == "-s": scanDir = os.path.expanduser(value) elif option == "--no-show": noshow = True elif option == "--no-save": nosave = True elif option == "--no-weekends": noweekends = True elif option == "--xlim": xlim = int(value) elif option == "--ylim": ylim = int(value) else: usage("Unrecognized option: %s" % (option,)) if pngDir is None and scanDir: pngDir = scanDir if not nosave and not os.path.isdir(pngDir): usage("Must have a valid -d path for saving images") # Process arguments if len(args) == 0 and scanDir is None: usage("Must have arguments") elif scanDir and len(args) != 0: usage("No arguments allowed when scanning a directory") pwd = os.getcwd() if scanDir: fnames = os.listdir(scanDir) count = 1 for name in fnames: if name.startswith("request.log"): print("Found file: %s" % (os.path.join(scanDir, name),)) trailer = name[len("request.log"):] if trailer.startswith("."): trailer = trailer[1:] initialDate = None dataset = [] analyze(os.path.join(scanDir, name), noweekends) plot(count, noshow, nosave, pngDir, xlim, ylim) if not nosave: plt.savefig(os.path.expanduser(os.path.join(pngDir, "Monitor-%s" % (trailer,)))) count += 1 if not noshow: plt.show() else: for arg in args: if "," in arg: items = arg.split(",") arg = items[0] start = [] end = [] for daterange in items[1:]: splits = daterange.split("-") if len(splits) == 1: start.append(expandDate(splits[0])) end.append(None) elif len(splits) == 2: start.append(expandDate(splits[0])) end.append(expandDate(splits[1])) else: start.append(None) end.append(None) else: start = (None,) end = (None,) for i 
in range(len(start)): analyze(argPath(arg), noweekends, start[i], end[i]) plot(1, noshow, nosave, pngDir, xlim, ylim) if not nosave: plt.savefig(os.path.expanduser(os.path.join(pngDir, "Monitor-%s" % (initialDate,)))) if not noshow: plt.show() calendarserver-9.1+dfsg/contrib/tools/monitorsplit.py000077500000000000000000000072431315003562600232310ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2010-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
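The `FILE,YYYYMMDD-YYYYMMDD` argument convention that monitoranalysis accepts above can be sketched as follows. The function names `parse_ranges` and `expand_date` and the returned `(path, ranges)` shape are my own; the tool itself builds parallel `start`/`end` lists.

```python
def expand_date(date):
    # "20100614" -> "2010/06/14", matching the log's date prefix format
    return "%s/%s/%s" % (date[0:4], date[4:6], date[6:8])


def parse_ranges(arg):
    # hypothetical helper: returns (path, [(start, end), ...]) with None
    # standing in for an open or missing bound
    items = arg.split(",")
    path = items[0]
    ranges = []
    for daterange in items[1:]:
        splits = daterange.split("-")
        if len(splits) == 1:
            ranges.append((expand_date(splits[0]), None))
        elif len(splits) == 2:
            ranges.append((expand_date(splits[0]), expand_date(splits[1])))
        else:
            ranges.append((None, None))
    return path, ranges if ranges else [(None, None)]
```

Each range produces one analyze/plot pass, which is why a single file argument can yield multiple plots.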
##

from __future__ import print_function

import getopt
import sys
import os
from gzip import GzipFile
import datetime

outputFile = None
fileCount = 0
lastWeek = None


def split(fpath, outputDir):

    global outputFile, fileCount, lastWeek
    print("Splitting data for %s" % (fpath,))

    f = GzipFile(fpath) if fpath.endswith(".gz") else open(fpath)
    for line in f:

        if line.startswith("2010/0"):
            date = line[:10]
            date = date.replace("/", "")
            hours = line[11:13]
            dt = datetime.date(int(date[0:4]), int(date[4:6]), int(date[6:8]))
            currentWeek = dt.isocalendar()[1]
            if dt.weekday() == 0 and hours <= "06":
                currentWeek -= 1
            if lastWeek != currentWeek:
                if outputFile:
                    outputFile.close()
                outputFile = open(os.path.join(outputDir, "request.log.%s" % (date,)), "w")
                fileCount += 1
                lastWeek = currentWeek
                print("Changed to week of %s" % (date,))

            output = ["-----\n"]
            output.append(line)
            try:
                output.append(f.next())
                line = f.next()
                if line.startswith("Memory"):
                    line = f.next()
                output.append(line)
                output.append(f.next())
            except StopIteration:
                break
            outputFile.write("".join(output))

    f.close()


def argPath(path):
    fpath = os.path.expanduser(path)
    if not fpath.startswith("/"):
        fpath = os.path.join(pwd, fpath)
    return fpath


def expandDate(date):
    return "%s/%s/%s" % (date[0:4], date[4:6], date[6:8],)


def usage(error_msg=None):
    if error_msg:
        print(error_msg)

    print("""Usage: monitorsplit [options] FILE+
Options:
    -h          Print this help and exit
    -d          Directory to store split files in

Arguments:
    FILE      File names for the request.log data to split.

Description:
    This utility splits the output of the request monitor tool into
    separate files, one per week.
""") if error_msg: raise ValueError(error_msg) else: sys.exit(0) if __name__ == "__main__": outputDir = None options, args = getopt.getopt(sys.argv[1:], "hd:", []) for option, value in options: if option == "-h": usage() elif option == "-d": outputDir = argPath(value) else: usage("Unrecognized option: %s" % (option,)) if not outputDir or not os.path.isdir(outputDir): usage("Must specify a valid output directory.") # Process arguments if len(args) == 0: usage("Must have arguments") pwd = os.getcwd() for arg in args: split(argPath(arg), outputDir) print("Created %d files" % (fileCount,)) calendarserver-9.1+dfsg/contrib/tools/netstatus.py000077500000000000000000000062501315003562600225150ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2009-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function """ Tool to monitor network connection status and track queue sizes and long-lived connections. 
""" import commands import sys import time stateNames = ( ("SYN_SENT",), ("SYN_RCVD",), ("ESTABLISHED",), ("CLOSE_WAIT",), ("LAST_ACK",), ("FIN_WAIT_1", "FIN_WAIT1"), ("FIN_WAIT_2", "FIN_WAIT2"), ("CLOSING",), ("TIME_WAIT",), ) if __name__ == '__main__': pendingq = {} while True: timestamp = time.time() output = commands.getoutput("netstat -n") states = [(0, 0, 0)] * len(stateNames) newqs = {} for line in output.split("\n"): splits = line.split() if splits[0] not in ("tcp4", "tcp6", "tcp"): continue if not splits[3].endswith("8443") and not splits[3].endswith("8008"): continue for ctr, items in enumerate(stateNames): for item in items: if item in line: total, recvq, sendq = states[ctr] total += 1 if splits[1] != "0": recvq += 1 if splits[2] != "0": sendq += 1 if item == "ESTABLISHED": newqs[splits[4]] = (splits[1], splits[2],) states[ctr] = (total, recvq, sendq,) break oldqs = set(pendingq.keys()) for key in oldqs.difference(newqs.keys()): del pendingq[key] oldqs = pendingq.keys() for key in newqs.keys(): recv, sendq = newqs[key] if key in pendingq: pendingq[key] = (pendingq[key][0], recv, sendq,) else: pendingq[key] = (timestamp, recv, sendq,) print("------------------------") print("") print(time.asctime()) print("State Total RecvQ SendQ") for ctr, items in enumerate(stateNames): print("%11s %5d %5d %5d" % (items[0], states[ctr][0], states[ctr][1], states[ctr][2])) print("") print("Source IP Established (secs) RecvQ SendQ") for key, value in sorted(pendingq.iteritems(), key=lambda x: x[1]): startedat, recv, sendq = value deltatime = timestamp - startedat if deltatime > 0: print("%-20s %3d %5s %5s" % (key, deltatime, recv, sendq,)) sys.stdout.flush() time.sleep(5) calendarserver-9.1+dfsg/contrib/tools/pg_stats_analysis.py000066400000000000000000000165231315003562600242130ustar00rootroot00000000000000## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. 
# # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function import sqlparse import os from gzip import GzipFile import collections import tables import textwrap import sys import getopt def safePercent(x, y, multiplier=100): return ((multiplier * x) / y) if y else 0 def _is_literal(token): if token.ttype in sqlparse.tokens.Literal: return True if token.ttype == sqlparse.tokens.Keyword and token.value in (u'True', u'False'): return True return False def _substitute(expression, replacement): try: expression.tokens except AttributeError: return for i, token in enumerate(expression.tokens): if _is_literal(token): expression.tokens[i] = replacement elif token.is_whitespace(): expression.tokens[i] = sqlparse.sql.Token('Whitespace', ' ') else: _substitute(token, replacement) def sqlnormalize(sql): try: statements = sqlparse.parse(sql) except ValueError, e: print(e) # Replace any literal values with placeholders qmark = sqlparse.sql.Token('Operator', '?') _substitute(statements[0], qmark) return sqlparse.format(statements[0].to_unicode().encode('ascii')) COLUMN_userid = 0 COLUMN_dbid = 1 COLUMN_query = 2 COLUMN_calls = 3 COLUMN_total_time = 4 COLUMN_rows = 5 COLUMN_shared_blks_hit = 6 COLUMN_shared_blks_read = 7 COLUMN_shared_blks_written = 8 COLUMN_local_blks_hit = 9 COLUMN_local_blks_read = 10 COLUMN_local_blks_written = 11 COLUMN_temp_blks_read = 12 COLUMN_temp_blks_written = 13 def sqlStatementsReport(entries): dcount = collections.defaultdict(int) dtime = 
collections.defaultdict(float) drows = collections.defaultdict(int) for entry in entries: dcount[entry[COLUMN_query]] += int(entry[COLUMN_calls]) dtime[entry[COLUMN_query]] += float(entry[COLUMN_total_time]) drows[entry[COLUMN_query]] += int(entry[COLUMN_rows]) daverage = {} for k in dcount.keys(): daverage[k] = dtime[k] / dcount[k] counttotal = sum(dcount.values()) timetotal = sum(dtime.values()) averagetotal = sum(daverage.values()) for sorttype, sortedkeys in ( ("total time", [i[0] for i in sorted(dtime.iteritems(), key=lambda x:x[1], reverse=True)],), ("count", [i[0] for i in sorted(dcount.iteritems(), key=lambda x:x[1], reverse=True)],), ("average time", [i[0] for i in sorted(daverage.iteritems(), key=lambda x:x[1], reverse=True)],), ): table = tables.Table() table.addHeader(("Statement", "Count", "Count %", "Total Time", "Total Time %", "Av. Time", "Av. Time %", "Av. rows",)) table.setDefaultColumnFormats(( tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY), tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.2f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%f", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.2f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%f", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.2f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY), )) for key in sortedkeys: keylines = textwrap.wrap(key, 72, subsequent_indent=" ") table.addRow(( keylines[0], dcount[key], safePercent(dcount[key], counttotal, 100.0), dtime[key], safePercent(dtime[key], timetotal, 100.0), daverage[key], safePercent(daverage[key], averagetotal, 100.0), float(drows[key]) / dcount[key], )) for keyline in keylines[1:]: table.addRow(( keyline, None, None, None, None, None, None, None, )) print("Queries sorted by %s" % (sorttype,)) 
        table.printTable()
        print("")


def parseStats(logFilePath, donormalize=True, verbose=False):

    fpath = os.path.expanduser(logFilePath)
    if fpath.endswith(".gz"):
        f = GzipFile(fpath)
    else:
        f = open(fpath)

    # Punt past data
    for line in f:
        if line.startswith("---"):
            break

    entries = []
    for line in f:
        bits = line.split("|")
        if len(bits) > COLUMN_query:
            while bits[COLUMN_query].endswith("+"):
                line = f.next()
                newbits = line.split("|")
                bits[COLUMN_query] = bits[COLUMN_query][:-1] + newbits[COLUMN_query]
            pos = bits[COLUMN_query].find("BEGIN:VCALENDAR")
            if pos != -1:
                bits[COLUMN_query] = bits[COLUMN_query][:pos]
            if donormalize:
                bits[COLUMN_query] = sqlnormalize(bits[COLUMN_query].strip())
            if bits[COLUMN_query] not in ("BEGIN", "COMMIT", "ROLLBACK",) and bits[COLUMN_query].find("pg_catalog") == -1:
                bits = [bit.strip() for bit in bits]
                entries.append(bits)
                if verbose and divmod(len(entries), 1000)[1] == 0:
                    print("%d entries" % (len(entries),))

            # if float(bits[COLUMN_total_time]) > 1:
            #     print(bits[COLUMN_total_time], bits[COLUMN_query])

    f.close()

    if verbose:
        print("Read %d entries" % (len(entries),))

    sqlStatementsReport(entries)


def usage(error_msg=None):
    if error_msg:
        print(error_msg)

    print("""Usage: pg_stats_analysis.py [options] FILE
Options:
    -h              Print this help and exit
    -v              Generate progress information
    --no-normalize  Do not normalize SQL statements

Arguments:
    FILE      File name for pg_stat_statements output to analyze.

Description:
    This utility will analyze the output of a pg_stat_statements table.
""") if error_msg: raise ValueError(error_msg) else: sys.exit(0) if __name__ == '__main__': normalize = True verbose = False options, args = getopt.getopt(sys.argv[1:], "hv", ["no-normalize", ]) for option, value in options: if option == "-h": usage() elif option == "-v": verbose = True elif option == "--no-normalize": normalize = False else: usage("Unrecognized option: %s" % (option,)) # Process arguments if len(args) != 1: usage("Must have a file argument") parseStats(args[0], normalize, verbose) calendarserver-9.1+dfsg/contrib/tools/pgtrace.d000077500000000000000000000133551315003562600217070ustar00rootroot00000000000000#!/usr/sbin/dtrace -qs /* * USAGE : postgresql_stat.d * * DESCRIPTION: * Count postgres operations over time * ## # Copyright (c) 2011-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
## */ dtrace:::BEGIN { /* starting values */ txnstart = 0; txncommit = 0; txnabort = 0; querystart = 0; querydone = 0; lwacquire = 0; lwwait = 0; lockstart = 0; deadlock = 0; checkpt = 0; bufferread = 0; bufferflush = 0; bufferwrite = 0; walwrite = 0; txnoutstanding = 0; queryoutstanding = 0; txntime = 0; querytime = 0; txnstart_total = 0; txncommit_total = 0; txnabort_total = 0; querystart_total = 0; querydone_total = 0; lwacquire_total = 0; lwwait_total = 0; lockstart_total = 0; deadlock_total = 0; checkpt_total = 0; bufferread_total = 0; bufferflush_total = 0; bufferwrite_total = 0; walwrite_total = 0; seconds_total = 0; } postgresql*:::transaction-start { txnstart++; txnoutstanding++; self->tstxn = timestamp; } postgresql*:::transaction-commit /self->tstxn/ { txncommit++; txnoutstanding--; difftime = timestamp - self->tstxn; txntime += difftime; self->tstxn=0; } postgresql*:::transaction-abort /self->tstxn/ { txnabort++; txnoutstanding--; txntime += timestamp - self->tstxn; self->tstxn=0; } postgresql*:::query-start { querystart++; queryoutstanding++; self->tsq = timestamp; } postgresql*:::query-done /self->tsq/ { querydone++; queryoutstanding--; querytime += timestamp - self->tsq; self->tsq=0; } postgresql*:::lwlock-acquire /arg0 < 27/ { lwacquire++; } postgresql*:::lwlock-wait-start /arg0 < 27/ { lwwait++; } postgresql*:::lock-wait-start /arg0 < 27/ { lockstart++; } postgresql*:::deadlock-found /arg0 < 27/ { deadlock++; } postgresql*:::checkpoint-start { checkpt++; } postgresql*:::buffer-read-start { bufferread++; } postgresql*:::buffer-flush-start { bufferflush++; } postgresql*:::buffer-write-dirty-start { bufferwrite++; } postgresql*:::wal-buffer-write-dirty-start { walwrite++; } profile:::tick-1s /seconds_total % 25 == 0/ { printf("\n%34s %27s %13s %6s %6s %6s %20s %6s\n", "<-----------Transaction---------->", "<----------Query---------->", "<-LW Locks-->", "Lock", "Dead", "Check", "<------Buffers----->", "WAL"); printf("%6s %6s %6s %6s %6s %6s %6s %6s %6s 
%6s %6s %6s %6s %6s %6s %6s %6s %6s\n", "Start", "Commit", "Abort", "Active", "Time", "Start", "Done", "Active", "Time", "Acq", "Wait", "Acq", "lock", "point", "Read", "Flush", "Write", "Write"); } profile:::tick-1s /txnoutstanding < 0/ { txnoutstanding = 0; } profile:::tick-1s /queryoutstanding < 0/ { queryoutstanding = 0; } profile:::tick-1s { avtxtime = (txntime / ((txncommit + txnabort) != 0 ? (txncommit + txnabort) : 1))/1000000; avtxtime = (avtxtime >= 1000000) ? -1 : avtxtime; avquerytime = (querytime / (querydone != 0 ? querydone : 1))/1000; avquerytime = (avquerytime >= 1000000) ? -1 : avquerytime; printf("%6d %6d %6d %6d %6d %6d %6d %6d %6d %6d %6d %6d %6d %6d %6d %6d %6d %6d\n", txnstart, txncommit, txnabort, txnoutstanding, avtxtime, querystart, querydone, queryoutstanding, avquerytime, lwacquire, lwwait, lockstart, deadlock, checkpt, bufferread, bufferflush, bufferwrite, walwrite); txnstart_total += txnstart; txncommit_total += txncommit; txnabort_total += txnabort; querystart_total += querystart; querydone_total += querydone; lwacquire_total += lwacquire; lwwait_total += lwwait; lockstart_total += lockstart; deadlock_total += deadlock; checkpt_total += checkpt; bufferread_total += bufferread; bufferflush_total += bufferflush; bufferwrite_total += bufferwrite; walwrite_total += walwrite; seconds_total++; txnstart = 0; txncommit = 0; txnabort = 0; querystart = 0; querydone = 0; lwacquire = 0; lwwait = 0; lockstart = 0; deadlock = 0; checkpt = 0; bufferread = 0; bufferflush = 0; bufferwrite = 0; walwrite = 0; txntime = 0; querytime = 0; } dtrace:::END { printf("\nAverage count/sec\n"); printf("\n%20s %13s %13s %6s %6s %6s %20s %6s\n", "<----Transaction--->", "<---Query--->", "<-LW Locks-->", "Lock", "Dead", "Check", "<------Buffers----->", "WAL"); printf("%6s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s %6s\n", "Start", "Commit", "Abort", "Start", "Done", "Acq", "Wait", "Acq", "lock", "point", "Read", "Flush", "Write", "Write"); printf("%6d %6d %6d 
%6d %6d %6d %6d %6d %6d %6d %6d %6d %6d %6d\n", txnstart_total/seconds_total, txncommit_total/seconds_total, txnabort_total/seconds_total, querystart_total/seconds_total, querydone_total/seconds_total, lwacquire_total/seconds_total, lwwait_total/seconds_total, lockstart_total/seconds_total, deadlock_total/seconds_total, checkpt_total/seconds_total, bufferread_total/seconds_total, bufferflush_total/seconds_total, bufferwrite_total/seconds_total, walwrite_total/seconds_total); } calendarserver-9.1+dfsg/contrib/tools/protocolanalysis.py000077500000000000000000002350751315003562600241010ustar00rootroot00000000000000#!/usr/bin/env python ## # Copyright (c) 2009-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. 
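The dtrace probes above accumulate nanoseconds of latency and print millisecond averages, clamping any average too wide for the fixed-width column to -1. The same arithmetic in Python, as a sketch (the function name is mine):

```python
def avg_ms(total_ns, completed, width_limit=1000000):
    # average latency in ms over completed operations, guarding the zero
    # case; -1 flags an average too wide for the output column
    av = (total_ns // (completed if completed else 1)) // 1000000
    return -1 if av >= width_limit else av
```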
##

from __future__ import print_function

from gzip import GzipFile
import collections
import datetime
import getopt
import os
import socket
import sys
import tables
import traceback
import glob

from calendarserver.logAnalysis import getAdjustedMethodName, METHOD_PUT_ICS, \
    METHOD_PUT_ORGANIZER, METHOD_PUT_ATTENDEE, METHOD_PROPFIND_CALENDAR, \
    METHOD_POST_FREEBUSY, METHOD_POST_ORGANIZER, METHOD_POST_ISCHEDULE_FREEBUSY, \
    METHOD_POST_ISCHEDULE, METHOD_GET_DROPBOX, METHOD_PROPFIND_CALENDAR_HOME, \
    METHOD_PROPFIND_CACHED_CALENDAR_HOME, METHOD_PROPFIND_ADDRESSBOOK_HOME, \
    METHOD_PROPFIND_CACHED_ADDRESSBOOK_HOME, METHOD_PROPFIND_PRINCIPALS, \
    METHOD_PROPFIND_CACHED_PRINCIPALS


def safePercent(x, y, multiplier=100):
    return ((multiplier * x) / y) if y else 0


responseCountBuckets = (
    (10, "(a):0-10"),
    (100, "(b):11-100"),
    (250, "(c):101-250"),
    (500, "(d):251-500"),
    (1000, "(e):501-1000"),
    (2500, "(f):1001-2500"),
    (5000, "(g):2501-5000"),
    (None, "(h):5001+"),
)

requestSizeBuckets = (
    (1000, "(a):0-1KB"),
    (2500, "(b):1-2.5KB"),
    (5000, "(c):2.5-5KB"),
    (10000, "(d):5-10KB"),
    (100000, "(e):10-100KB"),
    (1000000, "(f):100KB-1MB"),
    (10000000, "(g):1-10MB"),
    (100000000, "(h):10-100MB"),
    (None, "(i):100+MB"),
)

responseSizeBuckets = (
    (1000, "(a):0-1KB"),
    (2500, "(b):1-2.5KB"),
    (5000, "(c):2.5-5KB"),
    (10000, "(d):5-10KB"),
    (100000, "(e):10-100KB"),
    (1000000, "(f):100KB-1MB"),
    (10000000, "(g):1-10MB"),
    (None, "(h):10+MB"),
)

requestTimeBuckets = (
    (10, "(a):0-10ms"),
    (50, "(b):10-50ms"),
    (100, "(c):50-100ms"),
    (250, "(d):100-250ms"),
    (500, "(e):250-500ms"),
    (1000, "(f):500ms-1s"),
    (5000, "(g):1-5s"),
    (10000, "(h):5-10s"),
    (30000, "(i):10-30s"),
    (60000, "(j):30-60s"),
    (120000, "(k):60-120s"),
    (None, "(l):120s+"),
)

userInteractionCountBuckets = (
    (0, "(a):0"),
    (1, "(b):1"),
    (2, "(c):2"),
    (3, "(d):3"),
    (4, "(e):4"),
    (5, "(f):5"),
    (10, "(g):6-10"),
    (15, "(h):11-15"),
    (20, "(i):16-20"),
    (30, "(j):21-30"),
    (50, "(k):31-50"),
    (None, "(l):51+"),
)

httpMethods = set((
    "ACL",
    "BIND",
    "CONNECT",
"COPY", "DELETE", "GET", "HEAD", "MKCALENDAR", "MKCOL", "MOVE", "OPTIONS", "POST", "PROPFIND", "PROPPATCH", "PUT", "REPORT", "SEARCH", )) # 401s METHOD_401 = "Z 401s" class CalendarServerLogAnalyzer(object): """ @ivar resolutionMinutes: The number of minutes long a statistics bucket will be. For example, if this is C{5}, then all data points less than 5 will be placed into the first bucket; data points greater than or equal to 5 and less than 10 will be placed into the second bucket, and so on. @ivar timeBucketCount: The number of statistics buckets of length C{resolutionMinutes} needed to hold one day of data. @ivar hourlyTotals: A C{list} of length C{timeBucketCount} holding ... """ class LogLine(object): def __init__(self, ipaddr, userid, logDateTime, logTime, method, uri, status, reqbytes, referer, client, extended): self.ipaddr = ipaddr self.userid = userid self.logDateTime = logDateTime self.logTime = logTime self.method = method self.uri = uri self.status = status self.bytes = reqbytes self.referer = referer self.client = client self.extended = extended def __init__( self, startHour=None, endHour=None, utcoffset=0, resolutionMinutes=60, filterByUser=None, filterByClient=None, ignoreNonHTTPMethods=True, separate401s=True, ): self.startHour = startHour self.endHour = endHour self.utcoffset = utcoffset self.adjustHour = None self.filterByUser = filterByUser self.filterByClient = filterByClient self.ignoreNonHTTPMethods = ignoreNonHTTPMethods self.separate401s = separate401s self.startTime = datetime.datetime.now().replace(microsecond=0) self.host = socket.getfqdn() self.startLog = "" self.endLog = "" self.resolutionMinutes = resolutionMinutes self.timeBucketCount = (24 * 60) / resolutionMinutes self.loggedUTCOffset = None self.hourlyTotals = [[0, 0, 0, collections.defaultdict(int), 0.0, ] for _ignore in xrange(self.timeBucketCount)] self.clientTotals = collections.defaultdict(lambda: [0, set(), set()]) self.clientIDMap = {} self.clientByMethodCount = 
collections.defaultdict(lambda: collections.defaultdict(int)) self.clientIDByMethodCount = {} self.clientByMethodTotalTime = collections.defaultdict(lambda: collections.defaultdict(float)) self.clientByMethodAveragedTime = collections.defaultdict(lambda: collections.defaultdict(float)) self.statusByMethodCount = collections.defaultdict(lambda: collections.defaultdict(int)) self.hourlyByMethodCount = collections.defaultdict(lambda: [0, ] * self.timeBucketCount) self.hourlyByOKMethodCount = collections.defaultdict(lambda: [0, ] * self.timeBucketCount) self.hourlyByMethodTime = collections.defaultdict(lambda: [0.0, ] * self.timeBucketCount) self.hourlyByOKMethodTime = collections.defaultdict(lambda: [0.0, ] * self.timeBucketCount) self.averagedHourlyByMethodTime = collections.defaultdict(lambda: [0.0, ] * self.timeBucketCount) self.averagedHourlyByOKMethodTime = collections.defaultdict(lambda: [0.0, ] * self.timeBucketCount) self.hourlyPropfindByResponseCount = collections.defaultdict(lambda: [0, ] * self.timeBucketCount) self.hourlyByStatus = collections.defaultdict(lambda: [0, ] * self.timeBucketCount) self.hourlyByRecipientCount = collections.defaultdict(lambda: [[0, 0] for _ignore in xrange(self.timeBucketCount)]) self.averagedHourlyByRecipientCount = collections.defaultdict(lambda: [0, ] * self.timeBucketCount) self.responseTimeVsQueueDepth = collections.defaultdict(lambda: [0, 0.0, ]) self.averagedResponseTimeVsQueueDepth = collections.defaultdict(int) self.instanceCount = collections.defaultdict(int) self.requestSizeByBucket = collections.defaultdict(lambda: [0, ] * self.timeBucketCount) self.responseSizeByBucket = collections.defaultdict(lambda: [0, ] * self.timeBucketCount) self.responseCountByMethod = collections.defaultdict(lambda: [0, 0]) self.requestTimeByBucket = collections.defaultdict(lambda: [0, ] * self.timeBucketCount) self.requestURI = collections.defaultdict(int) self.userWeights = collections.defaultdict(int) self.userCounts = 
collections.defaultdict(int) self.userResponseTimes = collections.defaultdict(float) self.newEvents = 0 self.newInvites = 0 self.updateEvents = 0 self.updateInvites = 0 self.attendeeInvites = 0 self.otherUserCalendarRequests = {} self.currentLine = None self.linesRead = collections.defaultdict(int) def analyzeLogFile(self, logFilePath): fpath = os.path.expanduser(logFilePath) if fpath.endswith(".gz"): f = GzipFile(fpath) else: f = open(fpath) effectiveStart = self.startHour if self.startHour is not None else 0 effectiveEnd = self.endHour if self.endHour is not None else 23 self.maxIndex = (effectiveEnd - effectiveStart + 1) * 60 / self.resolutionMinutes try: lineCtr = 0 for line in f: lineCtr += 1 if lineCtr <= self.linesRead[logFilePath]: continue self.linesRead[logFilePath] += 1 if line.startswith("Log"): continue try: self.parseLine(line) except: print("Could not parse line:\n%s" % (line,)) continue # Filter method if self.ignoreNonHTTPMethods and not self.currentLine.method.startswith("REPORT(") and not self.currentLine.method.startswith("POST(") and self.currentLine.method not in httpMethods: self.currentLine.method = "???" 
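The hour-range handling in `analyzeLogFile` maps each log line's HH:MM into a bucket of `resolutionMinutes` width, measured from an adjustable start hour and wrapping past midnight. A standalone sketch of that computation (the function name is mine):

```python
def time_bucket(log_hour, log_minute, adjust_hour, resolution_minutes=60):
    # offset from the adjusted start hour, wrapping past midnight,
    # then integer division into resolution_minutes-wide buckets
    hour_from_start = log_hour - adjust_hour
    if hour_from_start < 0:
        hour_from_start += 24
    return (hour_from_start * 60 + log_minute) // resolution_minutes
```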
# Do hour ranges logHour = int(self.currentLine.logTime[0:2]) logMinute = int(self.currentLine.logTime[3:5]) if self.adjustHour is None: self.adjustHour = self.startHour if self.startHour is not None else logHour hourFromStart = logHour - self.adjustHour if hourFromStart < 0: hourFromStart += 24 if self.startHour is not None and logHour < self.startHour: continue elif self.endHour is not None and logHour > self.endHour: continue timeBucketIndex = (hourFromStart * 60 + logMinute) / self.resolutionMinutes if not self.startLog: self.startLog = self.currentLine.logDateTime self.endLog = self.currentLine.logDateTime # Filter on user id if self.filterByUser and self.currentLine.userid != self.filterByUser: continue # Filter on client adjustedClient = self.getClientAdjustedName() if self.filterByClient and adjustedClient.find(self.filterByClient) == -1: continue # Get some useful values is503 = self.currentLine.method == "???" isOK = self.currentLine.status / 100 == 2 adjustedMethod = self.getAdjustedMethodName() instance = self.currentLine.extended.get("i", "") responseTime = float(self.currentLine.extended.get("t", 0.0)) queueDepth = int(self.currentLine.extended.get("or", 0)) contentLength = int(self.currentLine.extended.get("cl", -1)) rcount = int(self.currentLine.extended.get("responses", -1)) if rcount == -1: rcount = int(self.currentLine.extended.get("rcount", -1)) # Main summary self.hourlyTotals[timeBucketIndex][0] += 1 self.hourlyTotals[timeBucketIndex][1] += 1 if is503 else 0 self.hourlyTotals[timeBucketIndex][2] += queueDepth self.hourlyTotals[timeBucketIndex][3][instance] = max(self.hourlyTotals[timeBucketIndex][3][instance], queueDepth) self.hourlyTotals[timeBucketIndex][4] += responseTime # Client analysis if not is503: self.clientTotals[" TOTAL"][0] += 1 self.clientTotals[" TOTAL"][1].add(self.currentLine.userid) self.clientTotals[" TOTAL"][2].add(self.currentLine.ipaddr) self.clientTotals[adjustedClient][0] += 1 
                    self.clientTotals[adjustedClient][1].add(self.currentLine.userid)
                    self.clientTotals[adjustedClient][2].add(self.currentLine.ipaddr)

                    self.clientByMethodCount[" TOTAL"][" TOTAL"] += 1
                    self.clientByMethodCount[" TOTAL"][adjustedMethod] += 1
                    self.clientByMethodCount[adjustedClient][" TOTAL"] += 1
                    self.clientByMethodCount[adjustedClient][adjustedMethod] += 1

                    self.clientByMethodTotalTime[" TOTAL"][" TOTAL"] += responseTime
                    self.clientByMethodTotalTime[" TOTAL"][adjustedMethod] += responseTime
                    self.clientByMethodTotalTime[adjustedClient][" TOTAL"] += responseTime
                    self.clientByMethodTotalTime[adjustedClient][adjustedMethod] += responseTime

                    self.statusByMethodCount[" TOTAL"][" TOTAL"] += 1
                    self.statusByMethodCount[" TOTAL"][adjustedMethod] += 1
                    self.statusByMethodCount["2xx" if isOK else "%d" % (self.currentLine.status,)][" TOTAL"] += 1
                    self.statusByMethodCount["2xx" if isOK else "%d" % (self.currentLine.status,)][adjustedMethod] += 1

                # Method counts, timing and status
                self.hourlyByMethodCount[" TOTAL"][timeBucketIndex] += 1
                self.hourlyByMethodCount[adjustedMethod][timeBucketIndex] += 1
                self.hourlyByOKMethodCount[" TOTAL"][timeBucketIndex] += 1
                self.hourlyByOKMethodCount[adjustedMethod if isOK else METHOD_401][timeBucketIndex] += 1
                self.hourlyByMethodTime[" TOTAL"][timeBucketIndex] += responseTime
                self.hourlyByMethodTime[adjustedMethod][timeBucketIndex] += responseTime
                self.hourlyByOKMethodTime[" TOTAL"][timeBucketIndex] += responseTime
                self.hourlyByOKMethodTime[adjustedMethod if isOK else METHOD_401][timeBucketIndex] += responseTime
                self.hourlyByStatus[" TOTAL"][timeBucketIndex] += 1
                self.hourlyByStatus[self.currentLine.status][timeBucketIndex] += 1

                if self.currentLine.status == 201:
                    if adjustedMethod == METHOD_PUT_ICS:
                        self.newEvents += 1
                    elif adjustedMethod == METHOD_PUT_ORGANIZER:
                        self.newInvites += 1
                elif isOK:
                    if adjustedMethod == METHOD_PUT_ICS:
                        self.updateEvents += 1
                    elif adjustedMethod == METHOD_PUT_ORGANIZER:
                        self.updateInvites += 1
                    elif adjustedMethod == METHOD_PUT_ATTENDEE:
                        self.attendeeInvites += 1

                # Cache analysis
                if adjustedMethod == METHOD_PROPFIND_CALENDAR and self.currentLine.status == 207:
                    responses = int(self.currentLine.extended.get("responses", 0))
                    self.hourlyPropfindByResponseCount[" TOTAL"][timeBucketIndex] += 1
                    self.hourlyPropfindByResponseCount[self.getCountBucket(responses, responseCountBuckets)][timeBucketIndex] += 1

                # Scheduling analysis
                if adjustedMethod == METHOD_POST_FREEBUSY:
                    recipients = int(self.currentLine.extended.get("recipients", 0))
                    self.hourlyByRecipientCount["Freebusy One Offs" if recipients == 1 else "Freebusy Average"][timeBucketIndex][0] += 1
                    self.hourlyByRecipientCount["Freebusy One Offs" if recipients == 1 else "Freebusy Average"][timeBucketIndex][1] += recipients
                    self.hourlyByRecipientCount["Freebusy Max."][timeBucketIndex][0] = max(self.hourlyByRecipientCount["Freebusy Max."][timeBucketIndex][0], recipients)
                elif adjustedMethod == METHOD_POST_ORGANIZER:
                    recipients = int(self.currentLine.extended.get("itip.request", 0)) + int(self.currentLine.extended.get("itip.cancel", 0))
                    self.hourlyByRecipientCount["iTIP Average"][timeBucketIndex][0] += 1
                    self.hourlyByRecipientCount["iTIP Average"][timeBucketIndex][1] += recipients
                    self.hourlyByRecipientCount["iTIP Max."][timeBucketIndex][0] = max(self.hourlyByRecipientCount["iTIP Max."][timeBucketIndex][0], recipients)
                elif adjustedMethod == METHOD_PUT_ORGANIZER:
                    recipients = int(self.currentLine.extended["itip.requests"])
                    self.hourlyByRecipientCount["iTIP Average"][timeBucketIndex][0] += 1
                    self.hourlyByRecipientCount["iTIP Average"][timeBucketIndex][1] += recipients
                    self.hourlyByRecipientCount["iTIP Max."][timeBucketIndex][0] = max(self.hourlyByRecipientCount["iTIP Max."][timeBucketIndex][0], recipients)
                elif adjustedMethod == METHOD_POST_ISCHEDULE_FREEBUSY:
                    recipients = int(self.currentLine.extended.get("recipients", 0))
                    self.hourlyByRecipientCount["iFreebusy One Offs" if recipients == 1 else "iFreebusy Average"][timeBucketIndex][0] += 1
                    self.hourlyByRecipientCount["iFreebusy One Offs" if recipients == 1 else "iFreebusy Average"][timeBucketIndex][1] += recipients
                    self.hourlyByRecipientCount["iFreebusy Max."][timeBucketIndex][0] = max(self.hourlyByRecipientCount["iFreebusy Max."][timeBucketIndex][0], recipients)
                elif adjustedMethod == METHOD_POST_ISCHEDULE:
                    recipients = int(self.currentLine.extended.get("recipients", 0)) + int(self.currentLine.extended.get("itip.request", 0)) + int(self.currentLine.extended.get("itip.cancel", 0))
                    self.hourlyByRecipientCount["iSchedule Average"][timeBucketIndex][0] += 1
                    self.hourlyByRecipientCount["iSchedule Average"][timeBucketIndex][1] += recipients
                    self.hourlyByRecipientCount["iSchedule Max."][timeBucketIndex][0] = max(self.hourlyByRecipientCount["iSchedule Max."][timeBucketIndex][0], recipients)

                # Queue depth analysis
                self.responseTimeVsQueueDepth[queueDepth][0] += 1
                self.responseTimeVsQueueDepth[queueDepth][1] += responseTime

                # Instance counts
                self.instanceCount[instance] += 1

                # Request/response size analysis
                if contentLength != -1:
                    self.requestSizeByBucket[" TOTAL"][timeBucketIndex] += 1
                    self.requestSizeByBucket[self.getCountBucket(contentLength, requestSizeBuckets)][timeBucketIndex] += 1
                if adjustedMethod != METHOD_GET_DROPBOX:
                    self.responseSizeByBucket[" TOTAL"][timeBucketIndex] += 1
                    self.responseSizeByBucket[self.getCountBucket(self.currentLine.bytes, responseSizeBuckets)][timeBucketIndex] += 1

                if rcount != -1:
                    self.responseCountByMethod[" TOTAL"][0] += rcount
                    self.responseCountByMethod[" TOTAL"][1] += 1
                    self.responseCountByMethod[adjustedMethod][0] += rcount
                    self.responseCountByMethod[adjustedMethod][1] += 1

                # Request time analysis
                self.requestTimeByBucket[" TOTAL"][timeBucketIndex] += 1
                self.requestTimeByBucket[self.getCountBucket(responseTime, requestTimeBuckets)][timeBucketIndex] += 1

                # Request URI analysis
                self.requestURI[self.currentLine.uri] += 1

                self.userAnalysis(adjustedMethod)

                # Look at interactions between different users
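# Aside: the getCountBucket(...) calls above map raw counts (response counts,
# content lengths, response times) into labeled histogram buckets. A minimal
# standalone sketch of that lookup; the bucket limits and labels here are
# invented for illustration, not the module's real bucket tables:

```python
def count_bucket(count, buckets):
    """Return the label of the first bucket whose limit covers count.

    buckets is a sequence of (limit, label) pairs in ascending order,
    terminated by a (None, label) catch-all.
    """
    for limit, key in buckets:
        if limit is None:
            break
        if count <= limit:
            return key
    return key  # falls through to the catch-all label


example_buckets = [
    (10, "0-10"),
    (100, "11-100"),
    (None, "101+"),
]

assert count_bucket(5, example_buckets) == "0-10"
assert count_bucket(64, example_buckets) == "11-100"
assert count_bucket(5000, example_buckets) == "101+"
```

# Like the original getCountBucket, the final `return key` deliberately reuses
# the loop variable left over from the catch-all entry.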
                self.userInteractionAnalysis(adjustedMethod)
        except Exception:
            print("Failed to process line:\n%s" % (line,))
            raise
        finally:
            f.close()

        # Average various items
        self.averagedHourlyByMethodTime.clear()
        for method, hours in self.hourlyByMethodTime.iteritems():
            counts = self.hourlyByMethodCount[method]
            for hour in xrange(self.timeBucketCount):
                if counts[hour]:
                    newValue = hours[hour] / counts[hour]
                else:
                    newValue = hours[hour]
                self.averagedHourlyByMethodTime[method][hour] = newValue

        self.averagedHourlyByOKMethodTime.clear()
        for method, hours in self.hourlyByOKMethodTime.iteritems():
            counts = self.hourlyByOKMethodCount[method]
            for hour in xrange(self.timeBucketCount):
                if counts[hour]:
                    newValue = hours[hour] / counts[hour]
                else:
                    newValue = hours[hour]
                self.averagedHourlyByOKMethodTime[method][hour] = newValue

        self.averagedResponseTimeVsQueueDepth.clear()
        for k, v in self.responseTimeVsQueueDepth.iteritems():
            self.averagedResponseTimeVsQueueDepth[k] = (v[0], v[1] / v[0],)

        self.averagedHourlyByRecipientCount.clear()
        for method, value in self.hourlyByRecipientCount.iteritems():
            for hour in xrange(self.timeBucketCount):
                if method in ("Freebusy Average", "iTIP Average", "iFreebusy Average", "iSchedule Average",):
                    newValue = ((1.0 * value[hour][1]) / value[hour][0]) if value[hour][0] != 0 else 0
                else:
                    newValue = value[hour][0]
                self.averagedHourlyByRecipientCount[method][hour] = newValue

        averaged = collections.defaultdict(int)
        for key, value in self.responseCountByMethod.iteritems():
            averaged[key] = (value[0] / value[1]) if value[1] else 0
        self.averageResponseCountByMethod = averaged

        for client, data in self.clientByMethodTotalTime.iteritems():
            for method, totaltime in data.iteritems():
                count = self.clientByMethodCount[client][method]
                self.clientByMethodAveragedTime[client][method] = totaltime / count if count else 0

        self.clientIDMap = {}
        for ctr, client in enumerate(sorted(self.clientByMethodCount.keys(), key=lambda x: x.lower())):
            if ctr == 0:
                self.clientIDMap[client] = "Total"
            else:
                self.clientIDMap[client] = "ID-%02d" % (ctr,)

        self.clientIDByMethodCount = {}
        for client, data in self.clientByMethodCount.iteritems():
            self.clientIDByMethodCount[self.clientIDMap[client]] = data

    def parseLine(self, line):

        startPos = line.find("- ")
        endPos = line.find(" [")
        ipaddr = line[0:startPos - 2]
        userid = line[startPos + 2:endPos]

        startPos = endPos + 1
        logDateTime = line[startPos + 1:startPos + 21]
        logTime = line[startPos + 13:startPos + 21]
        if self.loggedUTCOffset is None:
            self.loggedUTCOffset = int(line[startPos + 22:startPos + 25])

        startPos = line.find(']', startPos + 21) + 3
        endPos = line.find(' ', startPos)
        if line[startPos] == '?':
            method = "???"
            uri = ""
            startPos += 5
        else:
            method = line[startPos:endPos]
            startPos = endPos + 1
            endPos = line.find(" HTTP/", startPos)
            uri = line[startPos:endPos]
            startPos = endPos + 11

        status = int(line[startPos:startPos + 3])
        startPos += 4

        endPos = line.find(' ', startPos)
        reqbytes = int(line[startPos:endPos])

        startPos = endPos + 2
        endPos = line.find('"', startPos)
        # Handle "attacks" where double-quotes may appear in the string
        if line[endPos + 1] != ' ':
            endPos = line.find('"', endPos + 1)
        referrer = line[startPos:endPos]

        startPos = endPos + 3
        endPos = line.find('"', startPos)
        client = line[startPos:endPos]

        startPos = endPos + 2
        if line[startPos] == '[':
            extended = {}
            startPos += 1
            endPos = line.find(' ', startPos)
            extended["t"] = line[startPos:endPos]
            startPos = endPos + 6
            endPos = line.find(' ', startPos)
            extended["i"] = line[startPos:endPos]
            startPos = endPos + 1
            endPos = line.find(']', startPos)
            extended["or"] = line[startPos:endPos]
        else:
            items = line[startPos:].split()
            extended = dict([item.split('=') for item in items if item.find("=") != -1])

        self.currentLine = CalendarServerLogAnalyzer.LogLine(ipaddr, userid, logDateTime, logTime, method, uri, status, reqbytes, referrer, client, extended)

    def getClientAdjustedName(self):

        versionClients = (
            "iCal/",
            "iPhone/",
            "iOS/",
            "CalendarAgent",
            "Calendar/",
            "CoreDAV/",
            "Safari/",
            "dataaccessd",
            "curl/",
            "DAVKit",
        )
        for client in versionClients:
            index = self.currentLine.client.find(client)
            if index != -1:
                endex = self.currentLine.client.find(' ', index)
                if endex == -1:
                    endex = len(self.currentLine.client)
                name = self.currentLine.client[index:endex]
                return name

        index = self.currentLine.client.find("calendarclient")
        if index != -1:
            code = self.currentLine.client[14]
            if code == 'A':
                return "iCal/3 sim"
            elif code == 'B':
                return "iPhone/3 sim"
            else:
                return "Simulator"

        quickclients = (
            ("CardDAVPlugin/", "CardDAVPlugin"),
            ("Address%20Book/", "AddressBook"),
            ("AddressBook/", "AddressBook"),
            ("Mail/", "Mail"),
            ("iChat/", "iChat"),
            ("InterMapper/", "InterMapper"),
        )
        for quick, result in quickclients:
            index = self.currentLine.client.find(quick)
            if index != -1:
                return result

        return self.currentLine.client[:20]

    def getAdjustedMethodName(self):
        return getAdjustedMethodName(self.currentLine.extended, self.currentLine.method, self.currentLine.uri)

    def getCountBucket(self, count, buckets):
        for limit, key in buckets:
            if limit is None:
                break
            if count <= limit:
                return key
        return key

    # Determine method weight 1 - 10
    # weighting = {
    #     "ACL": lambda x: 3,
    #     "DELETE": lambda x: 5,
    #     "GET": lambda x: 3 * (1 + x.bytes / (1024 * 1024)),
    #     METHOD_GET_DROPBOX: lambda x: 3 * (1 + x.bytes / (1024 * 1024)),
    #     "HEAD": lambda x: 1,
    #     "MKCALENDAR": lambda x: 2,
    #     "MKCOL": lambda x: 2,
    #     "MOVE": lambda x: 3,
    #     "OPTIONS": lambda x: 1,
    #     METHOD_POST_FREEBUSY: lambda x: 5 * int(x.extended.get("recipients", 1)),
    #     METHOD_PUT_ORGANIZER: lambda x: 5 * int(x.extended.get("recipients", 1)),
    #     METHOD_PUT_ATTENDEE: lambda x: 5 * int(x.extended.get("recipients", 1)),
    #     "PROPFIND": lambda x: 3 * int(x.extended.get("responses", 1)),
    #     METHOD_PROPFIND_CALENDAR: lambda x: 5 * (int(math.log10(float(x.extended.get("responses", 1)))) + 1),
    #     METHOD_PROPFIND_CALENDAR_HOME: lambda x: 5 * (int(math.log10(float(x.extended.get("responses", 1)))) + 1),
    #     "PROPFIND inbox": lambda x: 5 * (int(math.log10(float(x.extended.get("responses", 1)))) + 1),
    #     METHOD_PROPFIND_PRINCIPALS: lambda x: 5 * (int(math.log10(float(x.extended.get("responses", 1)))) + 1),
    #     METHOD_PROPFIND_CACHED_CALENDAR_HOME: lambda x: 2,
    #     METHOD_PROPFIND_CACHED_PRINCIPALS: lambda x: 2,
    #     "PROPPATCH": lambda x: 4,
    #     METHOD_PROPPATCH_CALENDAR: lambda x: 8,
    #     METHOD_PUT_ICS: lambda x: 4,
    #     METHOD_PUT_ORGANIZER: lambda x: 8,
    #     METHOD_PUT_ATTENDEE: lambda x: 6,
    #     METHOD_PUT_DROPBOX: lambda x: 10,
    #     "REPORT": lambda x: 5,
    #     METHOD_REPORT_CALENDAR_MULTIGET: lambda x: 5 * int(x.extended.get("rcount", 1)),
    #     METHOD_REPORT_CALENDAR_QUERY: lambda x: 4 * int(x.extended.get("responses", 1)),
    #     METHOD_REPORT_EXPAND_P: lambda x: 5,
    #     "REPORT principal-match": lambda x: 5,
    # }

    weighting = {}

    def userAnalysis(self, adjustedMethod):

        if self.currentLine.userid == "-":
            return
        # try:
        #     self.userWeights[self.currentLine.userid] += self.weighting[adjustedMethod](self.currentLine)
        # except KeyError:
        #     self.userWeights[self.currentLine.userid] += 5
        self.userWeights[self.currentLine.userid] += 1

        responseTime = float(self.currentLine.extended.get("t", 0.0))
        self.userCounts["%s:%s" % (self.currentLine.userid, self.getClientAdjustedName(),)] += 1
        self.userResponseTimes["%s:%s" % (self.currentLine.userid, self.getClientAdjustedName(),)] += responseTime

    def summarizeUserInteraction(self, adjustedMethod):
        summary = {}
        otherData = self.otherUserCalendarRequests.get(adjustedMethod, {})
        for _ignore_user, others in otherData.iteritems():
            bucket = self.getCountBucket(len(others), userInteractionCountBuckets)
            summary[bucket] = summary.get(bucket, 0) + 1
        return summary

    def userInteractionAnalysis(self, adjustedMethod):
        """
        If the current line is a record of one user accessing another user's
        data, update C{self.otherUserCalendarRequests} to account for it.
        """
        forMethod = self.otherUserCalendarRequests.setdefault(adjustedMethod, {})
        others = forMethod.setdefault(self.currentLine.userid, set())
        segments = self.currentLine.uri.split('/')
        if segments[:3] == ['', 'calendars', '__uids__']:
            if segments[3:] != [self.currentLine.userid, '']:
                others.add(segments[3])

    def printAll(self, doTabs, summary):

        self.printInfo(doTabs)

        print("Load Analysis")
        self.printHourlyTotals(doTabs, summary)

        if not summary:
            print("Client Analysis")
            self.printClientTotals(doTabs)

            print("Protocol Analysis Count")
            self.printHourlyByXXXDetails(
                self.hourlyByOKMethodCount if self.separate401s else self.hourlyByMethodCount,
                doTabs,
            )

            print("Protocol Analysis Average Response Time (ms)")
            self.printHourlyByXXXDetails(
                self.averagedHourlyByOKMethodTime if self.separate401s else self.averagedHourlyByMethodTime,
                doTabs,
                showAverages=True,
            )

            print("Protocol Analysis Total Response Time (ms)")
            self.printHourlyByXXXDetails(
                self.hourlyByOKMethodTime if self.separate401s else self.hourlyByMethodTime,
                doTabs,
                showFloatPercent=True,
            )

            print("Status Code Analysis")
            self.printHourlyByXXXDetails(self.hourlyByStatus, doTabs)

            print("Protocol Analysis by Status")
            self.printXXXMethodDetails(self.statusByMethodCount, doTabs, False)

            print("Cache Analysis")
            self.printHourlyCacheDetails(doTabs)

            if len(self.hourlyPropfindByResponseCount):
                print("PROPFIND Calendar response count distribution")
                self.printHourlyByXXXDetails(self.hourlyPropfindByResponseCount, doTabs)

            if len(self.averagedHourlyByRecipientCount):
                print("Average Recipient Counts")
                self.printHourlyByXXXDetails(self.averagedHourlyByRecipientCount, doTabs, showTotals=False)

            print("Queue Depth vs Response Time")
            self.printQueueDepthResponseTime(doTabs)

            print("Instance Count Distribution")
            self.printInstanceCount(doTabs)

            print("Protocol Analysis by Client")
            self.printXXXMethodDetails(self.clientIDByMethodCount, doTabs)

            if len(self.requestSizeByBucket):
                print("Request size distribution")
                self.printHourlyByXXXDetails(self.requestSizeByBucket, doTabs)

            if len(self.responseSizeByBucket):
                print("Response size distribution (excluding GET Dropbox)")
                self.printHourlyByXXXDetails(self.responseSizeByBucket, doTabs)

            if len(self.averageResponseCountByMethod):
                print("Average response count by method")
                self.printResponseCounts(doTabs)

            if len(self.requestTimeByBucket):
                print("Response time distribution")
                self.printHourlyByXXXDetails(self.requestTimeByBucket, doTabs)

            print("URI Counts")
            self.printURICounts(doTabs)

            # print("User Interaction Counts")
            # self.printUserInteractionCounts(doTabs)

            print("User Weights (top 100)")
            self.printUserWeights(doTabs)

            # print("User Response times")
            # self.printUserResponseTimes(doTabs)

            print("Sim values")
            self.printSimStats(doTabs)

    def printInfo(self, doTabs):

        table = tables.Table()
        table.addRow(("Run on:", self.startTime.isoformat(' '),))
        table.addRow(("Host:", self.host,))
        table.addRow(("First Log Entry:", self.startLog,))
        table.addRow(("Last Log Entry:", self.endLog,))
        if self.filterByUser:
            table.addRow(("Filtered to user:", self.filterByUser,))
        if self.filterByClient:
            table.addRow(("Filtered to client:", self.filterByClient,))
        table.addRow(("Lines Analyzed:", sum(self.linesRead.values()),))

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def getHourFromIndex(self, index):

        if index >= self.maxIndex:
            return None
        totalminutes = index * self.resolutionMinutes

        offsethour, minute = divmod(totalminutes, 60)
        localhour = divmod(offsethour + self.adjustHour + self.utcoffset, 24)[1]
        utchour = divmod(localhour - self.loggedUTCOffset - self.utcoffset, 24)[1]

        # Clip to select hour range
        return "%02d:%02d (%02d:%02d)" % (localhour, minute, utchour, minute,)

    def printHourlyTotals(self, doTabs, summary):

        table = tables.Table()
        table.addHeader(
            ("Local (UTC)", "Total", "Av. Requests", "Av. Response", "Av. Queue",)
        )
        table.addHeader(
            ("", "Requests", "Per Second", "Time(ms)", "Depth")
        )
        table.setDefaultColumnFormats(
            (
                tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
                tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%.2f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            )
        )

        totalRequests = 0
        totalDepth = 0
        totalTime = 0.0
        self.timeCounts = 0
        for ctr in xrange(self.timeBucketCount):
            hour = self.getHourFromIndex(ctr)
            if hour is None:
                continue
            value = self.hourlyTotals[ctr]
            countRequests, _ignore503, countDepth, _ignore_maxDepth, countTime = value
            table.addRow(
                (
                    hour,
                    countRequests,
                    (1.0 * countRequests) / self.resolutionMinutes / 60,
                    safePercent(countTime, countRequests, 1.0),
                    safePercent(float(countDepth), countRequests, 1),
                )
            )
            totalRequests += countRequests
            totalDepth += countDepth
            totalTime += countTime
            self.timeCounts += 1

        table.addFooter(
            (
                "Total:",
                totalRequests,
                safePercent(totalRequests, self.timeCounts * self.resolutionMinutes * 60, 1.0),
                safePercent(totalTime, totalRequests, 1.0),
                safePercent(float(totalDepth), totalRequests, 1),
            ),
            columnFormats=(
                tables.Table.ColumnFormat("%s"),
                tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%.2f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            )
        )

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printClientTotals(self, doTabs):

        table = tables.Table()

        table.setDefaultColumnFormats((
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        ))

        table.addHeader(("ID", "Client", "Total", "Unique", "Unique"))
        table.addHeader(("", "", "", "Users", "IP addrs"))
        for title, clientData in sorted(self.clientTotals.iteritems(), key=lambda x: x[0].lower()):
            if title == " TOTAL":
                continue
            table.addRow((
                "%s" % (self.clientIDMap[title],),
                title,
                "%d (%2d%%)" % (clientData[0], safePercent(clientData[0], self.clientTotals[" TOTAL"][0]),),
                "%d (%2d%%)" % (len(clientData[1]), safePercent(len(clientData[1]), len(self.clientTotals[" TOTAL"][1])),),
                "%d (%2d%%)" % (len(clientData[2]), safePercent(len(clientData[2]), len(self.clientTotals[" TOTAL"][2])),),
            ))

        table.addFooter((
            "All",
            "",
            "%d " % (self.clientTotals[" TOTAL"][0],),
            "%d " % (len(self.clientTotals[" TOTAL"][1]),),
            "%d " % (len(self.clientTotals[" TOTAL"][2]),),
        ))

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printHourlyByXXXDetails(self, hourlyByXXX, doTabs, showTotals=True, showAverages=False, showFloatPercent=False):

        totals = [(0, 0,)] * len(hourlyByXXX)
        table = tables.Table()

        headers = [["Local (UTC)", ], ["", ], ["", ], ["", ], ]
        use_headers = [True, False, False, False]
        formats = [tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY), ]
        for k in sorted(hourlyByXXX.keys(), key=lambda x: str(x).lower()):
            if type(k) is str:
                if k[0] == ' ':
                    k = k[1:]
                ks = k.split(" ")
                headers[0].append(ks[0])
                if len(ks) > 1:
                    headers[1].append(ks[1])
                    use_headers[1] = use_headers[1] or headers[1][-1]
                else:
                    headers[1].append("")
                if len(ks) > 2:
                    headers[2].append(ks[2])
                    use_headers[2] = use_headers[2] or headers[2][-1]
                else:
                    headers[2].append("")
                if len(ks) > 3:
                    headers[3].append(ks[3])
                    use_headers[3] = use_headers[3] or headers[3][-1]
                else:
                    headers[3].append("")
            else:
                headers[0].append(k)
                headers[1].append("")
                headers[2].append("")
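# Aside: the safePercent(value, total, factor) calls used throughout these
# printing methods scale a ratio while guarding against a zero denominator.
# The real helper is defined elsewhere in this module; this hypothetical
# stand-in only illustrates the calling convention assumed above:

```python
def safe_percent(value, total, factor=100.0):
    """Return value * factor / total, or 0 when total is zero."""
    return (value * factor) / total if total else 0


assert safe_percent(25, 200) == 12.5        # a percentage by default
assert safe_percent(7, 0) == 0              # no ZeroDivisionError
assert safe_percent(150.0, 300, 1.0) == 0.5  # factor 1.0 gives a plain ratio
```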
                headers[3].append("")
            formats.append(tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY))

        table.addHeader(headers[0])
        if use_headers[1]:
            table.addHeader(headers[1])
        if use_headers[2]:
            table.addHeader(headers[2])
        if use_headers[3]:
            table.addHeader(headers[3])
        table.setDefaultColumnFormats(formats)

        for ctr in xrange(self.timeBucketCount):
            row = ["-"] * (len(hourlyByXXX) + 1)
            row[0] = self.getHourFromIndex(ctr)
            if row[0] is None:
                continue
            for colctr, items in enumerate(sorted(hourlyByXXX.items(), key=lambda x: str(x[0]).lower())):
                _ignore, value = items
                data = value[ctr]
                if " TOTAL" in hourlyByXXX:
                    total = hourlyByXXX[" TOTAL"][ctr]
                else:
                    total = None
                if type(data) is int:
                    if total is not None:
                        row[colctr + 1] = "%d (%2d%%)" % (data, safePercent(data, total),)
                    else:
                        row[colctr + 1] = "%d" % (data,)
                    if data:
                        totals[colctr] = (totals[colctr][0] + data, totals[colctr][1] + 1,)
                elif type(data) is float:
                    if total is not None and showFloatPercent:
                        row[colctr + 1] = "%.1f (%2d%%)" % (data, safePercent(data, total),)
                    else:
                        row[colctr + 1] = "%.1f" % (data,)
                    if data:
                        totals[colctr] = (totals[colctr][0] + data, totals[colctr][1] + 1,)
            table.addRow(row)

        if showTotals or showAverages:
            row = ["-"] * (len(hourlyByXXX) + 1)
            row[0] = "Average:" if showAverages else "Total:"
            for colctr, totaldata in enumerate(totals):
                data, count = totaldata
                if type(data) is int:
                    row[colctr + 1] = "%d (%2d%%)" % (data, safePercent(data, totals[0][0]),)
                elif type(data) is float:
                    data = ((data / count) if count else 0.0) if showAverages else data
                    if showFloatPercent:
                        row[colctr + 1] = "%.1f (%2d%%)" % (data, safePercent(data, totals[0][0]),)
                    else:
                        row[colctr + 1] = "%.1f" % (data,)
            table.addFooter(row)

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printHourlyCacheDetails(self, doTabs):

        totals = [0, ] * 7
        table = tables.Table()

        header1 = ["Local (UTC)", "PROPFIND Calendar Home", "", "PROPFIND Address Book Home", "", "PROPFIND Principals", ""]
        header2 = ["", "Uncached", "Cached", "Uncached", "Cached", "Uncached", "Cached"]
        formats = [
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        ]

        table.addHeader(header1, columnFormats=[
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY, span=2),
            None,
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY, span=2),
            None,
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY, span=2),
            None,
        ])
        table.addHeaderDivider(skipColumns=(0,))
        table.addHeader(header2, columnFormats=[
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
        ])
        table.setDefaultColumnFormats(formats)

        for ctr in xrange(self.timeBucketCount):
            hour = self.getHourFromIndex(ctr)
            if hour is None:
                continue
            row = []
            row.append(hour)

            calHomeUncached = self.hourlyByOKMethodCount[METHOD_PROPFIND_CALENDAR_HOME][ctr]
            calHomeCached = self.hourlyByOKMethodCount[METHOD_PROPFIND_CACHED_CALENDAR_HOME][ctr]
            calHomeTotal = calHomeUncached + calHomeCached

            adbkHomeUncached = self.hourlyByOKMethodCount[METHOD_PROPFIND_ADDRESSBOOK_HOME][ctr]
            adbkHomeCached = self.hourlyByOKMethodCount[METHOD_PROPFIND_CACHED_ADDRESSBOOK_HOME][ctr]
            adbkHomeTotal = adbkHomeUncached + adbkHomeCached

            principalUncached = self.hourlyByOKMethodCount[METHOD_PROPFIND_PRINCIPALS][ctr]
            principalCached = self.hourlyByOKMethodCount[METHOD_PROPFIND_CACHED_PRINCIPALS][ctr]
            principalTotal = principalUncached + principalCached

            row.append("%d (%2d%%)" % (calHomeUncached, safePercent(calHomeUncached, calHomeTotal),))
            row.append("%d (%2d%%)" % (calHomeCached, safePercent(calHomeCached, calHomeTotal),))
            row.append("%d (%2d%%)" % (adbkHomeUncached, safePercent(adbkHomeUncached, adbkHomeTotal),))
            row.append("%d (%2d%%)" % (adbkHomeCached, safePercent(adbkHomeCached, adbkHomeTotal),))
            row.append("%d (%2d%%)" % (principalUncached, safePercent(principalUncached, principalTotal),))
            row.append("%d (%2d%%)" % (principalCached, safePercent(principalCached, principalTotal),))

            totals[1] += calHomeUncached
            totals[2] += calHomeCached
            totals[3] += adbkHomeUncached
            totals[4] += adbkHomeCached
            totals[5] += principalUncached
            totals[6] += principalCached

            table.addRow(row)

        row = []
        row.append("Total:")
        row.append("%d (%2d%%)" % (totals[1], safePercent(totals[1], totals[1] + totals[2]),))
        row.append("%d (%2d%%)" % (totals[2], safePercent(totals[2], totals[1] + totals[2]),))
        row.append("%d (%2d%%)" % (totals[3], safePercent(totals[3], totals[3] + totals[4]),))
        row.append("%d (%2d%%)" % (totals[4], safePercent(totals[4], totals[3] + totals[4]),))
        row.append("%d (%2d%%)" % (totals[5], safePercent(totals[5], totals[5] + totals[6]),))
        row.append("%d (%2d%%)" % (totals[6], safePercent(totals[6], totals[5] + totals[6]),))
        table.addFooter(row)

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printQueueDepthResponseTime(self, doTabs):

        table = tables.Table()
        table.setDefaultColumnFormats((
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        ))

        table.addHeader(("Queue Depth", "Av. Response Time (ms)", "Number"))
        for k, v in sorted(self.averagedResponseTimeVsQueueDepth.iteritems(), key=lambda x: x[0]):
            table.addRow((
                "%d" % (k,),
                "%.1f" % (v[1],),
                "%d" % (v[0],),
            ))

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printXXXMethodDetails(self, data, doTabs, verticalTotals=True):

        table = tables.Table()

        header = ["Method", ]
        formats = [tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY), ]
        for k in sorted(data.keys(), key=lambda x: x.lower()):
            header.append(k)
            formats.append(tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY))
        table.addHeader(header)
        table.setDefaultColumnFormats(formats)

        # Get full set of methods
        methods = set()
        for v in data.itervalues():
            methods.update(v.keys())

        for method in sorted(methods):
            if method == " TOTAL":
                continue
            row = []
            row.append(method)
            for k, v in sorted(data.iteritems(), key=lambda x: x[0].lower()):
                total = v[" TOTAL"] if verticalTotals else data[" TOTAL"][method]
                row.append("%d (%2d%%)" % (v[method], safePercent(v[method], total),))
            table.addRow(row)

        row = []
        row.append("Total:")
        for k, v in sorted(data.iteritems(), key=lambda x: x[0].lower()):
            row.append("%d" % (v[" TOTAL"],))
        table.addFooter(row)

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printInstanceCount(self, doTabs):

        total = sum(self.instanceCount.values())

        table = tables.Table()
        table.addHeader(("Instance ID", "Count", "%% Total",))
        table.setDefaultColumnFormats((
            tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f%%%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        ))

        # Top 100 only
        def safeIntKey(v):
            try:
                return int(v[0])
            except ValueError:
                return -1

        for key, value in sorted(self.instanceCount.iteritems(), key=safeIntKey):
            table.addRow((
                key,
                value,
                safePercent(value, total, 1000.0),
            ))

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printURICounts(self, doTabs):

        total = sum(self.requestURI.values())

        table = tables.Table()
        table.addHeader(("Request URI", "Count", "%% Total",))
        table.setDefaultColumnFormats((
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY),
            tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f%%%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        ))

        # Top 100 only
        for key, value in sorted(self.requestURI.iteritems(), key=lambda x: x[1], reverse=True)[:100]:
            table.addRow((
                key,
                value,
                safePercent(value, total, 1000.0),
            ))

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printUserWeights(self, doTabs):

        total = sum(self.userWeights.values())

        table = tables.Table()
        table.addHeader(("User ID", "Weight", "%% Total",))
        table.setDefaultColumnFormats((
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
            tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f%%%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        ))

        # Top 100 only
        for key, value in sorted(self.userWeights.iteritems(), key=lambda x: x[1], reverse=True)[:100]:
            table.addRow((
                key,
                value,
                safePercent(value, total, 1000.0),
            ))

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printResponseCounts(self, doTabs):

        table = tables.Table()
        table.addHeader(("Method", "Av. Response Count",))
        table.setDefaultColumnFormats((
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY),
            tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        ))

        for method, value in sorted(self.averageResponseCountByMethod.iteritems(), key=lambda x: x[0]):
            if method == " TOTAL":
                continue
            table.addRow((
                method,
                value,
            ))
        table.addFooter(("Total:", self.averageResponseCountByMethod[" TOTAL"],))

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printUserResponseTimes(self, doTabs):

        totalCount = 0
        averages = {}
        for user in self.userResponseTimes.keys():
            count = self.userCounts[user]
            total = self.userResponseTimes[user]
            averages[user] = total / count
            totalCount += total

        table = tables.Table()
        table.addHeader(("User ID/Client", "Av. Response (ms)", "%% Total",))
        table.setDefaultColumnFormats((
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
            tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f%%%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        ))

        for key, value in sorted(averages.iteritems(), key=lambda x: x[1], reverse=True):
            table.addRow((
                key,
                value,
                safePercent(value, total, 1000.0),
            ))

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printUserInteractionCounts(self, doTabs):
        table = tables.Table()
        table.setDefaultColumnFormats((
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%0.2f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        ))
        table.addHeader(("# users accessed", "# of users", "% of users"))
        summary = self.summarizeUserInteraction(METHOD_PROPFIND_CALENDAR_HOME)
        total = sum(summary.values())
        for k, v in sorted(summary.iteritems()):
            # Chop off the "(a):" part.
            table.addRow((k[4:], v, safePercent(float(v), total)))
        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printSimStats(self, doTabs):
        users = len(self.userCounts.keys())
        hours = self.timeCounts / self.resolutionMinutes / 60
        table = tables.Table()
        table.setDefaultColumnFormats((
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        ))
        table.addHeader(("Item", "Value", "Items, per User, per Day", "Interval (sec), per item, per user"))
        table.addRow(("Unique Users", users, "", ""))

        def _addRow(title, item):
            table.addRow((title, item, "%.1f" % (safePercent(24 * item, hours * users, 1.0),), "%.1f" % (safePercent(hours * 60 * 60 * users, item, 1.0),),))

        _addRow("New Events", self.newEvents)
        _addRow("New Invites", self.newInvites)
        _addRow("Updated Events", self.updateEvents)
        _addRow("Updated Invites", self.updateInvites)
        _addRow("Attendee Invites", self.attendeeInvites)
        table.addRow((
            "Recipients",
            "%.1f" % (safePercent(sum(self.averagedHourlyByRecipientCount["iTIP Average"]), self.timeCounts, 1.0),),
            "",
            "",
        ))

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")


class TablePrinter(object):

    @classmethod
    def printDictDictTable(cls, data, doTabs):

        table = tables.Table()

        header = ["", ]
        formats = [tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY), ]
        for k in sorted(data.keys(), key=lambda x: x.lower()):
            header.append(k)
            formats.append(tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY))
        table.addHeader(header)
        table.setDefaultColumnFormats(formats)

        # Get full set of row names
        rowNames = set()
        for v in data.itervalues():
            rowNames.update(v.keys())

        for rowName in sorted(rowNames):
            if rowName == " TOTAL":
                continue
            row = []
            row.append(rowName)
            for k, v in sorted(data.iteritems(), key=lambda x: x[0].lower()):
                value = v[rowName]
                if type(value) is str:
                    row.append(value)
                else:
                    if type(value) is float:
                        fmt = "%.1f"
                    else:
                        fmt = "%d"
                    if " TOTAL" in v:
                        total = v[" TOTAL"]
                        row.append((fmt + " (%2d%%)") % (value, safePercent(value, total),))
                    else:
                        row.append(fmt % (value,))
            table.addRow(row)

        if " TOTAL" in rowNames:
            row = []
            row.append("Total:")
            for k, v in sorted(data.iteritems(), key=lambda x: x[0].lower()):
                value = v[" TOTAL"]
                if type(value) is str:
                    fmt = "%s"
                elif type(value) is float:
                    fmt = "%.1f"
                else:
                    fmt = "%d"
                row.append(fmt % (value,))
            table.addFooter(row)

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")


class Differ(TablePrinter):

    def __init__(self, analyzers):
        self.analyzers = analyzers

    def printAll(self, doTabs, summary):

        self.printInfo(doTabs)

        print("Load Analysis Differences")
        # self.printLoadAnalysisDetails(doTabs)
        self.printHourlyTotals(doTabs)

        if not summary:
            print("Client Differences")
            self.printClientTotals(doTabs)

            print("Protocol Count Differences")
            self.printMethodCountDetails(doTabs)

            print("Average Response Time Differences")
            self.printMethodTimingDetails("clientByMethodAveragedTime", doTabs)

            print("Total Response Time Differences")
            self.printMethodTimingDetails("clientByMethodTotalTime", doTabs)

            print("Average Response Count Differences")
            self.printResponseCountDetails(doTabs)

    def printInfo(self, doTabs):

        table = tables.Table()
        table.addRow(("Run on:", self.analyzers[0].startTime.isoformat(' '),))
        table.addRow(("Host:", self.analyzers[0].host,))
        for ctr, analyzer in enumerate(self.analyzers):
            table.addRow(("Log Start #%d:" % (ctr + 1,), analyzer.startLog,))
        if self.analyzers[0].filterByUser:
            table.addRow(("Filtered to user:", self.analyzers[0].filterByUser,))

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printLoadAnalysisDetails(self, doTabs):

        # First gather all the data
        byCategory = collections.defaultdict(lambda: collections.defaultdict(str))
        firstData = []
        lastData = []
        for ctr, analyzer in enumerate(self.analyzers):
            title = "#%d %s" % (ctr + 1, analyzer.startLog[0:11],)

            totalRequests = 0
            totalTime = 0.0
            for ctr2 in xrange(analyzer.timeBucketCount):
                hour = analyzer.getHourFromIndex(ctr2)
                if hour is None:
                    continue
                value = analyzer.hourlyTotals[ctr2]
                countRequests, _ignore503, _ignore_countDepth, _ignore_maxDepth, countTime = value
                totalRequests += countRequests
                totalTime += countTime

            byCategory[title]["#1 Total Requests"] = "%d" % (totalRequests,)
            byCategory[title]["#2 Av. Response Time (ms)"] = "%.1f" % (safePercent(totalTime, totalRequests, 1.0),)

            if ctr == 0:
                firstData = (totalRequests, safePercent(totalTime, totalRequests, 1.0),)
            lastData = (totalRequests, safePercent(totalTime, totalRequests, 1.0),)

        title = "Difference"
        byCategory[title]["#1 Total Requests"] = "%+d (%+.1f%%)" % (lastData[0] - firstData[0], safePercent(lastData[0] - firstData[0], firstData[0], 100.0),)
        byCategory[title]["#2 Av. Response Time (ms)"] = "%+.1f (%+.1f%%)" % (lastData[1] - firstData[1], safePercent(lastData[1] - firstData[1], firstData[1], 100.0),)

        self.printDictDictTable(byCategory, doTabs)

    def printHourlyTotals(self, doTabs):

        table = tables.Table()

        hdr1 = [""]
        hdr2 = ["Local (UTC)"]
        hdr3 = [""]
        fmt1 = [tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY)]
        fmt23 = [tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY)]
        for ctr, analyzer in enumerate(self.analyzers):
            title = "#%d %s" % (ctr + 1, analyzer.startLog[0:11],)
            hdr1.extend([title, "", ""])
            hdr2.extend(["Total", "Av. Requests", "Av. Response"])
            hdr3.extend(["Requests", "Per Second", "Time(ms)"])
            fmt1.extend([
                tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY, span=3),
                None,
                None,
            ])
            fmt23.extend([
                tables.Table.ColumnFormat(),
                tables.Table.ColumnFormat(),
                tables.Table.ColumnFormat(),
            ])

        title = "Difference"
        hdr1.extend([title, "", ""])
        hdr2.extend(["Total", "Av.
Requests", "Av. Response"]) hdr3.extend(["Requests", "Per Second", "Time(ms)"]) fmt1.extend([ tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY, span=3), None, None, ]) fmt23.extend([ tables.Table.ColumnFormat(), tables.Table.ColumnFormat(), tables.Table.ColumnFormat(), ]) table.addHeader(hdr1, columnFormats=fmt1) table.addHeaderDivider() table.addHeader(hdr2, columnFormats=fmt23) table.addHeader(hdr3, columnFormats=fmt23) fmt = [tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY)] for ctr in range(len(self.analyzers) + 1): fmt.extend([ tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY), ]) table.setDefaultColumnFormats(fmt) totalRequests = [0] * len(self.analyzers) totalTime = [0.0] * len(self.analyzers) for ctr in xrange(self.analyzers[0].timeBucketCount): hour = self.analyzers[0].getHourFromIndex(ctr) if hour is None: continue diffRequests = None diffRequestRate = None diffTime = None row = [hour] for ctr2, analyzer in enumerate(self.analyzers): value = analyzer.hourlyTotals[ctr] countRequests, _ignore503, _ignore_countDepth, _ignore_maxDepth, countTime = value requestRate = (1.0 * countRequests) / analyzer.resolutionMinutes / 60 averageTime = safePercent(countTime, countRequests, 1.0) row.extend([ countRequests, requestRate, averageTime, ]) totalRequests[ctr2] += countRequests totalTime[ctr2] += countTime diffRequests = countRequests if diffRequests is None else countRequests - diffRequests diffRequestRate = requestRate if diffRequestRate is None else requestRate - diffRequestRate diffTime = averageTime if diffTime is None else averageTime - diffTime row.extend([ diffRequests, diffRequestRate, diffTime, ]) table.addRow(row) ftr = ["Total:"] diffRequests = None diffRequestRate = None diffTime = None for ctr, analyzer in enumerate(self.analyzers): 
            requestRate = (1.0 * totalRequests[ctr]) / analyzer.resolutionMinutes / 60
            averageTime = safePercent(totalTime[ctr], totalRequests[ctr], 1.0)
            ftr.extend([
                totalRequests[ctr],
                requestRate,
                averageTime,
            ])
            diffRequests = totalRequests[ctr] if diffRequests is None else totalRequests[ctr] - diffRequests
            diffRequestRate = requestRate if diffRequestRate is None else requestRate - diffRequestRate
            diffTime = averageTime if diffTime is None else averageTime - diffTime

        ftr.extend([
            diffRequests,
            diffRequestRate,
            diffTime,
        ])

        fmt = [tables.Table.ColumnFormat("%s")]
        for ctr in range(len(self.analyzers) + 1):
            fmt.extend([
                tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            ])
        table.addFooter(ftr, columnFormats=fmt)

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printClientTotals(self, doTabs):
        table = tables.Table()

        header1 = ["Client", ]
        header2 = ["", ]
        header1formats = [tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY), ]
        header2formats = [tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY), ]
        formats = [tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY), ]
        for ctr, analyzer in enumerate(self.analyzers):
            title = "#%d %s" % (ctr + 1, analyzer.startLog[0:11],)
            header1.extend((title, ""))
            header2.extend(("Total", "Unique",))
            header1formats.extend((
                tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY, span=2),
                None,
            ))
            header2formats.extend((
                tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
                tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
            ))
            formats.extend((
                tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
                tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            ))

        header1.extend(("Difference", ""))
        header2.extend(("Total", "Unique",))
        header1formats.extend((
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY, span=2),
            None,
        ))
        header2formats.extend((
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY),
        ))
        formats.extend((
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        ))

        table.addHeader(header1, columnFormats=header1formats)
        table.addHeaderDivider(skipColumns=(0,))
        table.addHeader(header2, columnFormats=header2formats)
        table.setDefaultColumnFormats(formats)

        allClients = set()
        for analyzer in self.analyzers:
            allClients.update(analyzer.clientTotals.keys())

        for title in sorted(allClients, key=lambda x: x.lower()):
            if title == " TOTAL":
                continue
            row = [title, ]
            for analyzer in self.analyzers:
                row.append("%d (%2d%%)" % (analyzer.clientTotals[title][0], safePercent(analyzer.clientTotals[title][0], analyzer.clientTotals[" TOTAL"][0]),))
                row.append("%d (%2d%%)" % (len(analyzer.clientTotals[title][1]), safePercent(len(analyzer.clientTotals[title][1]), len(analyzer.clientTotals[" TOTAL"][1])),))
            firstTotal = (self.analyzers[0].clientTotals[title][0], safePercent(self.analyzers[0].clientTotals[title][0], self.analyzers[0].clientTotals[" TOTAL"][0], 100.0),)
            firstUnique = (len(self.analyzers[0].clientTotals[title][1]), safePercent(len(self.analyzers[0].clientTotals[title][1]), len(self.analyzers[0].clientTotals[" TOTAL"][1]), 100.0,))
            lastTotal = (self.analyzers[-1].clientTotals[title][0], safePercent(self.analyzers[-1].clientTotals[title][0], self.analyzers[-1].clientTotals[" TOTAL"][0], 100.0),)
            lastUnique = (len(self.analyzers[-1].clientTotals[title][1]), safePercent(len(self.analyzers[-1].clientTotals[title][1]), len(self.analyzers[-1].clientTotals[" TOTAL"][1]), 100.0,))
            row.append("%+d (%+.1f%%)" % (lastTotal[0] - firstTotal[0], lastTotal[1] - firstTotal[1]))
            row.append("%+d (%+.1f%%)" % (lastUnique[0] - firstUnique[0], lastUnique[1] - firstUnique[1]))
            table.addRow(row)

        footer = ["All", ]
        for analyzer in self.analyzers:
            footer.append("%d " % (analyzer.clientTotals[" TOTAL"][0],))
            footer.append("%d " % (len(analyzer.clientTotals[" TOTAL"][1]),))

        firstTotal = self.analyzers[0].clientTotals[" TOTAL"][0]
        firstUnique = len(self.analyzers[0].clientTotals[" TOTAL"][1])
        lastTotal = self.analyzers[-1].clientTotals[" TOTAL"][0]
        lastUnique = len(self.analyzers[-1].clientTotals[" TOTAL"][1])
        footer.append("%+d (%+.1f%%)" % (lastTotal - firstTotal, safePercent(lastTotal - firstTotal, firstTotal, 100.0),))
        footer.append("%+d (%+.1f%%)" % (lastUnique - firstUnique, safePercent(lastUnique - firstUnique, firstUnique, 100.0),))
        table.addFooter(footer)

        table.printTabDelimitedData() if doTabs else table.printTable()
        print("")

    def printMethodCountDetails(self, doTabs):
        # First gather all the data
        allMethods = set()
        byMethod = collections.defaultdict(lambda: collections.defaultdict(int))
        for ctr, analyzer in enumerate(self.analyzers):
            title = "#%d %s" % (ctr + 1, analyzer.startLog[0:11],)
            for method, value in analyzer.clientByMethodCount[" TOTAL"].iteritems():
                byMethod[title][method] = value
                allMethods.add(method)

        title = "Difference"
        for method in allMethods:
            firstTotal = self.analyzers[0].clientByMethodCount[" TOTAL"].get(" TOTAL", 0)
            lastTotal = self.analyzers[-1].clientByMethodCount[" TOTAL"].get(" TOTAL", 0)
            if method == " TOTAL":
                byMethod[title][method] = "%+d (%+.1f%%)" % (lastTotal - firstTotal, safePercent(lastTotal - firstTotal, firstTotal, 100.0),)
            else:
                firstValue = self.analyzers[0].clientByMethodCount[" TOTAL"].get(method, 0)
                firstPercent = ((100.0 * firstValue) / firstTotal) if firstTotal != 0 else 0.0
                lastValue = self.analyzers[-1].clientByMethodCount[" TOTAL"].get(method, 0)
                lastPercent = ((100.0 * lastValue) / lastTotal) if lastTotal != 0 else 0.0
                byMethod[title][method] = "%+d (%+.1f%%)" % (lastValue - firstValue, lastPercent - firstPercent,)

        self.printDictDictTable(byMethod, doTabs)

    def printMethodTimingDetails(self, timingType, doTabs):
        # First gather all the data
        allMethods = set()
        byMethod = collections.defaultdict(lambda: collections.defaultdict(int))
        for ctr, analyzer in enumerate(self.analyzers):
            title = "#%d %s" % (ctr + 1, analyzer.startLog[0:11],)
            for method, value in getattr(analyzer, timingType)[" TOTAL"].iteritems():
                byMethod[title][method] = value
                allMethods.add(method)

        title1 = "1. Difference"
        title2 = "2. % Difference"
        title3 = "3. % Total Diff."
        for method in allMethods:
            firstTotal = getattr(self.analyzers[0], timingType)[" TOTAL"].get(" TOTAL", 0)
            lastTotal = getattr(self.analyzers[-1], timingType)[" TOTAL"].get(" TOTAL", 0)
            if method == " TOTAL":
                byMethod[title1][method] = "%+.1f" % (lastTotal - firstTotal,)
                byMethod[title2][method] = "%+.1f%%" % (safePercent(lastTotal - firstTotal, firstTotal, 100.0),)
                byMethod[title3][method] = ""
            else:
                firstValue = getattr(self.analyzers[0], timingType)[" TOTAL"].get(method, 0)
                lastValue = getattr(self.analyzers[-1], timingType)[" TOTAL"].get(method, 0)
                byMethod[title1][method] = "%+.1f" % (
                    lastValue - firstValue,
                )
                byMethod[title2][method] = "%+.1f%%" % (
                    safePercent(lastValue - firstValue, firstValue, 100.0),
                )
                byMethod[title3][method] = "%+.1f%%" % (
                    safePercent(lastValue - firstValue, lastTotal - firstTotal, 100.0),
                )

        self.printDictDictTable(byMethod, doTabs)

    def printResponseCountDetails(self, doTabs):
        # First gather all the data
        allMethods = set()
        byMethod = collections.defaultdict(lambda: collections.defaultdict(int))
        for ctr, analyzer in enumerate(self.analyzers):
            title = "#%d %s" % (ctr + 1, analyzer.startLog[0:11],)
            for method, value in analyzer.averageResponseCountByMethod.iteritems():
                byMethod[title][method] = value
                allMethods.add(method)

        title = "Difference"
        for method in allMethods:
            firstTotal = self.analyzers[0].averageResponseCountByMethod.get(" TOTAL", 0)
            lastTotal = self.analyzers[-1].averageResponseCountByMethod.get(" TOTAL", 0)
            if method == " TOTAL":
                byMethod[title][method] = "%+d (%+.1f%%)" % (lastTotal - firstTotal, safePercent(lastTotal - firstTotal, firstTotal, 100.0),)
            else:
                firstValue = self.analyzers[0].averageResponseCountByMethod.get(method, 0)
                lastValue = self.analyzers[-1].averageResponseCountByMethod.get(method, 0)
                byMethod[title][method] = "%+d (%+.1f%%)" % (lastValue - firstValue, safePercent(lastValue - firstValue, firstValue, 100.0),)

        self.printDictDictTable(byMethod, doTabs)


def usage(error_msg=None):
    if error_msg:
        print(error_msg)

    print("""Usage: protocolanalysis [options] [FILE]
Options:
    -h            Print this help and exit
    --hours       Range of hours (local time) to analyze [0:23]
    --utcoffset   Local time offset for UTC log entries
    --resolution  Time resolution in minutes [60]
    --user        User to analyze
    --client      Client to analyze
    --tabs        Generate tab-delimited output rather than table
    --summary     Print the Load Analysis summary only
    --repeat      Parse the file and then allow more data to be parsed
    --diff        Compare two or more files

Arguments:
    FILE      File names for the access logs to analyze

Description:
    This utility will analyze the output of access logs and generate
    tabulated statistics. It can also display statistics about the
    differences between two logs.
""")

    if error_msg:
        raise ValueError(error_msg)
    else:
        sys.exit(0)


if __name__ == "__main__":

    try:
        diffMode = False
        doTabDelimited = False
        repeat = False
        summary = False
        resolution = 60
        startHour = None
        endHour = None
        utcoffset = 0
        filterByUser = None
        filterByClient = None

        options, args = getopt.getopt(sys.argv[1:], "h", ["diff", "hours=", "utcoffset=", "resolution=", "repeat", "summary", "tabs", "user=", "client=", ])

        for option, value in options:
            if option == "-h":
                usage()
            elif option == "--diff":
                diffMode = True
            elif option == "--repeat":
                repeat = True
            elif option == "--summary":
                summary = True
            elif option == "--tabs":
                doTabDelimited = True
            elif option == "--hours":
                splits = value.split(":")
                if len(splits) not in (1, 2):
                    usage("Wrong format for --hours: %s %s" % (value, splits))
                elif len(splits) == 1:
                    startHour = int(splits[0])
                    endHour = startHour + 24
                else:
                    startHour = int(splits[0])
                    endHour = int(splits[1])
                    if endHour < startHour:
                        endHour += 24
            elif option == "--utcoffset":
                utcoffset = int(value)
            elif option == "--resolution":
                resolution = int(value)
            elif option == "--user":
                filterByUser = value
            elif option == "--client":
                filterByClient = value
            else:
                usage("Unrecognized option: %s" % (option,))

        if repeat and diffMode:
            usage("Cannot have --repeat and --diff together")

        # Process arguments
        if len(args) == 0:
            args = ("/var/log/caldavd/access.log",)
        pwd = os.getcwd()

        logs = []
        for arg in args:
            arg = os.path.expanduser(arg)
            if not arg.startswith("/"):
                arg = os.path.join(pwd, arg)
            if arg.endswith("/"):
                arg = arg[:-1]
            if "*" in arg:
                logs.extend(glob.iglob(arg))
            else:
                if not os.path.exists(arg):
                    print("Path does not exist: '%s'. Ignoring." % (arg,))
                    continue
                logs.append(arg)

        analyzers = []
        for log in logs:
            if diffMode or not analyzers:
                analyzers.append(CalendarServerLogAnalyzer(startHour, endHour, utcoffset, resolution, filterByUser, filterByClient))
            print("Analyzing: %s" % (log,))
            analyzers[-1].analyzeLogFile(log)

        if diffMode and len(analyzers) > 1:
            Differ(analyzers).printAll(doTabDelimited, summary)
        else:
            analyzers[0].printAll(doTabDelimited, summary)

            if repeat:
                while True:
                    again = raw_input("Repeat analysis [y/n]:")
                    if again.lower()[0] == "n":
                        break
                    print("\n\n\n")
                    for arg in args:
                        print("Analyzing: %s" % (arg,))
                        analyzers[0].analyzeLogFile(arg)
                    analyzers[0].printAll(doTabDelimited, summary)

    except Exception, e:
        traceback.print_exc()
        sys.exit(str(e))
calendarserver-9.1+dfsg/contrib/tools/readStats.py000066400000000000000000000546521315003562600224170ustar00rootroot00000000000000#!/usr/bin/env python
##
# Copyright (c) 2012-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

from StringIO import StringIO
import collections
import datetime
import getopt
import json
import socket
import sys
import tables
import time
import errno

"""
This tool reads data from the server's statistics socket and prints a summary.
"""


def safeDivision(value, total, factor=1):
    return value * factor / total if total else 0


SummaryValues = collections.namedtuple("SummaryValues", ("requests", "response", "slots", "cpu", "errors"))


def readSock(sockname, useTCP):
    try:
        s = socket.socket(socket.AF_INET if useTCP else socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(sockname)
        s.sendall(json.dumps(["stats"]) + "\r\n")
        s.setblocking(0)
        data = ""
        while not data.endswith("\n"):
            try:
                d = s.recv(1024)
            except socket.error as se:
                if se.args[0] != errno.EWOULDBLOCK:
                    raise
                continue
            if d:
                data += d
            else:
                break
        s.close()
        data = json.loads(data)["stats"]
    except socket.error as e:
        data = {"Failed": "Unable to read statistics from server: %s %s" % (sockname, e)}
    data["server"] = sockname
    return data


def printStats(stats, multimode, showMethods, topUsers, showAgents, dumpFile):
    if len(stats) == 1:
        if "Failed" in stats[0]:
            printFailedStats(stats[0]["Failed"])
        else:
            try:
                summary = printStat(stats[0], multimode[0], showMethods, topUsers, showAgents)
            except KeyError, e:
                printFailedStats("Unable to find key '%s' in statistics from server socket" % (e,))
                sys.exit(1)
    else:
        summary = printMultipleStats(stats, multimode, showMethods, topUsers, showAgents)

    return summary


def printStat(stats, index, showMethods, topUsers, showAgents):

    print("- " * 40)
    print("Server: %s" % (stats["server"],))
    print(datetime.datetime.now().strftime("%Y/%m/%d %H:%M:%S"))
    print("Service Uptime: %s" % (datetime.timedelta(seconds=(int(time.time() - stats["system"]["start time"]))),))
    if stats["system"]["cpu count"] > 0:
        print("Current CPU: %.1f%% (%d CPUs)" % (
            stats["system"]["cpu use"],
            stats["system"]["cpu count"],
        ))
        print("Current Memory Used: %d bytes (%.1f GB) (%.1f%% of total)" % (
            stats["system"]["memory used"],
            stats["system"]["memory used"] / (1024.0 * 1024 * 1024),
            stats["system"]["memory percent"],
        ))
    else:
        print("Current CPU: Unavailable")
        print("Current Memory Used: Unavailable")
    print("")
    summary = printRequestSummary(stats)
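readSock() above frames its protocol very simply: the request is a JSON list followed by CRLF, and the reply is buffered until it ends in a newline, then parsed as JSON with the payload under a "stats" key. The sketch below is a minimal Python 3 illustration of that framing only (readStats.py itself is Python 2); `query_stats` is a hypothetical name and the in-process `socketpair` peer stands in for the real statistics socket.

```python
import json
import socket


def query_stats(sock):
    # Same framing as readSock(): JSON request terminated by CRLF...
    sock.sendall(json.dumps(["stats"]).encode() + b"\r\n")
    # ...and a reply buffered until it ends with a newline.
    data = b""
    while not data.endswith(b"\n"):
        chunk = sock.recv(1024)
        if not chunk:
            break
        data += chunk
    # The payload sits under the "stats" key of the JSON reply.
    return json.loads(data)["stats"]


# An in-process peer standing in for the server's statistics socket.
client, server = socket.socketpair()
server.sendall(json.dumps({"stats": {"requests": 12}}).encode() + b"\n")
stats = query_stats(client)
client.close()
server.close()
```

Unlike readSock(), this sketch leaves the socket blocking and skips the EWOULDBLOCK retry loop, which only matters for the non-blocking read the real script uses.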
    printHistogramSummary(stats[index], index)
    if showMethods:
        printMethodCounts(stats[index])
    if topUsers:
        printUserCounts(stats[index], topUsers)
    if showAgents:
        printAgentCounts(stats[index])
    return summary


def printMultipleStats(stats, multimode, showMethods, topUsers, showAgents):

    labels = serverLabels(stats)

    print("- " * 40)
    print(datetime.datetime.now().strftime("%Y/%m/%d %H:%M:%S"))

    times = []
    for stat in stats:
        try:
            t = str(datetime.timedelta(seconds=int(time.time() - stat["system"]["start time"])))
        except KeyError:
            t = "-"
        times.append(t)

    cpus = []
    memories = []
    for stat in stats:
        if stat["system"]["cpu count"] > 0:
            cpus.append(stat["system"]["cpu use"])
            memories.append(stat["system"]["memory percent"])
        else:
            cpus.append(-1)
            memories.append(-1)

    summary = printMultiRequestSummary(stats, cpus, memories, times, labels, multimode)
    printMultiHistogramSummary(stats, multimode[0])
    if showMethods:
        printMultiMethodCounts(stats, multimode[0])
    if topUsers:
        printMultiUserCounts(stats, multimode[0], topUsers)
    if showAgents:
        printMultiAgentCounts(stats, multimode[0])
    return summary


def serverLabels(stats):
    servers = [stat["server"] for stat in stats]
    if isinstance(servers[0], tuple):
        hosts = set([item[0] for item in servers])
        ports = set([item[1] for item in servers])
        if len(ports) == 1:
            servers = [item[0] for item in servers]
        elif len(hosts) == 1:
            servers = [":%d" % item[1] for item in servers]
        elif len(hosts) == len(servers):
            servers = [item[0] for item in servers]
        else:
            servers = ["%s:%s" % item for item in servers]
    servers = [item.split(".") for item in servers]
    while True:
        if all([item[-1] == servers[0][-1] for item in servers]):
            servers = [item[:-1] for item in servers]
        else:
            break
    return [".".join(item) for item in servers]


def printFailedStats(message):

    print("- " * 40)
    print(datetime.datetime.now().strftime("%Y/%m/%d %H:%M:%S"))
    print(message)
    print("")


def printRequestSummary(stats):
    table = tables.Table()
    table.addHeader(
        ("Period", "Requests", "Av. Requests", "Av. Response", "Av. Response", "Max. Response", "Slot", "CPU", "500's"),
    )
    table.addHeader(
        ("", "", "per second", "(ms)", "no write(ms)", "(ms)", "Average", "Average", ""),
    )
    table.setDefaultColumnFormats(
        (
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY),
            tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.2f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        )
    )

    for key, seconds in (("current", 60,), ("1m", 60,), ("5m", 5 * 60,), ("1h", 60 * 60,),):
        stat = stats[key]
        table.addRow((
            key,
            stat["requests"],
            safeDivision(float(stat["requests"]), seconds),
            safeDivision(stat["t"], stat["requests"]),
            safeDivision(stat["t"] - stat["t-resp-wr"], stat["requests"]),
            stat["T-MAX"],
            safeDivision(float(stat["slots"]), stat["requests"]),
            safeDivision(stat["cpu"], stat["requests"]),
            stat["500"],
        ))

    os = StringIO()
    table.printTable(os=os)
    print(os.getvalue())

    stat = stats["1m"]
    return SummaryValues(
        requests=safeDivision(float(stat["requests"]), seconds),
        response=safeDivision(stat["t"], stat["requests"]),
        slots=safeDivision(float(stat["slots"]), stat["requests"]),
        cpu=safeDivision(stat["cpu"], stat["requests"]),
        errors=stat["500"],
    )


def printMultiRequestSummary(stats, cpus, memories, times, labels, index):
    key, seconds = index
    table = tables.Table()
    table.addHeader(
        ("Server", "Requests", "Av. Requests", "Av. Response", "Av. Response", "Max. Response", "Slot", "CPU", "CPU", "Memory", "500's", "Uptime",),
    )
    table.addHeader(
        (key, "", "per second", "(ms)", "no write(ms)", "(ms)", "Average", "Average", "Current", "Current", "", "",),
    )
    max_column = 5
    table.setDefaultColumnFormats(
        (
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY),
            tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.2f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        )
    )

    totals = ["Overall:", 0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0, "", ]
    for ctr, stat in enumerate(stats):

        stat = stat[key]

        col = []
        col.append(labels[ctr])
        col.append(stat["requests"])
        col.append(safeDivision(float(stat["requests"]), seconds))
        col.append(safeDivision(stat["t"], stat["requests"]))
        col.append(safeDivision(stat["t"] - stat["t-resp-wr"], stat["requests"]))
        col.append(stat["T-MAX"])
        col.append(safeDivision(float(stat["slots"]), stat["requests"]))
        col.append(safeDivision(stat["cpu"], stat["requests"]))
        col.append(cpus[ctr])
        col.append(memories[ctr])
        col.append(stat["500"])
        col.append(times[ctr])
        table.addRow(col)

        for item in xrange(1, len(col) - 1):
            if item == max_column:
                totals[item] = max(totals[item], col[item])
            else:
                totals[item] += col[item]

    for item in (3, 4, 6, 7, 8, 9):
        totals[item] /= len(stats)
table.addFooter(totals) os = StringIO() table.printTable(os=os) print(os.getvalue()) return SummaryValues( requests=totals[2], response=totals[3], slots=totals[6], cpu=totals[7], errors=totals[10], ) def printHistogramSummary(stat, index): print("%s average response histogram" % (index,)) table = tables.Table() table.addHeader( ("", "<10ms", "10ms<->100ms", "100ms<->1s", "1s<->10s", "10s<->30s", "30s<->60s", ">60s", "Over 1s", "Over 10s"), ) table.setDefaultColumnFormats( ( tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY), ) ) for i in ("T", "T-RESP-WR",): table.addRow(( "Overall Response" if i == "T" else "Response Write", (stat[i]["<10ms"], safeDivision(stat[i]["<10ms"], stat["requests"], 100.0)), (stat[i]["10ms<->100ms"], safeDivision(stat[i]["10ms<->100ms"], stat["requests"], 100.0)), (stat[i]["100ms<->1s"], safeDivision(stat[i]["100ms<->1s"], stat["requests"], 100.0)), (stat[i]["1s<->10s"], safeDivision(stat[i]["1s<->10s"], stat["requests"], 100.0)), (stat[i]["10s<->30s"], safeDivision(stat[i]["10s<->30s"], stat["requests"], 100.0)), (stat[i]["30s<->60s"], safeDivision(stat[i]["30s<->60s"], stat["requests"], 100.0)), (stat[i][">60s"], safeDivision(stat[i][">60s"], stat["requests"], 100.0)), 
            safeDivision(stat[i]["Over 1s"], stat["requests"], 100.0),
            safeDivision(stat[i]["Over 10s"], stat["requests"], 100.0),
        ))

    os = StringIO()
    table.printTable(os=os)
    print(os.getvalue())


def printMultiHistogramSummary(stats, index):

    # Totals first
    keys = ("<10ms", "10ms<->100ms", "100ms<->1s", "1s<->10s", "10s<->30s", "30s<->60s", ">60s", "Over 1s", "Over 10s",)
    totals = {
        "T": dict([(k, 0) for k in keys]),
        "T-RESP-WR": dict([(k, 0) for k in keys]),
        "requests": 0,
    }

    for stat in stats:
        # Accumulate the request count once per stat record, not once per
        # histogram key, otherwise the percentages come out halved.
        totals["requests"] += stat[index]["requests"]
        for i in ("T", "T-RESP-WR",):
            for k in keys:
                totals[i][k] += stat[index][i][k]

    printHistogramSummary(totals, index)


def printMethodCounts(stat):

    print("Method Counts")
    table = tables.Table()
    table.addHeader(
        ("Method", "Count", "%", "Av. Response", "%", "Total Resp. %"),
    )
    table.addHeader(
        ("", "", "", "(ms)", "", ""),
    )
    table.setDefaultColumnFormats(
        (
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY),
            tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        )
    )

    response_average = {}
    for method in stat["method"].keys():
        response_average[method] = stat["method-t"][method] / stat["method"][method]

    total_count = sum(stat["method"].values())
    total_avresponse = sum(response_average.values())
    total_response = sum(stat["method-t"].values())

    for method in sorted(stat["method"].keys()):
        table.addRow((
            method,
            stat["method"][method],
            safeDivision(stat["method"][method], total_count, 100.0),
            response_average[method],
            safeDivision(response_average[method], total_avresponse, 100.0),
            safeDivision(stat["method-t"][method], total_response, 100.0),
        ))

    os = StringIO()
    table.printTable(os=os)
    print(os.getvalue())


def printMultiMethodCounts(stats, index):

    methods = collections.defaultdict(int)
    method_times = collections.defaultdict(float)
    for stat in stats:
        for method in stat[index]["method"]:
            methods[method] += stat[index]["method"][method]
        if "method-t" in stat[index]:
            for method_time in stat[index]["method-t"]:
                method_times[method_time] += stat[index]["method-t"][method_time]

    printMethodCounts({"method": methods, "method-t": method_times})


def printUserCounts(stat, topUsers):

    print("User Counts")
    table = tables.Table()
    table.addHeader(
        ("User", "Total", "Percentage"),
    )
    table.setDefaultColumnFormats(
        (
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY),
            tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        )
    )

    total = sum(stat["uid"].values())
    for uid, value in sorted(stat["uid"].items(), key=lambda x: x[1], reverse=True)[:topUsers]:
        table.addRow((
            uid,
            value,
            safeDivision(value, total, 100.0),
        ))

    os = StringIO()
    table.printTable(os=os)
    print(os.getvalue())


def printMultiUserCounts(stats, index, topUsers):

    uids = collections.defaultdict(int)
    for stat in stats:
        for uid in stat[index]["uid"]:
            uids[uid] += stat[index]["uid"][uid]

    printUserCounts({"uid": uids}, topUsers)


def printAgentCounts(stat):

    print("User-Agent Counts")
    table = tables.Table()
    table.addHeader(
        ("User-Agent", "Total", "Percentage"),
    )
    table.setDefaultColumnFormats(
        (
            tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.LEFT_JUSTIFY),
            tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
            tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY),
        )
    )

    total = sum(stat["user-agent"].values())
    for ua in sorted(stat["user-agent"].keys()):
        table.addRow((
            ua,
            stat["user-agent"][ua],
            safeDivision(stat["user-agent"][ua], total, 100.0),
        ))

    os = StringIO()
    table.printTable(os=os)
    print(os.getvalue())


def printMultiAgentCounts(stats, index):

    uas = collections.defaultdict(int)
    for stat in stats:
        for ua in stat[index]["user-agent"]:
            uas[ua] += stat[index]["user-agent"][ua]

    printAgentCounts({"user-agent": uas})


def alert(summary, alerts):
    alert = []
    for attr, description in (
        ("requests", "Request count",),
        ("response", "Response time",),
        ("slots", "Slot count",),
        ("cpu", "CPU time",),
        ("errors", "Error count",),
    ):
        threshold = getattr(alerts, attr)
        value = getattr(summary, attr)
        if threshold is not None and value > threshold:
            if isinstance(value, float):
                value = "{:.1f}".format(value)
            alert.append("{} threshold exceeded: {}".format(description, value))

    if alert:
        # Ring the terminal bell a few times to draw attention to the alert
        for _ignore in range(10):
            print('\a', end='')
            sys.stdout.flush()
            time.sleep(0.15)
        print("\n".join(alert))


def usage(error_msg=None):
    if error_msg:
        print(error_msg)

    print("""Usage: readStats [options]
Options:
    -h                  Print this help and exit
    -s                  Name of local socket to read from
    -t                  Delay in seconds between each sample [10 seconds]
    --tcp host:port     Use TCP connection with host:port
    --0                 Display multiserver current average
    --1                 Display multiserver 1 minute average
    --5                 Display multiserver 5 minute average (the default)
    --60                Display multiserver 1 hour average
    --methods           Include details about HTTP method usage
    --users N           Include details about top N users
    --agents            Include details about HTTP User-Agent usage
    --dump FILE         File to dump JSON data to
    --alert-requests N  Generate an alert if the overall request count
                        exceeds a threshold [INT]
    --alert-response N  Generate an alert if the overall average response time
                        exceeds a threshold [INT]
    --alert-slots N     Generate an alert if the overall slot count
                        exceeds a threshold [FLOAT]
    --alert-cpu N       Generate an alert if the overall average cpu percent use
                        exceeds a threshold [INT]
    --alert-errors N    Generate an alert if the overall error count
                        exceeds a threshold [INT]

Description:
    This utility will print a summary of statistics read from a
    server continuously with the specified delay.
""")

    if error_msg:
        raise ValueError(error_msg)
    else:
        sys.exit(0)


if __name__ == '__main__':

    delay = 10
    servers = ("data/Logs/state/caldavd-stats.sock",)
    useTCP = False

    showMethods = False
    topUsers = 0
    showAgents = False
    dumpFile = None
    alerts = SummaryValues(
        requests=None, response=None, slots=None, cpu=None, errors=None,
    )

    multimodes = (("current", 60,), ("1m", 60,), ("5m", 5 * 60,), ("1h", 60 * 60,),)
    multimode = multimodes[2]

    options, args = getopt.getopt(sys.argv[1:], "hs:t:", [
        "tcp=",
        "0", "1", "5", "60",
        "methods", "users=", "agents",
        "dump=",
        "alert-requests=",
        "alert-response=",
        "alert-slots=",
        "alert-cpu=",
        "alert-errors=",
    ])

    for option, value in options:
        if option == "-h":
            usage()
        elif option == "-s":
            servers = value.split(",")
        elif option == "-t":
            delay = int(value)
        elif option == "--tcp":
            servers = [(host, int(port),) for host, port in [server.split(":") for server in value.split(",")]]
            useTCP = True
        elif option == "--0":
            multimode = multimodes[0]
        elif option == "--1":
            multimode = multimodes[1]
        elif option == "--5":
            multimode = multimodes[2]
        elif option == "--60":
            multimode = multimodes[3]
        elif option == "--methods":
            showMethods = True
        elif option == "--users":
            topUsers = int(value)
        elif option == "--agents":
            showAgents = True
        elif option == "--dump":
            dumpFile = value
        elif option == "--alert-requests":
            alerts = alerts._replace(requests=int(value))
        elif option == "--alert-response":
            alerts = alerts._replace(response=int(value))
        elif option == "--alert-slots":
            alerts = alerts._replace(slots=float(value))
        elif option == "--alert-cpu":
            alerts = alerts._replace(cpu=int(value))
        elif option == "--alert-errors":
            alerts = alerts._replace(errors=int(value))

    while True:
        now = time.time()
        stats = [readSock(server, useTCP) for server in servers]
        summary = printStats(stats, multimode, showMethods, topUsers, showAgents, dumpFile)
        if dumpFile is not None:
            stats = dict([("{}:{}".format(*servers[ctr]), stat) for ctr, stat in enumerate(stats)])
            stats["timestamp"] = datetime.datetime.now().strftime("%Y-%m-%dT%H:%M:%S")
            with open(dumpFile, "a") as f:
                f.write(json.dumps(stats))
                f.write("\n")
        alert(summary, alerts)
        # Clamp to zero: if a sample took longer than the delay interval,
        # time.sleep() would otherwise raise on a negative argument.
        time.sleep(max(0, delay - (time.time() - now)))
calendarserver-9.1+dfsg/contrib/tools/request_monitor.py

#!/usr/bin/env python
##
# Copyright (c) 2009-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

from dateutil.parser import parse as dateparse
from subprocess import Popen, PIPE, STDOUT
import datetime
import getopt
import os
import sys
import time
import traceback
import collections
import tables
from cStringIO import StringIO

# Detect which OS this is being run on
child = Popen(
    args=[
        "uname",
    ],
    stdout=PIPE, stderr=STDOUT,
)
output, _ignore_error = child.communicate()
output = output.strip()
if output == "Darwin":
    OS = "OS X"
elif output == "Linux":
    OS = "Linux"
else:
    print("Unknown OS: %s" % (output,))
    sys.exit(1)

# Some system commands we need to detect
if OS == "OS X":
    NETSTAT = "/usr/sbin/netstat"
    enableListenQueue = os.path.exists(NETSTAT)
elif OS == "Linux":
    enableListenQueue = False

if OS == "OS X":
    IOSTAT = "/usr/sbin/iostat"
    enableCpuIdle = os.path.exists(IOSTAT)
elif OS == "Linux":
    IOSTAT = "/usr/bin/iostat"
    enableCpuIdle = os.path.exists(IOSTAT)

if OS == "OS X":
    VMSTAT = "/usr/bin/vm_stat"
    enableFreeMem = os.path.exists(VMSTAT)
elif OS == "Linux":
    VMSTAT =
"/usr/bin/vmstat" enableFreeMem = os.path.exists(VMSTAT) sys.stdout = os.fdopen(sys.stdout.fileno(), 'w', 0) filenames = ["/var/log/caldavd/access.log", ] debug = False def listenq(): child = Popen( args=[ NETSTAT, "-L", "-anp", "tcp", ], stdout=PIPE, stderr=STDOUT, ) output, _ignore_error = child.communicate() ssl = nonssl = 0 for line in output.split("\n"): if line.find("8443") != -1: ssl = int(line.split("/")[0]) elif line.find("8008") != -1: nonssl = int(line.split("/")[0]) return "%s+%s" % (ssl, nonssl), ssl, nonssl _listenQueueHistory = [] def listenQueueHistory(): global _listenQueueHistory latest, ssl, nonssl = listenq() _listenQueueHistory.insert(0, latest) del _listenQueueHistory[12:] return _listenQueueHistory, ssl, nonssl _idleHistory = [] def idleHistory(): global _idleHistory latest = cpuidle() _idleHistory.insert(0, latest) del _idleHistory[12:] return _idleHistory def tail(filenames, n): results = collections.defaultdict(list) for filename in filenames: child = Popen( args=[ "/usr/bin/tail", "-%d" % (n,), filename, ], stdout=PIPE, stderr=STDOUT, ) output, _ignore_error = child.communicate() results[filename].extend(output.splitlines()) return results def range(filenames, start, end): results = collections.defaultdict(list) for filename in filenames: with open(filename) as f: for count, line in enumerate(f): if count >= start: results[filename].append(line) if count > end: break return results def cpuPerDaemon(): a = {} child = Popen( args=[ "ps", "auxw", ], stdout=PIPE, stderr=STDOUT, ) output, _ignore_ = child.communicate() for l in output.split("\n"): if "ProcessType=" in l: f = l.split() for l in f: if l.startswith("LogID="): logID = int(l[6:]) break else: logID = None if logID is not None: a[logID] = f[2] return ", ".join([v for _ignore_k, v in sorted(a.items(), key=lambda i:i[0])]) def cpuidle(): if OS == "OS X": child = Popen( args=[ IOSTAT, "-c", "2", "-n", "0", ], stdout=PIPE, stderr=STDOUT, ) output, _ignore_ = child.communicate() return 
output.splitlines()[-1].split()[2] elif OS == "Linux": child = Popen( args=[ IOSTAT, "-c", "1", "2" ], stdout=PIPE, stderr=STDOUT, ) output, _ignore_ = child.communicate() return output.splitlines()[-2].split()[5] def freemem(): try: if OS == "OS X": child = Popen( args=[ VMSTAT, ], stdout=PIPE, stderr=STDOUT, ) output, _ignore_ = child.communicate() lines = output.split("\n") line = lines[0] pageSize = int(line[line.find("page size of") + 12:].split()[0]) line = lines[1] freeSize = int(line[line.find("Pages free:") + 11:].split()[0][:-1]) freed = freeSize * pageSize return "%d bytes (%.1f GB)" % (freed, freed / (1024.0 * 1024 * 1024),) elif OS == "Linux": child = Popen( args=[ VMSTAT, "-s", "-S", "K" ], stdout=PIPE, stderr=STDOUT, ) output, _ignore_ = child.communicate() lines = output.splitlines() line = lines[4] freed = int(line.split()[0]) * 1024 return "%d bytes (%.1f GB)" % (freed, freed / (1024.0 * 1024 * 1024),) except Exception, e: if debug: print("freemem failure", e) print(traceback.print_exc()) return "error" def parseLine(line): startPos = line.find("- ") endPos = line.find(" [") userId = line[startPos + 2:endPos] startPos = endPos + 2 endPos = line.find(']', startPos) logTime = line[startPos:endPos] startPos = endPos + 3 endPos = line.find(' ', startPos) if line[startPos] == '?': method = "???" 
uri = "" startPos += 5 else: method = line[startPos:endPos] startPos = endPos + 1 endPos = line.find(" HTTP/", startPos) uri = line[startPos:endPos] startPos = endPos + 11 status = int(line[startPos:startPos + 3]) startPos += 4 endPos = line.find(' ', startPos) bytes = int(line[startPos:endPos]) startPos = endPos + 2 endPos = line.find('"', startPos) referer = line[startPos:endPos] startPos = endPos + 3 endPos = line.find('"', startPos) client = line[startPos:endPos] startPos = endPos + 2 if line[startPos] == '[': extended = {} startPos += 1 endPos = line.find(' ', startPos) extended["t"] = float(line[startPos:endPos]) startPos = endPos + 6 endPos = line.find(' ', startPos) extended["i"] = int(line[startPos:endPos]) startPos = endPos + 1 endPos = line.find(' ', startPos) extended["or"] = int(line[startPos:endPos]) else: items = line[startPos:].split() extended = dict([item.split('=') for item in items if item.find("=") != -1]) return userId, logTime, method, uri, status, bytes, referer, client, extended def safePercent(value, total): return value * 100.0 / total if total else 0.0 def usage(): print("request_monitor [OPTIONS] [FILENAME]") print("") print("FILENAME optional path of access log to monitor [/var/log/caldavd/access.log]") print("") print("OPTIONS") print("-h print help and exit") print("--debug print tracebacks and error details") print("--lines N specifies how many lines to tail from access.log (default: 10000)") print("--range M:N specifies a range of lines to analyze from access.log (default: all)") print("--procs N specifies how many python processes are expected in the log file (default: 80)") print("--top N how many long requests to print (default: 10)") print("--users N how many top users to print (default: 5)") print("") print("Version: 5") numLines = 10000 numProcs = 80 numTop = 10 numUsers = 5 lineRange = None options, args = getopt.getopt(sys.argv[1:], "h", ["debug", "lines=", "range=", "procs=", "top=", "users="]) for option, value in 
options: if option == "-h": usage() sys.exit(0) elif option == "--debug": debug = True elif option == "--lines": numLines = int(value) elif option == "--range": lineRange = (int(value.split(":")[0]), int(value.split(":")[1])) elif option == "--procs": numProcs = int(value) elif option == "--top": numTop = int(value) elif option == "--users": numUsers = int(value) if len(args): filenames = sorted([os.path.expanduser(arg) for arg in args]) prefix = len(os.path.commonprefix(filenames)) for filename in filenames: if not os.path.isfile(filename): print("Path %s does not exist" % (filename,)) print("") usage() sys.exit(1) for filename in filenames: if not os.access(filename, os.R_OK): print("Path %s does not exist" % (filename,)) print("") usage() sys.exit(1) if debug: print("Starting: access log files: %s" % (", ".join(filenames),)) print("") while True: currentSec = None currentCount = 0 times = [] perfile_times = collections.defaultdict(list) ids = {} rawCounts = {} timesSpent = {} numRequests = collections.defaultdict(int) numServerToServer = collections.defaultdict(int) numProxied = collections.defaultdict(int) slotCount = collections.defaultdict(int) totalRespTime = collections.defaultdict(float) totalRespWithoutWRTime = collections.defaultdict(float) maxRespTime = collections.defaultdict(float) under10ms = [0, 0] over10ms = [0, 0] over100ms = [0, 0] over1s = [0, 0] over10s = [0, 0] over30s = [0, 0] over60s = [0, 0] totalOver1s = [0, 0] totalOver10s = [0, 0] requests = [] users = {} startTime = None endTime = None errorCount = 0 parseErrors = 0 try: lines = tail(filenames, numLines) if lineRange is None else range(filenames, *lineRange) for filename in lines.keys(): for line in lines[filename]: if not line or line.startswith("Log"): continue numRequests[filename] += 1 try: userId, logTime, method, uri, status, bytes, _ignore_referer, client, extended = parseLine(line) except Exception, e: parseErrors += 1 if debug: print("Access log line parse failure", e) 
print(traceback.print_exc()) print("---") print(line) print("---") continue logTime = dateparse(logTime, fuzzy=True) times.append(logTime) perfile_times[filename].append(logTime) if status >= 500: errorCount += 1 if uri == "/ischedule": numServerToServer[filename] += 1 elif uri.startswith("/calendars"): numProxied[filename] += 1 outstanding = int(extended['or']) logId = int(extended['i'] if extended['i'] else 0) raw = rawCounts.get(logId, 0) + 1 rawCounts[logId] = raw prevMax = ids.get(logId, 0) if outstanding > prevMax: ids[logId] = outstanding slotCount[filename] += outstanding respTime = float(extended['t']) wrTime = float(extended.get('t-resp-wr', 0.0)) timeSpent = timesSpent.get(logId, 0.0) + respTime timesSpent[logId] = timeSpent totalRespTime[filename] += respTime totalRespWithoutWRTime[filename] += respTime - wrTime if respTime > maxRespTime[filename]: maxRespTime[filename] = respTime for index, testTime in enumerate((respTime, respTime - wrTime,)): if testTime >= 60000.0: over60s[index] += 1 elif testTime >= 30000.0: over30s[index] += 1 elif testTime >= 10000.0: over10s[index] += 1 elif testTime >= 1000.0: over1s[index] += 1 elif testTime >= 100.0: over100ms[index] += 1 elif testTime >= 10.0: over10ms[index] += 1 else: under10ms[index] += 1 if testTime >= 1000.0: totalOver1s[index] += 1 if testTime >= 10000.0: totalOver10s[index] += 1 ext = [] for key, value in extended.iteritems(): if key not in ('i', 't'): if key == "cl": value = float(value) / 1024 value = "%.1fKB" % (value,) key = "req" ext.append("%s:%s" % (key, value)) ext = ", ".join(ext) try: client = client.split(";")[2] client = client.strip() except: pass if userId != "-": userStat = users.get(userId, {'count': 0, 'clients': {}}) userStat['count'] += 1 clientCount = userStat['clients'].get(client, 0) userStat['clients'][client] = clientCount + 1 users[userId] = userStat reqStartTime = logTime - datetime.timedelta(milliseconds=respTime) requests.append((respTime, userId, method, bytes / 1024.0, 
ext, client, logId, logTime, reqStartTime)) times.sort() if len(times) == 0: print("No data to analyze") time.sleep(10) continue totalRequests = sum(numRequests.values()) print("- " * 40) print(datetime.datetime.now().strftime("%Y/%m/%d %H:%M:%S"),) if enableListenQueue: q, lqssl, lqnon = listenQueueHistory() print("Listenq (ssl+non):", q[0], " (Recent", ", ".join(q[1:]), "Oldest)") if enableCpuIdle: q = idleHistory() print("CPU idle %:", q[0], " (Recent", ", ".join(q[1:]), "Oldest)") if enableFreeMem: print("Memory free:", freemem()) print("CPU Per Daemon:", cpuPerDaemon()) print("") table = tables.Table() table.addHeader( ("Instance", "Requests", "Av. Requests", "Av. Response", "Av. Response", "Max. Response", "Slot", "Start", "End", "File Name"), ) table.addHeader( ("", "", "per second", "(ms)", "no write(ms)", "(ms)", "Average", "Time", "Time", ""), ) table.setDefaultColumnFormats( ( tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY), tables.Table.ColumnFormat("%d", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.1f", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.2f", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.RIGHT_JUSTIFY), ) ) totalAverage = 0.0 totalResponseTime = 0.0 totalResponseWithoutWRTime = 0.0 maxResponseTime = 0.0 totalSlots = 0 minStartTime = None maxEndTime = None for ctr, filename in enumerate(sorted(perfile_times.keys())): times = sorted(perfile_times[filename]) startTime = times[0] minStartTime = startTime if minStartTime is None else 
min(minStartTime, startTime) endTime = times[-1] maxEndTime = endTime if maxEndTime is None else min(maxEndTime, endTime) deltaTime = endTime - startTime avgRequests = float(len(times)) / deltaTime.seconds totalAverage += avgRequests avgResponse = totalRespTime[filename] / len(times) totalResponseTime += totalRespTime[filename] avgResponseWithWR = totalRespWithoutWRTime[filename] / len(times) totalResponseWithoutWRTime += totalRespWithoutWRTime[filename] maxResponseTime = max(maxResponseTime, maxRespTime[filename]) totalSlots += slotCount[filename] table.addRow(( "#%s" % (ctr + 1,), len(times), avgRequests, avgResponse, avgResponseWithWR, maxRespTime[filename], float(slotCount[filename]) / len(times), startTime, endTime, filename[prefix:], )) if len(perfile_times) > 1: table.addFooter(( "Total:", totalRequests, totalAverage, totalResponseTime / totalRequests, totalResponseWithoutWRTime / totalRequests, maxResponseTime, float(totalSlots) / totalRequests, minStartTime, maxEndTime, "", )) os = StringIO() table.printTable(os=os) print(os.getvalue()) if enableListenQueue: lqlatency = (lqssl / avgRequests, lqnon / avgRequests,) if avgRequests else (0.0, 0.0,) print(" listenq latency (ssl+non): %.1f s %.1f s" % ( lqlatency[0], lqlatency[1], )) table = tables.Table() table.addHeader( ("", "<10ms", "10ms<->100ms", "100ms<->1s", "1s<->10s", "10s<->30s", "30s<->60s", ">60s", "Over 1s", "Over 10s"), ) table.setDefaultColumnFormats( ( tables.Table.ColumnFormat("%s", tables.Table.ColumnFormat.CENTER_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", 
tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%d (%.1f%%)", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY), tables.Table.ColumnFormat("%.1f%%", tables.Table.ColumnFormat.RIGHT_JUSTIFY), ) ) for i in xrange(2): table.addRow(( "Overall Response" if i == 0 else "Response without Write", (under10ms[i], safePercent(under10ms[i], totalRequests)), (over10ms[i], safePercent(over10ms[i], totalRequests)), (over100ms[i], safePercent(over100ms[i], totalRequests)), (over1s[i], safePercent(over1s[i], totalRequests)), (over10s[i], safePercent(over10s[i], totalRequests)), (over30s[i], safePercent(over30s[i], totalRequests)), (over60s[i], safePercent(over60s[i], totalRequests)), safePercent(totalOver1s[i], totalRequests), safePercent(totalOver10s[i], totalRequests), )) os = StringIO() table.printTable(os=os) print(os.getvalue()) print if errorCount: print("Number of 500 errors: %d" % (errorCount,)) if parseErrors: print("Number of access log parsing errors: %d" % (parseErrors,)) if errorCount or parseErrors: print("") print("Proc: Peak outstanding: Seconds of processing (number of requests):") for l in xrange((numProcs - 1) / 10 + 1): base = l * 10 print("%2d-%2d: " % (base, base + 9),) for i in xrange(base, base + 10): try: r = ids[i] s = "%1d" % (r,) except KeyError: s = "." print(s, end="") print(" ", end="") for i in xrange(base, base + 10): try: r = timesSpent[i] / 1000 c = rawCounts[i] s = "%4.0f(%4d)" % (r, c) except KeyError: s = " ." 
                print(s, end="")
            print("")
        print("")

        print("Top %d longest (in most recent %d requests):" % (numTop, sum(numRequests.values()),))
        requests.sort()
        requests.reverse()
        for i in xrange(numTop):
            try:
                respTime, userId, method, kb, ext, client, logId, logTime, reqStartTime = requests[i]
                """
                overlapCount = 0
                for request in requests:
                    _respTime, _userId, _method, _kb, _ext, _client, _logId, _logTime, _reqStartTime = request
                    if _logId == logId and _logTime > reqStartTime and _reqStartTime < logTime:
                        overlapCount += 1
                print("%7.1fms %-12s %s res:%.1fKB, %s [%s] #%d +%d %s->%s" % (respTime, userId, method, kb, ext, client, logId, overlapCount, reqStartTime.strftime("%H:%M:%S"), logTime.strftime("%H:%M:%S"),))
                """
                print("%7.1fms %-12s %s res:%.1fKB, %s [%s] #%d %s->%s" % (respTime, userId, method, kb, ext, client, logId, reqStartTime.strftime("%H:%M:%S"), logTime.strftime("%H:%M:%S"),))
            except:
                pass

        print("")
        print("Top %d busiest users (in most recent %d requests):" % (numUsers, totalRequests,))
        userlist = []
        for user, userStat in users.iteritems():
            userlist.append((userStat['count'], user, userStat))
        userlist.sort()
        userlist.reverse()
        for i in xrange(numUsers):
            try:
                count, user, userStat = userlist[i]
                print("%3d %-12s " % (count, user), end="")
                clientStat = userStat['clients']
                clients = clientStat.keys()
                if len(clients) == 1:
                    print("[%s]" % (clients[0],))
                else:
                    clientList = []
                    for client in clients:
                        clientList.append("%s: %d" % (client, clientStat[client]))
                    print("[%s]" % ", ".join(clientList))
            except:
                pass
        print()

        # lineRange => do loop only once
        if lineRange is not None:
            break

    except Exception, e:
        print("Script failure", e)
        if debug:
            print(traceback.print_exc())

    time.sleep(10)
calendarserver-9.1+dfsg/contrib/tools/sortrecurrences.py

#!/usr/bin/env python
##
# Copyright (c) 2010-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

import getopt
import os
import sys
import traceback

from pycalendar.icalendar.calendar import Calendar


def usage(error_msg=None):
    if error_msg:
        print(error_msg)

    print("""Usage: sortrecurrences FILE
Options:
    -h            Print this help and exit

Arguments:
    FILE      File name for the calendar data to sort

Description:
    This utility will output a sorted iCalendar component.
""")

    if error_msg:
        raise ValueError(error_msg)
    else:
        sys.exit(0)


if __name__ == "__main__":

    try:
        options, args = getopt.getopt(sys.argv[1:], "h", [])

        for option, value in options:
            if option == "-h":
                usage()
            else:
                usage("Unrecognized option: %s" % (option,))

        # Process arguments
        if len(args) != 1:
            usage("Must have one argument")

        pwd = os.getcwd()

        analyzers = []
        for arg in args:
            arg = os.path.expanduser(arg)
            if not arg.startswith("/"):
                arg = os.path.join(pwd, arg)
            if arg.endswith("/"):
                arg = arg[:-1]
            if not os.path.exists(arg):
                print("Path does not exist: '%s'. Ignoring." % (arg,))
                continue

            cal = Calendar()
            cal.parse(open(arg))
            print(str(cal.serialize()))

    except Exception, e:
        # Print the traceback before exiting; the original order placed the
        # print after sys.exit(), making it unreachable.
        print(traceback.print_exc())
        sys.exit(str(e))
calendarserver-9.1+dfsg/contrib/tools/sqldata_from_path.py

#!/usr/bin/env python
##
# Copyright (c) 2011-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

from __future__ import print_function

"""
Prints out an SQL statement that can be used in an SQL shell against an sqlstore
database to return the calendar or address data for the provided filestore or
HTTP path.
"""

import getopt
import sys


def usage(error_msg=None):
    if error_msg:
        print(error_msg)

    print("sqldata_from_path PATH")
    print("")
    print("PATH   filestore or HTTP path")
    print("""
Prints out an SQL statement that can be used in an SQL shell against an sqlstore
database to return the calendar or address data for the provided filestore or
HTTP path. Path must be a __uids__ path.
""")

    if error_msg:
        raise ValueError(error_msg)
    else:
        sys.exit(0)


if __name__ == '__main__':

    options, args = getopt.getopt(sys.argv[1:], "", [])

    if options:
        usage("No options allowed")
    if len(args) != 1:
        usage("One argument only must be provided.")

    # Determine the type of path
    segments = args[0].split("/")
    if len(segments) not in (6, 8,):
        usage("Must provide a path to a calendar or addressbook object resource.")
    if segments[0] != "":
        usage("Must provide a /calendars/... or /addressbooks/... path.")
    if segments[1] not in ("calendars", "addressbooks",):
        usage("Must provide a /calendars/... or /addressbooks/... path.")
    if segments[2] != "__uids__":
        usage("Must provide a /.../__uids__/... path.")

    datatype = segments[1]
    uid = segments[5 if len(segments[3]) == 2 else 3]
    collection = segments[6 if len(segments[3]) == 2 else 4]
    resource = segments[7 if len(segments[3]) == 2 else 5]

    sqlstrings = {
        "calendars": {
            "home_table": "CALENDAR_HOME",
            "bind_table": "CALENDAR_BIND",
            "object_table": "CALENDAR_OBJECT",
            "bind_home_id": "CALENDAR_HOME_RESOURCE_ID",
            "bind_name": "CALENDAR_RESOURCE_NAME",
            "bind_id": "CALENDAR_RESOURCE_ID",
            "object_bind_id": "CALENDAR_RESOURCE_ID",
            "object_name": "RESOURCE_NAME",
            "object_data": "ICALENDAR_TEXT",
        },
        "addressbooks": {
            "home_table": "ADDRESSBOOK_HOME",
            "bind_table": "ADDRESSBOOK_BIND",
            "object_table": "ADDRESSBOOK_OBJECT",
            "bind_home_id": "ADDRESSBOOK_HOME_RESOURCE_ID",
            "bind_name": "ADDRESSBOOK_RESOURCE_NAME",
            "bind_id": "ADDRESSBOOK_RESOURCE_ID",
            "object_bind_id": "ADDRESSBOOK_RESOURCE_ID",
            "object_name": "RESOURCE_NAME",
            "object_data": "VCARD_TEXT",
        },
    }

    sqlstrings[datatype]["uid"] = uid
    sqlstrings[datatype]["collection"] = collection
    sqlstrings[datatype]["resource"] = resource

    print("""select %(object_data)s from %(object_table)s where
  %(object_name)s = '%(resource)s' and
  %(object_bind_id)s = (
    select %(bind_id)s from %(bind_table)s where
      %(bind_name)s = '%(collection)s' and
      %(bind_home_id)s = (
        select RESOURCE_ID from %(home_table)s where
          OWNER_UID = '%(uid)s'
      )
  );""" % sqlstrings[datatype])
calendarserver-9.1+dfsg/contrib/tools/statsanalysis.py

#!/usr/bin/env python
##
# Copyright (c) 2014-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
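The `__uids__` path handling in sqldata_from_path.py above accepts both the short form `/calendars/__uids__/<uid>/<collection>/<resource>` and an 8-segment form in which two 2-character fan-out levels precede the uid, shifting the uid/collection/resource indices by two. A minimal standalone sketch of that indexing rule (the function name and the sample paths are illustrative, not part of the tool):

```python
def split_uids_path(path):
    """Return (datatype, uid, collection, resource) for a __uids__ path.

    Mirrors the indexing in sqldata_from_path.py: when segments[3] is a
    two-character fan-out level, uid/collection/resource shift by two.
    """
    segments = path.split("/")
    if len(segments) not in (6, 8):
        raise ValueError("not an object resource path")
    offset = 2 if len(segments[3]) == 2 else 0
    return (segments[1], segments[3 + offset], segments[4 + offset], segments[5 + offset])


# Short, unhashed form
print(split_uids_path("/calendars/__uids__/user01/calendar/event.ics"))
# Fan-out form with two 2-character hash levels
print(split_uids_path("/addressbooks/__uids__/us/er/user01/addressbook/card.vcf"))
```

Note the same edge case as the original: a 6-segment path whose uid happens to be exactly two characters would be misread as a fan-out path.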
# You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from __future__ import print_function from json import dumps, loads from matplotlib.ticker import AutoMinorLocator import collections import getopt import matplotlib.pyplot as plt import os import sys dataset = {} def safeDivision(value, total, factor=1): return value * factor / total if total else 0 def analyze(fpath, title): """ Analyze a readStats data file. @param fpath: path of file to analyze @type fpath: L{str} @param title: title to use for data set @type title: L{str} """ print("Analyzing data for %s" % (title,)) fpath_cached = fpath + ".cached" if os.path.exists(fpath_cached): with open(fpath_cached) as f: d = loads(f.read()) dataset[title] = dict([(int(k), v) for k, v in d.items()]) else: dataset[title] = {} json = False with open(fpath) as f: line = f.next() if line.startswith("- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -"): analyzeTableFormat(f, title) elif line.startswith("{"): analyzeJSONFormat(f, line, title) json = True if not json: with open(fpath_cached, "w") as f: f.write(dumps(dataset[title])) print("Read %d data points\n" % (len(dataset[title]),)) def analyzeTableFormat(f, title): """ Analyze a "table" format output file. First line has already been tested. 
    @param f: file object to read
    @type f: L{File}
    @param title: title to use for data set
    @type title: L{str}
    """
    analyzeTableRecord(f, title)
    for line in f:
        if line.startswith("- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -"):
            analyzeTableRecord(f, title)


def analyzeTableRecord(liter, title):
    """
    Analyze one entry from the readStats data file.

    @param liter: iterator over lines in the file
    @type liter: L{iter}
    @param title: title to use for data set
    @type title: L{str}
    """
    dt = liter.next()
    time = dt[11:]
    seconds = 60 * (60 * int(time[0:2]) + int(time[3:5])) + int(time[6:8])

    for line in liter:
        if line.startswith("| Overall:"):
            overall = parseOverall(line)
            dataset[title][seconds] = overall
        elif line.startswith("| Method"):
            methods = parseMethods(liter)
            dataset[title][seconds].update(methods)
            break


def parseOverall(line):
    """
    Parse the "Overall:" stats summary line.

    @param line: the line to parse
    @type line: L{str}
    """
    splits = line.split("|")
    overall = {}

    keys = (
        ("Requests", lambda x: int(x)),
        ("Av. Requests per second", lambda x: float(x)),
        ("Av. Response", lambda x: float(x)),
        ("Av. Response no write", lambda x: float(x)),
        ("Max. Response", lambda x: float(x)),
        ("Slot Average", lambda x: float(x)),
        ("CPU Average", lambda x: float(x[:-1])),
        ("CPU Current", lambda x: float(x[:-1])),
        ("Memory Current", lambda x: float(x[:-1])),
        ("500's", lambda x: int(x)),
    )
    for ctr, item in enumerate(keys):
        key, conv = item
        overall["Overall:{}".format(key)] = conv(splits[2 + ctr].strip())

    return overall


def parseMethods(liter):
    """
    Parse the "Method Count" table from a data file entry.

    @param liter: iterator over lines in the file
    @type liter: L{iter}
    """
    while liter.next()[1] != "-":
        pass

    methods = {}
    for line in liter:
        if line[0] == "+":
            break
        splits = line.split("|")
        keys = (
            ("Count", lambda x: int(x)),
            ("Count %", lambda x: float(x[:-1])),
            ("Av. Response", lambda x: float(x)),
            ("Av. Response %", lambda x: float(x[:-1])),
            ("Total Resp. %", lambda x: float(x[:-1])),
        )
        for ctr, item in enumerate(keys):
            key, conv = item
            methods["Method:{}:{}".format(splits[1].strip(), key)] = conv(splits[2 + ctr].strip())

    return methods


def analyzeJSONFormat(f, first, title):
    """
    Analyze a JSON format output file. First line has already been tested.

    @param f: file object to read
    @type f: L{File}
    @param title: title to use for data set
    @type title: L{str}
    """
    analyzeJSONRecord(first, title)
    for line in f:
        if line:
            analyzeJSONRecord(line, title)


def analyzeJSONRecord(line, title):
    """
    Analyze a JSON record.

    @param line: line of JSON data to parse
    @type line: L{str}
    @param title: title to use for data set
    @type title: L{str}
    """
    j = loads(line)
    time = j["timestamp"][11:]
    seconds = 60 * (60 * int(time[0:2]) + int(time[3:5])) + int(time[6:8])
    dataset[title][seconds] = {}

    allstats = [stat for server, stat in j.items() if server != "timestamp"]
    analyzeJSONStatsSummary(allstats, title, seconds)
    analyzeJSONStatsMethods(allstats, title, seconds)


def analyzeJSONStatsSummary(allstats, title, seconds):
    """
    Analyze all server JSON summary stats.

    @param allstats: set of JSON stats to analyze
    @type allstats: L{dict}
    @param title: title to use for data set
    @type title: L{str}
    @param seconds: timestamp for stats record
    @type seconds: L{int}
    """
    results = collections.defaultdict(list)
    for stats in allstats:
        stat = stats["1m"]
        results["Requests"].append(stat["requests"])
        results["Av. Requests per second"].append(safeDivision(float(stat["requests"]), 60.0))
        results["Av. Response"].append(safeDivision(stat["t"], stat["requests"]))
        results["Av. Response no write"].append(safeDivision(stat["t"] - stat["t-resp-wr"], stat["requests"]))
        results["Max. Response"].append(stat["T-MAX"])
        results["Slot Average"].append(safeDivision(float(stat["slots"]), stat["requests"]))
        results["CPU Average"].append(safeDivision(stat["cpu"], stat["requests"]))
        results["CPU Current"].append(stats["system"]["cpu use"])
        results["Memory Current"].append(stats["system"]["memory percent"])
        results["500's"].append(stat["500"])

    for key in ("Requests", "Av. Requests per second", "500's",):
        dataset[title][seconds]["Overall:{}".format(key)] = sum(results[key])
    for key in ("Av. Response", "Av. Response no write", "Slot Average", "CPU Average", "CPU Current", "Memory Current",):
        dataset[title][seconds]["Overall:{}".format(key)] = sum(results[key]) / len(allstats)
    dataset[title][seconds]["Overall:Max. Response"] = max(results["Max. Response"])


def analyzeJSONStatsMethods(allstats, title, seconds):
    """
    Analyze all server JSON method stats.

    @param allstats: set of JSON stats to analyze
    @type allstats: L{dict}
    @param title: title to use for data set
    @type title: L{str}
    @param seconds: timestamp for stats record
    @type seconds: L{int}
    """
    methods = collections.defaultdict(int)
    method_times = collections.defaultdict(float)
    response_average = {}
    for stat in allstats:
        for method in stat["1m"]["method"]:
            methods[method] += stat["1m"]["method"][method]
        if "method-t" in stat["1m"]:
            for method_time in stat["1m"]["method-t"]:
                method_times[method_time] += stat["1m"]["method-t"][method_time]
                response_average[method_time] = method_times[method_time] / methods[method_time]

    total_count = sum(methods.values())
    total_avresponse = sum(response_average.values())
    total_response = sum(method_times.values())

    for method, count in methods.items():
        dataset[title][seconds]["Method:{}:Count".format(method)] = count
        dataset[title][seconds]["Method:{}:Count %".format(method)] = safeDivision(methods[method], total_count, 100.0)
        dataset[title][seconds]["Method:{}:Av. Response".format(method)] = response_average[method]
        dataset[title][seconds]["Method:{}:Av. Response %".format(method)] = safeDivision(response_average[method], total_avresponse, 100.0)
        dataset[title][seconds]["Method:{}:Total Resp. %".format(method)] = safeDivision(method_times[method], total_response, 100.0)


def plotSeries(key, ymin=None, ymax=None):
    """
    Plot the chosen dataset key for each scanned data file.

    @param key: data set key to use
    @type key: L{str}
    @param ymin: minimum value for y-axis or L{None} for default
    @type ymin: L{int} or L{float}
    @param ymax: maximum value for y-axis or L{None} for default
    @type ymax: L{int} or L{float}
    """
    titles = []
    for title, data in sorted(dataset.items(), key=lambda x: x[0]):
        titles.append(title)
        x, y = zip(*[(k / 3600.0, v[key]) for k, v in sorted(data.items(), key=lambda x: x[0]) if key in v])
        plt.plot(x, y)

    plt.xlabel("Hours")
    plt.ylabel(key)
    plt.xlim(0, 24)
    if ymin is not None:
        plt.ylim(ymin=ymin)
    if ymax is not None:
        plt.ylim(ymax=ymax)
    plt.xticks(
        (1, 4, 7, 10, 13, 16, 19, 22,),
        (18, 21, 0, 3, 6, 9, 12, 15,),
    )
    plt.minorticks_on()
    plt.gca().xaxis.set_minor_locator(AutoMinorLocator(n=3))
    plt.grid(True, "major", "x", alpha=0.5, linewidth=0.5)
    plt.grid(True, "minor", "x", alpha=0.5, linewidth=0.5)
    plt.legend(titles, 'upper left', shadow=True, fancybox=True)
    plt.show()


def usage(error_msg=None):
    if error_msg:
        print(error_msg)

    print("""Usage: statsanalysis [options]
Options:
    -h             Print this help and exit
    -s             Directory to scan for data [~/stats]

Arguments:

Description:
    This utility will analyze the output of the readStats tool and
    generate some pretty plots of data.
""") if error_msg: raise ValueError(error_msg) else: sys.exit(0) if __name__ == "__main__": scanDir = os.path.expanduser("~/stats") options, args = getopt.getopt(sys.argv[1:], "hs:", []) for option, value in options: if option == "-h": usage() elif option == "-s": scanDir = os.path.expanduser(value) else: usage("Unrecognized option: %s" % (option,)) if scanDir and len(args) != 0: usage("No arguments allowed when scanning a directory") pwd = os.getcwd() # Scan data directory and read in data from each file found fnames = os.listdir(scanDir) count = 1 for name in fnames: if name.startswith("stats_all.log") and not name.endswith(".cached"): print("Found file: %s" % (os.path.join(scanDir, name),)) trailer = name[len("stats_all.log"):] if trailer.startswith("-"): trailer = trailer[1:] initialDate = None analyze(os.path.join(scanDir, name), trailer) count += 1 # Build the set of data set keys that can be plotted and let the user # choose which one keys = set() for data in dataset.values(): for items in data.values(): keys.update([":".join(k.split(":")[:2]) for k in items.keys()]) keys = sorted(list(keys)) while True: print("Select a key:") print("\n".join(["{:2d}. {}".format(ctr + 1, key) for ctr, key in enumerate(keys)])) print(" Q. Quit\n") cmd = raw_input("Key: ") if cmd.lower() == 'q': print("\nQuitting") break try: position = int(cmd) - 1 except ValueError: print("Invalid key. Try again.\n") continue if keys[position].startswith("Method:"): key = keys[position] methodkeys = ("Count", "Count %", "Av. Response", "Av. Response %", "Total Resp. %") while True: print("Select a sub-key:") print("\n".join(["{:2d}. {}".format(ctr + 1, subkey) for ctr, subkey in enumerate(methodkeys)])) print(" B. Back\n") cmd = raw_input("SubKey: ") if cmd.lower() == 'b': key = None break try: position = int(cmd) - 1 except ValueError: print("Invalid subkey. 
Try again.\n") continue break key = "{}:{}".format(key, methodkeys[position]) if key is not None else None else: key = keys[position] if key: plotSeries(key, ymin=0, ymax=None) calendarserver-9.1+dfsg/contrib/tools/tables.py000077500000000000000000000226111315003562600217340ustar00rootroot00000000000000## # Copyright (c) 2009-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from sys import stdout import types class Table(object): """ Class that allows pretty printing ascii tables. The table supports multiline headers and footers, independent column formatting by row, alternative tab-delimited output. """ class ColumnFormat(object): """ Defines the format string, justification and span for a column. 
""" LEFT_JUSTIFY = 0 RIGHT_JUSTIFY = 1 CENTER_JUSTIFY = 2 def __init__(self, strFormat="%s", justify=LEFT_JUSTIFY, span=1): self.format = strFormat self.justify = justify self.span = span def __init__(self, table=None): self.headers = [] self.headerColumnFormats = [] self.rows = [] self.footers = [] self.footerColumnFormats = [] self.columnCount = 0 self.defaultColumnFormats = [] self.columnFormatsByRow = {} if table: self.setData(table) def setData(self, table): self.hasTitles = True self.headers.append(table[0]) self.rows = table[1:] self._getMaxColumnCount() def setDefaultColumnFormats(self, columnFormats): self.defaultColumnFormats = columnFormats def addDefaultColumnFormat(self, columnFormat): self.defaultColumnFormats.append(columnFormat) def setHeaders(self, rows, columnFormats=None): self.headers = rows self.headerColumnFormats = columnFormats if columnFormats else [None, ] * len(self.headers) self._getMaxColumnCount() def addHeader(self, row, columnFormats=None): self.headers.append(row) self.headerColumnFormats.append(columnFormats) self._getMaxColumnCount() def addHeaderDivider(self, skipColumns=()): self.headers.append((None, skipColumns,)) self.headerColumnFormats.append(None) def setFooters(self, row, columnFormats=None): self.footers = row self.footerColumnFormats = columnFormats if columnFormats else [None, ] * len(self.footers) self._getMaxColumnCount() def addFooter(self, row, columnFormats=None): self.footers.append(row) self.footerColumnFormats.append(columnFormats) self._getMaxColumnCount() def addRow(self, row=None, columnFormats=None): self.rows.append(row) if columnFormats: self.columnFormatsByRow[len(self.rows) - 1] = columnFormats self._getMaxColumnCount() def addDivider(self, skipColumns=()): self.rows.append((None, skipColumns,)) def printTable(self, os=stdout): maxWidths = self._getMaxWidths() self.printDivider(os, maxWidths, False) if self.headers: for header, format in zip(self.headers, self.headerColumnFormats): self.printRow(os, 
header, self._getHeaderColumnFormat(format), maxWidths) self.printDivider(os, maxWidths) for ctr, row in enumerate(self.rows): self.printRow(os, row, self._getColumnFormatForRow(ctr), maxWidths) if self.footers: self.printDivider(os, maxWidths, double=True) for footer, format in zip(self.footers, self.footerColumnFormats): self.printRow(os, footer, self._getFooterColumnFormat(format), maxWidths) self.printDivider(os, maxWidths, False) def printRow(self, os, row, format, maxWidths): if row is None or type(row) is tuple and row[0] is None: self.printDivider(os, maxWidths, skipColumns=row[1] if type(row) is tuple else ()) else: if len(row) != len(maxWidths): row = list(row) row.extend([""] * (len(maxWidths) - len(row))) t = "|" ctr = 0 while ctr < len(row): startCtr = ctr maxWidth = 0 for _ignore_span in xrange(format[startCtr].span if format else 1): maxWidth += maxWidths[ctr] ctr += 1 maxWidth += 3 * ((format[startCtr].span - 1) if format else 0) text = self._columnText(row, startCtr, format, width=maxWidth) t += " " + text + " |" t += "\n" os.write(t) def printDivider(self, os, maxWidths, intermediate=True, double=False, skipColumns=()): t = "|" if intermediate else "+" for widthctr, width in enumerate(maxWidths): if widthctr in skipColumns: c = " " else: c = "=" if double else "-" t += c * (width + 2) t += "+" if widthctr < len(maxWidths) - 1 else ("|" if intermediate else "+") t += "\n" os.write(t) def printTabDelimitedData(self, os=stdout, footer=True): if self.headers: titles = [""] * len(self.headers[0]) for row, header in enumerate(self.headers): for col, item in enumerate(header): titles[col] += (" " if row and item else "") + item self.printTabDelimitedRow(os, titles, self._getHeaderColumnFormat(self.headerColumnFormats[0])) for ctr, row in enumerate(self.rows): self.printTabDelimitedRow(os, row, self._getColumnFormatForRow(ctr)) if self.footers and footer: for footer in self.footers: self.printTabDelimitedRow(os, footer, 
self._getFooterColumnFormat(self.footerColumnFormats[0])) def printTabDelimitedRow(self, os, row, format): if row is None: row = [""] * self.columnCount if len(row) != self.columnCount: row = list(row) row.extend([""] * (self.columnCount - len(row))) textItems = [self._columnText(row, ctr, format) for ctr in xrange((len(row)))] os.write("\t".join(textItems) + "\n") def _getMaxColumnCount(self): self.columnCount = 0 if self.headers: for header in self.headers: self.columnCount = max(self.columnCount, len(header) if header else 0) for row in self.rows: self.columnCount = max(self.columnCount, len(row) if row else 0) if self.footers: for footer in self.footers: self.columnCount = max(self.columnCount, len(footer) if footer else 0) def _getMaxWidths(self): maxWidths = [0] * self.columnCount if self.headers: for header, format in zip(self.headers, self.headerColumnFormats): self._updateMaxWidthsFromRow(header, self._getHeaderColumnFormat(format), maxWidths) for ctr, row in enumerate(self.rows): self._updateMaxWidthsFromRow(row, self._getColumnFormatForRow(ctr), maxWidths) if self.footers: for footer, format in zip(self.footers, self.footerColumnFormats): self._updateMaxWidthsFromRow(footer, self._getFooterColumnFormat(format), maxWidths) return maxWidths def _updateMaxWidthsFromRow(self, row, format, maxWidths): if row and (type(row) is not tuple or row[0] is not None): ctr = 0 while ctr < len(row): text = self._columnText(row, ctr, format) startCtr = ctr for _ignore_span in xrange(format[startCtr].span if format else 1): maxWidths[ctr] = max(maxWidths[ctr], len(text) / (format[startCtr].span if format else 1)) ctr += 1 def _getHeaderColumnFormat(self, format): if format: return format else: justify = Table.ColumnFormat.CENTER_JUSTIFY if len(self.headers) == 1 else Table.ColumnFormat.LEFT_JUSTIFY return [Table.ColumnFormat(justify=justify)] * self.columnCount def _getFooterColumnFormat(self, format): if format: return format else: return self.defaultColumnFormats def 
_getColumnFormatForRow(self, ctr): if ctr in self.columnFormatsByRow: return self.columnFormatsByRow[ctr] else: return self.defaultColumnFormats def _columnText(self, row, column, format, width=0): if row is None or column >= len(row): return "" colData = row[column] if colData is None: colData = "" columnFormat = format[column] if format and column < len(format) else Table.ColumnFormat() if type(colData) in types.StringTypes: text = colData else: text = columnFormat.format % colData if width: if columnFormat.justify == Table.ColumnFormat.LEFT_JUSTIFY: text = text.ljust(width) elif columnFormat.justify == Table.ColumnFormat.RIGHT_JUSTIFY: text = text.rjust(width) elif columnFormat.justify == Table.ColumnFormat.CENTER_JUSTIFY: text = text.center(width) return text calendarserver-9.1+dfsg/contrib/tools/test_protocolanalysis.py000066400000000000000000000055711315003562600251310ustar00rootroot00000000000000## # Copyright (c) 2009-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## from twisted.python.filepath import FilePath from twisted.trial.unittest import TestCase from protocolanalysis import CalendarServerLogAnalyzer class UserInteractionTests(TestCase): """ Tests for analysis of the way users interact with each other, done by CalendarServerLogAnalyzer. 
""" def test_propfindOtherCalendar(self): """ L{CalendarServerLogAnalyzer}'s C{otherUserCalendarRequests} attribute is populated with data about the frequency with which users access a calendar belonging to a different user. The C{"PROPFIND Calendar Home"} key is associated with a C{dict} with count bucket labels as keys and counts of how many users PROPFINDs on calendars belonging to a number of other users which falls into that bucket. """ format = ( '17.128.126.80 - %(user)s [27/Sep/2010:05:13:17 +0000] ' '"PROPFIND /calendars/__uids__/%(other)s/ HTTP/1.1" 207 ' '21274 "-" "DAVKit/4.0.3 (732); CalendarStore/4.0.3 (991); ' 'iCal/4.0.3 (1388); Mac OS X/10.6.4 (10F569)" i=16 t=199.1 or=2\n') path = FilePath(self.mktemp()) path.setContent( # A user accessing his own calendar format % dict(user="user01", other="user01") + # A user accessing the calendar of one other person format % dict(user="user02", other="user01") + # A user accessing the calendar of one other person twice format % dict(user="user03", other="user01") + format % dict(user="user03", other="user01") + # A user accessing the calendars of two other people format % dict(user="user04", other="user01") + format % dict(user="user04", other="user02") + # Another user accessing the calendars of two other people format % dict(user="user05", other="user03") + format % dict(user="user05", other="user04")) analyzer = CalendarServerLogAnalyzer(startHour=22, endHour=24) analyzer.analyzeLogFile(path.path) self.assertEquals( analyzer.summarizeUserInteraction("PROPFIND Calendar Home"), {"(a):0": 1, "(b):1": 2, "(c):2": 2}) test_propfindOtherCalendar.todo = ( "needs fixing" ) calendarserver-9.1+dfsg/contrib/tools/trace.d000077500000000000000000000054611315003562600213570ustar00rootroot00000000000000#!/usr/sbin/dtrace -Zs /* * trace.d - measure Python on-CPU times and flow for functions. * * This traces Python activity from a specified process. 
 * USAGE: trace.d <pid>
 *
 * FIELDS:
 *    FILE     Filename of the Python program
 *    TYPE     Type of call (func/total)
 *    NAME     Name of call (function name)
 *    TOTAL    Total on-CPU time for calls (us)
 *
 * Filename and function names are printed if available.
 *
 * COPYRIGHT: Copyright (c) 2007 Brendan Gregg.
 *
 * CDDL HEADER START
 *
 *  The contents of this file are subject to the terms of the
 *  Common Development and Distribution License, Version 1.0 only
 *  (the "License"). You may not use this file except in compliance
 *  with the License.
 *
 *  You can obtain a copy of the license at Docs/cddl1.txt
 *  or http://www.opensolaris.org/os/licensing.
 *  See the License for the specific language governing permissions
 *  and limitations under the License.
 *
 * CDDL HEADER END
 *
 * 09-Sep-2007  Brendan Gregg   Created original py_ scripts.
 *
 * This is a combination of py_flow.d and py_cputime.d that takes a pid
 * as an argument and runs for 10 seconds.
 */

#pragma D option quiet
#pragma D option bufsize=100m

dtrace:::BEGIN
{
	printf("Tracing... Hit Ctrl-C to end.\n");
}

python*:::function-entry
/pid == $1/
{
	self->depth++;
	self->exclude[self->depth] = 0;
	self->function[self->depth] = vtimestamp;
	printf("%s %s (%s:%d)\n", "->", copyinstr(arg1), copyinstr(arg0), arg2);
}

python*:::function-return
/pid == $1 && self->function[self->depth]/
{
	this->oncpu_incl = vtimestamp - self->function[self->depth];
	this->oncpu_excl = this->oncpu_incl - self->exclude[self->depth];
	self->function[self->depth] = 0;
	self->exclude[self->depth] = 0;
	this->file = basename(copyinstr(arg0));
	this->name = copyinstr(arg1);

	@num[this->file, "func", this->name] = count();
	@num["-", "total", "-"] = count();
	@types_incl[this->file, "func", this->name] = sum(this->oncpu_incl);
	@types_excl[this->file, "func", this->name] = sum(this->oncpu_excl);
	@types_excl["-", "total", "-"] = sum(this->oncpu_excl);

	self->depth--;
	self->exclude[self->depth] += this->oncpu_incl;
	printf("%s %s (%s:%d)\n", "<-", copyinstr(arg1), copyinstr(arg0), arg2);
}

profile:::tick-10s
{
	exit(0);
}

dtrace:::END
{
	printf("\nCount,\n");
	printf("   %-20s %-10s %-32s %8s\n", "FILE", "TYPE", "NAME", "COUNT");
	printa("   %-20s %-10s %-32s %@8d\n", @num);

	normalize(@types_excl, 1000);
	printf("\nExclusive function on-CPU times (us),\n");
	printf("   %-20s %-10s %-32s %8s\n", "FILE", "TYPE", "NAME", "TOTAL");
	printa("   %-20s %-10s %-32s %@8d\n", @types_excl);

	normalize(@types_incl, 1000);
	printf("\nInclusive function on-CPU times (us),\n");
	printf("   %-20s %-10s %-32s %8s\n", "FILE", "TYPE", "NAME", "TOTAL");
	printa("   %-20s %-10s %-32s %@8d\n", @types_incl);
}

calendarserver-9.1+dfsg/contrib/webpoll/Makefile

##
# Copyright (c) 2013-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##

webpoll:
	mkdir -p webapp/js/3rdparty
	curl http://code.jquery.com/jquery-2.1.3.js -o webapp/js/3rdparty/jquery-2.1.3.js
	curl http://code.jquery.com/ui/1.11.2/jquery-ui.js -o webapp/js/3rdparty/jquery-ui-1.11.2.js
	curl -L https://github.com/douglascrockford/JSON-js/raw/master/json2.js -o webapp/js/3rdparty/json2.js
	curl http://trentrichardson.com/examples/timepicker/jquery-ui-timepicker-addon.js -o webapp/js/3rdparty/datetimepicker.js
	mkdir -p webapp/css/3rdparty
	curl http://trentrichardson.com/examples/timepicker/jquery-ui-timepicker-addon.css -o webapp/css/3rdparty/datetimepicker.css
	curl http://jqueryui.com/resources/download/jquery-ui-themes-1.11.2.zip -o /tmp/jquery-ui-themes-1.11.2.zip
	unzip /tmp/jquery-ui-themes-1.11.2.zip jquery-ui-themes-1.11.2/themes/cupertino/* -d /tmp
	mv /tmp/jquery-ui-themes-1.11.2/themes/cupertino webapp/css/3rdparty
	rm -rf /tmp/jquery-ui-themes-1.11.2

clean:
	rm -rf webapp/js/3rdparty
	rm -rf webapp/css/3rdparty

calendarserver-9.1+dfsg/contrib/webpoll/README.txt

WebPoll Package for CalendarServer
==================================

*** IMPORTANT: this is a prototype/demo and is not intended for production use

WebPoll is a set of web-browser JavaScript files that implement a prototype
webapp for a client that supports the new iCalendar VPOLL component for doing
"consensus" scheduling - i.e., a scheduling process where users can vote on
which of a series of potential events are best for them. The VPOLL technology
has been developed by CalConnect to open up new methods of scheduling with
iCalendar that allows for not only polls, but potentially booking systems and
much more.

The WebPoll webapp uses jQuery and jQueryUI (which have to be downloaded
separately). It uses AJAX calls to make CalDAV queries to the server hosting
the webapp. The CalDAV server must support jCal (iCalendar-in-JSON) format
calendar data.

Use with CalendarServer
=======================

Use "make webpoll" in this directory to download all the relevant
dependencies.

In the CalendarServer directory do
"./run -f ./contrib/webpoll/caldavd-test-webpoll.plist" to run the server.

In a browser navigate to "/webpoll".

Using the webapp
================

TBD

calendarserver-9.1+dfsg/contrib/webpoll/caldavd-test-webpoll.plist

[plist markup lost in extraction; the surviving keys and values are:
ImportConfig -> ./conf/caldavd-test.plist; Aliases -> url /webpoll,
path ./contrib/webpoll/webapp; SupportedComponents -> VEVENT, VTODO, VPOLL.]

calendarserver-9.1+dfsg/contrib/webpoll/webapp/css/webpoll.css

@CHARSET "UTF-8";

html, body {
	font: 80% "Trebuchet MS", sans-serif;
}

/* Main Panel settings */
#main-panel {
	position: absolute;
	left: 0px;
	top: 0px;
	right: 0px;
	bottom: 0px;
}

/* Title Text */
#title {
	font: 14pt "Trebuchet MS", sans-serif;
	position: absolute;
	margin-top: 5px;
	margin-left: 20px;
	height: 32px;
	left: 0px;
	top: 0px;
}

#loading {
	font: 14pt "Trebuchet MS", sans-serif;
	position: absolute;
	margin-top: 5px;
	margin-right: 20px;
	height: 20px;
	width: 180px;
	right: 0px;
	top: 0px;
}

#progressbar {
	position:
absolute; width: 100px; height: 20px; top: 0px; right: 0px; } /* Sidebar Panel */ #sidepanel { position: absolute; width: 210px; left: 5px; top: 32px; bottom: 0px; border-right-style:solid; border-width: 1px; border-color:#ccc; } #sidebar { position: absolute; left: 0px; right: 2px; top: 0px; bottom: 20px; } .sidebar-title { position: relative; font: 12pt "Trebuchet MS", sans-serif; } #sidebar-new-poll-count, #sidebar-vote-poll-count { position: absolute; background-color:#3366ff; font: 8pt "Trebuchet MS", sans-serif; color:#fff; text-align: center; border:2px solid; border-color: #fff; border-radius: 30px; right: 10px; padding-top: 2px; padding-bottom: 2px; padding-left: 5px; padding-right: 5px; } /* Refresh Button */ #refresh { position: absolute; right: 5px; bottom: 4px; } /* Detail Panel */ #detail { position: absolute; left: 220px; right: 5px; top: 32px; bottom: 0px; } #detail-nocontent { position: absolute; left: 50px; top: 50px; font-size: 2.0em; } #editpoll-description { position: absolute; left: 4px; right: 0px; top: 0px; height: 32px; } #editpoll-title-edit-panel { float: left; font-size: 1.5em; } #editpoll-title-panel { float: left; font-size: 1.5em; } #editpoll-organizer-panel { position: relative; float: left; left: 20px; font-size: 1.5em; } #editpoll-status-panel { position: relative; float: left; left: 40px; font-size: 1.5em; } #editpoll-title, #editpoll-organizer, #editpoll-status { font-weight: bold; } #editpoll-details { position: absolute; left: 0px; right: 0px; top: 28px; } #sidebar-owned, #editpoll-tabs { margin-bottom: 4px; } #editpoll-delete { position: relative; left: 10px; } #editpoll-autofill { position: relative; left: 50px; } .event,.voter { position: relative; background-color:#eee; box-shadow: 5px 5px 2px #888; margin: 8px; padding: 4px; } .edit-datetime { margin: 4px; } .edit-datetime label, .edit-voter label { display: inline-block; width: 40px; } .voter-address { width: 250px; } .input-remove { position: absolute; right: 4px; 
bottom: 4px; } .active-voter { font-weight:bold; } #editpoll-resulttable { font: 11pt "Trebuchet MS", sans-serif; border-collapse:collapse; border: 1px solid #ccc; } #editpoll-resulttable thead, #editpoll-resulttable tfoot { text-align:right; } #editpoll-resulttable td { border: 1px solid #ccc; padding: 4px; } #editpoll-resulttable tfoot { border: 2px solid; } .response-btns, .response-btns label { height: 24px; } #winner-text { position: relative; } #winner-icon-left { position: absolute; left: 0px; top: 50%; margin-top: -8px; } #winner-icon-right { position: absolute; right: 0px; top: 50%; margin-top: -8px; } .center-td { text-align:center; } .poll-winner-td { background-color:#D9D9D9; } .best-td { background-color:#CCCCFF; } .ok-td { background-color:#CCE6CC; } .maybe-td { background-color:#FFFFCC; } .no-td { background-color:#FFCCCC; } .no-response-td { background-color:white; } .no-close .ui-dialog-titlebar-close { display: none; } #hover-grid { position: relative; width: 100%; margin: 10px; border-collapse:collapse; border: 1px solid #ccc; } #hover-grid tr { position: relative; height: 30px; } #hover-grid td { border: 1px solid #ccc; padding: 4px; } #hover-grid .hover-grid-td-time { width: 70px; text-align: right; } .hover-event { position: absolute; font: 8pt "Trebuchet MS", sans-serif; left: 81px; width: 250px; padding-top: 2px; padding-bottom: 2px; padding-left: 5px; padding-right: 5px; } #response-key { position: relative; width: 100px; top: 20px; } calendarserver-9.1+dfsg/contrib/webpoll/webapp/index.html000077500000000000000000000067321315003562600236750ustar00rootroot00000000000000 WebPoll
[index.html markup lost in extraction; only its visible text survives: the
page title "WebPoll", a "Loading" progress indicator, the detail-panel
placeholder "Select a poll in the side-bar to view its details.", and the
edit-poll field labels "Title:", "Organizer:", "Status:", and
"Possible Responses:".]
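The webpoll README above notes that the CalDAV server must supply jCal (iCalendar-in-JSON) calendar data, and the caldav.js code below posts bodies with the `application/calendar+json` content type. As a rough illustration only, this Python sketch builds the jCal shape of a calendar containing a VPOLL component; the component structure follows the jCal spec (RFC 7265), but all property values here (`uid`, `summary`, prodid) are invented for the example.

```python
import json

# jCal represents each component as a 3-element array:
# [component-name, [properties...], [sub-components...]]
# Each property is [name, parameters, value-type, value].
jcal = [
    "vcalendar",
    [
        ["version", {}, "text", "2.0"],
        ["prodid", {}, "text", "-//Example//WebPoll sketch//EN"],
    ],
    [
        [
            "vpoll",
            [
                ["uid", {}, "text", "poll-1@example.com"],
                ["summary", {}, "text", "Team lunch?"],
            ],
            [],  # candidate sub-components (e.g. VEVENT choices) would go here
        ],
    ],
]

# Serialize as a client would before sending it to the server with
# Content-Type: application/calendar+json.
body = json.dumps(jcal)
component_name = json.loads(body)[0]
```

The nested-array form makes it cheap for a browser client to walk the data with plain JSON tooling instead of an iCalendar parser, which is why the webapp requests jCal rather than text/calendar.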
calendarserver-9.1+dfsg/contrib/webpoll/webapp/js/000077500000000000000000000000001315003562600223015ustar00rootroot00000000000000calendarserver-9.1+dfsg/contrib/webpoll/webapp/js/caldav.js000066400000000000000000000770151315003562600241030ustar00rootroot00000000000000/** ## # Copyright (c) 2013-2017 Apple Inc. All rights reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. ## */ /** * Classes to model a CalDAV service using XHR requests. */ // Do an AJAX request function Ajax(params) { $.extend(params, { processData : false, crossDomain: true, username: gSession.auth, password: gSession.auth }) return $.ajax(params); } // A generic PROPFIND request function Propfind(url, depth, props) { var nsmap = {}; addNamespace("D", nsmap); var propstr = addElements(props, nsmap); return Ajax({ url : url, type : "PROPFIND", contentType : "application/xml; charset=utf-8", headers : { "Prefer" : "return=minimal", "Depth" : depth }, data : '' + '' + '' + propstr + '' + '', }); } // A calendar-query REPORT request for VPOLLs only function PollQueryReport(url, props) { var nsmap = {}; addNamespace("D", nsmap); addNamespace("C", nsmap); var propstr = addElements(props, nsmap); propstr += ''; return Ajax({ url : url, type : "REPORT", contentType : "application/xml; charset=utf-8", headers : { "Prefer" : "return=minimal", "Depth" : "0" }, data : '' + '' + '' + propstr + '' + '' + '' + '' + '' + '' + '', }); } // A calendar-query REPORT request for VEVENTs in time-range, expanded function 
TimeRangeExpandedSummaryQueryReport(url, start, end) { var nsmap = {}; addNamespace("D", nsmap); addNamespace("C", nsmap); return Ajax({ url : url, type : "REPORT", contentType : "application/xml; charset=utf-8", headers : { "Prefer" : "return=minimal", "Depth" : "0" }, data : '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' + '' }); } // A freebusy POST request function Freebusy(url, fbrequest) { return Ajax({ url : url, type : "POST", contentType : "application/calendar+json; charset=utf-8", data : fbrequest.toString(), }); } // A calendar-user-search REPORT request function UserSearchReport(url, text) { var nsmap = {}; addNamespace("D", nsmap); addNamespace("C", nsmap); addNamespace("CS", nsmap); return Ajax({ url : url, type : "REPORT", contentType : "application/xml; charset=utf-8", headers : { "Depth" : "0" }, data : '' + '' + '' + xmlEncode(text) + '' + '20' + '' + '' + '' + '' + '', }); } // Multistatus response processing MultiStatusResponse = function(response, parent_url) { this.response = response; this.parentURL = parent_url; } // Get property text value from the overall multistatus MultiStatusResponse.prototype.getPropertyText = function(prop) { return this._getPropertyText($(this.response), prop, "D:multistatus/D:response/D:propstat/D:prop/"); } // Get property href text value from the overall multistatus MultiStatusResponse.prototype.getPropertyHrefTextList = function(prop) { return this._getPropertyHrefTextList($(this.response), prop, "D:multistatus/D:response/D:propstat/D:prop/"); } // Get property text value from the specified response node MultiStatusResponse.prototype.getResourcePropertyText = function(response_node, prop) { return this._getPropertyText(response_node, prop, "D:propstat/D:prop/"); } // Get property href text value from the specified response node MultiStatusResponse.prototype.getResourcePropertyHrefTextList = function(response_node, prop) { return 
this._getPropertyHrefTextList(response_node, prop, "D:propstat/D:prop/"); } // Get property text value from the specified node MultiStatusResponse.prototype._getPropertyText = function(node, prop, prefix) { return getElementText(node, prefix + prop); } // Get all property href text values as an array from the specified node MultiStatusResponse.prototype._getPropertyHrefTextList = function(node, prop, prefix) { var items = findElementPath(node, prefix + prop + "/D:href"); if (items.length == 0) { return null; } else { var results = [] $.each(items, function(index, item) { results.push(item.text()); }); return results; } } // Apply specified function to each response (other than the parent) MultiStatusResponse.prototype.doToEachChildResource = function(doIt) { var items = findElementPath($(this.response), "D:multistatus/D:response"); var msr = this; $.each(items, function(index, item) { var href = getElementText(item, "D:href"); if (!compareURLs(href, msr.parentURL)) { doIt(href, item); } }); } // Schedule response processing ScheduleResponse = function(response) { this.response = response; } // Apply specified function to each recipient response ScheduleResponse.prototype.doToEachRecipient = function(doIt) { var items = findElementPath($(this.response), "C:schedule-response/C:response"); $.each(items, function(index, item) { doIt(getElementText(item, "C:recipient/D:href"), item); }); } // A CalDAV session for a specific principal CalDAVSession = function(user) { this.currentPrincipal = null; //this.host = "http://172.16.105.104:8080/ucaldav"; //this.host = "https://cyrus.local:8543"; this.host = ""; if (user === undefined) { this.auth = null; } else { this.auth = user; } } // Setup session CalDAVSession.prototype.init = function(whenDone) { this.currentUserPropfind(whenDone); } gWellKnown = "/.well-known/caldav"; //gWellKnown = "/"; // Discover current principal from /.well-known/caldav, then load the principal data CalDAVSession.prototype.currentUserPropfind = 
function(whenDone) { var session = this; Propfind(joinURLs(this.host, gWellKnown), "0", [ "D:current-user-principal" ]).done(function(response) { var msr = new MultiStatusResponse(response, gWellKnown); var href = msr.getPropertyText("D:current-user-principal/D:href"); if (href == null) { alert("Could not determine current user."); } else { // Load the principal session.currentPrincipal = new CalDAVPrincipal(href); session.currentPrincipal.init(whenDone); } }).fail(function(jqXHR, status, error) { alert(status + error); }); } // Search for calendar users matching a string CalDAVSession.prototype.calendarUserSearch = function(item, whenDone) { UserSearchReport(joinURLs(this.host, "/principals/"), item).done(function(response) { var msr = new MultiStatusResponse(response, "/principals/"); var results = []; msr.doToEachChildResource(function(url, response_node) { var cn = msr.getResourcePropertyText(response_node, "D:displayname"); var cuaddr = CalDAVPrincipal.bestCUAddress(msr.getResourcePropertyHrefTextList(response_node, "C:calendar-user-address-set")); if (cuaddr) { results.push((cn ? cn + " " : "") + "<" + cuaddr + ">"); } }); if (whenDone) { whenDone(results.sort()); } }).fail(function(jqXHR, status, error) { alert(status + error); }); } // Represents a calendar user on the server CalDAVPrincipal = function(url) { this.url = url; this.cn = null; this.home_url = null; this.inbox_url = null; this.outbox_url = null; this.calendar_user_addresses = []; this.default_address = null; this.poll_calendars = []; this.event_calendars = []; } // Return the best calendar user address from the set. Prefer mailto over urn over anything else. 
CalDAVPrincipal.bestCUAddress = function(cuaddress_set) { var results = $.grep(cuaddress_set, function(cuaddr, index) { return cuaddr.startsWith("mailto:"); }); if (results.length == 0) { results = $.grep(cuaddress_set, function(cuaddr, index) { return cuaddr.startsWith("urn:uuid:"); }); } if (results.length != 0) { return results[0]; } return null; } // Load principal details for this user, then load all the calendars CalDAVPrincipal.prototype.init = function(whenDone) { // Get useful properties var principal = this; Propfind(joinURLs(gSession.host, principal.url), "0", [ "D:displayname", "C:calendar-home-set", "C:schedule-inbox-URL", "C:schedule-outbox-URL", "C:calendar-user-address-set" ]).done(function(response) { var msr = new MultiStatusResponse(response, principal.url); principal.cn = msr.getPropertyText("D:displayname"); principal.home_url = msr.getPropertyText("C:calendar-home-set/D:href"); principal.inbox_url = msr.getPropertyText("C:schedule-inbox-URL/D:href"); principal.outbox_url = msr.getPropertyText("C:schedule-outbox-URL/D:href"); principal.calendar_user_addresses = msr.getPropertyHrefTextList("C:calendar-user-address-set"); // Load the calendars principal.loadCalendars(whenDone); }).fail(function(jqXHR, status, error) { alert(status + error); }); } // For a reload of all calendar data CalDAVPrincipal.prototype.refresh = function(whenDone) { this.poll_calendars = []; this.event_calendars = []; this.loadCalendars(whenDone); } // The most suitable calendar user address for the user CalDAVPrincipal.prototype.defaultAddress = function() { if (!this.default_address) { this.default_address = CalDAVPrincipal.bestCUAddress(this.calendar_user_addresses); } return this.default_address; } // Indicate whether the specified calendar-user-address matches the current user CalDAVPrincipal.prototype.matchingAddress = function(cuaddr) { return this.calendar_user_addresses.indexOf(cuaddr) != -1; } // Load all VPOLL and VEVENT capable calendars for this user 
CalDAVPrincipal.prototype.loadCalendars = function(whenDone) {
    var this_principal = this;
    Propfind(joinURLs(gSession.host, this.home_url), "1", [
        "D:resourcetype",
        "D:displayname",
        "D:add-member",
        "C:supported-calendar-component-set"
    ]).done(function(response) {
        var msr = new MultiStatusResponse(response, this_principal.home_url);
        msr.doToEachChildResource(function(url, response_node) {
            if (!hasElementPath(response_node, "D:propstat/D:prop/D:resourcetype/D:collection") ||
                !hasElementPath(response_node, "D:propstat/D:prop/D:resourcetype/C:calendar"))
                return;

            // Separate out support for VPOLL and VEVENT
            var comps = findElementPath(response_node, "D:propstat/D:prop/C:supported-calendar-component-set/C:comp");
            var has_vpoll = true;
            var has_vevent = true;
            if (comps.length != 0) {
                has_vpoll = false;
                has_vevent = false;
                $.each(comps, function(index, comp) {
                    if (comp.attr("name") == "VPOLL") {
                        has_vpoll = true;
                    }
                    if (comp.attr("name") == "VEVENT") {
                        has_vevent = true;
                    }
                });
            }

            // Build the calendar and assign to appropriate arrays
            var cal = new CalendarCollection(url);
            cal.displayname = getElementText(response_node, "D:propstat/D:prop/D:displayname");
            if (!cal.displayname) {
                cal.displayname = basenameURL(url);
            }
            cal.addmember = getElementText(response_node, "D:propstat/D:prop/D:add-member/D:href");
            if (has_vpoll) {
                this_principal.poll_calendars.push(cal);
            }
            if (has_vevent) {
                this_principal.event_calendars.push(cal);
            }
        });

        // Load the resources from all VPOLL calendars
        this_principal.loadResources(whenDone);
    }).fail(function(jqXHR, status, error) {
        alert(status + error);
    });
}

// Start loading all VPOLL resources
CalDAVPrincipal.prototype.loadResources = function(whenDone) {
    var principal = this;
    var process = [].concat(principal.poll_calendars);
    process.reverse();
    this.loadCalendarResources(whenDone, process);
}

// Iteratively load all resources from VPOLL calendars
CalDAVPrincipal.prototype.loadCalendarResources = function(whenDone, process) {
    var
this_principal = this; var calendar = process.pop(); calendar.loadResources(function() { if (process.length != 0) { this_principal.loadCalendarResources(whenDone, process); } else { this_principal.addResources(whenDone); } }); } // After all resources are loaded, add each VPOLL to view controller CalDAVPrincipal.prototype.addResources = function(whenDone) { // Scan each resource to see if it is organized or not $.each(this.poll_calendars, function(index1, calendar) { $.each(calendar.resources, function(index2, resource) { var poll = new Poll(resource); gViewController.addPoll(poll); }) }); if (whenDone) { whenDone(); } } // Do a freebusy query for the specified user for the specified time range and indicate whether busy or not CalDAVPrincipal.prototype.isBusy = function(user, start, end, whenDone) { var fbrequest = jcal.newCalendar(); fbrequest.newProperty("method", "REQUEST"); var fb = fbrequest.newComponent("vfreebusy", true); fb.newProperty( "organizer", this.defaultAddress(), { "cn" : this.cn }, "cal-address" ); fb.newProperty( "attendee", user, {}, "cal-address" ); fb.newProperty( "dtstart", jcaldate.jsDateTojCal(start), {}, "date-time" ); fb.newProperty( "dtend", jcaldate.jsDateTojCal(end), {}, "date-time" ); Freebusy( joinURLs(gSession.host, this.outbox_url), fbrequest ).done(function(response) { var sched = new ScheduleResponse(response); var result = null; sched.doToEachRecipient(function(url, response_node) { var caldata = getElementText(response_node, "C:calendar-data"); if (caldata) { caldata = jcal.fromString(caldata); if (caldata.mainComponent().name() == "vfreebusy") { // Any FREEBUSY property means busy sometime during the requested period result = caldata.mainComponent().hasProperty("freebusy") } } }); if (whenDone) { whenDone(result) } }).fail(function(jqXHR, status, error) { alert(status + error); }); } // Get a summary of events for the specified time-range CalDAVPrincipal.prototype.eventsForTimeRange = function(start, end, whenDone) { var 
this_principal = this; TimeRangeExpandedSummaryQueryReport( joinURLs(gSession.host, this.event_calendars[0].url), start, end ).done(function(response) { var results = [] var msr = new MultiStatusResponse(response, this_principal.event_calendars[0].url); msr.doToEachChildResource(function(url, response_node) { var caldata = jcal.fromString(msr.getResourcePropertyText(response_node, "C:calendar-data")); results.push(new CalendarObject(caldata)); }); if (whenDone) { whenDone(results); } }).fail(function(jqXHR, status, error) { alert(status + error); }); } // A calendar collection on the server CalendarCollection = function(url) { this.url = url; this.displayname = null; this.addmember = null; this.resources = []; } // Load a calendar's VPOLL resources CalendarCollection.prototype.loadResources = function(whenDone) { var calendar = this; PollQueryReport(joinURLs(gSession.host, calendar.url), [ "D:getetag" ]).done(function(response) { var msr = new MultiStatusResponse(response, calendar.url); msr.doToEachChildResource(function(url, response_node) { var etag = msr.getResourcePropertyText(response_node, "D:getetag"); var caldata = jcal.fromString(msr.getResourcePropertyText(response_node, "C:calendar-data")); calendar.resources.push(new CalendarResource(calendar, url, etag, caldata)); }); if (whenDone) { whenDone(); } }).fail(function(jqXHR, status, error) { alert(status + error); }); } // A calendar resource object CalendarResource = function(calendar, url, etag, data) { this.calendar = calendar; this.url = url; this.etag = etag; this.object = (data instanceof CalendarObject ? 
data : new CalendarObject(data));
}

// Create a brand new poll and add to the default calendar
CalendarResource.newPoll = function(title) {
    // Add to calendar
    var resource = new CalendarResource(gSession.currentPrincipal.poll_calendars[0], null, null, CalendarPoll.newPoll(title));
    resource.calendar.resources.push(resource);
    return resource;
}

// Save this resource to the server - might be brand new or an update
CalendarResource.prototype.saveResource = function(whenDone) {
    if (!this.object.changed())
        return;

    // Always set to accepted
    if (this.object.mainComponent().data.name() == "vpoll") {
        this.object.mainComponent().acceptInvite();
    }

    if (!this.url) {
        if (this.calendar.addmember) {
            // Do POST;add-member
            Ajax({
                context : this,
                url : joinURLs(gSession.host, this.calendar.addmember),
                type : "POST",
                contentType : "application/calendar+json; charset=utf-8",
                headers : {
                    "Prefer" : "return=representation",
                    "Accept" : "application/calendar+json"
                },
                data : this.object.toString(),
            }).done(function(response, textStatus, jqXHR) {
                // Get Content-Location header as new url
                this.url = jqXHR.getResponseHeader("Content-Location");

                // Check for returned data and ETag
                this.etag = jqXHR.getResponseHeader("Etag");
                this.object = new CalendarObject(response);
                if (whenDone) {
                    whenDone();
                }
            }).fail(function(jqXHR, status, error) {
                alert(status + error);
            });
            return;
        }

        // Have to PUT a new resource
        this.url = joinURLs(this.calendar.url, this.object.data.getComponent("vpoll").getPropertyValue("uid") + ".ics");
    }

    // Do conditional PUT
    Ajax({
        context : this,
        url : joinURLs(gSession.host, this.url),
        type : "PUT",
        contentType : "application/calendar+json; charset=utf-8",
        headers : {
            "Prefer" : "return=representation",
            "Accept" : "application/calendar+json"
        },
        data : this.object.toString(),
    }).done(function(response, textStatus, jqXHR) {
        // Check for returned data and ETag
        this.etag = jqXHR.getResponseHeader("Etag");
        this.object = new CalendarObject(response);
        if (whenDone) {
            whenDone();
        }
}).fail(function(jqXHR, status, error) { alert(status + error); }); } // Remove this resource from the server CalendarResource.prototype.removeResource = function(whenDone) { if (!this.url) { if (whenDone) { whenDone(); } return; } Ajax({ context : this, url : joinURLs(gSession.host, this.url), type : "DELETE", }).done(function(response) { var index = this.calendar.resources.indexOf(this); this.calendar.resources.splice(index, 1); if (whenDone) { whenDone(); } }).fail(function(jqXHR, status, error) { alert(status + error); }); } // A generic container for an iCalendar component CalendarComponent = function(caldata, parent) { this.data = (caldata instanceof jcal ? caldata : new jcal(caldata)); this.parent = parent; } // Maintain a registry of component types so the right class can be created when parsing CalendarComponent.createComponentType = {} CalendarComponent.registerComponentType = function(name, cls) { CalendarComponent.createComponentType[name] = cls; } CalendarComponent.buildComponentType = function(caldata, parent) { return new CalendarComponent.createComponentType[caldata.name()](caldata, parent); } CalendarComponent.prototype.duplicate = function(parent) { if (parent === undefined) { parent = this.parent; } return CalendarComponent.buildComponentType(this.data.duplicate(), parent); } CalendarComponent.prototype.toString = function() { return this.data.toString(); } // Tell component whether it has changed or not CalendarComponent.prototype.changed = function(value) { if (value === undefined) { return this.parent ? this.parent.changed() : false; } else { if (this.parent) { this.parent.changed(value); } } } CalendarComponent.prototype.uid = function() { return this.data.getPropertyValue("uid"); } // Indicate whether this object is owned by the current user. 
CalendarComponent.prototype.isOwned = function() {
    return gSession.currentPrincipal.matchingAddress(this.organizer());
}

CalendarComponent.prototype.organizer = function() {
    return this.data.getPropertyValue("organizer");
}

CalendarComponent.prototype.organizerDisplayName = function() {
    return new CalendarUser(this.data.getProperty("organizer"), this).nameOrAddress();
}

CalendarComponent.prototype.status = function() {
    return this.data.getPropertyValue("status");
}

CalendarComponent.prototype.summary = function(value) {
    if (value === undefined) {
        return this.data.getPropertyValue("summary");
    } else {
        if (this.summary() != value) {
            this.data.updateProperty("summary", value);
            this.changed(true);
        }
    }
}

CalendarComponent.prototype.dtstart = function(value) {
    if (value === undefined) {
        return jcaldate.jCalTojsDate(this.data.getPropertyValue("dtstart"));
    } else {
        if (this.dtstart() - value !== 0) {
            this.data.updateProperty("dtstart", jcaldate.jsDateTojCal(value), {}, "date-time");
            this.changed(true);
        }
    }
}

CalendarComponent.prototype.dtend = function(value) {
    if (value === undefined) {
        var dtend = this.data.getPropertyValue("dtend");
        if (dtend === null) {
            var offset = 0;
            var duration = this.data.getPropertyValue("duration");
            if (duration === null) {
                // No DTEND or DURATION: default to the end of the start day
                dtend = new Date(this.dtstart().getTime());
                dtend.setHours(0, 0, 0, 0);
                dtend.setDate(dtend.getDate() + 1);
            } else {
                offset = jcalduration.parseText(duration).getTotalSeconds() * 1000;
                dtend = new Date(this.dtstart().getTime() + offset);
            }
        }
        return jcaldate.jCalTojsDate(dtend);
    } else {
        if (this.dtend() - value !== 0) {
            this.data.updateProperty("dtend", jcaldate.jsDateTojCal(value), {}, "date-time");
            this.data.removeProperties("duration");
            this.changed(true);
        }
    }
}

CalendarComponent.prototype.pollitemid = function(value) {
    if (value === undefined) {
        return this.data.getPropertyValue("poll-item-id");
    } else {
        if (this.pollitemid() != value) {
            this.data.updateProperty("poll-item-id", value);
            this.changed(true);
        }
    }
}
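// The accessors above, and the vote handling in voter_responses()/changeVoterResponse() below,
// index raw jCal arrays directly: a jCal property is the 4-element array
// [name, params, value-type, value], so parameters live in slot [1] and the value in slot [3].
// A minimal standalone sketch of that layout (plain JavaScript, no jQuery; the sample data and
// helper names are illustrative only, not part of this library):

```javascript
// jCal (RFC 7265) models a property as [name, params-object, value-type, value]
// and a component as [name, [properties], [subcomponents]].

// A VVOTER component holding one VOTE subcomponent.
var vvoter = ["vvoter",
    [["voter", {"cn": "Cyrus"}, "cal-address", "mailto:cyrus@example.com"]],
    [["vote",
        [["poll-item-id", {}, "integer", 0],
         ["response", {}, "integer", 80]],
        []]]];

// Read a property value the way jcal.getPropertyValue() does: scan the
// property list (slot [1] of the component) and return slot [3] of the match.
function getPropertyValue(component, name) {
    var props = component[1];
    for (var i = 0; i < props.length; i++) {
        if (props[i][0] === name) {
            return props[i][3];
        }
    }
    return null;
}

// Update a response in place the way changeVoterResponse() does:
// find the matching VOTE subcomponent, then overwrite slot [3] of its property.
function setResponse(vvoter, pollItemId, response) {
    vvoter[2].forEach(function(vote) {
        if (vote[0] === "vote" && getPropertyValue(vote, "poll-item-id") === pollItemId) {
            vote[1].forEach(function(prop) {
                if (prop[0] === "response") {
                    prop[3] = response;
                }
            });
        }
    });
}

console.log(getPropertyValue(vvoter[2][0], "response")); // 80
setResponse(vvoter, 0, 40);
console.log(getPropertyValue(vvoter[2][0], "response")); // 40
```

// Because everything is plain arrays and objects, in-place edits like prop[3] = response mutate
// the structure that JSON.stringify() later serializes, which is why the classes here only need
// to flag this.changed(true) rather than rebuild the data.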
CalendarComponent.prototype.voter_responses = function() { var voter_results = {} var pollitemid = this.pollitemid(); $.each(this.parent.data.components("vvoter"), function(index, vvoter) { var voter = vvoter.getPropertyValue("voter") $.each(vvoter.components("vote"), function(index, vote) { if (vote.getPropertyValue("poll-item-id") == pollitemid) { voter_results[voter] = vote.getPropertyValue("response"); return false; } }); }); return voter_results; } // Change active user's response to this event CalendarComponent.prototype.changeVoterResponse = function(response) { var matches_vvoter = $.grep(this.parent.data.components("vvoter"), function(vvoter, index) { return gSession.currentPrincipal.matchingAddress(vvoter.getPropertyValue("voter")); }); var pollitemid = this.pollitemid(); if (response !== null) { var matches_vote = $.grep(matches_vvoter[0].components("vote"), function(vote, index) { return vote.getPropertyValue("poll-item-id") == pollitemid; }); if (matches_vote.length == 1) { matches_vote[0].getProperty("response")[3] = response; } else { var vote = matches_vvoter[0].newComponent("vote"); vote.newProperty("response", response, {}, "integer"); vote.newProperty("poll-item-id", pollitemid, {}, "integer"); } } else { $.each(matches_vvoter[0].components("vote"), function(index, vote) { if (vote.getPropertyValue("poll-item-id") == pollitemid) { matches_vvoter[0].caldata[2].remove(index); return false; } }); } this.changed(true); } // A container class for VCALENDAR objects CalendarObject = function(caldata) { CalendarComponent.call(this, caldata, null); this._changed = false; } CalendarObject.prototype = new CalendarComponent(); CalendarObject.prototype.constructor = CalendarObject; CalendarComponent.registerComponentType("vcalendar", CalendarObject); // This is the top-level object for changing tracking CalendarObject.prototype.changed = function(value) { if (value === undefined) { return this._changed; } else { this._changed = value; } } // Get the main 
component type as one of our model classes CalendarObject.prototype.mainComponent = function() { var main = this.data.mainComponent(); return new CalendarComponent.buildComponentType(main, this); } // A container class for VPOLL objects CalendarPoll = function(caldata, parent) { CalendarComponent.call(this, caldata, parent); } CalendarPoll.prototype = new CalendarComponent(); CalendarPoll.prototype.constructor = CalendarPoll; CalendarComponent.registerComponentType("vpoll", CalendarPoll); // Create a brand new poll, defaulting various properties CalendarPoll.newPoll = function(title) { var calendar = jcal.newCalendar(); var vpoll = calendar.newComponent("vpoll", true); vpoll.newProperty("summary", title); vpoll.newProperty("poll-mode", "BASIC"); vpoll.newProperty("poll-properties", ["DTSTART","DTEND"]); vpoll.newProperty( "organizer", gSession.currentPrincipal.defaultAddress(), { "cn" : gSession.currentPrincipal.cn }, "cal-address" ); var vvoter = vpoll.newComponent("vvoter"); vvoter.newProperty( "voter", gSession.currentPrincipal.defaultAddress(), { "cn" : gSession.currentPrincipal.cn, "partstat" : "ACCEPTED" }, "cal-address" ); return new CalendarObject(calendar); } // Whether or not current user can make changes (depends on their role as owner too) CalendarPoll.prototype.editable = function() { var status = this.status(); return status ? status == "IN-PROCESS" : true; } CalendarComponent.prototype.pollwinner = function(value) { if (value === undefined) { return this.data.getPropertyValue("poll-winner"); } else { if (this.pollwinner() != value) { this.data.updateProperty("poll-winner", value); this.changed(true); } } } CalendarComponent.prototype.ispollwinner = function() { var pollid = this.pollitemid(); return pollid === undefined ? 
false : pollid == this.parent.pollwinner(); } // Get an array of child VEVENTs in the VPOLL CalendarPoll.prototype.events = function() { var this_vpoll = this; return $.map(this.data.components("vevent"), function(event, index) { return new CalendarEvent(event, this_vpoll); }); } // Add a new VEVENT to the VPOLL CalendarPoll.prototype.addEvent = function(dtstart, dtend) { this.changed(true); var poll_item_id = this.data.components("vevent").length; var vevent = this.data.newComponent("vevent", true); vevent.newProperty("dtstart", jcaldate.jsDateTojCal(dtstart), {}, "date-time"); vevent.newProperty("dtend", jcaldate.jsDateTojCal(dtend), {}, "date-time"); vevent.newProperty("summary", this.summary()); vevent.newProperty("poll-item-id", poll_item_id, {}, "integer"); var matches_vvoter = $.grep(this.data.components("vvoter"), function(vvoter, index) { return gSession.currentPrincipal.matchingAddress(vvoter.getPropertyValue("voter")); }); var vvoter = null; if (matches_vvoter.length == 1) { vvoter = matches_vvoter[0]; } else { vvoter = this.data.newComponent("vvoter"); vvoter.newProperty("voter", this.organizer(), {}, "cal-address"); } var vote = vvoter.newComponent("vote"); vote.newProperty("response", 80, {}, "integer"); vote.newProperty("poll-item-id", poll_item_id, {}, "integer"); return new CalendarEvent(vevent, this); } // Get an array of voters in the VPOLL CalendarPoll.prototype.voters = function() { var this_vpoll = this; return $.map(this.data.components("vvoter"), function(vvoter) { return new CalendarUser(vvoter.getProperty("voter"), this_vpoll); }); } // Add a voter to the VPOLL CalendarPoll.prototype.addVoter = function() { this.changed(true); var vvoter = this.data.newComponent("vvoter"); return new CalendarUser(vvoter.newProperty("voter", "", {}, "cal-address"), this); } // Mark current user as accepted CalendarPoll.prototype.acceptInvite = function() { if (!this.isOwned()) { $.each(this.data.components("vvoter"), function(index, vvoter) { var voter = 
vvoter.getProperty("voter"); if (gSession.currentPrincipal.matchingAddress(voter[3])) { voter[1]["partstat"] = "ACCEPTED"; delete voter[1]["rsvp"]; } }) } } // An actual VEVENT object we can manipulate CalendarEvent = function(caldata, parent) { CalendarComponent.call(this, caldata, parent); } CalendarEvent.prototype = new CalendarComponent(); CalendarEvent.prototype.constructor = CalendarEvent; CalendarComponent.registerComponentType("vevent", CalendarEvent); // Create this event as the poll winner CalendarEvent.prototype.pickAsWinner = function() { // Adjust VPOLL to mark winner and set status var vpoll = this.parent; vpoll.data.updateProperty("status", "CONFIRMED"); vpoll.data.newProperty("poll-winner", this.pollitemid()); vpoll.changed(true); // Create the new event resource with voters mapped to attendees var calendar = new CalendarObject(jcal.newCalendar()); var vevent = calendar.data.newComponent("vevent", true); vevent.updateProperty("uid", this.uid()); vevent.copyProperty("summary", this.data); vevent.copyProperty("dtstart", this.data); vevent.copyProperty("dtend", this.data); vevent.copyProperty("organizer", vpoll.data); $.each(vpoll.data.components("vvoter"), function(index, vvoter) { var voter = vvoter.getProperty("voter"); var attendee = vevent.newProperty( "attendee", voter[3], {}, "cal-address" ); $.each(voter[1], function(key, value) { if (key == "cn") { attendee[1][key] = value } }); if (gSession.currentPrincipal.matchingAddress(voter[3])) { attendee[1]["partstat"] = "ACCEPTED"; } else { attendee[1]["partstat"] = "NEEDS-ACTION"; attendee[1]["rsvp"] = "TRUE"; } }); calendar.changed(true); return new CalendarResource(gSession.currentPrincipal.event_calendars[0], null, null, calendar); } // An iCalendar calendar user (ORGANIZER/ATTENDEE/VOTER) property CalendarUser = function(caldata, parent) { this.data = caldata; this.parent = parent; } // Get or set the user name and/or cu-address CalendarUser.prototype.addressDescription = function(value) { if 
(value === undefined) {
        var cn = this.data[1]["cn"] ? this.data[1]["cn"] + " " : "";
        return addressDescription(cn, this.data[3]);
    } else {
        if (this.addressDescription() != value) {
            var splits = splitAddressDescription(value);
            if (splits[0]) {
                this.data[1]["cn"] = splits[0];
            } else {
                delete this.data[1]["cn"];
            }
            this.data[3] = splits[1];
            this.parent.changed(true);
        }
    }
}

// Get a suitable display string for this user
CalendarUser.prototype.nameOrAddress = function() {
    return this.data[1]["cn"] ? this.data[1]["cn"] : this.data[3];
}

CalendarUser.prototype.cuaddr = function() {
    return this.data[3];
}

// Get or set the voter response
CalendarUser.prototype.response = function(value) {
    if (value === undefined) {
        return this.data[1]["response"];
    } else {
        if (this.response() != value) {
            this.data[1]["response"] = value;
            this.parent.changed(true);
        }
    }
}

// contrib/webpoll/webapp/js/jcal.js

/**
##
# Copyright (c) 2013-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
## */ gProdID = "-//calendarserver.org//jcal v1//EN"; jcalparser = { PARSER_ALLOW : 0, // Pass the "suspect" data through to the object model PARSER_IGNORE : 1, // Ignore the "suspect" data PARSER_FIX: 2, // Fix (or if not possible ignore) the "suspect" data PARSER_RAISE: 3 // Raise an exception } // Some clients escape ":" - fix jcalparser.INVALID_COLON_ESCAPE_SEQUENCE = jcalparser.PARSER_FIX; // Other escape sequences - raise jcalparser.INVALID_ESCAPE_SEQUENCES = jcalparser.PARSER_RAISE; // Some client generate empty lines in the body of the data jcalparser.BLANK_LINES_IN_DATA = jcalparser.PARSER_FIX; // Some clients still generate vCard 2 parameter syntax jcalparser.VCARD_2_NO_PARAMETER_VALUES = jcalparser.PARSER_ALLOW; // Use this to fix v2 BASE64 to v3 ENCODING=b - only PARSER_FIX or PARSER_ALLOW jcalparser.VCARD_2_BASE64 = jcalparser.PARSER_FIX; // Allow DATE values when DATETIME specified (and vice versa) jcalparser.INVALID_DATETIME_VALUE = jcalparser.PARSER_FIX; // Allow slightly invalid DURATION values jcalparser.INVALID_DURATION_VALUE = jcalparser.PARSER_FIX; // Truncate over long ADR and N values jcalparser.INVALID_ADR_N_VALUES = jcalparser.PARSER_FIX; // REQUEST-STATUS values with \; as the first separator or single element jcalparser.INVALID_REQUEST_STATUS_VALUE = jcalparser.PARSER_FIX; // Remove \-escaping in URI values when parsing - only PARSER_FIX or PARSER_ALLOW jcalparser.BACKSLASH_IN_URI_VALUE = jcalparser.PARSER_FIX; jcal = function(caldata) { this.caldata = caldata; } jcal.newCalendar = function() { var calendar = new jcal(["vcalendar", [], []]); calendar.newProperty("version", "2.0"); calendar.newProperty("prodid", gProdID); return calendar; } jcal.fromString = function(data) { return new jcal($.parseJSON(data)); } jcal.prototype.toString = function() { return JSON.stringify(this.caldata); } jcal.prototype.duplicate = function() { return new jcal($.parseJSON(this.toString())); } jcal.prototype.name = function() { return this.caldata[0]; } // 
Return the primary (master) component // TODO: currently ignores recurrence - i.e. assumes one top-level component only jcal.prototype.mainComponent = function() { var results = $.grep(this.caldata[2], function(component, index) { return (component[0] != "vtimezone"); }); return (results.length == 1) ? new jcal(results[0]) : null; } jcal.prototype.newComponent = function(name, defaultProperties) { var jcomp = new jcal([name.toLowerCase(), [], []]); this.caldata[2].push(jcomp.caldata); if (defaultProperties) { // Add UID and DTSTAMP jcomp.newProperty("uid", generateUUID()); jcomp.newProperty("dtstamp", new jcaldate().toString(), {}, "date-time"); } return jcomp; } jcal.prototype.addComponent = function(component) { this.caldata[2].push(component.caldata); return component; } // Get one component jcal.prototype.getComponent = function(name) { name = name.toLowerCase(); var results = $.grep(this.caldata[2], function(component, index) { return (component[0] == name); }); return (results.length == 1) ? new jcal(results[0]) : null; } // Get all matching components jcal.prototype.components = function(name) { name = name.toLowerCase(); var results = $.grep(this.caldata[2], function(component, index) { return (component[0] == name); }); return $.map(results, function(component, index) { return new jcal(component); }); } // Remove all matching components jcal.prototype.removeComponents = function(name) { name = name.toLowerCase(); this.caldata[2] = $.grep(this.caldata[2], function(comp, index) { return comp[0] != name; }); } jcal.prototype.newProperty = function(name, value, params, value_type) { var prop = [name.toLowerCase(), params === undefined ? {} : params, value_type == undefined ? 
"text" : value_type];
    if (value instanceof Array) {
        $.each(value, function(index, single) {
            prop.push(single);
        });
    } else {
        prop.push(value);
    }
    this.caldata[1].push(prop);
    return prop;
}

jcal.prototype.copyProperty = function(name, component) {
    var propdata = component.getProperty(name);
    this.caldata[1].push([propdata[0], propdata[1], propdata[2], propdata[3]]);
    return propdata;
}

jcal.prototype.hasProperty = function(name) {
    var result = false;
    name = name.toLowerCase();
    $.each(this.caldata[1], function(index, property) {
        if (property[0] == name) {
            result = true;
            return false;
        }
    });
    return result;
}

jcal.prototype.getProperty = function(name) {
    var result = null;
    name = name.toLowerCase();
    $.each(this.caldata[1], function(index, property) {
        if (property[0] == name) {
            result = property;
            return false;
        }
    });
    return result;
}

jcal.prototype.getPropertyValue = function(name) {
    var result = null;
    name = name.toLowerCase();
    $.each(this.caldata[1], function(index, propdata) {
        if (propdata[0] == name) {
            result = propdata[3];
            return false;
        }
    });
    return result;
}

jcal.prototype.updateProperty = function(name, value, params, value_type) {
    if (params === undefined) {
        params = {};
    }
    if (value_type === undefined) {
        value_type = "text";
    }
    var props = this.properties(name);
    if (props.length == 1) {
        props[0][1] = params;
        props[0][2] = value_type;
        props[0][3] = value;
        return props[0];
    } else if (props.length == 0) {
        return this.newProperty(name, value, params, value_type);
    }
}

jcal.prototype.properties = function(name) {
    return $.grep(this.caldata[1], function(propdata, index) {
        return propdata[0] == name;
    });
}

jcal.prototype.removeProperties = function(name) {
    name = name.toLowerCase();
    this.caldata[1] = $.grep(this.caldata[1], function(propdata, index) {
        return propdata[0] != name;
    });
}

// Remove properties for which test() returns true
jcal.prototype.removePropertiesMatching = function(test) {
    this.caldata[1] = $.grep(this.caldata[1],
function(propdata, index) { return !test(propdata); }); } // Date/time utility functions jcaldate = function() { this.date = new Date(); this.tzid = "utc"; } jcaldate.jsDateTojCal = function(date) { return date.toISOString().substr(0, 19) + "Z"; } jcaldate.jsDateToiCal = function(date) { return jcaldate.jsDateTojCal(date).replace(/\-/g, "").replace(/\:/g, ""); } jcaldate.jCalTojsDate = function(value) { var result = new Date(value); result.setMilliseconds(0); return result; } jcaldate.prototype.toString = function() { return this.date.toISOString().substr(0, 19) + "Z"; } // Duration utility functions jcalduration = function(duration) { if (duration === undefined) { this.mForward = true this.mWeeks = 0 this.mDays = 0 this.mHours = 0 this.mMinutes = 0 this.mSeconds = 0 } else { this.setDuration(duration); } } jcalduration.parseText = function(data) { var duration = new jcalduration(); duration.parse(data); return duration; } jcalduration.prototype.getTotalSeconds = function() { return (this.mForward ? 
1 : -1) * (this.mSeconds + (this.mMinutes + (this.mHours + (this.mDays + (this.mWeeks * 7)) * 24) * 60) * 60);
}

jcalduration.prototype.setDuration = function(seconds) {
    this.mForward = seconds >= 0;

    var remainder = seconds;
    if (remainder < 0) {
        remainder = -remainder;
    }

    // Is it an exact number of weeks - if so use the weeks value, otherwise
    // days, hours, minutes, seconds
    if (remainder % (7 * 24 * 60 * 60) == 0) {
        this.mWeeks = remainder / (7 * 24 * 60 * 60);
        this.mDays = 0;
        this.mHours = 0;
        this.mMinutes = 0;
        this.mSeconds = 0;
    } else {
        this.mSeconds = remainder % 60;
        remainder -= this.mSeconds;
        remainder /= 60;
        this.mMinutes = remainder % 60;
        remainder -= this.mMinutes;
        remainder /= 60;
        this.mHours = remainder % 24;
        remainder -= this.mHours;
        this.mDays = remainder / 24;
        this.mWeeks = 0;
    }
}

jcalduration.prototype.parse = function(data) {
    // parse format ([+]/-) "P" (dur-date / dur-time / dur-week)
    try {
        var offset = 0;
        var maxoffset = data.length;

        // Look for +/-
        this.mForward = true;
        if (data[offset] == '+') {
            this.mForward = true;
            offset += 1;
        } else if (data[offset] == '-') {
            this.mForward = false;
            offset += 1;
        }

        // Must have a 'P'
        if (data[offset] != "P")
            throw "Invalid duration";
        offset += 1;

        // Look for time
        if (data[offset] != "T") {
            // Must have a number
            var strnum = data.strtoul(offset);
            var num = strnum.num;
            offset = strnum.offset;

            // Now look at character
            if (data[offset] == "W") {
                // Have a number of weeks
                this.mWeeks = num;
                offset += 1;

                // There cannot be anything else after this so just exit
                if (offset != maxoffset) {
                    if (jcalparser.INVALID_DURATION_VALUE != jcalparser.PARSER_RAISE)
                        return;
                    throw "Invalid duration";
                }
                return;
            } else if (data[offset] == "D") {
                // Have a number of days
                this.mDays = num;
                offset += 1;

                // Look for more data - exit if none
                if (offset == maxoffset)
                    return;

                // Look for time - exit if none
                if (data[offset] != "T")
                    throw "Invalid duration";
            } else {
                // Error in format
                throw "Invalid duration";
            }
        }

        // Have time
        offset
+= 1;

        // Strictly speaking T must always be followed by time values, but some clients
        // send T with no additional text
        if (offset == maxoffset) {
            if (jcalparser.INVALID_DURATION_VALUE == jcalparser.PARSER_RAISE)
                throw "Invalid duration";
            else
                return;
        }
        var strnum = data.strtoul(offset);
        var num = strnum.num;
        offset = strnum.offset;

        // Look for hour
        if (data[offset] == "H") {
            // Get hours
            this.mHours = num;
            offset += 1;

            // Look for more data - exit if none
            if (offset == maxoffset)
                return;

            // Parse the next number
            strnum = data.strtoul(offset);
            num = strnum.num;
            offset = strnum.offset;
        }

        // Look for minute
        if (data[offset] == "M") {
            // Get minutes
            this.mMinutes = num;
            offset += 1;

            // Look for more data - exit if none
            if (offset == maxoffset)
                return;

            // Parse the next number
            strnum = data.strtoul(offset);
            num = strnum.num;
            offset = strnum.offset;
        }

        // Look for seconds
        if (data[offset] == "S") {
            // Get seconds
            this.mSeconds = num;
            offset += 1;

            // No more data - exit
            if (offset == maxoffset)
                return;
        }
        throw "Invalid duration";
    } catch(err) {
        throw "Invalid duration";
    }
}

jcalduration.prototype.generate = function(self, os) {
    var result = "";
    if (!this.mForward && (this.mWeeks || this.mDays || this.mHours || this.mMinutes || this.mSeconds)) {
        result += "-";
    }
    result += "P";
    if (this.mWeeks != 0) {
        result += this.mWeeks + "W";
    } else {
        if (this.mDays != 0) {
            result += this.mDays + "D";
        }
        if (this.mHours != 0 || this.mMinutes != 0 || this.mSeconds != 0) {
            result += "T";
            if (this.mHours != 0) {
                result += this.mHours + "H";
            }
            if ((this.mMinutes != 0) || ((this.mHours != 0) && (this.mSeconds != 0))) {
                result += this.mMinutes + "M";
            }
            if (this.mSeconds != 0) {
                result += this.mSeconds + "S";
            }
        } else if (this.mDays == 0) {
            result += "T0S";
        }
    }
    return result;
}

calendarserver-9.1+dfsg/contrib/webpoll/webapp/js/utils.js

/**
##
# Copyright (c) 2013-2017 Apple Inc. All rights reserved.
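The week/day/hour/minute/second split performed by `jcalduration.setDuration()` can be sketched as a standalone function; an exact multiple of a week becomes a pure "W" duration, anything else is peeled off least-significant-unit first. The function name and return shape here are illustrative, not part of the webpoll API.

```javascript
// Decompose a signed number of seconds the way jcalduration.setDuration()
// does: exact weeks collapse to a weeks-only value, otherwise the remainder
// is split into days/hours/minutes/seconds.
function decomposeDuration(seconds) {
    var forward = seconds >= 0;
    var remainder = Math.abs(seconds);
    var week = 7 * 24 * 60 * 60;
    if (remainder % week == 0) {
        return { forward: forward, weeks: remainder / week, days: 0, hours: 0, minutes: 0, seconds: 0 };
    }
    var secs = remainder % 60;
    remainder = (remainder - secs) / 60;
    var mins = remainder % 60;
    remainder = (remainder - mins) / 60;
    var hours = remainder % 24;
    var days = (remainder - hours) / 24;
    return { forward: forward, weeks: 0, days: days, hours: hours, minutes: mins, seconds: secs };
}
```

Feeding the result back through logic like `generate()` above yields the round-trip, e.g. 90061 seconds decomposes to 1 day, 1 hour, 1 minute, 1 second ("P1DT1H1M1S").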
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
##
*/

/**
 * Utility classes.
 */

// XML processing utilities

// Make it easier to use namespaces by allowing the following "NS:" prefixes on
// element names
var gNamespaceShortcuts = {
    "D" : "DAV:",
    "C" : "urn:ietf:params:xml:ns:caldav",
    "CS" : "http://calendarserver.org/ns/"
};

// Add a shortcut namespace for use in an xmlns map
function addNamespace(shortcut, xmlnsmap) {
    xmlnsmap[gNamespaceShortcuts[shortcut]] = shortcut;
}

function buildXMLNS(xmlnsmap) {
    var xmlnstr = "";
    $.each(xmlnsmap, function(ns, nsprefix) {
        xmlnstr += ' xmlns:' + nsprefix + '="' + ns + '"';
    });
    return xmlnstr;
}

function addElements(elements, xmlnsmap) {
    var propstr = "";
    $.each(elements, function(index, element) {
        var segments = element.split(":");
        addNamespace(segments[0], xmlnsmap);
        propstr += '<' + element + ' />';
    });
    return propstr;
}

// Find XML elements matching the specified xpath
function findElementPath(node, path) {
    return findElementPathSegments(node, path.split("/"));
}

// Find XML elements matching the specified path segments
function findElementPathSegments(root, segments) {
    var elements = findElementNS(root, segments[0]);
    if (segments.length == 1) {
        return elements;
    }
    var results = [];
    $.each(elements, function(index, name) {
        var next = findElementPathSegments($(this), segments.slice(1));
        $.each(next, function(index, item) {
            results.push(item);
        });
    });
    return results;
}

// Find immediate children of node matching the XML {NS}name
function findElementNS(node, nsname) {
    var segments = nsname.split(":");
    var namespace = gNamespaceShortcuts[segments[0]];
    var name = segments[1];
    var results = [];
    node.children().each(function() {
        if (this.localName == name && this.namespaceURI == namespace) {
            results.push($(this));
        }
    });
    return results;
}

// Get text of target element from an xpath
function getElementText(node, path) {
    var items = findElementPath(node, path);
    return (items.length == 1) ? items[0].text() : null;
}

// Check for the existence of an element
function hasElementPath(node, path) {
    var elements = findElementPath(node, path);
    return elements.length != 0;
}

function xmlEncode(text) {
    return text.replace(/&(?!\w+([;\s]|$))/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

// URL helpers

// Remove any trailing slash from a URL
function removeTrailingSlash(url) {
    return (url[url.length - 1] == "/") ? url.substr(0, url.length - 1) : url;
}

// Remove any leading slash from a URL
function removeLeadingSlash(url) {
    return (url[0] == "/") ? url.substr(1) : url;
}

// Compare two URLs ignoring trailing slash
function compareURLs(url1, url2) {
    return removeTrailingSlash(url1) == removeTrailingSlash(url2);
}

// Join two URLs
function joinURLs(url1, url2) {
    return removeTrailingSlash(url1) + "/" + removeLeadingSlash(url2);
}

// Get last path segment
function basenameURL(url) {
    return removeTrailingSlash(url).split("/").pop();
}

// UUID
function generateUUID() {
    var result = "";
    for (var i = 0; i < 32; i++) {
        if (i == 8 || i == 12 || i == 16 || i == 20)
            result = result + '-';
        result += Math.floor(Math.random() * 16).toString(16).toUpperCase();
    }
    return result;
}

// Addresses
function addressDescription(cn, addr) {
    return addr ? (cn ?
cn + " " : "") + "<" + addr + ">" : "";
}

function splitAddressDescription(desc) {
    var results = ["", ""];
    if (desc.indexOf("<") == -1) {
        results[1] = desc;
    } else {
        var splits = desc.split("<");
        results[0] = splits[0].substr(0, splits[0].length - 1);
        results[1] = splits[1].substr(0, splits[1].length - 1);
    }
    return results;
}

// JSString extensions
if (typeof String.prototype.startsWith != 'function') {
    String.prototype.startsWith = function(str) {
        return this.slice(0, str.length) == str;
    };
}

if (typeof String.prototype.endsWith != 'function') {
    String.prototype.endsWith = function(str) {
        return this.slice(-str.length) == str;
    };
}

if (typeof String.prototype.strtoul != 'function') {
    String.prototype.strtoul = function(offset) {
        if (offset === undefined) {
            offset = 0;
        }
        var matches = this.substring(offset).match(/^[0-9]+/);
        if (matches && matches.length == 1) {
            return {
                num : parseInt(matches[0], 10),
                offset : offset + matches[0].length
            };
        } else {
            throw "ValueError";
        }
    };
}

// JSArray extensions
if (typeof Array.prototype.average != 'function') {
    Array.prototype.average = function() {
        return this.sum() / this.length;
    };
}

if (typeof Array.prototype.sum != 'function') {
    Array.prototype.sum = function() {
        var result = 0;
        $.each(this, function(index, num) {
            result += num;
        });
        return result;
    };
}

calendarserver-9.1+dfsg/contrib/webpoll/webapp/js/webpoll.js

/**
##
# Copyright (c) 2013-2017 Apple Inc. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
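The duration parser leans on the `String.prototype.strtoul` extension from utils.js. A minimal standalone sketch of that helper (written as a plain function so it can be exercised outside the browser and without monkey-patching `String`):

```javascript
// Read a run of decimal digits in `str` starting at `offset`; return the
// parsed number and the offset just past it, mirroring the {num, offset}
// shape the jcalduration parser consumes. Throws on non-digit input.
function strtoul(str, offset) {
    offset = offset || 0;
    var matches = str.substring(offset).match(/^[0-9]+/);
    if (!matches) {
        throw "ValueError";
    }
    return { num : parseInt(matches[0], 10), offset : offset + matches[0].length };
}
```

For example, parsing "P15DT5H" from offset 1 yields `{num: 15, offset: 3}`, leaving the cursor on the "D" designator, which is exactly how `parse()` walks the duration string.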
# See the License for the specific language governing permissions and # limitations under the License. ## */ // Globals var gSession = null; var gViewController = null; // Page load $(function() { $("#progressbar").progressbar({ value : false }); showLoading(true); var params = {}; var ps = window.location.search.split(/\?|&/); for (var i = 0; i < ps.length; i++) { if (ps[i]) { var p = ps[i].split(/=/); params[p[0]] = p[1]; } } // Setup CalDAV session gSession = new CalDAVSession(params.user); gSession.init(function() { $("#title").text($("#title").text() + " for User: " + gSession.currentPrincipal.cn); gViewController.refreshed(); }); gViewController = new ViewController(gSession); }); function showLoading(visible) { if (visible) { $("#progressbar").progressbar("enable"); $("#loading").show(); } else { $("#progressbar").progressbar("disable"); $("#loading").hide(); } } // Handles all the view interactions ViewController = function(session) { this.session = session; this.ownedPolls = new PollList($("#sidebar-owned"), $("#sidebar-new-poll-count")); this.voterPolls = new PollList($("#sidebar-voter"), $("#sidebar-vote-poll-count")); this.activePoll = null; this.isNewPoll = null; this.init(); } // Setup all the parts of the view ViewController.prototype.init = function() { // Setup title area var view = this; // Setup sidebar UI widgets $("#sidebar").accordion({ heightStyle : "content" }); $("#sidebar-owned").menu({ select : function(event, ui) { view.clickSelectPoll(event, ui, true); } }); $("#sidebar-new-poll").button({ icons : { primary : "ui-icon-plusthick" } }).click(function() { view.clickAddPoll(); }); $("#sidebar-voter").menu({ select : function(event, ui) { view.clickSelectPoll(event, ui, false); } }); $("#refresh-btn").button({ icons : { primary : "ui-icon-refresh" } }).click(function() { view.clickRefresh(); }); // Detail Panel this.editSetVisible(false); $("#editpoll-title-edit").focus(function() { $(this).select(); }); $("#editpoll-tabs").tabs({ 
beforeActivate : function(event, ui) { view.showResults(event, ui); } }); $("#editpoll-save").button({ icons : { primary : "ui-icon-check" } }).click(function() { view.clickPollSave(); }); $("#editpoll-cancel").button({ icons : { primary : "ui-icon-close" } }).click(function() { view.clickPollCancel(); }); $("#editpoll-done").button({ icons : { primary : "ui-icon-arrowreturnthick-1-w" } }).click(function() { view.clickPollCancel(); }); $("#editpoll-delete").button({ icons : { primary : "ui-icon-trash" } }).click(function() { view.clickPollDelete(); }); $("#editpoll-autofill").button({ icons : { primary : "ui-icon-gear" } }).click(function() { view.clickPollAutofill(); }); $("#editpoll-addevent").button({ icons : { primary : "ui-icon-plus" } }).click(function() { view.clickAddEvent(); }); $("#editpoll-addvoter").button({ icons : { primary : "ui-icon-plus" } }).click(function() { view.clickAddVoter(); }); $("#editpoll-autofill").hide(); $("#response-key").hide(); $("#response-menu").menu(); } // Add a poll to the UI ViewController.prototype.addPoll = function(poll) { if (poll.owned) { this.ownedPolls.addPoll(poll) } else { this.voterPolls.addPoll(poll) } } // Switching away from active poll ViewController.prototype.aboutToClosePoll = function() { if (this.activePoll && this.activePoll.editing_poll.changed()) { alert("Save or cancel the current poll changes first"); return false; } else { return true; } } // Refresh the side bar - try to preserve currently selected item ViewController.prototype.clickRefresh = function() { if (!this.aboutToClosePoll()) { return; } var currentUID = this.activePoll ? 
this.activePoll.editing_poll.uid() : null; var active_tab = $("#editpoll-tabs").tabs("option", "active"); if (this.activePoll) { this.clickPollCancel(); } showLoading(true); this.ownedPolls.clearPolls(); this.voterPolls.clearPolls(); var this_view = this; this.session.currentPrincipal.refresh(function() { this_view.refreshed(); if (currentUID) { this_view.selectPollByUID(currentUID); $("#editpoll-tabs").tabs("option", "active", active_tab); if (active_tab == 2) { this.activePoll.buildResults(); } } }); } // Add poll button clicked ViewController.prototype.clickAddPoll = function() { if (!this.aboutToClosePoll()) { return; } // Make sure edit panel is visible this.activatePoll(new Poll(CalendarResource.newPoll("New Poll"))); this.isNewPoll = true; $("#editpoll-title-edit").focus(); } // A poll was selected ViewController.prototype.clickSelectPoll = function(event, ui, owner) { if (!this.aboutToClosePoll()) { return; } this.selectPoll(ui.item.index(), owner); } // Select a poll from the list based on its UID ViewController.prototype.selectPollByUID = function(uid) { var result = this.ownedPolls.indexOfPollUID(uid); if (result !== null) { this.selectPoll(result, true); return; } result = this.voterPolls.indexOfPollUID(uid); if (result !== null) { this.selectPoll(result, false); return; } } //A poll was selected ViewController.prototype.selectPoll = function(index, owner) { // Make sure edit panel is visible this.activatePoll(owner ? 
this.ownedPolls.polls[index] : this.voterPolls.polls[index]); if (owner) { $("#editpoll-title-edit").focus(); } } // Activate specified poll ViewController.prototype.activatePoll = function(poll) { this.activePoll = poll; this.activePoll.setPanel(); this.isNewPoll = false; this.editSetVisible(true); } // Save button clicked ViewController.prototype.clickPollSave = function() { // TODO: Actually save it to the server this.activePoll.getPanel(); if (this.isNewPoll) { this.ownedPolls.newPoll(this.activePoll); } else { this.activePoll.list.changePoll(this.activePoll); } } // Cancel button clicked ViewController.prototype.clickPollCancel = function() { // Make sure edit panel is visible this.activePoll.closed(); this.activePoll = null; this.isNewPoll = null; this.editSetVisible(false); } // Delete button clicked ViewController.prototype.clickPollDelete = function() { // TODO: Actually delete it on the server this.activePoll.list.removePoll(this.activePoll); // Make sure edit panel is visible this.activePoll = null; this.isNewPoll = null; this.editSetVisible(false); } // Autofill button clicked ViewController.prototype.clickPollAutofill = function() { this.activePoll.autoFill(); } // Add event button clicked ViewController.prototype.clickAddEvent = function() { this.activePoll.addEvent(); } // Add voter button clicked ViewController.prototype.clickAddVoter = function() { var panel = this.activePoll.addVoter(); panel.find(".voter-address").focus(); } // Toggle display of poll details ViewController.prototype.editSetVisible = function(visible) { if (visible) { if (this.isNewPoll) { $("#editpoll-delete").hide(); $("#editpoll-tabs").tabs("disable", 2); } else { $("#editpoll-delete").show(); $("#editpoll-tabs").tabs("enable", 2); } if (this.activePoll.owned && this.activePoll.resource.object.mainComponent().editable()) { $("#editpoll-title-panel").hide(); $("#editpoll-organizer-panel").hide(); $("#editpoll-status-panel").hide(); $("#editpoll-title-edit-panel").show(); 
$("#editpoll-tabs").tabs("enable", 0); $("#editpoll-tabs").tabs("enable", 1); $("#editpoll-tabs").tabs("option", "active", 0); $("#response-key").hide(); } else { $("#editpoll-title-edit-panel").hide(); $("#editpoll-title-panel").show(); $("#editpoll-organizer-panel").show(); $("#editpoll-status-panel").show(); $("#editpoll-tabs").tabs("option", "active", 2); $("#editpoll-tabs").tabs("disable", 0); $("#editpoll-tabs").tabs("disable", 1); $("#response-key").toggle(this.activePoll.resource.object.mainComponent().editable()); this.activePoll.buildResults(); } $("#editpoll-save").toggle(this.activePoll.resource.object.mainComponent().editable()); $("#editpoll-cancel").toggle(this.activePoll.resource.object.mainComponent().editable()); $("#editpoll-done").toggle(!this.activePoll.resource.object.mainComponent().editable()); $("#editpoll-autofill").toggle(this.activePoll.resource.object.mainComponent().editable()); $("#detail-nocontent").hide(); $("#editpoll").show(); } else { $("#editpoll").hide(); $("#detail-nocontent").show(); } } ViewController.prototype.refreshed = function() { showLoading(false); if (this.ownedPolls.polls.length == 0 && this.voterPolls.polls.length != 0) { $("#sidebar").accordion("option", "active", 1); } else { $("#sidebar").accordion("option", "active", 0); } } // Rebuild results panel each time it is selected ViewController.prototype.showResults = function(event, ui) { if (ui.newPanel.selector == "#editpoll-results") { this.activePoll.buildResults(); } $("#editpoll-autofill").toggle(ui.newPanel.selector == "#editpoll-results"); $("#response-key").toggle(ui.newPanel.selector == "#editpoll-results"); } // Maintains the list of editable polls and manipulates the DOM as polls are // added // and removed. PollList = function(menu, counter) { this.polls = []; this.menu = menu; this.counter = counter; } // Add a poll to the UI. 
PollList.prototype.addPoll = function(poll) { this.polls.push(poll); poll.list = this; this.menu.append(''); this.menu.menu("refresh"); this.counter.text(this.polls.length); } // Add a poll to the UI and save its resource PollList.prototype.newPoll = function(poll) { this.addPoll(poll); poll.saveResource(); $("#editpoll-delete").show(); } // Change a poll in the UI and save its resource PollList.prototype.changePoll = function(poll) { var index = this.polls.indexOf(poll); this.menu.find("a").eq(index).text(poll.title()); this.menu.menu("refresh"); poll.saveResource(); } // Remove a poll resource and its UI PollList.prototype.removePoll = function(poll) { var this_polllist = this; poll.resource.removeResource(function() { var index = this_polllist.polls.indexOf(poll); this_polllist.polls.splice(index, 1); this_polllist.menu.children("li").eq(index).remove(); this_polllist.menu.menu("refresh"); this_polllist.counter.text(this_polllist.polls.length); }); } PollList.prototype.indexOfPollUID = function(uid) { var result = null; $.each(this.polls, function(index, poll) { if (poll.resource.object.mainComponent().uid() == uid) { result = index; return false; } }); return result; } // Remove all UI items PollList.prototype.clearPolls = function() { this.menu.empty(); this.menu.menu("refresh"); this.polls = []; this.counter.text(this.polls.length); } // An editable poll. It manipulates the DOM for editing a poll Poll = function(resource) { this.resource = resource; this.owned = this.resource.object.mainComponent().isOwned(); this.editing_object = null; this.editing_poll = null; } Poll.prototype.title = function() { return this.editing_poll ? 
this.editing_poll.summary() : this.resource.object.mainComponent().summary(); } Poll.prototype.closed = function() { this.editing_poll = null; } // Save the editable state Poll.prototype.saveResource = function(whenDone) { // Only if it changed if (this.editing_poll.changed()) { this.resource.object = this.editing_object; var this_poll = this; this.resource.saveResource(function() { // Reload from the resource as it might change after write to server this_poll.editing_object = this_poll.resource.object.duplicate(); this_poll.editing_poll = this_poll.editing_object.mainComponent(); if (whenDone) { whenDone(); } }); } } // Fill the UI with details of the poll Poll.prototype.setPanel = function() { var this_poll = this; this.editing_object = this.resource.object.duplicate(); this.editing_poll = this.editing_object.mainComponent(); // Setup the details panel with this poll $("#editpoll-title-edit").val(this.editing_poll.summary()); $("#editpoll-title").text(this.editing_poll.summary()); $("#editpoll-organizer").text(this.editing_poll.organizerDisplayName()); $("#editpoll-status").text(this.editing_poll.status()); $("#editpoll-eventlist").empty(); $.each(this.editing_poll.events(), function(index, event) { this_poll.setEventPanel(this_poll.addEventPanel(), event); }); $("#editpoll-voterlist").empty(); $.each(this.editing_poll.voters(), function(index, voter) { this_poll.setVoterPanel(this_poll.addVoterPanel(), voter); }); } // Get poll details from the UI Poll.prototype.getPanel = function() { var this_poll = this; // Get values from the details panel if (this.owned) { this.editing_poll.summary($("#editpoll-title-edit").val()); var events = this.editing_poll.events(); $("#editpoll-eventlist").children().each(function(index) { this_poll.updateEventFromPanel($(this), events[index]); }); var voters = this.editing_poll.voters(); $("#editpoll-voterlist").children().each(function(index) { this_poll.updateVoterFromPanel($(this), voters[index]); }); } } //Add a new event item 
in the UI Poll.prototype.addEventPanel = function() { var ctr = $("#editpoll-eventlist").children().length + 1; var idstart = "event-dtstart-" + ctr; var idend = "event-dtend-" + ctr; // Add new list item var evt = '
'; evt += '
'; evt += ''; evt += ''; evt += '
'; evt += '
'; evt += ''; evt += ''; evt += '
'; evt += ''; evt += '
'; evt = $(evt).appendTo("#editpoll-eventlist"); evt.find(".event-dtstart").datetimepicker(); evt.find(".event-dtend").datetimepicker(); evt.find(".input-remove").button({ icons : { primary : "ui-icon-close" } }); return evt; } // Update the UI for this event Poll.prototype.setEventPanel = function(panel, event) { panel.find(".event-dtstart").datetimepicker("setDate", event.dtstart()); panel.find(".event-dtend").datetimepicker("setDate", event.dtend()); return panel; } // Get details of the event from the UI Poll.prototype.updateEventFromPanel = function(panel, event) { event.summary($("#editpoll-title-edit").val()); event.dtstart(panel.find(".event-dtstart").datetimepicker("getDate")); event.dtend(panel.find(".event-dtend").datetimepicker("getDate")); } // Add event button clicked Poll.prototype.addEvent = function() { var ctr = $("#editpoll-eventlist").children().length; var dtstart = new Date(); dtstart.setDate(dtstart.getDate() + ctr); dtstart.setHours(12, 0, 0, 0); var dtend = new Date(); dtend.setDate(dtend.getDate() + ctr); dtend.setHours(13, 0, 0, 0); // Add new list item var vevent = this.editing_poll.addEvent(dtstart, dtend); return this.setEventPanel(this.addEventPanel(), vevent); } //Add a new voter item in the UI Poll.prototype.addVoterPanel = function() { var ctr = $("#editpoll-voterlist").children().length + 1; var idvoter = "voter-address-" + ctr; // Add new list item var vtr = '
'; vtr += '
'; vtr += ''; vtr += ''; vtr += '
'; vtr += ''; vtr += '
'; vtr = $(vtr).appendTo("#editpoll-voterlist"); vtr.find(".voter-address").autocomplete({ minLength : 3, source : function(request, response) { gSession.calendarUserSearch(request.term, function(results) { response(results); }); } }).focus(function() { $(this).select(); }); vtr.find(".input-remove").button({ icons : { primary : "ui-icon-close" } }); return vtr; } // Update UI for this voter Poll.prototype.setVoterPanel = function(panel, voter) { panel.find(".voter-address").val(voter.addressDescription()); return panel; } // Get details of the voter from the UI Poll.prototype.updateVoterFromPanel = function(panel, voter) { voter.addressDescription(panel.find(".voter-address").val()); } // Add voter button clicked Poll.prototype.addVoter = function() { // Add new list item var voter = this.editing_poll.addVoter(); return this.setVoterPanel(this.addVoterPanel(), voter); } // Build the results UI based on the poll details Poll.prototype.buildResults = function() { var this_poll = this; // Sync with any changes from other panels this.getPanel(); var event_details = this.editing_poll.events(); var voter_details = this.editing_poll.voters(); var thead = $("#editpoll-resulttable").children("thead").first(); var th_date = thead.children("tr").eq(0).empty(); var th_start = thead.children("tr").eq(1).empty(); var th_end = thead.children("tr").eq(2).empty(); var tf = $("#editpoll-resulttable").children("tfoot").first(); var tf_overall = tf.children("tr").first().empty(); var tf_commit = tf.children("tr").last().empty(); tf_commit.toggle(this.owned || !this_poll.editing_poll.editable()); var tbody = $("#editpoll-resulttable").children("tbody").first().empty(); $('Date:').appendTo(th_date); $('Start:').appendTo(th_start); $('End:').appendTo(th_end); $('Overall:').appendTo(tf_overall); $('').appendTo(tf_commit); $.each(event_details, function(index, event) { var td_date = $('').appendTo(th_date).text(event.dtstart().toDateString()).addClass("center-td"); var td_start = 
$('').appendTo(th_start).text(event.dtstart().toLocaleTimeString()); var td_end = $('').appendTo(th_end).text(event.dtend().toLocaleTimeString()); $('').appendTo(tf_overall).addClass("center-td"); $('').appendTo(tf_commit).addClass("center-td"); if (event.ispollwinner()) { td_date.addClass("poll-winner-td"); td_start.addClass("poll-winner-td"); td_end.addClass("poll-winner-td"); } td_date.hover( function() { this_poll.hoverDialogOpen(td_date, thead, event); }, this_poll.hoverDialogClose ); }); $.each(voter_details, function(index, voter) { var active = gSession.currentPrincipal.matchingAddress(voter.cuaddr()); var tr = $("").appendTo(tbody); $("").appendTo(tr).text(voter.nameOrAddress()); $.each(event_details, function(index, event) { var response = event.voter_responses()[voter.cuaddr()]; var td = $("").appendTo(tr).addClass("center-td"); if (event.ispollwinner()) { td.addClass("poll-winner-td"); } if (active && this_poll.editing_poll.editable()) { var radios = $('
').appendTo(td).addClass("response-btns"); $('').appendTo(radios); $('
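`buildResults()` above fills an "Overall:" footer row per proposed event from the voter response values. A hypothetical sketch of how such an overall score and poll winner could be derived; the numeric response scale and the highest-average rule are assumptions for illustration, not the actual webpoll scoring logic:

```javascript
// Average the numeric responses (assumed 0-100) recorded for one event.
function overallScore(responses) {
    var total = 0;
    for (var i = 0; i < responses.length; i++) {
        total += responses[i];
    }
    return responses.length ? total / responses.length : 0;
}

// Hypothetical winner selection: the event index with the highest average.
function pollWinner(eventResponses) {
    var best = -1, bestScore = -1;
    for (var i = 0; i < eventResponses.length; i++) {
        var score = overallScore(eventResponses[i]);
        if (score > bestScore) {
            bestScore = score;
            best = i;
        }
    }
    return best;
}
```

With three candidate events whose responses average 75, 50 and 100, the third event would be marked with the `poll-winner-td` styling shown above.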